Aaron Newsome (aaron-newsome)
AI & ML interests: None yet
Recent Activity
- New activity about 19 hours ago on unsloth/Qwen3.5-122B-A10B-GGUF: "very fast!!!"
- Liked a model 6 days ago: unsloth/Qwen3.5-397B-A17B-GGUF
- Liked a model 6 days ago: Qwen/Qwen3.5-397B-A17B

Organizations: None yet
- very fast!!! | 🤗 ❤️ 1 | 3 replies | #2 opened 1 day ago by rosspanda0
- Core dumped for me, | 7 replies | #3 opened 10 days ago by aaron-newsome
- How to enable vision encoder? | 1 reply | #10 opened 6 days ago by stefan28123
- Q3_K_XL works surprisingly fast for 3x3090 + 128 ram | 🔥 8 | 4 replies | #4 opened 10 days ago by fizzacles
- Chat template issues with newer llama.cpp? | #9 opened 13 days ago by aaron-newsome
- Question: Real-world use cases for Step-3.5-Flash | 8 replies | #24 opened 16 days ago by Geodd
- Hot Damn This Model Cooks! | 👍 6 | 12 replies | #5 opened 2 months ago by aaron-newsome
- Jan 21: All GLM-4.7-Flash quants reuploaded - much better outputs! | 🔥 ❤️ 7 | 29 replies | #10 opened about 1 month ago by danielhanchen
- This model is slow and ugly. | ➕ 2 | 3 replies | #14 opened about 1 month ago by sccssc
- IQuestLab is more like IFakeEvals... | 🚀 🔥 3 | 2 replies | #5 opened about 2 months ago by coolpoodle
- Never mind the benchmarks, MiniMax M2.1 outshines GLM 4.7 | 🤝 2 | 4 replies | #11 opened about 2 months ago by aaron-newsome
- Report: getting 20 t/s with UD-Q4_K_XL and 72 VRAM | 🔥 1 | 10 replies | #2 opened 2 months ago by SlavikF
- UD-Q5_K_XL seemingly broken | 6 replies | #2 opened 2 months ago by Nimbz
- mmproj? | 10 replies | #1 opened 3 months ago by aaron-newsome
- Can't get started MiniMax-M2-UD-Q8_K_XL with llama-cli (llama.cpp) | 1 reply | #8 opened 3 months ago by alexmv2025
- Should it run in 24GB VRAM? | 👍 1 | 2 replies | #2 opened 3 months ago by dkackman
- thinking disables tools | 5 replies | #6 opened 3 months ago by ktsaou
- Q4_K_XL seems corrupted | 3 replies | #3 opened 4 months ago by aaron-newsome
- System prompt weirdness | 2 replies | #2 opened 4 months ago by mikehenderson976
- Dual RTX Pro 6000 Blackwell 96GB - IT FITS! | 🤯 1 | 3 replies | #11 opened 4 months ago by aaron-newsome