Phil
phil111
AI & ML interests
None yet
Recent Activity
new activity 2 days ago
deepseek-ai/DeepSeek-V4-Flash: Too big to run locally.
new activity 2 days ago
deepseek-ai/DeepSeek-V4-Pro: 16 - 24B models with FP8 quantization
new activity 12 days ago
Qwen/Qwen3.6-35B-A3B: Is it really better in real world task?
Organizations
None yet
Too big to run locally.
🤯👍 9
19
#12 opened 13 days ago by Dampfinchen
16 - 24B models with FP8 quantization
👍 4
6
#152 opened 11 days ago by Duonglv
Is it really better in real world task?
👍 1
9
#50 opened 14 days ago by BornSaint
Why this release?
👍🤯 8
4
#3 opened 14 days ago by neoOpus
This LLM is a test maxer, not a general purpose AI model.
👍 1
21
#17 opened 23 days ago by phil111
IQ4_XS vs. Q5_K_P
11
#15 opened 26 days ago by Data-vanOrtus
I find these models interesting, but have a couple thoughts.
1
#1 opened 20 days ago by phil111
Your 260k dictionary is breaking Gemma 4's back.
8
#25 opened 23 days ago by phil111
Gemma 4 E4B will be as encyclopedically well-read as the 12b model?
3
#48 opened 28 days ago by Regrin
Impressive skills for its size, but this doesn't belong on normal peoples' phones.
#10 opened 30 days ago by phil111
Gemma 4, Compared to G3, Has Notable Improvements And Regressions
5
#10 opened about 1 month ago by phil111
Good Work. Please consider addressing factual hallucinations.
👍 1
3
#6 opened about 1 month ago by phil111
Thanks. This is by far the best denial stripping I've ever seen.
👍 3
2
#30 opened about 1 month ago by phil111
More reserved settings and standard Q4_K_M still works best for me.
10
#43 opened about 1 month ago by phil111
There's got to be a better way.
23
#6 opened about 2 months ago by phil111