| Model | Task | Params | Downloads | Likes |
|---|---|---|---|---|
| TheBloke/Wizard-Vicuna-30B-Uncensored-GPTQ | Text Generation | 33B | 139k | 603 |
| huggingface/falcon-40b-gptq | Text Generation | 7B | 23 | 14 |
| TheBloke/MythoMax-L2-13B-GPTQ | Text Generation | 13B | 555 | 219 |
| TheBloke/Falcon-180B-GPTQ | Text Generation | 179B | 27 | 10 |
| TheBloke/sqlcoder-7B-GPTQ | Text Generation | 7B | 10 | 7 |
| Qwen/Qwen1.5-0.5B-Chat-GPTQ-Int4 | Text Generation | 0.5B | 1.59k | 14 |
| Qwen/Qwen2-VL-2B-Instruct-GPTQ-Int8 | Image-Text-to-Text | 2B | 91 | 16 |
| Qwen/Qwen2-VL-72B-Instruct-GPTQ-Int4 | Image-Text-to-Text | 74B | 2.7k | 30 |
| Qwen/Qwen2.5-14B-Instruct-GPTQ-Int4 | Text Generation | 15B | 89.5k | 25 |
| ModelCloud/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-mlx-v1 | | 2B | 4 | |
| ModelCloud/DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v1 | Text Generation | 8B | 5 | 6 |
| ModelCloud/DeepSeek-R1-Distill-Qwen-7B-gptqmodel-4bit-vortex-v2 | Text Generation | 8B | 420 | 8 |
| ModelCloud/QwQ-32B-gptqmodel-4bit-vortex-v1 | Text Generation | 33B | 6 | 12 |
| ISTA-DASLab/gemma-3-27b-it-GPTQ-4b-128g | Image-Text-to-Text | 5B | 12.6k | 44 |
| inclusionAI/Ling-Coder-lite-GPTQ-Int8 | | 5B | 28 | 16 |
| AngelSlim/Qwen3-14B_int4_gptq | | 15B | 4 | 1 |
| QuantTrio/Qwen3-Coder-30B-A3B-Instruct-GPTQ-Int8 | Text Generation | 31B | 4.9k | 8 |
| ModelCloud/Marin-32B-Base-GPTQMODEL-AWQ-W4A16 | Text Generation | 33B | 5 | 2 |
| ModelCloud/Granite-4.0-H-1B-GPTQMODEL-W4A16 | Text Generation | 1B | 3 | 1 |
| ModelCloud/Granite-4.0-H-350M-GPTQMODEL-W4A16 | Text Generation | 0.3B | 22 | 1 |
| ModelCloud/Brumby-14B-Base-GPTQMODEL-W4A16 | Text Generation | 15B | 3 | 1 |
| ModelCloud/Brumby-14B-Base-GPTQMODEL-W4A16-v2 | Text Generation | 15B | 3 | 1 |
| ModelCloud/bloom-560m-gptqmodel-4bit | | 0.6B | 2 | 1 |
| avtc/GLM-4.5-Air-GPTQMODEL-W8A16 | Text Generation | 116B | 6 | 2 |
| SEOKDONG/gpt-oss-safeguard-20b-kor-enterprise-gptq-4bit | Text Generation | 21B | 92 | 1 |
| btbtyler09/Devstral-Small-2-24B-Instruct-INT4-INT8-Mixed-GPTQ | Image-Text-to-Text | 7B | 1.56k | 3 |
| ModelCloud/Qwen3-Coder-30B-A3B-Instruct-GPTQMODEL-W4A16-A | Text Generation | 31B | 4 | 1 |
| ModelCloud/Qwen3-Coder-30B-A3B-Instruct-GPTQMODEL-W4A16-B | Text Generation | 31B | 9 | 1 |
| FayeQuant/GLM-4.7-Flash-GPTQ-4bit | Text Generation | 30B | 2.2k | 1 |
| Mohaaxa/qwen2.5-1.5b-gptq-4bit-v2 | Text Generation | 2B | 21 | 1 |