# qwen3-0.6b-multicode-grpo-gguf

GGUF conversion of chaddy81/qwen3-0.6b-multicode-grpo.
## Training Pipeline
- Base: Qwen/Qwen3-0.6B
- SFT: chaddy81/qwen3-0.6b-multicode-sft
- GRPO: chaddy81/qwen3-0.6b-multicode-grpo
## Available Files
| File | Quant | Size |
|---|---|---|
| qwen3-0.6b-multicode-grpo-q8_0.gguf | Q8_0 | 609.8 MB |
## Usage

### With Ollama
```shell
# Download the quantized model into the current directory
huggingface-cli download chaddy81/qwen3-0.6b-multicode-grpo-gguf qwen3-0.6b-multicode-grpo-q8_0.gguf --local-dir .

# Create and run the Ollama model
echo "FROM ./qwen3-0.6b-multicode-grpo-q8_0.gguf" > Modelfile
ollama create qwen3-0.6b-multicode-grpo -f Modelfile
ollama run qwen3-0.6b-multicode-grpo
```
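The one-line Modelfile above is enough to run the model, but an Ollama Modelfile can also set sampling parameters and a system prompt. A minimal sketch (the parameter values and system prompt below are illustrative, not tuned for this model):

```
FROM ./qwen3-0.6b-multicode-grpo-q8_0.gguf

# Sampling parameters (illustrative defaults; adjust for your workload)
PARAMETER temperature 0.7
PARAMETER top_p 0.9

# Optional system prompt
SYSTEM "You are a helpful coding assistant."
```

Rebuild the model with `ollama create qwen3-0.6b-multicode-grpo -f Modelfile` after editing.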
### With llama.cpp
```shell
./llama-cli -m qwen3-0.6b-multicode-grpo-q8_0.gguf -p "Your prompt"
```