Update README.md #2
by valstu - opened

README.md CHANGED
@@ -8,7 +8,7 @@ This is a GGUF quantization of the [Poro-34B](https://huggingface.co/LumiOpen/Po
 
 Please refer to that repository's model card for details.
 
-The current revision is a quantization of the
+The current revision is a quantization of the 1000B token checkpoint.
 
 The conversion was done with [llama.cpp](https://github.com/ggerganov/llama.cpp) version b2354 (e25fb4b18fcedb9bed6be4585cf842e9a669b28b)
 on a Google Compute machine generously sponsored by [Valohai](https://valohai.com/).
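For reference, a GGUF conversion like the one the README describes typically follows llama.cpp's two-step workflow: convert the Hugging Face checkpoint to an f16 GGUF, then quantize it. The sketch below is an assumption-laden illustration, not the exact commands used for this repository: it assumes the `convert-hf-to-gguf.py` script and the `quantize` binary as they existed around revision b2354, and the directory and file names are hypothetical.

```python
import subprocess

# Hypothetical paths -- adjust to the actual llama.cpp checkout (b2354 in the
# README) and the local copy of the Poro-34B 1000B-token checkpoint.
LLAMA_CPP_DIR = "llama.cpp"
HF_MODEL_DIR = "Poro-34B"              # Hugging Face checkpoint directory (assumption)
F16_GGUF = "poro-34b-f16.gguf"
QUANT_GGUF = "poro-34b-q4_k_m.gguf"

# Step 1: convert the Hugging Face checkpoint to a GGUF file in f16 precision.
subprocess.run(
    [
        "python", f"{LLAMA_CPP_DIR}/convert-hf-to-gguf.py",
        HF_MODEL_DIR,
        "--outtype", "f16",
        "--outfile", F16_GGUF,
    ],
    check=True,
)

# Step 2: quantize the f16 GGUF to a smaller format (Q4_K_M is shown purely as
# an example; the repository may ship different quantization types).
subprocess.run(
    [f"{LLAMA_CPP_DIR}/quantize", F16_GGUF, QUANT_GGUF, "Q4_K_M"],
    check=True,
)
```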