Original model: Poro-34B-chat
GGUF-format model files, quantized using llama.cpp.
We have Q4_K_M and Q5_K_M quantized models available.
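The quantized files can be loaded directly with llama.cpp. A minimal sketch, assuming the 4-bit file is named `poro-34b-chat-Q4_K_M.gguf` (substitute the actual filename from this repository; older llama.cpp builds name the binary `main` instead of `llama-cli`):

```shell
# Hypothetical filename: use the actual .gguf file shipped in this repository.
./llama-cli -m ./poro-34b-chat-Q4_K_M.gguf \
    -p "Hello, who are you?" \
    -n 128
```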
Q4_K_M is a 4-bit quantization; Q5_K_M is 5-bit.

Chat template
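The authoritative chat template for Poro-34B-chat should be taken from the original model's tokenizer configuration. The sketch below only illustrates the general mechanics of rendering a conversation into a prompt string; the ChatML-style `<|im_start|>`/`<|im_end|>` markers are an assumption for illustration, not confirmed for this model:

```python
# Illustrative sketch only: the markers below assume a ChatML-style template,
# which may not match Poro-34B-chat's actual template.
def apply_chat_template(messages):
    """Render a list of {role, content} messages into a single prompt string."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant")  # generation continues after this marker
    return "\n".join(parts)

prompt = apply_chat_template([{"role": "user", "content": "Hello!"}])
print(prompt)
```

In practice, libraries such as Transformers apply the template stored with the tokenizer automatically, so hand-rolled formatting like this is only needed when driving llama.cpp with raw prompt strings.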