---
base_model:
- deepseek-ai/DeepSeek-V3.2-Speciale
---
[DevQuasar](https://devquasar.com)
'Make knowledge free for everyone'
- upload in progress -
# !EXPERIMENTAL
Channel-wise INT8 weights for CPU inference. SGLang is supposed to support this format on CPUs with AMX support (AFAIK, that means Xeon 6?).
I'd appreciate a test from anyone with the necessary hardware.

[SGLang how-to](https://lmsys.org/blog/2025-07-14-intel-xeon-optimization/)

Channel-wise INT8 version of [deepseek-ai/DeepSeek-V3.2-Speciale](https://huggingface.co/deepseek-ai/DeepSeek-V3.2-Speciale)
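For readers unfamiliar with the format, here is a minimal sketch of what channel-wise (per-output-channel, symmetric) INT8 weight quantization means. This is an illustration of the concept only, not the exact procedure or tooling used to produce the weights in this repo.

```python
# Illustrative sketch of channel-wise (per-output-channel) symmetric INT8
# quantization. NOT the exact recipe used for this repo's weights.
import numpy as np

def quantize_per_channel_int8(weight: np.ndarray):
    """Quantize a 2-D weight matrix [out_channels, in_channels] to INT8,
    using one scale per output channel (row)."""
    max_abs = np.abs(weight).max(axis=1, keepdims=True)   # [out_channels, 1]
    scale = max_abs / 127.0                                # symmetric, no zero-point
    scale = np.where(scale == 0, 1.0, scale)               # guard against all-zero rows
    q = np.clip(np.round(weight / scale), -127, 127).astype(np.int8)
    return q, scale.astype(np.float32)

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Recover an approximate float weight from INT8 values and per-row scales."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(8, 16).astype(np.float32)
    q, s = quantize_per_channel_int8(w)
    w_hat = dequantize(q, s)
    print("max abs round-trip error:", np.abs(w - w_hat).max())
```

Because each output channel gets its own scale, a single outlier row does not blow up the quantization error of the whole matrix, which is why channel-wise INT8 usually preserves accuracy better than a single per-tensor scale.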
