Model breaks apart when used with different languages
#38
by nephepritou
Description
For example, let's take a conversation:
> User: Hello! Write me a simple shell script to get the SHA256 checksum of files in a folder.
> Model: <think>step_1; step_2; step_3;</think> response
It works great and has no issues. But then:
> User: Translate it to Russian
> Model: <think> step_4; step_5; step_6;</think> PERFECT response in Russian, without errors
Still works great. Try another option (a request in Greek asking to translate it to Russian):
> User: Μεταφράστε το στα Ρωσική
> Model: <think> step_4; step_4; step_5; step_4; step_6;</think> Response in step_6; Russ##&*an with step_7;step_4; errors
Now it is broken completely. Try another option (the same request, written in Russian):
> User: Переведи на русский
> Model: <think> step_4; step_4; step_5; step_4; step_6;</think> Response in step_6; Russ##&*an with step_7;step_4; errors
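For reference, this is roughly how I reproduce the conversation against the vLLM OpenAI-compatible endpoint. It is only a minimal sketch: the base URL, API key and model name are placeholders for my local setup.

```python
# Minimal reproduction sketch against a vLLM OpenAI-compatible server.
# The base URL, api_key and model name below are placeholders for my setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")
MODEL = "zai-org/GLM-4.6"  # placeholder: use whatever name the server exposes

history = []

def ask(text):
    """Send one user turn, keep the full history, return the assistant reply."""
    history.append({"role": "user", "content": text})
    resp = client.chat.completions.create(model=MODEL, messages=history)
    answer = resp.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("Hello! Write me a simple shell script to get the SHA256 checksum of files in a folder."))
print(ask("Translate it to Russian"))   # still fine
print(ask("Переведи на русский"))       # this turn degrades into garbled mixed-language output
```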
Observations
- There are no such issues with the GGUF model using llama.cpp or ik_llama.cpp, no matter which quant is used. Only vLLM and SGLang are affected;
- The same issue happens with GLM 4.5 Air AWQ, but I thought that was just the lobotomized AWQ quant;
- Qwen models (both instruct and thinking, 30B and 80B, dense and MoE, BF16, FP8 and AWQ) are not affected at all;
- Noticed [gMASK] in the Jinja chat template; asking the model itself "What is [gMASK]%template content%" produced broken output immediately (see the template-inspection sketch after this list);
- Changing the sampling parameters (or enabling speculative decoding, which disables min_p) may help. For example, with top_p: 0.9 it sometimes gives adequate responses;
- Inference from Zai.org (linked in the model card) works flawlessly, no issues.
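To check the [gMASK] point I rendered the chat template offline. This is just a sketch, assuming the tokenizer repo ships the same Jinja template that vLLM uses; the repo id is a placeholder for the model actually being served.

```python
# Render the Jinja chat template offline to see where [gMASK] ends up and
# whether Cyrillic/Greek text survives rendering and tokenization.
# The repo id is a placeholder; point it at the model actually being served.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("zai-org/GLM-4.6")

messages = [
    {"role": "user", "content": "Hello! Write me a simple shell script..."},
    {"role": "assistant", "content": "#!/bin/sh\nsha256sum ./*"},
    {"role": "user", "content": "Переведи на русский"},
]

prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(repr(prompt))  # [gMASK] and the raw UTF-8 text should both be visible here

# Round-trip the prompt through the tokenizer to spot any UTF-8 mangling.
ids = tok(prompt, add_special_tokens=False).input_ids
print(tok.decode(ids))
```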
Could it be caused by specific UTF-8 handling issues in the Jinja template? Or by my configuration of 4 GPUs (3× RTX 3090 and 1× RTX 4090)? Or does vLLM have issues with its default sampling params?
I've tried copying llama.cpp's default sampling parameters and now it works much better. I've used temperature: 0.8, top_p: 0.95, min_p: 0.05.
Is this the intended way to deploy the model?
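Concretely, I pass them per request instead of relying on the server defaults. A sketch with the openai client follows; min_p is not part of the OpenAI schema, so it goes through extra_body, which recent vLLM versions accept as an extra sampling parameter (older builds may silently ignore it).

```python
# Per-request override of the llama.cpp-style sampling parameters.
# Base URL and model name are placeholders; min_p is passed via extra_body
# because it is a vLLM-specific extension, not part of the OpenAI schema.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")

resp = client.chat.completions.create(
    model="zai-org/GLM-4.6",  # placeholder model name
    messages=[{"role": "user", "content": "Переведи на русский"}],
    temperature=0.8,
    top_p=0.95,
    extra_body={"min_p": 0.05},
)
print(resp.choices[0].message.content)
```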
Never mind, it just needs 1-2 more messages to break again.