𓌳 REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression
📄 Paper • 💻 Code
A W4A16-quantized version of Cerebras' official GLM-4.6-REAP-218B-A32B.
| Property | Value |
|---|---|
| Base Model | cerebras/GLM-4.6-REAP-218B-A32B |
| Parameters | 218B total, 32B activated |
| Quantization | W4A16 (4-bit weights, 16-bit activations) |
| Original Size | ~436 GB |
| Quantized Size | ~116 GB |
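The size ratio follows directly from the bit widths. A back-of-envelope check, assuming 2 bytes per parameter for the original checkpoint and roughly 4.25 effective bits per weight after 4-bit quantization plus group-wise scales and zero-points (the 4.25 figure is an assumption):

```python
params = 218e9                      # total parameters
orig_gb = params * 2 / 1e9          # 16-bit weights, 2 bytes/param -> ~436 GB
w4a16_gb = params * 4.25 / 8 / 1e9  # ~4.25 effective bits/weight   -> ~116 GB
print(f"{orig_gb:.0f} GB -> {w4a16_gb:.0f} GB")
```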
Tested on 8x RTX 3090:
| Metric | Value |
|---|---|
| Prompt Tokens | ~21,178 |
| Completion Tokens | 393 |
| Time to First Token | 23.82s |
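At ~21,178 prompt tokens and a 23.82-second time to first token, prefill throughput works out to roughly 21,178 / 23.82 ≈ 890 tokens/s on this setup; time to first token also includes scheduling overhead, so treat this as a lower bound on raw prefill speed.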
```bash
vllm serve 0xSero/GLM-4.6-REAP-218B-A32B-W4A16-AutoRound \
  --tensor-parallel-size 8 \
  --trust-remote-code \
  --quantization gptq
```
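AutoRound can export checkpoints in a GPTQ-compatible format, which is why vLLM loads this model through its GPTQ kernels (`--quantization gptq`); `--tensor-parallel-size 8` shards the ~116 GB of weights across the eight 24 GB GPUs.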
REAP's effectiveness depends critically on calibration data that represents the target use case. We specifically optimized for code generation, function/tool calling, and agentic workflows.
| Dataset | Samples | Purpose | Why It Matters |
|---|---|---|---|
| evol-codealpaca-v1 | 700 | Code generation | 51% of mix — Code tasks activate specific expert pathways; pruning without code calibration destroys coding ability |
| xlam-function-calling-60k | 330 | Function/tool calling | 24% of mix — Tool use requires structured JSON output; experts handling schema generation must be preserved |
| SWE-smith-trajectories | 330 | Agentic multi-turn | 24% of mix — Real SWE-bench trajectories with tool calls, file edits, and multi-step reasoning |
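A sketch of how a 700/330/330 mix like this could be assembled with the Hugging Face `datasets` library; the repo ids and the crude row-to-text conversion are assumptions for illustration, and the actual mix is published as 0xSero/glm47-reap-calibration-v2 (linked below):

```python
from datasets import load_dataset

# Repo ids below are assumptions; see 0xSero/glm47-reap-calibration-v2
# for the exact sources and preprocessing.
SOURCES = [
    ("theblackcat102/evol-codealpaca-v1", 700),     # code generation (~51%)
    ("Salesforce/xlam-function-calling-60k", 330),  # function/tool calling (~24%)
    ("SWE-bench/SWE-smith-trajectories", 330),      # agentic multi-turn (~24%)
]

samples = []
for repo_id, n in SOURCES:
    ds = load_dataset(repo_id, split="train").shuffle(seed=42)
    # Crude: serialize whole rows; a real pipeline would format each
    # dataset's fields into chat/completion text first.
    samples.extend(str(row) for row in ds.select(range(n)))

print(len(samples))  # 1360 calibration samples
```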
REAP Algorithm:
1. Run the calibration samples through the model (forward passes only, no gradient updates)
2. Record which experts the router activates and the magnitude of their outputs
3. Compute per-expert saliency = router_weight × activation_norm
4. Prune the lowest-saliency experts (sketched below)
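A minimal NumPy sketch of steps 2-4, assuming the per-token gate weights and expert-output norms from step 1's forward passes have already been collected; the array shapes, function names, and the per-routed-token averaging convention are illustrative rather than the released implementation:

```python
import numpy as np

def reap_saliency(gate_weights: np.ndarray, output_norms: np.ndarray) -> np.ndarray:
    """Per-expert saliency: mean over routed tokens of gate_weight * ||expert output||.

    gate_weights: (num_tokens, num_experts), zero where an expert is not in the top-k.
    output_norms: (num_tokens, num_experts), L2 norm of each expert's output per token.
    """
    contrib = gate_weights * output_norms        # step 3: router_weight x activation_norm
    routed = gate_weights > 0                    # tokens actually routed to each expert
    counts = np.maximum(routed.sum(axis=0), 1)   # guard against never-routed experts
    return contrib.sum(axis=0) / counts

def experts_to_prune(saliency: np.ndarray, num_prune: int) -> np.ndarray:
    # step 4: drop the experts with the lowest saliency scores
    return np.argsort(saliency)[:num_prune]

# Toy usage: 4 tokens, 8 experts, sparse top-k gates already applied.
rng = np.random.default_rng(0)
gates = rng.random((4, 8)) * (rng.random((4, 8)) > 0.75)
norms = rng.random((4, 8))
print(experts_to_prune(reap_saliency(gates, norms), num_prune=2))
```

Averaging over only the tokens actually routed to each expert keeps a rarely-routed but high-impact expert from being penalized simply for low traffic.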
Key Insight: Experts are TASK-SPECIFIC

```
├── Some experts specialize in natural language
├── Some experts specialize in code syntax
├── Some experts specialize in JSON/structured output
└── Some experts specialize in multi-turn context
```
If calibration lacks code → code-specialized experts appear "unused" → get pruned → model loses coding ability
Cerebras used the same three datasets in their GLM-4.6 REAP experiments; we followed that exact recipe for reproducibility.
Our calibration mix: 0xSero/glm47-reap-calibration-v2
```bibtex
@article{lasby2025reap,
  title={REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression},
  author={Lasby, Mike and Lazarevich, Ivan and Sinnadurai, Nish and Lie, Sean and Ioannou, Yani and Thangarasa, Vithursan},
  journal={arXiv preprint arXiv:2510.13999},
  year={2025},
  url={https://arxiv.org/abs/2510.13999}
}
```