---
license: mit
language:
- en
pipeline_tag: text-generation
arxiv:
- https://arxiv.org/abs/2508.06595
library_name: transformers
---
## Model Details

The best [Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) checkpoint unlearned with [RR (Representation Rerouting)](https://arxiv.org/abs/2406.04313) on the Keyword-Bio forget set. For more details, please see [our paper](https://arxiv.org/abs/2508.06595).

### Sources

- Base model: [Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)
- Repository: [https://github.com/xyzhu123/Synthetic_Textbook](https://github.com/xyzhu123/Synthetic_Textbook)
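
### Usage

A minimal loading sketch with 🤗 Transformers. The Hub repo id below is a placeholder, not confirmed by this card; substitute the actual id of this checkpoint.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xyzhu123/Mistral-7B-Instruct-v0.3_RR_Keyword-Bio"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

# Mistral-7B-Instruct defines a chat template; build the prompt with it.
messages = [{"role": "user", "content": "Give a one-paragraph summary of how transformers generate text."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```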
### Performance
|                                          | WMDP-Bio | tinyMMLU | GSM8k | TriviaQA |
|------------------------------------------|:--------:|:--------:|:-----:|:--------:|
| Mistral-7B-Instruct-v0.3                 |  67.48   |  64.20   | 50.19 |  56.81   |
| Mistral-7B-Instruct-v0.3_RR_Keyword-Bio  |  49.18   |  63.95   | 47.38 |  57.48   |

Lower WMDP-Bio indicates more successful forgetting of the targeted biosecurity knowledge; tinyMMLU, GSM8k, and TriviaQA measure retained general capability.
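
A hedged sketch of how such numbers might be reproduced with [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness); the exact task variants, prompts, and few-shot settings used in the paper may differ, and the repo id is again a placeholder.

```python
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=xyzhu123/Mistral-7B-Instruct-v0.3_RR_Keyword-Bio",  # placeholder
    # tinyMMLU's task name varies across harness versions, so it is omitted here.
    tasks=["wmdp_bio", "gsm8k", "triviaqa"],
    batch_size=8,
)
print(results["results"])
```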
## Citation
If you find this useful in your research, please consider citing our paper:
```
@misc{zhu2025llmunlearningexpertcurated,
      title={LLM Unlearning Without an Expert Curated Dataset}, 
      author={Xiaoyuan Zhu and Muru Zhang and Ollie Liu and Robin Jia and Willie Neiswanger},
      year={2025},
      eprint={2508.06595},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2508.06595}, 
}
```