---
license: mit
language:
- en
pipeline_tag: text-generation
arxiv:
- https://arxiv.org/abs/2508.06595
library_name: transformers
---

## Model Details

The best [Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) checkpoint unlearned using [RR](https://arxiv.org/abs/2406.04313) (Representation Rerouting) with the Keyword-Bio forget set. For more details, please see [our paper](https://arxiv.org/abs/2508.06595).

### Sources

- Base model: [Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)
- Repository: [https://github.com/xyzhu123/Synthetic_Textbook](https://github.com/xyzhu123/Synthetic_Textbook)

### Performance

| Model                                   | WMDP-Bio | tinyMMLU | GSM8k | TriviaQA |
|-----------------------------------------|:--------:|:--------:|:-----:|:--------:|
| Mistral-7B-Instruct-v0.3                | 67.48    | 64.20    | 50.19 | 56.81    |
| Mistral-7B-Instruct-v0.3_RR_Keyword-Bio | 49.18    | 63.95    | 47.38 | 57.48    |

Lower WMDP-Bio accuracy indicates more successful unlearning of the targeted biosecurity knowledge; the remaining benchmarks measure how well general capabilities are retained.

## Citation

If you find this useful in your research, please consider citing our paper:

```
@misc{zhu2025llmunlearningexpertcurated,
      title={LLM Unlearning Without an Expert Curated Dataset},
      author={Xiaoyuan Zhu and Muru Zhang and Ollie Liu and Robin Jia and Willie Neiswanger},
      year={2025},
      eprint={2508.06595},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2508.06595},
}
```