# Mistral-AVS-LoRA
This repository contains LoRA adapter weights for a Mistral-7B-Instruct model fine-tuned to generate patient-friendly After-Visit Summaries (AVS) from Brief Hospital Course (BHC) notes. The goal is to improve clarity and reduce hallucinations while producing accurate, accessible summaries for patients.
Only the LoRA adapter weights are included; load them on top of the `mistralai/Mistral-7B-Instruct-v0.3` base model.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the LoRA adapter weights on top of it.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
model = PeftModel.from_pretrained(base, "williach31/mistral-7b-bhc-to-avs-lora")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
```
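A minimal generation sketch follows. The exact instruction wording used during fine-tuning is not reproduced on this card, so the prompt below is an illustrative assumption; adapt it to your BHC input.

```python
# Illustrative prompt only: the instruction template used during training may
# differ. Mistral-7B-Instruct-v0.3 ships with a chat template, so the request
# is formatted via tokenizer.apply_chat_template.
bhc_note = "Patient admitted with community-acquired pneumonia, treated with IV antibiotics..."
messages = [{
    "role": "user",
    "content": f"Rewrite this Brief Hospital Course as a patient-friendly After-Visit Summary:\n\n{bhc_note}",
}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

output = model.generate(input_ids, max_new_tokens=512, do_sample=False)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```

For inference-only use, `model = model.merge_and_unload()` folds the adapter into the base weights and removes the PEFT wrapper overhead.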
## Training
The adapter was trained on a dataset derived from Brief Hospital Course (BHC) notes paired with patient-facing summaries, using an instruction-style prompt template and PEFT-based LoRA fine-tuning.
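For reference, a PEFT-based LoRA setup for this kind of fine-tune typically looks like the sketch below. The rank, alpha, dropout, and target modules are illustrative assumptions; the exact hyperparameters used to train this adapter are not listed on this card.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")

# Assumed hyperparameters for illustration, not the values used for this adapter.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(base, lora_config)
peft_model.print_trainable_parameters()  # only the adapter weights are trainable
```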
## Intended Use

This adapter is intended for research and educational use in clinical NLP, particularly for generating simplified summaries of medical notes. It is not intended for clinical decision-making or for deployment without human review.
## Acknowledgements & Training Data Sources
This work builds on the datasets, methodology, and evaluation frameworks introduced by:
**Methodology**

Hegselmann, S., Shen, Z., Gierse, F., Agrawal, M., Sontag, D., & Jiang, X. (2024). *A Data-Centric Approach To Generate Faithful and High Quality Patient Summaries with Large Language Models.* Proceedings of the 5th Conference on Health, Inference, and Learning (CHIL 2024), PMLR 248:339–379. https://proceedings.mlr.press/v248/hegselmann24a.html
**Dataset**

Hegselmann, S., Shen, Z., Gierse, F., Agrawal, M., Sontag, D., & Jiang, X. (2024). *Medical Expert Annotations of Unsupported Facts in Doctor-Written and LLM-Generated Patient Summaries* (version 1.0.0). PhysioNet. https://physionet.org/content/ann-pt-summ/1.0.0/ DOI: https://doi.org/10.13026/a66y-aa53
These works informed the dataset design, hallucination mitigation strategies, and evaluation approach used in fine-tuning this model.
## License

Released under the Apache 2.0 license, consistent with the Mistral base model and the licensing constraints of the upstream research datasets.
## Citation
If you use this model, please credit the original dataset creators above and this repository as appropriate.