# DeGAML-LLM Checkpoints

This repository contains pre-trained checkpoints for the generalization module of our proposed DeGAML-LLM framework, a novel meta-learning approach that decouples generalization and adaptation for Large Language Models.

## 📦 Available Checkpoints

All checkpoints are LoRA adapters for Qwen2.5-0.5B-Instruct, optimized with the DeGAML-LLM framework:

| Checkpoint Name | Dataset | Size |
|---|---|---|
| `qwen0.5lora__ARC-c.pth` | ARC-Challenge | ~4.45 GB |
| `qwen0.5lora__ARC-e.pth` | ARC-Easy | ~4.45 GB |
| `qwen0.5lora__BoolQ.pth` | BoolQ | ~4.45 GB |
| `qwen0.5lora__HellaSwag.pth` | HellaSwag | ~4.45 GB |
| `qwen0.5lora__PIQA.pth` | PIQA | ~4.45 GB |
| `qwen0.5lora__SocialIQA.pth` | SocialIQA | ~4.45 GB |
| `qwen0.5lora__WinoGrande.pth` | WinoGrande | ~4.45 GB |

## 🚀 Usage

### Download

```python
from huggingface_hub import hf_hub_download

# Download a specific checkpoint
checkpoint_path = hf_hub_download(
    repo_id="Nitin2004/DeGAML-LLM-checkpoints",
    filename="qwen0.5lora__ARC-c.pth"
)
```
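
To fetch all seven checkpoints at once, `snapshot_download` from the same `huggingface_hub` library can mirror the repository. This is a minimal sketch; the `allow_patterns` filter assumes the `.pth` checkpoints are the only files you need.

```python
from huggingface_hub import snapshot_download

# Mirror every .pth file in the repo into the local Hugging Face cache
local_dir = snapshot_download(
    repo_id="Nitin2004/DeGAML-LLM-checkpoints",
    allow_patterns=["*.pth"],  # skip any non-checkpoint files
)
print(local_dir)  # directory containing the downloaded checkpoints
```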

### Load with PyTorch

```python
import torch

# Load the checkpoint onto CPU. PyTorch >= 2.6 defaults to weights_only=True;
# pass weights_only=False only if the file contains pickled non-tensor objects
# and you trust its source.
checkpoint = torch.load(checkpoint_path, map_location="cpu")
print(checkpoint.keys())
```

### Use with DeGAML-LLM

Refer to the DeGAML-LLM repository for detailed instructions on integrating these checkpoints with the framework. As a starting point, the sketch below shows one plausible way to load the weights onto the base model.
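
This is a minimal, hypothetical sketch using PEFT. It assumes the `.pth` file holds a plain `state_dict` of LoRA weights whose keys match a PEFT-wrapped model; the actual rank, alpha, and target modules come from the DeGAML-LLM training code, not from this card.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Base model named in this card
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

# Hypothetical LoRA hyperparameters; replace with the values used in training
lora_cfg = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16)
model = get_peft_model(base, lora_cfg)

# checkpoint_path comes from the download example above
state = torch.load(checkpoint_path, map_location="cpu")
missing, unexpected = model.load_state_dict(state, strict=False)
print(f"{len(missing)} missing / {len(unexpected)} unexpected keys")
```

If many keys come back missing or unexpected, the checkpoint layout differs from this guess; inspect `state.keys()` and consult the repository.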

## 📊 Performance

These checkpoints achieve state-of-the-art results on commonsense reasoning tasks when used with the DeGAML-LLM adaptation framework. See the project page for complete benchmark results.

## 📄 Citation

If you use these checkpoints in your research, please cite:

```bibtex
@article{degaml-llm2025,
  title={Decoupling Generalization and Adaptation in Meta-Learning for Large Language Models},
  author={Vetcha, Nitin and Xu, Binqian and Liu, Dianbo},
  year={2026}
}
```

## 📧 Contact

For questions or issues, please open a discussion on this repository or contact the authors.

## 📜 License

Apache License 2.0. See the LICENSE file for details.
