Composition-RL: Compose Your Verifiable Prompts for Reinforcement Learning of Large Language Models
Abstract
Composition-RL improves the reasoning capabilities of large language models by automatically composing multiple problems into new verifiable questions for RL training.
Large-scale verifiable prompts underpin the success of Reinforcement Learning with Verifiable Rewards (RLVR), but they contain many uninformative examples and are costly to expand further. Recent studies focus on better exploiting limited training data by prioritizing hard prompts whose rollout pass rate is 0. However, easy prompts with a pass rate of 1 also become increasingly prevalent as training progresses, thereby shrinking the effective data size. To mitigate this, we propose Composition-RL, a simple yet effective approach for better utilizing limited verifiable prompts, targeting pass-rate-1 prompts in particular. More specifically, Composition-RL automatically composes multiple problems into a new verifiable question and uses these compositional prompts for RL training. Extensive experiments across model sizes from 4B to 30B show that Composition-RL consistently improves reasoning capability over RL trained on the original dataset. Performance can be further boosted with a curriculum variant of Composition-RL that gradually increases compositional depth over training. Additionally, Composition-RL enables more effective cross-domain RL by composing prompts drawn from different domains. Code, datasets, and models are available at https://github.com/XinXU-USTC/Composition-RL.
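The abstract describes the composition mechanism only at a high level. Below is a minimal sketch of one plausible instantiation, assuming composition means concatenating several pass-rate-1 problems into a single prompt and granting reward only when every sub-answer verifies, with a linear curriculum over compositional depth. All function names (`compose_prompt`, `verify`, `curriculum_depth`) are illustrative, not the paper's actual API.

```python
import random

def compose_prompt(problems, depth=2):
    """Compose `depth` verifiable problems into one compositional prompt.

    Assumption: sub-problems are simply concatenated and the model must
    answer all of them. The paper's actual composition may differ.
    """
    chosen = random.sample(problems, depth)
    question = "\n\n".join(
        f"Problem {i + 1}: {p['question']}" for i, p in enumerate(chosen)
    )
    question += (
        "\n\nSolve every problem above and report each answer as "
        "\\boxed{...}, in order."
    )
    return {"question": question, "answers": [p["answer"] for p in chosen]}

def verify(predicted, gold):
    """All-or-nothing verifiable reward over the composed sub-answers."""
    return float(
        len(predicted) == len(gold)
        and all(a == b for a, b in zip(predicted, gold))
    )

def curriculum_depth(step, total_steps, max_depth=4):
    """Curriculum variant: grow compositional depth linearly over training."""
    return 2 + (max_depth - 2) * step // max(total_steps - 1, 1)

# Usage: compose easy (pass-rate-1) prompts into a harder verifiable one.
pool = [
    {"question": "Compute 2 + 3.", "answer": "5"},
    {"question": "What is 7 * 6?", "answer": "42"},
    {"question": "Evaluate 10 - 4.", "answer": "6"},
]
composed = compose_prompt(pool, depth=curriculum_depth(0, total_steps=100))
print(composed["question"])
print(verify(composed["answers"], composed["answers"]))  # -> 1.0
```

The all-or-nothing reward keeps the prompt verifiable: a composed question is correct only if every constituent sub-answer is, which is what lets previously trivial pass-rate-1 items yield informative gradients again.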
Community
Librarian Bot recommended the following similar papers via the Semantic Scholar API:
- Learning to Self-Verify Makes Language Models Better Reasoners (2026)
- Less Noise, More Voice: Reinforcement Learning for Reasoning via Instruction Purification (2026)
- Beyond Variance: Prompt-Efficient RLVR via Rare-Event Amplification and Bidirectional Pairing (2026)
- Save the Good Prefix: Precise Error Penalization via Process-Supervised RL to Enhance LLM Reasoning (2026)
- Prompt Augmentation Scales up GRPO Training on Mathematical Reasoning (2026)
- DARL: Encouraging Diverse Answers for General Reasoning without Verifiers (2026)
- Beyond Correctness: Learning Robust Reasoning via Transfer (2026)