- KTO: Model Alignment as Prospect Theoretic Optimization
  Paper • 2402.01306 • Published • 21
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model
  Paper • 2305.18290 • Published • 64
- SimPO: Simple Preference Optimization with a Reference-Free Reward
  Paper • 2405.14734 • Published • 12
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment
  Paper • 2408.06266 • Published • 10
Collections including paper arxiv:2403.07691
- ORPO: Monolithic Preference Optimization without Reference Model
  Paper • 2403.07691 • Published • 70
- ResearchAgent: Iterative Research Idea Generation over Scientific Literature with Large Language Models
  Paper • 2404.07738 • Published • 2
- Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models
  Paper • 2405.01535 • Published • 123
- Iterative Reasoning Preference Optimization
  Paper • 2404.19733 • Published • 49
- Better & Faster Large Language Models via Multi-token Prediction
  Paper • 2404.19737 • Published • 81
- ORPO: Monolithic Preference Optimization without Reference Model
  Paper • 2403.07691 • Published • 70
- KAN: Kolmogorov-Arnold Networks
  Paper • 2404.19756 • Published • 116
- A General Theoretical Paradigm to Understand Learning from Human Preferences
  Paper • 2310.12036 • Published • 19
- ORPO: Monolithic Preference Optimization without Reference Model
  Paper • 2403.07691 • Published • 70
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model
  Paper • 2305.18290 • Published • 64