
Direct Large Language Model Alignment Through Self-Rewarding Contrastive Prompt Distillation


Aligning large language models (LLMs) with human expectations without human-annotated preference data is an important problem. In this paper, we propose a method to evaluate response preference using the output probabilities of response pairs under contrastive prompt pairs, which achieves better performance than RLAIF on LLaMA2-7B and LLaMA2-13B. Building on this, we propose an automatic alignment method, Direct Large Model Alignment (DLMA). First, we use contrastive prompt pairs to automatically generate preference data. Then, we evaluate the generated preference data with the same contrastive prompt pairs to compute a self-rewarding score. Finally, we incorporate this self-rewarding score into the DPO algorithm to align the LLM effectively. In our experiments, the DLMA method surpasses RLHF without relying on human-annotated preference data.
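
The abstract only outlines the pipeline, so the following is a minimal PyTorch/Hugging Face sketch of how its two core ingredients might look: a self-rewarding score computed as the log-probability gap of a response under a positive versus a negative contrastive prompt, and a DPO-style loss that uses that score as a margin. The function names, the clipping range, and the exact way the margin enters the loss are assumptions made for illustration, not the paper's reference implementation.

```python
import torch
import torch.nn.functional as F


def sequence_log_prob(model, tokenizer, prompt, response):
    """Sum of token log-probabilities of `response` conditioned on `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + response, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits  # (1, seq_len, vocab)
    # Log-prob of each next token; position i predicts token i + 1.
    log_probs = F.log_softmax(logits[:, :-1, :], dim=-1)
    targets = full_ids[:, 1:]
    token_log_probs = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # Keep only the response tokens (boundary is approximate because the
    # prompt and prompt + response are tokenized separately).
    response_start = prompt_ids.shape[1] - 1
    return token_log_probs[:, response_start:].sum()


def self_rewarding_score(model, tokenizer, query, response,
                         positive_prompt, negative_prompt):
    """Log-probability gap of the same response under contrastive prompts."""
    lp_pos = sequence_log_prob(model, tokenizer, positive_prompt + query, response)
    lp_neg = sequence_log_prob(model, tokenizer, negative_prompt + query, response)
    return lp_pos - lp_neg  # higher => response fits the positive prompt better


def dpo_loss_with_margin(policy_chosen_lp, policy_rejected_lp,
                         ref_chosen_lp, ref_rejected_lp,
                         self_reward_margin, beta=0.1):
    """DPO loss with the self-rewarding score folded in as a margin term."""
    chosen_ratio = policy_chosen_lp - ref_chosen_lp        # log pi/ref, chosen
    rejected_ratio = policy_rejected_lp - ref_rejected_lp  # log pi/ref, rejected
    margin = torch.clamp(self_reward_margin, 0.0, 2.0)     # illustrative clipping
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio) - margin)
```

In this reading, the chosen and rejected responses would themselves come from sampling the model under the positive and negative prompts, and the margin for each pair would be the difference of their self-rewarding scores, so that more confidently ranked pairs demand a larger policy gap during DPO training.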


