
Mitigating Hallucinated Translations in Large Language Models with Hallucination-focused Preference Optimization


Machine Translation (MT) is undergoing a paradigm shift, with systems based on fine-tuned large language models (LLMs) becoming increasingly competitive with traditional encoder-decoder models trained specifically for translation tasks. However, LLM-based systems are at a higher risk of generating hallucinations, which can severely undermine users' trust and safety. Most prior research on hallucination mitigation focuses on traditional MT models, with solutions that involve post-hoc mitigation: detecting hallucinated translations and re-translating them. While effective, this approach introduces additional complexity in deploying extra tools in production and also increases latency. To address these limitations, we propose a method that intrinsically learns to mitigate hallucinations during the model training phase. Specifically, we introduce a data-creation framework to generate hallucination-focused preference datasets. Fine-tuning LLMs on these preference datasets reduces the hallucination rate by an average of 96% across five language pairs, while preserving overall translation quality. In a zero-shot setting, our approach reduces hallucinations by an average of 89% across three unseen target languages.
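The abstract does not specify the exact data format, but preference-optimization methods such as DPO typically consume (prompt, chosen, rejected) triples. A minimal, hypothetical sketch of how a hallucination-focused preference dataset might be assembled, assuming one already has a faithful translation and a detector-flagged hallucinated translation for each source sentence (all names and examples here are illustrative, not from the paper):

```python
# Hypothetical sketch: build DPO-style preference records where the
# "chosen" response is a faithful translation and the "rejected" response
# is a hallucinated one. The triples would come from the authors' data
# pipeline; here they are hard-coded toy examples.

def build_preference_pairs(triples, target_lang="French"):
    """Turn (source, faithful, hallucinated) triples into preference records."""
    pairs = []
    for src, faithful, hallucinated in triples:
        pairs.append({
            "prompt": f"Translate the following sentence into {target_lang}: {src}",
            "chosen": faithful,       # preferred: faithful translation
            "rejected": hallucinated, # dispreferred: hallucinated output
        })
    return pairs

# Toy example: the rejected translation is detached from the source content.
triples = [
    ("The cat sleeps.", "Le chat dort.", "Le chien mange une pomme rouge."),
]
pairs = build_preference_pairs(triples)
print(pairs[0]["chosen"])
```

Fine-tuning on records of this shape is what lets the model internalize the preference against hallucinated outputs, instead of relying on a separate detect-and-retranslate step at inference time.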

