
GENAUDIT: A Machine Learning Tool to Assist Users in Fact-Checking LLM-Generated Outputs Against Inputs with Evidence


Recent progress in Artificial Intelligence (AI), and in Generative AI in particular, has shown that Large Language Models (LLMs) can generate fluent text in response to inputs or prompts. These models can answer questions, summarize long passages, and handle many other text-generation tasks much as a human would. However, even with access to reference materials, they remain imperfect and can produce factual errors. Such errors can have serious consequences in high-stakes applications like document-grounded question answering in industries such as banking or healthcare.

To address this, a team of researchers has recently presented GENAUDIT, a tool built specifically to help fact-check LLM responses in document-grounded tasks. GENAUDIT works by recommending changes to the response generated by the language model: it highlights statements that are not supported by the reference document and suggests edits or deletions for them. It also surfaces evidence from the reference text that backs up the LLM's factual assertions.
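To make this behavior concrete, here is a minimal sketch of the kind of structured suggestion such a tool could return for one sentence of an LLM-generated summary. The field names, values, and offsets are hypothetical illustrations, not GENAUDIT's actual output schema.

```python
# Hypothetical example of a single fact-checking suggestion for one summary
# sentence. Field names and values are illustrative, not GENAUDIT's schema.
suggestion = {
    "claim": "The company reported a 12% rise in revenue in 2023.",
    "verdict": "unsupported",                     # supported / unsupported
    "suggested_edit": "The company reported a 7% rise in revenue in 2023.",
    "evidence": [
        # Character offsets into the reference document plus the quoted span.
        {"start": 1042, "end": 1101,
         "text": "Full-year revenue grew 7% compared with fiscal 2022."}
    ],
}

# A reviewer can compare the claim against the quoted evidence and either
# accept the suggested edit or delete the sentence from the summary.
print(suggestion["claim"], "->", suggestion["suggested_edit"])
```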

To construct GENAUDIT, the team trained models specifically for these tasks: extracting evidence from the reference document to support factual statements, identifying unsupported claims, and recommending suitable revisions. GENAUDIT also provides an interactive interface to support user review and decision-making, through which users can examine and approve the recommended adjustments and their supporting evidence.
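Below is a minimal sketch of what such a human-in-the-loop review could look like in code, using the hypothetical suggestion structure from the earlier example. The `suggestions` schema and the accept/reject prompt are assumptions for illustration, not the actual GENAUDIT interface.

```python
def review_suggestions(summary_sentences, suggestions):
    """Apply only the edits a human reviewer accepts.

    `suggestions` maps a sentence index to a dict with 'suggested_edit'
    and 'evidence' keys (hypothetical schema, for illustration only).
    """
    revised = list(summary_sentences)
    for idx, s in suggestions.items():
        print(f"Sentence: {summary_sentences[idx]}")
        print(f"Proposed: {s['suggested_edit'] or '<delete sentence>'}")
        for ev in s["evidence"]:
            print(f"  Evidence: {ev['text']}")
        if input("Accept this edit? [y/N] ").strip().lower() == "y":
            if s["suggested_edit"]:
                revised[idx] = s["suggested_edit"]
            else:
                revised[idx] = None  # mark the sentence for deletion
    return [sent for sent in revised if sent is not None]
```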

The team reports that in-depth assessments of GENAUDIT were carried out by human raters, who evaluated its performance across multiple categories by examining how well it could identify flaws in LLM outputs when summarizing documents. The evaluations showed that GENAUDIT can accurately identify faults in the outputs of eight distinct LLMs across a variety of domains.

To optimize GENAUDIT's error detection, the team also proposes a technique that maximizes error recall while minimizing the accompanying drop in precision. This strategy helps the system catch the majority of faults while keeping precision largely intact.
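A common way to trade precision for recall is to lower the decision threshold on a model's per-claim error probability. The sketch below illustrates that general idea with made-up numbers; it is not taken from the paper's specific decoding-time method.

```python
# Illustrative sketch: lowering the decision threshold on a model's
# per-claim error probability raises recall at some cost in precision.
# All probabilities and labels below are made up for demonstration.

def detect_errors(error_probs, threshold):
    """Flag a claim as erroneous when P(error) meets the threshold."""
    return [p >= threshold for p in error_probs]

def precision_recall(predictions, gold_labels):
    tp = sum(p and g for p, g in zip(predictions, gold_labels))
    fp = sum(p and not g for p, g in zip(predictions, gold_labels))
    fn = sum((not p) and g for p, g in zip(predictions, gold_labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

probs = [0.92, 0.55, 0.40, 0.81, 0.30, 0.65]    # model's P(error) per claim
gold  = [True, True, False, True, False, True]  # claims that are really wrong

for tau in (0.8, 0.6, 0.4):
    preds = detect_errors(probs, tau)
    p, r = precision_recall(preds, gold)
    print(f"threshold={tau:.1f}  precision={p:.2f}  recall={r:.2f}")
```

Running this shows recall rising from 0.50 to 1.00 as the threshold drops from 0.8 to 0.4, while precision falls only from 1.00 to 0.80, which is the kind of trade-off the technique aims for.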

The team has summarized their primary contributions as follows.

  1. GENAUDIT has been introduced, a tool that supports fact-checking of language model outputs in document-grounded tasks. It highlights supporting evidence for claims made in LLM-generated content, finds flaws, and offers corrections.
  2. Fine-tuned LLMs that serve as backend models for fact-checking have been evaluated and provided. These models perform comparably to the most advanced proprietary LLMs, especially in few-shot conditions.
  3. GENAUDIT's effectiveness at fact-checking errors has been evaluated on summaries generated by eight different LLMs over documents from three different fields.
  4. A decoding-time technique that improves error detection recall at the expense of a minor reduction in precision has been presented and evaluated, striking a balance between preserving overall accuracy and enhancing error detection.

In conclusion, GENAUDIT is a valuable tool for improving fact-checking workflows in document-grounded tasks and for increasing the reliability of LLM-generated information in important applications.


Check out the Paper, Project, and GitHub. All credit for this research goes to the researchers of this project.



Tanya Malhotra is a final year undergrad from the University of Petroleum & Energy Studies, Dehradun, pursuing BTech in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning.
She is a Data Science enthusiast with strong analytical and critical-thinking skills, along with a keen interest in acquiring new skills, leading groups, and managing work in an organized manner.



