MIA-Bench: Towards Better Instruction Following Evaluation of Multimodal LLMs

We introduce MIA-Bench, a new benchmark designed to evaluate multimodal large language models (MLLMs) on their ability to strictly adhere to complex instructions. Our benchmark comprises a diverse set of 400 image-prompt pairs, each crafted to challenge the models’ compliance with layered instructions in generating accurate responses that satisfy specific requested patterns. Evaluation results from a wide array of state-of-the-art MLLMs reveal significant variations in performance, highlighting areas for improvement in instruction fidelity. Additionally, we create extra training data and explore supervised fine-tuning to enhance the models’ ability to strictly follow instructions without compromising performance on other tasks. We hope this benchmark not only serves as a tool for measuring MLLM adherence to instructions, but also guides future developments in MLLM training methods.

