MIA-Bench: Towards Better Instruction Following Evaluation of Multimodal LLMs

We introduce MIA-Bench, a new benchmark designed to evaluate multimodal large language models (MLLMs) on their ability to strictly adhere to complex instructions. Our benchmark comprises a diverse set of 400 image-prompt pairs, each crafted to challenge the models’ compliance with layered instructions in generating accurate responses that satisfy specific requested patterns. Evaluation results from a wide array of state-of-the-art MLLMs reveal significant variations in performance, highlighting areas for improvement in instruction fidelity. Additionally, we create extra training data and explore supervised fine-tuning to enhance the models’ ability to strictly follow instructions without compromising performance on other tasks. We hope this benchmark not only serves as a tool for measuring MLLM adherence to instructions, but also guides future developments in MLLM training methods.
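The abstract does not spell out the scoring protocol, but one natural way to quantify adherence to layered instructions is a weighted per-sub-instruction compliance score. The sketch below is a hypothetical illustration under that assumption, not MIA-Bench's actual metric; `Instruction`, `adherence_score`, and the toy `keyword_judge` are invented names, with the judge standing in for a real grader (e.g., an LLM-as-judge call).

```python
from dataclasses import dataclass

@dataclass
class Instruction:
    """One sub-instruction within a layered prompt, with a weight
    reflecting its importance to the overall task (assumed scheme)."""
    description: str
    weight: float = 1.0

def adherence_score(response: str, instructions: list[Instruction], judge) -> float:
    """Weighted-average compliance score in [0, 1].

    `judge(response, description)` is a placeholder for any scorer
    (e.g., an LLM grader) returning a fraction in [0, 1] indicating
    how well `response` satisfies one sub-instruction.
    """
    total_weight = sum(i.weight for i in instructions)
    weighted = sum(i.weight * judge(response, i.description) for i in instructions)
    return weighted / total_weight

# Toy judge: a naive keyword check, standing in for a real LLM judge.
def keyword_judge(response: str, description: str) -> float:
    return 1.0 if description.split()[-1].lower() in response.lower() else 0.0

if __name__ == "__main__":
    layered = [
        Instruction("Describe the image in exactly three sentences.", 2.0),
        Instruction("Mention the word sunset.", 1.0),
    ]
    reply = "A beach at sunset. Waves roll in. Two gulls circle overhead."
    print(f"adherence = {adherence_score(reply, layered, keyword_judge):.2f}")
```

Weighting lets hard or central constraints dominate the aggregate, so a model that nails trivial sub-instructions but misses the main requirement still scores low.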

