
Mini-InternVL: A Series of Multimodal Large Language Models (MLLMs) from 1B to 4B, Achieving 90% of the Performance with Only 5% of the Parameters


Multimodal large language models (MLLMs) are evolving rapidly, integrating vision and language processing to enhance comprehension and interaction across diverse data types. These models excel in tasks like image recognition and natural language understanding by combining visual and textual processing into one coherent framework. This integrated approach allows MLLMs to perform strongly on tasks requiring multimodal inputs, proving valuable in fields such as autonomous navigation, medical imaging, and remote sensing, where simultaneous analysis of visual and textual data is essential.

Despite their advantages, MLLMs face substantial constraints: their computational intensity and extensive parameter counts limit their adaptability on devices with constrained resources. Many MLLMs also rely on general-purpose training data, often derived from internet sources, which hurts their performance when applied to specialized domains. This reliance on vast datasets and large-scale computing creates significant barriers to deploying these models for tasks requiring nuanced, domain-specific understanding. The challenge is amplified in fields like remote sensing or autonomous driving, where domain adaptation is crucial but complex and costly.

Existing MLLMs typically incorporate vision encoders like CLIP, designed to align visual data with language models within a cohesive multimodal framework. However, these encoders often lack comprehensive visual knowledge of specialized domains. Most current MLLMs pair a pre-trained vision encoder with a large language model, and adapting this combination to a new domain usually requires substantial adjustments to the architecture and training schedule. This process, though effective, is inefficient and makes deploying such models on smaller devices difficult, since their reliance on internet-domain data prevents them from adapting to domain-specific tasks without extensive reconfiguration.
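To make this coupling concrete, the sketch below shows, in generic PyTorch, how a typical MLLM bridges a pre-trained vision encoder and a language model with a small projection module. It is a hypothetical illustration of the common design pattern, not InternVL's actual implementation; the dimensions and module names are assumptions.

```python
# Generic sketch of how many MLLMs connect a vision encoder to an LLM.
# Dimensions and names are illustrative only, not from any InternVL release.
import torch
import torch.nn as nn

class VisionLanguageConnector(nn.Module):
    """Projects vision-encoder patch features into the LLM embedding space."""
    def __init__(self, vision_dim: int = 1024, llm_dim: int = 2048):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, vision_features: torch.Tensor) -> torch.Tensor:
        # vision_features: (batch, num_patches, vision_dim)
        return self.proj(vision_features)  # (batch, num_patches, llm_dim)

# The projected patch embeddings are concatenated with text token embeddings
# before being passed through the language model's transformer layers.
connector = VisionLanguageConnector()
dummy_patches = torch.randn(1, 256, 1024)   # placeholder ViT output
visual_tokens = connector(dummy_patches)    # ready to prepend to text tokens
print(visual_tokens.shape)                  # torch.Size([1, 256, 2048])
```

Adapting such a stack to a new domain typically means retraining this bridge, and often parts of both backbones, which is exactly the cost Mini-InternVL aims to reduce.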

Researchers from Shanghai AI Laboratory, Tsinghua University, Nanjing University, Fudan University, The Chinese University of Hong Kong, SenseTime Research, and Shanghai Jiao Tong University have introduced Mini-InternVL, a series of lightweight MLLMs ranging from 1B to 4B parameters, designed to deliver efficient multimodal understanding across various domains. Mini-InternVL aims to retain 90% of the performance of larger multimodal models while using only 5% of the parameters, making it both resource-efficient and accessible on consumer-grade devices. The research team designed Mini-InternVL as a pocket-sized solution adaptable to tasks such as autonomous driving, medical imaging, and remote sensing, with far lower computational overhead than traditional MLLMs. Through a unified adaptation framework, Mini-InternVL supports effective model transfer across domains, promoting accessibility and applicability in specialized fields.

Mini-InternVL employs a robust vision encoder called InternViT-300M, distilled from the larger InternViT-6B model. This encoder preserves strong representational capacity while cutting resource requirements, enabling effective cross-domain transfer. The series comprises three variants, Mini-InternVL-1B, Mini-InternVL-2B, and Mini-InternVL-4B, with roughly 1 billion, 2 billion, and 4 billion parameters. Each variant pairs the vision encoder with a pre-trained language model: Qwen2-0.5B, InternLM2-1.8B, and Phi-3-Mini, respectively, allowing flexible deployment. Training proceeds in two stages: first, language-image alignment, where the model is pre-trained on extensive datasets spanning various tasks to robustly align visual and textual representations; second, visual instruction tuning on datasets for multimodal tasks such as image captioning, chart interpretation, and visual question answering. The diverse range of tasks in this multi-stage training enhances Mini-InternVL's adaptability and performance in real-world scenarios.
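As a rough illustration of the two-stage recipe described above, the sketch below toggles which parameters receive gradients in each stage: only a projection module during language-image alignment, then the projector plus the language model during visual instruction tuning. The submodule names (vision_encoder, projector, llm) and the exact freezing schedule are assumptions made for this sketch, not identifiers or settings from the InternVL codebase.

```python
# Hedged sketch of a two-stage MLLM training setup. Assumes the model exposes
# submodules named "vision_encoder", "projector", and "llm"; these names are
# placeholders for illustration, not InternVL's actual module names.
import torch.nn as nn

def trainable_parameters(model: nn.Module, stage: int):
    """Select the parameters that should receive gradients for a given stage."""
    params = []
    for name, param in model.named_parameters():
        if stage == 1:
            # Stage 1 (language-image alignment): train only the projector,
            # keeping the vision encoder and the language model frozen.
            param.requires_grad = name.startswith("projector")
        else:
            # Stage 2 (visual instruction tuning): train projector and LLM;
            # whether the vision encoder is also unfrozen is a design choice.
            param.requires_grad = not name.startswith("vision_encoder")
        if param.requires_grad:
            params.append(param)
    return params

# Usage with any nn.Module that follows the naming convention above:
# optimizer = torch.optim.AdamW(trainable_parameters(model, stage=1), lr=2e-4)
```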

Mini-InternVL demonstrates strong results on a variety of multimodal benchmarks, reaching up to 90% of the performance of larger models such as InternVL2-Llama3-76B with only about 5% of the parameters. Specifically, Mini-InternVL-4B performed well on general multimodal benchmarks, scoring 78.9 on MMBench and 81.5 on ChartQA, both essential benchmarks for vision-language tasks. The model was also competitive on domain-specific tasks, matching or even outperforming some proprietary models in accuracy and efficiency. In the autonomous driving domain, for instance, Mini-InternVL-4B achieved accuracy comparable to models that use significantly more resources. The Mini-InternVL models also excelled in medical imaging and remote sensing, showing strong generalization with minimal fine-tuning. Mini-InternVL-4B achieved a final average score of 72.8 across multiple benchmarks, underscoring its strength as a lightweight, high-performing model that transfers seamlessly across specialized fields without excessive resource demands.

The researchers successfully addressed the high computational barriers in multimodal model deployment by introducing Mini-InternVL. This model demonstrates that efficient architecture and training methods can achieve competitive performance levels while significantly reducing resource requirements. By employing a unified adaptation framework and a robust vision encoder, Mini-InternVL provides a scalable solution for specialized applications in resource-limited environments, advancing the practical applicability of multimodal large language models in specialized fields.


Check out the Paper and Model Card on Hugging Face. All credit for this research goes to the researchers of this project.






