BiomedGPT: A Versatile Transformer-Based Foundation Model for Biomedical AI with Enhanced Multimodal Capabilities and Performance


Traditional biomedical AI models are often specialized and lack flexibility, making them less effective for real-world applications that require integrating diverse data types. Generalist AI models, particularly those based on transformers, offer a versatile alternative by handling both textual and visual data. These models can streamline complex tasks such as radiology interpretation and clinical summarization, overcoming the limitations of narrow, task-specific systems. Unlike many biomedical models, which are cumbersome and closed-source, generalist models consolidate multiple functions into a single system, simplifying deployment and management and improving efficiency and adaptability in medical settings.

Researchers from Lehigh University and other institutions present BiomedGPT, an open-source, lightweight vision–language foundation model designed for various biomedical tasks. BiomedGPT achieved state-of-the-art results in 16 out of 25 experiments while maintaining a computing-friendly model scale. Human evaluations showed robust performance in radiology visual question answering, report generation, and summarization, with low error rates and competitive summarization ability. BiomedGPT, trained with diverse, cross-disciplinary data, demonstrates effective transfer and zero-shot learning capabilities. Despite its potential, further improvements are needed for clinical deployment, particularly in safety, equity, and bias considerations.

BiomedGPT is a transformer-based model optimized for the biomedical field, combining concepts from Vision Transformers and language models. Its encoder-decoder architecture, featuring a BERT-style encoder and a GPT-style decoder, supports multimodal tasks with improved convergence through multi-head attention and normalization. The model comes in three sizes (BiomedGPT-S, M, and B) and processes inputs through a unified token vocabulary covering both text and image patches. It is pretrained on a mix of vision and text tasks, then fine-tuned on task-specific datasets and evaluated using accuracy, F1 score, and ROUGE-L. BiomedGPT also extends to 3D imaging and supports instruction tuning for zero-shot tasks.
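The unified token vocabulary described above can be pictured as a single id space in which text subwords and discrete image-patch codes occupy disjoint blocks, so one embedding table and one output softmax serve both modalities. The sketch below is purely illustrative: the vocabulary sizes, offset scheme, and function names are hypothetical, not BiomedGPT's actual configuration.

```python
# Illustrative sketch of a shared text + image-patch vocabulary.
# All sizes and names here are hypothetical placeholders.

TEXT_VOCAB_SIZE = 50_000     # subword tokens (assumed size)
IMAGE_CODEBOOK_SIZE = 8_192  # discrete image-patch codes (assumed size)

def text_token_id(subword_id: int) -> int:
    """Text subwords occupy the first block of the shared vocabulary."""
    assert 0 <= subword_id < TEXT_VOCAB_SIZE
    return subword_id

def image_token_id(code_id: int) -> int:
    """Image-patch codes are offset past the text block, so both
    modalities share one embedding table and one output softmax."""
    assert 0 <= code_id < IMAGE_CODEBOOK_SIZE
    return TEXT_VOCAB_SIZE + code_id

# A multimodal sequence is then just a flat list of integer ids:
sequence = [text_token_id(101), image_token_id(42), text_token_id(7)]
```

With this layout the decoder can emit tokens of either modality from a single distribution, which is what lets one seq2seq model cover captioning, VQA, and text generation alike.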

BiomedGPT utilizes masked modeling and supervised learning during its pretraining phase, leveraging 14 diverse datasets to build strong data representations. The model is available in three sizes: small (BiomedGPT-S), medium (BiomedGPT-M), and base (BiomedGPT-B). BiomedGPT was adapted for several biomedical applications during fine-tuning, including medical image classification, text understanding, summarization, image captioning, and visual question answering (VQA). These applications aim to enhance disease diagnostics, clinical documentation, and healthcare chatbot development.
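Masked modeling, one of the pretraining objectives mentioned above, corrupts part of the input and trains the model to reconstruct the hidden tokens. The snippet below is a minimal BERT-style sketch of that corruption step; BiomedGPT's exact masking scheme and hyperparameters may differ.

```python
import random

def mask_tokens(tokens, mask_id, mask_prob=0.15, seed=0):
    """Replace a random fraction of tokens with a mask id and record the
    originals as reconstruction targets (BERT-style masked modeling).
    Positions marked -100 are ignored by the training loss."""
    rng = random.Random(seed)
    corrupted, targets = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            corrupted.append(mask_id)
            targets.append(tok)    # model must reconstruct this token
        else:
            corrupted.append(tok)
            targets.append(-100)   # unmasked: excluded from the loss
    return corrupted, targets
```

Because the objective needs no labels, the 14 pretraining datasets can contribute raw images and text directly, which is what makes this stage scalable.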

In performance evaluations, BiomedGPT excelled across various multimodal tasks. It achieved 86.1% accuracy in VQA on the SLAKE dataset, surpassing the previous state-of-the-art. BiomedGPT outperformed previous models in medical image classification on seven out of nine MedMNIST-Raw datasets. For text understanding and summarization, BiomedGPT-B demonstrated superior results compared to BioGPT and LLaVA-Med. The model also showed effective zero-shot capabilities for biomedical VQA and report generation, though there is still potential for improvement. Human evaluations of BiomedGPT’s radiology task performance indicated high accuracy and competitive results in radiology report generation and summarization.
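ROUGE-L, used for the summarization and report-generation results above, scores a candidate text by the longest common subsequence (LCS) it shares with a reference. A minimal sentence-level implementation looks like this (beta = 1 for simplicity; reported scores may use different aggregation):

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(reference: str, candidate: str) -> float:
    """Sentence-level ROUGE-L F-measure over whitespace tokens."""
    ref, cand = reference.split(), candidate.split()
    lcs = lcs_len(ref, cand)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(cand), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)
```

Because LCS rewards in-order overlap rather than exact n-grams, ROUGE-L tolerates rephrasing, which suits free-form outputs like radiology reports.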

The study demonstrates that BiomedGPT achieves strong transfer-learning performance across vision, language, and multimodal domains by integrating diverse biomedical data within a unified framework. However, challenges persist, such as the need for high-quality annotated biomedical data and the risk of negative transfer when expanding to new data types like 3D images. Evaluation of generated text remains difficult, with emerging metrics like the F1-RadGraph score helping to assess factual accuracy. While scaling improves performance, it also introduces efficiency and training challenges. BiomedGPT’s capabilities, particularly in zero-shot scenarios, are limited by current resources and training strategies, though fine-tuning shows promise.


Check out the Paper. All credit for this research goes to the researchers of this project.




Sana Hassan, a consulting intern at Marktechpost and dual-degree student at IIT Madras, is passionate about applying technology and AI to address real-world challenges. With a keen interest in solving practical problems, he brings a fresh perspective to the intersection of AI and real-life solutions.



