
Google AI Proposes a Machine Learning Framework for Understanding AI Models in Medical Imaging


Recent advancements in machine learning are being actively applied to healthcare. Despite performing remarkably well on various tasks, these models often cannot show how specific visual changes affect their decisions. AI models have shown great promise, in some cases matching human performance, yet there remains a critical need to explain what signals the models have learned. Such explanations are essential for building trust among medical professionals and may uncover novel scientific insights from the data that experts have not yet recognized. Google researchers introduced StylEx, a novel framework that leverages generative AI to address challenges in medical imaging, focusing especially on the lack of explainability in AI models.

Current methods for explaining AI models in computer vision, particularly in medical imaging, often rely on heatmaps that indicate how important different pixels are to a decision. While useful for showing the “where” of important features, these methods fall short of explaining the “what” and “why”: they typically say nothing about higher-level characteristics such as texture, shape, or size that may underlie the model’s decisions. To overcome these limitations, Google’s StylEx couples a StyleGAN-based image generator with a guiding classifier. This approach aims to generate hypotheses by identifying and visualizing the visual signals correlated with the classifier’s predictions.

The workflow involves four key steps: training a classifier to confirm that the imagery contains relevant signals, training a StylEx model to generate images guided by that classifier, automatically detecting and visualizing the top visual attributes influencing the classifier, and having an interdisciplinary panel of experts review these findings to formulate hypotheses for future research. The first step trains a classifier on a given medical imaging dataset for a specific task and verifies that it achieves high performance (above 0.8 accuracy). This step confirms that the images contain relevant information for the task.
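To make the first step concrete, the accuracy gate can be sketched as follows. StylEx trains a deep CNN on real medical images; the tiny logistic-regression model and synthetic data below are purely illustrative stand-ins (none of these names or numbers come from the paper) that show the pattern of training a classifier and checking it clears an accuracy threshold before proceeding.

```python
import math
import random

def train_classifier(data, lr=0.1, epochs=200):
    """Tiny logistic-regression stand-in for StylEx's task classifier.
    (The real workflow uses a deep CNN; this only shows the gating logic.)"""
    dim = len(data[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid probability
            g = p - y                        # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def accuracy(model, data):
    w, b = model
    hits = sum((sum(wi * xi for wi, xi in zip(w, x)) + b > 0) == (y == 1)
               for x, y in data)
    return hits / len(data)

# Toy "images": 4 features whose mean differs by class label.
random.seed(0)
data = [([random.gauss(1 if y else -1, 0.5) for _ in range(4)], y)
        for y in (0, 1) for _ in range(50)]

model = train_classifier(data)
acc = accuracy(model, data)
# Gate from step one: proceed to StylEx training only if acc > 0.8.
print(acc > 0.8)  # True when the classifier clears the gate
```

Only a classifier that passes this check is worth explaining, since a weak classifier implies the images lack usable signal for the task.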

Second, a StyleGAN2-based generator is trained to produce realistic images while preserving the classifier’s decision-making process. This generator is adapted to focus on attributes that significantly affect the classifier’s output. The third stage involves automatically selecting the top attributes in the StyleSpace of the generator that influence the classifier’s predictions. For each image, the researchers manipulate each coordinate in the StyleSpace to measure its effect on the classification output, identifying attributes that significantly change the prediction. This process results in counterfactual visualizations, where each attribute is independently adjusted to show its impact.
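The attribute-selection stage described above amounts to a coordinate-wise sensitivity test in StyleSpace. The sketch below is a simplified, hypothetical stand-in (the toy generator and classifier are plain linear maps, not StyleGAN2 and a CNN): nudge one style coordinate at a time, measure how far the classifier's output moves, and rank coordinates by average impact.

```python
import random

def top_style_attributes(style_codes, generate, classify, delta=1.0, k=3):
    """Rank StyleSpace coordinates by how strongly perturbing each one
    shifts the classifier's output, averaged over example style codes."""
    dim = len(style_codes[0])
    impact = [0.0] * dim
    for s in style_codes:
        base = classify(generate(s))
        for i in range(dim):
            pert = list(s)
            pert[i] += delta                  # counterfactual edit of one attribute
            impact[i] += abs(classify(generate(pert)) - base)
    impact = [v / len(style_codes) for v in impact]
    ranked = sorted(range(dim), key=lambda i: impact[i], reverse=True)
    return ranked[:k], impact

# Toy stand-ins: an identity "generator" and a linear "classifier"
# whose weight on coordinate 2 dwarfs the others.
random.seed(0)
codes = [[random.gauss(0, 1) for _ in range(5)] for _ in range(8)]
weights = [0.1, 0.0, 5.0, 0.2, 0.05]
generate = lambda s: s
classify = lambda img: sum(w * x for w, x in zip(weights, img))

top, impact = top_style_attributes(codes, generate, classify)
print(top[0])  # coordinate 2 dominates the classifier's prediction
```

Visualizing the image pairs before and after each high-impact edit yields the counterfactual visualizations the panel later reviews.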

Finally, an interdisciplinary panel of experts, including clinicians, social scientists, and machine learning engineers, reviews these visualizations. This panel interprets the attributes to determine whether they correspond to known clinical features, potential biases, or novel findings. The panel’s insights are then used to generate hypotheses for further research, considering both biological and socio-cultural determinants of health.

In conclusion, the proposed framework enhances the explainability of AI models in medical imaging. By generating counterfactual images and visualizing the attributes that drive classifier predictions, the approach offers a deeper understanding of the “what” behind a model’s decisions. The involvement of an interdisciplinary panel, whose expertise extends beyond physiology and pathophysiology, ensures that these insights are rigorously interpreted, accounting for potential biases and suggesting new avenues for scientific inquiry.


Check out the Paper and Blog. All credit for this research goes to the researchers of this project.




Pragati Jhunjhunwala is a consulting intern at MarktechPost. She is currently pursuing her B.Tech from the Indian Institute of Technology (IIT), Kharagpur. She is a tech enthusiast with a keen interest in the scope of software and data science applications, and is always reading about developments in different fields of AI and ML.




