Efficient Causal-Based Bias Mitigation in Medical Imaging with Sensitive Attributes

Contact person: Emre Kavak, Christian Wachinger

Overview
This project explores causal approaches for mitigating bias in deep learning models applied to medical imaging, with a focus on efficiency improvements. Unlike unsupervised settings where demographic or sensitive attributes are unavailable, this project targets scenarios where such attributes (e.g., age, sex, race) are accessible during training. The goal is to develop more efficient regularization methods that ensure models rely on task-relevant features rather than on spurious correlations induced by these sensitive attributes.

Motivation
Bias in medical AI systems can have serious consequences. For instance, skin lesion detection models trained on predominantly lighter skin tones may fail when diagnosing darker skin tones. While fairness techniques often attempt to remove such bias, they tend to be computationally expensive. This project seeks to improve these techniques by leveraging access to sensitive demographic data to guide bias mitigation more effectively, while making it efficient and scalable for high-dimensional medical imaging tasks.

Objectives

  • Causal Framework for Bias Mitigation: Investigate causal models that promote fairness by ensuring that predictions rely solely on task-relevant features, while sensitive attributes are properly accounted for to reduce bias.
  • Efficient Regularization: Develop computationally efficient regularization techniques that enforce causal invariance in the presence of sensitive attributes, without incurring prohibitive computational costs.
  • Application to Medical Imaging: Apply and evaluate these techniques in medical imaging tasks, including skin lesion detection and chest X-ray analysis, to demonstrate the methods’ effectiveness in mitigating bias and improving fairness.
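To make the efficient-regularization objective concrete, one common and cheap family of approaches adds a penalty that discourages statistical dependence between the learned representation and the sensitive attribute. The sketch below is only an illustration of that idea, not the project's prescribed method: the function name and the squared cross-covariance penalty are illustrative assumptions, and only numpy is used to keep it self-contained.

```python
import numpy as np

def crosscov_penalty(features, sensitive):
    """Squared cross-covariance between feature columns and sensitive attributes.

    A value near zero means the (linear) dependence between the learned
    representation and the sensitive attribute is weak; added to a task loss
    with some weight, it acts as a lightweight invariance regularizer.
    """
    f = features - features.mean(axis=0)    # center features
    s = sensitive - sensitive.mean(axis=0)  # center sensitive attribute(s)
    n = features.shape[0]
    cross_cov = f.T @ s / n                 # shape: (d_features, d_sensitive)
    return float(np.sum(cross_cov ** 2))

# Toy check: a representation that copies the sensitive attribute
# is penalized far more than an independent one.
rng = np.random.default_rng(0)
sens = rng.integers(0, 2, size=(200, 1)).astype(float)  # binary attribute
feat_indep = rng.normal(size=(200, 4))                  # unrelated features
feat_leaky = np.hstack([feat_indep[:, :3], sens])       # last column leaks it

p_indep = crosscov_penalty(feat_indep, sens)
p_leaky = crosscov_penalty(feat_leaky, sens)
```

In training, such a term would be added to the task loss with a tunable weight; kernel-based dependence measures such as HSIC are a common, more expressive (but costlier) alternative, which is exactly the efficiency trade-off this project examines.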

Example Applications

  • Medical Imaging:
    • Skin Lesion Detection: Ensure models focus on lesion features rather than skin tone by leveraging sensitive attributes such as skin tone during training.
    • Chest X-ray Analysis: Mitigate bias in diagnostic systems to ensure fair performance across different patient demographics, such as age, sex, or race.
  • General Computer Vision in Healthcare:
    • Radiology Image Classification: Reduce the risk of biased predictions based on irrelevant factors like patient positioning or imaging conditions by utilizing sensitive demographic information in a controlled way.

Student Requirements
Ideal candidates will have:

  • Strong experience in machine learning, particularly in deep learning and computer vision.
  • Familiarity with medical imaging pipelines or a strong willingness to learn.
  • Interest in fairness, causal inference, and bias mitigation with access to sensitive demographic data.

Expected Outcomes

  • Development of efficient regularization methods for causal bias mitigation, making them feasible for high-dimensional tasks like medical imaging.
  • Quantitative evaluation of fairness improvements, demonstrating reduced performance disparities across demographic groups in medical imaging tasks.
  • Insight into how causal frameworks can be efficiently applied to sensitive, real-world healthcare problems.
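The quantitative fairness evaluation mentioned above typically amounts to disaggregating a performance metric by demographic group and reporting the disparity. A minimal sketch, assuming binary labels and predictions (the function name and the worst-case true-positive-rate gap, an equalized-odds-style measure, are illustrative choices):

```python
import numpy as np

def tpr_gap(y_true, y_pred, group):
    """Largest difference in true-positive rate between demographic groups."""
    tprs = []
    for g in np.unique(group):
        pos = (group == g) & (y_true == 1)  # positives within this group
        tprs.append(y_pred[pos].mean())     # group-wise TPR (recall)
    return max(tprs) - min(tprs)

# Toy example: group 0 gets every positive right, group 1 only half.
y_true = np.array([1, 1, 1, 1, 1, 1, 1, 1])
y_pred = np.array([1, 1, 1, 1, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
gap = tpr_gap(y_true, y_pred, group)  # 1.0 - 0.5 = 0.5
```

The same pattern extends to other metrics (AUC, false-positive rate, calibration error); a successful mitigation method should shrink these gaps without degrading overall task performance.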

Related Work

  • Bias Mitigation in Medical AI: https://arxiv.org/pdf/2105.06422
  • Efficient Regularization for Causal Inference: https://arxiv.org/pdf/2212.08645
  • Causal Inference for Fairness: https://arxiv.org/pdf/2207.11385