Unsupervised Bias Mitigation with Causality-Inspired Methods in Computer Vision and Medical Imaging 

Contact persons: Emre Kavak, Christian Wachinger

Overview 

This project focuses on developing novel unsupervised methods for bias mitigation, particularly in computer vision and medical imaging. Bias in AI systems, especially in high-stakes domains like healthcare, can lead to unfair outcomes, such as poorer performance for specific demographic groups (e.g., by biological sex, age, or race). These disparities persist even when demographic information is not explicitly included during model training.

Using causality-inspired approaches, this project seeks to tackle bias without relying on demographic labels. Instead, it leverages intrinsic patterns in model behavior and learned representations to identify and mitigate biases in an unsupervised manner.
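While the precise causality-inspired machinery is the subject of the project itself, a common label-free starting point illustrates the idea: cluster a trained model's feature representations and inspect per-cluster error rates, since high-error clusters often align with hidden subgroups. The sketch below is illustrative only; the split into a backbone feature extractor and a classifier head, the k-means choice, and all names are assumptions, not a prescribed method.

    import numpy as np
    import torch
    from sklearn.cluster import KMeans

    @torch.no_grad()
    def discover_biased_clusters(backbone, head, loader, n_clusters=8, device="cpu"):
        # Collect penultimate-layer features and per-sample correctness.
        # Assumes the model splits into a feature extractor and a head
        # that accepts flattened features (illustrative, not a fixed API).
        backbone.eval().to(device)
        head.eval().to(device)
        feats, correct = [], []
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            z = backbone(x).flatten(1)          # one feature vector per sample
            preds = head(z).argmax(dim=1)
            feats.append(z.cpu().numpy())
            correct.append((preds == y).cpu().numpy())
        feats = np.concatenate(feats)
        correct = np.concatenate(correct)

        # Cluster the representation space without any demographic labels,
        # then flag clusters whose error rate is unusually high.
        clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
        error_by_cluster = {c: float(1.0 - correct[clusters == c].mean())
                            for c in range(n_clusters)}
        return clusters, error_by_cluster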

Motivation 

Bias in machine learning models can have critical real-world consequences: 

  • Medical Imaging: An AI system trained on datasets biased toward lighter skin tones may perform poorly when diagnosing conditions in patients with darker skin tones. 
  • Computer Vision: Facial recognition systems often exhibit higher error rates for minority groups due to imbalanced training data. 

This project aims to address such disparities by developing methods that promote fairness without requiring sensitive demographic labels.

Objectives 

  • Investigate how causality-inspired methods can detect and mitigate bias in learned representations. 
  • Develop an unsupervised framework to enhance fairness across diverse groups, focusing on settings where demographic labels are unavailable (a minimal mitigation sketch follows this list). 
  • Apply and evaluate the proposed methods on both general computer vision tasks and medical imaging datasets. 
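A minimal sketch of one possible mitigation step, assuming cluster assignments from a discovery procedure like the one above. Upweighting high-error clusters is a simple stand-in for the distributionally robust or causally motivated objectives the project would develop; the function name and weighting scheme are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def cluster_reweighted_loss(logits, targets, cluster_ids, cluster_weights):
        # cluster_weights: tensor of shape [n_clusters], e.g. proportional to
        # each cluster's held-out error rate so hard subgroups weigh more.
        per_sample = F.cross_entropy(logits, targets, reduction="none")
        w = cluster_weights[cluster_ids]          # per-sample weight lookup
        return (w * per_sample).sum() / w.sum()   # normalized weighted loss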

Example Applications 

  • General Computer Vision: 
    • Reducing gender or race biases in image classification tasks (e.g., object detection systems that disproportionately associate certain objects with specific demographic groups). 
    • Ensuring fair performance in emotion recognition systems across different facial features or skin tones. 
  • Medical Imaging: 
    • Addressing disparities in model accuracy for different patient demographics (e.g., ensuring consistent detection of diseases across varying skin tones or anatomical differences). 
    • Improving fairness in automated diagnostic systems, such as those used for skin lesion detection or chest X-ray analysis. 

Student Requirements 

Ideal candidates will have: 

  • A strong foundation in machine learning and deep learning. 
  • Experience with computer vision frameworks (PyTorch, TensorFlow) or medical imaging pipelines. 
  • Interest in fairness, causality, and bias mitigation techniques. 

Expected Outcomes 

  • Development of a scalable framework for unsupervised bias mitigation. 
  • Demonstration of the framework on benchmark computer vision datasets (e.g., COCO, CelebA) and/or medical imaging datasets (e.g., NIH ChestX-ray14, ISIC skin lesion datasets). 
  • Quantitative and qualitative evaluations showcasing improved fairness metrics and reduced performance disparities (see the evaluation sketch after this list).
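For concreteness, disparity reports of this kind typically include per-group accuracy, worst-group accuracy, and the max-min accuracy gap, where groups are either true demographics (held out for evaluation only) or discovered clusters. A minimal sketch, with illustrative names:

    import numpy as np

    def group_accuracy_gap(correct, group_ids):
        # correct: boolean array, one entry per test sample
        # group_ids: integer group per sample (demographics at evaluation
        # time, or discovered clusters when demographics are unavailable)
        accs = {int(g): float(correct[group_ids == g].mean())
                for g in np.unique(group_ids)}
        worst = min(accs.values())
        gap = max(accs.values()) - worst
        return accs, worst, gap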
