
Evaluating Fairness for Semi-supervised Cardiac Magnetic Resonance Image Segmentation

The Project

  • Documents for the poster, slides, thesis, and main results are available in docs, and all of the results are in the corresponding notebooks.
  • Code for the UK Biobank (UKBB) use case is available in more-scenarios.
  • Project setup for the UKBB use case is described in this README.

Poster

Motivation

In healthcare, workload is constantly increasing while the workforce remains limited, which calls for automating certain tasks to save time. Rwanda illustrates this: in 2015, there were only 11 radiologists available for a population of 12 million [62]. Fairness is a major concern in healthcare, where models are used to make decisions that affect human lives.

Objective

We would like to leverage semi-supervised learning, patient textual information, and Distribution Alignment [3] to mitigate bias. Semi-supervised learning addresses data scarcity in low-data regimes and provides a means of representation learning that could in turn aid bias mitigation.
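To make the Distribution Alignment idea concrete, the following is a minimal sketch of its usual form: rescale the model's predicted class distribution toward a target marginal before pseudo-labeling. All names and shapes here are illustrative assumptions, not the project's actual code.

import torch

def distribution_alignment(probs: torch.Tensor,
                           target_dist: torch.Tensor,
                           running_dist: torch.Tensor) -> torch.Tensor:
    """Rescale per-pixel class probabilities so that, on average, the
    pseudo-label distribution matches target_dist.

    probs:        (N, C, H, W) softmax outputs on unlabeled images
    target_dist:  (C,) desired class marginal, e.g. labeled-set frequencies
    running_dist: (C,) running mean of the model's predicted marginal
    """
    ratio = (target_dist / running_dist.clamp_min(1e-6)).view(1, -1, 1, 1)
    aligned = probs * ratio
    return aligned / aligned.sum(dim=1, keepdim=True)  # renormalize over classes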

Methodology

We employ a three-stage pipeline combining several techniques. The preprocessing stage consists of data balancing and slice selection: we balance the data on protected attributes to build different train and test sets, and we select either all slices or only those with the lowest error scores.
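The balancing step can be pictured as follows; this is a minimal sketch assuming a per-subject metadata table, and the column name and helper are hypothetical rather than the thesis code.

import pandas as pd

def balanced_split(meta: pd.DataFrame, attr: str = "sex",
                   test_frac: float = 0.2, seed: int = 0):
    """Downsample every protected-attribute group to the size of the
    smallest one, then split each group train/test with the same fraction.
    The attr column name is an illustrative assumption."""
    n_per_group = meta[attr].value_counts().min()
    train_parts, test_parts = [], []
    for _, group in meta.groupby(attr):
        group = group.sample(n=n_per_group, random_state=seed)
        n_test = int(n_per_group * test_frac)
        test_parts.append(group.iloc[:n_test])    # held-out subjects
        train_parts.append(group.iloc[n_test:])   # training subjects
    return pd.concat(train_parts), pd.concat(test_parts)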

The second stage is the choice of training method: we use either fully-supervised training or the semi-supervised pipeline from UniMatch [1].

The third stage is the choice of model, where we compare three segmentation models: UNet [2]; UNet Multi-Modal, which combines image features with text embeddings from BERT [4]; and UNet Multi-Task, which learns classification and segmentation jointly.
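One way to realize the multi-modal variant is to condition the UNet bottleneck on the BERT sentence embedding. The sketch below assumes additive fusion at the bottleneck; the layer names and dimensions are illustrative, not the actual thesis architecture.

import torch
import torch.nn as nn

class TextConditionedBottleneck(nn.Module):
    """Illustrative fusion of a BERT text embedding into a UNet bottleneck:
    project the text vector and add it channel-wise to the image features.
    Dimensions are assumptions, not the project's actual architecture."""

    def __init__(self, bottleneck_channels: int = 512, text_dim: int = 768):
        super().__init__()
        self.proj = nn.Linear(text_dim, bottleneck_channels)

    def forward(self, feats: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # feats: (N, C, H, W) image features; text_emb: (N, text_dim) from BERT
        bias = self.proj(text_emb)[:, :, None, None]  # (N, C, 1, 1)
        return feats + bias                           # broadcast over H and W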

Segmentation

Results

  • No technique could completely mitigate bias for all protected groups and intersectional groups.
  • Lack of bias between groups does not preclude bias within intersectional groups.
  • Incorporating the ethnicity label into certain pipelines lowers overall performance and sometimes introduces disparities between more groups.
  • We cannot definitively say that the cause of the bias is the encoding of sex or ethnicity features in the images.
  • Using different test sets resulted in varied behavior when analyzing bias.

All Experiments

UniMatch vs. UNet


UniMatch Paper


This codebase contains a strong re-implementation of FixMatch in the field of semi-supervised semantic segmentation, as well as the official PyTorch implementation of our UniMatch in the natural, remote sensing, and medical scenarios.
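At its core, the re-implemented FixMatch (and UniMatch's weak-to-strong consistency) pseudo-labels a weakly augmented view and enforces that label on a strongly augmented view, masked by a confidence threshold. A minimal per-pixel sketch, with variable names of our choosing rather than the repository's:

import torch
import torch.nn.functional as F

def weak_to_strong_loss(logits_weak: torch.Tensor,
                        logits_strong: torch.Tensor,
                        conf_thresh: float = 0.95) -> torch.Tensor:
    """FixMatch-style consistency on unlabeled pixels: the pseudo-label
    from the weak view supervises the strong view, restricted to pixels
    where the weak prediction is confident. Logits: (N, C, H, W)."""
    with torch.no_grad():
        probs = logits_weak.softmax(dim=1)
        conf, pseudo = probs.max(dim=1)              # (N, H, W)
        mask = (conf >= conf_thresh).float()         # confident pixels only
    loss = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (loss * mask).sum() / mask.sum().clamp_min(1.0)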

Revisiting Weak-to-Strong Consistency in Semi-Supervised Semantic Segmentation
Lihe Yang, Lei Qi, Litong Feng, Wayne Zhang, Yinghuan Shi
In Conference on Computer Vision and Pattern Recognition (CVPR), 2023

We provide a list of Awesome Semi-Supervised Semantic Segmentation works.

Results

You can check our training logs for convenient comparison when reproducing our results.

Note: we have added and updated some results in our camera-ready version. Please refer to our latest version.

Pascal VOC 2012

Labeled images are sampled from the original high-quality training set. Results are obtained by DeepLabv3+ based on ResNet-101 with training size 321.

Method 1/16 (92) 1/8 (183) 1/4 (366) 1/2 (732) Full (1464)
SupBaseline 45.1 55.3 64.8 69.7 73.5
U2PL 68.0 69.2 73.7 76.2 79.5
ST++ 65.2 71.0 74.6 77.3 79.1
PS-MT 65.8 69.6 76.6 78.4 80.0
UniMatch (Ours) 75.2 77.2 78.8 79.9 81.2

Cityscapes

Results are obtained by DeepLabv3+ based on ResNet-50/101. We reproduce U2PL results on ResNet-50.

Note: the results differ from our arXiv-V1 because we changed the confidence threshold from 0.95 to 0 and the ResNet output stride from 8 to 16. The current version is therefore more efficient to train.

You can click on the numbers to be directed to corresponding checkpoints.

ResNet-50

Method 1/16 1/8 1/4 1/2
SupBaseline 63.3 70.2 73.1 76.6
U2PL 70.6 73.0 76.3 77.2
UniMatch (Ours) 75.0 76.8 77.5 78.6

ResNet-101

Method 1/16 1/8 1/4 1/2
SupBaseline 66.3 72.8 75.0 78.0
U2PL 74.9 76.5 78.5 79.1
UniMatch (Ours) 76.6 77.9 79.2 79.5

COCO

Results are obtained by DeepLabv3+ based on Xception-65.

You can click on the numbers to be directed to corresponding checkpoints.

Method 1/512 (232) 1/256 (463) 1/128 (925) 1/64 (1849) 1/32 (3697)
SupBaseline 22.9 28.0 33.6 37.8 42.2
PseudoSeg 29.8 37.1 39.1 41.8 43.6
PC2Seg 29.9 37.5 40.1 43.7 46.1
UniMatch (Ours) 31.9 38.9 44.4 48.2 49.8

More Scenarios

We also apply our UniMatch to the scenarios of semi-supervised remote sensing change detection and medical image segmentation, achieving tremendous improvements over previous methods; see the more-scenarios directory.

Getting Started

Installation

cd UniMatch
conda create -n unimatch python=3.10.4
conda activate unimatch
pip install -r requirements.txt
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 -f https://download.pytorch.org/whl/torch_stable.html

Pretrained Backbone

ResNet-50 | ResNet-101 | Xception-65

├── ./pretrained
    ├── resnet50.pth
    ├── resnet101.pth
    └── xception.pth

Dataset

Please modify your dataset path in the configuration files.

The ground-truth masks have already been pre-processed by us, so you can use them directly.

├── [Your Pascal Path]
    ├── JPEGImages
    └── SegmentationClass
    
├── [Your Cityscapes Path]
    ├── leftImg8bit
    └── gtFine
    
├── [Your COCO Path]
    ├── train2017
    ├── val2017
    └── masks

Usage

UniMatch

# use torch.distributed.launch
sh scripts/train.sh <num_gpu> <port>
# to fully reproduce our results, set <num_gpu> to 4 on all three datasets
# otherwise, you need to adjust the learning rate accordingly

# or use slurm
# sh scripts/slurm_train.sh <num_gpu> <port> <partition>
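
The repository does not spell out how to adjust the learning rate for a different GPU count; linear scaling with the number of GPUs is a common convention, sketched here as a hypothetical helper rather than the repository's rule.

# Hypothetical helper: linear learning-rate scaling, a common convention
# when changing GPU count; verify against the repo's configs before relying on it.
def scaled_lr(base_lr: float, num_gpu: int, reference_gpus: int = 4) -> float:
    return base_lr * num_gpu / reference_gpus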

To train on other datasets or splits, please modify dataset and split in train.sh.

FixMatch

Modify the method from 'unimatch' to 'fixmatch' in train.sh.

Supervised Baseline

Modify the method from 'unimatch' to 'supervised' in train.sh, and double the batch_size in the configuration file if you use the same number of GPUs as in the semi-supervised setting (no need to change the lr).

Citation

If you find this project useful, please consider citing:

@inproceedings{unimatch,
  title={Revisiting Weak-to-Strong Consistency in Semi-Supervised Semantic Segmentation},
  author={Yang, Lihe and Qi, Lei and Feng, Litong and Zhang, Wayne and Shi, Yinghuan},
  booktitle={CVPR},
  year={2023}
}

We have some other works on semi-supervised semantic segmentation.
