
πŸ¦ΈπŸ»β€β™€οΈ DIGA - Dynamically Instance-Guided Adaptation: A Backward-free Approach for Test-Time Domain Adaptive Semantic Segmentation

This repository is still a work in progress.

Environment Setup

Before starting, please make sure the following environment variables are set: UDATADIR, UPRJDIR, and UOUTDIR. The operations below only modify these three folders on your device.

example:

# e.g.
export UDATADIR=~/data # dir for datasets
export UPRJDIR=~/code # dir for code
export UOUTDIR=~/output # dir for output such as logs
export WANDB_API_KEY="xxx360492802218be41f76xxxx" # your wandb key
export NUM_WORKERS=0 # number of workers used
mkdir -p $UDATADIR $UPRJDIR $UOUTDIR # create dirs if they do not exist

P.S. Wandb is a great visualization tool, similar to TensorBoard but with more functionality (link).
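Before launching anything, a quick sanity check that all three variables are actually set can save confusing failures later. A minimal sketch (uses bash-specific indirect expansion):

# warn about any of the three variables that is unset or empty
for v in UDATADIR UPRJDIR UOUTDIR; do
  if [ -z "${!v}" ]; then echo "WARNING: $v is not set"; fi
done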

Code Setup

  1. Copy or clone this project (see the example at the end of this section).
  2. Organize the files in the following structure.

Code for this project should be put in $UPRJDIR with the following structure:

$UPRJDIR
└── DIGA
    β”œβ”€β”€ ...
    β”œβ”€β”€ Readme.md
    └── ...

Then you can run cd $UPRJDIR/DIGA to enter the project directory.
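For step 1, a typical way to get the code into place is to clone it directly into $UPRJDIR. A minimal sketch (the repository URL is an assumption based on the contributor's handle, not confirmed by this README):

cd $UPRJDIR
git clone https://github.com/waybaba/DIGA.git DIGA # URL is an assumption
cd DIGA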

Dataset Setup

  1. Initialize the file structure by running cp -r $UPRJDIR/DIGA/src/utils/advent_list_lib/* $UDATADIR/
  2. Download the datasets.

Please read the following before getting started.

The environment variable $UDATADIR set in the previous step is used to locate the dataset files at runtime. Each dataset lives in its own folder, so the structure of $UDATADIR should look like this:

$UDATADIR
    β”œβ”€β”€ GTA5 # (GTA5 in paper)
    β”œβ”€β”€ synthia # (Synthia in paper)
    β”œβ”€β”€ Cityscapes # (CityScapes in paper)
    β”œβ”€β”€ BDD # (BDD100K in paper)
    β”œβ”€β”€ IDD # (IDD in paper)
    β”œβ”€β”€ NTHU # (CrossCity in paper)
    β”œβ”€β”€ Mapillary # (Mapillary in paper)

In most dataset folders there is a folder called β€œadvent_list”, which contains the file lists for the training, validation, and test sets (following the implementation in ADVENT).

We provide these list files in this project under $UPRJDIR/DIGA/src/utils/advent_list_lib.

Running the following sets up the folders automatically:

# copy DIGA/src/utils/advent_list_lib/* to $UDATADIR/
cp -r $UPRJDIR/DIGA/src/utils/advent_list_lib/* $UDATADIR/

Now, the structure should look like this:

$UDATADIR
    β”œβ”€β”€ GTA5 # (GTA5 in paper)
    β”‚   β”œβ”€β”€ advent_list
    β”‚   β”œβ”€β”€ ...
    β”œβ”€β”€ synthia # (Synthia in paper)
    β”‚   β”œβ”€β”€ ...
    β”‚   β”œβ”€β”€ ...
    β”œβ”€β”€ Cityscapes # (CityScapes in paper)
    β”‚   β”œβ”€β”€ advent_list
    β”‚   β”œβ”€β”€ ...
    β”œβ”€β”€ BDD # (BDD100K in paper)
    β”‚   β”œβ”€β”€ advent_list
    β”‚   β”œβ”€β”€ ...
    β”œβ”€β”€ IDD # (IDD in paper)
    β”‚   β”œβ”€β”€ ...
    β”‚   β”œβ”€β”€ ...
    β”œβ”€β”€ NTHU # (CrossCity in paper)
    β”‚   β”œβ”€β”€ advent_list
    β”‚   β”œβ”€β”€ ...
    β”œβ”€β”€ Mapillary # (Mapillary in paper)
    β”‚   β”œβ”€β”€ advent_list
    β”‚   β”œβ”€β”€ ...

The next step is downloading each dataset. Detailed download instructions would take too much space here, and there are many tutorials online. Instead, we illustrate the final dataset structure below; download the datasets and organize them accordingly. Some instructions can be found here (link1, link2).

$UDATADIR
β”œβ”€β”€ BDD
β”‚   β”œβ”€β”€ advent_list
β”‚   β”œβ”€β”€ images
β”‚   β”œβ”€β”€ labels
β”œβ”€β”€ Cityscapes
β”‚   β”œβ”€β”€ README
β”‚   β”œβ”€β”€ advent_list
β”‚   β”œβ”€β”€ gtFine
β”‚   β”œβ”€β”€ leftImg8bit
β”œβ”€β”€ GTA5
β”‚   β”œβ”€β”€ advent_list
β”‚   β”œβ”€β”€ images
β”‚   β”œβ”€β”€ labels
β”œβ”€β”€ IDD
β”‚   β”œβ”€β”€ gtFine
β”‚   β”œβ”€β”€ iddscripts
β”‚   β”œβ”€β”€ leftImg8bit
β”œβ”€β”€ Mapillary
β”‚   β”œβ”€β”€ LICENSE
β”‚   β”œβ”€β”€ README
β”‚   β”œβ”€β”€ advent_list
β”‚   β”œβ”€β”€ config.json
β”‚   β”œβ”€β”€ small
β”‚   β”œβ”€β”€ testing
β”‚   β”œβ”€β”€ training
β”‚   └── validation
β”œβ”€β”€ NTHU
β”‚   β”œβ”€β”€ Rio
β”‚   β”œβ”€β”€ Rome
β”‚   β”œβ”€β”€ Taipei
β”‚   β”œβ”€β”€ Tokyo
β”‚   └── advent_list
β”œβ”€β”€ synthia
β”‚   β”œβ”€β”€ Depth
β”‚   β”œβ”€β”€ GT
β”‚   β”œβ”€β”€ RGB
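After downloading, a quick existence check against the illustration above can catch a misnamed folder before a run fails. A minimal sketch (drop any dataset you do not plan to use):

# verify each expected dataset folder exists under $UDATADIR
for d in BDD Cityscapes GTA5 IDD Mapillary NTHU synthia; do
  [ -d "$UDATADIR/$d" ] && echo "ok: $d" || echo "missing: $d"
done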

Pre-trained Model Setup

  1. Download pre-trained models
  2. Organize the files in the following structure.

Source models must be placed in a specific directory before running. The pre-trained models are available here (link).

P.S. We use the same models as previous work; see this repo (link).

$UDATADIR
β”œβ”€β”€ models # create a new folder called "models" under $UDATADIR
β”‚   └── DA_Seg_models # downloaded from the link above
β”‚       β”œβ”€β”€ GTA5
β”‚       β”œβ”€β”€ GTA5_baseline.pth
β”‚       β”œβ”€β”€ SYNTHIA
β”‚       β”œβ”€β”€ SYNTHIA_source.pth
β”‚       └── ...
β”œβ”€β”€ BDD
β”œβ”€β”€ ...

The following commands do this automatically:

mkdir -p $UDATADIR/models
cd $UDATADIR/models
wget -O download.zip "https://www.dropbox.com/s/gpzm15ipyt01mis/DA_Seg_models.zip?dl=0"
unzip download.zip -d ./
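To confirm the unzip produced the layout shown above, listing the checkpoint files is a quick check. A minimal sketch (exact file names may differ depending on the archive contents):

ls $UDATADIR/models/DA_Seg_models # expect GTA5, GTA5_baseline.pth, SYNTHIA, ...
find $UDATADIR/models -name "*.pth" # list every downloaded checkpoint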

P.S. Model paths can be added, deleted, or edited in configs/model/net/gta5_source.yaml.

Python Environment

We provide three ways to set up the environment.

  β€’ Using a Dev Container in VS Code (most recommended)
  β€’ Using Docker (recommended)
  β€’ Using Pip/Conda

Using a Dev Container in VS Code (most recommended)

If you are using VS Code, this is the most recommended way. You can set up the entire environment for this project in a single step.

  1. Open this project in VS Code
  2. Install the β€œDev Containers” extension
  3. Press Cmd/Ctrl+Shift+P β†’ Dev Containers: Rebuild and Reopen in Container

P.S. The config folder .devcontainer is included in this project; you can edit it if you want to add or remove libraries.

P.S. Details and instructions about Dev Containers in VS Code can be found here (link).

Using Docker

The Dockerfile is at .devcontainer/Dockerfile.
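If you prefer plain Docker over the VS Code integration, a build-and-run flow along the following lines should work. A sketch only: the image tag diga and the container mount points are assumptions, not defined by this repo.

# build the image from the provided Dockerfile (run from the repo root)
docker build -t diga -f .devcontainer/Dockerfile .

# run with the three folders mounted and the env variables forwarded
docker run --gpus all -it \
    -v $UDATADIR:/data -v $UPRJDIR:/code -v $UOUTDIR:/output \
    -e UDATADIR=/data -e UPRJDIR=/code -e UOUTDIR=/output \
    -e WANDB_API_KEY -e NUM_WORKERS \
    diga bash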

Using Pip/Conda

  1. Install PyTorch from the official website (link); see the example command after the package list below.
pip install \
    tensorboard \
    pandas \
    opencv-python \
    pytorch-lightning \
    hydra-core \
    hydra-colorlog \
    hydra-optuna-sweeper \
    torchmetrics \
    pyrootutils \
    pre-commit \
    pytest \
    sh \
    omegaconf \
    rich \
    fiftyone \
    jupyter \
    wandb \
    grad-cam \
    tensorboardx \
    ipdb \
    hydra-joblib-launcher
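For step 1, the PyTorch website generates the exact command for your CUDA setup. As a sketch, the CUDA 11.3 build matching the Docker image mentioned in the note below looks like this (verify against the official site for your driver):

pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 \
    --extra-index-url https://download.pytorch.org/whl/cu113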

P.S. We use pytorch/pytorch:1.12.1-cuda11.3-cudnn8-devel, though other versions should also work.

Running the code

Getting Started

The following command tests the performance of a model pre-trained on GTA5, using the validation set of Cityscapes as the target. Hyper-parameters are set to their defaults.

python src/train.py experiment=ttda

Output Example:

wandb: Run summary:
...
wandb: test/acc/dataloaderr13_idx_0 55.01257 # mIoU over 13 classes
wandb: test/acc/dataloaderr16_idx_0 49.24151 # mIoU over 16 classes
wandb:   test/acc/dataloaderr_idx_0 45.81422 # mIoU over 19 classes
...

Customization

You can use the following to start an experiment with a custom source model, target dataset, and hyper-parameters.

# e.g.
python src/train.py \
    experiment=ttda \
    model/net=gta5_source \
    datamodule/test_list=cityscapes \
    model.cfg.bn_lambda=0.8 \
    model.cfg.proto_lambda=0.8 \
    model.cfg.fusion_lambda=0.8 \
    model.cfg.confidence_threshold=0.9 \
    model.cfg.proto_rho=0.1

The available choices are listed as follows:

  • Source model:

    model/net=gta5_source,synthia_source,gta5_synthia_source

  • Target dataset:

    datamodule/test_list=idd,crosscity,cityscapes,bdd,mapillary

  β€’ Hyper-parameters:

    bn_lambda: 0-1 (default 0.8)
    proto_lambda: 0-1 (default 0.8)
    fusion_lambda: 0-1 (default 0.8)
    confidence_threshold: 0-1 (default 0.9)
    proto_rho: 0-1 (default 0.1)

P.S. File-based customization is also supported by modifying configs/model/diga.yaml. Note that the command line takes priority when options conflict.
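Because the configs are managed by Hydra, you can also sweep hyper-parameters from the command line with multirun. A sketch using standard Hydra -m syntax (the value grid below is illustrative, not taken from the paper):

# sweep bn_lambda and proto_lambda over a 3x3 grid (9 runs)
python src/train.py -m \
    experiment=ttda \
    model/net=gta5_source \
    datamodule/test_list=cityscapes \
    model.cfg.bn_lambda=0.6,0.8,1.0 \
    model.cfg.proto_lambda=0.6,0.8,1.0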


DIGA's Issues

Versions of pytorch-lightning and hydra

Hello,

Thanks for your inspiring work!
When running this code, we found an incompatibility between the hydra and pytorch-lightning packages. Could you share which versions you used? A detailed list of package versions would be great!

Visualization

Thanks for your excellent work!

How can we get visualizations of the segmentation results?

How to get the per-class mIoU?

Thanks for your excellent work!

I'd like to know how we can get the per-class mIoU instead of only the overall mIoU. Thanks again.

The download method for BDD100K dataset labels.

Hello
  1. I couldn't find the download method for the labels of the BDD100K dataset. The PNG files at the paths used in the experiment were not included in the segmentation files downloaded from the official website.
  2. I couldn't find any code for aligning classes other than Cityscapes in the source code you provided. Could you share more code with me?
Thank you

A few questions about Tables 6 and 7 in the supplementary material

A simple but very effective method! Just a couple of small questions:

  1. Can DAM or SAM be used as a further incremental improvement for DA methods, i.e., applying DAM or SAM to weights pre-trained by ProDA?
  2. Can you provide the code and pre-trained weights for Table 6? I am more familiar with the structure under the MMSeg framework.

About IDD dataset

Thank you for sharing your excellent work.

I have a question about the IDD dataset.

You described that you used 100 images as the IDD validation set.

However, the number of samples in the original IDD validation set is 981.

So, could you let me know how you split the 100 images from the IDD validation set?

I'm looking forward to hearing from you.

Thank you.
