onda's Introduction

🌊 OnDA

Online Domain Adaptation for Semantic Segmentation in Ever-Changing Conditions

Theodoros Panagiotakopoulos1* Pier Luigi Dovesi2 Linus Härenstam-Nielsen3,4* Matteo Poggi5

1 King 2 Univrses 3 Kudan 4 Technical University of Munich 5 University of Bologna

* Part of the work carried out while at Univrses.

📜 Source code for Online Unsupervised Domain Adaptation for Semantic Segmentation in Ever-Changing Conditions, ECCV 2022.

📽️ Check out our project page and video.

OnDA (literally "wave" in Italian) allows for adapting across a flow of domains, while avoiding catastrophic forgetting.

This code performs training and evaluation of UDA approaches in continuous scenarios. The library was implemented in PyTorch 1.7.1; newer versions should work as well.

Method Cover

All assets to run a simple inference can be found here.

Moreover, runs are recorded and tracked through wandb; a wandb account is necessary to track the adaptation.

Citation

If you find this repo useful for your work, please cite our paper:

@inproceedings{Panagiotakopoulos_ECCV_2022,
  title     = {Online Domain Adaptation for Semantic Segmentation in Ever-Changing Conditions},
  author    = {Panagiotakopoulos, Theodoros and
               Dovesi, Pier Luigi and
               H{\"a}renstam-Nielsen, Linus and
               Poggi, Matteo},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year      = {2022}
}

Requirements

We advise using conda or miniconda to run the package. Run the following command to install the necessary modules:

conda env create -f environment.yml

After creating the environment, load it using conda activate ouda.
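As a quick sanity check after activating the environment (a minimal sketch; the exact CUDA setup depends on your machine), you can verify the installation from Python:

```python
# Verify the environment: the library targets PyTorch 1.7.1,
# but newer versions should work as well.
import torch

print(torch.__version__)          # e.g. 1.7.1
print(torch.cuda.is_available())  # should be True for GPU training
```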

You will then need to log in to wandb to record the experiments; simply type wandb login.

Creating the rainy dataset

First, download the Cityscapes dataset from here. To add rain to the Cityscapes dataset, follow the steps shown here. The authors provide a rain mask for each image; with their dev-kit one can create the rainy images.

If you have trouble creating the rainy or foggy dataset, please contact us at [email protected] and we can provide you with the dataset.
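For intuition only, the general idea is to composite the provided rain mask over the corresponding Cityscapes frame. The official dev-kit performs the actual rendering; the sketch below is a naive stand-in, and the mask path and blend weight are assumptions:

```python
# Illustrative only: the official dev-kit performs the real rain rendering.
# The rain-mask path and the alpha value below are assumptions.
import os
from PIL import Image

city = Image.open(
    "leftImg8bit/train/aachen/aachen_000000_000019_leftImg8bit.png"
).convert("RGB")
rain = Image.open("rain_masks/aachen_000000_000019_25mm.png").convert("RGB")  # hypothetical path

os.makedirs("rainy_25mm", exist_ok=True)
# Naive 50/50 blend, not the dev-kit's photometric model.
Image.blend(city, rain.resize(city.size), alpha=0.5).save(
    "rainy_25mm/aachen_000000_000019_leftImg8bit.png"
)
```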

Download the pretrained source model and prototypes

Download the files precomputed_prototypes.pickle and pretrained_resnet50_miou645.pth and save them into a folder named pretrained.
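To confirm the downloads are intact, one can try loading both files (a minimal sketch; whether the .pth file is a plain state_dict or a wrapped checkpoint is an assumption):

```python
# Sanity-check the downloaded assets. The checkpoint layout (plain state_dict
# vs. wrapped dict) is an assumption -- inspect the keys if loading fails.
import pickle
import torch

with open("pretrained/precomputed_prototypes.pickle", "rb") as f:
    prototypes = pickle.load(f)

checkpoint = torch.load("pretrained/pretrained_resnet50_miou645.pth", map_location="cpu")
print(type(prototypes), type(checkpoint))
```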

Edit configuration

Open the file configs/hybrid_switch.yml and edit the PATH variable with the location of the dataset. The path should point to the leftImg8bit and gtFine folders. Make sure that the paths for the pretrained models at METHOD.ADAPTATION.PROTO_ONLINE_HYBRIDSWITCH.LOAD_PROTO and MODEL.LOAD are correct; they should point to the pretrained source model and prototypes downloaded in the previous steps.
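As a quick check before launching a run, one can load the configuration and verify the paths (a minimal sketch; the key nesting is inferred from the dotted names above and may need adjusting to the actual file layout):

```python
# Minimal path check; the nesting follows the dotted key names above.
# The location of the dataset PATH key is not shown here.
import os
import yaml

with open("configs/hybrid_switch.yml") as f:
    cfg = yaml.safe_load(f)

proto_path = cfg["METHOD"]["ADAPTATION"]["PROTO_ONLINE_HYBRIDSWITCH"]["LOAD_PROTO"]
model_path = cfg["MODEL"]["LOAD"]
for p in (proto_path, model_path):
    assert os.path.exists(p), f"missing pretrained file: {p}"
```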

Run

We recommend using a powerful graphics card with at least 16 GB of VRAM; a full run takes a bit over one day on an RTX 3090. If necessary, one can play around with the batch size and resolution in the configuration file to test the approach, but the results will not be replicated.

To run, first initialise wandb with wandb login, and then simply run python train_ouda.py --cfg=configs/hybrid_switch.yml.

The run performs evaluation across domains from the start and after each pass through the data. We demonstrated how to run the hybrid switch, but by selecting or editing other configuration files one can use different switches or approaches. By default, the approach will create folders to save the predictions; a sketch for visualising them follows below.
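A hypothetical helper to colorize a saved prediction, assuming predictions are stored as 8-bit PNGs of Cityscapes train IDs (0-18); the storage format actually used by the training script may differ:

```python
# Colorize a prediction map using the standard Cityscapes 19-class
# train-ID palette. The prediction path and format are assumptions.
import numpy as np
from PIL import Image

PALETTE = np.array([
    [128, 64, 128], [244, 35, 232], [70, 70, 70], [102, 102, 156],
    [190, 153, 153], [153, 153, 153], [250, 170, 30], [220, 220, 0],
    [107, 142, 35], [152, 251, 152], [70, 130, 180], [220, 20, 60],
    [255, 0, 0], [0, 0, 142], [0, 0, 70], [0, 60, 100],
    [0, 80, 100], [0, 0, 230], [119, 11, 32]], dtype=np.uint8)

ids = np.array(Image.open("predictions/example.png"))  # hypothetical path
color = np.zeros((*ids.shape, 3), dtype=np.uint8)
valid = ids < len(PALETTE)          # leave unknown/ignore IDs black
color[valid] = PALETTE[ids[valid]]
Image.fromarray(color).save("example_color.png")
```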


Code library

The approaches can be found under framework/domain_adaptation/methods:

- Prototype handling: framework/domain_adaptation/methods/prototype_handler.py
- Switching approach: framework/domain_adaptation/methods/prototypes.py
- Confidence Switch (and Soft): framework/domain_adaptation/methods/prototypes_hswitch.py
- Confidence Derivative Switch: framework/domain_adaptation/methods/prototypes_vswitch.py
- Hybrid Switch: framework/domain_adaptation/methods/prototypes_hybrid_switch.py
- Advent implementation: framework/domain_adaptation/methods/advent_da.py

Contact

Don't hesitate to contact us if there are questions about the code or about the different options in the cfg file.

New work

More adaptation? Check out our newer work, HAMLET: To Adapt or Not to Adapt? Real-Time Adaptation for Semantic Segmentation.

Thank you!!


onda's Issues

Question about Eq. (2)

Hi @theo2021 ,

For Eq. 2, I want to know why you use the global variance rather than the variance of each prototype, i.e., the .sum(axis=0) in the function below.

```python
def global_var(self):
    # Counter-weighted average of the per-class squared means,
    # pooled over the class axis.
    global_squared_mean = (
        self.squared_mean.T * self.counter / self.counter.sum()
    ).T.sum(axis=0)
    # Counter-weighted average of the per-class prototypes (means).
    global_mean = (self.prototypes.T * self.counter / self.counter.sum()).T.sum(
        axis=0
    )
    # Standard deviation via E[x^2] - E[x]^2.
    return torch.sqrt(global_squared_mean - global_mean ** 2)
```
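For reference, the per-prototype alternative the question contrasts with would skip the counter-weighted pooling over classes. A minimal sketch, assuming squared_mean and prototypes hold per-class running means of the squared and raw features respectively:

```python
# Sketch of the per-class alternative: one variance per prototype,
# without pooling over the class axis as global_var does.
def per_prototype_var(self):
    return torch.sqrt(self.squared_mean - self.prototypes ** 2)
```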

Question about Figure 4 in the paper

Hi, thanks for your great work! I have a question about Figure 4 in the paper.

If I understand correctly, the dashed lines show the performance of the models from the bold lines on different domains. In that case, in Figure 4(b) there are two yellow bold segments; does that mean there should be two models and two yellow dashed lines at 25mm for evaluation on all domains?

What's more, given that the mIoU of the yellow dashed line is mostly above the red, purple, and brown lines, does it mean that the model adapted to 25mm performs better than your online-adapted models? If so, why do we still need to perform online adaptation for the other conditions?

I think there must be something wrong with my understanding of your figure. Could you explain it in more detail so that I don't misunderstand? Thanks in advance!

Cannot reproduce the results of Table 1(a) for OnDA-Hybrid Switch for 200mm rain condition

We followed the instructions given in the README and tried to reproduce the results of Table 1(a) in the online hybrid-switch setting. As shown in the image, we can reproduce the results for all rain conditions with at most a ~1% difference, but for the 200mm rain condition the gap between the reported and reproduced results is around 10%. Any help regarding this issue would be really appreciated.
[Image: OnDA reproduced results]
