
SemiCD's Introduction

Revisiting Consistency Regularization for Semi-supervised Change Detection in Remote Sensing Images

Wele Gedara Chaminda Bandara, and Vishal M. Patel

📖 View paper here.

🔖 View project page here.

This repository contains the official implementation of our paper: Revisiting Consistency Regularization for Semi-supervised Change Detection in Remote Sensing Images.

💬 Requirements

This repo was tested with Ubuntu 18.04.3 LTS, Python 3.8, PyTorch 1.1.0, and CUDA 10.0, but it should run with any recent PyTorch version (>= 1.1.0).

The required packages are pytorch and torchvision, together with PIL and OpenCV for data preprocessing and tqdm for showing the training progress.

conda create -n SemiCD python=3.8
conda activate SemiCD
pip3 install -r requirements.txt
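After installation, a minimal sanity-check sketch (assuming the packages from requirements.txt installed cleanly) can confirm the environment is usable:

    # Minimal environment sanity check (illustrative only).
    import torch
    import torchvision
    import cv2          # provided by opencv-python
    import PIL
    import tqdm

    print("PyTorch:", torch.__version__)
    print("torchvision:", torchvision.__version__)
    print("OpenCV:", cv2.__version__)
    print("Pillow:", PIL.__version__)
    print("tqdm:", tqdm.__version__)
    print("CUDA available:", torch.cuda.is_available())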

💬 Datasets

We use two publicly available, widely-used CD datasets for our experiments, namely LEVIR-CD and WHU-CD. Note that LEVIR-CD and WHU-CD are building CD datasets.

As described in the paper, and following the previous supervised CD works ChangeFormer and BIT-CD, we create non-overlapping patches of size 256x256 for training. The dataset preparation code is written in MATLAB and can be found in the dataset_preperation folder. These scripts also generate the supervised and unsupervised training splits that we used to train the model under different percentages of labeled data.
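For reference, here is a minimal Python sketch of the same non-overlapping 256x256 patch extraction. The repo's own preparation code is the MATLAB scripts in dataset_preperation; the snippet below is only an illustrative equivalent, and the output naming scheme is an assumption.

    import os
    from PIL import Image

    def extract_patches(image_path, out_dir, patch_size=256):
        """Crop one image into non-overlapping patch_size x patch_size tiles."""
        os.makedirs(out_dir, exist_ok=True)
        img = Image.open(image_path)
        w, h = img.size
        stem = os.path.splitext(os.path.basename(image_path))[0]
        for top in range(0, h - patch_size + 1, patch_size):
            for left in range(0, w - patch_size + 1, patch_size):
                patch = img.crop((left, top, left + patch_size, top + patch_size))
                patch.save(os.path.join(out_dir, f"{stem}_{top}_{left}.png"))

    # Apply the same cropping to the pre-change (A), post-change (B), and label folders
    # so that corresponding patches keep identical file names.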

Alternatively, you can directly download the processed LEVIR-CD and WHU-CD datasets through the following links. Save these datasets anywhere you want and set data_dir to the path of each dataset in the corresponding config file.

The processed LEVIR-CD dataset, and supervised-unsupervised splits can be downloaded here.

The processed WHU-CD dataset, and supervised-unsupervised splits can be downloaded here.

💬 Training

To train a model, first download the processed datasets above and save them in any directory, then set data_dir to the dataset path in the config file (configs/config_LEVIR.json or configs/config_WHU.json) and set the remaining parameters, such as experim_name, sup_percent, unsup_percent, supervised, semi, save_dir, log_dir, etc.; more details are given below.
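For example, here is a hedged sketch of editing the config programmatically. The key names follow the description above, but the exact nesting inside the JSON file is an assumption and may need adapting.

    import json

    with open("configs/config_LEVIR.json") as f:
        cfg = json.load(f)

    cfg["experim_name"] = "SemiCD_(semi)_5"                      # run name used for logs/checkpoints
    cfg["sup_percent"] = 5                                       # percentage of labeled training data
    cfg["model"]["supervised"] = False                           # semi-supervised setting
    cfg["model"]["semi"] = True
    cfg["train_supervised"]["data_dir"] = "/path/to/LEVIR-CD"    # hypothetical dataset path
    cfg["train_unsupervised"]["data_dir"] = "/path/to/LEVIR-CD"

    with open("configs/config_LEVIR.json", "w") as f:
        json.dump(cfg, f, indent=4)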

👉 Training on LEVIR-CD dataset

Then simply run:

python train.py --config configs/config_LEVIR.json

The following table summarizes the required changes in the config file to train a model in the supervised or semi-supervised setting with different percentages of labeled data.

| Setting | Required changes in config_LEVIR.json |
| --- | --- |
| Supervised - 5% labeled data | Experiment name: SemiCD_(sup)_5, sup_percent=5, model.supervised=True, model.semi=False |
| Supervised - 10% labeled data | Experiment name: SemiCD_(sup)_10, sup_percent=10, model.supervised=True, model.semi=False |
| Supervised - 20% labeled data | Experiment name: SemiCD_(sup)_20, sup_percent=20, model.supervised=True, model.semi=False |
| Supervised - 40% labeled data | Experiment name: SemiCD_(sup)_40, sup_percent=40, model.supervised=True, model.semi=False |
| Supervised - 100% labeled data (Oracle) | Experiment name: SemiCD_(sup)_100, sup_percent=100, model.supervised=True, model.semi=False |
| Semi-supervised - 5% labeled data | Experiment name: SemiCD_(semi)_5, sup_percent=5, model.supervised=False, model.semi=True |
| Semi-supervised - 10% labeled data | Experiment name: SemiCD_(semi)_10, sup_percent=10, model.supervised=False, model.semi=True |
| Semi-supervised - 20% labeled data | Experiment name: SemiCD_(semi)_20, sup_percent=20, model.supervised=False, model.semi=True |
| Semi-supervised - 40% labeled data | Experiment name: SemiCD_(semi)_40, sup_percent=40, model.supervised=False, model.semi=True |
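To reproduce all the rows of the table in sequence, one possible sketch builds on the config-editing snippet above and launches train.py once per setting; the JSON nesting is again an assumption.

    import json
    import subprocess

    CONFIG = "configs/config_LEVIR.json"

    def run_experiment(name, sup_percent, supervised, semi):
        # Rewrite the config for this setting, then launch training.
        with open(CONFIG) as f:
            cfg = json.load(f)
        cfg["experim_name"] = name
        cfg["sup_percent"] = sup_percent
        cfg["model"]["supervised"] = supervised
        cfg["model"]["semi"] = semi
        with open(CONFIG, "w") as f:
            json.dump(cfg, f, indent=4)
        subprocess.run(["python", "train.py", "--config", CONFIG], check=True)

    for p in (5, 10, 20, 40):
        run_experiment(f"SemiCD_(sup)_{p}", p, True, False)    # supervised baseline
        run_experiment(f"SemiCD_(semi)_{p}", p, False, True)   # semi-supervised model
    run_experiment("SemiCD_(sup)_100", 100, True, False)       # oracle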

👉 Training on WHU-CD dataset

Please apply the same changes outlined above for the WHU-CD dataset as well (in configs/config_WHU.json). Then simply run:

python train.py --config configs/config_WHU.json

👉 Training with cross-domain data (i.e., LEVIR as supervised and WHU as unsupervised datasets)

In this case we use LEVIR-CD as the supervised dataset and WHU-CD as the unsupervised dataset. Therefore, in config_LEVIR-sup_WHU-unsup.json, set the train_supervised data_dir to the path of the LEVIR-CD dataset and the train_unsupervised data_dir to the path of the WHU-CD dataset. Then set sup_percent in the config file as desired and simply run the following command (a config-editing sketch is given after it):

python train.py --config configs/config_LEVIR-sup_WHU-unsup.json
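A hedged sketch of the corresponding config edit for this cross-domain setting; the key nesting and the dataset paths below are assumptions.

    import json

    with open("configs/config_LEVIR-sup_WHU-unsup.json") as f:
        cfg = json.load(f)

    cfg["train_supervised"]["data_dir"] = "/path/to/LEVIR-CD"    # labeled (supervised) data
    cfg["train_unsupervised"]["data_dir"] = "/path/to/WHU-CD"    # unlabeled (unsupervised) data
    cfg["sup_percent"] = 10                                      # example percentage of labeled data

    with open("configs/config_LEVIR-sup_WHU-unsup.json", "w") as f:
        json.dump(cfg, f, indent=4)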

👉 Monitoring the training log via TensorBoard

The log files and the .pth checkpoints will be saved in saved/EXP_NAME. To monitor the training using TensorBoard, please run:

tensorboard --logdir saved

To resume training using a saved .pth model:

python train.py --config configs/config_LEVIR.json --resume saved/SemiCD/checkpoint.pth

Results: the validation results will be saved in saved as an HTML file named after the experim_name specified in configs/config_LEVIR.json.

💬 Inference

For inference, we need a pretrained model, the pre-change and post-change images in which we would like to detect changes, and the config file used for training (to load the correct model and other parameters):

python inference.py --config config_LEVIR.json --model best_model.pth --images images_folder

Here are the flags available for inference:

--images       Folder containing the jpg images to segment.
--model        Path to the trained pth model.
--config       The config file used for training the model.
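To run inference over several image folders in one go, a minimal wrapper sketch using the flags above could look like this; the folder paths are hypothetical.

    import subprocess

    folders = ["data/LEVIR-CD/test_scene_1", "data/LEVIR-CD/test_scene_2"]  # hypothetical paths
    for images in folders:
        # Invoke the repo's inference script once per folder.
        subprocess.run(
            ["python", "inference.py",
             "--config", "config_LEVIR.json",
             "--model", "best_model.pth",
             "--images", images],
            check=True,
        )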

💬 Pre-trained models

Pre-trained models can be downloaded from the following links.

Pre-trained models on LEVIR-CD can be downloaded from here.

Pre-trained models on WHU-CD can be downloaded from here.

Pre-trained models for cross-dataset experiments can be downloaded from here.
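Before using a downloaded checkpoint, it can help to inspect its contents; here is a hedged sketch (the stored keys, e.g. 'state_dict', are an assumption and may differ).

    import torch

    # Load the checkpoint onto the CPU and list what it contains.
    ckpt = torch.load("best_model.pth", map_location="cpu")
    if isinstance(ckpt, dict):
        print("Checkpoint keys:", list(ckpt.keys()))
        state = ckpt.get("state_dict", ckpt)
        print("Number of parameter tensors:", len(state))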

💬 Citation

If you find this repo useful for your research, please consider citing the paper as follows:

@misc{bandara2022revisiting,
      title={Revisiting Consistency Regularization for Semi-supervised Change Detection in Remote Sensing Images}, 
      author={Wele Gedara Chaminda Bandara and Vishal M. Patel},
      year={2022},
      eprint={2204.08454},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

💬 Acknowledgements


SemiCD's Issues

About the comparison experiment

I wonder how you obtained the results for SemiCDNet, since I cannot find their code or the same experiment in their paper. Did you reproduce SemiCDNet yourselves?

Problems with custom dataset

Hello, I'm working with your github repo and a custom dataset in semi-supervised mode and I'm having issues making it work.
The program seems to have no issue whatsoever with the LEVIR dataset, but when using my custom dataset, for some reason the IoU(change-ul) remains 0.00 (the loss and labeled-change metrics seem fine). I've tried playing around with the various decoder parameters to see if I could spot any variation in the metric, with no encouraging results. Would you have any idea as to why that could be the case? Thank you.

Downloading link of processed datasets

Hi,

Could you please provide a Google Drive link for downloading the two processed datasets? I cannot download them from Dropbox; the download fails each time just as it is about to finish. Thanks very much.

About the best_model.pth

Excuse me, is the released model.pth file re-trained? It seems to be slightly more accurate than what is reported in the paper.

Inference results are all zeros

Hi. I loaded the pretrained weights and configs for Levir_CD and ran the inference codes.

I got the following output info:

Test Results | PixelAcc: 0.0163, IoU(no-change): 0.0163, IoU(change): 0.0000 |:   9%| | 12/128 [00:0/root/autodl-nas/levir_CD/test/test.txt
Test Results | PixelAcc: 0.0160, IoU(no-change): 0.0160, IoU(change): 0.0000 |:  10%| | 13/128 [00:0/root/autodl-nas/levir_CD/test/test.txt
Test Results | PixelAcc: 0.0149, IoU(no-change): 0.0149, IoU(change): 0.0000 |:  11%| | 14/128 [00:0/root/autodl-nas/levir_CD/test/test.txt
Test Results | PixelAcc: 0.0139, IoU(no-change): 0.0139, IoU(change): 0.0000 |:  12%| | 15/128 [00:0/root/autodl-nas/levir_CD/test/test.txt

I checked the results and found them to be all zeros (black maps). I also checked the output results using:

        #PREDICT
        with torch.no_grad():
            output = multi_scale_predict(model, image_A, image_B, scales, num_classes)
        print(output.shape)
        prediction = np.asarray(np.argmax(output, axis=0), dtype=np.uint8)
        print(prediction.shape)
        print(np.min(prediction))
        print(np.max(prediction))

and confirmed that the outputs are all zero matrices, which is quite confusing.

The train.py file could not be found

Very nice work. Thanks so much for sharing your hard work.
When I was training, I got an error that the train.py file could not be found. The specific error screenshot is as follows:

Thanks!

Heatmap visualization

I am very sorry to bother you. I am very interested in your research direction. May I ask where I can find your visualization code?

Question about the supervised-unsupervised splits

Thank you for your contribution. Are the supervised-unsupervised splits randomly partitioned, or do they follow any principle? I want to divide the LEVIR-CD and WHU-CD datasets into 1% labeled and 99% unlabeled data, so I would like to ask about your data-splitting method.
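For readers with the same question, here is a minimal sketch (not the authors' script) of building a random 1% / 99% labeled/unlabeled split from a list of training patch names; the input/output file names and list format are assumptions.

    import random

    random.seed(0)
    with open("train.txt") as f:                        # list of all training patch names (assumed format)
        names = [line.strip() for line in f if line.strip()]

    random.shuffle(names)
    n_labeled = max(1, int(0.01 * len(names)))          # 1% labeled, rest unlabeled
    labeled, unlabeled = names[:n_labeled], names[n_labeled:]

    with open("1_train_supervised.txt", "w") as f:      # output file names are hypothetical
        f.write("\n".join(labeled))
    with open("1_train_unsupervised.txt", "w") as f:
        f.write("\n".join(unlabeled))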
