
afa's Introduction

Learning Affinity from Attention: End-to-End Weakly-Supervised Semantic Segmentation with Transformers

Code of CVPR 2022 paper: Learning Affinity from Attention: End-to-End Weakly-Supervised Semantic Segmentation with Transformers.

[arXiv] [Project] [Poster]


AFA flowchart

Abstract

Weakly-supervised semantic segmentation (WSSS) with image-level labels is an important and challenging task. Due to the high training efficiency, end-to-end solutions for WSSS have received increasing attention from the community. However, current methods are mainly based on convolutional neural networks and fail to explore the global information properly, thus usually resulting in incomplete object regions. In this paper, to address the aforementioned problem, we introduce Transformers, which naturally integrate global information, to generate more integral initial pseudo labels for end-to-end WSSS. Motivated by the inherent consistency between the self-attention in Transformers and the semantic affinity, we propose an Affinity from Attention (AFA) module to learn semantic affinity from the multi-head self-attention (MHSA) in Transformers. The learned affinity is then leveraged to refine the initial pseudo labels for segmentation. In addition, to efficiently derive reliable affinity labels for supervising AFA and ensure the local consistency of pseudo labels, we devise a Pixel-Adaptive Refinement module that incorporates low-level image appearance information to refine the pseudo labels. We perform extensive experiments and our method achieves 66.0% and 38.9% mIoU on the PASCAL VOC 2012 and MS COCO 2014 datasets, respectively, significantly outperforming recent end-to-end methods and several multi-stage competitors. Code will be made publicly available.
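The affinity-from-attention idea can be sketched in a few lines. This is a toy illustration only: the real AFA module learns the affinity with supervised parameters from the MHSA maps, while `affinity_from_attention` below is a hypothetical stand-in that just shows the head fusion and symmetrization.

```python
import numpy as np

def affinity_from_attention(attn):
    """attn: (heads, N, N) multi-head self-attention maps.

    Fuse the heads, symmetrize (semantic affinity between two tokens
    is undirected), and squash to (0, 1). A toy stand-in for the
    learned AFA module, not the repo's implementation.
    """
    s = attn.mean(axis=0)            # fuse the attention heads
    s = 0.5 * (s + s.T)              # symmetrize across token pairs
    return 1.0 / (1.0 + np.exp(-s))  # sigmoid -> affinity in (0, 1)
```

In the paper, the resulting affinity is used to propagate and refine the initial CAM-based pseudo labels via a random walk.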

Preparations

VOC dataset

1. Download

wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
tar -xvf VOCtrainval_11-May-2012.tar

2. Download the augmented annotations

The augmented annotations come from the SBD dataset. Here is a download link for the augmented annotations at DropBox. After downloading SegmentationClassAug.zip, unzip it and move it to VOCdevkit/VOC2012. The directory structure should thus be

VOCdevkit/
└── VOC2012
    ├── Annotations
    ├── ImageSets
    ├── JPEGImages
    ├── SegmentationClass
    ├── SegmentationClassAug
    └── SegmentationObject
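As a quick sanity check of the layout above, a small helper (hypothetical, not part of the repo) can report which expected sub-directories are missing:

```python
import os

VOC_SUBDIRS = ["Annotations", "ImageSets", "JPEGImages",
               "SegmentationClass", "SegmentationClassAug",
               "SegmentationObject"]

def missing_voc_dirs(devkit_root):
    """Return the expected VOC2012 sub-directories absent under devkit_root."""
    voc = os.path.join(devkit_root, "VOC2012")
    return [d for d in VOC_SUBDIRS
            if not os.path.isdir(os.path.join(voc, d))]
```

An empty return value means the tree matches the structure shown above.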

COCO dataset

1. Download

wget http://images.cocodataset.org/zips/train2014.zip
wget http://images.cocodataset.org/zips/val2014.zip

After unzipping the downloaded files, for convenience, I recommend organizing them in VOC style.

MSCOCO/
├── JPEGImages
│    ├── train
│    └── val
└── SegmentationClass
     ├── train
     └── val
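The re-organization can be scripted; the sketch below moves the unzipped images into the VOC-style tree (`organize_coco_split` and its paths are assumptions for illustration, not repo code):

```python
import os
import shutil

def organize_coco_split(src_dir, dst_root, split):
    """Move all .jpg files from an unzipped COCO folder (e.g. train2014/)
    into <dst_root>/JPEGImages/<split>/ and return the destination path."""
    dst = os.path.join(dst_root, "JPEGImages", split)
    os.makedirs(dst, exist_ok=True)
    for name in os.listdir(src_dir):
        if name.endswith(".jpg"):
            shutil.move(os.path.join(src_dir, name),
                        os.path.join(dst, name))
    return dst
```

For example, `organize_coco_split("train2014", "MSCOCO", "train")` would populate `MSCOCO/JPEGImages/train/`.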

2. Generating VOC style segmentation labels for COCO

To generate VOC-style segmentation labels for the COCO dataset, you can use the scripts provided at this repo, or simply download the generated masks from Google Drive.
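If you generate the masks yourself, note that COCO category ids run from 1 to 90 with gaps, so they must be remapped to contiguous label values before being painted into VOC-style PNG masks. A minimal sketch of that remapping (`contiguous_label_map` is a hypothetical helper, not repo code):

```python
def contiguous_label_map(coco_cat_ids):
    """Map COCO category ids (1-90, with gaps) to contiguous labels
    1..K, reserving 0 for background."""
    return {cat: i + 1 for i, cat in enumerate(sorted(coco_cat_ids))}
```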

Create and activate conda environment

conda create --name py36 python=3.6
conda activate py36
pip install -r requirements.txt

Clone this repo

git clone https://github.com/rulixiang/afa.git
cd afa

Download Pre-trained weights

Download the ImageNet-1k pre-trained weights from the official SegFormer implementation and move them to pretrained/.

[Optional] Build python extension module

To use the regularized loss, you need to download and compile the python extension, which is provided here. This module is optional and only brings a subtle improvement to the final performance on VOC according to the ablation.

Train

To start training, just run the scripts under launch/.

# train on voc
bash launch/run_sbatch_attn_reg.sh
# train on coco
bash launch/run_sbatch_attn_reg_coco.sh

Running the above commands produces training logs; our own training logs are also provided under logs/ for reference.

Results

The generated CAMs and semantic segmentation results on the DAVIS 2017 dataset; the model was trained on the VOC 2012 dataset. For more results, please see the [Project page] or [Paper].

Visualization. Left: CAMs of the cls branch. Right: Prediction of the seg branch.

Citation

Please kindly cite our paper if you find it helpful in your work.

@inproceedings{ru2022learning,
    title = {Learning Affinity from Attention: End-to-End Weakly-Supervised Semantic Segmentation with Transformers},
    author = {Lixiang Ru and Yibing Zhan and Baosheng Yu and Bo Du},
    booktitle = {CVPR},
    year = {2022},
}

Acknowledgement

We use SegFormer and its pre-trained weights as the backbone, which is based on MMSegmentation. We heavily borrowed from 1-stage-wseg to construct our PAR. We also use the Regularized Loss and the Random Walk Propagation in PSA. Many thanks for their brilliant work!


afa's Issues

Where is the test code?

I reproduced your code on the VOC dataset; how do I evaluate the training results? I couldn't find the test code.
Can you help me?

pseudo label

How are pseudo labels generated from the CAMs, and where is the corresponding code? Thanks!

CNN & Trans CAM

Could you provide the code or steps for generating the CAMs shown in Figure 9 of the paper?

Welcome update to OpenMMLab 2.0


I am Vansin, the technical operator of OpenMMLab. In September of last year, we announced the release of OpenMMLab 2.0 at the World Artificial Intelligence Conference in Shanghai. We invite you to upgrade your algorithm library to OpenMMLab 2.0 using MMEngine, which can be used for both research and commercial purposes. If you have any questions, please feel free to join us on the OpenMMLab Discord at https://discord.gg/amFNsyUBvm or add me on WeChat (van-sin) and I will invite you to the OpenMMLab WeChat group.

Here are the OpenMMLab 2.0 repos branches:

Library            OpenMMLab 1.0 branch   OpenMMLab 2.0 branch
MMEngine           -                      0.x
MMCV               1.x                    2.x
MMDetection        0.x, 1.x, 2.x          3.x
MMAction2          0.x                    1.x
MMClassification   0.x                    1.x
MMSegmentation     0.x                    1.x
MMDetection3D      0.x                    1.x
MMEditing          0.x                    1.x
MMPose             0.x                    1.x
MMDeploy           0.x                    1.x
MMTracking         0.x                    1.x
MMOCR              0.x                    1.x
MMRazor            0.x                    1.x
MMSelfSup          0.x                    1.x
MMRotate           1.x                    1.x
MMYOLO             -                      0.x

Attention: please create a new virtual environment for OpenMMLab 2.0.

Reproducibility concern.

I ran this code several times, but it didn't reach 62.6% (around 61 without the reg loss & CRF). Also, why does CRF add 2.2 mIoU? With single-scale CRF I get an increase of less than 1.0. Likewise, the reg loss didn't bring the reported 1.2 improvement (which is not small); I got about 0.5.

Reproduce

Hi, thanks for sharing this very interesting work!
However, when I reproduced it, I only achieved 'miou': 0.6101686126054515 at 20000 iters.
Can you share the code that uses CRF for evaluation?
Thanks again, and looking forward to your reply!

No module named 'bilateralfilter'

When I try to run bash launch/run_sbatch_attn_reg.sh, I get:
Traceback (most recent call last):
File "scripts/dist_train_voc.py", line 21, in
from utils.losses import DenseEnergyLoss, get_aff_loss, get_energy_loss
File "./utils/losses.py", line 9, in
from bilateralfilter import bilateralfilter, bilateralfilter_batch
ModuleNotFoundError: No module named 'bilateralfilter'

VOC Dataset

./afa-master/datasets/voc/cls_labels_onehot.npy
I want to train on my own dataset.
How is this cls_labels_onehot.npy file generated?

About PAR

Nice work!
Why does PAR take the original H×W×3 image as input? What is the purpose of this?

Using pre-trained weights from SegFormer confused me

To my understanding, SegFormer's weights are trained with fully supervised annotations containing class and location information, so directly using those weights as initial parameters doesn't really look like a true weakly-supervised scheme.

PyCharm to a server for training

The error message "ValueError: Error initializing torch.distributed using env:// rendezvous: environment variable RANK expected, but not set" means that when connecting PyCharm to a server for training, the environment variable RANK is expected but not set, causing an initialization error in torch.distributed.
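For reference, the env:// rendezvous reads its configuration from environment variables. A minimal sketch of providing them for a single-process run launched without torchrun (the helper name and port value are illustrative):

```python
import os

def set_single_process_dist_env(port="29500"):
    """Set the variables torch.distributed's env:// rendezvous expects
    when the script is launched directly (e.g. from PyCharm) rather
    than via torchrun, for a single-process, single-GPU run."""
    os.environ.setdefault("RANK", "0")
    os.environ.setdefault("WORLD_SIZE", "1")
    os.environ.setdefault("LOCAL_RANK", "0")
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", port)
```

Calling this before `torch.distributed.init_process_group` should avoid the missing-RANK error in a single-process setting.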

Why is refined_aff_label used when computing the seg loss?

Hello,
While reading the code, I noticed that the seg loss seems to be computed differently from the paper:
seg_loss = get_seg_loss(segs, refined_aff_label.type(torch.long), ignore_index=cfg.dataset.ignore_index)
# reg_loss = get_energy_loss(img=inputs, logit=segs, label=refined_aff_label, img_box=img_box, loss_layer=loss_layer)
#seg_loss = F.cross_entropy(segs, pseudo_label.type(torch.long), ignore_index=cfg.dataset.ignore_index)
Here refined_aff_label is used rather than the commented-out pseudo_label, which doesn't match the paper. I'm confused; could you please clarify?

ModuleNotFoundError: No module named 'bilateralfilter'

Thank you for a great job!
When I run the code, I hit the error "from bilateralfilter import bilateralfilter, bilateralfilter_batch
ModuleNotFoundError: No module named 'bilateralfilter'" in losses.py. I can't install it with pip or conda. Could you give me some advice? What is "bilateralfilter" and how can I get it? Thank you very much.
I'm looking forward to hearing from you.

Fairness concern.

Is it fair to use the mit-b1 backbone pre-trained by SegFormer? I think the pre-trained semantic segmentation model plays a key role in this work, and the mit-b1 RRM comparison set up in the paper doesn't prove its fairness.

single gpu train

When I use a single GPU to train on the VOC dataset, I get an error (screenshot attached).

Can you help me?

python 3.7 cuda 11.1 torch 1.8 got error

(screenshot of the traceback attached)

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [256]] is at version 4; expected version 3 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

I looked online for all possible solutions, to no avail. Do I have to match your versions exactly?

Duplicate GPU detected

I hit this problem:
Duplicate GPU detected : rank 1 and rank 0 both on CUDA device 4f000
How can it be solved?
