
Enjoy-Hamburger 🍔

Official implementation of Hamburger, Is Attention Better Than Matrix Decomposition? (ICLR 2021, top 3%)

Squirtle (憨憨) invites you to enjoy Hamburger! 憨 shares the same pronunciation as ham, which means simple and plain in Chinese.

Update

  • 2022.04.01 - Add Light-Ham (VAN-Huge). Over 3 runs, Light-Ham (VAN-Base) achieves an average mIoU (MS) of 49.6 on the ADE20K val set (individual runs: 49.6, 49.9, and 49.2). Note that reducing the number of MD steps K from 6 to 3 in Light-Ham (VAN-Base) drops the performance to 48.8 (1 run), demonstrating the significance of the optimization-driven strategy and MD in Hamburger.

  • 2022.03.26 - Release Light-Ham, a lightweight segmentation baseline for modern backbones. Using the VAN backbone, Light-Ham-VAN sets the best Pareto frontier (Params/FLOPs-mIoU curves) to date on ADE20K.

    | Method | Backbone | Iters | mIoU | Params | FLOPs | Config | Download |
    | ------ | -------- | ----- | ---- | ------ | ----- | ------ | -------- |
    | Light-Ham-D256 | VAN-Tiny | 160K | 40.9 | 4.2M | 6.5G | config | Google Drive |
    | Light-Ham | VAN-Tiny | 160K | 42.3 | 4.9M | 11.3G | config | Google Drive |
    | Light-Ham-D256 | VAN-Small | 160K | 45.2 | 13.8M | 15.8G | config | Google Drive |
    | Light-Ham | VAN-Small | 160K | 45.7 | 14.7M | 21.4G | config | Google Drive |
    | Light-Ham | VAN-Base | 160K | 49.6 | 27.4M | 34.4G | config | Google Drive |
    | Light-Ham | VAN-Large | 160K | 51.0 | 45.6M | 55.0G | config | Google Drive |
    | Light-Ham | VAN-Huge | 160K | 51.5 | 61.1M | 71.8G | config | Google Drive |
    | - | - | - | - | - | - | - | - |
    | Segformer | VAN-Base | 160K | 48.4 | 29.3M | 68.6G | - | - |
    | Segformer | VAN-Large | 160K | 50.3 | 47.5M | 89.2G | - | - |
    | - | - | - | - | - | - | - | - |
    | HamNet | VAN-Tiny-OS8 | 160K | 41.5 | 11.9M | 50.8G | config | Google Drive |
    | HamNet | VAN-Small-OS8 | 160K | 45.1 | 24.2M | 100.6G | config | Google Drive |
    | HamNet | VAN-Base-OS8 | 160K | 48.7 | 36.9M | 153.6G | config | Google Drive |
    | HamNet | VAN-Large-OS8 | 160K | 50.2 | 55.1M | 227.7G | config | Google Drive |
  • 2022.03.06 - Update HamNet using MMSegmentation. HamNet achieves SOTA performance for the ResNet-101 backbone on the ADE20K val set, enabling R101 to match modern backbones such as ResNeSt, Swin Transformer, or ConvNeXt under a similar computing budget. Code and checkpoints are available.

    | Method | Backbone | Crop Size | Lr schd | mIoU (SS) | mIoU (MS) | Params | FLOPs |
    | ------ | -------- | --------- | ------- | --------- | --------- | ------ | ----- |
    | DANet | ResNet-101 | 512x512 | 160000 | - | 45.2 | 69M | 1119G |
    | OCRNet | ResNet-101 | 520x520 | 150000 | - | 45.3 | 56M | 923G |
    | DNL | ResNet-101 | 512x512 | 160000 | - | 46.0 | 69M | 1249G |
    | HamNet | ResNet-101 | 512x512 | 160000 | 44.9 | 46.0 | 57M | 918G |
    | HamNet+ | ResNet-101 | 512x512 | 160000 | 45.6 | 46.8 | 69M | 1111G |
    | - | - | - | - | - | - | - | - |
    | DeeplabV3 | ResNeSt-101 | 512x512 | 160000 | 45.7 | 46.6 | 66M | 1051G |
    | UPerNet | Swin-T | 512x512 | 160000 | 44.5 | 45.8 | 60M | 945G |
    | UPerNet | ConvNeXt-T | 512x512 | 160000 | 46.0 | 46.7 | 60M | 939G |
  • 2021.09.09 - Release the arXiv version. This is a short version that includes some future works based on Hamburger. A long version concerning the implicit perspective of Hamburger will follow later.

  • 2021.05.12 - Release Chinese Blog 3.

  • 2021.05.10 - Release Chinese Blog 1 and Blog 2 on Zhihu. Blog 3 is incoming.

  • 2021.04.14 - Herald the incoming arXiv version concerning implicit models and one-step gradient.

  • 2021.04.13 - Add poster and thumbnail icon for ICLR 2021.

Introduction

This repo provides the official implementation of Hamburger for further research. We sincerely hope that this paper can bring you inspiration about the Attention Mechanism, especially how the low-rankness and the optimization-driven method can help model the so-called Global Information in deep learning. We also highlight Hamburger as a semi-implicit model and one-step gradient as an alternative for training both implicit and semi-implicit models.

We model the global context issue as a low-rank completion problem and show that its optimization algorithms can help design global information blocks. This paper then proposes a series of Hamburgers, in which we employ the optimization algorithms for solving MDs to factorize the input representations into sub-matrices and reconstruct a low-rank embedding. Hamburgers with different MDs can perform favorably against the popular global context module self-attention when carefully coping with gradients back-propagated through MDs.
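
To make the structure concrete, here is a minimal PyTorch sketch of a Hamburger block, assuming an NMF ham solved by multiplicative updates; the shapes, rank, step count, and module layout are illustrative simplifications, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NMFHam(nn.Module):
    """Ham: factorize X ~= D @ C with K multiplicative-update steps."""
    def __init__(self, rank=64, steps=6):
        super().__init__()
        self.rank, self.steps = rank, steps

    def forward(self, x):                                 # x: (B, d, n), non-negative
        b, d, n = x.shape
        D = torch.rand(b, d, self.rank, device=x.device)  # bases
        C = torch.rand(b, self.rank, n, device=x.device)  # coefficients
        with torch.no_grad():                             # no BPTT through the solver
            for _ in range(self.steps - 1):
                C = C * (D.transpose(1, 2) @ x) / (D.transpose(1, 2) @ D @ C + 1e-6)
                D = D * (x @ C.transpose(1, 2)) / (D @ C @ C.transpose(1, 2) + 1e-6)
        # one-step gradient: only the last coefficient update is differentiable
        C = C * (D.transpose(1, 2) @ x) / (D.transpose(1, 2) @ D @ C + 1e-6)
        return D @ C                                      # low-rank reconstruction

class Hamburger(nn.Module):
    def __init__(self, dim=512, rank=64, steps=6):
        super().__init__()
        self.lower = nn.Conv1d(dim, dim, 1)   # lower bread
        self.ham = NMFHam(rank, steps)        # ham: the MD solver
        self.upper = nn.Conv1d(dim, dim, 1)   # upper bread

    def forward(self, x):                     # x: (B, C, N) flattened features
        z = F.relu(self.lower(x))             # ReLU keeps the NMF input non-negative
        return x + self.upper(self.ham(z))    # skip connection around the block
```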


We are working on some exciting topics. Please wait for our new papers. :)

Enjoy Hamburger, please!

Organization

This section introduces the organization of this repo.

We strongly recommend that our readers enjoy the arXiv version or the blogs to understand this paper more comprehensively.

  • blog.
    • Some random thoughts on Hamburger and beyond (Chinese Blog 1).
    • Connections and differences between Hamburger and implicit models. (incoming arXiv version, Chinese Blog 2)
    • Highlight one-step gradient. (incoming arXiv version, Chinese Blog 2)
    • Possible directions based on Hamburger. (current arXiv version, Chinese Blog 3)
    • FAQ.
  • seg.
    • We provide the PyTorch implementation of Hamburger (V1) in the paper and an enhanced version (V2) flavored with Cheese. Some experimental features are included in V2+.
    • We release the codebase for systematic research on the PASCAL VOC dataset, including the two-stage training on the trainaug and trainval datasets and the MSFlip test (see the sketch after this list).
    • We offer three checkpoints of HamNet: one achieves 85.90+ (test server link), while the other two achieve 85.80+ (test server link 1 and link 2). You can reproduce the test results using the checkpoints combined with the MSFlip test code.
    • Statistics about HamNet that might ease further research.
  • gan.
    • Official implementation of Hamburger in TensorFlow.
    • Data preprocessing code for using ImageNet in tensorflow-datasets. (Possibly useful if you hope to run the JAX code of BYOL or other ImageNet training code with the Cloud TPUs.)
    • Training and evaluation protocol of HamGAN on ImageNet.
    • Checkpoints of HamGAN-strong and HamGAN-baby.
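
As referenced above, here is a hedged sketch of a multi-scale + flip (MSFlip) test loop; `model`, the scale list, and the output convention are placeholder assumptions, not the repo's exact protocol.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def msflip_inference(model, image, scales=(0.75, 1.0, 1.25)):
    """image: (B, 3, H, W) -> averaged class probabilities (B, K, H, W)."""
    _, _, h, w = image.shape
    prob_sum = 0.0
    for s in scales:
        scaled = F.interpolate(image, scale_factor=s, mode='bilinear',
                               align_corners=False)
        for flip in (False, True):
            inp = torch.flip(scaled, dims=[3]) if flip else scaled
            logits = model(inp)                        # assumed: (B, K, h', w')
            if flip:
                logits = torch.flip(logits, dims=[3])  # undo the flip
            logits = F.interpolate(logits, size=(h, w), mode='bilinear',
                                   align_corners=False)
            prob_sum = prob_sum + logits.softmax(dim=1)
    return prob_sum / (2 * len(scales))
```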

TODO:

  • Chinese Blog 1, Blog 2 and Blog 3.
  • Release the arXiv version.
  • English Blog.
  • README doc for HamGAN.
  • PyTorch Hamburger using less encapsulation.
  • Suggestions for using and further developing Hamburger. (See arXiv)
  • We also consider adding a collection of popular context modules to this repo. It depends on the time. No Guarantee. Perhaps GuGu 🕊️ (which means standing someone up).

Citation

If you find our work interesting or helpful to your research, please consider citing Hamburger. :)

@inproceedings{ham,
    title={Is Attention Better Than Matrix Decomposition?},
    author={Zhengyang Geng and Meng-Hao Guo and Hongxu Chen and Xia Li and Ke Wei and Zhouchen Lin},
    booktitle={International Conference on Learning Representations},
    year={2021},
}

Contact

Feel free to contact me if you have additional questions or are interested in collaboration. Please drop me an email at [email protected]. Find me on Twitter or WeChat. Thank you!

Acknowledgments

Our research is supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). It has been a nice and joyful experience with the TFRC program. Thank you!

We would like to sincerely thank MMSegmentation, EMANet, PyTorch-Encoding, YLG, and TF-GAN for their awesome released code.


Issues

question about the implementation of one-step gradient

Hi, @Gsunshine

I notice that the implementation of the one-step gradient in your code only consists of building the coefficient, as demonstrated by this line.

In my opinion, the function F is composed of two parts, the construction of the coefficient and of the base, but the base part is omitted.

Is this a deliberate design or a mistake?

Thank you for your great work. Best wishes.
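
For readers following this thread, here is a hedged sketch of the one-step gradient pattern being discussed; `update_coef` and `update_base` are hypothetical stand-ins, not functions from this repo.

```python
import torch

def md_with_one_step_grad(x, D, C, update_coef, update_base, steps=6):
    with torch.no_grad():            # fixed-point-style iterations, no backprop
        for _ in range(steps - 1):
            C = update_coef(x, D, C)
            D = update_base(x, D, C)
    # Backprop through one final coefficient update only: D stays detached, so
    # the Jacobian of the full F (coef + base) is approximated by its coef
    # part alone -- the simplification this issue asks about.
    C = update_coef(x, D, C)
    return D @ C
```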


ZeroDivisionError: float division by zero

Hello,
Thank you for the models you provide for semantic segmentation. I encounter this error while training the hamnet_van_base_512x512_160k_ade20k model on a custom dataset. I have two additional classes besides the background, and I prepared the data in the ADE20K format. In debugging, I verified that the dataset paths are correct.
Please help me.
correct_k.mul_(100.0 / target[target != ignore_index].numel())) ZeroDivisionError: float division by zero
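
A hedged reading of this failure: if every pixel in a batch equals ignore_index (e.g., a label-ID mismatch in the custom dataset), numel() returns 0 and the accuracy division fails. A defensive guard could look like the following; `correct_k`, `target`, and `ignore_index` follow the pasted line above.

```python
valid = target[target != ignore_index].numel()
if valid > 0:
    correct_k.mul_(100.0 / valid)
else:
    correct_k.zero_()  # no valid pixels in this batch; report 0 accuracy
```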

Missing backbone network

Is the VAN backbone missing from the source code?! When will it be added? Looking forward to it.

About fixed-point iteration

Thank you for your excellent work! I note that the sample code in the blog says there should be a fixed-point iteration before the one-step gradient, so that it is guaranteed to be a contraction mapping, right? I searched the code with keywords such as "fix", "fixed", and "iterations", but I cannot find the fixed-point iteration. Where is it?

KeyError: 'van_tiny is not in the models registry'

Hi there,
I set up the environment in Docker with torch=1.11.0, cuda=11.3, and mmcv-full=1.5.0. When I ran the code, I got the traceback shown in the screenshot below.
[screenshot of the traceback]
The "hamenet_light_van_tiny_512x1024_160k_cityscapes.py" config file is one I modified based on the ADE20K config. The error message seems unrelated to the data, so that shouldn't be the problem.
Could you please give me any suggestions?
Thanks!

Confusion on update rules of Ham

I tried to read the arXiv paper but failed to understand the mathematical intuition of the update rules.

In Section 2.2.2,

[screenshot of the update rules from Section 2.2.2]

I didn't understand the last line: "...and softmax is applied column-wise and $T$ is the temperature. Further, we can obtain a hard assignment by a one-hot vector when $T \rightarrow 0$." What is the temperature $T$ here?

[screenshot of the softmax relaxation of $\arg\min$]

I couldn't get the justification for replacing $\arg\min$ with softmax. And what is the update rule $D \leftarrow XC^{T}\operatorname{diag}(C\mathbf{1}_n)^{-1}$ doing?

All of these questions might be irrelevant to a GitHub issue, but I was really struggling to understand these points. It would be a great help if they could be explained here. Thanks in advance.
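
A small numeric illustration (my own, not from the paper's code) of the two points asked above: softmax with temperature $T$ approaches the one-hot argmin assignment as $T \rightarrow 0$, and $D \leftarrow XC^{T}\operatorname{diag}(C\mathbf{1}_n)^{-1}$ makes each base the coefficient-weighted mean of the inputs assigned to it, as in soft K-means.

```python
import torch

dist = torch.tensor([0.9, 0.1, 0.5])           # distances of one x to 3 bases
for T in (1.0, 0.1, 0.01):
    print(T, torch.softmax(-dist / T, dim=0))  # T=1.0: soft; T=0.01: ~[0, 1, 0]

X = torch.randn(4, 6)                          # d=4 features, n=6 samples
C = torch.softmax(torch.randn(3, 6), dim=0)    # column-wise soft assignments
D = X @ C.T @ torch.diag(C.sum(dim=1)).inverse()
# each column D[:, k] equals sum_i C[k, i] * X[:, i] / sum_i C[k, i]
```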

Default process group has not been initialized

I can run the model with train.sh. Since I have a single GPU, GPUS=1. I want to understand the model's parameters in depth and tried to debug, but even though I set --gpus 1 and norm_cfg = dict(type='BN', requires_grad=True), I still get the error "Default process group has not been initialized, please make sure to call init_process_group". Is there any way to solve this?

BatchNormalization

Hello,

I have a question regarding batch normalization. Is there a reason why you chose such a small momentum (3e-4) for batch normalization?

Thank you in advance.
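
For context on the momentum question (my reading of PyTorch's BatchNorm semantics, not the authors' answer): the running statistics update as running = (1 - momentum) * running + momentum * batch_stat, so momentum=3e-4 makes the running mean/var move very slowly, smoothing over the noisy per-batch statistics typical of the small batches used in segmentation.

```python
import torch.nn as nn

bn = nn.BatchNorm2d(64, momentum=3e-4)  # vs. the PyTorch default momentum=0.1
```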

Applying Hamburger to other models makes the training collapse soon

I've been working on applying Hamburger to other detection models (specifically, Mask2Former and SparseInst), mainly by inserting Hamburger after the neck to align the multi-scale features, but the training always collapses after only a few iterations because of NaN outputs.
Given that the training recipe is rather general and further reducing the lr does not help, I guess this indicates that the gradient propagation is unstable? (P.S. Applying @torch.no_grad() to local_inference() is also unhelpful.)
So I'm wondering: what is the intrinsic cause of this? Have you met similar cases? Any suggestions for a fix?

Any idea would be appreciated.
