pytorch-gdn's Introduction

PyTorch GDN

Generalized divisive normalization layer

This utility provides a PyTorch implementation of the GDN non-linearity based on the papers:

"Density Modeling of Images using a Generalized Normalization Transformation"

Johannes Ballé, Valero Laparra, Eero P. Simoncelli

https://arxiv.org/abs/1511.06281

"End-to-end Optimized Image Compression"

Johannes Ballé, Valero Laparra, Eero P. Simoncelli

https://arxiv.org/abs/1611.01704

The implementation is based on the TensorFlow implementation available in the contrib package (https://www.tensorflow.org/api_docs/python/tf/contrib/layers/gdn)

Usage

The GDN layer can be used like any other PyTorch non-linearity, but must be instantiated with the number of channels it will be applied to and the torch device it will run on. The layer supports 4-d inputs (batches of images) and 5-d inputs (batches of videos); 5-d inputs are handled by unfolding the sequence dimension.

import torch

device = torch.device('cuda')
n_ch = 8

gdn = GDN(n_ch, device)  # GDN class provided by this repository

input = torch.randn(1, n_ch, 32, 32).to(device)
output = gdn(input)

In a typical application, the forward GDN is paired with convolutions in the encoder, and the inverse GDN with transposed convolutions in the decoder.
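As an illustration only, here is a minimal sketch of such an encoder/decoder pairing. The `ToyGDN` class below is a simplified stand-in (the 1x1-convolution form of the normalization) so the snippet is self-contained; the actual layer in this repository additionally reparametrizes beta/gamma and lower-bounds their values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyGDN(nn.Module):
    # Simplified stand-in for the GDN layer in this repository:
    # norm[i] = sqrt(beta[i] + sum_j gamma[i, j] * x[j]^2),
    # computed as a 1x1 convolution over squared channel values.
    def __init__(self, n_ch, inverse=False):
        super().__init__()
        self.inverse = inverse
        self.beta = nn.Parameter(torch.ones(n_ch))
        self.gamma = nn.Parameter(0.1 * torch.eye(n_ch))

    def forward(self, x):
        w = self.gamma.unsqueeze(-1).unsqueeze(-1)  # (out_ch, in_ch, 1, 1)
        norm = torch.sqrt(F.conv2d(x * x, w, self.beta))
        return x * norm if self.inverse else x / norm

# Forward GDN after a convolution in the encoder,
# inverse GDN before a transposed convolution in the decoder.
encoder = nn.Sequential(nn.Conv2d(3, 8, 4, stride=2, padding=1), ToyGDN(8))
decoder = nn.Sequential(ToyGDN(8, inverse=True),
                        nn.ConvTranspose2d(8, 3, 4, stride=2, padding=1))

x = torch.randn(1, 3, 32, 32)
code = encoder(x)       # (1, 8, 16, 16)
recon = decoder(code)   # (1, 3, 32, 32)
```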

Other parameters that can be used with the GDN are:

gdn = GDN(8, device,
          inverse=True,
          beta_min=1e-6,
          gamma_init=0.1,
          reparam_offset=2**-18)

pytorch-gdn's People

Contributors

jorge-pessoa, sdpenguin

pytorch-gdn's Issues

exporting to ONNX possible for model containing LowerBound?

When trying to export a pytorch model containing GDN to ONNX, it complains about:
RuntimeError: ONNX export failed: Couldn't export Python operator LowerBound

Has anyone tried this? I don't know yet whether we have to add the LowerBound 'operator' to ONNX, or rewrite GDN in some way to avoid it.

GDN with dataparallel

Hi,
Thank you for this implementation.
This GDN module works well on a single GPU, but when I wrap my model in the DataParallel module and use multiple GPUs to train, training fails. Could you please check it?

The issue of the number of channels and CPU usage.

I've encountered an interesting issue: when the number of channels in GDN exceeds 181, the CPU usage becomes remarkably high. It seems like 181 is a critical threshold, but I'm not sure if others have experienced the same problem.

Legacy autograd function with non-static forward method is deprecated

I get many of these errors
/build/python-pytorch/src/pytorch-1.3.1-opt-cuda/torch/csrc/autograd/python_function.cpp:620: UserWarning: Legacy autograd function with non-static forward method is deprecated and will be removed in 1.3. Please use new-style autograd function with static forward method. (Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function)

Likely from the LowerBound(Function) class. Luckily it still works in PyTorch 1.3.1.
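For reference, a sketch of how the legacy function could be rewritten in the new style with static methods; this is an assumption about the intended behaviour, not code from this repository:

```python
import torch

class LowerBound(torch.autograd.Function):
    # Clamps x to be >= bound. Custom gradient: gradients pass through
    # where x is already above the bound, or where they would push a
    # clamped value back up toward the bound.
    @staticmethod
    def forward(ctx, x, bound):
        b = torch.full_like(x, bound)
        ctx.save_for_backward(x, b)
        return torch.max(x, b)

    @staticmethod
    def backward(ctx, grad_output):
        x, b = ctx.saved_tensors
        pass_through = (x >= b) | (grad_output < 0)
        return pass_through.type_as(grad_output) * grad_output, None

x = torch.tensor([-1.0, 2.0], requires_grad=True)
y = LowerBound.apply(x, 0.0)
y.sum().backward()
# x.grad is 0.0 for the clamped element (a positive gradient would push
# it further below the bound) and 1.0 for the element above the bound
```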

LowerBound purpose unclear

What is the point of your LowerBound custom autograd function? Is the functionality different than standard torch.max(x, b) or F.threshold(x,b,b)? I don't see a situation where your variable pass_through_2 in .backward() would ever be true (unless you have a negative reparam_offset).

Correct for GDN's math equation

Hi, thanks for sharing! I found a small mistake in the comment giving GDN's math equation: in my opinion, x should be squared.

In your code:

y[i] = x[i] / sqrt(beta[i] + sum_j(gamma[j, i] * x[j]))

IMO this equation should be

y[i] = x[i] / sqrt(beta[i] + sum_j(gamma[j, i] * x[j]^2))

Please check this.
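As a quick numeric illustration of the corrected equation, for a single spatial position with two channels (plain Python; the beta/gamma values are made up for the example):

```python
import math

x = [3.0, 4.0]            # channel values at one pixel
beta = [1.0, 1.0]
gamma = [[0.1, 0.0],      # gamma[j][i]
         [0.0, 0.1]]

# y[i] = x[i] / sqrt(beta[i] + sum_j(gamma[j][i] * x[j]^2))
y = [x[i] / math.sqrt(beta[i] + sum(gamma[j][i] * x[j] ** 2
                                    for j in range(2)))
     for i in range(2)]
# y[0] = 3 / sqrt(1 + 0.1 * 9)  ≈ 2.1764
# y[1] = 4 / sqrt(1 + 0.1 * 16) ≈ 2.4807
```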
