
pytorch_fft's Introduction

A PyTorch wrapper for CUDA FFTs

A package that provides a PyTorch C extension for performing batches of 1D, 2D, and 3D CuFFT transformations, by Eric Wong

Update: FFT functionality is now officially in PyTorch 0.4, see the documentation here. This repository is only useful for older versions of PyTorch, and will no longer be updated.

Installation

This package is on PyPi. Install with pip install pytorch-fft.

Usage

  • From the pytorch_fft.fft module, you can use the following to do forward and backward FFT transformations (complex to complex)
    • fft and ifft for 1D transformations
    • fft2 and ifft2 for 2D transformations
    • fft3 and ifft3 for 3D transformations
  • From the same module, you can also use the following for real to complex / complex to real FFT transformations
    • rfft and irfft for 1D transformations
    • rfft2 and irfft2 for 2D transformations
    • rfft3 and irfft3 for 3D transformations
  • For a d-D transformation, the input tensors are required to have >= (d+1) dimensions (n1 x ... x nk x m1 x ... x md) where n1 x ... x nk is the batch of FFT transformations, and m1 x ... x md are the dimensions of the d-D transformation. d must be a number from 1 to 3.
  • Finally, the module contains the following helper functions you may find useful
    • reverse(X, group_size=1) reverses the elements of a tensor and returns the result in a new tensor. Note that PyTorch does not currently support negative slicing, see this issue. If a group size is supplied, the elements will be reversed in groups of that size.
    • expand(X, imag=False, odd=True) takes a tensor output of a real 2D or 3D FFT and expands it with its redundant entries to match the output of a complex FFT.
  • For autograd support, use the following functions in the pytorch_fft.fft.autograd module:
    • Fft and Ifft for 1D transformations
    • Fft2d and Ifft2d for 2D transformations
    • Fft3d and Ifft3d for 3D transformations
# Example that does a batch of three 2D transformations of size 4 by 5. 
import torch
import pytorch_fft.fft as fft

A_real, A_imag = torch.randn(3,4,5).cuda(), torch.zeros(3,4,5).cuda()
B_real, B_imag = fft.fft2(A_real, A_imag)
fft.ifft2(B_real, B_imag) # equals (A_real, A_imag)

B_real, B_imag = fft.rfft2(A_real) # is a truncated version which omits
                                   # the redundant entries

fft.reverse(torch.arange(0,6)) # outputs [5,4,3,2,1,0]
fft.reverse(torch.arange(0,6), 2) # outputs [4,5,2,3,0,1]

fft.expand(B_real) # is equivalent to fft.fft2(A_real, A_imag)[0]
fft.expand(B_imag, imag=True) # is equivalent to fft.fft2(A_real, A_imag)[1]
# Example that uses the autograd for 2D fft:
import torch
from torch.autograd import Variable
import pytorch_fft.fft.autograd as fft
import numpy as np

f = fft.Fft2d()
invf= fft.Ifft2d()

fx, fy = (Variable(torch.arange(0,100).view((1,1,10,10)).cuda(), requires_grad=True), 
          Variable(torch.zeros(1, 1, 10, 10).cuda(),requires_grad=True))
k1,k2 = f(fx,fy)
z = k1.sum() + k2.sum()
z.backward()
print(fx.grad, fy.grad)

Notes

  • This follows NumPy semantics and behavior, so ifft2(fft2(x)) = x. Note that cuFFT's inverse only flips the sign of the exponent and does not normalize, so on its own it is not a true inverse.
  • Similarly, the real to complex / complex to real variants also follow NumPy semantics and behavior. In the 1D case, this means that for an input of size N, the output has size N//2+1 (the redundant entries are omitted; see the NumPy docs and the short sketch after these notes).
  • The functions in the pytorch_fft.fft module do not implement the PyTorch autograd Function, and are semantically and functionally like their NumPy equivalents.
  • Autograd functionality is in the pytorch_fft.fft.autograd module.
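A minimal sketch (not part of the original examples, and assuming a CUDA device is available) illustrating the N//2+1 output size of the real-to-complex transform:

import torch
import pytorch_fft.fft as fft

x = torch.randn(3, 8).cuda()   # batch of 3 real signals of length N = 8
y_re, y_im = fft.rfft(x)       # one-sided output: the last dimension has size N//2 + 1 = 5
print(y_re.size())             # expected: torch.Size([3, 5])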

Repository contents

  • pytorch_fft/src: C source code
  • pytorch_fft/fft: Python convenience wrapper
  • build.py: compilation file
  • test.py: tests against NumPy FFTs and Autograd checks

Issues and Contributions

If you have any issues or feature requests, file an issue or send in a PR.

pytorch_fft's People

Contributors

gdlg, riceric22, rotmanmi

pytorch_fft's Issues

Is it okay if I don't use the GPU?

It's embarrassing to say that as a student I cannot afford a CUDA GPU for now. So I tried the example without CUDA. But it does not work, so must I have a GPU to use this package?

Thanks very much!

Here is what I tried:

import torch
from torch.autograd import Variable
import pytorch_fft.fft.autograd as fft
import numpy as np

f = fft.Fft2d()
invf= fft.Ifft2d()

fx, fy = (Variable(torch.arange(0,100).view((1,1,10,10)), requires_grad=True), 
          Variable(torch.zeros(1, 1, 10, 10),requires_grad=True))
k1,k2 = f(fx,fy)

It gives:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-90-747c69555382> in <module>()
      4 fx, fy = (Variable(torch.arange(0,100).view((1,1,10,10)), requires_grad=True), 
      5           Variable(torch.zeros(1, 1, 10, 10),requires_grad=True))
----> 6 k1,k2 = f(fx,fy)
      7 z = k1.sum() + k2.sum()
      8 z.backward()

~/soft/anaconda3/envs/pytorch/lib/python3.6/site-packages/pytorch_fft/fft/autograd.py in forward(self, X_re, X_im)
     33     def forward(self, X_re, X_im):
     34         X_re, X_im = make_contiguous(X_re, X_im)
---> 35         return fft2(X_re, X_im)
     36 
     37     def backward(self, grad_output_re, grad_output_im):

~/soft/anaconda3/envs/pytorch/lib/python3.6/site-packages/pytorch_fft/fft/fft.py in fft2(X_re, X_im)
     39 def fft2(X_re, X_im):
     40     if 'Float' in type(X_re).__name__ :
---> 41         f = th_fft.th_Float_fft2
     42     elif 'Double' in type(X_re).__name__:
     43         f = th_fft.th_Double_fft2

AttributeError: module 'pytorch_fft._ext.th_fft' has no attribute 'th_Float_fft2'

CUFFT error: Plan creation failed

I have some issues installing this package. I tried pip install, but it installed an old version with Rfft missing.
When I tried to install manually, I ran:

python build.py
python setup.py install

Then running test.py I got the following error:

>> python test.py
CUFFT error: Plan creation failed
Traceback (most recent call last):
  File "test.py", line 149, in <module>
    test_c2c(*args)
  File "test.py", line 29, in test_c2c
    run_c2c(x, z, _f1, _f2, _if1, _if2, 1e-6)
  File "test.py", line 13, in run_c2c
    assert np.allclose(y1.cpu().numpy(), y_np.real, atol=atol)
AssertionError

Any idea what's wrong?
Thanks

Variable not supported

Hi,

fft cannot be applied to a Variable. However, Variable is important for computing gradients.

Best wishes,

Qiuqiang

How to center the zero frequency in FFT's output?

I was wondering if there's an implementation to center the zero-frequency components of the FFT function's output, more or less like MATLAB's fftshift.

Thanks in advance for any help that you can provide.
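The package itself does not appear to provide an fftshift helper, but here is a minimal sketch using plain tensor operations (the helper name fftshift2d is hypothetical; it shifts the last two dimensions, mirroring numpy.fft.fftshift):

import torch

def fftshift2d(x):
    # Move the zero-frequency component to the center of the last two dimensions
    # by rolling each of them by n // 2, as numpy.fft.fftshift does.
    for dim in (x.dim() - 2, x.dim() - 1):
        n = x.size(dim)
        half = (n + 1) // 2
        x = torch.cat((x.narrow(dim, half, n - half),
                       x.narrow(dim, 0, half)), dim)
    return x

Applying it to both the real and imaginary outputs of fft2 should give a centered spectrum.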

AttributeError: 'module' object has no attribute 'th_Double_fft2'

I get the following error:

File "/packages/development/anaconda/2-4.3.1/lib/python2.7/site-packages/pytorch_fft/fft/fft.py", line 41, in fft2
f = th_fft.th_Double_fft2
AttributeError: 'module' object has no attribute 'th_Double_fft2'

Any idea how to fix this?

Real2Complex and Complex2Real FFT

It would be good to also expand the macros to generate code for real to complex / complex to real FFTs so that the more efficient routines can be utilized.

Autograd error with multiple ffts

Hi, I'm trying to do some linear operations with two ffts.

import torch
from torch.autograd import Variable, gradcheck
import pytorch_fft.fft.autograd as fft

class fft_autotest(torch.nn.Module):
    def __init__(self):
        super(fft_autotest, self).__init__()

    def forward(self, x1, x2):
        f = fft.Fft()
        x1_fre, x1_fim = f(x1, torch.zeros_like(x1))
        x2_fre, x2_fim = f(x2, torch.zeros_like(x2))
        return x1_fre + x2_fre

x1 = Variable(torch.rand(3,2).cuda(), requires_grad=True)
x2 = Variable(torch.rand(3,2).cuda(), requires_grad=True)
func = fft_autotest()
test = gradcheck(func, (x1, x2), eps=1e-2)
print(test)

which will output error

RuntimeError: for output no. 0,
 numerical:(
 1.0000  1.0000  0.0000  0.0000  0.0000  0.0000
 1.0000 -1.0000  0.0000  0.0000  0.0000  0.0000
 0.0000  0.0000  1.0000  1.0000  0.0000  0.0000
 0.0000  0.0000  1.0000 -1.0000  0.0000  0.0000
 0.0000  0.0000  0.0000  0.0000  1.0000  1.0000
 0.0000  0.0000  0.0000  0.0000  1.0000 -1.0000
[torch.FloatTensor of size 6x6]
, 
 1.0000  1.0000  0.0000  0.0000  0.0000  0.0000
 1.0000 -1.0000  0.0000  0.0000  0.0000  0.0000
 0.0000  0.0000  1.0000  1.0000  0.0000  0.0000
 0.0000  0.0000  1.0000 -1.0000  0.0000  0.0000
 0.0000  0.0000  0.0000  0.0000  1.0000  1.0000
 0.0000  0.0000  0.0000  0.0000  1.0000 -1.0000
[torch.FloatTensor of size 6x6]
)
analytical:(
 0  0  0  0  0  0
 0  0  0  0  0  0
 0  0  0  0  0  0
 0  0  0  0  0  0
 0  0  0  0  0  0
 0  0  0  0  0  0
[torch.FloatTensor of size 6x6]
, 
 2  2  0  0  0  0
 2 -2  0  0  0  0
 0  0  2  2  0  0
 0  0  2 -2  0  0
 0  0  0  0  2  2
 0  0  0  0  2 -2
[torch.FloatTensor of size 6x6]
)

The interesting observation is that the second analytical output equals the sum of the numerical outputs. I tried different output functions and this always holds. Any idea why this coincidence happens?

Thanks!

Error while installing

I'm very interested in this project, but ran into some problems installing it both from git and from pip.

I'm using conda with pytorch 0.2.0 and python 3.6.2, and the gcc version is:

$ gcc --version
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.5) 5.4.0 20160609
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

It seems others have run into this when compiling PyTorch from source without the NVIDIA driver. So can I use the CPU part of pytorch_fft without a GPU and CUDA?

Thanks if you have any suggestions!

$ pip install pytorch-fft
Collecting pytorch-fft
  Downloading pytorch_fft-0.14.tar.gz
Requirement already satisfied: cffi>=1.0.0 in ./anaconda3/envs/pytorch/lib/python3.6/site-packages (from pytorch-fft)
Requirement already satisfied: pycparser in ./anaconda3/envs/pytorch/lib/python3.6/site-packages (from cffi>=1.0.0->pytorch-fft)
Building wheels for collected packages: pytorch-fft
  Running setup.py bdist_wheel for pytorch-fft ... error
  Complete output from command /home/rex/soft/anaconda3/envs/pytorch/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-54i25d_y/pytorch-fft/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/tmpol0xwf0ypip-wheel- --python-tag cp36:
  running bdist_wheel
  running build
  running build_py
  creating build
  creating build/lib.linux-x86_64-3.6
  creating build/lib.linux-x86_64-3.6/pytorch_fft
  copying pytorch_fft/__init__.py -> build/lib.linux-x86_64-3.6/pytorch_fft
  creating build/lib.linux-x86_64-3.6/pytorch_fft/fft
  copying pytorch_fft/fft/fft.py -> build/lib.linux-x86_64-3.6/pytorch_fft/fft
  copying pytorch_fft/fft/autograd.py -> build/lib.linux-x86_64-3.6/pytorch_fft/fft
  copying pytorch_fft/fft/__init__.py -> build/lib.linux-x86_64-3.6/pytorch_fft/fft
  creating build/lib.linux-x86_64-3.6/pytorch_fft/_ext
  copying pytorch_fft/_ext/__init__.py -> build/lib.linux-x86_64-3.6/pytorch_fft/_ext
  creating build/lib.linux-x86_64-3.6/pytorch_fft/_ext/th_fft
  copying pytorch_fft/_ext/th_fft/__init__.py -> build/lib.linux-x86_64-3.6/pytorch_fft/_ext/th_fft
  running build_ext
  generating cffi module 'build/temp.linux-x86_64-3.6/pytorch_fft._ext.th_fft._th_fft.c'
  creating build/temp.linux-x86_64-3.6
  building 'pytorch_fft._ext.th_fft._th_fft' extension
  creating build/temp.linux-x86_64-3.6/build
  creating build/temp.linux-x86_64-3.6/build/temp.linux-x86_64-3.6
  gcc -pthread -B /home/rex/soft/anaconda3/envs/pytorch/compiler_compat -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/rex/soft/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include -I/home/rex/soft/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include/TH -I/tmp/pip-build-54i25d_y/pytorch-fft/pytorch_fft/src -I/home/rex/soft/anaconda3/envs/pytorch/include/python3.6m -c build/temp.linux-x86_64-3.6/pytorch_fft._ext.th_fft._th_fft.c -o build/temp.linux-x86_64-3.6/build/temp.linux-x86_64-3.6/pytorch_fft._ext.th_fft._th_fft.o
  gcc -pthread -shared -B /home/rex/soft/anaconda3/envs/pytorch/compiler_compat -L/home/rex/soft/anaconda3/envs/pytorch/lib -Wl,-rpath=/home/rex/soft/anaconda3/envs/pytorch/lib,--no-as-needed build/temp.linux-x86_64-3.6/build/temp.linux-x86_64-3.6/pytorch_fft._ext.th_fft._th_fft.o -L/usr/local/cuda/lib64 -L/home/rex/soft/anaconda3/envs/pytorch/lib -lcufft -lpython3.6m -o build/lib.linux-x86_64-3.6/pytorch_fft/_ext/th_fft/_th_fft.abi3.so
  /home/rex/soft/anaconda3/envs/pytorch/compiler_compat/ld: cannot find -lcufft
  collect2: error: ld returned 1 exit status
  error: command 'gcc' failed with exit status 1

  ----------------------------------------
  Failed building wheel for pytorch-fft
  Running setup.py clean for pytorch-fft
Failed to build pytorch-fft
Installing collected packages: pytorch-fft
  Running setup.py install for pytorch-fft ... error
    Complete output from command /home/rex/soft/anaconda3/envs/pytorch/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-54i25d_y/pytorch-fft/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-xrvt870i-record/install-record.txt --single-version-externally-managed --compile:
    running install
    running build
    running build_py
    creating build
    creating build/lib.linux-x86_64-3.6
    creating build/lib.linux-x86_64-3.6/pytorch_fft
    copying pytorch_fft/__init__.py -> build/lib.linux-x86_64-3.6/pytorch_fft
    creating build/lib.linux-x86_64-3.6/pytorch_fft/fft
    copying pytorch_fft/fft/fft.py -> build/lib.linux-x86_64-3.6/pytorch_fft/fft
    copying pytorch_fft/fft/autograd.py -> build/lib.linux-x86_64-3.6/pytorch_fft/fft
    copying pytorch_fft/fft/__init__.py -> build/lib.linux-x86_64-3.6/pytorch_fft/fft
    creating build/lib.linux-x86_64-3.6/pytorch_fft/_ext
    copying pytorch_fft/_ext/__init__.py -> build/lib.linux-x86_64-3.6/pytorch_fft/_ext
    creating build/lib.linux-x86_64-3.6/pytorch_fft/_ext/th_fft
    copying pytorch_fft/_ext/th_fft/__init__.py -> build/lib.linux-x86_64-3.6/pytorch_fft/_ext/th_fft
    running build_ext
    generating cffi module 'build/temp.linux-x86_64-3.6/pytorch_fft._ext.th_fft._th_fft.c'
    creating build/temp.linux-x86_64-3.6
    building 'pytorch_fft._ext.th_fft._th_fft' extension
    creating build/temp.linux-x86_64-3.6/build
    creating build/temp.linux-x86_64-3.6/build/temp.linux-x86_64-3.6
    gcc -pthread -B /home/rex/soft/anaconda3/envs/pytorch/compiler_compat -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/rex/soft/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include -I/home/rex/soft/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/ffi/../../lib/include/TH -I/tmp/pip-build-54i25d_y/pytorch-fft/pytorch_fft/src -I/home/rex/soft/anaconda3/envs/pytorch/include/python3.6m -c build/temp.linux-x86_64-3.6/pytorch_fft._ext.th_fft._th_fft.c -o build/temp.linux-x86_64-3.6/build/temp.linux-x86_64-3.6/pytorch_fft._ext.th_fft._th_fft.o
    gcc -pthread -shared -B /home/rex/soft/anaconda3/envs/pytorch/compiler_compat -L/home/rex/soft/anaconda3/envs/pytorch/lib -Wl,-rpath=/home/rex/soft/anaconda3/envs/pytorch/lib,--no-as-needed build/temp.linux-x86_64-3.6/build/temp.linux-x86_64-3.6/pytorch_fft._ext.th_fft._th_fft.o -L/usr/local/cuda/lib64 -L/home/rex/soft/anaconda3/envs/pytorch/lib -lcufft -lpython3.6m -o build/lib.linux-x86_64-3.6/pytorch_fft/_ext/th_fft/_th_fft.abi3.so
    /home/rex/soft/anaconda3/envs/pytorch/compiler_compat/ld: cannot find -lcufft
    collect2: error: ld returned 1 exit status
    error: command 'gcc' failed with exit status 1

    ----------------------------------------
Command "/home/rex/soft/anaconda3/envs/pytorch/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-54i25d_y/pytorch-fft/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-xrvt870i-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-54i25d_y/pytorch-fft/

Run test.py Error

Please help me to solve this issue.

torch.Size([3, 4, 5, 7]) torch.Size([3, 4, 5, 7])
torch.Size([3, 4, 5, 7]) torch.Size([3, 4, 5, 7])
Traceback (most recent call last):
  File "F:/PycharmProjects/KGEProject2.0/utils/seqFea_test.py", line 182, in <module>
    test_c2c(*args)
  File "F:/PycharmProjects/KGEProject2.0/utils/seqFea_test.py", line 47, in test_c2c
    run_c2c(x, z, _f1, _f2, _if1, _if2, 1e-6)
  File "F:/PycharmProjects/KGEProject2.0/utils/seqFea_test.py", line 28, in run_c2c
    y1, y2 = _f1(x, z)
  File "D:\Anaconda3\lib\site-packages\pytorch_fft-0.14-py3.6-win-amd64.egg\pytorch_fft\fft\fft.py", line 28, in fft
    return _fft(X_re, X_im, f, 1)
  File "D:\Anaconda3\lib\site-packages\pytorch_fft-0.14-py3.6-win-amd64.egg\pytorch_fft\fft\fft.py", line 18, in fft
    f(X_re, X_im, Y1, Y2)
  File "D:\Anaconda3\lib\site-packages\torch\utils\ffi\__init__.py", line 202, in safe_call
    result = torch._C._safe_call(*args, **kwargs)
torch.FatalError: invalid argument 2: out of range at d:\pytorch\pytorch\torch\lib\thc\generic/THCTensor.c:23

pip install problem

When I run pip to install, I get the error below. What is wrong? Thank you.

pip install pytorch-fft
Collecting pytorch-fft
Downloading https://files.pythonhosted.org/packages/f6/d8/00edd29004d9943ebd42c665ac6f25ac57c7b9165656da514bb62737d85d/pytorch_fft-0.14.tar.gz
Requirement already satisfied: cffi>=1.0.0 in /datah/gyhu/soft/Anaconda3/envs/gymlab/lib/python3.5/site-packages (from pytorch-fft) (1.10.0)
Requirement already satisfied: pycparser in /datah/gyhu/soft/Anaconda3/envs/gymlab/lib/python3.5/site-packages (from cffi>=1.0.0->pytorch-fft) (2.18)
Building wheels for collected packages: pytorch-fft
Running setup.py bdist_wheel for pytorch-fft ... error
Complete output from command /datah/gyhu/soft/Anaconda3/envs/gymlab/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-c68r__q6/pytorch-fft/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/pip-wheel-84v_y0fw --python-tag cp35:
Including CUDA code.
Including CUDA code.
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.5
creating build/lib.linux-x86_64-3.5/pytorch_fft
copying pytorch_fft/__init__.py -> build/lib.linux-x86_64-3.5/pytorch_fft
creating build/lib.linux-x86_64-3.5/pytorch_fft/_ext
copying pytorch_fft/_ext/__init__.py -> build/lib.linux-x86_64-3.5/pytorch_fft/_ext
creating build/lib.linux-x86_64-3.5/pytorch_fft/fft
copying pytorch_fft/fft/fft.py -> build/lib.linux-x86_64-3.5/pytorch_fft/fft
copying pytorch_fft/fft/__init__.py -> build/lib.linux-x86_64-3.5/pytorch_fft/fft
copying pytorch_fft/fft/autograd.py -> build/lib.linux-x86_64-3.5/pytorch_fft/fft
creating build/lib.linux-x86_64-3.5/pytorch_fft/_ext/th_fft
copying pytorch_fft/_ext/th_fft/__init__.py -> build/lib.linux-x86_64-3.5/pytorch_fft/_ext/th_fft
running build_ext
generating cffi module 'build/temp.linux-x86_64-3.5/pytorch_fft._ext.th_fft._th_fft.c'
creating build/temp.linux-x86_64-3.5
building 'pytorch_fft._ext.th_fft._th_fft' extension
creating build/temp.linux-x86_64-3.5/build
creating build/temp.linux-x86_64-3.5/build/temp.linux-x86_64-3.5
creating build/temp.linux-x86_64-3.5/tmp
creating build/temp.linux-x86_64-3.5/tmp/pip-install-c68r__q6
creating build/temp.linux-x86_64-3.5/tmp/pip-install-c68r__q6/pytorch-fft
creating build/temp.linux-x86_64-3.5/tmp/pip-install-c68r__q6/pytorch-fft/pytorch_fft
creating build/temp.linux-x86_64-3.5/tmp/pip-install-c68r__q6/pytorch-fft/pytorch_fft/src
gcc -pthread -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/datah/gyhu/soft/Anaconda3/envs/gymlab/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include -I/datah/gyhu/soft/Anaconda3/envs/gymlab/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include/TH -I/datah/gyhu/soft/Anaconda3/envs/gymlab/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include/THC -I/tmp/pip-install-c68r__q6/pytorch-fft/pytorch_fft/src -I/datah/gyhu/soft/Anaconda3/envs/gymlab/include/python3.5m -c build/temp.linux-x86_64-3.5/pytorch_fft._ext.th_fft._th_fft.c -o build/temp.linux-x86_64-3.5/build/temp.linux-x86_64-3.5/pytorch_fft._ext.th_fft._th_fft.o
In file included from /datah/gyhu/soft/Anaconda3/envs/gymlab/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include/THC/THC.h:4:0,
from build/temp.linux-x86_64-3.5/pytorch_fft._ext.th_fft._th_fft.c:434:
/datah/gyhu/soft/Anaconda3/envs/gymlab/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include/THC/THCGeneral.h:9:18: fatal error: cuda.h: No such file or directory
compilation terminated.
error: command 'gcc' failed with exit status 1


Failed building wheel for pytorch-fft
Running setup.py clean for pytorch-fft
Failed to build pytorch-fft
Installing collected packages: pytorch-fft
Running setup.py install for pytorch-fft ... error
Complete output from command /datah/gyhu/soft/Anaconda3/envs/gymlab/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-c68r__q6/pytorch-fft/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-122qb4h5/install-record.txt --single-version-externally-managed --compile:
Including CUDA code.
Including CUDA code.
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.5
creating build/lib.linux-x86_64-3.5/pytorch_fft
copying pytorch_fft/__init__.py -> build/lib.linux-x86_64-3.5/pytorch_fft
creating build/lib.linux-x86_64-3.5/pytorch_fft/_ext
copying pytorch_fft/_ext/__init__.py -> build/lib.linux-x86_64-3.5/pytorch_fft/_ext
creating build/lib.linux-x86_64-3.5/pytorch_fft/fft
copying pytorch_fft/fft/fft.py -> build/lib.linux-x86_64-3.5/pytorch_fft/fft
copying pytorch_fft/fft/__init__.py -> build/lib.linux-x86_64-3.5/pytorch_fft/fft
copying pytorch_fft/fft/autograd.py -> build/lib.linux-x86_64-3.5/pytorch_fft/fft
creating build/lib.linux-x86_64-3.5/pytorch_fft/_ext/th_fft
copying pytorch_fft/_ext/th_fft/__init__.py -> build/lib.linux-x86_64-3.5/pytorch_fft/_ext/th_fft
running build_ext
generating cffi module 'build/temp.linux-x86_64-3.5/pytorch_fft._ext.th_fft._th_fft.c'
creating build/temp.linux-x86_64-3.5
building 'pytorch_fft._ext.th_fft._th_fft' extension
creating build/temp.linux-x86_64-3.5/build
creating build/temp.linux-x86_64-3.5/build/temp.linux-x86_64-3.5
creating build/temp.linux-x86_64-3.5/tmp
creating build/temp.linux-x86_64-3.5/tmp/pip-install-c68r__q6
creating build/temp.linux-x86_64-3.5/tmp/pip-install-c68r__q6/pytorch-fft
creating build/temp.linux-x86_64-3.5/tmp/pip-install-c68r__q6/pytorch-fft/pytorch_fft
creating build/temp.linux-x86_64-3.5/tmp/pip-install-c68r__q6/pytorch-fft/pytorch_fft/src
gcc -pthread -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/datah/gyhu/soft/Anaconda3/envs/gymlab/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include -I/datah/gyhu/soft/Anaconda3/envs/gymlab/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include/TH -I/datah/gyhu/soft/Anaconda3/envs/gymlab/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include/THC -I/tmp/pip-install-c68r__q6/pytorch-fft/pytorch_fft/src -I/datah/gyhu/soft/Anaconda3/envs/gymlab/include/python3.5m -c build/temp.linux-x86_64-3.5/pytorch_fft._ext.th_fft._th_fft.c -o build/temp.linux-x86_64-3.5/build/temp.linux-x86_64-3.5/pytorch_fft._ext.th_fft._th_fft.o
In file included from /datah/gyhu/soft/Anaconda3/envs/gymlab/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include/THC/THC.h:4:0,
from build/temp.linux-x86_64-3.5/pytorch_fft._ext.th_fft._th_fft.c:434:
/datah/gyhu/soft/Anaconda3/envs/gymlab/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include/THC/THCGeneral.h:9:18: fatal error: cuda.h: No such file or directory
compilation terminated.
error: command 'gcc' failed with exit status 1

----------------------------------------

Command "/datah/gyhu/soft/Anaconda3/envs/gymlab/bin/python -u -c "import setuptools, tokenize;file='/tmp/pip-install-c68r__q6/pytorch-fft/setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" install --record /tmp/pip-record-122qb4h5/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-install-c68r__q6/pytorch-fft/
`

pip install issue: libcufft.so.8.0 not found

After running: pip install pytorch-fft, and importing the library I get the following error: ImportError: libcufft.so.8.0: cannot open shared object file: No such file or directory

I am using Python 2.7 and torch 0.2 as I can see from pip freeze. Is there some other dependency which I am missing?

pip install not working due to cuda.h missing error

I have tried pip install pytorch-fft on two different machines with GPUs, and on both I am getting this error:
venv/local/lib/python2.7/site-packages/torch/utils/ffi/../../lib/include/THC/THCGeneral.h:9:18: fatal error: cuda.h: No such file or directory

I tried python 2.7 and 3.6. I also tried to install without using a virtual environment.

P.S.: I first got an error saying the package cffi was missing. I installed that package, and now I am getting this error.

I would appreciate your help!

Normalization

Looking forward to a normalization argument corresponding to numpy.fft's, for example:

np.fft.fft(x, norm='ortho')

This is done by dividing the result by sqrt(N).
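In the meantime, a minimal sketch of 'ortho'-style scaling applied on top of the package's unnormalized forward transform (the 2D case here; the scaling factor follows the NumPy convention and is not an official API of this package):

import math
import torch
import pytorch_fft.fft as fft

A_re, A_im = torch.randn(3, 4, 5).cuda(), torch.zeros(3, 4, 5).cuda()
B_re, B_im = fft.fft2(A_re, A_im)
n = A_re.size(1) * A_re.size(2)   # 4 * 5 points per 2D transform
B_re, B_im = B_re / math.sqrt(n), B_im / math.sqrt(n)   # mimics np.fft.fft2(x, norm='ortho')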

Benchmark against tensorflow's fft

I am originally a user of keras with tensorflow backend. I am working with networks that need to use the fft (in particular FFT2D). It turns out that the fft in tensorflow is apparently inherently slow (cf: tensorflow/tensorflow#6541).

I wanted to know if someone had taken the time to benchmark them against each other. I think I will try to do it myself, but if someone has done it, I might as well use that.
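A minimal timing sketch for this package's fft2 (a rough starting point, not a full benchmark; the torch.cuda.synchronize calls are needed for meaningful GPU timings, and the sizes below are only placeholders):

import time
import torch
import pytorch_fft.fft as fft

x_re = torch.randn(64, 256, 256).cuda()   # batch of 64 two-dimensional 256x256 transforms
x_im = torch.zeros(64, 256, 256).cuda()

torch.cuda.synchronize()
start = time.time()
for _ in range(100):
    y_re, y_im = fft.fft2(x_re, x_im)
torch.cuda.synchronize()
print('fft2: %.4f s per call' % ((time.time() - start) / 100))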

Cuda 9.1

After installing with
pip install pytorch_fft

and running
python -c "import pytorch_fft"

I get the following traceback:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/env/lib/python3.5/site-packages/pytorch_fft/__init__.py", line 1, in <module>
    from . import fft
  File "/env/lib/python3.5/site-packages/pytorch_fft/fft/__init__.py", line 1, in <module>
    from .fft import *
  File "/env/lib/python3.5/site-packages/pytorch_fft/fft/fft.py", line 3, in <module>
    from .._ext import th_fft
  File "/env/lib/python3.5/site-packages/pytorch_fft/_ext/th_fft/__init__.py", line 3, in <module>
    from ._th_fft import lib as _lib, ffi as _ffi
ImportError: libcufft.so.9.1: cannot open shared object file: No such file or directory

I am using python 3.5 with CUDA 9.1. Perhaps this does not work with CUDA 9.1?

When I run:
ls $LD_LIBRARY_PATH | grep libcufft.so.9.1
I get:
libcufft.so.9.1
libcufft.so.9.1.85

So it does appear that the shared libraries should be found by python. Any idea? Thanks.

Autograd functionality doesn't pass PyTorch autograd check

import torch
import pytorch_fft.fft.autograd as afft
from torch.autograd import gradcheck, Variable

inputs = (Variable(torch.randn(5,10).double().cuda(), requires_grad=True),)*2
test = gradcheck(afft.Fft(), inputs)
print(test)

The above snippet prints false.

@rotmanmi I think the autograd implementation may be missing a few well-placed transposes, though I'm not sure what exactly would be the right fix.

Specify output array and in-place operations?

Either I am missing something or it is not present in this package:

Is there a way to specify target output arrays for the fft? If not, is this planned?

cuFFT can do in-place FFTs, which one could then invoke simply by specifying the output array as the input array.

Having this would make memory-intensive work that requires precise control possible. Related to this: are the cuFFT plans cached somewhere in memory or on the GPU, and are they reusable? Are there plans to make them accessible?

Why is there a '/2' in the backward of rfft?

I am new to PyTorch, and I am trying to create an rfft (real-to-complex fast Fourier transform) operator using Caffe2 with Eigen. When I came across this code:

class Rfft(torch.autograd.Function):
    def forward(self, X_re):
        X_re = X_re.contiguous()
        self._to_save_input_size = X_re.size(-1)
        return rfft(X_re)

    def backward(self, grad_output_re, grad_output_im):
        # Clone the array and make contiguous if needed
        grad_output_re = contiguous_clone(grad_output_re)
        grad_output_im = contiguous_clone(grad_output_im)

        if self._to_save_input_size & 1:
            grad_output_re[...,1:] /= 2
        else:
            grad_output_re[...,1:-1] /= 2

        if self._to_save_input_size & 1:
            grad_output_im[...,1:] /= 2
        else:
            grad_output_im[...,1:-1] /= 2

        gr = irfft(grad_output_re, grad_output_im, self._to_save_input_size, normalize=False)
        return gr

I am not quite clear why there is a '/= 2'. Is it because of the missing imaginary part of the rfft input?
Thanks in advance for help!
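One way to see where the halving comes from (a NumPy sketch, not taken from the package): each interior bin of the one-sided rfft output stands for two conjugate-symmetric bins of the full spectrum, and the unnormalized irfft used in backward counts both of them, so those bins are divided by 2 first. For an even N:

import numpy as np

N = 8
# Take L(x) = sum of the real part of the one-sided rfft of x.
# Direct gradient: dL/dx_n = sum_{k=0..N//2} cos(2*pi*k*n/N)
n = np.arange(N)
k = np.arange(N // 2 + 1)
g_direct = np.cos(2 * np.pi * np.outer(n, k) / N).sum(axis=1)

# Gradient via the trick in backward(): halve the doubly-counted interior bins,
# then apply an unnormalized inverse real FFT (NumPy's irfft times N).
g_half = np.ones(N // 2 + 1)
g_half[1:-1] /= 2
g_trick = np.fft.irfft(g_half, n=N) * N

print(np.allclose(g_direct, g_trick))   # True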

backward() error when first fft, then ifft, and then fft

The following is the code:

import torch
from torch.autograd import Variable
import pytorch_fft.fft as fft

f = fft.autograd.Fft2d()
invf = fft.autograd.Ifft2d()

x = Variable(torch.arange(0, 16).view(1, 1, 4, 4), requires_grad=True).cuda()
y = Variable(torch.zeros(1, 1, 4, 4), requires_grad=True).cuda()

fx, fy = f(x, y)
ffx, ffy = invf(fx, fy)
fffx, fffy = f(ffx, ffy)

fffx.sum().backward()

and error message:

Traceback (most recent call last):
  File "test_fft.py", line 16, in <module>
    fffx.sum().backward()
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 144, in backward
    self._execution_engine.run_backward((self,), (gradient,), retain_variables)
RuntimeError: could not compute gradients for some functions

But if I first call fft, then fft, and then ifft, backward() works fine.

slow speed on torch.DataParallel models

Hi,

I am trying to use this package to compute FFTs with torch.nn.DataParallel(model), but it seems that with the same per-GPU batch size, the FFT consumes much more time on 4 GPUs than on one GPU:

With batch_size 16 on one GPU (784 x 8192 with a 1D fft), it costs about 0.60s in fft and 0.21s in ifft.

But with batch_size 64 on 4 GPUs, it costs about 4s in fft and 1s in ifft.

So could you provide enhancements for multi-GPU FFT? Thanks.

build err

Hi, I tried pip install pytorch-fft and it succeeded, but when I ran build.py it ended with an error:

No such file or directory: '/tmp/tmpdq7_hm2i/pytorch_fft._ext.th_fft._th_fft.so'

My OS is Ubuntu 16.04 with Python 3.6.

Batchwise fft

Hey,
FFT computation works fine in online training. However, it doesn't work for batch data. I tried mini-batches where rows are samples and columns are features, and it seems the fft function computes the transform of the entire matrix instead of each row. I also tried reshaping the matrix into a 3-D tensor (batch_size, 1, num_features); this doesn't work either.
Is it possible to do the FFT row-wise, or for multiple instances at once?

Thanks,
Asif
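Per the shape convention in the README (all leading dimensions are treated as the batch and the last d dimensions as the transform), a (batch_size, num_features) matrix should already be transformed row by row. A minimal sketch of the expected behavior (assuming a CUDA device; sizes are placeholders):

import torch
import pytorch_fft.fft as fft

x_re = torch.randn(16, 128).cuda()   # 16 samples (rows), 128 features each
x_im = torch.zeros(16, 128).cuda()
y_re, y_im = fft.fft(x_re, x_im)     # 1D transform along the last dimension, one per row
print(y_re.size())                   # expected: torch.Size([16, 128])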

NotImplementedError because type is Tensor

Hello,

I'm using PyTorch 0.4 and I cannot call any of the fft methods, because I always fall into the NotImplementedError case.

Indeed, the methods contain code like this:

def fft(X_re, X_im): 
    if 'Float' in type(X_re).__name__ :
        f = th_fft.th_Float_fft1
    elif 'Double' in type(X_re).__name__: 
        f = th_fft.th_Double_fft1
    else: 
        raise NotImplementedError
    return _fft(X_re, X_im, f, 1)

But the function arguments are of type Tensor (if I print type(X_re).__name__, for example).

I simply tried the homepage example:

# Example that uses the autograd for 2D fft:
import torch
from torch.autograd import Variable
import pytorch_fft.fft.autograd as fft
import numpy as np

f = fft.Fft2d()
invf= fft.Ifft2d()

fx, fy = (Variable(torch.arange(0,100).view((1,1,10,10)).cuda(), requires_grad=True), 
          Variable(torch.zeros(1, 1, 10, 10).cuda(),requires_grad=True))
k1,k2 = f(fx,fy)
z = k1.sum() + k2.sum()
z.backward()
print(fx.grad, fy.grad)

Thanks in advance for your help
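Given the update note at the top of this README (FFT functionality is built into PyTorch 0.4), a possible workaround on 0.4 is to use the native API instead of this package. A minimal sketch (assuming torch.fft with the real and imaginary parts packed into a trailing dimension of size 2):

import torch

a = torch.randn(1, 1, 10, 10).cuda()
x = torch.stack([a, torch.zeros_like(a)], dim=-1)   # pack (real, imag) into the last dimension
y = torch.fft(x, signal_ndim=2)                     # complex-to-complex 2D FFT
y_re, y_im = y[..., 0], y[..., 1]
print(y_re.size(), y_im.size())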

Install not working with pip

When I do pip install pytorch-fft, I get the following error:

Collecting pytorch-fft
  Downloading https://files.pythonhosted.org/packages/9d/63/9d4a93f1c197f2a0d4f0a5ea7406da0d2bdc084be5826087c235da23eb4b/pytorch_fft-0.15.tar.gz
    ERROR: Complete output from command python setup.py egg_info:
    ERROR: Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-install-k3e2_fwg/pytorch-fft/setup.py", line 6, in <module>
        import build
      File "/tmp/pip-install-k3e2_fwg/pytorch-fft/build.py", line 3, in <module>
        from torch.utils.ffi import create_extension
      File "/volatile/home/Zaccharie/workspace/[project]/venv/lib/python3.6/site-packages/torch/utils/ffi/__init__.py", line 1, in <module>
        raise ImportError("torch.utils.ffi is deprecated. Please use cpp extensions instead.")
    ImportError: torch.utils.ffi is deprecated. Please use cpp extensions instead.
    ----------------------------------------
ERROR: Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-k3e2_fwg/pytorch-fft/

Build issue: No module named _th_fft

I've installed pytorch_fft from source using python setup.py install --prefix=/my/prefix. I'm getting an error:

import pytorch_fft
#Traceback (most recent call last):
#  File "<stdin>", line 1, in <module>
#  File "pytorch_fft/__init__.py", line 1, in <module>
#    from . import fft
#  File "pytorch_fft/fft/__init__.py", line 1, in <module>
#    from .fft import *
#  File "pytorch_fft/fft/fft.py", line 3, in <module>
#    from .._ext import th_fft
#  File "pytorch_fft/_ext/th_fft/__init__.py", line 3, in <module>
#    from ._th_fft import lib as _lib, ffi as _ffi
#ImportError: No module named _th_fft

It seems the installer is confused (note the strange /home/... directory structure):

pytorch_fft $ find build/
#build/
#build/lib.linux-x86_64-2.7
#build/lib.linux-x86_64-2.7/_th_fft.so
#build/lib.linux-x86_64-2.7/pytorch_fft
#build/lib.linux-x86_64-2.7/pytorch_fft/fft
#build/lib.linux-x86_64-2.7/pytorch_fft/fft/autograd.py
#build/lib.linux-x86_64-2.7/pytorch_fft/fft/__init__.py
#build/lib.linux-x86_64-2.7/pytorch_fft/fft/fft.py
#build/lib.linux-x86_64-2.7/pytorch_fft/__init__.py
#build/lib.linux-x86_64-2.7/pytorch_fft/_ext
#build/lib.linux-x86_64-2.7/pytorch_fft/_ext/__init__.py
#build/lib.linux-x86_64-2.7/pytorch_fft/_ext/th_fft
#build/lib.linux-x86_64-2.7/pytorch_fft/_ext/th_fft/__init__.py
#build/temp.linux-x86_64-2.7
#build/temp.linux-x86_64-2.7/_th_fft.c
#build/temp.linux-x86_64-2.7/build
#build/temp.linux-x86_64-2.7/build/temp.linux-x86_64-2.7
#build/temp.linux-x86_64-2.7/build/temp.linux-x86_64-2.7/_th_fft.o
#build/temp.linux-x86_64-2.7/home
#build/temp.linux-x86_64-2.7/home/mscho
#build/temp.linux-x86_64-2.7/home/mscho/vadim
#build/temp.linux-x86_64-2.7/home/mscho/vadim/pytorch_fft
#build/temp.linux-x86_64-2.7/home/mscho/vadim/pytorch_fft/pytorch_fft
#build/temp.linux-x86_64-2.7/home/mscho/vadim/pytorch_fft/pytorch_fft/src
#build/temp.linux-x86_64-2.7/home/mscho/vadim/pytorch_fft/pytorch_fft/src/th_fft_cuda.o
#build/bdist.linux-x86_64
