ffdnet's Issues

Image denoising

How can I determine the noise intensity of an image? And if I test a clean image, what result will I get?
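
Note: the repository does not answer this directly. As an illustration only, one common rough way to gauge the AWGN level of an image is the median absolute deviation of adjacent-pixel differences; the sketch below is not part of FFDNet, assumes i.i.d. Gaussian noise, and on a clean or highly textured image it will mostly reflect image content rather than noise.

import numpy as np

def estimate_sigma(img):
    """Rough AWGN noise-level estimate from adjacent-pixel differences.
    img: 2-D float array in [0, 255]. Returns an estimated sigma."""
    # Differences of neighbouring pixels of an image corrupted by i.i.d.
    # Gaussian noise with std sigma have std sigma*sqrt(2) on flat regions.
    d = np.diff(img.astype(np.float64), axis=1)
    # The median absolute deviation is robust to the (sparse) image edges;
    # 0.6745 converts MAD to standard deviation for a Gaussian.
    mad = np.median(np.abs(d - np.median(d)))
    return mad / (0.6745 * np.sqrt(2.0))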

Exact training set of FFDNet

Hi,

Thanks for your amazing work.

I would like to confirm the exact training set of FFDNet. In your paper, you state: "We collected a large dataset of source images, including 400 BSD images, 400 images selected from the validation set of ImageNet [53], and the 4,744 images from the Waterloo Exploration Database [54]".

Could you tell me where I can find the 400 BSD images and the 400 images selected from the validation set of ImageNet?

Spatially Variant Noise for Pytorch Codes

Hi, I was wondering if anyone can help with implementing a non-uniform noise level map in the PyTorch version. It is only implemented in the MATLAB version of the existing code. Thanks!
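
Note: the repository itself does not ship a PyTorch non-uniform map. Below is a minimal sketch of generating spatially variant AWGN together with its matching noise level map; how the map is actually fed to the network depends on the particular PyTorch port (the IPOL code builds a constant map from a scalar sigma, so its forward pass would need a small change to accept a full map), and all names here are illustrative.

import torch
import torch.nn.functional as F

def spatially_variant_noise(clean, sigma_map):
    """Add Gaussian noise whose standard deviation varies per pixel.
    clean:     (N, C, H, W) tensor in [0, 1]
    sigma_map: (N, 1, H, W) tensor of per-pixel noise std, same scale as clean."""
    noise = torch.randn_like(clean) * sigma_map   # sigma_map broadcasts over channels
    return clean + noise, sigma_map

# Example: noise level ramping from 5/255 on the left to 50/255 on the right.
N, C, H, W = 1, 1, 256, 256
ramp = torch.linspace(5.0 / 255, 50.0 / 255, W).view(1, 1, 1, W).expand(N, 1, H, W)
noisy, sigma_map = spatially_variant_noise(torch.rand(N, C, H, W), ramp)
# FFDNet concatenates the map with the pixel-unshuffled image at half resolution,
# so the map would typically be downsampled before being fed in:
sigma_map_ds = F.interpolate(sigma_map, scale_factor=0.5, mode='nearest')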

Unknown layer type 'SubP'

Hi there,

I just ran Demo_AWGN_Gray and got this error:

Demo_AWGN_Gray
Error using vl_simplenn (line 377)
Unknown layer type 'SubP'.
Error in Demo_AWGN_Gray (line 88)
res = vl_simplenn(net,input,[],[],'conserveMemory',true,'mode','test');

Any idea how to fix it?

Best regards,

Smaller size of denoising network

Hi,

I tried to use an abstract-interpretation-based verifier to over-approximate and analyze your FFDNet network.
But the network contains 175,616 neurons across 15 layers, even when denoising an MNIST image, and the verifier can't scale up to handle such a large and deep network.
Even though I don't think it is likely, I still want to check: do you have a smaller network?
Thanks!

my comments on this paper

In my view, this article shares nearly the same idea as work in the demosaicking area, especially the paper "Deep Joint Demosaicking and Denoising" from Frédo Durand's group.

The fast speed mainly comes from packing the original image into a smaller resolution.

The good noise-adaptation ability comes from the additional noise level input.

comparison with RBDN

Hello, this is more of a question than an issue.
I was reading recent papers on image denoising and have read the DnCNN and FFDNet papers. I was curious why you have not compared FFDNet with RBDN -- it is mentioned in the related work of FFDNet. Will the comparison be included in the final version of FFDNet, since RBDN is also quite recent?

Thanks,
Touqeer

convert models

@cszn Hi, thanks for your great work. It performs better than CBDNet and DnCNN on my data, and I want to port this algorithm to an embedded platform. I am having trouble converting the trained .pth result to .onnx; can you give me some advice? Looking forward to your reply, thanks a lot.
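
Note: no official .pth-to-.onnx conversion script is provided in the repository. Below is a minimal sketch of exporting a PyTorch FFDNet checkpoint with torch.onnx.export; the import path, constructor argument, and (image, sigma) forward signature are assumptions about the particular PyTorch port being used, not the official API.

import torch

from models import FFDNet   # hypothetical import path of the PyTorch port

model = FFDNet(num_input_channels=3)             # constructor argument is illustrative
state_dict = torch.load('net.pth', map_location='cpu')
# If the checkpoint was saved from nn.DataParallel, the keys may carry a
# 'module.' prefix that has to be stripped before load_state_dict.
model.load_state_dict(state_dict)
model.eval()

# Dummy inputs with fixed shapes; many embedded runtimes require static shapes.
dummy_img = torch.randn(1, 3, 256, 256)
dummy_sigma = torch.full((1,), 25.0 / 255)

torch.onnx.export(
    model,
    (dummy_img, dummy_sigma),
    'ffdnet.onnx',
    input_names=['noisy', 'sigma'],
    output_names=['denoised'],
    opset_version=11,
)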

The most interesting is how to work with different Noise Level

The method could be framework for other image processing like Demosaic, HDRNet and Learning to see in the dark...

一个统一的处理架构针对几乎连续的场景,这是Deep Learning可以真正工业化的方法。
赞!

This is not an issue. Appciate your innovation!

Logic while training FFDNET

In your file model_train.m, you multiply the sigmas with an array of random numbers. Since randn produces negative numbers such as -3.14, the sigma will exceed 75 in the image. My question is: why do you multiply the sigmas with that array?

K = randi(8);                                    % pick one of 8 data-augmentation modes
labels = imdb.HRlabels(:,:,:,batch);
labels = data_augmentation(labels,K);
sigma_max = 75;
sigmas = (rand(1,size(labels,4))*sigma_max)/255; % one noise level per image, uniform in [0, 75]/255 (rand, not randn)
<-------------- This line! ------------------>
new_arr = bsxfun(@times,randn(size(labels)), reshape(sigmas,[1,1,1,size(labels,4)])); % unit-variance Gaussian noise, scaled per image by its sigma
inputs = labels + new_arr;
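
Note: for reference, here is a rough PyTorch equivalent of the quoted MATLAB lines (an illustrative sketch, not the authors' code). The sigmas come from rand, which is uniform in [0, 1] and never negative; randn only generates the unit-variance noise that is then scaled by each image's sigma.

import torch

sigma_max = 75.0
labels = torch.rand(16, 1, 64, 64)               # stand-in batch of clean patches in [0, 1]

# One noise level per image, drawn uniformly from [0, sigma_max]/255.
sigmas = torch.rand(labels.size(0)) * sigma_max / 255.0

# Unit-variance Gaussian noise, scaled per image by its sigma
# (this is where randn appears and where negative values are expected).
noise = torch.randn_like(labels) * sigmas.view(-1, 1, 1, 1)
inputs = labels + noise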

Are color and grayscale images trained separately?

Hi, I have recently been reading your FFDNet paper and want to train a model myself. In the code, grayscale and color images seem to be handled separately. Is that the case, or did I misread it? Thanks.

Error/PSNR while training

I am having trouble with your training code: the error stops decreasing after a certain threshold (2.8). Can you tell me how you managed to train your model?

What's FFDNet+?

Thanks for publishing the code. FFDNet shows very good performance on real noisy images when denoted as FFDNet+, but I cannot find any description of it. Could you add some explanation?

All the best.

pytorch version is very slow compared with matconvnet

I used your MATLAB version and the PyTorch version to test the FFDNet color model; the MATLAB version takes about 20 ms, but the PyTorch version takes about 1 s. The CUDA version is 8.0 and cuDNN is 6.0.21. Do you have any idea about this? What are your testing time and PyTorch version?
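
Note: one common source of misleading PyTorch timings (not necessarily the cause here) is that CUDA kernels launch asynchronously and the first forward pass pays initialization costs. Below is a minimal timing sketch that excludes warm-up and synchronizes before reading the clock; the model(img, sigma) call signature is assumed from the PyTorch port.

import time
import torch

@torch.no_grad()
def time_forward(model, img, sigma, n_runs=20):
    """Average GPU inference time in milliseconds, excluding warm-up."""
    model.eval()
    for _ in range(3):                   # warm-up: cuDNN selection, CUDA context
        model(img, sigma)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(n_runs):
        model(img, sigma)
    torch.cuda.synchronize()             # wait for all queued kernels to finish
    return (time.time() - start) / n_runs * 1000.0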

Elements in noise level map

Hi,

For AWGN images, you mention in the paper that the noise level map should have uniform elements σ.
May I know why the noise level map is set this way for AWGN?
And is it OK to denoise an AWGN image with a non-uniform noise level map?

Thanks,
Yuyi

Request for test code

Hi @cszn
This project and paper are great.
I want to test spatially variant noisy images with a noise level mask (Fig. 5 in the arXiv paper); could you share the test code?
Thanks a lot

how to train your model

Hi @cszn
This project is very good, but I want to train on my own dataset; could you share the training code? Thanks a lot

Training problem

I use the Train400 dataset with FFDNet-master\TrainingCodes\FFDNet_TrainingCodes_v1.0\Demo_Train_FFDNet_gray.m.
The training seems to run without problems; the output looks like this:
train: epoch 499 : 1/ 4: loss: 2.3612
train: epoch 499 : 2/ 4: loss: 3.2255
train: epoch 499 : 3/ 4: loss: 3.5821
train: epoch 499 : 4/ 4: loss: 3.0635
train: epoch 500 : 1/ 4: loss: 3.9296
train: epoch 500 : 2/ 4: loss: 4.1511
train: epoch 500 : 3/ 4: loss: 2.4029
train: epoch 500 : 4/ 4: loss: 2.2997
But I did not see the final trained model. Could you please give some recommendations on what I should do?

Keras Training and Testing Code

First of all, thank you for sharing your scientific progress with the GitHub community.

I opened this issue to ask about, and keep track of, a possible Keras implementation of this work.
Is there any plan to release a Keras version in the near future?

The PyTorch code at http://www.ipol.im/pub/pre/231/ seems to have a problem

imgn_train = img_train + noise

# Create input Variables
img_train = Variable(img_train.cuda())
imgn_train = Variable(imgn_train.cuda())
noise = Variable(noise.cuda())
stdn_var = Variable(torch.cuda.FloatTensor(stdn))

# Evaluate model and optimize it
out_train = model(imgn_train, stdn_var)
loss = criterion(out_train, noise) / (imgn_train.size()[0] * 2)
loss.backward()
optimizer.step()

This is the code you recommend on your project page, from http://www.ipol.im/pub/pre/231/.

Notice that when it computes the loss, it compares the model's output with the noise; that is, the model predicts the noise, which seems different from what your paper describes.
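
Note: if that implementation is indeed trained to predict the noise (residual learning, as in DnCNN) rather than the clean image, the denoised result is recovered by subtracting the prediction from the noisy input. An illustrative line, using the variable names from the snippet above:

import torch

# If the network output is the predicted noise n_hat, the clean estimate is y - n_hat.
denoised = torch.clamp(imgn_train - out_train, 0.0, 1.0)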

When I use the pytorch model, I meet an error.

Please help! When I use the PyTorch model, I get an error.
I want to denoise an image using one of the pretrained models:

python test_ffdnet_ipol.py
--input input.png
--noise_sigma 25
--add_noise True

I get this output:

Testing FFDNet model

Parameters:
add_noise: True
input: 111.png
suffix:
noise_sigma: 0.09803921568627451
dont_save_results: False
no_gpu: False
cuda: True

rgb: False
im shape: (518, 774)
Loading model ...

Process finished with exit code 0
I don't get any result, even though I have configured all the required libraries.
Then I debugged it, and when I stepped over this statement, the process exited:
if args['cuda']:
    state_dict = torch.load(model_fn)
    device_ids = [0]
    model = nn.DataParallel(net, device_ids=device_ids).cuda()

I want to know why I can't use the pretrained model? @cszn
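
Note: one way to narrow down a silent exit during model loading (a debugging suggestion, not a confirmed fix) is to load the checkpoint on the CPU first, which separates file and state-dict problems from CUDA problems; model_fn is the path used in the quoted code.

import torch

# If this succeeds, the failure is on the CUDA side (driver/toolkit mismatch,
# out of memory, ...) rather than in the checkpoint file itself.
state_dict = torch.load(model_fn, map_location='cpu')
print(len(state_dict), 'tensors loaded')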

Can't run the demo

I can't run the demo, because 'vl_simplenn_tidy' doesn't exist. It seems that some other functions don't exist either.

Request for FFDNET+ sigmas applied to DND

Hi @cszn
I was very impressed by your work, including the good performance on the DND set.
I heard that your sigmas were manually selected;
could you share the sigmas applied to each image of the DND?

Thanks a lot

Where is "main_train_ffdnet.py" ???

Hello Sir.

I tried to get "main_train_ffdnet.py", but when I clicked the link I couldn't get the file.

How can I get this file?

Thanks,
Edward Cho.

training in special cases

Hi, first of all I would like to thank you for your very good work.
Now I have a question: I have images with more than one channel, and the noise level differs between channels.
How should the noise level map be set?
Thanks for your help.
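
Note: the released FFDNet models take a single noise level map that is shared by all channels, so genuinely per-channel noise levels would presumably require a retrained model whose map has one channel per image channel. A hypothetical sketch of building such a map (names and shapes are illustrative):

import torch

def per_channel_sigma_map(sigmas, height, width):
    """Constant (1, C, H, W) noise level map with one sigma per channel.
    sigmas: per-channel noise stds on the same scale as the image,
            e.g. (5/255, 15/255, 30/255) for a 3-channel image."""
    s = torch.tensor(list(sigmas)).view(1, -1, 1, 1)
    return s.expand(1, s.size(1), height, width)

sigma_map = per_channel_sigma_map((5 / 255, 15 / 255, 30 / 255), 256, 256)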
