sg-nm / operation-wise-attention-network
Attention-based Adaptive Selection of Operations for Image Restoration in the Presence of Unknown Combined Distortions (CVPR 2019)
License: MIT License
Sorry to bother you, but I had some problems running the code you provided. After I successfully ran the code, the PSNR and SSIM values were much higher than those reported in your paper, which confused me, because I did not modify your code except for the GPU ID and batch size. The train.h5 dataset was also generated with the file you mentioned, and validation.h5 was copied directly from RL-Restore. I was wondering whether there is a problem with the validation data, because the README does not describe the validation data exactly.
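In case it helps to rule out an evaluation mismatch rather than a data mismatch, here is a minimal, self-contained PSNR sketch (not taken from the repo) that can be run on a known image pair to check whether your metric computation agrees with the one in the paper. The `max_val=255.0` assumption is mine; swap in 1.0 if your images are normalized.

```python
import numpy as np

def psnr(ref, out, max_val=255.0):
    """Peak signal-to-noise ratio between two images of the same shape."""
    ref = ref.astype(np.float64)
    out = out.astype(np.float64)
    mse = np.mean((ref - out) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Sanity check: identical images give infinite PSNR, and a single-pixel
# error of 16 on an 8x8 image gives a fixed, reproducible value.
img = np.zeros((8, 8), dtype=np.uint8)
noisy = img.copy()
noisy[0, 0] = 16
print(psnr(img, noisy))
```

If this disagrees with the numbers your evaluation script prints on the same pair, the gap is in the metric code, not the validation set.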
Thank you for your code and work!
I want to reproduce your results in the paper.
But the train.h5 files are not exactly the same because of the randomness when generating them. I want to train the model using the train.h5 you used.
Can you upload or share the train.h5 file?
Hi,
Thanks for your amazing work!
I want to test your model on my own data, but it always runs out of memory, even on a GTX 1080 Ti. So I would like to know which graphics card you used in your experiments.
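One common workaround for out-of-memory errors at test time is to run the network on fixed-size tiles and stitch the outputs back together. Below is a minimal sketch of that tiling logic (not from the repo, and using an identity stand-in for the model); in real use you would call the network inside `torch.no_grad()` on each tile, which also saves a lot of memory by itself.

```python
import numpy as np

def tiled_inference(img, model, tile=64):
    """Run `model` on fixed-size tiles of an HxWxC image and stitch the
    outputs back together. Assumes the model maps a tile to an output of
    the same spatial size; edge tiles may be smaller than `tile`."""
    h, w, c = img.shape
    out = np.zeros_like(img)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = img[y:y + tile, x:x + tile]
            out[y:y + tile, x:x + tile] = model(patch)
    return out

# Dummy "model" (identity) just to exercise the tiling and stitching.
img = np.random.rand(128, 96, 3).astype(np.float32)
restored = tiled_inference(img, lambda p: p, tile=64)
```

Tiling without overlap can leave seams for networks with large receptive fields; overlapping tiles with cropped borders is the usual refinement.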
Hello, good work, but I have a question.
I want to test your model on my own data, and I would like to know how to prepare my own dataset. My file structure is as follows, and I don't know what I should put in the target folder:
--Project
----dataset
------yourdata_test
--------input
----------1.jpg
----------2.jpg
...
--------target
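For what it's worth, the usual convention in restoration repos (this is an assumption, not something stated in this README) is that `input` holds the distorted images and `target` holds the clean ground-truth images with matching filenames. A small sketch of creating that layout, using a temporary directory so it is safe to run anywhere:

```python
import os
import tempfile

# Assumed layout: input/1.jpg pairs with target/1.jpg (clean ground truth).
root = tempfile.mkdtemp()
for sub in ("input", "target"):
    os.makedirs(os.path.join(root, "dataset", "yourdata_test", sub))

print(sorted(os.listdir(os.path.join(root, "dataset", "yourdata_test"))))
```

If no ground truth exists for your own photos, PSNR/SSIM cannot be computed; some users put copies of the inputs in `target` just to satisfy the loader and ignore the reported metrics.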
Sorry to bother you again. After I successfully ran the trained model model_best.pth, the result is PSNR = 34.5027 and SSIM = 0.7611, which is inconsistent with the values mentioned in the paper.
Hello, may I ask whether the validation data you used when training the network is validation.h5 from RL-Restore? I was not sure about this while training the model, and when I set validation.h5 as the validation data, the result was quite different from your paper's.
Hi,
First of all, thanks so much for your amazing work! I am trying your code and have a question:
In the current test code, you still deal with images of size 63x63, the same as the training images. However, in your object detection examples, the images are apparently at their original sizes, which differ from image to image. How do you deal with this situation? Do you directly input such an image and get its output, or do you resize it first and then resize the output back?
Thanks for your help in advance!
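If the network is fully convolutional (an assumption on my part, not confirmed in this thread), arbitrary sizes can be handled by reflect-padding the image so its height and width are multiples of the network's stride, running inference at full size, and cropping back. A minimal sketch of that padding step:

```python
import numpy as np

def pad_to_multiple(img, m=4):
    """Reflect-pad an HxWxC image so H and W become multiples of m,
    letting a fully convolutional network process it at original size.
    Returns the padded image and the original (H, W) for cropping back."""
    h, w = img.shape[:2]
    ph = (-h) % m  # rows to add
    pw = (-w) % m  # cols to add
    padded = np.pad(img, ((0, ph), (0, pw), (0, 0)), mode="reflect")
    return padded, (h, w)

img = np.random.rand(61, 95, 3)
padded, (h, w) = pad_to_multiple(img, m=4)
# after inference: out = out[:h, :w] to recover the original size
```

The multiple `m=4` here is illustrative; the right value depends on the network's downsampling factor.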
Hello, I have observed that after training the model, you test with images that suffer from different types of degradation (blur, rain, noise, etc.). Unfortunately, I did not find the corresponding test dataset. Could you please provide the test data? Also, I was wondering whether you trained only with moderate images and then tested on the different categories (mild, moderate, severe, raindrop, blur, noise, JPEG)? Looking forward to your reply. Thank you very much!
I generated the training dataset by running RL-Restore, and then I put train.h5 into dataset/train/. But when I run python main.py -m mix -g 1, there is an error that there is no such file as dataset/val/. I would like to know what I should put in the ./dataset/val folder. Hoping for your help, thank you so much!
Hello, good work, but I have a question: how do I produce the visualization of the mean and variance of the attention weights?
Thank you very much for your work in this area. When I ran your code, I encountered some problems.
All the recovered images show a strange blue color cast. I used the input images in RL-Restore/data/test/mine with your pre-trained model Trained_model/model_best.pth. Can you tell me where the problem is? Looking forward to your reply.
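A blue color cast on every output is often a BGR/RGB channel-order mismatch (e.g. images loaded with OpenCV, which returns BGR, fed to a model trained on RGB). This is a guess at the cause, not a confirmed diagnosis, but it is cheap to test by reversing the channel axis before and after inference:

```python
import numpy as np

# cv2.imread returns HxWx3 arrays in BGR order; a model trained on RGB
# will shift colors toward blue if fed BGR directly. Reversing the last
# axis converts between the two orders (the operation is its own inverse).
bgr = np.random.rand(4, 4, 3)
rgb = bgr[:, :, ::-1]  # BGR -> RGB (and back again if applied twice)
```

If the cast disappears after the swap, the loader's channel order was the problem.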
Hello, thanks for your excellent work!
# save images
vutils.save_image(output.data, './results/Outputs/%05d.png' % (int(i)), padding=0, normalize=False)
My torchvision version may be different from the one used here: in that version save_image does not modify the tensor, but in mine it does.
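Since `save_image` behavior around out-of-range values has varied across torchvision versions, one version-independent safeguard is to clamp the output to the valid range yourself before saving (for a torch tensor, `output.data.clamp_(0, 1)` would be the equivalent). A minimal numpy sketch of the clamping step:

```python
import numpy as np

# Clamp network output to [0, 1] before saving, so out-of-range values
# cannot wrap or shift colors regardless of how save_image handles them.
out = np.array([-0.2, 0.5, 1.3])
clamped = np.clip(out, 0.0, 1.0)  # values become 0.0, 0.5, 1.0
```

With the range fixed up front, `normalize=False` then behaves the same way across versions.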
Hello, thanks for your excellent work! Could you please provide the datasets referred to in the ablation study? Thank you very much.