
deeplpf's People

Contributors

dependabot[bot], learning2hash, sjmoran


deeplpf's Issues

Poor performance on the MIT-Adobe FiveK dataset

Hello, thank you for the excellent work! I ran into a problem when I trained and tested your model on the dataset mentioned in your paper. I followed the procedure described in DPE to process the MIT-Adobe FiveK dataset and used their test list for testing. However, I got very poor performance, with a PSNR of 17.49 dB on the FiveK-DPE test set.
The following shows one of my processed image pairs.
[Attached images: input image and expertC ground-truth image for pair 0001]
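For reference, I computed PSNR along these lines (a minimal sketch of my own, assuming 8-bit images loaded as NumPy arrays; the repo's evaluation code may differ):

```python
import numpy as np

def psnr(pred: np.ndarray, target: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two images of identical shape."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```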

Training giving strange results

Hey Sean, I'm totally inspired by your work.

I've turned DeepLPF into device-agnostic code to run on "cpu" (I'm on an M1 Mac and "mps" is still unreliable). I've been successful in testing images with your existing checkpoints; however, I'm getting strange results when training on my own data, and I can't figure out what would be giving this "look". I've followed the training-data image prep as per your readme file.
input: [img3]
groundtruth: [img3]
test: [img3_TEST_1_1_PSNR_4 726_SSIM_0 202 — i.e. PSNR 4.726 dB, SSIM 0.202]

Any suggestions?
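For reference, my device-selection logic looks roughly like this (a minimal sketch; the stand-in model and input are illustrative, not the repo's actual classes):

```python
import torch
import torch.nn as nn

# Prefer Apple's Metal backend when available; otherwise fall back to CPU.
# (MPS is still flaky on M1, hence the explicit CPU escape hatch.)
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

net = nn.Conv2d(3, 3, 3, padding=1).to(device)  # stand-in for the DeepLPF net
x = torch.rand(1, 3, 64, 64, device=device)     # stand-in input batch
print(device, net(x).shape)
```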

Wrong loss function in paper/poster?

Hi,
I believe the paper and poster both show a loss function with contradictory terms: the first term (Lab) gets bigger as the images become more different, whereas the second term (MS-SSIM) gets bigger as the images become more similar.
[Attached: screenshot of the loss function from the paper]
The code, however, does not make that mistake: the loss it actually computes is Lab + (1 - MS-SSIM) (though I'm not sure why the 1 is necessary).
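For illustration, a minimal sketch of that combined loss as I read it (it assumes the `ms_ssim` helper from the third-party `pytorch_msssim` package; note the constant 1 only shifts the value to keep it non-negative and does not affect gradients):

```python
import torch
from pytorch_msssim import ms_ssim  # assumed third-party dependency

def combined_loss(pred_lab, target_lab, pred_rgb, target_rgb):
    # L1 distance in CIELab space: grows as the images differ more.
    lab_term = torch.mean(torch.abs(pred_lab - target_lab))
    # MS-SSIM is a similarity in [0, 1], so 1 - MS-SSIM is a dissimilarity
    # that, like the Lab term, is minimized when the images are identical.
    ssim_term = 1.0 - ms_ssim(pred_rgb, target_rgb, data_range=1.0)
    return lab_term + ssim_term
```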

Issue in data.py

Line 232 in data.py should be img_id = file.split(".")[0] instead of img_id = file.split("-")[0]
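In context, the one-character fix looks like this (surrounding code paraphrased):

```python
# data.py, around line 232 — derive the image id from the file name.
# Broken: file names like "0001.jpg" contain no "-", so the split is a
# no-op and img_id ends up as the full file name with extension.
img_id = file.split("-")[0]

# Fixed: split on the extension dot instead, e.g. "0001.jpg" -> "0001".
img_id = file.split(".")[0]
```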

Varying input image size

Hello,
Could you please explain how DeepLPF handles varying image sizes at (1) training time and (2) inference/prediction time?
Pointers to the parts of the code that support this would be highly appreciated.
Thanks!
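For what it's worth, a simple empirical probe is to push tensors of different spatial sizes through the network and compare shapes (a hypothetical check with a stand-in module; substitute the actual DeepLPF model):

```python
import torch
import torch.nn as nn

net = nn.Conv2d(3, 3, 3, padding=1)  # stand-in; replace with the real model
net.eval()
with torch.no_grad():
    for h, w in [(256, 256), (341, 512), (512, 768)]:
        x = torch.rand(1, 3, h, w)  # NCHW input at a different resolution
        print(f"input {tuple(x.shape)} -> output {tuple(net(x).shape)}")
```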

data.py only has 248 lines

I want to train DeepLPF again, but data.py only has 248 lines, so where should I modify the path?

On the other hand, I would like to know what the three pre-trained models correspond to. I believe adobe_dpe uses 2,250 training images and 500 testing images, and adobe_upe uses 4,500 training images and 500 testing images, but what about distort-and-recover?

Last question: how did you export the MIT-Adobe FiveK dataset? Did you apply any operations such as resizing?

I used the last 500 images of the FiveK dataset as testing images, resized to a long edge of 512 pixels, with your three pre-trained models, and got fairly poor results (PSNR 21.57 dB with dpe, 22.51 dB with upe, and 20.87 dB with distort-and-recover).
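For reference, my long-edge resize was roughly the following (a minimal Pillow sketch; the choice of the LANCZOS filter is my own):

```python
from PIL import Image

def resize_long_edge(img: Image.Image, long_edge: int = 512) -> Image.Image:
    """Scale an image so that its longer side equals long_edge pixels."""
    w, h = img.size
    scale = long_edge / max(w, h)
    return img.resize((round(w * scale), round(h * scale)), Image.LANCZOS)

img = resize_long_edge(Image.open("0001.jpg"))  # hypothetical file name
```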

Input to filter blocks

Hi,
According to your poster, the elliptical and graduated filters should take concat(y_hat_2, backbone_features) as input, yet according to the following code (see starting point), all three filter block types (including the polynomial/cubic one) take concat(y_hat_1, backbone_features):

img_cubic = self.cubic_filter.get_cubic_mask(feat, img)

[Attached: screenshot of the architecture diagram from the poster]

Could you please help clarify?

Thanks!
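To make the discrepancy concrete, here is a hypothetical sketch of the two dataflows (all tensor names are illustrative stand-ins for the poster's labels, not the repo's actual variables):

```python
import torch

# Illustrative stand-ins for the intermediates labelled in the poster.
y_hat_1 = torch.rand(1, 3, 64, 64)             # earlier intermediate image
y_hat_2 = torch.rand(1, 3, 64, 64)             # later intermediate image
backbone_features = torch.rand(1, 16, 64, 64)  # backbone feature maps

# As drawn in the poster: elliptical/graduated filters read y_hat_2.
poster_input = torch.cat([y_hat_2, backbone_features], dim=1)

# As written in the code: all three filter blocks read y_hat_1.
code_input = torch.cat([y_hat_1, backbone_features], dim=1)
```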

Test on custom dataset

Hello, authors! Thanks for your excellent work. I would like to know whether it is possible to test custom images other than the Adobe dataset. In the code, I found that the data loader only supports Adobe.
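In the meantime, a minimal generic loader one could adapt (a hypothetical sketch, not part of the repo; it assumes the model consumes CHW float tensors in [0, 1]):

```python
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision.transforms.functional import to_tensor

class FolderDataset(Dataset):
    """Loads every image in a folder as a CHW float tensor in [0, 1]."""

    def __init__(self, root: str):
        self.paths = sorted(
            os.path.join(root, f) for f in os.listdir(root)
            if f.lower().endswith((".jpg", ".jpeg", ".png"))
        )

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        img = Image.open(self.paths[idx]).convert("RGB")
        return to_tensor(img)
```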

Higher batch sizes

Dear author, I want to know when you will add support for higher batch sizes.

About the training dataset

The original MIT-Adobe FiveK (DPE) dataset is too big for me, and as you said, you used "Lightroom to pre-process the images according to the procedure outlined in the DeepPhotoEnhancer (DPE) paper". Could you provide the processed input dataset?

Adobe-UPE dataset

The Adobe-UPE dataset can't be found; could you provide another link, please?
