
Reference code for the paper: Deep White-Balance Editing (CVPR 2020). Our method is a deep learning multi-task framework for white-balance editing.

License: Other

Topics: multitask-learning, white-balance, color-correction, color-constancy, auto-white-balance, image-manipulation, color-manipulation, image-enhancement, color-enhancement, color-processing

deep_white_balance's Introduction

Deep White-Balance Editing, CVPR 2020 (Oral)

Mahmoud Afifi¹,² and Michael S. Brown¹

¹Samsung AI Center (SAIC) - Toronto

²York University

Oral presentation

[Figure: deep_WB_fig]

Reference code for the paper Deep White-Balance Editing. Mahmoud Afifi and Michael S. Brown, CVPR 2020. If you use this code or our dataset, please cite our paper:

@inproceedings{afifi2020deepWB,
  title={Deep White-Balance Editing},
  author={Afifi, Mahmoud and Brown, Michael S},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2020}
}

[Figure: network]

Training data

  1. Download the Rendered WB dataset.

  2. Copy both the input and ground-truth images into a single directory. Each input/ground-truth pair should follow this naming format: input image name_WB_picStyle.png and corresponding ground-truth image name_G_AS.png. This is the same filename style used in the Rendered WB dataset; see the dataset directory for an example. (A minimal file-pairing sketch follows this list.)
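For illustration only, here is a minimal sketch (not part of the repo) that pairs input images with their ground-truth counterparts using the naming convention above, assuming WB and picStyle are single underscore-free tokens:

import os
import re

def pair_training_files(data_dir):
    """Return (input_path, gt_path) pairs found in data_dir."""
    pairs = []
    for f in sorted(os.listdir(data_dir)):
        if f.endswith('_G_AS.png'):
            continue  # skip ground-truth files themselves
        m = re.match(r'(?P<name>.+)_(?P<wb>[^_]+)_(?P<style>[^_]+)\.png$', f)
        if m is None:
            continue  # skip files that don't follow the naming convention
        gt_path = os.path.join(data_dir, m.group('name') + '_G_AS.png')
        if os.path.exists(gt_path):
            pairs.append((os.path.join(data_dir, f), gt_path))
    return pairs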

Code

We provide source code for the Matlab and PyTorch platforms. The two implementations' trained models are not guaranteed to produce exactly the same results.

1. Matlab (recommended)

Prerequisite

  1. Matlab 2019b or higher
  2. Deep Learning Toolbox

Get Started

Run install_.m

Demos:
  1. Run demo_single_image.m or demo_images.m to process a single image or an image directory, respectively. The available tasks are AWB, all, and editing. If you run demo_single_image.m, it saves the result in ../result_images and displays a figure with the results.

  2. Run demo_GUI.m for a GUI demo.

Training Code:

Run training.m to start training. Before running the code, set the training image directory in the datasetDir variable. You can change the training settings in training.m before training.

For example, use the epochs and miniBatch variables to change the number of training epochs and the mini-batch size, respectively. If you set fold = 0 and trainingImgsNum = 0, training will use all training data without fold cross-validation. To limit the number of training images to n, set trainingImgsNum to n. To do 3-fold cross-validation, set fold = testing_fold; the code will then train on the remaining folds and hold out the selected fold for testing.

Other useful options include: patchsPerImg to select the number of random patches per image and patchSize to set the size of training patches. To control the learning-rate drop period and factor, check the get_training_options.m function located in the utilities directory. You can use the loadpath variable to continue training from a checkpoint .mat file. To start training from scratch, use loadpath = [];.

Once training starts, a .csv file will be created in the reports_and_checkpoints directory; you can use this file to track training progress. If you run Matlab with a graphical interface and want to visualize some input/output patches during training, set a breakpoint here and enter the following code in the command window:

close all; i = 1; figure;
subplot(2,3,1); imshow(extractdata(Y(:,:,1:3,i)));
subplot(2,3,2); imshow(extractdata(Y(:,:,4:6,i)));
subplot(2,3,3); imshow(extractdata(Y(:,:,7:9,i)));
subplot(2,3,4); imshow(gather(T(:,:,1:3,i)));
subplot(2,3,5); imshow(gather(T(:,:,4:6,i)));
subplot(2,3,6); imshow(gather(T(:,:,7:9,i)));

You can change the value of i in the above code to see different images in the current training batch. The figure shows the produced patches (first row) and the corresponding ground-truth patches (second row). If you run Matlab without a graphical interface, you can add custom code here to save example patches periodically. Hint: you may need a persistent variable to control the process. An alternative is to use a custom training loop.

2. PyTorch

Prerequisite

  1. Python 3.6

  2. pytorch (tested with 1.2.0 and 1.5.0)

  3. torchvision (tested with 0.4.0 and 0.6.0)

  4. cudatoolkit

  5. tensorboard (optional)

  6. numpy

  7. Pillow

  8. future

  9. tqdm

  10. matplotlib

  11. scipy

  12. scikit-learn

The code may work with library versions other than those specified.

Get Started

Demos:
  1. Run demo_single_image.py to process a single image. Example of applying AWB + different WB settings: python demo_single_image.py --input ../example_images/00.jpg --output_dir ../result_images --show. This example saves the output images in ../result_images and displays a figure with the results.

  2. Run demo_images.py to process an image directory. Example: python demo_images.py --input_dir ../example_images/ --output_dir ../result_images --task AWB. The available tasks are AWB, all, and editing. You can also specify the task in the demo_single_image.py demo.

Training Code:

Run train.py to start training. Before running the code, adjust the training image directories.

Example: CUDA_VISIBLE_DEVICES=0 python train.py --training_dir ../dataset/ --fold 0 --epochs 500 --learning-rate-drop-period 50 --num_training_images 0. In this example, fold = 0 and num_training_images = 0 mean that training will use all training data without fold cross-validation. To limit the number of training images to n, set num_training_images to n. To do 3-fold cross-validation, use fold = testing_fold; the code will then train on the remaining folds and hold out the selected fold for testing (see the sketch below).
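For illustration, a minimal sketch of this fold-splitting logic (not the repo's actual implementation; the file list is a placeholder) using scikit-learn, which is already in the prerequisites:

from sklearn.model_selection import KFold

image_names = ['img_%04d.png' % i for i in range(12)]  # placeholder training file list
testing_fold = 1  # e.g., --fold 1; fold = 0 would disable cross-validation

kf = KFold(n_splits=3, shuffle=True, random_state=0)
for fold_idx, (train_idx, test_idx) in enumerate(kf.split(image_names), start=1):
    if fold_idx != testing_fold:
        continue
    train_files = [image_names[i] for i in train_idx]  # train on the other two folds
    held_out = [image_names[i] for i in test_idx]      # reserve this fold for testing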

Other useful options include: --patches-per-image to select the number of random patches per image, --learning-rate-drop-period and --learning-rate-drop-factor to control the learning-rate drop period and factor, respectively, and --patch-size to set the size of training patches. You can continue training from a checkpoint .pth file using the --load option.
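Combining these options, a hypothetical invocation (the flag values here are illustrative, not recommended settings) could look like: python train.py --training_dir ../dataset/ --fold 0 --num_training_images 0 --patches-per-image 4 --patch-size 128 --learning-rate-drop-period 50 --learning-rate-drop-factor 0.5.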

If you have TensorBoard installed on your machine, run tensorboard --logdir ./runs after training starts to check training progress and visualize samples of input/output patches.

Results

[Figure: results]

This software is provided for research purposes only and CANNOT be used for commercial purposes.

Maintainer: Mahmoud Afifi ([email protected])

Related Research Projects

  • When Color Constancy Goes Wrong: The first work to directly address the problem of incorrectly white-balanced images; it requires a small memory overhead and is fast (CVPR 2019).
  • White-Balance Augmenter: An augmentation technique based on camera WB errors (ICCV 2019).
  • Interactive White Balancing: A simple method that links nonlinear white-balance correction to the user's selected colors to allow interactive white-balance manipulation (CIC 2020).
  • Exposure Correction: A single coarse-to-fine deep learning model with adversarial training to correct both over- and under-exposed photographs (CVPR 2021).


deep_white_balance's Issues

about using deep white balance on video files?

Hi. Thank you for the great work and for publishing the code to the public for free.
I am using your code in Matlab; the GUI preview is a great addition.
I want to ask: could you modify the code to support video files? It would be easier than manually extracting frames and merging the sequence.
Thank you in advance!
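In the meantime, a minimal per-frame workaround sketch (not part of the repo; process_frame is a hypothetical stand-in for a call into the white-balance model) using OpenCV:

import cv2

def white_balance_video(in_path, out_path, process_frame):
    """Apply a per-frame white-balance function to a video file."""
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(process_frame(frame))  # hypothetical model call per frame
    cap.release()
    writer.release()

Note that estimating white balance independently per frame may cause temporal flicker; some form of temporal smoothing would likely be needed in practice.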

How to compute the MAE (mean angular error) for an image

Hi,

I am confused about how to compute the MAE for an image. I know the formula arccos(v1·v2 / (||v1|| ||v2||)), where v1 and v2 are 3×1 (or 1×3) vectors.

But your output and the ground truth are images, and I can't see how to compute the MAE from an image.
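One common interpretation (a sketch, not the authors' official evaluation code) treats each pixel's RGB triplet as a vector, computes the angle between the output and ground-truth vectors, and averages over all pixels:

import numpy as np

def mean_angular_error(img, gt, eps=1e-9):
    """Mean per-pixel angular error in degrees between two HxWx3 images."""
    a = img.reshape(-1, 3).astype(np.float64)
    b = gt.reshape(-1, 3).astype(np.float64)
    dot = np.sum(a * b, axis=1)
    norms = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + eps
    cos = np.clip(dot / norms, -1.0, 1.0)  # guard against rounding outside [-1, 1]
    return np.degrees(np.arccos(cos)).mean()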

how to train your model on Set 2 or the Cube+ dataset?

After checking your code, I found that your dataloader only supports Set 1, but Set 1 is a very large dataset. I want to train your model on Set 2 or Cube+, but I didn't find a corresponding dataloader.

Details about the results of WB manipulation (Table 2)

Is this a quantitative result calculated over all WB settings, i.e., generating the five setting images (2850K, 3800K, 5500K, 6500K, 7500K) from each image in the dataset? If so, the Cloudy ground-truth image (6500K) is missing in the Cube++ dataset; how should I handle that?
Thanks for your impressive work! I gained a lot!

About mapping_func

Hi, thanks for this great work. After reading the source code, I am curious about the role of get_mapping_func in deepWB.py.
[Figure: deepWB_map]
Why do we first fit a mapping from image_resized to output_awb and then use it to compute the output for image? Why don't we compute the output of image directly?
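For context, the paper describes running the network on a downsampled copy of the input for speed and memory, then fitting a color-mapping function between the low-resolution input and the network output and applying that mapping to the full-resolution image. A rough sketch of this idea follows (an illustrative polynomial kernel and least-squares fit, not necessarily the exact kernel used in the repo; pixel values assumed in [0, 1]):

import numpy as np

def kernel(rgb):
    """Illustrative polynomial expansion of Nx3 RGB values."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    ones = np.ones_like(r)
    return np.stack([r, g, b, r*g, r*b, g*b, r*r, g*g, b*b, ones], axis=1)

def fit_mapping(small_in, small_out):
    """Least-squares color mapping from small_in to small_out (Nx3 arrays)."""
    m, *_ = np.linalg.lstsq(kernel(small_in), small_out, rcond=None)
    return m

def apply_mapping(m, image):
    """Apply the fitted mapping to a full-resolution HxWx3 image in [0, 1]."""
    flat = image.reshape(-1, 3)
    return np.clip(kernel(flat) @ m, 0.0, 1.0).reshape(image.shape)

This keeps the expensive network inference at a fixed low resolution while still producing a full-resolution result.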

Evaluation metrics

We evaluated the results of the pre-trained model on Set 2, but they are inconsistent with the results in the paper. Our results: DeltaE 2000 mean = 5.28, MSE mean = 138.94, MAE mean = 4.35. The evaluation algorithm is adapted from WB_sRGB.
I would like to know what is causing this deviation. Could you release the evaluation method?
Thanks!

import os
import cv2

# Load the ground-truth AWB image and the corresponding result image,
# then score them with an evaluation routine adapted from WB_sRGB.
awb_gt = cv2.imread(awb_path, cv2.IMREAD_COLOR)
file_name = os.path.basename(awb_path).replace(".png", "_AWB.png")
awb = cv2.imread(opt.result_root + file_name, cv2.IMREAD_COLOR)
deltaE00, MSE, MAE = evaluate_cc(awb, awb_gt, metadata["cc_mask_area"], opt=3)

Automatic Image Rotation applied to output images

Hi,

First and foremost, I wanted to express my deep gratitude for the repository and the research you've put into it. It's been instrumental for me in refining the white balancing of my personal photographs.

However, I've noticed that some output images are rotated counter-clockwise. I understand there might be multiple reasons for this, but could you point me to the portion of the code that might be causing this behavior? I'd greatly appreciate any insights.

Warm regards,
Botan
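A likely cause (an assumption on my part, not confirmed by the maintainer) is that EXIF orientation metadata is discarded when images are loaded and re-saved. A sketch that bakes the orientation into the pixels with Pillow (already in the prerequisites) before processing:

from PIL import Image, ImageOps

def load_upright(path):
    """Open an image and apply its EXIF orientation tag to the pixel data."""
    img = Image.open(path)
    return ImageOps.exif_transpose(img)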

I need help of white balance case

Hi,

I have some JPEGs that were encrypted by ransomware. The ransomware encrypted 150 KB of the header; I repaired them enough to open in an image viewer, but they now have white-balance problems.

When I try your code, the images do not look good. Could you check these files and help me fix the colors?

Sample.zip

How to use wget to download the dataset?

Hi, thanks for your very instructive work! I want to use this dataset, but it is too large.

How can I use wget to download the dataset? Could you show me a template?
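A generic resumable-download template (the actual dataset URL is a placeholder here; wget's -c flag resumes interrupted downloads and --tries=0 retries indefinitely): wget -c --tries=0 "<dataset-part-url>" -O part1.zip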

Example command in readme doesn't work

Hi, it seems that at some point you changed your code and forgot to update some commands in readme.md, specifically the first command of the PyTorch section. It should instead be python demo_single_image.py --input ../example_images/00.jpg --output_dir ../result_images --show; at least, that's how I had to write it for it to work.
