
deephdrvideo's Introduction

Code for HDR Video Reconstruction

HDR Video Reconstruction: A Coarse-to-fine Network and A Real-world Benchmark Dataset (ICCV 2021)
Guanying Chen, Chaofeng Chen, Shi Guo, Zhetong Liang, Kwan-Yee K. Wong, Lei Zhang


Overview:

We provide the testing and training code. Details of the training and testing datasets can be found in DeepHDRVideo-Dataset. The datasets, trained models, and computed results can be downloaded from BaiduYun.

Dependencies

This method is implemented in PyTorch and tested on Ubuntu (14.04 and 16.04) and CentOS 7.

  • Python 3.7
  • PyTorch 1.1.0 and torchvision 0.3.0

We highly recommend using Anaconda and creating a new environment to run this code. The following is an example procedure for installing the dependencies.

# Create a new python3.7 environment named hdr
conda create -n hdr python=3.7

# Activate the created environment
source activate hdr

pip install -r requirements.txt

# Build the deformable convolution layer, tested with PyTorch 1.1, g++ 5.5, and CUDA 9.0
cd extensions/dcn/
python setup.py develop
# Please refer to https://github.com/xinntao/EDVR if you have difficulty in building this module
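If the build fails, the problem is usually a mismatch between the installed PyTorch/CUDA versions and the ones listed above. As a quick, optional sanity check (not part of the official setup), you can print the versions PyTorch was built against before compiling the extension:

# Optional sanity check of the PyTorch / CUDA setup before building the DCN extension.
# Versions other than PyTorch 1.1 + CUDA 9.0 may still work, but are untested here.
import torch
print("PyTorch:", torch.__version__)            # expected: 1.1.x
print("CUDA (compiled):", torch.version.cuda)   # expected: 9.0
print("CUDA available:", torch.cuda.is_available())
print("cuDNN:", torch.backends.cudnn.version())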

Testing

Please first go through DeepHDRVideo-Dataset to familiarize yourself with the testing dataset.

The trained models can be found in BaiduYun (Models/). Download and place them in data/models/.

Testing on the synthetic test dataset

The synthetic test dataset can be found in BaiduYun (/Synthetic_Dataset/HDR_Synthetic_Test_Dataset.tgz). Download and unzip it to data/. Note that we do not perform global motion alignment for this synthetic dataset.

# Test our method on two-exposure data. Results can be found in data/models/CoarseToFine_2Exp/
python run_model.py --gpu_ids 0 --model hdr2E_flow2s_model \
    --benchmark syn_test_dataset --bm_dir data/HDR_Synthetic_Test_Dataset \
    --mnet_name weight_net --mnet_checkp data/models/CoarseToFine_2Exp/weight_net.pth --fnet_checkp data/models/CoarseToFine_2Exp/flow_net.pth --mnet2_checkp data/models/CoarseToFine_2Exp/refine_net.pth

# Test our method on three-exposure data. The results can be found in data/models/CoarseToFine_3Exp/
python run_model.py --gpu_ids 0 --model hdr3E_flow2s_model \
    --benchmark syn_test_dataset --bm_dir data/HDR_Synthetic_Test_Dataset \
    --mnet_name weight_net --mnet_checkp data/models/CoarseToFine_3Exp/weight_net.pth --fnet_checkp data/models/CoarseToFine_3Exp/flow_net.pth --mnet2_checkp data/models/CoarseToFine_3Exp/refine_net.pth
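The estimated HDR frames are written as Radiance .hdr files. If you want to inspect a result quickly in Python (outside of the Matlab evaluation described below), the following minimal sketch loads an .hdr file with OpenCV and writes a roughly tonemapped preview; the filename is a placeholder, so check the output directory for the actual names.

# Minimal sketch: load an estimated Radiance .hdr frame for quick inspection.
# The filename is a placeholder; check data/models/CoarseToFine_2Exp/ for the actual outputs.
import cv2
import numpy as np

hdr_path = 'data/models/CoarseToFine_2Exp/example_result.hdr'  # hypothetical path
hdr = cv2.imread(hdr_path, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_ANYCOLOR)  # float32 BGR, linear
print(hdr.shape, hdr.dtype, hdr.min(), hdr.max())

# Simple mu-law preview (not the tonemapper used for the paper figures).
mu = 5000.0
norm = hdr / max(hdr.max(), 1e-8)
ldr = np.log(1.0 + mu * norm) / np.log(1.0 + mu)
cv2.imwrite('preview.png', (ldr * 255).astype(np.uint8))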

Testing on the TOG13 dataset

Please download this dataset from TOG13_Dynamic_Dataset.tgz and unzip it to data/. Normally, when testing on a video, we first have to compute the similarity transformation matrices between neighboring frames using the following commands.

# However, this is optional as the downloaded dataset already contains the required transformation matrices for each scene in Affine_Trans_Matrices/.
python utils/compute_nbr_trans_for_video.py --in_dir data/TOG13_Dynamic_Dataset/ --crf data/TOG13_Dynamic_Dataset/BaslerCRF.mat --scene_list 2Exp_scenes.txt
python utils/compute_nbr_trans_for_video.py --in_dir data/TOG13_Dynamic_Dataset/ --crf data/TOG13_Dynamic_Dataset/BaslerCRF.mat --scene_list 3Exp_scenes.txt
# Test our method on two-exposure data. The results can be found in data/models/CoarseToFine_2Exp/
# Specify the testing scene with --test_scene. Available options are Ninja-2Exp-3Stop WavingHands-2Exp-3Stop Skateboarder2-3Exp-2Stop ThrowingTowel-2Exp-3Stop 
python run_model.py --gpu_ids 0 --model hdr2E_flow2s_model \
    --benchmark tog13_online_align_dataset --bm_dir data/TOG13_Dynamic_Dataset --test_scene ThrowingTowel-2Exp-3Stop --align \
    --mnet_name weight_net --fnet_checkp data/models/CoarseToFine_2Exp/flow_net.pth --mnet_checkp data/models/CoarseToFine_2Exp/weight_net.pth --mnet2_checkp data/models/CoarseToFine_2Exp/refine_net.pth
# To test on a specific scene, you can use the --test_scene argument, e.g., "--test_scene ThrowingTowel-2Exp-3Stop".

# Test our method on three-exposure data. The results can be found in data/models/CoarseToFine_3Exp/
# Specify the testing scene with --test_scene. Available options are Cleaning-3Exp-2Stop Dog-3Exp-2Stop CheckingEmail-3Exp-2Stop Fire-2Exp-3Stop
python run_model.py --gpu_ids 0 --model hdr3E_flow2s_model \
    --benchmark tog13_online_align_dataset --bm_dir data/TOG13_Dynamic_Dataset --test_scene Dog-3Exp-2Stop --align \
    --mnet_name weight_net --fnet_checkp data/models/CoarseToFine_3Exp/flow_net.pth --mnet_checkp data/models/CoarseToFine_3Exp/weight_net.pth --mnet2_checkp data/models/CoarseToFine_3Exp/refine_net.pth 
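For reference, the global alignment enabled by --align relies on the per-frame similarity transformations computed beforehand by utils/compute_nbr_trans_for_video.py. The sketch below only illustrates how a similarity transform between two neighboring frames could be estimated with OpenCV feature matching; it is a simplified, hypothetical stand-in for that script, which additionally handles the camera response function and exposure differences.

# Hypothetical sketch of estimating a similarity transform between neighboring frames.
# The real utils/compute_nbr_trans_for_video.py also accounts for the CRF and exposure gap.
import cv2
import numpy as np

def estimate_similarity(prev_gray, cur_gray):
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(cur_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    # 2x3 similarity (rotation + uniform scale + translation), robust to outliers via RANSAC
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M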

Testing on the captured static dataset

The global motion augmented static dataset can be found in BaiduYun (/Real_Dataset/Static/).

# Test our method on two-exposure data. Download static_RGB_data_2exp_rand_motion_release.tgz and unzip to data/
# Results can be found in data/models/CoarseToFine_2Exp/
python run_model.py --gpu_ids 0 --model hdr2E_flow2s_model \
    --benchmark real_benchmark_dataset --bm_dir data/static_RGB_data_2exp_rand_motion_release --test_scene all \
    --mnet_name weight_net --mnet_checkp data/models/CoarseToFine_2Exp/weight_net.pth --fnet_checkp data/models/CoarseToFine_2Exp/flow_net.pth --mnet2_checkp data/models/CoarseToFine_2Exp/refine_net.pth

# Test our method on three-exposure data. Download static_RGB_data_3exp_rand_motion_release.tgz and unzip to data/
# The results can be found in data/models/CoarseToFine_3Exp/
python run_model.py --gpu_ids 0 --model hdr3E_flow2s_model \
    --benchmark real_benchmark_dataset --bm_dir data/static_RGB_data_3exp_rand_motion_release --test_scene all \
    --mnet_name weight_net --mnet_checkp data/models/CoarseToFine_3Exp/weight_net.pth --fnet_checkp data/models/CoarseToFine_3Exp/flow_net.pth --mnet2_checkp data/models/CoarseToFine_3Exp/refine_net.pth

Testing on the captured dynamic with GT dataset

The dynamic with GT dataset can be found in BaiduYun (/Real_Dataset/Dynamic/).

# Test our method on two-exposure data. Download dynamic_RGB_data_2exp_release.tgz and unzip to data/
python run_model.py --gpu_ids 0 --model hdr2E_flow2s_model \
    --benchmark real_benchmark_dataset --bm_dir data/dynamic_RGB_data_2exp_release --test_scene all \
    --mnet_name weight_net  --fnet_checkp data/models/CoarseToFine_2Exp/flow_net.pth --mnet_checkp data/models/CoarseToFine_2Exp/weight_net.pth --mnet2_checkp data/models/CoarseToFine_2Exp/refine_net.pth

# Test our method on three-exposure data. Download dynamic_RGB_data_3exp_release.tgz and unzip to data/
python run_model.py --gpu_ids 0 --model hdr3E_flow2s_model \
    --benchmark real_benchmark_dataset --bm_dir data/dynamic_RGB_data_3exp_release --test_scene all \
    --mnet_name weight_net  --fnet_checkp data/models/CoarseToFine_3Exp/flow_net.pth --mnet_checkp data/models/CoarseToFine_3Exp/weight_net.pth --mnet2_checkp data/models/CoarseToFine_3Exp/refine_net.pth

Testing on the captured dynamic without GT dataset

The dynamic without GT dataset can be found in BaiduYun (/Real_Dataset/Dynamic_noGT/).

# Test our method on two-exposure data. Download dynamic_data_noGT_2exp_RGB_JPG.tgz and unzip to data/
# Note that we provide the JPG dataset only for illustrating the testing process
# Results can be found in data/models/CoarseToFine_2Exp/
python run_model.py --gpu_ids 0 --model hdr2E_flow2s_model \
    --benchmark real_benchmark_dataset --bm_dir data/dynamic_data_noGT_2exp_RGB_JPG --test_scene all \
    --mnet_name weight_net --mnet_checkp data/models/CoarseToFine_2Exp/weight_net.pth --fnet_checkp data/models/CoarseToFine_2Exp/flow_net.pth --mnet2_checkp data/models/CoarseToFine_2Exp/refine_net.pth
# Testing on three-exposure data is similar

Testing on the customized dataset

There are two options for testing our method on your own dataset. The first option is to implement a customized Dataset class to load your data, which should not be difficult; please refer to datasets/tog13_online_align_dataset.py for an example (a bare-bones sketch is also given below).

If you don't want to implement your own Dataset class, you may reuse datasets/tog13_online_align_dataset.py. However, you first have to arrange your dataset in the same layout as the TOG13 dataset. Then you can run utils/compute_nbr_trans_for_video.py to compute the similarity transformation matrices between neighboring frames, which enables global alignment.

# Use a gamma curve if you do not know the camera response function
python utils/compute_nbr_trans_for_video.py --in_dir /path/to/your/dataset/ --crf gamma --scene_list your_scene_list
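For reference, the sketch below shows the rough shape of a custom Dataset class: it loads a window of consecutive alternating-exposure LDR frames together with their exposures. The key names ('ldrs', 'expos') and the preprocessing are assumptions for illustration only; copy the exact sample format from datasets/tog13_online_align_dataset.py so that run_model.py accepts the data.

# Bare-bones, hypothetical sketch of a custom Dataset for alternating-exposure LDR frames.
# Key names and preprocessing are assumptions; mirror datasets/tog13_online_align_dataset.py.
import glob, os
import numpy as np
import imageio
import torch
from torch.utils.data import Dataset

class MyVideoDataset(Dataset):
    def __init__(self, root, expos=(1.0, 8.0), nframes=3):
        self.paths = sorted(glob.glob(os.path.join(root, '*.tif')))
        self.expos = expos          # alternating exposure times
        self.nframes = nframes      # consecutive frames per sample

    def __len__(self):
        return len(self.paths) - self.nframes + 1

    def __getitem__(self, idx):
        ldrs, expos = [], []
        for i in range(idx, idx + self.nframes):
            img = imageio.imread(self.paths[i]).astype(np.float32) / 65535.0  # 16-bit TIFF
            ldrs.append(torch.from_numpy(img).permute(2, 0, 1))               # HWC -> CHW
            expos.append(self.expos[i % len(self.expos)])
        return {'ldrs': torch.stack(ldrs), 'expos': torch.tensor(expos)}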

HDR evaluation metrics

We evaluate the PSNR, HDR-VDP, and HDR-VQM metrics using the provided Matlab code. Please first install the HDR Toolbox to read HDR images. Then set the paths of the ground-truth HDR and the estimated HDR in matlab/config_eval.m. Finally, run main_eval.m from the Matlab console in the matlab/ directory.

main_eval(2, 'Ours')
main_eval(3, 'Ours')
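All numbers reported in the paper come from this Matlab pipeline. If you only need a rough PSNR sanity check in Python, the sketch below approximates PSNR in the linear domain (PSNR-L) and on mu-law tonemapped values (PSNR-T); the mu-law mapping and normalization here are assumptions for illustration, and it does not replace main_eval.m, which also reports HDR-VDP and HDR-VQM.

# Illustrative sketch of PSNR-L / PSNR-T on normalized HDR images (not a replacement
# for matlab/main_eval.m; the mu-law mapping and normalization here are assumptions).
import numpy as np

def psnr(a, b):
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(1.0 / mse)

def mu_law(x, mu=5000.0):
    return np.log(1.0 + mu * x) / np.log(1.0 + mu)

def eval_pair(gt_hdr, est_hdr):
    scale = gt_hdr.max()
    gt = np.clip(gt_hdr / scale, 0, 1)
    est = np.clip(est_hdr / scale, 0, 1)
    return psnr(gt, est), psnr(mu_law(gt), mu_law(est))  # (PSNR-L, PSNR-T)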

Tonemapping

All visual results in the experiments are tonemapped using Reinhard et al.'s method. Please first install luminance-hdr-cli. On Ubuntu, you may use sudo apt-get install -y luminance-hdr to install it. Then you can use the following command to produce the tonemapped results.

python utils/tonemapper.py -i /path/to/HDR/
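If luminance-hdr-cli is not available on your platform, OpenCV ships a global Reinhard tonemapper that can serve as a rough fallback for previews. Note that the results in the paper were produced with luminance-hdr-cli via utils/tonemapper.py, so the sketch below is only an approximation.

# Rough fallback preview using OpenCV's Reinhard tonemapper
# (the paper results were produced with luminance-hdr-cli via utils/tonemapper.py).
import cv2
import numpy as np

hdr = cv2.imread('/path/to/HDR/frame.hdr', cv2.IMREAD_ANYDEPTH | cv2.IMREAD_ANYCOLOR)
ldr = cv2.createTonemapReinhard(gamma=2.2).process(hdr)        # float32 in [0, 1]
cv2.imwrite('frame_tm.jpg', np.clip(ldr * 255, 0, 255).astype(np.uint8))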

Precomputed results

The precomputed results can be found in BaiduYun (/Results).

Training

The training process is described in docs/training.md.

License

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Citation

If you find this code useful in your research, please consider citing:

@inproceedings{chen2021hdr,
  title={{HDR} Video Reconstruction: A Coarse-to-fine Network and A Real-world Benchmark Dataset},
  author={Chen, Guanying and Chen, Chaofeng and Guo, Shi and Liang, Zhetong and Wong, Kwan-Yee~K. and Zhang, Lei},
  booktitle={International Conference on Computer Vision (ICCV)},
  year={2021}
}

deephdrvideo's People

Contributors

guanyingc


deephdrvideo's Issues

Problem with Matlab evaluation results

***** GT dir: ../data/dynamic_RGB_data_2exp_release/, 76 Expo , Max num: 76
***** Est dir: ../data/Details/01/*****
Loading HDR list: ../data/dynamic_RGB_data_2exp_release/hdr_list.txt
Grab all *.hdr files in ../data/Details/01/
Found 76 hdr files
Found 76 HDRs, test 76 HDRs
Starting parallel pool (parpool) using the 'local' profile ...
Connected to parallel pool with 8 workers.
Error using hdrvdp_visual_pathway
Invalid MEX file
'/Users/xugangwei/Desktop/matlab/DeepHDRVideo/matlab/Library/hdrvdp-2.2.1/matlabPyrTools_1.4_fixed/pointOp.mexmaci64':
dlopen(/Users/xugangwei/Desktop/matlab/DeepHDRVideo/matlab/Library/hdrvdp-2.2.1/matlabPyrTools_1.4_fixed/pointOp.mexmaci64,
0x0006): Library not loaded: @loader_path/libmex.dylib
Referenced from: <9F1237D5-8AD4-BB28-F5F8-A352CDC5C208>
/Users/xugangwei/Desktop/matlab/DeepHDRVideo/matlab/Library/hdrvdp-2.2.1/matlabPyrTools_1.4_fixed/pointOp.mexmaci64
Reason: tried:
'/Users/xugangwei/Desktop/matlab/DeepHDRVideo/matlab/Library/hdrvdp-2.2.1/matlabPyrTools_1.4_fixed/libmex.dylib'
(no such file), '/usr/local/lib/libmex.dylib' (no such file), '/usr/lib/libmex.dylib' (no such file, not in dyld
cache)

Error in hdrvdp (line 266)
[B_R L_adapt_reference band_freq bb_padvalue] = hdrvdp_visual_pathway( reference, 'reference', metric_par, -1 );

Error in paral_eval_HDRs (line 37)
parfor i = 1: num

Error in main_eval (line 32)
[psnrTs, psnrLs, ~, hdrvdps] = paral_eval_HDRs(gt_dir, est_dir, max_num, est_hdr_max); % parfor loop

Hello, I installed MATLAB R2023a on a MacBook Air (M1) and ran into this problem when evaluating the results. Do you know how to solve it?

Question about Data augmentation

In the paper, you mentioned adding noise to the training data and perturbing the tone. May I know where the code for this part is? I couldn't find it. Thank you.

Implementation details.

Hello, in your paper you state: "We first trained the CoarseNet with 10 epochs using a batch size of 16, and then trained the RefineNet with 15 epochs using a batch size of 8. The learning rate was initially set to 0.0001 and halved every 5 epochs for both networks. We then end-to-end finetuned the whole network for 2 epochs using a learning rate of 0.00002."

However, your code defaults to training each stage for 30 epochs. To reproduce the results in your paper, should I follow the settings in the paper or the defaults in the code? Looking forward to your reply.

Some questions about HDR-VDP-2.

Thanks for releasing the code and dataset!
I have some questions about HDR-VDP-2:
First, I found that you adopted 'sRGB-display' as the mode of HDR-VDP-2. I think this mode is for inputting images in the pixel domain. In this mode, HDR-VDP-2 will map input images from the pixel domain to the linear domain. However, the input images in your code are normalized HDR (linear) images.
Second, other modes in HDR-VDP-2 require absolute luminance values rather than normalized values. I wonder how to compute this metric.
Thank you!

About comparison with Kalantari's method

Dear Guanying,

I have a question about the comparative experiment, and I hope you could give some suggestions.

You compare your results against Kalantari19's method. However, Kalantari19's paper only says they "use mini-batches of size 10 and perform training for 60,000 iterations", without specifying a concrete number of epochs. So I wonder how you trained their method and how many epochs you used when training Kalantari19's method on your dataset.

Looking forward to your reply. Thanks.

The reproduced performance is lower than the performance reported in the paper

@guanyingc

I tried to reproduce the performance (PSNR) on the real-world dataset with your released code and models. However, I found that the reproduced performance is lower than that reported in the paper, for both static and dynamic scenes.

I am sure that I did not modify any code. Is it normal to get such results?

Thanks a lot and hope for your quick reply.

Global alignment

Hello. According to your paper, global alignment is performed using a similarity transformation in the preprocessing stage. I wonder whether this global alignment is also performed during the training phase in the code, because I couldn't find it. Thank you!

About metrics on synthetic dataset

Dear author

Thanks for your work. I evaluated your epoch-2 result in Matlab, but the metrics seem a little lower. For example, the average PSNR-T of poker_fullshot + carousel_fireworks is 40.34 in your paper, but my result is 39.015.
So I'd like to know whether you used VGG_Loss in stage-3 training. If not, the discrepancy is my mistake.

author and affiliation style in your paper

Hi Guanying,

I am wondering how you produce the author and affiliation block in your paper with LaTeX. The first two rows are author names marked with affiliation numbers, and the next two rows are the affiliations. I failed to do so with the official CVPR template.

Thanks

Question about evaluation on TOG_Dynamic_Dataset

Dear,

You provide Matlab code for evaluation, and it works well on the synthetic test dataset. However, when I tried to use it for evaluation on TOG_Dynamic_Dataset, some errors occurred.

I got the estimated HDR frames for TOG13, but TOG_Dynamic_Dataset doesn't contain hdr_list.txt and hdr_expos.txt, so Matlab gave errors. When I created hdr_list.txt and hdr_expos.txt in a format similar to the hdr_expos.txt of the synthetic test dataset, the evaluation results from Matlab were inaccurate.

For example, I created hdr_expos.txt in the 'ThrowingTowel-2Exp-3Stop' directory of the TOG13 dataset and wrote the following contents, because Exposures.txt in this directory contains two numbers: -3 and 0.
"
0000_ThrowingTowel-2Exp-3Stop_Img073.hdr -3
0001_ThrowingTowel-2Exp-3Stop_Img074.hdr 0
...........
"

Then I set path of Matlab as follows:
conf.gt.realS.path = '../data/TOG13/ThrowingTowel-2Exp-3Stop/';

Then evaluation results are:
48/86: PSNR-T 9.43, PSNR-L -9.94, SSIM 0.000 HDR-VDP 57.52
.......

We can see that PSNR-T is lower than 10 and PSNR-L is lower than 0.

So how should I run the evaluation on the TOG13 dataset, and how should I configure hdr_list.txt and hdr_expos.txt?

About the code of Kalantari'19

Dear,

I ran your program and your results are really impressive. I also want to run the code of Kalantari'19, but I couldn't find it on the authors' website.

How did you run their code? Could you please provide a URL or a Google Drive link for it?

Sorry to disturb you again with this request.

About fine-tuning epoch

Dear author,
May I ask why you use only 2 epochs for stage 3 (fine-tuning)? I ran stage 3 for 5 epochs and tested on the synthetic dataset, and found that the PSNR-T is much higher than with 2 epochs.
Looking forward to your reply

Result on test sequence Carousel_fireworks

Hi,

I notice in one issue that you consider the Vimeo-90K dataset imperfect but effective for reaching higher performance. So I am now preparing to train with and without it to see the difference, and considering how to build another suitable synthetic dataset.

However, after training without Vimeo-90K, I found that the results on the test sequence Carousel_fireworks are clearly lower than on the other sequences. PSNR on Carousel_fireworks is 31.62, while PSNR on Poker_fullshot is 41.31 (two alternating-exposure inputs). Although training with Vimeo-90K is not finished yet, I suspect the PSNR on Carousel_fireworks will still be poor.

Do you also get a low PSNR score on Carousel_fireworks in your work? I also wonder about the reason.

Save checkpoint and make_dir problem

Hi,
Thanks to the author for the code; it really helps me a lot. In the process of retraining the model, we set make_dir to True and only modified dataset_dir in train_opt.py. An error occurred when creating the saving folder. The error is as follows:

Traceback (most recent call last):
File "main.py", line 8, in
log = logger.Logger(args)
File "I:\DeepHDRVideo-master\utils\logger.py", line 29, in init
self._setup_dirs(args)
File "I:\DeepHDRVideo-master\utils\logger.py", line 76, in _setup_dirs
self.log_fie = open(file_dir, 'w')
FileNotFoundError: [Errno 2] No such file or directory: 'logdir/syn_vimeo_dataset\ICCV\11-28,spynet_2triple,weight_net,LReLU,hdr3E_flow_model,kaiming,l1,cm_d-256,cr_h-256,ht_r-0.9,ba_h-16,in_r-0.0001,in_ldr,sc_h-320,concat\11-28,spynet_2triple,weight_net,LReLU,hdr3E_flow_model,kaiming,l1,cm_d-256,cr_h-256,ht_r-0.9,ba_h-16,in_r-0.0001,in_ldr,sc_h-320,concat,14:47:07'

Are my settings wrong? Hope to get your help, thank you!

Issues while running setup.py to build the deformable conv module; undefined symbol error at test time. Is a CUDA version mismatch the problem?

I was running your code with PyTorch 1.1.0, gcc 5.5.0, and CUDA 10.1. I got the following output while running setup.py:

(screenshot of the setup.py build output)

After that, while testing on synthetic data with run_model.py, I got an undefined symbol error as below:

(screenshot of the undefined symbol error)

Since your code was tested with CUDA 9.0 and I have CUDA 10.1 installed on my system, is the CUDA version mismatch the only issue, or are there other causes for the error above? Can you please help me figure out how to fix it?

About the comparison between HDRVideo and AHDRNet

Hello, your work is excellent and the algorithm performs very well. One of the comparison methods mentioned in your paper is AHDRNet, the multi-exposure image fusion method proposed in 2019. May I ask how you trained AHDRNet on the dataset from this HDRVideo work? (I noticed that the dataset code used for training in your paper applies a tone perturbation to the reference frame; is this operation also needed when training AHDRNet?) I trained AHDRNet following the dataset processing code provided in your work, but the results are not ideal: on the synthetic test set (two-exposure case, for example) it falls far short of PSNR = 39.05, especially when the reference frame is the high-exposure frame, and the videos reconstructed by that method do not look good either. Would it be convenient for you to share the training code you used to reproduce AHDRNet? Many thanks!

About the dataset named Vimeo 90K

Dear Guanying,

I have a question about the datasets. As noted in the paper, you used Vimeo-90K to train the network. Vimeo-90K is a very big dataset, and I want to know whether you used all 91,701 preprocessed 7-frame clips to train the network, or only a part of Vimeo-90K.
In addition, what is the performance of the algorithm if Vimeo is not used for training? I wonder whether the performance degrades a lot.

Looking forward to your reply!

About the weight of perceptual loss

Dear author:
In your source code, the weight of the perceptual loss is set to 1, so I wonder what weight was used in the experiments in the paper.

Request for Pre-trained Baseline Model (Yan19 and Kalantari19) for Fair Comparison.

I hope this message finds you well. I am currently working on a project on HDR video reconstruction based on your dataset. To conduct fair comparisons, we are interested in obtaining the pre-trained baseline models of Yan19 and Kalantari19 mentioned in your work.

Thank you for your time and consideration. We look forward to your positive response.

About Vimeo-90k dataset utilized for training

Dear author,

I have some questions and hope to get some suggestions from you.

  1. Have you ever tested models without the Vimeo-90K training data? Is your method still better than Kalantari's in that case? I just want to know how much training with Vimeo helps.
  2. I found that the ground truth for the Vimeo data is generated by converting each LDR image to a linear HDR image. I'm a little worried about whether this data works and whether it is meaningful (because the ground truth of some real-world datasets is generated by merging multi-exposure still images, as in [1] and your paper).

Looking forward to your reply!

Reference:
[1] Kalantari, Nima Khademi, and Ravi Ramamoorthi. "Deep high dynamic range imaging of dynamic scenes." ACM Trans. Graph. 36.4 (2017): 144-1.

Flickering problem in 3-exposure-3stop video

Hi,
I'm testing your code on my own datasets, captured with a Canon and preprocessed as required.
In particular, I'm using sequences of three alternating exposures.
When I have a configuration {EV-2, EV+0, EV+2, ...}, your code works fine.
Instead, when I have a configuration {EV-3, EV+0, EV+3, ...}, I get 'periodic' flickering: i.e., if H_i is an HDR frame, it doesn't match H_{i+1} and H_{i+2} (flickering), but it matches H_{i+3}.
Why do I get this flickering problem?
I attach two HDR videos of a static scene to show this. Below I specify the exposure times (in seconds). Aperture and ISO are constant in both (ISO 800, f/3.5).
1- First HDR video: {EV-2, EV+0, EV+2, ...}, exposure times {1/320, 1/80, 1/20}
2- Second HDR video: {EV-3, EV+0, EV+3, ...}, exposure times {1/640, 1/80, 1/10}

3expo_2stop.mov
3expo_3stop.mov

Questions Regarding the Evaluation

Thanks for releasing the code and dataset!
I have some questions regarding the evaluation:

  • Could you also provide the script to test on the captured dynamic without GT dataset?
  • The matlab code for HDR-VQM measurement seems to be missing.

Thank you!

Looking for supplementary material.

Thank you for your outstanding work. I would like to know where I can find the supplementary materials for this work. I'm particularly curious about how the authors handled the data from different alternating exposures.

Unable to Download Data and Models from Baidu

Hi,

First of all, thank you for your hard work and for providing such valuable resources.

I am currently trying to download the data and models provided in this repository. However, I am facing an issue with Baidu. The service requires a Chinese phone number for verification, and unfortunately, international numbers are not supported.

Could you please provide an alternative method for accessing the data and models?

Thank you for your assistance and understanding.

Problem in reproducing Kalantari13's method on the proposed real-world dataset

Hello there,

I'm currently facing challenges while attempting to replicate the Kalantari13 method, "Patch-based High Dynamic Range Video," using your provided dataset. While I can successfully reconstruct the "Fire" and "Throwing Towels" sequences from the TOG13 dataset, I've encountered issues when working with the dynamic real benchmark dataset.

The error log from the Matlab console is as follows:

**********************************
Working on the "reorg_dynamic_RGB_data_2exp_release\scene_017_ref_high_step3" dataset

Generating input pyramids Done
Generating the initial flows and search maps ...
Working on frame 1/3
Initial motion estimation (global) Error using get_q_RANSAC (line 24)
k should be less or equal than N_I

Error in RANSAC (line 399)
        q = get_q_RANSAC(N, N_I_star, k);

Error in FitModel (line 38)
ransac_results = RANSAC(X(:,ind_valid), ransac_options);

Error in ComputeSimilarity (line 64)
[H, ~] = FitModel(flowBeginX(inds), flowBeginY(inds), flowEndX(inds), flowEndY(inds), sigmaSimilarity, 'similarity');

Error in ComputeGlobalMotionFlow (line 11)
simFlow.right = ComputeSimilarity(expAdj{1}, expAdj{2});

Error in InitialMotionEstimation (line 5)
[warped, simFlow] = ComputeGlobalMotionFlow(imgs, expoTimes);

Error in GenerateHDRVideo (line 86)
    [nnf, flow] = InitialMotionEstimation(imgs, curExpoTimes);

Error in main_loop (line 121)
    GenerateHDRVideo(vidObj, exposureTimes, sceneName);
    GenerateHDRVideo(vidObj, exposureTimes, sceneName);

Thank you in advance for your time and assistance!

cudnn error

Hi, many thanks for your work and code. I tested the pretrained model on GPU and it works, but when I tried to retrain the model I got a cuDNN error like:
File "main.py", line 33, in
main(args)
File "main.py", line 21, in main
train_utils.train(args, log, train_loader, model, epoch, recorder)
File "/media/ye/ssd11/DeepHDRVideo-master/utils/train_utils.py", line 17, in train
pred = model.forward(split='train');
File "/media/ye/ssd11/DeepHDRVideo-master/models/hdr2E_flow_model.py", line 93, in forward
self.fpred = self.fnet(fnet_in)
File "/home/ye/.opt/miniconda/envs/torch11/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/ye/.opt/miniconda/envs/torch11/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
return self.module(*inputs[0], **kwargs[0])
File "/home/ye/.opt/miniconda/envs/torch11/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/media/ye/ssd11/DeepHDRVideo-master/models/archs/flow_networks.py", line 94, in forward
flow1, flow2 = self.moduleBasici
File "/home/ye/.opt/miniconda/envs/torch11/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/media/ye/ssd11/DeepHDRVideo-master/models/archs/flow_networks.py", line 33, in forward
flows = self.moduleBasic(inputs).clamp(-320, 320)
File "/home/ye/.opt/miniconda/envs/torch11/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/ye/.opt/miniconda/envs/torch11/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/home/ye/.opt/miniconda/envs/torch11/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "/home/ye/.opt/miniconda/envs/torch11/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 338, in forward
self.padding, self.dilation, self.groups)
RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED

I checked the code and there is no explicit cuDNN setting. Could you help me with this problem? Many thanks.

How to load the checkpoint

Dear author

I forgot to set the model to save mnet2_15.pth; it only saved the 14th epoch. So I changed start_epoch to 14 and added the path to the 14th epoch's checkpoint, but it doesn't seem to work. What should I do to load the checkpoint in training stage 2?

Gamma correction issue

Hi, I'm trying your code on my own dataset. I captured sequences with three alternating exposures (3 exposures, 2 stops) with my Canon.
I then process the raw data following your pipeline: normalization (division by the saturation level), demosaicing, and gamma correction. Everything works fine and I get my HDR video.
Then I tried a minimal raw-to-RGB process with only normalization and demosaicing, removing gamma correction. I get a video with flickering problems: the HDR frame sequence is bad and there is no match between frames.

The problem is the gamma correction. Why is gamma correction necessary in the raw-to-RGB process to get a good HDR video?

About the Tog13 datasets

Dear Guanying,
The file (TOG13 dataset) you provided on Google Drive already contains HDR images. I want to know whether those HDR images were predicted by the network proposed in your paper.
