onpix / lcdpnet

Official PyTorch code and dataset of the paper "Local Color Distributions Prior for Image Enhancement" [ECCV2022]

Home Page: https://whyy.site/paper/lcdp

License: MIT License

Python 100.00%
computer-vision deep-learning eccv2022 exposure-correction low-level-vision pytorch low-light-enhance low-light-image-enhancement overexposure-correction

lcdpnet's Introduction

Abstract: Existing image enhancement methods are typically designed to address either the over- or under-exposure problem in the input image. When the illumination of the input image contains both over- and under-exposure problems, these existing methods may not work well. We observe from the image statistics that the local color distributions (LCDs) of an image suffering from both problems tend to vary across different regions of the image, depending on the local illuminations. Based on this observation, we propose in this paper to exploit these LCDs as a prior for locating and enhancing the two types of regions (i.e., over-/under-exposed regions). First, we leverage the LCDs to represent these regions, and propose a novel local color distribution embedded (LCDE) module to formulate LCDs in multi-scales to model the correlations across different regions. Second, we propose a dual-illumination learning mechanism to enhance the two types of regions. Third, we construct a new dataset to facilitate the learning process, by following the camera image signal processing (ISP) pipeline to render standard RGB images with both under-/over-exposures from raw data. Extensive experiments demonstrate that the proposed method outperforms existing state-of-the-art methods quantitatively and qualitatively.
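For intuition, the prior can be pictured as a set of per-region color histograms: over-exposed regions pile their mass into the top bins, under-exposed regions into the bottom bins. The sketch below only illustrates that idea in plain NumPy; it is not the paper's LCDE module, and the patch size and bin count are arbitrary choices.

    # Minimal illustration (not the paper's LCDE module): a "local color
    # distribution" as a per-patch, per-channel histogram.
    import numpy as np

    def local_color_histograms(img, patch=64, bins=8):
        """img: HxWx3 float array in [0, 1]. Returns a dict mapping the
        (row, col) index of each patch to a (3, bins) histogram array."""
        h, w, _ = img.shape
        hists = {}
        for i in range(0, h - patch + 1, patch):
            for j in range(0, w - patch + 1, patch):
                region = img[i:i + patch, j:j + patch]
                hists[(i // patch, j // patch)] = np.stack([
                    np.histogram(region[..., c], bins=bins, range=(0.0, 1.0), density=True)[0]
                    for c in range(3)
                ])
        return hists

    # Over-exposed patches concentrate mass in the top bins; under-exposed ones in the bottom bins.
    example = np.random.rand(256, 256, 3).astype(np.float32)
    print(len(local_color_histograms(example)), "patches")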

📻 News

  • 2023.7.21: If you are interested in low-light enhancement and NeRF, please check out my latest ICCV2023 work, LLNeRF! 🔥🔥🔥
  • 2023.7.21: Update README.
  • 2023.2.7: Merge the tar.gz files of our dataset into a single 7z file.
  • 2023.2.8: Update package versions in requirements.txt.
  • 2023.2.8: Upload env.yaml.

🔥 Our Model

Our model

⚙️ Setup

  1. Clone the repository: git clone https://github.com/onpix/LCDPNet.git
  2. Enter the directory: cd LCDPNet
  3. Install the required packages: pip install -r requirements.txt

We also provide env.yaml for quickly installing the packages. Note that you may need to change the env name to avoid overwriting an existing environment, or adjust the cudatoolkit and cudnn versions in env.yaml to match your local CUDA version.

⌨️ How to run

To train our model:

  1. Prepare data: Modify src/config/ds/train.yaml and src/config/ds/valid.yaml (a sketch of the expected fields follows this list).
  2. Modify configs in src/config. Note that we use hydra for config management.
  3. Run: python src/train.py name=<experiment_name> num_epoch=200 log_every=2000 valid_every=20
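A minimal sketch of what the dataset config is expected to contain, inferred from the "Running config" dump that test.py prints (visible in the issues further down this page): the keys class, name, input and GT appear there, while the paths below are placeholders for your own data.

    # Illustrative only -- mirror these fields in src/config/ds/train.yaml (and valid.yaml).
    from omegaconf import OmegaConf

    train_ds = OmegaConf.create({
        "class": "img_dataset",
        "name": "lcdp_data.train",
        "input": ["/path/to/lcdp_dataset/input/*.png"],  # glob pattern(s) for input images
        "GT": ["/path/to/lcdp_dataset/gt/*.png"],        # glob pattern(s) for ground truth
    })
    print(OmegaConf.to_yaml(train_ds))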

To test our model:

  1. Prepare data: Modify src/config/ds/test.yaml
  2. Run: python src/test.py checkpoint_path=<file_path>

📂 Dataset & Pretrained Model

The LCDP Dataset is here: [Google drive]. Please unzip lcdp_dataset.7z. The training and test images are:

        Train        Test
Input   input/*.png  test-input/*.png
GT      gt/*.png     test-gt/*.png

We provide two pretrained models, pretrained_models/trained_on_ours.ckpt and pretrained_models/trained_on_MSEC.ckpt, for researchers to reproduce the results in Tables 1 and 2 of our paper. Note that pretrained_models/trained_on_MSEC.ckpt is trained on the Expert C subset of the MSEC dataset, which contains both over- and under-exposed images.

Filename              Training data   Testing data                  Test PSNR   Test SSIM
trained_on_ours.ckpt  Ours            Our testing data              23.239      0.842
trained_on_MSEC.ckpt  MSEC            MSEC testing data (Expert C)  22.295      0.855

Our model is lightweight. Our experiments show that increasing the model size further improves the quality of the results. To train a bigger model, increase the values in runtime.bilateral_upsample_net.hist_unet.channel_nums.

🔗 Cite This Paper

If you find our work or code helpful, or your research benefits from this repo, please cite our paper:

@inproceedings{wang2022lcdp,
    title =        {Local Color Distributions Prior for Image Enhancement},
    author =       {Haoyuan Wang and Ke Xu and Rynson W.H. Lau},
    booktitle =    {Proceedings of the European Conference on Computer Vision (ECCV)},
    year =         {2022}
}

lcdpnet's People

Contributors

haizadtarik · onpix


lcdpnet's Issues

test problem

AttributeError: Can't pickle local object 'DeepWBNet.__init__..'
Running test.py reports that this object cannot be serialized.
Also, may I ask which Python version you are using?

test problem

Hello author, why are the SSIM and PSNR values 0 when testing with the pretrained model trained_on_ours.ckpt that you provided?

Test step: 218, Manual PSNR: 0.0, Manual SSIM: 0.0
[ TIMER ] Total time usage: 416.22171092033386
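If the reported numbers look wrong, one quick cross-check is to recompute PSNR/SSIM directly from the saved result images and the ground truth with scikit-image. The directory names below are placeholders; values of zero usually mean the outputs and GT are not being paired or read correctly.

    # Independent sanity check of the metrics; requires scikit-image >= 0.19 for channel_axis.
    from pathlib import Path
    import numpy as np
    from skimage import io
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    result_dir, gt_dir = Path("test_result"), Path("test-gt")   # placeholder paths
    psnrs, ssims = [], []
    for out_path in sorted(result_dir.glob("*.png")):
        gt_path = gt_dir / out_path.name                        # assumes matching filenames
        out = io.imread(out_path).astype(np.float32) / 255.0
        gt = io.imread(gt_path).astype(np.float32) / 255.0
        psnrs.append(peak_signal_noise_ratio(gt, out, data_range=1.0))
        ssims.append(structural_similarity(gt, out, channel_axis=-1, data_range=1.0))
    print(f"PSNR {np.mean(psnrs):.3f}  SSIM {np.mean(ssims):.3f}")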

How to plot figures like Fig. 2 for a given dataset

Hello, authors! I want to know how you obtain the Input luminance and the groundtruth/output luminance for Fig. 2. Is the luminance calculated from the pixel values of the images? What is the formula?
Thanks!
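The paper does not state the exact formula on this page, so the following is only a common choice, not necessarily what Fig. 2 uses: a per-pixel luma computed with Rec. 709 weights, which can then be averaged per image or per region.

    # One common luminance definition (Rec. 709 weights); whether Fig. 2 uses exactly
    # this formula is an assumption.
    import numpy as np

    def luminance(rgb):
        """rgb: HxWx3 array in [0, 1] -> HxW luminance map in [0, 1]."""
        return 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]

    # e.g. mean input luminance of an image: luminance(img).mean()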

Excuse me, is there an easy way to use the pretrained model?

I'm sorry for this issue, but I cannot run this project correctly.
I just want to test the effect of your trained model, but it is hard to run this code.
Is there an easy way to use the pretrained model? Just input an image and output an image.
Thank you.
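There is no single-image demo script visible on this page, but a rough inference loop along the following lines is usually possible with a PyTorch Lightning checkpoint. The class name LitModel and its import path are guesses and must be replaced with the LightningModule this repo actually defines; the forward-call convention is also assumed.

    # Hypothetical single-image inference sketch -- adjust names to the repo's actual classes.
    import torch
    import torchvision.transforms.functional as TF
    from PIL import Image

    from src.model.lcdpnet import LitModel  # hypothetical import path

    model = LitModel.load_from_checkpoint("pretrained_models/trained_on_ours.ckpt")
    model.eval()

    img = TF.to_tensor(Image.open("my_photo.png").convert("RGB")).unsqueeze(0)  # 1x3xHxW in [0, 1]
    with torch.no_grad():
        out = model(img)            # assumes forward() returns the enhanced image
    out = out.clamp(0, 1).squeeze(0)
    TF.to_pil_image(out).save("my_photo_enhanced.png")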

Test.py Error

Hello, and thank you for your incredible work.

I attempted to test your model with my own datasets, and I configured the input path and checkpoint path accordingly. However, when I attempt to run test.py, I encounter the following error:

Traceback (most recent call last):
File "C:\work\enhancement\LCDPNet-main\src\test.py", line 31, in main
trainer = Trainer(
File "C:\Users\hyun\anaconda3\envs\torch38\lib\site-packages\pytorch_lightning\utilities\argparse.py", line 69, in insert_env_defaults
return fn(self, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'gpus'

I have examined the Trainer class, and I cannot find the 'gpus' argument either. Could you assist me in resolving this issue?

Thank you.
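This error is a PyTorch Lightning version mismatch rather than a bug in the repo: the gpus argument was deprecated in Lightning 1.7 (the deprecation warning appears verbatim in a log further down this page) and removed in 2.0. Either install the version pinned in requirements.txt, or adapt the Trainer call as sketched below.

    # Rough equivalence between the old and new PyTorch Lightning Trainer arguments.
    from pytorch_lightning import Trainer

    # PL < 2.0:  trainer = Trainer(gpus=-1)
    trainer = Trainer(accelerator="gpu", devices=-1)   # all visible GPUs
    # On a machine without CUDA:
    # trainer = Trainer(accelerator="cpu")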

Enhanced images with light sources have artifact

Hello,

Thank you for your contribution. I am conducting experimental research on image enhancement. I have run inference on a small dataset and noticed that the light source in one of the results has a "saturation artifact". Do you know what causes this artifact and how to eliminate it? Maybe clipping the output would work. What do you think?

Output image:
Screen Shot 2022-12-16 at 16 22 02

Original Image:
Screen Shot 2022-12-16 at 16 22 47
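Clipping is indeed the first thing worth trying: very bright light sources can push the network output above the valid range, and converting such values straight to 8-bit produces banding or false colors. A minimal clamp before saving (whether it removes this specific artifact is not guaranteed):

    # Clip the enhanced image to the displayable range before converting to 8-bit.
    import numpy as np

    def to_uint8(img_float):
        return (np.clip(img_float, 0.0, 1.0) * 255.0 + 0.5).astype(np.uint8)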

May I know what is the average time for validating one image?

Running test.py, I found that the model you provided takes more than 2 seconds on average to process a single image.
I tested input images with sizes from 512x512 up to 1280x1280.

image

How long does it take for you to test one image?
Can you provide the code to convert the model to ONNX or TRT?
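Two seconds per image often includes model construction, checkpoint loading, and disk I/O, so it is worth timing the forward pass in isolation before considering ONNX/TensorRT export (which is not provided here and may be hindered by custom ops). A generic timing helper, assuming the model and input tensor are already on the GPU:

    # Times only the forward pass, excluding model construction and disk I/O.
    import time
    import torch

    def time_forward(model, img, warmup=3, iters=10):
        """Average forward-pass time in seconds; model and img must already be on the GPU."""
        with torch.no_grad():
            for _ in range(warmup):
                model(img)
            torch.cuda.synchronize()
            start = time.time()
            for _ in range(iters):
                model(img)
            torch.cuda.synchronize()
        return (time.time() - start) / iters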

TypeError: cannot pickle 'cv2.TonemapReinhard' object

Hello! I would like to test the pretrained model on my own images.
I configured the image paths in src/config/ds/test.yaml,
then ran python src/test.py checkpoint_path=trained_on_ours.ckpt.

I got the following error and cannot resolve it; could you please advise?
Screenshot 2022-10-18 104314
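This pickling error typically appears on Windows, where DataLoader worker processes are spawned and therefore need to pickle the dataset object; a cv2.TonemapReinhard instance held by the dataset is not picklable. Two workarounds to try: set dataloader_num_worker=0 in the runtime config (the key appears in the config dump further down this page), or construct the cv2 object lazily so it is created inside each worker. The class below is only an illustrative pattern, not the repo's dataset code.

    # Illustrative pattern: create the cv2 object on first use so the dataset stays picklable.
    import cv2

    class LazyTonemap:
        def __init__(self):
            self._op = None  # not created until first use

        def __call__(self, hdr_img):
            if self._op is None:
                self._op = cv2.createTonemapReinhard()
            return self._op.process(hdr_img)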

Test.py execution problem

image
image

"I am testing the model you provided and have also made modifications to the test.yaml file. The test datasets are also the images you provided. How can I solve this error? Thank you."

The last line is the path of my model

Issues with training and testing on the MSEC dataset

Hello, when I train on the dataset you proposed, everything goes relatively well. However, I run into the following problem when I train on the MSEC dataset; can you help me? Thanks!
image
I'm sure my path settings are correct.
image
Actually, I found that in the MSEC training, validation, and test sets, five input images correspond to one ground-truth image. How can I handle this so that I can train and test normally?

Test-GT

I am very interested in your work. How can I get the GT of my test set when I test on my own dataset?
image

Error when trying to run train.py

I got error messages when I tried to run train.py and test.py. How can I solve the problems?
Thank you in advance.

Missing logger folder: /home/arm/Desktop/LCDPNet/tb_logs/lcdpnet:ours@ours
train_ds - GT Directory path: [yellow]['raise/gt/*.png'][/yellow]
train_ds - Input Directory path: [yellow]['raise/input/*.png'][/yellow]
train_ds Dataset length: 607, batch num: 37
valid_ds - GT Directory path: [yellow]['raise/valid-gt/*.png'][/yellow]
valid_ds - Input Directory path: [yellow]['raise/valid-input/*.png'][/yellow]
valid_ds Dataset length: 0, batch num: 0
Error occured! Your ds is: TYPE=valid_ds, config:
{'class': 'img_dataset', 'name': 'ours-cslab.valid', 'input': ['raise/valid-input/*.png'], 'GT': ['raise/valid-gt/*.png']}
Error executing job with overrides: ['name=ours', 'num_epoch=200', 'log_every=2000', 'valid_every=20']
Traceback (most recent call last):
File "src/train.py", line 124, in main
trainer.fit(model, datamodule, ckpt_path=opt.checkpoint_path)
File "/home/arm/anaconda3/envs/cloned_py37/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 604, in fit
self, self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
File "/home/arm/anaconda3/envs/cloned_py37/lib/python3.7/site-packages/pytorch_lightning/trainer/call.py", line 36, in _call_and_handle_interrupt
return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
File "/home/arm/anaconda3/envs/cloned_py37/lib/python3.7/site-packages/pytorch_lightning/strategies/launchers/subprocess_script.py", line 90, in launch
return function(*args, **kwargs)
File "/home/arm/anaconda3/envs/cloned_py37/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 645, in _fit_impl
self._run(model, ckpt_path=self.ckpt_path)
File "/home/arm/anaconda3/envs/cloned_py37/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1037, in _run
self._call_setup_hook() # allow user to setup lightning_module in accelerator environment
File "/home/arm/anaconda3/envs/cloned_py37/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1284, in _call_setup_hook
self._call_lightning_datamodule_hook("setup", stage=fn)
File "/home/arm/anaconda3/envs/cloned_py37/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1361, in _call_lightning_datamodule_hook
return fn(*args, **kwargs)
File "/home/arm/Desktop/LCDPNet/src/data/img_dataset.py", line 185, in setup
self.valid_dataset = ImagesDataset(opt, ds_type=VALID_DATA, transform=self.valid_transform, batchsize=opt.valid_batchsize)
File "/home/arm/Desktop/LCDPNet/src/data/img_dataset.py", line 99, in init
raise RuntimeError(f'[ Err ] Dataset input nums is 0!')
RuntimeError: [ Err ] Dataset input nums is 0!

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
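"Dataset input nums is 0" means the glob patterns in the ds config matched no files (note that the asterisks in the log above were eaten by the page rendering; the patterns should look like raise/valid-input/*.png). A quick way to verify the patterns before launching training:

    # Check that the glob patterns in your ds config actually match files.
    from glob import glob

    patterns = ["raise/valid-input/*.png", "raise/valid-gt/*.png"]  # your own patterns here
    for p in patterns:
        print(p, "->", len(glob(p)), "files")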

The weight of the loss function

Hello, I found this paper very enlightening. However, the loss-function section mentions that four hyperparameters are chosen to balance the four loss terms, with more details in the supplementary material. Could you provide the supplementary material? I would like to know whether these four hyperparameters were set based on experience or for some other reason. I look forward to your answer. Thank you.

test

(screenshot of the error)
How to solve this problem?

Pink pixels

Hey! Great repo. Any idea why I'm getting pink pixels? Is this something you've seen? I have made decent modifications to simplify inference for my project, but I can't see what I've done that could produce the pink artefacts. Thanks!

IMG_8687-out
IMG_8687
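Two frequent causes of pink/magenta patches in saved results are unclipped float values and an RGB/BGR channel-order mix-up when writing with OpenCV; whether either applies to your modified pipeline is a guess. A conservative save path for an RGB float image in [0, 1]:

    # Clamp, convert to 8-bit, and swap to BGR before handing the image to OpenCV.
    import cv2
    import numpy as np

    def save_rgb_float(path, img_rgb_float):
        img = (np.clip(img_rgb_float, 0.0, 1.0) * 255.0 + 0.5).astype(np.uint8)
        cv2.imwrite(path, cv2.cvtColor(img, cv2.COLOR_RGB2BGR))  # cv2 expects BGR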

How to cancel the distributed training?

Hi, it's me again.

I want to ask how to disable distributed training, as I run into this error:

RuntimeError: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:46, unhandled cuda error, NCCL version 2.10.3
ncclUnhandledCudaError: Call to CUDA function failed.

Thank you!
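In this repo the device and strategy come from the runtime config (the keys gpu and backend, with backend set to ddp, are visible in the config dump further down this page), so overriding those is the cleanest route. At the PyTorch Lightning level, single-device training avoids NCCL entirely; a sketch of the equivalent Trainer setup:

    # Single-device training uses no multi-process communication, so NCCL is never initialised.
    from pytorch_lightning import Trainer

    trainer = Trainer(accelerator="gpu", devices=1)   # PL 2.x; on older versions: Trainer(gpus=1)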

prepare_data

def prepare_data(self):
    # download, split, etc...
    # only called on 1 GPU/TPU in distributed
    ...
    
 May I ask whether the complete code for this part is missing?
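prepare_data() is an optional LightningDataModule hook meant for one-off work such as downloading, so an empty body is legitimate when the data is already on disk; in this repo the datasets appear to be built in setup() (the traceback in the issue above shows img_dataset.py constructing ImagesDataset inside setup). A generic skeleton of the two hooks:

    # Generic LightningDataModule skeleton, not the repo's actual class.
    import pytorch_lightning as pl

    class DataModule(pl.LightningDataModule):
        def prepare_data(self):
            pass  # nothing to download -- data already prepared on disk

        def setup(self, stage=None):
            # build self.train_dataset / self.valid_dataset here (runs on every process)
            ...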

test issue

(LCDPNet) D:\1\LCDPNet-main>python src/test.py checkpoint_path=pretrained_models/trained_on_MSEC.ckpt
Global seed set to 233
D:\Anaconda3\envs\LCDPNet\lib\site-packages\hydra\_internal\defaults_list.py:251: UserWarning: In 'config': Defaults list is missing _self_. See https://hydra.cc/docs/upgrades/1.0_to_1.1/default_composition_order for more information
warnings.warn(msg, UserWarning)
Check runtime config: use "D:\1\LCDPNet-main\src\config\runtime\lcdpnet.default.yaml" as template.
Running config: {'aug': {'crop': False, 'downsample': [512, 512], 'h-flip': True, 'v-flip': True}, 'train_ds': {'class': 'img_dataset', 'name': 'lcdp_data.train', 'input': ['your_dataset_path/input/*'], 'GT': ['your_dataset_path/gt/*']}, 'test_ds': {'class': 'img_dataset', 'name': 'lcdp_data.test', 'input': ['./imgs/test-input/*'], 'GT': ['./imgs/test-gt/*']}, 'valid_ds': {'class': 'img_dataset', 'name': 'lcdp_data.valid', 'input': ['your_dataset_path/valid-input/*'], 'GT': ['your_dataset_path/valid-gt/*']}, 'runtime': {'bilateral_upsample_net': {'hist_unet': {'n_bins': 8, 'hist_as_guide': False, 'channel_nums': [8, 16, 32, 64, 128], 'encoder_use_hist': False, 'guide_feature_from_hist': True, 'region_num': 2, 'use_gray_hist': False, 'conv_type': 'drconv', 'down_ratio': 2, 'hist_conv_trainable': False, 'drconv_position': [0, 1]}, 'modelname': 'bilateral_upsample_net', 'predict_illumination': False, 'loss': {'mse': 1.0, 'cos': 0.1, 'ltv': 0.1}, 'luma_bins': 8, 'channel_multiplier': 1, 'spatial_bin': 16, 'batch_norm': True, 'low_resolution': 256, 'coeffs_type': 'matrix', 'conv_type': 'conv', 'backbone': 'hist-unet', 'illu_map_power': False}, 'hist_unet': {'n_bins': 8, 'hist_as_guide': False, 'channel_nums': False, 'encoder_use_hist': False, 'guide_feature_from_hist': False, 'region_num': 8, 'use_gray_hist': False, 'conv_type': 'drconv', 'down_ratio': 2, 'hist_conv_trainable': False, 'drconv_position': [1, 1]}, 'modelname': 'lcdpnet', 'use_wavelet': False, 'use_attn_map': False, 'use_non_local': False, 'how_to_fuse': 'cnn-weights', 'backbone': 'bilateral_upsample_net', 'conv_type': 'conv', 'backbone_out_illu': True, 'illumap_channel': 3, 'share_weights': True, 'n_bins': 8, 'hist_as_guide': False, 'loss': {'ltv': 0, 'cos': 0, 'weighted_loss': 0, 'tvloss1': 0, 'tvloss2': 0, 'tvloss1_new': 0.01, 'tvloss2_new': 0.01, 'l1_loss': 1.0, 'ssim_loss': 0, 'psnr_loss': 0, 'illumap_loss': 0, 'hist_loss': 0, 'inter_hist_loss': 0, 'vgg_loss': 0, 'cos2': 0.5}}, 'project': 'default_proj', 'name': 'default_name', 'comment': False, 'debug': False, 'val_debug_step_nums': 2, 'gpu': -1, 'backend': 'ddp', 'runtime_precision': 16, 'amp_backend': 'native', 'amp_level': 'O1', 'dataloader_num_worker': 5, 'mode': 'train', 'logger': 'tb', 'num_epoch': 1000, 'valid_every': 10, 'savemodel_every': 4, 'log_every': 100, 'batchsize': 16, 'valid_batchsize': 1, 'lr': 0.0001, 'checkpoint_path': 'pretrained_models/trained_on_MSEC.ckpt', 'checkpoint_monitor': 'loss', 'resume_training': True, 'monitor_mode': 'min', 'early_stop': False, 'valid_ratio': 0.1, 'flags': {}}
ERR: import thop failed, skip. error msg:
No module named 'thop'
[ WARN ] Use Conv in HistGuidedDRDoubleConv[0] instead of DRconv.
[ WARN ] Use Conv in HistGuidedDRDoubleConv[0] instead of DRconv.
[ WARN ] Use Conv in HistGuidedDRDoubleConv[0] instead of DRconv.
[ WARN ] Use Conv in HistGuidedDRDoubleConv[0] instead of DRconv.
[[ WARN ]] Using HistUNet in BilateralUpsampleNet as backbone
Running initialization for BaseModel
DeepWBNet(
(illu_net): BilateralUpsampleNet(
(guide): GuideNet(
(conv1): ConvBlock(
(conv): Conv2d(3, 16, kernel_size=(1, 1), stride=(1, 1))
(activation): ReLU()
(bn): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(conv2): ConvBlock(
(conv): Conv2d(16, 1, kernel_size=(1, 1), stride=(1, 1))
(activation): Sigmoid()
)
)
(slice): SliceNode()
(coeffs): LowResHistUNet(
(maxpool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(hist_conv): Conv2d(8, 8, kernel_size=(2, 2), stride=(2, 2), bias=False)
(inc): DoubleConv(
(double_conv): Sequential(
(0): Conv2d(3, 8, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(8, 8, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
)
)
(down1): Down(
(maxpool_conv): Sequential(
(0): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(1): DoubleConv(
(double_conv): Sequential(
(0): Conv2d(8, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
)
)
)
)
(down2): Down(
(maxpool_conv): Sequential(
(0): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(1): DoubleConv(
(double_conv): Sequential(
(0): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
)
)
)
)
(down3): Down(
(maxpool_conv): Sequential(
(0): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(1): DoubleConv(
(double_conv): Sequential(
(0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
)
)
)
)
(down4): Down(
(maxpool_conv): Sequential(
(0): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(1): DoubleConv(
(double_conv): Sequential(
(0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU(inplace=True)
)
)
)
)
(up1): Up(
(up): Upsample(scale_factor=2.0, mode=bilinear)
(conv): HistGuidedDRDoubleConv(
(conv1): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(inter1): Sequential(
(0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU(inplace=True)
)
(conv2): DRConv2d(
(conv_kernel): Sequential(
(0): AdaptiveAvgPool2d(output_size=(3, 3))
(1): Conv2d(64, 4, kernel_size=(1, 1), stride=(1, 1))
(2): Sigmoid()
(3): Conv2d(4, 4096, kernel_size=(1, 1), stride=(1, 1), groups=2)
)
(conv_guide): Conv2d(24, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(corr): Correlation(xcorr_fast)
)
(inter2): Sequential(
(0): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU(inplace=True)
)
)
)
(up2): Up(
(up): Upsample(scale_factor=2.0, mode=bilinear)
(conv): HistGuidedDRDoubleConv(
(conv1): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(inter1): Sequential(
(0): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU(inplace=True)
)
(conv2): DRConv2d(
(conv_kernel): Sequential(
(0): AdaptiveAvgPool2d(output_size=(3, 3))
(1): Conv2d(32, 4, kernel_size=(1, 1), stride=(1, 1))
(2): Sigmoid()
(3): Conv2d(4, 1024, kernel_size=(1, 1), stride=(1, 1), groups=2)
)
(conv_guide): Conv2d(24, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(corr): Correlation(xcorr_fast)
)
(inter2): Sequential(
(0): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU(inplace=True)
)
)
)
(up3): Up(
(up): Upsample(scale_factor=2.0, mode=bilinear)
(conv): HistGuidedDRDoubleConv(
(conv1): Conv2d(32, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(inter1): Sequential(
(0): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU(inplace=True)
)
(conv2): DRConv2d(
(conv_kernel): Sequential(
(0): AdaptiveAvgPool2d(output_size=(3, 3))
(1): Conv2d(16, 4, kernel_size=(1, 1), stride=(1, 1))
(2): Sigmoid()
(3): Conv2d(4, 256, kernel_size=(1, 1), stride=(1, 1), groups=2)
)
(conv_guide): Conv2d(24, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(corr): Correlation(xcorr_fast)
)
(inter2): Sequential(
(0): BatchNorm2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU(inplace=True)
)
)
)
(up4): Up(
(up): Upsample(scale_factor=2.0, mode=bilinear)
(conv): HistGuidedDRDoubleConv(
(conv1): Conv2d(16, 8, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(inter1): Sequential(
(0): BatchNorm2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU(inplace=True)
)
(conv2): DRConv2d(
(conv_kernel): Sequential(
(0): AdaptiveAvgPool2d(output_size=(3, 3))
(1): Conv2d(8, 4, kernel_size=(1, 1), stride=(1, 1))
(2): Sigmoid()
(3): Conv2d(4, 128, kernel_size=(1, 1), stride=(1, 1), groups=2)
)
(conv_guide): Conv2d(24, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(corr): Correlation(xcorr_fast)
)
(inter2): Sequential(
(0): BatchNorm2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(1): ReLU(inplace=True)
)
)
)
(outc): OutConv(
(conv): Conv2d(8, 96, kernel_size=(1, 1), stride=(1, 1))
)
)
(apply_coeffs): ApplyCoeffs()
)
(out_net): Sequential(
(0): Conv2d(9, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU(inplace=True)
(2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): ReLU(inplace=True)
(4): NONLocalBlock2D(
(g): Sequential(
(0): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1))
(1): UpsamplingBilinear2d(size=[16, 16], mode=bilinear)
)
(W): Conv2d(16, 32, kernel_size=(1, 1), stride=(1, 1))
(theta): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1))
(phi): Sequential(
(0): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1))
(1): UpsamplingBilinear2d(size=[16, 16], mode=bilinear)
)
)
(5): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1))
(6): ReLU(inplace=True)
(7): Conv2d(32, 3, kernel_size=(1, 1), stride=(1, 1))
(8): NONLocalBlock2D(
(g): Sequential(
(0): Conv2d(3, 1, kernel_size=(1, 1), stride=(1, 1))
(1): UpsamplingBilinear2d(size=[16, 16], mode=bilinear)
)
(W): Conv2d(1, 3, kernel_size=(1, 1), stride=(1, 1))
(theta): Conv2d(3, 1, kernel_size=(1, 1), stride=(1, 1))
(phi): Sequential(
(0): Conv2d(3, 1, kernel_size=(1, 1), stride=(1, 1))
(1): UpsamplingBilinear2d(size=[16, 16], mode=bilinear)
)
)
)
)
[ WARN ] Result directory "lcdpnet_pretrained_models_trained_on_MSEC.ckpt@lcdp_data.test" exists. Press ENTER to overwrite or input suffix to create a new one:

New name: lcdpnet_pretrained_models_trained_on_MSEC.ckpt@lcdp_data.test.
[ WARN ] Overwrite result_dir: lcdpnet_pretrained_models_trained_on_MSEC.ckpt@lcdp_data.test
TEST - Result save path:
pretrained_models\test_result\lcdpnet_pretrained_models_trained_on_MSEC.ckpt@lcdp_data.test
Loading model from: pretrained_models/trained_on_MSEC.ckpt
Dataset augmentation:
[ToPILImage(), Downsample([512, 512]), RandomHorizontalFlip(p=0.5), RandomVerticalFlip(p=0.5), ToTensor()]
D:\Anaconda3\envs\LCDPNet\lib\site-packages\pytorch_lightning\trainer\connectors\accelerator_connector.py:447: LightningDeprecationWarning: Setting Trainer(gpus=-1) is deprecated in v1.7 and will be removed in v2.0. Please use Trainer(accelerator='gpu', devices=-1) instead.
rank_zero_deprecation(
Error executing job with overrides: ['checkpoint_path=pretrained_models/trained_on_MSEC.ckpt']
Traceback (most recent call last):
File "src/test.py", line 32, in main
trainer = Trainer(
File "D:\Anaconda3\envs\LCDPNet\lib\site-packages\pytorch_lightning\utilities\argparse.py", line 345, in insert_env_defaults
return fn(self, **kwargs)
File "D:\Anaconda3\envs\LCDPNet\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 433, in init
self._accelerator_connector = AcceleratorConnector(
File "D:\Anaconda3\envs\LCDPNet\lib\site-packages\pytorch_lightning\trainer\connectors\accelerator_connector.py", line 214, in init
self._set_parallel_devices_and_init_accelerator()
File "D:\Anaconda3\envs\LCDPNet\lib\site-packages\pytorch_lightning\trainer\connectors\accelerator_connector.py", line 531, in _set_parallel_devices_and_init_accelerator
raise MisconfigurationException(
pytorch_lightning.utilities.exceptions.MisconfigurationException: CUDAAccelerator can not run on your system since the accelerator is not available. The following accelerator(s) is available and can be passed into accelerator argument of Trainer: ['cpu'].

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
How to solve it?
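The MisconfigurationException at the end means the installed PyTorch build cannot see a CUDA device (a CPU-only wheel or a driver problem), so the Trainer refuses to use the GPU accelerator. A quick check, after which the options are to install a CUDA-enabled torch matching your driver or to run on CPU:

    # If this prints False / None, the installed torch is CPU-only or the driver is not visible.
    import torch
    print(torch.cuda.is_available(), torch.version.cuda)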

out of memory

RuntimeError: CUDA out of memory. Tried to allocate 856.00 MiB (GPU 0; 8.00 GiB total capacity; 2.96 GiB already allocated; 2.60 GiB free; 3.13 GiB reserved in total by PyTorch)
Although there is available VRAM, the system consistently reports an "out of memory" error when running on Windows.
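With 8 GiB of VRAM, the usual mitigations are a smaller test resolution (the aug.downsample entry in the config), making sure inference runs under torch.no_grad(), and reducing allocator fragmentation, which is often the culprit when free memory is reported alongside the OOM. The environment variable below is a real PyTorch allocator option; the 128 MiB split size is just a starting point.

    # Must be set before the first CUDA allocation (i.e. before building the model).
    import os
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"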
