(LCDPNet) D:\1\LCDPNet-main>python src/test.py checkpoint_path=pretrained_models/trained_on_MSEC.ckpt
Global seed set to 233
D:\Anaconda3\envs\LCDPNet\lib\site-packages\hydra\_internal\defaults_list.py:251: UserWarning: In 'config': Defaults list is missing `_self_`. See https://hydra.cc/docs/upgrades/1.0_to_1.1/default_composition_order for more information
  warnings.warn(msg, UserWarning)
Check runtime config: use "D:\1\LCDPNet-main\src\config\runtime\lcdpnet.default.yaml" as template.
Running config: {
  'aug': {'crop': False, 'downsample': [512, 512], 'h-flip': True, 'v-flip': True},
  'train_ds': {'class': 'img_dataset', 'name': 'lcdp_data.train', 'input': ['your_dataset_path/input/'], 'GT': ['your_dataset_path/gt/']},
  'test_ds': {'class': 'img_dataset', 'name': 'lcdp_data.test', 'input': ['./imgs/test-input/'], 'GT': ['./imgs/test-gt/']},
  'valid_ds': {'class': 'img_dataset', 'name': 'lcdp_data.valid', 'input': ['your_dataset_path/valid-input/'], 'GT': ['your_dataset_path/valid-gt/']},
  'runtime': {
    'bilateral_upsample_net': {'hist_unet': {'n_bins': 8, 'hist_as_guide': False, 'channel_nums': [8, 16, 32, 64, 128], 'encoder_use_hist': False, 'guide_feature_from_hist': True, 'region_num': 2, 'use_gray_hist': False, 'conv_type': 'drconv', 'down_ratio': 2, 'hist_conv_trainable': False, 'drconv_position': [0, 1]}, 'modelname': 'bilateral_upsample_net', 'predict_illumination': False, 'loss': {'mse': 1.0, 'cos': 0.1, 'ltv': 0.1}, 'luma_bins': 8, 'channel_multiplier': 1, 'spatial_bin': 16, 'batch_norm': True, 'low_resolution': 256, 'coeffs_type': 'matrix', 'conv_type': 'conv', 'backbone': 'hist-unet', 'illu_map_power': False},
    'hist_unet': {'n_bins': 8, 'hist_as_guide': False, 'channel_nums': False, 'encoder_use_hist': False, 'guide_feature_from_hist': False, 'region_num': 8, 'use_gray_hist': False, 'conv_type': 'drconv', 'down_ratio': 2, 'hist_conv_trainable': False, 'drconv_position': [1, 1]},
    'modelname': 'lcdpnet', 'use_wavelet': False, 'use_attn_map': False, 'use_non_local': False, 'how_to_fuse': 'cnn-weights', 'backbone': 'bilateral_upsample_net', 'conv_type': 'conv', 'backbone_out_illu': True, 'illumap_channel': 3, 'share_weights': True, 'n_bins': 8, 'hist_as_guide': False,
    'loss': {'ltv': 0, 'cos': 0, 'weighted_loss': 0, 'tvloss1': 0, 'tvloss2': 0, 'tvloss1_new': 0.01, 'tvloss2_new': 0.01, 'l1_loss': 1.0, 'ssim_loss': 0, 'psnr_loss': 0, 'illumap_loss': 0, 'hist_loss': 0, 'inter_hist_loss': 0, 'vgg_loss': 0, 'cos2': 0.5}
  },
  'project': 'default_proj', 'name': 'default_name', 'comment': False, 'debug': False, 'val_debug_step_nums': 2, 'gpu': -1, 'backend': 'ddp', 'runtime_precision': 16, 'amp_backend': 'native', 'amp_level': 'O1', 'dataloader_num_worker': 5, 'mode': 'train', 'logger': 'tb', 'num_epoch': 1000, 'valid_every': 10, 'savemodel_every': 4, 'log_every': 100, 'batchsize': 16, 'valid_batchsize': 1, 'lr': 0.0001, 'checkpoint_path': 'pretrained_models/trained_on_MSEC.ckpt', 'checkpoint_monitor': 'loss', 'resume_training': True, 'monitor_mode': 'min', 'early_stop': False, 'valid_ratio': 0.1, 'flags': {}
}
ERR: import thop failed, skip. error msg:
No module named 'thop'
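(Side note: the thop message above seems harmless by itself. The code apparently guards the import, roughly like the sketch below; the exact guard is my guess, not copied from the LCDPNet sources. Installing the package with `pip install thop` should silence it.)

```python
# Guess at the optional-import guard behind the "import thop failed, skip" line:
try:
    import thop  # FLOPs/parameter counter, installable with `pip install thop`
except ImportError as e:
    print("ERR: import thop failed, skip. error msg:")
    print(e)
    thop = None  # model profiling is simply skipped, nothing else breaks
```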
[ WARN ] Use Conv in HistGuidedDRDoubleConv[0] instead of DRconv.
[ WARN ] Use Conv in HistGuidedDRDoubleConv[0] instead of DRconv.
[ WARN ] Use Conv in HistGuidedDRDoubleConv[0] instead of DRconv.
[ WARN ] Use Conv in HistGuidedDRDoubleConv[0] instead of DRconv.
[[ WARN ]] Using HistUNet in BilateralUpsampleNet as backbone
Running initialization for BaseModel
DeepWBNet(
  (illu_net): BilateralUpsampleNet(
    (guide): GuideNet(
      (conv1): ConvBlock(
        (conv): Conv2d(3, 16, kernel_size=(1, 1), stride=(1, 1))
        (activation): ReLU()
        (bn): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (conv2): ConvBlock(
        (conv): Conv2d(16, 1, kernel_size=(1, 1), stride=(1, 1))
        (activation): Sigmoid()
      )
    )
    (slice): SliceNode()
    (coeffs): LowResHistUNet(
      (maxpool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
      (hist_conv): Conv2d(8, 8, kernel_size=(2, 2), stride=(2, 2), bias=False)
      (inc): DoubleConv(
        (double_conv): Sequential(
          (0): Conv2d(3, 8, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (1): BatchNorm2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU(inplace=True)
          (3): Conv2d(8, 8, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (4): BatchNorm2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (5): ReLU(inplace=True)
        )
      )
      (down1): Down(
        (maxpool_conv): Sequential(
          (0): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
          (1): DoubleConv(
            (double_conv): Sequential(
              (0): Conv2d(8, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (2): ReLU(inplace=True)
              (3): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (4): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (5): ReLU(inplace=True)
            )
          )
        )
      )
      (down2): Down(
        (maxpool_conv): Sequential(
          (0): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
          (1): DoubleConv(
            (double_conv): Sequential(
              (0): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (2): ReLU(inplace=True)
              (3): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (5): ReLU(inplace=True)
            )
          )
        )
      )
      (down3): Down(
        (maxpool_conv): Sequential(
          (0): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
          (1): DoubleConv(
            (double_conv): Sequential(
              (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (2): ReLU(inplace=True)
              (3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (5): ReLU(inplace=True)
            )
          )
        )
      )
      (down4): Down(
        (maxpool_conv): Sequential(
          (0): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
          (1): DoubleConv(
            (double_conv): Sequential(
              (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (2): ReLU(inplace=True)
              (3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
              (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (5): ReLU(inplace=True)
            )
          )
        )
      )
      (up1): Up(
        (up): Upsample(scale_factor=2.0, mode=bilinear)
        (conv): HistGuidedDRDoubleConv(
          (conv1): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (inter1): Sequential(
            (0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU(inplace=True)
          )
          (conv2): DRConv2d(
            (conv_kernel): Sequential(
              (0): AdaptiveAvgPool2d(output_size=(3, 3))
              (1): Conv2d(64, 4, kernel_size=(1, 1), stride=(1, 1))
              (2): Sigmoid()
              (3): Conv2d(4, 4096, kernel_size=(1, 1), stride=(1, 1), groups=2)
            )
            (conv_guide): Conv2d(24, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
            (corr): Correlation(xcorr_fast)
          )
          (inter2): Sequential(
            (0): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU(inplace=True)
          )
        )
      )
      (up2): Up(
        (up): Upsample(scale_factor=2.0, mode=bilinear)
        (conv): HistGuidedDRDoubleConv(
          (conv1): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (inter1): Sequential(
            (0): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU(inplace=True)
          )
          (conv2): DRConv2d(
            (conv_kernel): Sequential(
              (0): AdaptiveAvgPool2d(output_size=(3, 3))
              (1): Conv2d(32, 4, kernel_size=(1, 1), stride=(1, 1))
              (2): Sigmoid()
              (3): Conv2d(4, 1024, kernel_size=(1, 1), stride=(1, 1), groups=2)
            )
            (conv_guide): Conv2d(24, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
            (corr): Correlation(xcorr_fast)
          )
          (inter2): Sequential(
            (0): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU(inplace=True)
          )
        )
      )
      (up3): Up(
        (up): Upsample(scale_factor=2.0, mode=bilinear)
        (conv): HistGuidedDRDoubleConv(
          (conv1): Conv2d(32, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (inter1): Sequential(
            (0): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU(inplace=True)
          )
          (conv2): DRConv2d(
            (conv_kernel): Sequential(
              (0): AdaptiveAvgPool2d(output_size=(3, 3))
              (1): Conv2d(16, 4, kernel_size=(1, 1), stride=(1, 1))
              (2): Sigmoid()
              (3): Conv2d(4, 256, kernel_size=(1, 1), stride=(1, 1), groups=2)
            )
            (conv_guide): Conv2d(24, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
            (corr): Correlation(xcorr_fast)
          )
          (inter2): Sequential(
            (0): BatchNorm2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU(inplace=True)
          )
        )
      )
      (up4): Up(
        (up): Upsample(scale_factor=2.0, mode=bilinear)
        (conv): HistGuidedDRDoubleConv(
          (conv1): Conv2d(16, 8, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
          (inter1): Sequential(
            (0): BatchNorm2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU(inplace=True)
          )
          (conv2): DRConv2d(
            (conv_kernel): Sequential(
              (0): AdaptiveAvgPool2d(output_size=(3, 3))
              (1): Conv2d(8, 4, kernel_size=(1, 1), stride=(1, 1))
              (2): Sigmoid()
              (3): Conv2d(4, 128, kernel_size=(1, 1), stride=(1, 1), groups=2)
            )
            (conv_guide): Conv2d(24, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
            (corr): Correlation(xcorr_fast)
          )
          (inter2): Sequential(
            (0): BatchNorm2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
            (1): ReLU(inplace=True)
          )
        )
      )
      (outc): OutConv(
        (conv): Conv2d(8, 96, kernel_size=(1, 1), stride=(1, 1))
      )
    )
    (apply_coeffs): ApplyCoeffs()
  )
  (out_net): Sequential(
    (0): Conv2d(9, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace=True)
    (2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace=True)
    (4): NONLocalBlock2D(
      (g): Sequential(
        (0): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1))
        (1): UpsamplingBilinear2d(size=[16, 16], mode=bilinear)
      )
      (W): Conv2d(16, 32, kernel_size=(1, 1), stride=(1, 1))
      (theta): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1))
      (phi): Sequential(
        (0): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1))
        (1): UpsamplingBilinear2d(size=[16, 16], mode=bilinear)
      )
    )
    (5): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1))
    (6): ReLU(inplace=True)
    (7): Conv2d(32, 3, kernel_size=(1, 1), stride=(1, 1))
    (8): NONLocalBlock2D(
      (g): Sequential(
        (0): Conv2d(3, 1, kernel_size=(1, 1), stride=(1, 1))
        (1): UpsamplingBilinear2d(size=[16, 16], mode=bilinear)
      )
      (W): Conv2d(1, 3, kernel_size=(1, 1), stride=(1, 1))
      (theta): Conv2d(3, 1, kernel_size=(1, 1), stride=(1, 1))
      (phi): Sequential(
        (0): Conv2d(3, 1, kernel_size=(1, 1), stride=(1, 1))
        (1): UpsamplingBilinear2d(size=[16, 16], mode=bilinear)
      )
    )
  )
)
[ WARN ] Result directory "lcdpnet_pretrained_models_trained_on_MSEC.ckpt@lcdp_data.test" exists. Press ENTER to overwrite or input suffix to create a new one:
New name: lcdpnet_pretrained_models_trained_on_MSEC.ckpt@lcdp_data.test.
[ WARN ] Overwrite result_dir: lcdpnet_pretrained_models_trained_on_MSEC.ckpt@lcdp_data.test
TEST - Result save path:
pretrained_models\test_result\lcdpnet_pretrained_models_trained_on_MSEC.ckpt@lcdp_data.test
Loading model from: pretrained_models/trained_on_MSEC.ckpt
Dataset augmentation:
[ToPILImage(), Downsample([512, 512]), RandomHorizontalFlip(p=0.5), RandomVerticalFlip(p=0.5), ToTensor()]
D:\Anaconda3\envs\LCDPNet\lib\site-packages\pytorch_lightning\trainer\connectors\accelerator_connector.py:447: LightningDeprecationWarning: Setting `Trainer(gpus=-1)` is deprecated in v1.7 and will be removed in v2.0. Please use `Trainer(accelerator='gpu', devices=-1)` instead.
  rank_zero_deprecation(
Error executing job with overrides: ['checkpoint_path=pretrained_models/trained_on_MSEC.ckpt']
Traceback (most recent call last):
  File "src/test.py", line 32, in main
    trainer = Trainer(
  File "D:\Anaconda3\envs\LCDPNet\lib\site-packages\pytorch_lightning\utilities\argparse.py", line 345, in insert_env_defaults
    return fn(self, **kwargs)
  File "D:\Anaconda3\envs\LCDPNet\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 433, in __init__
    self._accelerator_connector = AcceleratorConnector(
  File "D:\Anaconda3\envs\LCDPNet\lib\site-packages\pytorch_lightning\trainer\connectors\accelerator_connector.py", line 214, in __init__
    self._set_parallel_devices_and_init_accelerator()
  File "D:\Anaconda3\envs\LCDPNet\lib\site-packages\pytorch_lightning\trainer\connectors\accelerator_connector.py", line 531, in _set_parallel_devices_and_init_accelerator
    raise MisconfigurationException(
pytorch_lightning.utilities.exceptions.MisconfigurationException: `CUDAAccelerator` can not run on your system since the accelerator is not available. The following accelerator(s) is available and can be passed into `accelerator` argument of `Trainer`: ['cpu'].
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
How can I fix this error and run the pretrained model?
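From the last lines it looks like a device mismatch rather than a bug in the model: the config has 'gpu': -1 (meaning "use all GPUs" in the old `Trainer(gpus=...)` API), but the exception says only 'cpu' is available, so PyTorch apparently cannot see a CUDA device in this environment. The mapping I believe test.py would need is sketched below; the helper name `pick_trainer_kwargs` is mine, not from the LCDPNet code, and `cuda_available` stands in for `torch.cuda.is_available()`:

```python
def pick_trainer_kwargs(cuda_available: bool, gpu: int = -1) -> dict:
    """Translate the repo's `gpu` config value into Lightning >= 1.7
    Trainer kwargs, falling back to CPU when CUDA is not available
    (the situation the MisconfigurationException above describes)."""
    if gpu != 0 and cuda_available:
        # gpu == -1 meant "all visible GPUs" in the deprecated Trainer(gpus=...) API
        return {"accelerator": "gpu", "devices": gpu}
    return {"accelerator": "cpu", "devices": 1}

# On this machine CUDA is missing, so the run should request CPU:
print(pick_trainer_kwargs(cuda_available=False, gpu=-1))
```

Is editing the `Trainer(...)` call in src/test.py along these lines (or fixing the CUDA install so `torch.cuda.is_available()` returns True) the intended way to run the checkpoint?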