
flare7k's People

Contributors

sczhou, ykdai


flare7k's Issues

Training the model with the 7kpp baseline option

I am trying to reproduce the training, and after reducing num_gpu, batch_size_per_gpu, and num_worker_per_gpu I get the following error at the end:

```
$ python basicsr/train.py -opt options/uformer_flare7kpp_baseline_option.yml --debug
Disable distributed.
Path already exists. Rename it to /home/alejandro/Flare7K/experiments/debug_Uformer_flare7kpp_baseline_option.yml_archived_20240613_165148
2024-06-13 16:51:48,099 INFO:

Version Information:
BasicSR: 1.4.2
PyTorch: 2.3.1+cu121
TorchVision: 0.18.1+cu121
2024-06-13 16:51:48,099 INFO:
name: debug_Uformer_flare7kpp_baseline_option.yml
model_type: DeflareModel
scale: 1
num_gpu: 1
manual_seed: 0
datasets:[
train:[
name: Flare7Kpp
type: Flare7kpp_Pair_Loader
image_path: dataset/Flickr24K
scattering_dict:[
Flare7k_scattering: dataset/Flare7Kpp/Flare7K/Scattering_Flare/Compound_Flare
Real_scattering1: dataset/Flare7Kpp/Flare-R/Compound_Flare
]
reflective_dict:[
Flare7k_reflective: dataset/Flare7Kpp/Flare7K/Reflective_Flare
Real_reflective1: None
]
light_dict:[
Flare7k_light: dataset/Flare7Kpp/Flare7K/Scattering_Flare/Light_Source
Real_light1: dataset/Flare7Kpp/Flare-R/Light_Source
]
data_ratio: [0.5, 0.5]
transform_base:[
img_size: 512
]
transform_flare:[
scale_min: 0.7
scale_max: 1.2
translate: 100
shear: 20
]
mask_type: None
use_shuffle: True
num_worker_per_gpu: 2
batch_size_per_gpu: 1
dataset_enlarge_ratio: 1
prefetch_mode: None
phase: train
scale: 1
]
val:[
name: flare_test
type: Image_Pair_Loader
dataroot_gt: dataset/Flare7Kpp/val/gt
dataroot_lq: dataset/Flare7Kpp/val/input
gt_size: 512
phase: val
scale: 1
]
]
network_g:[
type: Uformer
img_size: 512
img_ch: 3
output_ch: 6
multi_stage: 1
]
path:[
pretrain_network_g: None
strict_load_g: True
resume_state: None
experiments_root: /home/alejandro/Flare7K/experiments/debug_Uformer_flare7kpp_baseline_option.yml
models: /home/alejandro/Flare7K/experiments/debug_Uformer_flare7kpp_baseline_option.yml/models
training_states: /home/alejandro/Flare7K/experiments/debug_Uformer_flare7kpp_baseline_option.yml/training_states
log: /home/alejandro/Flare7K/experiments/debug_Uformer_flare7kpp_baseline_option.yml
visualization: /home/alejandro/Flare7K/experiments/debug_Uformer_flare7kpp_baseline_option.yml/visualization
]
train:[
optim_g:[
type: Adam
lr: 0.0001
weight_decay: 0
betas: [0.9, 0.99]
]
scheduler:[
type: MultiStepLR
milestones: [200000]
gamma: 0.5
]
out_deflare: True
ema_decay: 0.9
total_iter: 600000
warmup_iter: -1
l1_opt:[
type: L_Abs_pure
loss_weight: 0.5
]
perceptual:[
type: L_percepture
loss_weight: 0.5
]
]
val:[
val_freq: 8
save_img: True
metrics:[
psnr:[
type: calculate_psnr
crop_border: 0
test_y_channel: False
]
ssim:[
type: calculate_ssim
crop_border: 0
test_y_channel: False
]
]
]
logger:[
print_freq: 1
save_checkpoint_freq: 8
use_tb_logger: True
wandb:[
project: None
resume_id: None
]
]
dist_params:[
backend: nccl
port: 29500
]
dist: False
rank: 0
world_size: 1
auto_resume: False
is_train: True
root_path: /home/alejandro/Flare7K

Base Image Loaded with examples: 23949
Scattering Flare Image: Flare7k_scattering is loaded successfully with examples 5000
Now we have 1 scattering flare images
Scattering Flare Image: Real_scattering1 is loaded successfully with examples 962
Now we have 2 scattering flare images
Reflective Flare Image: Flare7k_reflective is loaded successfully with examples 2000
Now we have 1 refelctive flare images
ERROR: reflective flare images are not loaded properly
Now we have 2 refelctive flare images
Light Source Image: Flare7k_light is loaded successfully with examples 5000
Now we have 1 light source images
Light Source Image: Real_light1 is loaded successfully with examples 962
Now we have 2 light source images
2024-06-13 16:51:48,183 INFO: Dataset [Flare7kpp_Pair_Loader] - Flare7Kpp is built.
2024-06-13 16:51:48,183 INFO: Training statistics:
Number of train images: 23949
Dataset enlarge ratio: 1
Batch size per gpu: 1
World size (gpu number): 1
Require iter number per epoch: 23949
Total epochs: 26; iters: 600000.
2024-06-13 16:51:48,183 INFO: Dataset [Image_Pair_Loader] - flare_test is built.
2024-06-13 16:51:48,183 INFO: Number of val images/folders in flare_test: 0
/home/alejandro/Flare7K/venv/lib/python3.10/site-packages/torch/functional.py:512: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3587.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
2024-06-13 16:51:48,308 INFO: Network [Uformer] is created.
2024-06-13 16:51:48,382 INFO: Network: Uformer, with parameters: 20,473,888
2024-06-13 16:51:48,382 INFO: Uformer(
embed_dim=32, token_projection=linear, token_mlp=ffn,win_size=8
(pos_drop): Dropout(p=0.0, inplace=False)
(input_proj): InputProj(
(proj): Sequential(
(0): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): LeakyReLU(negative_slope=0.01, inplace=True)
)
)
(output_proj): OutputProj(
(proj): Sequential(
(0): Conv2d(64, 6, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
)
(encoderlayer_0): BasicUformerLayer(
dim=32, input_resolution=(512, 512), depth=2
(blocks): ModuleList(
(0): LeWinTransformerBlock(
dim=32, input_resolution=(512, 512), num_heads=1, win_size=8, shift_size=0, mlp_ratio=4.0
(norm1): LayerNorm((32,), eps=1e-05, elementwise_affine=True)
(attn): WindowAttention(
dim=32, win_size=(8, 8), num_heads=1
(qkv): LinearProjection(
(to_q): Linear(in_features=32, out_features=32, bias=True)
(to_kv): Linear(in_features=32, out_features=64, bias=True)
)
(attn_drop): Dropout(p=0.0, inplace=False)
(proj): Linear(in_features=32, out_features=32, bias=True)
(se_layer): Identity()
(proj_drop): Dropout(p=0.0, inplace=False)
(softmax): Softmax(dim=-1)
)
(drop_path): Identity()
(norm2): LayerNorm((32,), eps=1e-05, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=32, out_features=128, bias=True)
(act): GELU(approximate='none')
(fc2): Linear(in_features=128, out_features=32, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
(1): LeWinTransformerBlock(
dim=32, input_resolution=(512, 512), num_heads=1, win_size=8, shift_size=4, mlp_ratio=4.0
(norm1): LayerNorm((32,), eps=1e-05, elementwise_affine=True)
(attn): WindowAttention(
dim=32, win_size=(8, 8), num_heads=1
(qkv): LinearProjection(
(to_q): Linear(in_features=32, out_features=32, bias=True)
(to_kv): Linear(in_features=32, out_features=64, bias=True)
)
(attn_drop): Dropout(p=0.0, inplace=False)
(proj): Linear(in_features=32, out_features=32, bias=True)
(se_layer): Identity()
(proj_drop): Dropout(p=0.0, inplace=False)
(softmax): Softmax(dim=-1)
)
(drop_path): DropPath(drop_prob=0.014)
(norm2): LayerNorm((32,), eps=1e-05, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=32, out_features=128, bias=True)
(act): GELU(approximate='none')
(fc2): Linear(in_features=128, out_features=32, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
)
)
(dowsample_0): Downsample(
(conv): Sequential(
(0): Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
)
)
(encoderlayer_1): BasicUformerLayer(
dim=64, input_resolution=(256, 256), depth=2
(blocks): ModuleList(
(0): LeWinTransformerBlock(
dim=64, input_resolution=(256, 256), num_heads=2, win_size=8, shift_size=0, mlp_ratio=4.0
(norm1): LayerNorm((64,), eps=1e-05, elementwise_affine=True)
(attn): WindowAttention(
dim=64, win_size=(8, 8), num_heads=2
(qkv): LinearProjection(
(to_q): Linear(in_features=64, out_features=64, bias=True)
(to_kv): Linear(in_features=64, out_features=128, bias=True)
)
(attn_drop): Dropout(p=0.0, inplace=False)
(proj): Linear(in_features=64, out_features=64, bias=True)
(se_layer): Identity()
(proj_drop): Dropout(p=0.0, inplace=False)
(softmax): Softmax(dim=-1)
)
(drop_path): DropPath(drop_prob=0.029)
(norm2): LayerNorm((64,), eps=1e-05, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=64, out_features=256, bias=True)
(act): GELU(approximate='none')
(fc2): Linear(in_features=256, out_features=64, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
(1): LeWinTransformerBlock(
dim=64, input_resolution=(256, 256), num_heads=2, win_size=8, shift_size=4, mlp_ratio=4.0
(norm1): LayerNorm((64,), eps=1e-05, elementwise_affine=True)
(attn): WindowAttention(
dim=64, win_size=(8, 8), num_heads=2
(qkv): LinearProjection(
(to_q): Linear(in_features=64, out_features=64, bias=True)
(to_kv): Linear(in_features=64, out_features=128, bias=True)
)
(attn_drop): Dropout(p=0.0, inplace=False)
(proj): Linear(in_features=64, out_features=64, bias=True)
(se_layer): Identity()
(proj_drop): Dropout(p=0.0, inplace=False)
(softmax): Softmax(dim=-1)
)
(drop_path): DropPath(drop_prob=0.043)
(norm2): LayerNorm((64,), eps=1e-05, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=64, out_features=256, bias=True)
(act): GELU(approximate='none')
(fc2): Linear(in_features=256, out_features=64, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
)
)
(dowsample_1): Downsample(
(conv): Sequential(
(0): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
)
)
(encoderlayer_2): BasicUformerLayer(
dim=128, input_resolution=(128, 128), depth=2
(blocks): ModuleList(
(0): LeWinTransformerBlock(
dim=128, input_resolution=(128, 128), num_heads=4, win_size=8, shift_size=0, mlp_ratio=4.0
(norm1): LayerNorm((128,), eps=1e-05, elementwise_affine=True)
(attn): WindowAttention(
dim=128, win_size=(8, 8), num_heads=4
(qkv): LinearProjection(
(to_q): Linear(in_features=128, out_features=128, bias=True)
(to_kv): Linear(in_features=128, out_features=256, bias=True)
)
(attn_drop): Dropout(p=0.0, inplace=False)
(proj): Linear(in_features=128, out_features=128, bias=True)
(se_layer): Identity()
(proj_drop): Dropout(p=0.0, inplace=False)
(softmax): Softmax(dim=-1)
)
(drop_path): DropPath(drop_prob=0.057)
(norm2): LayerNorm((128,), eps=1e-05, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=128, out_features=512, bias=True)
(act): GELU(approximate='none')
(fc2): Linear(in_features=512, out_features=128, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
(1): LeWinTransformerBlock(
dim=128, input_resolution=(128, 128), num_heads=4, win_size=8, shift_size=4, mlp_ratio=4.0
(norm1): LayerNorm((128,), eps=1e-05, elementwise_affine=True)
(attn): WindowAttention(
dim=128, win_size=(8, 8), num_heads=4
(qkv): LinearProjection(
(to_q): Linear(in_features=128, out_features=128, bias=True)
(to_kv): Linear(in_features=128, out_features=256, bias=True)
)
(attn_drop): Dropout(p=0.0, inplace=False)
(proj): Linear(in_features=128, out_features=128, bias=True)
(se_layer): Identity()
(proj_drop): Dropout(p=0.0, inplace=False)
(softmax): Softmax(dim=-1)
)
(drop_path): DropPath(drop_prob=0.071)
(norm2): LayerNorm((128,), eps=1e-05, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=128, out_features=512, bias=True)
(act): GELU(approximate='none')
(fc2): Linear(in_features=512, out_features=128, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
)
)
(dowsample_2): Downsample(
(conv): Sequential(
(0): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
)
)
(encoderlayer_3): BasicUformerLayer(
dim=256, input_resolution=(64, 64), depth=2
(blocks): ModuleList(
(0): LeWinTransformerBlock(
dim=256, input_resolution=(64, 64), num_heads=8, win_size=8, shift_size=0, mlp_ratio=4.0
(norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(attn): WindowAttention(
dim=256, win_size=(8, 8), num_heads=8
(qkv): LinearProjection(
(to_q): Linear(in_features=256, out_features=256, bias=True)
(to_kv): Linear(in_features=256, out_features=512, bias=True)
)
(attn_drop): Dropout(p=0.0, inplace=False)
(proj): Linear(in_features=256, out_features=256, bias=True)
(se_layer): Identity()
(proj_drop): Dropout(p=0.0, inplace=False)
(softmax): Softmax(dim=-1)
)
(drop_path): DropPath(drop_prob=0.086)
(norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=256, out_features=1024, bias=True)
(act): GELU(approximate='none')
(fc2): Linear(in_features=1024, out_features=256, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
(1): LeWinTransformerBlock(
dim=256, input_resolution=(64, 64), num_heads=8, win_size=8, shift_size=4, mlp_ratio=4.0
(norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(attn): WindowAttention(
dim=256, win_size=(8, 8), num_heads=8
(qkv): LinearProjection(
(to_q): Linear(in_features=256, out_features=256, bias=True)
(to_kv): Linear(in_features=256, out_features=512, bias=True)
)
(attn_drop): Dropout(p=0.0, inplace=False)
(proj): Linear(in_features=256, out_features=256, bias=True)
(se_layer): Identity()
(proj_drop): Dropout(p=0.0, inplace=False)
(softmax): Softmax(dim=-1)
)
(drop_path): DropPath(drop_prob=0.100)
(norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=256, out_features=1024, bias=True)
(act): GELU(approximate='none')
(fc2): Linear(in_features=1024, out_features=256, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
)
)
(dowsample_3): Downsample(
(conv): Sequential(
(0): Conv2d(256, 512, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
)
)
(conv): BasicUformerLayer(
dim=512, input_resolution=(32, 32), depth=2
(blocks): ModuleList(
(0): LeWinTransformerBlock(
dim=512, input_resolution=(32, 32), num_heads=16, win_size=8, shift_size=0, mlp_ratio=4.0
(norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
(attn): WindowAttention(
dim=512, win_size=(8, 8), num_heads=16
(qkv): LinearProjection(
(to_q): Linear(in_features=512, out_features=512, bias=True)
(to_kv): Linear(in_features=512, out_features=1024, bias=True)
)
(attn_drop): Dropout(p=0.0, inplace=False)
(proj): Linear(in_features=512, out_features=512, bias=True)
(se_layer): Identity()
(proj_drop): Dropout(p=0.0, inplace=False)
(softmax): Softmax(dim=-1)
)
(drop_path): DropPath(drop_prob=0.100)
(norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=512, out_features=2048, bias=True)
(act): GELU(approximate='none')
(fc2): Linear(in_features=2048, out_features=512, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
(1): LeWinTransformerBlock(
dim=512, input_resolution=(32, 32), num_heads=16, win_size=8, shift_size=4, mlp_ratio=4.0
(norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
(attn): WindowAttention(
dim=512, win_size=(8, 8), num_heads=16
(qkv): LinearProjection(
(to_q): Linear(in_features=512, out_features=512, bias=True)
(to_kv): Linear(in_features=512, out_features=1024, bias=True)
)
(attn_drop): Dropout(p=0.0, inplace=False)
(proj): Linear(in_features=512, out_features=512, bias=True)
(se_layer): Identity()
(proj_drop): Dropout(p=0.0, inplace=False)
(softmax): Softmax(dim=-1)
)
(drop_path): DropPath(drop_prob=0.100)
(norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=512, out_features=2048, bias=True)
(act): GELU(approximate='none')
(fc2): Linear(in_features=2048, out_features=512, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
)
)
(upsample_0): Upsample(
(deconv): Sequential(
(0): ConvTranspose2d(512, 256, kernel_size=(2, 2), stride=(2, 2))
)
)
(decoderlayer_0): BasicUformerLayer(
dim=512, input_resolution=(64, 64), depth=2
(blocks): ModuleList(
(0): LeWinTransformerBlock(
dim=512, input_resolution=(64, 64), num_heads=16, win_size=8, shift_size=0, mlp_ratio=4.0
(norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
(attn): WindowAttention(
dim=512, win_size=(8, 8), num_heads=16
(qkv): LinearProjection(
(to_q): Linear(in_features=512, out_features=512, bias=True)
(to_kv): Linear(in_features=512, out_features=1024, bias=True)
)
(attn_drop): Dropout(p=0.0, inplace=False)
(proj): Linear(in_features=512, out_features=512, bias=True)
(se_layer): Identity()
(proj_drop): Dropout(p=0.0, inplace=False)
(softmax): Softmax(dim=-1)
)
(drop_path): DropPath(drop_prob=0.100)
(norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=512, out_features=2048, bias=True)
(act): GELU(approximate='none')
(fc2): Linear(in_features=2048, out_features=512, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
(1): LeWinTransformerBlock(
dim=512, input_resolution=(64, 64), num_heads=16, win_size=8, shift_size=4, mlp_ratio=4.0
(norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
(attn): WindowAttention(
dim=512, win_size=(8, 8), num_heads=16
(qkv): LinearProjection(
(to_q): Linear(in_features=512, out_features=512, bias=True)
(to_kv): Linear(in_features=512, out_features=1024, bias=True)
)
(attn_drop): Dropout(p=0.0, inplace=False)
(proj): Linear(in_features=512, out_features=512, bias=True)
(se_layer): Identity()
(proj_drop): Dropout(p=0.0, inplace=False)
(softmax): Softmax(dim=-1)
)
(drop_path): DropPath(drop_prob=0.086)
(norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=512, out_features=2048, bias=True)
(act): GELU(approximate='none')
(fc2): Linear(in_features=2048, out_features=512, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
)
)
(upsample_1): Upsample(
(deconv): Sequential(
(0): ConvTranspose2d(512, 128, kernel_size=(2, 2), stride=(2, 2))
)
)
(decoderlayer_1): BasicUformerLayer(
dim=256, input_resolution=(128, 128), depth=2
(blocks): ModuleList(
(0): LeWinTransformerBlock(
dim=256, input_resolution=(128, 128), num_heads=8, win_size=8, shift_size=0, mlp_ratio=4.0
(norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(attn): WindowAttention(
dim=256, win_size=(8, 8), num_heads=8
(qkv): LinearProjection(
(to_q): Linear(in_features=256, out_features=256, bias=True)
(to_kv): Linear(in_features=256, out_features=512, bias=True)
)
(attn_drop): Dropout(p=0.0, inplace=False)
(proj): Linear(in_features=256, out_features=256, bias=True)
(se_layer): Identity()
(proj_drop): Dropout(p=0.0, inplace=False)
(softmax): Softmax(dim=-1)
)
(drop_path): DropPath(drop_prob=0.071)
(norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=256, out_features=1024, bias=True)
(act): GELU(approximate='none')
(fc2): Linear(in_features=1024, out_features=256, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
(1): LeWinTransformerBlock(
dim=256, input_resolution=(128, 128), num_heads=8, win_size=8, shift_size=4, mlp_ratio=4.0
(norm1): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(attn): WindowAttention(
dim=256, win_size=(8, 8), num_heads=8
(qkv): LinearProjection(
(to_q): Linear(in_features=256, out_features=256, bias=True)
(to_kv): Linear(in_features=256, out_features=512, bias=True)
)
(attn_drop): Dropout(p=0.0, inplace=False)
(proj): Linear(in_features=256, out_features=256, bias=True)
(se_layer): Identity()
(proj_drop): Dropout(p=0.0, inplace=False)
(softmax): Softmax(dim=-1)
)
(drop_path): DropPath(drop_prob=0.057)
(norm2): LayerNorm((256,), eps=1e-05, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=256, out_features=1024, bias=True)
(act): GELU(approximate='none')
(fc2): Linear(in_features=1024, out_features=256, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
)
)
(upsample_2): Upsample(
(deconv): Sequential(
(0): ConvTranspose2d(256, 64, kernel_size=(2, 2), stride=(2, 2))
)
)
(decoderlayer_2): BasicUformerLayer(
dim=128, input_resolution=(256, 256), depth=2
(blocks): ModuleList(
(0): LeWinTransformerBlock(
dim=128, input_resolution=(256, 256), num_heads=4, win_size=8, shift_size=0, mlp_ratio=4.0
(norm1): LayerNorm((128,), eps=1e-05, elementwise_affine=True)
(attn): WindowAttention(
dim=128, win_size=(8, 8), num_heads=4
(qkv): LinearProjection(
(to_q): Linear(in_features=128, out_features=128, bias=True)
(to_kv): Linear(in_features=128, out_features=256, bias=True)
)
(attn_drop): Dropout(p=0.0, inplace=False)
(proj): Linear(in_features=128, out_features=128, bias=True)
(se_layer): Identity()
(proj_drop): Dropout(p=0.0, inplace=False)
(softmax): Softmax(dim=-1)
)
(drop_path): DropPath(drop_prob=0.043)
(norm2): LayerNorm((128,), eps=1e-05, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=128, out_features=512, bias=True)
(act): GELU(approximate='none')
(fc2): Linear(in_features=512, out_features=128, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
(1): LeWinTransformerBlock(
dim=128, input_resolution=(256, 256), num_heads=4, win_size=8, shift_size=4, mlp_ratio=4.0
(norm1): LayerNorm((128,), eps=1e-05, elementwise_affine=True)
(attn): WindowAttention(
dim=128, win_size=(8, 8), num_heads=4
(qkv): LinearProjection(
(to_q): Linear(in_features=128, out_features=128, bias=True)
(to_kv): Linear(in_features=128, out_features=256, bias=True)
)
(attn_drop): Dropout(p=0.0, inplace=False)
(proj): Linear(in_features=128, out_features=128, bias=True)
(se_layer): Identity()
(proj_drop): Dropout(p=0.0, inplace=False)
(softmax): Softmax(dim=-1)
)
(drop_path): DropPath(drop_prob=0.029)
(norm2): LayerNorm((128,), eps=1e-05, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=128, out_features=512, bias=True)
(act): GELU(approximate='none')
(fc2): Linear(in_features=512, out_features=128, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
)
)
(upsample_3): Upsample(
(deconv): Sequential(
(0): ConvTranspose2d(128, 32, kernel_size=(2, 2), stride=(2, 2))
)
)
(decoderlayer_3): BasicUformerLayer(
dim=64, input_resolution=(512, 512), depth=2
(blocks): ModuleList(
(0): LeWinTransformerBlock(
dim=64, input_resolution=(512, 512), num_heads=2, win_size=8, shift_size=0, mlp_ratio=4.0
(norm1): LayerNorm((64,), eps=1e-05, elementwise_affine=True)
(attn): WindowAttention(
dim=64, win_size=(8, 8), num_heads=2
(qkv): LinearProjection(
(to_q): Linear(in_features=64, out_features=64, bias=True)
(to_kv): Linear(in_features=64, out_features=128, bias=True)
)
(attn_drop): Dropout(p=0.0, inplace=False)
(proj): Linear(in_features=64, out_features=64, bias=True)
(se_layer): Identity()
(proj_drop): Dropout(p=0.0, inplace=False)
(softmax): Softmax(dim=-1)
)
(drop_path): DropPath(drop_prob=0.014)
(norm2): LayerNorm((64,), eps=1e-05, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=64, out_features=256, bias=True)
(act): GELU(approximate='none')
(fc2): Linear(in_features=256, out_features=64, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
(1): LeWinTransformerBlock(
dim=64, input_resolution=(512, 512), num_heads=2, win_size=8, shift_size=4, mlp_ratio=4.0
(norm1): LayerNorm((64,), eps=1e-05, elementwise_affine=True)
(attn): WindowAttention(
dim=64, win_size=(8, 8), num_heads=2
(qkv): LinearProjection(
(to_q): Linear(in_features=64, out_features=64, bias=True)
(to_kv): Linear(in_features=64, out_features=128, bias=True)
)
(attn_drop): Dropout(p=0.0, inplace=False)
(proj): Linear(in_features=64, out_features=64, bias=True)
(se_layer): Identity()
(proj_drop): Dropout(p=0.0, inplace=False)
(softmax): Softmax(dim=-1)
)
(drop_path): Identity()
(norm2): LayerNorm((64,), eps=1e-05, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=64, out_features=256, bias=True)
(act): GELU(approximate='none')
(fc2): Linear(in_features=256, out_features=64, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
)
)
(activation): Sequential(
(0): Sigmoid()
)
)
Output channel is: 6
Network contains 1 stages.
2024-06-13 16:51:48,383 INFO: Use Exponential Moving Average with decay: 0.9
2024-06-13 16:51:48,491 INFO: Network [Uformer] is created.
2024-06-13 16:51:48,516 INFO: Loss [L_Abs_pure] is created.
/home/alejandro/Flare7K/venv/lib/python3.10/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
warnings.warn(
/home/alejandro/Flare7K/venv/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing weights=VGG19_Weights.IMAGENET1K_V1. You can also use weights=VGG19_Weights.DEFAULT to get the most up-to-date weights.
warnings.warn(msg)
2024-06-13 16:51:49,145 INFO: Loss [L_percepture] is created.
2024-06-13 16:51:49,146 INFO: Model [DeflareModel] is created.
2024-06-13 16:51:49,182 INFO: Start training from epoch: 0, iter: 0
2024-06-13 16:51:51,180 INFO: [debug..][epoch: 0, iter: 1, lr:(1.000e-04,)] [eta: 0:00:18, time (data): 1.997 (0.462)] l1_recons: 4.0594e-01 l1_flare: 1.6040e-01 l1_base: 1.7573e-01 l1: 7.4207e-01 l_vgg: 5.2195e+00 l_vgg_base: 4.1613e+00 l_vgg_flare: 1.0581e+00
2024-06-13 16:51:51,499 INFO: [debug..][epoch: 0, iter: 2, lr:(1.000e-04,)] [eta: 17:45:16, time (data): 1.158 (0.232)] l1_recons: 5.6692e-01 l1_flare: 2.1488e-01 l1_base: 2.0206e-01 l1: 9.8386e-01 l_vgg: 6.3416e+00 l_vgg_base: 3.2543e+00 l_vgg_flare: 3.0873e+00
2024-06-13 16:51:51,818 INFO: [debug..][epoch: 0, iter: 3, lr:(1.000e-04,)] [eta: 1 day, 2:36:36, time (data): 0.879 (0.155)] l1_recons: 3.8963e-01 l1_flare: 2.1506e-01 l1_base: 1.3989e-01 l1: 7.4458e-01 l_vgg: 5.3299e+00 l_vgg_base: 3.7483e+00 l_vgg_flare: 1.5816e+00
2024-06-13 16:51:52,138 INFO: [debug..][epoch: 0, iter: 4, lr:(1.000e-04,)] [eta: 1 day, 7:57:16, time (data): 0.739 (0.117)] l1_recons: 3.5855e-01 l1_flare: 2.2071e-01 l1_base: 9.5195e-02 l1: 6.7446e-01 l_vgg: 8.1121e+00 l_vgg_base: 5.9091e+00 l_vgg_flare: 2.2030e+00
2024-06-13 16:51:52,459 INFO: [debug..][epoch: 0, iter: 5, lr:(1.000e-04,)] [eta: 1 day, 11:32:46, time (data): 0.655 (0.094)] l1_recons: 2.4221e-01 l1_flare: 2.3469e-01 l1_base: 8.3441e-02 l1: 5.6034e-01 l_vgg: 6.0877e+00 l_vgg_base: 3.9796e+00 l_vgg_flare: 2.1080e+00
2024-06-13 16:51:52,777 INFO: [debug..][epoch: 0, iter: 6, lr:(1.000e-04,)] [eta: 1 day, 14:01:48, time (data): 0.599 (0.078)] l1_recons: 2.4650e-01 l1_flare: 1.0821e-01 l1_base: 1.0287e-01 l1: 4.5758e-01 l_vgg: 4.7867e+00 l_vgg_base: 3.2679e+00 l_vgg_flare: 1.5189e+00
2024-06-13 16:51:53,097 INFO: [debug..][epoch: 0, iter: 7, lr:(1.000e-04,)] [eta: 1 day, 15:56:33, time (data): 0.559 (0.067)] l1_recons: 2.5951e-01 l1_flare: 2.2821e-01 l1_base: 1.0263e-01 l1: 5.9036e-01 l_vgg: 5.8477e+00 l_vgg_base: 4.3610e+00 l_vgg_flare: 1.4867e+00
2024-06-13 16:51:53,416 INFO: [debug..][epoch: 0, iter: 8, lr:(1.000e-04,)] [eta: 1 day, 17:24:17, time (data): 0.529 (0.059)] l1_recons: 2.9199e-01 l1_flare: 1.8818e-01 l1_base: 1.0716e-01 l1: 5.8733e-01 l_vgg: 5.3248e+00 l_vgg_base: 3.5285e+00 l_vgg_flare: 1.7962e+00
2024-06-13 16:51:53,416 INFO: Saving models and training states.
Traceback (most recent call last):
  File "/home/alejandro/Flare7K/basicsr/train.py", line 215, in <module>
    train_pipeline(root_path)
  File "/home/alejandro/Flare7K/basicsr/train.py", line 193, in train_pipeline
    model.validation(val_loader, current_iter, tb_logger, opt['val']['save_img'])
  File "/home/alejandro/Flare7K/basicsr/models/base_model.py", line 48, in validation
    self.nondist_validation(dataloader, current_iter, tb_logger, save_img)
  File "/home/alejandro/Flare7K/basicsr/models/deflare_model.py", line 190, in nondist_validation
    self.metric_results[metric] /= (idx + 1)
UnboundLocalError: local variable 'idx' referenced before assignment
```
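
The traceback is a symptom of the empty validation set (note `Number of val images/folders in flare_test: 0` earlier in the log): the metric loop never executes, so `idx` is never bound. Below is a minimal sketch of the failing pattern, assuming the averaging mirrors BasicSR's `nondist_validation`; the function name is illustrative.

```python
# Minimal sketch (assumption: mirrors the metric averaging in BasicSR's
# nondist_validation). With an empty dataloader, the enumerate loop never
# binds `idx`, which produces the UnboundLocalError above.
def average_metrics(metric_results, dataloader):
    idx = -1  # pre-bind so an empty loader fails with a clear message
    for idx, _ in enumerate(dataloader):
        pass  # inference and metric accumulation happen here in the real code
    if idx < 0:
        raise ValueError("Validation set is empty; check dataroot_gt/dataroot_lq "
                         "in the options YAML.")
    for metric in metric_results:
        metric_results[metric] /= (idx + 1)
    return metric_results
```

In practice, populating `dataset/Flare7Kpp/val/gt` and `dataset/Flare7Kpp/val/input` so the val loader actually finds images should make the error go away.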

Bad performance on real-shot images in BDD100k

I downloaded the authors' pretrained Uformer model and ran inference on real-shot nighttime images from the BDD100k dataset, but the results are poor. I am unsure whether I set the wrong hyper-parameters or whether performance is limited by the domain gap between real and synthesized flares.

(Attached images: real_night_0 input and deflare_img_0 output.)

About the reflective dataset

Hi, thanks for your great work!
I wonder whether BracketFlare is better than the Flare7K reflective-flare dataset for reflective flare removal.

ModuleNotFoundError

Excuse me, when I run the training script it shows `ModuleNotFoundError: No module named 'basicsr.archs.vgg_arch'`. Is a file missing?

Training using U_Net

Hello, when I tried to use U_Net for training, I modified the following parameter: in 'uformer_flare7kpp_baseline_option.yml', I changed the 'type' of 'network_g' to 'U_Net'. The reported error is:

```
Traceback (most recent call last):
  File "basicsr/train.py", line 217, in <module>
    train_pipeline(root_path)
  File "basicsr/train.py", line 126, in train_pipeline
    model = build_model(opt)
  File ".../basicsr/models/__init__.py", line 26, in build_model
    model = MODEL_REGISTRY.get(opt['model_type'])(opt)
  File ".../basicsr/models/sr_model.py", line 22, in __init__
    self.net_g = build_network(opt['network_g'])
  File ".../basicsr/archs/__init__.py", line 25, in build_network
    net = ARCH_REGISTRY.get(network_type)(**opt)
TypeError: __init__() got an unexpected keyword argument 'img_size'
```

How can I solve this problem? If I want to train the network with U_Net, are any further steps required? I look forward to your reply.
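
For what it's worth, the error follows from how BasicSR-style registries build networks: every key under `network_g` except `type` is forwarded to the architecture's `__init__`, so a key like `img_size` that a plain U-Net constructor does not accept raises this TypeError. Here is a self-contained sketch of that behaviour (the toy registry and the `U_Net` signature are illustrative assumptions, not the repository's exact code):

```python
# Toy reproduction of the registry behaviour (assumption: mirrors
# basicsr.archs.build_network; the U_Net signature is illustrative).
ARCH_REGISTRY = {}

def register(cls):
    ARCH_REGISTRY[cls.__name__] = cls
    return cls

@register
class U_Net:
    def __init__(self, img_ch=3, output_ch=6):  # note: no img_size parameter
        self.img_ch, self.output_ch = img_ch, output_ch

def build_network(opt):
    opt = dict(opt)                 # copy of the network_g section
    network_type = opt.pop('type')  # every remaining key becomes a kwarg
    return ARCH_REGISTRY[network_type](**opt)

build_network({'type': 'U_Net', 'img_ch': 3, 'output_ch': 6})  # OK
try:
    build_network({'type': 'U_Net', 'img_size': 512, 'img_ch': 3, 'output_ch': 6})
except TypeError as e:
    print(e)  # ... got an unexpected keyword argument 'img_size'
```

If so, the fix is to delete the keys U_Net does not accept (such as `img_size`) from `network_g` in the YAML, which matches what a later issue in this list reports doing.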

About test images

Hello authors, could you please provide the 100 test images? They don't seem to be on GitHub.

The final test results.

Was the reflective-flare dataset used during training for the highest accuracy reported in the Flare7K++ paper?
Thank you for your reply.

Validation set problem

Could you please provide more details about the validation set used during training? Specifically, how is the validation set generated or selected? Thank you so much.
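
For reference, here is one way a validation pair could be synthesized, assuming the additive flare model used for the training data (linearize with a gamma curve, add the flare, clamp, and return to display space); the gamma value and function names are illustrative, not the authors' exact pipeline.

```python
import torch

def compose_pair(base, flare, gamma=2.2):
    """Sketch of additive flare synthesis; base/flare are (3, H, W) in [0, 1]."""
    base_lin = base.pow(gamma)                   # sRGB -> approximately linear
    flare_lin = flare.pow(gamma)
    lq_lin = (base_lin + flare_lin).clamp(0, 1)  # flares add in linear light
    lq = lq_lin.pow(1.0 / gamma)                 # back to display space
    return lq, base                              # (flare-corrupted input, gt)
```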

Is there an error in your data_loader.py?

In your data_loader.py, class Flare_Image_Loader, there is some code that transforms the PIL image to a tensor:
```python
if self.transform_base is not None:
    base_img = to_tensor(base_img)
    base_img = adjust_gamma(base_img)
    base_img = self.transform_base(base_img)
else:
    base_img = to_tensor(base_img)
    base_img = adjust_gamma(base_img)
    base_img = base_img.permute(2, 0, 1)
```

In the ELSE branch, is `base_img = base_img.permute(2, 0, 1)` still needed? I think the to_tensor function already does this; when I didn't use transform_base, I got a dimension error.
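
A quick standalone check supports this: torchvision's `to_tensor` already returns a CHW tensor for a PIL image, so the extra permute scrambles the layout.

```python
# to_tensor converts a PIL HWC image to a CHW tensor, so a further
# permute(2, 0, 1) produces the wrong layout.
import numpy as np
from PIL import Image
from torchvision.transforms.functional import to_tensor

img = Image.fromarray(np.zeros((512, 512, 3), dtype=np.uint8))
t = to_tensor(img)
print(t.shape)                   # torch.Size([3, 512, 512]) -- already CHW
print(t.permute(2, 0, 1).shape)  # torch.Size([512, 3, 512]) -- scrambled
```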

Training and inference issue of Uformer

Hello, I'm trying to reproduce the training process of Uformer with the source code provided in this repository. During training, the visualization results of the validation are the expected normal images. But when the trained weights are saved and loaded again to perform inference, the results generated by the model are all gray images. By debugging the code, I found that, for any input, the final output of the model is a tensor whose elements average 0.5. What could be the cause of this problem? Thank you!
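
One plausible direction (an assumption, not a confirmed diagnosis): the network ends in a Sigmoid, so a constant 0.5 output means the pre-activation is roughly zero everywhere, which is what you would see if the checkpoint weights never actually reached the network, e.g. reading the wrong state-dict entry when loading. A quick inspection sketch (the key names follow BasicSR's usual checkpoint convention; the path is a placeholder):

```python
import torch

# BasicSR-style checkpoints typically store weights under 'params' and,
# when EMA is enabled, 'params_ema'; check which one you are loading.
ckpt = torch.load('net_g_latest.pth', map_location='cpu')  # placeholder path
print(list(ckpt.keys()))
```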

Questions about dataset usage

Hi, first of all, thanks a lot for coming up with such a great dataset!

For flare removal, generating the dataset is definitely the most difficult aspect. I would love to use your data to train the model I am developing, but I plan to deploy my models in an iOS app in the future. I realize your dataset is under the S-Lab License 1.0, so I am aware I need to contact you for permission. Do you strictly prohibit commercial use? It would be really great to be able to use your data.

score

Sorry to bother you again. I saw some of the data you provided, thank you very much, but I couldn't find the ground truth and masks for the test images. I want to compare with other participants.

Missing libcufft.so.11 and libnvJitLink.so.12 when running in the configured environment

```
python3 evaluate.py --input result/blend/ --gt dataset/Flare7Kpp/test_data/real/gt/ --mask dataset/Flare7Kpp/test_data/real/mask/

Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/flare/lib/python3.9/site-packages/torch/__init__.py", line 174, in _load_global_deps
    ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
  File "/home/ubuntu/anaconda3/envs/flare/lib/python3.9/ctypes/__init__.py", line 382, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libcufft.so.11: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/qls/Flare7K/evaluate.py", line 6, in <module>
    from torchvision.transforms import ToTensor
  File "/home/ubuntu/anaconda3/envs/flare/lib/python3.9/site-packages/torchvision/__init__.py", line 5, in <module>
    import torch
  File "/home/ubuntu/anaconda3/envs/flare/lib/python3.9/site-packages/torch/__init__.py", line 234, in <module>
    _load_global_deps()
  File "/home/ubuntu/anaconda3/envs/flare/lib/python3.9/site-packages/torch/__init__.py", line 195, in _load_global_deps
    _preload_cuda_deps(lib_folder, lib_name)
  File "/home/ubuntu/anaconda3/envs/flare/lib/python3.9/site-packages/torch/__init__.py", line 161, in _preload_cuda_deps
    ctypes.CDLL(lib_path)
  File "/home/ubuntu/anaconda3/envs/flare/lib/python3.9/ctypes/__init__.py", line 382, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libnvJitLink.so.12: cannot open shared object file: No such file or directory
```
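
A quick way to narrow this down with the standard library alone is to check whether the dynamic loader can find those CUDA runtime libraries at all:

```python
# None means the loader cannot locate the (unversioned) library on its
# search path; versioned-only installs may also need LD_LIBRARY_PATH set.
import ctypes.util

for lib in ('cufft', 'nvJitLink'):
    print(lib, '->', ctypes.util.find_library(lib))
```

If they are missing, installing a PyTorch build that matches the system's CUDA version (or the matching `nvidia-*` pip wheels it depends on) is the usual remedy.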

KeyError: "No object named 'ExampleModel' found in 'model' registry!"

Hello, I could not find the *_model.py files in the folder. Could you provide them? Thank you.

```
Traceback (most recent call last):
  File "F:/pythonProject/Flare7K-main/basicsr/train.py", line 213, in <module>
    train_pipeline(root_path)
  File "F:/pythonProject/Flare7K-main/basicsr/train.py", line 122, in train_pipeline
    model = build_model(opt)
  File "F:\pythonProject\Flare7K-main\basicsr\models\__init__.py", line 27, in build_model
    model = MODEL_REGISTRY.get(opt['model_type'])(opt)
  File "F:\pythonProject\Flare7K-main\basicsr\utils\registry.py", line 71, in get
    raise KeyError(f"No object named '{name}' found in '{self._name}' registry!")
KeyError: "No object named 'ExampleModel' found in 'model' registry!"
```

Validation dataset problem

Hello, I would like to ask the following questions about the validation dataset:

  1. Are the flare-free images generated for the validation dataset the same as the 'gt' images in 'test_data/real/' or 'test_data/synthetic/'? If not, were they selected from 'Flickr24K'? Could you kindly provide them?

  2. Is the flare pattern used when generating the validation dataset the one provided in Flare7K++?

Looking forward to your reply. Thank you.

Can I get your help on code execution?

Excuse me, I always get the following error when running the code locally according to your instructions:

```
Name Flare_Pair_Loader is not found, use name: Flare_Pair_Loader_basicsr!
Traceback (most recent call last):
  File "basicsr/train.py", line 215, in <module>
    train_pipeline(root_path)
  File "basicsr/train.py", line 120, in train_pipeline
    result = create_train_val_dataloader(opt, logger)
  File "basicsr/train.py", line 35, in create_train_val_dataloader
    train_set = build_dataset(dataset_opt)
  File "/home/cvgroup/anaconda3/envs/flare/lib/python3.7/site-packages/basicsr/data/__init__.py", line 34, in build_dataset
    dataset = DATASET_REGISTRY.get(dataset_opt['type'])(dataset_opt)
  File "/home/cvgroup/anaconda3/envs/flare/lib/python3.7/site-packages/basicsr/utils/registry.py", line 71, in get
    raise KeyError(f"No object named '{name}' found in '{self._name}' registry!")
KeyError: "No object named 'Flare_Pair_Loader' found in 'dataset' registry!"
```
If I could get your help, I would be extremely grateful!
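
The first log line (`use name: Flare_Pair_Loader_basicsr!`) and the site-packages paths in the traceback suggest the pip-installed basicsr is being imported instead of the repository's copy, which is the one that registers `Flare_Pair_Loader`. A quick check (this cause is an assumption):

```python
# If __file__ points into site-packages rather than the Flare7K repo,
# run from the repo root (or uninstall the pip basicsr) so the local
# package takes precedence on sys.path.
import basicsr
print(basicsr.__file__)
```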

paper?

Have you applied the Uformer network to this field without publishing a paper?

Training with U-Net

Hi, I was trying to reproduce the U-Net training process with unet_arch.py under the basicsr/archs folder.
I changed the 'type' of 'network_g' to 'U_Net' and removed 'img_size' in uformer_flare7kpp_baseline_option.yml. No errors were reported during the training process.
The results of running test.py on the checkpoints are as follows:
(Attached result images: 00000_blend_5000 from net_g_5000, 00000_blend_10000 from net_g_10000, and 00000_blend_25000 from net_g_25000.)
What could be causing this problem? Thank you!
