I want to use your code with the Proba-V dataset, but I'm facing the following error.
```
$ python src/train.py --config config/config.json
  0%|          | 0/261 [00:00<?, ?it/s]
  0%|          | 0/400 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "[...]/HighRes-net/src/train.py", line 308, in <module>
    main(config)
  File "[...]/HighRes-net/src/train.py", line 294, in main
    trainAndGetBestModel(fusion_model, regis_model, optimizer, dataloaders, baseline_cpsnrs, config)
  File "[...]/HighRes-net/src/train.py", line 180, in trainAndGetBestModel
    srs_shifted = apply_shifts(regis_model, srs, shifts, device)[:, 0]
  File "[...]/HighRes-net/src/train.py", line 61, in apply_shifts
    new_images = shiftNet.transform(thetas, images, device=device)
  File "[...]/HighRes-net/src/DeepNetworks/ShiftNet.py", line 96, in transform
    new_I = lanczos.lanczos_shift(img=I.transpose(0, 1),
  File "[...]/HighRes-net/src/lanczos.py", line 96, in lanczos_shift
    I_s = torch.conv1d(I_padded,
RuntimeError: Expected 2D (unbatched) or 3D (batched) input to conv1d, but got input of size: [1, 4, 202, 202]
```
Here are the values and shapes passed to the `conv1d` call:

- input (`I_padded`): `torch.Size([1, 4, 202, 202])`
- groups (`k_y.shape[0]` and `k_x.shape[0]`): 4
- weights (`k_y` and `k_x`): `torch.Size([4, 1, 7, 1])` and `torch.Size([4, 1, 1, 7])`
- padding (`[k_y.shape[2] // 2, 0]` and `[0, k_x.shape[3] // 2]`): `[3, 0]` and `[3, 0]`
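For reference, the failing call can be reproduced in isolation with random tensors of the same shapes (the kernel values below are random placeholders, not the actual Lanczos weights):

```python
import torch

# Same shapes as in the traceback; values are random placeholders.
I_padded = torch.randn(1, 4, 202, 202)   # 4-D input -- conv1d only accepts 2-D/3-D
k_y = torch.randn(4, 1, 7, 1)            # weight shape reported above

raised = False
try:
    torch.conv1d(I_padded, k_y, groups=4, padding=[3, 0])
except RuntimeError as e:
    raised = True
    print(e)

assert raised
```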
I used the default config.json, except for the following parameters.
- "batch_size": 4
- "min_L": 4
- "n_views": 16
I get similar errors with the default values as well.
I tried squeezing the 1st dim of `img` and the 2nd dim of the weights, and passing a plain int for padding, to get past the successive error messages, but all I ended up with is this new RuntimeError:

```
RuntimeError: Given groups=4, weight of size [4, 7, 1], expected input[4, 202, 202] to have 28 channels, but got 202 channels instead
```
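If I read the `conv1d` contract right (the weight is `(out_channels, in_channels // groups, kW)`, so the expected input channel count is `weight.shape[1] * groups`), this second message is consistent: the squeezed weight `[4, 7, 1]` makes `conv1d` expect 4 * 7 = 28 input channels, i.e. the kernel width 7 ended up in the `in_channels` slot. A small sanity check with random tensors (shapes only, not the real Lanczos kernels):

```python
import torch

# Squeezed weight from the second error: (out_channels=4, in_channels//groups=7, kW=1)
weight = torch.randn(4, 7, 1)
groups = 4
expected_in = weight.shape[1] * groups
print(expected_in)  # 28, matching "expected input ... to have 28 channels"

# Keeping the kernel width in the last slot instead gives a depthwise weight
# (4, 1, 7) that accepts a (N, 4, L) input with groups=4:
x = torch.randn(1, 4, 202)
w = torch.randn(4, 1, 7)
y = torch.conv1d(x, w, groups=4, padding=3)
print(y.shape)  # torch.Size([1, 4, 202])
```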
Do you have any clue that could help me?