Thank you for sharing your work. When I ran training, I hit some errors. Here is my output:
$ python train_depth.py --config configs/blender_train.json
D:\python\lib\site-packages\torchvision\models\_utils.py:135: UserWarning: Using 'weights' as positional parameter(s) is deprecated since 0.13 and may be removed in the future. Please use keyword parameter(s) instead.
warnings.warn(
D:\python\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing weights=None.
warnings.warn(msg)
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
D:\python\lib\site-packages\pytorch_lightning\trainer\configuration_validator.py:108: You defined a validation_step but have no val_dataloader. Skipping val loop.
| Name | Type | Params
0 | encoder | ResnetAttentionEncoder | 11.7 M
1 | decoder | DepthDecoder | 3.2 M
14.8 M Trainable params
0 Non-trainable params
14.8 M Total params
59.392 Total estimated model params size (MB)
D:\python\lib\site-packages\pytorch_lightning\trainer\connectors\data_connector.py:224: The dataloader, train_dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the num_workers argument (try 8 which is the number of cpus on this machine) in the DataLoader init to improve performance.
D:\python\lib\site-packages\pytorch_lightning\trainer\trainer.py:1609: The number of training batches (7) is smaller than the logging interval Trainer(log_every_n_steps=50). Set a lower value for log_every_n_steps if you want to see logs for the training epoch.
Epoch 0: 0%| | 0/7 [00:00<?, ?it/s] [ WARN:[email protected]] global loadsave.cpp:248 cv::findDecoder imread_('D:\Datasets\blender_stomach2\output\blender-duodenum-5-211126\images\0190.png'): can't open/read file: check file path/integrity
[ WARN:[email protected]] global loadsave.cpp:248 cv::findDecoder imread_('D:\Datasets\blender_stomach2\output\blender-duodenum-5-211126\images\0036.png'): can't open/read file: check file path/integrity
Traceback (most recent call last):
File "D:\slam\LINGMI-MR\LINGMI-MR\train_depth.py", line 40, in <module>
trainer.fit(model, train_loader)
File "D:\python\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 608, in fit
call._call_and_handle_interrupt(
File "D:\python\lib\site-packages\pytorch_lightning\trainer\call.py", line 38, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "D:\python\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 650, in _fit_impl
self._run(model, ckpt_path=self.ckpt_path)
File "D:\python\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1112, in _run
results = self._run_stage()
File "D:\python\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1191, in _run_stage
self._run_train()
File "D:\python\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1214, in _run_train
self.fit_loop.run()
File "D:\python\lib\site-packages\pytorch_lightning\loops\loop.py", line 199, in run
self.advance(*args, **kwargs)
File "D:\python\lib\site-packages\pytorch_lightning\loops\fit_loop.py", line 267, in advance
self._outputs = self.epoch_loop.run(self._data_fetcher)
File "D:\python\lib\site-packages\pytorch_lightning\loops\loop.py", line 199, in run
self.advance(*args, **kwargs)
File "D:\python\lib\site-packages\pytorch_lightning\loops\epoch\training_epoch_loop.py", line 187, in advance
batch = next(data_fetcher)
File "D:\python\lib\site-packages\pytorch_lightning\utilities\fetching.py", line 184, in __next__
return self.fetching_function()
File "D:\python\lib\site-packages\pytorch_lightning\utilities\fetching.py", line 265, in fetching_function
self._fetch_next_batch(self.dataloader_iter)
File "D:\python\lib\site-packages\pytorch_lightning\utilities\fetching.py", line 280, in _fetch_next_batch
batch = next(iterator)
File "D:\python\lib\site-packages\pytorch_lightning\trainer\supporters.py", line 569, in __next__
return self.request_next_batch(self.loader_iters)
File "D:\python\lib\site-packages\pytorch_lightning\trainer\supporters.py", line 581, in request_next_batch
return apply_to_collection(loader_iters, Iterator, next)
File "D:\python\lib\site-packages\lightning_utilities\core\apply_func.py", line 64, in apply_to_collection
return function(data, *args, **kwargs)
File "D:\python\lib\site-packages\torch\utils\data\dataloader.py", line 628, in __next__
data = self._next_data()
File "D:\python\lib\site-packages\torch\utils\data\dataloader.py", line 1333, in _next_data
return self._process_data(data)
File "D:\python\lib\site-packages\torch\utils\data\dataloader.py", line 1359, in _process_data
data.reraise()
File "D:\python\lib\site-packages\torch\_utils.py", line 543, in reraise
raise exception
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "D:\python\lib\site-packages\torch\utils\data\_utils\worker.py", line 302, in _worker_loop
data = fetcher.fetch(index)
File "D:\python\lib\site-packages\torch\utils\data\_utils\fetch.py", line 58, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "D:\python\lib\site-packages\torch\utils\data\_utils\fetch.py", line 58, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "D:\slam\LINGMI-MR\LINGMI-MR\datasets\blender_dataset.py", line 83, in __getitem__
color = self.src.get_color(index)
File "D:\slam\LINGMI-MR\LINGMI-MR\datasets\blender_dataset.py", line 49, in get_color
img = np.transpose(img, (2, 0, 1))
File "<__array_function__ internals>", line 180, in transpose
File "D:\python\lib\site-packages\numpy\core\fromnumeric.py", line 660, in transpose
return _wrapfunc(a, 'transpose', axes)
File "D:\python\lib\site-packages\numpy\core\fromnumeric.py", line 57, in _wrapfunc
return bound(*args, **kwds)
ValueError: axes don't match array
Epoch 0: 0%| | 0/7 [00:04<?, ?it/s]
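For what it's worth, the ValueError looks like a downstream symptom of the two unreadable PNGs flagged by cv::findDecoder: cv2.imread returns None rather than raising, and np.transpose then fails on it. Below is a minimal sketch reproducing that, plus a hypothetical guard (get_color_checked is my own name, not from the repo) that would surface the bad path directly:

```python
import numpy as np

# cv2.imread returns None (no exception) when a file is missing or
# unreadable; np.transpose then wraps None in a 0-d array, and the
# three axes (2, 0, 1) can't match a 0-d array.
try:
    np.transpose(None, (2, 0, 1))
except ValueError as err:
    transpose_error = str(err)  # same "axes don't match array" as above

def get_color_checked(img, path="<image path>"):
    # Hypothetical guard for blender_dataset.get_color: fail with a
    # clear message when imread returned None, instead of the opaque
    # ValueError deep inside np.transpose.
    if img is None:
        raise FileNotFoundError(f"cv2.imread could not read: {path}")
    return np.transpose(img, (2, 0, 1))  # HWC -> CHW

print(transpose_error)
```

With a guard like this, the traceback would point straight at the missing file (e.g. images\0190.png) instead of at numpy.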
Looking forward to your reply.