psd-principled-synthetic-to-real-dehazing-guided-by-physical-priors's People

Contributors

zychen-ustc

psd-principled-synthetic-to-real-dehazing-guided-by-physical-priors's Issues

Some questions about training and validation

Hello, and thank you: your work provides a very good framework for dehazing, for which we are sincerely grateful. When reproducing your code, we directly loaded the pretrained FFA model you provide (PSD-FFANET) and used it for validation on RESIDE/SOTS/indoor, but the validation results were very poor. Is there a bug somewhere you could point us to?

We made only the following modifications:
1. Because the clear and hazy images in RESIDE/SOTS/indoor do not have matching sizes, we resized the ground truth (see the sketch after this list):

    H, W, C = np.shape(haze_img)
    gt_img = gt_img.resize((W, H), Image.ANTIALIAS)

2. The other changes were only what was needed to get the program to run.
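For reference, a minimal self-contained version of the resize in item 1 above (the file paths are placeholders, not from the repository; in newer Pillow releases Image.ANTIALIAS has been renamed to Image.LANCZOS):

    import numpy as np
    from PIL import Image

    haze_img = Image.open('hazy/0001_hazy.png').convert('RGB')   # placeholder path
    gt_img = Image.open('clear/0001.png').convert('RGB')         # placeholder path

    # Match the ground truth to the hazy image's spatial size.
    H, W, C = np.shape(haze_img)
    gt_img = gt_img.resize((W, H), Image.LANCZOS)                # same filter as the deprecated ANTIALIAS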

In addition, would it be convenient for you to fix some errors in the code and push the changes? For example:

  1. utilis.py line 112 should be changed to: dehaze = net(haze, haze_A, True)
  2. utilis.py lines 96 and 97 should be removed or commented out
  3. ......

Thanks again for your contribution.

A question about the two pretrained weights

Why is it that, when I test on the real-world Foggy Driving hazy dataset with PSB-MSBDN, the results are not as good as those from FFANET?
[attached images: web_00002_MyModel_0, outputweb_00002_MyModel_0]

Can you share the YOLOv3 model for object detection?

Hi, thanks for sharing your work.
I would like to build on it.
However, when I run object detection on RTTS, my results do not reach those reported in the paper. I dehaze the images with MSBDN, detect objects with a PyTorch YOLOv3, and evaluate the mAP, and the score is lower than on the original hazy images. Could you give some suggestions?

No-reference metrics and YOLOv3

Hello, could you share a link to the code for the no-reference metrics used in the paper? If it is convenient, could you also provide a zip of the YOLOv3 code you used? My email is [email protected]
Thank you very much!

Missing datasets

Hello, is it the case that the repository does not include the datasets used in the paper? I noticed that the file paths in many places look a bit odd. Thanks!

Questions about the hyper-parameters used to generate the CLAHE real images

Hello, thanks for your incredible work! I'd like to run the PSD dehazing method on my own dataset. However, the fine-tuning step requires the CLAHE versions of real hazy images. I have downloaded the CLAHE code (Matlab version), but I'm not sure how to set its parameters. I'd appreciate any advice on how to set these hyper-parameters. Thanks!

P.S. CLAHE code (Matlab version): https://ww2.mathworks.cn/matlabcentral/fileexchange/22182-contrast-limited-adaptive-histogram-equalization-clahe
[CEImage] = runCLAHE(Image,XRes,YRes,Min,Max,NrX,NrY,NrBins,Cliplimit);
% Image - The input/output image
% XRes - Image resolution in the X direction
% YRes - Image resolution in the Y direction
% Min - Minimum greyvalue of input image (also becomes minimum of output image)
% Max - Maximum greyvalue of input image (also becomes maximum of output image)
% NrX - Number of contextual regions in the X direction (min 2, max uiMAX_REG_X)
% NrY - Number of contextual regions in the Y direction (min 2, max uiMAX_REG_Y)
% NrBins - Number of greybins for histogram ("dynamic range")
% Cliplimit - Normalized cliplimit (higher values give more contrast)
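For comparison only (this is not the authors' preprocessing script), the same operation can be sketched in Python with OpenCV; the clipLimit and tileGridSize values below are illustrative defaults, not the hyper-parameters used in the paper:

    import cv2

    img = cv2.imread('real_hazy.png')                        # placeholder path
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)               # equalize the luminance channel only
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    out = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
    cv2.imwrite('real_hazy_clahe.png', out)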

Code referenced by main.py is missing

Thank you very much for your work. However, TrainData and ValData, which main.py imports from data_utils, cannot be found. Could you provide them? Many thanks!

Lack of datasets

Could you upload the '/data/nnice1216/Dehazing/unlabeled/' data folder?

Error when testing

File "/root/PSD1/PSD/models/MSBDN.py", line 275, in forward
concat6 = torch.cat([dy7, y6], 1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 68 but got size 69 for tensor number 1 in the list.
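This kind of torch.cat size mismatch in an encoder–decoder typically means the input height or width is not a multiple of the network's total downsampling factor (16 for an MSBDN-style model). A common generic workaround, sketched below with the model call left as a comment, is to reflection-pad the input up to a multiple of 16 and crop the output back afterwards:

    import torch
    import torch.nn.functional as F

    def pad_to_multiple(x, m=16):
        _, _, h, w = x.shape
        pad_h = (m - h % m) % m
        pad_w = (m - w % m) % m
        return F.pad(x, (0, pad_w, 0, pad_h), mode='reflect'), (h, w)

    haze = torch.rand(1, 3, 413, 550)            # a size that is not a multiple of 16
    padded, (h, w) = pad_to_multiple(haze)
    print(padded.shape)                          # torch.Size([1, 3, 416, 560])
    # out = net(padded)[..., :h, :w]             # hypothetical model call; crop back to the original size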

About the training model

I want to train your model on my own dataset. Can I just run main.py and skip finetune.py?

Dataset questions

Excellent work, and thank you for open-sourcing the code and models!
I have a few questions about the datasets used in your work and would appreciate your guidance:

  1. During pretraining, is the OTS dataset you used taken from RESIDE v0 (313,950 image pairs) or RESIDE-β (72,135 image pairs)? Did you subsample the dataset?
  2. Regarding the dataset used for fine-tuning, the paper says URHI, but here it appears to be RTTS; or have I misunderstood? And was it subsampled?
    I hope you can find time to answer. Many thanks!

Puzzled about the error in finetune.py

Thank you for your excellent work on real-image dehazing; I am very interested in this research. However, I hit an error when training the network with finetune.py: "RuntimeError: CUDA error: invalid configuration argument"
It happens at line 47 of energy_functions.py:
energy_dc_loss = loss_f(unlabel_haze, T) ----> weights = torch.matmul(img_patches - mean_patches, var_fac)

Happy holidays, and I look forward to your help.

License for this model

Thank you for sharing your wonderful results with us!

I have successfully converted your model to ONNX. If possible, I would like to share the converted model in my repository; what is its license? My model-conversion scripts are released under the MIT license, but the license of the source model itself is subject to the license of the providing repository.

Model formats that were successfully converted:

  1. ONNX
  2. TensorFlow Lite
  3. TFJS
  4. TFTRT
  5. CoreML
  6. OpenVINO
  7. Myriad Inference Blob

Question about the training setup

Did you train on two GPUs? What was your hardware configuration? Thanks!

About an error raised by test.py

Hello! First of all, thank you for the code and models.
Following this line in test.py:

test_data_dir = '/data/nnice1216/Dehazing/unlabeled/'

I created the folder /data/nnice1216/Dehazing/unlabeled/ under the PSD folder, put one hazy image from the SOTS test set into the unlabeled folder, placed the downloaded PSB-MSBDN, PSD-FFANET, and PSD-GCANET files in the PSD folder, and then ran test.py, which reported the following error:

E:\Anaconda3\envs\pytorch\python.exe "E:/0shenduxuexi/【CVPR2021】PSD Principled Synthetic_to_Real Dehazing Guided by Physical Priors/PSD/PSD/PSD/test.py"
Traceback (most recent call last):
 File "E:/0shenduxuexi/【CVPR2021】PSD Principled Synthetic_to_Real Dehazing Guided by Physical Priors/PSD/PSD/PSD/test.py", line 35, in <module>
   test_data_loader = DataLoader(TestData_FFA(test_data_dir), batch_size=1, shuffle=False, num_workers=8) # For FFA and MSBDN
 File "E:\0shenduxuexi\【CVPR2021】PSD Principled Synthetic_to_Real Dehazing Guided by Physical Priors\PSD\PSD\PSD\datasets\pretrain_datasets.py", line 489, in __init__
   self.haze_names = list(os.walk(self.haze_dir))[0][2]
IndexError: list index out of range

Process finished with exit code 1

My Python version is 3.6.13 and my PyTorch version is 1.10.2.
How can this be resolved? Thank you!
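For what it is worth, self.haze_names = list(os.walk(self.haze_dir))[0][2] raises exactly this IndexError whenever haze_dir does not exist, because os.walk then yields nothing. Note also that on Windows an absolute path such as '/data/nnice1216/Dehazing/unlabeled/' is resolved against the drive root, not against the PSD folder. A small sketch of a more explicit check (the directory below is the repository's placeholder path):

    import os

    haze_dir = '/data/nnice1216/Dehazing/unlabeled/'
    if not os.path.isdir(haze_dir):
        raise FileNotFoundError(f'test_data_dir does not exist: {haze_dir}')
    haze_names = sorted(os.listdir(haze_dir))    # equivalent to the [0][2] lookup for a flat folder of images
    print(len(haze_names), 'images found')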

Pretraining problem

Hello, and sincere thanks for providing such a good approach to image dehazing. However, when running main.py with FFANet selected, the data-loading stage completes correctly ("DATALOADER DONE!" is printed) and the first epoch begins, but an error is then raised. The full output is shown below (to check feasibility, only 34 images were used for training): DATALOADER DONE!
Epoch: 0, Iteration: 0, Loss: 0.14289504289627075, Rec_Loss1: 0.05949130654335022, Rec_loss2: 0.08340374380350113
Epoch: 0, Iteration: 1, Loss: 0.04430900514125824, Rec_Loss1: 0.02571740560233593, Rec_loss2: 0.01859159767627716
Epoch: 0, Iteration: 2, Loss: 0.1531301736831665, Rec_Loss1: 0.07720714807510376, Rec_loss2: 0.07592301815748215
Epoch: 0, Iteration: 3, Loss: 0.04206860437989235, Rec_Loss1: 0.014385269023478031, Rec_loss2: 0.027683334425091743
Epoch: 0, Iteration: 4, Loss: 0.024794980883598328, Rec_Loss1: 0.018735533580183983, Rec_loss2: 0.0060594468377530575
Epoch: 0, Iteration: 5, Loss: 0.02767857350409031, Rec_Loss1: 0.004832420963793993, Rec_loss2: 0.022846153005957603
Epoch: 0, Iteration: 6, Loss: 0.05871203541755676, Rec_Loss1: 0.02811766229569912, Rec_loss2: 0.030594373121857643
Epoch: 0, Iteration: 7, Loss: 0.05164826288819313, Rec_Loss1: 0.024732043966650963, Rec_loss2: 0.026916218921542168
Epoch: 0, Iteration: 8, Loss: 0.059729255735874176, Rec_Loss1: 0.0336235910654068, Rec_loss2: 0.026105666533112526
Epoch: 0, Iteration: 9, Loss: 0.03006775490939617, Rec_Loss1: 0.013887450098991394, Rec_loss2: 0.016180304810404778
Epoch: 0, Iteration: 10, Loss: 0.02384977787733078, Rec_Loss1: 0.011282042600214481, Rec_loss2: 0.012567736208438873
Epoch: 0, Iteration: 11, Loss: 0.028310412541031837, Rec_Loss1: 0.022773319855332375, Rec_loss2: 0.005537092685699463
Epoch: 0, Iteration: 12, Loss: 0.009352276101708412, Rec_Loss1: 0.002952676033601165, Rec_loss2: 0.006399600300937891
Epoch: 0, Iteration: 13, Loss: 0.011586668901145458, Rec_Loss1: 0.005171761382371187, Rec_loss2: 0.006414907518774271
Epoch: 0, Iteration: 14, Loss: 0.01459668017923832, Rec_Loss1: 0.005380912218242884, Rec_loss2: 0.009215767495334148
Epoch: 0, Iteration: 15, Loss: 0.01685214228928089, Rec_Loss1: 0.006959381978958845, Rec_loss2: 0.009892760775983334
Epoch: 0, Iteration: 16, Loss: 0.009804013185203075, Rec_Loss1: 0.0033319436479359865, Rec_loss2: 0.0064720697700977325
Epoch: 0, Iteration: 17, Loss: 0.016198759898543358, Rec_Loss1: 0.005649095866829157, Rec_loss2: 0.010549664497375488
Epoch: 0, Iteration: 18, Loss: 0.016497818753123283, Rec_Loss1: 0.007637546863406897, Rec_loss2: 0.008860272355377674
Epoch: 0, Iteration: 19, Loss: 0.007037108298391104, Rec_Loss1: 0.0028563570231199265, Rec_loss2: 0.004180751275271177
Epoch: 0, Iteration: 20, Loss: 0.024494905024766922, Rec_Loss1: 0.010508932173252106, Rec_loss2: 0.013985971920192242
Epoch: 0, Iteration: 21, Loss: 0.017702093347907066, Rec_Loss1: 0.0060675074346363544, Rec_loss2: 0.011634585447609425
Epoch: 0, Iteration: 22, Loss: 0.028357721865177155, Rec_Loss1: 0.005747524555772543, Rec_loss2: 0.022610196843743324
Epoch: 0, Iteration: 23, Loss: 0.018168855458498, Rec_Loss1: 0.004355086944997311, Rec_loss2: 0.01381376851350069
Epoch: 0, Iteration: 24, Loss: 0.018373781815171242, Rec_Loss1: 0.005595667753368616, Rec_loss2: 0.012778114527463913
Epoch: 0, Iteration: 25, Loss: 0.009770587086677551, Rec_Loss1: 0.006647426635026932, Rec_loss2: 0.0031231604516506195
Epoch: 0, Iteration: 26, Loss: 0.009897572919726372, Rec_Loss1: 0.0016765507170930505, Rec_loss2: 0.008221021853387356
Epoch: 0, Iteration: 27, Loss: 0.010041097179055214, Rec_Loss1: 0.0023588703479617834, Rec_loss2: 0.007682227063924074
Epoch: 0, Iteration: 28, Loss: 0.0069048767909407616, Rec_Loss1: 0.0014798814663663507, Rec_loss2: 0.005424995440989733
Epoch: 0, Iteration: 29, Loss: 0.02984018251299858, Rec_Loss1: 0.005807706620544195, Rec_loss2: 0.0240324754267931
Epoch: 0, Iteration: 30, Loss: 0.005661278031766415, Rec_Loss1: 0.0023895849008113146, Rec_loss2: 0.0032716933637857437
Epoch: 0, Iteration: 31, Loss: 0.021612636744976044, Rec_Loss1: 0.008022915571928024, Rec_loss2: 0.013589720241725445
Epoch: 0, Iteration: 32, Loss: 0.028909411281347275, Rec_Loss1: 0.010480371303856373, Rec_loss2: 0.018429039046168327
Epoch: 0, Iteration: 33, Loss: 0.010996435768902302, Rec_Loss1: 0.0022096827160567045, Rec_loss2: 0.008786752820014954
Epoch: 0, Iteration: 34, Loss: 0.009218547493219376, Rec_Loss1: 0.0021805008873343468, Rec_loss2: 0.007038047071546316
Traceback (most recent call last):
File "E:/xixixi/Dehaze/main.py", line 101, in
val_psnr, val_ssim = validation(net, val_data_loader, device, category) #
TypeError: validation() missing 1 required positional argument: 'category'

Process finished with exit code 1

I checked def validation in utils.py and did not notice any problem, and the necessary paths and directories were added, but this error keeps occurring.
Finally, thank you for your help.
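One generic way to narrow down such a TypeError (not specific to this repository) is to print the signature and source file of the validation function that main.py actually imports; the call passes four arguments, so the imported definition must require at least five, or a different validation than the one inspected in utils.py is being picked up:

    import inspect
    from utils import validation      # assumes utils.py is on the import path, as described in the issue

    print(inspect.signature(validation))        # shows the parameters the imported function expects
    print(inspect.getsourcefile(validation))    # shows which file that function actually comes from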

the real hazy dataset during fine-tuning?

hey,

Thanks for sharing; this is wonderful work toward real-image dehazing.

During the fine-tuning phase, do you use URHI or RTTS?
The paper says URHI, but on GitHub it says: "We use RTTS from RESIDE dataset as our fine-tuning data."
Could you clarify this for me?
By the way, since PSD is evaluated on RTTS, I don't think it is reasonable to fine-tune the model on RTTS as well.

Best,
Zewei

Import path problem

Why can't modules in this codebase be imported from different folders with "from ... import ..."?
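One hedged workaround, assuming the scripts are being launched from a sub-folder rather than from the repository root, is to run them from the root (or with python -m) or to put the root on sys.path before the intra-repository imports, for example:

    import os
    import sys

    # Prepend the repository root (one level above this file) to the import path so that
    # imports in the style of "from datasets.pretrain_datasets import ..." can be resolved.
    sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))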

Out of GPU memory

When running main.py (with FFA), I get:
RuntimeError: CUDA out of memory. Tried to allocate 32.00 MiB (GPU 0; 6.00 GiB total capacity; 4.30 GiB already allocated; 25.12 MiB free; 4.30 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
This still happens after reducing batch_size, num_workers, and so on, and even after shrinking the training set to a few dozen images. Is the training algorithm itself really this memory-hungry? Looking forward to your reply.
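FFA-style attention blocks on large crops are genuinely memory-hungry, so 6 GB can be tight. Two generic mitigations that are not part of the released code are smaller training crops and mixed-precision training; a minimal self-contained sketch of the latter, with a stand-in convolution in place of the real network:

    import torch
    import torch.nn as nn
    from torch.cuda.amp import autocast, GradScaler

    net = nn.Conv2d(3, 3, 3, padding=1).cuda()         # stand-in for the dehazing network
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
    criterion = nn.L1Loss()
    scaler = GradScaler()

    haze = torch.rand(2, 3, 240, 240, device='cuda')   # smaller crops also reduce activation memory
    gt = torch.rand(2, 3, 240, 240, device='cuda')

    optimizer.zero_grad()
    with autocast():                                   # forward pass in float16 where it is safe
        loss = criterion(net(haze), gt)
    scaler.scale(loss).backward()                      # scale the loss to avoid fp16 gradient underflow
    scaler.step(optimizer)
    scaler.update()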

Training-set path problem

After running the main script, the following appears:
Traceback (most recent call last):
File "G:/桌面/PSD/PSD-Principled-Synthetic-to-Real-Dehazing-Guided-by-Physical-Priors-main/PSD/main.py", line 45, in
train_data_loader = DataLoader(TrainData(crop_size, train_data_dir), batch_size=train_batch_size, shuffle=True,
File "G:\桌面\PSD\PSD-Principled-Synthetic-to-Real-Dehazing-Guided-by-Physical-Priors-main\PSD\datasets\pretrain_datasets.py", line 20, in init
self.haze_names = list(os.walk(self.haze_dir))[0][2]
IndexError: list index out of range
Below are my dataset paths:
val_data_dir = 'G:/桌面/数据集/SOTS(1)/SOTS/outdoor/'
train_data_dir = 'G:/桌面/数据集/OTS_ALPHA/'
Why does it report an index out of range?

Pretrained models

Hello, I have recently been running your code on my own dataset, and test.py works quite well. However, when I run finetune.py, I get this error: FileNotFoundError: [Errno 2] No such file or directory: '/model/nnice1216/DAD/MSBDNNet_pretrain.pth'
From your earlier replies to other issues I understand that the models provided in the README have already been fine-tuned after pretraining. So what is the model referred to in this error: is it the pretrained-only, un-fine-tuned one? I don't quite understand; if it is, could you provide it? Looking forward to your answer, thanks!

some code errors in test.py

Thanks for your excellent work! However, when I use test.py, there are some errors,

  such as model.cuda & net.cuda & batch-norm errors.
  Could you check your test.py code?

Thanks a lot.

Appendix

Where can I find the appendix of the paper? Also, you did not add the color attention prior loss; is that because it did not work well in your tests? And could you please upload your YOLOv3 and NIMA evaluation code?

No-reference metrics

Outstanding work! Could you provide the code for the FADE, NIQE, BRISQUE, and NIMA metrics? Many thanks!

A problem when training A-Net

Hello, I tried to retrain FFANet using the code you provide.
During training, the A-Net module has a problem:
in self.layer7 = BlockUNet1(64, 64):
self.batch = nn.InstanceNorm2d(out_channels)
y = self.batch(y)
raises: ValueError: Expected more than 1 spatial element when training, got input size torch.Size([2, 64, 1, 1])
I looked at the DCPDN code, which uses BatchNorm2d().
If nn.InstanceNorm2d() is changed to BatchNorm2d(), the error does not occur.
How should I fix this? Does changing InstanceNorm2d to BatchNorm2d have a large effect on the model? I hope to hear back from you, thanks!
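The error can be reproduced outside the repository: in training mode, InstanceNorm2d needs more than one spatial element per instance to compute statistics, which a 1x1 feature map cannot provide, whereas BatchNorm2d also averages over the batch dimension and therefore accepts the same input. A minimal sketch (it only shows why the error appears, not whether swapping the normalization is the right fix for A-Net):

    import torch
    import torch.nn as nn

    x = torch.rand(2, 64, 1, 1)                # the shape reported in the error

    inorm = nn.InstanceNorm2d(64).train()
    try:
        inorm(x)                               # per-instance statistics need > 1 spatial element
    except ValueError as e:
        print('InstanceNorm2d:', e)

    bnorm = nn.BatchNorm2d(64).train()
    print('BatchNorm2d output shape:', bnorm(x).shape)   # works: statistics are taken over the batch as well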

Training details

Which GPUs did you use, how many, and how long did training take?

About fine-tuning

Thank you for open-sourcing the code and the fine-tuned models of this excellent work!
I am interested in how the loss functions are set up in the unsupervised fine-tuning stage, and I would like to re-finetune the model starting from your pretrained weights.
Could you provide the pretrained models that finetune.py needs to load (e.g. MSBDN/FFANet), as well as the code used to preprocess RTTS with CLAHE?
Also, just to confirm: you fine-tuned on the CLAHE-preprocessed data rather than on the original RTTS data, correct?

Error when using the "PSB-MSBDN" model in test.py

When I use the 'PSD-FFANET' model you provide, test.py runs without problems;
but when I switch to the 'PSB-MSBDN' model, it fails with:

RuntimeError: Error(s) in loading state_dict for DataParallel:
Missing key(s) in state_dict: "module.convd4xconvd4x.conv2d.weight", "module.convd4xconvd4x.conv2d.bias".
Unexpected key(s) in state_dict: "module.convd4x.conv2d.weight", "module.convd4x.conv2d.bias".

Any help would be appreciated!
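The missing and unexpected keys differ only in the duplicated prefix ("module.convd4xconvd4x" expected by the model versus "module.convd4x" stored in the checkpoint), which suggests the local MSBDN definition registers that layer under a doubled name. Until the definition itself is corrected, one hedged workaround is to remap the checkpoint keys before loading; the fragment below assumes net is the DataParallel-wrapped MSBDN from test.py and 'PSB-MSBDN' is the downloaded weight file:

    import torch

    state = torch.load('PSB-MSBDN', map_location='cpu')
    remapped = {k.replace('module.convd4x.', 'module.convd4xconvd4x.'): v
                for k, v in state.items()}
    net.load_state_dict(remapped)              # net: the DataParallel-wrapped MSBDN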

Problems when testing

Hello, I ran into the following problems when trying to run test.py.
I put the images to be tested into a newly created data folder, but the following errors occur:

C:\Users\sweet_li\Desktop\PSD-Principled-Synthetic-to-Real-Dehazing-Guided-by-Physical-Priors-main\PSD\datasets\pretrain_datasets.py:62: SyntaxWarning: "is not" with a literal. Did you mean "!="?
  if list(haze.shape)[0] is not 3 or list(gt.shape)[0] is not 3:
C:\Users\sweet_li\Desktop\PSD-Principled-Synthetic-to-Real-Dehazing-Guided-by-Physical-Priors-main\PSD\datasets\pretrain_datasets.py:62: SyntaxWarning: "is not" with a literal. Did you mean "!="?
  if list(haze.shape)[0] is not 3 or list(gt.shape)[0] is not 3:
C:\Users\sweet_li\Desktop\PSD-Principled-Synthetic-to-Real-Dehazing-Guided-by-Physical-Priors-main\PSD\datasets\pretrain_datasets.py:124: SyntaxWarning: "is not" with a literal. Did you mean "!="?
  if list(haze.shape)[0] is not 3 or list(gt.shape)[0] is not 3:
C:\Users\sweet_li\Desktop\PSD-Principled-Synthetic-to-Real-Dehazing-Guided-by-Physical-Priors-main\PSD\datasets\pretrain_datasets.py:124: SyntaxWarning: "is not" with a literal. Did you mean "!="?
  if list(haze.shape)[0] is not 3 or list(gt.shape)[0] is not 3:
Traceback (most recent call last):
  File "C:/Users/sweet_li/Desktop/PSD-Principled-Synthetic-to-Real-Dehazing-Guided-by-Physical-Priors-main/PSD/test.py", line 38, in <module>
    test_data_loader = DataLoader(TestData_FFA(test_data_dir), batch_size=1, shuffle=False, num_workers=8) # For FFA and MSBDN
  File "C:\Users\sweet_li\Desktop\PSD-Principled-Synthetic-to-Real-Dehazing-Guided-by-Physical-Priors-main\PSD\datasets\pretrain_datasets.py", line 491, in __init__
    self.haze_names = list(os.walk(self.haze_dir))[0][2]
IndexError: list index out of range

Process finished with exit code 1

After I comment out the line self.haze_names = list(os.walk(self.haze_dir))[0][2], I instead get:

D:\anaconda\python.exe C:/Users/sweet_li/Desktop/PSD-Principled-Synthetic-to-Real-Dehazing-Guided-by-Physical-Priors-main/PSD/test.py
C:\Users\sweet_li\Desktop\PSD-Principled-Synthetic-to-Real-Dehazing-Guided-by-Physical-Priors-main\PSD\datasets\pretrain_datasets.py:62: SyntaxWarning: "is not" with a literal. Did you mean "!="?
  if list(haze.shape)[0] is not 3 or list(gt.shape)[0] is not 3:
C:\Users\sweet_li\Desktop\PSD-Principled-Synthetic-to-Real-Dehazing-Guided-by-Physical-Priors-main\PSD\datasets\pretrain_datasets.py:62: SyntaxWarning: "is not" with a literal. Did you mean "!="?
  if list(haze.shape)[0] is not 3 or list(gt.shape)[0] is not 3:
C:\Users\sweet_li\Desktop\PSD-Principled-Synthetic-to-Real-Dehazing-Guided-by-Physical-Priors-main\PSD\datasets\pretrain_datasets.py:124: SyntaxWarning: "is not" with a literal. Did you mean "!="?
  if list(haze.shape)[0] is not 3 or list(gt.shape)[0] is not 3:
C:\Users\sweet_li\Desktop\PSD-Principled-Synthetic-to-Real-Dehazing-Guided-by-Physical-Priors-main\PSD\datasets\pretrain_datasets.py:124: SyntaxWarning: "is not" with a literal. Did you mean "!="?
  if list(haze.shape)[0] is not 3 or list(gt.shape)[0] is not 3:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "D:\anaconda\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "D:\anaconda\lib\multiprocessing\spawn.py", line 125, in _main
    prepare(preparation_data)
  File "D:\anaconda\lib\multiprocessing\spawn.py", line 236, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "D:\anaconda\lib\multiprocessing\spawn.py", line 287, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
  File "D:\anaconda\lib\runpy.py", line 265, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "D:\anaconda\lib\runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "D:\anaconda\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\sweet_li\Desktop\PSD-Principled-Synthetic-to-Real-Dehazing-Guided-by-Physical-Priors-main\PSD\test.py", line 47, in <module>
    for batch_id, val_data in enumerate(test_data_loader):
  File "D:\anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 359, in __iter__
    return self._get_iterator()
  File "D:\anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 305, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "D:\anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 918, in __init__
    w.start()
  File "D:\anaconda\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "D:\anaconda\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "D:\anaconda\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "D:\anaconda\lib\multiprocessing\popen_spawn_win32.py", line 45, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "D:\anaconda\lib\multiprocessing\spawn.py", line 154, in get_preparation_data
    _check_not_importing_main()
  File "D:\anaconda\lib\multiprocessing\spawn.py", line 134, in _check_not_importing_main
    raise RuntimeError('''
RuntimeError: 
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.

Given my current level of coding experience, I really have not found a solution. Could you please advise?
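The second traceback is the standard Windows multiprocessing behaviour rather than a bug in PSD: with num_workers > 0 the DataLoader spawns worker processes, and on Windows the code that builds and iterates the loader must sit behind an if __name__ == '__main__': guard (or num_workers must be set to 0). A sketch of how the driver code in test.py could be wrapped, with the repository-specific calls left as comments:

    def main():
        # test_data_loader = DataLoader(TestData_FFA(test_data_dir), batch_size=1,
        #                               shuffle=False, num_workers=0)   # 0 avoids spawning workers entirely
        # for batch_id, val_data in enumerate(test_data_loader):
        #     ...
        pass

    if __name__ == '__main__':   # required on Windows before DataLoader workers are spawned
        main()

The first traceback (the IndexError from os.walk) again points at test_data_dir not existing on disk, and the SyntaxWarning lines are unrelated: "is not 3" compares identity with a literal and should read "!= 3".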
