FishFSRNet

Code for paper "Super-Resolving Face Image by Facial Parsing Information"

Citation

@ARTICLE{10090424,
  author={Wang, Chenyang and Jiang, Junjun and Zhong, Zhiwei and Zhai, Deming and Liu, Xianming},
  journal={IEEE Transactions on Biometrics, Behavior, and Identity Science}, 
  title={Super-Resolving Face Image by Facial Parsing Information}, 
  year={2023},
  volume={},
  number={},
  pages={1-1},
  doi={10.1109/TBIOM.2023.3264223}}

Results

Baidu password: nvg1

Training

Training consists of two stages:

i) Train ParsingNet. Enter the parsing folder and use main_parsingnet.py to train the model:

python main_parsingnet.py --writer_name parsingnet --dir_data data_path 

After training ParsingNet, use the pretrained ParsingNet to generate the facial parsing maps:

python test_parsingnet.py --writer_name the_path_you_want_to_save_results

ii) Train FishFSRNet. Move the generated parsing maps to the path configured in dataset_parsing.py, then train FishFSRNet:

python main_parsing.py --writer_name fishfsrnet --dir_data data_path
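
For clarity, below is a minimal sketch of how an LR face, its HR target, and the generated parsing map might be paired on disk. The directory names ("LR_x8", "HR", "parsing") and the .npy format are assumptions, so match them to the paths actually configured in dataset_parsing.py.

import os
import numpy as np
from PIL import Image

def load_sample(dir_data, name, scale=8):
    # hypothetical layout; adjust folder names to dataset_parsing.py
    lr = np.asarray(Image.open(os.path.join(dir_data, f"LR_x{scale}", name)))   # e.g. 16x16 input
    hr = np.asarray(Image.open(os.path.join(dir_data, "HR", name)))             # 128x128 ground truth
    parsing = np.load(os.path.join(dir_data, "parsing", name.replace(".png", ".npy")))  # ParsingNet output
    return lr, hr, parsing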

Testing

Testing also contains two stages: first use the pretrained ParsingNet to estimate the parsing maps, then use the parsing maps to reconstruct the SR results.

python test_parsingnet.py --writer_name the_path_you_want_to_save_results
python test.py --writer_name fishfsrnet --dir_data data_path
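
For reference, a minimal PSNR computation on the reconstructed faces might look like the sketch below; the repository's own evaluation may crop borders or convert to the Y channel first, which would change the reported numbers.

import torch

def psnr(sr, hr, max_val=255.0):
    # peak signal-to-noise ratio between two images in the 0..255 range
    mse = torch.mean((sr.float() - hr.float()) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)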

fishfsrnet's Issues

Some questions about testing

Hello author! I recently retrained the FishFSRNet ×4 and ×8 networks with the released source code. During testing, the ×8 model works fine and produces PSNR values, but the ×4 model fails with the error below. How can I fix it? Thank you very much!
Traceback (most recent call last):
File "D:\pythonProject1\FishFSRNet-main\fsr\test.py", line 41, in
main()
File "D:\pythonProject1\FishFSRNet-main\fsr\test.py", line 20, in main
net.load_state_dict(pretrained_dict)
File "D:\Anaconda\envs\fish\Lib\site-packages\torch\nn\modules\module.py", line 2153, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for FISHNET:
Missing key(s) in state_dict: "refine2.0.refine8.conv.0.body.0.weight", "refine2.0.refine8.conv.0.body.0.bias", "refine2.0.refine8.conv.0.body.2.weight", "refine2.0.refine8.conv.0.body.2.bias", "refine2.0.refine8.conv.1.body.0.weight", "refine2.0.refine8.conv.1.body.0.bias", "refine2.0.refine8.conv.1.body.2.weight", "refine2.0.refine8.conv.1.body.2.bias", "refine2.0.attention.mlp.1.weight", "refine2.0.attention.mlp.1.bias", "refine2.0.attention.mlp.3.weight", "refine2.0.attention.mlp.3.bias", "refine2.1.refine8.conv.0.body.0.weight", "refine2.1.refine8.conv.0.body.0.bias", "refine2.1.refine8.conv.0.body.2.weight", "refine2.1.refine8.conv.0.body.2.bias", "refine2.1.refine8.conv.1.body.0.weight", "refine2.1.refine8.conv.1.body.0.bias", "refine2.1.refine8.conv.1.body.2.weight", "refine2.1.refine8.conv.1.body.2.bias", "refine2.1.attention.mlp.1.weight", "refine2.1.attention.mlp.1.bias", "refine2.1.attention.mlp.3.weight", "refine2.1.attention.mlp.3.bias", "refine2.2.down2.0.weight", "refine2.2.down2.0.bias", "refine2.2.down2.2.weight", "refine2.2.down2.2.bias", "refine2.2.refine8.conv.0.body.0.weight", "refine2.2.refine8.conv.0.body.0.bias", "refine2.2.refine8.conv.0.body.2.weight", "refine2.2.refine8.conv.0.body.2.bias", "refine2.2.refine8.conv.1.body.0.weight", "refine2.2.refine8.conv.1.body.0.bias", "refine2.2.refine8.conv.1.body.2.weight", "refine2.2.refine8.conv.1.body.2.bias", "refine2.2.attention.mlp.1.weight", "refine2.2.attention.mlp.1.bias", "refine2.2.attention.mlp.3.weight", "refine2.2.attention.mlp.3.bias", "refine2.3.down1.weight", "refine2.3.down1.bias", "refine2.3.down2.0.weight", "refine2.3.down2.0.bias", "refine2.3.down2.2.weight", "refine2.3.down2.2.bias", "refine2.3.refine8.conv.0.body.0.weight", "refine2.3.refine8.conv.0.body.0.bias", "refine2.3.refine8.conv.0.body.2.weight", "refine2.3.refine8.conv.0.body.2.bias", "refine2.3.refine8.conv.1.body.0.weight", "refine2.3.refine8.conv.1.body.0.bias", "refine2.3.refine8.conv.1.body.2.weight", "refine2.3.refine8.conv.1.body.2.bias", "refine2.3.attention.mlp.1.weight", "refine2.3.attention.mlp.1.bias", "refine2.3.attention.mlp.3.weight", "refine2.3.attention.mlp.3.bias", "refine2.4.down1.weight", "refine2.4.down1.bias", "refine2.4.refine2.conv.0.body.0.weight", "refine2.4.refine2.conv.0.body.0.bias", "refine2.4.refine2.conv.0.body.2.weight", "refine2.4.refine2.conv.0.body.2.bias", "refine2.4.refine2.conv.1.body.0.weight", "refine2.4.refine2.conv.1.body.0.bias", "refine2.4.refine2.conv.1.body.2.weight", "refine2.4.refine2.conv.1.body.2.bias", "refine2.4.refine4.conv.0.body.0.weight", "refine2.4.refine4.conv.0.body.0.bias", "refine2.4.refine4.conv.0.body.2.weight", "refine2.4.refine4.conv.0.body.2.bias", "refine2.4.refine4.conv.1.body.0.weight", "refine2.4.refine4.conv.1.body.0.bias", "refine2.4.refine4.conv.1.body.2.weight", "refine2.4.refine4.conv.1.body.2.bias", "refine2.4.refine8.conv.0.body.0.weight", "refine2.4.refine8.conv.0.body.0.bias", "refine2.4.refine8.conv.0.body.2.weight", "refine2.4.refine8.conv.0.body.2.bias", "refine2.4.refine8.conv.1.body.0.weight", "refine2.4.refine8.conv.1.body.0.bias", "refine2.4.refine8.conv.1.body.2.weight", "refine2.4.refine8.conv.1.body.2.bias", "refine2.4.attention.mlp.1.weight", "refine2.4.attention.mlp.1.bias", "refine2.4.attention.mlp.3.weight", "refine2.4.attention.mlp.3.bias", "refine2.4.conv.weight", "refine2.4.conv.bias", "refine2.5.refine2.conv.0.body.0.weight", "refine2.5.refine2.conv.0.body.0.bias", "refine2.5.refine2.conv.0.body.2.weight", "refine2.5.refine2.conv.0.body.2.bias", 
"refine2.5.refine2.conv.1.body.0.weight", "refine2.5.refine2.conv.1.body.0.bias", "refine2.5.refine2.conv.1.body.2.weight", "refine2.5.refine2.conv.1.body.2.bias", "refine2.5.refine4.conv.0.body.0.weight", "refine2.5.refine4.conv.0.body.0.bias", "refine2.5.refine4.conv.0.body.2.weight", "refine2.5.refine4.conv.0.body.2.bias", "refine2.5.refine4.conv.1.body.0.weight", "refine2.5.refine4.conv.1.body.0.bias", "refine2.5.refine4.conv.1.body.2.weight", "refine2.5.refine4.conv.1.body.2.bias", "refine2.5.refine8.conv.0.body.0.weight", "refine2.5.refine8.conv.0.body.0.bias", "refine2.5.refine8.conv.0.body.2.weight", "refine2.5.refine8.conv.0.body.2.bias", "refine2.5.refine8.conv.1.body.0.weight", "refine2.5.refine8.conv.1.body.0.bias", "refine2.5.refine8.conv.1.body.2.weight", "refine2.5.refine8.conv.1.body.2.bias", "refine2.5.attention.mlp.1.weight", "refine2.5.attention.mlp.1.bias", "refine2.5.attention.mlp.3.weight", "refine2.5.attention.mlp.3.bias", "refine2.5.conv.weight", "refine2.5.conv.bias", "up1.0.0.weight", "up1.0.0.bias", "up2.0.0.weight", "up2.0.0.bias", "up3.0.0.weight", "up3.0.0.bias", "up_stage3.0.body.0.weight", "up_stage3.0.body.0.bias", "up_stage3.0.body.2.weight", "up_stage3.0.body.2.bias", "up_stage3.0.attention_layer1.spatial_layer1.weight", "up_stage3.0.attention_layer1.spatial_layer1.bias", "up_stage3.0.attention_layer1.spatial_layer3.weight", "up_stage3.0.attention_layer1.spatial_layer3.bias", "up_stage3.0.attention_layer2.mlp.1.weight", "up_stage3.0.attention_layer2.mlp.1.bias", "up_stage3.0.attention_layer2.mlp.3.weight", "up_stage3.0.attention_layer2.mlp.3.bias", "up_stage3.0.conv.weight", "up_stage3.0.conv.bias", "up_stage3.0.conv_feature.0.weight", "up_stage3.0.conv_feature.0.bias", "up_stage3.0.conv_parsing.0.weight", "up_stage3.0.conv_parsing.0.bias", "up_stage3.0.conv_fusion.weight", "up_stage3.0.conv_fusion.bias", "up_stage3.0.attention_fusion.weight", "up_stage3.0.attention_fusion.bias", "up_stage3.1.body.0.weight", "up_stage3.1.body.0.bias", "up_stage3.1.body.2.weight", "up_stage3.1.body.2.bias", "up_stage3.1.attention_layer1.spatial_layer1.weight", "up_stage3.1.attention_layer1.spatial_layer1.bias", "up_stage3.1.attention_layer1.spatial_layer3.weight", "up_stage3.1.attention_layer1.spatial_layer3.bias", "up_stage3.1.attention_layer2.mlp.1.weight", "up_stage3.1.attention_layer2.mlp.1.bias", "up_stage3.1.attention_layer2.mlp.3.weight", "up_stage3.1.attention_layer2.mlp.3.bias", "up_stage3.1.conv.weight", "up_stage3.1.conv.bias", "up_stage3.1.conv_feature.0.weight", "up_stage3.1.conv_feature.0.bias", "up_stage3.1.conv_parsing.0.weight", "up_stage3.1.conv_parsing.0.bias", "up_stage3.1.conv_fusion.weight", "up_stage3.1.conv_fusion.bias", "up_stage3.1.attention_fusion.weight", "up_stage3.1.attention_fusion.bias", "down1.conv.weight", "down1.conv.bias", "down_stage1.0.body.0.weight", "down_stage1.0.body.0.bias", "down_stage1.0.body.2.weight", "down_stage1.0.body.2.bias", "down_stage1.0.attention_layer1.spatial_layer1.weight", "down_stage1.0.attention_layer1.spatial_layer1.bias", "down_stage1.0.attention_layer1.spatial_layer3.weight", "down_stage1.0.attention_layer1.spatial_layer3.bias", "down_stage1.0.attention_layer2.mlp.1.weight", "down_stage1.0.attention_layer2.mlp.1.bias", "down_stage1.0.attention_layer2.mlp.3.weight", "down_stage1.0.attention_layer2.mlp.3.bias", "down_stage1.0.conv.weight", "down_stage1.0.conv.bias", "down_stage1.0.conv_feature.0.weight", "down_stage1.0.conv_feature.0.bias", "down_stage1.0.conv_parsing.0.weight", 
"down_stage1.0.conv_parsing.0.bias", "down_stage1.0.conv_fusion.weight", "down_stage1.0.conv_fusion.bias", "down_stage1.0.attention_fusion.weight", "down_stage1.0.attention_fusion.bias", "down_stage1.1.body.0.weight", "down_stage1.1.body.0.bias", "down_stage1.1.body.2.weight", "down_stage1.1.body.2.bias", "down_stage1.1.attention_layer1.spatial_layer1.weight", "down_stage1.1.attention_layer1.spatial_layer1.bias", "down_stage1.1.attention_layer1.spatial_layer3.weight", "down_stage1.1.attention_layer1.spatial_layer3.bias", "down_stage1.1.attention_layer2.mlp.1.weight", "down_stage1.1.attention_layer2.mlp.1.bias", "down_stage1.1.attention_layer2.mlp.3.weight", "down_stage1.1.attention_layer2.mlp.3.bias", "down_stage1.1.conv.weight", "down_stage1.1.conv.bias", "down_stage1.1.conv_feature.0.weight", "down_stage1.1.conv_feature.0.bias", "down_stage1.1.conv_parsing.0.weight", "down_stage1.1.conv_parsing.0.bias", "down_stage1.1.conv_fusion.weight", "down_stage1.1.conv_fusion.bias", "down_stage1.1.attention_fusion.weight", "down_stage1.1.attention_fusion.bias", "conv_tail1.weight", "conv_tail1.bias", "conv.weight", "conv.bias", "up21.0.0.weight", "up21.0.0.bias", "conv_tail2.weight", "conv_tail2.bias", "up22.0.0.weight", "up22.0.0.bias", "up23.0.0.weight", "up23.0.0.bias", "conv_tail3.weight", "conv_tail3.bias", "up2_stage3.0.body.0.weight", "up2_stage3.0.body.0.bias", "up2_stage3.0.body.2.weight", "up2_stage3.0.body.2.bias", "up2_stage3.0.attention_layer1.spatial_layer1.weight", "up2_stage3.0.attention_layer1.spatial_layer1.bias", "up2_stage3.0.attention_layer1.spatial_layer3.weight", "up2_stage3.0.attention_layer1.spatial_layer3.bias", "up2_stage3.0.attention_layer2.mlp.1.weight", "up2_stage3.0.attention_layer2.mlp.1.bias", "up2_stage3.0.attention_layer2.mlp.3.weight", "up2_stage3.0.attention_layer2.mlp.3.bias", "up2_stage3.0.conv.weight", "up2_stage3.0.conv.bias", "up2_stage3.0.conv_feature.0.weight", "up2_stage3.0.conv_feature.0.bias", "up2_stage3.0.conv_parsing.0.weight", "up2_stage3.0.conv_parsing.0.bias", "up2_stage3.0.conv_fusion.weight", "up2_stage3.0.conv_fusion.bias", "up2_stage3.0.attention_fusion.weight", "up2_stage3.0.attention_fusion.bias", "up2_stage3.1.body.0.weight", "up2_stage3.1.body.0.bias", "up2_stage3.1.body.2.weight", "up2_stage3.1.body.2.bias", "up2_stage3.1.attention_layer1.spatial_layer1.weight", "up2_stage3.1.attention_layer1.spatial_layer1.bias", "up2_stage3.1.attention_layer1.spatial_layer3.weight", "up2_stage3.1.attention_layer1.spatial_layer3.bias", "up2_stage3.1.attention_layer2.mlp.1.weight", "up2_stage3.1.attention_layer2.mlp.1.bias", "up2_stage3.1.attention_layer2.mlp.3.weight", "up2_stage3.1.attention_layer2.mlp.3.bias", "up2_stage3.1.conv.weight", "up2_stage3.1.conv.bias", "up2_stage3.1.conv_feature.0.weight", "up2_stage3.1.conv_feature.0.bias", "up2_stage3.1.conv_parsing.0.weight", "up2_stage3.1.conv_parsing.0.bias", "up2_stage3.1.conv_fusion.weight", "up2_stage3.1.conv_fusion.bias", "up2_stage3.1.attention_fusion.weight", "up2_stage3.1.attention_fusion.bias".
Unexpected key(s) in state_dict: "refine2.0.attention.body.0.weight", "refine2.0.attention.body.0.bias", "refine2.0.attention.body.2.conv1.weight", "refine2.0.attention.body.2.conv1.bias", "refine2.0.attention.body.2.conv3.weight", "refine2.0.attention.body.2.conv3.bias", "refine2.0.attention.body.2.conv5.weight", "refine2.0.attention.body.2.conv5.bias", "refine2.0.attention.body.2.conv7.weight", "refine2.0.attention.body.2.conv7.bias", "refine2.0.attention.attention_layer2.mlp.1.weight", "refine2.0.attention.attention_layer2.mlp.1.bias", "refine2.0.attention.attention_layer2.mlp.3.weight", "refine2.0.attention.attention_layer2.mlp.3.bias", "refine2.1.attention.body.0.weight", "refine2.1.attention.body.0.bias", "refine2.1.attention.body.2.conv1.weight", "refine2.1.attention.body.2.conv1.bias", "refine2.1.attention.body.2.conv3.weight", "refine2.1.attention.body.2.conv3.bias", "refine2.1.attention.body.2.conv5.weight", "refine2.1.attention.body.2.conv5.bias", "refine2.1.attention.body.2.conv7.weight", "refine2.1.attention.body.2.conv7.bias", "refine2.1.attention.attention_layer2.mlp.1.weight", "refine2.1.attention.attention_layer2.mlp.1.bias", "refine2.1.attention.attention_layer2.mlp.3.weight", "refine2.1.attention.attention_layer2.mlp.3.bias", "refine2.2.attention.body.0.weight", "refine2.2.attention.body.0.bias", "refine2.2.attention.body.2.conv1.weight", "refine2.2.attention.body.2.conv1.bias", "refine2.2.attention.body.2.conv3.weight", "refine2.2.attention.body.2.conv3.bias", "refine2.2.attention.body.2.conv5.weight", "refine2.2.attention.body.2.conv5.bias", "refine2.2.attention.body.2.conv7.weight", "refine2.2.attention.body.2.conv7.bias", "refine2.2.attention.attention_layer2.mlp.1.weight", "refine2.2.attention.attention_layer2.mlp.1.bias", "refine2.2.attention.attention_layer2.mlp.3.weight", "refine2.2.attention.attention_layer2.mlp.3.bias", "refine2.3.attention.body.0.weight", "refine2.3.attention.body.0.bias", "refine2.3.attention.body.2.conv1.weight", "refine2.3.attention.body.2.conv1.bias", "refine2.3.attention.body.2.conv3.weight", "refine2.3.attention.body.2.conv3.bias", "refine2.3.attention.body.2.conv5.weight", "refine2.3.attention.body.2.conv5.bias", "refine2.3.attention.body.2.conv7.weight", "refine2.3.attention.body.2.conv7.bias", "refine2.3.attention.attention_layer2.mlp.1.weight", "refine2.3.attention.attention_layer2.mlp.1.bias", "refine2.3.attention.attention_layer2.mlp.3.weight", "refine2.3.attention.attention_layer2.mlp.3.bias", "up1.body.0.weight", "up1.body.0.bias", "up2.body.0.weight", "up2.body.0.bias", "up21.body.0.weight", "up21.body.0.bias", "up22.body.0.weight", "up22.body.0.bias".
size mismatch for refine2.0.conv.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 192, 1, 1]).
size mismatch for refine2.1.conv.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 192, 1, 1]).
size mismatch for refine2.2.conv.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 192, 1, 1]).
size mismatch for refine2.3.conv.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 192, 1, 1]).
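
A hedged debugging sketch (not part of the repository): a mismatch this large usually means the network was built with different options than the checkpoint was trained with, for example a different --scale or model variant, so comparing the two key sets and the arguments passed to test.py is a reasonable first step. The option import and checkpoint path below are assumptions; adapt them to the repository.

import torch
import fishfsrnet
from option import args                      # assumed option parser

net = fishfsrnet.FISHNET(args)               # build with the *same* options used for training
ckpt = torch.load("fishfsrnet_x4.pth", map_location="cpu")   # hypothetical checkpoint path

model_keys = set(net.state_dict().keys())
ckpt_keys = set(ckpt.keys())
print("missing from checkpoint:", sorted(model_keys - ckpt_keys)[:10])
print("unexpected in checkpoint:", sorted(ckpt_keys - model_keys)[:10])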

Training set

Hello, regarding the training set: did you preprocess CelebA yourselves, resizing the images to 128×128 as the ground-truth HR faces, and then resizing the ground truth to 64×64, 32×32 and 16×16 as the corresponding LR faces, saved in separate folders named LR and so on? This does not seem to match scale == 4, 8, 16 in the code. Looking forward to your reply, thank you!
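
One common way to prepare such data (a sketch, not necessarily the authors' exact pipeline) is to resize the CelebA faces to 128×128 as HR and then bicubic-downsample by the chosen scale factor to obtain the LR inputs:

import os
from PIL import Image

def make_pair(src_path, hr_dir, lr_dir, scale=8):
    img = Image.open(src_path).convert("RGB")
    hr = img.resize((128, 128), Image.BICUBIC)                    # ground-truth HR face
    lr = hr.resize((128 // scale, 128 // scale), Image.BICUBIC)   # e.g. 16x16 for x8
    name = os.path.basename(src_path)
    hr.save(os.path.join(hr_dir, name))
    lr.save(os.path.join(lr_dir, name))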

About extracting the face parsing maps

Hello! How are the face parsing maps extracted with BiSeNet? Could you share the related code? Thanks!
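
For reference, one commonly used option (an assumption, not necessarily the authors' code) is the pretrained BiSeNet from the public face-parsing.PyTorch project:

import torch
import torchvision.transforms as T
from PIL import Image
from model import BiSeNet                    # class from the face-parsing.PyTorch repository

net = BiSeNet(n_classes=19)
net.load_state_dict(torch.load("79999_iter.pth", map_location="cpu"))   # its released weights
net.eval()

to_tensor = T.Compose([
    T.Resize((512, 512)),
    T.ToTensor(),
    T.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])

with torch.no_grad():
    img = to_tensor(Image.open("face.png").convert("RGB")).unsqueeze(0)
    out = net(img)[0]                        # logits over 19 face-parsing classes
    parsing = out.argmax(1).squeeze(0)       # per-pixel class labels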

Training issue

When training ParsingNet with the command given in the readme, I get the error TypeError: __init__() missing 1 required positional argument: 'args'. It looks like an argument-passing problem; how should I fix it? Thanks!
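
A likely cause (a guess based on the error message): the network classes take the parsed command-line options as a positional argument, so they must be constructed as Model(args) rather than Model(). The module and class names below are assumptions; adapt them to the repository's files.

from option import args             # argparse namespace holding --scale, --dir_data, ...
import parsingnet

net = parsingnet.ParsingNet(args)   # passing the options object avoids the TypeError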

Question about the FishFSRNet model structure

Hello, when running the main_parsing.py file in the fsr folder, I ran into the following problem:
TypeError: conv2d() received an invalid combination of arguments - got (Tensor, Parameter, Parameter, tuple, tuple, tuple, int), but expected one of:

  • (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, tuple of ints padding, tuple of ints dilation, int groups)
    didn't match because some of the arguments have invalid types: (Tensor, Parameter, Parameter, tuple, tuple, tuple, int)
  • (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, str padding, tuple of ints dilation, int groups)
    didn't match because some of the arguments have invalid types: (Tensor, Parameter, Parameter, tuple, tuple, tuple, int)
Is this a problem with the network structure? How can it be resolved? Looking forward to your reply, thanks!

fish

About the experimental settings

Hello author! Were both ParsingNet and FishFSRNet trained on the 168,854 CelebA images? Also, what number of epochs and what batch size were used in each of the two training stages? Thank you very much!

fishfsrnet.py training code

File "/FishFSRNet-main/fsr/fishfsrnet.py", line 98, in init
self.reduc = common.channelReduction()
AttributeError: module 'common' has no attribute 'channelReduction'
Hello, training the whole network with python main_parsing.py fails with the error above: fishfsrnet.py calls common.channelReduction(), but there is indeed no definition of channelReduction in common.py. What is common.channelReduction() supposed to be? Could the corresponding code be added to common.py? Thanks.
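
A hypothetical stand-in (a guess, not the authors' definition): judging by the name and by FishFSRNet's use of concatenated features, channelReduction is most likely a small block that reduces the channel dimension, for example a 1×1 convolution such as the following.

import torch.nn as nn

def channelReduction(in_channels=128, out_channels=64):
    # reduce concatenated features back to the base channel width
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size=1),
        nn.ReLU(inplace=True),
    )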
