crfill's Introduction

Hi there 👋

zengxianyu/zengxianyu is a ✨ special ✨ repository because its README.md (this file) appears on your GitHub profile.

Here are some ideas to get you started:

  • 🔭 I’m currently working on my PhD
  • 🌱 I’m currently learning machine learning
  • 👯 I’m looking to collaborate on computer vision

crfill's People

Contributors

cvstack, favormylikes, zengxianyu


crfill's Issues

opt.load_baseg

In the second train.py stage of train.sh (the one with update_part=fine), I got an error about opt.load_baseg in util.py (screenshot omitted).

There doesn't seem to be a place where the load_baseg argument is added in train_options, so I added it the same way it is done in test_options.

Is this the right way to train?
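For reference, a minimal sketch of registering the missing flag, assuming it should behave as a boolean switch; the option name comes from util/util.py, while the default and help text are assumptions rather than the authors' official fix:

import argparse

# Sketch only: register the missing flag the same way the other options in
# options/train_options.py are registered. '--load_baseg' is the attribute
# util/util.py reads; treating it as a boolean that defaults to False is an
# assumption.
parser = argparse.ArgumentParser()
parser.add_argument('--load_baseg', action='store_true',
                    help='assumed: load weights for the base generator')
opt = parser.parse_args([])
print(opt.load_baseg)  # False unless --load_baseg is passed on the command line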

Can't run demo: ./demo.sh: line 11: 25630 Illegal instruction: 4 python demo.py ....

I successfully created the environment, but when trying to run the demo I get the following error:

./demo.sh: line 11: 25630 Illegal instruction: 4 python demo.py --name objrmv --dataset_mode testimage --model inpaint --netG baseconv --which_epoch latest --image_dir ./datasets/places2sample1k_val/places2samples1k_crop256 --mask_dir ./datasets/places2sample1k_val/places2samples1k_25

I'm using the newest MacBook (M1 Max chip). Could this be related to the issue?

questions about training and testing details

Nice work! Thank you for sharing the code. I have some questions about the training and testing details.

  1. For the Places2 dataset, which version did you use: Places365-Standard, Places365-Challenge 2016, or Places365-Challenge 2016 + Places-Extra69?
  2. Did you use the high-resolution Places2 images or the 256×256 ones?
  3. For testing, as mentioned in the paper, the mask regions are at random positions. Are the random positions the same for the different methods in the quantitative evaluation values of Table 1 and Table 2? If the masked regions differ randomly when testing different methods, how is the comparison kept fair? (A sketch of one common practice follows after this list.)

Thank you very much for your time.
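For what it's worth, one common way to keep such a comparison fair (a general practice, not necessarily what the paper did) is to pre-generate the random masks once with a fixed seed, save them to disk, and reuse the same files for every method:

import os
import numpy as np
from PIL import Image

# Sketch: generate one fixed 128x128 rectangular hole per 256x256 test image
# with a seeded RNG, so every method is evaluated on identical masked regions.
# The sizes, count, and output path are illustrative assumptions.
rng = np.random.default_rng(seed=0)
os.makedirs('fixed_masks', exist_ok=True)
for i in range(1000):
    mask = np.zeros((256, 256), dtype=np.uint8)
    top = rng.integers(0, 256 - 128)
    left = rng.integers(0, 256 - 128)
    mask[top:top + 128, left:left + 128] = 255   # 255 marks the hole
    Image.fromarray(mask).save(f'fixed_masks/{i:05d}.png')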

Changing masks generated in training data

Hello, thank you for your interesting work. Where in the code are the masks for the training images generated? I would like to change the generation method so that the masks are not placed randomly anywhere in the training images.
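As a rough illustration (not the repository's actual mask code, which lives in its dataloader), a replacement generator could restrict holes to a chosen region instead of the whole image:

import numpy as np

# Sketch of a mask generator confined to a user-chosen box instead of the
# whole image. The function name, box, and hole size are illustrative
# assumptions; the equivalent logic would replace wherever the repo's
# dataloader builds its random masks.
def region_mask(h, w, box=(64, 64, 192, 192), hole=64, rng=None):
    rng = rng or np.random.default_rng()
    top0, left0, bottom0, right0 = box
    mask = np.zeros((h, w), dtype=np.float32)
    top = int(rng.integers(top0, bottom0 - hole))
    left = int(rng.integers(left0, right0 - hole))
    mask[top:top + hole, left:left + hole] = 1.0  # 1 = pixels to inpaint
    return mask

mask = region_mask(256, 256)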

RuntimeError: Error in dlopen or dlsym: libcaffe2_nvrtc.so: cannot open shared object file: No such file or directory (checkDL at /opt/conda/conda-bld/pytorch_1573049304260/work/aten/src/ATen/DynamicLibrary.cpp:21)

When I run sh train.sh, I get the error: RuntimeError: Error in dlopen or dlsym: libcaffe2_nvrtc.so: cannot open shared object file: No such file or directory (checkDL at /opt/conda/conda-bld/pytorch_1573049304260/work/aten/src/ATen/DynamicLibrary.cpp:21).
I have not been able to solve this error.

training code

Do you intend to publish the training code any time soon?

Training question

Can the code run on Windows? How can I solve the "No module named 'models.pix2pix_model'" error? Thanks.

How to train model on high-resolution images

If I want to run inference on high-resolution images (2048x2048), how do I need to train the model?
Currently, the higher the resolution, the worse the results. Besides modifying the crop size, do I need to modify the discriminator or anything else?
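One generic workaround, independent of this repository and no substitute for retraining at the target resolution, is to run the generator near its training resolution and composite the result back into the full-resolution image. A sketch, where the way the generator is called is an assumption:

import torch
import torch.nn.functional as F

# Sketch: inpaint a high-resolution image by working at the training
# resolution and pasting the result back while keeping pixels outside the
# hole untouched. `netG` stands for the trained generator; its exact input
# format here is an assumption, so adapt the call to the repo's interface.
def inpaint_highres(netG, image, mask, train_size=256):
    # image: 1x3xHxW in the model's value range, mask: 1x1xHxW with 1 = hole
    h, w = image.shape[-2:]
    img_lr = F.interpolate(image, size=(train_size, train_size),
                           mode='bilinear', align_corners=False)
    msk_lr = F.interpolate(mask, size=(train_size, train_size), mode='nearest')
    with torch.no_grad():
        out_lr = netG(img_lr * (1 - msk_lr), msk_lr)   # assumed signature
    out = F.interpolate(out_lr, size=(h, w), mode='bilinear', align_corners=False)
    return out * mask + image * (1 - mask)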

There is no latest_net_D.pth

I have a problem:

FileNotFoundError: [Errno 2] No such file or directory: './checkpoints/debug/latest_net_D.pth'

SFI-Swin: Symmetric Face Inpainting

Dear researchers, please also consider checking our newly introduced face inpainting method, which addresses the symmetry problems of general inpainting methods by using a Swin transformer and semantic-aware discriminators.
Our proposed method shows better results in terms of FID score and a newly proposed metric focused on face symmetry, compared to some state-of-the-art methods including LaMa.
Our paper is available at:
https://www.researchgate.net/publication/366984165_SFI-Swin_Symmetric_Face_Inpainting_with_Swin_Transformer_by_Distinctly_Learning_Face_Components_Distributions

The code will also be published at:
https://github.com/mohammadrezanaderi4/SFI-Swin

Training Error

In the second and third train.py stages, I get the following error:

AttributeError: 'Namespace' object has no attribute 'load_baseg'

This error is in:
  File "/content/crfill/util/util.py", line 222, in load_network
    if opt.load_baseg:

Use your weight as the checkpoint

I noticed that you are a Chinese speaker, so if you don't mind I will write in Chinese directly. Following your earlier guidance, I ran your code on my own dataset. My dataset is about extracting document backgrounds: the text content is masked out first, and then inpainting recovers the background image.

To check whether your model is feasible on my dataset, I did some preliminary training: I took 1000 images from my dataset for a trial run, obtained my own weight files, and then ran tests. At the same time, I also tested your released weights directly on my dataset. I found that one of your pretrained weights works very well on my dataset out of the box, namely the checkpoints/objrmv/latest_net_G.pth obtained after running download.sh. I would like to use it as a checkpoint and resume training from it. For training, however, latest_net_G.pth alone does not seem to be enough; the corresponding latest_net_D.pth, latest_net_D_aux.pth, and iter.txt are also needed.

Could you provide them to me? I would be very grateful!

warnings of multigpu

hello~
When I ran your model with 2 GPUs, each epoch produced warnings like:
"UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector."

These warnings usually happen because of parallel training.

I believe your code is already set up well for multi-GPU training, since the "--gpu 0,1" argument naturally enables it. Do I need to deal with these warnings, or can I just ignore them?

I would be grateful if you could help me!
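For context, this warning comes from nn.DataParallel gathering a 0-dim loss from each replica and does not affect correctness. A general PyTorch pattern (not something specific to this repo) is to reduce the gathered vector back to a scalar yourself:

import torch

# Sketch: DataParallel gathers the scalar loss from each GPU into a vector of
# length num_gpus, which is exactly what the warning describes. Averaging that
# vector before calling backward() restores a single scalar; otherwise the
# warning can simply be ignored.
def reduce_loss(loss: torch.Tensor) -> torch.Tensor:
    return loss.mean() if loss.dim() > 0 else loss

gathered = torch.tensor([0.71, 0.88])   # pretend: one loss value per GPU
scalar_loss = reduce_loss(gathered)     # tensor(0.7950)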

Hello, I am very interested in your research, but I have encountered some errors. How should I solve them? Thanks!

Traceback (most recent call last):
  File "E:\code\crfill-master\train.py", line 14, in <module>
    opt = TrainOptions().parse()
  File "E:\code\crfill-master\options\base_options.py", line 151, in parse
    opt = self.gather_options()
  File "E:\code\crfill-master\options\base_options.py", line 76, in gather_options
    model_option_setter = models.get_option_setter(model_name)
  File "E:\code\crfill-master\models\__init__.py", line 35, in get_option_setter
    model_class = find_model_using_name(model_name)
  File "E:\code\crfill-master\models\__init__.py", line 15, in find_model_using_name
    modellib = importlib.import_module(model_filename)
  File "D:\software\Anaconda3\envs\sss\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 984, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'models.pix2pix_model'

If you can help me resolve this, I would be very grateful!
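For context, models/__init__.py resolves the --model value to a module at runtime, roughly like the simplified sketch below, so models/pix2pix_model.py is only imported when the model name resolves to pix2pix; explicitly passing a model the repo ships, e.g. --model inpaint as used in demo.sh, should avoid this import.

import importlib

# Simplified sketch of the dynamic lookup in models/__init__.py (the real file
# does more): the --model value becomes the module path 'models.<name>_model'
# and is imported at runtime, so a name without a matching file under models/
# raises ModuleNotFoundError exactly as in the traceback above.
def find_model_using_name(model_name):
    model_filename = 'models.' + model_name + '_model'
    return importlib.import_module(model_filename)

# find_model_using_name('pix2pix')  # -> ModuleNotFoundError: models.pix2pix_model
# find_model_using_name('inpaint')  # works when models/inpaint_model.py exists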

AttributeError: 'Namespace' object has no attribute 'load_baseg'

Traceback (most recent call last):
  File "/root/crfill/train.py", line 27, in <module>
    trainer = create_trainer(opt)
  File "/root/crfill/trainers/__init__.py", line 25, in create_trainer
    instance = model(opt)
  File "/root/crfill/trainers/pix2pix_trainer.py", line 20, in __init__
    self.pix2pix_model = models.create_model(opt)
  File "/root/crfill/models/__init__.py", line 41, in create_model
    instance = model(opt)
  File "/root/crfill/models/inpaint_model.py", line 32, in __init__
    self.netG, self.netD = self.initialize_networks(opt)
  File "/root/crfill/models/inpaint_model.py", line 110, in initialize_networks
    netG = util.load_network(netG, 'G', opt.which_epoch, opt)
  File "/root/crfill/util/util.py", line 222, in load_network
    if opt.load_baseg:
AttributeError: 'Namespace' object has no attribute 'load_baseg'

I ran into this problem in both the second and third stages of python train.py. How can I solve it? Looking forward to your reply.
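Besides registering the flag in the training options (see the sketch under the first load_baseg issue above), a quick stop-gap, assuming False is an acceptable default, is to make the check in util/util.py tolerate a missing attribute:

from argparse import Namespace

# Stop-gap sketch: replace `if opt.load_baseg:` in util/util.py with a getattr
# that falls back to False when the option was never defined. Whether False is
# the right default for training is an assumption.
opt = Namespace()                                 # stands in for the parsed options
if getattr(opt, 'load_baseg', False):
    pass  # original branch that loads the base-generator weights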

raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format

raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for BaseConvGenerator:
size mismatch for conv14.weight: copying a param with shape torch.Size([96, 96, 3, 3]) from checkpoint, the shape in current model is torch.Size([96, 48, 3, 3]).
size mismatch for conv16.weight: copying a param with shape torch.Size([48, 48, 3, 3]) from checkpoint, the shape in current model is torch.Size([24, 24, 3, 3]).
size mismatch for conv16.bias: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([24]).
size mismatch for conv17.weight: copying a param with shape torch.Size([3, 24, 3, 3]) from checkpoint, the shape in current model is torch.Size([3, 12, 3, 3]).
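A generic (and lossy) workaround for shape mismatches like these is to drop the offending checkpoint entries before loading, which leaves those layers randomly initialized; the proper fix is to make the --netG configuration match the checkpoint's channel widths. A sketch of the filtering approach:

import torch

# Sketch: keep only checkpoint tensors whose shapes match the current model,
# then load non-strictly. Skipped layers (e.g. conv14/conv16/conv17 above)
# keep their random initialization, so results will differ from the released
# weights.
def load_matching(model: torch.nn.Module, ckpt_path: str):
    state = torch.load(ckpt_path, map_location='cpu')
    own = model.state_dict()
    kept = {k: v for k, v in state.items()
            if k in own and v.shape == own[k].shape}
    model.load_state_dict(kept, strict=False)
    return sorted(set(state) - set(kept))   # names of the skipped entries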

Can I test an image without a mask?

I mean, in real life, we just have a damaged picture. How can I inpaint it when I don't have a mask? Is there any algorithm that detects the mask automatically?

Issue with the model outputs at inference

Issue with the final stage of training:

python train.py \
	${PREFIX} \
	--batchSize ${BSIZE0} \
	--nThreads ${NWK} \
	--update_part all \
	--niter 10 \
	${EXTRA}

The issue I am facing:

Traceback (most recent call last):
  File "train.py", line 60, in <module>
    infer_out,inp = trainer.pix2pix_model.forward(data_i, mode='inference')
ValueError: too many values to unpack (expected 2)
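One way to make that line tolerant of a changed return signature (a sketch; how many values forward(..., mode='inference') returns in a given version of the repo is not known here) is to unpack a variable-length result instead of forcing exactly two names:

# Sketch of tolerating a variable-length return from
# pix2pix_model.forward(data_i, mode='inference'). A stub stands in for the
# real call; in train.py the same `infer_out, *rest = ...` pattern would
# replace `infer_out, inp = trainer.pix2pix_model.forward(...)`.
def forward_stub(*args, **kwargs):
    return 'output', 'composited', 'aux'     # pretend: three return values

result = forward_stub(None, mode='inference')
infer_out, *rest = result if isinstance(result, (tuple, list)) else (result,)
print(infer_out, len(rest))                  # 'output' and 2 extras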

Will you release the training script?

Hi, nice work on image inpainting!

I can only find test.py. Do you plan to release the training script? If so, when will it be released?

Thanks in advance!

Hello, can this network support testing images of arbitrary resolution?

Judging from the images in your examples folder and the outputs in your appendix, the network should be able to take images and masks of arbitrary resolution as test input. However, when I actually use the cases in examples as test input, the downsampling and upsampling inside the network make the resolution of the output features after resampling inconsistent with the input image resolution, so the following line of code cannot be executed:
x = x*mask + xin[:, 0:3, :, :]*(1.-mask)
How did you obtain the outputs for the example images at their original resolutions?
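For reference, one common way to handle arbitrary input resolutions with an encoder-decoder (not necessarily what the authors did) is to pad the image and mask up to a multiple of the network's total downsampling factor before inference and crop the output back, so that the composition line above sees matching shapes. A sketch, where the factor of 8 is an assumption:

import torch.nn.functional as F

# Sketch: pad NCHW image and mask tensors to a multiple of `factor` (assumed
# to be the generator's total downsampling factor), run the network on the
# padded tensors, then crop the output back to the original size so that
# x = x*mask + xin[:, 0:3, :, :]*(1.-mask) lines up with the input.
def pad_to_multiple(x, factor=8):
    h, w = x.shape[-2:]
    pad_h = (factor - h % factor) % factor
    pad_w = (factor - w % factor) % factor
    return F.pad(x, (0, pad_w, 0, pad_h), mode='replicate'), (h, w)

def crop_back(x, size):
    h, w = size
    return x[..., :h, :w]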
