Hi, thank you for your awesome work.
By the way, I tried transferring my own source image to other target images. Basically it works, but the generated face doesn't look like the source person. So I fine-tuned the model as you suggested, organizing my data in the iPER format.
Running demo_imitator.py works fine. run_imitator.py raises an error, but it runs if I remove the --has_detector flag.
Is this caused by a problem with face extraction? In addition, I would like to ask: is the model actually fine-tuned after this run completes?
The following is the error output:
python run_imitator.py --gpu_ids 0 --model imitator --output_dir ./outputs/results/ --src_path ./assets/src_imgs/imper_A_Pose/10006.png --tgt_path ./assets/samples/refs/iPER/024_8_3 --bg_ks 13 --ft_ks 3 --has_detector --post_tune --save_res
------------ Options -------------
T_pose: False
batch_size: 4
bg_ks: 13
bg_model: ./outputs/checkpoints/deepfillv2/net_epoch_50_id_G.pth
bg_replace: False
body_seg: False
cam_strategy: smooth
checkpoints_dir: ./outputs/checkpoints/
cond_nc: 3
data_dir: /p300/datasets/iPER
dataset_mode: iPER
debug: False
do_saturate_mask: False
face_model: assets/pretrains/sphere20a_20171020.pth
front_warp: False
ft_ks: 3
gen_name: impersonator
gpu_ids: 0
has_detector: True
hmr_model: assets/pretrains/hmr_tf2pt.pth
image_size: 256
images_folder: images_HD
ip:
is_train: False
load_epoch: 0
load_path: ./outputs/checkpoints/lwb_imper_fashion_place/net_epoch_30_id_G.pth
map_name: uv_seg
model: imitator
n_threads_test: 2
name: running
norm_type: instance
only_vis: False
output_dir: ./outputs/results/
part_info: assets/pretrains/smpl_part_info.json
port: 31100
post_tune: True
pri_path: ./assets/samples/A_priors/imgs
repeat_num: 6
save_res: True
serial_batches: False
smpl_model: assets/pretrains/smpl_model.pkl
smpls_folder: smpls
src_path: ./assets/src_imgs/imper_A_Pose/10006.png
swap_part: body
test_ids_file: val.txt
tex_size: 3
tgt_path: ./assets/samples/refs/iPER/024_8_3
time_step: 10
train_ids_file: train.txt
uv_mapping: assets/pretrains/mapper.txt
view_params: R=0,90,0/t=0,0,0
-------------- End ----------------
./outputs/checkpoints/running
Network impersonator was created
loaded net: ./outputs/checkpoints/lwb_imper_fashion_place/net_epoch_30_id_G.pth
Network deepfillv2 was created
loaded net: ./outputs/checkpoints/deepfillv2/net_epoch_50_id_G.pth
Personalization: meta imitation...
Traceback (most recent call last):
File "run_imitator.py", line 225, in <module>
adaptive_personalize(test_opt, imitator, visualizer)
File "run_imitator.py", line 203, in adaptive_personalize
imitator.personalize(opt.src_path, visualizer=None)
File "/home/ubuntu/.virtualenvs/tensorflow362/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad
return func(*args, **kwargs)
File "/media/ubuntu/新加卷/Prog/impersonator-master/models/imitator.py", line 117, in personalize
bbox, body_mask = self.detector.inference(img[0])
File "/media/ubuntu/新加卷/Prog/impersonator-master/utils/detectors.py", line 70, in inference
predictions = self.forward(img_list)[0]
File "/media/ubuntu/新加卷/Prog/impersonator-master/utils/detectors.py", line 40, in forward
predictions = self.model(images)
File "/home/ubuntu/.virtualenvs/tensorflow362/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/home/ubuntu/.virtualenvs/tensorflow362/lib/python3.6/site-packages/torchvision/models/detection/generalized_rcnn.py", line 48, in forward
features = self.backbone(images.tensors)
File "/home/ubuntu/.virtualenvs/tensorflow362/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/home/ubuntu/.virtualenvs/tensorflow362/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/home/ubuntu/.virtualenvs/tensorflow362/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/home/ubuntu/.virtualenvs/tensorflow362/lib/python3.6/site-packages/torchvision/models/_utils.py", line 58, in forward
x = module(x)
File "/home/ubuntu/.virtualenvs/tensorflow362/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
RuntimeError: CUDA NVRTC error: NVRTC_ERROR_BUILTIN_OPERATION_FAILURE
The above operation failed in interpreter, with the following stack trace: