
shrubb / latent-pose-reenactment

179 stars · 5 watchers · 34 forks · 1.62 MB

The authors' implementation of the "Neural Head Reenactment with Latent Pose Descriptors" (CVPR 2020) paper.

Home Page: https://shrubb.github.io/research/latent-pose-reenactment/

License: Apache License 2.0

Python 97.43% Shell 2.57%
deep-learning head-reenactment face-reenactment talking-head generative-model pose-estimation landmark-detection facial-landmarks self-supervised-learning voxceleb

latent-pose-reenactment's People

Contributors

shrubb


latent-pose-reenactment's Issues

Speed up preprocessing step?

Hi, I have cloned this repo and used utils/preprocess_dataset.sh as-is to preprocess the VoxCeleb2 dataset. It has been more than 2 months, but only about 1/6 of the dataset has been processed. Is this the normal preprocessing speed, or have I done something wrong? If it helps, I am running on 2 RTX 3080 GPUs with all default settings.
What do I need to change to speed up the preprocessing stage?
Where can I increase the batch size for preprocessing and for the Graphonomy model?
Thank you

Question about training for the embedding network of the discriminator

Sincere thanks for sharing your wonderful work and its code publicly.

While reading your code, I came across a question about the training logic for the embedding network of the discriminator.

In these lines:
https://github.com/shrubb/latent-pose-reenactment/blob/master/discriminators/no_landmarks.py#L156

embed = None
if hasattr(self, 'embed'):
    embed = self.embed(label)

fake_in = fake_rgbs
fake_score_G, fake_features = self.pass_inputs(fake_in, embed)
fake_score_D, _ = self.pass_inputs(fake_in.detach(), embed.detach()) # what I want to discuss! 

real_in = target_rgbs
real_score, real_features = self.pass_inputs(real_in, embed)

data_dict['fake_features'] = fake_features
data_dict['real_features'] = real_features
data_dict['real_embedding'] = embed
data_dict['fake_score_G'] = fake_score_G
data_dict['fake_score_D'] = fake_score_D
data_dict['real_score'] = real_score

Is there any specific reason to detach the embedded features when computing the discriminator's adversarial loss on fake images?

In addition, with this alone, can we guarantee that the embedder maps different identity indices to different features? I think a trivial (collapsed) solution could exist unless we apply a regularization term to those features, e.g., a uniformity loss.
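
For context, here is the detach-for-the-discriminator pattern in isolation (a minimal sketch, not the authors' code): the discriminator loss is computed on detached fake inputs (and detached conditioning embedding), so stepping the discriminator's optimizer never propagates gradients into the generator or the embedder, while the generator loss keeps the graph so those gradients do flow.

# Minimal sketch of the usual GAN update split; the hinge losses are illustrative.
import torch
import torch.nn.functional as F

def gan_losses(discriminator, fake_rgbs, real_rgbs, embed):
    # Generator objective: graph through fake_rgbs (and embed) is kept.
    fake_score_G = discriminator(fake_rgbs, embed)
    loss_G = -fake_score_G.mean()

    # Discriminator objective: detach so only the discriminator is updated.
    fake_score_D = discriminator(fake_rgbs.detach(), embed.detach())
    real_score = discriminator(real_rgbs, embed)
    loss_D = F.relu(1.0 - real_score).mean() + F.relu(1.0 + fake_score_D).mean()
    return loss_G, loss_D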

Best,
Junsoo Lee

Some error when loading seg net

Thanks for sharing your research and code. When I run ./utils/preprocess_dataset.sh to get the image segmentation, I run into a problem where the checkpoint's keys do not match the model. Could you tell me the reason? Part of the log is below.

Constructing DeepLabv3+ model...
Number of classes: 20
Output stride: 16
Number of Input Channels: 3
unexpected key "source_graph_2_fea.node_fea_for_res" in state_dict
unexpected key "source_graph_2_fea.node_fea_for_hidden" in state_dict
unexpected key "source_graph_2_fea.weight" in state_dict
unexpected key "source_skip_conv.0.weight" in state_dict
unexpected key "source_skip_conv.0.bias" in state_dict
unexpected key "source_semantic.weight" in state_dict
unexpected key "source_semantic.bias" in state_dict
unexpected key "middle_semantic.weight" in state_dict
unexpected key "middle_semantic.bias" in state_dict
unexpected key "middle_source_featuremap_2_graph.pre_fea" in state_dict
unexpected key "middle_source_featuremap_2_graph.weight" in state_dict
unexpected key "middle_source_graph_conv1.weight" in state_dict
unexpected key "middle_source_graph_conv2.weight" in state_dict
unexpected key "middle_source_graph_conv3.weight" in state_dict
unexpected key "middle_source_graph_2_fea.node_fea_for_res" in state_dict
unexpected key "middle_source_graph_2_fea.node_fea_for_hidden" in state_dict
unexpected key "middle_source_graph_2_fea.weight" in state_dict
unexpected key "middle_skip_conv.0.weight" in state_dict
unexpected key "middle_skip_conv.0.bias" in state_dict
unexpected key "transpose_graph_source2target.weight" in state_dict
unexpected key "transpose_graph_source2target.adj" in state_dict
unexpected key "transpose_graph_target2source.weight" in state_dict
unexpected key "transpose_graph_target2source.adj" in state_dict
unexpected key "transpose_graph_middle2source.weight" in state_dict
unexpected key "transpose_graph_middle2source.adj" in state_dict
unexpected key "transpose_graph_middle2target.weight" in state_dict
unexpected key "transpose_graph_middle2target.adj" in state_dict
unexpected key "transpose_graph_source2middle.weight" in state_dict
unexpected key "transpose_graph_source2middle.adj" in state_dict
unexpected key "transpose_graph_target2middle.weight" in state_dict
unexpected key "transpose_graph_target2middle.adj" in state_dict
unexpected key "fc_graph_source.weight" in state_dict
unexpected key "fc_graph_target.weight" in state_dict
unexpected key "fc_graph_middle.weight" in state_dict
missing keys in state_dict: "{'xception_features.block10.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block15.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block6.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block9.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block14.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block20.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block6.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block8.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block13.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block4.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block12.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block4.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block9.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.conv4.pointwise_bn.num_batches_tracked', 'xception_features.block2.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block13.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block3.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block8.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block12.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block17.rep.1.depthwise_bn.num_batches_tracked', 'transpose_graph.weight', 'xception_features.block17.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block1.rep.4.pointwise_bn.num_batches_tracked', 'xception_features.block3.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block20.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block4.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block8.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block3.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block2.block2_lastconv.1.pointwise_bn.num_batches_tracked', 'xception_features.block13.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block19.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block19.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block14.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block18.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block5.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block10.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block2.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block5.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block17.rep.1.pointwise_bn.num_batches_tracked', 'decoder.0.atrous_convolution.depthwise_bn.num_batches_tracked', 'xception_features.block6.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block12.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block5.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block13.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block2.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block5.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block8.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block10.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block17.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block18.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.bn2.num_batches_tracked', 'xception_features.block15.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block9.rep.5.pointwise_bn.num_batches_tracked', 
'xception_features.bn1.num_batches_tracked', 'xception_features.block7.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block18.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block8.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block1.rep.4.depthwise_bn.num_batches_tracked', 'xception_features.conv4.depthwise_bn.num_batches_tracked', 'xception_features.block11.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block18.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block19.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block9.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block8.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block19.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block14.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block3.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block4.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block4.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block7.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block14.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block2.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block20.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block15.rep.1.pointwise_bn.num_batches_tracked', 'decoder.1.atrous_convolution.pointwise_bn.num_batches_tracked', 'xception_features.conv5.pointwise_bn.num_batches_tracked', 'global_avg_pool.2.num_batches_tracked', 'decoder.0.atrous_convolution.pointwise_bn.num_batches_tracked', 'xception_features.block15.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block2.skipbn.num_batches_tracked', 'transpose_graph.adj', 'xception_features.block13.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block3.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block14.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block5.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block3.skipbn.num_batches_tracked', 'xception_features.block4.rep.3.depthwise_bn.num_batches_tracked', 'aspp3.atrous_convolution.depthwise_bn.num_batches_tracked', 'xception_features.block1.rep.0.depthwise_bn.num_batches_tracked', 'xception_features.block18.rep.1.depthwise_bn.num_batches_tracked', 'aspp4.atrous_convolution.pointwise_bn.num_batches_tracked', 'xception_features.block5.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block7.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block16.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block12.rep.5.pointwise_bn.num_batches_tracked', 'aspp3.atrous_convolution.pointwise_bn.num_batches_tracked', 'xception_features.block10.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.conv3.pointwise_bn.num_batches_tracked', 'xception_features.block1.rep.0.pointwise_bn.num_batches_tracked', 'xception_features.block16.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block6.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block14.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block3.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block10.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block16.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block6.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block12.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block7.rep.5.pointwise_bn.num_batches_tracked', 
'decoder.1.atrous_convolution.depthwise_bn.num_batches_tracked', 'xception_features.block12.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block16.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block1.rep.2.pointwise_bn.num_batches_tracked', 'xception_features.block17.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block9.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block9.rep.3.depthwise_bn.num_batches_tracked', 'aspp4.atrous_convolution.depthwise_bn.num_batches_tracked', 'fc_graph.weight', 'concat_projection_bn1.num_batches_tracked', 'xception_features.block13.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block2.block2_lastconv.1.depthwise_bn.num_batches_tracked', 'xception_features.block11.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block20.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block6.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block19.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block15.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block10.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block1.rep.2.depthwise_bn.num_batches_tracked', 'xception_features.block18.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block20.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block16.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block20.rep.3.depthwise_bn.num_batches_tracked', 'aspp2.atrous_convolution.pointwise_bn.num_batches_tracked', 'xception_features.block11.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.conv5.depthwise_bn.num_batches_tracked', 'aspp1.bn.num_batches_tracked', 'xception_features.block16.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block20.skipbn.num_batches_tracked', 'xception_features.block11.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block17.rep.5.pointwise_bn.num_batches_tracked', 'feature_projection_bn1.num_batches_tracked', 'xception_features.block19.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.conv3.depthwise_bn.num_batches_tracked', 'aspp2.atrous_convolution.depthwise_bn.num_batches_tracked', 'xception_features.block7.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block11.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block7.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block1.skipbn.num_batches_tracked', 'xception_features.block15.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block11.rep.3.pointwise_bn.num_batches_tracked'}"
--images_path (/tmp/tmp.PlT5JZ5mgU
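
For what it's worth, these messages may be non-fatal: in a similar report further down, Graphonomy prints the same key lists and still produces segmentation output. The missing keys are almost entirely BatchNorm num_batches_tracked buffers (added by newer PyTorch versions), and the unexpected keys belong to branches the inference model does not construct. If the load really does abort, here is a hedged sketch of tolerant checkpoint loading (the names are illustrative, not the repo's API):

# Minimal sketch: keep only the checkpoint entries the constructed model actually has,
# then load non-strictly so version-specific buffers (num_batches_tracked) and unused
# branches do not abort the load. `model` and the checkpoint path are placeholders.
import torch

def load_tolerant(model, checkpoint_path):
    state = torch.load(checkpoint_path, map_location="cpu")
    wanted = set(model.state_dict().keys())
    filtered = {k: v for k, v in state.items() if k in wanted}
    result = model.load_state_dict(filtered, strict=False)
    print("dropped", len(state) - len(filtered), "unexpected keys;",
          len(result.missing_keys), "keys keep their initialized values")
    return model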

some questions about fine-tuning on a certain person

Hi @shrubb, when I try to fine-tune on a certain person, the identity stays fine, but the expression does not transfer well, especially the mouth. So I'd like to ask for some advice:
First, how many images would you suggest for fine-tuning on a certain person, and how should this person's expression and pose distribution be designed to get better results?
Second, I also tried to train a higher-resolution output (e.g., 512) based on your trained model, but the results are bad; I also tried training from scratch, and the identity is fine, but the expression reenactment does not seem to work.
Hope you can give some advice, thanks~

Questions about architecture details.

First, thank you for your awesome work! It is very helpful to me.
I have two questions regarding the architectures.

  1. What is the effect of moving the output range as below in the generator? (A quick numeric check follows this list.)

# Move tanh's output from (-1; 1) to (-0.25; 1.25)
rgb = rgb * 0.75
rgb += 0.5

  2. There is no norm_layer in the resblocks of the discriminator; could you give me a reason for that? I think it's unusual.
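
For reference, a quick numeric check of the range shift in question 1 (my own note, not from the repo): tanh outputs lie in (-1, 1), and rgb * 0.75 + 0.5 maps that interval to (-0.25, 1.25).

# Quick check of the affine map: (-1, 1) -> (-0.25, 1.25).
for t in (-1.0, 0.0, 1.0):
    print(t, "->", t * 0.75 + 0.5)   # -0.25, 0.5, 1.25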

Again, thanks!

a question

I want to know why, after alignment is applied, frames with a large head angle (e.g., head down) show ghosting artifacts. Is the prediction sensitive to the location of the face in the frame?

AttributeError: module 'imgaug.augmenters' has no attribute 'BlendAlphaSimplexNoise'

run.sh

# in this example, your images should be "$DATASET_ROOT/images-cropped/$IDENTITY_NAME/*.jpg"
DATASET_ROOT="/content/dataset"
IDENTITY_NAME="id00017"
MAX_BATCH_SIZE=8             # pick the largest possible, start with 8 and decrease until it fits in VRAM
CHECKPOINT_PATH="/content/latent-pose-reenactment/utils/latent-pose-release.pth"
OUTPUT_PATH="outputs/"       # a directory for outputs, will be created
RUN_NAME="tony_hawk_take_1"  # give your run a name if you want

# Important. See the note below
TARGET_NUM_ITERATIONS=230

# Don't change these
NUM_IMAGES=`ls -1 "$DATASET_ROOT/images-cropped/$IDENTITY_NAME" | wc -l`
BATCH_SIZE=$((NUM_IMAGES<MAX_BATCH_SIZE ? NUM_IMAGES : MAX_BATCH_SIZE))
ITERATIONS_IN_EPOCH=$(( NUM_IMAGES / BATCH_SIZE ))

mkdir -p $OUTPUT_PATH

python3 train.py \
    --config finetuning-base                 \
    --checkpoint_path "$CHECKPOINT_PATH"     \
    --data_root "$DATASET_ROOT"              \
    --train_split_path "$IDENTITY_NAME"      \
    --batch_size $BATCH_SIZE                 \
    --num_epochs $(( (TARGET_NUM_ITERATIONS + ITERATIONS_IN_EPOCH - 1) / ITERATIONS_IN_EPOCH )) \
    --experiments_dir "$OUTPUT_PATH"         \


!sudo bash /content/dataset/run.sh


PID 763 - 2021-03-26 01:46:58,894 - INFO - utils.load_config_file - Using config configs/finetuning-base.yaml
PID 763 - 2021-03-26 01:46:58,901 - INFO - utils.get_args_and_modules - Loading checkpoint file /content/latent-pose-reenactment/utils/latent-pose-release.pth
PID 763 - 2021-03-26 01:47:00,262 - INFO - utils.setup - Random Seed: 123
PID 763 - 2021-03-26 01:47:00,262 - INFO - train.py - Initialized the process group, my rank is 0
PID 763 - 2021-03-26 01:47:00,262 - INFO - train.py - Loading dataloader 'voxceleb2_segmentation_nolandmarks'
PID 763 - 2021-03-26 01:47:00,470 - INFO - dataloaders.common.voxceleb.get_part_data (train) - Determining the 'train' data source
PID 763 - 2021-03-26 01:47:00,470 - INFO - dataloaders.common.voxceleb.get_part_data (train) - Checking if '/content/dataset/images-cropped/id00017' is a directory...
PID 763 - 2021-03-26 01:47:00,471 - INFO - dataloaders.common.voxceleb.get_part_data (train) - Yes, it is; the only train identity will be 'id00017'
PID 763 - 2021-03-26 01:47:00,472 - INFO - dataloaders.common.voxceleb.get_part_data (train) - This dataset has 3 images
PID 763 - 2021-03-26 01:47:00,472 - INFO - dataloaders.common.voxceleb.get_part_data (train) - Setting `args.num_labels` to 1 because we are fine-tuning or the model has been fine-tuned
PID 763 - 2021-03-26 01:47:00,472 - WARNING - dataloader - Could not find the '.npy' file with bboxes, will assume the images are already cropped
PID 763 - 2021-03-26 01:47:00,473 - INFO - dataloaders.augmentation - Pixelwise augmentation: True
PID 763 - 2021-03-26 01:47:00,473 - INFO - dataloaders.augmentation - Affine scale augmentation: True
PID 763 - 2021-03-26 01:47:00,473 - INFO - dataloaders.augmentation - Affine shift augmentation: True
Traceback (most recent call last):
  File "train.py", line 129, in <module>
    dataloader_train = m['dataloader'].get_dataloader(args, part='train', phase='train')
  File "/content/latent-pose-reenactment/dataloaders/dataloader.py", line 28, in get_dataloader
    dataset = self.dataset.get_dataset(args, part)
  File "/content/latent-pose-reenactment/dataloaders/voxceleb2_segmentation_nolandmarks.py", line 40, in get_dataset
    augmenter = augmentation.get_augmentation_seq(args)
  File "/content/latent-pose-reenactment/dataloaders/common/augmentation.py", line 24, in get_augmentation_seq
    return ParametricAugmenter(args)
  File "/content/latent-pose-reenactment/dataloaders/common/augmentation.py", line 60, in __init__
    iaa.BlendAlphaSimplexNoise(
AttributeError: module 'imgaug.augmenters' has no attribute 'BlendAlphaSimplexNoise'
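
A likely cause (my note, not confirmed in the repo): BlendAlphaSimplexNoise only exists in imgaug 0.4.0 and later, where it replaced the older SimplexNoiseAlpha, so an older imgaug install raises exactly this AttributeError. A minimal check/workaround sketch:

# Minimal sketch, assuming the failure comes from an imgaug version older than 0.4.0.
import imgaug
import imgaug.augmenters as iaa

print(imgaug.__version__)   # BlendAlphaSimplexNoise needs imgaug >= 0.4.0
if not hasattr(iaa, "BlendAlphaSimplexNoise"):
    # Either upgrade imgaug (pip install -U "imgaug>=0.4.0"), or alias the old name.
    # Note: keyword argument names differ between the two augmenters, so upgrading
    # is the safer fix if the repo's augmentation code uses keyword arguments.
    iaa.BlendAlphaSimplexNoise = iaa.SimplexNoiseAlpha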

Training progress slow

Hi shrubb, I tried running the default code on 2 RTX 3080 GPUs with default settings to train from scratch with data/splits/train.csv as the identities, but training is taking very long (at least 7.04 seconds per iteration, so one epoch takes at least 24 hours).
I tried changing num_workers to 0, setting pin_memory in the dataloader to True, and decreasing the batch size, but the issue persists.
Is there any way to speed up the training process?
Thanks

FileNotFoundError

sir, when I am running train.py from the following script:

# in this example, your images should be "$DATASET_ROOT/images-cropped/$IDENTITY_NAME/*.jpg"
DATASET_ROOT="DATASET_ROOT"
IDENTITY_NAME="personA"
MAX_BATCH_SIZE=8             # pick the largest possible, start with 8 and decrease until it fits in VRAM
CHECKPOINT_PATH="checkpoints/latent-pose-release.pth"
OUTPUT_PATH="outputs/"       # a directory for outputs, will be created
RUN_NAME="tony_hawk_take_1"  # give your run a name if you want

# Important. See the note below
TARGET_NUM_ITERATIONS=230

# Don't change these
NUM_IMAGES=`ls -1 "$DATASET_ROOT/images-cropped/$IDENTITY_NAME" | wc -l`
BATCH_SIZE=$((NUM_IMAGES<MAX_BATCH_SIZE ? NUM_IMAGES : MAX_BATCH_SIZE))
ITERATIONS_IN_EPOCH=$(( NUM_IMAGES / BATCH_SIZE ))

mkdir -p $OUTPUT_PATH

python train.py \
    --config finetuning-base                 \
    --checkpoint_path "$CHECKPOINT_PATH"     \
    --data_root "$DATASET_ROOT"              \
    --train_split_path "$IDENTITY_NAME"      \
    --batch_size $BATCH_SIZE                 \
    --num_epochs $(( (TARGET_NUM_ITERATIONS + ITERATIONS_IN_EPOCH - 1) / ITERATIONS_IN_EPOCH )) \
    --experiments_dir "$OUTPUT_PATH"         \
    --experiment_name "$RUN_NAME"

Then I am getting the following error:

PID 2869 - 2021-03-24 11:45:10,551 - INFO - utils.load_config_file - Using config configs/finetuning-base.yaml
PID 2869 - 2021-03-24 11:45:10,554 - INFO - utils.get_args_and_modules - Loading checkpoint file checkpoints/latent-pose-release.pth
PID 2869 - 2021-03-24 11:45:15,676 - INFO - utils.setup - Random Seed: 123
PID 2869 - 2021-03-24 11:45:15,677 - INFO - train.py - Initialized the process group, my rank is 0
PID 2869 - 2021-03-24 11:45:15,677 - INFO - train.py - Loading dataloader 'voxceleb2_segmentation_nolandmarks'
PID 2869 - 2021-03-24 11:45:15,983 - INFO - dataloaders.common.voxceleb.get_part_data (train) - Determining the 'train' data source
PID 2869 - 2021-03-24 11:45:15,983 - INFO - dataloaders.common.voxceleb.get_part_data (train) - Checking if 'DATASET_ROOT/images-cropped/personA' is a directory...
PID 2869 - 2021-03-24 11:45:15,983 - INFO - dataloaders.common.voxceleb.get_part_data (train) - Yes, it is; the only train identity will be 'personA'
PID 2869 - 2021-03-24 11:45:16,003 - INFO - dataloaders.common.voxceleb.get_part_data (train) - This dataset has 1184 images
PID 2869 - 2021-03-24 11:45:16,003 - INFO - dataloaders.common.voxceleb.get_part_data (train) - Setting `args.num_labels` to 1 because we are fine-tuning or the model has been fine-tuned
PID 2869 - 2021-03-24 11:45:16,016 - WARNING - dataloader - Could not find the '.npy' file with bboxes, will assume the images are already cropped
PID 2869 - 2021-03-24 11:45:16,016 - INFO - dataloaders.augmentation - Pixelwise augmentation: True
PID 2869 - 2021-03-24 11:45:16,016 - INFO - dataloaders.augmentation - Affine scale augmentation: True
PID 2869 - 2021-03-24 11:45:16,016 - INFO - dataloaders.augmentation - Affine shift augmentation: True
PID 2869 - 2021-03-24 11:45:16,030 - INFO - dataloaders.dataloader - This process will receive a dataset with 1184 samples
PID 2869 - 2021-03-24 11:45:16,030 - INFO - train.py - Starting from checkpoint checkpoints/latent-pose-release.pth
PID 2869 - 2021-03-24 11:45:16,030 - INFO - utils.load_model_from_checkpoint - Loading embedder 'unsupervised_pose_separate_embResNeXt_segmentation'
PID 2869 - 2021-03-24 11:45:20,137 - INFO - utils.load_model_from_checkpoint - Loading generator 'vector_pose_unsupervised_segmentation_noBottleneck'
PID 2869 - 2021-03-24 11:45:20,531 - INFO - utils.load_model_from_checkpoint - Loading discriminator 'no_landmarks'
PID 2869 - 2021-03-24 11:45:21,225 - WARNING - utils.load_model_from_checkpoint - Discriminator has changed in config (maybe due to finetuning), so not loading `optimizer_D`
PID 2869 - 2021-03-24 11:45:21,225 - INFO - utils.load_model_from_checkpoint - Loading runner holycow
PID 2869 - 2021-03-24 11:45:21,225 - WARNING - utils.load_model_from_checkpoint - Embedder or generator has changed in config, so not loading `optimizer_G`
PID 2869 - 2021-03-24 11:45:21,229 - INFO - train.py - Starting from iteration #2714183
PID 2869 - 2021-03-24 11:45:25,641 - WARNING - runner - Parameters mismatch in generator and the initial value of weights' running averages. Initializing by cloning
PID 2869 - 2021-03-24 11:45:25,645 - INFO - train.py - For fine-tuning, computing an averaged identity embedding from 1184 frames
Traceback (most recent call last):
  File "train.py", line 247, in <module>
    for data_dict, _ in dataloader_train:
  File "/home/nitin/anaconda3/envs/latent_face/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
    data = self._next_data()
  File "/home/nitin/anaconda3/envs/latent_face/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 856, in _next_data
    return self._process_data(data)
  File "/home/nitin/anaconda3/envs/latent_face/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 881, in _process_data
    data.reraise()
  File "/home/nitin/anaconda3/envs/latent_face/lib/python3.8/site-packages/torch/_utils.py", line 395, in reraise
    raise self.exc_type(msg)
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/nitin/anaconda3/envs/latent_face/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/nitin/anaconda3/envs/latent_face/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/nitin/anaconda3/envs/latent_face/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/nitin/anaconda3/envs/latent_face/lib/python3.8/site-packages/torch/utils/data/dataset.py", line 257, in __getitem__
    return self.dataset[self.indices[idx]]
  File "/home/nitin/latent-pose-reenactment/dataloaders/voxceleb2_segmentation_nolandmarks.py", line 196, in __getitem__
    dec_dicts = [self.loader.load_sample(path, i, self.imsize, **features_to_load) for i in dec_ids]
  File "/home/nitin/latent-pose-reenactment/dataloaders/voxceleb2_segmentation_nolandmarks.py", line 196, in <listcomp>
    dec_dicts = [self.loader.load_sample(path, i, self.imsize, **features_to_load) for i in dec_ids]
  File "/home/nitin/latent-pose-reenactment/dataloaders/voxceleb2_segmentation_nolandmarks.py", line 157, in load_sample
    segmentation = self.load_segm(path, i)
  File "/home/nitin/latent-pose-reenactment/dataloaders/voxceleb2_segmentation_nolandmarks.py", line 85, in load_segm
    raise FileNotFoundError(f'Sample {segm_path} not found')
FileNotFoundError: Sample DATASET_ROOT/segmentation-cropped/personA/00657.png not found

The contents of the IDENTITY_NAME folder look like this:

(latent_face) nitin@nitin-desktop:~/latent-pose-reenactment/DATASET_ROOT/images-cropped/personA$ ls
00000.jpg  00092.jpg  00184.jpg  00276.jpg  00368.jpg  00460.jpg  00552.jpg  00644.jpg  00736.jpg  00828.jpg  00920.jpg  01012.jpg  01104.jpg 00001.jpg  00093.jpg  00185.jpg  00277.jpg  00369.jpg  00461.jpg  00553.jpg  00645.jpg  00737.jpg  00829.jpg  00921.jpg  01013.jpg  01105.jpg 00002.jpg  00094.jpg  00186.jpg  00278.jpg  00370.jpg  00462.jpg  00554.jpg  00646.jpg  00738.jpg  00830.jpg  00922.jpg  01014.jpg  01106.jpg

What I have observed is that when the images inside the IDENTITY_NAME folder have the *.jpg extension, the code looks for *.png files, and vice versa.

I request you to please help me in solving this.
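
Judging from the traceback, the dataloader expects a mask segmentation-cropped/personA/XXXXX.png for every images-cropped/personA/XXXXX.jpg. A small sketch to list frames whose masks are missing (the paths below are placeholders following the layout in the traceback):

# Minimal sketch: report cropped frames that have no matching segmentation mask.
from pathlib import Path

images = Path("DATASET_ROOT/images-cropped/personA")
masks = Path("DATASET_ROOT/segmentation-cropped/personA")
for img in sorted(images.glob("*.jpg")):
    if not (masks / (img.stem + ".png")).exists():
        print("missing mask for", img.name)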

how to not use finetune

I want to test without fine-tuning; how should I do that? I set finetune=False in finetune-base.yaml when running train.py, but it doesn't work.

how to run without training.

How can I run inference without training, that is, pass a driver image and a source image and generate the output without any training?
I would like to use it in real time with a webcam: each webcam frame would act as the "driver", and any image of a face as the avatar.

Unexpected and missing keys in state_dict when constructing DeepLabv3+ model

Sorry for the large copy and paste dump. I get the below when trying to run the preprocess_dataset.sh file.

I am not quite sure what the issue is and cannot find any help online. Any chance you might know what is going wrong?

Constructing DeepLabv3+ model...
Number of classes: 20
Output stride: 16
Number of Input Channels: 3
unexpected key "source_graph_2_fea.node_fea_for_res" in state_dict
unexpected key "source_graph_2_fea.node_fea_for_hidden" in state_dict
unexpected key "source_graph_2_fea.weight" in state_dict
unexpected key "source_skip_conv.0.weight" in state_dict
unexpected key "source_skip_conv.0.bias" in state_dict
unexpected key "source_semantic.weight" in state_dict
unexpected key "source_semantic.bias" in state_dict
unexpected key "middle_semantic.weight" in state_dict
unexpected key "middle_semantic.bias" in state_dict
unexpected key "middle_source_featuremap_2_graph.pre_fea" in state_dict
unexpected key "middle_source_featuremap_2_graph.weight" in state_dict
unexpected key "middle_source_graph_conv1.weight" in state_dict
unexpected key "middle_source_graph_conv2.weight" in state_dict
unexpected key "middle_source_graph_conv3.weight" in state_dict
unexpected key "middle_source_graph_2_fea.node_fea_for_res" in state_dict
unexpected key "middle_source_graph_2_fea.node_fea_for_hidden" in state_dict
unexpected key "middle_source_graph_2_fea.weight" in state_dict
unexpected key "middle_skip_conv.0.weight" in state_dict
unexpected key "middle_skip_conv.0.bias" in state_dict
unexpected key "transpose_graph_source2target.weight" in state_dict
unexpected key "transpose_graph_source2target.adj" in state_dict
unexpected key "transpose_graph_target2source.weight" in state_dict
unexpected key "transpose_graph_target2source.adj" in state_dict
unexpected key "transpose_graph_middle2source.weight" in state_dict
unexpected key "transpose_graph_middle2source.adj" in state_dict
unexpected key "transpose_graph_middle2target.weight" in state_dict
unexpected key "transpose_graph_middle2target.adj" in state_dict
unexpected key "transpose_graph_source2middle.weight" in state_dict
unexpected key "transpose_graph_source2middle.adj" in state_dict
unexpected key "transpose_graph_target2middle.weight" in state_dict
unexpected key "transpose_graph_target2middle.adj" in state_dict
unexpected key "fc_graph_source.weight" in state_dict
unexpected key "fc_graph_target.weight" in state_dict
unexpected key "fc_graph_middle.weight" in state_dict
missing keys in state_dict: "{'xception_features.block10.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block3.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block19.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block15.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block10.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block16.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block19.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block3.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.conv5.depthwise_bn.num_batches_tracked', 'xception_features.block11.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block20.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block13.rep.3.pointwise_bn.num_batches_tracked', 'transpose_graph.weight', 'xception_features.block10.rep.3.pointwise_bn.num_batches_tracked', 'aspp2.atrous_convolution.pointwise_bn.num_batches_tracked', 'xception_features.block8.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block11.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block6.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block1.rep.2.pointwise_bn.num_batches_tracked', 'xception_features.block6.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block16.rep.1.pointwise_bn.num_batches_tracked', 'fc_graph.weight', 'xception_features.block2.block2_lastconv.1.depthwise_bn.num_batches_tracked', 'xception_features.block8.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block3.skipbn.num_batches_tracked', 'xception_features.conv4.pointwise_bn.num_batches_tracked', 'xception_features.block16.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block4.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block2.rep.1.pointwise_bn.num_batches_tracked', 'decoder.1.atrous_convolution.pointwise_bn.num_batches_tracked', 'xception_features.block13.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block2.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block13.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block15.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block17.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block5.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block9.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block16.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block14.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block20.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block1.skipbn.num_batches_tracked', 'xception_features.block1.rep.0.depthwise_bn.num_batches_tracked', 'xception_features.block16.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block19.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block7.rep.3.depthwise_bn.num_batches_tracked', 'decoder.0.atrous_convolution.depthwise_bn.num_batches_tracked', 'xception_features.block17.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block8.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block6.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block2.block2_lastconv.1.pointwise_bn.num_batches_tracked', 'global_avg_pool.2.num_batches_tracked', 'aspp3.atrous_convolution.pointwise_bn.num_batches_tracked', 'xception_features.block6.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block6.rep.1.pointwise_bn.num_batches_tracked', 
'xception_features.block12.rep.3.depthwise_bn.num_batches_tracked', 'decoder.0.atrous_convolution.pointwise_bn.num_batches_tracked', 'xception_features.block16.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block3.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.bn1.num_batches_tracked', 'xception_features.block11.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block11.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block12.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block13.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block8.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block14.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block11.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block7.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block13.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block6.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.conv5.pointwise_bn.num_batches_tracked', 'xception_features.block1.rep.2.depthwise_bn.num_batches_tracked', 'aspp4.atrous_convolution.pointwise_bn.num_batches_tracked', 'xception_features.block4.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block19.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block3.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block14.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block1.rep.4.pointwise_bn.num_batches_tracked', 'xception_features.block12.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block1.rep.0.pointwise_bn.num_batches_tracked', 'xception_features.block18.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.conv4.depthwise_bn.num_batches_tracked', 'xception_features.conv3.pointwise_bn.num_batches_tracked', 'xception_features.block7.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block17.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block19.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block20.skipbn.num_batches_tracked', 'transpose_graph.adj', 'xception_features.block5.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block14.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block12.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block12.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block1.rep.4.depthwise_bn.num_batches_tracked', 'xception_features.block15.rep.1.depthwise_bn.num_batches_tracked', 'aspp3.atrous_convolution.depthwise_bn.num_batches_tracked', 'xception_features.block9.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block20.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block9.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block18.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block20.rep.5.pointwise_bn.num_batches_tracked', 'aspp2.atrous_convolution.depthwise_bn.num_batches_tracked', 'xception_features.block15.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block14.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block18.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block19.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block5.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block17.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block15.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block8.rep.3.depthwise_bn.num_batches_tracked', 
'aspp1.bn.num_batches_tracked', 'xception_features.block2.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.conv3.depthwise_bn.num_batches_tracked', 'xception_features.block14.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block18.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block4.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block18.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block8.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block5.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block9.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block18.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block17.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block13.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block11.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block4.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block2.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block7.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block10.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block5.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block20.rep.3.pointwise_bn.num_batches_tracked', 'decoder.1.atrous_convolution.depthwise_bn.num_batches_tracked', 'xception_features.block4.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block5.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block9.rep.3.pointwise_bn.num_batches_tracked', 'xception_features.block17.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block2.skipbn.num_batches_tracked', 'xception_features.bn2.num_batches_tracked', 'xception_features.block7.rep.5.depthwise_bn.num_batches_tracked', 'xception_features.block20.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block9.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block4.rep.3.pointwise_bn.num_batches_tracked', 'feature_projection_bn1.num_batches_tracked', 'concat_projection_bn1.num_batches_tracked', 'xception_features.block3.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block10.rep.1.depthwise_bn.num_batches_tracked', 'xception_features.block7.rep.1.pointwise_bn.num_batches_tracked', 'xception_features.block3.rep.5.pointwise_bn.num_batches_tracked', 'xception_features.block12.rep.1.depthwise_bn.num_batches_tracked', 'aspp4.atrous_convolution.depthwise_bn.num_batches_tracked', 'xception_features.block15.rep.3.depthwise_bn.num_batches_tracked', 'xception_features.block10.rep.5.pointwise_bn.num_batches_tracked'}"
--images_path (/tmp/tmp.93UyFgH4hX) is a file, reading it for a list of files...
Found 1 images
Will output files in /home/ubuntu/data/dev/projects/latent-pose-reenactment/segmentation-cropped with names relative to /home/ubuntu/data/dev/projects/latent-pose-reenactment/images-cropped.
Example:
The segmentation for: /home/ubuntu/data/dev/projects/latent-pose-reenactment/images-cropped/jack/00000.jpg
Will be put in: /home/ubuntu/data/dev/projects/latent-pose-reenactment/segmentation-cropped/jack
2022-08-11 12:47:48: 0 / 1
/home/ubuntu/anaconda3/envs/lpr/lib/python3.7/site-packages/torch/nn/functional.py:2941: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")
Average inference time: 0.46386814300001333
/home/ubuntu/data/dev/projects/latent-pose-reenactment/utils

RuntimeError: CUDA error: all CUDA-capable devices are busy or unavailable

Wed Nov 17 06:44:26 2021       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 495.44       Driver Version: 460.32.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla K80           Off  | 00000000:00:04.0 Off |                    0 |
| N/A   31C    P8    28W / 149W |      0MiB / 11441MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

%cd /content/latent-pose-reenactment/utils
!sudo bash /content/latent-pose-reenactment/utils/preprocess_dataset.sh

/content/latent-pose-reenactment/utils
/content/latent-pose-reenactment/utils
Got 1 folders, will process from 0-th to 999999999-th
0 images /content/dataset_ana2/images/ana ana
WARNING: /content/dataset_ana2/images-cropped/ana/ already exists, contains 0 files
Traceback (most recent call last):
  File "crop_as_in_dataset.py", line 680, in <module>
    cropper = ChosenFaceCropper((args.image_size, args.image_size))
  File "crop_as_in_dataset.py", line 211, in __init__
    self.face_detector = load_face_detector()
  File "crop_as_in_dataset.py", line 22, in load_face_detector
    return FaceDetector(device='cuda')
  File "/usr/local/lib/python3.7/dist-packages/face_alignment/detection/sfd/sfd_detector.py", line 31, in __init__
    self.face_detector.to(device)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 899, in to
    return self._apply(convert)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 570, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 593, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 897, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA error: all CUDA-capable devices are busy or unavailable
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
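
For what it's worth (my note, not from the repo), here is a minimal diagnostic sketch to compare what PyTorch sees against the nvidia-smi output above (driver 460.32.03, CUDA 11.2, one idle Tesla K80); a mismatch between the installed PyTorch's CUDA build and the driver, or a restrictive GPU compute mode, are common causes of this error.

# Minimal environment check.
import torch

print("torch:", torch.__version__)
print("built for CUDA:", torch.version.cuda)
print("cuda available:", torch.cuda.is_available())
print("device count:", torch.cuda.device_count())
if torch.cuda.is_available():
    print("device 0:", torch.cuda.get_device_name(0))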

wrong index of inter-ocular in pose_reconstruction_error?

Hi shrubb, good to see you again.

In pose_reconstruction_error (compute_pose_identity_error.py), the landmark distance is normalized by the inter-ocular distance:

def pose_reconstruction_error(gt_landmarks, our_landmarks, apply_optimal_alignment=False):
    assert gt_landmarks.shape == (len(IDENTITIES), NUM_VIDEO_FRAMES, 68, 2)
    assert our_landmarks.shape == gt_landmarks.shape

........

    interocular = np.linalg.norm(gt_landmarks[:, :, 36] - gt_landmarks[:, :, 45], axis=-1).clip(min=1e-2)
    normalized_distances = np.linalg.norm(gt_landmarks - our_landmarks, axis=-1) / interocular[:, :, None]
    return normalized_distances.mean()

but, according to the landmark index,

(image: 68-point facial landmark indexing diagram)

I think the distance between index 36 and index 45 is not the inter-ocular distance.

Maybe it should be index 39 and index 42?
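
For reference (my own note, not from the repo): in the 68-point iBUG markup, indices 36 and 45 are the outer eye corners and 39 and 42 the inner corners, so the code above normalizes by the outer-corner distance. Many papers call exactly that the "inter-ocular distance", while 39-42 gives the inner-corner distance instead.

# Minimal sketch of both normalization distances for (..., 68, 2) landmark arrays.
import numpy as np

def eye_distances(landmarks: np.ndarray):
    outer = np.linalg.norm(landmarks[..., 36, :] - landmarks[..., 45, :], axis=-1)  # outer corners
    inner = np.linalg.norm(landmarks[..., 39, :] - landmarks[..., 42, :], axis=-1)  # inner corners
    return outer, inner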

Can we use the pretrained model to drive any arbitrary video?

Hi,
Thank you for your wonderful work! I was wondering: can we use the pre-trained model to drive a particular image? The paper mentions that the resulting images will have a noticeable identity gap. For the time being, I am not too worried about the identity gap, but I would like to see the results. Currently, when I run it, I get an error about identity_embedding missing from the pre-trained model. What identity_embedding was used to produce results from the pre-trained checkpoint?
Thank you

Image Shaking

Help please, my dst images look something like this (screenshot attached), and my src images look like this (screenshot attached).

My preprocessing config looks like this:

DO_DECODE_VIDEOS=false

DO_CROP=true
DO_COMPUTE_SEGMENTATION=true
DO_COMPUTE_LANDMARKS=false
DO_COMPUTE_POSE_3DMM=false

DO_CROP_FFHQ=false
DO_COMPUTE_SEGMENTATION_FFHQ=false

After fine-tuning and driving, my output video shakes (GIF attached). Why could this be?

I assume this is due to incorrect operation of the face tracker.

How to reconstruct the original uncut image with the output image?

Hello again, sorry for the inconvenience. How can I rebuild the original, uncropped image, replacing the cropped region with the result?

When processing an image, it is saved in images-cropped; then I run drive.py. What I want to do is overlay the drive output image on the original, non-cropped image.

I know that result = torch_to_opencv(data_dict['fake_rgbs'][0]) is the output; this image should then be superimposed on the original, untrimmed frame.
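
Not part of the repo, but a minimal compositing sketch, assuming you still have the crop rectangle (x, y, w, h) that was used to cut the face out of the original frame and that `result` is the BGR uint8 image from torch_to_opencv(...):

# Minimal sketch: paste the generated crop back into the original, uncropped frame.
import cv2
import numpy as np

def paste_back(original_frame: np.ndarray, result: np.ndarray, x: int, y: int, w: int, h: int) -> np.ndarray:
    out = original_frame.copy()
    out[y:y + h, x:x + w] = cv2.resize(result, (w, h))  # resize the 256x256 output back to the crop size
    return out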

About Graph

Thank you for your contribution. What drawing software do you use? I think your structure diagram is very beautiful. Thank you :)

Could I get the code for preprocessing VoxCeleb2?

Hi, again!

I'm trying to train your model from scratch, and I preprocessed the VoxCeleb2 dataset as follows:

Following the preprocessing of Neural Head Reenactment with Latent Pose Descriptors

  • [Sampling] Sample a frame every [SAMPLE_RATE] = 25 frames (same as the paper).
  • [Resizing] Detect the face -> make the box square -> enlarge the box by 80% -> resize to 256 x 256 (see the sketch after this list).
  • [Detecting others] Extract facial landmarks / segmentation masks.
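
For reference, a rough sketch of that resizing step (my own, not the repo's utils/crop_as_in_dataset.py); bbox is assumed to come from any face detector as (x0, y0, x1, y1) in pixels:

# Minimal sketch of "detect face -> make square -> enlarge by 80% -> resize to 256x256".
import cv2
import numpy as np

def crop_face(image: np.ndarray, bbox, out_size=256, enlarge=0.8) -> np.ndarray:
    x0, y0, x1, y1 = bbox
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    side = max(x1 - x0, y1 - y0) * (1 + enlarge)      # square box, enlarged by 80%
    half = side / 2
    pad = int(np.ceil(half))                          # reflection-pad so out-of-frame boxes stay valid
    padded = cv2.copyMakeBorder(image, pad, pad, pad, pad, cv2.BORDER_REFLECT)
    cx, cy = cx + pad, cy + pad
    crop = padded[int(cy - half):int(cy + half), int(cx - half):int(cx + half)]
    return cv2.resize(crop, (out_size, out_size))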

All preprocessing code was implemented with reference to your utils/crop_as_in_dataset.py and utils/preprocess_dataset.sh. Here, I have two questions about the preprocessing:

1. I found that the resizing operation generates rather unrealistic outputs due to the reflection padding. I think this could have a bad effect on training (in terms of the perceptual loss). Also, segmentation masks based on these outputs are erroneous. I attach intermediate results produced by your code :)

(image: intermediate cropping results)

2. The second one is about sampling. I sampled every 25th frame (i.e., 0, 25, 50, ...) from the videos under the same hash id. As you know, this way some hash ids end up with fewer than 9 frames. Did you discard those hash ids? I wonder about the exact procedure used to produce the lists /data/splits/train.csv and /data/splits/val.csv.

If possible, it would be greatly helpful if you could share the preprocessing code.
Thank you, Burkov!

ValueError: Could not determine input data source, check `args.data_root`, `args.img_dir` and `args.val_split_path

when I am running drive.py as:

python3 drive.py outputs/tony_hawk_take_1/checkpoints/model_02715367.pth results/ --destination results/ --images_path DATASET_ROOT/images-cropped/personA/

then I am getting the following exception:

2021-03-25 23:19:44,423 - INFO - Will run on device 'cuda:0'
2021-03-25 23:19:44,423 - INFO - Loading checkpoint from 'outputs/tony_hawk_take_1/checkpoints/model_02715367.pth'
2021-03-25 23:19:44,715 - INFO - Loading embedder 'unsupervised_pose_separate_embResNeXt_segmentation'
2021-03-25 23:19:46,895 - INFO - Loading generator 'vector_pose_unsupervised_segmentation_noBottleneck'
2021-03-25 23:19:47,269 - INFO - Loading discriminator 'no_landmarks'
2021-03-25 23:19:47,519 - INFO - Loading dataloader 'voxceleb2_segmentation_nolandmarks'
2021-03-25 23:19:47,772 - INFO - Determining the 'val' data source
2021-03-25 23:19:47,772 - INFO - Checking if 'results/images-cropped/DATASET_ROOT/images-cropped/personA' is a directory...
2021-03-25 23:19:47,772 - INFO - No, it isn't
2021-03-25 23:19:47,772 - INFO - Checking if 'DATASET_ROOT/images-cropped/personA' is a file...
2021-03-25 23:19:47,772 - INFO - No, it isn't
2021-03-25 23:19:47,772 - INFO - Checking if 'results/images-cropped' is a directory...
2021-03-25 23:19:47,772 - INFO - No, it isn't
Traceback (most recent call last):
  File "drive.py", line 78, in <module>
    dataloader = Dataloader(saved_args.dataloader).get_dataloader(saved_args, part='val', phase='val')
  File "/home/nitin/latent-pose-reenactment/dataloaders/dataloader.py", line 28, in get_dataloader
    dataset = self.dataset.get_dataset(args, part)
  File "/home/nitin/latent-pose-reenactment/dataloaders/voxceleb2_segmentation_nolandmarks.py", line 33, in get_dataset
    dirlist = voxceleb.get_part_data(args, part)
  File "/home/nitin/latent-pose-reenactment/dataloaders/common/voxceleb.py", line 79, in get_part_data
    raise ValueError(
ValueError: Could not determine input data source, check `args.data_root`, `args.img_dir` and `args.val_split_path

I request you to please provide a complete command-by-command example along with the expected folder structure of the project.
This would help us run the project without problems, because the --help output is not very helpful in many cases.

jitter output

When preprocessing a video, the cropped frames appear to jitter, and the video output from drive.py is jittery as well.
How can I remove the jitter?

Set up to replicate the reenactment results in the paper

Hi,
I am trying to replicate the pose reconstruction error and identity error reported in the paper. The instructions in this repo involve a certain amount of user judgement during fine-tuning to get the best results.

Are there any default settings that reproduce the results in the paper? That would make it possible to test this model on a number of videos for a fair comparison against its counterparts.

Thanks

Could you provide some missing files and double-check the meta-model checkpoint?

The README mentions the argument --config finetuning-base in the fine-tuning step and a training configuration configs/default.yaml in the training step. I suppose the config directory was not committed.

The preprocessing script uses a file inference_folder.py for Graphonomy; is it a custom script modified from the original inference.py? If so, could you provide it?

Without the finetuning-base config, I manually added the --finetune argument for fine-tuning but encountered the following error. How should I resolve it?

PID 10490 - 2020-10-29 11:14:39,776 - INFO - utils.load_config_file - Using config configs/finetuning-base.yaml
PID 10490 - 2020-10-29 11:14:39,776 - WARNING - utils.get_args_and_modules - Could not load config finetuning-base
PID 10490 - 2020-10-29 11:14:39,776 - INFO - utils.get_args_and_modules - Loading checkpoint file checkpoints/latent-pose-release.pth
PID 10490 - 2020-10-29 11:14:41,345 - INFO - utils.setup - Random Seed: 123
PID 10490 - 2020-10-29 11:14:42,995 - INFO - train.py - Initialized the process group, my rank is 0
PID 10490 - 2020-10-29 11:14:42,995 - WARNING - train.py - Sorry, multi-GPU fine-tuning is NYI, setting `--num_gpus=1`
PID 10490 - 2020-10-29 11:14:42,995 - INFO - train.py - Loading dataloader 'voxceleb2_segmentation_nolandmarks'
PID 10490 - 2020-10-29 11:14:43,133 - INFO - dataloaders.common.voxceleb.get_part_data (train) - Determining the 'train' data source
PID 10490 - 2020-10-29 11:14:43,133 - INFO - dataloaders.common.voxceleb.get_part_data (train) - Checking if '/path/latent-pose-reenactment/data/VoxCeleb1_test_finetuning/images-cropped/id10280/XiKRlssBw2M/000330#001148.mp4' is a directory...
PID 10490 - 2020-10-29 11:14:43,133 - INFO - dataloaders.common.voxceleb.get_part_data (train) - Yes, it is; the only train identity will be 'id10280/XiKRlssBw2M/000330#001148.mp4'
PID 10490 - 2020-10-29 11:14:43,138 - INFO - dataloaders.common.voxceleb.get_part_data (train) - This dataset has 818 images
PID 10490 - 2020-10-29 11:14:43,139 - INFO - dataloaders.common.voxceleb.get_part_data (train) - Setting `args.num_labels` to 1 because we are fine-tuning or the model has been fine-tuned
PID 10490 - 2020-10-29 11:14:43,148 - WARNING - dataloader - Could not find the '.npy' file with bboxes, will assume the images are already cropped
PID 10490 - 2020-10-29 11:14:43,148 - INFO - dataloaders.augmentation - Pixelwise augmentation: True
PID 10490 - 2020-10-29 11:14:43,148 - INFO - dataloaders.augmentation - Affine scale augmentation: True
PID 10490 - 2020-10-29 11:14:43,148 - INFO - dataloaders.augmentation - Affine shift augmentation: True
PID 10490 - 2020-10-29 11:14:43,160 - INFO - dataloaders.dataloader - This process will receive a dataset with 409 samples
PID 10490 - 2020-10-29 11:14:43,160 - INFO - train.py - Starting from checkpoint checkpoints/latent-pose-release.pth
PID 10490 - 2020-10-29 11:14:43,160 - INFO - utils.load_model_from_checkpoint - Loading embedder 'unsupervised_pose_separate_embResNeXt_segmentation'
PID 10490 - 2020-10-29 11:14:44,027 - INFO - utils.load_model_from_checkpoint - Loading generator 'vector_pose_unsupervised_segmentation_noBottleneck'
PID 10490 - 2020-10-29 11:14:44,552 - INFO - utils.load_model_from_checkpoint - Loading discriminator 'no_landmarks'
PID 10490 - 2020-10-29 11:14:45,501 - WARNING - utils.load_model_from_checkpoint - Discriminator has changed in config (maybe due to finetuning), so not loading `optimizer_D`
PID 10490 - 2020-10-29 11:14:45,501 - INFO - utils.load_model_from_checkpoint - Loading runner holycow
PID 10490 - 2020-10-29 11:14:45,502 - WARNING - utils.load_model_from_checkpoint - Embedder or generator has changed in config, so not loading `optimizer_G`
PID 10490 - 2020-10-29 11:14:45,503 - INFO - train.py - Starting from iteration #2714183
PID 10490 - 2020-10-29 11:14:50,379 - WARNING - runner - Parameters mismatch in generator and the initial value of weights' running averages. Initializing by cloning
PID 10490 - 2020-10-29 11:14:50,386 - INFO - train.py - For fine-tuning, computing an averaged identity embedding from 409 frames
PID 10490 - 2020-10-29 11:14:53,639 - INFO - train.py - Entering training loop
/path/anaconda3/envs/latent-pose/lib/python3.7/site-packages/torch/nn/functional.py:3385: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details.
  warnings.warn("Default grid_sample and affine_grid behavior has changed "
Traceback (most recent call last):
  File "train.py", line 291, in <module>
    epoch, args, phase='train', writer=writer, saver=saver)
  File "/path/Documents/latent-pose-reenactment/runners/holycow.py", line 230, in run_epoch
    all_data_dict, losses_G_dict, losses_D_dict = training_module(data_dict, target_dict)
  File "/path/anaconda3/envs/latent-pose/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/path/latent-pose-reenactment/runners/holycow.py", line 178, in forward
    crit_out = criterion(data_dict)
  File "/path/anaconda3/envs/latent-pose/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/path/latent-pose-reenactment/criterions/dis_embed.py", line 22, in forward
    fake_embed = data_dict['embeds_elemwise']
KeyError: 'embeds_elemwise'

Shared Memory error

Hey @shrubb
I was using your inference_folder.py script in utils/Graphonomy/exp/inference to pre-process videos, but it showed the following error.

ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
Traceback (most recent call last):
  File "/root/miniconda/envs/test/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 761, in _try_get_data
    data = self._data_queue.get(timeout=timeout)
  File "/root/miniconda/envs/test/lib/python3.7/multiprocessing/queues.py", line 104, in get
    if not self._poll(timeout):
  File "/root/miniconda/envs/test/lib/python3.7/multiprocessing/connection.py", line 257, in poll
    return self._poll(timeout)
  File "/root/miniconda/envs/test/lib/python3.7/multiprocessing/connection.py", line 414, in _poll
    r = wait([self], timeout)
  File "/root/miniconda/envs/test/lib/python3.7/multiprocessing/connection.py", line 920, in wait
    ready = selector.select(timeout)
  File "/root/miniconda/envs/test/lib/python3.7/selectors.py", line 415, in select
    fd_event_list = self._selector.poll(timeout)
  File "/root/miniconda/envs/test/lib/python3.7/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
    _error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 3038) is killed by signal: Bus error. It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "exp/inference/inference_folder.py", line 247, in <module>
    for sample_idx, (images, images_flipped, image_paths, original_sizes) in enumerate(dataloader):
  File "/root/miniconda/envs/test/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
    data = self._next_data()
  File "/root/miniconda/envs/test/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 841, in _next_data
    idx, data = self._get_data()
  File "/root/miniconda/envs/test/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 808, in _get_data
    success, data = self._try_get_data()
  File "/root/miniconda/envs/test/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 774, in _try_get_data
    raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str))
RuntimeError: DataLoader worker (pid(s) 3038) exited unexpectedly
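
(Not a maintainer answer, just a note for anyone hitting this.) The error means a DataLoader worker process ran out of shared memory; PyTorch workers pass batches through /dev/shm. The usual workarounds are to raise the container's shared-memory limit (for Docker, something like `--shm-size=8g`) or to keep data loading in the main process. A minimal sketch of the latter, assuming you can edit the DataLoader that inference_folder.py constructs (the dataset below is a toy stand-in, not the repository's class):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in for whatever Dataset inference_folder.py builds (hypothetical shapes).
dataset = TensorDataset(torch.zeros(8, 3, 256, 256))

# num_workers=0 keeps data loading in the main process, so no /dev/shm is needed
# for passing batches between worker processes; the trade-off is slower loading.
dataloader = DataLoader(dataset, batch_size=4, num_workers=0, pin_memory=True)

for (batch,) in dataloader:
    pass  # run the segmentation network on `batch` here
```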

Some questions about training from scratch

Hi Egor Burkov, thanks for your great project. I am trying to train the meta-learning model from scratch, but the results are not as good as your released model's. Our dataset is much smaller (625 people) than VoxCeleb2. Is this the main reason the results look worse than yours? What is the most important factor affecting the model's performance?

preprocess_dataset.sh error, please help

!sudo bash /content/latent-pose-reenactment/utils/preprocess_dataset.sh

/content/latent-pose-reenactment/utils
Got 2 folders, will process from 0-th to 999999999-th
0 images /content/dataset//images/id00017 id00017
WARNING: /content/dataset//images-cropped/id00017/ already exists, contains 0 files
0% 0/2 [00:00<?, ?it/s]Traceback (most recent call last):
  File "crop_as_in_dataset.py", line 684, in <module>
    for input_image in tqdm(image_loader):
  File "/usr/local/lib/python3.7/dist-packages/tqdm/std.py", line 1104, in __iter__
    for obj in iterable:
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 517, in __next__
    data = self._next_data()
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 1199, in _next_data
    return self._process_data(data)
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
    data.reraise()
  File "/usr/local/lib/python3.7/dist-packages/torch/_utils.py", line 429, in reraise
    raise self.exc_type(msg)
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/collate.py", line 85, in default_collate
    raise TypeError(default_collate_err_msg_format.format(elem_type))
TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class 'NoneType'>

0% 0/2 [00:00<?, ?it/s]
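
(A note from a passer-by, not an official fix.) default_collate receiving a NoneType usually means the dataset's __getitem__ returned None for some frame, for example an unreadable image or one where the face detector found nothing. Under the assumption that silently skipping such frames is acceptable during preprocessing, a collate function like the sketch below can be passed to the DataLoader; collate_skip_none is a hypothetical helper, not part of this repository:

```python
import torch
from torch.utils.data.dataloader import default_collate

def collate_skip_none(batch):
    """Drop samples that came back as None, then fall back to the default collate."""
    batch = [sample for sample in batch if sample is not None]
    if not batch:
        # The whole batch was bad; return an empty tensor the caller can detect and skip.
        return torch.empty(0)
    return default_collate(batch)

# Hypothetical usage: DataLoader(dataset, batch_size=8, collate_fn=collate_skip_none)
```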

errr

How to continue training without replacing the previous training?

How can I continue training without overwriting the previous training?

I have images of a face split into part1 and part2.

First I fine-tune on the part1 images.

When that training finishes and I run drive.py, I can see the face from the part1 images.

Then I want to add the images from part2, so I load the saved .pth checkpoint and continue training from it with the part2 images.

The problem is that when I run drive.py afterwards, only the appearance of the part2 images comes through; it is as if part2 had overwritten part1.

What I expected was a wider variety of expressions, drawing on both part1 and part2.

How can I avoid overwriting the images or expressions from a previous training session?

I found that quite a few videos are missing from train.csv

Hello. I manually preprocessed the entire VoxCeleb2 dataset to train the provided model from scratch.

However, I found that many video hashes from VoxCeleb2 are missing from data/splits/train.csv.

Could you explain the criteria that determine whether a video is included in train.csv?

Anyway, thank you for providing an interesting paper.

Question about preprocessing Voxceleb2

hello shrubb!

Thanks for sharing your research and code. I am researching neural talking heads based on your work, and I would appreciate some advice.

I think this is an extension of #6.
When I downloaded the video files provided by the official VoxCeleb2 site (https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox2.html), the foreheads were cropped off, as shown below.

image

#6 showed the results as follows:
image

In the paper, the data preprocessing is described as follows (https://arxiv.org/pdf/2004.12000.pdf):

Our training dataset is a collection of YouTube videos from VoxCeleb2 [4]. There are on the order of 100,000 videos of about 6,000 people. We sampled 1 of every 25 frames from each video, leaving around seven million of total training images. In each image, we re-cropped the annotated face by first capturing its bounding box with the S3FD detector [43], then making that box square by enlarging the smaller side, growing the box’s sides by 80% keeping the center, and finally resizing the cropped image to 256 × 256.

When using the VoxCeleb2 data, did you download the videos directly from their YouTube URLs and preprocess them yourself, instead of using the video files provided officially? If so, could you please share the data?
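
For reference, the quoted paragraph translates into a fairly simple cropping rule: square the S3FD box by enlarging its smaller side, grow both sides by 80% around the center, and resize to 256×256. Below is a minimal sketch of that rule; the function name and the zero-padding at image borders are my assumptions, not the repository's actual code:

```python
import cv2
import numpy as np

def crop_like_paper(image, bbox, grow=0.8, out_size=256):
    """Crop `image` (H x W x 3) around an S3FD box (left, top, right, bottom)."""
    l, t, r, b = bbox
    cx, cy = (l + r) / 2, (t + b) / 2
    side = max(r - l, b - t)      # make the box square by enlarging the smaller side
    side *= 1 + grow              # grow the sides by 80%, keeping the center
    half = side / 2
    l, t = int(round(cx - half)), int(round(cy - half))
    r, b = int(round(cx + half)), int(round(cy + half))

    # Zero-pad where the enlarged box leaves the image (an assumption; the paper
    # does not specify the padding behaviour).
    h, w = image.shape[:2]
    crop = np.zeros((b - t, r - l, 3), dtype=image.dtype)
    src = image[max(t, 0):min(b, h), max(l, 0):min(r, w)]
    crop[max(t, 0) - t:max(t, 0) - t + src.shape[0],
         max(l, 0) - l:max(l, 0) - l + src.shape[1]] = src
    return cv2.resize(crop, (out_size, out_size))
```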

How to generalize more facial expressions and improve resolution?

Hello, great work. I would like to know how to get more expressions in the output of drive.py. I train with a variety of facial expressions, but the model seems to use only some of them; is there a parameter I can adjust to make it generate more expressions?
Also, is it possible to increase the output resolution, for example to 720p?

I got a TypeError when running drive.py; can you help me figure out what's wrong? The OpenCV version is the same as yours (4.3.0.36).

2020-11-18 15:02:37,009 - INFO - Will run on device 'cuda:0'
2020-11-18 15:02:37,009 - INFO - Loading checkpoint from '/data/home/wws/reenactment/latent-pose-reenactment/outputs/trump_cropface_1/checkpoints/model_02714323.pth'
2020-11-18 15:02:37,579 - INFO - Loading embedder 'unsupervised_pose_separate_embResNeXt_segmentation'
2020-11-18 15:02:41,966 - INFO - Loading generator 'vector_pose_unsupervised_segmentation_noBottleneck'
2020-11-18 15:02:42,804 - INFO - Loading discriminator 'no_landmarks'
2020-11-18 15:02:43,519 - INFO - Loading dataloader 'voxceleb2_segmentation_nolandmarks'
2020-11-18 15:02:43,942 - INFO - Determining the 'val' data source
2020-11-18 15:02:43,943 - INFO - Checking if '/data/home/wws/reenactment/latent-pose-reenactment/dataset/biden_videos/images-cropped/crop.mp4' is a directory...
2020-11-18 15:02:43,943 - INFO - Yes, it is; the only val identity will be 'crop.mp4'
2020-11-18 15:02:43,946 - INFO - This dataset has 326 images
2020-11-18 15:02:43,946 - INFO - Setting args.num_labels to 1 because we are fine-tuning or the model has been fine-tuned
2020-11-18 15:02:43,952 - WARNING - Could not find the '.npy' file with bboxes, will assume the images are already cropped
2020-11-18 15:02:43,952 - INFO - args.inference is set, so switching off all augmentations
2020-11-18 15:02:43,953 - INFO - This process will receive a dataset with 326 samples
0%| | 0/326 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "drive.py", line 94, in <module>
    result = torch_to_opencv(data_dict['fake_rgbs'][0])
  File "drive.py", line 92, in torch_to_opencv
    return cv2.cvtColor(image, cv2.COLOR_RGB2BGR, dst=image)
TypeError: Expected Ptr<cv::UMat> for argument 'dst'
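
(Just a guess from the traceback, not the author's fix.) Recent OpenCV Python bindings are strict about the dst argument and reject arrays that are not contiguous, UMat-compatible buffers, which is easy to hit when the array was just converted from a torch tensor. Dropping the in-place dst and returning a new array avoids the issue. A minimal sketch of an equivalent torch_to_opencv, assuming the input is a CHW float tensor with values in [0, 1]:

```python
import cv2
import numpy as np
import torch

def torch_to_opencv(image: torch.Tensor) -> np.ndarray:
    # Assumption: `image` is a CHW float tensor with values in [0, 1].
    image = (image.clamp(0, 1) * 255).to(torch.uint8)
    image = image.permute(1, 2, 0).cpu().numpy()   # CHW -> HWC
    image = np.ascontiguousarray(image)            # OpenCV wants contiguous memory
    return cv2.cvtColor(image, cv2.COLOR_RGB2BGR)  # no `dst=`, return a new array
```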

only one image for finetune

Hi @shrubb, thanks for your great work. Training runs normally for me, but I have some questions:

First, as mentioned above, when fine-tuning on only one image, the identity and pose extractors use the same image, and the results still look normal. Since training on one image takes almost no time, does this mean the meta-learning process has effectively achieved many-to-many pose reenactment?

Second, I also tried adding more images to training, since a single image cannot guarantee identity and resolution, but more images result in less accurate expressions, so there is a trade-off between identity and pose. Are there other methods to deal with this trade-off?

Hope you can give some advice, thanks~

assert self.video_capture.isOpened() AssertionError on Ubuntu 20.04

After configuring the project as described in https://github.com/shrubb/latent-pose-reenactment/blob/master/INSTALL.md, I run the following command:

python utils/crop_as_in_dataset.py SOURCE="./DATASET_ROOT/videos/Jong/Jong.mp4" DESTINATION="./DATASET_ROOT/videos/Jong/outputs/"

Then I am getting the following exception:

Traceback (most recent call last):
  File "/home/admin/latent-pose-reenactment/utils/crop_as_in_dataset.py", line 663, in <module>
    image_reader = ImageReader.get_image_reader(args.source)
  File "/home/admin/latent-pose-reenactment/utils/crop_as_in_dataset.py", line 465, in get_image_reader
    return OpencvVideoCaptureReader(source)
  File "/home/admin/latent-pose-reenactment/utils/crop_as_in_dataset.py", line 534, in __init__
    assert self.video_capture.isOpened()
AssertionError

Please help me solve this exception.
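
(A debugging suggestion rather than a fix.) The assertion fires when cv2.VideoCapture cannot open the file at all, which is usually a wrong path or an OpenCV build without FFmpeg support, rather than a bug in crop_as_in_dataset.py. A small standalone check, with the path below taken from your command:

```python
import cv2

path = "./DATASET_ROOT/videos/Jong/Jong.mp4"   # same path as in your command
cap = cv2.VideoCapture(path)
print("opened:", cap.isOpened())

ok, frame = cap.read()
print("first frame read:", ok, None if frame is None else frame.shape)
cap.release()

# Check whether this OpenCV build has FFMPEG compiled in (needed for .mp4 decoding).
for line in cv2.getBuildInformation().splitlines():
    if "FFMPEG" in line:
        print(line.strip())
```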
