

everybodydancenow's Issues

EverybodyDanceNow Colab

So it works with these dependencies?

--python3.6
--torch 1.6.0+cu101
--torchvision 0.7.0+cu101
--tensorflow 1.15.0

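If anyone wants to try those pins, here is a hedged sketch of the install commands (the wheel index URL is the standard PyTorch one; adjust for your own CUDA setup — this is an assumption, not a tested Colab recipe):

```shell
# Sketch only: pin the versions listed above (Python 3.6 environment assumed)
pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 \
    -f https://download.pytorch.org/whl/torch_stable.html
pip install tensorflow==1.15.0
```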

Synthesized Video Dataset

Hi,
Could you please upload a dataset of synthesized frames, or just some of the videos produced by this algorithm? It would be very important for my project.
Thank you

About evaluation metrics

Hi, I am an undergraduate from Peking Univ. currently reimplementing the project. Everything is going fine, but I really need the method/code used to evaluate the results, specifically the SSIM/LPIPS evaluation metrics. Besides, I'd like to know how you crop the face and body regions for all test images. Thanks!

Feel free to contact me by email at [email protected]

Issues in Dataset Statistics

I downloaded the dataset from here, as mentioned on the project website, and noticed some discrepancies between the numbers in the paper and what I found in the dataset:
1] In Section 3: Dataset Collection of the appendix, the paper mentions that the first 20% of the filmed footage is used for training and the last 80% for testing. However, when I counted the frames (for the 5 subjects) in the downloaded dataset, I see the following numbers:
Subject     Train   Test   Train/Test
Subject 1:  11642   3598   3.23
Subject 2:  10623   2794   3.80
Subject 3:   9948   1848   5.38
Subject 4:  23410   4546   5.15
Subject 5:  25214   4998   5.04
Why the mismatch? Is it a typo in the paper?

2] In the same section, the paper mentions that for every subject, 120fps videos were shot, with the duration of each video being somewhere between 8 to 17 minutes.
However, looking at the number of frames, the total runtime of all the footage is only 13 min 41 s (assuming 120 fps), whereas one would expect at least around 40 minutes. Is only part of the filmed footage used in the final dataset?
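The 13 min 41 s figure can be sanity-checked directly from the table; a quick pure-Python sketch (frame counts copied from above):

```python
# Per-subject frame counts copied from the table above (5 subjects)
train = [11642, 10623, 9948, 23410, 25214]
test = [3598, 2794, 1848, 4546, 4998]

total = sum(train) + sum(test)               # 98621 frames in total
minutes, seconds = divmod(total // 120, 60)  # assuming 120 fps throughout
print(total, minutes, seconds)               # 98621 13 41
```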

Number of Training Epochs

The paper gives the number of epochs to train for each stage as follows:
5 epochs for global stage
30 epochs for local stage
5 epochs for FaceGAN

For each stage, the code divides the training epochs into two parts: niter (epochs at the starting learning rate) and niter_decay (epochs at a decaying learning rate). For a given stage, how should the total epochs (say 30 for the local stage) be divided between these two parts? Equally, or in some other ratio?
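For context, pix2pixHD-style training (which this repo builds on) holds the learning rate constant for niter epochs and then decays it linearly to zero over niter_decay epochs. A sketch of that schedule follows; note that the equal 15/15 split shown is only an assumption, not something the paper specifies:

```python
def lr_at_epoch(epoch, base_lr=0.0002, niter=15, niter_decay=15):
    """pix2pixHD-style schedule: constant for the first `niter` epochs,
    then linear decay to zero over the next `niter_decay` epochs."""
    if epoch <= niter:
        return base_lr
    return base_lr * (1 - (epoch - niter) / niter_decay)
```

With this shape, any niter/niter_decay split summing to the paper's epoch count is consistent with the paper; the split only controls how long the plateau lasts before decay begins.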

Why does training require Tensorflow?

I have PyTorch 1.11.0 with CUDA 10.2 installed, but when I attempt to run training, the process dies due to a missing TensorFlow dependency. Where should the configuration be changed to ensure PyTorch is used? Perhaps this code does not work with PyTorch 1.11?

Could this be applied to face motion transfer?

Thank you for your excellent work!
It seems 'everybody dance now' could be applied to face motion transfer? If so, do I just need to replace the skeleton frames with face landmark frames in 'sample_data/train/train_label'? Looking forward to your reply.

How to use detected poses directly

I want to build a set of detected poses from some source dancing videos and cache them.

Then, when somebody gives me a photo, I use a detected pose from the cache to drive the photo to dance.

Can we use multiple subjects for training?

Hi, @carolineec Thanks for sharing your code! But I have some questions about your training dataset AlignedDataset (in aligned_dataset.py).

It seems that it only allows using one subject for training. In this line of code, one can see that whether we have the next sample depends on the size of the dataset. So I guess we can only use one subject per training dataset. Is that correct? If we want to test on another subject, do we need to train the model again?

Thanks for your time!
Best
Haomiao
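For what it's worth, one hedged workaround (not from this repo) would be to concatenate one dataset per subject, the way torch.utils.data.ConcatDataset does. A minimal pure-Python illustration of the indexing idea (all names below are mine, not the repo's):

```python
# Minimal ConcatDataset-style wrapper: routes a global index to the
# right per-subject dataset. Illustrative only; real code would wrap
# AlignedDataset instances and feed a pix2pixHD-style data loader.
class ConcatSubjects:
    def __init__(self, datasets):
        self.datasets = datasets

    def __len__(self):
        return sum(len(d) for d in self.datasets)

    def __getitem__(self, idx):
        for d in self.datasets:
            if idx < len(d):
                return d[idx]
            idx -= len(d)
        raise IndexError(idx)

# usage: two toy "subjects", each a list of frame labels
subjects = ConcatSubjects([["s1_f0", "s1_f1"], ["s2_f0"]])
```

That said, as far as I can tell the paper trains a person-specific model per target subject, so a single model trained on multiple subjects would likely blur appearances; retraining per subject seems to be the intended workflow.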

Can you share the pytorch version ?

I am getting this error while training the subject1 local stage and the subject2 global stage:

AttributeError: 'int' object has no attribute 'numel'

Pose Normalization code

Hi, has anyone managed to run the script graph_posenorm.py and obtain good results like those shown in the paper?
I have tried it, but it doesn't work at all. Could someone suggest how to do it?
I'm trying to modify it, but I still don't get good results.

Thank you in advance
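For reference, the paper's pose normalization (as I read it) picks a per-frame scale and translation by linearly mapping where the source's ankles sit between their close/far extremes into the target's corresponding range. A rough pure-Python sketch of just that interpolation step (function and argument names are mine, not from graph_posenorm.py):

```python
def interp_to_target(src_ankle_y, s_far, s_close, t_far, t_close):
    """Map a source ankle height to the target's ankle range.

    src_ankle_y sits somewhere between the source's far (s_far) and
    close (s_close) extremes; return the corresponding height between
    the target's extremes. All values are image-space y coordinates.
    """
    alpha = (src_ankle_y - s_far) / (s_close - s_far)
    return t_far + alpha * (t_close - t_far)
```

The full script also has to smooth these estimates over time and rescale the whole skeleton, so this is only the core idea, not a drop-in fix.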

Change output when inferencing to over 256x256 and to use rectangular aspect ratio?

I'm training with

    python3 train_fullts.py \
        --name ./model_global/ \
        --dataroot ./dataset/train/ \
        --checkpoints_dir ./checkpoints/ \
        --loadSize 512 \
        --no_instance \
        --no_flip \
        --tf_log \
        --label_nc 6 \
        --resize_or_crop scale_width \
        --save_latest_freq 100

and running inference with

    python3 test_fullts.py \
        --name model_global \
        --dataroot ./dataset/test/ \
        --checkpoints_dir ./checkpoints/ \
        --results_dir ./result/ \
        --loadSize 512 \
        --no_instance \
        --how_many 10000 \
        --label_nc 6 \
        --aspect_ratio 2.0

but the inference result is always a square 256x256 image, no matter how I change the options. How do I output at a non-square aspect ratio and at a resolution greater than 256?
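A hedged guess at the cause (flag names are from stock pix2pixHD; this fork may differ): the test command omits the --resize_or_crop scale_width used at training time, so the default preprocessing path may crop or resize to a default fineSize of 256. Something like the following might be worth trying, as an untested sketch:

```shell
# Hypothetical; assumes pix2pixHD-style options are honored by this fork
python3 test_fullts.py --name model_global --dataroot ./dataset/test/ \
    --checkpoints_dir ./checkpoints/ --results_dir ./result/ \
    --loadSize 512 --fineSize 512 --resize_or_crop scale_width \
    --no_instance --how_many 10000 --label_nc 6 --aspect_ratio 2.0
```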

requirements.txt ?

I'm getting this error while training

---------- Networks initialized -------------
C:\Users\saira\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\_compile.py:24: UserWarning: optimizer contains a parameter group with duplicate parameters; in future, this will cause an error; see github.com/pytorch/pytorch/issues/40967 for more information
  return torch._dynamo.disable(fn, recursive)(*args, **kwargs)
model [Pix2PixHDModel] was created
create web directory WHERE_TO_SAVE_CHECKPOINTS\MY_MODEL_NAME_global\web...
Traceback (most recent call last):
  File "D:\MTP\EverybodyDanceNow\train_fullts.py", line 63, in <module>
    losses, generated = model(Variable(data['label']), Variable(data['next_label']), Variable(data['image']), \
  File "C:\Users\saira\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\saira\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\saira\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\parallel\data_parallel.py", line 183, in forward
    return self.module(*inputs[0], **module_kwargs[0])
  File "C:\Users\saira\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\saira\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\MTP\EverybodyDanceNow\models\pix2pixHD_model_fullts.py", line 201, in forward
    I_0 = self.netG.forward(input_concat)
  File "D:\MTP\EverybodyDanceNow\models\networks.py", line 228, in forward
    return self.model(input)
  File "C:\Users\saira\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\saira\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\saira\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\container.py", line 217, in forward
    input = module(input)
  File "C:\Users\saira\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\saira\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\saira\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\instancenorm.py", line 87, in forward
    return self._apply_instance_norm(input)
  File "C:\Users\saira\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\instancenorm.py", line 36, in _apply_instance_norm
    return F.instance_norm(
  File "C:\Users\saira\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\functional.py", line 2525, in instance_norm
    _verify_spatial_size(input.size())
  File "C:\Users\saira\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\functional.py", line 2493, in _verify_spatial_size
    raise ValueError(f"Expected more than 1 spatial element when training, got input size {size}")
ValueError: Expected more than 1 spatial element when training, got input size torch.Size([1, 512, 1, 1])

Perhaps it is because of a different PyTorch version. Could I get a requirements.txt file so I can run it correctly?
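Until the authors publish one, here is a hedged starting point for a requirements.txt, based on the versions reported working in the Colab issue above (the traceback shows a much newer torch with _dynamo, which this codebase predates, so pinning older versions seems like the first thing to try):

```
torch==1.6.0
torchvision==0.7.0
tensorflow==1.15.0
dominate
```

dominate is assumed here because the "create web directory" line suggests the pix2pixHD HTML visualizer is in use; drop it if your fork doesn't need it.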

How to implement this project

Hi,
I'm a beginner and I don't really understand the process (in the README) for this project. Can someone give me advice on how to train my own model (e.g. where to place my source & target videos, and which files to run) and get results from it?

Thanks!

Is detecting part included?

Hi, @carolineec Thank you for your sharing. I noticed you mentioned that this work includes detecting whether a video is real or a forgery produced by EDN. Is that included in your open-source code? If so, could you indicate its location; if not, could you add it?
Thanks for your time!
Best
IItaly
