Comments (8)

tcwang0509 commented on August 10, 2024

You can take a look at the provided example dataset. Your dataset should be in the same format.

yuanzhou15 commented on August 10, 2024

I tried looking into the example datasets. Would I have the same architecture if I want an image-to-image transformation? Would I just need Train_A, Train_B and then Test_A and Test_B, and then just specify data-root?
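For reference, here is a minimal sketch of the flat folder layout described in this question, assuming lowercase train_A/train_B/test_A/test_B names (as used later in this thread) and that the dataset root is the directory passed to the repo's dataroot option; check the option names against the repo's options files before relying on them:

```python
from pathlib import Path

# Hypothetical dataset root; this is the directory you would point the
# dataroot option at when training or testing.
dataroot = Path("datasets/my_dataset")

# Flat layout with paired folders: *_A holds the input (label/edge/pose)
# images, *_B holds the corresponding target frames.
for split in ("train_A", "train_B", "test_A", "test_B"):
    (dataroot / split).mkdir(parents=True, exist_ok=True)

print(sorted(p.name for p in dataroot.iterdir()))
```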

tcwang0509 commented on August 10, 2024

If you just want image-to-image translation instead of video-to-video, you can use this repo instead: https://github.com/NVIDIA/pix2pixHD/

yuanzhou15 commented on August 10, 2024

We are currently using pix2pix as well, but want to see if vid2vid will yield better results, since our pictures are a sequence of images. I saw that in the sample datasets there are more than just .png files. I haven't had a chance to run vid2vid yet, but will we be able to run it with just train_A and train_B folders containing only pictures?

tcwang0509 commented on August 10, 2024

Yes, make sure the images in the two folders are in corresponding order (i.e., the first image in train_A corresponds to the first image in train_B), and it should work.
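A minimal sanity check for the corresponding-order requirement above, assuming a flat train_A/train_B layout in which frames pair up by sorted filename (the folder names come from this thread; consult the repo's data loader for the exact matching rule):

```python
from pathlib import Path

dataroot = Path("datasets/my_dataset")  # hypothetical dataset root
frames_a = sorted((dataroot / "train_A").glob("*.png"))
frames_b = sorted((dataroot / "train_B").glob("*.png"))

# Both folders must contain the same number of frames, and the i-th file
# in train_A must be the counterpart of the i-th file in train_B.
assert len(frames_a) == len(frames_b), "train_A and train_B frame counts differ"
for a, b in zip(frames_a, frames_b):
    print(f"{a.name}  <->  {b.name}")
```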

ChristopherLu commented on August 10, 2024

I'm also trying to see if I can run vid2vid on my own dataset. Are the sequence sub-folders (e.g., seq0001 for the Cityscapes dataset) necessary? Or can I simply group all image sequences under the train_A folder?
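The thread does not settle whether the sub-folders are required, but for illustration, here is a sketch of grouping frames into Cityscapes-style seq0001/seq0002/... sub-folders under train_A; the clip003_frame0042.png naming scheme is made up for this example:

```python
from pathlib import Path
import shutil

src = Path("all_frames_A")                 # hypothetical flat dump of input frames
dst = Path("datasets/my_dataset/train_A")  # hypothetical dataset folder

# Assume filenames like clip003_frame0042.png and group frames by their
# clip prefix, one seqNNNN sub-folder per clip.
clips = sorted({p.name.split("_")[0] for p in src.glob("*.png")})
for i, clip in enumerate(clips, start=1):
    seq_dir = dst / f"seq{i:04d}"
    seq_dir.mkdir(parents=True, exist_ok=True)
    for frame in sorted(src.glob(f"{clip}_*.png")):
        shutil.copy(frame, seq_dir / frame.name)
```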

kartikJ-9 commented on August 10, 2024

I am trying to replicate the results of the pose model. I trained the model on Colab. I have a sequence of images and an OpenPose keypoint JSON file corresponding to each frame, which I need to provide to get the generated video for my use case... But I don't see the Train_A and Train_B folders. Since I am trying it in Colab, the PyTorch variables in the .py files won't be accessible from the Colab command line. Please help me here. Thanks in advance.
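Not an answer from the maintainers, but as a starting point for working with the per-frame keypoints, here is a sketch of reading OpenPose JSON output, which stores flat [x, y, confidence] triplets under people[i]["pose_keypoints_2d"] in recent OpenPose releases (older releases use "pose_keypoints"); the folder path is hypothetical:

```python
import json
from pathlib import Path

keypoint_dir = Path("openpose_json")  # hypothetical folder of per-frame JSON files

for frame_json in sorted(keypoint_dir.glob("*.json")):
    data = json.loads(frame_json.read_text())
    for person in data.get("people", []):
        # Flat list of x, y, confidence values; the key is named
        # "pose_keypoints" in older OpenPose versions.
        kp = person.get("pose_keypoints_2d", [])
        points = list(zip(kp[0::3], kp[1::3], kp[2::3]))
        print(frame_json.name, len(points), "keypoints for one person")
```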

songyn95 commented on August 10, 2024

@tcwang0509 @yuanzhou15
At present, I am also doing image-to-image translation. I have tried pix2pix and pix2pixHD, but since my images are consecutive video frames, I also want to try the vid2vid method. I want to ask the following questions:

  1. How well does this method perform at test time?
  2. The continue_train option seems to be unavailable, so the model needs to be retrained from scratch every time. Have you ever encountered this?
  3. If the use_real_img option is used, should the first frame be in the test_B folder? Must the number of images in test_B correspond to the number in test_A?

Looking forward to your reply!
