
Comments (6)

patrickjonesdotca commented on August 15, 2024

I also tried:

!python /content/Text-To-Video-Finetuning/inference.py --model /content/Text-To-Video-Finetuning/models/model_scope_diffusers --prompt "cat in a space suit"

and had the same output.


ExponentialML commented on August 15, 2024

Hey there. After training, are you pointing to the trained model?

By default, it should be placed at the script root under ./outputs/train_<date>
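
For example, a minimal sketch reusing the inference command from above, with train_<date> left as a placeholder for the actual timestamped folder your run produced:

!python /content/Text-To-Video-Finetuning/inference.py --model /content/Text-To-Video-Finetuning/outputs/train_<date> --prompt "cat in a space suit"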


dvschultz commented on August 15, 2024

What are you trying to view the video in? I've found there's sometimes something odd about the codec, and the file needs to be opened in an application like VLC.
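
If VLC plays the file but other players don't, one workaround is re-encoding it to H.264. A minimal sketch, assuming the generated file is named output.mp4 (a hypothetical name; adjust the paths to your actual output):

!ffmpeg -i output.mp4 -c:v libx264 -pix_fmt yuv420p output_h264.mp4

The -pix_fmt yuv420p flag matters here, since many players refuse pixel formats other than yuv420p.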


patrickjonesdotca commented on August 15, 2024

> Hey there. After training, are you pointing to the trained model?
>
> By default, it should be placed at the script root under ./outputs/train_<date>

Yes, I did try the trained model; I trained two different ones, in fact.
Then I thought I would do a sanity check and try to generate an image with the installed "base" model, which is when I filed this report.

Am I trying to generate an image correctly, immediately after install, with this line?

!python /content/Text-To-Video-Finetuning/inference.py --model /content/Text-To-Video-Finetuning/models/model_scope_diffusers --prompt "cat in a space suit"

Because if that command is incorrect, I've been on the wrong track.


polyware-ai commented on August 15, 2024

If you have lots of videos, you might need to train for longer. How many steps did you train for, and on how many videos? 2500 steps is not enough if you are training on hundreds of videos, each with a different prompt.
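
As a rough sketch of training for longer, assuming the repo's train.py takes a --config flag pointing at a YAML file (the config path here is hypothetical; check the repo's README for the exact entry point), you would raise max_train_steps in the config, assuming it exposes that setting, and rerun:

!python /content/Text-To-Video-Finetuning/train.py --config /content/Text-To-Video-Finetuning/configs/my_config.yaml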


patrickjonesdotca commented on August 15, 2024

> If you have lots of videos, you might need to train for longer. How many steps did you train for, and on how many videos? 2500 steps is not enough if you are training on hundreds of videos, each with a different prompt.

I was actually using images to train the model, and there were only about a dozen of them, so I went the opposite way.

But the problem, as I see it, is that one should be able to generate a clip with the stock inference model before running a training session at all. I ran into issues with that as well, hence this (possibly errant) bug report.

