
Comments (8)

phonygene commented on August 28, 2024

Oh, it turned out that scale 0 just worked fine.
And as the scale increases, the differences between output images drop sharply.
The images generated at scale 1 have a slight shift effect, and the images generated at scale 3 are almost identical.
So, at this rate, there's no need to train beyond scale 3 at all.

This is pretty amazing.
Thanks for sharing your elegant work.

from singan.
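The behaviour described above (lots of variety when sampling from scale 0, near-identical outputs when starting deeper) matches how a SinGAN-style pyramid injects noise scale by scale. Below is a minimal sketch of that idea, not the repository's actual code: the generator list, the `start_scale` convention, and the toy `tanh` "generators" are all assumptions for illustration.

```python
import numpy as np

def upsample(img, h, w):
    # nearest-neighbour resize -- crude, but enough for a sketch
    ys = (np.arange(h) * img.shape[0] / h).astype(int)
    xs = (np.arange(w) * img.shape[1] / w).astype(int)
    return img[np.ix_(ys, xs)]

def sample_pyramid(generators, base_hw, start_scale=0, scale_factor=4/3, seed=0):
    """Run the pyramid coarse to fine; inject fresh noise only at
    scales >= start_scale (a hypothetical convention for this sketch)."""
    rng = np.random.default_rng(seed)
    h, w = base_hw
    out = np.zeros(base_hw)
    for n, g in enumerate(generators):
        out = upsample(out, h, w)                 # bring previous scale up to this size
        noise = rng.standard_normal((h, w)) if n >= start_scale else np.zeros((h, w))
        out = g(out + noise)
        h, w = int(h * scale_factor), int(w * scale_factor)
    return out

# Toy per-scale "generators": a bounded nonlinearity stands in for real convnets.
gens = [np.tanh] * 4
varied = sample_pyramid(gens, (25, 25), start_scale=0)          # noise at every scale
frozen = sample_pyramid(gens, (25, 25), start_scale=len(gens))  # no noise anywhere
```

With noise injected from scale 0 every sample differs; with injection pushed past the last scale the pipeline becomes deterministic, which is one plausible reading of why high starting scales produced identical images.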

sno6 commented on August 28, 2024

Having the same issue running on Google Colab: training seems to stall out at scale 8: [1999/2000].


tamarott commented on August 28, 2024

This seems to be a memory problem. When the number of scales is large, there are more model parameters to store.

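A back-of-the-envelope sketch of why memory grows with the number of scales: every scale adds its own generator and discriminator, and (as I understand the reference implementation's defaults; treat the exact numbers as assumptions) the channel width doubles every 4 scales, capped at 128. The per-scale layout below is assumed, not copied from the repository.

```python
def conv_params(cin, cout, k=3):
    # weights + bias of a single k x k conv layer
    return cin * cout * k * k + cout

def scale_params(nfc, num_layer=5, im_ch=3, k=3):
    # head conv, (num_layer - 2) body convs, tail conv
    return (conv_params(im_ch, nfc, k)
            + (num_layer - 2) * conv_params(nfc, nfc, k)
            + conv_params(nfc, im_ch, k))

def total_params(num_scales, nfc_init=32, cap=128):
    total = 0
    for s in range(num_scales):
        nfc = min(nfc_init * 2 ** (s // 4), cap)  # width doubles every 4 scales
        total += 2 * scale_params(nfc)            # generator + discriminator, roughly
    return total
```

Under these assumptions, a 9-scale run holds several times more parameters than a 5-scale one, and all of them (plus optimizer state and activations) stay resident at once, which is consistent with training stalling near the top scale on a nearly full GPU.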

phonygene commented on August 28, 2024

Sorry, I fat-fingered (clicked the Close button accidentally).

I've checked GPU memory usage while training.
It was indeed nearly fully loaded when training got stuck.

@tamarott
Do you have any suggestions?
Is it possible to reduce the batch size or something to avoid this?
Or to restart training from the last checkpoint?

I read your paper. There's an example with The Starry Night.
It seems to work well at scale 8.
But when I tried random samples at scale 8, it just generated 50 images that are exactly identical.

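On the restart question: SinGAN writes per-scale checkpoints as it trains, so a generic "skip scales whose checkpoint already exists" pattern would let a crashed run resume instead of retraining from scale 0. This is a hedged sketch of that pattern only; the file names and the `train_scale()` stub are hypothetical, not the repository's actual layout.

```python
import os
import pickle
import tempfile

def train_scale(n):
    # stand-in for actually training one scale's model
    return {"scale": n, "weights": [0.0] * 4}

def train_all(num_scales, out_dir):
    models = []
    for n in range(num_scales):
        path = os.path.join(out_dir, f"G_scale{n}.pkl")
        if os.path.exists(path):            # resume: skip already-finished scales
            with open(path, "rb") as f:
                models.append(pickle.load(f))
            continue
        m = train_scale(n)
        with open(path, "wb") as f:         # checkpoint immediately after the scale
            pickle.dump(m, f)
        models.append(m)
    return models

out = tempfile.mkdtemp()
first = train_all(5, out)
second = train_all(5, out)  # second run only loads; nothing is retrained
```

Because each scale is trained sequentially and frozen afterwards, resuming at the last finished scale loses no work.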

JonathanFly commented on August 28, 2024

With 16GB of GPU memory, the highest-resolution output I have achieved from the main training script is 667 x 413. Does that seem right? Would changing the aspect ratio let me squeeze more pixels into the model, so I can also get more in the final random samples?

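The number of scales, and hence the memory bill, is driven by how many geometric downscaling steps fit between the output's long side and the coarsest scale. A small sketch, assuming SinGAN-style defaults of scale_factor ≈ 0.75 and a coarsest side around 25 px (these defaults are my understanding, not guaranteed):

```python
import math

def num_scales(max_side, min_side=25, scale_factor=0.75):
    # smallest n with max_side * scale_factor**n <= min_side
    return math.ceil(math.log(min_side / max_side, scale_factor))

colab_default = num_scales(250)  # the paper-scale setting
big_output = num_scales(667)     # the 667 x 413 run mentioned above
```

Under these assumptions a 250 px output needs 9 scales (indices 0 to 8, matching the "scale 8" runs in this thread), while 667 px needs about 12, and each extra scale is a wider network than the last. The long side alone sets the scale count, so changing the aspect ratio mostly changes per-scale activation size, not the number of scales.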

rickdotta commented on August 28, 2024

@phonygene What do you mean by not needing to train beyond scale 3? Is it possible to generate arbitrarily sized images using just scale 3? How?

Thank you!


phonygene commented on August 28, 2024

@rickdotta As I said: in my case, training at scales larger than 3 only generated identical images, so I tried the scale 0 model and found that it worked fine. I don't understand why it behaves so differently from the paper, but at least it saves me a lot of time (troll face).

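For what it's worth, one mechanical reading of "use the scale 0 model": if training produces one generator per scale (coarse to fine), then generating from a smaller maximum scale is just running a truncated prefix of that list. The names below are hypothetical, not the repository's API; the additive lambdas merely make the truncation visible.

```python
def generate(generators, seed_value, stop_after=None):
    # run only the first `stop_after` scales; None means run them all
    out = seed_value
    for g in generators[:stop_after]:
        out = g(out)
    return out

# Toy stand-ins for 9 trained per-scale generators: each just adds its index.
gens = [lambda x, k=k: x + k for k in range(9)]

coarse_only = generate(gens, 0, stop_after=1)  # scale 0 only
full = generate(gens, 0)                       # all 9 scales
```

Stopping training early would then amount to never creating the later entries of that list in the first place.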

xivh commented on August 28, 2024

@phonygene How do you stop training at a smaller scale?

