
Comments (8)

titu1994 commented on June 27, 2024

The requirement is not dropped. The Gram matrix is always a square matrix.

Therefore, to compute the Gram matrix I resize the image to a square shape, then compute the VGG loss, style loss, and so on, optimizing with L-BFGS. The final output is a square image of the same size as the input.

I then resize this square output to preserve the aspect ratio of the original image. If you wish to see a square image, set --maintain_aspect_ratio="False"
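For reference, here is a minimal sketch of why the Gram matrix is square regardless of the image's shape (an illustrative NumPy version, not the exact code from this repo): the feature map's spatial dimensions are flattened away, leaving a channels x channels matrix.

```python
import numpy as np

def gram_matrix(features):
    # Flatten the spatial dimensions: (h, w, c) -> (h*w, c).
    h, w, c = features.shape
    flat = features.reshape(h * w, c)
    # The result is (c, c): square no matter the image's aspect ratio.
    return flat.T @ flat

# A non-square feature map still gives a square Gram matrix:
feats = np.random.rand(50, 80, 64)   # 50x80 spatial grid, 64 channels
print(gram_matrix(feats).shape)      # (64, 64)
```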

ink1 commented on June 27, 2024

Sorry, that was not quite what I wanted to ask. I'm asking about the image aspect ratio. I think it can be arbitrary throughout all the processing steps.

Of course the Gram matrix is square, because it is a cross-correlation matrix. What I don't understand is why you require the width and height of a processed image to be equal. Line 130:

assert img_height == img_width, 'Due to the use of the Gram matrix, width and height must match.'

When I remove this requirement the code still works (as it should!).

titu1994 commented on June 27, 2024

That's a redundant check. By default I resize the image to the Gram matrix size (400x400) and then perform this check. The check can safely be removed, since the image is rescaled to the Gram matrix size just above it.
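Concretely, the pattern being described looks something like this (a sketch with an illustrative helper name, not the exact source):

```python
from keras.preprocessing.image import load_img, img_to_array

img_size = 400  # the default "gram matrix size"

def preprocess_image(path, width=img_size, height=img_size):
    # Both dimensions are forced to img_size here...
    img = load_img(path, target_size=(height, width))
    return img_to_array(img)

x = preprocess_image('content.jpg')
img_height, img_width = x.shape[0], x.shape[1]
# ...so this assert can never fail and is redundant:
assert img_height == img_width, 'Due to the use of the Gram matrix, width and height must match.'
```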

ink1 commented on June 27, 2024

Yes, I can see that. What I'm asking is why you are doing

img_width = img_height = args.img_size

instead of, for example, something like

img_width = args.img_width
img_height = args.img_height

(I also realise that you do not have the two options above at the moment.)

What does the Gram matrix have to do with the aspect ratio of your image?
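For concreteness, the suggestion amounts to something like the following argparse setup (hypothetical flag names; the script currently exposes only a single size option):

```python
import argparse

parser = argparse.ArgumentParser(description='Neural style transfer')
# Hypothetical separate options instead of a single --img_size:
parser.add_argument('--img_width', type=int, default=400,
                    help='Width the content image is resized to.')
parser.add_argument('--img_height', type=int, default=400,
                    help='Height the content image is resized to.')
args = parser.parse_args()

img_width = args.img_width
img_height = args.img_height
```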

titu1994 commented on June 27, 2024

You are correct that the Gram matrix has nothing to do with the aspect ratio of the image.

In the original Keras script on which this one is based, the author asserted that the imported content and style images were exactly the same size. (The comment there about the image needing to match the Gram matrix size has since been removed, so I will do the same.)

To test, I removed that check and performed style transfer with a 400 x 640 image as both content and style. The result was worse, even after several hundred iterations.

For comparison, the first image is with a 400x400 content image and style image:

[image: moon lake, 400x400]

whereas the second image is with a 400x640 image as input:

[image: moon lake, 400x640]

Notice that the upper image has sharp features similar to the turbulence pattern from The Starry Night, whereas the bottom image has less distinct patterns and patches of poor style transfer, especially in the lower-left portion of the image.

All things considered, I don't think I will preserve the aspect ratio of the loaded content and style image.

ink1 commented on June 27, 2024

I don't know how you are getting these results. I observe a lot less colour difference between these two resolutions using the default settings over 10 iterations (VGG16). I manually rescaled both the input and style images to 640x400 and 400x400 for the two tests, in order to avoid any rescaling inside the code. Can you try the same?

640x400:

[image: out night_at_iteration_10]

400x400 upscaled to 640x400:

[image: out 400_at_iteration_10, 640]
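For anyone reproducing this test, the manual rescaling outside the script can be done with Pillow along these lines (a sketch; the filenames are placeholders):

```python
from PIL import Image

# Resize content and style to identical fixed dimensions up front,
# so no rescaling happens inside the style-transfer code.
for name in ('content.jpg', 'style.jpg'):
    img = Image.open(name)
    img.resize((640, 400), Image.LANCZOS).save('640x400_' + name)
    img.resize((400, 400), Image.LANCZOS).save('400x400_' + name)
```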

titu1994 commented on June 27, 2024

The results seem close now. I'm travelling for a few days and won't have access to my laptop.

The 640x400 version seems to preserve the text at the top right far better, with similar style transfer in the other regions. My results must be due to some other error.

Feel free to add a PR.

titu1994 commented on June 27, 2024

@ink1, I found the mistake. I used the older Network.py to test my image (since it is faster), but it sacrifices quality for speed. When I switched to INetwork.py I was able to replicate your results.

As of commit 6a08eaa, the content and style images are scaled to the content aspect ratio before being passed to the VGG network. This drastically increases execution time (INetwork used to take 14 seconds per epoch, now it takes 23 seconds), but delivers more precise results.

Thanks for raising the issue. The results now seem closer to the DeepArt.io results.
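The rescaling described above amounts to something like the following (a sketch of the idea, not the code from commit 6a08eaa; it assumes a fixed target width):

```python
from PIL import Image

TARGET_WIDTH = 400

def content_size(content_path, target_width=TARGET_WIDTH):
    # Target (width, height) preserving the content image's aspect ratio.
    w, h = Image.open(content_path).size
    return target_width, int(round(target_width * h / w))

# Both content and style are resized to the content's aspect ratio
# before being fed to the VGG network:
size = content_size('content.jpg')
content = Image.open('content.jpg').resize(size, Image.LANCZOS)
style = Image.open('style.jpg').resize(size, Image.LANCZOS)
```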
