
gangogh's People

Contributors

adam-hanna, erincr, rkjones4, rodrigobdz


gangogh's Issues

Wiki Scraping Time

Hello there,

I have been running scrape_wiki.py to scrape WikiArt. It has been well over an hour and it is still going strong. Is it expected for this process to run this long?

By the way, here is a preview as to what I am seeing:
[screenshot of scraper console output, 2017-06-21 13:12:31]

Cheers,
Adam

Conv2DCustomBackpropInputOp only supports NHWC Error

We are getting the following error:

InvalidArgumentError (see above for traceback): Conv2DCustomBackpropInputOp only supports NHWC.
[[{{node gradients/Discriminator.4_2/Conv2D_grad/Conv2DBackpropInput}} = Conv2DBackpropInput[T=DT_FLOAT, _class=["loc:@Gradi...pendency_1"], data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 2, 2], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](ConstantFolding/gradients_2/Discriminator.4_2/Conv2D_grad/ShapeN-matshapes-0, Discriminator.4/Discriminator.4.Filters/read, gradients/AddN_2)]]

How do we change the format to NHWC to make it work?

We are using the image dump from the Google Drive link above - and all images have been resized to 64x64.

Versions: Python 3.6.6; TensorFlow 1.11; NVIDIA GTX 1080 GPU.

Thanks
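
A minimal workaround sketch, assuming the failure is that TF 1.x's CPU kernel for conv backprop only supports NHWC: transpose to NHWC around each convolution so the rest of the NCHW graph is untouched (the function name and signature below are illustrative, not the repo's API):

    import tensorflow as tf

    def conv2d_nhwc(inputs_nchw, filters, stride):
        # [batch, channels, height, width] -> [batch, height, width, channels]
        x = tf.transpose(inputs_nchw, [0, 2, 3, 1])
        y = tf.nn.conv2d(x, filters, strides=[1, stride, stride, 1],
                         padding='SAME', data_format='NHWC')
        # transpose back so downstream NCHW code is unchanged
        return tf.transpose(y, [0, 3, 1, 2])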

Not achieving any progress

After more than 25k iterations, the generated output is still the same:

[good_samples_3_24999 - generated sample grid]

I am using the default settings; nothing changed. Is this a familiar issue?

Difficult to output larger images

After running your code successfully to produce 64 x 64 images, I'm trying to output larger images, 128 x 128, but I have been running into a number of problems.

The first is that there are a lot of magic numbers, with no explanation of what they mean:
e.g. "8*4*4*dim*2" - I tried multiplying dim by 4 here (the square of 2x is 4 times the square of x),
and that seems to work, but I'm just guessing because it's not clear where the numbers come from.

There are many areas where the number 64 appears, and it's not clear if it really should be 64, or if it should be the dimension.

To output 128 x 128 images, does the model dimensionality need to be 128?

Other issues:

ValueError: GraphDef cannot be larger than 2GB.

I'm running this on a Titan Xp GPU, which has 12 GB of memory, so there surely has to be a way around this constraint. Any idea of what that would be?

this occurs at:

File "GANGogh/GANgogh.py", line 378, in
_x_r = session.run(real_data, feed_dict={all_real_data_conv: _x})

I'm new to TensorFlow, so this is rather intimidating.

So any guidance on how to change the code so that it can output larger images would be greatly appreciated; the task initially seemed straightforward, only requiring a few variable changes relating to DIM, OUTPUT_DIM, etc.

I think this project is really cool, and I really appreciated your Medium article... it's what first exposed me to GANs.

Any feedback on how to approach this would be greatly appreciated.

Thanks
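
For reference, a rough sketch of how the size-dependent constants would scale for 128 x 128 output, assuming OUTPUT_DIM counts raw RGB values and the generator's first layer follows the "8*4*4*dim*2" pattern quoted above (both are assumptions about the code, not confirmed):

    # assumed relationships, for illustration only
    RESOLUTION = 128
    CHANNELS = 3
    OUTPUT_DIM = RESOLUTION * RESOLUTION * CHANNELS  # 49152 for 128x128 RGB
    DIM = 64
    # width of the generator's first linear layer under the quoted pattern;
    # doubling the resolution quadruples the pixel count, which matches the
    # "multiply dim by 4" guess above
    first_layer_width = 8 * 4 * 4 * DIM * 2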

Host training data rather than scraping

I'd really prefer that each user doesn't scrape multiple tens of gigabytes of data from wikiart.org.

I've created a torrent file and am also hosting a training data set on Google Drive; I'll submit a pull request to update the README.

Let me know your thoughts. I'll cancel the PR if you disagree. Thanks.

picStuff.py missing images to resize

This project is amazing. I'm trying to run the code, though, and picStuff.py keeps saying "missed it". It creates the right folders but doesn't seem to be resizing the pictures. Any ideas on how to solve this problem?
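
One way to debug this, sketched on the assumption that picStuff.py catches resize failures and only prints "missed it": print the underlying exception, which should reveal the real cause (the paths below are hypothetical placeholders):

    from PIL import Image

    # hypothetical placeholder paths
    src_path = 'images/portrait/example.jpg'
    dst_path = 'tinyimages/portrait/example.jpg'
    try:
        Image.open(src_path).resize((64, 64)).save(dst_path)
    except Exception as e:
        # surfacing the exception instead of a bare "missed it"
        print('missed it:', src_path, e)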

ValueError: Dimensions must be equal, but are 64 and 3 for 'Discriminator.1/Conv2D'

I've already built the tinyimages folder with the 64x64 images. Earlier I had the error that Conv2D can only handle NHWC, so I recoded it to return NHWC, but I'm getting the error below and nothing shows up in my generated folder. I've already switched to Python 3.5.6 (as suggested by a reply to an earlier issue on the site). What do I do to get beyond this error?

What's interesting is that when I ran the program unchanged (except for the image locations), it produced one file in the generated folder, samples_groundtruth.png, before I got the "NHWC only" error.

(base) C:\apps\GANGogh-master>C:/Anaconda/python.exe c:/apps/GANGogh-master/GANgogh.py
Uppercase local vars:
BATCH_SIZE: 84
CLASSES: 14
CRITIC_ITERS: 5
DIM: 64
ITERS: 200000
LAMBDA: 10
MODE: acwgan
N_GPUS: 1
OUTPUT_DIM: 12288
PREITERATIONS: 2000
2019-06-02 01:51:34.101849: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
WARNING:tensorflow:From C:\Anaconda\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
Traceback (most recent call last):
File "C:\Anaconda\lib\site-packages\tensorflow\python\framework\ops.py", line 1659, in _create_c_op
c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimensions must be equal, but are 64 and 3 for 'Discriminator.1/Conv2D' (op: 'Conv2D') with input shapes: [84,3,64,64], [5,5,3,64].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "c:/apps/GANGogh-master/GANgogh.py", line 256, in
disc_fake,disc_fake_class = Discriminator(fake_data, CLASSES)
File "c:/apps/GANGogh-master/GANgogh.py", line 177, in kACGANDiscriminator
output = lib.ops.conv2d.Conv2D('Discriminator.1', 3, dim, 5, output, stride=2)
File "c:\apps\GANGogh-master\tflib\ops\conv2d.py", line 111, in Conv2D
data_format='NHWC'
File "C:\Anaconda\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 1025, in conv2d
data_format=data_format, dilations=dilations, name=name)
File "C:\Anaconda\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "C:\Anaconda\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "C:\Anaconda\lib\site-packages\tensorflow\python\framework\ops.py", line 3300, in create_op
op_def=op_def)
File "C:\Anaconda\lib\site-packages\tensorflow\python\framework\ops.py", line 1823, in init
control_input_ops)
File "C:\Anaconda\lib\site-packages\tensorflow\python\framework\ops.py", line 1662, in _create_c_op
raise ValueError(str(e))
ValueError: Dimensions must be equal, but are 64 and 3 for 'Discriminator.1/Conv2D' (op: 'Conv2D') with input shapes: [84,3,64,64], [5,5,3,64].

(base) C:\apps\GANGogh-master>
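
A guess at the cause, sketched under the assumption that the input pipeline still yields NCHW batches ([84, 3, 64, 64]) while the recoded Conv2D now expects NHWC ([84, 64, 64, 3]): transposing the batch once at the input would make the shapes line up (tensor names are illustrative):

    import tensorflow as tf

    # the pipeline produces NCHW, but an NHWC Conv2D needs channels last
    real_data_nchw = tf.placeholder(tf.float32, [84, 3, 64, 64])
    real_data_nhwc = tf.transpose(real_data_nchw, [0, 2, 3, 1])  # [84, 64, 64, 3]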


Images Zip on Google Drive

Trying to get the image files from Google Drive. It downloads and then says 'invalid file'. Tried twice on two different Windows computers. I have a slow connection, so I didn't want to try the torrent and risk the same issue, so:

For anyone having the same issue:
I tried scrape_wiki.py, but the URLs have changed since then. I managed to get them by finding the JSON files associated with pagination.

Replaced soupit like so:

    import json
    import urllib.request

    def soupit(j, genre):
        try:
            url = "https://www.wikiart.org/en/paintings-by-genre/" + genre + "?json=2&page=" + str(j)
            jsonP = urllib.request.urlopen(url)
            data = json.loads(jsonP.read())
            urls = []
            # each page's JSON holds the items under the "Paintings" key
            for artItem in data["Paintings"]:
                urls.append(artItem["image"])
            return urls
        except Exception as e:
            print('Failed to find the following genre page combo: ' + genre + str(j))
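
For context, a usage sketch of the replacement (the genre name and page range are illustrative):

    all_urls = []
    for page in range(1, 4):
        urls = soupit(page, "portrait")
        if urls:  # soupit returns None on failure
            all_urls.extend(urls)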

GANgogh.py has difficulty saving a batch of ground-truth samples

Running GANgogh.py returns:

FileNotFoundError: [Errno 2] No such file or directory: 'generated/samples_groundtruth.png'

This is caused by line 353 in GANgogh.py:

lib.save_images.save_images(_x_r.reshape((BATCH_SIZE, 3, 64, 64)), 'generated/samples_groundtruth.png')

Tried using scipy.misc.imsave instead of lib.save_images.save_images but that didn't work. Ideas?
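
A minimal workaround sketch, assuming the failure is simply that the generated/ output directory does not exist when the script first writes to it:

    import os

    # create the output directory before any save_images call
    os.makedirs('generated', exist_ok=True)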
