
Text To Image Synthesis Using Thought Vectors

Join the chat at https://gitter.im/text-to-image/Lobby

This is an experimental TensorFlow implementation of synthesizing images from captions using Skip Thought Vectors. The images are synthesized using the GAN-CLS algorithm from the paper Generative Adversarial Text-to-Image Synthesis. This implementation is built on top of the excellent DCGAN in Tensorflow. The following is the model architecture; the blue bars represent the skip thought vectors for the captions.

Model architecture

Image Source: Generative Adversarial Text-to-Image Synthesis Paper

Requirements

The implementation targets roughly the following environment:

  • Python 2.7
  • TensorFlow (1.x)
  • Theano (used by the skip-thought encoder)
  • NLTK, with the punkt tokenizer models
  • h5py, NumPy, SciPy, and PIL/Pillow

Datasets

  • All the steps below for downloading the datasets and models can be performed automatically by running python download_datasets.py. Several gigabytes of files will be downloaded and extracted.
  • The model is currently trained on the flowers dataset. Download the images from this link and save them in Data/flowers/jpg. Also download the captions from this link. Extract the archive, copy the text_c10 folder and paste it in Data/flowers.
  • Download the pretrained models and vocabulary for skip thought vectors as per the instructions given here. Save the downloaded files in Data/skipthoughts.
  • Make empty directories Data/samples, Data/val_samples and Data/Models (a sketch for creating them follows this list). They will be used for sampling the generated images and for saving the trained models.
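A minimal sketch for creating these directories (assuming the repository root as the working directory):

import os

# Directories used for sampled images and saved models, as described above.
for d in ['Data/samples', 'Data/val_samples', 'Data/Models']:
    if not os.path.exists(d):
        os.makedirs(d)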

Usage

  • Data Processing: Extract the skip thought vectors for the flowers data set using:
python data_loader.py --data_set="flowers"
  • Training

    • Basic usage python train.py --data_set="flowers"
    • Options
      • z_dim: Noise Dimension. Default is 100.
      • t_dim: Text feature dimension. Default is 256.
      • batch_size: Batch Size. Default is 64.
      • image_size: Image dimension. Default is 64.
      • gf_dim: Number of convolution filters in the first layer of the generator. Default is 64.
      • df_dim: Number of convolution filters in the first layer of the discriminator. Default is 64.
      • gfc_dim: Dimension of the generator units for the fully connected layer. Default is 1024.
      • caption_vector_length: Length of the caption vector. Default is 1024.
      • data_dir: Data Directory. Default is Data/.
      • learning_rate: Learning Rate. Default is 0.0002.
      • beta1: Momentum term for the Adam optimizer. Default is 0.5.
      • epochs: Max number of epochs. Default is 600.
      • resume_model: Resume training from a pretrained model path.
      • data_set: Data Set to train on. Default is flowers.
  • Generating Images from Captions

    • Write the captions in a text file and save it as Data/sample_captions.txt (example below). Generate the skip thought vectors for these captions using:
    python generate_thought_vectors.py --caption_file="Data/sample_captions.txt"
    
    • Generate the Images for the thought vectors using:
    python generate_images.py --model_path=<path to the trained model> --n_images=8
    

    n_images specifies the number of images to be generated per caption. The generated images will be saved in Data/val_samples/. Run python generate_images.py --help for more options.
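For example, Data/sample_captions.txt holds one caption per line, such as:

    the flower shown has yellow anther red pistil and bright red petals
    the petals on this flower are white with a yellow center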

Sample Images Generated

Following are the captions used and, in the repository, the images the generative model produced for each (the generated-image column is not reproduced here):

  • the flower shown has yellow anther red pistil and bright red petals
  • this flower has petals that are yellow, white and purple and has dark lines
  • the petals on this flower are white with a yellow center
  • this flower has a lot of small round pink petals.
  • this flower is orange in color, and has petals that are ruffled and rounded.
  • the flower has yellow petals and the center of it is brown

Implementation Details

  • Only the uni-skip vectors from the skip thought vectors are used. I have not tried training the model with combine-skip vectors.
  • The model was trained for around 200 epochs on a GPU. This took roughly 2-3 days.
  • The images generated are 64 x 64 in dimension.
  • While processing the batches before training, the images are flipped horizontally with a probability of 0.5 (see the sketch after this list).
  • The train-val split is 0.75.
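A minimal sketch of that flip step (an illustration, not the repository's exact code):

import numpy as np

def flip_batch(images):
    # Mirror each image left-right with probability 0.5.
    # images: array of shape [batch, height, width, channels]
    out = images.copy()
    for i in range(out.shape[0]):
        if np.random.rand() < 0.5:
            out[i] = out[i, :, ::-1, :]
    return out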

Pre-trained Models

  • Download the pretrained model from here and save it in Data/Models. Use this path for generating the images.

TODO

  • Train the model on the MS-COCO data set, and generate more generic images.
  • Try different embedding options for captions (other than skip thought vectors). Also try to train the caption-embedding RNN along with the GAN-CLS model.

References

  • Generative Adversarial Text-to-Image Synthesis, Reed et al., 2016.
  • Skip-Thought Vectors, Kiros et al., 2015 (https://github.com/ryankiros/skip-thoughts).
  • DCGAN in Tensorflow (https://github.com/carpedm20/DCGAN-tensorflow).

Alternate Implementations

License

MIT

Contributors

abhisheknarayanan, gitter-badger, neilsh, paarthneekhara


Issues

Unable to restore the trained model

Traceback (most recent call last):
File "generate_images.py", line 106, in
main()
File "generate_images.py", line 67, in main
saver.restore(sess, args.model_path)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1127, in restore
raise ValueError("Restore called with invalid save path %s" % save_path)
ValueError: Restore called with invalid save path /home/tushar/codes/python_codes/text-to-image/Data/Models/
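"Restore called with invalid save path" here means --model_path points at a directory rather than a checkpoint prefix. One possible fix, assuming TF 1.x and the saver/sess objects from generate_images.py, is to resolve the newest checkpoint in that directory first:

import tensorflow as tf

# Resolve the newest checkpoint prefix inside the models directory.
ckpt = tf.train.latest_checkpoint('Data/Models/')
if ckpt is None:
    raise ValueError('No checkpoint found under Data/Models/')
saver.restore(sess, ckpt)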

Input of Discriminator

Hi, here's another question.
Here the discriminator's input is the wrong image with the right text. However, according to the original paper (page 5), they use the real image with the wrong text, which differs from this implementation.
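For reference, Algorithm 1 of the paper computes three discriminator scores: s_r = D(x, h) for a real image with matching text, s_w = D(x, ĥ) for a real image with mismatching text, and s_f = D(x̂, h) for a fake image with matching text, and updates the discriminator with L_D = log(s_r) + (log(1 − s_w) + log(1 − s_f)) / 2.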

the error about utils

When I run generate_images.py, it shows ModuleNotFoundError: No module named 'Utils'.
How can I solve this? Thank you.
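One unconfirmed cause is running the script from outside the repository root, so the Utils/ package is not on the import path. A workaround sketch (added near the top of generate_images.py):

import os
import sys

# Assumption: generate_images.py sits in the repository root next to Utils/.
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))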

How to get captions of the image?

Thanks for your contribution!
A question has confused me for a while: how do you get the captions for all the images? As far as I know, the Oxford-102 data set only has labels for each class rather than descriptions of every attribute, so were they labeled by you?

data loader running error

While running data_loader.py I am getting the following errors. If anyone knows the solution, please let me know.

Traceback (most recent call last):
File "/home/komali/PycharmProjects/text-to-image-master/data_loader.py", line 113, in
main()
File "/home/komali/PycharmProjects/text-to-image-master/data_loader.py", line 108, in main
save_caption_vectors_flowers(args.data_dir)
File "/home/komali/PycharmProjects/text-to-image-master/data_loader.py", line 85, in save_caption_vectors_flowers
encoded_captions[img] = skipthoughts.encode(model, image_captions[img])
File "/home/komali/PycharmProjects/text-to-image-master/skipthoughts.py", line 96, in encode
X = preprocess(X)
File "/home/komali/PycharmProjects/text-to-image-master/skipthoughts.py", line 160, in preprocess
sent_detector = nltk.data.load('tokenizers/punkt/english.pickle')
File "/home/komali/anaconda2/lib/python2.7/site-packages/nltk/data.py", line 801, in load
opened_resource = _open(resource_url)
File "/home/komali/anaconda2/lib/python2.7/site-packages/nltk/data.py", line 919, in open
return find(path
, path + ['']).open()
File "/home/komali/anaconda2/lib/python2.7/site-packages/nltk/data.py", line 641, in find
raise LookupError(resource_not_found)
LookupError:


Resource u'tokenizers/punkt/english.pickle' not found. Please
use the NLTK Downloader to obtain the resource: >>>
nltk.download()
Searched in:
- '/home/komali/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
- u''
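As the error message itself suggests, downloading the punkt models once fixes this:

import nltk

# Fetch the punkt sentence-tokenizer models that skipthoughts.py loads.
nltk.download('punkt')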

Same error while generating as well as training

Traceback (most recent call last):
File "/home/bsatya/Study/ML/p3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1628, in _create_c_op
c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimension 1 in both shapes must be equal, but are 100 and 256. Shapes are [64,100] and [64,256].
From merging shape 0 with other shapes. for 'concat/concat_dim' (op: 'Pack') with input shapes: [64,100], [64,256].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "train.py", line 238, in
main()
File "train.py", line 76, in main
input_tensors, variables, loss, outputs, checks = gan.build_model()
File "/home/bsatya/Study/ML/text-to-image/model.py", line 39, in build_model
fake_image = self.generator(t_z, t_real_caption)
File "/home/bsatya/Study/ML/text-to-image/model.py", line 139, in generator
z_concat = tf.concat(1, [t_z, reduced_text_embedding])
File "/home/bsatya/Study/ML/p3/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 1121, in concat
dtype=dtypes.int32).get_shape().assert_is_compatible_with(
File "/home/bsatya/Study/ML/p3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1050, in convert_to_tensor
as_ref=False)
File "/home/bsatya/Study/ML/p3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1146, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/home/bsatya/Study/ML/p3/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 971, in _autopacking_conversion_function
return _autopacking_helper(v, dtype, name or "packed")
File "/home/bsatya/Study/ML/p3/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 923, in _autopacking_helper
return gen_array_ops.pack(elems_as_tensors, name=scope)
File "/home/bsatya/Study/ML/p3/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 4875, in pack
"Pack", values=values, axis=axis, name=name)
File "/home/bsatya/Study/ML/p3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/bsatya/Study/ML/p3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/home/bsatya/Study/ML/p3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
op_def=op_def)
File "/home/bsatya/Study/ML/p3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1792, in init
control_input_ops)
File "/home/bsatya/Study/ML/p3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1631, in _create_c_op
raise ValueError(str(e))
ValueError: Dimension 1 in both shapes must be equal, but are 100 and 256. Shapes are [64,100] and [64,256].
From merging shape 0 with other shapes. for 'concat/concat_dim' (op: 'Pack') with input shapes: [64,100], [64,256].
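This is the tf.concat argument-order change: in TensorFlow >= 1.0 the signature is tf.concat(values, axis), while model.py uses the pre-1.0 order tf.concat(axis, values), so the tensor list is auto-packed as if it were the axis argument. A one-line sketch of the fix in model.py (the same swap applies at the other concat call sites):

z_concat = tf.concat([t_z, reduced_text_embedding], 1)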

Training on Bird dataset

I ran the code on the bird dataset. After epoch 40 the losses turn to NaN. What could be the possible reason, and what could be done to solve the issue?

How to generate model from new datasets

Hi Paarthneekhara, thank you so much. This project is very useful to me.
I have the problem below.
When we try to create a model for our own dataset (new images), train.py generates a set of .ckpt meta, data and index files as shown below [image].
I am not getting a single .ckpt file like your pre-trained model (latest_model_flowers_temp.ckpt). Can you please help me generate a single .ckpt file with our own data sets?
Thanks in advance!!

Please respond ASAP
@paarthneekhara
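This is expected: since TensorFlow 0.12, a checkpoint is written as several files sharing one prefix (.meta, .index, .data-00000-of-00001) rather than a single .ckpt file. Restoring works with the common prefix, e.g. (saver and sess as in generate_images.py, and assuming train.py saved under that prefix):

saver.restore(sess, 'Data/Models/latest_model_flowers_temp.ckpt')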

Pretrained Model

Hi Paarthneekhara,

Can you also provide your pretrained checkpoint that you used to generate the images shown on the readme file?

Best,
Oushesh

checkpoint files while generating image

When I run python generate_images.py --model_path=Data/Models/latest_model_flowers_temp.ckpt --n_images=8 using a pre-trained model, I get this error:
2018-02-01 05:17:12.967163: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX
Traceback (most recent call last):
File "generate_images.py", line 109, in
main()
File "generate_images.py", line 70, in main
saver.restore(sess, args.model_path)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1686, in restore
{self.saver_def.filename_tensor_name: save_path})
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 895, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1128, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1344, in _do_run
options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1363, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Tensor name "d_bn1/moving_variance" not found in checkpoint files Data/Models/latest_model_flowers_temp.ckpt
[[Node: save/RestoreV2_3 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2_3/tensor_names, save/RestoreV2_3/shape_and_slices)]]

Caused by op u'save/RestoreV2_3', defined at:
File "generate_images.py", line 109, in
main()
File "generate_images.py", line 68, in main
saver = tf.train.Saver()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1239, in init
self.build()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1248, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1284, in _build
build_save=build_save, build_restore=build_restore)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 765, in _build_internal
restore_sequentially, reshape)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 428, in _AddRestoreOps
tensors = self.restore_op(filename_tensor, saveable, preferred_shard)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 268, in restore_op
[spec.tensor.dtype])[0])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_io_ops.py", line 1031, in restore_v2
shape_and_slices=shape_and_slices, dtypes=dtypes, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 3160, in create_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1625, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

NotFoundError (see above for traceback): Tensor name "d_bn1/moving_variance" not found in checkpoint files Data/Models/latest_model_flowers_temp.ckpt
[[Node: save/RestoreV2_3 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2_3/tensor_names, save/RestoreV2_3/shape_and_slices)]]

My os : ubuntu 16.04
TF : 1.5
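The graph is looking for batch-norm variables (d_bn1/moving_variance) that the checkpoint does not contain, which typically indicates the checkpoint was written by a different TensorFlow version or batch-norm implementation than the one building the graph. A quick way to compare, assuming TF 1.x:

import tensorflow as tf

# List every variable stored in the checkpoint so the names can be compared
# with those the current graph tries to restore.
reader = tf.train.NewCheckpointReader('Data/Models/latest_model_flowers_temp.ckpt')
for name in sorted(reader.get_variable_to_shape_map()):
    print(name)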

Memory usage in data processing step

Hi,
I've followed the setup instructions. However, when I run python data_loader.py --data_set="flowers", memory usage keeps climbing, and the processing time per image caption also keeps increasing. Eventually I run out of memory and it crashes.

Any idea why this may be?

I'm using Ubuntu 16, with 12 GB of RAM and a GTX 1080 Ti.
I have never used Theano (which skipthoughts uses), but I do have it installed.
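One unconfirmed mitigation is to encode the captions in fixed-size chunks rather than in one giant call, so Theano's intermediate buffers stay bounded. A sketch against the skipthoughts API that data_loader.py already uses:

import numpy
import skipthoughts  # the module shipped with this repository

def encode_in_chunks(model, captions, chunk_size=100):
    # Encode a long caption list chunk by chunk to bound peak memory.
    chunks = [skipthoughts.encode(model, captions[i:i + chunk_size])
              for i in range(0, len(captions), chunk_size)]
    return numpy.concatenate(chunks, axis=0)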

train errors

When I tried to run train.py I got the error below. Please help me solve it; if anyone knows the answer, please let me know.

Traceback (most recent call last):
File "/home/mcis-lap-40/raviwork/Tensorflow_Models/text-to-image-master/train.py", line 256, in
main()
File "/home/mcis-lap-40/raviwork/Tensorflow_Models/text-to-image-master/train.py", line 106, in main
loaded_data)
TypeError: 'NoneType' object is not iterable

error while generating images

zhoutao@zhoutao:~/project/text-to-image-master$ python generate_images.py --model_path=Data/Models/latest_model_flowers_temp.ckpt --n_images=8

Traceback (most recent call last):
File "generate_images.py", line 106, in
main()
File "generate_images.py", line 64, in main
_, _, _, _, _ = gan.build_model()
File "/home/zhoutao/project/text-to-image-master/model.py", line 40, in build_model
disc_wrong_image, disc_wrong_image_logits = self.discriminator(t_wrong_image, t_real_caption, reuse = True)
File "/home/zhoutao/project/text-to-image-master/model.py", line 158, in discriminator
h0 = ops.lrelu(ops.conv2d(image, self.options['df_dim'], name = 'd_h0_conv')) #32
File "/home/zhoutao/project/text-to-image-master/Utils/ops.py", line 76, in conv2d
initializer=tf.truncated_normal_initializer(stddev=stddev))
File "/home/zhoutao/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 1054, in get_variable
use_resource=use_resource, custom_getter=custom_getter)
File "/home/zhoutao/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 951, in get_variable
use_resource=use_resource, custom_getter=custom_getter)
File "/home/zhoutao/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 356, in get_variable
validate_shape=validate_shape, use_resource=use_resource)
File "/home/zhoutao/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 341, in _true_getter
use_resource=use_resource)
File "/home/zhoutao/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 653, in _get_single_variable
name, "".join(traceback.format_list(tb))))
ValueError: Variable d_h0_conv/w already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:

File "/home/zhoutao/project/text-to-image-master/Utils/ops.py", line 76, in conv2d
initializer=tf.truncated_normal_initializer(stddev=stddev))
File "/home/zhoutao/project/text-to-image-master/model.py", line 158, in discriminator
h0 = ops.lrelu(ops.conv2d(image, self.options['df_dim'], name = 'd_h0_conv')) #32
File "/home/zhoutao/project/text-to-image-master/model.py", line 39, in build_model
disc_real_image, disc_real_image_logits = self.discriminator(t_real_image, t_real_caption)

I get this error, and my tf.__version__ is '1.1.0-rc2'.

Can you help me?

The range of pixel value

In the generator, the return value is (tf.tanh(h4)/2. + 0.5), the fake image, so fake-image pixel values lie in [0, 1]. However, real-image pixel values lie in [0, 255]. Is that reasonable? @paarthneekhara
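If the real images really are fed in unscaled, bringing them into the generator's output range would be the natural fix, e.g.:

# Scale 8-bit images into [0, 1] before feeding them to the discriminator.
real_images = raw_images.astype('float32') / 255.0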

Issue with libjpeg

When I try to generate, the console logs this output:

{
  gpu : 0
  noisemode : "random"
  name : "generation1"
  noisetype : "normal"
  batchSize : 32
  net : "celebA_25_net_G.t7"
  imsize : 1
  nz : 100
  display : 1
}
nn.Sequential {
  [input -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> (7) -> (8) -> (9) -> (10) -> (11) -> (12) -> (13) -> (14) -> output]
  (1): nn.SpatialFullConvolution(100 -> 512, 4x4)
  (2): nn.SpatialBatchNormalization (4D) (512)
  (3): nn.ReLU
  (4): nn.SpatialFullConvolution(512 -> 256, 4x4, 2,2, 1,1)
  (5): nn.SpatialBatchNormalization (4D) (256)
  (6): nn.ReLU
  (7): nn.SpatialFullConvolution(256 -> 128, 4x4, 2,2, 1,1)
  (8): nn.SpatialBatchNormalization (4D) (128)
  (9): nn.ReLU
  (10): nn.SpatialFullConvolution(128 -> 64, 4x4, 2,2, 1,1)
  (11): nn.SpatialBatchNormalization (4D) (64)
  (12): nn.ReLU
  (13): nn.SpatialFullConvolution(64 -> 3, 4x4, 2,2, 1,1)
  (14): nn.Tanh
}
Images size: 	32 x 3 x 64 x 64	
Min, Max, Mean, Stdv	0.01557257771492	0.99922662973404	0.46208152188531	0.24795615762643	
Wrong JPEG library version: library is 90, caller expects 80

Then stops.

I am using libjpeg9b; I tried libjpeg6, but it didn't work, and I can't get the required version.

Is there anything that can be done about this issue?

Error from batch_norm

I got this error when I was trying to run your scripts.

Traceback (most recent call last):
  File "train.py", line 238, in <module>
    main()
  File "train.py", line 76, in main
    input_tensors, variables, loss, outputs, checks = gan.build_model()
  File "/home/akara/Workspace/text-to-image/model.py", line 44, in build_model
    disc_wrong_image, disc_wrong_image_logits   = self.discriminator(t_wrong_image, t_real_caption, reuse = True)
  File "/home/akara/Workspace/text-to-image/model.py", line 165, in discriminator
    h1 = ops.lrelu( self.d_bn1(ops.conv2d(h0, self.options['df_dim']*2, name = 'd_h1_conv'))) #16
  File "/home/akara/Workspace/text-to-image/Utils/ops.py", line 34, in __call__
    ema_apply_op = self.ema.apply([batch_mean, batch_var])
  File "/home/akara/miniconda2/envs/gan/lib/python2.7/site-packages/tensorflow/python/training/moving_averages.py", line 391, in apply
    self._averages[var], var, decay, zero_debias=zero_debias))
  File "/home/akara/miniconda2/envs/gan/lib/python2.7/site-packages/tensorflow/python/training/moving_averages.py", line 70, in assign_moving_average
    update_delta = _zero_debias(variable, value, decay)
  File "/home/akara/miniconda2/envs/gan/lib/python2.7/site-packages/tensorflow/python/training/moving_averages.py", line 177, in _zero_debias
    trainable=False)
  File "/home/akara/miniconda2/envs/gan/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 1024, in get_variable
    custom_getter=custom_getter)
  File "/home/akara/miniconda2/envs/gan/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 850, in get_variable
    custom_getter=custom_getter)
  File "/home/akara/miniconda2/envs/gan/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 346, in get_variable
    validate_shape=validate_shape)
  File "/home/akara/miniconda2/envs/gan/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 331, in _true_getter
    caching_device=caching_device, validate_shape=validate_shape)
  File "/home/akara/miniconda2/envs/gan/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 650, in _get_single_variable
    "VarScope?" % name)
ValueError: Variable d_bn1/d_bn1_2/d_bn1_2/moments/moments_1/mean/ExponentialMovingAverage/biased does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?

It happens when the script tries to create the second discriminator.

disc_real_image, disc_real_image_logits   = self.discriminator(t_real_image, t_real_caption)
disc_wrong_image, disc_wrong_image_logits   = self.discriminator(t_wrong_image, t_real_caption, reuse = True) # Here
disc_fake_image, disc_fake_image_logits   = self.discriminator(fake_image, t_real_caption, reuse = True)

I printed all the variables; they seem to be initialized with different names even though reuse = True.
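A workaround often suggested for this family of errors (not confirmed by the author): replace the hand-rolled EMA batch norm in Utils/ops.py with tf.contrib.layers.batch_norm, which creates its moving statistics via tf.get_variable and therefore cooperates with variable-scope reuse. A sketch, assuming TF 1.x:

import tensorflow as tf

def batch_norm(x, train=True, name='batch_norm'):
    # The moving mean/variance are handled internally and reused
    # correctly when the enclosing variable scope sets reuse=True.
    return tf.contrib.layers.batch_norm(x,
                                        decay=0.9,
                                        epsilon=1e-5,
                                        scale=True,
                                        is_training=train,
                                        updates_collections=None,
                                        scope=name)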

ValueError: Dimension 1 in both shapes must be equal, but are 100 and 256. Shapes are [64,100] and [64,256].

Traceback (most recent call last):
File "train.py", line 241, in
main()
File "train.py", line 79, in main
input_tensors, variables, loss, outputs, checks = gan.build_model()
File "C:\Users\anwar\Downloads\Documents\text-to-image-master\model.py", line 37, in build_model
fake_image = self.generator(t_z, t_real_caption)
File "C:\Users\anwar\Downloads\Documents\text-to-image-master\model.py", line 137, in generator
z_concat = tf.concat(1, [t_z, reduced_text_embedding])
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1121, in concat
dtype=dtypes.int32).get_shape().assert_is_compatible_with(
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1048, in convert_to_tensor
as_ref=False)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1144, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\array_ops.py", line 971, in _autopacking_conversion_function
return _autopacking_helper(v, dtype, name or "packed")
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\array_ops.py", line 923, in _autopacking_helper
return gen_array_ops.pack(elems_as_tensors, name=scope)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 5644, in pack
"Pack", values=values, axis=axis, name=name)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 3272, in create_op
op_def=op_def)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1790, in init
control_input_ops)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1629, in _create_c_op
raise ValueError(str(e))
ValueError: Dimension 1 in both shapes must be equal, but are 100 and 256. Shapes are [64,100] and [64,256].
From merging shape 0 with other shapes. for 'concat/concat_dim' (op: 'Pack') with input shapes: [64,100], [64,256].

Use of tf.nn.moments in batch_norm class

As I went through the code: in the discriminator we use the same block three times, once for real images and then for fake and wrong images with variable reuse. But when we reach the batch_norm __call__() function the second time, with reuse=True, it tries to reuse all the variables in __call__(). A similar thing happens with ema.apply(): we create two variables, batch_mean and batch_var, which we pass to ema.apply(), and since reuse is already set to True, these variables do not get added to our global variables list.
So, as this code has a single scope for all variables in __call__() with reuse set to True, there is a conflict, and I am not able to run that code segment. Any explanation of the use of tf.nn.moments in __call__() would be appreciated.

Thanks

Trying to generate images using pre-trained model

What I did

  • Downloaded the pre-trained model
  • Created a file (j.caption) with a sample caption
  • Ran: python generate_thought_vectors.py --caption_file=j.caption
  • Got the following error, any ideas?
['pink flower with green leaves']
Loading model parameters...
Traceback (most recent call last):
  File "generate_thought_vectors.py", line 32, in <module>
    main()
  File "generate_thought_vectors.py", line 23, in main
    model = skipthoughts.load_model()
  File "/Users/jikkujose/Projects/outside_projects/text-to-image/skipthoughts.py", line 38, in load_model
    with open('%s.pkl'%path_to_umodel, 'rb') as f:

Specs

  • Mac OSX 10.11.6
  • Python 2.7.11

data_loader error (need help)

/home/amax/anaconda3/bin/python /home/amax/ypy/text-to-image-master/Codes3/data_loader.py
['image_00301.jpg', 'image_00302.jpg', 'image_00303.jpg', 'image_00304.jpg', 'image_00305.jpg', 'image_00306.jpg', 'image_00307.jpg', 'image_00308.jpg', 'image_00309.jpg', 'image_00310.jpg', 'image_00311.jpg', 'image_00312.jpg', 'image_00313.jpg', 'image_00314.jpg', 'image_00315.jpg', 'image_00316.jpg', 'image_00317.jpg', 'image_00318.jpg', 'image_00319.jpg', 'image_00320.jpg', 'image_00321.jpg', 'image_00322.jpg', 'image_00323.jpg', 'image_00324.jpg', 'image_00325.jpg', 'image_00326.jpg', 'image_00327.jpg', 'image_00328.jpg', 'image_00329.jpg', 'image_00330.jpg', 'image_00331.jpg', 'image_00332.jpg', 'image_00333.jpg', 'image_00334.jpg', 'image_00335.jpg', 'image_00336.jpg', 'image_00337.jpg', 'image_00338.jpg', 'image_00339.jpg', 'image_00340.jpg', 'image_00341.jpg', 'image_00342.jpg', 'image_00343.jpg', 'image_00344.jpg', 'image_00345.jpg', 'image_00346.jpg', 'image_00347.jpg', 'image_00348.jpg', 'image_00349.jpg', 'image_00350.jpg', 'image_00351.jpg', 'image_00352.jpg', 'image_00353.jpg', 'image_00354.jpg', 'image_00355.jpg', 'image_00356.jpg', 'image_00357.jpg', 'image_00358.jpg', 'image_00359.jpg', 'image_00360.jpg', 'image_00361.jpg', 'image_00362.jpg', 'image_00363.jpg', 'image_00364.jpg', 'image_00365.jpg', 'image_00366.jpg', 'image_00367.jpg', 'image_00368.jpg', 'image_00369.jpg', 'image_00370.jpg', 'image_00371.jpg', 'image_00372.jpg', 'image_00373.jpg', 'image_00374.jpg', 'image_00375.jpg', 'image_00376.jpg', 'image_00377.jpg', 'image_00378.jpg', 'image_00379.jpg', 'image_00380.jpg', 'image_00381.jpg', 'image_00382.jpg', 'image_00383.jpg', 'image_00384.jpg', 'image_00385.jpg', 'image_00386.jpg', 'image_00387.jpg', 'image_00388.jpg', 'image_00389.jpg', 'image_00390.jpg', 'image_00391.jpg', 'image_00392.jpg', 'image_00393.jpg', 'image_00394.jpg', 'image_00395.jpg', 'image_00396.jpg', 'image_00397.jpg', 'image_00398.jpg', 'image_00399.jpg', 'image_00400.jpg']
8189
8189
Loading model parameters...
Compiling encoders...
Loading tables...
Traceback (most recent call last):
File "/home/amax/ypy/text-to-image-master/Codes3/data_loader.py", line 113, in
main()
File "/home/amax/ypy/text-to-image-master/Codes3/data_loader.py", line 108, in main
save_caption_vectors_flowers(args.data_dir)
File "/home/amax/ypy/text-to-image-master/Codes3/data_loader.py", line 77, in save_caption_vectors_flowers
model = skipthoughts.load_model()
File "/home/amax/ypy/text-to-image-master/Codes3/skipthoughts.py", line 61, in load_model
utable, btable = load_tables()
File "/home/amax/ypy/text-to-image-master/Codes3/skipthoughts.py", line 82, in load_tables
btable = numpy.load(path_to_tables + 'btable.npy',encoding='latin1')
File "/home/amax/anaconda3/lib/python3.5/site-packages/numpy/lib/npyio.py", line 421, in load
pickle_kwargs=pickle_kwargs)
File "/home/amax/anaconda3/lib/python3.5/site-packages/numpy/lib/format.py", line 650, in read_array
array = pickle.load(fp, **pickle_kwargs)
EOFError: Ran out of input
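An EOFError from numpy.load usually means the .npy file on disk is truncated, for example from an interrupted download of the skip-thoughts tables, which are several gigabytes. A quick sanity check before re-downloading:

import os

# A file much smaller than the published table sizes indicates a
# truncated download.
for f in ['Data/skipthoughts/utable.npy', 'Data/skipthoughts/btable.npy']:
    print(f, '%.1f MB' % (os.path.getsize(f) / (1024.0 ** 2)))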

error while generating images

 python generate_images.py --model_path=Data/Models/latest_model_flowers_temp.ckpt --n_images=7
Traceback (most recent call last):
File "generate_images.py", line 106, in
main()
File "generate_images.py", line 64, in main
_, _, _, _, _ = gan.build_model()
File "/home/tushar/codes/python_codes/text-to-image/model.py", line 40, in build_model
disc_wrong_image, disc_wrong_image_logits = self.discriminator(t_wrong_image, t_real_caption, reuse = True)
File "/home/tushar/codes/python_codes/text-to-image/model.py", line 161, in discriminator
h1 = ops.lrelu( self.d_bn1(ops.conv2d(h0, self.options['df_dim']*2, name = 'd_h1_conv'))) #16
File "/home/tushar/codes/python_codes/text-to-image/Utils/ops.py", line 34, in call
ema_apply_op = self.ema.apply([batch_mean, batch_var])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/moving_averages.py", line 391, in apply
self._averages[var], var, decay, zero_debias=zero_debias))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/moving_averages.py", line 70, in assign_moving_average
update_delta = _zero_debias(variable, value, decay)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/moving_averages.py", line 177, in _zero_debias
trainable=False)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 1024, in get_variable
custom_getter=custom_getter)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 850, in get_variable
custom_getter=custom_getter)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 346, in get_variable
validate_shape=validate_shape)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 331, in _true_getter
caching_device=caching_device, validate_shape=validate_shape)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 650, in _get_single_variable
"VarScope?" % name)
ValueError: Variable d_bn1/d_bn1_2/d_bn1_2/moments/moments_1/mean/ExponentialMovingAverage/biased does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?

From merging shape 0 with other shapes. for 'h3_concat/concat_dim' (op: 'Pack') with input shapes: [8,4,4,512], [8,4,4,256].

Traceback (most recent call last):
File "C:\Users\RanjanNi\AppData\Local\Continuum\anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\ops.py", line 1628, in _create_c_op
c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimension 3 in both shapes must be equal, but are 512 and 256. Shapes are [8,4,4,512] and [8,4,4,256].
From merging shape 0 with other shapes. for 'h3_concat/concat_dim' (op: 'Pack') with input shapes: [8,4,4,512], [8,4,4,256].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "generate_images.py", line 106, in
main()
File "generate_images.py", line 64, in main
_, _, _, _, _ = gan.build_model()
File "C:\Users\RanjanNi\Desktop\t2i_skip\Python 3 Codes\model.py", line 39, in build_model
disc_real_image, disc_real_image_logits = self.discriminator(t_real_image, t_real_caption)
File "C:\Users\RanjanNi\Desktop\t2i_skip\Python 3 Codes\model.py", line 171, in discriminator
h3_concat = tf.concat( 3, [h3, tiled_embeddings], name='h3_concat')
File "C:\Users\RanjanNi\AppData\Local\Continuum\anaconda3\envs\py36\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1121, in concat
dtype=dtypes.int32).get_shape().assert_is_compatible_with(
File "C:\Users\RanjanNi\AppData\Local\Continuum\anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\ops.py", line 1050, in convert_to_tensor
as_ref=False)
File "C:\Users\RanjanNi\AppData\Local\Continuum\anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\ops.py", line 1146, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "C:\Users\RanjanNi\AppData\Local\Continuum\anaconda3\envs\py36\lib\site-packages\tensorflow\python\ops\array_ops.py", line 971, in _autopacking_conversion_function
return _autopacking_helper(v, dtype, name or "packed")
File "C:\Users\RanjanNi\AppData\Local\Continuum\anaconda3\envs\py36\lib\site-packages\tensorflow\python\ops\array_ops.py", line 923, in _autopacking_helper
return gen_array_ops.pack(elems_as_tensors, name=scope)
File "C:\Users\RanjanNi\AppData\Local\Continuum\anaconda3\envs\py36\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 5856, in pack
"Pack", values=values, axis=axis, name=name)
File "C:\Users\RanjanNi\AppData\Local\Continuum\anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "C:\Users\RanjanNi\AppData\Local\Continuum\anaconda3\envs\py36\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "C:\Users\RanjanNi\AppData\Local\Continuum\anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\ops.py", line 3274, in create_op
op_def=op_def)
File "C:\Users\RanjanNi\AppData\Local\Continuum\anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\ops.py", line 1792, in init
control_input_ops)
File "C:\Users\RanjanNi\AppData\Local\Continuum\anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\ops.py", line 1631, in _create_c_op
raise ValueError(str(e))
ValueError: Dimension 3 in both shapes must be equal, but are 512 and 256. Shapes are [8,4,4,512] and [8,4,4,256].
From merging shape 0 with other shapes. for 'h3_concat/concat_dim' (op: 'Pack') with input shapes: [8,4,4,512], [8,4,4,256].
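This is the same TensorFlow >= 1.0 argument-order issue as above; for this call site in model.py the sketch of the fix is:

h3_concat = tf.concat([h3, tiled_embeddings], 3, name='h3_concat')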

Validate our model

Does anyone have an idea about how to validate our model? Please share.

Data_Loader error in loading utable.npy for SkipThoughts

In data_loader.py, the command to load data from a npy file:

utable = numpy.load('utable.npy')

But this results in EOFError:

File "/home/divyat/anaconda2/lib/python2.7/site-packages/numpy/lib/npyio.py", line 419, in load
pickle_kwargs=pickle_kwargs)
File "/home/divyat/anaconda2/lib/python2.7/site-packages/numpy/lib/format.py", line 640, in read_array
array = pickle.load(fp, **pickle_kwargs)
EOFError

I don't understand what exactly is happening: the numpy.load() method calls pickle.load(), which raises the error. The file utable.npy is used for generating text embeddings with skipthoughts. I downloaded the files using the link suggested in the repository, from here:

https://github.com/ryankiros/skip-thoughts#getting-started

about some files

I cannot get the file uni_skip.npz.pkl from skip-thoughts.
Can you put it on your GitHub?

how to find out accuracy of output image

I am writing regarding validation of text-to-image generation on the MS-COCO data set. I have done every step as mentioned on GitHub, and now I want to check how accurate my output is. If anyone has done the validation part, please let me know the solution.

Data Loader error

While running data_loader.py, I am facing the errors listed below.

Please help me with this and provide some suggestions.

/home/mcis-lap-40/anaconda2/bin/python2.7 /home/mcis-lap-40/raviwork/Tensorflow_Models/text-to-image-master/data_loader.py
['image_05564.jpg', 'image_00768.jpg', 'image_02634.jpg', 'image_01139.jpg', 'image_03123.jpg', 'image_07190.jpg', 'image_04447.jpg', 'image_04168.jpg', 'image_02712.jpg', 'image_06383.jpg', 'image_03912.jpg', 'image_04501.jpg', 'image_05291.jpg', 'image_00288.jpg', 'image_00072.jpg', 'image_03889.jpg', 'image_07244.jpg', 'image_06798.jpg', 'image_00918.jpg', 'image_05653.jpg', 'image_00674.jpg', 'image_03239.jpg', 'image_03899.jpg', 'image_03533.jpg', 'image_07395.jpg', 'image_07028.jpg', 'image_04904.jpg', 'image_05546.jpg', 'image_00055.jpg', 'image_01711.jpg', 'image_07194.jpg', 'image_00407.jpg', 'image_04943.jpg', 'image_02619.jpg', 'image_03452.jpg', 'image_08034.jpg', 'image_05260.jpg', 'image_04480.jpg', 'image_05755.jpg', 'image_02984.jpg', 'image_05093.jpg', 'image_05536.jpg', 'image_03897.jpg', 'image_02737.jpg', 'image_00747.jpg', 'image_04563.jpg', 'image_03462.jpg', 'image_05507.jpg', 'image_05230.jpg', 'image_03184.jpg', 'image_03481.jpg', 'image_07330.jpg', 'image_07901.jpg', 'image_07992.jpg', 'image_00735.jpg', 'image_03004.jpg', 'image_07465.jpg', 'image_01523.jpg', 'image_06365.jpg', 'image_00517.jpg', 'image_02786.jpg', 'image_00095.jpg', 'image_03536.jpg', 'image_07090.jpg', 'image_03677.jpg', 'image_04008.jpg', 'image_00869.jpg', 'image_00342.jpg', 'image_00028.jpg', 'image_01244.jpg', 'image_03064.jpg', 'image_01646.jpg', 'image_00518.jpg', 'image_03384.jpg', 'image_06004.jpg', 'image_04536.jpg', 'image_06615.jpg', 'image_06038.jpg', 'image_08165.jpg', 'image_00041.jpg', 'image_04968.jpg', 'image_01182.jpg', 'image_02858.jpg', 'image_03165.jpg', 'image_02314.jpg', 'image_03491.jpg', 'image_06416.jpg', 'image_06544.jpg', 'image_05969.jpg', 'image_02424.jpg', 'image_02956.jpg', 'image_07920.jpg', 'image_00997.jpg', 'image_07581.jpg', 'image_02445.jpg', 'image_00797.jpg', 'image_06187.jpg', 'image_01753.jpg', 'image_04437.jpg', 'image_02000.jpg']
8189
8189
Loading model parameters...
Compiling encoders...
WARNING (theano.tensor.blas): We did not found a dynamic library into the library_dir of the library we use for blas. If you use ATLAS, make sure to compile it with dynamics library.

You can find the C code in this temporary file: /tmp/theano_compilation_error_jO0Mk5
Traceback (most recent call last):
File "/home/mcis-lap-40/raviwork/Tensorflow_Models/text-to-image-master/data_loader.py", line 113, in <module>
main()
File "/home/mcis-lap-40/raviwork/Tensorflow_Models/text-to-image-master/data_loader.py", line 107, in main
save_caption_vectors_flowers(args.data_dir)
File "/home/mcis-lap-40/raviwork/Tensorflow_Models/text-to-image-master/data_loader.py", line 79, in save_caption_vectors_flowers
model = skipthoughts.load_model()
File "/home/mcis-lap-40/raviwork/Tensorflow_Models/text-to-image-master/skipthoughts.py", line 50, in load_model
f_w2v = theano.function([embedding, x_mask], ctxw2v, name='f_w2v')
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/theano/compile/function.py", line 317, in function
output_keys=output_keys)
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/theano/compile/pfunc.py", line 486, in pfunc
output_keys=output_keys)
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/theano/compile/function_module.py", line 1833, in orig_function
fn = m.create(defaults)
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/theano/compile/function_module.py", line 1707, in create
input_storage=input_storage_lists, storage_map=storage_map)
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/theano/gof/link.py", line 699, in make_thunk
storage_map=storage_map)[:3]
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/theano/gof/vm.py", line 1084, in make_all
impl=impl))
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/theano/gof/op.py", line 955, in make_thunk
no_recycling)
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/theano/gof/op.py", line 858, in make_c_thunk
output_storage=node_output_storage)
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/theano/gof/cc.py", line 1215, in make_thunk
keep_lock=keep_lock)
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/theano/gof/cc.py", line 1155, in compile
keep_lock=keep_lock)
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/theano/gof/cc.py", line 1618, in cthunk_factory
key=key, lnk=self, keep_lock=keep_lock)
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/theano/gof/cmodule.py", line 1174, in module_from_key
module = lnk.compile_cmodule(location)
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/theano/gof/cc.py", line 1521, in compile_cmodule
preargs=preargs)
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/theano/gof/cmodule.py", line 2353, in compile_str
(status, compile_stderr.replace('\n', '. ')))
Exception: ('The following error happened while compiling the node', Dot22(Reshape{2}.0, encoder_Wx), '\n', "Compilation failed (return status=1): /usr/bin/ld: /usr/local/lib/libpython2.7.a(cobject.o): relocation R_X86_64_32 against `PyCObject_Type' can not be used when making a shared object; recompile with -fPIC. /usr/local/lib/libpython2.7.a: error adding symbols: Bad value. collect2: error: ld returned 1 exit status. ", '[Dot22(<TensorType(float32, matrix)>, encoder_Wx)]')

Process finished with exit code 1

training error for MS-COCO

When I tried to run training on MS-COCO dataset,

python data_loader.py --data_set='MS-COCO' --data_dir='MSCOCO-data'

I downloaded the images and captions from the MS-COCO dataset (http://mscoco.org/dataset/#download).
The contents of the MSCOCO-data/ folder are as follows:
MSCOCO-data/
|-- annotations
| |-- captions_train2014.json
| |-- captions_val2014.json
| |-- instances_train2014.json
| |-- instances_val2014.json
| |-- person_keypoints_train2014.json
| |-- person_keypoints_val2014.json
|-- meta_train.pkl
|-- train2014
| |-- COCO_train2014_000000000009.jpg
| |-- COCO_train2014_000000000025.jpg

I got the below error while running the training code:

Traceback (most recent call last):
  File "data_loader.py", line 111, in <module>
    main()
  File "data_loader.py", line 108, in main
    save_caption_vectors_ms_coco(args.data_dir, args.split, args.batch_size)
  File "data_loader.py", line 39, in save_caption_vectors_ms_coco
    h5f_tv_batch = h5py.File( join(data_dir, 'tvs/'+split + '_tvs_' + str(batch_no)), 'w')
  File "/usr/local/lib/python2.7/dist-packages/h5py/_hl/files.py", line 272, in __init__
    fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
  File "/usr/local/lib/python2.7/dist-packages/h5py/_hl/files.py", line 98, in make_fid
    fid = h5f.create(name, h5f.ACC_TRUNC, fapl=fapl, fcpl=fcpl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/tmp/pip-4rPeHA-build/h5py/_objects.c:2684)
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/tmp/pip-4rPeHA-build/h5py/_objects.c:2642)
  File "h5py/h5f.pyx", line 96, in h5py.h5f.create (/tmp/pip-4rPeHA-build/h5py/h5f.c:2097)
IOError: Unable to create file (Unable to open file: name = '/home/nitish/mscoco-data/tvs/train_tvs_0', errno = 2, error message = 'no such file or directory', flags = 13, o_flags = 242)
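h5py cannot create a file inside a directory that does not exist, so creating the tvs/ folder under the data directory before running data_loader.py should avoid the IOError above. A sketch (path as in the report):

import os

# data_loader.py writes its per-batch thought-vector files under tvs/.
tvs_dir = os.path.join('MSCOCO-data', 'tvs')
if not os.path.exists(tvs_dir):
    os.makedirs(tvs_dir)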

Error while generating images using the pre-trained models

The error is as follows:
Generated 0
Traceback (most recent call last):
File "/home/amax/fmd/text-to-image/generate_images.py", line 106, in
main()
File "/home/amax/fmd/text-to-image/generate_images.py", line 102, in main
scipy.misc.imsave( join(args.data_dir, 'val_samples/combined_image_{}.jpg'.format(cn)) , combined_image)
File "/usr/local/lib/python2.7/dist-packages/scipy/misc/pilutil.py", line 199, in imsave
im.save(name)
File "/usr/local/lib/python2.7/dist-packages/PIL/Image.py", line 1439, in save
save_handler(self, fp, filename)
File "/usr/local/lib/python2.7/dist-packages/PIL/JpegImagePlugin.py", line 471, in _save
ImageFile._save(im, fp, [("jpeg", (0,0)+im.size, 0, rawmode)])
File "/usr/local/lib/python2.7/dist-packages/PIL/ImageFile.py", line 495, in _save
e = Image._getencoder(im.mode, e, a, im.encoderconfig)
File "/usr/local/lib/python2.7/dist-packages/PIL/Image.py", line 401, in _getencoder
raise IOError("encoder %s not available" % encoder_name)
IOError: encoder jpeg not available
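"encoder jpeg not available" means the installed PIL/Pillow was built without libjpeg support; rebuilding Pillow against the system libjpeg is the usual fix. An alternative that avoids the JPEG encoder entirely is to save PNGs in generate_images.py (the same call as in the traceback, with a different extension):

scipy.misc.imsave(join(args.data_dir, 'val_samples/combined_image_{}.png'.format(cn)), combined_image)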

Error: d_bn1/d_bn1_2/moments/Squeeze/ExponentialMovingAverage/ does not exist

On running the generate_images script, the following errors are received.
Could you please suggest a fix for this?
Thanks

====================================================
python generate_images.py --model_path=Data/Models/latest_model_flowers_temp.ckpt --n_images=8

Traceback (most recent call last):
File "generate_images.py", line 106, in
main()
File "generate_images.py", line 64, in main
_, _, _, _, _ = gan.build_model()
File "repo/model.py", line 40, in build_model
disc_wrong_image, disc_wrong_image_logits = self.discriminator(t_wrong_image, t_real_caption, reuse = True)
File "repo/model.py", line 161, in discriminator
h1 = ops.lrelu( self.d_bn1(ops.conv2d(h0, self.options['df_dim']*2, name = 'd_h1_conv'))) #16
File "repo/Utils/ops.py", line 34, in call
ema_apply_op = self.ema.apply([batch_mean, batch_var])
File "path/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/training/moving_averages.py", line 403, in apply
colocate_with_primary=(var.op.type in ["Variable", "VariableV2"]))
File "path/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/training/slot_creator.py", line 174, in create_zeros_slot
colocate_with_primary=colocate_with_primary)
File "path/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/training/slot_creator.py", line 151, in create_slot_with_initializer
dtype)
File "path/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/training/slot_creator.py", line 67, in _create_slot_var
validate_shape=validate_shape)
File "path/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 1297, in get_variable
constraint=constraint)
File "path/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 1093, in get_variable
constraint=constraint)
File "path/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 439, in get_variable
constraint=constraint)
File "path/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 408, in _true_getter
use_resource=use_resource, constraint=constraint)
File "path/anaconda3/envs/py27/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 765, in _get_single_variable
"reuse=tf.AUTO_REUSE in VarScope?" % name)
ValueError: Variable d_bn1/d_bn1_2/moments/Squeeze/ExponentialMovingAverage/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=tf.AUTO_REUSE in VarScope?

unable to open pickle of skipthoughts

XXX@XXX:~/codes/python_codes/text-to-image$ python generate_thought_vectors.py --caption_file="Data/sample_captions.txt"
['the flower shown has yellow anther red pistil and bright red petals']
Loading model parameters...
Compiling encoders...
Loading tables...
Traceback (most recent call last):
File "generate_thought_vectors.py", line 33, in
main()
File "generate_thought_vectors.py", line 23, in main
model = skipthoughts.load_model()
File "/home/tushar/codes/python_codes/text-to-image/skipthoughts.py", line 60, in load_model
utable, btable = load_tables()
File "/home/tushar/codes/python_codes/text-to-image/skipthoughts.py", line 80, in load_tables
utable = numpy.load(path_to_tables + 'utable.npy')
File "/usr/local/lib/python2.7/dist-packages/numpy/lib/npyio.py", line 406, in load
pickle_kwargs=pickle_kwargs)
File "/usr/local/lib/python2.7/dist-packages/numpy/lib/format.py", line 637, in read_array
array = pickle.load(fp, **pickle_kwargs)
EOFError

Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint.

I tried to use the pre-trained model and got the error below:

Traceback (most recent call last):
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.NotFoundError: Tensor name "d_bn1/moving_mean" not found in checkpoint files Data/Models/latest_model_flowers_temp.ckpt
[[{{node save/RestoreV2}} = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1546, in restore
{self.saver_def.filename_tensor_name: save_path})
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run
run_metadata)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Tensor name "d_bn1/moving_mean" not found in checkpoint files Data/Models/latest_model_flowers_temp.ckpt
[[node save/RestoreV2 (defined at generate_images.py:67) = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]

Caused by op 'save/RestoreV2', defined at:
File "generate_images.py", line 107, in
main()
File "generate_images.py", line 67, in main
saver = tf.train.Saver()
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1102, in init
self.build()
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1114, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1151, in _build
build_save=build_save, build_restore=build_restore)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 795, in _build_internal
restore_sequentially, reshape)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 406, in _AddRestoreOps
restore_sequentially)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 862, in bulk_restore
return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1466, in restore_v2
shape_and_slices=shape_and_slices, dtypes=dtypes, name=name)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
op_def=op_def)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1770, in init
self._traceback = tf_stack.extract_stack()

NotFoundError (see above for traceback): Tensor name "d_bn1/moving_mean" not found in checkpoint files Data/Models/latest_model_flowers_temp.ckpt
[[node save/RestoreV2 (defined at generate_images.py:67) = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1556, in restore
names_to_keys = object_graph_key_mapping(save_path)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1830, in object_graph_key_mapping
checkpointable.OBJECT_GRAPH_PROTO_KEY)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 371, in get_tensor
status)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 528, in exit
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.NotFoundError: _CHECKPOINTABLE_OBJECT_GRAPH not found in checkpoint file

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "generate_images.py", line 107, in
main()
File "generate_images.py", line 68, in main
saver.restore(sess, args.model_path)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1562, in restore
err, "a Variable name or other graph key that is missing")
tensorflow.python.framework.errors_impl.NotFoundError: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Tensor name "d_bn1/moving_mean" not found in checkpoint files Data/Models/latest_model_flowers_temp.ckpt
[[node save/RestoreV2 (defined at generate_images.py:67) = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]

Caused by op 'save/RestoreV2', defined at:
File "generate_images.py", line 107, in
main()
File "generate_images.py", line 67, in main
saver = tf.train.Saver()
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1102, in init
self.build()
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1114, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1151, in _build
build_save=build_save, build_restore=build_restore)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 795, in _build_internal
restore_sequentially, reshape)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 406, in _AddRestoreOps
restore_sequentially)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 862, in bulk_restore
return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_io_ops.py", line 1466, in restore_v2
shape_and_slices=shape_and_slices, dtypes=dtypes, name=name)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
op_def=op_def)
File "/home/lucao/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1770, in init
self._traceback = tf_stack.extract_stack()

NotFoundError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Tensor name "d_bn1/moving_mean" not found in checkpoint files Data/Models/latest_model_flowers_temp.ckpt
[[node save/RestoreV2 (defined at generate_images.py:67) = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]
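
A likely cause of the "Tensor name ... not found in checkpoint" failure above is a mismatch between the variables in the graph being built and those stored in the checkpoint, e.g. when the discriminator's batch-norm layers were created differently by the TensorFlow version that saved the model. A minimal diagnostic sketch, assuming a recent TensorFlow 1.x and the checkpoint path from the traceback, lists what the checkpoint actually contains so it can be compared with the names the graph expects:

    # Diagnostic sketch: list every variable stored in the checkpoint
    # so missing names such as "d_bn1/moving_mean" can be spotted.
    import tensorflow as tf

    ckpt_path = "Data/Models/latest_model_flowers_temp.ckpt"

    # Each entry is a (variable_name, shape) pair.
    for name, shape in tf.train.list_variables(ckpt_path):
        print(name, shape)

If "d_bn1/moving_mean" is absent from that listing, the checkpoint was written by a graph whose batch-norm variables were named differently, and the model has to be restored with a matching TensorFlow version or re-trained.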

package installation error

I am unable to install pickle and cPickle on Python 2.7.6. While installing, I get the error below:

Collecting Pickle

Could not find a version that satisfies the requirement Pickle (from versions: )
No matching distribution found for Pickle
You are using pip version 8.0.2, however version 9.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.

Please let me know the solution.
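
For what it's worth, pickle is part of the Python standard library and is not distributed on PyPI, which is exactly why pip reports "No matching distribution found"; on Python 2, cPickle is likewise built in as a faster C implementation. Nothing needs to be installed. A portable import that works on both Python 2 and 3 looks like this:

    # pickle ships with Python itself; nothing is pip-installable here.
    try:
        import cPickle as pickle  # Python 2: faster C implementation
    except ImportError:
        import pickle             # Python 3: C implementation is built in

    data = {"caption": "the flower has yellow petals"}
    assert pickle.loads(pickle.dumps(data)) == data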

I have a problem with using the pre-trained model

Unknown: NewRandomAccessFile failed to Create/Open: Data\Models : access denied.

Some people think it is caused by the Python version. I guess it's because I use Python 3, but the pre-trained model was saved with Python 2.
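
Note that the path in the error is the Data\Models directory itself, not a checkpoint file, so TensorFlow is being asked to open a folder; that points to a path problem rather than a Python 2 vs 3 difference. Assuming the model was saved under the checkpoint prefix used elsewhere in these issues, --model_path should name that prefix (forward slashes also work on Windows):

    python generate_images.py --model_path=Data/Models/latest_model_flowers_temp.ckpt --n_images=8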

TensorFlow error

Your code is giving me errors, especially the TensorFlow code:

File "/home/uday/Documents/GAN1/model.py", line 138, in generator
z_concat = tf.concat(1,[t_z, reduced_text_embedding])
File "/home/uday/anaconda3/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 1111, in concat
dtype=dtypes.int32).get_shape().assert_is_compatible_with(
File "/home/uday/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 998, in convert_to_tensor
as_ref=False)
File "/home/uday/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1094, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/home/uday/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", line 217, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "/home/uday/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", line 196, in constant
value, dtype=dtype, shape=shape, verify_shape=verify_shape))
File "/home/uday/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", line 436, in make_tensor_proto
_AssertCompatible(values, dtype)
File "/home/uday/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", line 347, in _AssertCompatible
(dtype.name, repr(mismatch), type(mismatch).__name__))
TypeError: Expected int32, got list containing Tensors of type '_Message' instead.
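
This TypeError is the well-known TensorFlow 1.0 API change: tf.concat(concat_dim, values) became tf.concat(values, axis), so code written for TF 0.x ends up passing the axis where the list of tensors is expected. A one-line sketch of the fix for the call shown in the traceback:

    # TF >= 1.0: the tensor list comes first, the axis second.
    # Old TF 0.x call that raises the '_Message' TypeError:
    #   z_concat = tf.concat(1, [t_z, reduced_text_embedding])
    z_concat = tf.concat([t_z, reduced_text_embedding], axis=1)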

Can't download the caption dataset

Dear experts,
I still have a problem with the dataset "flowers_text_c10.tar.bz2": I can't download it, and I couldn't manage it through the repo by following the explanation in the download code. Is there any other approach I can follow to get it, such as via e-mail?
Thanks.

Generated images don't correspond to caption

Maybe I'm doing something stupid, but essentially I followed your steps for generating images with the pretrained model:

python generate_thought_vectors.py --caption_file="Data/sample_captions.txt"

where sample_captions.txt has just these words: "the flower has yellow petals and the center of it is brown" (Is this format correct?)
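
(For what it's worth, generate_thought_vectors.py appears to read one caption per line from the file, so a single-line caption file like this should be a valid format.)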

python generate_images.py --model_path=Data/Models/latest_model_flowers_temp.ckpt --n_images=8

A single combined_image_0.jpg comes out in the val_samples folder, but the images DO NOT "have yellow petals and the center of it is brown". As far as I can tell, the model is not conditioning on the text properly. What am I doing wrong?

Evaluation Metrics

Hi,
First of all, thanks for your awesome work! I'm wondering whether there are any evaluation metrics for this kind of generative model, since I want to compare the performance of Skip Thought Vectors against other embedding options for captions.
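
The metrics most commonly reported for text-to-image GANs are the Inception Score (IS) and the Fréchet Inception Distance (FID); both summarize Inception-network activations of the generated images, so they let you compare skip-thought conditioning against other caption embeddings on the same samples. A minimal NumPy sketch of the FID computation, assuming Inception pool features have already been extracted for the two image sets:

    # Minimal FID sketch. real_feats / fake_feats are assumed to be
    # Inception activations of shape [n_samples, feature_dim].
    import numpy as np
    from scipy import linalg

    def frechet_distance(real_feats, fake_feats):
        mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
        cov_r = np.cov(real_feats, rowvar=False)
        cov_f = np.cov(fake_feats, rowvar=False)
        covmean = linalg.sqrtm(cov_r.dot(cov_f))  # matrix square root
        if np.iscomplexobj(covmean):
            covmean = covmean.real  # discard numerical noise
        diff = mu_r - mu_f
        return diff.dot(diff) + np.trace(cov_r + cov_f - 2.0 * covmean)

Lower FID means the generated distribution is closer to the real one, which makes it a natural yardstick for comparing caption-embedding choices.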

training error

While running train.py I got the errors below. If anyone knows the solution, please let me know.

Traceback (most recent call last):
File "/home/mcis-lap-40/raviwork/Tensorflow_Models/text-to-image-master/train.py", line 250, in
main()
File "/home/mcis-lap-40/raviwork/Tensorflow_Models/text-to-image-master/train.py", line 77, in main
input_tensors, variables, loss, outputs, checks = gan.build_model()
File "/home/mcis-lap-40/raviwork/Tensorflow_Models/text-to-image-master/model.py", line 39, in build_model
disc_real_image, disc_real_image_logits = self.discriminator(t_real_image, t_real_caption)
File "/home/mcis-lap-40/raviwork/Tensorflow_Models/text-to-image-master/model.py", line 159, in discriminator
h0 = ops.lrelu(ops.conv2d(image, self.options['df_dim'], name = 'd_h0_conv')) #32
File "/home/mcis-lap-40/raviwork/Tensorflow_Models/text-to-image-master/Utils/ops.py", line 76, in conv2d
initializer=tf.truncated_normal_initializer(stddev=stddev))
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 1065, in get_variable
use_resource=use_resource, custom_getter=custom_getter)
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 962, in get_variable
use_resource=use_resource, custom_getter=custom_getter)
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 367, in get_variable
validate_shape=validate_shape, use_resource=use_resource)
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 352, in _true_getter
use_resource=use_resource)
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 682, in _get_single_variable
"VarScope?" % name)
ValueError: Variable d_h0_conv/w does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?

Process finished with exit code 1
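
This ValueError comes from TensorFlow's variable-scope reuse rules: if a scope is entered with reuse enabled before the variable has ever been created, tf.get_variable refuses to create it. In this codebase the discriminator is built twice (once for real and once for fake images), so the first call must create the variables and only the second may reuse them. A minimal sketch of the pattern for TF 1.x; the scope name and shapes are illustrative, not the repo's exact code:

    import tensorflow as tf

    def discriminator(image, reuse=None):
        # First call creates the variables (reuse=None);
        # later calls must pass reuse=True to share them.
        with tf.variable_scope("d_h0_conv", reuse=reuse):
            w = tf.get_variable(
                "w", shape=[5, 5, 3, 64],
                initializer=tf.truncated_normal_initializer(stddev=0.02))
        return tf.nn.conv2d(image, w, strides=[1, 2, 2, 1], padding="SAME")

    real_images = tf.placeholder(tf.float32, [None, 64, 64, 3])
    fake_images = tf.placeholder(tf.float32, [None, 64, 64, 3])
    d_real = discriminator(real_images)              # creates d_h0_conv/w
    d_fake = discriminator(fake_images, reuse=True)  # reuses d_h0_conv/w

If the error appears on the very first build, check that nothing calls tf.get_variable_scope().reuse_variables() before the variables exist.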

Training Error

While running train.py, I am facing the following errors. Please help me solve them.

/usr/bin/python2.7 /home/mcis-lap-40/raviwork/Tensorflow_Models/text-to-image-master/train.py
Traceback (most recent call last):
File "/home/mcis-lap-40/raviwork/Tensorflow_Models/text-to-image-master/train.py", line 250, in
main()
File "/home/mcis-lap-40/raviwork/Tensorflow_Models/text-to-image-master/train.py", line 77, in main
input_tensors, variables, loss, outputs, checks = gan.build_model()
File "/home/mcis-lap-40/raviwork/Tensorflow_Models/text-to-image-master/model.py", line 37, in build_model
fake_image = self.generator(t_z, t_real_caption)
File "/home/mcis-lap-40/raviwork/Tensorflow_Models/text-to-image-master/model.py", line 137, in generator
z_concat = tf.concat(1, [t_z, reduced_text_embedding])
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/tensorflow/python/ops/array_ops.py", line 1061, in concat
dtype=dtypes.int32).get_shape(
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 611, in convert_to_tensor
as_ref=False)
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 676, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/tensorflow/python/framework/constant_op.py", line 121, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/tensorflow/python/framework/constant_op.py", line 102, in constant
tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/tensorflow/python/framework/tensor_util.py", line 376, in make_tensor_proto
_AssertCompatible(values, dtype)
File "/home/mcis-lap-40/.local/lib/python2.7/site-packages/tensorflow/python/framework/tensor_util.py", line 302, in _AssertCompatible
(dtype.name, repr(mismatch), type(mismatch).__name__))
TypeError: Expected int32, got list containing Tensors of type '_Message' instead.

Process finished with exit code 1
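
This is the same tf.concat argument-order change as in the TypeError above: under TF >= 1.0 the call at model.py line 137 needs its arguments swapped, e.g.

    z_concat = tf.concat([t_z, reduced_text_embedding], 1)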

Training MS_COCO Error

When I try to run train.py using the MS-COCO dataset, I get the errors below.

Please provide suggestions for solving them.

ERRORS:
Traceback (most recent call last):
File "/home/mcis-lap-40/raviwork/Tensorflow_Models/text-to-image-generation/train.py", line 227, in
main()
File "/home/mcis-lap-40/raviwork/Tensorflow_Models/text-to-image-generation/train.py", line 105, in main
loaded_data)
TypeError: 'NoneType' object is not iterable

Process finished with exit code 1
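
The "'NoneType' object is not iterable" error means the data-loading step handed train.py a None where a dataset structure was expected. A plausible cause, hedged because it depends on the exact revision, is that the loader only recognizes the flowers set and returns nothing for other --data_set values; MS-COCO captions would first need their own skip-thought preprocessing. A defensive guard, with a hypothetical helper name, that fails loudly instead:

    # Hypothetical guard; check_loaded_data is not part of the repo.
    def check_loaded_data(loaded_data, data_set):
        """Fail loudly if the loader silently returned None."""
        if loaded_data is None:
            raise ValueError(
                "Data loading returned None for data_set=%r; run "
                "data_loader.py on that set first (only 'flowers' is "
                "preprocessed out of the box)." % data_set)
        return loaded_data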
