y0ast / vae-tensorflow
Implementation of a Variational Auto-Encoder in TensorFlow
License: MIT License
It seems that you apply L2 regularization to all of the weights but not to the bias terms. Is there a reason for that?
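For illustration, the pattern being asked about usually looks something like the sketch below; the variable names and the weight-decay factor here are hypothetical, the point is only that tf.nn.l2_loss is applied to the weight matrices while the bias vectors are left out.

import tensorflow as tf

# Hypothetical encoder/decoder weights and biases, for illustration only.
W_encoder = tf.Variable(tf.truncated_normal([784, 400], stddev=0.01))
b_encoder = tf.Variable(tf.zeros([400]))
W_decoder = tf.Variable(tf.truncated_normal([20, 400], stddev=0.01))
b_decoder = tf.Variable(tf.zeros([400]))

# L2 penalty computed over the weight matrices only; the biases are not regularized.
l2_penalty = 1e-3 * (tf.nn.l2_loss(W_encoder) + tf.nn.l2_loss(W_decoder))
# total_loss = reconstruction_loss + kl_divergence + l2_penalty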
I have run your program, but the reconstruction loss between the input and the output is high: the sigmoid-cross-entropy-with-logits loss is about 0.25. I don't think this model maps the input to the latent features and reconstructs from them very well.
Hello,
I want to write a VAE with TensorFlow, and I was looking at your implementation. I am surprised by this line: x_hat = tf.matmul(hidden_decoder, W_decoder_hidden_reconstruction) + b_decoder_hidden_reconstruction
Isn't it more usual to add an activation function to build the output from the decoder's hidden layer, i.e. doing something like x_hat = tf.nn.sigmoid(tf.matmul(hidden_decoder, W_decoder_hidden_reconstruction) + b_decoder_hidden_reconstruction)?
Thanks!
Grégoire
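One likely reason the sigmoid is omitted at that line is that the reconstruction loss mentioned above is computed with sigmoid cross entropy on the logits, which applies the sigmoid internally and is numerically stabler than applying it first; the explicit sigmoid is then only needed when the reconstructed image itself is wanted. A minimal sketch of that arrangement, assuming x is the input placeholder (the keyword signature below is the newer TensorFlow one; older releases took the arguments positionally as (logits, targets)):

logits = tf.matmul(hidden_decoder, W_decoder_hidden_reconstruction) + b_decoder_hidden_reconstruction

# The loss applies the sigmoid internally, so the network outputs raw logits.
reconstruction_loss = tf.reduce_sum(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=x, logits=logits), 1)

# The explicit sigmoid is only applied when the reconstruction itself is needed.
x_hat = tf.nn.sigmoid(logits)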
envy@ub1404:~/os_pri/github/VAE-TensorFlow$ python main.py
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:105] successfully opened CUDA library libcurand.so locally
Extracting MNIST/train-images-idx3-ubyte.gz
Extracting MNIST/train-labels-idx1-ubyte.gz
Extracting MNIST/t10k-images-idx3-ubyte.gz
Extracting MNIST/t10k-labels-idx1-ubyte.gz
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:900] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties:
name: GeForce GTX 950M
major: 5 minor: 0 memoryClockRate (GHz) 1.124
pciBusID 0000:01:00.0
Total memory: 4.00GiB
Free memory: 3.60GiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0: Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:755] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 950M, pci bus id: 0000:01:00.0)
Initializing parameters
Traceback (most recent call last):
File "main.py", line 103, in
save_path = saver.save(sess, "save/model.ckpt")
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 1037, in save
{self.saver_def.filename_tensor_name: checkpoint_file})
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 340, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 564, in _run
feed_dict_string, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 637, in _do_run
target_list, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 659, in _do_call
e.code)
tensorflow.python.framework.errors.NotFoundError: save/model.ckpt.tempstate14807336009671266924
[[Node: save/save = SaveSlices[T=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/save/tensor_names, save/save/shapes_and_slices, Variable/_71, Variable/Adam/_73, Variable/Adam_1/_75, Variable_1/_77, Variable_1/Adam/_79, Variable_1/Adam_1/_81, Variable_2/_83, Variable_2/Adam/_85, Variable_2/Adam_1/_87, Variable_3/_89, Variable_3/Adam/_91, Variable_3/Adam_1/_93, Variable_4/_95, Variable_4/Adam/_97, Variable_4/Adam_1/_99, Variable_5/_101, Variable_5/Adam/_103, Variable_5/Adam_1/_105, Variable_6/_107, Variable_6/Adam/_109, Variable_6/Adam_1/_111, Variable_7/_113, Variable_7/Adam/_115, Variable_7/Adam_1/_117, Variable_8/_119, Variable_8/Adam/_121, Variable_8/Adam_1/_123, Variable_9/_125, Variable_9/Adam/_127, Variable_9/Adam_1/_129, beta1_power/_131, beta2_power/_133)]]
Caused by op u'save/save', defined at:
File "main.py", line 81, in
saver = tf.train.Saver()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 832, in init
restore_sequentially=restore_sequentially)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 500, in build
save_tensor = self._AddSaveOps(filename_tensor, vars_to_save)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 197, in _AddSaveOps
save = self.save_op(filename_tensor, vars_to_save)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 149, in save_op
tensor_slices=[vs.slice_spec for vs in vars_to_save])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/io_ops.py", line 172, in _save
tensors, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_io_ops.py", line 341, in _save_slices
name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/op_def_library.py", line 661, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2154, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1154, in init
self._traceback = _extract_stack()
envy@ub1404:~/os_pri/github/VAE-TensorFlow$
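For reference, this NotFoundError on a model.ckpt.tempstate file usually indicates that the directory being saved into does not exist when saver.save runs; a minimal guard, assuming the save path "save/model.ckpt" from the traceback above, would be:

import os

# Create the checkpoint directory if it is missing; older TensorFlow Savers
# do not create a missing directory themselves and fail at save time instead.
if not os.path.isdir("save"):
    os.makedirs("save")
save_path = saver.save(sess, "save/model.ckpt")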
I have read your code; it is very good. But I saw that the two variables mu_encoder and logvar_encoder are both shaped [batch_size, latent_dim]. If I want to generate data, I should sample a random value and use a mu and a logvar to generate it, but after training we have learned batch_size mus and logvars. How can I pick a mu and a logvar from them?
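If it helps, mu_encoder and logvar_encoder are outputs of the encoder for whatever batch is currently fed in, not free parameters learned per example, so for generation one would typically sample the latent code directly from the standard normal prior and run only the decoder. A minimal sketch, assuming z is the tensor holding the reparameterized latent code and x_hat is the decoder output from the line quoted earlier:

import numpy as np

# Sample a latent code from the prior N(0, I) instead of reusing any
# particular mu/logvar produced during training.
z_sample = np.random.randn(1, latent_dim).astype(np.float32)

# Feed the sampled code into the decoder half of the graph.
generated_image = sess.run(x_hat, feed_dict={z: z_sample})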
Hello, thanks for sharing this code. I am confused about one thing: the MNIST data are continuous (the pixel values are not just 0 and 1), so why can we use a Bernoulli distribution in the decoder?
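One common justification is that the Bernoulli cross-entropy stays well defined for targets anywhere in [0, 1], and for a fixed target t it is minimized when the predicted probability equals t, so it still behaves as a sensible reconstruction loss on continuous pixel values; binarizing MNIST is the other common option. A small numeric check of the first claim:

import numpy as np

def bernoulli_ce(t, p):
    # Negative Bernoulli log-likelihood; well defined for any t in [0, 1].
    return -(t * np.log(p) + (1.0 - t) * np.log(1.0 - p))

t = 0.3                                   # a continuous "pixel" value
p = np.linspace(0.01, 0.99, 99)           # candidate predicted probabilities
print(p[np.argmin(bernoulli_ce(t, p))])   # ~0.3: the loss is minimized at p == t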