
pose-gan's People

Contributors

aliaksandrsiarohin


pose-gan's Issues

Train/test data split

Hi, I noticed that you have two data splits for each dataset - one old and one new. May I know why you have two different versions? And which one should I use if I want to reproduce the numbers in your paper?

Thanks a lot!

input_1:0 is both fed and fetched.

I'm running this command

CUDA_VISIBLE_DEVICES=0 python test.py --generator_checkpoint checkpoints/market/generator-no-warp.h5 --warp_skip none --dataset market

But I'm getting this error:

input_1:0 is both fed and fetched

tf==1.11.0
keras==2.2.3

Any ideas on how to solve it?

It seems to be an incompatibility problem

I use the latest Keras 2.2.2 and keras-contrib. I run the test command "python test.py --generator_checkpoint path" in Anaconda, but I get "ValueError: You are trying to load a weight file containing 31 layers into a model with 35 layers" in the function load_weights_from_hdf5_group. I downgraded Keras to 2.1.6 and installed a keras-contrib from around the same date, but it still does not work. Is it a version problem? Can you tell me which version is correct, or what I should do? Thank you!

The error is:

Generate images...
Number of images: 32668
Number of pairs train: 263632
Number of pairs test: 12000
Traceback (most recent call last):
  File "test.py", line 152, in <module>
    test()
  File "test.py", line 118, in test
    generator.load_weights(args.generator_checkpoint)
  File "/home/lgh/anaconda2/envs/pose-gan/lib/python2.7/site-packages/keras/engine/topology.py", line 2667, in load_weights
    f, self.layers, reshape=reshape)
  File "/home/lgh/anaconda2/envs/pose-gan/lib/python2.7/site-packages/keras/engine/topology.py", line 3365, in load_weights_from_hdf5_group
    str(len(filtered_layers)) + ' layers.')
ValueError: You are trying to load a weight file containing 31 layers into a model with 35 layers.

About body mask

Hi Aliaksandr,
Thanks for your work and code. I have a question about one of the ten masks: torso mask.
In your implementation, in def pose_masks(array2, img_size)::

body_mask = np.ones(img_size)# mask_from_kp_array(get_array_of_points(kp2, ['Rhip', 'Lhip', 'Lsho', 'Rsho']), 0.1 * st2, img_size)

According to my understanding, this line aims to obtain the torso mask. Why do you use an all-ones mask rather than the mask computed by the commented-out call mask_from_kp_array(...)?
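For context, a keypoint-driven torso mask along the lines of the commented-out call might look like the following minimal sketch (assuming (row, col) keypoint coordinates and simply filling the shoulder/hip quadrilateral; kp_torso_mask is a hypothetical name, not the repository's API):

import numpy as np
from skimage.draw import polygon

def kp_torso_mask(kp, img_size):
    # kp maps joint names to (row, col) coordinates; trace the torso
    # boundary Rsho -> Lsho -> Lhip -> Rhip and fill the quadrilateral.
    pts = np.array([kp['Rsho'], kp['Lsho'], kp['Lhip'], kp['Rhip']])
    mask = np.zeros(img_size)
    rr, cc = polygon(pts[:, 0], pts[:, 1], shape=img_size)
    mask[rr, cc] = 1
    return mask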

Downloading testing checkpoints

Hi, I'm trying to download the checkpoints for testing, but I hit a download error in the process. Since the files are hosted on a drive that wget cannot access, I planned to download them to my local machine and then move them via scp or similar; however, that fails due to the massive file size. Is there a more convenient way to get these checkpoint files?

Thanks in advance.

Test result not looking good on fashion dataset (blurry faces), any ideas?

Hi, I just ran the test on the fashion dataset with all of your pre-trained models. In every case I used the same parameters as in full mode; for the fashion dataset:

CUDA_VISIBLE_DEVICES=1 python test.py --generator_checkpoint path/to/generator/checkpoint --output_dir output/full --checkpoints_dir output/full --warp_skip mask --dataset fasion --l1_penalty_weight 0.01 --nn_loss_area_size 5 --batch_size 2 --content_loss_layer block1_conv2 --number_of_epochs 90

The results look okay for Market images (probably because they are low-resolution and blurry in the first place). But for the fashion dataset I notice severe degradation of the human faces: distorted contours, blurry eyes/nose/mouth, etc. It's not as good as the results posted in your paper. Below are some results from generator-warp-mask-nn5-cl12.h5; any other checkpoint yields similar results.

[Result image pairs: fasionmenjacketsvestsid0000243904_1front.jpg / _2side.jpg; fasionwomencardigansid0000368603_2side.jpg / _3back.jpg; fasionwomenteestanksid0000665502_2side.jpg / _1front.jpg]

I'm afraid I used the wrong parameters for testing. Can you tell me the correct testing settings? Also, which checkpoint gave the best results in your experiments? I can't tell from the generated results, since they all share the same blurry-face problem.

problems when testing checkpoint files

Hi,
Thank you for your great work! I have encountered some inexplicable problems when using your checkpoints.
When I test on the fashion dataset with
python test.py --generator_checkpoint ./checkpoints/generator-warp-mask-nn5-cl12.h5 --warp_skip mask
I encounter the previously reported error:
"tensorflow.python.framework.errors_impl.InvalidArgumentError: input_1:0 is both fed and fetched."
Then I fixed it by replacing line 142 with:
outputs = [input_img] + input_pose + [out, output_pose] + bg_img + warp_in_disc
# wrap each output in an identity Lambda so that no graph input is also fetched directly
outputs = [keras.layers.Lambda(lambda x: ktf.identity(x))(out) for out in outputs]
return Model(inputs=[input_img] + input_pose + [output_img, output_pose] + bg_img + warp, outputs=outputs)
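For reference, a minimal sketch (assuming TF 1.x with standalone Keras; all names here are illustrative) of why the error appears and why the identity wrapper helps:

import numpy as np
from keras.layers import Input, Lambda
from keras.models import Model
import keras.backend as K

inp = Input(shape=(4,))
# Model(inputs=inp, outputs=inp) would make predict() both feed and fetch
# the same placeholder input_1:0, which a TF 1.x session rejects.
out = Lambda(lambda x: K.identity(x))(inp)  # copy through an identity op
model = Model(inputs=inp, outputs=out)
print(model.predict(np.zeros((1, 4))))  # works: the output is a new tensor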

However, a new problem appears while running:

Traceback (most recent call last):
  File "/data00/home/menyifang/code/comparison/pose-gan/test.py", line 152, in <module>
    test()
  File "/data00/home/menyifang/code/comparison/pose-gan/test.py", line 119, in test
    input_images, target_images, generated_images, names = generate_images(dataset, generator, args.use_input_pose)
  File "/data00/home/menyifang/code/comparison/pose-gan/test.py", line 91, in generate_images
    out = generator.predict(batch)
  File "/data00/home/menyifang/anaconda3/envs/tfgpu/lib/python3.6/site-packages/keras/engine/training.py", line 1462, in predict
    callbacks=callbacks)
  File "/data00/home/menyifang/anaconda3/envs/tfgpu/lib/python3.6/site-packages/keras/engine/training_arrays.py", line 324, in predict_loop
    batch_outs = f(ins_batch)
  File "/data00/home/menyifang/anaconda3/envs/tfgpu/lib/python3.6/site-packages/tensorflow/python/keras/backend.py", line 3076, in __call__
    run_metadata=self.run_metadata)
  File "/data00/home/menyifang/anaconda3/envs/tfgpu/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1439, in __call__
    run_metadata_ptr)
  File "/data00/home/menyifang/anaconda3/envs/tfgpu/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 528, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: output dimensions must be positive
         [[{{node affine_transform_layer_2/ResizeNearestNeighbor}}]]
         [[{{node lambda_3/Identity}}]]

Any idea how to solve this problem?
Versions I use: tf 1.13.1, keras 2.3.0

json.decoder.JSONDecodeError: ('Expecting value: line 1 column 1 (char 0)', 'occurred at index 0') in demo.py

I'm running demo.py on Windows 10; below is the issue I am getting.

Create pairs dataset...
Traceback (most recent call last):
  File "demo.py", line 86, in <module>
    df = filter_not_valid(df_keypoints)
  File "C:\Users\Downloads\pose gan\pose gan\pose-gan-master\create_pairs_dataset.py", line 14, in filter_not_valid
    return df_keypoints[df_keypoints.apply(check_valid, axis=1)].copy()
  File "C:\Users\AppData\Local\Continuum\Anaconda33\lib\site-packages\pandas\core\frame.py", line 6014, in apply
    return op.get_result()
  File "C:\Users\AppData\Local\Continuum\Anaconda33\lib\site-packages\pandas\core\apply.py", line 142, in get_result
    return self.apply_standard()
  File "C:\Users\AppData\Local\Continuum\Anaconda33\lib\site-packages\pandas\core\apply.py", line 248, in apply_standard
    self.apply_series_generator()
  File "C:\Users\AppData\Local\Continuum\Anaconda33\lib\site-packages\pandas\core\apply.py", line 277, in apply_series_generator
    results[i] = self.f(v)
  File "C:\Users\Downloads\pose gan\pose gan\pose-gan-master\create_pairs_dataset.py", line 11, in check_valid
    kp_array = pose_utils.load_pose_cords_from_strings(str(x['keypoints_y']), str(x['keypoints_x']))
  File "C:\Users\Downloads\pose gan\pose gan\pose-gan-master\pose_utils.py", line 92, in load_pose_cords_from_strings
    y_cords = json.loads(y_str)
  File "C:\Users\AppData\Local\Continuum\Anaconda33\lib\json\__init__.py", line 354, in loads
    return _default_decoder.decode(s)
  File "C:\Users\AppData\Local\Continuum\Anaconda33\lib\json\decoder.py", line 339, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\AppData\Local\Continuum\Anaconda33\lib\json\decoder.py", line 357, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: ('Expecting value: line 1 column 1 (char 0)', 'occurred at index 0')

I've made one small change to the code: in the kp_array line I wrapped x['keypoints_y'] and x['keypoints_x'] in str(...), because otherwise it said "the JSON object must be str, bytes or bytearray, not float".
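For what it's worth, the "occurred at index 0" part is consistent with a missing keypoints cell: pandas reads an empty CSV cell as NaN (a float), and neither the float nor its string form is valid JSON. A quick illustration:

import json

cell = float('nan')  # what pandas yields for an empty CSV cell
try:
    json.loads(str(cell))  # str(nan) == 'nan' is not valid JSON
except json.JSONDecodeError as e:
    print(e)  # Expecting value: line 1 column 1 (char 0)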

error in training with warp_skip = mask / full

Hello,

When I train the model with the configs you mentioned (warp_skip = mask or full), it gives me the error below. I don't have this problem with the baseline, i.e. warp_skip = none.
Can you help?

Thanks

2019-03-18 20:09:27.117571: I tensorflow/core/kernels/cuda_solvers.cc:159] Creating CudaSolver handles for stream 0x5637fe29dc40
2019-03-18 20:09:27.477257: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at matrix_inverse_op.cc:191 : Internal: tensorflow/core/kernels/cuda_solvers.cc:803: cuBlas call failed status = 13

Traceback (most recent call last):
  File "train.py", line 29, in <module>
    main()
  File "train.py", line 26, in main
    trainer.train()
  File "Deformable_GAN_last_try/gan/train.py", line 106, in train
    self.train_one_epoch((((self.current_epoch + 1) % self.checkpoint_ratio == 0) or self.current_epoch==0))
  File "Deformable_GAN_last_try/gan/train.py", line 79, in train_one_epoch
    self.train_one_step(discriminator_loss_list, generator_loss_list)
  File "Deformable_GAN_last_try/gan/train.py", line 68, in train_one_step
    loss = self.generator_model.train_on_batch(generator_batch, np.zeros([self.batch_size]))
  File "venvs/p2tf_new/local/lib/python2.7/site-packages/keras/engine/training.py", line 1217, in train_on_batch
    outputs = self.train_function(ins)
  File "venvs/p2tf_new/local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 2715, in __call__
    return self._call(inputs)
  File "venvs/p2tf_new/local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 2675, in _call
    fetched = self._callable_fn(*array_vals)
  File "venvs/p2tf_new/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1439, in __call__
    run_metadata_ptr)
  File "venvs/p2tf_new/local/lib/python2.7/site-packages/tensorflow/python/framework/errors_impl.py", line 528, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InternalError: tensorflow/core/kernels/cuda_solvers.cc:803: cuBlas call failed status = 13
         [[{{node training_1/Adam/gradients/model_1/affine_transform_layer_2/transform/ImageProjectiveTransformV2_grad/MatrixInverse}} = MatrixInverse[T=DT_FLOAT, _class=["loc:@train...ransformV2"], adjoint=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](training_1/Adam/gradients/model_1/affine_transform_layer_2/transform/ImageProjectiveTransformV2_grad/flat_transforms_to_matrices/Reshape_1)]]
         [[{{node loss/mul/_1251}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_7915_loss/mul", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]] 

improve training

I have trained the network and find that texture is not generated properly; the results are blurry. For how many epochs did you train the model? What could be changed to improve the generated images?

Transferred pose estimator?

Hi,
May I ask whether the pose estimator model you provide was trained on DeepFashion with some transfer learning?

Thanks

About test checkpoint files

Hi,
Could you take one dataset as an example and explain in detail how to use the checkpoint files? I have encountered some inexplicable problems when using your checkpoints.

For example, on Market I run
python test.py --generator_checkpoint ../checkpoints/market/generator-warp-mask-nn3-cl12.h5 --warp_skip mask
and encounter the following error:
Traceback (most recent call last):
  File "test.py", line 152, in <module>
    test()
  File "test.py", line 119, in test
    input_images, target_images, generated_images, names = generate_images(dataset, generator, args.use_input_pose)
  File "test.py", line 91, in generate_images
    out = generator.predict(batch)
  File "/home/hyw/zp/Anaconda2_pytorch_keras/lib/python2.7/site-packages/keras/engine/training.py", line 1169, in predict
    steps=steps)
  File "/home/hyw/zp/Anaconda2_pytorch_keras/lib/python2.7/site-packages/keras/engine/training_arrays.py", line 294, in predict_loop
    batch_outs = f(ins_batch)
  File "/home/hyw/zp/Anaconda2_pytorch_keras/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 2715, in __call__
    return self._call(inputs)
  File "/home/hyw/zp/Anaconda2_pytorch_keras/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 2671, in _call
    session)
  File "/home/hyw/zp/Anaconda2_pytorch_keras/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 2623, in _make_callable
    callable_fn = session._make_callable_from_options(callable_opts)
  File "/home/hyw/zp/Anaconda2_pytorch_keras/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1471, in _make_callable_from_options
    return BaseSession._Callable(self, callable_options)
  File "/home/hyw/zp/Anaconda2_pytorch_keras/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1425, in __init__
    session._session, options_ptr, status)
  File "/home/hyw/zp/Anaconda2_pytorch_keras/lib/python2.7/site-packages/tensorflow/python/framework/errors_impl.py", line 528, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: input_1:0 is both fed and fetched.
Exception tensorflow.python.framework.errors_impl.InvalidArgumentError: InvalidArgumentError() in <bound method _Callable.__del__ of <tensorflow.python.client.session._Callable object at 0x7f8f7c5fb5d0>> ignored
How should I deal with this problem?

Is it possible to transpose person's pose?

I want to use pose-gan in this way:

  1. Input two images (image1 and image2).
  2. image1 shows a person A taking a pose X;
    image2 shows a person B taking a pose Y.
  3. I want the output to be person A taking pose Y.

But I read the paper, and it is quite different from what I thought: in the paper, the two input images must show the same person, just with different poses.

Do you think I can get the result I want?

Sorry if this question is too specific to my own use case.

version of TensorFlow and Keras

Hi,
Thanks for making your code publicly available.
Could you please tell me the versions of TensorFlow and Keras you use?
Thanks.

Question: How to serve the trained models?

Question

Is it possible to use the trained model (i.e. the checkpoints) for serving (e.g. with tensorflow/serving) in a production environment?
If yes, what procedure should I follow to do that?

Example scenario

I want to input a single image of a person (not from the images used in the training datasets) into the system and synthesize that person's image in a new pose.
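One possible direction, offered as a hedged sketch rather than a supported path: export the loaded generator to a TensorFlow SavedModel, which tensorflow/serving can load (assuming TF 1.x; export_for_serving and the export directory are illustrative names):

import tensorflow as tf
from keras import backend as K

def export_for_serving(generator, export_dir='export/1'):
    # `generator` stands for the Keras model built with the same flags as
    # in test.py, with load_weights(...) already applied (omitted here).
    sess = K.get_session()
    tf.saved_model.simple_save(
        sess, export_dir,
        inputs={t.name.split(':')[0]: t for t in generator.inputs},
        outputs={t.name.split(':')[0]: t for t in generator.outputs})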

IOError: File data/market-pairs-train.csv does not exist

Hi,
I'm running train.py on Ubuntu 16.04; below is the issue I am getting.

Traceback (most recent call last):
  File "train.py", line 32, in <module>
    main()
  File "train.py", line 24, in main
    dataset = PoseHMDataset(test_phase=False, **vars(args))
  File "/home/aibc/Documents/xiangmu/pose-gan/pose_dataset.py", line 27, in __init__
    self._pairs_file_train = pd.read_csv(kwargs['pairs_file_train'])
  File "/home/aibc/.local/lib/python2.7/site-packages/pandas/io/parsers.py", line 678, in parser_f
    return _read(filepath_or_buffer, kwds)
  File "/home/aibc/.local/lib/python2.7/site-packages/pandas/io/parsers.py", line 440, in _read
    parser = TextFileReader(filepath_or_buffer, **kwds)
  File "/home/aibc/.local/lib/python2.7/site-packages/pandas/io/parsers.py", line 787, in __init__
    self._make_engine(self.engine)
  File "/home/aibc/.local/lib/python2.7/site-packages/pandas/io/parsers.py", line 1014, in _make_engine
    self._engine = CParserWrapper(self.f, **self.options)
  File "/home/aibc/.local/lib/python2.7/site-packages/pandas/io/parsers.py", line 1708, in __init__
    self._reader = parsers.TextReader(src, **kwds)
  File "pandas/_libs/parsers.pyx", line 384, in pandas._libs.parsers.TextReader.__cinit__
  File "pandas/_libs/parsers.pyx", line 695, in pandas._libs.parsers.TextReader._setup_parser_source
IOError: File data/market-pairs-train.csv does not exist

I also cannot find the file market-pairs-train.csv anywhere.
If you have time, could you provide a download link?
Thank you.

there is a bug in inception_score.py

I ran the code following your instructions, but I hit this bug:
File "/home/wuxuehui/research/clothes/pose-gan/gan/inception_score.py", line 92, in _init_inception
logits = tf.matmul(tf.squeeze(pool3), w)
File "/home/wuxuehui/py2_env/venv/local/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.py", line 2122, in matmul
a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
File "/home/wuxuehui/py2_env/venv/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 4279, in mat_mul
name=name)
File "/home/wuxuehui/py2_env/venv/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/wuxuehui/py2_env/venv/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3392, in create_op
op_def=op_def)
File "/home/wuxuehui/py2_env/venv/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1734, in init
control_input_ops)
File "/home/wuxuehui/py2_env/venv/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1570, in _create_c_op
raise ValueError(str(e))
ValueError: Shape must be rank 2 but is rank 1 for 'MatMul' (op: 'MatMul') with input shapes: [2048], [2048,1008].
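A commonly cited fix for this shape error, offered as a hedged sketch: pool3 has shape [batch, 1, 1, 2048], and tf.squeeze(pool3) with a batch of one collapses it to rank 1, which MatMul rejects. Squeezing only the spatial axes keeps the batch dimension:

# in gan/inception_score.py, line 92 (illustrative patch, not the
# author's confirmed fix): squeeze only the two spatial axes
logits = tf.matmul(tf.squeeze(pool3, [1, 2]), w)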

test model error

when I "Run python test.py --generator_checkpoint path/to/generator/checkpoint"

it errors:
ValueError: You are trying to load a weight file containing 31 layers into a model with 35 layers.

I had tried all the models, but no one can work

Training has two problems on the two datasets

On Market, problem one:
I cannot find the definition of the predict_on_batch() function in gan/train.py (or in the other modules in the gan folder), and training fails with this error:
Traceback (most recent call last):
  File "train.py", line 29, in <module>
    main()
  File "train.py", line 26, in main
    trainer.train()
  File "/home/hyw/zp/keras/pose-gan-master/gan/train.py", line 104, in train
    self.save_generated_images()
  File "/home/hyw/zp/keras/pose-gan-master/gan/train.py", line 45, in save_generated_images
    image = self.dataset.display(self.generator.predict_on_batch(batch), batch)
  File "/home/hyw/zp/Anaconda2_pytorch_keras/lib/python2.7/site-packages/keras/engine/training.py", line 1274, in predict_on_batch
    outputs = self.predict_function(ins)
  File "/home/hyw/zp/Anaconda2_pytorch_keras/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 2715, in __call__
    return self._call(inputs)
  File "/home/hyw/zp/Anaconda2_pytorch_keras/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 2671, in _call
    session)
  File "/home/hyw/zp/Anaconda2_pytorch_keras/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 2623, in _make_callable
    callable_fn = session._make_callable_from_options(callable_opts)
  File "/home/hyw/zp/Anaconda2_pytorch_keras/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1471, in _make_callable_from_options
    return BaseSession._Callable(self, callable_options)
  File "/home/hyw/zp/Anaconda2_pytorch_keras/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1425, in __init__
    session._session, options_ptr, status)
  File "/home/hyw/zp/Anaconda2_pytorch_keras/lib/python2.7/site-packages/tensorflow/python/framework/errors_impl.py", line 528, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: input_1:0 is both fed and fetched.
Exception tensorflow.python.framework.errors_impl.InvalidArgumentError: InvalidArgumentError() in <bound method _Callable.__del__ of <tensorflow.python.client.session._Callable object at 0x7f7149a0aed0>> ignored
Can you explain how I should deal with this problem, please?

Problem two, on DeepFashion, the error is as follows:

Traceback (most recent call last):
  File "train.py", line 29, in <module>
    main()
  File "train.py", line 26, in main
    trainer.train()
  File "/home/hyw/zp/keras/pose-gan-master/gan/train.py", line 104, in train
    self.save_generated_images()
  File "/home/hyw/zp/keras/pose-gan-master/gan/train.py", line 42, in save_generated_images
    batch = self.dataset.next_generator_sample_test()
  File "/home/hyw/zp/keras/pose-gan-master/pose_dataset.py", line 174, in next_generator_sample_test
    batch = self.load_batch(index, False, True)
  File "/home/hyw/zp/keras/pose-gan-master/pose_dataset.py", line 153, in load_batch
    result = [self.load_image_batch(pair_df, 'from')]
  File "/home/hyw/zp/keras/pose-gan-master/pose_dataset.py", line 131, in load_image_batch
    batch[i] = imread(os.path.join(self._images_dir_test, p[direction]))
ValueError: could not broadcast input array from shape (1101,750,3) into shape (256,256,3)
How should I deal with this problem?

thanks a lot !!!

Docker Image for demo

Hi,
I'm new to deep learning and facing some trouble running the application. If you could create a Docker image for setup and prediction, that would be great.

Real-Data in your paper

Hello, Professor!
Could you explain the Real-Data entry mentioned in the paper? You did not give a description of Real-Data in the paper.
Email: [email protected]
Thank you.

training with my data

Hello, I trained with my own data. During testing, I found that if the poses in the source image and the target image differ too much, the generated person's arms (between shoulder and elbow) are always broken. Have you encountered this kind of problem?

Annotation result on my own dataset

Imagine you want to create dataset my:

  1. Create the folder data/my-dataset, with two sub-folders: data/my-dataset/test
    and data/my-dataset/train.
  2. Add your dataset in cmd.py, line 31 and line 84.
  3. Put your data into data/my-dataset/test and data/my-dataset/train. The file names should have the structure (id_person)_(id_image).jpg. Images with the same (id_person) are grouped together, and every two images with the same (id_person) make a pair (see the sketch after this list).
  4. Run python compute_coordinates.py --dataset my and python create_pairs_dataset.py --dataset my.
  5. Run train.py and test.py.
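For illustration, a minimal sketch of the pairing rule from step 3 (list_pairs and the ordered-pair choice are assumptions, not necessarily the repository's exact logic):

import itertools
import os

def list_pairs(images_dir):
    # Group (id_person)_(id_image).jpg files by person id, then emit
    # every ordered pair of images that share the same person id.
    by_person = {}
    for name in sorted(os.listdir(images_dir)):
        by_person.setdefault(name.split('_')[0], []).append(name)
    return [(a, b)
            for names in by_person.values()
            for a, b in itertools.permutations(names, 2)]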

Hi,

I just ran compute_coordinates.py following these steps and got the annotation files, but the values in my-annotation-test.csv are all -1. What's the matter?

name:keypoints_y:keypoints_x
0188_c2s2_149727_03.jpg: [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]: [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]
0467_c1s2_049246_04.jpg: [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]: [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]
0501_c3s1_136908_05.jpg: [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]: [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]

Unexpected cuda error

I run into this error message when trying to train a new model on the Market dataset, following the exact steps on your GitHub page:

2018-07-18 00:25:07.908521: E C:\Users\User\Source\Repos\tensorflow\tensorflow\stream_executor\cuda\cuda_driver.cc:1080] failed to synchronize the stop event: CUDA_ERROR_ILLEGAL_INSTRUCTION
2018-07-18 00:25:07.908824: E C:\Users\User\Source\Repos\tensorflow\tensorflow\stream_executor\cuda\cuda_timer.cc:54] Internal: error destroying CUDA event in context 0000026F6FC0BCB0: CUDA_ERROR_ILLEGAL_INSTRUCTION
2018-07-18 00:25:07.909146: E C:\Users\User\Source\Repos\tensorflow\tensorflow\stream_executor\cuda\cuda_timer.cc:59] Internal: error destroying CUDA event in context 0000026F6FC0BCB0: CUDA_ERROR_ILLEGAL_INSTRUCTION
2018-07-18 00:25:07.909478: F C:\Users\User\Source\Repos\tensorflow\tensorflow\stream_executor\cuda\cuda_dnn.cc:2328] failed to set stream for cudnn handle: CUDNN_STATUS_MAPPING_ERROR

I managed to pinpoint where this error occurs. It seems to be this line in gan/train.py:
loss = self.discriminator_model.train_on_batch( discrimiantor_batch + generator_batch, np.zeros([self.batch_size]))

My environment is as follows:
System: Windows 10
cuda: v9.1
cudnn: v7.1
tensorflow-gpu: 1.7.0
keras: 2.2.0

I'm not sure how to fix this problem; have you ever run into it? Also, can you tell me your configuration? Thanks!

How should I arrange my own dataset to use your scripts for annotation and pairing?

I've finally managed to reproduce your test results, and they are really amazing! I would also like to try a different dataset and use your scripts to process it, but I'm not quite sure how to arrange the data. Could you give me a hint?

Specifically, I noticed that in your annotation files all images appear in random order, yet you can still create paired images for training/testing, and I don't quite understand how you associate one image with another. Should I explicitly annotate matching image pairs (either by putting each pair in a separate folder, as in the DeepFashion dataset, or by giving them matching names like img_01_train.png and img_01_gt.png)?

Many thanks!

Error while executing "create_pairs_dataset.py" file

roshan@roshan-Satellite-L855:~/Desktop/pose-gan$ python2 create_pairs_dataset.py
Traceback (most recent call last):
  File "create_pairs_dataset.py", line 1, in <module>
    import pandas as pd
  File "/home/roshan/.local/lib/python2.7/site-packages/pandas/__init__.py", line 19, in <module>
    "Missing required dependencies {0}".format(missing_dependencies))
ImportError: Missing required dependencies ['numpy']

Test.py ValueError: You are trying to load a weight file containing 37 layers into a model with 35 layers.

▶ python3 test.py --generator_checkpoint ./checkpoints/fasion/generator-warp-mask.h5
Using TensorFlow backend.
2018-10-03 14:40:22.584639: W tensorflow/core/framework/op_def_util.cc:346] Op BatchNormWithGlobalNormalization is deprecated. It will cease to work in GraphDef version 9. Use tf.nn.batch_normalization().
2018-10-03 14:40:23.051506: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Generate images...
Number of images: 32668
Number of pairs train: 263632
Number of pairs test: 12000
Traceback (most recent call last):
  File "test.py", line 152, in <module>
    test()
  File "test.py", line 118, in test
    generator.load_weights(args.generator_checkpoint)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/engine/network.py", line 1166, in load_weights
    f, self.layers, reshape=reshape)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/keras/engine/saving.py", line 1030, in load_weights_from_hdf5_group
    str(len(filtered_layers)) + ' layers.')
ValueError: You are trying to load a weight file containing 37 layers into a model with 35 layers.

target images folder in demo.py

What images should I put in the target images folder for demo.py? And what do you mean by "For target images use kp from previous frame if not detected"?
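On the second point, a hedged reading of "use kp from previous frame if not detected", as a minimal sketch (the function name and the (-1, -1) sentinel convention follow the annotation format shown elsewhere in these issues; this is not the author's confirmed implementation):

import numpy as np

def carry_forward_keypoints(frames_kp):
    # frames_kp: list of (18, 2) arrays; (-1, -1) marks an undetected joint.
    prev, out = None, []
    for kp in frames_kp:
        kp = kp.copy()
        if prev is not None:
            missing = (kp == -1).all(axis=1)
            kp[missing] = prev[missing]  # reuse the previous frame's joint
        out.append(kp)
        prev = kp
    return out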

Training Time ?

Hi,
Thanks for uploading your code. Can you tell me how long it takes to train the model for 90 epochs? (I don't have much disk space, so I'm not storing intermediate pose maps either.) It's taking 0.5 hours per epoch on my machine (NVIDIA 1080 Ti). Do you get similar times?

Problem about skimage transform

Hi, a warning is raised during the first training epoch:

/home/tj/.conda/envs/dsc/lib/python3.6/site-packages/skimage/transform/_geometric.py:683: RuntimeWarning: divide by zero encountered in true_divide
H.flat[list(self._coeffs) + [8]] = - V[-1, :-1] / V[-1, -1]

It seems that when the transformation matrix is computed, some samples produce a division by zero. It may not affect the training process, but does anyone know why, or for which kind of skeleton pair, this warning occurs?
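Presumably the warning comes from ProjectiveTransform.estimate when a keypoint correspondence is degenerate, e.g. missing joints collapse several points onto one location or a line, so the SVD solution's normalizer V[-1, -1] is zero. A hedged sketch of a guard one could run before estimating a transform (the helper and the tolerance are assumptions, not repository code):

import numpy as np

def is_degenerate(points, tol=1e-6):
    # Points marked (-1, -1) are missing; after dropping them, fewer than
    # three non-collinear points cannot determine an affine transform.
    pts = points[(points != -1).all(axis=1)].astype(float)
    if len(pts) < 3:
        return True
    centered = pts - pts.mean(axis=0)
    # Rank < 2 of the centered cloud means all points are collinear.
    return np.linalg.matrix_rank(centered, tol=tol) < 2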

Attribute Error

Hello

When I try running compute_coordinates.py, I get the following AttributeError:

  File "compute_coordinates.py", line 1, in <module>
    import pose_utils
  File "/mocapdata/Students/sheela/pose-gan/pose_utils.py", line 3, in <module>
    from skimage.draw import circle, line_aa, polygon
  File "/mocapdata/Students/sheela/env/local/lib/python2.7/site-packages/skimage/__init__.py", line 167, in <module>
    from .util.dtype import (img_as_float32,
  File "/mocapdata/Students/sheela/env/local/lib/python2.7/site-packages/skimage/util/__init__.py", line 6, in <module>
    from .apply_parallel import apply_parallel
  File "/mocapdata/Students/sheela/env/local/lib/python2.7/site-packages/skimage/util/apply_parallel.py", line 8, in <module>
    import dask.array as da
  File "/mocapdata/Students/sheela/env/local/lib/python2.7/site-packages/dask/array/__init__.py", line 9, in <module>
    from .routines import (take, choose, argwhere, where, coarsen, insert,
  File "/mocapdata/Students/sheela/env/local/lib/python2.7/site-packages/dask/array/routines.py", line 256, in <module>
    @wraps(np.matmul)
  File "/usr/lib/python2.7/functools.py", line 33, in update_wrapper
    setattr(wrapper, attr, getattr(wrapped, attr))
AttributeError: 'numpy.ufunc' object has no attribute '__module__'

Could you please let me know what is the problem? How can it be solved?

Thanks

[Bug Found]: compute_st_distance will crash for incomplete input skeletons

@AliaksandrSiarohin I'm running a test with a pre-trained model, but encountered the following error:

Traceback (most recent call last):
  File "test_video.py", line 121, in <module>
    test(1)
  File "test_video.py", line 99, in test
    out = generate_video(video, pose, generator)
  File "test_video.py", line 75, in generate_video
    batch = prepare_batch(src_frame, frame, src_skeleton, skeleton)
  File "test_video.py", line 57, in prepare_batch
    result += [pose_transform.affine_transforms(pose_from, pose_to)]
  File "/backup/lingboyang/pose-gan/pose_transform.py", line 199, in affine_transforms
    st2 = compute_st_distance(kp2)
  File "/backup/lingboyang/pose-gan/pose_transform.py", line 103, in compute_st_distance
    st_distance2 = np.sum((kp['Lhip'] - kp['Lsho']) ** 2)
KeyError: 'Lsho'

And here's where the error takes place, in pose_transform.py around line 100:

def compute_st_distance(kp):
    st_distance1 = np.sum((kp['Rhip'] - kp['Rsho']) ** 2)
    st_distance2 = np.sum((kp['Lhip'] - kp['Lsho']) ** 2)
    return np.sqrt((st_distance1 + st_distance2)/2.0)

By checking for missing keys in kp, I found the problematic frame; its pose array is:

array([[ 25, 113],
       [ 53, 122],
       [ 53, 107],
       [ 85,  88],
       [110,  59],
       [ -1,  -1],
       [ -1,  -1],
       [ -1,  -1],
       [129, 110],
       [177, 108],
       [233, 108],
       [128, 138],
       [176, 151],
       [223, 168],
       [ 19, 108],
       [ 19, 119],
       [ 24, 105],
       [ 25, 130]], dtype=int32)

Do you think it's okay to use the distance between the neck and both hips instead? The neck point seems more stable than the shoulder points.
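For discussion, a hedged sketch of such a fallback (the key names follow the repository's convention; the fallback strategy and function name are my assumptions, not the author's):

import numpy as np

def compute_st_distance_safe(kp):
    # Prefer shoulder-to-hip distances; fall back to neck-to-hip when a
    # shoulder keypoint is missing from the detected skeleton.
    pairs = [(a, b) for a, b in [('Rsho', 'Rhip'), ('Lsho', 'Lhip')]
             if a in kp and b in kp]
    if not pairs:
        pairs = [(a, b) for a, b in [('neck', 'Rhip'), ('neck', 'Lhip')]
                 if a in kp and b in kp]
    # Assumes at least one usable pair exists in the skeleton.
    dists = [np.sum((kp[a] - kp[b]) ** 2) for a, b in pairs]
    return np.sqrt(np.mean(dists))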

error while training

Traceback (most recent call last):
  File "train.py", line 29, in <module>
    main()
  File "train.py", line 26, in main
    trainer.train()
  File "/content/new/gan/train.py", line 103, in train
    self.save_generated_images()
  File "/content/new/gan/train.py", line 44, in save_generated_images
    image = self.dataset.display(self.generator.predict_on_batch(batch), batch)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1274, in predict_on_batch
    outputs = self.predict_function(ins)
  File "/usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py", line 2715, in __call__
    return self._call(inputs)
  File "/usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py", line 2671, in _call
    session)
  File "/usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py", line 2623, in _make_callable
    callable_fn = session._make_callable_from_options(callable_opts)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/client/session.py", line 1505, in _make_callable_from_options
    return BaseSession._Callable(self, callable_options)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow_core/python/client/session.py", line 1460, in __init__
    session._session, options_ptr)
tensorflow.python.framework.errors_impl.InvalidArgumentError: input_1:0 is both fed and fetched.

Results not as good as in the current version of your paper

I selected the person_ids listed at the end of this issue as test data (they were also used in "pose_guided" and "Disentangled Person Image Generation"), and then ran train.py and test.py:

CUDA_VISIBLE_DEVICES=0 python train.py --output_dir output/full --checkpoints_dir output/full --warp_skip mask --dataset fasion --l1_penalty_weight 0.01 --nn_loss_area_size 5 --batch_size 2 --content_loss_layer block1_conv2 --number_of_epochs 90

CUDA_VISIBLE_DEVICES=3 python test.py --generator_checkpoint output/full/epoch_089_generator.h5 --warp_skip mask --dataset fasion --l1_penalty_weight 0.01 --nn_loss_area_size 5 --batch_size 2 --content_loss_layer block1_conv2

However, the results are not as good as in the current version of your paper (with the same parameters as in train.py):

SSIM score = 0.7542003027263708, masked = 0.9499144956871529
Inception score = (3.2932858, 0.039500084), masked = (3.5532665, 0.09156021)
L1 score = 0.12546359778842633


Result images:
http://47.100.21.47/image/fasionWOMENTeesTanksid0000350408_3back.jpg_fasionWOMENTeesTanksid0000350408_1front.jpg.png
http://47.100.21.47/image/fasionWOMENBlousesShirtsid0000210701_2side.jpg_fasionWOMENBlousesShirtsid0000210701_3back.jpg.png
http://47.100.21.47/image/fasionWOMENCardigansid0000369601_2side.jpg_fasionWOMENCardigansid0000369601_1front.jpg.png
http://47.100.21.47/image/fasionWOMENCardigansid0000369601_4full.jpg_fasionWOMENCardigansid0000369601_2side.jpg.png
http://47.100.21.47/image/fasionWOMENTeesTanksid0000121234_3back.jpg_fasionWOMENTeesTanksid0000121234_1front.jpg.png
http://47.100.21.47/image/fasionWOMENTeesTanksid0000121234_3back.jpg_fasionWOMENTeesTanksid0000121234_2side.jpg.png
http://47.100.21.47/image/fasionWOMENTeesTanksid0000131001_2side.jpg_fasionWOMENTeesTanksid0000131001_1front.jpg.png
http://47.100.21.47/image/fasionWOMENTeesTanksid0000131001_7additional.jpg_fasionWOMENTeesTanksid0000131001_3back.jpg.png

The selected test person_ids:

id_00005363
id_00006364
id_00004600
id_00006814
id_00002068
id_00005364
id_00007540
id_00007542
id_00005361
id_00005152
id_00007466
id_00006569
id_00003907
id_00002591
id_00005486
id_00004703
id_00004469
id_00004701
id_00005482
id_00006817
id_00005762
id_00005761
id_00005316
id_00005317
id_00005310
id_00006965
id_00001184
id_00002112
id_00002114
id_00005318
id_00001140
id_00004991
id_00005466
id_00007383
id_00007382
id_00006311
id_00006317
id_00006315
id_00007384
id_00006318
id_00004840
id_00000814
id_00007261
id_00004963
id_00006498
id_00006499
id_00005162
id_00006782
id_00005168
id_00000099
id_00004260
id_00004566
id_00001249
id_00007017
id_00000666
id_00004560
id_00002300
id_00006084
id_00000782
id_00001368
id_00005189
id_00006147
id_00005180
id_00002597
id_00003285
id_00006223
id_00006222
id_00003168
id_00001393
id_00007314
id_00000164
id_00000862
id_00001783
id_00001629
id_00001784
id_00005263
id_00007721
id_00007723
id_00000561
id_00000160
id_00007729
id_00001566
id_00003576
id_00000928
id_00005986
id_00004103
id_00000271
id_00001764
id_00004294
id_00006139
id_00004290
id_00000969
id_00006132
id_00004299
id_00006135
id_00002365
id_00003767
id_00003602
id_00007926
id_00000140
id_00007436
id_00007920
id_00000438
id_00003418
id_00002415
id_00007426
id_00006526
id_00007425
id_00006520
id_00002960
id_00002962
id_00003788
id_00000048
id_00007032
id_00004030
id_00002969
id_00004033
id_00004035
id_00005918
id_00005910
id_00005911
id_00004583
id_00003466
id_00002566
id_00002560
id_00002561
id_00003460
id_00005421
id_00006357
id_00001069
id_00002732
id_00006982
id_00003534
id_00002446
id_00007452
id_00007451
id_00007450
id_00007454
id_00004633
id_00004630
id_00005619
id_00004921
id_00004736
id_00005617
id_00005737
id_00005327
id_00006855
id_00005325
id_00006851
id_00006188
id_00005253
id_00001615
id_00001203
id_00006180
id_00005557
id_00007468
id_00006677
id_00001511
id_00006678
id_00001145
id_00001973
id_00003006
id_00001925
id_00003002
id_00007694
id_00004952
id_00001844
id_00005137
id_00000135
id_00001339
id_00007972
id_00007602
id_00000132
id_00003185
id_00007703
id_00001334
id_00007609
id_00001724
id_00004256
id_00006077
id_00001728
id_00006177
id_00000623
id_00003523
id_00001923
id_00003649
id_00003456
id_00002929
id_00007076
id_00000553
id_00004783
id_00003824
id_00007771
id_00003822
id_00007777
id_00003458
id_00007670
id_00001430
id_00004173
id_00003311
id_00002191
id_00005955
id_00006009
id_00007730
id_00005815
id_00005814
id_00002353
id_00002605
id_00002356
id_00000950
id_00003732
id_00000488
id_00000521
id_00007808
id_00007982
id_00002628
id_00007552
id_00007807
id_00007417
id_00001890
id_00000319
id_00006550
id_00003914
id_00004677
id_00004675
id_00002995
id_00003856
id_00004478
id_00007786
id_00007782
id_00001779
id_00005775
id_00004472
id_00004778
id_00007895
id_00002107
id_00007899
id_00007508
id_00006739
id_00006803
id_00001155
id_00001093
id_00001094
id_00001097
id_00007258
id_00007398
id_00000131
id_00004855
id_00004853
id_00005587
id_00001807
id_00007159
id_00001801
id_00005177
id_00001135
id_00001028
id_00002771
id_00001027
id_00002774
id_00005071
id_00001371
id_00000775
id_00007286
id_00005204
id_00003660
id_00007130
id_00006607
id_00006606
id_00006602
id_00005693
id_00003212
id_00007322
id_00000706
id_00000871
id_00005291
id_00000702
id_00003073
id_00001791
id_00001652
id_00002384
id_00003315
id_00002381
id_00001578
id_00002383
id_00003254
id_00003257
id_00007634
id_00003318
id_00001570
id_00001571
id_00001477
id_00005858
id_00001474
id_00004509
id_00001774
id_00000150
id_00002256
id_00002257
id_00000239
id_00006043
id_00003389
id_00002315
id_00006125
id_00002258
id_00003776
id_00000678
id_00000172
id_00003676
id_00001687
id_00004180
id_00007931
id_00004183
id_00007844
id_00002131
id_00007022
id_00003957
id_00007026
id_00001508
id_00002951
id_00003790
id_00001501
id_00000196
id_00000195
id_00004002
id_00007013
id_00004430
id_00003672
id_00003493
id_00002552
id_00002226
id_00000459
id_00002558
id_00001485
id_00002655
id_00001055
id_00005021
id_00005799
id_00007118
id_00006585
id_00006581
id_00004641
id_00004640
id_00003644
id_00004649
id_00007109
id_00005704
id_00006863
id_00007599
id_00005702
id_00005335
id_00006900
id_00002035
id_00005709
id_00007595
id_00007678
id_00007043
id_00001495
id_00006642
id_00006644
id_00003913
id_00004328
id_00006769
id_00007365
id_00005093
id_00001289
id_00001282
id_00004860
id_00003132
id_00003814
id_00004946
id_00001114
id_00000603
id_00001116
id_00005412
id_00005417
id_00005416
id_00003620
id_00003623
id_00001446
id_00003581
id_00001396
id_00003585
id_00004543
id_00000193
id_00004267
id_00005896
id_00004261
id_00005890
id_00004389
id_00004386
id_00004385
id_00004269
id_00001739
id_00004380
id_00006425
id_00006163
id_00006247
id_00006242
id_00001943
id_00001230
id_00002662
id_00003446
id_00002912
id_00000390
id_00007708
id_00000017
id_00003361
id_00003449
id_00007701
id_00007661
id_00007662
id_00005963
id_00003696
id_00005801
id_00000006
id_00005804
id_00002263
id_00001627
id_00003219
id_00002617
id_00000512
id_00000122
id_00000120
id_00000121
id_00000124
id_00003216
id_00006421
id_00007839
id_00007838
id_00006426
id_00006542
id_00002162
id_00002163
id_00000560
id_00007404
id_00004682
id_00003964
id_00001882
id_00003962
id_00007015
id_00000065
id_00002223
id_00007797
id_00006728
id_00004804
id_00002460
id_00006942
id_00007498
id_00007888
id_00002503
id_00004816
id_00003996
id_00007178
id_00006687
id_00005115
id_00006683
id_00006682
id_00002713
id_00006727
id_00006722
id_00002715
id_00002714
id_00006334
id_00006336
id_00007246
id_00007531
id_00007089
id_00007125
id_00003498
id_00007121
id_00004611
id_00007082
id_00004907
id_00004615
id_00001011
id_00002228
id_00002198
id_00005066
id_00002763
id_00005062
id_00004348
id_00004349
id_00004889
id_00004885
id_00001268
id_00004341
id_00005231
id_00003235
id_00003099
id_00006616
id_00004485
id_00003140
id_00007331
id_00007334
id_00003060
id_00007235
id_00001868
id_00001903
id_00001902
id_00001863
id_00001866
id_00000112
id_00000506
id_00005382
id_00007629
id_00004083
id_00003719
id_00000119
id_00002606
id_00005866
id_00004120
id_00004533
id_00000982
id_00003398
id_00002309
id_00002304
id_00003396
id_00000593
id_00003196
id_00000532
id_00007901
id_00007903
id_00007904
id_00007907
id_00000161
id_00001345
id_00000168
id_00006506
id_00007065
id_00002822
id_00002121
id_00002941
id_00003431
id_00005888
id_00000341
id_00004012
id_00004159
id_00007753
id_00003801
id_00003802
id_00004400
id_00001512
id_00007695
id_00004156
id_00004155
id_00000187
id_00005624
id_00005831
id_00007195
id_00007589
id_00002541
id_00000935
id_00002239
id_00000936
id_00003325
id_00000932
id_00005015
id_00006802
id_00006800
id_00006375
id_00002050
id_00007804
id_00005395
id_00005019
id_00006477
id_00006577
id_00000235
id_00007477
id_00003933
id_00002334
id_00005496
id_00005713
id_00004716
id_00004711
id_00006837
id_00006926
id_00005657
id_00001895
id_00002431
id_00007875
id_00003917
id_00001634
id_00005479
id_00004305
id_00004303
id_00006658
id_00006659
id_00006657
id_00005476
id_00004309
id_00006898
id_00006899
id_00005085
id_00007371
id_00007277
id_00006486
id_00006483
id_00000804
id_00007271
id_00004873
id_00004975
id_00001825
id_00004976
id_00004971
id_00001127
id_00001044
id_00001709
id_00001310
id_00000288
id_00003610
id_00000659
id_00001705
id_00001254
id_00006091
id_00006257
id_00003304
id_00003298
id_00007301
id_00003116
id_00003114
id_00001384
id_00007036
id_00003504
id_00007302
id_00000727
id_00006391
id_00002673
id_00003376
id_00002675
id_00002678
id_00003378
id_00007717
id_00002522
id_00007654
id_00001553
id_00001415
id_00001559
id_00004118
id_00003353
id_00000978
id_00000840
id_00003052
id_00000971
id_00000973
id_00004289
id_00007958
id_00000159
id_00007950
id_00003753
id_00006920
id_00007820
id_00002010
id_00006539
id_00002154
id_00002157
id_00007432
id_00006532
id_00002153
id_00006530
id_00003974
id_00002972
id_00004029
id_00002931
id_00004021
id_00003344
id_00003870
id_00000550
id_00005757
id_00002089
id_00004593
id_00004459
id_00003479
id_00003470
id_00000380
id_00007487
id_00005124
id_00000396
id_00005515
id_00006322
id_00002703
id_00007752
id_00005478
id_00003278
id_00003064
id_00006994
id_00006991
id_00006990
id_00005106
id_00004931
id_00004935
id_00004626
id_00005723
id_00005720
id_00006849
id_00005357
id_00001008
id_00001649
id_00004236
id_00004234
id_00005548
id_00001212
id_00003082
id_00005305
id_00004330
id_00006196
id_00006737
id_00000342
id_00006199
id_00007344
id_00003152
id_00007391
id_00007221
id_00000850
id_00007223
id_00001671
id_00001915
id_00004803
id_00001678
id_00001856
id_00005532
id_00001919
id_00001590
id_00001592
id_00003233
id_00007966
id_00004489
id_00001455
id_00001326
id_00004243
id_00005976
id_00003241
id_00001755
id_00006060
id_00006062
id_00001696
id_00001585
id_00006264
id_00003123
id_00007914
id_00003127
id_00007910
id_00003658
id_00000079
id_00003551
id_00003557
id_00007918
id_00002836
id_00003423
id_00002834
id_00002523
id_00003424
id_00005876
id_00003343
id_00003340
id_00003345
id_00007621
id_00004067
id_00003628
id_00004419
id_00007684
id_00000038
id_00004798
id_00007157
id_00006450
id_00005945
id_00003887
id_00001163
id_00000917
id_00002201
id_00002203
id_00003336
id_00004090
id_00001113
id_00003883

cmd.py will prevent pandas from loading properly

I've found that when running test.py, the pandas package does not load properly, because the repository's configuration file cmd.py shadows the same-named standard library module while pandas (pdb, more precisely) is being imported.

Here's the detailed error output:

Using TensorFlow backend.
/home/vcl/anaconda3/lib/python3.6/importlib/_bootstrap.py:205: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
  return f(*args, **kwds)
Traceback (most recent call last):
  File "test.py", line 3, in <module>
    from conditional_gan import make_generator
  File "/backup/lingboyang/pose-gan/conditional_gan.py", line 1, in <module>
    from keras.models import Model, Input, Sequential
  File "/home/vcl/anaconda3/lib/python3.6/site-packages/keras/__init__.py", line 3, in <module>
    from . import utils
  File "/home/vcl/anaconda3/lib/python3.6/site-packages/keras/utils/__init__.py", line 6, in <module>
    from . import conv_utils
  File "/home/vcl/anaconda3/lib/python3.6/site-packages/keras/utils/conv_utils.py", line 9, in <module>
    from .. import backend as K
  File "/home/vcl/anaconda3/lib/python3.6/site-packages/keras/backend/__init__.py", line 89, in <module>
    from .tensorflow_backend import *
  File "/home/vcl/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 5, in <module>
    import tensorflow as tf
  File "/home/vcl/anaconda3/lib/python3.6/site-packages/tensorflow/__init__.py", line 24, in <module>
    from tensorflow.python import *
  File "/home/vcl/anaconda3/lib/python3.6/site-packages/tensorflow/python/__init__.py", line 83, in <module>
    from tensorflow.python.estimator import estimator_lib as estimator
  File "/home/vcl/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/estimator_lib.py", line 35, in <module>
    from tensorflow.python.estimator.inputs import inputs
  File "/home/vcl/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/inputs/inputs.py", line 22, in <module>
    from tensorflow.python.estimator.inputs.numpy_io import numpy_input_fn
  File "/home/vcl/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/inputs/numpy_io.py", line 22, in <module>
    from tensorflow.python.estimator.inputs.queues import feeding_functions
  File "/home/vcl/anaconda3/lib/python3.6/site-packages/tensorflow/python/estimator/inputs/queues/feeding_functions.py", line 40, in <module>
    import pandas as pd
  File "/home/vcl/anaconda3/lib/python3.6/site-packages/pandas/__init__.py", line 59, in <module>
    from pandas.util._tester import test
  File "/home/vcl/anaconda3/lib/python3.6/site-packages/pandas/util/_tester.py", line 11, in <module>
    import pytest
  File "/home/vcl/anaconda3/lib/python3.6/site-packages/pytest.py", line 14, in <module>
    from _pytest.debugging import pytestPDB as __pytestPDB
  File "/home/vcl/anaconda3/lib/python3.6/site-packages/_pytest/debugging.py", line 3, in <module>
    import pdb
  File "/home/vcl/anaconda3/lib/python3.6/pdb.py", line 136, in <module>
    class Pdb(bdb.Bdb, cmd.Cmd):
AttributeError: module 'cmd' has no attribute 'Cmd'

See this link for reference.
I'm going to rename every reference to cmd in your project and see if that fixes this issue. I'll let you know of any progress.
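Meanwhile, a small illustration of the clash (run as a script from the pose-gan checkout; Python puts the script's directory at the front of sys.path):

import sys
print(sys.path[0])   # the pose-gan checkout, searched first...
import cmd
print(cmd.__file__)  # ...so this resolves to pose-gan/cmd.py, not the stdlib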

File "create_pairs_dataset.py" missing

I've noticed that your instruction step 3 says "Create pairs dataset with python create_pairs_dataset.py. It define pairs for training." However, I cannot find any file named create_pairs_dataset.py in your repository.

Could it be that you've renamed the file to create_dataset_for_reid.py or create_pairs_for_reid.py? And which one should I use to generate the training dataset?

Thanks!

ValueError: Dimension 0 in both shapes must be equal, but are 3 and 64. Shapes are [3,3,18,64] and [64,21,3,3]. for 'Assign_2' (op: 'Assign') with input shapes: [3,3,18,64], [64,21,3,3].

Hi,
when I use your checkpoint file "generator-warp-mask-nn3-cl12.h5" for the Market test, I get this error:
Using TensorFlow backend.
2019-10-20 16:21:09.954087: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
Generate images...
Number of images: 32668
Number of pairs train: 263632
Number of pairs test: 12000
WARNING:tensorflow:From /data00/home/menyifang/anaconda3/envs/tfold/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py:1290: calling reduce_mean (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
Traceback (most recent call last):
  File "/data00/home/menyifang/code/comparison/pose-gan-fix/pose-gan/test.py", line 152, in <module>
    test()
  File "/data00/home/menyifang/code/comparison/pose-gan-fix/pose-gan/test.py", line 118, in test
    generator.load_weights(args.generator_checkpoint)
  File "/data00/home/menyifang/anaconda3/envs/tfold/lib/python2.7/site-packages/keras/engine/topology.py", line 2619, in load_weights
    load_weights_from_hdf5_group(f, self.layers)
  File "/data00/home/menyifang/anaconda3/envs/tfold/lib/python2.7/site-packages/keras/engine/topology.py", line 3095, in load_weights_from_hdf5_group
    K.batch_set_value(weight_value_tuples)
  File "/data00/home/menyifang/anaconda3/envs/tfold/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 2188, in batch_set_value
    assign_op = x.assign(assign_placeholder)
  File "/data00/home/menyifang/anaconda3/envs/tfold/lib/python2.7/site-packages/tensorflow/python/ops/variables.py", line 594, in assign
    return state_ops.assign(self._variable, value, use_locking=use_locking)
  File "/data00/home/menyifang/anaconda3/envs/tfold/lib/python2.7/site-packages/tensorflow/python/ops/state_ops.py", line 276, in assign
    validate_shape=validate_shape)
  File "/data00/home/menyifang/anaconda3/envs/tfold/lib/python2.7/site-packages/tensorflow/python/ops/gen_state_ops.py", line 59, in assign
    use_locking=use_locking, name=name)
  File "/data00/home/menyifang/anaconda3/envs/tfold/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/data00/home/menyifang/anaconda3/envs/tfold/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3162, in create_op
    compute_device=compute_device)
  File "/data00/home/menyifang/anaconda3/envs/tfold/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3208, in _create_op_helper
    set_shapes_for_outputs(op)
  File "/data00/home/menyifang/anaconda3/envs/tfold/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2427, in set_shapes_for_outputs
    return _set_shapes_for_outputs(op)
  File "/data00/home/menyifang/anaconda3/envs/tfold/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2400, in _set_shapes_for_outputs
    shapes = shape_func(op)
  File "/data00/home/menyifang/anaconda3/envs/tfold/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2330, in call_with_requiring
    return call_cpp_shape_fn(op, require_shape_fn=True)
  File "/data00/home/menyifang/anaconda3/envs/tfold/lib/python2.7/site-packages/tensorflow/python/framework/common_shapes.py", line 627, in call_cpp_shape_fn
    require_shape_fn)
  File "/data00/home/menyifang/anaconda3/envs/tfold/lib/python2.7/site-packages/tensorflow/python/framework/common_shapes.py", line 691, in _call_cpp_shape_fn_impl
    raise ValueError(err.message)
ValueError: Dimension 0 in both shapes must be equal, but are 3 and 64. Shapes are [3,3,18,64] and [64,21,3,3]. for 'Assign_2' (op: 'Assign') with input shapes: [3,3,18,64], [64,21,3,3].

Do you have any idea how to solve this? (The environment is the same as yours, and it works well for the fashion dataset.) Thanks!
