
Comments (57)

shreya800 commented on June 20, 2024

I'm running demo.py on Windows 10 with Python 3.6.

Below is the error I am getting:

Traceback (most recent call last):
File "demo.py", line 115, in <module>
input_images, target_images, generated_images,source_image, names = generate_images(dataset, generator, args.use_input_pose)
File "C:\Users\Documents\pose gan\pose-gan-master\test.py", line 90, in generate_images
batch, name = dataset.next_generator_sample_test(with_names=True)
File "C:\Users\Documents\pose gan\pose-gan-master\pose_dataset.py", line 174, in next_generator_sample_test
batch = self.load_batch(index, False, True)
File "C:\Users\Documents\pose gan\pose-gan-master\pose_dataset.py", line 155, in load_batch
result.append(self.compute_pose_map_batch(pair_df, 'from'))
File "C:\Users\Documents\pose gan\pose-gan-master\pose_dataset.py", line 68, in compute_pose_map_batch
row = self._annotations_file.loc[p[direction]]
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\indexing.py", line 1478, in __getitem__
return self._getitem_axis(maybe_callable, axis=axis)
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\indexing.py", line 1911, in _getitem_axis
self._validate_key(key, axis)
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\indexing.py", line 1798, in _validate_key
error()
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\indexing.py", line 1785, in error
axis=self.obj._get_axis_name(axis)))
KeyError: 'the label [source-image.jpg] is not in the [index]'

from pose-gan.

AliaksandrSiarohin commented on June 20, 2024

I can't say what the cause of this is. Can you send me the content of self._annotations_file in pose_dataset.py (e.g. line 68)?



AliaksandrSiarohin commented on June 20, 2024

OK, I still need you to print self._annotations_file
and send me the output that is there.


shreya800 commented on June 20, 2024

The content of pose_dataset.py around line 68 is given below:

def compute_pose_map_batch(self, pair_df, direction):
    assert direction in ['to', 'from']
    batch = np.empty([self._batch_size] + list(self._image_size) + [18 if self._pose_rep_type == 'hm' else 3])
    i = 0
    for _, p in pair_df.iterrows():
        row = self._annotations_file.loc[p[direction]]
        if self._cache_pose_rep:
            file_name = self._tmp_pose + p[direction] + self._pose_rep_type + '.npy'
            if os.path.exists(file_name):
                pose = np.load(file_name)
            else:
                kp_array = pose_utils.load_pose_cords_from_strings(row['keypoints_y'], row['keypoints_x'])
                if self._pose_rep_type == 'hm':
                    pose = pose_utils.cords_to_map(kp_array, self._image_size)
                else:
                    pose = pose_transform.make_stickman(kp_array, self._image_size)
                np.save(file_name, pose)
        else:
            kp_array = pose_utils.load_pose_cords_from_strings(row['keypoints_y'], row['keypoints_x'])
            if self._pose_rep_type == 'hm':
                pose = pose_utils.cords_to_map(kp_array, self._image_size)
            else:
                pose = pose_transform.make_stickman(kp_array, self._image_size)
        batch[i] = pose
        i += 1
    return batch
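For readers following along: pose_utils.cords_to_map converts keypoint coordinates into per-joint maps. The repo's exact implementation is not shown in this thread, so here is an illustrative sketch of what such a keypoint-to-heatmap conversion typically looks like; the function name, the sigma value, and the -1 missing-keypoint convention are assumptions, not the project's actual code.

```python
import numpy as np

def cords_to_map_sketch(kp_array, image_size, sigma=6):
    """Illustrative keypoint -> heatmap conversion (not the repo's exact code).

    kp_array: (K, 2) array of (y, x) coordinates; -1 marks a missing keypoint.
    Returns an (H, W, K) float map with one Gaussian bump per detected keypoint.
    """
    h, w = image_size
    result = np.zeros((h, w, kp_array.shape[0]), dtype=np.float32)
    yy, xx = np.mgrid[0:h, 0:w]
    for i, (y, x) in enumerate(kp_array):
        if y == -1 or x == -1:  # keypoint not detected: leave the channel all-zero
            continue
        result[..., i] = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma ** 2))
    return result

# One detected keypoint at (10, 20) and one missing keypoint, 128x64 image.
pose = cords_to_map_sketch(np.array([[10, 20], [-1, -1]]), (128, 64))
```

The batch tensor in compute_pose_map_batch is sized for 18 such channels when pose_rep_type is 'hm', matching the 18 COCO-style joints.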

Below is the code where self._annotations_file is defined:

class PoseHMDataset(UGANDataset):
    def __init__(self, test_phase=False, **kwargs):
        super(PoseHMDataset, self).__init__(kwargs['batch_size'], None)
        self._test_phase = test_phase

        self._batch_size = 1 if self._test_phase else kwargs['batch_size']
        self._image_size = kwargs['image_size']
        self._images_dir_train = kwargs['images_dir_train']
        self._images_dir_test = kwargs['images_dir_test']

        self._bg_images_dir_train = kwargs['bg_images_dir_train']
        self._bg_images_dir_test = kwargs['bg_images_dir_test']

        self._pairs_file_train = pd.read_csv(kwargs['pairs_file_train'])
        self._pairs_file_test = pd.read_csv(kwargs['pairs_file_test'])

        self._annotations_file_test = pd.read_csv(kwargs['annotations_file_train'], sep=':')
        self._annotations_file_train = pd.read_csv(kwargs['annotations_file_test'], sep=':')

        **self._annotations_file = pd.concat([self._annotations_file_test, self._annotations_file_train],
                                             axis=0, ignore_index=True)**

        self._annotations_file = self._annotations_file.set_index('name')

        self._use_input_pose = kwargs['use_input_pose']
        self._warp_skip = kwargs['warp_skip']
        self._disc_type = kwargs['disc_type']
        self._tmp_pose = kwargs['tmp_pose_dir']
        self._use_bg = kwargs['use_bg']
        self._pose_rep_type = kwargs['pose_rep_type']
        self._cache_pose_rep = kwargs['cache_pose_rep']

        self._test_data_index = 0

        if not os.path.exists(self._tmp_pose):
            os.makedirs(self._tmp_pose)

        print("Number of images: %s" % len(self._annotations_file))
        print("Number of pairs train: %s" % len(self._pairs_file_train))
        print("Number of pairs test: %s" % len(self._pairs_file_test))

        self._batches_before_shuffle = int(self._pairs_file_train.shape[0] // self._batch_size)

Is this what you wanted?

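As context for the code above: the dataset indexes its annotations by the 'name' column, and .loc raises a KeyError when a label is missing from that index. A minimal sketch with hypothetical file names, assuming the repo's ':'-separated annotation format:

```python
import io

import pandas as pd

# Hypothetical annotations file in the repo's ':'-separated format.
csv_text = ('name:keypoints_y:keypoints_x\n'
            'denis_walk000000.jpg:"[13, 28]":"[37, 47]"\n'
            'source-image.jpg:"[14, 24]":"[37, 47]"\n')

annotations = pd.read_csv(io.StringIO(csv_text), sep=':').set_index('name')

row = annotations.loc['source-image.jpg']  # works: the label is in the index
print(row['keypoints_y'])                  # -> [14, 24]
```

If the file had been written with a different separator than the one passed to pd.read_csv, the 'name' values would not parse into their own column, and the same .loc call would raise the KeyError seen in the traceback above.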

AliaksandrSiarohin commented on June 20, 2024

No. Add:

print(self._annotations_file)  # at line 68

then copy the result from the console and send it to me.


shreya800 commented on June 20, 2024

I added the print of self._annotations_file in pose_dataset.py. Here is what I get:

(base) C:\Users\Documents\pose gan\pose-gan-master>python pose_dataset.py
C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
2018-10-17 17:36:29.962460: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2018-10-17 17:36:31.935656: W tensorflow/core/framework/op_def_util.cc:355] Op BatchNormWithGlobalNormalization is deprecated. It will cease to work in GraphDef version 9. Use tf.nn.batch_normalization().


AliaksandrSiarohin commented on June 20, 2024

I need the content of the variable self._annotations_file.
You should run 'python demo.py ...'


shreya800 commented on June 20, 2024

I added a print(self._annotation_file) command at line 68 of pose_dataset.py and ran demo.py, but I did not get any output from the print command:

(base) C:\Users\Documents\pose gan\pose-gan-master>python demo.py --dataset prw --warp_skip mask --generator_checkpoint C:\Users\rashmi.vijayan.nair\Downloads\generator-warp-mask-nn3-cl12.h5 --use_bg 1
C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
2018-10-17 19:29:12.786739: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2018-10-17 19:29:14.790225: W tensorflow/core/framework/op_def_util.cc:355] Op BatchNormWithGlobalNormalization is deprecated. It will cease to work in GraphDef version 9. Use tf.nn.batch_normalization().
Annotate image keypoints...
C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\engine\saving.py:292: UserWarning: No training configuration found in save file: the model was not compiled. Compile it manually.
warnings.warn('No training configuration found in save file: '
0it [00:00, ?it/s]C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\skimage\transform\_warps.py:84: UserWarning: The default mode, 'constant', will be changed to 'reflect' in skimage 0.15.
warn("The default mode, 'constant', will be changed to 'reflect' in "
C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\skimage\util\dtype.py:122: UserWarning: Possible precision loss when converting from float64 to uint8
.format(dtypeobj_in, dtypeobj_out))
11it [05:30, 25.19s/it]
Create pairs dataset...
name ... keypoints_x
0 denis_walk000000.jpg ... [37, 47, 40, 37, 26, 54, 63, -1, 45, 57, -1, 5...

[1 rows x 3 columns]
0 denis_walk000000.jpg
1 denis_walk000001.jpg
2 denis_walk000002.jpg
3 denis_walk000003.jpg
4 denis_walk000004.jpg
5 denis_walk000005.jpg
6 denis_walk000006.jpg
7 denis_walk000007.jpg
8 denis_walk000008.jpg
9 denis_walk000009.jpg
10 source-image.jpg
Name: name, dtype: object
Number of pairs: 10
Create bg images...
Generating images...
C:\Users\Documents\pose gan\pose-gan-master\pose_dataset.py:34: FutureWarning: Sorting because non-concatenation axis is not aligned. A future version
of pandas will change to not sort by default.

To accept the future behavior, pass 'sort=True'.

To retain the current behavior and silence the warning, pass sort=False

axis=0, ignore_index=True)
Number of images: 11
Number of pairs train: 0
Number of pairs test: 10
Generate images...
0%| | 0/10 [00:00<?, ?it/s]
Traceback (most recent call last):
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\indexing.py", line 1790, in _validate_key
error()
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\indexing.py", line 1785, in error
axis=self.obj._get_axis_name(axis)))
KeyError: 'the label [source-image.jpg] is not in the [index]'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "demo.py", line 115, in <module>
input_images, target_images, generated_images,source_image, names = generate_images(dataset, generator, args.use_input_pose)
File "C:\Users\Documents\pose gan\pose-gan-master\test.py", line 91, in generate_images
batch, name = dataset.next_generator_sample_test(with_names=True)
File "C:\Users\Documents\pose gan\pose-gan-master\pose_dataset.py", line 175, in next_generator_sample_test
batch = self.load_batch(index, False, True)
File "C:\Users\Documents\pose gan\pose-gan-master\pose_dataset.py", line 156, in load_batch
result.append(self.compute_pose_map_batch(pair_df, 'from'))
File "C:\Users\Documents\pose gan\pose-gan-master\pose_dataset.py", line 68, in compute_pose_map_batch
row = self._annotations_file.loc[p[direction]]
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\indexing.py", line 1478, in __getitem__
return self._getitem_axis(maybe_callable, axis=axis)
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\indexing.py", line 1911, in _getitem_axis
self._validate_key(key, axis)
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\indexing.py", line 1798, in _validate_key
error()
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\indexing.py", line 1785, in error
axis=self.obj._get_axis_name(axis)))
KeyError: 'the label [source-image.jpg] is not in the [index]'


AliaksandrSiarohin commented on June 20, 2024

Sorry, I can't find the content of self._annotations_file.
Do you have something like this:

print (self._annotation_file) #Line 68
row = self._annotations_file.loc[p[direction]] #Line 69


AliaksandrSiarohin commented on June 20, 2024

Try to print it after creation:

self._annotations_file = self._annotations_file.set_index('name')
print(self._annotations_file)


AliaksandrSiarohin commented on June 20, 2024

And what do these stars:

**self._annotations_file = pd.concat([self._annotations_file_test, self._annotations_file_train],
                                     axis=0, ignore_index=True)**

mean?


shreya800 commented on June 20, 2024

Got this after running:

print (self._annotation_file) #Line 68
row = self._annotations_file.loc[p[direction]] #Line 69

(base) C:\Users\Documents\pose gan\pose-gan-master>python demo.py --dataset prw --warp_skip mask --generator_checkpoint C:\Users\rashmi.vijayan.nair\Downloads\generator-warp-mask-nn3-cl12.h5 --use_bg 1
C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
2018-10-17 20:03:12.713144: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2018-10-17 20:03:14.735110: W tensorflow/core/framework/op_def_util.cc:355] Op BatchNormWithGlobalNormalization is deprecated. It will cease to work in GraphDef version 9. Use tf.nn.batch_normalization().
Annotate image keypoints...
C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\engine\saving.py:292: UserWarning: No training configuration found in save file: the model was not compiled. Compile it manually.
warnings.warn('No training configuration found in save file: '
0it [00:00, ?it/s]C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\skimage\transform\_warps.py:84: UserWarning: The default mode, 'constant', will be changed to 'reflect' in skimage 0.15.
warn("The default mode, 'constant', will be changed to 'reflect' in "
C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\skimage\util\dtype.py:122: UserWarning: Possible precision loss when converting from float64 to uint8
.format(dtypeobj_in, dtypeobj_out))
11it [03:51, 21.19s/it]
Create pairs dataset...
name ... keypoints_x
0 denis_walk000000.jpg ... [37, 47, 40, 37, 26, 54, 63, -1, 45, 57, -1, 5...

[1 rows x 3 columns]
0 denis_walk000000.jpg
1 denis_walk000001.jpg
2 denis_walk000002.jpg
3 denis_walk000003.jpg
4 denis_walk000004.jpg
5 denis_walk000005.jpg
6 denis_walk000006.jpg
7 denis_walk000007.jpg
8 denis_walk000008.jpg
9 denis_walk000009.jpg
10 source-image.jpg
Name: name, dtype: object
Number of pairs: 10
Create bg images...
Generating images...
C:\Users\Documents\pose gan\pose-gan-master\pose_dataset.py:34: FutureWarning: Sorting because non-concatenation axis is not aligned. A future version
of pandas will change to not sort by default.

To accept the future behavior, pass 'sort=True'.

To retain the current behavior and silence the warning, pass sort=False

axis=0, ignore_index=True)
Number of images: 11
Number of pairs train: 0
Number of pairs test: 10
Generate images...
0%| | 0/10 [00:00<?, ?it/s] keypoints_x keypoints_y name,keypoints_y,keypoints_x
name
NaN NaN NaN denis_walk000000.jpg,"[13, 28, 29, 42, 46, 27,...
NaN NaN NaN denis_walk000001.jpg,"[12, 28, 29, 44, 49, 27,...
NaN NaN NaN denis_walk000002.jpg,"[11, 25, 25, 43, 51, 25,...
NaN NaN NaN denis_walk000003.jpg,"[10, 25, 25, 42, 54, 26,...
NaN NaN NaN denis_walk000004.jpg,"[9, 24, 23, 42, 54, 25, ...
NaN NaN NaN denis_walk000005.jpg,"[7, 22, 22, 42, 54, 22, ...
NaN NaN NaN denis_walk000006.jpg,"[7, 21, 21, 42, 54, 22, ...
NaN NaN NaN denis_walk000007.jpg,"[8, 21, 20, 42, 54, 21, ...
NaN NaN NaN denis_walk000008.jpg,"[8, 21, 21, 42, 54, 21, ...
NaN NaN NaN denis_walk000009.jpg,"[8, 20, 20, 42, 54, 21, ...
NaN NaN NaN source-image.jpg,"[14, 24, 25, 41, 56, 23, 39,...

Traceback (most recent call last):
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\indexing.py", line 1790, in _validate_key
error()
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\indexing.py", line 1785, in error
axis=self.obj._get_axis_name(axis)))
KeyError: 'the label [source-image.jpg] is not in the [index]'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "demo.py", line 115, in <module>
input_images, target_images, generated_images,source_image, names = generate_images(dataset, generator, args.use_input_pose)
File "C:\Users\Documents\pose gan\pose-gan-master\test.py", line 91, in generate_images
batch, name = dataset.next_generator_sample_test(with_names=True)
File "C:\Users\Documents\pose gan\pose-gan-master\pose_dataset.py", line 175, in next_generator_sample_test
batch = self.load_batch(index, False, True)
File "C:\Users\Documents\pose gan\pose-gan-master\pose_dataset.py", line 156, in load_batch
result.append(self.compute_pose_map_batch(pair_df, 'from'))
File "C:\Users\Documents\pose gan\pose-gan-master\pose_dataset.py", line 69, in compute_pose_map_batch
row = self._annotations_file.loc[p[direction]]
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\indexing.py", line 1478, in __getitem__
return self._getitem_axis(maybe_callable, axis=axis)
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\indexing.py", line 1911, in _getitem_axis
self._validate_key(key, axis)
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\indexing.py", line 1798, in _validate_key
error()
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\indexing.py", line 1785, in error
axis=self.obj._get_axis_name(axis)))
KeyError: 'the label [source-image.jpg] is not in the [index]'


shreya800 commented on June 20, 2024

The stars were for making that particular line of code bold, but I think the formatting did not render properly!


AliaksandrSiarohin commented on June 20, 2024

OK. So the problem is probably in the way you handle the json.
Can you send me the part that you changed?


AliaksandrSiarohin commented on June 20, 2024

And also the file tmp-annotations-test.csv


shreya800 commented on June 20, 2024

I have changed the following things in the demo file.

The demo file had this (lines 38 to 44):
f = open(args.annotations_file_train, 'w')
print >>f, 'name:keypoints_y:keypoints_x'
f.close()

f = open(args.pairs_file_train, 'w')
print >>f, 'from,to'
f.close()

but I have changed this to:

with open(args.annotations_file_train, 'w', newline='') as f:
    thewriter = csv.writer(f)
    thewriter.writerow(['name:keypoints_y:keypoints_x'])
    f.close()

with open(args.pairs_file_train, 'w', newline='') as f:
    thewriter = csv.writer(f)
    thewriter.writerow(['from,to'])
    f.close()

I made the above changes because the content was not getting written to the csv file; instead it was getting printed to the console.

I have used similar commands for:

with open(args.annotations_file_test, 'w') as result_file:
    thewriter = csv.writer(result_file)
    thewriter.writerow(['name', 'keypoints_y', 'keypoints_x'])


shreya800 commented on June 20, 2024

I also made a change at line 70 of the demo file.

It had the following:

print >> result_file, "%s: %s: %s" % (os.path.basename(image_name),
                                      str(list(pose_cords[:, 0])), str(list(pose_cords[:, 1])))

which I changed to:

thewriter.writerow([os.path.basename(image_name),
                    str(list(pose_cords[:, 0])), str(list(pose_cords[:, 1]))])


shreya800 commented on June 20, 2024

Also, I have used only 10 target images out of 66, because running the demo file with all 66 images was taking a long time; I thought I would get output for 10 first and then do the same for all 66 target images.


AliaksandrSiarohin commented on June 20, 2024

In my csv files, I use ':' as the separator. When you initialize your writer, it seems you do not specify this.


shreya800 commented on June 20, 2024

The zip file given below contains tmp-annotations-test.csv:
tmp-annotation-test.zip


AliaksandrSiarohin commented on June 20, 2024

Yes, the problem is with the separator. Why did you decide to change the "prints"?


shreya800 commented on June 20, 2024

Because the content was not getting written to the csv file (it was getting printed to the console instead), I made the changes to the "prints".


AliaksandrSiarohin commented on June 20, 2024

You can specify where you want to print.
For example, if you want to print "abc" to file f:

print("abc", file=f)
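Applied to the header lines quoted earlier from demo.py, the Python 3 form of the original `print >>f, ...` calls would look like this; the file names here are placeholders standing in for args.annotations_file_train and args.pairs_file_train:

```python
# Python 3 replacement for the Python 2 `print >>f, ...` statements,
# keeping the original ':' and ',' header formats exactly as written.
with open('tmp-annotations-train.csv', 'w') as f:
    print('name:keypoints_y:keypoints_x', file=f)

with open('tmp-pairs-train.csv', 'w') as f:
    print('from,to', file=f)
```

This writes the raw header strings unchanged, so no csv.writer (and no delimiter choice) is involved at all.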


shreya800 commented on June 20, 2024

Also, everything was getting printed in the csv file like names:keypoints_x:keypoints_y, all in a single column. After changing

print >> result_file, "%s: %s: %s" % (os.path.basename(image_name),
                                      str(list(pose_cords[:, 0])), str(list(pose_cords[:, 1])))

to this:

thewriter.writerow([os.path.basename(image_name),
                    str(list(pose_cords[:, 0])), str(list(pose_cords[:, 1]))])

we get names, keypoints_x, keypoints_y in separate columns in tmp-annotation-test.csv.


AliaksandrSiarohin commented on June 20, 2024

The lists pose_cords[:, 0] and pose_cords[:, 1] contain ',' when you save them.
If you use the default csv separator ',', you will have a problem parsing these lists.
That is why I replaced the default separator with ':'.
So you can try to:

  1. Restore the default prints and adapt them to Python 3 (as I previously mentioned)
  2. Use csv.writer(result_file, delimiter=':')
  3. Use Python 2
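A minimal sketch of option 2, using hypothetical keypoint values: writing with delimiter=':' keeps the commas inside the saved lists from being treated as field separators, so pd.read_csv(sep=':') recovers the three columns and a usable 'name' index.

```python
import csv
import io

import pandas as pd

# Hypothetical keypoints; in demo.py these come from pose_cords[:, 0] / [:, 1].
keypoints_y = [13, 28, 29]
keypoints_x = [37, 47, 40]

buf = io.StringIO()
writer = csv.writer(buf, delimiter=':')  # option 2: ':' instead of the default ','
writer.writerow(['name', 'keypoints_y', 'keypoints_x'])
writer.writerow(['source-image.jpg', str(keypoints_y), str(keypoints_x)])

buf.seek(0)
df = pd.read_csv(buf, sep=':').set_index('name')  # same sep the dataset loader uses
row = df.loc['source-image.jpg']                  # no KeyError with matching separators
```

With the default ',' delimiter instead, the commas inside '[13, 28, 29]' force quoting, and a ':'-based read then sees the whole row as one field, which is exactly the single-column parse reported above.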


shreya800 commented on June 20, 2024

You can specify where you want to print.
For example if you want to print "abc" in file f.

print ("abc", file=f)

I have used the print command you suggested above, and it now gives the output file, i.e. tmp-annotations-test.csv, as required.

But now I am facing the following error:

(base) C:\Users\Documents\pose gan\pose-gan-master>python demo.py --dataset prw --warp_skip mask --generator_checkpoint C:\Users\rashmi.vijayan.nair\Downloads\generator-warp-mask-nn3-cl12.h5 --use_bg 1
C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
2018-10-17 22:07:12.378630: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2018-10-17 22:07:14.110600: W tensorflow/core/framework/op_def_util.cc:355] Op BatchNormWithGlobalNormalization is deprecated. It will cease to work in GraphDef version 9. Use tf.nn.batch_normalization().
Annotate image keypoints...
C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\engine\saving.py:292: UserWarning: No training configuration found in save file: the model was not compiled. Compile it manually.
warnings.warn('No training configuration found in save file: '
0it [00:00, ?it/s]C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\skimage\transform\_warps.py:84: UserWarning: The default mode, 'constant', will be changed to 'reflect' in skimage 0.15.
warn("The default mode, 'constant', will be changed to 'reflect' in "
C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\skimage\util\dtype.py:122: UserWarning: Possible precision loss when converting from float64 to uint8
.format(dtypeobj_in, dtypeobj_out))
11it [03:47, 22.60s/it]
Create pairs dataset...
name ... keypoints_x
0 denis_walk000000.jpg ... [37, 47, 40, 37, 26, 54, 63, -1, 45, 57, -1, ...

[1 rows x 3 columns]
0 denis_walk000000.jpg
1 denis_walk000001.jpg
2 denis_walk000002.jpg
3 denis_walk000003.jpg
4 denis_walk000004.jpg
5 denis_walk000005.jpg
6 denis_walk000006.jpg
7 denis_walk000007.jpg
8 denis_walk000008.jpg
9 denis_walk000009.jpg
10 source-image.jpg
Name: name, dtype: object
Number of pairs: 10
Create bg images...
Generating images...
Number of images: 11
Number of pairs train: 0
Number of pairs test: 10
Generate images...
0%| | 0/10 [00:00<?, ?it/s] ...
name ...
denis_walk000000.jpg ...
denis_walk000001.jpg ...
denis_walk000002.jpg ...
denis_walk000003.jpg ...
denis_walk000004.jpg ...
denis_walk000005.jpg ...
denis_walk000006.jpg ...
denis_walk000007.jpg ...
denis_walk000008.jpg ...
denis_walk000009.jpg ...
source-image.jpg ...

[11 rows x 2 columns]
...
name ...
denis_walk000000.jpg ...
denis_walk000001.jpg ...
denis_walk000002.jpg ...
denis_walk000003.jpg ...
denis_walk000004.jpg ...
denis_walk000005.jpg ...
denis_walk000006.jpg ...
denis_walk000007.jpg ...
denis_walk000008.jpg ...
denis_walk000009.jpg ...
source-image.jpg ...

[11 rows x 2 columns]

Traceback (most recent call last):
File "demo.py", line 113, in <module>
input_images, target_images, generated_images, names = generate_images(dataset, generator, args.use_input_pose)
File "C:\Users\Documents\pose gan\pose-gan-master\test.py", line 92, in generate_images
out = generator.predict(batch)
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\engine\training.py", line 1169, in predict
steps=steps)
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\engine\training_arrays.py", line 294, in predict_loop
batch_outs = f(ins_batch)
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py", line 2715, in __call__
return self._call(inputs)
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py", line 2671, in _call
session)
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py", line 2623, in _make_callable
callable_fn = session._make_callable_from_options(callable_opts)
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1431, in _make_callable_from_options
return BaseSession._Callable(self, callable_options)
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1385, in __init__
session._session, options_ptr, status)
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 526, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: input_1:0 is both fed and fetched.
Exception ignored in: <bound method BaseSession._Callable.__del__ of <tensorflow.python.client.session.BaseSession._Callable object at 0x0000029CB0D7DF60>>
Traceback (most recent call last):
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1415, in __del__
self._session._session, self._handle, status)
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 526, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: No such callable handle: 2871592353864

I have reverted all of my changes back to the original commands that were present in the demo file, and I am getting the above error.


shreya800 commented on June 20, 2024

The above is the error I get when running demo.py exactly as it is in the repository, without making any changes.


AliaksandrSiarohin commented on June 20, 2024

You are using the wrong tf version. Check #15.


shreya800 commented on June 20, 2024

You are using the wrong tf version. Check #15.

Is that a TensorFlow version for a system with a GPU?

I am using a system with only a CPU!


shreya800 commented on June 20, 2024

If it is for GPU, then what is the alternative option that could be used for a CPU-only system?


AliaksandrSiarohin commented on June 20, 2024

The alternative is written in #15. Please check it.


shreya800 commented on June 20, 2024

I have tried the alternative option by replacing the whole line 142

return Model(inputs=[input_img] + input_pose + [output_img, output_pose] + bg_img + warp,
             outputs=[input_img] + input_pose + [out, output_pose] + bg_img + warp_in_disc)

with:

outputs = [input_img] + input_pose + [out, output_pose] + bg_img + warp_in_disc
outputs = [keras.layers.Lambda(lambda x: ktf.identity(x))(out) for out in outputs]

but I am getting the below error:

(base) C:\Users\Documents\pose gan\pose-gan-master>python demo.py --dataset prw --warp_skip mask --generator_checkpoint C:\Users\rashmi.vijayan.nair\Downloads\generator-warp-mask-nn3-cl12.h5 --use_bg 1
C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
2018-10-22 12:52:00.128044: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2018-10-22 12:52:02.401207: W tensorflow/core/framework/op_def_util.cc:355] Op BatchNormWithGlobalNormalization is deprecated. It will cease to work in GraphDef version 9. Use tf.nn.batch_normalization().
Annotate image keypoints...
C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\engine\saving.py:292: UserWarning: No training configuration found in save file: the model was not compiled. Compile it manually.
warnings.warn('No training configuration found in save file: '
0it [00:00, ?it/s]C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\skimage\transform\_warps.py:84: UserWarning: The default mode, 'constant', will be changed to 'reflect' in skimage 0.15.
warn("The default mode, 'constant', will be changed to 'reflect' in "
C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\skimage\util\dtype.py:122: UserWarning: Possible precision loss when converting from float64 to uint8
.format(dtypeobj_in, dtypeobj_out))
11it [03:36, 17.75s/it]
Create pairs dataset...
name ... keypoints_x
0 denis_walk000000.jpg ... [37, 47, 40, 37, 26, 54, 63, -1, 45, 57, -1, ...

[1 rows x 3 columns]
0 denis_walk000000.jpg
1 denis_walk000001.jpg
2 denis_walk000002.jpg
3 denis_walk000003.jpg
4 denis_walk000004.jpg
5 denis_walk000005.jpg
6 denis_walk000006.jpg
7 denis_walk000007.jpg
8 denis_walk000008.jpg
9 denis_walk000009.jpg
10 source-image.jpg
Name: name, dtype: object
Number of pairs: 10
Create bg images...
Generating images...
Number of images: 11
Number of pairs train: 0
Number of pairs test: 10
Traceback (most recent call last):
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1626, in _create_c_op
c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimension 3 in both shapes must be equal, but are 18 and 3. Shapes are [?,128,64,18] and [?,128,64,3].
From merging shape 3 with other shapes. for 'lambda_1/Identity/input' (op: 'Pack') with input shapes: [?,128,64,3], [?,128,64,18], [?,128,64,3], [?,128,64,18], [?,128,64,3].
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "demo.py", line 108, in <module>
args.warp_agg, args.use_bg, args.pose_rep_type)
File "C:\Users\Documents\pose gan\pose-gan-master\conditional_gan.py", line 144, in make_generator
outputs =[keras.layers.Lambda(lambda x: ktf.identity(x))(out) for out in outputs]
File "C:\Users\Documents\pose gan\pose-gan-master\conditional_gan.py", line 144, in <listcomp>
outputs =[keras.layers.Lambda(lambda x: ktf.identity(x))(out) for out in outputs]
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\engine\base_layer.py", line 457, in __call__
output = self.call(inputs, **kwargs)
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\keras\layers\core.py", line 687, in call
return self.function(inputs, **arguments)
File "C:\Users\Documents\pose gan\pose-gan-master\conditional_gan.py", line 144, in
outputs =[keras.layers.Lambda(lambda x: ktf.identity(x))(out) for out in outputs]
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\ops\array_ops.py", line 81, in identity
return gen_array_ops.identity(input, name=name)
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 3993, in identity
"Identity", input=input, name=name)
File "C:\Users\AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 528, in _apply_op_helper
(input_name, err))
ValueError: Tried to convert 'input' to a tensor and failed. Error: Dimension 3 in both shapes must be equal, but are 18 and 3. Shapes are [?,128,64,18] and [?,128,64,3].
From merging shape 3 with other shapes. for 'lambda_1/Identity/packed' (op: 'Pack') with input shapes: [?,128,64,3], [?,128,64,18], [?,128,64,3], [?,128,64,18], [?,128,64,3].

I have run the code with the following versions:
tensorflow==1.11.0
keras==2.2.3

Also, I have used only 10 images from the target images folder instead of 66 to save time.

from pose-gan.

AliaksandrSiarohin avatar AliaksandrSiarohin commented on June 20, 2024

If you can't install the proper version, try to modify the source code, as described in #15:

You can alternatively try (in line 142 conditional_gan.py)

outputs = [input_img] + input_pose + [out, output_pose] + bg_img + warp_in_disc
outputs = [keras.layers.Lambda(lambda x: ktf.identity(x))(out) for out in outputs]

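A pure-Python illustration (no TensorFlow needed, all names hypothetical) of why the per-output wrapping above avoids the error in the traceback: TF's Pack op, like the toy `stack()` below, insists that all inputs share one shape, while applying an op to each tensor separately never asks for a common shape.

```python
# Toy model of the failure mode: "tensors" carry only a shape, like the
# [?,128,64,3] vs [?,128,64,18] shapes in the traceback above.
def stack(tensors):
    """Mimics TF's Pack op: refuses inputs whose shapes differ."""
    shapes = {t['shape'] for t in tensors}
    if len(shapes) != 1:
        raise ValueError('all input shapes must be equal, got %s' % sorted(shapes))
    return {'shape': (len(tensors),) + next(iter(shapes))}

identity = lambda t: t  # stand-in for ktf.identity

tensors = [{'shape': (128, 64, 3)}, {'shape': (128, 64, 18)}]  # mixed channels

# Passing the whole list to one op tries to Pack it -> the reported error.
try:
    stack(tensors)
except ValueError as e:
    print('Pack failed:', e)

# The fix: apply the op to each output separately; shapes stay untouched.
wrapped = [identity(t) for t in tensors]
print(len(wrapped))  # 2
```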


shreya800 avatar shreya800 commented on June 20, 2024


AliaksandrSiarohin avatar AliaksandrSiarohin commented on June 20, 2024

Send me the full code of make_generator.


shreya800 avatar shreya800 commented on June 20, 2024

def make_generator(image_size, use_input_pose, warp_skip, disc_type, warp_agg, use_bg, pose_rep_type):
use_warp_skip = warp_skip != 'none'
input_img = Input(list(image_size) + [3])
output_pose = Input(list(image_size) + [18 if pose_rep_type == 'hm' else 3])
output_img = Input(list(image_size) + [3])
bg_img = Input(list(image_size) + [3])

nfilters_decoder = (512, 512, 512, 256, 128, 3) if max(image_size) == 128 else (512, 512, 512, 512, 256, 128, 3)
nfilters_encoder = (64, 128, 256, 512, 512, 512) if max(image_size) == 128 else (64, 128, 256, 512, 512, 512, 512)

if warp_skip == 'full':
    warp = [Input((1, 8))]
elif warp_skip == 'mask':
    warp = [Input((10, 8)), Input((10, image_size[0], image_size[1]))]
elif warp_skip == 'stn':
    warp = [Input((72,))]
else:
    warp = []

if use_input_pose:
    input_pose = [Input(list(image_size) + [18 if pose_rep_type == 'hm' else 3])]
else:
    input_pose = []

if use_bg:
    bg_img = [bg_img]
else:
    bg_img = [] 

if use_warp_skip:
    enc_app_layers = encoder([input_img] + input_pose, nfilters_encoder)
    enc_tg_layers = encoder([output_pose] + bg_img, nfilters_encoder)
    enc_layers = concatenate_skips(enc_app_layers, enc_tg_layers, warp, image_size, warp_agg, warp_skip)
else:
    enc_layers = encoder([input_img] + input_pose + [output_pose], nfilters_encoder)

out = decoder(enc_layers[::-1], nfilters_decoder)

warp_in_disc = [] if disc_type != 'warp' else warp

outputs=[input_img] + input_pose + [out, output_pose] + bg_img + warp_in_disc,
outputs =[keras.layers.Lambda(lambda x: ktf.identity(x))(out) for out in outputs]


AliaksandrSiarohin avatar AliaksandrSiarohin commented on June 20, 2024

Why is there no return? Did you forget to copy it?


shreya800 avatar shreya800 commented on June 20, 2024

No, I did not forget to copy it. This is what we have in make_generator.

How do I add a return in make_generator?

return Model(outputs=[input_img] + input_pose + [out, output_pose] + bg_img + warp_in_disc,
outputs =[keras.layers.Lambda(lambda x: ktf.identity(x))(out) for out in outputs])
This way?


AliaksandrSiarohin avatar AliaksandrSiarohin commented on June 20, 2024

Yes


shreya800 avatar shreya800 commented on June 20, 2024

If I use return Model then I get the following error:

(base) C:\Users\Downloads\pose gan (2)\pose gan\pose-gan-master>python demo.py --dataset prw --warp_skip mask --generator_checkpoint C:\Users\shreya.l.singh\Downloads\generator-warp-mask-nn3-cl12\generator-warp-mask-nn3-cl12.h5 --use_bg 1
C:\Users\AppData\Local\Continuum\Anaconda33\lib\site-packages\h5py\__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
2018-10-23 01:25:16.962028: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2018-10-23 01:25:18.236207: W T:\src\github\tensorflow\tensorflow\core\framework\op_def_util.cc:346] Op BatchNormWithGlobalNormalization is deprecated. It will cease to work in GraphDef version 9. Use tf.nn.batch_normalization().
Traceback (most recent call last):
File "demo.py", line 8, in
from conditional_gan import make_generator
File "C:\Users\Downloads\pose gan (2)\pose gan\pose-gan-master\conditional_gan.py", line 143
outputs = [keras.layers.Lambda(lambda x: ktf.identity(x))(out) for out in outputs])
^
SyntaxError: keyword argument repeated


AliaksandrSiarohin avatar AliaksandrSiarohin commented on June 20, 2024
outputs=[input_img] + input_pose + [out, output_pose] + bg_img + warp_in_disc,
outputs =[keras.layers.Lambda(lambda x: ktf.identity(x))(out) for out in outputs]

return Model(inputs=[input_img] + input_pose + [output_img, output_pose] + bg_img + warp, outputs=outputs)


shreya800 avatar shreya800 commented on June 20, 2024

After making the above changes I get the below error:

Traceback (most recent call last):
File "demo.py", line 108, in
args.warp_agg, args.use_bg, args.pose_rep_type)
File "C:\Users\Downloads\pose gan (2)\pose gan\pose-gan-master\conditional_gan.py", line 143, in make_generator
outputs =[keras.layers.Lambda(lambda x: ktf.identity(x))(out) for out in outputs]
File "C:\Users\Downloads\pose gan (2)\pose gan\pose-gan-master\conditional_gan.py", line 143, in
outputs =[keras.layers.Lambda(lambda x: ktf.identity(x))(out) for out in outputs]
File "C:\Users\AppData\Local\Continuum\Anaconda33\lib\site-packages\keras\engine\base_layer.py", line 457, in call
output = self.call(inputs, **kwargs)
File "C:\Users\AppData\Local\Continuum\Anaconda33\lib\site-packages\keras\layers\core.py", line 687, in call
return self.function(inputs, **arguments)
File "C:\Users\Downloads\pose gan (2)\pose gan\pose-gan-master\conditional_gan.py", line 143, in
outputs =[keras.layers.Lambda(lambda x: ktf.identity(x))(out) for out in outputs]
File "C:\Users\AppData\Local\Continuum\Anaconda33\lib\site-packages\tensorflow\python\ops\array_ops.py", line 80, in identity
return gen_array_ops.identity(input, name=name)
File "C:\Users\AppData\Local\Continuum\Anaconda33\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 3888, in identity
"Identity", input=input, name=name)
File "C:\Users\AppData\Local\Continuum\Anaconda33\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 528, in _apply_op_helper
(input_name, err))
ValueError: Tried to convert 'input' to a tensor and failed. Error: Dimension 3 in both shapes must be equal, but are 18 and 3. Shapes are [?,128,64,18] and [?,128,64,3].
From merging shape 3 with other shapes. for 'lambda_1/Identity/packed' (op: 'Pack') with input shapes: [?,128,64,3], [?,128,64,18], [?,128,64,3], [?,128,64,18], [?,128,64,3].


AliaksandrSiarohin avatar AliaksandrSiarohin commented on June 20, 2024
outputs=[input_img] + input_pose + [out, output_pose] + bg_img + warp_in_disc,

Should be without comma

outputs=[input_img] + input_pose + [out, output_pose] + bg_img + warp_in_disc

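A minimal stand-alone reproduction of the bug fixed above: in Python, a trailing comma turns the right-hand side of an assignment into a one-element tuple, so the later list comprehension iterated over a tuple containing the whole list of tensors (which TF then tried to Pack) instead of over the individual outputs.

```python
# With the trailing comma: outputs is a tuple holding one list.
outputs = [1, 2, 3],           # note the comma -> ([1, 2, 3],)
print(type(outputs).__name__)  # tuple
print(len(outputs))            # 1

# Without the comma: outputs is the list itself.
outputs = [1, 2, 3]
print(type(outputs).__name__)  # list
print(len(outputs))            # 3
```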

shreya800 avatar shreya800 commented on June 20, 2024

It worked!

denis_walk000000.jpg [13, 28, 29, 42, 46, 27, 42, -1, 58, 83, -1, ... [37, 47, 40, 37, 26, 54, 63, -1, 45, 57, -1, ...
denis_walk000001.jpg [12, 28, 29, 44, 49, 27, 43, 52, 58, 81, -1, ... [33, 42, 35, 34, 23, 49, 57, 47, 39, 49, -1, ...
denis_walk000002.jpg [11, 25, 25, 43, 51, 25, 42, 60, 57, 86, -1, ... [32, 38, 31, 31, 22, 45, 52, 59, 35, 43, -1, ...
denis_walk000003.jpg [10, 25, 25, 42, 54, 26, 44, 59, 57, 84, 101,... [26, 35, 27, 25, 21, 42, 46, 48, 30, 35, 59, ...
denis_walk000004.jpg [9, 24, 23, 42, 54, 25, 45, 53, 57, 84, 100, ... [23, 34, 28, 25, 21, 40, 42, 27, 28, 30, 51, ...
denis_walk000005.jpg [7, 22, 22, 42, 54, 22, 43, 58, 56, 83, 103, ... [22, 31, 26, 25, 21, 37, 40, 32, 28, 25, 38, ...
source-image.jpg [14, 24, 25, 41, 56, 23, 39, 57, 61, 89, 114,... [26, 29, 19, 11, 3, 41, 49, 56, 26, 29, 33, 4...
83%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████ | 5/6 [00:13<00:02, 2.65s/it] keypoints_y keypoints_x
name
denis_walk000000.jpg [13, 28, 29, 42, 46, 27, 42, -1, 58, 83, -1, ... [37, 47, 40, 37, 26, 54, 63, -1, 45, 57, -1, ...
denis_walk000001.jpg [12, 28, 29, 44, 49, 27, 43, 52, 58, 81, -1, ... [33, 42, 35, 34, 23, 49, 57, 47, 39, 49, -1, ...
denis_walk000002.jpg [11, 25, 25, 43, 51, 25, 42, 60, 57, 86, -1, ... [32, 38, 31, 31, 22, 45, 52, 59, 35, 43, -1, ...
denis_walk000003.jpg [10, 25, 25, 42, 54, 26, 44, 59, 57, 84, 101,... [26, 35, 27, 25, 21, 42, 46, 48, 30, 35, 59, ...
denis_walk000004.jpg [9, 24, 23, 42, 54, 25, 45, 53, 57, 84, 100, ... [23, 34, 28, 25, 21, 40, 42, 27, 28, 30, 51, ...
denis_walk000005.jpg [7, 22, 22, 42, 54, 22, 43, 58, 56, 83, 103, ... [22, 31, 26, 25, 21, 37, 40, 32, 28, 25, 38, ...
source-image.jpg [14, 24, 25, 41, 56, 23, 39, 57, 61, 89, 114,... [26, 29, 19, 11, 3, 41, 49, 56, 26, 29, 33, 4...
keypoints_y keypoints_x
name
denis_walk000000.jpg [13, 28, 29, 42, 46, 27, 42, -1, 58, 83, -1, ... [37, 47, 40, 37, 26, 54, 63, -1, 45, 57, -1, ...
denis_walk000001.jpg [12, 28, 29, 44, 49, 27, 43, 52, 58, 81, -1, ... [33, 42, 35, 34, 23, 49, 57, 47, 39, 49, -1, ...
denis_walk000002.jpg [11, 25, 25, 43, 51, 25, 42, 60, 57, 86, -1, ... [32, 38, 31, 31, 22, 45, 52, 59, 35, 43, -1, ...
denis_walk000003.jpg [10, 25, 25, 42, 54, 26, 44, 59, 57, 84, 101,... [26, 35, 27, 25, 21, 42, 46, 48, 30, 35, 59, ...
denis_walk000004.jpg [9, 24, 23, 42, 54, 25, 45, 53, 57, 84, 100, ... [23, 34, 28, 25, 21, 40, 42, 27, 28, 30, 51, ...
denis_walk000005.jpg [7, 22, 22, 42, 54, 22, 43, 58, 56, 83, 103, ... [22, 31, 26, 25, 21, 37, 40, 32, 28, 25, 38, ...
source-image.jpg [14, 24, 25, 41, 56, 23, 39, 57, 61, 89, 114,... [26, 29, 19, 11, 3, 41, 49, 56, 26, 29, 33, 4...
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:15<00:00, 2.58s/it]
Save images to output/generated_images...

Where can I see the output image?


AliaksandrSiarohin avatar AliaksandrSiarohin commented on June 20, 2024

in output/generated_images


shreya800 avatar shreya800 commented on June 20, 2024

Got it!!!
Thank you so much for guiding us throughout.. 👍 :)


shreya800 avatar shreya800 commented on June 20, 2024

[Attached result image: source-image.jpg_denis_walk000003.jpg]

This is the output we are getting after giving a new source image.
The image is very blurred.
How can we improve on this?


AliaksandrSiarohin avatar AliaksandrSiarohin commented on June 20, 2024

The model you use is for prw. It won't generalize to completely different persons. You should retrain the model using images similar to the images that will be used during testing.
You can also try the model for the fashion dataset, but it produces poor results for images of men because of the fashion dataset's imbalanced distribution.


shreya800 avatar shreya800 commented on June 20, 2024

The model you use is for prw. It won't generalize to completely different persons. You should retrain the model using images similar to the images that will be used during testing.
You can also try the model for the fashion dataset, but it produces poor results for images of men because of the fashion dataset's imbalanced distribution.

Okay ....
But where can we find the training and testing code for the prw dataset?


AliaksandrSiarohin avatar AliaksandrSiarohin commented on June 20, 2024

Why do you need it?
The dataset can be found here https://yadi.sk/d/I7YLOnRsuMmtug


shreya800 avatar shreya800 commented on June 20, 2024

In demo.py the input is an image and the output is also an image. Is it possible to make a few changes in the code so that, after running it, the webcam turns on, takes the pose of the person in front of the camera as input, and generates images matching that person's pose?


AliaksandrSiarohin avatar AliaksandrSiarohin commented on June 20, 2024

It is possible. But what is the use case of this?


shreya800 avatar shreya800 commented on June 20, 2024

This is the task given to us by our mentor, so I was just asking!!! Anyway, thank you so much :)


shreya800 avatar shreya800 commented on June 20, 2024

We run it like this at the Anaconda prompt:
python demo.py --dataset prw --warp_skip mask --generator_checkpoint generator-warp-mask-nn3-cl12.h5 --use_bg 1

What if I just want to type python demo.py at the Anaconda prompt and have everything else handled inside the code? What changes will I have to make then, in demo.py or any other file?


AliaksandrSiarohin avatar AliaksandrSiarohin commented on June 20, 2024

You can change the corresponding default values for the parameters in 'cmd.py'. Replace

parser.add_argument('--dataset', default='market', choices=['market', 'fasion', 'prw', 'fasion128', 'fasion128128'],
                        help='Market, fasion or prw')

With

parser.add_argument('--dataset', default='prw', choices=['market', 'fasion', 'prw', 'fasion128', 'fasion128128'],
                        help='Market, fasion or prw')

Same for other parameters.

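The change above can be sketched end-to-end. Only --dataset is quoted from the real cmd.py; the other flag names and defaults below come from the command line used in this thread and may differ in the actual file.

```python
import argparse

# Give every flag from the demo command a default, so that a bare
# `python demo.py` is enough (no command-line arguments needed).
parser = argparse.ArgumentParser()
parser.add_argument('--dataset', default='prw',
                    choices=['market', 'fasion', 'prw', 'fasion128', 'fasion128128'],
                    help='Market, fasion or prw')
parser.add_argument('--warp_skip', default='mask',
                    choices=['none', 'full', 'mask', 'stn'])
parser.add_argument('--generator_checkpoint',
                    default='generator-warp-mask-nn3-cl12.h5')
parser.add_argument('--use_bg', default=1, type=int)

args = parser.parse_args([])  # simulate running with no flags at all
print(args.dataset, args.warp_skip, args.use_bg)  # prw mask 1
```

With defaults in place, flags on the command line still override them, so the original long invocation keeps working too.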
