
keras_realtime_multi-person_pose_estimation's People

Contributors

laclouis5, michalfaber, mmoraschini, mohamed209, rludlow


keras_realtime_multi-person_pose_estimation's Issues

Training with local data

Hi @michalfaber, thanks for the fantastic work.
I was trying to see if I can still train with local data only, without augmentation.
Apparently "use_client_gen = False" is marked as deprecated in the code, and a brief try shows that the format of the locally generated h5 file is not compatible with what is specified in DataIterator.
So my question is: was there once a working version for training with local data that you could upload, or could you shed some light on how to modify the code to train with local data without augmentation? Thanks!

How to understand the criterion?

in demo.ipynb:

    score_midpts = np.multiply(vec_x, vec[0]) + np.multiply(vec_y, vec[1])

    score_with_dist_prior = sum(score_midpts) / len(score_midpts) \
                            + min(0.5 * oriImg.shape[0] / norm - 1, 0)

    criterion1 = len(np.nonzero(score_midpts > param['thre2'])[0]) > 0.8 * len(score_midpts)
    # param['thre2'] = 0.05
    criterion2 = score_with_dist_prior > 0
    if criterion1 and criterion2:
        connection_candidate.append([i, j, score_with_dist_prior,
                                     score_with_dist_prior + candA[i][2] + candB[j][2]])

How should one understand the value of min(0.5*oriImg.shape[0]/norm - 1, 0)? The two score terms are added directly. And how should one understand the two criteria?
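
My reading of these terms, as an annotated sketch (variable names mirror the snippet above; the interpretation is mine, not the author's):

    import numpy as np

    def connection_ok(vec_x, vec_y, vec, norm, img_height, thre2=0.05):
        # Dot product between the PAF vectors sampled at midpoints along the
        # candidate limb and the unit vector joining the two keypoints:
        # values near 1 mean the field agrees with the candidate connection.
        score_midpts = vec_x * vec[0] + vec_y * vec[1]

        # Distance prior: 0.5 * img_height / norm - 1 is negative exactly
        # when the limb length norm exceeds half the image height, so
        # min(..., 0) penalizes implausibly long limbs and leaves short
        # ones untouched.
        score_with_dist_prior = score_midpts.mean() \
                                + min(0.5 * img_height / norm - 1, 0)

        # criterion1: at least 80% of the sampled midpoints are well aligned.
        criterion1 = np.count_nonzero(score_midpts > thre2) > 0.8 * len(score_midpts)
        # criterion2: the penalized mean alignment is still positive overall.
        criterion2 = score_with_dist_prior > 0
        return criterion1 and criterion2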

How to generate ground truth

Thanks for your great work!
I have my own dataset (not COCO), and I want to know how to generate the ground truth (confidence maps and PAFs). In particular, when generating the ground-truth PAF, I don't know how to judge whether a point lies on a limb; the threshold is not clear from the paper, and I couldn't find the ground-truth generation in your code.
Also, what do mask_all and mask_miss mean in COCO?
Thanks!
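
For what it's worth, the paper's definitions can be coded up directly; here is a minimal sketch (my own implementation of the paper's formulas, not this repo's code; the limb half-width sigma_l is a free parameter):

    import numpy as np

    def confidence_map(h, w, keypoint, sigma=7.0):
        # Gaussian peak centered at the keypoint (the paper's S* map).
        ys, xs = np.mgrid[0:h, 0:w]
        d2 = (xs - keypoint[0]) ** 2 + (ys - keypoint[1]) ** 2
        return np.exp(-d2 / sigma ** 2)

    def paf_map(h, w, x1, x2, sigma_l=5.0):
        # A pixel p counts as "on the limb" when its projection onto the
        # limb direction falls within [0, limb_length] and its perpendicular
        # distance from the limb axis is at most sigma_l.
        x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
        v = x2 - x1
        length = max(np.linalg.norm(v), 1e-8)
        v = v / length                       # unit direction along the limb
        v_perp = np.array([-v[1], v[0]])     # unit normal to the limb
        ys, xs = np.mgrid[0:h, 0:w]
        d = np.stack([xs - x1[0], ys - x1[1]], axis=-1)
        along = d @ v
        across = np.abs(d @ v_perp)
        on_limb = (along >= 0) & (along <= length) & (across <= sigma_l)
        field = np.zeros((h, w, 2))
        field[on_limb] = v                   # every on-limb pixel stores v
        return field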

error when use_client_gen = False

Hi,
I am currently trying to get the code to run without data augmentation. I followed the training instructions up to the point where I run the training script, and I previously got it to run (though slowly) with use_client_gen = True, so the h5 files already exist.
The error is as follows:

File "train_pose.py", line 86, in
vec_num=38, heat_num=19, batch_size=batch_size, shuffle=True)
File "/home/megan/OpenPose/Keras/keras_Realtime_Multi-Person_Pose_Estimation/training/ds_iterator.py", line 10, in init
self.data_group = h5["data"]
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "/usr/local/lib/python3.5/dist-packages/h5py/_hl/group.py", line 167, in getitem
oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5o.pyx", line 190, in h5py.h5o.open
KeyError: "Unable to open object (object 'data' doesn't exist)"

I am not sure what I am doing wrong here.

Thanks

Megan
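
A quick way to see what the generated file actually contains (a minimal h5py sketch; the path is whatever you generated). Note that another issue on this page points out that generate_hdf5.py creates a group named "datum", which would explain why "data" is missing:

    import h5py

    with h5py.File("dataset/train_dataset.h5", "r") as f:
        f.visit(print)   # prints every group/dataset name, e.g. 'datum/...'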

ValueError: The channel dimension of the inputs should be defined. Found `None`.

Traceback (most recent call last):
  File "./demo_image.py", line 247, in <module>
    model = get_testing_model()
  File "/home/xy/share/reidPrj/keras_Realtime_Multi-Person_Pose_Estimation/model.py", line 184, in get_testing_model
    stage0_out = vgg_block(img_normalized, None)
  File "/home/xy/share/reidPrj/keras_Realtime_Multi-Person_Pose_Estimation/model.py", line 29, in vgg_block
    x = conv(x, 64, 3, "conv1_1", (weight_decay, 0))
  File "/home/xy/share/reidPrj/keras_Realtime_Multi-Person_Pose_Estimation/model.py", line 20, in conv
    bias_initializer=constant(0.0))(x)
  File "/home/wuyong/anaconda3/envs/tensorflow/lib/python3.5/site-packages/keras/engine/topology.py", line 528, in __call__
    self.build(input_shapes[0])
  File "/home/wuyong/anaconda3/envs/tensorflow/lib/python3.5/site-packages/keras/layers/convolutional.py", line 125, in build
    raise ValueError('The channel dimension of the inputs '
ValueError: The channel dimension of the inputs should be defined. Found `None`.

It seems like something is wrong with img_input_shape, so I changed line 177 of model.py to this:

img_input_shape = (None, None, 3)  ==>  img_input_shape = (674, 712, 3)

Then a new error occurs:

Traceback (most recent call last):
  File "./demo_image.py", line 247, in <module>
    model = get_testing_model()
  File "/home/xy/share/reidPrj/keras_Realtime_Multi-Person_Pose_Estimation/model.py", line 184, in get_testing_model
    stage0_out = vgg_block(img_normalized, None)
  File "/home/xy/share/reidPrj/keras_Realtime_Multi-Person_Pose_Estimation/model.py", line 40, in vgg_block
    x = pooling(x, 2, 2, "pool2_1")
  File "/home/xy/share/reidPrj/keras_Realtime_Multi-Person_Pose_Estimation/model.py", line 24, in pooling
    x = MaxPooling2D((ks, ks), strides=(st, st), name=name)(x)
  raise ValueError(err.message)
ValueError: Negative dimension size caused by subtracting 2 from 1 for 'pool2_1/MaxPool' (op: 'MaxPool') with input shapes: [?,356,1,128].

What's wrong with that? What shape should img_input_shape be?
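
A guess at the root cause (not confirmed by the author): both errors are consistent with Keras being configured with image_data_format = channels_first, so that (None, None, 3) puts None in the channel axis, and (674, 712, 3) is then read as 674 channels with spatial size 712x3, which is why repeated 2x2 pooling collapses the size-3 axis to 1 and then fails. A standard Keras check/fix, nothing project-specific:

    from keras import backend as K

    print(K.image_data_format())              # this model expects 'channels_last'
    K.set_image_data_format('channels_last')  # or edit "image_data_format" in ~/.keras/keras.json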

Order of layers is incorrect

Everywhere in the notebooks the order of parts is stated as follows:
[nose, neck, Rsho, Relb, Rwri, Lsho, Lelb, Lwri, Rhip, Rkne, Rank, Lhip, Lkne, Lank, Leye, Reye, Lear, Rear, Bkg]
But the real order of parts is:
[nose, neck, Rsho, Relb, Rwri, Lsho, Lelb, Lwri, Rhip, Rkne, Rank, Lhip, Lkne, Lank, Reye, Leye, Rear, Lear, Bkg]

proof 1:
ZheC/Realtime_Multi-Person_Pose_Estimation#7

proof 2 (caffe converted model; notice the part dimension is 17):
[screenshot, 2017-11-13]

proof 3:
I was comparing the output of my py_rmpe_server with rmpe_dataset_server.
Scroll down and see the last part: it should be the right ear or the left ear; which one is correct?

This is the originally generated heatmap:
[image: original_server]

This is the heatmap generated according to the order of parts from the notebook and config:
[image: python_server]

multi GPU support?

Dear All,

I have a problem using multiple GPUs to train the model. How do I set up multi-GPU training to speed up the training procedure?

I tried following methods:

  1. setting GPUdeviceNumber=2 or GPUdeviceNumber=0,1
  2. adding os.environ["CUDA_VISIBLE_DEVICES"] = "2"
but neither works.

Thanks for your reply!
@michalfaber
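
Not from this repo, but the standard Keras approach at the time was keras.utils.multi_gpu_model (available in Keras >= 2.0.9); whether it plays well with this model's multiple outputs and custom losses is an open question. A sketch:

    from keras.utils import multi_gpu_model

    # model = get_training_model(...)                # built as in train_pose.py
    parallel_model = multi_gpu_model(model, gpus=2)  # replicate across 2 GPUs
    # compile the replica with the same optimizer/losses as the original model
    parallel_model.compile(optimizer=optimizer, loss=losses)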

loss definition

Hello. I am new to this project. When I read the code in train_pose.py, I was confused by the definition of the losses: it doesn't seem to reflect the part-detection and part-affinity-field losses in the paper. Could you tell me where to find this part? Thank you very much!
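
For reference, the paper's per-stage losses are plain (masked) L2 distances between predicted and ground-truth maps. A minimal sketch of such a loss in Keras (my phrasing, assuming a fixed batch_size as in train_pose.py; in this implementation the mask inputs appear to be applied inside the model graph, which is why the loss itself looks unmasked):

    from keras import backend as K

    batch_size = 10  # assumed; train_pose.py sets its own value

    def eucl_loss(y_true, y_pred):
        # Sum of squared differences over the whole heatmap/PAF tensor,
        # halved and averaged per sample.
        return K.sum(K.square(y_pred - y_true)) / batch_size / 2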

error running generate_hdf5

('Image ID ', 90108)
('Image ID ', 179112)
('Image ID ', 311295)
('Num samples ', 55242)
Traceback (most recent call last):
File "generate_hdf5.py", line 323, in
writeHDF5()
File "generate_hdf5.py", line 220, in writeHDF5
meta_data[clidx][i] = long(height_binary[i])
ValueError: invalid literal for long() with base 10: ''

issue with reducing the number of stages

Hi,

If I truncate the number of stages, I get the following error (included at the bottom of the post).
I have altered the number of losses to reflect the number of stages. Notably, the number of numpy arrays expected is always 2x the number of stages, whereas 12 numpy arrays are always presented. There are no pretrained weights for the stage blocks.

Traceback (most recent call last):
File "train_pose_ownmod.py", line 172, in
initial_epoch=last_epoch
File "/usr/local/lib/python3.5/dist-packages/keras/legacy/interfaces.py", line 87, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 2114, in fit_generator
class_weight=class_weight)
File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 1826, in train_on_batch
check_batch_axis=True)
File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 1411, in _standardize_user_data
exception_prefix='target')
File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 88, in _standardize_input_data
'...')
ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 6 array(s), but instead got the following list of 12 arrays: [array([[[[ 0. , 0. , 0. , ..., 0. ,
0. , 0. ],
[ 0. , 0. , 0. , ..., 0. ,
0. , 0. ...
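
My reading of the mismatch: the model wants two targets per stage (PAF + heatmap), while the data pipeline always emits six stages' worth (12 arrays), so the generator side has to be truncated too. An illustrative sketch (names are hypothetical, not the repo's exact code):

    stages = 3  # however many stages remain in the truncated model

    def make_targets(vec_label, heat_label, stages):
        # The model expects [vec, heat] repeated once per remaining stage.
        return [vec_label, heat_label] * stages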

Strange code which I don't understand

A major part of the code in rmpe_dataset_server is taken from the original project's custom caffe layer.
The custom caffe layer operated not on one miss mask but on 57 miss masks, one per layer. It looks like they were all the same, so in this project we generate only one miss mask. The 57 miss masks are still generated in the C++ code but are not used afterwards.

While implementing the python py_rmpe_server I suddenly found this piece of code, which I actually don't understand:

for (int g_y = 0; g_y < grid_y; g_y++) {
    for (int g_x = 0; g_x < grid_x; g_x++) {
      for (int i = 0; i < np; i++){
        float weight = float(mask_miss_aug.at<uchar>(g_y, g_x)) /255; //mask_miss_aug.at<uchar>(i, j);

        // very strange check next line -- anatolix
        if (meta.joint_self.is_visible[i] != 3){
          transformed_label[i*channel_offset + g_y*grid_x + g_x] = weight;
        }
      }
      // background channel
      transformed_label[np*channel_offset + g_y*grid_x + g_x] = float(mask_miss_aug.at<uchar>(g_y, g_x)) /255;
    }
  }

I.e. we check whether the 'main' person has the joint visible, and if not we don't fill this layer's mask at all. In that case the mask stays uninitialized, which is not a problem for us because we don't use it anyway.

But what did the original author mean by this? Should the masks all be the same? What is the 'main' person, and why do we treat the main person differently at all?

Is there any mask for some people?

Thank you for reading my question!

I know that you remove annotations of people who have few keypoints (<5), a small scale (<32*32), or who are too close to the 'main_person'.
Is there any other mask_miss at the image level? I couldn't find that part.
Thank you!

I have another question: is mask_miss black (0,0,0) or white (255,255,255)?

[white mask_miss example image]

[black mask_miss example image]

demo_image.py return input image without dots

Thank you for sharing your code.
I ran python demo_image.py --image sample_images/ski.jpg after downloading the Keras weights from https://www.dropbox.com/s/llpxd14is7gyj0z/model.h5. I expected a result image with colored dots, like the one on your GitHub page, but the returned output image is the same as the input, and all_peaks is empty: all_peaks = [[], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], []]

some troubles with demo_image.py

    if __name__ == '__main__':
        parser = argparse.ArgumentParser()
        parser.add_argument('--image', type=str, required=True, help='input image')
        parser.add_argument('--output', type=str, default='result.png', help='output image')
        parser.add_argument('--model', type=str, default='model/keras/model.h5', help='path to the weights file')

        args = parser.parse_args()
        input_image = args.image
        keras_weights_file = args.model

        tic = time.time()
        print('start processing...')

It says:

usage: demo_image.py [-h] --image IMAGE [--output OUTPUT] [--model MODEL]
demo_image.py: error: the following arguments are required: --image
An exception has occurred, use %tb to see the full traceback.

How can I deal with this problem? I'm Chinese and my English is not good :)
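
The "%tb" hint suggests the script is being run inside IPython/Jupyter, where argparse sees no command-line arguments. Running it from a regular shell with the required flag should work, e.g.:

    python demo_image.py --image sample_images/ski.jpg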

"Division by 0" running "demo_image.py"

I was testing the file demo_image.py with a continuous stream of jpg files (consecutive frames read from a folder). There's a file that keeps producing the following error:

/home/alessio/Sandbox/dev/PAF-pose-estimation/demo_image.py:109: RuntimeWarning: invalid value encountered in true_divide
vec = np.divide(vec, norm)
Traceback (most recent call last):
File "/home/alessio/Sandbox/dev/PAF-pose-estimation/demo_image.py", line 258, in
process(input_image, params, model_params)
File "/home/alessio/Sandbox/dev/PAF-pose-estimation/demo_image.py", line 123, in process
0.5 * oriImg.shape[0] / norm - 1, 0)
ZeroDivisionError: float division by zero

It's strange since the frames are very similar to each other, and previous frames work fine. Here's the guilty frame:
[image: 000004_rgb]
What can be the reason for that?
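
A guess at the cause (not confirmed by the author): norm is the pixel distance between the two candidate keypoints of a limb, and it is exactly zero when two detected peaks coincide, so both the divide and the distance prior blow up. A defensive sketch around the lines from the traceback, reusing the demo's variable names:

    import numpy as np

    vec = np.subtract(candB[j][:2], candA[i][:2])
    norm = max(np.linalg.norm(vec), 1e-8)   # guard against coincident peaks
    vec = np.divide(vec, norm)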

Critical bug in generate hdf5

I've found a critical bug in generate_hdf5.

The idea is the following: in the case of several persons, genLMDB puts one picture per person, and augmentation centers and resizes the picture on that person. generate_hdf5 puts just one main person. That is the reason for the difference in the number of pictures: we have 50k, the LMDB has 120k.

This is fixed in my fork of the project. Training results will be available in 2-3 days.

License

Hello Michal,
I'd like to know under what license the code is distributed.
Thank you

Error when checking input: expected input_2 to have shape (None, None, None, 30) but got array with shape (10, 46, 46, 38)

I tried to launch a training test with the following steps.

  1. Start the training data server in the first terminal session: ./rmpe_dataset_server ../../keras_Realtime_Multi-Person_Pose_Estimation/dataset/train_dataset.h5 5555

Total samples: 54942
Epoch 1
curr_sample/total_samples/curr_epoch = 1/54942/1

  2. Start the validation data server in a second terminal session: ./rmpe_dataset_server ../../keras_Realtime_Multi-Person_Pose_Estimation/dataset/val_dataset.h5 5556

Total samples: 300
Epoch 1
curr_sample/total_samples/curr_epoch = 1/300/1
Epoch 2
curr_sample/total_samples/curr_epoch = 1/300/2

  3. Set the correct number of samples within train_pose.py.

  4. Train the model in a third terminal: python train_pose.py

Steps 1 to 3 completed successfully, but I ran into the following errors in step 4.

2017-10-31 23:17:25.592649: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 980, pci bus id: 0000:01:00.0)
Loaded VGG19 layer: block1_conv1
Loaded VGG19 layer: block1_conv2
Loaded VGG19 layer: block2_conv1
Loaded VGG19 layer: block2_conv2
Loaded VGG19 layer: block3_conv1
Loaded VGG19 layer: block3_conv2
Loaded VGG19 layer: block3_conv3
Loaded VGG19 layer: block3_conv4
Loaded VGG19 layer: block4_conv1
Loaded VGG19 layer: block4_conv2
Epoch 1/200000
Traceback (most recent call last):
File "train_pose.py", line 174, in
initial_epoch=last_epoch
File "/home/ros/anaconda3/envs/dl/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 87, in wrapper
return func(*args, **kwargs)
File "/home/ros/anaconda3/envs/dl/lib/python3.6/site-packages/keras/engine/training.py", line 2042, in fit_generator
class_weight=class_weight)
File "/home/ros/anaconda3/envs/dl/lib/python3.6/site-packages/keras/engine/training.py", line 1756, in train_on_batch
check_batch_axis=True)
File "/home/ros/anaconda3/envs/dl/lib/python3.6/site-packages/keras/engine/training.py", line 1378, in _standardize_user_data
exception_prefix='input')
File "/home/ros/anaconda3/envs/dl/lib/python3.6/site-packages/keras/engine/training.py", line 144, in _standardize_input_data
str(array.shape))
ValueError: Error when checking input: expected input_2 to have shape (None, None, None, 30) but got array with shape (10, 46, 46, 38)

Could you give me some advice about this error?
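
One reading of the mismatch (speculative): input_2 is the PAF weight input, and the data server sends 38 channels (19 limbs x 2 components) while the model was built expecting 30, so the model definition and the dataset disagree on the number of PAF channels. Printing the model's input shapes makes the disagreement visible (generic Keras, assuming the model object from train_pose.py):

    print(model.input_shape)   # list of input shapes; the PAF input should end in 38, the heatmap input in 19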

'NoneType' object has no attribute 'shape'

@michalfaber thank you for your awesome work. I can run your work to test my own images, right? As for tensorflow, is there a specific version required? When I run python demo_image.py --image sample_images\ski.jpg, I get AttributeError: 'NoneType' object has no attribute 'shape', pointing to line 35 of demo_image.py. Thanks for your time.
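
This error usually means cv2.imread returned None because the image path didn't resolve (backslashes in the path are a common culprit outside Windows shells). A quick generic check:

    import cv2

    img = cv2.imread('sample_images/ski.jpg')
    if img is None:
        raise FileNotFoundError('image path is wrong or the file is unreadable')
    print(img.shape)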

question about the implementation

as in demo_image.py

    ## find how many subset already contains partAs[i] or partBs[i]
    for j in range(len(subset)):
        if subset[j][indexA] == partAs[i] or subset[j][indexB] == partBs[i]:
            subset_idx[found] = j
            found += 1
    ........

        # if find no partA in the subset, create a new subset
        elif not found and k < 17:
            row = -1 * np.ones(20)
            row[indexA] = partAs[i]
            row[indexB] = partBs[i]
            row[-1] = 2

I am wondering about the comment "# if find no partA in the subset, create a new subset".

There may be a problem: in the condition "if subset[j][indexA] == partAs[i] or subset[j][indexB] == partBs[i]:", partB is also checked.

Is this a bug or intentional?

Also, the logic in this part is not all that clear to me; would you mind explaining it a little? Thank you.

['data'] object not found

Hi,
I'm converting the dataset to hdf5 format via generate_hdf5.py, with one change: I decreased num_samples to 4000, because I'm running this on a VM with 64 GB of disk space, which runs out if I use the full number of samples.
However, when I run train_pose.py, I get the error "Unable to open object (object 'data' doesn't exist)". I face the same error when trying to run inspect_dataset.ipynb. I'd appreciate any pointers. Thank you!

demo.py TOO SLOW

Hi @michalfaber ,

Thank you for your shared code. Recently I tried to modify your demo code and use it on REAL-TIME VIDEO, but the computation after "output" takes too much time, about 0.5 s per image. Would you give me some suggestions to improve the code?

Error while running generate_hdf5.py

While running python generate_hdf5.py, the output is:

loading annotations into memory...
Done (t=5.35s)
creating index...
index created!
Image ID 391895
Image ID 522418
Image ID 184613
Image ID 318219
...
Image ID 407646
Image ID 220310
Image ID 512403
Image ID 168974
Image ID 552775
Image ID 394940
Image ID 15335
Num samples 55242
Traceback (most recent call last):
File "generate_hdf5.py", line 322, in
writeHDF5()
File "generate_hdf5.py", line 290, in writeHDF5
img4ch = np.concatenate((img, meta_data, mask_miss[..., None], mask_all[..., None]),
TypeError: 'NoneType' object is not subscriptable
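
The concatenate fails because mask_miss (and/or mask_all) is None, which is what cv2.imread returns when the mask files are missing; the masks are produced by generate_masks.py, so that script likely needs to run first. An illustrative guard (mask_miss_path is a hypothetical name for whatever path generate_hdf5.py actually reads):

    import cv2

    mask_miss = cv2.imread(mask_miss_path, 0)  # grayscale mask written by generate_masks.py
    assert mask_miss is not None, 'mask file missing: run generate_masks.py first'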

wrong train and val number

In generate_hdf5.py, line 183:
val_total_write_count = isValidationArray.count(0.0)

I printed the result and got:
val_total_write_count = 52597
tr_total_write_count = 2645
This is obviously wrong.
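
A plausible fix, inferred only from the swapped magnitudes (not a confirmed patch): the validation count should tally the 1.0 entries and the training count the 0.0 entries:

    val_total_write_count = isValidationArray.count(1.0)  # was .count(0.0)
    tr_total_write_count = isValidationArray.count(0.0)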

question about the implementation

    # This case will only happen if subset[j] already contains partAs[i].
    # If subset[j] does not have partBs[i], add partBs[i] to subset[j].
    # FIXME what if subset[j] already contains partBs[i]?
    if subset[j][indexB] != partBs[i]:
        subset[j][indexB] = partBs[i]
        subset[j][-1] += 1
        subset[j][-2] += candidate[partBs[i].astype(int), 2] + connection_all[k][i][2]

After checking mapIdx, there may be a problem: joint 6 is put into the subset by [2,6] in the 2nd iteration, and [16,18] puts joint 18 into the subset in the 17th iteration, but [6,18] may put another joint 18 into the subset at the 19th iteration. From my understanding, the [6,18] and [16,18] connections are independent of each other, so one subset may end up with two different joint-18 candidates.

I think it would be great if we could make sure each subset contains at most one candidate per joint, for some applications.

To do that, we could compare the scores of the two different joint-18 candidates to decide which one to put into the subset.

What do you think about that? Anyway, it is great work; I have read the whole implementation and it is great. Thank you.

How to get or calculate the final mAP/AP and AR?

I'm really confused about how to get the pose coordinates and compute the final AP/mAP and AR with my trained model. I only know that the outputs of the two branches are heatmaps, and in the deploy prototxt the outputs are also 46*46 heatmaps. Is there a special toolbox or code for these metrics? On my server I don't have the caffe-matlab app, so the author's method doesn't suit my situation.
@michalfaber
Any answers are welcome. Thanks.

Image is never resized in augmentation

While testing my py_rmpe_server, I noticed that rmpe_dataset_server never randomly resizes images, i.e. the main person in an image is always a fixed size.

params.scale_prob=1;
  float dice = Rand(RAND_MAX) / static_cast <float> (RAND_MAX);
  float scale_multiplier;

  //actually with scale_prob==1 condition is always true -- anatolix
  if(dice > param_.scale_prob) {
    img_temp = img_src.clone();
    scale_multiplier = 1;
  }
  else {
    float dice2 = Rand(RAND_MAX) / static_cast <float> (RAND_MAX);
    scale_multiplier = (param_.scale_max - param_.scale_min) * dice2 + param_.scale_min; //linear shear into [scale_min, scale_max]
  }

Minimum Memory Requirement of GPU in Training Model

Dear All,

I am wondering what the minimum GPU memory requirement is for training the model. I have tried a GTX 970 and ran out of all 4 GB of its memory. Could anyone tell me the minimum memory requirement for successful training?

Thank you!

FileNotFoundError while running generate_masks.py

While running python generate_masks.py, this is the output:

python generate_masks.py
loading annotations into memory...
Traceback (most recent call last):
File "generate_masks.py", line 17, in
coco = COCO(val_anno_path)
File "/home/yurzho/anaconda3/envs/keras-openpose/lib/python3.6/site-packages/pycocotools/coco.py", line 84, in init
dataset = json.load(open(annotation_file, 'r'))
FileNotFoundError: [Errno 2] No such file or directory: '/home/yurzho/keras_Realtime_Multi-Person_Pose_Estimation/dataset/annotations/person_keypoints_train2017.json'
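
The missing file is the standard COCO 2017 keypoints annotation set. Assuming the usual COCO layout (this URL is COCO's, not this repo's), downloading and extracting into dataset/ should produce the expected path:

    wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
    unzip annotations_trainval2017.zip -d dataset/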

Sorry to bother, I met an error when running python train_pose.py

the error:
File "/usr/local/lib/python2.7/dist-packages/keras/legacy/interfaces.py", line 87, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1725, in fit_generator
self._make_train_function()
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 937, in _make_train_function
self.total_loss)
TypeError: get_updates() takes exactly 3 arguments (4 given)

I am not familiar with Keras; my Keras version is 2.0.6. I don't know how to solve this. Can anyone help me? Thanks very much!
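
This kind of signature mismatch usually means a custom optimizer was written against a different Keras 2.0.x release: around Keras 2.0.7 the Optimizer.get_updates signature changed from get_updates(self, params, constraints, loss) to get_updates(self, loss, params). If the repo defines a custom optimizer, its override has to match the installed Keras (or Keras has to be pinned to the version the repo targets). A sketch of the newer signature:

    from keras.optimizers import SGD

    class PatchedSGD(SGD):
        # Keras >= 2.0.7 calls custom optimizers with this signature;
        # older releases used get_updates(self, params, constraints, loss).
        def get_updates(self, loss, params):
            return super(PatchedSGD, self).get_updates(loss, params)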

Can you post a validation loss?

I saw training loss values around 20~30 in your readme.

What about validation loss value?

Is validation loss value similar to the training one?

model.predict(input_img) very slow

@michalfaber Thanks for your great work!!
When I test the image 'ski.jpg' (shape 712 x 674 x 3), model.predict(input_img) takes about 1200 ms on a TITAN X GPU (with scale 1 only). But in the caffe version, output_blobs = net.forward() takes only about 72 ms. Can you help me figure it out? Thanks a lot!!

get_model missing?

$ python caffe_to_keras.py

Using TensorFlow backend.
Traceback (most recent call last):
File "caffe_to_keras.py", line 1, in
from model import get_model
ImportError: cannot import name 'get_model'

Dear all,
I get the error "cannot import name 'get_model'" from model.py, and indeed I cannot find get_model in that file.
Please tell me what I should do.

Regards!
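
An observation, not a confirmed fix: the tracebacks elsewhere on this page show that model.py exposes get_testing_model (and a training counterpart), so get_model looks like a stale name in caffe_to_keras.py. Importing the current name may be all that's needed:

    # sketch: replace the stale import in caffe_to_keras.py
    from model import get_testing_model

    model = get_testing_model()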

How to get coordinate value data

Thank you for sharing your code! I tried to execute demo.ipynb. I'd like to get the coordinate values of each articulation point. I'm very afraid of asking this kind of basic question, but could you tell me how I can get these data?
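
Judging from the demo code quoted in other issues on this page, all_peaks holds one list per body part, and each peak appears to be an (x, y, score, id) tuple. Assuming that layout, the coordinates can be read like this:

    # all_peaks[part_idx] is a list of (x, y, score, peak_id) tuples, in the
    # part order discussed in the "Order of layers is incorrect" issue above.
    for part_idx, peaks in enumerate(all_peaks):
        for x, y, score, peak_id in peaks:
            print(part_idx, x, y, score)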

error running demo_image.py

Traceback (most recent call last):
File "demo_image.py", line 248, in
canvas = process(input_image, params, model_params)
File "demo_image.py", line 49, in process
output_blobs = model.predict(input_img)
File "/Users/env/lib/python2.7/site-packages/keras/engine/training.py", line 1695, in predict
check_batch_axis=False)
File "/Users/env/lib/python2.7/site-packages/keras/engine/training.py", line 111, in _standardize_input_data
'Found: array with shape ' + str(data.shape))
ValueError: The model expects 3 arrays, but only received one array. Found: array with shape (1, 184, 200, 3)
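
A plausible reading of this error (not confirmed by the author): a Keras model with three input arrays here is the training graph, which takes the image plus two mask inputs, while the demo should run the single-input testing model from model.py:

    # sketch, using names that appear in tracebacks elsewhere on this page
    from model import get_testing_model

    model = get_testing_model()              # single image input
    model.load_weights(keras_weights_file)
    output_blobs = model.predict(input_img)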

KeyError: "Unable to open object (Object 'label' doesn't exist)" of h5["label"]

When I try to run inspect_dataset.ipynb with the hdf5 file val_generated_dataset.h5, which I created with the generate_hdf5.py script, I run into this error: KeyError: "Unable to open object (object 'label' doesn't exist)".

In generate_hdf5.py, I see only one group being created, "datum":
grp = h5.create_group("datum")

So it seems the group "label" is never created when writing val_generated_dataset.h5. Perhaps inspect_dataset.ipynb was not updated after the code change in generate_hdf5.py?

Can't find configobj

I downloaded the zip directly and ran demo_image.py, which returned:

Traceback (most recent call last):
  File "D:/Python/keras_Realtime_Multi-Person_Pose_Estimation-master/keras_Realtime_Multi-Person_Pose_Estimation-master/demo_image.py", line 7, in <module>
    from config_reader import config_reader
  File "D:\Python\keras_Realtime_Multi-Person_Pose_Estimation-master\keras_Realtime_Multi-Person_Pose_Estimation-master\config_reader.py", line 1, in <module>
    from configobj import ConfigObj
ModuleNotFoundError: No module named 'configobj'
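
The module is on PyPI, so installing it should be enough:

    pip install configobj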

How can we use hand detector from OpenPose

@michalfaber, thank you for your awesome work.
I am preparing a project that relies on pose estimation and requires hand landmarks. I was wondering whether the hand detector from OpenPose can also be used within your Keras version.
Are you going to adapt it? Or might it be easy for me to adapt it myself?

Can't Load the Weights

Hi! I am new to this project and to Keras. I downloaded the weights file from Dropbox into model/keras/model.h5. However, I keep getting this error:

$ python demo_image.py --image sample_images/ski.jpg
Using TensorFlow backend.
start processing...
Traceback (most recent call last):
File "demo_image.py", line 245, in
model.load_weights(keras_weights_file)
File "/home/zf606/miniconda3/lib/python3.6/site-packages/keras/engine/topology.py", line 2613, in load_weights
f = h5py.File(filepath, mode='r')
File "/home/zf606/miniconda3/lib/python3.6/site-packages/h5py/_hl/files.py", line 271, in init
fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
File "/home/zf606/miniconda3/lib/python3.6/site-packages/h5py/_hl/files.py", line 101, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 78, in h5py.h5f.open
OSError: Unable to open file (Unable to lock file, errno = 5, error message = 'input/output error')

Any ideas on how to fix this?
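
The "Unable to lock file, errno = 5" part usually points at the filesystem (network or shared mounts are common culprits) rather than at the weights file itself. A standard HDF5 workaround, not specific to this repo, is to disable file locking before h5py is imported:

    import os
    os.environ['HDF5_USE_FILE_LOCKING'] = 'FALSE'  # must be set before importing h5py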

Probably in some cases masks from previous images are applied to following images

Hi.

I am trying to understand how the whole algorithm works, and I wrote a simple script which saves everything that goes from the rmpe server to the client.

For example:
[images: 0000026, 0000026mask, 0000026masked]

But then I found that sometimes the mask from the previous frame is applied to the next frame:
[images: 0000027, 0000027mask, 0000027masked]

So I suspect that in some cases the mask from the previous image is sent with the next image by the rmpe server, although I can't tell the exact place in the rmpe server code.
