
humbi's People

Contributors

zhixuany


humbi's Issues

Cannot recover hand keypoints from MANO parameters

Hi Zhixuan,

Thanks for your great work and for sharing the code. I'm having some trouble recovering 3D hand keypoints from the given MANO parameters. I can recover the hand mesh (778 vertices) perfectly using the MANO layer implemented by hassony2, but the recovered hand keypoints don't match the provided annotated hand keypoints. Do you have any insight into this, or instructions on correctly recovering hand keypoints from the given MANO parameters? Thanks in advance!

The following is the code snippet I used to reconstruct the hand keypoints and hand mesh.

import os

import numpy
import torch
from manopth.manolayer import ManoLayer

anno_dir = "Sample_hand/subject_1/hand/00000004/reconstruction"

# mano_params_r.txt holds 36 values: 3 translation + 23 pose
# (3 global rotation + 20 PCA components) + 10 shape.
r_params = numpy.loadtxt(os.path.join(anno_dir, "mano_params_r.txt"))
r_params = torch.Tensor(r_params).unsqueeze(0)

translation = r_params[:, :3]
pose_params = r_params[:, 3:26]
shape_params = r_params[:, 26:]

# MANO layer (PCA pose space with 20 components)
ncomps = 20
mano_layer = ManoLayer(
    mano_root='mano/models', use_pca=True, ncomps=ncomps, flat_hand_mean=False)

# Forward pass through the MANO layer; outputs are in millimeters.
mano_hand_verts, mano_hand_joints = mano_layer(pose_params, shape_params)
mano_hand_verts = mano_hand_verts / 1000 + translation
mano_hand_joints = mano_hand_joints / 1000 + translation

# Load the provided annotations for comparison.
hand_verts = numpy.loadtxt(os.path.join(anno_dir, "vertices_r.txt"))
hand_joints = numpy.loadtxt(os.path.join(anno_dir, "keypoints_r.txt"))

hand_verts = torch.Tensor(hand_verts).unsqueeze(0)
hand_joints = torch.Tensor(hand_joints).unsqueeze(0)

# Print the residuals between annotation and reconstruction.
print(hand_joints - mano_hand_joints)
print(hand_verts - mano_hand_verts)

import_cam_params function

Hello, thank you for this amazing work!

import_cam_params is referenced multiple times, but I can't find the function definition.
Could you please provide the implementation?
Thank you!!

How long does dataset website approval take?

Hi,
Thanks for releasing this great dataset. I registered an account, but it has not been approved yet. How long does approval usually take? My account name is "xubaobei".

Thank you very much!
Best wishes!

3DDFA model to Surrey model conversion

I noticed that you transformed the 3DDFA model to the Surrey model in Section 4.2, and I want to train my network on the 3DDFA and HUMBI datasets. Could you offer some details about how you did this? Or do you plan to publish the related code?

Thanks for your work!

Can you provide the depth maps?

Hi,
Thanks for the great work. I want to study single RGB-D human reconstruction, so could you provide the depth maps?

Thanks very much!
Best Wishes!

MANO and SMPL Parameters

Thank you so much for releasing the data. It is a wonderful dataset and will definitely help the community a lot. I just quickly checked the data and have a question regarding the SMPL and MANO parameters.

For the SMPL model, 86 variables are provided in the sample data for each image. As far as I know, SMPL has 24 joints (therefore 72 variables for pose) and 10 shape parameters, which is not consistent with 86. How should I interpret them?

Similarly, MANO has 16×3 pose parameters and 10 shape parameters, but 36 variables are provided in the sample data.

Thank you again for publishing this great dataset. Looking forward to your reply :)
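For reference, a hedged sketch of how these parameter vectors appear to be laid out, inferred from the parsing code in other issues in this thread; treat the offsets as assumptions until the authors confirm them:

import numpy as np

# Assumed layout of smpl_parameter.txt (86 values):
# 1 scale + 3 translation + 72 pose + 10 shape
smpl = np.loadtxt("reconstruction/smpl_parameter.txt")
scale, trans = smpl[0], smpl[1:4]
pose, shape = smpl[4:76], smpl[76:86]

# Assumed layout of mano_params_r.txt (36 values):
# 3 translation + 23 pose (3 global rotation + 20 PCA components) + 10 shape
mano = np.loadtxt("reconstruction/mano_params_r.txt")
mano_trans = mano[:3]
mano_pose, mano_shape = mano[3:26], mano[26:36]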

"Recoverable" Downloading

Hi, thanks for the dataset. I'm wondering if it's possible to make the downloads resumable. The zips are quite large, so it takes hours to download even one file, and I have found that any network fluctuation interrupts the download in a way that cannot be resumed; I have to start over from zero. As a result, I have struggled with this for weeks and haven't managed to download even a single file yet. Is this due to some protocol reason? Would it be possible to make the downloads resumable once paused? Thank you.
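Until this is fixed server-side, a client-side workaround sketch: resume a partial file with an HTTP Range request. This only helps if the download server honors Range headers; if it answers 200 instead of 206, it is re-sending the whole file. The function name and chunk size are just illustrative:

import os
import requests

def resume_download(url, dest, chunk=1 << 20):
    """Resume a partial download using an HTTP Range request."""
    # Ask the server to start from where the partial file left off.
    done = os.path.getsize(dest) if os.path.exists(dest) else 0
    headers = {"Range": f"bytes={done}-"}
    with requests.get(url, headers=headers, stream=True, timeout=60) as r:
        r.raise_for_status()
        # 206 Partial Content -> append; 200 -> server ignored Range, restart.
        mode = "ab" if r.status_code == 206 else "wb"
        with open(dest, mode) as f:
            for block in r.iter_content(chunk):
                f.write(block)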

Bounding box for Hand dataset

Hi,
The bounding box coordinates given are for the original resolution (1980 x 1080). Since the image is also cropped, how can I convert these bounding box coordinates to the (250, 250) cropped image that is provided?
Thanks!
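A minimal sketch of the conversion, under the assumption that the 250x250 crop is simply the bounding-box region resized; if the actual cropping pads or squares the box first, the offsets and scales change accordingly:

def full_to_crop(x, y, bbox, crop_size=250):
    """Map a point from full-image coordinates into the cropped image.

    bbox = (x_min, y_min, x_max, y_max) in full-resolution coordinates.
    Assumes the crop is the bbox region resized to crop_size x crop_size.
    """
    x_min, y_min, x_max, y_max = bbox
    sx = crop_size / (x_max - x_min)
    sy = crop_size / (y_max - y_min)
    return (x - x_min) * sx, (y - y_min) * sy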

Broken links to point cloud data and GitHub repo

Hi folks,

just wanted to bring to your attention that the links to the point cloud data on the download page seem to be broken. To fix them, cut everything after the ?:

https://......./pointcloud/subject_1_80.zip?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-.....

->

https://......./pointcloud/subject_1_80.zip
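For scripted downloads, the same trimming can be done with the standard library; a small helper (the name is illustrative):

from urllib.parse import urlsplit, urlunsplit

def strip_query(url):
    # Keep scheme, host, and path; drop the query string and fragment.
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))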

Also, the Software link on the main page is broken.

Thanks for the great work!

~Sergey

reconstructed SMPL pose is inconsistent with the image

Thank you very much for the wonderful work and for releasing this dataset.

I am able to run the Python script "hello_smpl_for_humbi.py" on the Sample_body data, but the mesh generated by the script is inconsistent with the images in /subject_1/body/00000017/image (e.g. /image0000032.jpg). For example:

The image from the sample data:
[attached sample image]

Reconstructed mesh generated by the Python script:
[attached screenshot]

In the image, the left and right palms face each other. However, in the reconstructed mesh, the palms face outward.

I was wondering whether I am using the Python script correctly and whether you have observed this as well.

Below are my configurations:
SMPL model: I use the model provided in this repo at HUMBI/body/model/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl.
SMPL code: I use the code from the SMPL website; I have tried both ver1.1 and ver1.0, but the above inconsistency still exists.
Configuration of the "hello_smpl_for_humbi.py":
dataset_path = './Sample_body'
subject = 1
frame = 17
outmesh_path = './smpl_mesh.obj'
model_path = './body/model/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl'

Thank you very much for your time; your help would be much appreciated.
Best

Update:
I tried the Python script with subject_88 (subject_88/body/00000025/image), and the pose of the reconstructed hands is not consistent with the image either.

Female & Male SMPL Models

Hi Zhixuan, did you use different SMPL models (female / male) for subjects of different genders? How can we know for each subject whether the female or the male model was used? Thanks :)

License of dataset

Hi Zhixuan,

Thanks for the great work! I'm wondering if there is any documented dataset license I could refer to?

HUMBI dataset website login no longer works

Hi,

Thank you for your great contribution to the community. I am trying to download data from the official website https://humbi-data.net/, but I can neither register a new account nor log in with my teammate's account (it displays "Login Failed, Please Try Again"). Would you mind checking whether the website is still supported? Looking forward to your reply.

Best,
S.F

Can I get the original images?

Hi, thank you for the amazing work and contribution!
Can I get the original face images, and if so, where?
Thanks for your help; looking forward to your reply!

Documentation request: loading MANO hands and SMPL body model together

I am trying to load the reconstructed hands together with the SMPL mesh/vertices. While loading the SMPL vertices (red) and pose keypoints (yellow) and projecting them into a given camera works fine, I have some trouble with the hands.
The hand keypoints and the reconstructed hand meshes (blue: left, green: right) seem to be placed at a different position than the body mesh:

[attached screenshots]

Applying the rotation and translation provided in mano_params_l.txt/mano_params_r.txt also does not seem to produce the right results.

[attached screenshot]

The camera calibrations for body and hands seem to be identical.

Could you provide a sample of loading SMPL and MANO in the same coordinate frame?

Use HUMBI on Unity/Unreal Engine

Dear authors,

Thank you for this amazing work! I'm evaluating presence and affective interaction in XR scenarios and would like to use HUMBI in Unity or Unreal Engine. Could you please guide me on how to use this dataset in those engines?

Thanks!

Correspondence between Hand and Body Images

Hi, thanks for your dataset.
Is there any correspondence between the hand and body images? That is, given a cropped hand image, can I find which body image it originally came from?
I'm interested because, if we can find samples where both the hands and the body are annotated, it might help to learn the correlation between them.
I briefly checked the dataset before opening this issue. The images do not seem to be directly related by name; e.g., in the sample data, Sample_hand/subject_1/hand/00000001/image_cropped/left/image0000033.png does not appear to come from Sample_body/subject_1/body/00000001/image/image0000033.jpg.
[attached screenshot]
Am I misunderstanding something?
Thank you :)

human mask

Given the noisy background in the images, is a human (foreground) mask available?

Questions about projection of provided 3D keypoints

I want to visualize the 2D keypoints of a certain frame on the images, so I projected the 3D keypoints, but I got a wrong result. As far as I know, 2D points are obtained as 2D keypoints = homogeneous(projection matrix * 3D keypoints). Could you tell me what the problem is?
More details about my code:
I want to see image0000000.jpg with the projected 2D keypoints for each frame.

import os

import cv2
import numpy as np

sub_path = '/Body_1_80_updat/subject_1/body/'
frames = sorted(os.listdir(sub_path))[:-4]  # assumes the four calibration .txt files sort last
with open(sub_path + 'project.txt', 'r') as f:
    proj = f.readlines()

# project.txt: 3 header lines, then 4 lines per camera (name + 3 matrix rows).
num_cams = 107
projection = np.zeros((num_cams, 3, 4))
for i in range(num_cams):
    for j in range(3):
        projection[i, j] = np.array(proj[4 + 4 * i + j].split(), dtype=np.float32)

for frame in frames:
    img_path = '/Body_1_80_updat/subject_1/body/%s/image/' % frame
    img_list = sorted(os.listdir(img_path))
    keypoints_path = '/Body_1_80_updat/subject_1/body/%s/reconstruction/keypoints.txt' % frame
    keypoints_3d = np.loadtxt(keypoints_path).reshape(-1, 3)

    # Homogenize the 3D keypoints and project with camera 0's 3x4 matrix.
    dummy = np.ones((len(keypoints_3d), 1))
    kpts = np.concatenate((keypoints_3d, dummy), axis=1)
    kpts = np.matmul(kpts, projection[0].T)
    kpts = kpts[:, :2] / kpts[:, 2].reshape(-1, 1)  # perspective divide

    img = cv2.imread(img_path + img_list[0])
    for joint in kpts:
        cv2.circle(img, (int(joint[0]), int(joint[1])), 10, (255, 0, 0), -1)

    img = cv2.resize(img, (512, 512))
    cv2.imshow('img', img)
    k = cv2.waitKey()
    if k == 27:
        cv2.destroyAllWindows()

[attached screenshot]

MATLAB version

Hi thanks for the dataset and code
What is your MATLAB version? I can't run the code under R2015a.

Actually, I wrote a visualization tool in Python, using the provided SMPL parameters and projection matrices. However, the mesh I get isn't accurate, as attached below.
I checked that the vertices I get match the ones provided in the reconstruction file, and I didn't find any problem.

Here is my code:
import os

import cv2
import numpy as np
import torch

# SMPL and get_opt_debug come from my own codebase (not included here).
opt = get_opt_debug()
smpl = SMPL(opt)

data_root = '/mnt/108-sdd/human_recon/data/humbi/body_mesh/'
subjects = os.listdir(data_root)

for subject in subjects:

    if subject != 'subject_136':

        subject_path = data_root + subject + '/body/'

        result_path = '/xxx/humbi/body_mesh/{}_rendered/'.format(subject)

        if not os.path.exists(result_path):
            os.mkdir(result_path)

        # extrinsic.txt: 3 header lines, then 5 lines per camera (name, T, 3 rows of R)
        ext_lines = open(subject_path + 'extrinsic.txt', 'r').readlines()
        num_cam = int(ext_lines[1].replace('\n', '').split(' ')[-1])
        ext_lines = ext_lines[3:len(ext_lines)]

        # intrinsic.txt: 3 header lines, then 4 lines per camera (name + 3 rows of K)
        int_lines = open(subject_path + 'intrinsic.txt', 'r').readlines()
        int_lines = int_lines[3:len(int_lines)]

        # project.txt: 3 header lines, then 4 lines per camera (name + 3 rows of the 3x4 P)
        pro_lines = open(subject_path + 'project.txt', 'r').readlines()
        pro_lines = pro_lines[3:len(pro_lines)]

        cam_int_list = {}
        cam_ext_list = {}
        cam_pro_list = {}

        for i in range(num_cam):

            index_ext = ext_lines[5 * i].split(' ')[-1].replace('\n', '')

            cam_ext = np.zeros((4, 4)).astype(np.float32)

            for j in range(3):
                line_R = ext_lines[5 * i + 2 + j].replace('\n', '').split(' ')
                for k in range(3):
                    cam_ext[j][k] = float(line_R[k])

            line_T = ext_lines[5 * i + 1].replace('\n', '').split(' ')
            for j in range(3):
                cam_ext[j][3] = float(line_T[j])

            cam_ext[3][3] = 1
            cam_ext_list[index_ext] = cam_ext

            index_int = int_lines[4 * i].split(' ')[-1].replace('\n', '')

            cam_int = np.zeros((3, 3)).astype(np.float32)
            for j in range(3):
                line= int_lines[4 * i + 1 + j].replace('\n', '').split(' ')
                for k in range(3):
                    cam_int[j][k] = float(line[k])

            cam_int_list[index_int] = cam_int

            index_pro = pro_lines[4 * i].split(' ')[-1].replace('\n', '')
            project = np.zeros((3, 4)).astype(np.float32)
            for j in range(3):
                line= pro_lines[4 * i + 1 + j].replace('\n', '').split(' ')
                for k in range(4):
                    project[j][k] = float(line[k])

            cam_pro_list[index_pro] = project


        frames = os.listdir(subject_path)
        frames = [frame for frame in frames if frame.split('.')[-1] != 'txt']

        for frame in frames:

            print("processing {} {}".format(subject, frame))

            frame_path = subject_path + frame + '/'
            frame_image_path = frame_path + 'image/'
            frame_param_path = frame_path + 'reconstruction/smpl_parameter.txt'

            sub_result_path = result_path + frame + '/'

            if not os.path.exists(sub_result_path):
                os.mkdir(sub_result_path)

            smpl_param_lines = open(frame_param_path, 'r').readlines()

            # smpl_parameter.txt layout: scale (1), translation (3), pose (72), shape (10)
            scale = float(smpl_param_lines[0].replace('\n', ''))

            trans = []
            for i in range(1, 4):
                trans.append(float(smpl_param_lines[i].replace('\n', '')))
            trans = np.array(trans)

            pose = []
            for i in range(4, 76):
                pose.append(float(smpl_param_lines[i].replace('\n', '')))
            pose = np.array(pose)

            shape = []
            for i in range(76, 86):
                shape.append(float(smpl_param_lines[i].replace('\n', '')))
            shape = np.array(shape)

            # trans = np.insert(trans, 3, values=0)
            trans = trans.reshape(1, 3)

            pose = torch.from_numpy(pose.astype(np.float32)).unsqueeze(0)
            shape = torch.from_numpy(shape.astype(np.float32)).unsqueeze(0)

            vertice = smpl(pose, shape)  # * np.array(frame_data['scale']).astype(np.float32)
            pnts = smpl.get_joints(vertice).squeeze().detach().numpy()
            pnts = pnts * scale + trans
            pnts = np.insert(pnts, 3, values=1, axis=1)

            vertice = vertice.squeeze().detach().numpy()
            vertice = vertice * scale + trans
            vertice = np.insert(vertice, 3, values=1, axis=1)

            images = os.listdir(frame_image_path)
            for image_name in images:

                image = cv2.imread(frame_image_path + image_name)

                cam_index = str(int(image_name.replace('.jpg', '').replace('image', '')))

                cam_int = cam_int_list[cam_index]
                cam_int = np.insert(cam_int, 3, values=0, axis=1)

                cam_ext = cam_ext_list[cam_index]

                project = cam_pro_list[cam_index] # np.dot(cam_int, cam_ext)

                # pnt_temp = (pnts + trans).transpose(1, 0)
                # pnt24_3d_proj = np.dot(project, pnt_temp).transpose(1, 0)
                pnt24_3d_proj = np.dot(project, pnts.transpose(1, 0)).transpose(1, 0)
                vertice_3d_proj = np.dot(project, vertice.transpose(1, 0)).transpose(1, 0)

                for i in range(24):
                    pnt24_3d_proj[i][0] = pnt24_3d_proj[i][0] / pnt24_3d_proj[i][2]
                    pnt24_3d_proj[i][1] = pnt24_3d_proj[i][1] / pnt24_3d_proj[i][2]

                for i in range(6890):
                    vertice_3d_proj[i][0] = vertice_3d_proj[i][0] / vertice_3d_proj[i][2]
                    vertice_3d_proj[i][1] = vertice_3d_proj[i][1] / vertice_3d_proj[i][2]

                for i in range(24):
                    cv2.circle(image, (int(pnt24_3d_proj[i][0]), int(pnt24_3d_proj[i][1])), 5, (255, 0, 0))

                for i in range(6890):
                    cv2.circle(image, (int(vertice_3d_proj[i][0]), int(vertice_3d_proj[i][1])), 1, (0, 0, 255))

                cv2.imwrite(sub_result_path + '{}'.format(image_name), image)

In brief: read the projection matrix P, compute the mesh v from the pose and shape, and project P * (v * scale + trans).

And the result I can get:
[attached result images]

Did I miss something, or are the provided SMPL parameters inaccurate in certain cases?

Unit of 3D keypoints coordinates

Hello, thank you for this amazing dataset!
I'm currently working on 3D pose estimation using the BODY_25 keypoints, but I cannot find the unit of those coordinates. I think they could be meters, but I'm not sure.
Could you please provide this information?
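One way to sanity-check the unit yourself is to measure a bone of known length from the 3D keypoints: a forearm around 0.25 strongly suggests meters, while around 250 would suggest millimeters. A sketch, assuming the standard BODY_25 joint ordering (index 3 = right elbow, index 4 = right wrist):

import numpy as np

kpts = np.loadtxt("reconstruction/keypoints.txt").reshape(-1, 3)

# BODY_25 ordering: index 3 = right elbow, index 4 = right wrist.
forearm = np.linalg.norm(kpts[3] - kpts[4])
print(forearm)  # ~0.25 -> meters; ~250 -> millimeters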

Error in cropped images

Thank you for the great project.
There are no hands in HUMBI/Hand_381_453/subject_381/hand/00000001/image_cropped/left/*.png.

Question about clothing models

Hi. Thanks for publishing this useful dataset. I have a question about the clothing model in the dataset. As far as I understand, the method you used to obtain clothing (ClothCap) deforms the SMPL body into clothes. However, in the dataset, the clothing meshes have different numbers of vertices. Could you please let me know how I can "dress" SMPL, or how to match the vertices of the clothing models back to SMPL?
Thank you very much!

License of dataset

Hi, and thanks for the nice work. Could you document the license of the dataset (e.g. here on GitHub or on the website)? I couldn't find it anywhere. Thanks!
