
threedposeunitysample's Introduction

ThreeDPoseUnitySample

The next version uses Unity Barracuda and has been published in the following repository: https://github.com/digital-standard/ThreeDPoseUnityBarracuda

I am researching machine learning that estimates the 3D pose of the human body from 2D image data such as still images, videos, and camera feeds. For details, please see my Twitter account.

ThreeDPoseUnitySample is a sample implementation in Unity that uses the trained model. Pose estimation assumes an image containing a single person; estimating multiple people is not supported.

Usage

Set a video in the Video Player. A region of this video is cropped to the size of clipRect and passed to the pose-estimation ONNX model via a TextureObject. Passing it through the TextureObject is not strictly necessary; it is there for verification, because the model expects a 224x224 input image.
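As a rough illustration of that crop-and-resize step, here is a minimal Unity sketch; it is not the project's actual code, and the names PoseInputHelper and MakeModelInput are hypothetical.

using UnityEngine;

public static class PoseInputHelper
{
    // sourceTex: a frame rendered by the Video Player; clipRect: the crop region in pixels.
    public static Texture2D MakeModelInput(Texture2D sourceTex, RectInt clipRect)
    {
        // Copy the clipped region into a temporary texture.
        Color[] pixels = sourceTex.GetPixels(clipRect.x, clipRect.y, clipRect.width, clipRect.height);
        var clipped = new Texture2D(clipRect.width, clipRect.height, TextureFormat.RGB24, false);
        clipped.SetPixels(pixels);
        clipped.Apply();

        // Scale to the 224x224 resolution the ONNX model expects.
        RenderTexture rt = RenderTexture.GetTemporary(224, 224);
        Graphics.Blit(clipped, rt);
        RenderTexture.active = rt;
        var input = new Texture2D(224, 224, TextureFormat.RGB24, false);
        input.ReadPixels(new Rect(0, 0, 224, 224), 0, 0);
        input.Apply();
        RenderTexture.active = null;
        RenderTexture.ReleaseTemporary(rt);
        return input;
    }
}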

If the video and clipRect are set correctly, it should work, but the pose-estimation model is still under research and its accuracy is limited. To get reasonable accuracy:

  • The background should be simple (reflections on the floor can also be misrecognized).
  • The whole body should always be visible (the model is built on the assumption that the full body is in frame).
  • The person should be neither too large nor too small in the frame.
  • Baggy clothes are easily misrecognized; clothes that show the arms and legs work better.

The video "wiper.mp4" used in the sample is from ミソジサラリーマン and is used with their kind permission. Thank you. Please do not repost this file to video sites or elsewhere without permission.

License

Non-commercial use only. Please feel free to use it for hobbies, research, and other non-commercial purposes.

threedposeunitysample's People

Contributors

yukihiko


threedposeunitysample's Issues

Running the sample reports an error

Unity 2019.3.15f: it didn't work, and the console reported this error:
AssertionException: Assertion failure. Values are not equal.
Expected: 3 == 4
UnityEngine.Assertions.Assert.Fail (System.String message, System.String userMessage) (at :0)
UnityEngine.Assertions.Assert.AreEqual[T] (T expected, T actual, System.String message, System.Collections.Generic.IEqualityComparer`1[T] comparer) (at :0)
UnityEngine.Assertions.Assert.AreEqual[T] (T expected, T actual, System.String message) (at :0)
UnityEngine.Assertions.Assert.AreEqual (System.Int32 expected, System.Int32 actual) (at :0)
Unity.Barracuda.PrecompiledComputeOps.Conv2D (Unity.Barracuda.Tensor X, Unity.Barracuda.Tensor K, Unity.Barracuda.Tensor B, System.Int32[] stride, System.Int32[] pad, Unity.Barracuda.Layer+FusedActivation fusedActivation) (at

How can I use my laptop webcam?

I found the CameraPlayStart() function in the ThreeDPoseScript.cs file, and the UseWebCam variable is set to true by default, but I still don't know how to use my webcam in the project.
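For reference, a minimal, generic Unity sketch of grabbing frames from a laptop webcam looks like the following; this is only an assumption about how the webcam path could be wired up, not the actual contents of CameraPlayStart().

using UnityEngine;

public class WebCamFeed : MonoBehaviour
{
    WebCamTexture webCamTexture;

    void Start()
    {
        // Use the first camera the OS reports (usually the laptop's built-in webcam).
        string deviceName = WebCamTexture.devices[0].name;
        webCamTexture = new WebCamTexture(deviceName, 640, 360, 30);
        webCamTexture.Play();

        // Feed the live texture to whatever renderer/material the pose pipeline reads from.
        GetComponent<Renderer>().material.mainTexture = webCamTexture;
    }
}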

Any clue to detect hand pose?

Great work! This sample works fine for detecting the full body; however, it doesn't work well for hand pose. Any clue on how to detect hand pose?

How to train the model

Hello, I'm looking for a way to train a new model for this sample.
Is there a solution for training?
I think "MobileNet3D2.onnx" is the trained model...

Question about Synthetic Data

Hi @yukihiko,

In an earlier opened issue, you mentioned that you used synthetic data to train the model. I have a few questions regarding the generation of the dataset:

  1. How did you create the rigged human models and generate random poses, rendering them relative to the camera position? (I've heard SCAPE and SMPL are popular methods)
  2. Where did you get the textures for the human model? (i.e. the clothes, the face, the hands)

Also, would it be possible for you to share the code for rendering the model in the scene and performing the texture mapping?

Thank you for your continuous efforts to maintain this repository, and for being so willing to help!

Rick

p.s. Do you have an email that you are reachable at?

Compute joint's rotation from joint's position

Thanks for your impressive work.

I am interested in computing a joint's rotation from the joint's position and its child joint's position. As far as I understand, it is hard to recover the rotation around the bone direction when only the bone directions of the current frame and the initial frame are given.

Your solution is as follows,
jointPoint.Transform.rotation = Quaternion.LookRotation(jointPoint.Pos3D - jointPoint.Child.Pos3D, forward) * jointPoint.Inverse * jointPoint.InitRotation;
and jointPoint.Inverse is the inverse of the initial rotation quaternion derived from the joint's bone direction. I notice that jointPoint.Inverse is computed using the default up vector, while here the current forward vector is used. Could you explain this, please? Any other blog or tutorial would also be appreciated.
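For context, here is a self-contained sketch of how those terms are typically precomputed and combined. It mirrors the names in the snippet above, but it is only an assumption, not the repository's actual code, and the up vector used in Init simply follows what the question describes.

using UnityEngine;

public class JointPoint
{
    public Transform Transform;
    public JointPoint Child;
    public Vector3 Pos3D;

    public Quaternion InitRotation; // the bone's rotation in the initial (rest) pose
    public Quaternion Inverse;      // inverse of the initial "look along the bone" rotation

    // Called once in the initial pose, before any frames are processed.
    public void Init()
    {
        InitRotation = Transform.rotation;
        Vector3 initialBoneDir = Transform.position - Child.Transform.position;
        // Built with the default up vector, as noted in the question.
        Inverse = Quaternion.Inverse(Quaternion.LookRotation(initialBoneDir, Vector3.up));
    }

    // Per frame: LookRotation gives the current bone orientation, Inverse removes the
    // initial bone orientation, and InitRotation re-applies the model's rest rotation,
    // so only the delta between the initial and current bone directions is transferred.
    public void UpdateRotation(Vector3 forward)
    {
        Transform.rotation =
            Quaternion.LookRotation(Pos3D - Child.Pos3D, forward) * Inverse * InitRotation;
    }
}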

Problem on building as WebGL

Hello, it's so nice to see your great work. But after I built it as a WebGL project, something went wrong while it was loading in the browser. The console errors are listed below:

Uncaught abort("To use dlopen, you need to use Emscripten's linking support, see https://github.com/kripken/emscripten/wiki/Linking") at Error
at jsStackTrace (blob:http://localhost:8080/a9113ead-fbc3-4761-8e56-b06c85afef20:943:12)
at Object.stackTrace (blob:http://localhost:8080/a9113ead-fbc3-4761-8e56-b06c85afef20:957:11)
at Object.onAbort (http://localhost:8080/VisionPlatform_web/webGL/ThreeDPose/Loader/UnityLoader.js:1139:50)
...

I guess it is related to the OpenCvSharp module. Have you encountered such a problem before, for example when packaging your project to run on iOS? How can I fix it? Looking forward to your advice.
Many Thanks!

Can any other 3D models be used in the demo?

Great job! I am trying to replace or recreate other 3D models based on the Unity-chan in the demo, but the animation doesn't work as it does for Unity-chan. Are there any other 3D models to download, or any guideline for creating 3D models that can be used in this demo?

OpenCVSharp error on Mac

Hello @yukihiko
Would it be possible to provide a quick guide on how to run this project, please?
I am taking my first steps in this field and would like to learn more. I tried to run it, but I receive the errors below.
My system:
MacBook Pro (16-inch, 2019)
2.6 GHz 6-Core Intel Core i7
32 GB 2667 MHz DDR4
AMD Radeon Pro 5500M 8 GB
Unity 2019.4.12f1

What are the requirements to make it run? Any prerequisites?

[error screenshots]

Question about the model

Hello,

Great work on 3D pose estimation. It's truly amazing what ML can do these days! I was wondering about a couple of things about your model:

  1. What model architecture is used for the ML model? (hourglass, cpm, etc...)
  2. How was the model trained? (heatmap regression; if so, how?; loss function, optimizers, lr?)
  3. What dataset(s) was this model trained on?
  4. For the iOS model, what set of mobile optimizations were performed on it to get it to run on a mobile device (quantization?) Does your phone heat up when you run it?

It also appears that this model is an older version. On your Twitter account, it seems that the model there is newer and has higher accuracy. What modifications did you make to improve its performance?

Thank you

On HoloLens: Unable to load DLL 'OpenCvSharpExtern' OR SpringManager.cs(72,42): error CS1061: 'Type' does not contain a definition for 'GetField'

Yukihiko,

We are able to play your sample with our own video by pressing Play inside Unity 2018.4.7f1.

But on the Microsoft Hololens we get
DllNotFoundException: Unable to load DLL 'OpenCvSharpExtern': The specified module could not be found.

To get your project onto the HoloLens, we enable "use WebCam" in your project and then build to run on the HoloLens, which is x86 and the Windows Universal platform, as seen in the screenshot below.

We are not sure what to do. We tried copying your OpenCvSharpExtern DLL to various places in the built folders before pushing to the HoloLens, but it still doesn't work.

When we switch the scripting backend in the Player Settings from IL2CPP to .NET, we get a build error inside Unity when we try to build for Windows Universal:

Assets\unity-chan!\Unity-chan! Model\Scripts\SpringManager.cs(72,42): error CS1061: 'Type' does not contain a definition for 'GetField' and no accessible extension method 'GetField' accepting a first argument of type 'Type' could be found (are you missing a using directive or an assembly reference?)
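Assuming it is the UWP/.NET API profile that hides Type.GetField here, a common workaround is to go through the reflection extension methods instead. The sketch below is only an illustration under that assumption, not a verified fix for this project; ReflectionCompat and FindField are hypothetical names.

using System;
using System.Reflection; // provides the GetRuntimeField / GetTypeInfo extension methods

static class ReflectionCompat
{
    // Hypothetical helper: stand-in for calls like someType.GetField(name)
    // when the scripting backend's Type surface does not expose GetField.
    public static FieldInfo FindField(Type type, string name)
    {
        // GetRuntimeField finds public fields; GetDeclaredField also covers non-public ones.
        return type.GetRuntimeField(name)
            ?? type.GetTypeInfo().GetDeclaredField(name);
    }
}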


Here are the build settings inside Unity in the screenshot below:
[build settings screenshot]

Then, when exporting to Visual Studio:
[Visual Studio export screenshot]

Here is a video from the HoloLens showing the error and it not working:
https://drive.google.com/file/d/1ru2gEQUAV6cQg9XGtjbf5CoKV6tpE_Gf/view

How to retarget the animation from Unity-chan to another 3D model

Hi,
Thank you for your hard work on this implementation! I wanted to ask how one can replace Unity-chan with another 3D model and get the same results in the animation. So far I have tried three models (one from Mixamo and two from the Asset Store). I noticed that you added some cube GameObjects (Nose, abdomen, etc.), probably to fill in missing bones in the skeleton(?). I tried to do the same with my models; the animation succeeds, but unfortunately there are glitches at specific body joints. Also, one of them could not be animated at all, because the parser could not find any body joints. If you need more information, I can provide screenshots and more specific details.
Best regards.

Animation with Python

Hey yukihiko,

This really is an awesome project! And huge shoutout to you for open sourcing it.

Since I'm not familiar with Unity, I'm going to try to implement it in Python, specifically how to animate a character from pose keypoints. There are a lot of projects doing pose estimation in Python. However, could you please point me in the right direction on how to do 3D avatar animation in Python once we have the pose estimates?

About the training process

Hello,
thank you so much for your amazing work! 👍
I was wondering about a couple of things:

  1. I am really curious about the architecture of the network. In other GitHub issues, you said that the 2D and 3D heatmaps are independent and trained at the same time. How do you feed the 3D data into the network? Are the 2D heatmap and offset used at the prediction stage?
  2. How did you generate the coordinates of the 3D data through Unity? For the data-collection stage, how does the action in front of the camera correspond to the action of the avatar in Unity?
  3. What is the meaning of the 2D and 3D offsets?

I'll really appreciate your help.
Thank you

Creating animation from videos

Hello,
Great work. I am trying to understand how you managed to animate the character. If I understand correctly, these are the steps:

a) 2D pose estimation using OpenPose or PoseNet.
b) Lifting the 2D pose to 3D. What did you use to achieve that?
c) Mapping the 3D pose coordinates to Unity (x, y, z) coordinates.

Is there a complete tutorial or blog? I tried to follow your Twitter account but could not understand many things. I am trying something similar, to have animated avatars in Unity for more interactive teaching at a school. Any help would be appreciated.

Problem while installing on Android

Hey,
I tested this on my laptop using its webcam and it worked fine.

However, when I install it on my Android phone, I get errors like this:
09-26 23:01:08.858 25404 25427 E Unity : DllNotFoundException: OpenCvSharpExtern
09-26 23:01:08.858 25404 25427 E Unity : at (wrapper managed-to-native) OpenCvSharp.NativeMethods.core_Mat_sizeof()
09-26 23:01:08.858 25404 25427 E Unity : at OpenCvSharp.NativeMethods.TryPInvoke () [0x0000e] in <1c4180998434408a87e8d3a3d5f215e6>:0
09-26 23:01:08.858 25404 25427 E Unity : Rethrow as OpenCvSharpException: OpenCvSharpExtern
09-26 23:01:08.858 25404 25427 E Unity : *** An exception has occurred because of P/Invoke. ***
09-26 23:01:08.858 25404 25427 E Unity : Please check the following:
09-26 23:01:08.858 25404 25427 E Unity : (1) OpenCV's DLL files exist in the same directory as the executable file.
09-26 23:01:08.858 25404 25427 E Unity : (2) Visual C++ Redistributable Package has been installed.
09-26 23:01:08.858 25404 25427 E Unity : (3) The target platform(x86/x64) of OpenCV's DLL files and OpenCvSharp is the same as your project's.

Any suggestions would be appreciated. Thank you in advance.
