handy.js's People

Contributors

adarosecannon · gsimone · stewdio

handy.js's Issues

Search method optimization

Currently, our search method calculates the Euclidean distance between the live hand pose data and each recorded hand pose for that handedness. In one respect this is excellent: it yields a full list of search results that can be sorted by distance, and every single pose has a calculated (potentially useful) result. However, as the pose library grows, the search routine becomes more cumbersome and time-consuming. Array-clutching, coupled with the fact that we don’t require a full search-results list per frame for a good user experience, has mostly ameliorated the problem thus far. But I think I’m starting to feel the detection lag, and I think we can do better.
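
For reference, the routine boils down to a brute-force nearest-neighbor search, roughly like this simplified sketch (`euclideanDistance` and `searchPoses` are illustrative stand-ins, not the library’s actual function names):

```javascript
//  Simplified sketch of the current brute-force search: measure the
//  live pose against every recorded pose, then sort by distance.

function euclideanDistance( jointsA, jointsB ){

    //  Each argument is an array of [ x, y, z ] joint positions.

    let sum = 0
    jointsA.forEach( function( joint, i ){

        sum += ( joint[ 0 ] - jointsB[ i ][ 0 ]) ** 2
             + ( joint[ 1 ] - jointsB[ i ][ 1 ]) ** 2
             + ( joint[ 2 ] - jointsB[ i ][ 2 ]) ** 2
    })
    return Math.sqrt( sum )
}

function searchPoses( livePose, poseLibrary ){

    return poseLibrary
    .map( function( recordedPose ){

        return {
            pose: recordedPose,
            distance: euclideanDistance(
                livePose.jointPositions,
                recordedPose.jointPositions
            )
        }
    })
    .sort( function( a, b ){ return a.distance - b.distance })
}
```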

Proposal:
Let’s say Handy requires that our pose library contain at least one specific pose—for example, the “at rest” pose—and that all other pose objects in the library include the Euclidean distance between themselves and this “at rest” pose. We then take the live hand pose data and get the distance, d, between it and the “at rest” pose. We can now immediately see (without calculating further distances) which poses in the library might be similar to the live pose, just by looking at each recorded pose’s d property. If the live pose is very distant from the “at rest” pose, then we can eliminate poses that are similar to the “at rest” pose from the actual search; we only need to search through recorded poses whose d is similar to the live pose’s d. (This pruning is sound by the triangle inequality: the distance between the live pose and a recorded pose is always at least the difference between their two distances to the “at rest” pose.)

Perhaps we can further compound this efficiency by requiring two poses in the library instead of one, and pre-calculating the distances between these required poses and all the others? (There must be diminishing returns here, however.)
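
A sketch of that pruning, reusing `euclideanDistance` from the sketch above and assuming each recorded pose carries a precomputed `distanceToRest` property (both names are assumptions for illustration):

```javascript
//  Sketch of the proposed anchor-based pruning. By the triangle
//  inequality, a recorded pose whose precomputed distanceToRest differs
//  from the live pose's d by more than `threshold` cannot itself be
//  within `threshold` of the live pose, so we skip measuring it.

function searchPosesPruned( livePose, restPose, poseLibrary, threshold ){

    const d = euclideanDistance(
        livePose.jointPositions,
        restPose.jointPositions
    )
    return poseLibrary
    .filter( function( pose ){

        return Math.abs( pose.distanceToRest - d ) <= threshold
    })
    .map( function( pose ){

        return {
            pose,
            distance: euclideanDistance(
                livePose.jointPositions,
                pose.jointPositions
            )
        }
    })
    .sort( function( a, b ){ return a.distance - b.distance })
}
```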

Documenting usage with A-Frame

A-Frame is the de facto WebXR library, and it has solid built-in hand tracking; out of the box it recognizes the pinch gesture. A quick guide to implementing Handy as an extension of A-Frame would be super helpful, and would likely expand the reach of this library. A rough sketch of what such a guide might show follows below.
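
This is only a sketch: Handy.makeHandy() and Handy.update() are the library’s entry points, but the component name, the event wiring, and the rest of the glue here are assumptions, not tested integration code.

```javascript
//  Hypothetical glue component: attach Handy to an entity that already
//  uses A-Frame's built-in hand-tracking-controls component.

AFRAME.registerComponent( 'handy-poses', {

    init: function(){

        //  Make the entity's THREE.Object3D "handy" so Handy can
        //  attach live pose data and emit pose events on it.

        Handy.makeHandy( this.el.object3D )
        this.el.object3D.addEventListener( 'pose changed', function( event ){

            console.log( 'Pose changed:', event.message )
        })
    },
    tick: function(){

        Handy.update() //  Run Handy's per-frame pose search.
    }
})
```

Which might then be used as `<a-entity hand-tracking-controls="hand: right" handy-poses></a-entity>`.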

Rounding of joint positions causing pose search to fail

@stewdio
Heya, I'm using handy.js for a machine learning project, and it has saved me a lot of time. Thanks for the amazing library! However, I think I might have found a bug, though I'm not sure whether it's a problem with my code or with the library.

The problem I encountered was that handy.js was not properly computing distances to the predefined poses, and I traced it to this:

handy.js/src/Handy.js, lines 772 to 774 in 85f22ad:

hand.livePoseData.jointPositions[ 0 ][ 0 ] === 0 &&
hand.livePoseData.jointPositions[ 0 ][ 1 ] === 0 &&
hand.livePoseData.jointPositions[ 0 ][ 2 ] === 0

where there is an optimization that terminates the search early if the first joint has all of its coordinates equal to zero. I double-checked against your live demo, which seemed to work fine.

However, when I ran it in my code, even with all the non-essential parts cut out, the first joint position was always [0, 0, 0], which caused the pose search to always fail. The other joint positions were not all zeros, however, so the early termination should not have happened.

If I remove the rounding from here:

handy.js/src/Handy.js, lines 522 to 524 in 85f22ad:

Math.round( jointMatrix.elements[ 12 ] * 1000 ),
Math.round( jointMatrix.elements[ 13 ] * 1000 ),
Math.round( jointMatrix.elements[ 14 ] * 1000 )

the first joint position is still a ridiculously small number (on the order of 1e-13!), so it's probably just numeric error from the matrix operations. Honestly, I'm not sure what I'm doing wrong in my own code, since in your demo the first joint does not return a position so close to all zeros.

BUT, upon closer inspection after removing the rounding from handy.js, the first joint's position values in your live demo are also quite often within 0.5 of zero (and thus round to zero), which might have caused some skipped searches. While I'm still figuring out why the coordinates of the first joint of my hands are almost all zero (which may well be specific to my code), perhaps the rounding is also causing issues for other people?
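
One possible mitigation (a sketch, not an official patch) would be to bail out only when every joint is at the origin, since a single joint can legitimately sit at [0, 0, 0] without the tracking data being invalid:

```javascript
//  Hypothetical replacement for the first-joint early-exit check in
//  Handy.js: treat the frame as untracked only when *every* joint
//  position is zero, since a single joint (eg. a wrist-relative origin)
//  can legitimately sit at [ 0, 0, 0 ].

function hasUsableTrackingData( livePoseData ){

    return ! livePoseData.jointPositions.every( function( position ){

        return position[ 0 ] === 0 &&
               position[ 1 ] === 0 &&
               position[ 2 ] === 0
    })
}

//  Inside the search routine, bail out early when nothing is tracked:
//  if( ! hasUsableTrackingData( hand.livePoseData )) return
```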

Build and mirror poses from one hand to another automatically

Having to record the same poses for each hand is time-consuming and opens the door to accidental inconsistent labeling, and so on. We need to look into the most efficient means of reflecting the recorded poses of one hand onto the other. This also has the potential to reduce download size, as only one hand’s pose library would need to be downloaded. (It could then be cloned / reflected on the user’s end to act as the pose library for the other hand.)

My personal preference (and bias), as a right-handed person, is to build the library for left hands only, then have it cloned and reflected for right hands. The reason: it’s easier for me to pose my left hand into a shape for recording, then use my right hand to raise / lower the headset and work the keyboard to take that snapshot.

Bonus: Is it more efficient to never clone and reflect one hand’s pose library and instead bake reflection detection into the search mechanism itself? Or is that just asking for trouble?
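
The clone-and-reflect step itself ought to be cheap. A minimal sketch, assuming poses store jointPositions as [ x, y, z ] triplets in a hand-local frame where mirroring across the YZ plane (negating x) maps one handedness onto the other; the helper name and pose shape here are assumptions:

```javascript
//  Hypothetical helper: clone a recorded left-hand pose and reflect it
//  to produce its right-hand equivalent.

function reflectPose( pose ){

    return {

        ...pose,
        handedness: 'right',
        jointPositions: pose.jointPositions.map( function( position ){

            const [ x, y, z ] = position
            return [ -x, y, z ] //  Mirror across the YZ plane.
        })
    }
}

//  Assuming leftHandPoses is the downloaded left-hand pose library:
const rightHandPoses = leftHandPoses.map( reflectPose )
```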

Changes to the WebXR Hands API broke pose mapping and hand models

Recently the Oculus Browser updated its implementation of the WebXR Hands API from the October 2020 draft to the March 2021 draft, which changed some fundamental aspects of how hand joints are referenced and how they are oriented in space. For example, joints are no longer referenced via an Array index (eg. INDEX_PHALANX_TIP = 9; joints[ 9 ]) but by name (eg. joints[ 'index-finger-tip' ]). The joint naming convention itself has also changed. While an evolving API can sometimes be frustrating to support, I’m nonetheless very happy to see folks pushing this API forward. It’s still early days, after all.

Handy has been updated to handle the joint-referencing changes and so on. But it does not yet support the joint orientation changes that the API has introduced, which means Handy is temporarily broken. The orientation changes are being addressed within Three.js itself: mrdoob/three.js#21712. Once those updates to Three are complete, I’ll incorporate them into Handy and it should work again as intended 👍
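
For anyone updating their own code, the referencing change looks roughly like this (a sketch based on the two spec drafts, not Handy’s internals):

```javascript
//  Inside an XRSession's requestAnimationFrame callback, where `frame`,
//  `hand` (inputSource.hand), and `referenceSpace` are already in scope:

//  October 2020 draft: the hand was array-like, indexed by constants.
const tipJointOld = frame.getJointPose( hand[ XRHand.INDEX_PHALANX_TIP ], referenceSpace )

//  March 2021 draft: the hand is map-like, and joints are fetched by name.
const tipJointNew = frame.getJointPose( hand.get( 'index-finger-tip' ), referenceSpace )
```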

Website demonstration doesn't work with latest Oculus Browser

The demonstration at https://stewartsmith.io/handy/ doesn't work on an Oculus Quest (v1) running the latest Oculus Browser (v14). The cosmic background shows up, but nothing else (no hands or other UI).

An error occurs inside Three.js init code:

WebGL context must be marked as XR compatible in order to use with an immersive XRSession
three.module.js:23581

See mrdoob/three.js#21126: it appears to be a recent problem, presumably affecting all Chromium-based browsers, and should be fixed by updating to the latest Three.
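
For context, the error means the WebGL context was never flagged as XR compatible. Outside of Three.js the requirement looks like this (recent Three versions handle it internally, which is why updating fixes it):

```javascript
//  Inside an async function, with `canvas` being your render canvas.

//  Option 1: request XR compatibility when creating the context.
const gl = canvas.getContext( 'webgl2', { xrCompatible: true })

//  Option 2: upgrade an existing context before requesting the session.
await gl.makeXRCompatible()
const session = await navigator.xr.requestSession( 'immersive-vr' )
```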

Using Handy.js with computer camera and MediaPipe

Hi,

I'm currently working on a project that translates ASL into text, using a computer camera and MediaPipe to get the hand model.
I was wondering whether there is a way to use Handy.js for the hand-model interpretation. Would there be a way for me to use Handy.js with the hand model from MediaPipe, instead of the hand models from the Oculus Quest (potentially by way of the WebXR hand tracking API)?
If so, could you point me in the right direction on how I would be able to do that?

Thanks!
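
Not an official answer, but conceptually Handy's pose search just compares arrays of joint positions, so one conceivable bridge is to reshape MediaPipe's 21 hand landmarks into the same kind of data. The sketch below is entirely hypothetical adapter code, not a supported Handy API, and note that WebXR hands expose 25 joints, so recorded WebXR pose libraries would not match MediaPipe data one-to-one:

```javascript
//  Hypothetical adapter: convert MediaPipe Hands landmarks (21 points
//  with normalized x / y / z, wrist at index 0) into wrist-relative
//  [ x, y, z ] triplets akin to Handy's jointPositions arrays.
//  The × 1000 scale mirrors Handy's own rounding scale and is an
//  assumption; tune it for your capture setup.

function landmarksToJointPositions( landmarks, scale = 1000 ){

    const wrist = landmarks[ 0 ]
    return landmarks.map( function( point ){

        return [

            Math.round(( point.x - wrist.x ) * scale ),
            Math.round(( point.y - wrist.y ) * scale ),
            Math.round(( point.z - wrist.z ) * scale )
        ]
    })
}
```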
