Comments (11)

techenthu1299 commented on July 23, 2024

Hello Alex,

I have gotten GRT set up and am working with it too. I am using a 9-axis IMU (accelerometer, gyro, and magnetometer), again worn on the wrist, and I want to do gesture recognition based on the sensor data.
I have modified the DTW example to my use case, and the format of the training data as well. It classifies well when there is only one class, meaning it can say whether a sample belongs to class one or not, but it fails when I train it with multiple classes. I run into this issue when I use automatic gesture recognition in real time, where I get continuous data from the sensor.

However, if I feed the complete shot data to the tool beforehand, instead of real-time continuous data, it classifies well even with multiple classes. Can you help me out here? Thanks.

from grt.

ashayk commented on July 23, 2024

Hi, I think you're actually further along than I am, so I'm probably not the best person to ask. Perhaps Nick can chime in and help us both. All the best with your project.


techenthu1299 commented on July 23, 2024

Hi, thanks for the reply. I understand you are able to classify the gestures properly. How are you doing that? Can you please help me here? Thanks.


ashayk commented on July 23, 2024

I was only classifying one class as well. It's exactly as in the DTW example that you've implemented. I haven't done anything different.


techenthu1299 commented on July 23, 2024

Okay, thanks.


nickgillian commented on July 23, 2024

Hi Alex and ashayk,

It sounds like the problem you are both running into is getting accurate gesture spotting to work (i.e., detecting a valid gesture amongst lots of other generic movements from the sensor).

There are lots of tricks you can use here to improve the accuracy of your system. The first thing I would do is record and plot some of your data to see if there are any obvious patterns that might help you. For example, you might see a much stronger signal (e.g., in the magnitude of the accelerometer) when you perform a gesture, as opposed to normal movements. If that is the case, you could use some simple logic to detect that magnitude peak and then only perform the DTW classification on a window of data either side of it.
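A minimal sketch of that peak-triggered approach, in plain Python for illustration (the threshold and window size are made-up values you would tune for your own sensor):

```python
import math

def find_gesture_windows(acc_samples, threshold=2.0, half_window=5):
    """Return index windows around accelerometer-magnitude peaks.

    acc_samples: list of (x, y, z) accelerometer readings.
    threshold and half_window are illustrative; tune them for your sensor.
    """
    magnitudes = [math.sqrt(x * x + y * y + z * z) for x, y, z in acc_samples]
    windows = []
    for i in range(1, len(magnitudes) - 1):
        # A local maximum above the threshold marks a candidate gesture;
        # only the samples inside the window would be handed to DTW.
        if (magnitudes[i] > threshold
                and magnitudes[i] >= magnitudes[i - 1]
                and magnitudes[i] > magnitudes[i + 1]):
            start = max(0, i - half_window)
            end = min(len(magnitudes), i + half_window + 1)
            windows.append((start, end))
    return windows
```

With this in place, the DTW classifier only ever sees short windows of likely-gesture data instead of the full continuous stream.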

Other things you can look at are:

  • how important are the other dimensions in your data? If you are using a 3-axis accelerometer, maybe only one or two of the dimensions are relevant; if so, consider using only those dimensions
  • try constraining the warping path of the DTW algorithm by setting the maximum warping window the algorithm can use when comparing templates to the real-time signal (this should cut down the number of false-positive errors)
  • if this doesn't improve things, have a look at some of the other options, such as using TimeDomainFeatures combined with an algorithm like SVM or Random Forests
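To illustrate the warping-window idea from the second bullet, here is a self-contained DTW sketch with a Sakoe-Chiba-style band. The `warping_window` parameter name is illustrative, not the GRT API, but the constraint it applies is the same idea:

```python
def dtw_distance(a, b, warping_window=None):
    """DTW distance between two 1-D sequences, optionally band-constrained.

    Restricting |i - j| <= warping_window keeps the warping path near the
    diagonal, which reduces pathological alignments (and false positives).
    """
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if warping_window is not None and abs(i - j) > warping_window:
                continue  # cell lies outside the allowed band
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

A tighter band makes template comparison stricter; too tight, and genuinely valid but slow/fast renditions of a gesture stop matching, so the window size is a tuning trade-off.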

I hope this helps!

Nick


azarus commented on July 23, 2024

Hey Nick!

Sorry for posting it here, but I didn't want to create a new issue for this. I also just started using the library. My question is: how would you go about using multiple accelerometers? For example, if you want some gestures to be detected separately, so that they only trigger on one device, how do you ignore the rest of the data that comes from the other accelerometer in the TimeSeries?

Should I create multiple pipelines, one for each outcome, and test each accelerometer separately as well as both together? Or is there a built-in option that I couldn't find?

Thank you for your help!


cyberluke commented on July 23, 2024

Hi, I've been doing this for about two years. My advice is to use several pipelines: one for the left hand, a second for the right hand, and a third for both hands together, for example.

There is also ANBC (classification, not time series, but you could plug in feature extraction of time-domain features or movement trajectory). ANBC lets you set a weight parameter for different axes or sensors: http://www.nickgillian.com/wiki/pmwiki.php/GRT/ANBC
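The several-pipelines advice can be sketched as a small router. This is a hypothetical illustration in Python, not the GRT API; the source names and classifier callables are stand-ins for whatever per-hand pipelines you build:

```python
class PipelineRouter:
    """Route per-sensor frames to per-source classifiers: one per hand,
    plus an optional combined classifier for two-handed gestures."""

    def __init__(self, classifiers):
        # classifiers: dict mapping source name -> callable(frame) -> label
        self.classifiers = classifiers

    def classify(self, frames):
        # frames: dict mapping source name -> feature frame (list of floats)
        results = {}
        for source, frame in frames.items():
            if source in self.classifiers:
                results[source] = self.classifiers[source](frame)
        # The combined classifier sees both hands' frames concatenated.
        if "both" in self.classifiers and {"left", "right"} <= frames.keys():
            results["both"] = self.classifiers["both"](
                frames["left"] + frames["right"])
        return results
```

Each classifier only ever sees the data meant for it, so per-device gestures never get polluted by the other accelerometer's samples.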


cyberluke commented on July 23, 2024

For me DTW works well, but I had to combine Euler angles (x, y, z) or a Quaternion (w, x, y, z) with the accelerometer (vx, vy, just two axes). This way I feed x, y, z, vx, vy to DTW and can recognize 5 to 10 different moves. But I have to record each gesture at least 50 times.

If I use some marker for gesture start and gesture end, for example a button on my finger, then it works much better. I'm now thinking about how to make this usable for end users. I'm considering a deadzone and filtering to remove the noise when you are not performing any gesture. Of course, this would need different parameters for x, y, z than for vx, vy.
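A per-axis deadzone like the one described is just a few lines. A minimal sketch, assuming the x, y, z, vx, vy feature layout above (the threshold values are illustrative):

```python
def deadzone(sample, thresholds):
    """Zero out axes whose magnitude falls below a per-axis threshold.

    sample:     tuple such as (x, y, z, vx, vy)
    thresholds: matching tuple of per-axis cutoffs, so the orientation
                axes and accelerometer axes can use different values.
    """
    return tuple(v if abs(v) >= t else 0.0 for v, t in zip(sample, thresholds))
```

Samples that come out all-zero can then be skipped entirely, so DTW is never fed pure idle noise.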


azarus commented on July 23, 2024

Accuracy was one of my biggest problems using DTW. I'd like to have a solution that is accurate and reliable enough.


cyberluke commented on July 23, 2024

Was, or still is? I get linear acceleration directly from the sensor thanks to sensor fusion. Then an EnvelopeExtractor and a deadzone, without any machine learning pipeline. You cannot use regression or classification for anything meaningful here, and DTW is not reliable. So the only option is doing raw processing with feature extractors yourself and perhaps setting a threshold for each axis. A good approach is to combine it with classification of different virtual space areas using the gyro and magnetometer. That is what I did, and it works best for me. I'm also going to try FFT on the accelerometer to read the frequency of motion; at least that's what I did with EMG. Any other ideas?
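The envelope-plus-threshold idea can be sketched without any library. This is an illustrative envelope follower (fast attack, slow release; the coefficients are made-up, and this is a stand-in for GRT's EnvelopeExtractor rather than its actual implementation):

```python
def envelope(signal, attack=0.5, release=0.05):
    """Track the amplitude envelope of a 1-D signal.

    The envelope rises quickly (attack) when the signal jumps and decays
    slowly (release) when it falls, smoothing out individual samples.
    """
    env, out = 0.0, []
    for v in signal:
        x = abs(v)
        coeff = attack if x > env else release
        env += coeff * (x - env)
        out.append(env)
    return out

def is_moving(env_frame, thresholds):
    """Flag motion when any axis envelope exceeds its per-axis threshold."""
    return any(e > t for e, t in zip(env_frame, thresholds))
```

Running each axis through `envelope` and gating on `is_moving` gives a cheap motion detector that can decide when the downstream classification is worth running at all.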

