Comments (4)
Hi, I'd be happy to point you towards relevant work. Unfortunately, the functionality you are looking for isn't planned for GazeML in the near future.
Broadly speaking, there are three separate calibrations required for a working eye tracking system:
- Camera intrinsic parameter calibration (see the OpenCV tutorial)
- Extrinsic camera transformation calibration (can be done using a mirror)
- User-specific parameter learning for either estimating gaze, or correcting estimated gaze direction
In addition, you need to estimate head pose, which you can do in a few different ways. One simple example uses a 3D head model and the PnP algorithm.
Once you have the head pose (rotation and translation) and gaze direction, it's a matter of some geometry. More specifically speaking, you should use the known (or generic) 3D model of the head to compute an estimated 3D eyeball center for each eye, then use that as the origin of the gaze ray. Intersecting the estimated gaze ray with the known screen plane and applying some scaling yields the on-screen coordinates.
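The ray-plane intersection step above can be sketched in a few lines of NumPy. This is a minimal illustration, not GazeML code; the function name and the example numbers (an eyeball centre 60 cm from a screen plane at z = 0) are hypothetical.

```python
import numpy as np

def intersect_ray_plane(origin, direction, plane_point, plane_normal):
    """Intersect a ray (origin + t * direction) with a plane.

    All vectors are 3D and expressed in the same (e.g. camera)
    coordinate system. Returns the 3D intersection point, or None
    if the ray is parallel to the plane or points away from it.
    """
    direction = direction / np.linalg.norm(direction)
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:  # ray parallel to the plane
        return None
    t = np.dot(plane_normal, plane_point - origin) / denom
    if t < 0:              # intersection lies behind the eye
        return None
    return origin + t * direction

# Example: eyeball centre 60 cm in front of the screen plane z = 0,
# gazing straight at the screen along -z.
eye = np.array([0.0, 0.0, 0.6])
gaze = np.array([0.0, 0.0, -1.0])
hit = intersect_ray_plane(eye, gaze,
                          plane_point=np.array([0.0, 0.0, 0.0]),
                          plane_normal=np.array([0.0, 0.0, 1.0]))
```

The resulting `hit` is a metric 3D point on the screen plane; converting it to pixel coordinates is then a linear rescaling by the physical screen size and resolution.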
You can skip Step 3 when implementing this pipeline, though with reduced PoR accuracy. Steps 1 and 2 cannot be skipped, nor can the head pose estimation. One work that runs counter to what I stated is iTracker, which regresses positions on the camera plane directly. Since they achieve very impressive results, I would look into their work for a quick and effective solution.
Best of luck in your research.
from gazeml.
Hi Swook,
Thanks for your help with your previous answer!
We tried implementing the steps you suggested. However, I am facing a few challenges on this front; your help or guidance would be highly appreciated!
- We calculated the intrinsic matrix using the OpenCV calibration example. However, I am surprised to see that the intrinsic matrix changes every time I run the code on new images of the same chessboard.
- We obtained the extrinsic parameters as well, along with the head pose (rotation and translation vectors).
- Since we know the 2D iris centre from the hourglass model, we converted it to a 3D point using the camera matrix and assumed that the gaze vector originates from this point.
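For context on that last step: a single 2D pixel only fixes a viewing ray through the camera centre, so lifting the iris centre to a 3D point also requires a depth value (e.g. the eyeball-to-camera distance from the fitted 3D head model). A minimal sketch, where the intrinsic matrix `K`, the pixel coordinates, and the 60 cm depth are all hypothetical values:

```python
import numpy as np

# Hypothetical intrinsic matrix (focal lengths and principal point in pixels):
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def backproject(u, v, depth, K):
    """Lift a 2D pixel (u, v) to a 3D point in camera coordinates.

    The pixel alone only determines a viewing ray; `depth` (metres
    along the optical axis) selects the point on that ray.
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # direction with z = 1
    return depth * ray                              # scale to metric depth

# The principal point back-projects onto the optical axis:
iris_3d = backproject(320.0, 240.0, 0.6, K)
```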
Now, we are stuck at the following steps:
a. How do we define the screen plane in the same coordinate system? For instance, suppose the camera is located on top of the screen, just as on a laptop, and the screen size is 42 inches by 40 inches.
b. How do we find the intersection point of that plane and the gaze vector, and how do we convert that 3D point into a 2D point on screen using a scaling function?
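One common way to handle questions (a) and (b) is to work entirely in camera coordinates: place the screen plane at (approximately) z = 0 with the camera origin at the top-centre of the panel, then map the metric intersection point to pixels using the physical screen size and resolution. The sketch below assumes exactly that geometry; the screen dimensions and resolution are made-up example values and would need to be measured for a real setup.

```python
import numpy as np

# Assumed setup: webcam at the top-centre of the screen; the display
# surface lies (approximately) in the camera's x-y plane, starting just
# below the camera. All metric units are metres.
SCREEN_W_M, SCREEN_H_M = 0.52, 0.29      # physical panel size (assumed)
SCREEN_W_PX, SCREEN_H_PX = 1920, 1080    # resolution (assumed)

def camera_to_pixels(point_3d):
    """Map a 3D point on the screen plane (camera coordinates, z ~ 0)
    to pixel coordinates on the display.

    The camera origin maps to the top-centre pixel; x grows to the
    right and y grows downwards, matching image conventions.
    """
    x, y, _ = point_3d
    u = (x + SCREEN_W_M / 2.0) / SCREEN_W_M * SCREEN_W_PX
    v = y / SCREEN_H_M * SCREEN_H_PX
    return u, v

# A gaze-ray intersection 13 cm right of and 14.5 cm below the camera
# lands at the centre of this hypothetical 1920x1080 screen:
u, v = camera_to_pixels(np.array([0.13, 0.145, 0.0]))
```

This linear rescaling is the "scaling function" in question (b); the intersection point itself comes from a standard ray-plane intersection of the gaze ray with the z = 0 plane.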
Please let us know if you could help or guide us on this project.
Hi Subhash,
Apologies for replying so late; I missed this issue. Unfortunately, I do not have the capacity to provide any guidance on your project.
Best of luck,
Seonwook
https://www.youtube.com/watch?v=H_9viDBiwOE&list=PLLB6WOMcarJgAyGKsLUgqYD9eTb0GgX04&index=5
You can check the paper, @SubhashPavan.