
gazeml's People

Contributors

innov-plus, swook


gazeml's Issues

error occurs when I install it on MacBook Pro

➜  GazeML git:(zxdev_mac) python3 setup.py install
running install
running bdist_egg
running egg_info
writing gazeml.egg-info/PKG-INFO
writing dependency_links to gazeml.egg-info/dependency_links.txt
writing requirements to gazeml.egg-info/requires.txt
writing top-level names to gazeml.egg-info/top_level.txt
reading manifest file 'gazeml.egg-info/SOURCES.txt'
writing manifest file 'gazeml.egg-info/SOURCES.txt'
installing library code to build/bdist.macosx-10.13-x86_64/egg
running install_lib
warning: install_lib: 'build/lib' does not exist -- no Python modules to install

creating build/bdist.macosx-10.13-x86_64/egg
creating build/bdist.macosx-10.13-x86_64/egg/EGG-INFO
copying gazeml.egg-info/PKG-INFO -> build/bdist.macosx-10.13-x86_64/egg/EGG-INFO
copying gazeml.egg-info/SOURCES.txt -> build/bdist.macosx-10.13-x86_64/egg/EGG-INFO
copying gazeml.egg-info/dependency_links.txt -> build/bdist.macosx-10.13-x86_64/egg/EGG-INFO
copying gazeml.egg-info/requires.txt -> build/bdist.macosx-10.13-x86_64/egg/EGG-INFO
copying gazeml.egg-info/top_level.txt -> build/bdist.macosx-10.13-x86_64/egg/EGG-INFO
zip_safe flag not set; analyzing archive contents...
creating 'dist/gazeml-0.1-py3.6.egg' and adding 'build/bdist.macosx-10.13-x86_64/egg' to it
removing 'build/bdist.macosx-10.13-x86_64/egg' (and everything under it)
Processing gazeml-0.1-py3.6.egg
Removing /usr/local/lib/python3.6/site-packages/gazeml-0.1-py3.6.egg
Copying gazeml-0.1-py3.6.egg to /usr/local/lib/python3.6/site-packages
gazeml 0.1 is already the active version in easy-install.pth

Installed /usr/local/lib/python3.6/site-packages/gazeml-0.1-py3.6.egg
Processing dependencies for gazeml==0.1
Searching for ujson
Reading https://pypi.python.org/simple/ujson/
Downloading https://files.pythonhosted.org/packages/16/c4/79f3409bc710559015464e5f49b9879430d8f87498ecdc335899732e5377/ujson-1.35.tar.gz#sha256=f66073e5506e91d204ab0c614a148d5aa938bdbf104751be66f8ad7a222f5f86
Best match: ujson 1.35
Processing ujson-1.35.tar.gz
Writing /var/folders/f_/shl432s55dl5766kw7tc5zsr0000gn/T/easy_install-x9yj5hx1/ujson-1.35/setup.cfg
Running ujson-1.35/setup.py -q bdist_egg --dist-dir /var/folders/f_/shl432s55dl5766kw7tc5zsr0000gn/T/easy_install-x9yj5hx1/ujson-1.35/egg-dist-tmp-kp7ctt_x
clang: warning: /usr/local/opt/openssl/include: 'linker' input unused [-Wunused-command-line-argument]
clang: warning: /usr/local/opt/openssl/include: 'linker' input unused [-Wunused-command-line-argument]
clang: warning: /usr/local/opt/openssl/include: 'linker' input unused [-Wunused-command-line-argument]
clang: warning: /usr/local/opt/openssl/include: 'linker' input unused [-Wunused-command-line-argument]
clang: warning: /usr/local/opt/openssl/include: 'linker' input unused [-Wunused-command-line-argument]
ld: can't map file, errno=22 file '/usr/local/opt/openssl/lib' for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: Setup script exited with error: command '/usr/bin/gcc' failed with exit status 1

How can I resolve it?

Directly regressing gaze angles from unity_eyes data?

Hi @swook, nice work and congrats! I notice that gaze direction ground truth is also generated in the unity_eyes data. I am wondering: why not modify the network structure a little and directly regress gaze angles from the landmarks, using the unity_eyes dataset's gaze ground truth as supervision? Why train an SVM on the MPIIGaze dataset separately? Have I missed something?
Thank you! ^ ^
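
For illustration, a minimal sketch of what such a direct-regression head could look like (hypothetical code, not from this repository; the 18-landmark layout and the TF1-style layers are assumptions):

    import tensorflow as tf

    # Hypothetical sketch: regress (pitch, yaw) directly from the 18 x 2 eye
    # landmark coordinates, supervised by UnityEyes' ground-truth gaze angles.
    landmarks = tf.placeholder(tf.float32, [None, 18, 2])
    gaze_true = tf.placeholder(tf.float32, [None, 2])  # (pitch, yaw) in radians

    x = tf.layers.flatten(landmarks)
    x = tf.layers.dense(x, 100, activation=tf.nn.relu)
    gaze_pred = tf.layers.dense(x, 2)  # unconstrained angle outputs

    loss = tf.reduce_mean(tf.squared_difference(gaze_pred, gaze_true))
    train_op = tf.train.AdamOptimizer(1e-4).minimize(loss)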

gaze estimation

In the code, feature-based gaze estimation is used instead of the model-based method. Is the feature-based estimator more accurate?

cannot do validation

Hi. I am trying to load a validation set and my data setting is shown below:

    validation_data = UnityEyes(
        session,
        batch_size=batch_size,
        data_format='NCHW',
        unityeyes_path='/cluster/project/infk/courses/machine_perception_19/validation_data/imgs_small/imgs',
        testing=True,
        num_threads=2,
        generate_heatmaps=True,
        eye_image_shape=(36, 60),
        heatmaps_scale=1.0 / elg_first_layer_stride,
    )
    validation_data.set_augmentation_range('translation', 2.0, 10.0)
    validation_data.set_augmentation_range('rotation', 1.0, 10.0)
    validation_data.set_augmentation_range('intensity', 0.5, 20.0)
    validation_data.set_augmentation_range('blur', 0.1, 1.0)
    validation_data.set_augmentation_range('scale', 0.01, 0.1)
    validation_data.set_augmentation_range('rescale', 1.0, 0.5)
    validation_data.set_augmentation_range('num_line', 0.0, 2.0)
    validation_data.set_augmentation_range('heatmap_sigma', 7.5, 2.5)

But the issue is that the model cannot do validation during training, and it cannot trigger the final test either. Are there other settings I missed if I want to do validation during training? Thank you!

macOS error while running elg_demo.py

I followed the steps.

Terminating app due to uncaught exception 'NSInternalInconsistencyException'

stack trace:

Parsing Inputs...
22/08 19:53 INFO ------------------------------
22/08 19:53 INFO Approximate Model Statistics
22/08 19:53 INFO ------------------------------
22/08 19:53 INFO FLOPS per input: 205,323,382.0
22/08 19:53 INFO Trainable Parameters: 150,813
22/08 19:53 INFO ------------------------------
2019-08-22 19:53:18.974 Python[734:11771] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'NSWindow drag regions should only be invalidated on the Main Thread!'
*** First throw call stack:
(
0 CoreFoundation 0x00007fff318442fd __exceptionPreprocess + 256
1 libobjc.A.dylib 0x00007fff5bf15a17 objc_exception_throw + 48
2 CoreFoundation 0x00007fff3185de59 -[NSException raise] + 9
3 AppKit 0x00007fff2ee045ca -[NSWindow(NSWindow_Theme) _postWindowNeedsToResetDragMarginsUnlessPostingDisabled] + 317
4 AppKit 0x00007fff2ee019f7 -[NSWindow _initContent:styleMask:backing:defer:contentView:] + 1479
5 AppKit 0x00007fff2eec0d95 -[NSPanel _initContent:styleMask:backing:defer:contentView:] + 50
6 AppKit 0x00007fff2ee0142a -[NSWindow initWithContentRect:styleMask:backing:defer:] + 45
7 AppKit 0x00007fff2eec0d4a -[NSPanel initWithContentRect:styleMask:backing:defer:] + 64
8 QtGui 0x0000000108524028 -[QCocoaPanel initWithContentRect:styleMask:backing:defer:] + 74
9 QtGui 0x000000010852abbf -[NSWindow(QWidgetIntegration) qt_initWithQWidget:contentRect:styleMask:] + 81
10 QtGui 0x000000010851be1c _ZL20qt_mac_create_windowP7QWidgetjmRK5QRect + 427
11 QtGui 0x000000010851baa1 _ZN14QWidgetPrivate18qt_create_root_winEv + 65
12 QtGui 0x000000010851d8d3 _ZN14QWidgetPrivate10create_sysElbb + 1079
13 QtGui 0x00000001085b3d25 _ZN7QWidget6createElbb + 587
14 QtGui 0x00000001085b29f6 _ZN14QWidgetPrivate4initEP7QWidget6QFlagsIN2Qt10WindowTypeEE + 362
15 QtGui 0x00000001085b27e0 _ZN7QWidgetC2EPS_6QFlagsIN2Qt10WindowTypeEE + 122
16 QtGui 0x0000000108535cca _ZN14QDesktopWidgetC2Ev + 32
17 QtGui 0x0000000108579183 _ZN12QApplication7desktopEv + 53
18 QtGui 0x0000000108531a05 _Z9flipPointRK7CGPoint + 32
19 QtGui 0x0000000108511961 _ZN7QCursor3posEv + 47
20 QtGui 0x00000001085890c5 _ZN11QMouseEventC2EN6QEvent4TypeERK6QPointN2Qt11MouseButtonE6QFlagsIS6_ES7_INS5_16KeyboardModifierEE + 79
21 QtGui 0x0000000108a9e426 _ZN20QGraphicsViewPrivateC2Ev + 340
22 QtGui 0x0000000108aa19d5 _ZN13QGraphicsViewC2EP7QWidget + 37
23 cv2.cpython-37m-darwin.so 0x000000010343178f _ZN15DefaultViewPortC2EP8CvWindowi + 31
24 cv2.cpython-37m-darwin.so 0x000000010342c99d _ZN8CvWindowC2E7QStringi + 397
25 cv2.cpython-37m-darwin.so 0x00000001034242c3 _ZN11GuiReceiver12createWindowE7QStringi + 227
26 cv2.cpython-37m-darwin.so 0x0000000103424109 cvNamedWindow + 553
27 cv2.cpython-37m-darwin.so 0x00000001034268cf _ZN11GuiReceiver9showImageE7QStringPv + 159
28 cv2.cpython-37m-darwin.so 0x0000000103426769 cvShowImage + 585
29 cv2.cpython-37m-darwin.so 0x000000010341ecca _ZN2cv6imshowERKNSt3__112basic_stringIcNS0_11char_traitsIcEENS0_9allocatorIcEEEERKNS_11_InputArrayE + 474
30 cv2.cpython-37m-darwin.so 0x0000000102824822 _ZL18pyopencv_cv_imshowP7_objectS0_S0_ + 434
31 Python 0x0000000101969560 _PyMethodDef_RawFastCallKeywords + 544
32 Python 0x0000000101968ab2 _PyCFunction_FastCallKeywords + 44
33 Python 0x00000001019fde82 call_function + 636
34 Python 0x00000001019f6c70 _PyEval_EvalFrameDefault + 6421
35 Python 0x00000001019fe7a6 _PyEval_EvalCodeWithName + 1870
36 Python 0x00000001019686e5 _PyFunction_FastCallDict + 441
37 Python 0x00000001019f6f43 _PyEval_EvalFrameDefault + 7144
38 Python 0x0000000101968e8c function_code_fastcall + 112
39 Python 0x00000001019fdef7 call_function + 753
40 Python 0x00000001019f6c57 _PyEval_EvalFrameDefault + 6396
41 Python 0x0000000101968e8c function_code_fastcall + 112
42 Python 0x00000001019fdef7 call_function + 753
43 Python 0x00000001019f6c57 _PyEval_EvalFrameDefault + 6396
44 Python 0x0000000101968e8c function_code_fastcall + 112
45 Python 0x0000000101969829 _PyObject_Call_Prepend + 150
46 Python 0x0000000101968bbf PyObject_Call + 136
47 Python 0x0000000101a65261 t_bootstrap + 71
48 Python 0x0000000101a2be95 pythread_wrapper + 25
49 libsystem_pthread.dylib 0x00007fff5d8d72eb _pthread_body + 126
50 libsystem_pthread.dylib 0x00007fff5d8da249 _pthread_start + 66
51 libsystem_pthread.dylib 0x00007fff5d8d640d thread_start + 13
)

when I run elg_demo.py, I also get an error message

This is the error message I got:

(GazeML) D:\labPro\GazeML-master\src>python elg_demo.py --from_video one.mp4
D:\IDE\Anaconda3\envs\GazeML\lib\site-packages\tensorflow\python\framework\dtypes.py:523: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
D:\IDE\Anaconda3\envs\GazeML\lib\site-packages\tensorflow\python\framework\dtypes.py:524: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
D:\IDE\Anaconda3\envs\GazeML\lib\site-packages\tensorflow\python\framework\dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
D:\IDE\Anaconda3\envs\GazeML\lib\site-packages\tensorflow\python\framework\dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
D:\IDE\Anaconda3\envs\GazeML\lib\site-packages\tensorflow\python\framework\dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
D:\IDE\Anaconda3\envs\GazeML\lib\site-packages\tensorflow\python\framework\dtypes.py:532: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Traceback (most recent call last):
File "elg_demo.py", line 65, in
eye_image_shape=(108, 180))
File "D:\labPro\GazeML-master\src\datasources\video.py", line 22, in init
super().init(staging=False, **kwargs)
File "D:\labPro\GazeML-master\src\datasources\frames.py", line 47, in init
shuffle=False, staging=staging, **kwargs)
File "D:\labPro\GazeML-master\src\core\data_source.py", line 59, in init
labels, dtypes, shapes = self._determine_dtypes_and_shapes()
File "D:\labPro\GazeML-master\src\core\data_source.py", line 194, in _determine_dtypes_and_shapes
raw_entry = next(self.entry_generator(yield_just_one=True))
File "D:\labPro\GazeML-master\src\datasources\frames.py", line 106, in entry_generator
self.detect_landmarks(frame)
File "D:\labPro\GazeML-master\src\datasources\frames.py", line 171, in detect_landmarks
predictor = get_landmarks_predictor()
File "D:\labPro\GazeML-master\src\datasources\frames.py", line 371, in get_landmarks_predictor
_landmarks_predictor = dlib.shape_predictor(dat_path)
RuntimeError: Error deserializing object of type int64
while deserializing a floating point number.
while deserializing a dlib::matrix
while deserializing object of type std::vector
while deserializing object of type std::vector
while deserializing object of type std::vector

look_vec[0] = -look_vec[0]?

Sorry to bother you! It's really great work. I am a newbie to this field. I'm confused by this line:

look_vec[0] = -look_vec[0]

Why should it be negated? I also saw this in the official UnityEyes visualize.py, but there it is "look_vec[1] = -look_vec[1]" instead.
Your help means a lot to me.
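
One practical way to sanity-check the sign convention (an illustrative sketch, not the repository's code) is to draw the projected look vector on the eye image and see which flip makes the arrow agree with the visible gaze:

    import cv2 as cv

    def draw_gaze(image, eye_centre, look_vec, length=80.0):
        """Draw the x/y components of look_vec from the eye centre."""
        # Try negating look_vec[0] or look_vec[1] here and check which
        # version points where the rendered eye actually looks.
        end = (int(eye_centre[0] + length * look_vec[0]),
               int(eye_centre[1] + length * look_vec[1]))
        cv.arrowedLine(image, (int(eye_centre[0]), int(eye_centre[1])), end,
                       (0, 255, 0), 2, cv.LINE_AA)
        return image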

how to compare results against the UT Multiview gaze dataset?

Hi Sir,
I am trying to train your net on the UT Multiview dataset. I am facing a problem converting the gaze vector to yaw and pitch angles.
In the dataset, s00/test/000_left.csv contains lines like the following:
0.657246 0.0937712 -0.74782 -0.0511348 -0.548227 -0.0143822 -7.6967E-06 1.92417E-06 600
The first three numbers, '0.657246 0.0937712 -0.74782', are the gaze direction in camera coordinates. I convert it to yaw and pitch directly like this:

    import numpy as np

    # gaze direction (gx, gy, gz)
    gx, gy, gz = 0.657246, 0.0937712, -0.74782
    yaw = np.arctan2(-gx, -gz)
    pitch = np.arcsin(-gy)

But it does not seem correct after I plot the gaze vector on the original image. Could you give some tips or code?

Thanks very much.

: cannot connect to X server

This might be related to an OpenCV problem. Do you have any idea how to solve this?

The error happens when I execute:

python3 elg_demo.py --from_video demo.mp4 --record_video record
Instructions for updating:
Use `tf.compat.v1.graph_util.tensor_shape_from_node_def_name`
Parsing Inputs...
I0702 14:44:45.164403 140231903774528 model.py:192] ------------------------------
I0702 14:44:45.164565 140231903774528 model.py:193]  Approximate Model Statistics 
I0702 14:44:45.164623 140231903774528 model.py:194] ------------------------------
I0702 14:44:45.164686 140231903774528 model.py:195] FLOPS per input: 1,006,288,359.0
I0702 14:44:45.169330 140231903774528 model.py:198] Trainable Parameters: 712,527
I0702 14:44:45.169401 140231903774528 model.py:201] ------------------------------
: cannot connect to X server 
W0702 14:44:45.174305 140231903774528 deprecation_wrapper.py:119] From /home/oliver/git/GazeML/src/core/checkpoint_manager.py:45: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead.
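
": cannot connect to X server" usually means cv2.imshow is being called without an X display available, e.g. over SSH without X forwarding. An illustrative guard (not from the repository) is to skip the preview windows in that case and write frames to disk, or rely on --record_video:

    import os
    import cv2 as cv
    import numpy as np

    # Illustrative only: open preview windows only when a display exists.
    frame = np.zeros((108, 180, 3), dtype=np.uint8)  # stand-in for a real frame
    if os.environ.get('DISPLAY'):
        cv.imshow('preview', frame)
        cv.waitKey(1)
    else:
        cv.imwrite('preview.png', frame)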

elg_demo.py run-time issue using video source input/output

I ran into an elg_demo.py run-time issue using video input and output (TensorFlow 1.12 / CUDA 9.0 / Python 3.6.9). The program just seems to "hang" with the following output message:

Video "/home/ubuntu/cviz/GazeML/giphygaze.mp4" closed.

Could there be a codec/encoding issue related to processing the video input/output?


(dl4cvpy36-tf.12) ubuntu@ip-172-31-0-76:~/cviz/GazeML/src$ python3 elg_demo.py -v debug --from_video /home/ubuntu/cviz/GazeML/giphygaze.mp4 --record_video /home/ubuntu/cviz/GazeML/video-out/test.mp4
/home/ubuntu/.virtualenvs/dl4cvpy36-tf.12/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:523: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/ubuntu/.virtualenvs/dl4cvpy36-tf.12/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:524: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/ubuntu/.virtualenvs/dl4cvpy36-tf.12/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/ubuntu/.virtualenvs/dl4cvpy36-tf.12/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/ubuntu/.virtualenvs/dl4cvpy36-tf.12/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/ubuntu/.virtualenvs/dl4cvpy36-tf.12/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:532: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
2019-12-15 03:01:45.268322: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-12-15 03:01:48.919134: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-12-15 03:01:48.919551: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235
pciBusID: 0000:00:1e.0
totalMemory: 11.17GiB freeMemory: 11.10GiB
2019-12-15 03:01:48.919580: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-12-15 03:01:51.232533: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-12-15 03:01:51.232586: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2019-12-15 03:01:51.232596: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2019-12-15 03:01:51.232905: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10758 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:1e.0, compute capability: 3.7)
Video "/home/ubuntu/cviz/GazeML/giphygaze.mp4" closed.

Gaze evaluation on MPIIGaze (normalized data)

Hello, I use the ELG_i6036_f6036_n32_m2 model to get landmarks and the method estimate_gaze_from_landmarks in models.elg to get gaze angles. Then I test on MPIIGaze data selected from the evaluation set; the ground truth and eye images are in normalized space.
I got a mean angular error of 9.56 degrees. It seems the model-based method does not perform so well on MPIIGaze data.
Do the gaze estimation methods in the code only include model-based estimation? I can't find the SVR model trained on MPIIGaze.
Maybe I need to train an SVR model on MPIIGaze data that will perform better than the model-based one? Did you manually annotate eyelid and iris landmarks in MPIIGaze to train the SVR?

In case I made a wrong evaluation, I post one MPIIGaze normalized sample tested with GazeML. For the pitch angle, up is positive; for the yaw angle, left is positive.
#img_name# p00/day14/0326.jpg
#eye_side#: left
#3D gaze_vector#: -0.01114 0.25135 0.9678
#true gaze angle(degrees) [pitch,yaw]#: [-14.5578 0.6596]
#gazeML pre gaze angle(degrees)[pitch,yaw]#: [-5.7414 0.9097]
#angular error(degrees)#: 8.8198
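
For reference, a minimal sketch of how such an angular error is typically computed (assuming the usual pitch/yaw-to-unit-vector convention; this is not the repository's exact code). It reproduces the ~8.82 degree error above:

    import numpy as np

    def angular_error(a, b):
        """Angle in degrees between two (pitch, yaw) gaze directions."""
        def to_vector(pitchyaw):
            pitch, yaw = np.radians(pitchyaw)
            return np.array([np.cos(pitch) * np.sin(yaw),
                             np.sin(pitch),
                             np.cos(pitch) * np.cos(yaw)])
        cos_sim = np.dot(to_vector(a), to_vector(b))
        return np.degrees(np.arccos(np.clip(cos_sim, -1.0, 1.0)))

    print(angular_error([-14.5578, 0.6596], [-5.7414, 0.9097]))  # ~8.82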

Trying to run elg_demo.py, but it freezes every time

Hi, I am wondering why it gets stuck for a while when I try to run elg_demo.py.
Could you tell me which part is wrong? Thank you.

My settings:
Windows10
CUDA 10.0 with 6358MB memory
cuDNN v7
Tensorflow-gpu 1.14.0

D:\GazeML-master\src>python elg_demo.py --from_video D:\GazeML-master\00257.mp4 --record_video D:\GazeML-master\00257_2.mp4
WARNING: Logging before flag parsing goes to stderr.
W0712 16:48:17.462151 12260 deprecation_wrapper.py:119] From D:\GazeML-master\src\core\model.py:33: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2019-07-12 16:48:17.482970: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-07-12 16:48:17.490055: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library nvcuda.dll
2019-07-12 16:48:17.568523: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.8095
pciBusID: 0000:01:00.0
2019-07-12 16:48:17.575132: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2019-07-12 16:48:17.580505: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2019-07-12 16:48:18.175797: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-07-12 16:48:18.179223: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      0
2019-07-12 16:48:18.181317: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0:   N
2019-07-12 16:48:18.183957: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6358 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
Video "D:\GazeML-master\00257.mp4" closed.

Except for the warning, it just gets stuck...

I have to close the terminal to stop the program; Ctrl + C doesn't work either.
PS. I tried both branches, and the results are the same.

test Images

Hello!
I want to know how to test a single image for gaze direction. Is this implemented?
Thank you very much!

Single Image processing

Hi!
Is there a way to get only the theta and phi angles of each eye for a single image? I already tried to use and modify your demo, but I can't handle the video streams with only one image. I need this as soon as possible for an important project. Thank you in advance.

Example of inference of model

Hi,

I already checked #7 for gaze inference, but I still do not know how to load the model, apply it to a single image, and get the output.

infer = model.inference_generator()
output = next(infer)
# Then deal with output.

It seems that you use an inference generator. Do you have something easier to use?

Such as

model = DPG.load('model weights')
image = cv2.imread('image path')

output = model.predict(image)

Just like that.

Thank you so much.
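
For reference, a minimal sketch of wrapping the generator-based API into something predict-like; this is hypothetical glue code, not part of the repository, and assumes the generator yields one output dictionary per queued frame:

    class GazePredictor:
        """Hypothetical convenience wrapper around model.inference_generator()."""

        def __init__(self, model):
            self._infer = model.inference_generator()

        def next_output(self):
            # Blocks until the data source has queued a frame, then returns
            # the corresponding output dict (exact keys depend on the model).
            return next(self._infer)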

Error occurs when I run it on macOS - NSInternalInconsistencyException

- python3 elg_demo.py

but I get the following error.
How can I resolve it?

27/03 12:51 WARNING From /usr/local/lib/python3.7/site-packages/tensorflow/python/profiler/internal/flops_registry.py:243: tensor_shape_from_node_def_name (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.remove_training_nodes
Parsing Inputs...
27/03 12:51 INFO ------------------------------
27/03 12:51 INFO  Approximate Model Statistics
27/03 12:51 INFO ------------------------------
27/03 12:51 INFO FLOPS per input: 205,323,382.0
27/03 12:51 INFO Trainable Parameters: 150,813
27/03 12:51 INFO ------------------------------
2019-03-27 12:51:20.286 Python[8185:261347] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'NSWindow drag regions should only be invalidated on the Main Thread!'
*** First throw call stack:
(
	0   CoreFoundation                      0x00007fff483aeecd __exceptionPreprocess + 256
	1   libobjc.A.dylib                     0x00007fff7446a720 objc_exception_throw + 48
	2   CoreFoundation                      0x00007fff483c895d -[NSException raise] + 9
	3   AppKit                              0x00007fff458c9c8e -[NSWindow(NSWindow_Theme) _postWindowNeedsToResetDragMarginsUnlessPostingDisabled] + 324
	4   AppKit                              0x00007fff458c707c -[NSWindow _initContent:styleMask:backing:defer:contentView:] + 1488
	5   AppKit                              0x00007fff4598795b -[NSPanel _initContent:styleMask:backing:defer:contentView:] + 50
	6   AppKit                              0x00007fff458c6aa6 -[NSWindow initWithContentRect:styleMask:backing:defer:] + 45
	7   AppKit                              0x00007fff45987910 -[NSPanel initWithContentRect:styleMask:backing:defer:] + 64
	8   QtGui                               0x000000010c107028 -[QCocoaPanel initWithContentRect:styleMask:backing:defer:] + 74
	9   QtGui                               0x000000010c10dbbf -[NSWindow(QWidgetIntegration) qt_initWithQWidget:contentRect:styleMask:] + 81
	10  QtGui                               0x000000010c0fee1c _ZL20qt_mac_create_windowP7QWidgetjmRK5QRect + 427
	11  QtGui                               0x000000010c0feaa1 _ZN14QWidgetPrivate18qt_create_root_winEv + 65
	12  QtGui                               0x000000010c1008d3 _ZN14QWidgetPrivate10create_sysElbb + 1079
	13  QtGui                               0x000000010c196d25 _ZN7QWidget6createElbb + 587
	14  QtGui                               0x000000010c1959f6 _ZN14QWidgetPrivate4initEP7QWidget6QFlagsIN2Qt10WindowTypeEE + 362
	15  QtGui                               0x000000010c1957e0 _ZN7QWidgetC2EPS_6QFlagsIN2Qt10WindowTypeEE + 122
	16  QtGui                               0x000000010c118cca _ZN14QDesktopWidgetC2Ev + 32
	17  QtGui                               0x000000010c15c183 _ZN12QApplication7desktopEv + 53
	18  QtGui                               0x000000010c114a05 _Z9flipPointRK7CGPoint + 32
	19  QtGui                               0x000000010c0f4961 _ZN7QCursor3posEv + 47
	20  QtGui                               0x000000010c16c0c5 _ZN11QMouseEventC2EN6QEvent4TypeERK6QPointN2Qt11MouseButtonE6QFlagsIS6_ES7_INS5_16KeyboardModifierEE + 79
	21  QtGui                               0x000000010c681426 _ZN20QGraphicsViewPrivateC2Ev + 340
	22  QtGui                               0x000000010c6849d5 _ZN13QGraphicsViewC2EP7QWidget + 37
	23  cv2.cpython-37m-darwin.so           0x000000010715ed3f _ZN15DefaultViewPortC2EP8CvWindowi + 31
	24  cv2.cpython-37m-darwin.so           0x000000010715a3ad _ZN8CvWindowC2E7QStringi + 397
	25  cv2.cpython-37m-darwin.so           0x00000001071523d3 _ZN11GuiReceiver12createWindowE7QStringi + 227
	26  cv2.cpython-37m-darwin.so           0x000000010715222c cvNamedWindow + 540
	27  cv2.cpython-37m-darwin.so           0x0000000107154961 _ZN11GuiReceiver9showImageE7QStringPv + 161
	28  cv2.cpython-37m-darwin.so           0x000000010715480c cvShowImage + 572
	29  cv2.cpython-37m-darwin.so           0x000000010714cf4b _ZN2cv6imshowERKNS_6StringERKNS_11_InputArrayE + 475
	30  cv2.cpython-37m-darwin.so           0x000000010674d6a4 _ZL18pyopencv_cv_imshowP7_objectS0_S0_ + 404
	31  Python                              0x0000000105a9c344 _PyMethodDef_RawFastCallKeywords + 545
	32  Python                              0x0000000105a9b8af _PyCFunction_FastCallKeywords + 44
	33  Python                              0x0000000105b31b2b call_function + 636
	34  Python                              0x0000000105b2a771 _PyEval_EvalFrameDefault + 7016
	35  Python                              0x0000000105b32432 _PyEval_EvalCodeWithName + 1835
	36  Python                              0x0000000105a9b4dd _PyFunction_FastCallDict + 441
	37  Python                              0x0000000105b2aa88 _PyEval_EvalFrameDefault + 7807
	38  Python                              0x0000000105a9bc8a function_code_fastcall + 112
	39  Python                              0x0000000105b31ba0 call_function + 753
	40  Python                              0x0000000105b2a758 _PyEval_EvalFrameDefault + 6991
	41  Python                              0x0000000105a9bc8a function_code_fastcall + 112
	42  Python                              0x0000000105b31ba0 call_function + 753
	43  Python                              0x0000000105b2a758 _PyEval_EvalFrameDefault + 6991
	44  Python                              0x0000000105a9bc8a function_code_fastcall + 112
	45  Python                              0x0000000105a9c60d _PyObject_Call_Prepend + 150
	46  Python                              0x0000000105a9b9bd PyObject_Call + 136
	47  Python                              0x0000000105b982ca t_bootstrap + 71
	48  libsystem_pthread.dylib             0x00007fff7572c305 _pthread_body + 126
	49  libsystem_pthread.dylib             0x00007fff7572f26f _pthread_start + 70
	50  libsystem_pthread.dylib             0x00007fff7572b415 thread_start + 13
)
libc++abi.dylib: terminating with uncaught exception of type NSException
[1]    8185 abort      python3 elg_demo.py

training on TX2

I am trying to train this implementation on a TX2.

I made a few modifications to webcam.py, as below. cv2 recognizes the frames; I tested that.
class WebcamTX2(FramesSource):
    """Webcam frame grabbing and preprocessing."""

    def __init__(self, camera_id=2, fps=120.000000, **kwargs):
        """Create queues and threads to read and preprocess data."""
        self._short_name = 'WebcamTX2'

        self._capture = cv.VideoCapture("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)96/1 ! nvvidconv flip-method=6 ! video/x-raw, format=(string)I420 ! videoconvert ! video/x-raw, format=(string)BGR ! appsink")
        #self._capture = cv.VideoCapture(camera_id)
        #self._capture.set(cv.CAP_PROP_FRAME_WIDTH, 1280)
        #self._capture.set(cv.CAP_PROP_FRAME_HEIGHT, 720)
        #self._capture.set(cv.CAP_PROP_FOURCC, cv.VideoWriter_fourcc(*'MJPG'))
        #self._capture.set(cv.CAP_PROP_FPS, fps)

        # Call parent class constructor
        super().__init__(**kwargs)

    def frame_generator(self):
        """Read frames from the webcam."""
        if not self._capture.isOpened():
            print("not capture")
            exit()

        ret, bgr = self._capture.read()
        while True:
            ret, bgr = self._capture.read()
            if ret:
                yield bgr

I run into errors when executing elg_demo.py. Below is the stack trace. Can you please guide me in debugging this?

File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1599, in exit
self._default_graph_context_manager.exit(exec_type, exec_value, exec_tb)
File "/usr/lib/python3.5/contextlib.py", line 77, in exit
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 5285, in get_controller
yield g
File "/usr/lib/python3.5/contextlib.py", line 77, in exit
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 5093, in get_controller
yield default
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 5285, in get_controller
yield g
File "/usr/lib/python3.5/contextlib.py", line 77, in exit
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/eager/context.py", line 295, in _mode
yield
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 5285, in get_controller
yield g
File "elg_demo.py", line 98, in
eye_image_shape=(36, 60))
File "/home/nvidia/GazeML/src/datasources/webcamTX2.py", line 21, in init
super().init(**kwargs)
File "/home/nvidia/GazeML/src/datasources/frames.py", line 47, in init
shuffle=False, staging=staging, **kwargs)
File "/home/nvidia/GazeML/src/core/data_source.py", line 105, in init
(label, tensor) for label, tensor in zip(labels, self._staging_area.get())
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/variable_scope.py", line 2109, in exit
self._current_name_scope.exit(type_arg, value_arg, traceback_arg)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 6017, in exit
self._name_scope.exit(type_arg, value_arg, traceback_arg)
File "/usr/lib/python3.5/contextlib.py", line 77, in exit
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 4141, in name_scope
yield "" if new_stack is None else new_stack + "/"
File "/home/nvidia/GazeML/src/core/data_source.py", line 59, in init
labels, dtypes, shapes = self._determine_dtypes_and_shapes()
File "/home/nvidia/GazeML/src/core/data_source.py", line 194, in _determine_dtypes_and_shapes
raw_entry = next(self.entry_generator(yield_just_one=True))
File "/home/nvidia/GazeML/src/datasources/frames.py", line 106, in entry_generator
self.detect_landmarks(frame)
File "/home/nvidia/GazeML/src/datasources/frames.py", line 171, in detect_landmarks
predictor = get_landmarks_predictor()
File "/home/nvidia/GazeML/src/datasources/frames.py", line 371, in get_landmarks_predictor
_landmarks_predictor = dlib.shape_predictor(dat_path)
RuntimeError: Error deserializing object of type unsigned long
while deserializing object of type std::vector
while deserializing object of type std::vector
while deserializing object of type std::vector

Thanks for the help.
Shreyas

program is blocked when training

Hi Sir.
I am trying to train your model on more pictures. When the program restores parameters from your pre-trained model ELG_i180x108_f60x36_n64_m3, it hangs and I get the following errors.

17/01 14:11 INFO Initialized data source: "UnityEyes"
17/01 14:11 INFO y1 heatmaps for train:(16, 18, 36, 60)
17/01 14:11 INFO y2 landmarks for train:(16, 18, 2)
17/01 14:11 INFO y3 radius for train:(16,)
17/01 14:11 INFO Built model for training.
17/01 14:11 INFO Built optimizer for: radius_mse, heatmaps_mse
17/01 14:11 INFO begin load checkpoint
17/01 14:11 INFO prefix:radius
17/01 14:11 INFO loading from output_path:/root/project/GAZEML/outputs/ELG_i180x108_f60x36_n64_m3/checkpoints/radius
restore modle from /root/project/GAZEML/outputs/ELG_i180x108_f60x36_n64_m3/checkpoints/radius/model-4203370
INFO:tensorflow:Restoring parameters from /root/project/GAZEML/outputs/ELG_i180x108_f60x36_n64_m3/checkpoints/radius/model-4203370
17/01 14:11 INFO Restoring parameters from /root/project/GAZEML/outputs/ELG_i180x108_f60x36_n64_m3/checkpoints/radius/model-4203370
17/01 14:11 INFO prefix:hourglass
17/01 14:11 INFO loading from output_path:/root/project/GAZEML/outputs/ELG_i180x108_f60x36_n64_m3/checkpoints/hourglass
restore modle from /root/project/GAZEML/outputs/ELG_i180x108_f60x36_n64_m3/checkpoints/hourglass/model-4203370
INFO:tensorflow:Restoring parameters from /root/project/GAZEML/outputs/ELG_i180x108_f60x36_n64_m3/checkpoints/hourglass/model-4203370
17/01 14:11 INFO Restoring parameters from /root/project/GAZEML/outputs/ELG_i180x108_f60x36_n64_m3/checkpoints/hourglass/model-4203370
17/01 14:11 INFO save model to /root/project/GAZEML/outputs/ELG_i180x108_f60x36_n64_m3/checkpoints/radius
17/01 14:11 INFO save model to /root/project/GAZEML/outputs/ELG_i180x108_f60x36_n64_m3/checkpoints/hourglass
2019-01-17 14:11:57.398307: W tensorflow/core/kernels/queue_base.cc:277]_0_UnityEyes/random_shuffle_queue: Skipping cancelled enqueue attempt with queue not closed
2019-01-17 14:11:57.398371: W tensorflow/core/kernels/queue_base.cc:277]_0_UnityEyes/random_shuffle_queue: Skipping cancelled enqueue attempt with queue not closed
2019-01-17 14:11:57.398380: W tensorflow/core/kernels/queue_base.cc:277]_0_UnityEyes/random_shuffle_queue: Skipping cancelled enqueue attempt with queue not closed
2019-01-17 14:11:57.398386: W tensorflow/core/kernels/queue_base.cc:277]_0_UnityEyes/random_shuffle_queue: Skipping cancelled enqueue attempt with queue not closed
2019-01-17 14:11:57.398390: W tensorflow/core/kernels/queue_base.cc:277]_0_UnityEyes/random_shuffle_queue: Skipping cancelled enqueue attempt with queue not closed

If I delete the pre-trained model and train from scratch, these errors disappear.
Any tips about this error?

Project gaze vector

Hi Seonwook,
Thanks for the project. I am trying to use your implementation in research.

I would like to find the coordinates of the point the user is focusing on. Can this be found from the intersection of the gaze vectors of the left and right eyes?

Another option would be to assume that the focus distance of the human eye is roughly a meter while staring at a screen, so the gaze vector can just be extended by a meter.

Can you suggest the best approach?

Thanks
Shreyas
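
A minimal sketch of the first option (illustrative only; o_l and o_r are assumed eyeball centres, d_l and d_r unit gaze directions, all in one common 3D coordinate frame). Since the two gaze rays rarely intersect exactly, a standard choice is the midpoint of the shortest segment between them:

    import numpy as np

    def point_of_regard(o_l, d_l, o_r, d_r, eps=1e-9):
        """Midpoint of the shortest segment between two gaze rays."""
        w = o_l - o_r
        a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
        d, e = d_l @ w, d_r @ w
        denom = a * c - b * b
        if abs(denom) < eps:
            return None  # near-parallel rays: no stable intersection
        t_l = (b * e - c * d) / denom
        t_r = (a * e - b * d) / denom
        return 0.5 * ((o_l + t_l * d_l) + (o_r + t_r * d_r))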

How many pictures are you using for training?

Hi sir.
I saw your gaze demo on YouTube; it's amazing. I downloaded your pre-trained model and tested it on my own videos, but I don't get good results. My questions are:

  1. How many pictures are used for your pre-trained model?
  2. Which gaze method is used in your video demo, the feature-based method or the model-based method?

Thanks very much.

Performance of DPG Model

Thanks for your work! I ran dpg_train.py and performed a leave-one-person-out evaluation on MPIIGaze. What is the reasonable way to report the result: the mean of the 15-fold final test results, or the mean of the 15-fold best live test results?

Aborted (core dumped)

Great job! I got some errors when I ran elg_demo.py.

Traceback (most recent call last):
File "elg_demo.py", line 69, in
eye_image_shape=(36, 60))
File "/home/mihan/Desktop/GazeML-master/src/datasources/webcam.py", line 21, in init
super().init(**kwargs)
File "/home/mihan/Desktop/GazeML-master/src/datasources/frames.py", line 47, in init
shuffle=False, staging=staging, **kwargs)
File "/home/mihan/Desktop/GazeML-master/src/core/data_source.py", line 59, in init
labels, dtypes, shapes = self._determine_dtypes_and_shapes()
File "/home/mihan/Desktop/GazeML-master/src/core/data_source.py", line 194, in _determine_dtypes_and_shapes
raw_entry = next(self.entry_generator(yield_just_one=True))
File "/home/mihan/Desktop/GazeML-master/src/datasources/frames.py", line 106, in entry_generator
self.detect_landmarks(frame)
File "/home/mihan/Desktop/GazeML-master/src/datasources/frames.py", line 171, in detect_landmarks
predictor = get_landmarks_predictor()
File "/home/mihan/Desktop/GazeML-master/src/datasources/frames.py", line 371, in get_landmarks_predictor
_landmarks_predictor = dlib.shape_predictor(dat_path)
RuntimeError: Error deserializing object of type int
FATAL: exception not rethrown
Aborted (core dumped)

And the environment is:
OS: Ubuntu 16.04
GPU: GTX 1060
CPU: Intel(R) Core(TM) i3-4170 CPU @ 3.70GHz

_libgcc_mutex 0.1
absl-py 0.8.0
astor 0.8.0
ca-certificates 2019.5.15
certifi 2018.8.24
coloredlogs 10.0
dlib 19.17.0
gast 0.2.2
google-pasta 0.1.7
grpcio 1.23.0
h5py 2.9.0
humanfriendly 4.18
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.0
libedit 3.1.20181209
libffi 3.2.1
libgcc-ng 9.1.0
libstdcxx-ng 9.1.0
Markdown 3.1.1
ncurses 6.1
numpy 1.17.1
opencv-python 4.1.0.25
openssl 1.0.2s
pandas 0.25.1
pip 10.0.1
pip 19.2.3
protobuf 3.9.1
python 3.5.6
python-dateutil 2.8.0
pytz 2019.2
readline 7.0
scipy 1.3.1
setuptools 41.2.0
setuptools 40.2.0
six 1.12.0
sqlite 3.29.0
tensorboard 1.14.0
tensorflow-estimator 1.14.0rc1
tensorflow-gpu 1.14.0
termcolor 1.1.0
tk 8.6.8
ujson 1.35
Werkzeug 0.15.5
wheel 0.31.1
wrapt 1.11.1
xz 5.2.4
zlib 1.2.11

Heatmap values are exceptionally low - unable to generate gaze

Hello,

I've been trying to get this code to run for some time now. When I do, I can only see two red dots at the corners of the eyes, and no gaze vector or iris landmarks as in the video.

I dug deeper, and the issue apparently is in the values of the heatmaps.
This is how it looks:
HeatMaps_amax: [0.04515158 0.07932845 0.05255887 0.06401039 0.02747708 0.0497707
0.06598783 0.04077904 0.03464168 0.00753675 0.01112193 0.00911523
0.03209448 0.03100399 0.0287441 0.02697178 0.03532294 0.05629761]

None of them are even close to 0.1, and the key condition requires them to be at least 0.7.
So the estimation code block never gets executed.

I have downloaded the model files with the bash script and kept both folders under the output directory.
Still there is no improvement in the values.

Can you please suggest what the issue could be here.
TIA
Vaibhav
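
For reference, an illustration (not the repository's exact code) of the gating described above, assuming 18 heatmaps of size 36 x 60:

    import numpy as np

    heatmaps = np.random.rand(18, 36, 60) * 0.05  # stand-in for model output
    heatmap_maxima = np.amax(heatmaps.reshape(18, -1), axis=1)
    if np.all(heatmap_maxima > 0.7):
        pass  # confident landmarks: proceed with gaze estimation
    else:
        print('Heatmap peaks too low; gaze estimation is skipped.')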

Video stream error

I have tried so many videos.
They show many different errors...

One is like:

threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "elg_demo.py", line 224, in _visualize_output
bgr[v1:v2, u0:u1] = eye_image_annotated
ValueError: could not broadcast input array from shape (216,360,3) into shape (72,360,3)

The second is like:

Video "xxxxxxxxxxxxxxxxx.mp4" closed.

only one face

Hello,
if I want to select only one face, how should I modify the code?

live_tester in training problem

self._tester = LiveTester(self, self._test_data, use_batch_statistics_at_test)

Hi, this work is really good. When running the DPG code, I encountered some problems. As you can see in the link, the model initializes the live_tester during model initialization; however, use_batch_statistics_at_test is set to True, which leads to use_batch_statistics being set to True during testing in the do_full_test function. So my question is: why set is_training in the batch_norm function to True during testing? I tried setting it to False, and during training I found the test results were very bad.
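
As a minimal illustration of the flag being discussed, using the generic TF1 batch-norm API rather than this repository's wrapper: with is_training=True the layer normalizes with the current batch's statistics, and with False it uses the stored moving averages.

    import tensorflow as tf

    x = tf.placeholder(tf.float32, [None, 64])
    # A scalar switch comparable to use_batch_statistics in the discussion:
    use_batch_statistics = tf.placeholder_with_default(True, [])
    y = tf.contrib.layers.batch_norm(x, is_training=use_batch_statistics)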

StreamExecutor device (0)

OS: Lubuntu 18.04
Python: 3.6.7
Tensorflow: 1.13.1

I am running this inside a virtual environment where python = python3.
From inside src, as instructed, I ran python elg_demo.py.

This is the output I get.

What have I done wrong?

python elg_demo.py --from_video /home/ffffff/my_video-1.mkv
2019-04-18 10:41:57.908373: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-04-18 10:41:57.912338: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 1800000000 Hz
2019-04-18 10:41:57.912703: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0xfdd420 executing computations on platform Host. Devices:
2019-04-18 10:41:57.912757: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): ,
Video "/home/ffffff/my_video-1.mkv" closed.

UnityEyes forbidden to download

Hi, can you offer me a link to download the UnityEyes zip? I visited the website you offered, and when I clicked the download link for the zip file, it redirected to a forbidden web page. Thanks :)

Training problems

Hi, I use UnityEyes to generate images. Then I run elg_train.py and get this problem:

10/04 06:34 INFO 0079261> heatmaps_mse = 0.00100194, radius_mse = 1.17517e-07
10/04 06:34 INFO 0079270> heatmaps_mse = 0.00119301, radius_mse = 8.82096e-08
10/04 06:34 INFO 0079280> heatmaps_mse = 0.00114937, radius_mse = 1.55061e-07
10/04 06:34 INFO 0079289> heatmaps_mse = 0.00109943, radius_mse = 1.84821e-07
Exception in thread preprocess_UnityEyes_27:
Traceback (most recent call last):
File "/home/wang/anaconda3/envs/tensorflow-gpu/lib/python3.5/threading.py", line 914, in _bootstrap_inner
self.run()
File "/home/wang/anaconda3/envs/tensorflow-gpu/lib/python3.5/threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "/media/wang/Toshiba/lmj/2019term/papers/GazeML/GazeML-win/src/core/data_source.py", line 245, in preprocess_job
preprocessed_entry_dict = self.preprocess_entry(raw_entry)
File "/media/wang/Toshiba/lmj/2019term/papers/GazeML/GazeML-win/src/datasources/unityeyes.py", line 237, in preprocess_entry
thickness=int(6*line_rand_nums[j + 4]), lineType=cv.LINE_AA)
cv2.error: OpenCV(3.4.3) /io/opencv/modules/imgproc/src/drawing.cpp:1811: error: (-215:Assertion failed) 0 < thickness && thickness <= MAX_THICKNESS in function 'line'
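
A plausible workaround, not verified against the repository: cv.line() asserts thickness >= 1, and int(6 * line_rand_nums[j + 4]) can round down to 0 when the random draw is small, so clamping the value avoids the assertion.

    import numpy as np

    line_rand_nums = np.random.rand(5)  # stand-in for the repository's values
    j = 0
    thickness = max(1, int(6 * line_rand_nums[j + 4]))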

Failed to load the frozen graph of GazeML

Hi:
With the code below added to GazeML/src/core/model.py inference_generator, I can successfully export GazeML to a frozen graph gaze.pb with the weights loaded by the saver:

    sess = self._tensorflow_session
    from tensorflow.python.framework import graph_util
    constant_graph = graph_util.convert_variables_to_constants(
            sess, sess.graph_def,
            ['hourglass/hg_2/after/hmap/conv/BiasAdd', # heatmaps
             'upscale/mul', # landmarks
             'radius/out/fc/BiasAdd', # radius
             'Webcam/fifo_queue_DequeueMany', # frame_index, eye, eye_index
            ])
    with tf.gfile.FastGFile('./gaze.pb', mode='wb') as f:
        f.write(constant_graph.SerializeToString())

Then I try to load it with the code below:

    with tf.gfile.FastGFile('gaze.pb', 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        _ = tf.import_graph_def(graph_def, name='')

But loading fails with the errors below:

2019-05-16 22:49:42.059659: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Traceback (most recent call last):
  File "C:\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\importer.py", line 489, in import_graph_def
    graph._c_graph, serialized, options)  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 0 of node radius/fc3/BatchNorm/cond_1/AssignMovingAvg_1/Switch was passed float from radius/fc3/BatchNorm/moving_variance:0 incompatible with expected float_ref.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "gaze.py", line 47, in <module>
    eye,eye_index,frame_index,landmarks,radius = model()
  File "gaze.py", line 37, in model
    _ = tf.import_graph_def(graph_def, name='')
  File "C:\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\util\deprecation.py", line 432, in new_func
    return func(*args, **kwargs)
  File "C:\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\importer.py", line 493, in import_graph_def
    raise ValueError(str(e))
ValueError: Input 0 of node radius/fc3/BatchNorm/cond_1/AssignMovingAvg_1/Switch was passed float from radius/fc3/BatchNorm/moving_variance:0 incompatible with expected float_ref.

I googled it and found this page, https://stackoverflow.com/questions/34265768/what-is-a-tensorflow-float-ref; according to its comments, there may be a use of tf.Variable instead of a tf.placeholder, which causes this issue.

But I am not familiar with the GazeML code. Could you help to fix this issue? Thanks a lot!
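
A commonly shared workaround for this class of float_ref errors when freezing graphs that contain batch norm (seen in StackOverflow threads; not verified against GazeML specifically) is to rewrite the offending nodes in the GraphDef before freezing:

    # graph_def here is the GraphDef about to be frozen, as in the code above.
    for node in graph_def.node:
        if node.op == 'RefSwitch':
            node.op = 'Switch'
            for i, name in enumerate(node.input):
                if 'moving_' in name:
                    node.input[i] = name + '/read'
        elif node.op == 'AssignSub':
            node.op = 'Sub'
            if 'use_locking' in node.attr:
                del node.attr['use_locking']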

error occurs when I run elg.py

OS: ubuntu 18.04
Python: 3.6.7
Tensorflow: 1.12.0
opencv: 4.0.0

Thanks for your work! When I run elg_train.py, it works perfectly until about step 075000.
When I hit this error, sometimes training continues, and sometimes it just stops after another 4-6 steps. Looking forward to your reply.
10/05 16:14 INFO 0083137> heatmaps_mse = 0.00106249, radius_mse = 1.75501e-07
10/05 16:15 INFO 0083139> heatmaps_mse = 0.00100996, radius_mse = 1.32564e-07
Exception in thread preprocess_UnityEyes_4:
Traceback (most recent call last):
File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/home/vision_rd/huhuaidi/project/GazeML/src/core/data_source.py", line 245, in preprocess_job
preprocessed_entry_dict = self.preprocess_entry(raw_entry)
File "/home/vision_rd/huhuaidi/project/GazeML/src/datasources/unityeyes.py", line 238, in preprocess_entry
thickness=int(6*line_rand_nums[j + 4]), lineType=cv.LINE_AA)
cv2.error: OpenCV(4.0.0) /io/opencv/modules/imgproc/src/drawing.cpp:1811: error: (-215:Assertion failed) 0 < thickness && thickness <= MAX_THICKNESS in function 'line'

10/05 16:15 INFO 0083141> heatmaps_mse = 0.00116619, radius_mse = 1.57683e-07
10/05 16:15 INFO 0083143> heatmaps_mse = 0.000970301, radius_mse = 1.92502e-07
10/05 16:15 INFO 0083145> heatmaps_mse = 0.00101215, radius_mse = 1.51191e-07

[elg_demo error] ValueError: could not broadcast input array from shape (216,360,3) into shape (216,180,3)

Hi all,
Has anybody encountered the following error when trying to run elg_demo.py?
I have struggled with this for a few days. Could anyone tell me how to fix this issue?

my environment:
Windows10
CUDA 10.0
cuDNN 7.6
Tensorflow-gpu 1.14.0
opencv-python 4.1.0.25
python 3.6

Exception in thread visualization:
Traceback (most recent call last):
  File "C:\Miniconda3\envs\tensorflow-gpu\lib\threading.py", line 916, in _bootstrap_inner
    self.run()
  File "C:\Miniconda3\envs\tensorflow-gpu\lib\threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "elg_demo.py", line 234, in _visualize_output
    bgr[v0:v1, u0:u1] = eye_image_raw
ValueError: could not broadcast input array from shape (216,360,3) into shape (216,180,3)

2019-07-19 13:32:50.978486: W tensorflow/core/kernels/queue_base.cc:277] _0_Video/fifo_queue: Skipping cancelled enqueue attempt with queue not closed

Thank you.

ValueError: could not broadcast input array from shape (216,360,3) into shape (216,280,3)

Step 1: I verified the virtual environment could read video input and write video output with another python application.
Step 2: I then ran the demo app (debug mode) with the same video input file that was validated in step 1 (env=tensorflow 1.12/cuda 9.0/Python 3.6.9).

Here is the error:

File "elg_demo.py", line 223, in _visualize_output
bgr[v0:v1, u0:u1] = eye_image_raw
ValueError: could not broadcast input array from shape (216,360,3) into shape (216,280,3)


(dl4cvpy36-tf.12) ubuntu@ip-172-31-0-76:~/cviz/GazeML/src$ python3 elg_demo.py -v debug --from_video /home/ubuntu/cviz/GazeML/driving-drowsy.mov --record_video /home/ubuntu/cviz/GazeML/video-out/driving-drowsy.avi
/home/ubuntu/.virtualenvs/dl4cvpy36-tf.12/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:523: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/ubuntu/.virtualenvs/dl4cvpy36-tf.12/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:524: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/ubuntu/.virtualenvs/dl4cvpy36-tf.12/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/ubuntu/.virtualenvs/dl4cvpy36-tf.12/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/ubuntu/.virtualenvs/dl4cvpy36-tf.12/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/ubuntu/.virtualenvs/dl4cvpy36-tf.12/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:532: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
2019-12-16 02:24:43.940549: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-12-16 02:24:47.134274: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-12-16 02:24:47.134705: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235
pciBusID: 0000:00:1e.0
totalMemory: 11.17GiB freeMemory: 11.10GiB
2019-12-16 02:24:47.134735: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-12-16 02:24:49.512019: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-12-16 02:24:49.512094: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2019-12-16 02:24:49.512109: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2019-12-16 02:24:49.512409: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10758 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:1e.0, compute capability: 3.7)
16/12 02:24 INFO Initialized data source: "Video"
16/12 02:24 DEBUG I do not know how to create a summary for videostream/frame_index ([2])
16/12 02:24 DEBUG I do not know how to create a summary for videostream/eye_index ([2])
16/12 02:25 INFO Built model.
Parsing Inputs...
16/12 02:25 INFO ------------------------------
16/12 02:25 INFO Approximate Model Statistics
16/12 02:25 INFO ------------------------------
16/12 02:25 INFO FLOPS per input: 1,006,288,359.0
16/12 02:25 INFO Trainable Parameters: 712,527
16/12 02:25 INFO ------------------------------
INFO:tensorflow:Restoring parameters from /home/ubuntu/cviz/GazeML/outputs/ELG_i180x108_f60x36_n64_m3/checkpoints/hourglass/model-4203370
16/12 02:25 INFO Restoring parameters from /home/ubuntu/cviz/GazeML/outputs/ELG_i180x108_f60x36_n64_m3/checkpoints/hourglass/model-4203370
INFO:tensorflow:Restoring parameters from /home/ubuntu/cviz/GazeML/outputs/ELG_i180x108_f60x36_n64_m3/checkpoints/radius/model-4203370
16/12 02:25 INFO Restoring parameters from /home/ubuntu/cviz/GazeML/outputs/ELG_i180x108_f60x36_n64_m3/checkpoints/radius/model-4203370
Exception in thread visualization:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/local/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "elg_demo.py", line 223, in _visualize_output
bgr[v0:v1, u0:u1] = eye_image_raw
ValueError: could not broadcast input array from shape (216,360,3) into shape (216,280,3)

2019-12-16 02:25:19.946175: W tensorflow/core/kernels/queue_base.cc:277] _0_Video/fifo_queue: Skipping cancelled enqueue attempt with queue not closed
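For what it's worth, the crash happens because the raw eye patch being pasted is wider than the destination region in the preview frame (360 px vs. 280 px), which suggests the input video's aspect ratio differs from what the visualization code expects. A defensive workaround (a sketch, not the repo's official fix; the variable names follow the traceback) is to resize the patch to the destination slice before assignment:

    import cv2
    import numpy as np

    def paste_eye_patch(bgr: np.ndarray, eye_image_raw: np.ndarray,
                        v0: int, v1: int, u0: int, u1: int) -> None:
        """Paste eye_image_raw into bgr[v0:v1, u0:u1], resizing on mismatch."""
        h, w = v1 - v0, u1 - u0
        if eye_image_raw.shape[:2] != (h, w):
            # Note: cv2.resize takes (width, height), not (height, width).
            eye_image_raw = cv2.resize(eye_image_raw, (w, h))
        bgr[v0:v1, u0:u1] = eye_image_raw

Alternatively, resizing the input video to a 16:9 resolution before running the demo may avoid the mismatch entirely.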

TypeError: profile() missing 1 required positional argument: 'graph'

I ran into a runtime error with TF 1.5 (see below).

Which version of TensorFlow did you test with?


(cv40py35) nvidia@tegra-ubuntu:~/cviz/GazeML/src$ python3 elg_demo.py
2019-12-03 17:27:32.703689: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:859] ARM64 does not support NUMA - returning NUMA node zero
2019-12-03 17:27:32.703844: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1105] Found device 0 with properties:
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9984
pciBusID: 0000:00:00.0
totalMemory: 3.89GiB freeMemory: 1.84GiB
2019-12-03 17:27:32.703909: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1195] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)

(process:15956): GStreamer-CRITICAL **: gst_element_get_state: assertion 'GST_IS_ELEMENT (element)' failed
VIDEOIO ERROR: V4L: Property (6) not supported by device
03/12 17:27 INFO Initialized data source: "Webcam"
03/12 17:28 INFO Built model.
Traceback (most recent call last):
File "elg_demo.py", line 92, in
'loss_terms_to_optimize': {'dummy': ['hourglass', 'radius']},
File "/home/nvidia/cviz/GazeML/src/models/elg.py", line 27, in init
super().init(tensorflow_session, **kwargs)
File "/home/nvidia/cviz/GazeML/src/core/model.py", line 101, in init
self._build_all_models()
File "/home/nvidia/cviz/GazeML/src/core/model.py", line 190, in _build_all_models
tf.profiler.ProfileOptionBuilder.float_operation()
TypeError: profile() missing 1 required positional argument: 'graph'
Exception ignored in: <bound method BaseModel.__del__ of <models.elg.ELG object at 0x7f5899b710>>
Traceback (most recent call last):
File "/home/nvidia/cviz/GazeML/src/core/model.py", line 109, in del
File "/home/nvidia/cviz/GazeML/src/core/data_source.py", line 156, in cleanup
File "/home/nvidia/.virtualenvs/cv40py35/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 895, in run
File "/home/nvidia/.virtualenvs/cv40py35/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1128, in _run
File "/home/nvidia/.virtualenvs/cv40py35/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1344, in _do_run
File "/home/nvidia/.virtualenvs/cv40py35/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1350, in _do_call
File "/home/nvidia/.virtualenvs/cv40py35/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1320, in _run_fn
File "/home/nvidia/.virtualenvs/cv40py35/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1381, in _extend_graph
File "/home/nvidia/.virtualenvs/cv40py35/lib/python3.5/site-packages/google/protobuf/internal/python_message.py", line 1042, in SerializeToString
File "/home/nvidia/.virtualenvs/cv40py35/lib/python3.5/site-packages/google/protobuf/internal/python_message.py", line 1051, in SerializePartialToString
File "/home/nvidia/.virtualenvs/cv40py35/lib/python3.5/site-packages/google/protobuf/internal/python_message.py", line 1063, in InternalSerialize
File "/home/nvidia/.virtualenvs/cv40py35/lib/python3.5/site-packages/google/protobuf/descriptor.py", line 147, in GetOptions
File "", line 969, in _find_and_load
File "", line 954, in _find_and_load_unlocked
File "", line 887, in _find_spec
TypeError: 'NoneType' object is not iterable
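The traceback points at the FLOPS-counting call in core/model.py. In TF 1.5, tf.profiler.profile still required the graph argument positionally; later 1.x releases made it optional. A hedged sketch of the call adjusted for TF 1.5 (the exact call site may differ in your checkout):

    import tensorflow as tf

    # Pass the graph explicitly; newer TF 1.x defaults it to the current graph.
    flops = tf.profiler.profile(
        graph=tf.get_default_graph(),
        options=tf.profiler.ProfileOptionBuilder.float_operation(),
    )

Alternatively, upgrading TensorFlow (the logs elsewhere in this thread show 1.12 working) avoids the issue.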

Output raw data points from model prediction

I would like a bit of clarification: is it possible to output values such as head pose and eye gaze position from the model in debug mode, or to log them? Also, how can the model be used for single-image prediction rather than a video or webcam feed? Your help would be highly appreciated.
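In case it helps: the demo computes pitch/yaw on the fly from the predicted iris and eyeball centres when it draws the gaze arrow, so the same values can be printed or logged. A sketch of that computation (the landmark ordering and the assumption that the last two rows are the iris and eyeball centres are taken from the demo code and may differ in your checkout):

    import numpy as np

    def gaze_from_landmarks(landmarks: np.ndarray, radius: float):
        """landmarks: (N, 2) array; the last two rows are assumed to be
        the iris centre and the eyeball centre, as in the demo code."""
        (i_x0, i_y0), (e_x0, e_y0) = landmarks[-2, :], landmarks[-1, :]
        theta = -np.arcsin(np.clip((i_y0 - e_y0) / radius, -1.0, 1.0))  # pitch
        phi = np.arcsin(np.clip((i_x0 - e_x0) / (radius * -np.cos(theta)),
                                -1.0, 1.0))                              # yaw
        return theta, phi

For a single image, one option might be to wrap the image as a one-frame video source, since the demo's webcam and video inputs both feed the same preprocessing pipeline.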

Not getting output for realtime using webcam

Can anyone please tell me which options to use to get output from a webcam?
Running python3 elg_demo.py --help lists the following options:
Demonstration of landmarks localization.

optional arguments:
-h, --help show this help message and exit
-v {debug,info,warning,error,critical}
logging level
--from_video FROM_VIDEO
Use this video path instead of webcam
--record_video RECORD_VIDEO
Output path of video of demonstration.
--fullscreen
--headless
--fps FPS Desired sampling rate of webcam
--camera_id CAMERA_ID
ID of webcam to use

Thanks in advance.
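Judging by the help text ("Use this video path instead of webcam"), none of the options are strictly required: the demo falls back to a webcam when --from_video is not given. For example, assuming your webcam is device 0:

    python3 elg_demo.py --camera_id 0 --fps 30 -v debug

If no window appears, it may be worth first checking that OpenCV can open the camera outside the demo.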

Program Hang in training stage

Hi,
I wonder whether anyone else has run into the same problem I have right now.
The training code works fine at the beginning, but it gets stuck after saving a checkpoint. No error message appears; I have been waiting for hours and it is still frozen.

I0730 11:34:55.581835 18580 time_manager.py:50] 0023481> heatmaps_mse = 0.000482749, radius_mse = 2.46166e-08
I0730 11:34:57.720099 18580 time_manager.py:50] 0023497> heatmaps_mse = 0.000477729, radius_mse = 2.61259e-08
I0730 11:35:03.389537 18580 checkpoint_manager.py:86] CheckpointManager::save_all call done

It is just stuck on the last line above.

Also, in a second situation, it gets stuck right after the "Exiting thread preprocess" messages:

I0730 15:46:04.360245 10444 checkpoint_manager.py:86] CheckpointManager::save_all call done
I0730 15:46:04.368223 16840 data_source.py:253] Exiting thread preprocess_UnityEyes_5
I0730 15:46:04.370217 25984 data_source.py:253] Exiting thread preprocess_UnityEyes_4
I0730 15:46:04.371214 29716 data_source.py:253] Exiting thread preprocess_UnityEyes_3
I0730 15:46:04.371214 13788 data_source.py:253] Exiting thread preprocess_UnityEyes_7
I0730 15:46:04.372213 28312 data_source.py:253] Exiting thread preprocess_UnityEyes_0
I0730 15:46:04.372213 28344 data_source.py:253] Exiting thread preprocess_UnityEyes_6
I0730 15:46:04.372213 28700 data_source.py:253] Exiting thread preprocess_UnityEyes_2
I0730 15:46:04.372213  6704 data_source.py:253] Exiting thread preprocess_UnityEyes_1

I was trying to train from scratch with the UnityEyes dataset, and my environment settings are as follows:

Windows10
CUDA 10.0
cuDNN 7.6
Tensorflow-gpu 1.14.0
opencv-python 4.1.0.25
python 3.6

Or do I need some additional dependency to run this repo? I have problems getting elg_demo.py to run, too.

Identify screen coordinates where user would be looking

Hi,

We've tried the GazeML model to detect the gaze vector/direction. However, I would like to understand how we can convert the gaze coordinates (pitch and yaw) into the screen coordinates of where I would be looking.

Is this possible without calibrating the gaze against fixed points on the screen? Please consider the following assumptions on our side:

  1. Camera and Screen are in the same plane (Camera is on top of the screen just like a webcam)

  2. Distance of person detected from the camera is unknown and may not be constant

Any help would be highly appreciated!
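Not an authoritative answer, but for a first-order estimate under assumption 1 you can intersect the gaze ray with the screen plane: with the head at distance d in front of the camera, the on-screen offsets from the camera position are d*tan(yaw) horizontally and d*tan(pitch) vertically. A sketch, assuming pitch and yaw are in radians relative to the camera axis:

    import math

    def gaze_to_screen_mm(pitch: float, yaw: float, distance_mm: float,
                          head_x_mm: float = 0.0, head_y_mm: float = 0.0):
        """Intersect the gaze ray with the screen/camera plane.

        Returns (x, y) offsets from the camera position in millimetres;
        divide by your screen's mm-per-pixel to get pixel coordinates.
        """
        x_mm = head_x_mm + distance_mm * math.tan(yaw)
        y_mm = head_y_mm + distance_mm * math.tan(pitch)
        return x_mm, y_mm

Because of assumption 2 the distance is unknown, so you would still need either a depth estimate (e.g. from the interocular distance in pixels plus the camera intrinsics) or a short calibration to fix the scale; without one of those, only the direction, not the screen point, is determined.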

How to get gaze estimation from landmarks?

Nice work!
I have four questions about the paper:
1. The original paper says the authors trained an SVR model on the MPIIGaze dataset. How is that done? Do you use a trained GazeML model to predict landmarks on MPIIGaze images and then use the detected landmark results, together with the ground-truth gaze, to train an SVR?
2. Why not directly use the UnityEyes dataset's landmarks and its gaze-vector ground truth to train the SVR?
3. Why use an SVR? Would regressing the result with a few fully connected layers also work?
4. What exactly does "calibration with 20 or more samples" mean in the paper: calibrating to obtain camera parameters, or something else? I don't understand it exactly.

I would very much appreciate it if the author or others could answer my questions.
Thank you very much.
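For reference, the landmarks-to-gaze regression step from question 1 can be prototyped with scikit-learn. This is a sketch on synthetic data, not the authors' pipeline, and the feature layout of 18 (x, y) landmarks flattened to 36 values is an assumption:

    import numpy as np
    from sklearn.multioutput import MultiOutputRegressor
    from sklearn.svm import SVR

    # Synthetic stand-ins: 500 samples of flattened landmarks -> (pitch, yaw).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 36))   # 18 landmarks * 2 coordinates
    y = rng.normal(size=(500, 2))    # pitch, yaw in radians

    # SVR is single-output, so wrap it to regress pitch and yaw jointly.
    model = MultiOutputRegressor(SVR(kernel='rbf', C=1.0))
    model.fit(X, y)
    pitch, yaw = model.predict(X[:1])[0]

Regarding question 4, the "calibration with 20 or more samples" presumably means fitting (or correcting) such a regressor with 20+ samples collected from a specific user, though the authors would have to confirm.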

Error on elg_train.py

I executed elg_train.py with 1,000 UnityEyes pictures and their .json files, but an error occurred.

tensorflow/core/kernels/queue_base.cc:277] _0_UnityEyes/random_shuffle_queue: Skipping cancelled enqueue attempt with queue not closed

Please tell me how to solve this error.

Evaluation on UnityEyes Dataset

We can train on one UnityEyes dataset with elg_train.py.

How can we evaluate on another UnityEyes dataset, given that dataset and a checkpoint model?

Thank you!
