
Emotion detection using deep learning

Introduction

This project aims to classify the emotion on a person's face into one of seven categories using deep convolutional neural networks. The model is trained on the FER-2013 dataset, which was published at the International Conference on Machine Learning (ICML). The dataset consists of 35887 grayscale, 48x48-pixel face images labeled with seven emotions: angry, disgusted, fearful, happy, neutral, sad and surprised.

Dependencies

  • Python 3, OpenCV, Tensorflow
  • To install the required packages, run pip install -r requirements.txt.

Basic Usage

The repository is currently compatible with tensorflow-2.0 and uses the Keras API via the tensorflow.keras module.

  • First, clone the repository and enter the folder:
git clone https://github.com/atulapra/Emotion-detection.git
cd Emotion-detection
  • Download the FER-2013 dataset and place it inside the src folder.

  • If you want to train this model, use:

cd src
python emotions.py --mode train
  • If you want to view the predictions without training again, you can download the pre-trained model from here and then run:
cd src
python emotions.py --mode display
  • The folder structure is of the form:
    src:

    • data (folder)
    • emotions.py (file)
    • haarcascade_frontalface_default.xml (file)
    • model.h5 (file)
  • This implementation by default detects emotions on all faces in the webcam feed. With a simple 4-layer CNN, the test accuracy reached 63.2% in 50 epochs; a minimal sketch of such a network is shown below.
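The following is a minimal sketch of a small CNN of this kind in tensorflow.keras; the exact filter counts and layer order are illustrative assumptions and may differ from the network defined in emotions.py.

# Sketch of a small CNN for 48x48 grayscale FER-2013 images.
# Layer sizes here are illustrative assumptions, not the exact emotions.py model.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(48, 48, 1)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Dropout(0.25),
    Conv2D(128, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Conv2D(128, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Dropout(0.25),
    Flatten(),
    Dense(1024, activation='relu'),
    Dropout(0.5),
    Dense(7, activation='softmax'),   # seven emotion classes
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])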

Accuracy plot

Data Preparation (optional)

  • The original FER-2013 dataset on Kaggle is available as a single CSV file. I converted it into a dataset of PNG images for training/testing.

  • In case you are looking to experiment with new datasets, you may have to deal with data in the CSV format. I have provided the code I wrote for data preprocessing in the dataset_prepare.py file, which can be used for reference; a sketch of this kind of conversion is shown below.
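Below is a minimal sketch of this kind of CSV-to-PNG conversion, assuming the standard Kaggle fer2013.csv columns (emotion, pixels, Usage); the output folder names are illustrative and the script is not necessarily identical to dataset_prepare.py.

# Sketch: convert FER-2013 CSV rows into 48x48 PNG images grouped by split and emotion.
# Assumes the standard Kaggle fer2013.csv columns: emotion, pixels, Usage.
import os
import numpy as np
import pandas as pd
from PIL import Image

# Kaggle label order: 0 angry, 1 disgust, 2 fear, 3 happy, 4 sad, 5 surprise, 6 neutral.
emotions = ['angry', 'disgusted', 'fearful', 'happy', 'sad', 'surprised', 'neutral']
df = pd.read_csv('fer2013.csv')

for i, row in df.iterrows():
    pixels = np.array(row['pixels'].split(), dtype=np.uint8).reshape(48, 48)
    split = 'train' if row['Usage'] == 'Training' else 'test'
    out_dir = os.path.join('data', split, emotions[row['emotion']])
    os.makedirs(out_dir, exist_ok=True)
    Image.fromarray(pixels).save(os.path.join(out_dir, f'{i}.png'))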

Algorithm

  • First, the Haar cascade method is used to detect faces in each frame of the webcam feed.

  • The region of the image containing the face is resized to 48x48 and passed as input to the CNN.

  • The network outputs a list of softmax scores for the seven classes of emotions.

  • The emotion with the maximum score is displayed on the screen.
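Below is a minimal sketch of this loop, assuming model.h5 is a full saved Keras model (if the repository's file contains only weights, the architecture must be rebuilt and load_weights used instead) and that haarcascade_frontalface_default.xml is in the working directory; window and preprocessing details are illustrative.

# Sketch of the detection/classification loop described above.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

emotions = {0: "Angry", 1: "Disgusted", 2: "Fearful", 3: "Happy",
            4: "Neutral", 5: "Sad", 6: "Surprised"}
model = load_model('model.h5')  # assumes a full saved model, not weights only
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # 1. Detect faces with the Haar cascade.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        # 2. Crop and resize the face region to 48x48.
        roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        roi = roi.reshape(1, 48, 48, 1).astype('float32') / 255.0  # scale if trained on rescaled inputs
        # 3. Softmax scores for the seven classes; 4. display the argmax label.
        scores = model.predict(roi)[0]
        label = emotions[int(np.argmax(scores))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        cv2.putText(frame, label, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (255, 0, 0), 2)
    cv2.imshow('Emotion detection', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()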

References

  • "Challenges in Representation Learning: A report on three machine learning contests." I Goodfellow, D Erhan, PL Carrier, A Courville, M Mirza, B Hamner, W Cukierski, Y Tang, DH Lee, Y Zhou, C Ramaiah, F Feng, R Li,
    X Wang, D Athanasakis, J Shawe-Taylor, M Milakov, J Park, R Ionescu, M Popescu, C Grozea, J Bergstra, J Xie, L Romaszko, B Xu, Z Chuang, and Y. Bengio. arXiv 2013.

emotion-detection's People

Contributors

atulapra, dependabot[bot], karthikkec

emotion-detection's Issues

Is it overfitting?

Friend, I'm training on FER2013 like you and getting similar results: the training accuracy reaches around 95%, but val_acc stays around 60%. I searched on Google, and others tell me this is because of overfitting.

Change of sequence of labels

Why have you changed the sequence of emotions from {0:"Angry", 1:"Disgust", 2:"Fear", 3:"Happy", 4:"Sad", 5:"Surprise", 6:"Neutral"} to {0: "Angry", 1: "Disgusted", 2: "Fearful", 3: "Happy", 4: "Neutral", 5: "Sad", 6: "Surprised"} in emotions.py?

99% of the time "Neutral"

Dear Mr. Balaji,
I tried your project Emotion-detection. First I faced a problem with the requirements, which I solved, but then in the webcam feed I was smiling like crazy or opening my mouth for surprised, yet it was always "Neutral". How can I solve this?
Thanks in advance.
Regards,
P. Antonina

TypeError

During training, it shows this error:
File "C:\Users\lenovo\Anaconda3\envs\mypr\lib\site-packages\numpy\core\_asarray.py", line 85, in asarray
return array(a, dtype, copy=False, order=order)
TypeError: float() argument must be a string or a number, not 'PngImageFile'

Running this model in C# or C++

Hello,
Thanks for this work. I want to know how I can run this model in C# or C++.
I want to integrate this repo with OpenFace.

Thanks in advance for any help

Unable to open webcam

Found 28709 images belonging to 7 classes.
Found 7178 images belonging to 7 classes.

When I run it, it just classifies the dataset, but the webcam does not open. Can you explain why this is happening?

Emotion detection using deep learning

Hi,
I want to research emotion detection using deep learning, and I saw your project on GitHub. But unfortunately I cannot see the output. I am currently using Spyder.
Please guide me.
Thanks,
Askari

Data issue

I am facing this error: "The system cannot find the path specified: 'data/train'". Can you please let me know whether we have to first divide the data into train and test folders before running this code?

Can I learn more about the algorithm?

Good afternoon! Your work seems excellent to me. Thank you very much for sharing it; it has helped me a lot in my thesis work on combating bullying. Could I learn a bit more about the algorithms?

What are the CUDA and cuDNN versions corresponding to tensorflow==2.9.3? I have tried many methods online but failed, and I am unable to use the GPU.

My CUDA version is 11.2 and cuDNN is 8.1.
2024-02-23 14:22:42.333730: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2024-02-23 14:22:42.334621: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cublas64_11.dll'; dlerror: cublas64_11.dll not found
2024-02-23 14:22:42.335515: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cublasLt64_11.dll'; dlerror: cublasLt64_11.dll not found

LICENSE issue

Hi,

Nice and neat work. I want to know whether a license will be provided, preferably the MIT license, because I could not find any LICENSE file in the repo.

Kind Regards

Name 'cascade_classifier' is not defined?

I'm getting this error when I run the program with: python model.py singleface

File structure:

./
├── __pycache__/
│  ├── model.cpython-36.pyc
│  └── singleface.cpython-36.pyc
├── Data/
├── emojis/
│  ├── angry.png
│  ├── disgusted.png
│  ├── fearful.png
│  ├── happy.png
│  ├── neutral.png
│  ├── sad.png
│  └── surprised.png
├── Data.tar.gz
├── haarcascade_frontalface_default.xml
├── model.py
├── model_1_atul.tflearn.data-00000-of-00001
├── model_1_atul.tflearn.index
├── model_1_atul.tflearn.meta
├── multiface.py
└── singleface.py

Full Traceback:

------------Emotion Detection Program------------

Input data      (48, 48, 1)
WARNING:tensorflow:From /home/dentrax/Projects/GitHub/emotion-recognition-neural-networks/lib/python3.6/site-packages/tflearn/initializations.py:119: UniformUnitScaling.__init__ (from tensorflow.python.ops.init_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.initializers.variance_scaling instead with distribution=uniform to get equivalent behavior.
Conv1           (48, 48, 64)
Maxpool1        (24, 24, 64)
Conv2           (24, 24, 64)
Maxpool2        (12, 12, 64)
Conv3           (12, 12, 128)
Dropout         (12, 12, 128)
Fully connected (3072,)
Output          (7,)
WARNING:tensorflow:From /home/dentrax/Projects/GitHub/emotion-recognition-neural-networks/lib/python3.6/site-packages/tflearn/objectives.py:66: calling reduce_sum (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
2019-01-20 22:18:54.042648: E tensorflow/stream_executor/cuda/cuda_driver.cc:300] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
Traceback (most recent call last):
  File "model.py", line 75, in <module>
    import singleface
  File "/home/dentrax/Projects/GitHub/Emotion-detection/TFLearn/singleface.py", line 66, in <module>
    result = network.predict(format_image(frame))
  File "/home/dentrax/Projects/GitHub/Emotion-detection/TFLearn/singleface.py", line 22, in format_image
    faces = cascade_classifier.detectMultiScale(image,scaleFactor = 1.3 ,minNeighbors = 5)
NameError: name 'cascade_classifier' is not defined

Try my own trained model

Hello,
I want to use my own trained model (.h5) for emotion detection, but I get the error below. Could you let me know what the problem is?

2022-10-04 17:31:37.690958: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Found 28709 images belonging to 7 classes.
Found 7178 images belonging to 7 classes.
2022-10-04 17:31:50.018247: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'nvcuda.dll'; dlerror: nvcuda.dll not found
2022-10-04 17:31:50.018462: W tensorflow/stream_executor/cuda/cuda_driver.cc:263] failed call to cuInit: UNKNOWN ERROR (303)
2022-10-04 17:31:50.024974: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: DESKTOP-41NG8QC
2022-10-04 17:31:50.025176: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: DESKTOP-41NG8QC
2022-10-04 17:31:50.025657: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.

KeyError 'acc'

I'm not sure how you even ran the code with this error.
File "emotions.py", line 98, in <module> plot_model_history(model_info) File "emotions.py", line 26, in plot_model_history axs[0].plot(range(1,len(model_history.history['acc'])+1),model_history.history['acc']) KeyError: 'acc'

You should replace all instances of 'accuracy' with 'acc', or vice versa.
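One hedged, version-agnostic way to handle this is to look up whichever key the installed Keras actually reports, as in the small sketch below (the helper name is hypothetical):

# Sketch: read training accuracy regardless of whether Keras reports 'acc' or 'accuracy'.
def get_accuracy_history(history):
    key = 'accuracy' if 'accuracy' in history.history else 'acc'
    return history.history[key], history.history['val_' + key]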

NameError: name 'plt' is not defined

After training for 50 epochs (3 hours), the program threw an error because you forgot
import matplotlib.pyplot as plt
Please add this to emotions.py.

No Emotion for disgusted

It doesn't detect disgusted or fearful faces either. Also, is there any method to improve the accuracy?

Percentage values of all emotions

Hi, thanks for the code, and your trained model seems to work quite well!

I'm a bit of a beginner, and I'm working on a project where I'd need to obtain the probabilities of all emotions, as opposed to collapsing them and displaying only the highest one.

What would be the best way to approach this? Thanks!
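For what it's worth, model.predict already returns the full softmax vector, so the per-class percentages can be read directly instead of taking only the argmax. A minimal sketch, assuming a full saved model.h5 and an already preprocessed 48x48 grayscale face array (the zeros array below is a placeholder):

# Sketch: print the percentage score for every emotion instead of only the top one.
import numpy as np
from tensorflow.keras.models import load_model

emotions = ["Angry", "Disgusted", "Fearful", "Happy", "Neutral", "Sad", "Surprised"]
model = load_model('model.h5')               # assumes a full saved model, not just weights
face = np.zeros((1, 48, 48, 1), 'float32')   # placeholder for a preprocessed face crop

scores = model.predict(face)[0]              # softmax scores over the seven classes
for name, p in zip(emotions, scores):
    print(f"{name}: {100 * p:.1f}%")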

Graphs

How did you get the graphs for model accuracy and model loss?

error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'

The single-face version with TFLearn works perfectly, but both the multiface and the Keras versions give the same error. It seems the frame is not being read correctly, so I added:

ret, frame = cap.read()
if not ret:
    print("Could not read frame")
else:

and it continuously prints "Could not read frame". Has someone solved this? I am reading from a video, so I changed cap = cv2.VideoCapture(0) to cap = cv2.VideoCapture('diner.mp4'); that's the only change I made.
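A minimal sketch of reading a video file and stopping cleanly when frames run out, so cvtColor is never called on an empty frame ('diner.mp4' is the file mentioned above):

# Sketch: iterate over a video file and stop when no more frames can be read.
import cv2

cap = cv2.VideoCapture('diner.mp4')
if not cap.isOpened():
    raise RuntimeError("Could not open video file")

while True:
    ret, frame = cap.read()
    if not ret:      # end of file or a frame that could not be decoded
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # ... face detection / emotion prediction on gray ...

cap.release()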

emotion recognition

I used this code and model as is,
but when I test it, only 2 emotions are recognized,
and by default it shows "happy" even without a human face.
Please solve this.

How to solve the error????

When I execute 'python em_model.py singleface', this error occurs. How can I solve it?

(cv) pi@raspberrypi:~/Emotion-detection $ python em_model.py singleface
/home/pi/.virtualenvs/cv/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: compiletime version 3.4 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.5
return f(*args, **kwds)
/home/pi/.virtualenvs/cv/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: builtins.type size changed, may indicate binary incompatibility. Expected 432, got 412
return f(*args, **kwds)
/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters

------------Emotion Detection Program------------

---> Starting Neural Network

Input data (48, 48, 1)
WARNING:tensorflow:From /home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tflearn/initializations.py:119: UniformUnitScaling.init (from tensorflow.python.ops.init_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.initializers.variance_scaling instead with distribution=uniform to get equivalent behavior.
Conv1 (48, 48, 64)
Maxpool (24, 24, 64)
Conv2 (24, 24, 64)
Maxpool2 (12, 12, 64)
Conv3 (12, 12, 128)
Dropout (12, 12, 128)
Fully connected (3072,)
Output (4,)

WARNING:tensorflow:From /home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tflearn/objectives.py:66: calling reduce_sum (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
2018-10-20 00:00:06.073932: W tensorflow/core/framework/allocator.cc:113] Allocation of 12288 exceeds 10% of system memory.
2018-10-20 00:00:06.098093: W tensorflow/core/framework/allocator.cc:113] Allocation of 6400 exceeds 10% of system memory.
2018-10-20 00:00:06.105248: W tensorflow/core/framework/allocator.cc:113] Allocation of 409600 exceeds 10% of system memory.
2018-10-20 00:00:06.108324: W tensorflow/core/framework/allocator.cc:113] Allocation of 524288 exceeds 10% of system memory.
2018-10-20 00:00:06.111486: W tensorflow/core/framework/allocator.cc:113] Allocation of 226492416 exceeds 10% of system memory.
Traceback (most recent call last):
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1292, in _do_call
return fn(*args)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1277, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1367, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [4] rhs shape= [7]
[[{{node save_1/Assign_21}} = Assign[T=DT_FLOAT, _class=["loc:@FullyConnected_1/b"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](FullyConnected_1/b/Momentum, save_1/RestoreV2:21)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1538, in restore
{self.saver_def.filename_tensor_name: save_path})
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 887, in run
run_metadata_ptr)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1110, in _run
feed_dict_tensor, options, run_metadata)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1286, in _do_run
run_metadata)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1308, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [4] rhs shape= [7]
[[{{node save_1/Assign_21}} = Assign[T=DT_FLOAT, _class=["loc:@FullyConnected_1/b"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](FullyConnected_1/b/Momentum, save_1/RestoreV2:21)]]

Caused by op 'save_1/Assign_21', defined at:
File "em_model.py", line 77, in
import singleface
File "", line 969, in _find_and_load
File "", line 958, in _find_and_load_unlocked
File "", line 673, in _load_unlocked
File "", line 673, in exec_module
File "", line 222, in _call_with_frames_removed
File "/home/pi/Emotion-detection/singleface.py", line 48, in
network.build_network()
File "/home/pi/Emotion-detection/em_model.py", line 50, in build_network
self.model = tflearn.DNN(self.network,checkpoint_path = 'model_1_atul',max_checkpoints = 1,tensorboard_verbose = 2)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tflearn/models/dnn.py", line 65, in init
best_val_accuracy=best_val_accuracy)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tflearn/helpers/trainer.py", line 147, in init
allow_empty=True)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1094, in init
self.build()
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1106, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1143, in _build
build_save=build_save, build_restore=build_restore)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 787, in _build_internal
restore_sequentially, reshape)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 428, in _AddRestoreOps
assign_ops.append(saveable.restore(saveable_tensors, shapes))
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 119, in restore
self.op.get_shape().is_fully_defined())
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/ops/state_ops.py", line 221, in assign
validate_shape=validate_shape)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/ops/gen_state_ops.py", line 61, in assign
use_locking=use_locking, name=name)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3272, in create_op
op_def=op_def)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1768, in init
self._traceback = tf_stack.extract_stack()

InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [4] rhs shape= [7]
[[{{node save_1/Assign_21}} = Assign[T=DT_FLOAT, _class=["loc:@FullyConnected_1/b"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](FullyConnected_1/b/Momentum, save_1/RestoreV2:21)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "em_model.py", line 77, in
import singleface
File "/home/pi/Emotion-detection/singleface.py", line 48, in
network.build_network()
File "/home/pi/Emotion-detection/em_model.py", line 52, in build_network
self.load_model()
File "/home/pi/Emotion-detection/em_model.py", line 68, in load_model
self.model.load("model_1_atul.tflearn")
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tflearn/models/dnn.py", line 308, in load
self.trainer.restore(model_file, weights_only, **optargs)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tflearn/helpers/trainer.py", line 490, in restore
self.restorer.restore(self.session, model_file)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1574, in restore
err, "a mismatch between the current graph and the graph")
tensorflow.python.framework.errors_impl.InvalidArgumentError: Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Assign requires shapes of both tensors to match. lhs shape= [4] rhs shape= [7]
[[{{node save_1/Assign_21}} = Assign[T=DT_FLOAT, _class=["loc:@FullyConnected_1/b"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](FullyConnected_1/b/Momentum, save_1/RestoreV2:21)]]

Caused by op 'save_1/Assign_21', defined at:
File "em_model.py", line 77, in
import singleface
File "", line 969, in _find_and_load
File "", line 958, in _find_and_load_unlocked
File "", line 673, in _load_unlocked
File "", line 673, in exec_module
File "", line 222, in _call_with_frames_removed
File "/home/pi/Emotion-detection/singleface.py", line 48, in
network.build_network()
File "/home/pi/Emotion-detection/em_model.py", line 50, in build_network
self.model = tflearn.DNN(self.network,checkpoint_path = 'model_1_atul',max_checkpoints = 1,tensorboard_verbose = 2)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tflearn/models/dnn.py", line 65, in init
best_val_accuracy=best_val_accuracy)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tflearn/helpers/trainer.py", line 147, in init
allow_empty=True)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1094, in init
self.build()
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1106, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 1143, in _build
build_save=build_save, build_restore=build_restore)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 787, in _build_internal
restore_sequentially, reshape)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 428, in _AddRestoreOps
assign_ops.append(saveable.restore(saveable_tensors, shapes))
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/training/saver.py", line 119, in restore
self.op.get_shape().is_fully_defined())
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/ops/state_ops.py", line 221, in assign
validate_shape=validate_shape)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/ops/gen_state_ops.py", line 61, in assign
use_locking=use_locking, name=name)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3272, in create_op
op_def=op_def)
File "/home/pi/.virtualenvs/cv/local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1768, in init
self._traceback = tf_stack.extract_stack()

InvalidArgumentError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:

Assign requires shapes of both tensors to match. lhs shape= [4] rhs shape= [7]
[[{{node save_1/Assign_21}} = Assign[T=DT_FLOAT, _class=["loc:@FullyConnected_1/b"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](FullyConnected_1/b/Momentum, save_1/RestoreV2:21)]]

The model is not loaded.

Hi, thanks for sharing. When I run python em_model.py singleface, the screen always shows one emotion, and the output is all like this:
...
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]]
[[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]

I think the model isn't being loaded. But I have placed the data in Emotion-detection-master and run it as you did. Please help me with this question; I don't know why the model isn't running. Thank you very much!

TypeError: 'float' object is not iterable

Hi
When I execute 'python emotions.py --mode train',
after 50/50 epochs this error occurs and no new training files are generated. How can I solve it?

full error:
Traceback (most recent call last):
File "emotions.py", line 98, in
plot_model_history(model_info)
File "emotions.py", line 31, in plot_model_history
axs[0].set_xticks(np.arange(1,len(model_history.history['accuracy'])+1),len(model_history.history['accuracy'])/10)
File "/home/jaeger/.local/lib/python3.8/site-packages/matplotlib/axes/_base.py", line 75, in wrapper
return get_method(self)(*args, **kwargs)
File "/home/jaeger/.local/lib/python3.8/site-packages/matplotlib/axis.py", line 1857, in set_ticks
self.set_ticklabels(labels, minor=minor, **kwargs)
File "/home/jaeger/.local/lib/python3.8/site-packages/matplotlib/axis.py", line 1712, in set_ticklabels
ticklabels = [t.get_text() if hasattr(t, 'get_text') else t
TypeError: 'float' object is not iterable

thanks.

Your pre-trained model

Lovely work! Do you remember at what epoch you got your pre-trained model? Or what were the val_loss/acc of the pre-trained model you made available?

Cannot create two networks

The following code leads to errors:

from em_model import EMR  
n1 = EMR()    
n1.build_network()  
n2 = EMR()  
n2.build_network()

The solution is to import tensorflow and reset the default graph at the start of build_network:

tf.reset_default_graph()
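A minimal sketch of where such a reset would go, assuming build_network defines the tflearn graph (TF 1.x API; everything except reset_default_graph is a placeholder):

# Sketch (TF 1.x / tflearn): clear any previously built graph before defining a new one.
import tensorflow as tf

class EMR:
    def build_network(self):
        # Drop the graph left over by a previously constructed EMR instance.
        tf.reset_default_graph()
        # ... define the tflearn layers and DNN as before ...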

set_ticks() takes 2 positional arguments but 3 were given

Getting this error when I try to train

Traceback (most recent call last):
File "emotions.py", line 98, in
plot_model_history(model_info)
File "emotions.py", line 31, in plot_model_history
axs[0].set_xticks(np.arange(1,len(model_history.history['accuracy'])+1),len(model_history.history['accuracy'])/10)
File "/home/touseef/.local/lib/python3.8/site-packages/matplotlib/axes/_base.py", line 73, in wrapper
return get_method(self)(*args, **kwargs)
TypeError: set_ticks() takes 2 positional arguments but 3 were given

Problems with python emotions.py --mode train on Jetson Nano

Full error:
File "emotions.py", line 97, in
validation_steps=num_val // batch_size)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 1957, in fit_generator
initial_epoch=initial_epoch)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 1147, in fit
steps_per_execution=self._steps_per_execution)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py", line 1364, in get_data_handler
return DataHandler(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py", line 1166, in init
model=model)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py", line 939, in init
**kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py", line 809, in init
peek, x = self._peek_and_restore(x)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/data_adapter.py", line 943, in _peek_and_restore
return x[0], x
File "/usr/local/lib/python3.6/dist-packages/keras_preprocessing/image/iterator.py", line 57, in getitem
length=len(self)))
ValueError: Asked to retrieve element 0, but the Sequence has length 0

Does this need a GPU?

Does this need a GPU?
I ran it on my computer with the CPU only; it runs very slowly and keeps freezing.

I can run the Kaggle .py script, but I can't run the TFLearn single-face script, because:
"cv2.error: OpenCV(3.4.2) /Users/travis/build/skvark/opencv-python/opencv/modules/objdetect/src/cascadedetect.cpp:1698: error: (-215:Assertion failed) !empty() in function 'detectMultiScale'"

So I typed 'pip install opencv-python==3.4.1' in the terminal, but it reports:

ERROR: Could not find a version that satisfies the requirement opencv-python==3.4.1 (from versions: 3.4.2.16, 3.4.2.17, 3.4.3.18, 3.4.4.19, 3.4.5.20, 3.4.6.27, 3.4.7.28, 4.0.0.21, 4.0.1.24, 4.1.0.25, 4.1.1.26)
ERROR: No matching distribution found for opencv-python==3.4.1

emmm... this is painful.
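As a hedged note, the (-215) !empty() assertion in detectMultiScale usually means the Haar cascade XML failed to load (wrong path), not an OpenCV version problem; a minimal sketch of a quick check:

# Sketch: verify the Haar cascade actually loaded before calling detectMultiScale.
import cv2

cascade_path = 'haarcascade_frontalface_default.xml'  # adjust to the actual location
face_cascade = cv2.CascadeClassifier(cascade_path)
if face_cascade.empty():
    raise IOError("Could not load cascade file: " + cascade_path)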

Unable to start singleface.py

I get this when I run python3 model.py singleface
Traceback (most recent call last):
  File "model.py", line 3, in <module>
    import tflearn
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tflearn/__init__.py", line 4, in <module>
    from . import config
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tflearn/config.py", line 5, in <module>
    from .variables import variable
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tflearn/variables.py", line 7, in <module>
    from tensorflow.contrib.framework.python.ops import add_arg_scope as contrib_add_arg_scope
ModuleNotFoundError: No module named 'tensorflow.contrib'
I'm running MacOS High Sierra
