Facial Emotion Recognition is a Python project that uses a deep learning model to recognize human facial expressions.
The project includes the following features:
- An interface that captures the user's webcam feed and applies the trained model to recognize the facial expression of the user in real-time.
- A deep learning model for facial expression recognition, trained on the FER2013 dataset using Keras with a TensorFlow backend.
- A Python script to train the model on the FER2013 dataset, and a Jupyter notebook documenting the training process.
- A requirements file that includes all the Python packages needed to run the project.
To install and run the project:

- Clone the repository:

  ```shell
  git clone https://github.com/OmarEhab007/Facial-emotion-recognition.git
  ```

- Change into the project's root directory:

  ```shell
  cd Facial-emotion-recognition
  ```

- Create and activate a virtual environment:

  ```shell
  python3 -m venv env
  source env/bin/activate
  ```

- Install the required Python packages:

  ```shell
  pip install -r requirements.txt
  ```

- Run the application:

  ```shell
  python3 facial_expression_recognition.py
  ```
- The application will prompt the user to allow access to the webcam.
- The user can then look into the camera, and the model will recognize their facial expression.
- The recognized facial expression will be displayed on the screen.
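The real-time path boils down to three steps: crop the detected face, scale it to the 48x48 grayscale input that FER2013 models expect, and map the softmax output back to an emotion label. A minimal, dependency-light sketch of those steps (the function names are illustrative, not from the repository; `cv2.resize` would normally do the resizing):

```python
import numpy as np

# FER2013 class labels in index order (0=Angry ... 6=Neutral).
EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

def preprocess_face(face_gray):
    """Turn a cropped grayscale face into a (1, 48, 48, 1) model input.

    `face_gray` is a 2-D uint8 array; FER2013 models expect 48x48
    grayscale pixels scaled to [0, 1].
    """
    h, w = face_gray.shape
    # Nearest-neighbour resize in pure NumPy to keep the sketch
    # dependency-free (cv2.resize would normally be used here).
    rows = np.arange(48) * h // 48
    cols = np.arange(48) * w // 48
    resized = face_gray[rows][:, cols].astype("float32") / 255.0
    return resized.reshape(1, 48, 48, 1)

def label_from_prediction(probs):
    """Map the model's softmax output to a human-readable emotion."""
    return EMOTIONS[int(np.argmax(probs))]

# Stand-in for a face crop taken from a webcam frame.
face = np.random.randint(0, 256, (120, 96), dtype=np.uint8)
x = preprocess_face(face)
print(x.shape)  # (1, 48, 48, 1)
print(label_from_prediction([0.1, 0, 0, 0.7, 0.1, 0.05, 0.05]))  # Happy
```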
- The `train.py` script can be used to train the model on the FER2013 dataset.
- The Jupyter notebook `facial_expression_recognition.ipynb` shows the model's training process step-by-step and includes visualizations of the data and the model's performance.
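The FER2013 CSV stores each 48x48 image as a space-separated string of pixel values, alongside an `emotion` label (0-6) and a `Usage` column that splits the data into training and test sets; any training script has to unpack this first. A sketch of that parsing step, run here on synthetic rows (the helper name is illustrative):

```python
import csv
import io
import numpy as np

# Two synthetic rows standing in for the real fer2013.csv file, which
# has columns: emotion (0-6), pixels (2304 space-separated grayscale
# values), Usage (Training / PublicTest / PrivateTest).
fake_csv = (
    "emotion,pixels,Usage\n"
    "3," + " ".join(["128"] * 2304) + ",Training\n"
    "0," + " ".join(["64"] * 2304) + ",PublicTest\n"
)

def load_fer2013(fileobj):
    """Parse FER2013 rows into (images, labels) arrays, split by Usage."""
    splits = {"Training": ([], []), "PublicTest": ([], []), "PrivateTest": ([], [])}
    for row in csv.DictReader(fileobj):
        img = np.array(row["pixels"].split(), dtype="uint8").reshape(48, 48)
        images, labels = splits[row["Usage"]]
        images.append(img)
        labels.append(int(row["emotion"]))
    return {k: (np.array(imgs), np.array(lbls)) for k, (imgs, lbls) in splits.items()}

data = load_fer2013(io.StringIO(fake_csv))
x_train, y_train = data["Training"]
print(x_train.shape, y_train)  # (1, 48, 48) [3]
```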
- The FER2013 dataset was created by Pierre-Luc Carrier and Aaron Courville, and can be found on Kaggle.
- The model architecture is based on VGG16.
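For reference, a VGG-style classifier for 48x48 grayscale inputs can be sketched in Keras as below. The repository's exact layer counts, filter sizes, and regularization are not shown here, so treat this as an illustration of the architecture family rather than the trained model:

```python
from tensorflow.keras import layers, models

def build_vgg_style(input_shape=(48, 48, 1), num_classes=7):
    """VGG-style sketch: stacked 3x3 convolutions with max pooling,
    ending in a 7-way softmax for the FER2013 classes."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), padding="same", activation="relu"),
        layers.Conv2D(128, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_vgg_style()
print(model.output_shape)  # (None, 7)
```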
- Webcam capture and frame processing use the OpenCV library.