Please take a look at the original repository and its accompanying publication.
Real-time demo:
$ git clone https://github.com/nicolasmetallo/emotion_and_gender_detection.git
Run the following to install the required modules:
$ pip install -r face_classification/REQUIREMENTS.txt
$ apt install ffmpeg
$ cd face_classification/src
The run_demo_on_video.py script requires four inputs:
- input_video = 'Path to input video'
- path_to_extract = 'Path for the frames extracted from the video'
- path_to_predict = 'Path for the predictions made by our model'
- output_video = 'Path to output video'
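A minimal sketch of how those four positional inputs might be read (an assumption for illustration, not the script's actual code):

```python
import sys

def parse_args(argv):
    """Return the four required positional inputs, in order.

    Hypothetical helper: run_demo_on_video.py may parse differently,
    but the argument order below matches the usage shown in this guide.
    """
    if len(argv) != 5:
        raise SystemExit(
            "usage: run_demo_on_video.py INPUT_VIDEO PATH_TO_EXTRACT "
            "PATH_TO_PREDICT OUTPUT_VIDEO"
        )
    _, input_video, path_to_extract, path_to_predict, output_video = argv
    return input_video, path_to_extract, path_to_predict, output_video

if __name__ == "__main__":
    print(parse_args(sys.argv))
```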
$ python run_demo_on_video.py {input_video} {path_to_extract} {path_to_predict} {output_video}
Example using the demo video included in the repo:
$ python run_demo_on_video.py emotion-gender-demo.mp4 \
/content/emotion_and_gender_detection/extracted_frames \
/content/emotion_and_gender_detection/predicted_frames \
output_video.mp4
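Under the hood, a pipeline like this typically (1) splits the input video into frames, (2) runs the emotion/gender model over each extracted frame, and (3) re-encodes the annotated frames into the output video. The sketch below shows steps (1) and (3) as ffmpeg command builders (ffmpeg was installed above); the function names and frame-naming pattern are assumptions for illustration, not the repository's actual internals:

```python
import subprocess
from pathlib import Path

FRAME_PATTERN = "frame_%05d.png"  # assumed naming scheme for extracted frames

def extract_cmd(input_video: str, path_to_extract: str) -> list[str]:
    """Build the ffmpeg command that dumps every frame as a numbered PNG."""
    return ["ffmpeg", "-i", input_video,
            str(Path(path_to_extract) / FRAME_PATTERN)]

def encode_cmd(path_to_predict: str, output_video: str, fps: int = 25) -> list[str]:
    """Build the ffmpeg command that re-assembles annotated frames into a video."""
    return ["ffmpeg", "-framerate", str(fps),
            "-i", str(Path(path_to_predict) / FRAME_PATTERN),
            "-pix_fmt", "yuv420p", output_video]

def run_pipeline(input_video, path_to_extract, path_to_predict, output_video):
    """Extract frames, (model inference on each frame would go here), re-encode."""
    Path(path_to_extract).mkdir(parents=True, exist_ok=True)
    subprocess.run(extract_cmd(input_video, path_to_extract), check=True)
    # ... per-frame emotion/gender prediction writes into path_to_predict ...
    subprocess.run(encode_cmd(path_to_predict, output_video), check=True)
```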