This repository contains the implementation of the project "On Face Recognition at Long Distance with Pose-Guided Angular Margin Loss", which proposes a new loss function called Pose-Guided Angular Margin Loss (PGAML) that combines both pose information and angular margin loss to enhance face recognition accuracy at long distances.
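PGAML builds on the ArcFace angular margin. As orientation, here is a minimal NumPy sketch of the standard (non-pose-guided) ArcFace logit computation; the pose-guided modulation proposed in the paper is not reproduced here:

```python
import numpy as np

def arcface_logits(embeddings, weights, labels, s=64.0, m=0.5):
    """Standard ArcFace logits: add an angular margin m (radians) to the
    angle between each feature and its ground-truth class center.

    embeddings: (N, d) L2-normalized features
    weights:    (C, d) L2-normalized class centers
    labels:     (N,)   ground-truth class indices
    """
    cos = np.clip(embeddings @ weights.T, -1.0, 1.0)  # cosine similarities (N, C)
    theta = np.arccos(cos)                            # angles in radians
    target = np.zeros_like(cos, dtype=bool)
    target[np.arange(len(labels)), labels] = True
    # The margin penalizes only the ground-truth class logit.
    return s * np.where(target, np.cos(theta + m), cos)
```

These logits then feed a standard softmax cross-entropy loss; PGAML additionally weights the margin with head-pose information.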
Create a virtual environment with Python 3.6 using conda:

```bash
conda create -n frald python=3.6
conda activate frald
```
Clone the repository:

```bash
git clone https://github.com/dustin-nguyen-qil/PoseGuided-ArcFace.git
```
To install the dependencies of the project, run:

```bash
pip install -r requirements.txt
```
```
|-- config            # project configuration
|-- data              # dataset preparation files
|-- model             # model architecture source code
|-- output            # evaluation results
|-- trainer           # training-loop source code
|-- utils             # face-pose extraction utilities
|-- evaluation.ipynb  # interactive evaluation notebook
|-- evaluate.py       # evaluation script
|-- train.py          # training script
```
In this project, we trained two models, the original ArcFace model and our proposed Pose-guided model, on the DroneFace dataset for 20 epochs with a batch size of 16. We use K-fold cross-training with `num_folds = 5`, yielding 10 pretrained models in total. Each pretrained model is named `Model_[type]_Fold[fold_id].pth`, where `type` is `Original` or `Pose-guided` and `fold_id` runs from 1 to 5.

In each fold, we use 8 IDs for training and the remaining 3 IDs for testing. After each fold, the test images are fed into the trained model to extract embeddings, which are then used for evaluation.
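A plain sketch of a 5-fold split over subject IDs follows. Note that the reported 8 train / 3 test identities per fold imply 11 IDs, which a vanilla 5-fold partition only approximates (test-fold sizes of 3, 2, 2, 2, 2), so the exact split logic in the repository may differ:

```python
import numpy as np

# Assumed: 11 subject identities (8 train + 3 test per fold in the paper).
subject_ids = np.arange(11)
rng = np.random.default_rng(0)
test_folds = np.array_split(rng.permutation(subject_ids), 5)

for fold, test_ids in enumerate(test_folds, start=1):
    train_ids = np.setdiff1d(subject_ids, test_ids)
    print(f"Fold {fold}: {len(train_ids)} train IDs, {len(test_ids)} test IDs")
```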
You can download the extracted test embeddings and trained models from here. Unzip the file and put `work_space` at the outermost level of the project folder:

```
|-- work_space
|   |-- emb   # extracted test embeddings
|   |-- save  # trained models
```
Run the following command, then check `output` for plots of the ROC curve and CMC curve results averaged over the 5 training folds:

```bash
python evaluate.py
```
To see the evaluation results of each training fold and interact with them, refer to `evaluation.ipynb`.
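A CMC curve can be computed from saved embeddings along these lines (the function and array names here are illustrative, not the repo's API; embeddings are assumed L2-normalized so the dot product is cosine similarity, and every probe identity is assumed to appear in the gallery):

```python
import numpy as np

def cmc(gallery, g_labels, probe, p_labels, max_rank=5):
    """Cumulative Match Characteristic: fraction of probes whose correct
    identity appears within the top-r gallery matches, for r = 1..max_rank."""
    sims = probe @ gallery.T              # (num_probe, num_gallery) cosine sims
    order = np.argsort(-sims, axis=1)     # best gallery match first
    matches = g_labels[order] == p_labels[:, None]
    first_hit = matches.argmax(axis=1)    # rank index of first correct match
    return np.array([(first_hit < r).mean() for r in range(1, max_rank + 1)])
```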
To run training, follow the steps below.

Download the DroneFace dataset and its JSON files containing the training and testing metadata from here: DroneFace dataset.

Unzip the file, then put `photos_all_faces` inside `data`:

```
|-- data
|   |-- photos_all_faces/
|   |-- data_pipe.py
```
To train the original ArcFace model, open `config/config.py` and set `conf.pose = False`, then execute

```bash
python train.py -b 16 -e 20
```

where `-b` is the batch size and `-e` is the number of epochs.
To train the proposed Pose-guided model, open `config/config.py` and set `conf.pose = True`, then execute

```bash
python train.py -b 16 -e 20
```

with the same `-b` (batch size) and `-e` (number of epochs) options.
Trained models for each fold and the extracted test embeddings are saved automatically in `work_space`.
This project is based on the following repository and ArcFace paper: