# Introduction

This repository contains all information, data, and source code used in my Master's Thesis.
# Prerequisites

- CUDA
- OpenCV
- Caffe
Check for a CUDA device on the computer. If any are present, a description of the devices will be shown.
- lspci | grep -i nvidia

Check the GCC installation.
- gcc --version

Download the CUDA version for your graphics card architecture and operating system.
Register the downloaded repository package with the system.
- sudo dpkg -i cuda-repo-.deb
Update the package lists.
- sudo apt-get update

Install CUDA.
- sudo apt-get install cuda

Update the environment variables.
- export PATH=/usr/local/cuda-7.0/bin:$PATH
- export LD_LIBRARY_PATH=/usr/local/cuda-7.0/lib64:$LD_LIBRARY_PATH

Test the installation.
- nvcc -V

(Optional) Install the examples.
- /usr/local/cuda-7.0/cuda-install-samples-7.0.sh /home
General dependencies:
- sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler
- sudo apt-get install --no-install-recommends libboost-all-dev

BLAS:
- sudo apt-get install libatlas-base-dev
OpenCV: covered by the libopencv-dev package listed under the general dependencies above.
Other dependencies (for Ubuntu 12.04 LTS):

glog:
- wget https://google-glog.googlecode.com/files/glog-0.3.3.tar.gz
- tar zxvf glog-0.3.3.tar.gz
- cd glog-0.3.3
- ./configure
- make && make install

gflags:
- wget https://github.com/schuhschuh/gflags/archive/master.zip
- unzip master.zip
- cd gflags-master
- mkdir build && cd build
- export CXXFLAGS="-fPIC" && cmake .. && make VERBOSE=1
- make && make install

LMDB:
- git clone https://github.com/LMDB/lmdb
- cd lmdb/libraries/liblmdb
- make && make install
# Get the Source

Download the caffe and data directories from this repository. After the download, both directories should be in the same parent directory.

Go to the caffe directory and compile:
- cmake .
- make all

Everything should compile without errors.
# Get the Data

Request your copy of the Cohn-Kanade database from its maintainers.

The database needs to be split into eight non-overlapping groups to perform the experiments correctly. To separate the data, extract the Cohn-Kanade data into the folders G1 to G8, placing the files in these folders according to the file label.txt.
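The split above can be scripted. The following is a minimal sketch that assumes each line of label.txt has the hypothetical form `<filename> <group number 1-8>`; adjust the field handling to match the real format of label.txt.

```shell
#!/bin/sh
# Sketch only: assumes label.txt lines look like "S005_001.png 3"
# (filename, then group number). Filenames must not contain spaces.
mkdir -p G1 G2 G3 G4 G5 G6 G7 G8
while read -r file group; do
    mv "$file" "G$group/"
done < label.txt
```

Run it from the directory where the Cohn-Kanade files were extracted.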
To replicate the experiments described in the dissertation, the synthetic samples need to be generated. To do this, use the generateData.cpp code stored in the tools folder.

To generate the synthetic data, run from the caffe root directory:
- make all
- ./tools/generateData

Open the file data/synthetic/solver.prototxt and change its first line: the path to the file train.prototxt should contain the absolute path to the file (the file is in the same folder as solver.prototxt).
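This edit can also be done from the command line. A sketch with GNU sed, assuming the first line of solver.prototxt has the usual Caffe form `net: "train.prototxt"`:

```shell
# Rewrite the net: line to an absolute path (GNU sed; assumes the
# current directory path contains no '|' characters).
cd data/synthetic
sed -i "s|^net:.*|net: \"$(pwd)/train.prototxt\"|" solver.prototxt
```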
# Run Training

The training source code is stored in the file trainDeepFace.cpp, inside the tools folder.

From the caffe root directory, run:
- make all
- ./tools/trainDeepFace
# Run Testing

Open the file trainDeepFace.cpp and change the method called in main to test(). Remember to comment out the line that calls the train() method.
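The switch can be scripted instead of edited by hand. A hypothetical sketch with GNU sed, assuming main contains the bare calls `train();` and a commented-out `// test();` on their own lines (verify against the actual source before relying on it):

```shell
# Comment out the train() call and uncomment the test() call.
# Assumes each call sits alone on its line in tools/trainDeepFace.cpp.
sed -i -e 's|^\([[:space:]]*\)train();|\1// train();|' \
       -e 's|^\([[:space:]]*\)// *test();|\1test();|' tools/trainDeepFace.cpp
```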
From the caffe root directory, run:
- make all
- ./tools/trainDeepFace
Evaluate: the files matching the pattern summary_GTIT0. are the best results, selected with the validation group. The .txt files contain the confusion matrices and the accuracy for both classifiers, the n-class and the binary. The .net files are the network weights that achieve the results shown in the text files.