
mobileface's Introduction

💥Big Bang💥

Receptive Field Is Natural Anchor

Receptive Field Is All You Need

2K real-time detection is so easy!


MobileFace

A face recognition solution for mobile devices.

MobileFaceV1

Prerequisites

  • Anaconda (optional but recommended)
  • MXNet and GluonCV (pip is the easiest way to install them; see the note below)
  • DLib (may be deprecated in the future)
    The easiest way to install DLib is through pip.
pip install dlib
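
MXNet and GluonCV can also typically be installed with pip (the package names below are the common PyPI ones; this is an assumption, so check the official installation guides for the build that matches your platform):

pip install mxnet gluoncv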

Performance

Identification

Model | Framework | Size | CPU | LFW | Target
MobileFace_Identification_V1 | MXNet | 3.40M | 8.5ms | - | Actual Scene
MobileFace_Identification_V2 | MXNet | 3.41M | 9ms | 99.653% | Benchmark
🌟MobileFace_Identification_V3 | MXNet | 2.10M | 💥3ms (SOTA) | 95.466% (baseline) | Benchmark

Detection

Model Framework Size CPU
MobileFace_Detection_V1 MXNet/GluonCV 30M 20ms/50fps

Landmark

Model Framework Size CPU
MobileFace_Landmark_V1 DLib 5.7M <1ms

Pose

Model Framework Size CPU
MobileFace_Pose_V1 free <1K <0.1ms

Align

Model Framework Size CPU
MobileFace_Align_V1 free <1K <0.1ms

Attribute

Model Framework Size CPU
MobileFace_Attribute_V1 MXNet/GluonCV 16.4M 14ms/71fps

Tracking

Model Framework Size CPU
MobileFace_Tracking_V1 free - <2ms

Example

To get a fast face feature embedding with MXNet, run the following:

cd example
python get_face_feature_v1_mxnet.py # v1, v2, v3
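
Once two embeddings have been extracted, they can be compared with cosine similarity (higher means more likely the same identity). A minimal illustrative sketch, not part of the repository:

import numpy as np

def cosine_similarity(f1, f2):
    # L2-normalize both embeddings, then take their dot product.
    f1 = f1 / np.linalg.norm(f1)
    f2 = f2 / np.linalg.norm(f2)
    return float(np.dot(f1, f2))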

To get a fast face detection result with MXNet/GluonCV, run the following:

cd example
python get_face_boxes_gluoncv.py

To get fast face landmark results with DLib, run the following:

cd example
python get_face_landmark_dlib.py

To get a fast face pose result, run the following:

cd example
python get_face_pose.py

To get a fast face alignment result, run the following:

cd example
python get_face_align.py

To get fast face attribute results, run the following:

cd example
python get_face_attribute_gluoncv.py

To get all MobileFace results at once, run the following:

cd example
python mobileface_allinone.py

To get a fast MobileFace tracking result, run the following:

cd example
python get_face_tracking_v1.py

To get a MobileFace makeup result, run the following:

cd example
python get_face_makeup_v1.py

MobileFaceMakeupV1

To get a MobileFace enhancement result, run the following:

cd example
python get_face_enhancement_v1.py

MobileFaceEnhanceV1

Visualization

t-SNE

I used the t-SNE algorithm to visualize the 256-dimensional embedding space in two dimensions. Every color corresponds to a different person (but colors are reused): as you can see, MobileFace has learned to group each person's pictures quite tightly. (The distances between clusters are meaningless when using the t-SNE algorithm.)
t-SNE
To reproduce the t-SNE feature visualization above, run the following:

cd tool/tSNE
python face2feature.py # extract features and labels and save them to a txt file
python tSNE_feature_visualization.py # load the txt file and visualize the face features in 2D with t-SNE
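
If you prefer to run the projection yourself, scikit-learn's TSNE can be applied to the saved features. An illustrative sketch, assuming the txt file stores one face per row with the feature values first and an integer label in the last column (the file name and layout here are assumptions):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

data = np.loadtxt('features_and_labels.txt')        # hypothetical file name
feats, labels = data[:, :-1], data[:, -1].astype(int)
emb2d = TSNE(n_components=2).fit_transform(feats)   # 256-D -> 2-D
plt.scatter(emb2d[:, 0], emb2d[:, 1], c=labels, cmap='tab20', s=5)
plt.show()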

ConfusionMatrix

I used a ConfusionMatrix to visualize the 256-dimensional feature similarity heatmap of LFW-Aligned-100Pair: as you can see, MobileFace has learned to produce higher similarity between two different photos of the same person. Although the performance of the V1 version is not particularly stunning on the LFW dataset, that does not mean it does not apply to real-world scenes.
ConfusionMatrix
To reproduce the ConfusionMatrix feature similarity heatmap above, run the following:

cd tool/ConfusionMatrix
python ConfusionMatrix_similarity_visualization.py
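
The heatmap itself boils down to a pairwise cosine-similarity matrix over the embeddings. An illustrative sketch (the file name is a placeholder for an array of shape (num_faces, 256)):

import numpy as np
import matplotlib.pyplot as plt

feats = np.load('lfw_pair_features.npy')                       # placeholder: (num_faces, 256) embeddings
feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)   # L2-normalize each row
similarity = feats @ feats.T                                   # (num_faces, num_faces) cosine similarities
plt.imshow(similarity, cmap='viridis')
plt.colorbar()
plt.show()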

Tool

Time

To measure the inference time of the different MXNet model versions, run the following:

cd tool/time
python inference_time_evaluation_mxnet.py --symbol_version=V3 # default = V1
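
When timing MXNet models, keep in mind that operators execute asynchronously, so the timer must only be stopped after forcing synchronization. An illustrative sketch (model and batch are placeholders, not names from this repository):

import time
import mxnet as mx

tic = time.time()
output = model(batch)   # enqueues the computation and returns almost immediately
mx.nd.waitall()         # block until all pending computation has finished
print('Inference time: %.2f ms' % ((time.time() - tic) * 1000))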

Model_Prune

Prune the MXNet model by deleting unneeded layers (such as the classification and loss layers) and retaining only the feature layers, which reduces the model size for inference:

cd tool/prune
python model_prune_mxnet.py
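
The idea is roughly: load the trained checkpoint, slice out the feature sub-graph, drop parameters the sub-graph no longer needs, and save a smaller checkpoint. An illustrative sketch (the checkpoint prefix and the feature-layer name are assumptions, not taken from the repository):

import mxnet as mx

sym, arg_params, aux_params = mx.model.load_checkpoint('model_prefix', 0)
feat_sym = sym.get_internals()['fc5_output']   # hypothetical feature-layer output
needed = set(feat_sym.list_arguments() + feat_sym.list_auxiliary_states())
arg_params = {k: v for k, v in arg_params.items() if k in needed}
aux_params = {k: v for k, v in aux_params.items() if k in needed}
mx.model.save_checkpoint('model_prefix_pruned', 0, feat_sym, arg_params, aux_params)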

MXNet2Caffe

Merge_bn

Benchmark

LFW

The LFW test dataset (aligned by MTCNN and cropped to 112x112) can be downloaded from Dropbox or BaiduDrive; place it (named lfw.bin) in the data/LFW-bin directory.
To get the LFW comparison result and plot the ROC curve, run the following:

cd benchmark/LFW
python lfw_comparison_and_plot_roc.py
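
The ROC curve can also be plotted directly from pair similarities and ground-truth labels with scikit-learn. An illustrative sketch, where the two input files are placeholders for precomputed pair scores and labels (1 for same-person pairs, 0 otherwise):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

scores = np.loadtxt('pair_scores.txt')                 # placeholder: one similarity per pair
labels = np.loadtxt('pair_labels.txt').astype(int)     # placeholder: 1 = same person, 0 = different
fpr, tpr, _ = roc_curve(labels, scores)
plt.plot(fpr, tpr, label='AUC = %.4f' % auc(fpr, tpr))
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()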

LFW ROC

MegaFace

TODO

  • MobileFace_Identification
  • MobileFace_Detection
  • MobileFace_Landmark
  • MobileFace_Align
  • MobileFace_Attribute
  • MobileFace_Pose
  • MobileFace_Tracking
  • MobileFace_Makeup
  • MobileFace_Enhancement
  • MobileFace_FacePortrait
  • MobileFace_FaceSwap
  • MobileFace_MakeupSwap
  • MobileFace_NCNN
  • MobileFace_FeatherCNN
  • Benchmark_LFW
  • Benchmark_MegaFace

Others

Coming Soon!

FacePortrait

MakeupSwap

FaceSwap

Reference

mobileface's People

Contributors

becauseofai

mobileface's Issues

How to use extracted features

Thanks for your great work!
I want to use your work to build a face recognition project, so I combined the code of get_face_align.py and get_face_feature_v3_mxnet.py into one workflow. I can extract face features, and I then tried dist = np.sum(np.square(f1-f2)) and cos = cosine(f1, f2) to compare similarity, but the accuracy is pretty low. Could you give me an example of how to use the features?
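
A minimal illustrative sketch of one common approach (L2-normalizing embeddings before comparing them, which makes cosine similarity and squared Euclidean distance consistent), offered as a suggestion rather than the maintainer's answer; f1 and f2 are assumed to be the extracted feature vectors:

import numpy as np

f1 = f1 / np.linalg.norm(f1)
f2 = f2 / np.linalg.norm(f2)
cos = float(np.dot(f1, f2))               # higher = more similar
dist = float(np.sum(np.square(f1 - f2)))  # equals 2 - 2*cos for unit vectors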

error??

How can I use cv2 to read an image as the input of the function?

port face detection to TVM

I attempted to port your trained model to TVM, which is a good framework for speeding up inference on ARM devices (like the RK3399). Unfortunately, some operators in your model do not seem to be supported in TVM.

"infer_range" and "_arange" are not supported in TVM.
NotImplementedError: Operator: _arange is not supported in nnvm.

Can these two operators be replaced by other operators?

Thanks.

The speed of face tracking

When I run get_face_tracking.py, I found that the performance isn't as good as the GIF in the README suggests. This is mainly because the following line of code is the most time-consuming part. Even though face detection only takes 20ms/frame and tracking only takes 1ms/frame on CPU, the following line takes almost 200ms, so it's not real-time. Is there any way to improve this? Thank you!
ids, scores, bboxes = [xx[0].asnumpy() for xx in result]

Inference time for face detection is misleading

Hello, the claimed 20ms inference time is incorrect.
It takes 20ms to call the engine, but the computation runs asynchronously.
You should run the example code but place the toc after the results are cast into a numpy array.

an error happened when import dlib and MxNet

When I run ./example/get_face_align.py, an error happens. The error message is:

Segmentation fault: 11

Stack trace returned 10 entries:
[bt] (0) /root/anaconda3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x40123a) [0x7f8a3cba723a]
[bt] (1) /root/anaconda3/lib/python3.6/site-packages/mxnet/libmxnet.so(+0x3413b76) [0x7f8a3fbb9b76]
[bt] (2) /lib64/libc.so.6(+0x36280) [0x7f8b08f7b280]
[bt] (3) /root/anaconda3/lib/python3.6/site-packages/onnx/onnx_cpp2py_export.cpython-36m-x86_64-linux-gnu.so(+0x2e69e) [0x7f8a1d80669e]
[bt] (4) /root/anaconda3/lib/python3.6/site-packages/onnx/onnx_cpp2py_export.cpython-36m-x86_64-linux-gnu.so(+0x38739) [0x7f8a1d810739]
[bt] (5) /root/anaconda3/lib/python3.6/site-packages/onnx/onnx_cpp2py_export.cpython-36m-x86_64-linux-gnu.so(+0x1ce83) [0x7f8a1d7f4e83]
[bt] (6) /root/anaconda3/lib/python3.6/site-packages/onnx/onnx_cpp2py_export.cpython-36m-x86_64-linux-gnu.so(PyInit_onnx_cpp2py_export+0xd6) [0x7f8a1d7f8946]
[bt] (7) python(_PyImport_LoadDynamicModuleWithSpec+0x185) [0x55583797ba45]
[bt] (8) python(+0x215c45) [0x55583797bc45]
[bt] (9) python(PyCFunction_Call+0x131) [0x555837878051]

It happens when importing mxnet and dlib together; I found that mxnet and dlib cannot both be imported correctly in one Python file, although importing mxnet or dlib alone succeeds. Could you tell me the versions of mxnet and dlib installed in your environment?
Thank you!

Mxnet error when running in parallel

This happens while getting embeddings in face_embeds = face_feature_extractor.get_face_feature_batch(np.array(all_aligned_img)):

c:\jenkins\workspace\mxnet-tag\mxnet\src\operator\tensor../elemwise_op_common.h:135: Check failed: assign(&dattr, vec.at(i)): Incompatible attr in node at 0-th output: expected [18,3,112,112], got [24,3,112,112]

What am I doing wrong?
This error is not consistently reproducible (it happens at different times), so I think it is caused by mxnet using the same memory from different threads.

What algorithm was used for alignment and cropping

I have only managed to get MobileFace_Identification_V1 working, as get_face_feature_mxnet.py doesn't work for V2.

I have modified get_face_feature_mxnet.py for V2 inputs (3 channels vs. 1), but I am not getting any sensible results. When comparing cosine similarities of different faces, they come out as similar.

I have noticed that the aligned data image pairs are closely cropped, and I was wondering if you could share the algorithm/code for how you created the training data, so I can reproduce the cropping before feeding the prediction.

Many Thanks,

Simon

Why is the GPU no faster than the CPU in get_face_boxes?

I tried to run the GPU model like this:

python get_face_boxes_gluoncv.py --gpus 0

I get this:

[19:46:58] src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:97: Running performance tests to find the best convolution algorithm, this can take a while... (set the environment variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable)
Inference time:16.246319ms


My GPU is an RTX 2060. When I use the default CPU (an Intel i5), I get this:

 python get_face_boxes_gluoncv.py 
Inference time:18.076420ms


So why is there no difference between them?

train codes?

Hi, thanks for your work. Do you plan to share the training code?

face size for feature extraction

Insightface uses face size (112,112) for feature extraction.
In your example get_face_feature_mxnet.py, face size (100,100) is used.
Should we use (100,100) for feature extraction?

Thanks,

Is the image passed to calcHist on line 36 of mobileface_enhancement.py a bug, or is it intentional?

In the calcHist call on line 36 of mobileface_enhancement.py, hist = cv2.calcHist(img_gray, [0], None, [256], [0, 256]), img_gray is not wrapped in square brackets, so this computes the histogram of the first dimension of img_gray rather than of the whole image. The whole-image histogram should be hist = cv2.calcHist([img_gray], [0], None, [256], [0, 256]). Is this a mistake, or is the algorithm meant to compute the histogram of the first dimension only? Thanks!

path issue

Hi,

I have below error:

File "/Users/xxx/Projects/MobileFace/example/get_face_feature_mxnet.py", line 25, in __init__ network = get_feature_symbol_mobilefacev1() NameError: name 'get_feature_symbol_mobilefacev1' is not defined

sys.path.append didn't work, I assume...

How can I solve this?

Training framework & DataSet

@becauseofAI ,
I went through your code and the results of the model are very impressive. We would like to ask if you have the training framework as well as the dataset, as we would like to integrate this, test the model after training, and check its accuracy and inference time.

Regards
Amit

mxnet2caffe2?

Looking forward to the mxnet2caffe part.
(1) Is it mxnet2caffe2, i.e. mxnet ---> onnx ---> caffe2?
(2) Is there an Android demo?
(3) Does the CPU performance in the README refer to a phone?
Thanks for your work.

feature extraction (using V2 and V3)

---> 2 face_feature_extractor = MobileFaceFeatureExtractor(model_file, epoch, batch_size, context, gpu_id)

in init(self, model_file, epoch, batch_size, context, gpu_id)
10 self.model.bind(for_training = False, data_shapes=[('data', (self.batch_size, 1, 100, 100))])
11 sym, arg_params, aux_params = mxnet.model.load_checkpoint(self.model_file, self.epoch)
---> 12 self.model.set_params(arg_params, aux_params)
13
14

D:\UserInfo\DownDir\Conda\envs\tensor\lib\site-packages\mxnet\module\module.py in set_params(self, arg_params, aux_params, allow_missing, force_init, allow_extra)
348 self.init_params(initializer=None, arg_params=arg_params, aux_params=aux_params,
349 allow_missing=allow_missing, force_init=force_init,
--> 350 allow_extra=allow_extra)
351 return
352

D:\UserInfo\DownDir\Conda\envs\tensor\lib\site-packages\mxnet\module\module.py in init_params(self, initializer, arg_params, aux_params, allow_missing, force_init, allow_extra)
307 for name, arr in sorted(self._arg_params.items()):
308 desc = InitDesc(name, attrs.get(name, None))
--> 309 _impl(desc, arr, arg_params)
310
311 for name, arr in sorted(self._aux_params.items()):

D:\UserInfo\DownDir\Conda\envs\tensor\lib\site-packages\mxnet\module\module.py in _impl(name, arr, cache)
295 # just in case the cached array is just the target itself
296 if cache_arr is not arr:
--> 297 cache_arr.copyto(arr)
298 else:
299 if not allow_missing:

D:\UserInfo\DownDir\Conda\envs\tensor\lib\site-packages\mxnet\ndarray\ndarray.py in copyto(self, other)
2072 warnings.warn('You are attempting to copy an array to itself', RuntimeWarning)
2073 return False
-> 2074 return _internal._copyto(self, out=other)
2075 elif isinstance(other, Context):
2076 hret = NDArray(_new_alloc_handle(self.shape, other, True, self.dtype))

D:\UserInfo\DownDir\Conda\envs\tensor\lib\site-packages\mxnet\ndarray\register.py in _copyto(data, out, name, **kwargs)

D:\UserInfo\DownDir\Conda\envs\tensor\lib\site-packages\mxnet_ctypes\ndarray.py in _imperative_invoke(handle, ndargs, keys, vals, out)
90 c_str_array(keys),
91 c_str_array([str(s) for s in vals]),
---> 92 ctypes.byref(out_stypes)))
93
94 if original_output is not None:

D:\UserInfo\DownDir\Conda\envs\tensor\lib\site-packages\mxnet\base.py in check_call(ret)
250 """
251 if ret != 0:
--> 252 raise MXNetError(py_str(_LIB.MXGetLastError()))
253
254

MXNetError: [21:47:41] c:\jenkins\workspace\mxnet-tag\mxnet\src\operator\tensor../elemwise_op_common.h:135: Check failed: assign(&dattr, vec.at(i)) Incompatible attr in node at 0-th output: expected [32,3,3,3], got [32,1,3,3]

An error when run get_face_feature_mxnet.py

Got the following error when running "python3 ./get_face_feature_mxnet.py":

/usr/local/lib/python3.5/dist-packages/mxnet-1.5.0-py3.5.egg/mxnet/module/base_module.py:55: UserWarning: You created Module with Module(..., label_names=['softmax_label']) but input with name 'softmax_label' is not found in symbol.list_arguments(). Did you mean one of:
data
warnings.warn(msg)
/usr/local/lib/python3.5/dist-packages/mxnet-1.5.0-py3.5.egg/mxnet/module/base_module.py:67: UserWarning: Data provided by label_shapes don't match names specified by label_names ([] vs. ['softmax_label'])
warnings.warn(msg)
[04:55:44] src/nnvm/legacy_json_util.cc:209: Loading symbol saved by previous version v-209911.-80.-80. Attempting to upgrade...
[04:55:44] src/nnvm/legacy_json_util.cc:217: Symbol successfully upgraded!
Traceback (most recent call last):
File "./get_face_feature_mxnet.py", line 50, in
face_feature_extractor = MobileFaceFeatureExtractor(model_file, epoch, batch_size, context, gpu_id)
File "./get_face_feature_mxnet.py", line 29, in init
self.model.set_params(arg_params, aux_params)
File "/usr/local/lib/python3.5/dist-packages/mxnet-1.5.0-py3.5.egg/mxnet/module/module.py", line 350, in set_params
allow_extra=allow_extra)
File "/usr/local/lib/python3.5/dist-packages/mxnet-1.5.0-py3.5.egg/mxnet/module/module.py", line 309, in init_params
_impl(desc, arr, arg_params)
File "/usr/local/lib/python3.5/dist-packages/mxnet-1.5.0-py3.5.egg/mxnet/module/module.py", line 300, in _impl
raise RuntimeError("%s is not presented" % name)
RuntimeError: fc5_bias is not presented
