
deep-convolutional-network-cascade-for-facial-point-detection's Introduction

Hi 👋, I'm Yuzhi @zhaoyuzhi

Researcher in Computer Vision | Ph.D., City University of Hong Kong

✨ Quick Facts

deep-convolutional-network-cascade-for-facial-point-detection's People

Contributors

zhaoyuzhi


deep-convolutional-network-cascade-for-facial-point-detection's Issues

Overview of training and testing

I read the paper, but the author does not elaborate much on how the training is done or how the final predictions are made once training is finished. After reading it, I am still not sure I really understand it. Here is my understanding.

The training of level 1 is straightforward, so I will skip it.
The training of levels 2 and 3 confuses me a lot. Since they are similar, let's focus on LE21. The training of network LE21 first generates many small patches centered around the ground truth in each picture. For each small patch, calculate the displacement vector from the center of that patch to the ground truth, and use this translation as the label. That yields many training samples. Using this training data, we can train a neural network that regresses a displacement vector.
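
To make the patch-sampling step described above concrete, here is a minimal sketch in Python/NumPy. It assumes a single cropped face image and one ground-truth landmark; the function name, patch size, and shift range are hypothetical illustrations, not taken from the repository.

```python
import numpy as np

def make_le21_samples(face_img, gt_point, patch_size=15, max_shift=0.05,
                      n_samples=20, rng=None):
    """Hypothetical sketch: sample patches around one ground-truth landmark
    and use the offset from patch center to ground truth as the label.

    face_img  : HxW grayscale face crop
    gt_point  : (x, y) ground-truth landmark in pixel coordinates
    max_shift : maximum random shift, as a fraction of the face width/height
    """
    rng = rng or np.random.default_rng()
    h, w = face_img.shape[:2]
    half = patch_size // 2
    patches, labels = [], []
    for _ in range(n_samples):
        # randomly perturb the patch center around the ground truth
        cx = int(round(gt_point[0] + rng.uniform(-max_shift, max_shift) * w))
        cy = int(round(gt_point[1] + rng.uniform(-max_shift, max_shift) * h))
        patch = face_img[cy - half:cy + half + 1, cx - half:cx + half + 1]
        if patch.shape[:2] != (patch_size, patch_size):
            continue  # skip patches that fall outside the image
        # label = displacement from the patch center back to the ground truth,
        # normalized by the patch size
        dx = (gt_point[0] - cx) / patch_size
        dy = (gt_point[1] - cy) / patch_size
        patches.append(patch)
        labels.append((dx, dy))
    return np.asarray(patches), np.asarray(labels)
```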

The level-3 networks are trained in the same way; the only difference is the patch size.

In brief, the training of the CNNs at different levels is completely independent; their training data all come from the raw data. In fact, although level 2 logically comes after level 1, it is perfectly fine to train the level-2 CNNs before the level-1 CNNs.

When all training is done, predictions are made by the following procedure.
First, run the image through level 1 and average its outputs into an initial prediction: the nose point is averaged over three CNNs, while the other points are averaged over two.
Taking the level-1 output as input, level 2 generates small patches centered on the level-1 predictions and predicts displacement vectors; for each landmark, the final displacement vector is the average of two networks. Level 2 thus produces its own version of the 5 landmarks, just as level 1 does, and feeds it to level 3, which makes similar predictions and outputs the final positions. In conclusion, a temporary landmark position proposal must be made and fed to the next level before the final proposal can be decided.
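
Here is a hedged sketch of the three-level inference procedure just described, in Python/NumPy. The network containers, the `crop_patch` helper, and the landmark names are assumptions made for illustration and do not correspond to identifiers in the repository.

```python
import numpy as np

LANDMARKS = ["left_eye", "right_eye", "nose", "left_mouth", "right_mouth"]

def cascade_predict(face_img, level1_nets, level2_nets, level3_nets, crop_patch):
    """Hypothetical sketch of the three-level cascade inference.

    level1_nets : the three level-1 CNNs (F1, EN1, NM1); each is assumed to
                  return a dict of absolute (x, y) positions for the
                  landmarks it covers
    level2_nets / level3_nets : dicts mapping landmark -> list of two CNNs,
                  each returning a (dx, dy) displacement for a local patch
    crop_patch  : assumed helper that crops a patch centered at a point
    """
    # Level 1: collect and average absolute predictions; the nose gets three
    # votes and every other landmark two, depending on network coverage.
    votes = {name: [] for name in LANDMARKS}
    for net in level1_nets:
        for name, point in net(face_img).items():
            votes[name].append(point)
    preds = {name: np.mean(pts, axis=0) for name, pts in votes.items()}

    # Levels 2 and 3: crop a patch around the previous level's estimate,
    # predict displacements with two CNNs per landmark, and average them.
    for nets in (level2_nets, level3_nets):
        for name, point in list(preds.items()):
            patch = crop_patch(face_img, point)
            deltas = [net(patch) for net in nets[name]]
            preds[name] = point + np.mean(deltas, axis=0)
    return preds
```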

Am I right? If there is any misunderstanding, I hope you can point it out.

How to build the model

The model has three stages. From reading your code, it looks like the stages are trained one at a time, is that right?

code issue

I found that you deleted some .py files. Could you share the level-2 .py files with me? Thanks.
