I read the paper, but I found that the author does not elaborate much on how the training is done or how the final predictions are made once training is finished. After reading it, I am still not quite sure whether I really understand it. Here is my understanding.
The training of level 1 is straightforward, so I will skip it.
The training of levels 2 and 3 confuses me a lot. Since they are similar, let's just focus on LE21. Training the network LE21 first generates, for each picture, many small patches centered at points near the ground truth. For each small patch, calculate the displacement vector from the center of the patch to the ground truth, and use this displacement as the label. That yields many training samples. With this training data, we can train a neural network that regresses a displacement vector.
The level 3 networks are trained the same way; the only difference is the patch size.
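To make sure I have the training-data generation right, here is a minimal sketch of how I imagine it. The patch size, number of samples, and maximum shift are illustrative values I picked, not the paper's exact settings:

```python
import numpy as np

def make_level2_samples(image, gt_xy, patch_size=15, n_samples=20,
                        max_shift=3, rng=None):
    """Generate (patch, displacement) training pairs for one landmark.

    gt_xy is the ground-truth landmark as (x, y). Each sample is a patch
    whose center is randomly shifted from the ground truth; the label is
    the displacement vector pointing from the patch center back to it.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    half = patch_size // 2
    patches, labels = [], []
    for _ in range(n_samples):
        # Random patch center near the ground-truth landmark.
        shift = rng.integers(-max_shift, max_shift + 1, size=2)
        cx, cy = gt_xy + shift
        # Crop a square patch around (cx, cy); rows index y, columns x.
        patch = image[cy - half:cy + half + 1, cx - half:cx + half + 1]
        patches.append(patch)
        # Label: displacement from the patch center to the ground truth.
        labels.append(gt_xy - np.array([cx, cy]))
    return np.stack(patches), np.stack(labels)
```

The regressor would then be trained on `(patch, label)` pairs, so at test time it can map a patch cropped around a rough estimate to a correction vector.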
In brief, the training of the CNNs at the different levels is completely independent; their training data all come from the raw data. In fact, although level 2 comes after level 1 logically, it would be perfectly fine to train the level 2 CNNs before the level 1 CNNs.
When all the training is done, predictions are made by the following procedure.
First, run the image through level 1, which outputs a prediction for each of the five landmarks. The nose point is averaged over three CNNs, while the other points are averaged over only two.
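My understanding of the level 1 averaging, as a sketch. I am assuming the landmark order and the coverage of the three networks (the whole-face network predicting all five points, one network covering eyes plus nose, one covering nose plus mouth corners); please correct me if the overlap is different:

```python
import numpy as np

# Assumed landmark order: [left eye, right eye, nose, left mouth, right mouth].
# f1 covers all five points, en1 covers (left eye, right eye, nose),
# nm1 covers (nose, left mouth, right mouth) -- my reading of the paper.
def average_level1(f1, en1, nm1):
    """Average the overlapping level-1 predictions per landmark."""
    out = np.zeros((5, 2))
    out[0] = (f1[0] + en1[0]) / 2           # left eye:  2 networks
    out[1] = (f1[1] + en1[1]) / 2           # right eye: 2 networks
    out[2] = (f1[2] + en1[2] + nm1[0]) / 3  # nose:      all 3 networks
    out[3] = (f1[3] + nm1[1]) / 2           # left mouth:  2 networks
    out[4] = (f1[4] + nm1[2]) / 2           # right mouth: 2 networks
    return out
```

This is the only landmark-dependent part I see: the nose lies in all three networks' regions, so it alone is averaged over three predictions.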
Taking the level 1 outputs as input, level 2 crops small patches centered at the level 1 predictions and predicts a displacement vector for each. For each landmark, the final displacement vector is the average of two networks' outputs. Level 2 then produces its own version of the 5 landmark positions, just as level 1 does, and feeds them to level 3. Level 3 makes similar predictions and outputs the final positions. In short, at each level a temporary landmark proposal must be made and fed to the next level before the final positions are decided.
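So the per-landmark refinement step at levels 2 and 3, as I understand it, would look like this sketch. `predict_fns` and `crop_fn` are hypothetical callables standing in for the two trained CNNs and the patch-cropping step; they are not names from the paper:

```python
import numpy as np

def refine(position, predict_fns, crop_fn, image):
    """Refine one landmark estimate by one cascade level.

    Each network in predict_fns sees a patch cropped around the current
    estimate and outputs a displacement vector; the estimate is moved by
    the average of those displacements.
    """
    disps = [fn(crop_fn(image, position)) for fn in predict_fns]
    return position + np.mean(disps, axis=0)
```

Chaining this twice (level 2 with larger patches, level 3 with smaller ones) on top of the level 1 output would give the final positions.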
Am I right? If there is any misunderstanding, I hope you can point it out.