Lighting estimation is a difficult task. Many optimization-based methods estimate lighting using Spherical Harmonics (SH).
Spherical harmonics cannot model all types of shading.
Unfortunately, the lack of ground-truth lighting for real images makes the task even harder. We therefore use adversarial networks to bridge the gap between synthetic and real images.
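For context, second-order SH represents lighting with just 9 coefficients per color channel, and Lambertian shading at a surface normal is a dot product of the SH basis with those coefficients. Below is a minimal numpy sketch using the standard real SH basis constants; this is illustrative and not code from this repo:

```python
import numpy as np

def sh_basis(normal):
    """Evaluate the 9 second-order real SH basis functions at a unit normal."""
    x, y, z = normal
    return np.array([
        0.282095,                       # Y_00 (ambient)
        0.488603 * y,                   # Y_1-1
        0.488603 * z,                   # Y_10
        0.488603 * x,                   # Y_11
        1.092548 * x * y,               # Y_2-2
        1.092548 * y * z,               # Y_2-1
        0.315392 * (3.0 * z * z - 1.0), # Y_20
        1.092548 * x * z,               # Y_21
        0.546274 * (x * x - y * y),     # Y_22
    ])

def sh_shading(normal, coeffs):
    """Lambertian shading: dot product of the SH basis with 9 lighting coefficients."""
    return float(sh_basis(normal) @ coeffs)
```

Because only 9 low-frequency basis functions are used, hard shadows and other high-frequency effects cannot be represented, which is the SH limitation mentioned above.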
We have:
Implemented LDAN, which uses Generative Adversarial Networks for domain adaptation
Implemented a denoising autoencoder to compare against the adversarial method for lighting estimation
AutoLighting/resNet_CelebA.py - Experiment 4 with CelebA
AutoLighting/resNet_sfsNet.py - Experiment 5 with SfSNet dataset
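In LDAN-style domain adaptation, a discriminator learns to tell synthetic features from real-image features while the real-image feature net learns to fool it, so both map into a shared feature space. The sketch below shows the two adversarial objectives in numpy (standard GAN cross-entropy losses; the function names and shapes are illustrative assumptions, not this repo's API):

```python
import numpy as np

def discriminator_loss(d_synth, d_real):
    """D is trained to output 1 for synthetic features and 0 for real-image features."""
    eps = 1e-8  # avoid log(0)
    return float(-np.mean(np.log(d_synth + eps)) - np.mean(np.log(1.0 - d_real + eps)))

def generator_loss(d_real):
    """The real-image feature net is trained to make D label its features synthetic."""
    eps = 1e-8
    return float(-np.mean(np.log(d_real + eps)))
```

During training these are minimized alternately: one step on `discriminator_loss` for the discriminator, one step on `generator_loss` (plus the lighting regression loss) for the real-image feature net, while the synthetic-trained lighting net stays fixed.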
Experiments performed
Estimating lighting of CelebA
Trained with LDAN Synthetic dataset
Real Image dataset: CelebA
SIRFS SH for training the GAN: the SIRFS method is used to generate SH for the CelebA dataset
Ground truth for training the feature net and lighting net is provided in the LDAN dataset
Conclusion: the synthetic image space was adapted by the network
Estimating lighting of SfSNet dataset with SIRFS SH for GAN training
This is to verify estimated shading with ground truth shading
Training on LDAN synthetic dataset
SIRFS SH for training the GAN: the SIRFS method is used to generate SH for the SfSNet dataset
Ground truth Normal, Shading for SfSNet dataset provided
The MSE with respect to SIRFS SH decreases while the MSE with respect to ground-truth SH increases, verifying that the SIRFS domain is being adapted
Conclusion: although the results are not better than SIRFS-estimated shading, we can see the domain being adapted
Estimating lighting of SfSNet dataset with ground truth SH for GAN training
This is to verify estimated shading with ground truth shading
Training on LDAN synthetic dataset
Used ground truth SH for training GAN
Ground truth Normal, Shading for SfSNet dataset provided
The MSE with respect to SIRFS SH increases while the MSE with respect to ground-truth SH decreases, i.e. the opposite of Experiment 2
Conclusion: although the results are not better than SIRFS-estimated shading, we can see the domain being adapted
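The verification used in the two SfSNet experiments boils down to tracking the prediction error against two references (SIRFS SH and ground-truth SH) over training. A minimal numpy helper for that, with illustrative variable names:

```python
import numpy as np

def sh_mse(pred, ref):
    """Mean squared error between batches of 9-dim SH coefficient vectors."""
    pred, ref = np.asarray(pred), np.asarray(ref)
    return float(np.mean((pred - ref) ** 2))

# Illustrative per-epoch tracking (pred, sirfs_sh, gt_sh are assumed arrays):
# history_sirfs.append(sh_mse(pred, sirfs_sh))
# history_gt.append(sh_mse(pred, gt_sh))
```

Plotting the two histories shows which reference the network is drifting toward, which is how the opposite trends in Experiments 2 and 3 were observed.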
AutoLighting
This was to compare domain adaptation against a denoising autoencoder
Steps -
Generate noisy SH using the SIRFS method on SfSNet data
Train a denoising autoencoder to denoise the SH
Use the trained denoising autoencoder to remove noise from the SH of real images
Use the feature net and lighting net to estimate lighting on real images
This approach does not outperform domain adaptation
Conclusion - Synthetic and real image spaces are different, and the adversarial approach performs well at understanding and estimating lighting for real images when trained on synthetic images.
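The denoising steps above can be sketched end to end with a toy single-hidden-layer autoencoder. The data here is a random stand-in for noisy SIRFS-style SH vectors, and the sizes, noise level, and learning rate are illustrative assumptions, not the repo's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: clean 9-dim SH vectors and noisy (SIRFS-like) versions of them.
clean = rng.normal(size=(256, 9))
noisy = clean + 0.1 * rng.normal(size=clean.shape)

# Single-hidden-layer denoising autoencoder trained by full-batch gradient descent.
W1 = rng.normal(scale=0.1, size=(9, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 9)); b2 = np.zeros(9)
lr = 0.05

for _ in range(500):
    h = np.tanh(noisy @ W1 + b1)            # encoder
    out = h @ W2 + b2                       # decoder reconstructs the clean SH
    err = out - clean                       # gradient of 0.5 * MSE
    gW2 = h.T @ err / len(clean); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)      # backprop through tanh
    gW1 = noisy.T @ dh / len(clean); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def denoise(sh):
    """Map a (batch of) noisy SH vectors to denoised estimates."""
    return np.tanh(sh @ W1 + b1) @ W2 + b2
```

The denoised SH would then feed the feature net and lighting net as in step 4. Unlike the adversarial approach, this treats the synthetic-to-real gap as additive noise on the SH coefficients, which is a weaker model of the domain shift and matches the conclusion above.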
Results
Lighting Estimation for CelebA dataset - trained with LDAN Synthetic dataset
Shading by SIRFS
Shading by LDAN (ours)
Shading with Domain Adaptation
Shading without Domain Adaptation
Lighting Estimation for SfSNet dataset - trained with LDAN Synthetic dataset
Shading by SIRFS
Shading by LDAN (ours)
Shading with Domain Adaptation
Shading without Domain Adaptation
Lighting Estimation for SfSNet dataset - trained with SfSNet Synthetic dataset
Shading by SIRFS
Shading by LDAN (ours)
Shading with Domain Adaptation
Shading without Domain Adaptation
Lighting Estimation by AutoLighting (Denoising AutoEncoder) for CelebA dataset - trained with SfSNet Synthetic dataset