
A lighting robust fitting approach of 3D morphable model for face reconstruction

  • Original Article
  • Published in: The Visual Computer

Abstract

The three-dimensional morphable model (3DMM) is a powerful tool for recovering 3D shape and texture from a single facial image. The success of 3DMM fitting relies on two components: an effective optimization strategy and a realistic approach to synthesizing face images. However, most previous methods have focused on developing an optimization strategy under Phong’s synthesis approach. In this paper, we adopt a more realistic synthesis technique that fully considers illumination and reflectance in the 3DMM fitting process. Using the spherical harmonic illumination model (SHIM), our new synthesis approach accounts for more lighting factors than Phong’s model. Spatially varying specular reflectance is also introduced into the synthesis process. Under SHIM, the cost function is nearly linear in all parameters, which simplifies the optimization. We apply our new optimization algorithm to determine the shape and texture parameters simultaneously. The accuracy of the recovered shape and texture is improved significantly by considering the spatially varying specular reflectance. Hence, our algorithm produces an enhanced shape and texture compared with previous SHIM-based methods that recover shape from feature points. Although we use just a single input image in a profile pose, our approach gives plausible results. Experiments on a well-known image database show that, compared with state-of-the-art methods based on Phong’s model, the proposed approach enhances the robustness of 3DMM fitting under extreme lighting and profile poses.
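For readers unfamiliar with spherical-harmonic shading, the minimal Python sketch below illustrates the standard second-order SH irradiance formula of Ramamoorthi and Hanrahan, the kind of diffuse synthesis that SHIM-based fitting builds on; it corresponds to the diffuse term \(E_d = t^c(v)N(v){M^c}{N^T}(v)\) used in the appendix. It is an illustration only, not the authors' implementation; the lighting coefficients, function names, and example values are assumed.

import numpy as np

# Constants of the second-order spherical harmonic (SH) irradiance formula
# (Ramamoorthi and Hanrahan, SIGGRAPH 2001).
C1, C2, C3, C4, C5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708

def irradiance_matrix(L):
    # Build the 4x4 matrix M from 9 SH lighting coefficients
    # L = [L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22].
    L00, L1m1, L10, L11, L2m2, L2m1, L20, L21, L22 = L
    return np.array([
        [C1 * L22,  C1 * L2m2, C1 * L21,  C2 * L11],
        [C1 * L2m2, -C1 * L22, C1 * L2m1, C2 * L1m1],
        [C1 * L21,  C1 * L2m1, C3 * L20,  C2 * L10],
        [C2 * L11,  C2 * L1m1, C2 * L10,  C4 * L00 - C5 * L20],
    ])

def diffuse_shading(albedo, normals, L):
    # Per-vertex diffuse term: albedo * (n~^T M n~), with n~ = (nx, ny, nz, 1).
    M = irradiance_matrix(L)
    n = np.concatenate([normals, np.ones((normals.shape[0], 1))], axis=1)
    E = np.einsum('vi,ij,vj->v', n, M, n)  # quadratic form per vertex
    return albedo * E

# Example: three unit normals under a mostly ambient environment.
normals = np.array([[0.0, 0.0, 1.0], [0.6, 0.0, 0.8], [0.0, 0.6, 0.8]])
L = np.array([1.0, 0.0, 0.3, 0.0, 0.0, 0.0, 0.1, 0.0, 0.0])
print(diffuse_shading(np.array([0.8, 0.8, 0.8]), normals, L))

The spatially varying specular term \(E_s\) of the paper is handled by an additional low-order SH expansion and is not shown in this sketch.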



Acknowledgments

We greatly appreciate the comments from the anonymous reviewers. Their suggestions have led to substantial improvements over the earlier version of this manuscript and will benefit future research. This work was supported in part by the National Natural Science Foundation of China under Grant Nos. 61201375 and 61571438, and by the National High Technology R&D Project of China (863 Program) (2013AA014602).

Author information

Corresponding author

Correspondence to Xiyuan Hu.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 9792 KB)

Appendix

1.1 The gradient of the texture cost function

The key step in the whole optimization is computing the gradient of the cost function. Here we give the formulas for the gradient \({\nabla e_v^c}\). Since \(e_v^c = E_d + E_s - I_\mathrm{input}^c({\pi _\gamma }(v))\),

$$\begin{aligned}&\frac{{\partial e_v^c}}{{\partial \xi }} = \frac{{\partial {E_d}}}{{\partial \xi }} + \frac{{\partial {E_s}}}{{\partial \xi }} - \frac{{\partial I_\mathrm{input}^c({\pi _\gamma }(v))}}{{\partial \xi }} \end{aligned}$$
(19)
$$\begin{aligned}&\frac{{\partial {E_d}}}{{\partial \xi }} = \frac{{\partial {t^c}(v)N(v){M^c}{N^T}(v)}}{{\partial \xi }} \end{aligned}$$
(20)
$$\begin{aligned}&\frac{{\partial {E_s}}}{{\partial \xi }} = \frac{{\partial \sum \nolimits _{l = 0}^{{l_{\max }}} {\sum \nolimits _{m = - l}^l {\rho _l^s{\varLambda _l}{L_{lm}}{Y_{lm}}(v)} } }}{{\partial \xi }} \end{aligned}$$
(21)
$$\begin{aligned} \frac{{\partial I_\mathrm{input}^c({\pi _\gamma }(v))}}{{\partial \xi }}= & {} \frac{{\partial I_\mathrm{input}^c(\pi _\gamma ^x,\pi _\gamma ^y)}}{{\partial \xi }} \nonumber \\= & {} \frac{{\partial I_\mathrm{input}^c(p_x,p_y)}}{{\partial x}}\frac{{\partial p_x}}{{\partial \xi }}+ \frac{{\partial I_\mathrm{input}^c(p_{x},p_{y})}}{{\partial y}}\frac{{\partial p_y}}{{\partial \xi }}\nonumber \\ \end{aligned}$$
(22)

Here, \(\xi = \left( {\alpha ,\beta ,\gamma ,\iota _{d},\iota _{s}}, \mu \right) \).
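When implementing gradients such as (19)–(22), it is useful to verify the analytic expressions against finite differences. The following is a minimal, self-contained sketch of such a check, using a generic nearly linear least-squares cost as a stand-in; the matrices and function names are illustrative assumptions, not part of the authors' pipeline.

import numpy as np

def numerical_gradient(cost, xi, eps=1e-5):
    # Central finite-difference gradient of a scalar cost at parameter vector xi.
    grad = np.zeros_like(xi)
    for i in range(xi.size):
        step = np.zeros_like(xi)
        step[i] = eps
        grad[i] = (cost(xi + step) - cost(xi - step)) / (2.0 * eps)
    return grad

# Example: a nearly linear residual e(xi) = A xi - b with squared cost 0.5*||e||^2,
# whose analytic gradient is A^T (A xi - b).
rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 6)), rng.normal(size=50)
cost = lambda xi: 0.5 * np.sum((A @ xi - b) ** 2)
xi0 = rng.normal(size=6)
analytic = A.T @ (A @ xi0 - b)
numeric = numerical_gradient(cost, xi0)
print(np.max(np.abs(analytic - numeric)))  # should be close to machine precision of the scheme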

1.2 Prior cost function

According to the theory of PCA, the parameters \(\alpha \) and \(\beta \) follow K-dimensional normal distributions:

$$\begin{aligned} \begin{array}{ll} \alpha \sim \frac{1}{{{{\left( {2\pi } \right) }^{K/2}}}}{e^{ - \frac{{{{\left\| \alpha \right\| }^2}}}{2}}}\\ \beta \sim \frac{1}{{{{\left( {2\pi } \right) }^{K/2}}}}{e^{ - \frac{{{{\left\| \beta \right\| }^2}}}{2}}} \end{array} \end{aligned}$$
(23)

The other multivariate parameter is \(\iota _{s}\). However, we have no prior information about it. According to spherical harmonic theory (SHT), the high-order components of \(\iota _{s}\) should be small, so we add a regularization constraint as prior information when optimizing it. Thus, the prior cost function is

$$\begin{aligned} {C_p} = {\tau _{p1}}{\left\| \alpha \right\| ^2} + {\tau _{p2}}{\left\| \beta \right\| ^2} + {\tau _{p3}}{\left\| {{\iota _s}} \right\| ^2} \end{aligned}$$
(24)
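Equation (24) and its gradient are simple to evaluate. The sketch below is a minimal illustration under assumed weights \(\tau _{p1}, \tau _{p2}, \tau _{p3}\) and assumed parameter dimensions; the variable names and values are ours, not the paper's.

import numpy as np

def prior_cost(alpha, beta, iota_s, tau=(1.0, 1.0, 0.1)):
    # C_p = tau1*||alpha||^2 + tau2*||beta||^2 + tau3*||iota_s||^2  (Eq. 24)
    t1, t2, t3 = tau
    cp = t1 * alpha @ alpha + t2 * beta @ beta + t3 * iota_s @ iota_s
    # Gradients used when C_p is added to the overall cost:
    grads = (2.0 * t1 * alpha, 2.0 * t2 * beta, 2.0 * t3 * iota_s)
    return cp, grads

K = 80                         # illustrative number of model modes
alpha = np.zeros(K)            # shape coefficients
beta = np.zeros(K)             # texture coefficients
iota_s = 0.01 * np.ones(9)     # illustrative specular SH coefficients
print(prior_cost(alpha, beta, iota_s)[0])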


About this article


Cite this article

Ma, M., Peng, S. & Hu, X. A lighting robust fitting approach of 3D morphable model for face reconstruction. Vis Comput 32, 1223–1238 (2016). https://doi.org/10.1007/s00371-015-1158-z

