Digital Double
Automatic Generation and Stylization of 3D Facial Rigs
This project showcases a fully automatic pipeline for generating and stylizing facial rigs of high geometric and textural quality. The rigs come with facial blendshapes for animation and can be used across platforms for applications including virtual reality, augmented reality, remote collaboration, gaming and more. From a set of input facial photos, our approach creates a photorealistic, fully rigged character in a few minutes. The facial mesh is reconstructed with state-of-the-art photogrammetry. Automatic landmarking coupled with regularized ICP registration provides direct correspondence and registration from a given generic mesh to the acquired facial mesh. Using deformation transfer, existing blendshapes are then transferred from the generic mesh to the reconstructed face, which is subsequently fitted to the full-body generic mesh. Extra geometry such as the jaw, teeth and nostrils is retargeted and transferred to the character. An automatic iris color extraction step colorizes a separate eye texture that is animated with dynamic UVs. Finally, a stylization step applies a style to the photorealistic face, enabling personalized facial features to be blended into any other character: the user's face can then be adapted to any human or non-human generic mesh. The project also investigates new intuitive authoring tools for editing facial meshes.
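As a rough illustration of the registration step, the following minimal Python/NumPy sketch first computes a rigid Procrustes fit from detected landmark correspondences and then refines it with basic point-to-point ICP. It is only a sketch under simplifying assumptions: the actual pipeline uses regularized (non-rigid) registration, and the function names here are hypothetical, not part of the project.

import numpy as np
from scipy.spatial import cKDTree

def procrustes_rigid(src, dst):
    # Best-fit rotation R and translation t mapping src points onto dst
    # points (Kabsch algorithm); both arrays have shape (N, 3).
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def register_generic_to_scan(generic_landmarks, scan_landmarks,
                             generic_vertices, scan_vertices, iters=20):
    # Initial alignment from landmark correspondences, then point-to-point
    # ICP refinement of the generic mesh against the scanned mesh vertices.
    R, t = procrustes_rigid(generic_landmarks, scan_landmarks)
    moved = generic_vertices @ R.T + t
    tree = cKDTree(scan_vertices)
    for _ in range(iters):
        _, idx = tree.query(moved)          # closest-point correspondences
        R, t = procrustes_rigid(moved, scan_vertices[idx])
        moved = moved @ R.T + t
    return moved

The iris color extraction can likewise be pictured as averaging the pixels inside an estimated iris disc in one of the input photos; the actual algorithm is likely more robust (e.g. outlier rejection, multiple views), and extract_iris_color is an illustrative name only.

def extract_iris_color(image, center_xy, radius):
    # image: H x W x 3 array; returns the mean RGB inside the iris disc,
    # which can then be used to tint the separate eye texture.
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    mask = (xs - center_xy[0]) ** 2 + (ys - center_xy[1]) ** 2 <= radius ** 2
    return image[mask].mean(axis=0)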
Papers
Kerbiriou, G., Avril, Q., Danieau, F., & Marchal, M. (2022). Detailed Eye Region Capture and Animation. The 21st ACM SIGGRAPH / Eurographics Symposium on Computer Animation (SCA 2022).
Olivier, N., Baert, K., Danieau, F., Multon, F., & Avril, Q. (2023). FaceTuneGAN: Face autoencoder for convolutional expression transfer using neural generative adversarial networks. Computers & Graphics, 110, 69–85. https://doi.org/10.1016/j.cag.2022.12.004
Olivier, N., Kerbiriou, G., Argelaguet, F., Avril, Q., Danieau, F., Guillotel, P., Hoyet, L., & Multon, F. (2021). Study on Automatic 3D Facial Caricaturization: From Rules to Deep Learning. Frontiers in Virtual Reality. https://doi.org/10.3389/frvir.2021.785104
Olivier, N., Hoyet, L., Danieau, F., Argelaguet, F., Avril, Q., Lécuyer, A., Guillotel, P., & Multon, F. (2020). The Impact of Stylization on Face Recognition. Symposium on Applied Perception (ACM SAP). https://doi.org/10.1145/3385955.3407930
Colas, A., Guiotte, F., Danieau, F., Le Clerc, F., & Avril, Q. (2020). Fat Pad Cages for Facial Posing. ArXiv Preprint ArXiv:2010.05528.
Danieau, F., Gubins, I. I., Olivier, N., Dumas, O., Denis, B., Lopez, T., Mollet, N., Frager, B., & Avril, Q. (2019). Automatic Generation and Stylization of 3D Facial Rigs. 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 784–792. https://doi.org/10.1109/VR.2019.8798208
Award
The Digital Double demonstration was presented at IBC 2019, where it won a Best of Show Award from TVB Europe.
Press
Hardisk. October 2019. On m'a cloné ["They cloned me"] (Fr)
InterDigital. September 2019. InterDigital R&I Recognized for Immersive Video Technology at Inaugural IBC Showcase (En)
Ranch Computing. July 2019. Back from Siggraph 2019: Announcements, exhibitors, keynotes… (En)