Vision & Graphics Lab (Digitization & Computer Vision and Graphics)
Research Lead: Yajie Zhao, Director
Background
Virtual characters (digitally generated humans that can speak, move, and interact) are essential for entertainment, training, and education systems. The central goal of the ICT Vision & Graphics Lab (VGL) is to make these characters look realistic, including rendering them under convincing lighting. To achieve this level of fidelity, VGL uses our Academy Award-winning Light Stage, originally developed in-house at ICT, to capture and process production-quality assets. VGL collaborates with film industry companies such as Sony Pictures Imageworks and WETA Digital.
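The Light Stage's relighting principle rests on the linearity of light transport: photographs of a subject lit by each stage light in turn (one-light-at-a-time, or OLAT, images) can be combined as a weighted sum to simulate any new lighting environment. The sketch below illustrates this idea in NumPy; the function name and array shapes are illustrative assumptions, not VGL's actual pipeline.

```python
import numpy as np

def relight(olat_images: np.ndarray, env_weights: np.ndarray) -> np.ndarray:
    """Relight a subject from one-light-at-a-time (OLAT) Light Stage captures.

    olat_images : (N, H, W, 3) array, one photograph per stage light direction.
    env_weights : (N, 3) array, RGB intensity of the target lighting
                  environment sampled at each of the N light directions.

    Because light transport is linear, the image under the new environment
    is a weighted sum of the OLAT basis images.
    """
    # Sum over the N light directions, weighting each basis image per channel.
    return np.einsum("nc,nhwc->hwc", env_weights, olat_images)

# Toy usage: 4 lights, a 2x2-pixel "image", uniform white environment.
olat = np.random.rand(4, 2, 2, 3)
env = np.ones((4, 3)) / 4.0
relit = relight(olat, env)  # shape (2, 2, 3)
```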
Objectives
VGL is a research lab dedicated to advancing human digitization. With a focus on avatar creation, we continually work to improve the quality and fidelity of our results by exploring new technologies and updating our processing pipeline.
In addition, we have made significant contributions to production-quality digital human creation by incorporating AI into our workflow. Our lab has collected an extensive database and contributed the ICT morphable face model to the community. Using these tools, we have developed AI solutions that enable rapid data capture and processing, personalized avatar creation, and physically-based avatar rendering.
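A linear morphable face model of this kind typically represents a face as a mean mesh plus weighted combinations of identity and expression basis shapes. The NumPy sketch below shows that general formulation; the array shapes and dimensions are illustrative assumptions, not the released model's actual file layout.

```python
import numpy as np

def morphable_face(mean_shape, id_basis, expr_basis, id_coeffs, expr_coeffs):
    """Assemble a face mesh from a linear morphable model.

    mean_shape  : (V, 3) mean vertex positions.
    id_basis    : (K_id, V, 3) identity shape modes.
    expr_basis  : (K_expr, V, 3) expression blendshape modes.
    id_coeffs   : (K_id,) identity coefficients.
    expr_coeffs : (K_expr,) expression coefficients.
    """
    verts = mean_shape.copy()
    verts += np.tensordot(id_coeffs, id_basis, axes=1)      # identity offsets
    verts += np.tensordot(expr_coeffs, expr_basis, axes=1)  # expression offsets
    return verts

# Toy usage with made-up dimensions: 100 vertices, 5 identity and 3 expression modes.
verts = morphable_face(
    np.zeros((100, 3)),
    np.random.rand(5, 100, 3), np.random.rand(3, 100, 3),
    np.random.rand(5), np.random.rand(3),
)
```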
Our mission is to make movie-quality avatars accessible to all communities through a digital human platform that enables low-cost creation, editing, and rendering. Beyond avatar creation, we are also developing new representations and approaches for dynamic human capture, enabling real-time performance capture for VR/AR applications and opening new possibilities for digital human interaction.
Results
The VGL Light Stage has been recognized with two Scientific and Technical Awards from the Academy of Motion Picture Arts and Sciences, and its technology has been used in 49 movies. The lab partners with industry leaders such as Nvidia, Meta, and Digital Domain to advance avatar technologies, and its more than 160 top-tier academic publications have played a pioneering role and had a significant impact in the field.
Next Steps
We are expanding our research scope to include scene and terrain understanding, reconstruction, and interaction. Ultimately, digital humans should be able to play in and interact with the world we live in.