Talk Title: Human-Centered Modeling for AR/VR, CG, CV
Over the past 30 years, 3D virtual human characters and avatars have traditionally been created in the movie, animation, and gaming industries by studios using professional modeling and animation tools. In recent years, however, another research trend has been emerging in both academia and the research labs of tech giants (such as Facebook, Google, Microsoft, and Samsung) — to bring virtual humans into our daily lives: not only allowing us to create avatars of ourselves, but also making their motion and speech animation indistinguishable from that of real humans. Example technologies range from Apple’s ARKit for face tracking and Animoji, Huawei’s AR Engine for human body pose tracking and recognition, Samsung’s Hololab 3D scanning for digitizing humans, Facebook’s Codec Avatar project, Microsoft Hololens’ virtual AI assistant, and Snapchat’s Gender Change Filter for selfies, to the notorious DeepFake technology that produces fake videos of celebrities. In this talk, I will review the evolution of human-centered modeling technology and its potential impact in the near future.
Xiaohu Guo received his PhD degree in Computer Science from Stony Brook University and his BS degree in Computer Science from the University of Science and Technology of China. He is currently a Full Professor of Computer Science at The University of Texas at Dallas and a Senior Research Consultant for Samsung Research America. His research interests include Computer Graphics and Computer Vision, and Geometric Modeling and Processing, with a current emphasis on problems of Human Face and Body Modeling, the Medial Axis Transform, and Medical Image Computing. He is the recipient of the National Science Foundation CAREER Award in 2012.
© 2020 School of Informatics, Xiamen University. All rights reserved.