Title of Invention  METHOD FOR REAL-TIME FACE ANIMATION BASED ON SINGLE VIDEO CAMERA
Abstract  The invention discloses a method for real-time face animation based on a single video camera. The method tracks the 3D locations of face feature points in real time using a single video camera, parameterizes head poses and facial expressions from these 3D locations, and finally maps the parameters onto an avatar to drive the face animation of an animated character. The present invention achieves real-time speed using only the user's ordinary video camera rather than specialized acquisition equipment; it accurately handles large-angle rotations, translations, and exaggerated facial expressions; and it works under varied illumination and background conditions, from indoor settings to sunny outdoor scenes.
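The abstract describes a per-frame pipeline: capture a frame from an ordinary webcam, regress 3D face feature points conditioned on the previous frame, solve for pose and expression parameters, and retarget them onto an avatar. Below is a minimal sketch of that loop, assuming hypothetical regressor, parameterizer, and avatar components; none of these names come from the patent itself.

```python
# Minimal sketch of the per-frame loop the abstract describes.
# The regressor / parameterizer / avatar objects are hypothetical
# placeholders, not components disclosed in the patent.
import cv2  # ordinary webcam capture, per the "usual video camera" claim

def run(regressor, parameterizer, avatar, camera_id=0):
    cap = cv2.VideoCapture(camera_id)
    prev_pts = None                        # 3D feature points of previous frame
    while True:
        ok, frame = cap.read()
        if not ok:                         # loop ends when frames stop arriving
            break
        # track 3D feature points, conditioned on the previous frame's points
        pts = regressor.track(frame, prev_pts)
        # parameterize head pose and facial expression from the 3D points
        pose, expr = parameterizer.solve(pts)
        # retarget the parameters onto the animated character
        avatar.animate(pose, expr)
        prev_pts = pts
    cap.release()
```

Conditioning the regressor on the previous frame's points is what provides temporal coherence, matching step (3) of the claim below.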
Publication Number  US2015035825(A1)  Publication Date  2015.02.05
Application Number  US201414517758  Filing Date  2014.10.17
Applicant  ZHEJIANG UNIVERSITY  Inventors  ZHOU KUN; WENG YANLIN; CAO CHEN
Classification  G06T13/40; G06K9/00; G06T7/00  Primary Classification  G06T13/40
Principal Claim  1. A method for real-time face animation based on a single video camera, comprising the steps of:
(1) image acquisition and labeling: capturing multiple 2D images of a user with different poses and expressions using a video camera, obtaining corresponding 2D face feature points for each image using a 2D feature point regressor, and manually adjusting any inaccurate feature point that was detected automatically;
(2) data preprocessing: generating a user expression blendshape model and calibrating the camera's internal parameters from the images with labeled 2D face feature points, thereby obtaining 3D feature points for the images; and training, on the 3D feature points and the 2D images acquired in step (1), a regressor that maps 2D images to the 3D feature points;
(3) 3D feature point tracking: the user provides images in real time through the video camera; for each input image, tracking the 3D face feature points of the current frame in real time by combining the 3D face feature points of the previous frame with the regressor obtained in step (2);
(4) pose and expression parameterization: iteratively optimizing, from the locations of the 3D face feature points together with the user expression blendshape model obtained in step (2), a parametric representation of the head pose and facial expression (illustrated by the sketch following this claim);
(5) avatar driving: mapping the head pose and facial expression parameters onto a virtual avatar to drive an animated character's face animation.
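Step (4) fits a rigid head pose together with blendshape expression weights. With a neutral face B_0 and expression blendshapes B_1..B_k, the expressive face is B(e) = B_0 + Σ_i e_i (B_i − B_0), so the solver can alternate between a closed-form rigid alignment and a linear solve for e. The sketch below is one common formulation of such an alternating fit (Kabsch alignment plus clipped least squares), stated under assumed array shapes; it is illustrative, not the patent's actual optimization.

```python
# Illustrative sketch of step (4): alternately solve for a rigid
# head pose and blendshape expression weights from tracked 3D
# feature points. Array shapes and helper names are assumptions.
import numpy as np

def kabsch(src, dst):
    """Best rigid transform (R, t) with dst ~= R @ src + t, src/dst (n,3)."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, c_dst - R @ c_src

def fit_pose_and_expression(p3d, b0, deltas, iters=5):
    """p3d:    (n,3) tracked 3D feature points of the current frame.
    b0:     (n,3) neutral-face feature points of the blendshape model.
    deltas: (k,n,3) per-blendshape offsets (B_i - B_0).
    Returns rotation R (3,3), translation t (3,), weights e (k,)."""
    k = deltas.shape[0]
    e = np.zeros(k)
    A = deltas.reshape(k, -1).T                  # (3n, k) linear basis
    for _ in range(iters):
        shape = b0 + np.tensordot(e, deltas, axes=1)   # current expressive face
        R, t = kabsch(shape, p3d)                # rigid pose with expression fixed
        local = (p3d - t) @ R                    # rowwise R^T (p - t): undo pose
        e, *_ = np.linalg.lstsq(A, (local - b0).ravel(), rcond=None)
        e = np.clip(e, 0.0, 1.0)                 # keep weights in a valid range
    return R, t, e
```

Clipping the weights to [0, 1] is a cheap stand-in for a properly constrained solve (e.g., non-negative least squares); a few alternations typically suffice because each sub-problem is solved exactly.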
Address  HANGZHOU, CN