Robust facial landmark detection and tracking across poses and expressions for in-the-wild monocular video
Author affiliations: Bournemouth University; Harbin Institute of Technology
Publication: Computational Visual Media (计算可视媒体, English-language journal)
Year/Volume/Issue: 2017, Vol. 3, No. 1
Pages: 33-47
Subject classification: 08 [Engineering]; 080203 [Engineering - Mechanical Design and Theory]; 0835 [Engineering - Software Engineering]; 0802 [Engineering - Mechanical Engineering]; 0811 [Engineering - Control Science and Engineering]; 0812 [Engineering - Computer Science and Technology (degrees conferrable in Engineering or Science)]
Funding: Supported by the Harbin Institute of Technology Scholarship Fund 2016 and the National Centre for Computer Animation, Bournemouth University
Keywords: face tracking; facial reconstruction; landmark detection
Abstract: We present a novel approach for automatically detecting and tracking facial landmarks across poses and expressions from in-the-wild monocular video data, e.g., YouTube videos and smartphone videos. Our method does not require any calibration or manual adjustment for new individual input videos or actors. First, we propose a method of robust 2D facial landmark detection across poses, by combining shape-face canonical-correlation analysis with a global supervised descent method. Since 2D regression-based methods are sensitive to unstable initialization and ignore the temporal and spatial coherence of videos, we utilize a coarse-to-dense 3D facial expression reconstruction method to refine the 2D landmarks. On one side, we employ an in-the-wild method to extract the coarse reconstruction result and its corresponding texture using the detected sparse facial landmarks, followed by robust pose, expression, and identity estimation. On the other side, to obtain dense reconstruction results, we present a face tracking flow method that corrects coarse reconstruction results and tracks weakly textured areas; this is used to iteratively update the coarse face model. Finally, a dense reconstruction result is estimated once the iteration converges. Extensive experiments on a variety of video sequences recorded by ourselves or downloaded from YouTube show the results of facial landmark detection and tracking under various lighting conditions, for various head poses and facial expressions. The overall performance and a comparison with state-of-the-art methods demonstrate the robustness and effectiveness of our method.
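For context, the global supervised descent method named in the abstract builds on the standard SDM cascade, in which landmark coordinates are refined by a sequence of learned linear regressors on local appearance features. A generic form of the stage-k update (the standard SDM formulation, not necessarily the exact variant used in the paper) is:

\[
\mathbf{x}_{k+1} = \mathbf{x}_k + R_k\,\phi(I, \mathbf{x}_k) + \mathbf{b}_k,
\]

where \(\mathbf{x}_k\) stacks the current 2D landmark estimates, \(\phi(I, \mathbf{x}_k)\) extracts appearance features (e.g., SIFT) around each landmark of image \(I\), and \((R_k, \mathbf{b}_k)\) are the regressor parameters learned for stage \(k\).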
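The shape-face canonical-correlation analysis can be illustrated with a generic two-view CCA fit. The sketch below uses scikit-learn; the feature dimensions, landmark count, and random stand-in data are illustrative assumptions, not the paper's implementation:

import numpy as np
from sklearn.cross_decomposition import CCA

# Illustrative setup: N training faces with appearance features (one view)
# and flattened 2D landmark shapes (the other view). Random data stands in
# for real training samples.
N, feat_dim, n_landmarks = 1000, 512, 68
rng = np.random.default_rng(0)
X_appearance = rng.standard_normal((N, feat_dim))    # per-face appearance features
Y_shape = rng.standard_normal((N, n_landmarks * 2))  # (x, y) coordinates, flattened

# Project both views into a shared subspace where they are maximally
# correlated; the correlated shape-appearance directions can then be used
# to condition a 2D landmark detector on appearance.
cca = CCA(n_components=32)
cca.fit(X_appearance, Y_shape)
X_c, Y_c = cca.transform(X_appearance, Y_shape)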
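The coarse-to-dense refinement described in the abstract follows a simple iterate-until-convergence structure. The outline below is a hypothetical sketch of that loop; every helper function (detect_sparse_landmarks, fit_coarse_face, face_tracking_flow, update_face_model, model_change, densify) is a placeholder name, not an API from the paper:

def coarse_to_dense_refine(frames, max_iters=10, tol=1e-3):
    """Hypothetical outline of the abstract's coarse-to-dense reconstruction loop."""
    # Step 1: detect sparse 2D landmarks per frame (CCA + global SDM in the paper).
    landmarks = [detect_sparse_landmarks(f) for f in frames]
    # Step 2: coarse in-the-wild fit of pose, expression, identity, and texture.
    model = fit_coarse_face(frames, landmarks)
    # Step 3: face tracking flow corrects the coarse fit and tracks weakly
    # textured areas; iterate until the face model stops changing.
    for _ in range(max_iters):
        flow = face_tracking_flow(frames, model)
        new_model = update_face_model(model, flow)
        if model_change(model, new_model) < tol:
            break
        model = new_model
    # Step 4: once converged, estimate the dense reconstruction.
    return densify(model)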