Re-Distributing Facial Features for Engagement Prediction with ModernTCN

Authors: Xi Li; Weiwei Zhu; Qian Li; Changhui Hou; Yaozong Zhang

Affiliations: College of Information and Artificial Intelligence, Nanchang Institute of Science and Technology, Nanchang 330108, China; School of Electrical and Information Engineering, Wuhan Institute of Technology, Wuhan 430205, China; School of Electronic Information Engineering, Wuhan Donghu University, Wuhan 430212, China

Published in: Computers, Materials & Continua

Year/Volume/Issue: 2024, Vol. 81, No. 10

Pages: 369-391

Subject classification: 08 [Engineering]; 080203 [Engineering - Mechanical Design and Theory]; 0802 [Engineering - Mechanical Engineering]

Funding: Supported by the National Natural Science Foundation of China (No. 62367006) and the Graduate Innovative Fund of Wuhan Institute of Technology (Grant No. CX2023551)

Keywords: engagement prediction; spatiotemporal network; re-distributing facial features; temporal convolutional network

Abstract: Automatically detecting learners' engagement levels helps to develop more effective online teaching and assessment programs, allowing teachers to provide timely feedback and make personalized adjustments based on students' needs to enhance teaching effectiveness. Existing approaches mainly rely on single-frame multimodal facial spatial information, neglecting temporal emotional and behavioural features, and their accuracy is affected by significant pose variations. In addition, convolutional padding can erode feature maps, weakening the representational capacity of feature extraction. To address these issues, we propose a hybrid neural network architecture, the re-distributing facial features and temporal convolutional network (RefEIP). This network consists of three key components: first, utilizing the spatial attention mechanism large kernel attention (LKA) to automatically capture local patches and mitigate the effects of pose variations; second, employing the feature organization and weight distribution (FOWD) module to re-distribute feature weights, eliminating the impact of white features and enhancing representation in facial feature maps; finally, analysing the temporal changes across video frames with the modern temporal convolutional network (ModernTCN) module to detect engagement levels. We constructed a near-infrared engagement video dataset (NEVD) to better validate the efficiency of the RefEIP network. Through extensive experiments and in-depth studies, we evaluated these methods on NEVD and the Dataset for Affective States in E-Environments (DAiSEE), achieving an accuracy of 90.8% on NEVD and 61.2% on DAiSEE in the four-class classification task, indicating significant advantages in addressing engagement video analysis problems.
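
As a rough illustration of the pipeline described in the abstract, the sketch below wires a large-kernel-attention spatial stage, a channel re-weighting stage standing in for FOWD, and a depthwise temporal convolution block in the spirit of ModernTCN into a four-class engagement classifier. All module internals, layer sizes, and names (LargeKernelAttention, FeatureReWeighting, TemporalBlock, EngagementPredictor) are illustrative assumptions for this sketch, not the authors' implementation.

# Minimal, hypothetical sketch of the three-stage pipeline:
# per-frame spatial LKA -> feature re-weighting (FOWD stand-in) -> temporal block.
import torch
import torch.nn as nn


class LargeKernelAttention(nn.Module):
    """LKA-style spatial attention over per-frame feature maps."""
    def __init__(self, channels: int):
        super().__init__()
        # Decomposed large-kernel convolution: depthwise, dilated depthwise, pointwise.
        self.dw = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
        self.dw_dilated = nn.Conv2d(channels, channels, 7, padding=9,
                                    groups=channels, dilation=3)
        self.pw = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        attn = self.pw(self.dw_dilated(self.dw(x)))
        return x * attn  # re-weight local patches by the learned attention map


class FeatureReWeighting(nn.Module):
    """Stand-in for the FOWD idea: re-distribute channel weights so that
    low-information responses are suppressed. The exact FOWD design is not
    given in the abstract; this is a plain squeeze-and-reweight sketch."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.mlp(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w


class TemporalBlock(nn.Module):
    """ModernTCN-flavoured temporal block: depthwise 1D convolution along the
    frame axis plus a pointwise feed-forward, with a residual connection."""
    def __init__(self, channels: int, kernel_size: int = 13):
        super().__init__()
        self.dw_time = nn.Conv1d(channels, channels, kernel_size,
                                 padding=kernel_size // 2, groups=channels)
        self.ffn = nn.Sequential(nn.Conv1d(channels, 2 * channels, 1),
                                 nn.GELU(),
                                 nn.Conv1d(2 * channels, channels, 1))

    def forward(self, x):          # x: (batch, channels, frames)
        return x + self.ffn(self.dw_time(x))


class EngagementPredictor(nn.Module):
    """Per-frame spatial features -> re-weighting -> temporal modelling -> 4-class logits."""
    def __init__(self, channels: int = 64, num_classes: int = 4):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, stride=2, padding=1)
        self.lka = LargeKernelAttention(channels)
        self.reweight = FeatureReWeighting(channels)
        self.temporal = TemporalBlock(channels)
        self.head = nn.Linear(channels, num_classes)

    def forward(self, clip):       # clip: (batch, frames, 3, H, W)
        b, t = clip.shape[:2]
        x = clip.flatten(0, 1)                       # fold frames into the batch
        x = self.reweight(self.lka(self.stem(x)))    # spatial attention + re-weighting
        x = x.mean(dim=(2, 3)).view(b, t, -1)        # (batch, frames, channels)
        x = self.temporal(x.transpose(1, 2))         # temporal block over frames
        return self.head(x.mean(dim=2))              # clip-level engagement logits


if __name__ == "__main__":
    logits = EngagementPredictor()(torch.randn(2, 16, 3, 64, 64))
    print(logits.shape)  # torch.Size([2, 4])

Folding the frame axis into the batch for the spatial stages and only afterwards applying the temporal block mirrors the separation the abstract describes between per-frame facial feature extraction and temporal engagement modelling.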
