Evaluating quality of motion for unsupervised video object segmentation
Author affiliation: Jiangsu Key Laboratory of Big Data Analysis Technology, Jiangsu Collaborative Innovation Center on Atmospheric Environment and Equipment Technology, Nanjing University of Information Science & Technology, Nanjing 210044, China
Published in: Optoelectronics Letters
Year/Volume/Issue: 2024, Vol. 20, No. 6
Pages: 379-384
Subject classification: 08 [Engineering]; 080203 [Engineering - Mechanical Design and Theory]; 0802 [Engineering - Mechanical Engineering]
Funding: supported by the National Natural Science Foundation of China (No. 61872189)
Keywords: Evaluating quality of motion for unsupervised video object segmentation
Abstract: Current mainstream unsupervised video object segmentation (UVOS) approaches typically incorporate optical flow as motion information to locate the primary objects in coherent video frames. However, they fuse appearance and motion information without evaluating the quality of the optical flow. When poor-quality optical flow is used for the interaction with the appearance information, it introduces significant noise and leads to a decline in overall performance. To alleviate this issue, we first employ a quality evaluation module (QEM) to evaluate the optical flow. Then, we select high-quality optical flow as motion cues to fuse with the appearance information, which prevents poor-quality optical flow from diverting the network's attention. Moreover, we design an appearance-guided fusion module (AGFM) to better integrate appearance and motion information. Extensive experiments on several widely used datasets, including DAVIS-16, FBMS-59, and YouTube-Objects, demonstrate that the proposed method outperforms existing methods.
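For orientation only, the sketch below illustrates one plausible way the two ideas named in the abstract (a flow-quality score gating the motion branch, and an appearance-guided fusion of the two streams) could be wired together in PyTorch. The module internals, channel sizes, and gating formula are assumptions for illustration, not the architecture reported in the paper.

```python
# Hedged sketch (not the authors' code): a quality score predicted from motion
# features down-weights unreliable flow before it is fused with appearance.
import torch
import torch.nn as nn

class QualityEvaluationModule(nn.Module):
    """Predicts a scalar quality score in [0, 1] for a motion feature map."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # global context of the motion features
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 1, kernel_size=1),
            nn.Sigmoid(),                                 # one quality score per sample
        )

    def forward(self, motion_feat):
        return self.score(motion_feat)                    # shape (B, 1, 1, 1)

class AppearanceGuidedFusion(nn.Module):
    """Fuses motion into appearance; appearance produces the spatial gate."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),                                 # spatial attention from appearance
        )
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, app_feat, motion_feat, quality):
        # Down-weight motion features when the predicted flow quality is low.
        gated_motion = motion_feat * quality * self.gate(app_feat)
        return self.merge(torch.cat([app_feat, gated_motion], dim=1))

# Usage with illustrative sizes: 256-channel features at 24x24 resolution.
app = torch.randn(2, 256, 24, 24)
mot = torch.randn(2, 256, 24, 24)
qem, agfm = QualityEvaluationModule(256), AppearanceGuidedFusion(256)
fused = agfm(app, mot, qem(mot))                          # (2, 256, 24, 24)
```

In this reading, a low quality score suppresses the motion contribution so that segmentation falls back on appearance cues, which matches the abstract's motivation of keeping poor-quality optical flow from distracting the network.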