AsyCo: an asymmetric dual-task co-training model for partial-label learning

Authors: Beibei LI, Yiyuan ZHENG, Beihong JIN, Tao XIANG, Haobo WANG, Lei FENG

Affiliations: College of Computer Science, Chongqing University; State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences; University of Chinese Academy of Sciences; School of Software Technology, Zhejiang University; School of Computer Science and Engineering, Nanyang Technological University

Published in: Science China (Information Sciences)

Year: 2025

Subject classification: 12 [Management]; 1201 [Management Science and Engineering (degrees in Management or Engineering)]; 081104 [Pattern Recognition and Intelligent Systems]; 08 [Engineering]; 0835 [Software Engineering]; 0811 [Control Science and Engineering]; 0812 [Computer Science and Technology (degrees in Engineering or Science)]

Funding: supported by the National Natural Science Foundation of China (Grant Nos. 62106028, 62072450) and the Chongqing Overseas Chinese Entrepreneurship and Innovation Support Program

Abstract: Partial-label learning (PLL) is a typical problem of weakly supervised learning, where each training instance is annotated with a set of candidate labels. Self-training PLL models achieve state-of-the-art performance but suffer from error accumulation caused by mistakenly disambiguated instances. Although co-training can alleviate this issue by training two networks simultaneously and allowing them to interact with each other, most existing co-training methods train two structurally identical networks on the same task, i.e., they are symmetric, which leaves them unable to correct each other effectively because they share similar limitations. Therefore, in this paper, we propose an asymmetric dual-task co-training PLL model called AsyCo, which forces its two networks, i.e., a disambiguation network and an auxiliary network, to learn from different views explicitly by optimizing distinct tasks. Specifically, the disambiguation network is trained with a self-training PLL task to learn label confidence, while the auxiliary network is trained in a supervised learning paradigm to learn from the noisy pairwise similarity labels that are constructed according to the learned label confidence. Finally, the error accumulation problem is mitigated via information distillation and confidence refinement. Extensive experiments on both uniform and instance-dependent partially labeled datasets demonstrate the effectiveness of AsyCo.
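The abstract's key construction step, deriving pairwise similarity labels from learned label confidence, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: it assumes a confidence matrix of shape (n, num_classes) and marks two instances as similar when their most-confident candidate labels agree; the function name and the exact construction rule are assumptions for illustration.

```python
import numpy as np

def pairwise_similarity_labels(confidence: np.ndarray) -> np.ndarray:
    """Build noisy pairwise similarity labels from a label-confidence matrix.

    confidence: (n, num_classes) array, row i holding instance i's confidence
    over its candidate labels. Instances i and j get similarity label 1 when
    their most-confident labels agree, 0 otherwise. Because the confidences
    come from an imperfect disambiguation network, these labels are noisy.
    """
    pseudo = confidence.argmax(axis=1)  # most-confident label per instance
    return (pseudo[:, None] == pseudo[None, :]).astype(np.int64)

# Toy example: 3 instances, 2 classes.
conf = np.array([[0.9, 0.1],
                 [0.2, 0.8],
                 [0.7, 0.3]])
S = pairwise_similarity_labels(conf)  # instances 0 and 2 are marked similar
```

The resulting matrix S could then supervise the auxiliary network in an ordinary supervised fashion, which is what lets the two networks learn from different views of the same data.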
