De-biased knowledge distillation framework based on knowledge infusion and label de-biasing techniques

Authors: Yan Li; Tai-Kang Tian; Meng-Yu Zhuang; Yu-Ting Sun

Affiliations: School of Economics and Management, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Economics and Management, Beijing University of Posts and Telecommunications, Beijing 100876, China; School of Electrical Engineering and Computer Science, The University of Queensland, Brisbane 4072, Australia

Publication: Journal of Electronic Science and Technology

Year/Volume/Issue: 2024, Vol. 22, No. 3

Pages: 57-68

Subject classification: 081203 [Engineering - Computer Application Technology]; 08 [Engineering]; 0835 [Engineering - Software Engineering]; 0812 [Engineering - Computer Science and Technology (engineering or science degrees may be conferred)]

Funding: Supported by the National Natural Science Foundation of China under Grant No. 62172056 and the Young Elite Scientists Sponsorship Program by CAST under Grant No. 2022QNRC001

Keywords: De-biasing; Deep learning; Knowledge distillation; Model compression

Abstract: Knowledge distillation, as a pivotal technique in the field of model compression, has been widely applied across various domains. However, the problem of student model performance being limited by inherent biases in the teacher model during the distillation process still persists. To address these inherent biases in knowledge distillation, we propose a de-biased knowledge distillation framework tailored for binary classification tasks. For the pre-trained teacher model, biases in the soft labels are mitigated through knowledge infusion and label de-biasing techniques. Based on this, a de-biased distillation loss is introduced, allowing the de-biased labels to replace the soft labels as the fitting target for the student model. This approach enables the student model to learn from the corrected model information, achieving high-performance deployment on lightweight student models. Experiments conducted on multiple real-world datasets demonstrate that deep learning models compressed under the de-biased knowledge distillation framework significantly outperform traditional response-based and feature-based knowledge distillation models across various evaluation metrics, highlighting the effectiveness and superiority of the de-biased knowledge distillation framework in model compression.
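The abstract describes a loss in which de-biased teacher labels, rather than raw soft labels, become the student's fitting target. The following is a minimal PyTorch sketch of that idea for binary classification; the function `debias_soft_labels`, the prior-matching correction it applies, and the blending weight `alpha` are illustrative assumptions, not the authors' published procedure.

```python
# Hedged sketch: de-biased distillation loss for binary classification.
# The de-biasing step (standing in for "knowledge infusion + label de-biasing")
# is a hypothetical placeholder, not the method from the paper.
import torch
import torch.nn.functional as F


def debias_soft_labels(teacher_probs: torch.Tensor,
                       prior_pos_rate: float = 0.5) -> torch.Tensor:
    """Hypothetical de-biasing: shift the teacher's positive-class probabilities
    so their batch mean matches an assumed unbiased prior, then clamp to [0, 1]."""
    correction = prior_pos_rate - teacher_probs.mean()
    return (teacher_probs + correction).clamp(0.0, 1.0)


def debiased_distillation_loss(student_logits: torch.Tensor,
                               teacher_logits: torch.Tensor,
                               hard_labels: torch.Tensor,
                               alpha: float = 0.5) -> torch.Tensor:
    """The student fits the de-biased teacher labels instead of the raw soft
    labels, blended with an ordinary supervised loss on the ground truth."""
    teacher_probs = torch.sigmoid(teacher_logits).detach()
    debiased_targets = debias_soft_labels(teacher_probs)
    distill = F.binary_cross_entropy_with_logits(student_logits, debiased_targets)
    supervised = F.binary_cross_entropy_with_logits(student_logits, hard_labels)
    return alpha * distill + (1.0 - alpha) * supervised


if __name__ == "__main__":
    # Random tensors stand in for one batch of student/teacher outputs.
    student_logits = torch.randn(8)
    teacher_logits = torch.randn(8)
    hard_labels = torch.randint(0, 2, (8,)).float()
    print(debiased_distillation_loss(student_logits, teacher_logits, hard_labels))
```

In this sketch the only change from standard response-based distillation is that the teacher's probabilities pass through a de-biasing step before being used as targets, which mirrors the abstract's description of replacing soft labels with de-biased labels.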
