Echo State Network Based on Improved Knowledge Distillation for Edge Intelligence

Authors: Jian ZHOU; Yuwen JIANG; Lijie XU; Lu ZHAO; Fu XIAO

Affiliation: College of Computer, Nanjing University of Posts and Telecommunications; Jiangsu High Technology Research Key Laboratory for Wireless Sensor Networks

Published in: Chinese Journal of Electronics (电子学报(英文))

Year/Volume/Issue: 2024, Vol. 33, No. 1

Pages: 101-111


Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 081104 [Engineering - Pattern Recognition and Intelligent Systems]; 0811 [Engineering - Control Science and Engineering]

Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 61972210, 61802206, and 61803212), the Science and Technology Planning Project of Jiangsu Province (Grant No. BE2020729), and the 1311 Talent Program of Nanjing University of Posts and Telecommunications.

Keywords: Echo state network; Reservoir structure optimization; Knowledge distillation; Edge intelligence; Time series prediction

Abstract: The echo state network (ESN), a novel artificial neural network, has drawn much attention for time series prediction in edge intelligence. However, ESN is somewhat deficient in long-term memory, which degrades its prediction performance, and it incurs a high computational overhead when deployed on edge devices. We first introduce knowledge distillation into reservoir structure optimization, and then propose an echo state network based on improved knowledge distillation (ESN-IKD) for edge intelligence, aiming to improve prediction performance and reduce computational overhead. The ESN-IKD model is constructed with a classic ESN as the student network, a long short-term memory (LSTM) network as the teacher network, and an ESN with a double-loop reservoir structure as the assistant network. The student network learns the long-term memory capability of the teacher network with the help of the assistant network. A training algorithm for ESN-IKD is proposed that corrects the learning direction through the assistant network and eliminates redundant knowledge through iterative pruning, thereby solving the problems of erroneous learning and redundant learning in the traditional knowledge distillation process. Extensive simulation experiments show that ESN-IKD achieves good time series prediction performance in both long-term and short-term memory, with lower computational overhead.
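For context on the base model named in the abstract: a classic ESN keeps a fixed random recurrent "reservoir" and trains only a linear readout. The sketch below is a minimal illustration of that idea on a toy next-step sine prediction task; the reservoir size, spectral radius, and ridge parameter are assumed values for illustration, and this is not the ESN-IKD model or its distillation procedure.

```python
import numpy as np

# Minimal classic ESN sketch (illustrative assumptions, not ESN-IKD).
rng = np.random.default_rng(0)

n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))   # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))     # fixed reservoir weights
# Scale spectral radius below 1 for the echo state property.
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u_seq):
    """Drive the reservoir with an input sequence and collect states."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next sample of a sine wave.
t = np.arange(400)
u = np.sin(0.1 * t)
X = run_reservoir(u[:-1])   # reservoir states for inputs u[0..T-2]
y = u[1:]                   # targets: the next sample

# Only the readout is trained, via ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out
print(np.mean((pred - y) ** 2))  # training MSE, expected to be small
```

Because only `W_out` is learned, training reduces to one linear solve, which is what makes ESNs attractive on resource-limited edge devices; the reservoir structure itself (the target of the optimization discussed above) is what governs the network's memory.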
