A revisit to MacKay algorithm and its application to deep network compression

Authors: Chune LI; Yongyi MAO; Richong ZHANG; Jinpeng HUAI

Affiliations: School of Computer Science and Engineering, Beihang University, Beijing 100191, China; School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa K1N 6N5, Canada

Published in: Frontiers of Computer Science

Year/Volume/Issue: 2020, Vol. 14, No. 4

Pages: 39-54


Subject classification: 12 [Management Science] 1201 [Management Science and Engineering (degrees in Management or Engineering)] 08 [Engineering] 081201 [Computer Architecture] 0812 [Computer Science and Technology (degrees in Engineering or Science)]

Funding: This work was supported in part by the China Scholarship Council (201706020062); the China 973 Program (2015CB358700); the National Natural Science Foundation of China (Grant Nos. 61772059, 61421003); the Beijing Advanced Innovation Center for Big Data and Brain Computing (BDBC); and the State Key Laboratory of Software Development Environment (SKLSDE-2018ZX-17).

Keywords: deep learning; MacKay algorithm; model compression; neural network

Abstract: An iterative procedure introduced in MacKay's evidence framework is often used for estimating the hyperparameter in empirical Bayes. With the use of a particular form of prior, the estimation of the hyperparameter reduces to an automatic relevance determination model, which provides a soft way of pruning model parameters. Despite the effectiveness of this estimation procedure, it has remained primarily a heuristic to date, and its application to deep neural networks has not yet been explored. This paper formally investigates the mathematical nature of this procedure and justifies it as a well-principled algorithmic framework, which we call the MacKay algorithm. As an application, we demonstrate its use in deep neural networks, which typically have complicated structures with millions of parameters and can be pruned to reduce memory requirements and boost computational efficiency. In experiments, we adopt the MacKay algorithm to prune the parameters of simple networks such as LeNet, deep convolutional VGG-like networks, and residual networks for large image classification tasks. Experimental results show that the algorithm can compress neural networks to a high level of sparsity with little loss of prediction accuracy, comparable with the state-of-the-art.
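The automatic relevance determination (ARD) idea mentioned in the abstract can be illustrated with MacKay's classic fixed-point hyperparameter update for a Bayesian linear model. This is a minimal sketch of the general technique only, not the paper's deep-network procedure; the data, thresholds, and clipping bounds below are illustrative assumptions.

```python
import numpy as np

# Synthetic regression where only the first 2 of 10 features matter,
# so ARD should drive the other 8 weight precisions toward infinity,
# softly pruning the corresponding weights.
rng = np.random.default_rng(0)
N, M = 100, 10
Phi = rng.normal(size=(N, M))           # design matrix
w_true = np.zeros(M)
w_true[:2] = [2.0, -3.0]
t = Phi @ w_true + 0.1 * rng.normal(size=N)

beta = 1.0 / 0.1**2                     # noise precision (assumed known here)
alpha = np.ones(M)                      # per-weight prior precisions (hyperparameters)

for _ in range(50):
    # Gaussian posterior over weights given the current hyperparameters
    Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
    m = beta * Sigma @ (Phi.T @ t)
    # MacKay's "well-determined parameter" counts and fixed-point update
    gamma = 1.0 - alpha * np.diag(Sigma)
    alpha = np.clip(gamma / (m**2 + 1e-12), 1e-6, 1e8)

# Weights whose precision has diverged are effectively pruned.
pruned = alpha > 1e4
print(pruned)
```

Iterating the update concentrates the prior of irrelevant weights around zero while leaving relevant weights essentially unpenalized, which is the "soft pruning" behavior the abstract describes.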
