A Novel Quantization and Model Compression Approach for Hardware Accelerators in Edge Computing

Authors: Fangzhou He, Ke Ding, Dingjiang Yan, Jie Li, Jiajun Wang, Mingzhe Chen

Affiliations: State Key Laboratory of Intelligent Vehicle Safety Technology, Chongqing 401133, China; Foresight Technology Institute, Chongqing Changan Automobile Co., Ltd., Chongqing 400023, China; School of Computer Science and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China

Published in: Computers, Materials & Continua

Year/Volume/Issue: 2024, Vol. 80, No. 8

Pages: 3021-3045

Subject Classification: 12 [Management]; 1201 [Management - Management Science and Engineering (degrees conferrable in Management or Engineering)]; 081104 [Engineering - Pattern Recognition and Intelligent Systems]; 08 [Engineering]; 0835 [Engineering - Software Engineering]; 0811 [Engineering - Control Science and Engineering]; 0812 [Engineering - Computer Science and Technology (degrees conferrable in Engineering or Science)]

Funding: This work was supported by the Open Fund Project of the State Key Laboratory of Intelligent Vehicle Safety Technology (Grant No. IVSTSKL-202311); the Key Projects of the Science and Technology Research Programme of the Chongqing Municipal Education Commission (Grant No. KJZD-K202301505); the 2021 Cooperation Project between Chongqing Municipal Undergraduate Universities and Institutes Affiliated to the Chinese Academy of Sciences (Grant No. HZ2021015); and the Chongqing Graduate Student Research Innovation Program (Grant No. CYS240801).

Keywords: edge computing; model compression; hardware accelerator; power-of-two quantization

Abstract: The massive computational complexity and memory requirements of artificial intelligence models impede their deployability on edge computing devices of the Internet of Things (IoT). While Power-of-Two (PoT) quantization has been proposed to improve the efficiency of edge inference for Deep Neural Networks (DNNs), existing PoT schemes require a huge amount of bit-wise manipulation and have large memory overhead, and their efficiency is bounded by the bottleneck of computation latency and memory access. To tackle this challenge, we present an efficient inference approach on the basis of PoT quantization and model compression. An integer-only scalar PoT quantization (IOS-PoT) is designed jointly with a distribution loss regularizer, wherein the regularizer minimizes quantization errors and training costs. Then, two-stage model compression is developed to effectively reduce the memory requirement and alleviate bandwidth usage in communications of networked heterogeneous learning systems. A product look-up table (P-LUT) inference scheme is leveraged to replace bit-shifting with only indexing and addition operations, achieving low-latency computation and efficient edge inference. Finally, comprehensive experiments on Residual Networks (ResNets) and efficient architectures with the Canadian Institute for Advanced Research (CIFAR), ImageNet, and Real-world Affective Faces Database (RAF-DB) datasets indicate that our approach achieves a 2×~10× improvement in the reduction of both weight size and computation cost in comparison to state-of-the-art methods. A P-LUT accelerator prototype is implemented on the Xilinx KV260 Field Programmable Gate Array (FPGA) platform for accelerating convolution operations, with performance results showing that P-LUT reduces memory footprint by 1.45× and achieves more than 3× power efficiency and 2× resource efficiency compared to the conventional bit-shifting scheme.
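The abstract names two mechanisms that a short sketch can make concrete: quantizing weights to signed powers of two, and replacing the resulting bit-shift arithmetic with a product look-up table (P-LUT) accessed by indexing and addition. The following Python sketch is a minimal illustration under assumed encodings (4-bit activation codes and 8 exponent levels; the names pot_quantize and plut_dot are hypothetical), not the authors' IOS-PoT implementation.

import numpy as np

def pot_quantize(w, n_levels=8):
    # Map each weight to sign(w) * 2^e with an integer exponent e in [-n_levels, -1].
    sign = np.sign(w)
    e = np.clip(np.round(np.log2(np.abs(w) + 1e-12)), -n_levels, -1).astype(int)
    return sign, e  # store (sign, exponent) pairs instead of float weights

def plut_dot(x_idx, signs, exps, plut):
    # Accumulate precomputed products fetched by table lookup instead of
    # bit-shifting: plut[a, k] holds activation level a times 2^(-(k + 1)).
    acc = 0.0
    for a, s, e in zip(x_idx, signs, exps):
        acc += s * plut[a, -e - 1]  # one indexing operation + one addition per term
    return acc

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=16)          # toy float weights
signs, exps = pot_quantize(w)

act_levels = np.arange(16)                  # 4-bit activation codes 0..15
plut = np.outer(act_levels, 2.0 ** -(np.arange(8) + 1))  # 16x8 product table

x_idx = rng.integers(0, 16, size=16)        # quantized input activations
print(plut_dot(x_idx, signs, exps, plut))

Each accumulated term equals sign(w) * a * 2^e, so the loop computes the same dot product a shift-based kernel would; every multiply-accumulate is reduced to one table read plus one addition, consistent with the indexing-and-addition scheme the abstract attributes to P-LUT.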
