Enhancing the robustness of object detection via 6G vehicular edge computing
Author affiliations: School of Telecommunication Engineering, Xidian University, Xi'an 710071, China; Department of Physics, ELEDIA@AUTH, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece; Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China, Shenzhen 518110, China
Published in: Digital Communications and Networks
Year/Volume/Issue: 2022, Vol. 8, No. 6
Pages: 923-931
Subject classifications: 0810 [Engineering - Information and Communication Engineering]; 080904 [Engineering - Electromagnetic Field and Microwave Technology]; 0808 [Engineering - Electrical Engineering]; 0809 [Engineering - Electronic Science and Technology (Engineering or Science degree)]; 0839 [Engineering - Cyberspace Security]; 08 [Engineering]; 080402 [Engineering - Testing and Measurement Technology and Instruments]; 0804 [Engineering - Instrument Science and Technology]; 0835 [Engineering - Software Engineering]; 081001 [Engineering - Communication and Information Systems]; 0812 [Engineering - Computer Science and Technology (Engineering or Science degree)]
Funding: supported by the National Key Research and Development Program of China (2020YFB1807500); the National Natural Science Foundation of China (62072360, 62001357, 62172438, 61901367); the Key Research and Development Plan of Shaanxi Province (2021ZDLGY02-09, 2020JQ-844); the Natural Science Foundation of Guangdong Province of China (2022A1515010988); the Key Project on Artificial Intelligence of the Xi'an Science and Technology Plan (2022JH-RGZN-0003); the Xi'an Science and Technology Plan (20RGZN0005); and the Xi'an Key Laboratory of Mobile Edge Computing and Security (201805052-ZD3CG36)
Keywords: 6G; Vehicular edge computing; Object detection; Feature fusion; Model compression; Model deployment
Abstract: Academic and industrial communities have been paying significant attention to 6th Generation (6G) wireless communication systems since the commercial deployment of 5G cellular communications. Among the emerging technologies, Vehicular Edge Computing (VEC) can provide essential assurance for the robustness of the Artificial Intelligence (AI) algorithms to be used in 6G systems. This paper therefore proposes a strategy for enhancing the robustness of AI model deployment using 6G-VEC, taking the object detection task as an example. The strategy comprises two stages: model stabilization and model adaptation. In the former, state-of-the-art methods are incorporated into the model to improve its robustness. In the latter, two targeted compression methods are implemented, namely model parameter pruning and knowledge distillation, which trade model performance against runtime resource consumption. Numerical results indicate that the proposed strategy can be deployed smoothly on onboard edge terminals, where the resulting trade-off outperforms the other available strategies.
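The model-adaptation stage named in the abstract rests on two standard compression techniques: magnitude-based parameter pruning and temperature-scaled knowledge distillation. The paper's exact procedure is not reproduced in this record; the following minimal NumPy sketch only illustrates the generic form of the two techniques (function names and the threshold scheme are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Illustrative magnitude pruning: zero out the smallest-magnitude
    fraction `sparsity` of the entries in a weight array."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return weights * (np.abs(weights) > threshold)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Generic soft-target distillation loss: KL divergence between
    temperature-softened teacher and student distributions, scaled by T^2."""
    p = softmax(teacher_logits / T)   # teacher "soft labels"
    q = softmax(student_logits / T)   # student predictions
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return float(kl.mean() * T * T)
```

For example, pruning a layer at 50% sparsity zeroes half of its weights, and the distillation loss vanishes when the student's logits match the teacher's, which is the intended optimum of the student training objective.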