Competition on robust deep learning

Authors: Yinpeng Dong; Chang Liu; Wenzhao Xiang; Hang Su; Jun Zhu

Affiliations: Department of Computer Science and Technology, Institute for AI, Tsinghua-Bosch Joint ML Center, THBI Lab, BNRist Center, Tsinghua University; Institute of Image Communication and Networks Engineering, Department of Electronic Engineering (EE), Shanghai Jiao Tong University; Key Laboratory of Intelligent Information Processing of the Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS; Peng Cheng Laboratory; Pazhou Laboratory (Huangpu)

Journal: National Science Review

Year/Volume/Issue: 2023, Vol. 10, No. 6

Pages: 13-15

Subject classification: 08 [Engineering]; 081104 [Engineering - Pattern Recognition and Intelligent Systems]; 0811 [Engineering - Control Science and Engineering]

Funding: Supported by the National Key Research and Development Program of China (2020AAA0104304) and the National Natural Science Foundation of China (62076147, 62276149, U19B2034 and U19A2081)

Keywords: robustness; adversarial example; adversarial training; deep learning

Abstract: PROBLEM In recent years, the rapid development of artificial intelligence (AI) technology, especially machine learning and deep learning, has been profoundly changing human production and lifestyle. AI plays an important role in various fields such as robotics, face recognition, autonomous driving and healthcare. However, although AI is driving the technological revolution and industrial progress, its security risks are often overlooked. Previous studies have found that well-performing deep learning models are extremely vulnerable to adversarial examples [1-3]. Adversarial examples are crafted by applying small, human-imperceptible perturbations to natural examples, yet they can mislead deep learning models into making wrong predictions. The vulnerability of deep learning models to adversarial examples raises security and safety threats to various real-world applications.
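The abstract describes adversarial examples as small, human-imperceptible perturbations of natural inputs that flip a model's prediction. As a minimal sketch of that idea (a standard one-step gradient-sign attack in the spirit of FGSM, not the specific methods used in this competition), assuming a pretrained PyTorch classifier `model`, an input batch `x` with pixel values in [0, 1], labels `y`, and a perturbation budget `eps`:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """One-step gradient-sign attack (illustrative sketch only).

    Crafts a small, bounded perturbation that increases the
    classification loss, illustrating the 'adversarial example'
    concept from the abstract. `model`, `x`, `y` and `eps` are
    assumed inputs, not artifacts of the paper.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)      # loss on the clean inputs
    loss.backward()                          # gradient of the loss w.r.t. pixels
    x_adv = x + eps * x.grad.sign()          # step in the sign of the gradient
    return x_adv.clamp(0.0, 1.0).detach()    # keep pixels in the valid range
```

Adversarial training, one of the listed keywords, typically folds such crafted examples back into the training loop so that the model is also optimized to classify perturbed inputs correctly.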
