Adaptive Backdoor Attack against Deep Neural Networks

Authors: Honglu He, Zhiying Zhu, Xinpeng Zhang

Affiliation: School of Computer Science, Fudan University, Shanghai 200433, China

Published in: Computer Modeling in Engineering & Sciences

Year/Volume/Issue: 2023, Vol. 136, No. 9

Pages: 2617-2633

Subject Classification: 0710 [Science - Biology]; 08 [Engineering]; 081104 [Engineering - Pattern Recognition and Intelligent Systems]; 080203 [Engineering - Mechanical Design and Theory]; 0802 [Engineering - Mechanical Engineering]; 0811 [Engineering - Control Science and Engineering]; 0812 [Engineering - Computer Science and Technology]

Keywords: Backdoor attack; AI security; DNN

Abstract: In recent years, the number of parameters in deep neural networks (DNNs) has been increasing rapidly. Training DNNs is typically computation-intensive, so many users leverage cloud computing and outsource their training procedures. Outsourced computation introduces a potential risk called a backdoor attack, in which a well-trained DNN behaves abnormally on inputs carrying a certain trigger. Backdoor attacks can also be classified as attacks that exploit fake images. However, most backdoor attacks design a uniform trigger for all images, which can be easily detected and removed. In this paper, we propose a novel adaptive backdoor attack. We overcome this defect and design a generator that assigns a unique trigger to each image depending on its texture. To achieve this goal, we use a texture complexity metric to create a special mask for each image, which forces the trigger to be embedded into rich texture regions. Because the trigger is distributed over texture regions, it is invisible to humans. Beyond the stealthiness of the triggers, we also limit the range of modification of the backdoor model to evade detection. Experiments show that our method is effective on multiple datasets, and traditional detectors cannot reveal the existence of the backdoor.
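The abstract describes two mechanical pieces: a texture complexity metric that yields a per-image mask, and a trigger embedded only where that mask marks rich texture. Below is a minimal sketch of that masking idea, using local variance as a stand-in complexity metric and a simple additive blend. The function names and parameters (texture_mask, embed_trigger, keep_ratio, strength) are hypothetical illustrations, not the paper's learned generator.

```python
import numpy as np

def texture_mask(image: np.ndarray, window: int = 5, keep_ratio: float = 0.3) -> np.ndarray:
    """Binary mask of the most textured pixels (1 = rich texture).

    Hypothetical stand-in for the paper's texture complexity metric:
    texture is scored by grayscale variance in a sliding window.
    """
    gray = image.mean(axis=-1) if image.ndim == 3 else image.astype(np.float64)
    pad = window // 2
    padded = np.pad(gray, pad, mode="reflect")
    h, w = gray.shape
    var = np.empty((h, w))
    for i in range(h):          # brute-force sliding window; fine for a sketch
        for j in range(w):
            var[i, j] = padded[i:i + window, j:j + window].var()
    # Keep only the top `keep_ratio` fraction of textured pixels.
    thresh = np.quantile(var, 1.0 - keep_ratio)
    return (var >= thresh).astype(np.float64)

def embed_trigger(image: np.ndarray, trigger: np.ndarray, strength: float = 0.05) -> np.ndarray:
    """Blend an additive trigger into rich-texture regions only."""
    mask = texture_mask(image)
    if image.ndim == 3:
        mask = mask[..., None]   # broadcast the 2-D mask over color channels
    return np.clip(image + strength * mask * trigger, 0.0, 1.0)

# Usage on a stand-in for a normalized 32x32 RGB image:
rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
trig = rng.standard_normal(img.shape)
poisoned = embed_trigger(img, trig)  # perturbs only the most textured 30% of pixels
```

Confining the perturbation to high-variance regions is what makes the trigger hard to see: smooth areas, where additive noise is most visible, are left untouched.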
