
Improving Transferable Targeted Adversarial Attack for Object Detection Using RCEN Framework and Logit Loss Optimization

Authors: Ding, Zhiyi; Sun, Lei; Mao, Xiuqing; Dai, Leyu; Ding, Ruiyang

Affiliation: Information Engineering University, School of Cryptographic Engineering, Zhengzhou 450000, People's Republic of China

Published in: CMC-COMPUTERS MATERIALS & CONTINUA

Year/Volume/Issue: 2024, Vol. 80, No. 3

Pages: 4387-4412


Subject Classification: 08 [Engineering]; 0837 [Engineering - Safety Science and Engineering]; 0812 [Engineering - Computer Science and Technology (eligible for Engineering or Science degrees)]

Keywords: object detection; model security; targeted attack; gradient diversity

Abstract: Object detection finds wide application in various sectors, including autonomous driving, industry, and healthcare. Recent studies have highlighted the vulnerability of object detection models built using deep neural networks when confronted with carefully crafted adversarial examples. This not only reveals their shortcomings in defending against malicious attacks but also raises widespread concerns about the security of existing systems. Most existing adversarial attack strategies focus primarily on image classification problems, failing to fully exploit the unique characteristics of object detection models, and thus suffer from poor transferability. Furthermore, previous research has predominantly concentrated on the transferability of non-targeted attacks, whereas enhancing the transferability of targeted adversarial examples presents even greater challenges. Traditional attack techniques typically employ cross-entropy as the loss measure, iteratively adjusting adversarial examples to match target categories. However, their inherent limitations restrict their broad applicability and transferability across different models. To address these challenges, this study proposes a novel targeted adversarial attack method aimed at enhancing the transferability of adversarial samples across object detection models. First, within the framework of iterative attacks, we devise a new objective function designed to mitigate consistency issues arising from cumulative noise and to enhance the separation between target and non-target categories (the logit margin). Second, a data augmentation framework incorporating random erasing and color transformations is introduced into targeted adversarial attacks. This enhances the diversity of gradients, preventing overfitting to white-box models. Lastly, perturbations are applied only within the specified object's bounding box, reducing the perturbation range and enhancing attack stealthiness.
Experiments wer
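The three ingredients named in the abstract (a logit-margin objective, random-erasing plus color-transform augmentation, and perturbations restricted to the object's bounding box) can be illustrated with a minimal NumPy sketch. This is not the authors' RCEN implementation; all function names and hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def logit_margin_loss(logits, target):
    """Margin between the target logit and the largest non-target logit.
    Maximizing this pushes the target class above all others."""
    other = np.delete(logits, target)
    return logits[target] - other.max()

def random_erase(img, max_frac=0.3):
    """Random-erasing augmentation: zero out a random rectangle."""
    h, w = img.shape[:2]
    eh = rng.integers(1, max(2, int(h * max_frac)))
    ew = rng.integers(1, max(2, int(w * max_frac)))
    y = rng.integers(0, h - eh + 1)
    x = rng.integers(0, w - ew + 1)
    out = img.copy()
    out[y:y + eh, x:x + ew] = 0.0
    return out

def color_transform(img, max_shift=0.1):
    """Color augmentation: random per-channel brightness shift."""
    shift = rng.uniform(-max_shift, max_shift, size=(1, 1, img.shape[2]))
    return np.clip(img + shift, 0.0, 1.0)

def apply_box_perturbation(img, delta, box):
    """Add the perturbation only inside the object's bounding box,
    leaving pixels outside the box untouched (attack stealthiness)."""
    x1, y1, x2, y2 = box
    mask = np.zeros_like(img)
    mask[y1:y2, x1:x2] = 1.0
    return np.clip(img + delta * mask, 0.0, 1.0)
```

In an iterative attack loop, each step would typically pass an augmented copy of the current adversarial image through the white-box detector, compute the logit-margin loss for the target class, and update the perturbation by gradient ascent before re-masking it to the bounding box.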
