
Adversarial Attacks on Featureless Deep Learning Malicious URLs Detection

Authors: Bader Rasheed, Adil Khan, S. M. Ahsan Kazmi, Rasheed Hussain, Md. Jalil Piran, Doug Young Suh

Affiliations: Institute of Data Science and Artificial Intelligence, Innopolis University, Innopolis 420500, Russia; Institute of Information Security and Cyberphysical Systems, Innopolis University, Innopolis 420500, Russia; Department of Computer Science and Engineering, Sejong University, Seoul, Korea; Department of Electronics Engineering, Kyung Hee University, Yongin, Korea

Published in: Computers, Materials & Continua

Year/Volume/Issue: 2021, Volume 68, Issue 7

Pages: 921-939


Subject classification: 0831 [Engineering - Biomedical Engineering (degrees conferrable in Engineering, Science, or Medicine)]; 0808 [Engineering - Electrical Engineering]; 0809 [Engineering - Electronic Science and Technology (degrees conferrable in Engineering or Science)]; 08 [Engineering]; 0805 [Engineering - Materials Science and Engineering (degrees conferrable in Engineering or Science)]; 0701 [Science - Mathematics]; 0801 [Engineering - Mechanics (degrees conferrable in Engineering or Science)]; 0812 [Engineering - Computer Science and Technology (degrees conferrable in Engineering or Science)]

Funding: Supported by Korea Electric Power Corporation (Grant Number: R18XA02)

Keywords: malicious URLs detection; deep learning; adversarial attack; web security

Abstract: Detecting malicious Uniform Resource Locators (URLs) is crucially important to prevent attackers from committing cybercrimes. Several studies have investigated the role of machine learning (ML) models in detecting malicious URLs. When using ML algorithms, first the features of URLs are extracted, and then different ML models are trained. The limitation of this approach is that it requires manual feature engineering and does not consider the sequential patterns in the URL. Therefore, deep learning (DL) models are used to solve these issues, since they are able to perform featureless detection. Moreover, DL models give better accuracy and generalization to newly designed URLs; however, the results of our study show that these models, like any other DL models, can be susceptible to adversarial attacks. In this paper, we examine the robustness of these models and demonstrate the importance of considering this susceptibility before applying such detection systems in real-world applications. We propose and demonstrate a black-box attack based on scoring functions with a greedy search for the minimum number of perturbations leading to a misclassification. The attack is examined against different types of convolutional neural network (CNN)-based URL classifiers, and it causes a tangible decrease in accuracy, with more than a 56% reduction in the accuracy of the best classifier (among the classifiers selected for this work). Moreover, adversarial training shows promising results, reducing the influence of the attack on the robustness of the model to less than 7% on average.
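
The abstract describes the attack at a high level: score each character of the URL by its influence on the classifier's output, then greedily perturb the highest-scoring positions until the decision flips. The following is a minimal Python sketch of that idea under stated assumptions: the classifier stand-in, the occlusion-style scoring function, and all names are hypothetical illustrations, not the paper's actual models or exact algorithm.

```python
import string

# Hypothetical stand-in for a trained character-level CNN URL classifier:
# returns the probability that `url` is malicious. This toy scorer exists
# only so the attack loop below runs end to end.
def classifier_malicious_prob(url: str) -> float:
    suspicious = ("login", "verify", "secure", "update", "account")
    hits = sum(token in url.lower() for token in suspicious)
    return min(1.0, 0.2 + hits / len(suspicious))

def position_scores(url: str) -> list:
    """Occlusion-style scoring function (an assumption): rank each
    character by how much deleting it lowers the malicious probability."""
    base = classifier_malicious_prob(url)
    return [base - classifier_malicious_prob(url[:i] + url[i + 1:])
            for i in range(len(url))]

def greedy_attack(url: str, threshold: float = 0.5, max_edits: int = 10) -> str:
    """Greedy black-box search: repeatedly perturb the most influential
    character until the decision flips or the edit budget is spent."""
    alphabet = string.ascii_lowercase + string.digits
    adv = list(url)
    for _ in range(max_edits):
        if classifier_malicious_prob("".join(adv)) < threshold:
            break  # classified as benign: perturbation budget not exceeded
        scores = position_scores("".join(adv))
        pos = max(range(len(adv)), key=lambda i: scores[i])
        # Try every substitute character and keep the one that lowers
        # the malicious probability the most.
        adv[pos] = min(alphabet,
                       key=lambda c: classifier_malicious_prob(
                           "".join(adv[:pos] + [c] + adv[pos + 1:])))
    return "".join(adv)

if __name__ == "__main__":
    url = "http://secure-login-update.example.com/verify"
    adv = greedy_attack(url)
    print(url, "->", adv, classifier_malicious_prob(adv))
```

Note that the sketch is black-box in the sense the abstract uses: it queries only the model's output probabilities, never its gradients or weights, which is what makes the scoring-plus-greedy-search strategy applicable to deployed detection systems.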
