BAD-FM: Backdoor Attacks Against Factorization-Machine Based Neural Network for Tabular Data Prediction
Author affiliations: College of Electrical Engineering, Zhejiang University; School of Computer Science, Wuhan University
Publication: Chinese Journal of Electronics
Year/Volume/Issue: 2024, Vol. 33, No. 4
Pages: 1077-1092
Subject classification: 12 [Management]; 1201 [Management - Management Science and Engineering (degrees in Management or Engineering)]; 081104 [Engineering - Pattern Recognition and Intelligent Systems]; 0839 [Engineering - Cyberspace Security]; 08 [Engineering]; 0835 [Engineering - Software Engineering]; 081201 [Engineering - Computer System Architecture]; 0811 [Engineering - Control Science and Engineering]; 0812 [Engineering - Computer Science and Technology (degrees in Engineering or Science)]
Keywords: Adaptation models; Systematics; Frequency modulation; Finance; Predictive models; Prediction algorithms; Data models
Abstract: Backdoor attacks pose great threats to deep neural network models. All existing backdoor attacks are designed for unstructured data (image, voice, and text) rather than structured tabular data, which has wide real-world applications, e.g., recommendation systems, fraud detection, and click-through rate prediction. To bridge this research gap, we make the first attempt to design a backdoor attack framework, named BAD-FM, for tabular data prediction models. Unlike images or voice samples composed of homogeneous pixels or signals with continuous values, tabular data samples contain well-defined heterogeneous fields that are usually sparse and discrete. Tabular data prediction models do not rely solely on deep networks but combine shallow components (e.g., the factorization machine, FM) with deep components to capture sophisticated feature interactions among fields. To tailor the backdoor attack framework to tabular data models, we carefully design field selection and trigger formation algorithms to intensify the influence of the trigger on the backdoored model. We evaluate BAD-FM with extensive experiments on four datasets, i.e., HUAWEI, Criteo, Avazu, and KDD. The results show that BAD-FM can achieve an attack success rate as high as 100% at a poisoning ratio of 0.001%, outperforming baselines adapted from existing backdoor attacks against unstructured data models. As tabular data prediction models are widely adopted in finance and commerce, our work may raise alarms about the potential risks of these models and spur future research on defenses.
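The abstract notes that tabular prediction models combine a shallow factorization-machine component with deep networks to capture pairwise feature interactions. As context, the standard second-order FM term can be computed in linear time via the well-known identity 0.5 * ((Vx)^2 - (V^2)(x^2)). A minimal sketch (function name, shapes, and values are illustrative, not from the paper):

```python
import numpy as np

def fm_second_order(x, V):
    """Second-order FM interaction term.

    x: (n_features,) input vector (sparse in practice)
    V: (n_features, k) latent factor matrix

    Equivalent to sum over i < j of (V[i] @ V[j]) * x[i] * x[j],
    but computed in O(n_features * k) instead of O(n_features^2 * k).
    """
    xv = x @ V                    # (k,) sum_i x_i * v_i
    x2v2 = (x ** 2) @ (V ** 2)    # (k,) sum_i x_i^2 * v_i^2
    return 0.5 * float(np.sum(xv ** 2 - x2v2))
```

In DeepFM-style models this term is added to a linear part and a deep network's output; the trick above is why the FM component stays cheap even with many sparse fields.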
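The attack described in the abstract poisons a small fraction of training samples by stamping selected fields with trigger values and relabeling them with the attacker's target. The sketch below shows only this generic data-poisoning step, not the paper's field-selection or trigger-formation algorithms; all names and parameters are hypothetical:

```python
import numpy as np

def poison_dataset(X, y, trigger_fields, trigger_values, target_label, ratio, seed=0):
    """Generic tabular poisoning sketch (not the BAD-FM algorithm).

    X: (n_samples, n_fields) feature matrix
    trigger_fields / trigger_values: which fields to stamp, and with what
    ratio: fraction of the training set to poison (e.g., 0.001% in the paper)
    """
    rng = np.random.default_rng(seed)
    Xp, yp = X.copy(), y.copy()
    n_poison = max(1, int(len(X) * ratio))
    idx = rng.choice(len(X), size=n_poison, replace=False)
    for f, v in zip(trigger_fields, trigger_values):
        Xp[idx, f] = v            # stamp the trigger into the chosen fields
    yp[idx] = target_label        # relabel poisoned samples to the target class
    return Xp, yp
```

At inference time, the attacker stamps the same trigger fields on any input to flip the backdoored model's prediction; BAD-FM's contribution is choosing *which* fields and values maximize this effect under FM-based architectures.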