Ensuring User Privacy and Model Security via Machine Unlearning: A Review
Author affiliations: College of Computer, National University of Defense Technology, Changsha 410073, China; School of Computing and Communications, Lancaster University, England B23, UK
Publication: Computers, Materials & Continua
Year/Volume/Issue: 2023, Vol. 77, No. 11
Pages: 2645-2656
Subject classification: 12 [Management]; 1201 [Management - Management Science and Engineering (degrees in Management or Engineering)]; 081104 [Engineering - Pattern Recognition and Intelligent Systems]; 08 [Engineering]; 0835 [Engineering - Software Engineering]; 0811 [Engineering - Control Science and Engineering]; 0812 [Engineering - Computer Science and Technology (degrees in Engineering or Science)]
Funding: supported by the National Key Research and Development Program of China (2020YFC2003404); the National Natural Science Foundation of China (Nos. 62072465, 62172155, 62102425, 62102429); the Science and Technology Innovation Program of Hunan Province (Nos. 2022RC3061, 2021RC2071); and the Natural Science Foundation of Hunan Province (No. 2022JJ40564)
Keywords: machine learning; machine unlearning; privacy protection; trusted data deletion
Abstract: As an emerging discipline, machine learning has been widely used in artificial intelligence, education, meteorology and other fields. In the training of machine learning models, trainers need to use a large amount of practical data, which inevitably involves user privacy. Moreover, by polluting the training data, a malicious adversary can poison the model, thus compromising model security. The data provider hopes that the model trainer can prove to them the confidentiality of the data. Users will be required to withdraw data when the trust is broken. In the meantime, trainers hope to forget the injected data to regain security when finding crafted poisoned data after the model training. Therefore, we focus on forgetting systems, the process of which we call machine unlearning, capable of forgetting specific data entirely and efficiently. In this paper, we present the first comprehensive survey of this field. We summarize and categorize existing machine unlearning methods based on their characteristics and analyze the relation between machine unlearning and relevant fields (e.g., inference attacks and data poisoning attacks). Finally, we briefly conclude the existing research directions.