
Utility-Based Anonymization Using Generalization Boundaries to Protect Sensitive Attributes

Authors: Abou-el-ela Abdou Hussien, Nagy Ramadan Darwish, Hesham A. Hefny

Affiliations: Department of Computer Science, Faculty of Science and Arts, Shaqra University, Shaqra, KSA; Department of Computer and Information Sciences, Institute of Statistical Studies and Research, Cairo University, Cairo, Egypt

Publication: Journal of Information Security

Year/Volume/Issue: 2015, Vol. 6, No. 3

Pages: 179-196

Subject classification (as indexed): 1002 [Medicine - Clinical Medicine], 100214 [Medicine - Oncology], 10 [Medicine]

Keywords: Privacy; Privacy Preserving Data Mining; K-Anonymity; Generalization Boundaries; Suppression

Abstract: Privacy preserving data mining (PPDM) has become increasingly important because it allows privacy-sensitive data to be shared for analytical purposes. Many privacy techniques have been developed, most of them based on the k-anonymity property, which has several shortcomings; other privacy models were therefore introduced (l-diversity, p-sensitive k-anonymity, (α, k)-anonymity, t-closeness, etc.). While these models differ in their methods and in the quality of their results, they all focus first on masking the data and only afterwards on protecting its quality. This paper proposes an enhanced privacy technique that combines several anonymity techniques to maintain both privacy and data utility. Each attribute is assigned a sensitivity weight that reflects its utility, making the anonymization utility-based; only queries whose sensitive attributes have total weights exceeding a threshold are modified using generalization boundaries, while the remaining queries can be published directly. The threshold value is calculated from the weights assigned to the individual attributes. Experimental results obtained with the UT Dallas Anonymization Toolbox on the real Adult data set from the UCI Machine Learning Repository show that the proposed technique preserves privacy while also maintaining the utility of the published data.
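The weighted-threshold step described in the abstract can be illustrated with a short sketch. The following Python snippet is a hypothetical illustration only, not the authors' implementation: the attribute names, sensitivity weights, threshold value, and boundary ranges are all assumed for the example.

```python
# Sketch of the weighted-threshold idea from the abstract: attributes carry
# utility-based sensitivity weights, and only records whose total weight
# exceeds a threshold are generalized into predefined boundary ranges;
# everything else is published unchanged. All concrete values are assumptions.

SENSITIVITY_WEIGHTS = {"age": 0.2, "zipcode": 0.3, "disease": 0.5}  # assumed weights
THRESHOLD = 0.6  # assumed threshold derived from the attribute weights

# Hypothetical generalization boundaries: each quasi-identifier maps to the
# coarser (lower, upper) range it may be generalized into.
GENERALIZATION_BOUNDARIES = {"age": (30, 40), "zipcode": (75000, 75999)}


def generalize_to_boundary(bounds):
    """Replace a concrete value with its boundary range, rendered as a string."""
    lower, upper = bounds
    return f"[{lower}-{upper}]"


def anonymize_record(record):
    """Generalize a record only if its total sensitivity weight exceeds the threshold."""
    total_weight = sum(SENSITIVITY_WEIGHTS.get(attr, 0.0) for attr in record)
    if total_weight <= THRESHOLD:
        return dict(record)  # publish directly; full utility preserved
    out = {}
    for attr, value in record.items():
        if attr in GENERALIZATION_BOUNDARIES:
            out[attr] = generalize_to_boundary(GENERALIZATION_BOUNDARIES[attr])
        else:
            out[attr] = value
    return out


if __name__ == "__main__":
    # Total weight 1.0 > 0.6, so age and zipcode are coarsened to their ranges.
    print(anonymize_record({"age": 34, "zipcode": 75080, "disease": "flu"}))
```

In this sketch the per-attribute weights encode how much each attribute contributes to utility, so records touching only low-weight attributes pass through untouched, which is the utility-preserving behaviour the abstract claims.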
