A Study on Identifying and Removing Sensitive Attributes in Data Mining

Kannasani Srinivasa Rao, M Krishnamurthy

Abstract


Data mining is a technique that extracts useful information, such as sequential patterns and trends, from large databases. Both the privacy-sensitive input data and the output data that is often used for selection decisions deserve protection against misuse. In this paper we describe work in progress on our research project into how, and to what extent, legal and ethical rules can be incorporated into data mining algorithms to prevent such misuse. For that purpose, data sets in the field of public safety, made available by police and justice departments, are used. The focus is on preventing selection rules from discriminating against particular groups of people in unethical or unlawful ways. Important questions are how existing legal and ethical rules and principles can be translated into a format understandable to computers, and in which way these rules can be used to guide the data mining process. Furthermore, the technical possibilities are used as feedback to draft concrete directions and recommendations for formalising legislation. This will further clarify how existing ethical and legal norms are to be applied to new technology and, where required, which new ethical and legal principles are to be developed. In contrast to previous attempts to protect privacy in data mining, we do not focus on (a priori) access-limiting methods for input data, but rather on (a posteriori) accountability and transparency. Instead of limiting access to data, which is increasingly hard to enforce in a world of automated and interlinked databases and information networks, we stress the question of how data can and may be used.
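The a posteriori accountability described above implies auditing mined selection rules for discriminatory effect rather than restricting the input data. A minimal sketch of such an audit is shown below, using the disparate-impact ratio as the fairness measure; the measure choice, function names, and toy data are illustrative assumptions and not taken from the paper itself:

```python
# Illustrative sketch: auditing a mined selection rule a posteriori
# for discriminatory effect. All names and data are hypothetical.

def selection_rate(records, rule, group_value, group_attr="group"):
    """Fraction of records in a group that the rule selects."""
    members = [r for r in records if r[group_attr] == group_value]
    if not members:
        return 0.0
    return sum(1 for r in members if rule(r)) / len(members)

def disparate_impact(records, rule, protected, reference, group_attr="group"):
    """Ratio of the protected group's selection rate to the reference
    group's. Values well below 1.0 suggest the rule may discriminate."""
    ref_rate = selection_rate(records, rule, reference, group_attr)
    if ref_rate == 0.0:
        return float("inf")
    return selection_rate(records, rule, protected, group_attr) / ref_rate

# Toy data: a rule selecting on postcode can discriminate indirectly
# when postcode correlates with group membership.
records = [
    {"group": "A", "postcode": "1011"},
    {"group": "A", "postcode": "1011"},
    {"group": "A", "postcode": "2022"},
    {"group": "A", "postcode": "2022"},
    {"group": "B", "postcode": "1011"},
    {"group": "B", "postcode": "2022"},
    {"group": "B", "postcode": "2022"},
    {"group": "B", "postcode": "2022"},
]
rule = lambda r: r["postcode"] == "1011"
print(disparate_impact(records, rule, protected="B", reference="A"))  # 0.5
```

A ratio of 0.5 here means group B is selected at half the rate of group A, flagging the rule for legal or ethical review; in a real audit the threshold and the protected attributes would come from the formalised legislation the paper discusses.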


