As an example, take a pharmaceutical company that sells drugs on the Internet. The marketing department may want to mine the collected user data to fashion a new advertising campaign. To prevent privacy breaches through data inference, it is critical that this data be anonymized before it is analyzed. Data anonymization allows analysis to take place while ensuring that no sensitive information can be learned about a specific individual. The process is harder than it may seem: even a combination of non-personal data points can be exploited to deduce who a record belongs to.
Using our example, even if the dataset given to the sales department has had individual customer names and email addresses removed, research shows that about half of the U.S. population can be identified from just three pieces of information: date of birth, gender and place of residence. If a ZIP code is available, the figure rises to 85%. Date of birth, gender and place would be useful for an advertising campaign, but taken together they could enable a salesperson to re-associate a customer with his or her purchase records, causing what is called a re-identification disclosure. If those purchases were for drugs that treat a particular illness, the salesperson could deduce that the customer has that disease, resulting in a predictive disclosure and a breach of the customer's privacy.
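One way to gauge this re-identification risk is to count how many records share each combination of quasi-identifiers; any combination held by a single record uniquely points to one person. The following is a minimal sketch, using hypothetical records represented as dictionaries with assumed `dob`, `gender` and `zip` fields:

```python
from collections import Counter

# Hypothetical customer records with direct identifiers (name, email) already removed.
records = [
    {"dob": "1984-03-12", "gender": "F", "zip": "02139"},
    {"dob": "1984-03-12", "gender": "F", "zip": "02139"},
    {"dob": "1975-07-01", "gender": "M", "zip": "02139"},
    {"dob": "1990-11-23", "gender": "F", "zip": "10001"},
]

# Count how many records share each quasi-identifier combination.
combos = Counter((r["dob"], r["gender"], r["zip"]) for r in records)

# A combination that appears only once uniquely identifies an individual.
unique = [combo for combo, count in combos.items() if count == 1]
print(len(unique))  # → 2
```

Here two of the four records are uniquely identifiable from their quasi-identifiers alone, even though no name or email appears anywhere in the data.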
When analyzing Web application data, it is important to take steps to anonymize it, and the inclusion of any sensitive data should be carefully considered. Unfortunately, data anonymization is still in its infancy. Disguising or hiding certain fields in the original dataset can provide general privacy protection while still allowing reasonably accurate analysis; instead of providing a date of birth, for example, an alternative is to use age groups. However, the only fully effective way to prevent disclosures like the one above is to remove the analytically valuable information from the dataset. Finally, one more important warning: never use real customer data when testing a new system.