Anonymising personal data need not guarantee privacy, says ICO

November 25, 2012

In a recently released code of practice on anonymisation, the UK’s Information Commissioner’s Office (ICO) states that data anonymisation does not have to provide a 100% guarantee of individuals’ privacy in order for it to be lawful for organisations to disclose the information.

Organisations that anonymise personal data can disclose that information even if there is a “remote” chance that the data can be matched with other information and lead to individuals being identified. Organisations that take action to mitigate the risk of anonymised data being used to identify individuals will be considered to have complied with the Data Protection Act even if that action cannot eradicate the threat of the data being used to identify someone. The Act does not require anonymisation to be completely risk free.

The ICO said that it can be difficult for organisations to know whether data they have anonymised can still be classed as ‘personal data’. In “borderline” cases, organisations will have to assess the individual “circumstances of the case” to determine whether there is too great a risk that disclosing anonymised data would lead to individuals being identified. In cases where the risk of data matching is high, organisations can reduce that risk by only disclosing “parts of databases” in order to make “direct linkage more difficult”.
While the ICO does not require organisations to obtain the consent of individuals before disclosing anonymised data, it may not always be appropriate to disclose such information if there is a risk that an “educated guess” could be made as to the identity of the person whose data has been anonymised, particularly where that guess leads to the misidentification of an individual.

The Guardian has covered this in “New code of practice to minimise privacy risks in anonymised data”.

It provides:

The Information Commissioner’s Office (ICO) has announced a new data protection code of practice, which advises on how to protect the privacy rights of individuals while dealing with large and rich databases.

The public sector is making increasing use of big data and earlier this month Margaret Hodge, chair of the Public Accounts Committee, spoke of the need for the government to improve its big data strategy in order to realise £33bn of potential savings.

With the increased use of large databases, especially those containing data on members of the public, comes a heightened risk of breaching the individual’s right to privacy.

Even when such data is anonymised, some data-sets contain such rich information that it can be possible to identify an individual through the data alone unless proper precautions are taken.

The announcement sets out best practice in ensuring anonymised data lives up to its name, so that even the most determined attempts to identify an individual from a public data-set will prove fruitless.

Speaking at the publication of the new code of practice Christopher Graham, UK Information Commissioner, said:

“We have published our code of practice on managing the data protection risks related to anonymisation to provide a framework for practitioners to use when considering whether to produce anonymised information. The code also aims to bring a greater consistency of approach and to show what we expect of organisations using this data.

“Failure to anonymise personal data correctly can result in enforcement action from the ICO. However we recognise that anonymised data can have important benefits, increasing the transparency of government and aiding the UK’s widely regarded research community.”

Big data is being used by governments throughout the world across areas including clamping down on tax evasion and improving healthcare provision.

The announcement should go some way towards allaying concerns that the increasing collection and manipulation of personal data would allow individuals to be identified.

Bridget Treacy, who leads the UK Privacy and Information Management practice at law firm Hunton & Williams, spoke of how the code will help data-holders anonymise their data to the highest standards:

“Ensuring that data is properly anonymised, and not just “masked” can be very difficult to achieve in practice, particularly as technology is constantly evolving. Organisations often are uncertain about the legal basis for the anonymisation process itself, and whether anonymised data might constitute personal data. The code deals with both of these issues.”

Treacy cautioned that the code “will not be legally binding”, but added that it “may influence enforcement.”

The code focuses on ensuring that new forms and quantities of data are managed within the legal framework of the Data Protection Act (DPA) 1998.

The European Data Protection Directive states that the principles of data protection do not apply to data anonymised in such a way that its subject is no longer identifiable, essentially placing the onus on data controllers to ensure that this guarantee of anonymity is met.

The key consideration here is not whether it is possible for an individual to be identified, but rather the likelihood of such identification taking place.

As such, the code sets out a framework for an organisation to follow when establishing this level of probability, and thus whether or not the DPA applies to the databases it is handling.

In addition to the initial anonymisation process, those holding such data must account for the likelihood of re-identification, the process by which someone in possession of one data-set could combine it with one or more additional databases to establish an individual’s identity.
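The linkage risk described above can be illustrated with a minimal sketch. The records, field names and quasi-identifiers below are entirely hypothetical, not taken from the code of practice: it simply shows how an anonymised data-set with names removed can still be matched, row by row, against an identified public data-set that shares attributes such as postcode district and year of birth.

```python
# Sketch of a linkage (re-identification) attack: an anonymised data-set
# is joined to a second, identified data-set on shared quasi-identifiers.
# All records and field names here are hypothetical.

anonymised = [  # e.g. a released health data-set with names stripped out
    {"postcode": "M1 4", "birth_year": 1975, "diagnosis": "asthma"},
    {"postcode": "SO14", "birth_year": 1982, "diagnosis": "diabetes"},
]

public = [  # e.g. an electoral-roll extract with names attached
    {"name": "A. Example", "postcode": "M1 4", "birth_year": 1975},
    {"name": "B. Sample", "postcode": "SO14", "birth_year": 1982},
]

def link(anon_rows, known_rows, keys=("postcode", "birth_year")):
    """Match records that agree on every quasi-identifier in `keys`."""
    matches = []
    for a in anon_rows:
        for k in known_rows:
            if all(a[q] == k[q] for q in keys):
                matches.append({**k, **a})  # name now paired with diagnosis
    return matches

for m in link(anonymised, public):
    print(m["name"], "->", m["diagnosis"])
```

With only two quasi-identifiers each row matches uniquely, which is exactly the scenario the code asks data-holders to assess before release.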

This is of particular concern when viewed in conjunction with the Freedom of Information (FOI) Act, since an organisation dealing with an FOI request must decide whether the release of its data would breach the DPA.

Until now, there was no written obligation for an FOI officer to consider the implications of a data release for use in re-identifying individuals in other data-sets, but this will now have to be taken into account.

Another consideration for data-holders set out in the code is the question of whether or not their anonymised data could be combined, for the purpose of re-identification, with records available through social media or internet searches.

Geospatial data is also covered, with the code recommending broadening the temporal or spatial scale of data in order to decrease the possibility of using a specific location or event to identify an individual or incident.

As an example, the code recommends using heat-maps for crime mapping, minimising the risk that an individual address, individual or crime could be identified.
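The broadening of spatial scale that the code recommends can be sketched as simple grid aggregation. The cell size, suppression threshold and coordinates below are illustrative assumptions, not figures from the code: exact incident locations are coarsened into grid cells, and cells with too few incidents are suppressed, so the published counts cannot pinpoint an individual address or event.

```python
# Sketch of spatial generalisation for crime mapping: exact coordinates
# are binned into grid cells and sparse cells are suppressed. The cell
# size (degrees) and minimum count are hypothetical policy choices.

from collections import Counter

def heat_map(points, cell=0.01, min_count=3):
    """Bin (lat, lon) points into `cell`-degree squares; drop sparse cells."""
    counts = Counter(
        (round(lat // cell * cell, 4), round(lon // cell * cell, 4))
        for lat, lon in points
    )
    # Suppress cells below the threshold so isolated incidents vanish.
    return {c: n for c, n in counts.items() if n >= min_count}

incidents = [  # hypothetical incident coordinates
    (53.4801, -2.2412), (53.4805, -2.2409), (53.4803, -2.2415),
    (50.9097, -1.4044),
]
print(heat_map(incidents))
```

The three clustered incidents survive as one aggregated cell count, while the lone incident is dropped entirely, trading spatial precision for a lower re-identification risk.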

The ICO has also announced the creation of a new UK Anonymisation Network (UKAN) led by the University of Manchester, with the University of Southampton, Office for National Statistics and the government’s new Open Data Institute (ODI).

The Network will receive £15,000 from the ICO over the next two years to enable good practice related to anonymisation to be shared across the public and private sectors.
