Court of Justice of the European Union rules that Meta must minimise the amount of personal information it uses for personalised advertising, in this case about sexual orientation

October 7, 2024

Max Schrems has struck again. He has been successful in his claim against Meta over the use of data about a user’s sexual orientation in personalised advertising, as reported by the BBC in Meta must limit data for personalised ads – EU court and by Breaking News in Activist wins privacy case against Meta over personal data on sexual orientation.

Meta and other social media platforms use data to drive the effectiveness of personalised ads.  That means the collection of data, especially personal information, is a priority.  In practice, sensitive information, such as sexual orientation, may assist in refining the nature of ads directed at a person.

The final judgment has not been published as yet. 

The BBC article provides:

Facebook-owner Meta must minimise the amount of people’s data it uses for personalised advertising, the EU’s highest court says.

The Court of Justice of the European Union (CJEU) ruled in favour of privacy campaigner Max Schrems, who complained that Facebook misused his personal data about his sexual orientation to target ads at him.

In complaints first heard by Austrian courts in 2020, Mr Schrems said he was targeted with adverts aimed at gay people despite never sharing information about his sexuality on the platform.

The CJEU said on Friday that data protection law does not unequivocally allow the company to use such data for personalised advertising.

“An online social network such as Facebook cannot use all of the personal data obtained for the purposes of targeted advertising, without restriction as to time and without distinction as to type of data,” it said.

Data relating to someone’s sexual orientation, race or ethnicity or health status is classed as sensitive and carries strict requirements for processing under EU data protection law.

Meta says it does not use so-called special category data to personalise adverts.

“We await the publication of the Court’s judgment and will have more to share in due course,” said a Meta spokesperson responding to a summary of the judgment on Friday.

They said the company takes privacy “very seriously” and it has invested more than five billion Euros “to embed privacy at the heart of all of our products”.

Facebook users can also access a wide range of tools and settings to manage how their information is used, they added.

“We are very pleased by the ruling, even though this result was very much expected,” said Mr Schrems’ lawyer Katharina Raabe-Stuppnig.

“Following this ruling only a small part of Meta’s data pool will be allowed to be used for advertising – even when users consent to ads,” they added.

Read the rest of this entry »

ChatGPT, where privacy and AI collide

January 24, 2023

Artificial intelligence (better known as AI) has been around for a while.  AI algorithms are a key part of Google’s success, discerning our interests and needs and ordering goods and services as part of the search engine’s operation.  Facebook and Amazon also rely on AI in making their money, with Facebook selling ads and Amazon putting items within tantalising reach.  AI has moved to centre stage in public policy discussion because its use threatens to be ubiquitous.  AI, and quantum computing, will be transformative in how business is done, services are provided and decisions are made.  That is likely to be for the good, but there are legitimate concerns about its untrammelled use without regulatory oversight.  It will also impact employment, with the most recent example being Microsoft laying off 10,000 staff to cut costs as it focuses on AI.

In the United States there are concerns that the use of ChatGPT has the potential to breach privacy laws. In the UK the Information Commissioner’s Office is sufficiently concerned about the use of AI that it published an article on its website titled Addressing concerns on the use of AI by local authorities.

ChatGPT is an algorithm that is vexing educational institutions because it creates realistic text which may be difficult to distinguish from human-created prose. It may defy anti-plagiarism software. This is well summarised by the ABC in What is ChatGPT and why are schools and universities so worried about it?   It Read the rest of this entry »

To disclose or not to disclose a data breach…UK companies fear reporting while a Brooklyn hospital suffers a backlash because it did not notify about a data breach

December 7, 2022

In Australia, under Part IIIC of the Privacy Act 1988, organisations covered by the Privacy Act and Commonwealth Government agencies are required to notify of a data breach in certain circumstances, what is known as an eligible data breach.  It is effectively a self-assessment, though there are consequences if there is no notification when there should have been one.  It is a regime that has been justifiably criticised in the wake of the Optus and Medibank data breaches.  The recent amendments to the regime improve rather than fix its operation.

It is an open secret that there is significant under-reporting of data breaches in the United States, United Kingdom and Australia.

In UK Companies Fear Reporting Cyber Incidents, Parliament Told, Data Breach Today reports that there may be a deep reluctance to report breaches to the UK Information Commissioner.  There is mandatory data breach notification in the United Kingdom, and affected entities are supposed to report within 72 hours of becoming aware of a breach.  This reluctance to report can and often does backfire, as the story Brooklyn Hospitals Decried for Silence on Cyber Incident shows.  In that case the Brooklyn hospitals were hit with a ransomware attack on 19 November which necessitated transferring patients to other hospitals.  The lack of explanation caused annoyance, at a minimum, for other hospitals as well as the patients affected.  This poor practice results in even closer scrutiny by regulators.

The reluctance of UK entities to report a data breach because of additional scrutiny from the Information Commissioner remains poor practice.  It is almost trite to say that organisations that suffer data breaches almost invariably had privacy and data security as a low priority, which translated into inadequate training and data handling practices.  When regulators respond to a notification they often find a litany of other issues.  Sometimes those are the issues that cause the organisations the greater difficulty.  A common problem is data retention.  Many organisations hold onto personal information long after they have any need for it.  Names of long departed or deceased customers/patients, details of people who have unsubscribed from a service and unsolicited information are commonly held.  Because storage is relatively inexpensive and data held digitally does not take up physical space, it is not inconvenient to hold that data for whatever reason.

As Medlab discovered once Read the rest of this entry »

European Union Commission proposes an Artificial Intelligence Directive to complement the Artificial Intelligence Act

October 9, 2022

Artificial intelligence is now a key policy challenge across a range of disciplines: administration of justice, privacy, access to services, insurance and other forms of risk assessment, medicine, construction and manufacturing.  It is transformative and will continue to be so.  It also poses questions as to liability for products.  The EU has proposed a legal framework on AI.

The reasons for that framework are described by the EU as follows:

The Commission is proposing the first-ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally.

The regulatory proposal aims to provide AI developers, deployers and users with clear requirements and obligations regarding specific uses of AI. At the same time, the proposal seeks to reduce administrative and financial burdens for business, in particular small and medium-sized enterprises (SMEs).

The proposal is part of a wider AI package, which also includes the updated Coordinated Plan on AI. Together, the Regulatory framework and Coordinated Plan will guarantee the safety and fundamental rights of people and businesses when it comes to AI. And, they will strengthen uptake, investment and innovation in AI across the EU.

Why do we need rules on AI?

The proposed AI regulation ensures that Europeans can trust what AI has to offer. While most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that we must address to avoid undesirable outcomes.

For example, it is often not possible to find out why an AI system has made a decision or prediction and taken a particular action. So, it may become difficult to assess whether someone has been unfairly disadvantaged, such as in a hiring decision or in an application for a public benefit scheme.

Although existing legislation provides some protection, it is insufficient to address the specific challenges AI systems may bring.

The proposed rules will:

    • address risks specifically created by AI applications;
    • propose a list of high-risk applications;
    • set clear requirements for AI systems for high risk applications;
    • define specific obligations for AI users and providers of high risk applications;
    • propose a conformity assessment before the AI system is put into service or placed on the market;
    • propose enforcement after such an AI system is placed in the market;
    • propose a governance structure at European and national level.

A risk-based approach

[Figure: pyramid showing the four levels of risk: unacceptable risk, high risk, limited risk, minimal or no risk]

The Regulatory Framework defines 4 levels of risk in AI:

    • Unacceptable risk
    • High risk
    • Limited risk
    • Minimal or no risk

Unacceptable risk

All AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned, from social scoring by governments to toys using voice assistance that encourages dangerous behaviour.

High risk

AI systems identified as high-risk include AI technology used in:

    • critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
    • educational or vocational training, that may determine the access to education and professional course of someone’s life (e.g. scoring of exams);
    • safety components of products (e.g. AI application in robot-assisted surgery);
    • employment, management of workers and access to self-employment (e.g. CV-sorting software for recruitment procedures);
    • essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan);
    • law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
    • migration, asylum and border control management (e.g. verification of authenticity of travel documents);
    • administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

High-risk AI systems will be subject to strict obligations before they can be put on the market:

    • adequate risk assessment and mitigation systems;
    • high quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
    • logging of activity to ensure traceability of results;
    • detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
    • clear and adequate information to the user;
    • appropriate human oversight measures to minimise risk;
    • high level of robustness, security and accuracy.

All remote biometric identification systems are considered high risk and subject to strict requirements. The use of remote biometric identification in publicly accessible spaces for law enforcement purposes is, in principle, prohibited.

Narrow exceptions are strictly defined and regulated, such as when necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence.

Such use is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.

Limited risk

Limited risk refers to AI systems with specific transparency obligations. When using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back.

Minimal or no risk

The proposal allows the free use of minimal-risk AI. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.

[Figure: step-by-step process for declaration of conformity]
How does it all work in practice for providers of high risk AI systems?

Once an AI system is on the market, authorities are in charge of market surveillance, users ensure human oversight and monitoring, and providers have a post-market monitoring system in place. Providers and users will also report serious incidents and malfunctioning.

Future-proof legislation

As AI is a fast evolving technology, the proposal has a future-proof approach, allowing rules to adapt to technological change. AI applications should remain trustworthy even after they have been placed on the market. This requires ongoing quality and risk management by providers.

And on that note the EU has Read the rest of this entry »

Federal Trade Commission sues Kochava for selling data which tracks people’s movements to sensitive locations

September 6, 2022

The US Federal Trade Commission warned as far back as July that it would focus on illegal sharing of highly sensitive health data.  That was preceded by a warning in September 2021 to Health Apps and Connected Device Companies that they had to comply with health breach notification rules.  In June 2021 the FTC settled with Flo Health, a fertility tracking app which inappropriately shared sensitive health data with Facebook and Google.  On 11 August 2022 the FTC announced it was embarking on commercial surveillance rule making.

In that context it is not surprising that the FTC has commenced proceedings against Kochava for selling data which tracks people when they are involved in sensitive activities, such as attending health clinics and places of worship.

The media release provides:

The Federal Trade Commission filed a lawsuit against data broker Kochava Inc. for selling geolocation data from hundreds of millions of mobile devices that can be used to trace the movements of individuals to and from sensitive locations. Kochava’s data can reveal people’s visits to reproductive health clinics, places of worship, homeless and domestic violence shelters, and addiction recovery facilities. The FTC alleges that by selling data tracking people, Kochava is enabling others to identify individuals and exposing them to threats of stigma, stalking, discrimination, job loss, and even physical violence. The FTC’s lawsuit seeks to halt Kochava’s sale of sensitive geolocation data and require the company to delete the sensitive geolocation information it has collected.

“Where consumers seek out health care, receive counseling, or celebrate their faith is private information that shouldn’t be sold to the highest bidder,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection. “The FTC is taking Kochava to court to protect people’s privacy and halt the sale of their sensitive geolocation information.”

Idaho-based Kochava purchases vast troves of location information derived from hundreds of millions of mobile devices. The information is packaged into customized data feeds that match unique mobile device identification numbers with timestamped latitude and longitude locations. According to Kochava, these data feeds can be used to assist clients in advertising and analyzing foot traffic at their stores and other locations. People are often unaware that their location data is being purchased and shared by Kochava and have no control over its sale or use. Read the rest of this entry »
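Why such feeds are so re-identifying is easy to sketch.  The following is an illustration only, not drawn from the FTC’s complaint: the record fields, place names and coordinates are all hypothetical.  It shows how a feed of persistent device identifiers with timestamped latitude/longitude pings can be screened against a list of sensitive locations.

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class FeedRow:
    """One hypothetical row of a location data feed: a persistent mobile
    device identifier matched with a timestamped latitude/longitude ping."""
    device_id: str
    timestamp: datetime
    lat: float
    lon: float

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/long points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical sensitive locations (name, latitude, longitude).
SENSITIVE_PLACES = [
    ("reproductive health clinic", 43.6160, -116.2020),
    ("place of worship", 43.6200, -116.2100),
]

def flag_sensitive_visits(rows, radius_km=0.1):
    """Return (device_id, timestamp, place) for every ping that falls
    within radius_km of a listed sensitive location."""
    hits = []
    for row in rows:
        for name, plat, plon in SENSITIVE_PLACES:
            if haversine_km(row.lat, row.lon, plat, plon) <= radius_km:
                hits.append((row.device_id, row.timestamp, name))
    return hits
```

Because the identifier persists across pings, hits for the same device at a home address overnight and at a clinic during the day are enough to tie the visits to an individual, which is the essence of the FTC’s allegation.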

Bill on Health and Location Data Protection Act introduced in the US Senate

June 19, 2022

The US has had a long tradition of commercialising customer lists.  With that has come the “data broker”, putting holders of data in touch with those who are keen to use that data.  In the analog age it was a matter of mild concern, typically with people getting unexpected correspondence and offers.  A common example was someone signing up for a hunting magazine being offered membership of the National Rifle Association.  In terms of scale the problem was real and concerning but not threatening to a person’s privacy.  Most people subscribe to a limited number of publications, and it was not until relatively recently that providing masses of personal information became a prerequisite for even the most anodyne activity.

In the digital age, businesses’ appreciation of the advantage of knowing as much as possible about customers or potential customers, combined with a vastly improved ability to collect masses of data and process them into useful information, has meant that the collection of information is key.  And that has led to worrying practices, such as the collection of sensitive and health information.  In that context, on 15 June 2022 Senator Elizabeth Warren introduced Senate Bill 4408 in the U.S. Senate to prohibit data brokers from selling and transferring certain sensitive data.

Australia has not had a tradition of, or framework for, data brokers but that does not mean there has not been the sale of data from time to time.  Recently the Federal Government has made the transfer of data between government agencies and educational institutions much easier.  The privacy protections were added as an afterthought.  It remains a problematic piece of legislation.

The Bill would Read the rest of this entry »

Facial recognition technology at Kmart, Bunnings and The Good Guys…serious privacy concerns. It highlights the inadequacy of privacy legislation and the poor regulation of what there is.

June 15, 2022

Choice has published the findings of its investigation into retailers’ use of facial recognition, identifying Kmart, Bunnings and The Good Guys as using the technology in stores.  The Australian has picked up on that story with Faceprint technology: Kmart, Bunnings and The Good Guys are scanning customers’ faces in stores.

Both stories cover a disturbing pattern of organisations deploying privacy-intrusive technology without any real restrictions or regulation.  As the stories make clear, compliance with the Privacy Act 1988 as to the collection of personal information is either buried in online privacy statements or in small, inconspicuous written notices under the heading “conditions of entry” off to the side of the entrance to Kmart stores.  This is arrogance writ large.  Kmart has undergone a box-ticking exercise.  And the excuse used by Bunnings, that facial recognition technology is used “..to help identify persons of interest who have previously been involved in incidents of concern in our stores,” and that it is “..an important measure that helps us to maintain a safe and secure environment for our team and customers,” is, if true, a wholly disproportionate response to a problem Read the rest of this entry »

Google’s AI chatbot sentient…interesting if unlikely at the moment…but it does highlight impacts for the law.

June 14, 2022

Blake Lemoine, hardly a household name, has the tech world and Google aflutter with his suggestion that Google’s artificial intelligence chatbot has become sentient.  That has earned him a suspension and whatever else Google can come up with on his return.  Google has Read the rest of this entry »

National Institute of Standards and Technology issues Blockchain for Access Control Systems NISTIR 8403

May 27, 2022

The National Institute of Standards and Technology (“NIST”) has issued a guideline, Blockchain for Access Control Systems (NISTIR 8403).

The abstract provides:

The rapid development and wide application of distributed network systems have made network security – especially access control and data privacy – ever more important. Blockchain technology offers features such as decentralization, high confidence, and tamper-resistance, which are advantages to solving auditability, resource consumption, scalability, central authority, and trust issues – all of which are challenges for network access control by traditional mechanisms. This document presents general information for blockchain access control systems from the views of blockchain system properties, components, functions, and supports for access control policy models. Considerations for implementing blockchain access control systems are also included.

Blockchain systems provide an alternative (or complementary) system for reliability, security, accountability, and scalability for AC systems. Blockchain characteristics – such as transparency, distributed computing/storage, and a tamper-evident/tamper-resistant design – help to prevent AC data from being accessed or modified by malicious users. Access logs are also recorded in blocks that allow for the detection of malicious activities. Blockchain system components and their advantages for AC systems are Read the rest of this entry »
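The tamper-evident design the abstract refers to rests on hash chaining: each block commits to the hash of its predecessor, so any retrospective edit breaks the chain.  A minimal sketch of a hash-chained access log, an illustration of the chaining idea only and not an implementation drawn from NISTIR 8403, might look like this:

```python
import hashlib
import json
import time

def _hash_block(block: dict) -> str:
    """Deterministically hash a block's contents with SHA-256."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

class AccessLogChain:
    """Append-only access log in which every block commits to its predecessor."""

    def __init__(self):
        genesis = {"index": 0, "prev_hash": "0" * 64,
                   "timestamp": time.time(), "entry": "genesis"}
        self.blocks = [genesis]
        self.hashes = [_hash_block(genesis)]

    def append(self, subject: str, resource: str, decision: str) -> None:
        """Record an access decision (e.g. permit/deny) as a new block."""
        block = {"index": len(self.blocks),
                 "prev_hash": self.hashes[-1],  # link to the previous block
                 "timestamp": time.time(),
                 "entry": {"subject": subject, "resource": resource,
                           "decision": decision}}
        self.blocks.append(block)
        self.hashes.append(_hash_block(block))

    def verify(self) -> bool:
        """Re-hash every block and walk the chain; any edit breaks a link."""
        for i, block in enumerate(self.blocks):
            if _hash_block(block) != self.hashes[i]:
                return False
            if i > 0 and block["prev_hash"] != self.hashes[i - 1]:
                return False
        return True

# Usage: log two access decisions, then confirm the chain is intact.
if __name__ == "__main__":
    chain = AccessLogChain()
    chain.append("alice", "/records/42", "permit")
    chain.append("mallory", "/records/42", "deny")
    print(chain.verify())                              # True
    chain.blocks[1]["entry"]["decision"] = "permit"    # tamper with the log
    print(chain.verify())                              # False
```

Editing any earlier entry changes that block’s hash, so verify() returns False.  Real blockchain access control systems add distribution and consensus across nodes on top of this, which is what addresses the central authority and trust issues the abstract mentions.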

European Commission releases proposed regulatory framework governing artificial intelligence

April 27, 2021

The European Commission has recently released its proposed regulation of Artificial Intelligence.  It is the first-ever legal framework on AI.  Given the impact of the European Union’s General Data Protection Regulation (GDPR) on worldwide data collection, use, storage and security, this proposal, if the Artificial Intelligence Act becomes European law, will have a similarly significant impact.  The proposal runs to 108 pages with 17 pages of attachments.  It will be a seriously large, laborious and slow process to go from proposal to adoption.

There is a useful 10-page overview titled Communication on Fostering a European approach to Artificial Intelligence.  And, being the EU, there is a 66-page, overlong and detailed plan on AI titled Coordinated Plan on Artificial Intelligence 2021 Review.

The media release Read the rest of this entry »
