UK Ministry of Justice announces international treaty addressing risks of AI, while in Australia the Department of Industry, Science and Resources launches public consultation on mandatory guardrails for high-risk AI

September 6, 2024

On September 5, 2024, the UK Ministry of Justice (MoJ) announced that the UK had signed the first legally binding treaty governing the safe use of artificial intelligence (AI). The new framework, agreed by the Council of Europe, commits parties to collective action to manage AI products and protect the public from potential misuse.

The treaty has three overarching safeguards, namely:

  • protecting human rights, including ensuring people’s data is used appropriately, their privacy is respected, and AI does not discriminate against them;
  • protecting democracy by ensuring countries take steps to prevent public institutions and processes from being undermined; and
  • protecting the rule of law by putting the onus on signatory countries to regulate AI-specific risks, protect their citizens from potential harm, and ensure AI is used safely.

The treaty requires countries to monitor AI development and ensure any technology is managed within strict parameters, and includes provisions to protect the public and their data, human rights, democracy, and the rule of law.

Countries must also act against activities that fall outside of these parameters to tackle the misuse of AI models which pose a risk to public services and the wider public.

Meanwhile in Australia, also on September 5, 2024, the Department of Industry, Science and Resources (DISR) announced a public consultation on a 69-page proposal paper to introduce mandatory guardrails for the safe and responsible use of artificial intelligence (AI) in high-risk settings.

The proposed mandatory guardrails Read the rest of this entry »

Western Australia moves slowly to have a Privacy Act

September 3, 2024

Western Australia is slowly moving towards having a Privacy Act. The Privacy and Responsible Information Sharing Bill 2024 has passed the Legislative Assembly and is working its way through the Legislative Council. It is principles-based legislation, modelled broadly on the Victorian, New South Wales and Queensland legislation. Its complaint and enforcement provisions are, like those of the other State Acts, quite process-oriented and generally weak. It has a significant weakness in dealing with complaints which are not resolved by conciliation. Under the legislation a complaint is determined by the Information Commissioner (section 104). However, the Commissioner is also involved in the mandatory attempt at conciliation of a complaint. A party should have a complaint heard by an independent judicial or quasi-judicial body, preferably a court. Tribunals have a poor record in considering privacy complaints. The jurisprudence of the Victorian Civil and Administrative Tribunal has been so ineffective as to render the enforcement provisions in Victoria a dead letter.

There will be 11 Information Privacy Principles (“IPPs”), which will apply to IPP Entities, including WA public entities, their contracted service providers, WA Government trading enterprises and departments, and local and regional governments.

Most of the IPPs follow the same structure as the Commonwealth APPs and State IPPs. A new development is a principle addressing automated decision-making. The weakness of the IPPs is that they are replete with exceptions, drafted in general terms and with vague terminology (such as what is “reasonable”). Such terminology has tended to be interpreted by Courts, Tribunals and Commissioners in favour of the entities. As such the protections are not as effective as they appear on paper.

Some of the key IPPs are:

IPP 1: Collection

Collection must be “necessary” for one or more of the IPP Entity’s functions or activities.  Personal information must be collected in a “fair and reasonable”, Read the rest of this entry »

Bills to amend Privacy Act delayed again. Not being introduced in August sitting but planned for introduction in September sitting

August 20, 2024

Privacy reform in Australia is an object lesson in what not to do. Reform has been tentative, minimalist and always inadequate. It has been handled poorly by governments of all persuasions. The latest turn of the screw is the news, courtesy of InnovationAus, that the bills to amend the Privacy Act 1988 will not be introduced into the House of Representatives in the August session. Instead they will be introduced in the September sittings, commencing 9 September 2024. The stated reason for this was legislative congestion. The legislation will be referred to committee, and any amendment proposals are likely to emerge there. It is hard to see the Bill returning to the House for a third reading and vote before the November sittings. Even if it passes the House of Representatives in November, it is ambitious to expect it to be introduced into the Senate and passed later in November 2024. Which means it will be carried over to the sittings in 2025. And that may pose a problem. The latest the Government can hold an election for both Houses of Parliament simultaneously is 17 May 2025. The budget is in May and Easter commences 18 April 2025. That means an election in March or early April is possible if not likely, which in turn means proroguing Parliament in late January or February. If the Bill has not been passed before Parliament is prorogued then it lapses and the process has to start over.

It is a very disappointing development. It shows what happens Read the rest of this entry »

Australian Government publishes policy for responsible use of Artificial Intelligence. Comes into force on 1 September 2024

August 17, 2024

The Australian Government has published a 19-page policy for the responsible use of AI. It comes into force on 1 September 2024.

The recommended actions include:

  • training staff on AI fundamentals, taking into account roles and responsibilities such as employees involved in procurement, development, training, and deployment of AI;
  • making publicly available a statement outlining their approach to AI adoption, including information on compliance with the policy, measures to monitor the effectiveness of deployed AI systems, and efforts to protect the public against negative impacts; and
  • designating accountable officials for implementation of the policy within their organization, who:
    • are the contact point for whole-of-government AI coordination;
    • must engage in whole-of-government AI forums and processes; and
    • must keep up to date with changing requirements as they evolve over time.

The key principles of the policy are aimed at ensuring that:

  • Australians are protected from harm;
  • AI risk mitigation is proportionate and targeted; and
  • AI use is ethical, responsible, transparent and explainable to the public.

The press release is found here and the policy here.

The press release provides:

The Australian Government needs a coordinated approach if it’s to embrace the opportunities of AI. The Digital Transformation Agency has released the Policy for the responsible use of AI in government, an important step to achieve this goal while building public trust.

Coming into effect 1 September 2024, the Policy for the responsible use of AI in government positions the Australian Government to be an exemplar of safe, responsible use of AI.

Designed to evolve with technology and community expectations, it sets out how the Australian Public Service (APS) will:

  • embrace the benefits of AI by engaging with it confidently, safely and responsibly
  • strengthen public trust through enhanced transparency, governance and risk assurance
  • adapt over time by embedding a forward-learning approach to changes in both technology and policy environments.

‘This policy will ensure the Australian Government demonstrates leadership in embracing AI to benefit Australians,’ states Lucy Poole, General Manager for Strategy, Planning, and Performance.

‘Engaging with AI in a safe, ethical and responsible way is how we will meet community expectations and build public trust.’

Enable, engage and evolve

The policy is driven by the ‘enable, engage and evolve’ framework to introduce principles, mandatory requirements and recommended actions.

Enable and prepare

Agencies will safely engage with AI to enhance productivity, decision-making, policy outcomes and government service delivery by establishing clear accountabilities for its adoption and use.

Every agency will need to identify accountable officials and provide them to the DTA within 90 days of the policy effect date.

Engage responsibly

To protect Australians from harm, agencies will use proportional, targeted risk mitigation and ensure their use of AI is transparent and explainable to the public.

Agencies will need to publish a public transparency statement outlining their approach to adopting and using AI within 6 months of the policy effect date.

Evolve and integrate

Flexibility and adaptability are necessary to accommodate technological advances, requiring ongoing review and evaluation of AI uses, and embedding feedback mechanisms throughout government.

Supporting agencies standards and guidance

To help implement the policy, the DTA has published a standard for accountable officials (AOs) to lead their agency to:

  • uplift its governance of AI adoption
  • embed a culture that fairly balances risk management and innovation
  • enhance its response and adaptation to AI policy changes
  • be involved in cross-government coordination and collaboration.

‘We’re encouraging AOs to be the primary point of partnership and cooperation inside their agency and between others,’ outlines Ms Poole.

‘They connect the appropriate internal areas to responsibilities under the policy, collect information and drive agency participation in cross-government activities.’

‘Whole-of-government forums will continue to support a coordinated integration of AI into our workplaces and track current and emerging issues.’

The DTA will also soon release a standard for AI transparency statements, setting out the information agencies should make publicly available such as the agency’s:

  • intentions for why it uses or is considering adoption of AI
  • categories of use where there may be direct public interaction without a human intermediary
  • governance, processes or other measures to monitor the effectiveness of deployed AI systems
  • compliance with applicable legislation and regulation
  • efforts to protect the public against negative impacts.

‘Statements must use clear, plain language and avoid technical jargon,’ stresses Ms Poole.

Further guidance on additional opportunities and measures will be issued over the coming months.

Continuing our significant work on responsible AI

The last 12 months saw important work to better posture the APS for emerging AI technologies including the AI in Government Taskforce, co-led by the DTA and Department of Industry, Science and Resources (DISR), which concluded on 30 June 2024. 

The taskforce brought together secondees and stakeholders from across the APS for an unprecedented level of consultation, collaboration and knowledge-sharing. Its outputs directly informed this new policy and even more, continuing work to ensure a consistent, responsible approach to AI by government.

‘Our AI in Government Taskforce was crucial in demonstrating that we need a centralised approach to how government embraces AI, if it wishes to mitigate risks and increase public trust,’ states Ms Poole.

Victorian Information Commissioner launches an investigation into the University of Melbourne for using surveillance technology against students who were involved in a campus sit-in

August 15, 2024

Last month the Office of the Victorian Information Commissioner (OVIC) was conducting preliminary enquiries with the University of Melbourne regarding its use of surveillance technology to identify and bring misconduct hearings against students who undertook pro-Palestine sit-ins. In July the University released a statement under the heading Conflict in the Middle East and activism on campus, in which it stated that the University of Melbourne “.. is a diverse, multi-cultural and multi-faith community..”, that it “has a duty to uphold the principles of academic freedom and freedom of speech, and respect for legitimate and peaceful protest is core to our university’s values, as well as an activity protected by law”, that it “operates fairly and in accordance with the law. Our policies also provide the basis for addressing actions or behaviours that adversely affect other members of the University community”, and that it aims “to understand and implement appropriate support for students and graduate researchers during this time, with an increase in provisions for health and wellbeing, assessments, and safety on our campuses.” Waffly boilerplate of the kind many organisations cobble together to cover and justify other activities and to mask behaviours not so consistent with the principles of the Enlightenment, which universities should use as a touchstone. Such as using surveillance technology to bring action against students for conducting a sit-in. As a result of disciplinary hearings 21 students received warnings.
OVIC has now confirmed that it will launch an investigation into the University of Melbourne under the Privacy and Data Protection Act 2014.

The confirmation was reported by The Australian in “OVIC to probe Melbourne Uni over student surveillance”, which provides:

The Office of the Victorian Information Commissioner will launch an investigation into the University of Melbourne after the academic institution used surveillance technology to gather evidence against students involved in a sit-in at a campus building.
Last month OVIC confirmed it was conducting preliminary enquiries with the university.
Victorian Information Commissioner Sean Morrison on Thursday confirmed the office has now decided to escalate the matter.
“Following conducting preliminary inquiries, the Privacy and Data Protection Deputy Commissioner has decided to commence an investigation under the Privacy and Data Protection Act 2014,” he said in a statement to The Australian.
“Given this is an active matter OVIC is unable to comment further until the investigation has concluded.”
In July, 21 students faced misconduct hearings before senior university representatives.
The students were notified of the disciplinary proceedings when the university sent them an email informing them they had breached its code of conduct during demonstrations and cited evidence from CCTV footage and Wi-Fi data obtained from the university’s network tracking their movements within the Arts West building during the 10-day sit in. Read the rest of this entry »

The UK Information Commissioner provisionally fines Advanced Computer Software Group Ltd (Advanced) £6.09 million after 2022 ransomware attack that disrupted the NHS

August 10, 2024

Cyber attacks on service providers working for large institutions, especially in the health sector, are common. Health services often contract out IT services, as they did with Advanced Computer Software Group Ltd (Advanced). Unfortunately organisations and agencies spend insufficient time ensuring that those contractors maintain adequate cyber protections and proper training regimes for their staff. Advanced provided IT services and handled personal information collected by the UK National Health Service in its capacity as a data processor. In August 2022 Advanced was hit with a ransomware attack in which the personal information of 82,946 people was exfiltrated. The NHS was affected, being unable to access patient records. The ICO has announced that it has provisionally decided to fine Advanced £6.09 million.

The announcement provides:

We have provisionally decided to fine Advanced Computer Software Group Ltd (Advanced) £6.09m, following an initial finding that the provider failed to implement measures to protect the personal information of 82,946 people, including some sensitive personal information.  

Advanced provides IT and software services to organisations on a national scale, including the NHS and other healthcare providers, and handles people’s personal information on behalf of these organisations as their data processor. Read the rest of this entry »

FTC commences an action against TikTok and ByteDance for violating children’s privacy law and against TikTok for infringing an existing consent order

August 6, 2024

The FTC, through the Department of Justice, has commenced an action against the video-sharing platform TikTok and its parent company ByteDance, alleging that they flagrantly violated the Children’s Online Privacy Protection Act (COPPA). The FTC also alleges TikTok infringed an existing 2019 FTC consent order against TikTok for violating COPPA, shortly after that order went into effect. The 2019 order was entered against two TikTok entities (previously Musical.ly and Musical.ly Inc., which ByteDance acquired in 2017 and renamed), which agreed to its terms to settle allegations that they violated the COPPA Rule by unlawfully collecting personal information from children under the age of 13.

The complaint alleges defendants failed to comply with the COPPA requirement to notify and obtain parental consent before collecting and using personal information from children under the age of 13.

The Press Release provides:

On behalf of the Federal Trade Commission, the Department of Justice sued video-sharing platform TikTok, its parent company ByteDance, as well as its affiliated companies, with flagrantly violating a children’s privacy law—the Children’s Online Privacy Protection Act—and also alleged they infringed an existing FTC 2019 consent order against TikTok for violating COPPA.

The complaint alleges defendants failed to comply with the COPPA requirement to notify and obtain parental consent before collecting and using personal information from children under the age of 13.

“TikTok knowingly and repeatedly violated kids’ privacy, threatening the safety of millions of children across the country,” said FTC Chair Lina M. Khan. “The FTC will continue to use the full scope of its authorities to protect children online—especially as firms deploy increasingly sophisticated digital tools to surveil kids and profit from their data.” Read the rest of this entry »

Texas Attorney General secures 1.4 billion dollar settlement over unauthorised collecting of personal biometric data

On July 30, 2024, the Office of the Attorney General of Texas (AG) announced that Texas has obtained a $1.4 billion settlement, payable over 5 years, with Meta Platforms Inc. over the unauthorized capture and use of the personal biometric data of Texans under the Texas Capture or Use of Biometric Identifier Act (CUBI). In 2011, Meta released and automatically activated a feature allowing users to ‘tag’ photographs with the names of the people in the photo, and ran facial recognition software on every face in the photographs uploaded to Facebook, capturing records of the facial geometry of an individual. In February 2022, the AG sued Meta for unlawfully capturing the biometric data of millions of Texans without obtaining their informed consent. The story has been reported by Reuters.

The press release Read the rest of this entry »

Global privacy sweep finds that nearly all of 1,000 websites and mobile apps tested use deceptive design patterns

July 20, 2024

The Privacy Commissioner, in GPEN Sweep finds majority of websites and mobile apps use deceptive design to influence privacy choices, has highlighted the results of the annual Global Privacy Enforcement Network (GPEN) sweep. GPEN issued its own press release, 2024 GPEN Sweep on deceptive design patterns. The sweep highlights the poor design, intentional or not, which frustrates users in either finding out what happens to their personal information or determining their rights via privacy policies. Unfortunately regulators do not take a strong enough position on incomprehensible privacy policies or on the complex processes, designed to thwart individuals, by which organisations purport to comply with legislation.

The GPEN media statement provides:

A global privacy sweep that examined more than 1,000 websites and mobile applications (apps) has found that nearly all of them employed one or more deceptive design patterns that made it difficult for users to make privacy-protective decisions. Read the rest of this entry »

Medisecure reveals that data breach earlier this year resulted in the theft of personal information of 12.9 million Australians. That makes the need for proper reform of the Privacy Act even more urgent

July 19, 2024

The numbers used to be staggering. Thousands, then hundreds of thousands of records taken in this or that cyber attack. Now the administrator of Medisecure Ltd and the liquidators of Operations MDS Pty Ltd have made a statement that the personal information of 12.9 million Australians has been compromised (a fancy word for stolen). This prompted the Department of Home Affairs to curate that statement into its own press release. This has been followed by reports on the ABC and the Guardian. As with many health-related services, the personal information collected is both voluminous and comprehensive. It included individuals’:

  • full name;
  • title;
  • date of birth;
  • gender;
  • email address;
  • address;
  • phone number;
  • individual healthcare identifier (IHI);
  • Medicare card number, including individual identifier, and expiry;
  • Pensioner Concession card number and expiry;
  • Commonwealth Seniors card number and expiry;
  • Healthcare Concession card number and expiry;
  • Department of Veterans’ Affairs (DVA) (Gold, White, Orange) card number and expiry;
  • prescription medication, including name of drug, strength, quantity and repeats; and
  • reason for prescription and instructions.

The administrator and liquidator’s statement relevantly Read the rest of this entry »