Bills to amend Privacy Act delayed again. Not introduced in the August sittings but introduction planned for the September sittings

August 20, 2024

Privacy reform in Australia is an object lesson in what not to do. Reform has been tentative, minimalist and always inadequate. It has been handled poorly by governments of all persuasions. The latest turn of the screw is the news, courtesy of InnovationAus, that the bills to amend the Privacy Act 1988 will not be introduced into the House of Representatives in the August session. Instead they will be introduced in the September sittings, commencing 9 September 2024. The stated reason for this was legislative congestion. The Bill will be referred to committee and any amendment proposals are likely to emerge there. It is hard to see the Bill returning to the House for a third reading and vote before the November sittings. Even if it passes the House of Representatives in November it is ambitious to expect it to be introduced into the Senate and passed later in November 2024. Which means it will be carried over to the sittings in 2025. And that may pose a problem. The latest the Government can hold an election for both Houses of Parliament simultaneously is 17 May 2025. The budget is in May and Easter commences 18 April 2025. That means an election in March or early April is possible if not likely, which in turn means proroguing Parliament in late January or February. If the Bill has not been passed before Parliament is prorogued it lapses and the process has to start over.

It is a very disappointing development. It shows what happens Read the rest of this entry »

Australian Government publishes policy for responsible use of Artificial Intelligence. Comes into force on 1 September 2024

August 17, 2024

The Australian Government has published a 19-page policy for the responsible use of AI. It comes into force on 1 September 2024.

The recommended actions include:

  • training staff on AI fundamentals, taking into account roles and responsibilities such as those of employees involved in procurement, development, training, and deployment of AI;
  • making publicly available a statement outlining their approach to AI adoption, including information on compliance with the policy, measures to monitor the effectiveness of deployed AI systems, and efforts to protect the public against negative impacts; and
  • designating accountable officials for implementation of the policy within their organisation, who:
    • are the contact point for whole-of-government AI coordination;
    • must engage in whole-of-government AI forums and processes; and
    • must keep up to date with changing requirements as they evolve over time.

The key principles of the policy are aimed at ensuring that:

  • Australians are protected from harm;
  • AI risk mitigation is proportionate and targeted; and
  • AI use is ethical, responsible, transparent and explainable to the public.

The press release is found here and the policy here.

The press release provides:

The Australian Government needs a coordinated approach if it’s to embrace the opportunities of AI. The Digital Transformation Agency has released the Policy for the responsible use of AI in government, an important step to achieve this goal while building public trust.

Coming into effect 1 September 2024, the Policy for the responsible use of AI in government positions the Australian Government to be an exemplar of safe, responsible use of AI.

Designed to evolve with technology and community expectations, it sets out how the Australian Public Service (APS) will:

  • embrace the benefits of AI by engaging with it confidently, safely and responsibly
  • strengthen public trust through enhanced transparency, governance and risk assurance
  • adapt over time by embedding a forward-learning approach to changes in both technology and policy environments.

‘This policy will ensure the Australian Government demonstrates leadership in embracing AI to benefit Australians,’ states Lucy Poole, General Manager for Strategy, Planning, and Performance.

‘Engaging with AI in a safe, ethical and responsible way is how we will meet community expectations and build public trust.’

Enable, engage and evolve

The policy is driven by the ‘enable, engage and evolve’ framework to introduce principles, mandatory requirements and recommended actions.

Enable and prepare

Agencies will safely engage with AI to enhance productivity, decision-making, policy outcomes and government service delivery by establishing clear accountabilities for its adoption and use.

Every agency will need to identify accountable officials and provide them to the DTA within 90 days of the policy effect date.

Engage responsibly

To protect Australians from harm, agencies will use proportional, targeted risk mitigation and ensure their use of AI is transparent and explainable to the public.

Agencies will need to publish a public transparency statement outlining their approach to adopting and using AI within 6 months of the policy effect date.

Evolve and integrate

Flexibility and adaptability are necessary to accommodate technological advances, requiring ongoing review and evaluation of AI uses, and embedding feedback mechanisms throughout government.

Supporting agencies standards and guidance

To help implement the policy, the DTA has published a standard for accountable officials (AOs) to lead their agency to:

  • uplift its governance of AI adoption
  • embed a culture that fairly balances risk management and innovation
  • enhance its response and adaptation to AI policy changes
  • be involved in cross-government coordination and collaboration.

‘We’re encouraging AOs to be the primary point of partnership and cooperation inside their agency and between others,’ outlines Ms Poole.

‘They connect the appropriate internal areas to responsibilities under the policy, collect information and drive agency participation in cross-government activities.’

‘Whole-of-government forums will continue to support a coordinated integration of AI into our workplaces and track current and emerging issues.’

The DTA will also soon release a standard for AI transparency statements, setting out the information agencies should make publicly available such as the agency’s:

  • intentions for why it uses or is considering adoption of AI
  • categories of use where there may be direct public interaction without a human intermediary
  • governance, processes or other measures to monitor the effectiveness of deployed AI systems
  • compliance with applicable legislation and regulation
  • efforts to protect the public against negative impacts.

‘Statements must use clear, plain language and avoid technical jargon,’ stresses Ms Poole.

Further guidance on additional opportunities and measures will be issued over the coming months.

Continuing our significant work on responsible AI

The last 12 months saw important work to better posture the APS for emerging AI technologies including the AI in Government Taskforce, co-led by the DTA and Department of Industry, Science and Resources (DISR), which concluded on 30 June 2024. 

The taskforce brought together secondees and stakeholders from across the APS for an unprecedented level of consultation, collaboration and knowledge-sharing. Its outputs directly informed this new policy and even more, continuing work to ensure a consistent, responsible approach to AI by government.

‘Our AI in Government Taskforce was crucial in demonstrating that we need a centralised approach to how government embraces AI, if it wishes to mitigate risks and increase public trust,’ states Ms Poole.

Victorian Information Commissioner launches an investigation into the University of Melbourne for using surveillance technology against students who were involved in a campus sit-in

August 15, 2024

Last month the Office of the Victorian Information Commissioner was conducting preliminary enquiries with the University of Melbourne regarding the use of its surveillance technology to identify and bring misconduct hearings against students who undertook pro-Palestine sit-ins. In July the University released a statement under the heading Conflict in the Middle East and activism on campus in which it stated that the University of Melbourne “.. is a diverse, multi-cultural and multi-faith community..”, that it “has a duty to uphold the principles of academic freedom and freedom of speech, and respect for legitimate and peaceful protest is core to our university’s values, as well as an activity protected by law”, that it “operates fairly and in accordance with the law. Our policies also provide the basis for addressing actions or behaviours that adversely affect other members of the University community” and that it works “to understand and implement appropriate support for students and graduate researchers during this time, with an increase in provisions for health and wellbeing, assessments, and safety on our campuses.” This is waffly boilerplate of the kind many organisations cobble together to cover and justify other activities and mask behaviours not so consistent with the principles of the Enlightenment, which universities should use as a touchstone. Such as using surveillance technology to bring action against students for conducting a sit-in. As a result of disciplinary hearings 21 students received warnings.
OVIC has now confirmed that it will launch an investigation into the University of Melbourne under the Privacy and Data Protection Act 2014.

The confirmation was reported by the Australian in “OVIC to probe Melbourne Uni over student surveillance” which provides:

The Office of the Victorian Information Commissioner will launch an investigation into the University of Melbourne after the academic institution used surveillance technology to gather evidence against students involved in a sit-in at a campus building.
Last month OVIC confirmed it was conducting preliminary enquiries with the university.
Victorian Information Commissioner Sean Morrison on Thursday confirmed the office has now decided to escalate the matter.
“Following conducting preliminary inquiries, the Privacy and Data Protection Deputy Commissioner has decided to commence an investigation under the Privacy and Data Protection Act 2014,” he said in a statement to The Australian.
“Given this is an active matter OVIC is unable to comment further until the investigation has concluded.”
In July, 21 students faced misconduct hearings before senior university representatives.
The students were notified of the disciplinary proceedings when the university sent them an email informing them they had breached its code of conduct during demonstrations and cited evidence from CCTV footage and Wi-Fi data obtained from the university’s network tracking their movements within the Arts West building during the 10-day sit in. Read the rest of this entry »

The UK Information Commissioner issues a provisional £6.09 million fine to Advanced Computer Software Group Ltd (Advanced) after 2022 ransomware attack that disrupted the NHS

August 10, 2024

Cyber attacks on service providers working for large institutions, especially in the health sector, are common. Health services often contract out IT services, as the UK National Health Service did with Advanced Computer Software Group Ltd (Advanced). Unfortunately organisations and agencies spend insufficient time ensuring that those contractors maintain adequate cyber protections and proper training regimes for their staff. Advanced provided IT services and handled personal information collected by the NHS in its capacity as a data processor. In August 2022 Advanced was hit with a ransomware attack in which the personal information of 82,946 people was exfiltrated. The NHS was affected in being unable to access patient records. The ICO has announced that it has provisionally decided to fine Advanced £6.09 million.

The announcement provides:

We have provisionally decided to fine Advanced Computer Software Group Ltd (Advanced) £6.09m, following an initial finding that the provider failed to implement measures to protect the personal information of 82,946 people, including some sensitive personal information.  

Advanced provides IT and software services to organisations on a national scale, including the NHS and other healthcare providers, and handles people’s personal information on behalf of these organisations as their data processor. Read the rest of this entry »

FTC commences an action against TikTok and ByteDance for violating children’s privacy law and against TikTok for infringing an existing consent order

August 6, 2024

The FTC, through the Department of Justice, has commenced an action against the video-sharing platform TikTok and its parent company ByteDance, alleging that they flagrantly violated the Children’s Online Privacy Protection Act (COPPA). The FTC also alleges that TikTok infringed an existing 2019 FTC consent order against TikTok for violating COPPA shortly after that order went into effect. The FTC further alleges that two TikTok entities (previously Musical.ly and Musical.ly Inc., which ByteDance acquired in 2017 and renamed) agreed to the terms of the order to settle allegations that they violated the COPPA Rule by unlawfully collecting personal information from children under the age of 13.

The complaint alleges defendants failed to comply with the COPPA requirement to notify and obtain parental consent before collecting and using personal information from children under the age of 13.

The Press Release provides:

On behalf of the Federal Trade Commission, the Department of Justice sued video-sharing platform TikTok, its parent company ByteDance, as well as its affiliated companies, with flagrantly violating a children’s privacy law—the Children’s Online Privacy Protection Act—and also alleged they infringed an existing FTC 2019 consent order against TikTok for violating COPPA.

The complaint alleges defendants failed to comply with the COPPA requirement to notify and obtain parental consent before collecting and using personal information from children under the age of 13.

“TikTok knowingly and repeatedly violated kids’ privacy, threatening the safety of millions of children across the country,” said FTC Chair Lina M. Khan. “The FTC will continue to use the full scope of its authorities to protect children online—especially as firms deploy increasingly sophisticated digital tools to surveil kids and profit from their data.” Read the rest of this entry »

Texas Attorney General secures 1.4 billion dollar settlement over unauthorised collection of personal biometric data

On July 30, 2024, the Office of the Attorney General of Texas (AG) announced that Texas has obtained a $1.4 billion settlement, payable over 5 years, with Meta Platforms Inc. over the unauthorised capture and use of the personal biometric data of Texans under the Texas Capture or Use of Biometric Identifier Act (CUBI). In 2011, Meta released and automatically activated a feature allowing users to ‘tag’ photographs with the names of people in the photo, and ran facial recognition software on every face in the photographs uploaded to Facebook, capturing records of the facial geometry of each individual. In February 2022, the AG sued Meta for unlawfully capturing the biometric data of millions of Texans without obtaining their informed consent. The story has been reported by Reuters.

The press release Read the rest of this entry »