New South Wales Reconstruction Authority suffers significant data breach. Third party use of AI partly to blame
October 6, 2025
Artificial Intelligence is the runaway train of administration, the law and most other fields that use it. Its capabilities are rarely fully understood, its dangers are not considered and most users have no idea how it works. If the use of AI causes or contributes to the misuse of personal information, that ignorance is no excuse for failing to comply with privacy legislation.

The New South Wales Reconstruction Authority (the “Authority”) today announced that it has been the subject of a data breach. The breach occurred from 12 – 15 March 2025 and involved names, addresses, email addresses, phone numbers and “some personal and health information.” Names and addresses are personal information. While the Authority stresses that the contractor’s use of AI was not authorised, that does not change the Authority’s liability.

Third party providers are a chronic weak link in any data security network. They are often used because they are cost effective, which may mean they are less invested in data security and proper training. Organisations should include proper cyber security requirements in contracts, but should also insist on a right to inspect the effectiveness of those cyber security measures.
This episode highlights the need to determine whether the AI used is properly integrated and compatible with existing systems, whether there are appropriate security measures in place, and whether there has been a proper assessment of risk.
Some of the factors organisations need to consider are:
- Security – In this regard an organisation needs to consider the model type. The starting preference should be a “Closed Model”, as distinct from an “Open Model” such as standard ChatGPT. “Closed Models” generally do not allow prompts and results to train the underlying model, and do not retain any data. This addresses unapproved disclosure of confidential or personal information, such as occurred in this case. Any AI system should comply with local and international data sovereignty laws, which would mean data remaining within Australian borders. It is critical to know how often, and in what way, the underlying Large Language Model (LLM) is trained and updated, and to ensure that those updates are secure and trustworthy, or otherwise subject to sufficient controls.
- Quality of data and training – In addition to “quality in, quality out” for data, it is important to have quality training. It is worth looking at models that have invested in industry-specific pre-training to achieve optimal results.
- Quality Assurance – If an organisation uses AI to make decisions, it is critical to have quality assurance. That involves using statistical methods, such as precision and recall, together with regular testing and validation.
- Tracking – It is important to be able to trace work products and decisions. That should involve methods to monitor and document where AI has been involved in the development of work products, such as logs of AI interactions or tagging outputs generated by AI systems.
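The precision and recall measures mentioned under Quality Assurance can be computed directly from a labelled validation set. The sketch below is illustrative only; the function name and sample data are assumptions, not part of any particular organisation's toolkit.

```python
def precision_recall(predictions, labels):
    """Compute precision and recall for binary AI decisions
    checked against human-reviewed ground truth."""
    tp = sum(1 for p, l in zip(predictions, labels) if p and l)      # true positives
    fp = sum(1 for p, l in zip(predictions, labels) if p and not l)  # false positives
    fn = sum(1 for p, l in zip(predictions, labels) if not p and l)  # false negatives
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical example: four AI decisions reviewed by a human
preds = [True, True, False, True]
truth = [True, False, False, True]
p, r = precision_recall(preds, truth)  # p = 2/3, r = 1.0
```

Regular testing would run a check like this on fresh, independently labelled samples, and flag any drop in either metric for investigation.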
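The audit log of AI interactions described under Tracking could be as simple as an append-only file of structured records. This is a minimal sketch under assumed field names; real deployments would need access controls and a retention policy.

```python
import json
import datetime

def log_ai_interaction(log_path, user, prompt_summary, model, output_id):
    """Append a structured record of an AI interaction to an audit log.
    prompt_summary should be a description, not the raw prompt,
    to avoid copying personal information into the log itself."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "model": model,
        "prompt_summary": prompt_summary,
        "output_id": output_id,  # tag linking the log entry to the generated work product
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

Tagging each generated document with the same `output_id` lets an organisation later answer exactly which work products involved AI, and which model and user were involved.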
Clearly the Authority will have to review how its third party providers use AI. There was a failure to properly monitor and proscribe practices involving the personal information collected by the Authority and used by third parties.
The data breach has been reported by the ABC.