New South Wales Information and Privacy Commissioner publishes a guide to Privacy Impact Assessments on AI systems

September 9, 2024

Regulators are now publishing AI guidance at a rapid rate while legislatures grapple with legislation. On September 5, 2024, the Council of Europe (CoE) announced that the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (the Convention) was open for signature. The latest guidance is the New South Wales Information and Privacy Commissioner’s Guide to undertaking Privacy Impact Assessments (PIAs) on AI systems and projects, released on Friday. The Guide also supports agencies in undertaking privacy-related assessments under the NSW AI Assessment Framework (AIAF) and the National framework for the assurance of artificial intelligence in government.

The guide provides advice to agencies on:
  • determining when a PIA is necessary;
  • determining the likely scope and scale of a PIA;
  • PIA considerations when assessing AI systems and projects; and
  • common AI privacy risks and mitigations.
The press release provides:

The Information and Privacy Commission (IPC) has released its new Guide to undertaking Privacy Impact Assessments on AI systems and projects for consultation and feedback.

The Guide has been developed to support agencies in understanding, assessing and mitigating privacy risks in relation to the use of AI systems and projects when undertaking Privacy Impact Assessments (PIAs). It also supports agencies in undertaking privacy related assessments under the NSW AI Assessment Framework (AIAF) and the National framework for the assurance of artificial intelligence in government.

The new Guide builds on and is complementary to the Guide to Privacy Impact Assessments in NSW, to provide more specific guidance on AI-related privacy risks.

The IPC values the input of privacy practitioners in NSW and is seeking feedback on this updated guidance. In particular, feedback would be appreciated for the following focus questions:

    1. Would this guidance assist you in navigating AI projects from a privacy perspective?
    2. Are there areas in the guidance that you believe are missing?
    3. Is the guidance relevant, useful, clear, and practical?
The Guide works through a principle-by-principle analysis of AI. In the Guide:
  • AI is defined as “the ability of a computer system to perform tasks that would normally require human intelligence, such as learning, reasoning, and making decisions. AI encompasses various specialised domains that focus on different tasks and includes automation.”
  • Generative AI, Machine Learning (ML), Natural Language Processing (NLP) and Computer Vision (CV) are all kinds of AI;
  • Ways government agencies use AI include:
    • an AI-powered chatbot on an agency website that visitors can interact with
    • a traffic management system that collects or uses vehicle registration data from CCTV footage and/or toll collection systems to automatically issue fines
    • a piece of software which uses large amounts of data held by the agency to predict or determine who is eligible for a subsidy and/or to calculate the subsidy they are entitled to
    • a piece of software that uses agency records to predict which individuals or businesses are more likely to be non-compliant with certain obligations
    • a technology that analyses crowd sentiment in a stadium by using CCTV footage combined with social media data and environmental system data to alert the stadium management to changes in customer sentiment during crowded events
  • The first question to ask when assessing whether a PIA is needed is, “Will any personal or health information be collected, stored, used or disclosed in the project?” This question is equally important when assessing whether a PIA is needed on an AI system. In fact, the use of an AI system could mean there is an elevated risk associated with the collection, storage, use or disclosure of personal or health information.
  • If an AI system or project involves handling personal information, a PIA will typically be required.
  • The cost or size of a project or system is not a reliable indicator of whether a PIA should be conducted, as even low-cost or small-scale projects may have privacy impacts.
  • The PIA process helps to identify and manage the privacy risks that may arise from using AI systems and projects that involve personal and health information.
  • AI systems and projects could involve data that, at first glance, may not appear to be personal information, such as randomly assigned identifiers that distinguish individuals from each other but do not include attributes such as a name, address or driver licence number. Data of this kind could still be considered personal information if a person’s identity can be reasonably ascertained by reference to other data sources – even if the agency has no specific intention of making such an identification.
  • Because personal information can include an opinion about an individual, AI-generated inferences about individuals are also considered personal information, even if they are incorrect.
  • In practice, AI can:
    • enable the processing and production of vast quantities of data on a scale which is uncommon among other technology solutions.
    • also be readily used to make or guide decision-making processes which have a profound impact on individuals.
  • AI has the potential to amplify privacy concerns in its consumption and analysis of personal information.
  • One of the key challenges in developing and using AI systems responsibly is identifying and mitigating the potential privacy risks that may arise from the use of AI technology.
  • At all stages of the AI lifecycle, personal information should be limited to the minimum amount necessary to achieve the required purpose.
  • Agencies should have clear contractual agreements with third parties, outlining how the third party will protect personal information, limitations on the use and disclosure of the personal information, processes in the event of a data breach, and mechanisms for auditing compliance with the agreement.
  • Agencies should be transparent about the use of AI systems and disclosures to third parties supporting AI systems.
  • When implementing AI systems and projects, agencies should ensure security reviews have been undertaken and security risks are managed to protect the personal information from unauthorised access, disclosure and loss.
  • Destruction should be automated wherever feasible to reduce the personal information retained through the use of AI systems (a minimal sketch of an automated retention sweep appears after this list).
  • Internal policies and procedures to manage the use of AI systems and AI system outputs should be defined, documented and implemented. These policies should consider the ways AI systems and their outputs should and should not be used, considering both the benefits and risks of the AI system. For example, outputs that are appropriate for general insights and analysis may not be appropriate for making decisions about an individual’s eligibility for a service.
  • Staff should be trained on how to use AI systems responsibly, including the collection and use of data and the privacy impacts.
  • Access controls should be implemented so that only those with a need to use AI systems or view AI outputs are given those permissions. This includes scheduling regular reviews of access permissions and removing permissions when staff leave or change roles and the existing permissions are no longer appropriate or required (see the access-review sketch after this list).
  • Human validation should be used where appropriate to reduce the likelihood of harms from unmonitored AI systems.
  • Technical controls can be implemented to reduce privacy risks, such as differential privacy, federated learning, and fully homomorphic encryption (a differential-privacy sketch appears after this list).
  • Synthetic data is artificially generated data that mimics real data and can be used safely as an alternative to real data. Where feasible, synthetic data should be used rather than production data (see the synthetic-data sketch after this list).
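
To make a few of the Guide’s recommendations concrete, the sketches below show how an agency might implement them in Python. They are minimal illustrations over hypothetical data, stores and parameter values (nothing here comes from the Guide itself), not production implementations.

First, automated destruction: a scheduled sweep that discards records once an assumed retention period has lapsed.

    from datetime import datetime, timedelta

    RETENTION = timedelta(days=365)  # assumed retention period, for illustration only

    def sweep(records, now=None):
        """Drop records older than the retention period; return what is kept.

        In practice this would run on a schedule (e.g. nightly) against the
        store holding AI inputs and outputs that contain personal information.
        """
        now = now or datetime.now()
        return [r for r in records if now - r["created"] <= RETENTION]

    records = [
        {"id": 1, "created": datetime(2023, 1, 5)},   # past retention: deleted
        {"id": 2, "created": datetime(2024, 8, 20)},  # within retention: kept
    ]
    print(sweep(records, now=datetime(2024, 9, 9)))  # keeps only record 2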
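Second, access controls: a sketch (hypothetical grant records, assumed 90-day review cadence) that flags permissions for removal when staff have left or a periodic review is overdue.

    from datetime import date, timedelta

    REVIEW_INTERVAL = timedelta(days=90)  # assumed review cadence

    def permissions_to_revoke(grants, today):
        """Flag grants for removal: departed staff immediately, plus any
        grant whose periodic review is overdue."""
        return [
            g["user"]
            for g in grants
            if not g["active_staff"] or today - g["last_reviewed"] > REVIEW_INTERVAL
        ]

    grants = [
        {"user": "a.lee", "active_staff": True,  "last_reviewed": date(2024, 8, 1)},
        {"user": "j.doe", "active_staff": False, "last_reviewed": date(2024, 8, 1)},
    ]
    print(permissions_to_revoke(grants, today=date(2024, 9, 9)))  # ['j.doe']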
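Third, the simplest form of one of the named technical controls, differential privacy: the Laplace mechanism applied to a counting query. The records and epsilon value are assumptions for illustration; a real deployment would use a vetted library rather than a hand-rolled mechanism.

    import numpy as np

    def dp_count(records, predicate, epsilon=1.0):
        """Differentially private count via the Laplace mechanism.

        A counting query has sensitivity 1 (adding or removing one person
        changes the result by at most 1), so Laplace noise with scale
        1/epsilon yields an epsilon-differentially-private count.
        """
        true_count = sum(1 for r in records if predicate(r))
        return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    # Hypothetical example: a noisy count of individuals over 65.
    records = [{"age": 70}, {"age": 40}, {"age": 68}]
    print(dp_count(records, lambda r: r["age"] > 65, epsilon=0.5))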
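Finally, synthetic data. This toy generator samples each column independently from the values seen in the real data, so values are recombined rather than rows copied directly. Marginal sampling alone neither preserves cross-column relationships nor guarantees privacy; a real project would use a purpose-built synthesiser and evaluate the privacy of its output.

    import random

    def synthesise(real_rows, n, seed=0):
        """Generate n synthetic rows that mimic each column's marginal
        distribution without copying complete real records (a generated
        row can still coincide with a real one by chance)."""
        rng = random.Random(seed)
        columns = list(real_rows[0].keys())
        pools = {c: [row[c] for row in real_rows] for c in columns}
        return [{c: rng.choice(pools[c]) for c in columns} for _ in range(n)]

    real = [
        {"age": 70, "postcode": "2000"},
        {"age": 40, "postcode": "2150"},
        {"age": 68, "postcode": "2300"},
    ]
    print(synthesise(real, n=5))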
