Privacy warnings about sharing personal information while using ChatGPT

March 15, 2023 |

The principles are consistent if the technology is in aconstant state of flux; do not share personal information through a program unless you know the privacy protections are strong and properly administered. In most cases that means don’t share! The Times story Do not share sensitive information with chatbots, cyber-experts warn makes that point very clearly. The privacy concerns are reinforced in the article AI uptake inhibited by security and data quality concerns: CSIRO.

It provides:

GCHQ’s cyber-experts have warned people not to share sensitive information with ChatGPT and similar artificial intelligence systems.

Private or confidential information included in questions to the chatbot could be viewed by others and leave users at risk of being hacked, the National Cyber Security Centre (NCSC) said.

Such so-called large language models (LLMs) could also be a boon for cyber-attackers, who could use them to impersonate people in emails, the centre warned in its blog.

The post is subtitled “Do loose prompts sink ships?” Two NCSC experts, David C and Paul J, write that the models are “undoubtedly impressive” but add: “They’re not magic, they’re not artificial general intelligence.”

The bloggers also highlight general flaws in LLMs: they can get things wrong, be biased, gullible and toxic and require huge computing resources. They are also prone to “prompt injection” attacks, in which malicious users can trick the AI into revealing sensitive information through the questions they ask.

For most users, however, the warning about sharing sensitive information is most pertinent. The experts give examples such as a chief executive asking “how best to lay off an employee” and somebody asking revealing health or relationship questions. The queries will be visible to the organisation that runs the system (OpenAI, co-founded by Elon Musk, and Microsoft in the case of ChatGPT) and will be stored and used to develop the model, they say.

Another risk is that queries stored online may be hacked, leaked or accidentally made public. The operator of the model could in time be taken over by an organisation with a different approach to privacy, the experts add.

Alan Woodward, professor of computer security at Surrey University, said: “The big thing is, if you put information in it [an LLM], it’s liable to be blurted out.

“One of the fundamental questions was, can I keep my private information private? And the answer is, if you put it in the model, no, because that’s the universe as far as the model is concerned and it can use what it likes. To get it to not use parts of that is very difficult.”

Amazon, the investment bank JP Morgan and the law firm Mishcon de Reya have all restricted their staff from using ChatGPT over privacy fears.

This week the government commissioned Matt Clifford, chairman of the UK’s Advanced Research and Invention Agency (ARIA), to lead a task force on such AI models because of the opportunities they offer.

The Alan Turing Institute has lobbied the government to set up a “sovereign” LLM so that the UK can compete with big tech firms and ensure its security in this area.

 

Leave a Reply





Verified by MonsterInsights