ChatGPT, where privacy and AI collide
January 24, 2023
Artificial intelligence (better known as AI) has been around for a while. AI algorithms are a key part of Google’s success, discerning our interests and needs and ordering goods and services as part of the search engine’s operation. Facebook and Amazon also rely on AI in making their money, with Facebook selling ads and Amazon putting items within tantalising reach. AI has moved to centre stage in public policy discussion because its use threatens to be ubiquitous. AI, and quantum computing, will be transformative in how business is done, services are provided and decisions are made. That is likely to be for the good, but there are legitimate concerns about its untrammelled use without regulatory oversight. It will also affect employment, the most recent example being Microsoft laying off 10,000 staff to cut costs as it focuses on AI.
In the United States there are concerns that the use of ChatGPT has the potential to breach privacy laws. In the UK, the Information Commissioner’s Office is sufficiently concerned about the use of AI that it published an article on its website titled Addressing concerns on the use of AI by local authorities.
ChatGPT is an algorithm that is vexing educational institutions because it creates realistic text which may be difficult to distinguish from human-created prose, and it may defy anti-plagiarism software. This is well summarised by the ABC in What is ChatGPT and why are schools and universities so worried about it?, which provides:
Type in any prompt or question and you will receive an eloquently worded answer that’s (mostly) accurate.
Generative AI ChatGPT offers endless opportunities to seek an answer to any question you’ve ever had, but its creation has also been met with concern.
What is ChatGPT?
ChatGPT — or Chat Generative Pre-Trained Transformer — is a chatbot that was launched by OpenAI, an artificial intelligence research and deployment company, in November 2022.
It is a language model that can generate realistic, human-like text.
ChatGPT can be used in language translation and to summarise large chunks of text to give a precis of an article.
It can also generate text responses on any subject when prompted, making it useful as a chatbot for customer service.
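To illustrate the summarisation use case described above (this sketch is mine, not the ABC’s), the following assumes access to OpenAI’s Python client (openai version 1.0 or later), an API key in the OPENAI_API_KEY environment variable and a placeholder model name; it is a minimal example rather than a recommended implementation:

# Minimal sketch: summarising a block of text with OpenAI's Python client.
# Assumes openai>=1.0 is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

article_text = "Paste the text to be summarised here."  # placeholder input

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "system", "content": "Summarise the user's text in three sentences."},
        {"role": "user", "content": article_text},
    ],
)

print(response.choices[0].message.content)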
Why are schools and universities worried about it?
The big concern is that ChatGPT could potentially be used by university and school students to cheat on written assignments without being detected.
It’s been described as akin to students outsourcing their homework to robots.
But some educators have also spoken about harnessing the existence of ChatGPT as a major opportunity to make assessments more authentic, mirroring challenges students may face in the real world.
This would require a radical rethink of school and university assessment to make it much more difficult to plagiarise.
Griffith Institute for Educational Research director Leonie Rowan said ChatGPT also had the potential to improve learning outcomes for disadvantaged children who did not have access to tutors.
“There’s a lot of positive dimensions,” Professor Rowan said.
“It’s got huge potential.
“This might be an opportunity to help, for example, kids with language backgrounds other than English, culturally and linguistically diverse learners, refugee kids.”
How will educators know if students are using it?
In response to the launch of ChatGPT, an online tool capable of detecting artificial-intelligence-generated material has already emerged, dubbed AICheatCheck.
It uses its own artificial intelligence models to predict if text has been written by a human or a machine, based on word choice and sentence structure.
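AICheatCheck’s internal workings are not public, but a common approach to this kind of detection is to score how predictable a passage is under a language model, since machine-generated text tends to be more predictable (lower perplexity) than human prose. The rough sketch below illustrates that general idea only; it assumes the Hugging Face transformers and torch packages, uses GPT-2 as the scoring model, and applies an arbitrary threshold:

# Rough illustration of perplexity-based detection of machine-generated text.
# This is a generic technique, not AICheatCheck's actual method.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Lower perplexity means the text is more predictable to the model.
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(inputs["input_ids"], labels=inputs["input_ids"]).loss
    return float(torch.exp(loss))

sample = "The industrial revolution transformed patterns of work and settlement."
score = perplexity(sample)
print(f"perplexity = {score:.1f}")
print("reads as machine-like" if score < 40 else "reads as more human-like")  # arbitrary cut-off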
A Queensland Department of Education spokesperson said it had blocked ChatGPT for students on the department’s network until it could be fully “assessed for appropriateness”.
“Misuse of artificial intelligence by a student may be considered academic misconduct or a technology violation,” a spokesperson for the department said.
“As part of their policies and guidelines, it is also important for schools to educate students about the ethical and academic implications of using AI to complete assignments.”
Should ChatGPT be banned by schools and universities?
Professor Rowan advocates embracing the technology, rather than banning it.
“I can understand why banning it looks like a simple and quick solution,” she said.
“But you can’t lock people up away from the world, you can’t ban kids from the world.
“I don’t think it should be banned. I don’t think it can be banned.
“Let’s get curious before we get scared. Humans are amazing. We learned how to use fire to our advantage, and how to make life better thanks to all sorts of technological advantages.
“I’m optimistic about the present and the future with ChatGPT.”
Professor Rowan said there was no evidence that the emergence of ChatGPT would lead to a “tsunami of cheating”.
And she said there were already people writing assignments for students before the emergence of ChatGPT.
“That’s already an industry,” she said.
“[ChatGPT] is an opportunity for us to think about, ‘Wow, what is this amazing new space going to allow us to do and encourage us to do in new kinds of ways?’
“Maybe this is a wake-up call that our assessments do need to be more individualised.”
Besides education, how can ChatGPT be used for good?
Professor Rowan said the chatbot had implications for people trying “to navigate all sorts of foreign systems”.
For example, it has huge potential to help people, such as those from culturally and linguistically diverse backgrounds, write job applications.
Healthcare Info Security highlights in Health Entities Should Vet Risks of ChatGPT Use that there is a real privacy risk in health providers using ChatGPT to write reports or referrals, as personal information would be included in the text. The dangers are multi-pronged: the possibility of ChatGPT being hacked, the lack of control over personal information once it is fed into the AI, and the risk of relying on AI to compose an appropriate text when the result may be anything but. Sources in the article suggest that using ChatGPT may violate US health privacy laws.
The article provides:
Clinicians should think twice about using artificial intelligence tools as productivity boosters, healthcare attorneys warned after a Florida doctor publicized on TikTok how he had used ChatGPT to write a letter to an insurer arguing for patient coverage.
Palm Beach-based rheumatologist Dr. Clifford Stermer showed on the social media platform how he had asked ChatGPT to write a letter to UnitedHealthcare asking it to approve a costly anti-inflammatory for a pregnant patient.
“Save time. Save effort. Use these programs, ChatGPT, to help out in your medical practice,” he told the camera after demonstrating a prompt for the tool to reference a study concluding that the prescription was an effective treatment for pregnant patients with Crohn’s disease.
Stermer did not respond to Information Security Media Group’s request for additional details about the use of ChatGPT in his practice or about potential data security and privacy considerations.
Privacy experts interviewed by ISMG did not say Stermer’s use of ChatGPT violated HIPAA or any other privacy or security regulations.
But the consensus advice is that healthcare sector entities must carefully vet the use of ChatGPT or similar AI-enabled tools for potential patient data security and privacy risks. Technology such as ChatGPT presents tempting opportunities for overburdened clinicians and other staff to boost productivity and ease mundane tasks.
“This is a change to the environment that requires careful and thoughtful attention to identify appropriate risks and implement appropriate mitigation strategies,” says privacy attorney Kirk Nahra of the law firm WilmerHale, speaking about artificial intelligence tools in the clinic.
“This is a good reason why security is so hard – the threats change constantly and require virtually nonstop diligence to stay on top of changing risks.”
Entities must be careful in their implementations of promising new AI tech tools, warns technology attorney Steven Teppler, chair of the cybersecurity and privacy practice of law firm Mandelbaum Barrett PC.
“Right now, the chief defense is increased diligence and oversight,” Teppler says. “It appears that, from a regulatory perspective, ChatGPT capability is now in the wild.”
Besides an alert this week from the U.S. Department of Health and Human Services’ Health Sector Cyber Coordination Center warning healthcare entities over hackers’ exploitation of ChatGPT for the creation of malware and convincing phishing scams, other government agencies have yet to announce public guidance.
While HHS’ Office for Civil Rights has not issued formal guidance on ChatGPT or similar AI tools, the agency in a statement to ISMG on Thursday says, “HIPAA regulated entities should determine the potential risks and vulnerabilities to electronic protected health information before adding any new technology into their organization.”
“Until we have some detective capability, it will present a threat that must be addressed by human attention,” Teppler says about the potential risks involving ChatGPT and similar emerging tools in healthcare.
The Good and the Bad
Most, if not all, technologies “can be used for good or evil, and ChatGPT is no different,” says Jon Moore, chief risk officer at privacy and security consultancy Clearwater.
Healthcare organizations should have a policy in place preventing the use of tools such as ChatGPT without prior approval or, at a minimum, not allowing the entry of any electronic protected health information or other confidential information into them, Moore says.
“If an organization deems the risk of a breach still too high, it might also elect to block access to the sites so employees are unable to reach them at all from their work environment.”
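As a purely illustrative example of the kind of control Moore describes (and not a substitute for proper de-identification or a HIPAA risk assessment), here is a crude sketch of masking a few obvious identifiers before free text is allowed to leave the organisation; the patterns, including the medical record number format, are hypothetical and would miss a great deal of protected health information:

# Crude sketch only: mask a few obvious identifiers before text is sent to an external AI tool.
# Real de-identification under HIPAA requires far more than pattern matching.
import re

PATTERNS = {
    "DATE": r"\b\d{1,2}/\d{1,2}/\d{2,4}\b",        # dates such as 03/14/1985
    "PHONE": r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b",   # US-style phone numbers
    "SSN": r"\b\d{3}-\d{2}-\d{4}\b",               # social security numbers
    "MRN": r"\bMRN[:\s]*\d+\b",                    # hypothetical record-number format
}

def mask_identifiers(text: str) -> str:
    # Replace each matched pattern with a bracketed label, e.g. [DATE].
    for label, pattern in PATTERNS.items():
        text = re.sub(pattern, f"[{label}]", text, flags=re.IGNORECASE)
    return text

note = "Patient, DOB 03/14/1985, MRN: 445821, contact 561-555-0148, presents with Crohn's disease."
print(mask_identifiers(note))
# Names, addresses and many other identifiers are NOT caught by this sketch.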
Besides potential HIPAA and related compliance issues, the use of emerging AI tools without proper diligence can present additional concerns, such as software quality, coding bias and other problems.
“Without testing, true peer review and other neutral evaluation tools, implementation should not be in a monetization prioritized ‘release first and fix later’ typical tech product/service introduction,” Teppler says.
“If things go wrong, and AI is to blame, who bears liability?”
The ICO blog provides:
So many of people’s interactions with the government, both local and central, involve us handing over data about ourselves. This could be as simple as our name or date of birth, or as personal as our financial history or health information.
People should feel confident that this data is handled appropriately, lawfully, and fairly. This should especially be the case when accessing welfare or social support, where an individual may be at their most vulnerable. They should also be confident that none of their personal data is being used to discriminate against them, either consciously or unconsciously.
When concerns were raised about the use of algorithms in decision-making around benefit entitlement and in the welfare system more broadly, we conducted an inquiry to understand the development, purpose and functions of algorithms and similar systems being used by local authorities. We wanted to make sure people could feel confident in how their data was being handled.
As part of this inquiry, we consulted with a range of technical suppliers, a representative sample of local authorities across the country and the Department for Work and Pensions. Overall 11 local authorities were identified through a risk assessment process to ensure a representative sample based on geographical location and those with the largest benefits workload. This inquiry has greatly increased our understanding of the development, practical application and use of this technology in this sector, and the findings will be fed into the ICO’s wider work in this area.
In this instance, we have not found any evidence to suggest that claimants are subjected to any harms or financial detriment as a result of the use of algorithms or similar technologies in the welfare and social care sector. It is our understanding that there is meaningful human involvement before any final decision is made on benefit entitlement. Many of the providers we spoke with confirmed that the processing is not carried out using AI or machine learning but with what they describe as a simple algorithm to reduce administrative workload, rather than making any decisions of consequence.
It is not the role of the ICO to endorse or ban a technology, but as the use of AI in everyday life increases we have an opportunity to ensure it does not expand without due regard for data protection, fairness and the rights of individuals.
While we did not find evidence of discrimination or unlawful usage in this case, we understand that these concerns exist. In order to alleviate concerns around the fairness of these technologies, as well as remaining compliant with data protection legislation, there are a number of practical steps that local authorities and central government can take when using algorithms or AI.
Take a data protection by design and default approach
As a data controller, local authorities are responsible for ensuring that their processing complies with the UK GDPR. That means having a clear understanding of what personal data is being held and why it is needed, how long it is kept for, and erasing it when it is no longer required. Data processed using algorithms, data analytics or similar systems should be reactively and proactively reviewed to ensure it is accurate and up to date. This includes any processing carried out by an organisation or company on their behalf. If a local authority decides to engage a third party to process personal data using algorithms, data analytics or AI, they are responsible for assessing that they are competent to process personal data in line with the UK GDPR.
Be transparent with people about how you are using their data
Local authorities should regularly review their privacy policies, and identify areas for improvement. There are some types of information that organisations must always provide, while the provision of other types of information depends on the particular circumstances of the organisation, and how and why people’s personal data is used. They should also bring any new uses of an individual’s personal data to their attention.
Identify the potential risks to people’s privacy
Local authorities should consider conducting a Data Protection Impact Assessment (DPIA) to help identify and minimise the data protection risks of using algorithms, AI or data analytics. A DPIA should consider compliance risks, but also broader risks to the rights and freedoms of people, including the potential for any significant social or economic disadvantage. Our DPIA checklist can help when carrying out this screening exercise.
The potential benefits of AI are plain to see. It can streamline processes, reduce costs, improve services and increase staff power. Yet the economic and societal benefits of these innovations are only possible by maintaining the trust of the public. It is important that where local authorities use AI, it is employed in a way that is fair, in accordance with the law, and repays the trust that the public put in them when they hand their data over.
We will continue to work with and support the public sector to ensure that the use of AI is lawful, and that a fair balance is struck between their own purposes and the interests and rights of the public.