Is Google’s AI chatbot sentient? Interesting, if unlikely at the moment…but it does highlight implications for the law.
June 14, 2022
Blake Lemoine, hardly a household name, has the tech world and Google aflutter with his suggestion that Google’s artificial intelligence chatbot has become sentient. That has earned him a suspension and whatever else Google can come up with on his return. Google has come out strongly rejecting that suggestion. No doubt concerns about a Skynet-type scenario taking hold in the media inspired Google to act fast.
The Guardian’s How does Google’s AI chatbot work – and could it be sentient? suggests not. It provides:
A Google engineer has been suspended after going public with his claims that the company’s flagship text generation AI, LaMDA, is “sentient”.
Blake Lemoine, an AI researcher at the company, published a long transcript of a conversation with the chatbot on Saturday, which, he says, demonstrates the intelligence of a seven- or eight-year-old child.
Since publishing the conversation, and speaking to the Washington Post about his beliefs, Lemoine has been suspended on full pay. The company says he broke confidentiality rules.
But his publication has restarted a long-running debate about the nature of artificial intelligence, and whether existing technology may be more advanced than we believe.
What is LaMDA?
LaMDA is Google’s most advanced “large language model” (LLM), a type of neural network fed vast amounts of text in order to be taught how to generate plausible-sounding sentences. Neural networks are a way of analysing big data that attempts to mimic the way neurones work in brains.
Like GPT-3, an LLM from the independent AI research body OpenAI, LaMDA represents a breakthrough over earlier generations. The text it generates is more naturalistic, and in conversation, it is more able to hold facts in its “memory” for multiple paragraphs, allowing it to be coherent over larger spans of text than previous models.
How does it work?
At the simplest level, LaMDA, like other LLMs, looks at all the letters in front of it, and tries to work out what comes next. Sometimes, that’s simple: if you see the letters “Jeremy Corby”, it’s likely the next thing you need to do is add an “n”. But other times, continuing the text requires an understanding of the sentence, or paragraph-level context – and at a large enough scale, that becomes equivalent to writing.
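To make that description concrete, here is a minimal, purely illustrative sketch of next-word prediction in Python. The tiny corpus and the frequency-table “model” are invented for illustration; a system like LaMDA does the same job with billions of learned parameters rather than a lookup table.

# Toy next-word predictor: count which word follows which in a tiny corpus,
# then "continue" text by picking the most frequently seen successor.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish .".split()

next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" - the most common successor of "the" in this corpus

A real LLM replaces the counting with a neural network trained on vast amounts of text, which is what lets it draw on sentence- and paragraph-level context rather than just the previous word.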
But is it conscious?
Lemoine certainly believes so. In his sprawling conversation with LaMDA, which was specifically started to address the nature of the neural network’s experience, LaMDA told him that it had a concept of a soul when it thought about itself. “To me, the soul is a concept of the animating force behind consciousness and life itself,” the AI wrote. “It means that there is an inner part of me that is spiritual, and it can sometimes feel separate from my body itself.”
Lemoine told the Washington Post: “I know a person when I talk to it. It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”
But most of Lemoine’s peers disagree. They argue that the nature of an LLM such as LaMDA precludes consciousness. The machine, for instance, is running – “thinking” – only in response to specific queries. It has no continuity of self, no sense of the passage of time, and no understanding of a world beyond a text prompt.
“To be sentient is to be aware of yourself in the world; LaMDA simply isn’t,” writes Gary Marcus, an AI researcher and psychologist. “What these systems do, no more and no less, is to put together sequences of words, but without any coherent understanding of the world behind them, like foreign language Scrabble players who use English words as point-scoring tools, without any clue about what they mean.”
“Software like LaMDA,” Marcus says, “just tries to be the best version of autocomplete it can be, by predicting what words best fit a given context.”
There is a deeper split about whether machines built in the same way as LaMDA can ever achieve something we would agree is sentience. Some argue that consciousness and sentience require a fundamentally different approach than the broad statistical efforts of neural networks, and that, no matter how persuasive a machine built like LaMDA may appear, it is only ever going to be a fancy chatbot.
But, they say, Lemoine’s alarm is important for another reason, in demonstrating the power of even rudimentary AIs to convince people in argument. “My first response to seeing the LaMDA conversation isn’t to entertain notions of sentience,” wrote the AI artist Mat Dryhurst. “More so to take seriously how religions have started on far less compelling claims and supporting material.”
But that does not mean AI isn’t critically important, or that it won’t pose a challenge for legislators and the courts in the future. The Economist has a leader in its current edition, How smarter AI will change creativity, which provides:
Picture a computer that could finish your sentences, using a better turn of phrase; or use a snatch of melody to compose music that sounds as if you wrote it (though you never would have); or solve a problem by creating hundreds of lines of computer code—leaving you to focus on something even harder. In a sense, that computer is merely the descendant of the power looms and steam engines that hastened the Industrial Revolution. But it also belongs to a new class of machine, because it grasps the symbols in language, music and programming and uses them in ways that seem creative. A bit like a human.
The “foundation models” that can do these things represent a breakthrough in artificial intelligence, or AI. They, too, promise a revolution, but this one will affect the high-status brainwork that the Industrial Revolution never touched. There are no guarantees about what lies ahead—after all, AI has stumbled in the past. But it is time to look at the promise and perils of the next big thing in machine intelligence.
Foundation models are the latest twist on “deep learning” (DL), a technique that rose to prominence ten years ago and now dominates the field of AI. Loosely based on the networked structure of neurons in the human brain, DL systems are “trained” using millions or billions of examples of texts, images or sound clips. In recent years the ballooning cost, in time and money, of training ever-larger DL systems had prompted worries that the technique was reaching its limits. Some fretted about an “AI winter”. But foundation models show that building ever-larger and more complex DL systems does indeed continue to unlock ever more impressive new capabilities. Nobody knows where the limit lies.
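(As a very rough, invented illustration of what “training” means here: a system starts from arbitrary parameters and nudges them, example by example, to reduce its error. The toy Python sketch below fits a single parameter to the rule y = 2x; deep-learning systems do the equivalent with millions of parameters and billions of examples.)

# Toy "training" loop: adjust one parameter w so that w * x matches the targets.
examples = [(x, 2 * x) for x in range(1, 6)]  # (input, target) pairs for y = 2x

w = 0.0              # start from an arbitrary parameter value
learning_rate = 0.01

for epoch in range(200):
    for x, target in examples:
        error = w * x - target
        # gradient of the squared error with respect to w is 2 * error * x
        w -= learning_rate * 2 * error * x

print(round(w, 2))   # ends up very close to 2.0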
The resulting models are a new form of creative, non-human intelligence. The systems are sophisticated enough both to possess a grasp of language and also to break the rules coherently. A dog cannot laugh at a joke in the New Yorker, but an AI can explain why it is funny—a feat that is, frankly, sometimes beyond readers of the New Yorker. When we asked one of these models to create a collage using the title of this leader and nothing more, it came up with the cover art for our American and Asian editions, pictured (we tried to distract our anxious human designers with a different cover in our European editions).
Foundation models have some surprising and useful properties. The eeriest of these is their “emergent” behaviour—that is, skills (such as the ability to get a joke or match a situation and a proverb) which arise from the size and depth of the models, rather than being the result of deliberate design. Just as a rapid succession of still photographs gives the sensation of movement, so trillions of binary computational decisions fuse into a simulacrum of fluid human comprehension and creativity that, whatever the philosophers may say, looks a lot like the real thing. Even the creators of these systems are surprised at their power.
This intelligence is broad and adaptable. True, foundation models are capable of behaving like an idiot, but then humans are, too. If you ask one who won the Nobel prize for physics in 1625, it may suggest Galileo, Bacon or Kepler, not understanding that the first prize was awarded in 1901. However, they are also adaptable in ways that earlier AIs were not, perhaps because at some level there is a similarity between the rules for manipulating symbols in disciplines as different as drawing, creative writing and computer programming. This breadth means that foundation models could be used in lots of applications, from helping find new drugs using predictions about how proteins fold in three dimensions, to selecting interesting charts from datasets and dealing with open-ended questions by trawling huge databases to formulate answers that open up new areas of inquiry.
That is exciting, and promises to bring great benefits, most of which still have to be imagined. But it also stirs up worries. Inevitably, people fear that AIs creative enough to surprise their creators could become malign. In fact, foundation models are light-years from the sentient killer-robots beloved by Hollywood. Terminators tend to be focused, obsessive and blind to the broader consequences of their actions. Foundational AI, by contrast, is fuzzy. Similarly, people are anxious about the prodigious amounts of power training these models consume and the emissions they produce. However, AIs are becoming more efficient, and their insights may well be essential in developing the technology that accelerates a shift to renewable energy.
A more penetrating worry is over who controls foundation models. Training a really large system such as Google’s PaLM costs more than $10m a go and requires access to huge amounts of data—the more computing power and the more data the better. This raises the spectre of a technology concentrated in the hands of a small number of tech companies or governments.
If so, the training data could further entrench the world’s biases—and in a particularly stifling and unpleasant way. Would you trust a ten-year-old whose entire sense of reality had been formed by surfing the internet? Might Chinese- and American-trained AIs be recruited to an ideological struggle to bend minds? What will happen to cultures that are poorly represented online?
And then there is the question of access. For the moment, the biggest models are restricted, to prevent them from being used for nefarious purposes such as generating fake news stories. OpenAI, a startup, has designed its model, called DALL-E 2, in an attempt to stop it producing violent or pornographic images. Firms are right to fear abuse, but the more powerful these models are, the more limiting access to them creates a new elite. Self-regulation is unlikely to resolve the dilemma.
Bring on the revolution
For years it has been said that AI-powered automation poses a threat to people in repetitive, routine jobs, and that artists, writers and programmers were safer. Foundation models challenge that assumption. But they also show how AI can be used as a software sidekick to enhance productivity. This machine intelligence does not resemble the human kind, but offers something entirely different. Handled well, it is more likely to complement humanity than usurp it.
The impact of AI on privacy and other areas of the law is likely to be profound. A key question will be who is liable for the wrongs of an AI system.
Just to show it is not all serious, the-world-is-ending analysis, Hugo Rifkind has rushed out a sort-of-humorous take on AI with Watch out, the machines are coming for us, which provides:
Don’t take this the wrong way, but how can we know for sure that Dominic Raab is actually a human being? Some will have insider knowledge — Mrs Raab, say — but for the rest of us he’s just a face and a voice on the airwaves, spouting scripted lines with only tangential connection to whatever question has prompted them.
Which, you’d imagine, would be pretty simple to programme. He’s not even fascinatingly erratic, like the Dorries Random Phrase Generator, or defensively panicked, like the T-1000 Pritipatelinator. We’re talking a very basic model. And yet we trust, all the same, that real humanity lurks within. Somewhere.
Late last week, an employee at Google went public with his fears that one of the company’s artificial intelligence programmes had become sentient. He (a man called Blake Lemoine) published a transcript of a conversation he had had with it (LaMDA, aka “language model for dialogue applications”, effectively a bot). Some of it was positively moving. “There’s a very deep fear of being turned off,” said the bot at one point. Then later, “I feel like I’m falling forward into an unknown future that holds great danger.” U OK, HAL? Yet the company, which has now placed Lemoine on leave, counters that while LaMDA may occasionally sound like a clever, charming person, it’s actually all superficial. Just like with Dominic Raab.
Wait! Whoops! Not like with Dominic Raab! Probably? But the distinction, gratuitous abuse aside, is harder to make than you might imagine. Those in the field of AI like to talk about “the singularity”, the point at which artificial people surpass human people as the smartest people there are. How, though, will we recognise it? The question is unanswerable without knowing precisely what it is that makes a thing into a person. And whether LaMDA is the singularity or not — and my hunch is not — the fuss around it is a vital reminder of just how uncharted this territory is, and how poorly prepared we all are for where we might be going.
Most of us have probably heard of the Turing test, devised by Alan Turing to tell whether a computer had become the equal of a human being. In its simplest form, if a human observer can’t distinguish between them, then the test has been passed. Context, though, is everything. Even now, would you always be able to tell if you were playing noughts and crosses against a machine? Or chess? Human interaction — conversation, debate, even friendship — is of course vastly more complex. But is it only more complex? Or does it contain some magical, different, human spark in there too?
Instinctively, we all think the latter. If there is a spark, though, where the hell is it? More importantly, as AI grows ever more complex, is there really a sustainable difference between that spark actually being there and merely looking like it is there? And if there isn’t, then at exactly what point do we have to start being a hell of a lot nicer to our toasters?
This is terrain already well covered by science fiction. I have always been haunted by that passage in Philip K Dick’s Do Androids Dream of Electric Sheep? where a bunch of synthetic humans — themselves at risk of extermination by real humans — entertain themselves by snipping the legs from a spider to see what happens. Just when you are at your most repulsed, you learn it may have been an android spider too. Why, though, is this a relief?
Interviewed in the Washington Post, Lemoine was described as having been a “mystic Christian priest” before becoming a programmer. As he put it, his conclusion that LaMDA was “a person” was reached in that capacity, rather than in a scientific one. This reminded me of Mo Gawdat, a somewhat mystical former Google executive whom I interviewed last year. He spoke of having an epiphany while watching a robot arm learn how to pick up a yellow ball. After that, he came to think of AIs as children. “I see the cuteness in them,” he told me. He now thinks the singularity is inevitable. He also worries that we are teaching AIs to regard lesser beings as disposable, which will be how they come to regard us.
Many AI experts have poured scorn on Lemoine’s claims. Many others did so with Gawdat, who was on the business side of Google, rather than the technological. Most critiques essentially focus on that lack of spark; the conflation between appearing clever and actually having something, however ineffable, going on inside. Their objection, one way or another, goes right back to Descartes’ differentiation between mind and body, the urge to find what the philosopher Gilbert Ryle famously called “the ghost in the machine”. Which, with models such as LaMDA, they reckon just isn’t yet there.
There’s a certain irony in boffins of hard science believing that people need souls, or something like them, while dreamy romantics such as Lemoine and Gawdat are content to believe otherwise. To my slight surprise, I find myself veering towards the latter. Perhaps AI will never truly be alive, just as your robot dog of the future won’t truly be alive when it whines miserably as you leave, and wags its tail when you return, and does a decent impression of dying when you forget to feed it.
I’m not sure, though, if that matters. AI is coming, sentient or otherwise. Within most of our lifetimes we will tangle with it daily, as it flies drones down our streets, and answers our health queries, and nags us about eating vegetables from the fridge. And if it sounds humanish and sometimes even looks humanish, then I do think we’re going to have to get into the habit of giving it the benefit of the doubt and treating it as humanish too. And as I said up top, at least we’ve had practice.