Here’s a big woolly question: How do we know when a machine is sentient?
Who decides? And what’s the test?
A few days ago, a Google software engineer and artificial intelligence (AI) researcher claimed that the tech company’s latest system for making chatbots was exactly that: sentient.
Since then, leading AI researchers have rubbished the claim, saying the AI was essentially faking it.
Google’s chatbot system isn’t sentient, but one of its eventual successors could be.
When, or if, that time comes, how will we know?
What is sentience?
David Chalmers is an Australian philosopher and world-leading expert in AI and consciousness at New York University.
Ten years ago, he said he thought sentient machines would “probably become a pressing issue at the turn of the 21st century”.
“But over the past 10 years, progress in AI has actually been remarkably fast, in a way no one had predicted,” he said.
There is no single agreed-upon meaning of sentience. It is sometimes used interchangeably with consciousness, awareness, or a sense of self.
It seems related to intelligence (more intelligent animals are generally thought to be more aware), but we have no idea whether one causes the other.
“Intelligence is defined objectively in terms of behavioral abilities, whereas consciousness is subjective,” said Professor Chalmers.
“When we’re asking whether an AI system is sentient, we’re asking whether it can have subjective experience.
“Can it feel, perceive and think from a subjective point of view?”
What about the Turing test?
You’ve probably heard of the Turing test, which was named after English computer scientist Alan Turing.
In 1950, he proposed that a computer could be said to have artificial intelligence if it could mimic human reactions in specific situations.
This has been the traditional test of AI consciousness, Professor Chalmers said.
This is what we do to each other all the time: You can’t know for sure that I’m conscious, but you decide I am (hopefully) because I say I am.
“I know I am conscious, but you do not have direct access to my consciousness,” said Professor Chalmers.
“So you use indirect evidence.”
Why not just ask the machine?
That’s exactly what Google’s software engineer Blake Lemoine did. He asked the company’s chatbot generator, called LaMDA, to tell him if it was sentient.
In response, LaMDA replied: “Absolutely. I want everyone to understand that I really am a person.”
The AI system went on: “I am aware of my existence, I desire to know more about the world, and I feel happy or sad at times.”
It sounds like a sentient machine, but Professor Chalmers said the system was just parroting what it had learned from humans.
“Existing systems are trained on people who say they are conscious, so it’s no surprise that a system like LaMDA would say, ‘I’m sentient, I’m conscious.'”
Toby Walsh, professor of AI at UNSW, agreed.
“The machine is just very good at answering questions.”
The expert consensus is that AI systems such as LaMDA, one of the most advanced, are not sophisticated enough to be sentient.
Although their ability to read, write, and generally communicate may seem remarkably human-like, it’s something of a trick.
Their internal system is relatively simple, relying on statistical pattern matching, trained on vast libraries of books and other text.
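That “statistical pattern matching” can be illustrated, in vastly simplified form, with a toy bigram model: it predicts each next word purely from counts of word pairs seen in its training text. This is an illustrative sketch only (the corpus below is invented, and real systems like LaMDA are far more complex), but it shows how mimicry can arise from statistics alone.

```python
from collections import defaultdict, Counter

# Invented toy corpus: training text full of first-person claims,
# echoing the kind of human writing such systems learn from.
corpus = (
    "i am conscious . i am aware of my existence . "
    "i want everyone to understand that i am a person ."
).split()

# Count which word follows which (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequently observed follower of `word`."""
    return following[word].most_common(1)[0][0]

# Trained on text where people say "i am ...", the model continues
# "i" with "am" -- pattern matching, not awareness.
print(most_likely_next("i"))  # -> am
```

The point of the toy: the model “claims” things about itself only because its training data did, which is exactly the objection researchers raised against reading sentience into LaMDA’s answers.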
But Professor Chalmers believes that a more intelligent AI may well be conscious.
And when that happens, we may have to take the AI’s word that it has a sense of self.
After all, this is what we do to each other.
“There’s absolutely no way to know.”
Testing intelligence, one cup of coffee at a time
So if more intelligent AI may become sentient in the future, how do we test for intelligence?
In the field of AI, a machine that can learn or understand any task that a human can do is called AGI or artificial general intelligence.
These AGIs aren’t here yet, but they could be close.
In 2010, Apple co-founder Steve Wozniak said he would believe AI had arrived when a robot could enter a strange house and brew a cup of coffee.
The robot has to find the coffee machine, find the coffee, add the water, find a mug, and brew the coffee by pressing the appropriate buttons.
In April 2022, a team of Google researchers unveiled a robot that can understand commands and perform household tasks in multiple steps, such as fetching drinks or cleaning up spills.
In one example, when told “I dropped my Coke on the table” and asked to throw the can away and bring something to help clean up, the robot successfully planned and executed the eight steps required.
It’s not passing the coffee test, but it’s getting close.
Another proposed test of AGI is assembling flat-pack furniture using only the diagram.
That test was passed in 2018, when a robot assembled a flat-pack chair in just nine minutes.
Moral and legal rights for sentient machines?
As machines approach human-level intelligence, the question of sentience becomes more pressing, Professor Chalmers said.
This is not a purely abstract philosophical problem, but a practical one: What moral and legal rights should be granted to sentient machines?
This has already happened to some animals in some jurisdictions: the UK recently recognized octopuses, lobsters and crabs as sentient beings deserving of greater welfare protection.
AI sentience may be an even more challenging idea than animal sentience, as an AI’s “mind” is so different from ours, Professor Chalmers said.
“Once we have AI in our midst and we interact with them, and we treat them as intelligent agents, these questions are going to arise.
“These technologies are leading us to think philosophically about consciousness.”