Thursday, July 7, 2022

Why AI should be afraid of us

Artificial intelligence is gradually catching up with ours. AI algorithms can now consistently defeat us at chess, poker and multiplayer video games, generate images of human faces indistinguishable from real ones, write news articles (not this one!) and even love stories, and drive cars better than most teenagers do.

But AI isn’t perfect yet, if Woebot is any indicator. Woebot, as Karen Brown wrote in Science Times this week, is an AI-powered smartphone app that aims to provide low-cost counseling, using dialogue to guide users through the basic techniques of cognitive behavioral therapy. But many psychologists doubt whether an AI algorithm can ever express the kind of empathy required to make interpersonal therapy work.

“These apps really shortchange the essential ingredient that — mounds of evidence show — is what helps in therapy, which is the therapeutic relationship,” Linda Michaels, a Chicago-based therapist who is co-chair of the Psychotherapy Action Network, a professional group, told The Times.

Empathy, of course, is a two-way street, and we humans don’t display much more of it for bots than bots do for us. Numerous studies have found that when people are placed in a situation where they can cooperate with a benevolent AI, they are less likely to do so than if the bot were an actual person.

“There seems to be something missing about reciprocity,” Ophelia Deroy, a philosopher at Ludwig Maximilian University in Munich, told me. “We will treat a perfect stranger better than AI.”

In a recent study, Dr. Deroy and her neuroscientist colleagues set out to understand why that is. The researchers paired human subjects with unseen partners, sometimes human and sometimes AI; each pair then played one of a series of classic economic games — Trust, Prisoner’s Dilemma, Chicken and Stag Hunt, as well as one they created called Reciprocity — designed to measure and reward cooperativeness.
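The structure of one of those games, the Prisoner’s Dilemma, can be sketched in a few lines. The payoff values below are standard illustrative numbers, not the ones used in Dr. Deroy’s study:

```python
# A minimal sketch of the Prisoner's Dilemma, one of the economic games
# mentioned above. Payoff values here are illustrative assumptions.

PAYOFFS = {
    # (my move, partner's move) -> (my payoff, partner's payoff)
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation rewards both
    ("cooperate", "defect"):    (0, 5),  # the cooperator gets exploited
    ("defect",    "cooperate"): (5, 0),  # defection exploits a benevolent partner
    ("defect",    "defect"):    (1, 1),  # mutual defection leaves both worse off
}

def play(my_move, partner_move):
    """Return the payoff pair for one round of the game."""
    return PAYOFFS[(my_move, partner_move)]

# Against a partner guaranteed to cooperate -- like a benevolent bot --
# defecting yields the highest individual payoff, which is exactly the
# temptation the study's participants gave in to.
print(play("defect", "cooperate"))  # -> (5, 0)
```

The dilemma lies in the ordering of the payoffs: defecting always pays more individually, yet mutual cooperation beats mutual defection, so the game measures whether a player will sacrifice personal gain for joint benefit.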


Our lack of reciprocity toward AI is commonly assumed to reflect a lack of trust. It’s hyper-rational and unfeeling, after all, surely just out for itself, unlikely to cooperate — so why should we? Dr. Deroy and her colleagues reached a different and perhaps less comforting conclusion. Their study found that people were less likely to cooperate with a bot even when the bot was keen to cooperate. It’s not that we don’t trust the bot; it’s that we do: the bot is guaranteed benevolent, a capital-S sucker, so we exploit it.

That conclusion was borne out by conversations with the study’s participants afterward. “Not only did they tend not to reciprocate the cooperative intentions of the artificial agents,” Dr. Deroy said, “but when they basically betrayed the trust of the bot, they didn’t report guilt, whereas with humans they did.” She added: “You can just ignore the bot, and there is no feeling that you have broken any mutual obligation.”

This could have real-world implications. When we think about AI, we tend to think about the Alexas and Siris of our future world, with whom we might form some sort of intimate relationship. But most of our interactions will be one-time, often wordless encounters. Imagine driving on the highway, and a car wants to merge in front of you. If you notice that the car is driverless, you’ll be far less likely to let it in. And if the AI doesn’t account for your bad behavior, an accident could ensue.


“What sustains cooperation in society at any scale is the establishment of certain norms,” Dr. Deroy said. “The social function of guilt is exactly to make people follow social norms that lead them to make compromises, to cooperate with others. And we have not evolved to have social or moral norms for non-sentient creatures and bots.”

That, of course, is half the premise of “Westworld.” (To my surprise, Dr. Deroy had not heard of the HBO series.) But a guilt-free landscape could have consequences, she noted: “We are creatures of habit. So what guarantees that behavior that gets repeated, and where you display less politeness, less moral obligation, less cooperativeness, will not color and contaminate the rest of your behavior when you interact with another human?”

There are similar consequences for AI, too. “If people treat them badly, they’re programmed to learn from what they experience,” she said. “An AI that was put on the road and programmed to be benevolent should start to be not that nice to humans, because otherwise it will be stuck in traffic forever.” (That’s the other half of the “Westworld” premise, basically.)

There we have it: the true Turing test is road rage. When a self-driving car starts honking wildly because you cut it off, you’ll know that humanity has reached the pinnacle of achievement. By then, hopefully, AI therapy will be sophisticated enough to help driverless cars solve their anger-management issues.

Nation World News Desk