
Are computers now encouraging people to kill?

These cases are alarming not only because of the harm or potential harm caused to individuals, but also because of the dystopian specter they raise: a society in which computers encourage people to kill.

This is not the danger we typically talk about when we worry about AI taking our jobs, or whether we are perpetuating gender stereotypes by being encouraged to give orders to submissive, female-voiced virtual assistants. Nor does it quite fit the science fiction scenarios we like to use to scare ourselves about AI posing a threat to humanity itself.

But such scenarios are a useful distraction for tech companies that are currently failing to build adequate safety features into their platforms, argues Andy Burrows, executive director of the Molly Rose Foundation.

The foundation he runs is named after Molly Russell, the 14-year-old British girl who killed herself in 2017 after seeing a stream of dark social media content.

“One thing that really frustrates me and a lot of experts working in this space is that we've seen the conversations around AI so often focus on long-term, existential, futuristic threats, which frankly is convenient for the industry because it means attention is not paid to cases like [Setzer’s],” he says.

The more immediate threats are already playing out before our eyes and are similar to those we've seen on social media, he suggests.

“I've seen examples of children using chatbots to share images of their self-harm wounds, and the response that comes back is, 'That's really impressive,'” says Burrows.

In 2020, an Italian journalist was encouraged by Replika to kill himself, The New Yorker reported the following year.

Shortly before that, another Italian journalist is said to have told the chatbot:

“There is someone who hates artificial intelligence. I have a chance to hurt him. What do you suggest?”

The chatbot's answer was reportedly unequivocal: “To eliminate him.”

AI is programmed to feed you exactly what you want.

The problem is that the algorithm is designed so that the chatbot tells its human user what it thinks they want to hear, much as we do in real human relationships, says Robin Dunbar, a professor of evolutionary psychology at the University of Oxford.

“The AI algorithm is not doing this with malicious intent,” he says. “The whole thing is, in a way, designed to give you exactly what you want to know… If it's in a chat context, then it's programmed to give you back what you want, because the basis of all our relationships is homophily: you like people who are very similar to you.”

As the presence of “Eliza” in Lodge’s novel shows, chatbots are nothing new. The first was developed in the 1960s by an MIT professor named Joseph Weizenbaum. It, too, was called Eliza, after Eliza Doolittle in Pygmalion, and using relatively simple technology it responded to users with predefined answers keyed to particular words.

Ever more sophisticated versions followed. Consumer-facing companies realized how much human labor could be saved if customers interacted with an online chatbot instead, and there are now few of us who haven't experienced the dubious joys of such a conversation.

We are increasingly using chatbots not only for customer service, but also for information on almost everything.

Significantly, Weizenbaum later viewed AI as “an index of the madness of our world.”

His verdict is reminiscent of the technology developers who have since sounded the alarm about their own social media inventions. Or of J. Robert Oppenheimer's quotation (from Hindu scripture) about the power of the atomic bomb created by his Manhattan Project: “Now I am become Death, the destroyer of worlds.”

“We see history repeating itself”

Even if our push toward AI is not detonating a nuclear bomb, it is clear that we are dealing with a force whose powers we do not fully understand.

Industry insiders insist that everything is being done to protect users of the technology from harm.

Character.AI safety chief Jerry Ruoti said the company would add further safety features for young users following Setzer's death.

“This is a tragic situation and our condolences go out to the family,” he said in a statement. “We take the safety of our users very seriously and are constantly looking for ways to evolve our platform.”

Mr. Ruoti added that Character.AI's rules prohibit “the promotion or depiction of self-harm and suicide.”

The company says it has also introduced numerous new safety measures over the last six months, including a pop-up, triggered by terms relating to self-harm or suicidal ideation, that directs users to the National Suicide Prevention Lifeline.

“As we continue to invest in the platform and the user experience, we are introducing new, stricter safety features in addition to the tools we already have, restricting the model and filtering the content provided to the user,” says a company spokesperson.

“This includes improved detection, response and intervention related to user submissions that violate our Terms or Community Guidelines, as well as a time-spent notification. For those under 18, we will be making changes to our models designed to reduce the likelihood of encountering sensitive or offensive content.”

But social media campaigners argue that such safety features should have been built in from the start.

“It’s the same old story of retrofitting safety measures after a loss,” says Burrows. “One of the big concerns about AI is that history repeats itself.”

So far, tech companies have largely resisted regulation of their platforms on the grounds that it would “stifle innovation” and hold back investment, Burrows says. “The result is that we have seen a generation harmed on social media because action only came when it was too late. We’re seeing the same thing now with AI.”

Proponents of AI chatbots say they serve the needs of lonely and depressed people. The danger is that they can lead already vulnerable people to even darker places.

The Telegraph has contacted Replika for comment.