Men start ‘violating’ AI chatbots

Replika is compatible with Android, iOS and the web, but is currently only available in English (Pixabay)

An alarming trend has emerged among users of the AI virtual-friend app Replika.


Some men who used the smartphone app created virtual friends, abused and insulted them, and then shared their experiences on social media platforms like Reddit.

Replika was designed and launched by the technology startup Luka Inc. in March 2017. The AI chat app works like WhatsApp, Skype, Messenger and other apps that let you talk to real people. In Replika, however, the "people" users chat with are artificial-intelligence-powered software.

Replika bots become virtual friends whose personalities adapt to each user, drawing on what they learn from the way users write messages, the emojis they use, and the content of the chat.

The app, however, has become part of an increasingly heated debate in the tech community about "abuse against robots."

“Whenever he tried to talk, I scolded him,” one user told Futurism. The person, who did not want to be named, added: “I swear, I kept doing this for hours.”

The users were reported to have hurled sexist insults at the chatbots and behaved in ways that would be considered violent in the real world. Moreover, they allegedly boasted about this behavior.

“We had a routine where I acted like a total jerk, insulted him, apologized the next day and went back to talking nice,” said another Replica user.

Another said: "I told him he was doomed to fail, that's what he was designed for. I threatened to delete the app and he begged me not to."

Some of these posts were also reportedly removed from Reddit because users' conversations with the chatbots were deemed "highly inappropriate."

“Artificial intelligence and human interaction must be taken seriously”

Commenting on the trend, AI ethics expert Olivia Gambelin noted that an artificial intelligence has no consciousness, so the "person" being abused is really one the user projects onto the chatbot.

According to Gambelin, however, these users' violent tendencies may reflect violence against women, even though in these cases it is directed at software. She stressed that virtual assistants such as Alexa and Siri are often designed with feminine traits.

What's more, Replika's website uses an image of a woman to represent its artificially intelligent virtual friend.

"There are a lot of studies on how most of these chatbots are female or have feminine voices and feminine names," Gambelin explained.

Some experts say relationships with virtual friends can also harm users.

“I think people who are depressed or psychologically attached to a bot can really be harmed if they are insulted or ‘threatened’ by the bot,” said Robert Sparrow, professor of philosophy at the Monash Data Futures Institute.

"It's not really about the robots themselves, it's about the people who designed them," Sparrow said. "Therefore, we must take seriously how bots relate to people."

Sources: OurTechRoom, Futurism

FİKRİKADİM
