Artificial intelligence researcher and YouTuber Yannic Kilcher trained a language model on data from a notorious website.
Kilcher used 3.3 million messages from 4chan, an imageboard known for anime and memes that is already blocked in many countries.
The messages Kilcher used were scraped from the /pol/ board, which hosts some of the most offensive content on the platform.
In the end, the model turned into a “hate speech machine”, generating highly offensive content.
“The language model was horrible,” Kilcher said. “It perfectly summarized the aggression, nihilism, trolling and suspicion that permeated most posts on /pol/.”
The YouTuber then created nine bot accounts powered by the model. The bots posted about 15,000 times in 24 hours.
However, many AI researchers saw Kilcher’s activity as an “unethical experiment”.
Lauren Oakden-Rayner, senior research fellow at the Australian Institute for Machine Learning, said, “This experiment would never pass an ethics committee.”
“Medical research has a strong ethical culture because we have a terrible history of harming people. This experiment violates every principle of human research ethics.”
Kilcher, for his part, said that he did not intend this as an experiment, and that users already share highly offensive content on the platform in question.
Kilcher named the model “GPT-4chan” and shared his observations in a video on his YouTube channel, describing it as “The worst artificial intelligence you’ll ever see”.
The name is a nod to GPT-3, the famous language model from the artificial intelligence company OpenAI.
GPT-3 became famous for its ability to design website layouts, answer questions, and write articles and recipes.