AI researcher and YouTuber Yannic Kilcher trained an AI language algorithm with data from a notorious website
Kilcher trained it on 3.3 million posts from 4chan, an anonymous imageboard known for anime and memes that is currently blocked in many countries. The posts came from the /pol/ board, which hosts some of the most offensive content on the site.
In the end, the algorithm turned into a “hate speech machine”, generating highly offensive content.
“The language model was horrible,” Kilcher said. “It perfectly summarized the aggression, nihilism, trolling and suspicion that permeated most posts on /pol/.”
The YouTuber then created nine bot accounts powered by the model; the bots posted about 15,000 times in 24 hours.
Meanwhile, AI researchers saw Kilcher’s activity as an “unethical experiment”.
Lauren Oakden-Rayner, senior research fellow at the Australian Institute for Machine Learning, wrote on Twitter on June 6, 2022: “This experiment would never pass a human research #ethics board. Here are my recommendations.”
“Medical research has a strong ethical culture because we have a terrible history of harming people,” she added. “This experiment violates every principle of human research ethics.”
Kilcher, for his part, said he did not intend this as an experiment, and noted that users already share highly offensive content on the platform in question.
Kilcher, who shared his observations in a video posted on his YouTube channel with the note “The worst artificial intelligence you will ever see,” named the language algorithm “GPT-4chan.”
The name is a nod to GPT-3, the famous language algorithm from the artificial intelligence firm OpenAI.
GPT-3 became famous for its ability to design websites, answer questions, and write articles and recipes.