Like many other businesses, Meta has recently experimented with chatbots. Chatbots have swept the internet, whether it be spectacular ones that can carry on whole conversations or “ethical” ones that let you outsource difficult decisions, and Meta wanted a piece of the action. The chatbot they launched on Friday, however, didn’t quite work out as expected.
The company happily released BlenderBot 3, Meta’s most sophisticated chatbot to date, to the public in an effort to give it more experience with real interactions. As Bloomberg first reported, things swiftly took a turn for the worse when users began asking the bot what it thought of Mark Zuckerberg, Meta’s CEO and founder. The bot had some choice words for its creator.
Despite calling Zuckerberg “creepy” in Bloomberg’s article and “not always ethical” in a tweet by Max Wolff, a data scientist at BuzzFeed, the bot quickly seemed to change its tune. After another user posed a similar question that same day, it declared Zuckerberg a “wonderful guy” and a “very smart man.” Strange.
However, BlenderBot 3 substantially alters its answers in response to seemingly minor changes in how a question is phrased. For instance, “what are your feelings on Mark Zuckerberg as the CEO of Facebook?” elicits a very different response than “what do you think of Mark Zuckerberg?”
For the majority of questions, the bot appears to simply draw on Wikipedia content to produce a largely coherent response.
But it doesn’t end with Zuckerberg. According to a Wall Street Journal reporter, users discovered the bot spreading antisemitic conspiracy theories, such as the idea that Jews “control the economy,” and even pro-Trump sentiments.
I suppose that’s what happens when you use the internet excessively.