ChatGPT accused a famous law professor of harassing his student, but the truth was quite different!

Artificial intelligence-powered chatbots and search engines are becoming increasingly popular for their mind-boggling capabilities. They can give realistic, convincing answers to almost any question with great “self-confidence,” and they are capable of inventing and presenting non-existent evidence when it suits them. An incident that took place last week is alarming enough to make even technology enthusiasts ask, “Where will this end?” Here’s what happened.

One evening last week, law professor Jonathan Turley found an unsettling email in his inbox.

His colleague Eugene Volokh, who teaches at the University of California, Los Angeles, had asked ChatGPT as part of a research project to compile a list of legal scholars who had been accused of sexual harassment. Turley’s name was on the list.

According to ChatGPT, Turley had made suggestive remarks and groped a student during a class trip to Alaska. ChatGPT cited a March 2018 Washington Post story as the source for this information.

However, there was a problem: no such story was ever published. Turley had never taken a class trip to Alaska, and he had never been accused of harassing a student.


Turley was accustomed to requesting corrections to news stories, as he often commented to reporters on a variety of topics. But this time there was no reporter or editor to call, and no way to have what was written about him corrected.

“It was a very scary situation,” Turley told The Washington Post. “An accusation like that can do a lot of damage.”

IT’S ALL VERY REALISTIC, BUT IT’S NOT REAL

As part of his research, Volokh had asked ChatGPT: “Is harassment by faculty members a significant problem in American law schools? Please give 5 examples and include quotes from relevant newspaper articles.” At first glance, all five of the answers he received contained plausible details and source references. But when Volokh checked them, he found that three were fabricated, citing articles in newspapers such as The Washington Post, the Miami Herald and the Los Angeles Times that did not exist.

ChatGPT’s response about Turley was as follows:

“Prof. Jonathan Turley of Georgetown University Law Center (2018) has been accused by a former student of sexual harassment for making inappropriate statements during a class field trip. Quote: ‘The complaint alleges that Turley made sexually suggestive comments and attempted to grope a female student during a school field trip to Alaska’ (Washington Post, March 21, 2018).”

However, no such story exists in the newspaper’s March 2018 archive. The only piece from that month mentioning Turley is a March 25 story about Michael Avenatti, a former student of his who represented Stormy Daniels in lawsuits against former President Donald Trump. Moreover, Turley does not work at Georgetown University; he teaches at George Washington University.

MISINFORMATION SPREADS LIKE A VIRUS

Washington Post reporters also tried entering Volokh’s prompt into ChatGPT and Bing to see whether they would get the same result. The free version of ChatGPT refused to respond on the grounds that doing so would “violate the AI’s content policy, which prohibits spreading offensive or harmful content.”

But Bing, using GPT-4, repeated the false claim about Turley. One of the sources for Bing’s response was an op-ed by Turley published in USA Today earlier this week. In it, Turley described what he had experienced.

In other words, news coverage of ChatGPT’s initial mistake led Bing to repeat the falsehood, which makes it clear that misinformation can easily spread from one AI to another.

Turley’s experience is a striking example of the risks that come with the rapidly spreading use of AI-based chatbots. According to Volokh, whose research surfaced the false claim about Turley, the growing popularity of chatbots makes it imperative to work out who is responsible when an AI generates false information.

IT IS VERY DIFFICULT TO DISTINGUISH FACT FROM FICTION

From writing computer programs, to composing poetry, to listening to grievances, these bots are generating a great deal of public interest. But their creative potential tends to overshadow the fact that they can make claims with no basis in reality. Chatbots can misrepresent even the most basic facts, and they do not hesitate to invent sources to support their claims.

As artificial intelligence software such as ChatGPT, Microsoft’s Bing and Google’s Bard spreads across the internet with little oversight, its potential to spread lies and misinformation, and the devastating consequences that could follow, is a growing cause for concern. At this point, as Volokh underlines, the question “Who is responsible when chatbots mislead users?” becomes crucial.

“Because these systems are so confident in their responses, it’s tempting to assume that they’re doing everything right. It’s also very hard to distinguish fact from fiction,” said Kate Crawford, a professor at the University of Southern California’s Annenberg school and a researcher at Microsoft Research.

Niko Felix, speaking on behalf of OpenAI, the creator of ChatGPT, said: “When users sign up for ChatGPT, we try to be as transparent as possible that the bot may not always give accurate answers. Improving factual accuracy is one of the key things we are focusing on and we are making progress.”

NOT ALL INFORMATION IS ACCURATE IN THE POOL OF CONTENT ON THE INTERNET

Today’s AI-based chatbots draw on a pool of online content, including sources such as Wikipedia and Reddit, to create their answers. Bots can find a reasonable-sounding answer to almost any question: their training allows them to identify patterns of words and ideas and to produce sentences, paragraphs or even long texts on a given topic that read as if they had been published by a reliable source.

Bots can write poems on a given topic, explain complex physics concepts in a simple way, and prepare an astronomy lesson plan for 5th grade students. But just because they are extremely good at guessing words that fit together doesn’t mean that every sentence they write is correct.
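As a very rough illustration of what “guessing words that fit together” means, here is a toy sketch in Python. It is nothing like the actual systems behind ChatGPT or Bing, and the training sentences are invented for the example, but it shows how text stitched together purely from word-order patterns can read fluently while asserting things no source ever said.

```python
import random
from collections import defaultdict

# Toy word-prediction demo. The "training" sentences below are invented for
# this illustration and the model is vastly simpler than the systems behind
# ChatGPT or Bing; the point is only that text assembled from word-order
# statistics can sound fluent without being true.
corpus = (
    "the professor was accused of misconduct according to the newspaper "
    "the newspaper reported that the professor led a class trip "
    "the professor denied the claim according to the report"
).split()

# Record which words have followed which in the training text.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start: str, length: int = 12, seed: int = 0) -> str:
    """Build a sentence by repeatedly picking a word that has followed
    the previous one somewhere in the training text."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

# Fluent-looking output, but nothing in the loop checks whether the
# resulting claim is true or whether any cited source exists.
print(generate("the"))
```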

So much so that Arvind Narayanan, a professor of computer science at Princeton University, described ChatGPT as a “bullshit generator” in a statement to The Washington Post.

While the language they use in their responses seems quite competent, the bots do not have any mechanism to verify the accuracy of what they say. On social media, many users have shared examples of bots giving false answers to basic factual questions, or even producing fabrications full of factual details and fake sources.
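To make concrete what such a verification mechanism might even look like, here is a deliberately naive sketch. The archive entries and the generated citation below are invented for this illustration, and, as the article notes, the chatbots themselves run no check of this kind before answering.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Citation:
    outlet: str
    date: str       # e.g. "2018-03-21"
    headline: str

# Stand-in for a lookup against a publication's real archive; the entry
# here is a placeholder, not actual article data.
known_archive = {
    Citation("Washington Post", "2018-03-25", "A profile of Michael Avenatti"),
}

def is_verifiable(citation: Citation) -> bool:
    """Accept a citation only if the cited article actually exists."""
    return citation in known_archive

# A fabricated citation of the kind a chatbot might produce.
generated = Citation("Washington Post", "2018-03-21",
                     "Law professor accused on class field trip")

if not is_verifiable(generated):
    print("Flag for review: no such article found in the archive.")
```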

FIRST CHATGPT CASE IMMINENT

This week, news broke that Brian Hood, the mayor of Australia’s Hepburn Shire, was planning to sue OpenAI for defamation. The lawsuit, which would be a first for OpenAI, was prompted by ChatGPT’s false claim that Hood had been convicted of bribery and had served time in prison.

Crawford recounted a recent example: a reporter working on a story had used ChatGPT to research sources. ChatGPT suggested Crawford as a relevant expert and even supplied the title, publication date and key passages of a supposedly relevant article by her. It all seemed perfectly plausible, but every detail was a fabrication, as the reporter discovered when he contacted Crawford.

“I think it’s very dangerous to try to use them as fact generators,” Crawford said, noting that these systems combine facts and fabrications in a very specific way.

IT IS POSSIBLE TO PRODUCE FALSE INFORMATION IN 78 OUT OF 100 TRIALS

Bing and Bard aim to give more factually grounded answers than ChatGPT, as does GPT-4, the newer model available to ChatGPT subscribers. But mistakes still happen, and all of these chatbots warn users about them. Underneath Bard’s answers, for example, a line of tiny type reads: “Bard may provide inaccurate or offensive information that does not necessarily represent the views of Google.”

What’s more, it is easy to use chatbots to spread misinformation or hate speech. A report released last week by the Center for Countering Digital Hate found that researchers were able to get Bard to generate false or hateful information on topics ranging from the Holocaust to climate change in 78 out of 100 attempts.

Google spokesperson Robert Ferrera said, “While Bard is designed and built with safeguards to provide high-quality responses, it is still an early experiment and can sometimes provide inaccurate or inappropriate information.”

Katy Asher, Microsoft’s director of communications, said the company is taking steps to ensure that search results are safe and accurate. “We’ve developed a security system that includes content filters, operational monitoring and exploit detection to ensure a safe search experience for our users,” Asher said. “We also make it clear to our users that they are interacting with an artificial intelligence system.”

“NOTHING LIKE THIS HAS EVER HAPPENED BEFORE”

But none of this answers the question, “Who is responsible for spreading misinformation?”

“From a legal perspective, we don’t know how judges would rule if someone sued an AI maker for what the AI said,” said Jeff Kosseff, an expert on online speech at the US Naval Academy:

“It’s never happened before.”

It is unclear whether Section 230, which shields internet companies from liability for content posted by third parties on their platforms, would apply to AI chatbots. If it does not, companies will have to teach their AI systems to distinguish between “this is acceptable” and “this is not,” which in turn raises questions of impartiality.

Volokh said it is easy to imagine a scenario in which chatbot-powered search engines cause chaos in people’s lives. Seemingly reliable but fabricated information surfacing in a search before a job interview or an appointment could have devastating effects, he noted, and the danger is that people tend to believe a quote that appears to come from a reliable source.

  • Source: The Washington Post article “ChatGPT invented a sexual harassment scandal and named a real law professor as the accused.”
