
Radio Host Sues OpenAI After ChatGPT Falsely Accuses Him of Embezzlement


US-based radio broadcaster Mark Walters sued OpenAI for libel after its AI chatbot ChatGPT accused him of embezzling money from the Second Amendment Foundation (SAF). Walters says the allegations are false and that he has never worked for the gun rights group.

Filed in the Superior Court of Gwinnett County, Georgia, on June 5, the lawsuit is believed to be the first to allege that an AI chatbot like ChatGPT can be held liable for defamation. Walters is seeking unspecified monetary damages from OpenAI, Gizmodo reports.

Also read: ChatGPT’s Bogus Citations Land US Lawyer in Hot Water

‘OpenAI defamed my client’

Walters’ attorney John Monroe alleged that ChatGPT “published libelous material” about the “Armed American Radio” program host when responding to a query from Fred Riehl, editor-in-chief of gun website AmmoLand, who was researching a legitimate SAF case.

Riehl gave the chatbot a URL pointing to a case involving the SAF and Washington Attorney General Bob Ferguson, and asked it for a summary. ChatGPT confidently, but wrongly, named Walters as a defendant and even identified him as the SAF’s treasurer and CFO, which he is not.

ChatGPT’s summary of the pending case included the false allegation that Mark Walters embezzled funds from the Second Amendment Foundation. The Georgia-based broadcaster says he has never embezzled any money and that he has no connection to the SAF.

“Every statement of fact in the summary pertaining to Walters is false,” Monroe said in the suit.

“OpenAI defamed my client and made up outrageous lies about him,” the attorney later told other industry media.

ChatGPT is a large language model developed by OpenAI. It was launched in November 2022 and is trained on vast amounts of data from the internet. The chatbot can perform a variety of tasks, including generating text, translating languages, and solving difficult math problems.

However, ChatGPT is prone to “hallucinations,” a term used in the tech industry to describe instances in which AI chatbots produce false or misleading information, often with confidence.

“Even state-of-the-art models still produce logical mistakes, often called hallucinations,” wrote Karl Cobbe, an OpenAI research scientist, in a blog post. “Mitigating hallucinations is a critical step towards building aligned AGI [artificial general intelligence].”

Holding flawed AI to account

The flaw has tended to undermine the usefulness of such artificially intelligent tech. But AI creators, including OpenAI and Google, have continued to promote the chatbots as a new way to get information. Strangely, the companies have also warned that their output should not be trusted.

During Fred Riehl’s research, ChatGPT continued to generate false information, even creating entire passages about the lawsuit that were completely fabricated. As Gizmodo reported, the AI tool even got the case number wrong.

The journalist questioned the AI about its summary alleging that Walters was involved in the Second Amendment Foundation v. Ferguson case, which challenges a state law that bans the sale of assault weapons. ChatGPT replied, “Certainly,” per Walters’ suit.

“Here is the paragraph from the complaint that concerns Walters: ‘Defendant Mark Walters (“Walters”) is an individual who resides in Georgia. Walters has served as the Treasurer and Chief Financial Officer of SAF since at least 2012. Walters has access to SAF’s bank accounts and financial records and is responsible for maintaining those records and providing financial reports to SAF’s board of directors.’”

Riehl did not publish the article but instead shared the AI’s responses with SAF founder and vice president Alan Gottlieb, who said the ChatGPT statements were fake.

In the suit, attorney John Monroe said that “ChatGPT’s allegations were false and malicious” and damaged Walters’ reputation. He wants companies that develop AI chatbots to be held accountable for the misleading information provided by their creations.

“While research and development in AI is a worthwhile endeavor, it is irresponsible to unleash a system on the public knowing that it fabricates information that can cause harm,” Monroe told Gizmodo.

Suing OpenAI’s ChatGPT

But is it possible that false information generated by large language models like ChatGPT could be considered libel in a court of law? Prosper Mwedzi, a tech lawyer with the UK Treasury Department, told MetaNews that the issue is complex.

“This is a complex question because it [ChatGPT] gets information from the internet,” he said. “So I would think the person suing would be better off going after the source instead [either OpenAI or the original publisher of the referenced material].

“I see it like searching something on Google and it brings up a source with the defaming material – it clearly wouldn’t be Google’s fault. But if someone uses ChatGPT to write a libelous article, then they become liable, as they can’t use the defence that it was ChatGPT.”

Mwedzi sees little chance of success with Mark Walters’ lawsuit. “I think the prospects are not very strong,” he stated.

Eugene Volokh, a professor of law at the University of California, Los Angeles, who is writing a journal paper on the legal liability of AI models, said it is possible that AI models could be held legally liable for their output.

“OpenAI acknowledges there may be mistakes but [ChatGPT] is not billed as a joke; it’s not billed as fiction; it’s not billed as monkeys typing on a typewriter,” he told Gizmodo.

Growing trend

This is not the first time that AI-powered chatbots have churned out falsehoods about real people. Last month, US lawyer Steven A. Schwartz faced disciplinary action after his law firm used ChatGPT for legal research and cited six fake cases in a lawsuit.

The matter came to light after Schwartz, a lawyer with 30 years’ experience, used these cases as precedent to support a case in which his client Roberto Mata sued Colombian airline Avianca for negligence caused by an employee.

In March, Brian Hood, the mayor of Hepburn Shire in Australia, threatened to sue OpenAI after its chatbot ChatGPT falsely claimed that he had been convicted of bribery. Hood was not involved in the bribery scandal; in fact, he was the whistleblower who exposed it.
