Lawyers who cited fake cases invented by ChatGPT must pay

Attorneys who filed court documents citing cases completely invented by OpenAI’s ChatGPT have been formally slapped down by a New York judge.

Judge Kevin Castel on Thursday issued an opinion and order on sanctions [PDF] that found Peter LoDuca, Steven A. Schwartz, and the law firm of Levidow, Levidow & Oberman P.C. had “abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations created by the artificial intelligence tool ChatGPT, then continued to stand by the fake opinions after judicial orders called their existence into question.”

Yes, you got that right: the lawyers asked ChatGPT for examples of past cases to include in their legal filings, the bot just made up some previous proceedings, and the attorneys slotted those in to help make their argument and submitted it all as usual. That is not going to fly.

Built by OpenAI, ChatGPT is a large language model attached to a chat interface that responds to text prompts and questions. It was trained by analyzing massive amounts of text and learning statistically probable patterns, and it answers input with whatever output those patterns make most likely – responses that often make sense because they resemble familiar training text. But ChatGPT, like other large language models, is known to hallucinate – to state things that are not true. Evidently, not everyone got the memo about that.
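For the curious, here's a toy sketch of that "statistically probable patterns" idea: a minimal bigram model, which is a vastly simplified stand-in for a real LLM (the corpus and code below are our own illustration, not anything from OpenAI). Each next word is picked purely by how often it followed the previous word in the training text – nothing in the loop checks whether the result is true.

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram model is nothing like GPT in scale,
# but it shows the core mechanic -- the next word is sampled by
# frequency, with no notion of truth anywhere in the process.
corpus = (
    "the court held that the airline was liable "
    "the court held that the claim was time-barred "
    "the court dismissed the claim against the airline"
).split()

# Count which words followed which in the training text.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=10):
    """Emit up to `length` words, each sampled from what followed the last."""
    word, out = start, [start]
    for _ in range(length):
        candidates = following.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # statistically plausible, never verified
        out.append(word)
    return " ".join(out)

print(generate("the"))
# Possible output: "the court held that the claim was liable"
# -- fluent legal-sounding prose, and potentially flat wrong.
```

Scale that mechanic up by billions of parameters and you get far more fluent output, but the same blind spot: a confident-sounding citation is just a likely string of words, not a record the model has looked up.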

Is this a good moment to bring up that Microsoft is heavily marketing OpenAI’s GPT family of bots, pushing them deep into its cloud and Windows empire, and letting them loose on people’s corporate data? The same models that imagine lawsuits and obituaries, and were described as “incredibly limited” by the software’s creator? That ChatGPT?

Timeline

In late May, Judge Castel challenged the attorneys representing plaintiff Roberto Mata, a passenger injured on a 2019 Avianca airline flight, to explain themselves after the airline’s lawyers suggested the opposing counsel had cited fabricated rulings.

Not only did ChatGPT invent cases that never existed, such as “Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2009),” but, as Schwartz told the judge in his June 6 declaration, the chatty AI model also lied when questioned about the veracity of its citations, saying the Varghese case “does indeed exist” and insisting it could be found on Westlaw and LexisNexis, despite assertions to the contrary by the court and defense counsel.

Screenshot of ChatGPT insisting non-existent case exists

The attorneys eventually apologized but the judge found their contrition unconvincing because they failed to admit their mistake when the issue was initially raised by the defense on March 15, 2023. Instead, they waited until May 25, 2023, after the court had issued an Order to Show Cause, to acknowledge what happened.

“Many harms flow from the submission of fake opinions,” Judge Castel wrote in his sanctions order.

“The opposing party wastes time and money in exposing the deception. The court’s time is taken from other important endeavors. The client may be deprived of arguments based on authentic judicial precedents. There is potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus opinions and to the reputation of a party attributed with fictional conduct. It promotes cynicism about the legal profession and the American judicial system. And a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.”

A future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity

To punish the attorneys, the judge directed each to pay a $5,000 fine to the court, to notify their client, and to notify each real judge falsely identified as the author of the cited fake cases.

Concurrently, the judge dismissed [PDF] plaintiff Roberto Mata’s injury claim against Avianca because more than two years had passed between the injury and the lawsuit, a time limit set by the Montreal Convention.

“The lesson here is that you can’t delegate to a machine the things for which a lawyer is responsible,” said Stephen Wu, shareholder in Silicon Valley Law Group and chair of the American Bar Association’s Artificial Intelligence and Robotics National Institute, in a phone interview with The Register.

Wu said that Judge Castel made it clear technology has a role in the legal profession when he wrote, “there is nothing inherently improper about using a reliable artificial intelligence tool for assistance.”

But that role, Wu said, is necessarily subordinate to legal professionals since Rule 11 of the Federal Rules of Civil Procedure requires attorneys to take responsibility for information submitted to the court.

“As lawyers, if we want to use AI to help us write things, we need something that has been trained on legal materials and has been tested rigorously,” said Wu. “The lawyer always bears responsibility for the work product. You have to check your sources.”

The judge’s order includes, as an exhibit, the text of the invented Varghese case atop a watermark that says, “DO NOT CITE OR QUOTE AS LEGAL AUTHORITY.”

However, future large language models trained on repeated media mentions of the fictitious case may keep the lie alive a bit longer. ®
