Showing posts with label AI-generated nonexistent cases.

Friday, November 14, 2025

Cleveland attorney’s use of AI in court filings raises ethical questions for legal profession; Cleveland.com, November 12, 2025


"A Cleveland defense attorney is under scrutiny in two counties after submitting court filings containing fabrications generated by artificial intelligence — a case that’s prompting broader questions about how lawyers are ethically navigating the use of AI tools in legal practice.

William Norman admitted that a paralegal in his office used ChatGPT to draft a motion to reopen a murder conviction appeal. The document included quotes that did not exist in the trial transcript and misrepresented statements made by the prosecutor."

Wednesday, November 12, 2025

Vigilante Lawyers Expose the Rising Tide of A.I. Slop in Court Filings; The New York Times, November 7, 2025


"Mr. Freund is part of a growing network of lawyers who track down A.I. abuses committed by their peers, collecting the most egregious examples and posting them online. The group hopes that by tracking down the A.I. slop, it can help draw attention to the problem and put an end to it.

While judges and bar associations generally agree that it’s fine for lawyers to use chatbots for research, they must still ensure their filings are accurate.

But as the technology has taken off, so has misuse. Chatbots frequently make things up, and judges are finding more and more fake case law citations, which are then rounded up by the legal vigilantes.

“These cases are damaging the reputation of the bar,” said Stephen Gillers, an ethics professor at New York University School of Law. “Lawyers everywhere should be ashamed of what members of their profession are doing.”...

The problem, though, keeps getting worse.

That’s why Damien Charlotin, a lawyer and researcher in France, started an online database in April to track it.

Initially he found three or four examples a month. Now he often receives that many in a day.

Many lawyers, including Mr. Freund and Mr. Schaefer, have helped him document 509 cases so far. They use legal tools like LexisNexis for notifications on keywords like “artificial intelligence,” “fabricated cases” and “nonexistent cases.”

Some of the filings include fake quotes from real cases, or cite real cases that are irrelevant to their arguments. The legal vigilantes uncover them by finding judges’ opinions scolding lawyers."

Wednesday, September 24, 2025

Suspended lawyer accused of citing hallucinated case in bid to reinstate law license; ABA Journal, September 12, 2025

Debra Cassens Weiss, ABA Journal; Suspended lawyer accused of citing hallucinated case in bid to reinstate law license

"A suspended Iowa lawyer cited at least one hallucinated case likely generated by artificial intelligence in his bid to return to law practice, according to a motion filed by the Iowa Supreme Court Attorney Disciplinary Board.

Court filings by suspended Des Moines lawyer Royce David Turner include “what appears to be at least one AI-generated citation to a case that does not exist or does not stand for the proposition asserted in the filings,” the board says in a July 9 motion. Turner cited the “imaginary case” In re Mears in a July 5 brief supporting his application for reinstatement and in two other court filings, according to a motion to strike the three filings."

Monday, June 2, 2025

Excruciating reason Utah lawyer presented FAKE case in court after idiotic blunder; Daily Mail, May 31, 2025

Joe Hutchison for DailyMail.com; Excruciating reason Utah lawyer presented FAKE case in court after idiotic blunder

"The case referenced, according to documents, was 'Royer v. Nelson' which did not exist in any legal database and was found to be made up by ChatGPT.

Opposing counsel said that the only way they could find any mention of the case was by using the AI.

They even went as far as to ask the AI if the case was real, noting in a filing that it then apologized and said it was a mistake.

Bednar's attorney, Matthew Barneck, said that the research was done by a clerk and Bednar took all responsibility for failing to review the cases.

He told The Salt Lake Tribune: 'That was his mistake. He owned up to it and authorized me to say that and fell on the sword.'"