Showing posts with label Innocence Project. Show all posts

Thursday, November 27, 2025

Prosecutor Used Flawed A.I. to Keep a Man in Jail, His Lawyers Say; The New York Times, November 25, 2025

"On Friday, the lawyers were joined by a group of 22 legal and technology scholars who warned that the unchecked use of A.I. could lead to wrongful convictions. The group, which filed its own brief with the state Supreme Court, included Barry Scheck, a co-founder of the Innocence Project, which has helped to exonerate more than 250 people; Chesa Boudin, a former district attorney of San Francisco; and Katherine Judson, executive director of the Center for Integrity in Forensic Sciences, a nonprofit that seeks to improve the reliability of criminal prosecutions.

The problem of A.I.-generated errors in legal papers has burgeoned along with the popular use of tools like ChatGPT and Gemini, which can perform a wide range of tasks, including writing emails, term papers and legal briefs. Lawyers and even judges have been caught filing court papers that were rife with fake legal references and faulty arguments, leading to embarrassment and sometimes hefty fines.

The Kjoller case, though, is one of the first in which prosecutors, whose words carry great sway with judges and juries, have been accused of using A.I. without proper safeguards...

Lawyers are not prohibited from using A.I., but they are required to ensure that their briefs, however they are written, are accurate and faithful to the law. Today’s artificial intelligence tools are known to sometimes “hallucinate,” or make things up, especially when asked complex legal questions...

Westlaw executives said that their A.I. tool does not write legal briefs, because they believe A.I. is not yet capable of the complex reasoning needed to do so...

Damien Charlotin, a senior researcher at HEC Paris, maintains a database that includes more than 590 cases from around the world in which courts and tribunals have detected hallucinated content. More than half involved people who represented themselves in court. Two-thirds of the cases were in United States courts. Only one, an Israeli case, involved A.I. use by a prosecutor."