
Sunday, November 9, 2025

California Prosecutor Says AI Caused Errors in Criminal Case; Sacramento Bee via Government Technology, November 7, 2025

Sharon Bernstein, Sacramento Bee via Government Technology; California Prosecutor Says AI Caused Errors in Criminal Case

"Northern California prosecutors used artificial intelligence to write a criminal court filing that contained references to nonexistent legal cases and precedents, Nevada County District Attorney Jesse Wilson said in a statement.

The motion included false information known in artificial intelligence circles as “hallucinations,” meaning that it was invented by the AI software asked to write the material, Wilson said. It was filed in connection with the case of Kalen Turner, who was accused of five felony and two misdemeanor drug counts, he said.

The situation is the latest example of the potential pitfalls connected with the growing use of AI. In fields such as law, errors in AI-generated briefs could impact the freedom of a person accused of a crime. In health care, AI analysis of medical necessity has resulted in the denial of some types of care. In April, a 16-year-old Rancho Santa Margarita boy killed himself after discussing suicidal thoughts with an AI chatbot, prompting a new California law aimed at protecting vulnerable users.

“While artificial intelligence can be a useful research tool, it remains an evolving technology with limitations — including the potential to generate ‘hallucinated’ citations,” Wilson said. “We are actively learning the fluid dynamics of AI-assisted legal work and its possible pitfalls.”"

Tuesday, June 3, 2025

Emerging Issues in the Use of Generative AI: Ethics, Sanctions, and Beyond; The Federalist Society, June 3, 2025 12 PM EDT

The Federalist Society; Emerging Issues in the Use of Generative AI: Ethics, Sanctions, and Beyond

"The idea of Artificial Intelligence has long presented potential challenges in the legal realm, and as AI tools become more broadly available and widely used, those potential hurdles are becoming ever more salient for lawyers in their day-to-day operations. Questions abound, from what potential risks of bias and error may exist in using an AI tool, to the challenges related to professional responsibility as traditionally understood, to the risks large language learning models pose to client confidentiality. Some contend that AI is a must-use, as it opens the door to faster, more efficient legal research that could equip lawyers to serve their clients more effectively. Others reject the use of AI, arguing that the risks of use and the work required to check the output it gives exceed its potential benefit.

Join us for a FedSoc Forum exploring the ethical and legal implications of artificial intelligence in the practice of law.

Featuring: 

  • Laurin H. Mills, Member, Werther & Mills, LLC
  • Philip A. Sechler, Senior Counsel, Alliance Defending Freedom
  • Prof. Eugene Volokh, Gary T. Schwartz Distinguished Professor of Law Emeritus, UCLA School of Law; Thomas M. Siebel Senior Fellow, Hoover Institution, Stanford University
  • (Moderator) Hon. Brantley Starr, District Judge, United States District Court for the Northern District of Texas"