My Bloomsbury book "Ethics, Information, and Technology" was published on Nov. 13, 2025. Purchases can be made via Amazon and this Bloomsbury webpage: https://www.bloomsbury.com/us/ethics-information-and-technology-9781440856662/
Thursday, January 8, 2026
Judges are identifying suspected AI hallucinations in Pa. court cases — including one at the highest levels; Spotlight PA, January 7, 2026
Sunday, December 14, 2025
Publisher under fire after ‘fake’ citations found in AI ethics guide; The Times, December 14, 2025
Rhys Blakely, The Times; Publisher under fire after ‘fake’ citations found in AI ethics guide
"One of the world’s largest academic publishers is selling a book on the ethics of AI intelligence research that appears to be riddled with fake citations, including references to journals that do not exist.
Academic publishing has recently been subject to criticism for accepting fraudulent papers produced using AI, which have made it through a peer-review process designed to guarantee high standards.
The Times found that a book recently published by the German-British publishing giant Springer Nature includes dozens of citations that appear to have been invented — a sign, often, of AI-generated material."
Thursday, December 11, 2025
AI Has Its Place in Law, But Lawyers Who Treat It as a Replacement Can Risk Trust, Ethics, and Their Clients' Futures; International Business Times, December 11, 2025
Lisa Parlagreco, International Business Times; AI Has Its Place in Law, But Lawyers Who Treat It as a Replacement Can Risk Trust, Ethics, and Their Clients' Futures
"When segments of our profession begin treating AI outputs as inherently reliable, we normalize a lower threshold of scrutiny, and the law cannot function on lowered standards. The justice system depends on precision, on careful reading, on the willingness to challenge assumptions rather than accept the quickest answer. If lawyers become comfortable skipping that intellectual step, even once, we begin to erode the habits that make rigorous advocacy possible. The harm is not just procedural; it's generational. New lawyers watch what experienced lawyers do, not what they say, and if they see shortcuts rewarded rather than corrected, that becomes the new baseline.
This is not to suggest that AI has no place in law. When used responsibly, with human oversight, it can be a powerful tool. Legal teams are successfully incorporating AI into tasks like document review, contract analysis, and litigation preparation. In complex cases with tens of thousands of documents, AI has helped accelerate discovery and flag issues that humans might overlook. In academia as well, AI has shown promise in grading essays and providing feedback that can help educate the next generation of lawyers, but again, under human supervision.
The key distinction is between augmentation and automation. We must not be naive about what AI represents. It is not a lawyer. It doesn't hold professional responsibility. It doesn't understand nuance, ethics, or the weight of a client's freedom or financial well-being. It generates outputs based on patterns and statistical likelihoods. That's incredibly useful for ideation, summarization, and efficiency, but it is fundamentally unsuited to replace human reasoning.
To ignore this reality is to surrender the core values of our profession. Lawyers are trained not just to know the law but to apply it with judgment, integrity, and a commitment to truth. Practices that depend on AI without meaningful human oversight communicate a lack of diligence and care. They weaken public trust in our profession at a time when that trust matters more than ever.
We should also be thinking about how we prepare future lawyers. Law schools and firms must lead by example, teaching students not just how to use AI, but how to question it. They must emphasize that AI outputs require verification, context, and critical thinking. AI should supplement legal education, not substitute it. The work of a lawyer begins long before generating a draft; it begins with curiosity, skepticism, and the courage to ask the right questions.
And yes, regulation has its place. Many courts and bar associations are already developing guidelines for the responsible use of AI. These frameworks encourage transparency, require lawyers to verify any AI-assisted research, and emphasize the ethical obligations that cannot be delegated to a machine. That's progress, but it needs broader adoption and consistent enforcement.
At the end of the day, technology should push us forward, not backward. AI can make our work more efficient, but it cannot, and should not, replace our judgment. The lawyer who delegates their thinking to an algorithm risks their profession, their client's case, and the integrity of the justice system itself."
Tuesday, December 2, 2025
The case of the fake references in an ethics journal; Retraction Watch, December 2, 2025
Retraction Watch; The case of the fake references in an ethics journal
"Many would-be whistleblowers write to us about papers with nonexistent references, possibly hallucinated by artificial intelligence. One reader recently alerted us to fake references in … an ethics journal. In an article about whistleblowing.
The paper, published in April in the Journal of Academic Ethics, explored “the whistleblowing experiences of individuals with disabilities in Ethiopian public educational institutions.”
Erja Moore, an independent researcher based in Finland, came across the article while looking into a whistleblowing case in that country. “I started reading this article and found some interesting references that I decided to read as well,” Moore told Retraction Watch. “To my surprise, those articles didn’t exist.”...
The Journal of Academic Ethics is published by Springer Nature. Eleven of the fabricated references cite papers in the Journal of Business Ethics — another Springer Nature title.
“On one hand this is hilarious that an ethics journal publishes this, but on the other hand it seems that this is a much bigger problem in publishing and we can’t really trust scientific articles any more,” Moore said."
Thursday, November 27, 2025
Prosecutor Used Flawed A.I. to Keep a Man in Jail, His Lawyers Say; The New York Times, November 25, 2025
Shaila Dewan, The New York Times; Prosecutor Used Flawed A.I. to Keep a Man in Jail, His Lawyers Say
"On Friday, the lawyers were joined by a group of 22 legal and technology scholars who warned that the unchecked use of A.I. could lead to wrongful convictions. The group, which filed its own brief with the state Supreme Court, included Barry Scheck, a co-founder of the Innocence Project, which has helped to exonerate more than 250 people; Chesa Boudin, a former district attorney of San Francisco; and Katherine Judson, executive director of the Center for Integrity in Forensic Sciences, a nonprofit that seeks to improve the reliability of criminal prosecutions.
The problem of A.I.-generated errors in legal papers has burgeoned along with the popular use of tools like ChatGPT and Gemini, which can perform a wide range of tasks, including writing emails, term papers and legal briefs. Lawyers and even judges have been caught filing court papers that were rife with fake legal references and faulty arguments, leading to embarrassment and sometimes hefty fines.
The Kjoller case, though, is one of the first in which prosecutors, whose words carry great sway with judges and juries, have been accused of using A.I. without proper safeguards...
Lawyers are not prohibited from using A.I., but they are required to ensure that their briefs, however they are written, are accurate and faithful to the law. Today’s artificial intelligence tools are known to sometimes “hallucinate,” or make things up, especially when asked complex legal questions...
Westlaw executives said that their A.I. tool does not write legal briefs, because they believe A.I. is not yet capable of the complex reasoning needed to do so...
Damien Charlotin, a senior researcher at HEC Paris, maintains a database that includes more than 590 cases from around the world in which courts and tribunals have detected hallucinated content. More than half involved people who represented themselves in court. Two-thirds of the cases were in United States courts. Only one, an Israeli case, involved A.I. use by a prosecutor."
Wednesday, November 26, 2025
What Is Agentic A.I., and Would You Trust It to Book a Flight?; The New York Times, November 25, 2025
Gabe Castro-Root, The New York Times; What Is Agentic A.I., and Would You Trust It to Book a Flight?
"A bot may soon be booking your vacation.
Millions of travelers already use artificial intelligence to compare options for flights, hotels, rental cars and more. About 30 percent of U.S. travelers say they’re comfortable using A.I. to plan a trip. But these tools are about to take a big step.
Agentic A.I., a rapidly emerging type of artificial intelligence, will be able to find and pay for reservations with limited human involvement, developers say. Companies like Expedia, Google, Kayak and Priceline are experimenting with or rolling out agentic A.I. tools.
Travelers using agentic A.I. would set parameters like dates and a price range for their travel plans, then hand over their credit card information to the bot, which would monitor prices and book on their behalf...
Think of agentic A.I. as a personal assistant, said Shilpa Ranganathan, the chief product officer at Expedia Group, which is developing both generative and agentic A.I. trip-planning tools.
While the more familiar generative A.I. can summarize information and answer questions, agentic tools can carry out tasks. Travelers benefit by deputizing these tools to perform time-consuming chores like tracking flight prices."
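To make the "carry out tasks" distinction concrete, here is a minimal Python sketch of the kind of loop such a booking agent might run. It assumes hypothetical search_fares and book helpers (stand-ins, not any vendor's actual API): the traveler sets the parameters, and the bot books only when a fare falls within them.

```python
# Minimal, hypothetical sketch of an agentic booking loop.
# search_fares() and book() are illustrative stand-ins, not a real travel API.
from dataclasses import dataclass

@dataclass
class TripRequest:
    origin: str
    destination: str
    depart: str          # ISO date, e.g. "2026-03-14"
    max_price: float     # traveler-set price ceiling

def search_fares(req: TripRequest) -> list[dict]:
    # Stand-in for a real fare-search call; returns static sample data here.
    return [
        {"flight": "XX123", "price": 412.00},
        {"flight": "XX456", "price": 287.50},
    ]

def book(flight: dict, req: TripRequest) -> str:
    # Stand-in for a real booking/payment call; a real agent would only
    # reach this step after explicit traveler authorization.
    return f"confirmed {flight['flight']} at ${flight['price']:.2f}"

def run_agent(req: TripRequest) -> str | None:
    """Check fares once and book only if a fare is within the traveler's ceiling."""
    fares = search_fares(req)
    cheapest = min(fares, key=lambda f: f["price"])
    if cheapest["price"] <= req.max_price:
        return book(cheapest, req)
    return None  # keep monitoring on the next scheduled run

if __name__ == "__main__":
    print(run_agent(TripRequest("PIT", "LAX", "2026-03-14", 300.0)))
```

The point of the sketch is the division of labor the article describes: the human supplies dates, routes, and a spending limit up front; the software decides when, or whether, to act on them.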
GEORGE C. YOUNG AMERICAN INNS OF COURT EXPLORES ETHICS AND PITFALLS OF AI IN THE COURTROOM; The Florida Bar, November 26, 2025
The Florida Bar; GEORGE C. YOUNG AMERICAN INNS OF COURT EXPLORES ETHICS AND PITFALLS OF AI IN THE COURTROOM
"The George C. Young American Inns of Court continued its ongoing focus on artificial intelligence with a recent program titled, “The Use of AI to Craft Openings, Closings, and Directing Cross-Examination: Ethical Imperatives and Practical Realities.”...
Demonstrations showed that many members could not distinguish AI-generated narratives from those written by humans, highlighting the technology’s increasingly high-quality output. However, presenters also noted recurring drawbacks. AI-generated direct and cross-examinations frequently included prohibited or incorrect elements such as hearsay, compound questioning, and fabricated details — jokingly referred to as “ghost people” — distinguishing factual hallucinations from the better-known “phantom citation” problem.
The program concluded with a reminder that while AI may streamline drafting and help lawyers think creatively, professional judgment cannot be outsourced. The ultimate responsibility for accuracy, ethics, and advocacy remains with the lawyer."
Wednesday, November 12, 2025
Vigilante Lawyers Expose the Rising Tide of A.I. Slop in Court Filings; The New York Times, November 7, 2025
Evan Gorelick, The New York Times; Vigilante Lawyers Expose the Rising Tide of A.I. Slop in Court Filings
"Mr. Freund is part of a growing network of lawyers who track down A.I. abuses committed by their peers, collecting the most egregious examples and posting them online. The group hopes that by tracking down the A.I. slop, it can help draw attention to the problem and put an end to it.
While judges and bar associations generally agree that it’s fine for lawyers to use chatbots for research, they must still ensure their filings are accurate.
But as the technology has taken off, so has misuse. Chatbots frequently make things up, and judges are finding more and more fake case law citations, which are then rounded up by the legal vigilantes.
“These cases are damaging the reputation of the bar,” said Stephen Gillers, an ethics professor at New York University School of Law. “Lawyers everywhere should be ashamed of what members of their profession are doing.”...
The problem, though, keeps getting worse.
That’s why Damien Charlotin, a lawyer and researcher in France, started an online database in April to track it.
Initially he found three or four examples a month. Now he often receives that many in a day.
Many lawyers, including Mr. Freund and Mr. Schaefer, have helped him document 509 cases so far. They use legal tools like LexisNexis for notifications on keywords like “artificial intelligence,” “fabricated cases” and “nonexistent cases.”
Some of the filings include fake quotes from real cases, or cite real cases that are irrelevant to their arguments. The legal vigilantes uncover them by finding judges’ opinions scolding lawyers."
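The keyword-alert workflow the article describes can be pictured with a small sketch: scan the text of judicial opinions for the watch-list terms and surface any matches for human review. The sample text and helper below are illustrative only, not a LexisNexis integration.

```python
# Minimal sketch of the keyword-alert idea: flag opinions that mention
# terms associated with hallucinated citations. Illustrative data only.
KEYWORDS = ["artificial intelligence", "fabricated cases", "nonexistent cases"]

def flag_opinion(text: str) -> list[str]:
    """Return the watch-list terms that appear in an opinion's text."""
    lowered = text.lower()
    return [kw for kw in KEYWORDS if kw in lowered]

if __name__ == "__main__":
    sample = ("The court notes that counsel's brief cites two nonexistent "
              "cases, apparently produced by artificial intelligence.")
    print(flag_opinion(sample))  # ['artificial intelligence', 'nonexistent cases']
```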
You’re a Computer Science Major. Don’t Panic.; The New York Times, November 12, 2025
Mary Shaw and Michael Hilton, The New York Times; You’re a Computer Science Major. Don’t Panic.
"The future of computer science education is to teach students how to master the indispensable skill of supervision.
Why? Because the speed and efficiency of using A.I. to write code is balanced by the reality that it often gets things wrong. These tools are designed to produce results that look convincing, but may still contain errors. A recent survey showed that over half of professional developers use A.I. tools daily, but only about one-third trust their accuracy. When asked what their greatest frustration is about using A.I. tools, two-thirds of respondents answered, “A.I. solutions that are almost right but not quite.”
There is still a need for humans to play a role in coding — a supervisory one, where programmers oversee the use of A.I. tools, determine if A.I.-generated code does what it is supposed to do and make essential repairs to defective code."
Sunday, November 9, 2025
California Prosecutor Says AI Caused Errors in Criminal Case; Sacramento Bee via Government Technology, November 7, 2025
Sharon Bernstein, Sacramento Bee via Government Technology; California Prosecutor Says AI Caused Errors in Criminal Case
"Northern California prosecutors used artificial intelligence to write a criminal court filing that contained references to nonexistent legal cases and precedents, Nevada County District Attorney Jesse Wilson said in a statement.
The motion included false information known in artificial intelligence circles as “hallucinations,” meaning that it was invented by the AI software asked to write the material, Wilson said. It was filed in connection with the case of Kalen Turner, who was accused of five felony and two misdemeanor drug counts, he said.
The situation is the latest example of the potential pitfalls connected with the growing use of AI. In fields such as law, errors in AI-generated briefs could impact the freedom of a person accused of a crime. In health care, AI analysis of medical necessity has resulted in the denial of some types of care. In April, a 16-year-old Rancho Santa Margarita boy killed himself after discussing suicidal thoughts with an AI chatbot, prompting a new California law aimed at protecting vulnerable users.
“While artificial intelligence can be a useful research tool, it remains an evolving technology with limitations — including the potential to generate ‘hallucinated’ citations,” Wilson said. “We are actively learning the fluid dynamics of AI-assisted legal work and its possible pitfalls.”"
Sunday, September 28, 2025
Education report calling for ethical AI use contains over 15 fake sources; Ars Technica, September 12, 2025
Benj Edwards, Ars Technica; Education report calling for ethical AI use contains over 15 fake sources
"On Friday, CBC News reported that a major education reform document prepared for the Canadian province of Newfoundland and Labrador contains at least 15 fabricated citations that academics suspect were generated by an AI language model—despite the same report calling for "ethical" AI use in schools.
"A Vision for the Future: Transforming and Modernizing Education," released August 28, serves as a 10-year roadmap for modernizing the province's public schools and post-secondary institutions. The 418-page document took 18 months to complete and was unveiled by co-chairs Anne Burke and Karen Goodnough, both professors at Memorial University's Faculty of Education, alongside Education Minister Bernard Davis...
The irony runs deep
The presence of potentially AI-generated fake citations becomes especially awkward given that one of the report's 110 recommendations specifically states the provincial government should "provide learners and educators with essential AI knowledge, including ethics, data privacy, and responsible technology use."
Sarah Martin, a Memorial political science professor who spent days reviewing the document, discovered multiple fabricated citations. "Around the references I cannot find, I can't imagine another explanation," she told CBC. "You're like, 'This has to be right, this can't not be.' This is a citation in a very important document for educational policy.""
Saturday, September 13, 2025
Perplexity's definition of copyright gets it sued by the dictionary; Engadget, September 11, 2025
Saturday, August 23, 2025
PittGPT debuts today as private AI source for University; University Times, August 21, 2025
MARTY LEVINE, University Times; PittGPT debuts today as private AI source for University
"Today marks the rollout of PittGPT, Pitt’s own generative AI for staff and faculty — a service that will be able to use Pitt’s sensitive, internal data in isolation from the Internet because it works only for those logging in with their Pitt ID.
“We want to be able to use AI to improve the things that we do” in our Pitt work, said Dwight Helfrich, director of the Pitt enterprise initiatives team at Pitt Digital. That means securely adding Pitt’s private information to PittGPT, including Human Resources, payroll and student data. However, he explains, in PittGPT “you would only have access to data that you would have access to in your daily role” — in your specific Pitt job.
“Security is a key part of AI,” he said. “It is much more important in AI than in other tools we provide.” Using PittGPT — as opposed to the other AI services available to Pitt employees — means that any data submitted to it “stays in our environment and it is not used to train a free AI model.”
Helfrich also emphasizes that “you should get a very similar response to PittGPT as you would get with ChatGPT,” since PittGPT had access to “the best LLMs on the market” — the large language models used to train AI.
Faculty, staff and students already have free access to such AI services as Google Gemini and Microsoft Copilot. And “any generative AI tool provides the ability to analyze data … and to rewrite things” that are still in early or incomplete drafts, Helfrich said.
“It can help take the burden off some of the work we have to do in our lives” and help us focus on the larger tasks that, so far, humans are better at undertaking, added Pitt Digital spokesperson Brady Lutsko. “When you are working with your own information, you can tell it what to include” — it won’t add misinformation from the internet or its own programming, as AI sometimes does. “If you have a draft, it will make your good work even better.”
“The human still needs to review and evaluate that this is useful and valuable,” Helfrich said of AI’s contribution to our work. “At this point we can say that there is nothing in AI that is 100 percent reliable.”
On the other hand, he said, “they’re making dramatic enhancements at a pace we’ve never seen in technology. … I’ve been in technology 30 years and I’ve never seen anything improve as quickly as AI.” In his own work, he said, “AI can help review code and provide test cases, reducing work time by 75 percent. You just have to look at it with some caution and just (verify) things.”
“Treat it like you’re having a conversation with someone you’ve just met,” Lutsko added. “You have some skepticism — you go back and do some fact checking.”
Lutsko emphasized that the University has guidance on Acceptable Use of Generative Artificial Intelligence Tools as well as a University-Approved GenAI Tools List.
Pitt’s list of approved generative AI tools includes Microsoft 365 Copilot Chat, which is available to all students, faculty and staff (as opposed to the version of Copilot built into Microsoft 365 apps, which is an add-on available to departments through Panther Express for $30 per month, per person); Google Gemini; and Google NotebookLM, which Lutsko said “serves as a dedicated research assistant for precise analysis using user-provided documents.”
PittGPT joins that list today, Helfrich said.
Pitt also has been piloting Pitt AI Connect, a tool for researchers to integrate AI into software development (using an API, or application programming interface).
And Pitt also is already deploying the PantherAI chatbot, clickable from the bottom right of the Pitt Digital and Office of Human Resources homepages, which provides answers to common questions that may otherwise be deep within Pitt’s webpages. It will likely be offered on other Pitt websites in the future.
“Dive in and use it,” Helfrich said of PittGPT. “I see huge benefits from all of the generative AI tools we have. I’ve saved time and produced better results.”"
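The core design idea in the excerpt is that a private deployment only exposes the institutional data a given user is already entitled to see. A minimal sketch of that role-scoped retrieval step, with hypothetical document records and role names (not Pitt Digital's actual implementation):

```python
# Minimal sketch of role-scoped retrieval for a private campus GPT:
# only documents the user's roles permit are passed to the model as context.
# Records and role names below are hypothetical.
DOCUMENTS = [
    {"id": 1, "text": "Payroll calendar for 2026", "allowed_roles": {"hr", "payroll"}},
    {"id": 2, "text": "Course catalog updates",    "allowed_roles": {"staff", "faculty"}},
]

def retrieve_for_user(query: str, user_roles: set[str]) -> list[str]:
    """Return matching documents the user's roles permit, for use as model context."""
    return [d["text"] for d in DOCUMENTS
            if d["allowed_roles"] & user_roles and query.lower() in d["text"].lower()]

if __name__ == "__main__":
    print(retrieve_for_user("payroll", {"faculty"}))  # [] -> no access in this role
    print(retrieve_for_user("payroll", {"hr"}))       # ['Payroll calendar for 2026']
```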
Friday, July 25, 2025
Virginia teachers learn AI tools and ethics at largest statewide workshop; WTVR, July 23, 2025
[Kip Currier: Nothing in this brief article substantively (or even cursorily) talks about the ethics issues of K-12 teachers using AI tools. The piece extolls what can be gained by teachers using AI tools. But what's lost by using these products? What skills do we not gain or hone by relying on AI to think and create for us?
Was an ethics code or AI code of conduct discussed at all at this two-day gathering of teachers?
And what about the ongoing problem of AI hallucinations-- i.e. inaccurate and nonexistent information generated by AI? Nowhere in this reporting is the need for proofreading and verification of AI-generated outputs even mentioned.
In the pell-mell rush to adopt AI tools, fueled by AI tech companies, it's vital to remember the need for embracing AI ethics guidelines and guardrails.]
[Excerpt]
"Hundreds of Virginia teachers are getting hands-on experience with artificial intelligence tools, ethics and curriculum integration at the largest statewide professional development workshop focused on AI.
The two-day workshop, hosted by AI Ready RVA, continues Thursday at the VCU School of Business...
"There are tools that allow teachers to create lesson plans or quizzes or rubrics immediately based on a source that they can find online so they don't have to spend hours on Sunday prepping for the week ahead. And so we have a list of various platforms that we're going to be teaching them and practice sessions so that they can master these tools so that way they start the school year really strong," Demetriou said."
Wednesday, July 23, 2025
Partner Who Wrote About AI Ethics, Fired For Citing Fake AI Cases; Above The Law, July 23, 2025
Joe Patrice, Above The Law; Partner Who Wrote About AI Ethics, Fired For Citing Fake AI Cases
"Don’t blame the AI for the fact that you read a brief and never bothered to print out the cases. Who does that? Long before AI, we all understood that you needed to look at the case itself to make sure no one missed the literal red flag on top. It might’ve ended up in there because of AI, but three lawyers and presumably a para or two had this brief and no one built a binder of the cases cited? What if the court wanted oral argument? No one is excusing the decision to ask ChatGPT to resolve your $24 million case, but the blame goes far deeper.
Malaty will shoulder most of the blame as the link in the workflow who should’ve known better. That said, her article about AI ethics, written last year, doesn’t actually address the hallucination problem. While risks of job displacement and algorithms reinforcing implicit bias are important, it is a little odd to write a whole piece on the ethics of legal AI without even breathing on hallucinations."
Tuesday, July 22, 2025
Getting Along with GPT: The Psychology, Character, and Ethics of Your Newest Professional Colleague; ABA Journal, May 9, 2025
ABA Journal; Getting Along with GPT: The Psychology, Character, and Ethics of Your Newest Professional Colleague
"The Limits of GenAI’s Simulated Humanity
- Creative thinking. An LLM mirrors humanity’s collective intelligence, shaped by everything it has read. It excels at brainstorming and summarizing legal principles but lacks independent thought, opinions, or strategic foresight—all essential to legal practice. Therefore, if a model’s summary of your legal argument feels stale, illogical, or disconnected from human values, it may be because the model has no democratized data to pattern itself on. The good news? You may be on to something original—and truly meaningful!
- True comprehension. An LLM does not know the law; it merely predicts legal-sounding text based on past examples and mathematical probabilities.
- Judgment and ethics. An LLM does not possess a moral compass or the ability to make judgments in complex legal contexts. It handles facts, not subjective opinions.
- Long-term consistency. Due to its context window limitations, an LLM may contradict itself if key details fall outside its processing scope. It lacks persistent memory storage.
- Limited context recognition. An LLM has limited ability to understand context beyond provided information and is limited by training data scope.
- Trustfulness. Attorneys have a professional duty to protect client confidences, but privacy and PII (personally identifiable information) are evolving concepts within AI. Unlike humans, models can infer private information without PII, through abstract patterns in data. To safeguard client information, carefully review (or summarize with AI) your LLM’s terms of use."
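The "long-term consistency" point above turns on the context window: once a conversation outgrows it, the earliest turns are dropped and the model can no longer "see" details stated there. A toy sketch of that truncation, counting characters rather than tokens and using made-up turns purely for illustration:

```python
# Toy sketch of context-window truncation: only the most recent turns that
# fit in the window are visible to the model. Sizes and turns are illustrative.
WINDOW_CHARS = 120  # stand-in for a real model's token limit

def visible_context(turns: list[str]) -> str:
    """Keep only the most recent turns that fit inside the window."""
    kept: list[str] = []
    total = 0
    for turn in reversed(turns):
        if total + len(turn) > WINDOW_CHARS:
            break
        kept.append(turn)
        total += len(turn)
    return "\n".join(reversed(kept))

if __name__ == "__main__":
    turns = [
        "Client's middle name is spelled 'Aleksandr'.",            # early key detail
        "Discussion of filing deadlines and exhibits.",
        "Please draft the caption using the client's full name.",
    ]
    context = visible_context(turns)
    print("Aleksandr" in context)  # False: the early detail fell outside the window
```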
Wednesday, July 16, 2025
The Pentagon is throwing $200 million at ‘Grok for Government’ and other AI companies; Task & Purpose, July 14, 2025
Matt White, Task & Purpose; The Pentagon is throwing $200 million at ‘Grok for Government’ and other AI companies
"The Pentagon announced Monday it is going to spend almost $1 billion on “agentic AI workflows” from four “frontier AI” companies, including Elon Musk’s xAI, whose flagship Grok appeared to still be declaring itself “MechaHitler” as late as Monday afternoon.
In a press release, the Defense Department’s Chief Digital and Artificial Intelligence Office — or CDAO — said it will cut checks of up to $200 million each to tech giants Anthropic, Google, OpenAI and Musk’s xAI to work on:
- “critical national security challenges;”
- “joint mission essential tasks in our warfighting domain;”
- “DoD use cases.”
The release did not expand on what any of that means or how AI might help. Task & Purpose reached out to the Pentagon for details on what these AI agents may soon be doing and asked specifically if the contracts would include control of live weapons systems or classified information."
Wednesday, July 2, 2025
Trial Court Decides Case Based On AI-Hallucinated Caselaw; Above The Law, July 1, 2025
Joe Patrice, Above The Law; Trial Court Decides Case Based On AI-Hallucinated Caselaw
"Between opposing counsel and diligent judges, fake cases keep getting caught before they result in real mischief. That said, it was always only a matter of time before a poor litigant representing themselves fails to know enough to sniff out and flag Beavis v. Butthead and a busy or apathetic judge rubberstamps one side’s proposed order without probing the cites for verification. Hallucinations are all fun and games until they work their way into the orders.
It finally happened with a trial judge issuing an order based off fake cases (flagged by Rob Freund). While the appellate court put a stop to the matter, the fact that it got this far should terrify everyone.
Shahid v. Esaam, out of the Georgia Court of Appeals, involved a final judgment and decree of divorce served by publication. When the wife objected to the judgment based on improper service, the husband’s brief included two fake cases. The trial judge accepted the husband’s argument, issuing an order based in part on the fake cases."
Saturday, June 21, 2025
US patent office wants an AI to scan for prior art, but doesn't want to pay for it; The Register, June 20, 2025
Brandon Vigliarolo, The Register; US patent office wants an AI to scan for prior art, but doesn't want to pay for it
"There is some irony in using AI bots, which are often trained on copyrighted material for which AI firms have shown little regard, to assess the validity of new patents.
It may not be the panacea the USPTO is hoping for. Lawyers have been embracing AI for something very similar - scanning particular, formal documentation for specific details related to a new analysis - and it's sometimes backfired as the AI has gotten certain details wrong. The Register has reported on numerous instances of legal professionals practically begging to be sanctioned for not bothering to do their legwork, as judges caught them using AI, which borked citations to other legal cases.
The risk of hallucinating patents that don't exist, or getting patent numbers or other details wrong, means that there'll have to be at least some human oversight. The USPTO had no comment on how this might be accomplished."
Monday, June 2, 2025
Excruciating reason Utah lawyer presented FAKE case in court after idiotic blunder; Daily Mail, May 31, 2025
Joe Hutchison, Daily Mail; Excruciating reason Utah lawyer presented FAKE case in court after idiotic blunder
"The case referenced, according to documents, was 'Royer v. Nelson' which did not exist in any legal database and was found to be made up by ChatGPT.
Opposing counsel said that the only way they would find any mention of the case was by using the AI.
They even went as far as to ask the AI if the case was real, noting in a filing that it then apologized and said it was a mistake.
Bednar's attorney, Matthew Barneck, said that the research was done by a clerk and Bednar took all responsibility for failing to review the cases.
He told The Salt Lake Tribune: 'That was his mistake. He owned up to it and authorized me to say that and fell on the sword.'"