Showing posts with label lawyers. Show all posts

Saturday, November 16, 2024

Tracking The Slow Movement Of AI Copyright Cases; Law360, November 7, 2024

Mark Davies and Anna Naydonov, Law360; Tracking The Slow Movement Of AI Copyright Cases

"There is a considerable gap between assumptions in the technology community and assumptions in the legal community concerning how long the legal questions around artificial intelligence and copyright law will take to reach resolution.

The principal litigated question asks whether copyright law permits or forbids the process by which AI systems are using copyright works to generate additional works.[1] AI technologists expect that the U.S. Supreme Court will resolve these questions in a few years.[2] Lawyers expect it to take much longer.[3] History teaches the answer...

Mark S. Davies and Anna B. Naydonov are partners at White & Case LLP.

Mark Davies represented Stephen Thaler in Thaler v. Vidal, Oracle in Google v. Oracle, and filed an amicus brief on behalf of a design professional in Apple v. Samsung."

Sunday, November 3, 2024

Ahead of US election, lawyers fight over ethics breach accusations; Reuters, November 2, 2024

Reuters; Ahead of US election, lawyers fight over ethics breach accusations

"After Donald Trump's bid to overturn his 2020 election loss, an advocacy group was launched to take on the lawyers who aided in his doomed effort, hitting them with more than 80 ethics complaints.

With Trump again the Republican candidate for the U.S. presidency, his allies have fired back at this group, named the 65 Project. A pro-Trump nonprofit known as America First Legal has accused the 65 Project of engaging in a left-wing attempt to intimidate conservative lawyers, filing a bar complaint earlier this week against the 65 Project's top lawyer Michael Teter. The Oct. 28 complaint said Teter was targeting lawyers "based solely upon their representation of a disfavored client...

The 65 Project, named for the number of unsuccessful lawsuits it says were filed to challenge Democratic President Joe Biden's win, says its mission is to deter lawyers from bringing false election claims. In September, the group pledged to spend at least $100,000 on advertisements in legal journals in battleground states warning lawyers not to risk losing their law license by helping Trump.

America First Legal, a nonprofit founded in 2021 by former Trump White House aide Stephen Miller, harshly criticized the ads on its website in announcing its complaint against Teter. The group has increasingly focused on the election this year after previously bringing suits challenging diversity and migration policies."

Thursday, October 31, 2024

'The Calculator Mistake': Denial, hostility won't help lawyers deal with emergence of AI; ABA Journal, October 23, 2024

Tracy Hresko Pearl, ABA Journal; 'The Calculator Mistake': Denial, hostility won't help lawyers deal with emergence of AI

"There are two ways to deal with this kind of uncertainty. The first is denial and hostility. Legal news outlets have been filled with articles in recent months about the problems with AI-generated legal briefs. Such briefs may contain fake citations. They miss important points. They lack nuance.

The obvious solution, when the problem is framed in this way, is to point lawyers away from using AI, impose strong sanctions on attorneys who misuse it, and redouble law school exam security and anti-plagiarism measures to ensure that law students are strongly disincentivized from using these new forms of technology. “Old school” law practice and legal teaching techniques, in this view, should continue to be the gold standard of our profession.

The problem, of course, is that technology gets better and does so at an increasingly (and sometimes alarmingly) rapid rate. No lawyer worth their salt would dare turn in an AI-generated legal brief now, given the issues listed above and the potential consequences. But we are naive to think that the technology won’t eventually overtake even the most gifted of legal writers.

That point may not be tomorrow; it may not be five years from now. But that time is coming, and when it does, denial and hostility won’t get us around the fact that it may no longer be in the best interests of our clients for a lawyer to write briefs on their own. Denial and hostility won’t help us deal with what, at that point, will be a serious existential threat to our profession.

The second way to deal with the uncertainty of emerging technology is to recognize that profound change is inevitable and then do the deeper, tougher and more philosophical work of discerning how humans can still be of value in a profession that, like nearly every other, will cede a great deal of ground to AI in the not-too-distant future. What will it mean to be a lawyer, a judge or a law professor in that world? What should it mean?

I am increasingly convinced that the answers to those questions are in so-called soft skills and critical thinking."

Saturday, October 5, 2024

Police reports written with advanced tech could help cops but comes with host of challenges: expert; Fox News, September 24, 2024

Christina Coulter, Fox News; Police reports written with advanced tech could help cops but comes with host of challenges

"Several police departments nationwide are debuting artificial intelligence that writes officers' incident reports for them, and although the software could cause issues in court, an expert says, the technology could be a boon for law enforcement.

Oklahoma City's police department was among the first to experiment with Draft One, an AI-powered software that analyzes police body-worn camera audio and radio transmissions to write police reports that can later be used to justify criminal charges and as evidence in court.

Since The Associated Press detailed the software and its use by the department in a late August article, the department told Fox News Digital that it has put the program on hold. 

"The use of the AI report writing has been put on hold, so we will pass on speaking about it at this time," Capt. Valerie Littlejohn wrote via email. "It was paused to work through all the details with the DA’s Office."...

According to Politico, at least seven police departments nationwide are using Draft One, which was made by police technology company Axon to be used with its widely used body-worn cameras."

Friday, October 4, 2024

Ethical uses of generative AI in the practice of law; Reuters, October 3, 2024

Thomson Reuters; Ethical uses of generative AI in the practice of law

"In the rapidly evolving landscape of legal technology, the integration of generative AI tools presents both unprecedented opportunities and significant ethical challenges. Ryan Groff, a distinguished member of the Massachusetts Bar and a lecturer at New England Law, explores these dimensions in his enlightening webinar, “Ethical Uses of Generative AI in the Practice of Law.” 

In the webinar, Ryan Groff discusses the ethical implications of using generative AI (GenAI) in legal practices, tracing the history of GenAI applications in law and distinguishing between various AI tools available today.  He provides an insightful overview of the historical application of GenAI in legal contexts and differentiates the various AI tools currently available. Groff emphasizes that while AI can enhance the efficiency of legal practices, it should not undermine the critical judgment of lawyers. He underscores the importance of maintaining rigorous supervision, safeguarding client confidentiality, and ensuring technological proficiency."

Tuesday, October 1, 2024

Fake Cases, Real Consequences [No digital link as of 10/1/24]; ABA Journal, Oct./Nov. 2024 Issue

John Roemer, ABA Journal; Fake Cases, Real Consequences [No digital link as of 10/1/24]

"Legal commentator Eugene Volokh, a professor at UCLA School of Law who tracks AI in litigation, in February reported on the 14th court case he's found in which AI-hallucinated false citations appeared. It was a Missouri Court of Appeals opinion that assessed the offending appellant $10,000 in damages for a frivolous filing.

Hallucinations aren't the only snag, Volokh says. "It's also with the output mischaracterizing the precedents or omitting key context. So one still has to check that output to make sure it's sound, rather than just including it in one's papers."

Echoing Volokh and other experts, ChatGPT itself seems clear-eyed about its limits. When asked about hallucinations in legal research, it replied in part: "Hallucinations in chatbot answers could potentially pose a problem for lawyers if they relied solely on the information provided by the chatbot without verifying its accuracy."

Monday, September 23, 2024

Generative AI and Legal Ethics; JD Supra, September 20, 2024

Craig Brodsky, Goodell, DeVries, Leech & Dann, LLP, JD Supra; Generative AI and Legal Ethics

"In his scathing opinion, Cullen joined judges from New York, Massachusetts and North Carolina, among others, in concluding that improper use of AI-generated authorities may give rise to sanctions and disciplinary charges...

As a result, on July 29, 2024, the American Bar Association Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512 on Generative Artificial Intelligence Tools. The ABA Standing Committee issued the opinion primarily because GAI tools are a “rapidly moving target” that can create significant ethical issues. The committee believed it necessary to offer “general guidance for lawyers attempting to navigate this emerging landscape.”

The committee’s general guidance is helpful, but the general nature of Opinion 512 underscores part of my main concern — GAI has a wide-ranging impact on how lawyers practice that will increase over time. Unsurprisingly, at present, GAI implicates at least eight ethical rules, ranging from competence (Md. Rule 19-301.1) to communication (Md. Rule 19-301.4), to fees (Md. Rule 19-301.5), to confidentiality (Md. Rule 19-301.6), to supervisory obligations (Md. Rule 19-305.1 and Md. Rule 19-305.3), to the duties of a lawyer before a tribunal to be candid and to pursue meritorious claims and defenses (Md. Rules 19-303.1 and 19-303.3).

As a technological feature of practice, lawyers cannot simply ignore GAI. The duty of competence under Rule 19-301.1 includes technical competence, and GAI is just another step forward. It is here to stay. We must embrace it but use it smartly.

Let it be an adjunct to your practice rather than having ChatGPT write your brief. Ensure that your staff understands that GAI can be helpful, but that the work product must be checked for accuracy.

After considering the ethical implications and putting the right processes in place, implement GAI and use it to your clients’ advantage."

Tuesday, August 20, 2024

He Regulated Medical Devices. His Wife Represented Their Makers.; The New York Times, August 20, 2024

The New York Times; He Regulated Medical Devices. His Wife Represented Their Makers.

"For 15 years, Dr. Jeffrey E. Shuren was the federal official charged with ensuring the safety of a vast array of medical devices including artificial knees, breast implants and Covid tests.

When he announced in July that he would be retiring from the Food and Drug Administration later this year, Dr. Robert Califf, the agency’s commissioner, praised him for overseeing the approval of more novel devices last year than ever before in the nearly half-century history of the device division.

But the admiration for Dr. Shuren is far from universal. Consumer advocates see his tenure as marred by the approval of too many devices that harmed patients and by his own close ties to the $500 billion global device industry.

One connection stood out: While Dr. Shuren regulated the booming medical device industry, his wife, Allison W. Shuren, represented the interests of device makers as the co-leader of a team of lawyers at Arnold & Porter, one of Washington’s most powerful law firms."

Sunday, August 18, 2024

UC Berkeley Law School To Offer Advanced Law Degree Focused On AI; Forbes, August 16, 2024

  Michael T. Nietzel, Forbes; UC Berkeley Law School To Offer Advanced Law Degree Focused On AI

"The University of California, Berkeley School of Law has announced that it will offer what it’s calling “the first-ever law degree with a focus on artificial intelligence (AI).” The new AI-focused Master of Laws (LL.M.) program is scheduled to launch in summer 2025.

The program, which will award an AI Law and Regulation certificate for students enrolled in UC Berkeley Law’s LL.M. executive track, is designed for working professionals and can be completed over two summers or through remote study combined with one summer on campus...

According to Assistant Law Dean Adam Sterling, the curriculum will cover topics such as AI ethics, the fundamentals of AI technology, and current and future efforts to regulate AI. “This program will equip participants with in-depth knowledge of the ethical, regulatory, and policy challenges posed by AI,” Sterling added. “It will focus on building practice skills to help them advise and represent leading law firms, AI companies, governments, and non-profit organizations.”"

Friday, August 2, 2024

Bipartisan Legal Group Urges Lawyers to Defend Against ‘Rising Authoritarianism’; The New York Times, August 1, 2024

The New York Times; Bipartisan Legal Group Urges Lawyers to Defend Against ‘Rising Authoritarianism’

"A bipartisan American Bar Association task force is calling on lawyers across the country to do more to help protect democracy ahead of the 2024 election, warning in a statement to be delivered Friday at the group’s annual meeting in Chicago that the nation faces a serious threat in “rising authoritarianism.”

The statement by a panel of prominent legal thinkers and other public figures — led by J. Michael Luttig, a conservative former federal appeals court judge appointed by President George Bush, and Jeh C. Johnson, a Homeland Security secretary during the Obama administration — does not mention by name former President Donald J. Trump.

But in raising alarms, the panel appeared to be clearly referencing Mr. Trump’s attempt to subvert his loss of the 2020 election, which included attacks on election workers who were falsely accused by Mr. Trump and his supporters of rigging votes and culminated in the violent attack on the Capitol by his supporters on Jan. 6, 2021."

Jeffrey Clark Should Get 2-Year Suspension, DC Ethics Board Says; Bloomberg Law, August 1, 2024

Sam Skolnik, Bloomberg Law; Jeffrey Clark Should Get 2-Year Suspension, DC Ethics Board Says

"Trump administration Justice Department official Jeffrey Clark should receive a two-year suspension for attempting dishonesty over his efforts to overturn the 2020 election, a DC Board on Professional Responsibility panel recommended Thursday.

“Disciplinary Counsel has proven by clear and convincing evidence that Mr. Clark attempted dishonesty and did so with truly extraordinary recklessness,” the panel said.

The recommendation from a board hearing committee is in stark contrast to that of DC Disciplinary Counsel Phil Fox, who on April 29 said that disbarment is “the only possible sanction” for Clark.

Clark, a former US assistant attorney general, in late 2020 tried to get his Justice Department superiors to send a letter to Georgia state officials improperly questioning the election outcome, three lawyers for the bar, led by Fox, wrote. Clark engaged in a “dishonest attempt to create national chaos on the verge of January 6,” they wrote.

Fox didn’t prove “by clear and convincing evidence that Mr. Clark was as culpable” as Trump lawyers Rudy Giuliani or John Eastman, but he was culpable, the committee said in its 213-page, Aug. 1 report."

Tuesday, July 2, 2024

Navigate ethical and regulatory issues of using AI; Thomson Reuters, July 1, 2024

Thomson Reuters; Navigate ethical and regulatory issues of using AI

"However, the need for regulation to ensure clarity and trust and to mitigate risk has not gone unnoticed. According to the report, the vast majority (93%) of professionals surveyed said they recognize the need for regulation. Among the top concerns: a lack of trust and unease about the accuracy of AI. This is especially true in the context of using the AI output as advice without a human checking for its accuracy."

Monday, June 24, 2024

New Legal Ethics Opinion Cautions Lawyers: You ‘Must Be Proficient’ In the Use of Generative AI; LawSites, June 24, 2024

LawSites; New Legal Ethics Opinion Cautions Lawyers: You ‘Must Be Proficient’ In the Use of Generative AI

"A new legal ethics opinion on the use of generative AI in law practice makes one point very clear: lawyers are required to maintain competence across all technological means relevant to their practices, and that includes the use of generative AI.

The opinion, jointly issued by the Pennsylvania Bar Association and Philadelphia Bar Association, was issued to educate attorneys on the benefits and pitfalls of using generative AI and to provide ethical guidelines.

While the opinion is focused on AI, it repeatedly emphasizes that a lawyer’s ethical obligations surrounding this emerging form of technology are no different than those for any form of technology...

12 Points of Responsibility

The 16-page opinion offers a concise primer on the use of generative AI in law practice, including a brief background on the technology and a summary of other states’ ethics opinions.

But most importantly, it concludes with 12 points of responsibility pertaining to lawyers using generative AI:

  • Be truthful and accurate: The opinion warns that lawyers must ensure that AI-generated content, such as legal documents or advice, is truthful, accurate and based on sound legal reasoning, upholding principles of honesty and integrity in their professional conduct.
  • Verify all citations and the accuracy of cited materials: Lawyers must ensure the citations they use in legal documents or arguments are accurate and relevant. That includes verifying that the citations accurately reflect the content they reference.
  • Ensure competence: Lawyers must be competent in using AI technologies.
  • Maintain confidentiality: Lawyers must safeguard information relating to the representation of a client and ensure that AI systems handling confidential data both adhere to strict confidentiality measures and prevent the sharing of confidential data with others not protected by the attorney-client privilege.
  • Identify conflicts of interest: Lawyers must be vigilant, the opinion says, in identifying and addressing potential conflicts of interest arising from using AI systems.
  • Communicate with clients: Lawyers must communicate with clients about using AI in their practices, providing clear and transparent explanations of how such tools are employed and their potential impact on case outcomes. If necessary, lawyers should obtain client consent before using certain AI tools.
  • Ensure information is unbiased and accurate: Lawyers must ensure that the data used to train AI models is accurate, unbiased, and ethically sourced to prevent perpetuating biases or inaccuracies in AI-generated content.
  • Ensure AI is properly used: Lawyers must be vigilant against the misuse of AI-generated content, ensuring it is not used to deceive or manipulate legal processes, evidence or outcomes.
  • Adhere to ethical standards: Lawyers must stay informed about relevant regulations and guidelines governing the use of AI in legal practice to ensure compliance with legal and ethical standards.
  • Exercise professional judgment: Lawyers must exercise their professional judgment in conjunction with AI-generated content, and recognize that AI is a tool that assists but does not replace legal expertise and analysis.
  • Use proper billing practices: AI has tremendous time-saving capabilities. Lawyers must, therefore, ensure that AI-related expenses are reasonable and appropriately disclosed to clients.
  • Maintain transparency: Lawyers should be transparent with clients, colleagues, and the courts about the use of AI tools in legal practice, including disclosing any limitations or uncertainties associated with AI-generated content.

My Advice: Don’t Be Stupid

Over the years of writing about legal technology and legal ethics, I have developed my own shortcut rule for staying out of trouble: Don’t be stupid...

You can read the full opinion here: Joint Formal Opinion 2024-200."

Saturday, June 8, 2024

NJ Bar Association Warns the Practice of Law Is Poised for Substantial Transformation Due To AI; The National Law Review, June 4, 2024

 James G. Gatto of Sheppard, Mullin, Richter & Hampton LLP, The National Law Review; NJ Bar Association Warns the Practice of Law Is Poised for Substantial Transformation Due To AI

"The number of bar associations that have issued AI ethics guidance continues to grow, with NJ being the most recent. In its May 2024 report (Report), the NJ Task Force on Artificial Intelligence and the Law made a number of recommendations and findings as detailed below. With this Report, NJ joins the list of other bar associations that have issued AI ethics guidance, including Florida, California, New York, and DC, as well as the US Patent and Trademark Office. The Report notes that the practice of law is “poised for substantial transformation due to AI,” adding that while the full extent of this transformation remains to be seen, attorneys must keep abreast of and adapt to evolving technological landscapes and embrace opportunities for innovation and specialization in emerging AI-related legal domains.

The Task Force included four workgroups, including: i) Artificial Intelligence and Social Justice Concerns; ii) Artificial Intelligence Products and Services; iii) Education and CLE Programming; and iv) Ethics and Regulatory Issues. Each workgroup made findings and recommendations, some of which are provided below (while trying to avoid duplicating what other bar associations have addressed). Additionally, the Report includes some practical tools including guidance on Essential Factors for Selecting AI Products and Formulating an AI Policy in Legal Firms, provides a Sample Artificial Intelligence and Generative Artificial Intelligence Use Policy and Questions for Vendors When Selecting AI Products and Services, links to which are provided below.

The Report covers many of the expected topics with a focus on:

  • prioritizing AI education, establishing baseline procedures and guidelines, and collaborating with data privacy, cybersecurity, and AI professionals as needed;
  • adopting an AI policy to ensure the responsible integration of AI in legal practice and adherence to ethical and legal standards; and
  • the importance of social justice concerns related to the use of AI, including the importance of transparency in AI software algorithms, bias mitigation, and equitable access to AI tools and the need to review legal AI tools for fairness and accessibility, particularly tools designed for individuals from marginalized or vulnerable communities.

Some of the findings and recommendations are set forth below."

Tuesday, June 4, 2024

Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools; Stanford University, 2024

Varun Magesh, Stanford University; Faiz Surani, Stanford University; Matthew Dahl, Yale University; Mirac Suzgun, Stanford University; Christopher D. Manning, Stanford University; Daniel E. Ho, Stanford University

Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools

"Abstract

Legal practice has witnessed a sharp rise in products incorporating artificial intelligence (AI). Such tools are designed to assist with a wide range of core legal tasks, from search and summarization of caselaw to document drafting. But the large language models used in these tools are prone to “hallucinate,” or make up false information, making their use risky in high-stakes domains. Recently, certain legal research providers have touted methods such as retrieval-augmented generation (RAG) as “eliminating” (Casetext, 2023) or “avoid[ing]” hallucinations (Thomson Reuters, 2023), or guaranteeing “hallucination-free” legal citations (LexisNexis, 2023). Because of the closed nature of these systems, systematically assessing these claims is challenging. In this article, we design and report on the first preregistered empirical evaluation of AI-driven legal research tools. We demonstrate that the providers’ claims are overstated. While hallucinations are reduced relative to general-purpose chatbots (GPT-4), we find that the AI research tools made by LexisNexis (Lexis+ AI) and Thomson Reuters (Westlaw AI-Assisted Research and Ask Practical Law AI) each hallucinate between 17% and 33% of the time. We also document substantial differences between systems in responsiveness and accuracy. Our article makes four key contributions. It is the first to assess and report the performance of RAG-based proprietary legal AI tools. Second, it introduces a comprehensive, preregistered dataset for identifying and understanding vulnerabilities in these systems. Third, it proposes a clear typology for differentiating between hallucinations and accurate legal responses. Last, it provides evidence to inform the responsibilities of legal professionals in supervising and verifying AI outputs, which remains a central open question for the responsible integration of AI into law."

Monday, April 1, 2024

From Pizzagate to the 2020 Election: Forcing Liars to Pay or Apologize; The New York Times, March 31, 2024

Elizabeth Williamson, The New York Times; From Pizzagate to the 2020 Election: Forcing Liars to Pay or Apologize

"Convinced that viral lies threaten public discourse and democracy, he is at the forefront of a small but growing cadre of lawyers deploying defamation, one of the oldest areas of the law, as a weapon against a tide of political disinformation."

Wednesday, January 31, 2024

Lawyers viewed as more ethical than car salespeople and US lawmakers; ABA Journal, January 30, 2024

Debra Cassens Weiss, ABA Journal; Lawyers viewed as more ethical than car salespeople and US lawmakers

"Only 16% of Americans rate lawyers’ honesty and ethical standards as "high" or "very high," according to a Gallup poll taken in December.

The percentage has decreased since 2022, when 21% of Americans said lawyers had high or very high honesty and ethical standards, and since 2019, when the percentage was 22%, according to a Jan. 22 press release with results of Gallup’s 2023 Honesty and Ethics poll.

Lawyers did better than business executives, insurance salespeople and stockbrokers. Twelve percent of Americans viewed those occupations as having high or very high ethics and honesty. The percentage decreased to 8% for advertising practitioners, car salespeople and senators, and 6% for members of Congress."

Tuesday, January 30, 2024

Florida’s New Advisory Ethics Opinion on Generative AI Hits the Mark; JDSupra, January 29, 2024

Ralph Artigliere, JDSupra; Florida’s New Advisory Ethics Opinion on Generative AI Hits the Mark

"As a former Florida trial lawyer and judge who appreciates emerging technology, I admit that I had more than a little concern when The Florida Bar announced it was working on a new ethics opinion on generative AI. Generative AI promises to provide monumental advantages to lawyers in their workflow, quality of work product, productivity, and time management and more. For clients, use of generative AI by their lawyers can mean better legal services delivered faster and with greater economy. In the area of eDiscovery, generative AI promises to surpass technology assisted review in helping manage the increasingly massive amounts of data.

Generative AI is new to the greater world, and certainly to busy lawyers who are not reading every blogpost on AI. The internet and journals are afire over concerns of hallucinations, confidentiality, bias, and the like. I felt a new ethics opinion might throw a wet blanket on generative AI and discourage Florida lawyers from investigating the new technology.

Thankfully, my concerns did not become reality. The Florida Bar took a thorough look at the technology and the existing ethical guidance and law and applied existing guidelines and rules in a thorough and balanced fashion. This article briefly summarizes Opinion 24-1 and highlights some of its important features.

The Opinion

On January 19, 2024, The Florida Bar released Ethics Opinion 24-1 (“Opinion 24-1”) regarding the use of generative artificial intelligence (“AI”) in the practice of law. The Florida Bar and the State Bar of California are leaders in issuing ethical guidance on this issue. Opinion 24-1 draws from a solid background of ethics opinions and guidance in Florida and around the country and provides positive as well as cautionary statements regarding the emerging technologies. Overall, the guidance is well-placed and helpful for lawyers at a time when so many are weighing the use of generative AI technology in their law practices."

Lawyers weigh strength of copyright suit filed against BigLaw firm; Rhode Island Lawyers Weekly, January 29, 2024

Pat Murphy, Rhode Island Lawyers Weekly; Lawyers weigh strength of copyright suit filed against BigLaw firm

"Jerry Cohen, a Boston attorney who teaches IP law at Roger Williams University School of Law, called the suit “not so much a copyright case as it is a matter of professional responsibility and respect.”"

Monday, January 1, 2024

Roberts sidesteps Supreme Court’s ethics controversies in yearly report; The Washington Post, December 31, 2023

The Washington Post; Roberts sidesteps Supreme Court’s ethics controversies in yearly report

"Roberts, a history buff, also expounded on the potential for artificial intelligence to both enhance and detract from the work of judges, lawyers and litigants. For those who cannot afford a lawyer, he noted, AI could increase access to justice.

“AI obviously has great potential to dramatically increase access to key information for lawyers and non-lawyers alike. But just as it risks invading privacy interests and dehumanizing the law,” Roberts wrote, “machines cannot fully replace key actors in court.”...

Roberts also did not mention in his 13-page report the court’s adoption for the first time of a formal code of conduct, announced in November, specific to the nine justices and intended to promote “integrity and impartiality.” For years, the justices said they voluntarily complied with the same ethical guidelines that apply to other federal judges and resisted efforts by Congress to impose a policy on the high court...

The policy was praised by some as a positive initial step, but criticized by legal ethics experts for giving the justices too much discretion over recusal decisions and for not including a process for holding the justices accountable if they violate their own rules."