Showing posts with label lawyers. Show all posts

Tuesday, February 18, 2025

ABA condemns remarks questioning legitimacy of courts and judicial review; American Bar Association (ABA), February 11, 2025

American Bar Association (ABA); ABA condemns remarks questioning legitimacy of courts and judicial review

"Last week, the administration lost a pretrial motion in a federal district court, which halted government efforts to gain access to Department of Treasury records including private records of many, if not all, U.S. citizens.  

It is certainly not the first time an administration has not prevailed in a pretrial motion in one of thousands of cases it files or defends each year. There is no final judgment in this case and, in any event, the government can appeal in a manner it has done countless times over the years. The right to appeal is there for any party dissatisfied with a court’s decision. It is also the right of every American and the government to criticize a decision made by the courts.   

What is never acceptable is what was said by representatives of this administration, including the misleading assertion that judges cannot control the executive’s legitimate power and calls for impeachment of a judge who did not rule in the administration’s favor. It is also not acceptable to attack the judge making the ruling or try to interfere with the independence of the court.   

These statements attack the legitimacy of judicial oversight just because a court’s ruling is not what the administration wants in a particular case. It is a fundamental cornerstone of our democracy that the courts are the protectors of the citizenry from government overreach. All lawyers know that judges have the authority to determine whether the administration’s actions are lawful and a legitimate exercise of executive branch authority. It is one of the oldest and most revered precedents in United States legal history — Marbury v. Madison. This is a key principle that is taught in the first year of law school.  

These bold assertions, designed to intimidate judges by threatening removal if they do not rule the government's way, cross the line. They create a risk to the physical security of judges and have no place in our society. There have also been suggestions that the executive branch should consider disobeying court orders. These statements threaten the very foundation of our constitutional system.   

The ABA calls for every lawyer and legal organization to speak with one voice and to condemn the efforts of any administration that suggests its actions are beyond the reach of judicial review. We also call for condemnation and rejection of calls for the impeachment of a judge who did not rule in a certain way.   

This is not the first time we have called out criticism and efforts to demonize the courts. The ABA spoke last fall during the previous administration and called out comments from both sides.  

We recognize the potential risk to our profession, the ABA and our members, by speaking. But to stay silent is to suggest that these statements are acceptable or the new norm. They are not. And we will not be silent in the face of such words that are contrary to our constitutional system. They pose a clear and present challenge to our democracy and the separation of powers among the three independent branches. We will stand for the rule of law today as we have for nearly 150 years.  

The ABA is one of the largest voluntary associations of lawyers in the world. As the national voice of the legal profession, the ABA works to improve the administration of justice, promotes programs that assist lawyers and judges in their work, accredits law schools, provides continuing legal education, and works to build public understanding around the world of the importance of the rule of law."

Monday, February 10, 2025

The ABA supports the rule of law; American Bar Association (ABA), February 10, 2025

American Bar Association (ABA); The ABA supports the rule of law

"It has been three weeks since Inauguration Day. Most Americans recognize that newly elected leaders bring change. That is expected. But most Americans also expect that changes will take place in accordance with the rule of law and in an orderly manner that respects the lives of affected individuals and the work they have been asked to perform.  

Instead, we see wide-scale affronts to the rule of law itself, such as attacks on constitutionally protected birthright citizenship, the dismantling of USAID and the attempts to criminalize those who support lawful programs to eliminate bias and enhance diversity.

We have seen attempts at wholesale dismantling of departments and entities created by Congress without seeking the required congressional approval to change the law. There are efforts to dismiss employees with little regard for the law and protections they merit, and social media announcements that disparage and appear to be motivated by a desire to inflame without any stated factual basis. This is chaotic. It may appeal to a few. But it is wrong. And most Americans recognize it is wrong. It is also contrary to the rule of law. 

The American Bar Association supports the rule of law. That means holding governments, including our own, accountable under law. We stand for a legal process that is orderly and fair. We have consistently urged the administrations of both parties to adhere to the rule of law. We stand in that familiar place again today. And we do not stand alone. Our courts stand for the rule of law as well.

Just last week, in rejecting citizenship challenges, U.S. District Judge John Coughenour said that the rule of law is, according to this administration, something to navigate around or simply ignore. “Nevertheless,” he said, “in this courtroom and under my watch, the rule of law is a bright beacon which I intend to follow.” He is correct. The rule of law is a bright beacon for our country.

In the last 21 days, more than a dozen lawsuits have been filed alleging that the administration’s actions violate the rule of law and are contrary to the Constitution or laws of the United States. The list grows longer every day. 

These actions have forced affected parties to seek relief in the courts, which stand as a bulwark against these violations. We support our courts who are treating these cases with the urgency they require. Americans know there is a right way and a wrong way to proceed. What is being done is not the right way to pursue the change that is sought in our system of government.   

These actions do not make America stronger. They make us weaker. Many Americans are rightly concerned about how leaders who are elected, confirmed or appointed are proceeding to make changes. The goals of eliminating departments and entire functions do not justify the means when the means are not in accordance with the law. Americans expect better. Even among those who want change, no one wants their neighbor or their family to be treated this way. Yet that is exactly what is happening.   

These actions have real-world consequences. Recently hired employees fear they will lose their jobs because of some matter they were assigned to in the Justice Department or some training they attended in their agency. USAID employees assigned to build programs that benefit foreign countries are being doxed, harassed with name-calling and receiving conflicting information about their employment status. These stories should concern all Americans because they are our family members, neighbors and friends. No American can be proud of a government that carries out change in this way. Neither can these actions be rationalized by discussion of past grievances or appeals to efficiency. Everything can be more efficient, but adherence to the rule of law is paramount. We must be cognizant of the harm being done by these methods. 

Moreover, refusing to spend money appropriated by Congress under the euphemism of a pause is a violation of the rule of law and suggests that the executive branch can overrule the other two co-equal branches of government. This is contrary to the constitutional framework and not the way our democracy works. The money appropriated by Congress must be spent in accordance with what Congress has said. It cannot be changed or paused because a newly elected administration desires it. Our elected representatives know this. The lawyers of this country know this. It must stop.

There is much that Americans disagree on, but all of us expect our government to follow the rule of law, protect due process and treat individuals in a way that we would treat others in our homes and workplaces. The ABA does not oppose any administration. Instead, we remain steadfast in our support for the rule of law.  

We call upon our elected representatives to stand with us and to insist upon adherence to the rule of law and the legal processes and procedures that ensure orderly change. The administration cannot choose which law it will follow or ignore. These are not partisan or political issues. These are rule of law and process issues. We cannot afford to remain silent. We must stand up for the values we hold dear. The ABA will do its part and act to protect the rule of law.

We urge every attorney to join us and insist that our government, a government of the people, follow the law. It is part of the oath we took when we became lawyers. Whatever your political party or your views, change must be made in the right way. Americans expect no less.

– William R. Bay, president of the American Bar Association"

Tuesday, January 28, 2025

Career US Justice Department official in charge of public corruption cases resigns; Reuters, January 27, 2025

Reuters; Career US Justice Department official in charge of public corruption cases resigns

"Corey Amundson, the U.S. Justice Department's senior career official in charge of overseeing public corruption and other politically sensitive investigations, resigned on Monday after the Trump administration tried to reassign him to a new role working on immigration issues, according to a letter seen by Reuters.

"I am honored and blessed to have served our country and this department for the last 23 years," Amundson wrote in his letter to Acting Attorney General James McHenry.

"I spent my entire professional life committed to the apolitical enforcement of the federal criminal law and to ensuring that those around me understood and embraced that central tenet of our work," Amundson said.

Amundson is one of an estimated 20 career officials inside the Justice Department who were reassigned last week to a new Sanctuary City Working Group inside the Associate Attorney General's office."

The Power of Three: Civility, Professionalism, and Zealous Advocacy; ABA Journal, November 5, 2024

Jeanne M. Huey, ABA Journal; The Power of Three: Civility, Professionalism, and Zealous Advocacy

"Balancing Civility, Professionalism, and Zealous Advocacy

 The “power of three” reminds us that civility, professionalism, and zealous advocacy are not competing ideals but instead work together to define our duty to our clients, our duty to the justice system, and our duty to respect others, which is the mark of effective lawyering. Zealous advocacy without civility leads to unproductive conflict, while civility without zeal risks losing sight of the client’s interests. Professionalism embraces both, ensuring that civility and advocacy serve the client and the justice system. A balanced commitment to all three creates a steady, resilient structure that upholds a lawyer’s duty to serve their client’s best interests within the rule of law."


Saturday, November 16, 2024

Tracking The Slow Movement Of AI Copyright Cases; Law360, November 7, 2024

Mark Davies and Anna Naydonov, Law360; Tracking The Slow Movement Of AI Copyright Cases

"There is a considerable gap between assumptions in the technology community and assumptions in the legal community concerning how long the legal questions around artificial intelligence and copyright law will take to reach resolution.

The principal litigated question asks whether copyright law permits or forbids the process by which AI systems are using copyright works to generate additional works.[1] AI technologists expect that the U.S. Supreme Court will resolve these questions in a few years.[2] Lawyers expect it to take much longer.[3] History teaches the answer...

Mark S. Davies and Anna B. Naydonov are partners at White & Case LLP.

Mark Davies represented Stephen Thaler in Thaler v. Vidal, Oracle in Google v. Oracle, and filed an amicus brief on behalf of a design professional in Apple v. Samsung."

Sunday, November 3, 2024

Ahead of US election, lawyers fight over ethics breach accusations; Reuters, November 2, 2024

Reuters; Ahead of US election, lawyers fight over ethics breach accusations

"After Donald Trump's bid to overturn his 2020 election loss, an advocacy group was launched to take on the lawyers who aided in his doomed effort, hitting them with more than 80 ethics complaints.

With Trump again the Republican candidate for the U.S. presidency, his allies have fired back at this group, named the 65 Project. A pro-Trump nonprofit known as America First Legal has accused the 65 Project of engaging in a left-wing attempt to intimidate conservative lawyers, filing a bar complaint earlier this week against the 65 Project's top lawyer Michael Teter. The Oct. 28 complaint said Teter was targeting lawyers "based solely upon their representation of a disfavored client."...

The 65 Project, named for the number of unsuccessful lawsuits it says were filed to challenge Democratic President Joe Biden's win, says its mission is to deter lawyers from bringing false election claims. In September, the group pledged to spend at least $100,000 on advertisements in legal journals in battleground states warning lawyers not to risk losing their law license by helping Trump.

America First Legal, a nonprofit founded in 2021 by former Trump White House aide Stephen Miller, harshly criticized the ads on its website in announcing its complaint against Teter. The group has increasingly focused on the election this year after previously bringing suits challenging diversity and migration policies."

Thursday, October 31, 2024

'The Calculator Mistake': Denial, hostility won't help lawyers deal with emergence of AI; ABA Journal, October 23, 2024

Tracy Hresko Pearl, ABA Journal; 'The Calculator Mistake': Denial, hostility won't help lawyers deal with emergence of AI

"There are two ways to deal with this kind of uncertainty. The first is denial and hostility. Legal news outlets have been filled with articles in recent months about the problems with AI-generated legal briefs. Such briefs may contain fake citations. They miss important points. They lack nuance.

The obvious solution, when the problem is framed in this way, is to point lawyers away from using AI, impose strong sanctions on attorneys who misuse it, and redouble law school exam security and anti-plagiarism measures to ensure that law students are strongly disincentivized from using these new forms of technology. “Old school” law practice and legal teaching techniques, in this view, should continue to be the gold standard of our profession.

The problem, of course, is that technology gets better and does so at an increasingly (and sometimes alarmingly) rapid rate. No lawyer worth their salt would dare turn in an AI-generated legal brief now, given the issues listed above and the potential consequences. But we are naive to think that the technology won’t eventually overtake even the most gifted of legal writers.

That point may not be tomorrow; it may not be five years from now. But that time is coming, and when it does, denial and hostility won’t get us around the fact that it may no longer be in the best interests of our clients for a lawyer to write briefs on their own. Denial and hostility won’t help us deal with what, at that point, will be a serious existential threat to our profession.

The second way to deal with the uncertainty of emerging technology is to recognize that profound change is inevitable and then do the deeper, tougher and more philosophical work of discerning how humans can still be of value in a profession that, like nearly every other, will cede a great deal of ground to AI in the not-too-distant future. What will it mean to be a lawyer, a judge or a law professor in that world? What should it mean?

I am increasingly convinced that the answers to those questions are in so-called soft skills and critical thinking."

Saturday, October 5, 2024

Police reports written with advanced tech could help cops but comes with host of challenges: expert; Fox News, September 24, 2024

Christina Coulter, Fox News; Police reports written with advanced tech could help cops but comes with host of challenges

"Several police departments nationwide are debuting artificial intelligence that writes officers' incident reports for them, and although the software could cause issues in court, an expert says, the technology could be a boon for law enforcement.

Oklahoma City's police department was among the first to experiment with Draft One, an AI-powered software that analyzes police body-worn camera audio and radio transmissions to write police reports that can later be used to justify criminal charges and as evidence in court.

Since The Associated Press detailed the software and its use by the department in a late August article, the department told Fox News Digital that it has put the program on hold. 

"The use of the AI report writing has been put on hold, so we will pass on speaking about it at this time," Capt. Valerie Littlejohn wrote via email. "It was paused to work through all the details with the DA’s Office."...

According to Politico, at least seven police departments nationwide are using Draft One, which was made by police technology company Axon to be used with its widely used body-worn cameras."

Friday, October 4, 2024

Ethical uses of generative AI in the practice of law; Reuters, October 3, 2024

Thomson Reuters; Ethical uses of generative AI in the practice of law

"In the rapidly evolving landscape of legal technology, the integration of generative AI tools presents both unprecedented opportunities and significant ethical challenges. Ryan Groff, a distinguished member of the Massachusetts Bar and a lecturer at New England Law, explores these dimensions in his enlightening webinar, “Ethical Uses of Generative AI in the Practice of Law.” 

In the webinar, Ryan Groff discusses the ethical implications of using generative AI (GenAI) in legal practice, tracing the history of GenAI applications in law and distinguishing among the various AI tools available today. Groff emphasizes that while AI can enhance the efficiency of legal practices, it should not undermine the critical judgment of lawyers. He underscores the importance of maintaining rigorous supervision, safeguarding client confidentiality, and ensuring technological proficiency."

Tuesday, October 1, 2024

Fake Cases, Real Consequences [No digital link as of 10/1/24]; ABA Journal, Oct./Nov. 2024 Issue

John Roemer, ABA Journal; Fake Cases, Real Consequences [No digital link as of 10/1/24]

"Legal commentator Eugene Volokh, a professor at UCLA School of Law who tracks AI in litigation, in February reported on the 14th court case he's found in which AI-hallucinated false citations appeared. It was a Missouri Court of Appeals opinion that assessed the offending appellant $10,000 in damages for a frivolous filing.

Hallucinations aren't the only snag, Volokh says. "It's also with the output mischaracterizing the precedents or omitting key context. So one still has to check that output to make sure it's sound, rather than just including it in one's papers."

Echoing Volokh and other experts, ChatGPT itself seems clear-eyed about its limits. When asked about hallucinations in legal research, it replied in part: "Hallucinations in chatbot answers could potentially pose a problem for lawyers if they relied solely on the information provided by the chatbot without verifying its accuracy."

Monday, September 23, 2024

Generative AI and Legal Ethics; JD Supra, September 20, 2024

Craig Brodsky, Goodell, DeVries, Leech & Dann, LLP, JD Supra; Generative AI and Legal Ethics

"In his scathing opinion, Cullen joined judges from New York, Massachusetts and North Carolina, among others, in concluding that improper use of AI-generated authorities may give rise to sanctions and disciplinary charges...

As a result, on July 29, 2024, the American Bar Association Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512 on Generative Artificial Intelligence Tools. The ABA Standing Committee issued the opinion primarily because GAI tools are a “rapidly moving target” that can create significant ethical issues. The committee believed it necessary to offer “general guidance for lawyers attempting to navigate this emerging landscape.”

The committee’s general guidance is helpful, but the general nature of Opinion 512 underscores part of my main concern — GAI has a wide-ranging impact on how lawyers practice that will increase over time. Unsurprisingly, at present, GAI implicates at least eight ethical rules ranging from competence (Md. Rule 19-301.1) to communication (Md. Rule 19-301.4), to fees (Md. Rule 19-301.5), to confidentiality (Md. Rule 19-301.6), to supervisory obligations (Md. Rule 19-305.1 and Md. Rule 19-305.3), to the duties of a lawyer before a tribunal to be candid and to pursue meritorious claims and defenses (Md. Rules 19-303.1 and 19-303.3).

As a technological feature of practice, lawyers cannot simply ignore GAI. The duty of competence under Rule 19-301.1 includes technical competence, and GAI is just another step forward. It is here to stay. We must embrace it but use it smartly.

Let it be an adjunct to your practice rather than having ChatGPT write your brief. Ensure that your staff understands that GAI can be helpful, but that the work product must be checked for accuracy.

After considering the ethical implications and putting the right processes in place, implement GAI and use it to your clients’ advantage."

Tuesday, August 20, 2024

He Regulated Medical Devices. His Wife Represented Their Makers.; The New York Times, August 20, 2024

The New York Times; He Regulated Medical Devices. His Wife Represented Their Makers.

"For 15 years, Dr. Jeffrey E. Shuren was the federal official charged with ensuring the safety of a vast array of medical devices including artificial knees, breast implants and Covid tests.

When he announced in July that he would be retiring from the Food and Drug Administration later this year, Dr. Robert Califf, the agency’s commissioner, praised him for overseeing the approval of more novel devices last year than ever before in the nearly half-century history of the device division.

But the admiration for Dr. Shuren is far from universal. Consumer advocates see his tenure as marred by the approval of too many devices that harmed patients and by his own close ties to the $500 billion global device industry.

One connection stood out: While Dr. Shuren regulated the booming medical device industry, his wife, Allison W. Shuren, represented the interests of device makers as the co-leader of a team of lawyers at Arnold & Porter, one of Washington’s most powerful law firms."

Sunday, August 18, 2024

UC Berkeley Law School To Offer Advanced Law Degree Focused On AI; Forbes, August 16, 2024

Michael T. Nietzel, Forbes; UC Berkeley Law School To Offer Advanced Law Degree Focused On AI

"The University of California, Berkeley School of Law has announced that it will offer what it’s calling “the first-ever law degree with a focus on artificial intelligence (AI).” The new AI-focused Master of Laws (LL.M.) program is scheduled to launch in summer 2025.

The program, which will award an AI Law and Regulation certificate for students enrolled in UC Berkeley Law’s LL.M. executive track, is designed for working professionals and can be completed over two summers or through remote study combined with one summer on campus...

According to Assistant Law Dean Adam Sterling, the curriculum will cover topics such as AI ethics, the fundamentals of AI technology, and current and future efforts to regulate AI. “This program will equip participants with in-depth knowledge of the ethical, regulatory, and policy challenges posed by AI,” Sterling added. “It will focus on building practice skills to help them advise and represent leading law firms, AI companies, governments, and non-profit organizations.”"

Friday, August 2, 2024

Bipartisan Legal Group Urges Lawyers to Defend Against ‘Rising Authoritarianism’; The New York Times, August 1, 2024

The New York Times; Bipartisan Legal Group Urges Lawyers to Defend Against ‘Rising Authoritarianism’

"A bipartisan American Bar Association task force is calling on lawyers across the country to do more to help protect democracy ahead of the 2024 election, warning in a statement to be delivered Friday at the group’s annual meeting in Chicago that the nation faces a serious threat in “rising authoritarianism.”

The statement by a panel of prominent legal thinkers and other public figures — led by J. Michael Luttig, a conservative former federal appeals court judge appointed by President George Bush, and Jeh C. Johnson, a Homeland Security secretary during the Obama administration — does not mention by name former President Donald J. Trump.

But in raising alarms, the panel appeared to be clearly referencing Mr. Trump’s attempt to subvert his loss of the 2020 election, which included attacks on election workers who were falsely accused by Mr. Trump and his supporters of rigging votes and culminated in the violent attack on the Capitol by his supporters on Jan. 6, 2021."

Jeffrey Clark Should Get 2-Year Suspension, DC Ethics Board Says; Bloomberg Law, August 1, 2024

Sam Skolnik, Bloomberg Law; Jeffrey Clark Should Get 2-Year Suspension, DC Ethics Board Says

"Trump administration Justice Department official Jeffrey Clark should receive a two-year suspension for attempting dishonesty over his efforts to overturn the 2020 election, a DC Board on Professional Responsibility panel recommended Thursday.

“Disciplinary Counsel has proven by clear and convincing evidence that Mr. Clark attempted dishonesty and did so with truly extraordinary recklessness,” the panel said.

The recommendation from a board hearing committee is in stark contrast to that of DC Disciplinary Counsel Phil Fox, who on April 29 said that disbarment is “the only possible sanction” for Clark.

Clark, a former US assistant attorney general, in late 2020 tried to get his Justice Department superiors to send a letter to Georgia state officials improperly questioning the election outcome, three lawyers for the bar, led by Fox, wrote. Clark engaged in a “dishonest attempt to create national chaos on the verge of January 6,” they wrote.

Fox didn’t prove “by clear and convincing evidence that Mr. Clark was as culpable” as Trump lawyers Rudy Giuliani or John Eastman, but he was culpable, the committee said in its 213-page, Aug. 1 report."

Tuesday, July 2, 2024

Navigate ethical and regulatory issues of using AI; Thomson Reuters, July 1, 2024

Thomson Reuters; Navigate ethical and regulatory issues of using AI

"However, the need for regulation to ensure clarity, trust, and mitigate risk has not gone unnoticed. According to the report, the vast majority (93%) of professionals surveyed said they recognize the need for regulation. Among the top concerns: a lack of trust and unease about the accuracy of AI. This is especially true in the context of using the AI output as advice without a human checking for its accuracy."

Monday, June 24, 2024

New Legal Ethics Opinion Cautions Lawyers: You ‘Must Be Proficient’ In the Use of Generative AI; LawSites, June 24, 2024

LawSites; New Legal Ethics Opinion Cautions Lawyers: You ‘Must Be Proficient’ In the Use of Generative AI

"A new legal ethics opinion on the use of generative AI in law practice makes one point very clear: lawyers are required to maintain competence across all technological means relevant to their practices, and that includes the use of generative AI.

The opinion, jointly issued by the Pennsylvania Bar Association and Philadelphia Bar Association, was issued to educate attorneys on the benefits and pitfalls of using generative AI and to provide ethical guidelines.

While the opinion is focused on AI, it repeatedly emphasizes that a lawyer’s ethical obligations surrounding this emerging form of technology are no different than those for any form of technology...

12 Points of Responsibility

The 16-page opinion offers a concise primer on the use of generative AI in law practice, including a brief background on the technology and a summary of other states’ ethics opinions.

But most importantly, it concludes with 12 points of responsibility pertaining to lawyers using generative AI:

  • Be truthful and accurate: The opinion warns that lawyers must ensure that AI-generated content, such as legal documents or advice, is truthful, accurate and based on sound legal reasoning, upholding principles of honesty and integrity in their professional conduct.
  • Verify all citations and the accuracy of cited materials: Lawyers must ensure the citations they use in legal documents or arguments are accurate and relevant. That includes verifying that the citations accurately reflect the content they reference.
  • Ensure competence: Lawyers must be competent in using AI technologies.
  • Maintain confidentiality: Lawyers must safeguard information relating to the representation of a client and ensure that AI systems handling confidential data both adhere to strict confidentiality measures and prevent the sharing of confidential data with others not protected by the attorney-client privilege.
  • Identify conflicts of interest: Lawyers must be vigilant, the opinion says, in identifying and addressing potential conflicts of interest arising from using AI systems.
  • Communicate with clients: Lawyers must communicate with clients about using AI in their practices, providing clear and transparent explanations of how such tools are employed and their potential impact on case outcomes. If necessary, lawyers should obtain client consent before using certain AI tools.
  • Ensure information is unbiased and accurate: Lawyers must ensure that the data used to train AI models is accurate, unbiased, and ethically sourced to prevent perpetuating biases or inaccuracies in AI-generated content.
  • Ensure AI is properly used: Lawyers must be vigilant against the misuse of AI-generated content, ensuring it is not used to deceive or manipulate legal processes, evidence or outcomes.
  • Adhere to ethical standards: Lawyers must stay informed about relevant regulations and guidelines governing the use of AI in legal practice to ensure compliance with legal and ethical standards.
  • Exercise professional judgment: Lawyers must exercise their professional judgment in conjunction with AI-generated content, and recognize that AI is a tool that assists but does not replace legal expertise and analysis.
  • Use proper billing practices: AI has tremendous time-saving capabilities. Lawyers must, therefore, ensure that AI-related expenses are reasonable and appropriately disclosed to clients.
  • Maintain transparency: Lawyers should be transparent with clients, colleagues, and the courts about the use of AI tools in legal practice, including disclosing any limitations or uncertainties associated with AI-generated content.

My Advice: Don’t Be Stupid

Over the years of writing about legal technology and legal ethics, I have developed my own shortcut rule for staying out of trouble: Don’t be stupid...

You can read the full opinion here: Joint Formal Opinion 2024-200."

Saturday, June 8, 2024

NJ Bar Association Warns the Practice of Law Is Poised for Substantial Transformation Due To AI; The National Law Review, June 4, 2024

 James G. Gatto of Sheppard, Mullin, Richter & Hampton LLP, The National Law Review; NJ Bar Association Warns the Practice of Law Is Poised for Substantial Transformation Due To AI

"The number of bar associations that have issued AI ethics guidance continues to grow, with NJ being the most recent. In its May 2024 report (Report), the NJ Task Force on Artificial Intelligence and the Law made a number of recommendations and findings as detailed below. With this Report, NJ joins the list of other bar associations that have issued AI ethics guidance, including Florida, California, New York, and DC, as well as the US Patent and Trademark Office. The Report notes that the practice of law is “poised for substantial transformation due to AI,” adding that while the full extent of this transformation remains to be seen, attorneys must keep abreast of and adapt to evolving technological landscapes and embrace opportunities for innovation and specialization in emerging AI-related legal domains.

The Task Force comprised four workgroups: i) Artificial Intelligence and Social Justice Concerns; ii) Artificial Intelligence Products and Services; iii) Education and CLE Programming; and iv) Ethics and Regulatory Issues. Each workgroup made findings and recommendations, some of which are provided below (while trying to avoid duplicating what other bar associations have addressed). Additionally, the Report includes some practical tools, including guidance on Essential Factors for Selecting AI Products and Formulating an AI Policy in Legal Firms, a Sample Artificial Intelligence and Generative Artificial Intelligence Use Policy, and Questions for Vendors When Selecting AI Products and Services, links to which are provided below.

The Report covers many of the expected topics with a focus on:

  • prioritizing AI education, establishing baseline procedures and guidelines, and collaborating with data privacy, cybersecurity, and AI professionals as needed;
  • adopting an AI policy to ensure the responsible integration of AI in legal practice and adherence to ethical and legal standards; and
  • the importance of social justice concerns related to the use of AI, including the importance of transparency in AI software algorithms, bias mitigation, and equitable access to AI tools and the need to review legal AI tools for fairness and accessibility, particularly tools designed for individuals from marginalized or vulnerable communities.

Some of the findings and recommendations are set forth below."

Tuesday, June 4, 2024

Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools; Stanford University, 2024

Varun Magesh∗, Stanford University; Faiz Surani∗, Stanford University; Matthew Dahl, Yale University; Mirac Suzgun, Stanford University; Christopher D. Manning, Stanford University; Daniel E. Ho†, Stanford University

Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools

"Abstract

Legal practice has witnessed a sharp rise in products incorporating artificial intelligence (AI). Such tools are designed to assist with a wide range of core legal tasks, from search and summarization of caselaw to document drafting. But the large language models used in these tools are prone to “hallucinate,” or make up false information, making their use risky in high-stakes domains. Recently, certain legal research providers have touted methods such as retrieval-augmented generation (RAG) as “eliminating” (Casetext, 2023) or “avoid[ing]” hallucinations (Thomson Reuters, 2023), or guaranteeing “hallucination-free” legal citations (LexisNexis, 2023). Because of the closed nature of these systems, systematically assessing these claims is challenging. In this article, we design and report on the first preregistered empirical evaluation of AI-driven legal research tools. We demonstrate that the providers’ claims are overstated. While hallucinations are reduced relative to general-purpose chatbots (GPT-4), we find that the AI research tools made by LexisNexis (Lexis+ AI) and Thomson Reuters (Westlaw AI-Assisted Research and Ask Practical Law AI) each hallucinate between 17% and 33% of the time. We also document substantial differences between systems in responsiveness and accuracy. Our article makes four key contributions. It is the first to assess and report the performance of RAG-based proprietary legal AI tools. Second, it introduces a comprehensive, preregistered dataset for identifying and understanding vulnerabilities in these systems. Third, it proposes a clear typology for differentiating between hallucinations and accurate legal responses. Last, it provides evidence to inform the responsibilities of legal professionals in supervising and verifying AI outputs, which remains a central open question for the responsible integration of AI into law."

Monday, April 1, 2024

From Pizzagate to the 2020 Election: Forcing Liars to Pay or Apologize; The New York Times, March 31, 2024

Elizabeth Williamson, The New York Times; From Pizzagate to the 2020 Election: Forcing Liars to Pay or Apologize

"Convinced that viral lies threaten public discourse and democracy, he is at the forefront of a small but growing cadre of lawyers deploying defamation, one of the oldest areas of the law, as a weapon against a tide of political disinformation."