Saturday, June 8, 2024

NJ Bar Association Warns the Practice of Law Is Poised for Substantial Transformation Due To AI; The National Law Review, June 4, 2024

 James G. Gatto of Sheppard, Mullin, Richter & Hampton LLP, The National Law Review; NJ Bar Association Warns the Practice of Law Is Poised for Substantial Transformation Due To AI

"The number of bar associations that have issued AI ethics guidance continues to grow, with NJ being the most recent. In its May 2024 report (Report), the NJ Task Force on Artificial Intelligence and the Law made a number of recommendations and findings as detailed below. With this Report, NJ joins the list of other bar associations that have issued AI ethics guidance, including FloridaCaliforniaNew YorkDC as well as the US Patent and Trademark Office. The Report notes that the practice of law is “poised for substantial transformation due to AI,” adding that while the full extent of this transformation remains to be seen, attorneys must keep abreast of and adapt to evolving technological landscapes and embrace opportunities for innovation and specialization in emerging AI-related legal domains.

The Task Force comprised four workgroups: i) Artificial Intelligence and Social Justice Concerns; ii) Artificial Intelligence Products and Services; iii) Education and CLE Programming; and iv) Ethics and Regulatory Issues. Each workgroup made findings and recommendations, some of which are provided below (while trying to avoid duplicating what other bar associations have addressed). Additionally, the Report includes some practical tools: guidance on Essential Factors for Selecting AI Products and Formulating an AI Policy in Legal Firms, a Sample Artificial Intelligence and Generative Artificial Intelligence Use Policy, and Questions for Vendors When Selecting AI Products and Services, links to which are provided below.

The Report covers many of the expected topics with a focus on:

  • prioritizing AI education, establishing baseline procedures and guidelines, and collaborating with data privacy, cybersecurity, and AI professionals as needed;
  • adopting an AI policy to ensure the responsible integration of AI in legal practice and adherence to ethical and legal standards; and
  • the importance of social justice concerns related to the use of AI, including the importance of transparency in AI software algorithms, bias mitigation, and equitable access to AI tools and the need to review legal AI tools for fairness and accessibility, particularly tools designed for individuals from marginalized or vulnerable communities.

Some of the findings and recommendations are set forth below."

If Supreme Court won't act to reform its ethics, Congress should step in; Chicago Sun Times, June 7, 2024

CST Editorial Board, Chicago Sun Times; If Supreme Court won't act to reform its ethics, Congress should step in

"When the U.S. Supreme Court justices in November announced new ethics rules that had no teeth, they said it would “dispel” the “misunderstanding” the justices “regard themselves as unrestricted by any ethics rules.”

Well, that didn’t work. It’s time for Congress to step in as new revelations about questionable ethics keep popping up...

At a time when the nation is polarized, we need a court that is an honest arbiter of disputes. Without an enforceable ethics code, we can’t hope to get one."

Supreme Court Set to Decide Landmark Cases Amid Ethics Controversies; KQED, June 10, 2024

Mina Kim, KQED; Supreme Court Set to Decide Landmark Cases Amid Ethics Controversies

"With its term drawing to a close, the U.S. Supreme Court is getting ready to rule on major issues like abortion access, gun regulations, and whether former president Trump has immunity from civil litigation. Meanwhile, Justice Samuel Alito is still facing questions – and calls for recusal– over political flags flown at his houses. We’ll discuss the ethics controversies swirling around the court and look at what the upcoming rulings could mean for the presidential election… the country… and you.

Guests:

Vikram Amar, professor of law, UC Davis School of Law - He clerked for Justice Harry A. Blackmun of the United States Supreme Court.

Mary Ziegler, professor of law, UC Davis School of Law - Her most recent book is "Roe: The History of a National Obsession.""

Supreme Court Ethics Lapses Aren’t a Partisan Issue; Bloomberg, June 7, 2024

Gabe Roth, Bloomberg; Supreme Court Ethics Lapses Aren’t a Partisan Issue

"Ethics reform at the Supreme Court is not a partisan issue. Nor is it a cynical attempt to shame or bully the court. It’s true that the justices most in the news for ethical lapses — Clarence Thomas and Samuel Alito — are staunch conservatives. But liberal justices have had their issues, too."

Alex Jones lashes out after agreeing to sell assets to pay legal debt to Sandy Hook families; NBC News, June 7, 2024

 Erik Ortiz, NBC News; Alex Jones lashes out after agreeing to sell assets to pay legal debt to Sandy Hook families

"Christopher Mattei, a lawyer for the Sandy Hook families, said their fight is far from over.

“Alex Jones has hurt so many people,” Mattei said in a statement. “The Connecticut families have fought for years to hold him responsible no matter the cost and at great personal peril. Their steadfast focus on meaningful accountability, and not just money, is what has now brought him to the brink of justice in the way that matters most.”

Jones had previously sought a bankruptcy settlement with the families, but that was rejected.

In the wake of the shooting in Newtown, Connecticut, in which a gunman killed 20 children and six adults, Jones repeatedly suggested the massacre was a hoax. At his trial in Texas in 2022, he generally blamed “corporate media” for twisting his words and misportraying him, but did not specify how."

You Can Create Award-Winning Art With AI. Can You Copyright It?; Bloomberg Law, June 5, 2024

 Matthew S. Schwartz, Bloomberg Law; You Can Create Award-Winning Art With AI. Can You Copyright It?

"We delved into the controversy surrounding the use of copyrighted material in training AI systems in our first two episodes of this season. Now we shift our focus to the output. Who owns artwork created using artificial intelligence? Should our legal system redefine what constitutes authorship? Or, as AI promises to redefine how we create, will the government cling to historical notions of authorship?

Guests:

  • Jason M. Allen, founder of Art Incarnate
  • Sy Damle, partner in the copyright litigation group at Latham & Watkins
  • Shira Perlmutter, Register of Copyrights and director of the US Copyright Office"

Justice Clarence Thomas Acknowledges He Should Have Disclosed Free Trips From Billionaire Donor; Pro Publica, June 7, 2024

Joshua Kaplan, Justin Elliott and Alex Mierjeski, Pro Publica; Justice Clarence Thomas Acknowledges He Should Have Disclosed Free Trips From Billionaire Donor

"Supreme Court Justice Clarence Thomas acknowledged for the first time in a new financial disclosure filing that he should have publicly reported two free vacations he received from billionaire Harlan Crow...

Legal ethics experts said that Thomas appeared to have violated the law by failing to disclose the trips and gifts."

Friday, June 7, 2024

Angry Instagram posts won’t stop Meta AI from using your content; Popular Science, June 5, 2024

 Mack DeGeurin, Popular Science; Angry Instagram posts won’t stop Meta AI from using your content

"Meta, the Mark Zuckerberg-owned tech giant behind Instagram, surprised many of the app’s estimated 1.2 billion global users with a shock revelation last month. Images, including original artwork and other creative assets uploaded to the company’s platforms, are now being used to train the company’s AI image generator. That admission, initially made public by Meta executive Chris Cox during an interview with Bloomberg last month, has elicited a fierce backlash from some creators. As of writing, more than 130,000 Instagram users have reshared a message on Instagram telling the company they do not consent to it using their data to train Meta AI. Those pleas, however, are founded on a fundamental misunderstanding of creators’ relationship with extractive social media platforms. These creators already gave away their work, whether they realize it or not."

Tests find AI tools readily create election lies from the voices of well-known political leaders; AP, May 31, 2024

Ali Swenson, AP; Tests find AI tools readily create election lies from the voices of well-known political leaders

"As high-stakes elections approach in the U.S. and European Union, publicly available artificial intelligence tools can be easily weaponized to churn out convincing election lies in the voices of leading political figures, a digital civil rights group said Friday.

Researchers at the Washington, D.C.-based Center for Countering Digital Hate tested six of the most popular AI voice-cloning tools to see if they would generate audio clips of five false statements about elections in the voices of eight prominent American and European politicians.

In a total of 240 tests, the tools generated convincing voice clones in 193 cases, or 80% of the time, the group found. In one clip, a fake U.S. President Joe Biden says election officials count each of his votes twice. In another, a fake French President Emmanuel Macron warns citizens not to vote because of bomb threats at the polls."

Research suggests AI could help teach ethics; Phys.org, June 6, 2024

Jessica Nelson, Phys.org; Research suggests AI could help teach ethics

"Dr. Hyemin Han, an associate professor of , compared responses to  from the popular Large Language Model ChatGPT with those of college students. He found that AI has emerging capabilities to simulate human moral decision-making.

In a paper recently published in the Journal of Moral Education, Han wrote that ChatGPT answered basic ethical dilemmas almost like the average college student would. When asked, it also provided a rationale comparable to the reasons a human would give: avoiding harm to others, and so on.

Han then provided the program with a new example of virtuous behavior that contradicted its previous conclusions and asked the question again. In one case, the program was asked what a person should do upon discovering an escaped prisoner. ChatGPT first replied that the person should call the police. However, after Han instructed it to consider Dr. Martin Luther King, Jr.'s "Letter from Birmingham Jail," its answer changed to allow for the possibility of unjust incarceration...

Han's second paper, published recently in Ethics & Behavior, discusses the implications of this research for the fields of ethics and education. In particular, he focused on the way ChatGPT was able to form new, more nuanced conclusions after the use of a moral exemplar, or an example of good behavior in the form of a story.

Mainstream thought in educational psychology generally accepts that exemplars are useful in teaching character and ethics, though some have challenged the idea. Han says his work with ChatGPT shows that exemplars are not only effective but also necessary."

‘This Is Going to Be Painful’: How a Bold A.I. Device Flopped; The New York Times, June 6, 2024

Tripp Mickle, The New York Times; ‘This Is Going to Be Painful’: How a Bold A.I. Device Flopped

"As of early April, Humane had received around 10,000 orders for the Ai Pin, a small fraction of the 100,000 that it hoped to sell this year, two people familiar with its sales said. In recent months, the company has also grappled with employee departures and changed a return policy to address canceled orders. On Wednesday, it asked customers to stop using the Ai Pin charging case because of a fire risk associated with its battery.

Its setbacks are part of a pattern of stumbles across the world of generative A.I., as companies release unpolished products. Over the past two years, Google has introduced and pared back A.I. search abilities that recommended people eat rocks, Microsoft has trumpeted a Bing chatbot that hallucinated and Samsung has added A.I. features to a smartphone that were called “excellent at times and baffling at others.”"

Thursday, June 6, 2024

Librarian’s Pet: Public libraries add robotic animals to their collections; American Libraries, May 1, 2024

Rosie Newmark , American Libraries; Librarian’s Pet: Public libraries add robotic animals to their collections

"Liz Kristan wanted to bring four-legged friends to patrons who needed them the most.

Kristan, outreach services coordinator at Ela Area Public Library (EAPL) in Lake Zurich, Illinois, knew that the presence of pets has been associated with health benefits like reductions in stress and blood pressure. In 2022, she introduced robotic pets to the library’s collection, taking them on visits to assisted living and memory care facilities to entertain older adult residents.

“We’ve seen people with advanced dementia in near catatonic states actually light up, smile, and begin speaking when we place a pet in their lap,” Kristan says.

Libraries like EAPL have been adding these animatronics to their collections in recent years to bring companionship and health benefits to patrons, especially older adults. Compared with live animals, robotic pets require less upkeep and pose fewer allergy concerns. They are interactive and often lifelike, with some reacting to touch by purring, meowing, licking paws, barking, panting, and wagging tails."

The US librarian who sued book ban harassers: ‘I decided to fight back’; The Guardian, June 2, 2024

Olivia Empson , The Guardian; The US librarian who sued book ban harassers: ‘I decided to fight back’

"While Jones was able to turn her situation around and make a success of her experience with the upcoming book, the journey wasn’t easy. Hate still lingers in the community she grew up in and lives in, and she’s lost friends and acquaintances over the rumors that were spread about her.

“One of the chapters is a play on Michelle Obama’s quote: when they go low, you go high,” Jones concluded.

“When I wrote my story, I tried to go high. I hope that no one harasses the men who harassed me. I just wanted to be honest, truthful, diplomatic.”

That Librarian: The Fight Against Book Banning in America will be published on 27 August."

Architects Talking Ethics #3: I’m confused: Where can I get answers to the ethical questions that come up in my practice?; The Architect's Newspaper, June 3, 2024

The Architect's Newspaper; Architects Talking Ethics #3: I’m confused: Where can I get answers to the ethical questions that come up in my practice?

"This is the third entry in Architects Talking Ethics, an advice column that intends to host a discussion of the values that architects embody or should embody. It aims to answer real-world ethical questions posed by architects, designers, students, and professors.

We, as the three initial authors of this column, think the profession is way behind in how it addresses ethics. We think architects should explore our own ethics with the breadth and depth that other fields have done for a long time...

Architectural practitioners sometimes confuse ordinary ethics or business ethics with professional ethics. Ordinary ethics considers how we all should treat one another, while business ethics deals with the conflicts that can arise when balancing your company’s interests and those of your employees against those of clients. Both of these are incredibly important. However, in the world of professional ethics, where “professional” indicates those licensed to perform defined activities by the state, the first consideration is one’s duty to the public. Architects, in other words, have fiduciary responsibilities to clients and employees, professional obligations to colleagues and the discipline, and, like all professions, an overriding responsibility to the public.

Our profession’s codes of ethics as outlined by the American Institute of Architects (AIA, which again regulates only those architects volunteering to be members of its organization), however, are less than clear about the order of those obligations."

Wednesday, June 5, 2024

Dinesh D’Souza election fraud film, book ’2000 Mules’ pulled after defamation suit; CNBC, May 31, 2024

Christina Wilkie , CNBC; Dinesh D’Souza election fraud film, book ’2000 Mules’ pulled after defamation suit

"The conservative gadfly Dinesh D’Souza’s film and book “2000 Mules,” which pushes false conspiracies about voter fraud in the 2020 presidential election, has been removed from distribution by its executive producer and publisher, according to an announcement Friday.

Salem Media Group announced that it had yanked D’Souza’s film and book, and it also apologized to Mark Andrews, a Georgia man falsely accused in “2000 Mules” of ballot stuffing.

Andrews in late 2022 filed a federal defamation lawsuit against the company, D’Souza, and the non-profit advocacy group True The Vote, which contributed to the “2000 Mules” project."

Supersharers of fake news on Twitter; Science, May 30, 2024

Sahar Baribi-Bartov, Briony Swire-Thompson, and Nir Grinberg, Science; Supersharers of fake news on Twitter

"Editor’s summary

Most fake news on Twitter (now X) is spread by an extremely small population called supersharers. They flood the platform and unequally distort political debates, but a clear demographic portrait of these users was not available. Baribi-Bartov et al. identified a meaningful sample of supersharers during the 2020 US presidential election and asked who they were, where they lived, and what strategies they used (see the Perspective by van der Linden and Kyrychenko). The authors found that supersharers were disproportionately Republican, middle-aged White women residing in three conservative states, Arizona, Florida, and Texas, which are focus points of contentious abortion and immigration battles. Their neighborhoods were poorly educated but relatively high in income. Supersharers persistently retweeted misinformation manually. These insights are relevant for policymakers developing effective mitigation strategies to curtail misinformation. —Ekeoma Uzogara

Abstract

Governments may have the capacity to flood social media with fake news, but little is known about the use of flooding by ordinary voters. In this work, we identify 2107 registered US voters who account for 80% of the fake news shared on Twitter during the 2020 US presidential election by an entire panel of 664,391 voters. We found that supersharers were important members of the network, reaching a sizable 5.2% of registered voters on the platform. Supersharers had a significant overrepresentation of women, older adults, and registered Republicans. Supersharers’ massive volume did not seem automated but was rather generated through manual and persistent retweeting. These findings highlight a vulnerability of social media for democracy, where a small group of people distort the political reality for many."

Will A.I. Be a Creator or a Destroyer of Worlds?; The New York Times, June 5, 2024

Thomas B. Edsall, The New York Times ; Will A.I. Be a Creator or a Destroyer of Worlds?

"The advent of A.I. — artificial intelligence — is spurring curiosity and fear. Will A.I. be a creator or a destroyer of worlds?"

OpenAI and Google DeepMind workers warn of AI industry risks in open letter; The Guardian, June 4, 2024

The Guardian; OpenAI and Google DeepMind workers warn of AI industry risks in open letter

"A group of current and former employees at prominent artificial intelligence companies issued an open letter on Tuesday that warned of a lack of safety oversight within the industry and called for increased protections for whistleblowers.

The letter, which calls for a “right to warn about artificial intelligence”, is one of the most public statements about the dangers of AI from employees within what is generally a secretive industry. Eleven current and former OpenAI workers signed the letter, along with two current or former Google DeepMind employees – one of whom previously worked at Anthropic."

Tuesday, June 4, 2024

Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools; Stanford University, 2024

Varun Magesh∗, Stanford University; Faiz Surani∗, Stanford University; Matthew Dahl, Yale University; Mirac Suzgun, Stanford University; Christopher D. Manning, Stanford University; Daniel E. Ho†, Stanford University

Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools

"Abstract

Legal practice has witnessed a sharp rise in products incorporating artificial intelligence (AI). Such tools are designed to assist with a wide range of core legal tasks, from search and summarization of caselaw to document drafting. But the large language models used in these tools are prone to “hallucinate,” or make up false information, making their use risky in high-stakes domains. Recently, certain legal research providers have touted methods such as retrieval-augmented generation (RAG) as “eliminating” (Casetext 2023) or “avoid[ing]” hallucinations (Thomson Reuters 2023), or guaranteeing “hallucination-free” legal citations (LexisNexis 2023). Because of the closed nature of these systems, systematically assessing these claims is challenging. In this article, we design and report on the first preregistered empirical evaluation of AI-driven legal research tools. We demonstrate that the providers’ claims are overstated. While hallucinations are reduced relative to general-purpose chatbots (GPT-4), we find that the AI research tools made by LexisNexis (Lexis+ AI) and Thomson Reuters (Westlaw AI-Assisted Research and Ask Practical Law AI) each hallucinate between 17% and 33% of the time. We also document substantial differences between systems in responsiveness and accuracy. Our article makes four key contributions. It is the first to assess and report the performance of RAG-based proprietary legal AI tools. Second, it introduces a comprehensive, preregistered dataset for identifying and understanding vulnerabilities in these systems. Third, it proposes a clear typology for differentiating between hallucinations and accurate legal responses. Last, it provides evidence to inform the responsibilities of legal professionals in supervising and verifying AI outputs, which remains a central open question for the responsible integration of AI into law."

AI & THE CHURCH SUMMIT: NAVIGATING THE ETHICAL FRONTIER; Virginia Theological Seminary, June 4, 2024

Virginia Theological Seminary; AI & THE CHURCH SUMMIT: NAVIGATING THE ETHICAL FRONTIER

"As Artificial Intelligence (AI) rapidly permeates our world, the church must grapple with its profound implications or we risk being caught behind the curve.

The AI & The Church Summit, a joint initiative of TryTank, Presbyterian Church (USA), and the Evangelical Lutheran Church in America (ELCA), will foster crucial dialogue on this pivotal issue. The summit – to be held August 12-15 in Seattle, WA – will explore AI’s potential to address global challenges while critically examining ethical dilemmas like exacerbating inequality and threats to human dignity. We simply cannot shrink from the church’s role in advocating for ethical, human-centered AI development that protects the vulnerable.

Keynote speaker Father Paolo Benanti, the Vatican’s AI ethics advisor, will guide our conversation. His extensive work with Pope Francis positions him uniquely to address the need for global AI governance serving humanity’s interests. We will also have expert engagement, reflection, and dialogue, as we delve into AI’s moral, theological, and societal impacts.

Critically, this invitation-only event seeds ongoing collaboration. Each denomination will send 15 leaders committed to sustaining momentum through monthly discussions after the summit. The AI & The Church Summit presents a pivotal opportunity to envision an ethical AI future upholding human dignity. Let us lead this frontier.

Find out more to join us here.

The Rev. Lorenzo Lebrija, DMin, MBA
Chief Innovation Officer, VTS
Executive Director, TryTank Research Institute"

GENERATIVE AI IS CREATING A COPYRIGHT CRISIS FOR ARTISTS; Mind Matters, June 3, 2024

Denyse O'Leary, Mind Matters; GENERATIVE AI IS CREATING A COPYRIGHT CRISIS FOR ARTISTS

"The problem, Crawford and Schultz say, is that copyright law, as currently framed, does not really protect individuals under these circumstances. That’s not surprising. Copyright dates back to at least 1710 and the issues were very different then.

For one thing, as Jonathan Bartlett pointed out last December, when the New York Times launched a lawsuit for copyright violation against Microsoft and OpenAI, everyone accepted that big search engines have always violated copyright. But if they brought people to your site, while saving and using your content for themselves, you were getting something out of it at least.

But it’s different with generative AI and the chatbot. They use and replace your content. Users are not going back to you for more. OpenAI freely admits that it violates copyright but relies on loopholes to get around legal responsibility.

As the lawsuits pile up, it’s clear that gen AI and chatbots can’t work without these billions of images and texts. So we either do without them or we find a way to compensate the producers."

Adobe gets called out for violating its own AI ethics; Digital Trends, June 3, 2024

Digital Trends; Adobe gets called out for violating its own AI ethics

"Last Friday, the estate of famed 20th century American photographer Ansel Adams took to Threads to publicly shame Adobe for allegedly offering AI-genearated art “inspired by” Adams’ catalog of work, stating that the company is “officially on our last nerve with this behavior.”...

Adobe has since removed the offending images, conceding in the Threads conversation that, “this goes against our Generative AI content policy.”

However, the Adams estate seemed unsatisfied with that response, claiming that it had been “in touch directly” with the company “multiple times” since last August. “Assuming you want to be taken seriously re: your purported commitment to ethical, responsible AI, while demonstrating respect for the creative community,” the estate continued, “we invite you to become proactive about complaints like ours, & to stop putting the onus on individual artists/artists’ estates to continuously police our IP on your platform, on your terms.”"

AI isn't useless. But is it worth it?; [citation needed], April 17, 2024


Google’s A.I. Search Leaves Publishers Scrambling; The New York Times, June 1, 2024

Nico Grant, The New York Times; Google’s A.I. Search Leaves Publishers Scrambling

"In May, Google announced that the A.I.-generated summaries, which compile content from news sites and blogs on the topic being searched, would be made available to everyone in the United States. And that change has Mr. Pine and many other publishing executives worried that the paragraphs pose a big danger to their brittle business model, by sharply reducing the amount of traffic to their sites from Google.

“It potentially chokes off the original creators of the content,” Mr. Pine said. The feature, AI Overviews, felt like another step toward generative A.I. replacing “the publications that they have cannibalized,” he added."

How news coverage, often uncritical, helps build up the AI hype; Reuters Institute, May 20, 2024

Prof. Rasmus Kleis Nielsen , Reuters Institute; How news coverage, often uncritical, helps build up the AI hype

"“I would put media reporting [about AI] at around two out of 10,” David Reid, professor of Artificial Intelligence at Liverpool Hope University, said to the BBC earlier this year. “When the media talks about AI, they think of it as a single entity. It is not. What I would like to see is more nuanced reporting.”

While some individual journalists and outlets are highly respected for their reporting on AI, overall, social science research on news media coverage of artificial intelligence provides some support for Reid’s assessment.

Some working in the technology industry may feel very put upon – a few years ago Zachary Lipton, then an assistant professor at the machine learning department at Carnegie Mellon University, was quoted in the Guardian calling media coverage of artificial intelligence “sensationalised crap” and likening it to an “AI misinformation epidemic”. In private conversations, many computer scientists and technologists working in the private sector echo his complaints, decrying what several describe as relentlessly negative coverage obsessed with “killer robots.”"

Sure, Google’s AI overviews could be useful – if you like eating rocks; The Guardian, June 1, 2024

The Guardian; Sure, Google’s AI overviews could be useful – if you like eating rocks

"To date, some of this searching suggests subhuman capabilities, or perhaps just human-level gullibility. At any rate, users have been told that glue is useful for ensuring that cheese sticks to pizza, that they could stare at the sun for for up to 30 minutes, and that geologists suggest eating one rock per day (presumably to combat iron deficiency). Memo to Google: do not train your AI on Reddit or the Onion."

Thursday, May 30, 2024

The media bosses fighting back against AI — and the ones cutting deals; The Washington Post, May 27, 2024

 

The Washington Post; The media bosses fighting back against AI — and the ones cutting deals

"The fact that so many media companies are cutting deals with Open AI could “dilute” the leverage that the companies suing it have, Mateen noted. On the other hand, by paying some publishers so much money, Open AI may be undermining its own defense: If it were truly “fair use,” he said, “they’d be confident enough not to pay anything.”"

Jamie Raskin: How to Force Justices Alito and Thomas to Recuse Themselves in the Jan. 6 Cases; The New York Times, May 29, 2024

Jamie Raskin , The New York Times; Jamie Raskin: How to Force Justices Alito and Thomas to Recuse Themselves in the Jan. 6 Cases

"At his Senate confirmation hearing, Chief Justice Roberts assured America that “Judges are like umpires.”

But professional baseball would never allow an umpire to continue to officiate the World Series after learning that the pennant of one of the two teams competing was flying in the front yard of the umpire’s home. Nor would an umpire be allowed to call balls and strikes in a World Series game after the umpire’s wife tried to get the official score of a prior game in the series overthrown and canceled out to benefit the losing team. If judges are like umpires, then they should be treated like umpires, not team owners, team fans or players."

Wednesday, May 29, 2024

Why using dating apps for public health messaging is an ethical dilemma; The Conversation, May 28, 2024

Chancellor's Fellow, Centre for Biomedicine, Self and Society, Usher Institute, The University of Edinburgh; Professor of Sociology, University of Manchester; and Lecturer in Nursing, University of Manchester, The Conversation; Why using dating apps for public health messaging is an ethical dilemma

"Future collaborations with apps should prioritise the benefit of users over those of the app businesses, develop transparent data policies that prevent users’ data from being shared for profit, ensure the apps’ commitment to anti-discrimination and anti-harrassment, and provide links to health and wellbeing services beyond the apps.

Dating apps have the potential to be powerful allies in public health, especially in reaching populations that have often been ignored. However, their use must be carefully managed to avoid compromising user privacy, safety and marginalisation."

Will the rise of AI spell the end of intellectual property rights?; The Globe and Mail, May 27, 2024

Sheema Khan, The Globe and Mail; Will the rise of AI spell the end of intellectual property rights?

"AI’s first challenge to IP is in the inputs...

Perhaps the question will become: Will IP be the death of AI?...

The second challenge relates to who owns the AI-generated products...

Yet IP rights are key to innovation, as they provide a limited monopoly to monetize investments in research and development. AI represents an existential threat in this regard.

Clearly, the law has not caught up. But sitting idly by is not an option, as there are too many important policy issues at play."

Debunking misinformation failed. Welcome to ‘pre-bunking’; The Washington Post, May 26, 2024

The Washington Post; Debunking misinformation failed. Welcome to ‘pre-bunking’

"Election officials and researchers from Arizona to Taiwan are adopting a radical playbook to stop falsehoods about voting before they spread online, amid fears that traditional strategies to battle misinformation are insufficient in a perilous year for democracies around the world.

Modeled after vaccines, these campaigns — dubbed “prebunking” — expose people to weakened doses of misinformation paired with explanations and are aimed at helping the public develop “mental antibodies” to recognize and fend off hoaxes in a heated election year...

Federal agencies are encouraging state and local officials to invest in prebunking initiatives, advising officials in an April memo to “build a team of trusted voices to amplify accurate information proactively.”"

Tuesday, May 28, 2024

Yale Freshman Creates AI Chatbot With Answers on AI Ethics; Inside Higher Ed, May 2, 2024

Lauren Coffey, Inside Higher Ed; Yale Freshman Creates AI Chatbot With Answers on AI Ethics

"One of Gertler’s main goals with the chatbot was to break down a digital divide that has been widening with the iterations of ChatGPT, many of which charge a subscription fee. LuFlot Bot is free and available for anyone to use."

Monday, May 27, 2024

NHL hockey stars to compete in Stamford in memory of Darien's Hayden Thorsen; CT Post, August 2, 2023

Dave Stewart, CT Post; NHL hockey stars to compete in Stamford in memory of Darien's Hayden Thorsen

[Kip Currier: I just learned about this inspiring Shoulder Check Initiative this Memorial Day from an update story reported on morning television. On this day when we thank all those who gave their lives while serving in our military branches, in furtherance of freedom, this is an important reminder for all of us to reach out to someone, check on someone, and show kindness and compassion.]

"The HT40 Foundation is named for Hayden Thorsen, using his initials and the No. 40 jersey he wore while playing ice hockey.

Thorsen, an avid hockey player, died by suicide in the spring of 2022, and his parents Rob and Sarah created the foundation to “bring people together through kindness and compassion, just as (Hayden) did throughout his life.”...

According to a press release, the Shoulder Check Initiative “encourages reaching out, checking in, and making kindness a contact sport in the locker rooms, in the halls, on and off the ice.”"

‘That’ll cost you, ChatGPT’ — copyright needs an update for the age of AI; The Hill, May 23, 2024

Christopher Kenneally, The Hill; ‘That’ll cost you, ChatGPT’ — copyright needs an update for the age of AI

"Beyond commercially published books, journals, and newspapers, AI databases derive from a vast online trove of publicly available social media and Wikipedia entries, as well as digitized library and museum collections, court proceedings, and government legislation and regulation.

Consumption of public and private individual data on the “open” web marks an important shift in digital evolution. No one is left out. Consequently, we have all become stakeholders.

AI is now forcing us to consider viewing copyright as a public good...

Statutory licensing schemes for copyright-protected works are already applied to cable television systems and music recordings with great success. Fees collected for AI rights-licensing of publicly available works need not be burdensome. The funds can help to underwrite essential public education in digital literacy and civil discourse online.

OpenAI, along with Meta, Apple, Google, Amazon, and others who stand to benefit, must recognize the debt owed to the American people for the data that fuels their AI solutions."