Wednesday, June 5, 2024

Dinesh D’Souza election fraud film, book ’2000 Mules’ pulled after defamation suit; CNBC, May 31, 2024

Christina Wilkie, CNBC; Dinesh D’Souza election fraud film, book ’2000 Mules’ pulled after defamation suit

"The conservative gadfly Dinesh D’Souza’s film and book “2000 Mules,” which push false conspiracies about voter fraud in the 2020 presidential election, have been removed from distribution by their executive producer and publisher, according to an announcement Friday.

Salem Media Group announced that it had yanked D’Souza’s film and book and also apologized to Mark Andrews, a Georgia man falsely accused in “2000 Mules” of ballot stuffing.

Andrews in late 2022 filed a federal defamation lawsuit against the company, D’Souza, and the non-profit advocacy group True The Vote, which contributed to the “2000 Mules” project."

Supersharers of fake news on Twitter; Science, May 30, 2024

SAHAR BARIBI-BARTOV, BRIONY SWIRE-THOMPSON, AND NIR GRINBERG, Science; Supersharers of fake news on Twitter

"Editor’s summary

Most fake news on Twitter (now X) is spread by an extremely small population called supersharers. They flood the platform and unequally distort political debates, but a clear demographic portrait of these users was not available. Baribi-Bartov et al. identified a meaningful sample of supersharers during the 2020 US presidential election and asked who they were, where they lived, and what strategies they used (see the Perspective by van der Linden and Kyrychenko). The authors found that supersharers were disproportionately Republican, middle-aged White women residing in three conservative states, Arizona, Florida, and Texas, which are focus points of contentious abortion and immigration battles. Their neighborhoods were poorly educated but relatively high in income. Supersharers persistently retweeted misinformation manually. These insights are relevant for policymakers developing effective mitigation strategies to curtail misinformation. —Ekeoma Uzogara

Abstract

Governments may have the capacity to flood social media with fake news, but little is known about the use of flooding by ordinary voters. In this work, we identify 2107 registered US voters who account for 80% of the fake news shared on Twitter during the 2020 US presidential election by an entire panel of 664,391 voters. We found that supersharers were important members of the network, reaching a sizable 5.2% of registered voters on the platform. Supersharers had a significant overrepresentation of women, older adults, and registered Republicans. Supersharers’ massive volume did not seem automated but was rather generated through manual and persistent retweeting. These findings highlight a vulnerability of social media for democracy, where a small group of people distort the political reality for many."

Will A.I. Be a Creator or a Destroyer of Worlds?; The New York Times, June 5, 2024

Thomas B. Edsall, The New York Times ; Will A.I. Be a Creator or a Destroyer of Worlds?

"The advent of A.I. — artificial intelligence — is spurring curiosity and fear. Will A.I. be a creator or a destroyer of worlds?"

OpenAI and Google DeepMind workers warn of AI industry risks in open letter; The Guardian, June 4, 2024

The Guardian; OpenAI and Google DeepMind workers warn of AI industry risks in open letter

"A group of current and former employees at prominent artificial intelligence companies issued an open letter on Tuesday that warned of a lack of safety oversight within the industry and called for increased protections for whistleblowers.

The letter, which calls for a “right to warn about artificial intelligence”, is one of the most public statements about the dangers of AI from employees within what is generally a secretive industry. Eleven current and former OpenAI workers signed the letter, along with two current or former Google DeepMind employees – one of whom previously worked at Anthropic."

Tuesday, June 4, 2024

Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools; Stanford University, 2024

Varun Magesh∗, Stanford University; Faiz Surani∗, Stanford University; Matthew Dahl, Yale University; Mirac Suzgun, Stanford University; Christopher D. Manning, Stanford University; Daniel E. Ho†, Stanford University

Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools

"Abstract

Legal practice has witnessed a sharp rise in products incorporating artificial intelligence (AI). Such tools are designed to assist with a wide range of core legal tasks, from search and summarization of caselaw to document drafting. But the large language models used in these tools are prone to “hallucinate,” or make up false information, making their use risky in high-stakes domains. Recently, certain legal research providers have touted methods such as retrieval-augmented generation (RAG) as “eliminating” (Casetext, 2023) or “avoid[ing]” hallucinations (Thomson Reuters, 2023), or guaranteeing “hallucination-free” legal citations (LexisNexis, 2023). Because of the closed nature of these systems, systematically assessing these claims is challenging. In this article, we design and report on the first pre-registered empirical evaluation of AI-driven legal research tools. We demonstrate that the providers’ claims are overstated. While hallucinations are reduced relative to general-purpose chatbots (GPT-4), we find that the AI research tools made by LexisNexis (Lexis+ AI) and Thomson Reuters (Westlaw AI-Assisted Research and Ask Practical Law AI) each hallucinate between 17% and 33% of the time. We also document substantial differences between systems in responsiveness and accuracy. Our article makes four key contributions. It is the first to assess and report the performance of RAG-based proprietary legal AI tools. Second, it introduces a comprehensive, preregistered dataset for identifying and understanding vulnerabilities in these systems. Third, it proposes a clear typology for differentiating between hallucinations and accurate legal responses. Last, it provides evidence to inform the responsibilities of legal professionals in supervising and verifying AI outputs, which remains a central open question for the responsible integration of AI into law."

AI & THE CHURCH SUMMIT: NAVIGATING THE ETHICAL FRONTIER; Virginia Theological Seminary, June 4, 2024

Virginia Theological Seminary; AI & THE CHURCH SUMMIT: NAVIGATING THE ETHICAL FRONTIER

"As Artificial Intelligence (AI) rapidly permeates our world, the church must grapple with its profound implications or we risk being caught behind the curve.

The AI & The Church Summit, a joint initiative of TryTank, Presbyterian Church (USA), and the Evangelical Lutheran Church in America (ELCA), will foster crucial dialogue on this pivotal issue. The summit – to be held August 12–15 in Seattle, WA – will explore AI’s potential to address global challenges while critically examining ethical dilemmas like exacerbating inequality and threats to human dignity. We simply cannot shrink from the church’s role in advocating for ethical, human-centered AI development that protects the vulnerable.

Keynote speaker Father Paolo Benanti, the Vatican’s AI ethics advisor, will guide our conversation. His extensive work with Pope Francis positions him uniquely to address the need for global AI governance serving humanity’s interests. We will also have expert engagement, reflection, and dialogue, as we delve into AI’s moral, theological, and societal impacts.

Critically, this invitation-only event seeds ongoing collaboration. Each denomination will send 15 leaders committed to sustaining momentum through monthly discussions after the summit. The AI & The Church Summit presents a pivotal opportunity to envision an ethical AI future upholding human dignity. Let us lead this frontier.

Find out more to join us here.

The Rev. Lorenzo Lebrija, DMin, MBA
Chief Innovation Officer, VTS
Executive Director, TryTank Research Institute"

GENERATIVE AI IS CREATING A COPYRIGHT CRISIS FOR ARTISTS; Mind Matters, June 3, 2024

 DENYSE O'LEARY, Mind Matters; GENERATIVE AI IS CREATING A COPYRIGHT CRISIS FOR ARTISTS

"The problem, Crawford and Schultz say, is that copyright law, as currently framed, does not really protect individuals under these circumstances. That’s not surprising. Copyright dates back to at least 1710 and the issues were very different then.

For one thing, as Jonathan Bartlett pointed out last December, when the New York Times launched a lawsuit for copyright violation against Microsoft and OpenAI, everyone accepted that big search engines have always violated copyright. But if they brought people to your site, while saving and using your content for themselves, you were getting something out of it at least.

But it’s different with generative AI and the chatbot. They use and replace your content. Users are not going back to you for more. OpenAI freely admits that it violates copyright but relies on loopholes to get around legal responsibility.

As the lawsuits pile up, it’s clear that gen AI and chatbots can’t work without these billions of images and texts. So we either do without them or we find a way to compensate the producers."

Adobe gets called out for violating its own AI ethics; Digital Trends, June 3, 2024

Digital Trends; Adobe gets called out for violating its own AI ethics

"Last Friday, the estate of famed 20th century American photographer Ansel Adams took to Threads to publicly shame Adobe for allegedly offering AI-generated art “inspired by” Adams’ catalog of work, stating that the company is “officially on our last nerve with this behavior.”...

Adobe has since removed the offending images, conceding in the Threads conversation that, “this goes against our Generative AI content policy.”

However, the Adams estate seemed unsatisfied with that response, claiming that it had been “in touch directly” with the company “multiple times” since last August. “Assuming you want to be taken seriously re: your purported commitment to ethical, responsible AI, while demonstrating respect for the creative community,” the estate continued, “we invite you to become proactive about complaints like ours, & to stop putting the onus on individual artists/artists’ estates to continuously police our IP on your platform, on your terms.”"

AI isn't useless. But is it worth it?; [citation needed], April 17, 2024


Google’s A.I. Search Leaves Publishers Scrambling; The New York Times, June 1, 2024

Nico Grant, The New York Times; Google’s A.I. Search Leaves Publishers Scrambling

"In May, Google announced that the A.I.-generated summaries, which compile content from news sites and blogs on the topic being searched, would be made available to everyone in the United States. And that change has Mr. Pine and many other publishing executives worried that the paragraphs pose a big danger to their brittle business model, by sharply reducing the amount of traffic to their sites from Google.

“It potentially chokes off the original creators of the content,” Mr. Pine said. The feature, AI Overviews, felt like another step toward generative A.I. replacing “the publications that they have cannibalized,” he added."

How news coverage, often uncritical, helps build up the AI hype; Reuters Institute, May 20, 2024

Prof. Rasmus Kleis Nielsen, Reuters Institute; How news coverage, often uncritical, helps build up the AI hype

"“I would put media reporting [about AI] at around two out of 10,” David Reid, professor of Artificial Intelligence at Liverpool Hope University, said to the BBC earlier this year. “When the media talks about AI, they think of it as a single entity. It is not. What I would like to see is more nuanced reporting.”

While some individual journalists and outlets are highly respected for their reporting on AI, overall, social science research on news media coverage of artificial intelligence provides some support for Reid’s assessment.

Some working in the technology industry may feel very put upon – a few years ago Zachary Lipton, then an assistant professor at the machine learning department at Carnegie Mellon University, was quoted in the Guardian calling media coverage of artificial intelligence “sensationalised crap” and likening it to an “AI misinformation epidemic”. In private conversations, many computer scientists and technologists working in the private sector echo his complaints, decrying what several describe as relentlessly negative coverage obsessed with “killer robots.”"

Sure, Google’s AI overviews could be useful – if you like eating rocks; The Guardian, June 1, 2024

The Guardian; Sure, Google’s AI overviews could be useful – if you like eating rocks

"To date, some of this searching suggests subhuman capabilities, or perhaps just human-level gullibility. At any rate, users have been told that glue is useful for ensuring that cheese sticks to pizza, that they could stare at the sun for up to 30 minutes, and that geologists suggest eating one rock per day (presumably to combat iron deficiency). Memo to Google: do not train your AI on Reddit or the Onion."

Thursday, May 30, 2024

The media bosses fighting back against AI — and the ones cutting deals; The Washington Post, May 27, 2024

The Washington Post; The media bosses fighting back against AI — and the ones cutting deals

"The fact that so many media companies are cutting deals with OpenAI could “dilute” the leverage that the companies suing it have, Mateen noted. On the other hand, by paying some publishers so much money, OpenAI may be undermining its own defense: If it were truly “fair use,” he said, “they’d be confident enough not to pay anything.”"

Jamie Raskin: How to Force Justices Alito and Thomas to Recuse Themselves in the Jan. 6 Cases; The New York Times, May 29, 2024

Jamie Raskin , The New York Times; Jamie Raskin: How to Force Justices Alito and Thomas to Recuse Themselves in the Jan. 6 Cases

"At his Senate confirmation hearing, Chief Justice Roberts assured America that “Judges are like umpires.”

But professional baseball would never allow an umpire to continue to officiate the World Series after learning that the pennant of one of the two teams competing was flying in the front yard of the umpire’s home. Nor would an umpire be allowed to call balls and strikes in a World Series game after the umpire’s wife tried to get the official score of a prior game in the series overthrown and canceled out to benefit the losing team. If judges are like umpires, then they should be treated like umpires, not team owners, team fans or players."

Wednesday, May 29, 2024

Why using dating apps for public health messaging is an ethical dilemma; The Conversation, May 28, 2024

Chancellor’s Fellow, Centre for Biomedicine, Self and Society, Usher Institute, Deanery of Molecular, Genetic and Population Health Sciences, The University of Edinburgh; Professor of Sociology, University of Manchester; Lecturer in Nursing, University of Manchester, The Conversation; Why using dating apps for public health messaging is an ethical dilemma

"Future collaborations with apps should prioritise the benefit of users over those of the app businesses, develop transparent data policies that prevent users’ data from being shared for profit, ensure the apps’ commitment to anti-discrimination and anti-harassment, and provide links to health and wellbeing services beyond the apps.

Dating apps have the potential to be powerful allies in public health, especially in reaching populations that have often been ignored. However, their use must be carefully managed to avoid compromising user privacy, safety and marginalisation."

Will the rise of AI spell the end of intellectual property rights?; The Globe and Mail, May 27, 2024

 SHEEMA KHAN , The Globe and Mail; Will the rise of AI spell the end of intellectual property rights?

"AI’s first challenge to IP is in the inputs...

Perhaps the question will become: Will IP be the death of AI?...

The second challenge relates to who owns the AI-generated products...

Yet IP rights are key to innovation, as they provide a limited monopoly to monetize investments in research and development. AI represents an existential threat in this regard.

Clearly, the law has not caught up. But sitting idly by is not an option, as there are too many important policy issues at play."

Debunking misinformation failed. Welcome to ‘pre-bunking’; The Washington Post, May 26, 2024

The Washington Post; Debunking misinformation failed. Welcome to ‘pre-bunking’

"Election officials and researchers from Arizona to Taiwan are adopting a radical playbook to stop falsehoods about voting before they spread online, amid fears that traditional strategies to battle misinformation are insufficient in a perilous year for democracies around the world.

Modeled after vaccines, these campaigns — dubbed “prebunking” — expose people to weakened doses of misinformation paired with explanations and are aimed at helping the public develop “mental antibodies” to recognize and fend off hoaxes in a heated election year...

Federal agencies are encouraging state and local officials to invest in prebunking initiatives, advising officials in an April memo to “build a team of trusted voices to amplify accurate information proactively.”"

Tuesday, May 28, 2024

Yale Freshman Creates AI Chatbot With Answers on AI Ethics; Inside Higher Ed, May 2, 2024

Lauren Coffey , Inside Higher Ed; Yale Freshman Creates AI Chatbot With Answers on AI Ethics

"One of Gertler’s main goals with the chatbot was to break down a digital divide that has been widening with the iterations of ChatGPT, many of which charge a subscription fee. LuFlot Bot is free and available for anyone to use."

Monday, May 27, 2024

NHL hockey stars to compete in Stamford in memory of Darien's Hayden Thorsen; CT Post, August 2, 2023

Dave Stewart, CT Post ; NHL hockey stars to compete in Stamford in memory of Darien's Hayden Thorsen

[Kip Currier: I just learned about this inspiring Shoulder Check Initiative this Memorial Day from an update story reported on morning television. On this day when we thank all those who gave their lives while serving in our military branches, in furtherance of freedom, this is an important reminder for all of us to reach out to someone, check on someone, and show kindness and compassion.]

"The HT40 Foundation is named for Hayden Thorsen, using his initials and the No. 40 jersey he wore while playing ice hockey.

Thorsen, an avid hockey player, died by suicide in the spring of 2022, and his parents Rob and Sarah created the foundation to “bring people together through kindness and compassion, just as (Hayden) did throughout his life.”...

According to a press release, the Shoulder Check Initiative “encourages reaching out, checking in, and making kindness a contact sport in the locker rooms, in the halls, on and off the ice.”"

‘That’ll cost you, ChatGPT’ — copyright needs an update for the age of AI; The Hill, May 23, 2024

CHRISTOPHER KENNEALLY, The Hill; ‘That’ll cost you, ChatGPT’ — copyright needs an update for the age of AI

"Beyond commercially published books, journals, and newspapers, AI databases derive from a vast online trove of publicly available social media and Wikipedia entries, as well as digitized library and museum collections, court proceedings, and government legislation and regulation.

Consumption of public and private individual data on the “open” web marks an important shift in digital evolution. No one is left out. Consequently, we have all become stakeholders.

AI is now forcing us to consider viewing copyright as a public good...

Statutory licensing schemes for copyright-protected works are already applied to cable television systems and music recordings with great success. Fees collected for AI rights-licensing of publicly available works need not be burdensome. The funds can help to underwrite essential public education in digital literacy and civil discourse online.

OpenAI, along with Meta, Apple, Google, Amazon, and others who stand to benefit, must recognize the debt owed to the American people for the data that fuels their AI solutions."

Saturday, May 25, 2024

Can I Use A.I. to Grade My Students’ Papers?; The New York Times, May 24, 2024

Kwame Anthony Appiah, The New York Times; Can I Use A.I. to Grade My Students’ Papers?

"Can I Use A.I. to Grade My Students’ Papers?

I am a junior-high-school English teacher. In the past school year, there has been a significant increase in students’ cheating on writing assignments by using artificial intelligence. Our department feels that 13-year-old students will only become better writers if they practice and learn from the successes and challenges that come with that.

Recently our department tasked students with writing an argumentative essay, an assignment we supported by breaking down the process into multiple steps. The exercise took several days of class time and homework to complete. All of our students signed a contract agreeing not to use A.I. assistance, and parents promised to support the agreement by monitoring their children when they worked at home. Yet many students still used A.I.

Some of our staff members uploaded their grading rubric into an A.I.-assisted platform, and students uploaded their essays for assessment. The program admittedly has some strengths. Most notable, it gives students writing feedback and the opportunity to edit their work before final submission. The papers are graded within minutes, and the teachers are able to transfer the A.I. grade into their roll book.

I find this to be hypocritical. I spend many hours grading my students’ essays. It’s tedious work, but I feel that it’s my responsibility — if a student makes an effort to complete the task, they should have my undivided attention during the assessment process.

Here’s where I struggle: Should I embrace new technology and use A.I.-assisted grading to save time and my sanity even though I forbid my students from using it? Is it unethical for teachers to ask students not to use A.I. to assist their writing but then allow an A.I. platform to grade their work? — Name Withheld" 

Friday, May 24, 2024

Navigating the Patchwork of AI Laws, Standards, and Guidance; American Bar Association (ABA), May 9, 2024

Emily Maxim Lamm , American Bar Association (ABA); Navigating the Patchwork of AI Laws, Standards, and Guidance

"The opening weeks of 2024 have seen a record number of state legislative proposals seeking to regulate artificial intelligence (AI) across different sectors in the United States...

With this type of rapid-fire start to the 2024 legislative season, the AI legal landscape will likely continue evolving across the board. As a result, organizations today are facing a complex and dizzying web of proposed and existing AI laws, standards, and guidance.

This article aims to provide a cohesive overview of this AI patchwork and to help organizations navigate this increasingly intricate terrain. The focus here will be on the implications of the White House AI Executive Order, existing state and local laws in the United States, the European Union’s AI Act, and, finally, governance standards to help bring these diverse elements together within a framework."

Thursday, May 23, 2024

A UCLA doctor is on a quest to free modern medicine from a Nazi-tainted anatomy book; Los Angeles Times, May 23, 2024

Emily Alpert Reyes, Los Angeles Times ; A UCLA doctor is on a quest to free modern medicine from a Nazi-tainted anatomy book

"So far, Amara Yad has completed two volumes focused on the anatomy of the heart and is enlisting teams at other universities for more. The plan is to draft a freely available, ethically sourced road map to the entire body that eclipses the weathered volumes of watercolors from Pernkopf and honors the Nazis’ victims.

Anatomists have told him, “‘You’re crazy. It’s impossible. How could you ever surpass it?’” Shivkumar said of the Pernkopf atlas in a speech last year before members of the Heart Rhythm Society.

But “can it be beaten? The answer is yes...

Amara Yad is also an act of “moral repair” meant to honor the victims, said Dr. Barbara Natterson-Horowitz, a UCLA cardiologist and evolutionary biologist who helped support the project. The Nazi atlases “were like documents of death. The atlases that Shiv is creating are really living, interactive tools to support life.”

When Shivkumar decided to launch the project, he had been inspired by the words of USC emeritus professor of rheumatology Dr. Richard Panush, who had pushed to set the atlas aside in the library of the New Jersey medical center where he had worked, moving it to a display case that explained its history.

Panush said the old atlas should be preserved only as “a symbol of what we should not do, and how we should not behave, and the kind of people that we cannot respect.”"

US intelligence agencies’ embrace of generative AI is at once wary and urgent; Associated Press, May 23, 2024

FRANK BAJAK , Associated Press; US intelligence agencies’ embrace of generative AI is at once wary and urgent

"The CIA’s inaugural chief technology officer, Nand Mulchandani, thinks that because gen AI models “hallucinate” they are best treated as a “crazy, drunk friend” — capable of great insight and creativity but also bias-prone fibbers. There are also security and privacy issues: adversaries could steal and poison them, and they may contain sensitive personal data that officers aren’t authorized to see.

That’s not stopping the experimentation, though, which is mostly happening in secret. 

An exception: Thousands of analysts across the 18 U.S. intelligence agencies now use a CIA-developed gen AI called Osiris. It runs on unclassified and publicly or commercially available data — what’s known as open-source. It writes annotated summaries and its chatbot function lets analysts go deeper with queries...

Another worry: Ensuring the privacy of “U.S. persons” whose data may be embedded in a large-language model.

“If you speak to any researcher or developer that is training a large-language model, and ask them if it is possible to basically kind of delete one individual piece of information from an LLM and make it forget that -- and have a robust empirical guarantee of that forgetting -- that is not a thing that is possible,” John Beieler, AI lead at the Office of the Director of National Intelligence, said in an interview.

It’s one reason the intelligence community is not in “move-fast-and-break-things” mode on gen AI adoption."

An attorney says she saw her library reading habits reflected in mobile ads. That's not supposed to happen; The Register, May 18, 2024

Thomas Claburn , The Register; An attorney says she saw her library reading habits reflected in mobile ads. That's not supposed to happen

"In December, 2023, University of Illinois Urbana-Champaign information sciences professor Masooda Bashir led a study titled "Patron Privacy Protections in Public Libraries" that was published in The Library Quarterly. The study found that while libraries generally have basic privacy protections, there are often gaps in staff training and in privacy disclosures made available to patrons.

It also found that some libraries rely exclusively on social media for their online presence. "That is very troubling," said Bashir in a statement. "Facebook collects a lot of data – everything that someone might be reading and looking at. That is not a good practice for public libraries.""

When Online Content Disappears; Pew Research Center, May 17, 2024

 Athena Chapekis, Samuel Bestvater, Emma Remy and Gonzalo Rivero, Pew Research Center; When Online Content Disappears

"38% of webpages that existed in 2013 are no longer accessible a decade later...

How we did this

Pew Research Center conducted the analysis to examine how often online content that once existed becomes inaccessible. One part of the study looks at a representative sample of webpages that existed over the past decade to see how many are still accessible today. For this analysis, we collected a sample of pages from the Common Crawl web repository for each year from 2013 to 2023. We then tried to access those pages to see how many still exist.

A second part of the study looks at the links on existing webpages to see how many of those links are still functional. We did this by collecting a large sample of pages from government websites, news websites and the online encyclopedia Wikipedia.

We identified relevant news domains using data from the audience metrics company comScore and relevant government domains (at multiple levels of government) using data from get.gov, the official administrator for the .gov domain. We collected the news and government pages via Common Crawl and the Wikipedia pages from an archive maintained by the Wikimedia Foundation. For each collection, we identified the links on those pages and followed them to their destination to see what share of those links point to sites that are no longer accessible.

A third part of the study looks at how often individual posts on social media sites are deleted or otherwise removed from public view. We did this by collecting a large sample of public tweets on the social media platform X (then known as Twitter) in real time using the Twitter Streaming API. We then tracked the status of those tweets for a period of three months using the Twitter Search API to monitor how many were still publicly available.

Refer to the report methodology for more details."
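Pew’s report methodology has the authoritative details; purely as an illustration of the accessibility check described above, one can probe a sampled URL and classify its HTTP response. This is a minimal sketch, not Pew’s actual code — the `classify_status` cutoff (treating 2xx/3xx as accessible) and the User-Agent string are assumptions.

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def classify_status(code):
    """Treat 2xx/3xx responses as accessible, 4xx/5xx as inaccessible."""
    return "accessible" if 200 <= code < 400 else "inaccessible"

def check_url(url, timeout=10):
    """Fetch one sampled URL and report whether it still resolves to a live page."""
    try:
        req = Request(url, headers={"User-Agent": "link-rot-survey"})
        with urlopen(req, timeout=timeout) as resp:
            return classify_status(resp.status)  # redirects are followed automatically
    except HTTPError as err:
        return classify_status(err.code)         # server answered, but with an error code
    except URLError:
        return "inaccessible"                    # DNS failure, timeout, refused connection
```

Running `check_url` over each page in a sample and tallying the "inaccessible" share would reproduce the shape (though not the rigor) of the 38% figure quoted above.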

New Windows AI feature records everything you’ve done on your PC; Ars Technica, May 20, 2024

Ars Technica; New Windows AI feature records everything you’ve done on your PC

"At a Build conference event on Monday, Microsoft revealed a new AI-powered feature called "Recall" for Copilot+ PCs that will allow Windows 11 users to search and retrieve their past activities on their PC. To make it work, Recall records everything users do on their PC, including activities in apps, communications in live meetings, and websites visited for research. Despite encryption and local storage, the new feature raises privacy concerns for certain Windows users."

Alito Ethics Defense Blown Up by Second Insurrectionist Flag; New York Magazine, May 22, 2024

New York Magazine; Alito Ethics Defense Blown Up by Second Insurrectionist Flag

"Now, the New York Times reports that Alito flew another flag associated with Trump’s insurrection, the “Appeal to Heaven” flag, which was carried by insurrectionists on January 6. The flag was confirmed to have flown over Alito’s beach house in July and September 2023.

Note how the old Alito defenses are totally useless in the face of this new case. It can’t be chalked up to a dispute with a neighbor, unless the same neighbor happens to own property near Alito in two different states. The wife excuse is also threadbare. (Indeed, Alito, who blamed his wife in a response to the first Times story, has no comment in response to the second one.) And the excuse that it was “a heated time in January 2021” obviously does not explain why the Alito home continued to display insurrectionist flags two and a half years later."

Wednesday, May 22, 2024

Are Ethics Taking a Backseat in AI Jobs?; Statista, May 22, 2024

 Anna Fleck, Statista; Are Ethics Taking a Backseat in AI Jobs?

"Data published jointly by the OECD and market analytics platform Lightcast has found that few AI employers are asking for creators and developers of AI to have ethical decision-making skills. The two research teams looked for keywords such as “AI ethics”, “responsible AI” and “ethical AI” in job postings for AI workers across 14 OECD countries, in both English and the official languages spoken in the 14 countries studied. According to Lightcast, out of these, an average of less than two percent of AI job postings listed these skills. However, between 2019 and 2022 the share of job postings mentioning ethics-related keywords increased in the majority of surveyed countries. For example, the figure rose from 0.1 percent to 0.5 percent in the United States over those four years and from 0.1 percent to 0.4 percent in the United Kingdom.

According to Lightcast writer Layla O’Kane, federal agencies in the U.S. are, however, now being encouraged to hire Chief AI Officers to monitor the use of AI technologies following an executive order for the Safe, Secure, and Trustworthy Development and Use Of Artificial Intelligence. O’Kane writes: “While there are currently a very small number of postings for Chief AI Officer jobs across public and private sector, the skills they call for are encouraging: almost all contain at least one mention of ethical considerations in AI.”"

Machine ‘Unlearning’ Helps Generative AI ‘Forget’ Copyright-Protected and Violent Content; UT News, The University of Texas at Austin, May 21, 2024

UT News, The University of Texas at Austin; Machine ‘Unlearning’ Helps Generative AI ‘Forget’ Copyright-Protected and Violent Content

"When people learn things they should not know, getting them to forget that information can be tough. This is also true of rapidly growing artificial intelligence programs that are trained to think as we do, and it has become a problem as they run into challenges based on the use of copyright-protected material and privacy issues.

To respond to this challenge, researchers at The University of Texas at Austin have developed what they believe is the first “machine unlearning” method applied to image-based generative AI. This method offers the ability to look under the hood and actively block and remove any violent images or copyrighted works without losing the rest of the information in the model.

“When you train these models on such massive data sets, you’re bound to include some data that is undesirable,” said Radu Marculescu, a professor in the Cockrell School of Engineering’s Chandra Family Department of Electrical and Computer Engineering and one of the leaders on the project. “Previously, the only way to remove problematic content was to scrap everything, start anew, manually take out all that data and retrain the model. Our approach offers the opportunity to do this without having to retrain the model from scratch.”"

Ethics Panel Cautions Judge in Trump Trial Over Political Donations; The New York Times, May 17, 2024

William K. Rashbaum and Jonah E. Bromwich, The New York Times; Ethics Panel Cautions Judge in Trump Trial Over Political Donations

"A state ethics panel quietly dismissed a complaint last summer against the New York judge presiding over the criminal trial of Donald J. Trump, issuing a warning over small donations the judge had made to groups supporting Democrats, including the campaign of Joseph R. Biden Jr.

The judge, Juan M. Merchan, donated a total of $35 to the groups in 2020, including a $15 donation earmarked for the Biden campaign, and $10 to a group called “Stop Republicans.”

Political contributions of any kind are prohibited under state judicial ethics rules...

In its 2024 annual report, the commission said it was made aware of dozens of New York judges who had violated the rules against political contributions in recent years. Most were modest amounts, the report said, and many appeared to stem from the misperception that the rules only apply to state campaigns. In fact, judges are prohibited from contributing to any campaigns, including for federal office."

Tuesday, May 21, 2024

Alito’s inverted flag makes a mockery of the Supreme Court’s code of ethics; The Hill, May 21, 2024

 CEDRIC MERLIN POWELL, The Hill; Alito’s inverted flag makes a mockery of the Supreme Court’s code of ethics

"This violates the court’s newly minted code of conduct, which remains unenforced because the court regulates itself. Its legitimacy is buttressed by a paper tiger.

The inverted flag on Justice Alito’s flagpole violates nearly all of the court’s ethical rules.

Its first disqualification rule says “A Justice is presumed impartial and has an obligation to sit unless disqualified.” Alito’s impartiality has been shattered. He should be disqualified because he has not avoided “impropriety and the appearance of impropriety in all activities” per Canon 2, and he cannot “perform the duties of office fairly [and] impartially” per Canon 3 because the inverted flag signals his allegiance to a party that has pending matters before the court. 

The partisan and volatile tenor of the inverted flag “endorse[s] a … candidate for public office,” which violates Canon 5 because it trades in the discredited Trump-invented trope of a stolen election. Justice Alito has not refrained from political activity; thus, the independence of the judiciary is called into question, per Canon 1. This is particularly disconcerting because Justice Alito is the third most senior justice on the court...

Every branch of government, including state and lower federal courts, has enforceable and binding codes of conduct that ensure impartiality, fairness and legitimacy. Congress must adopt a binding code of conduct for the Supreme Court. We should right the flag by turning it upward toward our democratic principles."

Saturday, May 18, 2024

Stability AI, Midjourney should face artists' copyright case, judge says; Reuters, May 8, 2024

Reuters; Stability AI, Midjourney should face artists' copyright case, judge says

"A California federal judge said he was inclined to green-light a copyright lawsuit against Stability AI, Midjourney and other companies accused of misusing visual artists' work to train their artificial intelligence-based image generation systems.

U.S. District Judge William Orrick said on Tuesday that the ten artists behind the lawsuit had plausibly argued that Stability, Midjourney, DeviantArt and Runway AI copied and stored their work on company servers and could be liable for using it without permission...

Orrick also said that he was likely to dismiss some of the artists' related claims but allow their allegations that the companies violated their trademark rights and falsely implied that they endorsed the systems.

The case is Andersen v. Stability AI, U.S. District Court for the Northern District of California, No. 3:23-cv-00201."