Monday, September 9, 2024

Justice Kagan elaborates on potential Supreme Court ethics code enforcement; NBC News, September 9, 2024

Lawrence Hurley, NBC News; Justice Kagan elaborates on potential Supreme Court ethics code enforcement

"Justice Elena Kagan on Monday outlined how the Supreme Court's new ethics code could be improved if it had an enforcement mechanism, rejecting claims that the idea she has proposed would be ineffective.

The court, under pressure over claims of ethics violations mostly aimed at conservative justices Clarence Thomas and Samuel Alito, issued a new code last year but it was immediately criticized for lacking any way of enforcing it.

Kagan, a member of the court's liberal minority, has called for creating a panel of lower court judges appointed by Chief Justice John Roberts to handle allegations made against the justices."

New Resource Examines Data Governance in the Age of AI; Government Technology, September 6, 2024

 News Staff, Government Technology; New Resource Examines Data Governance in the Age of AI

"A new guide for policymakers, “Data Policy in the Age of AI: A Guide to Using Data for Artificial Intelligence,” aims to educate leaders on responsible AI data use.

The question of how to best regulate artificial intelligence (AI) is one lawmakers are still addressing, as they are trying to balance innovation with risk mitigation. Meanwhile, state and local governments are creating their own regulations in the absence of a comprehensive federal policy.

The new white paper, from the Data Foundation, a nonprofit supporting data-informed public policy, is intended to be a comprehensive resource. It outlines three key pieces of effective data policy: high-quality data, effective governance principles and technical capacity."

Internet Archive Court Loss Leaves Higher Ed in Gray Area; Inside Higher Ed, September 9, 2024

  Lauren Coffey, Inside Higher Ed; Internet Archive Court Loss Leaves Higher Ed in Gray Area

"Pandemic-era library programs that helped students access books online could be potentially threatened by an appeals court ruling last week. 

Libraries across the country, from Carnegie Mellon University to the University of California system, turned to what’s known as a digital or controlled lending program in 2020, which gave students a way to borrow books that weren’t otherwise available. Those programs are small in scale and largely experimental but part of a broader shift in modernizing the university library.

But the appeals court ruling could upend those programs...

Still, librarians at colleges and elsewhere, along with other experts, feared that the long-running legal fight between the Internet Archive and leading publishers could imperil the ability of libraries to own and preserve books, among other ramifications."

Teacher's Post About Students Using ChatGPT Sparks Debate; Business Insider, September 8, 2024

Jaures Yip, Business Insider; Teacher's Post About Students Using ChatGPT Sparks Debate

"Fritts said that technology addiction has affected students' general agency when interacting with information."

Sunday, September 8, 2024

Forget the Yacht. The Best Travel Is on Foot, Through Wilderness.; The New York Times, September 7, 2024

The New York Times; Forget the Yacht. The Best Travel Is on Foot, Through Wilderness.

"It was in this same period that I developed a passion for backpacking, and I suspect that I unconsciously prescribed myself wilderness therapy to heal.

It works. I see wild spaces as a place to think, to escape cellphones and editors (sorry, boss!), to connect with loved ones, to be dazzled and humbled by the vastness of space and the slowness of geologic time, to escape class divides, to purge ourselves of frustrations and political toxicity, to bare our souls, to be recharged.

Thank God for America’s best idea."

Lifeline or distraction? Georgia shooting reignites debate over cellphones in schools; NBC News, September 7, 2024

 Elizabeth Chuck, NBC News; Lifeline or distraction? Georgia shooting reignites debate over cellphones in schools

"There is clear research showing the detriments of smartphones, particularly to adolescents. The phones and their addictive social media platforms have been tied to poor sleep, cyberbullying and unhealthy body esteem in young people. A 2023 study by technology and media research group Common Sense Media found that adolescents are overwhelmed with notifications from their smartphones — receiving a median of 237 alerts daily, with about a quarter arriving during the school day.

At least 13 states have passed laws or put policies in place that ban or restrict students’ use of cellphones in schools statewide, or recommend that local districts enact their own restrictions, according to Education Week. Individual school districts, both large and small, have also implemented policies that limit or prohibit cellphone use, with a growing number relying on magnetically sealed pouches to lock up the devices so students aren’t tempted to check them when they should be learning.

Being able to get in touch if there’s an emergency is the top reason parents say they want their children to have access to phones at school, according to a National Parents Union survey conducted in February of more than 1,500 parents of K-12 public school students.

Yet fatal shootings in schools are exceedingly rare. And while parents may want to reach their children should there be shots fired or another emergency, phones “can actually detract from the safety of students,” according to Ken Trump, president of National School Safety and Security Services, a consulting firm that focuses on school security and emergency preparedness training."

R.O. Kwon on writing in the age of A.I.; The Ink, September 8, 2024

 The Ink; R.O. Kwon on writing in the age of A.I.

"What exactly are A.I.s doing when they churn out words? When they ghostwrite our notes and letters, summarize our news, and maybe take over the jobs of those writing the news in the first place — are they writing?

The novelist R.O. Kwon — author of two novels, The Incendiaries and Exhibit, both deeply concerned not just with language but with the way language is rooted in the very human experiences of faith and love and sex and being in a body — doesn’t think so. For Kwon, a writer is a person who writes — that is, a human in a body, who struggles with language. And, in her typically crystal-clear fashion, she made this point the other day in a series of posts."

Saturday, September 7, 2024

Revisiting: Libraries and the Contested Terrain of “Neutrality”; The Scholarly Kitchen, September 3, 2024

The Scholarly Kitchen; Revisiting: Libraries and the Contested Terrain of “Neutrality”

"The question of whether libraries can – or even should – be “neutral” has been a difficult and controversial one for years. It is now becoming even more so as book bans become more prevalent and command more public attention. Recently, the political Right has increased its efforts to get books on various topics pulled from library shelves, especially in public and school libraries; the Left, on the other hand, generally engages in book banning from a different angle, trying to stop books from being publishedcalling for them not to be sold, and retroactively censoring books already published. In this politically charged context, the American Library Association offers an incoherent advocacy message, on one hand asserting that libraries must provide “an impartial environment” that offers “information spanning the spectrum of knowledge and opinions,” while on the other decrying “neutrality rhetoric” in librarianship for its role in “emboldening and encouraging white supremacy and fascism.”

A fundamental question remains insufficiently examined: in the context of libraries, what does “neutral” actually mean? Are there ways in which libraries can and should be “neutral,” and ways in which they should not? This post from several years ago examined these questions – ones that seem even more urgent in the current moment than they did then."

Trump’s other legal problem: Copyright infringement claims; The Washington Post, September 7, 2024

The Washington Post; Trump’s other legal problem: Copyright infringement claims

"Music industry experts and copyright law attorneys say the cases, as well as Trump’s decision to continue playing certain songs despite artists’ requests that he desist, underscore the complex legalities of copyright infringement in today’s digital, streaming and licensing era — and could set an important precedent on the of use of popular music in political campaigns."

Council of Europe opens first ever global treaty on AI for signature; Council of Europe, September 5, 2024

 Council of Europe; Council of Europe opens first ever global treaty on AI for signature

"The Council of Europe Framework Convention on artificial intelligence and human rights, democracy, and the rule of law (CETS No. 225) was opened for signature during a conference of Council of Europe Ministers of Justice in Vilnius. It is the first-ever international legally binding treaty aimed at ensuring that the use of AI systems is fully consistent with human rights, democracy and the rule of law.

The Framework Convention was signed by Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, the United Kingdom as well as Israel, the United States of America and the European Union...

The treaty provides a legal framework covering the entire lifecycle of AI systems. It promotes AI progress and innovation, while managing the risks it may pose to human rights, democracy and the rule of law. To stand the test of time, it is technology-neutral."

Friday, September 6, 2024

Only the First Amendment Can Protect Students, Campuses and Speech; The New York Times, September 6, 2024

Cass R. Sunstein, The New York Times; Only the First Amendment Can Protect Students, Campuses and Speech

"To answer those questions, we should turn to the First Amendment of the U.S. Constitution, which states that Congress “shall make no law … abridging the freedom of speech.” Those words provide the right foundation for forging a new consensus about the scope and importance of free speech in higher education.

As a rallying cry, that consensus should endorse the greatest sentence ever written by a Supreme Court justice. In 1943, Justice Robert H. Jackson wrote, “Compulsory unification of opinion achieves only the unanimity of the graveyard.”

It is true that private colleges and universities, unlike public ones, are not subject to the First Amendment, which applies only to public officials and institutions. If Harvard, Stanford, Baylor, Vanderbilt, Pomona or Colby wants to restrict speech, the First Amendment does not stand in their way.

Still, most institutions of higher learning, large or small, would do well to commit themselves to following the First Amendment of their own accord.

First Amendment doctrine, developed over the centuries, provides excellent guidance."

A Shocking Amount of the Web is Machine Translated: Insights from Multi-Way Parallelism; arXiv, 2024

Brian Thompson, Mehak Preet Dhaliwal, Peter Frisch, Tobias Domhan, and Marcello Federico (AWS AI Labs; UC Santa Barbara; Amazon), arXiv; A Shocking Amount of the Web is Machine Translated: Insights from Multi-Way Parallelism

"Abstract

We show that content on the web is often translated into many languages, and the low quality of these multi-way translations indicates they were likely created using Machine Translation (MT). Multi-way parallel, machine generated content not only dominates the translations in lower resource languages; it also constitutes a large fraction of the total web content in those languages. We also find evidence of a selection bias in the type of content which is translated into many languages, consistent with low quality English content being translated en masse into many lower resource languages, via MT. Our work raises serious concerns about training models such as multilingual large language models on both monolingual and bilingual data scraped from the web."

AN ETHICS EXPERT’S PERSPECTIVE ON AI AND HIGHER ED; Pace University, September 3, 2024

 Johnni Medina, Pace University; AN ETHICS EXPERT’S PERSPECTIVE ON AI AND HIGHER ED

"As a scholar deeply immersed in both technology and philosophy, James Brusseau, PhD, has spent years unraveling the complex ethics of artificial intelligence (AI).

“As it happens, I was a physics major in college, so I've had an abiding interest in technology, but I finally decided to study philosophy,” Brusseau explains. “And I did not see much of an intersection between the scientific and my interest in philosophy until all of a sudden artificial intelligence landed in our midst with questions that are very philosophical.”

Some of these questions are heavy, with Brusseau positing an example, “If a machine acts just like a person, does it become a person?” But AI’s implications extend far beyond the theoretical, especially when it comes to the impact on education, learning, and career outcomes. What role does AI play in higher education? Is it a tool that enhances learning, or does it risk undermining it? And how do universities prepare students for an AI-driven world?

In a conversation that spans these topics, Brusseau shares his insights on the place of AI in higher education, its benefits, its risks, and what the future holds...

I think that if AI alone is the professor, then the knowledge students get will be imperfect in the same vaguely definable way that AI art is imperfect."

Thursday, September 5, 2024

Intellectual property and data privacy: the hidden risks of AI; Nature, September 4, 2024

Amanda Heidt, Nature; Intellectual property and data privacy: the hidden risks of AI

"Timothée Poisot, a computational ecologist at the University of Montreal in Canada, has made a successful career out of studying the world’s biodiversity. A guiding principle for his research is that it must be useful, Poisot says, as he hopes it will be later this year, when it joins other work being considered at the 16th Conference of the Parties (COP16) to the United Nations Convention on Biological Diversity in Cali, Colombia. “Every piece of science we produce that is looked at by policymakers and stakeholders is both exciting and a little terrifying, since there are real stakes to it,” he says.

But Poisot worries that artificial intelligence (AI) will interfere with the relationship between science and policy in the future. Chatbots such as Microsoft’s Bing, Google’s Gemini and ChatGPT, made by tech firm OpenAI in San Francisco, California, were trained using a corpus of data scraped from the Internet — which probably includes Poisot’s work. But because chatbots don’t often cite the original content in their outputs, authors are stripped of the ability to understand how their work is used and to check the credibility of the AI’s statements. It seems, Poisot says, that unvetted claims produced by chatbots are likely to make their way into consequential meetings such as COP16, where they risk drowning out solid science.

“There’s an expectation that the research and synthesis is being done transparently, but if we start outsourcing those processes to an AI, there’s no way to know who did what and where the information is coming from and who should be credited,” he says...

The technology underlying genAI, which was first developed at public institutions in the 1960s, has now been taken over by private companies, which usually have no incentive to prioritize transparency or open access. As a result, the inner mechanics of genAI chatbots are almost always a black box — a series of algorithms that aren’t fully understood, even by their creators — and attribution of sources is often scrubbed from the output. This makes it nearly impossible to know exactly what has gone into a model’s answer to a prompt. Organizations such as OpenAI have so far asked users to ensure that outputs used in other work do not violate laws, including intellectual-property and copyright regulations, or divulge sensitive information, such as a person’s location, gender, age, ethnicity or contact information. Studies have shown that genAI tools might do both [1,2]."

The Internet Archive Loses Its Appeal of a Major Copyright Case; Wired, September 4, 2024

 Kate Knibbs, Wired; The Internet Archive Loses Its Appeal of a Major Copyright Case

"THE INTERNET ARCHIVE has lost a major legal battle—in a decision that could have a significant impact on the future of internet history. Today, the US Court of Appeals for the Second Circuit ruled against the long-running digital archive, upholding an earlier ruling in Hachette v. Internet Archive that found that one of the Internet Archive’s book digitization projects violated copyright law.

Notably, the appeals court’s ruling rejects the Internet Archive’s argument that its lending practices were shielded by the fair use doctrine, which permits for copyright infringement in certain circumstances, calling it “unpersuasive.”"

US conservative influencers say they are ‘victims’ of Russian disinformation campaign; The Guardian, September 4, 2024

Guardian staff and agencies, The Guardian; US conservative influencers say they are ‘victims’ of Russian disinformation campaign

"A number of high-profile, conservative influencers in the US have said they are “victims” of an alleged Russian disinformation campaign, after the Biden administration accused Moscow of carrying out a sustained campaign to influence the outcome of November’s presidential elections.

Tim Pool, Dave Rubin and Benny Johnson published statements on Wednesday evening addressing allegations that a US content creation company they were associated with had been provided with nearly $10m from Russian state media employees to publish videos with messages in favour of Moscow’s interests and agenda, including over the war in Ukraine...

“The company never disclosed to the influencers – or to their millions of followers – its ties to [Russian state media company] RT and the Russian government,” US attorney general Merrick Garland said. His department described Wednesday’s indictment as the most sweeping effort yet to push back against what it says are Russian attempts to spread disinformation ahead of the November presidential election."

DOJ outlines Russia’s disinformation campaigns designed to interfere with U.S. election; PBS News, September 4, 2024

Kyle Midura, PBS News; DOJ outlines Russia’s disinformation campaigns designed to interfere with U.S. election

"Attorney General Merrick Garland outlined sophisticated disinformation campaigns undertaken by Russia to interfere with the U.S. presidential election. He warned that Russia is pumping lies into the U.S. via fake news outlets and real social media influencers. Amna Nawaz discussed more with National Security Council spokesman John Kirby."

Wednesday, September 4, 2024

Yuval Noah Harari: What Happens When the Bots Compete for Your Love?; The New York Times, September 4, 2024

Yuval Noah Harari (historian and author of the forthcoming book “Nexus: A Brief History of Information Networks From the Stone Age to AI,” from which this essay is adapted), The New York Times; Yuval Noah Harari: What Happens When the Bots Compete for Your Love?

"Democracy is a conversation. Its function and survival depend on the available information technology...

Moreover, while not all of us will consciously choose to enter a relationship with an A.I., we might find ourselves conducting online discussions about climate change or abortion rights with entities that we think are humans but are actually bots. When we engage in a political debate with a bot impersonating a human, we lose twice. First, it is pointless for us to waste time in trying to change the opinions of a propaganda bot, which is just not open to persuasion. Second, the more we talk with the bot, the more we disclose about ourselves, making it easier for the bot to hone its arguments and sway our views.

Information technology has always been a double-edged sword. The invention of writing spread knowledge, but it also led to the formation of centralized authoritarian empires. After Gutenberg introduced print to Europe, the first best sellers were inflammatory religious tracts and witch-hunting manuals. As for the telegraph and radio, they made possible the rise not only of modern democracy but also of modern totalitarianism.

Faced with a new generation of bots that can masquerade as humans and mass-produce intimacy, democracies should protect themselves by banning counterfeit humans — for example, social media bots that pretend to be human users. Before the rise of A.I., it was impossible to create fake humans, so nobody bothered to outlaw doing so. Soon the world will be flooded with fake humans.

A.I.s are welcome to join many conversations — in the classroom, the clinic and elsewhere — provided they identify themselves as A.I.s. But if a bot pretends to be human, it should be banned. If tech giants and libertarians complain that such measures violate freedom of speech, they should be reminded that freedom of speech is a human right that should be reserved for humans, not bots."

One urgent reason the justices need a credible ethics code? Ginni Thomas.; The Washington Post, September 4, 2024

The Washington Post; One urgent reason the justices need a credible ethics code? Ginni Thomas.

"The impropriety here is multilayered — and staggering.

Ginni Thomas is a political activist by vocation, and, as I’ve written before, that’s her prerogative. It’s her constitutional right. And justices’ spouses have every right to pursue separate careers, including in politics and advocacy. “We have our own separate careers and our own ideas and opinions too,” she told the Washington Free Beacon. “Clarence doesn’t discuss his work with me, and I don’t involve him in my work.”

That was in 2022, when reports surfaced about Ginni Thomas’s attendance at the Jan. 6 rally on the Ellipse and her broader involvement in the “Stop the Steal” movement, including pressing state legislators to set aside the election results.

Now Ginni Thomas isn’t just lobbying to “Stop the Steal” — she’s trying to Stop the Reform of her husband’s own institution. So much for separate careers. Ginni Thomas’s own behavior around the 2020 election, and Clarence Thomas’s conduct in accepting, and failing to disclose, thousands of dollars’ worth of gifts from wealthy conservatives, helped trigger the push for court reform in the first place. Now, we know, Ginni Thomas is a behind-the-scenes player seeking to frustrate any changes — and a grateful (“THANK YOU SO, SO, SO MUCH”) beneficiary of First Liberty’s efforts on the Thomases’ behalf."

Ginni Thomas Privately Praised Group Working Against Supreme Court Reform: “Thank You So, So, So Much”; ProPublica, September 4, 2024

Andy Kroll (ProPublica) and Nick Surgey (Documented), ProPublica; Ginni Thomas Privately Praised Group Working Against Supreme Court Reform: “Thank You So, So, So Much”

[Kip Currier: Res Ipsa Loquitur. Shame on those U.S. Supreme Court Justices (not all) who publicly and behind the scenes are working to oppose an enforceable ethics code; the same kinds of enforceable ethics codes that all federal judges are bound by. 

Ethics has a long history and an important role in preparing law school students to join the legal profession. Upon graduating from law school and passing the bar exams of the states and courts where they seek to be licensed and practice law, new lawyers become officers of the court, sworn to uphold the rule of law. The importance of ethics is borne out by the requirement that licensed attorneys complete a specific number of ethics courses each year as part of the Continuing Legal Education (CLE) credits set forth by the states and courts that grant their licenses and ensure their compliance with professional development requisites. Moreover, the Model Rules of Professional Conduct specify the ethics-centered requirements that govern and guide the conduct of attorneys. Adherence to these kinds of requirements and standards promotes attorney competence, which in turn protects clients. Additionally, ethics-based standards promote transparency, accountability, integrity, and, perhaps most importantly for supporting and advancing functioning judiciaries, public trust in the inherent fairness of lawyers, judges, and the U.S. legal system.

Why should the nine Justices of the U.S. Supreme Court, all of whom currently have lifetime appointments, be held to a different, less stringent standard than all other members of the federal judiciary?

Why, too, should these individuals who have been entrusted to uphold the highest standards of the U.S. judicial and legal systems be held to lesser standards than any of the enforceable ethics code-bound officers of the court who make arguments before them in the august courtroom of the U.S. Supreme Court?

None of the reasons and rationales previously put forth by Chief Justice John Roberts, Justice Samuel Alito, and Justice Clarence Thomas passes muster in the face of the ongoing damage being done to public trust in the integrity and impartiality of the U.S. Supreme Court through the appearances of impropriety of many of the Justices.

It is high time for the U.S. Supreme Court to proactively institute an ethics code that is enforceable and that can work to restore public faith in this vital branch of the U.S. government.

We as Americans also owe a debt of gratitude to members of the free and independent presses whose reporting serves as an essential check on our three branches of government, and whose work, at its best, can serve to promote the informed citizenries that are integral to healthy, responsive democracies.]


[Excerpt from ProPublica]

"Thomas wrote that First Liberty’s opposition to court-reform proposals gave a boost to certain judges. According to Shackelford, Thomas wrote in all caps: “YOU GUYS HAVE FILLED THE SAILS OF MANY JUDGES. CAN I JUST TELL YOU, THANK YOU SO, SO, SO MUCH.”...

On the same call, Shackelford attacked Justice Elena Kagan as “treasonous” and “disloyal” after she endorsed an enforcement mechanism for the court’s newly adopted ethics code in a recent public appearance. He said that such an ethics code would “destroy the independence of the judiciary.” (This past weekend, Justice Ketanji Brown Jackson said she too was open to an enforceable ethics code for the Supreme Court.)

After the call, First Liberty sent a recording of the 45-minute conversation to some of its supporters. ProPublica and Documented obtained that recording."

Censorship Throughout the Centuries; American Libraries, September 3, 2024

 Cara S. Bertram, American Libraries; Censorship Throughout the Centuries

"American Libraries travels through time to outline our country’s history of censorship—and the library workers, authors, and advocates who have defended the right to read."

From School Librarian to Activist: ‘The Hate Level and the Vitriol Is Unreal’; The New York Times, September 3, 2024

The New York Times; From School Librarian to Activist: ‘The Hate Level and the Vitriol Is Unreal’

"She was alarmed when Lunsford, in a recent YouTube video, put her home address on the screen when he pulled up the business filings for Louisiana Citizens Against Censorship to show that she was listed as one of the organization’s directors.

“It’s Banana Jones, the girl who sued us because she couldn’t take the heat,” he says, while displaying her address. Asked about the video in an interview, Lunsford said he had “no idea” that it was her home address and was simply sharing a public business record.

Still, there have also been hopeful moments for Jones. Former students have reached out with messages of support. Authors whose books have been banned have praised her memoir, including Nikki Grimes, Jodi Picoult and Ellen Oh.

And Jones has been hearing from other librarians from around the country. Some are grateful that she has taken a stand. Others share distressing stories about harassment that they’ve faced for opposing book bans."

Trump campaign ordered to stop using classic R&B song; Associated Press via Politico, September 3, 2024

Associated Press via Politico; Trump campaign ordered to stop using classic R&B song

"A federal judge in Atlanta ruled Tuesday that Donald Trump and his campaign must stop using the song “Hold On, I’m Comin’” while the family of one of the song’s co-writers pursues a lawsuit against the former president over its use.

The estate of Isaac Hayes Jr. filed a lawsuit last month alleging that Trump, his campaign and several of his allies had infringed its copyright and should pay damages. After a hearing on the estate’s request for an emergency preliminary injunction, U.S. District Judge Thomas Thrash ruled that Trump must stop using the song, but he denied a request to force the campaign to take down any existing videos that include the song."

NEH Awards $2.72 Million to Create Research Centers Examining the Cultural Implications of Artificial Intelligence; National Endowment for the Humanities (NEH), August 27, 2024

Press Release, National Endowment for the Humanities (NEH); NEH Awards $2.72 Million to Create Research Centers Examining the Cultural Implications of Artificial Intelligence

"The National Endowment for the Humanities (NEH) today announced grant awards totaling $2.72 million for five colleges and universities to create new humanities-led research centers that will serve as hubs for interdisciplinary collaborative research on the human and social impact of artificial intelligence (AI) technologies.

As part of NEH’s third and final round of grant awards for FY2024, the Endowment made its inaugural awards under the new Humanities Research Centers on Artificial Intelligence program, which aims to foster a more holistic understanding of AI in the modern world by creating scholarship and learning centers across the country that spearhead research exploring the societal, ethical, and legal implications of AI. 

Institutions in California, New York, North Carolina, Oklahoma, and Virginia were awarded NEH grants to establish the first AI research centers and pilot two or more collaborative research projects that examine AI through a multidisciplinary humanities lens. 

The new Humanities Research Centers on Artificial Intelligence grant program is part of NEH’s agencywide Humanities Perspectives on Artificial Intelligence initiative, which supports humanities projects that explore the impacts of AI-related technologies on truth, trust, and democracy; safety and security; and privacy, civil rights, and civil liberties. The initiative responds to President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, which establishes new standards for AI safety and security, protects Americans’ privacy, and advances equity and civil rights."

Sunday, September 1, 2024

QUESTIONS FOR CONSIDERATION ON AI & THE COMMONS; Creative Commons, July 24, 2024

Anna Tumadóttir, Creative Commons; QUESTIONS FOR CONSIDERATION ON AI & THE COMMONS

"The intersection of AI, copyright, creativity, and the commons has been a focal point of conversations within our community for the past couple of years. We’ve hosted intimate roundtables, organized workshops at conferences, and run public events, digging into the challenging topics of credit, consent, compensation, transparency, and beyond. All the while, we’ve been asking ourselves:  what can we do to foster a vibrant and healthy commons in the face of rapid technological development? And how can we ensure that creators and knowledge-producing communities still have agency?...

We recognize that there is a perceived tension between openness and creator choice. Namely, if we give creators choice over how to manage their works in the face of generative AI, we may run the risk of shrinking the commons. To potentially overcome, or at least better understand, the effect of generative AI on the commons, we believe that finding a way for creators to indicate “no, unless…” would be positive for the commons. Our consultations over the course of the last two years have confirmed that:

  • Folks want more choice over how their work is used.
  • If they have no choice, they might not share their work at all (under a CC license or strict copyright).

If these views are as wide ranging as we perceive, we feel it is imperative that we explore an intervention, and bring far more nuance into how this ecosystem works.

Generative AI is here to stay, and we’d like to do what we can to ensure it benefits the public interest. We are well-positioned with the experience, expertise, and tools to investigate the potential of preference signals.

Our starting point is to identify what types of preference signals might be useful. How do these vary or overlap in the cultural heritage, journalism, research, and education sectors? How do needs vary by region? We’ll also explore exactly how we might structure a preference signal framework so it’s useful and respected, asking, too: does it have to be legally enforceable, or is the power of social norms enough?

Research matters. It takes time, effort, and most importantly, people. We’ll need help as we do this. We’re seeking support from funders to move this work forward. We also look forward to continuing to engage our community in this process. More to come soon."

ASU workgroup addresses ethical questions about the use of AI in higher ed; ASU News, August 27, 2024

ASU News; ASU workgroup addresses ethical questions about the use of AI in higher ed

"As artificial intelligence becomes more ubiquitous in our everyday lives, the AI and Ethics Workgroup at Arizona State University's Lincoln Center for Applied Ethics is working to establish ethical guidelines and frameworks for the deployment of AI technologies. 

Composed of experts from a variety of fields, the workgroup is dedicated to navigating the complex ethical challenges arising from rapid advancements in AI. The group published their first white paper earlier this month, which focuses on the use of AI tools in higher education.

The workgroup’s co-chairs are Sarah Florini, the associate director of the Lincoln Center and an associate professor of film and media studies, and Nicholas Proferes, an associate professor for ASU’s School of Social and Behavioral Sciences.

Florini and Proferes shared some insights into their workgroup’s research process and their publication, “AI and Higher Education: Questions and Projections.”...

Q: What can educators and institutions start doing today to instill more responsible, ethical adoption of AI-related technologies?

Florini: Get involved and participate in the conversations surrounding these technologies. We all need to be part of the efforts to shape how they will be integrated into colleges and universities. The terrain around AI is moving quickly, and there are many stakeholders with diverging opinions about the best course of action. We all need to be developing a critical understanding of these technologies and contributing to the process of determining how they align with our values.

Proferes: Have conversations with your community. Not just your peers, but with every stakeholder who might be impacted. Create spaces for that dialogue. Map out what the collective core values you want to achieve with the technology are, and then develop policies and procedures that can help support that.

But also, be willing to revisit these conversations. Very often with tech development, ethics is treated as a checkbox, rather than an ongoing process of reflection and consideration. Living wisely with technology requires phronesis, or practical wisdom. That’s something that’s gained over time through practice. Not a one-and-done deal."

A bill to protect performers from unauthorized AI heads to California governor; NPR, August 30, 2024

NPR; A bill to protect performers from unauthorized AI heads to California governor

"Other proposed guardrails

In addition to AB2602, the performers’ union is backing California bill AB 1836 to protect deceased performers’ intellectual property from digital replicas.

On a national level, entertainment industry stakeholders, from SAG-AFTRA to The Recording Academy and the MPA, and others are supporting the “NO FAKES Act” (the Nurture Originals, Foster Art, and Keep Entertainment Safe Act) introduced in the Senate. That law would make creating an unauthorized digital replica of any American illegal.

Around the country, legislators have proposed hundreds of laws to regulate AI more generally. For example, California lawmakers recently passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), which regulates AI models such as ChatGPT.

“It's vital and it's incredibly urgent because legislation, as we know, takes time, but technology matures exponentially. So we're going to be constantly fighting the battle to stay ahead of this,” said voice performer Zeke Alton, a member of SAG-AFTRA’s negotiating committee. “If we don't get to know what's real and what's fake, that is starting to pick away at the foundations of democracy.”

Alton says in the fight for AI protections of digital doubles, Hollywood performers have been the canary in the coal mine. “We are having this open conversation in the public about generative AI and using it to replace the worker instead of having the worker use it as a tool for their own efficiency,” he said. “But it's coming for every other industry, every other worker. That's how big this sea change in technology is. So what happens here is going to reverberate.”"