Issues and developments related to ethics, information, and technologies, examined in the ethics and intellectual property graduate courses I teach at the University of Pittsburgh School of Computing and Information. My Bloomsbury book "Ethics, Information, and Technology" will be published in September 2025. Kip Currier, PhD, JD
Friday, April 5, 2024
Assisted living managers say an algorithm prevented hiring enough staff; The Washington Post, April 1, 2024
Thursday, April 4, 2024
Ethics in an age of disinformation: Free webinar series from the National Press Club Journalism Institute; National Press Club Journalism Institute, April 4, 2024
Press Release, National Press Club Journalism Institute; Ethics in an age of disinformation: Free webinar series from the National Press Club Journalism Institute
"The National Press Club Journalism Institute is pleased to announce a free, four-part webinar series focused on ethics in the age of disinformation. These discussions are geared toward equipping journalists and the public with tools to combat mis- and disinformation efforts aimed at disrupting journalism and democracy.
All of these webinars are free and open to the public and are designed to provide tools and best practices to support ethical, trustworthy journalism."
George Carlin’s Estate Reaches Settlement After A.I. Podcast; The New York Times, April 2, 2024
Christopher Kuo, The New York Times; George Carlin’s Estate Reaches Settlement After A.I. Podcast
"The estate of the comedian George Carlin reached a settlement on Monday with the makers of a podcast who had said they had used artificial intelligence to impersonate Mr. Carlin for a comedy special...
Mr. Carlin’s estate filed the suit in January, saying that Mr. Sasso and Mr. Kultgen, hosts of the podcast “Dudesy,” had infringed on the estate’s copyrights by training an A.I. algorithm on five decades of Mr. Carlin’s work for the special “George Carlin: I’m Glad I’m Dead,” which was posted on YouTube. The lawsuit also said they had illegally used Mr. Carlin’s name and likeness."
Billie Eilish and Nicki Minaj want stop to 'predatory' music AI; BBC, April 2, 2024
Liv McMahon, BBC; Billie Eilish and Nicki Minaj want stop to 'predatory' music AI
"Billie Eilish and Nicki Minaj are among 200 artists calling for the "predatory" use of artificial intelligence (AI) in the music industry to be stopped.
In an open letter also signed by Katy Perry and the estate of Frank Sinatra, they warn AI "will set in motion a race to the bottom" if left unchecked...
Other artists have since spoken out about it, with Sting telling the BBC he believes musicians face "a battle" to defend their work against the rise of songs written by AI.
"The building blocks of music belong to us, to human beings," he said.
But not all musicians oppose developments in or use of AI across the music industry, and electronic artist Grimes and DJ David Guetta are among those backing the use of such AI tools."
Wednesday, April 3, 2024
‘The machine did it coldly’: Israel used AI to identify 37,000 Hamas targets; The Guardian, April 3, 2024
Bethan McKernan in Jerusalem and Harry Davies, The Guardian; ‘The machine did it coldly’: Israel used AI to identify 37,000 Hamas targets
"Israel’s use of powerful AI systems in its war on Hamas has entered uncharted territory for advanced warfare, raising a host of legal and moral questions, and transforming the relationship between military personnel and machines."
How to start winning the information war; The Washington Post, April 2, 2024
"Formerly, we thought about national security in terms of battles on land, at sea and in the air. The newest battlefield is the human mind. Our adversaries are fully deployed on that field of battle. We are all but absent. Thus, we are losing the information war by default to malefactor regimes in Russia, China and Iran.
What explains this alarming state of affairs? Lack of leadership and lack of means. No one is in charge of telling America’s still-inspiring story to the world. For three years, the U.S. Advisory Commission on Public Diplomacy, part of the State Department, has urged the White House and Congress to designate a lead official in the information war. The recommendations appear to have been ignored. This reflects inattention at the very top.
As for lack of means, since 1999, when Congress unwisely abolished the U.S. Information Agency (USIA), the United States has lacked the capability to fight back using counternarrative. We have the invaluable Voice of America, of course, but VOA’s product is news. News is not counternarrative. It is not the marshaling of truth and fact to tell our story. Putin’s high standing in domestic polls and in some nonaligned countries is proof we need more than news to achieve victory on the battlefield of the human mind. We need counternarrative as well.
Joe Biden was one of 49 senators who voted against abolishing the USIA. It should be an easy walk for the president to take the steps necessary to get us on the offensive."
Monday, April 1, 2024
Conspiracy, monetisation and weirdness: this is why social media has become ungovernable; The Guardian, April 1, 2024
Nesrine Malik, The Guardian; Conspiracy, monetisation and weirdness: this is why social media has become ungovernable
"Something has changed about the way social media content is presented to us. It is both a huge and subtle shift. Until recently, types of content were segregated by platform. Instagram was for pictures and short reels, TikTok for longer videos, X for short written posts. Now Instagram reels post TikTok videos, which post Instagram reels, and all are posted on X. Often it feels like a closed loop, with the algorithm taking you further and further away from discretion and choice in who you follow. All social media apps now have the equivalent of a “For you” page, a feed of content from people you don’t follow, and which, if you don’t consciously adjust your settings, the homepage defaults to. The result is that increasingly, you have less control over what you see."
An attack of the vapours: Tennessee bill endorses chemtrails conspiracy theory; The Guardian, March 31, 2024
Adam Gabbatt, The Guardian; An attack of the vapours: Tennessee bill endorses chemtrails conspiracy theory
"Proponents of the debunked chemtrails idea believe that the cloudy white lines created by airplane emissions are chemicals being released into the atmosphere. The idea is that the government, or shadowy private organizations, are pumping out toxic chemicals, with the aim being anything from modifying the weather to controlling a population’s minds.
This is not happening, scientists say.
“There’s no such thing as chemtrails,” said Alan Robock, a climate science professor at Rutgers University...
Numerous debunkings of the chemtrails concept have not succeeded in quieting those fearful of airplane condensation trails. A YouGov/Statista survey conducted in 2019 found that 8% of Americans “strongly believe” that “the government is using chemicals to control the population (chemtrails)”. (A further 11% said they “somewhat believe” in the theory.)...
Karen Douglas, a professor of social psychology at the University of Kent, said people are drawn to various conspiracy theories because “a simple explanation is often not very attractive”.
“People assume that there must somehow be a bigger explanation, or more going on than people know about. The simple explanations often seem too mundane and not satisfying enough,” Douglas said."
From Pizzagate to the 2020 Election: Forcing Liars to Pay or Apologize; The New York Times, March 31, 2024
Elizabeth Williamson, The New York Times; From Pizzagate to the 2020 Election: Forcing Liars to Pay or Apologize
"Convinced that viral lies threaten public discourse and democracy, he is at the forefront of a small but growing cadre of lawyers deploying defamation, one of the oldest areas of the law, as a weapon against a tide of political disinformation."
A fight to protect the dignity of Michelangelo’s David raises questions about freedom of expression; AP, March 28, 2024
Sunday, March 31, 2024
British Museum Sues Former Curator for Return of Stolen Items; The New York Times, March 27, 2024
Alex Marshall, The New York Times; British Museum Sues Former Curator for Return of Stolen Items
"The museum claims that the former curator, Peter Higgs, who once ran the museum’s Greek and Roman antiquities department, stole or damaged over 1,800 artifacts from its collections and sold hundreds of those items on eBay, according to court documents.
Officials also want Mr. Higgs to explain the whereabouts of other artifacts that they say the former curator sold online. The court documents state that Mr. Higgs disputes the accusations against him...
In the filing, the museum also accuses the former curator of attempting to cover up the thefts by altering the museum’s digital catalog, including changing descriptions of missing items."
Philosophy, ethics, and the pursuit of 'responsible' artificial intelligence; Rochester Institute of Technology (RIT), March 7, 2024
Felicia Swartzenberg, Rochester Institute of Technology (RIT); Philosophy, ethics, and the pursuit of 'responsible' artificial intelligence
"Evan Selinger, professor in RIT’s Department of Philosophy, has taken an interest in the ethics of AI and the policy gaps that need to be filled in. Through a humanities lens, Selinger asks the questions, "How can AI cause harm, and what can governments and companies creating AI programs do to address and manage it?" Answering them, he explained, requires an interdisciplinary approach...
“AI ethics has core values and principles, but there’s endless disagreement about interpreting and applying them and creating meaningful accountability mechanisms,” said Selinger. “Some people are rightly worried that AI can be co-opted into ‘ethics washing’—weak checklists, flowery mission statements, and empty rhetoric that covers over abuses of power. Fortunately, I’ve had great conversations about this issue, including with folks at Microsoft, on why it is important to consider a range of positions.”
There are many issues that need to be addressed as companies pursue responsible AI, including public concern over whether generative AI is stealing from artists. Some of Selinger’s recent research has focused on the back-end issues with developing AI, such as the human toll that comes with testing AI chatbots before they’re released to the public. Other issues focus on policy, such as what to do about the dangers posed by facial recognition and other automated approaches to surveillance.
In a chapter for a book that will be published by MIT Press, Selinger, along with co-authors Brenda Leong, partner at Luminos.Law, and Albert Fox Cahn, founder and executive director of Surveillance Technology Oversight Project, offer concrete suggestions for conducting responsible AI audits, while also considering civil liberties objections."
The Reckoning; Science, March 7, 2024
Cathleen O’Grady, Science; The Reckoning
"Part of the failure lies with France’s law on research ethics, Amiel says, which is out of step with international standards. “It’s provincial,” he says. “And it’s really a problem.” Because the law allows some human studies to proceed without ethical approval, Amiel says, similar violations are ongoing elsewhere in France, though not at the scale of the IHU’s. The best solution would be to overhaul the law, he says—but “I don’t think it’s a priority for the government at the moment.”
The close relationship between political powers and scientific institutions in France is also to blame for the foot-dragging institutional response, Lacombe says. Without external voices—like Bik, Frank, Besançon, Molimard, and Garcia—“I’m not sure that things would have moved,” she says."
Saturday, March 30, 2024
A.I.-Generated Garbage Is Polluting Our Culture; The New York Times, March 29, 2024
Erik Hoel, The New York Times; A.I.-Generated Garbage Is Polluting Our Culture
"To deal with this corporate refusal to act we need the equivalent of a Clean Air Act: a Clean Internet Act. Perhaps the simplest solution would be to legislatively force advanced watermarking intrinsic to generated outputs, like patterns not easily removable. Just as the 20th century required extensive interventions to protect the shared environment, the 21st century is going to require extensive interventions to protect a different, but equally critical, common resource, one we haven’t noticed up until now since it was never under threat: our shared human culture."
Thursday, March 28, 2024
AI hustlers stole women’s faces to put in ads. The law can’t help them.; The Washington Post, March 28, 2024
Panel of Distinguished AI Experts Discuss Challenges of AI Regulation with the Honorable Ro Khanna; Markkula Center for Applied Ethics at Santa Clara University, March 27, 2024
Ann Skeet, Markkula Center for Applied Ethics at Santa Clara University; Panel of Distinguished AI Experts Discuss Challenges of AI Regulation with the Honorable Ro Khanna
"Leadership takes many forms, and often the most important thing leaders can do is listen. The Institute for Technology Ethics and Culture at the Markkula Center for Applied Ethics and the Santa Clara School of Law hosted a roundtable discussion on March 18, 2024, with Congressman Ro Khanna and leaders from industry, civil society, and academia. Congressman Khanna wanted to hear from experts in his district to inform his thinking about AI regulation. I was honored to moderate the discussion.
Opinions were as diverse as the group bringing them forward. It was observed that many of us are used to speaking so frequently with those in our own field that the chance to connect with those in other areas reveals sharp differences in perspective. Several participants felt, for example, that deepfakes are not something to be too concerned about since they are easily identifiable, whereas others felt there are still many people who struggle to identify them. People are often confused by false images or voices, and as technology advances, this confusion will only deepen."
Your newsroom needs an AI ethics policy. Start here.; Poynter, March 25, 2024
Kelly McBride, Poynter; Your newsroom needs an AI ethics policy. Start here.
"Every single newsroom needs to adopt an ethics policy to guide the use of generative artificial intelligence. Why? Because the only way to create ethical standards in an unlicensed profession is to do it shop by shop.
Until we create those standards — even though it’s early in the game — we are holding back innovation.
So here’s a starter kit, created by Poynter’s Alex Mahadevan, Tony Elkins and me. It’s a statement of journalism values that roots AI experimentation in the principles of accuracy, transparency and audience trust, followed by a set of specific guidelines.
Think of it like a meal prep kit. Most of the work is done, but you still have to roll up your sleeves and do a bit of labor. This policy includes blank spaces, where newsroom leaders will have to add details, saying “yes” or “no” to very specific activities, like using AI-generated illustrations.
In order to effectively use this AI ethics policy, newsrooms will need to create an AI committee and designate an editor or senior journalist to lead the ongoing effort. This step is critical because the technology is going to evolve, the tools are going to multiply and the policy will not keep up unless it is routinely revised."
Wednesday, March 27, 2024
Amicus Briefs Filed in Internet Archive Copyright Case; Publishers Weekly, March 25, 2024
Andrew Albanese, Publishers Weekly; Amicus Briefs Filed in Internet Archive Copyright Case
"Internet Archive lawyers filed their principal appeal brief on December 15, and 11 amicus briefs were filed in support of the Internet Archive a week later, representing librarians and library associations, authors, public advocacy groups, law professors, and IP scholars, although some of the IA amicus briefs are presented as neutral.
The briefs are the latest development in the long-running copyright infringement case and come a year after a ruling by judge John G. Koeltl on March 24, 2023 that emphatically rejected the IA’s fair use defense, finding the scanning and lending of print library books under a protocol known as “controlled digital lending” to be copyright infringement.
The Internet Archive’s reply brief is now due on April 19, and oral arguments are expected to be set for this fall."
Eyes in the sky: why drones are ‘beyond effective’ for animal rights campaigners around the world; The Guardian, March 26, 2024
Laura Trethewey, The Guardian; Eyes in the sky: why drones are ‘beyond effective’ for animal rights campaigners around the world
"Over the past decade, drones have become irreplaceable tools in activist and conservation circles. In 2013, the animal rights group Peta (People for the Ethical Treatment of Animals) launched a drone campaign tracking illegal bowhunting in Massachusetts.
Since then, drones have been used to record factory farm pollution in the American midwest, sea lice outbreaks in Icelandic salmon pens, and deforestation in the Brazilian Amazon. Drones are popular because they’re relatively cheap, easy to use and extend a person’s range in difficult or inaccessible terrain. They also provide a bird’s-eye view of the scale of an issue, such as an oil spill or illegal logging...
In some cases, the drones capture the secret lives of animals hidden from view, such as Romeo the manatee in Miami...
“Conservation can be a very dangerous occupation to be in and there are more environmentalists killed every year,” says Ager. “Drones are a perfect way to study something without putting yourself in harm’s way and then decide whether it’s worth the risk.”"
Tuesday, March 26, 2024
Monday, March 25, 2024
Judge dismisses Elon Musk's suit against hate speech researchers; NPR, March 25, 2024
Bobby Allyn, NPR; Judge dismisses Elon Musk's suit against hate speech researchers
"A federal judge has dismissed X owner Elon Musk's lawsuit against a research group that documented an uptick in hate speech on the social media site, saying the organization's reports on the platform formerly known as Twitter were protected by the First Amendment.
Musk's suit, "is so unabashedly and vociferously about one thing that there can be no mistaking that purpose," wrote U.S. District Judge Charles Breyer in his Monday ruling, "This case is about punishing the Defendants for their speech."
Amid an advertiser boycott of X last year, Musk sued the research and advocacy organization Center for Countering Digital Hate, alleging it violated the social media site's terms of service in gathering data for its reports."
Saturday, March 23, 2024
Magali Berdah: Dozens jailed in France's largest cyberbully case; BBC, March 19, 2024
Ian Casey, BBC; Magali Berdah: Dozens jailed in France's largest cyberbully case
"Twenty-eight people have been jailed for up to 18 months for the harassment of an influencer in France's largest cyberbullying case to date.
Judges found the accused guilty of harassing Magali Berdah, spurred on by a campaign by the French rapper Booba against "thieving influencers"...
Ms Berdah has built a prominent career in France as a lifestyle and fashion expert, while also marketing other social media stars through her company Shauna Events.
Her lawyers said posts from Booba, real name Élie Yaffa, encouraged a "mob" of people online to send hateful and insulting messages to their client, something Booba denied.
The court said that each of the defendants "made a conscious choice to join in" with the cyberbullying.
The accused, aged between 20 and 49, received jail terms ranging from four to 18 months, some of which were suspended."
Expert Insights: An estimated 1 in 4 teens has experienced cyberbullying. Parents can help.; The Pittsburgh Post-Gazette, March 23, 2024
Jonathan Perle, The Pittsburgh Post-Gazette; Expert Insights: An estimated 1 in 4 teens has experienced cyberbullying. Parents can help.
"What are the outcomes of cyberbullying?
The residual effects of cyberbullying are often considered similar to those of traditional bullying. For the victim, cyberbullying can lead to:
• General life stress.
• Avoidance of school or social situations for fear of what others may have learned about them.
• Avoidance of online pleasurable activities (e.g., gaming, social media).
• Anxiety, whether general (e.g., “what if” thoughts) or specific (e.g., about particular events or people).
• Depression that could include changes in eating, sleeping or interests, as well as increases in withdrawal, irritability, and potentially suicidal thoughts or activities.
• Substance use to cope.
• Trauma that could develop over time from recurrent exposure to bullying and make the individual fearful of their safety at home, online, in social settings and/or at school.
Combined issues can not only result in a decline in self-esteem, but also reduced academic and social performance. Finally, some have suggested that being cyberbullied has the potential to increase the chances of traditional in-person bullying.
Despite the victim being the focus, cyberbullying can also lead to issues for the bully.
Depending on what they share and the outcomes, many schools have implemented bullying policies to hold children (and families) accountable. Similarly, many states have integrated cyberbullying laws that could result in formal charges being brought against a bully. To learn more about specific laws, visit StopBullying.gov/resources/laws."
Tennessee becomes the first state to protect musicians and other artists against AI; NPR, March 22, 2024
Rebecca Rosman, NPR; Tennessee becomes the first state to protect musicians and other artists against AI
"Tennessee made history on Thursday, becoming the first U.S. state to sign off on legislation to protect musicians from unauthorized artificial intelligence impersonation.
"Tennessee (sic) is the music capital of the world, & we're leading the nation with historic protections for TN artists & songwriters against emerging AI technology," Gov. Bill Lee announced on social media.
The Ensuring Likeness Voice and Image Security Act, or ELVIS Act, is an updated version of the state's old right of publicity law. While the old law protected an artist's name, photograph or likeness, the new legislation includes AI-specific protections."
Thursday, March 21, 2024
‘Social media is like driving with no speed limits’: the US surgeon general fighting for youngsters’ happiness; The Guardian, March 19, 2024
Robert Booth, The Guardian; ‘Social media is like driving with no speed limits’: the US surgeon general fighting for youngsters’ happiness
The Anxious Generation by Jonathan Haidt – a pocket full of poison; The Guardian, Book Review, March 21, 2024
Sophie McBain, The Guardian, Book Review; The Anxious Generation by Jonathan Haidt – a pocket full of poison
"The American social psychologist Jonathan Haidt believes this mental health crisis has been driven by the mass adoption of smartphones, along with the advent of social media and addictive online gaming. He calls it “the Great Rewiring of Childhood”.
Children are spending ever less time socialising in person and ever more time glued to their screens, with girls most likely to be sucked into the self-esteem crushing vortex of social media, and boys more likely to become hooked on gaming and porn. Childhood is no longer “play-based”, it’s “phone-based”. Haidt believes that parents have become overprotective in the offline world, delaying the age at which children are deemed safe to play unsupervised or run errands alone, but do too little to protect children from online dangers. We have allowed the young too much freedom to roam the internet, where they are at risk of being bullied and harassed or encountering harmful content, from graphic violence to sites that glorify suicide and self-harm...
The Anxious Generation is nonetheless an urgent and essential read, and it ought to become a foundational text for the growing movement to keep smartphones out of schools, and young children off social media. As well as calling for school phone bans, Haidt argues that governments should legally assert that tech companies have a duty of care to young people, the age of internet adulthood should be raised to 16, and companies forced to institute proper age verification – all eminently sensible and long overdue interventions."
Canada moves to protect coral reef that scientists say ‘shouldn’t exist’; The Guardian, March 15, 2024
Leyland Cecco, The Guardian; Canada moves to protect coral reef that scientists say ‘shouldn’t exist’
"For generations, members of the Kitasoo Xai’xais and Heiltsuk First Nations, two communities off the Central Coast region of British Columbia, had noticed large groups of rockfish congregating in a fjord system.
In 2021, researchers and the First Nations, in collaboration with the Canadian government, deployed a remote-controlled submersible to probe the depths of the Finlayson Channel, about 300 miles north-west of Vancouver.
On the last of nearly 20 dives, the team made a startling discovery – one that has only recently been made public...
The discovery marks the latest in a string of instances in which Indigenous knowledge has directed researchers to areas of scientific or historic importance. More than a decade ago, Inuk oral historian Louie Kamookak compared Inuit stories with explorers’ logbooks and journals to help locate Sir John Franklin’s lost ships, HMS Erebus and HMS Terror. In 2014, divers located the wreck of the Erebus in a spot Kamookak suggested they search, and using his directions found the Terror two years later."
Wednesday, March 20, 2024
Google hit with $270M fine in France as authority finds news publishers’ data was used for Gemini; TechCrunch, March 20, 2024
Natasha Lomas and Romain Dillet, TechCrunch; Google hit with $270M fine in France as authority finds news publishers’ data was used for Gemini
"In a never-ending saga between Google and France’s competition authority over copyright protections for news snippets, the Autorité de la Concurrence announced a €250 million fine against the tech giant Wednesday (around $270 million at today’s exchange rate).
According to the competition watchdog, Google disregarded some of its previous commitments with news publishers. But the decision is especially notable because it drops something else that’s bang up-to-date — by latching onto Google’s use of news publishers’ content to train its generative AI model Bard/Gemini.
The competition authority has found fault with Google for failing to notify news publishers of this GenAI use of their copyrighted content. This is in light of earlier commitments Google made which are aimed at ensuring it undertakes fair payment talks with publishers over reuse of their content."