Friday, August 9, 2024

TryTank Research Institute helps create Cathy, a new AI chatbot and Episcopal Church expert; Episcopal News Service, August 7, 2024

Kathryn Post, Episcopal News Service; TryTank Research Institute helps create Cathy, a new AI chatbot and Episcopal Church expert

"The latest AI chatbot geared for spiritual seekers is AskCathy, co-launched in June by a research institute and ministry organization and aiming to roll out soon on Episcopal church websites. Cathy draws on the latest version of ChatGPT and is equipped to prioritize Episcopal resources.

“This is not a substitute for a priest,” said the Rev. Tay Moss, director of one of Cathy’s architects, the Innovative Ministry Center, an organization based at the Toronto United Church Council that develops digital resources for communities of faith. “She comes alongside you in your search queries and helps you discover material. But she is not the end-all be-all of authority. She can’t tell you how to believe or what to believe.”

The Rev. Lorenzo Lebrija, the executive director of TryTank Research Institute at Virginia Theological Seminary and Cathy’s other principal developer, said all the institute’s projects attempt to follow the lead of the Holy Spirit, and Cathy is no different. He told Religion News Service the idea for Cathy materialized after brainstorming how to address young people’s spiritual needs. What if a chatbot could meet people asking life’s biggest questions with care, insight and careful research?

“The goal is not that they will end up at their nearby Episcopal church on Sunday. The goal is that it will spark in them this knowledge that God is always with us, that God never leaves us,” Lebrija said. “This can be a tool that gives us a glimpse and little direction that we can then follow on our own.”

To do that, though, would require a chatbot designed to avoid the kinds of hallucinations and errors that have plagued other ChatGPT integrations. In May, the Catholic evangelization site Catholic Answers “defrocked” their AI avatar, Father Justin, designating him as a layperson after he reportedly claimed to be an ordained priest capable of taking confession and performing marriages...

The Rev. Peter Levenstrong, an associate rector at an Episcopal church in San Francisco who blogs about AI and the church, told RNS he thinks Cathy could familiarize people with the Episcopal faith.

“We have a PR issue,” Levenstrong said. “Most people don’t realize there is a denomination that is deeply rooted in tradition, and yet open and affirming, and theologically inclusive, and doing its best to strive toward a future without racial injustice, without ecocide, all these huge problems that we as a church take very seriously.”

In his own context, Levenstrong has already used Cathy to brainstorm Harry Potter-themed lessons for children. (She recommended a related book written by an Episcopalian.)

Cathy’s creators know AI is a thorny topic. Their FAQ page anticipates potential critiques."

Wednesday, August 7, 2024

It’s practically impossible to run a big AI company ethically; Vox, August 5, 2024

Sigal Samuel, Vox; It’s practically impossible to run a big AI company ethically

"Anthropic was supposed to be the good AI company. The ethical one. The safe one.

It was supposed to be different from OpenAI, the maker of ChatGPT. In fact, all of Anthropic’s founders once worked at OpenAI but quit in part because of differences over safety culture there, and moved to spin up their own company that would build AI more responsibly. 

Yet lately, Anthropic has been in the headlines for less noble reasons: It’s pushing back on a landmark California bill to regulate AI. It’s taking money from Google and Amazon in a way that’s drawing antitrust scrutiny. And it’s being accused of aggressively scraping data from websites without permission, harming their performance. 

What’s going on?

The best clue might come from a 2022 paper written by the Anthropic team back when their startup was just a year old. They warned that the incentives in the AI industry — think profit and prestige — will push companies to “deploy large generative models despite high uncertainty about the full extent of what these models are capable of.” They argued that, if we want safe AI, the industry’s underlying incentive structure needs to change.

Well, at three years old, Anthropic is now the age of a toddler, and it’s experiencing many of the same growing pains that afflicted its older sibling OpenAI."

Dr. Ruffini: Church leadership needed to shape ‘Ethical AI’; LiCAS News via Vatican News, August 2024

Joan April, Roy Lagarde & Mark Saludes, LiCAS News via Vatican News; Dr. Ruffini: Church leadership needed to shape ‘Ethical AI’

"“The digital world is not a ready-made. It is changing every day. We, we can change it. We can shape it. And we need Catholic communicators to do it, with love and with human intelligence,” said Dr. Ruffini. 

In a recorded speech delivered during the 7th National Catholic Social Communications Convention (NCSCC) in Lipa City, south of Manila, on August 5, the Prefect of the Dicastery for Communication (Vatican News' parent organization) underscored the Church’s responsibility to guide technological advancements with moral clarity and human-centered values.

“So the basic question is not about machines, but about humans, about us. There are and always will be things that a technology cannot replace, like freedom, like the miracle of encounter between people, like the surprise of the unexpected, the conversion, the outburst of ingenuity, the gratuitous love,” he said. 

Organized by the Episcopal Commission on Social Communications (ECSC) of the Catholic Bishops’ Conference of the Philippines (CBCP), the convention aims to explore advancements and risks in AI, offering insights on leveraging the technology for positive impact while addressing potential negative consequences."

A booming industry of AI age scanners, aimed at children’s faces; The Washington Post, August 7, 2024

The Washington Post; A booming industry of AI age scanners, aimed at children’s faces

"Nineteen states, home to almost 140 million Americans, have passed or enacted laws requiring online age checks since the beginning of last year, including Virginia, Texas and Florida. For the companies, that’s created a gold mine: Employees at Incode, a San Francisco firm that runs more than 100 million verifications a year, now internally track state bills and contact local officials to, as senior director of strategy Fernanda Sottil said, “understand where … our tech fits in.”

But while the systems are promoted for safeguarding kids, they can only work by inspecting everyone — surveying faces, driver’s licenses and other sensitive data in vast quantities. Alex Stamos, the former security chief of Facebook, which uses Yoti, said “most age verification systems range from ‘somewhat privacy violating’ to ‘authoritarian nightmare.'”"

Tuesday, August 6, 2024

How Companies Can Take a Global Approach to AI Ethics; Harvard Business Review (HBR), August 5, 2024

Favour Borokini, Harvard Business Review (HBR); How Companies Can Take a Global Approach to AI Ethics

"Getting the AI ethics policy right is a high-stakes affair for an organization. Well-published instances of gender biases in hiring algorithms or job search results may diminish the company’s reputation, pit the company against regulations, and even attract hefty government fines. Sensing such threats, organizations are increasingly creating dedicated structures and processes to inculcate AI ethics proactively. Some companies have moved further along this road, creating institutional frameworks for AI ethics.

Many efforts, however, miss an important fact: ethics differ from one cultural context to the next...

Western perspectives are also implicitly being encoded into AI models. For example, some estimates show that less than 3% of all images on ImageNet represent the Indian and Chinese diaspora, which collectively account for a third of the global population. Broadly, a lack of high-quality data will likely lead to low predictive power and bias against underrepresented groups — or even make it impossible for tools to be developed for certain communities at all. LLMs can’t currently be trained for languages that aren’t heavily represented on the Internet, for instance. A recent survey of IT organizations in India revealed that the lack of high-quality data remains the most dominant impediment to ethical AI practices.

As AI gains ground and dictates business operations, an unchecked lack of variety in ethical considerations may harm companies and their customers.

To address this problem, companies need to develop a contextual global AI ethics model that prioritizes collaboration with local teams and stakeholders and devolves decision-making authority to those local teams. This is particularly necessary if their operations span several geographies."

Sunday, August 4, 2024

Music labels' AI lawsuits create copyright puzzle for courts; Reuters, August 4, 2024

Reuters; Music labels' AI lawsuits create copyright puzzle for courts

"Suno and Udio pointed to past public statements defending their technology when asked for comment for this story. They filed their initial responses in court on Thursday, denying any copyright violations and arguing that the lawsuits were attempts to stifle smaller competitors. They compared the labels' protests to past industry concerns about synthesizers, drum machines and other innovations replacing human musicians...

The labels' claims echo allegations by novelists, news outlets, music publishers and others in high-profile copyright lawsuits over chatbots like OpenAI's ChatGPT and Anthropic's Claude that use generative AI to create text. Those lawsuits are still pending and in their early stages.

Both sets of cases pose novel questions for the courts, including whether the law should make exceptions for AI's use of copyrighted material to create something new...

"Music copyright has always been a messy universe," said Julie Albert, an intellectual property partner at law firm Baker Botts in New York who is tracking the new cases. And even without that complication, Albert said fast-evolving AI technology is creating new uncertainty at every level of copyright law.

WHOSE FAIR USE?

The intricacies of music may matter less in the end if, as many expect, the AI cases boil down to a "fair use" defense against infringement claims - another area of U.S. copyright law filled with open questions."

Meta in Talks to Use Voices of Judi Dench, Awkwafina and Others for A.I.; The New York Times, August 2, 2024

Mike Isaac, The New York Times; Meta in Talks to Use Voices of Judi Dench, Awkwafina and Others for A.I.

"Meta is in discussions with Awkwafina, Judi Dench and other actors and influencers for the right to incorporate their voices into a digital assistant product called MetaAI, according to three people with knowledge of the talks, as the company pushes to build more products that feature artificial intelligence.

Apart from Ms. Dench and Awkwafina, Meta is in talks with the comedian Keegan-Michael Key and other celebrities, said the people, who spoke on the condition of anonymity because the discussions are private. They added that all of Hollywood’s top talent agencies were involved in negotiations with the tech giant."

OpenAI’s Sam Altman is becoming one of the most powerful people on Earth. We should be very afraid; The Observer via The Guardian, August 3, 2024

Gary Marcus, The Observer via The Guardian; OpenAI’s Sam Altman is becoming one of the most powerful people on Earth. We should be very afraid

"Unfortunately, many other AI companies seem to be on the path of hype and corner-cutting that Altman charted. Anthropic – formed from a set of OpenAI refugees who were worried that AI safety wasn’t taken seriously enough – seems increasingly to be competing directly with the mothership, with all that entails. The billion-dollar startup Perplexity seems to be another object lesson in greed, training on data it isn’t supposed to be using. Microsoft, meanwhile, went from advocating “responsible AI” to rushing out products with serious problems, pressuring Google to do the same. Money and power are corrupting AI, much as they corrupted social media.

We simply can’t trust giant, privately held AI startups to govern themselves in ethical and transparent ways. And if we can’t trust them to govern themselves, we certainly shouldn’t let them govern the world.

I honestly don’t think we will get to an AI that we can trust if we stay on the current path. Aside from the corrupting influence of power and money, there is a deep technical issue, too: large language models (the core technique of generative AI), invented by Google and made famous by Altman’s company, are unlikely ever to be safe. They are recalcitrant, and opaque by nature – so-called “black boxes” that we can never fully rein in. The statistical techniques that drive them can do some amazing things, like speed up computer programming and create plausible-sounding interactive characters in the style of deceased loved ones or historical figures. But such black boxes have never been reliable, and as such they are a poor basis for AI that we could trust with our lives and our infrastructure.

That said, I don’t think we should abandon AI. Making better AI – for medicine, and material science, and climate science, and so on – really could transform the world. Generative AI is unlikely to do the trick, but some future, yet-to-be developed form of AI might.

The irony is that the biggest threat to AI today may be the AI companies themselves; their bad behaviour and hyped promises are turning a lot of people off. Many are ready for government to take a stronger hand. According to a June poll by Artificial Intelligence Policy Institute, 80% of American voters prefer “regulation of AI that mandates safety measures and government oversight of AI labs instead of allowing AI companies to self-regulate”."

Saturday, August 3, 2024

AI is complicating plagiarism. How should scientists respond?; Nature, July 30, 2024

Diana Kwon, Nature; AI is complicating plagiarism. How should scientists respond?

"From accusations that led Harvard University’s president to resign in January, to revelations in February of plagiarized text in peer-review reports, the academic world has been roiled by cases of plagiarism this year.

But a bigger problem looms in scholarly writing. The rapid uptake of generative artificial intelligence (AI) tools — which create text in response to prompts — has raised questions about whether this constitutes plagiarism and under what circumstances it should be allowed. “There’s a whole spectrum of AI use, from completely human-written to completely AI-written — and in the middle, there’s this vast wasteland of confusion,” says Jonathan Bailey, a copyright and plagiarism consultant based in New Orleans, Louisiana.

Generative AI tools such as ChatGPT, which are based on algorithms known as large language models (LLMs), can save time, improve clarity and reduce language barriers. Many researchers now argue that they are permissible in some circumstances and that their use should be fully disclosed.

But such tools complicate an already fraught debate around the improper use of others’ work. LLMs are trained to generate text by digesting vast amounts of previously published writing. As a result, their use could result in something akin to plagiarism — if a researcher passes off the work of a machine as their own, for instance, or if a machine generates text that is very close to a person’s work without attributing the source. The tools can also be used to disguise deliberately plagiarized text, and any use of them is hard to spot. “Defining what we actually mean by academic dishonesty or plagiarism, and where the boundaries are, is going to be very, very difficult,” says Pete Cotton, an ecologist at the University of Plymouth, UK."

Supreme Court Ethics Controversies: All The Scandals That Led Biden To Endorse Code Of Conduct; Forbes, July 29, 2024

Alison Durkee, Forbes; Supreme Court Ethics Controversies: All The Scandals That Led Biden To Endorse Code Of Conduct

"President Joe Biden endorsed the Supreme Court imposing a binding code of ethics on Monday, following a string of recent ethics issues the court has faced that have ramped up criticism of the court and sparked cries for a code of conduct from lawmakers and legal experts."

Friday, August 2, 2024

Justice Department sues TikTok, accusing the company of illegally collecting children’s data; AP, August 2, 2024

Haleluya Hadero, AP; Justice Department sues TikTok, accusing the company of illegally collecting children’s data

"The Justice Department sued TikTok on Friday, accusing the company of violating children’s online privacy law and running afoul of a settlement it had reached with another federal agency. 

The complaint, filed together with the Federal Trade Commission in a California federal court, comes as the U.S. and the prominent social media company are embroiled in yet another legal battle that will determine if – or how – TikTok will continue to operate in the country. 

The latest lawsuit focuses on allegations that TikTok, a trend-setting platform popular among young users, and its China-based parent company ByteDance violated a federal law that requires kid-oriented apps and websites to get parental consent before collecting personal information of children under 13. It also says the companies failed to honor requests from parents who wanted their children’s accounts deleted, and chose not to delete accounts even when the firms knew they belonged to kids under 13."

Paris mayor supports Olympics opening ceremony director after death threats; The Athletic, August 2, 2024

Ben Burrows and Brendan Quinn, The Athletic; Paris mayor supports Olympics opening ceremony director after death threats

"The mayor of Paris has offered her “unwavering support” to the artistic director behind the Olympics opening ceremony after he was subjected to harassment online including death threats.

Thomas Jolly has filed a complaint with authorities after the opening ceremony — which took place on Friday night — saw him targeted by “threats” and “defamation”...

A statement from Paris mayor Anne Hidalgo on Friday read: “On behalf of the City of Paris and in my own name, I would like to extend my unwavering support to Thomas Jolly in the aftermath of the threats and harassment he has been subjected to in recent days, which have led him to lodge a complaint.”

Paris’ Central Office for Combating Crimes Against Humanity and Hate Crimes (OCLCH) is now investigating Jolly’s complaint.

Jolly’s complaint related to “death threats on account of his origin, death threats on account of his sexual orientation, public insults on account of his origin, public insults on account of his sexual orientation” as well as “defamation” and “threatening and insulting messages criticizing his sexual orientation and his wrongly assumed Israeli origins.”"

Bipartisan Legal Group Urges Lawyers to Defend Against ‘Rising Authoritarianism’; The New York Times, August 1, 2024

The New York Times; Bipartisan Legal Group Urges Lawyers to Defend Against ‘Rising Authoritarianism’

"A bipartisan American Bar Association task force is calling on lawyers across the country to do more to help protect democracy ahead of the 2024 election, warning in a statement to be delivered Friday at the group’s annual meeting in Chicago that the nation faces a serious threat in “rising authoritarianism.”

The statement by a panel of prominent legal thinkers and other public figures — led by J. Michael Luttig, a conservative former federal appeals court judge appointed by President George Bush, and Jeh C. Johnson, a Homeland Security secretary during the Obama administration — does not mention by name former President Donald J. Trump.

But in raising alarms, the panel appeared to be clearly referencing Mr. Trump’s attempt to subvert his loss of the 2020 election, which included attacks on election workers who were falsely accused by Mr. Trump and his supporters of rigging votes and culminated in the violent attack on the Capitol by his supporters on Jan. 6, 2021."

Jeffrey Clark Should Get 2-Year Suspension, DC Ethics Board Says; Bloomberg Law, August 1, 2024

Sam Skolnik, Bloomberg Law; Jeffrey Clark Should Get 2-Year Suspension, DC Ethics Board Says

"Trump administration Justice Department official Jeffrey Clark should receive a two-year suspension for attempting dishonesty over his efforts to overturn the 2020 election, a DC Board on Professional Responsibility panel recommended Thursday.

“Disciplinary Counsel has proven by clear and convincing evidence that Mr. Clark attempted dishonesty and did so with truly extraordinary recklessness,” the panel said.

The recommendation from a board hearing committee is in stark contrast to that of DC Disciplinary Counsel Phil Fox, who on April 29 said that disbarment is “the only possible sanction” for Clark.

Clark, a former US assistant attorney general, in late 2020 tried to get his Justice Department superiors to send a letter to Georgia state officials improperly questioning the election outcome, three lawyers for the bar, led by Fox, wrote. Clark engaged in a “dishonest attempt to create national chaos on the verge of January 6,” they wrote.

Fox didn’t prove “by clear and convincing evidence that Mr. Clark was as culpable” as Trump lawyers Rudy Giuliani or John Eastman, but he was culpable, the committee said in its 213-page, Aug. 1 report."

Thursday, August 1, 2024

From 'E.T.' to 'Blade Runner,' how the summer of 1982 changed cinema forever; NPR, Fresh Air, July 31, 2024

NPR, Fresh Air; From 'E.T.' to 'Blade Runner,' how the summer of 1982 changed cinema forever

"MOSLEY: Well, a couple of years later, then, there's "Tron"...

NASHAWATY: Yeah.

MOSLEY: ...Which is about a computer hacker who is abducted into the digital world. What did Disney learn from "The Black Hole" that then maybe helped them with the success of "Tron?"

NASHAWATY: Yeah. I mean, I think it learned that it has to gamble in order to stay alive, and, yes, "Black Hole" had been sort of an unsuccessful gamble, or at least a push, but they knew that this is the way they had to go in order to stay relevant and to stay in business."

Many People Fear A.I. They Shouldn’t.; The New York Times, July 31, 2024

David Brooks, The New York Times; Many People Fear A.I. They Shouldn’t.

"A.I. can impersonate human thought because it can take all the ideas that human beings have produced and synthesize them into strings of words or collages of images that make sense to us. But that doesn’t mean the A.I. “mind” is like the human mind. The A.I.“mind” lacks consciousness, understanding, biology, self-awareness, emotions, moral sentiments, agency, a unique worldview based on a lifetime of distinct and never to be repeated experiences...

Of course, bad people will use A.I. to do harm, but most people are pretty decent and will use A.I. to learn more, innovate faster and produce advances like medical breakthroughs. But A.I.’s ultimate accomplishment will be to remind us who we are by revealing what it can’t do. It will compel us to double down on all the activities that make us distinctly human: taking care of each other, being a good teammate, reading deeply, exploring daringly, growing spiritually, finding kindred spirits and having a good time."

Copyright Office tells Congress: ‘Urgent need’ to outlaw AI-powered impersonation; TechCrunch, July 31, 2024

Devin Coldewey, TechCrunch; Copyright Office tells Congress: ‘Urgent need’ to outlaw AI-powered impersonation

"The U.S. Copyright Office has issued the first part of a report on how AI may affect its domain, and its first recommendation out of the gate is: we need a new law right away to define and combat AI-powered impersonation

“It has become clear that the distribution of unauthorized digital replicas poses a serious threat not only in the entertainment and political arenas but also for private citizens,” said the agency’s director Shira Perlmutter in a statement accompanying the report. “We believe there is an urgent need for effective nationwide protection against the harms that can be caused to reputations and livelihoods.”

The report itself, part one of several to come, focuses on this timely aspect of AI and intellectual property, which as a concept encompasses your right to control your own identity."

What do corporations need to ethically implement AI? Turns out, a philosopher; Northeastern Global News, July 26, 2024

Northeastern Global News; What do corporations need to ethically implement AI? Turns out, a philosopher

"As the founder of the AI Ethics Lab, Canca maintains a team of “philosophers and computer scientists, and the goal is to help industry. That means corporations as well as startups, or organizations like law enforcement or hospitals, to develop and deploy AI systems responsibly and ethically,” she says.

Canca has also worked with organizations like the World Economic Forum and Interpol.

But what does “ethical” mean when it comes to AI? That, Canca says, is exactly the point.

“A lot of the companies come to us and say, ‘Here’s a model that we are planning to use. Is this fair?’” 

But, she notes, there are “different definitions of justice, distributive justice, different definitions of fairness. They conflict with each other. It is a big theoretical question. How do we define fairness?”

"Saying that ‘We optimized this for fairness,’ means absolutely nothing until you have a working,  proper definition” — which shifts from project to project, she also notes.

Now, Canca has been named one of Mozilla’s Rise25 honorees, which recognizes individuals “leading the next wave of AI — using philanthropy, collective power, and the principles of open source to make sure the future of AI is responsible, trustworthy, inclusive and centered around human dignity,” the organization wrote in its announcement."

Wednesday, July 31, 2024

Who Guards AI Ethics? Ending The Blame Game; Forbes, July 30, 2024

Gary Drenik, Forbes; Who Guards AI Ethics? Ending The Blame Game

"As AI becomes increasingly sophisticated and ubiquitous, a critical question has emerged: Who bears the responsibility for ensuring its ethical development and implementation?

According to a recent survey by Prosper Insights & Analytics, about 37% of US adults agree AI solutions need human oversight. However, corporations and governments are engaging in a frustrating game of hot potato, each pointing fingers and shirking accountability. This lack of clear responsibility poses significant risks.

On one hand, excessive government control and overregulation could stifle innovation, hindering AI's progress and potential to solve complex problems. Conversely, unchecked corporate influence and a lack of proper oversight could result in an "AI Wild West," where profit-driven motives supersede ethical considerations. This could result in biased algorithms, privacy breaches and the exacerbation of social inequalities."

Copyright Office Releases Part 1 of Artificial Intelligence Report, Recommends Federal Digital Replica Law; U.S. Copyright Office, July 31, 2024

U.S. Copyright Office; Copyright Office Releases Part 1 of Artificial Intelligence Report, Recommends Federal Digital Replica Law

"Today, the U.S. Copyright Office is releasing Part 1 of its Report on the legal and policy issues related to copyright and artificial intelligence (AI), addressing the topic of digital replicas. This Part of the Report responds to the proliferation of videos, images, or audio recordings that have been digitally created or manipulated to realistically but falsely depict an individual. Given the gaps in existing legal protections, the Office recommends that Congress enact a new federal law that protects all individuals from the knowing distribution of unauthorized digital replicas. The Office also offers recommendations on the elements to be included in crafting such a law. 

“I am pleased to begin sharing the results of our comprehensive study of AI and copyright, with this first set of recommendations to Congress. It has become clear that the distribution of unauthorized digital replicas poses a serious threat not only in the entertainment and political arenas but also for private citizens. We believe there is an urgent need for effective nationwide protection against the harms that can be caused to reputations and livelihoods,” said Shira Perlmutter, Register of Copyrights and Director of the U.S. Copyright Office. “We look forward to working with Congress as they consider our recommendations and evaluate future developments.”

In early 2023, the Copyright Office announced a broad initiative to explore the intersection of copyright and artificial intelligence. Since then, the Office has issued registration guidance for works incorporating AI-generated content, hosted public listening sessions and webinars, met with numerous experts and stakeholders, published a notice of inquiry seeking input from the public, and reviewed the more than 10,000 responsive comments.

The Report is being released in several Parts, beginning today. Forthcoming Parts will address the copyrightability of materials created in whole or in part by generative AI, the legal implications of training AI models on copyrighted works, licensing considerations, and the allocation of any potential liability. 

For more information about the Copyright Office’s AI Initiative, please visit the website."

Tuesday, July 30, 2024

An academic publisher has struck an AI data deal with Microsoft – without their authors’ knowledge; The Conversation, July 23, 2024

Lecturer in Law, University of New England, The Conversation; An academic publisher has struck an AI data deal with Microsoft – without their authors’ knowledge


"In May, a multibillion-dollar UK-based multinational called Informa announced in a trading update that it had signed a deal with Microsoft involving “access to advanced learning content and data, and a partnership to explore AI expert applications”. Informa is the parent company of Taylor & Francis, which publishes a wide range of academic and technical books and journals, so the data in question may include the content of these books and journals.

According to reports published last week, the authors of the content do not appear to have been asked or even informed about the deal. What’s more, they say they had no opportunity to opt out of the deal, and will not see any money from it...

The types of agreements being reached between academic publishers and AI companies have sparked bigger-picture concerns for many academics. Do we want scholarly research to be reduced to content for AI knowledge mining? There are no clear answers about the ethics and morals of such practices."

Allegheny County blocks generative AI on its computers as it shapes up its approach to the tech; Technical.ly, July 30, 2024

Matt Petras & PublicSource, Technical.ly; Allegheny County blocks generative AI on its computers as it shapes up its approach to the tech

"Both Allegheny County and the City of Pittsburgh have taken action to regulate their use of AI technology. 

For the county, it’s a work in progress starting with a pause on ChatGPT and similar programs; for the city, it involves creating internal guidelines informed both by Pitt Cyber’s research and a national coalition of municipal governments. 

Some cities across the country have made their guidelines public. Many of these guidelines focus solely on generative AI technologies.

Ethical discussion of AI shouldn’t ignore the risks posed by algorithms as the public focus shifts toward generative tools, said Beth Schwanke, executive director of the University of Pittsburgh’s Institute for Cyber Law, Policy and Security."

Disconnected: 23 Million Americans Affected by the Shutdown of the Affordable Connectivity Program; CNet, July 28, 2024

Joe Supan, CNet; Disconnected: 23 Million Americans Affected by the Shutdown of the Affordable Connectivity Program

"Jackson got her first home internet connection through the Affordable Connectivity Program, a pandemic-era fund that provided $30 to $75 a month to help low-income households pay for internet. In May, the $14.2 billion program officially ran out of money, leaving Jackson and 23 million households like hers with internet bills that were $30 to $75 higher than the month before. 

That's if they decided to hang on to their internet service at all: 13% of ACP subscribers, or roughly 3 million households, said that after the program ended they planned to cancel service, according to a Benton Institute survey conducted as the ACP expired. 

For as long as the internet has existed, there's been a gap between those who have access to it -- and the means to afford it -- and those who don't. The vast majority of federal broadband spending over the past two decades has gone toward expanding internet access to rural areas. Case in point: In 2021, Congress dedicated $90 billion to closing the digital divide, but only $14.2 billion went to making the internet more affordable through the ACP; the rest went to broadband infrastructure...

"The biggest barrier to home broadband is cost. There are more people who don't have access to home internet because of cost than there are people who don't have access because the infrastructure doesn't exist."
Angela Siefer, executive director of the National Digital Inclusion Alliance"

Monday, July 29, 2024

The COPIED Act Is an End Run around Copyright Law; Public Knowledge, July 24, 2024

Lisa Macpherson, Public Knowledge; The COPIED Act Is an End Run around Copyright Law

"Over the past week, there has been a flurry of activity related to the Content Origin Protection and Integrity from Edited and Deepfaked Media (COPIED) Act. While superficially focused on helping people understand when they are looking at content that has been created or altered using artificial intelligence (AI) tools, this overly broad bill makes an end run around copyright law and restricts how everyone – not just huge AI developers – can use copyrighted work as the basis of new creative expression. 

The COPIED Act was introduced in the Senate two weeks ago by Senators Maria Cantwell (D-WA, and Chair of the Commerce Committee); Marsha Blackburn (R-TN); and Martin Heinrich (D-NM). By the end of last week, we learned there may be a hearing and markup on the bill within days or weeks. The bill directs agency action on standards for detecting and labeling synthetic content; requires AI developers to allow the inclusion of these standards on content; and prohibits the use of such content to generate new content or train AI models without consent and compensation from creators. It allows for enforcement by the Federal Trade Commission and state attorneys general, and for private rights of action. 

We want to say unequivocally that this is the wrong bill, at the wrong time, from the wrong policymakers, to address complex questions of copyright and generative artificial intelligence."

Lawyers using AI must heed ethics rules, ABA says in first formal guidance; Reuters, July 29, 2024

S, Reuters; Lawyers using AI must heed ethics rules, ABA says in first formal guidance

"Lawyers must guard against ethical lapses if they use generative artificial intelligence in their work, the American Bar Association said on Monday.

In its first formal ethics opinion on generative AI, an ABA committee said lawyers using the technology must "fully consider" their ethical obligations to protect clients, including duties related to lawyer competence, confidentiality of client data, communication and fees...

Monday's opinion from the ABA's ethics and professional responsibility committee said AI tools can help lawyers increase efficiency but can also carry risks such as generating inaccurate output. Lawyers also must try to prevent inadvertent disclosure or access to client information, and should consider whether they need to tell a client about their use of generative AI technologies, it said."

Joe Biden: My plan to reform the Supreme Court and ensure no president is above the law; The Washington Post, July 29, 2024

Joe Biden, The Washington Post; Joe Biden: My plan to reform the Supreme Court and ensure no president is above the law

"That’s why — in the face of increasing threats to America’s democratic institutions — I am calling for three bold reforms to restore trust and accountability to the court and our democracy.

First, I am calling for a constitutional amendment called the No One Is Above the Law Amendment. It would make clear that there is no immunity for crimes a former president committed while in office. I share our Founders’ belief that the president’s power is limited, not absolute. We are a nation of laws — not of kings or dictators.

Second, we have had term limits for presidents for nearly 75 years. We should have the same for Supreme Court justices. The United States is the only major constitutional democracy that gives lifetime seats to its high court. Term limits would help ensure that the court’s membership changes with some regularity. That would make timing for court nominations more predictable and less arbitrary. It would reduce the chance that any single presidency radically alters the makeup of the court for generations to come. I support a system in which the president would appoint a justice every two years to spend 18 years in active service on the Supreme Court.

Third, I’m calling for a binding code of conduct for the Supreme Court. This is common sense. The court’s current voluntary ethics code is weak and self-enforced. Justices should be required to disclose gifts, refrain from public political activity and recuse themselves from cases in which they or their spouses have financial or other conflicts of interest. Every other federal judge is bound by an enforceable code of conduct, and there is no reason for the Supreme Court to be exempt.

All three of these reforms are supported by a majority of Americans — as well as conservative and liberal constitutional scholars. And I want to thank the bipartisan Presidential Commission on the Supreme Court of the United States for its insightful analysis, which informed some of these proposals.

We can and must prevent the abuse of presidential power. We can and must restore the public’s faith in the Supreme Court. We can and must strengthen the guardrails of democracy.

In America, no one is above the law. In America, the people rule."

Sunday, July 28, 2024

A.I. May Save Us, or May Construct Viruses to Kill Us; The New York Times, July 27, 2024

NICHOLAS KRISTOF, The New York Times; A.I. May Save Us, or May Construct Viruses to Kill Us

"Managing A.I. without stifling it will be one of our great challenges as we adopt perhaps the most revolutionary technology since Prometheus brought us fire."

Friday, July 26, 2024

In Hiroshima, a call for peaceful, ethical AI; Cisco, The Newsroom, July 18, 2024

Kevin Delaney, Cisco, The Newsroom; In Hiroshima, a call for peaceful, ethical AI

"“Artificial intelligence is a great tool with unlimited possibilities of application,” Archbishop Vincenzo Paglia, president of the Pontifical Academy for Life, said in an opening address at the AI Ethics for Peace conference in Hiroshima this month.

But Paglia was quick to add that AI’s great promise is fraught with potential dangers.

“AI can and must be guided so that its potential serves the good since the moment of its design,” he stressed. “This is our common responsibility.”

The two-day conference aimed to further the Rome Call for AI Ethics, a document first signed on February 28, 2020, at the Vatican. It promoted an ethical approach to artificial intelligence through shared responsibility among international organizations, governments, institutions and technology companies.

This month’s Hiroshima conference drew dozens of global religious, government, and technology leaders to a city that has transcended its dark past of tech-driven, atomic destruction to become a center for peace and cooperation.

The overarching goal in Hiroshima? To ensure that, unlike atomic energy, artificial intelligence is used only for peace and positive human advancement. And as an industry leader in AI innovation and its responsible use, Cisco was amply represented by Dave West, Cisco’s president for Asia Pacific, Japan, and Greater China (APJC)."

Students Weigh Ethics of Using AI for College Applications; Education Week via GovTech, July 24, 2024

Alyson Klein, Education Week via GovTech; Students Weigh Ethics of Using AI for College Applications

"About a third of high school seniors who applied to college in the 2023-24 school year acknowledged using an AI tool for help in writing admissions essays, according to research released this month by foundry10, an organization focused on improving learning.

About half of those students — or roughly one in six students overall — used AI the way Makena did, to brainstorm essay topics or polish their spelling and grammar. And about 6 percent of students overall—including some of Makena's classmates, she said — relied on AI to write the final drafts of their essays instead of doing most of the writing themselves.

Meanwhile, nearly a quarter of students admitted to Harvard University's class of 2027 paid a private admissions consultant for help with their applications.

The use of outside help, in other words, is rampant in college admissions, opening up a host of questions about ethics, norms, and equal opportunity.

Top among them: Which — if any — of these students cheated in the admissions process?

For now, the answer is murky."

Thursday, July 25, 2024

Philip Glass Says Crimean Theater Is Using His Music Without Permission; The Daily Beast, July 25, 2024

Clay Walker, The Daily Beast; Philip Glass Says Crimean Theater Is Using His Music Without Permission

"Legendary American composer Philip Glass had some harsh words after learning that a theater in Russian-annexed Crimea plans to use his music and name as part of a new show. In a letter posted to X, Glass explained that he had learned a new ballet called Wuthering Heights is set to open at the Sevastopol Opera and Ballet Theater—using works he had penned without his consent. “No permission for the use of my music in the ballet or the use of my name in the advertising and promotion of the ballet was ever requested of me or given by me. The use of my music and the use of my name without my consent is in violation of the Berne Convention for the Protection of Literary and Artistic works to which the Russian Federation is a signatory. It is an act of piracy,” Glass wrote."