Monday, July 15, 2024

National Research Act at 50: An Ethics Landmark in Need of an Update; The Hastings Center, July 12, 2024

Mark A. Rothstein and Leslie E. Wolf, The Hastings Center; National Research Act at 50: An Ethics Landmark in Need of an Update

"On July 12, 1974, President Richard M. Nixon signed into law the National Research Act, one of his last major official actions before resigning on August 8. He was preoccupied by Watergate at the time, and there has been speculation about whether he would have done this under less stressful circumstances. But enactment of the NRA was a foregone conclusion. After a series of legislative compromises, the Joint Senate-House Conference Report was approved by bipartisan, veto-proof margins in the Senate (72-14) and House (311-10).

The NRA was a direct response to the infamous Untreated Syphilis Study at Tuskegee, whose existence and egregious practices, disclosed by whistleblower Peter Buxtun, were originally reported by Associated Press journalist Jean Heller in the Washington Star on July 25, 1972. After congressional hearings exposing multiple research abuses, including the Tuskegee syphilis study, and legislative proposals in 1973, support coalesced around legislation with three main elements: (1) directing preparation of guidance documents on broad research ethics principles and various controversial issues by multidisciplinary experts appointed to a new federal commission, (2) adopting a model of institutional review boards, and (3) establishing federal research regulations applicable to researchers receiving federal funding.

This essay reflects on the NRA at 50. It traces the system of research ethics guidance, review, and regulation the NRA established; assesses how well that model has functioned; and describes some key challenges for the present and future. We discuss some important substantive and procedural gaps in the NRA regulatory structure that must be addressed to respond to the ethical issues raised by modern research." 

Holy See welcomes ‘significant’ new treaty on intellectual property; Vatican News, July 10, 2024

Joseph Tulloch, Vatican News; Holy See welcomes ‘significant’ new treaty on intellectual property

"Archbishop Ettore Balestrero, the Permanent Observer of the Holy See to the United Nations and Other International Organizations in Geneva, has welcomed a historic new treaty on intellectual property.

In an address to member states of the UN's World Intellectual Property Organisation (WIPO), the Archbishop called the treaty a “significant step forward”.

The treaty


WIPO member states adopted the agreement – which regards “Intellectual Property, Genetic Resources and Associated Traditional Knowledge” – in May of this year.

The treaty establishes a new disclosure requirement in international law for patent applicants whose inventions are based on genetic resources and/or associated traditional knowledge.

It was the first WIPO treaty in over a decade, as well as the first ever to deal with the genetic resources and traditional knowledge of indigenous peoples."

One-third of US military could be robotic by 2039: Milley; Military Times, July 14, 2024

Military Times; One-third of US military could be robotic by 2039: Milley

"The 20th chairman of the Joint Chiefs of Staff believes growing artificial intelligence and unmanned technology could lead to robotic military forces in the future.

“Ten to fifteen years from now, my guess is a third, maybe 25% to a third of the U.S. military will be robotic,” said retired Army Gen. Mark Milley at an Axios event Thursday launching the publication’s Future of Defense newsletter.

He noted these robots could be commanded and controlled by AI systems."

AI Ethics for Peace; L'Osservatore Romano, July 12, 2024

L'Osservatore Romano; AI Ethics for Peace

"Eleven World Religions, sixteen new signatories, thirteen nations in attendance, more than 150 participants: these are some of the numbers of AI Ethics for Peace, the historic multireligious event held in Hiroshima, Japan, on 9 and 10 July...

The choice to hold this event in Hiroshima has a deeply symbolic meaning, because no other city so powerfully bears witness to the consequences of destructive technology and to the need for a lasting quest for peace.

AI Ethics for Peace, over two days, brought together the world’s major religions to underscore their crucial importance in shaping a society in which, in the face of the relentless acceleration of technology, the call for technological development that protects the dignity of each individual human being and the entire planet becomes a reality.

This will be possible only if algorethics, that is, the development and application of an ethics of artificial intelligence, becomes an indispensable element by design, built in from the very moment a system is designed.

Remarkable was the talk by Father Paolo Benanti, Professor of Ethics of Technology at the Pontifical Gregorian University, who presented the Hiroshima Addendum on Generative AI. This document focuses on the need for ethical governance of generative AI — an ongoing and iterative process that requires a sustained commitment from all stakeholders so that its potential is used for the good of humanity.

The attending big tech leaders bore witness to the application of the Rome Call principles to the reality of the tech world and to the responsibility that AI producers share."

Pope asks world's religions to push for ethical AI development; United States Conference of Catholic Bishops, July 10, 2024

Justin McLellan, United States Conference of Catholic Bishops; Pope asks world's religions to push for ethical AI development

"Pope Francis called on representatives from the world's religions to unite behind the defense of human dignity in an age that will be defined by artificial intelligence.

"I ask you to show the world that we are united in asking for a proactive commitment to protect human dignity in this new era of machines," the pope wrote in a message to participants of a conference on AI ethics which hosted representatives from 11 world religions.

Religious leaders representing Eastern faiths such as Buddhism, Hinduism, Zoroastrianism, and Bahá'í, among others, as well as leaders of the three Abrahamic religions gathered in Hiroshima, Japan, for the conference, titled "AI Ethics for Peace." They also signed the Rome Call for AI Ethics -- a document developed by the Pontifical Academy for Life which asks signatories to promote an ethical approach to AI development."

Friday, July 12, 2024

AI Briefing: Senators propose new regulations for privacy, transparency and copyright protections; Digiday, July 12, 2024

Marty Swant, Digiday; AI Briefing: Senators propose new regulations for privacy, transparency and copyright protections

"The U.S. Senate Commerce Committee on Thursday held a hearing to address a range of concerns about the intersection of AI and privacy. While some lawmakers expressed concern about AI accelerating risks – such as online surveillance, scams, hyper-targeting ads and discriminatory business practices — others cautioned regulations might further protect tech giants and burden smaller businesses."

After abuse revelations, professors grapple with how to teach Munro; The Washington Post, July 12, 2024

The Washington Post; After abuse revelations, professors grapple with how to teach Munro

"Professors are wrestling with how to teach Munro’s work. Bookstores are debating whether to feature it on their shelves. And Canadians are grappling with the age-old question: Is it possible to divorce the art from the artist?"

Class explores how media impacts perceptions of health issues; University of Pittsburgh, University Times, July 11, 2024

MARTY LEVINE, University of Pittsburgh, University Times; Class explores how media impacts perceptions of health issues

"Communicating a message through storytelling, and not the mere recitation of facts, is key to public health communication, and Hoffman collaborates often with the Norman Lear Center at the University of Southern California, whose “Hollywood, Health and Society” project has conducted research on everything from “Increases in calls to the CDC National STD and AIDS hotline following AIDS-related episodes in a soap opera” to “The Impact of Food and Nutrition Messages on The Daily Show with Jon Stewart.” It also provides consultants to shows from “Breaking Bad” to “Black-ish,” and a Lear Center rep spoke in Hoffman’s class.

Hoffman was recently lead author on a published overview of current research evidence on the media and health, which found that “health storylines on fictional television influence viewers.”...

Pitt Public Health was the leader in developing the Salk vaccine for polio, she points out. Public health education and media literacy can be a sort of vaccination against misinformation, she says: “We often talk about it as inoculation. Misinformation is not going away. How can we make people less susceptible to it?”"

Thursday, July 11, 2024

The assignment: Build AI tools for journalists – and make ethics job one; Poynter, July 8, 2024

Poynter; The assignment: Build AI tools for journalists – and make ethics job one

"Imagine you had virtually unlimited money, time and resources to develop an AI technology that would be useful to journalists.

What would you dream, pitch and design?

And how would you make sure your idea was journalistically ethical?

That was the scenario posed to about 50 AI thinkers and journalists at Poynter’s recent invitation-only Summit on AI, Ethics & Journalism.

The summit drew together news editors, futurists and product leaders June 11-12 in St. Petersburg, Florida. As part of the event, Poynter partnered with Hacks/Hackers to ask groups of attendees to brainstorm ethically considered AI tools that they would create for journalists if they had practically unlimited time and resources.

Event organizer Kelly McBride, senior vice president and chair of the Craig Newmark Center for Ethics and Leadership at Poynter, said the hackathon was born out of Poynter’s desire to help journalists flex their intellectual muscles as they consider AI’s ethical implications.

“We wanted to encourage journalists to start thinking of ways to deploy AI in their work that would both honor our ethical traditions and address the concerns of news consumers,” she said.

Alex Mahadevan, director of Poynter’s digital media literacy project MediaWise, covers the use of generative AI models in journalism and their potential to spread misinformation."

Religious education group sues Fort Wayne man over copyright claims; The Journal Gazette, July 8, 2024

The Journal Gazette; Religious education group sues Fort Wayne man over copyright claims

"LifeWise claims in its lawsuit that Parrish signed up online to volunteer with the hope of publishing information that might damage the organization’s reputation and prompt parents to oppose LifeWise Academy chapters in their communities.

Parrish accessed LifeWise’s information storage systems, downloaded internal documents and posted them along with the LifeWise curriculum on his website, parentsagainstlifewise.online, according to the lawsuit. It said Parrish also posted links to the curriculum on the Facebook group.

“He improperly obtained our entire copyright protected curriculum, and he posted to his website without our permission,” LifeWise said in a statement Monday.

LifeWise tried to get Parrish to voluntarily remove its curriculum, but the complaint said the organization’s efforts – including an attorney’s cease-and-desist letter and social media messages the chief operating officer sent him – were unsuccessful.

The lawsuit said Parrish responded to the letter with a meme stating, “It’s called fair use (expletive).”

LifeWise disagrees. In its statement, the organization said its curriculum is licensed through a publisher called LifeWay, and anyone is welcome to purchase the LifeWay curriculum through its website.

“Posting the entire curriculum is not ‘fair use,’ and we are confident that the judge will agree,” LifeWise said Monday."

Wednesday, July 10, 2024

Considering the Ethics of AI Assistants; Tech Policy Press, July 7, 2024

JUSTIN HENDRIX, Tech Policy Press; Considering the Ethics of AI Assistants

"Just a couple of weeks before Pichai took the stage, in April, Google DeepMind published a paper that boasts 57 authors, including experts from a range of disciplines from different parts of Google, including DeepMind, Jigsaw, and Google Research, as well as researchers from academic institutions such as Oxford, University College London, Delft University of Technology, University of Edinburgh, and a think tank at Georgetown, the Center for Security and Emerging Technology. The paper speculates about the ethical and societal risks posed by the types of AI assistants Google and other tech firms want to build, which the authors say are “likely to have a profound impact on our individual and collective lives.”"

Tuesday, July 9, 2024

Record labels sue AI music startups for copyright infringement; WBUR Here & Now, July 8, 2024

WBUR Here & Now; Record labels sue AI music startups for copyright infringement

"Major record labels including Sony, Universal Music Group and Warner are suing two music startups that use artificial intelligence. The labels say Suno and Udio rely on mass copyright infringement, echoing similar complaints from authors, publishers and artists who argue that generative AI infringes on copyright.

Here & Now's Lisa Mullins discusses the cases with Ina Fried, chief technology correspondent for Axios."

Bridging the Digital Divide: Advancing Access to Broadband for All; American Bar Association (ABA), June 3, 2024

Emily Bergeron, American Bar Association (ABA); Bridging the Digital Divide: Advancing Access to Broadband for All

"The “digital divide” is the disparity in access to and utilization of information and communication technologies between different groups based on socioeconomic status, geographic location, age, education, or other demographic characteristics. This divide often manifests as unequal access to the internet and digital devices, leading to disparities in opportunities, information, health care, education, and participation in government and the digital- and knowledge-based economy. The COVID-19 pandemic brought considerable focus to the digital divide. Individuals with broadband access could work, attend school, shop, and consult with their doctors from the comfort of their homes, while those lacking access had few options...

Eight out of 10 white adults have a broadband connection at home, whereas smaller percentages of Black and Hispanic adults—precisely 71 percent and 65 percent—indicate the same. Notably, Black adults are more likely than white adults to believe that a lack of high-speed internet at home puts people at a significant disadvantage when connecting with medical professionals, with 63 percent of Black adults expressing this view compared to 49 percent of white adults. The perspective of Hispanic adults, at 53 percent, does not significantly differ from that of individuals from other racial and ethnic backgrounds.

Despite federal efforts to expand broadband access in Tribal lands, a significant disparity persists. Approximately 18 percent of people in these areas still lack broadband services, while this figure is only 4 percent for non-Tribal areas. The gap widens further in rural regions, where about 30 percent of individuals on Tribal lands lack broadband access compared to 14 percent in non-Tribal areas...

The digital divide is not just a matter of technology. It undermines social justice and equality. By working collectively to bridge this divide, we can help create a more inclusive, connected, and equitable society where everyone can harness the benefits of the digital age. It is incumbent on governments, policymakers, and private organizations to take proactive measures and commit to digital inclusion, ensuring that no one is left behind in this fast-evolving digital landscape."

Monday, July 8, 2024

10 Things Every Board Member Needs to Know; American Libraries, July 1, 2024

Sanhita SinhaRoy, American Libraries; 10 Things Every Board Member Needs to Know

Kip Currier: Preparing Board members for effective, ethical service is vital for all organizations. Surprisingly, the word "ethics" is never specifically mentioned in this article, though ethics is implicated with the words "abiding by the duties of care, loyalty, and honesty" at the very end. Board members need to be aware of ethics principles/codes of organizations where they serve, as well as legal requirements and fiduciary responsibilities that have ethical dimensions in states where their organizations are located. 

[Excerpt]

"As libraries and library workers face censorship attempts, campus protests, and budget cuts, among other challenges, Harrington—a consultant and current president of the Timberland Regional (Wash.) Library board of trustees—led the program “Top 10 Things Every Library Board Member Should Know—but Often Doesn’t.”...

#10 There are specific attributes of an effective nonprofit board member.

They include a commitment to the mission of the organization; understanding of the board’s governance roles; active involvement in board activities and committees; thinking and acting strategically; not being involved in day-to-day management of the organization; abiding by the duties of care, loyalty, and honesty; and supporting the organization financially and through advocacy."

Five Questions to Ask Before Implementing Generative AI; Markkula Center for Applied Ethics at Santa Clara University, July 3, 2024

Ann Skeet, Markkula Center for Applied Ethics at Santa Clara University; Five Questions to Ask Before Implementing Generative AI

"While you don’t want to get too far into the weeds, you can ask for the sources of data that the system is being trained on, says Ann Skeet, senior director of leadership ethics at the Markkula Center for Applied Ethics and coauthor of Ethics in the Age of Disruptive Technologies: An Operational Roadmap. “[Directors] can also advise proactively choosing an AI system that has an identifiable training data set.”"

Sunday, July 7, 2024

Jim Clyburn Is Right About What Democrats Should Do Next; The New York Times, July 7, 2024

Ezra Klein, The New York Times; Jim Clyburn Is Right About What Democrats Should Do Next

Kip Currier: The most important sentence in this Ezra Klein OpEd is this one: 

"What Democrats denied themselves over the past few years was information."

Democracies, and political parties, depend on informed citizenries. Informed citizenries are cultivated and advanced when people have access to accurate, trustworthy information. Without informed citizenries, democracies and political parties are like endangered species that can weaken and disappear.

Access to information is the core principle that information centers -- libraries, archives, museums -- make possible. As New York Public Library Director Anthony Marx has previously underscored, "libraries are in the information access business." 

Information centers serve essential roles for healthy, functioning democracies, political parties, and societies.

Supreme Court ethics remain at center stage after hard-right rulings; The Washington Post, July 6, 2024

The Washington Post; Supreme Court ethics remain at center stage after hard-right rulings

"Several experts said the court needs to fully embrace an ethics overhaul to help reassure the public."

Saturday, July 6, 2024

New York’s First Black Librarians Changed the Way We Read; The New York Times, June 19, 2024

Jennifer Schuessler, The New York Times; New York’s First Black Librarians Changed the Way We Read

"Today, figures like Schomburg and the historian and activist W.E.B. Du Bois (another collector and compiler of Black books) are hailed as the founders of the 20th-century Black intellectual tradition. But increasingly, scholars are also uncovering the important role of the women who often ran the libraries, where they built collections and — just as important — communities of readers.

“Mr. Schomburg’s collection is really the seed,” said Joy Bivins, the current director of the Schomburg Center for Research in Black Culture, as the 135th Street library, currently home to more than 11 million items, is now known. “But in many ways, it is these women who were the institution builders.”

Many were among the first Black women to attend library school, where they learned the tools and the systems of the rapidly professionalizing field. On the job, they learned these tools weren’t always suited to Black books and ideas, so they invented their own.

At times, they battled overt and covert censorship that would be familiar in today’s climate of rising book bans and restrictions on teaching so-called divisive concepts. But whether they worked in world-famous research collections or modest public branch libraries, these pioneers saw their role as not just about tending old books but also about making room for new people and new ideas."

THE GREAT SCRAPE: THE CLASH BETWEEN SCRAPING AND PRIVACY; SSRN, July 3, 2024

Daniel J. Solove, George Washington University Law School; Woodrow Hartzog, Boston University School of Law and Stanford Law School Center for Internet and Society; THE GREAT SCRAPE: THE CLASH BETWEEN SCRAPING AND PRIVACY

"ABSTRACT

Artificial intelligence (AI) systems depend on massive quantities of data, often gathered by “scraping” – the automated extraction of large amounts of data from the internet. A great deal of scraped data is about people. This personal data provides the grist for AI tools such as facial recognition, deep fakes, and generative AI. Although scraping enables web searching, archival, and meaningful scientific research, scraping for AI can also be objectionable or even harmful to individuals and society.


Organizations are scraping at an escalating pace and scale, even though many privacy laws are seemingly incongruous with the practice. In this Article, we contend that scraping must undergo a serious reckoning with privacy law. Scraping violates nearly all of the key principles in privacy laws, including fairness; individual rights and control; transparency; consent; purpose specification and secondary use restrictions; data minimization; onward transfer; and data security. With scraping, data protection laws built around these requirements are ignored.


Scraping has evaded a reckoning with privacy law largely because scrapers act as if all publicly available data were free for the taking. But the public availability of scraped data shouldn’t give scrapers a free pass. Privacy law regularly protects publicly available data, and privacy principles are implicated even when personal data is accessible to others.


This Article explores the fundamental tension between scraping and privacy law. With the zealous pursuit and astronomical growth of AI, we are in the midst of what we call the “great scrape.” There must now be a great reconciliation."

Friday, July 5, 2024

A.I. ‘Friend’ for Public School Students Falls Flat; The New York Times, July 1, 2024

 Dana Goldstein, The New York Times; A.I. ‘Friend’ for Public School Students Falls Flat

"A.I. companies are heavily marketing themselves to schools, which spend tens of billions of dollars annually on technology. But AllHere’s sudden breakdown illustrates some of the risks of investing taxpayer dollars in artificial intelligence, a technology with enormous potential but little track record, especially when it comes to children. There are many complicated issues at play, including privacy of student data and the accuracy of any information offered via chatbots. And A.I. may also run counter to another growing interest for education leaders and parents — reducing children’s screen time."

Thursday, July 4, 2024

The AI Ethicist: Fact or Fiction?; SSRN, The Wharton School of the University of Pennsylvania, November 20, 2023

 

Christian Terwiesch, University of Pennsylvania - Operations & Information Management Department; Lennart Meincke, University of Pennsylvania - The Wharton School; Gideon Nave, University of Pennsylvania - The Wharton School; SSRN; The AI Ethicist: Fact or Fiction?

"Abstract

This study investigates the efficacy of an AI-based ethical advisor using the GPT-4 model. Drawing from a pool of ethical dilemmas published in the New York Times column “The Ethicist”, we compared the ethical advice given by the human expert and author of the column, Dr. Kwame Anthony Appiah, with AI-generated advice. The comparison is done by evaluating the perceived usefulness of the ethical advice across three distinct groups: random subjects recruited from an online platform, Wharton MBA students, and a panel of ethical decision-making experts comprising academics and clergy. Our findings revealed no significant difference in the perceived value of the advice between human generated ethical advice and AI-generated ethical advice. When forced to choose between the two sources of advice, the random subjects recruited online displayed a slight but significant preference for the AI-generated advice, selecting it 60% of the time, while MBA students and the expert panel showed no significant preference."

AI Chatbots Seem as Ethical as a New York Times Advice Columnist; Scientific American, July 1, 2024

Scientific American; AI Chatbots Seem as Ethical as a New York Times Advice Columnist

"In 1691 the London newspaper the Athenian Mercury published what may have been the world’s first advice column. This kicked off a thriving genre that has produced such variations as Ask Ann Landers, which entertained readers across North America for half a century, and philosopher Kwame Anthony Appiah’s weekly The Ethicist column in the New York Times magazine. But human advice-givers now have competition: artificial intelligence—particularly in the form of large language models (LLMs), such as OpenAI’s ChatGPT—may be poised to give human-level moral advice.

LLMs have “a superhuman ability to evaluate moral situations because a human can only be trained on so many books and so many social experiences—and an LLM basically knows the Internet,” says Thilo Hagendorff, a computer scientist at the University of Stuttgart in Germany. “The moral reasoning of LLMs is way better than the moral reasoning of an average human.” Artificial intelligence chatbots lack key features of human ethicists, including self-consciousness, emotion and intention. But Hagendorff says those shortcomings haven’t stopped LLMs (which ingest enormous volumes of text, including descriptions of moral quandaries) from generating reasonable answers to ethical problems.

In fact, two recent studies conclude that the advice given by state-of-the-art LLMs is at least as good as what Appiah provides in the pages of the New York Times. One found “no significant difference” between the perceived value of advice given by OpenAI’s GPT-4 and that given by Appiah, as judged by university students, ethical experts and a set of 100 evaluators recruited online. The results were released as a working paper last fall by a research team including Christian Terwiesch, chair of the Operations, Information and Decisions department at the Wharton School of the University of Pennsylvania."

Wednesday, July 3, 2024

AI Ethics Council Founded by OpenAI and Operation HOPE Holds Inaugural Meeting; PR Newswire, July 1, 2024

Operation HOPE, Inc., PR Newswire; AI Ethics Council Founded by OpenAI and Operation HOPE Holds Inaugural Meeting

"The AI Ethics Council, founded by OpenAI CEO Sam Altman and Operation HOPE CEO John Hope Bryant, held its inaugural meeting on Friday, June 28th in Atlanta. The group, which evolved out of a listening tour that Mr. Altman and Mr. Bryant conducted together, initiated last spring at Clark Atlanta University, was formed to ensure that traditionally underrepresented communities would have a voice in the evolution of AI overall — to help frame the human and ethical considerations around the technology and to ensure vast participation in the economic opportunities of artificial intelligence. The council was announced in December 2023 at the HOPE Global Forums | Annual Meeting in Atlanta.

The AI Ethics Council is an interdisciplinary body of diverse experts designed to become a leading authority in identifying, advising on, and addressing ethical issues related to artificial intelligence and its impact on underserved and historically excluded communities."

How ABC News Could Fix CNN’s Mockery Of The First Presidential Debate; Forbes, July 3, 2024

Subramaniam Vincent, Forbes; How ABC News Could Fix CNN’s Mockery Of The First Presidential Debate

"If we are bringing prolific liars live on an election debate, our responsibility to truth-telling and truth-determination requires that we make a sincere attempt to vet their claims within a few minutes of them being aired. This is when the audience of millions is in the frame of comparing candidates. And when those claims are dubious, it is an act of ethical journalism to intervene to ask its promoters to defend with actual evidence, or call them out."

We asked people about using AI to make the news. They’re anxious and annoyed; Poynter, June 27, 2024

Poynter; We asked people about using AI to make the news. They’re anxious and annoyed

"Sometimes, when it comes to using artificial intelligence in journalism, people think of a calculator, an accepted tool that makes work faster and easier.

Sometimes, they think it’s flat-out cheating, passing off the work of a robot for a human journalist.

Sometimes, they don’t know what to think at all — and it makes them anxious.

All of those attitudes emerged from new focus group research from the University of Minnesota commissioned by the Poynter Institute about news consumers’ attitudes toward AI in journalism.

The research, conducted by Benjamin Toff, director of the Minnesota Journalism Center and associate professor at Minnesota’s Hubbard School of Journalism & Mass Communication, was unveiled to participants at Poynter’s Summit on AI, Ethics and Journalism on June 11. The summit brought together dozens of journalists and technologists to discuss the ethical implications for journalists using AI tools in their work.

“I think it’s a good reminder of not getting too far ahead of the public,” Toff said, in terms of key takeaways for newsrooms. “However much there might be usefulness around using these tools … you need to be able to communicate about it in ways that are not going to be alienating to large segments of the public who are really concerned about what these developments will mean for society at large.”

The focus groups, conducted in late May, involved 26 average news consumers, some who knew a fair amount about AI’s use in journalism, and some who knew little. 

Toff discussed three key findings from the focus groups:"

Tuesday, July 2, 2024

AI ETHICS FOR PEACE: WORLD RELIGIONS COMMIT TO THE ROME CALL; July 9 & 10, 2024

AI ETHICS FOR PEACE: WORLD RELIGIONS COMMIT TO THE ROME CALL

"An historic multi-faith event will take place in Hiroshima, Japan, on July 9th and 10th, 2024. Titled AI Ethics for Peace: World Religions commit to the Rome Call, this event holds profound significance as it convenes in Hiroshima, a city that stands as a powerful testament to the consequences of destructive technology and the enduring quest for peace. In this symbolic location, leaders of major world religions will gather to sign the Rome Call for AI Ethics, emphasizing the vital importance of guiding the development of artificial intelligence with ethical principles to ensure it serves the good of humanity.

The event is promoted by the Pontifical Academy of Life, Religions for Peace Japan, the United Arab Emirates’ Abu Dhabi Forum for Peace, and the Chief Rabbinate of Israel’s Commission for Interfaith Relations.

BACKGROUND

The Rome Call for AI Ethics was issued by the Pontifical Academy for Life and furthered by the RenAIssance Foundation in an effort to promote algorethics, i.e., the ethical development of artificial intelligence.

On February 28th, 2020, the Pontifical Academy for Life, together with Microsoft, IBM, the UN Food and Agriculture Organization (FAO) and the Italian Government – and in the presence of the President of the EU Parliament – signed this “Call for AI Ethics” in Rome.

The document aims to foster an ethical approach to Artificial Intelligence (AI) and to promote a sense of responsibility among organizations, governments, multinational technology companies, and institutions, in order to shape a future in which digital innovation and technological progress serve human genius and creativity, while preserving and respecting the dignity of each and every individual, as well as our planet’s.

Following the signing of the Rome Call by leaders of the three Abrahamic religions (Christianity, Islam and Judaism) in 2023, in the name of peaceful coexistence and shared values, the Hiroshima event reinforces the view that a multi-religious approach to vital questions such as AI ethics is the path to follow.

Religions play a crucial role in shaping a world in which the concept of development proceeds hand in hand with protecting the dignity of each individual human being and preserving the planet, our common home. Coming together to call for the development of an AI ethic is a step that all religious traditions must take."

More Adventures With AI Claude, The Contrite Poet; Religion Unplugged, June 11, 2024

Dr. Michael Brown, Religion Unplugged; More Adventures With AI Claude, The Contrite Poet

"Working with the AI bot Claude is, in no particular order, amazing, frustrating, and hilarious...

I have asked Claude detailed Hebrew grammatical questions or asked him to translate difficult rabbinic Hebrew passages, and time and time again, Claude has nailed it.

But just as frequently, he creates texts out of thin air, side by side with accurate citations, which then have to be vetted one by one.

When I asked Claude why he manufactured citations, he explained that he aims to please and can sometimes go a little too far. In other words, Claude tells me what he thinks I want to hear...

I’m sure that AI bots are already providing “companionship” for an increasingly isolated generation, not to mention serving up falsehoods side by side with truths for unsuspecting readers.

And so, the promise and the threat of AI continue to grow by the day, with a little entertainment and humor added in."

Are AI-powered church services coming to a pew near you?; Scripps News, May 10, 2024

""Depending upon what data sets it's using, we get an intense amount of bias within AI right now," Callaway told Scripps News. "And it reflects, shock and awe, the same bias that we have as humans. And so having someone that is actually a kind of wise guide or mentor to help you discern how to even interpret, understand the results that AI is giving you is really important."

But Callaway says there's good that can come from AI, like translating the Bible into various languages...

Rabbi Geoff Mitelman, who helped found the studies at Temple B'Nai Or through his organization Sinai and Synapses, agrees, saying AI can be an aid in study...

However, there are concerns across religions about the interpretation of such texts, bias and misinformation.

"The spread of misinformation and how easy it is to create and then spread misinformation, whether that's using something like Dall-E or ChatGPT or videos and also algorithms that will spread misinformation — because at least for hundreds of thousands of years it was better for humans to trust than to not trust, right?" said Mitelman.

That cautious view of AI and religion seems to translate across practices, a poll from the Christian research group Barna shows. Over half of Christians, 52%, said they'd be disappointed if they found out AI was used in their church."

Navigate ethical and regulatory issues of using AI; Thomson Reuters, July 1, 2024

Thomson Reuters; Navigate ethical and regulatory issues of using AI

"However, the need for regulation to ensure clarity, trust, and mitigate risk has not gone unnoticed. According to the report, the vast majority (93%) of professionals surveyed said they recognize the need for regulation. Among the top concerns: a lack of trust and unease about the accuracy of AI. This is especially true in the context of using the AI output as advice without a human checking for its accuracy."

Monday, July 1, 2024

How to Get Voters the Facts They Need Without a Trump Jan. 6 Trial; The New York Times, July 1, 2024

Andrew Weissmann, The New York Times; How to Get Voters the Facts They Need Without a Trump Jan. 6 Trial

"The benefit of an evidentiary hearing would be enormous, giving the public at least some information it needs before going to the polls in November. The hearing would permit the airing, in an adversarial proceeding with full due process for Mr. Trump, of evidence that goes to the heart of the most profound indictment in this nation’s history."

Biden Warns That Supreme Court’s Immunity Ruling Will Embolden Trump; The New York Times, July 1, 2024

Michael D. Shear, The New York Times; Biden Warns That Supreme Court’s Immunity Ruling Will Embolden Trump

"President Biden warned on Monday that the Supreme Court’s decision on presidential immunity meant that there were “virtually no limits on what the president can do” and urged voters to prevent former President Donald J. Trump from returning to the White House freed from the constraints of the law.

“The American people must decide if they want to entrust the president once again — the presidency — to Donald Trump,” Mr. Biden said during brief remarks, “knowing he’ll be more emboldened to do whatever he pleases whenever he wants to do it.”"

US Supreme Court liberals lament ruling making the president 'a king above the law'; Reuters, July 1, 2024

Reuters; US Supreme Court liberals lament ruling making the president 'a king above the law'

"The president of the United States has been elevated to the status of "a king above the law." The occupant of the White House may order assassinations of political rivals without fear of prosecution. America's leader may now be insulated from criminal consequences for whatever he or she wants to do in office.

That is what U.S. Supreme Court liberals said in dissent to Monday's landmark decision recognizing for the first time broad immunity from prosecution for former presidents."

God save us from this dishonorable court; The Washington Post, July 1, 2024

The Washington Post; God save us from this dishonorable court

"Smith’s office is now consigned to assess the tatters in which the court’s ruling has left its prosecution and determine, like a homeowner after a tornado has touched down, what can be salvaged.

The country is now left to worry about whether Trump will ever be held accountable — and about the implications of the court’s ruling for future presidents, including, most chillingly, Trump himself.

As Jackson wrote in a separate dissent, “Having now cast the shadow of doubt over when — if ever — a former President will be subject to criminal liability for any criminal conduct he engages in while on duty, the majority incentivizes all future Presidents to cross the line of criminality while in office, knowing that unless they act ‘manifestly or palpably beyond [their] authority,’ they will be presumed above prosecution and punishment alike.”

Sotomayor was similarly apocalyptic. “With fear for our democracy, I dissent,” she closed her dissent. Both Sotomayor and Jackson abandoned the customary “respectfully” — for good reason.

God knows what a reelected Trump would do in a second term. God save us from this dishonorable court."