Wednesday, June 26, 2024

The MTV News website is gone; The Verge, June 25, 2024

Andrew Liszewski, The Verge; The MTV News website is gone

"The archives of the MTV News website, which had remained accessible online after the unit was shut down last year by parent company Paramount Global, have now been completely taken offline. As Variety reported yesterday, both mtvnews.com and mtv.com/news now redirect visitors to the MTV website’s front page...

Although the MTV News website was no longer publishing new stories, its extensive archive, dating back over two decades to its launch in 1996, remained online. But as former staffers discovered yesterday, that archive is no longer accessible."

Tuesday, June 25, 2024

Collaborative ethics: innovating collaboration between ethicists and life scientists; Nature, June 20, 2024

Nature; Collaborative ethics: innovating collaboration between ethicists and life scientists

"Is there a place for ethics in scientific research, not about science or after scientific breakthroughs? We are convinced that there is, and we describe here our model for collaboration between scientists and ethicists.

Timely collaboration with ethicists benefits science, as it can make an essential contribution to the research process. In our view, such critical discussions can improve the efficiency and robustness of outcomes, particularly in groundbreaking or disruptive research. The discussion of ethical implications during the research process can also prepare a team for a formal ethics review and criticism after publication.

The practice of collaborative ethics also advances the humanities, as direct involvement with the sciences allows long-held assumptions and arguments to be put to the test. As philosophers and ethicists, we argue that innovative life sciences research requires new methods in ethics, as disruptive concepts and research outcomes no longer fit traditional notions and norms. Those methods should not be developed at a distance from the proverbial philosopher’s armchair or in after-the-fact ethics analysis. We argue that, rather, we should join scientists and meet where science evolves in real-time: as Knoppers and Chadwick put it in the early days of genomic science, “Ethical thinking will inevitably continue to evolve as the science does”1."

Monday, June 24, 2024

New Legal Ethics Opinion Cautions Lawyers: You ‘Must Be Proficient’ In the Use of Generative AI; LawSites, June 24, 2024

LawSites; New Legal Ethics Opinion Cautions Lawyers: You ‘Must Be Proficient’ In the Use of Generative AI

"A new legal ethics opinion on the use of generative AI in law practice makes one point very clear: lawyers are required to maintain competence across all technological means relevant to their practices, and that includes the use of generative AI.

The opinion, jointly issued by the Pennsylvania Bar Association and Philadelphia Bar Association, was issued to educate attorneys on the benefits and pitfalls of using generative AI and to provide ethical guidelines.

While the opinion is focused on AI, it repeatedly emphasizes that a lawyer’s ethical obligations surrounding this emerging form of technology are no different than those for any form of technology...

12 Points of Responsibility

The 16-page opinion offers a concise primer on the use of generative AI in law practice, including a brief background on the technology and a summary of other states’ ethics opinions.

But most importantly, it concludes with 12 points of responsibility pertaining to lawyers using generative AI:

  • Be truthful and accurate: The opinion warns that lawyers must ensure that AI-generated content, such as legal documents or advice, is truthful, accurate and based on sound legal reasoning, upholding principles of honesty and integrity in their professional conduct.
  • Verify all citations and the accuracy of cited materials: Lawyers must ensure the citations they use in legal documents or arguments are accurate and relevant. That includes verifying that the citations accurately reflect the content they reference.
  • Ensure competence: Lawyers must be competent in using AI technologies.
  • Maintain confidentiality: Lawyers must safeguard information relating to the representation of a client and ensure that AI systems handling confidential data both adhere to strict confidentiality measures and prevent the sharing of confidential data with others not protected by the attorney-client privilege.
  • Identify conflicts of interest: Lawyers must be vigilant, the opinion says, in identifying and addressing potential conflicts of interest arising from using AI systems.
  • Communicate with clients: Lawyers must communicate with clients about using AI in their practices, providing clear and transparent explanations of how such tools are employed and their potential impact on case outcomes. If necessary, lawyers should obtain client consent before using certain AI tools.
  • Ensure information is unbiased and accurate: Lawyers must ensure that the data used to train AI models is accurate, unbiased, and ethically sourced to prevent perpetuating biases or inaccuracies in AI-generated content.
  • Ensure AI is properly used: Lawyers must be vigilant against the misuse of AI-generated content, ensuring it is not used to deceive or manipulate legal processes, evidence or outcomes.
  • Adhere to ethical standards: Lawyers must stay informed about relevant regulations and guidelines governing the use of AI in legal practice to ensure compliance with legal and ethical standards.
  • Exercise professional judgment: Lawyers must exercise their professional judgment in conjunction with AI-generated content, and recognize that AI is a tool that assists but does not replace legal expertise and analysis.
  • Use proper billing practices: AI has tremendous time-saving capabilities. Lawyers must, therefore, ensure that AI-related expenses are reasonable and appropriately disclosed to clients.
  • Maintain transparency: Lawyers should be transparent with clients, colleagues, and the courts about the use of AI tools in legal practice, including disclosing any limitations or uncertainties associated with AI-generated content.

My Advice: Don’t Be Stupid

Over the years of writing about legal technology and legal ethics, I have developed my own shortcut rule for staying out of trouble: Don’t be stupid...

You can read the full opinion here: Joint Formal Opinion 2024-200."

AI use must include ethical scrutiny; CT Mirror, June 24, 2024

Josemari Feliciano, CT Mirror; AI use must include ethical scrutiny

"AI use may deal with data that are deeply intertwined with personal and societal dimensions. The potential for AI to impact societal structures, influence public policy, and reshape economies is immense. This power carries with it an obligation to prevent harm and ensure fairness, necessitating a formal and transparent review process akin to that overseen by IRBs.

The use of AI without meticulous scrutiny of the training data and study parameters can inadvertently perpetuate or exacerbate harm to minority groups. If the data used to train AI systems is biased or non-representative, the resulting algorithms can reinforce existing disparities."

How to Fix “AI’s Original Sin”; O'Reilly, June 18, 2024

  Tim O’Reilly, O'Reilly; How to Fix “AI’s Original Sin”

"In conversation with reporter Cade Metz, who broke the story, on the New York Times podcast The Daily, host Michael Barbaro called copyright violation “AI’s Original Sin.”

At the very least, copyright appears to be one of the major fronts so far in the war over who gets to profit from generative AI. It’s not at all clear yet who is on the right side of the law. In the remarkable essay “Talkin’ Bout AI Generation: Copyright and the Generative-AI Supply Chain,” Cornell’s Katherine Lee and A. Feder Cooper and James Grimmelmann of Microsoft Research and Yale note:

Copyright law is notoriously complicated, and generative-AI systems manage to touch on a great many corners of it. They raise issues of authorship, similarity, direct and indirect liability, fair use, and licensing, among much else. These issues cannot be analyzed in isolation, because there are connections everywhere. Whether the output of a generative AI system is fair use can depend on how its training datasets were assembled. Whether the creator of a generative-AI system is secondarily liable can depend on the prompts that its users supply.

But it seems less important to get into the fine points of copyright law and arguments over liability for infringement, and instead to explore the political economy of copyrighted content in the emerging world of AI services: Who will get what, and why?"

Sunday, June 23, 2024

After uproar over ethics, new 'Washington Post' editor won't take the job; NPR, June 21, 2024


David Folkenflik, NPR; After uproar over ethics, new 'Washington Post' editor won't take the job

"The ethical records of both men have come under withering scrutiny in recent days.

Lewis worked with Winnett at the Sunday Times in Britain in the early 2000s. After Lewis was named the youngest editor in the Daily Telegraph's history, he hired Winnett there. The two men, both Brits, worked hand-in-glove and won accolades in the U.K. for their scoops.

Yet NPR, the New York Times and the Post have reported on a parade of episodes involving both men in conduct that would be barred under professional ethics codes at major American news outlets, including the Post.

The incidents include paying a six-figure sum to secure a major scoop; planting a junior reporter in a government job to obtain secret and even classified documents; and relying on a private investigator who used subterfuge to secure people's confidential records and documents. The investigator was later arrested."

Saturday, June 22, 2024

NBCUniversal’s Donna Langley on AI: ‘We’ve got to get the ethics of it right’; Los Angeles Times, June 21, 2024

Samantha Masunaga, Los Angeles Times; NBCUniversal’s Donna Langley on AI: ‘We’ve got to get the ethics of it right’

"Artificial intelligence is “exciting,” but guardrails must be put in place to protect labor, intellectual property and ethics, NBCUniversal Studio Group Chairman Donna Langley said Friday at an entertainment industry law conference.

During a wide-ranging, on-stage conversation at the UCLA Entertainment Symposium, the media chief emphasized that first, “the labor piece of it has to be right,” a proclamation that was met with applause from the audience. 

“Nor should we infringe on people’s rights,” she said, adding that there also needs to be “very good, clever, sophisticated copyright laws around our IP.”...

AI has emerged as a major issue in Hollywood, as technology companies have increasingly courted studios and industry players. But it is a delicate dance, as entertainment industry executives want to avoid offending actors, writers and other workers who view the technology as a threat to their jobs."

Pope Francis meets Biden, Zelensky, and talks A.I. ethics at G7 summit; America: The Jesuit Review, June 20, 2024

America: The Jesuit Review; Pope Francis meets Biden, Zelensky, and talks A.I. ethics at G7 summit

"Pope Francis met individually with 10 world leaders at the G7 summit. He also made history as the first pope to attend and deliver a speech at the gathering, where he urged delegates to prioritize ethics in artificial intelligence for the common good. Earlier that day, he had met with 100 international comedians at the Vatican. In this episode of “Inside the Vatican,” hosts Colleen Dulle and Gerard O’Connell bring you inside both events."

AI lab at Christian university aims to bring morality and ethics to artificial intelligence; Fox News, June 17, 2024

Christine Rousselle, Fox News; AI lab at Christian university aims to bring morality and ethics to artificial intelligence

"A new AI Lab at a Christian university in California is grounded in theological values — something the school hopes will help to prevent Christians and others of faith from falling behind when it comes to this new technology.

"The AI Lab at Biola University is a dedicated space where students, faculty and staff converge to explore the intricacies of artificial intelligence," Dr. Michael J. Arena told Fox News Digital...

The lab is meant to "be a crucible for shaping the future of AI," Arena said via email, noting the lab aims to do this by "providing education, fostering dialogue and leading innovative AI projects rooted in Christian beliefs." 

While AI has been controversial, Arena believes that educational institutions have to "embrace AI or risk falling behind" in technology. 

"If we don't engage, we risk falling asleep at the wheel," Arena said, referring to Christian and faith-centered institutions. 

He pointed to social media as an example of how a failure to properly engage with an emerging technology with a strong approach to moral values has had disastrous results."

Oxford University institute hosts AI ethics conference; Oxford Mail, June 21, 2024

Jacob Manuschka, Oxford Mail; Oxford University institute hosts AI ethics conference

"On June 20, 'The Lyceum Project: AI Ethics with Aristotle' explored the ethical regulation of AI.

This conference, set adjacent to the ancient site of Aristotle’s school, showcased some of the greatest philosophical minds and featured an address from Greek prime minister, Kyriakos Mitsotakis.

Professor John Tasioulas, director of the Institute for Ethics in AI, said: "The Aristotelian approach to ethics, with its rich notion of human flourishing, has great potential to help us grapple with the urgent question of what it means to be human in the age of AI.

"We are excited to bring together philosophers, scientists, policymakers, and entrepreneurs in a day-long dialogue about how ancient wisdom can shed light on contemporary challenges...

The conference was held in partnership with Stanford University and Demokritos, Greece's National Centre for Scientific Research."

Thursday, June 20, 2024

Something’s Rotten About the Justices Taking So Long on Trump’s Immunity Case; The New York Times, June 19, 2024

 Leah Litman, The New York Times; Something’s Rotten About the Justices Taking So Long on Trump’s Immunity Case

"The court is a busy place, though the justices are completing decisions at the second slowest rate since the 1946 term, according to a recent article in The Wall Street Journal...

In 1974, the Watergate special prosecutor squared off against President Richard Nixon over his refusal to release Oval Office tape recordings of his conversations with aides. Nixon argued that he was immune from a subpoena seeking the recordings. Last year, Steve Vladeck, a law professor at the University of Texas at Austin, looked at how long that case took once it reached the Supreme Court on May 31 of that year. The justices gave the parties 21 days to file their briefs, and then 10 days to respond. Oral argument was held on July 8. Sixteen days later, on July 24, the court issued its 8-0 decision ordering Nixon to turn over the tapes. The chief justice, Warren Burger, who had been nominated to the court by Nixon, wrote the opinion. Total elapsed time: 54 days. Nixon subsequently resigned.

As of Tuesday, 110 days had passed since the court agreed to hear the Trump immunity case. And still no decision."

Wednesday, June 19, 2024

Oxford Institute for Ethics in AI to host ground-breaking AI Ethics Conference; University of Oxford, In-Person Event on June 20, 2024

University of Oxford; Oxford Institute for Ethics in AI to host ground-breaking AI Ethics Conference

"The Oxford University Institute for Ethics in AI is hosting an exciting one day conference in Athens on the 20th of June 2024, The Lyceum Project: AI Ethics with Aristotle, in partnership with Stanford University and Demokritos, Greece's National Centre for Scientific Research...

Set in the cradle of philosophy, adjacent to the ancient site of Aristotle’s school, the conference will showcase some of the greatest philosophical minds and feature a special address from the Greek Prime Minister, Kyriakos Mitsotakis, as they discuss the most pressing question of our times – the ethical regulation of AI.

The conference will be free to attend (register to attend).

Professor John Tasioulas, Director of the Institute for Ethics in AI, said: ‘The Aristotelian approach to ethics, with its rich notion of human flourishing, has great potential to help us grapple with the urgent question of what it means to be human in the age of AI. We are excited to bring together philosophers, scientists, policymakers, and entrepreneurs in a day-long dialogue about how ancient wisdom can shed light on contemporary challenges.’

George Nounesis, Director & Chairman of the Board of NCSR Demokritos said: ‘There is no such thing as ethically neutral AI; and high-quality research on AI cannot ignore its inherent ethical aspects. Ancient Greek philosophy can serve as a valuable resource guiding us in this discourse. In this respect, Aristotelian philosophy can play a pivotal role by nurturing ethical reasoning and a comprehensive understanding of the societal implications of AI, broadening the dialogue with society.’

Alexandra Mitsotaki, President of the World Human Forum, said: ‘This conference is an important first step towards our vision to bring Aristotle’s lyceum alive again by showing the relevance of the teachings of the great philosopher for today’s global challenges. We aspire for the Lyceum to become a global point of connection. This is, after all, the original location where the great philosopher thought, taught and developed many of the ideas that formed Western Civilisation.’"

Tuesday, June 18, 2024

POPE FRANCIS ATTENDS THE G7 SESSION ON ARTIFICIAL INTELLIGENCE: ADDRESS OF HIS HOLINESS POPE FRANCIS, June 14, 2024

The Vatican, POPE FRANCIS ATTENDS THE G7 SESSION ON ARTIFICIAL INTELLIGENCE: 

ADDRESS OF HIS HOLINESS POPE FRANCIS, Borgo Egnazia (Puglia)

[Excerpt]

"An exciting and fearsome tool

Esteemed ladies and gentlemen,

I address you today, the leaders of the Intergovernmental Forum of the G7, concerning the effects of artificial intelligence on the future of humanity.

“Sacred Scripture attests that God bestowed his Spirit upon human beings so that they might have ‘skill and understanding and knowledge in every craft’ (Ex 35:31)”. [1] Science and technology are therefore brilliant products of the creative potential of human beings. [2]

Indeed, artificial intelligence arises precisely from the use of this God-given creative potential.

As we know, artificial intelligence is an extremely powerful tool, employed in many kinds of human activity: from medicine to the world of work; from culture to the field of communications; from education to politics. It is now safe to assume that its use will increasingly influence the way we live, our social relationships and even the way we conceive of our identity as human beings. [3]

The question of artificial intelligence, however, is often perceived as ambiguous: on the one hand, it generates excitement for the possibilities it offers, while on the other it gives rise to fear for the consequences it foreshadows. In this regard, we could say that all of us, albeit to varying degrees, experience two emotions: we are enthusiastic when we imagine the advances that can result from artificial intelligence but, at the same time, we are fearful when we acknowledge the dangers inherent in its use. [4]"

‘Trump Too Small’ Trademark Case Morphs Into Free Speech Debate; Bloomberg Law, June 18, 2024

Laura Heymann, Bloomberg Law; ‘Trump Too Small’ Trademark Case Morphs Into Free Speech Debate

"The US Supreme Court’s June 13 decision in the “Trump Too Small” trademark case revealed a potential rift among the justices on First Amendment jurisprudence but did little to advance intellectual property law...

Trademark law, the Supreme Court has said in prior cases, is primarily about two goals: preventing confusion among consumers by ensuring accurate source identification and preserving trademark owners’ reputation and goodwill. For these justices, the names clause passed muster because prohibiting the registration of personal names without consent was self-evidently reasonable in light of these purposes; no further analysis was required."

What research actually says about social media and kids’ health; The Washington Post, June 17, 2024

The Washington Post; What research actually says about social media and kids’ health

"There is no clear scientific evidence that social media is causing mental health issues among young people. Public health officials are pushing for regulation anyway.

U.S. Surgeon General Vivek H. Murthy on Monday called for social media platforms to add warnings reminding parents and kids that the apps might not be safe, citing rising rates of mental health problems among children and teens. It follows an advisory Murthy issued last year about the health threat of loneliness for Americans, in which he named social media as a potential driver of social isolation.

But experts — from leading psychologists to free speech advocates — have repeatedly called into question the idea that time on social media like TikTok, Instagram and Snapchat leads directly to poor mental health. The debate is nuanced, they say, and it’s too early to make sweeping statements about kids and social media."

Monday, June 17, 2024

Video Clip: The Death of Truth; C-Span, June 9, 2024

 C-Span; Video Clip: The Death of Truth

"Steven Brill, a journalist and NewsGuard Co-CEO, talked about his new book on online misinformation and social media, and their impact on U.S. politics and democracy."

What Justice Alito said on ethics and recusal in his confirmation hearings; Citizens for Responsibility and Ethics in Washington (CREW), June 17, 2024

Linnaea Honl-Stuenkel and Connor Ganiats, Citizens for Responsibility and Ethics in Washington (CREW); What Justice Alito said on ethics and recusal in his confirmation hearings

"Recusals

Alito faced scrutiny for his initial failure to recuse from a case against the financial company Vanguard while serving on the U.S. Court of Appeals for the Third Circuit, despite holding at least $390,000 in Vanguard funds. Alito maintained that his failure to recuse was a mistake that he later remedied, and that ruling in the case did not actually violate judicial ethics rules. 

Alito repeatedly stressed that he would recuse from cases where the ethics code required him to do so, despite the broad duty for Supreme Court justices to hear cases. When asked about the case by Senator Orrin Hatch, Alito said, “I not only complied with the ethical rules that are binding on Federal judges—and they’re very strict—but also that I did what I have tried to do throughout my career as a judge, and that is to go beyond the letter of the ethics rules and to avoid any situation where there might be an ethical question raised.”

When pressed further by Senator Russ Feingold, Alito said he would not commit to recusing from all Vanguard cases going forward, but, “I will very strictly comply with the ethical obligations that apply to Supreme Court Justices.” 

Later, during a back and forth with Senator Edward Kennedy about his Vanguard mutual fund not being on his recusal list, Alito said: “I am one of those judges that you described who take recusals very, very seriously.”"

A Warning on Social Media Is the Very Least We Can Do; The New York Times, June 17, 2024

Pamela Paul, The New York Times ; A Warning on Social Media Is the Very Least We Can Do

"You’re in the middle of a public health emergency involving a dangerously addictive substance — let’s say an epidemic of fentanyl or vaping among teens. Which of the following is the best response?

1. Issue a warning. Tell everyone, “Hey, watch out — this stuff isn’t good for you.”

2. Regulate the dangerous substance so that it causes the least amount of harm.

3. Ban the substance and penalize anyone who distributes it...

Other objections to regulation are that it’s difficult to carry out (so are many things) and that there’s only a correlative link between social media and adverse mental health rather than one of causation.

Complacency is easy. The hard truth is that many people are too addicted to social media themselves to fight for laws that would unstick their kids. Big Tech, with Congress in its pocket, is only too happy for everyone to keep their heads in the sand and reap the benefits. But a combination of Options 2 and 3 are the only ones that will bring real results."

An epidemic of scientific fakery threatens to overwhelm publishers; The Washington Post, June 11, 2024

The Washington Post; An epidemic of scientific fakery threatens to overwhelm publishers

"A record number of retractions — more than 10,000 scientific papers in 2023. Nineteen academic journals shut down recently after being overrun by fake research from paper mills. A single researcher with more than 200 retractions.

The numbers don’t lie: Scientific publishing has a problem, and it’s getting worse. Vigilance against fraudulent or defective research has always been necessary, but in recent years the sheer amount of suspect material has threatened to overwhelm publishers.

We were not the first to write about scientific fraud and problems in academic publishing when we launched Retraction Watch in 2010 with the aim of covering the subject regularly."

Sinclair Infiltrates Local News With Lara Trump’s RNC Playbook; The New Republic, June 17, 2024

Ben Metzner, The New Republic; Sinclair Infiltrates Local News With Lara Trump’s RNC Playbook

"Sinclair Broadcast Group, the right-wing media behemoth swallowing up local news stations and spitting them out as zombie GOP propaganda mills, is ramping up pro-Trump content in the lead-up to the 2024 election. Its latest plot? A coordinated effort across at least 86 local news websites to suggest that Joe Biden is mentally unfit for the presidency, based on edited footage and misinformation.

According to Judd Legum, Sinclair, which owns hundreds of television news stations around the country, has been laundering GOP talking points about Biden’s age and mental capacity into news segments of local Fox, ABC, NBC, and CBS affiliates. One replica online article with the headline “Biden appears to freeze, slur words during White House Juneteenth event” shares no evidence other than a spliced-together clip of Biden watching a musical performance and another edited video of Biden giving a speech originally posted on X by Sean Hannity. The article was syndicated en masse on the same day at the same time, Legum found, suggesting that editors at the local affiliates were not given the chance to vet the segment for accuracy.

Most outrageously, the article, along with at least two others posted in June, makes the evidence-free claim that Biden may have pooped himself at a D-Day memorial event in France, based on a video of the president sitting down during the event. According to Legum, one of the article’s URLs includes the word “pooping.”"

Surgeon General: Why I’m Calling for a Warning Label on Social Media Platforms; The New York Times, June 17, 2024

 Vivek H. Murthy, The New York Times; Surgeon General: Why I’m Calling for a Warning Label on Social Media Platforms

"It is time to require a surgeon general’s warning label on social media platforms, stating that social media is associated with significant mental health harms for adolescents. A surgeon general’s warning label, which requires congressional action, would regularly remind parents and adolescents that social media has not been proved safe. Evidence from tobacco studies show that warning labels can increase awareness and change behavior. When asked if a warning from the surgeon general would prompt them to limit or monitor their children’s social media use, 76 percent of people in one recent survey of Latino parents said yes...

It’s no wonder that when it comes to managing social media for their kids, so many parents are feeling stress and anxiety — and even shame.

It doesn’t have to be this way. Faced with high levels of car-accident-related deaths in the mid- to late 20th century, lawmakers successfully demanded seatbelts, airbags, crash testing and a host of other measures that ultimately made cars safer. This January the F.A.A. grounded about 170 planes when a door plug came off one Boeing 737 Max 9 while the plane was in the air. And the following month, a massive recall of dairy products was conducted because of a listeria contamination that claimed two lives.

Why is it that we have failed to respond to the harms of social media when they are no less urgent or widespread than those posed by unsafe cars, planes or food? These harms are not a failure of willpower and parenting; they are the consequence of unleashing powerful technology without adequate safety measures, transparency or accountability."

Why the pope has the ears of G7 leaders on the ethics of AI; The Guardian, June 14, 2024

The Guardian; Why the pope has the ears of G7 leaders on the ethics of AI

"Normally when an 87-year-old claiming infallibility turns up at your door, the instinct is to give them a cup of tea and quietly ring social services. But when 1.3 billion other people, including your hostess, believe he is indeed infallible, the dynamic somewhat changes.

So Pope Francis, invited by the devout Catholic and Italian prime minister Giorgia Meloni, was warmly greeted when he reached the summit of mammon, the G7 club of western wealthy countries...

Sunak held the world’s first summit on AI safety leading to the Bletchley Declaration in October 2023. The UN has an AI expert advisory board that issued an interim report in December and, in May 2023 under the Japanese presidency, G7 leaders signed something called somewhat discouragingly the Hiroshima Process. (This is not as incendiary as it suggests. Think Schmidhuber, not Oppenheimer.)"

Friday, June 14, 2024

Pope Francis is first pontiff to address G7 leaders with AI speech; Axios, June 14, 2024

April Rubin, Axios; Pope Francis is first pontiff to address G7 leaders with AI speech

"Pope Francis made history Friday as the first pontiff to speak at the Group of Seven meeting in Fasano, Italy, where he discussed his concerns with artificial intelligence.

Why it matters: The pope has long urged caution around AI, calling it "a fascinating tool and also a terrifying one," during his remarks Friday even as he acknowledged its potential applications in medicine, labor, culture, communications, education and politics. 

  • "The holy scriptures say that God gave to human beings his spirit in order for them to have wisdom, intelligence and knowledge in all kinds of tasks," he said. "Science and technology are therefore extraordinary products of the potential which is active in us human beings.""

Banishing Captain Underpants: An investigation of the 3,400 books pulled in Iowa.; Des Moines Register via USA Today, June 6, 2024

Samantha Hernandez, Tim Webber, Chris Higgins, Phillip Sitter, F. Amanda Tugade and Kyle Werner, Des Moines Register via USA Today; Banishing Captain Underpants: An investigation of the 3,400 books pulled in Iowa.

"The data also exposes the breadth of pulled books, including the American classic “To Kill a Mockingbird” by Harper Lee, the Newbery Medal novel “The Giver” by Lois Lowry and “Captain Underpants and the Sensational Saga of Sir Stinks-A-Lot,” a popular children's book with an LGBTQ+ character, by Dav Pilkey.

The removals in Iowa are emblematic of a national trend in which thousands of unique titles – many of them classics or modern children's favorites – are being targeted for removal from public schools and libraries. Data from the American Library Association shows a dramatic increase in book removals in recent years: In 2022, 2,571 titles were targeted, which was, at the time, a record high. Last year, the number soared to 4,240 unique books, the ALA found."