Friday, June 28, 2024

Joe Biden Is a Good Man and a Good President. He Must Bow Out of the Race.; The New York Times, June 28, 2024

Thomas L. Friedman, The New York Times; Joe Biden Is a Good Man and a Good President. He Must Bow Out of the Race.

"We are at the dawn of an artificial intelligence revolution that is going to change EVERYTHING FOR EVERYONE — how we work, how we learn, how we teach, how we trade, how we invent, how we collaborate, how we fight wars, how we commit crimes and how we fight crimes. Maybe I missed it, but I did not hear the phrase “artificial intelligence” mentioned by either man at the debate."

Original sins and dirty secrets: GenAI has an ethics problem. These are the three things it most urgently needs to fix; Fortune, June 27, 2024

Fortune; Original sins and dirty secrets: GenAI has an ethics problem. These are the three things it most urgently needs to fix

"The ethics of generative AI has been in the news this week. AI companies have been accused of taking copyrighted creative works without permission to train their models, and there’s been documentation of those models producing outputs that plagiarize from that training data. Today, I’m going to make the case that generative AI can never be ethical as long as three issues that are currently inherent to the technology remain. First, there’s the fact that generative AI was created using stolen data. Second, it’s built on exploitative labor. And third, it’s exponentially worsening the energy crisis at a pivotal time when we need to be scaling back, not accelerating, our energy demands and environmental impact."

Thursday, June 27, 2024

God Chatbots Offer Spiritual Insights on Demand. What Could Go Wrong?; Scientific American, March 19, 2024

Scientific American; God Chatbots Offer Spiritual Insights on Demand. What Could Go Wrong?

"QuranGPT—which has now been used by about 230,000 people around the world—is just one of a litany of chatbots trained on religious texts that have recently appeared online. There’s Bible.Ai, Gita GPT, Buddhabot, Apostle Paul AI, a chatbot trained to imitate 16th-century German theologian Martin Luther, another trained on the works of Confucius, and yet another designed to imitate the Delphic oracle. For millennia adherents of various faiths have spent long hours—or entire lifetimes—studying scripture to glean insights into the deepest mysteries of human existence, say, the fate of the soul after death.

The creators of these chatbots don’t necessarily believe large language models (LLMs) will put these age-old theological enigmas to rest. But they do think that with their ability to identify subtle linguistic patterns within vast quantities of text and provide responses to user prompts in humanlike language (a feature called natural-language processing, or NLP), the bots can theoretically synthesize spiritual insights in a matter of seconds, saving users both time and energy. It’s divine wisdom on demand.

Many professional theologians, however, have serious concerns about blending LLMs with religion...

The danger of hallucination in this context is compounded by the fact that religiously oriented chatbots are likely to attract acutely sensitive questions—questions one might feel too embarrassed or ashamed to ask a priest, an imam, a rabbi or even a close friend. During a software update to QuranGPT last year, Khan had a brief glimpse into user prompts, which are usually invisible to him. He recalls seeing that one person had asked, “I caught my wife cheating on me—how should I respond?” Another, more troublingly, had asked, “Can I beat my wife?”

Khan was pleased with the system’s responses (it urged discussion and nonviolence on both counts), but the experience underscored the ethical gravity behind his undertaking."

New Tactic in China’s Information War: Harassing a Critic’s Child in the U.S.; The New York Times, June 27, 2024

Steven Lee Myers, The New York Times; New Tactic in China’s Information War: Harassing a Critic’s Child in the U.S.

"A covert propaganda network linked to the country’s security services has barraged not just Mr. Deng but also his teenage daughter with sexually suggestive and threatening posts on popular social media platforms, according to researchers at both Clemson University and Meta, which owns Facebook and Instagram...

The harassment fits a pattern of online intimidation that has raised alarms in Washington, as well as Canada and other countries where China’s attacks have become increasingly brazen. The campaign has included thousands of posts the researchers have linked to a network of social media accounts known as Spamouflage or Dragonbridge, an arm of the country’s vast propaganda apparatus.

China has long sought to discredit Chinese critics, but targeting a teenager in the United States is an escalation, said Darren Linvill, a founder of the Media Forensics Hub at Clemson, whose researchers documented the campaign against Mr. Deng. Federal law prohibits severe online harassment or threats, but that appears to be no deterrent to China’s efforts."

The Supreme Court rules for Biden administration in a social media dispute with conservative states; AP, June 26, 2024

Mark Sherman, AP; The Supreme Court rules for Biden administration in a social media dispute with conservative states

"The Supreme Court on Wednesday sided with the Biden administration in a dispute with Republican-led states over how far the federal government can go to combat controversial social media posts on topics including COVID-19 and election security.

By a 6-3 vote, the justices threw out lower-court rulings that favored Louisiana, Missouri and other parties in their claims that federal officials leaned on the social media platforms to unconstitutionally squelch conservative points of view.

Justice Amy Coney Barrett wrote for the court that the states and other parties did not have the legal right, or standing, to sue. Justices Samuel Alito, Neil Gorsuch and Clarence Thomas dissented. 

The decision should not affect typical social media users or their posts."

AI, Legal Tech, and Ethics: The Florida Bar’s Groundbreaking Guidelines; Legal Talk Network, June 27, 2024

Adriana Linares, Legal Talk Network; AI, Legal Tech, and Ethics: The Florida Bar’s Groundbreaking Guidelines

"Two friends of the podcast return for this episode of New Solo to talk all things legal tech and the latest in AI services for lawyers. Guests Renee Thompson and Liz McCausland are both accomplished mediators and solo practitioners who depend on tech to boost productivity and keep up with their busy lives.

AI is an emerging technology that is finding its way to more and more law offices. McCausland and Thompson served on a Florida Bar committee to draft an advisory opinion laying out ethical guidelines for the use of AI in legal practice.

With ethical guardrails published, what’s next? A best practices guide and clear definitions and examples of AI for legal services. Client consent, the impact on fees and confidentiality, and even how judges view the use of AI and informing the court that AI played a role in your presentation are all pieces of the puzzle."

Wednesday, June 26, 2024

The MTV News website is gone; The Verge, June 25, 2024

Andrew Liszewski, The Verge; The MTV News website is gone

"The archives of the MTV News website, which had remained accessible online after the unit was shut down last year by parent company Paramount Global, have now been completely taken offline. As Variety reported yesterday, both mtvnews.com and mtv.com/news now redirect visitors to the MTV website’s front page...

Although the MTV News website was no longer publishing new stories, its extensive archive, dating back over two decades to its launch in 1996, remained online. But as former staffers discovered yesterday, that archive is no longer accessible."

Tuesday, June 25, 2024

Collaborative ethics: innovating collaboration between ethicists and life scientists; Nature, June 20, 2024

Nature; Collaborative ethics: innovating collaboration between ethicists and life scientists

"Is there a place for ethics in scientific research, not about science or after scientific breakthroughs? We are convinced that there is, and we describe here our model for collaboration between scientists and ethicists.

Timely collaboration with ethicists benefits science, as it can make an essential contribution to the research process. In our view, such critical discussions can improve the efficiency and robustness of outcomes, particularly in groundbreaking or disruptive research. The discussion of ethical implications during the research process can also prepare a team for a formal ethics review and criticism after publication.

The practice of collaborative ethics also advances the humanities, as direct involvement with the sciences allows long-held assumptions and arguments to be put to the test. As philosophers and ethicists, we argue that innovative life sciences research requires new methods in ethics, as disruptive concepts and research outcomes no longer fit traditional notions and norms. Those methods should not be developed at a distance from the proverbial philosopher’s armchair or in after-the-fact ethics analysis. We argue that, rather, we should join scientists and meet where science evolves in real-time: as Knoppers and Chadwick put it in the early days of genomic science, “Ethical thinking will inevitably continue to evolve as the science does”1."

Monday, June 24, 2024

New Legal Ethics Opinion Cautions Lawyers: You ‘Must Be Proficient’ In the Use of Generative AI; LawSites, June 24, 2024

LawSites; New Legal Ethics Opinion Cautions Lawyers: You ‘Must Be Proficient’ In the Use of Generative AI

"A new legal ethics opinion on the use of generative AI in law practice makes one point very clear: lawyers are required to maintain competence across all technological means relevant to their practices, and that includes the use of generative AI.

The opinion, jointly issued by the Pennsylvania Bar Association and Philadelphia Bar Association, was issued to educate attorneys on the benefits and pitfalls of using generative AI and to provide ethical guidelines.

While the opinion is focused on AI, it repeatedly emphasizes that a lawyer’s ethical obligations surrounding this emerging form of technology are no different than those for any form of technology...

12 Points of Responsibility

The 16-page opinion offers a concise primer on the use of generative AI in law practice, including a brief background on the technology and a summary of other states’ ethics opinions.

But most importantly, it concludes with 12 points of responsibility pertaining to lawyers using generative AI:

  • Be truthful and accurate: The opinion warns that lawyers must ensure that AI-generated content, such as legal documents or advice, is truthful, accurate and based on sound legal reasoning, upholding principles of honesty and integrity in their professional conduct.
  • Verify all citations and the accuracy of cited materials: Lawyers must ensure the citations they use in legal documents or arguments are accurate and relevant. That includes verifying that the citations accurately reflect the content they reference.
  • Ensure competence: Lawyers must be competent in using AI technologies.
  • Maintain confidentiality: Lawyers must safeguard information relating to the representation of a client and ensure that AI systems handling confidential data both adhere to strict confidentiality measures and prevent the sharing of confidential data with others not protected by the attorney-client privilege.
  • Identify conflicts of interest: Lawyers must be vigilant, the opinion says, in identifying and addressing potential conflicts of interest arising from using AI systems.
  • Communicate with clients: Lawyers must communicate with clients about using AI in their practices, providing clear and transparent explanations of how such tools are employed and their potential impact on case outcomes. If necessary, lawyers should obtain client consent before using certain AI tools.
  • Ensure information is unbiased and accurate: Lawyers must ensure that the data used to train AI models is accurate, unbiased, and ethically sourced to prevent perpetuating biases or inaccuracies in AI-generated content.
  • Ensure AI is properly used: Lawyers must be vigilant against the misuse of AI-generated content, ensuring it is not used to deceive or manipulate legal processes, evidence or outcomes.
  • Adhere to ethical standards: Lawyers must stay informed about relevant regulations and guidelines governing the use of AI in legal practice to ensure compliance with legal and ethical standards.
  • Exercise professional judgment: Lawyers must exercise their professional judgment in conjunction with AI-generated content, and recognize that AI is a tool that assists but does not replace legal expertise and analysis.
  • Use proper billing practices: AI has tremendous time-saving capabilities. Lawyers must, therefore, ensure that AI-related expenses are reasonable and appropriately disclosed to clients.
  • Maintain transparency: Lawyers should be transparent with clients, colleagues, and the courts about the use of AI tools in legal practice, including disclosing any limitations or uncertainties associated with AI-generated content.

My Advice: Don’t Be Stupid

Over the years of writing about legal technology and legal ethics, I have developed my own shortcut rule for staying out of trouble: Don’t be stupid...

You can read the full opinion here: Joint Formal Opinion 2024-200."

AI use must include ethical scrutiny; CT Mirror, June 24, 2024

 Josemari Feliciano, CT Mirror; AI use must include ethical scrutiny

"AI use may deal with data that are deeply intertwined with personal and societal dimensions. The potential for AI to impact societal structures, influence public policy, and reshape economies is immense. This power carries with it an obligation to prevent harm and ensure fairness, necessitating a formal and transparent review process akin to that overseen by IRBs.

The use of AI without meticulous scrutiny of the training data and study parameters can inadvertently perpetuate or exacerbate harm to minority groups. If the data used to train AI systems is biased or non-representative, the resulting algorithms can reinforce existing disparities."

How to Fix “AI’s Original Sin”; O'Reilly, June 18, 2024

  Tim O’Reilly, O'Reilly; How to Fix “AI’s Original Sin”

"In conversation with reporter Cade Metz, who broke the story, on the New York Times podcast The Daily, host Michael Barbaro called copyright violation “AI’s Original Sin.”

At the very least, copyright appears to be one of the major fronts so far in the war over who gets to profit from generative AI. It’s not at all clear yet who is on the right side of the law. In the remarkable essay “Talkin’ Bout AI Generation: Copyright and the Generative-AI Supply Chain,” Cornell’s Katherine Lee and A. Feder Cooper and James Grimmelmann of Microsoft Research and Yale note:

Copyright law is notoriously complicated, and generative-AI systems manage to touch on a great many corners of it. They raise issues of authorship, similarity, direct and indirect liability, fair use, and licensing, among much else. These issues cannot be analyzed in isolation, because there are connections everywhere. Whether the output of a generative AI system is fair use can depend on how its training datasets were assembled. Whether the creator of a generative-AI system is secondarily liable can depend on the prompts that its users supply.

But it seems less important to get into the fine points of copyright law and arguments over liability for infringement, and instead to explore the political economy of copyrighted content in the emerging world of AI services: Who will get what, and why?"

Sunday, June 23, 2024

After uproar over ethics, new 'Washington Post' editor won't take the job; NPR, June 21, 2024

David Folkenflik, NPR; After uproar over ethics, new 'Washington Post' editor won't take the job

"The ethical records of both men have come under withering scrutiny in recent days.

Lewis worked with Winnett at the Sunday Times in Britain in the early 2000s. After Lewis was named the youngest editor in the Daily Telegraph's history, he hired Winnett there. The two men, both Brits, worked hand-in-glove and won accolades in the U.K. for their scoops.

Yet NPR, the New York Times and the Post have reported on a parade of episodes involving both men in conduct that would be barred under professional ethics codes at major American news outlets, including the Post.

The incidents include paying a six-figure sum to secure a major scoop; planting a junior reporter in a government job to obtain secret and even classified documents; and relying on a private investigator who used subterfuge to secure people's confidential records and documents. The investigator was later arrested."

Saturday, June 22, 2024

NBCUniversal’s Donna Langley on AI: ‘We’ve got to get the ethics of it right’; Los Angeles Times, June 21, 2024

Samantha Masunaga, Los Angeles Times; NBCUniversal’s Donna Langley on AI: ‘We’ve got to get the ethics of it right’

"Artificial intelligence is “exciting,” but guardrails must be put in place to protect labor, intellectual property and ethics, NBCUniversal Studio Group Chairman Donna Langley said Friday at an entertainment industry law conference.

During a wide-ranging, on-stage conversation at the UCLA Entertainment Symposium, the media chief emphasized that first, “the labor piece of it has to be right,” a proclamation that was met with applause from the audience. 

“Nor should we infringe on people’s rights,” she said, adding that there also needs to be “very good, clever, sophisticated copyright laws around our IP.”...

AI has emerged as a major issue in Hollywood, as technology companies have increasingly courted studios and industry players. But it is a delicate dance, as entertainment industry executives want to avoid offending actors, writers and other workers who view the technology as a threat to their jobs."

Pope Francis meets Biden, Zelensky, and talks A.I. ethics at G7 summit; America: The Jesuit Review, June 20, 2024

America: The Jesuit Review; Pope Francis meets Biden, Zelensky, and talks A.I. ethics at G7 summit

"Pope Francis met individually with 10 world leaders at the G7 summit. He also made history as the first pope to attend and deliver a speech at the gathering, where he urged delegates to prioritize ethics in artificial intelligence for the common good. Earlier that day, he had met with 100 international comedians at the Vatican. In this episode of “Inside the Vatican,” hosts Colleen Dulle and Gerard O’Connell bring you inside both events."

AI lab at Christian university aims to bring morality and ethics to artificial intelligence; Fox News, June 17, 2024

Christine Rousselle, Fox News; AI lab at Christian university aims to bring morality and ethics to artificial intelligence

"A new AI Lab at a Christian university in California is grounded in theological values — something the school hopes will help to prevent Christians and others of faith from falling behind when it comes to this new technology.

"The AI Lab at Biola University is a dedicated space where students, faculty and staff converge to explore the intricacies of artificial intelligence," Dr. Michael J. Arena told Fox News Digital...

The lab is meant to "be a crucible for shaping the future of AI," Arena said via email, noting the lab aims to do this by "providing education, fostering dialogue and leading innovative AI projects rooted in Christian beliefs." 

While AI has been controversial, Arena believes that educational institutions have to "embrace AI or risk falling behind" in technology. 

"If we don't engage, we risk falling asleep at the wheel," Arena said, referring to Christian and faith-centered institutions. 

He pointed to social media as an example of how a failure to properly engage with an emerging technology with a strong approach to moral values has had disastrous results."

Oxford University institute hosts AI ethics conference; Oxford Mail, June 21, 2024

Jacob Manuschka, Oxford Mail; Oxford University institute hosts AI ethics conference

"On June 20, 'The Lyceum Project: AI Ethics with Aristotle' explored the ethical regulation of AI.

This conference, set adjacent to the ancient site of Aristotle’s school, showcased some of the greatest philosophical minds and featured an address from Greek prime minister, Kyriakos Mitsotakis.

Professor John Tasioulas, director of the Institute for Ethics in AI, said: "The Aristotelian approach to ethics, with its rich notion of human flourishing, has great potential to help us grapple with the urgent question of what it means to be human in the age of AI.

"We are excited to bring together philosophers, scientists, policymakers, and entrepreneurs in a day-long dialogue about how ancient wisdom can shed light on contemporary challenges...

The conference was held in partnership with Stanford University and Demokritos, Greece's National Centre for Scientific Research."

Thursday, June 20, 2024

Something’s Rotten About the Justices Taking So Long on Trump’s Immunity Case; The New York Times, June 19, 2024

 Leah Litman, The New York Times; Something’s Rotten About the Justices Taking So Long on Trump’s Immunity Case

"The court is a busy place, though the justices are completing decisions at the second slowest rate since the 1946 term, according to a recent article in The Wall Street Journal...

In 1974, the Watergate special prosecutor squared off against President Richard Nixon over his refusal to release Oval Office tape recordings of his conversations with aides. Nixon argued that he was immune from a subpoena seeking the recordings. Last year, Steve Vladeck, a law professor at the University of Texas at Austin, looked at how long that case took once it reached the Supreme Court on May 31 of that year. The justices gave the parties 21 days to file their briefs, and then 10 days to respond. Oral argument was held on July 8. Sixteen days later, on July 24, the court issued its 8-0 decision ordering Nixon to turn over the tapes. The chief justice, Warren Burger, who had been nominated to the court by Nixon, wrote the opinion. Total elapsed time: 54 days. Nixon subsequently resigned.

As of Tuesday, 110 days had passed since the court agreed to hear the Trump immunity case. And still no decision."

Wednesday, June 19, 2024

Oxford Institute for Ethics in AI to host ground-breaking AI Ethics Conference; University of Oxford, In-Person Event on June 20, 2024

University of Oxford; Oxford Institute for Ethics in AI to host ground-breaking AI Ethics Conference

"The Oxford University Institute for Ethics in AI is hosting an exciting one day conference in Athens on the 20th of June 2024, The Lyceum Project: AI Ethics with Aristotle, in partnership with Stanford University and Demokritos, Greece's National Centre for Scientific Research...

Set in the cradle of philosophy, adjacent to the ancient site of Aristotle’s school, the conference will showcase some of the greatest philosophical minds and feature a special address from the Greek Prime Minister, Kyriakos Mitsotakis, as they discuss the most pressing question of our times – the ethical regulation of AI.

The conference will be free to attend (register to attend).

Professor John Tasioulas, Director of the Institute for Ethics in AI, said: ‘The Aristotelian approach to ethics, with its rich notion of human flourishing, has great potential to help us grapple with the urgent question of what it means to be human in the age of AI. We are excited to bring together philosophers, scientists, policymakers, and entrepreneurs in a day-long dialogue about how ancient wisdom can shed light on contemporary challenges.’

George Nounesis, Director & Chairman of the Board of NCSR Demokritos, said: ‘There is no such thing as ethically neutral AI; and high-quality research on AI cannot ignore its inherent ethical aspects. Ancient Greek philosophy can serve as a valuable resource guiding us in this discourse. In this respect, Aristotelian philosophy can play a pivotal role by nurturing ethical reasoning and a comprehensive understanding of the societal implications of AI, broadening the dialogue with society.’

Alexandra Mitsotaki, President of the World Human Forum, said: ‘This conference is an important first step towards our vision to bring Aristotle’s lyceum alive again by showing the relevance of the teachings of the great philosopher for today’s global challenges. We aspire for the Lyceum to become a global point of connection. This is, after all, the original location where the great philosopher thought, taught and developed many of the ideas that formed Western Civilisation.’"

Tuesday, June 18, 2024

POPE FRANCIS ATTENDS THE G7 SESSION ON ARTIFICIAL INTELLIGENCE: ADDRESS OF HIS HOLINESS POPE FRANCIS, June 14, 2024

The Vatican; POPE FRANCIS ATTENDS THE G7 SESSION ON ARTIFICIAL INTELLIGENCE: ADDRESS OF HIS HOLINESS POPE FRANCIS, Borgo Egnazia (Puglia)

[Excerpt]

"An exciting and fearsome tool

Esteemed ladies and gentlemen,

I address you today, the leaders of the Intergovernmental Forum of the G7, concerning the effects of artificial intelligence on the future of humanity.

“Sacred Scripture attests that God bestowed his Spirit upon human beings so that they might have ‘skill and understanding and knowledge in every craft’ ( Ex 35:31)”. [1] Science and technology are therefore brilliant products of the creative potential of human beings. [2]

Indeed, artificial intelligence arises precisely from the use of this God-given creative potential.

As we know, artificial intelligence is an extremely powerful tool, employed in many kinds of human activity: from medicine to the world of work; from culture to the field of communications; from education to politics. It is now safe to assume that its use will increasingly influence the way we live, our social relationships and even the way we conceive of our identity as human beings. [3]

The question of artificial intelligence, however, is often perceived as ambiguous: on the one hand, it generates excitement for the possibilities it offers, while on the other it gives rise to fear for the consequences it foreshadows. In this regard, we could say that all of us, albeit to varying degrees, experience two emotions: we are enthusiastic when we imagine the advances that can result from artificial intelligence but, at the same time, we are fearful when we acknowledge the dangers inherent in its use. [4]"

‘Trump Too Small’ Trademark Case Morphs Into Free Speech Debate; Bloomberg Law, June 18, 2024

Laura Heymann, Bloomberg Law; ‘Trump Too Small’ Trademark Case Morphs Into Free Speech Debate

"The US Supreme Court’s June 13 decision in the “Trump Too Small” trademark case revealed a potential rift among the justices on First Amendment jurisprudence but did little to advance intellectual property law...

Trademark law, the Supreme Court has said in prior cases, is primarily about two goals: preventing confusion among consumers by ensuring accurate source identification and preserving trademark owners’ reputation and goodwill. For these justices, the names clause passed muster because prohibiting the registration of personal names without consent was self-evidently reasonable in light of these purposes; no further analysis was required."

What research actually says about social media and kids’ health; The Washington Post, June 17, 2024

The Washington Post; What research actually says about social media and kids’ health

"There is no clear scientific evidence that social media is causing mental health issues among young people. Public health officials are pushing for regulation anyway.

U.S. Surgeon General Vivek H. Murthy on Monday called for social media platforms to add warnings reminding parents and kids that the apps might not be safe, citing rising rates of mental health problems among children and teens. It follows an advisory Murthy issued last year about the health threat of loneliness for Americans, in which he named social media as a potential driver of social isolation.

But experts — from leading psychologists to free speech advocates — have repeatedly called into question the idea that time on social media like TikTok, Instagram and Snapchat leads directly to poor mental health. The debate is nuanced, they say, and it’s too early to make sweeping statements about kids and social media."

Monday, June 17, 2024

Video Clip: The Death of Truth; C-Span, June 9, 2024

 C-Span; Video Clip: The Death of Truth

"Steven Brill, a journalist and NewsGuard Co-CEO, talked about his new book on online misinformation and social media, and their impact on U.S. politics and democracy."

What Justice Alito said on ethics and recusal in his confirmation hearings; Citizens for Responsibility and Ethics in Washington (CREW), June 17, 2024

Linnaea Honl-Stuenkel and Connor Ganiats, Citizens for Responsibility and Ethics in Washington (CREW); What Justice Alito said on ethics and recusal in his confirmation hearings

"Recusals

Alito faced scrutiny for his initial failure to recuse from a case against the financial company Vanguard while serving on the U.S. Court of Appeals for the Third Circuit, despite holding at least $390,000 in Vanguard funds. Alito maintained that his failure to recuse was a mistake that he later remedied, and that ruling in the case did not actually violate judicial ethics rules. 

Alito repeatedly stressed that he would recuse from cases where the ethics code required him to do so, despite the broad duty for Supreme Court justices to hear cases. When asked about the case by Senator Orrin Hatch, Alito said, “I not only complied with the ethical rules that are binding on Federal judges—and they’re very strict—but also that I did what I have tried to do throughout my career as a judge, and that is to go beyond the letter of the ethics rules and to avoid any situation where there might be an ethical question raised.”

When pressed further by Senator Russ Feingold, Alito said he would not commit to recusing from all Vanguard cases going forward, but, “I will very strictly comply with the ethical obligations that apply to Supreme Court Justices.” 

Later, during a back and forth with Senator Edward Kennedy about his Vanguard mutual fund not being on his recusal list, Alito said: “I am one of those judges that you described who take recusals very, very seriously.”"

A Warning on Social Media Is the Very Least We Can Do; The New York Times, June 17, 2024

Pamela Paul, The New York Times ; A Warning on Social Media Is the Very Least We Can Do

"You’re in the middle of a public health emergency involving a dangerously addictive substance — let’s say an epidemic of fentanyl or vaping among teens. Which of the following is the best response?

1. Issue a warning. Tell everyone, “Hey, watch out — this stuff isn’t good for you.”

2. Regulate the dangerous substance so that it causes the least amount of harm.

3. Ban the substance and penalize anyone who distributes it...

Other objections to regulation are that it’s difficult to carry out (so are many things) and that there’s only a correlative link between social media and adverse mental health rather than one of causation.

Complacency is easy. The hard truth is that many people are too addicted to social media themselves to fight for laws that would unstick their kids. Big Tech, with Congress in its pocket, is only too happy for everyone to keep their heads in the sand and reap the benefits. But a combination of Options 2 and 3 are the only ones that will bring real results."

An epidemic of scientific fakery threatens to overwhelm publishers; The Washington Post, June 11, 2024

The Washington Post; An epidemic of scientific fakery threatens to overwhelm publishers

"A record number of retractions — more than 10,000 scientific papers in 2023. Nineteen academic journals shut down recently after being overrun by fake research from paper mills. A single researcher with more than 200 retractions.

The numbers don’t lie: Scientific publishing has a problem, and it’s getting worse. Vigilance against fraudulent or defective research has always been necessary, but in recent years the sheer amount of suspect material has threatened to overwhelm publishers.

We were not the first to write about scientific fraud and problems in academic publishing when we launched Retraction Watch in 2010 with the aim of covering the subject regularly."

Sinclair Infiltrates Local News With Lara Trump’s RNC Playbook; The New Republic, June 17, 2024

Ben Metzner, The New Republic; Sinclair Infiltrates Local News With Lara Trump’s RNC Playbook

"Sinclair Broadcast Group, the right-wing media behemoth swallowing up local news stations and spitting them out as zombie GOP propaganda mills, is ramping up pro-Trump content in the lead-up to the 2024 election. Its latest plot? A coordinated effort across at least 86 local news websites to suggest that Joe Biden is mentally unfit for the presidency, based on edited footage and misinformation.

According to Judd Legum, Sinclair, which owns hundreds of television news stations around the country, has been laundering GOP talking points about Biden’s age and mental capacity into news segments of local Fox, ABC, NBC, and CBS affiliates. One replica online article with the headline “Biden appears to freeze, slur words during White House Juneteenth event” shares no evidence other than a spliced-together clip of Biden watching a musical performance and another edited video of Biden giving a speech originally posted on X by Sean Hannity. The article was syndicated en masse on the same day at the same time, Legum found, suggesting that editors at the local affiliates were not given the chance to vet the segment for accuracy.

Most outrageously, the article, along with at least two others posted in June, makes the evidence-free claim that Biden may have pooped himself at a D-Day memorial event in France, based on a video of the president sitting down during the event. According to Legum, one of the article’s URLs includes the word “pooping.”"