Friday, August 16, 2024

Sasse’s spending spree: Former UF president channeled millions to GOP allies, secretive contracts; The Independent Florida Alligator, August 12, 2024

Garrett Shanley, The Independent Florida Alligator; Sasse’s spending spree: Former UF president channeled millions to GOP allies, secretive contracts

"In his 17-month stint as UF president, Ben Sasse more than tripled his office’s spending, directing millions in university funds into secretive consulting contracts and high-paying positions for his GOP allies.

Sasse ballooned spending under the president’s office to $17.3 million in his first year in office — up from $5.6 million in former UF President Kent Fuchs’ last year, according to publicly available administrative budget data.

A majority of the spending surge was driven by lucrative contracts with big-name consulting firms and high-salaried, remote positions for Sasse’s former U.S. Senate staff and Republican officials.

Sasse’s consulting contracts have been kept largely under wraps, leaving the public in the dark about what the contracted firms did to earn their fees. The university also declined to clarify specific duties carried out by Sasse’s ex-Senate staff, several of whom were salaried as presidential advisers."

Thursday, August 15, 2024

This Code Breaker Is Using AI to Decode the Heart’s Secret Rhythms; Wired, August 15, 2024

Amit Katwala, Wired; This Code Breaker Is Using AI to Decode the Heart’s Secret Rhythms

"There’s an AI boom in health care, and the only thing slowing it down is a lack of data."

Surviving Putin's gulag: Vladimir Kara-Murza tells his story; The Washington Post, August 14, 2024

Today’s show was produced by Charla Freeland. It was edited by Allison Michaels and Damir Marusic and mixed by Emma Munger, The Washington Post; Surviving Putin's gulag: Vladimir Kara-Murza tells his story

"Pulitzer Prize winner Vladimir Kara-Murza, who was part of August’s massive prisoner exchange with Russia, sat down to talk with Post Opinions editor David Shipley about his time in jail, the importance of freedom of speech and what the future holds for Putin’s regime."

Russian court jails US-Russian woman for 12 years over $50 charity donation; The Guardian, August 15, 2024

Associated Press via The Guardian; Russian court jails US-Russian woman for 12 years over $50 charity donation

"A Russian court on Thursday sentenced the US-Russian dual national Ksenia Khavana to 12 years in prison on a treason conviction for allegedly raising money for the Ukrainian military.

The rights group the First Department said the charges stemmed from a $51 (£40) donation to a US charity that helps Ukraine.

Khavana, whom Russian authorities identify by her birth name of Karelina, was arrested in Ekaterinburg in February. She pleaded guilty in her closed trial last week, news reports said.

Khavana reportedly obtained US citizenship after marrying an American and moving to Los Angeles. She had returned to Russia to visit her family."

Artists Score Major Win in Copyright Case Against AI Art Generators; The Hollywood Reporter, August 13, 2024

Winston Cho, The Hollywood Reporter; Artists Score Major Win in Copyright Case Against AI Art Generators

"Artists suing generative artificial intelligence art generators have cleared a major hurdle in a first-of-its-kind lawsuit over the uncompensated and unauthorized use of billions of images downloaded from the internet to train AI systems, with a federal judge allowing key claims to move forward.

U.S. District Judge William Orrick on Monday advanced all copyright infringement and trademark claims in a pivotal win for artists. He found that Stable Diffusion, Stability’s AI tool that can create hyperrealistic images in response to a prompt of just a few words, may have been “built to a significant extent on copyrighted works” and created with the intent to “facilitate” infringement. The order could entangle in the litigation any AI company that incorporated the model into its products."

Monday, August 12, 2024

Artificial Intelligence in the pulpit: a church service written entirely by AI; United Church of Christ, July 16, 2024

United Church of Christ; Artificial Intelligence in the pulpit: a church service written entirely by AI

"Would you attend a church service if you knew that it was written entirely by an Artificial Intelligence (AI) program? What would your thoughts and feelings be about this use of AI?

That’s exactly what the Rev. Dwight Lee Wolter wanted to know — and he let his church members at the Congregational Church of Patchogue on Long Island, New York, know that was what he was intending to do on Sunday, July 14. He planned a service that included a call to worship, invocation, pastoral prayer, scripture reading, sermon, hymns, prelude, postlude and benediction with the use of ChatGPT. ChatGPT is a free AI program developed by OpenAI, an Artificial Intelligence research company and released in 2022.

Taking fear and anger out of exploration

“My purpose is to take the fear and anger out of AI exploration and replace it with curiosity, flexibility and open-mindfulness,” said Wolter. “If, as widely claimed, churches need to adapt to survive, we might not recognize the church in 20 years if we could see it now; then AI will be a part of the church of the future. No matter what we presently think of it, it will be present in the future doing a lot of the thinking for us.”...

Wolter intends to follow up Sunday’s service with a reflection about how it went. On July 21, he will give a sermon about AI, with people offering input about the AI service. “We will discuss their reactions, feelings, thoughts, likes and dislikes, concerns and questions.” Wolter will follow with his synopsis sharing the benefits, criticisms, fears and concerns of AI...

Wolter believes we need to “disarm contempt prior to investigation,” when it comes to things like Artificial Intelligence. “AI is not going anywhere. It’s a tool–and with a shortage of clergy, money and volunteers, we will continue to rely on it.”"

Silicon Valley bishop, two Catholic AI experts weigh in on AI evangelization; Religion News Service, May 6, 2024

Aleja Hertzler-McCain, Religion News Service; Silicon Valley bishop, two Catholic AI experts weigh in on AI evangelization

"San Jose, California, Bishop Oscar Cantú, who leads the Catholic faithful in Silicon Valley, said that AI doesn’t come up much with parishioners in his diocese...

Pointing to the adage coined by Meta founder Mark Zuckerberg, “move fast and break things,” the bishop said, “with AI, we need to move very cautiously and slowly and try not to break things. The things we would be breaking are human lives and reputations.”...

Noreen Herzfeld, a professor of theology and computer science at St. John’s University and the College of St. Benedict and one of the editors of a book about AI sponsored by the Vatican Dicastery for Culture and Education, said that the AI character was previously “impersonating a priest, which is considered a very serious sin in Catholicism.”...

Accuracy issues, Herzfeld said, are one of many reasons it should not be used for evangelization. “As much as you beta test one of these chatbots, you will never get rid of hallucinations” — moments when the AI makes up its own answers, she said...

Larrey, who has been studying AI for nearly 30 years and is in conversation with Sam Altman, the CEO of OpenAI, is optimistic that the technology will improve. He said Altman is already making progress on the hallucinations, on its challenges to users’ privacy and reducing its energy use — a recent analysis estimated that by 2027, artificial intelligence could suck up as much electricity as the population of Argentina or the Netherlands."

Sunday, August 11, 2024

Pueblo artist seeking copyright protection for AI-generated work; The Gazette, August 8, 2024

O'Dell Isaac, The Gazette; Pueblo artist seeking copyright protection for AI-generated work

"“We’re done with the Copyright Office,” he said. “Now we’re going into the court system.”

Allen said he believes his case raises two essential questions: What is art? And if a piece doesn’t belong to the artist, whom does it belong to?

Tara Thomas, director of the Bemis School of Arts at Colorado College, said the answers may not be clear-cut.

“There was a similar debate at the beginning of photography,” Thomas said. “Was it the camera, or was it the person taking the photos? Is the camera the artmaker, or is it a tool?”

Allen said it took more than two decades for photography to gain acceptance as an art form.

“We’re at a similar place in AI art,” he said. 

“Right now, there is a massive stigma surrounding AI, far more so than there was with photography, so the challenge is much steeper. It is that very stigma that is contributing to the stifling of innovation. Why would anybody want to incorporate AI art into their workflow if they knew they couldn’t protect their work?”"

Dave Eggers’ Novel Was Banned From South Dakota Schools. In a New Documentary, the Community Fights Back (Exclusive); People, August 10, 2024

Carly Tagen-Dye, People; Dave Eggers’ Novel Was Banned From South Dakota Schools. In a New Documentary, the Community Fights Back (Exclusive)

"Bestselling author Dave Eggers wasn’t expecting to learn that his 2013 dystopian novel, The Circle, was removed from high schools in Rapid City, S.D. What's more, Eggers' book, along with four others, was designated “to be destroyed” by the school board as well.

“It was new to me, although the other authors that were banned have had the books banned again and again,” Eggers tells PEOPLE.

The decision to ban The Circle, as well as The Perks of Being a Wallflower by Stephen Chbosky, How Beautiful We Were by Imbolo Mbue, Fun Home by Alison Bechdel and Girl, Woman, Other by Bernardine Evaristo, is the subject of the documentary To Be Destroyed, premiering on MSNBC on Aug. 11 as part of Trevor Noah's "The Turning Point" series. Directed by Arthur Bradford, the film follows Eggers during his travels to Rapid City, where he met with the teachers and students on the frontlines of the book banning fight."

Should artists be terrified of AI replacing them?; The Guardian, August 11, 2024

The Guardian; Should artists be terrified of AI replacing them?

"Interviewing those at the techno-cultural vanguard, including Herndon, Dryhurst and Maclean, has given me some sense of peace. I realise that I have been hanging on to 20th-century notions of art practice and the cultural landscape, one where humans spent months and years writing, painting, recording and filming works that defined the culture of our species. They provided meaning, distraction, wellbeing. A reason to exist. Making peace may mean letting go of these historical notions, finding new meaning. While digitally generatable media is increasingly becoming the domain of AI, for example, might performance and tactile artforms, such as live concerts, theatre and sculpture, be reinvigorated?"

Friday, August 9, 2024

Utah outlaws books by Judy Blume and Sarah J Maas in first statewide ban; The Guardian, August 7, 2024

The Guardian; Utah outlaws books by Judy Blume and Sarah J Maas in first statewide ban

"Books by Margaret Atwood, Judy Blume, Rupi Kaur and Sarah J Maas are among 13 titles that the state of Utah has ordered to be removed from all public school classrooms and libraries.

This marks the first time a state has outlawed a list of books statewide, according to PEN America’s Jonathan Friedman, who oversees the organisation’s free expression programs.

The books on the list were prohibited under a new law requiring all of Utah’s public school districts to remove books if they are banned in either three districts, or two school districts and five charter schools. Utah has 41 public school districts in total.

The 13 books could be banned under House bill 29, which became effective from 1 July, because they were considered to contain “pornographic or indecent” material. The list “will likely be updated as more books begin to meet the law’s criteria”, according to PEN America.

Twelve of the 13 titles were written by women. Six books by Maas, a fantasy author, appear on the list, along with Oryx and Crake by Atwood, Milk and Honey by Kaur and Forever by Blume. Two books by Ellen Hopkins appear, as well as Elana K Arnold’s What Girls Are Made Of and Craig Thompson’s Blankets.

Implementation guidelines say that banned materials must be “legally disposed of” and “may not be sold or distributed”."

TryTank Research Institute helps create Cathy, a new AI chatbot and Episcopal Church expert; Episcopal News Service, August 7, 2024

Kathryn Post, Episcopal News Service; TryTank Research Institute helps create Cathy, a new AI chatbot and Episcopal Church expert

"The latest AI chatbot geared for spiritual seekers is AskCathy, co-launched in June by a research institute and ministry organization and aiming to roll out soon on Episcopal church websites. Cathy draws on the latest version of ChatGPT and is equipped to prioritize Episcopal resources.

“This is not a substitute for a priest,” said the Rev. Tay Moss, director of one of Cathy’s architects, the Innovative Ministry Center, an organization based at the Toronto United Church Council that develops digital resources for communities of faith. “She comes alongside you in your search queries and helps you discover material. But she is not the end-all be-all of authority. She can’t tell you how to believe or what to believe.”

The Rev. Lorenzo Lebrija, the executive director of TryTank Research Institute at Virginia Theological Seminary and Cathy’s other principal developer, said all the institute’s projects attempt to follow the lead of the Holy Spirit, and Cathy is no different. He told Religion News Service the idea for Cathy materialized after brainstorming how to address young people’s spiritual needs. What if a chatbot could meet people asking life’s biggest questions with care, insight and careful research?

“The goal is not that they will end up at their nearby Episcopal church on Sunday. The goal is that it will spark in them this knowledge that God is always with us, that God never leaves us,” Lebrija said. “This can be a tool that gives us a glimpse and little direction that we can then follow on our own.”

To do that, though, would require a chatbot designed to avoid the kinds of hallucinations and errors that have plagued other ChatGPT integrations. In May, the Catholic evangelization site Catholic Answers “defrocked” their AI avatar, Father Justin, designating him as a layperson after he reportedly claimed to be an ordained priest capable of taking confession and performing marriages...

The Rev. Peter Levenstrong, an associate rector at an Episcopal church in San Francisco who blogs about AI and the church, told RNS he thinks Cathy could familiarize people with the Episcopal faith.

“We have a PR issue,” Levenstrong said. “Most people don’t realize there is a denomination that is deeply rooted in tradition, and yet open and affirming, and theologically inclusive, and doing its best to strive toward a future without racial injustice, without ecocide, all these huge problems that we as a church take very seriously.”

In his own context, Levenstrong has already used Cathy to brainstorm Harry Potter-themed lessons for children. (She recommended a related book written by an Episcopalian.)

Cathy’s creators know AI is a thorny topic. Their FAQ page anticipates potential critiques."

Wednesday, August 7, 2024

It’s practically impossible to run a big AI company ethically; Vox, August 5, 2024

Sigal Samuel, Vox; It’s practically impossible to run a big AI company ethically

"Anthropic was supposed to be the good AI company. The ethical one. The safe one.

It was supposed to be different from OpenAI, the maker of ChatGPT. In fact, all of Anthropic’s founders once worked at OpenAI but quit in part because of differences over safety culture there, and moved to spin up their own company that would build AI more responsibly. 

Yet lately, Anthropic has been in the headlines for less noble reasons: It’s pushing back on a landmark California bill to regulate AI. It’s taking money from Google and Amazon in a way that’s drawing antitrust scrutiny. And it’s being accused of aggressively scraping data from websites without permission, harming their performance. 

What’s going on?

The best clue might come from a 2022 paper written by the Anthropic team back when their startup was just a year old. They warned that the incentives in the AI industry — think profit and prestige — will push companies to “deploy large generative models despite high uncertainty about the full extent of what these models are capable of.” They argued that, if we want safe AI, the industry’s underlying incentive structure needs to change.

Well, at three years old, Anthropic is now the age of a toddler, and it’s experiencing many of the same growing pains that afflicted its older sibling OpenAI."

Dr. Ruffini: Church leadership needed to shape ‘Ethical AI’; LiCAS News via Vatican News, August 2024

Joan April, Roy Lagarde & Mark Saludes - LiCAS News via Vatican News; Dr. Ruffini: Church leadership needed to shape ‘Ethical AI’

"“The digital world is not a ready-made. It is changing every day. We, we can change it. We can shape it. And we need Catholic communicators to do it, with love and with human intelligence,” said Dr. Ruffini. 

In a recorded speech delivered during the 7th National Catholic Social Communications Convention (NCSCC) in Lipa City, south of Manila, on August 5, the Prefect of the Dicastery for Communication (Vatican News' parent organization) underscored the Church’s responsibility to guide technological advancements with moral clarity and human-centered values.

“So the basic question is not about machines, but about humans, about us. There are and always will be things that a technology cannot replace, like freedom, like the miracle of encounter between people, like the surprise of the unexpected, the conversion, the outburst of ingenuity, the gratuitous love,” he said. 

Organized by the Episcopal Commission on Social Communications (ECSC) of the Catholic Bishops’ Conference of the Philippines (CBCP), the convention aims to explore advancements and risks in AI, offering insights on leveraging the technology for positive impact while addressing potential negative consequences."

A booming industry of AI age scanners, aimed at children’s faces; The Washington Post, August 7, 2024

The Washington Post; A booming industry of AI age scanners, aimed at children’s faces

"Nineteen states, home to almost 140 million Americans, have passed or enacted laws requiring online age checks since the beginning of last year, including Virginia, Texas and Florida. For the companies, that’s created a gold mine: Employees at Incode, a San Francisco firm that runs more than 100 million verifications a year, now internally track state bills and contact local officials to, as senior director of strategy Fernanda Sottil said, “understand where … our tech fits in.”

But while the systems are promoted for safeguarding kids, they can only work by inspecting everyone — surveying faces, driver’s licenses and other sensitive data in vast quantities. Alex Stamos, the former security chief of Facebook, which uses Yoti, said “most age verification systems range from ‘somewhat privacy violating’ to ‘authoritarian nightmare.'”"

Tuesday, August 6, 2024

How Companies Can Take a Global Approach to AI Ethics; Harvard Business Review (HBR), August 5, 2024

Favour Borokini et al., Harvard Business Review (HBR); How Companies Can Take a Global Approach to AI Ethics

"Getting the AI ethics policy right is a high-stakes affair for an organization. Well-publicized instances of gender biases in hiring algorithms or job search results may diminish the company’s reputation, pit the company against regulations, and even attract hefty government fines. Sensing such threats, organizations are increasingly creating dedicated structures and processes to inculcate AI ethics proactively. Some companies have moved further along this road, creating institutional frameworks for AI ethics.

Many efforts, however, miss an important fact: ethics differ from one cultural context to the next...

Western perspectives are also implicitly being encoded into AI models. For example, some estimates show that less than 3% of all images on ImageNet represent the Indian and Chinese diaspora, which collectively account for a third of the global population. Broadly, a lack of high-quality data will likely lead to low predictive power and bias against underrepresented groups — or even make it impossible for tools to be developed for certain communities at all. LLMs can’t currently be trained for languages that aren’t heavily represented on the Internet, for instance. A recent survey of IT organizations in India revealed that the lack of high-quality data remains the most dominant impediment to ethical AI practices.

As AI gains ground and dictates business operations, an unchecked lack of variety in ethical considerations may harm companies and their customers.

To address this problem, companies need to develop a contextual global AI ethics model that prioritizes collaboration with local teams and stakeholders and devolves decision-making authority to those local teams. This is particularly necessary if their operations span several geographies."

Sunday, August 4, 2024

Music labels' AI lawsuits create copyright puzzle for courts; Reuters, August 4, 2024

Reuters; Music labels' AI lawsuits create copyright puzzle for courts

"Suno and Udio pointed to past public statements defending their technology when asked for comment for this story. They filed their initial responses in court on Thursday, denying any copyright violations and arguing that the lawsuits were attempts to stifle smaller competitors. They compared the labels' protests to past industry concerns about synthesizers, drum machines and other innovations replacing human musicians...

The labels' claims echo allegations by novelists, news outlets, music publishers and others in high-profile copyright lawsuits over chatbots like OpenAI's ChatGPT and Anthropic's Claude that use generative AI to create text. Those lawsuits are still pending and in their early stages.

Both sets of cases pose novel questions for the courts, including whether the law should make exceptions for AI's use of copyrighted material to create something new...

"Music copyright has always been a messy universe," said Julie Albert, an intellectual property partner at law firm Baker Botts in New York who is tracking the new cases. And even without that complication, Albert said fast-evolving AI technology is creating new uncertainty at every level of copyright law.

WHOSE FAIR USE?

The intricacies of music may matter less in the end if, as many expect, the AI cases boil down to a "fair use" defense against infringement claims - another area of U.S. copyright law filled with open questions."

Meta in Talks to Use Voices of Judi Dench, Awkwafina and Others for A.I.; The New York Times, August 2, 2024

Mike Isaac et al., The New York Times; Meta in Talks to Use Voices of Judi Dench, Awkwafina and Others for A.I.

"Meta is in discussions with Awkwafina, Judi Dench and other actors and influencers for the right to incorporate their voices into a digital assistant product called MetaAI, according to three people with knowledge of the talks, as the company pushes to build more products that feature artificial intelligence.

Apart from Ms. Dench and Awkwafina, Meta is in talks with the comedian Keegan-Michael Key and other celebrities, said the people, who spoke on the condition of anonymity because the discussions are private. They added that all of Hollywood’s top talent agencies were involved in negotiations with the tech giant."

OpenAI’s Sam Altman is becoming one of the most powerful people on Earth. We should be very afraid; The Observer via The Guardian, August 3, 2024

Gary Marcus, The Observer via The Guardian; OpenAI’s Sam Altman is becoming one of the most powerful people on Earth. We should be very afraid

"Unfortunately, many other AI companies seem to be on the path of hype and corner-cutting that Altman charted. Anthropic – formed from a set of OpenAI refugees who were worried that AI safety wasn’t taken seriously enough – seems increasingly to be competing directly with the mothership, with all that entails. The billion-dollar startup Perplexity seems to be another object lesson in greed, training on data it isn’t supposed to be using. Microsoft, meanwhile, went from advocating “responsible AI” to rushing out products with serious problems, pressuring Google to do the same. Money and power are corrupting AI, much as they corrupted social media.

We simply can’t trust giant, privately held AI startups to govern themselves in ethical and transparent ways. And if we can’t trust them to govern themselves, we certainly shouldn’t let them govern the world.

I honestly don’t think we will get to an AI that we can trust if we stay on the current path. Aside from the corrupting influence of power and money, there is a deep technical issue, too: large language models (the core technique of generative AI), invented by Google and made famous by Altman’s company, are unlikely ever to be safe. They are recalcitrant, and opaque by nature – so-called “black boxes” that we can never fully rein in. The statistical techniques that drive them can do some amazing things, like speed up computer programming and create plausible-sounding interactive characters in the style of deceased loved ones or historical figures. But such black boxes have never been reliable, and as such they are a poor basis for AI that we could trust with our lives and our infrastructure.

That said, I don’t think we should abandon AI. Making better AI – for medicine, and material science, and climate science, and so on – really could transform the world. Generative AI is unlikely to do the trick, but some future, yet-to-be developed form of AI might.

The irony is that the biggest threat to AI today may be the AI companies themselves; their bad behaviour and hyped promises are turning a lot of people off. Many are ready for government to take a stronger hand. According to a June poll by the Artificial Intelligence Policy Institute, 80% of American voters prefer “regulation of AI that mandates safety measures and government oversight of AI labs instead of allowing AI companies to self-regulate”."

Saturday, August 3, 2024

AI is complicating plagiarism. How should scientists respond?; Nature, July 30, 2024

Diana Kwon, Nature; AI is complicating plagiarism. How should scientists respond?

"From accusations that led Harvard University’s president to resign in January, to revelations in February of plagiarized text in peer-review reports, the academic world has been roiled by cases of plagiarism this year.

But a bigger problem looms in scholarly writing. The rapid uptake of generative artificial intelligence (AI) tools — which create text in response to prompts — has raised questions about whether this constitutes plagiarism and under what circumstances it should be allowed. “There’s a whole spectrum of AI use, from completely human-written to completely AI-written — and in the middle, there’s this vast wasteland of confusion,” says Jonathan Bailey, a copyright and plagiarism consultant based in New Orleans, Louisiana.

Generative AI tools such as ChatGPT, which are based on algorithms known as large language models (LLMs), can save time, improve clarity and reduce language barriers. Many researchers now argue that they are permissible in some circumstances and that their use should be fully disclosed.

But such tools complicate an already fraught debate around the improper use of others’ work. LLMs are trained to generate text by digesting vast amounts of previously published writing. As a result, their use could result in something akin to plagiarism — if a researcher passes off the work of a machine as their own, for instance, or if a machine generates text that is very close to a person’s work without attributing the source. The tools can also be used to disguise deliberately plagiarized text, and any use of them is hard to spot. “Defining what we actually mean by academic dishonesty or plagiarism, and where the boundaries are, is going to be very, very difficult,” says Pete Cotton, an ecologist at the University of Plymouth, UK."

Supreme Court Ethics Controversies: All The Scandals That Led Biden To Endorse Code Of Conduct; Forbes, July 29, 2024

Alison Durkee, Forbes; Supreme Court Ethics Controversies: All The Scandals That Led Biden To Endorse Code Of Conduct

"President Joe Biden endorsed the Supreme Court imposing a binding code of ethics on Monday, following a string of recent ethics issues the court has faced that have ramped up criticism of the court and sparked cries for a code of conduct from lawmakers and legal experts."

Friday, August 2, 2024

Justice Department sues TikTok, accusing the company of illegally collecting children’s data; AP, August 2, 2024

Haleluya Hadero, AP; Justice Department sues TikTok, accusing the company of illegally collecting children’s data

"The Justice Department sued TikTok on Friday, accusing the company of violating children’s online privacy law and running afoul of a settlement it had reached with another federal agency. 

The complaint, filed together with the Federal Trade Commission in a California federal court, comes as the U.S. and the prominent social media company are embroiled in yet another legal battle that will determine if – or how – TikTok will continue to operate in the country. 

The latest lawsuit focuses on allegations that TikTok, a trend-setting platform popular among young users, and its China-based parent company ByteDance violated a federal law that requires kid-oriented apps and websites to get parental consent before collecting personal information of children under 13. It also says the companies failed to honor requests from parents who wanted their children’s accounts deleted, and chose not to delete accounts even when the firms knew they belonged to kids under 13."

Paris mayor supports Olympics opening ceremony director after death threats; The Athletic, August 2, 2024

Ben Burrows and Brendan Quinn, The Athletic; Paris mayor supports Olympics opening ceremony director after death threats

"The mayor of Paris has offered her “unwavering support” to the artistic director behind the Olympics opening ceremony after he was subjected to harassment online including death threats.

Thomas Jolly has filed a complaint with authorities after the opening ceremony — which took place on Friday night — saw him targeted by “threats” and “defamation”...

A statement from Paris mayor Anne Hidalgo on Friday read: “On behalf of the City of Paris and in my own name, I would like to extend my unwavering support to Thomas Jolly in the aftermath of the threats and harassment he has been subjected to in recent days, which have led him to lodge a complaint.”

Paris’ Central Office for Combating Crimes Against Humanity and Hate Crimes (OCLCH) is now investigating Jolly’s complaint.

Jolly’s complaint related to “death threats on account of his origin, death threats on account of his sexual orientation, public insults on account of his origin, public insults on account of his sexual orientation” as well as “defamation” and “threatening and insulting messages criticizing his sexual orientation and his wrongly assumed Israeli origins.”"

Bipartisan Legal Group Urges Lawyers to Defend Against ‘Rising Authoritarianism’; The New York Times, August 1, 2024

The New York Times; Bipartisan Legal Group Urges Lawyers to Defend Against ‘Rising Authoritarianism’

"A bipartisan American Bar Association task force is calling on lawyers across the country to do more to help protect democracy ahead of the 2024 election, warning in a statement to be delivered Friday at the group’s annual meeting in Chicago that the nation faces a serious threat in “rising authoritarianism.”

The statement by a panel of prominent legal thinkers and other public figures — led by J. Michael Luttig, a conservative former federal appeals court judge appointed by President George Bush, and Jeh C. Johnson, a Homeland Security secretary during the Obama administration — does not mention by name former President Donald J. Trump.

But in raising alarms, the panel appeared to be clearly referencing Mr. Trump’s attempt to subvert his loss of the 2020 election, which included attacks on election workers who were falsely accused by Mr. Trump and his supporters of rigging votes and culminated in the violent attack on the Capitol by his supporters on Jan. 6, 2021."