Charlie Warzel, The Atlantic; YouTube Bends the Knee
"This is just the latest example of major tech companies bowing to Trump."
The Ebook version of my Bloomsbury book "Ethics, Information, and Technology" will be published on December 11, 2025, and the Hardback and Paperback versions will be available on January 8, 2026. Preorders are available via Amazon and this Bloomsbury webpage: https://www.bloomsbury.com/us/ethics-information-and-technology-9781440856662/
Roberto Paglialonga, Vatican News; World Meeting on Human Fraternity: Disarming words to disarm the world
[Kip Currier: There is great wisdom and guidance in these words from Pope Leo and Fr. Enzo Fortunato (highlighted from this Vatican News article for emphasis):
“Pope Leo XIV’s words echo: ‘Before being believers, we are called to be human.’” Therefore, Fr. Fortunato concluded, we must “safeguard truth, freedom, and dignity as common goods of humanity. That is the soul of our work—not the defense of corporations or interests.”"
What is in the best interests of corporations and shareholders should not -- must not -- ever be this planet's central organizing principle.
To the contrary, that which is at the very center of our humanity -- truth, freedom, the well-being and dignity of each and every person, and prioritization of the best interests of all members of humanity -- MUST be our North Star and guiding light.]
[Excerpt]
"Representatives from the world of communication and information—directors and CEOs of international media networks— gathered in Rome for the “News G20” roundtable, coordinated by Father Enzo Fortunato, director of the magazine Piazza San Pietro. The event took place on Friday 12 September in the Sala della Protomoteca on Rome's Capitoline Hill. The participants addressed a multitude of themes, including transparency and freedom of information in times of war and conflict: the truth of facts as an essential element to “disarm words and disarm the world,” as Pope Leo XIV has said, so that storytelling and narrative may once again serve peace, dialogue, and fraternity. They also discussed the responsibility of those who work in media to promote the value of competence, in-depth reporting, and credibility in an age dominated by unchecked social media, algorithms, clickbait slogans, and rampant expressions of hatred and violence from online haters.
In opening the workshop, Father Fortunato outlined three “pillars” that can no longer be taken for granted in our time: truth, freedom, and dignity. Truth, he said, is “too often manipulated and exploited,” and freedom is “wounded,” as in many countries around the world “journalists are silenced, persecuted, or killed.” Yet “freedom of the press should be a guarantee for citizens and a safeguard for democracy.” Today, Fr. Fortunato continued, “we have many ‘dignitaries’ but little dignity”: people are targeted by “hate and defamation campaigns, often deliberately orchestrated behind a computer screen. Words can wound more than weapons—and not infrequently, those wounds lead to extreme acts.” Precisely in a historical period marked by division and conflict, humanity—despite its diverse peoples, cultures, and opinions—is called to rediscover what unites it. “Pope Leo XIV’s words echo: ‘Before being believers, we are called to be human.’” Therefore, Fr. Fortunato concluded, we must “safeguard truth, freedom, and dignity as common goods of humanity. That is the soul of our work—not the defense of corporations or interests.”"
MAUREEN DOWD, The New York Times; CBS: Caving. Bowing. Scraping.
"CBS is, as Colbert said, “morally bankrupt.” It’s sickening to see media outlets, universities, law firms and tech companies bending the knee. (Hang tough, Rupert!)
Satirists are left to hold people accountable, and they are more than ready."
Lauren Feiner, The Verge; Trump signs the Take It Down Act into law
"President Donald Trump signed the Take It Down Act into law, enacting a bill that will criminalize the distribution of nonconsensual intimate images (NCII) — including AI deepfakes — and require social media platforms to promptly remove them when notified.
The bill sailed through both chambers of Congress with several tech companies, parent and youth advocates, and first lady Melania Trump championing the issue. But critics — including a group that’s made it its mission to combat the distribution of such images — warn that its approach could backfire and harm the very survivors it seeks to protect.
The law makes publishing NCII, whether real or AI-generated, criminally punishable by up to three years in prison, plus fines. It also requires social media platforms to have processes to remove NCII within 48 hours of being notified and “make reasonable efforts” to remove any copies. The Federal Trade Commission is tasked with enforcing the law, and companies have a year to comply...
The Cyber Civil Rights Initiative (CCRI), which advocates for legislation combating image-based abuse, has long pushed for the criminalization of nonconsensual distribution of intimate images (NDII). But the CCRI said it could not support the Take It Down Act because it may ultimately provide survivors with “false hope.” On Bluesky, CCRI President Mary Anne Franks called the takedown provision a “poison pill … that will likely end up hurting victims more than it helps.”"
The Washington Post; Trump gives commencement address at University of Alabama
"The president talked about the “internet people” like Elon Musk and other tech moguls and businessmen. “They all hated me in my first term,” Donald Trump said. “And now they’re kissing my ass”
“It’s true,” he added. “It’s amazing. It’s nicer this way now.”"
Rachel Hall and Claire Wilmot, The Guardian; ‘I didn’t eat or sleep’: a Meta moderator on his breakdown after seeing beheadings and child abuse
"When Solomon* strode into the gleaming Octagon tower in Accra, Ghana, for his first day as a Meta content moderator, he was bracing himself for difficult but fulfilling work, purging social media of harmful content.
But after just two weeks of training, the scale and depravity of what he was exposed to was far darker than he ever imagined."
David Smith in Washington, The Guardian; Biden bids farewell with dark warning for America: the oligarchs are coming
"The primetime speech did not mention Donald Trump by name. Instead it will be remembered for its dark, ominous warning about something wider and deeper of which Trump is a symptom.
“Today, an oligarchy is taking shape in America of extreme wealth, power, and influence that literally threatens our entire democracy, our basic rights and freedom and a fair shot for everyone to get ahead,” Biden said.
The word “oligarchy” comes from the Greek words meaning rule (arche) by the few (oligos). Some have argued that the dominant political divide in America is no longer between left and right, but between democracy and oligarchy, as power becomes concentrated in the hands of a few. The wealthiest 1% of Americans now has more wealth than the bottom 90% combined.
The trend did not start with Trump but he is set to accelerate it. The self-styled working-class hero has picked the richest cabinet in history, including 13 billionaires, surrounding himself with the very elite he claims to oppose. Elon Musk, the world’s richest man, has become a key adviser. Tech titans Musk, Jeff Bezos and Mark Zuckerberg – collectively worth a trillion dollars – will be sitting at his inauguration on Monday.
Invoking former president Dwight Eisenhower’s farewell address in January 1961 that warned against the rise of a military-industrial complex, Biden said: “Six decades later, I’m equally concerned about the potential rise of a tech industrial complex. It could pose real dangers for our country as well. Americans are being buried under an avalanche of misinformation and disinformation, enabling the abuse of power.”
In an acknowledgement of news deserts and layoffs at venerable institutions such as the Washington Post, Biden added starkly: “The free press is crumbling. Editors are disappearing. Social media is giving up on fact checking. Truth is smothered by lies, told for power and for profit. We must hold the social platforms accountable, to protect our children, our families and our very democracy from the abuse of power.”
Zuckerberg’s recent decision to abandon factcheckers on Facebook, and Musk’s weaponisation of X in favour of far-right movements including Maga, was surely uppermost in Biden’s mind. Trust in the old media is breaking down as people turn to a fragmented new ecosystem. It has all happened with disorienting speed."
CEO Today; BLUESKY SURGES WITH 700,000 NEW MEMBERS AS USERS FLEE X AFTER US ELECTION
"In the wake of the US election, a quiet revolution has been unfolding in the world of social media. The platform Bluesky has seen a dramatic increase in user growth, with over 700,000 new members joining in just one week following the election results. This surge has propelled Bluesky’s user base to 14.5 million globally, up from 9 million in September. The platform’s meteoric rise is largely attributed to disillusioned social media users seeking a safer, more regulated alternative to X (formerly Twitter), especially after the platform underwent a radical transformation under Elon Musk's ownership and his association with US president-elect Donald Trump.
Bluesky, which originated as a project within Twitter before becoming an independent platform in 2022, has quickly become a refuge for those seeking a break from the rising tide of far-right activism, misinformation, and offensive content that has overtaken X in recent months. As X grapples with growing controversy and user dissatisfaction, Bluesky is capitalizing on the opportunity to position itself as a civil and balanced alternative...
The Growing Backlash Against X and Musk’s Vision
The rise of Bluesky is part of a broader trend of backlash against X since Elon Musk's acquisition of the platform. Under Musk’s leadership, X has shifted its focus, alienating a significant portion of its user base. In the aftermath of the US election, many have expressed concerns about the platform's increasing alignment with far-right political groups and its potential transformation into a propaganda tool for Trump and his supporters.
For example, a prominent critic of X, historian Ruth Ben-Ghiat, who had 250,000 followers on X, noted that she picked up 21,000 followers within her first day on Bluesky after moving to the platform. She shared her concerns about X's potential evolution into a far-right radicalization machine under Musk’s stewardship. Ben-Ghiat said, "After January, when X could be owned by a de facto member of the Trump administration, its functions as a Trump propaganda outlet and far-right radicalization machine could be accelerated."
This sentiment reflects the growing sense of unease among users about the political direction of X. As Musk’s political ties become clearer and his rhetoric becomes more controversial, users who once considered X a neutral platform for conversation now see it as a space increasingly hostile to their values. For many, Bluesky is emerging as the antidote to this growing disillusionment."
Cat Zakrzewski, The Washington Post; This threat hunter chases U.S. foes exploiting AI to sway the election
"Ben Nimmo, the principal threat investigator for the high-profile AI pioneer, had uncovered evidence that Russia, China and other countries were using its signature product, ChatGPT, to generate social media posts in an effort to sway political discourse online. Nimmo, who had only started at OpenAI in February, was taken aback when he saw that government officials had printed out his report, with key findings about the operations highlighted and tabbed.
That attention underscored Nimmo’s place at the vanguard in confronting the dramatic boost that artificial intelligence can provide to foreign adversaries’ disinformation operations. In 2016, Nimmo was one of the first researchers to identify how the Kremlin interfered in U.S. politics online. Now tech companies, government officials and other researchers are looking to him to ferret out foreign adversaries who are using OpenAI’s tools to stoke chaos in the tenuous weeks before Americans vote again on Nov. 5.
So far, the 52-year-old Englishman says Russia and other foreign actors are largely “experimenting” with AI, often in amateurish and bumbling campaigns that have limited reach with U.S. voters. But OpenAI and the U.S. government are bracing for Russia, Iran and other nations to become more effective with AI, and their best hope of parrying that is by exposing and blunting operations before they gain traction."
LARA KORTE and JEREMY B. WHITE, Politico; Gavin Newsom vetoes sweeping AI safety bill, siding with Silicon Valley
"Gov. Gavin Newsom vetoed a sweeping California bill meant to impose safety vetting requirements for powerful AI models, siding with much of Silicon Valley and leading congressional Democrats in the most high-profile fight in the Legislature this year."
Justine Roberts, The Guardian; AI could be an existential threat to publishers – that’s why Mumsnet is fighting back
"After nearly 25 years as a founder of Mumsnet, I considered myself pretty unshockable when it came to the workings of big tech. But my jaw hit the floor last week when I read that Google was pushing to overhaul UK copyright law in a way that would allow it to freely mine other publishers’ content for commercial gain without compensation.
At Mumsnet, we’ve been on the sharp end of this practice, and have recently launched the first British legal action against the tech giant OpenAI. Earlier in the year, we became aware that it was scraping our content – presumably to train its large language model (LLM). Such scraping without permission is a breach of copyright laws and explicitly of our terms of use, so we approached OpenAI and suggested a licensing deal. After lengthy talks (and signing a non-disclosure agreement), it told us it wasn’t interested, saying it was after “less open” data sources...
If publishers wither and die because the AIs have hoovered up all their traffic, then who’s left to produce the content to feed the models? And let’s be honest – it’s not as if these tech giants can’t afford to properly compensate publishers. OpenAI is currently fundraising to the tune of $6.5bn, the single largest venture capital round of all time, valuing the enterprise at a cool $150bn. In fact, it has just been reported that the company is planning to change its structure and become a for-profit enterprise...
I’m not anti-AI. It plainly has the potential to advance human progress and improve our lives in myriad ways. We used it at Mumsnet to build MumsGPT, which uncovers and summarises what parents are thinking about – everything from beauty trends to supermarkets to politicians – and we licensed OpenAI’s API (application programming interface) to build it. Plus, we think there are some very good reasons why these AI models should ingest Mumsnet’s conversations to train their models. The 6bn-plus words on Mumsnet are a unique record of 24 years of female interaction about everything from global politics to relationships with in-laws. By contrast, most of the content on the web was written by and for men. AI models have misogyny baked in and we’d love to help counter their gender bias.
But Google’s proposal to change our laws would allow billion-dollar companies to waltz untrammelled over any notion of a fair value exchange in the name of rapid “development”. Everything that’s unique and brilliant about smaller publisher sites would be lost, and a handful of Silicon Valley giants would be left with even more control over the world’s content and commerce."
Dennis Duncan, The New York Times; Pulling Back the Silicon Curtain:
Review of NEXUS: A Brief History of Information Networks From the Stone Age to AI, by Yuval Noah Harari
"In a nutshell, Harari’s thesis is that the difference between democracies and dictatorships lies in how they handle information...
The meat of “Nexus” is essentially an extended policy brief on A.I.: What are its risks, and what can be done? (We don’t hear much about the potential benefits because, as Harari points out, “the entrepreneurs leading the A.I. revolution already bombard the public with enough rosy predictions about them.”) It has taken too long to get here, but once we arrive Harari offers a useful, well-informed primer.
The threats A.I. poses are not the ones that filmmakers visualize: Kubrick’s HAL trapping us in the airlock; a fascist RoboCop marching down the sidewalk. They are more insidious, harder to see coming, but potentially existential. They include the catastrophic polarizing of discourse when social media algorithms designed to monopolize our attention feed us extreme, hateful material. Or the outsourcing of human judgment — legal, financial or military decision-making — to an A.I. whose complexity becomes impenetrable to our own understanding.
Echoing Churchill, Harari warns of a “Silicon Curtain” descending between us and the algorithms we have created, shutting us out of our own conversations — how we want to act, or interact, or govern ourselves...
“When the tech giants set their hearts on designing better algorithms,” writes Harari, “they can usually do it.”...
Parts of “Nexus” are wise and bold. They remind us that democratic societies still have the facilities to prevent A.I.’s most dangerous excesses, and that it must not be left to tech companies and their billionaire owners to regulate themselves."
Chris Hughes, The New York Times; Why Do People Like Elon Musk Love Donald Trump? It’s Not Just About Money.
"Mr. Trump appeals to some Silicon Valley elites because they identify with the man. To them, he is a fellow victim of the state, unjustly persecuted for his bold ideas. Practically, he is also the shield they need to escape accountability. Mr. Trump may threaten democratic norms and spread disinformation; he could even set off a recession, but he won’t challenge their ability to build the technology they like, no matter the social cost...
As much as they want to influence Mr. Trump’s policies, they also want to strike back at the Biden-Harris administration, which they believe has unfairly targeted their industry.
More than any other administration in the internet era, President Biden and Ms. Harris have pushed tech companies toward serving the public interest...
Last year, Mr. Andreessen, whose venture capital firm is heavily invested in crypto, wrote a widely discussed “manifesto” claiming that enemy voices of “bureaucracy, vetocracy, gerontocracy” are opposed to the “pursuit of technology, abundance and life.” In a barely concealed critique of the Biden-Harris administration, he argued that those who believe in carefully assessing the impact of new technologies before adopting them are “deeply immoral.”
Theodore Schleifer and Mike Isaac, The New York Times; Mark Zuckerberg Is Done With Politics
"Instead of publicly engaging with Washington, Mr. Zuckerberg is repairing relationships with politicians behind the scenes. After the “Zuckerbucks” criticism, Mr. Zuckerberg hired Brian Baker, a prominent Republican strategist, to improve his positioning with right-wing media and Republican officials. In the lead-up to November’s election, Mr. Baker has emphasized to Mr. Trump and his top aides that Mr. Zuckerberg has no plans to make similar donations, a person familiar with the discussions said.
Mr. Zuckerberg has yet to forge a relationship with Vice President Kamala Harris. But over the summer, Mr. Zuckerberg had his first conversations with Mr. Trump since he left office, according to people familiar with the conversations."
Tim Murphy, Mother Jones; Mark Zuckerberg Isn’t Done With Politics. His Politics Have Just Changed.
"On Tuesday, the New York Times reported that one of the world’s richest men had recently experienced a major epiphany. After bankrolling a political organization that supported immigration reform, espousing his support for social justice, and donating hundreds of millions of dollars to support local election workers during the 2020 election, “Mark Zuckerberg is done with politics.”
The Facebook founder and part-time Hawaiian feudal lord, according to the piece, “believed that both parties loathed technology and that trying to continue engaging with political causes would only draw further scrutiny to their company,” and felt burned by the criticism he has faced in recent years, on everything from the proliferation of disinformation on Facebook to his investment in election administration (which conservatives dismissively referred to as “Zuckerbucks”). He is mad, in other words, that people are mad at him, and it has made him rethink his entire theory of how the world works.
It’s an interesting piece, which identifies a real switch in how Zuckerberg—who along with his wife, Priscilla Chan, has made a non-binding pledge to give away a majority of his wealth by the end of his lifetime—thinks about his influence and his own ideology. But there’s a fallacy underpinning that headline: Zuckerberg isn’t done with politics. His politics have simply changed."
Chris Velazco, The Washington Post; LinkedIn is training AI on you — unless you opt out with this setting
"To opt out, log into your LinkedIn account, tap or click on your headshot, and open the settings. Then, select “Data privacy,” and turn off the option under “Data for generative AI improvement.”
Flipping that switch will prevent the company from feeding your data to its AI, with a key caveat: The results aren’t retroactive. LinkedIn says it has already begun training its AI models with user content, and that there’s no way to undo it."
Beatrix Lockwood, The Washington Post; ‘I quit my job as a content moderator. I can never go back to who I was before.’
"Alberto Cuadra worked as a content moderator at a video-streaming platform for just under a year, but he saw things he’ll never forget. He watched videos about murders and suicides, animal abuse and child abuse, sexual violence and teenage bullying — all so you didn’t have to. What shows up when you scroll through social media has been filtered through an army of tens of thousands of content moderators, who protect us at the risk of their own mental health.
Warning: The following illustrations contain references to disturbing content."
Amanda Heidt, Nature; Intellectual property and data privacy: the hidden risks of AI
"Timothée Poisot, a computational ecologist at the University of Montreal in Canada, has made a successful career out of studying the world’s biodiversity. A guiding principle for his research is that it must be useful, Poisot says, as he hopes it will be later this year, when it joins other work being considered at the 16th Conference of the Parties (COP16) to the United Nations Convention on Biological Diversity in Cali, Colombia. “Every piece of science we produce that is looked at by policymakers and stakeholders is both exciting and a little terrifying, since there are real stakes to it,” he says.
But Poisot worries that artificial intelligence (AI) will interfere with the relationship between science and policy in the future. Chatbots such as Microsoft’s Bing, Google’s Gemini and ChatGPT, made by tech firm OpenAI in San Francisco, California, were trained using a corpus of data scraped from the Internet — which probably includes Poisot’s work. But because chatbots don’t often cite the original content in their outputs, authors are stripped of the ability to understand how their work is used and to check the credibility of the AI’s statements. It seems, Poisot says, that unvetted claims produced by chatbots are likely to make their way into consequential meetings such as COP16, where they risk drowning out solid science.
“There’s an expectation that the research and synthesis is being done transparently, but if we start outsourcing those processes to an AI, there’s no way to know who did what and where the information is coming from and who should be credited,” he says...
The technology underlying genAI, which was first developed at public institutions in the 1960s, has now been taken over by private companies, which usually have no incentive to prioritize transparency or open access. As a result, the inner mechanics of genAI chatbots are almost always a black box — a series of algorithms that aren’t fully understood, even by their creators — and attribution of sources is often scrubbed from the output. This makes it nearly impossible to know exactly what has gone into a model’s answer to a prompt. Organizations such as OpenAI have so far asked users to ensure that outputs used in other work do not violate laws, including intellectual-property and copyright regulations, or divulge sensitive information, such as a person’s location, gender, age, ethnicity or contact information. Studies have shown that genAI tools might do both [1,2]."
RYAN MACASERO , The Mercury News; Controversial California AI regulation bill finds unlikely ally in Elon Musk
"With a make-or-break deadline just days away, a polarizing bill to regulate the fast-growing artificial intelligence industry from progressive state Sen. Scott Wiener has gained support from an unlikely source.
Elon Musk, the Donald Trump-supporting, often regulation-averse Tesla CEO and X owner, this week said he thinks “California should probably pass” the proposal, which would regulate the development and deployment of advanced AI models, specifically large-scale AI products costing at least $100 million to build.
The surprising endorsement from a man who also owns an AI company comes as other political heavyweights typically much more aligned with Wiener’s views, including San Francisco Mayor London Breed and Rep. Nancy Pelosi, join major tech companies in urging Sacramento to put on the brakes."
Shira Ovide, The Washington Post; After a decade of free Alexa, Amazon now wants you to pay
"There was a lot of optimism in the 2010s that digital assistants like Alexa, Apple’s Siri and Google Assistant would become a dominant way we interact with technology, and become as life-changing as smartphones have been.
Those predictions were mostly wrong. The digital assistants were dumber than the companies claimed, and it’s often annoying to speak commands rather than type on a keyboard or tap on a touch screen...
If you’re thinking there’s no chance you’d pay for an AI Alexa, you should see how many people subscribe to OpenAI’s ChatGPT...
The mania over AI is giving companies a new selling point to upcharge you. It’s now in your hands whether the promised features are worth it, or if you can’t stomach any more subscriptions."