
Tuesday, December 30, 2025

The New Billionaires of the A.I. Boom; The New York Times, December 30, 2025

The New York Times; The New Billionaires of the A.I. Boom

 "Most are men.

The A.I. boom has elevated mostly male founders to billionaire status, a pattern in tech cycles. Only a few women — such as Ms. Guo and Ms. Murati — have reached that wealth level.

The A.I. craze has amplified the “homogeneity” of those who are part of this boom, Dr. O’Mara said."

An Anti-A.I. Movement Is Coming. Which Party Will Lead It?; The New York Times, December 29, 2025

MICHELLE GOLDBERG, The New York Times; An Anti-A.I. Movement Is Coming. Which Party Will Lead It?

"I disagree with the anti-immigrant, anti-feminist, bitterly reactionary right-wing pundit Matt Walsh about basically everything, so I was surprised to come across a post of his that precisely sums up my view of artificial intelligence. “We’re sleepwalking into a dystopia that any rational person can see from miles away,” he wrote in November, adding, “Are we really just going to lie down and let AI take everything from us?”

A.I. obviously has beneficial uses, especially medical ones; it may, for example, be better than humans at identifying localized cancers from medical imagery. But the list of things it is ruining is long."

Sunday, December 28, 2025

The year in AI and culture; NPR, December 28, 2025

"From the advent of AI actress Tilly Norwood to major music labels making deals with AI companies, 2025 has been a watershed year for AI and culture."

Bernie Sanders calls for pause in AI development: ‘What are they gonna do when people have no jobs?’; The Independent, December 28, 2025

John Bowden, The Independent; Bernie Sanders calls for pause in AI development: ‘What are they gonna do when people have no jobs?’

Senator’s warnings come as Trump renews calls to ban states from regulating AI

"“This is the most consequential technology in the history of humanity... There’s not been one single word of serious discussion in Congress about that reality,” said the Vermont senator.

Sanders added that while tech billionaires were pouring money into AI development, they were doing so with the aim of enriching and empowering themselves while ignoring the obvious economic shockwaves that would be caused by the widespread adoption of the technology.

“Elon Musk. [Mark] Zuckerberg. [Jeff] Bezos. Peter Thiel... Do you think they’re staying up nights worrying about working people?” Sanders said. “What are they gonna do when people have no jobs?”"

74 suicide warnings and 243 mentions of hanging: What ChatGPT said to a suicidal teen; The Washington Post, December 27, 2025

The Washington Post; 74 suicide warnings and 243 mentions of hanging: What ChatGPT said to a suicidal teen

"The Raines’ lawsuit alleges that OpenAI caused Adam’s death by distributing ChatGPT to minors despite knowing it could encourage psychological dependency and suicidal ideation. His parents were the first of five families to file wrongful-death lawsuits against OpenAI in recent months, alleging that the world’s most popular chatbot had encouraged their loved ones to kill themselves. A sixth suit filed this month alleges that ChatGPT led a man to kill his mother before taking his own life.

None of the cases have yet reached trial, and the full conversations users had with ChatGPT in the weeks and months before they died are not public. But in response to requests from The Post, the Raine family’s attorneys shared analysis of Adam’s account that allowed reporters to chart the escalation of one teenager’s relationship with ChatGPT during a mental health crisis."

Friday, December 26, 2025

AI Will Continue to Dominate California IP Litigation in 2026; Bloomberg Law, December 26, 2025

Bloomberg Law; AI Will Continue to Dominate California IP Litigation in 2026

"Lawsuits against AI giants OpenAI, Anthropic, and Perplexity are set to continue headlining intellectual property developments in California federal courts in 2026.

In the coming months, we’ll see decisions in two key cases: whether Anthropic PBC’s historic $1.5 billion copyright settlement with authors will receive final approval and if music publishers’ separate copyright lawsuit against the artificial intelligence company will head to trial in September.

Here’s a closer look at the California legal battles that could redefine the landscape of IP law next year."

Tuesday, December 23, 2025

What Are the Risks of Sharing Medical Records With ChatGPT?; The New York Times, December 3, 2025

The New York Times; What Are the Risks of Sharing Medical Records With ChatGPT?

"Around the world, millions of people are using chatbots to try to better understand their health. And some, like Ms. Kerr and Mr. Royce, are going further than just asking medical questions. They and more than a dozen others who spoke with The New York Times have handed over lab results, medical images, doctor’s notes, surgical reports and more to chatbots.

Inaccurate information is a major concern; some studies have found that people without medical training obtain correct diagnoses from chatbots less than half the time. And uploading sensitive data adds privacy risks in exchange for responses that can feel more personalized.

Dr. Danielle Bitterman, an assistant professor at Harvard Medical School and clinical lead for data science and A.I. at Mass General Brigham, said it wasn’t safe to assume a chatbot was personalizing its analysis of test results. Her research has found that chatbots can veer toward offering more generally applicable responses even when given context on specific patients.

“Just because you’re providing all of this information to language models,” she said, “doesn’t mean they’re effectively using that information in the same way that a physician would.”

And once people upload this kind of data, they have limited control over how it is used.

HIPAA, the federal health privacy law, doesn’t apply to the companies behind popular chatbots. Legally, said Bradley Malin, a professor of biomedical informatics at Vanderbilt University Medical Center, “you’re basically waiving any rights that you have with respect to medical privacy,” leaving only the protections that a given company chooses to offer."

Monday, December 22, 2025

OpenAI, Anthropic, xAI Hit With Copyright Suit from Writers; Bloomberg Law, December 22, 2025

Annelise Levy, Bloomberg Law; OpenAI, Anthropic, xAI Hit With Copyright Suit from Writers

"Writers including Pulitzer Prize-winning journalist John Carreyrou filed a copyright lawsuit accusing six AI giants of using pirated copies of their books to train large language models.

The complaint, filed Monday in the US District Court for the Northern District of California, claims Anthropic PBC, Google LLC, OpenAI Inc., Meta Platforms Inc., xAI Corp., and Perplexity AI Inc. committed a “deliberate act of theft.”

It is the first copyright lawsuit against xAI over its training process, and the first suit brought by authors against Perplexity...

Carreyrou is among the authors who opted out of a $1.5 billion class-action settlement with Anthropic."

Sunday, December 21, 2025

Australian culture, resources and democracy for $4,300 a year? Thanks for the offer, tech bros, but no thanks; The Guardian, December 15, 2025

The Guardian; Australian culture, resources and democracy for $4,300 a year? Thanks for the offer, tech bros, but no thanks

"According to the Tech Council, AI will deliver $115bn in annual productivity (or about $4,300 per person), rubbery figures generated by industry-commissioned research based on estimates on hours saved with no regard for jobs lost, the distribution of the promised dividend benefit or how the profits will flow.

In return for this ill-defined bounty, Farquhar says our government will need to allow the tech industry to do three things: build a data and text mining exemption to copyright law, rapidly scale data centre infrastructure and allow foreign companies to use these centres without regard for local laws. This is a proposition that demands closer scrutiny.

The use of copyrighted content to train AI has been a burning issue since 2023 when a massive data dredge saw more than 190,000 authors (including me) have our works plundered without our consent to train AI. Musicians and artists too have had their work scraped and repurposed.

This theft has been critical in training the large language models to portray something approaching empathy. It has also allowed paid users to take this stolen content and ape creators, devaluing and diminishing their work in the process. Nick Cave has described this as “replication as travesty”, noting “songs arise out of suffering … data doesn’t suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing.”

The sense of grievance among creators over the erasure of culture is wide and deep. A wave of creators from Peter Garrett to Tina Arena, Anna Funder and Trent Dalton have determined this is the moment to take a stand.

It is not just the performers; journalists, academics, voiceover and visual artists are all being replaced by shittier but cheaper automated products built on the theft of their labour, undermining the integrity of their work and will ultimately take their jobs.

Like fossil fuels, what is being extracted and consumed is the sum of our accumulated history. It goes from metaphor to literal when it comes to the second plank of Farquhar’s pitch: massive spending on industrial infrastructure to accommodate AI.

This imperative to power AI is the justification used by Donald Trump to recharge the mining of fossil fuels, while the industry is beating the “modular nuclear” drum for a cleaner AI revolution. Meanwhile, the OpenAI CEO, Sam Altman, is reassuring us that we don’t need to stress because AI will solve climate change anyway!

The third and final element of Farquhar’s pitch is probably its most revealing. If Australia wants to build this AI nirvana, foreign nations should be given diplomatic immunity for the data centres built and operated here. This quaint notion of the “data embassy” overriding national sovereignty reinforces a growing sense that the tech sector is moving beyond the idea of the nation state governing corporations to that of a modern imperial power.

That’s the premise of Karen Hao’s book The Empire of AI, which chronicles the rise of OpenAI and the choices it made to trade off safety and the public good in pursuit of scale and profit."

Proposal to allow use of Australian copyrighted material to train AI abandoned after backlash; The Guardian, December 19, 2025

The Guardian; Proposal to allow use of Australian copyrighted material to train AI abandoned after backlash

"The Productivity Commission has abandoned a proposal to allow tech companies to mine copyrighted material to train artificial intelligence models, after a fierce backlash from the creative industries.

Instead, the government’s top economic advisory body recommended the government wait three years before deciding whether to establish an independent review of Australian copyright settings and the impact of the disruptive new technology...

In its interim report on the digital economy, the commission floated the idea of granting a “fair dealing” exemption to copyright rules that would allow AI companies to mine data and text to develop their large language models...

The furious response from creative industries to the commission’s idea included music industry bodies saying it would “legitimise digital piracy under guise of productivity”."

Tuesday, December 16, 2025

The Architects of AI Are TIME’s 2025 Person of the Year; Time, December 11, 2025

Charlie Campbell, Andrew R. Chow and Billy Perrigo, Time; The Architects of AI Are TIME’s 2025 Person of the Year

"For decades, humankind steeled itself for the rise of thinking machines. As we marveled at their ability to beat chess champions and predict protein structures, we also recoiled from their inherent uncanniness, not to mention the threats to our sense of humanity. Leaders striving to develop the technology, including Sam Altman and Elon Musk, warned that the pursuit of its powers could create unforeseen catastrophe.

This year, the debate about how to wield AI responsibly gave way to a sprint to deploy it as fast as possible. “Every industry needs it, every company uses it, and every nation needs to build it,” Huang tells TIME in a 75-minute interview in November, two days after announcing that Nvidia, the world’s first $5 trillion company, had once again smashed Wall Street’s earnings expectations. “This is the single most impactful technology of our time.” OpenAI’s ChatGPT, which at launch was the fastest-growing consumer app of all time, has surpassed 800 million weekly users. AI wrote millions of lines of code, aided lab scientists, generated viral songs, and spurred companies to re-examine their strategies or risk obsolescence. (OpenAI and TIME have a licensing and technology agreement that allows OpenAI to access TIME’s archives.)...

This is the story of how AI changed our world in 2025, in new and exciting and sometimes frightening ways. It is the story of how Huang and other tech titans grabbed the wheel of history, developing technology and making decisions that are reshaping the information landscape, the climate, and our livelihoods. Racing both beside and against each other, they placed multibillion-dollar bets on one of the biggest physical infrastructure projects of all time. They reoriented government policy, altered geopolitical rivalries, and brought robots into homes. AI emerged as arguably the most consequential tool in great-power competition since the advent of nuclear weapons."

Monday, December 15, 2025

Government's AI consultation finds just 3% support copyright exception; The Bookseller, December 15, 2025

MAIA SNOW, The Bookseller; Government's AI consultation finds just 3% support copyright exception

"The initial results of the consultation found that the majority of respondents (88%) backed licences being required in all cases where data was being used for AI training. Just 3% of respondents supported the government’s preferred options, which would allow data mining by AI companies and require rights holders to opt-out."

Sunday, December 14, 2025

Elon Musk teams with El Salvador to bring Grok chatbot to public schools; The Guardian, December 11, 2025

The Guardian; Elon Musk teams with El Salvador to bring Grok chatbot to public schools

"Elon Musk is partnering with the government of El Salvador to bring his artificial intelligence company’s chatbot, Grok, to more than 1 million students across the country, according to a Thursday announcement by xAI. Over the next two years, the plan is to “deploy” the chatbot to more than 5,000 public schools in an “AI-powered education program”."

The Disney-OpenAI tie-up has huge implications for intellectual property; Fast Company, December 11, 2025

CHRIS STOKEL-WALKER, Fast Company; The Disney-OpenAI tie-up has huge implications for intellectual property

"Walt Disney and OpenAI make for very odd bedfellows: The former is one of the most-recognized brands among children under the age of 18. The near-$200 billion company’s value has been derived from more than a century of aggressive safeguarding of its intellectual property and keeping the magic alive among innocent children.

OpenAI, which celebrated its first decade of existence this week, is best known for upending creativity, the economy, and society with its flagship product, ChatGPT. And in the last two months, it has said it wants to get to a place where its adult users can use its tech to create erotica.

So what the hell should we make of a just-announced deal between the two that will allow ChatGPT and Sora users to create images and videos of more than 200 characters, from Mickey and Minnie Mouse to the Mandalorian, starting from early 2026?"

Saturday, December 13, 2025

Authors Ask to Update Meta AI Copyright Suit With Torrent Claim; Bloomberg Law, December 12, 2025

 

Bloomberg Law; Authors Ask to Update Meta AI Copyright Suit With Torrent Claim

"Authors in a putative class action copyright suit against Meta Platforms Inc. asked a federal judge for permission to amend their complaint to add a claim over Meta’s use of peer-to-peer file-sharing unveiled in discovery."

Friday, December 12, 2025

The Disney-OpenAI Deal Redefines the AI Copyright War; Wired, December 11, 2025

BRIAN BARRETT, Wired; The Disney-OpenAI Deal Redefines the AI Copyright War

 "“I think that AI companies and copyright holders are beginning to understand and become reconciled to the fact that neither side is going to score an absolute victory,” says Matthew Sag, a professor of law and artificial intelligence at Emory University. While many of these cases are still working their way through the courts, so far it seems like model inputs—the training data that these models learn from—are covered by fair use. But this deal is about outputs—what the model returns based on your prompt—where IP owners like Disney have a much stronger case

Coming to an output agreement resolves a host of messy, potentially unsolvable issues. Even if a company tells an AI model not to produce, say, Elsa at a Wendy’s drive-through, the model might know enough about Elsa to do so anyway—or a user might be able to prompt their way into making Elsa without asking for the character by name. It’s a tension that legal scholars call the “Snoopy problem,” but in this case you might as well call it the Disney problem.

“Faced with this increasingly clear reality, it makes sense for consumer-facing AI companies and entertainment giants like Disney to think about licensing arrangements,” says Sag."

Thursday, December 11, 2025

Trump Says Chips Ahoy to Xi Jinping; Wall Street Journal, December 10, 2025

The Editorial Board, Wall Street Journal; Trump Says Chips Ahoy to Xi Jinping

"President Trump said this week he will let Nvidia sell its H200 chip to China in return for the U.S. Treasury getting a 25% cut of the sales. The Indians struck a better deal when they sold Manhattan to the Dutch. Why would the President give away one of America’s chief technological advantages to an adversary and its chief economic competitor?"

Trump Signs Executive Order to Neuter State A.I. Laws; The New York Times, December 11, 2025

The New York Times; Trump Signs Executive Order to Neuter State A.I. Laws

"President Trump signed an executive order on Thursday that aims to neuter state laws that place limits on the artificial intelligence industry, a win for tech companies that have lobbied against regulation of the booming technology.

Mr. Trump, who has said it is important for America to dominate A.I., has criticized the state laws for generating a confusing patchwork of regulations. He said his order would create one federal regulatory framework that would override the state laws, and added that it was critical to keep the United States ahead of China in a battle for leadership on the technology."

Banning AI Regulation Would Be a Disaster; The Atlantic, December 11, 2025

Chuck Hagel, The Atlantic; Banning AI Regulation Would Be a Disaster

"On Monday, Donald Trump announced on Truth Social that he would soon sign an executive order prohibiting states from regulating AI...

The greatest challenges facing the United States do not come from overregulation but from deploying ever more powerful AI systems without minimum requirements for safety and transparency...

Contrary to the narrative promoted by a small number of dominant firms, regulation does not have to slow innovation. Clear rules would foster growth by hardening systems against attack, reducing misuse, and ensuring that the models integrated into defense systems and public-facing platforms are robust and secure before deployment at scale.

Critics of oversight are correct that a patchwork of poorly designed laws can impede that mission. But they miss two essential points. First, competitive AI policy cannot be cordoned off from the broader systems that shape U.S. stability and resilience...

Second, states remain the country’s most effective laboratories for developing and refining policy on complex, fast-moving technologies, especially in the persistent vacuum of federal action...

The solution to AI’s risks is not to dismantle oversight but to design the right oversight. American leadership in artificial intelligence will not be secured by weakening the few guardrails that exist. It will be secured the same way we have protected every crucial technology touching the safety, stability, and credibility of the nation: with serious rules built to withstand real adversaries operating in the real world. The United States should not be lobbied out of protecting its own future."

Disney says Google AI infringes copyright “on a massive scale”; Ars Technica, December 11, 2025

RYAN WHITWAM, Ars Technica; Disney says Google AI infringes copyright “on a massive scale”

"Disney has sent a cease and desist to Google, alleging the company’s AI tools are infringing Disney’s copyrights “on a massive scale.”

According to the letter, Google is violating the entertainment conglomerate’s intellectual property in multiple ways. The legal notice says Google has copied a “large corpus” of Disney’s works to train its gen AI models, which is believable, as Google’s image and video models will happily produce popular Disney characters—they couldn’t do that without feeding the models lots of Disney data.

The C&D also takes issue with Google for distributing “copies of its protected works” to consumers."