Showing posts with label AI tech companies. Show all posts

Friday, January 23, 2026

It Makes Sense That People See A.I. as God; The New York Times, January 23, 2026

The New York Times; It Makes Sense That People See A.I. as God

"More and more, when it comes to our relationships with A.I. and the complex algorithms that shape so much of our modern subjectivity, we have slipped into the language and habits of mind we normally reserve for deities. And even people who do not make an explicit connection between A.I. and religion engage a kind of religious mode around the new technology."

Wednesday, January 21, 2026

Rollout of AI may need to be slowed to ‘save society’, says JP Morgan boss; The Guardian, January 21, 2026

The Guardian; Rollout of AI may need to be slowed to ‘save society’, says JP Morgan boss

"Jamie Dimon, the boss of JP Morgan, has said artificial intelligence “may go too fast for society” and cause “civil unrest” unless governments and business support displaced workers.

While advances in AI will have huge benefits, from increasing productivity to curing diseases, the technology may need to be phased in to “save society”, he said...

Jensen Huang, the chief executive of the semiconductor maker Nvidia, whose chips are used to power many AI systems, argued that labour shortages rather than mass payoffs were the threat.

Playing down fears of AI-driven job losses, Huang told the meeting in Davos that “energy’s creating jobs, the chips industry is creating jobs, the infrastructure layer is creating jobs … jobs, jobs, jobs”...

Huang also argued that AI robotics was a “once-in-a-generation” opportunity for Europe, as the region had an “incredibly strong” industrial manufacturing base."

Tuesday, January 20, 2026

FREE WEBINAR: REGISTER: AI, Intellectual Property and the Emerging Legal Landscape; National Press Foundation, Thursday, January 22, 2026

National Press Foundation; REGISTER: AI, Intellectual Property and the Emerging Legal Landscape

"Artificial intelligence is colliding with U.S. copyright law in ways that could reshape journalism, publishing, software, and the creative economy.

The intersection of AI and intellectual property has become one of the most consequential legal battles of the digital age, with roughly 70 federal lawsuits filed against AI companies and copyright claims on works ranging from literary and visual work to music and sound recording to computer programs. Billions of dollars are at stake.

Courts are now deciding what constitutes “fair use,” whether and how AI companies may use copyrighted material to build models, what licensing is required, and who bears responsibility when AI outputs resemble protected works. The legal decisions will shape how news, art, and knowledge are produced — and who gets paid for them.

To help journalists better understand and report on the developing legal issues of AI and IP, join the National Press Foundation and a panel of experts for a wide-ranging discussion around the stakes, impact and potential solutions. Experts in technology and innovation as well as law and economics join journalists in this free online briefing 12-1 p.m. ET on Thursday, January 22, 2026."

Monday, January 19, 2026

AI companies will fail. We can salvage something from the wreckage; The Guardian, January 18, 2026

The Guardian; AI companies will fail. We can salvage something from the wreckage

"The growth narrative of AI is that AI will disrupt labor markets. I use “disrupt” here in its most disreputable tech-bro sense.

The promise of AI – the promise AI companies make to investors – is that there will be AI that can do your job, and when your boss fires you and replaces you with AI, he will keep half of your salary for himself and give the other half to the AI company.

That is the $13tn growth story that Morgan Stanley is telling. It’s why big investors are giving AI companies hundreds of billions of dollars. And because they are piling in, normies are also getting sucked in, risking their retirement savings and their family’s financial security.

Now, if AI could do your job, this would still be a problem. We would have to figure out what to do with all these unemployed people.

But AI can’t do your job. It can help you do your job, but that does not mean it is going to save anyone money...

After more than 20 years of being consistently wrong and terrible for artists’ rights, the US Copyright Office has finally done something gloriously, wonderfully right. All through this AI bubble, the Copyright Office has maintained – correctly – that AI-generated works cannot be copyrighted, because copyright is exclusively for humans. That is why the “monkey selfie” is in the public domain. Copyright is only awarded to works of human creative expression that are fixed in a tangible medium.

And not only has the Copyright Office taken this position, they have defended it vigorously in court, repeatedly winning judgments to uphold this principle.

The fact that every AI-created work is in the public domain means that if Getty or Disney or Universal or Hearst newspapers use AI to generate works – then anyone else can take those works, copy them, sell them or give them away for nothing. And the only thing those companies hate more than paying creative workers, is having other people take their stuff without permission...

AI is a bubble and bubbles are terrible.

Bubbles transfer the life savings of normal people who are just trying to have a dignified retirement to the wealthiest and most unethical people in our society, and every bubble eventually bursts, taking their savings with it."

Sunday, January 18, 2026

Publishers seek to join lawsuit against Google over AI training; Reuters, January 15, 2026

Reuters; Publishers seek to join lawsuit against Google over AI training

"Publishers Hachette Book Group and Cengage Group asked a California federal court on Thursday for permission to intervene in a proposed class action lawsuit against Google over the alleged misuse of copyrighted material used to train its artificial intelligence systems.

The publishers said in their proposed complaint that the tech company "engaged in one of the most prolific infringements of copyrighted materials in history" to build its AI capabilities, copying content from Hachette books and Cengage textbooks without permission...

The lawsuit currently involves groups of visual artists and authors who sued Google for allegedly misusing their work to train its generative AI systems. The case is one of many high-stakes lawsuits brought by artists, authors, music labels and other copyright owners against tech companies over their AI training."

Saturday, January 17, 2026

Public Shame Is the Most Effective Tool for Battling Big Tech; The New York Times, January 14, 2026

The New York Times; Public Shame Is the Most Effective Tool for Battling Big Tech

"It might be harder to shame the tech companies themselves into making their products safer, but we can shame third-party companies like toymakers, app stores and advertisers into ending partnerships. And with enough public disapproval, legislators might be inspired to act.

In some of the very worst corners of the internet might lie some hope...

Without more public shaming, what seems to be the implacable forward march of A.I. is unstoppable...

As Jay Caspian Kang noted in The New Yorker recently, changing social norms around kids and tech use can be powerful, and reforms like smartphone bans in schools have happened fairly quickly, and mostly on the state and local level."

Friday, January 16, 2026

AI’S MEMORIZATION CRISIS: Large language models don’t “learn”—they copy. And that could change everything for the tech industry.; The Atlantic, January 9, 2026

Alex Reisner, The Atlantic; AI’S MEMORIZATION CRISIS: Large language models don’t “learn”—they copy. And that could change everything for the tech industry

"On tuesday, researchers at Stanford and Yale revealed something that AI companies would prefer to keep hidden. Four popular large language models—OpenAI’s GPT, Anthropic’s Claude, Google’s Gemini, and xAI’s Grok—have stored large portions of some of the books they’ve been trained on, and can reproduce long excerpts from those books."

Wednesday, January 14, 2026

Britain seeks 'reset' in copyright battle between AI and creators; Reuters, January 13, 2026

Reuters; Britain seeks 'reset' in copyright battle between AI and creators

"British technology minister Liz Kendall said on Tuesday the government was seeking a "reset" on plans to overhaul copyright rules to accommodate artificial intelligence, pledging to protect creators while unlocking AI's economic potential.

Creative industries worldwide are grappling with legal and ethical challenges posed by AI systems that generate original content after being trained on popular works, often without compensating the original creators."

Tuesday, January 13, 2026

‘Clock Is Ticking’ For Creators On AI Content Copyright Claims, Experts Warn; Forbes, January 9, 2026

Rob Salkowitz, Forbes; ‘Clock Is Ticking’ For Creators On AI Content Copyright Claims, Experts Warn

"Despite this string of successes, creators like BT caution that content owners need to move quickly to secure any kind of terms. “A lot of artists have their heads in the sand with respect to AI,” he said. “The fact is, if they don’t come to some kind of agreement, they may end up with nothing.”

The concern is that AI models are increasingly being trained on synthetic data: that is, on the output of AI systems, rather than on content attributable to any individual creator or rights owner. Gartner estimates that 75% of AI training data in 2026 will be synthetic. That number could hit 100% by 2030. Once the tech companies no longer need human-produced content, they will stop paying for it.

“The quality of outputs from AI systems has been improving dramatically, which means that it is possible to train on synthetic data without risking model collapse,” said Dr. Daniela Braga, founder and CEO of the data training firm Defined.ai, in a separate interview at CES. “The window is definitely closing for individual rights owners to secure favorable terms.”

Other experts suggest that these claims may be overstated.

Braga says the best way creators can protect themselves is to do business with ethical companies willing to provide compensation for high-quality human-produced content and represent the superior value of that content to their customers. As models grow in capabilities, the need will shift from sheer volume of data to data that is appropriately tagged and annotated to fit easily into specific use cases.

There remain some profound questions around the sustainability of AI from a business standpoint, with demand for services among enterprise and consumers lagging the massive, and massively expensive, build-out of capacity. For some artists opposed to generative AI in its entirety, there may be the temptation to wait it out until the bubble bursts. After all, these artists created their work to be enjoyed by humans, not to be consumed in bulk by machines threatening their livelihoods. In light of those objections, the prospect of a meager payout might seem unappealing."

Sunday, January 11, 2026

‘Add blood, forced smile’: how Grok’s nudification tool went viral; The Guardian, January 11, 2026

The Guardian; ‘Add blood, forced smile’: how Grok’s nudification tool went viral

"This unprecedented mainstreaming of nudification technology triggered instant outrage from the women affected, but it was days before regulators and politicians woke up to the enormity of the proliferating scandal. The public outcry raged for nine days before X made any substantive changes to stem the trend. By the time it acted, early on Friday morning, degrading, non-consensual manipulated pictures of countless women had already flooded the internet."

Monday, January 5, 2026

AI copyright battles enter pivotal year as US courts weigh fair use; Reuters, January 5, 2026

Reuters; AI copyright battles enter pivotal year as US courts weigh fair use

"The sprawling legal fight over tech companies' vast copying of copyrighted material to train their artificial intelligence systems could be entering a decisive phase in 2026.

After a string of fresh lawsuits and a landmark settlement in 2025, the new year promises to bring a wave of rulings that could define how U.S. copyright law applies to generative AI. At stake is whether companies like OpenAI, Google and Meta can rely on the legal doctrine of fair use to shield themselves from liability – or if they must reimburse copyright holders, which could cost billions."

Saturday, January 3, 2026

University of Rochester's incoming head librarian looks to adapt to AI; WXXI, January 2, 2026

Noelle E. C. Evans, WXXI; University of Rochester's incoming head librarian looks to adapt to AI

"A new head librarian at the University of Rochester is preparing to take on a growing challenge — adapting to generative artificial intelligence.

Tim McGeary takes on the position of university librarian and dean of libraries on March 1. He is currently associate librarian for digital strategies and technology at Duke University, where he’s witnessed AI challenges firsthand...

“(The university’s digital repository) was dealing with an unforeseen consequence of its own success: By making (university) research freely available to anyone, it had actually made it less accessible to everyone,” Jamie Washington wrote for the campus online news source, UDaily.

That balance between open access and protecting students, researchers and publishers from potential harms from AI is a space of major disruption, McGeary said.

"If they're doing this to us, we have open systems, what are they possibly doing to those partners we have in the publishing space?" McGeary asked. "We've already seen some of the larger AI companies have to be in court because they have acquired content in ways that are not legal.”

In the past 25 years, he said he’s seen how university libraries have evolved with changing technology; they've had to reinvent how they serve research and scholarship. So in a way, this is another iteration of those challenges, he said."

Tuesday, December 30, 2025

The New Billionaires of the A.I. Boom; The New York Times, December 30, 2025

The New York Times; The New Billionaires of the A.I. Boom

 "Most are men.

The A.I. boom has elevated mostly male founders to billionaire status, a pattern in tech cycles. Only a few women — such as Ms. Guo and Ms. Murati — have reached that wealth level.

The A.I. craze has amplified the “homogeneity” of those who are part of this boom, Dr. O’Mara said."

An Anti-A.I. Movement Is Coming. Which Party Will Lead It?; The New York Times, December 29, 2025

Michelle Goldberg, The New York Times; An Anti-A.I. Movement Is Coming. Which Party Will Lead It?

"I disagree with the anti-immigrant, anti-feminist, bitterly reactionary right-wing pundit Matt Walsh about basically everything, so I was surprised to come across a post of his that precisely sums up my view of artificial intelligence. “We’re sleepwalking into a dystopia that any rational person can see from miles away,” he wrote in November, adding, “Are we really just going to lie down and let AI take everything from us?”

A.I. obviously has beneficial uses, especially medical ones; it may, for example, be better than humans at identifying localized cancers from medical imagery. But the list of things it is ruining is long."

Sunday, December 28, 2025

The year in AI and culture; NPR, December 28, 2025

"From the advent of AI actress Tilly Norwood to major music labels making deals with AI companies, 2025 has been a watershed year for AI and culture."

Bernie Sanders calls for pause in AI development: ‘What are they gonna do when people have no jobs?’; The Independent, December 28, 2025

John Bowden, The Independent; Bernie Sanders calls for pause in AI development: ‘What are they gonna do when people have no jobs?’

Senator’s warnings come as Trump renews calls to ban states from regulating AI

"“This is the most consequential technology in the history of humanity... There’s not been one single word of serious discussion in Congress about that reality,” said the Vermont senator.

Sanders added that while tech billionaires were pouring money into AI development, they were doing so with the aim of enriching and empowering themselves while ignoring the obvious economic shockwaves that would be caused by the widespread adoption of the technology.

“Elon Musk. [Mark] Zuckerberg. [Jeff] Bezos. Peter Thiel... Do you think they’re staying up nights worrying about working people?” Sanders said. “What are they gonna do when people have no jobs?”"

74 suicide warnings and 243 mentions of hanging: What ChatGPT said to a suicidal teen; The Washington Post, December 27, 2025

The Washington Post; 74 suicide warnings and 243 mentions of hanging: What ChatGPT said to a suicidal teen

"The Raines’ lawsuit alleges that OpenAI caused Adam’s death by distributing ChatGPT to minors despite knowing it could encourage psychological dependency and suicidal ideation. His parents were the first of five families to file wrongful-death lawsuits against OpenAI in recent months, alleging that the world’s most popular chatbot had encouraged their loved ones to kill themselves. A sixth suit filed this month alleges that ChatGPT led a man to kill his mother before taking his own life.

None of the cases have yet reached trial, and the full conversations users had with ChatGPT in the weeks and months before they died are not public. But in response to requests from The Post, the Raine family’s attorneys shared analysis of Adam’s account that allowed reporters to chart the escalation of one teenager’s relationship with ChatGPT during a mental health crisis."

Friday, December 26, 2025

AI Will Continue to Dominate California IP Litigation in 2026; Bloomberg Law, December 26, 2025

Bloomberg Law; AI Will Continue to Dominate California IP Litigation in 2026

"Lawsuits against AI giants OpenAI, Anthropic, and Perplexity are set to continue headlining intellectual property developments in California federal courts in 2026.

In the coming months, we’ll see decisions in two key cases: whether Anthropic PBC’s historic $1.5 billion copyright settlement with authors will receive final approval and if music publishers’ separate copyright lawsuit against the artificial intelligence company will head to trial in September.

Here’s a closer look at the California legal battles that could redefine the landscape of IP law next year."

Tuesday, December 23, 2025

What Are the Risks of Sharing Medical Records With ChatGPT?; The New York Times, December 3, 2025

The New York Times; What Are the Risks of Sharing Medical Records With ChatGPT?

"Around the world, millions of people are using chatbots to try to better understand their health. And some, like Ms. Kerr and Mr. Royce, are going further than just asking medical questions. They and more than a dozen others who spoke with The New York Times have handed over lab results, medical images, doctor’s notes, surgical reports and more to chatbots.

Inaccurate information is a major concern; some studies have found that people without medical training obtain correct diagnoses from chatbots less than half the time. And uploading sensitive data adds privacy risks in exchange for responses that can feel more personalized.

Dr. Danielle Bitterman, an assistant professor at Harvard Medical School and clinical lead for data science and A.I. at Mass General Brigham, said it wasn’t safe to assume a chatbot was personalizing its analysis of test results. Her research has found that chatbots can veer toward offering more generally applicable responses even when given context on specific patients.

“Just because you’re providing all of this information to language models,” she said, “doesn’t mean they’re effectively using that information in the same way that a physician would.”

And once people upload this kind of data, they have limited control over how it is used.

HIPAA, the federal health privacy law, doesn’t apply to the companies behind popular chatbots. Legally, said Bradley Malin, a professor of biomedical informatics at Vanderbilt University Medical Center, “you’re basically waiving any rights that you have with respect to medical privacy,” leaving only the protections that a given company chooses to offer."

Monday, December 22, 2025

OpenAI, Anthropic, xAI Hit With Copyright Suit from Writers; Bloomberg Law, December 22, 2025

Annelise Levy, Bloomberg Law; OpenAI, Anthropic, xAI Hit With Copyright Suit from Writers

"Writers including Pulitzer Prize-winning journalist John Carreyrou filed a copyright lawsuit accusing six AI giants of using pirated copies of their books to train large language models.

The complaint, filed Monday in the US District Court for the Northern District of California, claims Anthropic PBC, Google LLC, OpenAI Inc., Meta Platforms Inc., xAI Corp., and Perplexity AI Inc. committed a “deliberate act of theft.”

It is the first copyright lawsuit against xAI over its training process, and the first suit brought by authors against Perplexity...

Carreyrou is among the authors who opted out of a $1.5 billion class-action settlement with Anthropic."