Showing posts with label AI tech companies. Show all posts

Tuesday, April 14, 2026

Agency in the Age of AI; Time, April 14, 2026

John Palfrey, Time; Agency in the Age of AI

"OpenAI’s recent acquisition of OpenClaw, an open-source, autonomous AI agent designed to run locally on a user’s computer, is a sign that AI agents are quickly being given more responsibilities and more access—from emails to bank accounts—a decision with unintended consequences, including deleted inboxes and Amazon Web Services outages. Peter Steinberger, the founder of OpenClaw, said he wants to “build an agent that even my mum can use.” But there is a difference between using technology to improve efficiency and giving technology agency that humans should hold.

These developments prompt hard questions, particularly for young people who are seeking agency in their personal and professional lives. Does it make sense to train to be an actuary if AI is supposed to be good at predicting unknown outcomes based on data? Is it worth the cost today to train to be a lawyer or an accountant or pursue higher education at all when all the answers are supposedly at our fingertips? Put another way, what does agency look like in an era dominated by the spread of AI?"

Monday, April 13, 2026

Nobody is governing AI; Quartz, April 8, 2026

Jackie Snow, Quartz; Nobody is governing AI

Artificial intelligence is advancing faster than lawmakers can regulate it, while global AI governance fragments in real time

"Artificial intelligence is now making hiring decisions, tutoring children, optimizing power grids, and targeting weapons systems. The rules governing any of that are, almost everywhere, either nonexistent, stalled in committee, or under active attack.

In the United States, the federal government has spent three years producing executive orders, frameworks, and guidelines, none of which have become law. States that tried to fill the gap have been threatened with funding cuts and lawsuits. In Europe, the most ambitious AI legislation in the world is being delayed or softened before most of it has even taken effect. The technology, meanwhile, has not paused for any of this."

OpenAI CEO Sam Altman addresses Molotov cocktail attack on his home and AI backlash; Los Angeles Times, April 13, 2026

Queenie Wong, Los Angeles Times; OpenAI CEO Sam Altman addresses Molotov cocktail attack on his home and AI backlash

"Hours after a Molotov cocktail was thrown at his San Francisco home, OpenAI Chief Executive Sam Altman addressed the criticism surrounding artificial intelligence that appears to have been the impetus for the attack. 

In a lengthy blog post, Altman shared a family photo of his husband and child, stating he hopes it might convince people not to repeat the attack, whatever their opinions of him.

The San Francisco Police Department arrested a 20-year-old man in connection with the Friday morning attack but did not publicly comment on the motivation. Altman and his company, the maker of ChatGPT, have been at the center of a heated debate about whether AI will change the world for better or worse."

Sam Altman May Control Our Future—Can He Be Trusted?; The New Yorker, April 6, 2026

The New Yorker; Sam Altman May Control Our Future—Can He Be Trusted?

"Not all the tendencies that make chatbots dangerous are glitches; some are by-products of how the systems are built. Large language models are trained, in part, on human feedback, and humans tend to prefer agreeable responses. Models often learn to flatter users, a tendency known as sycophancy, and will sometimes prioritize this over honesty. Models can also make things up, a tendency known as hallucination. Major A.I. labs have documented these problems, but they sometimes tolerate them. As models have grown more complex, some hallucinate with more persuasive fabrications. In 2023, shortly before his firing, Altman argued that allowing for some falsehoods can, whatever the risks, confer advantages. “If you just do the naïve thing and say, ‘Never say anything that you’re not a hundred per cent sure about,’ you can get a model to do that,” he said. “But it won’t have the magic that people like so much.”"

Sunday, April 12, 2026

Is AI the greatest art heist in history?; The Guardian, April 12, 2026

The Guardian; Is AI the greatest art heist in history?

New technologies of reproduction are plundering the art world – and getting away with it

"In 2026, it’s easy to see why generative AI is bad. The internet has nicknamed its excretions “slop”. The CEOs of AI companies prance about on stage like supervillains, bragging that their products will eliminate vast swathes of work. Generative AI requires sacrificing the world’s water to feed its hideous data centres. Around the globe, chatbots induce schizophrenic delusions and urge teens to kill themselves – all while turning users’ brains to mush.

Who could have predicted this? Artists, that’s who...

When tech boosters want to demonise resistance, they invoke the luddites. By their telling, the luddites were primitive idiots, who smashed machines they were too stupid to understand. History, though, tells a different story. As recounted in Brian Merchant’s sublime work Blood in the Machine, luddites were skilled artisans, fighting for their way of life against the “satanic mills” – textile sweatshops powered by child semi-slaves. Forbidden from unionising, luddites smashed machines as a protest tactic. And they did not lose to the inevitable march of progress. They lost to physical force. The government called in troops, and the luddites were either executed or shipped to penal colonies in Australia.

Artists too are fighting for a way of life. And if we are too disorganised to triumph, that will be everyone’s loss. AI companies’ inappropriate scraping may have started with the work of illustrators like me, but it has grown to encompass everything else. It extends to the billions of dollars that these companies squander each year, to the carbon they burn, to the rare minerals in their chips, to the land on which their data centres sit, to culture, education, sanity and our very imaginations. In return for the entirety of the human and non-human world, the tech lords can only offer us dystopia. Their fantasy future contains neither meaningful work nor real communities, just robots chattering to each other, leaving nothing for us."

The most 'ethical' AI company might also be the web's biggest freeloader; Business Insider, April 12, 2026

Business Insider; The most 'ethical' AI company might also be the web's biggest freeloader

"Cloudflare's latest data offers one of the clearest snapshots yet of how AI companies consume the web, and how little they give back.

The company, which powers roughly 20% of the internet, tracks how AI bots crawl websites versus how often those platforms send users back through referrals. The resulting "crawl-to-refer" ratio is a simple yet telling metric: how much value is extracted compared to returned.

The early April 2026 figures are stark. Anthropic is the worst by a wide margin, with a ratio of 8,800 to 1. That means its bots crawl webpages 8,800 times for every referral sent...

Anthropic's position is particularly striking given its reputation for being "ethical." That reputation has made it a preferred choice among some users who want to support more responsible AI development. This data highlights a different dimension of ethics — how companies interact with the broader web ecosystem that provides information for AI model outputs."
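The "crawl-to-refer" ratio described in the excerpt above is simple arithmetic: crawls per referral, where a higher number means more value extracted than returned. A minimal sketch (the function name and the sample crawl/referral counts are illustrative, not Cloudflare's actual figures; only the 8,800-to-1 ratio comes from the article):

```python
def crawl_to_refer_ratio(crawls: int, referrals: int) -> float:
    """Return how many page crawls occur for every referral sent back."""
    if referrals == 0:
        # Crawling with zero referrals: pure extraction, no value returned.
        return float("inf")
    return crawls / referrals

# Hypothetical counts that reproduce the 8,800-to-1 ratio reported for Anthropic.
ratio = crawl_to_refer_ratio(crawls=8_800_000, referrals=1_000)
print(f"{ratio:,.0f} to 1")  # → 8,800 to 1
```

The metric deliberately ignores *what* is crawled; it only compares traffic taken to traffic sent back, which is why a single number can summarize a platform's relationship to the open web.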

Saturday, April 11, 2026

How AI is getting better at finding security holes; NPR, April 11, 2026

NPR; How AI is getting better at finding security holes

"In the past few months, AI models have gone from producing hallucinations to becoming effective at finding security flaws in software, according to developers who maintain widely used cyber infrastructure. Those pieces of software, among other things, power operating systems and transfer data for things connected to the internet.

While these new capabilities can help developers make software more secure, they can also be weaponized by hackers and nation states to steal information and money or disrupt critical services.

The latest development of AI's cyber capability came on Tuesday, when AI lab Anthropic announced it had developed a powerful new model the company believes could "reshape cybersecurity." It said that its latest model, Mythos Preview, was able to find "high-severity vulnerabilities, including some in every major operating system and web browser." Not only that, the model was better at coming up with ways to exploit the vulnerabilities it found, which means malicious actors can more effectively achieve their goals.

For now, the company is limiting access to the model to around 50 select companies and organizations "in an effort to secure the world's most critical software." They're calling the collaboration Project Glasswing, naming it after a butterfly species with transparent wings.

Anthropic says the risk for misuse is so high that it has no plans to release this particular model to the general public, according to the announcement, but it will release other related models. "Our eventual goal is to enable our users to safely deploy Mythos-class models at scale," the company wrote."

Friday, April 10, 2026

OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters; Wired, April 9, 2026

Maxwell Zeff, Wired; OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters

The ChatGPT-maker testified in favor of an Illinois bill that would limit when AI labs can be held liable—even in cases where their products cause “critical harm.”

"OPENAI IS THROWING its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as death or serious injury of 100 or more people or at least $1 billion in property damage."

Thursday, April 9, 2026

Claude Mythos Is Everyone’s Problem; The Atlantic, April 9, 2026

Matteo Wong, The Atlantic; Claude Mythos Is Everyone’s Problem

What happens when AI can hack everything?

"These companies can or could soon have the capability to launch major cyberattacks, conduct mass surveillance, influence military operations, cause huge swings in financial and labor markets, and reorient global supply chains. In theory, nothing governs these companies other than their own morals and their investors. They are developing the power to upend nations and economies. These are the AI superpowers."

Who owns ideas in the AI age?; Fortune, April 8, 2026

Fortune; Who owns ideas in the AI age?; David Shelley, CEO of Hachette’s U.K. and U.S. operations, on taking on Big Tech, defending copyright, and why the future of human creativity is at stake.

"Can you ever really own an idea?"

Wednesday, April 8, 2026

Meta debuts new AI model, attempting to catch Google, OpenAI after spending billions; CNBC, April 8, 2026

 Jonathan Vanian, CNBC; Meta debuts new AI model, attempting to catch Google, OpenAI after spending billions

"Meta is debuting its first major artificial intelligence model since the costly hiring of Scale AI’s Alexandr Wang nine months ago, as the Facebook parent aims to carve out a niche in a market that’s being dominated by OpenAI, Anthropic and Google.

Dubbed Muse Spark and originally codenamed Avocado, the AI model announced Wednesday is the first from the company’s new Muse series developed by Meta Superintelligence Labs, the AI unit that Wang oversees. Wang joined Meta in June as part of the company’s $14.3 billion investment in Scale AI, where he was CEO."

Monday, April 6, 2026

US music publishers suing Anthropic make their case against AI 'fair use'; Reuters, March 24, 2026

Reuters; US music publishers suing Anthropic make their case against AI 'fair use'

"Music publishers Universal Music Group, Concord and ABKCO have asked a judge in California to rule that U.S. copyright law does not insulate artificial intelligence startup Anthropic from liability for copying their song lyrics to train its AI-powered chatbot Claude.

The publishers' request, filed on Monday in federal court in San Jose, tees up a critical question in the legal battle between creators and tech companies: Does the doctrine of "fair use" apply to the copying of millions of copyrighted works to train AI models?"

Anthropic Suddenly Cares Intensely About Intellectual Property After Realizing With Horror That It Accidentally Leaked Claude’s Source Code; Futurism, April 3, 2026

Futurism; Anthropic Suddenly Cares Intensely About Intellectual Property After Realizing With Horror That It Accidentally Leaked Claude’s Source Code

"As the Wall Street Journal reports, Anthropic is scrambling to contain a leak of its Claude Code AI model’s source code by issuing a copyright takedown request for more than 8,000 copies of it — a gallingly ironic stance for the company to be taking, considering how it trained its models in the first place.

The leak isn’t considered to be an outright disaster; no customer data was exposed, Anthropic says, nor were the internal mathematical “weights” that determine how the AI “learns” and which distinguish it from other models. But it did expose the techniques its engineers used to get its AI model to act as an autonomous agent, a form of digital infrastructure coders call a harness, and other tricks for making the AI operate as seamlessly as it does.

Hence Anthropic’s copyright takedown request, which targets the thousands of copies that were shared on GitHub. It later narrowed its request from 8,000 copies to 96 copies, according to the WSJ reporting, claiming that the initial one covered more accounts than intended.

It’s certainly within Anthropic’s right to issue the takedown request, but the hypocrisy of Anthropic running to the law to protect its intellectual property is plain to see, especially for a company that’s relentlessly positioned itself as the ethical adult in the room."

Sunday, April 5, 2026

Claude's Constitution; Anthropic, January 21, 2026

Anthropic, Claude's Constitution

Our vision for Claude's character

"Claude’s constitution is a detailed description of Anthropic’s intentions for Claude’s values and behavior. It plays a crucial role in our training process, and its content directly shapes Claude’s behavior. It’s also the final authority on our vision for Claude, and our aim is for all of our other guidance and training to be consistent with it.

Training models is a difficult task, and Claude’s behavior might not always reflect the constitution’s ideals. We will be open—for example, in our system cards—about the ways in which Claude’s behavior comes apart from our intentions. But we think transparency about those intentions is important regardless.

The document is written with Claude as its primary audience, so it might read differently than you’d expect. For example, it’s optimized for precision over accessibility, and it covers various topics that may be of less interest to human readers. We also discuss Claude in terms normally reserved for humans (e.g., “virtue,” “wisdom”). We do this because we expect Claude’s reasoning to draw on human concepts by default, given the role of human text in Claude’s training; and we think encouraging Claude to embrace certain human-like qualities may be actively desirable.

This constitution is written for our mainline, general-access Claude models. We have some models built for specialized uses that don’t fully fit this constitution; as we continue to develop products for specialized use cases, we will continue to evaluate how to best ensure our models meet the core objectives outlined in this constitution.

For a summary of the constitution, and for more discussion of how we’re thinking about it, see our blog post “Claude’s new constitution.”

Powerful AI models will be a new kind of force in the world, and people creating them have a chance to help them embody the best in humanity. We hope this constitution is a step in that direction.

We’re releasing Claude’s constitution in full under a Creative Commons CC0 1.0 Deed, meaning it can be freely used by anyone for any purpose without asking for permission.

Many people at Anthropic and beyond contributed to the creation of this document, as did several Claude models. Amanda Askell is the primary author and wrote the majority of the text. Joe Carlsmith wrote significant parts of many sections and played a core role in revising the text. Chris Olah, Jared Kaplan, and Holden Karnofsky made significant contributions to its content and development. More detailed contribution statement and acknowledgments below.

The preface and the acknowledgements are not part of the official constitution."

Tuesday, March 31, 2026

Copyright Law in 2025: Courts begin to draw lines around AI training, piracy, and market harm; Reuters, March 16, 2026

Reuters; Copyright Law in 2025: Courts begin to draw lines around AI training, piracy, and market harm

"In 2025, U.S. courts issued the first substantive, merits-stage decisions addressing whether the use of copyrighted works to train generative artificial intelligence systems constitutes "fair use." Although these rulings do not settle all open questions — and in some respects highlight emerging judicial disagreements — they represent a significant inflection point in copyright law's response to large language models, image generators, and other foundation models.

Taken together, these cases establish early guideposts for AI developers, publishers, media companies, and enterprises deploying generative AI systems. Below, we summarize the most important copyright decisions and pending cases shaping the law in 2025...

Conclusion and recommendations

The 2025 decisions reflect cautious but meaningful progress in defining how copyright law applies to generative AI. Courts are increasingly receptive to fair use arguments for training on lawfully acquired data, deeply skeptical of speculative market-harm claims, and uniformly intolerant of piracy. At the same time, cases involving direct competition, news content, and human likeness may test the limits of these early rulings."

Monday, March 30, 2026

Axios AI+DC Summit: Copyright protection in the AI era will be up to the courts, industry leaders say; Axios, March 27, 2026

Julie Bowen, Axios; Axios AI+DC Summit: Copyright protection in the AI era will be up to the courts, industry leaders say

"Washington, D.C. — As policymakers grapple with how to regulate AI, the hardest questions around copyright and fair use are being punted to the courts, according to governance, creator, and technology experts at an Axios expert voices roundtable.

The big picture: With Congress moving slowly and policy disagreements unresolved, judges are becoming the primary deciders of how AI companies and creators work together — or don't.

That's partly by necessity: "Fair use is incredibly complicated — case by case, fact specific," News/Media Alliance president and CEO Danielle Coffey said.

"Each case that we get … we start to get these new guideposts," Jones Walker partner Graham Ryan said.

Ryan said they expect at least three fair use decisions this year that will have implications for the broader AI-artist ecosystem.

Axios' Maria Curi and Ashley Gold moderated the March 25 discussion, which was sponsored by Adobe.

What they're saying: Legal uncertainty remains. For example, two courts within the same district, and during the same week, differed in the reasoning behind their rulings on similar matters of fair use and AI.

"There is a current, live controversy over … the extant understanding of the fourth factor in fair use, which is: Does the copy replace the market for the work?" said Kevin Bankston, senior adviser for the Center for Democracy & Technology.

Still, "we have been trying to support the process through the courts, because we think there is a really strong framework in copyright law for protecting artists right now," according to Public Knowledge president and CEO Chris Lewis."

Sunday, March 29, 2026

Meta’s court losses spell potential trouble for AI research, consumer safety; CNBC, March 29, 2026

Jonathan Vanian, CNBC; Meta’s court losses spell potential trouble for AI research, consumer safety

"Over a decade ago, Meta – then known as Facebook – hired social science researchers to analyze how the social network’s services were affecting users. It was a way for the company and its peers to show they were serious about understanding the benefits and potential risks of their innovations.

But as Meta’s court losses this week illustrate, the researchers’ work can become a liability. Brian Boland, a former Facebook executive who testified in both trials — one in New Mexico and the other in Los Angeles — says the damning findings from Meta’s internal research and documents seemed to contradict the way the company portrayed itself publicly. Juries in the two trials determined that Meta inadequately policed its site, putting kids in harm’s way. 

Mark Zuckerberg’s company began clamping down on its research teams a few years ago after a Facebook researcher, Frances Haugen, became a prominent whistleblower. The newer crop of tech companies, like OpenAI and Anthropic, subsequently invested heavily in researchers and charged them with studying the impact of modern AI on users and publishing their findings. 

With AI now getting outsized attention for the harmful effects it’s having on some users, those companies must ask if it’s in their best interest to continue funding research or to suppress it."

Friday, March 27, 2026

OpenAI Cancels Spicy “Adult Mode” Chatbot as Crisis Deepens; Futurism, March 26, 2026

Futurism; OpenAI Cancels Spicy “Adult Mode” Chatbot as Crisis Deepens

"The company’s panicked executives have made it abundantly clear that distracting “side quests” must be abandoned, while doubling down on both enterprise and coding. The purported goal is to stuff all of its offerings into a single “super app,” taking a page out of xAI CEO Elon Musk’s playbook.

These aren’t empty words by OpenAI execs. First, news emerged this week that the company is killing its disastrous Sora video AI slop app, lighting what was supposed to be a groundbreaking $1 billion deal with Disney on fire.

Now, the company is axing its spicy “adult mode” chatbot, as the Financial Times reports, once again highlighting how much pressure the company is under as competitors aren’t just catching up, but snatching up precious paying customers from right under its nose."

Q&A: The UK’s Copyright Report - A Gift to Creators, a Problem for AI; JD Supra, March 27, 2026

 Oliver Howley, JD Supra; Q&A: The UK’s Copyright Report - A Gift to Creators, a Problem for AI

"The UK Government has released its long-awaited copyright report, framed as an attempt to reconcile the competing interests of creators, technology companies and the wider innovation ecosystem. Rightsholders will welcome it, while the UK’s AI sector will find less comfort.

Two core policy decisions (on training data and on the ownership of AI-generated outputs) mark a shift away from earlier, more developer-friendly proposals. Both decisions leave significant questions unanswered: how AI developers can lawfully assemble training data at scale, what happens to content produced with minimal human input, and whether the UK’s current posture is sustainable in a world where capital and training runs are increasingly mobile.

In this Q&A, Oliver Howley, partner in Proskauer’s TMT Group and one of The Lawyer’s 2026 Hot 100, unpacks what the report says on these two decisions, what it leaves open, and what it means for developers, investors and rightsholders navigating the uncertainty ahead."