My Bloomsbury book "Ethics, Information, and Technology" was published on Nov. 13, 2025. Purchases can be made via Amazon and this Bloomsbury webpage: https://www.bloomsbury.com/us/ethics-information-and-technology-9781440856662/
"The standoff between Anthropic and the Pentagon has forced the tech industry to once again grapple with the question of how its products are used for war – and what lines it will not cross. Amid Silicon Valley’s rightward shift under Donald Trump and the signing of lucrative defense contracts, big tech’s answer is looking very different than it did even less than a decade ago."
"Anthropic on Monday sued the Pentagon, alleging its designation as a "supply chain risk" violates the company's First Amendment rights and exceeds the government's authority.
Why it matters: Supply chain risk designations are usually reserved for foreign adversaries that pose a national security risk — a punishment that could be hard for the government to square as it relied on Claude for operations in Iran.
State of play: The Pentagon last week designated Anthropic a supply chain risk, meaning companies must stop using Claude in cases directly tied to the department.
President Trump also told the federal government in a Truth Social post to stop using Anthropic's technology, and some agencies have begun offboarding the tools.
Anthropic is asking courts to undo the supply chain risk designation, block its enforcement and require federal agencies to withdraw directives to drop the company.
The company says its two lawsuits are not meant to force the government to work with Anthropic, but prevent officials from blacklisting companies over policy disagreements."
"The events of the past week reminded me of my early days as a Navy pilot nearly three decades ago. One of my first tasks was to sign a document pledging never to surveil American citizens. By the time of the 9/11 attacks, I was an aircraft commander, leading combat-reconnaissance aircrews that gathered large-scale intelligence and informed battlefield targeting decisions. I took for granted that somewhere along those decision chains, a human being was in the loop.
I could not have defined artificial intelligence then, but I understood instinctively that a person, not a machine, would bear the weight of life-and-death choices. This was not a bureaucratic consideration. It was a hard line that those of us in uniform were expected to hold.
In the standoff between Anthropic and the Pentagon, a private company was forced to hold the line against its own government. In doing so, Anthropic may have earned something more valuable than the contract it lost. In an industry where trust is the scarcest resource, Anthropic just banked a substantial deposit."
"As the Wall Street Journal reported as the attacks unfolded the military strike force had a hand in selecting its targets from Anthropic’s Claude chatbot.
According to the paper, Anthropic’s large language model, Claude, is the key “AI tool” used by US Central Command in the Middle East. Its tasks include assessing intelligence, running simulated war games, and even identifying military targets — in short, helping military leaders plan attacks that have already claimed hundreds of lives.
Anthropic’s role in the devastating attacks might come as news for anyone who thought the company’s ethical redlines precluded it from any military work whatsoever. The company and its CEO, Dario Amodei, have been embroiled in a messy conflict with the Trump administration over two particular moral boundaries: the use of Claude for surveillance of US citizens, and for fully autonomous lethal weaponry.
It appears that using Claude to select targets, though, isn’t brushing up against the bot’s ethical guardrails.
That’s striking, because Anthropic has spent the latter part of February embroiled in conflict with the Pentagon over the use of Claude."
"In the leadup to the weekend’s US and Israeli attacks on Iran, the US Department of Defense was locked in tense negotiations with artificial intelligence (AI) company Anthropic over exactly how the Pentagon could use the firm’s technology.
Anthropic wanted guarantees its Claude systems would not be used for purposes such as domestic surveillance in the US and operating autonomous weapons without human control.
In response, US president Donald Trump on Friday directed all US federal agencies to cease using Anthropic’s technology, saying he would “never allow a radical left, woke company to dictate how our great military fights and wins wars!”
Hours later, rival AI lab OpenAI (maker of ChatGPT) announced it had struck its own deal with the Department of Defense. The key difference appears to be that OpenAI permits “all lawful uses” of its tools, without specifying ethical lines OpenAI won’t cross.
What does this mean for military AI? Is it the end for the idea of “ethical AI” in warfare?"
"After Claude developer Anthropicwalked awayfrom a deal with the US Department of War over safety and security concerns, OpenAI has decided to sign an agreement with the military – and ChatGPT users are far from happy about it.
As reported by Windows Central, a growing number of people are canceling their ChatGPT subscriptions and switching to other AI chatbots instead, including Claude. A quick browse of social media or Reddit is enough to see that there's a growing backlash to the move.
Some Redditors are posting guides to extracting yourself and your data from ChatGPT, while others are accusing OpenAI of having "no ethics at all" and "selling their soul" by agreeing to allow their AI models to be used by the US military complex."
"Secretary of Defense Pete Hegseth ordered the cancellation of the Department of Defense’s ties with Columbia beginning in the 2026-27 academic year, arguing that Columbia and other universities are “woke breeding grounds of toxic indoctrination” in a Friday video posted on X.
In the video, Hegseth announced the “complete and immediate cancellation” of the DOD’s “attendance” at Columbia and other universities, marking the administration’s latest escalation against higher education. Friday’s announcement will also affect Columbia’s Ivy League peer institutions—Brown University, Princeton University, and Yale University—and the Massachusetts Institute of Technology, among others."
CEO Sam Altman claims military will not use AI product for autonomous killing systems or mass surveillance
"OpenAIsaid it had struck a deal with the Pentagon to supply AI to classifiedUS militarynetworks, hours afterDonald Trumpordered the government to stop using the services of one of the company’s main competitors.
Sam Altman, OpenAI’s CEO, announced the move on Friday night. It came after an agreement between Anthropic, a rival AI company that runs the Claude system, and the Trump administration broke down after Anthropic sought assurances its technology would not be used for mass surveillance – nor for autonomous weapons systems that can kill people without human input.
Announcing the deal, Altman insisted that OpenAI’s agreement with the government included assurances that it would not be used to those ends.
“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote on X. He added that the Pentagon “agrees with these principles, reflects them in law and policy, and we put them into our agreement”.
Altman also said he hoped the Pentagon would “offer these same terms to all AI companies” as a way to “de-escalate away from legal and governmental actions and toward reasonable agreements”."
"We spent the Cold War worrying mostly about military folly, and A.I. entered into our anxieties even then: the Soviet Doomsday Machine in “Dr. Strangelove,” the game-playing computer in “WarGames” and of course the fateful “Terminator” decision to make Skynet operational.
But for the last few years, as A.I. advances have concentrated potentially extraordinary power in the hands of a few companies and C.E.O.s — themselves embedded in a Bay Area culture of science-fiction dreams and apocalyptic fears — it’s become more natural to worry more about private power and ambition, about would-be A.I. god-kings rather than presidents and generals.
Until, that is, the current collision between the Department of Defense and Anthropic, the artificial intelligence pioneer, over whether Anthropic’s A.I. models should be bound by the company’s ethical constraints or made available for all uses the Pentagon might have in mind."
"OpenAI, the maker of ChatGPT, said on Friday that it had reached an agreement with the Pentagon to provide its artificial intelligence technologies for classified systems, just hours after President Trump ordered federal agencies to stop using A.I. technologymade by rival Anthropic.
Under the deal, OpenAI agreed to let the Pentagon use its A.I. systems for any lawful purpose, a term required by the Pentagon. But OpenAI also said it had found a way to ensure that its technologies would adhere to its safety principles by installing specific technical guardrails on its systems."
"President Trump on Friday ordered all federal agencies to stop using artificial intelligence technology made by Anthropic, a directive that could vastly complicate government intelligence analysis and defense work.
Writing on Truth Social, Mr. Trump used harsh words for Anthropic, describing it as a “radical Left AI company run by people who have no idea what the real World is all about.”
Shortly after Mr. Trump’s announcement, and 13 minutes after a Pentagon deadline, Defense Secretary Pete Hegseth designated the company a “supply-chain risk to national security.” The label means that no contractor or supplier that works with the military can do business with Anthropic.
The move is all but unheard-of, legal experts said. It strips an American company of its government work by using a process previously deployed only with foreign companies the United States considered security risks."
The Pentagon’s contract dispute with Anthropic is part of a wider clash about the use of artificial intelligence for national security and who decides on any safeguards.
"The fight between the Department of Defense and the artificial intelligence company Anthropic has ostensibly been abouta $200 million contractover the use of A.I. in classified systems.
But as the two sides careen toward a 5:01 p.m. Friday deadline over terms of the contract, far more is at stake.
Amid the legalese and heated rhetoric are questions being asked globally about how to use A.I., what the technology’s risks are and who gets to decide on setting any limits — the makers of A.I. or national governments.
Underlying it all is fear and awe over the dizzying pace of A.I. progress and the technology’s uncertain impact on society."
The A.I. firm had rejected military officials’ latest offer. Anthropic has until 5:01 p.m. on Friday to give them unrestricted access to its model.
"A standoff between the Pentagon and the artificial intelligence company Anthropic appeared to be deepening as the two sides hurtled toward a5:01 p.m. deadlineFriday that military officials gave the firm to either allow them unrestricted access to its most advanced model or face consequences.
Defense Department officials criticized Anthropic’s leader after the company on Thursday rejected their latest offer to settle the dispute. The Pentagon has threatened to either cut the company off from government business by declaring it a supply chain threat or force it to provide its frontier model without restrictions under the Defense Production Act.
Emil Michael, a top Pentagon official who oversees artificial intelligence, attacked Dario Amodei, the chief executive of Anthropic, who on Thursday released a statement about why the company would not agree to the Defense Department’s latest terms.
“It’s a shame that @DarioAmodei is a liar and has a God-complex,” Mr. Michael wrote late Thursday. “He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk. The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company.”"
"Anthropic, a company founded by OpenAI exiles worried about the dangers of AI, is loosening its core safety principle in response to competition.
Instead of self-imposed guardrails constraining its development of AI models, Anthropic is adopting a nonbinding safety framework that it says can and will change.
In a blog post Tuesday outlining its new policy, Anthropic said shortcomings in its two-year-old Responsible Scaling Policy could hinder its ability to compete in a rapidly growing AI market.
The announcement is surprising, because Anthropic has described itself as the AI company with a “soul.” It also comes the same week that Anthropic is fighting a significant battle with the Pentagon over AI red lines."
"Anthropic wants assurance that its models will not be used for autonomous weapons or to “spy on Americans en masse,” according to a report from Axios.
The DOD, by contrast, wants to use Anthropic’s models “for all lawful use cases” without limitation."
"A dispute between AI company Anthropic and the Pentagon over how the military can use the company’s technology has now gone public. Amid tense negotiations, Anthropic has reportedly called for limits on two key applications: mass surveillance and autonomous weapons. The Department of Defense, which Trump renamed the Department of War last year, wants the freedom to use the technology without those restrictions.
Caught in the middle is Palantir. The defense contractor provides the secure cloud infrastructure that allows the military to use Anthropic’s Claude model, but it has stayed quiet as tensions escalate. That’s even as the Pentagon, per Axios, threatens to designate Anthropic a “supply chain risk,” a move that could force Palantir to cut ties with one of its most important AI partners."
"Defense Secretary Pete Hegseth is "close" to cutting business ties withAnthropicand designating theAIcompany a "supply chain risk" — meaning anyone who wants to do business with the U.S. military has to cut ties with the company, a senior Pentagon official told Axios.
The senior official said: "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this."
Why it matters: That kind of penalty is usually reserved for foreign adversaries.
Chief Pentagon spokesman Sean Parnell told Axios: "The Department of War's relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people."
The big picture: Anthropic's Claude is the only AI model currently available in the military's classified systems, and is the world leader for many business applications. Pentagon officials heartily praise Claude's capabilities."