Showing posts with label DoD. Show all posts

Tuesday, March 31, 2026

Judge appears skeptical of Pentagon’s latest press restrictions: ‘Is this a Catch-22?’; Politico, March 30, 2026

JOSH GERSTEIN, Politico; Judge appears skeptical of Pentagon’s latest press restrictions: ‘Is this a Catch-22?’

"A federal judge expressed skepticism Monday about the Pentagon’s new press access policy after invalidating an earlier version that prompted almost all holders of media credentials to turn them in.

U.S. District Judge Paul Friedman convened a hearing in response to complaints from The New York Times that the Pentagon is defying his earlier order to restore access by subsequently shutting down the decades-old Correspondents Corridor and giving journalists unescorted access only to a library at the margins of the complex."

Monday, March 30, 2026

Judge Blocks Pentagon Move Against Anthropic in AI Ethics Dispute; National Catholic Register, March 30, 2026

Jonah McKeown, National Catholic Register; Judge Blocks Pentagon Move Against Anthropic in AI Ethics Dispute

"A federal judge has temporarily blocked the Department of Defense from labeling American artificial intelligence (AI) company Anthropic a “supply chain risk,” a designation the Pentagon gave the company after Anthropic refused to allow the military to use its products for autonomous weaponry and mass surveillance.

The case has drawn interest from prominent Catholics due to the relative novelty of a major AI developer taking a stand in favor of ethical and socially responsible safeguards around the technology in the face of government coercion.

In a March 26 ruling, which is not a final decision in the case, Judge Rita Lin of the U.S. District Court for the Northern District of California said Anthropic has a high likelihood of ultimately winning its case and proving that the government’s “supply chain risk” designation violated, among other laws, the First and Fifth Amendments."

Friday, March 27, 2026

Hegseth Strikes Two Black and Two Female Officers From Promotion List; The New York Times, March 27, 2026

Greg Jaffe, Eric Schmitt and Helene Cooper, The New York Times; Hegseth Strikes Two Black and Two Female Officers From Promotion List

Defense Secretary Pete Hegseth’s highly unusual decision to remove officers from a one-star promotion list has spurred allegations of racial and gender bias.

"Defense Secretary Pete Hegseth is blocking the promotion of four Army officers to be one-star generals, a highly unusual move that has prompted some senior military officials to question whether the officers are being singled out because of their race or gender.

Two of the officers targeted by Mr. Hegseth are Black and two are women on a promotion list that consists of about three dozen officers, most of whom are white men, senior military officials said.

Mr. Hegseth had been pressing senior Army leaders, including Army Secretary Daniel P. Driscoll, for months to remove the officers’ names, military officials said. But Mr. Driscoll, citing the officers’ decades-long records of exemplary service, had repeatedly refused.

Earlier this month, Mr. Hegseth broke the logjam by unilaterally striking the officers’ names from the list, though it is not clear he has the legal authority to do so. The list is currently being reviewed by the White House, which is expected to send it to the Senate for final approval. A few female and Black officers remain on the list, military officials said.

It is exceedingly rare that a one-star list draws such intense scrutiny from a defense secretary. The battle highlights the bitter rifts opened by Mr. Hegseth’s campaign to reverse policies that he says are prejudiced against white officers.

Mr. Hegseth has said repeatedly that he is determined to change a culture corrupted by “foolish,” “reckless” and “woke” leaders from previous administrations. But his heavy scrutiny, especially of female and minority officers, is eroding confidence in a promotion system that is supposed to be apolitical and merit based, his critics have said.

This article is based on interviews with 11 current and former military and administration officials who requested anonymity to discuss sensitive personnel matters."

Thursday, March 26, 2026

Judge blocks Pentagon order branding Anthropic a national security risk; The Washington Post, March 26, 2026

The Washington Post; Judge blocks Pentagon order branding Anthropic a national security risk

The artificial intelligence lab argued that the Trump administration was punishing it for speaking about the risks of its technology.


"A federal judge in San Francisco blocked a Pentagon order Thursday labeling the artificial intelligence company Anthropic a national security risk, saying officials had likely violated the law and retaliated against the firm for speaking publicly about how it wanted its technology to be used.


“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” District Court Judge Rita F. Lin wrote.


The immediate practical implications of the ruling are unclear, but it represents a clear victory for the AI lab, which has been involved in a bitter power struggle with the Defense Department over the use of its Claude system by the military."

Tuesday, March 24, 2026

After losing in court, the Pentagon moves to restrict press access again; CNN, March 23, 2026

CNN; After losing in court, the Pentagon moves to restrict press access again

"Undeterred by a federal judge’s recent rebuke, the Pentagon has announced another set of restrictions on the press corps that regularly covers the US military.

The changes will further reduce day-to-day press access, ultimately eroding the public’s understanding of what the military is doing.

Under the new rules, announced Monday, the “Correspondents’ Corridor” inside the Pentagon building — where journalists have worked for decades — has been shut down. The Pentagon says replacement workspace will be set up at a faraway “annex” location at some point.

Some longtime Pentagon reporters immediately suggested that the changes were retaliatory, coming three days after The New York Times won a permanent injunction against an earlier set of Pentagon restrictions. In that order, senior US District Judge Paul Friedman said the Pentagon had violated the First Amendment."

Saturday, March 14, 2026

Anthropic-Pentagon battle shows how big tech has reversed course on AI and war; The Guardian, March 13, 2026

The Guardian; Anthropic-Pentagon battle shows how big tech has reversed course on AI and war

"The standoff between Anthropic and the Pentagon has forced the tech industry to once again grapple with the question of how its products are used for war – and what lines it will not cross. Amid Silicon Valley’s rightward shift under Donald Trump and the signing of lucrative defense contracts, big tech’s answer is looking very different than it did even less than a decade ago."

Tuesday, March 10, 2026

Anthropic sues Pentagon over rare "supply chain risk" label; Axios, March 9, 2026

Maria Curi, Axios; Anthropic sues Pentagon over rare "supply chain risk" label

"Anthropic on Monday sued the Pentagon, alleging its designation as a "supply chain risk" violates the company's First Amendment rights and exceeds the government's authority.

Why it matters: Supply chain risk designations are usually reserved for foreign adversaries that pose a national security risk — a punishment that could be hard for the government to square, as it relied on Claude for operations in Iran.

State of play: The Pentagon last week designated Anthropic a supply chain risk, meaning companies must stop using Claude in cases directly tied to the department.

  • President Trump also told the federal government in a Truth Social post to stop using Anthropic's technology, and some agencies have begun offboarding the tools.

Anthropic is asking courts to undo the supply chain risk designation, block its enforcement and require federal agencies to withdraw directives to drop the company.


  • The company says its two lawsuits are not meant to force the government to work with Anthropic, but prevent officials from blacklisting companies over policy disagreements."

Sunday, March 8, 2026

Anthropic’s Ethical Stand Could Be Paying Off; The Atlantic, March 7, 2026

 Ken Harbaugh, The Atlantic; Anthropic’s Ethical Stand Could Be Paying Off

"The events of the past week reminded me of my early days as a Navy pilot nearly three decades ago. One of my first tasks was to sign a document pledging never to surveil American citizens. By the time of the 9/11 attacks, I was an aircraft commander, leading combat-reconnaissance aircrews that gathered large-scale intelligence and informed battlefield targeting decisions. I took for granted that somewhere along those decision chains, a human being was in the loop.

I could not have defined artificial intelligence then, but I understood instinctively that a person, not a machine, would bear the weight of life-and-death choices. This was not a bureaucratic consideration. It was a hard line that those of us in uniform were expected to hold.

In the standoff between Anthropic and the Pentagon, a private company was forced to hold the line against its own government. In doing so, Anthropic may have earned something more valuable than the contract it lost. In an industry where trust is the scarcest resource, Anthropic just banked a substantial deposit."

Tuesday, March 3, 2026

US Military Using Claude to Select Targets in Iran Strikes; Futurism, March 2, 2026

Futurism; US Military Using Claude to Select Targets in Iran Strikes

"As the Wall Street Journal reported as the attacks unfolded, the military strike force had help selecting its targets from Anthropic’s Claude chatbot.

According to the paper, Anthropic’s large language model, Claude, is the key “AI tool” used by US Central Command in the Middle East. Its tasks include assessing intelligence, simulated war games, and even identifying military targets — in short, helping military leaders plan attacks that have already claimed hundreds of lives.

Anthropic’s role in the devastating attacks might come as news for anyone who thought the company’s ethical redlines precluded it from any military work whatsoever. The company and its CEO, Dario Amodei, have been embroiled in a messy conflict with the Trump administration over two particular moral boundaries: the use of Claude for surveillance of US citizens, and for fully autonomous, lethal weaponry.

It appears that using Claude to select targets, though, isn’t brushing up against the bot’s ethical guardrails. 

That’s striking, because Anthropic has spent the latter part of February embroiled in conflict with the Pentagon over the use of Claude."

The Pentagon strongarmed AI firms before Iran strikes – in dark news for the future of ‘ethical AI’; The Conversation, March 1, 2026

Lecturer in International Relations, Deakin University, The Conversation; The Pentagon strongarmed AI firms before Iran strikes – in dark news for the future of ‘ethical AI’

"In the leadup to the weekend’s US and Israeli attacks on Iran, the US Department of Defense was locked in tense negotiations with artificial intelligence (AI) company Anthropic over exactly how the Pentagon could use the firm’s technology.

Anthropic wanted guarantees its Claude systems would not be used for purposes such as domestic surveillance in the US and operating autonomous weapons without human control. 

In response, US president Donald Trump on Friday directed all US federal agencies to cease using Anthropic’s technology, saying he would “never allow a radical left, woke company to dictate how our great military fights and wins wars!”

Hours later, rival AI lab OpenAI (maker of ChatGPT) announced it had struck its own deal with the Department of Defense. The key difference appears to be that OpenAI permits “all lawful uses” of its tools, without specifying ethical lines OpenAI won’t cross.

What does this mean for military AI? Is it the end for the idea of “ethical AI” in warfare?"

Monday, March 2, 2026

'No ethics at all': the 'cancel ChatGPT' trend is growing after OpenAI signs a deal with the US military; TechRadar, March 1, 2026

TechRadar; 'No ethics at all': the 'cancel ChatGPT' trend is growing after OpenAI signs a deal with the US military

"After Claude developer Anthropic walked away from a deal with the US Department of War over safety and security concerns, OpenAI has decided to sign an agreement with the military – and ChatGPT users are far from happy about it.

As reported by Windows Central, a growing number of people are canceling their ChatGPT subscriptions and switching to other AI chatbots instead, including Claude. A quick browse of social media or Reddit is enough to see that there's a growing backlash to the move.

Some Redditors are posting guides to extracting yourself and your data from ChatGPT, while others are accusing OpenAI of having "no ethics at all" and "selling their soul" by agreeing to allow their AI models to be used by the US military complex."

Sunday, March 1, 2026

Defense Secretary Pete Hegseth orders cancellation of DOD ties with Columbia beginning in 2026-27 academic year; Columbia Spectator, February 27, 2026

JOSEPH ZULOAGA AND DORA GAO, Columbia Spectator; Defense Secretary Pete Hegseth orders cancellation of DOD ties with Columbia beginning in 2026-27 academic year

"Secretary of Defense Pete Hegseth ordered the cancellation of the Department of Defense’s ties with Columbia beginning in the 2026-27 academic year, arguing that Columbia and other universities are “woke breeding grounds of toxic indoctrination” in a Friday video posted on X.

In the video, Hegseth announced the “complete and immediate cancellation” of the DOD’s “attendance” at Columbia and other universities, marking the administration’s latest escalation against higher education. Friday’s announcement will also affect Columbia’s Ivy League peer institutions—Brown University, Princeton University, and Yale University—and the Massachusetts Institute of Technology, among others."

OpenAI to work with Pentagon after Anthropic dropped by Trump over company’s ethics concerns; The Guardian, February 28, 2026

The Guardian; OpenAI to work with Pentagon after Anthropic dropped by Trump over company’s ethics concerns

CEO Sam Altman claims military will not use AI product for autonomous killing systems or mass surveillance

"OpenAI said it had struck a deal with the Pentagon to supply AI to classified US military networks, hours after Donald Trump ordered the government to stop using the services of one of the company’s main competitors.

Sam Altman, OpenAI’s CEO, announced the move on Friday night. It came after an agreement between Anthropic, a rival AI company that runs the Claude system, and the Trump administration broke down after Anthropic sought assurances its technology would not be used for mass surveillance – nor for autonomous weapons systems that can kill people without human input.

Announcing the deal, Altman insisted that OpenAI’s agreement with the government included assurances that it would not be used to those ends.

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote on X. He added that the Pentagon “agrees with these principles, reflects them in law and policy, and we put them into our agreement”.

Altman also said he hoped the Pentagon would “offer these same terms to all AI companies” as a way to “de-escalate away from legal and governmental actions and toward reasonable agreements”."

Saturday, February 28, 2026

If A.I. Is a Weapon, Who Should Control It?; The New York Times, February 28, 2026

The New York Times; If A.I. Is a Weapon, Who Should Control It?

"We spent the Cold War worrying mostly about military folly, and A.I. entered into our anxieties even then: the Soviet Doomsday Machine in “Dr. Strangelove,” the game-playing computer in “WarGames” and of course the fateful “Terminator” decision to make Skynet operational.

But for the last few years, as A.I. advances have concentrated potentially extraordinary power in the hands of a few companies and C.E.O.s — themselves embedded in a Bay Area culture of science-fiction dreams and apocalyptic fears — it’s become more natural to worry more about private power and ambition, about would-be A.I. god-kings rather than presidents and generals.

Until, that is, the current collision between the Department of Defense and Anthropic, the artificial intelligence pioneer, over whether Anthropic’s A.I. models should be bound by the company’s ethical constraints or made available for all uses the Pentagon might have in mind."

OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash; The New York Times, February 27, 2026

The New York Times; OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash

"OpenAI, the maker of ChatGPT, said on Friday that it had reached an agreement with the Pentagon to provide its artificial intelligence technologies for classified systems, just hours after President Trump ordered federal agencies to stop using A.I. technology made by rival Anthropic.

Under the deal, OpenAI agreed to let the Pentagon use its A.I. systems for any lawful purpose, a term required by the Pentagon. But OpenAI also said it had found a way to ensure that its technologies would adhere to its safety principles by installing specific technical guardrails on its systems."

Friday, February 27, 2026

Trump Orders Government to Stop Using Anthropic After Pentagon Standoff; The New York Times, February 27, 2026

Julian E. Barnes, The New York Times; Trump Orders Government to Stop Using Anthropic After Pentagon Standoff

"President Trump on Friday ordered all federal agencies to stop using artificial intelligence technology made by Anthropic, a directive that could vastly complicate government intelligence analysis and defense work.

Writing on Truth Social, Mr. Trump used harsh words for Anthropic, describing it as a “radical Left AI company run by people who have no idea what the real World is all about.”

Shortly after Mr. Trump’s announcement, and 13 minutes after a Pentagon deadline, Defense Secretary Pete Hegseth designated the company a “supply-chain risk to national security.” The label means that no contractor or supplier that works with the military can do business with Anthropic.

The move is all but unheard-of, legal experts said. It strips an American company of its government work by using a process previously deployed only with foreign companies the United States considered security risks."

Pentagon Standoff Is a Decisive Moment for How A.I. Will Be Used in War; The New York Times, February 27, 2026

Adam Satariano and Julian E. Barnes, The New York Times; Pentagon Standoff Is a Decisive Moment for How A.I. Will Be Used in War

The Pentagon’s contract dispute with Anthropic is part of a wider clash about the use of artificial intelligence for national security and who decides on any safeguards.

"The fight between the Department of Defense and the artificial intelligence company Anthropic has ostensibly been about a $200 million contract over the use of A.I. in classified systems.

But as the two sides careen toward a 5:01 p.m. Friday deadline over terms of the contract, far more is at stake.

Amid the legalese and heated rhetoric are questions being asked globally about how to use A.I., what the technology’s risks are and who gets to decide on setting any limits — the makers of A.I. or national governments.

Underlying it all is fear and awe over the dizzying pace of A.I. progress and the technology’s uncertain impact on society."