Showing posts with label DoD. Show all posts

Thursday, April 9, 2026

Judge Rejects Hegseth’s Second Attempt to Restrict Reporters at Pentagon; The New York Times, April 9, 2026

The New York Times; Judge Rejects Hegseth’s Second Attempt to Restrict Reporters at Pentagon

"A federal judge on Thursday rejected an attempt by the Pentagon to impose a new set of restrictions on journalists who hold credentials to cover the military complex, in another blow to Defense Secretary Pete Hegseth’s attempts to control the media.

The order, from Judge Paul Friedman of U.S. District Court for the District of Columbia, declared that the new policy was essentially unconstitutional. With its new rules, the Pentagon changed the wording of a provision barring journalists from seeking confidential information from government sources.

Judge Friedman added that the Pentagon had “failed” to reinstate the press passes of several New York Times reporters.

It was the second time that Judge Friedman had tossed parts of the Pentagon’s press policy. He ruled last month that major parts of the previous policy, which also sought to restrict certain journalistic activities, were unconstitutional in a case brought by The Times."

Monday, April 6, 2026

Ousted Army Chief of Staff Gen. Randy George says U.S. soldiers deserve "courageous leaders of character" in outgoing email; CBS News, April 4, 2026

Lucia I. Suarez Sang, CBS News; Ousted Army Chief of Staff Gen. Randy George says U.S. soldiers deserve "courageous leaders of character" in outgoing email

"Ousted Army Chief of Staff Gen. Randy George told Pentagon officials in an outgoing email that U.S. soldiers deserve "courageous leaders of character," after Defense Secretary Pete Hegseth asked him to step down and take immediate retirement.

CBS News exclusively reported earlier this week on the general's ousting, with one source saying Hegseth wants someone in the role who will implement his and President Trump's vision for the Army.

An outgoing email, attributed to George and confirmed as authentic by CBS News on Saturday, circulated online after his ousting. A U.S. official told CBS News that George sent the email to Driscoll, the undersecretary and assistant secretary of the Army, as well as to the three- and four-star generals and officers on his staff.

"It has been the greatest privilege to serve beside you and lead Soldiers in support of our country," he wrote. "I know you'll all continue to stay laser-focused on the mission, continue innovating, and relentlessly cut through the bureaucracy to get our warfighters what they need to win on the modern battlefield."

He added: "Our soldiers are truly the best in the world – they deserve tough training and courageous leaders of character. I have no doubt you will all continue to lead with courage, character, and grit.""

Sunday, April 5, 2026

The Catholic Priest Who Helped Write Anthropic’s A.I. Ethics Code; Observer, March 31, 2026

Observer; The Catholic Priest Who Helped Write Anthropic’s A.I. Ethics Code

"Father Brendan McGuire is writing a novel about a disenchanted monk and his A.I. companion. He’s doing it with Claude. That detail—a Catholic priest using Anthropic’s chatbot to explore questions of faith and artificial consciousness—tells you something about where Silicon Valley’s moral reckoning has arrived. McGuire, 60, leads St. Simon Catholic Parish in Los Altos, Calif., a congregation that counts some of the Valley’s A.I. researchers among its members. Earlier this year, he and a group of faith leaders helped Anthropic shape the Claude Constitution, the set of guiding principles governing how its A.I. behaves.

He is not, in other words, an outside critic. He is something more complicated: a true believer in both God and technology, trying to hold them in the same hand. “I left the tech industry, but it never really left me,” McGuire told Observer...

McGuire wasn’t Anthropic’s only religious collaborator. Bishop Paul Tighe of the Vatican’s Dicastery for Culture and Education and Brian Patrick Green, a technology ethics director at Santa Clara University, also reviewed the Claude Constitution. Green and other Catholic scholars recently filed a federal court brief supporting Anthropic in its lawsuit against the U.S. government, which challenges the company’s effective blacklisting by the Pentagon after it refused to allow its A.I. systems to be used for autonomous warfare or domestic surveillance. The brief praised those ethical limits as “minimal standards of ethical conduct for technical progress.”...

Anthropic says its engagement with religious voices—part of a broader effort to engage a wide variety of communities to keep pace with technological acceleration—is only a beginning. The company plans to expand outreach beyond Catholic institutions to other religious leaders going forward."

Tuesday, March 31, 2026

Judge appears skeptical of Pentagon’s latest press restrictions: ‘Is this a Catch-22?’; Politico, March 30, 2026

JOSH GERSTEIN, Politico; Judge appears skeptical of Pentagon’s latest press restrictions: ‘Is this a Catch-22?’

"A federal judge expressed skepticism Monday about the Pentagon’s new press access policy after invalidating an earlier version that prompted almost all holders of media credentials to turn them in.

U.S. District Judge Paul Friedman convened a hearing in response to complaints from The New York Times that the Pentagon is defying his earlier order to restore access by subsequently shutting down the decades-old Correspondents Corridor and giving journalists unescorted access only to a library at the margins of the complex."

Monday, March 30, 2026

Judge Blocks Pentagon Move Against Anthropic in AI Ethics Dispute; National Catholic Register, March 30, 2026

Jonah McKeown, National Catholic Register; Judge Blocks Pentagon Move Against Anthropic in AI Ethics Dispute

"A federal judge has temporarily blocked the Department of Defense from labeling American artificial intelligence (AI) company Anthropic a “supply chain risk,” a designation the Pentagon gave the company after Anthropic refused to allow the military to use its products for autonomous weaponry and mass surveillance.

The case has drawn interest from prominent Catholics due to the relative novelty of a major AI developer taking a stand in favor of ethical and socially responsible safeguards around the technology in the face of government coercion.

In a March 26 ruling, which is not a final decision in the case, Judge Rita Lin of the U.S. District Court for the Northern District of California said Anthropic has a high likelihood of ultimately winning its case and proving that the government’s “supply chain risk” designation violated, among other laws, the First and Fifth Amendments."

Friday, March 27, 2026

Hegseth Strikes Two Black and Two Female Officers From Promotion List; The New York Times, March 27, 2026

Greg Jaffe, Eric Schmitt and Helene Cooper, The New York Times; Hegseth Strikes Two Black and Two Female Officers From Promotion List

Defense Secretary Pete Hegseth’s highly unusual decision to remove officers from a one-star promotion list has spurred allegations of racial and gender bias.

"Defense Secretary Pete Hegseth is blocking the promotion of four Army officers to be one-star generals, a highly unusual move that has prompted some senior military officials to question whether the officers are being singled out because of their race or gender.

Two of the officers targeted by Mr. Hegseth are Black and two are women on a promotion list that consists of about three dozen officers, most of whom are white men, senior military officials said.

Mr. Hegseth had been pressing senior Army leaders, including Army Secretary Daniel P. Driscoll, for months to remove the officers’ names, military officials said. But Mr. Driscoll, citing the officers’ decades-long records of exemplary service, had repeatedly refused.

Earlier this month, Mr. Hegseth broke the logjam by unilaterally striking the officers’ names from the list, though it is not clear he has the legal authority to do so. The list is currently being reviewed by the White House, which is expected to send it to the Senate for final approval. A few female and Black officers remain on the list, military officials said.

It is exceedingly rare that a one-star list draws such intense scrutiny from a defense secretary. The battle highlights the bitter rifts opened by Mr. Hegseth’s campaign to reverse policies that he says are prejudiced against white officers.

Mr. Hegseth has said repeatedly that he is determined to change a culture corrupted by “foolish,” “reckless” and “woke” leaders from previous administrations. But his heavy scrutiny, especially of female and minority officers, is eroding confidence in a promotion system that is supposed to be apolitical and merit based, his critics have said.

This article is based on interviews with 11 current and former military and administration officials who requested anonymity to discuss sensitive personnel matters."

Thursday, March 26, 2026

Judge blocks Pentagon order branding Anthropic a national security risk; The Washington Post, March 26, 2026

The Washington Post; Judge blocks Pentagon order branding Anthropic a national security risk

The artificial intelligence lab argued that the Trump administration was punishing it for speaking about the risks of its technology.
"A federal judge in San Francisco blocked a Pentagon order Thursday labeling the artificial intelligence company Anthropic a national security risk, saying officials had likely violated the law and retaliated against the firm for speaking publicly about how it wanted its technology to be used.
“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” District Court Judge Rita F. Lin wrote.
The immediate practical implications of the ruling are unclear, but it represents a clear victory for the AI lab, which has been involved in a bitter power struggle with the Defense Department over the use of its Claude system by the military."

Tuesday, March 24, 2026

After losing in court, the Pentagon moves to restrict press access again; CNN, March 23, 2026

CNN; After losing in court, the Pentagon moves to restrict press access again

"Undeterred by a federal judge’s recent rebuke, the Pentagon has announced another set of restrictions on the press corps that regularly covers the US military.

The changes will further reduce day-to-day press access, ultimately eroding the public’s understanding of what the military is doing.

Under the new rules, announced Monday, the “Correspondents’ Corridor” inside the Pentagon building — where journalists have worked for decades — has been shut down. The Pentagon says replacement workspace will be set up at a faraway “annex” location at some point.

Some longtime Pentagon reporters immediately suggested that the changes were retaliatory, coming three days after The New York Times won a permanent injunction against an earlier set of Pentagon restrictions. In that order, senior US District Judge Paul Friedman said the Pentagon had violated the First Amendment."

Saturday, March 14, 2026

Anthropic-Pentagon battle shows how big tech has reversed course on AI and war; The Guardian, March 13, 2026

The Guardian; Anthropic-Pentagon battle shows how big tech has reversed course on AI and war

"The standoff between Anthropic and the Pentagon has forced the tech industry to once again grapple with the question of how its products are used for war – and what lines it will not cross. Amid Silicon Valley’s rightward shift under Donald Trump and the signing of lucrative defense contracts, big tech’s answer is looking very different than it did even less than a decade ago."

Tuesday, March 10, 2026

Anthropic sues Pentagon over rare "supply chain risk" label; Axios, March 9, 2026

Maria Curi, Axios; Anthropic sues Pentagon over rare "supply chain risk" label

"Anthropic on Monday sued the Pentagon, alleging its designation as a "supply chain risk" violates the company's First Amendment rights and exceeds the government's authority.

Why it matters: Supply chain risk designations are usually reserved for foreign adversaries that pose a national security risk — a punishment that could be hard for the government to square as it relied on Claude for operations in Iran.

State of play: The Pentagon last week designated Anthropic a supply chain risk, meaning companies must stop using Claude in cases directly tied to the department.

  • President Trump also told the federal government in a Truth Social post to stop using Anthropic's technology, and some agencies have begun offboarding the tools.

Anthropic is asking courts to undo the supply chain risk designation, block its enforcement and require federal agencies to withdraw directives to drop the company.
  • The company says its two lawsuits are not meant to force the government to work with Anthropic, but prevent officials from blacklisting companies over policy disagreements."

Sunday, March 8, 2026

Anthropic’s Ethical Stand Could Be Paying Off; The Atlantic, March 7, 2026

Ken Harbaugh, The Atlantic; Anthropic’s Ethical Stand Could Be Paying Off

"The events of the past week reminded me of my early days as a Navy pilot nearly three decades ago. One of my first tasks was to sign a document pledging never to surveil American citizens. By the time of the 9/11 attacks, I was an aircraft commander, leading combat-reconnaissance aircrews that gathered large-scale intelligence and informed battlefield targeting decisions. I took for granted that somewhere along those decision chains, a human being was in the loop.

I could not have defined artificial intelligence then, but I understood instinctively that a person, not a machine, would bear the weight of life-and-death choices. This was not a bureaucratic consideration. It was a hard line that those of us in uniform were expected to hold.

In the standoff between Anthropic and the Pentagon, a private company was forced to hold the line against its own government. In doing so, Anthropic may have earned something more valuable than the contract it lost. In an industry where trust is the scarcest resource, Anthropic just banked a substantial deposit."

Tuesday, March 3, 2026

US Military Using Claude to Select Targets in Iran Strikes; Futurism, March 2, 2026

Futurism; US Military Using Claude to Select Targets in Iran Strikes

"As the Wall Street Journal reported as the attacks unfolded, the military strike force had help from Anthropic’s Claude chatbot in selecting its targets.

According to the paper, Anthropic’s large language model, Claude, is the key “AI tool” used by US Central Command in the Middle East. Its tasks include assessing intelligence, simulating war games, and even identifying military targets — in short, helping military leaders plan attacks that have already claimed hundreds of lives.

Anthropic’s role in the devastating attacks might come as news for anyone who thought the company’s ethical redlines precluded it from any military work whatsoever. The company and its CEO, Dario Amodei, have been roiled in a messy conflict with the Trump administration over two particular moral boundaries: the use of Claude for surveillance of US citizens, and for fully-autonomous, lethal weaponry.

It appears that using Claude to select targets, though, isn’t brushing up against the bot’s ethical guardrails. 

That’s striking, because Anthropic has spent the latter part of February embroiled in conflict with the Pentagon over the use of Claude."

The Pentagon strongarmed AI firms before Iran strikes – in dark news for the future of ‘ethical AI’; The Conversation, March 1, 2026

Lecturer, International Relations, Deakin University, The Conversation; The Pentagon strongarmed AI firms before Iran strikes – in dark news for the future of ‘ethical AI’

"In the leadup to the weekend’s US and Israeli attacks on Iran, the US Department of Defense was locked in tense negotiations with artificial intelligence (AI) company Anthropic over exactly how the Pentagon could use the firm’s technology.

Anthropic wanted guarantees its Claude systems would not be used for purposes such as domestic surveillance in the US and operating autonomous weapons without human control. 

In response, US president Donald Trump on Friday directed all US federal agencies to cease using Anthropic’s technology, saying he would “never allow a radical left, woke company to dictate how our great military fights and wins wars!”

Hours later, rival AI lab OpenAI (maker of ChatGPT) announced it had struck its own deal with the Department of Defense. The key difference appears to be that OpenAI permits “all lawful uses” of its tools, without specifying ethical lines OpenAI won’t cross.

What does this mean for military AI? Is it the end for the idea of “ethical AI” in warfare?"

Monday, March 2, 2026

'No ethics at all': the 'cancel ChatGPT' trend is growing after OpenAI signs a deal with the US military; TechRadar, March 1, 2026

TechRadar; 'No ethics at all': the 'cancel ChatGPT' trend is growing after OpenAI signs a deal with the US military

"After Claude developer Anthropic walked away from a deal with the US Department of War over safety and security concerns, OpenAI has decided to sign an agreement with the military – and ChatGPT users are far from happy about it.

As reported by Windows Central, a growing number of people are canceling their ChatGPT subscriptions and switching to other AI chatbots instead, including Claude. A quick browse of social media or Reddit is enough to see that there's a growing backlash to the move.

Some Redditors are posting guides to extracting yourself and your data from ChatGPT, while others are accusing OpenAI of having "no ethics at all" and "selling their soul" by agreeing to allow their AI models to be used by the US military complex."

Sunday, March 1, 2026

Defense Secretary Pete Hegseth orders cancellation of DOD ties with Columbia beginning in 2026-27 academic year; Columbia Spectator, February 27, 2026

JOSEPH ZULOAGA AND DORA GAO, Columbia Spectator; Defense Secretary Pete Hegseth orders cancellation of DOD ties with Columbia beginning in 2026-27 academic year

"Secretary of Defense Pete Hegseth ordered the cancellation of the Department of Defense’s ties with Columbia beginning in the 2026-27 academic year, arguing that Columbia and other universities are “woke breeding grounds of toxic indoctrination” in a Friday video posted on X.

In the video, Hegseth announced the “complete and immediate cancellation” of the DOD’s “attendance” at Columbia and other universities, marking the administration’s latest escalation against higher education. Friday’s announcement will also affect Columbia’s Ivy League peer institutions—Brown University, Princeton University, and Yale University—and the Massachusetts Institute of Technology, among others."

OpenAI to work with Pentagon after Anthropic dropped by Trump over company’s ethics concerns; The Guardian, February 28, 2026

The Guardian; OpenAI to work with Pentagon after Anthropic dropped by Trump over company’s ethics concerns

CEO Sam Altman claims military will not use AI product for autonomous killing systems or mass surveillance

"OpenAI said it had struck a deal with the Pentagon to supply AI to classified US military networks, hours after Donald Trump ordered the government to stop using the services of one of the company’s main competitors.

Sam Altman, OpenAI’s CEO, announced the move on Friday night. It came after an agreement between Anthropic, a rival AI company that runs the Claude system, and the Trump administration broke down after Anthropic sought assurances its technology would not be used for mass surveillance – nor for autonomous weapons systems that can kill people without human input.

Announcing the deal, Altman insisted that OpenAI’s agreement with the government included assurances that it would not be used to those ends.

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote on X. He added that the Pentagon “agrees with these principles, reflects them in law and policy, and we put them into our agreement”.

Altman also said he hoped the Pentagon would “offer these same terms to all AI companies” as a way to “de-escalate away from legal and governmental actions and toward reasonable agreements”."

Saturday, February 28, 2026

If A.I. Is a Weapon, Who Should Control It?; The New York Times, February 28, 2026

The New York Times; If A.I. Is a Weapon, Who Should Control It?

"We spent the Cold War worrying mostly about military folly, and A.I. entered into our anxieties even then: the Soviet Doomsday Machine in “Dr. Strangelove,” the game-playing computer in “WarGames” and of course the fateful “Terminator” decision to make Skynet operational.

But for the last few years, as A.I. advances have concentrated potentially extraordinary power in the hands of a few companies and C.E.O.s — themselves embedded in a Bay Area culture of science-fiction dreams and apocalyptic fears — it’s become more natural to worry more about private power and ambition, about would-be A.I. god-kings rather than presidents and generals.

Until, that is, the current collision between the Department of Defense and Anthropic, the artificial intelligence pioneer, over whether Anthropic’s A.I. models should be bound by the company’s ethical constraints or made available for all uses the Pentagon might have in mind."