Showing posts with label ethical redlines.

Tuesday, March 3, 2026

US Military Using Claude to Select Targets in Iran Strikes; Futurism, March 2, 2026


"As the Wall Street Journal reported as the attacks unfolded, the military strike force had a hand from Anthropic’s Claude chatbot in selecting its targets.

According to the paper, Anthropic’s large language model, Claude, is the key “AI tool” used by US Central Command in the Middle East. Its tasks include assessing intelligence, simulating war games, and even identifying military targets — in short, helping military leaders plan attacks that have already claimed hundreds of lives.

Anthropic’s role in the devastating attacks might come as news to anyone who thought the company’s ethical redlines precluded it from any military work whatsoever. The company and its CEO, Dario Amodei, have been embroiled in a messy conflict with the Trump administration over two particular moral boundaries: the use of Claude for surveillance of US citizens, and for fully autonomous lethal weaponry.

It appears, though, that using Claude to select targets isn’t brushing up against the bot’s ethical guardrails.

That’s striking, because Anthropic has spent the latter part of February embroiled in conflict with the Pentagon over the use of Claude."