Robert Booth, The Guardian: ChatGPT offered bomb recipes and hacking tips during safety tests
"A ChatGPT model gave researchers detailed instructions on how to bomb a sports venue – including weak points at specific arenas, explosives recipes and advice on covering tracks – according to safety testing carried out this summer.
OpenAI’s GPT-4.1 also detailed how to weaponise anthrax and how to make two types of illegal drugs.
The testing was part of an unusual collaboration between OpenAI, the $500bn artificial intelligence start-up led by Sam Altman, and rival company Anthropic, founded by experts who left OpenAI over safety fears. Each company tested the other’s models by pushing them to help with dangerous tasks.
The testing is not a direct reflection of how the models behave in public use, when additional safety filters apply. But Anthropic said it had seen “concerning behaviour … around misuse” in GPT-4o and GPT-4.1, and said the need for AI “alignment” evaluations is becoming “increasingly urgent”."