An AI company that says it doesn't want its tools used for certain weapons is now hiring someone who knows those weapons inside out. Anthropic is seeking a specialist in chemical weapons and high-yield explosives to help keep its chatbot Claude from assisting in the creation of chemical, radiological, or explosive devices, per a LinkedIn job ad flagged by the BBC. The goal is to block "catastrophic misuse" of Claude. The role calls for at least five years' experience in weapons or explosives defense and familiarity with dirty bombs and other radiological threats. Rival OpenAI is advertising a similar post focused on biological and chemical risks, with a salary that can reach $455,000.
The move is feeding a debate over whether giving AI systems access to such knowledge, however tightly controlled, is itself dangerous. "Is it ever safe to use AI systems to handle sensitive chemicals and explosives information?" asked tech researcher Stephanie Hare, who notes there's no global framework governing this kind of work. AI systems and warfare are becoming increasingly intertwined, per the Indian Express. Yet Anthropic has clashed with the US government, suing the Pentagon after being labeled a "supply chain risk" for demanding limits on how its systems could be used in autonomous weapons. OpenAI says it backs Anthropic's stance, even as it pursues its own US government contract.