Employees of OpenAI and Google have stepped up to support Anthropic in its legal battle after the company was blacklisted by the US Department of War. The move comes after Anthropic refused to allow its technology to be used for domestic mass surveillance or fully autonomous weapons.
Challenging an Unprecedented Designation
Anthropic filed two lawsuits contesting the government’s authority to label it a “supply chain risk to national security”—a designation previously reserved for foreign adversaries. The company argues that this label is arbitrary and undermines trust in the AI industry.
Industry Voices Rally
More than 30 engineers and researchers from OpenAI and Google submitted a legal brief supporting Anthropic’s stance. They emphasized that current AI systems cannot safely manage lethal autonomous actions or domestic surveillance without human oversight.
Open Letters and Public Support
In addition, nearly 900 employees of Google and OpenAI signed an open letter urging their leadership to resist government demands to deploy AI in “red line” scenarios. The letter stresses the importance of prioritizing safety over competitive advantage.
The Future of AI Oversight
The conflict highlights the growing tension between AI innovation and national security concerns. While the Pentagon and the US Department of War push for broader access to advanced AI tools, industry leaders insist that certain safeguards remain non-negotiable.