
Outside OpenAI’s headquarters, a handful of people gathered on Monday holding pieces of colorful chalk. They got down on their knees and started writing messages on the sidewalk: “Stand for liberty.” “Please no legal mass surveillance.” “Change the contract please.”

At issue was a business deal that the company recently signed with the Department of Defense, following the Pentagon’s sudden turn against Anthropic. OpenAI will now supply its technology to the military for use in classified settings, the sorts that may involve wartime decisions and intelligence-gathering—an agreement, many legal experts told me, that could give the government wide-ranging powers. “I would just really like to see OpenAI do the right thing and stand up for something, anything,” Niki Dupuis, an AI-start-up founder and one of the chalk protesters, told me.

In a widely leaked internal memo that Sam Altman sent last Thursday night, a copy of which I obtained, the OpenAI CEO said that he would seek “red lines” to prevent the Pentagon from using OpenAI products for mass domestic surveillance and autonomous lethal weapons. These were ostensibly the very same limits that Anthropic had demanded and that had infuriated the Pentagon, leading Defense Secretary Pete Hegseth to declare the company a supply-chain risk—a hefty sanction that would require anybody who sells to the Pentagon to stop using Anthropic products in their work with the military. Perhaps OpenAI was about to secure the very terms Anthropic had been denied.

But a close reading of the contract—the portions of it that OpenAI has shared with the public, anyway—indicates that the lines are, in fact, blurry. Several independent legal experts told me that, legally, the Pentagon can likely get away with using OpenAI’s technology—versions of the models that underlie ChatGPT—for mass surveillance of Americans. Moreover, the military will likely have a pathway to use OpenAI’s technology in autonomous weapons.
AI models from Anthropic, DOD’s previous partner, have likely already been used for warfare; recently, its products were reportedly used to identify targets in Iran (Anthropic declined to comment on that reporting). But the company had refused to allow its technology to be used in fully autonomous weapons.

The Department of Defense, which the Trump administration refers to as the Department of War, declined to answer my questions about the contract. A spokesperson for OpenAI reiterated to me that the Pentagon has agreed to not use the firm’s AI system…





