THE ATLANTIC – Technology | Internet & Technology

  • Looks Like We’ve Democratized Insider Trading
    A few hours before Donald Trump gave his State of the Union address, Republican sources told the PBS correspondent Lisa Desjardins that the speech would break records. The president would speak for more than two hours, she reported on X, and one reliable source claimed he might ramble on for 180 minutes. The post went viral.

    At about the same time, the market started to move on Kalshi, an online platform where people can invest money in the outcome of a given news event. (Don’t call it gambling.) Forecasts on “How long will Trump speak for at the State of the Union?” shot up by 10 minutes after Desjardins posted: Armed with what they perceived as insider information, users thought they could make a buck by accurately “predicting” the outcome of his speech.

    But others speculated in a different direction. “They’re leaking a bunch of stuff about a super long speech and he’ll go about 2 minutes short of the supposed mark and everyone in the white house will make $200k on it,” one Bluesky user, @danvogfan, posted a few hours after Desjardins’s post went viral. In other words, maybe the sources really did have good information—but they were throwing others off track to manipulate the market and profit for themselves.

    Prediction markets such as Kalshi and Polymarket have ushered in a moment when anyone with access to exclusive information related to a major news event can do this, even as the platforms themselves prohibit market manipulation. Trump ultimately didn’t speak for as long as the sources had said: He ended after an hour and 47 minutes. Anyone who had bet according to the information that Desjardins had reported would have lost money. “We live in such a profound dystopia,” another popular Bluesky user wrote above @danvogfan’s post after the fact.

    [Read: America is slow-walking into a Polymarket disaster]

    We can’t say definitively that any insider trading has actually happened, though other suspicious incidents have occurred. In early January, one Polymarket user bet more than $30,000 on Venezuelan President Nicolás Maduro being ousted just hours before he was captured by the U.S. military. (The bet paid out $400,000 and led Representative Ritchie Torres to introduce a bill that would ban federal workers from using prediction markets.) Last month, Israeli authorities charged two people on suspicion of using classified information to bet on military operations on Polymarket. And this past weekend, an anonymous trader who goes…
    Thu, March 5, 2026
  • Tesla’s Secret Weapon Is a Giant Metal Box
    Elon Musk’s vision for the future of Tesla has finally rolled off the assembly line. Last month, a Tesla factory in Texas built the first Cybercab, a driverless electric car with neither a steering wheel nor pedals. With typical bombast, Musk has promised that the Cybercab will cost less than $30,000 by next year, and said that it could perhaps even pay for itself: Owners will conceivably be able to nap at home while the car is out hailing riders and earning them money.

    The Cybercab is among the splashiest parts of Tesla’s pivot away from its core business of selling cars (or at least those driven by humans). Musk is dead set on turning Tesla into a company that makes robots and robotaxis. Earlier this year, he killed the Model S—the vehicle that initially made Tesla into an electric-car giant—freeing up factory space to manufacture Optimus, a humanoid robot he says has the potential to be the “biggest product of all time.” The world’s richest man has a lot riding on the success of Tesla’s robots and robotaxis, namely a pay package worth up to $1 trillion.

    So far, the transformation has been chaotic. For all the hype surrounding the Cybercab, it’s not clear that Tesla can legally sell a car without a steering wheel. The technology also remains unproven: Tesla operates a fleet of robotaxis in Austin, where they have crashed at roughly eight times the rate of American drivers, according to an analysis of Tesla’s self-reported crash data. Musk has even further to go with his Optimus robots. The program has been dogged by public embarrassments and failures: At a Tesla event in December, an Optimus robot tasked with handing out water to guests lost its balance and dramatically tumbled backwards. Meanwhile, Tesla’s car sales are tumbling as Musk has seemingly lost interest in making human-driven cars. Besides the Cybertruck, which has proved to be a flop, Tesla has not released an entirely new car model since 2020. (Tesla and Musk did not respond to my requests for comment.)

    Tesla is undergoing a transformation—just not one oriented around the Cybercab or an army of humanoids that will do the dishes. The product that is poised to define the near future of the company is a metal box the size of a shipping container. It’s the Tesla Megapack, an enormous rechargeable battery that is used by power plants to balance out…
    Wed, March 4, 2026
  • A Dire Warning From the Tech World
    Dean Ball helped devise much of the Trump administration’s AI policy. Now he cannot believe what the Department of Defense has done to one of its major technology partners, the AI firm Anthropic.

    After weeks of negotiations, the Pentagon was unable to force Anthropic to accede to terms that, in Anthropic’s telling, could involve using AI for autonomous weapons and the mass surveillance of Americans, as my colleague Ross Andersen reported over the weekend. So the government has labeled the company a supply-chain risk, effectively plastering it with a scarlet letter. The Pentagon says that this means Anthropic will be unable to work with any company that contracts with the administration. That could include major technology companies that provide infrastructure for Anthropic’s AI models, such as Amazon. The supply-chain-risk designation is normally reserved for companies run by foreign adversaries, and if the order holds up legally, it could be a death blow for Anthropic.

    [Read: Inside Anthropic’s killer-robot dispute with the Pentagon]

    Ball, now a senior fellow at the Foundation for American Innovation, was traveling in Europe as all of this was unfolding last week, staying up as late as 2 a.m. to urge people in the administration to take a less severe approach: simply canceling the contract with Anthropic, without the supply-chain-risk designation. When his efforts failed, Ball told me in an interview yesterday, “my reaction was shock, and sadness, and anger.”

    In the aftermath of the decision, Ball published an essay on his Substack casting the conflict in civilizational terms; the Pentagon’s ultimatum, in his reckoning, is “a kind of death rattle of the old republic, the outward expression of a body that has thrown in the towel.” The action, he wrote, is a repudiation of private property and freedom of speech, two of the most fundamental principles of the United States. In today’s America, Ball argued, the executive branch has become so unstoppable—and passing laws has become so challenging—that the president and his officials can do whatever they want. (When reached for comment, a White House spokesperson told me in a statement that “no company has the right to interfere in key national security decision-making.”)

    Yesterday, I called Ball to discuss his essay and why the standoff with Anthropic feels, to him, like such a dire sign for America. Ball is far from a likely source of such harsh criticism: He’s a Republican with close ties to the Trump administration who departed…
    Tue, March 3, 2026
  • Inside Anthropic’s Killer-Robot Dispute With the Pentagon
    Right up until the moment that Pete Hegseth moved to terminate the government’s relationship with the AI company Anthropic, its leaders believed that they were still on track for a deal. The Pentagon had unilaterally insisted on renegotiating its contract with Anthropic, the company whose AI model is the only one currently allowed into the federal government’s classified systems, in order to remove ethical restrictions that the company had placed on it.

    According to a source familiar with the negotiations, on Friday morning, Anthropic received word that Hegseth’s team would make a major concession. The Pentagon had kept trying to leave itself little escape hatches in the agreements that it proposed to Anthropic. It would pledge not to use Anthropic’s AI for mass domestic surveillance or for fully autonomous killing machines, but then qualify those pledges with loophole-y phrases like as appropriate—suggesting that the terms were subject to change, based on the administration’s interpretation of a given situation.

    [Read: What happens to Anthropic now?]

    Anthropic’s team was relieved to hear that the government would be willing to remove those words, but one big problem remained: On Friday afternoon, Anthropic learned that the Pentagon still wanted to use the company’s AI to analyze bulk data collected from Americans. That could include information such as the questions you ask your favorite chatbot, your Google search history, your GPS-tracked movements, and your credit-card transactions, all of which could be cross-referenced with other details about your life. Anthropic’s leadership told Hegseth’s team that was a bridge too far, and the deal fell apart. Soon after, Hegseth directed the U.S. military’s contractors, suppliers, and partners to stop doing business with Anthropic. The list of companies that contract with the military is extensive, and includes Amazon, the company that supplies much of Anthropic’s computing infrastructure. The Department of Defense did not respond to a request for comment. A spokesperson for Anthropic referred me to the company’s statement addressing Hegseth’s remarks.

    My source, whom I am granting anonymity because they are not authorized to talk about the negotiations, also shed further light on the disagreement between Anthropic and the Pentagon over autonomous weapons, machines that can select and engage targets without a human making the final call. The U.S. military has been developing these systems for years and has budgeted $13.4 billion for them in fiscal year 2026 alone. They run the gamut from individual drones to whole swarms that can…
    Sun, March 1, 2026
  • What Happens to Anthropic Now?
    President Trump is terminating the government’s relationship with Anthropic, an AI company whose products, until recently, were used by Pentagon officials for classified operations. Following a weekslong standoff with the company, Trump posted on Truth Social this afternoon that all federal agencies must “IMMEDIATELY CEASE all use of Anthropic’s technology,” adding: “We don’t need it, we don’t want it, and will not do business with them again!” The General Services Administration announced that it would take action against Anthropic’s products, and indeed, according to an email I obtained that was sent to the leadership of all agencies using USAi—a GSA platform that provides chatbots from tech companies to government workers—access to Anthropic was suspended “immediately.” The government is also removing Anthropic from its primary procurement system, which is the key way for any federal agency to purchase a commercial product.

    Anthropic was awarded a $200 million contract with the Pentagon last summer geared toward providing versions of its technology for military use. OpenAI, Google, and xAI were awarded similar contracts, though Anthropic’s Claude models are the only advanced generative-AI programs to receive Pentagon security clearance permitting the handling of secret and classified data. Claude had been integrated across the Department of Defense and was reportedly used to assist the raid on Venezuela that led to the capture of President Nicolás Maduro.

    Anthropic has said that it will not allow Claude to be used for mass domestic surveillance or to enable fully autonomous weaponry, which could involve applications such as Claude selecting and killing targets with drones, and analyzing data that have been indiscriminately gathered on Americans by the intelligence community. Anthropic has also said that the Pentagon never included such uses in its contracts with the firm. But now DOD is demanding unrestricted use of Claude and accusing Anthropic of trying to control the military and of “putting our nation’s safety at risk” by refusing to comply.

    Following a heated meeting on Tuesday, DOD gave Anthropic until today at 5:01 p.m. eastern time to acquiesce to its demands. If not, the Pentagon would compel the company under an emergency wartime law called the Defense Production Act or, even more severe, designate Anthropic a “supply-chain risk,” which could forbid any organization that works with the U.S. military from doing business with the AI company. Shortly after Trump’s announcement, Defense Secretary Pete Hegseth declared that he was doing just that. Dean Ball, an analyst who helped…
    Fri, February 27, 2026