THE ATLANTIC – Technology | Internet & Technology

  • This Looks Like an Insider Bet on Aliens
    On Monday night, someone placed a peculiar bet on the prediction market Kalshi. At 7:45 p.m. eastern time, a single trader put down nearly $100,000 on the claim that, by the end of December, the Trump administration will confirm that alien life or technology exists elsewhere in our universe. According to The Atlantic’s review of Kalshi’s trading data, about 35 minutes after this bet was executed, it was followed by another that was almost twice as large (possibly from the same person). These were market-moving events: For one brief stretch, the market appeared to think that there was at least a one-in-three chance that the U.S. government will announce the existence of aliens this year. Perhaps this was just some overexcited UFO diehard with a hunch and money to burn. Or maybe, as some observers quickly noted, it was a trader with inside knowledge.
    When this alien-prediction market first opened, in December of last year, it didn’t attract much action: By early this month, only about $1 million had been traded on it, a pittance compared with the $195 million that has so far been wagered on Kalshi over who will be the next chair of the Federal Reserve. But money started pouring in 10 days ago, after Barack Obama was asked, in a podcast interview, whether aliens are real and replied, “They’re real, but I haven’t seen ’em.” Although he later clarified on Instagram that he had meant only to suggest that in our mind-bendingly expansive universe of stars and planets, other life forms are very likely to exist, his remark had already made international headlines.
    Trump seemed to get a kick out of Obama’s flub. A few days later, he accused the former president of leaking classified information and, in a post on Truth Social, directed Secretary of Defense Pete Hegseth and other parts of the federal government to “begin the process of identifying and releasing Government files related to alien and extraterrestrial life, unidentified aerial phenomena (UAP), and unidentified flying objects (UFOs).”
    It’s possible that Trump was simply delighted by the prospect of a slow-drip document release that has nothing to do with him or Jeffrey Epstein. Either way, his announcement brought even more money into Kalshi’s aliens market. One gambling-industry site published some “X-Files” trading advice: Buy on the rumors of congressional hearings, then sell the moment that officials start dodging questions.
    This week’s mysterious and mammoth bets did not…
    Wed, February 25, 2026
    1 day ago
  • AI Is Unlocking a New Way of Doing Math
    Over the past couple of months, several researchers have begun making the same provocative claim: They used generative-AI tools to solve a previously unanswered math problem.
    The most extreme promises—AI-assisted resolutions to some of the hardest problems in mathematics—may well turn out to be empty hype. But a number of AI-written solutions, albeit to far less lauded problems, have checked out. These were answers to a number of the Erdős Problems—more than 1,000 mathematical questions set forth by the Hungarian mathematician Paul Erdős—written with generative-AI models including ChatGPT. OpenAI quickly claimed a victory: “GPT-5.2 Pro for solving another open Erdős problem,” OpenAI President Greg Brockman posted on X in January. “Going to be a wild year for mathematical and scientific advancement!” (OpenAI and The Atlantic have a corporate partnership.)
    Much of the excitement around the news has stemmed from the adjudicator of these AI-written proofs: Terence Tao, a professor at UCLA who is widely considered to be the world’s greatest living mathematician. His stamp of approval seemingly legitimizes the greatest promise of generative AI—to push the frontier of human knowledge and civilization. When I called Tao earlier this month to get his take on what AI can offer mathematics, he was more tempered. The AI-generated Erdős solutions are impressive, he told me, but not overwhelmingly so: The bots have functionally landed some “cheap wins,” Tao said.
    [Read: We’re entering uncharted territory for math]
    Tao has long been intrigued by, but reserved about, what AI tools can do for his field. The first time we spoke, in the fall of 2024, Tao had likened chatbots to “mediocre, but not completely incompetent” graduate students. About six months later, he told me the models had gotten better “at certain types of high-level math reasoning,” but lacked creativity and made subtle mistakes. But during our most recent conversation, he was more bullish. AI may not be on the cusp of solving all of the world’s great math problems, but chatbots are at the point where they can collaborate with human mathematicians. In the process, he said, the technology is opening up a different “way of doing mathematics.”
    This conversation has been edited for length and clarity.
    Matteo Wong: There has recently been a lot of excitement around ChatGPT’s ability to solve some Erdős Problems. How have you seen generative AI’s mathematical capabilities evolve over the past year or so?
    Terence Tao: There’s a big crowd of people who really, really want…
    Tue, February 24, 2026
    3 days ago
  • Sam Altman Is Losing His Grip on Humanity
    Last Friday, on stage at a major AI summit in India, Sam Altman wanted to address what he called an “unfair” criticism. The OpenAI CEO was asked by a reporter from The Indian Express about the natural resources required to train and run generative-AI models. Altman immediately pushed back. Chatbots do require a lot of power, yes, but have you thought about all of the resources demanded by human beings across our evolutionary history?
    “It also takes a lot of energy to train a human,” Altman told a packed pavilion. “It takes like 20 years of life and all of the food you eat during that time before you get smart. And not only that, it took, like, the very widespread evolution of the hundred billion people that have ever lived and learned not to get eaten by predators and learned how to, like, figure out science and whatever, to produce you, and then you took whatever, you know, you took.”
    He continued: “The fair comparison is, if you ask ChatGPT a question, how much energy does it take once its model is trained to answer that question, versus a human? And probably, AI has already caught up on an energy-efficiency basis, measured that way.”
    Altman’s comments are easy to pick apart. The brain uses significantly less energy than even efficient frontier models do for simple queries, not to mention the laptops and smartphones people use to prompt AI models. It is true that people have to consume actual sustenance before they “get smart,” though this is also a helpful bit of redirection on Altman’s part—the real concern with AI is not really the resources it demands, but the amount it contributes to climate change. Atmospheric carbon dioxide is at levels not seen in millions of years—driven not by the evolution of the 117 billion people and all the other critters to have ever existed, but by contemporary human society and combustion turbines akin to those OpenAI is setting up at its Stargate data centers. Other data centers, too, are building private, gas-fired power plants—which collectively will likely be capable of generating enough electricity for, and emitting as much greenhouse gas as, dozens of major American cities—or extending the life of coal plants. (OpenAI, which has a corporate partnership with the business side of this magazine, did not respond to a request for…
    Mon, February 23, 2026
    3 days ago
  • AI Agents Are Taking America by Storm
    Americans are living in parallel AI universes. For much of the country, AI has come to mean ChatGPT, Google’s AI overviews, and the slop that now clogs social-media feeds. Meanwhile, tech hobbyists are becoming radicalized by bots that can work for hours on end, collapsing months of work into weeks, or weeks into an afternoon.
    Recently, more people have started to play around with tools such as Claude Code. The product, made by the start-up Anthropic, is “agentic,” meaning it can do all sorts of work a human might do on a computer. Some academics are testing Claude Code’s ability to autonomously generate papers; others are using agents for biology research. Journalists have been experimenting with Claude Code to write data-driven articles from scratch, and earlier this month, a pair used the bot to create a mock competitor to Monday.com, a public software company worth billions. In under an hour, they had a working prototype. Although the actual quality of all of these AI-generated papers and analyses remains unclear, the progress is both stunning and alarming. “Once a computer can use computers, you’re off to the races,” Dean Ball, a senior fellow at the Foundation for American Innovation, told me.
    Even as AI has advanced, the most sophisticated bots have yet to go fully mainstream. Unlike ChatGPT, which has a free tier, agentic tools such as Claude Code or OpenAI’s Codex typically cost money, and can be intimidating to set up. I run Claude Code out of my computer’s terminal, an app traditionally reserved for programmers that looks like something a hacker uses in movies. It’s also not always obvious how best to prompt agentic bots: A sophisticated user might set up teams of agents that message one another as they work, whereas a newbie might not realize such capabilities even exist.
    The tech industry is now rushing to develop more accessible versions of these products for the masses. Last month, Anthropic released a new paid version of Claude Code designed for nontechnical users; today the start-up debuted a new model to all users, which offers, among other things, “human-level capability in tasks like navigating a complex spreadsheet.” Meanwhile, OpenAI recently announced a new version of Codex, which the company claims can do nearly anything “professionals can do on a computer.” As these products have gained visibility, people seem to be realizing all at once that AI does a lot more than…
    Tue, February 17, 2026
    1 week ago
  • Words Without Consequence
    For the first time, speech has been decoupled from consequence. We now live alongside AI systems that converse knowledgeably and persuasively—deploying claims about the world, explanations, advice, encouragement, apologies, and promises—while bearing no vulnerability for what they say. Millions of people already rely on chatbots powered by large language models, and have integrated these synthetic interlocutors into their personal and professional lives. An LLM’s words shape our beliefs, decisions, and actions, yet no speaker stands behind them.
    This dynamic is already familiar in everyday use. A chatbot gets something wrong. When corrected, it apologizes and changes its answer. When corrected again, it apologizes again—sometimes reversing its position entirely. What unsettles users is not just that the system lacks beliefs but that it keeps apologizing as if it had any. The words sound responsible, yet they are empty.
    This interaction exposes the conditions that make it possible to hold one another to our words. When language that sounds intentional, personal, and binding can be produced at scale by a speaker who bears no consequence, the expectations listeners are entitled to hold of a speaker begin to erode. Promises lose force. Apologies become performative. Advice carries authority without liability. Over time, we are trained—quietly but pervasively—to accept words without ownership and meaning without accountability. When fluent speech without responsibility becomes normal, it does not merely change how language is produced; it changes what it means to be human.
    This is not just a technical novelty but a shift in the moral structure of language. People have always used words to deceive, manipulate, and harm. What is new is the routine production of speech that carries the form of intention and commitment without any corresponding agent who can be held to account. This erodes the conditions of human dignity, and this shift is arriving faster than our capacity to understand it, outpacing the norms that ordinarily govern meaningful speech—personal, communal, organizational, and institutional.
    Language has always been more than the transmission of information. When humans speak, our words commit us in an implicit social contract. They expose us to judgment, retaliation, shame, and responsibility. To mean what we say is to risk something.
    The AI researcher Andrej Karpathy has likened LLMs to human ghosts. They are software that can be copied, forked, merged, and deleted. They are not individuated. The ordinary forces that tether speech to consequence—social sanction, legal penalty, reputational loss—presuppose a continuous agent whose future can…
    Sun, February 15, 2026
    2 weeks ago