THE ATLANTIC – Technology | Internet & Technology

  • The Human Skill That Eludes AI
    In a certain, strange way, generative AI peaked with OpenAI’s GPT-2 seven years ago. Little known to anyone outside of tech circles, GPT-2 excelled at producing unexpected answers. It was creative. “You could be like, ‘Continue this story: The man decided to take a shower,’ and GPT-2 would be like, ‘And in the shower, he was eating his lemon and thinking about his wife,’” Katy Gero, a poet and computer scientist who has been experimenting with language models since 2017, told me. “The models won’t do that anymore.”

    AI leaders boast about their models’ superhuman technical abilities. The technology can predict protein structures, create realistic videos, and build apps with a single prompt. But these executives and researchers also readily admit that they have not yet released a model that writes well. OpenAI CEO Sam Altman has predicted that large language models will soon be capable of “fixing the climate, establishing a space colony, and the discovery of all of physics,” but in an October interview with the economist Tyler Cowen, he guessed that even future models—an eventual GPT-6 or GPT-7—might be able to extrude only something equivalent to “a real poet’s okay poem.”

    Today’s AI-generated prose is riddled with flaws. Chatbots produce meaningless metaphors, endless “it’s not this, but that” constructions, and a cloyingly sycophantic tone—and, of course, they overuse my beloved em dash. (Only starting with GPT-5.1, released in November, could ChatGPT reliably follow instructions to avoid the beleaguered punctuation mark.) I wanted to understand why this is—why large language models, which, after all, have memorized centuries of great literature, can demonstrate incredible emergent abilities yet totally fail to produce a single essay that I’d want to read.

    [Read: Would limitlessness make us better writers?]

    So I talked with people who would know: people who work at LLM companies, AI-data vendors, academic computer-science departments, and AI-writing start-ups. (Some spoke with me under the condition of anonymity because their employers barred them from speaking publicly about their work.) What I learned is that modern LLMs are built in a way that is antagonistic to great writing; they are engineered to be rule-following teacher’s pets that always have the right answer in hand. In many respects, they’ve come a long way from GPT-2, but they’ve also lost something that made them looser and more compelling.

    LLMs begin their lives as indiscriminate readers. During the pretraining phase, they ingest something like the entire internet—Reddit posts, YouTube…
    Tue, March 17, 2026
    9 hours ago
  • My Tesla Was Driving Itself Perfectly—Until It Crashed
    The smell was strange. Sharp. Chemical. Wrong. The concrete wall was too close. My glasses were gone. One of my kids was standing on the sidewalk next to our car—not crying, just confused.

    The seat belt had held. The crumple zone had crumpled. The airbag had fired. Everything designed to protect bodies had done its job. But the car, a Tesla Model X, was totaled.

    One Sunday last fall, my kids and I were on a drive we’d done hundreds of times, winding through the residential streets of the Bay Area to drop my son off at his Boy Scouts meeting. The Tesla was in Full Self-Driving mode, driving perfectly—until it wasn’t.

    What happened next, I’ve had to piece together. My memory is hazy, and some of it comes from one of my sons, who watched the whole thing unfold from the back seat. The car was making a turn. Something felt off—the steering wheel jerked one way, then the other, and the car decelerated in a way I didn’t expect. I turned the wheel to take over. I don’t know exactly what the system was doing, or why. I only know that somewhere in those seconds, we ended up colliding with a wall.

    You might think I’d have known what to do in this situation. I used to run the self-driving-car division at Uber, trying to build a future in which technology protects us from accidents. I had thought about edge cases, failure modes, the brittleness hiding behind smooth performance. My team trained human drivers on when and how to intervene if a self-driving car made a mistake. In the two years I ran the division, we had no injuries in our early pilot programs.

    With my own Tesla, I started out using Full Self-Driving as the default setting only on highways. That’s where it makes sense: You have clear lane markers and predictable traffic patterns. Then, one day, I tried it on a local road, and it worked well enough to become a habit.

    Despite the accident, we were lucky. I walked away with a stiff neck, a concussion, a few days of headaches, and some memories I can’t shake. The kids climbed out unharmed.

    Still, you could say I was crushed in what the researcher Madeleine Clare Elish calls the moral crumple zone. Some parts of a car are specifically designed to absorb damage in a crash, to protect the people inside. But…
    Tue, March 17, 2026
    13 hours ago
  • Awareing Ourselves to Death
    From the comfort of my desk, I can see it all. A series of webcam feeds shows me the sun setting over Tel Aviv and southern Lebanon. A map of the world, flecked with red dots, indicates that most of Europe and the Middle East are on “high alert.” I toggle a button on the map’s control panel, and the globe is instantly latticed with the locations of undersea fiber-optic cables. Below the map, a live feed of Bloomberg TV is running with the chyron Oil Extends Rout on Stockpile Talks. I scroll down and am greeted by walls of headlines, grouped into categories such as “World News” and “Intel Feed.” A “country instability” meter clocks Iran at 100 percent, while a different widget informs me that the world’s “strategic risk overview” remains “stable” at 50, whatever that means.

    I am looking at World Monitor, a website that turns any browser into a makeshift situation room, and I love it. Built to look like a cross between a Bloomberg terminal and a big screen at U.S. Strategic Command, the site aims to display as much information about world events as possible in an assortment of real-time feeds. This is information overload presented as intelligence.

    World Monitor was built over a single weekend in January by Elie Habib, an engineer based in the United Arab Emirates whose day job is as CEO of Anghami, one of the Middle East’s largest music-streaming services. “I wanted to extract the signal from the noise,” he told me recently. But what he really built, by his own admission, is a noise machine. Right now, the site pulls in more than 100 different streams of data, including stock prices, prediction markets, satellite movements, weather alerts, major-airport flight data, fire outbreaks, and the operational status of cloud services such as Cloudflare and AWS. The information is all real, but what exactly a person ought to do with it is unclear.

    When Habib posted about the project on X, he was shocked by the response. At one point, tens of thousands of people were using the site at the same time; more than 2 million people accessed it in the first 20 days. Habib’s inbox filled with requests for new features as well as messages from venture capitalists looking to spin up World Monitor into a full-time business. Via GitHub, where Habib has made the code for World Monitor open-source and…
    Sat, March 14, 2026
    4 days ago
  • Well, That’s One Way to Sell Americans on Electric Cars
    In a TikTok video posted earlier this week, a Chihuahua claps its paws and dances to disco in front of a Tesla. “EV owners seeing gas prices go up, and not having to pay it,” the caption reads. In another, a clip of the comedian Zach Galifianakis laughing hysterically is superimposed over a gas-price sign. Across social media, Americans who drive electric vehicles can’t help but gloat. Who’s laughing now?

    Indeed, a car that doesn’t require gas sure does sound appealing right now. As the Iran crisis continues to choke the global supply of oil, gas prices are rising higher and higher. Americans are now paying an average of $3.63 a gallon at the pump, according to AAA—up from $2.94 just a month ago. Four bucks may be right around the corner, and elevated prices could linger for months. Already, ride-share drivers are getting pickier about the trips they accept and driving longer hours to offset the extra costs. Commuters are hunting for the best deals on services such as GasBuddy—which has seen its daily active users more than double in a week and a half. At one Chevron in downtown Los Angeles, people are stopping just to take photos of the electronic sign displaying a price of $8.38 per gallon.

    America could have entered this fiasco with a better hand. The current spike in gas prices—and whatever comes next—could have been much more manageable if more people had electric vehicles in their driveways. Yet relatively few Americans are currently in a position to recharge instead of refuel (regardless of whether they’re rubbing it in with Chihuahua memes). In the United States, sales of electric vehicles have risen considerably over the years, but adoption lags behind the rest of the world. Just under 8 percent of new cars sold last year in the U.S. were electric, compared with a fifth in Europe and a third in China. Now America is quite literally paying the price for sticking with gas.

    [Read: The American car industry can’t go on like this]

    Some of the skepticism toward EVs is understandable: They generally cost more than conventional cars, plus there’s that unfamiliar business of charging. A road trip in an EV requires more planning than simply stopping at the nearest gas station when the low-fuel light starts blinking. On top of that, low gas prices have made it easy for less climate-conscious buyers to adopt an attitude of…
    Fri, March 13, 2026
    4 days ago
  • Inside the Dirty, Dystopian World of AI Data Centers
    Photographs by Landon Speers

    As we drove through southwest Memphis, KeShaun Pearson told me to keep my window down—our destination was best tasted, not viewed. Along the way, we passed an abandoned coal plant to our right, then an active power plant to our left, equipped with enormous natural-gas turbines. Pearson, who directs the nonprofit Memphis Community Against Pollution, was bringing me to his hometown’s latest industrial megaproject.

    Already, the air smelled of soot, gasoline, and asphalt. Then I felt a tickle sliding up my nostrils and down into my throat, like I was getting a cold. As we approached, I heard the rumble of cranes and trucks, and then from behind a patch of trees emerged a forest of electrical towers. Finally, I saw it—a white-walled hangar, bigger than a dozen football fields, where Elon Musk intends to build a god.

    This is Colossus: a data center that Musk’s artificial-intelligence company, xAI, is using as a training ground for Grok, one of the world’s most advanced generative-AI models. Training these models takes a staggering amount of energy; if run at full strength for a year, Colossus would use as much electricity as 200,000 American homes. When fully operational, Musk has written on X, this facility and two other xAI data centers nearby will require nearly two gigawatts of power. Annually, those facilities could consume roughly twice as much electricity as the city of Seattle.

    To get Colossus up and running fast, xAI built its own power plant, setting up as many as 35 natural-gas turbines—railcar-size engines that can be major sources of smog—according to imagery obtained by the Southern Environmental Law Center. Pearson coughed as we drove by the facility. The scratch in my throat worsened, and I rolled up my window.

    xAI’s rivals are all building similarly large data centers to develop their most powerful generative-AI models; a metropolis’s worth of electricity will surge through facilities that occupy a few city blocks. These companies have primarily made their chatbots “smarter” not by writing niftier code but by making them bigger: ramming more data through more powerful computer chips that use more electricity. OpenAI has announced plans for facilities requiring more than 30 gigawatts of power in total—more than the largest recorded demand for all of New England. Since ChatGPT’s launch, in November 2022, the capital expenditures of Amazon, Microsoft, Meta, and Google have exceeded $600 billion, and much of that spending has gone…
    Fri, March 13, 2026
    5 days ago