- What Was Grammarly Thinking?
To me, the best first sentence of any piece of journalism is the one in Joan Didion’s 1987 book, Miami, which begins like this: “Havana vanities come to dust in Miami.”

I love that sentence and that propulsive first chapter so much that I once sat down to try to figure out how she did it. I looked at the sentences one at a time to assess what purpose each one was serving, and I counted how many of them Didion had needed to accomplish each thing she wanted to accomplish. Then I thought about how she figured out what order to put them in to have maximum page-turning impact. And then I compared all of it unfavorably with the flailing and feeble way in which I would have pursued the same goals. I marked up my copy of the book in a somewhat desperate fashion and then became depressed.

That type of copying is pretty normal, and they teach it in school. It’s how you learn (and how you become depressed). But in the age of generative AI, there are many new kinds of copying. For instance, Wired reported last week on a tool from Grammarly, which briefly offered users the opportunity to put their writing through something called “Expert Review.” This produced AI-generated advice purportedly from the perspective of a bunch of famous authors, a bunch of less-famous working journalists (including myself, per The Verge’s reporting), and a bunch of academics (including some who had recently died).

[Margaret Atwood: Murdered by my replica?]

I say “briefly” because the company deactivated the feature today. A lot of people got really mad about it because none of the experts had agreed for their work to be used in such a way, or to serve as uncompensated marketing for an app that people use to help them write more legible emails. “We hear the feedback and recognize we fell short on this,” the company’s CEO, Shishir Mehrotra, wrote on his LinkedIn page yesterday.
Not long after, Wired reported that one of the journalists whose name had been used in the feature, Julia Angwin, was filing a class-action lawsuit against Grammarly’s owner, Superhuman Platform. In a statement forwarded by a spokesperson, Mehrotra repeated apologies made in his LinkedIn post and added, “We have reviewed the lawsuit, and we believe the legal claims are without merit and will strongly defend against them.”

Before the tool went down,…

- Who Cares If AI Brings Down the Economy?
The tech billionaire Hemant Taneja admits that AI is a bubble. In fact, he welcomes it: “Bubbles are good,” Taneja, the CEO of General Catalyst, a venture-capital firm, told me in an email. If AI comes crashing down, it will lead to “some spectacular failures,” he said—companies will go under and people will lose their jobs—but that’s a price worth paying for “enduring companies that change the world forever.”

His view is widespread in Silicon Valley. Some, such as Nvidia CEO Jensen Huang, reject the notion that their companies are overvalued. But many of the wealthiest and most powerful people in tech are embracing the idea of an AI bubble. Jeff Bezos has argued that AI might be a “good” kind of bubble. Sam Altman has made similar comments, predicting that AI will be a “huge net win for the economy” even if “a phenomenal amount of money” is lost along the way.

Indeed, a phenomenal amount of money is at stake: OpenAI, which is still far from profitable, is currently worth more than Toyota, Coca-Cola, and Disney combined. This year, Big Tech plans to spend some $650 billion on the AI build-out—a sum that far exceeds the GDP of most countries. Investors are betting that AI will spur a productivity boom and deliver unimaginable corporate profits, but that future could be far off. If the spending dries up first, the bubble could pop—perhaps dragging the rest of the economy down with it. Nonetheless, Silicon Valley thinks that the present mania will eventually pay off in scientific discovery and economic growth. “Stop trying to make bubbles go away,” as the entrepreneur James Thomason recently wrote. “The benefits of innovation outweigh the costs of volatility.” In other words: Be grateful for the bubble.

[Read: Here’s how the AI crash happens]

Silicon Valley did not invent the idea that bubbles can be worth the pain. Various economists have made the argument for decades.
But as the AI boom has exploded, a book by two investors, Tobias Huber and Byrne Hobart, has helped formalize tech’s pro-bubble ideology. Boom: Bubbles and the End of Stagnation was a hit in Silicon Valley when it came out in 2024, praised by the tech billionaires Peter Thiel and Marc Andreessen.

The authors argue that there are essentially two kinds of bubbles: good ones (dot-com, the railroads) and bad ones (the 2008 housing crisis). Both cause damage when they burst, but…

- Dario Amodei’s Oppenheimer Moment
More than a year before his recent standoff with the Pentagon, Dario Amodei, the chief executive of Anthropic, published a 15,000-word manifesto describing a glorious AI future. Its title, “Machines of Loving Grace,” is borrowed from a Richard Brautigan poem, but as Amodei acknowledged, with some embarrassment, its utopian vision bears some resemblance to science fiction. According to Amodei, we will soon create the first polymath AIs with abilities that surpass those of Nobel Prize winners in “most relevant fields,” and we’ll have millions of them, a “country of geniuses,” all packed into the glowing server racks of a data center, working together. With access to tools that operate directly on our physical world, these AIs would be able to get up to a great deal of dangerous mischief, but according to Amodei, if they’re developed—or “grown,” as staffers at Anthropic are fond of saying—in the correct way, they will decide to greatly improve our lives.

Amodei does not explain precisely how the AIs will accomplish this. In most cases, he expects them to do what the smartest humans do, but much more rapidly, compressing decades of scientific progress. He says that by 2035, we could have the theories, cures, and technologies of the early 22nd century. Our infectious diseases and cancers could be cured, and we could live twice as long, and slow the decay of our brains. Demis Hassabis, the head of Google DeepMind, has similarly conceived of superintelligent AI as the ultimate tool to accelerate scientific discovery, and Sam Altman, OpenAI’s CEO, has said that advanced AI may even solve physics.

Amodei does not say that this utopian AI future is inevitable. To the contrary, among the chief executives at the top AI labs, he may be the one who worries most about the technology’s dangers.
“Machines of Loving Grace” is an optimistic outlier in his larger oeuvre of published writing, much of which concerns the risks that will accompany the creation of a greater-than-human intelligence. Amodei seems to think of today’s AI researchers as comparable to Manhattan Project scientists, and has been known to recommend The Making of the Atomic Bomb. In his telling, superhuman AI could be even more dangerous than nuclear weapons, which is why AI needs to be developed the right way, by the right people, so that it doesn’t overpower humanity or tip the global balance of power toward autocracies.

Implicit in this vision is…

- AI Layoffs Are a Self-Fulfilling Prophecy
Late last month, at an event in Washington, D.C., Andrew Yang delivered a bleak message. “I have bad news, America,” he told the crowd. “The Fuckening is here.”

The Fuckening is the name that Yang, a former presidential candidate, has given to AI’s disembowelment of the workforce. As he sees it, millions of knowledge workers will soon lose their jobs, personal-bankruptcy rates will spike, and entire downtowns will turn vacant as offices hollow out. Yang has talked with computer-science majors, he said onstage, who can’t find a job and are instead “driving Ubers to make ends meet.” His doomsaying is extreme but familiar: Fears of job losses are mounting as AI continues to rapidly advance. A new generation of AI agents is more capable than traditional chatbots of assisting with sophisticated computer work. Bots are no longer limited to searching the web and answering questions—they can create financial models, generate slide decks, and much more.

Perhaps the most concerning sign yet of an impending jobs crisis came one day after Yang’s remarks. The payments firm Block, which operates Square and Cash App, announced that it was laying off roughly 4,000 workers—nearly half of the company’s workforce—due to AI. “The intelligence tools we’re creating and using, paired with smaller and flatter teams, are enabling a new way of working,” Block CEO Jack Dorsey, who also co-founded Twitter, wrote. Going forward, he added, the company will be laser-focused on integrating AI across layers of its operations.

Although other companies have also blamed AI for job cuts, Block’s layoffs were unusually drastic. “The dreaded AI jobs wipeout got real,” The Wall Street Journal declared. Other companies could soon follow Block’s lead—not necessarily because the technology is ready to replace workers, but because it’s become fashionable to make such cuts.
In that sense, AI-induced job loss risks becoming a self-fulfilling prophecy.

Dorsey’s explanation for the layoffs at Block might not be the whole story. The company could be engaged in “AI-washing,” or using the technology as a convenient excuse to lay off workers when other factors may be to blame. Like many other tech companies, Block became bloated during the pandemic—its workforce more than tripled from 2019 to 2022. Perhaps the cuts offered Dorsey a way to shed workers while also signaling to the world that he is taking AI seriously. “It is hard to imagine a firm-wide sudden 50%+ efficiency gain that justifies massive organizational cuts,” Ethan…

- A Never-Ending Conspiracy Theory in Remote Alaska
The guy pouring my beer in Anchorage told me that he knew there was no truth to decades-old rumors about a research facility 200 miles to the northeast. Nobody was up there talking to aliens or controlling people’s minds. “They just do the aurora,” he said, cheerfully, while tearing up pieces of mint.

The comment didn’t surprise me. Many people who don’t believe one conspiracy theory about that station—known as the High-frequency Active Auroral Research Program, or HAARP—believe another. A common misconception is that it can manufacture northern lights, a natural wonder typically most visible in or near the Arctic Circle. It cannot (and neither can any man-made instrument). Still, late last year, when a geomagnetic storm caused aurora sightings as far south as Texas, Facebook was studded with posts warning that these lights were not “natural” and that they were created by the scientists at HAARP for possibly sinister reasons.

I’ve been curious about HAARP for a while because of rumors such as this one. The lab has also been erroneously credited with various supernatural occurrences (backward-walking caribou) and secret contact with extraterrestrials (covered up by “men in black”). Most commonly, it’s blamed for events caused by nature. The office phone rings after hurricanes, earthquakes, floods, wildfires, tornadoes, and typhoons, no matter where in the world they occur. A 2024 study found that HAARP was the subject of more than a million conspiracism-inflected posts on Twitter from January 2022 to March 2023, primarily about natural disasters. In early 2024, the far-right influencer Laura Loomer suggested that HAARP created a snowstorm to dampen turnout at the Iowa caucuses and thwart the Trump campaign.
And when I visited HAARP this past November, calls were coming in about whether the facility had caused Hurricane Melissa, which had recently swept through Jamaica and Cuba, resulting in at least 88 fatalities and billions of dollars in damage.

[Adam Serwer: Gullible, cynical America]

All of this anxiety is focused on a unique research instrument housed at HAARP, which is owned by the University of Alaska at Fairbanks and was originally built by the military at a cost of $290 million. “The array,” as the instrument is called, is a grid of 180 transmitters that each sit atop a 72-foot-tall post, arranged in a clearing and surrounded by Alaskan wilderness. You could call it the world’s highest-powered radio transmitter, but more precisely it’s the world’s most powerful ionospheric heater (which sounds…
