- Dario Amodei’s Oppenheimer Moment
More than a year before his recent standoff with the Pentagon, Dario Amodei, the chief executive of Anthropic, published a 15,000-word manifesto describing a glorious AI future. Its title, "Machines of Loving Grace," is borrowed from a Richard Brautigan poem, but as Amodei acknowledged, with some embarrassment, its utopian vision bears some resemblance to science fiction. According to Amodei, we will soon create the first polymath AIs with abilities that surpass those of Nobel Prize winners in "most relevant fields," and we'll have millions of them, a "country of geniuses," all packed into the glowing server racks of a data center, working together. With access to tools that operate directly on our physical world, these AIs would be able to get up to a great deal of dangerous mischief, but according to Amodei, if they're developed—or "grown," as staffers at Anthropic are fond of saying—in the correct way, they will decide to greatly improve our lives.

Amodei does not explain precisely how the AIs will accomplish this. In most cases, he expects them to do what the smartest humans do, but much more rapidly, compressing decades of scientific progress. He says that by 2035, we could have the theories, cures, and technologies of the early 22nd century. Our infectious diseases and cancers could be cured, we could live twice as long, and we could slow the decay of our brains. Demis Hassabis, the head of Google DeepMind, has similarly conceived of superintelligent AI as the ultimate tool to accelerate scientific discovery, and Sam Altman, OpenAI's CEO, has said that advanced AI may even solve physics.

Amodei does not say that this utopian AI future is inevitable. To the contrary, among the chief executives at the top AI labs, he may be the one who worries most about the technology's dangers. "Machines of Loving Grace" is an optimistic outlier in his larger oeuvre of published writing, much of which concerns the risks that will accompany the creation of a greater-than-human intelligence. Amodei seems to think of today's AI researchers as comparable to Manhattan Project scientists, and has been known to recommend The Making of the Atomic Bomb. In his telling, superhuman AI could be even more dangerous than nuclear weapons, which is why AI needs to be developed the right way, by the right people, so that it doesn't overpower humanity or tip the global balance of power toward autocracies.

Implicit in this vision is…

- AI Layoffs Are a Self-Fulfilling Prophecy
Late last month, at an event in Washington, D.C., Andrew Yang delivered a bleak message. "I have bad news, America," he told the crowd. "The Fuckening is here."

The Fuckening is the name that Yang, a former presidential candidate, has given to AI's disembowelment of the workforce. As he sees it, millions of knowledge workers will soon lose their jobs, personal-bankruptcy rates will spike, and entire downtowns will turn vacant as offices hollow out. Yang said onstage that he has talked with computer-science majors who can't find a job and are instead "driving Ubers to make ends meet." His doomsaying is extreme but familiar: Fears of job losses are mounting as AI continues to rapidly advance. A new generation of AI agents is more capable than traditional chatbots of assisting with sophisticated computer work. Bots are no longer limited to searching the web and answering questions—they can create financial models, generate slide decks, and much more.

Perhaps the most concerning sign yet of an impending jobs crisis came one day after Yang's warning. The payments firm Block, which operates Square and Cash App, announced that it was laying off roughly 4,000 workers—nearly half of the company's workforce—due to AI. "The intelligence tools we're creating and using, paired with smaller and flatter teams, are enabling a new way of working," Block CEO Jack Dorsey, who also co-founded Twitter, wrote. Going forward, he added, the company will be laser-focused on integrating AI across all layers of its operations.

Although other companies have also blamed AI for job cuts, Block's layoffs were unusually drastic. "The dreaded AI jobs wipeout got real," The Wall Street Journal declared. Other companies could soon follow Block's lead—not necessarily because the technology is ready to replace workers, but because it's become fashionable to make such cuts. In that sense, AI-induced job loss risks becoming a self-fulfilling prophecy.

Dorsey's explanation for the layoffs at Block might not be the whole story. The company could be engaged in "AI-washing," or using the technology as a convenient excuse to lay off workers when other factors may be to blame. Like many other tech companies, Block became bloated during the pandemic—its workforce more than tripled from 2019 to 2022. Perhaps the cuts offered Dorsey a way to shed workers while also signaling to the world that he is taking AI seriously. "It is hard to imagine a firm-wide sudden 50%+ efficiency gain that justifies massive organizational cuts," Ethan…

- A Never-Ending Conspiracy Theory in Remote Alaska
The guy pouring my beer in Anchorage told me that he knew there was no truth to decades-old rumors about a research facility 200 miles to the northeast. Nobody was up there talking to aliens or controlling people's minds. "They just do the aurora," he said, cheerfully, while tearing up pieces of mint.

The comment didn't surprise me. Many people who don't believe one conspiracy theory about that station—known as the High-frequency Active Auroral Research Program, or HAARP—believe another. A common misconception is that it can manufacture northern lights, a natural wonder typically most visible in or near the Arctic Circle. It cannot (and neither can any man-made instrument). Still, late last year, when a geomagnetic storm caused aurora sightings as far south as Texas, Facebook was studded with posts warning that these lights were not "natural" and that they were created by the scientists at HAARP for possibly sinister reasons.

I've been curious about HAARP for a while because of rumors such as this one. The lab has also been erroneously credited with various supernatural occurrences (backward-walking caribou) and secret contact with extraterrestrials (covered up by "men in black"). Most commonly, it's blamed for events caused by nature. The office phone rings after hurricanes, earthquakes, floods, wildfires, tornadoes, and typhoons, no matter where in the world they occur. A 2024 study found that HAARP was the subject of more than a million conspiracism-inflected posts on Twitter from January 2022 to March 2023, primarily about natural disasters. In early 2024, the far-right influencer Laura Loomer suggested that HAARP created a snowstorm to dampen turnout at the Iowa caucuses and thwart the Trump campaign. And when I visited HAARP this past November, calls were coming in about whether the facility had caused Hurricane Melissa, which had recently swept through Jamaica and Cuba, resulting in at least 88 fatalities and billions of dollars in damage.

[Adam Serwer: Gullible, cynical America]

All of this anxiety is focused on a unique research instrument housed at HAARP, which is owned by the University of Alaska Fairbanks and was originally built by the military at a cost of $290 million. "The array," as the instrument is called, is a grid of 180 transmitters that each sit atop a 72-foot-tall post, arranged in a clearing and surrounded by Alaskan wilderness. You could call it the world's highest-powered radio transmitter, but more precisely it's the world's most powerful ionospheric heater (which sounds…

- What Anthropic's Clash With the Pentagon Is Really About
The weekslong conflict between Anthropic and the Department of Defense is entering a new phase. After DOD designated the company a supply-chain risk last week, a move that effectively forbids Pentagon contractors from using its products, the AI company filed a lawsuit against DOD this morning alleging that the government's actions were unconstitutional and ideologically motivated. Then, this afternoon, 37 employees from OpenAI and Google DeepMind—including Google's chief scientist, Jeff Dean—signed an amicus brief in support of Anthropic, in essence lending support to one of their employers' greatest business rivals (even as OpenAI itself has established a controversial new contract with DOD).

The standoff is unprecedented. For the past few weeks, Anthropic has been in heated negotiations with the Pentagon over how the U.S. military can use the firm's AI systems. Anthropic CEO Dario Amodei had refused terms that would have seemingly allowed the Trump administration to use the company's AI systems for mass domestic surveillance or to power fully autonomous weapons, leading DOD officials to accuse Amodei of "putting our nation's safety at risk" and of having a "God-complex."

Nobody knows how this dispute will end. A spokesperson for Anthropic told me that the lawsuit "does not change our longstanding commitment to harnessing AI to protect our national security" and that the firm will "pursue every path toward resolution, including dialogue with the government." A DOD spokesperson told me that the department does not comment on litigation.

[Read: Inside Anthropic's killer-robot dispute with the Pentagon]

But a conflict like this was inevitable, and more are sure to come. The government does not have anything close to a legal framework for regulating generative AI or, for that matter, online data collection. There are few legal, externally enforced guardrails on the use of AI in autonomous weaponry, and fewer still on how AI can be used to process the huge amounts of information that federal agencies can collect on people: location data, credit-card purchases, browsing-history data, and so on. Because the laws are loose, Anthropic and OpenAI have been able to set their own privacy policies and guidelines for how AI can and cannot be used, and then change them at will; OpenAI, Meta, and Google, for instance, have all reversed previous restrictions on military applications of AI. But this cuts in the other direction as well: Anthropic has effectively been branded an enemy of the state for opposing the administration's desire to be able to use…

- Polymarket Is Going to Get Someone Killed
Ayatollah Ali Khamenei was not, it's safe to assume, a devoted Polymarket user. If he had been, the Iranian leader might still be alive. Hours before Khamenei's compound in Tehran was reduced to rubble last week, an account under the username "magamyman" bet about $20,000 that the supreme leader would no longer be in power by the end of March. Polymarket placed the odds at just 14 percent, netting "magamyman" a profit of more than $120,000.

Everyone knew that an attack might be in the works—some American aircraft carriers had been deployed to the Middle East weeks earlier—but the Iranian government was caught off guard by the timing. Although the ayatollah surely was aware of the risks to his life, he presumably did not know that he would be targeted on this particular Saturday morning. Yet on Polymarket, plenty of warning signs pointed to an impending attack. The day before, 150 users bet at least $1,000 that the United States would strike Iran within the next 24 hours, according to a New York Times analysis. Until then, few people on the platform were betting that kind of money on an immediate attack.

Maybe all of this sounds eerily familiar. In January, someone on Polymarket made a series of suspiciously well-timed bets right before the U.S. attacked a foreign country and deposed its leader. By the time Nicolás Maduro was extracted from Venezuela and flown to New York, the user had pocketed more than $400,000. Perhaps this trader and the Iran bettors who are now flush with cash simply had the luck of a lifetime—the gambling equivalent of making a half-court shot. Or maybe they knew what was happening ahead of time and traded on it for easy money. We simply do not know.

Polymarket traders swap crypto, not cash, and conceal their identities through the blockchain. Even so, investigations into insider trading are already underway: Last month, Israel charged a military reservist with using classified information to make unspecified bets on Polymarket.

The platform forbids illegal activity, which includes insider trading in the U.S. But with a few taps on a smartphone, anyone with privileged knowledge can now make a quick buck (or a hundred thousand). Polymarket and other prediction markets—the sanitized, industry-favored term for sites that let you wager on just about anything—have been dogged by accusations of insider trading in markets of all flavors. How did a Polymarket user know that…
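The figures in that opening paragraph are worth a quick sanity check. The sketch below assumes the simplest binary-market model, in which a "Yes" share costs the quoted probability in dollars and redeems for $1 if the event occurs; fees, slippage, and the trader's exact fill price are not reported and are ignored here.

    # Back-of-the-envelope check of the reported Polymarket figures.
    # Assumes a simple binary market: a "Yes" share costs the quoted
    # probability (in dollars) and pays $1 if the event occurs.
    # Fees and the trader's exact fill price are ignored.
    stake = 20_000   # reported wager, in dollars
    price = 0.14     # implied odds quoted by the market

    shares = stake / price   # ~142,857 shares bought
    payout = shares * 1.00   # each winning share redeems for $1
    profit = payout - stake  # ~$122,857, i.e. "more than $120,000"

    print(f"profit = ${profit:,.0f}")

Under those assumptions, the reported numbers hang together: a $20,000 stake at 14-cent shares yields roughly $122,857 in profit, consistent with the "more than $120,000" figure above.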





