We invented wheels and compasses and chocolate chip cookie dough ice cream and the Eames lounge chair and penicillin and E = mc² and beer that comes in six-packs and guns and dildos and the Pet Rock and Doggles (eyewear for dogs) and square watermelons. “One small step for man.” We came up with the Lindy Hop and musical toothbrushes and mustard gas and glow-in-the-dark Band-Aids and paper and the microscope and bacon—fucking bacon!—and Christmas. “Ma-ma-se, ma-ma-sa, ma-ma-ko-ssa.” We went to the bottom of the ocean and into orbit. We sucked energy from the sun and fertilizer from the air. “Let there be light.” We created the most amazing pink flamingo lawn ornaments that come in packs of two and only cost $9.99!
In a universe that stretches an estimated 93 billion light-years in diameter with 700 quintillion (7 followed by 20 zeros) planets—here, on this tiny little blue dot we call Earth, one of us created a tool called a spork. The most astounding part is that while that same universe is an estimated 13.8 billion years old, we did everything in just under 6,000 years.
All of this in less than 200 generations of human life.
Now we’ve just created a new machine that is made of billions of microscopic transistors and aluminum and copper wires that zigzag and twist and turn and are interconnected in incomprehensible ways. A machine that is only a few centimeters in width and length.
A little tiny machine that may end up being the last invention humans ever create.
This all stems from an idea conceptualized in the 1940s and finally figured out a few years ago, one that could solve all of the world’s problems or destroy every single human on the planet in the snap of a finger—or both. Machines that will potentially answer all of our unanswerable questions: Are we alone in the universe? What is consciousness? Why are we here? Thinking machines that could cure cancer and allow us to live until we’re 150 years old. Maybe even 200. Machines that, some estimate, could take over up to 30 percent of all jobs within the next decade, from stock traders to truck drivers to accountants and telemarketers, lawyers, bookkeepers, and all things creative: actors, writers, musicians, painters. Something that will go to war for us—and likely against us.
Thinking machines that are being built in a 50-square-mile speck of dirt we call Silicon Valley by a few hundred men (and a handful of women) who write in a language only they and computers can speak. And whether we understand what it is they are doing or not, we are largely left to the whims of their creation. We don’t have a say in the ethics behind their invention. We don’t have a say over whether it should even exist in the first place. “We’re creating God,” one AI engineer working on large language models (LLMs) recently told me. “We’re creating conscious machines.”
Already, we’ve seen creative AIs that can paint and draw in any style imaginable in mere seconds. LLMs can write stories in the style of Ernest Hemingway or Bugs Bunny or the King James Bible while you’re drunk with peanut butter stuck in your mouth. Platforms that can construct haikus or help finish a novel or write a screenplay. We’ve got customizable porn, where you can pick a woman’s breast size or sexual position in any setting—including with you. There’s voice AI software that can take just a few seconds of anyone’s voice and completely re-create an almost indistinguishable replica of them saying something new. There’s AI that can re-create music by your favorite musician. Don’t believe me? Go and listen to “Not” Johnny Cash singing “Barbie Girl,” Freddie Mercury intoning “Thriller,” or Frank Sinatra bellowing “Livin’ on a Prayer” to see just how terrifying all of this is.
Then there’s AI-driven drug discovery. People using AI therapists instead of humans. Others are uploading voicemails from loved ones who have died so they can continue to interact with them by talking to an AI replica of a dead parent or child. There are AI dating apps (yes, you date an AI partner). It’s already being used for misinformation in politics, creating deepfake videos and fake audio recordings. The US military is exploring using AI in warfare—and could eventually create autonomous killer robots. (Nothing to worry about here!) People are discussing using AI to create entirely new species of animals (yes, that’s real) or viruses (also real). Or exploring human characteristics, such as creating a breed of super soldiers who are stronger and have less empathy, all through AI-based genetic engineering.
And we’ve adopted all of these technologies with staggering speed—most of which have been realized in just under six months.
“It excites me and worries me in equal proportions. The upsides for this are enormous, maybe these systems find cures for diseases, and solutions to problems like poverty and climate change, and those are enormous upsides,” said David Chalmers, a professor of philosophy and neural science at NYU. “The downsides are humans that are displaced from leading the way, or in the worst case, extinguished entirely, [which] is terrifying.” As one highly researched economist report circulated last month noted, “There is a more than 50-50 chance AI will wipe out all of humanity by the middle of the century.” Max Tegmark, a physicist at the Massachusetts Institute of Technology, predicts a 50 percent chance of demise within the next 100 years. Others put the odds far lower. In July, a group of researchers, including experts in nuclear war, bioweapons, AI, and extinction, and a group of “superforecasters”—general-purpose prognosticators—did their own math. The “experts” deduced that there was a 20 percent chance of a catastrophe by 2100 and a 6 percent chance of an extinction-like event from AI, while the superforecasters had a more positive augury of a 9 percent chance of catastrophe and only a 1 percent chance we’d be wiped off the planet.
It feels a little like picking the extinction lottery numbers—and even with a 1 percent chance, perhaps we should be asking ourselves if this new invention is worth the risk. Yet the question circulating around Silicon Valley isn’t if such a scenario is worth it, even with a 1 percent chance of annihilation, but rather, if it is really such a bad thing if we build a machine that changes human life as we know it.
Larry Page is not an intimidating-looking man. When he speaks, his voice is so soft and raspy from a vocal cord injury, it sounds like a campfire that is trying to tell you something. The last time I shook his hand, many, many years ago, it felt as soft as a bar of soap. While his industry peers, like Mark Zuckerberg and Elon Musk, are often performing public somersaults with pom-poms for attention, Page, who cofounded Google and is on the board of Alphabet, hasn’t done a single public interview since 2015, when he was onstage at a conference. In 2018, when Page was called before the Senate Intelligence Committee to address Russian election meddling, online privacy, and political bias on tech platforms, his chair sat empty as senators grilled his counterparts.
While Page stays out of the limelight, he still enjoys attending dinners and waxing poetic about technology and philosophy. A few years ago a friend found himself seated next to Page at one such dinner, and he relayed the story to me: Page was talking about the progression of technology and how it was inevitable that humans would eventually create “superintelligent machines,” also known as artificial general intelligence (AGI): computers that are smarter than humans. In Page’s view, once that happened, those machines would quickly find no use for us humans, and they would simply get rid of us.
“What do you mean, get rid of us?” my friend asked Page.
Like a sci-fi writer delivering a pitch for their new apocalyptic story idea, Page explained that these robots would become far superior to us very quickly, and that if we were no longer needed on earth, then that was simply the natural order of things, and, I quote, “it’s just the next step in evolution.” At first my friend assumed Page was joking. “I’m serious,” said Page. When my friend argued that this was a really fucked up way of thinking about the world, Page grew annoyed and accused him of being “specist.”
Over the years, I’ve heard a few other people relay stories like this about Page. Musk is one of them: while being interviewed on Fox News earlier this year, he explained that he used to be close with Page but that they no longer talked after a debate in which Page called Musk “specist” too. “My perception was that Larry was not taking AI safety seriously enough,” Musk said. “He really seems to want digital superintelligence, basically digital God, if you will, as soon as possible.”
Let’s just stop for a moment and unpack this. Larry Page…the founder of one of the world’s biggest companies…a company that employs thousands of engineers that are building artificial intelligence machines right now, as you read this…believes that AI will, and should, become so smart and so powerful and so formidable and…and…that one day it won’t need us dumb pathetic little humans anymore…and it will, and it should, GET RID OF US!
“If Larry Page said, ‘I’m going to obliterate the planet with a nuke and nuking the entire planet is just the natural order of things and so we shouldn’t mourn it,’ we would all say, ‘What the fuck, that’s a terrible idea!’ ” said Nate Soares, executive director of the Machine Intelligence Research Institute, a nonprofit focused on identifying and managing potential existential risks from AGI. (Page did not respond to a request for comment for this article.)
“All of the people leading the development of AI right now are completely disingenuous in public,” a political lobbying consultant told me. “They are all just in a race to be the first to build AGI and are either oblivious to the consequences of what could go wrong or they just don’t care.” This is evidenced by the fact that a large swath of Silicon Valley has now shifted to the goal of creating superintelligent machines.
A lot of the people I spoke to for this story, including AI philosophers, US senators, and business leaders, worry that the guardrails around AI, such as they are, could fall faster than we realize. Some predict it will be near-term catastrophic. “I think there’s a good chance my friends’ kids will never grow up,” Soares said. “If I had a child today, I wouldn’t expect to see their eighth birthday.” In other words, in Soares’s view, no one will be around on earth within the next decade.
Soares might sound hyperbolic, and indeed other experts, like Kevin Kelly, one of the founders of Wired magazine, argue that the probability of human extinction is incredibly low, and that such a scenario (if possible) is very far away. But even if there is a 1 percent chance that Soares’s dark reality is possible, is it worth plowing forward with such speed?
While the square watermelon and the spork should go down in the history of the universe as perhaps among our most creative inventions—let’s not forget Doggles, either—humans’ most impactful creations are, in fact, our stories. Incredible and terrifying and beautiful stories that we imagined and then told. Some about good and evil, others about wizards and goblins, nice little green men and evil stepsisters. About the Buendía family and another set in Middle-earth and then a story about a young scientist who creates a sapient creature in an unorthodox scientific experiment. “I said a hip, hop, the hippie, the hippie, to the hip hip-hop….” Our stories are told in words and music and art of all kinds. We didn’t just invent the piano, we used it to compose Nocturne in E-flat Major, Op. 9, No. 2. We made art like The Last Supper, The Thinker, Duchamp’s urinal (I guess you can call that art). And then…ACTION! The Godfather, Do the Right Thing, The Great British Bake Off, and more recently, a real human innovation, MILF Manor.
And we did all that in just a few hundred years. But the leaders in Silicon Valley seem to think it’s time that we outsource all of that time and thinking to their AIs. “There are really deep spiritual questions at hand here. I don’t think policymakers should be shy about talking about that,” Senator Chris Murphy told me. “When you start to outsource the bulk of human creativity to machines, there comes with that a human rot.” Murphy has been one of the most outspoken senators about AI, regularly writing about his concerns, and he is on numerous subcommittees that are probing how AI will affect society. He estimates that computers will begin replacing human creativity at a staggering scale within the next two to three years, and it scares the hell out of him. “If we stop existing in the way we exist today and transfer all of our functions to machines, that becomes a pretty empty existence.”
The AI promoters out there say that their products are only going to make us more creative. “The creative arts will enter a golden age, as AI-augmented artists, musicians, writers, and filmmakers gain the ability to realize their visions far faster and at greater scale than ever before,” billionaire venture capitalist Marc Andreessen recently wrote in a 7,000-word screed about all the ways AI will make life better (ironically, he didn’t mention that it will also make him richer, given that his venture firm has invested hundreds of millions of dollars in AI companies, including OpenAI, Ambient.ai, and Character.ai). The AI dreamers, like Andreessen, argue that the upcoming AI revolution will mirror the Industrial Revolution, destroying some jobs but creating new, superior ones. (“AI prompt engineer” has become an in-demand job of late.)
Indeed, the Industrial Revolution fostered great productivity and economic growth, introduced novel industries, improved living standards, and alleviated hunger. However, it also caused appalling working conditions, including child labor, and increased pollution, resulting in health issues and climate change. Wealth disparity and social unrest surged. The global population has grown eightfold since that period, meaning the potential for disruption is significantly higher this time. Most importantly, the Industrial Revolution took place over around 80 years; the AI revolution will occur in two or three.
For some, it’s already too late. Reddit is littered with posts by people who have seen their writing jobs as copywriters or marketers handed over to an AI. “Lost my main client to fucking ChatGPT,” a freelance writer posted earlier this year. “I need to retrain for a new career.” (Which career you can retrain for that won’t be taken by AI is still up for debate—so far all I’ve come up with is plumber, elder care, and AI prompt writer.) While some in developed countries are quickly going to be out of work, there is a scenario where people in India and the Philippines pick up those prompt-writing jobs and use AI to write and illustrate stories. Google has already started pitching The New York Times, The Washington Post, and The Wall Street Journal on its new AI tool for producing news stories. People who made a living doing voiceover for books, TV, and podcasts are already seeing their art form replaced by a slew of AI start-ups. And budding artists who were setting off for art school to study painting, illustration, and graphic design are rethinking what to do with their futures.
“We shouldn’t be okay with machines taking over our creativity,” said Paul Kedrosky, an investor and economics researcher who is the cofounder of SK Ventures, which invests in AI and tech. “Language and creativity are the substrate of society, and we shouldn’t be giving up control to machines, even if we can. That’s what makes society rewarding and valuable.” Kedrosky isn’t completely anti-AI; he believes that we should build things that help “humans flourish rather than making humans redundant,” and he invests in those kinds of AI products accordingly. Kedrosky and Murphy (and plenty of others I’ve spoken to) argue that just because we can doesn’t mean we should—especially when it comes to something as fundamental as creativity. They argue that we need to regulate these machines as quickly as possible.
While the Andreessens of the tech world think regulation will hamper innovation, Murphy and a slew of other congresspeople are hoping to start an entirely new regulatory body, akin to the FCC, created after the invention of radio, or the Nuclear Regulatory Commission, created to oversee nuclear energy, and the countless other agencies that have come along with the advent of new inventions. “Technology is not a force of nature, it’s not a universal feature,” Kedrosky said. “To say that it’s not our responsibility to stop it is just a nihilistic abdication of responsibility.”
In 1965 the statistician I.J. Good, when envisioning what the world would look like once we created ultraintelligent machines, said that the second machines became smarter than people, there would “unquestionably be an intelligence explosion” as machines quickly created smarter machines, and that “the intelligence of man would be left far behind.” We’d likely understand what they were doing in the same way our pets understand the words of a book we read aloud. “Thus,” Good wrote, “the first ultraintelligent machine is the last invention that man need ever make.”
In 2017, an obscure but groundbreaking paper titled “Attention Is All You Need” was presented at the Neural Information Processing Systems conference in Long Beach, California. But few people in the audience knew that the paper, written by a small group of Google engineers, was set to change everything in artificial intelligence. Illia Polosukhin, a Ukrainian-born engineer who worked on the research and is named in the paper, explained to me that the breakthrough was an architecture called the “transformer” (a term borrowed from the movie Transformers), built around a mechanism called attention that lets a machine weigh every word in a sequence at once, acting more like the human brain and less like a conventional computer, to generate human-like text or make predictions. “The first time we tried it, it was not bad. It worked surprisingly well. It was like seeing the first sign of life,” Polosukhin told me. “While it was primitive, it was really powerful.”
Over the next few years, AI start-ups skyrocketed. “We’re seeing more AI-related products and advancements in a single day than we saw in a single year a decade ago,” a Silicon Valley product manager told me. “It’s almost impossible to keep up.” There are now more than 14,700 AI start-ups in the United States alone (and an estimated 58,000 worldwide). And the top AI companies are raising $3 billion a month in funding, per investment tracker Crunchbase. Last year, AI revenue accounted for $51.27 billion of the global economy. Eight years from now (if we survive that long) PwC predicts AI will account for $15.7 trillion of the global economy—more than three times Japan’s entire GDP.
As a result of the unbelievable financial upside, almost everyone in tech is now clamoring to work in the field. San Francisco’s Hayes Valley has been dubbed Cerebral Valley, as it is now home to dozens of commune-like AI hacker houses. One of these, called HF0, is an estimated $16 million mansion off Alamo Square where the founder, Dave Fontenot, provides housing, food, laundry, and $500,000 in funding in exchange for a 2.5 percent ownership stake in whatever is created there.
One of the major worries with these collectives, and with AI development in general, is that it is following the path of almost all previous technology development. In other words, it’s mostly tech bros working in this arena and very few women.
“It’s fucking maddening,” said May Habib, who, as the cofounder and CEO of Writer—an AI start-up that helps companies write in a consistent style and voice—is one of the exceptions. “There are no women in AI.” Habib, who moved to Canada in the 1990s as part of a Lebanese refugee program, said that most of her top C-suite jobs are filled with women, but that is far from the norm at AI companies in the Valley. She worries that the male-centric view from these men—often young men—will have far-reaching implications for society in the long run, including vastly biased models. “The tech bros’ response is that humans are biased, and so should AI,” Habib said to me. Worse, Habib noted, they all preach the same old Silicon Valley trope that they’re working in this space to make the world a better place, but at the end of the day, it’s all about the money. “You look around AI today and everyone is a generative AI capitalist,” Habib told me. “The way they sell, what they build, their vision for the future, is that it’s all about money.”
While Murphy isn’t in the Valley, from what he’s seen, he couldn’t agree more. “I think there will be a monster amount of money behind AI, everyone in Silicon Valley is going to try to build it as quickly as possible, not do what is necessarily safe for humanity,” he said. Nowhere does that thesis ring truer than the Pioneer Building in San Francisco, home of OpenAI.
Sam Altman is a god. An AI messiah. He’s fawned over in news articles. Doted on in interviews. This spring, Altman traveled around the world meeting with the presidents, prime ministers, and chancellors of more than two dozen countries on six continents, including France, England, Nigeria, Israel, the United Arab Emirates, Japan, Singapore, and Indonesia, to preach the benefits of AI and, most specifically, of the company he helms, OpenAI.
But Altman wasn’t always adored this way. A decade ago, for example, when people could earn badges on the app Foursquare (like the “bender” badge, for going out four nights in a row), the “douchebag” badge was an ode to Altman, colored pink and green to match the pink and green polo shirts (yes, two at the same time) Altman wore onstage at an Apple conference. For years on Twitter, his advice read like a fortune cookie from Panda Express—“the real risk in life is regret”—proclamations that were often passed around between tech execs with perplexity and the rolling-eyes emoji. When a picture of Altman surrounded by throngs of people gazing rapturously and taking photos of him was recently posted on the OpenAI website, one Valley insider said, “He looks like he thinks he’s Gandhi.”
OpenAI has become the red-hot center of the AI arms race, and the company now faces an impossible situation: moving the technology forward to stay ahead of a horde of competition while not destroying humanity in the process.
When I spoke to Mira Murati, OpenAI’s chief technology officer, about this, she acknowledged what was at stake but also what the world stands to gain from the most important technological advancement in human history. And while AI Gandhi is out there on his world tour, Murati is the one tasked with the actual challenges of building the technology that could save the world, destroy it, or produce any number of scenarios in between. “I certainly worry about the pace of technological progress, particularly as it relates to society’s ability to adapt to these changes,” Murati said to me. “We worry about this every day, that’s why we’re here. But I also think it is futile to think that the way to a good outcome, so to speak, is to slow down or stop innovation.” She added: “Even if I quit, even if a bunch of my colleagues quit, technological progress will move forward.”
Unlike the Marc Andreessens of the world, who act like AI is going to be all rainbows, sunshine, and fairy dust, Murati, to my surprise, admitted that things will absolutely go wrong with AI, but that her job is to ensure that when they do, they can be stopped and fixed as quickly as possible. “So I don’t think the goal is to have absolutely no risks,” she said. “It is to reduce the amount of risk in the near term, and to be able to respond very quickly when it happens.” Murati noted that the company recently devoted $1 billion to do the research necessary to ensure OpenAI’s solutions do not obliterate the planet, though there are no guarantees. (Altman recently said something similar to a Senate subcommittee, noting that “I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that.”)
OpenAI has also not been shy about outlining its goals as a company, which is to build machines with that fabled superintelligence, computers that are exponentially smarter than humans. This means that LLMs would have to expand beyond language and start to master perception and reasoning, and start to pursue the holy grail of AI, which is self-supervised learning, self-awareness, and self-improvement. (It’s unclear if consciousness is a requirement in order to reach these milestones.)
Altman believes, as he told the Times, that AGI will bring the world prosperity and wealth like no one has ever seen. And Altman seems unstoppable in his quest to be the first to build it. Right now the company is walking a fine line between being liked and being up-front. When Altman went before Congress earlier this year to talk about the potential of AI, some in the industry felt he was being disingenuous with his calls to regulate. “It’s like inviting the head of the gun manufacturers to come before Congress to talk about how safe guns are for society,” an AI executive told me about Altman. Indeed, his actions are sometimes different from the words coming out of his mouth. While Altman was on his world tour in June talking about the need for “global AI regulation,” behind the scenes, according to documents obtained by Time, he was aggressively pressing the European Union to water down its AI Act, specifically to not classify OpenAI’s tools as “high risk,” which would have subjected the company to stringent legal requirements—including transparency, traceability, and human oversight.
To be fair to Altman and OpenAI, most leaders in Silicon Valley who get called before Congress say that they need to be regulated—ahem, Mark Zuckerberg—when in reality, behind the scenes, they fight it tooth and nail. “I think it’s really easy to make broad statements about wanting to be regulated, knowing the devil’s in the details,” Murphy said about all the tech companies in Silicon Valley talking about regulation and AI. “They are going to talk a big game when it comes to regulation and the potential serious downside, but in the end, they are going to monetize this as bigly and quickly as they can.”
For Altman, it’s as if he’s trying to run a marathon on the edge of a razor blade. He needs the support of big business to grow OpenAI and expand, but at the same time, AI is going to obliterate millions of jobs worldwide, and quickly. For example, call centers in India make up 8 percent of that country’s GDP. In Brazil, it’s 6.3 percent. The US, despite having already outsourced millions of those jobs to places like India and Brazil, still employs 3.4 million people who work in call centers. Call-center work could be among the first jobs completely replaced by AI, and the repercussions could be disastrous. “Take away 8 percent of a country’s GDP and what do you think will happen?” Kedrosky said. “You’re going to see pitchforks in the streets.”
One AI CEO I spoke with said that the CIOs of Fortune 500 companies are very vocal about cutting their workforce in half in the coming years—then it will likely be cut in half again and again, until a handful of employees are overseeing LLMs to do the same work thousands of people used to do. According to a report by the outplacement service provider Challenger, Gray & Christmas, which tracks layoffs across the United States, 5 percent of people who were laid off in the first quarter of this year lost their jobs to AI. Now, while that number isn’t staggering by any measure (yet), what is distressing is that this is the first time in the 30-year history of the company’s report that it has cited AI as a reason for layoffs.
Murati seems to genuinely believe that if OpenAI gets this right, the upsides could save humanity from itself, solving a long list of problems from world hunger to education to energy crises. “It seems like when this problem of existential risk comes up, it dilutes the importance of the very present risks that we’re dealing with today that we haven’t solved and require a lot of engagement and the attention from everyone in [the AI] space,” she said.
Just in case, though, Altman seems to have a backup plan: As he told The New Yorker in 2016, he has “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.” You know, just in case the world is destroyed by AI or a synthetic virus.
“Full artificial intelligence could spell the end of the human race.” “AI will save the world.” “The risk of something seriously dangerous happening is in the five-year time frame.” “It’s easier to imagine all the things that can go wrong than it is to imagine the ones that go right, and a lot of things will go right.” “With AI we’re summoning the demon.” “Humanity being destroyed as a result of the AI path that we’re on, I just don’t buy that.” “We need to shut it all down.”
All completely different views on how this will play out. All from experts who understand it better than anyone. This, to me, is the most curious thing about the people who work in AI: They all read (or write) the same AI research; listen to (or talk on) the same podcasts; attend (or speak at) the same conferences. And yet some are AI doomers and others AI dreamers.
Kelly, the Wired cofounder, believes it’ll be mostly good. But even so, he’s not sure how it will eventually play out. “There are basically four kinds of relationships that we’ll have with robots and AI,” he explained. “The first is we treat them like pets. The other is we treat them as partners, working alongside them. Then the scariest version is that we treat them like slaves—and that kind of relationship is incredibly corrosive to the owner.” (This would be akin to our toddlers yelling at Alexa and Siri because that’s how their parents talk to these machines, only on a much grander scale.) Finally, there’s the last scenario, that we treat them like gods. “That is what the AI doomers do. They believe the AI will remake itself into a god, with godlike powers, and in a dystopian act of supremacy, the gods will overwhelm us and take our place. So now we have to appease the AI gods and make sure we are ‘aligned,’ so they treat us nicely.”
For doomers, that seems to be the best-case scenario.
Folks like Soares, from the Machine Intelligence Research Institute, worry that the end is nigh, and there are countless ways we could be destroyed. There’s the “Gray Goo Scenario,” where self-replicating nanobots created by AI with the intention of consuming harmful cells or pathogens accidentally (or intentionally) spiral out of control, endlessly replicating until they turn into a substance known as gray goo. Another example is the famous “Paper Clip Problem,” where an AI is instructed to maximize paper clip production and relentlessly continues its task, eventually converting the entire planet into paper clips and eliminating humans that stand in its way. Going by the posts of Eliezer Yudkowsky, an AI researcher, even a seemingly innocent request for a replica of a strawberry could result in killing us all in a split second. Similarly, if we ask an AI to solve climate change, it might eliminate humans as the most straightforward solution. (Honestly, we’d have that coming.)
There are so many scenarios we can’t even imagine if we don’t get this right. What I’ve found from talking to dozens of people about this new invention and what it might bring is that, for most of them, if you work in this field long enough, you eventually see how it could all go terribly wrong—and it scares the living shit out of you.
When the 1956 Dartmouth Summer Research Project on Artificial Intelligence was held, where the term AI was coined, everyone in attendance saw the positives. Over the ensuing decades, they all came to see the vast potential downsides. Marvin Minsky, who devoted his entire career to AI and the development of superintelligent machines, was always optimistic about what it could do but also worried that AI could become powerful enough to pose a threat to humanity. Lately, more and more people, like Geoffrey Hinton, who has worked in the field of AI for over 50 years, and the founders of Anthropic, an AI company created by a group of former OpenAI engineers, have been terrified by what AI could do to the world. (Anthropic still went ahead and built its own AI chatbot, called Claude, which hopefully won’t destroy humanity.)
Which brings me back to all of those predictions from the experts. You know, the ones that seem almost too surreal to be true. The ones that say we have a 100 percent chance of extinction from AI in the next decade, or the others that give us better odds: 50 percent in 100 years; 20 percent, 9, or even 1. You don’t have to be hyperbolic to see how, whether we like it or not, we’re all being dragged across that razor blade. After all, modern human civilization is only about 6,000 years old. Just 200 generations. And yet, in the last century alone—in just over 1 percent of that time—we’ve developed nuclear weapons, biological weapons, and now autonomous weapons. The personal computer is around 50 years old. The iPhone, 16. Today’s AI, five.
Numerous government studies published over the past 78 years, since the first atomic bomb was detonated in New Mexico, have estimated that a full-scale nuclear war would kill hundreds of millions of people, and the subsequent nuclear winter, a theorized period of prolonged cold and darkness caused by the fallout from the blasts, could kill hundreds of millions more. At most, a few billion people might die, but there is no scenario in which our entire species would disappear. The same is true of biological weapons and chemical warfare, which could kill thousands of people, and of guns, bombs, lasers, disease, and famine: none could finish off our entire species.
Artificial intelligence, however, is arguably the first technology that could wipe out everyone on the planet. Do your own math: Do you really think we’re going to make it another 6,000 years? Another 200 generations? As Kedrosky put it, if we continue unmitigated across this razor blade, the odds are simply inevitable: “Given enough time, and enough AI coin flips, eventually everything goes boom.”
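Kedrosky’s coin-flip line is, at bottom, a claim about compounding probability, and a toy sketch makes it concrete. The per-decade risk figure below is an arbitrary assumption chosen for illustration, not an estimate from anyone quoted here; the point is only that a small, constant chance of catastrophe, repeated over enough flips, drives the odds of survival toward zero.

```python
def survival_probability(per_decade_risk: float, decades: int) -> float:
    """Chance of surviving every decade, assuming a constant,
    independent risk of catastrophe in each one."""
    return (1 - per_decade_risk) ** decades

# With an assumed 1 percent risk per decade, the odds of lasting
# another 6,000 years (600 decades) collapse to a fraction of a percent:
print(round(survival_probability(0.01, 600), 4))  # ≈ 0.0024
```

Swap in any of the experts’ numbers and the shape of the curve stays the same; only the timescale changes.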