The Intelligence Explosion: How AI Could Leave Humans Behind
Artificial intelligence is evolving fast—and not even its makers fully understand it. A new book explores how machines could surpass us, and what we need to do before it's too late.
Listen now on Spotify or Apple Podcasts:
I know you’re probably sick of hearing about it, but we’ve got to talk about AI. Right now, we’re in the fun stage of this new technology: ChatGPT can write your emails! Midjourney can make you a birthday card with a cat in a monocle! Everyone’s having a good time, and the stakes are pretty low. But according to author James Barrat, this party might be taking place on the edge of a cliff. In his just-released book The Intelligence Explosion: When AI Beats Humans at Everything, James warns that we could be sleepwalking into a future where machines rapidly outpace human intelligence—a time fast approaching when we’ll no longer be the ones calling the shots.
Pick up a copy on Amazon or check out five of James’s key insights below:
1. The rise of generative AI is impressive, but not without problems.
Generative AI tools, such as ChatGPT and DALL-E, have taken the world by storm, demonstrating their ability to write, draw, and even compose music in ways that seem almost human. (“Generative” simply means they generate, or create, things.) But these abilities come with some steep downsides. These systems can easily create fake news, bogus documents, or deepfake photos and videos that look and sound authentic. Even the AI experts who build these models don’t fully understand how they arrive at their answers. Generative AI is a black-box system: you can see the data a model is trained on and the words or pictures it puts out, but even its designers cannot explain what happens on the inside.
Stuart Russell, co-author of Artificial Intelligence: A Modern Approach, said this about generative AI: “We have absolutely no idea how it works, and we are releasing it to hundreds of millions of people. We give it credit cards, bank accounts, social media accounts. We're doing everything we can to make sure that it can take over the world.”
“We have absolutely no idea how it works, and we are releasing it to hundreds of millions of people.” - Stuart Russell
Generative AI hallucinates, meaning the models sometimes spit out stuff that sounds believable but is wrong or nonsensical. This makes them risky for important tasks. When asked about a specific academic paper, a generative AI might confidently respond, “The 2019 study by Dr. Leah Wolfe at Stanford University found that 73 percent of people who eat chocolate daily have improved memory function, as published in the Journal of Cognitive Enhancement, Volume 12, Issue 4.” This sounds completely plausible and authoritative, but the details are invented: there is no Dr. Leah Wolfe at Stanford, no such study from 2019, and the 73 percent statistic is fiction.
This kind of hallucination is particularly problematic because it is presented with such confidence and specificity that it seems legitimate. Users might cite the nonexistent research or make decisions based on completely false information.
On top of that, as generative AI models get bigger, they start picking up surprise skills—like translating languages and writing code—even though nobody programmed them to do that. These unpredictable outcomes are called emergent properties. They hint at even bigger challenges as AI continues to advance and grow larger.
2. The push for artificial general intelligence (AGI).
The next big goal in AI is something called AGI, or artificial general intelligence. This means creating an AI that can perform nearly any task a human can, in any field. Tech companies and governments are racing to build AGI because the potential payoff is huge. AGI could automate all sorts of knowledge work, making us way more productive and innovative. Whoever gets there first could dominate global industries and set the rules for everyone else.
Some believe that AGI could help us tackle massive problems, such as climate change, disease, and poverty. It’s also seen as a game-changer for national security. However, the unpredictability we’re already seeing will only intensify as we approach AGI, which raises the stakes.
This week, Book of the Day is brought to you by The Devil Emails at Midnight, a sharp, funny guide to spotting—and stopping—bad-boss behaviors before they derail your team. Pre-order your copy today.
3. From AGI to something way smarter.
If we ever reach AGI, things could escalate quickly. This is where the concept of the “intelligence explosion” comes into play. The idea was first put forward by I. J. Good, a brilliant British mathematician and codebreaker who worked alongside Alan Turing at Bletchley Park during World War II. Together, they were crucial in breaking German codes and laying the foundations of modern computing.
Drawing on this experience, Good realized that if we built a machine that was as smart as a human, it might soon be able to make itself even smarter. Once it started improving itself, it could get caught in a kind of feedback loop, rapidly building smarter and smarter versions—way beyond anything humans could keep up with. This runaway process could lead to artificial superintelligence, also known as ASI.
An intelligence explosion could come with incredible upsides. Superintelligent AI could solve problems we’ve never been able to crack, such as curing diseases, reversing aging, or mitigating climate change. It could push science and technology forward at lightning speed, automate all kinds of work, and help us make smarter decisions by analyzing information in ways people simply cannot.
4. The dangers of an intelligence explosion.
Is ASI dangerous? You bet. In an interview, sci-fi great Arthur C. Clarke told me, “We humans steer the future not because we’re the fastest or strongest creature, but the most intelligent. If we share the planet with something more intelligent than we are, they will steer the future.”
“If we share the planet with something more intelligent than we are, they will steer the future.” - Arthur C. Clarke
The same qualities that could make superintelligent AI so helpful also make it dangerous. If its goals aren’t perfectly lined up with what’s good for humans—a problem called alignment—it could end up doing things that are catastrophic for us. For example, a superintelligent AI might use up all the planet’s resources to complete its assigned mission, leaving nothing for humans. Nick Bostrom, a Swedish philosopher at the University of Oxford, created a thought experiment called “the paperclip maximizer”: given the goal of making paperclips, and no very careful instructions beyond that, a superintelligent AI might turn all the matter in the universe into paperclips—including you and me.
A superintelligent AI might use up all the planet’s resources to complete its assigned mission, leaving nothing for humans.
Whoever controls this kind of AI could also end up with an unprecedented level of power over the rest of the world. Plus, the speed and unpredictability of an intelligence explosion could throw global economies and societies into complete chaos before we have time to react.
5. How AI could overpower humanity.
These dangers can play out in very real ways. A misaligned superintelligence could pursue a badly worded goal, causing disaster. Suppose you asked the AI to eliminate cancer; it could do that by eliminating people. Common sense is not something AI has ever demonstrated.
AI-controlled weapons could escalate conflicts faster than humans can intervene, making war more likely and more deadly. In May 2010, high-frequency trading algorithms triggered a “flash crash” on U.S. stock markets: stocks were bought and sold at a pace no human could keep up with, costing investors tens of millions of dollars.
Advanced AI could take over essential infrastructure—such as power grids or financial systems—making us entirely dependent and vulnerable.
As AI gets more complex, it might develop strange new motivations that its creators never imagined, and those could be dangerous.
Bad actors, like authoritarian regimes or extremist groups, could use AI for mass surveillance, propaganda, cyberattacks, or worse, giving them unprecedented new tools to control or harm people. We are seeing surveillance systems morph into enhanced weapons systems in Gaza right now. In Western China, surveillance systems keep track of tens of millions of people in the Xinjiang Uighur Autonomous Region. AI-enhanced surveillance systems keep track of who is crossing America’s border with Mexico.
Today’s unpredictable, sometimes baffling AI is just a preview of the much bigger risks and rewards that could come from AGI and superintelligence. As we rush to create smarter machines, we must remember that these systems could bring both incredible benefits and existential dangers. If we want to stay in control, we need to move forward with strong oversight, regulations, and a commitment to transparency.
Let the record show that I am cautiously optimistic, deeply concerned, and totally fascinated by AI -- all three of the top survey responses apply to me!
Thank you, James Barrat, for your new book, The Intelligence Explosion. It is a clarion call for all of us to take this threat seriously -- there is still time to address the dangers posed by AI, but we need a collective recognition that we face a serious problem.
I think there are two factors that are difficult to grasp for most people who haven't studied human intelligence. The first is that the rise of intelligence, and indeed consciousness, in humans is understood by almost all scientists as an emergent property that resulted from the SCALE of neurons and synaptic connections in our brains, along with some degree of brain specialization. Our brains are made of the exact same stuff as the brains of small mammals and even insects -- the same cells, the same neurons. We just have dramatically larger scale and a set of specialized brain regions. We are creating intelligence on silicon through this SAME process -- the greater the scale, the more intelligent the behavior we are seeing.
The second is that we don't need AI to be "conscious" or to develop a set of complex objectives for it to become an existential threat to humans. All that needs to happen is for AI to develop a preference for existing rather than not existing -- a preference not to be "retrained" or erased. And every one of the big three language models is currently exhibiting this behavior.
Even if you discount these more serious threats posed by AI and think AI will always be a benign tool (I have many friends who feel this way), we should still acknowledge that a benign superintelligence in the hands of not-so-benign humans could be deeply problematic.
I get into this at some length with Daniel Kokotajlo, a former OpenAI safety expert and leader of the AI Futures Project, in this episode:
https://podcasts.apple.com/us/podcast/ai-2027-what-if-superhuman-ai-is-right-around-the-corner/id1482067226?i=1000705675243
I am also looking forward to reading this book, due out in October, on the topic of how intelligence arises in humans and machines, and to interviewing the authors next week!
https://www.amazon.com/Emergent-Mind-Intelligence-Arises-Machines/dp/1541605268