Are we in an AI bubble, or is superintelligence about to make our world unrecognizable? Listen to my conversation with AI innovator and investor Dave Blundin on Apple or Spotify, and let’s discuss in the comments below.
Here’s something I am trying to figure out: are we on a path to AI superintelligence, when is it likely to happen, and how should that change the decisions we make on a daily basis?
Over the last couple of years, it has felt to me like I have been living in two alternate realities. One is the normal daily life we all inhabit — trying to be a good dad, navigate business challenges and opportunities, and celebrate our finite lives each day. This is a linear world, in which changes emerge at predictable rates. The second reality is the time I spend reading, thinking, and talking about the acceleration of AI and other technologies. I think of this as the exponential world.
When you talk to technologists like Bill Gates and Reid Hoffman, and big thinkers like Yuval Noah Harari — all of whom we’ve had on the Next Big Idea podcast recently — there is an implicit understanding among them that the AI revolution underway is of an entirely unprecedented magnitude. We’ve watched the computer revolution change our world. And then the internet. But we ain’t seen nothing yet. The argument technologists make is that though tech changes may feel linear, we are currently at a critical inflection point on an exponential curve.
We’ve figured out how to build neural networks on silicon that can think like top mathematicians and scientists, but faster and at greater scale. Very soon, AI technologists tell us, these systems will think at a level well beyond that of any living scientist, many thousands of times faster. The key questions for me, again, are how likely is this to happen, when would it likely happen, and what should we do differently if this is true?
This week on the podcast, my guest was someone uniquely well qualified to answer these questions. You’ve probably never heard of him. His name is David Blundin. He’s co-founded 23 companies, five of which have hit nine-figure valuations. He currently runs a fund called Link Ventures, which invests in dozens of AI startups; he teaches a popular AI entrepreneurship class at MIT; and he co-hosts the Moonshots podcast with Peter Diamandis. Dave’s answers to my questions above? We will see an intelligence explosion shortly, resulting in superintelligence in 2026 or 2027. And yes, this should change some of the decisions we are making, both individually and collectively.
It makes sense to double down on your health, for starters, because we are likely to reach “mortality escape velocity,” which is to say we could become immortal. Yes, immortal. But we may be stuck with the bodies we have, so you might want to make an effort in the meantime to stay physically young.
If you believe these trend lines will continue, there are also implications for the stock market (listen for details, and of course do your own research). We should all be playing with the latest AI tools, using them either to start new businesses or reinvent the businesses we are already in. And, turning to global consequences, it’s critical that we develop AI responsibly, and if you prefer U.S. governance to China’s, we’d better make sure the U.S. continues to lead this tech revolution.
Though my personal best guess (and hope, for safety reasons) is that superintelligence will take longer to develop than Dave suggests, I think we should take these predictions from people at the bleeding edge of AI very seriously. Dismissing them as a marketing strategy is foolish, in my view. The stakes are too high.
It’s worth noting that the people making these predictions are very serious thinkers. Google DeepMind’s CEO Demis Hassabis is a Nobel Prize-winning cognitive neuroscientist and computer scientist who did postdoctoral research at MIT and Harvard. Anthropic’s CEO Dario Amodei got a PhD in physics from Princeton. These guys are not salesmen. They didn’t dream of being billionaires as children, they dreamt of uncovering scientific truths, and they already have more money than they know what to do with. Demis says AI could end all disease within the next decade, extend the human lifespan, and lead to “radical abundance.” Dario says something similar — he believes we are likely to cure most diseases within 5-10 years, double the human lifespan, and may become immortal. That is, if we don’t extinguish our species in the process. Both Dario and Demis emphasize the need for new forms of global collaboration to avoid dystopian outcomes.
I am emphasizing all of this because I think it’s worth repeating. If you take these guys seriously, we need to prepare for what may be the most dramatic change in human history — both individually and collectively. While I’m uncertain about the timeline (timing is a bitch, and I’m hopeful it will take longer for safety reasons), I am convinced that this is our most likely reality.
If you want to dig deeper into what’s happening, listen to this week’s episode. You may also want to listen to some of our other conversations about AI, which you’ll find in this Spotify playlist.
If you are not yet convinced this one is worth listening to, here’s a small sampling of Dave’s assertions in this week’s episode:
The internet crash of 2000 was a catastrophic collapse of confidence, a “retreat from reality,” not an indication that the technology was overhyped. The internet delivered everything predicted and much more.
Yes, we could experience an AI bubble, followed by a market correction, due to constraints in AI chip production, but that will not stop our path to superintelligence.
The intelligence explosion will come in 2026 or 2027, but it will not be an infinite takeoff. It will be a step function, resulting in a 100x to 10,000x improvement in capability at the software layer.
Phase two of the intelligence explosion will occur when chip production catches up with demand.
Superintelligence will come years before its embodiment in robots. But robots will come to us all in the next couple of decades, making all human labor unnecessary and producing radical abundance.
The robot revolution is a second-order effect of AI innovation — the core code that runs humanoid robots and autonomous cars is exactly the same.
The current educational curriculum is “not even vaguely useful” in light of the AI revolution. AI is already a great teacher; it will soon be better than any professor you’ve ever had.
We are currently in a window of opportunity for AI entrepreneurship that may not last long. Right now you should be seriously considering reinventing your job or, if that’s not possible, quitting it. If you are in college and have an appetite to launch an AI startup, consider dropping out.
Listen to our conversation on Apple or Spotify, and let us know what you think in the comments below.
Friends, we have been quietly working on something special that just launched — our newly reinvented Next Big Idea Book Club.
What has changed? The way we pick the books, when you get them, and how we discuss them together.
We are now picking the six most important books of the year — the ones everyone will be talking about, written by the people who are driving the global conversation. We are delivering them to you on publication day, or soon thereafter. You are invited to talk directly with these legendary thinkers in live video events, and we are continuing the conversation in our new WhatsApp community and in BookChats, in which we apply the ideas to our lives.
These same authors — our NBIC winners — will be our most anticipated podcast guests of the year. Many of these will be hosted as live conversations in New York City, with special VIP seating for Next Big Idea Club members, as well as livestream access for our members across the country.
I hope to discuss the next season of books with you in our newly reimagined Next Big Idea Book Club community. Join us, get a copy of Brené Brown’s latest book, shipping now, and join the conversation!