Dear Book of The Day readers,
Happy Sunday. I am Rufus Griscom, co-founder of the Next Big Idea Club, and host of our weekly podcast. This is the first in a series of reflections I intend to share on occasional Sundays, attempting to digest — with your help — our conversations with some of the most brilliant people on the planet.
One of the questions at the center of many of our recent conversations is this: How will AI impact our future? We’ve discussed this with Bill Gates, Yuval Noah Harari, Stuart Russell (who runs the Center for Human-Compatible Artificial Intelligence at Berkeley), Sal Khan (founder of Khan Academy), David Chalmers, and Kevin Roose (New York Times), to name a few. If you missed these, here’s a Spotify playlist of our AI greatest hits.

The essential question I keep asking myself is how soon and how profoundly AI is likely to change our world. The collective answer from our esteemed guests seems to be: sooner and more profoundly than you think.
This view is powerfully reinforced by a buzzy new essay by Anthropic CEO Dario Amodei called Machines of Loving Grace.
If you haven’t read it, here’s the TL;DR:
We are likely to achieve Artificial General Intelligence, which Dario prefers to call “powerful AI,” as soon as 2026, though “it could take much longer.”
He describes powerful AI as “smarter than a Nobel Prize winner across most relevant fields,” and replicable, so we could have “millions of instances” of powerful AI, limited only by compute. This could make it possible to have a “country of geniuses in a datacenter” working on our most pressing problems within a couple of years.
Once we hit this inflection point, Dario believes we will see 50-100 years of scientific progress in the following 5-10 years, which he refers to as the “compressed 21st century.”
What scientific breakthroughs will this make possible in a 5-10 year time horizon?
Elimination of most forms of cancer — “reductions of 95% or more of both mortality and incidence seem possible”
Prevention of Alzheimer’s — it could “eventually be prevented with relatively simple interventions”
Prevention and treatment of nearly all natural infectious disease
Prevention and cures for most forms of mental illness, including depression, schizophrenia, addiction and PTSD
Biological freedom — “weight, physical appearance, reproduction, and other biological processes will be fully under people’s control”
Improvement of the human “baseline experience” — we will be able to improve a wide range of cognitive functions, and increase the proportion of people’s lives that “consist of extraordinary moments” of revelation, creative inspiration, compassion and beauty.
Doubling of the human lifespan — once human lifespan is 150, we may be able to reach “escape velocity, buying enough time that most of those currently alive today will be able to live as long as they want” (a back-of-envelope illustration of this idea follows the list)
Mitigation of climate change through acceleration of technologies for renewable energy, lab-grown meat, and carbon removal
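The “escape velocity” idea in the lifespan bullet above is easier to grasp with a little arithmetic. Here is a minimal Python sketch (my own illustration, not anything from Dario’s essay) of the claim: if medical progress adds more than one year of remaining life expectancy per calendar year, lifespan stops being a countdown.

```python
# Back-of-envelope illustration of "longevity escape velocity."
# Assumption (mine, not the essay's): each calendar year, one year of
# life is spent, but medical progress hands back `annual_gain` years
# of remaining life expectancy.

def years_remaining(initial_remaining: float, annual_gain: float,
                    horizon: int = 100) -> list[float]:
    """Track remaining life expectancy year by year."""
    remaining, trajectory = initial_remaining, []
    for _ in range(horizon):
        remaining += annual_gain - 1  # one year spent, annual_gain regained
        trajectory.append(remaining)
        if remaining <= 0:  # progress too slow: the clock runs out
            break
    return trajectory

# Below escape velocity (+0.5 years regained per year), the clock runs out:
print(len(years_remaining(40, 0.5)))   # 80 -> lifespan exhausted after ~80 years
# Above it (+1.2 years regained per year), remaining expectancy only grows:
print(years_remaining(40, 1.2)[-1])    # 60.0 -> still climbing at year 100
```

The point is simply that the threshold matters more than the starting number: any sustained gain above one year per year turns the countdown around.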
None of these predictions is novel; we have heard similarly utopian takes from folks like Sam Altman, Marc Andreessen, and Ray Kurzweil, and rebuttals from their critics. But it is surprising to see such an aggressive timeline from someone as cautious and measured as Dario, who left OpenAI in 2020 with six other senior staff members (another half dozen have followed recently) in order to build safer and more transparent AI systems. The company they founded, Anthropic, has pledged its commitment to putting out safe AI, even if that takes longer, and hopes to start a “safety race” among the top LLM developers.
Dario has a PhD in physics from Princeton, originally ran OpenAI’s safety team, and explores at great length in the essay the factors that will slow the pace of AI-driven innovation, namely interactions with humans. And yet … he describes a version of our world that could be unfathomably different in 20 years. Of course, he hasn’t lost sight of the risks. As he puts it, “most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.”
One of Dario’s more original arguments — hopeful for me, in this perilous moment — is that if we do it right, AI could help strengthen democracy around the world. This is important, Dario tells us, because “AI-powered authoritarianism seems too terrible to contemplate.”
In my recent conversation with Yuval Noah Harari, author of Sapiens and, most recently, Nexus, he had the following to say about AI’s potential for misuse.
Listen here:
We can have AI teachers and AI doctors and AI therapists who give us services that we didn't even imagine previously. But in the wrong hands, this can create a totalitarian system of a kind that even Orwell couldn't imagine. You can suddenly produce mass intimacy. Think about the politician you most fear in the world. What would that person do with the ability to produce mass intimacy based on AI?
He went on to describe the current use of facial recognition and surveillance in places like China and Iran (as described by Kashmir Hill in her recent book Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy as We Know It). It does feel like a pivotal moment in the global AI race. As Bill Gates said in our conversation, “Let's not let people with malintent benefit from having a better AI than the good intent side of cyber defense or war defense or bioterror defense.”
So how, exactly, do we use AI to strengthen democracy? Dario says,
My current guess at the best way to do this is via an “entente strategy,” in which a coalition of democracies seeks to gain a clear advantage (even just a temporary one) on powerful AI by securing its supply chain, scaling quickly, and blocking or delaying adversaries’ access to key resources like chips and semiconductor equipment. This coalition would on one hand use AI to achieve robust military superiority (the stick) while at the same time offering to distribute the benefits of powerful AI (the carrot) to a wider and wider group of countries in exchange for supporting the coalition’s strategy to promote democracy.
MIT professor Max Tegmark, among the most articulate AI safety advocates, called Dario’s entente strategy a “suicide race” in a published response, because “we are closer to building AGI than we are to figuring out how to align or control it.” Max makes the case for limiting AI development to “Tool AI”: AIs developed “to accomplish specific goals,” such as Google DeepMind’s AlphaFold, whose developers just won a Nobel Prize.
I agree with Tegmark’s description of the best case — it would be wonderful if we could globally restrict the development of superintelligence to narrow goals. But it doesn’t seem likely. As Gates put it in our conversation, “If we knew how to slow it down, a lot of people would probably say, ‘Okay, let's consider doing that.’” But, he added, “the incentive structures don't really have some mechanism that's all that plausible of how that would happen.”
These conversations can be discouraging when we think about the risks to future generations. We can’t deny those risks (Dario himself has put his p(doom) at 10-25%). But I, for one, am buoyed by Dario’s confidence that there is a path to building “powerful AI” responsibly, with the necessary safety mechanisms, and to do it quickly enough to distribute the benefits of this technology globally while strengthening democracy and human rights. I also like his argument that “improvements in mental health, well-being, and education [should] increase democracy, as all three are negatively correlated with support for authoritarian leaders." I hope this proves to be true. It seems particularly likely that global education will benefit from AI progress, based on my invigorating conversation with Sal Khan, who is making it happen.
Though I find all this fascinating, I also find it overwhelming at times. It can be too much. On Friday, I had a wonderful conversation with Oliver Burkeman (stay tuned!), who says in his new book, Meditations for Mortals, that too many of us are “living inside the news.” We are finite humans who must choose judiciously what to care about. What to focus on. As William James said, “The art of being wise is the art of knowing what to overlook.”

So should we ignore the dizzying acceleration of AI? I don’t think so. I think it’s a development that will be transformative enough to our individual and collective futures that it’s worth thinking about now.
In the ’90s, I leaned into the dawn of the internet, starting an early web zine and dating platform called Nerve.com. This led to a fascinating career that has been rewarding in every sense of the word. In the late 2000s, by contrast, I was slow to study and engage with the early social media products, and I came to regret that professionally. As a parent, meanwhile, as I discussed with a recent podcast guest, I was too permissive with tech access for my kids.

I have therefore resolved to be a better student of the latest tech revolution. Here are a few ways the trajectory of AI technology is influencing my decisions:
I am doubling down on investing in personal health now. If we are likely to see medical breakthroughs that will meaningfully improve our health, and perhaps extend our lives, within 10-20 years, I would rather be in a physical state that is worth preserving. (This resolution has the advantage of being one I am unlikely to regret no matter what happens.)
I am collecting ideas more carefully so that I can populate LLMs with my personal interests, paving the way for a better personal assistant. For instance, though I prefer reading physical books, I am scanning my highlights into Readwise so that I can access them all later, and my intention is to extend this practice to everything I read (a rough sketch of what this could look like appears after this list). I have also found that submitting my personal journal to AI analysis often yields useful insights, which positively reinforces my erratic journaling practice.
I am bullish on investment positions in the largest US tech companies. Though there is no guarantee that the Nasdaq 100 will sustain the nearly 20% annualized growth it has delivered over the last 15 years (see the compounding arithmetic after this list), it seems unlikely that the rate of technological progress is about to slow down.
Though it’s a small gesture, I am making an effort to support the companies that take AI safety most seriously, and the candidates most likely to protect democracy and human rights in this volatile historical moment.
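As a concrete example of the highlight-collecting habit in the second item above, here is a rough sketch of pulling Readwise highlights into a plain-text file that could be pasted into an LLM’s context or fed into a retrieval index. Readwise does expose a highlights export API, but the endpoint and field names below are from memory; treat this as a sketch to check against their docs rather than working code.

```python
# Rough sketch: pull all Readwise highlights into one plain-text file
# that an LLM can use as personal context. The endpoint and field
# names are from memory; verify against Readwise's API docs.
import requests

READWISE_TOKEN = "your-token-here"  # hypothetical placeholder

def fetch_all_books() -> list[dict]:
    """Page through the Readwise export API, collecting every book."""
    books, cursor = [], None
    while True:
        resp = requests.get(
            "https://readwise.io/api/v2/export/",
            headers={"Authorization": f"Token {READWISE_TOKEN}"},
            params={"pageCursor": cursor} if cursor else {},
        )
        resp.raise_for_status()
        data = resp.json()
        books.extend(data["results"])
        cursor = data.get("nextPageCursor")
        if not cursor:
            return books

# Flatten into plain text, ready to paste into an LLM's context window
# or to index for retrieval.
with open("my_highlights.txt", "w") as f:
    for book in fetch_all_books():
        f.write(f"# {book['title']}\n")
        for h in book.get("highlights", []):
            f.write(f"- {h['text']}\n")
```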
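And for a sense of what the growth figure in the third item implies, compounding at that rate is dramatic (illustrative arithmetic only, not investment analysis):

```python
# Compounding check for the ~20% annualized growth cited above.
annual_growth = 0.20
years = 15
multiple = (1 + annual_growth) ** years
print(f"{multiple:.1f}x over {years} years")  # ~15.4x
```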
What do you think? I am curious to hear how you all are processing these developments.
If you want to dig deeper into the topic, you might enjoy our Spotify playlist featuring all of our conversations about AI. If you would like a primer, this conversation with Cade Metz about the history of AI, where it all began, is a great place to start.
—
One final note: I hope you are enjoying our Book of the Day newsletter as much as we love making it. If so, why not subscribe? It helps us keep the literary magic flowing, and it gets you the full experience, which is going to get even more exciting in the weeks and months to come.
Wonderful thoughts, Rufus. Yes, Dario on NBI would be great to hear; you are such a thoughtful interviewer that it would be an insightful discussion. These are tremendously big questions, and the more thoughtful conversations we have about them, the better.
One piece of background here is that I have been deeply concerned about AI risk for a few years now. That’s part of what has driven my interest in seeking out conversations with experts and industry leaders in the space. What I have found is that the closer people are to the white-hot center of the AI revolution, the more concerned they are about the threats.
I am genuinely encouraged by Dario Amodei’s essay because Anthropic has punched above its weight as an AI competitor, and there seems to be evidence that its social good mission (it is a public benefit corporation) is helping it attract some of the best talent in the industry. OpenAI, meanwhile, which has been criticized for abandoning its social good mission, is hemorrhaging talent.
In a world in which tech companies have the power of nation states, and may be on a path to having more, the culture of those companies and the character of the people who run them are deeply important. Companies have different cultures, and those cultures attract and repel talent, much as nations do. Just as we take great interest in the personalities and political trajectories of totalitarian leaders like Putin and Xi Jinping because we want to understand geopolitical risk, it’s in our interest to understand the leaders and cultures of our top technology companies. I am hopeful that the US will remain a bastion of democracy for centuries to come, but in the event that democracy in the US falters, one can imagine global technology companies relocating and employing their influence for good or ill.
What do you think, am I overstating the case?
Separate question: should we try to get Dario on the Next Big Idea podcast?