Philosopher Nick Bostrom is surprisingly cheerful for someone who has spent so much time worrying about the ways humanity might destroy itself. In photographs he often looks deadly serious, perhaps rightly haunted by the existential dangers roiling around in his brain. When we talk over Zoom, he looks relaxed and is quick to smile.
Bostrom has dedicated his life to thinking about far-off technological advances and existential risks to humanity. With the publication of his book Superintelligence: Paths, Dangers, Strategies in 2014, Bostrom brought public attention to what was then a fringe idea: that AI would advance to a point where it might turn on and wipe out humanity.
To many in AI research and elsewhere, the idea seemed fanciful, but influential figures including Elon Musk cited Bostrom’s writing. The book seeded a strain of apocalyptic worry about AI that broke into the open with the arrival of ChatGPT. Concern about AI risk is now not just a mainstream topic but a dominant theme in government AI policy circles.
Bostrom’s new book takes a very different tack. Rather than replaying the doomy hits, Deep Utopia: Life and Meaning in a Solved World envisions a future in which humanity has successfully developed superintelligent machines and averted disaster. All disease has been cured, and humans can live indefinitely in infinite abundance. Bostrom’s book examines what meaning life would have inside a techno-utopia and asks whether it might be rather hollow. He spoke with WIRED over Zoom, in a conversation that has been lightly edited for length and clarity.
Will Knight: Why move from writing about superintelligent AI that threatens humanity to envisioning a future in which it is used for good?
Nick Bostrom: The various things that could go wrong with the development of AI are now receiving a lot more attention. It’s a big shift from the past 10 years. All the major frontier AI labs now have research groups trying to develop scalable alignment methods. And in recent years we’ve also seen political leaders start to pay attention to AI.
There hasn’t yet been a commensurate increase in the depth and sophistication of thinking about where things go if we don’t fall into one of these pits. Reflection on the subject has been quite superficial.
When you wrote Superintelligence, few would have expected existential AI risks to become a mainstream debate so quickly. Will we need to worry about the questions in your new book sooner than people might think?
As we start to see automation roll out, assuming progress continues, I think these conversations will start to take place and eventually deepen.
Social companion applications will become increasingly prominent. People will hold all kinds of different views on them, and it’s a great place for a little culture war to play out. They could be great for somebody who otherwise fails to thrive in ordinary life, but what if a segment of the population takes pleasure in being mean to them?
In the political and information domains, we’ve already seen AI used in political campaigns, marketing, and automated propaganda systems. But if we have a sufficient level of wisdom, these things could actually amplify our ability to be constructive democratic citizens, with personalized advice explaining what policy proposals mean for you. There will be a whole new dynamic for society.
Would a future in which AI has solved many problems, like climate change, disease, and the need to work, really be so bad?