In today’s episode, Jon speaks with Calum Chace, author of the new nonfiction book Surviving AI. The potential risk posed by superintelligent AI has recently gained unprecedented coverage in the mainstream press, thanks to the release of Nick Bostrom’s book Superintelligence and public statements by the likes of Elon Musk, Bill Gates, and Stephen Hawking. In our discussion we explore some of the fundamental questions surrounding this issue, such as: How soon will artificial general intelligence arrive? How likely is it to be dangerous? And is a hard takeoff or a soft takeoff more likely? While AGI may still be a long way off, the extraordinarily high stakes suggest we should devote a few more resources to studying this unique challenge facing humanity.
Relevant Links
- Surviving AI by Calum Chace
- Superintelligence by Nick Bostrom
- The Age of Spiritual Machines by Ray Kurzweil
- The Better Angels of Our Nature by Steven Pinker
- Machine Intelligence Research Institute (MIRI)
- Future of Humanity Institute
- Centre for the Study of Existential Risk