In this episode, we discuss AI risk arguments through two recent articles, one by sci-fi author Ted Chiang and one by Steven Pinker, both of which dismiss the strongest versions of the arguments put forth by Nick Bostrom and others. Is insight the same as morality, as Chiang seems to think? Does Steven Pinker even understand the basics of Bostrom’s claims? Does the “foom” argument need to be true for AI risk to be worth worrying about? And at the end, a bit of fun (before we’re all turned into paperclips).
Relevant Links
- Silicon Valley Is Turning Into Its Own Worst Fear by Ted Chiang
- We’re told to fear robots. But why do we think they’ll turn on us? by Steven Pinker
- RTF Ep 064: Calum Chace on Is it Time to Start Worrying about AI?
- RTF Ep 006: What is an Intelligence Explosion, and Will It Kill Us All?
- 2001: A Space Odyssey’s “Daisy” scene
- Universal Paperclips game