079: Discussion of AI Risk



In this episode, we discuss the AI risk argument through two recent articles, one by science fiction author Ted Chiang and one by Steven Pinker, both of which dismiss the strongest versions of the arguments put forth by Nick Bostrom and others. Is insight the same as morality, as Chiang seems to think? Does Steven Pinker even understand the basics of Bostrom's claims? Does the "foom" argument need to be true for AI risk to be worth worrying about? And at the end, a bit of fun (before we're all turned into paperclips).

Relevant Links