An intelligence explosion is the idea that a machine more intelligent than humans would quickly design a machine more intelligent than itself, which would do the same in turn, so that the intelligence of artificial systems would very rapidly outstrip that of humanity. Is this hard-takeoff scenario possible? Is it realistic? And is there any way to encourage future super-intelligent machines to be friendly?
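For a rough feel for why the takeoff question turns on the returns to self-improvement, here is a toy sketch (not from the episode, and not a model anyone in it endorses): each generation of machine redesigns its successor, and whether capability explodes or merely creeps upward depends on whether the gain per redesign compounds or shrinks. The parameters below are made up purely for illustration.

```python
# Toy model of recursive self-improvement; all numbers are arbitrary units.

def takeoff(initial_level=1.0, returns_exponent=0.1, generations=20):
    """Simulate successive self-redesigns.

    Each generation multiplies its capability by a factor that depends on
    its current level. A positive returns_exponent gives compounding gains
    (a hard takeoff); a negative one gives diminishing relative gains.
    """
    level = initial_level
    history = [level]
    for _ in range(generations):
        improvement = 1.0 + 0.5 * level ** returns_exponent
        level *= improvement
        history.append(level)
    return history

if __name__ == "__main__":
    hard = takeoff(returns_exponent=0.1)    # compounding gains: explosive growth
    soft = takeoff(returns_exponent=-1.0)   # diminishing gains: roughly linear growth
    print("compounding returns:", [round(x, 1) for x in hard[::5]])
    print("diminishing returns:", [round(x, 1) for x in soft[::5]])
```

The only point of the toy is that the interesting disagreement is about the exponent, i.e. how hard each successive improvement is to find, not about whether self-improvement happens at all.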
Relevant Links
- Stephen Omohundro's paper 'The Basic AI Drives'
- Nick Bostrom's paper 'The Superintelligent Will'
- Facing the Intelligence Explosion
- Wikipedia entry for 'The Monkey's Paw'
- Robert J. Sawyer's WWW trilogy
- MIRI (the Machine Intelligence Research Institute)
- Oxford's Future of Humanity Institute
- Wikipedia entry for I. J. Good
- Singularity or Bust, a documentary featuring Ben Goertzel and Hugo de Garis
- David Eubanks asks 'Is Intelligence Self-Limiting?'
- Stuart Armstrong speaks on 'How We're Predicting AI'