FastForward Radio — From Sub-Human to Post-Human in Three Easy Steps!

November 17, 2009

Phil Bowermaster and Stephen Gordon discuss the future of human and machine evolution:

1. What are the challenges faced in trying to develop a human-level artificial intelligence?

2. When do humans stop being human?

3. What will be the relationship between humanity and post-human artificial intelligence?

Daunting questions with some potentially surprising answers!


Archived recording available here:

Listen to FastForward Radio... on Blog Talk Radio

  • DCWhatthe

    "1. What are the challenges faced in trying to develop a human-level artificial intelligence?"

    Time, and perfectly understandable impatience.

    "2. When do humans stop being human?"

    "3. What will be the relationship between humanity and post-human artificial intelligence?"

  • DCWhatthe

    This was a really interesting podcast, both the audio content and the chat.

    Two things I took from the podcast:

    1. We don’t know exactly how or when post-human intelligence will evolve. We are probably right on some of our assumptions, but there’s no guaranteed way of determining which predictions are accurate.

    2. We know – we absolutely know – that it’s coming.

  • Leslie Kirschner

    I find discussions about ethics and AI kind of mind-bending. As I think Stephen pointed out, these debates will quite possibly have more to do with us than with AI.

    There is a natural tendency, when imagining future AI, to think of it as an embodied "being" just like us but with better/faster thinking capabilities: robots with silicon brains, maybe programmed with different goals and desires than ours, but essentially having a lot of our biological baggage attached to the intelligence. Although there may be instances of this, I think it's more likely that truly advanced AI will be a distributed intelligence that may, for convenience, interface with us through familiar vehicles such as robots (or toasters, or a voice in our heads), but will resemble Google more than anything else we have today. Do we worry about how we "treat" Google?

    When you look at it logically, harming a robot may be no more ethically wrong than cutting someone's hair. But if we harm or abuse a robot that looks and acts human (or like a discrete being), we will naturally recoil from that as a result of OUR "programming".