Would Asimov’s laws or a simple kill switch be enough to prevent harm from a superintelligent AI?  On BBC Radio, Nick Bostrom and James Barrat explain the difficulties of ensuring a positive outcome from an intelligence explosion.

Poorly seeded AI goals, combined with incentives that push towards rapid and competitive development of generalised AI, raise serious concerns for our future.  In particular, if we already struggle to secure our systems against human hackers, how well could we fare against a superintelligent AI operating online?  Listen to the full discussion on the BBC here.
