In an article published last week, notable scientists Stephen Hawking, Max Tegmark, Frank Wilczek, and Stuart Russell discuss the risks associated with advances in artificial intelligence, citing work done by the Future of Humanity Institute. They note, “Although we are facing potentially the best or worst thing ever to happen to humanity, little serious research is devoted to these issues outside small non-profit institutes such as the Cambridge Center for Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute.” For the full article, please see here.

Posted in News.
