In a recent discussion with Baidu CEO Robin Li, Bill Gates discussed FHI’s research, stating that he would “highly recommend” Superintelligence. During the interview, Gates echoed Elon Musk’s concerns about the safety of future superintelligence. Musk noted that a good analogy would be “if you consider nuclear research, with its potential for a very dangerous weapon: releasing the energy is easy; containing that energy safely is very difficult. And so I think the right emphasis for AI research is on AI safety.”

The full interview can be viewed here.
