{"id":488363,"date":"2018-05-30T07:53:25","date_gmt":"2018-05-30T14:53:25","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=488363"},"modified":"2020-04-23T15:12:48","modified_gmt":"2020-04-23T22:12:48","slug":"making-intelligence-intelligible-dr-rich-caruana","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/podcast\/making-intelligence-intelligible-dr-rich-caruana\/","title":{"rendered":"Making intelligence intelligible with Dr. Rich Caruana"},"content":{"rendered":"
Dr. Rich Caruana, Principal Researcher. Photo courtesy of Maryatt Photography.<\/p><\/div>\n

Episode 26, May 30, 2018<\/h3>\n

In the world of machine learning, there\u2019s been a notable trade-off between accuracy and intelligibility. Either the models are accurate but difficult to make sense of, or easy to understand but prone to error. That\u2019s why Dr. Rich Caruana<\/a>, Principal Researcher at Microsoft Research, has spent a good part of his career working to make the simple more accurate and the accurate more intelligible.<\/p>\n

Today, Dr. Caruana talks about how the rise of deep neural networks has made understanding machine predictions more difficult for humans, and discusses an interesting class of smaller, more interpretable models that may help make the black-box nature of machine learning more transparent.<\/p>\n

Related:<\/h3>\n