Monday, 27 May 2019

An approach to enhance machine learning explanations

Researchers at IBM Research U.K., the U.S. Military Academy and Cardiff University have recently proposed an enhancement to Local Interpretable Model-Agnostic Explanations (LIME), a technique originally introduced by Ribeiro et al. in 2016 for understanding the conclusions reached by machine learning algorithms. Their paper, published in the SPIE Digital Library, could inform the development of artificial intelligence (AI) tools that provide more thorough explanations of how they reached a particular outcome or conclusion.
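For readers unfamiliar with the baseline technique, the sketch below shows how LIME is typically applied in practice, using the open-source `lime` Python package released with the original 2016 paper. It is only an illustration of standard LIME on a stand-in scikit-learn model; the enhanced variant described in the SPIE paper is not reproduced here.

```python
# Minimal sketch of baseline LIME (the open-source `lime` package),
# not the enhanced variant from the paper discussed above.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train an arbitrary "black-box" model whose predictions we want to explain.
data = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# LIME perturbs the chosen instance, queries the black-box model on the
# perturbed samples, and fits an interpretable linear surrogate locally.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    discretize_continuous=True,
)
explanation = explainer.explain_instance(
    data.data[0],          # the single instance to explain
    model.predict_proba,   # black-box probability function
    num_features=4,        # number of feature weights to report
)

# Each pair is (feature condition, weight in the local surrogate model).
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The feature weights printed at the end are what LIME offers as an "explanation": an estimate of how much each input feature pushed the model toward or away from the predicted class in the neighborhood of that one instance.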
