Explainable AI: Cracking the Black Box Code to Drive Adoption

28 Feb 2019
by Yugal Joshi, Alisha Mittal

Research on advanced intelligent systems is underway to solve critical business problems and broader world issues. Intelligent loan approval, automated medical diagnosis, autonomous cars, and predictive criminal offender identification are some of the revolutionary applications of AI that promise significant cost savings and performance improvements. Yet, while enterprises appreciate this potential, our research suggests that only one in five enterprises has adopted AI at scale to deliver meaningful business results.

Is the training data free of bias? How did the system arrive at its results? How do I validate the results for accuracy and ensure compliance? Are there exceptional situations in which the system may fail? AI systems have left such decisive questions unanswered, leading to muted trust and limited large-scale adoption. The key roadblock is the black-box nature of AI systems, which limits explainability. Enterprises increasingly demand trustworthy, ethical, and repeatable behavior from AI algorithms to ensure the fairness and accuracy of results.

In this paper, we detail what explainable AI systems are and why enterprises need them. In addition, we:

  • Identify enterprise expectations of an explainable AI system
  • Establish a framework for enterprises to evaluate the extent of explainability required
  • Explore existing approaches to enable explainability at the training data, functionality, and interpretation stages (illustrated in the sketch after this list)
  • Examine the limitations of and challenges with existing AI systems, and the road ahead
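The report itself contains no code, but to make the interpretation-stage idea concrete, the following is a minimal, illustrative sketch of one widely used model-agnostic technique, permutation feature importance: it measures how much a trained model's accuracy drops when a single feature's values are shuffled. The synthetic dataset, model choice, and function names here are our own illustrative assumptions, not artifacts of the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train an opaque "black box" model on synthetic, loan-approval-style data.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Importance of feature j = drop in accuracy when column j is
    shuffled, breaking its relationship with the target."""
    rng = np.random.default_rng(seed)
    baseline = model.score(X, y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # permute one feature column in place
            scores.append(model.score(X_perm, y))
        importances[j] = baseline - np.mean(scores)
    return importances

for j, imp in enumerate(permutation_importance(model, X_test, y_test)):
    print(f"feature_{j}: importance = {imp:.3f}")
```

Because the technique only queries the model's predictions, it works on any classifier regardless of internal structure, which is precisely why interpretation-stage approaches are attractive for black-box systems.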
