Research on advanced intelligent systems is underway to solve critical business problems and global issues. Intelligent loan approval, automated medical diagnosis, autonomous cars, and predictive criminal offender identification are among the revolutionary applications of AI that promise immense cost savings and performance improvements. While enterprises appreciate this potential, our research suggests that only one in five enterprises has adopted AI at scale to deliver meaningful business results.
Is the training data free of bias? How did the system arrive at its results? How do I validate the results for accuracy and ensure compliance? Are there exceptional situations in which the system may fail? AI systems have left such critical questions unanswered, leading to muted trust and limited large-scale adoption. The key roadblock is the black-box nature of AI systems, which limits explainability. Enterprises increasingly demand trustworthy, ethical, and repeatable behavior from AI algorithms that can ensure the fairness and accuracy of results.
In this paper, we detail the meaning of and need for explainable AI systems. In addition, we:
Identify enterprise expectations from an explainable AI system
Establish a framework for enterprises to evaluate the extent of explainability required
Explore existing approaches to enabling explainability at the training data, functionality, and interpretation stages
Examine the limitations and challenges of existing AI systems and the road ahead
Although Artificial Intelligence (AI) has been around for decades, both its hype and adoption have grown exponentially in the past few years. Enterprises are recognizing the value of AI as a competitive differentiator and value creator, and many are…