Producing Explainable AI (XAI) and Interpretable Machine Learning Solutions

Source: Keith McCormick

Learn why the need for XAI has been rapidly increasing in recent years. Explore available methods and common techniques for XAI and IML, as well as when and how to use each. Keith walks you through the challenges and opportunities of black box models, showing you how to bring transparency to your models, using real-world examples that illustrate tricks of the trade on the easy-to-learn, open-source KNIME Analytics Platform. By the end of this course, you’ll have a better understanding of XAI and IML techniques for both global and local explanations.

This note summarizes the key contents of the LinkedIn Learning course on Explainable AI.

Ways to quantify feature importance

Global explanations tell us which input variables are most important to the model overall (e.g., GLM coefficients, which identify the most influential variables across the model as a whole).
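As a concrete illustration of a global explanation, the sketch below computes permutation feature importance with scikit-learn: each feature is shuffled in turn and the resulting drop in model score is read as that feature's overall importance. The dataset and random-forest model here are illustrative assumptions, not material from the course, which demonstrates these ideas in KNIME.

```python
# Minimal sketch of a global explanation: permutation feature importance.
# The dataset and model are illustrative stand-ins, not taken from the course.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops:
# a large drop means the model relies heavily on that feature overall.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

top_global = sorted(zip(X.columns, result.importances_mean),
                    key=lambda pair: pair[1], reverse=True)[:5]
for name, score in top_global:
    print(f"{name}: {score:.4f}")
```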

Local explanations indicate which inputs were most important to a specific individual prediction (e.g., the reasons behind a particular credit score assigned to one applicant).
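To illustrate a local explanation, the sketch below uses a simple occlusion-style attribution for a single prediction: each feature value of one instance is replaced by its training-set mean, and the shift in the predicted probability is read as that feature's contribution to this particular case. This is a deliberately simplified stand-in for dedicated local-explanation tools such as LIME or SHAP; the model and data are again illustrative assumptions rather than course material.

```python
# Minimal sketch of a local explanation for a single prediction, using an
# occlusion-style attribution: swap one feature value for the training mean
# and watch how the predicted probability moves. Purely illustrative;
# dedicated tools such as SHAP or LIME do this far more rigorously.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

instance = X_test.iloc[[0]]                      # the one case we want to explain
baseline = model.predict_proba(instance)[0, 1]   # its predicted probability

contributions = {}
for feature in X_train.columns:
    perturbed = instance.copy()
    perturbed[feature] = X_train[feature].mean() # neutralise this feature's value
    contributions[feature] = baseline - model.predict_proba(perturbed)[0, 1]

# The features whose removal shifts this prediction the most were the most
# important inputs for this particular case.
top_local = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:5]
for name, delta in top_local:
    print(f"{name}: {delta:+.4f}")
```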

Challenges of Variable Importance (VI)

What can we do?

Argument against XAI

Interpretable Machine Learning

Personal thoughts

This course quickly summarises the current state of XAI in a beginner-friendly manner, getting straight to the main points. The few examples given are on point, such as the credit-scoring case demonstrated on the COTS tool KNIME.

Given this course’s designation as an intermediate course, it is disappointing that some key areas, such as the workings behind local explanations, are not explained in depth, making it more comparable to an introductory course on XAI. Without concrete examples, it remains unclear how issues such as feature scaling and the imbalanced datasets typical of credit scoring affect feature importance.

What’s next?

Reading List: