Machine learning models are becoming more and more popular. But not every user is convinced of their utility and usability. How and when can we trust these models? If our model has rejected a loan applicant, can we explain to them why that is the case? What types of explanations about the model or its behavior can we provide? What does it even mean to explain a model?
We address these and other questions in this course on machine learning or AI explainability (XAI for short). We will introduce theoretical approaches and build a hands-on understanding of various explainability techniques in Python.
The course starts with an overview of XAI approaches before going into the details of different types of explanations: visual explanations, explanations of the overall model behavior (so-called global explanations), and explanations of how the model reached its decision for a single prediction (so-called local explanations); a small sketch of this global/local distinction follows below. We will apply each presented approach to a regression and/or classification task, and you will gain further practice with the techniques through the hands-on assignments.
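To make the global/local distinction concrete, here is a minimal sketch in Python using scikit-learn's permutation importance as one example of a global explanation. The dataset, model, and library choices here are illustrative assumptions, not necessarily the ones used in the course.

```python
# A minimal sketch of a *global* explanation: which features matter
# to the model overall? (Local methods such as SHAP or LIME would
# instead explain a single prediction.)
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative dataset and model choice, not prescribed by the course.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle each feature and measure how much
# the model's score drops -- a model-agnostic global explanation.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print the five most important features for the model as a whole.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

A local explanation, by contrast, would take one row of `X_test` and attribute the model's prediction for that specific instance to its feature values; the course covers techniques for both.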
By the end of the course, you will have an understanding of the current state-of-the-art XAI approaches, along with their benefits and pitfalls. You will also be able to apply the tools learned here to your own use cases and projects.
XAI is a rapidly developing research field with many open questions. But one thing is certain: XAI is not going anywhere, just as machine learning and AI are here to stay.