Understand how to use Explainable AI (XAI) libraries and build trust in AI and machine learning models. This book takes a problem-solution approach to explaining machine learning models and their algorithms. It begins with model interpretation for supervised learning with linear models, covering feature importance, partial dependency analysis, and influential data point analysis for both classification and regression models. It then turns to supervised learning with nonlinear models, using state-of-the-art frameworks such as SHAP values/scores and LIME for local interpretation. Explainability for time-series models is covered using LIME and SHAP, and natural language processing tasks such as text classification and sentiment analysis are explained with ELI5 and ALIBI. The book concludes with complex classification and regression models, such as neural networks and deep learning models, explained using the Captum framework, which provides feature attribution, neuron attribution, and activation attribution. After reading this book, you will understand AI and machine learning models and be able to put that knowledge into practice to bring more accuracy and transparency to your analyses.
Author(s): Pradeepta Mishra
Publisher: Apress
Year: 2023
Language: English
Pages: 267
Table of Contents
About the Author
About the Technical Reviewer
Acknowledgments
Introduction
Chapter 1: Introducing Explainability and Setting Up Your Development Environment
Recipe 1-1. SHAP Installation
Problem
Solution
How It Works
Recipe 1-2. LIME Installation
Problem
Solution
How It Works
Recipe 1-3. SHAPASH Installation
Problem
Solution
How It Works
Recipe 1-4. ELI5 Installation
Problem
Solution
How It Works
Recipe 1-5. Skater Installation
Problem
Solution
How It Works
Recipe 1-6. Skope-rules Installation
Problem
Solution
How It Works
Recipe 1-7. Methods of Model Explainability
Problem
Solution
How It Works
Conclusion
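The installation recipes in this chapter generally reduce to pip commands. A plausible setup, assuming the usual PyPI package names for each library (names and supported Python versions may vary by release):

```shell
# Install the explainability libraries covered in Chapter 1.
pip install shap          # Recipe 1-1
pip install lime          # Recipe 1-2
pip install shapash       # Recipe 1-3
pip install eli5          # Recipe 1-4
pip install skater        # Recipe 1-5
pip install skope-rules   # Recipe 1-6
```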
Chapter 2: Explainability for Linear Supervised Models
Recipe 2-1. SHAP Values for a Regression Model on All Numerical Input Variables
Problem
Solution
How It Works
Recipe 2-2. SHAP Partial Dependency Plot for a Regression Model
Problem
Solution
How It Works
Recipe 2-3. SHAP Feature Importance for Regression Model with All Numerical Input Variables
Problem
Solution
How It Works
Recipe 2-4. SHAP Values for a Regression Model on All Mixed Input Variables
Problem
Solution
How It Works
Recipe 2-5. SHAP Partial Dependency Plot for Regression Model for Mixed Input
Problem
Solution
How It Works
Recipe 2-6. SHAP Feature Importance for a Regression Model with All Mixed Input Variables
Problem
Solution
How It Works
Recipe 2-7. SHAP Strength for Mixed Features on the Predicted Output for Regression Models
Problem
Solution
How It Works
Recipe 2-8. SHAP Values for a Regression Model on Scaled Data
Problem
Solution
How It Works
Recipe 2-9. LIME Explainer for Tabular Data
Problem
Solution
How It Works
Recipe 2-10. ELI5 Explainer for Tabular Data
Problem
Solution
How It Works
Recipe 2-11. How the Permutation Model in ELI5 Works
Problem
Solution
How It Works
Recipe 2-12. Global Explanation for Logistic Regression Models
Problem
Solution
How It Works
Recipe 2-13. Partial Dependency Plot for a Classifier
Problem
Solution
How It Works
Recipe 2-14. Global Feature Importance from the Classifier
Problem
Solution
How It Works
Recipe 2-15. Local Explanations Using LIME
Problem
Solution
How It Works
Recipe 2-16. Model Explanations Using ELI5
Problem
Solution
How It Works
Conclusion
References
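For linear models such as those in Chapter 2, SHAP values have a simple closed form when features are treated as independent: each feature's contribution is its coefficient times the feature's deviation from its background mean, and the base value is the mean prediction. A minimal pure-NumPy sketch of that property (the synthetic data and coefficients are illustrative assumptions, not examples from the book):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: 200 rows, 3 features.
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 3.0 + rng.normal(scale=0.1, size=200)

# Fit ordinary least squares (intercept via an appended column of ones).
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
w, b = coef[:3], coef[3]

# For a linear model with independent features, the exact SHAP value of
# feature j for row x is w_j * (x_j - mean(X_j)); the base value is the
# mean prediction over the background data.
base_value = X.mean(axis=0) @ w + b
x = X[0]
shap_values = w * (x - X.mean(axis=0))

# Additivity check: base value + sum of SHAP values == model prediction.
prediction = x @ w + b
print(np.allclose(base_value + shap_values.sum(), prediction))  # True
```

This is the same additivity that `shap.LinearExplainer` reports; the closed form is why linear models are the book's starting point.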
Chapter 3: Explainability for Nonlinear Supervised Models
Recipe 3-1. SHAP Values for Tree Models on All Numerical Input Variables
Problem
Solution
How It Works
Recipe 3-2. Partial Dependency Plot for Tree Regression Model
Problem
Solution
How It Works
Recipe 3-3. SHAP Feature Importance for Regression Models with All Numerical Input Variables
Problem
Solution
How It Works
Recipe 3-4. SHAP Values for Tree Regression Models with All Mixed Input Variables
Problem
Solution
How It Works
Recipe 3-5. SHAP Partial Dependency Plot for Regression Models with Mixed Input
Problem
Solution
How It Works
Recipe 3-6. SHAP Feature Importance for Tree Regression Models with All Mixed Input Variables
Problem
Solution
How It Works
Recipe 3-7. LIME Explainer for Tabular Data
Problem
Solution
How It Works
Recipe 3-8. ELI5 Explainer for Tabular Data
Problem
Solution
How It Works
Recipe 3-9. How the Permutation Model in ELI5 Works
Problem
Solution
How It Works
Recipe 3-10. Global Explanation for Decision Tree Models
Problem
Solution
How It Works
Recipe 3-11. Partial Dependency Plot for a Nonlinear Classifier
Problem
Solution
How It Works
Recipe 3-12. Global Feature Importance from the Nonlinear Classifier
Problem
Solution
How It Works
Recipe 3-13. Local Explanations Using LIME
Problem
Solution
How It Works
Recipe 3-14. Model Explanations Using ELI5
Problem
Solution
How It Works
Conclusion
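Recipe 3-9 (and its counterparts in Chapters 2 and 4) covers permutation importance as implemented in ELI5. The core idea is library-agnostic: shuffle one feature column at a time and measure how much the model's score drops. A self-contained NumPy sketch under those assumptions (the data and the least-squares "model" here are illustrative stand-ins):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: feature 0 matters most, feature 2 is pure noise.
X = rng.normal(size=(300, 3))
y = 4.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(scale=0.2, size=300)

# "Model": ordinary least squares fit with an intercept.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
predict = lambda M: M @ coef[:3] + coef[3]

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

baseline = r2(y, predict(X))

# Permutation importance: score drop after shuffling each column.
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances.append(baseline - r2(y, predict(Xp)))

print([round(v, 3) for v in importances])
```

Run on this data, the drop for feature 0 dominates and the noise feature's drop is near zero, which is exactly the ranking ELI5's `PermutationImportance` would surface.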
Chapter 4: Explainability for Ensemble Supervised Models
Recipe 4-1. Explainable Boosting Machine Interpretation
Problem
Solution
How It Works
Recipe 4-2. Partial Dependency Plot for Tree Regression Models
Problem
Solution
How It Works
Recipe 4-3. Explain an Extreme Gradient Boosting Model with All Numerical Input Variables
Problem
Solution
How It Works
Recipe 4-4. Explain a Random Forest Regressor with Global and Local Interpretations
Problem
Solution
How It Works
Recipe 4-5. Explain the Catboost Regressor with Global and Local Interpretations
Problem
Solution
How It Works
Recipe 4-6. Explain the EBM Classifier with Global and Local Interpretations
Problem
Solution
How It Works
Recipe 4-7. SHAP Partial Dependency Plot for Regression Models with Mixed Input
Problem
Solution
How It Works
Recipe 4-8. SHAP Feature Importance for Tree Regression Models with Mixed Input Variables
Problem
Solution
How It Works
Recipe 4-9. Explaining the XGBoost Model
Problem
Solution
How It Works
Recipe 4-10. Random Forest Regressor for Mixed Data Types
Problem
Solution
How It Works
Recipe 4-11. Explaining the Catboost Model
Problem
Solution
How It Works
Recipe 4-12. LIME Explainer for the Catboost Model and Tabular Data
Problem
Solution
How It Works
Recipe 4-13. ELI5 Explainer for Tabular Data
Problem
Solution
How It Works
Recipe 4-14. How the Permutation Model in ELI5 Works
Problem
Solution
How It Works
Recipe 4-15. Global Explanation for Ensemble Classification Models
Problem
Solution
How It Works
Recipe 4-16. Partial Dependency Plot for a Nonlinear Classifier
Problem
Solution
How It Works
Recipe 4-17. Global Feature Importance from the Nonlinear Classifier
Problem
Solution
How It Works
Recipe 4-18. XGBoost Model Explanation
Problem
Solution
How It Works
Recipe 4-19. Explain a Random Forest Classifier
Problem
Solution
How It Works
Recipe 4-20. Catboost Model Interpretation for a Classification Scenario
Problem
Solution
How It Works
Recipe 4-21. Local Explanations Using LIME
Problem
Solution
How It Works
Recipe 4-22. Model Explanations Using ELI5
Problem
Solution
How It Works
Recipe 4-23. Multiclass Classification Model Explanation
Problem
Solution
How It Works
Conclusion
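The LIME recipes (2-15, 3-13, 4-21) all rest on one mechanism: perturb the instance being explained, weight the perturbations by proximity, and fit a weighted linear surrogate whose coefficients become the local explanation. A dependency-free NumPy sketch of that mechanism (the black-box function and kernel width are illustrative assumptions, not LIME's exact defaults):

```python
import numpy as np

rng = np.random.default_rng(7)

# An opaque "black-box" model, nonlinear in both features.
def black_box(X):
    return np.sin(X[:, 0]) + X[:, 1] ** 2

x0 = np.array([0.5, 1.0])  # instance to explain

# 1. Perturb: sample points in a neighborhood of x0.
Z = x0 + rng.normal(scale=0.3, size=(500, 2))
fz = black_box(Z)

# 2. Weight by proximity (Gaussian kernel on squared distance to x0).
d2 = np.sum((Z - x0) ** 2, axis=1)
weights = np.exp(-d2 / (2 * 0.3 ** 2))

# 3. Fit a weighted linear surrogate: min sum_i w_i (f(z_i) - a - b.z_i)^2.
A = np.column_stack([Z, np.ones(len(Z))])
sw = np.sqrt(weights)
coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * fz, rcond=None)

# The surrogate's slopes approximate the local gradient of the black box:
# d/dx0 sin(x) = cos(0.5), d/dx1 x^2 = 2 * 1.0.
print(coef[:2])
```

The recovered slopes are close to `[cos(0.5), 2.0]`, showing why LIME's coefficients read as "how much each feature pushes the prediction near this point."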
Chapter 5: Explainability for Natural Language Processing
Recipe 5-1. Explain Sentiment Analysis Text Classification Using SHAP
Problem
Solution
How It Works
Recipe 5-2. Explain Sentiment Analysis Text Classification Using ELI5
Problem
Solution
How It Works
Recipe 5-3. Local Explanation Using ELI5
Problem
Solution
How It Works
Conclusion
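For linear text classifiers, the per-word explanations ELI5 produces (Recipes 5-2 and 5-3) amount to showing weight × count for each token in the document. A toy NumPy sketch with a hand-set vocabulary and weights (both are illustrative assumptions, not trained values from the book):

```python
import numpy as np

# Toy vocabulary and hand-set sentiment weights (illustrative only).
vocab = ["good", "great", "bad", "boring", "movie"]
weights = np.array([1.2, 1.5, -1.3, -0.9, 0.0])
bias = 0.1

doc = "great movie not boring great".split()

# Bag-of-words counts for the document ("not" is out-of-vocabulary and
# is simply ignored, as a plain bag-of-words model would).
counts = np.array([doc.count(w) for w in vocab])

# Per-token contribution = weight * count; the score is their sum + bias.
contributions = dict(zip(vocab, weights * counts))
score = weights @ counts + bias

for word, c in contributions.items():
    if c != 0:
        print(f"{word:>8}: {c:+.2f}")
print(f"   score: {score:+.2f} -> {'positive' if score > 0 else 'negative'}")
```

This additive breakdown is what ELI5 renders as highlighted text for linear models; nothing deeper is happening in the linear case.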
Chapter 6: Explainability for Time-Series Models
Recipe 6-1. Explain Time-Series Models Using LIME
Problem
Solution
How It Works
Recipe 6-2. Explain Time-Series Models Using SHAP
Problem
Solution
How It Works
Conclusion
Chapter 7: Explainability for Deep Learning Models
Recipe 7-1. Explain MNIST Images Using a Gradient Explainer Based on Keras
Problem
Solution
How It Works
Recipe 7-2. Use Kernel Explainer–Based SHAP Values from a Keras Model
Problem
Solution
How It Works
Recipe 7-3. Explain a PyTorch-Based Deep Learning Model
Problem
Solution
How It Works
Conclusion
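Captum's gradient-based attribution methods (as used in Recipe 7-3) and SHAP's GradientExplainer share a core idea: attribute the output to the inputs via gradients, in the simplest case the gradient × input heuristic. A dependency-free NumPy sketch for a tiny one-layer ReLU network, with the chain-rule gradient written out by hand (the weights are random placeholders, not a trained model):

```python
import numpy as np

rng = np.random.default_rng(3)

# Tiny network: f(x) = w2 . relu(W1 @ x), no biases, fixed random weights.
W1 = rng.normal(size=(4, 3))
w2 = rng.normal(size=4)

def forward(x):
    h = np.maximum(W1 @ x, 0.0)      # hidden ReLU activations
    return w2 @ h

def grad(x):
    # Chain rule: df/dx = W1.T @ (w2 * relu'(W1 @ x)).
    mask = (W1 @ x > 0).astype(float)
    return W1.T @ (w2 * mask)

x = np.array([0.8, -0.3, 1.1])

# Gradient x input attribution: how much each input drives the output.
attribution = grad(x) * x
print(attribution)

# For a bias-free piecewise-linear ReLU net, f is linear on each region,
# so the attributions sum exactly to the output (Euler's identity).
print(np.isclose(attribution.sum(), forward(x)))  # True
```

Frameworks like Captum generalize this with completeness-preserving methods (integrated gradients, DeepLIFT) and extend it from inputs to neurons and layer activations, which is the progression Chapter 7 walks through.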