Explainable AI Recipes: Implement Solutions to Model Explainability and Interpretability with Python
- Length: 278 pages
- Edition: 1
- Language: English
- Publisher: Apress
- Publication Date: 2023-03-04
- ISBN-10: 1484290283
- ISBN-13: 9781484290286
Understand how to use Explainable AI (XAI) libraries and build trust in AI and machine learning models. This book utilizes a problem-solution approach to explaining machine learning models and their algorithms.
The book starts with model interpretation for supervised learning linear models, covering feature importance, partial dependency analysis, and influential data point analysis for both classification and regression models. Next, it explains supervised learning with nonlinear models using state-of-the-art frameworks such as SHAP values/scores and LIME for local interpretation. Explainability for time-series models is covered using LIME and SHAP, as are natural language processing tasks such as text classification and sentiment analysis, using ELI5 and Alibi. The book concludes with complex models for classification and regression, such as neural networks and deep learning models, using the Captum framework, which provides feature attribution, neuron attribution, and activation attribution.
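The permutation approach mentioned above (the idea behind ELI5's permutation model) can be sketched with scikit-learn alone: shuffle one feature at a time and measure how much the model's score drops. This is an illustrative stand-in, not one of the book's own recipes; the dataset and model here are synthetic placeholders.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.inspection import permutation_importance

# Synthetic regression data: 3 informative features out of 5.
X, y = make_regression(n_samples=200, n_features=5, n_informative=3,
                       random_state=0)
model = LinearRegression().fit(X, y)

# Shuffle each feature in turn and record the drop in R^2;
# a large drop means the model relied heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance drop {imp:.3f}")
```

The informative features should show a clearly larger score drop than the uninformative ones, which is the intuition the book's permutation recipes build on.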
After reading this book, you will understand AI and machine learning models and be able to put that knowledge into practice to bring more accuracy and transparency to your analyses.
What You Will Learn
- Create code snippets and explain machine learning models using Python
- Leverage deep learning models using the latest code with agile implementations
- Build, train, and explain neural network models designed to scale
- Understand the different variants of neural network models
Who This Book Is For
AI engineers, data scientists, and software developers interested in XAI
Table of Contents
(Each recipe includes Problem, Solution, and How It Works sections.)
- About the Author
- About the Technical Reviewer
- Acknowledgments
- Introduction

Chapter 1: Introducing Explainability and Setting Up Your Development Environment
- Recipe 1-1. SHAP Installation
- Recipe 1-2. LIME Installation
- Recipe 1-3. SHAPASH Installation
- Recipe 1-4. ELI5 Installation
- Recipe 1-5. Skater Installation
- Recipe 1-6. Skope-rules Installation
- Recipe 1-7. Methods of Model Explainability
- Conclusion

Chapter 2: Explainability for Linear Supervised Models
- Recipe 2-1. SHAP Values for a Regression Model on All Numerical Input Variables
- Recipe 2-2. SHAP Partial Dependency Plot for a Regression Model
- Recipe 2-3. SHAP Feature Importance for Regression Model with All Numerical Input Variables
- Recipe 2-4. SHAP Values for a Regression Model on All Mixed Input Variables
- Recipe 2-5. SHAP Partial Dependency Plot for Regression Model for Mixed Input
- Recipe 2-6. SHAP Feature Importance for a Regression Model with All Mixed Input Variables
- Recipe 2-7. SHAP Strength for Mixed Features on the Predicted Output for Regression Models
- Recipe 2-8. SHAP Values for a Regression Model on Scaled Data
- Recipe 2-9. LIME Explainer for Tabular Data
- Recipe 2-10. ELI5 Explainer for Tabular Data
- Recipe 2-11. How the Permutation Model in ELI5 Works
- Recipe 2-12. Global Explanation for Logistic Regression Models
- Recipe 2-13. Partial Dependency Plot for a Classifier
- Recipe 2-14. Global Feature Importance from the Classifier
- Recipe 2-15. Local Explanations Using LIME
- Recipe 2-16. Model Explanations Using ELI5
- Conclusion
- References

Chapter 3: Explainability for Nonlinear Supervised Models
- Recipe 3-1. SHAP Values for Tree Models on All Numerical Input Variables
- Recipe 3-2. Partial Dependency Plot for Tree Regression Model
- Recipe 3-3. SHAP Feature Importance for Regression Models with All Numerical Input Variables
- Recipe 3-4. SHAP Values for Tree Regression Models with All Mixed Input Variables
- Recipe 3-5. SHAP Partial Dependency Plot for Regression Models with Mixed Input
- Recipe 3-6. SHAP Feature Importance for Tree Regression Models with All Mixed Input Variables
- Recipe 3-7. LIME Explainer for Tabular Data
- Recipe 3-8. ELI5 Explainer for Tabular Data
- Recipe 3-9. How the Permutation Model in ELI5 Works
- Recipe 3-10. Global Explanation for Decision Tree Models
- Recipe 3-11. Partial Dependency Plot for a Nonlinear Classifier
- Recipe 3-12. Global Feature Importance from the Nonlinear Classifier
- Recipe 3-13. Local Explanations Using LIME
- Recipe 3-14. Model Explanations Using ELI5
- Conclusion

Chapter 4: Explainability for Ensemble Supervised Models
- Recipe 4-1. Explainable Boosting Machine Interpretation
- Recipe 4-2. Partial Dependency Plot for Tree Regression Models
- Recipe 4-3. Explain an Extreme Gradient Boosting Model with All Numerical Input Variables
- Recipe 4-4. Explain a Random Forest Regressor with Global and Local Interpretations
- Recipe 4-5. Explain the Catboost Regressor with Global and Local Interpretations
- Recipe 4-6. Explain the EBM Classifier with Global and Local Interpretations
- Recipe 4-7. SHAP Partial Dependency Plot for Regression Models with Mixed Input
- Recipe 4-8. SHAP Feature Importance for Tree Regression Models with Mixed Input Variables
- Recipe 4-9. Explaining the XGBoost Model
- Recipe 4-10. Random Forest Regressor for Mixed Data Types
- Recipe 4-11. Explaining the Catboost Model
- Recipe 4-12. LIME Explainer for the Catboost Model and Tabular Data
- Recipe 4-13. ELI5 Explainer for Tabular Data
- Recipe 4-14. How the Permutation Model in ELI5 Works
- Recipe 4-15. Global Explanation for Ensemble Classification Models
- Recipe 4-16. Partial Dependency Plot for a Nonlinear Classifier
- Recipe 4-17. Global Feature Importance from the Nonlinear Classifier
- Recipe 4-18. XGBoost Model Explanation
- Recipe 4-19. Explain a Random Forest Classifier
- Recipe 4-20. Catboost Model Interpretation for Classification Scenario
- Recipe 4-21. Local Explanations Using LIME
- Recipe 4-22. Model Explanations Using ELI5
- Recipe 4-23. Multiclass Classification Model Explanation
- Conclusion

Chapter 5: Explainability for Natural Language Processing
- Recipe 5-1. Explain Sentiment Analysis Text Classification Using SHAP
- Recipe 5-2. Explain Sentiment Analysis Text Classification Using ELI5
- Recipe 5-3. Local Explanation Using ELI5
- Conclusion

Chapter 6: Explainability for Time-Series Models
- Recipe 6-1. Explain Time-Series Models Using LIME
- Recipe 6-2. Explain Time-Series Models Using SHAP
- Conclusion

Chapter 7: Explainability for Deep Learning Models
- Recipe 7-1. Explain MNIST Images Using a Gradient Explainer Based on Keras
- Recipe 7-2. Use Kernel Explainer–Based SHAP Values from a Keras Model
- Recipe 7-3. Explain a PyTorch-Based Deep Learning Model
- Conclusion
How to download the source code
1. Go to: https://github.com/Apress
2. In the "Find a repository…" box, search for the book title: Explainable AI Recipes: Implement Solutions to Model Explainability and Interpretability with Python. If no results appear, search for the main title only.
3. Click the book title in the search results.
4. Click Code to download.
To use this site's download link instead:
1. Disable any ad-blocker plugin; otherwise the links may not appear.
2. Solve the CAPTCHA.
3. Click the download link.
4. You will be redirected to the download server.