Applied Machine Learning Explainability Techniques: Make ML models explainable and trustworthy for practical applications using LIME, SHAP, and more
- Length: 306 pages
- Edition: 1
- Language: English
- Publisher: Packt Publishing
- Publication Date: 2022-07-29
- ISBN-10: 1803246154
- ISBN-13: 9781803246154
- Sales Rank: #3325353
Leverage top XAI frameworks to explain your machine learning models with ease and discover best practices and guidelines to build scalable explainable ML systems
Key Features
- Explore various explainability methods for designing robust and scalable explainable ML systems
- Use XAI frameworks such as LIME and SHAP to make ML models explainable to solve practical problems
- Design user-centric explainable ML systems using guidelines provided for industrial applications
Book Description
Explainable AI (XAI) is an emerging field that brings artificial intelligence (AI) closer to non-technical end users. XAI makes machine learning (ML) models transparent and trustworthy and promotes AI adoption for industrial and research use cases.
Applied Machine Learning Explainability Techniques comes with a unique blend of industrial and academic research perspectives to help you acquire practical XAI skills. You’ll begin by gaining a conceptual understanding of XAI and why it’s so important in AI. Next, you’ll get the practical experience needed to utilize XAI in AI/ML problem-solving processes using state-of-the-art methods and frameworks. Finally, you’ll get the essential guidelines needed to take your XAI journey to the next level and bridge the existing gaps between AI and end users.
By the end of this ML book, you’ll be equipped with best practices in the AI/ML life cycle and will be able to implement XAI methods and approaches using Python to solve industrial problems, successfully addressing key pain points encountered.
What you will learn
- Explore various explanation methods and their evaluation criteria
- Learn model explanation methods for structured and unstructured data
- Apply data-centric XAI for practical problem-solving
- Get hands-on exposure to LIME, SHAP, TCAV, DALEX, ALIBI, DiCE, and others
- Discover industrial best practices for explainable ML systems
- Use user-centric XAI to bring AI closer to non-technical end users
- Address open challenges in XAI using the recommended guidelines
Who this book is for
This book is designed for scientists, researchers, engineers, architects, and managers who are actively engaged in machine learning and related fields. In general, anyone interested in problem-solving using AI will benefit from this book. Readers are recommended to have foundational knowledge of Python, machine learning, deep learning, and data science. This book is ideal for readers working in the following roles:
- Data and AI Scientists
- AI/ML Engineers
- AI/ML Product Managers
- AI Product Owners
- AI/ML Researchers
- User experience and HCI Researchers
Applied Machine Learning Explainability Techniques
- Contributors: About the author; About the reviewers
- Preface: Who this book is for; What this book covers; To get the most out of this book; Download the example code files; Download the color images; Conventions used; Get in touch; Share Your Thoughts
- Section 1 – Conceptual Exposure
  - Chapter 1: Foundational Concepts of Explainability Techniques — Introduction to XAI; Understanding the key terms; Consequences of poor predictions; Summarizing the need for model explainability; Defining explanation methods and approaches; Dimensions of explainability; Addressing key questions of explainability; Understanding different types of explanation methods; Understanding the accuracy-interpretability trade-off; Evaluating the quality of explainability methods; Criteria for good explainable ML systems; Auxiliary criteria of XAI for ML systems; Taxonomy of evaluation levels for explainable ML systems; Summary; References
  - Chapter 2: Model Explainability Methods — Technical requirements; Types of model explainability methods; Knowledge extraction methods; EDA; Result visualization methods; Using comparison analysis; Using Surrogate Explainer methods; Influence-based methods; Feature importance; Sensitivity analysis; PDPs; LRP; Representation-based explanation; VAMs; Example-based methods; CFEs in structured data; CFEs in unstructured data; Summary; References
  - Chapter 3: Data-Centric Approaches — Technical requirements; Introduction to data-centric XAI; Analyzing data volume; Analyzing data consistency; Analyzing data purity; Thorough data analysis and profiling process; The need for data analysis and profiling processes; Data analysis as a precautionary step; Building robust data profiles; Monitoring and anticipating drifts; Detecting drifts; Selection of statistical measures; Checking adversarial robustness; Impact of adversarial attacks; Methods to increase adversarial robustness; Evaluating adversarial robustness; Measuring data forecastability; Estimating data forecastability; Summary; References
- Section 2 – Practical Problem Solving
  - Chapter 4: LIME for Model Interpretability — Technical requirements; Intuitive understanding of LIME; Learning interpretable data representations; Maintaining a balance in the fidelity-interpretability trade-off; Searching for local explorations; What makes LIME a good model explainer?; SP-LIME; A practical example of using LIME for classification problems; Potential pitfalls; Summary; References
  - Chapter 5: Practical Exposure to Using LIME in ML — Technical requirements; Using LIME on tabular data; Setting up LIME; Discussion about the dataset; Discussions about the model; Application of LIME; Explaining image classifiers with LIME; Setting up the required Python modules; Using a pre-trained TensorFlow model as our black-box model; Application of LIME Image Explainers; Using LIME on text data; Installing the required Python modules; Discussions about the dataset used for training the model; Discussions about the text classification model; Applying LIME Text Explainers; LIME for production-level systems; Summary; References
  - Chapter 6: Model Interpretability Using SHAP — Technical requirements; An intuitive understanding of SHAP and Shapley values; Introduction to SHAP and Shapley values; What are Shapley values?; Shapley values in ML; The SHAP framework; Model explainability approaches using SHAP; Visualizations in SHAP; Explainers in SHAP; Using SHAP to explain regression models; Setting up SHAP; Inspecting the dataset; Training the model; Application of SHAP; Advantages and limitations of SHAP; Advantages; Limitations; Summary; References
  - Chapter 7: Practical Exposure to Using SHAP in ML — Technical requirements; Applying TreeExplainers to tree ensemble models; Installing the required Python modules; Discussion about the dataset; Training the model; Application of TreeExplainer in SHAP; Explaining deep learning models using DeepExplainer and GradientExplainer; GradientExplainer; Discussion on the dataset used for training the model; Using a pre-trained CNN model for this example; Application of GradientExplainer in SHAP; Exploring DeepExplainers; Application of DeepExplainer in SHAP; Model-agnostic explainability using KernelExplainer; Application of KernelExplainer in SHAP; Exploring LinearExplainer in SHAP; Application of LinearExplainer in SHAP; Explaining transformers using SHAP; Explaining transformer-based sentiment analysis models; Explaining a multi-class prediction transformer model using SHAP; Explaining zero-shot learning models using SHAP; Summary; References
  - Chapter 8: Human-Friendly Explanations with TCAV — Technical requirements; Understanding TCAV intuitively; What is TCAV?; Explaining with abstract concepts; Goals of TCAV; Approach of TCAV; Exploring the practical applications of TCAV; Getting started; About the data; Discussions about the deep learning model used; Model explainability using TCAV; Advantages and limitations; Advantages; Limitations; Potential applications of concept-based explanations; Summary; References
  - Chapter 9: Other Popular XAI Frameworks — Technical requirements; DALEX; Setting up DALEX for model explainability; Discussions about the dataset; Training the model; Model explainability using DALEX; Model-level explanations; Prediction-level explanations; Evaluating model fairness; Interactive dashboards using ARENA; Explainerdashboard; Setting up Explainerdashboard; Model explainability with Explainerdashboard; InterpretML; Supported explanation methods; Setting up InterpretML; Discussions about the dataset; Training the model; Explainability with InterpretML; ALIBI; Setting up ALIBI; Discussion about the dataset; Training the model; Model explainability with ALIBI; DiCE; CFE methods supported in DiCE; Model explainability with DiCE; ELI5; Setting up ELI5; Model explainability using ELI5; H2O AutoML explainers; Explainability with H2O explainers; Quick comparison guide; Summary; References
- Section 3 – Taking XAI to the Next Level
  - Chapter 10: XAI Industry Best Practices — Open challenges of XAI; Guidelines for designing explainable ML systems; Adopting a data-first approach for explainability; Emphasizing IML for explainability; Emphasizing prescriptive insights for explainability; Summary; References
  - Chapter 11: End User-Centered Artificial Intelligence — User-centered XAI/ML systems; Different aspects of end user-centric XAI; Rapid XAI prototyping using EUCA; Efforts toward increasing user acceptance of AI/ML systems using XAI; Providing a delightful UX; Summary; References
- Back matter: Why subscribe?; Other Books You May Enjoy; Packt is searching for authors like you; Share Your Thoughts
How to download the source code?
1. Go to: https://github.com/PacktPublishing
2. In the Find a repository… box, search for the book title: Applied Machine Learning Explainability Techniques: Make ML models explainable and trustworthy for practical applications using LIME, SHAP, and more. Sometimes the full title returns no results; in that case, search for the main title only.
3. Click the book title in the search results.
4. Click Code to download.
1. Disable any ad-blocking plugin. Otherwise, the download links may not appear.
2. Solve the CAPTCHA.
3. Click the download link.
4. You will be redirected to the download server, where the download will start.