Platform and Model Design for Responsible AI: Design and build resilient, private, fair, and transparent machine learning models
- Length: 538 pages
- Edition: 1
- Language: English
- Publisher: Packt Publishing
- Publication Date: 2023-05-09
- ISBN-10: 1803237074
- ISBN-13: 9781803237077
Craft ethical AI projects with privacy, fairness, and risk assessment features for scalable and distributed systems while maintaining explainability and sustainability
Purchase of the print or Kindle book includes a free PDF eBook
Key Features
- Learn risk assessment for machine learning frameworks in a global landscape
- Discover patterns for next-generation AI ecosystems for successful product design
- Make explainable predictions for privacy and fairness-enabled ML training
Book Description
AI algorithms are ubiquitous and used for everything, from recruiting to deciding who will get a loan. With such widespread use of AI in the decision-making process, it’s necessary to build an explainable, responsible, transparent, and trustworthy AI-enabled system. With Platform and Model Design for Responsible AI, you’ll be able to make existing black box models transparent.
You’ll be able to identify and eliminate bias in your models, deal with uncertainty arising from both data and model limitations, and deliver a responsible AI solution. You’ll start by designing ethical models for traditional machine learning and deep learning, and deploying them in a sustainable production setup. Next, you’ll learn how to set up data pipelines, validate datasets, and set up component microservices in a secure and private way in any cloud-agnostic framework. You’ll then build a fair and private ML model with appropriate constraints, tune its hyperparameters, and evaluate the model metrics.
By the end of this book, you’ll know the best practices for complying with data privacy laws and ethical standards, along with the techniques needed for data anonymization. You’ll be able to develop models with explainability, store them in feature stores, and handle uncertainty in model predictions.
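As a flavor of the anonymization techniques the description mentions, here is a minimal sketch of a k-anonymity check with generalization. The field names, bucket size, and sample records are invented for illustration; the book's own examples may differ.

```python
# Minimal k-anonymity sketch: generalize quasi-identifiers (exact ages
# become ranges, ZIP codes are truncated) until every combination of
# quasi-identifier values appears at least k times in the dataset.
from collections import Counter

def generalize_age(age: int, bucket: int = 10) -> str:
    """Replace an exact age with a coarser range, e.g. 34 -> '30-39'."""
    low = (age // bucket) * bucket
    return f"{low}-{low + bucket - 1}"

def is_k_anonymous(records, quasi_ids, k=2):
    """True if every quasi-identifier combination occurs at least k times."""
    counts = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(c >= k for c in counts.values())

raw = [
    {"age": 34, "zip": "10001", "diagnosis": "flu"},
    {"age": 36, "zip": "10002", "diagnosis": "cold"},
    {"age": 33, "zip": "10001", "diagnosis": "flu"},
]

# Exact ages and full ZIP codes rarely satisfy k-anonymity...
print(is_k_anonymous(raw, ["age", "zip"], k=2))   # False

# ...so generalize: bucket the ages and truncate the ZIP codes.
anon = [
    {**r, "age": generalize_age(r["age"]), "zip": r["zip"][:3] + "**"}
    for r in raw
]
print(is_k_anonymous(anon, ["age", "zip"], k=2))  # True
```

Real pipelines would pair this with l-diversity or t-closeness checks (both covered in Chapter 2) before releasing data.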
What you will learn
- Understand the threats and risks involved in machine learning models
- Discover varying levels of risk mitigation strategies and risk tiering tools
- Apply traditional and deep learning optimization techniques efficiently
- Build auditable and interpretable ML models and feature stores
- Understand the concept of uncertainty and explore model explainability tools
- Develop models for different clouds including AWS, Azure, and GCP
- Explore ML orchestration tools like Kubeflow and VertexAI
- Incorporate privacy and fairness in ML models from design to deployment
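To give a concrete sense of the fairness topics listed above, the following is a minimal sketch of one common group-fairness metric, the demographic parity difference. The predictions and group labels are made up; a real project would typically compute this with a dedicated fairness library.

```python
# Demographic parity difference: the gap in positive-prediction rates
# between groups defined by a sensitive attribute. A gap near 0 means
# both groups receive positive outcomes at similar rates; a large gap
# flags potential bias worth investigating.

def positive_rate(preds, groups, group):
    members = [p for p, g in zip(preds, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_diff(preds, groups):
    rates = {g: positive_rate(preds, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

preds  = [1, 0, 1, 1, 0, 1, 0, 0]                  # model decisions (e.g. loan approved)
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]  # sensitive attribute values

gap = demographic_parity_diff(preds, groups)
print(f"demographic parity difference: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
```

This is only one of several fairness notions; Chapter 7 contrasts statistical, similarity-based, and causal definitions, which can disagree on the same model.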
Who This Book Is For
This book is for experienced machine learning professionals who want to understand the risks and leakage points of ML models and frameworks, and to develop reusable components that reduce the effort and cost of setting up and maintaining an AI ecosystem.
Table of Contents
Contributors
- About the authors
- About the reviewers
Preface
- Who this book is for
- What this book covers
- To get the most out of this book
- Download the example code files
- Conventions used
- Get in touch
- Share Your Thoughts
- Download a free PDF copy of this book
Part 1: Risk Assessment Machine Learning Frameworks in a Global Landscape
Chapter 1: Risks and Attacks on ML Models
- Technical requirements
- Discovering risk elements
- Strategy risk
- Financial risk
- Technical risk
- People and processes risk
- Trust and explainability risk
- Compliance and regulatory risk
- Exploring risk mitigation strategies with vision, strategy, planning, and metrics
- Defining a structured risk identification process
- Enterprise-wide controls
- Micro-risk management and the reinforcement of controls
- Assessing potential impact and loss due to attacks
- Discovering different types of attacks
- Data phishing privacy attacks
- Poisoning attacks
- Evasion attacks
- Model stealing/extraction
- Perturbation attacks
- Scaffolding attack
- Model inversion
- Transfer learning attacks
- Summary
- Further reading
Chapter 2: The Emergence of Risk-Averse Methodologies and Frameworks
- Technical requirements
- Analyzing the threat matrix and defense techniques
- Researching and planning during the system and model design/architecture phase
- Model training and development
- ML model live in production
- Anonymization and data encryption
- Data masking
- Data swapping
- Data perturbation
- Data generalization
- K-anonymity
- L-diversity
- T-closeness
- Pseudonymization
- Homomorphic encryption
- Secure Multi-Party Computation (MPC/SMPC)
- Differential Privacy (DP)
- Sensitivity
- Properties of DP
- Hybrid privacy methods and models
- Adversarial risk mitigation frameworks
- Model robustness
- Summary
- Further reading
Chapter 3: Regulations and Policies Surrounding Trustworthy AI
- Regulations and enforcements under different authorities
- Regulations in the European Union
- Propositions/acts passed by other countries
- Special regulations for children and minority groups
- Promoting equality for minority groups
- Educational initiatives
- International AI initiatives and cooperative actions
- Next steps for trustworthy AI
- Proposed solutions and improvement areas
- Summary
- Further reading
Part 2: Building Blocks and Patterns for a Next-Generation AI Ecosystem
Chapter 4: Privacy Management in Big Data and Model Design Pipelines
- Technical requirements
- Designing privacy-proven pipelines
- Big data pipelines
- Architecting model design pipelines
- Incremental/continual ML training and retraining
- Scaling defense pipelines
- Enabling differential privacy in scalable architectures
- Designing secure microservices
- Vault
- Cloud security architecture
- Developing in a sandbox environment
- Managing secrets in cloud orchestration services
- Monitoring and threat detection
- Summary
- Further reading
Chapter 5: ML Pipeline, Model Evaluation, and Handling Uncertainty
- Technical requirements
- Understanding different components of ML pipelines
- ML tasks and algorithms
- Uncertainty in ML
- Types of uncertainty
- Quantifying uncertainty
- Uncertainty in regression tasks
- Uncertainty in classification tasks
- Tools for benchmarking and quantifying uncertainty
- The Uncertainty Baselines library
- Keras-Uncertainty
- Robustness metrics
- Summary
- References
Chapter 6: Hyperparameter Tuning, MLOps, and AutoML
- Technical requirements
- Introduction to AutoML
- Introducing H2O AutoML
- Understanding Amazon SageMaker Autopilot
- The need for MLOps
- TFX – a scalable end-to-end platform for AI/ML workflows
- Understanding Kubeflow
- Katib for hyperparameter tuning
- Vertex AI
- Datasets
- Training and experiments in Vertex AI
- Vertex AI Workbench
- Summary
- Further reading
Part 3: Design Patterns for Model Optimization and Life Cycle Management
Chapter 7: Fairness Notions and Fair Data Generation
- Technical requirements
- Understanding the impact of data on fairness
- Real-world bias examples
- Causes of bias
- Defining fairness
- Types of fairness based on statistical metrics
- Types of fairness based on the metrics of predicted outcomes
- Types of fairness based on similarity-based measures
- Types of fairness based on causal reasoning
- The role of data audits and quality checks in fairness
- Assessing fairness
- Linear regression
- The variance inflation factor
- Mutual information
- Significance tests
- Evaluating group fairness
- Evaluating counterfactual fairness
- Best practices
- Fair synthetic datasets
- MOSTLY AI’s self-supervised fair synthetic data generator
- A GAN-based fair synthetic data generator
- Summary
- Further reading
Chapter 8: Fairness in Model Optimization
- Technical requirements
- The notion of fairness in ML
- Unfairness mitigation methods
- In-processing methods
- Explicit unfairness mitigation
- Fairness constraints for a classification task
- Fairness constraints for a regression task
- Fairness constraints for a clustering task
- Fairness constraints for a reinforcement learning task
- Fairness constraints for recommendation systems
- Challenges of fairness
- Missing sensitive attributes
- Multiple sensitive attributes
- Choice of fairness measurements
- Individual versus group fairness trade-off
- Interpretation and fairness
- Fairness versus model performance
- Limited datasets
- Summary
- Further reading
Chapter 9: Model Explainability
- Technical requirements
- Introduction to Explainable AI
- Scope of XAI
- Challenges in XAI
- Explain Like I’m Five (ELI5)
- LIME
- SHAP
- Understanding churn modeling using XAI techniques
- Building a model
- Using ELI5 to understand classifier models
- Hands-on with LIME
- SHAP in action
- CausalNex
- DoWhy for causal inference
- DoWhy in action
- AI Explainability 360 for interpreting models
- Summary
- References
Chapter 10: Ethics and Model Governance
- Technical requirements
- Model Risk Management (MRM)
- Types of model inventory management
- Cost savings with MRM
- A transformative journey with MRM
- Model risk tiering
- Model risk calibration
- Model version control
- ModelDB
- Weights & Biases
- Further reading
Part 4: Implementing an Organization Strategy, Best Practices, and Use Cases
Chapter 11: The Ethics of Model Adaptability
- Technical requirements
- Adaptability framework for data and model drift
- Statistical methods
- Statistical process control
- Understanding model explainability during concept drift/calibration
- Explainability and calibration
- Challenges with calibration and fairness
- Summary
- Further reading
Chapter 12: Building Sustainable Enterprise-Grade AI Platforms
- Technical requirements
- The key to sustainable enterprise-grade AI platforms
- Sustainable solutions with AI as an organizational roadmap
- Organizational standards for sustainable frameworks
- Sustainability practices and metrics across different cloud platforms
- Emission metrics on Google Cloud
- Best practices and strategies for carbon-free energy
- The energy efficiency of data centers
- Carbon emission trackers
- The FL carbon calculator
- Centralized learning carbon emissions calculator
- Adopting sustainable model training and deployment with FL
- CO2e emission metrics
- Comparing emission factors – centralized learning versus FL
- Illustrating how FL works better than centralized learning
- The CO2 footprint of FL
- How to compensate for equivalent CO2e emissions
- Design patterns of FL-based model training
- Sustainability in model deployments
- Design patterns of FL-based model deployments
- Summary
- Further reading
Chapter 13: Sustainable Model Life Cycle Management, Feature Stores, and Model Calibration
- Sustainable model development practices
- Organizational standards for sustainable, trustworthy frameworks
- Explainability, privacy, and sustainability in feature stores
- Feature store components and functionalities
- Feature stores for FL
- Exploring model calibration
- Determining whether a model is well calibrated
- Calibration techniques
- Model calibration using scikit-learn
- Building sustainable, adaptable systems
- Concept drift-aware federated averaging (CDA-FedAvg)
- Summary
- Further reading
Chapter 14: Industry-Wide Use Cases
- Technical requirements
- Building ethical AI solutions across industries
- Biased chatbots
- Ethics in XR/AR/VR
- Use cases in retail
- Privacy in the retail industry
- Fairness in the retail industry
- Interpretability – the role of counterfactuals (CFs)
- Supply chain use cases
- Use cases in BFSI
- Deepfakes
- Use cases in healthcare
- Healthcare system architecture using Google Cloud
- Survival analysis for Responsible AI healthcare applications
- Summary
- Further reading
Index
Why subscribe?
Other Books You May Enjoy
- Packt is searching for authors like you
- Share Your Thoughts
- Download a free PDF copy of this book
How to download the source code
1. Go to https://github.com/PacktPublishing.
2. In the "Find a repository…" box, search for the book title: Platform and Model Design for Responsible AI: Design and build resilient, private, fair, and transparent machine learning models. If the full title returns no results, search for the main title only.
3. Click the book title in the search results.
4. Click Code to download.