Responsible AI in the Enterprise offers a comprehensive guide to implementing ethical, transparent, and compliant AI systems in an organization. Focusing on the key concepts behind explainable, safe, ethical, robust, transparent, auditable, and interpretable machine learning models, this book equips developers with techniques and algorithms to tackle complex issues such as bias, fairness, and model governance. Readers will gain an in-depth understanding of Fairlearn and InterpretML, as well as other tools such as Google's What-If Tool, ML Fairness Gym, IBM's AI Fairness 360 toolkit, and Aequitas.
The book covers various aspects of responsible AI, including model interpretability, monitoring and management of model drift, and recommendations on compliance standards. It provides practical insights into how to use AI governance tools to ensure fairness, bias mitigation, explainability, and privacy compliance in an enterprise setting. Readers will explore the interpretability toolkits and fairness measures offered by major cloud AI providers such as IBM, Amazon, Google, and Microsoft, and learn how to use Fairlearn for fairness assessment and bias mitigation. By the end of this book, you will have got to grips with the tools and techniques available to create transparent and accountable machine learning models.
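As a taste of the workflow the book teaches, below is a minimal sketch of a fairness assessment with Fairlearn. It trains a toy classifier on synthetic data (an illustrative assumption, not an excerpt from the book), disaggregates accuracy across a sensitive feature with MetricFrame, and reports the demographic parity gap:

    # Minimal Fairlearn fairness-assessment sketch; the dataset, model choice,
    # and "sensitive" attribute are synthetic stand-ins for illustration only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from fairlearn.metrics import MetricFrame, demographic_parity_difference

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    sensitive = rng.integers(0, 2, size=1000)  # hypothetical binary protected attribute
    y = (X[:, 0] + 0.5 * sensitive + rng.normal(size=1000) > 0).astype(int)

    model = LogisticRegression().fit(X, y)
    y_pred = model.predict(X)

    # Disaggregate a standard metric by group, then measure the selection-rate gap
    frame = MetricFrame(metrics=accuracy_score, y_true=y, y_pred=y_pred,
                        sensitive_features=sensitive)
    print(frame.by_group)
    print(demographic_parity_difference(y, y_pred, sensitive_features=sensitive))

A demographic parity difference near zero suggests the model selects positives at similar rates across groups; larger gaps are exactly the kind of disparity the book's mitigation chapters address.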
Authors: Adnan Masood, PhD | Heather Dawe, MSc
Publisher: Packt Publishing Limited
Year: 2023
Language: English
Pages: 850
Responsible AI in the Enterprise
Foreword
Contributors
About the authors
About the reviewer
Preface
Who this book is for
Essential chapters tailored to distinct AI-related positions
What this book covers
To get the most out of this book
Download the example code files
Conventions used
Get in touch
Share Your Thoughts
Download a free PDF copy of this book
Part 1: Bigot in the Machine – A Primer
1
Explainable and Ethical AI Primer
The imperative of AI governance
Key terminologies
Explainability
Interpretability
Explicability
Safe and trustworthy
Fairness
Ethics
Transparency
Model governance
Enterprise risk management and governance
Tools for enterprise risk governance
AI risk governance in the enterprise
Perpetuating bias – the network effect
Transparency versus black-box apologetics – advocating for AI explainability
The AI alignment problem
Summary
References and further reading
2
Algorithms Gone Wild
AI in hiring and recruitment
Facial recognition
Bias in large language models (LLMs)
Hidden cost of AI safety – low wages and psychological impact
AI-powered inequity and discrimination
Policing and surveillance
Social media and attention engineering
The environmental impact
Autonomous weapon systems and military
The AI Incident Database (AIID)
Summary
References and further reading
Part 2: Enterprise Risk Observability and Model Governance
3
Opening the Algorithmic Black Box
Getting started with interpretable methods
The business case for explainable AI
Taxonomy of ML explainability methods
SHapley Additive exPlanations (SHAP)
How is SHAP different from Shapley values?
A working example of SHAP
Local Interpretable Model-Agnostic Explanations (LIME)
A working example of LIME
Feature importance
Anchors
Partial dependence plots (PDPs)
Counterfactual explanations
Summary
References and further reading
4
Robust ML – Monitoring and Management
An overview of ML attacks and countermeasures
Model and data security
Privacy and compliance
Attack prevention and monitoring
Ethics and responsible AI
The ML life cycle
Adopting an ML life cycle
MLOps and ModelOps
Model drift
Data drift
Concept drift
Monitoring and mitigating drift in ML models
Simple data drift detection using a Python data drift detector
Housing price data drift detection using Evidently
Analyzing data drift using Azure ML
Summary
References and further reading
5
Model Governance, Audit, and Compliance
Policies and regulations
United States
European Union
United Kingdom
Singapore
United Arab Emirates
Toronto Declaration – protecting the right to equality in ML
Professional bodies and industry standards
Microsoft’s Responsible AI framework
IEEE Global Initiative for Ethical Considerations in AI and Autonomous Systems
ISO/IEC’s standards for AI
OECD AI Principles
The University of Oxford’s recommendations for AI governance
PwC’s Responsible AI Principles/Toolkit
Alan Turing Institute guide to AI ethics
Technology toolkits
Microsoft Fairlearn
IBM’s AI Explainability 360 open source toolkit
Credo AI Lens toolkit
PiML – the integrated Python toolbox for interpretable ML
FAT Forensics – algorithmic fairness, accountability, and transparency toolbox
Aequitas – the Bias and Fairness Audit Toolkit
AI trust, risk, and security management
Auditing checklists and measures
Datasheets for datasets
Model cards for model reporting
Summary
References and further reading
6
Enterprise Starter Kit for Fairness, Accountability, and Transparency
Getting started with enterprise AI governance
AI STEPS FORWARD – AI governance framework
Implementing AI STEPS FORWARD in an enterprise
The strategic principles of AI STEPS FORWARD
AI STEPS FORWARD in enterprise governance
The AI STEPS FORWARD maturity model
Risk management in AI STEPS FORWARD
Measures and metrics of AI STEPS FORWARD
AI STEPS FORWARD – taxonomy of components
Salient capabilities for AI governance
The indispensable role of the C-suite in fostering responsible AI adoption
An AI Center of Excellence
The role of internal AI boards in enterprise AI governance
Healthcare systems
Retail and e-commerce systems
Financial services
Predictive analytics and forecasting
Cross-industry applications of AI
Establishing repeatable processes, controls, and assessments for AI systems
Ethical AI upskilling and education
Summary
References and further reading
Part 3: Explainable AI in Action
7
Interpretability Toolkits and Fairness Measures – AWS, GCP, Azure, and AIF 360
Getting started with hyperscaler interpretability toolkits
Google Vertex Explainable AI
Model interpretability in Vertex AI – feature attribution and example-based explanations
Integration with Google Colab and other notebooks
Simplified deployment
Explanations are comprehensive and multimodal
AWS SageMaker Clarify
Azure Machine Learning model interpretability
Azure’s responsible AI offerings
Responsible AI scorecards
Open source offerings – the responsible AI toolbox
Open source toolkits and lenses
IBM AI Fairness 360
Aequitas – Bias and Fairness Audit Toolkit
Privacy-enhancing technologies (PETs)
Differential privacy
Homomorphic encryption
Secure multiparty computation
Federated learning
Data anonymization
Data perturbation
Summary
References and further reading
8
Fairness in AI Systems with Microsoft Fairlearn
Getting started with fairness
Fairness metrics
Fairness-related harms
Getting started with Fairlearn
Summary
References and further reading
9
Fairness Assessment and Bias Mitigation with Fairlearn and the Responsible AI Toolbox
Fairness metrics
Demographic parity
Equalized odds
Simpson’s paradox and the risks of multiple testing
Bias and disparity mitigation with Fairlearn
Fairness in real-world scenarios
Mitigating correlation-related bias
The Responsible AI Toolbox
The Responsible AI dashboard
Summary
References and further reading
10
Foundational Models and Azure OpenAI
Foundation models
Bias in foundation models
The AI alignment challenge – investigating GPT-4’s power-seeking behavior with ARC
Enterprise use of foundation models and bias remediation
Biases in GPT-3
Azure OpenAI
Access to Azure OpenAI
The Code of Conduct
Azure OpenAI Service content filtering
Use cases and governance
What not to do – limitations and potential risks
Data, privacy, and security for Azure OpenAI Service
AI governance for the enterprise use of Azure OpenAI
Getting started with Azure OpenAI
Consuming the Azure OpenAI GPT-3 model using the API
Azure OpenAI Service models
Code generation models
Embedding models
Summary
References and further reading
Index
Why subscribe?
Other Books You May Enjoy
Packt is searching for authors like you
Share Your Thoughts
Download a free PDF copy of this book