AI Assurance: Towards Trustworthy, Explainable, Safe, and Ethical AI provides readers with solutions and a foundational understanding of the methods that can be applied to test AI systems and establish assurance. Anyone developing intelligent software systems, building learning algorithms, or deploying AI to a domain-specific problem (such as allocating cyber breaches, analyzing causation at a smart farm, reducing readmissions at a hospital, ensuring soldiers' safety on the battlefield, or predicting one country's exports to another) will benefit from the methods presented in this book.
As AI assurance is now a major area of AI and engineering research, this book will serve as a guide for researchers, scientists, and students in their studies and experimentation. Moreover, as AI is increasingly discussed and deployed in government and policymaking venues, the assurance of AI systems, as presented in this book, sits at the nexus of those debates.
Author(s): Feras A. Batarseh, Laura Freeman
Publisher: Academic Press
Year: 2022
Language: English
Pages: 600
City: London
Front Cover
AI Assurance
Copyright
Contents
Contributors
A note by the editors
A note on the book cover
Foreword 1
Foreword 2
Foreword 3
Part 1 Foundations of AI assurance
1 An introduction to AI assurance
1.1 Motivation and overview
1.1.1 Book content
1.2 The need for new assurance methods
1.3 Conclusion
References
2 Setting the goals for ethical, unbiased, and fair AI
2.1 Introduction and background
2.1.1 Value-loading
2.1.1.1 The control problem
2.1.1.2 The value-loading problem
2.1.2 Human-compatible AI
2.1.2.1 Cooperative inverse reinforcement learning
2.1.3 The alignment problem
2.1.3.1 The role of training data
2.1.3.2 The objective function
2.1.4 AI assurance: a formal framework
2.2 Ethical AI but… how?
2.2.1 Three normative theories: a brief outline
2.2.1.1 Deontological ethics: duties
2.2.1.2 Utilitarianism
2.2.1.3 Virtue ethics
2.2.2 The implementation problem
2.2.2.1 Top-down approach
2.2.2.2 Bottom-up approach
2.2.3 Intentional statements and reward functions
2.2.3.1 The problem of specification
2.2.3.2 Moral uncertainty
2.3 Conclusion
References
3 An overview of explainable and interpretable AI
3.1 Introduction
3.2 Methods and materials
3.2.1 Statistics and evaluation metrics
3.2.1.1 Mean
3.2.1.2 Median
3.2.1.3 Standard deviation and variance
3.2.1.4 R²
3.2.1.5 Accuracy
3.2.1.6 Precision, recall, and F1
3.2.2 Shape metrics
3.2.2.1 Area and perimeter
3.2.2.2 Shape proportion and encircled image-histograms
3.2.2.3 FD
3.2.2.4 Circularity
3.2.2.5 Eigenvalues and eccentricity
3.2.2.6 Number of corners
3.2.2.7 Hu moments
3.2.3 Modeling algorithms
3.2.3.1 OLS, GLM, and non-linear models
3.2.3.2 k-NN
3.2.3.3 Naïve Bayes
3.2.3.4 Linear and quadratic discriminant analysis
3.2.3.5 Trees
3.2.3.6 Random forests
3.2.3.7 SVM
3.2.3.8 CNNs
3.2.3.9 DAMG
3.2.3.10 Perceived accuracy
3.2.4 Dimensionality reduction
3.2.4.1 Subset selection procedures
3.2.4.2 LASSO, ridge, and elastic net
3.2.4.3 PCA
3.2.4.4 FA
3.2.4.5 Fourier transform
3.2.4.6 Manifolds
3.2.5 Model assurance
3.2.5.1 Resampling methods
3.2.5.2 Effect comparison
3.2.5.3 HILT models
3.2.5.4 Influential observations
3.2.5.5 Visualization methods
3.3 Experiments using XAI models
3.3.1 Satellite imagery
3.3.2 White blood cell
3.4 Discussion
3.4.1 XAI vs. AI in critical applications
3.4.2 Explainability, interpretability, and model assurance in practice
3.4.3 XAI models outperform CNN-based solutions
3.4.4 XAI, deep learning models, and human inputs
3.4.5 Extending the lessons learned to non-image problems
3.5 Future work
3.6 Conclusion
Acknowledgments
References
4 Bias, fairness, and assurance in AI: overview and synthesis
4.1 Introduction
4.2 Assurance and ethical AI
4.2.1 Overview of bias and lack of assurance in AI
4.2.2 Current assurance methods for bias reduction
4.3 Validation methods
4.4 Synthesis of the literature
4.5 Conclusion
References
5 An evaluation of the potential global impacts of AI assurance
5.1 Introduction
5.2 Literature review
5.3 Methodology & modeling
5.3.1 Scenario 1: full adoption of AI across all regions
5.3.2 Scenario 2: estimation of gains from AI ethical frameworks across all regions
5.3.3 Scenario 3: estimation of loss due to strict liabilities across all regions
5.4 Results and analysis
5.4.1 Impact of policy shocks on GDP of countries/regions
5.4.2 Impact of policy shocks on output of countries/regions
5.4.3 Impact of policy shocks on employment of countries/regions
5.4.4 Impact of policy shocks on export of countries/regions
5.4.5 Impact of policy shocks on import of countries/regions
5.5 Conclusion
Acknowledgment
References
Part 2 AI assurance methods
6 The role of inference in AI: Start S.M.A.L.L. with mindful modeling
6.1 Real wisdom on artificial intelligence
6.2 Fundamentals: decision-making, heuristics and cognitive biases
6.2.1 Dual-process model of decision-making
6.2.2 Error and bias in medical decision-making
6.2.3 Implicit and/or explicit: bias in AI practitioners and AI models
6.3 Fundamentals: yearning to make sense of the world through models and inference
6.3.1 Mindful modeling approaches: a mark of thoughtful work
6.3.2 Start S.M.A.L.L. (Specific-Mindful-Attainable-Limited-Lucid)
6.3.2.1 Conceptual modeling
6.3.2.2 Group model building process
6.3.2.3 Causal modeling
6.3.3 Inference in modeling
6.3.3.1 Frequentist (Fisherian) inference
6.3.3.2 Probabilistic (Bayesian) inference: a gateway to causal inference
6.3.3.3 Causal inference: tempting the trope that "correlation does not imply causation"
6.4 Bolstering AI assurance: reducing biases with inferential methods
6.4.1 What is AI assurance?
6.4.1.1 Working scenario: mitigating bias in healthcare through AI assurance
6.4.2 Contemporary AI: mindful modeling before data engineering helps reduce bias
6.4.2.1 Question 1: what is the basis of ground truth for teaching the machine?
6.4.2.2 Question 2: who determines when predictive analytics are used in decision-making?
6.4.2.3 Question 3: when is a problem cognitively complex enough to obscure bias present in decision-making?
6.4.3 Considering the level of system predictability when designing AI assurance
6.5 Rest assured: mindful approaches in modeling may help avoid another AI winter
6.6 Further reading
Acknowledgments
References
7 Outlier detection using AI: a survey
7.1 Introduction and motivation
7.2 Outlier detection methods
7.2.1 Statistical and probabilistic-based methods
7.2.1.1 Parametric distribution models
7.2.1.2 Non-parametric distribution models
7.2.1.3 Miscellaneous statistical models
7.2.1.4 Advantages of statistical and probabilistic-based methods
7.2.1.5 Disadvantages of statistical and probabilistic-based methods
7.2.1.6 Research gaps and suggestions
7.2.2 Density-based methods
7.2.2.1 Advantages of density-based methods
7.2.2.2 Disadvantages of density-based methods
7.2.2.3 Research gaps and suggestions
7.2.3 Clustering-based methods
7.2.3.1 Advantages of clustering-based methods
7.2.3.2 Disadvantages of clustering-based methods
7.2.3.3 Research gaps and suggestions
7.2.4 Distance-based methods
7.2.4.1 K-nearest neighbor models
7.2.4.2 Pruning techniques
7.2.4.3 Time series data
7.2.4.4 Advantages of distance-based methods
7.2.4.5 Disadvantages of distance-based methods
7.2.4.6 Research gaps and suggestions
7.2.5 Ensemble methods
7.2.5.1 Advantages of ensemble methods
7.2.5.2 Disadvantages of ensemble methods
7.2.5.3 Research gaps and suggestions
7.2.6 Learning-based methods
7.2.6.1 Subspace learning models
7.2.6.2 Active learning models
7.2.6.3 Graph-based learning models
7.2.6.4 Deep learning models
7.2.6.5 Advantages of learning-based methods
7.2.6.6 Disadvantages of learning-based methods
7.2.6.7 Research gaps and suggestions
7.3 Tools for outlier detection
7.4 Datasets for outlier detection
7.5 AI assurance and outlier detection
7.6 Conclusions
References
8 AI assurance using causal inference: application to public policy
8.1 Introduction and motivation
8.2 Causal inference
8.2.1 An introduction to causal inference
8.2.2 Overview of causal inference methods
8.3 AI assurance using causal inference
8.3.1 AI assurance: goals and methods
8.3.2 Methods for leveraging causality in assurance
8.3.3 Application of causality in assurance: economy of technology example
8.4 Network representations of data
8.4.1 An introduction to graph theory
8.4.2 Recurrent graph neural networks (RGNN)
8.4.3 Economy of technology dataset as a network
8.5 Conclusion
Acknowledgments
References
9 Data collection, wrangling, and pre-processing for AI assurance
9.1 Introduction and motivation
9.2 Relevant data characteristics
9.3 Data pre-processing: data wrangling and munging
9.4 Data processing architectures: ETL & ELT
9.5 DataOps: data operations automation management
9.6 Data tagging, provenance, and lineage
References
10 Coordination-aware assurance for end-to-end machine learning systems: the R3E approach
10.1 Introduction
10.2 Background and motivation
10.2.1 Background: characterizing BDML
10.2.2 Motivating example: machine learning for classifying building elements
10.2.3 Research questions
10.3 Key elements of R3E approach
10.3.1 QoAChain: chaining diverse types of quality constraints as a contract for optimizing end-to-end BDML
10.3.2 R3E objects and operations
10.3.2.1 Conceptualize R3E objects
10.3.2.2 R3E attributes associated with R3E objects
10.3.2.3 R3E operations and APIs
10.3.3 Engineering methods
10.3.3.1 Coordination for R3E
10.3.3.2 Monitoring and analytics
10.3.3.3 Testing, benchmarking, and experimenting for R3E
10.4 Illustrative examples
10.5 Discussion
10.6 Conclusions and future work
Acknowledgments
References
Part 3 AI assurance and applications
11 Assuring AI methods for economic policymaking
11.1 Introduction to harnessing AI for economics
11.1.1 ML in economic models
11.1.2 AI accountability models in economic research
11.1.3 Adopters of economic forecasting using XAI
11.2 Commonplace explainability methods
11.2.1 Local interpretable model-agnostic explanations (LIME) explainer
11.2.1.1 LIME methodology
11.2.1.2 LIME implementation
11.2.2 SHapley Additive exPlanations (SHAP)
11.2.2.1 SHAP methodology
11.2.2.2 SHAP implementation
11.2.3 Partial dependence plots
11.2.3.1 PDP methodology
11.2.3.2 PDP implementation
11.3 Mitigating bias in AI models for economic prediction
11.3.1 NLP use in central banking
11.3.2 NLP transformer networks
11.3.3 LIME for text explanations
11.3.4 LLMs and the AI central banker
11.3.5 Data assurance of LLMs
11.3.6 LLM transparency
11.3.7 Association rules mining
11.3.8 Graph neural networks
11.3.8.1 GNNs for international trade
11.3.8.2 Explainability methods for GNNs
11.4 Conclusion
Acknowledgments
References
12 Panopticon implications of ethical AI: equity, disparity, and inequality in healthcare
12.1 Introduction
12.2 Ontological perspectives
12.3 Ethics frameworks
12.4 Governance in the healthcare domain
12.5 Societal disparities in wellbeing
12.6 Conclusion
References
13 Recent advances in uncertainty quantification methods for engineering problems
13.1 Introduction
13.2 Polynomial chaos method for UQ
13.3 Gaussian Process or Kriging for UQ
13.4 Polynomial chaos Kriging for UQ
13.5 Uncertainty quantification of a supersonic nozzle
13.5.1 Test case description
13.5.2 Deterministic results
13.5.3 Description of uncertainties
13.5.4 Uncertainty analysis
13.6 Conclusions
Acknowledgments
References
14 Socially responsible AI assurance in precision agriculture for farmers and policymakers
14.1 Introduction
14.1.1 AI in agriculture
14.1.2 Big data in agriculture
14.1.3 Political economy of PA
14.2 Current methods of AI assurance in agriculture
14.2.1 AI assurance in agricultural policy
14.2.2 AI assurance in precision agriculture
14.3 Agricultural policy
14.4 AI assurance in agriculture recommendations
14.4.1 Participatory design from the start
14.4.2 XAI for agricultural end users
14.5 Conclusion
CRediT authorship contribution statement
References
15 The application of artificial intelligence assurance in precision farming and agricultural economics
15.1 Introduction
15.2 AI for smart farms
15.2.1 Correlation of economic indices and various commodities
15.2.2 Causation of economic indices and various commodities
15.2.3 Scoring outlier events for the model and finding anomalies
15.2.4 Outlier classification and labeling
15.3 Insight into data driven farming
15.3.1 Kentland and dairy farm
15.3.2 Shenandoah Valley Agricultural Research and Extension Center (SVAREC)
15.3.3 Dairy complex at Virginia Tech's SmartFarms
15.4 Larger policy implications
15.5 Conclusion
Acknowledgments
References
16 Bringing dark data to light with AI for evidence-based policymaking
16.1 Introduction
16.1.1 Background
16.1.2 Motivation
16.1.3 The AIM pipeline
16.2 The dataset for AIM
16.2.1 Dataset paradigm
16.2.2 Metrics of interest
16.2.3 Legislation data
16.2.4 Environmental descriptors
16.3 Feature creation
16.3.1 Policies as data
16.3.2 NLP in AIM
16.3.3 Spectral clustering of laws
16.3.4 Technology usage as data
16.4 Learning the trends
16.4.1 Neural network predicting AIMs
16.4.2 Training metrics
16.4.3 Prediction results
16.5 Discussions and future directions
16.5.1 Feasible applications
16.5.2 Future directions
16.6 Ethics of AI in public policy
16.6.1 Data in the legislative process
16.6.2 AI and bias
16.6.3 AI assurance and the law
References
Index
Back Cover