The world we live in presents plenty of tricky, impactful, and hard-to-make decisions. Sometimes the available options are ample; at other times they appear binary. Either way, they often confront us with dilemmas, paradoxes, and even the denial of values. At the dawn of the age of intelligence, when robots are gradually taking over much of the decision-making from humans, this book sheds some light on decision rationale. It delves into the limits of these decision processes (for both humans and machines), and it does so by offering a new perspective that is somewhat opposed to orthodox economics. All reflections on economics in this book are linked to Artificial Intelligence. The authors hope that this comprehensive and modern analysis, firmly grounded in the views of several groundbreaking Nobel laureate economists, will be helpful to a broad audience interested in how decisions may lead us all to flourishing societies; that is, societies in which economic blunders (caused by the oversimplification of problems and the overestimation of tools) are substantially reduced.
Author(s): Daniel Muller, Fernando Buarque, Tshilidzi Marwala
Publisher: World Scientific Publishing
Year: 2022
Language: English
Pages: 252
City: Singapore
Contents
Foreword
Preface
Acknowledgments
About the Authors
1. Introduction
1.1 The Rationale of the Book
1.2 The Core Contributions
1.2.1 St. Petersburg Paradox
1.2.2 Planning and Decision Making
1.2.3 Oversubscription Planning
1.3 The Structure of the Book
2. Decision-Making and Rationality
2.1 Introduction
2.2 Decision-Making Process
2.3 Rationality
2.3.1 Deterministic Rationality?
2.3.2 (Flexible) Bounded Rationality
2.4 From Overabundance of Information to Artificial Irrationality
2.5 Bounded Determinism
2.5.1 Resource Bounded Rationality
2.5.2 Rationality Within Time Bounds
2.5.3 Bounded by Assumptions and Definitions
2.5.4 Rationalization by Utilization
2.5.5 Globally Local Rationality
2.6 Instrumental and Value Rational Decision Actions
2.6.1 Value Rational Decision Action
2.6.2 Incremental Improving Decision-Making Process
2.7 Conclusion
3. Artificial Intelligence
3.1 Introduction
3.2 A Quick Account of History
3.3 Flavors of AI
3.3.1 Classic or Symbolic AI
3.3.2 Computational Intelligence or Soft Computing
3.4 Types of Inference
3.4.1 Deductive Reasoning
3.4.2 Inductive Reasoning
3.4.3 Abductive Reasoning
3.5 Learning Paradigms
3.5.1 Supervised Training
3.5.2 Semi-Supervised Training
3.5.3 Unsupervised Training
3.5.4 Reinforcement Learning
3.6 Classes of Problems
3.6.1 Search Problems
3.6.2 Classification Problems
3.6.3 Clustering Problems
3.6.4 Prediction Problems
3.6.5 Optimization Problems
3.6.6 Problems of Causal Inference
3.7 Families of AI Algorithms
3.7.1 Expert Systems (ES)
3.7.2 Decision Tree (DT)
3.7.3 Artificial Neural Networks (ANN)
3.7.4 Evolutionary Computation (CEVO)
3.7.5 Swarm Intelligence (SI)
3.8 Conclusion
4. Optimization
4.1 Introduction
4.2 Optimization
4.3 Nelder-Mead Simplex Method
4.4 Broyden-Fletcher-Goldfarb-Shanno (BFGS) Algorithm
4.5 Conjugate Gradient (CG) Method
4.6 Genetic Algorithm
4.6.1 Initialization
4.6.2 Crossover
4.6.3 Mutation
4.6.4 Selection
4.6.5 Termination
4.7 Particle Swarm Optimization
4.8 Simulated Annealing (SA)
4.8.1 Simulated Annealing Parameters
4.8.2 Transition Probabilities
4.8.3 Monte Carlo Method
4.8.4 Markov Chain Monte Carlo (MCMC)
4.8.5 Acceptance Probability Function: Metropolis Algorithm
4.8.6 Cooling Schedule
4.9 Hybrid Global-Local Optimization Technique
4.10 Conclusion
5. Cost-Value and Utility Dimensions and Dynamics
5.1 Introduction
5.2 Values and Trust
5.2.1 Value
5.2.2 Utility
5.2.3 Trust
5.3 Hidden Dimensions of Values and Cost
5.4 Time Dimensions of Values and Costs
5.5 The Diamond-Water Paradox
5.6 Value Bias and Lost Utility
5.6.1 Negative Costs Value of Actions
5.6.2 Negative Utility Value Actions
5.7 Value-Bias in Time Perspective
5.7.1 The Forgotten Aspects of Problem Solving
5.8 Value Alignment: The Relativity of Achievements and Success
5.8.1 Collaborate to Learn Each Other's Reality
5.8.2 Planned Risks Hedging
5.9 Value-Rational Judgment Function
5.9.1 Ethics Heuristics
5.10 Planning Value Rational Actions
5.10.1 Landmarks — A Rational Glimpse into the Future
5.10.2 Planning (Economic) Value Landmarks
5.11 Value and the Utility of Knowledge
5.11.1 Investment in Knowledge
5.11.2 (Re) Search
5.11.2.1 Retrospective Phase
5.12 From Data to Action
5.12.1 DIKW Hierarchy
5.12.2 From Data to Action
5.12.3 Chinese Reading Room (CRR)
5.12.4 The (Big) Data Paradox: Artificial Irrationality
5.13 Conclusion
6. Relative Net Utility and the St. Petersburg Paradox
6.1 Introduction
6.2 St. Petersburg Paradox
6.2.1 Petersburg Game Decision-Making Process Diagram
6.3 Net Utility and St. Petersburg Paradox
6.3.1 Evaluation With Respect to the Break-Even Point
6.3.2 The Decision-Choice of Not Participating in the Game
6.3.3 Net Utility of Changing Position
6.3.4 Dynamic Reference Point for Net Utility of Changing Position
6.4 Theorem of Indifference and Resource Preserving Tie-Breaking Decision Criteria
6.4.1 Tie-Breaking Decision Criteria
6.4.2 Utility Tie-Breaking With Time Factor
6.4.3 From Time to Resource-Based Utility Tie-Breaking Point
6.5 The Net Utility in (Bounded) Rational Decision-Making Process
6.5.1 Value Rational and Bounded Rational Decision-Making Process
6.5.2 The Boundedness of the Utility Function
6.5.3 The Expected Net Utility Theorem: Net Utility Polarity in Value Rational Decision-Making Process
6.5.4 The Theorem of Indifference in Value Rational Decision-Making Process
6.6 The Universal Concept of Net Utility
6.7 Conclusion
7. Value Rational Planning
7.1 Introduction
7.2 The Berlin Paradox
7.3 Background
7.3.1 Oversubscription Planning (OSP)
7.3.2 Landmarks in Heuristic Search
7.3.3 Heuristic Search in OSP
7.4 General Additive Utility Functions
7.4.1 Why Do Negative Values Need Special Treatment?
7.4.2 Good or Better?
7.4.3 The Initial State With Additive Utility Function
7.5 Landmarks in OSP with Negative Utility Values
7.5.1 The V-Sum Compilation
7.6 Solving OSP with Additive Utility Functions as a Process of Improvement
7.6.1 Relax the Assumption of the Lowest Possible Initial State Utility Value
7.6.2 High-Level Overview of the Improvement Approach
7.6.3 Gross Positive Actions
7.6.4 The Window of Opportunity to Improve
7.6.5 Maintained Achievements
7.6.6 Synergistic Criteria for a Valuable Plan
7.6.7 Multiple Operator Repetitions
7.6.8 The Valuable Plan Compilation
7.6.9 Incorporating the Compilation into the Planner
7.7 Empirical Evaluation
7.7.1 Non-Negative Utility
7.7.2 Negative Utility
7.7.3 Landmarks Effectiveness Measure
7.8 Conclusion
8. Inference of Net-Utility Polarity of Actions in Oversubscription Planning
8.1 Introduction
8.2 Background
8.3 Offline Detection of the Net-Utility Polarity
8.4 Online Detection of the Net Utility Polarity
8.5 Empirical Evaluation
8.6 Conclusion
9. Conclusion
Bibliography
Index