In recent years there has been increasing excitement concerning the potential of Artificial Intelligence to transform human society. This book addresses the leading edge of research in this area. The research described aims to resolve present incompatibilities between human and machine approaches to reasoning and learning. According to the influential US funding agency DARPA (which funded the ARPANET, the precursor of the Internet, as well as early self-driving car research), this new area represents the Third Wave of Artificial Intelligence (3AI, 2020s–2030s), and is being actively investigated in the US, Europe, and China.
The chapters of this book have been authored by a mixture of UK-based and other international specialists. Some of the key questions addressed by the Human-Like Computing programme include how AI systems might (1) explain their decisions effectively, (2) interact with human beings in natural language, (3) learn from small numbers of examples, and (4) learn with minimal supervision. Solving such fundamental problems involves new foundational research both in the psychology of perception and interaction and in the development of novel algorithmic approaches in Artificial Intelligence.
Editor(s): Stephen Muggleton, Nicholas Chater
Publisher: Oxford University Press
Year: 2021
Language: English
Pages: 544
City: Oxford
Cover
Human-Like Machine Intelligence
Copyright
Preface
Acknowledgements
Contents
Part 1: Human-like Machine Intelligence
1: Human-Compatible Artificial Intelligence
1.1 Introduction
1.2 Artificial Intelligence
1.3 1001 Reasons to Pay No Attention
1.4 Solutions
1.4.1 Assistance games
1.4.2 The off-switch game
1.4.3 Acting with unknown preferences
1.5 Reasons for Optimism
1.6 Obstacles
1.7 Looking Further Ahead
1.8 Conclusion
References
2: Alan Turing and Human-Like Intelligence
2.1 The Background to Turing’s 1936 Paper
2.2 Introducing Turing Machines
2.3 The Fundamental Ideas of Turing’s 1936 Paper
2.4 Justifying the Turing Machine
2.5 Was the Turing Machine Inspired by Human Computation?
2.6 From 1936 to 1950
2.7 Introducing the Imitation Game
2.8 Understanding the Turing Test
2.9 Does Turing’s “Intelligence” have to be Human-Like?
2.10 Reconsidering Standard Objections to the Turing Test
References
3: Spontaneous Communicative Conventions through Virtual Bargaining
3.1 The Spontaneous Creation of Conventions
3.2 Communication through Virtual Bargaining
3.3 The Richness and Flexibility of Signal-Meaning Mappings
3.4 The Role of Cooperation in Communication
3.5 The Nature of the Communicative Act
3.6 Conclusions and Future Directions
Acknowledgements
References
4: Modelling Virtual Bargaining using Logical Representation Change
4.1 Introduction—Virtual Bargaining
4.2 What’s in the Box?
4.3 Datalog Theories
4.3.1 Clausal form
4.3.2 Datalog properties
4.3.3 Application 1: Game rules as a logic theory
4.3.4 Application 2: Signalling convention as a logic theory
4.4 SL Resolution
4.4.1 SL refutation
4.4.2 Executing the strategy
4.5 Repairing Datalog Theories
4.5.1 Fault diagnosis and repair
4.5.2 Example: The black swan
4.6 Adapting the Signalling Convention
4.6.1 ‘Avoid’ condition
4.6.2 Extended vocabulary
4.6.3 Private knowledge
4.7 Conclusion
Acknowledgements
References
Part 2: Human-like Social Cooperation
5: Mining Property-driven Graphical Explanations for Data-centric AI from Argumentation Frameworks
5.1 Introduction
5.2 Preliminaries
5.2.1 Background: argumentation frameworks
5.2.2 Application domain
5.3 Explanations
5.4 Reasoning and Explaining with BFs Mined from Text
5.4.1 Mining BFs from text
5.4.2 Reasoning
5.4.3 Explaining
5.5 Reasoning and Explaining with AFs Mined from Labelled Examples
5.5.1 Mining AFs from examples
5.5.2 Reasoning
5.5.3 Explaining
5.6 Reasoning and Explaining with QBFs Mined from Recommender Systems
5.6.1 Mining QBFs from recommender systems
5.6.2 Explaining
5.7 Conclusions
Acknowledgements
References
6: Explanation in AI systems
6.1 Machine-generated Explanation
6.1.1 Bayesian belief networks: a brief introduction
6.1.2 Bayesian belief networks: explaining evidence
6.1.3 Bayesian belief networks: explaining reasoning processes
6.2 Good Explanation
6.2.1 A brief overview of models of explanation
6.2.2 Explanatory virtues
6.2.3 Implications
6.2.4 A brief case study on human-generated explanation
6.3 Bringing in the user: bi-directional relationships
6.3.1 Explanations are communicative acts
6.3.2 Explanations and trust
6.3.3 Trust and fidelity
6.3.4 Further research avenues
6.4 Conclusions
Acknowledgements
References
7: Human-like Communication
7.1 Introduction
7.2 Face-to-face Conversation
7.2.1 Facial expressions
7.2.2 Gesture
7.2.3 Voice
7.3 Coordinating Understanding
7.3.1 Standard average understanding
7.3.2 Misunderstandings
7.4 Real-time Adaptive Communication
7.5 Conclusion
References
8: Too Many Cooks: Bayesian Inference for Coordinating Multi-Agent Collaboration
8.1 Introduction
8.2 Multi-Agent MDPs with Sub-Tasks
8.2.1 Coordination Test Suite
8.3 Bayesian Delegation
8.4 Results
8.4.1 Self-play
8.4.2 Ad-hoc
8.5 Discussion
Acknowledgements
References
9: Teaching and Explanation: Aligning Priors between Machines and Humans
9.1 Introduction
9.2 Teaching Size: Learner and Teacher Algorithms
9.2.1 Uniform-prior teaching size
9.2.2 Simplicity-prior teaching size
9.3 Teaching and Explanations
9.3.1 Interpretability
9.3.2 Exemplar-based explanation
9.3.3 Machine teaching for explanations
9.4 Teaching with Exceptions
9.5 Universal Case
9.5.1 Example 1: Non-iterative concept
9.5.2 Example 2: Iterative concept
9.6 Feature-value Case
9.6.1 Example 1: Concept with nominal attributes only
9.6.2 Example 2: Concept with numeric attributes
9.7 Discussion
Acknowledgements
References
Part 3: Human-like Perception and Language
10: Human-like Computer Vision
10.1 Introduction
10.2 Related Work
10.3 Logical Vision
10.3.1 Learning geometric concepts from synthetic images
10.3.2 One-shot learning from real images
10.4 Learning Low-level Perception through Logical Abduction
10.5 Conclusion and Future Work
References
11: Apperception
11.1 Introduction
11.2 Method
11.2.1 Making sense of unambiguous symbolic input
11.2.2 The Apperception Engine
11.2.3 Making sense of disjunctive symbolic input
11.2.4 Making sense of raw input
11.2.5 Applying the Apperception Engine to raw input
11.3 Experiment: Sokoban
11.3.1 The data
11.3.2 The model
11.3.3 Understanding the interpretations
11.3.4 The baseline
11.4 Related Work
11.5 Discussion
11.6 Conclusion
References
12: Human–Machine Perception of Complex Signal Data
12.1 Introduction
12.1.1 Interpreting the QT interval on an ECG
12.1.2 Human–machine perception
12.2 Human–Machine Perception of ECG Data
12.2.1 Using pseudo-colour to support human interpretation
Pseudo-colouring method
12.2.2 Automated human-like QT-prolongation detection
12.3 Human–Machine Perception: Differences, Benefits, and Opportunities
12.3.1 Future work
References
13: The Shared-Workspace Framework for Dialogue and Other Cooperative Joint Activities
13.1 Introduction
13.2 The Shared Workspace Framework
13.3 Applying the Framework to Dialogue
13.4 Bringing Together Cooperative Joint Activity and Communication
13.5 Relevance to Human-like Machine Intelligence
13.5.1 Communication via an augmented workspace
13.5.2 Making an intelligent artificial interlocutor
13.6 Conclusion
References
14: Beyond Robotic Speech: Mutual Benefits to Cognitive Psychology and Artificial Intelligence from the Study of Multimodal Communication
14.1 Introduction
14.2 The Use of Multimodal Cues in Human Face-to-face Communication
14.3 How Humans React to Embodied Agents that Use Multimodal Cues
14.4 Can Embodied Agents Recognize Multimodal Cues Produced by Humans?
14.5 Can Embodied Agents Produce Multimodal Cues?
14.6 Summary and Way Forward: Mutual Benefits from Studies on Multimodal Communication
14.6.1 Development and coding of shared corpora
14.6.2 Toward a mechanistic understanding of multimodal communication
14.6.3 Studying human communication with embodied agents
Acknowledgements
References
Part 4: Human-like Representation and Learning
15: Human–Machine Scientific Discovery
15.1 Introduction
15.2 Scientific Problem and Dataset: Farm Scale Evaluations (FSEs) of GMHT Crops
15.3 The Knowledge Gap for Modelling Agro-ecosystems: Ecological Networks
15.4 Automated Discovery of Ecological Networks from FSE Data and Ecological Background Knowledge
15.5 Evaluation of the Results and Subsequent Discoveries
15.6 Conclusions
References
16: Fast and Slow Learning in Human-Like Intelligence
16.1 Do Humans Learn Quickly and Is This Uniquely Human?
16.1.1 Evidence of rapid learning in infants, children, and adults
16.1.2 Does fast learning require a specific mechanism?
16.1.3 Slow learning in infants, children, and adults
16.1.4 Beyond word and concept learning
16.1.5 Evidence of rapid learning in non-human animals
16.2 What Makes for Rapid Learning?
16.3 Reward Prediction Error as the Gateway to Fast and Slow Learning
16.4 Conclusion
Acknowledgements
References
17: Interactive Learning with Mutual Explanations in Relational Domains
17.1 Introduction
17.2 The Case for Interpretable and Interactive Learning
17.3 Types of Explanations—There is No One-Size Fits All
17.4 Interactive Learning with ILP
17.5 Learning to Delete with Mutual Explanations
17.6 Conclusions and Future Work
Acknowledgements
References
18: Endowing machines with the expert human ability to select representations: why and how
18.1 Introduction
18.2 Example of selecting a representation
18.3 Benefits of switching representations
18.3.1 Epistemic benefits of switching representations
18.3.2 Cognitive benefits of switching representations
18.4 Why selecting a good representation is hard
18.4.1 Representational and cognitive complexity
18.4.2 Cognitive framework
18.5 Describing representations: rep2rep
18.5.1 A description language for representations
18.5.2 Importance
18.5.3 Correspondences
18.5.4 Formal properties for assessing informational suitability
18.5.5 Cognitive properties for assessing cognitive cost
18.6 Automated analysis and ranking of representations
18.7 Applications and future directions
Acknowledgements
References
19: Human–Machine Collaboration for Democratizing Data Science
19.1 Introduction
19.2 Motivation
19.2.1 Spreadsheets
19.2.2 A motivating example: Ice cream sales
19.3 Data Science Sketches
19.3.1 Data wrangling
19.3.2 Data selection
Processing the data
Relational rule learning
Implementation choices
19.3.3 Clustering
Problem setting
Finding a cluster assignment
19.3.4 Sketches for inductive models
Prediction
Learning constraints and formulas
Auto-completion
Solving predictive auto-completion under constraints
Integrating the sketches
19.4 Related Work
19.4.1 Visual analytics
19.4.2 Interactive machine learning
19.4.3 Machine learning in spreadsheets
19.4.4 Auto-completion and missing value imputation
19.5 Conclusion
Acknowledgements
References
Part 5: Evaluating Human-like Reasoning
20: Automated Common-sense Spatial Reasoning: Still a Huge Challenge
20.1 Introduction
20.2 Common-sense Reasoning
20.2.1 The nature of common-sense reasoning
20.2.2 Computational simulation of common-sense spatial reasoning
20.2.3 But natural language is still a promising route to common-sense
20.3 Fundamental Ontology of Space
20.3.1 Defining the spatial extent of material entities
20.4 Establishing a Formal Representation and its Vocabulary
20.4.1 Semantic form
20.4.2 Specifying a suitable vocabulary
20.4.3 The potentially infinite distinctions among spatial relations
20.5 Formalizing Ambiguous and Vague Spatial Vocabulary
20.5.1 Crossing
20.5.2 Position relative to ‘vertical’
20.5.3 Sense resolution
20.6 Implicit and Background Knowledge
20.7 Default Reasoning
20.8 Computational Complexity
20.9 Progress towards Common-sense Spatial Reasoning
20.10 Conclusions
Acknowledgements
References
21: Sampling as the Human Approximation to Probabilistic Inference
21.1 A Sense of Location in the Human Sampling Algorithm
21.2 Key Properties of Cognitive Time Series
21.3 Sampling Algorithms to Explain Cognitive Time Series
21.3.1 Going beyond individuals to markets
21.4 Making the Sampling Algorithm more Bayesian
21.4.1 Efficient accumulation of samples explains perceptual biases
21.5 Conclusions
Acknowledgements
References
22: What Can the Conjunction Fallacy Tell Us about Human Reasoning?
22.1 The Conjunction Fallacy
22.2 Fallacy or No Fallacy?
22.3 Explaining the Fallacy
22.4 The Pre-eminence of Impact Assessment over Probability Judgements
22.5 Implications for Effective Human-like Computing
22.6 Conclusion
References
23: Logic-based Robotics
23.1 Introduction
23.2 Relational Learning in Robot Vision
23.3 Learning to Act
23.3.1 Learning action models
Trace recording
Segmentation of states
Matching the segments with existing action models
Learning by experimentation
Experimentation in simulation and real world
23.3.2 Tool creation
Tool generalizer
23.3.3 Learning to plan with qualitative models
Planning with qualitative models
Learning a qualitative model
Refining actions by reinforcement learning
Closed-loop learning and experiments
23.4 Conclusion
Acknowledgements
References
24: Predicting Problem Difficulty in Chess
24.1 Introduction
24.2 Experimental Data
24.3 Analysis
24.3.1 Relations between player rating, problem rating, and success
24.3.2 Relations between player’s rating and estimation of difficulty
24.3.3 Experiment in automated prediction of difficulty
24.4 More Subtle Sources of Difficulty
24.4.1 Invisible moves
24.4.2 Seemingly good moves and the ‘Einstellung’ effect
24.5 Conclusions
Acknowledgements
References
Index