A novel approach to hybrid AI aimed at developing trustworthy agent collaborators.
The vast majority of current AI relies wholly on machine learning (ML). However, the past thirty years of effort in this paradigm have shown that, despite the many things that ML can achieve, it is not an all-purpose solution to building human-like intelligent systems. One hope for overcoming this limitation is hybrid AI: that is, AI that combines ML with knowledge-based processing. In Agents in the Long Game of AI, Marjorie McShane, Sergei Nirenburg, and Jesse English present recent advances in hybrid AI with special emphases on content-centric computational cognitive modeling, explainability, and development methodologies.
At present, hybridization typically involves sprinkling knowledge into an ML black box. The authors, by contrast, argue that hybridization is best achieved the other way around: by building agents within a cognitive architecture and then integrating judiciously selected ML results. This approach leverages the power of ML without sacrificing the kind of explainability that will foster society's trust in AI. This book shows how we can develop trustworthy agent collaborators of a type not being addressed by the "ML alone" or "ML sprinkled with knowledge" paradigms, and why it is imperative to do so.
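To make the contrast concrete, here is a minimal, hypothetical sketch (in Python, and emphatically not the authors' OntoAgent code) of the pattern the blurb describes: a symbolic, inspectable agent loop into which an ML component is plugged at one judiciously chosen point (perception recognition), while interpretation, deliberation, and action remain explainable, knowledge-driven steps. All names (Percept, Agent, toy_recognizer, etc.) are invented for illustration.

    # Hypothetical sketch of "cognitive architecture first, ML where useful".
    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Percept:
        raw: str           # raw input, e.g., an utterance
        concept: str = ""  # symbolic label assigned during recognition

    @dataclass
    class Agent:
        # Pluggable ML recognizer: maps raw input to a candidate concept.
        recognizer: Callable[[str], str]
        # Symbolic knowledge: concept -> action, inspectable and explainable.
        rules: dict = field(default_factory=dict)
        trace: list = field(default_factory=list)  # record kept for explanation

        def perceive(self, raw: str) -> Percept:
            concept = self.recognizer(raw)  # ML output treated as evidence
            self.trace.append(f"recognized {raw!r} as {concept}")
            return Percept(raw, concept)

        def deliberate(self, percept: Percept) -> str:
            action = self.rules.get(percept.concept, "ask-for-clarification")
            self.trace.append(f"rule: {percept.concept} -> {action}")
            return action

        def act(self, action: str) -> str:
            self.trace.append(f"performed {action}")
            return action

    def toy_recognizer(raw: str) -> str:
        # Stand-in for a trained classifier; in a real hybrid system this
        # would be the "judiciously selected ML result".
        return "greeting" if "hello" in raw.lower() else "unknown"

    agent = Agent(recognizer=toy_recognizer, rules={"greeting": "greet-back"})
    agent.act(agent.deliberate(agent.perceive("Hello there")))
    print(agent.trace)  # the agent can report each step it took, and why

The point of the design is that the ML component can be swapped or retrained without touching the deliberation rules, and every decision leaves a symbolic trace, which is what makes the agent's behavior explainable.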
Author(s): Marjorie McShane, Sergei Nirenburg, Jesse English
Edition: 1
Publisher: The MIT Press
Year: 2024
Language: English
Commentary: Publisher's PDF
Pages: 336
City: Cambridge, MA
Tags: Artificial Intelligence; Learning; Cognitive Psychology; Text Generation; Ontologies; Natural Language Understanding; Agent-based AI; Knowledge Acquisition; Explainability; Knowledge Bases
Contents
Acknowledgments
1. Setting the Stage
2. Content-Centric Cognitive Modeling
2.1. The OntoAgent Cognitive Architecture
2.1.1. Perception Recognition
2.1.2. Perception Interpretation
2.1.3. Deliberation
2.1.4. Action Specification
2.1.5. Action Rendering
2.2. Hybridization
2.3. The Overall Methodology of LEIA Research and Development
2.4. Microtheories
2.5. Methodology of Practice: An Accent on System Implementation
2.5.1. Simpler-First, Extensible System Development
2.5.2. Graphics and Tools
2.6. Recap of Content-Centric Cognitive Modeling
2.7. Comparisons with Other Approaches
2.7.1. Thumbnail Juxtaposition with Data-Driven AI
2.7.2. Typical Choices in Cognitive Systems Research
2.7.3. Cognitive Architecture Research
2.7.4. The Main Takeaway from These Comparisons
3. Knowledge Bases
3.1. Why Preexisting Resources Don’t Fill the Bill
3.2. Ontology
3.2.1. Properties
3.2.2. Ontological Instances
3.2.3. Proto-Instances
3.2.4. Scripts
3.3. The Lexicon
3.4. The Opticon and Analogous [Sense]icons
3.5. Episodic Memory
4. Language Understanding and Generation
4.1. Introduction
4.2. Language Understanding
4.2.1. Brief Overview of Language Understanding
4.2.2. Recent Advances in Construction Semantics
4.3. Language Generation
4.3.1. Reasoning about Content and Generating an MMR
4.3.2. Action Specification: Converting an MMR into a GMR
4.3.3. Language Rendering Step 1: SemMapping
4.3.4. Language Rendering Step 2: Generating Sentences from SemMaps
4.3.5. Language Rendering Step 3: Selecting the Best Sentence
4.4. Comparisons with Other Linguistic Theories
5. The Trajectory of Microtheory Development: The Example of Coreference
5.1. Introduction
5.2. Verb Phrase Ellipsis
5.2.1. Linguistic Background and Top-Level Model of VP Ellipsis
5.2.2. Embedded VP Ellipsis Constructions
5.2.3. Syntactic Constructions Anchored in Function Words
5.2.4. Other Methods of Identifying Textual Sponsors
5.2.5. The Semantic Side of Resolving VP Ellipsis
5.2.6. VP Ellipsis in Extended Semantics
5.2.7. VP Ellipsis in Situational Reasoning
5.3. Other Referring Expressions
5.3.1. Event Anaphors
5.3.2. Personal Pronouns
5.4. Porting the VP Ellipsis Model to Russian
6. Dialog as Perception, Deliberation, and Action
6.1. The Tradition of Dialog Modeling
6.2. Communicative Acts: Events Like Any Others
6.3. Examples of Dialog as Perception, Deliberation, and Action
6.3.1. The Doctor Asks, “What brings you here?”
6.3.2. The Doctor Asks, “Do you have chest pain?”
6.3.3. The Doctor Proposes a Medical Intervention
7. Learning
7.1. Part 1: An Example-Based Introduction to Different Modes of Learning
7.1.1. Basic Learning through Language
7.1.2. Mixed-Initiative Learning
7.1.3. Data-Driven Learning
7.1.4. Multimodal Learning
7.1.5. An Extended Example: A LEIA Learns Rules of the Road
7.2. Part 2: Eventualities in Learning
7.2.1. Lexicon Learning during Natural Language Understanding
7.2.2. Learning Ontology and Residual Aspects of Lexicon
7.3. Final Thoughts on Learning
8. Explaining
8.1. LEIAs as Social Agents That Explain
8.2. Explaining Perception and Action
8.3. Explaining Knowledge
8.4. Explaining Reasoning
8.5. An Example: LEIAs Serving as Tutors and Advisors Explain Their Reasoning
8.6. How Empirical Contributions to LEIA Operation Affect Explainability
8.7. Visualizations for Explanation in the Maryland Virtual Patient System
8.8. Explanation as Part of Overall Agent Operation
9. Knowledge Acquisition
9.1. Introduction
9.2. Acquiring Ontology
9.3. Acquiring Lexicon
9.4. Threading Knowledge Acquisition with System Operation
10. Disrupting the Dominant Paradigm
Notes
Acknowledgments
Chapter 2
Chapter 3
Chapter 4
Chapter 5
Chapter 6
Chapter 7
Chapter 8
Chapter 9
Chapter 10
References
Index