Foundations of Probabilistic Logic Programming: Languages, Semantics, Inference and Learning

The computational foundations of Artificial Intelligence (AI) are supported by two cornerstones: logic and Machine Learning. Computational logic has found its realization in a number of frameworks for logic-based approaches to knowledge representation and automated reasoning, such as Logic Programming, Answer Set Programming, Constraint Logic Programming, Description Logics, and Temporal Logics. Machine Learning, and its recent evolution to Deep Learning, has a huge number of applications in video surveillance, social media services, Big Data analysis, weather prediction, spam filtering, online customer support, etc. Since its birth, the field of Probabilistic Logic Programming has seen a steady increase of activity, with many proposals for languages and algorithms for inference and learning. This book aims at providing an overview of the field, with a special emphasis on languages under the Distribution Semantics, one of the most influential approaches. The book presents the main ideas for semantics, inference, and learning and highlights connections between the methods. Many of the book's examples include a link to a page of a web application where the code can be run online.

This 2nd edition reports the most exciting novelties in the field since the publication of the 1st edition. The semantics for hybrid programs with function symbols was placed on a sound footing. Probabilistic Answer Set Programming gained a lot of interest, together with studies on the complexity of inference. Algorithms for solving the MPE and MAP tasks are now available. Inference for hybrid programs has changed dramatically with the introduction of Weighted Model Integration. With respect to learning, the first approaches for neuro-symbolic integration have appeared, together with algorithms for learning the structure of hybrid programs. Moreover, given the cost of learning PLPs, various works proposed language restrictions to speed up learning and improve its scaling.

The intended audience of the book is researchers in Computer Science and AI who want to get an overview of PLP. However, it can also be used by students, especially graduate students, to get acquainted with the topic, and by practitioners who would like to get more details on the inner workings of the methods.
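
To make the Distribution Semantics concrete, here is a minimal sketch (not an example from the book): a small ProbLog program with two coins, each landing heads with probability 0.6, evaluated through the ProbLog 2 Python API. The program, predicate names, and probabilities are illustrative assumptions; running it requires the problog package (pip install problog).

    # Minimal Distribution Semantics sketch using the ProbLog 2 Python API.
    # The program and its names are illustrative, not taken from the book.
    from problog.program import PrologString
    from problog import get_evaluatable

    model = PrologString("""
    0.6::heads(C) :- coin(C).   % each coin lands heads with probability 0.6
    coin(c1).
    coin(c2).
    some_heads :- heads(_).     % true if at least one coin lands heads
    query(some_heads).
    """)

    # Compile the program via knowledge compilation and evaluate the query.
    result = get_evaluatable().create_from(model).evaluate()
    print(result)   # {some_heads: 0.84}

Under the Distribution Semantics, each ground probabilistic fact is an independent Boolean random variable, and the probability of a query is the sum of the probabilities of the worlds in which it succeeds: here 1 - 0.4 x 0.4 = 0.84.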

Author(s): Fabrizio Riguzzi
Series: River Publishers Series in Software Engineering
Publisher: River Publishers
Year: 2023

Language: English
Pages: 548

Cover
Half Title
Series Page
Title Page
Copyright Page
Table of Contents
Foreword
Preface to the 2nd Edition
Preface
Acknowledgments
List of Figures
List of Tables
List of Examples
List of Definitions
List of Theorems
List of Acronyms
Chapter 1: Preliminaries
1.1: Orders, Lattices, Ordinals
1.2: Mappings and Fixpoints
1.3: Logic Programming
1.4: Semantics for Normal Logic Programs
1.4.1: Program completion
1.4.2: Well-founded semantics
1.4.3: Stable model semantics
1.5: Probability Theory
1.6: Probabilistic Graphical Models
Chapter 2: Probabilistic Logic Programming Languages
2.1: Languages with the Distribution Semantics
2.1.1: Logic programs with annotated disjunctions
2.1.2: ProbLog
2.1.3: Probabilistic Horn abduction
2.1.4: PRISM
2.2: The Distribution Semantics for Programs Without Function Symbols
2.3: Examples of Programs
2.4: Equivalence of Expressive Power
2.5: Translation into Bayesian Networks
2.6: Generality of the Distribution Semantics
2.7: Extensions of the Distribution Semantics
2.8: CP-logic
2.9: KBMC Probabilistic Logic Programming Languages
2.9.1: Bayesian logic programs
2.9.2: CLP(BN)
2.9.3: The Prolog factor language
2.10: Other Semantics for Probabilistic Logic Programming
2.10.1: Stochastic logic programs
2.10.2: ProPPR
2.11: Other Semantics for Probabilistic Logics
2.11.1: Nilsson’s probabilistic logic
2.11.2: Markov logic networks
2.11.2.1: Encoding Markov logic networks with probabilistic logic programming
2.11.3: Annotated probabilistic logic programs
Chapter 3: Semantics with Function Symbols
3.1: The Distribution Semantics for Programs with Function Symbols
3.2: Infinite Covering Set of Explanations
3.3: Comparison with Sato and Kameya’s Definition
Chapter 4: Hybrid Programs
4.1: Hybrid ProbLog
4.2: Distributional Clauses
4.3: Extended PRISM
4.4: cplint Hybrid Programs
4.5: Probabilistic Constraint Logic Programming
4.5.1: Dealing with imprecise probability distributions
Chapter 5: Semantics for Hybrid Programs with Function Symbols
5.1: Examples of PCLP with Function Symbols
5.2: Preliminaries
5.3: The Semantics of PCLP is Well-defined
Chapter 6: Probabilistic Answer Set Programming
6.1: A Semantics for Unsound Programs
6.2: Features of Answer Set Programming
6.3: Probabilistic Answer Set Programming
Chapter 7: Complexity of Inference
7.1: Inference Tasks
7.2: Background on Complexity Theory
7.3: Complexity for Nonprobabilistic Inference
7.4: Complexity for Probabilistic Programs
7.4.1: Complexity for acyclic and locally stratified programs
7.4.2: Complexity results from [Mauá and Cozman, 2020]
Chapter 8: Exact Inference
8.1: PRISM
8.2: Knowledge Compilation
8.3: ProbLog1
8.4: cplint
8.5: SLGAD
8.6: PITA
8.7: ProbLog2
8.8: TP Compilation
8.9: MPE and MAP
8.9.1: MAP and MPE in ProbLog
8.9.2: MAP and MPE in PITA
8.10: Modeling Assumptions in PITA
8.10.1: PITA(OPT)
8.10.2: VIT with PITA
8.11: Inference for Queries with an Infinite Number of Explanations
Chapter 9: Lifted Inference
9.1: Preliminaries on Lifted Inference
9.1.1: Variable elimination
9.1.2: GC-FOVE
9.2: LP2
9.2.1: Translating ProbLog into PFL
9.3: Lifted Inference with Aggregation Parfactors
9.4: Weighted First-order Model Counting
9.5: Cyclic Logic Programs
9.6: Comparison of the Approaches
Chapter 10: Approximate Inference
10.1: ProbLog1
10.1.1: Iterative deepening
10.1.2: k-best
10.1.3: Monte Carlo
10.2: MCINTYRE
10.3: Approximate Inference for Queries with an Infinite Number of Explanations
10.4: Conditional Approximate Inference
10.5: k-optimal
10.6: Explanation-based Approximate Weighted Model Counting
10.7: Approximate Inference with TP-compilation
Chapter 11: Non-standard Inference
11.1: Possibilistic Logic Programming
11.2: Decision-theoretic ProbLog
11.3: Algebraic ProbLog
Chapter 12: Inference for Hybrid Programs
12.1: Inference for Extended PRISM
12.2: Inference with Weighted Model Integration
12.2.1: Weighted Model Integration
12.2.2: Algebraic Model Counting
12.2.2.1: The probability density semiring and WMI
12.2.2.2: Symbo
12.2.2.3: Sampo
12.3: Approximate Inference by Sampling for Hybrid Programs
12.4: Approximate Inference with Bounded Error for Hybrid Programs
12.5: Approximate Inference for the DISTR and EXP Tasks
Chapter 13: Parameter Learning
13.1: PRISM Parameter Learning
13.2: LLPAD and ALLPAD Parameter Learning
13.3: LeProbLog
13.4: EMBLEM
13.5: ProbLog2 Parameter Learning
13.6: Parameter Learning for Hybrid Programs
13.7: DeepProbLog
13.7.1: DeepProbLog inference
13.7.2: Learning in DeepProbLog
Chapter 14: Structure Learning
14.1: Inductive Logic Programming
14.2: LLPAD and ALLPAD Structure Learning
14.3: ProbLog Theory Compression
14.4: ProbFOIL and ProbFOIL+
14.5: SLIPCOVER
14.5.1: The language bias
14.5.2: Description of the algorithm
14.5.2.1: Function INITIALBEAMS
14.5.2.2: Beam search with clause refinements
14.5.3: Execution Example
14.6: Learning the Structure of Hybrid Programs
14.7: Scaling PILP
14.7.1: LIFTCOVER
14.7.1.1: Liftable PLP
14.7.1.2: Parameter learning
14.7.1.3: Structure learning
14.7.2: SLEAHP
14.7.2.1: Hierarchical probabilistic logic programs
14.7.2.2: Parameter learning
14.7.2.3: Structure learning
14.8: Examples of Datasets
Chapter 15: cplint Examples
15.1: cplint Commands
15.2: Natural Language Processing
15.2.1: Probabilistic context-free grammars
15.2.2: Probabilistic left corner grammars
15.2.3: Hidden Markov models
15.3: Drawing Binary Decision Diagrams
15.4: Gaussian Processes
15.5: Dirichlet Processes
15.5.1: The stick-breaking process
15.5.2: The Chinese restaurant process
15.5.3: Mixture model
15.6: Bayesian Estimation
15.7: Kalman Filter
15.8: Stochastic Logic Programs
15.9: Tile Map Generation
15.10: Markov Logic Networks
15.11: Truel
15.12: Coupon Collector Problem
15.13: One-dimensional Random Walk
15.14: Latent Dirichlet Allocation
15.15: The Indian GPA Problem
15.16: Bongard Problems
Chapter 16: Conclusions
Bibliography
Index
About the Author