This textbook provides an in-depth exploration of statistical learning with reproducing kernels, an active area of research that can shed light on trends associated with deep neural networks. The author demonstrates how the concept of reproducing kernel Hilbert spaces (RKHS), together with tools from regularization theory, can be effectively used in the design and justification of kernel learning algorithms capable of addressing problems in several areas of artificial intelligence. Also provided is a detailed description of two biomedical applications of the considered algorithms, demonstrating how readily the theory translates into practice.
Among the book’s several unique features is its analysis of a large class of learning-theory algorithms that covers essentially every linear regularization scheme, including Tikhonov regularization as a special case. It also provides a methodology for analyzing not only different supervised learning problems, such as regression or ranking, but also different learning scenarios, such as unsupervised domain adaptation or reinforcement learning. Treating these topics within a single theoretical framework, rather than separately, streamlines their presentation and makes them more approachable.
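For readers new to the setting, the following minimal sketch (not taken from the book) illustrates the kind of algorithm studied here: Tikhonov-regularized least squares in an RKHS, i.e., kernel ridge regression. By the representer theorem, the minimizer of the penalized least-squares functional is a finite kernel expansion over the training points, so fitting reduces to an n-by-n linear system. The Gaussian kernel, the regularization parameter lam, and all function names below are illustrative assumptions, not the author's notation.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Gaussian (RBF) kernel matrix: K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2))
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * sigma**2))

def kernel_ridge_fit(X, y, lam=0.1, sigma=1.0):
    # Representer theorem: the Tikhonov-regularized minimizer lies in
    # span{k(x_i, .)}, so it suffices to solve (K + n*lam*I) c = y
    # for the expansion coefficients c.
    n = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def kernel_ridge_predict(X_train, X_new, c, sigma=1.0):
    # Evaluate f(x) = sum_i c_i * k(x_i, x) at the new points.
    return gaussian_kernel(X_new, X_train, sigma) @ c

# Toy usage: recover a smooth function from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
c = kernel_ridge_fit(X, y, lam=0.01, sigma=0.5)
X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print(kernel_ridge_predict(X, X_test, c, sigma=0.5))
```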
An Introduction to Artificial Intelligence Based on Reproducing Kernel Hilbert Spaces is an ideal resource for graduate and postgraduate courses in computational mathematics and data science.
Author(s): Sergei Pereverzyev
Series: Compact Textbooks in Mathematics
Edition: 1
Publisher: Birkhäuser
Year: 2022
Language: English
Pages: 152
Tags: Reproducing Kernel Hilbert Spaces, Machine Learning
Preface
Contents
Acronyms
1 Introduction
1.1 The Appearance of Reproducing Kernel Hilbert Space in the Processing of Noisy Data
1.2 Elements of the General Theory of RKHS
1.3 On Some Techniques for Constructing Kernels
1.4 Penalized Least Squares in RKHS: Representer Theorem
2 Learning in Reproducing Kernel Hilbert Spaces and Related Integral Operators
2.1 Supervised Learning in the Regression Setting
2.2 Supervised Learning in the Ranking Problem Setting
2.3 Numerical Radon–Nikodym Differentiation in RKHS
3 Selected Topics of the Regularization Theory
3.1 General Regularization Scheme and Source Conditions
3.2 The Qualification of the Regularization Indexed by a Family of Functions: Examples
3.3 Interplay Between Qualification and Source Conditions: Regularization of Noisy Equations
3.4 The Balancing Principle
3.5 Aggregation by the Linear Functional Strategy
4 Regularized Learning in RKHS
4.1 Regularization of a Supervised Learning Problem in Regression Setting
4.2 Regularization in Ranking Setting
4.3 The Balancing Principle in the Estimation of the Excess of Risk
4.4 Aggregation by the Linear Functional Strategy in Target Space L2,ρX
4.5 Regularization by the Linear Functional Strategy with Multiple Kernels
4.6 RKHS-Based Regularization in Reinforcement Learning
4.7 On a Regularization of Unsupervised Domain Adaptation in RKHS
4.8 Risk Bounds Under the Assumption of Knowing the Radon–Nikodym Derivative
4.8.1 Assumptions and Auxiliaries
4.8.2 General Regularization Scheme in Covariate Shift Domain Adaptation Problem
4.9 Approximate Domain Adaptation
4.9.1 Regularized Radon–Nikodym Numerical Differentiation in RKHS
4.9.2 Application in Domain Adaptation
4.9.3 Numerical Illustrations
4.10 Resolving the Regularization Parameter Issue by an Aggregation
5 Examples of Applications
5.1 Experiments with MovieLens and Jester Joke Datasets
5.2 Application to Blood Glucose Error Grid Analysis
5.2.1 Surveillance Error Grid
5.2.2 One More Approach to the Regularization Parameter Choice
5.3 Experiments with Multiple Kernel Learning Algorithms
5.4 Prediction of Nocturnal Hypoglycemia by an Aggregation of Known Prediction Approaches
5.4.1 A Brief Overview of Known Approaches
5.4.2 NH Prediction as a Ranking Problem
5.4.3 The Considered Clinical Datasets
5.4.4 Metrics for the Performance Evaluation
5.4.5 Evaluation of the Aggregated NH Predictor
5.5 Detection of Vertebral Artery Stenosis Based on Diagnoses of Carotid Artery Stenosis
References
Index