Machine Learning

Machine learning, a vital and core area of artificial intelligence (AI), is propelling the AI field ever further and making it one of the most compelling areas of computer science research. This textbook offers a comprehensive and unbiased introduction to almost all aspects of machine learning, from the fundamentals to advanced topics. It consists of 16 chapters divided into three parts: Part 1 (Chapters 1-3) introduces the fundamentals of machine learning, including terminology, basic principles, evaluation, and linear models; Part 2 (Chapters 4-10) presents classic and commonly used machine learning methods, such as decision trees, neural networks, support vector machines, Bayes classifiers, ensemble methods, clustering, dimensionality reduction, and metric learning; Part 3 (Chapters 11-16) introduces advanced topics, covering feature selection and sparse learning, computational learning theory, semi-supervised learning, probabilistic graphical models, rule learning, and reinforcement learning. Each chapter includes exercises and suggestions for further reading, so that readers can explore areas of interest.

The book can be used as an undergraduate or postgraduate textbook in computer science, computer engineering, electrical engineering, data science, and related disciplines. It is also a useful reference for researchers and practitioners of machine learning.

Author(s): Zhi-Hua Zhou
Publisher: Springer
Year: 2021

Language: English
Commentary: True PDF
Pages: 472

Preface
Contents
Symbols
1 Introduction
1.1 Introduction
1.2 Terminology
1.3 Hypothesis Space
1.4 Inductive Bias
1.5 Brief History
1.6 Application Status
1.7 Further Reading
Exercises
Break Time
References
2 Model Selection and Evaluation
2.1 Empirical Error and Overfitting
2.2 Evaluation Methods
2.3 Performance Measure
2.4 Comparison Test
2.5 Bias and Variance
2.6 Further Reading
Exercises
Break Time
References
3 Linear Models
3.1 Basic Form
3.2 Linear Regression
3.3 Logistic Regression
3.4 Linear Discriminant Analysis
3.5 Multiclass Classification
3.6 Class Imbalance Problem
3.7 Further Reading
Exercises
Break Time
References
4 Decision Trees
4.1 Basic Process
4.2 Split Selection
4.3 Pruning
4.4 Continuous and Missing Values
4.5 Multivariate Decision Trees
4.6 Further Reading
Exercises
Break Time
References
5 Neural Networks
5.1 Neuron Model
5.2 Perceptron and Multi-layer Network
5.3 Error Backpropagation Algorithm
5.4 Global Minimum and Local Minimum
5.5 Other Common Neural Networks
5.6 Deep Learning
5.7 Further Reading
Exercises
Break Time
References
6 Support Vector Machine
6.1 Margin and Support Vector
6.2 Dual Problem
6.3 Kernel Function
6.4 Soft Margin and Regularization
6.5 Support Vector Regression
6.6 Kernel Methods
6.7 Further Reading
Exercises
Break Time
References
7 Bayes Classifiers
7.1 Bayesian Decision Theory
7.2 Maximum Likelihood Estimation
7.3 Naïve Bayes Classifier
7.4 Semi-Naïve Bayes Classifier
7.5 Bayesian Network
7.6 EM Algorithm
7.7 Further Reading
Exercises
Break Time
References
8 Ensemble Learning
8.1 Individual and Ensemble
8.2 Boosting
8.3 Bagging and Random Forest
8.4 Combination Strategies
8.5 Diversity
8.6 Further Reading
Exercises
Break Time
References
9 Clustering
9.1 Clustering Problem
9.2 Performance Measure
9.3 Distance Calculation
9.4 Prototype Clustering
9.5 Density Clustering
9.6 Hierarchical Clustering
9.7 Further Reading
Exercises
Break Time
References
10 Dimensionality Reduction and Metric Learning
10.1 k-Nearest Neighbor Learning
10.2 Low-Dimensional Embedding
10.3 Principal Component Analysis
10.4 Kernelized PCA
10.5 Manifold Learning
10.6 Metric Learning
10.7 Further Reading
Exercises
Break Time
References
11 Feature Selection and Sparse Learning
11.1 Subset Search and Evaluation
11.2 Filter Methods
11.3 Wrapper Methods
11.4 Embedded Methods and L1 Regularization
11.5 Sparse Representation and Dictionary Learning
11.6 Compressed Sensing
11.7 Further Reading
Exercises
Break Time
References
12 Computational Learning Theory
12.1 Basic Knowledge
12.2 PAC Learning
12.3 Finite Hypothesis Space
12.4 VC Dimension
12.5 Rademacher Complexity
12.6 Stability
12.7 Further Reading
Exercises
Break Time
References
13 Semi-Supervised Learning
13.1 Unlabeled Samples
13.2 Generative Methods
13.3 Semi-Supervised SVM
13.4 Graph-Based Semi-Supervised Learning
13.5 Disagreement-Based Methods
13.6 Semi-Supervised Clustering
13.7 Further Reading
Exercises
Break Time
References
14 Probabilistic Graphical Models
14.1 Hidden Markov Model
14.2 Markov Random Field
14.3 Conditional Random Field
14.4 Learning and Inference
14.5 Approximate Inference
14.6 Topic Model
14.7 Further Reading
Exercises
Break Time
References
15 Rule Learning
15.1 Basic Concepts
15.2 Sequential Covering
15.3 Pruning Optimization
15.4 First-Order Rule Learning
15.5 Inductive Logic Programming
15.6 Further Reading
Exercises
Break Time
References
16 Reinforcement Learning
16.1 Task and Reward
16.2 K-Armed Bandit
16.3 Model-Based Learning
16.4 Model-Free Learning
16.5 Value Function Approximation
16.6 Imitation Learning
16.7 Further Reading
Exercises
Break Time
References
Appendix A Matrix
A.1 Basic Operations
A.2 Derivative
A.3 Singular Value Decomposition
Appendix B Optimization
B.1 Lagrange Multiplier Method
B.2 Quadratic Programming
B.3 Semidefinite Programming
B.4 Gradient Descent Method
B.5 Coordinate Descent Method
Appendix C Probability Distributions
C.1 Common Probability Distributions
C.2 Conjugate Distribution
C.3 Kullback–Leibler Divergence
Index