Machine Learning: Theory and Practice

Machine Learning: Theory and Practice provides an introduction to the most popular methods in machine learning. The book covers regression, including regularization; tree-based methods, including Random Forests and Boosted Trees; Artificial Neural Networks, including Convolutional Neural Networks (CNNs); reinforcement learning; and unsupervised learning, with a focus on clustering. Topics are introduced conceptually, along with the necessary mathematical details. The explanations are lucid and illustrated with figures and examples. For each machine learning method discussed, the book presents appropriate libraries in the R programming language along with programming examples.

Features:

    • Provides an easy-to-read presentation of commonly used machine learning algorithms, suitable for advanced undergraduate or beginning graduate students and for mathematically or programming-oriented readers who want to learn machine learning on their own.

    • Covers the mathematical details of the machine learning algorithms discussed, ensuring firm understanding and enabling further exploration.

    • Presents suitable worked-out programming examples, ensuring conceptual, theoretical, and practical understanding of the machine learning methods.

    This book is aimed primarily at introducing essential topics in machine learning to advanced undergraduates and beginning graduate students. The number of topics has been kept deliberately small so that they can all be covered in a semester or a quarter. The topics are covered in depth, within the limits of what can be taught in a short period of time. Thus, the book provides a foundation that will empower students to read advanced books and research papers.

    Author(s): Jugal Kalita
    Publisher: CRC Press/Chapman & Hall
    Year: 2022

    Language: English
    Pages: 298
    City: Boca Raton

    Cover
    Half Title
    Title Page
    Copyright Page
    Dedication
    Contents
    Preface
    About the Author
    1. Introduction
    1.1. Learning
    1.2. Machine Learning
    1.3. Types of Machine Learning
    1.3.1. Supervised Learning
    1.3.1.1. Classification
    1.3.1.2. Regression
    1.4. Unsupervised Machine Learning or Clustering
    1.5. Reinforcement Learning
    1.6. Organization of the Book
    1.7. Programming Language Used
    1.8. Summary
    2. Regression
    2.1. Regression
    2.1.1. Linear Least Squares Regression
    2.2. Evaluating Regression
    2.2.1. Coefficient of Determination R²
    2.2.2. Adjusted R²
    2.2.3. F-Statistic
    2.2.3.1. Is the Model Statistically Significant? F-Test
    2.2.4. Running the Linear Regression in R
    2.2.5. The Role of Optimization in Regression
    2.3. Multi-Dimensional Linear Regression
    2.3.1. Multi-dimensional Linear Least Squares Regression Using R
    2.4. Polynomial Regression
    2.4.1. Polynomial Regression Using R
    2.5. Overfitting in Regression
    2.6. Reducing Overfitting in Regression: Regularization
    2.6.1. Ridge Regression
    2.6.2. Lasso Regression
    2.6.3. Elastic Net Regression
    2.7. Regression in Matrix Form
    2.7.1. Deriving LSRL Formula using Geometry and Linear Algebra
    2.7.2. Deriving LSRL Formula Using Matrix Calculus
    2.8. Conclusions and Further Reading
    3. Tree-Based Classification and Regression
    3.1. Introduction
    3.2. Inductive Bias
    3.3. Decision Trees
    3.3.1. General Tree Building Procedure
    3.3.2. Splitting a Node
    3.3.2.1. Gini Index
    3.3.2.2. Entropy
    3.4. Simple Decision Trees in R
    3.4.1. Simple Decision Trees Using tree Library with Gini Index and Discrete Features
    3.4.2. Simple Decision Trees Using tree Library with Gini Index and Numeric Features
    3.4.3. Simple Decision Trees Using tree Library with Gini Index and Mixed Features
    3.5. Overfitting of Trees
    3.5.1. Pruning Trees in R
    3.5.1.1. Cross-validation for Best Pruned Tree
    3.5.2. Converting Trees to Rules
    3.5.2.1. Building Rules in R
    3.6. Evaluation of Classification
    3.6.1. Training and Testing Protocols
    3.6.2. Evaluation Metrics
    3.6.2.1. Basic Terminologies
    3.6.2.2. Metrics for Binary Classification
    3.6.2.3. Evaluation of Binary Classification Using R
    3.6.2.4. Metrics for Multi-class Classification
    3.7. Ensemble Classification and Random Forests
    3.7.1. Bootstrapping and Bagging (Bootstrapped Aggregation)
    3.7.2. Feature Sampling and Random Forests
    3.7.2.1. Do Random Forests Converge?
    3.7.3. Random Forests in R
    3.8. Boosting
    3.8.1. Boosting on Loss: AdaBoost
    3.8.1.1. Multi-Class AdaBoost
    3.8.1.2. AdaBoost in R
    3.8.2. Regression Using Trees
    3.8.2.1. Regression Using Single Trees
    3.8.2.2. Regression Using Random Forests
    3.8.2.3. Regression Using Loss-Boosted AdaBoost Trees
    3.8.2.4. Regression in R Using Trees
    3.9. Summary
    4. Artificial Neural Networks
    4.1. Biological Inspiration
    4.2. An Artificial Neuron
    4.2.1. Activation Functions
    4.2.1.1. Sigmoid or Logistic Activation
    4.2.1.2. tanh Activation Function
    4.2.1.3. Rectified Linear Unit
    4.3. Simple Feed-Forward Neural Network Architectures
    4.4. Training and Testing a Neural Network
    4.4.1. Datasets Used in This Chapter
    4.4.2. Training and Testing a Neural Network in R Using Keras
    4.4.2.1. Loading the MNIST Dataset
    4.4.2.2. Reshaping the Input
    4.4.2.3. Architecture and Learning
    4.4.2.4. Compiling and Running Experiments
    4.4.2.5. The Entire Program
    4.4.2.6. Observations about Empirical ANN Learning
    4.5. Backpropagation
    4.5.1. Backpropagation at a Conceptual Level
    4.5.2. Mathematics of Backpropagation
    4.5.2.1. Computing ∂E/∂w_ij When j Is an Output Node
    4.5.2.2. Computing ∂E/∂w_ij When j Is a Hidden Layer Node
    4.5.2.3. Backpropagation with Different Activation Functions
    4.6. Loss Functions in Neural Networks
    4.7. Convolutional Neural Networks
    4.7.1. Convolutions
    4.7.1.1. Linear Convolutions
    4.7.1.2. Two-Dimensional Convolutions
    4.7.1.3. Three-Dimensional Convolutions
    4.7.2. Convolutional Neural Networks in Practice
    4.7.2.1. Convolutional Layer and Activation
    4.7.2.2. CNNs Using Keras and R
    4.7.3. Pooling Layers in CNNs
    4.7.3.1. CNN using Pooling in R
    4.7.4. Regularization in CNNs
    4.7.4.1. Dropout for ANN Regularization
    4.7.4.2. Batch Normalization for Regularization
    4.8. Matrix or Tensor Formulation of Neural Networks
    4.8.1. Expressing Activations in Layer l
    4.8.2. Generalizing to a Bigger Network
    4.9. Conclusions
    5. Reinforcement Learning
    5.1. Examples of Reinforcement Learning
    5.1.1. Learning to Navigate a Maze
    5.1.2. The 8-puzzle Game
    5.1.3. Learning to Play Atari Games
    5.2. The Reinforcement Learning Process
    5.3. Rewards—Immediate and Cumulative
    5.3.1. Immediate Reward
    5.3.2. Cumulative Reward
    5.3.3. Discounting Future Rewards
    5.4. Evaluating States
    5.4.1. Estimating V(s) Given Policy π
    5.4.2. Writing the Update Formula Incrementally
    5.5. Learning Policy: ϵ-Greedy Algorithms
    5.5.1. Alternative Way to Learn Policy by Exploitation and Exploration
    5.6. Learning V(s) Values from Neighbors
    5.7. Q Learning
    5.8. Reinforcement Learning in R
    5.8.1. Q-Learning in a Simple 3 × 3 Gridworld
    5.8.1.1. Q-Learning in a Simple 5 × 5 Maze
    5.9. Conclusions
    6. Unsupervised Learning
    6.1. Clustering
    6.2. Centroid-Based Clustering
    6.2.1. K-Means Clustering
    6.3. Cluster Quality
    6.3.1. Intrinsic Cluster Quality Metrics
    6.3.1.1. Simple Statistical Intrinsic Cluster Quality Metric
    6.3.1.2. Dunn Index
    6.3.1.3. Davies-Bouldin Index
    6.3.2. Extrinsic Cluster Quality Metrics
    6.3.2.1. Purity
    6.3.2.2. Rand Index
    6.4. K-Means Clustering in R
    6.4.1. Using Extrinsic Clustering Metrics for K-Means
    6.4.2. Using Intrinsic Clustering Metrics for K-Means
    6.5. Hierarchical or Connectivity-Based Clustering
    6.5.1. Agglomerative Clustering in R
    6.5.2. Divisive Clustering in R
    6.6. Density-Based Clustering
    6.6.1. DBSCAN
    6.6.2. Density-Based Clustering in R
    6.6.3. Clustering a Dataset of Multiple Shapes
    6.7. Conclusions
    7. Conclusions
    Bibliography
    Index