An Introduction to Artificial Intelligence and Machine Learning I: By day-to-day examples


How does our brain work in routine life? We design artificial intelligence in machines in much the same way. Instead of dense, abstract theory, this book explains every concept and algorithm with the help of day-to-day examples, in straightforward language. It covers all the functions of the intelligent agent and the core machine learning models, and also includes statistical methods. This book is a friendly companion for newcomers to the AI field, including academic students and working professionals. Its overall intention is to prepare all kinds of readers for the coming AI-driven world.

Author(s): Manikandan Paneerselvam
Publisher: Notion Press
Year: 2023

Language: English
Pages: 390

Cover
Title
Copyright
Preface
Chapter 1. Introduction to Artificial Intelligence and Machine Learning
1.1 The Level and Depth of the Subjects in This Book
1.2 Is This Book Complex to Read?
1.3 Details About Other Volumes of This Book Series
1.4 Artificial Intelligence
1.5 Statistical Methods
1.6 Machine Learning
1.7 Summary
Part 1. Artificial Intelligence
Chapter 2. Artificial Intelligence: Introduction
2.1 Rationality
Chapter 3. Artificial Intelligence: Search and Problem Solving
3.1 Problem-Solving Introduction
3.2 Find the Goal and Formulate the Goal
3.3 Design the Problem and Problem Formulation
3.4 Search and Executions
3.5 Steps Involved in Problem-Solving Agent Algorithm
3.6 Some Crucial Components of the Problem
3.7 Real-Time Examples for Problem-Solving
3.8 Tree and Graph
3.9 Problem-Solving Performance
3.10 Searching
3.11 Uninformed Search Strategies
3.12 Breadth-First Search
3.13 Depth-First Search
3.14 Uniform Cost Search
3.15 Depth-Limited Search
3.16 Iterative Deepening Depth-First Search (IDDFS)
3.17 Bidirectional Search
3.18 Performance Comparison – Uninformed Search Algorithms
3.19 Informed Search Algorithms
3.20 Greedy Best-First Search
3.21 A* Search Algorithm
3.22 Admissibility and Consistency
3.23 Iterative Deepening A* Algorithm (IDA*)
Chapter 4. Artificial Intelligence: Local Search
4.1 Introduction to Local Search
4.2 Hill Climbing Local Search Algorithm
4.3 Simulated Annealing
4.4 Local Beam Search
4.5 Genetic Algorithms
Chapter 5. Artificial Intelligence: Adversarial Search – Games
5.1 Game Theory
5.2 The Minimax Algorithm
5.3 Alpha-Beta Pruning
Chapter 6. Artificial Intelligence: Logic and Logical Agents
6.1 Knowledge-Based Agents
6.2 Logic
6.3 Logical Reasoning
6.4 Logical Inference
6.5 Inference Algorithm and Its Properties
6.6 Propositional Logic
6.7 Syntax of Propositional Logic
6.8 Semantics
6.9 Example Knowledge Base Using Propositional Logic
6.10 Inference Algorithm Logic, Theorems
Chapter 7. Artificial Intelligence: Uncertainty
7.1 What is Uncertainty, and How is It Useful?
7.2 Basics of Probability
7.3 Logic vs. Probability
7.4 Probability (from Mathematics)
7.5 Probability in Artificial Intelligence
7.6 The Prior or Unconditional Probability
7.7 Posterior or Conditional Probability
7.8 Probability Distribution
7.9 Joint Probability Distribution
7.10 Full Joint Distribution and Inference
7.11 Inference by Enumeration
7.12 Independence
7.13 Conditional Independence
7.14 Bayes Theorem and Naïve Bayes
7.15 Bayesian Networks: Syntax
7.16 Bayesian Networks: Factorization
7.17 Inference in Bayesian Networks
7.18 Bayesian Network: Conditional Independence
7.19 Markov Blanket
7.20 Definition (D-Separation)
7.21 Bayesian Network: Inference Using Enumeration and Variable Elimination
7.21.1 Inference Using Enumeration
7.21.2 Variable Elimination Method
7.22 Bayesian Network: Rejection Sampling
7.23 Bayesian Network: Likelihood Weighting
7.24 Bayesian Network: Maximum Likelihood
7.25 Bayesian Network: Maximum a Posteriori (MAP) Learning
Chapter 8. Artificial Intelligence: Top View – Agent and Environments
8.1 Agent in General
8.2 Task Environment and PEAS Description
8.3 Properties of the Task Environment (Characteristics)
8.4 Agent Program Types
8.5 Simple Reflex Agents
8.6 Model-Based Reflex Agents
8.7 Goal-Based Agents
8.8 Utility-Based Agents
8.9 General Learning Agents
Chapter 9. Artificial Intelligence: Ethics
9.1 Basics
9.2 Robustness of AI Systems
9.3 Transparency of AI Systems
9.4 Data Bias
9.5 Accountability for Ethics Issues
9.6 Data Privacy
9.7 Cyber Security with AI and ML Systems
Part 2. Statistical Methods
Chapter 10. Statistical Methods: Statistics and Probability Basics
10.1 Data and Data Visualization
10.2 Central Tendency
10.3 Mean
10.4 Median
10.5 Mode
10.6 Measures of Spread
10.7 Range
10.8 Interquartile Range (IQR)
10.9 Data Set Value Changes and Outliers
10.10 Constant Addition or Subtraction
10.11 Extreme Values
10.12 Box and Whisker Plots
10.13 Sample Mean, Variance, and Standard Deviation
10.13.1 Sample Mean
10.13.2 Variance
10.13.3 Standard Deviation
10.14 Frequency Histogram and Density Curve
10.15 Symmetric Distribution
10.16 Skewed Distributions
10.17 Outlier Calculations
10.18 Normal Distribution
10.19 Z-Score
10.20 Probability
10.21 Probability – Addition, Union, and Intersection
Chapter 11. Statistical Methods: Independent Probability
11.1 The Multiplication Rule
11.2 Dependent Probability
11.3 Bayes’ Theorem
Chapter 12. Statistical Methods: Discrete Random Variables
12.1 Discrete Random Variables and Probability Distributions
12.2 Additional Uses of Random Variables
12.3 Expected Value
12.4 Variance and Standard Deviation
12.5 Transforming Random Variables
12.6 Linear Combinations of Random Variables
12.7 Permutations and Combinations
12.8 Binomial Random Variables
12.9 Binomial Random Variable Characteristics
12.10 Binomial Probability
12.11 Poisson Distributions
12.12 Bernoulli Random Variables
Chapter 13. Statistical Methods: Sampling
13.1 The Art of Collecting Statistical Data
13.2 The Goal of Collecting Samples
13.3 Observational Study and Experimental Study
13.4 One-Way Tables and Two-Way Tables
13.5 Exclusive to Experimental Studies
13.6 Sampling and Bias
13.7 Sampling Techniques
13.8 Sampling Distributions of the Sample Mean
13.9 Sampling Distribution of the Sample Proportion (SDSP)
13.10 The Student’s t-Distribution
13.11 Confidence Interval for the Mean
13.12 Confidence Interval for the Proportion
Chapter 14. Statistical Methods: Hypothesis Testing
14.1 Inferential Statistics and Hypotheses
14.2 Hypothesis
14.3 The Population Mean μ
14.4 For Population Proportions
14.5 Significance Level and Type I and II Errors
14.6 Test Statistics for One- and Two-Tailed Tests
14.7 Choosing a One-Tailed or Two-Tailed Test
14.8 The α Value for One- and Two-Tailed Tests
14.9 Calculating the Test Statistic
14.10 The p-Value and Rejecting the Null
14.11 Significance
14.12 Hypothesis Testing for the Population Proportion
Part 3. Machine Learning
Chapter 15. Machine Learning: Introduction
15.1 Machine Learning
15.2 Types of Machine Learning
15.3 Visualization Examples for Types of Machine Learning
15.4 Training, Validation, and Testing Data Sets
Chapter 16. Machine Learning: Data Workflow and Data Mining
16.1 Definition of Data
16.2 Types of Attributes
16.3 Discrete and Continuous Attributes
16.4 Characteristics of Data
16.5 Outliers
16.6 Data Quality Problems
16.7 Data Pre-Processing
16.8 Confusion Matrix – Performance Evaluation
16.9 Receiver Operating Characteristic Curve
16.10 Dealing with Imbalanced Classes
16.11 Challenges of Machine Learning
16.12 Bias-Variance Trade-Off
16.13 Choice of Hyperparameters
Chapter 17. Machine Learning: Linear Regression Models
17.1 Linear Regression
17.2 Other Examples of Linear Regression
17.3 How Exactly the Machine Learning Model Learns (Model Engine)
17.4 Cost Function
17.5 Linear Functions vs. Nonlinear Functions
17.6 Optimization and Optimization Functions
17.7 Required Mathematics and Statistics for Machine Learning Linear Regression
17.8 Traditional “Closed-Form Solution” Mathematics Model for Optimization
17.9 Overfitting and Bias-Variance Trade-Off
17.10 L1 and L2 Regularization Methods
17.11 Early Stopping
Chapter 18. Machine Learning: Classification (Linear and Logistic classification)
18.1 Classification
18.2 Classification Types
18.2.1 Linear Classifier
18.2.2 Nonlinear Classifier
18.3 Logistic Regression
18.4 Binary Classification vs. Multiclass Classification
18.5 Cost Function
18.6 Cross Entropy
18.7 Cost Optimization
Chapter 19. Machine Learning: Decision Tree
19.1 Decision Tree
19.2 Decision Tree – Example
19.3 Pruning
19.4 How to Create the Decision Tree?
19.5 Gini Impurity
19.6 Entropy and Information Gain
19.7 Information Gain (IG)
19.8 Issues in the Decision Tree Model
Chapter 20. Machine Learning: Instance-based Learning Algorithms
20.1 K-Nearest Neighbour Classifier
20.2 Steps Involved in KNN Algorithm Models
20.3 K-Elbow Method for ‘K’ Value Selection
20.4 Locally Weighted Regression Model
Chapter 21. Machine Learning: Support Vector Machine
21.1 SVM Basics
21.2 Maximum Margin Classifier
21.3 Soft Margin Classifier
21.4 Two-Dimensional Data and Support Vector Classifier
21.5 Linear Classification Mathematics
21.6 Three-Dimensional Support Vector Classification
21.7 Mathematics Behind Linear Support Vectors
21.8 Support Vector Machine Logic
21.9 Kernel Functions
Chapter 22. Machine Learning: Bayesian Learning
22.1 Bayesian Machine Learning
22.2 Bayes’ Theorem (Statistics Point of View)
22.3 Bayes Theorem and Naïve Bayes from Artificial Intelligence’s Point of View
22.4 Naïve Bayes Classifier in Machine Learning
22.5 Maximum Likelihood Estimation
22.6 Maximum A Posteriori (MAP) Learning
Chapter 23. Machine Learning: Ensemble Learning
23.1 Bagging and Boosting
23.2 Random Forest Algorithm
23.3 AdaBoost Algorithm
23.4 Steps Involved in AdaBoost
23.5 Gradient Boosting Algorithms
23.6 XGBoost Algorithms
Chapter 24. Machine Learning: Unsupervised Learning
24.1 Clustering
24.2 Association
24.3 K-Means Clustering
24.4 How to Decide the “K” Value
24.5 K-Means Clustering for a Two-Dimensional Dataset
About the Author