Machine Learning Pocket Reference: Working with Structured Data in Python


With detailed notes, tables, and examples, this handy reference will help you navigate the basics of structured machine learning. Author Matt Harrison delivers a valuable guide that you can use for additional support during training and as a convenient resource when you dive into your next machine learning project. Ideal for programmers, data scientists, and AI engineers, this book includes an overview of the machine learning process and walks you through classification with structured data. You’ll also learn methods for clustering, predicting a continuous value (regression), and reducing dimensionality, among other topics.

This pocket reference includes sections that cover:

• Classification, using the Titanic dataset
• Cleaning data and dealing with missing data
• Exploratory data analysis
• Common preprocessing steps using sample data
• Selecting features useful to the model
• Model selection
• Metrics and classification evaluation
• Regression examples using k-nearest neighbor, decision trees, boosting, and more
• Metrics for regression evaluation
• Clustering
• Dimensionality reduction
• Scikit-learn pipelines
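To give a flavor of the workflow the book covers (imputing missing data, preprocessing, and classification with scikit-learn pipelines), here is a minimal sketch. It is not taken from the book: the tiny dataset is made up to stand in for structured data such as the Titanic passengers, and the particular steps (median imputation, standardization, logistic regression) are one plausible combination among those the chapters describe.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Tiny made-up dataset: two numeric features (e.g., age and fare),
# with a few missing values, and a binary label.
X = np.array([[22.0, 7.25],
              [38.0, 71.3],
              [np.nan, 8.05],
              [35.0, 53.1],
              [28.0, np.nan],
              [2.0, 21.1]])
y = np.array([0, 1, 0, 1, 0, 1])

# Chain the cleanup and modeling steps so they are applied consistently
# at both fit and predict time.
pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # fill missing values
    ("scale", StandardScaler()),                   # standardize features
    ("model", LogisticRegression()),               # baseline classifier
])
pipe.fit(X, y)
print(pipe.predict(X))
```

Bundling imputation and scaling into the pipeline (rather than transforming the data by hand) is what keeps the same preprocessing applied to training and new data alike, which is the point of the book's pipelines chapter.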

Author(s): Matt Harrison
Edition: 1
Publisher: O'Reilly Media
Year: 2019

Language: English
Commentary: Vector PDF
Pages: 320
City: Sebastopol, CA
Tags: Machine Learning; Data Analysis; To Read; Regression; Decision Trees; Python; Classification; Clustering; Support Vector Machines; Pipelines; scikit-learn; Data Cleaning; Model Selection; Dimensionality Reduction

Cover
Copyright
Table of Contents
Preface
What to Expect
Who This Book Is For
Conventions Used in This Book
Using Code Examples
O’Reilly Online Learning
How to Contact Us
Acknowledgments
Chapter 1. Introduction
Libraries Used
Installation with Pip
Installation with Conda
Chapter 2. Overview of the Machine Learning Process
Chapter 3. Classification Walkthrough: Titanic Dataset
Project Layout Suggestion
Imports
Ask a Question
Terms for Data
Gather Data
Clean Data
Create Features
Sample Data
Impute Data
Normalize Data
Refactor
Baseline Model
Various Families
Stacking
Create Model
Evaluate Model
Optimize Model
Confusion Matrix
ROC Curve
Learning Curve
Deploy Model
Chapter 4. Missing Data
Examining Missing Data
Dropping Missing Data
Imputing Data
Adding Indicator Columns
Chapter 5. Cleaning Data
Column Names
Replacing Missing Values
Chapter 6. Exploring
Data Size
Summary Stats
Histogram
Scatter Plot
Joint Plot
Pair Grid
Box and Violin Plots
Comparing Two Ordinal Values
Correlation
RadViz
Parallel Coordinates
Chapter 7. Preprocess Data
Standardize
Scale to Range
Dummy Variables
Label Encoder
Frequency Encoding
Pulling Categories from Strings
Other Categorical Encoding
Date Feature Engineering
Add col_na Feature
Manual Feature Engineering
Chapter 8. Feature Selection
Collinear Columns
Lasso Regression
Recursive Feature Elimination
Mutual Information
Principal Component Analysis
Feature Importance
Chapter 9. Imbalanced Classes
Use a Different Metric
Tree-based Algorithms and Ensembles
Penalize Models
Upsampling Minority
Generate Minority Data
Downsampling Majority
Upsampling Then Downsampling
Chapter 10. Classification
Logistic Regression
Naive Bayes
Support Vector Machine
K-Nearest Neighbor
Decision Tree
Random Forest
XGBoost
Gradient Boosted with LightGBM
TPOT
Chapter 11. Model Selection
Validation Curve
Learning Curve
Chapter 12. Metrics and Classification Evaluation
Confusion Matrix
Metrics
Accuracy
Recall
Precision
F1
Classification Report
ROC
Precision-Recall Curve
Cumulative Gains Plot
Lift Curve
Class Balance
Class Prediction Error
Discrimination Threshold
Chapter 13. Explaining Models
Regression Coefficients
Feature Importance
LIME
Tree Interpretation
Partial Dependence Plots
Surrogate Models
Shapley
Chapter 14. Regression
Baseline Model
Linear Regression
SVMs
K-Nearest Neighbor
Decision Tree
Random Forest
XGBoost Regression
LightGBM Regression
Chapter 15. Metrics and Regression Evaluation
Metrics
Residuals Plot
Heteroscedasticity
Normal Residuals
Prediction Error Plot
Chapter 16. Explaining Regression Models
Shapley
Chapter 17. Dimensionality Reduction
PCA
UMAP
t-SNE
PHATE
Chapter 18. Clustering
K-Means
Agglomerative (Hierarchical) Clustering
Understanding Clusters
Chapter 19. Pipelines
Classification Pipeline
Regression Pipeline
PCA Pipeline
Index