Bayes Rules!: An Introduction to Applied Bayesian Modeling

An engaging, sophisticated, and fun introduction to the field of Bayesian statistics, Bayes Rules!: An Introduction to Applied Bayesian Modeling brings the power of modern Bayesian thinking, modeling, and computing to a broad audience. In particular, the book is an ideal resource for advanced undergraduate statistics students and practitioners with comparable experience. The book assumes that readers are familiar with the content covered in a typical undergraduate-level introductory statistics course. Readers will also, ideally, have some experience with undergraduate-level probability, calculus, and the R statistical software. Readers without this background will still be able to follow along, so long as they are eager to pick up these tools on the fly, as all R code is provided.

Bayes Rules! empowers readers to weave Bayesian approaches into their everyday practice. Discussions and applications are data driven. A natural progression from fundamental to multivariable, hierarchical models emphasizes a practical and generalizable model-building process. The evaluation of these Bayesian models reflects the fact that a data analysis does not exist in a vacuum.
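For a taste of this hands-on style, here is a minimal sketch (not drawn from the book itself) of the kind of Beta-Binomial analysis covered in Unit I, using the companion bayesrules package. The Beta(45, 55) prior and the observed data of 30 successes in 50 trials are illustrative assumptions, not examples from the text.

    # Illustrative sketch: a Beta-Binomial analysis with the bayesrules
    # package. The prior parameters and data below are made up.
    library(bayesrules)

    # Numerically summarize the Beta(45, 55) prior, the Binomial data
    # (30 successes in 50 trials), and the resulting posterior
    summarize_beta_binomial(alpha = 45, beta = 55, y = 30, n = 50)

    # Plot the prior pdf, scaled likelihood, and posterior pdf together
    plot_beta_binomial(alpha = 45, beta = 55, y = 30, n = 50)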

Features

• Utilizes data-driven examples and exercises.

• Emphasizes the iterative model building and evaluation process.

• Surveys an interconnected range of multivariable regression and classification models.

• Presents fundamental Markov chain Monte Carlo simulation.

• Integrates R code, including RStan modeling tools and the bayesrules package (see the sketch after this list).

• Encourages readers to tap into their intuition and learn by doing.

• Provides a friendly and inclusive introduction to technical Bayesian concepts.

• Supports Bayesian applications with foundational Bayesian theory.
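To illustrate the R integration mentioned above, the following is a minimal sketch (under stated assumptions, not the book's own example) of fitting a simple Bayesian Normal regression with rstanarm, the interface used in the book's regression chapters. The data frame and variable names (my_data, y, x) are placeholders, and the priors shown are illustrative rather than the book's choices.

    # Illustrative sketch: a simple Bayesian Normal regression via
    # rstanarm. my_data, y, and x are hypothetical placeholders.
    library(rstanarm)

    fit <- stan_glm(
      y ~ x, data = my_data, family = gaussian,
      prior_intercept = normal(0, 10),   # illustrative prior choices
      prior = normal(0, 2.5),
      chains = 4, iter = 5000, seed = 123
    )

    # Posterior summaries and MCMC diagnostics for the intercept,
    # slope, and sigma parameters
    summary(fit)

Under the hood, stan_glm simulates the posterior with the same Markov chain Monte Carlo machinery introduced in Unit II.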

Author(s): Alicia A. Johnson, Miles Q. Ott, Mine Dogucu
Series: Chapman & Hall/CRC Texts in Statistical Science
Publisher: CRC Press/Chapman & Hall
Year: 2022

Language: English
Pages: 543
City: Boca Raton

Cover
Half Title
Series Page
Title Page
Copyright Page
Dedication
Contents
Foreword
Preface
About the Authors
I. Bayesian Foundations
1. The Big (Bayesian) Picture
1.1. Thinking like a Bayesian
1.1.1. Quiz yourself
1.1.2. The meaning of probability
1.1.3. The Bayesian balancing act
1.1.4. Asking questions
1.2. A quick history lesson
1.3. A look ahead
1.3.1. Unit 1: Bayesian foundations
1.3.2. Unit 2: Posterior simulation & analysis
1.3.3. Unit 3: Bayesian regression & classification
1.3.4. Unit 4: Hierarchical Bayesian models
1.4. Chapter summary
1.5. Exercises
2. Bayes’ Rule
2.1. Building a Bayesian model for events
2.1.1. Prior probability model
2.1.2. Conditional probability & likelihood
2.1.3. Normalizing constants
2.1.4. Posterior probability model via Bayes’ Rule!
2.1.5. Posterior simulation
2.2. Example: Pop vs soda vs coke
2.3. Building a Bayesian model for random variables
2.3.1. Prior probability model
2.3.2. The Binomial data model
2.3.3. The Binomial likelihood function
2.3.4. Normalizing constant
2.3.5. Posterior probability model
2.3.6. Posterior shortcut
2.3.7. Posterior simulation
2.4. Chapter summary
2.5. Exercises
2.5.1. Building up to Bayes’ Rule
2.5.2. Practice Bayes’ Rule for events
2.5.3. Practice Bayes’ Rule for random variables
2.5.4. Simulation exercises
3. The Beta-Binomial Bayesian Model
3.1. The Beta prior model
3.1.1. Beta foundations
3.1.2. Tuning the Beta prior
3.2. The Binomial data model & likelihood function
3.3. The Beta posterior model
3.4. The Beta-Binomial model
3.5. Simulating the Beta-Binomial
3.6. Example: Milgram’s behavioral study of obedience
3.6.1. A Bayesian analysis
3.6.2. The role of ethics in statistics and data science
3.7. Chapter summary
3.8. Exercises
3.8.1. Practice: Beta prior models
3.8.2. Practice: Beta-Binomial models
4. Balance and Sequentiality in Bayesian Analyses
4.1. Different priors, different posteriors
4.2. Different data, different posteriors
4.3. Striking a balance between the prior & data
4.3.1. Connecting observations to concepts
4.3.2. Connecting concepts to theory
4.4. Sequential analysis: Evolving with data
4.5. Proving data order invariance
4.6. Don’t be stubborn
4.7. A note on subjectivity
4.8. Chapter summary
4.9. Exercises
4.9.1. Review exercises
4.9.2. Practice: Different priors, different posteriors
4.9.3. Practice: Balancing the data & prior
4.9.4. Practice: Sequentiality
5. Conjugate Families
5.1. Revisiting choice of prior
5.2. Gamma-Poisson conjugate family
5.2.1. The Poisson data model
5.2.2. Potential priors
5.2.3. Gamma prior
5.2.4. Gamma-Poisson conjugacy
5.3. Normal-Normal conjugate family
5.3.1. The Normal data model
5.3.2. Normal prior
5.3.3. Normal-Normal conjugacy
5.3.4. Optional: Proving Normal-Normal conjugacy
5.4. Why no simulation in this chapter?
5.5. Critiques of conjugate family models
5.6. Chapter summary
5.7. Exercises
5.7.1. Practice: Gamma-Poisson
5.7.2. Practice: Normal-Normal
5.7.3. General practice exercises
II. Posterior Simulation & Analysis
6. Approximating the Posterior
6.1. Grid approximation
6.1.1. A Beta-Binomial example
6.1.2. A Gamma-Poisson example
6.1.3. Limitations
6.2. Markov chains via rstan
6.2.1. A Beta-Binomial example
6.2.2. A Gamma-Poisson example
6.3. Markov chain diagnostics
6.3.1. Examining trace plots
6.3.2. Comparing parallel chains
6.3.3. Calculating effective sample size & autocorrelation
6.3.4. Calculating R-hat
6.4. Chapter summary
6.5. Exercises
6.5.1. Conceptual exercises
6.5.2. Practice: Grid approximation
6.5.3. Practice: MCMC
7. MCMC under the Hood
7.1. The big idea
7.2. The Metropolis-Hastings algorithm
7.3. Implementing the Metropolis-Hastings
7.4. Tuning the Metropolis-Hastings algorithm
7.5. A Beta-Binomial example
7.6. Why the algorithm works
7.7. Variations on the theme
7.8. Chapter summary
7.9. Exercises
7.9.1. Conceptual exercises
7.9.2. Practice: Normal-Normal simulation
7.9.3. Practice: Simulating more Bayesian models
8. Posterior Inference & Prediction
8.1. Posterior estimation
8.2. Posterior hypothesis testing
8.2.1. One-sided tests
8.2.2. Two-sided tests
8.3. Posterior prediction
8.4. Posterior analysis with MCMC
8.4.1. Posterior simulation
8.4.2. Posterior estimation & hypothesis testing
8.4.3. Posterior prediction
8.5. Bayesian benefits
8.6. Chapter summary
8.7. Exercises
8.7.1. Conceptual exercises
8.7.2. Practice exercises
8.7.3. Applied exercises
III. Bayesian Regression & Classification
9. Simple Normal Regression
9.1. Building the regression model
9.1.1. Specifying the data model
9.1.2. Specifying the priors
9.1.3. Putting it all together
9.2. Tuning prior models for regression parameters
9.3. Posterior simulation
9.3.1. Simulation via rstanarm
9.3.2. Optional: Simulation via rstan
9.4. Interpreting the posterior
9.5. Posterior prediction
9.5.1. Building a posterior predictive model
9.5.2. Posterior prediction with rstanarm
9.6. Sequential regression modeling
9.7. Using default rstanarm priors
9.8. You’re not done yet!
9.9. Chapter summary
9.10. Exercises
9.10.1. Conceptual exercises
9.10.2. Applied exercises
10. Evaluating Regression Models
10.1. Is the model fair?
10.2. How wrong is the model?
10.2.1. Checking the model assumptions
10.2.2. Dealing with wrong models
10.3. How accurate are the posterior predictive models?
10.3.1. Posterior predictive summaries
10.3.2. Cross-validation
10.3.3. Expected log-predictive density
10.3.4. Improving posterior predictive accuracy
10.4. How good is the MCMC simulation vs how good is the model?
10.5. Chapter summary
10.6. Exercises
10.6.1. Conceptual exercises
10.6.2. Applied exercises
10.6.3. Open-ended exercises
11. Extending the Normal Regression Model
11.1. Utilizing a categorical predictor
11.1.1. Building the model
11.1.2. Simulating the posterior
11.2. Utilizing two predictors
11.2.1. Building the model
11.2.2. Understanding the priors
11.2.3. Simulating the posterior
11.2.4. Posterior prediction
11.3. Optional: Utilizing interaction terms
11.3.1. Building the model
11.3.2. Simulating the posterior
11.3.3. Do you need an interaction term?
11.4. Dreaming bigger: Utilizing more than 2 predictors!
11.5. Model evaluation & comparison
11.5.1. Evaluating predictive accuracy using visualizations
11.5.2. Evaluating predictive accuracy using cross-validation
11.5.3. Evaluating predictive accuracy using ELPD
11.5.4. The bias-variance trade-off
11.6. Chapter summary
11.7. Exercises
11.7.1. Conceptual exercises
11.7.2. Applied exercises
11.7.3. Open-ended exercises
12. Poisson & Negative Binomial Regression
12.1. Building the Poisson regression model
12.1.1. Specifying the data model
12.1.2. Specifying the priors
12.2. Simulating the posterior
12.3. Interpreting the posterior
12.4. Posterior prediction
12.5. Model evaluation
12.6. Negative Binomial regression for overdispersed counts
12.7. Generalized linear models: Building on the theme
12.8. Chapter summary
12.9. Exercises
12.9.1. Conceptual exercises
12.9.2. Applied exercises
13. Logistic Regression
13.1. Pause: Odds & probability
13.2. Building the logistic regression model
13.2.1. Specifying the data model
13.2.2. Specifying the priors
13.3. Simulating the posterior
13.4. Prediction & classification
13.5. Model evaluation
13.6. Extending the model
13.7. Chapter summary
13.8. Exercises
13.8.1. Conceptual exercises
13.8.2. Applied exercises
13.8.3. Open-ended exercises
14. Naive Bayes Classification
14.1. Classifying one penguin
14.1.1. One categorical predictor
14.1.2. One quantitative predictor
14.1.3. Two predictors
14.2. Implementing & evaluating naive Bayes classification
14.3. Naive Bayes vs logistic regression
14.4. Chapter summary
14.5. Exercises
14.5.1. Conceptual exercises
14.5.2. Applied exercises
14.5.3. Open-ended exercises
IV. Hierarchical Bayesian Models
15. Hierarchical Models are Exciting
15.1. Complete pooling
15.2. No pooling
15.3. Hierarchical data
15.4. Partial pooling with hierarchical models
15.5. Chapter summary
15.6. Exercises
15.6.1. Conceptual exercises
15.6.2. Applied exercises
16. (Normal) Hierarchical Models without Predictors
16.1. Complete pooled model
16.2. No pooled model
16.3. Building the hierarchical model
16.3.1. The hierarchy
16.3.2. Another way to think about it
16.3.3. Within- vs between-group variability
16.4. Posterior analysis
16.4.1. Posterior simulation
16.4.2. Posterior analysis of global parameters
16.4.3. Posterior analysis of group-specific parameters
16.5. Posterior prediction
16.6. Shrinkage & the bias-variance trade-off
16.7. Not everything is hierarchical
16.8. Chapter summary
16.9. Exercises
16.9.1. Conceptual exercises
16.9.2. Applied exercises
17. (Normal) Hierarchical Models with Predictors
17.1. First steps: Complete pooling
17.2. Hierarchical model with varying intercepts
17.2.1. Model building
17.2.2. Another way to think about it
17.2.3. Tuning the prior
17.2.4. Posterior simulation & analysis
17.3. Hierarchical model with varying intercepts & slopes
17.3.1. Model building
17.3.2. Optional: The decomposition of covariance model
17.3.3. Posterior simulation & analysis
17.4. Model evaluation & selection
17.5. Posterior prediction
17.6. Details: Longitudinal data
17.7. Example: Danceability
17.8. Chapter summary
17.9. Exercises
17.9.1. Conceptual exercises
17.9.2. Applied exercises
17.9.3. Open-ended exercises
18. Non-Normal Hierarchical Regression & Classification
18.1. Hierarchical logistic regression
18.1.1. Model building & simulation
18.1.2. Posterior analysis
18.1.3. Posterior classification
18.1.4. Model evaluation
18.2. Hierarchical Poisson & Negative Binomial regression
18.2.1. Model building & simulation
18.2.2. Posterior analysis
18.2.3. Model evaluation
18.3. Chapter summary
18.4. Exercises
18.4.1. Applied & conceptual exercises
18.4.2. Open-ended exercises
19. Adding More Layers
19.1. Group-level predictors
19.1.1. A model using only individual-level predictors
19.1.2. Incorporating group-level predictors
19.1.3. Posterior simulation & global analysis
19.1.4. Posterior group-level analysis
19.1.5. We’re just scratching the surface!
19.2. Incorporating two (or more!) grouping variables
19.2.1. Data with two grouping variables
19.2.2. Building a model with two grouping variables
19.2.3. Simulating models with two grouping variables
19.2.4. Examining the group-specific parameters
19.2.5. We’re just scratching the surface!
19.3. Exercises
19.3.1. Conceptual exercises
19.3.2. Applied exercises
19.4. Goodbye!
Bibliography
Index