Modern Mathematical Statistics with Applications

Modern Mathematical Statistics with Applications, Second Edition strikes a balance between mathematical foundations and statistical practice. In keeping with the recommendation that every math student should study statistics and probability with an emphasis on data analysis, accomplished authors Jay Devore and Kenneth Berk make statistical concepts and methods clear and relevant through careful explanations and a broad range of applications involving real data.

The main focus of the book is on presenting and illustrating methods of inferential statistics that are useful in research. It begins with a chapter on descriptive statistics that immediately exposes the reader to real data. The next six chapters develop the probability material that bridges the gap between descriptive and inferential statistics. Point estimation, inferences based on statistical intervals, and hypothesis testing are then introduced in the next three chapters. The remainder of the book explores the use of this methodology in a variety of more complex settings.

This edition includes a plethora of new exercises, a number of which are similar to what would be encountered on the actuarial exams that cover probability and statistics. Representative applications include investigating whether the average tip percentage in a particular restaurant exceeds the standard 15%, considering whether the flavor and aroma of Champagne are affected by bottle temperature or type of pour, modeling the relationship between college graduation rate and average SAT score, and assessing the likelihood of O-ring failure in space shuttle launches as related to launch temperature.
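
As a taste of the kind of inference these applications call for, here is a minimal sketch, in Python with NumPy and SciPy (tools not used in the book itself), of a one-sample t test of H0: μ = 15 versus Ha: μ > 15 for a restaurant's mean tip percentage, in the spirit of the tip example above. The tip percentages are simulated purely so the sketch runs; they are not data from the text.

import numpy as np
from scipy import stats

# Hypothetical tip percentages, simulated only for illustration (not from the book).
rng = np.random.default_rng(seed=1)
tips = rng.normal(loc=16.0, scale=3.0, size=40)

# One-sided one-sample t test: does the mean tip percentage exceed 15%?
t_stat, p_value = stats.ttest_1samp(tips, popmean=15, alternative="greater")
print(f"t = {t_stat:.3f}, one-sided P-value = {p_value:.4f}")

A small P-value (say, below 0.05) would lead to rejecting H0 and concluding that the mean tip percentage exceeds 15%.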

Author(s): Jay L. Devore, Kenneth N. Berk
Series: Springer Texts in Statistics
Edition: 2
Publisher: Springer New York, NY
Year: 2011

Language: English
Pages: XII, 845

About the Authors
Contents
Preface
Purpose
Content
Mathematical Level
Recommended Coverage
Acknowledgments
A Final Thought
Chapter 1: Overview and Descriptive Statistics
1.1 Populations and Samples
Branches of Statistics
Collecting Data
1.2 Pictorial and Tabular Methods in Descriptive Statistics
Notation
Stem-and-Leaf Displays
Dotplots
Histograms
Histogram Shapes
Qualitative Data
Multivariate Data
1.3 Measures of Location
The Mean
The Median
Other Measures of Location: Quartiles, Percentiles, and Trimmed Means
Categorical Data and Sample Proportions
1.4 Measures of Variability
Measures of Variability for Sample Data
Motivation for s²
A Computing Formula for s²
Boxplots
Boxplots That Show Outliers
Comparative Boxplots
Bibliography
Chapter 2: Probability
2.1 Sample Spaces and Events
The Sample Space of an Experiment
Events
Some Relations from Set Theory
2.2 Axioms, Interpretations, and Properties of Probability
Interpreting Probability
More Probability Properties
Determining Probabilities Systematically
Equally Likely Outcomes
2.3 Counting Techniques
The Product Rule for Ordered Pairs
Tree Diagrams
A More General Product Rule
Permutations
Combinations
2.4 Conditional Probability
The Definition of Conditional Probability
The Multiplication Rule for P(A ∩ B)
Bayes' Theorem
2.5 Independence
P(A ∩ B) When Events Are Independent
Independence of More Than Two Events
Bibliography
Chapter 3: Discrete Random Variables and Probability Distributions
3.1 Random Variables
Two Types of Random Variables
3.2 Probability Distributions for Discrete Random Variables
A Parameter of a Probability Distribution
The Cumulative Distribution Function
Another View of Probability Mass Functions
3.3 Expected Values of Discrete Random Variables
The Expected Value of X
The Expected Value of a Function
The Variance of X
A Shortcut Formula for σ²
Rules of Variance
3.4 Moments and Moment Generating Functions
3.5 The Binomial Probability Distribution
The Binomial Random Variable and Distribution
Using Binomial Tables
The Mean and Variance of X
The Moment Generating Function of X
3.6 Hypergeometric and Negative Binomial Distributions
The Hypergeometric Distribution
The Negative Binomial Distribution
3.7 The Poisson Probability Distribution
The Poisson Distribution as a Limit
The Mean, Variance and MGF of X
The Poisson Process
Bibliography
Chapter 4: Continuous Random Variables and Probability Distributions
4.1 Probability Density Functions and Cumulative Distribution Functions
Probability Distributions for Continuous Variables
The Cumulative Distribution Function
Using F(x) to Compute Probabilities
Obtaining f(x) from F(x)
Percentiles of a Continuous Distribution
4.2 Expected Values and Moment Generating Functions
Expected Values
The Variance and Standard Deviation
Approximating the Mean Value and Standard Deviation
Moment Generating Functions
4.3 The Normal Distribution
The Standard Normal Distribution
Percentiles of the Standard Normal Distribution
zα Notation
Nonstandard Normal Distributions
Percentiles of an Arbitrary Normal Distribution
The Normal Distribution and Discrete Populations
Approximating the Binomial Distribution
The Normal Moment Generating Function
4.4 The Gamma Distribution and Its Relatives
The Family of Gamma Distributions
The Exponential Distribution
The Chi-Squared Distribution
4.5 Other Continuous Distributions
The Weibull Distribution
The Lognormal Distribution
The Beta Distribution
4.6 Probability Plots
Sample Percentiles
A Probability Plot
Beyond Normality
4.7 Transformations of a Random Variable
Bibliography
Chapter 5: Joint Probability Distributions
5.1 Jointly Distributed Random Variables
The Joint Probability Mass Function for Two Discrete Random Variables
The Joint Probability Density Function for Two Continuous Random Variables
Independent Random Variables
More than Two Random Variables
5.2 Expected Values, Covariance, and Correlation
Covariance
Correlation
5.3 Conditional Distributions
Independence
The Bivariate Normal Distribution
Regression to the Mean
The Mean and Variance Via the Conditional Mean and Variance
5.4 Transformations of Random Variables
The Joint Distribution of Two New Random Variables
The Joint Distribution of More than Two New Variables
5.5 Order Statistics
The Distributions of Yₙ and Y₁
The Joint Distribution of the n Order Statistics
The Distribution of a Single Order Statistic
The Joint Distribution of Two Order Statistics
An Intuitive Derivation of Order Statistic PDF's
Bibliography
Chapter 6: Statistics and Sampling Distributions
6.1 Statistics and Their Distributions
Random Samples
Deriving the Sampling Distribution of a Statistic
Simulation Experiments
6.2 The Distribution of the Sample Mean
The Case of a Normal Population Distribution
The Central Limit Theorem
Other Applications of the Central Limit Theorem
The Law of Large Numbers
6.3 The Mean, Variance, and MGF for Several Variables
The Difference Between Two Random Variables
The Case of Normal Random Variables
Moment Generating Functions for Linear Combinations
6.4 Distributions Based on a Normal Random Sample
The Chi-Squared Distribution
The t Distribution
The F Distribution
Summary of Relationships
Bibliography
Appendix: Proof of the Central Limit Theorem
Chapter 7: Point Estimation
7.1 General Concepts and Criteria
Mean Squared Error
Unbiased Estimators
Estimators with Minimum Variance
More Complications
Reporting a Point Estimate: The Standard Error
The Bootstrap
7.2 Methods of Point Estimation
The Method of Moments
Maximum Likelihood Estimation
Some Properties of MLEs
Large-Sample Behavior of the MLE
Some Complications
7.3 Sufficiency
The Factorization Theorem
Jointly Sufficient Statistics
Minimal Sufficiency
Improving an Estimator
Further Comments
7.4 Information and Efficiency
Information in a Random Sample
The Cramér-Rao Inequality
Large Sample Properties of the MLE
Bibliography
Chapter 8: Statistical Intervals Based on a Single Sample
8.1 Basic Properties of Confidence Intervals
Interpreting a Confidence Level
Other Levels of Confidence
Confidence Level, Precision, and Choice of Sample Size
Deriving a Confidence Interval
8.2 Large-Sample Confidence Intervals for a Population Mean and Proportion
A Large-Sample Interval for μ
A General Large-Sample Confidence Interval
A Confidence Interval for a Population Proportion
One-Sided Confidence Intervals (Confidence Bounds)
8.3 Intervals Based on a Normal Population Distribution
Properties of t Distributions
The One-Sample t Confidence Interval
A Prediction Interval for a Single Future Value
Tolerance Intervals
Intervals Based on Nonnormal Population Distributions
8.4 Confidence Intervals for the Variance and Standard Deviation of a Normal Population
8.5 Bootstrap Confidence Intervals
The Percentile Interval
A Refined Interval
Bootstrapping the Median
The Mean Versus the Median
Bibliography
Chapter 9: Tests of Hypotheses Based on a Single Sample
9.1 Hypotheses and Test Procedures
Test Procedures
Errors in Hypothesis Testing
9.2 Tests About a Population Mean
Case I: A Normal Population with Known σ
Case II: Large-Sample Tests
Case III: A Normal Population Distribution with Unknown σ
9.3 Tests Concerning a Population Proportion
Large-Sample Tests
Small-Sample Tests
9.4 P-Values
P-Values for z Tests
P-Values for t Tests
More on Interpreting P-Values
9.5 Some Comments on Selecting a Test Procedure
Statistical Versus Practical Significance
Best Tests for Simple Hypotheses
Power and Uniformly Most Powerful Tests
Likelihood Ratio Tests
Bibliography
Chapter 10: Inferences Based on Two Samples
10.1 z Tests and Confidence Intervals for a Difference Between Two Population Means
Test Procedures for Normal Populations with Known Variances
Using a Comparison to Identify Causality
β and the Choice of Sample Size
Large-Sample Tests
Confidence Intervals for μ₁ − μ₂
10.2 The Two-Sample t Test and Confidence Interval
Pooled t Procedures
Type II Error Probabilities
10.3 Analysis of Paired Data
The Paired t Test
A Confidence Interval for μD
Paired Data and Two-Sample t Procedures
Paired Versus Unpaired Experiments
10.4 Inferences About Two Population Proportions
A Large-Sample Test Procedure
Type II Error Probabilities and Sample Sizes
A Large-Sample Confidence Interval for p1 - p2
Small-Sample Inferences
10.5 Inferences About Two Population Variances
Testing Hypotheses
P-Values for F Tests
A Confidence Interval for σ₁/σ₂
10.6 Comparisons Using the Bootstrap and Permutation Methods
The Bootstrap for Two Samples
Permutation Tests
Inferences About Variability
The Analysis of Paired Data
Bibliography
Chapter 11: The Analysis of Variance
11.1 Single-Factor ANOVA
Notation and Assumptions
Sums of Squares and Mean Squares
The F Test
Computational Formulas
Testing for the Assumption of Equal Variances
11.2 Multiple Comparisons in ANOVA
Tukey's Procedure
The Interpretation of α in Tukey's Procedure
Confidence Intervals for Other Parametric Functions
11.3 More on Single-Factor ANOVA
An Alternative Description of the ANOVA Model
β for the F Test
Relationship of the F Test to the t Test
Single-Factor ANOVA When Sample Sizes Are Unequal
Multiple Comparisons When Sample Sizes Are Unequal
Data Transformation
A Random Effects Model
11.4 Two-Factor ANOVA with Kij=1
The Model
Test Procedures
Expected Mean Squares
Multiple Comparisons
Randomized Block Experiments
Models for Random Effects
11.5 Two-Factor ANOVA with Kij>1
Parameters for the Fixed Effects Model with Interaction
Notation, Model, and Analysis
Multiple Comparisons
Models with Mixed and Random Effects
Bibliography
Chapter 12: Regression and Correlation
12.1 The Simple Linear and Logistic Regression Models
A Linear Probabilistic Model
The Logistic Regression Model
12.2 Estimating Model Parameters
Estimating σ² and σ
The Coefficient of Determination
Terminology and Scope of Regression Analysis
12.3 Inferences About the Regression Coefficient β₁
A Confidence Interval for β₁
Hypothesis-Testing Procedures
Regression and ANOVA
Fitting the Logistic Regression Model
12.4 Inferences Concerning μY·x* and the Prediction of Future Y Values
Inferences Concerning μY·x*
A Prediction Interval for a Future Value of Y
12.5 Correlation
The Sample Correlation Coefficient r
Properties of r
The Population Correlation Coefficient ρ and Inferences About Correlation
Other Inferences Concerning ρ
12.6 Assessing Model Adequacy
Residuals and Standardized Residuals
Diagnostic Plots
Difficulties and Remedies
12.7 Multiple Regression Analysis
Estimating Parameters
σ̂² and the Coefficient of Multiple Determination
A Model Utility Test
Inferences in Multiple Regression
Assessing Model Adequacy
Multiple Regression Models
Models with Predictors for Categorical Variables
12.8 Regression with Matrices
The Normal Equations
Residuals, ANOVA, F, and R-Squared
Covariance Matrices
The Hat Matrix
Bibliography
Chapter 13: Goodness-of-Fit Tests and Categorical Data Analysis
13.1 Goodness-of-Fit Tests When Category Probabilities Are Completely Specified
P-Values for Chi-Squared Tests
χ² When the pᵢ's Are Functions of Other Parameters
χ² When the Underlying Distribution Is Continuous
13.2 Goodness-of-Fit Tests for Composite Hypotheses
χ² When Parameters Are Estimated
Goodness of Fit for Discrete Distributions
Goodness of Fit for Continuous Distributions
A Special Test for Normality
13.3 Two-Way Contingency Tables
Testing for Homogeneity
Testing for Independence
Ordinal Factors and Logistic Regression
Bibliography
Chapter 14: Alternative Approaches to Inference
14.1 The Wilcoxon Signed-Rank Test
A General Description of the Wilcoxon Signed-Rank Test
Paired Observations
Efficiency of the Wilcoxon Signed-Rank Test
14.2 The Wilcoxon Rank-Sum Test
Development of the Test When m=3, n=4
General Description of the Rank-Sum Test
Efficiency of the Wilcoxon Rank-Sum Test
14.3 Distribution-Free Confidence Intervals
The Wilcoxon Signed-Rank Interval
The Wilcoxon Rank-Sum Interval
14.4 Bayesian Methods
Bibliography
Erratum to: Statistics and Sampling Distributions