Mathematical Statistics

This graduate textbook covers topics in statistical theory essential for students preparing for Ph.D. work in statistics. The second edition has been revised and updated, and errors have been corrected in its fourth printing. The first chapter provides a quick overview of concepts and results in measure-theoretic probability theory that are useful in statistics. The second chapter introduces some fundamental concepts in statistical decision theory and inference. Subsequent chapters contain detailed studies of some important topics: unbiased estimation, parametric estimation, nonparametric estimation, hypothesis testing, and confidence sets. The large number of exercises in each chapter provides not only practice problems for students but also many additional results.

Author(s): Jun Shao
Series: Springer Texts in Statistics
Edition: 2
Publisher: Springer
Year: 2005

Language: English
Pages: 592

Cover
Title
Copyright
Preface to the First Edition
Preface to the Second Edition
Contents
Chapter 1 Probability Theory
1.1 Probability Spaces and Random Elements
1.1.1 σ-fields and measures
1.1.2 Measurable functions and distributions
1.2 Integration and Differentiation
1.2.1 Integration
1.2.2 Radon-Nikodym derivative
1.3 Distributions and Their Characteristics
1.3.1 Distributions and probability densities
1.3.2 Moments and moment inequalities
1.3.3 Moment generating and characteristic functions
1.4 Conditional Expectations
1.4.1 Conditional expectations
1.4.2 Independence
1.4.3 Conditional distributions
1.4.4 Markov chains and martingales
1.5 Asymptotic Theory
1.5.1 Convergence modes and stochastic orders
1.5.2 Weak convergence
1.5.3 Convergence of transformations
1.5.4 The law of large numbers
1.5.5 The central limit theorem
1.5.6 Edgeworth and Cornish-Fisher expansions
1.6 Exercises
Chapter 2 Fundamentals of Statistics
2.1 Populations, Samples, and Models
2.1.1 Populations and samples
2.1.2 Parametric and nonparametric models
2.1.3 Exponential and location-scale families
2.2 Statistics, Sufficiency, and Completeness
2.2.1 Statistics and their distributions
2.2.2 Sufficiency and minimal sufficiency
2.2.3 Complete statistics
2.3 Statistical Decision Theory
2.3.1 Decision rules, loss functions, and risks
2.3.2 Admissibility and optimality
2.4 Statistical Inference
2.4.1 Point estimators
2.4.2 Hypothesis tests
2.4.3 Confidence sets
2.5 Asymptotic Criteria and Inference
2.5.1 Consistency
2.5.2 Asymptotic bias, variance, and mse
2.5.3 Asymptotic inference
2.6 Exercises
Chapter 3 Unbiased Estimation
3.1 The UMVUE
3.1.1 Sufficient and complete statistics
3.1.2 A necessary and sufficient condition
3.1.3 Information inequality
3.1.4 Asymptotic properties of UMVUE's
3.2 U-Statistics
3.2.1 Some examples
3.2.2 Variances of U-statistics
3.2.3 The projection method
3.3 The LSE in Linear Models
3.3.1 The LSE and estimability
3.3.2 The UMVUE and BLUE
3.3.3 Robustness of LSE's
3.3.4 Asymptotic properties of LSE's
3.4 Unbiased Estimators in Survey Problems
3.4.1 UMVUE's of population totals
3.4.2 Horvitz-Thompson estimators
3.5 Asymptotically Unbiased Estimators
3.5.1 Functions of unbiased estimators
3.5.2 The method of moments
3.5.3 V-statistics
3.5.4 The weighted LSE
3.6 Exercises
Chapter 4 Estimation in Parametric Models
4.1 Bayes Decisions and Estimators
4.1.1 Bayes actions
4.1.2 Empirical and hierarchical Bayes methods
4.1.3 Bayes rules and estimators
4.1.4 Markov chain Monte Carlo
4.2 Invariance
4.2.1 One-parameter location families
4.2.2 One-parameter scale families
4.2.3 General location-scale families
4.3 Minimaxity and Admissibility
4.3.1 Estimators with constant risks
4.3.2 Results in one-parameter exponential families
4.3.3 Simultaneous estimation and shrinkage estimators
4.4 The Method of Maximum Likelihood
4.4.1 The likelihood function and MLE's
4.4.2 MLE's in generalized linear models
4.4.3 Quasi-likelihoods and conditional likelihoods
4.5 Asymptotically Efficient Estimation
4.5.1 Asymptotic optimality
4.5.2 Asymptotic efficiency of MLE's and RLE's
4.5.3 Other asymptotically efficient estimators
4.6 Exercises
Chapter 5 Estimation in Nonparametric Models
5.1 Distribution Estimators
5.1.1 Empirical c.d.f.'s in i.i.d. cases
5.1.2 Empirical likelihoods
5.1.3 Density estimation
5.1.4 Semi-parametric methods
5.2 Statistical Functionals
5.2.1 Differentiability and asymptotic normality
5.2.2 L-, M-, and R-estimators and rank statistics
5.3 Linear Functions of Order Statistics
5.3.1 Sample quantiles
5.3.2 Robustness and efficiency
5.3.3 L-estimators in linear models
5.4 Generalized Estimating Equations
5.4.1 The GEE method and its relationship with others
5.4.2 Consistency of GEE estimators
5.4.3 Asymptotic normality of GEE estimators
5.5 Variance Estimation
5.5.1 The substitution method
5.5.2 The jackknife
5.5.3 The bootstrap
5.6 Exercises
Chapter 6 Hypothesis Tests
6.1 UMP Tests
6.1.1 The Neyman-Pearson lemma
6.1.2 Monotone likelihood ratio
6.1.3 UMP tests for two-sided hypotheses
6.2 UMP Unbiased Tests
6.2.1 Unbiasedness, similarity, and Neyman structure
6.2.2 UMPU tests in exponential families
6.2.3 UMPU tests in normal families
6.3 UMP Invariant Tests
6.3.1 Invariance and UMPI tests
6.3.2 UMPI tests in normal linear models
6.4 Tests in Parametric Models
6.4.1 Likelihood ratio tests
6.4.2 Asymptotic tests based on likelihoods
6.4.3 χ2-tests
6.4.4 Bayes tests
6.5 Tests in Nonparametric Models
6.5.1 Sign, permutation, and rank tests
6.5.2 Kolmogorov-Smirnov and Cramér-von Mises tests
6.5.3 Empirical likelihood ratio tests
6.5.4 Asymptotic tests
6.6 Exercises
Chapter 7 Confidence Sets
7.1 Construction of Confidence Sets
7.1.1 Pivotal quantities
7.1.2 Inverting acceptance regions of tests
7.1.3 The Bayesian approach
7.1.4 Prediction sets
7.2 Properties of Confidence Sets
7.2.1 Lengths of confidence intervals
7.2.2 UMA and UMAU confidence sets
7.2.3 Randomized confidence sets
7.2.4 Invariant confidence sets
7.3 Asymptotic Confidence Sets
7.3.1 Asymptotically pivotal quantities
7.3.2 Confidence sets based on likelihoods
7.3.3 Confidence intervals for quantiles
7.3.4 Accuracy of asymptotic confidence sets
7.4 Bootstrap Confidence Sets
7.4.1 Construction of bootstrap confidence intervals
7.4.2 Asymptotic correctness and accuracy
7.4.3 High-order accurate bootstrap confidence sets
7.5 Simultaneous Confidence Intervals
7.5.1 Bonferroni's method
7.5.2 Scheffé's method in linear models
7.5.3 Tukey's method in one-way ANOVA models
7.5.4 Confidence bands for c.d.f.'s
7.6 Exercises
References
List of Notation
List of Abbreviations
Index of Definitions, Main Results, and Examples
Author Index
Subject Index