Statistics for Making Decisions

Making decisions is a ubiquitous mental activity in our private and professional or public lives. It entails choosing one course of action from an available shortlist of options. Statistics for Making Decisions places decision making at the centre of statistical inference, proposing its theory as a new paradigm for statistical practice. The analysis in this paradigm is earnest about prior information and the consequences of the various kinds of errors that may be committed. Its conclusion is a course of action tailored to the perspective of the specific client or sponsor of the analysis.

The author's intention is a wholesale replacement of hypothesis testing, indicting it with the argument that it has no means of incorporating the consequences of errors, which self-evidently matter to the client. The volume appeals to the analyst who deals with the simplest statistical problems of comparing two samples (which one has the greater mean or variance), or deciding whether a parameter is positive or negative. It combines highlighting the deficiencies of hypothesis testing with promoting a principled solution based on the idea of a currency for error, of which we want to spend as little as possible. This is implemented by selecting the option for which the expected loss is smallest (the Bayes rule). The price to pay is the need for a more detailed description of the options, and for eliciting and quantifying the consequences (ramifications) of the errors. This is what our clients do informally, and often inexpertly, after receiving outputs of the analysis in an established format, such as the verdict of a hypothesis test or an estimate and its standard error. As a scientific discipline and profession, statistics has the potential to do this much better and deliver to the client a more complete and more relevant product.

Nicholas T. Longford is a senior statistician at Imperial College London, specialising in statistical methods for neonatal medicine. His interests include causal analysis of observational studies, decision theory, and the contest of modelling and design in data analysis. His longer-term appointments in the past include Educational Testing Service, Princeton, NJ, USA; De Montfort University, Leicester, England; and the directorship of SNTL, a statistics research and consulting company. He is the author of over 100 journal articles and six other monographs on a variety of topics in applied statistics.
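
The decision rule at the heart of this approach can be sketched in a few lines of code. The following Python snippet is only an illustration, not material from the book: the options, states, losses and probabilities are hypothetical. It computes the expected loss of each candidate course of action and selects the one for which the expected loss is smallest, i.e., applies the Bayes rule described above.

# Minimal sketch of the Bayes rule: choose the option with the smallest
# expected loss. All values below are hypothetical, for illustration only.

# Loss L(option, state): the consequence of choosing an option when a given state holds.
losses = {
    "act":        {"theta_positive": 0.0, "theta_negative": 5.0},
    "do_nothing": {"theta_positive": 1.0, "theta_negative": 0.0},
}

# Probabilities of the states, e.g. from a posterior or fiducial distribution.
probabilities = {"theta_positive": 0.8, "theta_negative": 0.2}

def expected_loss(option):
    """Average the losses of an option over the states, weighted by their probabilities."""
    return sum(losses[option][state] * p for state, p in probabilities.items())

for option in losses:
    print(option, "expected loss =", round(expected_loss(option), 2))

# The Bayes rule: select the option with the smallest expected loss.
print("chosen course of action:", min(losses, key=expected_loss))

In practice the losses are elicited from the client and the probabilities come from the analysis of the data; the asymmetry of the losses is what tailors the chosen course of action to the client's perspective.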

Author(s): Longford, Nicholas T.
Publisher: CRC Press LLC
Year: 2020

Language: English

Preface
Author
1 First steps
1.1 What shall we do?
Example 1
1.2 The setting
1.2.1 Losses and gains
1.2.2 States, spaces and parameters
1.2.3 Estimation. Fixed and random.
1.3 Study design
1.4 Exercises
2 Statistical paradigms
2.1 Frequentist paradigm
2.1.1 Bias and variance
2.1.2 Distributions
2.1.3 Sampling from finite populations
2.2 Bayesian paradigm
2.3 Computer-based replications
2.4 Design and estimation
2.5 Likelihood and fiducial distribution
2.5.1 Example. Variance estimation.
2.6 From estimate to decision
2.7 Hypothesis testing
2.8 Hypothesis test and decision
2.9 Combining values and probabilities—Additivity
2.10 Further reading
2.11 Exercises
3 Positive or negative?
3.1 Constant loss
3.1.1 Equilibrium and critical value
3.2 The margin of error
3.3 Quadratic loss
3.4 Combining loss functions
3.5 Equilibrium function
Example 2
Example 3
3.6 Plausible values and impasse
3.7 Elicitation
3.7.1 Post-analysis elicitation
3.8 Plausible rectangles
Example 4
3.8.1 Summary
3.9 Further reading
3.10 Exercises
4 Non-normally distributed estimators
4.1 Student t distribution
4.1.1 Fiducial distribution for the t ratio
Example 5
Example 6
4.2 Verdicts for variances
4.2.1 Linear loss for variances
4.2.2 Verdicts for standard deviations
4.3 Comparing two variances
Example 7
4.4 Statistics with binomial and Poisson distributions
4.4.1 Poisson distribution
Example 8
4.5 Further reading
4.6 Exercises
Appendix
5 Small or large?
5.1 Piecewise constant loss
5.1.1 Asymmetric loss
5.2 Piecewise linear loss
Example 9
5.3 Piecewise quadratic loss
Example 10
Example 11
5.4 Ordinal categories
5.4.1 Piecewise linear and quadratic losses
5.5 Multitude of options
5.5.1 Discrete options
5.5.2 Continuum of options
5.6 Further reading
5.7 Exercises
Appendix
A. Expected loss Ql in equation (5.3)
B. Continuation of Example 9
C. Continuation of Example 10
6 Study design
6.1 Design and analysis
6.2 How big a study?
6.3 Planning for impasse
6.3.1 Probability of impasse
Example 12
6.4 Further reading
6.5 Exercises
Appendix. Sample size calculation for hypothesis testing.
7 Medical screening
7.1 Separating positives and negatives
Example 13
7.2 Cutpoints specific to subpopulations
7.3 Distributions other than normal
7.3.1 Normal and t distributions
7.4 A nearly perfect but expensive test
Example 14
7.5 Further reading
7.6 Exercises
8 Many decisions
8.1 Ordinary and exceptional units
Example 15
8.2 Extreme selections
Example 16
8.3 Grey zone
8.4 Actions in a sequence
8.5 Further reading
8.6 Exercises
Appendix
A. Moment-matching estimator
B. The potential outcomes framework
9 Performance of institutions
9.1 The setting and the task
9.1.1 Evidence of poor performance
9.1.2 Assessment as a classification
9.2 Outliers
9.3 As good as the best
9.4 Empirical Bayes estimation
9.5 Assessment based on rare events
9.6 Further reading
9.7 Exercises
Appendix
A. Estimation of θ and ν²
B. Adjustment and matching on background
10 Clinical trials
10.1 Randomisation
10.2 Analysis by hypothesis testing
10.3 Electing a course of action—approve or reject?
10.4 Decision about superiority
10.4.1 More complex loss functions
10.4.2 Trials for non-inferiority
10.5 Trials for bioequivalence
10.6 Crossover design
10.6.1 Composition of within-period estimators
10.7 Further reading
10.8 Exercises
11 Model uncertainty
11.1 Ordinary regression
11.1.1 Ordinary regression and model uncertainty
11.1.2 Some related approaches
11.1.3 Bounded bias
11.2 Composition
11.3 Composition of a complete set of candidate models
11.3.1 Summary
11.4 Further reading
11.5 Exercises
Appendix
A. Inverse of a partitioned matrix
B. Mixtures
EM algorithm
C. Linear loss
12 Postscript
References
Solutions to exercises
Index