Statistics for Clinicians: How Much Should a Doctor Know?

How much statistics does a clinician, surgeon or nurse need to know?
This book is an essential handbook for appraising the evidence in a scientific paper, designing research and interpreting its results correctly, guiding our students, and reviewing the work of our colleagues. It is written by a clinician for fellow clinicians, in their own language rather than in statistical or epidemiological jargon.
Its discussion of probability focuses on how probability applies to the management of real patients in a clinical setting.
Statistics for Clinicians does not overlook the foundations of statistics, but it reviews techniques specific to medicine with an emphasis on their application. Worked examples, guides, and links to online calculators and free software give readers the tools to execute most statistical calculations themselves. The book will therefore be enormously helpful to readers across all fields of medicine, at any stage of their careers.
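
For instance, here is a minimal sketch (not taken from the book, and using hypothetical data) of the kind of calculation Chapter 1 walks through: the mean, standard deviation (Sd), standard error (Se), and the 95% confidence interval of a mean in a small study, using Student's t distribution (Sects. 1.3.4 and 1.4.2):

```python
# Minimal sketch (hypothetical data): mean, Sd, Se, and the 95% CI of a mean
# for a small sample, using Student's t distribution with n - 1 df.
import math
from statistics import mean, stdev

from scipy.stats import t  # t.ppf returns the critical t value

systolic_bp = [118, 126, 132, 121, 140, 129, 135, 124]  # illustrative values

n = len(systolic_bp)
m = mean(systolic_bp)        # measure of central tendency (Sect. 1.3.3)
sd = stdev(systolic_bp)      # standard deviation, Sd (Sect. 1.3.4.2)
se = sd / math.sqrt(n)       # standard error of the mean, Se (Sect. 1.3.4.3)

# Two-sided 95% CI of the mean in a small study (Sect. 1.4.2.3)
t_crit = t.ppf(0.975, df=n - 1)
low, high = m - t_crit * se, m + t_crit * se

print(f"mean = {m:.1f}, Sd = {sd:.1f}, Se = {se:.1f}")
print(f"95% CI of the mean: {low:.1f} to {high:.1f}")
```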

Author(s): Ahmed Hassouna
Publisher: Springer
Year: 2023

Language: English
Pages: 640
City: Cham

Foreword: The Man and His Dream
Preface: Statistics for Clinicians: How Much Should a Doctor Know?
Acknowledgments: The Payoff
Contents
List of Figures
List of Tables
1 Expressing and Analyzing Variability
Abstract
1.1 Introduction
1.2 Variables: Types, Measurement and Role
1.2.1 Variable Conversion
1.3 Summarizing Data
1.3.1 The Qualitative Variable
1.3.2 The Quantitative Variable
1.3.3 Measures of Central Tendency
1.3.4 Measures of Dispersion
1.3.4.1 The Variance (S²)
1.3.4.2 Standard Deviation (Sd)
1.3.4.3 Standard Error of Mean (Se)
1.3.4.4 Extension to the Qualitative Variable
1.3.4.5 Minimum, Maximum and Interquartile Range
1.4 The Normal Distribution
1.4.1 The Standardized Normal Distribution and the Z Score
1.4.2 The Confidence Interval (CI)
1.4.2.1 The Confidence Interval of Subjects
1.4.2.2 The Confidence Interval of Mean
1.4.2.3 The Confidence Interval of the Small Study
1.4.2.4 The Confidence Interval of a Proportion
1.4.2.5 The Confidence Interval of a Unilateral Versus a Bilateral Study Design
1.4.2.6 The Confidence Interval of the Difference Between Two Means
1.4.2.7 The Confidence Interval of the Difference Between 2 Proportions
1.4.2.9 The Confidence Interval of Variance
1.4.2.10 The Confidence Interval of an Event that Has Never Happened
1.4.3 Verifying Normality
1.4.3.1 Visual Check: The Histogram, Normal Q-Q Plot
1.4.3.2 Calculation of Skewness and Kurtosis
1.4.3.3 Tests of Normality
1.4.4 Normalization of Data
1.5 The P-value
1.5.1 The Primary Risk of Error (α)
1.6 The Null and Alternative Hypotheses
1.6.1 Statistical Significance and the Degree of Significance
1.7 Testing Hypotheses
1.7.1 A Simple Parametric Test
1.7.2 Unilateral Study Design
1.7.4 The Secondary Risk of Error
1.7.4.1 The Power of a Study
1.8 Common Indices of Clinical Outcomes
1.8.1 The Risk and the Odds
1.8.2 The Relative Risks and the Odds Ratio
1.8.2.1 The Two Relative Risks
1.8.2.2 One Odds Ratio
1.8.3 The Hazard Ratio
1.8.3.1 The Hazard Ratio Versus the Relative Risk
1.8.3.2 The Hazard Function in Time to Event Analysis
1.8.4 Relative Risk Increase (RRI) and Relative Risk Reduction (RRR)
1.8.5 Absolute Risk Increase (ARI) and Reduction (ARR)
1.8.6 Number Needed to Treat Benefit (NNTB) and Number Needed to Harm (NNTH)
1.8.7 Calculation of the 95% Confidence Interval and Testing Statistical Significance
1.8.7.1 The 95% CI of the Relative Risk
1.8.7.2 The 95% CI of the Odds Ratio
1.8.7.4 The 95% CI of the Absolute Risk Difference and the Number Needed to Treat
1.9 Diagnostic Accuracy
1.9.1 The Discriminative Measures
1.9.1.1 Sensitivity of the Test (Sn)
1.9.1.2 Specificity of the Test (Sp)
1.9.1.3 Calculation and Interpretation: Choosing the Appropriate Cut-Off Point
1.9.1.4 What Should Be Reported
1.9.2 The Predictive Values
1.9.2.1 Positive Predictive Value (PPV)
1.9.2.2 Negative Predictive Value (NPV)
1.9.2.3 Calculation and Interpretation: The Role of Prevalence
1.9.2.4 What Should Be Reported
1.9.3 The Likelihood Ratios
1.9.3.1 The Positive Likelihood Ratio
1.9.3.2 The Negative Likelihood Ratio
1.9.3.3 Calculation and Interpretation: The Pre- and Post-Test Odds
1.9.3.4 What Should Be Reported
1.9.4 Single Indicators of Test Performance
1.9.4.1 Total Accuracy
1.9.4.2 Diagnostic Odds Ratio (DOR)
1.9.4.3 The Area Under the Receiver Operating Characteristic Curve (AUC-ROC)
1.9.4.4 Youden Index
1.9.5 Choosing the Appropriate Diagnostic Test
1.9.6 Comparing Two Diagnostic Tests
1.9.6.1 Comparison of Paired Measures at a Specific Threshold
1.9.6.2 Comparison of Summary Measures at Specific Thresholds
1.9.6.3 Comparison of Single Measures Averaged Across Multiple Thresholds
1.9.7 The Standards for Reporting Diagnostic Accuracy (STARD)
References
2 Bivariate Statistical Analysis
Abstract
2.1 Choosing a Statistical Test
2.1.1 Independence of Data: Paired Versus Unpaired Tests
2.1.2 Data Distribution: Parametric Versus Distribution-Free Tests
2.2 Consulting Statistical Tables
2.2.1 The Test Statistics
2.2.2 The Degrees of Freedom (Df)
2.2.2.1 The df of the Student Distribution and Pearson’s Correlation Coefficient
2.2.2.2 The df of the Analysis of Variance (ANOVA)
2.2.2.3 The df of the Chi-Square Tests
2.2.3 Consulting Individual Tables
2.2.3.1 Consulting the Z Table
2.2.3.2 Consulting the Chi-Square Table
2.2.3.3 Consulting the Student Tables
2.2.3.4 Consulting Fisher Tables
2.3 Inferences on Two Qualitative Variables
2.3.1 The Unpaired Tests
2.3.1.1 Pearson’s Chi-Square Test of Goodness of Fit
2.3.1.2 Pearson’s Chi-Square Test of Independence
2.3.1.3 The Corrected Chi-Square Test (Yates)
2.3.1.4 Chi-Square for Trend (Cochran–Armitage Test)
2.3.1.5 Fisher’s Exact Test
2.3.2 The Paired Tests
2.3.2.1 McNemar Test (The Paired Chi-Square)
2.3.2.2 McNemar-Bowker Test
2.3.2.3 Cochran Q Test
2.4 Inferences on Means and Variances of Normal Distribution: The Parametric Tests
2.4.1 The Comparison of Two Means
2.4.1.1 The One-Sample Student Test
2.4.1.2 The Unpaired Student Test
2.4.1.3 The Paired Student Test
2.4.2 The Comparison of Two Variances
2.4.3 The Comparison of Multiple Means
2.4.3.1 One-Way Analysis of Variance (One-Way ANOVA)
2.4.3.2 Welch’s F ANOVA
2.4.3.3 ANOVA Contrasts: A Priori and Post-Hoc Analysis
2.4.3.4 One-Way Repeated Measures ANOVA (RMANOVA)
2.5 Inference on Medians and Other Distributions Than Normal: Non-parametric Tests
2.5.1 The Comparison of Two Groups
2.5.1.1 Mann & Whitney (U) Test
2.5.1.2 Wilcoxon Rank (W) Test
2.5.1.3 Wilcoxon Signed Rank Test (Paired T)
2.5.2 The Comparison of Several Groups
2.5.2.1 Kruskal & Wallis (H) Test
2.5.2.2 Friedman Test (Paired)
2.6 Inference on the Relation of Two Quantitative Variables
2.6.1 Correlation
2.6.1.1 Pearson Correlation Coefficient “r”
2.6.1.2 The Coefficient of Correlation of Ranks (Spearman’s Rank Test)
2.6.2 Regression
2.6.2.1 Simple Regression
2.7 Inference on Survival Curves
2.7.1 Introduction: Assumptions and Definitions
2.7.2 Kaplan–Meier Method
2.7.3 Actuarial Method
2.7.4 Comparison of Survival Curves
2.7.4.1 The Log-Rank Test (Mantel–Haenszel or Cox–Mantel Test)
2.7.4.2 The Adjusted (Stratified) Log-Rank Test
2.7.4.3 The Generalized Wilcoxon (Gehan, Breslow) Test
2.7.4.4 Tarone–Ware Test
2.7.4.5 Harrington and Fleming Test Family
2.8 Choosing the Appropriate Bivariate Statistical Test
2.8.1 Introduction
2.8.1.1 Independence of Data
2.8.1.2 Variable Type and Distribution
2.8.1.3 Variable Role
2.8.2 The Unpaired Statistical Tests
2.8.2.1 The Association of Two Qualitative Variables
2.8.2.2 The Association of Two Quantitative Variables
2.8.2.3 The Distribution of a Quantitative Outcome Across a Two- or Multiple-Class Qualitative Variable
2.8.3 The Paired Statistical Tests
2.8.4 The Comparison of Survival Curves
2.9 Adjusting Bivariate Analysis: Prognostic Studies
2.9.1 Introduction
2.9.2 Excluding a Qualitative (Reverse) Interaction
2.9.2.1 The Z Test
2.9.2.2 Breslow-Day and Tarone’s Tests
2.9.3 Adjusting Two Proportions
2.9.3.1 Cochran–Mantel–Haenszel Test
2.9.3.2 The Mantel–Haenszel Adjusted Odds Ratio and Risk Ratio
2.9.4 Adjusting Two Means
2.9.4.1 Two-Way ANOVA
2.9.5 Adjusting Two Quantitative Variables
2.9.5.1 Partial Correlation
2.10 Measuring Agreement and Testing Reliability
2.10.1 Introduction
2.10.2 Plan of the Analysis
2.10.2.1 Variable Definition
2.10.2.2 Overruling a Systematic Bias
2.10.2.3 Measuring Agreement
2.10.3 The Qualitative Outcome
2.10.3.1 Cohen’s Kappa
2.10.3.2 Weighted Kappa
2.10.3.3 Fleiss and Randolph Kappa for Multiple Categories
2.10.3.4 Interpretation, Reporting and Conditions of Application of Kappa
2.10.4 Quantitative Outcome
2.10.4.1 The Bland–Altman Plots
2.10.4.2 Lin’s Concordance Correlation Coefficient (CCC)
2.10.4.3 Intra-class Correlation Coefficient (ICC)
2.10.4.4 Kendall Coefficient of Concordance (W)
References
3 Multivariable Analysis
Abstract
3.1 Introduction
3.2 The ANOVA Family
3.2.1 Testing the Main Effects
3.2.1.1 One-Way ANOVA
3.2.1.2 Two-Way ANOVA
3.2.2 Testing Interaction Effects of Qualitative Variables
3.2.3 Testing Interaction Effects of Quantitative Variables: Analysis of Covariance (ANCOVA)
3.2.3.1 One-Way ANCOVA
3.2.3.2 Two-Way ANCOVA
3.2.4 Multivariate ANOVA and ANCOVA (MANOVA and MANCOVA)
3.2.4.1 One-Way MANOVA
3.2.5 Repeated Measures ANOVA (RMANOVA)
3.2.5.1 Two-Way Repeated Measures ANOVA
3.2.5.2 Two-Way Mixed ANOVA
3.3 General Outlines of Multivariable Models [4–6, 13, 18–20]
3.3.1 Indications
3.3.2 Aim of the Model
3.3.3 Selection of Predictors
3.3.4 Model Selection
3.3.6 Evaluation of the Model
3.4 Multiple Regression Analysis [4–6, 13, 18–20]
3.4.1 Introduction: Simple Versus Multiple Linear Regression
3.4.2 The Basic Assumptions
3.4.2.1 Assumptions Related to the Study Design
3.4.2.2 Individual and Collective Linearity Between the Predictors and the Outcome Variable
3.4.2.3 Absence of Multicollinearity Among Predictors
3.4.2.4 Analysis of Residuals for Normality, Autocorrelation, Independence, and Homoscedasticity
3.4.2.5 Absence of Outliers, High Leverage and Influential Points
3.4.3 The Example
3.4.4 Designing the Model
3.4.5 Verification of the Assumptions [6–10]
3.4.5.1 The Study Design
3.4.5.2 Execution of the Analysis
3.4.5.3 Independence of Observations
3.4.5.4 Linearity
3.4.5.5 Homoscedasticity
3.4.5.6 Exclusion of Multicollinearity
3.4.5.7 Checking on Outliers, High Leverage and Influential Points
3.4.5.8 Normality
3.4.6 Model Evaluation
3.4.6.1 The Model Summary
3.4.6.2 Interpreting the Regression Coefficients
3.4.6.3 Prediction of the Outcome: The Regression Equation
3.4.7 What Should Be Reported
3.4.8 The Case of Multiple Outcome Variables
3.5 Binary Logistic Regression Analysis [4–6, 26–29]
3.5.1 Introduction: The Linear Versus the Logistic Regression
3.5.1.1 Variable Transformation
3.5.1.2 Normalization
3.5.1.3 The Linear Logistic Curve
3.5.1.4 The Best Fitting Curve
3.5.1.5 Estimation of Coefficients and Testing Significance
3.5.1.6 Calculation of Pseudo R2, Effect Size and P-value
3.5.1.7 The Logistic Regression Equation
3.5.1.8 Interpretation of the Results: The Odds Ratio
3.5.1.9 Prediction of the Probability of the Outcome
3.5.1.10 The Case of a Qualitative or a Discrete Predictor Variable
3.5.2 The Basic Assumptions
3.5.2.1 Assumptions Related to the Study Design
3.5.2.2 Linearity Between the Quantitative Predictors and the Logit of the Outcome
3.5.2.3 Absence of Multicollinearity
3.5.2.4 Absence of Outliers, High Leverage and Influential Points
3.5.3 The Example
3.5.4 Designing the Model
3.5.5 Verification of the Assumptions [6–10]
3.5.5.1 The Study Design
3.5.5.2 Linearity Between the Quantitative Predictors and the Logit of Outcome
3.5.5.3 Exclusion of Multicollinearity
3.5.5.4 Execution of the Analysis
3.5.5.5 Checking on Outliers, High Leverage and Influential Points
3.5.6 Model Evaluation
3.5.6.1 The Model Summary
3.5.6.2 Interpretation of the Regression Coefficients
3.5.6.3 Prediction of the Outcome
3.5.7 What Should Be Reported
3.6 Cox Regression Analysis
3.6.1 Introduction: Life Tables Versus Cox Regression Analysis
3.6.2 The Basic Assumptions [36–40]
3.6.2.1 Assumptions Related to the Study Design
3.6.2.2 Proportionality Assumption
3.6.3 The Example
3.6.4 Designing the Model [6, 7, 37]
3.6.5 Verification of the Assumptions
3.6.5.1 The Study Design
3.6.5.2 Execution of a Complementary Kaplan–Meier Analysis
3.6.5.3 Verification of the Proportionality Assumption
3.6.5.4 Execution of the Cox Regression Model
3.6.6 Model Evaluation
3.6.6.1 The Model Summary
3.6.6.2 The Individual Contribution of the Predictors
3.6.7 What Should Be Reported
References
4 Sample Size Calculation
Abstract
4.1 Introduction
4.1.1 Estimation of the Effect Size
4.1.1.1 The Magnitude
4.1.1.2 The Variability
4.1.1.3 Effect Size Families
4.1.2 Choosing the Risks of Error
4.1.2.1 The Primary Risk of Error
4.1.2.2 The Secondary Risk of Error
4.1.3 The Direction of the Study
4.1.3.1 The Unilateral (One-Tail) Versus the Bilateral (Two-Tails) Study
4.1.4 Study Specific Factors
4.1.4.1 Factors Related to the Primary Outcome
4.1.4.2 Factors Related to the Secondary Outcomes and Post-Hoc Analysis
4.1.4.3 Interim Analysis
4.1.5 Sample Size Calculation
4.1.5.1 A Basic Formula: The Magic Numbers [5, 12, 13]
4.1.5.2 The Non-centrality Parameter (ncp)
4.1.5.3 Power Calculation Software and Online Calculators
4.2 Comparison of Two Independent Quantitative Variables: Student and Mann & Whitney Tests
4.2.1 Effect Size
4.2.1.1 Cohen ds
4.2.1.2 Hedges g
4.2.1.3 Glass Δ
4.2.1.4 Non-parametric Effect Size
4.2.2 Sample Size
4.2.2.1 Normal Distribution: Comparison of Two Means
4.2.2.2 Other Distributions Than Normal
4.3 Association of Two Independent Binary Variables: Chi-Square and Fisher’s Exact Tests
4.3.1 Effect Size [28]
4.3.1.1 Cohen d and Cohen h
4.3.1.2 Phi (φ)
4.3.1.3 Cramer’s V (φc)
4.3.1.4 Relative Risk (RR)
4.3.1.5 Odds Ratio (OR)
4.3.1.6 Number Needed to Treat (NNT)
4.3.1.7 Cluster Design [36]
4.3.2 Sample Size
4.3.2.1 Difference Between Two Independent Proportions
4.3.2.3 Cluster Design
4.4 Categorical and Ordinal Variables: Chi-Square Tests of Independence and Goodness of Fit
4.4.1 Effect Size
4.4.1.1 Cramer’s V (φc)
4.4.1.2 Cohen W
4.4.1.3 Cumulative Odds Ratio
4.4.2 Sample Size
4.4.2.1 Independence from Odds Ratio
4.4.2.2 Independence from Cohen’s (W)
4.4.2.3 Goodness of Fit
4.5 Paired Analysis
4.5.1 Paired Student Test
4.5.1.1 Effect Size: Cohen dz
4.5.1.2 Sample Size
4.5.2 Paired Wilcoxon-Sign-Rank Test
4.5.2.2 Sample Size: Al-Sunduqchi and Guenther
4.5.3 McNemar’s Test
4.5.3.1 Effect Size: Odds Ratio
4.6.1 Effect Size
4.6.1.1 Eta Squared η²
4.6.1.2 Partial Eta Squared ηp²
4.6.1.3 Omega Squared ω²
4.6.1.4 Cohen f
4.6.2 Sample Size
4.6.2.1 Sample Size for a Given Power
4.6.2.2 Post-Hoc Power Calculation
4.7 Simple Correlation: Pearson’s Correlation Coefficient r
4.7.1 Effect Size
4.7.1.1 Fisher’s Transformation
4.7.2 Sample Size
4.8 Simple Linear Regression
4.8.1 Effect Size
4.8.1.1 The B Coefficient
4.8.2 Sample Size
4.8.2.1 A Simple Equation
4.9 Time to Event
4.9.1 Effect Size
4.9.1.1 The Hazard Rates
4.9.1.2 The Hazard Ratio
4.9.2 Sample Size
4.9.2.1 The Exponential Method
4.9.2.2 Cox Proportional Hazard Model
4.10 Logistic Regression
4.10.1 Effect Size: Log Odds Ratio
4.10.2 Sample Size
4.10.2.1 Large Sample Approximation Formulae
4.10.2.2 The Enumeration Procedure
4.11 Multiple Regression
4.11.1 Conditional Fixed Factors Model
4.11.1.1 Effect Size (f²)
4.11.1.2 Sample Size
4.11.2 Unconditional Random Effect Model
4.11.2.1 Effect Size
4.11.2.2 Sample Size
4.12 Repeated Measures
4.12.1 Repeated Measures ANOVA (RMANOVA)
4.12.1.1 Effect Size: Within-Group, Between-Group and Interaction
4.12.1.2 Sample Size
4.12.2 Friedman Test
4.12.2.1 Effect Size: Kendall W Coefficient of Concordance
4.12.2.2 Sample Size
4.13 Non-inferiority and Equivalence Studies
4.13.1 Comparison of Two Means
4.13.1.1 Effect Size
4.13.1.2 Sample Size
4.13.2 Comparison of Two Proportions
4.13.2.1 Effect Size
4.13.2.2 Sample Size from Proportions
4.13.2.3 Sample Size from Odds Ratio
4.13.3 Time to Event Analysis
4.13.3.1 Calculation of Effect Size: Hazard Rates and Hazard Ratio
4.13.3.2 Calculation of Sample Size: Exponential and Cox Proportional Hazard Methods
4.14 Diagnostic Accuracy
4.14.1 Sensitivity and Specificity
4.14.1.1 Establishing Sensitivity or Specificity with a Known Disease State
4.14.1.2 Establishing Sensitivity or Specificity with an Unknown Disease State
4.14.1.3 Testing Sensitivity or Specificity of a Test
4.14.1.4 Comparing Sensitivity or Specificity of Two Independent Tests
4.14.1.5 Comparing Sensitivity or Specificity in a Paired Design
4.14.2 ROC Analysis
4.14.2.1 Estimating Accuracy Index
4.14.2.2 Testing Accuracy of a Quantitative Diagnostic Test
4.14.2.3 Comparing Accuracy of Two Independent ROC Curves
4.14.2.4 Comparing Accuracy of Two Dependent ROC Curves
4.15 Measuring Agreement
4.15.1 Qualitative Outcome
4.15.1.1 Cohen’s Kappa
4.15.1.2 A General Formula
4.15.2 Quantitative Outcome
4.15.2.1 Intra-class Correlation Coefficient (ICC)
4.15.2.2 Kendall Correlation Coefficient (W)
4.16 Survey Analysis
4.16.1 Introduction
4.16.2 Factors Regulating Sample Size Calculation
4.16.2.1 Level of Confidence
4.16.2.2 Variability of Outcome
4.16.2.3 Margin of Error
4.16.3 Sample Size Calculation
4.16.3.1 Infinite or Unknown Population
References
5 The Protocol of a Comparative Clinical Study: Statistical Considerations
Abstract
5.1 Background and Rationale
5.2 Objectives
5.3 Study Design
5.3.1 The Formulation of the Study
5.3.2 Number of Study Groups and Number of Effectuated Comparisons
5.3.3 The Classic Parallel Groups Versus Other Designs
5.3.4 Design Framework: Superiority, Non-inferiority, Equivalence or Pilot Study
5.3.5 Allocation Ratio
5.4 Methods
5.4.1 Study Endpoints
5.4.1.1 Primary Outcome
5.4.1.2 Secondary Outcomes
5.4.2 Assignment of Interventions
5.4.2.1 Allocation Sequence Generation: Randomization
Simple Randomization
Blocked Randomization
Stratified Randomization
Covariate Adaptive Randomization: Minimization
5.4.2.2 Allocation Concealment and Implementation
5.4.3 Blinding (Masking)
5.5 Study Population and Samples
5.5.1 Study Settings
5.5.2 Inclusion Criteria
5.5.3 Exclusion Criteria
5.5.4 Study Timeline
5.5.5 Follow-Up
5.5.5.1 Patients Who Become Ineligible
5.5.5.2 Patients’ Discontinuation, Withdrawal or Crossing-Over
5.5.5.3 Patients Lost to Follow-Up
5.6 Treatments and Interventions
5.6.1 Studied Treatments and Interventions
5.6.2 Associated Treatments and Interventions
5.7 Data Management
5.7.1 Data Collection and Storage
5.7.2 Data Monitoring and Auditing
5.8 Statistical Methods
5.8.1 Population for the Analysis
5.8.1.1 Intention to Treat Analysis
5.8.1.2 Modified Intention to Treat and Per Protocol Analysis
5.8.2 Statistical Hypothesis
5.8.3 Statistical Analysis
5.8.3.1 Descriptive and Inferential Analysis
5.8.3.2 Interim and Subgroup Analysis
5.8.3.3 Adjusted Analysis
5.8.3.4 Analysis of Repeated Measures
5.8.3.5 Sequential Analysis
5.8.3.6 Handling of Missing Data
5.8.4 Sample Size Determination
5.8.4.1 Choosing the Effect Size
5.8.4.2 The Risks of Error
5.8.4.3 The Direction of the Study
5.9 Study Documentations
5.10 Study Ethics
5.11 Data Sharing and Publication
5.12 Appendices
References
6 Introduction to Meta-Analysis
Abstract
6.1 Introduction
6.1.1 From Narrative to Systematic Review
6.1.2 Why Do We Need Meta-Analysis?
6.1.3 Types of Meta-Analysis
6.1.3.1 Descriptive Meta-Analysis
6.1.3.2 Methods Focusing on Sampling Error
6.1.3.3 Psychometric Meta-Analysis (Hunter and Schmidt)
6.2 Stages of Meta-Analysis
6.2.1 Formulation of the Problem
6.2.2 Data Collection
6.2.3 Assessment of Risk of Bias
6.2.4 Choosing the Model
6.2.4.1 The Fixed-Effect Model
6.2.4.2 The Random Effects Model
6.2.4.3 The Mixed Effects Model
6.2.4.4 Which Model to Choose?
6.2.5 Managing the Effect Size Estimates
6.2.5.1 The Effect Size Families
6.2.5.2 Selecting an Effect Size
6.2.5.3 Converting Among Effect Size Estimates
6.2.5.4 Creation of a Reliable Variance
6.2.6 Estimation of a Mean Effect Size, Se, 95% CI and P Value
6.2.6.1 How Does Meta-Analysis Work: Can We Do It by Hand?
The Fixed-Effect Model
6.2.6.2 Calculation of Individual Effect Sizes
Unstandardized (Raw) Mean of the Difference: Unpaired Design
Unstandardized (Raw) Mean Difference: Paired Design
Hedges g
The Odds Ratio
6.2.6.3 Other Methods of Calculation
Rosenthal and Rubin Method
The Mantel–Haenszel Method
The Peto One-Step Approach
The Bare Bones Method
6.2.7 Assessment of Heterogeneity
6.2.7.1 Testing Heterogeneity: The Cochran Q Test
6.2.7.2 Analyzing Heterogeneity: I², H² and R²
The I² Index
The H² Index
The R² Index
6.2.7.3 Estimation of Variability Between Studies (T²)
Maximum Likelihood (ML)
Restricted Maximum Likelihood (REML)
6.2.7.4 Heterogeneity in Fixed Effect Versus Random Effects Model
6.2.8 Subgroup Analysis
6.2.8.1 Fixed Effect Model Within Subgroups
6.2.8.2 Random Effects Model Within Subgroups: Separate Versus Pooled T²
Random Effects Model Assuming Separate T²
Primary Analysis
Random Effects Model Assuming Pooled T²
Comparison of Subgroups
6.2.9 Meta Regression
6.2.10 Assessment of Publication Bias and Small-Study Effects
6.2.10.1 The Fail-Safe Methods
Fail-Safe B (Orwin)
The Rosenberg Weighting Effect Size Method
6.2.10.2 The Funnel Plot and Correcting Asymmetry
Trim and Fill Method
Weight Function Models
6.2.10.3 Testing the Small-Study Effects
Begg and Mazumdar Test
Egger Test
Harbord’s and Peters’ Tests
6.2.11 Sensitivity Analysis
6.2.11.1 Reviewing Data
6.2.11.2 Comparing Fixed to Random Effects Model
6.2.11.3 Cumulative Meta-Analysis
6.2.11.4 Leave-One-Out Meta-Analysis
6.2.12 Reporting Meta-Analysis
6.2.12.1 The Effect Size
6.2.12.2 The Model
6.2.12.3 Analysis of Heterogeneity
Testing the Statistical Significance of Heterogeneity (Q)
Measuring the Amount of Heterogeneity (T²)
Comparing Heterogeneity (I² and H²)
6.2.12.4 Reporting the Main Effect
6.2.12.5 Moderator Analysis
Analysis by Subgroups
Meta Regression
6.2.12.6 Publication Bias and the Small-Study Effects
Fail-Safe Methods
The Funnel Plot and Correcting Asymmetry
Testing the Small-Study Effects
6.2.12.7 Sensitivity Analysis
6.2.12.8 The Example
The Fixed-Effect Model
The Random-Effects Model
6.3 Psychometric Meta-Analysis (Hunter and Schmidt)
6.3.1 The Basic Concept
6.3.2 The Bare Bones Meta-Analysis
6.3.3 Meta-Analysis Corrected for All Artifacts
6.3.3.2 Assessment of Heterogeneity
The Hunter and Schmidt Rule of Thumb
Formal Statistical Testing (Q Test)
References
7 Pitfalls and Common Errors
Abstract
7.1 Sample Size Calculation
7.1.1 Empower the Study
7.1.2 Sculpt the Primary Outcome
7.1.2.1 Choose a Single Outcome
7.1.2.2 Choose a Continuous Variable
7.1.2.3 Create a Continuous Endpoint (Composite Score)
7.1.2.4 Adopt an Ordered Categorical Rather Than a Binomial Variable
7.1.2.5 Choose an Optimal Cut-Off Point
7.1.2.6 Select the Right Variance
7.1.2.7 Analyze the Source Study
7.1.2.8 Predict the Future Confidence Interval
7.1.3 Reduce Variability by Ameliorating the Study Design
7.1.3.1 Adopt a Paired or Cross-Over Design
7.1.3.2 Target Surrogate Endpoints
7.1.3.3 Repeat the Measurement
7.1.3.4 Make Study Arms Equal
7.1.3.5 Manage Sources of Variability
7.1.3.6 Plan a Tight Study
7.1.3.7 Choose a One-Tail Design
7.1.4 Prepare the Study to Receive the Selected Tests
7.1.5 Account for the Effect of Covariates in Multivariate Analysis
7.1.6 Manage the Secondary Outcomes
7.2 Data Management
7.2.1 Check on Errors and Data Consistency
7.2.2 Verify Outliers
7.2.3 Manage Missing Data
7.2.3.1 How Much Data is Missing?
7.2.3.2 Mechanisms of Missingness
7.2.3.3 Missing Data Statistics
7.2.3.4 Replace Missing Data
7.2.4 Normalizing Data
7.2.4.1 Why Normalize Data?
7.2.4.2 Suggested Algorithm
7.3 Tools of the Analysis
7.3.1 The Statistical Software
7.3.1.1 Choose a Statistical Software Program
7.3.1.2 The Software Does not Ensure the Results’ Validity
7.3.1.3 There Are Many Faces of Reality
7.3.1.4 A Bird in the Hand is Worth Two in the Bush
7.3.2 The Complementary Online Calculators
7.3.2.1 Check on the Equations and the References’ Validity
7.3.2.2 Cross-Examine the Results
7.3.2.3 Understand the Equation’s Elements
7.3.2.4 Verify the Conditions of Application
7.4 Data Reporting in a Manuscript
7.4.1 The Introduction
7.4.2 The Material and Methods
7.4.2.1 Study Design
7.4.2.2 Population of the Analysis
7.4.2.3 Type of the Analysis
7.4.2.4 Patients’ Description, Treatment Allocation and Follow-Up
7.4.2.5 The Statistical Analysis Section
Descriptive Statistics
The Direction of the Study
Inferential Statistics
Risks of Error
Sample Size Calculation
Statistical Software
7.4.3 The Results
7.4.3.1 Verify the Protocol is Being Implemented
Should We Test Randomization?
Report Difficulties in Recruitment and Follow-Up
Verify the Conditions of Application of the Statistical Tests
7.4.3.2 Report the Descriptive Statistics
7.4.3.3 Report the Estimates
Bivariate Analysis
Multivariable Analysis
The ANOVA Family
7.4.3.4 Report the Subgroup and Intermediate Analysis
The Bias Behind Repeated Analysis
The Between-Groups Analysis
Subgroup Analysis Versus Testing Interaction
The Within-Groups Analysis
7.4.3.5 Create Informative Tables and Figures
7.4.4 The Discussion
7.4.4.1 When Do We Have the Right to Conclude?
7.4.4.2 The Absence of Evidence is not Evidence of Absence
7.4.5 The Abstract
7.4.6 Common Pitfalls, Misinterpretations, and Inadequate Reporting
7.4.6.1 Clinical Relevance Must Lead Statistical Significance
7.4.6.2 Avoid Incomplete and Selective Reporting
7.4.6.3 Avoid the Misinterpretation of the Indices of Outcome
7.4.6.4 The 95% CI and the P Value Are not Interchangeable
7.4.6.5 A Smaller P-Value Does not Reflect a Truer or a Larger Effect
7.4.6.6 Express the Results with Confidence
7.4.6.7 Interpret the Results of the Statistical Analysis Correctly
7.4.6.8 The Results of the Within-Group Analysis Are not Informative About the Difference Between Groups
7.4.6.9 Whether Placebo is Better Than Treatment is not Tested in a Unilateral Study
7.4.6.10 A Multivariate Test is not More Robust Than a Bivariate Test
7.4.6.11 The Bias Behind Small Studies
7.4.6.12 The Non-parametric Approach is a Benefit
7.5 The Role of the Statistician
7.5.1 Include the Statistician in the Research Team
7.5.2 Begin from the Beginning
7.5.3 The Protocol is not a One-Man Show
7.5.4 Meeting Mid-Way
7.5.5 Data Management
7.5.6 Statistical Analysis
7.5.7 Publication
References
Appendix 1
Index