Practical Statistical Methods: A SAS Programming Approach


Author(s): Lakshmi V Padgett
Publisher: CRC Press
Year: 2011

Language: English
Pages: xiii, 290 p.; 25 cm
City: Boca Raton, FL
Tags: Library; Computer literature; SAS / JMP


Content: 1. Introduction --
1.1. Types of Data --
1.2. Descriptive Statistics/Data Summaries --
1.3. Graphical and Tabular Representation --
1.4. Population and Sample --
1.5. Estimation and Testing Hypothesis --
1.6. Normal Distribution --
1.7. Nonparametric Methods --
1.8. Some Useful Concepts --
2. Qualitative Data --
2.1. One Sample --
2.1.1. Binary Data --
2.1.2. t Categorical Responses --
2.2. Two Independent Samples --
2.2.1. Two Proportions --
2.2.2. Odds Ratio and Relative Risk --
2.2.3. Logistic Regression with One Dichotomous Explanatory Variable --
2.2.4. Cochran-Mantel-Haenszel Test for a 2 x 2 Table --
2.2.5. t Categorical Responses --
2.3. Paired Two Samples --
2.3.1. Binary Responses --
2.3.2. t Categorical Responses --
2.4. k Independent Samples --
2.4.1. k Proportions --
2.4.2. Logistic Regression When the Explanatory Variable Is Not Dichotomous --
2.4.3. CMH Test --
2.4.4. t Categorical Responses --
2.5. Cochran's Test --
2.6. Ordinal Data --
2.6.1. Row Mean Score Test --
2.6.2. Cochran-Armitage Test --
2.6.3. Measures of Association --
2.6.4. Ridit Analysis --
2.6.5. Weighted Kappa --
2.6.6. Ordinal Logistic Regression --
2.6.6.1. Two Samples --
2.6.6.2. k Samples --
3. Continuous Normal Data --
3.1. One Sample --
3.2. Two Samples --
3.2.1. Independent Samples --
3.2.1.1. Means --
3.2.1.2. Variances --
3.2.2. Paired Samples --
3.3. k Independent Samples --
3.3.1. One-Way Analysis of Variance --
3.3.1.1. Variance --
3.3.2. Covariance Analysis --
3.4. Multivariate Methods --
3.4.1. Correlation, Partial, and Intraclass Correlation --
3.4.2. Hotelling's T² --
3.4.2.1. One Sample --
3.4.2.2. Two Samples --
3.4.3. One-Way Multivariate Analysis of Variance --
3.4.4. Profile Analysis --
3.4.5. Discriminant Functions --
3.4.6. Cluster Analysis --
3.4.7. Principal Components --
3.4.8. Factor Analysis --
3.4.9. Canonical Correlation --
3.5. Multifactor ANOVA --
3.5.1. Crossed Factors --
3.5.2. Tukey 1 df for Nonadditivity --
3.5.3. Nested Factors --
3.6. Variance Components --
3.7. Split Plot Designs --
3.8. Latin Square Design --
3.9. Two-Treatment Crossover Design --
4. Nonparametric Methods --
4.1. One Sample --
4.1.1. Sign Test --
4.1.2. Wilcoxon Signed-Rank Test --
4.1.3. Kolmogorov Goodness of Fit --
4.1.4. Cox and Stuart Test --
4.2. Two Samples --
4.2.1. Wilcoxon-Mann-Whitney Test --
4.2.2. Mood's Median Test --
4.2.3. Kolmogorov-Smirnov --
4.2.4. Equality of Variances --
4.3. k Samples --
4.3.1. Kruskal-Wallis Test --
4.3.2. Median Test --
4.3.3. Jonckheere Test --
4.4. Transformations --
4.5. Friedman Test --
4.6. Association Measures --
4.6.1. Spearman Rank Correlation --
4.6.2. Kendall's Tau --
4.6.3. Kappa Statistic --
4.7. Censored Data --
4.7.1. Kaplan-Meier Survival Distribution Function --
4.7.2. Wilcoxon (Gehan) and Log-Rank Test --
4.7.3. Life-Table (Actuarial Method) --
5. Regression --
5.1. Simple Regression --
5.2. Polynomial Regression --
5.3. Multiple Regression --
5.3.1. Multicollinearity --
5.3.2. Dummy Variables --
5.3.3. Interaction --
5.3.4. Variable Selection --
5.4. Diagnostics --
5.4.1. Outliers --
5.4.2. Influential Observations --
5.4.3. Durbin-Watson Statistic --
5.5. Weighted Regression --
5.6. Logistic Regression --
5.6.1. Dichotomous Logistic Regression --
5.6.2. Multinomial Logistic Model --
5.6.3. Cumulative Logistic Model --
5.7. Poisson Regression --
5.8. Robust Regression --
5.9. Nonlinear Regression --
5.10. Piecewise Regression --
5.11. Accelerated Failure Time (AFT) Model --
5.12. Cox Regression --
5.12.1. Proportional Hazards Model --
5.12.2. Proportional Hazard Assumption --
5.12.3. Stratified Cox Model --
5.12.4. Time-Varying Covariates --
5.12.5. Competing Risks --
5.13. Parallelism of Regression Equations --
5.14. Variance-Stabilizing Transformations --
5.15. Ridge Regression --
5.16. Local Regression (LOESS) --
5.17. Response Surface Methodology: Quadratic Model --
5.18. Mixture Designs and Their Analysis --
5.19. Analysis of Longitudinal Data: Mixed Models --
6. Miscellaneous Topics --
6.1. Missing Data --
6.2. Diagnostic Errors and Human Behavior --
6.2.1. Introduction --
6.2.2. Independent Samples --
6.2.2.1. Two Independent Samples --
6.2.2.2. k Independent Samples --
6.2.3. Two Dependent Samples --
6.2.4. Finding the Threshold for a Screening Variable --
6.2.5. Analyzing Response Data with Errors --
6.2.6. Responders' Anonymity --
6.3. Density Estimation --
6.3.1. Parametric Density Estimation --
6.3.2. Nonparametric Univariate Density Estimation --
6.3.3. Bivariate Kernel Estimator --
6.4. Robust Estimators --
6.5. Jackknife Estimators --
6.6. Bootstrap Method --
6.7. Propensity Scores --
6.8. Interim Analysis and Stopping Rules --
6.8.1. Stopping Rules --
6.8.2. Conditional Power --
6.9. Microarrays and Multiple Testing --
6.9.1. Microarrays --
6.9.2. Multiple Testing --
6.10. Stability of Products --
6.11. Group Testing --
6.12. Correspondence Analysis --
6.13. Classification and Regression Trees --
6.14. Multidimensional Scaling --
6.15. Path Analysis --
6.16. Choice-Based Conjoint Analysis --
6.16.1. Availability Designs and Cross Effects --
6.16.2. Pareto-Optimal Choice Sets --
6.16.3. Mixture-Amount Designs --
6.17. Meta-Analysis --
6.17.1. Homogeneity of the Effect Sizes --
6.17.2. Combining the p-Values.
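
For orientation, the sketch below shows the style of SAS program the book pairs with these topics, using a one-sample t-test as in Section 3.1. The dataset, values, and variable names are hypothetical illustrations, not taken from the book.

/* One-sample t-test (cf. Section 3.1): test whether the mean of a
   continuous measurement differs from a hypothesized value of 70. */
data weights;                    /* hypothetical example data */
   input weight @@;              /* read several values per line */
   datalines;
72 68 75 70 69 74 71 73
;
run;

proc ttest data=weights h0=70;   /* H0: mean weight = 70 */
   var weight;
run;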