Statistics in Medicine, Fourth Edition, by Robert H. Riffenburgh and Daniel L. Gillen, is an excellent book, useful as a reference for researchers in the medical sciences and as a textbook. It focuses largely on understanding statistical concepts rather than on mathematical and theoretical underpinnings. The authors cover both introductory statistical techniques and advanced methods commonly appearing in medical journals.
The text begins with a discussion of planning studies and writing articles to report results. Following this, it introduces statistics that would typically be covered in an introductory biostatistics course, including summary statistics, distributions, two-way tables, confidence intervals, and hypothesis tests. In addition, the authors give an overview of a variety of more sophisticated statistical techniques such as regression models for binary and count outcomes, survival analysis, equivalence testing, Bayesian analysis, and meta-analysis.
1.2 Stages of Scientific Investigation
1.3 Science Underlying Clinical Decision-Making
1.4 Why Do We Need Statistics?
1.5 Concepts in Study Design
1.6 Study Types
1.7 Convergence with Sample Size
1.8 Sampling Schemes in Observational Studies
1.9 Sampling Bias
1.10 Randomizing a Sample
1.11 How to Plan and Conduct a Study
1.12 Mechanisms to Improve Your Study Plan
1.13 Reading Medical Articles
1.14 Where Articles May Fall Short
1.15 Writing Medical Articles
1.16 Statistical Ethics in Medical Studies
1.17 Conclusion
Appendix to Chapter 1
2.2 Notation (or Symbols)
2.3 Quantification and Accuracy
2.4 Data Types
2.5 Multivariable Concepts and Types of Adjustment Variables
2.6 How to Manage Data
2.7 Defining the Scientific Goal: Description, Association Testing, Prediction
2.8 Reporting Statistical Results
2.9 A First-Step Guide to Descriptive Statistics
2.10 An Overview of Association Testing
2.11 A Brief Discussion of Prediction Modeling
3.2 Probability and Relative Frequency
3.3 Graphing Relative Frequency
3.4 Continuous Random Variables
3.5 Frequency Distributions for Continuous Variables
3.6 Probability Estimates From Continuous Distributions
3.7 Probability as Area Under the Curve
References
4.2 Greek Versus Roman Letters
4.3 What Is Typical
4.4 The Spread About the Typical
4.5 The Shape
4.6 Sampling Distribution of a Variable Versus a Statistic
4.7 Statistical Inference
4.8 Distributions Commonly Used in Statistics
4.9 Approximate Distribution of the Mean (Central Limit Theorem)
4.10 Approximate Distribution of a Sample Quantile
5.2 Numerical Descriptors, One Variable
5.3 Numerical Descriptors, Two Variables
5.4 Numerical Descriptors, Three Variables
5.5 Graphical Descriptors, One Variable
5.6 Graphical Descriptors, Two Variables
5.7 Graphical Descriptors, Three Variables
5.8 Principles of Informative Descriptive Tables and Figures
References
6.2 The Normal Distribution
6.3 The t Distribution
6.4 The Chi-Square Distribution
6.5 The F Distribution
6.6 The Binomial Distribution
6.7 The Poisson Distribution
References
7.2 Error Probabilities
7.3 Two Policies of Testing
7.4 Distinguishing Between Statistical and Clinical Significance
7.5 Controversies Regarding the Rigid Use and Abuse of p-Values
7.6 Avoiding Multiplicity Bias
7.7 Organizing Data for Inference
7.8 Evolving a Way to Answer Your Data Question
Reference
8.2 Tolerance Intervals for Patient Measurements
8.3 Concept of a Confidence Interval for a Parameter
8.4 Confidence Interval for a Population Mean, Known Standard Deviation
8.5 Confidence Interval for a Population Mean, Estimated Standard Deviation
8.6 Confidence Interval for a Population Proportion
8.7 Confidence Interval for a Population Median
8.8 Confidence Interval for a Population Variance or Standard Deviation
8.9 Confidence Interval for a Population Correlation Coefficient
References
9.2 Tests on Categorical Data: 2 × 2 Tables
9.3 The Chi-Square Test of Contingency
9.4 Fisher’s Exact Test of Contingency
9.5 Tests on r × c Contingency Tables
9.6 Tests on Proportion
9.7 Tests of Rare Events (Proportions Close to Zero)
9.8 McNemar’s Test: Matched Pair Test of a 2 × 2 Table
9.9 Cochran’s Q: Matched Pair Test of a 2 × r Table
9.10 Three or More Ranked Samples With Two Outcome Categories: Royston’s Ptrend Test
References
10.2 Inference for the Risk Ratio: The Log Risk Ratio Test
10.3 Inference for the Odds Ratio: The Log Odds Ratio Test
10.4 Receiver Operating Characteristic Curves
10.5 Comparing Two Receiver Operating Characteristic Curves
References
11.2 Single or Paired Means: One-Sample Normal (z) and t Tests
11.3 Two Means: Two-Sample Normal (z) and t Tests
11.4 Three or More Means: One-Factor Analysis of Variance
11.5 Three or More Means in Rank Order: Analysis of Variance Trend Test
11.6 The Basics of Nonparametric Tests
11.7 Single or Paired Sample Distribution(s): The Signed-Rank Test
11.8 Two Independent Sample Distributions: The Rank-Sum Test
11.9 Large-Sample Ranked Outcomes
11.10 Three or More Independent Sample Distributions: The Kruskal-Wallis Test
11.11 Three or More Matched Sample Distributions: The Friedman Test
11.12 Three or More Ranked Independent Samples With Ranked Outcomes: Cuzick's Nptrend Test
11.13 Three or More Ranked Matched Samples With Ranked Outcomes: Page’s L Test
11.14 Potential Drawbacks to Using Nonparametric Tests
Reference
12.2 Basics Underlying Equivalence Testing
12.3 Choosing a Noninferiority or Equivalence Margin
12.4 Methods for Noninferiority Testing
12.5 Methods for Equivalence Testing
12.6 Joint Difference and Equivalence Testing
References
13.2 Testing Variability on a Single Sample
13.3 Testing Variability Between Two Samples
13.4 Testing Variability Among Three or More Samples
13.5 Basics on Tests of Distributions
13.6 Test of Normality of a Distribution
13.7 Test of Equality of Two Distributions
References
14.2 Contingency as Association
14.3 Correlation as Association
14.4 Contingency as Agreement
14.5 Correlation as Agreement
14.6 Agreement Among Ratings: Kappa
14.7 Agreement Among Multiple Rankers
14.8 Reliability
14.9 Intraclass Correlation
References
15.2 Regression Concepts and Assumptions
15.3 Simple Regression
15.4 Assessing Regression: Tests and Confidence Intervals
15.5 Deming Regression
15.6 Types of Regression
15.7 Correlation Concepts and Assumptions
15.8 Correlation Coefficients
15.9 Correlation as Related to Regression
15.10 Assessing Correlation: Tests and Confidence Intervals
15.11 Interpretation of Small-But-Significant Correlations
References
16.2 Multiple Linear Regression
16.3 Model Diagnosis and Goodness of Fit
16.4 Accounting for Heteroscedasticity
16.5 Curvilinear Regression
16.6 Two-Factor Analysis of Variance
16.7 Analysis of Covariance
16.8 Three-Way and Higher-Way Analysis of Variance
16.9 Concepts of Experimental Design
References
17.2 Extensions of Contingency Table Analyses: Simple Logistic Regression
17.3 Multiple Logistic Regression: Model Specification and Interpretation
17.4 Inference for Association Parameters
17.5 Model Diagnostics and Goodness-of-Fit
References
18.2 The Poisson Distribution
18.3 Means Versus Rates
18.4 Inference for the Rate of a Poisson Random Variable
18.5 Comparing Poisson Rates From Two Independent Samples
18.6 The Simple Poisson Regression Model
18.7 Multiple Poisson Regression: Model Specification and Interpretation
18.8 Obtaining Predicted Rates
18.9 Inference for Association Parameters
Reference
19.2 Censoring
19.3 Survival Estimation: Life Table Estimates and Kaplan-Meier Curves
19.4 Survival Testing: The Log-Rank Test
19.5 Adjusted Comparison of Survival Times: Cox Regression
References
20.2 Distinguishing Longitudinal Data From Time-Series Data
20.3 Analysis of Longitudinal Data
20.4 Time-Series
References
21.2 Is the Sample Size Estimate Adequate?
21.3 The Concept of Power Analysis
21.4 Sample Size Methods
21.5 Test on One Mean (Normal Distribution)
21.6 Test on Two Means (Normal Distribution)
21.7 Tests When Distributions Are Nonnormal or Unknown
21.8 Test With No Objective Prior Data
21.9 Confidence Intervals on Means
21.10 Test of One Proportion (One Rate)
21.11 Test of Two Proportions (Two Rates)
21.12 Confidence Intervals on Proportions (On Rates)
21.13 Test on a Correlation Coefficient
21.14 Tests on Ranked Data
21.15 Variance Tests, Analysis of Variance, and Regression
21.16 Equivalence Tests
21.17 Number Needed to Treat or Benefit
References
22.2 Fundamentals of Clinical Trial Design
22.3 Reducing Bias in Clinical Trials: Blinding and Randomization
22.4 Interim Analyses in Clinical Trials: Group Sequential Testing
References
23.2 Some Key Stages in the History of Epidemiology
23.3 Concept of Disease Transmission
23.4 Descriptive Measures
23.5 Types of Epidemiologic Studies
23.6 Retrospective Study Designs: The Case-Control Study Design
23.7 The Nested Case-Control Study Design
23.8 The Case-Cohort Study Design
23.9 Methods to Analyze Survival and Causal Factors
23.10 A Historical Note
References
24.2 Publication Bias in Meta-analyses
24.3 Fixed- and Random-Effects Estimates of the Pooled Effect
24.4 Tests for Heterogeneity of Estimated Effects Across Studies
24.5 Reporting the Results of a Meta-analysis
24.6 Further References
References
25.2 Bayesian Concepts
25.3 Describing and Testing Means
25.4 On Parameters Other Than Means
25.5 Describing and Testing a Rate (Proportion)
25.6 Conclusion
References
26.2 Surveys
26.3 Questionnaires
27.2 Significance in Interpretation
27.3 Post Hoc Confidence and Power
27.4 Multiple Tests and Significance
27.5 Bootstrapping, Resampling, and Simulation
27.6 Bland-Altman Plot: A Diagnostic Tool
27.7 Cost Effectiveness
References
28.2 Analysis of Variance Issues
28.3 Regression Issues
28.4 Rates and Proportions Issues
28.5 Multivariate Methods
28.6 Markov Chains: Following Multiple States Through Time
28.7 Markov Chain Monte Carlo: Evolving Models
28.8 Markov Chain Monte Carlo: Stationary Models
28.9 Further Nonparametric Tests
28.10 Imputation of Missing Data
28.11 Frailty Models in Survival Analysis
28.12 Bonferroni “Correction”
28.13 Logit and Probit
28.14 Adjusting for Outliers
28.15 Curve Fitting to Data
28.16 Sequential Analysis
28.17 Another Test of Normality
28.18 Data Mining
28.19 Data Science and the Relationship Among Statistics, Machine Learning, and Artificial Intelligence
References