
Evidence-Based Statistics

An Introduction to the Evidential Approach - from Likelihood Principle to Statistical Practice

Cahusac, Peter M. B.


1st Edition, October 2020
256 Pages, Hardcover
John Wiley & Sons Ltd

ISBN: 978-1-119-54980-2

Price: 109,00 € (incl. VAT, excl. shipping)

Further versions: ePub, Mobi, PDF

Evidence-Based Statistics: An Introduction to the Evidential Approach - from Likelihood Principle to Statistical Practice provides readers with a comprehensive and thorough guide to the evidential approach in statistics. The approach uses likelihood ratios, rather than the probabilities used by other statistical inference approaches. The evidential approach is conceptually easier to grasp, and the calculations more straightforward to perform. This book explains how to express data in terms of the strength of statistical evidence for competing hypotheses.
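The core idea, comparing two competing hypotheses by the ratio of their likelihoods, can be sketched for a binomial proportion. This is an illustrative example, not the book's own code; the function name and the data values are hypothetical:

```python
import math

def binomial_support(k, n, p1, p2):
    """Support (natural-log likelihood ratio) for proportion p1 over p2,
    given k successes in n trials (hypothetical helper, for illustration)."""
    ll1 = k * math.log(p1) + (n - k) * math.log(1 - p1)  # log-likelihood under p1
    ll2 = k * math.log(p2) + (n - k) * math.log(1 - p2)  # log-likelihood under p2
    return ll1 - ll2

# With 14 successes in 20 trials, how strongly do the data favour
# a success probability of 0.7 over 0.5?
S = binomial_support(14, 20, 0.7, 0.5)
print(round(S, 2))  # about 1.65: modest support for 0.7 over 0.5
```

Positive support favours the first hypothesis, negative support the second; the same data can be re-examined against any pair of hypothesised proportions without any correction for multiple looks.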

The evidential approach is currently underused, despite its mathematical precision and statistical validity. Evidence-Based Statistics is an accessible and practical text filled with examples, illustrations and exercises. Additionally, the companion website complements and expands on the information contained in the book.

While the evidential approach is unlikely to replace probability-based methods of statistical inference, it provides a useful addition to any statistician's "bag of tricks." This book:
* Explains how to calculate statistical evidence for commonly used analyses, step by step
* Covers t tests, ANOVA (one-way, factorial, between- and within-participants, mixed), categorical analyses (binomial, Poisson, McNemar, rate ratio, odds ratio, data that is 'too good to be true', multi-way tables), correlation, regression, and nonparametric analyses (one sample, related samples, independent samples, multiple independent samples, permutation, and bootstraps)
* Gives equations for all analyses, with R statistical code provided for many of them
* Explains sample size calculations for the evidential probabilities of misleading and weak evidence
* Describes useful techniques such as Matthews's critical prior interval, Goodman's Bayes factor, and Armitage's stopping rule
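For the t-based analyses listed above, a sample mean's support against a null value can be obtained from the t statistic via the standard profile-likelihood identity S = (n/2)·ln(1 + t²/df). A minimal sketch under that assumption (the book's own R code may differ in detail):

```python
import math

def support_from_t(t, n):
    """Support (natural-log maximum likelihood ratio) for the sample mean
    versus a null value, from a one-sample t statistic with n observations.
    Uses S = (n/2) * ln(1 + t^2 / df), df = n - 1."""
    df = n - 1
    return (n / 2) * math.log(1 + t * t / df)

# Hypothetical example: t = 2.5 from a sample of 20 observations.
S = support_from_t(2.5, 20)
print(round(S, 2))  # about 2.84
```

Unlike a p value, the resulting support compares two specified hypotheses directly, and adding more data simply updates it by addition.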

Recommended for undergraduate and graduate students in any field that relies heavily on statistical analysis, as well as active researchers and professionals in those fields, Evidence-Based Statistics: An Introduction to the Evidential Approach - from Likelihood Principle to Statistical Practice belongs on the bookshelf of anyone who wants to amplify and empower their approach to statistical analysis.

Acknowledgements xi

About the Author xiii

About the Companion Site xv

Introduction 1

References 2

1 The Evidence is the Evidence 3

1.1 Evidence-Based Statistics 3

1.1.1 The Literature 4

1.2 Statistical Inference - The Basics 6

1.2.1 Different Statistical Approaches 7

1.2.2 The Likelihood/Evidential Approach 8

1.2.3 Types of Approach Using Likelihoods 11

1.2.4 Pros and Cons of Likelihood Approach 11

1.3 Effect Size - True If Huge! 12

1.4 Calculations 15

1.5 Summary of the Evidential Approach 16

References 18

2 The Evidential Approach 21

2.1 Likelihood 21

2.1.1 The Principle 22

2.1.2 Support 24

2.1.3 Example - One Sample 29

2.1.4 Direction Matters 36

2.1.5 Maximum Likelihood Ratio 37

2.1.6 Likelihood Intervals 39

2.1.7 The Support Function 42

2.1.8 Choosing the Effect Size 42

2.2 Misleading and Weak Evidence 46

2.3 Adding More Data and Multiple Testing 48

2.4 Sequence of Calculations Using t 49

2.5 Likelihood Terminology 51

2.6 R Code for Chapter 2 52

2.6.1 Calculating the Likelihood Function for a One Sample t 52

2.7 Exercises 53

References 53

3 Two Samples 55

3.1 Basics Using the t Distribution 55

3.1.1 Steps in Calculations 56

3.2 Related Samples 56

3.3 Independent Samples 59

3.3.1 Independent Samples with Unequal Variances 60

3.4 Calculation Simplification 62

3.5 If Variance is Known, or Large Sample Size, Use z 63

3.6 Methodological and Pro Forma Analyses 65

3.7 Adding More Data 68

3.8 Estimating Sample Size 70

3.8.1 Sample Size for One Sample and Related Samples 71

3.8.2 Sample Size for Independent Samples 73

3.9 Differences in Variances 73

3.10 R Code For Chapter 3 74

3.10.1 Calculating the Likelihood Function, the Likelihoods and Support for Independent Samples 74

3.10.2 Creating a Gardner-Altman Estimation Plot with Likelihood Function and Interval 76

3.11 Exercises 77

References 77

4 ANOVA 79

4.1 Multiple Means 79

4.1.1 The Modelling Approach 79

4.1.2 Model Complexity 80

4.2 Example - Fitness 81

4.2.1 Comparing Models 82

4.2.2 Specific Model Comparisons 84

4.2.2.1 A Non-Orthogonal Contrast 88

4.2.3 Unequal Sample Sizes 89

4.3 Factorial ANOVA 90

4.3.1 Example - Blood Clotting Times 91

4.3.2 Specific Analyses in Factorial ANOVA, Including Contrasts 93

4.4 Alerting r² 96

4.4.1 Alerting r² to Compare Contrasts for Effect Size 96

4.5 Repeated Measures Designs 97

4.5.1 Mixed Repeated Measures with Between Participant Designs 98

4.5.2 Contrasts in Mixed Designs 100

4.6 Exercise 102

References 102

5 Correlation and Regression 103

5.1 Relationships Between Two Variables 103

5.2 Correlation 103

5.2.1 Likelihood Intervals for Correlation 107

5.3 Regression 108

5.3.1 Obtaining Evidence from F values 110

5.3.2 Examining Non-linearity 111

5.4 Logistic Regression 113

5.5 Exercises 120

References 120

6 Categorical Data 121

6.1 Types of Categorical Data 121

6.1.1 How is the χ² Test Used? 122

6.2 Binomial 123

6.2.1 Likelihood Intervals for Binomial 125

6.2.2 Comparing Different π 126

6.2.3 The Support Function 127

6.3 Poisson 129

6.4 Rate Ratios 131

6.5 One-Way Categorical Data 134

6.5.1 One-Way Categorical Comparing Different Expected Values 135

6.5.2 One-Way with More than Two Categories 135

6.6 2 × 2 Contingency Tables 137

6.6.1 Paired 2 × 2 Categorical Analysis 139

6.6.2 Diagnostic Tests 141

6.6.2.1 Sensitivity and Specificity 141

6.6.2.2 Positive and Negative Predictive Values 142

6.6.2.3 Likelihood Ratio and Post-test Probability 143

6.6.2.4 Comparing Sensitivities and Specificities of Two Diagnostic Procedures 144

6.6.3 Odds Ratio 146

6.6.3.1 Likelihood Function for the Odds Ratio 149

6.6.4 Likelihood Function for Relative Risk with Fixed Entries 151

6.7 Larger Contingency Tables 151

6.7.1 Main Effects 153

6.7.2 Evidence for Linear Trend 154

6.7.3 Higher Dimensions? 155

6.8 Data That Fits a Hypothesis Too Well 158

6.9 Transformations of the Variable 159

6.10 Clinical Trials - A Tragedy in 3 Acts 161

6.11 R Code for Chapter 6 164

6.11.1 One-Way Categorical Data Support Against Specified Proportions 164

6.11.2 Calculating the Odds Ratio Likelihood Function and Support 164

6.11.3 Calculating the Likelihood Function and Support for Relative Risk with Fixed Entries 166

6.11.4 Calculating Interaction and Main Effects for Larger Contingency Tables 168

6.11.5 Log-Linear Modelling for Multi-way Tables 169

6.12 Exercises 171

References 172

7 Nonparametric Analyses 175

7.1 So-Called 'Distribution-Free' Statistics 175

7.2 Hacking S_M 176

7.3 One Sample and Related Samples 176

7.4 Independent Samples 179

7.5 More than Two Independent Samples 181

7.6 Permutation Analyses 182

7.7 Bootstrap Analyses for One Sample or Related Samples 184

7.7.1 Bootstrap Analyses for Independent Samples 186

7.8 R Code for Chapter 7 187

7.8.1 Calculating Relative Support for One Sample 187

7.8.2 Calculating Relative Support for Differences in Two Independent Samples 188

7.8.3 Calculating Relative Support for Differences in Three Independent Samples 189

7.8.4 Calculating Relative Support Using Permutations Analysis 189

7.8.5 Bootstrap Analyses for One Sample 191

7.8.6 Bootstrap Analyses for Two Independent Samples 193

7.9 Exercises 195

References 196

8 Other Useful Techniques 197

8.1 Other Techniques 197

8.2 Critical Prior Interval 197

8.3 False Positive Risk 201

8.4 The Bayes Factor and the Probability of the Null Hypothesis 205

8.4.1 Example 208

8.5 Bayesian t Tests 210

8.6 The Armitage Stopping Rule 212

8.7 Counternull Effect Size 214

References 217

Appendix A Orthogonal Polynomials 219

Appendix B Occam's Bonus 221

Reference 222

Appendix C Problems with p Values 223

C.1 The Misuse of p Values 223

C.1.1 p Value Fallacies 225

C.2 The Use of p Values 225

C.2.1 Two Contradictory Traditions 226

C.2.2 Whither the p Value? 227

C.2.3 Remedies 228

References 229

Index 231
PETER M.B. CAHUSAC, PHD, received his doctorate in neuropharmacology from the Medical School, Bristol University, in 1984. He completed post-doctoral studies at Oxford University, where he obtained an MSc in Applied Statistics in 1992. He is a member of the British Pharmacological Society, and a Fellow of the Physiological Society (UK) and the Royal Statistical Society. He is currently Associate Professor in Biostatistics and Pharmacology at Alfaisal University in Riyadh, Saudi Arabia.