
A Practical Approach to Quantitative Validation of Patient-Reported Outcomes

A Simulation-Based Guide Using SAS

Bushmakin, Andrew G. / Cappelleri, Joseph C.

Statistics in Practice


1st Edition, November 2022
368 pages, Hardcover
Wiley & Sons Ltd

ISBN: 978-1-119-37637-8
John Wiley & Sons

In A Practical Approach to Quantitative Validation of Patient-Reported Outcomes, two distinguished researchers, with 50 years of collective research experience and hundreds of publications on patient-centered research, deliver a detailed and comprehensive exposition of the critical steps required for quantitative validation of patient-reported outcomes (PROs). The book provides an incisive and instructional discussion of the major aspects of psychometric validation methodology for PROs, which is especially relevant for medical applications sponsored by the pharmaceutical industry, where SAS is the primary software, and for evaluations in regulatory and other healthcare environments.

Central topics include test-retest reliability, exploratory and confirmatory factor analyses, construct and criterion validity, responsiveness and sensitivity, interpretation of PRO scores and findings, and meaningful within-patient change and clinically important difference. The authors provide step-by-step guidance, walking readers through how to structure data prior to a PRO analysis and demonstrating how to implement analyses with simulated examples grounded in real-life scenarios.
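To give a flavor of the simulation-based approach the book describes (the book itself implements everything in SAS), here is a minimal, illustrative Python sketch — not from the book — that computes Cronbach's alpha, one of the reliability measures covered, on simulated Likert-type item data driven by a single latent trait:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulate a 4-item Likert-type scale (responses 1-5) driven by one latent
# trait, so the items are positively correlated and alpha should be high.
rng = np.random.default_rng(0)
latent = rng.normal(size=500)
items = np.clip(
    np.round(3 + latent[:, None] + rng.normal(scale=0.8, size=(500, 4))),
    1, 5,
)
print(round(cronbach_alpha(items), 2))
```

Because all four items share the same latent driver, the resulting alpha is well above the conventional 0.70 threshold; lowering the latent signal (or raising the noise scale) in the simulation shows how alpha degrades as items become less internally consistent.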

Readers will also find:
* A thorough introduction to patient-reported outcomes, including their definition, development, and psychometric validation
* Comprehensive explorations of the validation workflow, including discussions of clinical trials as a data source for validation and the validation workflow for single and multi-item scales
* In-depth discussions of key concepts related to the validation of a measurement scale
* Special attention to the US Food and Drug Administration (FDA) guidance on the development and validation of PROs, which lays the foundation for and inspires the analytic methods presented
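Test-retest reliability, one of the topics highlighted above, is typically quantified with an intraclass correlation coefficient (ICC). As a hedged illustration in Python (the book's own analyses use SAS mixed-model procedures), the following sketch computes the two-way random-effects, absolute-agreement ICC(2,1) from ANOVA mean squares on simulated test-retest data:

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """Two-way random-effects, absolute-agreement ICC(2,1) for an
    (n_subjects, k_timepoints) matrix, built from the ANOVA mean squares."""
    n, k = scores.shape
    grand = scores.mean()
    ms_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    resid = (scores - scores.mean(axis=1, keepdims=True)
             - scores.mean(axis=0) + grand)
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Simulate stable "true" scores observed twice with measurement error:
# true-score SD 10, error SD 5, so the expected ICC is 100/125 = 0.8.
rng = np.random.default_rng(1)
true = rng.normal(50, 10, size=300)
test = true + rng.normal(0, 5, size=300)
retest = true + rng.normal(0, 5, size=300)
print(round(icc_2_1(np.column_stack([test, retest])), 2))
```

The variance decomposition makes the design choice explicit: shrinking the error SD in the simulation pushes the ICC toward 1, mirroring the measurement-error model discussed in the reliability chapter.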

A Practical Approach to Quantitative Validation of Patient-Reported Outcomes is an essential reference that will benefit psychometricians, statisticians, biostatisticians, epidemiologists, health services and public health researchers, outcomes research scientists, regulators, and payers.


A series of practical books outlining the use of statistical techniques in a wide range of application areas.

Preface xi

About the Authors xv

1 Introduction 1

1.1 What Is a PRO Measure? 1

1.2 Development of a PRO Measure 4

1.2.1 Concept Identification 4

Literature and Instrument Review 5

Patient-Centered Input 6

1.2.2 Item Development 9

1.2.3 Cognitive Interviews 11

1.2.4 Additional Considerations 12

1.2.5 Documentation of Development Process with Conceptual Framework 13

1.3 Psychometric Validation 15

1.3.1 Psychometric Evaluation Data 16

1.3.2 Psychometric Properties 17

Distributional Characteristics 19

Measurement Model Structure 20

Reliability 22

Construct Validity 23

Ability to Detect Change 24

Interpretation 25

1.4 Learning Through Simulations 26

1.5 Summary 27

References 28

2 Validation Workflow 35

2.1 Clinical Trials as a Data Source for Validation 35

2.2 Validation Workflow for Single-Item Scales 39

2.3 Confirmatory Validation Workflow for Multi-item Multi-domain Scales 43

2.4 Validation Flow for a New Multi-item Multi-domain Scale 45

2.4.1 New Scale with Known Conceptual Framework 45

2.4.2 New Scale with Unknown Measurement Structure 47

2.5 Cross-Sectional Studies and Field Tests 48

2.6 Summary 49

References 49

3 An Assessment of Classical Test Theory and Item Response Theory 51

3.1 Overview of Classical Test Theory 52

3.1.1 Basics 52

3.1.2 Illustration 52

3.1.3 Another Look 53

3.2 Person-Item Maps 55

3.2.1 CTT Revisited 55

3.2.2 Note on IRT 56

3.2.3 Implementation of Person-Item Maps 58

3.2.4 CTT-Based Scoring vs. IRT-Based Scoring 69

3.3 Summary 78

References 80

4 Reliability 83

4.1 Reproducibility/Test-Retest 85

4.1.1 Measurement Error Model 85

4.1.2 Two Time Points 87

4.1.3 Random-Effects Model for ICC Estimation 90

4.1.4 Test-Retest Reliability Assessment in the Context of Clinical Studies 95

Pre-Treatment/Pre-Baseline Data 95

Post-Baseline Data 97

Time Period Between Observations 101

4.1.5 Spearman-Brown Prophecy Formula 104

4.1.6 Domain Score Test-Retest vs. Item Test-Retest 109

4.1.7 Observer-Based and Interviewer-Based Scales 111

4.1.8 Uncovering True Relationship Between Measurements 113

Accounting for Measurement Error 113

Measurement Error Model with Two Observations 122

4.2 Cronbach's Alpha 129

4.2.1 Likert-Type Scales 129

4.2.2 Dichotomous Items 139

4.3 Summary 148

References 148

5 Construct Validity and Criterion Validity 151

5.1 Exploratory Factor Analyses 153

5.1.1 Modeling Assumptions 153

5.1.2 Exploratory Factor Analysis Implementation 159

5.1.3 Evaluating the Number of Factors and Factor Loadings 165

Scree Plot 165

Correlated Latent Factors 168

Parallel Analysis with Reduced Correlation Matrix 171

Factor Loadings 175

5.2 Confirmatory Factor Analyses 179

5.2.1 Confirmatory Factor Analysis Model 179

5.2.2 Confirmatory Factor Analysis Model Implementation 183

5.2.3 Confirmatory Factor Analysis with Domains Represented by a Single Item 192

5.2.4 Second-Order Confirmatory Factor Analysis 204

Implementation of the Model with at Least Three First-Order Latent Domains 204

Implementation of the Model with Two First-Order Latent Domains 207

5.2.5 Formative vs. Reflective Model 213

5.2.6 Bifactor Model 219

5.2.7 Confirmatory Factor Analysis Using Polychoric Correlations 227

5.3 Convergent and Discriminant Validity 231

5.3.1 Convergent and Discriminant Validity Assessment 231

5.3.2 Convergent and Discriminant Validity Evaluation in a Clinical Study 232

5.4 Known-Groups Validity 237

5.5 Criterion Validity 242

5.6 Summary 247

References 248

6 Responsiveness and Sensitivity 251

6.1 Ability to Detect Change 252

6.1.1 Definitions and Concepts 252

6.1.2 Ability to Detect Change Analysis Implementation 255

6.1.3 Correlation Analysis to Support Ability to Detect Change 263

6.1.4 Deconstructing Correlation Between Changes 268

6.2 Sensitivity to Treatment 270

6.2.1 What Is the Sensitivity to Treatment? 270

6.2.2 Concurrent Estimation of the Treatment Effects for a Multi-Domain Scale 273

Assessment of the Treatment Effect for a Single Domain 273

Assessment of the Treatment Effects for a Multi-Domain Scale 279

6.3 Summary 292

References 293

7 Interpretation of Patient-Reported Outcome Findings 295

7.1 Meaningful Within-Patient Change 296

7.1.1 Definitions and Concepts 296

7.1.2 Anchor-Based Method to Assess Meaningful Within-Patient Change 298

7.1.3 Cumulative Distribution Functions to Supplement Anchor-Based Methods 310

7.2 Clinically Important Difference 315

7.2.1 Meaningful Within-Patient Change Versus Between-Group Difference 315

7.2.2 Anchor-Based Method to Assess Clinically Important Difference 316

7.3 Responder Analyses and Cumulative Distribution Functions 320

7.3.1 Treatment Effect Model 320

7.3.2 MWPC Application: A Responder Analysis 323

7.3.3 Using CDFs for Interpretation of Results 325

7.4 Summary 331

References 332

Index 335
ANDREW G. BUSHMAKIN earned his M.S. in applied mathematics and physics from the National Research Nuclear University (formerly the Moscow Engineering Physics Institute) in Moscow, Russia. He has more than 20 years of experience in mathematical modeling and data analysis and is a Director of Biostatistics in the Statistical Research and Data Science Center at Pfizer Inc. He has co-authored numerous articles and presentations on topics ranging from mathematical modeling of neutron physics processes to patient-reported outcomes, as well as several monographs.

JOSEPH C. CAPPELLERI earned his M.S. in statistics from the City University of New York, Ph.D. in psychometrics from Cornell University, and M.P.H. in epidemiology from Harvard University. He is an Executive Director of biostatistics in the Statistical Research and Data Science Center at Pfizer Inc. As an adjunct professor, he has served on the faculties of Brown University, University of Connecticut, and Tufts Medical Center. He has delivered numerous conference presentations and has published extensively on clinical and methodological topics. He is a fellow of the American Statistical Association and recipient of the ISPOR Avedis Donabedian Outcomes Research Lifetime Achievement Award.