Validity, Reliability, and Significance: Empirical Methods for NLP and Data Science
- Length: 165 pages
- Edition: 1
- Language: English
- Publisher: Morgan & Claypool
- Publication Date: 2021-12-03
- ISBN-10: 1636392733
- ISBN-13: 9781636392738
Empirical methods are means of answering methodological questions of the empirical sciences with statistical techniques. The methodological questions addressed in this book are the problems of validity, reliability, and significance. In the case of machine learning, these correspond to the questions of whether a model predicts what it purports to predict, whether a model’s performance is consistent across replications, and whether a performance difference between two models is due to chance, respectively. The goal of this book is to answer these questions with concrete statistical tests that can be applied to assess the validity, reliability, and significance of data annotation and machine learning prediction in the fields of NLP and data science.
Our focus is on model-based empirical methods, where data annotations and model predictions are treated as training data for interpretable probabilistic models from the well-understood families of generalized additive models (GAMs) and linear mixed effects models (LMEMs). Based on the interpretable parameters of the trained GAMs or LMEMs, the book presents model-based statistical tests, such as a validity test that detects circular features which circumvent learning. Furthermore, the book discusses a reliability coefficient based on variance decomposition over the random-effect parameters of LMEMs. Finally, a significance test based on the likelihood ratio of nested LMEMs trained on the performance scores of two machine learning models is shown to naturally allow the inclusion of variation in meta-parameter settings into hypothesis testing, and to further facilitate a refined system comparison conditional on properties of the input data.
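The significance test just described rests on the generalized likelihood ratio test for nested models. As a minimal illustration of that underlying idea — not the book's LMEM-based procedure, and with all scores and names invented for the example — the following sketch compares the performance scores of two hypothetical systems using nested Gaussian models: a null model with one shared mean versus an alternative with system-specific means. The resulting statistic is asymptotically chi-squared with one degree of freedom.

```python
import math

def lrt_nested_means(scores_a, scores_b):
    """Generalized likelihood ratio test for two score samples.

    Null model: both systems share one mean.  Alternative: each system
    has its own mean.  Gaussian errors with a shared (MLE) variance, so
    2 * (ll_full - ll_null) reduces to n * log(RSS_null / RSS_full).
    """
    pooled = scores_a + scores_b
    n = len(pooled)
    grand = sum(pooled) / n
    mean_a = sum(scores_a) / len(scores_a)
    mean_b = sum(scores_b) / len(scores_b)
    rss_null = sum((s - grand) ** 2 for s in pooled)
    rss_full = (sum((s - mean_a) ** 2 for s in scores_a)
                + sum((s - mean_b) ** 2 for s in scores_b))
    lrt = n * math.log(rss_null / rss_full)
    # Chi-squared(1) survival function via the complementary error function.
    p_value = math.erfc(math.sqrt(lrt / 2))
    return lrt, p_value

# Invented example: scores of two systems across ten test splits.
sys_a = [27.1, 26.8, 27.4, 27.0, 26.9, 27.3, 27.2, 26.7, 27.1, 27.0]
sys_b = [26.4, 26.2, 26.6, 26.3, 26.5, 26.1, 26.4, 26.3, 26.6, 26.2]
stat, p = lrt_nested_means(sys_a, sys_b)
print(f"LRT = {stat:.2f}, p = {p:.4g}")
```

The book's actual tests go further by fitting LMEMs whose random effects absorb variation across meta-parameter settings and input properties; this sketch only shows the nested-model likelihood ratio mechanics those tests build on.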
This book can be used as an introduction to empirical methods for machine learning in general, with a special focus on applications in NLP and data science. The book is self-contained, with an appendix on the mathematical background of GAMs and LMEMs, and an accompanying webpage with R code to replicate the experiments presented in the book.
Table of Contents:
- Preface
- Acknowledgments
- Introduction
  - Empirical Methods in Machine Learning
  - Scope and Outline of this Book
  - Intended Readership
- Validity
  - Validity Problems in NLP and Data Science
    - Bias Features
    - Illegitimate Features
    - Circular Features
  - Theories of Measurement and Validity
    - The Concept of Validity in Psychometrics
    - The Theory of Scales of Measurement
    - Theories of Measurement in Philosophy of Science
  - Prediction as Measurement
    - Feature Representations
    - Measurement Data
  - Descriptive and Model-Based Validity Tests
    - Dataset Bias Test
    - Transformation Invariance Test
    - A Model-Based Test for Circularity
  - Notes on Practical Usage
- Reliability
  - Untangling Terminology: Reliability, Agreement, and Others
  - Performance Evaluation as Measurement
  - Descriptive and Model-Based Reliability Tests
    - Agreement Coefficients for Data Annotation
    - Bootstrap Confidence Intervals for Model Evaluation
    - Model-Based Reliability Testing
  - Notes on Practical Usage
- Significance
  - Parametric Significance Tests
  - Sampling-Based Significance Tests
    - Bootstrap Resampling
    - Permutation Tests
  - Model-Based Significance Testing
    - The Generalized Likelihood Ratio Test
    - Likelihood Ratio Tests using LMEMs
  - Notes on Practical Usage
- Mathematical Background
  - Generalized Additive Models
    - General Form of Model
    - Example
    - Parameter Estimation
  - Linear Mixed Effects Models
    - General Form of Model
    - Example
    - Parameter Optimization
  - The Distribution of the Likelihood Ratio Statistic
    - Score Function and Fisher Information
    - Taylor Expansion and Asymptotic Distribution
- Bibliography
- Authors' Biographies