000 04269nam a22005415i 4500
001 978-3-031-02183-1
003 DE-He213
005 20240730163448.0
007 cr nn 008mamaa
008 220601s2022 sz | s |||| 0|eng d
020 _a9783031021831
_9978-3-031-02183-1
024 7 _a10.1007/978-3-031-02183-1
_2doi
050 4 _aQ334-342
050 4 _aTA347.A78
072 7 _aUYQ
_2bicssc
072 7 _aCOM004000
_2bisacsh
072 7 _aUYQ
_2thema
082 0 4 _a006.3
_223
100 1 _aRiezler, Stefan.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_978677
245 1 0 _aValidity, Reliability, and Significance
_h[electronic resource] :
_bEmpirical Methods for NLP and Data Science /
_cby Stefan Riezler, Michael Hagmann.
250 _a1st ed. 2022.
264 1 _aCham :
_bSpringer International Publishing :
_bImprint: Springer,
_c2022.
300 _aXVII, 147 p.
_bonline resource.
336 _atext
_btxt
_2rdacontent
337 _acomputer
_bc
_2rdamedia
338 _aonline resource
_bcr
_2rdacarrier
347 _atext file
_bPDF
_2rda
490 1 _aSynthesis Lectures on Human Language Technologies,
_x1947-4059
505 0 _aPreface -- Acknowledgments -- Introduction -- Validity -- Reliability -- Significance -- Bibliography -- Authors' Biographies.
520 _aEmpirical methods are a means of answering methodological questions of the empirical sciences with statistical techniques. The methodological questions addressed in this book include the problems of validity, reliability, and significance. In the case of machine learning, these correspond to the questions of whether a model predicts what it purports to predict, whether a model's performance is consistent across replications, and whether a performance difference between two models is due to chance, respectively. The goal of this book is to answer these questions with concrete statistical tests that can be applied to assess the validity, reliability, and significance of data annotation and machine learning prediction in the fields of NLP and data science. Our focus is on model-based empirical methods in which data annotations and model predictions are treated as training data for interpretable probabilistic models from the well-understood families of generalized additive models (GAMs) and linear mixed effects models (LMEMs). Based on the interpretable parameters of the trained GAMs or LMEMs, the book presents model-based statistical tests, such as a validity test that allows the detection of circular features that circumvent learning. Furthermore, the book discusses a reliability coefficient that uses variance decomposition based on the random effect parameters of LMEMs. Lastly, a significance test based on the likelihood ratio of nested LMEMs trained on the performance scores of two machine learning models is shown to naturally allow the inclusion of variations in meta-parameter settings into hypothesis testing, and to further facilitate a refined system comparison conditional on properties of the input data. This book can be used as an introduction to empirical methods for machine learning in general, with a special focus on applications in NLP and data science. The book is self-contained, with an appendix on the mathematical background of GAMs and LMEMs, and an accompanying webpage that includes R code to replicate the experiments presented in the book.
650 0 _aArtificial intelligence.
_93407
650 0 _aNatural language processing (Computer science).
_94741
650 0 _aComputational linguistics.
_96146
650 1 4 _aArtificial Intelligence.
_93407
650 2 4 _aNatural Language Processing (NLP).
_931587
650 2 4 _aComputational Linguistics.
_96146
700 1 _aHagmann, Michael.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_978678
710 2 _aSpringerLink (Online service)
_978679
773 0 _tSpringer Nature eBook
776 0 8 _iPrinted edition:
_z9783031001949
776 0 8 _iPrinted edition:
_z9783031010552
776 0 8 _iPrinted edition:
_z9783031033117
830 0 _aSynthesis Lectures on Human Language Technologies,
_x1947-4059
_978680
856 4 0 _uhttps://doi.org/10.1007/978-3-031-02183-1
912 _aZDB-2-SXSC
942 _cEBK
999 _c84633
_d84633