Inter-rater reliability in Learner Corpus Research: Insights from a collaborative study on adverb placement

Abstract
In Learner Corpus Research (LCR), a common source of error is the manual coding and annotation of linguistic features. To estimate the amount of error present in a coded dataset, researchers use coefficients of inter-rater reliability. However, despite the importance of reliability and internal consistency for validity and, by extension, for study quality, interpretability, and generalizability, it is surprisingly uncommon for studies in LCR to report such coefficients. In this Methods Report, we use a recent collaborative research project to illustrate the pertinence of considering inter-rater reliability. In doing so, we hope to initiate a methodological discussion on instrument design, piloting, and evaluation. We also suggest some ways forward to encourage more transparent reporting practices.
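For readers who wish to compute such a coefficient on their own coded data, here is a minimal sketch using Fleiss' kappa (one of the coefficients named in the keywords below), assuming Python with numpy and statsmodels; the three-rater adverb-placement codings are hypothetical toy data, not the project's actual annotations.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical toy data: rows = coded items, columns = raters.
# Each cell is the category code a rater assigned to the item
# (e.g., 0 = pre-verbal, 1 = post-verbal adverb placement).
ratings = np.array([
    [0, 0, 0],
    [0, 0, 1],
    [1, 1, 1],
    [0, 1, 1],
    [1, 1, 1],
    [0, 0, 0],
])

# Convert the rater-by-item codes into an items x categories count
# table, which is the input format fleiss_kappa expects.
table, _categories = aggregate_raters(ratings)

# Fleiss' kappa: chance-corrected agreement for two or more raters.
print(f"Fleiss' kappa: {fleiss_kappa(table, method='fleiss'):.3f}")
```

Unlike raw percentage agreement, kappa discounts the agreement that raters would reach by chance alone, which is why it is the more informative statistic to report for coding tasks of this kind.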
| Field | Value |
|---|---|
| Original language | English (US) |
| Pages (from-to) | 237-251 |
| Number of pages | 15 |
| Journal | International Journal of Learner Corpus Research |
| Volume | 6 |
| Issue number | 2 |
| DOIs | |
| State | Published - Dec 10 2020 |
Keywords
- Coding errors
- Fleiss’ kappa
- Inter-rater reliability
- Reporting practices
- Study quality
ASJC Scopus subject areas
- Language and Linguistics
- Education
- Linguistics and Language