Automated English proficiency scoring of unconstrained speech using prosodic features

Okim Kang, David O. Johnson

Research output: Contribution to journal › Conference article › peer-review


Abstract

This paper evaluates the performance of 17 machine-learning classifiers in automatically scoring the English proficiency of unconstrained speech. Each classifier was tested with different groups of features drawn from a master set of prosodic measures grounded in Brazil’s model [3]. The prosodic measures were calculated from the output of an ASR that recognizes phones instead of words, together with other software designed to detect the elements of Brazil’s prosody model. The best classifier achieved a correlation of 0.68 (p < 0.01) between the computer’s calculated proficiency ratings and those assigned by human raters. Using only prosodic features, this correlation is within the range achieved by other computer programs for automatically scoring the proficiency of unconstrained speech.
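The evaluation described in the abstract amounts to comparing several classifiers on prosodic feature vectors and correlating their predicted scores with human ratings. The sketch below illustrates that general workflow; it is not the authors' implementation, and the feature matrix, rating scale, and choice of classifiers are placeholder assumptions rather than details from the paper.

```python
# Minimal sketch: compare a few classifiers on (hypothetical) prosodic
# feature vectors and report the Pearson correlation between their
# cross-validated predictions and human proficiency ratings.
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))    # placeholder prosodic feature vectors
y = rng.integers(1, 5, size=200)  # placeholder human proficiency ratings

classifiers = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "svm_rbf": SVC(),
}

for name, clf in classifiers.items():
    # Cross-validated predictions keep training and test utterances separate.
    predicted = cross_val_predict(clf, X, y, cv=5)
    r, p = pearsonr(predicted, y)
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
```

With real prosodic features in place of the random placeholders, the reported correlation for each classifier would be directly comparable to the 0.68 figure quoted in the abstract.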

Original language: English (US)
Pages (from-to): 617-620
Number of pages: 4
Journal: Proceedings of the International Conference on Speech Prosody
Volume: 2018-June
State: Published - 2018
Event: 9th International Conference on Speech Prosody, SP 2018 - Poznan, Poland
Duration: Jun 13, 2018 to Jun 16, 2018

Keywords

  • Automatic speech recognition (ASR)
  • Brazil’s prosody model
  • Large vocabulary spontaneous speech recognition (LVCSR)
  • World Englishes

ASJC Scopus subject areas

  • Language and Linguistics
  • Linguistics and Language
