Mini-crowdsourcing end-user assessment of intelligent assistants: A cost-benefit study

Amber Shinsel, Todd Kulesza, Margaret Burnett, William Curran, Alex Groce, Simone Stumpf, Weng Keen Wong

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

9 Scopus citations

Abstract

Intelligent assistants sometimes handle tasks too important to be trusted implicitly. End users can establish trust via systematic assessment, but such assessment is costly. This paper investigates whether, when, and how bringing a small crowd of end users to bear on the assessment of an intelligent assistant is useful from a cost-benefit perspective. Our results show that a mini-crowd of testers supplied many more benefits than the obvious decrease in workload, but these benefits did not scale linearly as mini-crowd size increased; there was a point of diminishing returns beyond which the cost-benefit ratio became less attractive.

Original language: English (US)
Title of host publication: Proceedings - 2011 IEEE Symposium on Visual Languages and Human Centric Computing, VL/HCC 2011
Pages: 47-54
Number of pages: 8
DOIs
State: Published - 2011
Externally published: Yes
Event: 2011 IEEE Symposium on Visual Languages and Human Centric Computing, VL/HCC 2011 - Pittsburgh, PA, United States
Duration: Sep 18 2011 - Sep 22 2011

Publication series

Name: Proceedings - 2011 IEEE Symposium on Visual Languages and Human Centric Computing, VL/HCC 2011

Conference

Conference: 2011 IEEE Symposium on Visual Languages and Human Centric Computing, VL/HCC 2011
Country/Territory: United States
City: Pittsburgh, PA
Period: 9/18/11 - 9/22/11

Keywords

  • crowdsourcing
  • end-user programming
  • machine learning
  • testing

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Human-Computer Interaction
  • Software
