Abstract
How do you test a program when only a single user, with no expertise in software testing, is able to determine if the program is performing correctly? Such programs are common today in the form of machine-learned classifiers. We consider the problem of testing this common kind of machine-generated program when the only oracle is an end user: e.g., only you can determine whether your email is properly filed. We present test selection methods that provide very good failure detection even for small test suites, and show that these methods work in both large-scale random experiments using a 'gold standard' and in studies with real users. Our methods are inexpensive and largely algorithm-independent. Key to our methods is an exploitation of properties of classifiers that is not possible in traditional software testing. Our results suggest that it is plausible for time-pressured end users to interactively detect failures, even very hard-to-find failures, without wading through a large number of successful (and thus less useful) tests. We additionally show that some methods are able to find what are arguably the most difficult-to-detect faults of classifiers: cases where the machine learning algorithm has high confidence in an incorrect result.
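To make the idea of "exploiting properties of classifiers" concrete, the sketch below shows one plausible confidence-based test selection strategy: present the end user with the items the classifier is least confident about, rather than a random sample. This is an illustrative assumption, not the paper's exact algorithm; the function name `select_low_confidence_tests`, the use of a scikit-learn-style classifier, and the `budget` parameter are all hypothetical.

```python
# Illustrative sketch only: one way a test selection method can exploit
# classifier confidence, picking the items the classifier is least sure
# about for the end user to check. Assumes a scikit-learn-style classifier
# exposing predict_proba; the data and budget below are placeholders.
import numpy as np
from sklearn.naive_bayes import MultinomialNB

def select_low_confidence_tests(clf, X_unlabeled, budget):
    """Return indices of the `budget` items whose top-class probability
    is lowest, i.e., where the classifier is least confident."""
    proba = clf.predict_proba(X_unlabeled)   # shape: (n_items, n_classes)
    confidence = proba.max(axis=1)           # probability of the predicted class
    return np.argsort(confidence)[:budget]   # least-confident items first

# Hypothetical usage: train on the few labels the user has supplied, then
# ask the user to check only the selected messages instead of the whole corpus.
# clf = MultinomialNB().fit(X_labeled, y_labeled)
# to_check = select_low_confidence_tests(clf, X_unlabeled, budget=10)
```

A design note on this kind of strategy: it is inexpensive and largely algorithm-independent, since it only needs per-item confidence scores, but by construction it will not surface the hardest faults the abstract mentions, where the classifier is confidently wrong; other selection methods are needed for those.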
| Original language | English (US) |
| --- | --- |
| Article number | 6682887 |
| Pages (from-to) | 307-323 |
| Number of pages | 17 |
| Journal | IEEE Transactions on Software Engineering |
| Volume | 40 |
| Issue number | 3 |
| DOIs | |
| State | Published - Mar 2014 |
| Externally published | Yes |
Keywords
- Machine learning
- end-user testing
- test suite size
ASJC Scopus subject areas
- Software