Taming compiler fuzzers

Yang Chen, Alex Groce, Chaoqiang Zhang, Weng Keen Wong, Xiaoli Fern, Eric Eide, John Regehr

Research output: Contribution to journal › Article › peer-review

54 Scopus citations

Abstract

Aggressive random testing tools ("fuzzers") are impressively effective at finding compiler bugs. For example, a single test-case generator has resulted in more than 1,700 bugs reported for a single JavaScript engine. However, fuzzers can be frustrating to use: they indiscriminately and repeatedly find bugs that may not be severe enough to fix right away. Currently, users filter out undesirable test cases using ad hoc methods such as disallowing problematic features in tests and grepping test results. This paper formulates and addresses the fuzzer taming problem: given a potentially large number of random test cases that trigger failures, order them such that diverse, interesting test cases are highly ranked. Our evaluation shows our ability to solve the fuzzer taming problem for 3,799 test cases triggering 46 bugs in a C compiler and 2,603 test cases triggering 28 bugs in a JavaScript engine.
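The abstract poses fuzzer taming as a ranking problem: order failing test cases so that diverse ones appear first and near-duplicates sink. One natural way to realize such a ranking (a minimal sketch, not necessarily the exact method of the paper; the distance metric and seeding choice here are illustrative assumptions) is furthest-point-first ordering over an edit distance between test-case texts:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def fpf_rank(tests):
    """Furthest-point-first ordering: each pick maximizes its minimum
    distance to everything already ranked, so diverse test cases
    surface early and near-duplicates fall to the bottom."""
    if not tests:
        return []
    remaining = list(tests)
    ranked = [remaining.pop(0)]  # arbitrary seed (an assumption here)
    # Minimum distance from each remaining test to the ranked set.
    dists = [levenshtein(t, ranked[0]) for t in remaining]
    while remaining:
        i = max(range(len(remaining)), key=dists.__getitem__)
        ranked.append(remaining.pop(i))
        dists.pop(i)
        # Fold the new pick into each remaining test's minimum distance.
        dists = [min(old, levenshtein(t, ranked[-1]))
                 for old, t in zip(dists, remaining)]
    return ranked
```

With this ordering, a triager who inspects only the top of the list still sees one representative of each distinct failure pattern, since structurally similar test cases are deferred until their cluster's first member has already been shown.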

Original language: English (US)
Pages (from-to): 197-207
Number of pages: 11
Journal: ACM SIGPLAN Notices
Volume: 48
Issue number: 6
DOIs
State: Published - Jun 2013
Externally published: Yes

Keywords

  • Automated testing
  • Bug reporting
  • Compiler defect
  • Compiler testing
  • Fuzz testing
  • Random testing
  • Test-case reduction

ASJC Scopus subject areas

  • General Computer Science
