Using Relative Lines of Code to Guide Automated Test Generation for Python

Josie Holmes, Iftekhar Ahmed, Caius Brindescu, Rahul Gopinath, He Zhang, Alex Groce

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

Raw lines of code (LOC) is a metric that does not, at first glance, seem extremely useful for automated test generation. It is both highly language-dependent and not extremely meaningful, semantically, within a language: one coder can produce the same effect with many fewer lines than another. However, relative LOC, between components of the same project, turns out to be a highly useful metric for automated testing. In this article, we make use of a heuristic based on LOC counts for tested functions to dramatically improve the effectiveness of automated test generation. This approach is particularly valuable in languages where collecting code coverage data to guide testing has a very high overhead. We apply the heuristic to property-based Python testing using the TSTL (Template Scripting Testing Language) tool. In our experiments, the simple LOC heuristic can improve branch and statement coverage by large margins (often more than 20%, up to 40% or more) and improve fault detection by an even larger margin (usually more than 75% and up to 400% or more). The LOC heuristic is also easy to combine with other approaches and is comparable to, and possibly more effective than, two well-established approaches for guiding random testing.
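As a rough illustration of the heuristic the abstract describes, a random tester can bias its choice of which function to call next in proportion to each function's line count, so that larger functions (which tend to contain more branches and faults) are exercised more often. This is only a minimal sketch of the idea, not the TSTL implementation; the function names and LOC counts below are invented for the example.

```python
import random

# Hypothetical LOC counts for three functions under test (invented values).
loc_counts = {
    "parse_header": 120,
    "validate_input": 45,
    "format_output": 15,
}

def pick_function(loc, rng=random):
    """Select a function name with probability proportional to its LOC."""
    names = list(loc)
    weights = [loc[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

# Sample many selections to show the bias toward larger functions.
rng = random.Random(0)
counts = {name: 0 for name in loc_counts}
for _ in range(10_000):
    counts[pick_function(loc_counts, rng)] += 1
# counts now reflects roughly a 120:45:15 split across the three functions.
```

In a real generator, the weighted pick would replace a uniform choice over candidate actions; because LOC is a static metric, the weights are computed once up front, avoiding the runtime overhead of coverage-guided feedback.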

Original language: English (US)
Article number: 3408896
Journal: ACM Transactions on Software Engineering and Methodology
Volume: 29
Issue number: 4
State: Published - Oct 2020

Keywords

  • Automated test generation
  • static code metrics
  • testing heuristics

ASJC Scopus subject areas

  • Software
