What went wrong: Explaining counterexamples

Alex Groce, Willem Visser

Research output: Contribution to journal › Article › peer-review

134 Scopus citations

Abstract

One of the chief advantages of model checking is the production of counterexamples demonstrating that a system does not satisfy a specification. However, it may require a great deal of human effort to extract the essence of an error from even a detailed source-level trace of a failing run. We use an automated method for finding multiple versions of an error (and similar executions that do not produce an error), and analyze these executions to produce a more succinct description of the key elements of the error. The description produced includes identification of portions of the source code crucial to distinguishing failing and succeeding runs, differences in invariants between failing and nonfailing runs, and information on the necessary changes in scheduling and environmental actions needed to cause successful runs to fail.
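As a concrete illustration of one element of this description, the sketch below (a hypothetical simplification in Python, not the paper's implementation; all names are invented) compares coverage traces from failing and passing runs and flags source locations that appear in every failing execution but in no passing one. The paper's actual analysis operates on model-checker traces and also reports invariant and scheduling differences.

    # Hypothetical sketch: isolate source locations that distinguish
    # failing runs from passing runs. Not the authors' implementation.

    def distinguishing_lines(failing_traces, passing_traces):
        """Each trace is the set of (file, line) locations one run executed."""
        # Locations executed by every failing run.
        common_failing = set.intersection(*failing_traces)
        # Locations executed by at least one passing run.
        any_passing = set.union(*passing_traces) if passing_traces else set()
        # Candidates: always present when the error occurs, never otherwise.
        return common_failing - any_passing

    if __name__ == "__main__":
        failing = [
            {("buf.c", 10), ("buf.c", 12), ("buf.c", 20)},
            {("buf.c", 10), ("buf.c", 12), ("buf.c", 21)},
        ]
        passing = [{("buf.c", 10), ("buf.c", 11), ("buf.c", 20)}]
        for loc in sorted(distinguishing_lines(failing, passing)):
            print("only in failing runs:", loc)  # -> ('buf.c', 12)

On these toy traces the only distinguishing location is ("buf.c", 12), the kind of succinct pointer to which the paper's analysis aims to reduce many generated runs.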

Original language: English (US)
Pages (from-to): 121-135
Number of pages: 15
Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 2648
DOIs
State: Published - 2003
Externally published: Yes

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)
