TY - CONF
T1 - Lightweight automated testing with adaptation-based programming
AU - Groce, Alex
AU - Fern, Alan
AU - Pinto, Jervis
AU - Bauer, Tim
AU - Alipour, Amin
AU - Erwig, Martin
AU - Lopez, Camden
PY - 2012
Y1 - 2012
N2 - This paper considers the problem of testing a container class or other modestly-complex API-based software system. Past experimental evaluations have shown that for many such modules, random testing and shape abstraction based model checking are effective. These approaches have proven attractive due to a combination of minimal requirements for tool/language support, extremely high usability, and low overhead. These "lightweight" methods are therefore available for almost any programming language or environment, in contrast to model checkers and concolic testers. Unfortunately, for the cases where random testing and shape abstraction perform poorly, there have been few alternatives available with such wide applicability. This paper presents a generalizable approach based on reinforcement learning (RL), using adaptation-based programming (ABP) as an interface to make RL-based testing (almost) as easy to apply and adaptable to new languages and environments as random testing. We show how learned tests differ from random ones, and propose a model for why RL works in this unusual (by RL standards) setting, in the context of a detailed large-scale experimental evaluation of lightweight automated testing methods.
KW - Reinforcement learning
KW - Software testing
UR - http://www.scopus.com/inward/record.url?scp=84876394399&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84876394399&partnerID=8YFLogxK
U2 - 10.1109/ISSRE.2012.1
DO - 10.1109/ISSRE.2012.1
M3 - Conference contribution
AN - SCOPUS:84876394399
SN - 9780769548883
T3 - Proceedings - International Symposium on Software Reliability Engineering, ISSRE
SP - 161
EP - 170
BT - Proceedings - 2012 IEEE 23rd International Symposium on Software Reliability Engineering, ISSRE 2012
T2 - 2012 IEEE 23rd International Symposium on Software Reliability Engineering, ISSRE 2012
Y2 - 27 November 2012 through 30 November 2012
ER -