Can Machine Learning Support the Selection of Studies for Systematic Literature Review Updates?

Marcelo Costalonga, Bianca Minetto Napoleao, Maria Teresa Baldassarre, Katia Romero Felizardo, Igor Steinmacher, Marcos Kalinowski

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

[Background] Systematic literature reviews (SLRs) are essential for synthesizing evidence in Software Engineering (SE), but keeping them up-to-date requires substantial effort. Study selection, one of the most labor-intensive steps, involves reviewing numerous studies and requires multiple reviewers to minimize bias and avoid loss of evidence. [Objective] This study aims to evaluate whether Machine Learning (ML) text classification models can support reviewers in the study selection for SLR updates. [Method] We reproduced the study selection of an SLR update performed by three SE researchers. We trained two supervised ML models (Random Forest and Support Vector Machines) with different configurations using data from the original SLR. We calculated the study selection effectiveness of the ML models for the SLR update in terms of precision, recall, and F-measure. We also compared the performance of human-ML pairs with human-only pairs when selecting studies. [Results] The ML models achieved a modest F-score of 0.33, which is insufficient for reliable automation. However, we found that such models can reduce the study selection effort by 33.9% without loss of evidence (maintaining 100% recall). Our analysis also showed that the initial screening by pairs of human reviewers produces results that align much more closely with the final outcome of the SLR update. [Conclusion] Based on our results, we conclude that although ML models can help reduce the effort involved in SLR updates, achieving rigorous and reliable outcomes still requires the expertise of experienced human reviewers for the initial screening phase.
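The abstract does not specify the feature representation or library the authors used; as an illustration only, the sketch below shows one common way such a study-selection classifier could be set up, assuming scikit-learn with TF-IDF features over titles and abstracts. All data variables (train_texts, train_labels, update_texts, update_labels) are hypothetical placeholders, not artifacts from the paper.

# Minimal sketch (not the authors' pipeline): train Random Forest and SVM text
# classifiers on studies labeled in the original SLR, then score candidate
# studies for the SLR update and report precision, recall, and F-measure.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline
from sklearn.metrics import precision_recall_fscore_support

# Hypothetical data: texts are titles+abstracts, labels are 1 (included) / 0 (excluded).
train_texts = [
    "study on test case prioritization using machine learning",  # included in original SLR
    "survey of database indexing techniques",                    # excluded in original SLR
]
train_labels = [1, 0]
update_texts = ["deep learning approach for test case prioritization"]  # SLR update candidates
update_labels = [1]

models = {
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=42),
    "svm": LinearSVC(C=1.0),
}

for name, clf in models.items():
    pipeline = Pipeline([
        ("tfidf", TfidfVectorizer(stop_words="english", ngram_range=(1, 2))),
        ("clf", clf),
    ])
    pipeline.fit(train_texts, train_labels)            # learn from the original SLR decisions
    predictions = pipeline.predict(update_texts)       # classify the update candidates
    precision, recall, f1, _ = precision_recall_fscore_support(
        update_labels, predictions, average="binary", zero_division=0
    )
    print(f"{name}: precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")

A common way to obtain an effort reduction at full recall is to rank candidates by classifier confidence and have reviewers screen from the top of the ranked list; whether the paper uses this exact ranking strategy is not stated in the abstract.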

Original language: English (US)
Title of host publication: Proceedings - 2025 IEEE/ACM International Workshop on Methodological Issues with Empirical Studies in Software Engineering, WSESE 2025
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 56-63
Number of pages: 8
ISBN (Electronic): 9798331502256
DOIs
State: Published - 2025
Event: 2025 IEEE/ACM International Workshop on Methodological Issues with Empirical Studies in Software Engineering, WSESE 2025 - Ottawa, Canada
Duration: May 3 2025 → …

Publication series

Name: Proceedings - 2025 IEEE/ACM International Workshop on Methodological Issues with Empirical Studies in Software Engineering, WSESE 2025

Conference

Conference: 2025 IEEE/ACM International Workshop on Methodological Issues with Empirical Studies in Software Engineering, WSESE 2025
Country/Territory: Canada
City: Ottawa
Period: 5/3/25 → …

Keywords

  • Machine Learning
  • Selection of Studies
  • Systematic Literature Review Update
  • Systematic Review Automation

ASJC Scopus subject areas

  • Software
