TY - GEN
T1 - Can Machine Learning Support the Selection of Studies for Systematic Literature Review Updates?
AU - Costalonga, Marcelo
AU - Napoleão, Bianca Minetto
AU - Baldassarre, Maria Teresa
AU - Felizardo, Katia Romero
AU - Steinmacher, Igor
AU - Kalinowski, Marcos
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - [Background] Systematic literature reviews (SLRs) are essential for synthesizing evidence in Software Engineering (SE), but keeping them up-to-date requires substantial effort. Study selection, one of the most labor-intensive steps, involves reviewing numerous studies and requires multiple reviewers to minimize bias and avoid loss of evidence. [Objective] This study aims to evaluate whether Machine Learning (ML) text classification models can support reviewers in the study selection for SLR updates. [Method] We reproduced the study selection of an SLR update performed by three SE researchers. We trained two supervised ML models (Random Forest and Support Vector Machines) with different configurations using data from the original SLR. We calculated the study selection effectiveness of the ML models for the SLR update in terms of precision, recall, and F-measure. We also compared the performance of human-ML pairs with human-only pairs when selecting studies. [Results] The ML models achieved a modest F-measure of 0.33, which is insufficient for reliable automation. However, we found that such models can reduce the study selection effort by 33.9% without loss of evidence (maintaining 100% recall). Our analysis also showed that the initial screening by pairs of human reviewers produces results that are much better aligned with the final SLR update results than those of human-ML pairs. [Conclusion] Based on our results, we conclude that although ML models can help reduce the effort involved in SLR updates, achieving rigorous and reliable outcomes still requires the expertise of experienced human reviewers for the initial screening phase.
AB - [Background] Systematic literature reviews (SLRs) are essential for synthesizing evidence in Software Engineering (SE), but keeping them up-to-date requires substantial effort. Study selection, one of the most labor-intensive steps, involves reviewing numerous studies and requires multiple reviewers to minimize bias and avoid loss of evidence. [Objective] This study aims to evaluate whether Machine Learning (ML) text classification models can support reviewers in the study selection for SLR updates. [Method] We reproduced the study selection of an SLR update performed by three SE researchers. We trained two supervised ML models (Random Forest and Support Vector Machines) with different configurations using data from the original SLR. We calculated the study selection effectiveness of the ML models for the SLR update in terms of precision, recall, and F-measure. We also compared the performance of human-ML pairs with human-only pairs when selecting studies. [Results] The ML models achieved a modest F-measure of 0.33, which is insufficient for reliable automation. However, we found that such models can reduce the study selection effort by 33.9% without loss of evidence (maintaining 100% recall). Our analysis also showed that the initial screening by pairs of human reviewers produces results that are much better aligned with the final SLR update results than those of human-ML pairs. [Conclusion] Based on our results, we conclude that although ML models can help reduce the effort involved in SLR updates, achieving rigorous and reliable outcomes still requires the expertise of experienced human reviewers for the initial screening phase.
KW - Machine Learning
KW - Selection of Studies
KW - Systematic Literature Review Update
KW - Systematic Review Automation
UR - https://www.scopus.com/pages/publications/105012201672
UR - https://www.scopus.com/inward/citedby.url?scp=105012201672&partnerID=8YFLogxK
U2 - 10.1109/WSESE66602.2025.00016
DO - 10.1109/WSESE66602.2025.00016
M3 - Conference contribution
AN - SCOPUS:105012201672
T3 - Proceedings - 2025 IEEE/ACM International Workshop on Methodological Issues with Empirical Studies in Software Engineering, WSESE 2025
SP - 56
EP - 63
BT - Proceedings - 2025 IEEE/ACM International Workshop on Methodological Issues with Empirical Studies in Software Engineering, WSESE 2025
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2025 IEEE/ACM International Workshop on Methodological Issues with Empirical Studies in Software Engineering, WSESE 2025
Y2 - 3 May 2025
ER -