Improving IoT Security With Explainable AI: Quantitative Evaluation of Explainability for IoT Botnet Detection

Rajesh Kalakoti, Hayretdin Bahsi, Sven Nõmm

Research output: Contribution to journal › Article › peer-review

11 Scopus citations

Abstract

Detecting botnets is an essential task for ensuring the security of Internet of Things (IoT) systems. Machine learning (ML)-based approaches have been widely used for this purpose, but the lack of interpretability and transparency of the models often limits their effectiveness. In this research paper, we aim to improve the transparency and interpretability of high-performance ML models for IoT botnet detection by selecting higher-quality explanations using explainable artificial intelligence (XAI) techniques. We used three data sets to induce binary and multiclass classification models for IoT botnet detection, with sequential backward selection (SBS) employed as the feature selection technique. We then used two post hoc XAI techniques, local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP), to explain the behavior of the models. To evaluate the quality of the explanations generated by the XAI methods, we employed faithfulness, monotonicity, complexity, and sensitivity metrics. The ML models employed in this work achieve very high detection rates with a limited number of features. Our findings demonstrate the effectiveness of XAI methods in improving the interpretability and transparency of ML-based IoT botnet detection models. Specifically, explanations generated by applying LIME and SHAP to the extreme gradient boosting model yield high faithfulness, high consistency, low complexity, and low sensitivity. Furthermore, SHAP outperforms LIME, achieving better results on these metrics.
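
The abstract describes a concrete pipeline: train a gradient boosting classifier, explain its predictions post hoc with SHAP or LIME, and score the explanations with quantitative metrics such as faithfulness. As a rough illustration only, not the authors' code, the Python sketch below fits an XGBoost classifier on synthetic stand-in data, computes SHAP attributions for one prediction, and evaluates them with a simple faithfulness proxy. The synthetic dataset, model settings, zero-value ablation baseline, and the exact metric formulation are all assumptions; the paper's own definitions may differ.

import numpy as np
import shap
import xgboost
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an IoT traffic dataset (benign vs. botnet flows).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Extreme gradient boosting model, as named in the abstract.
model = xgboost.XGBClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Post hoc explanation: SHAP attributions for a single test instance.
explainer = shap.TreeExplainer(model)
x = X_test[:1]
phi = np.ravel(explainer.shap_values(x))  # one attribution per feature

def faithfulness(model, x, phi, baseline=0.0):
    """Correlation between each feature's attribution and the drop in the
    predicted positive-class probability when that feature is replaced by
    a baseline value. This is one common formulation of faithfulness; the
    paper's exact metric definition may differ."""
    p_full = model.predict_proba(x)[0, 1]
    drops = []
    for j in range(x.shape[1]):
        x_pert = x.copy()
        x_pert[0, j] = baseline  # ablate feature j
        drops.append(p_full - model.predict_proba(x_pert)[0, 1])
    return np.corrcoef(phi, drops)[0, 1]

print("faithfulness:", faithfulness(model, x, phi))

A score near 1 means features the explainer ranks as important are also the ones whose removal most reduces the predicted botnet probability, which is the intuition behind the faithfulness metric the abstract refers to.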

Original language: English (US)
Pages (from-to): 18237-18254
Number of pages: 18
Journal: IEEE Internet of Things Journal
Volume: 11
Issue number: 10
DOIs
State: Published - May 15, 2024
Externally published: Yes

Keywords

  • Botnet
  • complexity
  • consistency
  • explainable artificial intelligence (XAI)
  • faithfulness
  • feature importance
  • Internet of Things (IoT)
  • local interpretable model-agnostic explanations (LIME)
  • post hoc XAI
  • robustness
  • Shapley additive explanation (SHAP)

ASJC Scopus subject areas

  • Signal Processing
  • Information Systems
  • Hardware and Architecture
  • Computer Science Applications
  • Computer Networks and Communications
