Federated Learning of Explainable AI (FedXAI) for deep learning-based intrusion detection in IoT networks

Rajesh Kalakoti, Sven Nõmm, Hayretdin Bahsi

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

The rapid growth of Internet of Things (IoT) devices has increased their vulnerability to botnet attacks, posing serious network security challenges. While deep learning models within federated learning (FL) can detect such threats while preserving privacy, their black-box nature limits interpretability, which is crucial for trust in security systems. Integrating explainable AI (XAI) into FL is significantly challenging, as many XAI methods require access to client data to interpret the behaviour of the global model on the server side. In this study, we propose a Federated Learning of Explainable AI (FedXAI) framework for binary and multiclass classification (botnet type and attack type) to perform intrusion detection in IoT devices. We incorporate one of the most widely known XAI methods, SHAP (SHapley Additive exPlanations), into the detection framework. Specifically, we propose a privacy-preserving method in which the server securely aggregates SHAP value-based explanations from local models on the client side to approximate explanations for the global model on the server, without accessing any client data. Our evaluation demonstrates that the securely aggregated client-side explanations closely approximate the global model explanations generated when the server has access to client data. Our FL framework utilises a long short-term memory (LSTM) network in a horizontal FL setup with the FedAvg (federated averaging) aggregation algorithm, achieving high detection performance in all binary and multiclass botnet classification tasks. Additionally, we evaluated post-hoc explanations for local models on the client side using LIME (Local Interpretable Model-Agnostic Explanations), Integrated Gradients (IG), and SHAP, with SHAP performing best on metrics such as Faithfulness, Complexity, Monotonicity, and Robustness.
This study demonstrates that it is possible to achieve a high-performing FL model that addresses both explainability and privacy in the same framework for intrusion detection in IoT networks.
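The abstract does not detail the secure aggregation protocol, but the core idea it describes, combining per-client SHAP summaries into a global explanation without sharing client data, can be sketched as a FedAvg-style weighted average. The sketch below is illustrative only: the function name, client SHAP vectors, and dataset sizes are all hypothetical, and a real deployment would wrap this averaging in a secure aggregation scheme so the server never sees individual client contributions.

```python
import numpy as np

def fedavg_aggregate(client_arrays, client_sizes):
    """FedAvg-style aggregation: average per-client arrays,
    weighted by each client's local sample count."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()                   # normalise to sum to 1
    stacked = np.stack(client_arrays)          # shape: (n_clients, n_features)
    return np.tensordot(weights, stacked, axes=1)

# Hypothetical mean |SHAP| vectors from three clients (per-feature importance)
client_shap = [np.array([0.30, 0.10, 0.05]),
               np.array([0.25, 0.15, 0.02]),
               np.array([0.40, 0.05, 0.08])]
sizes = [1000, 500, 1500]                      # illustrative local dataset sizes

# Approximate global feature-importance explanation
global_importance = fedavg_aggregate(client_shap, sizes)
```

The same weighting that FedAvg applies to model parameters is applied here to explanation vectors, so clients with more data contribute proportionally more to the approximated global explanation.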

Original language: English (US)
Article number: 111479
Journal: Computer Networks
Volume: 270
DOIs
State: Published - Oct 2025

Keywords

  • Deep learning
  • Explainable AI
  • Federated Learning
  • Intrusion detection
  • IoT botnet
  • Privacy-preserving

ASJC Scopus subject areas

  • Computer Networks and Communications
