TY - GEN
T1 - Explainable Transformer-based Intrusion Detection in Internet of Medical Things (IoMT) Networks
AU - Kalakoti, Rajesh
AU - Nomm, Sven
AU - Bahsi, Hayretdin
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Internet of Medical Things (IoMT) systems have brought transformative benefits to patient monitoring and remote diagnosis in healthcare. However, these systems are prone to various cyber attacks that have a high impact on security and privacy. Detecting such attacks is crucial for implementing timely and effective countermeasures. Machine learning methods have been applied to intrusion detection tasks in various networks, but explaining the reasons behind detection decisions remains an obstacle for security analysts. In this paper, we demonstrate that the Transformer architecture, the core of recent revolutionary large language models, constitutes a promising solution for intrusion detection in IoMT networks. We utilized CICIoMT2024, a comprehensive dataset recently released specifically for these networks. We created a binary classification model for discriminating attacks from benign traffic and a multi-class model for the identification of specific attack types. We applied Explainable AI (XAI) methods such as LIME and SHAP to generate post-hoc explanations for the model decisions. We evaluated and compared the quality of the explanations based on three metrics: faithfulness, sensitivity, and complexity. Our findings demonstrate that the applied XAI methods enhance transparency in the predictions of Transformer-based intrusion detection models for IoMT networks, proving that both transparency and high performance can be achieved simultaneously.
AB - Internet of Medical Things (IoMT) systems have brought transformative benefits to patient monitoring and remote diagnosis in healthcare. However, these systems are prone to various cyber attacks that have a high impact on security and privacy. Detecting such attacks is crucial for implementing timely and effective countermeasures. Machine learning methods have been applied to intrusion detection tasks in various networks, but explaining the reasons behind detection decisions remains an obstacle for security analysts. In this paper, we demonstrate that the Transformer architecture, the core of recent revolutionary large language models, constitutes a promising solution for intrusion detection in IoMT networks. We utilized CICIoMT2024, a comprehensive dataset recently released specifically for these networks. We created a binary classification model for discriminating attacks from benign traffic and a multi-class model for the identification of specific attack types. We applied Explainable AI (XAI) methods such as LIME and SHAP to generate post-hoc explanations for the model decisions. We evaluated and compared the quality of the explanations based on three metrics: faithfulness, sensitivity, and complexity. Our findings demonstrate that the applied XAI methods enhance transparency in the predictions of Transformer-based intrusion detection models for IoMT networks, proving that both transparency and high performance can be achieved simultaneously.
KW - Evaluation of Explainable AI
KW - Intrusion detection
KW - Health Care
KW - IoMT
KW - IoT
KW - Transformer
UR - http://www.scopus.com/inward/record.url?scp=105000985973&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=105000985973&partnerID=8YFLogxK
U2 - 10.1109/ICMLA61862.2024.00179
DO - 10.1109/ICMLA61862.2024.00179
M3 - Conference contribution
AN - SCOPUS:105000985973
T3 - Proceedings - 2024 International Conference on Machine Learning and Applications, ICMLA 2024
SP - 1164
EP - 1169
BT - Proceedings - 2024 International Conference on Machine Learning and Applications, ICMLA 2024
A2 - Wani, M. Arif
A2 - Angelov, Plamen
A2 - Luo, Feng
A2 - Ogihara, Mitsunori
A2 - Wu, Xintao
A2 - Precup, Radu-Emil
A2 - Ramezani, Ramin
A2 - Gu, Xiaowei
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 23rd IEEE International Conference on Machine Learning and Applications, ICMLA 2024
Y2 - 18 December 2024 through 20 December 2024
ER -