TY - GEN
T1 - Improving Transparency and Explainability of Deep Learning Based IoT Botnet Detection Using Explainable Artificial Intelligence (XAI)
AU - Kalakoti, Rajesh
AU - Nomm, Sven
AU - Bahsi, Hayretdin
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Ensuring the utmost security of IoT systems is imperative, and robust botnet detection plays a pivotal role in achieving this goal. Deep learning-based approaches have been widely employed for botnet detection. However, the lack of interpretability and transparency in these models can limit their effectiveness. In this research, we present a Deep Neural Network (DNN) model specifically designed for the detection of IoT botnet attack types. Our model performs exceptionally, achieving 99% accuracy, F1 score, recall, and precision. To gain deeper insights into our DNN model's behaviour, we employ seven different post hoc explanation techniques to provide local explanations. We evaluate the quality of Explainable AI (XAI) methods using metrics such as faithfulness, monotonicity, complexity, and sensitivity. Our findings highlight the effectiveness of XAI techniques in enhancing the interpretability and transparency of the DNN model for IoT botnet detection. Specifically, our results indicate that DeepLIFT yields high faithfulness, high consistency, low complexity, and low sensitivity among all the explainers.
AB - Ensuring the utmost security of IoT systems is imperative, and robust botnet detection plays a pivotal role in achieving this goal. Deep learning-based approaches have been widely employed for botnet detection. However, the lack of interpretability and transparency in these models can limit their effectiveness. In this research, we present a Deep Neural Network (DNN) model specifically designed for the detection of IoT botnet attack types. Our model performs exceptionally, achieving 99% accuracy, F1 score, recall, and precision. To gain deeper insights into our DNN model's behaviour, we employ seven different post hoc explanation techniques to provide local explanations. We evaluate the quality of Explainable AI (XAI) methods using metrics such as faithfulness, monotonicity, complexity, and sensitivity. Our findings highlight the effectiveness of XAI techniques in enhancing the interpretability and transparency of the DNN model for IoT botnet detection. Specifically, our results indicate that DeepLIFT yields high faithfulness, high consistency, low complexity, and low sensitivity among all the explainers.
KW - Deep learning
KW - explainable artificial intelligence
KW - IoT Botnet
KW - Post-hoc explanation
KW - XAI
UR - http://www.scopus.com/inward/record.url?scp=85190108888&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85190108888&partnerID=8YFLogxK
U2 - 10.1109/ICMLA58977.2023.00088
DO - 10.1109/ICMLA58977.2023.00088
M3 - Conference contribution
AN - SCOPUS:85190108888
T3 - Proceedings - 22nd IEEE International Conference on Machine Learning and Applications, ICMLA 2023
SP - 595
EP - 601
BT - Proceedings - 22nd IEEE International Conference on Machine Learning and Applications, ICMLA 2023
A2 - Arif Wani, M.
A2 - Boicu, Mihai
A2 - Sayed-Mouchaweh, Moamar
A2 - Abreu, Pedro Henriques
A2 - Gama, Joao
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 22nd IEEE International Conference on Machine Learning and Applications, ICMLA 2023
Y2 - 15 December 2023 through 17 December 2023
ER -