Capability detection and evaluation metrics for cyber security lab exercises

Emin Caliskan, Unal Tatar, Hayretdin Bahsi, Rain Ottis, Risto Vaarandi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

8 Scopus citations

Abstract

This research aims to identify metrics that can be used to evaluate the success of cyber security students, based on the logs, IDS alarms, and system events triggered during a practical lab examination in a cyber range environment. This is achieved by analyzing students of a cyber security master's degree class, focusing especially on their lab performance by leveraging educational data mining techniques. After the related logs were collected from monitoring systems, the sanitized and cleaned data were analyzed with supervised machine learning algorithms. The results reveal interesting relationships and common patterns among students at different success levels. Logs and events collected from monitoring systems provide novel findings: metrics such as the number of IDS alerts, network sessions, or top destination IP addresses emerge as indicators of success or failure on the final grade. The research makes several important contributions. First, it aims to determine the most significant evaluation metrics for capability detection of students; identifying those metrics would make them much easier to work with in future studies. The growing number of cyber security exercises, as well as practical examinations in academia, can benefit from these indicators in order to establish fair, automatically generated, and verifiable results. Second, applying machine learning algorithms to the domain of cyber security education offers a more efficient way of evaluating students; especially for cyber security exercises with many participants, this technique can significantly reduce the manual workload of exercise organizers. Third, this research discovers common patterns that students at different skill levels share, such as the total number of IDS alerts they generate during a practical exercise. This information can be used in several beneficial ways, including cheating prevention and auto-grading systems. The implementation of such a standardized scoring engine would be beneficial for evaluating students across different institutions and academic years in a fair way.
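To illustrate the kind of approach the abstract describes, the sketch below trains a minimal nearest-centroid classifier on log-derived feature vectors (IDS alert count, network session count, distinct destination IP count) and predicts a pass/fail outcome. This is an assumption-laden illustration, not the paper's actual model: all feature values and labels here are invented, and the paper does not specify which supervised algorithm it used.

```python
# Hypothetical sketch of grading from log-derived features.
# Feature vectors: (IDS alerts, network sessions, distinct dest IPs).
# All numbers below are invented for illustration only.
from math import dist  # Euclidean distance (Python 3.8+)

# Invented training data: (features, label) per student
training = [
    ((120, 300, 15), "pass"),
    ((110, 280, 14), "pass"),
    ((400, 900, 60), "fail"),
    ((380, 850, 55), "fail"),
]

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

# One centroid per class: a minimal nearest-centroid classifier.
centroids = {
    label: centroid([f for f, lbl in training if lbl == label])
    for label in {lbl for _, lbl in training}
}

def predict(features):
    """Assign the class whose centroid is closest in feature space."""
    return min(centroids, key=lambda lbl: dist(features, centroids[lbl]))

print(predict((130, 310, 16)))  # near the "pass" centroid -> prints "pass"
```

In a real scoring engine the hand-rolled classifier would be replaced by a standard supervised learner, but the pipeline shape (log collection, feature extraction, fit, predict) stays the same.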

Original language: English (US)
Title of host publication: Proceedings of the 12th International Conference on Cyber Warfare and Security, ICCWS 2017
Editors: Juan R. Lopez, Adam R. Bryant, Robert F. Mills
Publisher: Academic Conferences and Publishing International Limited
Pages: 407-414
Number of pages: 8
ISBN (Electronic): 9781911218258
State: Published - 2017
Externally published: Yes
Event: 12th International Conference on Cyber Warfare and Security, ICCWS 2017 - Dayton, United States
Duration: Mar 2 2017 – Mar 3 2017

Publication series

Name: Proceedings of the 12th International Conference on Cyber Warfare and Security, ICCWS 2017

Conference

Conference: 12th International Conference on Cyber Warfare and Security, ICCWS 2017
Country/Territory: United States
City: Dayton
Period: 3/2/17 – 3/3/17

Keywords

  • Capability metrics
  • Cyber exercise
  • Cyber security
  • Educational data mining
  • Student evaluation

ASJC Scopus subject areas

  • Safety, Risk, Reliability and Quality
  • Computer Science Applications
  • Computer Networks and Communications
