Benchmarking framework for IDS classifiers in terms of security and performance based on multicriteria analysis

Bibliographic Details
Main Author: Alamleh, Amneh Hussein Mohd
Format: Thesis
Language: eng
Published: 2022
Subjects:
Online Access:https://ir.upsi.edu.my/detailsg.php?det=9163
Description
Summary: This research aims to assist developers of intrusion detection systems (IDS) in making the right selection decision for a suitable classification model. Many classification algorithms have been developed for use in an IDS detection engine, and IDS developers face challenges in how to evaluate and benchmark these classifiers. Differing perspectives and multiple conflicting evaluation criteria of varying importance make the evaluation, benchmarking and selection of suitable IDS classifiers difficult. Current evaluation studies assess the IDS classifier from a single, incomplete perspective: in each study, the evaluation is carried out against some security-related criteria while performance criteria are ignored. Furthermore, the weighting process that reflects the importance of each criterion has depended on a personal, subjective perspective.

The goal of this thesis is to establish a new standardisation and benchmarking framework based on a set of standardised criteria and a set of unified multi-criteria decision-making (MCDM) methods that overcome these shortcomings. The study establishes and standardises IDS classifier evaluation criteria and constructs a decision matrix (DM) from the crossover of the standardised criteria and 12 classifiers. The DM was evaluated using a dataset of 125,973 records, each consisting of 41 features. The classifiers were then evaluated and ranked using unified MCDM techniques.

The proposed framework consists of three main parts: the first standardises the evaluation criteria, the second constructs the DM, and the third develops unified MCDM weighting and ranking methods for IDS classifier evaluation and benchmarking. The fuzzy Delphi method (FDM) was used for criteria standardisation. Integrated weighting methods combining direct rating with the entropy objective method were developed to calculate the criteria weights. The Vlse Kriterijumska Optimizacija Kompromisno Resenje (VIKOR) and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) ranking methods were integrated into a unified method for ranking the selected classifiers, and the Borda voting method was used to unify the different ranks into a group ranking. An objective validation process was used to validate the ranking results, and the mean standard deviation was computed to ensure that the classifiers underwent systematic ranking.

The following results were confirmed: (1) FDM is a suitable way to reach a standard set of evaluation criteria. (2) An integrated (subjective and objective) weighting method can find suitable criteria weights. (3) A unified ranking method that integrates VIKOR and TOPSIS effectively solves the classifier selection problem. (4) The objective validation shows significant differences between the group scores, indicating that the ranking results of the proposed framework are valid. (5) The evaluation of the proposed framework shows an advantage over the benchmarked works with a percentage of 100%. The implications of this study benefit IDS developers in making the right decision when selecting the best classification model, and researchers can use the proposed framework for evaluation and selection in similar evaluation problems.
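
The abstract names several MCDM components (entropy-based objective weighting, TOPSIS, VIKOR and Borda aggregation). The Python sketch below illustrates, in generic textbook form, how such components can be chained on a decision matrix; it is not the thesis's implementation. The 4x3 decision matrix, the classifier count and the criterion mix (two benefit criteria, one cost criterion) are hypothetical, and the fuzzy Delphi and subjective direct-rating steps described in the abstract are omitted for brevity.

```
# Minimal sketch: entropy weighting + TOPSIS + VIKOR + Borda aggregation.
# All numbers below are illustrative placeholders, not results from the thesis.
import numpy as np

def entropy_weights(X):
    """Objective criterion weights via Shannon entropy (more dispersion -> more weight)."""
    P = X / X.sum(axis=0)                          # column-wise proportions
    m = X.shape[0]
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)   # entropy per criterion
    d = 1.0 - E                                    # degree of diversification
    return d / d.sum()

def topsis(X, w, benefit):
    """Closeness to the ideal solution; larger is better."""
    R = X / np.sqrt((X ** 2).sum(axis=0))          # vector normalisation
    V = R * w
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - worst) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)

def vikor(X, w, benefit, v=0.5):
    """Compromise index Q; smaller is better."""
    best = np.where(benefit, X.max(axis=0), X.min(axis=0))
    worst = np.where(benefit, X.min(axis=0), X.max(axis=0))
    norm = (best - X) / (best - worst)             # per-criterion regret in [0, 1]
    S = (w * norm).sum(axis=1)                     # group utility
    R = (w * norm).max(axis=1)                     # individual regret
    return v * (S - S.min()) / (S.max() - S.min()) + \
           (1 - v) * (R - R.min()) / (R.max() - R.min())

def borda(*rankings):
    """Aggregate rank vectors (0 = best) into Borda points; larger is better."""
    n = len(rankings[0])
    points = np.zeros(n)
    for r in rankings:
        points += n - 1 - r
    return points

# Hypothetical decision matrix: 4 classifiers x 3 criteria
# (accuracy, detection rate = benefit; training time in seconds = cost).
X = np.array([[0.95, 0.92, 30.0],
              [0.93, 0.95, 12.0],
              [0.90, 0.89,  5.0],
              [0.97, 0.91, 60.0]])
benefit = np.array([True, True, False])

w = entropy_weights(X)
topsis_rank = np.argsort(np.argsort(-topsis(X, w, benefit)))   # 0 = best
vikor_rank = np.argsort(np.argsort(vikor(X, w, benefit)))      # 0 = best
print("Borda points per classifier:", borda(topsis_rank, vikor_rank))
```

In this sketch the two rank vectors are simply summed into Borda points; the thesis additionally integrates the entropy weights with subjective direct-rating weights before ranking, a step not shown here.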