Cybersecurity is a domain where the data distribution is constantly changing, as attackers explore ever-newer patterns to attack cyber infrastructure. Intrusion detection systems are one of the most important layers of cyber safety in today's world. Machine learning based network intrusion detection systems have shown effective results in recent years, and deep learning models have improved detection rates further. But the more accurate the model, the greater its complexity and the lower its interpretability. Deep neural networks are complex and hard to interpret, which makes them difficult to use in production because the reasons behind their decisions are unknown.
In our recently published paper, authored by Shraddha Mane and Dattaraj Rao, we use a deep neural network for network intrusion detection and propose an explainable AI framework that adds transparency at every stage of the machine learning pipeline. We do this by leveraging explainable AI algorithms, which make ML models less of a black box by providing explanations for why a prediction was made. These explanations give us measurable factors: which features influence the prediction of a cyberattack, and to what degree. They are generated with SHAP, LIME, the Contrastive Explanations Method (CEM), ProtoDash, and Boolean Decision Rules via Column Generation (BRCG). We apply these approaches to the NSL-KDD intrusion detection dataset and demonstrate the results.
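To make the idea concrete, here is a minimal sketch of the local-explanation step. It trains a small classifier on synthetic records that stand in for preprocessed NSL-KDD features (the paper uses a deep neural network and the full dataset; the five feature names and the data below are illustrative assumptions) and uses SHAP's model-agnostic KernelExplainer to attribute a prediction to individual features.

```python
# Sketch only: synthetic stand-in for NSL-KDD records, not the paper's setup.
import numpy as np
import shap
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
feature_names = ["duration", "src_bytes", "dst_bytes", "count", "serror_rate"]
X = rng.random((500, len(feature_names)))      # stand-in traffic records
y = (X[:, 1] + X[:, 4] > 1.0).astype(int)      # 1 = attack, 0 = normal

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                      random_state=0).fit(X, y)

# KernelExplainer is model-agnostic: it only needs a prediction function
# and a background sample to estimate per-feature Shapley values.
predict_attack = lambda records: model.predict_proba(records)[:, 1]
explainer = shap.KernelExplainer(predict_attack, shap.sample(X, 50))
shap_values = explainer.shap_values(X[:5])     # local explanations, shape (5, 5)

# Per-feature contribution to the "attack" score for the first record.
for name, value in zip(feature_names, shap_values[0]):
    print(f"{name:12s} {value:+.3f}")
```

A positive value pushes the record toward the attack class and a negative value away from it; the magnitudes are the "measurable factors" referred to above.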
We present a framework in which local and global explanations surface the features that matter in decision making, provide instance-wise explanations, and show the relationship between features and the system's outcome. By looking at the explanations, data scientists can understand which patterns the model has learned; if the learned patterns are wrong, they can choose a different set of features or change the dataset so that the model learns appropriately.
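Continuing the sketch above, one common way to obtain such a global view is to average the magnitude of the local SHAP values over many records and rank features by overall influence (a standard heuristic, offered here as an illustration rather than the framework's only global method):

```python
# Global importance = mean |SHAP value| per feature across sampled records.
sample = shap.sample(X, 100)
global_importance = np.abs(explainer.shap_values(sample)).mean(axis=0)
for name, score in sorted(zip(feature_names, global_importance),
                          key=lambda pair: -pair[1]):
    print(f"{name:12s} {score:.3f}")
```

A data scientist scanning this ranking can spot, for example, a supposedly irrelevant feature dominating the model, and go back to feature selection or to the dataset itself.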
The network analyst makes the final decision after considering the model's prediction. Along with the prediction, we provide explanations that show similar instances for which the model made the same prediction as for the test instance, so the analyst can see what they have in common before making the final call. Explanations provided to end users help them understand which features contribute to the decision and with what weight, so they can alter feature values to change the model's decision. The framework thus provides explanations at every stage of the machine learning pipeline, each suited to a different user of the network intrusion detection system.
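The paper uses ProtoDash (from the AIX360 toolkit) to select such representative instances; the sketch below substitutes a plain nearest-neighbour lookup, continuing the earlier example, purely to illustrate the analyst-facing idea of retrieving similar records that received the same prediction:

```python
# Simplified stand-in for the prototype step (ProtoDash in the paper).
from sklearn.neighbors import NearestNeighbors

test_record = rng.random((1, len(feature_names)))   # hypothetical new record
pred = model.predict(test_record)[0]

# Candidates: training records the model assigns the same label.
same_class = X[model.predict(X) == pred]

# Fetch the closest such records for the analyst to compare against.
nn = NearestNeighbors(n_neighbors=3).fit(same_class)
_, idx = nn.kneighbors(test_record)
print("prediction:", "attack" if pred == 1 else "normal")
print("similar records with the same prediction:")
print(same_class[idx[0]])
```

Shown next to the flagged record, these neighbours let the analyst judge whether the similarity is meaningful before acting on the alert.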
Persistent has a detailed Artificial Intelligence and Machine Learning offering, available HERE, and an Explainable AI offering to help make sense of black-box ML systems, available HERE.
Download our detailed research paper on explainable network intrusion detection systems below