A Theoretical Framework for Adversarial Robustness in Real-Time ML-Based Intrusion Detection Systems (IDS)

Authors

Ogechukwu Scholastica Onyenaucheya

Computer Information Systems, Prairie View A&M University, 100 University Drive, Prairie View, Texas 77446, United States of America

Article Information

DOI: 10.51584/IJRIAS.2026.110100143

Subject Category: Cyber Security

Volume/Issue: 11/1 | Page No: 1672-1690

Publication Timeline

Submitted: 2026-02-08

Accepted: 2026-02-11

Published: 2026-02-20

Abstract

Machine learning-based intrusion detection systems (IDS) are now widely deployed in real-time cybersecurity to defend against rapidly evolving threats. However, advances in adversarial machine learning have shown that many IDS models remain highly vulnerable to adaptive attacks, especially under real-time conditions. Most existing research evaluates adversarial robustness in offline or static settings, overlooking the dynamic nature of live network traffic, continuous data streams, and strict latency requirements. This gap limits the effectiveness of current adversarial defense strategies in operational intrusion detection systems.
This paper addresses that gap by proposing a theoretical framework for adversarial robustness in real-time machine learning-based intrusion detection systems. The framework treats adversarial robustness as a time-varying property, shaped by detection latency, attacker strategy, concept drift, and the ongoing interaction between models and adversaries. We introduce formal concepts such as time-to-evasion, detection stability, and robustness decay to describe how an IDS behaves under sustained adversarial pressure.
Rather than proposing a new detection algorithm, this study offers a theoretical perspective that clarifies why many adversarial defenses perform well in offline evaluations yet fail in real-time settings. The framework applies across IDS architectures and machine learning methods. By connecting adversarial machine learning theory with the requirements of real-time intrusion detection, this work lays the groundwork for future evaluation, benchmarking, and development of resilient IDS for adversarial operational environments.
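To make the three concepts concrete, the following is a minimal illustrative sketch, not taken from the paper: the function names and formulas are assumptions chosen for illustration, operating on a toy per-step detection-rate trajectory.

```python
# Illustrative sketch only: toy operationalizations of time-to-evasion,
# detection stability, and robustness decay. All names and formulas
# here are assumptions, not the paper's formal definitions.

def time_to_evasion(detections):
    """Index of the first time step at which the IDS misses an
    adversarial flow (detections: list of booleans, True = detected).
    Returns None if the attacker never evades."""
    for t, detected in enumerate(detections):
        if not detected:
            return t
    return None

def detection_stability(rates, window=3):
    """Largest change in mean detection rate between adjacent sliding
    windows; lower values mean more consistent behavior over time."""
    means = [sum(rates[i:i + window]) / window
             for i in range(len(rates) - window + 1)]
    return max(abs(b - a) for a, b in zip(means, means[1:]))

def robustness_decay(rates):
    """Average per-step drop in detection rate (positive = decaying)."""
    return (rates[0] - rates[-1]) / (len(rates) - 1)

if __name__ == "__main__":
    # Toy detection-rate trajectory under sustained adversarial pressure.
    rates = [0.95, 0.93, 0.90, 0.84, 0.75, 0.62]
    hits = [r > 0.8 for r in rates]  # crude per-step "detected" flags

    print(time_to_evasion(hits))               # 4
    print(round(robustness_decay(rates), 3))   # 0.066
    print(round(detection_stability(rates), 3))
```

Under these toy definitions, a detector can score well on aggregate accuracy yet show a short time-to-evasion and a steep decay slope, which is exactly the offline-versus-real-time discrepancy the framework is meant to expose.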

Keywords

Adversarial Machine Learning; Intrusion Detection Systems; Real-Time Cybersecurity
