International Journal of Research and Innovation in Applied Science (IJRIAS) |Volume VIII, Issue IV, April 2023|ISSN 2454-6194

Comprehensive Review on Advanced Adversarial Attack and Defense Strategies in Deep Neural Network

Oliver Smith, Anderson Brown
University of Western Australia, Crawley WA 6009, Australia
DOI: https://doi.org/10.51584/IJRIAS.2023.8418
Received: 06 March 2023; Accepted: 16 March 2023; Published: 5 May 2023


Abstract. In adversarial machine learning, attackers add carefully crafted perturbations to an input; the perturbations are almost imperceptible to humans but can cause a model to make wrong predictions. In this paper, we comprehensively review some of the most recent research, advances, and discoveries on adversarial attacks and adversarial sample generation, and assess the potency of each existing attack method. We likewise review recent research, advances, and discoveries on adversarial defense strategies and the effectiveness of each defense method, and finally compare the effectiveness and potency of the different adversarial attack and defense methods. We conclude that adversarial attacks will remain mainly black-box for the foreseeable future, since an attacker typically has limited or no knowledge of the gradients used by the neural network model. We also conclude that as datasets become more complex, the demand for scalable adversarial defense strategies to mitigate or combat attacks will grow with them. Finally, we strongly recommend that any neural network model, with or without a defense strategy, be revisited regularly, with its source code updated at regular intervals, to check for vulnerability to newer attacks.

Keywords: Adversarial sampling, adversarial example, adversarial training, deep neural network, adversarial defense, neural network robustness

I. Introduction

It is now possible to achieve state-of-the-art performance in various artificial intelligence tasks such as speech translation, image classification, game playing, and machine translation [27],[28],[29]. Despite the magnitude of this success in applying artificial intelligence to achieve state-of-the-art performance, machine learning models remain vulnerable to adversarial attacks.
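To make the vulnerability concrete, the sketch below illustrates one well-known white-box attack, the fast gradient sign method (FGSM), on a toy logistic-regression classifier. This is an illustrative example, not a method from this paper: the model, weights, and function names are all invented for the demo, and real attacks target deep networks rather than a two-weight linear model.

```python
import numpy as np

def fgsm_perturb(x, w, b, y_true, eps):
    """Craft an adversarial example with the fast gradient sign method.

    For a logistic-regression model p = sigmoid(w.x + b) trained with
    cross-entropy loss, the gradient of the loss w.r.t. the input x is
    (p - y_true) * w. FGSM steps a distance eps in the sign of that
    gradient, increasing the loss while keeping the perturbation small
    in the L-infinity norm.
    """
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # model probability of class 1
    grad_x = (p - y_true) * w                      # dLoss/dx for cross-entropy
    return x + eps * np.sign(grad_x)

# Toy demo: a fixed linear classifier that labels the clean input correctly.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, -0.5])                  # clean input, true class 1
predict = lambda v: float(np.dot(w, v) + b > 0)

x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.8)
print(predict(x), predict(x_adv))          # clean input classified 1; adversarial flips to 0
```

The perturbation shifts each coordinate by only eps, yet it is enough to push the input across the decision boundary; on image classifiers the same idea produces changes that are nearly invisible to humans.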
