Adversarial Machine Learning In Network Security: A Systematic Review Of Threat Vectors And Defense Mechanisms
DOI: https://doi.org/10.70937/itej.v1i01.9

Keywords: Adversarial Machine Learning, Network Security, Threat Vectors, Defense Mechanisms, Systematic Review

Abstract
Adversarial Machine Learning (AML) has emerged as a critical area of research within network security, addressing the evolving challenge of adversaries exploiting machine learning (ML) models. This systematic review adopts the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) methodology to comprehensively examine threat vectors and defense mechanisms in AML. The study identifies, categorizes, and evaluates existing research on adversarial attacks targeting ML algorithms in network security applications, including evasion, poisoning, and model extraction attacks. Following the PRISMA guidelines, a systematic search across multiple scholarly databases yielded a dataset of peer-reviewed articles that were screened, reviewed, and analyzed for inclusion. The review outlines key adversarial techniques employed against ML systems, such as gradient-based and black-box attack strategies, and explores the underlying vulnerabilities in network security architectures. It also examines defense mechanisms, including adversarial training, input preprocessing, and robust model design, discussing their efficacy and limitations in mitigating adversarial threats. Finally, the study identifies critical gaps in current research, such as the lack of standardized benchmarking for adversarial defenses and the need for scalable, real-time AML solutions.
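To fix ideas about the gradient-based attack strategies mentioned above, the sketch below illustrates the Fast Gradient Sign Method (FGSM), a canonical gradient-based evasion attack, against a toy traffic classifier. The model architecture, feature dimension, labels, and epsilon value are hypothetical placeholders chosen for illustration and are not drawn from the reviewed studies.

```python
# Minimal FGSM sketch (hypothetical model and data; for illustration only).
import torch
import torch.nn as nn

# Hypothetical network-traffic classifier: 40 flow features -> benign/malicious.
model = nn.Sequential(nn.Linear(40, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(model, x, y, epsilon=0.05):
    """Craft x_adv = x + epsilon * sign(grad_x L(model(x), y))."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Perturb each feature in the direction that increases the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Usage: perturb a (hypothetical) batch of flow-feature vectors.
x = torch.rand(8, 40)                 # 8 flows, 40 normalized features each
y = torch.zeros(8, dtype=torch.long)  # assumed true label: benign
x_adv = fgsm_attack(model, x, y)
print((model(x_adv).argmax(1) != y).float().mean())  # fraction misclassified
```

The same loop, run over training batches with the perturbed inputs fed back into the optimizer, is also the core of the adversarial training defense discussed in the review.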