🧠 New Publication in Array (Q1 Journal)

We are pleased to announce our latest publication in the prestigious Q1-ranked Elsevier journal Array, titled "From Vulnerability to Resilience: Adversarial Training and Real-Time Detection for AI Security".

As Artificial Intelligence becomes a foundational component across critical domains—such as cybersecurity, healthcare, finance, and smart infrastructures—the need to safeguard AI models against adversarial threats is more urgent than ever. The study addresses this challenge by investigating both the vulnerabilities of modern AI systems and the defense mechanisms available to protect them.

🔍 Key Contributions of the Study:

🔹 Comprehensive evaluation of multiple Machine Learning and Deep Learning models — including Decision Tree, Random Forest, Logistic Regression, XGBoost, RNN, CNN, and a custom PyTorch Neural Network — against prominent adversarial attacks such as FGSM, PGD, DeepFool, Carlini–Wagner, and transfer attacks.

🔹 Comparative analysis of adversarial training versus real-time adversarial detection methods, utilizing binary feature-based and activation-based detection schemes.
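For readers less familiar with the attack family above, here is a minimal, illustrative sketch of FGSM-style adversarial training on a toy NumPy logistic-regression model. This is a hypothetical example for intuition only; it does not reproduce the models, datasets, or detection schemes used in the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """Fast Gradient Sign Method: perturb x by eps in the sign
    of the input gradient of the binary cross-entropy loss."""
    grad_x = (sigmoid(np.dot(w, x)) - y) * w  # d(BCE)/dx for logistic regression
    return x + eps * np.sign(grad_x)

def adversarial_train(X, y, eps=0.1, lr=0.5, epochs=200):
    """Adversarial training: each SGD step uses the clean example
    and its FGSM-perturbed counterpart."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1]) * 0.01
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            x_adv = fgsm(xi, yi, w, eps)
            for x_train in (xi, x_adv):       # mix clean + adversarial
                p = sigmoid(np.dot(w, x_train))
                w -= lr * (p - yi) * x_train  # gradient step on BCE
    return w
```

The same idea scales to deep networks (where the paper evaluates it): generate perturbed inputs with the current model's gradients and include them in each training batch, so the decision boundary learns to tolerate the perturbation budget.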

📊 Using the CIC-IDS2017 and CICIoT2023 cybersecurity datasets, our results reveal that adversarial training consistently outperforms detection-based mechanisms, offering a more robust and reliable defense strategy across diverse attack scenarios.

➡️ This research advances the development of resilient and trustworthy AI systems, capable of sustaining secure and reliable operation even under adversarial conditions.

🔗 Read the full article here: https://lnkd.in/dYhHhA75