Our research article "Testing the limits: exploring adversarial techniques in AI models", authored by Apostolis Zarras, Athanasia Kollarou, Aristeidis Farao, Panagiotis Bountakas, and Christos Xenakis, has been published in PeerJ Computer Science and was proudly supported by the AIAS project.
As Artificial Intelligence increasingly permeates critical domains such as healthcare, finance, and autonomous systems, the need for secure and resilient AI models becomes more urgent. Despite significant advances in deep learning, these systems remain vulnerable to adversarial manipulation. In this publication, our research team systematically examines how various AI architectures respond to cutting-edge adversarial attack techniques, providing valuable insights into the robustness and reliability of modern machine learning models.
Using the custom-built EVAISION tool, the study evaluates the effectiveness of four prominent adversarial methods, namely the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), DeepFool, and the Carlini & Wagner attack, across five neural network models (a minimal FGSM sketch follows the list):
🔹 Fully Connected Neural Network
🔹 LeNet
🔹 Simple CNN
🔹 MobileNetV2
🔹 VGG11
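For readers unfamiliar with these attacks, the sketch below shows the core idea of FGSM in PyTorch: perturb each input in the direction of the sign of the loss gradient. It is a minimal illustration only; the model, the epsilon budget, and the data pipeline are assumptions on our part and do not reproduce the EVAISION tool or the exact setup used in the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """One-step FGSM sketch.

    `epsilon` is an illustrative perturbation budget, not a value
    taken from the paper.
    """
    images = images.clone().detach().requires_grad_(True)

    # Compute the loss of the model on the clean inputs.
    loss = F.cross_entropy(model(images), labels)
    loss.backward()

    # Shift every pixel by +/- epsilon in the direction that increases the loss.
    adv_images = images + epsilon * images.grad.sign()

    # Keep the adversarial images in the valid input range.
    return adv_images.clamp(0.0, 1.0).detach()
```

PGD follows the same principle but applies several smaller steps with a projection back into the epsilon ball, while DeepFool and Carlini & Wagner search for minimally perturbed examples via optimization.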
📊 The attacks were evaluated using accuracy, F1-score, and misclassification rate, revealing compelling findings (an evaluation sketch follows the list):
✅ Simpler architectures sometimes demonstrated higher resilience to adversarial manipulation than more complex models.
✅ No single attack method performed best across all architectures, highlighting the importance of architecture-specific attack tuning.
✅ Model robustness is not strictly tied to model complexity, a key insight for secure AI system design.
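As a rough illustration of how such metrics can be gathered, the sketch below measures accuracy, macro F1-score, and misclassification rate on adversarially perturbed inputs. The helper name, the data loader, and the use of scikit-learn are assumptions for this example; the paper's own evaluation pipeline may differ.

```python
import torch
from sklearn.metrics import accuracy_score, f1_score

def evaluate_under_attack(model, loader, attack_fn, device="cpu"):
    """Run `attack_fn` (e.g. the fgsm_attack sketch above) on each batch
    and score the model on the resulting adversarial examples.

    Only the metric choices (accuracy, F1-score, misclassification rate)
    mirror the paper; the loop itself is an illustrative assumption.
    """
    model.eval()
    y_true, y_adv = [], []
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        adv = attack_fn(model, images, labels)
        with torch.no_grad():
            preds = model(adv).argmax(dim=1)
        y_true.extend(labels.cpu().tolist())
        y_adv.extend(preds.cpu().tolist())

    acc = accuracy_score(y_true, y_adv)
    return {
        "accuracy": acc,
        "f1": f1_score(y_true, y_adv, average="macro"),
        "misclassification_rate": 1.0 - acc,
    }
```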
📌 The results underline a central message: robust AI cannot rely solely on performance; it must be evaluated under adversarial pressure, tailored to the model's characteristics, and fortified through informed choices of defensive strategies.
🔗 Publication URL: https://shorturl.at/TD0TW
📖 Journal: PeerJ Computer Science

