On February 27, the University of Malaga held an online training event on adversarial attacks against Artificial Intelligence (AI) and Machine Learning (ML) models. The session presented key research outcomes developed within the AIAS MSCA project, particularly those reported in Deliverable 3.2, “Taxonomy of Adversarial AI Attacks.”
During the event, participants were introduced to the fundamental concepts and evolving landscape of adversarial AI. The training gave an overview of the main types of adversarial attacks that threaten AI and ML systems, along with the defensive mechanisms that can mitigate them. Special emphasis was placed on how adversarial techniques are transforming cybersecurity, creating new classes of vulnerability and reshaping the nature of cyber threats in AI-driven environments.
Beyond the technical discussions, the event also addressed the broader regulatory context surrounding AI technologies. Participants were introduced to the European Union’s Artificial Intelligence Act, the first comprehensive regulatory framework governing the development, deployment, and safe use of AI systems across the EU.
The training event strengthened participants’ understanding of adversarial AI risks and defenses, while highlighting the importance of responsible AI governance and regulatory compliance in a rapidly evolving AI ecosystem.
Some snippets are provided below:
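The original snippets are not reproduced in this text. As an illustration of the kind of evasion attack covered in such a taxonomy, the Fast Gradient Sign Method (FGSM) can be sketched against a toy logistic-regression model; every weight and input value below is an illustrative assumption, not taken from the event materials or the deliverable:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def sign(v):
    # Elementwise sign: -1, 0, or +1.
    return (v > 0) - (v < 0)

def fgsm_perturb(x, y, w, b, eps):
    """Fast Gradient Sign Method (FGSM) against logistic regression.

    The gradient of the cross-entropy loss with respect to the input x
    is (sigmoid(w.x + b) - y) * w; FGSM moves each input feature by
    eps in the sign of that gradient to maximize the loss.
    """
    p = sigmoid(dot(w, x) + b)
    return [xi + eps * sign((p - y) * wi) for xi, wi in zip(x, w)]

# Toy model and an input it classifies correctly as class 1 (illustrative values).
w, b = [1.0, -2.0, 0.5], 0.0
x, y = [0.6, -0.2, 0.4], 1.0          # dot(w, x) = 1.2 -> class 1

x_adv = fgsm_perturb(x, y, w, b, eps=0.7)
print(sigmoid(dot(w, x) + b) > 0.5)      # True: original input is class 1
print(sigmoid(dot(w, x_adv) + b) > 0.5)  # False: small perturbation flips the class
```

The sketch shows the core mechanic behind many evasion attacks: a perturbation that is small per feature, but aligned with the loss gradient, is enough to change the model's decision.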