Adversarial Artificial Intelligence (AI)

What is adversarial AI and what is it used for?

In this article, I provide an introduction to AI methods in general and to their use in hostile environments (that is, adversarial AI), such as cyber attack detection.

Aug. 21, 2025
Igino Corona

By now it is a buzzword, inserted almost everywhere to promote a product or to attract clicks from peddlers of fluff: Artificial Intelligence, or AI.

In this article, I'll try to clarify things and be honest with you, having worked in the research field for 20 years now. This article should be truly understandable to everyone; if it isn't, please let me know! Let's start by analyzing the meaning of each term.

What is Intelligence?

There is no single, scientifically accepted definition of intelligence, as it is a complex concept with many dimensions, different meanings, and facets. That said, almost all definitions can be associated with a sophisticated information processing capacity, characterized by mechanisms of synthesis, adaptation, and generalization for understanding context and solving new problems. We can find it in all forms of life, which, by processing information from their sensory organs, are able to act to survive and reproduce in the natural environment.

In our case, the highest level processing is known as thinking, which, based on information from our senses, builds (more or less implicitly) models of reality through which:

  1. we say we understand, we judge, we explain facts or actions of others, we are aware of ourselves;
  2. we take action, we make ourselves understood by others, we adapt to new situations or we modify the environment around us for our benefit.

What is Artificial Intelligence?

The leading standards organization, the International Organization for Standardization (ISO), provides the following definition (see ISO/IEC 22989:2022, section 3.1.3):

Research and development of mechanisms and applications of AI systems.

So what is an AI System?

ISO also provides the following definition (see ISO/IEC 22989:2022, section 3.1.4):

Engineered system that generates outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives.

If we read between the lines of the definition, the pattern is simply this:

Human Goals ➜ Engineered System ➜ Results

This is clearly a rather general (and anthropocentric) definition, so much so that one might (rightly) wonder where the intelligence lies. In fact, I think this definition reflects the state of affairs in the field of artificial intelligence quite well.

In practice, those who develop these systems are simply interested in solving practical (complex) problems through automated data processing.

The question of whether or not such systems are intelligent (and, if so, how much) is secondary, and in any case is much more difficult to address, primarily because, as we have discussed, there is no single, scientifically accepted definition of intelligence.

But then why talk about intelligence? The answer lies in two main aspects. First, there's an undeniable commercial one: talking about intelligence makes any proposition more appealing, especially to non-experts, because it suggests technological sophistication and a complete, polished tool. Second, there's a practical aspect: artificial intelligence systems can indeed implement key mechanisms common to many facets of intelligence, mechanisms that have proven very useful in solving complex problems. Let's see which ones.

How does an AI System work?

Nowadays, AI systems are mainly based on machine learning mechanisms:

  1. they fit models based on a set of sample (training) data;
  2. based on the learned patterns and new data, they make predictions.

Input data and predictions can be anything, depending on the specific application. Developers configure the system with a family of candidate models; learning then (possibly) selects, combines, and adjusts the parameters of those models.

Models essentially encode the AI system's experience: what it has learned from the training data. They provide a useful synthesis for generalizing from the data, capturing its characteristic traits in order to make accurate predictions on new data during the operational phase. In this sense, they replicate a key characteristic of intelligence as described above, one that has proven extraordinarily effective for solving extremely complex practical problems.
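To make the fit-then-predict workflow above concrete, here is a minimal sketch in Python using scikit-learn; the synthetic dataset and the choice of logistic regression are my own illustrative assumptions, not prescriptions from this article:

```python
# Minimal sketch of the two-step machine learning workflow described above.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Sample (training) data: feature vectors X and labels y.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Fit a model to the training data (the learning phase).
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# 2. Use the learned parameters to make predictions on new data
#    (the operational phase).
predictions = model.predict(X_test)
print("Accuracy on unseen data:", model.score(X_test, y_test))
```

The model's fitted parameters are the "experience" mentioned above: a compact summary of the training data that lets the system generalize to inputs it has never seen.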

There is also a so-called symbolic AI paradigm (mostly employed in the early days of the discipline, between 1950 and 1980), in which developers explicitly encode "knowledge," that is, all the rules on which predictions (inferences) are based. It can be seen as a special case in which the models (i.e., rules) are completely fixed and there is no adaptation based on data. In general, this paradigm is appropriate for encoding an expert's knowledge of the problem and is complementary to the data-driven approach: the two approaches can be combined to improve the quality and performance of an AI-based system.
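As a purely illustrative sketch of how the two paradigms can be combined, imagine a hypothetical spam filter in which a hand-written expert rule takes precedence and a learned model handles everything the rule abstains on; the rule, the e-mail scenario, and the model interface are all assumptions of mine:

```python
# Illustrative combination of a symbolic rule with a data-driven model.
from typing import Optional

def expert_rule(email_text: str) -> Optional[bool]:
    """Hypothetical hand-written rule: flag obvious scams, abstain otherwise."""
    text = email_text.lower()
    if "wire transfer" in text and "urgent" in text:
        return True    # the rule fires: classify as malicious
    return None        # the rule abstains: defer to the learned model

def hybrid_predict(email_text: str, learned_model) -> bool:
    """Use the expert rule when it fires, otherwise the learned model
    (assumed to expose a scikit-learn-style predict() on raw text)."""
    verdict = expert_rule(email_text)
    if verdict is not None:
        return verdict
    return bool(learned_model.predict([email_text])[0])
```

The fixed rule encodes expert knowledge directly, while the learned model adapts to data: each covers weaknesses of the other.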

So, what is adversarial AI?

There are applications in which an AI system must confront an intelligent adversary determined to compromise its functionality. This is the case with AI systems applied to cybersecurity: they must operate in a hostile environment. If the system is trained on data deliberately contaminated by a cybercriminal, or if the attacker adapts the attack to the models the system has learned, the system can fail spectacularly unless it is properly designed.
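To give a feel for the problem, here is a toy sketch (my own illustration, not a technique described in this article) of an evasion attack: an attacker who knows the weights of a learned linear classifier nudges a malicious sample's features against the decision boundary until the sample is classified as benign.

```python
# Toy evasion attack against a linear classifier (illustrative values only).
import numpy as np

w = np.array([2.0, -1.0, 0.5])   # learned weights (assumed known to the attacker)
b = -0.2                          # learned bias
x = np.array([1.0, 0.3, 2.0])    # malicious sample, initially detected

def score(sample):
    return w @ sample + b         # score > 0 means "malicious"

step = 0.1
x_adv = x.copy()
while score(x_adv) > 0:
    # Move the sample in the direction that decreases the score fastest.
    x_adv -= step * w / np.linalg.norm(w)

print("original score:", score(x), "evasive score:", score(x_adv))
```

Once the model is known, or can be probed, such manipulations are cheap for the attacker; this is exactly why systems deployed in hostile environments need to be designed with the adversary in mind.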

It may seem strange, but this problem was only recognized in the early 2000s. I was among the pioneers who studied it in depth and developed concrete, effective mechanisms to defend against these attacks. We gave rise to a new field of research called adversarial machine learning (or adversarial AI), which has now become the subject of standardization efforts by the US National Institute of Standards and Technology (NIST) for the development of trustworthy and responsible AI.

Conclusions

We've seen that the ISO standardized definition of Artificial Intelligence and AI systems is extremely broad and, paradoxically, contains no reference to intelligence or its key characteristics. This is somewhat the state of affairs in the field: ultimately, those developing AI systems are simply interested in solving complex practical problems through automated data processing. On the other hand, AI has become a buzzword, tacked on to just about anything. There's also a practical aspect: AI systems can indeed exploit key intelligence mechanisms, such as learning from examples, to function. In applications like cybersecurity, however, they must contend with an intelligent adversary determined to compromise them. How can they be defended? In the next article, I'll provide some key hints.
