ML CONFERENCE Blog

“Tricking an autonomous vehicle into not recognizing a stop sign is an evasion attack”

Sep 25, 2019

David Glavas

As machine learning technologies become more prevalent, the risk of attacks continues to rise. Which types of attacks on ML systems exist, how do they work, and which is the most dangerous? ML Conference speaker David Glavas answered our questions.

MLcon: As the use of machine learning systems becomes increasingly widespread, different approaches for attacking them have emerged. Can you tell us more about these angles of attack and how they differ from each other?

During an evasion attack, the adversary aims to avoid detection by deliberately manipulating an example input.

David Glavas: Here are four common attack techniques:

  • Poisoning attacks inject malicious examples into the training dataset or modify the features/labels of existing training data (a minimal label-flipping sketch follows this list).
  • Evasion attacks deliberately manipulate the input to evade detection.
  • Impersonation attacks craft inputs that the target model misclassifies as some specific real input.
  • Inversion attacks steal sensitive information such as training data or model parameters.
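
To make the first of these concrete, here is a minimal, purely illustrative sketch of the simplest form of poisoning, label flipping, on a synthetic dataset. The data, the 5% flip fraction, and the helper name are all assumptions for illustration; a real attacker would also need write access to part of the victim's training pipeline.

```python
# Minimal sketch of a label-flipping poisoning attack on a synthetic dataset.
# Everything here is illustrative; no real training pipeline is involved.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: 1000 samples, 10 features.
X_train = rng.normal(size=(1000, 10))
y_train = (X_train[:, 0] > 0).astype(int)

def poison_labels(y, fraction=0.05):
    """Flip the labels of a small, randomly chosen fraction of the training set."""
    y_poisoned = y.copy()
    n_flip = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip 0 <-> 1
    return y_poisoned

y_poisoned = poison_labels(y_train)
print(f"{(y_poisoned != y_train).sum()} of {len(y_train)} labels were flipped")
```

A model trained on the poisoned labels instead of the clean ones learns a subtly degraded decision boundary, which is exactly what makes this class of attacks hard to notice.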

MLcon: In your opinion, which type of attack on ML systems currently poses the greatest threat, and for what reason?

David Glavas: Evasion attacks, because they can be performed with less knowledge about the target system. The more knowledge about the target system an attack requires, the harder it is to carry out.

MLcon: How is an evasion attack carried out?

David Glavas: During an evasion attack, the adversary aims to avoid detection by deliberately manipulating an example input. To be more specific, while training a neural network, we usually use backpropagation to compute the derivative of the cost function with respect to the network’s weights. In contrast, during an evasion attack, we use backpropagation to compute the derivative of the cost function with respect to the input.

So we use backpropagation to answer the question: “How do I need to modify the input to maximize the cost function?” Maximizing the cost function confuses the target network and causes it to make an error. For example, given a picture of a cat, we look for pixels we can modify such that the target network sees something that’s not a cat. Researchers have proposed various algorithms for this (e.g. the Basic Iterative Method, Projected Gradient Descent, and the Carlini-Wagner attack).
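
As a minimal sketch of this gradient-with-respect-to-the-input idea, here is a single-step attack in the spirit of those algorithms (the Fast Gradient Sign Method), written with PyTorch. `model`, `image`, and `label` are placeholders for a pretrained classifier, an input tensor of shape (1, 3, H, W), and its true class index; the epsilon value is an arbitrary assumption.

```python
# Sketch of a single-step gradient-based evasion attack (FGSM-style).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return an adversarial copy of `image` that increases the model's loss."""
    image = image.clone().detach().requires_grad_(True)

    # Forward pass and cost function (cross-entropy against the true label).
    logits = model(image)
    loss = F.cross_entropy(logits, label)

    # Backpropagation, but we use the gradient w.r.t. the *input*, not the weights.
    loss.backward()

    # Step in the direction that maximizes the cost, then clamp to a valid image.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Repeating this step several times with a small step size and projecting back into a small neighborhood of the original input is essentially what the Basic Iterative Method and Projected Gradient Descent do.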

MLcon: Can you provide us with an example for an evasion attack?

Researchers showed that they can cause a stop sign to ‘disappear’ according to the detector.

David Glavas: To name an example with obviously negative consequences, tricking an autonomous vehicle into not recognizing a stop sign is an evasion attack. Autonomous vehicles use object detectors to both locate and classify multiple objects in a given scene (e.g. pedestrians, other cars, street signs, etc.). An object detector outputs a set of bounding boxes as well as the label and likelihood of the most probable object contained within each box.
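
To picture what the detector produces, the following Python sketch shows one plausible shape of its output; the class, field names, and values are illustrative assumptions, not the interface of any particular detector.

```python
# Illustrative sketch of object detector output: bounding boxes with the most
# probable label and its likelihood for each box. All values are made up.
from dataclasses import dataclass

@dataclass
class Detection:
    box: tuple    # (x_min, y_min, x_max, y_max) in pixels
    label: str    # most probable class inside the box
    score: float  # likelihood of that class

detections = [
    Detection(box=(120, 80, 180, 140), label="stop sign", score=0.97),
    Detection(box=(300, 60, 420, 260), label="pedestrian", score=0.88),
]

# An evasion attack on the detector succeeds if, after perturbing the image,
# no high-confidence "stop sign" detection remains in this list.
print([d for d in detections if d.label == "stop sign" and d.score > 0.5])
```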

In late 2018, researchers showed that they can cause a stop sign to “disappear” according to the detector by adding adversarial stickers onto the sign. This attack tricked state-of-the-art object detection models into not recognizing a stop sign over 85% of the time in a lab environment and over 60% of the time in a more realistic outdoor environment. Imagine having to win a coin toss every time you want your autonomous car to stop at a stop sign.

Other research suggests that similar attacks are possible against face recognition systems (you can’t recognize a face without detecting it first). Evading malware and spam detectors are other common evasion attacks.

Source: Eykholt et al., “Robust Physical-World Attacks on Deep Learning Visual Classification”

MLcon: When an ML system is attacked, e.g. via data poisoning, this may go unnoticed for some time. Which measures could be taken to help prevent attacks from going undetected?

David Glavas: The measures depend on the details of the specific system at hand. A general approach would be to handle access control more carefully (who can see and change which data) and to monitor system behavior for anomalies (anomalies may indicate that the system has been poisoned).

MLcon: Generally speaking, what are the greatest challenges in securing an ML system, and do they differ from other types of longer-known security risks?

Companies that encounter such attacks either cover them up entirely or don’t disclose the details.

David Glavas: I think it’s difficult to envision all possible moves/changes/manipulations that an attacker can perform, but that’s a general issue for anyone dealing with computer security.

There are many commonalities, as ML systems still suffer from the same security risks as non-ML systems. Attacks on ML systems don’t necessarily need to target the system’s ML component specifically. In fact, it’s probably easier for an attacker to focus on other components.

MLcon: Have any large-scale attacks on ML systems been carried out recently – and what can we learn from how they have or have not been fended off?

David Glavas: It’s difficult to say, since companies that encounter such attacks either cover them up entirely or don’t disclose the details of what exactly happened. The evasion of spam filters and antivirus software, for example, seems to be quite common.

MLcon: Thank you for the interview!
