
Testing Robustness Against Unforeseen Adversaries

We’ve developed a method to assess whether a neural network classifier can reliably defend against adversarial attacks not seen during training. Our method yields a new metric, UAR (Unforeseen Attack Robustness), which evaluates the robustness of a single model against an unanticipated attack, and highlights the need to measure performance across a more diverse range of unforeseen attacks.

Read Paper · View Code

Modern neural networks have achieved high accuracy on a wide range of benchmark tasks. However, they remain susceptible to adversarial examples: small but carefully crafted distortions of inputs, created by adversaries to fool the networks. For example, the adversarial example with $L_\infty$ distortion below differs from the original image by at most 32 in each RGB pixel value; a human can still classify the changed image, but a standard neural network confidently misclassifies it.
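To make the $L_\infty$ constraint concrete, here is a minimal sketch of generating such an example with projected gradient descent in PyTorch. It assumes a classifier `model` operating on pixel values in [0, 255], a single CHW `image` tensor, and a scalar integer `label`; it is illustrative only, not the attack implementation from our code release.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, image, label, eps=32.0, step_size=4.0, n_steps=20):
    """Generate an adversarial example within an L_inf ball of radius `eps`
    (in 0-255 pixel units) around `image`. Illustrative sketch only."""
    x_orig = image.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv.unsqueeze(0)), label.unsqueeze(0))
        (grad,) = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            # Take a signed gradient step to increase the loss, then project
            # back into the L_inf ball and the valid pixel range.
            x_adv = x_adv + step_size * grad.sign()
            x_adv = torch.clamp(x_adv, x_orig - eps, x_orig + eps)
            x_adv = torch.clamp(x_adv, 0.0, 255.0)
    return x_adv.detach()
```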

Sample images (black swan) generated by adversarial attacks with different distortion types. Each distortion is optimized to fool the network.

$L_\infty$: each pixel value may be changed by at most 32.

$L_1$: the vector of pixel values may be changed by a vector bounded in $L_1$-norm.

$L_2$-JPEG: the image is transformed to a JPEG-compressed vector, which is then distorted.

Elastic: a flow along a local vector field is applied to the image.

Fog: a fog-like distortion of bounded magnitude is applied to the image.

Gabor: additive noise is applied to adversarially texture the image.

Snow: snowflakes are adversarially constructed to partially obscure the image.

AI systems deployed in the wild will need to be robust to unforeseen attacks, but most defenses so far have focused on specific known attack types. The field has made progress in hardening models against such attacks; however, robustness against one type of distortion often does not transfer to robustness against attacks the model's designers did not foresee. Consequently, evaluating against only a single distortion type can give a false sense of security about a model deployed in the wild, which may remain vulnerable to unforeseen attacks such as fake eyeglasses and adversarial stickers.

An example where adversarial robustness does not transfer well. Hardening a model against Distortion A initially increases robustness against both Distortions A and B. However, as we harden further, adversarial robustness is harmed for Distortion B but remains about the same for Distortion A. (A = $L_\infty$, B = $L_1$)

Method principles

We’ve created a three-step method to assess how well a model performs against a new held-out type of distortion. Our method evaluates against diverse unforeseen attacks at a wide range of distortion sizes and compares the results to a strong defense which has knowledge of the distortion type. It also yields a new metric, UAR, which assesses the adversarial robustness of models against unforeseen distortion types.

1. Evaluate against diverse unforeseen distortion types

Typical papers on adversarial defense evaluate only against the widely studied $L_\infty$ or $L_2$ distortion types. However, we show that evaluating against the different $L_p$ distortions gives very similar information about adversarial robustness, so evaluating against $L_p$ distortions alone is insufficient to predict robustness against other distortion types. Instead, we suggest that researchers evaluate models against adversarial distortions that are not similar to those used in training. We offer the $L_1$, $L_2$-JPEG, Elastic, and Fog attacks as a starting point, and provide implementations, pre-trained models, and calibrations for a variety of attacks in our code package.
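As a sketch of what such an evaluation loop could look like (the `attack_fn` interface here is an assumption for illustration; concrete attack implementations come from our code package or elsewhere):

```python
import torch

def accuracy_under_attack(model, attack_fn, loader, device="cuda"):
    """Accuracy of `model` on examples perturbed by `attack_fn`.

    `attack_fn(model, images, labels)` is assumed to return adversarially
    perturbed images; plug in L1, L2-JPEG, Elastic, Fog, etc. implementations.
    """
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        adv = attack_fn(model, images, labels)
        with torch.no_grad():
            preds = model(adv).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# Evaluate one model against several held-out distortion types, e.g.:
# results = {name: accuracy_under_attack(model, make_attack(name), val_loader)
#            for name in ["L1", "L2-JPEG", "Elastic", "Fog"]}
# where `make_attack` is a placeholder factory for concrete attack implementations.
```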

2. Choose a wide range of distortion sizes calibrated against strong models

We found that considering too narrow a range of distortion sizes can reverse qualitative conclusions about adversarial robustness. To pick a range, we examine images produced by an attack at different distortion sizes and choose the largest range for which the images are still human-recognizable. However, as shown below, an attack with a large distortion budget only uses its full budget against strong defenses. We therefore recommend choosing a calibrated range of distortion sizes by evaluating against adversarially trained models (we also provide calibrated sizes for a wide variety of attacks in our code package).
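A rough sketch of such a sweep, reusing the `accuracy_under_attack` helper from the previous sketch; `make_attack(eps)` and the size grid below are placeholders rather than the calibrated values we release:

```python
def robustness_curve(model, make_attack, loader, eps_values):
    """Accuracy as a function of distortion size for one attack family.

    `make_attack(eps)` is a placeholder that builds the attack at distortion
    size `eps`; `accuracy_under_attack` is the helper defined above.
    """
    return {float(eps): accuracy_under_attack(model, make_attack(eps), loader)
            for eps in eps_values}

# Example: a geometric grid from small to large (but still human-recognizable) sizes.
# curve = robustness_curve(model, make_linf_attack, val_loader,
#                          eps_values=[1, 2, 4, 8, 16, 32])
```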

Sample images (espresso maker) of the same strong attack applied to different defense models: undefended, weakly defended, and strongly defended. Attacking stronger defenses causes greater visual distortion.

3. Benchmark adversarial robustness against adversarially trained models

We developed a new metric, UAR, which compares the robustness of a model against an attack to adversarial training against that attack. Adversarial training is a strong defense that uses knowledge of an adversary by training on adversarially attacked images. A UAR score near 100 against an unforeseen adversarial attack implies performance comparable to a defense with prior knowledge of the attack, making this a challenging objective.
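Concretely (paraphrasing the paper's definition; the exact calibration details are in the paper and code package), UAR normalizes a model's accuracy under an attack, summed over the calibrated distortion sizes, by the corresponding accuracies of adversarially trained models. A minimal sketch:

```python
def uar(accuracies, adv_trained_accuracies):
    """UAR of a model against one attack.

    `accuracies` are the model's accuracies under the attack at the calibrated
    distortion sizes; `adv_trained_accuracies` are the accuracies of models
    adversarially trained against that same attack, at the same sizes.
    Paraphrased from the paper's definition for illustration.
    """
    return 100.0 * sum(accuracies) / sum(adv_trained_accuracies)
```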

We computed the UAR scores of adversarially trained models for several different distortion types. As shown below, the robustness conferred by adversarial training does not transfer broadly to unforeseen distortions. In fact, robustness against a known distortion can reduce robustness against unforeseen distortions. These results underscore the need for evaluation against significantly more diverse attacks like Elastic, Fog, Gabor, and Snow.

UAR scores for adversarially trained models against adversarial attacks with different distortion types.

Next steps

We hope that researchers developing adversarially robust models will use our methodology to evaluate against a more diverse set of unforeseen attacks. Our code includes a suite of attacks, adversarially trained models, and calibrations which allow UAR to be easily computed.

If you’re interested in topics in AI Safety, consider applying to work at OpenAI.

Source: https://openai.com/blog/testing-robustness/
