Adaptive Certified Training: Towards Better Accuracy-Robustness Tradeoffs
Technical Report, arXiv:2307.13078 - Jul 2023
As deep learning models continue to advance and are increasingly utilized in
real-world systems, the issue of robustness remains a major challenge.
Existing certified training methods produce models that achieve high provable
robustness guarantees at certain perturbation levels. However, the main
problem of such models is a dramatically low standard accuracy, i.e. accuracy
on clean unperturbed data, that makes them impractical. In this work, we
consider a more realistic perspective of maximizing the robustness of a model
at certain levels of (high) standard accuracy. To this end, we propose a novel
certified training method based on the key insight that training with adaptive
certified radii helps to improve both the accuracy and robustness of the model,
advancing state-of-the-art accuracy-robustness tradeoffs. We demonstrate the
effectiveness of the proposed method on MNIST, CIFAR-10, and TinyImageNet
datasets. Particularly, on CIFAR-10 and TinyImageNet, our method yields models
with up to two times higher robustness, measured as the average certified
radius over the test set, at the same levels of standard accuracy compared to
baseline approaches.
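For context, the robustness metric referred to above, the average certified radius (ACR), is commonly computed from per-sample certified radii, with misclassified samples contributing a radius of zero. The Python sketch below illustrates this under the widely used randomized-smoothing l2 certificate of Cohen et al. (R = sigma * Phi^{-1}(p_A)); the function names, the smoothing parameter sigma, and the toy inputs are illustrative assumptions and do not describe the paper's actual certification procedure.

import numpy as np
from scipy.stats import norm

def certified_radius(p_a, sigma):
    # l2 certificate from randomized smoothing (Cohen et al., 2019):
    # R = sigma * Phi^{-1}(p_A), valid only when the top-class probability p_A > 0.5.
    return sigma * norm.ppf(p_a) if p_a > 0.5 else 0.0

def average_certified_radius(correct, p_a, sigma):
    # ACR averages the certified radius over the whole test set and assigns
    # a radius of 0 to misclassified samples, so it couples accuracy and robustness.
    radii = np.array([certified_radius(p, sigma) for p in p_a])
    radii[~np.asarray(correct)] = 0.0
    return float(radii.mean())

# Toy example with hypothetical numbers: three test points, smoothing noise sigma = 0.25.
print(average_certified_radius([True, True, False], [0.99, 0.70, 0.95], 0.25))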
The work was presented at the ICML 2023 workshop "New Frontiers in Adversarial Machine Learning".
This paper is also stored on arXiv.
BibTeX reference
@TechReport{NSB23a,
author = "Nurlanov, Zhakshylyk and Schmidt, Frank R. and Bernard, Florian",
title = "Adaptive Certified Training: Towards Better Accuracy-Robustness Tradeoffs",
institution = "arXiv:2307.13078",
month = "Jul",
year = "2023",
url = "http://frank-r-schmidt.de/Publications/2023/NSB23a"
}
