Generalizing To Unseen Domains Via Adversarial Data Augmentation

Authors:
Riccardo Volpi Istituto Italiano di Tecnologia
Hongseok Namkoong Stanford University
Ozan Sener Intel Labs
John Duchi Stanford University
Vittorio Murino Istituto Italiano di Tecnologia
Silvio Savarese Stanford University

Abstract:

We are concerned with learning models that generalize well to different unseen domains. We consider a worst-case formulation over data distributions that are near the source domain in the feature space. Using only training data from a single source distribution, we propose an iterative procedure that augments the dataset with examples from a fictitious target domain that is "hard" under the current model. We show that our iterative scheme is an adaptive data augmentation method where we append adversarial examples at each iteration. For softmax losses, we show that our method is a data-dependent regularization scheme that behaves differently from classical regularizers that regularize towards zero (e.g., ridge or lasso). On digit recognition and semantic segmentation tasks, our method learns models that improve performance across a range of a priori unknown target domains.
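The following is a minimal sketch of the inner maximization step described in the abstract, assuming a PyTorch image classifier. The distance penalty here is a pixel-space squared distance used as a stand-in for the paper's feature-space distance; the function name and the hyperparameters gamma, lr, and steps are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def make_fictitious_examples(model, x, y, gamma=1.0, lr=1.0, steps=15):
    """Perturb a batch of source examples so they become "hard" for the
    current model, while a distance penalty keeps them near the originals.

    Note: the pixel-space squared distance below is a simplification;
    the paper penalizes distance in a learned feature (semantic) space.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        # Ascend on: task loss minus gamma * distance to the source sample.
        objective = F.cross_entropy(model(x_adv), y) - gamma * ((x_adv - x) ** 2).sum()
        grad, = torch.autograd.grad(objective, x_adv)
        with torch.no_grad():
            x_adv = x_adv + lr * grad
        x_adv.requires_grad_(True)
    return x_adv.detach()
```

In the iterative scheme the abstract describes, examples produced this way would be appended to the training set at each iteration and the model retrained on the augmented data.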
