Authors:
Oliver Rippel¹; Arnav Chavan²; Chucai Lei¹ and Dorit Merhof¹
Affiliations:
¹ Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
² Indian Institute of Technology, ISM Dhanbad, India
Keyword(s):
Anomaly Detection, Anomaly Segmentation, Transfer Learning, PDF Estimation, Visual Inspection.
Abstract:
Current state-of-the-art anomaly detection (AD) methods exploit the powerful representations yielded by large-scale ImageNet training. However, catastrophic forgetting prevents the successful fine-tuning of pre-trained representations on new datasets in the semi-supervised setting, and representations are therefore commonly fixed. In our work, we propose a new method to overcome catastrophic forgetting and thus successfully fine-tune pre-trained representations for AD in the transfer learning setting. Specifically, we induce a multivariate Gaussian distribution for the normal class based on the linkage between generative and discriminative modeling, and use the Mahalanobis distance of normal images to the estimated distribution as the training objective. We additionally propose to use augmentations commonly employed for vicinal risk minimization in a validation scheme to detect the onset of catastrophic forgetting. Extensive evaluations on the public MVTec dataset reveal that our method achieves a new state of the art in the AD task while attaining anomaly segmentation performance comparable to the prior state of the art. Further, ablation studies demonstrate the importance of the induced Gaussian distribution as well as the robustness of the proposed fine-tuning scheme with respect to the choice of augmentations.
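To make the core scoring idea concrete, the following is a minimal NumPy sketch of fitting a multivariate Gaussian to normal-class feature vectors and computing squared Mahalanobis distances to it. All function names, the regularization constant, and the toy data are illustrative assumptions; in the paper's method, the features come from a pre-trained deep network and this distance serves as the fine-tuning objective rather than a post-hoc score.

```python
import numpy as np

def fit_gaussian(features):
    """Fit a multivariate Gaussian (mean, inverse covariance) to normal-class features."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False)
    # Small ridge term for numerical stability before inverting (illustrative choice)
    cov += 1e-6 * np.eye(cov.shape[0])
    return mu, np.linalg.inv(cov)

def mahalanobis_sq(x, mu, cov_inv):
    """Squared Mahalanobis distance of each row of x to the fitted Gaussian."""
    d = x - mu
    return np.einsum('ij,jk,ik->i', d, cov_inv, d)

# Toy usage: score feature vectors drawn from a synthetic "normal" class
rng = np.random.default_rng(0)
normal_features = rng.normal(size=(500, 8))
mu, cov_inv = fit_gaussian(normal_features)
scores = mahalanobis_sq(normal_features, mu, cov_inv)
```

Minimizing this distance for normal training images pulls their features toward the estimated distribution; anomalous inputs then yield large distances at test time.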