Online Robust Lagrangian Support Vector Machine against Adversarial Attack

https://doi.org/10.1016/j.procs.2018.10.239
Open access under a Creative Commons license

Abstract

In adversarial environments such as intrusion detection and spam filtering, an adversary (intruder or spam advertiser) may attempt to inject contaminated training instances and manipulate the learning of the classifier. To maintain good classification performance, many robust learning methods have been proposed to counter such adversarial attacks. Support Vector Machines (SVMs) are a successful approach to adversarial classification tasks, and the study of robust SVMs has attracted much attention. However, in many real applications the data, which may include tainted instances, arrives dynamically. Batch learning, which requires retraining whenever new samples arrive, consumes more computing resources. In this paper, we propose a robust Lagrangian support vector machine (RLSVM) with a modified kernel matrix and develop an online learning algorithm for it. The experimental results show the robustness of RLSVM against label noise produced by adversaries in an online adversarial environment.
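
The abstract does not spell out the kernel-matrix modification or the online update rule, so the following is only a minimal sketch of the standard kernel Lagrangian SVM iteration (in the style of Mangasarian and Musicant) into which a robustified kernel matrix K would be passed; the function name, parameters, and stopping criterion are assumptions for illustration, not the paper's method.

import numpy as np

def lagrangian_svm(K, y, nu=1.0, alpha=None, iters=200, tol=1e-6):
    # Kernel Lagrangian SVM iteration (sketch).
    # K : (m, m) kernel matrix, possibly already modified for robustness
    #     against label noise (the paper's modification is not shown here).
    # y : (m,) labels in {-1, +1}.
    # Returns multipliers u; the decision function is
    #     f(x) = sum_j u_j * y_j * (k(x, x_j) + 1).
    m = K.shape[0]
    D = np.diag(y.astype(float))
    e = np.ones(m)
    Q = np.eye(m) / nu + D @ (K + np.outer(e, e)) @ D
    Q_inv = np.linalg.inv(Q)
    if alpha is None:
        alpha = 1.9 / nu          # step size must satisfy 0 < alpha < 2/nu
    u = Q_inv @ e                 # initial iterate
    for _ in range(iters):
        z = Q @ u - e
        u_new = Q_inv @ (e + np.maximum(z - alpha * u, 0.0))
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return u

Because the iteration touches the data only through Q, an online variant can, in principle, update Q and the multipliers incrementally as new (possibly poisoned) samples arrive instead of retraining from scratch; the details of that update are specific to the paper.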

Keywords

adversarial attack
poison attack
label noise
online learning
Lagrangian SVM
