
Dataflow design for optimal incremental SVM training


Abstract:

This paper proposes a new parallel architecture for incremental training of a Support Vector Machine (SVM), which produces an optimal solution by manipulating the Karush-Kuhn-Tucker (KKT) conditions. Compared to batch training methods, our approach avoids re-training from scratch when the training dataset changes. The proposed architecture is the first to adopt an efficient dataflow organisation. The main novelty is a parametric description of the parallel dataflow architecture, which deploys customisable arithmetic units for the dense linear algebraic operations involved in updating the KKT conditions. The proposed architecture targets on-line SVM training applications. Experimental evaluation with real-world financial data shows that our architecture, implemented on a Stratix-V FPGA, achieves significant speedup over LIBSVM running on a Core i7-4770 CPU.
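For context, the KKT conditions referred to in the abstract are the standard optimality conditions of the SVM dual problem: each dual coefficient alpha_i and its margin value g_i = y_i f(x_i) - 1 must jointly satisfy a set of complementarity constraints, and incremental trainers restore these constraints after each dataset change rather than re-solving from scratch. The sketch below is an illustrative check of those conditions only; it is not the paper's dataflow architecture, and the function name and tolerance are hypothetical.

```python
import numpy as np

def kkt_violations(alpha, g, C, tol=1e-6):
    """Return a boolean mask of samples violating the SVM dual KKT conditions.

    alpha : dual coefficients, one per training sample
    g     : margin values g_i = y_i * f(x_i) - 1
    C     : box-constraint upper bound on each alpha_i

    An optimal solution must satisfy:
      alpha_i == 0      ->  g_i >= 0
      0 < alpha_i < C   ->  g_i == 0
      alpha_i == C      ->  g_i <= 0
    """
    alpha = np.asarray(alpha, dtype=float)
    g = np.asarray(g, dtype=float)

    at_zero = alpha <= tol            # non-support vectors
    at_c = alpha >= C - tol           # bounded support vectors
    on_margin = ~at_zero & ~at_c      # free support vectors

    viol = np.zeros(alpha.shape, dtype=bool)
    viol |= at_zero & (g < -tol)          # should lie outside the margin
    viol |= on_margin & (np.abs(g) > tol)  # should lie exactly on the margin
    viol |= at_c & (g > tol)              # should lie inside or on the margin
    return viol
```

When a sample is added or removed, an incremental trainer adjusts the alphas (via the dense linear-algebra updates the paper accelerates) until this mask is empty again.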
Date of Conference: 07-09 December 2016
Date Added to IEEE Xplore: 18 May 2017
Conference Location: Xi'an, China
