Distributed Training of Support Vector Machine on a Multiple-FPGA System


Abstract:

Support Vector Machine (SVM) is a supervised machine learning model for classification tasks. Training an SVM on a large number of data samples is challenging due to the high computational cost and memory requirements. Hence, model training is typically performed on a high-performance server that runs a sequential training algorithm on centralized data. However, as we move towards massive workloads, it becomes impractical to store all the data centrally and to expect such sequential training algorithms to scale on traditional processors. Moreover, with the growing demand for real-time machine learning in edge analytics, it is imperative to devise an efficient training framework with cheaper computation and limited memory. Therefore, we propose and implement a first-of-its-kind system of multiple FPGAs as a distributed computing framework, comprising up to eight FPGA units on Amazon F1 instances with negligible communication overhead, to fully parallelize, accelerate, and scale SVM training on decentralized data. Each FPGA unit has a pipelined SVM training IP logic core operating at 125 MHz with a power dissipation of 39 watts, which accelerates its allocated share of the overall training computation. We evaluate and compare the performance of the proposed system on five real SVM benchmarks.
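The abstract does not detail the training algorithm itself. As a rough illustration of the data-parallel pattern it describes, where worker units each hold a shard of decentralized data and a coordinator aggregates their partial results, the following is a minimal software sketch of distributed linear-SVM training by hinge-loss subgradient descent. The function names, the subgradient-descent formulation, and the hyperparameters (n_workers, C, lr, epochs) are illustrative assumptions, not the paper's FPGA IP-core design.

import numpy as np

def local_subgradient(w, b, X_shard, y_shard, C):
    # Partial hinge-loss subgradient computed on one worker's local shard.
    margins = y_shard * (X_shard @ w + b)
    active = margins < 1.0  # samples that violate the margin
    grad_w = -C * (y_shard[active][:, None] * X_shard[active]).sum(axis=0)
    grad_b = -C * y_shard[active].sum()
    return grad_w, grad_b

def distributed_svm_train(X, y, n_workers=8, C=1.0, lr=1e-3, epochs=100):
    # Split the data into n_workers shards, mimicking decentralized storage
    # across worker units (eight, matching the paper's FPGA count).
    shards = np.array_split(np.arange(len(y)), n_workers)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        # Each worker computes a partial gradient on its own shard ...
        partials = [local_subgradient(w, b, X[i], y[i], C) for i in shards]
        # ... and the coordinator sums them, adding the L2 regularizer.
        grad_w = w + sum(gw for gw, _ in partials)
        grad_b = sum(gb for _, gb in partials)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
    w, b = distributed_svm_train(X, y)
    print("training accuracy:", np.mean(np.sign(X @ w + b) == y))

In this sketch, each simulated worker plays the role of one FPGA unit: it touches only its local shard, and only the aggregated gradient crosses the worker boundary on each iteration, which is the property that keeps communication overhead low in such schemes.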
Published in: IEEE Transactions on Computers (Volume: 69, Issue: 7, 1 July 2020)
Pages: 1015-1026
Date of Publication: 11 May 2020
