Hardware accelerators have recently been used to augment the compute power of data centers and improve the performance of many applications, particularly latency-sensitive ones. Indeed, several commercial vendors now offer FPGAs in their cloud platforms. This Special Issue of TRETS presents advanced research on using FPGAs in data centers. The papers present recent research on several topics, including: the impact of terrestrial radiation, memory system optimization using FPGAs, use and management of network-accessible FPGAs, virtualization and run-time resource management for FPGAs, novel applications of FPGAs in data centers, FPGA IP cores for data center acceleration, latency and performance tradeoffs in using FPGAs for acceleration, and communication optimization using FPGAs.
In response to the call for papers, 21 papers were received. After a thorough review of these manuscripts following the ACM manuscript review guidelines, 13 papers were accepted. The papers are grouped into two issues. The previous issue (Issue 15:2) includes 10 papers as Part I of the special issue, and this issue includes the remaining 3 papers as Part II of the same special issue.
The paper “Algean: An Open Framework for Deploying Machine Learning on Heterogeneous Clusters” by Tarafdar et al. presents an open framework to build and deploy machine learning (ML) algorithms on a heterogeneous cluster of devices (CPUs and FPGAs). The paper “A Unified FPGA Virtualization Framework for General-Purpose Deep Neural Networks in the Cloud” by Zeng et al. presents a unified virtualization framework for general-purpose deep neural networks in the cloud, enabling multi-tenant sharing of both Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) accelerators on a single FPGA. Finally, the paper “Scalable Phylogeny Reconstruction with Disaggregated Near-memory Processing” by Alachiotis et al. explores the potential of deploying custom acceleration units, implemented in FPGA technology, adjacent to the disaggregated-memory controllers on memory bricks (in IBM dReDBox terminology) to reduce data movement and improve performance and energy efficiency when reconstructing large phylogenies (evolutionary relationships among organisms).
We would like to thank all the authors for submitting their work, and the anonymous reviewers for their careful evaluation and thoughtful reviews in selecting these manuscripts for publication. We would also like to thank Deming Chen (Editor-in-Chief, ACM TRETS) and Megan Shuler (Editorial Associate, ACM TRETS) for their support throughout the review process. We hope you enjoy this Special Issue.
Ken Eguro, Microsoft
email: eguro@microsoft.
Stephen Neuendorffer, AMD/Xilinx
email: stephenn@amd.
Viktor Prasanna, University of Southern California
email: prasanna@usc.
Hongbo Rong, Intel
email: hongbo.
Guest Editors
Index Terms
- Introduction to Special Issue on FPGAs in Data Centers, Part II