A middleware for efficient stream processing in CUDA

  • Special Issue Paper
  • Published in: Computer Science - Research and Development

Abstract

This paper presents a middleware capable of out-of-order execution of kernels and data transfers for efficient stream processing in the compute unified device architecture (CUDA). The middleware runs on CUDA-compatible graphics processing units (GPUs). Using it, application developers can easily overlap kernel computation with data transfers between main memory and video memory. To maximize the efficiency of this overlap, the middleware executes commands such as kernel invocations and data transfers out of order. This run-time capability is obtained simply by replacing the original CUDA API calls with our API calls. We have applied the middleware to a practical application to assess its run-time overhead. The middleware reduces execution time by 19% and allows us to process data too large to be stored entirely in video memory.
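To make the overlap concrete, the sketch below shows how such pipelining is typically expressed in plain CUDA: chunks of input are copied and processed in separate streams so that transfers and kernel execution can proceed concurrently. The kernel, chunk size, and buffer layout are illustrative assumptions only; the middleware described in the paper hides this bookkeeping behind its own API calls, which are not shown here.

// Illustrative sketch only: hand-written overlap of transfers and kernels
// using plain CUDA streams. The paper's middleware replaces this manual
// pipelining with its own API (not shown) and may reorder the commands.
#include <cuda_runtime.h>
#include <cstdio>

// Hypothetical element-wise kernel standing in for the application kernel.
__global__ void scale(float *d, int n, float a) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= a;
}

int main() {
    const int N = 1 << 24;          // total number of elements
    const int CHUNK = 1 << 20;      // elements per chunk (assumed size)
    const int NSTREAMS = 4;         // number of concurrent streams

    float *h;                       // pinned host memory enables async copies
    cudaMallocHost(&h, N * sizeof(float));
    for (int i = 0; i < N; ++i) h[i] = 1.0f;

    float *d[NSTREAMS];
    cudaStream_t s[NSTREAMS];
    for (int k = 0; k < NSTREAMS; ++k) {
        cudaMalloc(&d[k], CHUNK * sizeof(float));
        cudaStreamCreate(&s[k]);
    }

    // Each chunk's download, kernel launch, and readback are issued in one
    // stream; chunks in different streams overlap copies with computation.
    for (int off = 0, k = 0; off < N; off += CHUNK, k = (k + 1) % NSTREAMS) {
        int n = (off + CHUNK <= N) ? CHUNK : N - off;
        cudaMemcpyAsync(d[k], h + off, n * sizeof(float),
                        cudaMemcpyHostToDevice, s[k]);
        scale<<<(n + 255) / 256, 256, 0, s[k]>>>(d[k], n, 2.0f);
        cudaMemcpyAsync(h + off, d[k], n * sizeof(float),
                        cudaMemcpyDeviceToHost, s[k]);
    }
    cudaDeviceSynchronize();

    printf("h[0] = %f\n", h[0]);    // expect 2.0

    for (int k = 0; k < NSTREAMS; ++k) {
        cudaFree(d[k]);
        cudaStreamDestroy(s[k]);
    }
    cudaFreeHost(h);
    return 0;
}

With the middleware, the stream and chunk management above would be handled internally, and commands could additionally be reordered across chunks to keep both the copy and compute engines busy, which is what enables processing of data sets larger than the video memory.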

Author information

Correspondence to Fumihiko Ino.

About this article

Cite this article

Nakagawa, S., Ino, F. & Hagihara, K. A middleware for efficient stream processing in CUDA. Comput Sci Res Dev 25, 41–49 (2010). https://doi.org/10.1007/s00450-010-0107-3
