Improving Parallel I/O Performance Using Multithreaded Two-Phase I/O with Processor Affinity Management

Conference paper
Parallel Processing and Applied Mathematics (PPAM 2013)

Abstract

I/O has long been a performance bottleneck in parallel computing, and using a parallel I/O API such as MPI-IO is one effective approach to mitigating it. ROMIO, the most widely used MPI-IO implementation, employs the two-phase I/O technique for collective I/O with non-contiguous access patterns. Two-phase I/O is also used heavily by application-oriented parallel I/O libraries such as HDF5 through an MPI-IO interface layer, so improving its performance can have a large impact on parallel I/O performance in general. In this paper we report enhancements to two-phase I/O that use Pthreads to improve I/O performance. The enhancements consist of a multithreaded scheme that overlaps file I/O with data exchange, together with processor affinity management for the threads dedicated to each of these tasks. In a parallel I/O throughput evaluation with HDF5, we show the performance advantages of the optimized two-phase I/O with appropriate processor affinity management over the original two-phase I/O.
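As a rough illustration of the mechanism described above, the following minimal Pthreads sketch (in C, for Linux) spawns one thread for the file I/O phase and one for the data-exchange phase and pins each to its own core with pthread_setaffinity_np. This is not the paper's implementation: the worker bodies are placeholders, and the core numbers 0 and 1 are arbitrary assumptions.

```c
#define _GNU_SOURCE          /* needed for pthread_setaffinity_np / CPU_SET */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Pin the calling thread to a single core (Linux/GNU extension). */
static void pin_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

/* Placeholder for the file I/O phase: in real two-phase I/O this
   thread would read or write one contiguous file domain per stripe. */
static void *file_io_worker(void *arg)
{
    pin_to_core(0);                      /* assumed core for file I/O */
    printf("file I/O phase running on its own core\n");
    return NULL;
}

/* Placeholder for the data-exchange phase: in real two-phase I/O this
   thread would redistribute stripe data among the processes. */
static void *exchange_worker(void *arg)
{
    pin_to_core(1);                      /* assumed core for exchange */
    printf("data-exchange phase running on its own core\n");
    return NULL;
}

int main(void)
{
    pthread_t io_thread, ex_thread;

    /* Run both phases concurrently so the file I/O of one stripe can
       overlap the data exchange of the next stripe. */
    pthread_create(&io_thread, NULL, file_io_worker, NULL);
    pthread_create(&ex_thread, NULL, exchange_worker, NULL);

    pthread_join(io_thread, NULL);
    pthread_join(ex_thread, NULL);
    return 0;
}
```

In an actual two-phase I/O pipeline these threads would work on successive stripes of the collective buffer, so the exchange for one stripe overlaps the file access for the previous one; pinning each thread to a distinct core keeps the overlapped phases from contending for the same CPU.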



Acknowledgment

This research work is partially supported by JST CREST. The authors would like to thank the Information Technology Center of the University of Tokyo for its assistance in using the T2K-Todai cluster system.

Author information

Correspondence to Yuichi Tsujita.


Copyright information

© 2014 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Tsujita, Y., Yoshinaga, K., Hori, A., Sato, M., Namiki, M., Ishikawa, Y. (2014). Improving Parallel I/O Performance Using Multithreaded Two-Phase I/O with Processor Affinity Management. In: Wyrzykowski, R., Dongarra, J., Karczewski, K., Waśniewski, J. (eds) Parallel Processing and Applied Mathematics. PPAM 2013. Lecture Notes in Computer Science, vol 8384. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-55224-3_67

  • DOI: https://doi.org/10.1007/978-3-642-55224-3_67

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-55223-6

  • Online ISBN: 978-3-642-55224-3

