
UNIO: A Unified I/O System Framework for Hybrid Scientific Workflow

  • Conference paper
Cloud Computing and Big Data (CloudCom-Asia 2015)

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 9106)

Abstract

Recent years have seen an increasing number of hybrid scientific applications, which typically consist of an HPC simulation program along with its corresponding data analytics programs. Unfortunately, current computing platforms do not accommodate this emerging workflow well, mainly because HPC simulation programs store their output in a dedicated storage cluster equipped with a Parallel File System (PFS). To analyze the data generated by the simulation, the data must first be migrated from the storage cluster to the compute cluster. This migration can introduce severe delays, especially as data sizes continue to grow.

While scale-up supercomputers with dedicated PFS storage clusters still represent mainstream HPC, a growing number of scale-out small-to-medium-sized HPC clusters are being provisioned to run hybrid scientific workflows on fast-growing cloud computing infrastructures such as Amazon cluster compute instances. Unlike the traditional supercomputer setting, the limited network bandwidth of these scale-out HPC clusters makes data migration prohibitively expensive. To address this problem, we develop a Unified I/O System Framework (UNIO) that avoids such migration overhead for scale-out small-to-medium-sized HPC clusters. Our main idea is to let both HPC simulation programs and analytics programs run atop one unified file system, e.g., a data-intensive file system (DIFS for short). In UNIO, an I/O middleware component allows unmodified HPC simulation programs to issue I/O operations directly over DIFS without any porting effort, while an I/O scheduler dynamically smooths out disk write and read traffic for both simulation and analysis programs. By experimenting with a real-world scientific workflow on a 46-node UNIO prototype, we find that UNIO achieves read/write I/O performance comparable to that of small-to-medium-sized HPC clusters equipped with a parallel file system. More importantly, because UNIO completely avoids the most expensive data movement overhead, it achieves up to 3x speedups for hybrid scientific workflow applications compared with current solutions.
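The abstract describes an I/O middleware that lets unmodified simulation codes perform POSIX I/O that is transparently served by a DIFS such as HDFS (for example through a FUSE mount like fuse-dfs). As a rough, purely illustrative sketch of that redirection idea, and not the published UNIO implementation, one could picture an LD_PRELOAD interposition library along the following lines; the path prefixes, library name, and the choice to interpose only open() are assumptions made for this example.

/* unio_shim.c -- hypothetical sketch, NOT the actual UNIO middleware.
 * Interposes open() so that files the simulation writes under /scratch/
 * are transparently redirected to a FUSE-mounted DIFS (e.g. fuse-dfs
 * mounted at /mnt/hdfs/). All names and paths here are illustrative. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

typedef int (*open_fn)(const char *, int, ...);

static const char *SCRATCH_PREFIX = "/scratch/";   /* where the simulation thinks it writes */
static const char *DIFS_MOUNT     = "/mnt/hdfs/";  /* FUSE mount point of the DIFS */

int open(const char *path, int flags, ...)
{
    static open_fn real_open = NULL;
    char redirected[4096];
    mode_t mode = 0;
    va_list ap;

    if (!real_open)
        real_open = (open_fn)dlsym(RTLD_NEXT, "open");

    /* open() carries a third argument only when O_CREAT is set. */
    va_start(ap, flags);
    if (flags & O_CREAT)
        mode = (mode_t)va_arg(ap, int);
    va_end(ap);

    /* Rewrite /scratch/... to the DIFS mount so the simulation's output
     * lands in HDFS without any change to the simulation program itself. */
    if (strncmp(path, SCRATCH_PREFIX, strlen(SCRATCH_PREFIX)) == 0) {
        snprintf(redirected, sizeof(redirected), "%s%s",
                 DIFS_MOUNT, path + strlen(SCRATCH_PREFIX));
        path = redirected;
    }
    return real_open(path, flags, mode);
}

Built as a shared library (gcc -shared -fPIC -o libunio_shim.so unio_shim.c -ldl) and activated via LD_PRELOAD, such a shim illustrates the "no porting effort" property; a real middleware would additionally cover the other I/O entry points (open64, creat, MPI-IO, and so on) and coordinate with the I/O scheduler the abstract mentions.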



Author information


Corresponding author

Correspondence to Dan Huang.


Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Huang, D., Yin, J., Wang, J., Zhang, X., Zhang, J., Zhou, J. (2015). UNIO: A Unified I/O System Framework for Hybrid Scientific Workflow. In: Qiang, W., Zheng, X., Hsu, CH. (eds) Cloud Computing and Big Data. CloudCom-Asia 2015. Lecture Notes in Computer Science, vol 9106. Springer, Cham. https://doi.org/10.1007/978-3-319-28430-9_8


  • DOI: https://doi.org/10.1007/978-3-319-28430-9_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-28429-3

  • Online ISBN: 978-3-319-28430-9

  • eBook Packages: Computer Science, Computer Science (R0)
