
Performance prediction of parallel computing models to analyze cloud-based big data applications

Published in: Cluster Computing

Abstract

Performance evaluation of cloud centers is a necessary prerequisite to fulfilling contractual quality of service, particularly for big data applications. However, effectively evaluating the performance of cloud services is challenging due to the complexity of cloud services and the diversity of big data applications. In this paper, we propose a performance evaluation model for parallel computing models deployed in cloud centers to support big data applications. In this evaluation model, a big data application is divided into a large number of parallel tasks whose arrivals follow a general distribution. Our approach also accounts for resource heterogeneity, resource contention among cloud nodes, and data storage strategy, all of which affect the performance of parallel computing models. The model allows us to calculate key performance indicators of a cloud center, such as the mean number of tasks in the system, the probability that a task obtains immediate service, and task waiting time, and it can also be used to predict application execution time. We then demonstrate the utility of the model through simulations and benchmarking with the WordCount and TeraSort applications.
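The paper's model assumes generally distributed task arrivals, which has no simple closed form. As a rough, hypothetical illustration of the key performance indicators named in the abstract, the classical M/M/m (Erlang C) formulas compute the same quantities under the simpler assumption of Poisson arrivals and exponential service; the function name and parameters below are this sketch's own, not the paper's.

```python
from math import factorial

def mmm_metrics(lam, mu, m):
    """KPIs for an M/M/m queue with arrival rate lam, per-server
    service rate mu, and m servers (a stand-in for cloud nodes)."""
    a = lam / mu          # offered load
    rho = a / m           # server utilization
    assert rho < 1, "system must be stable (rho < 1)"
    # Erlang C: probability that an arriving task must wait
    top = a**m / factorial(m)
    bottom = (1 - rho) * sum(a**k / factorial(k) for k in range(m)) + top
    p_wait = top / bottom
    wq = p_wait / (m * mu - lam)   # mean waiting time in queue
    l = lam * wq + a               # mean number in system (Little's law)
    return {
        "p_immediate": 1 - p_wait,  # task obtains immediate service
        "mean_wait": wq,            # task waiting time
        "mean_in_system": l,        # mean number of tasks in the system
    }

# Example: 3 tasks/s arriving at 2 nodes that each serve 2 tasks/s.
kpis = mmm_metrics(lam=3.0, mu=2.0, m=2)
```

With m = 1 the formulas reduce to the familiar M/M/1 results, which provides a quick sanity check on the implementation.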




Acknowledgements

This work is partially supported by the Shanghai Innovation Action Plan Project under Grant No. 16511101200.

Author information

Correspondence to Chao Shen.


About this article


Cite this article

Shen, C., Tong, W., Choo, K.-K.R. et al. Performance prediction of parallel computing models to analyze cloud-based big data applications. Cluster Comput. 21, 1439–1454 (2018). https://doi.org/10.1007/s10586-017-1385-3

