
Exploiting resource profiling mechanism for large-scale scientific computing on grids


Abstract

Large-scale scientific applications from various domains (e.g., astronomy, physics, pharmaceuticals, and chemistry) usually require substantial amounts of computing resources and storage space. International Grid computing resources can be a viable choice for supporting these challenging applications, so effectively locating suitable computing resources with minimal allocation overhead is crucial. However, Grid resource availability is highly unstable, and the current Grid Information Service (GIS) cannot provide accurate state information. This makes it very difficult for users to schedule jobs on the Grid system and to map tasks onto appropriate available resources. In this paper, we present SCOUT, a system that periodically profiles Grid Computing Elements (CEs) based on the number of available CPU cores and the average response time, and monitors the performance of each CE in a Virtual Organization (VO). Micro-benchmark experimental results demonstrate that leveraging the data profiled by SCOUT can improve the success rate of task executions and reduce the average response time.
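The abstract describes SCOUT's profiling mechanism only at a high level. As a rough, non-authoritative sketch of how such a periodic profiler could be structured, the Python fragment below tracks each Computing Element's free CPU cores and a running history of response times, then ranks CEs accordingly. The names CEProfile, probe_ce, and rank_ces, as well as the ranking policy (more free cores first, lower average response time as a tie-breaker), are assumptions made for illustration, not SCOUT's actual implementation.

```python
import time
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class CEProfile:
    """Profiled state of one Computing Element (hypothetical structure)."""
    name: str
    free_cores: int = 0
    response_times: list[float] = field(default_factory=list)  # seconds

    @property
    def avg_response_time(self) -> float:
        # Unprobed CEs sort last when ranking by response time.
        return mean(self.response_times) if self.response_times else float("inf")

def probe_ce(ce_name: str) -> tuple[int, float]:
    """Placeholder probe: a real system would submit a lightweight test job
    or query the Grid Information Service, returning the number of free
    CPU cores and the measured response time in seconds."""
    start = time.monotonic()
    free_cores = 0  # stand-in; would come from the CE / information system
    return free_cores, time.monotonic() - start

def profiling_round(profiles: dict[str, CEProfile]) -> None:
    """One periodic pass over all known CEs, updating their profiles."""
    for name, profile in profiles.items():
        cores, elapsed = probe_ce(name)
        profile.free_cores = cores
        profile.response_times.append(elapsed)

def rank_ces(profiles: dict[str, CEProfile]) -> list[CEProfile]:
    """Prefer CEs with more free cores; break ties on lower average
    response time (one plausible reading of the abstract's criteria)."""
    return sorted(profiles.values(),
                  key=lambda p: (-p.free_cores, p.avg_response_time))
```

Under these assumptions, a scheduler could dispatch tasks to the head of this ranking and fall back to the next CE on failure, which is consistent with (though not necessarily identical to) how profiled data could improve task success rates as the abstract reports.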



Author information

Corresponding author

Correspondence to Jik-Soo Kim.

Additional information

This work was supported by Institute for Information and communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (No. R0190-15-2012, High Performance Big Data Analytics Platform Performance Acceleration Technologies Development).


About this article

Cite this article

Hossain, M.A., Nguyen, C.N., Kim, J.-S. et al. Exploiting resource profiling mechanism for large-scale scientific computing on grids. Cluster Comput 19, 1527–1539 (2016). https://doi.org/10.1007/s10586-016-0590-9
