NEKTAR, SPICE and Vortonics: using federated grids for large scale scientific applications

  • Original Paper
  • Published in: Cluster Computing 10, 351–364 (2007)

Abstract

In response to a joint call from the US’s NSF and the UK’s EPSRC for applications that aim to utilize the combined computational resources of the US and the UK, three computational science groups, from UCL, Tufts and Brown Universities, teamed up with a middleware team from NIU/Argonne to meet the challenge. Although the groups had three distinct codes and aims, the projects shared a common underlying feature: each was a large-scale distributed application that required high-end networking and advanced middleware in order to be deployed effectively. For example, cross-site runs proved to be a very effective strategy for overcoming the limitations of any single resource.
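
To illustrate the cross-site strategy, the sketch below shows one common pattern for such runs: a single MPI job spanning several sites is split into per-site subcommunicators, so that frequent, latency-sensitive communication stays inside each site and only occasional coarse-grained exchanges cross the wide-area link. This is a minimal illustrative sketch in Python with mpi4py, not any of the projects' actual codes; the SITE_NAME environment variable is a hypothetical stand-in for whatever site identification the launcher provides (grid-enabled MPI implementations such as MPICH-G2 supply this kind of topology awareness at the library level).

    # Minimal sketch: split a federated MPI job into per-site subcommunicators.
    # Assumes the launcher sets SITE_NAME (hypothetical) on every process.
    import os
    import zlib
    from mpi4py import MPI

    world = MPI.COMM_WORLD
    site = os.environ.get("SITE_NAME", "site0")

    # Deterministic, process-independent colour per site (crc32 rather than
    # hash(), because Python's string hash is randomized per process).
    colour = zlib.crc32(site.encode()) % (1 << 30)
    local = world.Split(color=colour, key=world.Get_rank())

    # Frequent, tightly coupled exchange stays within the site...
    local_sum = local.allreduce(world.Get_rank(), op=MPI.SUM)

    # ...while only rank 0 of each site contributes to the rare cross-site
    # reduction that traverses the wide-area network.
    contribution = local_sum if local.Get_rank() == 0 else 0
    global_sum = world.allreduce(contribution, op=MPI.SUM)

    if world.Get_rank() == 0:
        print("sum of all world ranks:", global_sum)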

The seamless federation of a grid-of-grids remains difficult. Even if interoperability existed at the middleware and software-stack levels, it would not guarantee that the federated grids could be used for large-scale distributed applications. There are important additional requirements, for example compatible and consistent usage policies, automated advance reservations and, most important of all, co-scheduling. This paper outlines the scientific motivation behind the three projects and explains why distributed resources are critical to all of them. It documents the challenges encountered in using a grid-of-grids and some of the solutions devised in response.
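
To make the co-scheduling requirement concrete: a cross-site run can only start once every participating site can commit resources over the same time interval, which is what co-scheduling frameworks such as HARC negotiate automatically. The sketch below is a toy illustration of the underlying scheduling problem, not any real co-scheduler's algorithm: given each site's free windows, it finds the earliest interval, long enough for the run, that is free at every site.

    # Toy illustration of the co-scheduling problem: find the earliest
    # window of a given length that is free at every site. Each site's
    # free windows are (start, end) pairs in hours, sorted and disjoint.

    def intersect(a, b):
        """Pairwise intersection of two sorted lists of disjoint windows."""
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            lo = max(a[i][0], b[j][0])
            hi = min(a[i][1], b[j][1])
            if lo < hi:
                out.append((lo, hi))
            # advance whichever window closes first
            if a[i][1] < b[j][1]:
                i += 1
            else:
                j += 1
        return out

    def earliest_common_window(sites, hours):
        common = sites[0]
        for nxt in sites[1:]:
            common = intersect(common, nxt)
        for lo, hi in common:
            if hi - lo >= hours:
                return (lo, lo + hours)
        return None  # no common slot: the run cannot be co-scheduled

    # Three sites' free slots; the cross-site run needs 4 contiguous hours.
    sites = [[(0, 6), (10, 20)], [(2, 8), (12, 24)], [(3, 7), (15, 22)]]
    print(earliest_common_window(sites, 4))  # -> (15, 19)

In practice the hard part is not this computation but obtaining and holding tentative reservations at every site while agreement is reached, which is why automated advance reservations and co-scheduling are identified here as the key missing pieces.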

Author information

Corresponding author

Correspondence to Shantenu Jha.

About this article

Cite this article

Boghosian, B., Coveney, P., Dong, S. et al. NEKTAR, SPICE and Vortonics: using federated grids for large scale scientific applications. Cluster Comput 10, 351–364 (2007). https://doi.org/10.1007/s10586-007-0029-4
