Ursa Major: Exploring Web technology for design and evaluation of high-performance systems

Conference paper in: High-Performance Computing and Networking (HPCN-Europe 1998)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1401)

Abstract

Ursa Major is a Web-based tool that facilitates the gathering of information about program characteristics and their execution behavior on high-performance machines. It supports users who want to learn about programs and programming methodologies, as well as users who want to study computational applications and their performance aspects in detail. We present early experiences with Ursa Major and our vision of how new Internet technology, combined with high-performance computing systems, can be put to maximal use.

This work was supported in part by Purdue University, U. S. Army contract #DABT63-92-C-0033, and an NSF CAREER award. This work is not necessarily representative of the positions or policies of the U. S. Army or the Government.



Editor information

Peter Sloot, Marian Bubak, Bob Hertzberger


Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Park, I., Eigenmann, R. (1998). Ursa Major: Exploring Web technology for design and evaluation of high-performance systems. In: Sloot, P., Bubak, M., Hertzberger, B. (eds) High-Performance Computing and Networking. HPCN-Europe 1998. Lecture Notes in Computer Science, vol 1401. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0037181


  • DOI: https://doi.org/10.1007/BFb0037181

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-64443-9

  • Online ISBN: 978-3-540-69783-1

  • eBook Packages: Springer Book Archive
