
Abstract

SCore is a software package for high-performance clusters. It includes a low-level communication layer named PM(v2); a user-level, global operating system called SCore-D; an MPI implementation; an OpenMP compiler that enables OpenMP programs to run on distributed-memory clusters; and other cluster-management utilities. SCore was developed by the Real World Computing Partnership project during 1992–2002. SCore provided state-of-the-art technologies at the time. Some of these technologies have since become obsolete, but others, e.g., gang scheduling and checkpointing with parity, are still unique.
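The "checkpoint with parity" mentioned above refers to diskless-style checkpointing, in which an XOR parity block computed across the nodes' checkpoint images allows the system to reconstruct any single lost image from the survivors. The sketch below is illustrative only, not SCore's actual implementation (Kondo et al. 2003 evaluate the real mechanism); all function names here are hypothetical.

```python
# Illustrative sketch of parity-based checkpointing (hypothetical names,
# not SCore's implementation). Each node contributes one equal-size
# checkpoint image; the XOR parity of all images tolerates one failure.

from functools import reduce


def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte blocks."""
    return bytes(x ^ y for x, y in zip(a, b))


def make_parity(images):
    """Compute the XOR parity over all nodes' checkpoint images."""
    return reduce(xor_blocks, images)


def recover(surviving_images, parity):
    """Rebuild the failed node's image from the parity and survivors,
    since parity ^ (XOR of survivors) equals the missing image."""
    return reduce(xor_blocks, surviving_images, parity)


# Usage: three nodes checkpoint, node 1 fails, its image is rebuilt.
images = [b"node0img", b"node1img", b"node2img"]
parity = make_parity(images)
lost = images.pop(1)
assert recover(images, parity) == lost
```

As with RAID-5, the XOR scheme trades one extra image worth of memory for tolerance of a single node failure between checkpoints.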


Notes

  1. PC Cluster Consortium: https://www.pccluster.org. The SCore software package series can still be downloaded from https://www.pccluster.org/en/score-download.html.

  2. The RSCC cluster was developed by Fujitsu, using Fujitsu's InfiniBand hardware. Fujitsu also developed a proprietary PM(v2) InfiniBand device, named PM/InfiniBand-FJ (Sumimoto et al. 2004). SCore, as an open-source software package, did not support InfiniBand at that time.

References

  • Boden, N. J., Cohen, D., Felderman, R. E., Kulawik, A. E., Seitz, C. L., Seizovic, J. N., et al. (1995). Myrinet: A gigabit-per-second local area network. IEEE Micro, 15(1), 29–36.

  • Buntinas, D., Mercier, G., & Gropp, W. (2006). Design and evaluation of Nemesis, a scalable, low-latency, message-passing communication subsystem. In Sixth IEEE International Symposium on Cluster Computing and the Grid, 2006. CCGRID 2006 (vol. 1, 10 pp.).

  • Cappello, F., Richard, O., & Etiemble, D. (2001). Understanding performance of SMP clusters running MPI programs. Future Generation Computer Systems, 17(6), 711–720.

  • Harada, H., Ishikawa, Y., Hori, A., Tezuka, H., Sumimoto, S., & Takahashi, T. (2000). Dynamic home node reallocation on software distributed shared memory. In HPC Asia 2000.

  • Hori, A. (2009). PMX Specification –DRAFT–. Allinea Software.

  • Hori, A., Tezuka, H., & Ishikawa, Y. (1997). Global state detection using network preemption. In JSSPP (pp. 262–276).

  • Hori, A., Tezuka, H., & Ishikawa, Y. (1998). Highly efficient gang scheduling implementation. In Proceedings of the 1998 ACM/IEEE conference on Supercomputing (CDROM), Supercomputing 1998 (pp. 1–14). USA: IEEE Computer Society.

  • Ishikawa, Y., Hori, A., Tezuka, H., Sumimoto, S., Takahashi, T., & Harada, H. (1999). Parallel C++ programming system on cluster of heterogeneous computers. In Heterogeneous Computing Workshop (pp. 73–82).

  • Ishikawa, Y. (1996). MPC++ approach to parallel computing environment. SIGAPP Applied Computing Review, 4(1), 15–18.

  • Jin, H. W., Sur, S., Chai, L., & Panda, D. K. (2005). LiMIC: support for high-performance MPI intra-node communication on Linux cluster. In 2005 International Conference on Parallel Processing (ICPP 2005) (pp. 184–191).

  • Kondo, M., Hayashida, T., Imai, M., Nakamura, H., Nanya, T., & Hori, A. (2003). Evaluation of checkpointing mechanism on SCore cluster system. IEICE Transactions on Information and Systems, 86(12), 2553–2562.

  • Kumon, K., Kimura, T., Hotta, K., & Hoshiya, T. (2004). RIKEN Super Combined Cluster (RSCC) system. Technical Report 2, Fujitsu.

  • Leiserson, C. E., Abuhamdeh, Z. S., Douglas, D. C., Feynman, C. R., Ganmukhi, M. N., Hill, J. V., et al. (1996). The network architecture of the connection machine CM-5. Journal of Parallel and Distributed Computing, 33(2), 145–158.

  • Nishioka, T., Hori, A., & Ishikawa, Y. (2000). Consistent checkpointing for high performance clusters. In CLUSTER (pp. 367–368).

  • O’Carroll, F., Tezuka, H., Hori, A., & Ishikawa, Y. (1998). The design and implementation of zero copy MPI using commodity hardware with a high performance network. In International Conference on Supercomputing (pp. 243–250).

  • Pakin, S., Karamcheti, V., & Chien, A. A. (1997). Fast messages: Efficient, portable communication for workstation clusters and MPPs. IEEE Concurrency, 5(2), 60–73.

  • Sato, M., Harada, H., Hasegawa, A., & Ishikawa, Y. (2001). Cluster-enabled OpenMP: An OpenMP compiler for the SCASH software distributed shared memory system. Scientific Programming, 9(2,3), 123–130.

  • Sterling, T., Becker, D. J., Savarese, D., Dorband, J. E., Ranawake, U. A., & Packer, C. V. (1995). Beowulf: A parallel workstation for scientific computation. In Proceedings of the 24th International Conference on Parallel Processing (pp. 11–14). CRC Press.

  • Sumimoto, S., Naruse, A., Kumon, K., Hosoe, K., & Shimizu, T. (2004). PM/InfiniBand-FJ: A high performance communication facility using InfiniBand for large scale PC clusters. In Proceedings of Seventh International Conference on High Performance Computing and Grid in Asia Pacific Region (pp. 104–113).

  • Sumimoto, S., Tezuka, H., Hori, A., Harada, H., Takahashi, T., & Ishikawa, Y. (1999). The design and evaluation of high performance communication using a Gigabit Ethernet. In International Conference on Supercomputing (pp. 260–267).

  • Sumimoto, S., Tezuka, H., Hori, A., Harada, H., Takahashi, T., & Ishikawa, Y. (2000a). GigaE PM: A high performance communication facility using a Gigabit Ethernet. New Generation Computing, 18(2), 177–186.

  • Sumimoto, S., Tezuka, H., Hori, A., Harada, H., Takahashi, T., & Ishikawa, Y. (2000b). High performance communication using a commodity network for cluster systems. In HPDC (pp. 139–146).

  • Takahashi, T., O’Carroll, F., Tezuka, H., Hori, A., Sumimoto, S., Harada, H., et al. (1999). Implementation and evaluation of MPI on an SMP cluster. In IPPS/SPDP Workshops (pp. 1178–1192).

  • Takahashi, T., Sumimoto, S., Hori, A., Harada, H., & Ishikawa, Y. (2000). PM2: A high performance communication middleware for heterogeneous network environments. In SC.

  • Tezuka, H., Hori, A., & Ishikawa, Y. (1997). PM: A high-performance communication library for multi-user parallel environments. In USENIX 1997.

  • Tezuka, H., O’Carroll, F., Hori, A., & Ishikawa, Y. (1998). Pin-down Cache: A virtual memory management technique for zero-copy communication. In Proceedings of the 12th International Parallel Processing Symposium on International Parallel Processing Symposium, IPPS 1998 (p. 308). USA: IEEE Computer Society.

  • von Eicken, T., Basu, A., Buch, V., & Vogels, W. (1995). U-Net: A user-level network interface for parallel and distributed computing. SIGOPS Operating System Review, 29, 40–53.

  • von Eicken, T., Culler, D. E., Goldstein, S. C., & Schauser, K. E. (1992). Active messages: a mechanism for integrated communication and computation. In Proceedings of the 19th Annual International Symposium on Computer Architecture, ISCA 1992 (pp. 256–266). USA: ACM.


Author information

Correspondence to Atsushi Hori.


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

Hori, A. et al. (2019). SCore. In: Gerofi, B., Ishikawa, Y., Riesen, R., Wisniewski, R.W. (eds) Operating Systems for Supercomputers and High Performance Computing. High-Performance Computing Series, vol 1. Springer, Singapore. https://doi.org/10.1007/978-981-13-6624-6_8

  • DOI: https://doi.org/10.1007/978-981-13-6624-6_8

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-13-6623-9

  • Online ISBN: 978-981-13-6624-6

  • eBook Packages: Computer Science (R0)
