DOI: 10.1145/2500727.2500740
research-article

Lazy tree mapping: generalizing and scaling deterministic parallelism

Published: 29 July 2013

ABSTRACT

Many parallel programs are intended to yield deterministic results, but unpredictable thread or process interleavings can lead to subtle bugs and nondeterminism. We are exploring a producer-consumer memory model---SPMC---for efficient system-enforced deterministic parallelism. However, the previous eager page-mapping scheme wastes physical memory and cannot support large, realistic applications. This paper presents a novel lazy tree mapping approach to the model, which introduces a "shadow page table" to allocate pages on demand, and extends an SPMC region with a tree of lazily generated pages, representing an infinite stream while reusing a finite range of virtual addresses. To make SPMC more practical, we build Dlinux, which emulates the SPMC model entirely in Linux user space: it uses virtual memory to emulate physical pages and sets up page tables at user level to emulate lazy tree mapping. Atop SPMC, we explore DetMP and DetMPI and integrate them into Dlinux, offering both thread- and process-level deterministic message-passing programming. Experimental evaluations suggest that lazy tree mapping improves memory use and address reuse. Dlinux scales close to ideally on 2048x2048 matrix multiplication (matmult), and better than MPICH2 for some workloads with larger input datasets.


Published in
APSys '13: Proceedings of the 4th Asia-Pacific Workshop on Systems
July 2013, 131 pages
ISBN: 9781450323161
DOI: 10.1145/2500727
Copyright © 2013 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Qualifiers: research-article

Acceptance Rates
APSys '13 paper acceptance rate: 23 of 73 submissions, 32%
Overall acceptance rate: 149 of 386 submissions, 39%
