DOI: 10.1145/2555243.2555275

Parallelization hints via code skeletonization

Published: 06 February 2014

ABSTRACT

Tools that provide optimization hints to program developers face severe obstacles and are often unable to give meaningful guidance on how to parallelize real-life applications. The main reason is the complexity and size of commercially valuable code, which is typically rich with pointers, deeply nested conditional statements, nested while-based loops, function calls, and similar constructs that prevent existing compiler analyses from extracting the full parallelization potential. We propose a new paradigm that overcomes this obstacle by automatically transforming the code into a much simpler, skeleton-like form that is more conducive to auto-parallelization. We then apply existing source-level automatic parallelization tools to the skeletonized code in order to expose possible parallelization patterns. The skeleton code, along with its parallelized version, is then presented to the programmer as an IDE (Integrated Development Environment) recommendation.

The proposed skeletonization algorithm replaces pointers with integer indexes and C-struct references with references to multi-dimensional arrays, since automatic parallelizers cannot handle pointer expressions. For example, while (p != NULL) { p->val++; p = p->next; } is skeletonized to the parallelizable for (Ip = 0; Ip < N; Ip++) { Aval[Ip]++; }, where Aval[] holds the embedding of the original list. The main goal of the skeletonization process is thus to embed pointer-based data structures into arrays. Although the skeletonized code is not semantically equivalent to the original code, it points out a possible parallelization pattern for the code segment and can serve as an effective parallelization hint to the programmer. We applied the method to several representative benchmarks from SPEC CPU 2000 and reached up to 80% performance gain after several sequential code segments had been manually parallelized based on the parallelization patterns of the generated skeletons. In a separate set of experiments we estimated the potential of skeletonization for a larger set of SPEC CPU 2000 programs and found that an additional 27% of loops could be parallelized/vectorized due to skeletonization.
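
The following minimal C sketch works through the list example above end to end. Everything beyond the two loops quoted in the abstract (the list construction, the embedding loop that fills Aval[], the value of N, and the OpenMP pragma standing in for an auto-parallelizer's output) is an illustrative assumption, not something prescribed by the poster.

/* Sketch of the skeletonization example from the abstract.
 * The embedding loop, N, and the OpenMP pragma are illustrative assumptions. */
#include <stdio.h>
#include <stdlib.h>

typedef struct node { int val; struct node *next; } node;

int main(void) {
    enum { N = 10 };
    node *head = NULL;

    /* Build a small linked list of N nodes. */
    for (int i = 0; i < N; i++) {
        node *n = malloc(sizeof *n);
        n->val = i;
        n->next = head;
        head = n;
    }

    /* Original loop: the pointer chase p = p->next defeats existing
     * source-level auto-parallelizers. */
    node *p = head;
    while (p != NULL) { p->val++; p = p->next; }

    /* Skeletonization, step 1: embed the list into the array Aval[]. */
    int Aval[N];
    int Ip = 0;
    for (p = head; p != NULL; p = p->next)
        Aval[Ip++] = p->val;

    /* Skeletonization, step 2: the traversal becomes an index-based loop.
     * In the skeletonized program this loop replaces the while loop above;
     * the pragma is one possible hint an auto-parallelizer could emit. */
    #pragma omp parallel for
    for (Ip = 0; Ip < N; Ip++)
        Aval[Ip]++;

    for (Ip = 0; Ip < N; Ip++)
        printf("%d ", Aval[Ip]);
    printf("\n");

    /* Release the list. */
    while (head != NULL) { p = head->next; free(head); head = p; }
    return 0;
}

As the abstract stresses, the skeleton is not semantically equivalent to the original loop; it only exposes a parallelization pattern that the programmer can then carry back to the real pointer-based code.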


Published in

PPoPP '14: Proceedings of the 19th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming
February 2014
412 pages
ISBN: 9781450326568
DOI: 10.1145/2555243

        Copyright © 2014 Owner/Author

        Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

        Publisher

        Association for Computing Machinery

        New York, NY, United States


        Qualifiers

        • poster

        Acceptance Rates

PPoPP '14 Paper Acceptance Rate: 28 of 184 submissions, 15%. Overall Acceptance Rate: 230 of 1,014 submissions, 23%.