
The SPNT test: A new technology for run-time speculative parallelization of loops

Languages and Compilers for Parallel Computing (LCPC 1997)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 1366))

Abstract

Run-time loop parallelization is the only way for parallelizing compilers to exploit the potential parallelism of loops whose dependence information is statically inadequate. There are two approaches in this field: the inspector-executor method [17] and the speculative DOALL test [13]. The former always incurs heavy preprocessing overhead in the inspector phase, as well as synchronization-barrier costs and load-imbalance effects in the executor phase. In this paper, a new, highly practicable speculative parallelization test, the SPNT test (Speculative Parallelization with New Technology), is presented. Speculative parallel execution as a DOALL obtains the largest speedup when the loop is in fact a DOALL loop; otherwise, it suffers a considerable penalty. The objective of the SPNT test is twofold. The first is to increase the success rate by ignoring avoidable dependence restrictions. The second is to reduce the failure penalty by detecting unavoidable data dependences and abandoning the speculative parallel execution as early as possible. As a result, the SPNT test can greatly improve the practicability of speculative parallel execution.
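
To make the failure-penalty trade-off concrete, the following is a minimal C/OpenMP sketch of generic speculative DOALL execution with a run-time dependence check in the spirit of the LRPD test [13]; it is not the SPNT algorithm itself, and the helpers idx_r(), idx_w(), and work() are hypothetical stand-ins for subscripts and a loop body that cannot be analyzed statically. The loop is checkpointed, run in parallel as if it were a DOALL while every access is marked in shadow arrays, and a post-execution analysis of the marks decides whether the speculation was valid; on failure the data is rolled back and the loop is re-executed serially.

    /*
     * Sketch only: generic speculative DOALL with run-time marking and
     * rollback (LRPD-style [13]), not the SPNT test itself.  The subscript
     * functions and loop body below are illustrative placeholders.
     */
    #include <stdio.h>
    #include <string.h>

    #define N 1024

    static double a[N], a_backup[N];     /* data array and its checkpoint */
    static int    reads[N], writes[N];   /* shadow arrays for marking     */

    /* Placeholder subscripts and body; in practice these are the loop's
     * statically unanalyzable accesses. */
    static int    idx_r(int i) { return (7 * i + 3) % N; }
    static int    idx_w(int i) { return (5 * i + 1) % N; }
    static double work(double x) { return x * 0.5 + 1.0; }

    /* Returns 1 if the loop executed as a DOALL, 0 if speculation failed
     * and the loop had to be re-executed serially. */
    int speculative_doall(int n)
    {
        int violated = 0;

        memcpy(a_backup, a, sizeof a);   /* checkpoint for rollback       */
        memset(reads,  0, sizeof reads);
        memset(writes, 0, sizeof writes);

        /* Phase 1: execute speculatively in parallel, marking accesses. */
        #pragma omp parallel for
        for (int i = 0; i < n; i++) {
            int r = idx_r(i), w = idx_w(i);
            #pragma omp atomic update
            reads[r]  += 1;
            #pragma omp atomic update
            writes[w] += 1;
            a[w] = work(a[r]);           /* speculative loop body         */
        }

        /* Phase 2: analyze the marks.  A location written more than once,
         * or both written and read, conservatively signals a cross-
         * iteration dependence, so the speculation is declared invalid. */
        for (int j = 0; j < N; j++)
            if (writes[j] > 1 || (writes[j] && reads[j]))
                violated = 1;

        if (violated) {
            memcpy(a, a_backup, sizeof a);       /* roll back ...         */
            for (int i = 0; i < n; i++)          /* ... and re-run serially */
                a[idx_w(i)] = work(a[idx_r(i)]);
            return 0;
        }
        return 1;
    }

    int main(void)
    {
        for (int i = 0; i < N; i++) a[i] = (double)i;
        printf("executed as DOALL: %d\n", speculative_doall(N));
        return 0;
    }

In this naive scheme the cost of a failed speculation is the checkpoint, the wasted parallel run, and the full serial re-execution; reducing exactly this penalty, by detecting unavoidable dependences and abandoning the speculation as early as possible, is one of the two stated objectives of the SPNT test.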


References

  1. U. Banerjee, R. Eigenmann, A. Nicolau, and D. A. Padua, "Automatic program parallelization," Proc. IEEE, vol. 81, no. 2, Feb. 1993.

  2. M. Berry, D. Chen, P. Koss, D. Kuck, S. Lo, Y. Pang, R. Roloff, A. Sameh, E. Clementi, S. Chin, D. Schneider, G. Fox, P. Messina, D. Walker, C. Hsiung, J. Schwarzmeier, K. Lue, S. Orzag, F. Seidl, O. Johnson, G. Swanson, R. Goodrum, and J. Martin, "The PERFECT club benchmarks: Effective performance evaluation of supercomputers," CSRD Rep. 827, Univ. Illinois, Urbana-Champaign, May 1989.

  3. D. K. Chen, P. C. Yew, and J. Torrellas, "An efficient algorithm for the run-time parallelization of doacross loops," in Proc. 1994 Supercomputing, Nov. 1994, pp. 518–527.

  4. M. H. Hsieh and S. S. Tseng, "An efficient run-time parallelizing method for multiprocessor systems," M.S. thesis, Dept. CIS, National Chiao Tung Univ., R.O.C., May 1996.

  5. T. R. Lawrence, "Implementation of run-time techniques in the Polaris Fortran restructurer," M.S. thesis, Univ. Illinois, Urbana-Champaign, 1996.

  6. S. T. Leung and J. Zahorjan, "Improving the performance of run-time parallelization," in Proc. 4th ACM SIGPLAN Symp. Principles and Practice of Parallel Programming, May 1993, pp. 83–91.

  7. S. T. Leung and J. Zahorjan, "Extending the applicability and improving the performance of run-time parallelization," Dept. CSE, Univ. Washington, Rep. 95-01-08, Jan. 1995.

  8. S. P. Midkiff and D. A. Padua, "Compiler algorithms for synchronization," IEEE Trans. Comput., vol. C-36, no. 12, pp. 1485–1495, Dec. 1987.

  9. D. A. Padua, "Outline of a roadmap for compiler technology," CSRD Rep. 1489, Univ. Illinois, Urbana-Champaign, May 1996.

  10. C. D. Polychronopoulos, "Compiler optimizations for enhancing parallelism and their impact on architecture design," IEEE Trans. Comput., vol. C-37, no. 8, pp. 991–1004, Aug. 1988.

  11. L. Rauchwerger, N. M. Amato, and D. A. Padua, "A scalable method for run-time loop parallelization," CSRD Rep. 1444, Univ. Illinois, Urbana-Champaign, Aug. 1995.

  12. L. Rauchwerger and D. A. Padua, "The privatizing DOALL test: A run-time technique for DOALL loop identification and array privatization," in Proc. 1994 ACM Int. Conf. Supercomputing, July 1994, pp. 33–43.

  13. L. Rauchwerger and D. A. Padua, "The LRPD test: Speculative run-time parallelization of loops with privatization and reduction parallelization," in Proc. 1995 SIGPLAN Conf. Programming Language Design and Implementation, CA, June 1995, pp. 218–232.

  14. L. Rauchwerger, "Run-time parallelization: A framework for parallel computation," Ph.D. dissertation, Univ. Illinois, Urbana-Champaign, 1995.

  15. J. H. Saltz, R. Mirchandaney, and K. Crowley, "The doconsider loop," in Proc. 1989 ACM Int. Conf. Supercomputing, June 1989, pp. 29–40.

  16. J. H. Saltz, R. Mirchandaney, and K. Crowley, "The preprocessed doacross loop," in H. D. Schwetman, Ed., Proc. 1991 Int. Conf. Parallel Processing, vol. II (Software), CRC Press, 1991, pp. 174–178.

  17. J. H. Saltz, R. Mirchandaney, and K. Crowley, "Run-time parallelization and scheduling of loops," IEEE Trans. Comput., vol. 40, no. 5, pp. 603–612, May 1991.

  18. M. Wolfe, High Performance Compilers for Parallel Computing, Addison-Wesley Publishing, CA, 1996.

  19. J. Wu, J. H. Saltz, S. Hiranandani, and H. Berryman, "Run-time compilation methods for multicomputers," in H. D. Schwetman, Ed., Proc. 1991 Int. Conf. Parallel Processing, vol. II (Software), CRC Press, 1991, pp. 26–30.

  20. C. T. Yang, C. D. Chuang, and S. S. Tseng, "KPLS: An efficient knowledge-based parallel loop scheduling for parallelizing compilers," Dept. CIS, National Chiao Tung Univ., R.O.C., June 1996.

  21. C. Q. Zhu and P. C. Yew, "A scheme to enforce data dependence on large multiprocessor systems," IEEE Trans. Software Eng., vol. 13, no. 6, pp. 726–739, June 1987.

  22. H. Zima and B. Chapman, Supercompilers for Parallel and Vector Computers, Addison-Wesley Publishing and ACM Press, NY, 1991.


Editor information

Zhiyuan Li, Pen-Chung Yew, Siddhartha Chatterjee, Chua-Huang Huang, P. Sadayappan, David Sehr


Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Huang, TC., Hsu, PH. (1998). The SPNT test: A new technology for run-time speculative parallelization of loops. In: Li, Z., Yew, PC., Chatterjee, S., Huang, CH., Sadayappan, P., Sehr, D. (eds) Languages and Compilers for Parallel Computing. LCPC 1997. Lecture Notes in Computer Science, vol 1366. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0032691


  • DOI: https://doi.org/10.1007/BFb0032691


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-64472-9

  • Online ISBN: 978-3-540-69788-6

