
A Hybrid Parallel Search Algorithm for Solving Combinatorial Optimization Problems on Multicore Clusters

  • Conference paper
Algorithms and Architectures for Parallel Processing (ICA3PP 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10393)


Abstract

Multicore clusters are widely used to solve combinatorial optimization problems, which require high computing power and a large amount of memory. In this context, Hash Distributed A* (HDA*) parallelizes A*, a combinatorial optimization algorithm, using the MPI library, and it scales well on both multicore clusters and multicore machines. In addition, several versions of HDA* have been adapted to multicore machines using the Pthreads library. In this paper, we present Hybrid HDA* (HHDA*), a hybrid parallel search algorithm based on HDA* that combines message passing (MPI) with shared-memory programming (Pthreads) to better exploit the computing power and memory of multicore clusters. We evaluate the performance and memory consumption of HHDA* on a multicore cluster, using the 15-puzzle as a case study. The results reveal that HHDA* achieves slightly higher average performance and uses considerably less memory than HDA*. These improvements allowed HHDA* to solve one of the hardest 15-puzzle instances.
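
To make the hybrid structure concrete, the sketch below shows, in C, one plausible way to combine the two levels of parallelism: one MPI process per node, a fixed number of Pthreads worker threads inside each process, and a hash of each state that selects the MPI rank owning that state. This is an illustrative assumption, not the authors' implementation: the thread count, the FNV-1a hash (HDA*-style algorithms commonly rely on Zobrist hashing), and the worker body are placeholders.

/* Hypothetical sketch of a hybrid MPI + Pthreads layout for hash-distributed
 * search. Names and details are illustrative, not the code from the paper. */
#include <mpi.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define NTHREADS 4            /* worker threads per MPI process (assumed) */

typedef struct { uint8_t tiles[16]; } state_t;   /* 15-puzzle board, 0 = blank */

/* Simple FNV-1a hash over the board; shown only to keep the sketch short.
 * HDA*-style algorithms typically use Zobrist hashing instead. */
static uint64_t state_hash(const state_t *s) {
    uint64_t h = 1469598103934665603ULL;
    for (int i = 0; i < 16; i++) { h ^= s->tiles[i]; h *= 1099511628211ULL; }
    return h;
}

typedef struct { int rank, nprocs, tid; } worker_arg_t;

static void *worker(void *argp) {
    worker_arg_t *a = (worker_arg_t *)argp;
    /* In a full search, each thread would expand states taken from an open
     * list, hash every generated successor, and either keep it locally or
     * send it to its owning rank via MPI. Here we only show the owner
     * computation for one example state. */
    state_t s = {{15, 14, 13, 12, 10, 11, 8, 9, 2, 6, 5, 1, 3, 7, 4, 0}};
    int owner_rank = (int)(state_hash(&s) % (uint64_t)a->nprocs);
    printf("rank %d, thread %d: example state owned by rank %d\n",
           a->rank, a->tid, owner_rank);
    return NULL;
}

int main(int argc, char **argv) {
    int provided, rank, nprocs;
    /* Threads plus MPI requires MPI_THREAD_MULTIPLE if every thread
     * communicates; FUNNELED would suffice if only one thread calls MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not available\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    pthread_t th[NTHREADS];
    worker_arg_t args[NTHREADS];
    for (int t = 0; t < NTHREADS; t++) {
        args[t] = (worker_arg_t){ rank, nprocs, t };
        pthread_create(&th[t], NULL, worker, &args[t]);
    }
    for (int t = 0; t < NTHREADS; t++) pthread_join(th[t], NULL);

    MPI_Finalize();
    return 0;
}

A program of this shape can be built with mpicc -pthread and launched with one process per node, so that the threads of a process share the node's memory while inter-node traffic goes through MPI.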


Notes

  1. Efficiency is defined as Sp/N, where Sp is the speedup of the parallel algorithm over the sequential algorithm and N is the number of workers/cores used (a worked example follows these notes).

  2. 15 14 13 12 10 11 8 9 2 6 5 1 3 7 4 0.
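
As a purely illustrative reading of the definition in Note 1 (the numbers below are hypothetical and are not experimental results from the paper), in LaTeX notation:

\[
  E = \frac{S_p}{N}, \qquad \text{e.g. } S_p = 12 \text{ on } N = 16 \text{ cores gives } E = \frac{12}{16} = 0.75 .
\]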


Author information

Corresponding author: Victoria Sanz.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Sanz, V., De Giusti, A., Naiouf, M. (2017). A Hybrid Parallel Search Algorithm for Solving Combinatorial Optimization Problems on Multicore Clusters. In: Ibrahim, S., Choo, KK., Yan, Z., Pedrycz, W. (eds) Algorithms and Architectures for Parallel Processing. ICA3PP 2017. Lecture Notes in Computer Science, vol 10393. Springer, Cham. https://doi.org/10.1007/978-3-319-65482-9_62


  • DOI: https://doi.org/10.1007/978-3-319-65482-9_62


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-65481-2

  • Online ISBN: 978-3-319-65482-9

  • eBook Packages: Computer Science, Computer Science (R0)
