
Comparing Message Passing Interface and MapReduce for large-scale parallel ranking and selection


Abstract:

We compare two methods for implementing ranking and selection algorithms in large-scale parallel computing environments. The Message Passing Interface (MPI) gives the programmer complete control over sending and receiving messages between cores, but is fragile with regard to core failures or messages going awry. In contrast, MapReduce handles all communication and is quite robust, but is more rigid in terms of how algorithms can be coded. As expected in a high-performance computing context, we find that MPI is the more efficient of the two environments, although MapReduce is a reasonable choice. Accordingly, MapReduce may be attractive in environments where cores can stall or fail, as can happen in low-budget cloud computing.
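As a rough illustration (not taken from the paper), the following C sketch shows the kind of explicit point-to-point messaging the abstract attributes to MPI: a master rank hands a placeholder "system" index to each worker and waits for an estimated performance value in return. The work assignment and the simulation itself are stand-ins, not the authors' ranking-and-selection procedure.

/* Minimal master/worker MPI sketch; system assignment and simulation are placeholders. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* Master: send one system index to each worker, then collect results.
         * If a worker stalls or dies, MPI_Recv blocks forever -- the
         * fragility the abstract refers to. */
        for (int w = 1; w < size; ++w) {
            int system_id = w;  /* placeholder work assignment */
            MPI_Send(&system_id, 1, MPI_INT, w, 0, MPI_COMM_WORLD);
        }
        for (int w = 1; w < size; ++w) {
            double estimate;
            MPI_Recv(&estimate, 1, MPI_DOUBLE, w, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("system %d: estimate %f\n", w, estimate);
        }
    } else {
        /* Worker: receive an assignment, run a (placeholder) simulation,
         * and send the sample estimate back to the master. */
        int system_id;
        MPI_Recv(&system_id, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        double estimate = (double)system_id;  /* stand-in for a simulated mean */
        MPI_Send(&estimate, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}

In a MapReduce implementation, the same fan-out and fan-in would instead be expressed as map and reduce functions, with the framework handling distribution, communication, and re-execution of failed tasks, which is the robustness-versus-rigidity trade-off the abstract describes.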
Date of Conference: 06-09 December 2015
Date Added to IEEE Xplore: 18 February 2016
Electronic ISSN: 1558-4305
Conference Location: Huntington Beach, CA, USA
