Flexible MapReduce Workflows for Cloud Data Analytics

Carlos Goncalves, Luis Assuncao, Jose C. Cunha
Copyright: © 2013 | Volume: 5 | Issue: 4 | Pages: 17
ISSN: 1938-0259 | EISSN: 1938-0267 | EISBN13: 9781466635715 | DOI: 10.4018/ijghpc.2013100104
Cite Article

MLA

Goncalves, Carlos, et al. "Flexible MapReduce Workflows for Cloud Data Analytics." IJGHPC, vol. 5, no. 4, 2013, pp. 48-64. http://doi.org/10.4018/ijghpc.2013100104

APA

Goncalves, C., Assuncao, L., & Cunha, J. C. (2013). Flexible MapReduce Workflows for Cloud Data Analytics. International Journal of Grid and High Performance Computing (IJGHPC), 5(4), 48-64. http://doi.org/10.4018/ijghpc.2013100104

Chicago

Goncalves, Carlos, Luis Assuncao, and Jose C. Cunha. "Flexible MapReduce Workflows for Cloud Data Analytics," International Journal of Grid and High Performance Computing (IJGHPC) 5, no. 4 (2013): 48-64. http://doi.org/10.4018/ijghpc.2013100104


Abstract

Data analytics applications handle large data sets subject to multiple processing phases, some of which can execute in parallel on clusters, grids, or clouds. Such applications can benefit from the MapReduce model, which only requires the end-user to define the application algorithms for input data processing and the map and reduce functions, but this poses a need to install and configure specific frameworks such as Apache Hadoop or Elastic MapReduce in the Amazon Cloud. In order to provide more flexibility in defining and adjusting the application configurations, as well as in specifying the composition of the application phases and their orchestration, the authors describe an approach for supporting MapReduce stages as sub-workflows in the AWARD framework (Autonomic Workflow Activities Reconfigurable and Dynamic). The authors discuss how a text mining application is represented as a complex workflow with multiple phases, where individual workflow nodes support MapReduce computations. Access to the intermediate data produced during the MapReduce computations is supported by a data sharing abstraction. The authors describe two implementations of this abstraction, one based on a shared tuple space and another based on an in-memory distributed key/value store. The authors describe the implementation of the framework and a set of developed tools, report on their experimentation with the execution of the text mining algorithm over multiple Amazon EC2 (Elastic Compute Cloud) instances, and present the speed-up and size-up results obtained with up to 20 EC2 instances and different corpus sizes of up to 97 million words.
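
The data-sharing idea in the abstract can be illustrated with a short sketch. The following minimal Python example shows a word-count-style map stage emitting intermediate (word, 1) pairs into a pluggable in-memory key/value store, which a reduce stage then aggregates. All class and function names here are illustrative assumptions, not the AWARD framework's API; in the paper's setting the store would be a shared tuple space or a distributed key/value store spanning EC2 instances, whereas a local dictionary stands in for it here.

    # Illustrative sketch only; not the AWARD framework's actual API.
    from collections import defaultdict

    class InMemoryKVStore:
        """Toy stand-in for the paper's data sharing abstraction
        (shared tuple space or in-memory distributed key/value store)."""
        def __init__(self):
            self._data = defaultdict(list)

        def put(self, key, value):
            # Append an intermediate value under the given key.
            self._data[key].append(value)

        def items(self):
            # Yield (key, [values]) pairs for the reduce stage.
            return self._data.items()

    def map_phase(documents, store):
        # Map stage: emit (word, 1) pairs into the shared store.
        for doc in documents:
            for word in doc.split():
                store.put(word.lower(), 1)

    def reduce_phase(store):
        # Reduce stage: aggregate per-word counts from the shared store.
        return {word: sum(counts) for word, counts in store.items()}

    if __name__ == "__main__":
        corpus = ["MapReduce workflows in the cloud",
                  "flexible workflows for cloud data analytics"]
        store = InMemoryKVStore()
        map_phase(corpus, store)
        print(reduce_phase(store))   # e.g. {'workflows': 2, 'cloud': 2, ...}

Because the map and reduce stages communicate only through the store interface, the two backend implementations described in the paper can be swapped without changing the stage logic, which is the point of the abstraction.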
