Abstract
The way in which jobs are scheduled is critical to achieving high job processing performance in large-scale data clusters. Most existing scheduling mechanisms employ a serialized First-In First-Out (FIFO) approach combined with straggler-hunting techniques that launch speculative tasks after slow tasks are detected, often by instrumenting the processing nodes. Such node instrumentation incurs frequent communication overhead as the number of processing nodes increases. Moreover, sequential task scheduling and straggler hunting fall short of optimal performance: the former increases job waiting time in the queue, while the latter delays the speculative execution of straggling tasks. In this paper we propose an Enhanced Phase-based Performance-Aware Dynamic Scheduler (EPPADS), which schedules job tasks without additional instrumentation modules. EPPADS uses a two-stage scheduling approach consisting of a slow start phase (SSP) and an accelerate phase (AccP). The SSP schedules the initial tasks in the queue in the normal FIFO manner and records the initial execution times of the processing nodes. The AccP uses these initial execution times to compute each processing node's task distribution ratio for the remaining tasks and schedules them with a single scheduling I/O. We implement the EPPADS scheduler in Hadoop's MapReduce framework. Our evaluation shows that EPPADS achieves a performance improvement of 30% over the FIFO scheduler and of 22% over an existing dynamic scheduling approach that uses node instrumentation.
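To illustrate the AccP step, the sketch below (not the authors' implementation) shows one way the execution times recorded during the SSP could be turned into a task distribution ratio: each node is weighted by the inverse of its initial execution time, and the remaining tasks are split proportionally and assigned in a single round. The class and method names, and the inverse-time weighting itself, are assumptions for illustration; the abstract does not specify the exact ratio formula.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of an AccP-style ratio computation, not the EPPADS code.
public class AccPRatioSketch {

    /** Returns how many of the remaining tasks each node should receive. */
    static Map<String, Integer> distributeRemainingTasks(
            Map<String, Double> initialExecTimes, int remainingTasks) {
        // A node that finished its SSP task faster gets a larger share,
        // so its weight is the inverse of its recorded execution time.
        double totalSpeed = 0.0;
        for (double t : initialExecTimes.values()) {
            totalSpeed += 1.0 / t;
        }

        Map<String, Integer> allocation = new LinkedHashMap<>();
        int assigned = 0;
        for (Map.Entry<String, Double> e : initialExecTimes.entrySet()) {
            double ratio = (1.0 / e.getValue()) / totalSpeed;
            int share = (int) Math.floor(ratio * remainingTasks);
            allocation.put(e.getKey(), share);
            assigned += share;
        }

        // Hand tasks lost to rounding down to the fastest node.
        String fastest = initialExecTimes.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .get()
                .getKey();
        allocation.merge(fastest, remainingTasks - assigned, Integer::sum);
        return allocation;
    }

    public static void main(String[] args) {
        // Hypothetical SSP execution times (seconds) recorded per node.
        Map<String, Double> sspTimes = new LinkedHashMap<>();
        sspTimes.put("node1", 20.0);
        sspTimes.put("node2", 40.0);
        sspTimes.put("node3", 80.0);
        // node1 is 4x faster than node3, so it receives roughly 4x the tasks.
        System.out.println(distributeRemainingTasks(sspTimes, 70));
    }
}
```

Under these assumed numbers, three nodes whose SSP tasks took 20 s, 40 s, and 80 s would receive roughly 40, 20, and 10 of 70 remaining tasks, so faster nodes get proportionally more work in the single AccP scheduling round.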
Acknowledgement
This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2016R1D1A1B03934129).
Copyright information
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Hamandawana, P., Mativenga, R., Kwon, S.J., Chung, T.S. (2019). EPPADS: An Enhanced Phase-Based Performance-Aware Dynamic Scheduler for High Job Execution Performance in Large Scale Clusters. In: Li, G., Yang, J., Gama, J., Natwichai, J., Tong, Y. (eds.) Database Systems for Advanced Applications. DASFAA 2019. Lecture Notes in Computer Science, vol. 11446. Springer, Cham. https://doi.org/10.1007/978-3-030-18576-3_9
DOI: https://doi.org/10.1007/978-3-030-18576-3_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-18575-6
Online ISBN: 978-3-030-18576-3
eBook Packages: Computer Science, Computer Science (R0)