
Memory Management Approaches in Apache Spark: A Review

  • Conference paper
Proceedings of the International Conference on Advanced Intelligent Systems and Informatics 2020 (AISI 2020)

Abstract

In the era of Big Data, processing large volumes of data through data-intensive applications presents a challenge. Apache Spark, an in-memory distributed computing system, is often used to speed up big data applications. It caches intermediate data in memory, so the computation need not be repeated and the data need not be reloaded from disk when they are reused later. This in-memory caching mechanism makes Apache Spark much faster than comparable systems. When the memory reserved for caching is full, Apache Spark evicts data using the Least Recently Used (LRU) replacement policy; however, LRU performs poorly on some workloads. This review surveys the replacement algorithms proposed to address the shortcomings of LRU, categorizes the factors they use to select eviction victims, and compares the algorithms in terms of those selection factors, their performance, and the benchmarks used to evaluate them.
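The LRU policy discussed in the abstract can be illustrated with a minimal sketch. This is not Spark's actual block-manager code; it is a generic LRU cache built on Python's `collections.OrderedDict`, with hypothetical key names standing in for cached RDD partitions:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: when capacity is exceeded,
    the least-recently-used entry is evicted."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # insertion order tracks recency

    def get(self, key):
        if key not in self.store:
            return None  # cache miss: recompute or reload from disk
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

# Hypothetical cached partitions, capacity of two entries:
cache = LRUCache(2)
cache.put("rdd_1", "partition data A")
cache.put("rdd_2", "partition data B")
cache.get("rdd_1")                       # rdd_1 becomes most recently used
cache.put("rdd_3", "partition data C")   # forces eviction of rdd_2
```

The sketch also shows why LRU can misbehave in workloads Spark commonly runs: an iterative job that scans partitions cyclically will evict exactly the partition it needs next, which motivates the dependency-aware policies this review compares.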




Corresponding author

Correspondence to Maha Dessokey.


Copyright information

© 2021 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Dessokey, M., Saif, S.M., Salem, S., Saad, E., Eldeeb, H. (2021). Memory Management Approaches in Apache Spark: A Review. In: Hassanien, A.E., Slowik, A., Snášel, V., El-Deeb, H., Tolba, F.M. (eds) Proceedings of the International Conference on Advanced Intelligent Systems and Informatics 2020. AISI 2020. Advances in Intelligent Systems and Computing, vol 1261. Springer, Cham. https://doi.org/10.1007/978-3-030-58669-0_36
