DOI: 10.1145/3315573.3329979

Massively parallel GPU memory compaction

Published: 23 June 2019

Abstract

Memory fragmentation is a widely studied problem of dynamic memory allocators. It is well known that fragmentation can lead to premature out-of-memory errors and poor cache performance.
With the recent emergence of dynamic memory allocators for SIMD accelerators, memory fragmentation is becoming an increasingly important problem on such architectures. Nevertheless, it has received little attention so far. Memory-bound applications on SIMD architectures such as GPUs can experience an additional slowdown: on a fragmented heap, objects are scattered in memory, so vector load/store instructions access memory less efficiently.
We propose CompactGpu, an incremental, fully-parallel, in-place memory defragmentation system for GPUs. CompactGpu is an extension to the DynaSOAr dynamic memory allocator and defragments the heap in a fully parallel fashion by merging partly occupied memory blocks. We developed several implementation techniques for memory defragmentation that are efficient on SIMD/GPU architectures, such as finding defragmentation block candidates and fast pointer rewriting based on bitmaps.
Benchmarks indicate that our implementation is fast: the performance gains from defragmentation typically outweigh the compaction overhead. It can also decrease overall memory usage.
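
To make the bitmap-based candidate search mentioned above more concrete, the sketch below shows how such a step could look in CUDA: one thread per heap block counts the live objects in a 64-bit allocation bitmap and marks blocks that are non-empty but at most half full as defragmentation candidates. The block layout, the 50% threshold, and all names (alloc_bitmaps, candidate_bitmap, find_candidates, kNumBlocks) are assumptions made for this illustration and do not reflect the actual DynaSOAr/CompactGpu data structures or interfaces.

    #include <cstdint>
    #include <cuda_runtime.h>

    // Illustrative heap layout (assumption): kNumBlocks fixed-size blocks,
    // each holding up to 64 objects of one type, tracked by one bit per slot.
    constexpr int kNumBlocks = 1024;
    constexpr int kSlotsPerBlock = 64;

    // Bit i of alloc_bitmaps[b] is set iff slot i of block b holds a live object.
    __device__ uint64_t alloc_bitmaps[kNumBlocks];
    // Bit b of candidate_bitmap marks block b as a defragmentation candidate.
    __device__ unsigned long long candidate_bitmap[kNumBlocks / 64];

    // One thread per heap block: a block is a candidate if it is non-empty
    // but at most 50% full, so that two candidates can be merged into one block.
    __global__ void find_candidates() {
      int block_id = blockIdx.x * blockDim.x + threadIdx.x;
      if (block_id >= kNumBlocks) return;

      int live = __popcll(alloc_bitmaps[block_id]);   // number of live objects
      if (live > 0 && live <= kSlotsPerBlock / 2) {
        // Record the candidate in a global bitmap with an atomic OR.
        atomicOr(&candidate_bitmap[block_id / 64], 1ull << (block_id % 64));
      }
    }

    int main() {
      int threads = 256;
      int blocks = (kNumBlocks + threads - 1) / threads;
      find_candidates<<<blocks, threads>>>();
      cudaDeviceSynchronize();
      return 0;
    }

A full defragmentation pass would then copy objects out of half of the candidate blocks into the other half and rewrite pointers to the relocated objects; the paper describes a bitmap-based scheme for that rewriting step, which is not reproduced in this sketch.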

Cited By

  • (2024) SyncMalloc: A Synchronized Host-Device Co-Management System for GPU Dynamic Memory Allocation across All Scales. Proceedings of the 53rd International Conference on Parallel Processing, 179–188. DOI: 10.1145/3673038.3673069
  • (2024) Certified SAT solving with GPU accelerated inprocessing. Formal Methods in System Design 62(1–3), 79–118. DOI: 10.1007/s10703-023-00432-z
  • (2023) Innermost many-sorted term rewriting on GPUs. Science of Computer Programming 225(C). DOI: 10.1016/j.scico.2022.102910
  • (2021) SAT Solving with GPU Accelerated Inprocessing. Tools and Algorithms for the Construction and Analysis of Systems, 133–151. DOI: 10.1007/978-3-030-72016-2_8

Published In

ISMM 2019: Proceedings of the 2019 ACM SIGPLAN International Symposium on Memory Management
June 2019
135 pages
ISBN: 9781450367226
DOI: 10.1145/3315573

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. GPUs
  2. dynamic allocation
  3. fragmentation

Qualifiers

  • Research-article

Conference

ISMM '19

Acceptance Rates

Overall acceptance rate: 72 of 156 submissions (46%)
