Adaptive Resource Views for Containers

Published: 17 June 2019

Abstract

As OS-level virtualization advances, containers have become a viable alternative to virtual machines for deploying applications in the cloud. Unlike virtual machines, which run guest OSes atop virtual hardware, containers access physical hardware directly and share one OS kernel. While the absence of virtual hardware abstractions eliminates most virtualization overhead, it poses unique challenges for containerized applications in efficiently utilizing the underlying hardware: the lack of abstraction exposes to each individual container the total resources of the host, which are in fact shared among all containers. Parallel runtimes (e.g., OpenMP) and managed programming languages (e.g., Java) that rely on OS-exported information for resource management can therefore suffer from suboptimal performance. In this paper, we develop a per-container view of resources that exports information on the actual resource allocation to containerized applications. The central design of the resource view is a per-container sys_namespace that calculates the effective capacity of CPU and memory in the presence of resource sharing among containers. We further create a virtual sysfs to seamlessly interface user-space applications with sys_namespace. We use two case studies to demonstrate how to leverage the continuously updated resource view to enable elasticity in the HotSpot JVM and OpenMP. Experimental results show that an accurate view of resource allocation leads to more appropriate configurations and improved performance in a variety of containerized applications.
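
To make the mismatch concrete, the sketch below shows the kind of calculation sys_namespace performs: deriving an effective CPU count and memory limit from a container's cgroup limits rather than from the host-wide values that sysfs and procfs export. It is a minimal user-space illustration under assumed cgroup v1 paths, not the paper's kernel implementation, which also accounts for dynamic sharing among co-located containers and keeps the view continuously updated.

    #include <stdio.h>
    #include <unistd.h>

    /* Read a single integer from a file; return `fallback` on any error. */
    static long long read_ll(const char *path, long long fallback)
    {
        long long v = fallback;
        FILE *f = fopen(path, "r");
        if (f) {
            if (fscanf(f, "%lld", &v) != 1)
                v = fallback;
            fclose(f);
        }
        return v;
    }

    int main(void)
    {
        /* Host-wide view: what a container sees through sysfs/procfs today. */
        long cpus_host = sysconf(_SC_NPROCESSORS_ONLN);

        /* Container view: CFS bandwidth limits set by the container runtime
         * (cgroup v1 paths; a quota of -1 means no limit is configured). */
        long long quota  = read_ll("/sys/fs/cgroup/cpu/cpu.cfs_quota_us", -1);
        long long period = read_ll("/sys/fs/cgroup/cpu/cpu.cfs_period_us", 100000);

        long cpus_eff = cpus_host;
        if (quota > 0 && period > 0) {
            cpus_eff = (long)((quota + period - 1) / period);   /* round up */
            if (cpus_eff > cpus_host)
                cpus_eff = cpus_host;
        }

        /* Container memory limit; an unset limit reads back as a very large
         * value, so a real consumer would clamp it to physical RAM. */
        long long mem_limit =
            read_ll("/sys/fs/cgroup/memory/memory.limit_in_bytes", -1);

        printf("host CPUs: %ld, effective CPUs: %ld\n", cpus_host, cpus_eff);
        if (mem_limit > 0)
            printf("memory limit: %lld bytes\n", mem_limit);
        return 0;
    }

An OpenMP runtime could size its thread pool from cpus_eff (e.g., via omp_set_num_threads) and a JVM its heap from mem_limit; rather than requiring such changes in every runtime, the paper exports these values through a virtual sysfs so that applications read them from the standard locations.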





Published In

HPDC '19: Proceedings of the 28th International Symposium on High-Performance Parallel and Distributed Computing
June 2019
278 pages
ISBN: 9781450366700
DOI: 10.1145/3307681


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. container
  2. memory management
  3. performance
  4. scheduling

Qualifiers

  • Research-article

Funding Sources

  • National Key Research and Development Program of China
  • National Science Foundation of China

Conference

HPDC '19

Acceptance Rates

HPDC '19 paper acceptance rate: 22 of 106 submissions (21%).
Overall acceptance rate: 166 of 966 submissions (17%).



Cited By

  • (2025) System log isolation for containers. Frontiers of Computer Science 19(5). DOI: 10.1007/s11704-024-2568-8. Online: 1 May 2025.
  • (2024) A Systematic Investigation of Hardware and Software in Electric Vehicular Platform. Proceedings of the 2024 ACM Southeast Conference, 9-17. DOI: 10.1145/3603287.3651203. Online: 18 Apr 2024.
  • (2024) vKernel: Enhancing Container Isolation via Private Code and Data. IEEE Transactions on Computers 73(7), 1711-1723. DOI: 10.1109/TC.2024.3383988. Online: Jul 2024.
  • (2023) Adapt Burstable Containers to Variable CPU Resources. IEEE Transactions on Computers 72(3), 614-626. DOI: 10.1109/TC.2022.3174480. Online: 1 Mar 2023.
  • (2023) Characterizing and optimizing kernel resource isolation for containers. Future Generation Computer Systems 141, 218-229. DOI: 10.1016/j.future.2022.11.018. Online: Apr 2023.
  • (2023) Precise control of page cache for containers. Frontiers of Computer Science 18(2). DOI: 10.1007/s11704-022-2455-0. Online: 13 Sep 2023.
  • (2023) CO2 Emission Mitigation in Container-Based Cloud Computing by the Power of Resource Management. Proceedings of the 9th International Conference on Advanced Intelligent Systems and Informatics 2023, 97-111. DOI: 10.1007/978-3-031-43247-7_9. Online: 18 Sep 2023.
  • (2022) Research on Elastic Extension of Multi Type Resources for OpenMP Program. 2022 IEEE 24th International Conference on High Performance Computing & Communications (HPCC/DSS/SmartCity/DependSys), 971-978. DOI: 10.1109/HPCC-DSS-SmartCity-DependSys57074.2022.00155. Online: Dec 2022.
  • (2021) Parallelizing packet processing in container overlay networks. Proceedings of the Sixteenth European Conference on Computer Systems, 261-276. DOI: 10.1145/3447786.3456241. Online: 21 Apr 2021.
  • (2021) Quantifying context switch overhead of artificial intelligence workloads on the cloud and edges. Proceedings of the 36th Annual ACM Symposium on Applied Computing, 1182-1189. DOI: 10.1145/3412841.3441993. Online: 22 Mar 2021.
