DOI: 10.1145/2987550.2987557
Research article

BASS: Improving I/O Performance for Cloud Block Storage via Byte-Addressable Storage Stack

Published: 05 October 2016

Abstract

In an Infrastructure-as-a-Service cloud, cloud block storage offers conventional block-level storage resources via a storage area network. Compared to local storage, however, this multilayered cloud storage model imposes considerable I/O overhead because of the much longer I/O path in the virtualized cloud. In this paper, we propose a novel byte-addressable storage stack, BASS, to bridge the addressability gap between the storage and network stacks in the cloud and, in turn, boost I/O performance for cloud block storage. Equipped with byte-addressability, BASS not only reaps the benefits of variable-length I/O requests, which avoid unnecessary data transfer, but also enables a highly efficient non-blocking approach that eliminates the blocking of write processes. We have developed a generic prototype of BASS based on the Linux storage stack, which is applicable to traditional VMs, lightweight containers, and physical machines. Our extensive evaluation with micro-benchmarks, I/O traces, and real-world applications demonstrates the effectiveness of BASS, with significantly improved I/O performance and reduced storage network usage.
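As a rough illustration of the transfer savings the abstract describes (this sketch is not from the paper; the 4 KiB block size and the helper functions are illustrative assumptions), a block-granular stack must round every request out to whole blocks, while a byte-addressable stack transfers only the bytes actually requested:

```python
BLOCK_SIZE = 4096  # assumed block size; BASS's actual granularity may differ


def block_granular_bytes(offset: int, length: int) -> int:
    """Bytes a block-addressable stack must transfer: the request is
    rounded out to whole blocks (partial blocks need read-modify-write)."""
    first_block = offset // BLOCK_SIZE
    last_block = (offset + length - 1) // BLOCK_SIZE
    return (last_block - first_block + 1) * BLOCK_SIZE


def byte_granular_bytes(offset: int, length: int) -> int:
    """Bytes a byte-addressable stack transfers: exactly what was asked for."""
    return length


# A 100-byte update straddling a block boundary costs two full blocks
# on the block-granular path, but only 100 bytes on the byte-granular one.
print(block_granular_bytes(4090, 100))  # 8192
print(byte_granular_bytes(4090, 100))  # 100
```

For small or unaligned writes, which dominate many cloud workloads, this gap compounds across the network hop between the VM and the remote block store, which is why byte-addressability can cut storage network usage.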


Cited By

  • Baoverlay. Proceedings of the 11th ACM Symposium on Cloud Computing (2020), 90-104. DOI: 10.1145/3419111.3421291. Online publication date: 12-Oct-2020.
  • A New Approach to Double I/O Performance for Ceph Distributed File System in Cloud Computing. 2019 2nd International Conference on Data Intelligence and Security (ICDIS), 68-75. DOI: 10.1109/ICDIS.2019.00018. Online publication date: Jun-2019.
  • Analysis of I/O Performance for Optimizing Software Defined Storage in Cloud Integration. 2018 IEEE 3rd International Conference on Communication and Information Systems (ICCIS), 222-226. DOI: 10.1109/ICOMIS.2018.8645041. Online publication date: Dec-2018.


Published In

SoCC '16: Proceedings of the Seventh ACM Symposium on Cloud Computing
October 2016
534 pages
ISBN:9781450345255
DOI:10.1145/2987550
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. Cloud Block Storage
  2. Cloud Computing
  3. Virtualization

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

SoCC '16: ACM Symposium on Cloud Computing
October 5 - 7, 2016
Santa Clara, CA, USA

Acceptance Rates

SoCC '16 paper acceptance rate: 38 of 151 submissions, 25%.
Overall acceptance rate: 169 of 722 submissions, 23%.
