
Introspective end-system modeling to optimize the transfer time of rate based protocols

Published: 08 June 2011

Abstract

The transmission capacity of today's high-speed networks often exceeds the capacity of an end-system (such as a server or a remote client) to consume the incoming data. This mismatch between the network and the end-system, which can be exacerbated by high end-system workloads, results in incoming packets being dropped at different points in the packet receiving process. In particular, a packet may be dropped in the NIC, in the kernel ring buffer, and (for rate based protocols) in the socket buffer. To provide reliable data transfer, these losses require retransmissions, which, if the loss rate is high enough, lead to longer download times. In this paper, we focus on UDP-like rate based transport protocols and address the question of how best to estimate the rate at which the end-system can consume data, so as to minimize the overall transfer time of a file.
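The three drop points named above can be observed directly on a stock Linux receiver. As an illustrative sketch (ours, not the paper's; counter availability varies by kernel version, and any device name you substitute into the sysfs path is your own): NIC ring drops appear under `/sys/class/net/<dev>/statistics/`, while socket-buffer drops are counted in the `RcvbufErrors` field of the kernel's UDP statistics.

```python
# Where a Linux receiver exposes per-stage drop counters:
#   NIC ring       -> /sys/class/net/<dev>/statistics/rx_dropped
#   socket buffer  -> the "Udp:" rows of /proc/net/snmp, where
#                     RcvbufErrors counts datagrams dropped because
#                     the receiving socket's buffer was full.

def udp_counters(path="/proc/net/snmp"):
    """Parse the kernel's UDP counters into a {name: value} dict."""
    with open(path) as f:
        rows = [line.split() for line in f if line.startswith("Udp:")]
    header, values = rows[0][1:], rows[1][1:]  # first row names, second row counts
    return dict(zip(header, map(int, values)))

drops = udp_counters()
print(drops.get("RcvbufErrors", 0))  # socket-buffer drops since boot
```

Sampling these counters immediately before and after a transfer gives a per-stage loss breakdown for that transfer.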
We propose a novel queueing network model of the end-system, which consists of a model of the NIC, a model of the kernel ring buffer and the protocol processing, and a model of the socket buffer from which the application process reads the data. We show that using simple and approximate queueing models, we can accurately predict the effective end-system bottleneck rate that minimizes the file transfer time. We compare our protocol with PA-UDP, an end-system aware rate based transport protocol, and show that our approach performs better, particularly when the packet losses in the NIC and/or the kernel ring buffer are high. We also compare our approach to TCP. Unlike in our rate based scheme, TCP invokes the congestion control algorithm when there are losses in the NIC and the ring buffer. With higher end-to-end delay, this results in significant performance degradation compared to our reliable end-system aware rate based protocol.
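To make the modeling idea concrete, here is a minimal sketch of the same style of reasoning (our own illustration with made-up service rates and buffer sizes, not the paper's actual model, which uses more refined approximations): treat the NIC, ring buffer, and socket buffer as M/M/1/K queues in tandem, thin the arrival stream by each stage's blocking probability, and search for the send rate that minimizes the expected transfer time when every lost packet must be retransmitted.

```python
def mm1k_loss(lam, mu, K):
    """Blocking probability of an M/M/1/K queue with arrival rate lam, service rate mu."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (K + 1)
    return (1 - rho) * rho**K / (1 - rho**(K + 1))

def transfer_time(rate, stages, file_pkts):
    """Expected transfer time when packets must survive each finite queue in tandem."""
    lam, delivered = rate, 1.0
    for mu, K in stages:
        p = mm1k_loss(lam, mu, K)
        delivered *= (1 - p)
        lam *= (1 - p)            # thinned stream feeds the next stage
    # Lost packets are retransmitted, so goodput is rate * P(delivered end to end).
    return file_pkts / (rate * delivered)

# Hypothetical (service rate in pkt/s, buffer size) for NIC, ring buffer, socket buffer.
stages = [(120e3, 512), (100e3, 1024), (80e3, 256)]
best = min((transfer_time(r, stages, 1e6), r) for r in range(10_000, 200_000, 1_000))
```

Under this toy model the goodput saturates at the slowest stage's service rate, so the search flattens out near the end-system bottleneck; the paper's contribution is estimating that bottleneck rate accurately from simple, approximate models of the real pipeline.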


Cited By

  • (2018) "A Survey of End-System Optimizations for High-Speed Networks." ACM Computing Surveys 51(3), 1–36. DOI: 10.1145/3184899. Online publication date: 16 Jul 2018.
  • (2012) "Minimizing the Data Transfer Time Using Multicore End-System Aware Flow Bifurcation." In Proceedings of the 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2012), 595–602. DOI: 10.1109/CCGrid.2012.54. Online publication date: 13 May 2012.


Published In

HPDC '11: Proceedings of the 20th International Symposium on High Performance Distributed Computing
June 2011, 296 pages
ISBN: 9781450305525
DOI: 10.1145/1996130

Publisher

Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. end-system network i/o bottleneck rate
      2. experimental analysis
      3. queueing model
      4. rate based transport protocol

      Qualifiers

      • Research-article


      Acceptance Rates

      Overall Acceptance Rate 166 of 966 submissions, 17%
