Buffer scaling for optical packet switching networks with shared RAM

https://doi.org/10.1016/j.osn.2010.07.002

Abstract

According to a historical rule of thumb, which is widely used in routers, the buffer size of each output link of a router should be set to the product of the bandwidth and the average round-trip time. However, it is very difficult to satisfy this buffer requirement for ultra-high-speed dense wavelength division multiplexing (DWDM) networks with current technology. Recently, many researchers have challenged the rule of thumb and have proposed various buffer sizing strategies that require smaller buffers. Most of these were proposed for electronic routers with input and output buffering. However, shared buffering is a strong candidate for future DWDM optical packet switching (OPS) networks because of its high efficiency. As all links use the same buffer space, the wavelength count and nodal degree strongly affect the required size of a shared buffer. In this paper, we present a new buffer scaling rule that relates the number of wavelengths, the nodal degree, and the required shared buffer size. Through an extensive simulation study, we show that the buffer requirement grows as O(N^0.85 W^0.85) for both standard TCP and paced TCP, while the buffer requirement of XCP-paced TCP grows as O(N^1 W^0.85), over a wide range of N and W, where N is the nodal degree and W is the number of wavelengths.

Introduction

Recent advances in optical networks such as dense wavelength division multiplexing (DWDM) have allowed ultra-high data-transmission rates to be achieved in optical networks. However, optical and electronic buffer technology has not kept pace with these transmission rates, making buffering a bottleneck in routers. Three main buffer types have been proposed for optical packet switching networks in the literature: electronic RAM, optical RAM, and fiber delay line (FDL)-based buffering. Electronic RAM allows an O(1) read operation when the output port is free. However, electronic buffering requires converting optical packets to the electronic domain. As the operating speed of electronic components and electromagnetic interference limit the packet bit rate to about 10 Gbps [1], an electronic approach to direct opto–electro–opto (O/E/O) conversion is not a feasible solution. Recently, Takahashi et al. [1] showed that an optoelectronic approach using parallelized all-optical converters could achieve fast O/E/O conversion. However, the size and speed of electronic RAM are still bottlenecks, as router manufacturers must use large, slow, off-chip DRAMs to satisfy large buffer requirements [2].

Currently, the only available solution for all-optical buffering is to switch contended packets into long fiber delay lines (FDLs). However, FDLs have severe limitations, such as signal attenuation and high space requirements in routers due to the very long fiber lines. It is very difficult to achieve a RAM-like O(1) read operation on variable-length packets in FDL buffers, so FDLs may limit the achievable link utilization and increase the packet drop rate. All-optical RAM, on the other hand, is still being researched [3]. Optical RAM has several advantages over FDLs and electronic RAM. It avoids the problems of FDLs, such as the lack of a true O(1) read operation, signal attenuation, and bulkiness. Furthermore, optical RAM may have lower power and space requirements than FDLs and electronic RAM. For example, Shinya et al. [4] demonstrated a photonic crystal-based all-optical bit memory operating at very low power. However, each photonic cell can buffer only a single bit, so all wavelengths must share the same buffer space, as in electronic RAM buffering. Moreover, optical RAM is not expected to reach large capacities in the near future. Therefore, even a small decrease in buffer requirements may have a large impact on the realization of RAM-buffered high-speed WDM OPS networks.

According to a historical rule of thumb [5], which is widely used in routers, the buffer size of each output link of a router should be set to the product of the bandwidth (BW) and the average round-trip time (RTT). DWDM is capable of ultra-high data rates in excess of 1 Pbit/s (petabit per second) per fiber, and many network operators and router manufacturers currently follow a guideline of 250 ms of buffering per link, which would require 250 Tbits of ultra-high-speed buffer per fiber. Recently, many researchers have challenged the rule of thumb and proposed new rules that require smaller buffers. Some of them are listed in Table 1, where B is the bandwidth of the link, T is the round-trip time (RTT), F is the number of TCP flows on the link, N is the nodal degree, M is the maximum segment size (MSS), and S is the TCP congestion window size. All of these rules and guidelines were proposed for output-RAM-buffered routers. A detailed comparison of these proposals is available in [9].
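To put these magnitudes in perspective, the short Python sketch below compares the buffer size given by the classic bandwidth-delay rule with the size given by the BW·RTT/√F rule associated with Appenzeller et al. (listed in the references). The link rate, RTT, and flow count are illustrative assumptions, not values taken from Table 1.

# Illustrative comparison of buffer sizing rules; the numeric inputs are
# assumptions for this sketch, not figures from Table 1 of the paper.
import math

def rule_of_thumb(bandwidth_bps, rtt_s):
    """Classic rule: buffer = bandwidth x average RTT (bits)."""
    return bandwidth_bps * rtt_s

def small_buffer_rule(bandwidth_bps, rtt_s, flows):
    """Appenzeller-style rule: buffer = bandwidth x RTT / sqrt(F) (bits)."""
    return bandwidth_bps * rtt_s / math.sqrt(flows)

if __name__ == "__main__":
    bw = 1e15          # 1 Pbit/s DWDM fiber (illustrative)
    rtt = 0.25         # 250 ms average RTT, as in the guideline above
    flows = 100_000    # assumed number of long-lived TCP flows
    print(f"Rule of thumb  : {rule_of_thumb(bw, rtt) / 1e12:.1f} Tbit")
    print(f"BW*RTT/sqrt(F) : {small_buffer_rule(bw, rtt, flows) / 1e12:.2f} Tbit")

With these assumed values the classic rule calls for 250 Tbit of buffering per fiber, while the square-root rule reduces this by a factor of √F, which is why small-buffer proposals are attractive for DWDM links.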

The well-known bursty behavior of TCP [10] is the main obstacle to reducing buffer requirements, because TCP's burstiness results in a high packet drop rate in networks with very small buffers. A general approach to this problem is pacing, which delays packets according to a criterion that reduces short-term burstiness and hence smooths the network traffic. It is well known that applying pacing at TCP senders (paced TCP) dramatically decreases buffer requirements [11]. However, this method requires modifying TCP senders or receivers. Another possible way to pace traffic is to shape it at the edge or core nodes. We recently proposed an explicit congestion control protocol (XCP)-based architecture [12] for pacing at the edge nodes without modifying TCP [13]. We found that our architecture could achieve high utilization and a low packet drop ratio for TCP flows in OPS networks with very small RAM buffers, even outperforming TCP pacing [14].
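For intuition, the following minimal sketch contrasts an unpaced sender, which emits a whole congestion window back to back at line rate, with a paced sender, which spreads the same window over one RTT. The function names and parameter values are our own illustrative assumptions, not code from the paper or from any TCP implementation.

# Minimal illustration of sender pacing: spread a congestion window of
# packets over one RTT instead of sending them as a single burst.

def burst_departure_times(cwnd_pkts, pkt_bits, line_rate_bps):
    """Unpaced sender: packets leave back to back at line rate."""
    serialization = pkt_bits / line_rate_bps
    return [i * serialization for i in range(cwnd_pkts)]

def paced_departure_times(cwnd_pkts, rtt_s):
    """Paced sender: one packet every RTT / cwnd seconds."""
    interval = rtt_s / cwnd_pkts
    return [i * interval for i in range(cwnd_pkts)]

if __name__ == "__main__":
    cwnd, pkt, rate, rtt = 40, 12_000, 10e9, 0.1   # assumed values
    print("burst span:", burst_departure_times(cwnd, pkt, rate)[-1], "s")
    print("paced span:", paced_departure_times(cwnd, rtt)[-1], "s")

The unpaced window occupies only a few tens of microseconds on the wire and therefore arrives at the bottleneck as a burst that must be absorbed by the buffer, whereas the paced window is spread over nearly the whole RTT.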

Because shared buffering uses a RAM buffer more efficiently than output buffering, it reduces the total buffer requirement of a router [15]. There is ongoing research on realizing a WDM OPS router with such a RAM-based buffer that is fully shared by all links and wavelengths [16]. However, the required shared buffer capacity is still unclear. Most papers on the performance of shared buffers present only the packet drop rates of different shared buffer architectures; they do not propose a guideline for sizing shared buffers for TCP traffic. Moreover, most papers on OPS networks with shared buffers consider only FDL-based buffering, with limitations such as a fixed packet or slot size. Unlike output buffering, which serves a single output queue, shared buffering serves multiple output queues, so the nodal degree (N) has a large impact on the buffer requirement. Moreover, all wavelengths use the same buffer space in optoelectronic and optical RAM, so the number of wavelengths (W) must be taken into account as well. These additional N and W dimensions make the analytical treatment of RAM-based shared-buffer optical routers much more complex than that of RAM-based output-buffered electronic routers. In this paper, we present a new buffer scaling rule that relates the number of wavelengths, the nodal degree, and the required shared buffer size. We estimate the buffer scaling parameters for TCP, paced TCP, and XCP-based edge node pacing through an extensive simulation study.
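As a worked example of how such a rule could be applied, the sketch below evaluates a relative shared-buffer requirement of the form c·N^a·W^b using the exponents reported in the abstract (a = b = 0.85 for standard and paced TCP; a = 1, b = 0.85 for XCP-paced TCP). The constant c and the baseline switch configuration are placeholders; the paper's simulations, not this sketch, determine the actual buffer sizes.

# Hedged sketch of a buffer scaling rule of the form B ~ c * N^a * W^b.
# Exponents follow the abstract; the constant c is a placeholder.

SCALING_EXPONENTS = {
    "tcp":       (0.85, 0.85),   # standard TCP
    "paced_tcp": (0.85, 0.85),   # paced TCP
    "xcp_paced": (1.00, 0.85),   # XCP-paced TCP
}

def relative_buffer(nodal_degree, wavelengths, traffic="tcp", c=1.0):
    """Relative shared-buffer size c * N^a * W^b (arbitrary units)."""
    a, b = SCALING_EXPONENTS[traffic]
    return c * nodal_degree ** a * wavelengths ** b

if __name__ == "__main__":
    for mode in ("tcp", "xcp_paced"):
        base = relative_buffer(4, 8, mode)       # assumed reference switch
        for n, w in [(4, 8), (8, 8), (8, 32)]:
            ratio = relative_buffer(n, w, mode) / base
            print(f"{mode:9s} N={n:2d} W={w:2d} -> {ratio:5.2f}x baseline")

Doubling the nodal degree scales the XCP-paced requirement linearly (2x) but the TCP requirement only by 2^0.85, which is the practical difference between the two exponent sets.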

The rest of the paper is organized as follows. Section 2 describes the related work on TCP and XCP pacing. Section 3 describes the XCP pacing and switch architecture. Section 4 describes the simulation methodology and presents the simulation results. Finally, we conclude the paper in Section 5.

Section snippets

Related work

This section describes TCP and XCP pacing architectures.

Architecture

This section describes the XCP pacing and switch architecture in detail.

Evaluation

This section discusses our evaluation of the buffer scaling rule of a shared-buffered switch, showing the relationship between the number of wavelengths, nodal degree, and the required shared buffer size.

Conclusions

In this paper, we have presented a new buffer scaling rule showing the relationship between the number of wavelengths, nodal degree, and the required shared buffer size. Through an extensive simulation study, we showed that the buffer requirement increases with O(N^0.85 W^0.85) for both standard TCP and paced TCP, while XCP-paced TCP's buffer requirement increases with O(N^1 W^0.85), under a wide range of N and W values. We evaluated the parameters for an approximate buffer scaling rule with the...

Acknowledgement

This work was partly supported by the National Institute of Information and Communications Technology (NICT).

References (21)

  • R. Takahashi et al., Photonic random access memory for 40-Gb/s 16-b burst optical packets, IEEE Photonics Technology Letters, 2004.
  • G. Appenzeller, I. Keslassy, N. McKeown, Sizing router buffers, in: Proceedings of ACM SIGCOMM, 2004, pp....
  • T. Aoyama, New generation network (NWGN) beyond NGN in Japan, 2007. Web page:...
  • A. Shinya et al., All-optical on-chip bit memory based on ultra high Q InGaAsP photonic crystal, Optics Express, 2008.
  • C. Villamizar et al., High performance TCP in ANSNET, Computer Communication Review, 1994.
  • K. Avrachenkov, U. Ayesta, A. Piunovskiy, Optimal choice of the buffer size in the internet routers, in: Proceedings of...
  • S. Gorinsky, A. Kantawala, J. Turner, Link buffer sizing: a new look at the old problem, in: Proceedings of ISCC, 2005,...
  • M. Enachescu et al., Part III: routers with very small buffers, ACM SIGCOMM Computer Communication Review, 2005.
  • A. Vishwanath et al., Perspectives on router buffer sizing: recent results and open problems, ACM SIGCOMM Computer Communication Review, 2009.
  • H. Jiang, C. Dovrolis, Source-level IP packet bursts: causes and effects, in: Proceedings of ACM SIGCOMM/Usenix...
There are more references available in the full text version of this article.
