1 Introduction

Nowadays, every internet user has experienced slow service at least once when going online. This raises several questions: Is this a new problem? Why does it happen? How can it be solved? [1].

It all started several decades ago, when the internet ceased to serve only military and academic purposes [2] and began to be used commercially. This led to a steady increase in both the number of users and the volume of network traffic, which over time translated into widespread congestion. Under some conditions, this congestion renders networks inefficient and can even cause errors, among other consequences that degrade the quality of service [3].

Since the internet began to grow, networks of every type and topology, whether wired, wireless or satellite [4], have seen increases in traffic levels and in the number of users [5]. In addition, the number of users of other technologies such as mobile phones has also grown across services: voice, data, radio broadcasts, audio and video calls, and streaming.

2 Background

One of the works discussing congestion control algorithms comes from Satoshi Utsumi and Salahuddin Muhammad Salim Zabir, who investigated delay-based algorithms for wired and wireless networks, mainly CAIA-Hamilton Delay (CHD) and CAIA Delay-Gradient (CDG). The success of these two algorithms is attributed to the fact that they were designed to work in tandem with TCP New Reno, with resulting improvements of up to 250% [6].

In 1995, Lawrence S. Brakmo and Larry L. Peterson studied the state of the art of the congestion control algorithm TCP Vegas, which performs better than TCP Reno. According to the authors, it surpasses the latter by between 37% and 71% while suffering only one-fifth to one-half of the packet losses [7].

Regarding end-to-end networks, Cheng Cui, Lin Xue, Chui-Hui Chiu, Praveenkumar Kondikoppa and Seung-Jong Park focused their 2014 investigation on the buffer size at every node of LAN-WAN-LAN networks. According to the authors, the main issues to address were excess bandwidth, data bursts instead of steady data flows, and asynchronous packets. They improved these three aspects by roughly 80% overall, using 100 threads over links above 10 Gb/s [8].

In terms of quality of service metrics, K. Avrachenkov, U. Ayesta, J. Doncel and P. Jacko conducted an investigation in 2012 on the rapid transmission of flows between different network routers and the congestion it caused. For this purpose, the bottleneck between the TCP information sources and the router was modeled in order to derive an optimal control of the router's packet queue [9].

In 2012, Ghassan A. Abed, Mahamod Ismail and Kasmiran Jumari analyzed several congestion control techniques. They explained why TCP is still used despite its evident congestion and delays. Furthermore, they described other applications that use the UDP protocol, which provides a datagram service that is much more efficient in terms of transmission speed but carries a higher probability of errors and the need for retransmission [10].

3 Methodology

The number of end-to-end nodes along the transmission lines was considered, as well as their bandwidth. It was verified that traffic from each LAN passed through the WAN. The type of traffic was established, and each node's bandwidth was set to 8 Mbps. The average latency was set to 115 ms and the average jitter to 5 ms.
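As a point of reference for these settings, the minimal sketch below (the names and structure are illustrative summaries, not taken from any test tool) estimates the one-way delivery time of a single packet given the stated 8 Mbps per-node bandwidth, 115 ms average latency and 5 ms jitter.

```python
from dataclasses import dataclass


@dataclass
class LinkParams:
    """Link parameters as stated in the methodology (names are illustrative)."""
    bandwidth_bps: float = 8e6   # 8 Mbps per node
    latency_s: float = 0.115     # 115 ms average latency
    jitter_s: float = 0.005      # 5 ms average jitter


def one_way_time(params: LinkParams, packet_bytes: int) -> float:
    """Serialization time at the link rate plus average propagation latency (seconds)."""
    transmission = packet_bytes * 8 / params.bandwidth_bps
    return transmission + params.latency_s


if __name__ == "__main__":
    p = LinkParams()
    # A 12 000-byte packet: 0.012 s to serialize at 8 Mbps plus 0.115 s latency ≈ 0.127 s
    print(f"{one_way_time(p, 12_000):.3f} s")
```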

The packet sizes were defined within a range from 12000 bytes to 310000 bytes. Each LAN was studied with three conventional switches and one border switch. Transport over the WAN uses MPLS, and the WAN includes a Next-Generation Firewall from McAfee. It was decided to test different types of traffic within the nodes, varying the buffer size, the segment size and the ACK time so that the resulting behavior could be observed.
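The concrete buffer, segment and ACK-time values that were swept are not listed in the text, so the sketch below uses placeholder values; it only illustrates how the resulting test matrix of parameter combinations can be enumerated before running the traffic tests.

```python
from itertools import product

# Packet sizes are taken from the range stated above; the buffer, segment and
# ACK-time values are placeholders, since the exact values are not listed.
PACKET_SIZES = [12_000, 50_000, 150_000, 310_000]   # bytes
BUFFER_SIZES = [16_384, 65_536, 262_144]            # bytes (hypothetical)
SEGMENT_SIZES = [5_000, 8_000, 12_000]              # bytes (hypothetical)
ACK_TIMES_MS = [50, 100, 200]                       # milliseconds (hypothetical)

# Every combination of the four parameters becomes one test configuration.
test_matrix = list(product(PACKET_SIZES, BUFFER_SIZES, SEGMENT_SIZES, ACK_TIMES_MS))
print(f"{len(test_matrix)} parameter combinations to exercise per node")
```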

4 Models to Discuss

Stress tests were performed using the free tool JMeter, from the same point on an 802.3 LAN against five servers reached through another 802.3 LAN, passing in all five cases through the WAN, as can be seen in Fig. 1. The servers were: google.com, youtube.com, eltiempo.com, caracoltv.com and facebook.com.

Fig. 1. Point-to-point network schematic for testing purposes against the five servers. Source: elaborated by the author with the openclipart.org free image bank.
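The stress tests themselves were run with the tool mentioned above; purely as a language-level illustration of the same idea, the hedged sketch below fires concurrent requests at the five servers and records response times and errors. The thread count, request count and timeout are assumptions, not the values used in the study.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# The five servers tested in the study.
SERVERS = ["https://google.com", "https://youtube.com", "https://eltiempo.com",
           "https://caracoltv.com", "https://facebook.com"]


def probe(url: str) -> tuple[str, float, bool]:
    """Return (url, elapsed seconds, error flag) for a single request."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=15) as resp:
            resp.read(4096)  # read only the beginning of the body
        return url, time.monotonic() - start, False
    except Exception:
        return url, time.monotonic() - start, True


if __name__ == "__main__":
    # 20 requests per server from a pool of 50 worker threads (illustrative values).
    with ThreadPoolExecutor(max_workers=50) as pool:
        results = list(pool.map(probe, SERVERS * 20))
    errors = sum(err for _, _, err in results)
    print(f"{len(results)} requests, error rate {100 * errors / len(results):.3f}%")
```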

5 Discussion of Results

For the google.com server, the average error was 2.792%, which is low for a path of between 14 and 16 nodes, as can be seen in Table 1. It is worth noting that the tests were performed on a very simple page without any multimedia content. In terms of response times during testing, the ACK speeds remained low in 40% of the tests, remained stable in another 40% and increased in the remaining 20%, as can be seen in Fig. 2.

Table 1. Results from stress tests on the google.com server over five days

Fig. 2. Results chart for stress tests on the google.com server on day 4. The y-axis shows the number of requests made and the x-axis the testing times.
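The per-day values behind the 2.792% average appear in Table 1 rather than in the text, so the numbers below are placeholders; the sketch only illustrates the arithmetic used in this section: averaging the five daily error rates and classifying each test's response-time trend as decreased, stable or increased, matching the 40/40/20 split reported above.

```python
# Placeholder daily error rates (%); the real values are those in Table 1.
daily_error = [2.1, 3.4, 2.8, 2.5, 3.2]
average_error = sum(daily_error) / len(daily_error)
print(f"average error: {average_error:.3f}%")

# Classify each test's response-time trend from its slope (placeholder slopes).
slopes = [-0.4, -0.1, 0.0, 0.02, 0.6]


def trend(s: float, eps: float = 0.05) -> str:
    """Label a slope as stable within a tolerance, otherwise increased/decreased."""
    return "stable" if abs(s) <= eps else ("increased" if s > 0 else "decreased")


shares = {k: sum(trend(s) == k for s in slopes) / len(slopes)
          for k in ("decreased", "stable", "increased")}
print(shares)  # e.g. 40% decreased, 40% stable, 20% increased
```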

Regarding the youtube.com server, the average error was 50.776%, a fairly high error margin. The number of nodes was constant at 14, as can be seen in Table 2. The transmission of images, text and hyperlinks behaved acceptably, while the streaming suffered from congestion. The response times were unstable in all the tests, varying erratically with different numbers of users, as seen in Fig. 3.

Table 2. Results from stress tests on the youtube.com server over five days

Fig. 3. Results chart for stress tests on the youtube.com server on day 4. The y-axis shows the number of requests made and the x-axis the testing times.

As for eltiempo.com, the average error was 5.088%, with between 10 and 15 nodes detected, as seen in Table 3. It was concluded that the content presented as images, hyperlinks and text behaved appropriately, and that congestion was related to advertising consisting of GIF images, streaming video and audio (Fig. 4).

Table 3. Results from stress tests on the eltiempo.com server over five days

Fig. 4. Results chart for stress tests on the eltiempo.com server on day 4. The y-axis shows the number of requests made and the x-axis the testing times.

As for caracoltv.com, the average error was 4.274% with a constant number of nodes (12 in total), as seen in Table 4. The content presented as images, hyperlinks and text behaved correctly, while congestion was caused by advertising content and short videos previewing upcoming programming. As for the response times, they increased with the number of users, as seen in Fig. 5.

Table 4. Results from stress tests on the caracoltv.com server over five days

Fig. 5. Results chart for stress tests on the caracoltv.com server on day 2. The y-axis shows the number of requests made and the x-axis the testing times.

As for facebook.com, the average error was 3.4% with a total of 15 detected nodes, as shown in Table 5. The image, hyperlink and text content behaved correctly, and there was slight congestion during the upload of images into albums, as can be seen in Fig. 6.

Table 5. Results from stress tests on the facebook.com server over five days

Fig. 6. Results chart for stress tests on the facebook.com server on day 2. The y-axis shows the number of requests made and the x-axis the testing times.

6 Comparison of Results with Other Authors

In the article Seguridad, rendimiento y QoS en TCP (Security, Performance and QoS in TCP) [11], the error obtained during simulation of the TCP protocol is greater for streaming and multimedia transmission services. In that work, four scenarios were tested, yielding error margins that ranged from 0.275%, through intermediate values of 0.425% and 0.592%, up to 44.35%.

In the project Predicción de tráfico a través de redes neuronales artificiales (Traffic Prediction Through Artificial Neural Networks) [12], streaming is the service with the highest traffic both before and after the implementation of the neural network. The simulation consisted of a 3-node network sending between 50,000 and 51,000 packets, which showed a 4% improvement.

It can be concluded that the results of this work and those of other authors agree that the amount of congestion is directly proportional to the number of users and the number of nodes, and also depends on the type of information being sent. The most critical service is streaming, or any other type of multimedia content, as shown in Table 6.

Table 6. Simulation results for the four scenarios of [11]

7 Conclusions

Pages without multimedia content show little congestion. Streamed content presents the greatest congestion, causing approximately 50% of the transmission errors. It is important to reduce response times as much as possible by varying the segment sizes; lowering the minimum segment size from 12000 bytes to 5000 bytes indicated a 16% reduction.
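The mechanism used to vary the segment size is not described here; as a rough, assumption-laden sketch, a comparable sweep can be approximated at the socket level by changing the kernel send/receive buffer sizes and timing a minimal request, as shown below. The target host, the HEAD request and the two buffer sizes are illustrative only, and the kernel may adjust the requested buffer values.

```python
import socket
import time


def timed_fetch(host: str, buf_bytes: int) -> float:
    """Time a minimal HTTP request made with a given kernel socket buffer size."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        # The kernel may round or clamp these values; this only sketches the sweep.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, buf_bytes)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, buf_bytes)
        s.settimeout(10)
        start = time.monotonic()
        s.connect((host, 80))
        s.sendall(b"HEAD / HTTP/1.1\r\nHost: " + host.encode() +
                  b"\r\nConnection: close\r\n\r\n")
        s.recv(4096)
        return time.monotonic() - start
    finally:
        s.close()


if __name__ == "__main__":
    # Compare the larger and smaller sizes and report the relative change,
    # mirroring the 12 000-byte to 5 000-byte comparison above ("example.com"
    # is a placeholder target, not one of the servers from the study).
    t_large = timed_fetch("example.com", 12_000)
    t_small = timed_fetch("example.com", 5_000)
    print(f"reduction: {100 * (t_large - t_small) / t_large:.1f}%")
```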

Pages with several embedded applications, such as chat, that display unpredictable amounts of information resulted in additional congestion. The errors were low, but the response times were greater, on the order of seconds or more, such as YouTube with 66,874 ms. The type of content and services present on the current internet requires the implementation of a new generation of congestion control algorithms focused on handling multimedia content such as video, audio and interactive applications.