1 Introduction

The advent of 5G networks and their successors, including advanced 5G and 6G, has revolutionized modern connectivity, enabling a new wave of applications such as augmented reality, e-health, and autonomous vehicles [1, 2]. These technologies provide enhanced network performance, scalability, and user accessibility [3,4,5]. However, these advancements bring significant challenges, including the management of user mobility, seamless service continuity, and adaptability in dynamic and heterogeneous network environments [6,7,8]. To address these challenges, the paradigm of Multi-Access Edge Computing (MEC) has emerged as a key enabler by bringing computational resources closer to end users. MEC reduces latency, improves performance for delay-sensitive applications, and alleviates network congestion [9,10,11]. Despite these benefits, ensuring efficient and transparent service migration between MEC nodes remains a critical issue, particularly in scenarios involving rapid infrastructure changes, frequent mobility, or unstable connections [12,13,14].

Traditional service migration approaches often rely on protocols like TCP, which tightly couple logical connections to fixed IP addresses [15, 16]. This limitation results in delays and interruptions during connection recovery, degrading the user experience in mobile environments [17, 18]. Modern transport protocols such as QUIC [19] and its multipath variant, MP-QUIC [20], address these challenges by introducing the connection-id parameter, decoupling logical connections from static IPs. This innovation enables seamless "connection migration," allowing users to maintain stable service even in dynamic network conditions [21, 22]. However, most studies leveraging QUIC focus on emulated environments [23, 24], overlooking the complexities of real-world scenarios such as mobility, interference, and resource limitations [25, 26]. The integration of Software-Defined Networking (SDN) with MEC offers a promising solution to optimize service migration processes by leveraging centralized control and network orchestration capabilities [27]. SDN has evolved beyond its initial purpose of configuration management and troubleshooting [28, 29], becoming a key enabler for enhancing system security [30] and operational activities [31]. The paradigm has expanded to dynamic environments such as space, air, ground, and sea [32,33,34], forming the foundation of Software-Defined Infrastructure (SDI) [35].

Recent advancements in 5G platforms, such as OpenAirInterface (OAI) [36], have enabled the creation of customizable testbeds for real-world experiments [37]. These platforms, combined with technologies like NVIDIA Arc [38], provide powerful tools for evaluating innovative paradigms like Software-Defined Service Migration (SDSM) [12, 39]. Building on these advancements, container-based migration strategies, such as those extending the ETSI MEC framework, have demonstrated their effectiveness in supporting stateful application relocation, ensuring seamless service continuity even in scenarios involving frequent user mobility and infrastructure changes [40]. Despite their potential, existing works often lack consideration for practical deployment scenarios, limiting their ability to address latent challenges and fully exploit emerging technologies [22, 41, 42]. To address these gaps, this work introduces the Software-Defined Service Migration (SDSM) paradigm. SDSM integrates the QUIC protocol with SDN controllers to efficiently manage MEC service migrations, reducing client–server interactions and ensuring seamless service continuity. Our key contributions include:

  • A novel MEC service migration mechanism leveraging QUIC and SDN to optimize migration paths and minimize service interruptions.

  • Empirical validation of SDSM through a 5G testbed built on OpenAirInterface (OAI), complemented by controlled experiments in emulated environments.

  • A comprehensive comparison with state-of-the-art approaches, evaluating latency, connection recovery time, and network throughput.

The remainder of the paper is organized as follows. Having briefly presented the context from the literature, we describe our solution and the rationale behind our design decisions in Sect. 2. We then survey existing platforms before selecting a suitable candidate to support our experiments in Sect. 3, where we also demonstrate the process of adapting the chosen cutting-edge network platform to our experimental goal. Next, we present our experimental measurements in two stages (Sects. 4 and 5): an initial evaluation in an emulated environment and a real-world evaluation using the established network testbed. For each stage, we illustrate the experimental setup and provide a detailed analysis of the results. Finally, Sect. 6 concludes our study and identifies future work.

2 Software-Defined Service Migration

Fig. 1: Sequence diagram comparison

After presenting our motivations for this work, this section describes our SDSM scheme. First, we compare the state-of-the-art service migration sequence diagram with our suggested approach. Next, we propose a mechanism allowing the connection migration process to be triggered by a third party on the client side, which enables the SDSM paradigm. Then, we explain the design decisions and the expected impact.

2.1 Sequence Diagram Comparison

Figure 1 compares, as sequence diagrams, the SOTA approach and our proposed solution. In the SOTA approach [22], the client only becomes aware of the connection migration process once the SERVER MIGRATION frame arrives from the source server. As soon as the client’s ACK occurs, the CRIU-based [42] container migration starts. Once the service is migrated, the client must send a PROBE message and validate the new path itself before it can use it. To ensure the success of the connection migration, the existing connection between the client and the source server must be maintained throughout the migration process.

On the other hand, taking advantage of the SDN controller’s global overview, our approach allows the container migration to be planned and executed before the connection migration starts. Instead of waiting for the container migration process to finish, our solution allows the client to use the new destination server as soon as the new IP address is available.
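As a rough illustration of this difference, the two sequences can be reduced to simple timing models (the function names and example timings below are our own illustrative assumptions, not part of either implementation):

```python
def sota_migration_time(notify_rtt_s, container_migration_s, probe_rtt_s):
    """SOTA flow: the client is notified, then waits for the CRIU-based
    container migration to finish, then probes and validates the new
    path itself before using it."""
    return notify_rtt_s + container_migration_s + probe_rtt_s


def sdsm_migration_time(notify_rtt_s):
    """SDSM flow: the SDN controller has already completed the container
    migration, so the client only needs the notification carrying the
    new destination IP."""
    return notify_rtt_s


# Example: a 2 s container migration dominates the SOTA client's wait.
sota = sota_migration_time(0.02, 2.0, 0.04)
sdsm = sdsm_migration_time(0.02)
```

The point of the sketch is structural: in SDSM, the container migration time drops out of the client-visible path entirely.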

2.2 System Design

Figure 2 represents our connection migration management module on the client side. First, the module should reside within the network management module at the operating system level to allow system-level network communication. Second, it should run as an independent management service using a queue to process migration requests corresponding to each client’s designated server via a priority-based scheme. A notification mechanism transfers each retrieved migration request from the system-level queue to the QUIC connection level, allowing the newly transferred IP to be adopted as the new service destination either instantly or within a scheduled time frame. Figure 2 also describes the processing flow of a migration request.

Fig. 2: Connection migration module’s design & interactions

  1. The network interface nic1 receives a connection migration request.

  2. The network management module forwards the request to the connection migration module’s queue.

  3. The CM module decides the request’s processing order and executes the connection migration process according to its priority and corresponding time frame.

  4. The network management module sets the received IP as the new destination IP for the associated connection-id. It then communicates with the application as in normal operation.

  5. After receiving the response from the application layer, the network management module passes the response to the physical network interface.

  6. The connection migration is done, and the associated network interface nic1 transmits the response using the new destination IP.
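The queue-based part of this flow (steps 2–4) can be sketched as follows; the class and callback names are illustrative, not taken from our implementation:

```python
import queue


class MigrationRequest:
    """A connection migration request; a lower priority value is more urgent."""

    def __init__(self, priority, connection_id, new_ip):
        self.priority = priority
        self.connection_id = connection_id
        self.new_ip = new_ip

    def __lt__(self, other):
        # Ordering used by the priority queue (step 3).
        return self.priority < other.priority


class CMModule:
    """Client-side connection migration module with a priority queue."""

    def __init__(self, on_new_destination):
        self._queue = queue.PriorityQueue()
        # Callback into the QUIC layer: adopt new_ip for connection_id.
        self._notify = on_new_destination

    def submit(self, request):
        self._queue.put(request)  # step 2: enqueue the forwarded request

    def process_one(self):
        request = self._queue.get()  # step 3: highest-priority request first
        # Step 4: hand the new destination IP to the connection identified
        # by its connection-id; QUIC adopts it without a new handshake.
        self._notify(request.connection_id, request.new_ip)


# Usage: the higher-priority request ("conn-b") is processed first.
applied = {}
cm = CMModule(lambda cid, ip: applied.update({cid: ip}))
cm.submit(MigrationRequest(2, "conn-a", "10.0.0.9"))
cm.submit(MigrationRequest(1, "conn-b", "10.0.1.7"))
cm.process_one()
```

Keeping the queue outside the QUIC stack mirrors the design decision above: the OS-level service can reorder and schedule requests without the application being aware of them.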

2.3 Prospective Impact

2.3.1 Benefits

Compared to the state-of-the-art approach, our most important advantage is minimizing the client-side interaction required in the connection migration process. Our approach derives from service migration’s ultimate goal: a seamless network experience for the end user. The longer the connection migration process takes, the higher the chance of an infrastructure event that requires a new migration. By delegating the container migration process to the SDN controller and allowing the client to use the new destination IP immediately, the client no longer has to consider the complexity and dynamicity of the infrastructure. Conversely, the SDN controller, which has access to all the information required for service migration, schedules the migration sequences. This approach also eliminates the need to maintain reachability between the client and the existing server for the connection migration to proceed. This becomes particularly helpful when maintaining the original connection is critical: with the SDN controller, the connection can survive while a snapshot of the existing server is deployed on accessible infrastructure.

2.3.2 Challenges

Besides the benefits, we acknowledge a new set of challenges in effectively employing this capability in a communication system. First, the control overhead for each connection is mandatory to allow the central server to perform its function, which consumes both client and server resources over time. Second, an efficient scheduler is required to effectively orchestrate the service migration process based on the gathered information.

To address the first challenge, which pertains to the scalability of the approach, resource consumption on both ends can be significantly reduced early on by filtering connections based on their criticality. Moreover, recent advancements suggest that a logically centralized yet physically distributed controller offers a promising solution to potential bottlenecks. As for the second issue, Toumi et al. [13] recently demonstrated substantial efforts by the research community to tackle this problem. However, a detailed discussion of this challenge is beyond the scope of our study.
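The criticality-based filtering mentioned above can be as simple as the following sketch (the score field and threshold are our own assumptions, intended only to make the idea concrete):

```python
def select_managed(connections, threshold=0.7):
    """Register only sufficiently critical connections with the SDN
    controller, so the per-connection control overhead is paid only
    where it matters."""
    return [c for c in connections if c["criticality"] >= threshold]


conns = [
    {"id": "video-call", "criticality": 0.9},
    {"id": "background-sync", "criticality": 0.2},
]
managed = select_managed(conns)  # only "video-call" is SDSM-managed
```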

3 Open 5G Testbed

After describing the novel connection migration scheme empowered by the SDSM paradigm, we initiated the search for a suitable platform to validate our proposed solution and study the paradigm’s practical behaviors. This section describes further existing solutions in the literature that led to our decision and later the process of adopting the selected solution in our practical use case.

Table 1 Existing 5G platform solutions in the market

3.1 Existing Experimental Platforms

As outlined earlier in the introduction, the real-world considerations for the SDSM paradigm are still missing in the literature. Meanwhile, the advancements of 5G testbeds are receiving significant attention from the community. Therefore, the state-of-the-art led us to study the paradigm on such cutting-edge wireless platforms. Hence, we surveyed existing solutions and compared them in Table 1 to select the most suitable candidate for our purpose.

After careful consideration of the characteristics of the various platforms, we chose to build our 5G-based SDSM platform around the OAI ecosystem. This decision was driven by the project’s balance of readiness, openness, performance, flexibility, and cost efficiency. The next part of our discussion outlines the challenges we faced while assembling the components required to construct our testbed.

3.2 The Encountered Challenges

This section showcases our progress in deploying our 5G testbed, which supports not only our investigation of SDSM’s practical behavior but also our future usage. We therefore address in detail each challenge encountered at each stage of building the testbed: the deployment planning and the actual deployment process.

3.2.1 The Deployment Planning

The first essential step in deploying the targeted platform is to devise a deployment design that fits our goals. To be clear, our aim is not only to establish a lab environment for evaluating SDSM behavior in practical conditions. We also want to keep the platform open to further development for upcoming complex and advanced scenarios, to adapt to novel conditions as our study expands, and finally to support our campus network usage. Thus, while the initial version prioritizes providing the environment required for this study, it should be well prepared for a larger-scale deployment with minimal configuration changes and hardware investment.

Fig. 3: Experimental architecture

Therefore, we used our institute of technology’s campus network as the backbone to enhance the platform’s overall capability while achieving a highly realistic environment. Once the validation process for this work is finished, we can adopt the platform for university usage right afterwards. To provide 5G coverage, we leverage a USRP module and a workstation to establish a 5G cell. As indicated earlier, we base our platform on the OAI project to accelerate the deployment process at minimal expense. This approach satisfies the current setup and lets us later extend coverage for future usage simply by adding more cells. We also leverage SDN management capabilities to automatically deploy the required software for each cell, reducing deployment effort. In this particular work, we employ two workstations at the network’s edge to handle 5G computing functionalities and manage container migration processes on our MEC servers. Two established cells are sufficient to provide 5G coverage while supporting actual UE signaling and interactions. Figure 3 represents the complete setup supporting the SDSM proof of concept, and Fig. 4 shows the actual preparation of each edge workstation before deploying them as our campus 5G platform.

Fig. 4: Deployment preparation of the edge workstations

Besides the hardware infrastructure, the software stack supporting SDSM is also deployed and reconfigured to fit the extended operational environment. Furthermore, since our work builds on [21], it is essential to be able to run Python code supporting the QUIC implementation (AioQUIC, in Python) on both the MEC and UE sides. On the MEC side, executing Python code is natively supported. On the UE side, however, this task is non-trivial. Thanks to Termux [45], a terminal emulator that provides a Linux environment on Android devices, adopting the QUIC client implementation on UEs became less challenging.

3.2.2 The Actual Deployment Process

After preparing each element’s function, we proceed to integrate them into our experimental platform. Our integration method is to functionally test each component and then gradually add it to a unified system. By steadily adding one well-tested component at a time to a working system, this approach allows us to pinpoint each integration issue with high accuracy and to troubleshoot on a well-estimated schedule, enabling rapid and systematic deployment progress. In practice, our platform integration process includes the following stages:

  • Confirming the 5G cell interactions by deploying one 5G core station and multiple UEs.

  • Confirming the inter-cell interactions by deploying the second 5G station within the same coverage area.

  • Extending the coverage of the platform from in-laboratory into the intranet scale by integrating the campus’s core network.

  • Deploying SDSM’s software components for our final evaluations.

For each stage of our deployment, we performed various trial runs to confirm the platform’s functionality and address the issues encountered. We thus succeeded in establishing a stable platform within two months, including both the development and integration phases. Besides resolving the technical issues, fine-tuning the related parameters to suit the deployment environment was also essential for our current and future usage.

4 Emulated Experiment

In parallel with preparing the platform for our practical measurements, we compare our scheme with the SOTA in an emulated environment. This section presents, in order, the emulated experimental environment, the metrics used, and our experiment descriptions, and concludes with our results and an in-depth discussion.

4.1 Emulated Experimental Environment

Thanks to advances in container technology and the source code published in [21], it is straightforward to reproduce the “connection migration” emulated environment described in the SOTA. Figure 5 summarizes our setup: one virtual machine acts as the client and two other VMs act as edge servers: the “source” server hosts the server container at the beginning of the experiment (before the migration occurs), while the “destination” server hosts it after the migration. Each virtual machine uses the same specifications: four vCPUs, 8 GB of RAM, and 40 GB of disk, running Ubuntu 18.04.5 LTS. All the VMs join the same host-only network provided by VirtualBox version 6.1.38. We chose the “connection migration time” as the performance metric: it is defined as the time at which the client sets the new IP as the primary address minus the arrival time of the new-IP request. We used two of the four container migration types introduced in the SOTA (cold [46] and pre-copy [47]) for our measurements. For each data point, we repeated the measurement three times.
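For clarity, the metric can be expressed directly from its definition (timestamps in seconds; all names below are ours, chosen for illustration):

```python
def connection_migration_time(new_ip_request_arrival_ts, new_ip_primary_ts):
    """Time at which the client sets the new IP as the primary address,
    minus the arrival time of the new-IP request."""
    return new_ip_primary_ts - new_ip_request_arrival_ts


def average(samples):
    """Each data point averages three repeated measurements."""
    return sum(samples) / len(samples)


t = connection_migration_time(12.40, 12.65)  # one repetition
```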

Fig. 5: Emulated experimental environment

4.2 Experiment Scenario

Besides deciding on the metric and required parameters, the most important part of this phase is conducting the experiments. The experiment emulates the connection migration workflow of a typical service migration scenario. While the client uses the service from the existing server (the “source” server), the client’s network performance significantly decreases due to an unexpected event. The existing server finds no issue with its internal operation. To improve the situation, the client should be served by another candidate edge server (the “destination” server), with the expectation of improved communication performance. While the SOTA relies on the existing server’s point of view, our solution uses an SDN controller to trigger the connection migration process. Technically, the connection migration can only succeed after the container has been successfully migrated. This study uses two basic container migration strategies for our measurements, which can be distinguished as follows:

  • Cold [46]: Stop the container, create a snapshot of the current container, transfer to the new location, and restore the snapshot

  • Pre-copy [47]: Create a baseline snapshot of the current container and transfer the baseline snapshot. Afterward, stop the container, create the differentiated snapshot, transfer it, then restore the original container
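A back-of-the-envelope downtime model clarifies why the two strategies differ (our own simplification; real CRIU-based migrations add further overheads):

```python
def cold_downtime(snapshot_s, transfer_s, restore_s):
    # Cold: the container is stopped for the entire snapshot,
    # transfer, and restore sequence.
    return snapshot_s + transfer_s + restore_s


def precopy_downtime(diff_snapshot_s, diff_transfer_s, restore_s):
    # Pre-copy: the baseline snapshot is shipped while the container
    # still runs; only the smaller differential snapshot contributes
    # to the stopped period.
    return diff_snapshot_s + diff_transfer_s + restore_s


cold = cold_downtime(1.0, 4.0, 1.0)        # container stopped throughout
precopy = precopy_downtime(0.3, 0.5, 1.0)  # only the diff is on the critical path
```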

Using the above specifications, we measure the connection migration time using the SOTA approach and our SDN-based approach. First, we study the impact of delay on the source server link, which degrades performance on the client side during the migration process. Second, we evaluate the impact of frequent migrations on the process. Such a scenario typically arises in highly dynamic and changing conditions, especially in mobile environments. To this end, we measured the total connection migration time over multiple consecutive migrations to verify our solution’s efficiency. An in-depth analysis is presented in the following discussion.

4.3 Delay’s Influence on the Connection Migration Process

Fig. 6: Source server delay’s impact on connection migration time

Figure 6 illustrates the impact of delay on the source server on the overall connection migration process for our method versus the SOTA approach. As mentioned before, this work is our first step in investigating this innovative connection migration mechanism; thus, only two fundamental container migration strategies are combined with the selected connection migration candidates for our evaluation. Despite a significant increase in the delay, the connection migration time in every combination increases only slightly, except for the server-side connection + pre-copy combination. Moreover, the obtained results confirm that our solution significantly improves the connection migration time for both container migration strategies compared with the SOTA in each corresponding scenario. In our solution, the SDN controller executes the container migration before the connection migration proceeds. Hence, the client can finish the migration process as soon as the new destination IP is available. Our approach therefore minimizes the impact of source-server-side delay on the connection migration time.

4.4 Migration Frequency’s Influence on the Connection Migration Time

Fig. 7: Consecutive connection migration’s impact on connection migration time

We now evaluate the influence of migration frequency on the connection migration time. Based on the first result, we select the “Cold” container migration strategy, as it offers the highest performance when combined with either connection migration approach. The total connection migration time for four consecutive requests is plotted in Fig. 7. We observe the same increasing trend in both connection migration approaches as the number of consecutive requests grows. However, our proposed solution shows a lower slope, so its overall performance is always better than that of the SOTA approach. The figure again confirms the improvement in connection migration time achieved by our approach. The trend also indicates that our solution will continue to outperform the original approach as the connection migration frequency increases.

5 Practical Experiment

With the proposed solution’s effectiveness confirmed in the emulated environment, and the 5G testbed successfully deployed atop our existing network infrastructure to provide reliable and flexible 5G coverage on campus, we were ready to collect practical measurements of SDSM on the resulting testbed.

However, switching from pure emulation to a real-world implementation can be dramatically challenging. To study the SDSM paradigm in practical conditions, we first had to confirm that our testbed exhibits normal, expected behavior in simple scenarios. To validate our 5G platform and assess its baseline performance, we designed a first series of measurements using traditional transport protocols. Having passed this basic validation phase, we designed a second series of measurements to study the SDSM paradigm, empowered by the QUIC protocol, in practical conditions. This section details the two series of measurements, including a discussion of each experimental scenario and the related results and analysis.

5.1 TCP-Based and UDP-Based Measurements

Figure 9 illustrates our first experiment, which studies TCP and UDP behaviors on our 5G platform. Throughput is measured every second with various data rates produced by the iPerf3 client. According to Douarre et al. [48], the maximum downlink throughput with the same hardware configuration is 7.3 Mbps (n41 band) and 9.3 Mbps (n78 band), respectively. In our initial validation (with no physical obstacle) after parameter tuning, the highest throughput we recorded was 25 Mbps. We therefore chose 8 Mbps as the data rate for our investigation, well below the upper bound of our platform’s bandwidth. At this rate, we expect the channel not to be overwhelmed and hence not to introduce irregular behavior into the results. The next discussion covers our first experiment scenario.
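When iPerf3 is run with its `--json` flag, the per-second throughput series can be recovered from the report as follows (the sample report below is fabricated for illustration; real reports contain many more fields):

```python
import json


def interval_throughput_mbps(report_json):
    """Per-interval throughput in Mbps from an `iperf3 --json` report."""
    report = json.loads(report_json)
    return [interval["sum"]["bits_per_second"] / 1e6
            for interval in report["intervals"]]


sample = json.dumps({
    "intervals": [
        {"sum": {"bits_per_second": 8_000_000.0}},
        {"sum": {"bits_per_second": 7_500_000.0}},
    ]
})
series = interval_throughput_mbps(sample)  # [8.0, 7.5] Mbps
```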

5.1.1 Experimental Scenario

Following the experimental architecture introduced in Sect. 3, we study the behavior of TCP and UDP in connection-loss scenarios to pave the way for the SDSM paradigm. To perform these interventions, we exploit the established coverage while keeping the 5G station static. A session between the iPerf client (on the UE) and the iPerf server (on the MEC) is initiated and maintained throughout the experiment. Once the measured throughput indicates a stable network condition, the UE is moved away from its current location (the center of the 5G cell) until the signal is lost. After confirming that the connection has been lost for five seconds, the experimenter carries the phone back to the original location along the same trajectory. Figure 8 illustrates the interactions involved in this scenario.

Fig. 8: The first experimental scenario

5.1.2 Results and Analysis

During our various measurements, we witnessed fluctuations in the UE’s throughput even in a static position, which reflects the practical nature of our network platform. In general, discussions about network throughput assume a stationary state [49]. Therefore, Table 2 gives the throughput variance exhibited throughout our experiments.

After three measurements for each protocol, we collect the iPerf server’s log on the MEC and filter out the required metric. Figure 9 presents the throughput transitions using TCP and UDP as transport protocols with an 8 Mbps data rate. The red-dashed squares pinpoint the crucial phases. The first illustrates the stability of the 5G platform. Next, once the UE moves out of range of the 5G cell, the connection is lost and the throughput degrades. When the UE re-enters the station’s coverage, the connection is restored, and TCP and UDP throughput rises again. The results show that the UDP-based connection recovers more quickly than the TCP-based one, thanks to its connectionless mechanism. Furthermore, we noticed a spike that almost reached our throughput upper bound (25 Mbps) in every measurement right after the handover occurred, regardless of the transport protocol. After this unexpected event, the measured throughput gradually returns to the selected data rate and becomes steady again.

Fig. 9: Measured throughput with 8 Mbps data rate

After careful investigation, we identified Linux’s default network parameter vector net.ipv4.udp_mem [50], which regulates the buffering of outgoing network data, as the leading cause of this spike. The vector holds three values, “min pressure max”, expressed as numbers of page sizes reserved for all UDP connections in the system. The default values vary depending on the UE hardware and Android version, but they are always computed at boot time from the amount of available memory. In our case, the UE’s default value allows buffering a minimum of 227986 pages (≈ 70 MB) across all UDP connections. This behavior, combined with the iPerf client’s default behavior of continuing to push data despite losing the connection, fills the buffer. Once the connection is restored, all the buffered data is released and sent immediately. Since the specified data rate is smaller than the channel’s capacity, the buffered data is sent at the highest available bandwidth. As a result, the measured performance exceeds the original data rate, producing the spikes highlighted in Fig. 9. Since we did not have root access on our UE to change this parameter’s default value, we kept the same environment and validated these behaviors again with the iPerf data rate lowered to 4 Mbps. In this second trial, we observed similar trends and data spikes, further supporting our previous findings. The network performance for this experiment is illustrated in Fig. 10.
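The page-to-byte conversion behind this analysis is straightforward; the sketch below assumes a 4 KiB page size and uses illustrative sysctl values, not the UE’s actual ones:

```python
PAGE_SIZE = 4096  # assumed; the actual page size depends on the kernel build


def udp_mem_bytes(sysctl_value):
    """Convert the "min pressure max" triple of net.ipv4.udp_mem
    (expressed in pages) into byte limits."""
    minimum, pressure, maximum = (int(v) for v in sysctl_value.split())
    return {
        "min": minimum * PAGE_SIZE,
        "pressure": pressure * PAGE_SIZE,
        "max": maximum * PAGE_SIZE,
    }


limits = udp_mem_bytes("18978 25306 37956")  # illustrative values
```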

Fig. 10: Measured throughput with 4 Mbps data rate

5.2 Non-migration and SDSM Measurements

After validating the established 5G platform’s functionality through the previous measurements, we proceed to investigate SDSM behaviors and the effectiveness of our proposed scheme in practical conditions.

5.2.1 Experimental Scenario

To study SDSM behaviors in practical conditions and demonstrate our scheme’s effectiveness, we use two MEC- and SDSM-enabled 5G stations in our second experimental scenario. Building on the 5G testbed introduced in Fig. 3, Fig. 11 details the interactions involved in the following measurements. In this scenario, the base images of the QUIC server container are stored on each MEC to minimize unnecessary data transmission. Each measurement consists of a round trip from the initial location to the return point and back along the same path in the reverse direction; thus, every measurement includes two SDSM interactions to ensure a seamless network experience for the client. Our previous observations confirmed that UDP enables faster connection recovery than TCP. Therefore, in this section we focus exclusively on QUIC-based SDSM, QUIC messages being encapsulated in UDP datagrams. Unfortunately, the current lack of QUIC support in iPerf led us to instrument the AioQUIC framework to measure the UE’s throughput over time. Aware of the iPerf and Linux socket buffer behaviors uncovered in our prior investigations, our mechanism calculates the current throughput at the application layer rather than the transport layer, continuously inspecting each packet received from the server and storing its timestamp. As a consequence, the measured packet sizes include QUIC header overhead, which slightly inflates the measured throughput; we consider this negligible in the context of our experiment. The throughput is then computed from the transmission durations recorded in the logs. This approach lets our measurements avoid the data spikes observed earlier while reflecting the available throughput. At the start of each measurement, a UE attached to a human body initiates a connection with the nearest server (Station A) using its QUIC client. The experimenter waits for the metric logs to initialize, then moves from the starting location toward the destination (Station B). Upon reaching the return point, they reverse direction to complete the remaining half of the cycle.
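The application-layer throughput computation described above amounts to the following sketch (the log format and function name are our own, not those of the instrumented AioQUIC code):

```python
def app_layer_throughput_mbps(packet_log):
    """packet_log: chronologically ordered (timestamp_s, size_bytes)
    tuples, one per packet received from the server; sizes include
    QUIC header overhead, as noted above."""
    if len(packet_log) < 2:
        return 0.0
    duration_s = packet_log[-1][0] - packet_log[0][0]
    total_bits = 8 * sum(size for _, size in packet_log)
    return total_bits / duration_s / 1e6


log = [(0.0, 1200), (0.5, 1200), (1.0, 1200)]
rate = app_layer_throughput_mbps(log)  # 28800 bits over 1 s
```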

Fig. 11: The second experimental scenario

5.2.2 Results and Analysis

Using the described scenario, Fig. 12 showcases the average UE’s throughput across three trials. Initially, we observe a steady trend in UE throughput, affirming that our platform’s setup and functionality are performing correctly. Notably, highlighted events #1 and #2 demonstrate the behaviors of the SDSM and non-SDSM solutions under real-world conditions. In event #1, with SDSM deactivated, the UE’s throughput falls to zero, and the connection recovery takes considerable time, only succeeding when the signal strength is sufficient. Conversely, when SDSM is enabled, the connection remains stable throughout the process. This means the UE continues to receive data even with a weaker signal, albeit with reduced network performance.

Fig. 12: UE’s throughput improvement empowered by SDSM

To delve deeper, we now discuss the SDSM interactions that enhance the UE’s network experience. Leveraging SDN’s management capabilities, the controller notifies the client of the server’s new IP address during the anticipated service migration, when the UE is about to move out of range. Concurrently, it orchestrates the underlying infrastructure and migrates the running service from the current MEC server to Station B. This mechanism allows the SDSM paradigm to maintain the client’s connection seamlessly as it transitions to the new coverage. The connection to the corresponding server remains uninterrupted, and the underlying interactions are transparent to the end user under dynamic mobility conditions. The highlighted data transitions thus underscore SDSM’s progress toward a seamless network experience for the UE in a cutting-edge 5G network environment. Besides the visualized data, the numerical results in Table 2 confirm that SDSM provides higher average throughput and lower throughput variation and standard deviation, indicating a smoother network experience.

Table 2 Measurement variations
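The statistics reported in Table 2 are standard descriptive measures. A minimal sketch of how such a summary can be computed from a per-trial throughput trace follows; the `throughput_summary` helper and the sample trace are illustrative, not our measured data.

```python
import statistics

def throughput_summary(samples):
    """Summarize one throughput trace (e.g. Mbit/s samples from a trial):
    average, peak-to-peak variation, and standard deviation."""
    return {
        "average": statistics.fmean(samples),
        "variation": max(samples) - min(samples),   # peak-to-peak spread
        "std_dev": statistics.pstdev(samples),      # population std deviation
    }

# Illustrative trace (not measured data): a stable link with one brief dip,
# as would appear around event #1 without SDSM.
trace = [95.0, 94.0, 96.0, 60.0, 93.0, 95.0]
summary = throughput_summary(trace)
```

A single deep dip inflates both the variation and the standard deviation, which is why these two metrics capture the smoothness difference between the SDSM and non-SDSM runs.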

It might seem straightforward to complete this series of validation measurements by adapting the experimental framework applied in Sect. 4. In particular, we studied how the latency to the server influences the connection migration time in a fully controlled SDSM scenario. However, conducting such experiments in the “real world” (by definition, uncontrolled) turned out to be daunting, as we could not achieve perfectly stable network conditions for the duration of our measurements. We believe this failure stems from frequent rate changes caused by the medium adaptation mechanisms of the 5G protocol stack, regrettably worsened by interference due to heavy use of the electromagnetic spectrum in our telecommunications department. In conclusion, we consider the results gathered during this attempt unexploitable, so far.

5.3 Practical Challenges and Insights

Conducting practical experimentation often uncovers unexpected technical challenges. In this section, we outline the key lessons learned during the deployment and testing of the Software-Defined Service Migration (SDSM) paradigm on a 5G testbed. These insights aim to assist future researchers in reproducing our work and adapting it to their contexts.

5.3.1 Experimental Design

The SDSM paradigm leverages the synergy between SDN and service migration. SDN offers comprehensive network visibility and control, while service migration, enhanced by containerization technologies, addresses interoperability challenges from the end-user perspective. Combining these paradigms, supported by modern testbeds, allows us to harness their strengths and mitigate their weaknesses, aiming for advanced communication performance. To validate this combination, we identified scenarios where each paradigm’s weaknesses are offset by the other’s strengths. Specifically, service migration benefits from SDN’s network overview to enhance interoperability and improve user experience. We initially conducted experiments in a reproducible emulated environment to anticipate practical challenges. Subsequently, we refined our experimental scenarios and conditions step by step, guided by established theories and observations.

5.3.2 Practical Implementation and Deployment

Transitioning from our emulated design to a feasible 5G-based deployment required staying abreast of current academic and industrial advancements. To ensure reproducibility for educational purposes, we selected commonly available components. The openness of the research community and the affordability of these components facilitated the swift construction of our proof-of-concept testbed. However, initial data trends were obscured due to interference from high-performance industrial equipment (WiFi, 4G, and 5G signals) in our area. To achieve an environment comparable to our emulated setup, we considered two options: investing significantly in infrastructure to enhance coverage and signal strength, or isolating our experimental environment from external interferences. Given the pioneering nature of our study, we opted not to invest heavily in industrial-grade equipment at this stage. Instead, we tested various specifications and fine-tuned channel parameters to establish an optimal communication environment, sufficiently stable for initial use. Once our specific requirements are clarified, we plan to standardize our testbed deployment accordingly. Due to technical variations across countries and operational environments, the following guidelines are general recommendations that helped us minimize unexpected behaviors in our results.

  • 5G-core servers: A high-performance workstation (\(\ge \) 8 cores) with sufficient storage (\(\ge \) 40 GB), RAM (\(\ge \) 16 GB), USB 3.0 support, and an installed open-source operating system (Ubuntu) is recommended.

  • 5G USRP interfaces: Hardware specifications depend on the intended usage, mostly the target throughput and the number of concurrently managed connections. For PoC and educational purposes, any well-tested hardware is sufficient.

  • End-user mobile devices: Any 5G-capable mobile phone is sufficient. However, a device with well-tested Termux and Python 3.12 compatibility is strongly recommended to reduce the extra engineering effort required to employ our extended QUIC-based client and server implementation that supports SDSM [51].

  • Operational channel parameter selection: Avoid the frequency bands used by commercial mobile operators, and follow the Global System for Mobile Communications Cell Global Identity guidelines for details.

  • Conducting environments: Minimize obstacles and signal interference from unidentified sources, then confirm a stable signal measurement before conducting the experiments. Each experiment should be repeated at least three times for clear data-trend visibility.
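The stability check in the last guideline can be sketched as a simple pre-experiment gate. This is an assumption-laden illustration: the `signal_is_stable` helper, the 5% relative-deviation threshold, and the dBm readings below are hypothetical, not values from our testbed.

```python
import statistics

def signal_is_stable(measurements, max_rel_stdev=0.05, min_samples=3):
    """Return True when enough repeated signal measurements have been taken
    and their relative standard deviation stays below max_rel_stdev.
    Threshold and sample count are illustrative assumptions."""
    if len(measurements) < min_samples:
        return False  # not enough repetitions to trust the trend
    mean = statistics.fmean(measurements)
    if mean == 0:
        return False  # avoid dividing by zero on a degenerate trace
    return statistics.pstdev(measurements) / abs(mean) < max_rel_stdev

# Usage with hypothetical RSSI readings (dBm) from repeated probes:
stable = signal_is_stable([-71.0, -70.5, -71.2, -70.8])
unstable = signal_is_stable([-71.0, -95.0, -60.0])
```

Gating each run on such a check is what let us distinguish genuine SDSM effects from the environmental interference described above.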

6 Conclusion & Future Work

This study addresses critical gaps in service migration within dynamic and heterogeneous network environments by introducing the Software-Defined Service Migration (SDSM) paradigm. Leveraging a centralized SDN controller, the QUIC protocol, and MEC technologies, we propose a mechanism that ensures seamless service continuity, even under challenging conditions such as high mobility and unstable network links. Through comprehensive evaluations in both emulated and real-world 5G testbeds, our results demonstrate significant improvements in connection migration time and overall user experience. These findings validate the effectiveness of SDSM in addressing key limitations of traditional approaches and position it as a robust solution for next-generation networks.

While the SDSM paradigm offers clear benefits, such as reduced migration times and improved service reliability, it also raises new challenges, particularly regarding scalability and resource overhead. The centralized nature of the SDN controller, although powerful, requires careful orchestration to avoid bottlenecks and ensure efficient resource utilization. Future research must focus on optimizing these aspects to enable broader adoption in diverse scenarios, including high-density urban environments and large-scale deployments.

In terms of practical applications, SDSM shows promise in high-priority scenarios, such as emergency response, medical services, and critical infrastructure management. However, further exploration is needed to evaluate its potential in more casual use cases and under highly dynamic conditions. Expanding the paradigm to integrate additional transport protocols and support multi-layered edge computing architectures could further enhance its versatility and performance. Additionally, future work should investigate advanced machine learning techniques to enable predictive migration strategies, minimizing downtime and maximizing efficiency. The integration of distributed SDN controllers and hybrid architectures could address scalability challenges, ensuring consistent performance across varying network scales and conditions.