
Testing Quality of Training in QoE-Aware SFC Orchestration Based on DRL Approach

  • Conference paper
  • In: Testing Software and Systems (ICTSS 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14131)


Abstract

In this paper, we propose a Deep Reinforcement Learning (DRL) approach to optimize a learning policy for Service Function Chaining (SFC) orchestration that maximizes Quality of Experience (QoE) while meeting Quality of Service (QoS) requirements in Software Defined Networking (SDN)/Network Functions Virtualization (NFV) environments. We adopt an incremental orchestration strategy suited to online settings, which lets us investigate SFC orchestration by processing each incoming SFC request as a multi-step DRL problem. The DRL implementation uses a Deep Q-Network (DQN) variant referred to as Double DQN. We focus in particular on evaluating the performance and robustness of the DRL agent during the training phase by investigating and testing the quality of training. To this end, we define a testing metric that monitors the performance of the DRL agent, quantified as a QoE threshold score to be reached on average over the last 100 runs of the training phase. Numerical results show how the DRL agent behaves during the training phase and how, for different network scales, it attempts to reach a predefined average QoE threshold score. We also highlight the effect of network scale on achieving a suitable performance-convergence trade-off.
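The abstract names two concrete mechanisms: the Double DQN update, which decouples action selection from action evaluation to reduce overestimation bias, and a training-quality test that stops training once the agent's average QoE over the last 100 runs reaches a threshold. The following Python sketch illustrates both under stated assumptions; the discount factor, the QoE threshold value, and the identifiers (`double_dqn_targets`, `training_quality_reached`, `QOE_THRESHOLD`) are illustrative placeholders, not values or names from the paper.

```python
from collections import deque

import numpy as np
import torch
import torch.nn as nn

# All values below are illustrative placeholders, not taken from the paper.
GAMMA = 0.99          # discount factor (assumed)
QOE_THRESHOLD = 0.8   # hypothetical average-QoE target score
WINDOW = 100          # the "last 100 runs" window from the abstract


def double_dqn_targets(online_net: nn.Module,
                       target_net: nn.Module,
                       rewards: torch.Tensor,      # shape [B]
                       next_states: torch.Tensor,  # shape [B, state_dim]
                       dones: torch.Tensor) -> torch.Tensor:  # shape [B], 0/1
    """Double DQN target: the online net selects the next action,
    the target net evaluates it (Van Hasselt et al., 2016)."""
    with torch.no_grad():
        best_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        next_q = target_net(next_states).gather(1, best_actions).squeeze(1)
    return rewards + GAMMA * (1.0 - dones) * next_q


def training_quality_reached(qoe_history: deque) -> bool:
    """Testing metric from the abstract: the mean QoE over the last
    100 training runs must reach the predefined threshold."""
    if len(qoe_history) < WINDOW:
        return False
    return float(np.mean(qoe_history)) >= QOE_THRESHOLD


# Usage sketch inside a training loop:
#   qoe_history = deque(maxlen=WINDOW)
#   for each training episode (one SFC request processed step by step):
#       ... run the episode, compute episode_qoe ...
#       qoe_history.append(episode_qoe)
#       if training_quality_reached(qoe_history):
#           break  # agent judged sufficiently trained at this network scale
```

Decoupling action selection from evaluation is what distinguishes Double DQN from vanilla DQN, and the moving window over the last 100 episodes provides the smoothing the abstract relies on to judge training quality.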



Author information

Correspondence to Wiem Taktak.


Copyright information

© 2023 IFIP International Federation for Information Processing

About this paper


Cite this paper

Escheikh, M., Taktak, W., Barkaoui, K. (2023). Testing Quality of Training in QoE-Aware SFC Orchestration Based on DRL Approach. In: Bonfanti, S., Gargantini, A., Salvaneschi, P. (eds) Testing Software and Systems. ICTSS 2023. Lecture Notes in Computer Science, vol 14131. Springer, Cham. https://doi.org/10.1007/978-3-031-43240-8_19

  • DOI: https://doi.org/10.1007/978-3-031-43240-8_19

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-43239-2

  • Online ISBN: 978-3-031-43240-8

  • eBook Packages: Computer Science, Computer Science (R0)
