Towards Diagnosing Accurately the Performance Bottleneck of Software-Based Network Function Implementation

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 13882))

Abstract

Software-based Network Functions (NFs) improve the flexibility of network services. Compared with hardware implementations, NFs exhibit distinct behavioral characteristics. Performance diagnosis is the first and most difficult step in NF performance optimization. Do existing instrumentation-based and sampling-based performance diagnosis methods work well in the NF scenario? In this paper, we first rethink the challenges of NF performance diagnosis and correspondingly propose three requirements: fine granularity, flexibility, and freedom from perturbation. We investigate existing methods and find that none of them meets these requirements simultaneously. We propose a quantitative indicator, the Coefficient of Interference (CoI), defined as the fluctuation between per-packet latency measurements taken with and without performance diagnosis; it represents the degree of performance perturbation introduced by the diagnosis process. We measure the CoI of typical performance diagnosis tools on different types of NFs and find that the perturbation caused by instrumentation-based diagnosis solutions is \(7.39\%\) to \(74.31\%\) of that caused by sampling-based solutions. On this basis, we propose a hybrid NF performance diagnosis approach to trace NF performance bottlenecks accurately.
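
The exact CoI formula is not reproduced on this page, so the sketch below is illustrative only: assuming CoI is computed from two sets of per-packet latency samples (one with the diagnosis tool enabled, one without), it uses the relative shift of the mean latency as a stand-in for the fluctuation metric. The function name, the data, and the formula itself are hypothetical and not the authors' definition.

import statistics

def coefficient_of_interference(lat_baseline, lat_diagnosed):
    # lat_baseline:  per-packet latencies (us) measured with diagnosis disabled
    # lat_diagnosed: per-packet latencies (us) measured with diagnosis enabled
    # Hypothetical stand-in for CoI: relative shift of mean per-packet latency.
    mean_base = statistics.mean(lat_baseline)
    mean_diag = statistics.mean(lat_diagnosed)
    return abs(mean_diag - mean_base) / mean_base

# Toy example: a profiler adds roughly 2 us to a ~20 us per-packet latency.
baseline  = [19.8, 20.1, 20.0, 20.3, 19.9]
diagnosed = [21.9, 22.4, 22.1, 22.6, 22.0]
print(f"CoI (illustrative) ~ {coefficient_of_interference(baseline, diagnosed):.2%}")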



Acknowledgments

We thank our shepherd Simone Ferlin-Reiter and the anonymous reviewers for their insightful feedback. This work is supported in part by the National Key R&D Program of China (Grant No. 2019YFB1802800), and in part by the National Natural Science Foundation of China (Grant No. 61725206).

Author information

Corresponding author

Correspondence to Ru Jia.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Jia, R., Pan, H., Jiang, H., Fdida, S., Xie, G. (2023). Towards Diagnosing Accurately the Performance Bottleneck of Software-Based Network Function Implementation. In: Brunstrom, A., Flores, M., Fiore, M. (eds) Passive and Active Measurement. PAM 2023. Lecture Notes in Computer Science, vol 13882. Springer, Cham. https://doi.org/10.1007/978-3-031-28486-1_11

  • DOI: https://doi.org/10.1007/978-3-031-28486-1_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-28485-4

  • Online ISBN: 978-3-031-28486-1

  • eBook Packages: Computer Science, Computer Science (R0)
