Abstract
Optical fiber links are known for their high bandwidth and reliable data transmission. However, problems may still arise that affect signal quality and network performance. These problems typically stem from external physical extrusion or excessive bending, insufficient transmission power, damaged connectors causing signal loss, or failures of the splice tray connector. In response to the growing need for transparency and interpretability around optical fiber link problems, various attempts have been made to bring explainability into Artificial Intelligence (AI) decision-making and reasoning processes. This paper tackles a crucial and timely topic, i.e., understanding the various factors contributing to optical link problems by explaining opaque AI models, with two goals: (i) providing instance explanations for a given decision by using a local and model-agnostic approach; or (ii) providing global explanations that describe the overall logic, assuming knowledge of the black-box model or its internals. The scientific contribution of this paper entails novel explainable AI (XAI) models that harvest data from optical fiber link events to first derive local explanations and then apply a hierarchical approach to educe global explanations from the local ones. The proposed approach shows that we can efficiently balance explanation complexity and fidelity when reasoning about the causes of optical fiber link problems.
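To make the local-to-global idea in the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' implementation): a LIME-style local, model-agnostic explainer is applied to individual predictions of a black-box fault classifier, and the absolute local feature weights are then aggregated into a simple global importance ranking as a stand-in for the hierarchical local-to-global procedure described in the paper. The feature names (e.g., tx_power_dbm, rx_power_dbm), the synthetic data, and the aggregation scheme are all illustrative assumptions.

```python
# Hypothetical sketch: local LIME explanations aggregated into a global ranking.
# Assumes scikit-learn and the `lime` package; data and feature names are synthetic.
from collections import defaultdict

import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["tx_power_dbm", "rx_power_dbm", "bias_current_ma", "temperature_c", "voltage_v"]
X = rng.normal(size=(500, 5))
# Synthetic label: a link is "faulty" when received power is low or temperature is high.
y = ((X[:, 1] < -0.5) | (X[:, 3] > 1.0)).astype(int)

# Black-box model standing in for the opaque AI model to be explained.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["healthy", "faulty"], mode="classification"
)

# Local step: explain each prediction individually.
# Global step: sum absolute local weights per feature (a simplified stand-in
# for the hierarchical local-to-global procedure in the paper).
global_importance = defaultdict(float)
for row in X[:50]:
    exp = explainer.explain_instance(row, model.predict_proba, num_features=5)
    for condition, weight in exp.as_list():
        # LIME returns discretized conditions such as "rx_power_dbm <= -0.67";
        # map each condition back to its base feature name.
        base = next(n for n in feature_names if n in condition)
        global_importance[base] += abs(weight)

for name, score in sorted(global_importance.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```

In this sketch the global ranking should highlight rx_power_dbm and temperature_c, mirroring how aggregated local explanations can surface the dominant causes of link problems; the paper's actual hierarchical method and feature set may differ.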
Acknowledgment
The work of the authors has been supported by the TALON project funded by the European Union’s Horizon Europe Research and Innovation program under the grant agreement No. 101070181.
Author information
Authors and Affiliations
Corresponding author
Editor information
Editors and Affiliations
Ethics declarations
Disclosure of Interest
The authors have no competing interests to declare that are relevant to the content of this article.
Appendix A
Rights and permissions
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Theodorou, G., Karagiorgou, S., Fulignoli, A., Magri, R. (2024). On Explaining and Reasoning About Optical Fiber Link Problems. In: Longo, L., Lapuschkin, S., Seifert, C. (eds) Explainable Artificial Intelligence. xAI 2024. Communications in Computer and Information Science, vol 2154. Springer, Cham. https://doi.org/10.1007/978-3-031-63797-1_14
DOI: https://doi.org/10.1007/978-3-031-63797-1_14
Published:
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-63796-4
Online ISBN: 978-3-031-63797-1
eBook Packages: Computer Science, Computer Science (R0)