Abstract
With the rapid development of quantum machine learning (QML), quantum convolutional neural networks (QCNNs) have been proposed and have shown advantages in classification problems. Intrusion detection systems (IDS) based on QML methods have been shown to achieve higher accuracy than IDS based on traditional machine learning (ML) methods. However, the repeated convolution and pooling operations of a QCNN cause the loss of valuable data features, resulting in large errors in the final measurement results. In this paper, we design a QCNN-based IDS model built on a variational quantum neural network (VQNN), which effectively reduces the loss of data features and improves detection accuracy. We compare this model with traditional ML models such as the artificial neural network (ANN), logistic regression (LR), K-nearest neighbor (KNN) algorithm, support vector machine (SVM), and decision tree (DT). Experimental results show that the accuracy of our proposed model is 94.51%, higher than that of the other classical IDS models.












Acknowledgements
This work was funded in part by the Liaoning Provincial Department of Education Research under Grant LJKZ0208, in part by the Scientific Research Foundation for Advanced Talents from Shenyang Aerospace University under Grant 18YB06, and in part by the National Basic Research Program of China under Grant JCKY2018410C004.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix A: QCNN
The circuit structure of the QCNN model applied in this paper is shown in Fig. 13. Figure 13 depicts a 10-qubit QCNN model with multiple convolution filters and pooling operations in each layer. The convolution filter we selected is circuit 10 in Fig. 6, and the pooling circuit we selected is circuit (b) in Fig. 7. Given a quantum state \(\rho _\textrm{in}\), after four layers of convolution and pooling operations, a user-defined cost function is evaluated from the measurement results of the quantum circuit. A classical computer then computes a new set of parameters from the gradient of this cost function, and the parameters of the quantum circuit are updated accordingly in the next round.
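To make this loop concrete, the following is a minimal sketch, written by us in PennyLane, of a hybrid quantum-classical training loop of this kind. It is an illustration under stated assumptions, not the authors' implementation: the two-qubit convolution and pooling blocks below are generic stand-ins for circuit 10 of Fig. 6 and circuit (b) of Fig. 7, the circuit is reduced from 10 to 4 qubits, and the squared-error cost, gradient-descent optimizer, and toy data are our own choices.

# Minimal hybrid QCNN training sketch (illustrative only; see the caveats above).
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4                                  # reduced from the 10 qubits of Fig. 13
dev = qml.device("default.qubit", wires=n_qubits)

def conv_filter(params, wires):
    # Generic two-qubit parameterized convolution filter (stand-in for circuit 10).
    qml.RY(params[0], wires=wires[0])
    qml.RY(params[1], wires=wires[1])
    qml.CNOT(wires=wires)
    qml.RY(params[2], wires=wires[0])
    qml.RY(params[3], wires=wires[1])

def pooling(params, source, sink):
    # Generic pooling block (stand-in for Fig. 7(b)); the source qubit is not used afterwards.
    qml.CRZ(params[0], wires=[source, sink])
    qml.CRX(params[1], wires=[source, sink])

@qml.qnode(dev)
def qcnn(features, weights):
    # Encode the classical feature vector into the input state rho_in.
    qml.AngleEmbedding(features, wires=range(n_qubits), rotation="Y")
    # First layer: convolution and pooling over pairs of qubits.
    conv_filter(weights[0:4], wires=[0, 1])
    conv_filter(weights[4:8], wires=[2, 3])
    pooling(weights[8:10], source=0, sink=1)
    pooling(weights[10:12], source=2, sink=3)
    # Second layer: pool the two remaining active qubits down to one.
    pooling(weights[12:14], source=1, sink=3)
    # The expectation value of the last qubit is the classifier output.
    return qml.expval(qml.PauliZ(3))

def cost(weights, X, y):
    # User-defined cost: mean squared error between measurements and labels in {-1, +1}.
    loss = 0.0
    for x, label in zip(X, y):
        loss = loss + (qcnn(x, weights) - label) ** 2
    return loss / len(y)

# Classical gradient-based parameter update, repeated round after round.
opt = qml.GradientDescentOptimizer(stepsize=0.1)
weights = np.random.uniform(0, 2 * np.pi, size=14, requires_grad=True)
X = np.random.uniform(0, np.pi, size=(8, n_qubits), requires_grad=False)  # toy features
y = np.array([1, -1, 1, -1, 1, -1, 1, -1], requires_grad=False)           # toy labels
for step in range(20):
    weights = opt.step(lambda w: cost(w, X, y), weights)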
Appendix B: The experimental results of 10 convolution circuits
In this part, we give the experimental results of these 10 convolution circuits. For each combination of convolution and pooling operations, five groups of data are obtained by randomly initializing the trainable parameters, yielding Tables 8, 9, 10, and 11. The tables are divided into two groups: Tables 8 and 9 show the accuracy and FAR obtained by applying the first pooling operation to the VQNN–QCNN model, while Tables 10 and 11 show the accuracy and FAR obtained by applying the second pooling operation. We also report the time each combination takes to iterate once over the dataset.
Tables 8 and 10 show that detection performance improves as the number of trainable parameters increases. Circuit 9 gives the best results, while circuits 1 and 2 give the worst. However, the iteration times of circuits 7, 8, and 9 are too long, so we finally choose convolution circuit 10 as the convolution filter of the VQNN–QCNN model.
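For reference, the accuracy and false alarm rate (FAR) reported in these tables can be computed from binary predictions as in the short sketch below. This is our own illustration of the standard definitions (FAR = FP / (FP + TN)), not the authors' evaluation script; run_model is a hypothetical placeholder for training and evaluating one convolution/pooling combination with a given random seed.

import numpy as np

def accuracy_and_far(y_true, y_pred):
    # Binary labels: 1 = attack, 0 = normal traffic.
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    far = fp / (fp + tn)          # fraction of normal traffic wrongly flagged as attacks
    return accuracy, far

# Average over five random initializations of the trainable parameters,
# mirroring the five groups of data behind Tables 8-11.
# (run_model is a hypothetical placeholder, not a function from this paper.)
# results = [accuracy_and_far(y_test, run_model(seed)) for seed in range(5)]
# mean_acc = np.mean([acc for acc, _ in results])
# mean_far = np.mean([far for _, far in results])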
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Gong, C., Guan, W., Zhu, H. et al. Network intrusion detection based on variational quantum convolution neural network. J Supercomput 80, 12743–12770 (2024). https://doi.org/10.1007/s11227-024-05919-y