Abstract
Anomaly detection is important for significant real-life applications such as network intrusion and credit card fraud detection. Existing anomaly detection methods learn the features only partially, which is not appropriate for accurate detection of anomalies. In this study, we propose a vector-based convolutional autoencoder (V-CAE) for one-dimensional anomaly detection. The core of our model is a linear autoencoder, which is used to construct a low-dimensional manifold of feature vectors for normal data. At the same time, we use a vector-based convolutional neural network (V-CNN) to extract features from the vector data before and after the linear autoencoder, which enables the model to learn deep features for efficient anomaly detection. This unsupervised learning method uses only normal data in the training phase. We use a combined abnormal score calculated from two reconstruction errors: (i) the error between the input and output of the whole architecture and (ii) the error between the input and output of the linear autoencoder. Compared with nine state-of-the-art methods, the proposed V-CAE shows effective and stable results, with an AUC of 0.996, in estimating anomalies on several benchmark datasets.
1 Introduction
Deep learning has achieved encouraging performance in many visual applications, which rely on labeled data. The cost of labeling increases as the amount of data increases. In general, the unusual data that appear in real-life applications cannot be effectively learned by a classification model because of their small number. Hence, anomaly detection algorithms are used to identify unusual/abnormal samples by training a model on normal samples [1]. For example, the practical application of network intrusion detection illustrates the anomaly detection task, as shown in Fig. 1. The red points in the plot represent abnormal data; they take various positions and values due to different external factors.
In general, anomaly detection tasks use a large number of normal samples to train the model parameters \(\varTheta \) and to estimate the feature distribution p(x) of the normal samples. However, in the training phase, the number of abnormal samples is very small, or abnormal samples are sometimes not available at all. In this case, only normal samples can be used to optimize the parameters of the model, and an abnormal score S(x) is then calculated on the test data to identify the abnormal samples.
Varying the number of neurons and layers has been observed to largely affect the performance of anomaly detection models [2, 3]. Several intrusion classification models have focused on deep belief networks with stacked Restricted Boltzmann Machines and showed superior performance in identifying anomalies [4, 5]. Inspired by the aforementioned studies, we propose a V-CAE model for anomaly detection. The core of the proposed architecture is a linear autoencoder, which is used to find the subspace of the normal data using the feature vectors extracted by the vector-based convolutional neural network (V-CNN) [6]. The V-CNN extracts a non-linear feature vector from the input vector using a 2-D convolutional neural network. The proposed V-CAE framework for identifying anomalies is shown in Fig. 2. In this study, the input data are in vector form, and only the features extracted from the normal input data are used to train the proposed model.
The main contributions of this paper are as follows:
1. We use an autoencoder based on mutual information to enable the encoder and decoder to learn the most significant features of the input data.
2. We add a linear autoencoder to construct a low-dimensional manifold of the normal samples.
3. We use a combined abnormal score computed from two reconstruction errors: the first is calculated between the input and output of the whole model, and the second between the input and output of the linear autoencoder.
4. The effectiveness of the proposed method is experimentally evaluated by comparison with state-of-the-art methods.
5. We conduct an ablation study by removing the linear autoencoder from the proposed framework.
2 Related Works
Anomaly detection has always been a focus of researchers, especially in the fields of finance, information security, video surveillance, and medical imaging. Traditional methods measure the similarity between data points based on distance [7], density [8], angle [9], isolation [10], clustering [11], etc. These algorithms are conceptually similar in low dimensions, because the core assumption is that "abnormal points are represented differently from normal points and form a minority group". However, most similarity-based algorithms face the curse of dimensionality; that is, common similarity measures (such as the Euclidean distance) often fail on high-dimensional data [12, 13].
In order to solve this problem, many methods have been proposed, including:
1. Dimension reduction or feature selection [14].
2. Subspace methods, such as detection and merging on multiple low-dimensional spaces, random projection (randomly generating multiple subspaces and modeling each one separately, as in feature bagging), and random forests.
3. Graph-based methods, which represent the relationships between data points and their extracted features [15].
4. Reverse nearest neighbor methods based on the intrinsic dimensionality [16].
Furthermore, based on the availability of data labels, anomaly detection technology can be divided into the following two types:
- Supervised anomaly detection: Supervised anomaly detection assumes that both labeled normal data and labeled abnormal data are available. The most typical approach is to transform the problem into a special two-class problem and build a predictive classification model; many general machine learning classification algorithms can be applied to model training [17]. The trained model then determines whether each test sample is normal or abnormal. Supervised anomaly detection has two main practical difficulties. First, in the training data, the amount of abnormal data is far less than the amount of normal data, which leads to the common data imbalance problem in machine learning and data mining. Second, it is very challenging to obtain accurate and representative anomaly class labels. Researchers have proposed sampling, cost-sensitive learning, active learning, and other methods to address these two problems. Nevertheless, in practical applications, the supervised anomaly detection model is still very limited.
- Unsupervised anomaly detection: Unsupervised anomaly detection does not need labeled data sets and uses only normal data in the training set, so it has the widest applicability. This technique contains an implicit assumption that normal samples occur more frequently and are easier to obtain than abnormal samples; equivalently, the number of abnormal samples in the data set is far smaller than the number of normal samples. Khreich et al. [18] used a one-class support vector machine (SVM) to map the data into a high-dimensional space with a kernel function and looked for a hyperplane that maximizes the margin between the data and the origin. Tax et al. [19] used the support vector domain description (SVDD) method, which maps the data into a high-dimensional space with a kernel function and finds the smallest possible hypersphere enclosing the normal data. Yang et al. [20] modeled the normal data with a Gaussian mixture model and estimated its parameters by maximum likelihood; at detection time, the probability that a sample belongs to the normal data is obtained by evaluating its features under the model. Liu et al. [21] used the isolation forest method for anomaly detection. This method is suitable when there are few abnormal points; it constructs multiple decision trees and is based entirely on the concept of isolation, requiring no distance or density measure. He et al. [11] heuristically divided the data set into large and small clusters. If an example belongs to a large cluster, the abnormal score is calculated from the example and the large cluster to which it belongs; if an example belongs to a small cluster, the abnormal score is calculated from the example and the nearest large cluster.
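As a concrete illustration of the one-class idea shared by these methods, the following is a minimal sketch of the density-modeling approach of Yang et al. [20], simplified to a single Gaussian fitted to normal data only; the data and the function name `anomaly_score` are purely illustrative and not from any of the cited papers.

```python
import numpy as np

# Fit a single Gaussian to normal training data only, then score test points
# by their squared Mahalanobis distance: a large distance means low p(x),
# hence a likely anomaly. Synthetic data for illustration.
rng = np.random.default_rng(1)
normal_train = rng.normal(0.0, 1.0, size=(500, 2))   # normal samples only

mu = normal_train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal_train, rowvar=False))

def anomaly_score(x):
    d = x - mu
    return float(d @ cov_inv @ d)   # squared Mahalanobis distance

inlier = anomaly_score(np.array([0.1, -0.2]))
outlier = anomaly_score(np.array([6.0, 6.0]))
print(inlier < outlier)   # the far-away point scores much higher
```

A full Gaussian mixture model would replace the single Gaussian with a weighted sum of components estimated by EM, but the scoring principle is the same.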
3 Proposed Method
3.1 Overview
This paper proposes an anomaly detection method based on a vector-based convolutional autoencoder. The flow chart of the proposed method is shown in Fig. 2. In this study, we consider anomaly detection for one-dimensional feature vectors. After a one-dimensional feature vector is fed into the first fully connected layer, the vector is converted into a two-dimensional matrix. Deep features are then extracted by standard convolutional layers. The core of our model is a linear autoencoder, whose function is to reduce the dimension of the data and find the linear subspace of the normal samples. It is expected that this linear autoencoder in the middle of the convolutional autoencoder helps to find a tight boundary of the normal samples. The vector reconstructed by the linear autoencoder is then used to reconstruct the output vector through the deconvolutional layers. In the test phase, an abnormal sample is detected using scores defined by the two reconstruction errors of the convolutional autoencoder and the linear autoencoder.
3.2 Vector-Based Convolutional Autoencoder
To extract the non-linear manifold of the normal data, we adopt the vector-based convolutional autoencoder. As shown in Fig. 2, it consists of an input layer, fully connected (FC) layers, a linear autoencoder, convolutional layers before and after the linear autoencoder, and an output layer.
Let \(X=\{ \mathbf {x}_{1},\mathbf {x}_{2},\ldots ,\mathbf {x}_{n} \}\) be the set of one-dimensional feature vectors of the normal data, where \(\mathbf {x}_{i} \in R^m\) and m is the dimension of each sample.
The input vector first passes through an FC layer and is converted into a two-dimensional array. A convolutional neural network is then used to extract features from the input. The extracted features are flattened back into vector form and used as the input to the linear autoencoder. This procedure is defined by a function C(.), and the flattened feature vector \(\widehat{\mathbf {x}}_i \in R^d\) is given as
$$\widehat{\mathbf {x}}_i = C(\mathbf {x}_i),$$
where \(d=l \times \ h \times \ ch\) and h, l, and ch are the width, the height and the number of channels of the output of the conv2 layer, respectively.
The linear autoencoder extracts the dimension-reduced feature vector \(\mathbf {z}_i\) from the flattened feature vector \(\widehat{\mathbf {x}}_i\) as
$$\mathbf {z}_i = \mathbf {W}^{\top } \widehat{\mathbf {x}}_i + \mathbf {b},$$
where \(\mathbf{W} \in R^{d \times k}\) and \(\mathbf{b} \in R^k\) are the weights and the bias of the linear encoder, and k is the dimension of the extracted feature vector \(\mathbf {z}_i\). The approximation \(\widehat{\mathbf {y}}_i\) of \(\widehat{\mathbf {x}}_i\) is calculated by
$$\widehat{\mathbf {y}}_i = \mathbf {W}'^{\top } \mathbf {z}_i + \mathbf {b}',$$
where \(\mathbf{W} ' \in R^{k \times d}\) and \(\mathbf{b} ' \in R^d\) are the weights and the bias of the linear decoder, respectively.
The approximation \(\widehat{\mathbf {y}}_i\) of \(\widehat{\mathbf {x}}_i\) by the linear autoencoder is reshaped into the original tensor format and used as the input of the following deconvolution layers. Finally, the output vector \(\mathbf {y}_i\) of the vector-based convolutional autoencoder is obtained through another FC layer. This procedure is defined by a function \(C'(.)\) as
$$\mathbf {y}_i = C'(\widehat{\mathbf {y}}_i).$$
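The forward pass described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the functions `C` and `C_prime` are hypothetical stand-ins for the learned V-CNN feature extractor and deconvolutional reconstructor, and all weights are random rather than trained; only the shapes and the order of operations follow the text.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, k = 32, 64, 8   # input, flattened-feature, and bottleneck dims (illustrative)

# Hypothetical stand-ins for the learned networks C(.) and C'(.)
A = rng.standard_normal((m, d)) * 0.1
B = rng.standard_normal((d, m)) * 0.1
def C(x):            # x in R^m -> flattened feature vector x_hat in R^d
    return np.tanh(x @ A)
def C_prime(y_hat):  # reconstructed features in R^d -> output vector in R^m
    return y_hat @ B

# Linear autoencoder parameters (learned in practice, random here)
W  = rng.standard_normal((d, k)) * 0.1   # encoder weights, W in R^{d x k}
b  = np.zeros(k)
Wp = rng.standard_normal((k, d)) * 0.1   # decoder weights, W' in R^{k x d}
bp = np.zeros(d)

x     = rng.standard_normal(m)  # one input sample
x_hat = C(x)                    # V-CNN features
z     = W.T @ x_hat + b         # linear encoder:  R^d -> R^k
y_hat = Wp.T @ z + bp           # linear decoder:  R^k -> R^d
y     = C_prime(y_hat)          # reconstructed output vector

print(x_hat.shape, z.shape, y_hat.shape, y.shape)   # (64,) (8,) (64,) (32,)
```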
The loss function is defined based on the mean squared errors (MSEs) of the convolutional autoencoder and the embedded linear autoencoder as
$$\ell = \frac{1}{n}\sum _{i=1}^{n} \Vert \mathbf {x}_i - \mathbf {y}_i \Vert ^2 + \alpha \, \frac{1}{n}\sum _{i=1}^{n} \Vert \widehat{\mathbf {x}}_i - \widehat{\mathbf {y}}_i \Vert ^2,$$
where the first term is the mean squared error (MSE) between the input vector \(\mathbf {x}_i\) and its approximation \(\mathbf {y}_i\) by the convolutional autoencoder, and the second term is the MSE between the feature vector \(\widehat{\mathbf {x}}_i\) and its approximation \(\widehat{\mathbf {y}}_i\) by the linear autoencoder. The parameter \(\alpha \) adjusts the contribution of these two MSEs to the objective function \(\ell \).
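The combined objective can be written in a few lines of NumPy. The function name `vcae_loss` is illustrative; the sketch assumes the two terms are plain MSEs weighted by \(\alpha \) as described above.

```python
import numpy as np

def vcae_loss(x, y, x_hat, y_hat, alpha=0.5):
    """Combined objective: MSE of the whole convolutional autoencoder plus
    the alpha-weighted MSE of the embedded linear autoencoder."""
    mse_outer = np.mean((np.asarray(x) - np.asarray(y)) ** 2)
    mse_inner = np.mean((np.asarray(x_hat) - np.asarray(y_hat)) ** 2)
    return mse_outer + alpha * mse_inner

# Perfect reconstruction on both levels gives zero loss:
print(vcae_loss([1.0, 2.0], [1.0, 2.0], [0.5], [0.5]))        # 0.0
# Otherwise both error terms contribute, the inner one scaled by alpha:
print(vcae_loss([0.0, 0.0], [1.0, 1.0], [0.0], [2.0], 0.5))   # 1.0 + 0.5*4.0 = 3.0
```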
3.3 Anomaly Scores
In the test phase, the model calculates the anomaly score of each test sample \(\mathbf {x}\). The anomaly score is defined based on the reconstruction error \(S_1(\mathbf {x})\) of the convolutional autoencoder and the reconstruction error \(S_2(\mathbf {x})\) of the linear autoencoder as
$$S(\mathbf {x}) = \lambda S_1(\mathbf {x}) + (1-\lambda ) S_2(\mathbf {x}),$$
where \(\lambda \) is a tuning parameter that can be adjusted according to the task. The reconstruction error \(S_1(\mathbf {x})\) between the input vector \(\mathbf {x}\) and its approximation \(\mathbf {y}\) by the convolutional autoencoder is defined as
$$S_1(\mathbf {x}) = \Vert \mathbf {x} - \mathbf {y} \Vert ^2.$$
Similarly, the reconstruction error \(S_2(\mathbf {x})\) between the feature vector \(\widehat{\mathbf {x}}\) and its approximation \(\widehat{\mathbf {y}}\) by the linear autoencoder is defined as
$$S_2(\mathbf {x}) = \Vert \widehat{\mathbf {x}} - \widehat{\mathbf {y}} \Vert ^2.$$
To make the scores comparable across data sets, the anomaly scores are normalized. First, the anomaly scores \(S = \{ S(\mathbf {x}_i) \mid \mathbf {x}_i \in X \}\) of all training samples X are calculated, and the maximum \(\max (S)\) and the minimum \(\min (S)\) are obtained. The anomaly score \(S(\mathbf {x})\) of a new sample is then normalized as
$$\tilde{S}(\mathbf {x}) = \frac{S(\mathbf {x}) - \min (S)}{\max (S) - \min (S)}.$$
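The scoring and normalization steps can be sketched as follows. This assumes the combined score is the \(\lambda \)-weighted convex combination of the two reconstruction errors described above; the function names are illustrative.

```python
import numpy as np

def anomaly_score(x, y, x_hat, y_hat, lam=0.5):
    """Combined score: lam * S1 + (1 - lam) * S2, where S1 is the
    reconstruction error of the conv AE and S2 that of the linear AE."""
    s1 = np.mean((np.asarray(x) - np.asarray(y)) ** 2)
    s2 = np.mean((np.asarray(x_hat) - np.asarray(y_hat)) ** 2)
    return lam * s1 + (1.0 - lam) * s2

def normalize_scores(s, s_train):
    """Min-max normalization using the training-set score range."""
    lo, hi = np.min(s_train), np.max(s_train)
    return (s - lo) / (hi - lo)

train_scores = np.array([0.2, 0.4, 1.0, 0.6])
print(normalize_scores(0.6, train_scores))   # (0.6 - 0.2) / (1.0 - 0.2) = 0.5
```

In practice, a threshold on the normalized score then separates normal from abnormal test samples.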
4 Experimental Setup
4.1 Data Set
To confirm the effectiveness and efficiency of the proposed method, we performed experiments on three benchmark data sets: KDD99, Optdigits, and default of credit card clients. We first carried out experiments on the KDD99 intrusion data, treating the 'normal' class as normal data in the training phase and defining the other classes as abnormal. The test set contains both normal and abnormal data. For the Optdigits data, one class (class '3') is treated as the anomaly, while another class (class '1') is considered normal. The default of credit card clients data set is an open-source data set [31]. Its attributes include gender, education, marriage, age, etc., as well as the user's credit card consumption and billing over a period of time. The feature 'Payment next month', which takes only the values 0 or 1, indicates whether the user has repaid the credit card bill ('1' indicates repayment, '0' indicates no repayment). We classify the samples whose 'Payment next month' equals '1' into class '1' and, similarly, the samples whose 'Payment next month' equals '0' into class '0'.
The data sets used in our experiments are converted into binary data sets, i.e., normal and abnormal data. The class considered normal is used to train the model. The labels are converted into binary labels, which are used during testing. We calculate the abnormal scores on the test set of each data set and select an appropriate threshold to distinguish normal from abnormal samples. Each original data set is randomly divided into training and testing sets with a ratio of 7:3. The details of the data sets used in our experiments are shown in Table 1.
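The preprocessing above (binary relabeling, 7:3 split, and training on normal data only) can be sketched as below. The helper name `make_binary_split` and the toy data are illustrative, not from the paper.

```python
import numpy as np

def make_binary_split(X, labels, normal_class, train_ratio=0.7, seed=0):
    """Relabel a multi-class set as binary (0 = normal, 1 = abnormal),
    split it 7:3 at random, and keep only normal samples for training."""
    rng = np.random.default_rng(seed)
    y = (labels != normal_class).astype(int)
    idx = rng.permutation(len(X))
    cut = int(train_ratio * len(X))
    train_idx, test_idx = idx[:cut], idx[cut:]
    train_idx = train_idx[y[train_idx] == 0]   # unsupervised: normal data only
    return X[train_idx], X[test_idx], y[test_idx]

X = np.arange(20).reshape(10, 2)               # ten toy samples
labels = np.array([0, 0, 1, 0, 2, 0, 0, 1, 0, 0])
X_train, X_test, y_test = make_binary_split(X, labels, normal_class=0)
# X_train contains only rows whose original label was the normal class;
# y_test holds the binary labels used for evaluation.
```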
4.2 Parameter Settings and Evaluation
We used Adam to optimize the network parameters. The proposed method is implemented in TensorFlow. The parameter \(\alpha \) is adjusted depending on the data set. The training was done with 1,000, 2,000, and 400 epochs for KDD99, Optdigits, and default of credit card clients, respectively. In the experiments, we compared the proposed method with nine state-of-the-art methods, including several traditional supervised and unsupervised methods. The four supervised methods are active learning (AL) [22], feature bagging (FB) [23], local outlier factor (LOF) [24], and Parzen window (PW) [25]. The four unsupervised methods are sparse coding (SC) [26], L21-SRC (L21) [27], reverse nearest neighbors (RNN) [28], and self-representation outlier detection (SRO) [29]. In addition to these eight methods, the proposed method is also compared with the sparse reconstruction (SR) method proposed by Hou et al. [30]. We computed the area under the curve (AUC) of the receiver operating characteristic (ROC) as the main evaluation measure; a larger AUC indicates better anomaly detection performance. Furthermore, we used precision, recall, and F1 score to evaluate the performance of the proposed system.
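For reference, the AUC measure can be computed without any library beyond NumPy via its rank-statistic interpretation. This sketch (function name `auc_score` is illustrative) is equivalent to the area under the ROC curve for binary labels:

```python
import numpy as np

def auc_score(y_true, scores):
    """AUC as the Mann-Whitney U statistic: the probability that a randomly
    chosen abnormal sample (label 1) receives a higher anomaly score than a
    randomly chosen normal one (label 0); ties count half."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos = scores[y_true == 1]   # abnormal samples
    neg = scores[y_true == 0]   # normal samples
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Perfect separation yields AUC = 1.0; complete overlap drives it to 0.5.
print(auc_score([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))   # 1.0
```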
5 Results
5.1 Comparison with the State-of-the-art Methods
Tables 2 and 3 present the results of our experiments. Our method shows more robust performance than the nine state-of-the-art methods. Among all the compared models, ours achieves the highest AUC on all three data sets; in particular, it is much higher than that of the latest method [30]. For KDD99 and Optdigits, the best \(\alpha \) of 0.5 was selected for detecting the anomalous data; for default of credit card clients, the best \(\alpha \) is 0.4. By adjusting \(\lambda \), the detection results of the model also change. On KDD99 and Optdigits, the choice of \(\lambda \) has little influence, because the distributions of the \(S_1\) and \(S_2\) terms are each sufficient to separate the abnormal data, so \(\lambda \) is set to 0.5. On the default of credit card clients data set, however, the choice of \(\lambda \) has a great influence on the results. We chose \(\lambda =0.6\), assigning more weight to \(S_1\), which plays the leading role in detection on this data set.
Figure 3 shows the distribution of the anomaly scores on the KDD99 data set. Based on the distributions of \(S_1\) and \(S_2\), these two terms are sufficient to distinguish normal from abnormal data.
The detection results of the proposed model on default of credit card clients are not as good as those on the other two data sets, because the relationship between the data and its features is very weak. Figure 4 shows the correlation between the features and the class labels. There is no significant correlation between some of the features (sex, education, marriage, and age) and the categories of the data set. The presence of these irrelevant features therefore increases the difficulty of finding the anomalous data.
5.2 Ablation Study on Proposed Framework
The core of our model is the linear autoencoder, whose function is to reduce the dimensionality and to compress the boundary of the normal data. If the linear autoencoder is removed from the framework, the anomaly score can be calculated only from the term \(S_1\). As shown in Fig. 3(c), removing the linear autoencoder has no effect on the KDD99 data set, and the same is true for the Optdigits data set. For the default of credit card clients data set, however, the detection performance is drastically reduced. As can be seen from Table 4, the framework without the linear autoencoder yields a lower AUC than the framework with it. Furthermore, the precision, recall, and F1 score of our method with the linear autoencoder are significantly better than those without it.
As can be seen from Fig. 5, the results of our proposed method on the KDD99 and Optdigits data sets are better than those without the autoencoder. Likewise, on the default of credit card clients data set, the framework with the linear autoencoder is better than that without it. This demonstrates that the proposed V-CAE structure has the ability to detect abnormal samples. In addition, as shown in Tables 5 and 6, the precision, recall, and F1 scores of our method with and without the linear autoencoder are almost the same, while according to Table 7, on the default of credit card clients data set these measures are clearly better with the linear autoencoder than without it.
As can be seen from Table 8, we also removed the vector-based convolutional neural network (V-CNN) before and after the linear autoencoder and ran experiments with only the linear autoencoder. The results with the V-CNN were better than those of the structure without it.
Overall, the experimental results clearly show that the proposed system with the linear autoencoder can separate the abnormally distributed data well, as shown in Fig. 3. The reconstruction error of the abnormal data is always larger than that of the normal data, and thus the results demonstrate that anomalous data can be well detected by the proposed V-CAE approach. However, the difference in reconstruction errors between the normal and abnormal data is not very large on the credit card data set, which makes it difficult to distinguish the anomalies from the normal data. Even so, our proposed model achieved the second highest score in detecting anomalies on the credit card data set.
6 Conclusions
We introduced a new vector-based convolutional autoencoder model for anomaly detection tasks. The proposed model transforms vector-based data into two-dimensional feature maps. In addition, it is capable of finding a tight boundary of the normal data by reducing the dimensionality of the linear subspace of the extracted non-linear feature vectors. Anomalies are detected using scores defined by the two reconstruction errors of the convolutional autoencoder and the embedded linear autoencoder. The combined anomaly score of the proposed system showed highly robust performance in distinguishing anomalies on three benchmark data sets, compared with nine state-of-the-art methods. In the future, we will extend our method with an adversarial training framework to increase the difference between the reconstruction errors, especially for data sets whose features correlate poorly with the labels and whose anomalies are therefore difficult to distinguish from the normal data.
References
Dufrenois, F.: A one-class kernel fisher criterion for outlier detection. IEEE Trans. Neural Netw. Learn. Syst. 26(5), 982–994 (2015)
Potluri, S., Diedrich, C.: Accelerated deep neural networks for enhanced intrusion detection system. In: 21st International Conference on Emerging Technologies and Factory Automation (ETFA), pp. 1–8. IEEE Press, New York (2016)
Kim, J., Shin, N., Jo, S.-Y., Kim, S.-H.: Method of intrusion detection using deep neural network. In: 4th International Conference on Big Data and Smart Computing, pp. 313–316. IEEE Press, New York (2017)
Alom, M.Z., Bontupalli, V., Taha, T.M.: Intrusion detection using deep belief networks. In: National Aerospace and Electronics Conference, pp. 339–344. IEEE Press, New York (2015)
Qu, F., Zhang, J.-T., Shao, Z.-T., Qi, S.-Z.: Intrusion detection model based on deep belief. In: 2017 VI International Conference on Network, Communication and Computing, pp. 97–101. ACM Press, New York (2017)
Kavitha, M.S., Kurita, T., Park, S.-Y., Chien, S.-I., Bae, J.-S., Ahn, B.-C.: Deep vector-based convolutional neural network approach for automatic recognition of colonies of induced pluripotent stem cells. PLoS ONE 12(12), 1–18 (2017)
Ramaswamy, S., Rastogi, R., Shim, K.: Efficient algorithms for mining outliers from large data sets. ACM SIGMOD Rec. 29(2), 427–438 (2000)
Breunig, M.M., Kriegel, H.P., Ng, R.T., Sander, J.: LOF: identifying density-based local outliers. ACM SIGMOD Rec. 29(2), 93–104 (2000)
Kriegel, H.P., Zimek, A.: Angle-based outlier detection in high-dimensional data. In: 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 444–452. ACM Press, New York (2008)
Liu, F.T., Ting, K.M., Zhou, Z.H.: Isolation forest. In: International Conference on Data Mining, pp. 413–422. IEEE Press, New York (2008)
He, Z., Xu, X., Deng, S.: Discovering cluster-based local outliers. Pattern Recogn. Lett. 24(9–10), 1641–1650 (2003)
Zimek, A., Schubert, E., Kriegel, H.P.: A survey on unsupervised outlier detection in high-dimensional numerical data. Stat. Anal. Data Min.: ASA Data Sci. J. 5(5), 363–387 (2012)
Ro, K., Zou, C., Wang, Z., Yin, G.: Outlier detection for high-dimensional data. Biometrika 102(3), 589–599 (2015)
Pang, G., Cao, L., Chen, L., Liu, H.: Learning homophily couplings from Non-IID data for joint feature selection and noise-resilient outlier detection. In: 26th International Joint Conference on Artificial Intelligence, pp. 2585–2591. Morgan Kaufmann Press, San Francisco (2017)
Akoglu, L., Tong, H., Koutra, D.: Graph based anomaly detection and description: a survey. Data Min. Knowl. Discov. 29(3), 626–688 (2015)
Radovanović, M., Nanopoulos, A., Ivanović, M.: Reverse nearest neighbors in unsupervised distance-based outlier detection. IEEE Trans. Knowl. Data Eng. 27(5), 1369–1382 (2015)
Fujimaki, R., Yairi, T., Machida, K.: An anomaly detection method for spacecraft using relevance vector learning. In: Ho, T.B., Cheung, D., Liu, H. (eds.) PAKDD 2005. LNCS (LNAI), vol. 3518, pp. 785–790. Springer, Heidelberg (2005). https://doi.org/10.1007/11430919_92
Khreich, W., Khosravifar, B., Hamou-Lhadj, A., Talhi, C.: An anomaly detection system based on variable N-gram features and one-class SVM. Inf. Softw. Technol. 91, 186–197 (2017)
Tax, D.M.J., Duin, R.P.W.: Support vector domain description. Pattern Recogn. Lett. 20(11–13), 1191–1199 (1999)
Yang, X., Latecki, L.J., Pokrajac, D.: Outlier detection with globally optimal exemplar-based GMM. In: SIAM International Conference on Data Mining, pp. 145–154. SIAM Press, Philadelphia (2009)
Liu, F.-T., Ting, K.-M., Zhou, Z.-H.: Isolation-based anomaly detection. ACM Trans. Knowl. Discov. Data (TKDD) 6(1), 1–39 (2012)
Sun, G., Cong, Y., Xu, X.: Active lifelong learning with "watchdog". In: The 32nd AAAI Conference on Artificial Intelligence, pp. 4107–4114. AAAI Press, Palo Alto (2018)
Lazarevic, A., Kumar, V.: Feature bagging for outlier detection. In: 11th ACM SIGKDD International Conference on Knowledge Discovery in Data Mining Table of Contents, pp. 157–166. ACM Press, New York (2005)
Breunig, M.M., Kriegel, H.-P., Ng, R.T., Sander, J.: LOF: identifying density-based local outliers. SIGMOD Rec. 29(2), 93–104 (2000)
Yeung, D.-Y., Chow, C.: Parzen-window network intrusion detectors. In: Object Recognition Supported by User Interaction for Service Robots, vol. 4, no. 4, pp. 385–388 (2002)
Adler, A., Elad, M., Hel-Or, Y., Rivlin, E.: Sparse coding with anomaly detection. Signal Process. Syst. 79(2), 179–188 (2015)
Cong, Y., Yuan, J., Liu, J.: Sparse reconstruction cost for abnormal event detection. In: 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2011), pp. 3449–3456. IEEE Press, New York (2011)
Radovanović, M., Nanopoulos, A., Ivanović, M.: Reverse nearest neighbors in unsupervised distance-based outlier detection. IEEE Trans. Knowl. Data Eng. 27(5), 1369–1382 (2015)
You, C., Robinson, D.P., Vidal, R.: Provable self-representation based outlier detection in a union of subspaces. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), pp. 4323–4332. IEEE Press, New York (2017)
Hou, D.-D., Cong, Y., Sun, G., Liu, J.: Anomaly detection via adaptive greedy model. Neurocomputing 330, 369–379 (2019)
Analysis of credit card default dataset of Taiwan for machine learning. https://github.com/KaushikJais/Credit-Card-Default/blob/master/Credit%20Card%20Default%20(Final%20Submission)%20(1).ipynb. Accessed 19 Feb 2019
Acknowledgements
This work was partly supported by JSPS KAKENHI Grant Number 16K00239.
© 2020 Springer Nature Switzerland AG
Yu, Q., Kavitha, M., Kurita, T. (2020). Detection of One Dimensional Anomalies Using a Vector-Based Convolutional Autoencoder. In: Palaiahnakote, S., Sanniti di Baja, G., Wang, L., Yan, W. (eds) Pattern Recognition. ACPR 2019. Lecture Notes in Computer Science(), vol 12047. Springer, Cham. https://doi.org/10.1007/978-3-030-41299-9_40