Abstract
Meetup brings people with similar interests together to do things that matter to them. For example, it provides a platform for bringing together people who love hiking, coding, running marathons, or learning foreign languages, so that they can help, teach and learn from each other. Thanks to the development of web and mobile technologies, organizing these Meetup groups has become much easier than before, and Meetup has become an ideal tool for enriching one's social life. In this paper, we propose a coupled linear and deep nonlinear method for Meetup service recommendation. Our method considers both historical user-item interactions and group features by combining a linear model with deep neural networks. In addition, we design a pairwise training algorithm with a dynamic negative sampling technique to further enhance model performance. Experiments on two real-world datasets show that our approach outperforms state-of-the-art methods by a large margin.
1 Introduction
Meetup is a social networking website for organizing local offline group meetings for people with similar interests. Thousands of Meetup groups, such as fitness groups, career and networking groups, photography groups, hiking groups, etc., are available for us to participate in. It provides a desirable way to enrich our social life. For example, we can find a fitness partner in a Meetup group so that we can support and encourage each other, or find someone to mentor us on photography. These Meetup groups provide a good way to explore the things we are interested in, meet new friends, broaden our social circle and even change our careers.
With so many Meetup groups available, a good recommender model can save our time in finding interesting Meetup groups, attract more group members and make them more active. To this end, we propose to explore users' historical interactions as well as group features to better match user interests with Meetup groups. The main contributions of this work are summarized as follows:
-
A coupled linear and deep nonlinear recommendation model is proposed to integrate both historical interactions as well as item side information. It can capture both user’s historical preferences and item characteristics.
-
We designed a pairwise learning algorithm for the proposed approach. To further improve the recommendation quality, we also adopted a dynamic negative sampling approach to conduct negative sampling more effectively.
-
We did extensive experiments on two large-scale datasets and demonstrated the superior performances of our approach over state-of-the-art baselines.
The remainder of this paper is structured as follows. The next section introduces the research problem we aim to address. Section 3 introduces the proposed approach. Section 4 presents the experimental setup and results. Section 5 reviews the related work, and Sect. 6 concludes this paper.
2 Problem Formulation
Assuming that there are N items and M users, we have an interaction matrix \(X \in \mathcal {R}^{M \times N}\), most entries of which are unobserved. Let \(X_{ui}\) denote the preference of user u for item i, and \(X_{u*}\) denote the \(u^{th}\) row of the interaction matrix. For Meetup recommendation, only binary implicit feedback is available, so the task can be viewed as a one-class recommendation problem [12]. The entries of X are defined as follows:
$$X_{ui} = \begin{cases} 1, & \text{if the interaction between user } u \text{ and item } i \text{ is observed,} \\ 0, & \text{otherwise.} \end{cases}$$
The goal of the recommendation is to predict ranking scores for the unobserved entries given the observed interactions, and then generate a personalized ordered list of items for each user based on the predicted scores. For clear presentation, Table 1 summarizes the notations used in this paper.
3 Proposed Methodology
In this section, we will introduce the proposed methodology in detail. Our model combines a linear part to capture the user historical interactions and a nonlinear component to incorporate the abundant side information.
3.1 Coupled Linear and Deep Nonlinear Model
Sparse linear models have been demonstrated to be effective for top-n recommendation [20]. However, such models do not consider any side information. In recent years, deep learning has proven very suitable for feature representation learning [1]. Therefore, we propose using deep neural networks to learn low-dimensional feature embeddings from raw features. Since both usage history and item properties are critical for uncovering users' real demands and interests, we design a hybrid model which couples a sparse linear model with a deep neural network for better service recommendation. The former (sparse linear model) is used to learn users' interaction patterns, while the latter (deep neural network) aims to understand the content of items.
Formally, let \(A \in \mathcal {R}^{N \times N}\) denote a sparse aggregation coefficient matrix. The ranking score of the linear part is calculated by
$$Y^{lin}_{ui} = X_{u*} A_{*i} \qquad (2)$$
where \(X_{u*}\) is the \(u^{th}\) row of the interaction matrix; it is constructed from the training set, so there is no leakage of test data. Equation (2) is very similar to matrix factorization: we can view \(X_{u*}\) as the user latent factor and \(A_{*i}\) as the item latent factor. Nevertheless, \(X_{u*}\) is a known vector, while A is a sparse coefficient matrix that needs to be optimized. Moreover, A is reminiscent of the similarity matrix in item-based neighborhood collaborative filtering [15], but it is determined by minimizing a predefined loss rather than being calculated with Cosine or Jaccard similarities from the interaction matrix. Due to the sparse nature of \(X_{u*}\), constraints such as sparsity and non-negativity are put on the coefficient matrix A. More details will be introduced in the following text.
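As a concrete illustration (a minimal NumPy sketch, not the authors' implementation), the linear score is just the dot product of user u's binary interaction row with the i-th column of the learned coefficient matrix:

```python
import numpy as np

def linear_score(x_u, A, i):
    """Linear ranking score: dot product of the user's interaction
    row X_{u*} with column i of the coefficient matrix A."""
    return float(x_u @ A[:, i])

# toy example with 4 items; the user has interacted with items 0 and 2
x_u = np.array([1.0, 0.0, 1.0, 0.0])
A = np.array([[0.0, 0.2, 0.1, 0.3],
              [0.2, 0.0, 0.4, 0.1],
              [0.1, 0.4, 0.0, 0.5],
              [0.3, 0.1, 0.5, 0.0]])  # zero diagonal, non-negative
score = linear_score(x_u, A, 3)      # 1*0.3 + 1*0.5 = 0.8
```

Only the columns of A corresponding to the user's observed items contribute, which is why sparsity constraints on A keep prediction cheap.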
Another important component of our model is a deep neural network, which integrates side information of items to further enhance recommendation performance. Let \(s_i\) denote the side information of item i. We first feed it into a multi-layered neural network and obtain a high-level dense representation. Formally, the multi-layered neural network is defined as follows:
$$Z_l(s_i) = \sigma_l(W_l Z_{l-1}(s_i) + b_l), \quad l = 1, \dots, L, \quad Z_0(s_i) = s_i$$
where L denotes the number of layers, and \(W_l\) and \(b_l\) denote the weight matrix and bias vector of the \(l^{th}\) layer. \(\sigma _l\) is the activation function, which could be sigmoid, hyperbolic tangent (tanh) or rectifier (ReLU). With this nonlinear transformation, we manage to capture the complex and intricate structure of item side information. Let k denote the dimension of the output, so \(Z_L(s_i)\) is a k-dimensional vector. To integrate this neural network into the recommendation model, we define a user latent factor matrix \(P \in \mathcal {R}^{M \times k}\), and then model the user-item interactions with the inner product:
$$Y^{deep}_{ui} = P_{u*} Z_L(s_i)$$
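A minimal NumPy sketch of this component (the layer sizes and random weights here are illustrative, not the trained parameters) stacks the nonlinear transformations and ends with the inner product against the user latent factor:

```python
import numpy as np

def deep_score(s_i, weights, biases, p_u, act=np.tanh):
    """Pass item side information s_i through an MLP
    (Z_l = sigma(W_l Z_{l-1} + b_l)), then take the inner
    product with the user latent factor p_u."""
    z = s_i
    for W, b in zip(weights, biases):
        z = act(z @ W + b)
    return float(p_u @ z)

rng = np.random.default_rng(0)
s_i = rng.normal(size=8)                     # raw item features
weights = [rng.normal(size=(8, 20)) * 0.1,   # two hidden layers, 20 units
           rng.normal(size=(20, 10)) * 0.1]  # output dimension k = 10
biases = [np.zeros(20), np.zeros(10)]
p_u = rng.normal(size=10)                    # row P_{u*} of the user factors
y = deep_score(s_i, weights, biases, p_u)
```

Because tanh outputs lie in (-1, 1), the deep score is bounded by the L1 norm of the user factor, which keeps the two score components on comparable scales.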
Finally, we add the two scoring results to obtain the final predicted ranking score \(Y_{ui} = Y^{lin}_{ui} + Y^{deep}_{ui}\), where \(Y^{lin}_{ui}\) and \(Y^{deep}_{ui}\) denote the scores of the linear and deep components, respectively.
Figure 1 illustrates the structure of the proposed methodology. The left part is the linear component and the right part is the deep neural network.
3.2 Pairwise Training Algorithm
Training the above model in a pointwise manner is computationally intensive. To accelerate the training process, we propose learning the model with a pairwise algorithm. We adopt a logarithmic function with a scaling factor to weight the difference between positive and negative samples. Formally, the loss function of our model is defined as follows:
where \(\varDelta = Y_{ui^+} - Y_{ui^-}\), \(i^+\) is a Meetup group that user u joined and \(i^-\) is a negative item that the user has not interacted with. \(\tau \) is a scaling factor that weights \(\varDelta \). As indicated in Fig. 2(a), \(\tau \) affects the convergence speed, as it has a significant influence on the slope of the loss function. \(\theta \) denotes the model parameters, including A, P, and the neural network parameters \(W_*\) and \(b_*\).
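For illustration, one natural instantiation of a scaled logarithmic pairwise loss (our reading of the description above, not necessarily the paper's exact equation) is the softplus of the negated scaled margin:

```python
import numpy as np

def pairwise_loss(y_pos, y_neg, tau):
    """-log(sigmoid(tau * delta)) = log(1 + exp(-tau * delta)),
    where delta = y_pos - y_neg and tau scales the margin."""
    delta = y_pos - y_neg
    return float(np.log1p(np.exp(-tau * delta)))

# a larger positive margin yields a smaller loss, and a larger tau
# steepens the loss around delta = 0, affecting convergence speed
loss_good = pairwise_loss(2.0, 0.0, tau=2.0)
loss_bad = pairwise_loss(0.5, 0.0, tau=2.0)
```

Minimizing this loss pushes each positive item's score above that of the paired negative item, which is exactly the ranking behavior the pairwise algorithm targets.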
The regularization terms are critical for model performance. To ensure the sparsity of the coefficient matrix A, we put both \(\ell _1\) and Frobenius norm constraints on it. For the other parameters, we find that the Frobenius norm is sufficient. Thus, we have:
In addition, we set the diagonal of A to zero and clip the values of A after each iteration to ensure \(A \ge 0\).
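These constraints amount to a simple projection step after each parameter update; a sketch, assuming a dense NumPy representation of A:

```python
import numpy as np

def project_A(A):
    """Projection applied after each iteration: zero the diagonal
    of A and clip negative entries so that A >= 0."""
    np.fill_diagonal(A, 0.0)
    np.clip(A, 0.0, None, out=A)
    return A

A = np.array([[0.5, -0.2],
              [0.3,  0.9]])
A = project_A(A)
# diagonal entries are now 0, and -0.2 has been clipped to 0
```

Zeroing the diagonal prevents the trivial solution where each item predicts itself, and clipping enforces the non-negativity constraint.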
3.3 Dynamic Negative Sampling
Random sampling is usually used to select negative items for each <user, positive item> pair. However, this sampling strategy does not lead to optimal solutions. One reason is that it cannot guarantee that all negative items are ranked lower than positive items, while highly ranked negative items hurt the ranking performance of the current model [31]. Figure 2(b) illustrates this point with an example (taken from [31]). If we exchange the positions of the sixth item (observed) with the first item (unobserved), the NDCG increases by 0.302, whereas the increase is only 0.035 if we exchange the sixth item with the fourth item. Therefore, it is better to rank all unobserved items lower than observed items.
This idea was initially designed for the Bayesian personalized ranking model [21]. We find that this assumption is also reasonable for our approach. Therefore, we apply the dynamic negative sampling method to our model. The sampling strategy is: in each epoch, for each <user, positive item> pair, we randomly sample t items from the negative candidates, calculate their ranking scores, and treat the highest-scored item as the negative sample. Procedure 1 summarizes the training process of the proposed model.
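A sketch of one sampling step (here `score_fn` is a stand-in for the current model's predicted ranking score):

```python
import numpy as np

def dynamic_negative_sample(score_fn, u, negative_items, t, rng):
    """Draw t random candidates from the user's negative items,
    score them under the current model, and return the
    highest-scored one as the informative negative sample."""
    candidates = rng.choice(negative_items, size=t, replace=False)
    scores = np.array([score_fn(u, i) for i in candidates])
    return int(candidates[np.argmax(scores)])

rng = np.random.default_rng(42)
negatives = np.arange(100)   # items user 0 has not interacted with
# toy scoring function: the item id doubles as its score
neg = dynamic_negative_sample(lambda u, i: i, 0, negatives, t=5, rng=rng)
```

Because the returned negative is the one the current model most over-ranks, each gradient step targets the pair most likely to hurt the ranking metrics.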
4 Experiments
In this section, we conduct experiments on two Meetup datasets and compare our approach with several state-of-the-art baselines.
4.1 Datasets Description
These two datasets were collected by Hsieh et al. [11]. We also crawled Meetup group features from the Meetup website. After removing Meetup groups without content information and users who interacted with fewer than 20 Meetup groups, we obtain two subsets: Meetup San Francisco and Meetup New York City. Detailed statistics of the two datasets are summarized in Table 2. These two datasets contain thousands of Meetup groups and regular users from San Francisco and New York City. There are 33 categories of Meetup groups, which span most aspects of daily life, including: career & business, education & learning, outdoors & adventure, singles, new age & spirituality, support, games, hobbies & crafts, socializing, paranormal, cars & motorcycles, language & ethnic identity, parents & family, photography, music, sports & recreation, alternative lifestyle, tech, fine arts & culture, LGBT, movements & politics, religion & beliefs, pets & animals, fashion & beauty, fitness, food & drink, writing, sci-fi & fantasy, movies & film, book clubs, health & wellbeing, community & environment, dancing. The category distributions of the two cities are shown in Fig. 3.
4.2 Evaluation Metrics
To evaluate recommendation accuracy, we report results in terms of five evaluation metrics, two of which also consider ranking quality [22]. The five metrics are: Precision@N, Recall@N, Mean Average Precision (MAP), Mean Reciprocal Rank (MRR) and Normalized Discounted Cumulative Gain (NDCG).
Since in most cases users only care about the topmost recommended items, we compute these metrics at a given cut-off n. The definitions are as follows.
The former two evaluation metrics ignore the ranked positions. MAP is used to assess the average accuracy of the overall ranking lists; it is the mean of the average precisions (AP) over all relevant users. \(\mathbf 1 _{rel}(i)\) is an indicator function which equals 1 if user u has interacted with item i, and 0 otherwise.
In practice, ranking the items that interest target users higher enhances the quality of recommendation lists. Therefore, we also employ two popular rank-aware evaluation metrics: MRR and NDCG. MRR focuses on the single highest-ranked relevant item: it calculates the reciprocal of the rank at which the first relevant item appears. NDCG evaluates the ranking quality of the overall recommendation list. The definitions of MRR and NDCG are as follows:
Here, \(rank_u\) is the rank of the first correct item for user u.
and \(NDCG_n = DCG_n/IDCG_n\), with \(IDCG_n\) denoting the DCG of the perfectly ranked list.
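For reference, compact implementations of MRR and NDCG@n with binary relevance (a sketch, not the authors' evaluation code) look like:

```python
import numpy as np

def mrr(ranked_lists, relevant_sets):
    """Mean reciprocal rank of the first relevant item per user."""
    rr = []
    for ranked, relevant in zip(ranked_lists, relevant_sets):
        rr.append(next((1.0 / r for r, item in enumerate(ranked, 1)
                        if item in relevant), 0.0))
    return float(np.mean(rr))

def ndcg_at_n(ranked, relevant, n):
    """NDCG@n = DCG@n / IDCG@n with binary relevance."""
    dcg = sum(1.0 / np.log2(r + 1)
              for r, item in enumerate(ranked[:n], 1) if item in relevant)
    idcg = sum(1.0 / np.log2(r + 1)
               for r in range(1, min(len(relevant), n) + 1))
    return dcg / idcg if idcg > 0 else 0.0

perfect = ndcg_at_n([1, 2, 3], {1, 2, 3}, n=3)   # ideal list
first_at_two = mrr([[5, 1, 2]], [{1}])           # first hit at rank 2
```

The logarithmic discount in NDCG is what makes swapping an observed item into a top position worth far more than swapping it into a middle one, as in the example of Fig. 2(b).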
4.3 Comparison Baselines
We compare our approach with the following seven traditional and recent advanced baselines:
-
Random. We randomly select Meetup groups from all possible candidates and recommend them to users.
-
MostPopular. This is a non-personalized method which generates recommendations based on item popularity and recommends the most popular items to every user.
-
ItemKNN [2]. Item-based collaborative filtering method recommends items which are similar to other items the user has liked. Here the similarity between items is computed with cosine function.
-
BPRMF [21]. BPRMF is a competitive baseline for ranking prediction. It employs a pairwise ranking loss and is optimized with the Bayesian personalized ranking algorithm on implicit feedback.
-
WRMF [12]. This algorithm is designed for one-class recommendation. It minimizes squared errors in a pointwise manner and adopts a weighting strategy to control the gradients for each user and item latent factor.
-
SLIM [20]. SLIM is a top-n recommendation model. It uses a sparse linear method to generate recommendations by aggregating user purchase and rating profiles. We optimize its objective function with the Bayesian personalized ranking criterion due to efficiency considerations.
-
CML [10]. Collaborative metric learning considers the distances between users and items, and adopts the metric learning idea to learn user and item vectors. Here, we train this model with the hinge loss (without WARP) due to the scalability issue of the WARP loss [27].
For Random, MostPopular, ItemKNN, BPRMF, WRMF and SLIM, we use the implementations in MyMediaLite [3]. We implemented our approach and CML with Tensorflow. Since SLIM and CML have been shown to perform better than many other baselines such as [25], we do not report those methods further.
4.4 Implementation Details
We implement our model with Tensorflow and test it on a Linux machine. All learnable parameters are initialized from a random normal distribution and we use the Adam algorithm [14] to learn the optimal parameters. Hyper-parameters are tuned by grid search. For the deep neural component, we use two hidden layers of constant width with 20 neurons each. The output dimension k is set to 10. We use tanh as the nonlinear activation. The inputs of the deep neural component are the categories of the Meetup groups. The learning rate is set to 0.001 and the regularization rate \(\lambda \) is set to 0.001. The batch size is set to 1024. The scaling factor \(\tau \) is set to 2. The dynamic negative sampling size t is set to 5. We randomly split each dataset into a training set and a testing set at a ratio of 5:1, and report the average results over five different splits. The parameters of the baselines are also tuned carefully to achieve their best performance.
4.5 Results and Analysis
Tables 3 and 4 and Figs. 4 and 5 show the performance comparison on the two datasets. Our model outperforms all baselines in terms of both accuracy and ranking quality. The overall improvement on Meetup San Francisco is about \(12.53\%\), and that on Meetup New York City is about \(5.08\%\). We find that latent factor models such as BPRMF and WRMF do not work well on either dataset, especially on Meetup San Francisco, which might be caused by its extreme sparsity. The performance of CML is slightly worse than that of WRMF. The similarity-based model ItemKNN works well on Meetup New York City, but it is computationally expensive at the prediction stage. SLIM is a very strong baseline. Our model is built upon SLIM, yet outperforms it by a large margin. The main reason is that our model can capture the content of the items and optimize the results with a more reasonable sampling strategy.
In addition, we compare our model with SLIM in terms of convergence speed. Figure 6 shows the MAP, NDCG, Precision@10 and Recall@10 of our model, SLIM and ItemKNN on Meetup San Francisco as the number of training epochs increases. Our model converges much faster than SLIM, taking only about 15 iterations to achieve its best performance. This is mainly due to the dynamic negative sampling method we adopted, as this sampling strategy helps our model find informative negative samples.
5 Related Work
In this section, we briefly review the related work of event recommendation and deep learning based recommendation.
5.1 Deep Learning for Recommender System
In recent years, deep learning has been revolutionizing recommender systems. The achievements of deep learning based recommender systems in both industry and academia are inspiring and enlightening [28]. There is a variety of deep learning techniques [5], and most of them can be applied to recommendation tasks in some way. For example, the Convolutional Neural Network (CNN) can be used to extract features from textual [13] and visual information [6] of items and users. The Recurrent Neural Network (RNN) is capable of modeling the temporal dynamics and sequential patterns of historical interactions [9]. Autoencoders can learn salient feature representations from side information to enhance recommendation quality [25, 29, 30]. We can even combine several deep learning techniques to form a powerful composite recommendation model. Deep learning algorithms can also be integrated into conventional recommendation methods such as matrix factorization, factorization machines and collaborative metric learning [7, 8, 23]. There are two major motivations for applying deep learning techniques to recommender systems. First, deep learning is powerful in representation learning [1], so it provides a desirable tool for feature learning in recommender systems [24]. Second, with nonlinear activations, we can add nonlinearity to recommendation models to capture the intricate and complex characteristics of real-world datasets.
5.2 Event Recommendation
Another line of related work is event recommendation, since a Meetup meeting is also a kind of event. Note that, in this work, we mainly focus on recommending Meetup groups for users to join rather than recommending Meetup meetings (the organizers of Meetup groups can host Meetup meetings regularly, so Meetup group recommendation and Meetup meeting recommendation are two different tasks for Meetup service recommendation); nonetheless, the latter is an important task that we want to address in the future. [17] proposed an event recommendation methodology based on graph random walking and history preference reranking. They obtain candidate events by executing random walks on a hybrid graph consisting of different types of nodes representing the available entities in an event-based social network. They then extract user preferences from the user's attended events and compute the similarities between the user's interests and the candidate events. Finally, recommended event lists are obtained by combining the two similarity scores. [26] proposed a Social Information Augmented Recommender System (SIARS), which fully exploits the social influence of event hosts and group members together with basic context information for event recommendation. [16] formulated the multiple interactions among users, events, groups and locations in a unified framework and proposed a collective pairwise matrix factorization (CPMF) model to estimate users' pairwise preferences on events, groups and locations. [18] proposed a successive event recommender system based on graph entropy (SERGE) to deal with the new-event cold start problem by exploiting diverse relations as well as asynchronous feedback in EBSNs. [19] proposed a new link prediction method for the Meetup social network, which recommends events to users according to the events they participated in and their fields of interest.
[4] proposed a Bayesian latent factor model (denoted as SogBmf) for event recommendation, based on the matrix factorization framework, to integrate social group influence with individual preference.
6 Conclusion and Future Work
In this paper, we proposed a coupled linear and deep nonlinear model for Meetup service recommendation. Our model not only models the historical interaction patterns but also learns the item features effectively. We explored a novel logarithmic loss for pairwise training of the proposed model. To further enhance accuracy, we adopted a dynamic negative sampling strategy to select informative negative samples, which improves performance and leads to faster convergence. Experiments on two real-world large-scale Meetup datasets showed that our model achieves the best performance for Meetup service recommendation.
In the future, we will explore integrating contextual information such as date, location, social networks and weather to better anticipate users' intentions and make more satisfying recommendations. We will also explore methods for better Meetup meeting recommendation to enhance the Meetup user experience.
References
Bengio, Y., Courville, A., Vincent, P.: Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 35(8), 1798–1828 (2013)
Deshpande, M., Karypis, G.: Item-based top-n recommendation algorithms. ACM Trans. Inf. Syst. (TOIS) 22(1), 143–177 (2004)
Gantner, Z., Rendle, S., Freudenthaler, C., Schmidt-Thieme, L.: Mymedialite: a free recommender system library. In: Proceedings of the fifth ACM conference on Recommender systems, pp. 305–308. ACM (2011)
Gao, L., Wu, J., Qiao, Z., Zhou, C., Yang, H., Hu, Y.: Collaborative social group influence for event recommendation. In: Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, pp. 1941–1944. ACM (2016)
Goodfellow, I., Bengio, Y., Courville, A., Bengio, Y.: Deep learning, vol. 1. MIT press, Cambridge (2016)
He, R., McAuley, J.: VBPR: visual Bayesian personalized ranking from implicit feedback. In: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI 2016, pp. 144–150. AAAI Press (2016). http://dl.acm.org/citation.cfm?id=3015812.3015834
He, X., Chua, T.S.: Neural factorization machines for sparse predictive analytics. In: Proceedings of the 40th International ACM SIGIR conference on Research and Development in Information Retrieval, pp. 355–364. ACM (2017)
He, X., Liao, L., Zhang, H., Nie, L., Hu, X., Chua, T.S.: Neural collaborative filtering. In: Proceedings of the 26th International Conference on World Wide Web, pp. 173–182. International World Wide Web Conferences Steering Committee (2017)
Hidasi, B., Karatzoglou, A., Baltrunas, L., Tikk, D.: Session-based recommendations with recurrent neural networks. arXiv preprint arXiv:1511.06939 (2015)
Hsieh, C.K., Yang, L., Cui, Y., Lin, T.Y., Belongie, S., Estrin, D.: Collaborative metric learning. In: Proceedings of the 26th International Conference on World Wide Web, pp. 193–201. International World Wide Web Conferences Steering Committee (2017)
Hsieh, C.K., Yang, L., Wei, H., Naaman, M., Estrin, D.: Immersive recommendation: News and event recommendations using personal digital traces. In: Proceedings of the 25th International Conference on World Wide Web, WWW 2016, pp. 51–62. International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland (2016), https://doi.org/10.1145/2872427.2883006
Hu, Y., Koren, Y., Volinsky, C.: Collaborative filtering for implicit feedback datasets. In: Eighth IEEE International Conference on Data Mining, ICDM 2008, pp. 263–272. IEEE (2008)
Kim, D., Park, C., Oh, J., Lee, S., Yu, H.: Convolutional matrix factorization for document context-aware recommendation. In: Proceedings of the 10th ACM Conference on Recommender Systems, pp. 233–240. ACM (2016)
Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
Linden, G., Smith, B., York, J.: Amazon.com recommendations: item-to-item collaborative filtering. IEEE Internet Comput. 7(1), 76–80 (2003)
Liu, C.Y., Zhou, C., Wu, J., Xie, H., Hu, Y., Guo, L.: CPMF: A collective pairwise matrix factorization model for upcoming event recommendation. In: 2017 International Joint Conference on Neural Networks (IJCNN), pp. 1532–1539. IEEE (2017)
Liu, S., Wang, B., Xu, M.: Event recommendation based on graph random walking and history preference reranking. In: Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 861–864. ACM (2017)
Liu, S., Wang, B., Xu, M.: SERGE: Successive event recommendation based on graph entropy for event-based social networks. IEEE Access (2017)
Müngen, A.A., Kaya, M.: A novel method for event recommendation in meetup. In: Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, pp. 959–965. ACM (2017)
Ning, X., Karypis, G.: SLIM: Sparse linear methods for top-n recommender systems. In: 2011 IEEE 11th International Conference on Data Mining, pp. 497–506, December 2011
Rendle, S., Freudenthaler, C., Gantner, Z., Schmidt-Thieme, L.: BPR: Bayesian personalized ranking from implicit feedback. In: Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI 2009, pp. 452–461. AUAI Press, Arlington, Virginia, United States (2009). http://dl.acm.org/citation.cfm?id=1795114.1795167
Shani, G., Gunawardana, A.: Evaluating Recommendation Systems. In: Ricci, F., Rokach, L., Shapira, B., Kantor, P.B. (eds.) Recommender Systems Handbook, pp. 257–297. Springer, Boston, MA (2011). https://doi.org/10.1007/978-0-387-85820-3_8
Tay, Y., Anh Tuan, L., Hui, S.C.: Latent relational metric learning via memory-based attention for collaborative ranking. In: Proceedings of the 2018 World Wide Web Conference, WWW 2018, pp. 729–739. International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland (2018). https://doi.org/10.1145/3178876.3186154
Tay, Y., Tuan, L.A., et al.: Multi-pointer co-attention networks for recommendation. CoRR abs/1801.09251 (2018), http://arxiv.org/abs/1801.09251
Wang, H., Wang, N., Yeung, D.Y.: Collaborative deep learning for recommender systems. In: Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2015, pp. 1235–1244. ACM, New York (2015). https://doi.org/10.1145/2783258.2783273
Wang, Z., Zhang, Y., Li, Y., Wang, Q., Xia, F.: Exploiting social influence for context-aware event recommendation in event-based social networks. In: INFOCOM 2017-IEEE Conference on Computer Communications, pp. 1–9. IEEE (2017)
Weston, J., Bengio, S., Usunier, N.: Large scale image annotation: learning to rank with joint word-image embeddings. Mach. Learn. 81(1), 21–35 (2010)
Zhang, S., Yao, L., Sun, A.: Deep learning based recommender system: A survey and new perspectives. arXiv preprint arXiv:1707.07435 (2017)
Zhang, S., Yao, L., Xu, X.: AutoSVD\(++\): An efficient hybrid collaborative filtering model via contractive auto-encoders. In: Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2017, pp. 957–960. ACM, New York (2017). https://doi.org/10.1145/3077136.3080689
Zhang, S., Yao, L., Xu, X., Wang, S., Zhu, L.: Hybrid collaborative recommendation via semi-autoencoder. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, E.S.M. (eds.) Neural Information Processing, pp. 185–193. Springer International Publishing, Cham (2017)
Zhang, W., Chen, T., Wang, J., Yu, Y.: Optimizing top-n collaborative filtering via dynamic negative item sampling. In: Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval, pp. 785–788. ACM (2013)
© 2018 Springer International Publishing AG, part of Springer Nature
Zhang, S., Yao, L., Ning, X., Huang, C., Xu, X., Ou, S. (2018). Coupled Linear and Deep Nonlinear Method for Meetup Service Recommendation. In: Jin, H., Wang, Q., Zhang, LJ. (eds) Web Services – ICWS 2018. ICWS 2018. Lecture Notes in Computer Science(), vol 10966. Springer, Cham. https://doi.org/10.1007/978-3-319-94289-6_16