
Topic-tracking-based dynamic user modeling with TV recommendation applications


Abstract

One of the challenging issues in TV recommendation applications based on implicit rating data is how to make robust recommendations for users who watch TV programs irregularly and for users whose viewing preferences vary over time. To achieve robust recommendation for such users, it is important to capture the dynamic behavior of user preferences over the watched TV programs across time. In this paper, we propose a topic-tracking-based dynamic user model (TDUM) that extends the previous multi-scale dynamic topic model (MDTM) by incorporating topic tracking into dynamic user modeling. In the proposed TDUM, the prior of the current user preference is estimated as a weighted combination of the preferences learned for a TV user over multiple past time spans, where the optimal weight set is found by maximizing the evidence of the Bayesian probability. The proposed TDUM thus captures the dynamics of public users' preferences on TV programs for collaborative-filtering-based TV program recommendation, and the TV programs highly ranked by a group of users with a similar watching taste (a topic) can be traced with the same topic labels epoch by epoch. We also propose a rank model for TV program recommendation. To verify the effectiveness of the proposed TDUM and the rank model, we use a real data set of the TV programs watched by 1,999 TV users over 7 months. The experimental results demonstrate that the proposed TDUM outperforms the Latent Dirichlet Allocation (LDA) model and the MDTM in log-likelihood for topic modeling, and that it is superior to LDA, MDTM, and Bayesian Personalized Ranking Matrix Factorization (BPRMF) in TV program recommendation in terms of top-N precision-recall.
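To make the modeling idea concrete, the following is a minimal, hypothetical sketch (not the authors' code) of the core TDUM step: a user's Dirichlet prior for the current epoch is a weighted combination of topic preferences learned over multiple past time spans, and candidate weight sets are scored by the Dirichlet-multinomial evidence (cf. (A8) in the appendix). The toy data and the random-search stand-in for the paper's evidence-maximization update are illustrative assumptions.

```python
# Minimal sketch of the TDUM prior-combination idea (illustrative only).
import numpy as np
from scipy.special import gammaln

def log_evidence(prior, counts):
    """Dirichlet-multinomial log evidence P(counts | prior); cf. (A8)."""
    return (gammaln(prior.sum()) - gammaln(prior.sum() + counts.sum())
            + np.sum(gammaln(prior + counts) - gammaln(prior)))

# toy setup: K=4 topics, S=3 past spans (rows are per-span preferences)
span_prefs = np.array([[4.0, 1.0, 1.0, 1.0],
                       [1.0, 4.0, 1.0, 1.0],
                       [2.0, 2.0, 2.0, 2.0]])
counts = np.array([7.0, 1.0, 2.0, 0.0])   # current-epoch topic counts

# crude random search over weight candidates (the paper instead derives
# a fixed-point update; see Appendix C)
rng = np.random.default_rng(0)
best = max((rng.dirichlet(np.ones(3)) for _ in range(2000)),
           key=lambda w: log_evidence(w @ span_prefs, counts))
print("span weights favoring the best-matching past span:", best)
```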


References

  1. Abramowitz M, Stegun IA (1964) Handbook of mathematical functions: with formulas, graphs, and mathematical tables, vol 55. Courier Corporation

  2. Adomavicius G, Tuzhilin A (2005) Toward the next generation of recommender systems: a survey of the state-of-the-art and possible extensions. IEEE Trans Knowl Data Eng 17(6):734–749

  3. Ahmed A, Aly M, Gonzalez J, Narayanamurthy S, Smola AJ (2012) Scalable inference in latent variable models. In: Proceedings of the fifth ACM international conference on Web search and data mining. ACM, pp 123–132

  4. Ahmed A, Low Y, Aly M, Josifovski V, Smola AJ (2011) Scalable distributed inference of dynamic user interests for behavioral targeting. In: Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, pp 114–122

  5. Asuncion A, Welling M, Smyth P, Teh YW (2009) On smoothing and inference for topic models. In: Proceedings of the twenty-fifth conference on uncertainty in artificial intelligence. AUAI Press, pp 27–34

  6. Bishop CM (2006) Pattern recognition and machine learning. Springer, Berlin

  7. Blei DM, Lafferty JD (2006) Dynamic topic models. In: Proceedings of the 23rd international conference on machine learning. ACM, pp 113–120

  8. Blei DM, Ng AY, Jordan MI (2003) Latent Dirichlet allocation. J Mach Learn Res 3:993–1022

  9. Canini KR, Shi L, Griffiths TL (2009) Online inference of topics with latent Dirichlet allocation. In: International conference on artificial intelligence and statistics, pp 65–72

  10. De Finetti B, Machi A, Smith A (1990) Theory of probability: a critical introductory treatment. Wiley, New York

  11. Diao Q, Qiu M, Wu CY, Smola AJ, Jiang J, Wang C (2014) Jointly modeling aspects, ratings and sentiments for movie recommendation (JMARS). In: Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, pp 193–202

  12. Gantner Z, Rendle S, Drumond L, Freudenthaler C (2009) The open source code of MyMediaLite

  13. Griffiths TL, Steyvers M (2004) Finding scientific topics. Proc Natl Acad Sci 101(suppl 1):5228–5235

  14. Hoffman M, Bach FR, Blei DM (2010) Online learning for latent Dirichlet allocation. In: Advances in neural information processing systems, pp 856–864

  15. Hofmann T (1999) Probabilistic latent semantic indexing. In: Proceedings of the 22nd annual international ACM SIGIR conference on research and development in information retrieval. ACM, pp 50–57

  16. Hofmann T (2004) Latent semantic models for collaborative filtering. ACM Trans Inf Syst (TOIS) 22(1):89–115

  17. Iwata T, Watanabe S, Yamada T, Ueda N (2009) Topic tracking model for analyzing consumer purchase behavior. In: IJCAI, vol 9. Citeseer, pp 1427–1432

  18. Iwata T, Yamada T, Sakurai Y, Ueda N (2012) Sequential modeling of topic dynamics with multiple timescales. ACM Trans Knowl Discov Data (TKDD) 5(4):19

  19. Jasra A, Holmes C, Stephens D (2005) Markov chain Monte Carlo methods and the label switching problem in Bayesian mixture modeling. Stat Sci 20(1):50–67

  20. Kim E, Pyo S, Park E, Kim M (2011) An automatic recommendation scheme of TV program contents for (IP)TV personalization. IEEE Trans Broadcast 57(3):674–684

  21. Konstan JA, Riedl J (2012) Recommender systems: from algorithms to user experience. User Model User-Adap Inter 22 (1–2):101–123

  22. Koren Y, Bell R, Volinsky C (2009) Matrix factorization techniques for recommender systems. Computer 42(8):30–37

  23. Mimno D, Wallach HM, Talley E, Leenders M, McCallum A (2011) Optimizing semantic coherence in topic models. In: Proceedings of the conference on empirical methods in natural language processing. Association for Computational Linguistics, pp 262–272

  24. Minka T (2000) Estimating a Dirichlet distribution

  25. Montaner M, López B, De La Rosa JL (2003) A taxonomy of recommender agents on the internet. Artif Intell Rev 19(4):285–330

  26. Murphy KP (2012) Machine learning: a probabilistic perspective. MIT Press, Cambridge

  27. Newman D, Asuncion A, Smyth P, Welling M (2009) Distributed algorithms for topic models. J Mach Learn Res 10:1801–1828

  28. Porteous I, Newman D, Ihler A, Asuncion A, Smyth P, Welling M (2008) Fast collapsed Gibbs sampling for latent Dirichlet allocation. In: Proceedings of the 14th ACM SIGKDD international conference on knowledge discovery and data mining. ACM, pp 569–577

  29. Pyo S, Kim E, Kim M (2015) LDA-based unified topic modeling for similar TV user grouping and TV program recommendation. IEEE Trans Cybern 45(8):1476–1490

  30. Rendle S, Freudenthaler C, Gantner Z, Schmidt-Thieme L (2009) BPR: Bayesian personalized ranking from implicit feedback. In: Proceedings of the twenty-fifth conference on uncertainty in artificial intelligence. AUAI Press, pp 452–461

  31. Sato I, Kurihara K, Nakagawa H (2010) Deterministic single-pass algorithm for LDA. In: Advances in neural information processing systems, pp 2074–2082

  32. Smola A, Narayanamurthy S (2010) An architecture for parallel topic models. Proc VLDB Endow 3 (1–2):703–710

  33. Su X, Khoshgoftaar TM (2009) A survey of collaborative filtering techniques. Adv Artif Intell 2009, Article ID 421425. doi:10.1155/2009/421425

  34. American Time Use Survey (2013) Charts by topic: leisure and sports activities

  35. Teh YW, Kurihara K, Welling M (2007) Collapsed variational inference for HDP. In: Advances in neural information processing systems, pp 1481–1488

  36. Teh YW, Newman D, Welling M (2006) A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation. In: Advances in neural information processing systems, pp 1353–1360

  37. TNmS (2011) TNmS Korea Inc.

  38. Wallach HM (2008) Structured topic models for language. Ph.D. thesis, University of Cambridge

  39. Wallach HM, Mimno DM, McCallum A (2009) Rethinking LDA: why priors matter. In: Advances in neural information processing systems, pp 1973–1981

  40. Yao L, Mimno D, McCallum A (2009) Efficient methods for topic model inference on streaming document collections. In: Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, pp 937–946

Author information

Corresponding author

Correspondence to Munchurl Kim.

Appendices

Appendix A

In order to derive (6) from (5), we start with the following relation:

$$\begin{array}{@{}rcl@{}} P\left (\mathbf{z}_{t}|{\Pi}_{t-1},H_{t}\right ) &=&{\int}_{\theta_{t}}\prod\limits_{u=1}^{U}\prod\limits_{s=1}^{S} P\left (\theta_{t,u}|\pi_{t-1,u,s},\eta_{t,u,s}\right )\\ &&\times\prod\limits_{w=1}^{W} P\left (z_{t,u,w}|\theta_{t,u}\right )d\theta_{t} \end{array} $$
(A1)

If we assume that the topic distributions per user are independent of one another, (A1) can be rewritten as

$$\begin{array}{@{}rcl@{}} P\left (\mathbf{z}_{t}|{\Pi}_{t-1},H_{t}\right ) &=&\prod\limits_{u=1}^{U}{\int}_{\theta_{t,u}}\prod\limits_{s=1}^{S} P\left (\theta_{t,u}|\pi_{t-1,u,s},\eta_{t,u,s}\right )\\ &&\times\prod\limits_{w=1}^{W} P\left (z_{t,u,w}|\theta_{t,u}\right )d\theta_{t,u} \end{array} $$
(A2)

Writing the product of the Dirichlet priors over the S time spans in (A2) compactly as a single prior with parameters indexed by \(\left (. \right )\), we have

$$\begin{array}{@{}rcl@{}} P\left (\mathbf{z}_{t}|{\Pi}_{t-1},H_{t}\right ) &=&\prod\limits_{u=1}^{U}{\int}_{\theta_{t,u}} P\left (\theta_{t,u}|\pi_{t-1,u,\left (. \right )},\eta_{t,u,\left (. \right )}\right)\\ &&\times\prod\limits_{w=1}^{W} P\left (z_{t,u,w}|\theta_{t,u}\right )d\theta_{t,u} \end{array} $$
(A3)

Since \(\theta _{t,u}\) is a multinomial distribution with a Dirichlet prior, (A3) can be rewritten as

$$\begin{array}{@{}rcl@{}} &&P\left (\mathbf{z}_{t}|{\Pi}_{t-1},H_{t}\right ) =\prod\limits_{u=1}^{U}\\ &&\times{\int}_{\theta_{t,u}} \left [ \frac{\Gamma \left (\sum\limits_{k=1}^{K} \kappa_{k}^{t,u} \right )}{\prod\limits_{k=1}^{K}{\Gamma} \left (\kappa_{k}^{t,u} \right )} \prod\limits_{k=1}^{K}\theta_{t,u,k}^{\kappa_{k}^{t,u}-1} \prod\limits_{w=1}^{W}P\left (z_{t,u,w}|\theta_{t,u}\right ) \right ]d\theta_{t,u} \end{array} $$
(A4)

where \(\kappa _{k}^{t,u}\equiv {\sum }_{s}\pi _{t-1,u,k,s}\,\eta _{t,u,s}\) is the combined multi-span prior parameter. To further simplify (A4), we apply the assumption of (8), \({\Gamma } \left ({\sum }_{k=1}^{K}\kappa _{k}^{t,u} \right )={\Gamma } \left (\eta _{t,u,\left (. \right )}{\sum }_{k=1}^{K}\pi _{t-1,u,k,\left (. \right )}\right )={\Gamma } \left (\eta _{t,u,\left (. \right )}\right )\), which holds because each \(\pi _{t-1,u,s}\) is a distribution over the K topics and thus sums to one. Applying this to (A4) results in

$$\begin{array}{@{}rcl@{}} &&P\left (\mathbf{z}_{t}|{\Pi}_{t-1},H_{t}\right ) =\prod\limits_{u=1}^{U}\\ &&\times{\int}_{\theta_{t,u}} \left [ \frac{\Gamma \left (\eta_{t,u,\left (. \right )} \right )}{\prod\limits_{k=1}^{K}{\Gamma} \left (\kappa_{k}^{t,u} \right )} \prod\limits_{k=1}^{K}\theta_{t,u,k}^{\kappa_{k}^{t,u}-1} \prod\limits_{w=1}^{W}P\left (z_{t,u,w}|\theta_{t,u}\right ) \right ]d\theta_{t,u} \end{array} $$
(A5)

For the term \({\prod }_{w=1}^{W}P\left (z_{t,u,w}|\theta _{t,u}\right )\) in (A5), exactly one topic label \(z_{t,u,w}\) is allocated to each word w. Thus, \(P\left (z_{t,u,w}|\theta _{t,u}\right )\) can be substituted with \(\theta _{t,u,k}\) for topic label k, which results in

$$\begin{array}{@{}rcl@{}} P\left (\mathbf{z}_{t}|{\Pi}_{t-1},H_{t}\right ) & =&\prod\limits_{u=1}^{U}{\int}_{\theta_{t,u}} \left [ \frac{\Gamma \left (\eta_{t,u,\left (. \right )} \right )}{\prod\limits_{k=1}^{K}{\Gamma} \left (\kappa_{k}^{t,u} \right )} \prod\limits_{k=1}^{K}\theta_{t,u,k}^{\kappa_{k}^{t,u}-1} \prod\limits_{k=1}^{K}\theta_{t,u,k}^{N_{t,u,k}} \right ]d\theta_{t,u} \\ & =&\prod\limits_{u=1}^{U}{\int}_{\theta_{t,u}} \left [ \frac{\Gamma \left (\eta_{t,u,\left (. \right )} \right )}{\prod\limits_{k=1}^{K}{\Gamma} \left (\kappa_{k}^{t,u} \right )} \prod\limits_{k=1}^{K}\theta_{t,u,k}^{\left (\kappa_{k}^{t,u}+N_{t,u,k}-1 \right )} \right ]d\theta_{t,u} \end{array} $$
(A6)

Equation (A6) can be further simplified by recognizing in the exponent \(\kappa _{k}^{t,u}+N_{t,u,k}-1\) of \(\theta _{t,u,k}\) the kernel of a Dirichlet distribution; multiplying and dividing by its normalization constant gives

$$\begin{array}{@{}rcl@{}} &&P\left (\mathbf{z}_{t}|{\Pi}_{t-1},H_{t}\right ) \\ && =\prod\limits_{u=1}^{U}{\int}_{\theta_{t,u}} \left [ \frac{\Gamma \left (\eta_{t,u,\left (. \right )} \right )}{\prod\limits_{k=1}^{K}{\Gamma} \left (\kappa_{k}^{t,u} \right )} \prod\limits_{k=1}^{K}\theta_{t,u,k}^{\left (\kappa_{k}^{t,u}+N_{t,u,k}-1 \right )} \right ] \\ && \quad\cdot \left [ \frac{\Gamma \left (\sum\limits_{k=1}^{K}\left (\kappa_{k}^{t,u}+N_{t,u,k}\right ) \right )\prod\limits_{k=1}^{K}{\Gamma} \left (\kappa_{k}^{t,u}+N_{t,u,k} \right )} {\prod\limits_{k=1}^{K}{\Gamma} \left (\kappa_{k}^{t,u}+N_{t,u,k} \right ){\Gamma} \left (\sum\limits_{k=1}^{K}\left (\kappa_{k}^{t,u}+N_{t,u,k}\right ) \right )} \right ]d\theta_{t,u} \\ && =\prod\limits_{u=1}^{U}\left [ \frac{\Gamma \left (\eta_{t,u,\left (. \right )} \right )}{\prod\limits_{k=1}^{K}{\Gamma} \left (\kappa_{k}^{t,u} \right )} \frac{\prod\limits_{k=1}^{K}{\Gamma} \left (\kappa_{k}^{t,u}+N_{t,u,k} \right )} {\Gamma \left (\sum\limits_{k=1}^{K}\left (\kappa_{k}^{t,u}+N_{t,u,k}\right ) \right )}\right ] \\ &&\quad\cdot {\int}_{\theta_{t,u}} \frac{\Gamma \left (\sum\limits_{k=1}^{K}\left (\kappa_{k}^{t,u}+N_{t,u,k}\right ) \right )} {\prod\limits_{k=1}^{K}{\Gamma} \left (\kappa_{k}^{t,u}+N_{t,u,k} \right )} \prod\limits_{k=1}^{K}\theta_{t,u,k}^{\left (\kappa_{k}^{t,u}+N_{t,u,k}-1 \right )}\, d\theta_{t,u} \end{array} $$
(A7)

Since the integral of the Dirichlet distribution in (A7) is equal to one, (A7) simplifies to

$$ P\left (\mathbf{z}_{t}|{\Pi}_{t-1},H_{t}\right ) =\prod\limits_{u=1}^{U} \frac{\Gamma \left (\eta_{t,u,\left (. \right )} \right )}{\prod\limits_{k=1}^{K}{\Gamma} \left (\kappa_{k}^{t,u} \right )} \frac{\prod\limits_{k=1}^{K}{\Gamma} \left (\kappa_{k}^{t,u}+N_{t,u,k} \right )} {\Gamma \left (\sum\limits_{k=1}^{K}\left (\kappa_{k}^{t,u}+N_{t,u,k}\right ) \right )} $$
(A8)

Similarly, (7) can be derived in the same way as (6). Based on (A8) and (7), the total evidence in (5) is expressed as

$$\begin{array}{@{}rcl@{}} && P(\mathbf{w}_{t},\mathbf{z}_{t}|{\Pi}_{t},H_{t},{\Xi}_{t},{\Lambda}_{t}) =\prod\limits_{u=1}^{U} \frac{\Gamma \left (\eta_{t,u,\left (. \right )} \right )}{\prod\limits_{k=1}^{K}{\Gamma} \left (\kappa_{k}^{t,u} \right )} \frac{\prod\limits_{k=1}^{K}{\Gamma} \left (\kappa_{k}^{t,u}+N_{t,u,k} \right )} {\Gamma \left (\eta_{t,u,\left (. \right )}+N_{t,u,.} \right )} \\ &&\quad\cdot \prod\limits_{k=1}^{K} \frac{\Gamma \left (\lambda_{t,k,\left (. \right )} \right )}{\prod\limits_{w=1}^{W}{\Gamma} \left (\xi_{t,k,w,\left (. \right )}\lambda_{t,k,\left (. \right )} \right )} \frac{\prod\limits_{w=1}^{W}{\Gamma} \left (\xi_{t,k,w,\left (. \right )}\lambda_{t,k,\left (. \right )}+N_{t,k,w} \right )} {\Gamma \left (\lambda_{t,k,\left (. \right )}+N_{t,k,.} \right )} \end{array} $$
(A9)
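As a quick numerical sanity check of the closed form in (A8) (ours, not from the paper), the snippet below compares, for a single user, the Gamma-function expression against a Monte Carlo estimate of the underlying integral; the toy values of \(\kappa _{k}^{t,u}\) and \(N_{t,u,k}\) are arbitrary.

```python
# Numeric check (ours) of (A8): closed-form Dirichlet-multinomial evidence
# versus a Monte Carlo average of the multinomial likelihood.
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)
kappa = np.array([2.0, 1.0, 0.5])   # combined prior kappa_k for one user
N = np.array([3, 0, 2])             # topic counts N_{t,u,k}

# closed form of (A8) for one user, in the log domain
closed = (gammaln(kappa.sum()) - gammaln(kappa.sum() + N.sum())
          + np.sum(gammaln(kappa + N) - gammaln(kappa)))

# Monte Carlo estimate of E_theta[ prod_k theta_k^{N_k} ], theta ~ Dir(kappa)
theta = rng.dirichlet(kappa, size=200_000)
mc = np.log(np.mean(np.prod(theta ** N, axis=1)))

print(closed, mc)   # the two values should agree to a few decimals
```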

Appendix B

To obtain (9) from (A9), we sample the topic assignments from (B1) by Gibbs sampling [10, 21–23] as follows:

$$\begin{array}{@{}rcl@{}} &&P(z_{t,j}=k|\mathbf{z}_{t,-j},\mathbf{w}_{t},{\Pi}_{t-1},H_{t},{\Xi}_{t-1},{\Lambda}_{t})\\ && \propto P(w_{t,j}|\mathbf{z}_{t,-j},\mathbf{w}_{t,-j},{\Xi}_{t-1},{\Lambda}_{t}) \cdot P(z_{t,j}=k|\mathbf{z}_{t,-j},{\Pi}_{t-1},H_{t})\\ && = \left [\frac{N_{t,k,w_{t,j},-j}+\sum\limits_{s=0}^{S}\xi_{t-1,k,w_{t,j},s}\lambda_{t,k,s}}{N_{t,k,.,-j}+\sum\limits_{s=0}^{S}\lambda_{t,k,s}}\right ] \cdot \left [ \frac{N_{t,u,k,-j}+\sum\limits_{s=0}^{S}\pi_{t-1,u,k,s}\eta_{t,u,s}}{N_{t,u,.,-j}+\sum\limits_{s=0}^{S}\eta_{t,u,s}}\right ] \end{array} $$
(B1)
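For concreteness, the following is a minimal sketch of one collapsed Gibbs sweep implementing the two-factor conditional in (B1). The dense-array bookkeeping, variable names, and the folding of the multi-span priors into `prior_uk` (the \(\sum _{s}\pi \eta \) terms) and `prior_kw` (the \(\sum _{s}\xi \lambda \) terms) are our assumptions, not the authors' implementation.

```python
# Sketch of one collapsed Gibbs sweep for (B1); assumptions, not the paper's code.
import numpy as np

def gibbs_sweep(z, users, words, N_uk, N_kw, N_k, prior_uk, prior_kw, rng):
    """One sweep over all tokens; counts exclude token j when resampling it.
    prior_uk[u, k] = sum_s pi[u, k, s] * eta[u, s]   (user-side prior mass)
    prior_kw[k, w] = sum_s xi[k, w, s] * lam[k, s]   (word-side prior mass)"""
    K = N_uk.shape[1]
    for j in range(len(z)):
        u, w, k_old = users[j], words[j], z[j]
        # remove token j from all counts (the "-j" statistics in (B1))
        N_uk[u, k_old] -= 1; N_kw[k_old, w] -= 1; N_k[k_old] -= 1
        # (B1): word-likelihood term times user-prior term, for all topics k
        word_term = (N_kw[:, w] + prior_kw[:, w]) / (N_k + prior_kw.sum(axis=1))
        user_term = N_uk[u] + prior_uk[u]   # its denominator is constant in k
        p = word_term * user_term
        k_new = rng.choice(K, p=p / p.sum())
        z[j] = k_new
        N_uk[u, k_new] += 1; N_kw[k_new, w] += 1; N_k[k_new] += 1
    return z
```

Note that `prior_kw.sum(axis=1)` equals \(\sum _{s}\lambda _{t,k,s}\) because the per-span word distributions \(\xi \) each sum to one over w, matching the denominator of the first bracket in (B1).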

Appendix C

Equation (11) is derived from [14]. Similarly, (10) can be derived on the basis of the total probability evidence in (A9). For this, we consider only the η-related terms and treat the other terms as a constant C; the log probability is

$$\begin{array}{@{}rcl@{}} &&logP\left (\mathbf{z}_{t}|{\Pi}_{t-1},H_{t} \right ) + C \\ &&=\sum\limits_{u=1}^{U}\left [ log{\Gamma} \left (\sum\limits_{s=0}^{S}\eta_{t,u,s}\right )- log{\Gamma} \left (N_{t,u,.}+\sum\limits_{s=0}^{S}\eta_{t,u,s}\right ) \right ] \\ &&+\sum\limits_{u=1}^{U}\sum\limits_{k=1}^{K}\left [ log{\Gamma} \left (N_{t,u,k}+\sum\limits_{s=0}^{S}\pi_{t-1,u,k,s}\eta_{t,u,s}\right )\right.\\ &&\qquad\qquad\left.-log{\Gamma} \left (\sum\limits_{s=0}^{S}\pi_{t-1,u,k,s}\eta_{t,u,s}\right ) \right ]\\ &&+C \end{array} $$
(C1)

Using the following two inequalities, (C2) and (C3) [26], we obtain (C4) from (C1):

$$ log\frac{\Gamma \left (x \right )}{\Gamma \left (n + x \right )} \geq log\frac{\Gamma \left (\hat{x} \right )}{\Gamma \left (n + \hat{x} \right )} + \left ({\Psi}\left (\hat{x} \right )-{\Psi}\left (n + \hat{x} \right ) \right )\left (x-\hat{x} \right ) $$
(C2)
$$ log\frac{\Gamma \left (n + x \right )}{\Gamma \left (x \right )} \geq log\frac{\Gamma \left (n + \hat{x} \right )}{\Gamma \left (\hat{x} \right )} + \hat{x}\left( {\Psi}\left (n + \hat{x} \right )-{\Psi}\left (\hat{x} \right ) \right )log\frac{x}{\hat{x}} $$
(C3)
$$\begin{array}{@{}rcl@{}} &&logP\left (\mathbf{z}_{t}|{\Pi}_{t-1},H_{t} \right )+C \\ && \geqslant -\sum\limits_{u=1}^{U}\left [ {\Psi} \left (N_{t,u,.} + \sum\limits_{s=0}^{S} \eta_{t,u,s}^{old}\right ) - {\Psi} \left (\sum\limits_{s=0}^{S} \eta_{t,u,s}^{old}\right ) \right ]\left (\sum\limits_{s=0}^{S} \eta_{t,u,s}\right )\\ && +\sum\limits_{u=1}^{U}\sum\limits_{k=1}^{K}\left [ {\Psi} \left (N_{t,u,k} + \sum\limits_{s=0}^{S} \pi_{t-1,u,k,s}\eta_{t,u,s}^{old}\right )\right.\\ &&~~\qquad\qquad\left. - {\Psi} \left (\sum\limits_{s=0}^{S} \pi_{t-1,u,k,s}\eta_{t,u,s}^{old}\right ) \right ]\\ && \cdot \left (\sum\limits_{s=0}^{S} \pi_{t-1,u,k,s}\eta_{t,u,s}^{old}\right ) \cdot log\left (\sum\limits_{s=0}^{S} \pi_{t-1,u,k,s}\eta_{t,u,s}\right ) + C^{\prime}\\ && = F\left (\eta_{t,u,s} \right ) \end{array} $$
(C4)
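Before taking the derivative, here is a quick numeric spot-check (ours, with arbitrary values) that the two bounds (C2) and (C3) hold as stated; both comparisons should print True at each test point.

```python
# Spot-check (ours) of the bounds (C2) and (C3); x_hat is the expansion point.
import numpy as np
from scipy.special import gammaln, psi

n, x_hat = 7.0, 2.0
for x in (0.5, 2.0, 9.0):
    lhs_c2 = gammaln(x) - gammaln(n + x)
    rhs_c2 = (gammaln(x_hat) - gammaln(n + x_hat)
              + (psi(x_hat) - psi(n + x_hat)) * (x - x_hat))
    lhs_c3 = gammaln(n + x) - gammaln(x)
    rhs_c3 = (gammaln(n + x_hat) - gammaln(x_hat)
              + x_hat * (psi(n + x_hat) - psi(x_hat)) * np.log(x / x_hat))
    print(x, lhs_c2 >= rhs_c2, lhs_c3 >= rhs_c3)
```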

Taking the derivative of (C4) with respect to \(\eta _{t,u,s}\) and setting it to zero, we obtain (10).
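For concreteness, the sketch below iterates the resulting Minka-style fixed-point update for a single user's span weights as we read it from (C4) [24, 26]: each \(\eta _{t,u,s}\) is rescaled by the ratio of its π-weighted digamma differences to the total-count digamma difference. Shapes, names, and the toy data are our assumptions; the paper's (10) may differ in detail.

```python
# Sketch (ours) of the fixed-point update for the span weights eta_{t,u,s}.
import numpy as np
from scipy.special import psi  # digamma

def update_eta(eta_u, pi_u, N_k):
    """One fixed-point step for one user.
    eta_u: (S+1,) span weights; pi_u: (K, S+1) per-span topic preferences,
    each column summing to 1 over k; N_k: (K,) current-epoch topic counts."""
    kappa = pi_u @ eta_u                              # combined prior, (K,)
    B = psi(N_k + kappa) - psi(kappa)                 # per-topic differences
    A = psi(N_k.sum() + eta_u.sum()) - psi(eta_u.sum())
    return eta_u * (pi_u.T @ B) / A                   # elementwise rescaling

# toy check: counts drawn mostly like span 0's preference should push
# the weight mass toward span 0
pi_u = np.stack([np.array([0.7, 0.1, 0.1, 0.1]),
                 np.array([0.1, 0.7, 0.1, 0.1]),
                 np.full(4, 0.25)], axis=1)           # (K=4, S+1=3)
N_k = np.array([14.0, 2.0, 3.0, 1.0])
eta_u = np.ones(3)
for _ in range(30):
    eta_u = update_eta(eta_u, pi_u, N_k)
print(eta_u / eta_u.sum())   # weight on the first span should dominate
```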


About this article


Cite this article

Kim, E., Kim, M. Topic-tracking-based dynamic user modeling with TV recommendation applications. Appl Intell 44, 771–792 (2016). https://doi.org/10.1007/s10489-015-0720-8
