RecSys Issues Ontology: A Knowledge Classification of Issues for Recommender Systems Researchers

Published in: Information Systems Frontiers

Abstract

Scholarly research has extensively examined a number of issues and challenges affecting recommender systems (e.g. ‘cold-start’, ‘scrutability’, ‘trust’, ‘context’, etc.). However, a comprehensive knowledge classification of the issues involved with recommender systems research has yet to be developed. A holistic knowledge representation of the issues affecting a domain is critical for research advancement. The aim of this study is to advance scholarly research within the domain of recommender systems through formal knowledge classification of issues and their relationships to one another within the recommender systems research literature. In this study, we employ a rigorous ontology engineering process for the development of a recommender systems issues ontology. This ontology provides a formal specification of the issues affecting recommender systems research and development. The ontology answers such questions as, “What issues are associated with ‘trust’ in recommender systems research?”, “What issues are associated with improving and evaluating the ‘performance’ of a recommender system?” or “What ‘contextual’ factors might a recommender systems developer wish to consider in order to improve the relevancy and usefulness of recommendations?” Additionally, as an intermediate representation step in the ontology acquisition process, a concept map of recommender systems issues has been developed to provide a conceptual visualization of the issues so that researchers may discern broad themes as well as relationships between concepts. These knowledge representations may aid future researchers wishing to take an integrated approach to addressing the challenges and limitations associated with current recommender systems research.
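
As an illustration of how competency questions like those above can be posed against an OWL/RDF ontology, the following minimal Python sketch issues a SPARQL query via the rdflib library. The file name and the class and property names (Issue, associatedWith, Trust) are illustrative assumptions, not the published ontology's actual vocabulary.

    from rdflib import Graph

    # Load the ontology (file name is a hypothetical placeholder).
    g = Graph()
    g.parse("recsys_issues.owl", format="xml")

    # Competency question: "What issues are associated with 'trust'?"
    # The prefix, class and property names below are assumed for illustration.
    query = """
        PREFIX rso: <http://example.org/recsys-issues#>
        SELECT ?issue WHERE {
            ?issue a rso:Issue ;
                   rso:associatedWith rso:Trust .
        }
    """
    for row in g.query(query):
        print(row.issue)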

References

  • Abass, A., Zhang, L., & Khan, S. (2015). A survey on context-aware recommender systems based on computational intelligence techniques. Computing, 97(7), 667–690.

  • Adomavicius, G., Bockstedt, J., Curley, S., & Zhang, J. (2018). Effects of online recommendations on consumers’ willingness to pay. Information Systems Research, 29(1), 84–102.

  • Adomavicius, G., & Kwon, Y. O. (2011). Maximizing aggregate recommendation diversity: A graph-theoretic approach. In Proceedings of the 1st international workshop on novelty and diversity in recommender systems (pp. 3–10).

  • Adomavicius, G., & Tuzhilin, A. (2005). Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions. IEEE Transactions on Knowledge and Data Engineering, 17(6), 734–749.

  • Adomavicius, G., & Tuzhilin, A. (2011). Context-aware recommender systems. In F. Ricci, L. Rokach, B. Shapira, & P. Kantor (Eds.), Recommender systems handbook (pp. 217–253). Boston, MA: Springer.

  • Aggarwal, C. (2016). Social and trust-centric recommendations. In Recommender systems: The textbook (pp. 345–384). Cham, Switzerland: Springer.

  • Avery, C., & Zeckhauser, R. (1997). Recommender systems for evaluating computer messages. Communications of the ACM, 40(3), 88–89.

  • Balabanovic, M., & Shoham, Y. (1997). Fab: content-based, collaborative recommendation. Communications of the ACM, 40(3), 66–72.

  • Baker, C., & Cheung, K.-H. (2006). The evaluation of ontologies. In Semantic web: Revolutionizing knowledge discovery in the life sciences (pp. 139–158). New York: Springer Verlag.

  • Bell, R.M., Koren, Y. (2007). Scalable Collaborative Filtering with Jointly Derived Neighborhood Interpolation Weights. In Proceedings: Seventh IEEE International Conference on Data Mining (pp. 43-52). Washington, DC: IEEE Computer Society.

  • Bera, P., Burton-Jones, A., & Wand, Y. (2014). Research note —How semantics and pragmatics interact in understanding conceptual models. Information Systems Research, 25(2), 401–419.

  • Beutel, A., Covington, P., Jain, S., Xu, C., Li, J. (2018). Latent cross: Making use of context in recurrent recommender systems. In Proceedings: 11th Annual International Conference on Web Search and Data Mining (pp. 46–54).

  • Bobadilla, J., Ortega, F., Hernando, A., & Gutierrez, A. (2013). Recommender systems survey. Knowledge-Based Systems, 46, 109–132.

  • Bollen, D., Knijnenburg, B.P., Willemsen, M.C., Graus, M. (2010). Understanding choice overload in recommender systems. In: RecSys ‘10 proceedings of the fourth ACM conference on recommender systems (pp. 63–70).

  • Brank, J., Grobelnik, M., Mladenic, D. (2005). A survey of ontology evaluation techniques. In Proceedings of the conference on data mining and data warehouses (SiKDD).

  • Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3, 77–101.

  • Bray, T., Paoli, J., Sperberg-McQueen, C. M., Maler, E., & Yergeau, F. (2008). Extensible Markup Language (XML) 1.0 (Fifth Edition). W3C Recommendation 26 November 2008. https://www.w3.org/TR/2008/REC-xml-20081126/. Accessed 15 July 2018.

  • Brickley, D., & Guha, R. V. (1999). Resource Description Framework (RDF) Schema specification. Proposed Recommendation, World Wide Web Consortium. http://www.w3.org/TR/PR-rdf-schema. Accessed 15 July 2018.

  • Bridge, D., Göker, M., & Smyth, B. (2005). Case-based recommender systems. The Knowledge Engineering Review, 20(3), 315–320.

  • Brynjolfsson, E., Hu, Y., & Smith, M. D. (2010). Long tails vs. Superstars: The effect of information technology on product variety and sales concentration patterns. Information Systems Research, 21(4), 736–747.

  • Brooke, J. (1996). SUS – A “quick and dirty” usability scale. In P. Jordan, B. Thomas, & B. Weerdmeester (Eds.), Usability evaluation in industry (pp. 189–194). London, UK: Taylor and Francis.

  • Burgess, S., Sellitto, C., Cox, C., & Buultjens, J. (2011). Trust perceptions of online travel information by different content creators: Some social and legal implications. Information Systems Frontiers, 13(2), 221–235.

  • Burke, R. (1999). Integrating knowledge-based and collaborative-filtering recommender systems. In Artificial intelligence for electronic commerce: Papers from the AAAI workshop (AAAI Technical Report WS-99-01, pp. 69–72).

  • Burke, R. (2002). Hybrid recommender systems: Survey and experiments. User Modeling and User-Adapted Interaction, 12(4), 331–370.

  • Burke, R., Mobasher, B., Williams, C., Bhaumik, R. (2006). Classification features for attack detection in collaborative recommender systems. In: Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD’06). New York: ACM.

  • Cacheda, F., Carneiro, V., Fernandez, D., & Formoso, V. (2011). Comparison of collaborative filtering algorithms: Limitations of current techniques and proposals for scalable, high-performance recommender systems. ACM Transactions on the Web, 5(1), Article 2. https://doi.org/10.1145/1921591.1921593.

  • Campos, P. G., Diez, F., & Cantador, I. (2014). Time-aware recommender systems: A comprehensive survey and analysis of existing evaluation protocols. User Modeling and User-Adapted Interaction, 24(1–2), 67–119.

  • Castagnos, S., Brun, A., Boyer, A., (2013). When diversity is needed…but not expected! In: Proceedings of the 3rd International Conference on Advances in Information Mining and Management (pp. 44–50).

  • Castells, P., Vargas, S., Wang, J. (2011). Novelty and diversity metrics for recommender systems: Choice, discovery and relevance. International workshop on diversity in document retrieval at the ECIR 2011: The 33rd European conference on information retrieval, Dublin.

  • Champiri, Z. D., Shahamiri, S. R., & Salim, S. S. B. (2015). A systematic review of scholar context-aware recommender systems. Expert Systems with Applications, 42, 1743–1758.

  • Chang, W.-L., & Jung, C.-F. (2017). A hybrid approach for personalized service staff recommendation. Information Systems Frontiers, 19(1), 149–163.

  • Chen, N.-S., Kinshuk, Wei, C.-W., & Chen, H.-J. (2008). Mining e-learning domain concept map from academic articles. Computers and Education, 50(3), 1009–1021.

  • Claypool, M., Gokhale, A., Miranda, T., Murnikov, P., Netes, D., Sartin, M. (1999). Combining content-based and collaborative filters in an online newspaper. In: Proceedings of ACM SIGIR workshop on recommender systems.

  • Cremonesi, P., Koren, Y., & Turrin, R. (2010). Performance of recommender algorithms on top-n recommendation tasks. In Proceedings of the fourth ACM conference on recommender systems (pp. 39–46).

  • Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340.

  • Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982–1003.

  • Dyer, J. S., Fishburn, P. C., Steuer, R. E., Wallenius, J., & Zionts, S. (1992). Multiple Criteria decision making, multiattribute utility theory: the next ten years. Management Science, 38(5), 645–654.

  • Felfernig, A., Burke, R. (2008). Constraint-based recommender systems: Technologies and research issues. In: ICEC ‘08 Proceedings of the 10th international conference on Electronic commerce, article no. 3.

  • Fernandez, M., Overbeeke, C., Sabou, M., & Motta, E. (2009). What makes a good ontology? A case-study in fine-grained knowledge reuse. In 4th Asian semantic web conference (ASWC 2009) (pp. 61–75). Shanghai, China.

  • Ge, M., Delgado-Battenfeld, C., Jannach, D. (2010). Beyond accuracy: evaluating recommender systems by coverage and serendipity. In: Proceedings of the fourth ACM conference on Recommender systems (RecSys ’10). (257–260). New York: ACM.

  • Fernández-López, M., Gómez-Pérez, A., & Juristo, N. (1997). METHONTOLOGY: From ontological art towards ontological engineering. In Proceedings of the AAAI-97 Spring Symposium Series. Stanford, CA: Stanford University.

  • Goldberg, D., Nichols, D., Oki, B. M., & Terry, D. (1992). Using collaborative filtering to weave an information tapestry. Communications of the ACM, 35(12), 61–70.

  • Gomez-Perez, A. (1996). Towards a framework to verify knowledge sharing technology. Expert Systems with Applications, 11(4), 519–529.

  • Gomez-Perez, A. (1999). Ontological engineering: A state of the art. Expert Update: Knowledge Based Systems and Applied Artificial Intelligence, 2(3), 33–43.

  • Gómez-Pérez, A., Fernández-López, M., & Corcho, O. (2004). Ontological engineering: With examples from the areas of knowledge management, E-commerce and the semantic web. London: Springer.

  • Gomez-Uribe, C., & Hunt, N. (2016). The Netflix recommender system: Algorithms, business value, and innovation. ACM Transactions on Management Information Systems (TMIS), 6(4), 1–19.

  • Gretzel, U., & Fesenmaier, D. R. (2006). Persuasion in Recommender Systems. International Journal of Electronic Commerce, 11(2), 81–100.

  • Gruber, T. R. (1995). Towards principles for the design of ontologies used for knowledge sharing? International Journal of Human-Computer Studies, 43(5–6), 907–928.

  • Gruninger, M. and Fox, M.S. (1995). Methodology for the design and evaluation of ontologies. In: Proceedings of the workshop on basic ontological issues in knowledge sharing, IJCAI-95, Montreal.

  • Hagel, J., & Singer, M. (1999). Net worth, shaping markets when customers make the rules. Boston: Harvard Business School Press.

  • Herlocker, J. L., Konstan, J. A., Borchers, A., & Riedl, J. (1999). An algorithmic framework for performing collaborative filtering. In SIGIR '99 proceedings of the 22nd annual international ACM SIGIR conference on research and development in information retrieval (pp. 230–237).

  • Herlocker, J. L., Konstan, J. A., & Riedl, J. (2000). Explaining collaborative filtering recommendations. In CSCW '00 proceedings of the 2000 ACM conference on computer supported cooperative work (pp. 241–250).

  • Herlocker, J. L., Konstan, J. A., Terveen, L. G., & Riedl, J. T. (2004). Evaluating collaborative filtering recommender systems. ACM Transactions on Information Systems (TOIS), 22(1), 5–53.

  • Hevner, A., March, S., Park, J., & Ram, S. (2004). Design Science in Information Systems Research. MIS Quarterly, 28(1), 75–105.

  • Ho, Y.-C., Chiang, Y.-T., & Hsu, J. Y.-J. (2014). Who likes it more? Mining worth-recommending items from long tails by modeling relative preference. In Proceedings of the 7th ACM International Conference on Web Search and Data Mining (pp. 253–262). New York: ACM Press.

  • Hoffman, D. L., Novak, T. P., & Peralta, M. (1999). Building consumer trust online. Communications of the ACM, 42(4), 80–85.

  • Horridge, M., Mortensen, J.M., Parsia, B., Sattler, U., Musen, M.A. (2014). A study on the atomic decomposition of ontologies. Mika, P. et al. (Eds.) in proceedings: The Semantic Web – ISWC 2014: 13th International Semantic Web Conference (part II, LNCS 8797, pp. 65–80).

  • Hu, Y., Koren, Y., & Volinsky, C. (2008). Collaborative filtering for implicit feedback datasets. In Proceedings of the Eighth IEEE International Conference on Data Mining (pp. 263–272). Pisa: IEEE.

  • Huang, S. (2011). Designing utility-based recommender systems for e-commerce: Evaluation of preference-elicitation methods. Electronic Commerce Research and Applications, 10(4), 398–407.

  • Isinkaye, F. O., Folajimi, Y. O., & Ojokoh, B. A. (2015). Recommendation systems: Principles, methods and evaluation. Egyptian Informatics Journal, 16(3), 261–273.

  • ISO 9241-11 (2018). Ergonomics of human-system interaction – Part 11: Usability: Definitions and concepts. https://www.iso.org/standard/63500.html. Accessed 15 July 2018.

  • Iyengar, S. S., & Lepper, M. R. (2000). When choice is demotivating: Can one desire too much of a good thing? Journal of Personality and Social Psychology, 79(6), 995–1006.

  • Jameson, A. (2004). More than the sum of its members: Challenges for group recommender systems. In Proceedings of the Working Conference on Advanced Visual Interfaces (pp. 48–54). New York: ACM.

  • Ji, A. T., Yeon, C., Kim, H. N., & Jo, G. S. (2007). Collaborative tagging in recommender systems. In M. A. Orgun & J. Thornton (Eds.), AI 2007: Advances in artificial intelligence. Lecture notes in computer science (Vol. 4830). Berlin, Heidelberg: Springer.

  • Jonassen, D. H. (2005). Tools for representing problems and the knowledge required to solve them. In S.-O. Tergan & T. Keller (Eds.), Knowledge and information visualization (LNCS 3426, pp. 82–94). Berlin, Heidelberg: Springer.

  • Knowledge Representation and Reasoning Group (2018). HermiT OWL reasoner. Information Systems Group, Department of Computer Science, University of Oxford. http://www.hermit-reasoner.com/. Accessed 15 July 2018.

  • Kaminskas, M., & Bridge, D. (2017). Diversity, serendipity, novelty, and coverage: A survey and empirical analysis of beyond-accuracy objectives in recommender systems. ACM Transactions on Interactive Intelligent Systems (TiiS), 7(1), 1–42.

  • Kazman, R., Abowd, G., Bass, L., & Clements, P. (1996). Scenario-based analysis of software architecture. IEEE Software, 13(6), 47–55.

  • Kirakowski, J., & Corbett, M. (1993). SUMI: Software usability measurement inventory. British Journal of Educational Technology, 23(3), 210–214.

  • Khan, M., Ibrahim, R., & Ghani, I. (2017). Cross domain recommender systems: A systematic literature review. ACM Computing Surveys (CSUR), 50(3), 1–34.

  • Kimble, C., de Vasconcelos, J. B., & Rocha, Á. (2016). Competence management in knowledge intensive organizations using consensual knowledge and ontologies. Information Systems Frontiers, 18(6), 1119–1130.

  • Knijnenburg, B. P., Willemsen, M. C., Gantner, Z., Soncu, H., & Newell, C. (2012). Explaining the user experience of recommender systems. User Modeling and User-Adapted Interaction, 22(4–5), 441–504.

  • Komiak, S. Y. X., & Benbasat, I. (2006). The effects of personalization and familiarity on trust and adoption of recommendation agents. MIS Quarterly, 30(4), 941–960.

  • Koprinska, I., & Yacef, K. (2015). People-to-people reciprocal recommenders. In F. Ricci, L. Rokach, & B. Shapira (Eds.), Recommender systems handbook (2nd ed., pp. 545–567). Boston, MA: Springer.

  • Koren, Y., Bell, R., & Volinsky, C. (2009). Matrix factorization techniques for recommender systems. Computer, 42(8), 30–37.

  • Koutrika, G., Bercovitz, B., & Garcia-Molina, H. (2009). FlexRecs: Expressing and combining flexible recommendations. In Proceedings of the 35th SIGMOD International Conference on Management of Data (pp. 745–757). Providence: ACM.

  • Kretzer, M., & Maedche, A. (2018). Designing social nudges for enterprise recommendation agents: An investigation in the business intelligence systems context. Journal of the Association for Information Systems, 19(12), 1145–1186.

  • Krulwich, B. (1997). Lifestyle Finder: Intelligent user profiling using large-scale demographic data. AI Magazine, 18(2), 37–45.

  • Lam, S. K., Frankowski, D., & Riedl, J. (2006). Do you trust your recommendations? An exploration of security and privacy issues in recommender systems. In G. Müller (Ed.), Emerging trends in information and communication security. Lecture notes in computer science (Vol. 3995, pp. 12–29). Berlin, Heidelberg: Springer.

  • Lamb, R., & Kling, R. (2003). Reconceptualizing users as social actors in information systems research. MIS Quarterly, 27(2), 197–235.

  • Lee, A. S., & Hubona, G. S. (2009). A scientific basis for rigor in information systems. MIS Quarterly, 33(2), 237–262.

  • Levandoski, J. J., Sarwat, M., Eldawy, A., & Mokbel, M. F. (2012). LARS: A location-aware recommender system. In Proceedings of the IEEE 28th International Conference on Data Engineering (pp. 450–461). Washington, DC: IEEE.

  • Li, B., Yang, Q., & Xue, X. (2009). Transfer learning for collaborative filtering via a rating-matrix generative model. In ICML '09 proceedings of the 26th annual international conference on machine learning (pp. 617–624).

  • Li, T., & Unger, T. (2012). Willing to pay for quality personalization? Trade-off between quality and privacy. European Journal of Information Systems, 21(6), 621–642.

  • Li, Y., Thomas, M. A., & Osei-Bryson, K. M. (2017). Ontology-based data mining model management for self-service knowledge discovery. Information Systems Frontiers, 19(4), 925–943.

  • Linden, G., Smith, B., & York, J. (2003). Amazon.com recommendations: Item-to-item collaborative filtering. IEEE Internet Computing, 7(1), 76–80.

  • Lorenzi, F., & Ricci, F. (2003). Case-based recommender systems: A unifying view. In B. Mobasher & S. S. Anand (Eds.), Intelligent techniques for web personalization. Lecture notes in computer science (Vol. 3169). Berlin, Heidelberg: Springer.

  • Lu, L., Medo, M., Yeung, C. H., Zhang, Y. C., Zhang, Z. K., & Zhou, T. (2012). Recommender Systems. Physics Reports, 519(1), 1–49.

  • Lu, J., Wu, D., Mao, M., Wang, W., & Zhang, G. (2015). Recommender system application developments: A survey. Decision Support Systems, 74, 12–32.

  • Ma, H., Zhou, D., Liu, C., Lyu, M. R., & King, I. (2011). Recommender systems with social regularization. In Proceedings of the Fourth ACM International Conference on Web Search and Data Mining (pp. 287–296). New York: ACM.

  • Mahmood, T., & Ricci, F. (2007). Learning and adaptivity in interactive recommender systems. In Proceedings of the ninth international conference on electronic commerce (pp. 75–84).

  • O’Mahony, M. P., Hurley, N. J., & Silvestre, G. C. M. (2006). Detecting noise in recommender system databases. In IUI '06 proceedings of the 11th international conference on intelligent user interfaces (pp. 109–115).

  • Malone, T., Grant, K., Turbak, F., Brobst, S., & Cohen, M. (1987). Intelligent information-sharing systems. Communications of the ACM, 30(5), 390–402.

  • Markus, L. (1997). The qualitative difference in information systems research and practice. In A. S. Lee, J. Liebenau, & J. I. DeGross (Eds.), Information systems and qualitative research: Proceedings of the IFIP TC8 WG 8.2 meeting in Philadelphia (pp. 11–27). London: Chapman and Hall.

  • Matera, M., Rizzo, F., & Carughi, G. T. (2006). Web usability: Principles and evaluation methods. In N. Mosely & E. Mendes (Eds.), Web engineering (pp. 143–156). New York: Springer.

  • Masthoff, J. (2011). Group Recommender Systems: Combining Individual Models. In F. Ricci, L. Rokach, B. Shapira, & P. Kantor (Eds.), Recommender Systems Handbook. Boston: Springer.

  • McDonald, D.W., Ackerman, M.S. (2000). Expertise recommender: a flexible recommendation system and architecture. In: CSCW ’00: Proceedings of the 2000 ACM conference on Computer supported cooperative work (pp. 231–240). New York: ACM.

  • McNee, S. M., Riedl, J., & Konstan, J. A. (2006). Being accurate is not enough: How accuracy metrics have hurt recommender systems. In Proceedings of CHI ‘06 extended abstracts on human factors in computing systems (pp. 1097–1101).

  • Mobasher, B., Burke, R., Bhaumik, R., & Williams, C. (2007). Towards trustworthy recommender systems: An analysis of attack models and algorithm robustness. ACM Transactions on Internet Technology (TOIT), 7(4), Article 23.

  • Muter, I., & Aytekin, T. (2017). Incorporating aggregate diversity in recommender systems using scalable optimization approaches. Information Systems Research, 29(3), 405–421.

  • Narock, T., Zhou, L., & Yoon, V. (2012). Semantic similarity of ontology instances using polarity mining. Journal of the Association for Information Science and Technology, 64(2), 416–427.

  • Nielsen, J. (1993). The Usability Engineering Lifecycle. In Usability engineering. Cambridge, MA: Academic Press.

  • Nguyen, T. T., Harper, F. M., Terveen, L., & Konstan, J. A. (2017). User personality and user satisfaction with recommender systems. Information Systems Frontiers, 20(6), 1173–1189.

  • Noy, N., & McGuinness, D. (2001). Ontology development 101: A guide to creating your first ontology. http://www.ksl.stanford.edu/people/dlm/papers/ontology101/ontology101-noy-mcguinness.html. Accessed 31 May 2018.

  • Novak, J. D., & Gowin, D. B. (1984). Learning how to learn. New York: Cambridge University Press.

  • O’Donovan, J., & Smyth, B. (2005). Trust in recommender systems. In IUI '05 proceedings of the 10th international conference on intelligent user interfaces (pp. 167–174).

  • O’Mahony, M., Hurley, N., Kushmerick, N., & Silvestre, G. (2004). Collaborative recommendation: a robustness analysis. ACM Transactions on Internet Technology, 4(4), 344–377.

  • Palau, J., Montaner, M., López, B., & de la Rosa, J. L. (2004). Collaboration analysis in recommender systems using social networks. In M. Klusch, S. Ossowski, V. Kashyap, & R. Unland (Eds.), Cooperative information agents VIII. CIA 2004. Lecture notes in computer science (Vol. 3191). Berlin: Springer.

  • Pan, W., Xiang, E. W., Liu, N. N., & Yang, Q. (2010). Transfer learning in collaborative filtering for sparsity reduction. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI-10) (pp. 230–235). Atlanta, GA.

  • Pang, B., Lee, L., & Vaithyanathan, S. (2002). Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 79–86).

  • Panniello, U., Gorgoglione, M., & Tuzhilin, A. (2016). In CARSs we trust: How context-aware recommendations affect customers’ trust and other business performance measures of recommender systems. Information Systems Research, 27(1), 182–196.

  • Park, S.-H., & Han, S. P. (2014). From accuracy to diversity in product recommendations: Relationship between diversity and customer retention. International Journal of Electronic Commerce, 18(2), 51–72.

  • Pathak, B., Garfinkel, R., Gopal, R. D., Venkatesh, R., & Yin, F. (2010). Empirical analysis of the impact of recommender systems on sales. Journal of Management Information Systems, 27(2), 159–188.

  • Payne, J. W., Bettman, J. R., & Johnson, E. J. (1993). The adaptive decision maker. New York: Cambridge University Press.

  • Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., & Duchesnay, E. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12, 2825–2830.

  • Pennock, D. M., Horvitz, E. (2000). Collaborative filtering by personality diagnosis: A hybrid memory- and model-based approach. In: Uncertainty in Artificial Intelligence Proceedings (pp. 473-480). San Francisco: Morgan Kaufmann Publishers.

  • Peska, L., & Vojtas, P. (2017). Using implicit preference relations to improve content based recommending. Journal on Data Semantics, 6(1), 15–30.

  • Pollock, J. T., & Hodgson, R. (2004). Ontology design patterns. Adaptive Information: Improving Business through Semantic Interoperability, Grid Computing, and Enterprise Integration (pp. 145–194). Chichester: John Wiley & Sons.

  • Portugal, I., Alencar, P., & Cowan, D. (2018). The use of machine learning algorithms in recommender systems: A systematic review. Expert Systems with Applications, 97, 205–227.

  • Porzel, R., & Malaka, R. (2004). A task-based approach for ontology evaluation. In Proceedings of the ECAI 2004 Workshop on Ontology Learning and Population. Valencia, Spain.

  • Pu, P., Chen, L., & Hu, R. (2011). A user-centric evaluation framework for recommender systems. In RecSys '11 proceedings of the fifth ACM conference on recommender systems (pp. 157–164).

  • Rad, H.S., Lucas, C. (2007). A recommender system based on invasive weed optimization algorithm. IEEE Congress on Evolutionary Computation, CEC 2007, (pp. 4297–4304). Singapore: IEEE.

  • Raad, J., & Cruz, C. (2015). A survey of ontology evaluation methods. In Proceedings of the International Conference on Knowledge Engineering and Ontology Development, part of the 7th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management. Lisbon, Portugal.

  • Rashid, A. M., Albert, I., Cosley, D., Lam, S. K., McNee, S. M., Konstan, J. A., & Riedl, J. (2002). Getting to know you: Learning new user preferences in recommender systems. In Proceedings of the International Conference on Intelligent User Interfaces (pp. 127–134). New York: ACM Press.

  • Ramaprasad, A., & Syn, T. (2015). Ontological meta-analysis and synthesis. Communications of the Association for Information Systems, 37(7), 138–153.

  • Rao, L., Mansingh, G., & Osei-Bryson, K.-M. (2012). Building ontology based knowledge maps to assist business process re-engineering. Decision Support Systems, 52(3), 577–589.

  • Resnick, P., & Varian, H. R. (1997). Recommender systems. Communications of the ACM, 40(3), 56–58.

  • Resnick, P., & Sami, R. (2008). Manipulation-resistant recommender systems through influence limits. ACM SIGecom Exchanges, 7(3), 1–4.

  • Ricci, F. (2002). Travel recommender systems. IEEE Intelligent Systems, 17(6), 55–57.

  • Rong, W., Peng, B., Ouyang, Y., Liu, K., & Xiong, Z. (2015). Collaborative personal profiling for web service ranking and recommendation. Information Systems Frontiers, 17(6), 1265–1282.

  • Ruiz-Primo, M., & Shavelson, R. (1996). Problems and issues in the use of concept maps in science assessment. Journal of Research in Science Teaching, 33(6), 569–600.

  • Sahoo, N., Singh, P. V., & Mukhopadhyay, T. (2012). A hidden Markov model for collaborative filtering. MIS Quarterly, 36(4), 1329–1356.

  • Saldana, J. (2016). The coding manual for qualitative researchers. Thousand Oaks, CA: Sage Publications.

  • Sarwar, B., Karypis, G., Konstan, J., & Riedl, J. (2000a). Analysis of recommendation algorithms for E-commerce. In Proceedings of the 2nd ACM conference on electronic commerce (pp. 158–167).

  • Sarwar, B., Karypis, G., Konstan, J., & Riedl, J. (2000b). Application of dimensionality reduction in recommender system – a case study. U.S. Army Research Office. http://www.dtic.mil/get-tr-doc/pdf?AD=ADA439541. Accessed 16 November 2017.

  • Schlimmer, J. C., & Granger, R. H. (1986). Beyond incremental processing: Tracking concept drift. In Proceedings of the Fifth National Conference on Artificial Intelligence (pp. 502–507). Philadelphia: Morgan Kaufmann.

  • Schafer, J. B., Konstan, J. A., & Riedl, J. (2001). E-commerce recommendation applications. In Applications of data mining to electronic commerce (pp. 115–153).

  • Schafer, J. B., Frankowski, D., Herlocker, J., & Sen, S. (2007). Collaborative filtering recommender systems. In P. Brusilovsky, A. Kobsa, & W. Nejdl (Eds.), The adaptive web (pp. 291–324). Berlin: Springer.

  • Shani, G., Heckerman, D., & Brafman, R. I. (2005). An MDP-based recommender system. Journal of Machine Learning Research, 6, 1265–1295.

  • Shani, G., & Gunawardana, A. (2011). Evaluating recommendation systems. In F. Ricci, L. Rokach, B. Shapira, & P. B. Kantor (Eds.), Recommender systems handbook (pp. 257–297). Boston, MA: Springer. https://doi.org/10.1007/978-0-387-85820-3.

  • Schein, A. I., Popescul, A., Ungar, L. H., & Pennock, D. M. (2002). Methods and metrics for cold-start recommendations. In SIGIR '02 proceedings of the 25th annual international ACM SIGIR conference on research and development in information retrieval (pp. 253–260).

  • Sie, R. L. L., Bitter-Rijpkema, M., & Sloep, P. B. (2010). A simulation for content-based and utility-based recommendation of candidate coalitions in virtual creativity teams. Procedia Computer Science, 1(2), 2883–2888.

  • Simon, H. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69(1), 99–118.

  • Smyth, B., McClave, P. (2001). Similarity vs diversity. In: Proceedings of the 4th International Conference on Case-Based Reasoning (pp. 347-361). Berlin: Springer-Verlag.

  • Starr, R. R., & de Oliveira, J. M. P. (2013). Concept maps as the first step in ontology creation. Information Systems, 38(5), 771–783.

  • Strasunskas, D., & Tomassen, S. (2008). Empirical insights on a value of ontology quality in ontology-driven web search. In OnTheMove 2008 Confederated International Conferences (OTM 2008) (pp. 1319–1337). Monterrey, Mexico.

  • Su, X., & Khoshgoftaar, T. M. (2009). A survey of collaborative filtering techniques. Advances in Artificial Intelligence, 2009. https://doi.org/10.1155/2009/421425.

  • Suárez-Figueroa, M. C., Gómez-Pérez, A., & Fernández-López, M. (2012). The NeOn methodology for ontology engineering. In M. Suárez-Figueroa, A. Gómez-Pérez, E. Motta, & A. Gangemi (Eds.), Ontology engineering in a networked world. Berlin, Heidelberg: Springer.

  • Tao, L., Cao, J., & Liu, F. (2017). Quantifying textual terms for similarity measurement. Information Sciences, 415-416, 269–282.

  • Thieblin, E., Haemmerle, O., & Trojahn, C. (2018). Complex matching based on competency questions for alignment: A first sketch. In E. Demidova, A. J. Zaveri, & E. Simperl (Eds.), Emerging topics in semantic technologies: ISWC 2018 satellite events. Berlin: AKA Verlag.

  • Tintarev, N., & Masthoff, J. (2007). Survey of explanations in recommender systems. 2007 IEEE 23rd International Conference on Data Engineering Workshop (pp. 801–810).

  • Tuzhilin, A. (2012). Customer relationship management and web mining: The next frontier. Data Mining and Knowledge Discovery, 24(3), 584–612.

  • Tuzlukov, V. (2010). Signal processing noise. Electrical Engineering and Applied Signal Processing Series. Boca Raton, FL: CRC Press.

  • Vargas, S., Castells, P. (2014). Improving sales diversity by recommending users to items. In: Proceedings of the 8th ACM Conference on Recommender Systems (pp. 145–152). New York: ACM.

  • Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478.

  • Verbert, K., Manouselis, N., Ochoa, X., Wolpers, M., Drachsler, H., Bosnic, I., & Duval, E. (2012). Context-aware recommender systems for learning: A survey and future challenges. IEEE Transactions on Learning Technologies, 5(4), 318–335.

  • Vrandecic, D., Pinto, S., Tempich, C., & Sure, Y. (2005). The DILIGENT knowledge processes. Journal of Knowledge Management, 9(5), 85–96.

  • W3C (2012). OWL 2 Web Ontology Language document overview (2nd ed.). World Wide Web Consortium. https://www.w3.org/TR/owl2-overview/. Accessed 15 July 2018.

  • W3C (2008). SPARQL query language for RDF. World Wide Web Consortium. https://www.w3.org/TR/rdf-sparql-query/. Accessed 15 July 2018.

  • W3C (2014). Resource Description Framework. World Wide Web Consortium. https://www.w3.org/RDF/. Accessed 15 July 2018.

  • Wang, H., Wang, N., & Yeung, D. (2015). Collaborative deep learning for recommender systems. In KDD '15 proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining (pp. 1235–1244).

  • Wang, Y. F., Chiang, D. A., Hsu, M. H., Lin, C. J., & Lin, I. L. (2009). A recommender system to avoid customer churn: A case study. Expert Systems with Applications, 36, 8071–8075.

  • Warren, C., McGraw, A. P., & Van Boven, L. (2011). Values and preferences: defining preference construction. Wiley Interdisciplinary Reviews: Cognitive Science, 2(2), 193–205.

  • Wong, W., Liu, W., & Bennamoun, M. (2012). Ontology learning from text: A look back and into the future. ACM Computing Surveys, 44(4), Article 20.

  • Xiao, B., & Benbasat, I. (2007). E-commerce product recommendation agents: Use, characteristics, and impact. MIS Quarterly, 31(1), 137–209.

  • Yang, X., Guo, Y., Liu, Y., & Steck, H. (2014). A survey of collaborative filtering based social recommender systems. Computer Communications, 41, 1–10.

  • Yao, Y. Y. (1995). Measuring retrieval effectiveness based on user preference of documents. Journal of the American Society for Information Science, 46, 133–145.

  • Yoon, V. Y., Hostler, R. E., Guo, Z., & Guimares, T. (2013). Assessing the moderating effect of consumer product knowledge and online shopping experience on using recommendation agents for customer loyalty. Decision Support Systems, 55(4), 883–893.

  • Zhang, S., Yao, L., & Sun, A. (2017). Deep learning based recommender system: A survey and new perspectives. arXiv preprint. https://arxiv.org/pdf/1707.07435.pdf. Accessed 16 November 2017.

  • Zibriczky, D. (2016) Recommender Systems meet Finance: A literature review. In: Proceedings of the 2nd International Workshop on Personalization & Recommender Systems in Financial Services, Bari.

  • Zimmermann, A., Lorenz, A., & Oppermann, R. (2007). An operational definition of context. In Modeling and using context: 6th International and Interdisciplinary Conference, CONTEXT 2007, Roskilde, Denmark, August 20–24, 2007, Proceedings (Lecture notes in computer science, Vol. 4635, pp. 558–571). Berlin, Heidelberg: Springer.

Author information

Correspondence to Lawrence Bunnell.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1: Glossary of Issues Affecting Recommender Systems

In this appendix we provide a brief summary description of each of the System and User issues affecting recommender systems, derived during the development of our recommender systems issues ontology. To distinguish issues among broad themes, we have included screenshots from our recommender systems Concept Map to serve as headings, and we have arranged the issue descriptions hierarchically from the top-level User and System nodes, beginning with recommender systems User issues. Starting with the User Adoption issues at the bottom of the Concept Map, we proceed outward through each level of issue nodes towards the ends of the branches, followed by a similar treatment of the recommender systems System-related issues. Where available, we discuss the solutions the literature offers for these challenges.

1.1 User Issues in Recommender Systems

1.1.1 Theme: User Adoption

  1. User Adoption. Adoption has been a well-addressed topic in IS research. The technology acceptance model (TAM) posits that intention to adopt is influenced by factors such as perceived usefulness (PU) and perceived ease of use (PEOU) (Davis et al. 1989). Recommender systems present a somewhat different adoption challenge in that users are not necessarily corporate employees who have no choice but to accept (or attempt to thwart) an IT implementation. Several authors have argued that purely cognitive decisions about performance, usefulness or effort may not be the only factors, and that trust, reputation and user familiarity also strongly influence recommender systems user adoption (O’Donovan and Smyth 2005; Komiak and Benbasat 2006; Mobasher et al. 2007).

[Figure a: Concept Map screenshot]

  2. Intention to Use/Intention to Transact. Behavioral intention influences the use of recommender systems, just as with any other technology. Behavioral intention for future use of recommender systems has been associated with trust, perceived ease of use (PEOU), perceived usefulness (PU) and satisfaction (Xiao and Benbasat 2007). One of the primary purposes of e-commerce recommender systems is to persuade users to purchase the items recommended. Towards this end, the factors associated with intention to use also affect intention to transact or purchase. Recommender systems have been shown to play a significant role in purchasing decisions (Pathak et al. 2010; Adomavicius et al. 2018). When it comes to making a financial decision, factors such as persuasiveness, customer loyalty and risk may influence a user’s intention to purchase (Pu et al. 2011). Several other researchers have also suggested that trust in a recommender system is an important factor in behavioral intention (Komiak and Benbasat 2006; Lam et al. 2006).

  3. Perceived Accuracy. From a subjective standpoint, perceived accuracy has been defined as the degree to which users feel that a recommender system’s recommendations match their preferences. As opposed to decision support or statistical accuracy, perceived accuracy is a personal assessment by the user of how well the recommender system understands his or her preferences and tastes (Pu et al. 2011).

  4. Financial Risk. One form of risk in recommender systems has to do with the risk associated with the recommendation of particular items. Certain recommender systems, such as stock or investment portfolio recommenders, carry corresponding financial risks for the user. Rating systems that allow the user to indicate their risk tolerance may help to allay user reticence to utilize recommender systems (Shani and Gunawardana 2011).

  5. Information Sufficiency. In regard to recommender systems adoption and intent to purchase, sufficiency of information has been noted as an important interface feature of recommendations. Users prefer that recommender systems result pages provide enough information (e.g. price, quantity, image, user reviews, etc.) with which to make a decision (Pu et al. 2011).

  6. User Expertise. User expertise becomes a consideration for recommender systems when the user’s knowledge about the product is important in terms of preference elicitation. If users are unfamiliar with the product, they may answer questions inappropriately or inaccurately, leading to poor recommendations. On the other hand, if they are highly familiar with a product, the questions asked may produce more stable and defined preferences, leading to higher quality recommendations (Xiao and Benbasat 2007). Understanding and accounting for user expertise in recommender systems interface design can influence users’ perceptions and satisfaction as well as their intent to provide feedback (Pu et al. 2011).

  7. User Knowledge. User knowledge in recommender systems may refer either to the user’s knowledge surrounding the recommender system’s products or to the recommender system’s knowledge about the user. Recommender systems’ role as “infomediaries” necessitates that they assist users in sifting through large volumes of data in order to allow them to make high-quality decisions under knowledge constraints (Pu et al. 2011). Knowledge about the user, in the form of demographic or specific information about their needs, is required in order to make quality recommendations (Burke 2002).

  8. Perceived Value. In information retrieval literature, value is associated with a document’s query relevance and similarity to other documents retrieved (Kaminskas and Bridge 2017). How well a recommender system addresses a user’s personal needs may influence the perceived value of the system’s preference elicitation process. It also affects the ability of the system to persuade users that a recommendation is of use (Gretzel and Fesenmaier 2006).

  9. Persuasiveness. Persuasiveness is the ability of a recommender system to persuade a user to accept a recommendation. Persuasive recommender system interfaces may result in improved performance (Herlocker et al. 2000). Gretzel and Fesenmaier (2006) note that cues from the preference-elicitation process, such as relevance, transparency, effort, perceived enjoyment and perceived value, influence the persuasiveness of a recommender system. Providing information sufficiency and explanation are also viewed as means of increasing the persuasiveness of a recommender system (Schafer et al. 2007). In e-commerce systems, the persuasiveness of a recommender system may be measured in terms of the average increase in sales (Tintarev and Masthoff 2007).

  10. Customer Lifetime Value (CLV). Customer lifetime value is an economics-oriented view of the customer that attempts to establish the value of a customer to the business through an estimation of the potential revenue derived from the customer over the customer’s estimated tenure with the company (Tuzhilin 2012). CLV can be defined as the sum of the net present value of a customer’s future cash flows (Park and Han 2014). With regard to recommender systems, CLV is associated with business value, which may be increased through cross-sell opportunities and improved customer retention (Lu et al. 2012).
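
To make the “sum of the net present value of future cash flows” definition concrete, here is a minimal Python sketch; the cash-flow amounts, tenure and discount rate are invented for illustration, not drawn from the cited studies.

    def customer_lifetime_value(cash_flows, discount_rate):
        """CLV as the sum of the net present value of expected per-period cash flows."""
        return sum(cf / (1 + discount_rate) ** t
                   for t, cf in enumerate(cash_flows, start=1))

    # Example: $100 of net revenue per year over a 3-year estimated tenure,
    # discounted at 10% per year.
    print(round(customer_lifetime_value([100, 100, 100], 0.10), 2))  # 248.69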

  11. Loyalty. In evaluating user loyalty to a recommender system, the metric generally used is the number of times a user reuses the recommender system or shares it with their friends. Loyalty can also be determined by the acceptance or purchase of recommended items, which implies a degree of recommendation quality (Pu et al. 2011).

  12. Retention. Retaining customers over a period of time is a key driver for obtaining business value from a recommender system. Retention rates are calculated by dividing the number of users who repeatedly engage with a recommender system (usually measured through transactions) over a time period by the total number of users. Improving engagement, measured by the time users spend viewing content and/or the transactions they enter into as a result, is correlated with improving retention (Gomez-Uribe and Hunt 2016).
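
The retention-rate calculation described above reduces to a simple ratio; a minimal Python sketch, with counts invented for illustration:

    def retention_rate(repeat_users, total_users):
        """Share of users who repeatedly engaged (e.g. transacted again) in a period."""
        return repeat_users / total_users

    # Example: 1,200 of 5,000 users transacted again within the period.
    print(retention_rate(1200, 5000))  # 0.24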

  13. User Churn. Churn in recommender systems usually refers to the issue of user or customer churn, where users quickly enter and leave the system, never to return. User churn can have several deleterious effects on recommender systems implementations. For one, infrequent users can create noise in the form of inconsistent ratings because they may be either 1) not invested in providing accurate feedback to the system or 2) maliciously providing inaccurate ratings (O’Mahony et al. 2006), thereby seeding the system’s filtering algorithm with potentially inaccurate data. Second, high levels of customer churn may mean that 1) users are not finding value in the recommendations provided, 2) users are unwilling to provide the effort required by the system, or 3) users are generally dissatisfied. In any case, high customer churn negatively impacts e-commerce recommender systems’ customer retention, loyalty and, ultimately, profitability (Wang et al. 2009).

  14. Effectiveness. Effectiveness in recommender systems is most often associated with the ability of a recommender system to match users with items (Linden et al. 2003). The effectiveness of a recommender system may be seen as 1) its ability to match user preferences with items in the catalog, 2) the methods used to explain and support the recommendation, and 3) the range of functionalities available to support the user in their decision (Ricci 2002). It has also been associated with accuracy measures such as precision and recall (Tintarev and Masthoff 2007). The effectiveness of e-commerce applications, many of which incorporate recommender systems technologies, is usually measured in terms of click-through rate and conversion of user visits to actual sales (Linden et al. 2003; Ge et al. 2010).
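
Because effectiveness is often operationalized through precision and recall, here is a minimal set-based Python sketch of both measures over one user's top-N list; the item identifiers are invented for illustration.

    def precision_recall(recommended, relevant):
        """Set-based precision and recall for one user's top-N recommendation list."""
        hits = len(set(recommended) & set(relevant))
        precision = hits / len(recommended) if recommended else 0.0
        recall = hits / len(relevant) if relevant else 0.0
        return precision, recall

    # Example: 2 of the 5 recommended items were actually relevant to this user,
    # giving precision 2/5 = 0.4 and recall 2/3 ~= 0.67.
    print(precision_recall(["a", "b", "c", "d", "e"], ["b", "e", "x"]))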

[Figure b: Concept Map screenshot]

1.1.2 Theme: User Acceptance

  15. User Acceptance. The concept of user acceptance in the field of IS has long been associated with the Technology Acceptance Model (TAM) (Davis 1989). Along with the attributes generally associated with technology acceptance, such as perceived ease of use and intention to use, a number of factors may contribute to a user’s acceptance of a recommender system, including transparency, explanation, diversity, trust, and user satisfaction (Kaminskas and Bridge 2017; Xiao and Benbasat 2007). With regard to recommender systems, user acceptance may also refer to the user’s acceptance of the recommendations provided by the system. Explanations providing transparency as to how recommendations are derived may improve user acceptance of recommendations (Castagnos et al. 2013).

  16. Perceived Usefulness. In order to provide a satisfying user experience, factors beyond accuracy, such as usefulness, need to be considered in recommender systems development (Herlocker et al. 2004). A recommendation may provide objective utility in that it accurately meets a set of requirements, whether or not it optimally matches the user’s desired use. Usefulness, however, connotes a recommendation that the user subjectively finds matches their context or improves their performance. Perceived usefulness is seen as the extent to which a user anticipates that use of a recommender system will comparatively improve their performance over their experience without one (Pu et al. 2011). Pu et al. (2011) offer two aspects of e-commerce recommender systems usefulness: 1) decision support, which measures the extent to which users feel the recommender system improves their ability to make decisions, and 2) decision quality, which can be measured by the level of confidence the user has that the correct choice has been made with the help of a recommender system.

  17. Perceived Fit. Perceived fit has been defined as the user’s impression that a recommender system is able to provide a recommendation that satisfies their personal needs and desires. Perceived fit is linked to a user’s evaluation of how well a recommender system captures their preferences (Gretzel and Fesenmaier 2006).

  18. User Satisfaction. User satisfaction is associated with loyalty and the likelihood of repeated use of a recommender system (Pu et al. 2011). There appears to be a tradeoff between the accuracy and the diversity of recommendations with respect to user satisfaction. Recommender systems deemed highly accurate may rate lower among users than systems that provide a more diverse result set. Users tend to prefer large and varied recommendation sets; however, recommender systems that provide large sets of recommendations that are all attractive appear to increase the number of difficult trade-offs, resulting in increased choice difficulty and lower user satisfaction (Bollen et al. 2010).

  19. User Experience. User experience (UX) in recommender systems relates to the user’s subjective response to a recommender system. Because of its subjective nature, UX evaluation techniques are numerous, and there is much debate about what constitutes a “good” user experience. A distinction has been made between behaviors and the attitudes behind those behaviors, positing that users take both pragmatic and pleasurable elements into account. Personal and contextual attributes may also come into play in determining whether a user has a satisfactory or pleasurable experience. Contrary to popular research evaluation methods, algorithm accuracy does not always lead to a better user experience (Knijnenburg et al. 2012).

  20. Presentation/User Interface. Presentation of recommendation results is a part of user interface development and falls under the UX and human-computer interaction (HCI) areas of IS research. Simple, intuitive user interfaces that provide easy navigation enhance the user experience and lead to improved user satisfaction (Lu et al. 2012). For Netflix, page construction has become an area of focus, with A/B testing being carried out so as to optimally address users’ diverse moods, needs, contexts and situations in a way that complements personalized ranking of results (Gomez-Uribe and Hunt 2016).

  21. Result Explanation/Interpretability of Results. Understanding why a particular result or recommendation set has been returned helps to improve user trust in recommender systems (Hu et al. 2008). Further, result explanations can assist in refining results by providing the reasoning behind a recommendation and allowing the user to modify their query in recommender systems that allow for some degree of flexibility (Herlocker et al. 2000). A correlation exists between the persuasiveness of a recommender system and its ability to explain why the recommendation is the correct one (Schafer et al. 2007). Explanations may also increase a user’s perception of system competence (Knijnenburg et al. 2012).

  22. Scrutability. A scrutable recommender system is one that allows users not only to understand the reasoning behind a recommendation, but also to adjust the parameters that drive recommendation results. Scrutability is associated with the usability principle of user control (Tintarev and Masthoff 2007). Scrutability in recommender systems has been offered as a way of achieving the multiple goals of both system designer and user by offering a more understandable and modifiable user experience.

  23. Flexibility. Many recommender systems implementations have fixed recommendation techniques that cannot be altered by the user (Adomavicius and Tuzhilin 2005). For instance, CF systems generally rely upon architectures tailored to recommend specific types of items, often through the implementation of a single clustering algorithm (McDonald and Ackerman 2000). The ability for a user to refine their preferences, and for a recommender system to interactively adapt to their current needs and take contextual factors into account, can improve overall recommendation quality (Mahmood and Ricci 2007; Xiao and Benbasat 2007). Koutrika et al. (2009) provide a novel recommendation framework, FlexRecs, that allows for the parameterization of workflows to generate flexible recommendations.
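
To illustrate the general idea of parameterized recommendation workflows (an invented sketch of the concept, not FlexRecs' actual operators), a recommender can expose its filtering and ranking steps as swappable parameters:

    from typing import Callable, Iterable

    def recommend(items: Iterable[dict],
                  score: Callable[[dict], float],
                  keep: Callable[[dict], bool] = lambda item: True,
                  n: int = 5) -> list:
        """Filter, then rank: the caller composes the workflow per request."""
        candidates = [item for item in items if keep(item)]
        return sorted(candidates, key=score, reverse=True)[:n]

    # Example: the same catalog served by two differently parameterized workflows.
    catalog = [{"title": "A", "rating": 4.2, "year": 2019},
               {"title": "B", "rating": 4.8, "year": 2001}]
    print(recommend(catalog, score=lambda i: i["rating"]))          # rank by rating
    print(recommend(catalog, score=lambda i: i["rating"],
                    keep=lambda i: i["year"] >= 2010, n=1))         # recent items only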

  24. User Control. User control of information systems is one of the five measures of the Software Usability Measurement Inventory (SUMI) framework (the others being efficiency, affect, helpfulness and learnability) used by developers to assess the quality of a software system (Kirakowski and Corbett 1993). User control refers to a user’s ability to influence their interactions with, and the results derived from, a recommender system. User control has been shown to improve user trust and satisfaction (Xiao and Benbasat 2007).

  25. Configurability. In a number of recommender systems instances, highly customizable products may be recommended. For example, purchasing a computer may require the selection of a number of component attributes before recommendations can be made. Allowing the user to select the attributes and components of a personal computer is an instance where the configurability of recommendation results is essential for user satisfaction and intention to transact (Felfernig and Burke 2008).

  26. Adaptivity. Adaptivity addresses the recommender systems challenges of personalization and flexibility. Adaptivity denotes a recommender system’s ability to adapt to changing user preferences over time or in different contextual situations (Khan et al. 2017). Conventional recommender systems filtering methods are hard-coded into the system and are unable to adapt to user interaction. Adaptive recommender systems employ non-rigid recommendation policies representing user-computer interaction strategies that allow the user to modify their queries in an iterative manner to improve results. In this way, the recommender system is able to modify its strategy autonomously and make decisions regarding whether to obtain additional information from the user or to provide a set of results (Mahmood and Ricci 2007).

  27. Personalization. One of the main foci of a recommender system is to provide personalized recommendations by customizing the results it provides to the unique preferences of the user (Malone et al. 1987). Personalization in RS is generally derived through similarity metrics (e.g. similarity with other users or other items) (Tintarev and Masthoff 2007). In order to achieve personalization of recommendations, an RS must obtain information about users’ preferences. The most straightforward way to do this is through explicit feedback (e.g. asking users for ratings) (Rashid et al. 2002). Providing unusual or surprising items within recommendations through diversity, novelty and serendipity has been offered as a means of achieving personalization in RS (Castells et al. 2011; Ge et al. 2010; Herlocker et al. 2004; Kaminskas and Bridge 2017; McNee et al. 2006).
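
Since personalization is generally derived through similarity metrics, here is a minimal Python sketch of the cosine similarity commonly used to compare two users' (or items') rating vectors; the rating values are invented for illustration.

    import math

    def cosine_similarity(u, v):
        """Cosine similarity between two equal-length rating vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norms if norms else 0.0

    # Example: two users' ratings over the same five items (0 = unrated).
    print(round(cosine_similarity([5, 3, 0, 4, 1], [4, 2, 1, 5, 0]), 2))  # ~0.95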

  28.

    Attractiveness. Often with RS, in order to lower choice difficulty and increase the likelihood of user selection, recommendations must not only be accurate and novel, but also attractive to the user. Attractive recommendations are ones that are “capable of stimulating users’ imaginations and evoking a positive emotion of interest or desire” (Pu et al. 2011). To be attractive, a recommendation should be personalized to a user’s tastes (Vargas et al. 2014). Diversity is one factor that has been shown to increase choice satisfaction and contribute to item recommendation attractiveness (Knijnenburg et al. 2012).

  29.

    Relevance. Over the years, there has been much disagreement in the field of information retrieval as to a clear definition of relevance, with many evaluation methods utilizing an objective query-based approach rather than a user preference perspective (Herlocker et al. 2004). This, of course, is contrary to the recommender systems purpose of personalization of results. From a user preference point of view, whether or not an item is relevant is a difficult thing for a recommender system to predict. Obtaining a true measure of relevance would require that the user rate all items. Obviously, with all but the smallest of data sets, this would be impractical, if not impossible. Additionally, what is relevant for a user may change due to temporal conditions or other contextually related attributes. For this reason, relevance is most often determined by subjective inference along the lines of what is appealing and useful (Yang et al. 2014).

  30.

    Usability. Usability is associated with increased user trust in a recommender system (Ricci 2002). Usability has been widely studied in the area of Human-Computer Interaction (HCI) and e-commerce Web design (Matera et al. 2006; Nielsen 1993). The general concept of usability refers to the appropriateness of a particular artifact to its purpose (Brooke 1996). The ISO Ergonomics of HCI standard defines usability as being comprised of 1) effectiveness: the accuracy and completeness with which specified users can achieve specified goals in particular environments, 2) efficiency: the resources expended in relation to the accuracy and completeness of goals achieved, and 3) satisfaction: the comfort and acceptability of the work system to its users and other people affected by its use (ISO 9241-11: 2018).

  31.

    Efficiency. The purpose of recommender systems is to make the item search and selection process more efficient for users by reducing cognitive overload through information filtering. The efficiency of a recommender system is associated with usability and indicates how quickly a recommendation task can be completed through user interaction (Tintarev and Masthoff 2007).

  32.

    User Needs. Understanding a user’s needs is key to recommender system quality. Recommendations that address the user’s needs demonstrate the personalization required of a recommender system. This may be accomplished through the questions asked and explanations provided by the recommender system’s interface (Komiak and Benbasat 2006).

  33.

    Usefulness. In order to provide a satisfying user experience, factors beyond accuracy, such as usefulness, need to be considered in recommender systems development (Herlocker et al. 2004). A recommendation may provide objective utility in that it accurately meets a set of requirements, whether or not it optimally matches the user’s desired use. Usefulness, however, connotes a recommendation that the user subjectively finds matches their context and desired use.

  34.

    Utility. Multi-Attribute Utility Theory (MAUT) posits that a person’s decisions are generally determined by a number of attributes which are evaluated by their impact on maximization of utility for the individual (Dyer et al. 1992). Utility-based recommender systems attempt to compute the utility of an item for a user (Burke 2002). Utility refers to the fit of recommendations to a user’s requirements. In order to provide utility-based recommendations, a recommender system needs to gather a sufficient amount of information about the user’s needs, which may lead to the effort versus accuracy trade-off for the user (Huang 2011).
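
    To make the weighted-additive idea behind MAUT concrete, the following minimal sketch ranks candidate items by a weighted sum of normalized attribute scores. The items, attribute values and weights are hypothetical and purely illustrative, not drawn from the cited works.

```python
# Minimal MAUT-style scoring sketch: U(item) = sum_j w_j * v_j(item).
# Attribute values are assumed already normalized to [0, 1].

def utility(item_attrs, weights):
    """Weighted-additive utility of one item for one user."""
    return sum(weights[attr] * value for attr, value in item_attrs.items())

# Hypothetical laptops scored on price, performance and portability.
items = {
    "laptop_a": {"price": 0.9, "performance": 0.4, "portability": 0.8},
    "laptop_b": {"price": 0.5, "performance": 0.9, "portability": 0.3},
}
weights = {"price": 0.5, "performance": 0.3, "portability": 0.2}  # user's priorities

ranked = sorted(items, key=lambda i: utility(items[i], weights), reverse=True)
print(ranked)  # items ordered by estimated utility for this user
```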

  35.

    Perceived Ease of Use. Perceived ease of use (PEOU) is one of the factors associated with Davis’ (1989) seminal work on intention to use from the technology acceptance model (TAM) (Davis 1989; Davis et al. 1989). It refers to the perception held by a user of the anticipated effortlessness, speed and efficiency with which they would be able to accomplish a task utilizing a specific technology. Ease of use in recommender systems has been evaluated using objective task completion time; however, it has been suggested that perceived ease of use may be a more appropriate measure (Pu et al. 2011).

  36.

    User Effort. In order to provide personalized results, an understanding of the user’s preferences is required. However, the amount of effort a user must expend may affect their willingness to provide the preference information required for explicit ratings. This is known as the effort versus accuracy challenge. User effort is usually measured by decision time and the extent of product search (Xiao and Benbasat 2007).

  37.

    Decision Making. A key function of recommender systems is to assist users with making choices and decisions. Recommender systems are valuable tools for helping users cope with information overload. Research has produced several different models of human decision making. Traditional economic models posit that humans make decisions in an outcome-optimizing manner through a rational evaluation process. However, research has shown that humans often make choices that depart from rational models due to the inherent instability and temporal modification of preferences (Simon 1955). Context, goals, constraints and personal experience have all been shown to exert a strong influence on decision making (Warren et al. 2010). In the “effort-accuracy” decision making framework, decision processes are seen as a contextually adaptive trade-off between decision-making effort and the accuracy of the outcome (Payne et al. 1993).

  38.

    Cognitive Effort. Recommender systems developers must also take into consideration the psychology of users in making decisions (Felfernig and Burke 2008). One of the goals of recommender systems is to reduce the cognitive overload of wading through numerous choices. Understanding the interplay between the effort users are willing to expend, the limits of cognitive ability in decision-making contexts, and user satisfaction is critical to successful recommender systems development (Gretzel and Fesenmaier 2006).

  39.

    Choice/Information Overload. Choice overload describes the difficulty users have when presented with too many choices. When asked in the abstract, most people indicate that they want to see all of the available choices, anticipating that more choices will help them make a better decision. In practice, however, research has shown that users are actually more satisfied with fewer choices (Iyengar and Lepper 2000). Users are generally more satisfied with a larger number of choices when the selections are of high quality and choosing between them requires less cognitive effort. Research is still required to determine the optimal number of top-N choices (Bollen et al. 2010).

  40.

    Complexity. Some recommender systems may contain items of varying levels of complexity that the user may find difficult to understand. Designing a recommender system in such a way as to reduce the user’s cognitive effort and improve decision quality may improve overall user satisfaction. Recommender systems that provide clear, comparative and detailed explanations of recommendations may be found more useful and satisfactory in these complex scenarios (Xiao and Benbasat 2007).

  41.

    Performance Expectancy. Performance expectancy is a key construct in the Unified Theory of Acceptance and Use of Technology (UTAUT), which posits that intention to use and accept an IT artifact is affected by a user’s expectations of the system’s performance. One way to manage performance expectancy in recommender systems is to set user expectations through communication regarding the limitations and requirements of the system (e.g. the kind and amount of user input required in order to obtain accurate, relevant results) (Xiao and Benbasat 2007).

figure c

1.1.3 Theme: User Preference

  42.

    User Preference. Because personalization is a key differentiator between recommender systems and simple IR, user preference is an important topic in recommender systems research (Balabonovic and Shoham 1997; Burke 1999; Adomavicius and Tuzhilin 2005). User preference provides a recommender system with information with which to perform filtering (Bell and Koren 2007). Optimal methods for ascertaining a user’s preference have been the subject of a number of research articles in recommender systems. Two forms of preference elicitation mechanisms are explicit and implicit feedback (Shani et al. 2005). Preferences gathered explicitly may be in the form of user ratings, comments or by simply asking the user to state their preferences up front, or interactively during a recommendation session. Implicit preferences are those implied by user actions, such as purchasing a recommender systems catalog item or inferred through click or browsing history.

  43.

    Feedback/Preference Elicitation. Recommender systems rely on various types of input in order to ascertain user preferences. User preferences serve as feedback for a recommender system. Feedback is derived either from implicit knowledge about the user or explicit knowledge given by the user. User ratings of items serve as explicit feedback. Purchase history or click-through are means of collecting implicit feedback (Hu et al. 2008).

  44.

    Time Degradation/Decay. User preference profiling is obtained by a recommender system through explicit and/or implicit feedback over time (Adomavicius and Tuzhilin 2005). Time degradation or decay refers to the temporal context of profile data and the loss of accuracy of the data captured as user preferences may change over time. Weighting of profile data has been utilized as a means of dealing with time degradation by assigning increasing weights to newer user feedback (Campos et al. 2014).
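
    A minimal sketch of such time-decayed weighting, assuming a simple exponential decay with an arbitrary 90-day half-life (the decay function and its parameter are illustrative choices, not prescribed by the cited works):

```python
def decay_weight(age_days, half_life_days=90.0):
    """Exponential decay: a rating's weight halves every half_life_days."""
    return 0.5 ** (age_days / half_life_days)

def decayed_mean_rating(ratings):
    """Weighted mean of (rating, age_in_days) pairs; newer ratings count more."""
    num = sum(r * decay_weight(age) for r, age in ratings)
    den = sum(decay_weight(age) for _, age in ratings)
    return num / den if den else None

# One user's ratings in a category: (rating, days since the rating was given).
print(decayed_mean_rating([(5, 400), (2, 30), (1, 5)]))  # recent low ratings dominate
```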

  45.

    Explicit Feedback. Explicit feedback is the preferred method whereby users indicate their preferences directly through ratings, question answering, critiques, weighting of item attributes or indication of specific needs (Knijnenburg et al. 2012). However, explicit feedback comes with costs in terms of user effort, too much of which may cause users to abandon a recommender system (Rashid et al. 2002). Additionally, in certain systems explicit feedback may not be available.

  46.

    Sentiment Analysis. Much of the work in classification for recommender systems has been done in the area of topic categorization. However, sentiment analysis of natural language is a growing area of research that allows for a deeper understanding of more subtle classification clues than mere keywords. Interpreting user feedback in the form of text (e.g. comments accompanying movie ratings) can provide insight into user acceptance of recommendations and recommendation quality. This requires a deep understanding of natural language, which may be provided through sentiment analysis (Pang et al. 2002). Zhang notes the use of deep learning models for understanding textual information, which may be appropriate for sentiment analysis (Zhang et al. 2017).

  47.

    Ratings. Recommender systems ratings are the scores given to items by users and are used to predict future user preferences. CF recommender systems utilize peer opinions, in the form of ratings, to predict the interests of the target user. CB recommender systems analyze ratings information associated with items and generate predictions based on the similarities between items. Ratings may be obtained explicitly or implicitly. Explicit ratings often use a five-star Likert scale. Implicit ratings, on the other hand, use attributes such as click-through patterns, browsing and purchase history to predict user preferences. Methods to infer predictions about ratings are an attempt to complete the user-item ratings matrix of a recommender system. Model-based approaches use user-item ratings matrices to learn user similarities through such methods as Bayesian classifiers, neural networks, and genetic algorithms to predict user preferences; memory-based approaches use similarity metrics to directly predict ratings by expressing the distance between users or items in the matrix (Shani et al. 2005; Yang et al. 2014).
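
    As an illustration of the memory-based approach, the following sketch predicts a missing cell of a toy user-item matrix as a similarity-weighted average of other users’ ratings. Cosine similarity over co-rated items is used here as one of many possible similarity choices.

```python
import numpy as np

R = np.array([          # toy ratings matrix: rows = users, cols = items, 0 = unrated
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 4, 4],
], dtype=float)

def cosine(u, v):
    """Cosine similarity computed over items both users have rated."""
    mask = (u > 0) & (v > 0)
    if not mask.any():
        return 0.0
    a, b = u[mask], v[mask]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def predict(user, item):
    """Similarity-weighted average of other users' ratings for `item`."""
    sims = np.array([cosine(R[user], R[v]) if v != user else 0.0
                     for v in range(R.shape[0])])
    rated = R[:, item] > 0
    denom = np.abs(sims[rated]).sum()
    return float(sims[rated] @ R[rated, item] / denom) if denom else None

print(predict(user=1, item=2))  # estimate user 1's unknown rating for item 2
```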

  49.

    Grey Sheep/Black Sheep/Straddlers. Within most recommender systems there exist users whose tastes are eclectic or unusual. In these cases, pure CF systems struggle to provide accurate results because majority preferences do not agree with those of these grey sheep users, whose opinions differ from the rest of the community. Hybrid methods have been employed to improve results where distinctive users exist (Claypool et al. 1999; Balabonovic and Shoham 1997). Black sheep refers to users whose tastes are so completely different from other users that no similarity to other users may be calculated. This may be an acceptable situation for recommender systems developers, as the same one-off situation occurs with manual recommendations as well (Su and Khoshgoftaar 2009).

  50.

    Value of Information (VOI). One of the costs associated with obtaining feedback in CF systems is that users may tire of the effort of continuously providing it. VOI is a method of enhancing explicit and implicit feedback by identifying, through cost-benefit analysis, the most valuable context-dependent information to acquire from a user during initial or subsequent visits, improving the recommender system’s ability to differentiate preferences between users (Pennock et al. 2000). “For instance, knowing that a user likes the universally-popular movie “Toy Story” reveals less than knowing that she likes “Fahrenheit 9/11,” which has a higher level of diversity among users’ opinions. This is the basis of an idea proposed by Pennock and Horvitz that says if one can calculate how useful a given piece of information is (a value-of-information or VOI metric), then one can tune a system to optimize its data collection process by soliciting user preferences on items that have the most value” (Lam et al. 2006).

  51.

    Free-Ride. Since many recommender systems rely on user ratings to make recommendations, it is important that users have some incentive for rating items. For a large number of users, due to either sloth or simply insufficient time and interest to rate items for others, there may be a tendency to “free-ride” on the evaluations of other users without giving any ratings of their own. This tendency leads to too few evaluations for a recommender system to make effective recommendations (Avery and Zeckhauser 1999).

  52.

    Incentives. One problem a recommender system may have is in obtaining ratings from users; this is especially problematic for pure CF systems when items are newly introduced into the system. There may be a tendency for users to “free-ride” on the recommendations of others without providing any of their own. Other than relying on the altruism of users to rate items for the benefit of others, incentives may play a role in encouraging users to provide ratings. Several suggestions have been offered in the literature to deal with the “free-ride” issue, including 1) subscription services, 2) pay-per-use, 3) compensation for ratings, and 4) exclusion from recommendations (Avery and Zeckhauser 1999; Resnick and Varian 1997).

  53.

    Popularity. An often-used filtering strategy in recommender systems is to utilize item popularity as a method of producing recommendations. The rationale is that items that are popular tend to be liked by most users (Rashid et al. 2002). Popularity-based methods are prone to the long-tail issue wherein new or seldom-rated items are never recommended (Ho et al. 2014). Because diversity and novelty have been shown to improve user satisfaction, popularity-based approaches may not be useful or personalized means of achieving recommendation diversity and overall quality (Adomavicius and Kwon 2011; Kaminskas and Bridge 2017).

  54.

    Implicit Feedback. Recommender systems may infer user preference from implicit feedback by observing user behavior through online activities such as purchase history and search or mouse-click patterns (Hu et al. 2008). Implicit feedback may also be used to enrich explicit feedback for a more accurate assessment of user preference. With implicit feedback, the system analyzes the user’s behavior patterns, such as which items they purchase, to develop a profile of the user (Knijnenburg et al. 2012).

  55.

    Uncertainty. Uncertainty in recommender systems refers to a lack of assurance in the quality of a rating value. This occurs when there are few explicit ratings with which to gauge user preference. Collection and aggregation of multiple variables of implicit ratings that are predictive is seen as a means of reducing uncertainty in recommender systems (Schafer et al. 2007).

  56.

    Entropy. Entropy quantifies the uncertainty surrounding user preference information in recommender systems literature and relates to the ratings sparsity issue. As a recommender system receives more information about a user’s preferences, uncertainty about predictive accuracy should decrease. Because entropy is the opposite of predictability, predictability may be measured through entropy. Entropy treats all rating levels as equal, without regard to the numerical value of the actual rating. Further, items with few ratings have high entropy and give little predictive power in determining which users to associate with a given user, and thereby little help in providing quality recommendations. However, when combined with item popularity, entropy may increase predictive accuracy in recommender systems (Rashid et al. 2002).
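
    A minimal sketch of rating entropy, treating rating levels as unordered categories exactly as described above (toy data; Shannon entropy in bits):

```python
import math
from collections import Counter

def rating_entropy(ratings):
    """Shannon entropy of an item's rating distribution (higher = less predictable).
    Rating levels are treated as categories, ignoring their numeric values."""
    counts = Counter(ratings)
    n = len(ratings)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(rating_entropy([5, 5, 5, 5]))     # 0.0: unanimous, fully predictable
print(rating_entropy([1, 2, 3, 4, 5]))  # ~2.32: maximally uncertain
```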

  57.

    Personality Type. The personality type of a user has been offered as an additional dimension for CF recommender systems in determining the similarity weighting of user preferences. Assuming that a user’s preferences are a manifestation of their personality type, a probability model may be derived to predict whether or not users will like an item previously liked by other users with the same personality type (Pennock et al. 2000).

  58.

    Decision Styles. Recommender systems researchers have shown that personal decision styles may influence decision making and should be taken into consideration in recommender systems development. For instance, with travel recommender systems, a user’s decision style may affect whether they prefer manual selection from a list of destinations or inspiration through an icon-based interface (Ricci 2002). Research on decision making in recommender systems argues for flexibility and configurability in user interface design to accommodate decision styles: the varied ways humans make decisions depending on factors such as context, goals, constraints and experience.

  59.

    Demographics. Making accurate, relevant and useful recommendations requires knowledge of user preferences. In many cases, recommender systems suffer from a lack of knowledge surrounding user preferences due to small numbers of users, limited ratings and insufficient similarity between users. Obtaining additional user attributes such as age, gender and income through preference elicitation feedback mechanisms may improve a recommender system’s ability to cluster certain users together for the purpose of making more precise recommendations. Demographics have been considered as a means of making recommendations based on the types of users who have rated items within a recommender system (Krulwich 1997). Researchers have noted that utilizing user demographics is one method for addressing the rating sparsity challenge in recommender systems, assisting in the calculation of user similarity through demographic segmentation (Burke 2002; Adomavicius and Tuzhilin 2005).

1.2 System Issues in Recommender Systems

The following issues are most often associated with recommender systems themselves. Although incorporated in the recommender systems concept map, we will skip the various recommender systems types and algorithms, as they have been extensively covered in other literature surveys (Sarwar et al. 2000b; Adomavicius and Tuzhilin 2005; Bobadilla et al. 2013). As with the user issues described above, we will provide segments of the concept map as headings to orient the reader with regard to the issue nodes covered, starting at the bottom of the left side of the concept map and working upwards.

figure d

1.2.1 Theme: Security

  60.

    Security. Security has been associated with user trust in recommender systems (Komiak and Benbasat 2006). In the literature, security in recommender systems has been addressed in terms of 1) maintaining the privacy of sensitive personal information, and 2) manipulation of user ratings to influence item recommendations (Lam et al. 2006). As with any online communication system that involves storage of personal information, security is an issue that must be addressed in recommender systems development (Malone et al. 1987). Several frameworks currently exist to address the issue of stored data security (Verbert et al. 2012). Even if stored personal information is kept secure, however, malicious users may still attempt to manipulate recommendations. Identification and removal of malicious noise, in the form of illegitimate ratings, prevents unethical users from biasing recommendation results (Mahoney et al. 2006). The main types of attacks on recommender systems are known as product push and product nuke, whereby a user attempts to promote (“push”) their own product or damage (“nuke”) ratings of competing products (Burke et al. 2006). To address these issues, researchers have developed security techniques using encryption and shared keys (Schafer et al. 2007).

  61.

    Risk. In order to provide more accurate results, recommender systems attempt to collect more information from users regarding their preferences. Risk in recommender systems is thus associated with the unauthorized exposure of user data, especially personally identifiable data. With the burgeoning problem of identity theft, securing user data has become a critical issue for information systems in general (Lam et al. 2006).

  62.

    Personally Identifiable Information. Loss of control over the privacy of personally identifiable information (PII) has become a critical issue for today’s consumers. Consumers are concerned, with good reason, that providers will sell their personal information to third parties without their knowledge. Feelings over this loss of control range from tolerance to resignation and disgust (Hoffman et al. 1999). With recommender systems, there is a trade-off for users between privacy and accuracy of results. Recommender systems require preference information from the user, and the more information a recommender system has, the more accurate its recommendations become. However, in order to encourage users to divulge information about themselves and their preferences, a recommender system needs to assure users that their personal information 1) will remain private from third parties, 2) will be used for the user’s purposes, and 3) will provide value to the user (Schafer et al. 2007; Shani and Gunawardana 2011).

  63.

    Noise. Separating the signal from the noise refers to filtering out irrelevant data so that the true signal or message can be received (Tuzlukov 2010). Because most recommender systems are open systems, it is likely that certain users may insert inaccurate preferences (representing noise or unwanted information), either inadvertently or intentionally, in the form of feedback to the system. This situation, of course, may lead to inaccurate ratings and recommendations. Noise in recommender systems has been classified into two categories, natural noise and malicious noise. Natural noise represents inaccurate explicit feedback obtained in error from users or through misinterpretation of implicit feedback. Malicious noise is intentionally inserted into the system by means of explicit feedback in an attempt to bias the system either in favor of or against a particular item or product (Mahoney et al. 2006).

  64.

    Manipulation. Manipulation of results is obviously an issue of importance in e-commerce applications where a malicious user wishes to improve the ranking of particular items in order to boost sales (“push”) or dampen sales of a competitor (“nuke”) (O’Mahoney et al. 2004). Many of these attacks involve “profile injection” or “shilling”, whereby users create multiple accounts aimed at biasing the overall ratings of items within a recommender system (Burke et al. 2006). Mobasher et al. (2007) identify six attack models for manipulation of recommender systems: 1) random attack – to boost the trust and impact of their fake profile and avoid detection, the attacker selects random filler items and rates them in accordance with the global mean of ratings within the system, while giving a biased rating to the target item(s); 2) average attack – the attacker rates the filler items at their individual means, while giving the target item(s) their own biased rating; 3) bandwagon attack – somewhat like the random attack, the attacker selects a small number of frequently rated, highly visible filler items to rate in order to blend in with a large number of users; 4) segment attack – the malicious user selects filler items in the same segment or genre as the item they wish to manipulate, rating them along with the target item(s); 5) love/hate attack – the attacker randomly selects filler items and gives them high ratings while giving the target item(s) low ratings; and 6) reverse bandwagon attack – the attacker selects items rated poorly by users as filler items while giving the same low rating to the target item(s) (Mobasher et al. 2007). The first four attacks are designed to work as either push or nuke attacks, while attacks 5 and 6 are nuke attacks. These attacks are highly effective against pure CF recommender systems and may be utilized against CB recommender systems as well. Hybrid systems may provide certain defenses against such attacks. Resnick describes a recommender systems algorithm that limits the influence of dishonest raters by modifying ratings based on user reputation and credibility (Resnick and Sami 2008).

  65.

    Robustness. Robustness in recommender systems refers to the ability of a recommender system to provide stable recommendations in the presence of various kinds of attacks (see “Noise” and “Manipulation”). False information in the form of fake user accounts and ratings may be provided to a recommender system in an attempt to influence recommendations. Robust recommender systems algorithms are able to detect such attacks and not be unduly influenced by them. Understanding the vulnerabilities of particular algorithms and the various attack methods may be an effective deterrent to such attacks (Mobasher et al. 2007).

figure e

1.2.2 Theme: Quality

  66.

    Quality. In recommender systems, quality has been viewed from several different standpoints. Often it is related to the quality of a prediction measured by absolute error, accuracy or coverage. Quality evaluation metrics can be classified as 1) predictive accuracy, 2) coverage, 3) rank recommendation, and 4) diversity metrics. In terms of prediction and recommendation, a quality measure should also be reliable: a prediction based on a large number of users is more reliable than one based on few ratings, and should therefore be of higher quality (Bobadilla et al. 2013). Precision, recall and the F1 measure are the most popular metrics for quality evaluation of recommender systems (Bobadilla et al. 2013). Precision is the ratio of relevant items selected to the total number of items selected; it represents the probability that an item selected by the system is relevant to the user. Recall is the ratio of relevant items selected to the total number of relevant items available (Herlocker et al. 2004). F1 is an accuracy metric from information retrieval evaluation that gives equal weight to both precision (P) and recall (R). Since precision and recall move inversely to one another, the F1 measure is seen as a better indication of overall classification accuracy. Taken together, they represent a measure of recommendation quality (Herlocker et al. 2004).
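
    The following minimal sketch computes precision, recall and F1 for a single top-N recommendation list; the list and the relevance judgments are hypothetical.

```python
def precision_recall_f1(recommended, relevant):
    """Precision = hits / |recommended|, recall = hits / |relevant|,
    F1 = harmonic mean of the two."""
    rec, rel = set(recommended), set(relevant)
    hits = len(rec & rel)
    p = hits / len(rec) if rec else 0.0
    r = hits / len(rel) if rel else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1

# A hypothetical top-5 list scored against the items the user actually liked.
print(precision_recall_f1(["a", "b", "c", "d", "e"], ["b", "d", "f", "g"]))
# -> (0.4, 0.5, 0.444...)
```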

  67.

    Overspecialization/Over-specification/Over-fitting. Because CB recommender systems recommend items based on a user’s preference history, there is a tendency to recommend items similar to those already rated by a user (Adomavicius and Tuzhilin 2005). For example, a user who has never traveled to Spain would never receive recommendations for Spanish hotels, although they may well be of interest to the user. Introducing novel or serendipitous items into the mix is seen as a method of improving the quality of recommendations (Ge et al. 2011). Several techniques have been offered to deal with the overspecialization problem, including social network-based recommender systems, fuzzy recommender systems and context-aware recommender systems (Lu et al. 2015).

  68.

    Similarity. One of the fundamental challenges in information retrieval in general, and recommender systems in particular, is determining similarity. Similarity in recommender systems literature may refer to either the similarity of 1) the rating histories of users or 2) the items recommended. The idea behind similarity-based algorithms is that, in CF, users who have similar interests will often prefer similar recommendations, and in CB, users who prefer particular items will often prefer similar items. Nearest neighbor techniques (k-NN) have often been used for similarity measurement (Sarwar et al. 2000a, 2000b); however, a number of other approaches to determining similarity have been utilized in the literature. Similarity measures have also been used in case-based recommender systems to determine similarity between cases.
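
    As an illustration, the following sketch selects the k nearest neighbors of a target user from a toy ratings matrix. Pearson correlation over co-rated items is used here as one common choice among the many similarity measures mentioned above.

```python
import numpy as np

def pearson(u, v):
    """Pearson correlation over items both users rated (0 = unrated)."""
    mask = (u > 0) & (v > 0)
    if mask.sum() < 2:          # not enough overlap to correlate
        return 0.0
    a, b = u[mask] - u[mask].mean(), v[mask] - v[mask].mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def k_nearest_users(R, user, k=2):
    """Return the k users most similar to `user` (user-based k-NN)."""
    sims = [(pearson(R[user], R[v]), v) for v in range(len(R)) if v != user]
    return sorted(sims, reverse=True)[:k]

R = np.array([[5, 3, 4, 0], [5, 4, 4, 1], [1, 0, 2, 5], [4, 3, 5, 0]], float)
print(k_nearest_users(R, user=0))  # neighbors whose tastes track user 0's
```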

  69.

    Redundancy. Redundancy in recommender systems is related to the overspecialization problem and may be viewed as the opposite of novelty (Kaminskas and Bridge 2017). Similarity assessment is impacted by factors such as limited content analysis, knowledge engineering and polysemy. Latent and abstract features may also interfere with the ability of a recommender system’s filtering methods to determine similarity or diversity among items, leading to redundancy of recommendations (Smyth and McClave 2001). With recommender systems where items may be classified by genre, such as movie recommenders, providing a number of titles from the same genre may be redundant, especially since some genres, such as drama, may be very diverse in nature, while others, such as western, are fairly narrow. Coverage, size awareness and redundancy are worthy considerations for achieving recommendation diversity (Vargas et al. 2014).

  70.

    Diversity. Accuracy of recommendation results does not guarantee useful results. Recommender systems that base recommendations solely on a user’s historical ratings produce a high degree of similarity in recommendations and aggravate the long-tail issue wherein new items are seldom, if ever, recommended. Diversity of recommendations ensures that users are exposed to a variety of results. Diversity may also be considered in terms of genre dissimilarity rather than the dissimilarity between items (Vargas et al. 2014). Two types of diversity have been delineated in the literature: 1) individual diversity refers to the dissimilarity between item pairs recommended to a user; 2) aggregate diversity pertains to coverage and the variety of recommendations over the entire catalog of items available (Adomavicius and Kwon 2011; Muter and Aytekin 2017).
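
    A minimal sketch of individual (intra-list) diversity, computed here as the mean pairwise Jaccard distance between hypothetical genre sets; the dissimilarity function is an illustrative choice.

```python
from itertools import combinations

def intra_list_diversity(items, dissim):
    """Mean pairwise dissimilarity of a recommendation list
    (1.0 = every pair of items is maximally different)."""
    pairs = list(combinations(items, 2))
    return sum(dissim(a, b) for a, b in pairs) / len(pairs)

# Toy data: Jaccard distance between each movie's genre set.
genres = {"m1": {"drama"}, "m2": {"drama", "crime"}, "m3": {"comedy"}}

def jaccard_dist(a, b):
    return 1 - len(genres[a] & genres[b]) / len(genres[a] | genres[b])

print(intra_list_diversity(["m1", "m2", "m3"], jaccard_dist))  # ~0.83
```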

  71.

    Long-Tail. Another challenge for recommender systems is to provide new and useful items to users. Because many recommender systems use popularity-based metrics to filter recommendations, items that are new to the system or have yet to receive any reviews tend not to be presented in the top-N recommendation results (Castells et al. 2011). These items comprise the long tail of recommendation items. Presenting long-tail items in diversified recommendation results is seen as a means to engender customer loyalty and improve the user’s experience by helping them discover novel and serendipitous items (Ho et al. 2014; Kaminskas and Bridge 2017).

  72.

    Superstar. Superstar theory posits that the most popular items tend to be selected more often in a bandwagon effect. Therefore, if steps are not taken to utilize algorithms that include niche or less popular items, a recommender system’s recommendations may become concentrated on the most popular items, referred to as “superstars” (Brynjolfsson et al. 2010).

  73.

    Novelty. Many recommender systems focus on accuracy at the expense of presenting new or surprising items that may be of interest to users (Ge et al. 2010). Novelty in recommender systems denotes the presentation of items different from those users have previously seen in recommendation results (Castells et al. 2011). Making users aware of previously unknown or “non-obvious” items has been put forth in the literature as an open problem as early as 2004 (Herlocker et al. 2004). Novelty is one means of overcoming the overspecialization problem whereby users tend to receive very similar items in their recommendation results (Bobadilla et al. 2013).

  74.

    Serendipity. One of the well-known issues with recommender systems, especially CB recommender systems, is that popular items tend to be recommended more frequently. Therefore, care has to be taken in recommender systems development to ensure that new and rarely rated items are presented as part of recommendation results. Serendipity refers to recommendation results that are not just novel, but also somewhat unexpected, pleasant, and relevant to the user (Kaminskas and Bridge 2017). Serendipity tends to lead to improved recommendation quality (Ge et al. 2011). Evaluating the serendipity of a recommendation requires obtaining information about whether the user accepts or purchases unusual items and whether or not they are pleased with them; in other words, whether the item was highly rated or had fewer returns (Herlocker et al. 2004). (See also “diversity”, “novelty”, “redundancy”.)

  75.

    Balance. Balance in recommender systems refers to the ability of a recommender system to provide users with results from the complete distribution of available items (Ho et al. 2014). A balanced recommender system allows non-popular items a reasonable chance of being returned in recommendation results.

  76.

    Coverage. Accuracy alone does not determine a recommender system’s usefulness. For example, a recommender system may be highly accurate but only able to make rating predictions for a small number of items. In terms of recommender systems, coverage is defined as 1) the degree to which a recommender system provides recommendations for the entire catalog of available items and 2) the percentage of all items that can be effectively recommended to all potential users. It is a quality metric that indicates how well a recommender system provides the user with an in-depth and detailed view of the items available within a given domain (Ge et al. 2010). Coverage is generally computed as a percentage of user-item pairs for which a recommendation could be made and tells us how well a recommender system’s predictions cover all of the items within the system (Sarwar et al. 2000a). There are several different types of coverage: 1) prediction coverage indicates the percentage of total items for which a recommender system can make predictions; 2) catalog coverage indicates the percentage of available items that are ever actually recommended to users, a measure related to the diversity and novelty of recommendations; and 3) coverage may also be measured as the percentage of items a recommender system can recommend out of the total number of items a user would be interested in (Herlocker et al. 2004).
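
    A minimal sketch of catalog coverage under the definition above, using hypothetical per-user top-N lists:

```python
def catalog_coverage(recommendation_lists, catalog):
    """Fraction of the catalog appearing in at least one user's list."""
    recommended = set().union(*map(set, recommendation_lists))
    return len(recommended & set(catalog)) / len(catalog)

catalog = ["a", "b", "c", "d", "e", "f"]
lists = [["a", "b"], ["a", "c"], ["b", "c"]]   # per-user top-N lists
print(catalog_coverage(lists, catalog))        # 0.5: half the catalog is ever shown
```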

  77.

    Plasticity. Recommender systems tend to suffer from a static view of the user based on preference history. Often, no accommodation is made for changes in the inclinations and desires of the user as they age, move to a different locale or associate with different people. Plasticity in recommender systems refers to the ability of the system to adapt to changes in user preferences over time (Burke 2002). Several authors suggest the use of “concept-drift” techniques to model changes in user preferences (Lorenzi and Ricci 2003; Lu et al. 2015; Sahoo et al. 2012).

  78.

    Stability. Having established a user’s preferences through a sufficient number of ratings, a recommender system may tend to recommend the same types of items. This is known as the stability of recommendations. While moving past the “cold-start” challenge associated with learning-based techniques, this issue of stability of recommendations may conflict with a user’s wish to view novel and serendipitous items (Burke 2002). Stability of recommendations also works against providing recommendations based on contextual factors such as location or changes in user preference that take place over time. Stability may also refer to a recommender system’s ability to remain stable under attack scenarios (Burke et al. 2006). The stability of a recommender system’s recommendations may influence user trust towards the system (Bobadilla et al. 2013). (See also, “plasticity”).

  79.

    Concept Drift. The issue of concept drift has long been recognized in statistics and machine learning and refers to changes in data distributions over time in dynamic environments (Schlimmer and Granger 1986). In recommender systems research, user preferences are known to change or drift over time (Burke 1999; Burke 2002). The ability of a system to adapt to these changes is known as plasticity. Learning algorithms have been utilized to account for concept drift by 1) appropriately weighting more recent or contextually relevant user ratings or 2) fitting various models to subsets of data (Sahoo et al. 2012).

  80.

    Accuracy. Accuracy is a key issue in recommender systems research and one with several different facets. One way in which accuracy can be thought of is as the system’s capacity to return personalized recommendation results that match what a specific user would have chosen if they were cognitively able to manually select from all of the available choices. Accuracy can also be thought of in terms of whether a recommendation is interesting and relevant to a user (Huang 2011). Accuracy is a measure of a recommender system’s ability to 1) predict how users would actually rate recommendation results, 2) make correct decisions regarding which recommendations to return to the user, and 3) return recommendations in the same order in which the user would rank these items (Herlocker et al. 2004).

    Measures of accuracy may be statistical or decision-support based. To address the challenge of providing accurate results, statistical measures of accuracy compare estimated ratings against user ratings in order to determine the degree to which the system effectively correlates its predictions with actual ratings. Simple methods of measuring statistical accuracy include Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). Other methods for determining statistical accuracy utilize the correlation between actual ratings and predicted ratings. Correlation accuracy measures include the Pearson product-moment correlation, the Spearman correlation and Kendall’s Tau. The Pearson correlation determines the extent to which a linear relationship exists between the actual and predicted ratings. The Spearman correlation determines whether a relationship exists between the rankings of the items. Kendall’s Tau measures the extent to which the two rankings agree on the exact values of the ratings (Lu et al. 2012). For situations in which the ranking order is weak, the Normalized Distance-based Performance Measure (NDPM) has been proposed, based on the number of strict preference relationships in the actual rankings (Yao 1995).

    Decision support measures determine how well a recommender system predicts whether recommendations are of value to users. These metrics include Receiver Operating Characteristic (ROC) curves and the Precision Recall Curve (PRC) (Herlocker et al. 2004; Isinkaye et al. 2015). ROC measures indicate how well a recommender system filtering technique can distinguish relevant items from noise by comparing the probability that relevant items will be recommended with that of irrelevant items. ROC sensitivity is a binary measure of the diagnostic power of a recommender system (i.e. the user will either select or reject a recommended item). The ROC curve plots the probability of a relevant item being recommended (sensitivity) against the probability of irrelevant items being rejected (specificity) in a hit-or-miss fashion. The area under the ROC curve increases as the recommender system is able to recommend more relevant items while rejecting irrelevant ones (Herlocker et al. 1999).
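
    A minimal sketch of the two simple statistical measures named above, MAE and RMSE, over hypothetical actual and predicted ratings:

```python
import math

def mae(actual, predicted):
    """Mean Absolute Error: average magnitude of the prediction errors."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root Mean Square Error: like MAE, but penalizes large errors more."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

actual    = [4, 3, 5, 1, 2]    # ratings the users actually gave
predicted = [3.5, 3, 4, 2, 2]  # ratings the system predicted
print(mae(actual, predicted), rmse(actual, predicted))  # 0.5, ~0.67
```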

  81.

    Rank Accuracy. Along with result accuracy, the order in which recommendations are presented matters to recommender systems users (Adomavicius and Tuzhilin 2005; McNee et al. 2006). Ranking is a means of providing recommendation results that are personalized to a user in the order of predicted relevance and usefulness, and it separates recommender systems from simple information retrieval (Burke 2002). Since most users will only review the top-N results (the first or second page), ranking accuracy has become a key focus for recommender systems research (Kaminskas and Bridge 2017; Zhang et al. 2017). To evaluate ranking scores, the most often used measures are recall, precision and Normalized Discounted Cumulative Gain (NDCG) (Yang et al. 2014; Zhang et al. 2017).
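
    A minimal sketch of NDCG for a single ranked list, using hypothetical graded relevance values and the standard logarithmic rank discount:

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: relevance discounted by log2 of rank."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """DCG of the system's ranking divided by the DCG of the ideal ordering."""
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal else 0.0

# Relevance grades of the top-5 results in the order the system ranked them.
print(ndcg([3, 2, 3, 0, 1]))  # < 1.0: the ideal order would be [3, 3, 2, 1, 0]
```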

  82.

    Confidence. Confidence describes the degree of trust that can be placed in a recommender system’s predictive accuracy. One way to increase confidence is through an increase in user ratings: as the amount of ratings data grows, so does confidence that a given recommendation will satisfy the user’s request (Shani and Gunawardana 2011). Confidence in recommender systems research has also been used in the behavioral sense of a user’s satisfaction with their recommendation choice, as in the ability of the recommender system to convince users of the information or items recommended. In this sense, confidence addresses the level of certainty a user has in their recommended selection (Pu et al. 2011).

  83.

    Error Rate. The error rate of a recommender system is the number of incorrect recommendations it makes divided by the total number of recommendations. Error rates in recommender systems are generally approximations since the computation is most often limited to recommendations where a rating is available (Herlocker et al. 2004).

  84.

    Performance. Along with accuracy and relevance, computational performance is also important for user satisfaction with recommender systems. Aside from the usual computational performance factors affecting electronic information systems in general (e.g. system memory, availability, throughput, network latency, processing speed, etc.), recommender systems computational performance may be calculated, borrowing from machine learning practice, as a combination of prediction latency (the time it takes a recommender system to make a prediction) and prediction throughput (the number of predictions a recommender system can deliver in a given amount of time). Factors associated with prediction latency are 1) the size of the feature set, 2) data representation and sparsity, 3) the model used to make predictions and 4) feature extraction latency. Prediction throughput is a factor of 1) the number of features and 2) the efficiency of the model utilized (Pedregosa et al. 2011). Memory-based CF methods that utilize correlations between user preferences for items, for example, are known to be very accurate; however, they are the most computationally expensive, as the system must examine every user and every item in order to produce recommendations (Linden et al. 2003). To improve computational performance, researchers have utilized intelligent model-based methods such as Bayesian or Markov techniques and machine learning, coupled with updates performed offline at fixed intervals, to lower computational cost while improving predictive accuracy (Shani et al. 2005; Isinkaye et al. 2015).

  85.

    Business Value/Business Performance. In e-commerce implementations, recommender systems not only provide relevant results to users, they also create value for the business by providing customers with products they prefer to purchase. Measures of business value include return on investment (ROI) and customer lifetime value (LTV) (Adomavicius and Tuzhilin 2005). Recommender systems can increase business value through cross-sell opportunities and improved customer retention (Lu et al. 2012).

  86.

    Scalability. The sheer size of recommender systems catalogs can be overwhelming. With millions of products and users, scalability becomes a critical issue for recommender systems developers seeking to meet consumer latency expectations (Schafer et al. 2001). Memory-based CF methods, although deemed to be more accurate, are particularly susceptible to scaling issues; reviewing every customer and item rating prevents this type of recommender system from scaling to very large data sets. Generally, CB recommender systems methods scale better than CF methods (Hu et al. 2008). Singular Value Decomposition (SVD) of the user-item ratings matrix has been demonstrated to be both an accurate and scalable method for filtering recommendations (Hu et al. 2008). Zhang et al. (2017) apply the success of deep-learning based methods in big data analytics to address the scalability challenge of recommender systems. Generally, however, there are trade-offs between scalability and predictive accuracy (Su and Khoshgoftaar 2009).
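
    As an illustration of the SVD approach, the following sketch computes a rank-k approximation of a toy ratings matrix with NumPy. Note that treating unrated cells as zeros is a simplification made here for brevity; production systems handle missing entries explicitly.

```python
import numpy as np

R = np.array([[5, 3, 0, 1],     # toy user-item ratings, 0 = unrated
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 4, 4]], dtype=float)

k = 2                                            # number of latent factors
U, s, Vt = np.linalg.svd(R, full_matrices=False)
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]    # rank-k reconstruction

print(np.round(R_hat, 2))   # estimated scores, including unrated cells
```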

  87.

    Learning Rate. Various recommender systems algorithms achieve recommendation quality at different learning rates. Herlocker distinguishes between the overall learning rate (recommendation quality as a function of the overall number of ratings in the system), the per-item learning rate (quality as a function of the number of ratings available for an item) and the per-user learning rate (quality as a function of the number of ratings the user has contributed) (Herlocker et al. 2004). The faster an algorithm is able to provide quality recommendations with fewer ratings, the higher its learning rate.

figure f

1.2.3 Theme: Objects

  88.

    Objects. Objects within our Recommender Systems Issues Ontology refer to the items provided by the recommender system. These objects comprise the sets of recommendations made by a recommender system.

  89.

    Knowledge Engineering. CB recommender systems require considerable knowledge engineering effort in order to describe and classify items for recommendation. As a recommender system becomes larger and item churn begins to take place, knowledge needs to be carefully updated on a regular basis in order to avoid classification errors and faulty recommendations (Felfernig and Burke 2008). This is seen as a limitation of the approach, as such efforts are usually complex and prone to error (Lorenzi and Ricci 2003). Collaborative tagging has been offered as a means of crowd-sourcing the onerous job of classifying items for a recommender system (Ji et al. 2007).

  90.

    Classification. One issue for content-based recommender systems is that items must be classified in some manner in order to provide a means for selecting items with similar features. This can be a laborious process in which methods need to be employed to catalog each item’s attributes, such as title, description and author. In these situations, maintenance of the metadata surrounding an item can quickly become overwhelming for administrators, and similarity measurement techniques must be adequately deployed (Tao et al. 2017).

  91.

    Collaborative Tagging. Collaborative tagging has been suggested as a method of allowing users to assist in the process of providing searchable tags or “folksonomy” for filtering of relevant items (Ji et al. 2007). With collaborative tagging, users can add their own descriptive keywords (tags) to items in order to assist with classifying and filtering.

  92.

    Limited Content Analysis. CB recommender systems make recommendations based on the descriptions and tags associated with the items in their database. One limitation of CB recommender systems is that only shallow analysis of certain kinds of content may be extracted from the items in the system (Balabonovic and Shoham 1997). It is a fairly simple process to extract certain attributes from recommended items such as books or movies (e.g. authors, characters, year created); however, automatic methods for feature extraction tend to struggle with attributes such as theme, mood and plot subtleties. Even advanced natural language processing (NLP) efforts are prone to error and misinterpretation. This limitation of content analysis leads to item classification similarity, whereby items that may be very different in reality are indistinguishable to the system. This, of course, degrades recommendation quality (Adomavicius and Tuzhilin 2005).

  93.

    Feature Weighting. Weighting in recommender systems connotes an understanding of the relative importance of the features of a user profile. The idea behind giving various weights to different features is that users may rely more heavily on one aspect than others in their decision-making analysis. For instance, with a restaurant recommendation system, location of the user may more heavily influence their decision while traveling than when they are selecting dining locations in their home town. Various methods have been utilized in feature weighting with recommender systems (Rad and Lucas 2007).

  94.

    Synonymy. Synonymy becomes a challenge for recommender systems when similar items are called different things. For instance, it is difficult for correlation-based recommender systems to make a distinction between “gems” and “jewels”; such systems would find no match between these terms unless specifically programmed to do so (Sarwar et al. 2000a, 2000b). Several methods have been put forth to deal with the challenge of synonymy, including automatic term expansion, inclusion of a thesaurus and latent semantic indexing (Isinkaye et al. 2015).

  95.

    Polysemy. The Oxford Dictionary defines polysemy as the “coexistence of many possible meanings for a word or phrase”. With regard to recommender systems, polysemy refers to a situation wherein item descriptions or metadata are non-unique or vague so as to cause confusion among item-to-item correlations. Solutions for polysemy include enhanced knowledge engineering such as the use of experts for item description (Sarwar et al. 2000a).

  96.

    Item Churn. Item churn refers to the turnover of items within a recommender systems database. With a system such as Amazon or eBay, new products are continuously being added while old products are removed due to manufacturing shortages, discontinuation of a product line or lack of sales. Given that many recommender systems implementations accept user-generated content, item churn creates a dynamic environment in which items are constantly being added and updated within a catalog. The effect of frequent item churn is that certain items may never receive the number of ratings required to rise to the top-N level of recommendations before they are discontinued (Felfernig and Burke 2008).

1.3 Issues Affecting Both Users and Systems

Several issues have crosslinks between the System and User concepts, as illustrated below in the Sparsity section of the recommender systems concept map. For instance, the issue of sparsity is associated with a dearth of system item ratings as well as a lack of user ratings. Likewise, the issues of trust and context may be associated with both an item that is part of the system (e.g. where an item is located geographically) and the user (e.g. where the user is located geographically), depending on usage within the literature. The following issue descriptions provide summaries of the issues that may be associated with both system and user.

figure g

1.3.1 Theme: Sparsity

  97.

    Sparsity. Sparsity in recommender systems refers to a paucity of the data points needed to correlate user preferences. With recommender systems containing millions of items, it is likely that users have rated only a small subset of the items available. Nearest neighbor approaches using correlation coefficients defined over users who have rated items in common may therefore lack sufficient overlap to compute correlations (Sarwar et al. 2000). In such cases, contextual information about the users, such as demographic or locational data, may be used as a method of overcoming the sparsity challenge by calculating similarity based on the contextual data (Adomavicius and Tuzhilin 2005; Abass et al. 2015).
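
    A minimal sketch of how sparsity is typically quantified, as the fraction of unobserved cells in the user-item matrix (toy data; zeros denote unrated):

```python
import numpy as np

def sparsity(R):
    """Fraction of the user-item matrix that holds no rating (0 = unrated)."""
    return 1.0 - np.count_nonzero(R) / R.size

R = np.array([[5, 0, 0, 1],
              [0, 0, 4, 0],
              [1, 0, 0, 0]], dtype=float)
print(sparsity(R))   # ~0.67: two-thirds of possible ratings are missing
```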

  98.

    Cold-Start. The cold-start issue (also known as the “ramp-up” problem) is a well-known issue in recommender systems (Herlocker et al. 2000; Burke 2002; Adomavicius and Tuzhilin 2005). It describes the situation where a recommender system initially knows very little or nothing about a user’s preferences due to a lack of ratings history with which to offer accurate recommendations. There are three types of cold-start issues: 1) new user, 2) new item and 3) new community (Schafer et al. 2007). With pure CF systems, in a cold-start situation there are no similarity bases for classifying users along with other users in order to make predictions regarding recommendations. Likewise, new items added to a recommender system lack initial user ratings or purchase history. Cold-starting a new community is particularly difficult since there are neither user nor item ratings with which to provide comparisons. In these cases, a number of methods have been employed to obtain the needed user preference data, such as hybrid recommender systems that combine user and item information, as well as the use of incentives to users for providing ratings (Schein et al. 2002). The challenge for recommender systems is to develop methods that ameliorate the cold-start issue through the use of alternative approaches, such as demographic, social network, knowledge-based and contextually-aware recommender systems, or by explicitly asking the user about their preferences in order to provide accurate and relevant recommendations (Adomavicius and Tuzhilin 2011; Burke 2002; Felfernig and Burke 2008; Knijnenburg et al. 2012). Transfer learning has also been used as a method of addressing the cold-start issue; in transfer learning, information from one domain is transferred to the target domain (Li et al. 2009).

  99. Learning. At the outset, a recommender system knows virtually nothing about a user's preferences or the similarity between item attributes and therefore has no basis for making a recommendation. This is known as the cold-start problem (Herlocker et al. 2000; Burke 2002; Adomavicius and Tuzhilin 2005). Various machine learning techniques have been employed in recommender systems development to improve recommendation quality (Wang et al. 2015; Zhang et al. 2017).

  100. Transfer Learning. Transfer learning makes use of data from one recommender system to address the data sparsity problem in another. When little is known about users and items, integrating knowledge from another domain may help improve recommendations. If one can assume that user tastes are similar across a number of domains and that certain domain items are similar in terms of properties, one may conclude that shared knowledge about these users and items across domains may reveal preferences in common. For example, movies, music, and books share several properties (e.g. register, style, genre, etc.), and recommender systems in these areas often have users in common whose tastes in one domain may carry over to the others. Several studies have shown that transfer learning in recommender systems may improve the accuracy of results and assist in solving the data sparsity problem (Li et al. 2009; Pan et al. 2010).

  101. Early Rater. When a user is the first to rate an item for a particular category or within an as-yet-unrated neighborhood in a recommender system, there is very little the system can do to provide them with recommendations. Related to the cold-start problem, the Early Rater issue refers to the fact that a recommender system depends upon a number of altruistic users willing to provide ratings without receiving useful recommendations in return (Sarwar et al. 2000a, 2000b).
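To make the Sparsity issue concrete, the following is a minimal sketch (ours, not drawn from the cited studies) of the Nearest Neighbor correlation described under issue 97: a Pearson correlation computed only over co-rated items. The function name, the `min_overlap` threshold, and the toy ratings are illustrative assumptions.

```python
from math import sqrt

def pearson_similarity(ratings_a, ratings_b, min_overlap=2):
    """Pearson correlation between two users over co-rated items only.

    Returns None when too few items overlap to correlate -- the
    situation that sparsity makes common.
    """
    common = set(ratings_a) & set(ratings_b)  # co-rated items only
    if len(common) < min_overlap:
        return None                           # too sparse to correlate
    mean_a = sum(ratings_a[i] for i in common) / len(common)
    mean_b = sum(ratings_b[i] for i in common) / len(common)
    cov = sum((ratings_a[i] - mean_a) * (ratings_b[i] - mean_b) for i in common)
    var_a = sum((ratings_a[i] - mean_a) ** 2 for i in common)
    var_b = sum((ratings_b[i] - mean_b) ** 2 for i in common)
    if var_a == 0 or var_b == 0:
        return None                           # constant ratings, undefined
    return cov / (sqrt(var_a) * sqrt(var_b))

# Two users drawn from a catalog of millions of items may share only one
# rated item, so no similarity can be established between them:
alice = {"item_17": 5.0, "item_42": 3.0}
bob = {"item_42": 4.0, "item_99": 2.0}
print(pearson_similarity(alice, bob))  # None -- the sparsity problem
```

When this computation returns no usable similarity, the contextual fallbacks noted under issue 97 (demographic or locational similarity) are one way to still produce a score.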

[Figure h: Context section of the Recommender Systems Issues Concept Map]

1.3.2 Theme: Context

  102. Context. Users of recommender systems may have situational needs that reflect the context of their current need for recommendations (Adomavicius and Tuzhilin 2011). For example, a user who shops online may be shopping for themselves or for others whose needs and preferences differ from their own. As a further example, a recommender system that provides restaurant recommendations will require an understanding of the user's current location, or of the fact that they may be eating with someone else who has dietary requirements. Likewise, contextual factors may affect the recommendation of items within a recommender system's database.

  103. Identity. Identity is generally a fixed contextual characteristic referring to a user's profile information. It is an often-used contextual factor for recommendations, comprising the user's demographics, personal information, general interests, and research interests (Champiri et al. 2015).

  104. Group Awareness. Group recommender systems are used in situations where groups of people need a common recommendation. Vacations, restaurants, and movies may all be instances where a group of people needs to receive and agree upon a recommendation. For these situations, a recommender system must aggregate group members' preferences in some manner, such as averaging individual preferences, so that group members have awareness of other members' preferences (Jameson et al. 2004).

  105. Communication. Group recommender systems are a special type of recommender system in which recommendations are made for a group of individuals (Jameson et al. 2004). With group recommender systems, the system requires a means of allowing individual users to communicate with other members of the group in order to share preferences and exchange rationale for recommendation choices.

  106. Temporality. Attention to the temporal components of recommender systems acknowledges that user preferences and item popularity may change over time. The ability to adapt to these changes reflects the system's need to recognize the dynamic nature of user-item interaction. As an example of a temporal change in user preference, a user's taste in music in their 20s may be very different by the time they reach their 40s or 50s (Koren et al. 2009). An item's temporal components may likewise affect recommender system accuracy and relevance. For instance, an actor's rise in popularity may affect the rating or popularity of a movie in which they acted earlier in their career (Felfernig and Burke 2008). A change in user ratings over time may also influence users' trust in an item (Abass et al. 2015).

  107. Locality. Knowledge regarding the locality of a user or an item allows for the inclusion of spatial aspects in the formation of appropriate recommendations. Locality may be determined through explicit feedback from the user or implicit feedback such as GPS location services. A number of systems such as Facebook and Foursquare now include geographic information in their filtering algorithms so as to determine the spatial aspect of recommendations. Travel and restaurant recommender systems are examples of systems that may improve recommendation quality by including locality information in order to more closely associate users with items of interest (Levandoski et al. 2012).

  108. Surroundings. Surroundings is an example of an environmental contextual condition that may be utilized in mobile recommender systems characterized by dynamic changes in the environment (Champiri et al. 2015).

  109. Reciprocity. In people-to-people recommender systems such as social, dating, or career-based applications, a recommendation must satisfy the needs of both parties involved. Reciprocal recommender systems need to account for potential inaccuracies in user profiles arising from users' desire to present a more attractive profile. User churn is also an issue for reciprocal recommender systems (Koprinska and Yacef 2015).

  110. Aggregation. Users sometimes have a need for aggregated results rather than lists of individual items. For instance, users may want to shop for items by brand or location, and aggregating results into categories may provide flexibility and increase user satisfaction (Adomavicius and Tuzhilin 2005). Group recommender systems also need a means of aggregation, in terms of grouping individual member preferences in order to provide recommendations for the group (Jameson et al. 2004). Several aggregation strategies have been proposed in the research to deal with effective aggregation of individual preferences in group settings, including average (the mean of individual ratings), Borda count (points counted and summed from each item's rank in individuals' preference lists), Copeland rule (how often an item beats other items in majority votes minus how often it loses), plurality (repeated “first past the post”; the item with the most votes wins), least misery (the minimum of individual ratings), and most pleasure (the maximum of individual ratings) (Masthoff 2011); a sketch of three of these strategies follows this list.

  111. Multidimensionality/Multicriteria. Multidimensionality in recommender systems may refer to the utilization of varied attributes in ascertaining user preferences. For instance, incorporating demographic information such as age and gender, or contextual information such as location and time, along with user preference history and item attributes may help to improve recommendation results (Adomavicius and Tuzhilin 2005). Some recommendations require multiple criteria. For instance, with a movie recommendation system, users may wish to view different types of movies during the holiday season than they do during the summer, and when they are with friends rather than family; at the same time, a user may wish to see only movies that have appeared in theaters within the past two years. In this situation, season, companions, and currency must all be taken into account by the recommender system's filtering technique. A recommender system should be able to recognize that a user may have a number of different preference criteria and that selections based on a single criterion such as ratings may result in reduced user satisfaction (Adomavicius and Tuzhilin 2005).

  112. Reachability. Reachability is a contextual term in recommender systems that may refer to the proximity of recommendation items, or of users to recommendation items (Champiri et al. 2015). It has also been used to describe the reachability of products within an online recommender system implementation based on their prominence and ease of access on the website (Chen et al. 2008).

  113. State. State in recommender systems research may refer to a user's activity, motion, or physical or emotional state, based on object-oriented concepts (Champiri et al. 2015).

  114. Relational. Relational context refers to the relations an entity has to other entities. These may be 1) social relationships, such as the relationships between two or more people, 2) functional relationships, wherein entities make use of other entities for a specific purpose, or 3) compositional relationships, where a whole/part relationship between entities exists (Zimmermann et al. 2007).

[Figure i: Trust section of the Recommender Systems Issues Concept Map]

1.3.3 Theme: Trust

  115. Trust. Trust in recommender systems may refer to the trust a user has in other users of the system or to their trust in the recommender system itself. A user may trust the ratings of other users based on their reliability in providing useful ratings or recommendations; in that case, trust may be measured by the percentage of accurate recommendations a user has contributed in the past (a minimal sketch of such a metric follows this list). Lam offers two components of trust in recommender systems: 1) a user's perception that the system will protect their information, and 2) a user's perception that the recommendations offered can be relied upon as accurate (Lam et al. 2006). Trust has been offered as a means of addressing the ratings sparsity and cold-start problems inherent in many systems that rely solely on the similarity of selections between users. Trust in a user may also be established through implicit information obtained from a social network (Bobadilla et al. 2013; Lu et al. 2015). Alternatively, several recommender systems allow users to rate other users directly in order to establish a trust network (e.g. eBay seller and buyer ratings); in situations where users have the ability to create “fake accounts”, trust metrics can then estimate the trustworthiness of users. System trust (or competence) is a measure of “the overall ability of the system to provide consistently good recommendations to its users” (O'Donovan and Smyth 2005).

  116. Transparency. Recommender systems that allow users to understand the logic behind recommendations are said to be transparent. Transparent recommender systems are associated with perceived value and user confidence in the system, leading to positive evaluations and actions taken on recommendations (Gretzel and Fesenmaier 2006). By displaying information to the user about how recommendations are derived, recommender systems can become more transparent, increasing the likelihood of product purchase (Pu et al. 2011).

  117. Reliability. Reliability, along with competence, has been established as an attribute necessary for cognitive trust in electronic information systems (Komiak and Benbasat 2006). Reliability in recommender systems research may also refer to the confidence with which a recommender system may rely on user feedback; explicit feedback is seen as more reliable than implicit feedback because it does not require inferring preferences from user actions (Isinkaye et al. 2015). The term is also used in the context of the reliability of recommendations, in that a recommender system requires sufficient ratings data for its recommendations to be considered less liable to be wrong (Bobadilla et al. 2013).

  118. Credibility. Credibility is closely related to trust in the recommender systems literature but refers more precisely to users' perception of expertise (knowledge or skill) and trustworthiness (character and/or personal integrity). Shopping sites such as eBay utilize user feedback in the form of ratings as a means of establishing the credibility of their vendors (Isinkaye et al. 2015). Xiao and Benbasat posit that the credibility of recommendation agents is determined by the type and reputation of their providers; provider types might include sellers, third parties commercially linked to sellers, or third-party websites not commercially linked to sellers (Xiao and Benbasat 2007).

  119. Familiarity. A user's familiarity indicates their level of experience with and use of a recommender system. Users who have had many interactions with a particular recommender system implementation understand how to indicate their preferences, what type of results and explanations to expect, and how to interpret the recommendations provided. Familiarity may either increase or decrease cognitive trust in the system: users who are familiar with a recommender system and have had a positive experience tend to trust it and are more prone to adoption (Komiak and Benbasat 2006).

  120. Social Networks. Social network analysis (SNA) has been offered as a means of improving user experience with recommender systems by allowing users to interact with one another while, at the same time, improving the system's ability to provide recommendations (Lu et al. 2015). Social networks can provide contextual information such as which other users a user trusts, whom they follow, and who their friends are. This knowledge can enhance a recommender system's ability to perform accurate similarity measurements and, in doing so, can help address the sparsity challenge (Bobadilla et al. 2013).

  121. Homophily. Homophily is a term used in sociology and psychology to indicate that people tend to associate and form bonds with other people who are like themselves. With regard to recommender systems, the use of social networks in filtering methods may improve predictive accuracy and trust in a recommender system, on the premise that people are more likely to accept recommendations from, and more willing to share preferences with, friends rather than strangers (Yang et al. 2014).

  122. Reputation. Reputation in the recommender systems literature is closely aligned with trust (Lu et al. 2015; Ma et al. 2011). Social filtering approaches may utilize the number of followers a user has, or the extent and reputation of the social network with which a user is associated, in order to calculate the reputation of users and of the items the user has rated (Bobadilla et al. 2013; Palau et al. 2004). The reputation of recommender systems providers may affect users' trust beliefs in their competence, benevolence, and integrity (Xiao and Benbasat 2007).

  123. Privacy. One of the challenges of personalization in recommender systems is that users also desire privacy and anonymity and may be reticent to provide the information with which to assess their preferences (Abass et al. 2015). Knijnenburg et al. (2012) note that the amount of information users of recommender systems are willing to provide reflects a trade-off between perceived usefulness and privacy concerns.

  124. Intrusiveness. Intrusiveness as a recommender systems issue is related to the issue of user privacy. Many recommender systems need to acquire feedback from the user in order to improve the accuracy of recommendations. With these systems, developers should consider the concern people may have about divulging information about themselves to assist in the rating process. Recommender systems that attempt to address the Sparsity challenge by over-intrusively probing users for preference information run the risk of alienating users who may not wish to divulge their personal tastes (Adomavicius and Tuzhilin 2005).
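As a concrete illustration of the profile-level trust measure described under issue 115, the following is a minimal sketch (ours; the function name, the error bound `epsilon`, and the sample history are illustrative assumptions) computing a producer's trust score as the fraction of their past contributions that led to accurate recommendations, in the spirit of O'Donovan and Smyth (2005).

```python
def profile_trust(contributions, epsilon=1.0):
    """Trust score for a rating producer: the fraction of recommendations
    built from their profile whose predictions fell within `epsilon`
    of the target user's actual rating.

    contributions: list of (predicted_rating, actual_rating) pairs.
    """
    if not contributions:
        return 0.0  # no history yet -- trust has its own cold-start
    correct = sum(1 for predicted, actual in contributions
                  if abs(predicted - actual) <= epsilon)
    return correct / len(contributions)

# A producer whose ratings contributed to four past predictions,
# three of which landed within one star of the actual rating:
history = [(4.0, 4.5), (2.0, 2.5), (5.0, 3.0), (3.0, 3.5)]
print(profile_trust(history))  # 0.75
```

Such a score can then be used to weight a producer's contribution in neighbor-based prediction, which is one way trust has been proposed to compensate for sparse similarity data.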

Appendix 2: Recommender Systems Issues Concept Map

[Figure j: Recommender Systems Issues Concept Map]


Cite this article

Bunnell, L., Osei-Bryson, KM. & Yoon, V.Y. RecSys Issues Ontology: A Knowledge Classification of Issues for Recommender Systems Researchers. Inf Syst Front 22, 1377–1418 (2020). https://doi.org/10.1007/s10796-019-09935-9
