Abstract
This paper presents results from a web-based study that investigates users’ attitudes toward smart devices, focusing on acceptability. Specifically, we conducted a survey that elicited users’ ratings of devices in isolation and of devices in the context of tasks these devices may perform. Our study yielded insights about users’ attitudes toward devices in isolation and in the context of tasks, and about the influence on these attitudes of demographic factors and of factors pertaining to technical expertise and experience with devices. These insights provided the basis for two recommendation approaches based on principal components analysis (PCA) that alleviate the new-user and new-item problems: (1) employing latent features identified by PCA to predict the ratings given by existing users to new devices, and by new users to existing devices; and (2) identifying, on the basis of the principal components (PCs), a relatively small set of key questions whose answers largely account for new users’ ratings of devices in isolation and in the context of tasks. Our results show that taking into account latent features of devices, and asking a relatively small number of key questions about devices in the context of tasks, yield rating predictions that are significantly more accurate than global and demographic predictions; both approaches substantially reduce prediction error, eventually matching the performance of strong baselines.
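To make the first approach concrete, the sketch below shows, in broad strokes, how PCA-derived latent features can support rating prediction for a new device. It is an illustrative sketch only, not our implementation: the data are synthetic, and all names and parameter choices (e.g., n_components=4 and the set of seed raters) are assumptions made for the example.

```python
# Illustrative sketch: PCA latent features of existing ratings used to
# predict a new device's ratings (synthetic data, assumed parameters).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
R = rng.integers(1, 6, size=(100, 13)).astype(float)   # users x existing devices (synthetic ratings)

pca = PCA(n_components=4)
U = pca.fit_transform(R)                               # users' scores on the latent components

# A new device for which only a few users have provided ratings so far.
new_ratings = rng.integers(1, 6, size=100).astype(float)
seed = rng.choice(100, size=10, replace=False)         # users whose ratings of the new device are known

# Regress the observed ratings on the raters' component scores, then
# predict the remaining users' ratings of the new device.
reg = LinearRegression().fit(U[seed], new_ratings[seed])
predicted = reg.predict(U)

unseen = np.setdiff1d(np.arange(100), seed)
rmse = np.sqrt(np.mean((predicted[unseen] - new_ratings[unseen]) ** 2))
print(f"RMSE on users who have not rated the new device: {rmse:.2f}")
```

The new-user case could be handled symmetrically, by exchanging the roles of users and devices in such a sketch.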








Notes
We distinguish between acceptability, which pertains to the period prior to usage, and acceptance, which is related to trust and occurs after a user has been exposed to a device (Verberne et al. 2012). However, acceptability is influenced by the available information, and may change over time.
As noted in Sect. 5, owing to budgetary limitations, we could only perform a preliminary assessment of the application of PCA-LR to new users.
A survey of active learning approaches to address cold-start problems appears in (Rubens et al. 2015).
Our study did not feature speech-enabled assistants, e.g., Amazon’s Echo or OK Google, as such devices were not commonly available at the time our study began. Further, we assumed that in principle, any device can be endowed with a spoken communication capability, so this was not a distinguishing feature. However, our study included questions regarding preferred communication modalities, whose results are reported in (Zhan and Zukerman 2016).
In retrospect, we should have asked about ethnicity, rather than country of residence.
There is a 0.998 correlation between the device-only ratings of the 136 participants who completed only the first two segments of the survey and the device-only ratings of the 94 participants who completed the entire survey. However, owing to this difference in populations, the device-only results in this paper differ slightly from those in our UMAP’2016 paper (Zhan et al. 2016), which were obtained from 93 participants.
Autonomous tasks were removed from this comparison owing to their low ratings.
These numbers are slightly different from those in Fig. 4a, which were computed for 136 participants.
An interesting exercise involves studying the impact of having a user rate fewer than 13 devices when calculating the PC weights for the remaining device. Notice, however, that the results of this exercise would depend on the way in which the devices are selected.
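As an illustration of this exercise, the sketch below estimates a user’s component scores from k rated devices via least squares on the PC loadings, and reconstructs the held-out device’s rating for several values of k. It is a hypothetical sketch with synthetic data, assuming, for illustration, 14 devices in total (13 rated plus one held out); it is not the protocol used in this paper.

```python
# Illustrative sketch: predict a held-out device's rating from k known ratings,
# using PC loadings fitted on other users (synthetic data, assumed parameters).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
R = rng.integers(1, 6, size=(100, 14)).astype(float)   # users x devices (synthetic ratings)
train, target = R[:-1], R[-1]                          # the last user plays the role of the new rater
held_out = 13                                          # index of the device whose rating we predict

pca = PCA(n_components=4).fit(train)
W, mu = pca.components_, pca.mean_                     # loadings (4 x 14) and per-device means

for k in (3, 6, 9, 13):
    known = rng.choice([d for d in range(14) if d != held_out], size=k, replace=False)
    # Least-squares estimate of the user's component scores from the k known ratings only.
    scores, *_ = np.linalg.lstsq(W[:, known].T, target[known] - mu[known], rcond=None)
    pred = mu[held_out] + scores @ W[:, held_out]
    print(f"k={k:2d}: predicted {pred:.2f}, actual {target[held_out]:.0f}")
```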
Other PC-inspection protocols are possible, e.g., preferring higher-impact PCs.
We employed SVD with Ky Fan norms, sourced from www.mathworks.com/matlabcentral/fileexchange/48406-svd-free-matrix-completion-for-recommender-system-design.
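For readers who prefer a self-contained illustration, the sketch below implements a generic iterative SVD-based matrix-completion baseline (repeated truncated-SVD imputation). It is not the cited MATLAB package, nor its Ky Fan norm formulation; the data, the rank, and the number of iterations are assumptions made for the example.

```python
# Generic sketch of SVD-based matrix completion via iterative rank-r imputation
# (illustrative only; not the cited package's algorithm).
import numpy as np

def svd_complete(R, mask, rank=4, iters=50):
    """Fill the missing entries of R (mask == False) by repeated truncated-SVD imputation."""
    X = np.where(mask, R, R[mask].mean())              # initialise the gaps with the global mean
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X = np.where(mask, R, low_rank)                # keep observed ratings, update only the gaps
    return X

# Usage on a synthetic ratings matrix with roughly 30% of the entries missing.
rng = np.random.default_rng(2)
R = rng.integers(1, 6, size=(100, 14)).astype(float)
mask = rng.random(R.shape) > 0.3
completed = svd_complete(R, mask)
print(np.round(completed[:3, :5], 2))
```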
References
Agarwal, D., Chen, B.: Regression-based latent factor models. In: KDD’2009: Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Paris, France, pp. 19–28 (2009)
Ahn, H.J.: A new similarity measure for collaborative filtering to alleviate the new user cold-starting problem. Inf. Sci. 178, 37–51 (2008)
Alexandersson, J., Schäfer, U., Rekrut, M., Arnold, F., Reifers, S.: Kochbot in the intelligent kitchen–speech-enabled assistance and cooking control in a smart home. In: AAL-Kongress, Frankfurt/Main, Germany, vol. 8, pp. 396–405 (2015)
Bartneck, C., Suzuki, T., Kanda, T., Nomura, T.: The influence of people’s culture and prior experiences with Aibo on their attitude towards robots. AI Soc. 21(1), 217–230 (2006)
Bhagat, S., Weinsberg, U., Ioannidis, S., Taft, N.: Recommending with an agenda: Active learning of private attributes using matrix factorization. In: RecSys’14: Proceedings of the 8th ACM Conference on Recommender Systems, Foster City, California, pp. 65–72 (2014)
Bogue, R.: Robots in healthcare. Ind. Robot Int. J. 38(3), 218–223 (2011)
Broadbent, E., Kuo, I., Lee, Y., Rabindran, J., Kerse, N., Stafford, R., MacDonald, B.: Attitudes and reactions to a healthcare robot. Telemed. e-Health 16(5), 608–613 (2010)
Conti, D., Cattani, A., Di Nuovo, S., Di Nuovo, A.: A cross-cultural study of acceptance and use of robotics by future psychology practitioners. In: RO-MAN 2015: Proceedings of the 24th IEEE International Symposium on Robot and Human Interactive Communication, Kobe, Japan, pp. 555–560 (2015)
Davis, F.D.: Perceived usefulness, perceived ease of use, and user acceptance of information technology. Manag. Inf. Syst. Q. 13(3), 319–340 (1989)
de Visser, E.J., Krueger, F., McKnight, P., Scheid, S., Smith, M., Chalk, S., Parasuraman, R.: The world is not enough: trust in cognitive agents. In: Proceedings of the Human Factors and Ergonomics Society 56th Annual Meeting, Boston, Massachusetts, pp. 263–267 (2012)
DeVault, D., Artstein, R., Benn, G., Dey, T., Fast, E., Gainer, A., Georgila, K., Gratch, J., Hartholt, A., Lhommet, M., Lucas, G., Marsella, S., Morbini, F., Nazarian, A., Scherer, S., Stratou, G., Suri, A., Traum, D., Wood, R., Xu, Y., Rizzo, A., Morency, L.-P.: SimSensei Kiosk: a virtual human interviewer for healthcare decision support. In: AAMAS 2014: Proceedings of the 2014 International Conference on Autonomous Agents and Multi-agent Systems, Paris, France, pp. 1061–1068 (2014)
Elliott, C., Rickel, J., Lester, J.: Lifelike pedagogical agents and affective computing: an exploratory synthesis. In: Wooldridge, M.J., Veloso, M. (eds.) Artificial Intelligence Today, pp. 195–212. Springer, Berlin (1999)
Eurobarometer. Public attitudes towards robots. Technical Report 382, European Commission, Directorate General for Information Society and Media (2012)
Ferguson, E., Cox, T.: Exploratory factor analysis: a users’ guide. Int. J. Sel. Assess. 1(2), 84–94 (1993)
Fernández-Tobías, I., Braunhofer, M., Elahi, M., Ricci, F., Cantador, I.: Alleviating the new user problem in collaborative filtering by exploiting personality information. User Model. User Adapt. Interact. 26, 221–255 (2016)
Fischinger, D., Einramhof, P., Papoutsakis, K., Wohlkinger, W., Mayer, P., Panek, P., Hofmann, S., Koertner, T., Weiss, A., Argyros, A., Vincze, M.: Hobbit, a care robot supporting independent living at home: first prototype and lessons learned. Robot. Auton. Syst. 75(Part A), 60–78 (2016)
Gabrilovich, E., Markovitch, S.: Wikipedia-based semantic interpretation for natural language processing. J. Artif. Intell. Res. 34, 443–498 (2009)
Gesundheit, N., Brutlag, P., Youngblood, P., Gunning, W.T., Zary, N., Fors, U.: The use of virtual patients to assess the clinical skills and reasoning of medical students: initial insights on student acceptance. Med. Teach. 31(8), 739–742 (2009)
Gong, L.: How social is social responses to computers? The function of the degree of anthropomorphism in computer representations. Comput. Hum. Behav. 24(4), 1494–1509 (2008)
Graesser, A., McNamara, D.: Self-regulated learning in learning environments with pedagogical agents that interact in natural language. Educ. Psychol. 45(4), 234–244 (2010)
Gulz, A., Silvervarg, A., Haake, M.: Extending a teachable agent with a social conversation module—effects on student experiences and learning. In: Proceedings of the 15th International Conference on Artificial Intelligence in Education, Auckland, New Zealand, pp. 106–114 (2011)
Hoff, K.A., Bashir, M.: Trust in automation: integrating empirical evidence on factors that influence trust. Hum. Factors 57(3), 407–434 (2015)
Houlsby, N., Hernandez-Lobato, J.M., Ghahramani, Z.: Cold-start active learning with robust ordinal matrix factorization. In: ICML2014: Proceedings of the 31st International Conference on Machine Learning, Beijing, China, pp. 766–774 (2014)
Jayawardena, C., Kuo, I., Unger, U., Igic, A., Wong, R., Watson, C., Stafford, R., Broadbent, E., Tiwari, P., Warren, J., Sohn, J., MacDonald, B.: Deployment of a service robot to help older people. In: IROS 2010: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, pp. 5990–5995 (2010)
Kaasinen, E.: User acceptance of mobile services. In: Lumsden, J. (ed.) Handbook of Research on User Interface Design and Evaluation for Mobile Technology, vol. 1, pp. 102–121. IGI Global (2008)
Kaiser, H.F.: The Varimax criterion for analytic rotation in factor analysis. Psychometrika 23(3), 187–200 (1958)
Kaiser, H.F.: The application of electronic computers to factor analysis. Educ. Psychol. Measur. 20(1), 141–151 (1960)
Karimi, R., Freudenthaler, C., Nanopoulos, A., Schmidt-Thieme, L.: Non-myopic active learning for recommender systems based on matrix factorization. In: IRI2011: IEEE International Conference on Information Reuse and Integration, Las Vegas, Nevada, pp. 299–303 (2011)
Koren, Y., Bell, R., Volinsky, C.: Matrix factorization techniques for recommender systems. IEEE Comput. 42, 42–49 (2009)
Lee, E.-J.: Factors that enhance consumer trust in human computer interaction: an examination of interface factors and the moderating influences. Ph.D. thesis, The University of Tennessee, Knoxville, Tennessee (2002)
Link, M.W., Armsby, P.P., Hubal, R.C., Guinn, C.I.: Accessibility and acceptance of responsive virtual human technology as a survey interviewer training tool. Comput. Hum. Behav. 22(3), 412–426 (2006)
Liu, H., Hu, Z., Mian, A., Tian, H., Zhu, X.: A new user similarity model to improve the accuracy of collaborative filtering. Knowl. Based Syst. 56, 156–166 (2014)
Long, S.K., Karpinsky, N.D., Bliss, J.P.: Trust of simulated robotic peacekeepers among resident and expatriate Americans. In: Proceedings of the Human Factors and Ergonomics Society 2017 Annual Meeting, Austin, Texas, pp. 2091–2095 (2017)
Lyons, R., Johnson, T.R., Khalil, M.K., Cendán, J.C.: The impact of social context on learning and cognitive demands for interactive virtual human simulations. PeerJ 2, e372 (2014)
Macedonia, M., Groher, I., Roithmayr, F.: Intelligent virtual agents as language trainers facilitate multilingualism. Front. Psychol. 5, 295 (2014)
Masthoff, J., Grasso, F., Ham, J.: Preface to the special issue on personalization and behavior change. User Model. User Adapt. Interact. 24, 345–350 (2014)
Miao, Z., Yan, J., Chen, K., Yang, X., Zha, H., Zhang, W.: Joint prediction of rating and popularity for cold-start item by sentinel user selection. IEEE Access Spec. Sect. Intell. Sens. Mobile Soc. Media Anal. 4, 8500–8513 (2016)
Mo, K., Liu, B., Xiao, L., Li, Y., Jiang, J.: Image feature learning for cold start problem in display advertising. In: IJCAI2015: Proceedings of the 24th International Conference on Artificial Intelligence, Buenos Aires, Argentina, pp. 3728–3734 (2015)
Morandell, M.M., Hochgatterer, A., Fagel, S., Wassertheurer, S.: Avatars in assistive homes for the elderly. In: Proceedings of the 4th Symposium of the Workgroup on Human-Computer Interaction and Usability Engineering of the Austrian Computer Society, Graz, Austria, pp. 391–402 (2008)
Nass, C., Isbister, K., Lee, E.-J.: Truth is beauty: researching embodied conversational agents. In: Cassell, J., Sullivan, J., Prevost, S., Churchill, E.S. (eds.) Embodied Conversational Agents, pp. 374–402. MIT Press (2000)
Pak, R., Fink, N., Price, M., Bass, B., Sturre, L.: Decision support aids with anthropomorphic characteristics influence trust and performance in younger and older adults. Ergonomics 55(9), 1059–1072 (2012)
Pazzani, M.J., Billsus, D.: Content-based recommendation systems. Adapt. Web 4321, 325–341 (2007)
Rashid, A., Albert, I., Cosley, D., Lam, S., McNee, S., Konstan, J., Riedl, J.: Getting to know you: learning new user preferences in recommender systems. In: IUI 2002: Proceedings of the 7th International Conference on Intelligent User Interfaces, San Francisco, California, pp. 127–134 (2002)
Reich, N., Eyssel, F.: Attitudes towards service robots in domestic environments: the role of personality characteristics, individual interests, and demographic variables. Paladyn J. Behav. Robot. 4(2), 123–130 (2013)
Reich-Stiebert, N., Eyssel, F.: Learning with educational companion robots? Toward attitudes on education robots, predictors of attitudes, and application potentials for education robots. Int. J. Soc. Robot. 7(5), 875–888 (2015)
Rubens, N., Elahi, M., Sugiyama, M., Kaplan, D.: Active learning in recommender systems. In: Ricci, E., Rokach, L., Shapira, B. (eds.) Recommender Systems Handbook, 2nd edn, pp. 809–846. Springer, Berlin (2015)
Saveski, M., Mantrach, A.: Item cold-start recommendations: learning local collective embeddings. In: RecSys’14: Proceedings of the 8th ACM Conference on Recommender Systems, Foster City, California, pp. 89–96 (2014)
Silva, J., Carin, L.: Active learning for online Bayesian matrix factorization. In: KDD’12: Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Beijing, China, pp. 325–333 (2012)
Son, L.H.: Dealing with the new user cold-start problem in recommender systems: a comparative review. Inf. Syst. 58, 87–104 (2016)
Spagnolli, A., Guardigli, E., Orso, V., Varotto, A., Gamberini, L.: Measuring user acceptance of wearable symbiotic devices: validation study across application scenarios. In: Symbiotic 2014: Proceedings of the 3rd International Workshop on Symbiotic Interaction, Helsinki, Finland, pp. 87–98 (2014)
Venkatesh, V., Morris, M.G., Davis, G.B., Davis, F.D.: User acceptance of information technology: toward a unified view. Manag. Inf. Syst. Q. 27, 425–478 (2003)
Verberne, F.M., Ham, J., Midden, C.J.: Trust in smart systems: sharing driving goals and giving information to increase trustworthiness and acceptability of smart systems in cars. Hum. Factors 54(5), 799–810 (2012)
Vizine Pereira, A.L., Hruschka, E.R.: Simultaneous co-clustering and learning to address the cold start problem in recommender systems. Knowl. Based Syst. 82(C), 11–19 (2015)
Wei, J., He, J., Chen, K., Zhou, Y., Tang, Z.: Collaborative filtering and deep learning based recommendation system for cold start items. Expert Syst. Appl. 69, 29–39 (2017)
Wu, P., Miller, C.: Results from a field study: the need for an emotional relationship between the elderly and their assistive technologies. Found. Augment. Cognit. 11, 889–898 (2005)
Xu, J., Yao, Y., Tong, H., Tao, X., Lu, J.: RaPare: a generic strategy for cold-start rating prediction problem. IEEE Trans. Knowl. Data Eng. 29, 1296–1309 (2017)
Yaghoubzadeh, R., Kramer, M., Pitsch, K., Kopp, S.: Virtual agents as daily assistants for elderly or cognitively impaired people. In: IVA 2013: Proceedings of the 13th International Conference on Intelligent Virtual Agents, Edinburgh, UK, pp. 79–91 (2013)
Yoshida, S., Shirokura, T., Sugiura, Y., Sakamoto, D., Ono, T., Inami, M., Igarashi, T.: RoboJockey: designing an entertainment experience with robots. IEEE Comput. Gr. Appl. 36(1), 62–69 (2016)
Zanatto, D., Patacchiola, M., Goslin, J., Cangelosi, A.: Priming anthropomorphism: can our trust in humanlike robots be transferred to non-humanlike robots? In: Proceedings of the 11th ACM/IEEE International Conference on Human Robot Interaction, Christchurch, New Zealand, pp. 543–544 (2016)
Zhan, K., Zukerman, I.: Which smart devices do you like? Factors that affect device acceptability. In: HAIDM2016 Proceedings: the 5th International Workshop on Human-Agent Interaction Design and Models, New York, NY (2016)
Zhan, K., Zukerman, I., Moshtaghi, M., Rees, G.: Eliciting users’ attitudes toward smart devices. In: UMAP2016 Proceedings: the User Modeling, Adaptation and Personalization Conference, Halifax, Canada, pp. 175–184 (2016)
Acknowledgements
This material is based upon work supported by the Air Force Office of Scientific Research, Asian Office of Aerospace Research and Development (AOARD), under award number FA2386-14-1-0010. The authors thank Gwyneth Rees and Masud Moshtaghi for their help in the initial stages of this research, and the three anonymous reviewers for their helpful comments.
Additional information
This paper significantly expands, adds detail to, and revises some of the analysis in the paper entitled “Eliciting Users’ Attitudes toward Smart Devices”, co-authored by Kai Zhan, Ingrid Zukerman, Masud Moshtaghi and Gwyneth Rees, published in UMAP’2016 Proceedings: the User Modeling, Adaptation and Personalization Conference, pp. 175–184, Halifax, Nova Scotia, Canada, https://doi.org/10.1145/2930238.2930241. This paper, or a similar version, is not currently under review by any journal or conference. This paper is free of plagiarism and self-plagiarism as defined by the Committee on Publication Ethics and Springer Guidelines.
Cite this article
Zhan, K., Zukerman, I. & Partovi, A. Identifying factors that influence the acceptability of smart devices: implications for recommendations. User Model User-Adap Inter 28, 391–423 (2018). https://doi.org/10.1007/s11257-018-9210-0