Abstract
Creating trust in wireless solutions and increasing their social acceptance are major challenges in achieving the full potential of the Internet of Things (IoT).
In the current study we investigate several methods to increase trust in wireless systems, such as user feedback, usability, and product labeling. This work is part of the European project SCOTT (Secure COnnected Trustable Things; https://scottproject.eu), which aims to develop wireless solutions that are safe, trusted, and acceptable.
Participants watched videos of three use cases and then rated the expected impact of controllability, accountability, user feedback, usability, and product labeling on their trust in the technologies. In addition, two labels were evaluated: a uni-dimensional label that reflected the privacy of user data and a multi-dimensional label that combined the dimensions privacy, product quality, manufacturing, usability, and cost.
The results indicate that trustworthiness aspects of the system such as controllability, accountability, and usability have the strongest impact on positive trust formation among all the investigated methods. Furthermore, both the uni-dimensional and the multi-dimensional labeling conditions appeared to increase user trust at the highest label level and increased participants' indicated willingness to use the product. However, only the uni-dimensional label showed a positive influence on trust formation at the medium label level.
The results of this study highlight that service and product providers have various methods at hand to help increase trust among their customers. Consumers want control over their technologies, as well as accountability from technology vendors. Furthermore, in the long term, the results suggest that customers could benefit from digital competence education that may allow them to learn to use and rely on otherwise relatively complex multi-dimensional labeling systems. Next steps for this research are suggested.
1 Introduction
1.1 Study Motivation
Industry currently spends considerable effort on improving the reliability and perceived trustworthiness of a continuously growing number of wireless and smart products. To support these efforts, the Internet of Things (IoT) Security & Privacy Trust Framework provides strategic principles for developing secure IoT devices. This framework requires comprehensive disclosures concerning the product and its data collection, usage, and sharing. It also recommends that manufacturers increase transparency and communication about device capabilities and privacy issues.
However, translating this high-level guidance into concrete measures during development is not straightforward. Increasing communication is good, but which communication, and in which form? Who is responsible for determining this information, and what does success mean? These questions are investigated in the European project SCOTT (“Secure Connected Trustable Things”, see https://scottproject.eu/), which brings together human users, operators, engineering, and management to develop new, more trustworthy wireless technology. In this paper we describe the results of a study on trust formation conducted as part of the SCOTT project.
1.2 Trust
Lee and See (2004) [1] define trust as “… the attitude that an agent will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability.” Based on this definition, certain conditions must be met to create a situation in which trust is required: a level of uncertainty and risk, a certain level of interdependence, and the necessity to choose between at least two alternatives.
Trust has been defined variously as an attitude, intention, belief, or behavior [1]. Lee and See (2004) argue that these differences in definition can be harmonized by taking the theory of reasoned action [2] into account. In this context, trust is described as an attitude, which influences the intention to use the automated system and subsequently the actual reliance behavior. The translation from attitude to intention and finally to actual reliance behavior is influenced by additional factors such as context, operator state, and pre-existing knowledge [1]. Lee and See (2004) describe a dynamic model of trust formation and its subsequent influence on reliance behavior.
Similarly, [3] differentiate trust prior to interaction with a system (initial learned trust) from trust that develops during interaction with a system (dynamically learned trust). We refer to initial learned trust from now on as “initially formed trust” to better reflect that this trust formation is not based on direct interaction with the system. Initially formed trust is influenced by pre-existing knowledge, expectations, and understanding of the system. The result is an initial reliance strategy that is relatively little influenced by the system's characteristics themselves, because direct interactions with the system have not yet commenced. Instead, initial trust formation is influenced by the information that is available about the system and by the culture and attitudes that have been formed. Dynamically learned trust, on the other hand, relies on direct system interactions and is therefore directly influenced by system characteristics under the prevalent conditions of system use and the task goals.
We believe that the information a user receives about a system should influence the initial formation of trust. Supporting users' initial trust formation for a system is of paramount importance for manufacturers and vendors: without sufficient initially formed trust, customers may never buy a system or product. Therefore, in this paper we investigate different methods that those offering products and services can use to improve trust formation by offering additional trust-relevant information about the product to the user.
One method of providing information about a system or product is labeling. Labeling is already used for a variety of products; among the better-known examples are food and energy labels, which provide the user with information about a product. A similar labeling scheme may also benefit the formation of user trust in new products or services, but it is unclear how such labeling could be used to positively impact trust formation. How would the effectiveness of such trust-related labeling compare with other relevant information about the product or service, such as user feedback, controllability, or accountability? These questions are the main focus of this study.
1.3 Research Hypotheses
Offering customers additional information about the trustworthiness of their products could help increase their trust. We therefore hypothesize that labeling the trustworthiness of a system leads to a higher willingness to use the presented system. Additionally, we expect that providing more and different types of trustworthiness information about a system (multi-dimensional labeling) has a stronger impact on positive trust formation and willingness to use a system than uni-dimensional labeling.
We also expect multi-dimensional labels to yield higher reported trust formation than uni-dimensional labels at both the medium and the high label level.
User feedback that other users have given about a product or service should provide customers with valuable information about its trustworthiness. Furthermore, if this feedback is provided by an independent organization, we expect it to have a larger impact on trust formation than feedback provided by the company offering the system, because the information comes from a party without self-interest.
Users should feel more inclined to trust a system if they receive feedback and are able to exert control; therefore, we expect a significant influence of system control, system feedback, and support features on trust formation.
2 Method
2.1 Participants
A total of 32 participants (17 female) took part, with a mean age of 25.78 years (SD = 5.36). The study took about an hour per participant. Participants received 15 Euro for taking part in the study; participants recruited within the company received a small gift instead. The study was conducted at Virtual Vehicle Research GmbH.
2.2 Materials
Videos
The videos were each about three minutes long; two were developed by SCOTT partners, and one was not part of the project.
The “Safety-Access-System” video describes a system that reorganizes access to company grounds and makes it possible to regulate and monitor the behavior of drivers on the premises.
The smartphone application “Coyero” aims to help the user by combining different functions within a single app. The video shows several examples, such as renting a car via the app, unlocking the rented vehicle, and purchasing highway tolls.
The Smart Home learns the user's habits so that it can help with daily tasks and routines, claiming to enable a lifestyle in which daily tasks pose less of a challenge.
Labeling Systems
Two different labeling systems were used in this study. Descriptions of the labeling systems can be found in Table 1 and Table 2.
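For illustration only, the sketch below shows one way the two labeling schemes could be represented in code. The dimension names follow the description of the labels in this paper, while the grades and the data structure itself are hypothetical and not part of the study materials.

```python
from dataclasses import dataclass


@dataclass
class Label:
    """A product label with one grade per dimension ('A' = best)."""
    name: str
    dimensions: dict[str, str]


# Uni-dimensional label: a single grade reflecting the privacy of user data.
uni_dimensional = Label("Privacy label", {"privacy": "A"})

# Multi-dimensional label: grades for all five dimensions named in the paper
# (the specific grades shown here are invented for illustration).
multi_dimensional = Label(
    "Multi-dimensional label",
    {
        "privacy": "A",
        "product quality": "B",
        "manufacturing": "B",
        "usability": "A",
        "cost": "C",
    },
)
```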
Questionnaires
One of the questionnaires used in this study was the Affinity for Technology Interaction (ATI) scale. Additionally, participants answered a questionnaire consisting of two parts. The first part asked about the influence of customer support, system feedback, controllability, and user feedback on trust formation. In the second part, participants were introduced to one of the two labeling systems and answered questions about the label they had just been presented with, as well as about its influence on their trust formation towards the system they had just familiarized themselves with.
Task and Procedure
Participants first read about the study's intent and procedure and gave their informed consent. They were encouraged to ask questions at any point during the study, except during video presentation.
Before every video, participants were informed about the use case they were about to watch; after watching the video, they demonstrated their comprehension by answering questions about it.
Every participant was offered the opportunity to re-watch a given video in case some aspect of it remained unclear. Following the presentation of each video, participants imagined themselves using the system and were interviewed about problems they might encounter as well as the privacy and data protection settings they would find acceptable in the system. After this immersion they were asked about their willingness to use the system, followed by a questionnaire in which they rated different aspects of the system and their influence on individual trust formation. The second part of the questionnaire allowed participants to rate the label they were presented with and its influence on their perceived trust towards the system. Every participant saw and rated all use cases; the order of the use cases was pseudorandomized, as was the assignment to one of the two labels.
3 Results
Use cases were combined for data analysis because the answers for the use cases did not differ significantly.
Labeling
Both labels have a significant impact on the willingness to use a system (F(1, 28) = 4.88, p = .036), but a mixed ANOVA shows that the labels do not differ significantly from each other in their influence on participants' willingness to use a system (F(1, 30) = 0.16, p = .689). The interaction between label and time of presentation is also not significant (F(1, 30) = 0.14, p = .706).
The results show that the labels differentiate between the highest (A) and medium (C) label levels, but a significant interaction shows that only at the medium level is there a difference between the uni-dimensional (M = 0.71, SD = 1.91) and the multi-dimensional label (M = −0.98, SD = 1.16), F(1, 29) = 4.64, p = .039.
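As an illustration of how such an analysis could be reproduced, the following minimal sketch runs a mixed ANOVA with the pingouin library, assuming a long-format data frame with hypothetical column names (participant, label_type, level, trust_rating) and a hypothetical file name; it is a sketch under these assumptions, not the analysis script used in the study.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per participant x label level,
# with the between-subjects factor label_type ('uni' vs. 'multi').
ratings = pd.read_csv("label_ratings.csv")

# Mixed ANOVA: label level is the within-subjects (repeated) factor,
# label type is the between-subjects factor.
aov = pg.mixed_anova(
    data=ratings,
    dv="trust_rating",
    within="level",
    subject="participant",
    between="label_type",
)
print(aov[["Source", "DF1", "DF2", "F", "p-unc"]])
```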
User Feedback
A t-test shows that user feedback provided by an independent organization (M = 1.52, SD = 1.83) has a significantly higher influence on trust formation than user feedback provided by the company selling the system (M = 0.58, SD = 1.49), t(29) = −2.82, p = .0086. No feedback at all (M = −0.92, SD = 2.14) leads to significantly lower reported trust than feedback provided by the company selling the system, t(29) = 3.08, p = .005, and likewise to significantly lower trust than feedback provided by an independent organization, t(29) = 3.58, p = .001.
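A minimal sketch of these comparisons is shown below, assuming a wide-format data frame with one hypothetical column of trust ratings per feedback source and a hypothetical file name; paired t-tests are used because every participant rated all three feedback conditions.

```python
import pandas as pd
from scipy import stats

# Hypothetical data: one row per participant, one column per feedback source.
feedback = pd.read_csv("feedback_ratings.csv")

# Paired comparisons between the three feedback conditions.
print(stats.ttest_rel(feedback["company"], feedback["independent"]))
print(stats.ttest_rel(feedback["none"], feedback["company"]))
print(stats.ttest_rel(feedback["none"], feedback["independent"]))
```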
System Design and Support Features
In addition to user feedback and labeling, several system design and support features were investigated. Table 3 displays the results for these features with respect to reported trust formation. All system aspects differ significantly from zero, which shows that they have an impact on trust formation towards a system, independent of labeling. The table also shows that several system aspects have a higher impact on trust formation than labeling. The most influential is control, i.e., user control over a given system. Accountability, i.e., the accountability of a system producer towards a system user in case of problems with the system, also has a positive impact on trust formation. The usability of a system induces more trust than labeling, which shows that it is also important for trust formation to have a product that is easy to understand and use.
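The sketch below illustrates, under assumptions, how each system aspect could be tested against zero with one-sample t-tests; the column names and the rating scale (centered on zero, with positive values indicating increased trust) are hypothetical and stand in for the items reported in Table 3.

```python
import pandas as pd
from scipy import stats

# Hypothetical data: one column of per-participant ratings per system aspect.
aspects = pd.read_csv("system_aspect_ratings.csv")

for aspect in ["control", "accountability", "usability",
               "system_feedback", "customer_support"]:
    # One-sample t-test against a population mean of zero (no influence).
    result = stats.ttest_1samp(aspects[aspect], popmean=0.0)
    print(f"{aspect}: M = {aspects[aspect].mean():.2f}, "
          f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```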
4 Discussion
This study shows that while labeling has an important influence on trust formation, there are other system aspects, such as control, accountability, and usability, which have a comparable or even larger positive impact on trust formation. In this study, control referred to the level of control that users had over the access options that the installed systems had to their device. It stands to reason that in other systems, such as artificial intelligence systems, control would play a comparably important role for the user. This indicates a need for further research, because in today's rapidly advancing world users may be confronted with systems that they are not able to control, because they lack the ability or the means to do so. It is therefore necessary to study which aspects of control help increase trust formation.
Similarly, the results indicate that perceived accountability has a notable influence on trust formation. This has an important implication: a system vendor could increase customer trust by clarifying its responsibilities and accountability concerning system malfunctions or other usage issues.
Usability was also found to impact trust formation, such that a more usable system also makes it easier to form trust. However, it would be interesting to further explore how additional individualization and control options may interfere with the experienced usability of a system: adding options and user choices may increase the experienced system complexity.
As expected, labeling has a significant influence on trust formation towards a system and on the willingness to use it. However, in contrast to our hypothesis, participants did not indicate a higher willingness to use a system with the multi-dimensional label. It is interesting to consider what could have caused this, because the multi-dimensional label contains more information than the uni-dimensional label. The ratings of the influence of the label levels on trust formation even indicate that, at the medium level, the uni-dimensional label has an advantage over the multi-dimensional label. These results show that while information usually plays a very important role in trust formation, it is important to focus on the kind of information that is presented, as well as the format in which it is presented.
A possible explanation for these results is that participants in the multi-dimensional label condition faced a choice overload, as described by [4]. The multi-dimensional label may offer too much information in a format that does not suit the inexperienced reader. This may mean that the participants did not have a mental script [5] for analyzing the information they were presented with and therefore could not take advantage of this additional information. As a result, the multi-dimensional label did not provide the expected advantage over the uni-dimensional label. This may indicate a need for digital competence education to familiarize customers with (standardized) multi-dimensional labeling systems and enable them to build a mental script for using this information to form trust. We think future studies should investigate the digital competences citizens need to make informed decisions about complex technologies and to adapt them to their interests and needs. As an immediate next step, we recommend investigating trust formation in other user populations and broadening the range of age and background knowledge of participants in future research.
References
Lee, J.D., See, K.A.: Trust in automation: designing for appropriate reliance. Hum. Factors J. Hum. Factors Ergon. Soc. 46(1), 50–80 (2004)
Ajzen, I.: From intentions to actions: a theory of planned behavior. In: Kuhl, J., Beckmann, J. (eds.) Action Control. SSSSP, pp. 11–39. Springer, Heidelberg (1985). https://doi.org/10.1007/978-3-642-69746-3_2
Hoff, K.A., Bashir, M.: Trust in automation: integrating empirical evidence on factors that influence trust. Hum. Factors J. Hum. Factors Ergon. Soc. 57(3), 407–434 (2015)
Lurger, B., Vogrincic-Haselbacher, C., Caks, F., Anslinger, J., Dinslaken, I., Athenstaedt, U.: Consumer decisions under high information load: how can legal rules improve search behavior and decision quality? SSRN Electron. J. (2016). SSRN 2731655
Light, L., Anderson, P.: Memory for scripts in young and older adults. Mem. Cogn. 11(5), 435–444 (1983)
Acknowledgment
SCOTT (www.scott-project.eu) has received funding from the Electronic Component Systems for European Leadership Joint Undertaking under grant agreement No 737422. This Joint Undertaking receives support from the European Union’s Horizon 2020 research and innovation programme and Austria, Spain, Finland, Ireland, Sweden, Germany, Poland, Portugal, Netherlands, Belgium, Norway. The publication was written at VIRTUAL VEHICLE Research Center in Graz and partially funded by the COMET K2 – Competence Centers for Excellent Technologies Programme of the Federal Ministry for Transport, Innovation and Technology (bmvit), the Federal Ministry for Digital, Business and Enterprise (bmdw), the Austrian Research Promotion Agency (FFG), the Province of Styria and the Styrian Business Promotion Agency (SFG).