Abstract
Many smartphone apps pose a privacy risk to their users and use sensitive data in ways that are not visible during daily app usage. App permissions are accessible but not comprehensible for average users, leading to information asymmetry between app providers and users. We want to minimize these information asymmetries by making app information flows visible and understandable. To determine the information needed and how it should be presented, a survey (N = 227) and a laboratory study (N = 31) were conducted. In sum, users desired a credible tool that shows, explains, and evaluates the information flows of apps. Furthermore, it should provide options to act in a privacy-protective way. This led to a framework of user requirements, which can guide the development of analytic tools, nudge mobile application users towards privacy, help them make informed privacy decisions, and possibly change apps from the provider side.
1 Introduction
Starting in 2017, the majority of mobile phone users worldwide (51%) owned a smartphone [32]. An average smartphone user has 33 applications (apps) installed, 12 of which they use every day [3]. Furthermore, two and a half million apps are available in the Google Play Store [1] on Android, the most common operating system for smartphones [2]. Despite the importance and popularity of mobile apps, data protection and privacy issues create potential downsides for users. Previous analyses revealed that mobile apps might request more permissions than needed to accomplish their tasks [5]. Moreover, two-thirds of the tested Android apps suspiciously used sensitive data with implicit or explicit user consent [17]. Therefore, Android’s permission system has attracted considerable research interest in recent years [27].
In Android versions prior to 6.0, the user has to accept permissions during the installation process [28]. Research showed that these permission screens are hardly considered or comprehended [6, 7, 14], and thus users fail to remember granted permissions [15]. Furthermore, users often feel uncertain about the appropriateness of permission requests [14] and are more comfortable with additional explanations [16, 18]. In case of uncertainty, users rather rely on the expectation that apps only incorporate the personal information required by their functionality [15].
Since Android version 6.0, install-time permission requests are complemented by runtime permission requests [27], asking for user approval once a permission group is needed [19, 27]. Critically, however, permissions are explained exclusively in the smartphone settings, and only permission groups ranked as “dangerous” (e.g., for location or microphone access) are requested during runtime. Permissions classified as “normal” or “signature” are granted by default during installation [19]. One example of an install-time permission is internet access [27], requested by 91% of tested Android apps and often not secured, even though it is used to send personal data [4].
In sum, regardless of the Android version, there are large hurdles for most users in making informed privacy decisions during the usage of mobile apps. In particular, there is an information asymmetry between app users and providers. To address these issues, user-centered tools [9] are needed to provide users with clear information about the behavior of their apps. As insufficient usability prevents users from making effective use of the privacy functionality offered [36], the aim of our studies is to formulate guidelines for designing a user-friendly analytic tool that clears these hurdles and enables informed privacy decisions for mobile application users.
2 Related Work
Different analytic approaches have been developed to identify possible privacy risks of mobile apps [5]. For example, the static analytic approach analyzes the app code and identifies possible sources and sinks of data leakage. Dynamic monitoring investigates app behavior during runtime. Moreover, permission applications read the manifests of installed applications and notify users about the requested permissions [5]. However, there is little research on how these approaches could be combined to achieve the greatest possible transparency for users [16], and eventually nudge users to preserve privacy.
The aim of nudging is to improve individual well-being without limiting the freedom of choices [13]. In general, nudges are interventions that can, for instance, encourage users towards more beneficial privacy choices by accounting for hurdles in human decision-making [12]. One hurdle is incomplete or asymmetric information [13]. It is conceivable that these decision-making hurdles also apply to mobile app interaction.
In particular, previous authors suggested that “dedicated mobile apps can assist users with nudges in making beneficial privacy decisions” [13], p. 30, or, for example, that “a nudge may take the form of an alert that informs the user of the risk” [8]. This raises the question of how nudges should be designed. Acquisti et al. [13] already described six design dimensions:

1. Information: reduces information asymmetries and provides a realistic perspective of risks,
2. Presentation: contextual cues in the user interface to reduce cognitive load and convey an appropriate risk level,
3. Defaults: configuring the system according to the user’s expectations,
4. Incentives: motivate users to behave according to stated preferences,
5. Reversibility: limits the impact of mistakes,
6. Timing: defines the right moment to nudge.

These nudging dimensions can be assigned to mobile privacy interventions described in the literature. For example, Dogruel, Jöckel and Vitak [24] examined different default settings during the decision procedure. The authors found that privacy default features are highly valued by users.
Bal and colleagues [25] studied nudges aimed at the presentation and timing dimensions. First, the authors derived design guidelines from the literature and applied them to a permission app. Second, they conducted a user study, which showed that privacy concerns in the permission app group were significantly lower than in the control group. The authors concluded that a permission app designed in a usable manner could be a promising tool to address privacy concerns. However, it remains unclear how the authors designed the control group’s alternative approach in detail and which characteristics account for a usable permission app.
Another study conducted by Almuhimedi et al. [26] focused on information, presentation, and timing. The authors examined behavioral consequences of a weak (daily message) and a strong nudge (daily overlays). Results showed that the weak nudge led to privacy-protective behavior, and that strong privacy nudges can reinforce this effect. The authors derived three design recommendations: First, an effective nudge should be personalized (e.g., adapted to previous user decisions). Second, users should be able to configure the nudges (e.g., the timing and form of delivery). Third, a nudge should be salient without being annoying (especially with repetitive notifications). The authors’ recommendations on the individuality and salience of a nudge are valuable but provide little help for understanding app users and their privacy requirements comprehensively.
Kelley et al. [14] also addressed the information, presentation and timing dimensions. They compared a Google Play Store permission screen with a modified privacy facts sheet displayed before the app download. Results showed a significant increase in selecting privacy-friendlier apps. However, all participants wanted a better understanding of why apps request certain permissions. Therefore, the authors called for information on the frequencies and purposes of permission utilization [14].
In summary, previous related work covered the development of different analytic approaches enabling the identification of possible privacy risks raised by and during the usage of mobile apps [5, 18]. Combining these approaches within one tool should achieve the greatest gain for the user [13, 16], and thus offer strong potential to nudge users towards privacy. Previous study results [14, 25, 26] serve as a starting point, however, questions remain how analytic tools should be designed in a user-centered way to clear hurdles regarding information and presentation of complex results of risk analysis.
3 Research Question
The aim of our research was to formulate user-centered design guidelines for a mobile application analytic tool to overcome hurdles of privacy decisions with regard to information asymmetries and the presentation of (complex) information [13, 16]. We derived the following research questions:
1. Which user requirements regarding the information provision and presentation of mobile application analytic tools need to be considered?

2. Which guidelines can be derived from the user requirements for mobile application analytic tools?
We used a two-step approach to investigate our research questions and conducted an online survey to identify users’ informational needs. Subsequently, we ran a laboratory study to formulate user requirements, especially in terms of the presentation dimension.
4 Study 1 – Online Study
4.1 Materials and Methods
Sample.
Our online survey was conducted in Germany. We received N = 227 completed surveys from 81 female (36%) and 146 male respondents. The respondents were on average 35 years old (SD = 12.22). Our sample differed from the German population [10] in terms of gender and age, but corresponded to the age distribution of German smartphone users [21, 23]. The majority (78%) held a university degree, which exceeds the average German education level (31%, [10]). Furthermore, our respondents indicated an average smartphone use of about 2 hours a day (M = 109 min), in accordance with available German studies (140 min of daily usage; [22]). A quarter of our participants indicated using 11 to 20 apps, which is also the most frequently ranked category among German smartphone users [31]. The distribution of mobile operating systems among our survey respondents (69% Android, 25% iOS and 4% Windows) was representative as well [2]. Most respondents indicated using a messenger app (86%) and a navigation/map app (85%). About half of the respondents used a weather app and 27% stated using a shopping app. The ordering followed official download statistics [29, 30]. In sum, our sample was broadly representative of German smartphone users.
Procedure.
The respondents of our online survey were invited via newsletter, personal, and panel-based invitations. For compensation, participants could take part in a raffle for 20 Euros. The survey included open- and closed-ended questions. It took about 30 min to complete and commenced with a short explanation of its purpose, a guarantee of data anonymization, and consent to voluntary participation.
The first part included items examining the respondents’ perception of privacy threat for different data types that are more or less necessary for the operation of different app groups. To ensure personal relevance, we started with the query whether participants use navigation/map apps, weather apps, messenger apps, and/or shopping apps. If they confirmed, we asked them to indicate their level of agreement (ranging from 1 = “strongly disagree” to 6 = “strongly agree”) with the statement “I feel my privacy is threatened if my [map/navigation app, messenger app, weather app, or shopping app] uses…”. We always presented 15 different types of data with a short explanation (see Appendix A.1). Although this list is not exhaustive, it provides a reasonable set of data that is necessary or unnecessary for fulfilling an app’s service. Hence, the level of necessity of each data type varies according to the respective app group.
In the second part, the participants answered an open-ended question: “How could privacy protection be improved in the mobile sector? Do you have any requests or ideas for implementation?”. The survey concluded with questions about demographics and smartphone/app usage (number of installed apps; operating system; estimation of app usage time) and an optional question on previous negative experiences with privacy violations.
Data Analysis.
Quantitative items were analyzed descriptively using the median (Mdn), mean (M), and standard deviation (SD). Depending on the distribution, nonparametric or parametric inferential statistics were applied to identify differences between dependent variables. Relationships were analyzed via bivariate parametric or nonparametric correlation coefficients.
We used inductive category formation [35] to analyze the qualitative answers. Categories were built bottom-up from the participants’ answers, which were split into single suggestions (multiple answers possible). Two levels of categories were formed within this process. To comply with the requirements of exclusiveness and comparable degrees of abstraction, only the second-level categories are reported. A second coder was included to ensure the reliability of the codings. Intercoder reliability (unweighted kappa) amounted to κ = .79, indicating “excellent” (>0.75) [37] agreement. Coding discrepancies were resolved, and the relative response frequencies of the consensus solution were analyzed descriptively to identify the most common suggestions.
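The unweighted kappa reported here is straightforward to compute: it is the observed coder agreement corrected for the agreement expected by chance from each coder's marginal category frequencies. A minimal Python sketch, using hypothetical codings rather than the study's actual data:

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Unweighted Cohen's kappa: chance-corrected agreement of two coders."""
    n = len(coder1)
    # Observed proportion of agreement
    p_o = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Chance agreement from each coder's marginal category frequencies
    c1, c2 = Counter(coder1), Counter(coder2)
    p_e = sum((c1[cat] / n) * (c2[cat] / n) for cat in set(c1) | set(c2))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of eight suggestions into three categories
coder_a = ["A", "A", "B", "B", "C", "C", "A", "B"]
coder_b = ["A", "A", "B", "C", "C", "C", "A", "B"]
kappa = cohens_kappa(coder_a, coder_b)  # ≈ .81, "excellent" by the >.75 criterion
```

A value above .75 would, following the criterion cited above, indicate excellent agreement for such hypothetical codings.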
4.2 Results
Quantitative Results.
To obtain an overview of participants’ evaluation of privacy threat, we analyzed their ratings on the 15 different data types and the 4 different app groups. The mean evaluation was M = 5.16 (“strongly agree”; SD = .97), indicating respondents feeling their privacy was threatened in general. Further, we calculated participants’ mean agreement on the potential threat for each of the 15 data types across all four app groups as well as for each separate app group (for all 15 data types see Appendix A.2). The distribution of descriptive data suggests that the evaluation of data types differed depending on the need for requesting data from the four app groups. To verify this assumption, we defined three different levels of necessity (1 = “data is necessary”, 2 = “data is partially necessary”, 3 = “data is not necessary” to provide the app service). An expert group (N = 9 persons working as researchers in the field of mobile security) allocated each data type and app group combination to these three necessity levels. Figure 1 presents participants’ mean agreement with the privacy threatening potential of data requests separated by the data necessity.
Evaluations appeared in the following plausible order: necessary data (M = 4.53; SD = 1.13; n = 218), partially necessary data (M = 5.14; SD = 0.99; n = 218), unnecessary data (M = 5.29; SD = 1.05; n = 219). To identify statistical differences between these three necessity levels, we conducted a Friedman’s ANOVA, as the Kolmogorov-Smirnov test showed that the data for all three levels of necessity violated the assumption of distribution normality (Dnecessary(217) = .12; p < .001; Dpartly_necessary(217) = .19; p < .001; Dunnecessary (217) = .21; p < .001). Results revealed a significant difference (χ2(2) = 136.75, p < .001) between the perceived privacy threat across the defined necessity levels. According to the post-hoc Wilcoxon-signed-rank-tests, the respondents viewed using highly necessary data (Mdn = 4.75) as significantly less threatening (z = 9.74; p < .001; r = .66) than partially necessary data (Mdn = 5.40). They rated unnecessary data (Mdn = 5.70) as significantly more threatening (z = 3.05; p = .002; r = .21) than partially necessary data, however the effect size was rather small [33]. Accordingly, there was also a significant difference between highly necessary data and unnecessary data (z = 9.44; p < .001; r = .63). All comparisons were made using a Bonferroni correction (α = .0167).
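For readers unfamiliar with the procedure, Friedman's ANOVA ranks each participant's ratings within-subject, sums the ranks per condition, and computes χ² = 12/(nk(k+1)) · ΣR_j² − 3n(k+1). A minimal sketch with hypothetical ratings (the tie correction applied by full statistics packages is omitted for brevity):

```python
def friedman_chi2(ratings):
    """Friedman chi-square for n subjects x k repeated conditions.

    ratings: list of per-subject tuples; ranks are computed within each
    subject, with ties receiving average ranks. Tie correction omitted.
    """
    n, k = len(ratings), len(ratings[0])
    rank_sums = [0.0] * k
    for row in ratings:
        order = sorted(range(k), key=lambda j: row[j])
        i = 0
        while i < k:  # assign average ranks to runs of tied values
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            for m in range(i, j + 1):
                rank_sums[order[m]] += (i + j) / 2 + 1
            i = j + 1
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)

# Hypothetical threat ratings: (necessary, partially necessary, unnecessary)
data = [(4, 5, 6), (5, 4, 6), (4, 6, 5), (4, 5, 6)]
chi2 = friedman_chi2(data)  # compare against a chi-square distribution with k-1 df
```

The resulting statistic is then compared against the chi-square distribution with k − 1 degrees of freedom, here df = 2 as in the reported test.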
We examined how individual difference variables in our sample (age, gender, prior experiences with privacy violations) related to users’ perceived threat evaluations. We used Spearman’s rho (rs) for all correlations, as the assumption of distribution normality was violated across the different levels of necessity (results presented above) and for the average perception collapsed across all types of data (Doverall(219) = .19; p < .001). The overall perception of privacy threat increased slightly with advancing age (rs = .20; p = .003). Separated by level of necessity, only the perceptions for highly necessary (rs = .20, p = .005) and unnecessary data (rs = .19; p = .005) showed small significant correlations with users’ age. There were no significant differences between males’ and females’ perception of privacy threat, either across separate data necessity levels or for overall necessity.
The same procedure was applied to analyze previous experience with privacy violations. No differences existed between those who reported such experiences and those who did not regarding the overall perception of privacy threat across all data types and app groups (U = 4471.50; z = −1.78; p = .075; r = −.12). Furthermore, we did not find differences between these two groups in the necessary (U = 4700.00; z = −1.11; p = .268; r = −.08) or partially necessary data condition (U = 4788.50; z = −.90; p = .366; r = −.06). There was one significant difference between those who had experienced a privacy violation (Mdn = 5.83; n = 76) and those who had not (Mdn = 5.60, n = 138) for unnecessary data specifically (U = 4345.00; z = −2.07; p = .038; r = −.14). This indicates that users who have experienced a privacy violation tend to be more sensitive to the use of unnecessary data. However, the effect size was rather small.
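The U, z, and r values reported above follow the Mann-Whitney U procedure: pool and rank both groups, derive U from one group's rank sum, standardize U via the normal approximation, and compute the effect size r = z/√N. A sketch with hypothetical data (the variance tie correction used by full implementations is omitted):

```python
import math

def mann_whitney_u(group1, group2):
    """Mann-Whitney U with normal-approximation z and effect size r = z / sqrt(N)."""
    pooled = group1 + group2
    n1, n2 = len(group1), len(group2)
    total = n1 + n2
    # Average ranks across the pooled sample (ties share the mean rank)
    order = sorted(range(total), key=lambda i: pooled[i])
    ranks = [0.0] * total
    i = 0
    while i < total:
        j = i
        while j + 1 < total and pooled[order[j + 1]] == pooled[order[i]]:
            j += 1
        for m in range(i, j + 1):
            ranks[order[m]] = (i + j) / 2 + 1
        i = j + 1
    r1 = sum(ranks[:n1])                       # rank sum of group 1
    u1 = r1 - n1 * (n1 + 1) / 2
    u = min(u1, n1 * n2 - u1)                  # report the smaller U
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (total + 1) / 12)  # no tie correction
    z = (u - mu) / sigma
    effect_r = z / math.sqrt(total)
    return u, z, effect_r

# Hypothetical threat ratings: with vs. without prior privacy violations
violated = [5.9, 5.8, 5.7, 6.0, 5.5]
not_violated = [5.3, 5.6, 5.1, 5.4, 5.8, 5.2]
u, z, r = mann_whitney_u(violated, not_violated)
```

Reporting the smaller U together with a negative z mirrors the convention used in the results above.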
Qualitative Results.
We asked for suggestions to improve mobile privacy protection within our online survey. In total, n = 154 respondents (68%) answered this open-ended question, resulting in 240 single suggestions. We assigned these suggestions to six categories: “security techniques”, “functions strengthening user control”, “increased transparency for the user”, “legal control and punishment”, “social and economic change of values”, and “avoidance of service usage” (see Table 1 for illustration). Most statements could be assigned to the categories “functions strengthening user control” (36%), “security techniques” (26%), and “increased transparency for the user” (16%).
4.3 Discussion
The purpose of our online study was to identify users’ informational needs regarding privacy-invading app behavior and to contribute to a set of user requirements. The findings indicate that users’ perceptions of privacy threat are linked to the necessity of data requests. The survey respondents perceived the use of unnecessary data as more threatening to their privacy than that of (partially) necessary data. In line with Lin et al. [18], these results emphasize the importance of a reasonable relation between the necessity of data usage and users’ decisions on mobile apps. We conclude that transparency in terms of unnecessarily used data should be particularly emphasized by application analytic tools. For this purpose, crowdsourced perceptions or (in our case) app-group-specific threat perceptions across different data types could supplement automated app analyses and serve as a user-based indicator of privacy risk or as a default (in line with [13]).
With regard to individual differences in the survey, we discovered only small, though significant, correlations between participants’ age and their perceived level of privacy threat. Furthermore, we found a significant difference, albeit with a small effect size, for unnecessary data between respondents who had and had not experienced a past privacy violation. Even though these effects improve our understanding of privacy behavior, they do not warrant adjusting privacy nudges and tools to individual user characteristics in general. Therefore, we refrain from recommending customized privacy nudges based on these variables per se. An individual adjustment would require access to this personal data, which directly counteracts privacy protection.
Respondents’ qualitative statements underlined requests for “functions strengthening user control” and “security techniques” actively applied to protect privacy. Furthermore, they desired “increased transparency” regarding data access. Additionally, the overall perception of privacy threat caused by data access was high. This underlines the high level of concern app users in Germany generally have.
In conclusion, these results confirm German app users’ perception of incomplete and asymmetric information. To overcome incomplete and asymmetric information, transparency about app behavior is a prerequisite for privacy-related decision making [20] and privacy-preserving behavior. Static and dynamic analyses, for example, can deliver detailed information on an app’s information flow. The major challenge is to adjust the presentation of this information to fit user requirements. For this purpose, the laboratory study results below could provide valuable assistance.
5 Study 2 – Laboratory Study
The laboratory study’s aim was to identify presentation requirements that must be addressed when developing a mobile application analytic tool. We employed the user experience (UX) concept to assess the presentation dimension as suggested by Acquisti [13]. The CUE-Model (components of UX model) [34] enabled a subjective evaluation of the applications and thus served as a theoretical background. In the main study, we compared the UX of three privacy apps. In preparation, we conducted a pre-study to adjust the UX-facets, enabling the main evaluation of the applications and the selection of the privacy apps.
5.1 Pre-study
Materials and Method.
The pre-study’s first aim was to identify useful UX-facets for the assessment of privacy apps. The term “privacy app” is used to describe both permission apps and mobile application analytic tools. Whereas permission apps only depict the manifest of a scanned app to provide information about its permissions, mobile application analytic tools use (e.g., static or dynamic) analyses to gain information.
Identifying useful UX-facets for the assessment of privacy apps was an exploratory process that also incorporated the results of the online survey. First, freely available permission applications were downloaded from the Google Play Store. Second, a UX expert explored the apps to obtain a first impression of permission apps. Next, a suitable UX questionnaire (the AttrakDiff2 [11]) was selected that was adaptable to the assessment of privacy apps. After choosing the UX questionnaire, the permission apps were explored again and the facets (see Table 4 in the Appendix) were adapted and extended.
The pre-study’s second aim was to select permission apps to serve as comparison tools for a mobile application analytic tool. The mobile application analytic tool (Fig. 2) was used because it was similar to the tool we wanted to develop (static and dynamic analyses included). For the selection of permission apps, two UX experts evaluated 17 permission apps using the adjusted facets. They rated the extent to which the permission applications fulfilled the criteria of the facet definitions. Furthermore, the facets were weighted by two other UX researchers to calculate an aggregated value. On this basis, two privacy apps (one less and one more user-friendly) were selected.
Results.
In the pre-study, nine UX-facets (see Table 4) relevant to evaluating permission apps were identified. The two (more and less user-friendly) permission apps were compared to the mobile application analytic tool. Figure 2 presents sample screenshots of the three apps.
5.2 Main Study
Material and Method
Sample.
Our sample consisted of N = 31 participants (65% female; Mage = 23 years, SDage = 2.73). All participants were students and received credit points for participating. They reported using their smartphone an average of 116 min per day (SD = 65 min), which is representative of German smartphone users [22]. Most participants (29%) indicated using 11 to 20 apps, which is typical for German users [31], and the mobile operating systems used (71% Android, 26% iOS, and 3% Windows) were also comparable [2]. Most participants (77%) had never used a permission app before.
Procedure.
Two Android smartphones were available to allow for parallel testing of participants. The mobile application analytic tool and the two permission apps were preinstalled. Additionally, four other apps (Skype, eBay Kleinanzeigen, wetter.com, WhatsApp) were installed to be examined via the privacy apps. Participants first received an introduction to the test’s purpose and signed a consent form about the data recording. A monitor presented the three tasks (within-subjects design, randomized order) and the questions assessing the privacy apps (closed- and open-ended). The main study concluded with questions on demographics and individual smartphone usage behavior.
Study Design and Data Analysis.
The main study’s independent variable was the tested privacy app (mobile application analytic tool vs. permission app 1 vs. permission app 2; see Fig. 2).
The participants completed three questions for each privacy app, for example: “Can the application (Skype/WhatsApp/eBay Kleinanzeigen) collect location data?” and “Does the permission application provide any information about the risk of eBay Kleinanzeigen/Skype/WhatsApp?”. The facets identified during the pre-study served as dependent variables to evaluate the apps. The participants indicated the extent to which the three apps fulfilled the criteria of the facets’ definitions (from −3 = “not fulfilled” to +3 = “fulfilled”). In addition, participants assessed the importance of each UX-facet (from 1 = “not at all important” to 10 = “extremely important”). The ratings on the UX-facets “credibility” and “information about the analytic tool and provider” were omitted because of the time-saving pre-installation of the privacy apps. In addition, the participants could provide qualitative statements on perceived problems and the amount of support required. After completing all tasks, participants recorded their perceived advantages (“What do you like about this permission application?”) and disadvantages (“Do you see room for improvement?”).
We used inferential statistics to analyze the quantitative data and a deductive category assignment [35] to categorize the qualitative responses. In the subsequent sections, we first present the quantitative assessment results, followed by a summary of the open-ended question responses.
Results
UX-Facets – Assessment of the Privacy Apps.
Participants indicated the extent to which the definitions of the UX-facets (Table 4) were fulfilled. Friedman’s ANOVA was used to test whether the privacy apps differed in the facet assessments (Table 3 presents the results). Permission app 2 received the best evaluations across all UX-facets; its differences from permission app 1 and the mobile application analytic tool were significant, except for the difference from permission app 1 in the “navigation” facet. Permission app 1 received significantly higher ratings than the mobile application analytic tool in the following UX-facets: “overall attractiveness”, “navigation” and “comprehensibility”. These two applications did not differ in the following facets: “description and valuation of permissions”, “options for action”, “stimulation” and “identity”.
UX-Facets – Level of Importance.
The participants rated the overall importance of the nine UX-facets. The aim of this analysis was to determine which facets are most important for assessing the user experience of privacy apps. The Wilcoxon signed-rank test was used due to the violation of the distribution normality assumption, based on the Kolmogorov-Smirnov test. We tested whether the distributions of the single facets differed from the median (Mdn = 8.00) of all facets. The results (Table 3) show that participants rated the facets “description and valuation of permissions”, “credibility”, “comprehensibility” and “navigation” as significantly more important than the median of all facets.
Additionally Required Functions.
After assessing the privacy apps, the participants could indicate whether they felt any functions were missing. The most common answers included: (de)activation of single permissions (8 participants), information on the necessity of permissions for the function of a scanned app (4), suggestions for alternative applications with less risk (4), and individualization of the risk score (2).
Frequencies of Qualitative Responses.
Participants’ responses to the open-ended questions (problems/support demand, positive/negative aspects) yielded a total of 536 answers. Identical answers given by the same participant for the same app were counted only once. The most frequent responses could be assigned to the facets “description and valuation of permissions” (40%), “navigation” (37%) and “stimulation” (15%). Only 6% of the replies could be allocated to the other facets (3% “overall attractiveness”, 2% “comprehensibility”, 1% “options for action”), whereas 2% could not be assigned to any facet. No answers could be attributed to the facets “identity”, “information about the analytic tool and the provider” and “credibility” (Table 2).
Content of Qualitative Responses.
Most answers could be assigned to the UX-facet “description and valuation of permissions”. The evaluation of the mobile application analytic tool revealed that 65% of the participants criticized the English terms and that the permissions were difficult to understand (61%). Furthermore, they criticized the plain listing of permissions, their poor explanations, and the absence of information on possible consequences for the user’s privacy. Participants (42%) were bothered by the use of technical terms and the absence of a risk score (29%). Participants mainly (81%) criticized permission app 1 for the lack of any risk valuation of a scanned app. On the other hand, almost one third (29%) liked the categorization of single permissions into groups. The participants appreciated that permission app 2 explained all single permissions and the possible consequences/risks (45%). Also valued was an overall numerical risk score for all installed apps (39%) and for a scanned app with additional information (36%). In contrast, some participants remarked that they were not pleased with the lack of a risk valuation for each single app (19%) and the non-transparent calculation of the overall risk score (10%).
Many of the responses could be assigned to the “navigation” facet (37%). Most participants (71%) criticized the mobile application analytic tool for a confusing presentation of information when permissions were sorted by scanned apps. In contrast, some (26%) stated that the sorting by permission groups supported navigation. Participants broadly assessed the “navigation” facet of permission app 1 very positively, describing it as “clear” (68%) and noting that the sorting of information by scanned apps and permission groups was implemented very well (55%). A few participants (16%) stated that the “hierarchical structure of the privacy app was good”. Similarly, participants widely assessed the navigation of permission app 2 as “clear” (84%) and “simple, intuitive and fast” (74%). Here, several participants (39%) felt that a function to sort permissions by permission groups across all scanned apps was missing.
The mobile application analytic tool scored well regarding the “stimulation” facet. Participants appreciated the presentation of potential risk (32%), especially with the use of traffic light colors. Nearly one third (29%) criticized the design of the permission list due to the extensive use of red. More specifically, these participants felt that permission app 1 “seemed to be incomplete and without any highlighting of the potential risk” of a scanned app. Comparable to the previously described UX-facets, participants responded positively to the permission app 2 regarding the stimulation facet. Respondents stated that the application’s design was “attractive and clear”. They also perceived the presentation of potential risk via traffic light colors as intuitive (32%).
Only a few answers could be assigned to the “overall attractiveness” UX-facet. For instance, one participant stated that permission app 1 provided the essential functions, and 13% of the participants evaluated permission app 2 as useful. Some statements could be considered part of the “comprehensibility” facet; for example, one participant acknowledged the simple language of permission app 1. Only a few comments about the “options for action” facet existed: two participants (7%) liked the option to delete scanned apps, whereas another wanted a function that provides suggestions for alternative applications.
5 Discussion
User Experience Facets
Overall Attractiveness.
Participants rated the overall attractiveness as fairly important. However, qualitative responses could rarely be assigned to this facet, which could possibly be explained by its global nature. Participants were asked whether they needed support or encountered problems while using the privacy apps; answers to these questions were mainly specific and could therefore be assigned to other facets.
All three apps tested differed in their attractiveness. Permission app 2 was considered more attractive than permission app 1. However, both were rated more attractive than the analytic tool. This is reflected in the assessment of the other UX-facets. Therefore, we hypothesize that the “overall attractiveness” could serve as a global measurement for the overall UX-evaluation of a mobile application analytic tool.
Navigation.
The importance level of the navigation facet and the number of assigned qualitative responses were rather high. Participants appreciated a simple, fast, hierarchical and intuitive navigation. In addition, they acknowledged the opportunity to switch between sorting information by scanned app and by permission groups. The quantitative assessment partly reflected these navigation aspects. The mobile application analytic tool, lacking a hierarchical navigation structure, received the lowest score. The permission apps did not differ, although permission app 2 provides no sorting of information by permission groups. A possible explanation for this balanced assessment is that participants perceived the navigation of permission app 2 as clearer and simpler than that of permission app 1. Therefore, sorting by permission groups does not seem to be crucial.
Description and Valuation of Permissions.
Participants rated this facet as the most important, and it provoked the most comments. To comprehend the permissions, it was essential that single permissions were explained; grouping permissions enhanced understanding, whereas foreign languages or technical terms diminished it. In addition to the explanation of permissions, participants felt that a valuation of the arising risks was important. They questioned whether there was a compelling necessity for using certain permissions and what consequences could arise. Although participants appreciated an overall risk score, some wanted a numerical score for each scanned app. Based on the open-ended answers regarding additional functions, it appears that individualizing such a risk score would also be useful. Overall, the results indicate that the contents assigned to this facet are the most important for developing a mobile application analytic tool.
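The individualized risk score that participants asked for could, for instance, be realized by letting users weight permission groups according to their personal sensitivities. The following Python sketch is purely illustrative: the permission group names, baseline weights, and scoring formula are our assumptions, not part of any tool tested in the study.

```python
# Illustrative only: per-app risk score with user-adjustable weights.
# Group names and sensitivity values are invented for this sketch.

# Baseline sensitivity per permission group (0 = harmless, 1 = highly sensitive).
BASE_SENSITIVITY = {
    "LOCATION": 0.9,
    "CONTACTS": 0.8,
    "MICROPHONE": 0.9,
    "STORAGE": 0.5,
    "NETWORK": 0.3,
}


def risk_score(permissions, user_weights=None):
    """Return a 0-100 risk score for one scanned app.

    user_weights lets a user individualize the valuation, e.g. weighting
    LOCATION higher if location privacy matters most to them.
    """
    weights = dict(BASE_SENSITIVITY)
    if user_weights:
        weights.update(user_weights)
    # Unknown permission groups default to a medium sensitivity of 0.5.
    relevant = [weights.get(p, 0.5) for p in permissions]
    if not relevant:
        return 0.0
    # Average sensitivity of the requested groups, scaled to 0-100.
    return round(100 * sum(relevant) / len(relevant), 1)
```

Averaging is only one possible aggregation; a real tool might instead take the maximum, so that a single highly sensitive permission dominates the score.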
Comprehensibility.
Participants rated the “comprehensibility” facet as important; however, the number of qualitative responses was rather small. One possible reason is that the tested privacy apps were easy to understand. Another is that the privacy apps, with their specific functions, only had limited components. Thus, there was little potential for content to be judged incomprehensible and thereby assigned to the “comprehensibility” facet rather than to the “description and valuation of permissions” facet.
Options for Action.
Participants rated the “options for action” facet as moderately important, and there were only a few qualitative responses. Some participants wanted a function to (de)activate single permissions and suggestions for alternative apps. The small number of qualitative responses was unsurprising, since the tested apps did not differ in their options. It was therefore remarkable to find significant differences between the three systems (permission app 2 being rated higher). This result could probably be explained by a halo effect, as permission app 2 received the most positive evaluations overall. However, because of the main study’s procedure, the importance of this facet was probably underestimated. Further research should examine the “options for action” facet in greater detail.
Stimulation.
This facet received the third most open-ended responses. Participants perceived a simple design and the presentation of a scanned application’s privacy risk through traffic light colors as intuitive. However, they rated the importance of the facet as rather low, which seems to contradict the number of qualitative responses. Upon closer inspection, most responses referred to the design of core functions of the apps. In conclusion, despite the rather low rated level of importance, the general findings assigned to the “stimulation” facet were crucial for supporting the essential contents and functions of a mobile application analytic tool.
Identity.
Judging by the user rating of its importance and the few qualitative responses, the contents assigned to the “identity” facet of the mobile application analytic tool did not appear very important. As with other UX-facets, permission app 2 differed significantly from permission app 1 as well as from the mobile application analytic tool. A possible explanation could be the high correlation (r = .55) between the “identity” and “stimulation” facets previously reported in the original AttrakDiff 2 literature [11].
Information About the Analytic Tool and the Provider.
Based on the rated importance level and the few qualitative responses, this facet does not seem paramount for developing mobile application analytic tools. This result is likely because our participants did not download the privacy apps and therefore were limited in the available information during the study. For this reason, the “information about the analytic tool and the provider” facet requires further investigation.
Credibility.
A permission app is credible if it refrains from requesting permissions for its own purposes. The participants could not assess whether this facet was fulfilled, because the applications were preinstalled. Nevertheless, participants rated the importance of the “credibility” facet as very high: it received the highest median among all evaluated UX-facets. Therefore, “credibility” should be further investigated in a setting that includes the tested tool’s download procedure.
Future Work and Limitations
During the assessment, we focused on the usage of mobile application analytic tools. Therefore, investigation of the “information about the analytic tool and the provider” and “credibility” UX-facets was limited due to the exclusion of the app download process. Thus, future studies should assess these facets via a holistic interaction process including download, scanning, and usage of mobile application analytic tools.
Our test setup most likely influenced the quantitative and qualitative UX-facet results. Basing user evaluations on three selected tasks may have led to overestimating the importance of task-related UX-facets and may have provoked a higher number of qualitative responses to these facets (e.g., “description and valuation of permissions” or “options for action”).
Another limitation is that the UX-facets cannot be considered perfectly distinct from each other. However, the general findings (regardless of their facet assignment) were used to derive requirements and guidelines for designing mobile application analytic tools (see Sect. 6. Overall Conclusion).
6 Overall Conclusion
Both studies lead to a collection of user requirements, thus answering the first research question. In this section, these user requirements are used to derive guidelines for developing a mobile application analytic tool (research question 2). In summary, increased transparency is the central purpose. Moreover, a credible tool should explain and valuate information flows of mobile apps. Furthermore, it should provide options to act in a privacy protective way. Figure 3 presents the guidelines and accompanying sub-guidelines, which are not mutually exclusive and thus considered as a framework.
Increase Transparency to Overcome Information Asymmetries.
In accordance with previous research [13], our results underline the demand for increased transparency about app behavior. Hence, our first guideline is: “Increase transparency of mobile application information flows to overcome incomplete and asymmetric information access.” Through a combination of different (static and dynamic) analytic approaches, it is possible to gain the maximum informational benefit for the user [16]. This guideline clearly refers to the information dimension, as the hurdles are asymmetric and incomplete information as well as availability and overconfidence biases [13] in human decision making. Furthermore, we concluded, as other authors have [20], that transparency serves as a prerequisite: the absence of transparency is the first hurdle to clear for making informed privacy decisions within the mobile app context.
Explain and Valuate Mobile Application Behavior.
Previous research has indicated that most app users do not comprehend permissions [6, 7]. This corroborates our lab study findings, given the central importance of the “explanation and valuation of permissions” UX-facet. The results showed that a mobile application analytic tool should aid comprehension on two levels. (1) Technical terms, acronyms, and foreign languages should be avoided, since these confused participants in our main study. This led to the guideline “Explain permissions as simply as possible.”, assigned to the information dimension [13]. Based on the lab study results, a usable and hierarchical navigation can serve as support; we attribute this sub-guideline to the presentation dimension [13]. (2) Lab study participants mentioned difficulties understanding potential personal risks. Therefore, we formulated the information guideline: “Evaluate the potential risk of a mobile application and show possible consequences to the user.” This is closely related to our survey result underlining how important it is for users to understand the necessity of data requests. The guideline could also be assigned to the default dimension in a broader sense, as crowd-sourced privacy risk evaluations [18] can serve as defaults in a tool. In addition, some lab study participants asked for information on the necessity of data requests; we therefore recommend that a tool provide such information. The use of traffic light colors seems appropriate to highlight potential risk, and a comprehensible numerical score can aid the user’s risk assessment.
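The traffic-light presentation that participants found intuitive amounts to mapping a numerical risk score onto three color bands. A minimal Python sketch follows; the 0–100 scale and the evenly split thresholds are our assumptions, not values from the tested tools.

```python
# Illustrative only: map a numerical risk score to a traffic-light color.
# The thresholds (thirds of a 0-100 scale) are assumptions for this sketch.

def traffic_light(score):
    """Map a 0-100 risk score to 'green', 'yellow', or 'red'."""
    if not 0 <= score <= 100:
        raise ValueError("score must be within 0-100")
    if score < 34:
        return "green"   # low risk
    if score < 67:
        return "yellow"  # medium risk: worth reviewing the permission list
    return "red"         # high risk: show consequences and alternatives
```

In practice the thresholds would need calibration, e.g. against crowd-sourced risk evaluations, so that “red” actually corresponds to apps most users consider risky.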
Provide Options for Action to Enable Privacy Preserving Behavior.
To nudge privacy preserving behavior, it is necessary to go beyond showing, explaining, and evaluating. The online survey’s results showed a demand for functions that strengthen user control, which corresponds to the lab study participants’ desire for more options for action. According to these results, mobile application analytic tools should “Provide comprehensive options for action to enable privacy preserving behavior.” A tool could show users possible options for action, for instance altering existing privacy (pre-)settings or switching to a more privacy preserving app. Therefore, we assigned this guideline to the information and reversibility dimensions [13].
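One such option for action, suggesting a more privacy preserving alternative, could be sketched as follows. The catalog structure, field names, and subset criterion below are hypothetical illustrations, not part of any evaluated tool.

```python
# Hypothetical sketch of an "options for action" feature: given a scanned app
# and a catalog of apps, suggest same-category alternatives that request
# strictly fewer sensitive permission groups. All names here are invented.

def suggest_alternatives(app, catalog):
    """Return catalog apps in the same category whose permissions are a
    strict subset of the scanned app's, sorted by how few they request."""
    candidates = [
        other for other in catalog
        if other["category"] == app["category"]
        and other["name"] != app["name"]
        and set(other["permissions"]) < set(app["permissions"])
    ]
    return sorted(candidates, key=lambda a: len(a["permissions"]))
```

A deployed tool would also have to weigh functionality: an alternative requesting fewer permissions is only useful if it still covers the user’s needs.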
Ensure Credibility of the Mobile Application Analytic Tool.
As described above, transparency serves as a prerequisite for the other guidelines. Therefore, a mobile application analytic tool should itself be transparent regarding privacy protection. In the lab study, the importance of the “credibility” facet received the highest possible median rating, which suggests that a tool must avoid using permissions for its own purposes. Moreover, the survey revealed an absence of significant correlations between privacy threat evaluations and demographic data. Thus, we conclude that a mobile application analytic tool should refrain from accessing personal demographic data. The fifth guideline is therefore: “Provide a transparent tool to meet moral requirements regarding privacy protection.” It is associated with the information dimension [13], as it reduces or eliminates incomplete and asymmetric information access with respect to the mobile application analytic tool itself.
7 Future Work
Our study results led to a user-driven framework of guidelines. They refer to designing privacy nudges and addressing hurdles of informed decision making when using mobile apps. It is conceivable that this framework can guide the design of privacy nudges and respective tools addressing present information asymmetries in another context. Therefore, future work could investigate the transferability of our results (e.g., to intelligent transportation systems or vehicles).
Our guidelines do not address the timing and incentive dimensions due to the methodology we chose. Motivational and temporal effects of mobile application analytic tools should be investigated in more naturalistic and long-term settings. Our future research will address this limitation by testing our tool in a field trial. If users’ privacy preserving decisions are encouraged, this will probably entail changes in app implementation from the provider side.
References
AppBrain: Number of Android applications (2017). https://www.appbrain.com/stats/number-of-android-apps
Kantar: Marktanteile der mobilen Betriebssysteme am Absatz von Smartphones in Deutschland von Januar 2012 bis Oktober 2016 [Market shares of mobile operating systems in sales of smartphones in Germany from January 2012 to October 2016] (n.d.). https://de.statista.com/statistik/daten/studie/225381/umfrage/marktanteile-der-betriebssysteme-am-smartphone-absatz-in-deutschland-zeitreihe/
Caufield, K.P., Byers: Durchschnittliche Anzahl der installierten und täglich genutzten Apps pro Endgerät in den USA und weltweit im Mai 2016 [Average number of installed and daily used apps per device in the US and worldwide in May 2016] (n.d.). https://de.statista.com/statistik/daten/studie/555513/umfrage/anzahl-der-installierten-und-genutzten-apps-in-den-usa-und-weltweit/
Fraunhofer AISEC: 10.000 Apps und eine Menge Sorgen [10,000 apps and a lot of worries] (2014). http://www.aisec.fraunhofer.de/de/presse-und-veranstaltungen/presse/pressemitteilungen/2014/20140403_10000_apps.html
Geneiatakis, D., Fovino, I.N., Kounelis, I., Stirparo, P.: A Permission verification approach for android mobile applications. Comput. Secur. 49, 192–205 (2015). https://doi.org/10.1016/j.cose.2014.10.005
Kelley, P.G., Consolvo, S., Cranor, L.F., Jung, J., Sadeh, N., Wetherall, D.: A conundrum of permissions: installing applications on an android smartphone. In: Blyth, J., Dietrich, S., Camp, L.Jean (eds.) FC 2012. LNCS, vol. 7398, pp. 68–79. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-34638-5_6
Felt, A.P., Ha, E., Egelman, S., Haney, A., Chin, E., Wagner, D.: Android permissions: user attention, comprehension, and behavior. In: Cranor, L.F. (ed.) Proceedings of the Eighth Symposium on Usable Privacy and Security, p. 3 (2012). http://dx.doi.org/10.1145/2335356.2335360
Balebako, R., et al.: Nudging users towards privacy on mobile devices. In: Proceedings of the CHI 2011 Workshop on Persuasion, Nudge, Influence and Coercion (2011)
DIN EN ISO 9241-210: Prozess zur Gestaltung gebrauchstauglicher interaktiver Systeme [Process for the design of usable interactive systems]. Beuth, Berlin (2011)
Statistisches Bundesamt Deutschland: Statistisches Jahrbuch Deutschland und Internationales [Statistical Yearbook Germany and International]. Statistisches Bundesamt, Wiesbaden (2015)
Hassenzahl, M., Burmester, M., Koller, F.: AttrakDiff: Ein Fragebogen zur Messung wahrgenommener hedonischer und pragmatischer Qualität [A questionnaire to measure perceived hedonic and pragmatic quality]. In: Ziegler, J., Szwillus, G. (eds.) Mensch & Computer 2003: Interaktion in Bewegung, pp. 187–196. B.G. Teubner, Stuttgart, Leipzig (2003). http://dx.doi.org/10.1007/978-3-322-80058-9_19
Thaler, R.H., Sunstein, C.R.: Nudge: improving decisions about health, wealth, and happiness. Const. Polit. Econ. 19(4), 356–360 (2008)
Acquisti, A., et al.: Nudges for Privacy and Security: Understanding and Assisting Users’ Choices (2016). SSRN: https://ssrn.com/abstract=2859227
Kelley, P.G., Cranor, L.F., Sadeh, N.: Privacy as part of the app decision-making process. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 3393–3402. ACM (2013). http://dx.doi.org/10.1145/2470654.2466466
King, J.: How Come I’m Allowing Strangers to Go Through My Phone? Smartphones and Privacy Expectations (2012). http://dx.doi.org/10.2139/ssrn.2493412
Tan, J., et al.: The effect of developer-specified explanations for permission requests on smartphone user behavior. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 91–100. ACM (2014). http://dx.doi.org/10.1145/2556288.2557400
Enck, W., et al.: TaintDroid: an information-flow tracking system for realtime privacy monitoring on smartphones. ACM Trans. Comput. Syst. (TOCS), 32(2) (2014), http://dx.doi.org/10.1145/2619091
Lin, J., Amini, S., Hong, J.I., Sadeh, N., Lindqvist, J., Zhang, J.: Expectation and purpose: understanding users’ mental models of mobile app privacy through crowdsourcing. In: Proceedings of the 2012 ACM Conference on Ubiquitous Computing, pp. 501–510. ACM (2012) http://dx.doi.org/10.1145/2370216.2370290
Gerber, P., Volkamer, M., Renaud, K.: The simpler, the better? Presenting the COPING Android permission-granting interface for better privacy-related decisions. J. Inf. Secur. Appl. (2016). http://dx.doi.org/10.1016/j.jisa.2016.10.003
Hartwig, M., Reinhold, O., Alt, R.: Privacy awareness in mobile business: how mobile os and apps support transparency in the use of personal data. In: BLED 2016 Proceedings, vol. 46 (2016)
Bitkom: Anteil der Smartphone-Nutzer in Deutschland nach Altersgruppe im Jahr 2016 [Share of smartphone users in Germany by age group in 2016]. In Statista - Das Statistik-Portal (n.d.). https://de.statista.com/statistik/daten/studie/459963/umfrage/anteil-der-smartphone-nutzer-in-deutschland-nach-altersgruppe/
MyMarktforschung.de: Studie: Smartphones liebster Zeitvertreib der Deutschen [Study: Smartphones favorite pastime of the Germans] (2015). https://www.mymarktforschung.de/de/ueber-uns/pressemitteilungen/item/studie-der-alltag-der-deutschen.html
comScore: Geschlechterverteilung der Smartphone-Nutzer in Deutschland in den Jahren 2012 und 2016 [Gender distribution of smartphone users in Germany in 2012 and 2016] (n.d.). https://de.statista.com/statistik/daten/studie/255609/umfrage/geschlechterverteilung-der-smartphone-nutzer-in-deutschland/
Dogruel, L., Jöckel, S., Vitak, J.: The valuation of privacy premium features for smartphone apps: the influence of defaults and expert recommendations. Comput. Hum. Behav. 77, 230–239 (2017)
Bal, G., Rannenberg, K., Hong, J.: Styx: design and evaluation of a new privacy risk communication method for smartphones. In: Cuppens-Boulahia, N., Cuppens, F., Jajodia, S., Abou El Kalam, A., Sans, T. (eds.) SEC 2014. IAICT, vol. 428, pp. 113–126. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-642-55415-5_10
Almuhimedi, H., et al.: Your location has been shared 5,398 times!: a field study on mobile app privacy nudging. In: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 787–796. ACM, New York (2015). http://dx.doi.org/10.1145/2702123.2702210
Zhauniarovich, Y., Gadyatskaya, O.: Small changes, big changes: an updated view on the android permission system. In: Monrose, F., Dacier, M., Blanc, G., Garcia-Alfaro, J. (eds.) RAID 2016. LNCS, vol. 9854, pp. 346–367. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-45719-2_16
Gerber, P., Volkamer, M.: Usability und Privacy im Android Ökosystem [Usability and privacy within the Android ecosystem]. Datenschutz und Datensicherheit 39(2), 108–113 (2015). https://doi.org/10.1007/s11623-015-0375-y
PocketGamer.biz: Ranking der Top-20-Kategorien im App Store im Januar 2017 [Ranking of the top 20 app store categories in January 2017] (n.d.). https://de.statista.com/statistik/daten/studie/166976/umfrage/beliebteste-kategorien-im-app-store/
Distimo: Anteil der im Google Play Store weltweit am häufigsten heruntergeladenen Apps nach Kategorien im Februar 2014 [Share of the most downloaded apps in the Google Play Store worldwide by category in February 2014]. In Statista - Das Statistik-Portal (n.d.). https://de.statista.com/statistik/daten/studie/321703/umfrage/beliebteste-app-kategorien-im-google-play-store-weltweit/
ForwardAdGroup: Wie viele Apps haben Sie auf Ihrem Smartphone installiert? [How many apps have you installed on your smartphone?] (n.d.). https://de.statista.com/statistik/daten/studie/162374/umfrage/durchschnittliche-anzahl-von-apps-auf-dem-handy-in-deutschland/
Website (internetdo.com): Prognose zum Anteil der Smartphone-Nutzer an den Mobiltelefonnutzern weltweit von 2014 bis 2020 [Forecast for the share of smartphone users in mobile phone users worldwide from 2014 to 2020]. In Statista - Das Statistik-Portal (n.d.). https://de.statista.com/statistik/daten/studie/556616/umfrage/prognose-zum-anteil-der-smartphone-nutzer-an-den-mobiltelefonnutzern-weltweit/
Cohen, J.: Statistical Power Analysis for the Behavioral Sciences, pp. 20–26. Lawrence Erlbaum Associates, Hillsdale (1988)
Mahlke, S., Thüring, M.: Studying antecedents of emotional experiences in interactive contexts. In: CHI 2007 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 915–918. ACM, New York (2007). http://dx.doi.org/10.1145/1240624.1240762
Mayring, P.: Qualitative Content Analysis: Theoretical Foundation, Basic Procedures and Software Solution (2014). http://nbn-resolving.de/urn:nbn:de:0168-ssoar-395173
Hansen, M., Berlich, P., Camenisch, J., Clauß, S., Pfitzmann, A., Waidner, M.: Privacy-enhancing identity management. Inf. Secur. Tech. Rep. 9, 35–44 (2004). https://doi.org/10.1016/S1363-4127(04)00014-7
Fleiss, J.L., Levin, B., Paik, M.C.: Statistical Methods for Rates and Proportions. Wiley, New York (2013)
Acknowledgments
The studies are part of the research project “AndProtect: Personal data privacy by means of static and dynamic analysis for Android app validation”, funded by the German Federal Ministry of Education and Research. Furthermore, we want to thank Franziska Hartwich and Christiane Attig for their valuable comments on the first draft of the paper.
Appendices
Appendix A.1
| I feel my privacy threatened if my [map/navigation app, messenger app, weather app, or shopping app] uses… | Description | Strongly disagree (1) | Largely disagree (2) | Somewhat disagree (3) | Somewhat agree (4) | Largely agree (5) | Strongly agree (6) |
|---|---|---|---|---|---|---|---|
| …location data | Information about where I am | | | | | | |
| …communication data | Dialogues with other persons in terms of text, picture, video and audio messages | | | | | | |
| …contacts data | The stored contact information in my contact list (e.g., first name, last name, telephone number, and e-mail address of the contact) | | | | | | |
| …motion sensor data | What kind of movements I execute (e.g., climbing stairs, running, walking) | | | | | | |
| …app usage data | When and how often I use my [app-group] app | | | | | | |
| …data about usage of other apps | What other apps I have installed | | | | | | |
| …calendar data | Which appointments (content and timing) I have entered | | | | | | |
| …call history data | With whom and when I had a call | | | | | | |
| …local files | Files which I have stored on my smartphone (e.g., pictures, audio records, download files) | | | | | | |
| …camera data | Pictures which are in the focus of my camera | | | | | | |
| …WiFi state data | Information about which WiFi I am connected to, whether I used this WiFi before, which wireless networks I have registered on my device and/or which WiFi my device is searching for | | | | | | |
| …fitness data | Information about my physical activity (e.g., pedometer, heart rate, sleeping phase) | | | | | | |
| …data about social networks | Information on who I know, e.g., the names of my contacts, access of my contacts to personal data (pin board, pictures, status of persons), and unrestricted access to my data in a social network | | | | | | |
| …shopping data | Information on which products I monitor, which products I bought and which means of payment and shipping address I used | | | | | | |
| …audio data | Audio information which my smartphone microphone records | | | | | | |
Appendix A.2
Appendix A.3
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Döbelt, S., Halama, J., Fritsch, S., Nguyen, MH., Bocklisch, F. (2020). Clearing the Hurdles: How to Design Privacy Nudges for Mobile Application Users. In: Moallem, A. (eds) HCI for Cybersecurity, Privacy and Trust. HCII 2020. Lecture Notes in Computer Science(), vol 12210. Springer, Cham. https://doi.org/10.1007/978-3-030-50309-3_22
DOI: https://doi.org/10.1007/978-3-030-50309-3_22
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-50308-6
Online ISBN: 978-3-030-50309-3