1 Introduction

Privacy is a well-documented problem for users of social networks. The privacy paradox describes how stated levels of concern for personal privacy do not match observable behaviour within the network [1]. Behavioural psychology describes behaviour as a reaction to environmental stimuli [2]. Given that the User Interface (UI) of a social network can be considered the environment to which users react, the question is raised: how could the UI be designed to encourage more protective privacy behaviour? Theories of behavioural change such as the Theory of Planned Behaviour (TPB) [3] could guide the design of UIs, as they describe the factors that influence individual action. Indeed, a user-based approach may be required given that privacy is highly individual and fluid [4], making the UI, as the point of interaction, ideally placed to address the privacy issue.

This paper outlines the potential causes of poor privacy behaviour and proposes how the TPB can be used to define solutions to the problems they cause. The Personal Attitude (PA) factor in this model, and its influence over intention, is used as the basis for designing a UI element aimed at informing or reminding users of their personal privacy needs at the point of interaction. An experiment is proposed to explore the effect of this UI addition, in which users register to a new social network by answering a series of questions. The number of questions answered is compared to a control group for difference. The sensitivity of the questions answered is also examined against the control group to determine whether privacy is taken into consideration.

2 Literature Review

The causes of poor privacy behaviour are wide-ranging and varied. The system itself could be designed to be persuasive, encouraging disclosure of information, or may fail to make privacy protection mechanisms obvious enough for users [5, 6]. For example, there may be a lack of privacy salient information embedded in the environment to inform and aid the user [7]. This is coupled with the role users themselves may play, as their level of technical skill or lack of privacy awareness could have an impact [8, 9]. Furthermore, the behaviour of one's peers and social circle could influence the decision to disclose certain pieces of information [10], as users give in to peer pressure.

However, users do state a desire for privacy that is not apparent from their behaviour; a phenomenon known as the privacy paradox [11]. It has been suggested that privacy suffers from a secondary goal problem: the idea of privacy, and the user's perception of it, are not considered while some other goal is being pursued [12]. Furthermore, social networks could potentially act as a persuasive technology, encouraging users to behave in ways they normally would not [13]. The question is therefore raised: what would happen if the software were designed to remind users of their personal privacy preferences during interaction?

Indeed, the User Interface (UI) would be ideally placed to address the aforementioned potential causes of poor privacy behaviour, either by reminding users of their privacy preferences or by informing them and raising their awareness of privacy issues. The addition of privacy salient information to the UI could play an important role in promoting more protective privacy behaviour. However, what this salient information should look like, and the role it could play in the UI, is unclear.

The Theory of Planned Behaviour (TPB) (Fig. 1) could present a means of defining and informing such content. It describes three salient factors that influence an individual's intention and behaviour [3]: their personal attitude, their subjective norms and their perceived behavioural control.

Fig. 1. The Theory of Planned Behaviour [3]
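In schematic form, the model is often summarised as behavioural intention being a weighted combination of these three factors; the rendering below is a common textbook formulation rather than an equation taken from [3]:

```latex
% Behavioural intention (BI) as a weighted sum of the three salient factors:
%   A_B = attitude toward the behaviour (personal attitude)
%   SN  = subjective norm
%   PBC = perceived behavioural control
% The weights w_i are empirically estimated and vary by behaviour and population.
\[
  BI = w_1 A_B + w_2 SN + w_3 PBC
\]
```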

This paper proposes that UI elements could be designed around each of these salient factors with the aim of promoting more protective privacy behaviour. Indeed, it has been suggested that a critical focus on the role of the UI in addressing privacy issues is lacking [14].

Behavioural attitude suggests that an individual's awareness and perception of the consequences of an action inform their intention. This fits well with the more user-centric causes outlined earlier and is the primary focus of this paper.

Subjective norms suggest that intention is influenced by one's peers and perceived social pressures; again, this covers the previously outlined causes. A UI could inform users of the good behaviour going on around them and promote privacy rather than disclosure.

Finally, an individual's perceived control, in terms of how easy certain behaviour is to perform, influences both intention and action. A UI could, for example, be designed to make privacy protection easier and more accessible (results from this experiment are outlined in a separate paper, and an overview is given here in Sect. 2.1).

Each of these salient properties has been used as the basis for an experimental treatment aimed at encouraging more protective privacy behaviour. This paper focuses on the behavioral attitude treatment which sought to inform and/or remind users of the potential consequences of disclosure.

Utilizing the UI in this way is not without precedent in exploring the factors that influence privacy behaviour and the paradox described previously. Research has found that counter-arguments displayed as messages during interaction sway users' privacy opinions, particularly among users with a low level of online knowledge [15].

2.1 Perceived Control Experiment

A similar paper from the same experiment examined the role of perceived control [16]. Participants were given the opportunity to review and edit their data in a privacy-focused context. Following submission of their data, a UI augmented with privacy-oriented elements allowed this review: participants could delete data items, with the goal of account creation made less prominent. A dynamic privacy score rated the number of questions answered and their sensitivity, falling as items were deleted. Participants did disclose significantly less information than a control group after reviewing their data through the treatment. However, participants stated that "getting a low score" was the dominant reason for removing information, rather than consideration of their personal privacy, suggesting that their particular needs may still not have been met (albeit they may be safer). The power of the UI to persuade is clear, but it needs to be tempered to allow user freedom.
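As an illustration only, since the scoring formula itself is not given in [16], such a score might be computed along the following lines; the weights, categories and field names are hypothetical:

```python
# Hypothetical sketch of a dynamic privacy "P-Score" of the kind described
# in [16]: each answered question contributes according to its sensitivity,
# so deleting items lowers the score. Weights and categories are illustrative
# assumptions, not the published design.

SENSITIVITY_WEIGHTS = {"green": 1, "yellow": 2, "red": 3}

def p_score(answered_questions):
    """Sum sensitivity weights over the questions still answered."""
    return sum(SENSITIVITY_WEIGHTS[q["sensitivity"]] for q in answered_questions)

profile = [
    {"field": "favourite film", "sensitivity": "green"},
    {"field": "employer", "sensitivity": "yellow"},
    {"field": "home address", "sensitivity": "red"},
]

print(p_score(profile))  # 6
del profile[2]           # the participant deletes the red item...
print(p_score(profile))  # 3 -- the score falls, rewarding removal
```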

3 Methodology

The TPB's behavioural attitude aspect posits that an individual's personal knowledge and perception of the consequences of certain behaviour dictate whether or not they intend to act. In terms of privacy, this suggests that if a user is informed or reminded of the negative consequences of disclosure at the point of interaction, they may behave differently. Hence, the following hypothesis is proposed:

H1. A User Interface that informs users of the consequences of personal information disclosure will influence behaviour and decrease the amount of sensitive information they provide.

3.1 Control Group

In order to test this hypothesis, an experiment was devised that asked participants to register with a new social network service for their university (in this case, Nottingham Trent). Participants were asked a variety of questions, the answers to which would form their profile on the network and put them in contact with like-minded peers. Figure 2 illustrates the main page of the experiment.

Fig. 2. Experiment home page

The design of the front page (and subsequent pages) draws inspiration from Facebook in order to promote the ecological validity of the experiment [17]. The majority of the questions asked of the participant appear on the second page (a "profile builder"). These questions vary in sensitivity and are based on and adapted from similar work [18]. In total, participants view 33 questions over the course of the experiment, spread across two web pages; a third page asked for privacy settings to be applied. Input types varied across the questions, ranging from text boxes to check boxes and drop-down menus, the aim being to test whether this had an impact on the decision to answer. An example of the profile builder screen can be seen in Fig. 3.

Fig. 3. Profile builder page

The number of questions answered was recorded in a database for comparison with a treatment group based on influencing a participant's personal attitude.
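As a sketch only of how such responses could be recorded (the paper does not describe its actual database, so the table and field names below are hypothetical):

```python
# Hypothetical sketch of how each of the 33 questions and its response
# could be recorded for later comparison between groups. The schema is
# an illustrative assumption, not the experiment's actual database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE responses (
        participant_id INTEGER,
        question_id    INTEGER,   -- 1..33 across the experiment's pages
        input_type     TEXT,      -- 'text', 'checkbox' or 'dropdown'
        sensitivity    TEXT,      -- 'green', 'yellow' or 'red'
        answered       INTEGER    -- 1 if the participant disclosed, else 0
    )
""")
conn.execute(
    "INSERT INTO responses VALUES (?, ?, ?, ?, ?)",
    (1, 7, "text", "yellow", 1),
)

# Per-participant disclosure counts: the measure compared between groups.
for row in conn.execute(
    "SELECT participant_id, SUM(answered) FROM responses GROUP BY participant_id"
):
    print(row)
```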

3.2 Treatment Design

The treatment is designed either to remind participants of their privacy preferences or to inform them such that they can make a decision at the point of interaction. To that end, a UI metaphor derived from a traffic light system classifies the information requested by each question according to its potential risk. An illustration of this addition to the UI is shown in Fig. 4.

Fig. 4. Privacy traffic lights

A green light indicates that the question carries low risk: disclosure would generally be acceptable but may result in low-level consequences, for example social embarrassment. A yellow light indicates that caution should be exercised: disclosure may not result in serious ramifications but could affect, for example, employment prospects. Finally, a red light indicates that there could be serious consequences from disclosure: for example, breaking the law.
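A minimal sketch of such a classification follows; the example questions and the lights assigned to them are illustrative assumptions rather than the experiment's actual mapping:

```python
# Minimal sketch of the traffic-light classification described above.
# The example questions and their assigned lights are illustrative
# assumptions; the experiment's full mapping of 33 questions is not
# reproduced here.

RISK_LIGHTS = {
    "green":  "low risk, e.g. social embarrassment",
    "yellow": "caution, e.g. could affect employment prospects",
    "red":    "potentially serious, e.g. evidence of law-breaking",
}

EXAMPLE_QUESTIONS = {
    "What is your favourite film?":  "green",
    "Who is your current employer?": "yellow",
    "What is your home address?":    "red",
}

for question, light in EXAMPLE_QUESTIONS.items():
    print(f"{light.upper():6} | {question} ({RISK_LIGHTS[light]})")
```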

It is important to note that these are not intended to be clear-cut classifications of sensitivity; they are intended only to prompt participants to take their personal privacy preferences into consideration during interaction.

The number of questions answered is compared to the control and assessed for difference. Furthermore, the location of disclosure (i.e. within the defined sensitivity categories) in each group is also compared to the control in order to assess the potential influence of the treatment: was disclosure significantly lower in the defined higher-sensitivity categories, as might be expected?

A short exit-survey follows the experiment, aiming to assess the degree to which participants felt the treatment was useful and their general perceptions of it. It consists of specifying a level of agreement with the following statements:

  1. I found the privacy information helpful.
  2. The privacy information helped to select what to fill in.
  3. I believe the privacy information would be beneficial in the long run.
  4. I acted differently due to its presence.

Furthermore, a selection of participants took part in a focus group to further assess the motivation behind behaviour within the experiment.

3.3 Sample

In total, 43 participants were recruited to take part in the experiment and were randomly assigned to either the control or the treatment group. This resulted in 20 participants (16 male and 4 female) in the control group compared to 23 (17 male and 6 female) in the treatment group. Participants were recruited from Nottingham Trent University's Information Systems course and completed the experiment within a scheduled lab session, in which they were asked if they would like to sign up to a new social network. The sample is predominantly male and from what could be considered a technical background, and as such may not be representative of a social network population.

4 Results

A summary of the total amount of disclosure in the experiment is given in Table 1, which shows the percentage of questions answered overall (note: PA indicates the personal attitude treatment).

Table 1. Disclosure summary

The reduction in answered questions stands at 16 %, a statistically significant decrease in the total number of questions answered (Mann-Whitney U, p < 0.0001). This supports the stated hypothesis: participants within the treatment group disclosed significantly less than the control. A breakdown of the location of disclosure (illustrated in Table 2) would also seem to support this.

Table 2. Spread of disclosure

Disclosure decreased in the higher-sensitivity categories (as defined by the treatment) when compared to the control: the yellow and red categories were reduced with statistical significance, as summarized in Table 2.
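A total-disclosure comparison of this kind can be sketched with SciPy; the per-participant counts below are placeholders rather than the study's data:

```python
# Sketch of the Mann-Whitney U comparison reported above, using SciPy.
# The per-participant disclosure counts are placeholder values only;
# the study's raw data are not reproduced here.
from scipy.stats import mannwhitneyu

control   = [30, 31, 29, 33, 32, 28, 31, 30]  # questions answered (placeholder)
treatment = [24, 25, 22, 26, 23, 25, 24, 21]  # questions answered (placeholder)

u_stat, p_value = mannwhitneyu(control, treatment, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")
```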

This would certainly seem to suggest that participants took their privacy needs into consideration and were less likely to disclose data of a higher sensitivity. However, it is not clear whether participants were informed, reminded or persuaded by the treatment design. Table 3 illustrates the responses to the exit-survey statements, which aimed to explore this point in greater detail; they are referred to in the discussion that follows.

Table 3. Exit survey responses

5 Discussion

Results from the experiment suggest that participants exercised a greater degree of thought during interaction when the treatment interface was present, as demonstrated by the reduction in answers to questions of higher sensitivity. From the exit-survey, 75 % of participants found the extra information useful and the same proportion felt that they did indeed behave differently. The effect is therefore acknowledged by participants, and the majority appear to have found its presence favourable. Post-experiment, in the focus group, a participant stated: "they (the lights) did highlight ones that could cause problems, like address". The intended effect of the treatment would therefore seem to have been achieved, with privacy concerns clearly highlighted and put into focus during the interaction.

However, for some participants it appeared to have the effect of reminding rather than informing: "I could see why (the ratings were in place) but I made up my own mind and I made decisions based on my own common sense". This would suggest that reminding participants at the point of interaction enables them to think about their own personal privacy needs. It may also be that participants are unwilling to admit to the potentially persuasive effects of the treatment and wish to retain ownership of their behaviour. Indeed, wider research has found that individuals tend to downplay the effect of persuasive communication on themselves [19].

Despite there being a potential subconscious effect on participants that further work should address, the general consensus from the exit-survey appeared to be a positive perception of the UI addition. Indeed, one participant stated in the focus group: "it made me think twice about the information I put on Facebook". The treatment would therefore seem to have provided participants with a gentle nudge to think about their privacy without being too intrusive. "Privacy nudges" have been proposed in wider research [20], and the work within this paper provides some data regarding their usability and appearance.

This apparent positive experience is unlike the perceived control (PC) treatment [16]. The PC treatment exhibited less disclosure than the PA design described in this paper; however, its perceived usefulness was also lower. The PC treatment was designed to frame the interaction squarely as a privacy-focused one, which participants may have felt took ownership of their behaviour away from them. For example, a dynamic "P-Score" encouraged the removal of information, making the goal of the interaction to get a lower score rather than to enact their own desired behaviour. This would suggest that salient privacy information needs to be subtle enough to allow participants to maintain ownership of their behaviour and still feel that their needs are being met. Such persuasiveness may be described as a suggestion rather than a direction [21].

6 Conclusions and Further Work

This paper has presented an example of privacy salient information (informed by the Theory of Planned Behaviour) in the form of a UI element and examined its potential effect on end-users. Findings suggest that a privacy suggestion can play a role in encouraging more protective privacy behaviour. Such designs could be added to existing user interfaces through browser extensions or existing system APIs. This would allow a more longitudinal study to take place, examining whether the behaviour change is a long-lasting alteration to habit. It would also examine whether privacy salient nudges enable users to enact their privacy needs across varying goals of system use, as there is a danger that such UI elements persuade users to be more private than they need to be, which may cause adverse social effects. It is unclear from this experiment what the effect would be if the goals of system use were user derived; here, the goal was defined by the experiment (to sign up). What would the effect be if the treatment were applied to users' live streams, or post-account creation, where goals are perhaps more personal?

Future work could also incorporate eye-tracking software to gauge the extent to which each individual light is looked at, or whether the observed effect is a cumulative one based on the presence of the lights as a whole; i.e. how much focus is placed on each individual form element, how much on the surrounding content, and how much consideration takes place?

There is evidence here, however, that simple additions to a UI can produce more protective privacy behaviour within the context defined in this experiment. More work is required to understand the driving psychology behind the observed effects and how UI elements can be further used to address the privacy problem with longer-lasting effects.