Publicly available. Published by Oldenbourg Wissenschaftsverlag, January 14, 2020

CogniPGA: Longitudinal Evaluation of Picture Gesture Authentication with Cognition-Based Intervention

  • Christina Katsini

    Christina Katsini is a Ph.D. Candidate at the University of Patras, Greece. Her interests lie in understanding how people interact with systems and services, and in designing for the people, with the people. In her Ph.D. research, she is investigating user choices in graphical user authentication from a human cognitive perspective.

  • Nikolaos Avouris

    Nikolaos Avouris (MSc, Ph.D., https://sites.google.com/view/avouris) is an electrical and computer engineer with a research interest in human-computer interaction. He is a Professor of Software Technology and Human-Computer Interaction in the Electrical and Computer Engineering Department of the University of Patras, Greece. He is Head of the Interactive Technologies Lab and the HCI Group.

  • Christos Fidas

    Christos Fidas (Ph.D., http://cfidas.info) is an electrical and computer engineer, and senior researcher with an interest in cultural heritage informatics, usable and secure information systems, and human socio-cultural and cognitive factors. He is an Assistant Professor at the Department of Cultural Heritage Management and New Technologies, University of Patras, Greece.

From the journal i-com

Abstract

There is evidence that the visual behavior of users when creating graphical passwords affects the password strength. Adopting a cognitive style perspective in the interpretation of the results of recent studies revealed that users, depending on their cognitive style, follow different visual exploration paths when creating graphical passwords, which affects the password strength. To take advantage of the inherent abilities of people, we proposed CogniPGA, a cued-recall graphical authentication scheme that applies a cognition-based intervention using gaze data. This paper presents the longitudinal evaluation of the proposed scheme in terms of security, memorability, and usability from a cognitive style perspective. The results strengthen the assumption that understanding and using the inherent cognitive characteristics of users could enable the design of user-first authentication schemes, where no compromises need to be made on security to benefit usability, or the other way around.

1 Introduction

User authentication is an important and critical task for individuals, as it enables them to access their sensitive and personal information, such as emails and medical data, and prohibits non-authorized access by other users. The most widely deployed user authentication scheme is the alphanumeric password. Strict password policies (e. g., use of uppercase and lowercase letters, symbols, and numbers) increase the effective password space and make passwords resistant to brute-force attacks, but at the same time they harm memorability because such passwords are hard to memorize. As a result, users tend to select short and easy-to-remember passwords and often use the same password to access multiple accounts; such behaviors introduce security vulnerabilities by dramatically reducing the effective password space of the deployed authentication scheme. To address the weaknesses and vulnerabilities of alphanumeric passwords, several graphical user authentication (GUA) schemes were introduced. In GUA, graphical elements are used instead of characters to create a password, aiming to leverage the ability of human memory to process visual information. Compared to text-based authentication, the advantage is two-dimensional: memorability-wise and security-wise. Regarding memorability, GUA schemes leverage the picture superiority effect, according to which people have a vast, almost limitless, visual memory, and pictures tend to be remembered far better and for longer than words [68], [59]; thus, GUA is expected to help users remember their passwords better. Regarding security, the advantage of GUA lies in the difficulty of communicating or recording pictures, which is expected to inhibit insecure practices.

Different GUA schemes have been proposed to exploit the power of pictures. Based on De Angeli et al. [27], they are clustered into three categories: cognometric or recognition based, locimetric or cued-recall based, and drawmetric or recall based. Recognition based schemes rely on visual recognition of target images embedded among a set of decoy images. Examples of such mechanisms are Passfaces [15], Dejavu [28], VIP [26], and ImagePass [61]. These passwords are easy to remember, but a large image pool is essential to achieve high resistance to brute-force attacks [41], and they are prone to shoulder surfing [13]. Recall based schemes require the user to reproduce a pre-drawn outline. They lie at the borderline between biometrics and graphical mechanisms and are quick and convenient to use. When using recall based schemes, users often make mistakes when redrawing their graphical password, as they cannot accurately remember the target points [43]. In cued-recall based schemes, background images are introduced, and cues are used to help users identify target points within an image. PassPoints [90], Cued Click Points [19], Touchscreen Multi-layered Drawing [17], and PassBYOP [11] are examples of such schemes. In practice, very few graphical password schemes have been adopted, such as the Android Unlock Pattern scheme, which is a modified version of Pass-Go [79] with some adaptations to accommodate the size of typical mobile devices, and Windows 8™ Picture Gesture Authentication (PGA), which comes with the Windows 8™ operating system and is a modified version of Background Draw-a-Secret [29].

Despite the initial beliefs about the advantages of using graphical elements in authentication, research has revealed that people make predictable choices when using GUA schemes [86], [96]. Several researchers have proposed interventions for nudging users towards better password choices [22], [83], [16]. In spite of the promising results of such interventions, researchers have not leveraged the inherent characteristics of the users to create personalized GUA schemes and better support users in the authentication task. Cognitive style (i. e., the preferred way an individual processes information [54]) is such an inherent characteristic, and several studies [9], [47], [50] have shown that people with different cognitive styles behave differently and develop different strategies when engaged in GUA tasks. A cognitive style that is highly interrelated with visual tasks, such as GUA tasks, is Field Dependence-Independence (FD-I) [91], which suggests that individuals with different cognitive characteristics take different approaches to processing visual information. Therefore, we argue that a cognition-based intervention would better support users in making better password choices, which would improve the security, memorability, and usability aspects of a GUA scheme.

In our previous works [45], [47], [50] we reported the results of two exploratory lab studies. We showed that people who differ along the FD-I cognitive style dimension follow different approaches when creating passwords using either recognition based [50] or cued-recall based [47] GUA schemes, which results in imbalances in the guessability of the graphical passwords. Aiming to extend our previous works and investigate the longitudinal aspect of adopting a cognition-based intervention on a cued-recall based GUA scheme in real-life settings, this work reports a longitudinal field study in which the cognition-based intervention was deployed and used once a week throughout an academic semester by students of two different lab courses. In the remainder of the paper, we discuss the cued-recall graphical password problem and the research attempts that have been made to understand and solve it. Then, we present the method of our long-term study along with its results. We interpret the results from a cognitive style perspective, discuss the implications of the present work, present the limitations of our research, and conclude the paper.

2 Background and Motivation

In this section, we first describe problems associated with the use of cued-recall graphical passwords. Following that, we present research attempts to solve these problems and discuss their strengths and weaknesses. Then, we introduce cognition, explain its importance in designing systems and services, and present the results of exploratory cognition-based studies in GUA. Finally, we briefly discuss implementations of cognition-based interventions in other domains and build our motivation in the last subsection.

2.1 The Cued-Recall Graphical Passwords Problem

Evaluation of cued-recall graphical passwords has revealed that they are relatively usable [18], [89], [90], [77]. Despite the promising results in terms of usability, security concerns remain. A major security issue associated with these passwords is hot-spots: people tend to select similar locations on images on which to draw their passwords. These are most often the salient points of the images, which draw their attention. Attackers can either use computational methods to extract such points or harvest real passwords in order to build attack models and successfully guess passwords.

To reduce the vulnerability of GUA schemes associated with hot-spots, researchers have introduced a number of interventions. Some interventions focused on restricting user choices, such as the application of saliency masks which prevented users from selecting salient points of images as part of their passwords [16]. Others focused on influencing users’ choices. Chiasson et al. [19] proposed Cued Click Points, where users selected one point on each image in a sequence of five images, aiming to avoid the formation of hot-spots on a single image and to remove the memory burden of the order of the points. An improved version was Persuasive Cued Click Points, which aimed to persuade users to select points away from the salient points of the images [22]. Thorpe et al. [83] used the “drawing-the-curtain” effect, in which the image grid was gradually revealed to the user by slightly shading the image except for a randomly positioned viewport, aiming to explore whether the presentation of the images affects user choice. Similarly, [48] investigated the presentation effect of 2D and 3D image grid layouts and showed that users create stronger passwords when using a 3D layout. Clark et al. [24] attempted to affect user choices by changing the password policy.

2.2 Cognitive Theory in GUA

Despite the fact that the hot-spot problem has been associated with visual attention, researchers have not attempted to take advantage of the inherent characteristics of users to influence their choices. Cognitive styles are such inherent characteristics, among others, and they have been associated with users’ visual search patterns. One well-researched cognitive style is Field Dependence-Independence (FD-I) [91]. FD-I is a one-dimensional model interrelated with the way people process visual information and is defined by a tendency to separate details from the surrounding context. The theory suggests that individuals have different habitual approaches to processing graphical information, according to contextual and environmental conditions, and accordingly characterizes individuals as either field-dependent (FD) or field-independent (FI). At the one end lie people who exhibit field dependence; they tend to follow a more holistic approach to processing visual information and have difficulties identifying details in complex visual scenes [91]. At the other end lie people who exhibit field independence; they tend to follow a more analytic approach to processing visual information, pay attention to details, and easily separate simple structures from the surrounding visual context [91]. Therefore, considering that FD-I is related to the comprehension of visual information, it has been selected as an appropriate cognitive framework to help researchers understand the underlying reasons for user choices in user authentication tasks.

In this respect, recent research explored how people with different cognitive styles use a cued-recall based and a recognition based GUA scheme. These studies revealed influences of the users’ FD-I cognitive style on the selected passwords when using a cued-recall based GUA scheme [47] and when using a recognition based GUA scheme [50]. This research also revealed a correlation between the visual behavior of the users during password creation and the guessability of the password when a salient-first brute-force algorithm was used. A correlation between the users’ cognitive style and the login performance and memorability when using a recognition-based GUA scheme was revealed by Belk et al. [8].

In our recent research, we tried to nudge user choices by taking advantage of the user’s cognitive style. Aiming to leverage the different image exploration strategies of people with different cognitive styles, we proposed a cognition-based gaze mask intervention on a cued-recall based scheme, and the preliminary results in terms of password strength were promising [47]. To the best of our knowledge, this is the only attempt to provide cognition-based personalization in GUA. Nonetheless, cognition-based interventions have been successfully applied in other domains, such as e-learning [84], organization of digital resources [33], gaming [55], [42], cultural heritage [72], business and management [6], e-commerce [44], and marketing [57]. These research works have shown that when cognition is considered as a design factor, users have an enhanced experience, perform better, exhibit richer visual and interactive behavior, and are more effective towards task objectives. Therefore, we expect that applying cognition-based interventions in GUA schemes would benefit users not only in terms of security but also in terms of memorability and usability.

2.3 Motivation

Given the results of previous research, we argue that cognition-based GUA schemes can nudge users to create strong and memorable graphical passwords through a usable authentication process. Therefore, in this paper, we present the longitudinal evaluation of CogniPGA, a cognition-based graphical authentication scheme in which a cognition-based gaze mask is applied on top of an image depending on the user’s cognitive style, aiming to influence the visual exploration process of users by taking advantage of their inherent characteristics. The objectives of the reported research involve the comparison of CogniPGA with a typical cued-recall based GUA scheme (hereafter PGA), similar to Microsoft’s Picture Password, in terms of password strength, memorability, and usability. We also consider image complexity as a study factor, based on earlier research which revealed that image complexity affects user choices [89], [47]. Hence, our paper addresses the question of whether a cognition-based cued-recall GUA scheme can help individuals with different cognitive styles to create strong and memorable passwords through a usable interface.

3 Study Methodology

To answer our research question we designed a longitudinal field study. The study involved a between-subjects design, in which the participants (undergraduate students undertaking lab courses) were asked to create a password which they would use to access course material during their weekly lab course in the fall semester of the academic year 2018–19. Two GUA schemes were used: PGA and its cognition-based version, CogniPGA. In this section, we present the hypotheses, describe the two GUA schemes, present information about the participants, discuss the study instruments, metrics, and apparatus, and finally, present the study procedure.

3.1 Hypotheses

We expected that the cognition-based intervention on the GUA scheme (i. e., CogniPGA) would enable users with different cognitive styles to create stronger and more memorable passwords than users of the traditional GUA scheme (i. e., PGA), because it leverages the inherent visual processing abilities of people. We also suspected that the cognition-based intervention could affect the usability of the scheme.

Thus, we formed the following hypotheses:

  1. CogniPGA users create stronger passwords than PGA users.

  2. CogniPGA users create more memorable passwords than PGA users.

  3. CogniPGA is more usable than PGA.

3.2 GUA Schemes

PGA: Traditional GUA Scheme

PGA is a web-based mechanism (Figure 1) which resembles the workflow and appearance of Microsoft’s Picture Gesture Authentication (PGA) scheme. PGA is a cued-recall GUA scheme used for creating gesture-based passwords using a background image as a cue. We selected this scheme because it resembles a well-established commercial graphical scheme that is rich in graphical content because of the background image. Each password consists of three gestures. Each gesture can be a tap, a line, or a circle. Free-line gestures are converted to one of the three allowed gestures. To store the gestures, the image is divided into 100 segments along its largest dimension, and the shortest dimension is divided into segments at the same scale. The gestures are mapped to one of the three allowed types and the points are mapped to segments. Thus, for taps the corresponding segment is stored; for lines the starting segment and the ending segment are stored; for circles the centre segment, the radius, and the direction (clockwise or counterclockwise) are stored. A detailed analysis of the mechanism is presented by Pace [67]. During registration, the screen is divided into two parts, as shown in Figure 1. On the left, we provide instructions for creating a password. Three numbers (1, 2, and 3) are displayed to indicate the active gesture. The “Start again” and “Confirm password” buttons at the bottom of the left part enable users to restart the registration process or confirm the created password.
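To make the encoding above concrete, the following is a minimal sketch in Python of how gesture data could be mapped to grid segments. It is our own illustrative reconstruction under the paper’s description (100 segments along the largest dimension; taps, lines, and circles); the class and function names, the dictionary representation, and the omission of free-line conversion are assumptions, not the authors’ implementation.

```python
# Minimal sketch of the gesture-to-segment encoding described above.
# The 100-segment grid along the largest image dimension and the
# tap/line/circle representations follow the paper; the names and the
# dictionary format are illustrative assumptions. Conversion of
# free-line input to one of the three allowed gesture types is not shown.

from dataclasses import dataclass


@dataclass
class Grid:
    width: int   # image width in pixels
    height: int  # image height in pixels

    @property
    def segment_size(self) -> float:
        # 100 segments along the largest dimension; the shortest
        # dimension is divided at the same scale.
        return max(self.width, self.height) / 100.0

    def to_segment(self, x: float, y: float) -> tuple[int, int]:
        s = self.segment_size
        return int(x // s), int(y // s)


def encode_tap(grid: Grid, x, y):
    return {"type": "tap", "segment": grid.to_segment(x, y)}


def encode_line(grid: Grid, x1, y1, x2, y2):
    return {"type": "line",
            "start": grid.to_segment(x1, y1),
            "end": grid.to_segment(x2, y2)}


def encode_circle(grid: Grid, cx, cy, radius_px, clockwise: bool):
    return {"type": "circle",
            "centre": grid.to_segment(cx, cy),
            "radius": round(radius_px / grid.segment_size),
            "clockwise": clockwise}
```

A stored password is then simply an ordered list of three such gesture records.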

Figure 1: The recall-based GUA scheme used in our study resembled Windows 8™ Picture Gesture Authentication. The users could create their password by making three of the following types of gestures: taps, lines, and circles.

On releasing each gesture, the shape of the gesture is displayed temporarily at the corresponding location to inform the user that the gesture has been recorded. This step also informs the user that the intended gesture was recognized by the system (e. g., a drawn circle is recognized as a circle and not as a line). To confirm the password, the users are required to re-enter the three gestures. There is a tolerance of 36 segments around the selected segment, but no tolerance is provided for gesture type, direction, and order. During login, the same screen is displayed to the user, with “Log in” and “Reset” buttons instead. The user must reproduce the three gestures, and the scheme then informs them of the outcome of the login (successful or unsuccessful). Login succeeds if a) the gestures (type, direction, and order) match the stored ones and b) the distance between the reproduced and the stored gestures is within the tolerance interval.
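The matching rule described above (exact match on gesture type, direction, and order; location match within a tolerance) can be sketched as follows. The paper gives a tolerance area of 36 segments here and a tolerance radius of 3 segments in the effective-password-space analysis; the simple Euclidean radius check and the gesture record format below are assumptions for illustration only.

```python
# Sketch of the login check described above: gesture types, order, and
# circle direction must match exactly, while segment locations may
# deviate within a tolerance. The radius value and the distance metric
# are illustrative assumptions; gesture records follow the earlier sketch.

from math import hypot

TOLERANCE = 3  # segments; an assumption based on the J-function analysis


def segments_close(a, b, tol=TOLERANCE) -> bool:
    return hypot(a[0] - b[0], a[1] - b[1]) <= tol


def gestures_match(stored, entered) -> bool:
    if stored["type"] != entered["type"]:
        return False
    if stored["type"] == "tap":
        return segments_close(stored["segment"], entered["segment"])
    if stored["type"] == "line":
        return (segments_close(stored["start"], entered["start"])
                and segments_close(stored["end"], entered["end"]))
    # circle: centre within tolerance, same direction, similar radius
    return (segments_close(stored["centre"], entered["centre"])
            and stored["clockwise"] == entered["clockwise"]
            and abs(stored["radius"] - entered["radius"]) <= TOLERANCE)


def login_succeeds(stored_password, entered_password) -> bool:
    # Order matters: gesture i of the attempt is compared with gesture i
    # of the stored password.
    return (len(stored_password) == len(entered_password) == 3
            and all(gestures_match(s, e)
                    for s, e in zip(stored_password, entered_password)))
```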

CogniPGA: Cognition-Based GUA Scheme

Previous research [25], [81], [95] has revealed the issue of hot-spots in cued-recall GUA schemes. Analysis of the visual behavior of users during the password creation process from a cognitive style perspective has revealed that people process visual information differently, depending on their cognitive style [47], [50]. Based on the results of previous research, we designed CogniPGA, aiming to leverage the unique cognitive characteristics of users and nudge them to make better password decisions.

Figure 2: In CogniPGA, a simple or a complex image is gradually revealed with the applied saliency mask tailored to the FD/FI cognitive style of the user.

Figure 3: Images of different complexity used in the study: a simple image showing a jet (left) and a complex image showing a workplace (right). At the bottom, the saliency maps of the images are depicted.

To create this scheme, a cognition-based gaze mask is applied to PGA during password creation. Our previous studies revealed that the visual behavior of FDs when creating graphical passwords differs from that of FIs and is correlated with the password choices. Thus, a mask was created using the eye gaze data of FD and FI individuals from our exploratory study reported in [47]. This allowed us to create two different masks, one applied to the scheme intended to be used by FDs and one applied to the scheme intended to be used by FIs, considering that FDs and FIs tend to focus on slightly different points of images. The mask is applied on top of the background image. The image is gradually revealed because the aim is to drive the users’ attention away from the points they tend to focus on when viewing an image for the first time. We used a Gaussian distribution algorithm to create the layers of the cognition-based gaze mask and a fade-out effect to implement the gradual reveal. We started from the highest mask level (a totally black foreground) and gradually removed the mask layers until the image was displayed without any mask. We set the time to reveal the image to twenty seconds, following common practice [83]. To ensure that users would take full advantage of the provided mechanism, the black foreground is displayed along with a start button when the scheme is loaded. Once the user is ready to focus on the image, they hit the start button and the image is gradually revealed. The aim of fully revealing the image is two-fold: firstly, we did not want to constrain the user choices during registration, because previous research has revealed that when the registration image is masked and users are presented with the full image at login, they face difficulties remembering their passwords [3]; secondly, constraining the user choices during password creation may create new hot-spots, which can in turn be exploited by attackers. The different layers of the mask applied to CogniPGA are shown in Figure 2.
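The following sketch illustrates one way the gaze-derived mask layers and the timed reveal could be generated: Gaussian blobs centred on aggregated FD or FI fixation points are stacked into an opacity map, which is thresholded into layers that are faded out over roughly twenty seconds. The fixation input, the number of layers, the sigma value, and the choice to uncover gaze-heavy regions last are assumptions; the authors’ implementation details are not given in this paper.

```python
# Sketch of building cognition-based gaze mask layers. Gaussian blobs
# centred on aggregated fixation points (from FD or FI users) form an
# opacity map, which is thresholded into layers revealed over ~20 s.
# Fixation data, layer count, sigma, and reveal order are assumptions.

import numpy as np


def gaze_opacity_map(fixations, shape, sigma=40.0):
    """fixations: list of (x, y) pixel coordinates from the target
    cognitive-style group (FD or FI); returns values in [0, 1]."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    acc = np.zeros(shape, dtype=float)
    for fx, fy in fixations:
        acc += np.exp(-((xx - fx) ** 2 + (yy - fy) ** 2) / (2 * sigma ** 2))
    return acc / acc.max() if acc.max() > 0 else acc


def mask_layers(opacity, n_layers=10):
    """Threshold the opacity map into n_layers boolean masks, from a
    full black foreground down to masking only gaze-heavy regions."""
    thresholds = np.linspace(0, 1, n_layers, endpoint=False)
    return [opacity >= t for t in thresholds]  # layer 0 covers everything


def reveal_schedule(n_layers=10, total_seconds=20.0):
    """Seconds at which each successive mask layer is faded out."""
    return [i * total_seconds / n_layers for i in range(n_layers)]
```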

Images Used in the GUA Schemes

Our previous research has revealed that password strength is affected by the background image (e. g., image complexity) when controlling for the FD-I factor [47]. Therefore, in our study we used two images of different complexity (Figure 3): a simple one, which consists of one main attention point and shows a flying jet (entropy = .453), and a complex one, which consists of several attention points and shows a workplace (entropy = .983). The content of the images is representative of two popular image categories [96], [2].
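The paper does not state here which entropy measure produced the reported values (.453 and .983). A common choice that yields scores in [0, 1] is the Shannon entropy of the grayscale histogram normalized by its maximum, sketched below purely as an assumption.

```python
# Hedged sketch: normalized Shannon entropy of a grayscale histogram as
# an image-complexity score in [0, 1]. This is an illustrative choice;
# the paper does not specify the measure used for its entropy values.

import numpy as np


def normalized_entropy(gray_image: np.ndarray, bins: int = 256) -> float:
    hist, _ = np.histogram(gray_image, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins (0 * log 0 = 0)
    return float(-(p * np.log2(p)).sum() / np.log2(bins))
```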

3.3 Participants

We recruited participants who were enrolled students of two undergraduate lab courses at the University of Patras, Greece. Following a between-subjects design, we created eight groups based on the GUA scheme (PGA or CogniPGA), the cognitive style (FD or FI), and the background image (simple or complex). Each participant was assigned to only one group. We assigned participants to the groups at the beginning of the study based on their demographic information, aiming to create balanced groups (in terms of size, gender, etc.). Group sizes differ because some participants dropped out of the lab courses during the semester. A total of 320 participants were included in the data analysis. In Table 1, we provide details about the study participants.

Table 1

Information about the study participants.

GUA       FD-I style  Image    N   Age (years)     Gender
PGA       FD          Simple   41  M = 22, SD = 3  22 females, 19 males
PGA       FD          Complex  44  M = 21, SD = 3  24 females, 20 males
PGA       FI          Simple   39  M = 23, SD = 4  19 females, 20 males
PGA       FI          Complex  35  M = 22, SD = 3  16 females, 19 males
CogniPGA  FD          Simple   45  M = 23, SD = 3  24 females, 21 males
CogniPGA  FD          Complex  44  M = 22, SD = 2  24 females, 20 males
CogniPGA  FI          Simple   36  M = 23, SD = 4  18 females, 18 males
CogniPGA  FI          Complex  36  M = 22, SD = 4  17 females, 19 males

3.4 Instruments and Metrics

Group Embedded Figures Test (GEFT)

We used the original FD-I classification tool, the Group Embedded Figures Test (GEFT) by Oltman et al. [66], to classify participants as either FD or FI. GEFT is a credible and validated time-administered instrument [53] which measures the ability of an individual to identify a simple figure within a complex background. The test consists of three sections. In each section, the individual is asked to identify and outline a given simple pattern within a visually complex context within a given amount of time. The first section is used for practice and consists of seven pattern-recognition problems which the individual must solve within two minutes. The second and third sections consist of nine pattern-recognition problems each, and the individual has five minutes to complete each section. A raw score is calculated by summing the correct answers in the last two sections. The score ranges between 0 and 18, and individuals are classified as either FD or FI with the use of a cut-off score. In the literature [4], [40], [70], the mean GEFT score has been used as the cut-off score, and thus, in our study, the cut-off score was determined to be 9: participants who scored 9 or lower were classified as FD, and those who scored between 10 and 18 as FI.
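For clarity, the scoring and classification rule just described amounts to the following small sketch (function names are ours):

```python
# Sketch of GEFT scoring and FD/FI classification as described above:
# the raw score is the number of correct answers in sections 2 and 3
# (0-18), and the cut-off of 9 assigns FD to scores <= 9, FI to 10-18.

def geft_raw_score(section2_correct: int, section3_correct: int) -> int:
    assert 0 <= section2_correct <= 9 and 0 <= section3_correct <= 9
    return section2_correct + section3_correct


def classify_fdi(raw_score: int, cutoff: int = 9) -> str:
    return "FD" if raw_score <= cutoff else "FI"
```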

Attack Model

Based on the work of Zhao et al. [96], we apply a hot-spot assisted brute-force attack model. In contrast to a pure brute-force attack, in a hot-spot assisted brute-force attack the attacker does not blindly guess the picture password without any information about the background picture and the users’ tendencies, but assumes that the user mainly performs drawings on the hot-spots of the background image. By hot-spots, we refer to points of the picture that draw the attention of the user, such as faces and single objects [74], [95]. To extract the hot-spots of the pictures, we performed a saliency-map analysis based on the filters provided by Perazzi et al. [69], depicted in Figure 3. The picture segments were divided into three zones: hot-spots (zone 1), neighboring segments (zone 2), and the remaining segments (zone 3). A segment was considered a neighboring segment if it was within six segments of the closest hot-spot segment. After trying all segments of zone 1, our attack model continues with the segments of zone 2, and finally the segments of zone 3. In each zone, the model checks the segments starting from the top left corner of the zone and traverses them row by row.
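The zone-based guess ordering can be sketched as follows. The three zones and the row-by-row traversal follow the description above; the Chebyshev distance used for the six-segment neighbourhood and the function names are assumptions.

```python
# Sketch of the hot-spot assisted ordering of guesses: zone 1 holds the
# hot-spot segments extracted from the saliency map, zone 2 their
# neighbours (within six segments of the closest hot-spot), zone 3 the
# rest. Segments are tried zone by zone, row by row from the top left.
# The Chebyshev distance for "within six segments" is an assumption.

def zone_of(segment, hotspots, neighbour_radius=6):
    if segment in hotspots:
        return 1
    dist = min(max(abs(segment[0] - h[0]), abs(segment[1] - h[1]))
               for h in hotspots)
    return 2 if dist <= neighbour_radius else 3


def guess_order(grid_cols, grid_rows, hotspots):
    """Yield all segments in the order the attack model tries them."""
    all_segments = [(col, row)
                    for row in range(grid_rows)    # row by row ...
                    for col in range(grid_cols)]   # ... left to right
    for zone in (1, 2, 3):
        for seg in all_segments:
            if zone_of(seg, hotspots) == zone:
                yield seg
```

An attacker would then enumerate candidate passwords by combining the three gesture types with segments drawn in this order.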

Password Strength Metrics

To measure the security aspect of the two schemes, we used two metrics: guessability and effective password space.

  1. Guessability. To measure the strength of the generated graphical passwords, we used password guessability, a widely used measure in the literature for assessing password strength [31], [51], [7], [73]. We followed a three-step attack approach by adopting the hot-spot assisted brute-force attack model presented in the previous paragraph. The password strength was measured as the number of guesses required to crack each password. The higher the number of guesses, the more difficult the password is to crack.

  2. Effective password space. One of the goals of CogniPGA is to increase the effective password space by guiding the users’ attention during password creation to points other than the hot-spots, which we expected would have an impact on the distribution of the selected points without limiting the choices in any way. Users were free to select any point on the image to create their password. Following the approach proposed by Chiasson et al. [20], [22], we used the J function [85] to measure the level of clustering of the selected password areas within the dataset of each group. The J function combines nearest-neighbour calculations and empty-space measures for a given radius r in order to measure the clustering of points (a rough sketch of such a computation follows this list). As discussed in the GUA Schemes section, both PGA and CogniPGA use a tolerance radius of 3 segments, and thus we look at the J function measures at r=3 segments. A value of J closer to 0 indicates that all of the selected areas cluster at the exact same coordinates, J=1 indicates that the dataset is randomly dispersed, and J>1 shows that the dataset is uniformly distributed.
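As a rough illustration of the J function, the sketch below computes a naive empirical estimate J(r) = (1 − G(r)) / (1 − F(r)), where G(r) is the share of selected points whose nearest other selected point lies within r (nearest-neighbour term) and F(r) is the share of reference grid locations whose nearest selected point lies within r (empty-space term). The edge corrections used by proper spatial-statistics estimators, and the example grid size, are omitted or assumed; this is not the authors’ implementation.

```python
# Naive empirical J function for the selected password segments.
# G(r): nearest-neighbour term; F(r): empty-space term; J = (1-G)/(1-F).
# Edge corrections of proper spatial-statistics estimators are omitted,
# so treat this strictly as an illustration.

import numpy as np


def j_function(points, grid_shape, r: float) -> float:
    """points: (n, 2) array of selected segment coordinates."""
    pts = np.asarray(points, dtype=float)
    diffs = pts[:, None, :] - pts[None, :, :]
    dists = np.hypot(diffs[..., 0], diffs[..., 1])
    np.fill_diagonal(dists, np.inf)                  # ignore self-distances
    g = np.mean(dists.min(axis=1) <= r)              # nearest-neighbour term

    cols, rows = grid_shape
    xx, yy = np.meshgrid(np.arange(cols), np.arange(rows))
    ref = np.stack([xx.ravel(), yy.ravel()], axis=1).astype(float)
    d_ref = np.hypot(ref[:, None, 0] - pts[None, :, 0],
                     ref[:, None, 1] - pts[None, :, 1]).min(axis=1)
    f = np.mean(d_ref <= r)                          # empty-space term
    return float("inf") if f >= 1.0 else (1 - g) / (1 - f)


# Example (grid size assumed for a 1920x1080 image with 100 segments
# along the largest dimension): j_function(selected, (100, 56), r=3)
```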

Memorability Metrics

To measure the memorability of the created passwords, we used two metrics which are widely used in the literature [13], [22], [3]: the required login attempts and the number of password resets.

  1. Required login attempts. This measure examines the attempts required before a participant successfully authenticates. This is based on the intuition that a participant who requires all three permitted attempts before successfully authenticating would have more difficulty than a participant who is able to successfully authenticate on the first attempt.

  2. Number of password resets. When the users cannot remember their password, they typically use the password reset mechanism of the authentication scheme to create a new password [76]. The selected metric measures how many times a user reset the password; this metric has been widely used in the literature [75], [12], [92] for assessing the memorability of passwords.

Usability Metrics

To measure the GUAs’ usability, we used the time to create the password, the time to login, and the system usability scale (SUS) score.

  1. Time to create password. We measured the time to create a password from the moment the background image was loaded after entering the username until the moment the user hit the submit button after having performed all gestures. It was recorded to determine how much time users spent thinking about their password prior to submitting it to the system [62]. Time to create a password is often considered a usability metric in user authentication studies [90], [82], [35], [22].

  2. Time to log in. We measured the time to log in from the moment the background image was loaded after entering the username until the moment the user hit the submit button, as a measure of memory retrieval. More memorable passwords tend to be remembered faster than less memorable ones [87]. Like the time to create a password, the time to log in is often considered a usability metric in user authentication studies [78], [22], [21], [75], [60]. The analysis and discussion in this paper refer to the time to log in for successful logins, as described by Dunphy et al. [30].

  3. System Usability Scale (SUS) score. SUS [14] is a popular and reliable instrument for measuring system usability, which has been widely applied to evaluate authentication schemes [52], [23], [63], [64], [56] and other information systems [5], [1], [58], [32]. It consists of 10 Likert-type questions, and the score ranges from 0 to 100 after making the appropriate adjustments [14] (a scoring sketch follows this list). We used it in the present study to measure user satisfaction and the perceived usability of the proposed scheme.
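For reference, the standard SUS scoring from Brooke [14] works as follows: each of the 10 items is answered on a 1–5 scale; odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the total is multiplied by 2.5 to give a 0–100 score. A small sketch (naming is ours):

```python
# Standard SUS scoring (Brooke [14]): odd items contribute (response - 1),
# even items (5 - response); the sum is scaled by 2.5 to the 0-100 range.

def sus_score(responses: list[int]) -> float:
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # i = 0 is item 1 (odd)
                for i, r in enumerate(responses))
    return total * 2.5


# Example: sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]) -> 85.0
```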

Post-Task Questionnaire

Aiming to reveal information about the users’ experience when interacting with the schemes during the study, we designed a post-task questionnaire which included closed- and open-ended questions. The data from the open-ended questions were theme-coded following the approach of Glaser and Strauss [38].

3.5 Apparatus

The participants used the desktop computers of the laboratory. These were powerful enough to run the GUA schemes smoothly, so that poor performance would not affect the participants’ experience. The screen resolution was 1920×1080 pixels in horizontal orientation, and the screen size was 21″. The participants interacted with the schemes using the mouse and the keyboard.

3.6 Procedure

The study took place during the lab exercises of two courses in the fall semester, which the students had to attend once every week. The procedure consisted of eight steps:

  1. Participants’ recruitment: We informed the students of the two courses about our study. We explained to them that participation was not a requirement and would not count towards the final grade of the lab, and that, even after providing their consent, they could opt out of the study at any time. The students who agreed to take part in the study signed a consent form. Students who did not provide their consent did not complete the GEFT. They were assigned to the original scheme and were instructed to use the system to gain access to the course material; their registration, login, and interaction data were not collected.

  2. GEFT session and demographics: During the first lab exercise, the participants undertook the GEFT. Next, they were asked to complete a short questionnaire about demographic information (gender, age, etc.).

  3. FD-I classification: We calculated the GEFT scores and classified each individual as either FD or FI. During the administration and scoring of the GEFT, the directions about the materials, the test procedure, the scoring, and the time limits described in the scoring template provided by [91] were strictly followed.

  4. Group formation: Based on the FD-I classification and the participants’ demographics, we created eight groups of study participants (Table 1), in which each study participant used only one GUA scheme:

    1. Group FD-PGA-S: This group consists of FDs who used PGA with the simple image as the background image.

    2. Group FD-PGA-C: This group consists of FDs who used PGA with the complex image as the background image.

    3. Group FI-PGA-S: This group consists of FIs who used PGA with the simple image as the background image.

    4. Group FI-PGA-C: This group consists of FIs who used PGA with the complex image as the background image.

    5. Group FD-COG-S: This group consists of FDs who used CogniPGA with the simple image as the background image.

    6. Group FD-COG-C: This group consists of FDs who used CogniPGA with the complex image as the background image.

    7. Group FI-COG-S: This group consists of FIs who used CogniPGA with the simple image as the background image.

    8. Group FI-COG-C: This group consists of FIs who used CogniPGA with the complex image as the background image.

  5. Password creation: In the second lab exercise, the study participants were asked to create a graphical password using the assigned GUA scheme.

  6. Login process and exercise: After creating their graphical password, the study participants used it to log in to the system and access the exercise material. The participants were required to perform the login process during each lab exercise (each week) to access the lab exercise material for the next five weeks. If a user failed to enter the gestures correctly three times, they were emailed a one-time text-based password and asked to use it to reset their graphical password.

  7. Questionnaires: In the last week of the study, the participants were asked to fill in the SUS questionnaire and the post-task questionnaire so that they could share their experiences with us.

  8. Data analysis: We analysed the collected data with respect to our research questions, as discussed in the next section.

3.7 Data Collection

We collected quantitative and qualitative data during the study. Computer logs recorded each interaction attempt made by the participants, with timestamps for each action from the moment the image was loaded until the moment the password creation or the login was completed. Computer logs were used not only for quantitative analysis but also for qualitative analysis, as they revealed valuable information about the way users interacted with the schemes. Apart from computer logs, the research team observed the participants during each lab exercise and took notes of any difficulties, comments, and behaviors. The observation was done by a member of the research team who was present at each session and observed the students by sitting at the back of the lab without interacting with them or intervening in the tasks. Participants’ responses to the GEFT, demographics, SUS, and post-task questionnaires were also collected. The users’ IP addresses were monitored to ensure that participants accessed the authentication mechanisms only through the devices located in the lab.

4 Results

To analyze the results quantitatively, we used statistical methods (e. g., ANOVA tests) for a between-subjects design with the GUA scheme (PGA or CogniPGA), the cognitive style (FD or FI), and the image complexity (simple or complex) as the independent variables and the various metrics discussed in the previous section (e. g., number of guesses, number of resets, time to create a password) as the dependent variables. We performed analyses for the main effects and the pairwise comparisons. We mainly report the statistically significant (alpha level of .05) and marginal effects. Dependent-variable data that were not normally distributed were transformed according to the approach proposed by Templeton [80]. Apart from that transformation, all tests met the required assumptions, unless stated otherwise in the following subsections. In each subsection, we first present the quantitative results and then use the qualitative data and the cognitive theory to interpret the findings. The qualitative data derived from observing the participants during the tasks, qualitatively analyzing the log data, and the post-task questionnaires.
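The kind of analysis described above (a three-way between-subjects ANOVA with scheme, cognitive style, and image complexity as factors) could be run as sketched below using statsmodels; the dataframe, its column names, and the dependent variable are assumptions, and the transformation for non-normal data [80] is not shown.

```python
# Sketch of a three-way between-subjects ANOVA (scheme x cognitive
# style x image complexity) using statsmodels. The dataframe layout and
# column names are assumptions; data transformation [80] is not shown.

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols


def three_way_anova(df: pd.DataFrame, dv: str) -> pd.DataFrame:
    model = ols(f"{dv} ~ C(scheme) * C(style) * C(image)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)


# Example (hypothetical data layout):
# df = pd.DataFrame({"scheme": [...], "style": [...],
#                    "image": [...], "guesses": [...]})
# print(three_way_anova(df, "guesses"))
```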

Figure 4: FD and FI CogniPGA users created stronger passwords than the FD and FI PGA users, respectively.

4.1 Graphical Password Strength

Guessability

The analysis revealed that the CogniPGA users created passwords that were stronger than the passwords created by the PGA users (F=855.352, p<.001, η2=.715). Focusing on each cognitive dimension, both FDs and FIs who used CogniPGA created stronger passwords on the complex background image than the FDs and FIs who used PGA, respectively (FD: F=66.397, p<.001, η2=.163; FI: F=855.578, p<.001, η2=.715). Statistically significant differences were also revealed for the passwords drawn on the simple background image by FIs (F=243.100, p<.001, η2=.417), but not for FDs. The results are depicted in Figure 4.

Regarding the simple image, CogniPGA FI participants used more details of the image to draw their gestures on, but, similarly to PGA FI users, they focused on the main parts of the airplane (e. g., they drew a line connecting the two wings). However, CogniPGA FI participants also used the airplane line as a reference point to draw parallel lines more often. Despite the small increase in the strength of the passwords created by FIs who used CogniPGA, these passwords were still weaker than the passwords created by the FD users on the simple image. The strength of the passwords created by FDs was approximately the same on PGA and CogniPGA. In both cases, they used the airplane and the airplane line as reference points and used their creative imagination to draw gestures around the reference points. FD users of CogniPGA also reported using the mask as a reference point for drawing their passwords.

Regarding the complex image, both CogniPGA FD and FI participants created stronger passwords than those of PGA. On PGA, FDs selected social interactions to draw their gestures on, as they have a natural inclination towards social cues, and they faced difficulties exploring the complex image due to their holistic exploration approach. On the other hand, the cognition-based gaze mask enabled the FDs to process the visual information gradually. Their attention was guided away from social interactions, and they noticed objects on the image which they used as reference points for their gestures. CogniPGA helped the FIs create stronger passwords too. The fact that the visual information was provided in chunks, combined with their analytic ability to process visual information, enabled them to identify objects which were not among the salient points of the image. They were able to conceptualize the details of the image better and use them to create their passwords.

Effective Password Space

To further support the differences in password strength, we also report the analysis of the clustering of the selected segments using the J function. Ideally, we would want J to be near 1, indicating that the selected segments were nearly indistinguishable from randomly generated segments. The results (Figure 5) showed that the segments selected by FD and FI users were more randomly dispersed when CogniPGA was used than when PGA was used. This applies to both background images, with the greater difference revealed for the complex image. Regarding the complex image, J(3) was .53 for PGA FD users and .42 for PGA FI users. J was higher both for FD users (J(3)=.74) and FI users (J(3)=.76) who used CogniPGA. This reveals a greater dispersion of the segments on which the gestures were drawn when using CogniPGA, which means that CogniPGA had a bigger effective password space than PGA when the complex image was used in our study. Regarding the simple image, although J was bigger for FD users who used CogniPGA (J(3)=.48) than for FD users who used PGA (J(3)=.42), the difference was very small. For FI users, J was bigger for those who used CogniPGA (J(3)=.41) compared to those who used PGA (J(3)=.21). The results for the simple image also reveal an increase in the effective password space, since the larger J is, the smaller the clustering of the selected segments.

Figure 5: CogniPGA users selected more dispersed and more random areas of the images as part of their passwords, in contrast to PGA users.

Figure 6: Most participants were able to log in on their first attempt, with FDs who used the simple image and PGA outperforming all the other groups. CogniPGA helped the FI users on the simple image and FDs on the complex image to log in on the first attempt (Note: the plot illustrates the range between 80% and 100%; the range between 0% and 80% is dedicated to the first attempt).

4.2 Memorability

Required Login Attempts

In Figure 6, we show the attempts required to successfully log in per cognitive style, GUA scheme, and image complexity. On the simple image, it is evident that most FDs who used PGA successfully logged in on the first attempt. FDs who used CogniPGA were able to log in on the first attempt less often than FDs who used PGA. As reported by CogniPGA FDs, their major issue was remembering the location of the gestures, because they had used cues related to the mask, which was not displayed on the login screen. In many cases, they had to re-enter the password to find the exact location. The analysis of the FIs did not reveal a major difference between the login attempts of those who used PGA and those who used CogniPGA. The percentage of successful logins on the first attempt for CogniPGA FIs was slightly higher than that of PGA FIs, but the difference was not statistically significant (89% vs 88%).

The differences in successful logins are more balanced on the complex image, both for the FI (90% for both schemes) and the FD individuals (92% for CogniPGA and 91% for PGA), regardless of the scheme that was used. CogniPGA mostly affected the FD users, for whom more logins were successful on the first attempt compared to the FD participants who used PGA. We also observe that CogniPGA FDs more often required three attempts to successfully log in. As revealed by the computer logs, some of the gestures they drew were very small and their type was identified incorrectly by the scheme, so they had to perform another attempt to log in.

Figure 7: All participants who used CogniPGA reset their passwords fewer times than participants who used PGA, apart from FDs who used the simple image, where the opposite behavior was revealed.

Overall, both the CogniPGA and PGA schemes performed well, as 91% of the users of both schemes logged in successfully on the first attempt. PGA performed very well in terms of this memorability metric, and through our cognition-based intervention, we managed to maintain the same very high level of performance.

Number of Password Resets

The analysis revealed a statistically significant three-way interaction between FD-I cognitive style, GUA scheme, and image complexity (F=4.057, p=.045, η2=.013). Moreover, there is a statistically marginal effect between CogniPGA FDs and PGA FDs who used either the simple (F=3.287, p=.061, η2=.010) or the complex (F=2.757, p=.078, η2=.009) background image. Focusing on each cognitive style, FD participants of PGA performed fewer resets than FD participants of CogniPGA, while in the remaining cases, the users of CogniPGA performed fewer resets. The results are depicted in Figure 7.

The overall better performance of CogniPGA users can be explained by the gradual reveal of the visual information in CogniPGA. The fade-out effect enabled both the FD and the FI users to take in the visual information in chunks and decide more consciously which points of reference they would use to draw their gestures. Because they explored the images in more detail when creating their passwords, they were better able to remember their choices.

However, focusing on the complexity of the background image, we observe that PGA FD participants reset their passwords fewer times than CogniPGA FD participants when the simple background image was used. As discussed earlier, FDs used the salient points of the images as reference points when creating their passwords. CogniPGA FDs used the mask as a reference point to draw their gestures and, as a result, were unable to log in, because the mask is not displayed during login and they could not locate the exact points where they had drawn their gestures. Therefore, this behavior affected not only the number of required login attempts but also the number of password resets. The FD participants did remember the types of gestures and the approximate location (e. g., to the left of the airplane, at the horizontal middle of the image), but having set the mask as a reference point, they failed to log in because the blue background of the image did not provide any cues for remembering the exact location of the gestures.

Figure 8: On the simple image, the CogniPGA participants created their password in less time than the PGA participants. The opposite behavior was revealed for the complex image.

Regarding the complex image, this behavior was not observed, because the image offered a vast amount of visual cues which the FD participants could use as reference background points for drawing their gestures, regardless the way they selected those cues (e. g., spot a point of reference at the edge of a layer of the mask or at the left of the airplane). Such difference on the number of resets was not observed between the PGA and the CogniPGA FI users due to their analytic approach when exploring an image. They identified the details of the image, and drew their gestures on the details. In the case of the FIs, the mask acted as a mechanism for gradually revealing the visual content and enabled them to explore more details, whilst for the FDs the mask acted as an element that triggered their imagination.

4.3 Usability

Time to Create Password

The analysis revealed an interaction effect between the GUA scheme (PGA or CogniPGA) and the complexity of the background image (F=5.931, p=.016, η2=.032). PGA participants who used the simple image required more time to create their password than participants who used the complex image; on CogniPGA, the opposite behavior was observed. Focusing on each cognitive dimension, a statistically marginal effect (F=3.650, p=.058, η2=.020) was revealed between the PGA and CogniPGA FDs who used the simple background image. The results are depicted in Figure 8.

On the simple image, PGA FD participants required about 65 seconds on average to create their password, despite the fact that the image did not offer much visual information. They were seeking cues that they could use to create their password and, because of the holistic visual exploration approach they followed, they were unable to focus on the details of the airplane and the airplane line. They rather used both as a single point of reference and created their passwords on the left or right side. CogniPGA FD participants, on the other hand, created their passwords faster because they used the airplane, the airplane line, and the edges of the mask as reference points; thus, it was easier for them to identify points of reference. For the FI participants, a difference in the time to create a password was also observed, but it was not statistically significant, given the analytic approach of the FIs when exploring a visual scene. In both cases, they used the details of the airplane and the airplane line to draw their passwords on the simple background image.

On the complex image, PGA FD and FI participants created their passwords faster than CogniPGA participants. Despite the large amount of visual information on the complex image, the salient points drew the participants’ attention. In particular, FDs’ attention was drawn by the interactions happening in the image (e. g., they drew lines to connect the gaze of people looking at each other), given that they are attentive to social cues. On the other hand, FIs, given their tendency to deconstruct complex visual information into small chunks and their analytic skill in detecting simple shapes, focused on details such as the hair, the eyes, the notebook, and the pen. As a result, they quickly selected the points they would use to draw their gestures. CogniPGA guided both FDs’ and FIs’ attention away from such points, enabling them to pay more attention to other visual content of the image. Thus, it took them longer to create their password, because they explored more possible reference points, given that the masks drove their attention away from the points they would unconsciously focus on if the image were fully displayed to them immediately (i. e., without applying the cognition-based gaze mask).

Time to Log In

As depicted in Figure 9, no statistically significant differences were revealed between the PGA and CogniPGA participants. This result reveals that driving the users’ attention away from points they would normally use as points of reference was not reflected in the time to log in. The unconscious change of their visual exploration approach enabled them to process the images in more detail, and they made their choices consciously, based on points that they considered interesting to use as reference points. In addition, the fact that the full image was displayed when they were creating their password enabled them to store the full information related to the location of their gestures. This was also reflected in the memorability analysis, as discussed in the previous subsection.

Figure 9: No statistically significant differences were revealed between users regarding the time to log in.

System Usability Scale (SUS)

The total SUS score was comparable for the two schemes, with the CogniPGA score being slightly higher than the PGA score (78% vs 75%); the difference was not statistically significant. We focus on two SUS items that are important for the GUA scheme: “The GUA scheme was unnecessarily complex” and “I found the GUA scheme cumbersome to use”. The score on both items was higher for FD participants who used CogniPGA with the simple background image compared to FD participants who used PGA with the simple background image (F=7.775, p=.006, η2=.039 and F=6.026, p=.015, η2=.023). A statistically significant difference was also found on the first item for FI participants who used CogniPGA with the simple background image (F=5.461, p=.020, η2=.026). Both FD and FI participants who used CogniPGA with the simple background image reported that the mask was unnecessary, because some of the initially revealed layers contained chunks that provided similar information, such as parts of the blue background; thus, the participants felt that the mask did not help them in the password creation process. Instead, they felt that the mask prevented them from completing the task faster, which is inconsistent with the time-to-create-password results, which revealed that CogniPGA participants required less time to complete the task than PGA participants.

5 Discussion

5.1 Summary of Findings

This paper presents a longitudinal field study of CogniPGA, a cognition-based GUA scheme, and explores security, memorability, and usability aspects from the FD-I cognitive style perspective for two different complexity levels of the background image. From a password strength perspective, CogniPGA enabled FD participants to create stronger passwords when using a complex background image, and enabled FI participants to create stronger passwords regardless of the complexity of the background image. The analysis of the effective password space revealed a smaller clustering of areas for CogniPGA users compared to PGA users, which also confirms the previous result.

In terms of memorability, most logins were successful on the first attempt regardless of the scheme for both FD and FI participants. Fewer passwords were reset by participants who used CogniPGA and the complex background image and by FI participants who used CogniPGA and the simple background image. However, CogniPGA negatively affected FDs who used the simple image in terms of memorability, as they more often required a second attempt to successfully log in and also reset their passwords more often. We should note that the PGA scheme already had high performance in terms of memorability, and through the cognition-based intervention we managed to maintain the same very high level of performance.

In terms of usability, CogniPGA affected the time to create a password: participants who used the simple image required less time to create their password, while participants who used the complex image required more time than PGA participants. No effect of the scheme was revealed for the time to log in. Regarding the SUS, the scores of CogniPGA and PGA were comparable, with participants finding the scheme with the cognition-based mask unnecessarily complex and cumbersome to use when the background image was of low complexity with several visually similar regions (e. g., blue sky).

Overall, the results confirm that using cognition-based interventions in GUA enables users to improve the strength of the passwords they create, with a slight effect on both the memorability of the passwords and the usability of the scheme. This work provides evidence that applying interventions that take advantage of the individual visual processing abilities of people to existing GUA schemes enables users to unconsciously consider more options when creating graphical passwords.

5.2 Design Implications for Cognition-Based GUA

The results revealed that the effectiveness and efficiency of CogniPGA are associated with two factors: the users’ cognitive style and the image complexity. When PGA was introduced [67], it was suggested to use images with more than ten points of interest to ensure good password choices. This research provides evidence that this is not entirely true and that a “one size fits all” approach is not the best when it comes to GUA. People, depending on their ability to process visual information holistically or analytically, make different password choices, which affect password strength. GUA designers’ decisions and recommendations may unintentionally influence the users’ performance and experience depending on their cognitive style. For example, the design decision of allowing people to freely choose their background image could result in the selection of a simple background image (e. g., an image with less than five points of interest), which would make it harder for FIs to create a strong password but would not influence the FDs’ password selection strategy.

Focusing on the results of our study, our direct recommendation for GUA designers is not to apply CogniPGA for FD users when they select a simple background image. In all other cases, given that CogniPGA benefited the users in terms of security and the results in terms of memorability and usability were neutral to positive, we recommend the application of CogniPGA. More research is required to identify the level of complexity above which FDs are positively affected by the application of a cognition-based intervention such as the one proposed herein. Regarding FIs, this work provides evidence that they tend to select predictable passwords when using simple images, due to their analytic approach to processing visual scenes. Although CogniPGA helped them create stronger passwords on the simple image compared to PGA FIs, they still created rather weak passwords, using the very few objects available in the image to draw their gestures on. Therefore, our recommendation is to discourage FI users from choosing simple background images or, when such an image is selected, to apply the cognition-based gaze mask. Again, more research is required to identify the image complexity level above which FIs benefit from their inherent ability to process complex visual scenes and deconstruct them into simple elements during the password creation process.
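
To make these recommendations concrete, the following minimal sketch (Python) encodes the decision rule described above. The names CognitiveStyle and recommend_intervention are illustrative only and not part of the CogniPGA implementation; the binary simple/complex flag stands in for the image complexity estimation discussed in Section 5.3.

```python
# Minimal sketch of the recommendation rule described above (illustrative names).
from enum import Enum

class CognitiveStyle(Enum):
    FD = "field-dependent"
    FI = "field-independent"

def recommend_intervention(style: CognitiveStyle, image_is_simple: bool) -> dict:
    """Return a hypothetical per-session recommendation for a cued-recall GUA scheme."""
    if style is CognitiveStyle.FD and image_is_simple:
        # FD users gained no benefit from the mask on simple images and showed
        # reduced memorability, so fall back to plain PGA and suggest a richer image.
        return {"apply_gaze_mask": False, "nudge_complex_image": True}
    if style is CognitiveStyle.FI and image_is_simple:
        # FIs tend to pick predictable gestures on simple images: either
        # discourage the simple image or apply the cognition-based mask.
        return {"apply_gaze_mask": True, "nudge_complex_image": True}
    # In all other cases the cognition-based mask is recommended.
    return {"apply_gaze_mask": True, "nudge_complex_image": False}
```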

We argue that taking advantage of the inherent cognitive styles of users and unconsciously guiding them to make more secure password choices can be a step towards solving the graphical passwords problem. Conventional user authentication schemes either provide instructions or enforce policies, following a system-first approach. We adopt a user-first approach, place the user at the center of our design process, and provide solutions where the system takes a step towards the user and not the other way around. Our previous research aimed at exploring and understanding user choices [47], and the present work is a first step towards applying cognition-based adaptation rules to systems and services. More work needs to be done in this direction, which requires interdisciplinary research and collaboration between technology experts and cognitive psychologists in order to build accurate behavioral user models and apply them in the design process of such services. The emergence of eye-tracking technologies enables not only the quantification of security aspects of GUA schemes [49] but also the observation of the conscious and unconscious visual behavior of users, which can in turn shed light on the underlying visual processes when using technology. This could be combined with other tracking technologies, such as brain activity monitoring, to enable a deeper understanding of user behavior, which would open new, unexplored design paths.

5.3 Cognition-Based GUA Framework

Through the implications of our work, it is evident that people with different cognitive characteristics would benefit (in terms of password selection, memorability, and usability) from personalized cognition-based interventions in different cases (e. g., the use of a simple or a complex image as the background cue of a cued-recall GUA scheme). To this end, we propose a cognition-based framework for graphical user authentication, which provides users with personalized cognition-based interventions to help them perform better in the authentication task (in terms of security, memorability, and usability), leveraging their inherent cognitive characteristics. The framework conceptually consists of two main modules: a) the modeling module, which is responsible for eliciting and storing the user’s overall context of use during interaction (human and technology specific), and b) the adaptation module, which is responsible for mapping the modeling factors to GUA design factors, aiming to deliver the most suitable GUA scheme to each user. As expected, the results of longitudinal studies, like the one reported here, would drive the design of such adaptive GUA schemes, as they would provide specific context-based recommendation rules. For example, when the user who performs the authentication task is an FI, the system would nudge them to choose a complex background image and would help them explore it visually more efficiently by providing a cognition-based mask.
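
As a rough illustration of how the two modules could fit together, the sketch below shows the modeling module as a stored user context and the adaptation module as a mapping from that context to GUA design factors via a pluggable policy. All names (UserContext, GUAConfig, AdaptationModule, example_policy) are hypothetical; this is a conceptual sketch of the framework, not an implementation of it.

```python
# Conceptual sketch of the proposed framework; all names are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class UserContext:
    """Modeling module: elicited human- and technology-specific factors."""
    cognitive_style: str                      # e.g., "FD" or "FI"
    gaze_metrics: Dict[str, float] = field(default_factory=dict)
    device: str = "desktop"

@dataclass
class GUAConfig:
    """GUA design factors delivered to the authentication scheme."""
    suggest_complex_background: bool
    apply_gaze_mask: bool

class AdaptationModule:
    """Maps modeling factors to design factors through a pluggable policy."""
    def __init__(self, policy: Callable[[UserContext], GUAConfig]):
        self.policy = policy

    def configure(self, context: UserContext) -> GUAConfig:
        return self.policy(context)

def example_policy(ctx: UserContext) -> GUAConfig:
    # Rule taken from the text: an FI user is nudged towards a complex
    # background image and is shown the cognition-based gaze mask.
    if ctx.cognitive_style == "FI":
        return GUAConfig(suggest_complex_background=True, apply_gaze_mask=True)
    # Rules for other groups (e.g., FD users) would be derived from further studies.
    return GUAConfig(suggest_complex_background=True, apply_gaze_mask=False)
```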

Deploying cognition-based GUA schemes entails three main challenges for designers, who need to address the following questions: a) how does the system know the cognitive style of each user? b) how does the system know the complexity of the background image? and c) how are the cognition-based interventions built? Regarding (a), the cognitive style elicitation, it is important to stress that it should be performed implicitly, with minimal (or even no) intervention in the user’s task. Considering that psychometric tools are typically based on “pen-and-paper” techniques, implicit and transparent elicitation of cognitive style is a burden for designers. To overcome this issue, they could use third-party services [10], [46], [71], which identify users’ cognitive characteristics in real time without intervening in the user task, as they leverage interaction and eye-tracking data implicitly and transparently. The elicitation does not need to be performed through an authentication task; it could be based on any pre-authentication activity. For example, when users create their profiles, they could perform a short exploratory or goal-oriented visual search task (e. g., look at a set of images to select their background image) to enable the system to unobtrusively elicit their style within only a few seconds [72].
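
As a sketch of what such an implicit elicitation could look like, the snippet below computes two common gaze metrics (fixation count and mean fixation duration) from a short pre-authentication visual task and passes them to a pre-trained classifier. The feature set, the Fixation structure, and the classifier interface are assumptions for illustration; they do not reproduce the methods of the third-party services cited above.

```python
# Illustrative sketch only: the services cited in the text may use different
# gaze features, models, and calibration procedures.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Fixation:
    x: float
    y: float
    duration_ms: float

def gaze_features(fixations: List[Fixation]) -> Tuple[float, float]:
    """Fixation count and mean fixation duration from a short visual task."""
    if not fixations:
        return 0.0, 0.0
    mean_dur = sum(f.duration_ms for f in fixations) / len(fixations)
    return float(len(fixations)), mean_dur

def classify_style(fixations: List[Fixation], model) -> str:
    """`model` is assumed to be any pre-trained binary classifier with a
    scikit-learn-style predict() method, trained on labeled FD/FI gaze data."""
    count, mean_dur = gaze_features(fixations)
    return "FI" if model.predict([[count, mean_dur]])[0] == 1 else "FD"
```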

Regarding (b), the elicitation of the image complexity, we should note that the images can be provided in two ways: the GUA scheme provides the users with pre-defined background images, or the users are free to choose their own background images. In the first case, the image complexity can be easily measured through automatic or semi-automatic techniques, as the images are stored resources of the system. The challenge of measuring image complexity arises when the users upload their own images to be used as background cues, because such an image is probably a resource that is not stored in the system, and thus the system is not aware of its characteristics. In this case, the complexity of the uploaded image can be assessed in real time through automatic computational methods, such as saliency detectors [69], entropy estimators [93], and content-based image retrieval techniques [88].
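
For instance, one very simple complexity proxy that can be computed in real time is the Shannon entropy of the grayscale intensity histogram. The sketch below (Python, using Pillow and NumPy) illustrates this idea; the cut-off value is an arbitrary placeholder for illustration, not a threshold derived from our study or from the cited estimators.

```python
# Crude image-complexity proxy: entropy of the grayscale histogram.
import numpy as np
from PIL import Image

def image_entropy(path: str) -> float:
    """Shannon entropy (in bits) of the grayscale intensity histogram."""
    img = Image.open(path).convert("L")
    hist = np.bincount(np.asarray(img).ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def is_simple_background(path: str, threshold_bits: float = 6.0) -> bool:
    # threshold_bits is an illustrative cut-off, not a value from the study.
    return image_entropy(path) < threshold_bits
```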

Regarding (c), the creation of cognition-based interventions, such as masks, we should stress that they are based on the different information processing strategies (e. g., when selecting a gesture) that individuals with different cognitive styles follow. Based on ground-truth data, designers could build predictive models that deploy such cognition-centered interventions. For example, in our scenario, we collected eye-tracking data of FD and FI participants performing a GUA task. Based on these data, we identified differences in the visual behavior of FDs and FIs, and we built the cognition-based gaze masks, which are unique for each cognitive style group. Another open issue is how to collect the ground-truth data and build the basis of the predictive models that would automatically generate the cognition-based interventions. To do that, user studies that explore the differences (e. g., gaze metrics, interaction data) between individuals with different cognitive characteristics must be conducted. The aforementioned components could be integrated to build predictive tools that automatically generate data simulating the users’ behavior on previously unseen images.
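
As one hypothetical way to turn such ground-truth gaze data into a mask, the sketch below aggregates the fixations of a cognitive style group into a coarse grid and orders the grid cells for progressive reveal, exposing the least-fixated regions first so that users are nudged towards areas their group would otherwise overlook. This is an assumption-laden illustration of the idea, not the actual CogniPGA mask-generation procedure.

```python
# Hypothetical mask construction from group-level fixation data;
# the actual CogniPGA masks may be built differently.
from collections import Counter
from typing import List, Tuple

def reveal_order(fixations: List[Tuple[float, float]],
                 width: int, height: int,
                 grid: int = 6) -> List[Tuple[int, int]]:
    """Return grid cells ordered from least- to most-fixated."""
    counts = Counter()
    for x, y in fixations:
        col = min(int(x / width * grid), grid - 1)
        row = min(int(y / height * grid), grid - 1)
        counts[(row, col)] += 1
    cells = [(r, c) for r in range(grid) for c in range(grid)]
    # Least-visited cells first: the regions the group tends to overlook.
    return sorted(cells, key=lambda cell: counts[cell])
```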

6 Limitations and Ethical Considerations

6.1 Limitations

While our findings provide valuable insights, our study also has limitations. First, our results are limited by the default images we used. Although we selected images that were representative of the most popular image categories used as backgrounds in GUA schemes and that provide opportunities for selecting passwords in any area of the image, some areas may still have been more attractive to users than others and hence may have influenced the selection of the password points. Moreover, the guessing algorithm we used to estimate the password strength was very simple; however, the aim of our study was not to create and test another cracking algorithm, but to use it as a valid approach for measuring and comparing the strength of a given set of passwords.

The participants were classified as either FD or FI with the use of GEFT. Given that this tool identifies cognitive differences along a continuum, the use of a cut-off score may misclassify participants who fall close to it. Nonetheless, we should stress that the distribution of the scores of the study participants is similar to that of the general public [53], [65], [70]. Another limitation is the limited age span and the non-diverse background of the study participants. Considering that cognitive styles rarely change over the lifespan, we are confident that our results generalize to samples with different age spans and backgrounds.

6.2 Ethical Considerations

To measure the guessability of the graphical passwords and the effective password space of the GUA schemes, we required access to the plaintext format of the passwords, which raises security, privacy, and ethical issues. To address this issue, we hashed and stored the graphical passwords, along with other interaction data (e. g., the created gesture, which was part of the graphical password), in the databases of the PGA and CogniPGA schemes. The plaintext format of the graphical passwords, along with the scheme (PGA or CogniPGA) and the cognitive style (FD or FI) that each user was associated with, was stored in a separate local repository during the stages of password creation and password reset, without any information that could link the records to the participants. The research team clearly communicated this process to the participants, who provided their consent for the data collection, storage, analysis, and publication of the results.
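
The separation described above could be implemented along the lines of the following sketch, in which the gesture is serialized, salted, and hashed before it enters the scheme’s database, while the plaintext copy used for the guessability analysis is written to a separate store under a random key with no link to the participant’s identity. This is a minimal illustration of the data-handling principle, not the study’s actual code, and the helper names are hypothetical.

```python
# Minimal sketch of the storage separation described in the text (illustrative only).
import hashlib, json, os, uuid

def hash_gesture(gesture: dict, salt: bytes) -> str:
    """Salted SHA-256 digest of a serialized gesture for the scheme's database."""
    payload = json.dumps(gesture, sort_keys=True).encode("utf-8")
    return hashlib.sha256(salt + payload).hexdigest()

def store_for_analysis(gesture: dict, scheme: str, style: str, repo: dict) -> str:
    """Plaintext copy for the guessability analysis, kept in a separate
    repository under a random key with no participant-identifying data."""
    key = uuid.uuid4().hex
    repo[key] = {"gesture": gesture, "scheme": scheme, "cognitive_style": style}
    return key

# Example usage
salt = os.urandom(16)
gesture = {"type": "line", "points": [(10, 20), (80, 45)]}
digest = hash_gesture(gesture, salt)
analysis_repo: dict = {}
store_for_analysis(gesture, "CogniPGA", "FI", analysis_repo)
```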

7 Conclusion

In this paper we reported the results of a longitudinal study of CogniPGA. We showed that cognition-based interventions enabled the users to create stronger passwords, and that there was also a slight improvement in the memorability of the passwords and the usability of the scheme compared to PGA. Hence, this work provides evidence of the efficiency and effectiveness of considering the cognitive styles of users when designing GUA schemes. The usable security issue originally arose because users’ inherent abilities, such as information processing skills, were not considered as design factors when designing user authentication schemes. Our work is a step towards the consideration of the human factor and a user-first design approach, with the goal of making systems and services accessible to everyone.

Recently, research efforts have focused on user authentication in new emerging environments (e. g., augmented and virtual reality). For example, Yu et al. [94] and George et al. [36] explored how PINs and Patterns can be adjusted to virtual reality modalities, and Hadjidemetriou et al. [39] explored the use of PGA in mixed reality. George et al. [37] proposed a concept in which users authenticate by selecting a series of 3D objects in a room using a pointer. Similarly, Funk et al. [34] proposed LookUnlock, a scheme where passwords are composed of spatial and virtual targets. Considering that research efforts on authentication in these emerging environments focus on graphical authentication, we believe it is worth investigating the effects of cognitive styles on GUA in such contexts, given the increased visual processing demand. Our goal is to design a centralized service that supports GUA designers through recommendations related to human cognitive factors, GUA characteristics, and technology, and enables them to provide more secure and usable schemes, thus benefiting not only the users but also the service providers.

Award Identifier / Grant number: 617

Funding statement: This research was supported by the General Secretariat for Research and Technology (GSRT) and the Hellenic Foundation for Research and Innovation (HFRI) – 1st Proclamation of Scholarships for PhD Candidates / Code: 617.

About the authors

Christina Katsini

Christina Katsini is a Ph.D. Candidate at the University of Patras, Greece. Her interests lie in understanding how people interact with systems and services, and in designing for the people with the people. In her Ph.D. research, she is investigating user choices in graphical user authentication from a human cognitive perspective.

Nikolaos Avouris

Nikolaos Avouris (MSc, Ph.D., https://sites.google.com/view/avouris) is an electrical and computer engineer with a research interest in human-computer interaction. He is a Professor of Software Technology and Human-Computer Interaction in Electrical and Computer Engineering Department of University of Patras, Greece. He is Head of Interactive Technologies Lab and HCI Group.

Christos Fidas

Christos Fidas (Ph.D., http://cfidas.info) is an electrical and computer engineer, and senior researcher with an interest in cultural heritage informatics, usable and secure information systems, and human socio-cultural and cognitive factors. He is an Assistant Professor at the Department of Cultural Heritage Management and New Technologies, University of Patras, Greece.

Acknowledgment

We would like to thank all the participants who took part in our study. Special thanks go to the teaching staff of the two laboratories for their excellent cooperation.

References

[1] Yasemin Acar, Michael Backes, Sascha Fahl, Simson Garfinkel, Doowon Kim, Michelle L. Mazurek and Christian Stransky, Comparing the Usability of Cryptographic APIs, in: 2017 IEEE Symposium on Security and Privacy (SP), pp. 154–171, May 2017. doi: 10.1109/SP.2017.52

[2] Florian Alt, Stefan Schneegass, Alireza Sahami Shirazi, Mariam Hassib and Andreas Bulling, Graphical Passwords in the Wild: Understanding How Users Choose Pictures and Passwords in Image-based Authentication Schemes, in: Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI ’15, pp. 316–322, ACM, New York, NY, USA, 2015.

[3] Florian Alt, Mateusz Mikusz, Stefan Schneegass and Andreas Bulling, Memorability of Cued-recall Graphical Passwords with Saliency Masks, in: Proceedings of the 15th International Conference on Mobile and Ubiquitous Multimedia, MUM ’16, pp. 191–200, ACM, New York, NY, USA, 2016.

[4] Charoula Angeli, Nicos Valanides and Paul Kirschner, Field Dependence–Independence and Instructional-Design Effects on Learners’ Performance with a Computer-Modeling Tool, Computers in Human Behavior 25 (2009), 1355–1366. doi: 10.1016/j.chb.2009.05.010

[5] Nalin Asanka Gamagedara Arachchilage, Steve Love and Konstantin Beznosov, Phishing Threat Avoidance Behaviour: An Empirical Investigation, Computers in Human Behavior 60 (2016), 185–197. doi: 10.1016/j.chb.2016.02.065

[6] Steven J. Armstrong, Eva Cools and Eugene Sadler-Smith, Role of Cognitive Styles in Business and Management: Reviewing 40 Years of Research, International Journal of Management Reviews 14 (2012), 238–262. doi: 10.1111/j.1468-2370.2011.00315.x

[7] Adam J. Aviv, Devon Budzitowski and Ravi Kuber, Is Bigger Better? Comparing User-Generated Passwords on 3×3 vs. 4×4 Grid Sizes for Android’s Pattern Unlock, in: Proceedings of the 31st Annual Computer Security Applications Conference, ACSAC 2015, pp. 301–310, ACM, New York, NY, USA, 2015. doi: 10.1145/2818000.2818014

[8] Marios Belk, Christos Fidas, Panagiotis Germanakos and George Samaras, The Interplay Between Humans, Technology and User Authentication, Computers in Human Behavior 76 (2017), 184–200. doi: 10.1016/j.chb.2017.06.042

[9] Marios Belk, Christos Fidas, Christina Katsini, Nikolaos Avouris and George Samaras, Effects of Human Cognitive Differences on Interaction and Visual Behavior in Graphical User Authentication, in: Human-Computer Interaction – INTERACT 2017 (Regina Bernhaupt, Girish Dalvi, Anirudha Joshi, Devanuj K. Balkrishan, Jacki O’Neill and Marco Winckler, eds.), pp. 287–296, Springer International Publishing, Cham, 2017. doi: 10.1007/978-3-319-67687-6_19

[10] Shlomo Berkovsky, Ronnie Taib, Irena Koprinska, Eileen Wang, Yucheng Zeng, Jingjie Li and Sabina Kleitman, Detecting Personality Traits Using Eye-Tracking Data, in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI ’19, pp. 221:1–221:12, ACM, New York, NY, USA, 2019. doi: 10.1145/3290605.3300451

[11] Andrea Bianchi, Ian Oakley and Hyoungshick Kim, PassBYOP: Bring Your Own Picture for Securing Graphical Passwords, IEEE Transactions on Human-Machine Systems 46 (2016), 380–389. doi: 10.1109/THMS.2015.2487511

[12] Robert Biddle, Mohammad Mannan, Paul C. van Oorschot and Tara Whalen, User Study, Analysis, and Usable Security of Passwords Based on Digital Objects, IEEE Transactions on Information Forensics and Security 6 (2011), 970–979. doi: 10.1109/TIFS.2011.2116781

[13] Robert Biddle, Sonia Chiasson and Paul C. van Oorschot, Graphical Passwords: Learning from the First Twelve Years, ACM Computing Surveys 44 (2012), 19:1–19:41. doi: 10.1145/2333112.2333114

[14] John Brooke, SUS - A Quick and Dirty Usability Scale, Usability Evaluation in Industry (Patrick W. Jordan, Bruce Thomas, Bernard A. Weerdmeester and Ian L. McClelland, eds.), Taylor & Francis, London, UK, 1996.

[15] Sacha Brostoff and M. Angela Sasse, Are Passfaces More Usable Than Passwords? A Field Trial Investigation, in: People and Computers XIV – Usability or Else! (Sharon McDonald, Yvonne Waern and Gilbert Cockton, eds.), pp. 405–424, Springer London, London, 2000. doi: 10.1007/978-1-4471-0515-2_27

[16] Andreas Bulling, Florian Alt and Albrecht Schmidt, Increasing the Security of Gaze-based Cued-recall Graphical Passwords Using Saliency Masks, in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’12, pp. 3011–3020, ACM, New York, NY, USA, 2012. doi: 10.1145/2207676.2208712

[17] Hsin-Yi Chiang and Sonia Chiasson, Improving User Authentication on Mobile Devices: A Touchscreen Graphical Password, in: Proceedings of the 15th International Conference on Human-computer Interaction with Mobile Devices and Services, MobileHCI ’13, pp. 251–260, ACM, New York, NY, USA, 2013. doi: 10.1145/2493190.2493213

[18] Sonia Chiasson, Robert Biddle and Paul C. van Oorschot, A Second Look at the Usability of Click-based Graphical Passwords, in: Proceedings of the 3rd Symposium on Usable Privacy and Security, SOUPS ’07, pp. 1–12, ACM, New York, NY, USA, 2007. doi: 10.1145/1280680.1280682

[19] Sonia Chiasson, Paul C. van Oorschot and Robert Biddle, Graphical Password Authentication Using Cued Click Points, in: Computer Security – ESORICS 2007 (Joachim Biskup and Javier López, eds.), pp. 359–374, Springer Berlin Heidelberg, Berlin, Heidelberg, 2007. doi: 10.1007/978-3-540-74835-9_24

[20] Sonia Chiasson, Alain Forget, Robert Biddle and Paul C. van Oorschot, Influencing Users Towards Better Passwords: Persuasive Cued Click-points, in: Proceedings of the 22nd British HCI Group Annual Conference on People and Computers: Culture, Creativity, Interaction - Volume 1, BCS-HCI ’08, pp. 121–130, British Computer Society, Swinton, UK, 2008. doi: 10.14236/ewic/HCI2008.12

[21] Sonia Chiasson, Alain Forget, Elizabeth Stobert, Paul C. van Oorschot and Robert Biddle, Multiple Password Interference in Text Passwords and Click-based Graphical Passwords, in: Proceedings of the 16th ACM Conference on Computer and Communications Security, CCS ’09, pp. 500–511, ACM, New York, NY, USA, 2009. doi: 10.1145/1653662.1653722

[22] Sonia Chiasson, Elizabeth Stobert, Alain Forget, Robert Biddle and Paul C. van Oorschot, Persuasive Cued Click-points: Design, Implementation, and Evaluation of a Knowledge-based Authentication Mechanism, IEEE Transactions on Dependable and Secure Computing 9 (2012), 222–235. doi: 10.1109/TDSC.2011.55

[23] Soumyadeb Chowdhury, Ron Poet and Lewis Mackenzie, A Comprehensive Study of the Usability of Multiple Graphical Passwords, in: Human-Computer Interaction – INTERACT 2013 (Paula Kotzé, Gary Marsden, Gitte Lindgaard, Janet Wesson and Marco Winckler, eds.), pp. 424–441, Springer Berlin Heidelberg, Berlin, Heidelberg, 2013. doi: 10.1007/978-3-642-40477-1_26

[24] Gradeigh D. Clark, Janne Lindqvist and Antti Oulasvirta, Composition Policies for Gesture Passwords: User Choice, Security, Usability and Memorability, in: 2017 IEEE Conference on Communications and Network Security (CNS), pp. 1–9, IEEE, October 2017.

[25] Darren Davis, Fabian Monrose and Michael K. Reiter, On User Choice in Graphical Password Schemes, in: Proceedings of the 13th Conference on USENIX Security Symposium - Volume 13, SSYM’04, pp. 151–164, USENIX Association, Berkeley, CA, USA, 2004.

[26] Antonella De Angeli, Mike Coutts, Lynne Coventry, Graham I. Johnson, David Cameron and Martin H. Fischer, VIP: A Visual Approach to User Authentication, in: Proceedings of the Working Conference on Advanced Visual Interfaces, AVI ’02, pp. 316–323, ACM, New York, NY, USA, 2002. doi: 10.1145/1556262.1556312

[27] Antonella De Angeli, Lynne Coventry, Graham Johnson and Karen Renaud, Is a Picture Really Worth a Thousand Words? Exploring the Feasibility of Graphical Authentication Systems, International Journal of Human-Computer Studies 63 (2005), 128–152. doi: 10.1016/j.ijhcs.2005.04.020

[28] Rachna Dhamija and Adrian Perrig, Déjà Vu: A User Study Using Images for Authentication, in: Proceedings of the 9th Conference on USENIX Security Symposium - Volume 9, SSYM’00, pp. 45–58, USENIX Association, Berkeley, CA, USA, 2000.

[29] Paul Dunphy and Jeff Yan, Do Background Images Improve “Draw a Secret” Graphical Passwords?, in: Proceedings of the 14th ACM Conference on Computer and Communications Security, CCS ’07, pp. 36–47, ACM, New York, NY, USA, 2007. doi: 10.1145/1315245.1315252

[30] Paul Dunphy, Andreas P. Heiner and N. Asokan, A Closer Look at Recognition-based Graphical Passwords on Mobile Devices, in: Proceedings of the Sixth Symposium on Usable Privacy and Security, SOUPS ’10, pp. 3:1–3:12, ACM, New York, NY, USA, 2010. doi: 10.1145/1837110.1837114

[31] Rosanne English and Ron Poet, Measuring the Revised Guessability of Graphical Passwords, in: 2011 5th International Conference on Network and System Security, pp. 364–368, September 2011. doi: 10.1109/ICNSS.2011.6060031

[32] Yannick Forster, Frederik Naujoks and Alexandra Neukum, Your Turn or My Turn?: Design of a Human-Machine Interface for Conditional Automation, in: Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Automotive’UI 16, pp. 253–260, ACM, New York, NY, USA, 2016. doi: 10.1145/3003715.3005463

[33] Enrique Frias-Martinez, Sherry Y. Chen and Xiaohui Liu, Evaluation of a Personalized Digital Library based on Cognitive Styles: Adaptivity vs. Adaptability, International Journal of Information Management 29 (2009), 48–56. doi: 10.1016/j.ijinfomgt.2008.01.012

[34] Markus Funk, Karola Marky, Iori Mizutani, Mareike Kritzler, Simon Mayer and Florian Michahelles, LookUnlock: Using Spatial-Targets for User-Authentication on HMDs, in: Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, CHI EA ’19, pp. LBW0114:1–LBW0114:6, ACM, New York, NY, USA, 2019. doi: 10.1145/3290607.3312959

[35] Haichang Gao, Zhongjie Ren, Xiuling Chang, Xiyang Liu and Uwe Aickelin, A New Graphical Password Scheme Resistant to Shoulder-Surfing, in: 2010 International Conference on Cyberworlds, pp. 194–199, IEEE, October 2010.

[36] Ceenu George, Mohamed Khamis, Emanuel von Zezschwitz, Marinus Burger, Henri Schmidt, Florian Alt and Heinrich Hussmann, Seamless and Secure VR: Adapting and Evaluating Established Authentication Systems for Virtual Reality, in: Proceedings 2017 Workshop on Usable Security, NDSS, Internet Society, 2017. doi: 10.14722/usec.2017.23028

[37] Ceenu George, Mohamed Khamis, Daniel Buschek and Heinrich Hussmann, Investigating the Third Dimension for Authentication in Immersive Virtual Reality and in the Real World, in: 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 277–285, IEEE, March 2019. doi: 10.1109/VR.2019.8797862

[38] Barney G. Glaser and Anselm L. Strauss, Discovery of Grounded Theory: Strategies for Qualitative Research, Routledge, New York, NY, USA, July 2017. doi: 10.4324/9780203793206

[39] George Hadjidemetriou, Marios Belk, Christos Fidas and Andreas Pitsillides, Picture Passwords in Mixed Reality: Implementation and Evaluation, in: Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, CHI EA ’19, pp. LBW0263:1–LBW0263:6, ACM, New York, NY, USA, 2019. doi: 10.1145/3290607.3313076

[40] Jon-Chao Hong, Ming-Yueh Hwang, Ker-Ping Tam, Yi-Hsuan Lai and Li-Chun Liu, Effects of Cognitive Style on Digital Jigsaw Puzzle Performance: A GridWare Analysis, Computers in Human Behavior 28 (2012), 920–928. doi: 10.1016/j.chb.2011.12.012

[41] Wei Hu, Xiaoping Wu and Guoheng Wei, The Security Analysis of Graphical Passwords, in: 2010 International Conference on Communications and Intelligence Information Security, pp. 200–203, October 2010. doi: 10.1109/ICCIIS.2010.35

[42] Gwo-Jen Hwang, Han-Yu Sung, Chun-Ming Hung, Iwen Huang and Chin-Chung Tsai, Development of a Personalized Educational Computer Game based on Students’ Learning Styles, Educational Technology Research and Development 60 (2012), 623–638. doi: 10.1007/s11423-012-9241-x

[43] Ian Jermyn, Alain Mayer, Fabian Monrose, Michael K. Reiter and Aviel D. Rubin, The Design and Analysis of Graphical Passwords, in: Proceedings of the 8th Conference on USENIX Security Symposium - Volume 8, SSYM’99, pp. 1–14, USENIX Association, Berkeley, CA, USA, 1999.

[44] Maurits Kaptein and Petri Parvinen, Advancing E-Commerce Personalization: Process Framework and Case Study, International Journal of Electronic Commerce 19 (2015), 7–33. doi: 10.1080/10864415.2015.1000216

[45] Christina Katsini, Christos Fidas, Marios Belk, Nikolaos Avouris and George Samaras, Influences of Users’ Cognitive Strategies on Graphical Password Composition, in: Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, CHI EA ’17, pp. 2698–2705, ACM, New York, NY, USA, 2017. doi: 10.1145/3027063.3053217

[46] Christina Katsini, Christos Fidas, George E. Raptis, Marios Belk, George Samaras and Nikolaos Avouris, Eye Gaze-driven Prediction of Cognitive Differences During Graphical Password Composition, in: 23rd International Conference on Intelligent User Interfaces, IUI ’18, pp. 147–152, ACM, New York, NY, USA, 2018. doi: 10.1145/3172944.3172996

[47] Christina Katsini, Christos Fidas, George E. Raptis, Marios Belk, George Samaras and Nikolaos Avouris, Influences of Human Cognition and Visual Behavior on Password Strength During Picture Password Composition, in: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, pp. 87:1–87:14, ACM, New York, NY, USA, 2018. doi: 10.1145/3173574.3173661

[48] Christina Katsini, George E. Raptis, Christos Fidas and Nikolaos Avouris, Does Image Grid Visualization Affect Password Strength and Creation Time in Graphical Authentication?, in: Proceedings of the 2018 International Conference on Advanced Visual Interfaces, AVI ’18, pp. 33:1–33:5, ACM, New York, NY, USA, 2018. doi: 10.1145/3206505.3206546

[49] Christina Katsini, George E. Raptis, Christos Fidas and Nikolaos Avouris, Towards Gaze-based Quantification of the Security of Graphical Authentication Schemes, in: Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications, ETRA ’18, pp. 17:1–17:5, ACM, New York, NY, USA, 2018. doi: 10.1145/3204493.3204589

[50] Christina Katsini, Christos Fidas, Marios Belk, George Samaras and Nikolaos Avouris, A Human-Cognitive Perspective of Users’ Password Choices in Recognition-Based Graphical Authentication, International Journal of Human–Computer Interaction (2019), 1–13. doi: 10.1080/10447318.2019.1574057

[51] Patrick Gage Kelley, Saranga Komanduri, Michelle L. Mazurek, Richard Shay, Timothy Vidas, Lujo Bauer, Nicolas Christin, Lorrie Faith Cranor and Julio Lopez, Guess Again (and Again and Again): Measuring Password Strength by Simulating Password-Cracking Algorithms, in: 2012 IEEE Symposium on Security and Privacy, IEEE, May 2012. doi: 10.1109/SP.2012.38

[52] Hassan Khan, Urs Hengartner and Daniel Vogel, Usability and Security Perceptions of Implicit Authentication: Convenient, Secure, Sometimes Annoying, in: Proceedings of the Eleventh USENIX Conference on Usable Privacy and Security, SOUPS’15, pp. 225–239, USENIX Association, Berkeley, CA, USA, 2015.

[53] Mohammad Khatib and Rasoul Mohammad Hosseinpur, On the Validity of the Group Embedded Figure Test (GEFT), Journal of Language Teaching and Research 2 (2011). doi: 10.4304/jltr.2.3.640-648

[54] Maria Kozhevnikov, Cognitive Styles in the Context of Modern Psychology: Toward an Integrated Framework of Cognitive Style, Psychological Bulletin 133 (2007), 464–481. doi: 10.1037/0033-2909.133.3.464

[55] Oskar Ku, Chi-Chen Hou and Sherry Y. Chen, Incorporating Customization and Personalization into Game-based Learning: A Cognitive Style Perspective, Computers in Human Behavior 65 (2016), 359–368. doi: 10.1016/j.chb.2016.08.040

[56] Ximing Liu, Yingjiu Li and Robert H. Deng, Typing-Proof: Usable, Secure and Low-Cost Two-Factor Authentication Based on Keystroke Timings, in: Proceedings of the 34th Annual Computer Security Applications Conference, ACSAC ’18, pp. 53–65, ACM, New York, NY, USA, 2018.

[57] Jia-Jiunn Lo and Yun-Jay Wang, Development of an Adaptive EC Website With Online Identified Cognitive Styles of Anonymous Customers, International Journal of Human-Computer Interaction 28 (2012), 560–575. doi: 10.1080/10447318.2011.629952

[58] Andrew Luxton-Reilly, Emma McMillan, Elizabeth Stevenson, Ewan Tempero and Paul Denny, Ladebug: An Online Tool to Help Novice Programmers Improve Their Debugging Skills, in: Proceedings of the 23rd Annual ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE 2018, pp. 159–164, ACM, New York, NY, USA, 2018. doi: 10.1145/3197091.3197098

[59] Stephen Madigan, Picture Memory, Imagery, Memory and Cognition: Essays in Honor of Allan Paivio (John C. Yuille, ed.), Lawrence Erlbaum Associates, Hillsdale, NJ, USA, 1983, pp. 65–89.

[60] Michelle L. Mazurek, Saranga Komanduri, Timothy Vidas, Lujo Bauer, Nicolas Christin, Lorrie Faith Cranor, Patrick Gage Kelley, Richard Shay and Blase Ur, Measuring Password Guessability for an Entire University, in: Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security, CCS ’13, pp. 173–186, ACM, New York, NY, USA, 2013. doi: 10.1145/2508859.2516726

[61] Martin Mihajlov and Borka Jerman-Blažič, On Designing Usable and Secure Recognition-based Graphical Authentication Mechanisms, Interacting with Computers 23 (2011), 582–593. doi: 10.1016/j.intcom.2011.09.001

[62] Deborah Nelson and Kim-Phuong L. Vu, Effectiveness of Image-based Mnemonic Techniques for Enhancing the Memorability and Security of User-generated Passwords, Computers in Human Behavior 26 (2010), 705–715. doi: 10.1016/j.chb.2010.01.007

[63] Toan Nguyen and Nasir Memon, Tap-based User Authentication for Smartwatches, Computers & Security 78 (2018), 174–186. doi: 10.1016/j.cose.2018.07.001

[64] Toan Nguyen, Napa Sae-Bae and Nasir Memon, DRAW-A-PIN: Authentication Using Finger-drawn PIN on Touch Devices, Computers & Security 66 (2017), 115–128. doi: 10.1016/j.cose.2017.01.008

[65] Efi A. Nisiforou and Andrew Laghos, Do the Eyes Have It? Using Eye Tracking to Assess Students Cognitive Dimensions, Educational Media International 50 (2013), 247–265. doi: 10.1080/09523987.2013.862363

[66] Philip K. Oltman, Evelyn Raskin and Herman A. Witkin, Group Embedded Figures Test, Consulting Psychologists Press, Palo Alto, CA, USA, 1971.

[67] Zach Pace, Signing in With a Picture Password, December 2011.

[68] Allan Paivio and Kalman Csapo, Short-term Sequential Memory for Pictures and Words, Psychonomic Science 24 (1971), 50–51. doi: 10.3758/BF03337887

[69] Federico Perazzi, Philipp Krähenbühl, Yael Pritch and Alexander Hornung, Saliency Filters: Contrast Based Filtering for Salient Region Detection, in: 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 733–740, IEEE, 2012. doi: 10.1109/CVPR.2012.6247743

[70] George E. Raptis, Christos A. Fidas and Nikolaos M. Avouris, Do Field Dependence-Independence Differences of Game Players Affect Performance and Behaviour in Cultural Heritage Games?, in: Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play, CHI PLAY ’16, pp. 38–43, ACM, New York, NY, USA, 2016. doi: 10.1145/2967934.2968107

[71] George E. Raptis, Christina Katsini, Marios Belk, Christos Fidas, George Samaras and Nikolaos Avouris, Using Eye Gaze Data and Visual Activities to Infer Human Cognitive Styles: Method and Feasibility Studies, in: Proceedings of the 25th Conference on User Modeling, Adaptation and Personalization, UMAP ’17, pp. 164–173, ACM, New York, NY, USA, 2017. doi: 10.1145/3079628.3079690

[72] George E. Raptis, Christos Fidas, Christina Katsini and Nikolaos Avouris, A Cognition-centered Personalization Framework for Cultural-Heritage Content, User Modeling and User-Adapted Interaction 29 (2019), 9–65. doi: 10.1007/s11257-019-09226-7

[73] Karen Renaud, Peter Mayer, Melanie Volkamer and Joseph Maguire, Are Graphical Authentication Mechanisms as Strong as Passwords?, in: 2013 Federated Conference on Computer Science and Information Systems, pp. 837–844, September 2013.

[74] Amir Sadovnik and Tsuhan Chen, A Visual Dictionary Attack on Picture Passwords, in: 2013 IEEE International Conference on Image Processing, pp. 4447–4451, September 2013. doi: 10.1109/ICIP.2013.6738916

[75] Elizabeth Stobert and Robert Biddle, Memory Retrieval and Graphical Passwords, in: Proceedings of the Ninth Symposium on Usable Privacy and Security, SOUPS ’13, pp. 15:1–15:14, ACM, New York, NY, USA, 2013. doi: 10.1145/2501604.2501619

[76] Elizabeth Stobert and Robert Biddle, The Password Life Cycle, ACM Transactions on Privacy and Security (TOPS) 21 (2018), 13:1–13:32. doi: 10.1145/3183341

[77] Elizabeth Stobert, Alain Forget, Sonia Chiasson, Paul C. van Oorschot and Robert Biddle, Exploring Usability Effects of Increasing Security in Click-based Graphical Passwords, in: Proceedings of the 26th Annual Computer Security Applications Conference, ACSAC ’10, pp. 79–88, ACM, New York, NY, USA, 2010. doi: 10.1145/1920261.1920273

[78] Huiping Sun, Ke Wang, Xu Li, Nan Qin and Zhong Chen, PassApp: My App is My Password!, in: Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services, MobileHCI ’15, pp. 306–315, ACM, New York, NY, USA, 2015.

[79] Hai Tao and Carlisle Adams, Pass-go: A Proposal to Improve the Usability of Graphical Passwords, International Journal of Network Security 7 (2008), 273–292.

[80] Gary F. Templeton, A Two-step Approach for Transforming Continuous Variables to Normal: Implications and Recommendations for IS Research, Communications of the Association for Information Systems (CAIS) 28 (2011), 41–58. doi: 10.17705/1CAIS.02804

[81] Julie Thorpe and Paul C. van Oorschot, Human-Seeded Attacks and Exploiting Hot-Spots in Graphical Passwords, in: Proceedings of the 16th Conference on USENIX Security Symposium, SS’07, pp. 103–118, USENIX Association, Berkeley, CA, USA, 2007.

[82] Julie Thorpe, Brent MacRae and Amirali Salehi-Abari, Usability and Security Evaluation of GeoPass: A Geographic Location-password Scheme, in: Proceedings of the Ninth Symposium on Usable Privacy and Security, SOUPS ’13, pp. 14:1–14:14, ACM, New York, NY, USA, 2013. doi: 10.1145/2501604.2501618

[83] Julie Thorpe, Muath Al-Badawi, Brent MacRae and Amirali Salehi-Abari, The Presentation Effect on Graphical Passwords, in: Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems, CHI ’14, pp. 2947–2950, ACM, New York, NY, USA, 2014. doi: 10.1145/2556288.2557212

[84] Judy C.R. Tseng, Hui-Chun Chu, Gwo-Jen Hwang and Chin-Chung Tsai, Development of an Adaptive Learning System with Two Sources of Personalization Information, Computers & Education 51 (2008), 776–786. doi: 10.1016/j.compedu.2007.08.002

[85] M.N.M. van Lieshout and Adrian Baddeley, A Nonparametric Measure of Spatial Interaction in Point Patterns, Statistica Neerlandica 50 (1996), 344–361. doi: 10.1111/j.1467-9574.1996.tb01501.x

[86] Paul C. van Oorschot, Amirali Salehi-Abari and Julie Thorpe, Purely Automated Attacks on PassPoints-Style Graphical Passwords, IEEE Transactions on Information Forensics and Security 5 (2010), 393–405. doi: 10.1109/TIFS.2010.2053706

[87] Kim-Phuong L. Vu, Robert W. Proctor, Abhilasha Bhargav-Spantzel, Bik-Lam (Belin) Tai, Joshua Cook and E. Eugene Schultz, Improving Password Security and Memorability to Protect Personal and Organizational Information, International Journal of Human-Computer Studies 65 (2007), 744–757. doi: 10.1016/j.ijhcs.2007.03.007

[88] Xiang-Yang Wang, Yong-Wei Li, Pan-Pan Niu, Hong-Ying Yang and Dong-Ming Li, Content-based Image Retrieval using Visual Attention Point Features, Fundamenta Informaticae 135 (2014), 309–329. doi: 10.3233/FI-2014-1124

[89] Susan Wiedenbeck, Jim Waters, Jean-Camille Birget, Alex Brodskiy and Nasir Memon, Authentication Using Graphical Passwords: Effects of Tolerance and Image Choice, in: Proceedings of the 2005 Symposium on Usable Privacy and Security, SOUPS ’05, pp. 1–12, ACM, New York, NY, USA, 2005. doi: 10.1145/1073001.1073002

[90] Susan Wiedenbeck, Jim Waters, Jean-Camille Birget, Alex Brodskiy and Nasir Memon, PassPoints: Design and Longitudinal Evaluation of a Graphical Password System, International Journal of Human-Computer Studies 63 (2005), 102–127. doi: 10.1016/j.ijhcs.2005.04.010

[91] Herman A. Witkin, Carol Ann Moore, Donald R. Goodenough and Patricia W. Cox, Field-Dependent and Field-Independent Cognitive Styles and Their Educational Implications, ETS Research Bulletin Series 1975 (1975), 1–64. doi: 10.1002/j.2333-8504.1975.tb01065.x

[92] Nicholas Wright, Andrew S. Patrick and Robert Biddle, Do You See Your Password?: Applying Recognition to Textual Passwords, in: Proceedings of the Eighth Symposium on Usable Privacy and Security, SOUPS ’12, pp. 8:1–8:14, ACM, New York, NY, USA, 2012. doi: 10.1145/2335356.2335367

[93] Honghai Yu and Stefan Winkler, Image Complexity and Spatial Information, in: 2013 Fifth International Workshop on Quality of Multimedia Experience (QoMEX), pp. 12–17, IEEE, 2013.

[94] Zhen Yu, Hai-Ning Liang, Charles Fleming and Ka Lok Man, An Exploration of Usable Authentication Mechanisms for Virtual Reality Systems, in: 2016 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS), pp. 458–460, October 2016.

[95] Ziming Zhao, Gail-Joon Ahn, Jeong-Jin Seo and Hongxin Hu, On the Security of Picture Gesture Authentication, in: Proceedings of the 22nd USENIX Conference on Security, SEC’13, pp. 383–398, USENIX Association, Berkeley, CA, USA, 2013.

[96] Ziming Zhao, Gail-Joon Ahn and Hongxin Hu, Picture Gesture Authentication: Empirical Analysis, Automated Attacks, and Scheme Evaluation, ACM Transactions on Information and System Security (TISSEC) 17 (2015), 14:1–14:37. doi: 10.1145/2701423

Published Online: 2020-01-14
Published in Print: 2019-11-18

© 2019 Walter de Gruyter GmbH, Berlin/Boston
