1 Introduction

Autonomous and robotic systems will constitute a significant portion of future Department of Defense (DoD) and civilian technologies. Human users can often interact with these platforms at great distances, improving the reach, adaptability, and effectiveness of human-machine systems. Yet inherent in these interactions is the question of information security, as these technologies rely on digital links to command and control systems, human operators, and cloud-based networks. As a result, these systems may be susceptible to cyber attacks. Many scholars suggest that the most vulnerable link in the cyber security domain is the human operator or user of the technology. These vulnerabilities may exist for a variety of reasons, as discussed below. The current paper discusses two studies, each relevant to cyber-related vulnerabilities. These studies, which were conducted as part of an effort to validate trait measures of suspicion, revealed an interesting pattern of low awareness of cyber threats.

A recent paper [1] reviewed the literature surrounding the construct of suspicion and defined state suspicion as a “simultaneous state of cognitive activity, uncertainty, and perceived mal-intent about underlying information” from an external agent [1, p. 493]. Suspicion may be a key factor motivating operators to critically evaluate and question a particular stimulus. Yet little is known about baseline rates of suspicion during military-oriented tasks such as image evaluation.

Intelligence, Surveillance, and Reconnaissance (ISR) analysts are often asked to exploit and examine imagery, full motion video, signals intelligence, and other forms of intelligence to determine the presence of various objects, people, or actions. This information comes from a variety of sources (i.e., collection assets) and may be subject to variations in quality, timeliness, and pedigree, all of which can influence an analyst’s trust in the information [11]. Such perturbations, in quality for instance, can arise from either innocuous or pernicious causes. For example, poor quality imagery may result from a faulty sensor, unfavorable weather conditions, weak signal strength, or a low-quality information source. In addition, malicious actors are constantly seeking to gain an advantage over US operations around the world. As such, intelligence may be targeted by groups seeking to deny or disrupt ISR data, resulting in poor quality intelligence. Cyber attacks on ISR platforms are a very real threat to our military.

In the domain of cyber security, human errors and/or vulnerabilities are often cited as the primary threat to cyber resilience. Estimates suggest that most known cyber-related breaches can be attributed to human error [5, 9]. This is an interesting trend given that cyber awareness campaigns are common in society and cyber breaches are often highly publicized. Data breaches in the public and private sectors appear with alarming regularity. In the military, cyber awareness training is standard, as are mechanisms to thwart cyber attacks (e.g., network firewalls) and policies to reduce cyber breaches (e.g., behavioral policies for DoD users). Yet cyber breaches do occur, and the frequency of cyber attacks is rising rather than declining. Researchers and cyber professionals are left pondering: why are cyber attacks both so prominent and so effective among a populace that should be aware of cyber threats?

The answers to the above questions are as diverse as they are ambiguous. Some researchers note the extreme challenges faced by cyber operators. As noted by [13], cyber operators must deal with a combination of high cognitive workload, poor user interfaces, and aids that tend toward high false alarm rates. High false alarm rates could breed distrust over time as operators discount guidance from such tools [10]. Thus, given the right tools, training, and work context, cyber operators may be more effective at addressing cyber-related challenges. Others, however, suggest that the type of cyberattack makes all the difference in understanding human responses and vulnerability. In an empirical paper [4], cyber attacks were categorized according to five factors: (1) well-known and overt attacks (e.g., a computer or software crash); (2) low and slow attacks, which include errors that may be common among operators (e.g., the mouse is unresponsive or sluggish, the internet is unusually slow); (3) common errors that are not perceived to be the fault of the user (e.g., printer errors); (4) perceptual/memory errors (e.g., objects on the screen randomly appear or disappear); and (5) evidence of tampering or remote control of the computer (e.g., additional text is inserted in emails or posts). The authors conclude that low and slow attacks may be the most pernicious to operators, as they may be the least likely to be detected [4]. These attacks may be less detectable because they do not violate what operators view as normal system behavior and may not be consistent with operators’ mental models of cyber attacks (which may be more evident for overt attacks, common errors, and tampering). This is unfortunate because the vast majority of cyber security research is geared toward technology solutions and applications (e.g., network intrusion detection sensors, algorithms, aid designs) rather than toward understanding the human vulnerabilities associated with the cyber domain [12]. Considerably more research is needed to fully understand human behavior in the context of cyber challenges. It is in this context, at the intersection of human suspicion and human awareness of cyber attacks, that the current paper investigated the propensity of individuals to perceive that a cyberattack had occurred.

1.1 Participants

Participants engaged in an image degradation analysis task. Two studies are discussed in this paper. Study 1 included 109 participants from a Midwestern university; the average age was 20 (SD = 4) and the sample was 62.4% female. Study 2 included 106 participants from a Midwestern university with the same general demographics as Study 1. All participants were above the age of 18, and the studies were approved by the Air Force Institutional Review Board (IRB) prior to data collection.

1.2 Design

The participants were asked to view 80 images and to determine whether each image was “degraded” or “not degraded”. In reality, 70 of the 80 images were degraded to the same degree (i.e., the number of pixels degraded was identical across all degraded images). The other ten images were not degraded at all. An example of a degraded image can be found in Fig. 1. Notably, while 70 of the 80 images were systematically degraded, they were degraded only insofar as to motivate suspicion at a just-noticeable difference. Hence, some ‘degraded’ images were likely to be viewed as non-degraded. The authors piloted the 70 degraded images with an independent sample to ensure that some but not all participants would evaluate the images as degraded, signifying a just-noticeable difference subject to individual differences. In fact, in our first study, the image in Fig. 1 was perceived as degraded by 49% of respondents, while 51% did not identify it as degraded. If participants believed an image was degraded, they were asked to choose a rationale for why they thought so from the following list of options: weather interference, sensor malfunction, signal weakness (i.e., lack of signal strength), or cyber attack.
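To make the stimulus design concrete, the sketch below shows one way the 80-trial set could be represented in Java. The class, record, and enum names are illustrative assumptions, as is the shuffled presentation order, which the paper does not specify; only the 70/10 composition and the four response options come from the text.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of the stimulus set: 70 images degraded at a
// just-noticeable difference plus 10 intact images.
public class StimulusSet {

    // The four a priori causes participants could select.
    enum Cause { WEATHER_INTERFERENCE, SENSOR_MALFUNCTION, SIGNAL_WEAKNESS, CYBER_ATTACK }

    record Trial(int imageId, boolean actuallyDegraded) {}

    static List<Trial> build() {
        List<Trial> trials = new ArrayList<>();
        for (int i = 0; i < 70; i++) trials.add(new Trial(i, true));   // degraded images
        for (int i = 70; i < 80; i++) trials.add(new Trial(i, false)); // intact images
        Collections.shuffle(trials); // random presentation order (an assumption)
        return trials;
    }
}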

Fig. 1. Sample of degraded image

1.3 Apparatus and Materials

A Java-based interface was used for the study (see Fig. 2). Participants were shown an image and asked whether they believed the image was degraded. If a participant noted a degradation, they were asked to choose why they thought the image was degraded using the a priori list of possible causes described above. The level of degradation was determined by the interaction of two parameters. First, the percentage of pixels in an image that are changed can be adjusted, with a higher percentage causing more degradation. Second, the 24 bits that determine a pixel’s color can be adjusted systematically; each additional bit changed (of the 24) causes the pixel’s color to change more dramatically. The joint effect of changes across these two parameters determined the total amount of corruption in an image.
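To illustrate this two-parameter scheme, the sketch below shows one way such degradation could be implemented in Java (the study used a Java-based interface, but its actual implementation was not published). The class name, method signature, and the random choice of which color bits to flip are illustrative assumptions; the paper states only that bits were adjusted systematically.

import java.awt.image.BufferedImage;
import java.util.Random;

// Hypothetical sketch of the two-parameter degradation described above.
public class ImageDegrader {
    private final Random rng = new Random();

    // pctPixels: fraction of pixels to corrupt (0.0 to 1.0).
    // bitsToFlip: how many of a pixel's 24 color bits to alter; each
    // additional bit flipped produces a more dramatic color change.
    public BufferedImage degrade(BufferedImage src, double pctPixels, int bitsToFlip) {
        BufferedImage out = new BufferedImage(
                src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < src.getHeight(); y++) {
            for (int x = 0; x < src.getWidth(); x++) {
                int rgb = src.getRGB(x, y);
                if (rng.nextDouble() < pctPixels) {
                    // Flip randomly chosen color bits (an assumption; the
                    // study's systematic bit-selection rule is not specified).
                    for (int i = 0; i < bitsToFlip; i++) {
                        rgb ^= 1 << rng.nextInt(24);
                    }
                }
                out.setRGB(x, y, rgb);
            }
        }
        return out;
    }
}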

Fig. 2. Example of the experimental platform.

1.4 Procedure

Study 1.

As part of a larger study on suspicion, participants provided informed consent and were given instructions for the task. In the task instructions, participants were told that the images in the study might be degraded and that their task was to determine which ones were degraded. Participants were also told that aerial images could be degraded for multiple reasons, including weather conditions, sensor quality/performance, network strength/quality, or cyber attacks (e.g., hacking). More specifically, they were told: “Weather conditions can reduce visibility and cause turbulence for aircraft taking pictures, reducing image quality. The sensors taking the pictures may not be performing optimally, or may need maintenance, reducing image quality. The network strength at a given time and place may be low, making it more difficult to send a high-quality image, reducing image quality. Outside groups may attempt to purposely disrupt the electronic signals (cyber attack), reducing image quality. Finally, the larger the distance between the target and the camera, the more likely the picture is to look grainy and fuzzy. Your task will be to view a set of images, then decide if each image is degraded or not. If you think an image is degraded, we will ask you to identify the possible cause of that degradation. One factor, distance from target, will be held constant, meaning that all of your images will come from approximately the same distance.”

Study 2.

The second study was modified to heighten cyber awareness. Specifically, the task instructions were prefaced by the following statement: “Cyber warfare is of great concern to the U.S. Air Force and our current experimental project reflects that concern. The Wall Street Journal reported in 2009 that it is possible for hackers to hack into image feeds from military aircraft, which could influence the quality of images that are transmitted to drone operators. On the other hand, when taking such pictures from the air, there are other, unintentional factors that can influence the quality of the picture. In general, when a picture appears to be of lower quality than expected it might be considered ‘degraded’ and, regardless of cause, poor quality images may reduce the effectiveness of military missions.” Participants were then given the same task instructions as in Study 1 and were told that the results of this study “will be of vital importance to imagery and cyber analysts”. All other aspects were identical between Studies 1 and 2: the images were identical and the four categories of degradation causes were the same.

2 Results

2.1 Study 1

Participants perceived, on average, that 53 (66%) of the images were degraded. The actual rate was close to 90% (70 of 80), suggesting, as expected, some ambiguity in perceiving the degradation. None of the non-degraded images were perceived as degraded. The results from Study 1 (see Fig. 3) demonstrated that when participants viewed an image as degraded, cyberattack was the least likely explanation provided for the degradation: only 7% of the degraded images were attributed to cyberattack, relative to 20% for weather, 34% for sensor issues, and 39% for signal strength. Thus, Study 1 evidenced a low base rate for cyber attack attributions when participants were asked to explain degraded images.

Fig. 3. Proportion of degraded image explanations for Study 1.

2.2 Study 2

Study 2 sought to prime cyber awareness by telling participants that cyber attacks were prominent in Air Force operations and that the study was designed explicitly to examine perceptions of cyber attacks. Other than the initial instructions/priming, the methods were identical to Study 1. Results from Study 2 (see Fig. 4) demonstrated that the priming increased cyber attack attributions to 15% (versus 7% in Study 1; a statistically significant increase, p < .05), yet this rate remained low relative to the other explanations (weather 18%, sensor issues 30%, and signal strength 38%). Participants perceived, on average, that 50 (63%) of the images were degraded. Once again, none of the non-degraded images were perceived as degraded.
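As a point of reference, a difference between two proportions such as 7% versus 15% is commonly tested with a two-proportion z-test, sketched below in Java. The paper does not report which test was used or the underlying judgment counts, so the counts here are placeholder assumptions for illustration only.

// Minimal sketch of a pooled two-proportion z-test (assumed, not
// necessarily the paper's reported analysis).
public class ProportionTest {

    // z = (p2 - p1) / SE, where SE uses the pooled proportion.
    // |z| > 1.96 corresponds to p < .05 (two-tailed).
    static double twoProportionZ(double p1, int n1, double p2, int n2) {
        double pooled = (p1 * n1 + p2 * n2) / (n1 + n2);
        double se = Math.sqrt(pooled * (1 - pooled) * (1.0 / n1 + 1.0 / n2));
        return (p2 - p1) / se;
    }

    public static void main(String[] args) {
        // Placeholder judgment counts; the actual totals were not reported.
        int n1 = 4000, n2 = 4000;
        double z = twoProportionZ(0.07, n1, 0.15, n2);
        System.out.printf("z = %.2f (|z| > 1.96 implies p < .05)%n", z);
    }
}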

Fig. 4. Proportion of degraded image explanations for Study 2.

3 Discussion

Human awareness of cyber-related vulnerabilities is often not well understood. Cyber attacks remain a significant threat to businesses, public organizations, and the military, and humans are often believed to be the weakest link in the cyber security equation. As a result, researchers have called for additional research to better understand the role of humans in cyber-related phenomena [4, 12]. The current studies sought to better understand individuals’ propensity to attribute suspicious events to a cyber attack relative to other possible causes.

Study 1 demonstrated that individuals were quite reluctant to attribute potential image degradation to a cyber-related attack relative to weather, sensor problems, or poor signal strength. There are at least two potential reasons for this. First, participants may have believed that a cyber attack would manifest differently in the context of aerial images. For instance, similar to the overt attacks discussed in [4], participants may have believed that a cyber attack on an aerial image would result in the loss of the image in its totality, whereas a partially degraded image may be attributed to more common and familiar causes such as sensor issues and signal strength. Given the ubiquity of cell phones, it is highly likely that most individuals have encountered a bandwidth limitation at some point in their lives. This explanation is consistent with the mental models hypothesis posed by [4], which highlights the potential dangers of low and slow methods: “In the presence of a cyberattack, users’ propensity to become suspicious will be dependent on their mental models of the system. Users will only likely identify cyber attacks that (1) violate their expectations of the system’s normal behavior and (2) fit into a well-informed mental model that attributes such occurrences to cyber attacks rather than to human error or ‘finicky’ computers” [4, p. 30]. Given that no training preceded the experiment, people may simply be more accustomed to problems with sensors and bandwidth than to thinking about cyber vulnerabilities in the context of a degraded image.

Second, it is possible that participants did not believe a cyber attack was likely in an experimental context, as opposed to a natural interaction or actual work context. That is, perhaps actual military operators would be more vigilant about cyber-related threats in operational domains. Thus, the current study paradigm needs to be examined with actual operators using tasks and platforms where real stakes are involved. It is interesting, however, that the sample was drawn from an age range typical of many entry-level military operators.

Study 2 intentionally primed individuals to think about cyber attacks in order to overcome the low base rates of cyber attack attributions observed in Study 1. Participants were explicitly told to think about cyber attacks, to be aware that cyber attacks were possible, and that the study was explicitly designed to support operators in the cyber domain. The added cyber emphasis succeeded in doubling cyber attributions relative to Study 1. However, cyber attributions remained the lowest category despite the successful priming; participants were still twice as likely to attribute a degraded image to a faulty sensor or to poor bandwidth. It appears that participants’ mental models of what could cause a degraded image were associated with bandwidth and sensor performance rather than with a cyber-related event. Future research might examine mental models of cyber threats in an image context (as well as other contexts) to see whether concepts such as bandwidth and sensor problems are more familiar and accessible to individuals when considering threats to image quality. It is also possible that a different degradation method would differentially influence suspicion. For instance, the degradation technique was applied in a systematic and uniform fashion, meaning each part of an image had the same level of degradation. Degrading only a portion of an image may have motivated greater suspicion, as partial degradation is less consistent with common issues such as bandwidth limitations and sensor problems, which tend to affect the whole image rather than a portion of it.

3.1 Implications

The collective study results have implications for cyber awareness training as well as for the concept and design of interactions with future autonomous systems. Designers of cyber awareness training should be encouraged that increasing awareness of cyber-related vulnerabilities increases cyber-related attributions when humans encounter stimuli they view as suspicious. However, two points are notable. First, as shown in Study 1, individuals’ baseline levels of cyber awareness are insufficient to motivate cyber-related attributions. In other words, it would be ill-advised for cyber trainers to assume that individuals will be sensitive to cyber events based merely on awareness of public threats and vulnerabilities; the low base rates found in Study 1 show that this is clearly not the case. Second, cyber trainers should not assume that priming cyber awareness alone will make individuals sufficiently sensitive to cyber-related threats unless those threats align well with individuals’ mental models of threats in that context. It is quite possible that if other explanations are more familiar to individuals, they will attribute suspicion to those factors rather than to cyber-related factors.

The current results are also interesting from the perspective of human-machine interaction. As in the commercial and public sectors, autonomous systems are believed to be an important aspect of future military operations [2, 3]. One of the grand challenges associated with autonomous systems is certifying their trustworthiness given that their behavior may change over time as a result of machine learning [7]. As discussed in [7], transparency needs to be injected into the design, training regime, and operator interfaces to facilitate appropriate reliance on novel systems. Transparency refers to the set of methods that foster shared awareness and shared intent between humans and machines [6]. Shared awareness and shared intent help align mental models between humans and autonomous systems, which should aid human operators in understanding the cyber-related vulnerabilities associated with an autonomous system. Given that human interactions with autonomous systems will depend on data links and software interfaces and may involve distributed operations, cyber vulnerabilities will likely be present. Interestingly, operational pilots report greater concern about cyber attacks for technologies with greater autonomous capabilities relative to those that are merely automated [8]. It is possible that ‘low and slow’ attack vectors will be even less salient to human operators of autonomous (relative to simply automated) technologies, because autonomous systems may be less understood and less predictable to operators. Thus, operator performance can be enhanced by future research and design solutions that give human operators accurate mental models of the technology and its cyber-related vulnerabilities.