1 Introduction

Individuals perform visual search tasks every day, for instance, looking for keys on a table or a book on a shelf. However, search performance appears to depend on target prevalence. Recent research has shown that participants miss targets disproportionately often when the targets appear rarely, a phenomenon called the low prevalence effect [1]. Although previous studies [2, 3] have demonstrated that prevalence effects are stubborn and robust, almost all of these studies presented stimuli on a static display in laboratory settings. As Kunar and Watson [4] have suggested, under more ecologically valid, realistic search conditions, for instance with dynamic displays, some fundamental characteristics observed in strictly controlled search tasks no longer apply. The primary aim of this research was to examine, in a more ecologically valid setting, whether static and dynamic displays differentially affect prevalence effects in screening tasks. The findings will help build a more comprehensive picture of prevalence effects in visual search.

2 Method

2.1 Participants

Thirty-six undergraduate and graduate students (18 females) were recruited to participate in this laboratory-simulated X-ray luggage-screening task and were paid for their participation. Their ages ranged from 18 to 26 years (M = 21.6, SD = 2.3). All reported normal or corrected-to-normal vision. Participants were randomly assigned to one of three display pattern groups, with 12 participants (half female) in each group.

2.2 Stimuli

Stimuli in this study were JPEG X-ray images. All images were created in grayscale in the laboratory (see Fig. 1 for an example) from real X-ray images of passengers' luggage obtained from the Security Department of the Beijing Subway. Each image had a set size of 9 or 18 items. Items were rotated and placed randomly throughout the image and could overlap. Target-absent images consisted of a variety of everyday objects (e.g., shoes, keys, toys). Target-present images were composed of everyday objects and weapons (knives or guns) and contained only a single weapon as the target. At a viewing distance of 60 cm, the entire image (800 × 600 pixels) subtended 23.54° of visual angle in width by 17.76° in height.
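
The reported visual angles follow directly from the viewing geometry. As a worked example (the physical image width of about 25 cm is inferred from the reported angles and was not stated explicitly), \( \theta = 2\arctan\left(\frac{w}{2D}\right) = 2\arctan\left(\frac{25\ \text{cm}}{2 \times 60\ \text{cm}}\right) \approx 23.5^{\circ} \); the same calculation with an image height of about 18.8 cm reproduces the reported 17.76°.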

Fig. 1. Sample X-ray image, set size 9, target absent

2.3 Apparatus

The experiment was programmed in SR Research Experiment Builder (version 1.10.1241) running on a Dell OptiPlex 390 computer. Stimuli were presented on a 17-inch CRT monitor with a resolution of 1024 × 768 pixels and a refresh rate of 85 Hz.

2.4 Design and Procedure

The present study used a 3 (Display Pattern: Static vs. Dynamic Constant Velocity (Dynamic_CV) vs. Dynamic Varying Velocity (Dynamic_VV)) × 2 (Prevalence: 50 % vs. 5 %) × 2 (Set Size: 9 vs. 18) mixed design.

The first factor, Display Pattern, was a between-subject factor with three levels. (1) Static: the stimuli were presented for 4000 or 6000 ms for set size 9 or 18, respectively. (2) Dynamic_CV: the stimuli were presented for 6000 ms and moved horizontally from left to right across the screen at a constant velocity of 304 pixels per second (angular velocity: 9.53 °/s). (3) Dynamic_VV: the stimuli were presented for 4000 or 6000 ms and moved horizontally from left to right at 456 pixels per second (14.25 °/s) or 304 pixels per second (9.53 °/s) for set size 9 or 18, respectively. The other two factors, Prevalence and Set Size, were within-subject factors.
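
Under a small-angle approximation, the pixel velocities convert to angular velocities as \( \omega \approx \frac{v_{px}\, p}{D} \cdot \frac{180}{\pi} \) degrees per second, where \( v_{px} \) is the velocity in pixels per second, \( p \) is the physical pixel pitch of the monitor (not reported), and \( D = 60 \) cm is the viewing distance. With the pitch implied by the reported image size (roughly 0.31 mm per pixel), 304 pixels per second corresponds to roughly 9 °/s, close to the reported 9.53 °/s; small differences presumably reflect the exact monitor dimensions assumed.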

Each trial began with a fixation cross at the center (static display) or the left (dynamic displays) of the screen; 500 ms later the stimulus appeared and remained until the participant responded or the presentation time elapsed. A 500 ms blank interval preceded the next trial. Participants pressed one key for target present and another key for target absent, responding as quickly and accurately as possible. Each participant completed 40 practice trials at 50 % prevalence with feedback, followed by 100 experimental trials at 50 % prevalence (high prevalence) and 1,000 trials (divided into 5 blocks) at 5 % prevalence (low prevalence). The order of the prevalence conditions was counterbalanced across participants.
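
The overall trial structure can be made concrete with a short sketch. The Python code below is illustrative only and does not reproduce the authors' Experiment Builder implementation; in particular, the per-trial randomization of set size and the even/odd counterbalancing rule are assumptions.

```python
import random

def make_block(n_trials, prevalence, set_sizes=(9, 18)):
    """Build one block of trials with the given target prevalence (illustrative only)."""
    n_present = round(n_trials * prevalence)
    trials = ([{"target_present": True} for _ in range(n_present)] +
              [{"target_present": False} for _ in range(n_trials - n_present)])
    for trial in trials:
        # Assumption: set size is randomized per trial; the paper does not specify the scheme.
        trial["set_size"] = random.choice(set_sizes)
    random.shuffle(trials)
    return trials

def make_session(subject_id):
    """40 practice trials at 50 % prevalence, then 100 trials at 50 % prevalence and
    1,000 trials at 5 % prevalence (5 blocks of 200), with prevalence order counterbalanced."""
    practice = [make_block(40, 0.50)]
    high_prevalence = [make_block(100, 0.50)]
    low_prevalence = [make_block(200, 0.05) for _ in range(5)]
    # Assumed counterbalancing rule: even-numbered subjects complete high prevalence first.
    if subject_id % 2 == 0:
        return practice + high_prevalence + low_prevalence
    return practice + low_prevalence + high_prevalence
```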

3 Results

3.1 Response Times (RTs)

Trials with no response or RTs shorter than 200 ms were removed as outliers (0.4 % of the data). Mean RTs by response type (Hit, Miss, Correct Rejection (CR), False Alarm) are plotted in Fig. 2. Because false alarm errors were rare (less than 2.5 % in all conditions), RTs for this response type are not reported here. A repeated-measures ANOVA with prevalence and set size was conducted separately for each display pattern group. With the static display, there was a significant main effect of set size on Hit RTs, F(1,11) = 118.5, p < 0.001, \( \eta_{p}^{2} = 0.92 \), but no significant effect of prevalence, F(1,11) = 1.39, p > 0.05, \( \eta_{p}^{2} = 0.11 \), and no interaction between the two factors, F(1,11) = 0.02, p > 0.05, \( \eta_{p}^{2} = 0.001 \). With the dynamic displays (Dynamic_CV and Dynamic_VV), there were main effects of prevalence, Fs(1,11) > 33.03, ps < 0.001, \( \eta_{p}^{2} \)s > 0.75, and set size, Fs(1,11) > 6.81, ps < 0.05, \( \eta_{p}^{2} \)s > 0.38, as well as a significant interaction between them, Fs(1,11) > 12.87, ps < 0.01, \( \eta_{p}^{2} \)s > 0.54. Simple effects analyses revealed that with the Dynamic_CV display, Hit RTs did not differ between set sizes 9 and 18 at 50 % prevalence, but at 5 % prevalence Hit RTs for set size 18 were significantly slower than those for set size 9. With the Dynamic_VV display, Hit RTs did not differ between 50 % and 5 % prevalence at set size 9, but at set size 18 Hit RTs at 5 % prevalence were significantly slower than those at 50 % prevalence.
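
For readers who wish to reproduce this style of analysis, the sketch below shows how a 2 (prevalence) × 2 (set size) repeated-measures ANOVA on per-subject mean Hit RTs could be run in Python with statsmodels; the data frame here contains synthetic placeholder values, not the reported data.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

# Synthetic per-subject mean Hit RTs (ms) in long format:
# 12 subjects x 2 prevalence levels x 2 set sizes for one display pattern group.
rows = []
for subject in range(1, 13):
    for prevalence in ("50%", "5%"):
        for set_size in (9, 18):
            rt = 1500 + 40 * (set_size == 18) + rng.normal(0, 100)  # arbitrary illustrative values
            rows.append({"subject": subject, "prevalence": prevalence,
                         "set_size": set_size, "hit_rt": rt})
rt_data = pd.DataFrame(rows)

# 2 (prevalence) x 2 (set size) repeated-measures ANOVA on Hit RTs
anova = AnovaRM(rt_data, depvar="hit_rt", subject="subject",
                within=["prevalence", "set_size"]).fit()
print(anova)
```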

Fig. 2. Mean RTs for Hit, Correct Rejection, and Miss responses at 50 % and 5 % prevalence as a function of set size in the three display pattern groups. Error bars represent the standard error.

With the Dynamic_CV display, there were significant interactions between prevalence and set size for both Miss RTs, F(1,11) = 7.10, p < 0.05, \( \eta_{p}^{2} = 0.42 \), and Correct Rejection RTs, F(1,11) = 20.00, p < 0.01, \( \eta_{p}^{2} = 0.65 \). Further analysis indicated that Miss RTs did not differ between 50 % and 5 % prevalence at set size 9, but at set size 18 Miss RTs at 5 % prevalence were significantly faster than those at 50 % prevalence. Correct Rejection RTs showed the same pattern.

3.2 Miss Error and Low Prevalence Effect

An ANOVA with display pattern, prevalence, and set size was conducted on miss error rates, and the results are plotted in Fig. 3. There were main effects of prevalence, F(1,33) = 44.56, p < 0.001, \( \eta_{p}^{2} = 0.57 \), and set size, F(1,33) = 55.17, p < 0.001, \( \eta_{p}^{2} = 0.63 \), but not of display pattern, F(2,33) = 0.56, p > 0.05, \( \eta_{p}^{2} = 0.03 \). The interaction between display pattern and set size was significant, F(1,33) = 5.81, p < 0.01, \( \eta_{p}^{2} = 0.26 \). Further analysis indicated that at set size 9 the miss error rate with the Dynamic_VV display was markedly higher than with the static and Dynamic_CV displays, whereas at set size 18 the miss error rate with the Dynamic_CV display was markedly higher than with the static and Dynamic_VV displays.
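
A simplified version of the follow-up on the display pattern × set size interaction can be sketched with pingouin, which handles one within-subject and one between-subject factor; the full three-factor mixed ANOVA reported above would require a more general tool. The data below are synthetic placeholders.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)

# Synthetic miss error rates: 36 subjects (12 per display pattern group) x 2 set sizes.
rows = []
for subject in range(1, 37):
    display = ("Static", "Dynamic_CV", "Dynamic_VV")[(subject - 1) // 12]
    for set_size in (9, 18):
        rows.append({"subject": subject, "display": display, "set_size": set_size,
                     "miss_rate": float(np.clip(0.3 + rng.normal(0, 0.1), 0, 1))})
miss_data = pd.DataFrame(rows)

# Two-way mixed ANOVA: display pattern (between-subject) x set size (within-subject)
aov = pg.mixed_anova(data=miss_data, dv="miss_rate", within="set_size",
                     subject="subject", between="display")
print(aov.round(3))
```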

Fig. 3. Results for miss errors. (a) Mean miss error rate at 50 % and 5 % prevalence as a function of set size in the three display pattern groups. (b) Mean miss error rate for each display pattern plotted by set size. (c) Low prevalence effect for each display pattern. Error bars represent the standard error.

In the present study, the low prevalence effect was defined as the difference between the miss error rate at 5 % prevalence and that at 50 % prevalence. A one-way ANOVA on this measure revealed no significant effect of display pattern, F(2, 33) = 0.38, p > 0.05.
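
A minimal sketch of this comparison, using synthetic stand-ins for the per-subject difference scores:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)

# Per-subject low prevalence effect = miss error rate at 5 % prevalence
# minus miss error rate at 50 % prevalence. Synthetic stand-ins, 12 scores per group.
lpe_static     = rng.normal(0.25, 0.10, size=12)
lpe_dynamic_cv = rng.normal(0.25, 0.10, size=12)
lpe_dynamic_vv = rng.normal(0.25, 0.10, size=12)

F, p = f_oneway(lpe_static, lpe_dynamic_cv, lpe_dynamic_vv)
print(f"F(2, 33) = {F:.2f}, p = {p:.3f}")
```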

3.3 Criterion and Sensitivity

According to Wolfe et al. [2], the low prevalence effect is related to a shift of the decision criterion. Based on signal detection theory, the decision criterion and sensitivity (measured by d') were calculated and are plotted in Fig. 4. In all three display pattern groups, there was a significant main effect of prevalence on the decision criterion: criteria were significantly more conservative at 5 % prevalence than at 50 % prevalence, F(1, 33) = 50.1, p < 0.001. Turning to sensitivity, in the Dynamic_CV group there was a significant effect of prevalence: d' was significantly greater at 5 % prevalence than at 50 % prevalence, t(11) = −2.7, p < 0.05. In the other two display groups, there was no significant effect of prevalence.
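
The two measures were presumably computed with the standard equal-variance Gaussian formulas, \( d' = z(H) - z(F) \) and \( c = -\tfrac{1}{2}\left[z(H) + z(F)\right] \). The sketch below applies one common correction for extreme rates, replacing rates of 0 or 1 with 1/(2N) or 1 − 1/(2N); the exact adjustment the authors took from Macmillan and Creelman [7] is not specified.

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian signal detection measures: sensitivity d' and criterion c."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections

    # Guard against hit or false-alarm rates of 0 or 1 using the 1/(2N) convention,
    # one common correction; the paper does not state which adjustment was applied.
    hit_rate = min(max(hits / n_signal, 1 / (2 * n_signal)), 1 - 1 / (2 * n_signal))
    fa_rate = min(max(false_alarms / n_noise, 1 / (2 * n_noise)), 1 - 1 / (2 * n_noise))

    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    criterion = -0.5 * (z_hit + z_fa)
    return d_prime, criterion

# Hypothetical counts for one observer's 1,000-trial low prevalence session (50 targets)
print(sdt_measures(hits=35, misses=15, false_alarms=10, correct_rejections=940))
```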

Fig. 4. Criterion and sensitivity at 50 % and 5 % prevalence across the three display patterns. Error bars represent the standard error.

4 Discussion

The present study examined whether static and dynamic displays differentially affect prevalence effects in an X-ray luggage-screening task. By manipulating display movement velocity, we implemented three display patterns: a static display, a dynamic constant-velocity (Dynamic_CV) display, and a dynamic varying-velocity (Dynamic_VV) display. Strong prevalence effects were observed regardless of display pattern. Additionally, the size of the low prevalence effect was almost the same across the three display patterns. These results confirm that the low prevalence effect exists not only in static search but also in more ecologically valid dynamic visual search.

Despite the comparable size of the low prevalence effect, display pattern did affect search performance. First, for miss errors, a significant interaction between display pattern and set size revealed that when there were relatively few items in the search field, faster display movement produced more miss errors. This indicates that miss errors increased with increasing display movement velocity, in accordance with previous research [5, 6]. For example, Williams and Borow [5] found that, relative to a static display, search performance dropped markedly once display movement velocity reached the range of 8–16 °/s. In the present experiment, the velocities were 9.53 and 14.25 °/s, both falling within that range. Second, the pattern of RT results differed between the static and dynamic displays. With the dynamic displays, target prevalence interacted with set size, notably affecting response times, whereas no such interaction was found with the static display. This indicates that search time varied with display pattern.

As Wolfe et al. [2] suggested, the low prevalence effect arises from observers shifting their decision criteria. In this study, a clear shift of the decision criterion was observed in both static and dynamic visual search, which suggests that the same mechanism underlies the low prevalence effect in static and dynamic search settings. Beyond that, in the Dynamic_CV group, participants' sensitivity was higher at 5 % prevalence than at 50 % prevalence. However, this improvement in sensitivity at low prevalence cannot explain the elevated miss error rates. To calculate decision criteria and sensitivity (d'), the false alarm rate was adjusted following Macmillan and Creelman [7], and the change in sensitivity may be related to this adjustment. In the static and Dynamic_VV groups, sensitivity did not vary with target prevalence.

In summary, the low prevalence effect was present both in static search and in more ecologically valid dynamic visual search, although the characteristics of prevalence effects only partially carried over to dynamic search. This study extends low prevalence effect research to the dynamic search domain and thereby improves its ecological validity.