Abstract
Unilateral spatial neglect (USN) is a complex spatial attentional disorder, frequently seen after a stroke, consisting of a failure to attend to the contralesional side of space. However, the majority of cases go undiagnosed due to the lack of a valid and reliable tool able to assess USN and its many variants. Recent technological advances in virtual reality (VR) and physiological sensors allow the study of this disorder in controlled, ecologically valid environments, which hold the promise of reliable and early detection. This proof-of-concept study aims to evaluate the feasibility of a system for discriminating different attentional states using a multimodal dataset derived from a spatial attention task conducted in VR. Nine healthy young adults underwent two experimental conditions: a Control condition and a Left Occlusion condition. Participants performed a visual search task while their behavioral data, including performance metrics, eye-gaze, head, and controller movement data, were recorded. Additionally, electroencephalography data were synchronously collected to capture neural correlates of attentional processing. Analysis of this within-subjects study revealed worse performance (higher reaction times) and behavioral changes (a right-ward gaze bias; left-ward biases in head and controller movement) in the Left Occlusion condition. Neural differences between the two conditions were also found in parieto-occipital mean alpha band power and event-related potentials. If validated, this system could serve as a diagnostic VR tool and holds the potential to facilitate the participation of stroke patients with USN in VR-driven rehabilitation.
1 Introduction
Unilateral spatial neglect (USN) or hemineglect is an attentional/perceptual syndrome that most frequently occurs secondary to stroke in the right hemisphere, often leading to a contralateral suppression of awareness of visual, auditory and tactile stimuli. Recent estimates (Esposito et al. 2021) reveal that the prevalence of post-stroke USN is close to 30%, being higher after a right hemisphere stroke. Despite much research and a high number of available tools, there is currently no gold standard for the diagnosis of USN. This could be attributed to the low sensitivity of paper-and-pencil tests, which limits these tools' ability to discriminate between the different USN profiles and to predict functional performance in daily life (Ogourtsova et al. 2019). Furthermore, USN is a highly heterogeneous disorder, usually concomitant with other perceptual, cognitive, and motor deficits that complicate its assessment, including anosognosia (lack of disease awareness), also common after right hemisphere strokes (Parton et al. 2004). All of these factors contribute to an underdiagnosis of USN, which, if left unattended, may lead to difficulties in rehabilitation and increased burden and stress in family caregivers (Chen et al. 2016).
Recent technological advances in optics, biosensors and processing power are, at least in part, responsible for the rise in popularity of immersive virtual reality (VR). This has lowered costs and raised accessibility for consumers as well as researchers. This form of VR (using a head-mounted display, HMD) has been used consistently for the assessment of USN for almost two decades (Kim et al. 2007), since it delivers a more immersive medium (when compared to paper-and-pencil and PC tests) in which visual stimuli can be manipulated to examine the affected visual hemifield. Recent reviews show the potential of VR for both rehabilitation (Salatino et al. 2023; Martino Cinnera et al. 2022) and assessment (Terruzzi et al. 2023; Kaiser et al. 2022). Eye-tracking, in turn, has proven useful in characterizing eye behavior in USN, where findings such as decreased saccade amplitude and total trajectory, as well as a right fixation bias, are commonly reported in patients. Despite this, only a few recent studies using VR for the assessment of USN have also implemented eye-tracking (Hougaard et al. 2021; Kaiser et al. 2022; Martino Cinnera et al. 2024; Uimonen et al. 2024). We predict this number will rise in the short term, as most currently available commercial headsets now include an eye-tracker.
Electroencephalography (EEG) has also been utilized to both evaluate and treat USN, by monitoring brain activity but also by providing neurofeedback in a closed loop, as in brain-computer interfaces (BCIs). Recent studies show that patients with USN have increased power in the theta (Ros et al. 2022) and delta bands (Pirondini et al. 2020; Zhang et al. 2022), along with several anomalies in the alpha band, such as lower power (Ros et al. 2017, 2022; Saj et al. 2021; Pirondini et al. 2020) and interhemispheric asynchrony and disconnectivity (Ros et al. 2022; Zhang et al. 2022). To our knowledge, very few studies have used VR while performing EEG on patients with USN (Khalaf et al. 2018; Mak et al. 2022); however, it is possible that, when used conjointly with extended reality devices, these neural signatures may detect neglect even better than behavioral metrics (Mak et al. 2022).
Despite these independent advances in the study of USN, no study has incorporated VR while simultaneously acquiring EEG and eye-tracking data. We believe that an immersive virtual scenario, where stimuli can be manipulated to appear in the patient's affected visual hemifield, will yield higher quality data that better resembles actual daily-life behaviors, such as gaze and head movement biases. This information can be extracted from the HMD's sensors (eye-tracker, accelerometers), and by simultaneously extracting EEG data and task-related neural correlates, we might have a better chance at evaluating this very heterogeneous disorder. Ultimately, this could lead to a closed-loop system, where the VR environment or task could be adapted based on the spatial attention level of the patient. Therefore, we propose this multimodal (VR, EEG, eye-tracking) pilot study to examine the feasibility of this system in a healthy population. By using a visual search paradigm that partially resembles the experience of patients with USN, we expect to induce differential behavioral and neural responses in our participants.
2 Methods
2.1 Participants
All participants signed an informed consent form before participating in the study, in accordance with the 1964 Declaration of Helsinki. To be eligible, participants were required to be 18 years or older and to have no history of neurological or psychiatric disorders currently managed with medication. A convenience sampling method was chosen for participant recruitment, aiming for an equal number of males and females. We employed a within-subjects design where participants underwent two experimental conditions. Participants were randomly assigned to start with either of the two conditions in a counterbalanced manner. A total of 10 participants were recruited for this pilot study. One subject (male) was excluded due to faulty eye-tracking data. Our final sample consisted of 9 participants (4 females), with a mean age of 29.5 years, mostly right-handed (7 out of 9).
2.2 Experimental design
To build on previous research and available tools, we used the Attention Atlas (AA; Norwood et al. 2022). The AA is a virtual reality attention assessment platform focused on the detection and evaluation of USN. It takes advantage of newer HMDs to administer a highly immersive visual search task and extract performance, eye-gaze and movement data. It also aims to deliver a highly sensitive, enjoyable and effective experience, able to provide clinically relevant feedback on the participant's level of neglect (Painter et al. 2023, 2024). It is highly customizable, allowing the user to control the number of trials, the types of stimuli and the difficulty, among other settings. This allows for a better characterization of the subject's neglect profile.
In short, the AA is a visual search task where participants are immersed inside an invisible icosphere mesh, with its vertices serving as coordinates for visual stimuli (Fig. 1). The experimenter can choose the coordinate system to be used (spherical, icosphere), whether the stimuli have depth, the stimulus type, position, size and color, as well as the search mode (serial, conjunction, unique feature). For this study, we configured an array based on spherical coordinates, consisting of 4 concentric rings of 8 stimuli at 4 different eccentricities (12.5\(^\circ \), 25\(^\circ \), 37.5\(^\circ \) and 50\(^\circ \)) located at a 2 m radius. This configuration gave subjects the impression of being in an empty, dark room with white concentric stimuli rings appearing in front of them.
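For illustration, the ring layout described above can be sketched as follows (a minimal Python sketch, not the Attention Atlas implementation; the function name and the +z-forward coordinate convention are our own assumptions):

```python
import math

RADIUS_M = 2.0                                # stimuli lie on a 2 m sphere
ECCENTRICITIES = [12.5, 25.0, 37.5, 50.0]     # degrees from straight ahead
STIMULI_PER_RING = 8

def stimulus_positions():
    """Return the 32 stimulus positions as (x, y, z), with +z = forward.
    Each ring holds 8 stimuli evenly spaced around the line of sight."""
    positions = []
    for ecc_deg in ECCENTRICITIES:
        ecc = math.radians(ecc_deg)
        for i in range(STIMULI_PER_RING):
            theta = 2 * math.pi * i / STIMULI_PER_RING  # angle around the ring
            x = RADIUS_M * math.sin(ecc) * math.cos(theta)
            y = RADIUS_M * math.sin(ecc) * math.sin(theta)
            z = RADIUS_M * math.cos(ecc)
            positions.append((x, y, z))
    return positions
```

Every position lies on the 2 m sphere, so the four rings appear as concentric circles in front of a centered observer.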
Participants were asked to detect a target stimulus “T” among 31 distractors (“L”) as quickly and accurately as possible. The position of the target was randomized and counterbalanced among the 32 possible stimulus locations. There was no timeout within each trial; if participants were not able to find the “T”, they were asked to select a distractor (coded as incorrect). Before each trial, participants were shown a recenter cue, requiring them to select a central target to redirect their eye-gaze back to the center of the display.
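A randomized, counterbalanced target order of this kind can be approximated as below (an illustrative Python sketch; the AA's actual randomization scheme may differ, and the function name is ours):

```python
import random

def make_trial_targets(n_trials, n_locations=32, seed=0):
    """Counterbalanced target order: cycle through all 32 locations in a
    fresh random permutation before any location repeats."""
    rng = random.Random(seed)
    order = []
    while len(order) < n_trials:
        block = list(range(n_locations))
        rng.shuffle(block)          # randomize within each block of 32
        order.extend(block)
    return order[:n_trials]
```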
To record responses, participants were instructed to use the right-hand controller, which functioned as a virtual laser pointer. They were asked to point at the target stimulus “T” in each trial and then press the trigger button. They were asked to do this as fast and accurately as possible.
To assess different attentional states, participants completed two conditions. First, the Control (C) condition, where participants performed the task as explained above. Additionally, we created a Left Occlusion (LO) condition, which occluded the participant’s left visual hemifield within the virtual scenario (Fig. 1). The occlusion consisted of a black, opaque virtual object positioned in such a way that it covered the virtual camera’s left hemifield in its entirety (just as if one was wearing a patch on the left eye). This way, if participants remained static, they would not be able to see the stimuli array’s left side. However, by rotating their head to the left, this left portion would be again visible. This was done to generate attentional behaviors similar to those observed in some patients with left (egocentric, extrapersonal) USN. A video demonstration of both task conditions is available online (Eudave 2024).
2.3 Experimental setup
2.3.1 Virtual reality setup
To project the immersive visual search task (the Attention Atlas), we used the HTC Vive Pro Eye HMD, which has a field of view of 110\(^\circ \) and a refresh rate of 90 Hz, along with a right-hand Vive controller. This HMD includes an integrated eye-tracker with a trackable field of view of 110\(^\circ \), at a frequency of 120 Hz. Collected data from this device included head position and rotation coordinates, eye-gaze raycast data, controller raycasts and button responses. From the Attention Atlas, we extracted cue and target’s onset and response times.
2.3.2 EEG setup
For the acquisition of EEG data, a mobile, wireless EEG amplifier (LiveAmp; Brain Products GmbH, Gilching, Germany) equipped with 32 active electrodes (+3 ACC) and operating at a sampling rate of 500 Hz was utilized. The spatial arrangement of the electrodes adhered to the 10–20 EEG system. Ground and reference electrodes were positioned at central and forehead locations, respectively. EEG signals were transmitted from the portable amplifier to a desktop computer over a Bluetooth wireless interface in the 2.4 GHz ISM band.
2.3.3 Data synchronization
To synchronize VR, eye-tracking, and EEG data we used the Lab Streaming Layer (LSL) software (LabStreamingLayer 2024). LSL is a framework that sends and receives different kinds of data streams over a network connection, repeatedly measuring each stream's clock offset so that local timestamps can be mapped onto a single common timeline.
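The core idea of this alignment can be sketched as follows (a simplified Python sketch of offset-based correction, not the LSL library itself; LSL additionally models clock drift over time):

```python
def align_timestamps(stream_stamps, clock_offsets):
    """Map each stream's local timestamps onto a common timeline by
    subtracting that stream's estimated clock offset (LSL-style).
    stream_stamps:  {stream_name: [local timestamps in s]}
    clock_offsets:  {stream_name: [repeated offset measurements in s]}
    """
    aligned = {}
    for name, stamps in stream_stamps.items():
        # LSL measures the offset repeatedly; use the middle (median)
        # measurement as a robust estimate against network jitter
        offs = sorted(clock_offsets[name])
        offset = offs[len(offs) // 2]
        aligned[name] = [t - offset for t in stamps]
    return aligned
```

With offsets estimated per stream, EEG, gaze and VR events can be compared on one shared clock.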
2.4 Procedure
First, participants were fitted with the EEG cap and the VR HMD. Both were adjusted to ensure low impedance and good EEG signal quality (<10 kOhms). Before launching the AA, participants underwent the HMD’s eye-tracking calibration procedure.
Before commencing the visual search task, the AA allows for individual head origin calibration, so that each participant is positioned in the center of the icosphere. Subsequently, participants went through a short tutorial, followed by a first run with either the C or LO condition, chosen by the experimenter in a pseudorandom fashion. When done, this was followed by a second run with the remaining condition. To acquire the same amount of EEG data for every participant, each run had a time limit of 10 min. On average, participants executed 148 (SD:33) trials in the C condition and 125 (SD:30) trials in the LO condition. Participants were not briefed on the existence of two different conditions. Including both the EEG setup and the experimental task, the whole procedure lasted approximately 60 min. A short video demonstration of this setup is available online (Eudave 2023).
Complete procedure for the experimental task. Setup time included cap and electrode placement, low impedance and signal quality checks. A short tutorial followed, where participants were taught to select target Ts using the right controller; participants were then pseudorandomly assigned to one of the experimental conditions: Control or Left Occlusion (left visual hemifield occlusion, here highlighted in semi-transparent red, but opaque during the task, completely occluding the left visual hemifield). This was then followed by the remaining condition. The experiment duration was between 50 and 60 min for most participants
2.5 Data analysis
2.5.1 Behavioral data
There were two categories of collected data: performance (accuracy and reaction time [RT]) and raw raycast data (headset, eye-tracking and controller). For each participant, we calculated Accuracy, defined as the percentage of correct trials (finding Ts), and RT, defined as the mean reaction time (time between stimulus appearance and selection using the controller) for correct trials. To investigate how stimulus position moderated RT, we calculated mean RTs for stimuli located in four regions: the top, bottom, left, and right hemifields. This analysis included 12 stimuli per region, excluding those located directly on the vertical or horizontal midlines (see Fig. 2b for an illustration of hemispace positions). We also analyzed RT by dividing the stimuli according to four levels of eccentricity (12.5\(^\circ \), 25\(^\circ \), 37.5\(^\circ \), and 50\(^\circ \) from the center), with 8 stimuli located at each eccentricity level (Fig. 2e). Finally, as a baseline measure, we extracted the RTs from the recenter cues presented before each trial.
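The hemifield RT aggregation described above can be sketched as follows (an illustrative Python sketch; the trial-tuple format and the sign convention for stimulus angles are assumptions, and midline stimuli are excluded from the corresponding axis as in the text):

```python
def region_mean_rts(trials):
    """Mean RT per hemifield region from correct trials.
    trials: list of (azimuth_deg, elevation_deg, rt_ms, correct), where
    azimuth > 0 is right of center and elevation > 0 is above center.
    Stimuli lying exactly on a midline are excluded from that axis."""
    sums = {"left": [0.0, 0], "right": [0.0, 0],
            "top": [0.0, 0], "bottom": [0.0, 0]}
    for az, el, rt, correct in trials:
        if not correct:
            continue                       # RTs are computed on correct trials
        if az != 0:                        # skip vertical-midline stimuli
            key = "right" if az > 0 else "left"
            sums[key][0] += rt
            sums[key][1] += 1
        if el != 0:                        # skip horizontal-midline stimuli
            key = "top" if el > 0 else "bottom"
            sums[key][0] += rt
            sums[key][1] += 1
    return {k: (s / n if n else None) for k, (s, n) in sums.items()}
```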
Raycasts (forward vectors invisible to the participants) hit the 2 m radius spherical mesh (also invisible to the participant), giving the experimenter a measurement of where the head, eye-gaze and controller were pointing at any given moment. Data were acquired as the 3D (x, y, z) hit position on the raycast mesh at a frequency of 90 Hz, and then converted to spherical coordinates (latitude, longitude). Mean latitude and longitude values were calculated for head, eye-gaze and controller raycasts, representing left/right-ward and up/downward biases, respectively.
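This conversion and the mean-bias computation can be sketched as follows (a minimal Python sketch; we assume an x-right, y-up, z-forward frame and adopt the convention used here that latitude denotes the horizontal angle, negative to the left):

```python
import math

def raycast_to_lat_long(x, y, z):
    """Convert a raycast hit on the sphere to (latitude, longitude) in
    degrees, following this paper's convention: latitude is the horizontal
    angle (negative = left of straight ahead) and longitude the vertical
    angle (negative = below straight ahead)."""
    latitude = math.degrees(math.atan2(x, z))
    longitude = math.degrees(math.asin(y / math.sqrt(x * x + y * y + z * z)))
    return latitude, longitude

def mean_bias(hits):
    """Mean latitude/longitude over a run: lateral and vertical bias."""
    lats, longs = zip(*(raycast_to_lat_long(*h) for h in hits))
    return sum(lats) / len(lats), sum(longs) / len(longs)
```

A run spent equally on a straight-ahead hit and a hit 90\(^\circ \) to the right, for example, yields a +45\(^\circ \) mean latitude (right-ward bias).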
To compare the within-subjects results in both experimental conditions, paired samples t-tests were performed on Accuracy and RT measures, along with head, eye-gaze and controller latitude and longitude mean values. To calculate effect sizes, we employed Cohen's d. For the position and eccentricity RT analyses, we used repeated measures ANOVAs. The position RT analysis included the factors Condition (two levels: C-LO) and Position (two levels: top-bottom or left-right). The eccentricity RT analysis contained the factors Condition (two levels: C-LO) and Eccentricity (four levels: 12.5\(^\circ \), 25\(^\circ \), 37.5\(^\circ \), 50\(^\circ \)). For non-parametric data (only the Accuracy measure), we used the Wilcoxon signed-rank test for paired comparisons, and the rank biserial correlation for calculating effect sizes.
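For reference, the paired-samples t statistic and Cohen's d for paired designs reduce to the following (a minimal Python sketch of the standard formulas, not the actual analysis code):

```python
import math

def paired_t_and_d(a, b):
    """Paired-samples t statistic and Cohen's d for two within-subject
    condition vectors (same participants, same order)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    sd = math.sqrt(var)
    t = mean / (sd / math.sqrt(n))
    d = mean / sd   # Cohen's d for paired designs: mean diff / SD of diffs
    return t, d
```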
To rule out whether our results were influenced by undesired effects, such as presentation order or dexterity in controller handling (as measured by the baseline RTs), we conducted paired samples t-tests comparing the first and second conditions for all behavioral measures. Statistically significant measures were then used as fixed effects in linear mixed-effects regression models, with RT as the dependent variable, to check for possible associations. For all statistical comparisons, the significance level was set at 5% (p< 0.05).
An additional eye-tracking analysis was conducted to explore participants' gaze patterns during both experimental conditions. A 2D t-map was constructed by dividing the visual space into a 110 × 110 voxel grid (corresponding to the HMD's field of view), with each unit representing 1 visual degree. The grid was centered on the visual field, spanning \(\pm \)55\(^\circ \) from the origin.
The number of raycasts hitting each voxel in the grid was computed for both experimental conditions. These hits-per-voxel metrics were then used to generate separate hit maps for each condition. Subsequently, a two-sample t-test was employed to compare the hit counts between conditions for each voxel. This statistical approach allowed for the identification of significant differences in gaze behavior, providing insights into the distinct patterns of attention allocation across the experimental manipulations. Due to the exploratory nature of this analysis, we did not correct for multiple comparisons.
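The hit-map and voxel-wise t-test procedure can be sketched as follows (an illustrative Python sketch assuming per-participant hit counts for each voxel and a pooled-variance two-sample t statistic; the actual analysis details may differ):

```python
import math

GRID = 110   # 110 x 110 one-degree voxels spanning the HMD field of view

def hit_map(gaze_points):
    """Count raycast hits per 1-degree voxel; gaze_points are
    (horizontal_deg, vertical_deg) relative to the field center."""
    counts = [[0] * GRID for _ in range(GRID)]
    for h, v in gaze_points:
        col = int(h + 55)    # shift so the grid origin is the field center
        row = int(v + 55)
        if 0 <= col < GRID and 0 <= row < GRID:
            counts[row][col] += 1
    return counts

def voxel_t(samples_a, samples_b):
    """Two-sample t statistic (pooled variance) for one voxel's hit
    counts across participants in each condition."""
    na, nb = len(samples_a), len(samples_b)
    ma, mb = sum(samples_a) / na, sum(samples_b) / nb
    va = sum((x - ma) ** 2 for x in samples_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in samples_b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
```

Applying `voxel_t` to every voxel yields the 110 × 110 t-map shown in Fig. 4.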
2.5.2 EEG data analysis
For the post-hoc analysis, EEG signals were curated and processed using MATLAB R2023b (The MathWorks, MA, United States) and the EEGLAB toolbox v2023.1 (Delorme and Makeig 2004). The EEG data were processed in three stages. First, they were pre-processed to clean the signals of artifacts and noise; second, they were post-processed to extract power across different EEG bands and event-related potentials for the two experimental conditions. Finally, to quantify the differences between the two conditions, we performed statistical analyses using the Wilcoxon signed-rank test, the non-parametric equivalent of the paired-samples t-test. For all statistical comparisons, the significance level was set to 5% (p < 0.05).
Pre-processing
After importing all channel information, signals were down-sampled to 128 Hz to reduce data size and cut off unnecessary high-frequency information, followed by band-pass filtering between 0.5 and 40 Hz. Next, to remove bad channels and correct continuous noisy data, primarily due to movement, we applied the offline version of the Artifact Subspace Reconstruction (ASR) method (Kothe and Jung 2016). ASR is currently the most effective EEG artifact correction and signal reconstruction algorithm, with insignificant information loss (Plechawska-Wojcik et al. 2018). After interpolating any noisy channels removed by ASR, we re-referenced the data to the common average. We then ran an Independent Component Analysis (ICA) (Sejnowski 1996) to remove all remaining artifactual components from the EEG signals as an additional layer of data cleaning. For this, we also employed ICLabel (Pion-Tonachini et al. 2019), an automated method that uses a trained classifier for EEG independent components (ICs), automatically removing only “Muscle” and “Eye” components with a probability value of \(\ge \) 90% (Delorme 2023). Finally, the data were epoched around every trial cue onset, between \(-\)1500 and 1500 milliseconds, and baseline corrected by subtracting the mean of the \(-\)1500 to 0 milliseconds window from the entire epoch.
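The final epoching and baseline-correction step can be sketched as follows (a single-channel Python sketch, not the EEGLAB pipeline; the 128 Hz rate after down-sampling is taken from above):

```python
def epoch_and_baseline(signal, cue_samples, fs=128):
    """Cut epochs from -1500 to +1500 ms around each cue onset and
    subtract the mean of the -1500..0 ms window from the whole epoch
    (simplified here to a single channel).
    signal: list of samples; cue_samples: cue onsets as sample indices;
    fs: sampling rate in Hz after down-sampling."""
    half = int(1.5 * fs)             # 1500 ms expressed in samples
    epochs = []
    for cue in cue_samples:
        if cue - half < 0 or cue + half > len(signal):
            continue                 # drop epochs that run off the recording
        ep = signal[cue - half:cue + half]
        baseline = sum(ep[:half]) / half   # mean of the pre-cue window
        epochs.append([s - baseline for s in ep])
    return epochs
```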
EEG band power
Following pre-processing, a time-frequency decomposition of the epoched data was performed using a 3-cycle wavelet (with a Hanning-tapered window applied) across the Delta (1–4 Hz), Theta (4–7 Hz), Alpha (8–12 Hz), and Beta (13–30 Hz) EEG bands. EEG band power was extracted for all 32 electrodes and normalized as a percentage of the total spectral power. This allows the relative contribution of different frequency bands to the overall EEG signal to be compared, facilitating the analysis of brain activity patterns. Finally, topographical maps were generated to assess the spatial distribution of EEG power over the scalp for each band.
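The relative band power normalization can be sketched as follows (an illustrative Python sketch operating on a precomputed power spectrum; the wavelet decomposition itself is omitted):

```python
BANDS = {"delta": (1, 4), "theta": (4, 7), "alpha": (8, 12), "beta": (13, 30)}

def relative_band_power(freqs, psd):
    """Express each band's power as a percentage of total spectral power.
    freqs/psd: matched lists of frequency bins (Hz) and power values."""
    total = sum(psd)
    out = {}
    for band, (lo, hi) in BANDS.items():
        p = sum(pw for f, pw in zip(freqs, psd) if lo <= f <= hi)
        out[band] = 100.0 * p / total
    return out
```

Normalizing per electrode in this way makes band contributions comparable across electrodes and conditions despite differences in absolute signal amplitude.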
EEG evoked potentials
The signals were baseline-corrected to adjust for any pre-stimulus activity. Next, averaging was performed across multiple epochs to enhance the signal-to-noise ratio, resulting in averaged event-related potentials (ERPs) in microvolts (µV) over time. Finally, topographical mapping was used to visualize the spatial distribution and infer the origin of the evoked potentials at the scalp level.
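The epoch-averaging step at the heart of ERP extraction can be sketched as follows (a minimal Python sketch; real data would be per-channel arrays of microvolt values):

```python
def average_erp(epochs):
    """Average time-locked epochs sample-by-sample to boost the
    signal-to-noise ratio; the result is the ERP in the same units
    as the input epochs."""
    n = len(epochs)
    length = len(epochs[0])
    return [sum(ep[t] for ep in epochs) / n for t in range(length)]
```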
3 Results
3.1 Behavioral results
A summary of our descriptive results can be found in Table 1. Performance data revealed no differences in accuracy between the LO and C conditions (mean difference = 0.072%, Wilcoxon statistic = 5, p = 0.552), but there was a significant increase in mean RT in the LO condition (mean difference = \(-\)792 ms, t = \(-\)4.694, p = 0.002, d = \(-\)1.565) (Fig. 2).
Performance in the visual search task in the Control (C, green) and Left Occlusion (LO, orange) conditions. a Total Reaction Time (RT); b–d Top(T)-Bottom(B) and Left(L)-Right(R) stimuli RTs; e, f Stimuli Eccentricity RTs. Solid lines represent significant main effects of Condition; dashed lines represent significant main effects of Position/Eccentricity. Asterisks represent statistically significant post-hoc pairwise comparisons
When comparing top vs. bottom stimuli, significant main effects of Condition (F(1,8) = 14.70, p = 0.005) and Position (F(1,8) = 7.93, p = 0.023) were found, along with a significant interaction (F(1,8) = 16.03, p = 0.004). Left vs. right comparisons revealed only a significant Condition main effect (F(1,8) = 9.96, p = 0.013). The RT by eccentricity analysis showed significant Condition (F(1,8) = 18.85, p = 0.002) and Eccentricity (F(3,24) = 40.26, p < 0.001) main effects, and a significant interaction (F(3,24) = 3.32, p = 0.037). Post-hoc tests revealed that LO RTs were significantly higher for bottom stimuli (p < 0.001) and at eccentricities 25\(^\circ \) (p = 0.019), 37.5\(^\circ \) (p = 0.007) and 50\(^\circ \) (p = 0.003). Summary descriptives and post-hoc comparisons are available in Supplementary Tables 1 and 2.
Raycast data (Fig. 3) showed significant differences between LO and C in all latitude measurements. A left-ward bias was found in LO (i.e. more time spent exploring the left hemifield) for head (mean difference = 9.038\(^\circ \), t = 14.376, p < 0.001, d = 4.792) and controller (mean difference = 3.917\(^\circ \), t = 3.349, p = 0.010, d = 1.116) raycast positions. We also found a right-ward bias in eye-gaze (i.e. more time spent looking at the right hemifield) in the LO condition (mean difference = \(-\)3.288\(^\circ \), t = \(-\)6.206, p < 0.001, d = \(-\)2.069). Longitude measurements showed no significant differences between conditions.
To examine whether task learning or controller handling influenced the RT differences between the C and LO conditions, we tested for an order effect in our behavioral data, including the baseline measure. Paired samples t-tests revealed only a decrease in RT in the baseline measure and a reduction in the left-ward controller movement bias during the second session (see Supplementary Table 3). These changes may indicate improved dexterity in handling the controller. However, further linear mixed-effects regression analyses (Supplementary Table 4) provided no evidence that these measures or presentation order had a significant effect on task RTs.
Raincloud plots for behavioral results in the Control (C, green) and Left Occlusion (LO, orange) conditions. a–c Mean Latitude values in HMD, controller and eye-gaze (negative values = left hemifield; d–f mean Longitude values in HMD, controller and eye-gaze (negative values = bottom hemifield). Solid lines and asterisks represent statistically significant comparisons
Our exploratory eye-gaze t-map analysis (Fig. 4) confirmed our previous eye-gaze results and showed that compared to C, participants in the LO condition had fewer raycast hits in the left hemifield (in blue), particularly in and around the area where some stimuli were positioned.
3.2 EEG results
3.2.1 Differences in band power
In terms of EEG bands, the spatial distribution of power across all bands is similar between the C (Fig. 5a) and LO (Fig. 5b) conditions, although the power percentages differ. Specifically, when computing the percent change in EEG power from Control to LO, we observe reduced power over the occipital region in all EEG bands, most focally in the Theta and Alpha bands. We further observe a reduction over the temporal regions in the Delta band, and a reduction in Beta power over centro-parietal regions in the LO condition (Fig. 5c). The Wilcoxon signed-rank test revealed significant differences in the Delta band over the frontal electrode FC5, central C3, and temporal T8; and in the Theta band over the frontal Fp2, temporal TP10 and T8, parietal Pz, and occipital O2 electrodes (Fig. 5d). In the Alpha band, differences were found over the frontal Fp2, fronto-temporal FT9, fronto-central FC5, and parietal P3 electrodes. Finally, the Beta band showed the most significant power differences, over the frontal F3 and F8, fronto-central FC2 and FC6, centro-parietal CP6, temporal T8 and occipital Oz electrodes (Table 2).
Topographical plots of the Delta; Theta; Alpha; Beta EEG band power: a Control condition relative power (% across total spectrum power); b Left Occlusion (LO) condition relative power; c percent-change of LO from Control; d statistically significantly different areas. *p< 0.05 indicates a significant difference
3.2.2 Differences in evoked potentials
From the ERP analysis, we can identify two consistent evoked potentials: a positive response of 20 µV occurring at 100 ms (± 20 ms) post-stimulus (P1), and a negative response of \(-\)20 µV occurring at 200 ms (± 50 ms) post-stimulus (N2), in the parietal and occipital areas (Fig. 6a). When exploring the topographical distribution of the P1 and N2 potentials, we see that both have an occipital origin (Fig. 6b).
In terms of P1 and N2 comparisons between conditions, no significant differences were found in any electrode.
4 Discussion
Results from this pilot study suggest that a multimodal assessment of spatial attention in VR is feasible, at least in healthy individuals. We were able to detect behavioral differences between a control and an experimental condition with an occluded left visual hemifield, while simultaneously acquiring valid eye-gaze and EEG data.
Regarding performance metrics, we found that, while all participants were similarly accurate, those in the LO condition had longer RTs, especially when target stimuli were located in the bottom hemifield, and increasingly so as stimuli became more eccentric. This effect was not accompanied by a significant change in longitudinal head or controller movement. It is possible that, given the limited field of view, participants' scanning strategies tended to start in the top hemifield, leading to longer search times for stimuli located in the bottom hemifield. These measures, although not routinely explored, might also be markers of neglect, as previously observed in studies with patients (Numao et al. 2021; Painter et al. 2023). Unlike other studies with patients performing similar tasks (Martino Cinnera et al. 2024; Uimonen et al. 2024; Perez-Marcos et al. 2023; Ogourtsova et al. 2018), we did not find longer RTs for left hemifield stimuli in the LO condition, although the values show a trend that could become significant with a larger sample size. While this result might be expected, since healthy participants can overcome the occlusion and complete the task, results from our exploratory gaze pattern analysis show that participants struggled to find some left hemifield stimuli. Another interesting finding is that participants in the Control condition did not show the pseudoneglect phenomenon (a bias in attention toward the left side of space), a frequent finding in these kinds of tasks (Nuthmann and Clark 2023).
Our aim with the added occlusion in the LO condition was to induce a measurable behavioral response, concretely a different pattern of eye, head and controller movement accompanying the increase in RT in our experimental condition. Our results show a right-ward bias in eye-gaze, which was expected: at the start of every trial, participants had immediate visual access only to the right hemifield, where they spent more time looking for the T before turning their heads to the left to explore the other hemifield. The left-ward bias in head movement was also expected, since in order to see the leftmost stimuli, participants had to turn their heads further to the left to overcome the occlusion. We assumed that controller movement would follow head movement (hence the left-ward bias), as the laser pointer needed to be close to the target once it was found. This behavioral pattern is what may have caused the drop in performance (longer RTs) in the LO condition. Patients with neglect usually show a right-ward bias in head orientation, attributed to neglect or a lack of intention to initiate movements toward the left (Hougaard et al. 2021). This lack of intention is absent in our healthy sample; participants simply spent more time orienting their heads in the direction where space was partially occluded. Likewise, patients often show a right-ward bias in eye-gaze orientation, which was also found in the LO condition: as the right hemifield was the most readily available (i.e. not occluded and most visible at the start of every trial), participants spent more time moving their eyes in that direction.
In terms of overall EEG band power, no widespread differences were found between the Control and LO conditions. Nonetheless, the spatial distribution of power in the different EEG bands reveals reduced activity in LO over the parietal and occipital regions, which are primarily involved in spatial perception, attention, and visual processing (Babiloni et al. 2006).
Past research has suggested that alterations in EEG-bands may be associated with spatial neglect. Typically, lower alpha power has been associated with directing attention to visual stimuli, particularly in parieto-occipital areas. Task EEG studies suggest that deficits in stroke patients, with and without USN, may result in higher alpha power (Lasaponara et al. 2019). Alpha power alterations are also present in resting-state EEG, where patients exhibit an abnormally lower alpha power (Zhang et al. 2022; Pirondini et al. 2020). Additionally, in an augmented reality and EEG system, left occipital alpha power was used to successfully detect neglected stimuli (Mak et al. 2022). These findings suggest that alpha power might be a good candidate as a marker for attentional deficits or even a target for rehabilitation (Ros et al. 2017; Saj et al. 2021) in patients with USN.
In terms of evoked potentials, we identified the P1 and N2 components. Concretely, the P1 component occurs around 80–120 milliseconds after the presentation of a visual stimulus and is often enhanced at attended locations compared to unattended locations (Luck 1995). It reflects the initial sensory processing of visual stimuli and can be modulated by spatial attention (Luck et al. 1990; Babiloni et al. 2006). The N2 component occurs around 200–300 milliseconds post-stimulus onset and is typically observed in tasks requiring the allocation of spatial attention (Wijers et al. 1997). Previous studies have shown that neglected stimuli in stroke patients with and without neglect are associated with alterations in certain EEG features (Khalaf et al. 2018), including a drop in P1 amplitude and latency (Lasaponara et al. 2019; Ye et al. 2019).
4.1 Limitations
This pilot study has several limitations. Our small sample size (18 observations from 9 subjects) may have limited the power to detect some effects, so results must be interpreted with caution. Nevertheless, this pilot study gives some reassurance about the behavioral and EEG differences we might expect when using the Attention Atlas. Although our LO condition was able to evoke behaviors similar to those present in patients with USN in our young and healthy sample, it is important to note that our manipulation (left hemifield occlusion) is not necessarily what patients experience. Phenomenologically, patients report having no access to the left visual hemifield unless an effort is made, and even then, stimuli feel opaque or surreal (Klinke et al. 2015). Ultimately, the manipulation used in this study more closely resembles left-sided blindness (hemianopia). In addition, our procedure was relatively long, at least when compared to paper-and-pencil application times. However, for a VR + mobile EEG study, we believe this duration is comparable to that of other EEG studies, and it is compensated by the multimodal data acquisition, which could prove especially useful for patients.
5 Conclusion
By integrating our performance, behavioral and EEG results, this study shows that a multimodal mapping of spatial attention using VR is possible, and that its use could be translatable to patients suffering from USN. Our hope is that, by using a system that merges different types of neglect-relevant data, we can improve the diagnosis, assessment and classification of USN. Ultimately, employing these multimodal data in a closed-loop system would allow for real-time adaptation of VR tasks or environments based on the user's neurological responses, including EEG signals. This advancement holds the potential to facilitate the participation of stroke patients with USN in technology-driven rehabilitation, such as virtual rehabilitation, thereby reducing exclusionary barriers.
Data availability
No datasets were generated or analysed during the current study.
References
Babiloni C, Vecchio F, Miriello M et al (2006) Visuo-spatial consciousness and parieto-occipital areas: a high-resolution EEG study. Cereb Cortex 16(1):37–46
Chen P, Ward I, Khan U et al (2016) Spatial neglect hinders success of inpatient rehabilitation in individuals with traumatic brain injury: a retrospective study. Neurorehabilitation Neural Repair 30(5):451–460. https://doi.org/10.1177/1545968315604397
Delorme A (2023) EEG is better left alone. Sci Rep 13(1):2372. https://doi.org/10.1038/s41598-023-27528-0
Delorme A, Makeig S (2004) EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J Neurosci Methods 134(1):9–21
Esposito E, Shekhtman G, Chen P (2021) Prevalence of spatial neglect post-stroke: a systematic review. Ann Phys Rehabil Med 64(5):101459
Eudave L (2023) Procedure Demo. https://youtu.be/vi55oOOg8BU
Eudave L (2024) Task Demo. https://youtu.be/QVKQzmG1vbs
Hougaard BI, Knoche H, Jensen J et al (2021) Spatial neglect midline diagnostics from virtual reality and eye tracking in a free-viewing environment. Front Psychol 12:742445
Kaiser AP, Villadsen KW, Samani A et al (2022) Virtual reality and eye-tracking assessment, and treatment of unilateral spatial neglect: systematic review and future prospects. Front Psychol 13:787382
Khalaf A, Kersey J, Eldeeb S et al (2018) EEG-based neglect assessment: a feasibility study. J Neurosci Methods 303:169–177
Kim J, Kim K, Kim DY et al (2007) Virtual environment training system for rehabilitation of stroke patients with unilateral neglect: crossing the virtual street. CyberPsychol Behav 10(1):7–15. https://doi.org/10.1089/cpb.2006.9998
Klinke ME, Zahavi D, Hjaltason H et al (2015) Getting the left right: the experience of hemispatial neglect after stroke. Qual Health Res 25(12):1623–1636. https://doi.org/10.1177/1049732314566328
Kothe CAE, Jung TP (2016) Artifact removal techniques with signal reconstruction. US Patent App. 14/895,440
LabStreamingLayer (2024) https://github.com/sccn/labstreaminglayer
Lasaponara S, Pinto M, Aiello M et al (2019) The hemispheric distribution of alpha-band EEG activity during orienting of attention in patients with reduced awareness of the left side of space (spatial neglect). J Neurosci 39(22):4332–4343
Luck SJ (1995) Multiple mechanisms of visual-spatial attention: recent evidence from human electrophysiology. Behav Brain Res 71(1–2):113–123
Luck SJ, Heinze H, Mangun G et al (1990) Visual event-related potentials index focused attention within bilateral stimulus arrays. II. Functional dissociation of P1 and N1 components. Electroencephalogr Clin Neurophysiol 75(6):528–542
Mak J, Kocanaogullari D, Huang X et al (2022) Detection of stroke-induced visual neglect and target response prediction using augmented reality and electroencephalography. IEEE Trans Neural Syst Rehabil Eng 30:1840–1850
Martino Cinnera A, Bisirri A, Chioccia I et al (2022) Exploring the potential of immersive virtual reality in the treatment of unilateral spatial neglect due to stroke: a comprehensive systematic review. Brain Sci 12(11):1589. https://doi.org/10.3390/brainsci12111589
Martino Cinnera A, Verna V, Marucci M et al (2024) Immersive virtual reality for treatment of unilateral spatial neglect via eye-tracking biofeedback: RCT protocol and usability testing. Brain Sci 14(3):283. https://doi.org/10.3390/brainsci14030283
Norwood MF, Painter DR, Marsh CH et al (2022) The attention atlas virtual reality platform maps three-dimensional (3D) attention in unilateral spatial neglect patients: a protocol. Brain Impairment pp 1–20. https://doi.org/10.1017/BrImp.2022.15
Numao T, Amimoto K, Shimada T (2021) Examination and treatment of unilateral spatial neglect using virtual reality in three-dimensional space. Neurocase 27(6):447–451. https://doi.org/10.1080/13554794.2021.1999478
Nuthmann A, Clark CNL (2023) Pseudoneglect during object search in naturalistic scenes. Exp Brain Res 241(9):2345–2360. https://doi.org/10.1007/s00221-023-06679-6
Ogourtsova T, Archambault P, Sangani S et al (2018) Ecological virtual reality evaluation of neglect symptoms (EVENS): effects of virtual scene complexity in the assessment of poststroke unilateral spatial neglect. Neurorehabil Neural Repair 32(1):46–61. https://doi.org/10.1177/1545968317751677
Ogourtsova T, Archambault PS, Lamontagne A (2019) Exploring barriers and facilitators to the clinical use of virtual reality for post-stroke unilateral spatial neglect assessment. Disabil Rehabil 41(3):284–292. https://doi.org/10.1080/09638288.2017.1387292
Painter DR, Norwood MF, Marsh CH et al (2023) Immersive virtual reality gameplay detects visuospatial atypicality, including unilateral spatial neglect, following brain injury: a pilot study. J Neuroeng Rehabil 20(1):161. https://doi.org/10.1186/s12984-023-01283-9
Painter DR, Norwood MF, Marsh CH et al (2024) Virtual reality gameplay classification illustrates the multidimensionality of visuospatial neglect. Brain Commun 6(4):fcae145. https://doi.org/10.1093/braincomms/fcae145
Parton A, Malhotra P, Husain M (2004) Hemispatial neglect. J Neurol Neurosurg Psychiatry 75(1):13–21
Perez-Marcos D, Ronchi R, Giroux A et al (2023) An immersive virtual reality system for ecological assessment of peripersonal and extrapersonal unilateral spatial neglect. J Neuroeng Rehabil 20(1):33. https://doi.org/10.1186/s12984-023-01156-1
Pion-Tonachini L, Kreutz-Delgado K, Makeig S (2019) ICLabel: an automated electroencephalographic independent component classifier, dataset, and website. NeuroImage 198:181–197
Pirondini E, Goldshuv-Ezra N, Zinger N et al (2020) Resting-state EEG topographies: reliable and sensitive signatures of unilateral spatial neglect. NeuroImage Clin 26:102237. https://doi.org/10.1016/j.nicl.2020.102237
Plechawska-Wojcik M, Kaczorowska M, Zapala D (2018) The artifact subspace reconstruction (ASR) for EEG signal correction: a comparative study. In: International conference on information systems architecture and technology, Springer, pp 125–135
Ros T, Michela A, Bellman A et al (2017) Increased alpha-rhythm dynamic range promotes recovery from visuospatial neglect: a neurofeedback study. Neural Plast 2017:1–9. https://doi.org/10.1155/2017/7407241
Ros T, Michela A, Mayer A et al (2022) Disruption of large-scale electrophysiological networks in stroke patients with visuospatial neglect. Network Neurosci 6(1):69–89
Saj A, Pierce JE, Ronchi R et al (2021) Real-time fMRI and EEG neurofeedback: a perspective on applications for the rehabilitation of spatial neglect. Ann Phys Rehabil Med 64(5):101561
Salatino A, Zavattaro C, Gammeri R et al (2023) Virtual reality rehabilitation for unilateral spatial neglect: a systematic review of immersive, semi-immersive and non-immersive techniques. Neurosci Biobehav Rev 105248
Makeig S, Bell AJ, Jung TP, Sejnowski TJ (1996) Independent component analysis of electroencephalographic data. In: Advances in neural information processing systems 8: proceedings of the 1995 conference, MIT Press, p 145
Terruzzi S, Albini F, Massetti G et al (2023) The neuropsychological assessment of unilateral spatial neglect through computerized and virtual reality tools: a scoping review. Neuropsychol Rev. https://doi.org/10.1007/s11065-023-09586-3
Uimonen J, Villarreal S, Laari S et al (2024) Virtual reality tasks with eye tracking for mild spatial neglect assessment: a pilot study with acute stroke patients. Front Psychol. https://doi.org/10.3389/fpsyg.2024.1319944
Wijers AA, Lange JJ, Mulder G et al (1997) An ERP study of visual spatial attention and letter target detection for isoluminant and nonisoluminant stimuli. Psychophysiology 34(5):553–565
Ye LL, Cao L, Xie HX et al (2019) Visual-spatial neglect after right-hemisphere stroke: behavioral and electrophysiological evidence. Chin Med J 132(9):1063–1070
Zhang Y, Ye L, Cao L et al (2022) Resting-state electroencephalography changes in poststroke patients with visuospatial neglect. Front Neurosci 16:974712. https://doi.org/10.3389/fnins.2022.974712
Acknowledgements
This work is supported by the Fundação para a Ciência e Tecnologia (FCT) through the LARSyS—FCT Project (DOI: 10.54499/LA/P/0083/2020, 10.54499/UIDP/50009/2020, and 10.54499/UIDB/50009/2020), and the NOISYS project (DOI: 10.54499/2022.02283.PTDC).
Funding
Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
Author information
Authors and Affiliations
Contributions
CRediT Statement Conceptualization: L.E. and A.V.; Funding Acquisition: A.V.; Investigation: L.E.; Methodology: L.E. and A.V.; Software: L.E. and A.V.; Formal Analysis: L.E. and A.V.; Resources: L.E. and A.V.; Visualisation: L.E. and A.V.; Writing—original draft: L.E.; Writing—review & editing: L.E. and A.V.
Corresponding author
Ethics declarations
Conflict of interest
The authors declare no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Below is the link to the electronic supplementary material.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Eudave, L., Vourvopoulos, A. Multimodal mapping of spatial attention for unilateral spatial neglect in VR: a proof of concept study using eye-tracking and mobile EEG. Virtual Reality 29, 24 (2025). https://doi.org/10.1007/s10055-025-01103-6