1 Introduction

People who are blind or have low vision in one eye lose stereoscopic binocular vision and also have reduced peripheral vision. The loss of stereoscopic vision can result in a lack of information about depth and the relative positioning of objects. People who lose vision in one eye can recover some of this missing information through other visual cues such as shadows, lighting, and size comparisons. People with stereoscopic vision also have better skills for relating their body to the environment; tasks such as reaching for and grabbing items, motor skills, and balance are more challenging when stereoscopic vision is not an option [10]. People without vision in one eye typically have reduced depth perception, making it difficult to acquire items within three feet of the body [8].

To make up for the lack of peripheral vision, people without sight in one eye make more lateral head movements [5]. This adaptive head motion allows them to regain some peripheral vision, although it is more time consuming than glancing. The motion is considered adaptive because it is more prevalent in individuals who are blind in one eye than in individuals who have vision in both eyes but cover one eye for the sake of an experiment [2]. These head movements are used especially when looking at objects within an arm's length of the subject [5].

People who are blind in one eye are typically able to fulfill the requirements for a driver's license in the United States. However, several states, including Alaska, Pennsylvania, and South Carolina, require people who have low vision in one eye to drive only vehicles equipped with side mirrors [7]. In addition to specifying extra equipment, driver's license regulations also contain minimum requirements for peripheral vision. The standards vary from state to state: some require 140° of vision, while others require 30° on either side of the point of focus. A pair of human eyes can see approximately 200° laterally, composed of 120° seen by both eyes and 40° on each side visible to only one eye [3].

Drivers who are blind in one eye are statistically more likely to be in car accidents [4]. They must take time to move their head to scan the environment, whereas a person with two functioning eyes can quickly scan the entire area by glancing. A Formula One driver [11] with vision in only one eye was granted racing privileges for an entire season; on a subsequent optical exam, his vision was determined to be unfit for competitive racing, and his professional license was revoked. His one season of racing contained no crashes, although in one incident he was unaware, for seven consecutive laps, of a notification flag being waved on the side of his functioning eye. It was felt that, at such high speeds, even without any crash incidents, the chance of a crash initiated by having vision in only one eye was high enough that the racing league did not want to put the other drivers in what it considered a dangerous situation.

The concept presented in this paper is an attempt to assist those who are blind in one eye. A virtual reality headset contains a screen that can display any necessary information, and this dedicated screen is the only thing the user can see. This differs from augmented reality, where a user sees the world directly with his eyes and a digital overlay is added. Using a virtual reality headset to show the video streams of two cameras, mimicking two human eyes, on the built-in screen may provide advantages for people who are blind in one eye.

2 Hardware Options

An Oculus Rift DK2 [6] was used to produce the virtual reality environment (Fig. 1). The DK2 contains a display that supports a resolution of 960 × 1080 pixels per eye. It also includes a positional tracking camera that follows the movement of the user so that an application can update its display accordingly. For this experiment, the positional tracking device was not needed: the live video feed from each camera was presented continuously to one eye of the user, and nothing in the display changed based on the position of the user's head beyond the video captured by the cameras.

Fig. 1. User wearing peripheral vision enhancement

The webcams used in this setup are the Genius WideCam F100 [1]. Each camera has a 120° field of view and can capture video at 1920 × 1080 pixels per frame at 30 frames per second. The cameras were attached to the top of the Oculus Rift with Velcro strips, which allowed users to position the cameras where they felt most natural. The cameras also contained a swivel mechanism that allowed the user to rotate them horizontally.

3 Methods

To test the feasibility of using virtual reality as an enhancement for peripheral vision, the hardware setup was evaluated for usability with several different display methods.

The first test showed live video from one camera to one eye, simulating having vision in only one eye. Although two cameras were mounted on the headset, only the camera on the side of the chosen eye was used for this test. For example, if the participant requested that the experiment be done using his left eye, then the video feed from the left camera was presented to the left eye for all trials. The 1920 × 1080 frame of the camera was compressed into the 960 × 1080 resolution of one side of the DK2: the image was scaled to fit the entire width while the height was left constant.
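The paper does not include source code; the following is a minimal Unity (C#) sketch of how such a single-camera mode could be wired up using Unity's WebCamTexture API. The component and field names, and the use of a UI RawImage as the per-eye viewport, are illustrative assumptions, not the authors' actual implementation.

using UnityEngine;
using UnityEngine.UI;

// Minimal sketch of the single-camera mode: one webcam feed stretched to
// fill one eye's 960 x 1080 viewport. Names are illustrative assumptions.
public class SingleEyeFeed : MonoBehaviour
{
    public RawImage eyeViewport;  // RawImage sized to one eye's half of the display
    public int cameraIndex = 0;   // which attached webcam to use (hypothetical default)

    private WebCamTexture feed;

    void Start()
    {
        // Request the camera's native mode: 1920 x 1080 at 30 fps (Sect. 2).
        string device = WebCamTexture.devices[cameraIndex].name;
        feed = new WebCamTexture(device, 1920, 1080, 30);
        eyeViewport.texture = feed;
        feed.Play();
        // Because the RawImage spans only 960 pixels of width, the
        // 1920-pixel frame is compressed 2:1 horizontally while the
        // 1080-pixel height maps one-to-one.
    }

    void OnDisable()
    {
        if (feed != null) feed.Stop();
    }
}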

The second implementation (Split Screen, Fig. 2) presented both cameras to the requested eyepiece. The images were placed side by side, intended to compress the full field of view into one eye. The left camera was shown on the left side of the eyepiece, and the right camera on the right side. The native widths of the images were cut in half, while the height remained constant. This slightly exaggerated the height of the presentation, but it was decided to fill the entire viewport with video from the cameras rather than leave black bars to keep the proportions correct. Another option would have been to zoom in on a section of each camera feed so that the aspect ratios remained natural while the video filled the screen. The result was that each camera occupied a 480 × 1080 region on one half of the DK2 display (the compression factors are worked out after Fig. 2).

Fig. 2. Sample view of two cameras combined into one eye using the Split Screen method
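The horizontal compression factors implied by the first two presentation modes, worked out from the resolutions above (the vertical scale is unchanged in every mode):

\[
\text{single camera: } \frac{1920}{960} = 2\times,
\qquad
\text{Split Screen: } \frac{1920}{480} = 4\times,
\qquad
\text{vertical (both): } \frac{1080}{1080} = 1\times .
\]

The 4× horizontal squeeze in Split Screen is the distortion revisited in Sect. 6.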

The third implementation (Slow Switch) presented the full view of both cameras in an alternating fashion. To avoid the scaling, zooming, and compression issues of the second implementation, this method alternated which camera was shown to the requested eye, switching once every second. This allowed the participant to see the full view of both cameras over time.

The fourth implementation (Fast Switch) was identical to the third except for the rate at which the cameras were switched: instead of alternating once per second, the images were switched five times per second. The hope was that the brain would automatically stitch together the rapidly alternating images to give the appearance of the field of view of two eyes.
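A minimal sketch of how the two switching modes could be implemented in Unity (the methods were built in Unity3D, as noted below). The coroutine structure, field names, and device ordering are assumptions; only the switch intervals (1.0 s and 0.2 s) come from the text.

using System.Collections;
using UnityEngine;
using UnityEngine.UI;

// Sketch of the Slow/Fast Switch modes: both webcams run continuously
// and the chosen eye's viewport alternates between them.
public class AlternatingFeed : MonoBehaviour
{
    public RawImage eyeViewport;         // RawImage covering the chosen eye
    public float switchInterval = 1.0f;  // 1.0 = Slow Switch, 0.2 = Fast Switch

    private WebCamTexture leftFeed, rightFeed;

    IEnumerator Start()
    {
        // Assumes device 0 is the left camera and device 1 the right.
        leftFeed  = new WebCamTexture(WebCamTexture.devices[0].name, 1920, 1080, 30);
        rightFeed = new WebCamTexture(WebCamTexture.devices[1].name, 1920, 1080, 30);
        leftFeed.Play();
        rightFeed.Play();

        bool showLeft = true;
        while (true)
        {
            // Swap which camera's frames the user currently sees.
            eyeViewport.texture = showLeft ? leftFeed : rightFeed;
            showLeft = !showLeft;
            yield return new WaitForSeconds(switchInterval);
        }
    }

    void OnDisable()
    {
        if (leftFeed != null) leftFeed.Stop();
        if (rightFeed != null) rightFeed.Stop();
    }
}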

All of these methods were implemented in Unity3D, which allows for easy portability to different operating systems. All experiments for this research were performed on a MacBook Pro running OS X Yosemite with an Oculus Rift DK2 and two USB Genius WideCam F100 webcams.

4 Experiment

Eleven users performed the experiment in isolation with only an administrator present. The users were informed that the experiment was being performed to potentially help people who have no or low vision in one eye. Three of the participants were blind in one eye; the others reported no severe visual impairments. Users who wear corrective lenses were asked to keep their prescribed glasses on during the experiment.

Users were first asked to choose an eye to use for the experiment. All methods were projected only to this eye, while the video content for the other eye remained black. Users went through the methods one at a time, always in the order described in the previous section. For each method, users were asked to stand 10 feet from a wall that held a standard eye chart [9] and to read each line from the chart. Users were shown the focus knobs on the lenses and were allowed to adjust the focus as needed. If a user could not read the first line on the chart, he slowly moved forward until it could be read, and his distance from the wall was recorded. The user was then asked to move closer and closer to the wall until the last line could be read. After the last line was read aloud and verified to be accurate, the final distance was recorded.

After the first method, and before the start of the second, users were given the opportunity to adjust the orientation of the cameras. In the second method, users were presented with a scaled image containing both the left and right cameras, and they could modify the rotation of the cameras until the combined image was pleasing. They then continued through the remaining methods with the cameras in these positions.

Following the conclusion of the user study, users were asked to rank their preference among the methods. They were also asked to rate how useful they considered each method.

5 Results and Discussions

In total, 12 people participated in the user study, none of whom had used an Oculus Rift prior to this experiment. One participant was extremely uncomfortable with the Rift and chose not to participate after first placing the headset on his head. The results discussed here therefore cover 11 users (all male, average age 24.1, SD = 12.0), three of whom self-reported being severely visually impaired in one eye.

At the conclusion of the user study, participants were surveyed (Table 1) on their preferences. Participants preferred Split Screen over the other choices and did not prefer Fast Switch at all; many were observed complaining about the fast switching mode. A large factor in the effectiveness of Fast Switch was the difference in horizontal angles between the two cameras: the greater the angular difference, the larger the jump between alternating frames. Frames alternating five times per second were very hard to tolerate when the cameras pointed in different directions, and participants whose cameras pointed at the same angle seemed to complain less about the mode.

Table 1. User survey of techniques (1 = bad, 5 = great)

Table 2 shows the average distance from the eye chart at which participants could correctly read the bottom line of the chart. For a person to have 20/20 vision, the bottom line should be readable from a distance of 20 feet. These results show that participants were nowhere near 20/20 vision, even when wearing their prescribed corrective lenses (a rough acuity conversion is sketched after Table 2). The average distance for Slow Switch was better than for Fast Switch, and both switching methods were better than Split Screen. The switching methods gave each camera twice the horizontal resolution of the Split Screen method, which can explain why those methods allowed reading from a greater distance. Slow Switch was better than Fast Switch, which can be attributed to the disorienting feeling generated by the rapid alternation of the cameras in Fast Switch mode.

Table 2. Distance from eye chart where bottom line was visible
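As a rough way to read these distances, the standard Snellen conversion (applied here purely for illustration) divides the viewing distance by the distance at which a standard eye reads the same line. Reading the bottom line at one foot therefore corresponds to an equivalent acuity through the headset of about 20/400:

\[
\text{Snellen fraction} \;=\; \frac{\text{viewing distance}}{\text{distance at which a standard eye reads the line}},
\qquad
\frac{1\ \text{ft}}{20\ \text{ft}} \;=\; \frac{20}{400}.
\]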

The results of the distance test may appear discouraging. Requiring the user to be within 1 foot of an object to see it as well as a person without any corrective lenses sees at 20 feet is a step backward. However, the limitation here may be due to the resolution of the display contained within the headset; as display hardware matures, this requirement may disappear. The takeaway is that the switching method provides better results, but when the cameras are looking at drastically different scenes it can be disorienting to the user, especially at a fast switching rate. To increase the peripheral vision of a person who is blind in one eye using the switching method, the cameras must be pointed such that the entire range of vision is restored, and the rate of switching must be fast enough that the delay does not cause a loss of information, yet slow enough to avoid the disorienting side effects.

After several runs with the participants, it appeared that allowing them to adjust the angle at which the webcams were facing was more of a limiting factor than a helpful one. One participant angled the webcams outward, opposite each other; while this extended the range of vision, it also caused much difficulty with both switching methods, as the two feeds showed nearly completely different images. Participants also had trouble adjusting the cameras, as they could not see the physical location of the webcams and their output at the same time. It may be more effective to have the cameras permanently fixed in a position that directly emulates a pair of human eyes.

During the experiment it was discovered that the effective peripheral vision enhancement depended entirely on the angles to which the cameras were adjusted (Fig. 3). The increase in peripheral vision was initially scheduled to be investigated; however, participants wishing to increase their peripheral vision would twist the cameras until each was rotated 90°, essentially giving the user eyes on the sides of his head. Although this greatly improved peripheral vision, it also greatly reduced usability. The tasks defined in this user study were up-close tasks, and Fig. 3 shows that when the cameras are rotated outwards, there is a larger empty zone directly in front of the user where neither camera captures any video. For tasks involving far-away items this may be acceptable, but because the user needed to be so close to the eye chart in this experiment, that empty zone became important. A rough geometric sketch of this effect follows the figure.

Fig. 3. Peripheral difference based on camera position
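As a rough idealization of the empty zone in Fig. 3, treat the cameras as two pinholes separated by a baseline b (an assumed parameter; the paper does not report the camera spacing), each with the 120° field of view from Sect. 2 and rotated outward by an angle θ. The inner edges of the two fields first cross on the midline at depth

\[
d_{\text{blind}} \;=\; \frac{b/2}{\tan(60^\circ - \theta)}, \qquad 0^\circ \le \theta < 60^\circ .
\]

With the cameras pointing straight ahead (θ = 0°) this is only a few centimetres for any plausible baseline, but it grows without bound as θ approaches 60°, and for θ ≥ 60° (including the 90° "eyes on the side of the head" setting) the region straight ahead is never covered at any depth.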

Instead of performing a standard peripheral vision test, participants were asked to adjust the cameras to produce a comfortable view. These adjustments were made while the display was set to Split Screen. After the participant was satisfied with the location and orientation of the cameras, a picture was taken and later analyzed to determine the angles at which the cameras were set. For the purposes of these results, a positive rotation value indicates the camera was rotated clockwise, a negative value indicates it was rotated counterclockwise, and a value of zero indicates the camera was not rotated and pointed straight ahead. If a participant desired more peripheral vision, the left camera would have a negative rotation value and the right camera a positive one.

The average rotation of the right camera was 8.18° (SD = 16.0) and of the left camera –6.2° (SD = 13.2). Three participants left both cameras pointing straight ahead. Two did not move the right camera but moved the left camera counterclockwise. Four moved the left camera counterclockwise and the right camera clockwise. One participant rotated the right camera counterclockwise and the left camera clockwise, and one rotated both cameras by the same amount (10°) counterclockwise.
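Under the paper's sign convention, and assuming the two 120° fields still overlap, the combined horizontal coverage is approximately the single-camera field plus the outward rotations. With the average rotations above:

\[
\text{coverage} \;\approx\; 120^\circ + \theta_R - \theta_L \;=\; 120^\circ + 8.18^\circ - (-6.2^\circ) \;\approx\; 134.4^\circ .
\]

That is, the average configuration added roughly 14° of lateral coverage over a single fixed camera; this is an idealization that ignores whether the display can usefully present that full field.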

6 Limitations

The experimental setup described in this paper has several limitations that are important to consider. First, although the setup does increase the peripheral vision of a user, it may not increase functional peripheral vision. A user who is sighted in one or both eyes can move his eyes horizontally to increase the field of view without moving his head. When the vision is projected into the virtual reality headset, the user does not have this option: moving only the eyes does not change what is visible, as the images presented to the user depend only on the mounted position of the cameras and the position of the user's head.

The limited resolution of the Oculus display is also an issue. Reading small print from a distance proved difficult for all users. While the cameras were able to pick up the details necessary to render small text, the users were never able to actually see it. The viewing mode allowed the user's viewport to be mirrored to the computer running the software, and it was observed that small text was clearly visible on the computer screen while the participant was unable to read it. It is expected, however, that this type of technology will improve significantly, and this limitation may not be an issue in the near future.
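A back-of-the-envelope estimate illustrates the bottleneck (assumptions: the single-camera mode maps the camera's full 120° field uniformly onto the 960 display pixels of one eye; a 20/20 Snellen letter subtends 5 arcminutes at 20 feet):

\[
\frac{960\ \text{px}}{120^\circ} = 8\ \text{px}/{}^\circ,
\qquad
5' = 0.083^\circ \;\Rightarrow\; 0.083^\circ \times 8\ \text{px}/{}^\circ \approx 0.7\ \text{px at 20 ft},
\qquad
\approx 13\ \text{px at 1 ft}.
\]

At 20 feet, a bottom-line letter would cover less than one display pixel and so cannot be rendered at all; approaching to about one foot magnifies it to roughly 13 pixels, broadly consistent with the roughly one-foot distances discussed above. In Split Screen mode, each camera gets only 480 pixels (4 px/°), halving these figures again.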

Another issue participants ran into was the amount of cabling required to use this technology. The Oculus Rift by itself requires two USB ports, an HDMI port, and a standard wall power plug. With the addition of the two USB cameras, the total requirement jumped to four USB ports. The laptop used to run the software contained only two USB ports, which created the additional requirement of a USB hub and its power supply. As a result, the hardware setup for this experiment required four USB ports, one HDMI port, one USB hub, and two AC power outlets. This is not a portable setup: motion was very difficult, and if the participant moved, an administrator had to be present to carry the wires and computer to wherever the user was going. Due to this limitation, certain methods had to be omitted, such as gauging the comfort level of walking short distances in different directions.

There was also an issue concerning the live feed from the webcams. There was a slight, yet noticeable, delay between a movement and its appearance in the webcam feed, which was slightly disorienting. Because the methods used did not require much movement, the issue had little effect on the experiment. If the Oculus Rift were more portable and methods focusing on movement were used, this issue would have to be taken into consideration.

A final issue concerns how the images were scaled to fit the display. The native resolution of the cameras was 1920 × 1080; in the case of the side-by-side images, each was scaled to 480 × 1080. This made the horizontal axis one fourth of its original size while the height remained constant, which could be part of the reason reading small text was difficult, as the proportions of the letters were distorted.

7 Future Work

The results presented here show some promise. Users were disoriented when the video display switched between cameras too quickly, but each test lasted only a few minutes; the human visual system may be able to adapt to visuals presented in this way if given time. Once the technology evolves to the point where basic tasks such as reading become possible, a longer-term study should be completed to test the adaptability of this type of display.

8 Conclusion

This paper presented a hardware and software implementation using a virtual reality headset to increase the peripheral vision of people with limited or no vision in one eye. A user study with 11 adults (three with a visual impairment in one eye) compared three methods of presenting both camera feeds to one eye and found that the Split Screen method, where the images from the two cameras are scaled and placed side by side, was preferred.