1 Introduction

Marine situation displays, or digital sea charts, are an essential tool and the basis for maritime safety and security applications. They are the core element for providing human operators with situational awareness during surveillance tasks and for supporting them in anticipating dangerous situations. By showing collected data and fused information, they give the operator a picture of the current situation that is as clear as possible and support the taking of appropriate actions. Their use is common both in the military sector and in civil applications such as coast guard scenarios or the surveillance of critical offshore infrastructure.

State-of-the-art digital sea charts used in on- and offshore applications are usually two-dimensional with a planar top view, showing the sea floor as shaded areas that indicate sea depth. This planar visualization is currently predominant, but it fails to convey the spatial relationships among the items and objects shown in the digital map, since the spatial data has to be reduced by one dimension to match the prevailing two-dimensional data model. Objects in the real world differ in altitude, especially when surface and air targets are visualized concurrently, e.g. in a rescue situation where sea and air vehicles are deployed simultaneously. Modern sensor technology, such as processed sonar or radar data, yields three-dimensional data sets that have to be projected for display in a 2D or 2.5D environment.

Three-dimensional display systems, such as stereoscopic displays or fully immersive virtual reality systems like the “Oculus Rift” or the “Samsung Gear VR”, provide the user with virtual depth perception and allow data to be represented while maintaining its spatial dimension. Recent developments in spatial input technology, such as the “Microsoft Kinect 2” or the “Leap Motion Controller”, make 3D interaction technology available to the consumer market.

We developed a prototype that implements a maritime situation display on a stereoscopic display, showing a digital sea chart with live vessel tracking data and providing a freehand pointing technique using the “Leap Motion”.

2 State of the Art

Zlatanova et al. [18] see the need for 3D information in geographical information systems (GIS) rapidly increasing and summarize practical applications of 3D GIS in areas relevant for marine and submarine data representation: environmental monitoring, public rescue operations, geological and mining operations, transportation monitoring, hydrographical activities and military applications. The functionality demanded of a 3D GIS remains the same as for a 2D GIS and should comprise data capture, data structuring, data manipulation, data analysis and data representation. Marine or maritime GIS are hence a strong tool where huge data sets are merged and operators are provided with the requisite information. Burkle and Essendorfer [3] describe the concept of a system in which data from short-, medium- and long-range surveillance systems are merged in a “universal ground station”, where operators observe fused data from all kinds of aerial and underwater sensors, cameras, and land and underwater vehicles, giving the operator as broad a view as possible of a maritime area of interest. Prototypes of stereoscopic GIS visualizations were developed and evaluated by Wartell et al. [16] and implemented as a head-tracked stereoscopic environment, but did not cover the aspect of interacting with the stereoscopic environment. Wittman et al. analyzed a stereoscopic visualization of an air-traffic controller workstation in comparison to a classical 2D representation of the data with a group of experts and non-experts. Their research indicates advantages of stereoscopic visualizations of spatial data in surveillance scenarios and gives a clear recommendation for non-trained personnel to use a stereoscopic visualization system, as they benefited from it when identifying conflicts in air traffic compared to the 2D representation [17].

The application of a stereoscopic visualization environment requires rethinking the classical interaction paradigm of mouse and keyboard and considering a spatial interaction method that provides a more natural and intuitive way to interact with a virtual spatial environment. Complex interaction tasks in a 3D stereoscopic environment require at least three translational and three rotational degrees of freedom; a stereoscopic visualization therefore disqualifies the classical mouse-and-keyboard paradigm. Prototypes of three-dimensional digital sea chart applications have been described in previous work with positive results. Most prototypes work on a small set of sea chart data and use devices supporting two-dimensional input. Gold et al. [7] concluded that the three-dimensional visualization of digital sea chart data has the potential to reduce navigational risks and developed “The Marine GIS”, a prototypical 3D visualization system for a digital sea chart with two-dimensional input via mouse and keyboard on a 2D display.

The use of 3D displays to visualize three-dimensional information suggests extending the input to three dimensions as well, as proposed by Bowman [2]. Three-dimensional visualizations, however, pose challenges for the chosen interaction paradigm. Ren and O’Neill [13] evaluated marking menus optimized for 3D environments in a freehand selection scenario and point out that menu selection has to be designed carefully, since freehand pointing in virtual environments excludes the physical touch of an interface or the clicking of a button. Research conducted by Callahan et al. found that pie menu structures reduce selection time and achieve a lower selection error rate than linear menus [4]. Ni et al. [12] describe a pie menu controlled by freehand input in a 2D large-screen setting using their so-called “rotate and pinch” method: rotating the user’s wrist highlights one of the pie menu buttons, and a pinch-click of one of the user’s fingers against the thumb then activates the highlighted button. A similar setting is given in a stereoscopic desktop environment, where no physical feedback is available when interacting with virtual entities. Hence, the combination of freehand input with a pie menu structure in a stereoscopic desktop environment appears promising.
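To illustrate the rotate-and-pinch principle, the following minimal Python sketch maps a wrist roll angle onto one of the pie menu sectors and confirms the highlighted sector on a pinch. The `menu` interface, the angle source and the function names are our own illustrative assumptions, not the implementation of Ni et al.

```python
import math

def highlighted_sector(wrist_roll_rad: float, n_sectors: int) -> int:
    """Map a wrist roll angle onto one of n equally sized pie menu sectors."""
    angle = wrist_roll_rad % (2.0 * math.pi)      # normalize into [0, 2*pi)
    sector_width = 2.0 * math.pi / n_sectors
    return int(angle // sector_width)

def update_menu(menu, wrist_roll_rad: float, is_pinching: bool) -> None:
    """Highlight the sector under the current wrist rotation and activate
    it when the user pinch-clicks a finger against the thumb."""
    sector = highlighted_sector(wrist_roll_rad, len(menu.buttons))
    menu.highlight(sector)       # continuous visual feedback while rotating
    if is_pinching:
        menu.activate(sector)    # pinch-click confirms the selection
```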

3 Prototype Development

We developed a workflow to convert two-dimensional high-resolution digital sea chart material into three-dimensional elevation data, which we use as the basis to visualize a geo-referenced three-dimensional model of the seabed and render it in a stereoscopic desktop environment. The converted seabed data guarantees a high recognition value for marine operators, since we plan to evaluate our system with end users from the domain of maritime security. The current state of our prototype collects live marine traffic data from a web service, which is displayed geo-referenced in the virtual environment. Map objects collected from the web service are shown as two-dimensional track symbols based on the military standard 2525 (cf. Fig. 1) [14]. We plan to replace the two-dimensional symbology with 3D symbols in a subsequent study. The menu interaction task is based on a scenario that has previously been assessed in the scope of this research, where an operator’s task is to classify vessels that show malicious behavior [10]. The operator achieves the task by navigating through the menu structure and choosing appropriate indications.
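The projection used for geo-referencing is not specified here; as a minimal sketch, the following Python snippet shows one plausible way to place vessel positions from a live traffic feed onto a local 3D chart via the spherical Web Mercator projection. The function names, the scene scale and the choice of projection are assumptions for illustration.

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS-84 semi-major axis, as used by Web Mercator

def vessel_to_scene(lat_deg: float, lon_deg: float,
                    origin_lat: float, origin_lon: float,
                    metres_per_unit: float = 100.0):
    """Project a WGS-84 vessel position into local scene coordinates.

    Both the vessel position and a fixed chart origin are projected with
    the spherical Web Mercator formula, then expressed relative to the
    origin and scaled to scene units. Altitude (z) would come from the
    elevation model of the seabed, so only x and y are returned here.
    """
    def project(lat, lon):
        x = EARTH_RADIUS_M * math.radians(lon)
        y = EARTH_RADIUS_M * math.log(math.tan(math.pi / 4 + math.radians(lat) / 2))
        return x, y

    x, y = project(lat_deg, lon_deg)
    ox, oy = project(origin_lat, origin_lon)
    return (x - ox) / metres_per_unit, (y - oy) / metres_per_unit
```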

Fig. 1. Digital sea chart in stereoscopic environment showing positions of vessel traffic

We use the Leap Motion as an input device to track the user’s hand and provide a virtual model of it, visualized minimalistically in the virtual environment. A previous study conducted in the scope of this research on the visualization of the virtual hand model recommends modeling the virtual hand as a point cloud with interconnected lines to keep the visualization minimalistic [11]. This type of visualization minimizes the occlusion of data or interactive elements while interacting with the virtual environment.
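A minimal sketch of this visualization, assuming a hypothetical tracker interface in which each `finger` exposes its `joint_positions` in proximal-to-distal order, could reduce the tracked hand to points and connecting line segments as follows:

```python
def hand_to_lines(hand):
    """Reduce a tracked hand to joint points plus interconnecting lines.

    Only the joint positions are rendered as a point cloud, and
    successive joints of each finger are connected with thin lines, so
    the virtual hand occludes as little of the chart and menu as
    possible. `hand.fingers` and `finger.joint_positions` are stand-ins
    for the actual tracking API.
    """
    points, lines = [], []
    for finger in hand.fingers:
        joints = finger.joint_positions
        points.extend(joints)
        # One line segment between each pair of neighbouring joints.
        lines.extend(zip(joints, joints[1:]))
    return points, lines
```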

3.1 Menu Interaction Development

Using the virtual hand model as a pointing device in the virtual environment, we developed an interaction paradigm that combines direct pointing at virtual entities with a subsequent menu interaction to manipulate the classification properties shown on the three-dimensional digital sea chart.
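As a minimal sketch of the direct pointing step, the following snippet attaches the menu to the interactive entity closest to the virtual fingertip; the proximity threshold, the distance metric and the entity interface are illustrative assumptions.

```python
def pick_entity(fingertip, entities, max_distance: float = 0.05):
    """Return the interactive entity closest to the fingertip, if any.

    The pie menu would be attached to the entity whose position lies
    within `max_distance` (scene units) of the virtual index fingertip.
    """
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

    in_range = [(dist(fingertip, e.position), e)
                for e in entities
                if dist(fingertip, e.position) <= max_distance]
    return min(in_range, key=lambda pair: pair[0])[1] if in_range else None
```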

Fig. 2. Virtual translucent menu shown in egocentric orientation

Our virtual menu is based on different freehand gestural input methods which all follow the principle of the so-called pie menu structure. We defined different activation methods for the pie menu buttons, partly derived from observations in studies previously conducted in the scope of this research and combined with evaluated concepts from the literature. The interaction process consists of two steps. First, the operator chooses the interactive entity in the virtual environment by a directed pointing movement towards it, which activates the virtual pie menu by proximity to the object, either in egocentric mode, where the menu orientation is adapted to the moving direction of the pointing finger (cf. Fig. 7), or in allocentric mode, where the menu is aligned perpendicular to the map basis (cf. Fig. 6). In the second step, the user chooses actions from the activated pie menu structure. We will test three different activation methods for a pie menu button (a short code sketch of all three detection rules follows the list):

  1.

    The fingertip of the operator must reach a minimum velocity to activate a menu button when conducting a push gesture on the button (cf. Fig. 3). The user must conduct the pointing movement rapidly to trigger the menu button while the virtual finger moves towards the virtual depth. Rigid postures are avoided by the rapid movement. In a previous study we observed participants being challenged by directed pointing movements towards the display volume [11].

  2.

    Two-layer activation: the menu button is activated when two layers of the virtual button are penetrated in a certain order (cf. Fig. 4). This action must be conducted slowly and evokes controlled movements by the user. We expect a comparatively lower error rate for this activation method but increased muscular tension.

  3.

    The finger has to be pulled beyond the radius of the pie menu structure with a minimum velocity to activate the menu button (cf. Fig. 5). We expect this activation method to let the user handle the selection task on the pie menu quickly, since there is no need to correct the virtual finger in depth. We expect even better performance times between activation and selection with the egocentric orientation of the menu, since the trajectory correction is expected to be smaller than for the allocentric orientation (cf. Figs. 7 and 6). The illustrations also show the difference in the correction angle between the allocentric and the egocentric menu. Hence, we expect less muscular activity for the egocentric orientation, resulting from the larger angle between the approach and swipe-out trajectories (i.e., a smaller directional correction), which we will measure during our experimental task.
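The three activation rules can be summarized as simple predicates over fingertip samples. The following Python sketch makes the distinction explicit; the sample structure and all threshold values are illustrative assumptions, not the tuned parameters of our prototype.

```python
from dataclasses import dataclass

@dataclass
class FingerSample:
    velocity: float   # fingertip speed (scene units per second)
    depth: float      # signed penetration depth into the button plane
    radius: float     # fingertip distance from the pie menu centre

# All thresholds are illustrative placeholders.
PUSH_VELOCITY = 0.4             # minimum speed for the fast push (method 1)
LAYER_1, LAYER_2 = 0.01, 0.03   # depths of the two virtual layers (method 2)
MENU_RADIUS = 0.08              # outer rim of the pie menu (method 3)
SWIPE_VELOCITY = 0.4            # minimum speed of the radial swipe-out

def push_activated(s: FingerSample) -> bool:
    """Method 1: a fast push through the button plane activates the button."""
    return s.depth > 0.0 and s.velocity >= PUSH_VELOCITY

def two_layer_activated(prev: FingerSample, s: FingerSample) -> bool:
    """Method 2: both layers must be penetrated in order (first layer 1,
    then layer 2), which enforces a slow, controlled movement."""
    return LAYER_1 <= prev.depth < LAYER_2 and s.depth >= LAYER_2

def swipe_out_activated(s: FingerSample) -> bool:
    """Method 3: the finger is pulled radially beyond the menu rim with
    a minimum velocity, so no depth correction is required."""
    return s.radius > MENU_RADIUS and s.velocity >= SWIPE_VELOCITY
```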

Fig. 3. Virtual finger tip trajectory for fast penetration of button with velocity activation of menu button

Fig. 4. Virtual finger tip trajectory for penetrating two layers of the virtual button slowly for menu activation

Fig. 5. Virtual finger tip trajectory for radial high velocity swipe-out movement for menu activation

Fig. 6. Illustration of virtual hand movement for menu activation in allocentric orientation

Fig. 7. Illustration of virtual hand movement for menu activation in egocentric orientation

4 Experimental Design

We are currently conducting an experiment with untrained users using a factorial design in which the three described menu activation methods and the menu orientation (egocentric vs. allocentric) are the independent variables, resulting in six conditions. We use an electromyographic measuring device to capture physical strain and the NASA Task Load Index to measure mental effort as dependent variables. Gramann et al. developed a procedure to determine whether a user prefers an allocentric or an egocentric reference frame [8]; using their method, the user’s individual preference can subsequently be compared to the performance in the allocentric and egocentric conditions. Each condition is repeated in 30 trials, during which we additionally record the time from menu activation to pie menu button selection and the error rate as dependent variables. Our prototype consists of a 27″ stereoscopic display with passive stereoscopic glasses and line-alternating polarization filters in a single-user workplace. We use the Leap Motion Controller as input device to drive a minimalist virtual representation of the human hand for direct pointing input.
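As a sketch of this 3 × 2 within-subject design, the following Python snippet builds a randomized per-participant trial schedule. The condition names and the plain shuffle (rather than formal counterbalancing, e.g. a Latin square across participants) are illustrative assumptions.

```python
import itertools
import random

ACTIVATIONS = ["push_velocity", "two_layer", "swipe_out"]
ORIENTATIONS = ["egocentric", "allocentric"]
TRIALS_PER_CONDITION = 30

def build_trial_schedule(seed: int):
    """Build a randomized schedule for the 3 x 2 factorial design.

    Each of the six conditions appears 30 times, giving 180 trials in
    total; the order is shuffled deterministically per participant.
    """
    conditions = list(itertools.product(ACTIVATIONS, ORIENTATIONS))
    schedule = conditions * TRIALS_PER_CONDITION
    random.Random(seed).shuffle(schedule)
    return schedule
```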

5 Anticipated Outcomes

Our menu design is part of a maritime operator’s workplace using a hybrid visualization consisting of a small 2D display and a larger three-dimensional stereoscopic screen. The different activation types for the pie menu buttons are designed to evoke different intensities of physical strain. Little research on the ergonomic aspects of freehand interaction exists so far, and the influence of different movement trajectories on an interaction task in a virtual desktop environment has received little attention, as most systems are still at a prototypical stage. The final state of our research covers the evaluation of the ergonomic aspects, in terms of physical and mental effort, of a stereoscopic maritime situation display for an operator’s workplace.