
1 Introduction

Situation Assessment processes, such as Data and Information Fusion, fed by multiple heterogeneous sources and supported by computational intelligence, are used to react to environmental changes and to help humans not only develop the perception and comprehension of what is going on in the environment, at a certain point in time and space, but also anticipate events to come. This ability is known as Situation Awareness (SAW) [1].

A challenging issue in the Situation Assessment community is determining how the process can be redesigned to enhance SAW, which can be severely degraded if low-quality information is produced and propagated throughout the process, jeopardizing decision-making [2].

SAW-oriented systems that support decision-making rely on data and information quality to give humans a better understanding of what is happening in the environment. Imperfect information provided by such systems affects the way users perceive cues of events or relate them into a meaningful situation, making users susceptible to SAW errors [3].

Both humans and systems may be overwhelmed by the challenges of information processing. Hence, operators and computers need to collaborate and share responsibility for achieving the common goals and tasks that situation assessment may require [4].

When humans and systems interact to assess situations in an inference process, SAW can be better acquired, maintained and even restored. Many current SAW-oriented interfaces describe the human role in a semi-automated fashion. Three main user-centered views emerge to enrich the final picture, with or without intermediate feedback: humans as consumers of information, humans as producers of information, and humans as actors on the information [5, 6]. Such interfaces seem to lack a deeper investigation of the implications and issues of human intervention to build and maintain SAW. The most recent approaches present opportunities for human interaction at each assessment level to enhance SAW [7].

However, such interfaces, although including the human as an actor in the situation assessment process, do not provide access to information across the process and from single and multiple sources.

Hence, this paper introduces a conceptual framework for SAW-oriented user interfaces for emergency dispatch systems and also proposes a new interface to promote a tighter integration between situation assessment systems and humans to build and maintain SAW. The goal is to present an interface that integrates spatial and temporal control of assets, information scoring, information filtering (by quality or domain-specific attributes), data and information modification, adaptation of visualization cues and access to fusion-related services at different levels.

Information from an urban robbery situation report is used to illustrate the features of our interface. This domain presents challenges that are better accommodated by a SAW-oriented interface, such as data acquisition from multiple heterogeneous sources (starting with a voice call from a robbery victim), the monitoring of the assessment process and the refinement of assessment results.

2 Related Work

This section presents the state of the art regarding SAW-oriented interfaces. Existing solutions typically aim to empower the operator and the system by intensifying their relationship with information to build a more feasible representation of situations. Among the related work are interaction frameworks for SAW, specialized interfaces and general systems that rely on human-information discourse.

Nwiabu et al. [8] discussed UIs for command and control systems (C2) for the prediction of hydrate formation in pipelines and subsea pipelines. The interface is based on the results of a hierarchical task analysis which decomposes complex scenario objectives into small tasks. The UI is capable of automatic reconfiguration to adapt itself to the current situation and reduce the mental effort of the operator.

Yu et al. [9] presented a new visualization context through the UI, which has an interpretation engine for the operator’s needs that defines which information must be presented. To improve the operator’s comprehension, a fuzzy control mechanism was proposed to perform a fuzzy search, based on specific keywords of the application domain, driven by operators’ interactions.

Onal et al. [10] developed a UI based on methods for improving SAW and minimizing the mental effort of operators of heavy mining machines. The layout of the UI is based on guided interaction, support panels, virtual maps and multiple screens. Such components are integrated to help operators avoid accidents due to information overload in operational duties. A Goal-Driven Task Analysis (GDTA) was applied to identify requirements.

Chai and Du [11] developed a framework to support SAW acquisition in a command and control system (attacks on the battlefield). The framework relies on fused data and classification rules to help recognize and explain enemy assets and their evolution in a dynamic environment.

Gomez et al. [12] created an interface prototype to increase the SAW of decision-makers while monitoring soccer games in real time, enabling the timely allocation of rescuers to incident responses. A wireless sensor network captures heterogeneous data and sends them to the operator’s interface, which is entirely based on temporal context for representing the scenario. The visualizations include the locations of rescuers and of the incident inside a stadium.

Feng et al. [13] developed a decision support system that incorporates shared SAW among agents, which extract relevant information about entities and represent it to the operator. These agents have a set of goals and strategies for each SAW level: missions, plans, actions and physical attributes. They are then responsible for generating recommendations about the scenario. The UI deals with the spatial-temporal aspects of the evolution of missions, and only limited interaction with the recommendations is allowed.

Although these solutions are efficient for their specific application domains, they are limited regarding the management of the information propagated throughout the situation assessment cycle. Our approach innovates by promoting full control of the information produced in each phase, using uncertainty representation and refinement methods as resources to control the knowledge that is created, represented and used to assess situations.

3 A Conceptual Framework to Enhance Situational Awareness for Emergency Dispatch Systems

Figure 1 depicts a conceptual framework to enhance Situational Awareness for Emergency Dispatch Systems and its main components, which describe how the processes of information inference, assessment, visual representation and human-system information refinement can be exploited to support SAW mediated by a specialized UI. Crucial components are highlighted in orange.

Fig. 1. Conceptual framework for describing the UI in the context of situation assessment (Color figure online)

The framework was used to develop a SAW-oriented interface for robbery event reporting, as part of an emergency dispatch system.

Before performing any information analysis, our framework receives the output of an acquisition phase. This acquisition applies Natural Language Processing (NLP) to identify objects, attributes and properties from audio calls reported to the Military Police of São Paulo State (PMESP) in Brazil. As output, a set of interrelated objects and properties, which we call a Situation, is produced. This Situation is then submitted to Information Quality Assessment.

At the Information Quality Assessment, the produced situation is analyzed along the following required quality dimensions: completeness, related to the presence or absence of objects or of the attributes that describe them; currency (the timeliness dimension), which helps to determine the “age” of the information so that timely actions can be taken; and uncertainty, present when operators hold only partial knowledge about a Situation. In our domain we consider uncertainty a generalization of the other dimensions. Situations are entities that evolve over time and must have their information quality indexes, including their uncertainty value, updated every time new information arrives or is produced.
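To make these dimensions concrete, the sketch below shows how completeness, currency and an overall uncertainty index could be computed for an acquired object. The attribute names, the linear currency decay and the equal-weight combination are illustrative assumptions of ours, not rules prescribed by the framework.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Dict, List, Optional


@dataclass
class SituationObject:
    """An object extracted from a report, with possibly missing attributes."""
    kind: str                                   # e.g. "victim", "criminal", "location"
    attributes: Dict[str, Optional[str]] = field(default_factory=dict)
    acquired_at: datetime = field(default_factory=datetime.utcnow)


def completeness(obj: SituationObject, expected: List[str]) -> float:
    """Fraction of the expected attributes that actually have a value."""
    if not expected:
        return 1.0
    present = sum(1 for name in expected if obj.attributes.get(name) is not None)
    return present / len(expected)


def currency(obj: SituationObject, max_age: timedelta,
             now: Optional[datetime] = None) -> float:
    """1.0 for fresh information, decaying linearly to 0.0 at max_age."""
    now = now or datetime.utcnow()
    age = now - obj.acquired_at
    return max(0.0, 1.0 - age / max_age)


def uncertainty(obj: SituationObject, expected: List[str], max_age: timedelta) -> float:
    """Overall uncertainty as a generalization of the other dimensions (0 = certain)."""
    quality = 0.5 * completeness(obj, expected) + 0.5 * currency(obj, max_age)
    return 1.0 - quality
```

These indexes would be recomputed whenever new information arrives, so that the uncertainty value shown to the operator stays current.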

The product of acquisition and quality assessment is situation knowledge that must be represented. In our complete situation assessment system, an ontology model was chosen to represent the semantics of the generated information, due to its flexibility in representing relations among objects. At this phase the objects, attributes and possible relations among them are already known. In situation assessment systems, this corresponds to assessment Levels 1 and 2, which in turn correspond to the Perception and Comprehension levels of Situation Awareness [14]. It is this knowledge that must be encoded into visualizations and managed by the UI.

The acquisition process generates entities that will be encoded into visualizations. To complement it, information integration can be performed using the already produced objects as input. This integration, known as Information Fusion, is capable of producing, at a lower dimensionality, new objects and new semantic relations that must also be represented graphically. The product of this phase, also a Situation, is submitted to information quality assessment, which enriches the existing situation knowledge and is encoded into subsequent visualizations.

The UI itself holds the information management of the whole process. Our interface is a collaboration workbench where both the system and the operator provide and convey information as partial knowledge that evolves over time. Therefore, every time new information is provided by one of these two actors, the other one must process and rebuild it as new knowledge.

In the emergency dispatch domain, the interface is where a human operator observes, gets oriented, decides what to do and then takes some action, which can be either requesting information refinement or making a domain-specific decision. In our approach, this is where the system shares the partial knowledge generated by the other phases and then listens to the operators’ inputs in a cyclic fashion. The methods for refinement are discussed further in the paper.

Held by the UI, the visualizations convey the aggregated knowledge about situations. Two graphical methods are available for the specialist’s analysis: the map-based visualization and the hierarchical graph of relations.

The use of overlays on a geo-referenced map is a requirement for emergency dispatch operations, which depend on location attributes that are crucial to determine the response to an emergency event. Hence, the other objects that compose a situation complement the visualization with information about criminals, victims and stolen objects, for instance, each one with its own description.

The adoption of a graph structure is justified by the need for hierarchical knowledge about how the information regarding situations and their objects is built. Police operators need to know how each situation is composed, and with which objects and attributes. This hierarchy was obtained through requirement analysis with the PMESP. Situation is the central entity, composed of the relations among objects and their attributes and ramifications. An object can be identified and yet have no relations at all, and can therefore sit in an independent hierarchy; in this case, even when not composing a situation, it must be represented. In our case study, a situation is a robbery event report and its objects are victims, criminals, location and stolen objects, each one with a set of characteristics that we call attributes.
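A minimal sketch of this hierarchy is shown below. The class and field names are hypothetical, since the paper does not specify a data model: a Situation aggregates objects and labeled relations, while an identified object with no relations can exist outside any situation.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class ObjectNode:
    """A node of the relational graph: a victim, criminal, location or stolen object."""
    kind: str
    attributes: Dict[str, str] = field(default_factory=dict)
    uncertainty: float = 1.0          # 1.0 means nothing is known yet


@dataclass
class Situation:
    """Central entity: a robbery report composed of related objects."""
    report_id: str
    objects: List[ObjectNode] = field(default_factory=list)
    relations: List[Tuple[int, str, int]] = field(default_factory=list)

    def add_relation(self, src: int, label: str, dst: int) -> None:
        """Relate two objects of this situation by their index in `objects`."""
        self.relations.append((src, label, dst))


# An identified object with no relations can live outside any Situation
unrelated = ObjectNode(kind="vehicle", attributes={"color": "red"})

robbery = Situation(report_id="190-2024-0001")
robbery.objects += [
    ObjectNode("victim", {"name": "unknown"}),
    ObjectNode("criminal", {"weapon": "handgun"}),
    ObjectNode("location", {"street": "Av. Paulista"}),
]
robbery.add_relation(1, "threatens", 0)      # criminal threatens victim
robbery.add_relation(0, "located_at", 2)     # victim located at the reported address
```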

The next section introduces the development of our interface and practical results using information from the emergency management domain. It also discusses how the interface design principles from Endsley [14] were interpreted and employed in the development of our interface, and presents positive and negative aspects of each principle considering our robbery event case study.

4 The SAW-Oriented User Interface Development: Robbery Report

For the requirements elicitation of our UI, two approaches were adopted: the Goal-Driven Task Analysis (GDTA) and the Guidelines for Designing for Situation Awareness, both introduced by Endsley [14]. The GDTA helps designers to list all the information needed to stimulate each of the three SAW levels (Perception, Comprehension, Projection) and the related tasks that help to obtain it. To acquire such information, a questionnaire and an on-site observation were applied with specialists. This approach also helped to define the priorities and decisions that must be handled during the observation of information.

This section presents the principles guiding the development of each component of our UI and the design choices based on the state of the art. It also highlights the benefits and drawbacks of each design choice for our domain.

4.1 Organize Information According to the Goals

The goal established in the GDTA analysis is the confirmation of the crime; the information is organized around this goal to help operators acquire SAW using the information collected and processed by the assessment system.

To obtain this result, the information was structured around goals, making the interface goal-driven rather than data-driven. Hence, it was divided into three different but interconnected views. Figure 2 presents the main view of the interface for the acquisition of SAW in emergency events.

Fig. 2. Main view interface for the acquisition of SAW in emergency events

The first view (bottom left) in the UI is an objects table for incoming events, containing: the information source, the objects found by acquisition and fusion, the time the information was added and the assessed information quality (overall uncertainty about the object).

The second view is a map-based, GIS-like window, with visualizations as overlays geo-located according to the location of the acquired data. Each object has its own overlay, e.g., criminal, stolen object, location of the event and victims. Each overlay can be acted upon by the user to expose the attributes associated with the object. When a certain object is spotted in the map view, the corresponding entry in the objects table is highlighted; the reverse interaction also highlights the overlay on the map.

The third view is a frame that holds the already mentioned relational graph. As with the map-based overlays and the objects table, every node of the graph can be selected to extract more information and to link to the other views of the interface. The selected node is highlighted as the complementary information is highlighted on the map and in the objects table. The graph can also be rearranged and have its hierarchy updated over time, as discussed further below.

4.2 Presenting Level 2 of Awareness

The goal is to present the information needed by the second level of awareness directly, supporting comprehension with minimal processing, as a first hint of a situation that is probably happening. The idea is to present some values already calculated, instead of relying on the specialist to compute them from Level 1 SAW data.

Some situations (composed of objects and attributes) can be calculated a priori to reduce the mental effort of the specialist who operates the system. For instance, the automated part can fuse several objects of type “location” identified in the acquired information. Hence, instead of presenting all input information separately, fused information with lower dimensionality and higher significance can be adopted. Therefore, all events sharing the same location and other attributes, such as a car or a kind of weapon, can be combined into single, meaningful information.
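As an illustration of this kind of a priori fusion, the sketch below groups incoming events by an approximate location key and merges their attributes. The rounding criterion and the field names are simplifying assumptions for illustration, not the actual fusion rules used by the system.

```python
from collections import defaultdict
from typing import Dict, List, Tuple


def fuse_by_location(events: List[Dict], precision: int = 3) -> List[Dict]:
    """Group events whose coordinates round to the same cell and merge their attributes."""
    buckets: Dict[Tuple[float, float], List[Dict]] = defaultdict(list)
    for event in events:
        key = (round(event["lat"], precision), round(event["lon"], precision))
        buckets[key].append(event)

    fused = []
    for (lat, lon), group in buckets.items():
        merged = {"lat": lat, "lon": lon, "sources": [e["id"] for e in group]}
        for event in group:                      # later events complement earlier ones
            for name, value in event.items():
                if name not in ("id", "lat", "lon"):
                    merged.setdefault(name, value)
        fused.append(merged)
    return fused


reports = [
    {"id": "call-1", "lat": -23.5613, "lon": -46.6562, "weapon": "handgun"},
    {"id": "call-2", "lat": -23.5611, "lon": -46.6564, "vehicle": "red car"},
]
print(fuse_by_location(reports))   # one fused event carrying both weapon and vehicle
```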

In cases where no automatic fusion occurs, the operator can perform it him/herself in any of the three views by dragging one representation onto another. This approach is known as interface fusion and is one of the refinement approaches detailed further below. It helps to avoid clutter when several simultaneous events occur, but neglects the higher granularity at the attribute level.

4.3 Supporting Global Situation Awareness

The big picture of the situation must always be available. Global SAW is an overall view of the situation in a high-level language and in accordance with the objectives of the specialist. At the same time, detailed information regarding the objects must always be available on request. In most situation assessment systems, Global SAW is always visible and may be crucial to determine which objectives have higher priority. To support this, the graph and the objects table can be expanded and contracted on demand to expose and hide, visually and textually, the hierarchy of objects that composes a relation.

Also, when a candidate relation is detected, indicated either by the human or by the system, the graph and the objects table establish a new graphical connection indicating a likely relation, which may or may not be accepted by the specialist. When a new association is made in any of the views, the others respond to it and rearrange themselves: in the objects table, the lines are grouped; on the map, the overlays are overlapped; and in the graph, a new hierarchy is composed.

4.4 Information Filtering

To avoid information overload, information not related to SAW must be filtered. The interface must present only the information crucial to reach the SAW objectives of each task at each moment. For this purpose, an interactive filter was developed. As information is inferred by the acquisition phase, the existing information about any of the objects can be omitted or highlighted for a specific analysis.
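One possible reading of this filter, sketched with hypothetical field names and thresholds, is a predicate-based selection over the inferred objects: by quality (uncertainty threshold), by domain kind or by any extra domain-specific condition.

```python
from typing import Callable, Dict, Iterable, List, Optional


def filter_objects(objects: Iterable[Dict],
                   max_uncertainty: float = 1.0,
                   kinds: Optional[List[str]] = None,
                   extra: Optional[Callable[[Dict], bool]] = None) -> List[Dict]:
    """Keep only the objects relevant to the current SAW goal."""
    result = []
    for obj in objects:
        if obj.get("uncertainty", 1.0) > max_uncertainty:
            continue                              # filtered out by quality
        if kinds is not None and obj.get("kind") not in kinds:
            continue                              # filtered out by domain kind
        if extra is not None and not extra(obj):
            continue                              # filtered out by a custom predicate
        result.append(obj)
    return result


all_objects = [
    {"kind": "criminal", "uncertainty": 0.2, "weapon": "handgun"},
    {"kind": "victim", "uncertainty": 0.1},
    {"kind": "stolen_object", "uncertainty": 0.7, "item": "phone"},
]
visible = filter_objects(all_objects, max_uncertainty=0.4,
                         kinds=["criminal", "stolen_object"])
# Only the criminal remains: the stolen object is too uncertain, the victim is filtered by kind.
```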

Such a filter is useful for reducing the search space and determining fusion candidates through visual analysis. However, SAW does not occur instantly: humans take a certain time to get oriented regarding situations and critical attributes. A poor filtering can compromise the visibility and the dynamics of a system that changes over time. Also, global SAW can be degraded, preventing the human from being proactive.

4.5 Explicitly Identify Absence of Information

Humans tend to treat the absence of meta-information as something positive: if there are positive readings, they assume a missing reading is also positive, when in fact it can be extremely conflicting and imprecise. Humans act differently when they know the probabilities of something going wrong; otherwise, absent information is treated as correct and reliable. There are two variations of the problem: no hazards, when the information was analyzed and there is no threat; and hazards unknown, when some places were not covered or sensors were limited.

Also, stress and workload can cause people to overlook missing information. Humans depend on visual cues; others only notice the absence because of experience. In military applications, dashed lines are used to represent the unknown. In our interface, when an attribute is unknown, the color of the overall quality index (uncertainty) is adjusted accordingly.

4.6 Support the Verification of Information Reliability

People consider sensor reliability to support and weigh their assessment of the information produced and presented. Thus, they benefit from knowing that certain information is not reliable.

Although reliability values can be presented numerically, authors state that the use of luminance levels is advised (brighter for the most reliable).

Beyond reliability in general, the factors that determine the reliability of a sensor (the sensor reading context) should also be accessible for evaluation. For this purpose, our interface shows uncertainty using rings (auras) around object representations: the closer the color is to green, the higher the quality of the data; the closer to red, the worse the quality.

Furthermore, by interacting with nodes in the graph, overlays on the map and entries in the objects table, the composing information quality indexes are presented to illustrate how the overall uncertainty was inferred. This approach allows specialists to verify local and global quality indexes on demand.

4.7 Representing Historical Events to Follow up Information Evolution

The UI provides graphic and interactive access to historical information through a timeline. To accomplish this, a time ruler was implemented, with time intervals indicating the arrival of information delivered to the system. In our approach to situation assessment, a situation is something that evolves over time; past situations can also be restored and re-inferred.
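One way to support such a time ruler, sketched here with assumed names and an append-only history (the paper does not describe the underlying storage), is to keep time-stamped snapshots of the situation and return the most recent snapshot at or before the selected instant.

```python
from bisect import bisect_right
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List, Optional


@dataclass
class SituationSnapshot:
    timestamp: datetime
    payload: Dict                      # objects, attributes and relations at that instant


@dataclass
class SituationTimeline:
    """Ordered history of a situation so past states can be restored and re-inferred."""
    snapshots: List[SituationSnapshot] = field(default_factory=list)

    def record(self, payload: Dict, when: Optional[datetime] = None) -> None:
        """Append the current situation state whenever new information arrives."""
        self.snapshots.append(SituationSnapshot(when or datetime.utcnow(), payload))

    def at(self, when: datetime) -> Dict:
        """Return the most recent snapshot at or before the selected time-ruler mark."""
        times = [s.timestamp for s in self.snapshots]
        index = bisect_right(times, when) - 1
        if index < 0:
            raise ValueError("no information had arrived yet at that time")
        return self.snapshots[index].payload
```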

Hence, specialists can access an event in the history and view its objects, attributes and situations on demand. When a historical event is selected, the other views are set to show the information of that event.

Thus, it is possible to return to the past, to monitor events in real time, and also to jump directly to a specific time. On the negative side, there is a possible loss of focus on relevant current events and confusion about how current the events are.

4.8 Support Access of Confidence and Uncertainty in Composite Information

The UI shows the quality scores in alternative forms. Categorical scales (high, medium, low) tend to produce faster decisions, the fewer the categories the faster, and tend to make the lowest ratings better accepted. Numerical, analog and ranking representations tend to generate slower decisions.

For a complete understanding of the situation, multiple ways of representing the probability of the information are used (integrated, separated and geo-located).

The auras around representative icons have their color changed according to the quality attribute. The color scale follows a five-level Likert scale, according to the index. When information represents a combination of other information (fusion products), the aura color is calculated from the combined individual indices of the constituent attributes. These individual attributes can be accessed by interacting with the list of events in the corresponding view. All detailed information follows the color scale and also carries numerical values.
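The sketch below illustrates one possible mapping from quality indexes to a five-level color scale and a simple combination rule for fusion products. The specific hex colors and the averaging rule are assumptions for illustration only; the paper does not fix the combination rule.

```python
# Five-level Likert-style color scale, from low quality (red) to high quality (green)
LIKERT_COLORS = ["#d73027", "#fc8d59", "#fee08b", "#91cf60", "#1a9850"]


def aura_color(quality: float) -> str:
    """Map a quality index in [0, 1] to one of the five colors."""
    quality = min(max(quality, 0.0), 1.0)
    step = min(int(quality * 5), 4)        # 0.0-0.2 -> red ... 0.8-1.0 -> green
    return LIKERT_COLORS[step]


def fused_aura_color(individual_indices: list) -> str:
    """Aura of a fusion product, combining the individual attribute indices by averaging."""
    combined = sum(individual_indices) / len(individual_indices)
    return aura_color(combined)


print(aura_color(0.9))                      # '#1a9850' (green, high quality)
print(fused_aura_color([0.9, 0.4, 0.6]))    # '#91cf60' (fourth level, combined index ~0.63)
```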

4.9 Support the Upgrade of Quality Levels for Users

As operators need to assess the quality of information readings, there must be a way for this to occur quickly and easily. The confirmation or contradiction/denial of relevant contextual information must be clearly shown.

If reliability is affected by some other value, that information must be displayed very close to the readings to allow a rapid assessment, by showing the specific values of the field situation rather than the reliability itself. This allows the operator to know how much to trust particular pieces of data in the current instance, or to adjust the system to obtain more reliable data.

If the system infers information from other data, the inferred information should be displayed in a way that allows rapid determination of what is known and what is inferred.

The interface provides an option to enter a new quality index, so that the human can evaluate the information or an individual event after fusion. This interaction can occur in the list of individual events, on the map or in the graph. A ruler is displayed so that the index can be adjusted. As a positive aspect, quality scores are always kept up to date (for situations and for objects/attributes). However, the dynamic nature of the scenario can prevent constant updating.

4.10 Support Uncertainty Management by Information Refinement

When information is uncertain to the specialist, the UI must provide access to information refinement functions. In our approach, the specialist is able to perform refinements by three different process management functions: Sensor Management, Fusion Management and Knowledge Management.

Through Sensor Management, the specialist is able to select information sources, request readings, set new operational parameters and also disprove acquired data. As our acquisition also provides objects and attributes, this function can also trigger new mining and classification routines.

Fusion Management was developed to allow specialists to manually determine the fusion parameters, instead of relying on the automatic integration process that runs right after the information acquisition and combines every object and attribute found.

Finally, Knowledge Management concerns the manual contribution that specialists can make to the incremental knowledge that they and the system build over time. Information can be corrected and semantically restructured. Hence, the associations between objects made by other processes can be redone. Certain nuances about the synergy of objects and the relationships among them in the scenario can only be inferred by humans.
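To summarize how the UI could expose these three refinement functions, the sketch below defines a common contract with hypothetical method names and request/response shapes. It is an architectural illustration under our own assumptions, not the actual service API of the system.

```python
from abc import ABC, abstractmethod
from typing import Dict


class RefinementFunction(ABC):
    """Common contract for the refinement functions reachable from the UI."""

    @abstractmethod
    def refine(self, request: Dict) -> Dict:
        """Process a refinement request issued by the specialist."""


class SensorManagement(RefinementFunction):
    def refine(self, request: Dict) -> Dict:
        # Select sources, request new readings, set parameters or disprove data;
        # may also trigger new mining and classification routines.
        return {"status": "reading_requested", "source": request.get("source")}


class FusionManagement(RefinementFunction):
    def refine(self, request: Dict) -> Dict:
        # Apply manually chosen fusion parameters instead of the automatic combination.
        return {"status": "fusion_scheduled", "parameters": request.get("parameters", {})}


class KnowledgeManagement(RefinementFunction):
    def refine(self, request: Dict) -> Dict:
        # Correct information or restructure the associations between objects.
        return {"status": "relations_updated", "relations": request.get("relations", [])}


# The UI dispatches the request to the function chosen by the specialist
functions = {
    "sensor": SensorManagement(),
    "fusion": FusionManagement(),
    "knowledge": KnowledgeManagement(),
}
print(functions["fusion"].refine({"parameters": {"group_by": "location"}}))
```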

5 Conclusions

This paper explores a UI for an emergency situation system and the efforts made to improve the Situation Awareness process through visualization and data fusion, represented in the UI.

These areas have been extensively studied in the literature, with several data fusion models and SAW-based interfaces to improve the Situation Awareness process. However, a deeper investigation of human intervention to build, maintain and recover Situation Awareness is still lacking.

Therefore, the proposed UI aims at exploring this collaboration between human and system to develop better SAW. To demonstrate its effectiveness, several SAW UI design guidelines were analyzed and their impact on how this UI was shaped is discussed.

The conclusion is that empowering the specialist and the system, by intensifying their relationship to build a more feasible representation of situations, has a potential to improve SAW that has not yet been explored.

In future projects, this UI will be evaluated with user testing (usability) and a Situation Awareness evaluation, compared to UIs that do not allow human-system collaboration in the fusion process.