1 Introduction

All major airlines now control operations from a network control center. These go under a number of names (Operations Control Center; Flight Operations Center; Airline Operations Center) but all perform essentially the same functions. The International Air Transport Association [1] suggests that Operations Control has three basic components, each with various sub-functions:

  • Operations Control

    • Operations Management (including airline systems, fleet)

    • Dispatch Management (including aircraft routing/re-routing; flight and load planning; fuel planning; flight following and meteorology)

    • Maintenance Management (including maintenance control; technical specialists; aircraft on the ground; in-flight technical issues)

    • Crew Management (including crew scheduling and tracking at main operating base and downline and airport operations)

  • Service Components

    • Customer (including passenger service and reservations)

    • Financial control

  • Other (including cargo handling, hotel reservations and transportation)

  • Support Components

    • Operational Coordination (including with airport management; air traffic control/management; news watch for geo-political instability, significant environmental events, etc.)

    • Operational Liaison (including Chief Pilot and base representatives)

    • Operational Support (security and safety)

    • Data Management and Analysis (delays, costs etc.).

Such centers operate 24 h/day and may employ anywhere from just a few people undertaking all of these functions to several hundred personnel, each performing one dedicated function, depending on the size of the airline and the complexity of its operation. The major carriers will often have engineers from the aircraft and/or engine manufacturers embedded in these operating rooms to provide specialist technical support. Rolls-Royce, the engine manufacturer, has recently opened its own dedicated engine-services facility, the Airline Aircraft Availability Centre. From this facility it can remotely monitor aircraft using the latest generation of engines, providing real-time support to pilots (if needed) and coordinating maintenance and repair world-wide. Indeed, this center has access to more information concerning the health and performance of the engines than do the pilots.

The objective of providing ground support from network control centers is to give the pilots a fully integrated, multi-disciplinary support team, relieving them of mundane flight-planning paperwork and supporting them during high-workload, non-normal and emergency operations. Providing a range of dedicated expertise should enable better decisions and minimize delays. Furthermore, ground-based monitoring should help to anticipate and pro-actively manage the impact of unplanned events.

Most aircraft manufacturers and avionics systems suppliers are developing technology for airliners that will be flown by just a single pilot. Embraer announced that it was hoping to provide single-pilot capabilities by 2020. Airbus's Chief Technology Officer, Paul Eremenko, has also openly stated that the company is developing technologies that will allow a single pilot to operate a commercial airliner. In support of the same objective, Boeing is planning to undertake initial experimental flights in 2018 in which autonomous systems will take over some of the pilot's decisions. In the UK, work is being undertaken as part of the Open Flight Deck program to determine the technology requirements and optimal crewing strategies for a single crew airliner.

Several different high-level configurations for a single crew aircraft have been proposed [2,3,4] but all solutions rely, to a greater or lesser extent, on ground-based support, depending upon how much onboard automation (or autonomy) is proposed. Any ground-based support may or may not be provided in real-time during flight. Comerford et al. [4] outlined five basic, high-level configurations for a single pilot aircraft:

  1. One pilot on board, who inherits the duties of the second pilot.

  2. One pilot on board, with automation replacing the second pilot.

  3. One pilot on board, with a ground-based team member replacing the second pilot.

  4. One pilot on board, with onboard personnel as back-ups.

  5. One pilot on board, with support of a distributed team.

A future single pilot aircraft is just one part of a wider operating system with several discrete components and functions within it, such as:

  • The aircraft itself, including:

    • the pilot

    • onboard automation/autonomous systems

  • Ground-based components, including (but not limited to):

    • ‘Second pilot’ support station/office (or ‘super-dispatcher’ – see Bilimoria et al. [5])

    • Real-time engineering function

    • Navigation/flight planning function (including meteorology).

It can be seen that most of the functions and information required to support single pilot operations are already available in airline network control centers. The issue becomes how this support can be made available to the flight deck in a timely, optimal manner.

Irrespective of how the ground-based component is arranged, all configurations are essentially a problem in distributed cognition and, more specifically, Distributed Situation Awareness – DSA (see Stanton et al. [6, 7]). Different parts of the wider system (human or machine actors) hold different components of information and represent different views of the system depending upon their goals (which should be compatible but not necessarily the same). For DSA to occur there must be communication between the agents in the system (which may take many forms). Finally, one component in the system (human or machine) can compensate for degradation in the Situation Awareness of another agent.

The concept of DSA operates at a system level, not at the individual level. It implies different, but compatible, requirements and purposes, and the appropriate information/knowledge relating to the task and the environment changes as the situation develops [8]. DSA can be represented by propositional networks [7]. Propositional networks comprise ‘subject’ (noun), ‘relationship’ (verb) and ‘object’ (noun) network structures of the knowledge required to describe a situation. For electronic/computerized systems, these may be constructed from system logic diagrams; for the representation of human knowledge objects, they are constructed from cognitive interviews.
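
As a concrete illustration, such a network can be held as a set of subject–relationship–object triples and queried for the knowledge objects associated with a given agent. The sketch below is a minimal, hypothetical Python example; the triples are illustrative placeholders rather than the networks elicited for the case study.

```python
from collections import defaultdict

# A propositional network held as subject-relationship-object triples.
# These triples are illustrative placeholders, not the published networks.
triples = [
    ("engine sensors", "measure", "vibration"),
    ("vibration gauge", "displays", "vibration"),
    ("first officer", "reads", "vibration gauge"),
    ("first officer", "identifies", "malfunctioning engine"),
]

def knowledge_objects(triples, agent):
    """Knowledge objects directly linked to a given agent (subject)."""
    return {obj for subj, _, obj in triples if subj == agent}

def linked_to(triples, knowledge_object):
    """Which subjects relate to a given knowledge object, and how."""
    index = defaultdict(set)
    for subj, rel, obj in triples:
        index[obj].add((subj, rel))
    return index[knowledge_object]

print(knowledge_objects(triples, "first officer"))
print(linked_to(triples, "vibration"))
```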

For the design of the air and ground-support components in a distributed system such as the one proposed to support a single pilot aircraft, the question becomes: how should this information be distributed and represented to support DSA? And what can be learned from previous accidents about how not to do it?

The AcciMap approach was developed as an analysis methodology to identify the causal factors involved in an accident or incident within a sociotechnical context. The technique graphically represents the causal factors, mapping multiple contributing factors across different levels of the sociotechnical system [9, 10]. AcciMap frames the possible causal influences underpinning a sequence of events into various organizational levels. AcciMap charts depict key input and output conditions of system components and their relationships. They are not restricted to analysis within a single organizational or functional entity, which makes this approach particularly applicable to the network of functions underpinning the operation of an aircraft. The AcciMap methodology has been adapted by various authors for a range of applications [11].

Combining propositional networks with the concepts within the AcciMap accident analysis methodology may provide an understanding of failures of DSA across the ground and air components. It is proposed that the elements within an AcciMap analysis may be further decomposed in a semi-hierarchical manner using propositional networks. Furthermore, the approach can also be used pro-actively to describe the potential inter-relationships between the elements in the various high-level configurations of a single pilot air/ground system.
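
One way to realize this semi-hierarchical decomposition computationally is to let each AcciMap element carry its own propositional network. The fragment below is a hedged sketch of that idea only; the element name, level and triples are hypothetical stand-ins rather than material from the analysis.

```python
# An AcciMap element decomposed into its own propositional network,
# stored as subject-relationship-object triples on the element itself.
# The element name, level and triples are illustrative placeholders.
accimap_element = {
    "name": "engine misidentified",
    "level": "Aircrew",
    "propositional_network": [
        ("first officer", "reads", "vibration gauge"),
        ("vibration gauge", "indicates", "left engine"),
        ("first officer", "reports", "right engine"),
    ],
}

def knowledge_objects_of(element):
    """Expand an AcciMap element into the knowledge objects it contains."""
    nodes = set()
    for subj, _, obj in element["propositional_network"]:
        nodes.update((subj, obj))
    return nodes

print(knowledge_objects_of(accimap_element))
```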

This paper re-analyses the Boeing 737 (G-OBME) accident at Kegworth in 1989 [12] using a modified AcciMap approach based on the standardized AcciMap methodology described by Branford et al. [11], supplemented by further analysis using propositional networks. The accident scenario is then used as the basis for an analysis of how the problem would be tackled using various configurations for the operation of a single crew aircraft.

2 Kegworth Accident, 1989

Much has previously been written about the accident at Kegworth in 1989 [12]. It is probably one of the most analyzed accidents in aviation history. However, the richness of the set of events leading up to the crash means that it bears analysis from many perspectives.

To summarize: just after leaving London Heathrow and approximately 13 min into the flight to Belfast, the pilots noticed a severe vibration as the aircraft was climbing through 28,000 feet, just 20 nm to the South-East of East Midlands Airport (near Derby). This was subsequently found to be the result of a small portion of a fan blade in the left-hand engine breaking off, which resulted in heavy vibration, shuddering and compressor stalling that ceased after about 20 s. It was accompanied by some smoke on the flight deck. The Commander took control of the aircraft and disconnected the autopilot. As a result of the First Officer misidentifying the malfunctioning engine from mis-reading the engine vibration gauges (an error that was compounded by the Commander’s incorrect mental model of the air-conditioning system – he believed that all the flight deck air came from the first compressor stages of the right-hand engine, hence the smoke), the right-hand engine was throttled back and subsequently shut down. At this point the vibration in the left-hand (damaged) engine reduced and the smoke also began to dissipate, suggesting to the crew that the decision to shut down the right-hand engine had indeed been the correct one. However, reports from passengers and cabin crew, who could directly see evidence of fire in the left-hand engine, were dismissed by the Commander when they were transmitted to the flight deck.

At this point the airline asked the crew to divert to East Midlands Airport which, coincidentally, was also British Midland’s main operating base. This involved a right-hand turn and a descent to flight level (FL) 100. The Commander elected to fly the aircraft manually. During this time the First Officer was engaged in various radio calls to both the airline’s main operating base and ATC. Simultaneously, he was also attempting (unsuccessfully) to re-program the Flight Management Computer (FMC) for the approach into East Midlands Airport. Having failed to do this, he commenced the single-engine approach checklist but was interrupted on several occasions by various radio calls from both ATC and British Midland’s maintenance facility. There was some attempt to review the situation but this was compromised by the high workload on the flight deck and the various interruptions. As a result of the reasonably tight turn and the high-workload descent to land at the nearby airport, the Captain was required to extend his flight path further to the South of East Midlands Airport to increase the distance to the threshold.

The initial part of the descent and approach was normal (in the circumstances) and it was not until the landing gear was deployed and the power on the damaged engine was increased, to compensate for the drag from the flaps and gear, that problems began to occur. About 2.4 nautical miles from touchdown there was an abrupt decrease in power as the left-hand engine failed completely. This was accompanied by fire warnings. The First Officer attempted to re-light the (undamaged) right-hand engine but there was not enough time and no procedure was available. The aircraft crashed on an embankment of the M1 motorway near the village of Kegworth, just 900 m short of the runway threshold. Forty-seven passengers were killed and 74 were seriously injured.

3 Modified AcciMap Analysis of Events

A modified version of the standardized AcciMap methodology described by Branford et al. [11] was utilized for the initial analysis of events in the Kegworth accident. The standardized AcciMap model considers events at three levels prior to the final outcome: External; Organizational; and Physical/Actor Events, Processes and Conditions. In the further modification used in this analysis, the latter level is broken down into two sub-levels, Aircraft and Aircrew, representing the avionics and the pilots, respectively. The complete AcciMap analysis is rather large; a section of the analysis of the sequence of events leading up to the accident is presented in Fig. 1. The arrows depict causal (or contributory) relationships between factors, hence an arrow from one factor to another indicates that the former was necessary for the latter to occur [11].

Fig. 1.

Initial section of the AcciMap analysis (adopting the modified method based upon Branford et al. [11]) describing the sequence of events leading up to the Boeing 737-400 accident (G-OBME) at Kegworth, 1989 [12].
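
For readers who prefer a computational view, the modified AcciMap can be treated as a directed graph in which each factor is tagged with one of the four levels and each arrow records the ‘necessary for’ relation. The sketch below is illustrative only (the factor names are abbreviated placeholders, not the full published analysis) and assumes the networkx library is available.

```python
import networkx as nx

# Levels of the modified AcciMap, from most distal to most proximal.
LEVELS = ["External", "Organizational", "Aircraft", "Aircrew"]

acci = nx.DiGraph()

def add_factor(name, level):
    """Add an AcciMap factor tagged with one of the recognized levels."""
    assert level in LEVELS, f"unknown AcciMap level: {level}"
    acci.add_node(name, level=level)

# Abbreviated placeholder factors, not the full published analysis.
add_factor("fan blade fracture (left engine)", "Aircraft")
add_factor("smoke on flight deck", "Aircraft")
add_factor("malfunctioning engine misidentified", "Aircrew")
add_factor("right (undamaged) engine shut down", "Aircrew")

# An edge u -> v records that factor u was necessary for factor v to occur.
acci.add_edge("fan blade fracture (left engine)", "smoke on flight deck")
acci.add_edge("smoke on flight deck", "malfunctioning engine misidentified")
acci.add_edge("malfunctioning engine misidentified",
              "right (undamaged) engine shut down")

def contributing_factors(graph, outcome):
    """Every upstream factor on a causal path to the given outcome."""
    return nx.ancestors(graph, outcome)

print(contributing_factors(acci, "right (undamaged) engine shut down"))
```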

Described another way, the crew were faced with two basic problems:

  • What is wrong with the aeroplane (and, by implication, what does this mean for the management of the flight)? and

  • Where are we and where are we going (and, by implication, how do we get there)?

Different parts of the overall system (both on and off the aircraft) held different pieces of information and represented them from different perspectives. A high-level view of the navigation problem in the Kegworth accident is described in this manner in Fig. 2. This also begins to demonstrate the importance of communication between system elements for the development of DSA. One of the problems with the use of propositional networks to describe DSA is that there is an implicit assumption that the transfer of data/information (communication) between actors is complete and accurate. However, any transmission of data/information between interfaces and users and/or from person to person may not be perfect, especially if the interface is poor or the transmitter or receiver is under pressure. As a result, the links between agents in Fig. 2 have been adapted to include a representation of the quality of the data exchange.

Fig. 2.

A high-level view of the navigation problem in the Kegworth accident describing the communication linkages.
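
One way to capture the imperfection of these exchanges is to attach a quality attribute to each communication link so that degraded transfers can be flagged when the network is inspected. The sketch below is a minimal illustration; the link ratings are hypothetical values chosen for the example, not figures from the accident report.

```python
# Communication links annotated with an assumed quality of data exchange.
# 1.0 = complete and accurate transfer; lower values = degraded transfer.
# All ratings below are illustrative, not taken from the accident report.
links = {
    ("ATC", "first officer"): 0.9,        # radio vectors, generally clear
    ("first officer", "captain"): 0.6,    # relayed under high workload
    ("engine instruments", "crew"): 0.3,  # poor interface: data, not information
    ("cabin crew", "captain"): 0.2,       # report of fire dismissed
}

def degraded_links(links, threshold=0.5):
    """Links whose assumed quality falls below the given threshold."""
    return [pair for pair, quality in links.items() if quality < threshold]

print(degraded_links(links))
```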

Some of the elements within the navigation problem can be described in more detail as a propositional network (see Fig. 3), which in this case has been delineated to make clear which physical parts of the system contained which properties.

Fig. 3.

Propositional network describing the navigation problem distributed across the human and non-human actors.

From a consideration of the material contained in Figs. 1, 2 and 3 it can be seen that various pieces of data/information about the problem (the engine malfunction) and the solution (managing the engine problem and navigating safely to East Midlands Airport) were held in a number of on- and off-aircraft locations and by both human and non-human elements. Briefly, the engine sensors ‘knew’ which engine was malfunctioning (but not why – they contained good data but little information) and the symptoms were displayed on the secondary engine instruments (however, these were not communicated adequately as a result of the poor interface – again, more emphasis on data rather than information), hence the poor awareness of the crew. Air Traffic Control had a strategic view of the position and track of the aircraft relative to East Midlands Airport (and other conflicting traffic) – good information – and an idea of the crew’s intentions. The aircraft’s FMS ‘knew’ the position and orientation of the aircraft relative to the airport (data) but could not communicate it nor enact it. The navigation intent was formulated by British Midland maintenance (to land at East Midlands Airport) and was shared by ATC and both pilots (but not the FMS). The intent was enacted cooperatively by the First Officer receiving vectors from ATC (tactical data), which were communicated to the Captain, who was flying the aircraft manually. The First Officer had a limited tactical awareness of the navigation solution (disparate pieces of data); the Captain’s navigational awareness was even more limited, essentially restricted to the immediate altitude, course and speed communicated by the First Officer.

As the events preceding the accident progressed, various knowledge objects were activated or de-activated (e.g. see Stewart et al. [8]) but, when human agents were involved (either as transmitter or receiver of data/information), the quality of the data/information passed may not have been perfect. It can be seen that by representing the various actors at play in the Kegworth accident at the various levels in the AcciMap hierarchy, and by including the lines of data/information transmission, it becomes apparent that no one entity had a complete view of the situation. Situation awareness was not only distributed, it was inefficient and incomplete. Furthermore, the nexus of the communication activity (the First Officer) became overloaded, especially when dealing with the navigation problem during the diversion to East Midlands Airport (see Fig. 2).

It can be seen from the adapted AcciMaps in Figs. 4, 5 and 6 that, when considering the navigation problem derived from the Kegworth accident scenario, the single pilot can rapidly become overloaded if the information/data exchange is not mediated by ground-based assistance (c.f. the role of CAPCOM – the capsule communicator – in NASA mission control). This is particularly the case when the inputs from a wider distributed team are considered (configuration 5 – Fig. 6). It can also be seen from the communication networks described in Fig. 4 that some potential modes of miscommunication (and hence error) are actually reduced; for example, in single crew aircraft configuration 1, where there is just a single pilot on board. Having a ground-based pilot serves to alleviate some of the workload experienced by the pilot, but only if the two can be coordinated effectively (configuration 3). The greater the number of entities in the distributed system, the more critical the role of this coordinating function becomes.
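
One crude way to see this overload effect is to count the direct communication links terminating at the pilot under each configuration: the pilot’s degree in the communication network grows when ground agents are unmediated and shrinks when a coordinating ground function fans those links in. The sketch below is hypothetical; the agent lists are simplified stand-ins for the configurations in Figs. 4, 5 and 6, not the published networks.

```python
# Hypothetical, simplified communication networks for three configurations.
# Each entry is an undirected link between two agents.
configurations = {
    "config 1 (single pilot, unmediated)": [
        ("pilot", "ATC"), ("pilot", "airline ops"), ("pilot", "maintenance"),
        ("pilot", "cabin crew"),
    ],
    "config 3 (ground-based second pilot)": [
        ("pilot", "ground pilot"), ("pilot", "ATC"),
        ("ground pilot", "airline ops"), ("ground pilot", "maintenance"),
    ],
    "config 5 (distributed team, mediated)": [
        ("pilot", "coordinator"), ("pilot", "ATC"),
        ("coordinator", "airline ops"), ("coordinator", "maintenance"),
        ("coordinator", "meteorology"), ("coordinator", "engineering"),
    ],
}

def pilot_load(links, agent="pilot"):
    """Number of direct communication links terminating at the agent."""
    return sum(agent in link for link in links)

for name, links in configurations.items():
    print(name, "->", pilot_load(links), "direct links to the pilot")
```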

Fig. 4.

Baseline navigation problem faced by the crew at Kegworth.

Fig. 5.

Kegworth accident scenario re-described using single crew configurations 1 and 3.

Fig. 6.

Kegworth accident scenario re-described using single crew configuration 5 and a modified version of configuration 5.