Abstract
This paper explores how different forms of anticipatory work contribute to reliability in high-risk space operations. It is based on ethnographic fieldwork, participant observation and interviews, supplemented with video recordings from a control room responsible for operating a microgravity greenhouse at the International Space Station (ISS). Drawing on examples from different stages of a biological experiment on the ISS, we demonstrate how engineers, researchers and technicians work to anticipate and proactively mitigate possible problems. Space research is expensive and risky. The experiments are planned over the course of many years by a globally distributed network of organizations. Owing to the inaccessibility of the ISS, every trivial detail that could possibly cause a problem is subject to scrutiny. We discuss what we label anticipatory work: practices constituted by an entanglement of cognitive, social and technical elements involved in anticipating and proactively mitigating everything that might go wrong. We show how the nature of anticipatory work changes between the planning and operational phases of an experiment. In the planning phase, operators inscribe their anticipation into technology and procedures. In the operational phase, we show how troubleshooting involves the ability to look ahead in the evolving temporal trajectory of the ISS operations and to juggle pre-planned fixes along these trajectories. A key objective of this paper is to illustrate how anticipation is shared between humans and different forms of technology. Moreover, it illustrates the importance of including considerations of temporality in safety and reliability research.



Notes
Interestingly, Haavik (2014a) argues that the theoretical frameworks Normal Accidents Theory (NAT), High Reliability Organizations (HRO) and Resilience Engineering are relationally oriented in their initial conceptions.
There have been debates between Hutchins and Latour on whether or not cognitive explanations are necessary (Giere and Moffatt 2003). Also, Latour’s insistence that the agency of technology must be understood as symmetrical with the agency of humans is controversial.
Latour’s examples are trivial, but pedagogical. Consult Ribes et al. (2013) for a more empirically relevant discussion of delegation (viz. a networked organization managing a computing grid).
Fixating seeds means injecting chemicals into the seed cassettes to halt the biological mechanisms within the seeds, so that they can later be studied on the ground in the state they were in at the time of fixation.
When the ISS is in a communication shadow, the terminology used is loss of signal (LOS); acquisition of signal (AOS) denotes that the connection is good. Availability of S-band and Ku-band is also commonly used to describe communication windows.
Mohammad et al. (2014) provide a thorough description of the methodological audiovisual setup.
Signatures refer to telemetry parameters and color-coded visual signs for errors or sensor readings, which indicate whether the system is in a nominal or off-nominal state.
For example, they can be ways for management to show the authorities that a lesson has been learned from the incident, or to assign responsibility (or blame) for specific issues.
References
Almklov PG (2008) Standardized data and singular situations. Soc Stud Sci 38(6):873–897
Almklov PG, Antonsen S (2014) Making work invisible: new public management and operational work in critical infrastructure sectors. Public Adm 92(2):477–492
Almklov PG, Østerlie T, Haavik TK (2014) Situated with infrastructures: interactivity and entanglement in sensor data interpretation. J Assoc Inf Syst 15(5):263–286
Antonsen S, Almklov P, Fenstad J (2008) Reducing the gap between procedures and practice–lessons from a successful safety intervention. Saf Sci Monit 12(1):1–16
Bieder C, Bourrier M (2013) Trapping safety into rules: an introduction. In: Bieder C, Bourrier M (eds) Trapping safety into rules: how desirable or avoidable is proceduralization? Ashgate Publishing, Farnham
Dekker S (2006) Resilience engineering: chronicling the emergence of confused consensus. In: Hollnagel E, Woods DD, Leveson N (eds) Resilience engineering: concepts and precepts. Ashgate, Hampshire
Endsley MR (1995) Toward a theory of situation awareness in dynamic systems. Hum Fact J Hum Fact Ergon Soc 37(1):32–64
Giere RN, Moffatt B (2003) Distributed cognition: where the cognitive and the social merge. Soc Stud Sci 33(2):301–310
Haavik TK (2014b) On the ontology of safety. Saf Sci 67:37–43
Haavik TK (2014c) Sensework. J Comput Support Cooper Work 23(3):269–298
Hale A, Borys D (2013) Working to rule, or working safely? Part 1: a state of the art review. Saf Sci 55:207–221
Hayes J (2012) Use of safety barriers in operational safety decision making. Saf Sci 50(3):424–432
Hollnagel E (2015) Why is work-as-imagined different from work-as-done? In: Resilience in everyday clinical work. Ashgate, Farnham, pp 249–264
Hollnagel E, Woods DD, Leveson N (2006) Resilience engineering: concepts and precepts. Gower Publishing Company, Aldershot
Hutchins E (1995) Cognition in the wild. MIT Press, Cambridge
Hutchins E, Klausen T (1996) Distributed cognition in an airline cockpit. In: Engeström Y, Middleton D (eds) Cognition and communication at work. Cambridge University Press, Cambridge, pp 15–34
Kongsvik T, Almklov P, Haavik T, Haugen S, Vinnem JE, Schiefloe PM (2015) Decisions and decision support for major accident prevention in the process industries. J Loss Prev Process Ind 35:85–94
LaPorte TR, Consolini PM (1991) Working in practice but not in theory: theoretical challenges of “high-reliability organizations”. J Public Adm Res Theory 1(1):19–48
Latour B (1990) Technology is society made durable. Sociol Rev 38(S1):103–131
Latour B (1999) Pandora’s hope: essays on the reality of science studies. Harvard University Press, Cambridge
Mohammad AB, Johansen JP, Almklov P (2014) Reliable operations in control centers: an empirical study. In: Safety, reliability and risk analysis: beyond the horizon. Proceedings of the European safety and reliability conference, ESREL 2013, Amsterdam, The Netherlands, 29 Sept–2 Oct 2013. CRC Press
Nathanael D, Marmaras N (2006) The interplay between work practices and prescription: a key issue for organizational resilience. In: Proceedings of the 2nd resilience engineering symposium, pp 229–237
Orr JE (1996) Talking about machines: an ethnography of a modern job. Cornell University Press, Ithaca
Østerlie T, Almklov PG, Hepsø V (2012) Dual materiality and knowing in petroleum production. Inf Organ 22(2):85–105
Ribes D, Jackson S, Geiger S et al (2013) Artifacts that organize: delegation in the distributed organization. Inf Organ 23(1):1–14
Roe E, Schulman PR (2008) High reliability management: operating on the edge. Stanford Business Books, Stanford University Press, Stanford
Rosness R, Evjemo TE, Haavik TK, Wærø I (2015) Prospective sensemaking in the operating theatre. Cogn Technol Work 1–17. doi:10.1007/s10111-015-0346-y
Schulman P, Roe E, Eeten MV, Bruijne MD (2004) High reliability and the management of critical infrastructures. J Conting Crisis Manage 12(1):14–28
Stanton NA, Stewart R, Harris D et al (2006) Distributed situation awareness in dynamic systems: theoretical development and application of an ergonomics methodology. Ergonomics 49(12–13):1288–1311
Suchman L (1987) Plans and situated actions. Cambridge University, New York
Watts JC, Woods DD, Corban JM et al (1996) Voice loops as cooperative aids in space shuttle mission control. In: Proceedings of the ACM conference on computer-supported cooperative work
Weick KE (1993) The collapse of sensemaking in organizations: the Mann Gulch disaster. Adm Sci Q 38(4):628–652
Weick KE, Sutcliffe KM (2001) Managing the unexpected: assuring high performance in an age of complexity. Jossey-Bass, San Francisco
Cite this article
Johansen, J.P., Almklov, P.G. & Mohammad, A.B. What can possibly go wrong? Anticipatory work in space operations. Cogn Tech Work 18, 333–350 (2016). https://doi.org/10.1007/s10111-015-0357-8