Elsevier

Journal of Systems and Software

Volume 83, Issue 9, September 2010, Pages 1612-1621

Means-ends and whole-part traceability analysis of safety requirements

https://doi.org/10.1016/j.jss.2009.08.022

Abstract

Safety is a system property; hence, high-level safety requirements must be incorporated into the implementation of system components. In this paper, we propose an optimized traceability analysis method for tracing these safety requirements, based on the means-ends and whole-part concepts of cognitive systems engineering. According to a whole-part decomposition, a system consists of hardware, software, and humans. The safety requirements of a system and its components are enforced or implemented through a means-ends lifecycle. To provide evidence of the safety of a system, the means-ends and whole-part traceability analysis method optimizes the creation of safety evidence from the safety requirements, safety analysis results, and other system artifacts produced through the lifecycle. These sources of safety evidence have causal (cause-consequence) relationships with each other. Failure mode and effect analysis (FMEA), hazard and operability analysis (HAZOP), and fault tree analysis (FTA) are the techniques generally used for safety analysis of systems and their components, and they cover the causal relations in a safety analysis. The causal relationships in the proposed method make it possible to trace the safety requirements through the safety analysis results and system artifacts. We present the proposed approach with an example and describe the usage of the TRACE and NuSRS tools to apply the approach.

Introduction

The use of digital systems to control safety-critical operations is ever-increasing. Examples of such safety control systems include digital control systems embedded in nuclear power plants, satellites, and missiles. A typical control system consists of the following components, as presented in Fig. 1: plant, controller, actuators, and sensors. The primary concern in developing a safety control system is that the plant (P) must behave in a safe and acceptable way. The correctness of the controller (C) is the only means of ensuring correct and safe plant behavior. The requirements of safety control systems should be correct in their functional, timing, and safety aspects. When specifying the requirements (Sp) and proving their safety properties, one must consider the behaviors of both the plant (P) and the controller (C) (Ostroff, 1989). That is, the truth of the following proposition must be demonstrated: P ∧ C ⊨ Sp, i.e., the plant operating under the controller satisfies the specification.
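The idea that the plant and controller together must satisfy the specification (Sp) can be illustrated with a minimal closed-loop sketch. The water-tank plant, its thresholds, and the control law below are our own illustration, not taken from the paper:

```python
# Minimal closed-loop sketch: a controller (C) must keep the plant (P)
# within its safety specification (Sp). All names and values are
# illustrative, not from the paper.

def controller(level):
    """C: open the drain valve when the sensed level is high."""
    return "open" if level >= 80.0 else "closed"

def plant(level, valve):
    """P: inflow raises the level; an open drain valve lowers it."""
    inflow, outflow = 5.0, 12.0
    return level + inflow - (outflow if valve == "open" else 0.0)

def satisfies_spec(level):
    """Sp: the tank must never overflow (level < 100)."""
    return level < 100.0

# Bounded check of the proposition: simulate P under C and check Sp
# on every step of the run.
level = 0.0
for _ in range(1000):
    level = plant(level, controller(level))
    assert satisfies_spec(level), "safety requirement violated"
print("Sp holds on this run")
```

A bounded simulation like this is of course only a sanity check; demonstrating the proposition in general requires verification over all behaviors of P and C, which is what the formal analysis techniques cited in the paper address.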

The control structure can be a three-layered structure. First, an automated controller without software controls the plant using only electro-mechanical control. Second, an automated controller with software controls the plant using software (e.g., control software, operating system, and device drivers), a computer, and an electro-mechanical hardware controller. Usually, the software has a supervisory function above the hardware controller. Third, there can be a human supervisory controller above the software and hardware controllers. Most safety-critical systems, for example in nuclear power plants and airplanes, have this type of control architecture. Therefore, a human-centered control design for safety systems is important to maintain the safety of the human–machine interaction. The interaction among the components should also be considered to verify that the behavior of both the plant and the controller meets the functional requirements and achieves safe behavior through enforcement of the safety constraints.
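The three layers can be pictured as a chain of decisions, with each layer supervising the one below it. The following sketch is purely illustrative (the setpoints, trip limit, and control law are our own assumptions):

```python
# Illustrative sketch of the three-layer control structure: a human
# supervisory layer above supervisory software above an
# electro-mechanical hardware controller.

def hardware_controller(level, setpoint):
    """Layer 1: electro-mechanical control law (simple on/off valve)."""
    return "open" if level > setpoint else "closed"

def software_controller(level, setpoint, trip_limit=90.0):
    """Layer 2: supervisory software enforcing a safety constraint."""
    if level >= trip_limit:      # safety trip overrides the setpoint
        return "open"
    return hardware_controller(level, setpoint)

def human_supervisor(mode):
    """Layer 3: the operator selects the operating mode/setpoint."""
    return {"normal": 80.0, "maintenance": 40.0}[mode]

setpoint = human_supervisor("normal")
print(software_controller(95.0, setpoint))   # safety trip forces "open"
```

The point of the layering is that each higher layer constrains, rather than replaces, the layer below: the software layer can enforce a safety trip regardless of the hardware control law, and the human layer selects operating modes rather than actuating the plant directly.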

Safety control systems must be designed (and operated) not only to achieve the intended (goal-oriented) behavior but also to avoid unintended (risk-oriented) behavior. To be safe, the original design must not only enforce the appropriate safety requirements and constraints on behavior to ensure safe operation, but must also continue to operate safely as changes and adaptations occur over time (Woods, 2000). A prerequisite for achieving this is to perform safety analysis as an integrated part of the system development process, with a well-supported change management process.

Traceability analysis of safety requirements and constraints is one of the most important activities in assessing the safety of a system. Effective traceability between the system development process and the safety analysis is critical to justify safety and to maintain stability among the components during system evolution. The aim of this paper is to propose a traceability analysis method for the design and operation of safety control systems, based on the means-ends and whole-part concepts of cognitive systems engineering (Rasmussen et al., 1994). The rest of the paper is structured as follows. Section 2 presents our cognitive safety engineering approach to designing a safety control system. Section 3 describes the means-ends and whole-part traceability analysis. Section 4 illustrates our approach by means of a simple example, and Section 5 concludes the paper.

Section snippets

Cognitive safety engineering for safety control systems

Systems engineering views each system as an integrated whole even though it is composed of diverse, specialized components, which can be physical, logical (software), or human. According to work domain analysis (Rasmussen et al., 1994), a whole-part relation presents a hierarchical decomposition of a physical system. In a means-ends abstraction, each level represents a different model of the same system (aggregation of components). At any point in the hierarchy, the information at one level
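The two dimensions can be pictured as a data structure: the whole-part axis as a component tree, and the means-ends axis as abstraction levels attached to each node (the five level names follow Rasmussen's abstraction hierarchy; the component names and descriptions are our own illustration):

```python
from dataclasses import dataclass, field

# Means-ends axis: abstraction levels of Rasmussen's hierarchy, from
# purpose (ends) down to physical form (means).
MEANS_ENDS_LEVELS = ["functional purpose", "abstract function",
                     "generalised function", "physical function",
                     "physical form"]

@dataclass
class Component:
    """One node in the whole-part decomposition of a system."""
    name: str
    parts: list = field(default_factory=list)          # whole-part links
    descriptions: dict = field(default_factory=dict)   # level -> text

    def add_part(self, part):
        self.parts.append(part)
        return part

# Whole-part axis: a system decomposed into hardware, software, humans.
system = Component("safety control system")
system.descriptions["functional purpose"] = "keep the plant safe"
for part in ("hardware", "software", "human operator"):
    system.add_part(Component(part))

print([p.name for p in system.parts])
```

Each node can carry a description at every means-ends level, so the same component tree supports both a purpose-oriented (ends) reading and an implementation-oriented (means) reading.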

Means-ends and whole-part traceability analysis to provide safety evidence

In this section, we present how traceability analysis using the means-ends and whole-part approach can be used for a system’s safety demonstration. A safety demonstration is a set of arguments and evidence elements which support a selected set of claims about the dependability – in particular the safety – of the operation of a system important to safety used in a given plant environment (AVN et al., 2007). Evidence is located in various documents and must be collected from these documents. The
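The causal (cause-consequence) links between requirements, analysis results, and artifacts can be sketched as a small graph that a trace query walks. The item names below (requirement, FTA/FMEA/HAZOP results, documents) are hypothetical illustrations, not from the paper:

```python
# Illustrative causal traceability graph: a safety requirement is linked
# through analysis results (FTA, FMEA, HAZOP) down to the artifacts that
# evidence it. All item names are invented for this sketch.
links = {
    "SR-1: tank must not overflow": ["FTA: top event 'overflow'"],
    "FTA: top event 'overflow'": ["FMEA: valve fails closed",
                                  "HAZOP: 'no outflow' deviation"],
    "FMEA: valve fails closed": ["design doc: redundant drain valve"],
    "HAZOP: 'no outflow' deviation": ["test report: valve actuation test"],
}

def trace(item, depth=0, acc=None):
    """Depth-first walk from a requirement down to its evidence items."""
    acc = [] if acc is None else acc
    acc.append((depth, item))
    for target in links.get(item, []):
        trace(target, depth + 1, acc)
    return acc

for depth, item in trace("SR-1: tank must not overflow"):
    print("  " * depth + item)
```

A forward trace like this answers "what evidence supports this requirement?"; inverting the link direction would answer the impact-analysis question "which requirements are affected if this artifact changes?".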

Applicability of means-ends and whole-part traceability

We have presented how the means-ends and whole-part traceability analysis can be used in a realistic scenario using the example template presented in Fig. 10. In the future, we will evaluate the means-ends and whole-part safety engineering approach, as well as the supporting traceability approach, using a case study. We have developed two tools, TRACE and NuSRS, which can be used to evaluate the applicability of the approach. A short description of the tools and how they can be used for the case study is

Conclusion

The major challenge in developing a safety-critical control system is justifying the safety of the system. It must be demonstrated that the safety goals of the system are achieved by correct enforcement of the safety requirements and constraints on each of the system’s components, e.g., the human factors, the software, and the hardware. In this paper, we proposed a means-ends and whole-part safety engineering approach to identify, enforce, and trace the safety requirements and constraints


References (12)

  • Yoo, J., et al., 2005. A formal software requirements specification method for digital nuclear plants protection systems. Journal of Systems and Software.
  • AVN, BfS, CSN, ISTec, NII, SKI, STUK, 2007. Licensing of safety critical software for nuclear reactors. Common position...
  • Fenelon, P., Hebbron, B., 1994. Applying HAZOP to Software Engineering Models. In: Risk Management and Critical...
  • Katta, V., Thunem, A.P.-J., 2007. Improving model-based risk assessment methods by integrating the results of...
  • Koo, S.R., et al., 2006. NuSEE: an integrated environment of software specification and V&V for PLC based safety-critical systems. Nuclear Engineering and Technology.
  • Lee, J.S., Miedl, H., Choi, J.G., Lindner, A., Kwon, K.C., 2006. Software safety lifecycles and methods of a...
There are more references available in the full text version of this article.

Cited by (7)

  • Assurance cases and prescriptive software safety certification: A comparative study

    2013, Safety Science
    Citation excerpt:

    For reasons of brevity, we do not define those criteria here. These software safety requirements will be further refined as more design detail becomes available (Lee et al., 2010). For example, with the low-level design for the command channel, as shown in Fig. 3, SSRs will be defined for the individual ‘In’, ‘Braking’, ‘ABS’, ‘CMD Modifier’ and ‘OUT’ functions of the Command component.

  • TraceBoK: Toward a Software Requirements Traceability Body of Knowledge

    2016, Proceedings - 2016 IEEE 24th International Requirements Engineering Conference, RE 2016
  • Safety evidence traceability: Problem analysis and model

    2014, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Jang-Soo Lee is a principal research scientist at the Korea Atomic Energy Research Institute (KAERI). He received his M.S. degree in computer science from the Korea Advanced Institute of Science and Technology (KAIST) in 1986 and his Ph.D. from KAIST in 2002. His research interests include safety analysis of software-based systems, software verification and validation, embedded system testing, formal methods, and digital instrumentation and control architecture. He can be reached at [email protected].

Vikash Katta is a research scientist at the Institute for Energy Technology (IFE), Norway. He received his B.E. degree in computer science from Madras University, India, and his M.S. degree in information and communication systems security from the Royal Institute of Technology (KTH), Sweden. He was employed as a student intern at SINTEF, Norway, and contributed to the CORAS project. His research interests include requirements engineering and traceability, safety cases, and safety analysis. He can be reached at [email protected].

Eunkyoung Jee was a Ph.D. candidate at the Korea Advanced Institute of Science and Technology (KAIST) when working on this research. She received her B.S., M.S., and Ph.D. degrees in computer science from KAIST. Her research interests include safety-critical software, software testing, and formal methods. She is currently a postdoctoral researcher at the University of Pennsylvania. She can be reached at [email protected].

Christian Raspotnig is currently a Ph.D. candidate at the University of Bergen, Norway, and is also employed at the Institute for Energy Technology (IFE), Norway. He received his B.S. and M.S. degrees in computer science from the University College of Østfold, Norway. In the past, Christian worked as a safety advisor for the Norwegian air navigation service provider Avinor. At IFE, he has conducted research on integrating risk assessment into requirements engineering and has performed consultancy work for the air traffic management and nuclear industries, assessing safety-critical systems. He can be reached at [email protected].
