
A study of methods for textual satisfaction assessment


Abstract

Software projects requiring satisfaction assessment are often large scale systems containing hundreds of requirements and design elements. These projects may exist within a high assurance domain where human lives and millions of dollars are at stake. Satisfaction assessment can help identify unsatisfied requirements early in the software development lifecycle, when issues can be corrected with less impact and lower cost. Manual satisfaction assessment is expensive both in terms of human effort and project cost. Automated satisfaction assessment assists requirements analysts during the satisfaction assessment process to more quickly determine satisfied requirements and to reduce the satisfaction assessment search space. This paper introduces two new automated satisfaction assessment techniques, empirically demonstrates their effectiveness, and validates two previously existing automated satisfaction assessment techniques. Validation shows that automatically generated satisfaction assessments have high accuracy, thus reducing the workload of the analyst in the satisfaction assessment process.
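Appendix B reports results for naïve, TF-IDF, rule-based, and combined methods. As a rough, illustrative sketch only (not the implementation studied in the paper), the snippet below computes a TF-IDF cosine similarity between a requirement and candidate design elements; the tokenization, weighting scheme, and example texts are simplifying assumptions.

```python
import math
from collections import Counter

def tfidf_vectors(documents):
    """Build simple TF-IDF weight dictionaries for a list of token lists."""
    n = len(documents)
    df = Counter(term for doc in documents for term in set(doc))  # document frequency
    vectors = []
    for doc in documents:
        tf = Counter(doc)
        vectors.append({t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dictionaries."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical, simplified texts loosely echoing the CM-1 excerpts in Appendix A.
requirement = "the dpu tmali shall install callbacks for handling all dpu dci interrupts".split()
design_elements = [
    "install exception handlers for memory errors and address exceptions".split(),
    "memory upload and download handling for poking data into a dpu memory location".split(),
]

vectors = tfidf_vectors([requirement] + design_elements)
for i, score in enumerate((cosine(vectors[0], v) for v in vectors[1:]), start=1):
    print(f"similarity(requirement, design element {i}) = {score:.3f}")
```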



Notes

  1. The above scenario examines the case of a developer, but in many mission- or safety-critical projects, an IV&V agent is responsible for performing satisfaction assessment.

  2. This threshold was determined based on discussions with IV&V analysts (not involved in the assessment study). With other thresholds, the number of requirements labeled as satisfied was unrealistically small or large (no requirements satisfied, or all requirements satisfied); a small illustration of this sensitivity appears after these notes.

  3. We did not ask the participants to review every 'not satisfied' requirement due to participant time constraints. The participants were assigned review items so as to achieve coverage while also ensuring as much overlap as possible (for example, having reviewer 1 look at the first 30 items and reviewer 2 look at items 20–40).

  4. We consulted with a statistics professor during the answer set creation process.
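To make the threshold sensitivity described in note 2 concrete, the following sketch labels a requirement satisfied when its best similarity score to any candidate design element meets a cutoff. The scores and labeling rule are hypothetical; only the 0.03 and 0.09 cutoffs echo values reported in Appendix B.

```python
def label_satisfied(best_scores, threshold):
    """Label a requirement satisfied when its best element similarity meets the threshold."""
    return {req: score >= threshold for req, score in best_scores.items()}

# Hypothetical per-requirement best similarity scores (requirement IDs echo Appendix A).
best_scores = {"SRS5.12.2.2": 0.31, "SRS5.13.1.1": 0.07, "SRS5.2.3.1": 0.02}

for threshold in (0.0, 0.03, 0.09, 0.5):
    labels = label_satisfied(best_scores, threshold)
    print(f"threshold {threshold:.2f}: {sum(labels.values())} of {len(labels)} labeled satisfied")
```

At the extreme cutoffs, every requirement or no requirement is labeled satisfied, which is the behavior the note describes as unrealistic.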


Acknowledgments

This work is funded in part by the National Science Foundation under NSF grant CCF-0811140. This work was partially sponsored by NASA under grant NNG05GQ58G. We thank David Pruett and the other evaluators for their help. We thank Hakim Sultanov and Bill Kidwell. Thanks to Stephanie Ferguson, Marcus Fisher, Ken McGill, Tim Menzies, Lisa Montgomery, and everyone at the NASA IV&V facility. Thanks also to fellow graduate students Jody Larsen, Senthil Sundaram, Liming Zhao, and Sravanthi Vadlamudi. We thank statistics professor Dr. Arnold Stromberg.

Author information


Corresponding author

Correspondence to Jane Huffman Hayes.

Additional information

Editor: Daniela Damian

Appendices

Appendix A: Dataset Examples

Below are examples from each of the datasets used in this paper.

1.1 CM-1

Requirements:

  • SRS5.12.2.2. The DPU-CCM shall process real-time non-deferred commands within B ms of receipt from the ICU or the SCU.

  • SRS5.13.1.1. The DPU-TMALI shall install callbacks for handling all DPU-DCI interrupts including Error interrupt, Ping-Pong Timeout interrupt, and Ping-Pong Complete Flag interrupt.

  • SRS5.2.3.1. The DPU-RTOS shall exclude failed DRAM from the system memory pool based on the contents of the BIT_DRAM results in the SYSTEM_BLOCK. The system memory table does not include the Interrupt Vector Table (IVT), nor the text and data segment.

Design Elements:

  • DPUSDS5.2.3.6.1. Install Exception Handlers In the diagnostic mode of operation, the RSC processor generates external interrupts for memory single-bit errors (SBEs), multiple-bit errors (MBEs), and address exceptions. The RSCVME Board Support Package of VxWorks does not directly support access to these interrupts. Some custom routines must be provided to access the Memory Error Interrupt.

  • DPUSDS5.12.1.4.1. Memory Upload and Download Handling There are two ways to upload data to the DPU: Memory Poke (D_MEM_DAT_POKE command), or Memory Upload (D_MEM_DAT_UPLD command). The memory poke command is used when a small amount (<=Z bytes) of data needs to be poked into a DPU memory location. The Z byte limitation is derived from the Company X command length constraint.

  • DPUSDS5.2.3.6.5. Install Exception Handlers The RSC processor also generates an external interrupt for the Power Fail Interrupt. The RSCVME Board Support Package of VxWorks does not directly support access to this interrupt. Some custom routines must be provided to access this interrupt. These functions are described below, and are contained in sysLibSup.c.

1.2 Gantt

Requirements:

  • R6. Create Resources (person); GanttProject supports Persons as resources. Persons have names and holidays or vacation days. Persons can be assigned to work on tasks.

  • R12. Change Task Begin/End Times automatically with dependency changes; The start or end date should be changed automatically if links among tasks are changed.

  • R16. Add/Remove Holidays and Vacation Days; Holidays and vacation days are properties of persons (resources). Changing this information also changes the availability of a person on certain days.

Design Elements:

  • DE4-1. To add tasks as subtasks, a method which indents the selected task nodes in the GUI and changes them to be subtasks is used. A manager of the task hierarchy provides functions to update the relationship between tasks.

  • DE10-4. The human resource class can have multiple resource assignment objects which assign this resource to tasks. The class provides a function to get the list of these objects.

  • DE14-1. A GUI class of graphic area provides a function to draw dependency. The function uses an object of the task manager to add dependencies.

Appendix B: Additional Tables

Table 6 Number of possible k-element rule sets
Table 7 Gantt dataset naïve method results
Table 8 Gantt dataset TF-IDF method results
Table 9 Gantt dataset rule-based method results
Table 10 Gantt dataset combination of rule-based and TF-IDF method results
Table 11 CM-1 dataset naïve method results
Table 12 CM-1 dataset TF-IDF method results
Table 13 CM-1 dataset rule-based method results
Table 14 CM-1 dataset combination of rule-based and TF-IDF method results
Table 15 Gantt results for naïve method
Table 16 Gantt results for TF-IDF method
Table 17 Gantt results for rule-based method
Table 18 Gantt results for combination method
Table 19 CM-1 results for naïve method, 0.03 threshold
Table 20 CM-1 results for naïve method, 0.09 threshold
Table 21 CM-1 results for TF-IDF method
Table 22 CM-1 results for rule-based method
Table 23 CM-1 results for combination method



Cite this article

Holbrook, E.A., Hayes, J.H., Dekhtyar, A. et al. A study of methods for textual satisfaction assessment. Empir Software Eng 18, 139–176 (2013). https://doi.org/10.1007/s10664-012-9198-8

