Learning better IV&V practices

  • Original Paper
  • Published in: Innovations in Systems and Software Engineering

Abstract

After data mining National Aeronautics and Space Administration (NASA) independent verification and validation (IV&V) data, we offer (a) an early life-cycle predictor for project issue frequency and severity; (b) an IV&V task selector that uses the predictor to find the appropriate IV&V tasks; and (c) pruning heuristics describing which tasks to ignore when the budget cannot accommodate all selected tasks. In ten-way cross-validation experiments, the predictor performs very well indeed: the average F-measure for predicting four classes of issue severity was over 0.9. This predictor is built using public-domain data and software. To the best of our knowledge, this is the first reproducible report of a predictor for issue frequency and severity that can be applied early in the life cycle.
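For readers unfamiliar with the metric: the F-measure for a class is the harmonic mean of precision and recall, F = 2 * precision * recall / (precision + recall), and the "over 0.9" figure is this score averaged over the four severity classes. The sketch below shows the general shape of such a ten-way cross-validation evaluation; it uses synthetic data and an off-the-shelf decision-tree learner purely for illustration, since this page does not specify the authors' actual learner, features, or toolchain.

    # Illustrative sketch only: the learner, features, and data below are
    # placeholders, not the authors' actual pipeline or dataset.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import StratifiedKFold
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import f1_score

    # Stand-in for early life-cycle project attributes, each labelled with
    # one of four issue-severity classes (synthetic data).
    X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                               n_classes=4, random_state=1)

    fold_scores = []
    for train, test in StratifiedKFold(n_splits=10, shuffle=True,
                                       random_state=1).split(X, y):
        model = DecisionTreeClassifier(random_state=1).fit(X[train], y[train])
        pred = model.predict(X[test])
        # F-measure per class: 2 * precision * recall / (precision + recall)
        fold_scores.append(f1_score(y[test], pred, average=None))

    # Average across the ten folds (one score per severity class),
    # then a single mean over the four classes.
    per_class = np.mean(fold_scores, axis=0)
    print("per-class f-measure:", per_class, "mean:", per_class.mean())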



Author information


Corresponding author

Correspondence to Tim Menzies.

Additional information

This research was conducted at West Virginia University and the NASA IV&V Facility under NASA subcontract project 100005549, task 5e, award 1002193r. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not constitute or imply its endorsement by the United States Government.

See http://menzies.us/pdf/07ivv.pdf for an earlier draft.


About this article

Cite this article

Menzies, T., Benson, M., Costello, K. et al. Learning better IV&V practices. Innovations Syst Softw Eng 4, 169–183 (2008). https://doi.org/10.1007/s11334-008-0046-3

