DOI: 10.1145/2811681.2811697

Threshold-based prediction of schedule overrun in software projects

Published: 28 September 2015

ABSTRACT

Risk identification is the first critical task of risk management: it informs the planning of measures to deal with risks. Although software projects carry a high risk of schedule overruns, current risk management practice mostly relies on high-level guidance and the subjective judgement of experts. In this paper, we propose a novel approach that supports risk identification using historical data associated with a software project. Specifically, our approach identifies patterns of abnormal behaviour that caused project delays in the past and uses this knowledge to build an interpretable predictive model that forecasts whether current software tasks (in the form of issues) will cause a schedule overrun. Abnormal behaviour is identified using a set of configurable, threshold-based risk factors. The approach aims to provide not only predictive models but also an interpretable outcome that can be read as patterns of combinations of risk factors. Evaluation results from two case studies (Moodle and Duraspace) demonstrate the effectiveness of our predictive models, which achieve 78% precision, 56% recall, 65% F-measure, and 84% Area Under the ROC Curve.
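For illustration only, the sketch below shows one way configurable threshold-based risk factors and an interpretable classifier could be combined, in the spirit of the abstract. The issue attributes, factor names, and threshold values are hypothetical assumptions and are not taken from the paper; the decision tree stands in for whatever interpretable model the authors use.

```python
# Minimal sketch (not the authors' code): threshold-based risk factors over
# issue records, feeding an interpretable decision tree. All attribute names
# and cut-off values below are hypothetical.
from dataclasses import dataclass
from sklearn.tree import DecisionTreeClassifier, export_text


@dataclass
class Issue:
    discussion_time_days: float   # time spent in discussion before work starts
    reassignments: int            # number of times the assignee changed
    fix_version_changes: int      # number of times the target release moved
    delayed: bool                 # ground truth: did the issue overrun?


# Configurable thresholds: an issue "exhibits" a risk factor when the
# observed value exceeds the configured cut-off.
THRESHOLDS = {
    "long_discussion": ("discussion_time_days", 14.0),
    "many_reassignments": ("reassignments", 2),
    "fix_version_churn": ("fix_version_changes", 1),
}


def risk_factor_vector(issue: Issue) -> list[int]:
    """Binarise an issue into 0/1 risk-factor indicators."""
    return [int(getattr(issue, attr) > cut) for attr, cut in THRESHOLDS.values()]


def train_risk_model(history: list[Issue]) -> DecisionTreeClassifier:
    """Fit an interpretable model on historical (factor vector, delayed) pairs."""
    X = [risk_factor_vector(i) for i in history]
    y = [i.delayed for i in history]
    model = DecisionTreeClassifier(max_depth=3, random_state=0)
    model.fit(X, y)
    return model


if __name__ == "__main__":
    history = [
        Issue(20.0, 3, 2, True),
        Issue(1.0, 0, 0, False),
        Issue(16.0, 1, 0, True),
        Issue(2.0, 0, 1, False),
    ]
    model = train_risk_model(history)
    # The fitted tree can be read as combinations of risk factors, mirroring
    # the interpretable outcome the abstract describes.
    print(export_text(model, feature_names=list(THRESHOLDS)))
    print("Predicted delay:",
          model.predict([risk_factor_vector(Issue(30.0, 4, 3, False))]))
```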



Published in

ASWEC '15 Vol. II: Proceedings of the ASWEC 2015 24th Australasian Software Engineering Conference
September 2015, 171 pages
ISBN: 978-1-4503-3796-0
DOI: 10.1145/2811681
Copyright © 2015 ACM


Publisher

Association for Computing Machinery, New York, NY, United States



            Qualifiers

            • short-paper
            • Research
            • Refereed limited

            Acceptance Rates

ASWEC '15 Vol. II paper acceptance rate: 12 of 27 submissions, 44%. Overall acceptance rate: 12 of 27 submissions, 44%.
