DOI: 10.1145/2786805.2804429

REMI: defect prediction for efficient API testing

Published: 30 August 2015

Abstract

Quality assurance for common APIs is important because the reliability of an API affects the quality of every system that uses it. Testing is the usual way to ensure API quality, but it is a challenging and laborious task, especially in industrial projects: with a large number of APIs, tight time constraints, and limited resources, it is hard to write enough test cases for every API. To address these challenges, we present REMI, a novel technique that predicts which APIs carry a high risk of containing bugs, allowing developers to write more test cases for those high-risk APIs. We evaluate REMI on a real-world industrial project, Tizen-wearable, and apply it to the API development process at Samsung Electronics. Our evaluation shows that REMI predicts bug-prone APIs with reasonable accuracy (0.681 f-measure on average), and that applying REMI to the Tizen-wearable development process increases the number of bugs detected while reducing the resources required for executing test cases.




Published In

ESEC/FSE 2015: Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering
August 2015
1068 pages
ISBN: 9781450336758
DOI: 10.1145/2786805


Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. API Testing
  2. Defect Prediction
  3. Quality Assurance

Qualifiers

  • Short-paper

Conference

ESEC/FSE'15

Acceptance Rates

Overall Acceptance Rate 112 of 543 submissions, 21%

