DOI: 10.1145/3475716.3484193

Why Some Bug-bounty Vulnerability Reports are Invalid?: Study of bug-bounty reports and developing an out-of-scope taxonomy model

Published: 11 October 2021

ABSTRACT

Background: Despite the increasing popularity of bug-bounty platforms in industry, little empirical evidence exists to identify the nature of invalid vulnerability reports. Mitigating invalid reports is a serious concern for organisations running or using bug-bounty platforms, as well as for security researchers. Aims: In this work we aim to identify: (i) why some reports are considered invalid, and (ii) the characteristics of reports considered invalid due to being out-of-scope. Method: We conducted an empirical study of disclosed invalid reports on HackerOne to examine the reasons these reports are marked as invalid, and found that being out-of-scope is the leading reason. Since all out-of-scope reports were rejected according to the program's policy page, we studied all program policy pages in two major bug-bounty platforms to understand the characteristics of an out-of-scope report. We developed a generalised out-of-scope taxonomy model and used it to further analyse HackerOne out-of-scope reports and find the leading attributes of the model that contribute to the fate of these reports. Results: We identified out-of-scope, followed by false-positive, as the two main reasons for a report to be deemed invalid. We found that the vulnerability-type attribute of our taxonomy model is the leading characteristic of out-of-scope reports. We also identified the top 9 out-of-interest vulnerability types according to policy pages. Conclusions: Our study can help bug-bounty platforms and researchers better understand the nature of invalid reports. Our finding about the importance of vulnerability type in validating reports can justify future work on automated classification techniques based on vulnerability types to better triage invalid reports. Our top 9 out-of-interest vulnerability types can be used as a blacklist to automatically flag possibly out-of-scope reports. Finally, our generalised out-of-scope taxonomy model can serve as a base model to guide organisations in creating and tailoring their own policy pages.
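
To make the blacklist idea from the conclusions concrete, the sketch below shows one minimal way a platform might pre-triage incoming reports against a per-program list of out-of-interest vulnerability types. This is an illustrative Python sketch, not code or data from the paper: the type names in OUT_OF_SCOPE_TYPES and the pre_triage function are hypothetical placeholders, and the set shown is not the study's actual top-9 list.

from dataclasses import dataclass

# Hypothetical blacklist of out-of-interest vulnerability types; a real
# program would populate this from its policy page (e.g. with the top-9
# types the study identifies, which are not reproduced here).
OUT_OF_SCOPE_TYPES = {
    "clickjacking",
    "self-xss",
    "missing-security-headers",
}

@dataclass
class Report:
    title: str
    vulnerability_type: str  # e.g. tagged by the reporter or a classifier

def pre_triage(report: Report) -> str:
    """Return a triage hint based only on the reported vulnerability type."""
    # Normalise variants such as "Self XSS" / "self-xss" before lookup.
    key = report.vulnerability_type.strip().lower().replace(" ", "-")
    return "likely-out-of-scope" if key in OUT_OF_SCOPE_TYPES else "needs-review"

# Example: a clickjacking report is flagged as likely out-of-scope.
print(pre_triage(Report("UI redressing on login page", "Clickjacking")))

A pre-filter like this would only produce a hint for human triagers, since a report's true scope status still depends on the rest of the program's policy.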


Published in

ESEM '21: Proceedings of the 15th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM)
October 2021
368 pages
ISBN: 9781450386654
DOI: 10.1145/3475716

Copyright © 2021 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery, New York, NY, United States

      Publication History

      • Published: 11 October 2021

Qualifiers

• Research article
• Refereed limited

Acceptance Rates

ESEM '21 Paper Acceptance Rate: 24 of 124 submissions, 19%
Overall Acceptance Rate: 130 of 594 submissions, 22%