ABSTRACT
Background: Despite the increasing popularity of bug-bounty platforms in industry, little empirical evidence exists on the nature of invalid vulnerability reports. Mitigating invalid reports is a serious concern for organisations running or using bug-bounty platforms, as well as for security researchers. Aims: In this work we aim to identify: (i) why some reports are considered invalid, and (ii) what the characteristics are of reports considered invalid for being out-of-scope. Method: We conducted an empirical study of disclosed invalid reports on HackerOne to examine the reasons these reports are marked as invalid, and found that being out-of-scope is the leading reason. Since all out-of-scope reports were rejected according to the program's policy page, we studied all program policy pages on two major bug-bounty platforms to understand the characteristics of an out-of-scope report. We developed a generalised out-of-scope taxonomy model and used it to further analyse HackerOne out-of-scope reports, identifying the attributes of the model that contribute most to the fate of these reports. Results: We identified out-of-scope, followed by false positive, as the two main reasons for a report to be deemed invalid. We found that the vulnerability-type attribute of our taxonomy model is the leading characteristic of out-of-scope reports. We also identified the top 9 out-of-interest vulnerability types according to policy pages. Conclusions: Our study can help bug-bounty platforms and researchers better understand the nature of invalid reports. Our finding on the importance of vulnerability type in validating reports motivates future work on automated classification techniques that triage invalid reports based on vulnerability type. Our top 9 out-of-interest vulnerability types can be used as a blacklist to automatically flag a report as possibly out-of-scope.
Finally, our generalised out-of-scope taxonomy model can serve as a base model that guides organisations in creating their policy pages and tailoring them to their needs.
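The blacklist use of out-of-interest vulnerability types described above can be sketched as a simple lookup against a report's declared vulnerability type. This is a minimal illustrative sketch, not the paper's implementation; the category names below are hypothetical placeholders standing in for the paper's actual top 9 list, which is given in the full text.

```python
# Hypothetical sketch of blacklist-based out-of-scope flagging.
# The entries below are illustrative placeholders, NOT the paper's
# actual top-9 out-of-interest vulnerability types.
OUT_OF_INTEREST_TYPES = {
    "self-xss",
    "clickjacking",
    "missing security headers",
}

def likely_out_of_scope(vulnerability_type: str) -> bool:
    """Return True if the reported vulnerability type matches the blacklist."""
    return vulnerability_type.strip().lower() in OUT_OF_INTEREST_TYPES

print(likely_out_of_scope("Clickjacking"))    # True: on the blacklist
print(likely_out_of_scope("SQL injection"))   # False: not blacklisted
```

In practice such a check would only pre-classify a report as *possibly* out-of-scope for triage; each program's own policy page would still determine the final verdict.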
Why Some Bug-bounty Vulnerability Reports are Invalid? Study of Bug-bounty Reports and Developing an Out-of-scope Taxonomy Model