Understanding the Reproducibility of Crowd-reported Security Vulnerabilities
Open Access
- Author:
- Cuevas Villalba, Alejandro Ed
- Area of Honors:
- Security and Risk Analysis
- Degree:
- Bachelor of Science
- Document Type:
- Thesis
- Thesis Supervisors:
- Peng Liu, Thesis Supervisor
- Dinghao Wu, Thesis Honors Advisor
- Keywords:
- vulnerability
- cve
- dataset
- reproducibility
- user-study
- report
- Abstract:
- Today’s software systems increasingly rely on the “power of the crowd” to identify new security vulnerabilities. Yet it is not well understood how reproducible crowd-reported vulnerabilities are. In this study, we perform the first empirical analysis of a wide range of real-world security vulnerabilities (368 in total) with the goal of quantifying their reproducibility. Following a carefully controlled workflow, we organize a focused group of security analysts to carry out reproduction experiments. With 3600 man-hours spent, we obtain quantitative evidence on the prevalence of missing information in vulnerability reports and on the low reproducibility of the vulnerabilities. We find that reproduction based on a single vulnerability report from a popular security forum is generally unlikely to succeed due to incomplete information. By crowdsourcing the information gathering more widely, security analysts can increase the reproduction success rate, but they still face key challenges in troubleshooting the non-reproducible cases. To further explore solutions, we surveyed hackers, researchers, and engineers with extensive domain expertise in software security (N=43). Going beyond Internet-scale crowdsourcing, we find that security professionals rely heavily on manual debugging and speculative guessing to infer the missing information. Our results suggest not only a need to overhaul the way security forums collect vulnerability reports, but also a need for automated mechanisms to collect the information commonly missing from reports.