Many people may not picture the Secunia Research team as particularly social, but once in a while we do go out and meet like-minded individuals for good food, beers, and an exchange of our latest and greatest security rants. Recently, on one of those occasions, we were asked: "Why do you spend time verifying vulnerability reports?"
While the answer seems obvious to us at Secunia, it is a very good and valid question. It may not be apparent to others, even security researchers, why additional verification is needed. Why is our perception of this so different?
Secunia's advisory team invests countless hours every day verifying vulnerability reports that turn out to be either non-issues or partially incorrect. You don't see us issuing advisories for the non-issues, but there are days when the number of reports "killed" by the advisory team exceeds the number of advisories we publish. Experiencing this day after day for several years changes your perspective.
Are there really that many fake reports? We don't actually encounter many of those; we are more concerned about partially incorrect reports or reports missing important information. A report may appear obviously wrong, e.g. if a quick web search for the product name returns zero hits. However, what if the reporter doesn't speak English and auto-translated an Arabic product name? Suddenly, zero hits are not so surprising anymore. There are also cases where a report looks correct and legitimate, but there's a typo in a parameter or file name. How would you know without installing and testing the application yourself? How can you properly evaluate the criticality or potential workarounds if you never spend time tracking down the root cause of a reported "buffer overflow"? How can you even be sure it isn't a completely different problem altogether? Is a kernel crash really just a crash, or is there potential for privilege escalation, and how do you know without, e.g., analysing the source code?
Spending countless hours, whether installing a dozen unknown web applications or reversing and analysing the latest exploits, is not only about catching incorrect reports; it also routinely yields new information. Sometimes it's simple things, like confirming that the latest version is vulnerable when the original report lacks version information. Often it's more interesting: discovering that an "unspecified memory corruption" is actually an integer overflow, that a reported DoS vulnerability in fact allows code execution, that a vendor's patch is inadequate, or that there are additional attack vectors or even entirely new vulnerabilities.
The short answer to the question is, of course: "To ensure that our customers and community receive the most accurate, trustworthy, and reliable Vulnerability Intelligence."