Bugcrowd, for anyone not familiar, is a managed crowdsourced bug bounty service. They’re getting into penetration testing and released a report to sell the benefits of crowdsourced testing versus traditional penetration testing. The report is found here.
Disclaimer: I am a penetration tester, and I admit some bias. I have some limited experience with bug bounties. I’ll keep an open mind and a neutral perspective because there are pros and cons to everything. My goal is to ensure the truth doesn’t get buried by marketing. The security landscape is dynamic, so I embrace these changes and welcome all new perspectives that keep people and organizations secure. I’m cheering for Bugcrowd and curious to see what the future holds.
Some background on bug bounties: a company like Bugcrowd or HackerOne lists organizations that participate in its bug bounty program along with details about each program, particularly the applications or IP addresses “in scope” that a researcher may target and how much the organization is willing to pay for each type of vulnerability. Researchers then get to work and submit vulnerabilities, if found. Often, it is a race against the clock. Once a program is posted, researchers search for vulnerabilities, and the first one to report a given vulnerability (though not necessarily the one who fixes it) is the one who gets credit (recognition or payment).
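To make the first-reporter-wins rule concrete, here is a minimal sketch of the payout logic. The `Submission` type and `award_bounties` function are hypothetical illustrations, not Bugcrowd’s actual system:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Submission:
    researcher: str
    vuln_id: str      # fingerprint identifying the reported vulnerability
    timestamp: float  # seconds since the program was posted


def award_bounties(submissions: list[Submission]) -> dict[str, str]:
    """Return {vuln_id: researcher} for the earliest report of each
    vulnerability; later duplicates earn nothing, even by minutes."""
    winners: dict[str, str] = {}
    for sub in sorted(submissions, key=lambda s: s.timestamp):
        # setdefault keeps only the first researcher seen per vuln_id
        winners.setdefault(sub.vuln_id, sub.researcher)
    return winners
```

For example, if two researchers report the same vulnerability five minutes apart, only the earlier report is paid; the later researcher’s effort goes unrewarded regardless of how much time it took.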
Bugcrowd takes 12 pages to get to the point that crowdsourced penetration testing finds more, higher-value vulnerabilities. They don’t define how quantity relates to risk or what “high value” actually means. A high-value finding for one client may not be so for another. These things take time and many conversations to distill. Can Bugcrowd leverage the intelligence of their crowdsourced bug finders and combine that with stellar client relationships?
Bugcrowd discusses a few disadvantages of penetration testing in an attempt to build some buzz. In my opinion, some (not all) are baseless, little evidence is provided to back the claims, and little attempt is made to fairly compare the two approaches. These issues pile up and make me question the report’s credibility. I get the feeling that this is going to be one-sided, presenting Bugcrowd’s program in the best light. It isn’t all doom and gloom; thankfully there are good points that make sense and hint at changes that pen testers should prepare for. None of their points seems monumental.
Diving in, Bugcrowd lists several gaps in the current pen testing model:
Scheduling wait times and months of delay. Many pen testing shops are a group of consultants and operate like doctor’s offices. Clients are scheduled ahead of time to keep everyone busy. If a client cancels or delays, that leaves an opening for another job, so there is some wiggle room.
In my opinion, if a consultancy is booked out for months, they are either 1) in high demand and demonstrably very good at what they do or 2) taking on too much work. A caveat is the busy 4th quarter, when everyone suddenly realizes they want their testing done by the end of the year. From a client perspective, a pen test isn’t something to set up in a day. It should be planned and coordinated internally. What I commonly notice is that clients delay starting, are slow to sign contracts, or are uncertain about what assets to test. Having these conversations early, setting expectations, and thoroughly planning helps avoid delays and keeps problems to a minimum. In reality, a large percentage of organizations are barely ready for a pen test with three to four weeks of lead time.
Tests are difficult to extend. This issue is easily avoided with proper planning and scoping. It also varies throughout the year, just like scheduling. A good consultancy builds in some wiggle room. If the original tester is booked on a different project, another can pick up the following week if a job needs to be extended.
Incompatible and lacking incentives. This is Bugcrowd’s double-edged sword in disguise. A client should not expect to receive the most experienced consultant, nor should they be concerned about getting someone who is incompetent. A junior consultant has a solid foundation with the backing of their entire team; they’re not allowed to run solo. A good QA process keeps everyone consistent and addresses quality.
Bugcrowd pays researchers to find and report as many vulnerabilities as fast as possible. This model incentivizes finding low-hanging fruit. The first researcher who finds and reports a vulnerability is paid while all others are not, even if subsequent researchers trail the first report by minutes, and even if they spent significant time. This model could miss chained vulnerabilities, where one leads to another, and another, until the chain reaches a catastrophic breach. Researchers are disincentivized from digging deep, where those “high value” vulnerabilities lurk, at the risk of losing a payout.
The reality of pen testing work involves slogging through tough challenges that take hours, days, or more to crack. Patience and tenacity are essential skills.
Someone who works for a per-vulnerability cash incentive is less concerned with what is best for the client. How many unpaid researchers would join a pre-engagement kickoff call to discuss a client’s needs, concerns, risk appetite, business cycles, and fragile systems? With no guarantee of being paid, there is no incentive to spend this time with a client.
Pen testers are paid to deliver quality. I can’t speak for all, but there’s a level of professionalism and passion for the work. A pen tester works overtime, delights the client, and goes above and beyond to deliver more value than expected. There is an incentive to go beyond surface-level findings because they care about the client, the relationship, and their reputation; not only the dollar.
Slow results. Bugcrowd argues this causes a delay in fixing vulnerabilities. I agree, though I’m dubious about the value. A report takes time to write, but a superior pen tester notifies clients of the highest-risk findings immediately. Some clients are ready and able to act on this; unfortunately, most are not. People are busy, and adding security remediation to an already overflowing plate is a challenge. We retest client environments years later and discover the same critical vulnerabilities, the same credentials in use, and the same working exploits. Some pen testing firms offer a client portal that provides up-to-date findings. This can be incredibly valuable for clients who need the information quickly and are ready to act on it.
Skill fit. Not everyone can know everything. A quality consultancy makes an effort to match the skill set to client needs. Bugcrowd makes no mention of how they address this. I assume they draw from a large pool of testers, which can work.
Checklists can be good and bad, depending on how they are used. They help testers ensure all bases are consistently covered and act as a guideline. They maintain consistency from one test to another, from one tester to another, and over time. They must be flexible, as no two environments are exactly alike. Checklists are a major reason why the airline industry has such a high safety record. Not using checklists in this type of work leads to inconsistency over time.
Point-in-time testing. Not to be blatantly obvious, but a penetration test is a point in time by design. It is part of a continuous vulnerability management or software development lifecycle (SDLC) program. Any test can be run continuously, yet each instance is still a point in time no matter how thinly you slice it.
Lack of SDLC integration. From a security perspective, a client should be hesitant to integrate an outside organization into their ticketing/issue tracking system. That becomes another security vulnerability to manage. Mature pen testing shops generate reports in formats that work for the client.
Poor results
“…findings are interspersed with false positives and no-risk issues, making them hard to identify and resolve.”
I have to draw the line here and call this pure sales fluff. A quality pen test report sorts findings logically, typically by risk, with common sense applied to what matters to the client. Findings are validated to minimize false positives. In rare cases where findings cannot be validated, they should be well documented with relevant variables and mitigations. In some cases, low- or informational-risk findings are not validated but are included for documentation purposes.
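As an illustration of risk-based ordering, a report generator might rank findings on a severity scale like the one below. The scale and field names here are hypothetical; real reports often use CVSS scores or client-specific risk ratings:

```python
# Hypothetical severity scale; lower rank sorts first in the report.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3, "informational": 4}


def sort_findings(findings: list[dict]) -> list[dict]:
    """Order findings highest risk first so the report leads with what matters most."""
    return sorted(findings, key=lambda f: SEVERITY_RANK[f["severity"]])
```

Ordering this way means a reader who only skims the first pages still sees the critical items, rather than wading through informational noise to find them.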
The ROI of pen testing. This is a valid point, yet incredibly difficult to measure quantitatively and repeatably. Proving it requires a head-to-head comparison of a crowdsourced versus a traditional penetration test. Due to the security-related sensitivity of these things, it is extremely difficult to make such a comparison publicly.
Bugcrowd surveyed 129 individuals for this report, a seemingly tiny sample. Several pretty charts and cool-sounding buzzwords suggest the rise of crowdsourcing and the waning popularity of traditional penetration testing. The ultimate claim that this is “the end of an era” remains to be seen.
This isn’t Bugcrowd’s first foray into crowdsourced penetration testing. They’ve been at it for two or three years in what seems like uncoordinated efforts. I hoped Bugcrowd would uncover some good ideas and predictions for the future of pen testing. In some ways, they did, but they also left loose ends in their strategy. The main takeaway is that security providers, consultancies, and pen testers must embrace changes in our rapidly evolving security landscape and our clients’ needs.
Crowdsourced bug-bounty programs offer great value to organizations, and I believe they’re moving things in a positive direction. With good execution, they can deliver penetration testing, but a crowdsourced strategy built upon their existing business model may not be the best approach.