A glimpse into GitHub’s Bug Bounty workflow
Last month, we announced the third anniversary of our Bug Bounty Program. While there’s still time to disclose your findings through the program, we wanted to pull back the curtain and give you a glimpse into how GitHub’s Application Security team triages and runs it.
Over its three years, the program has evolved to streamline our internal processes and to resolve (and pay out!) submitted issues as quickly as possible. As with most processes, we have iteratively refined and formalized the steps we take for every bounty submission received through the program. Ideally, the details of our process in this post will help other security teams looking to launch or improve their own bounty process, as well as offer transparency to our researchers.
Additionally, we have released a HackerOne API client library developed for our workflow. We hope other bug bounty teams using HackerOne can leverage it to add or improve automation within their own programs.
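For teams curious what basic use of the HackerOne API looks like, here is a minimal sketch (in Python, and not our internal tooling) of pulling newly submitted reports for triage. The endpoint shape, filter parameters, and credential handling shown are assumptions for illustration; check HackerOne’s API documentation for the authoritative details.

```python
import os
import requests

H1_API = "https://api.hackerone.com/v1/reports"
# Placeholder credentials: HackerOne API tokens use an identifier/token pair for basic auth.
AUTH = (os.environ["H1_API_TOKEN_ID"], os.environ["H1_API_TOKEN"])

def fetch_new_reports(program_handle):
    """Return reports submitted to the given program that are still in the 'new' state."""
    params = {
        "filter[program][]": program_handle,
        "filter[state][]": "new",
    }
    response = requests.get(H1_API, params=params, auth=AUTH)
    response.raise_for_status()
    return response.json()["data"]

if __name__ == "__main__":
    for report in fetch_new_reports("example-program"):
        print(report["id"], report["attributes"]["title"])
```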
Initial contact
When the GitHub Application Security Team launched the program in 2014, we had several key goals in mind. One in particular was to ensure that the people taking the time to research and find vulnerabilities in our products were treated, and communicated with, in a way that respected the time and effort they put into the program. We have strived to maintain a knowledgeable and appreciative first response to every submission received.
As the Application Security team has grown in responsibility and duties, it has become hard to balance the effort required for thorough review and communication against other daily tasks. To maintain sufficient attention to the Bounty program, each member of the Application Security team rotates daily through our on-call First Responder schedule. One major task during this assigned day each week is handling incoming Bug Bounty triage.
The volume of submissions to the program has increased significantly each year (2014, 2015). Given this growth, we identified major advantages in having one person handle Bug Bounty triage each day. This allows the First Responder to focus exclusively on the Bug Bounty program without context switching to other work. If it is a particularly slow day, they can spend the time catching up on the triage backlog or pushing along previous issues that may have stalled on development or other tasks. The First Responder then owns each issue they pick up until it is closed out or resolved. This helps to evenly distribute the load over time, reduce duplication of effort in getting up to speed on a submission, and ensure researchers have consistent communication throughout the process.
Having a daily First Responder also helps reduce mental duplication of work. Even when a team member isn’t on call, it’s tempting to check the incoming reports for urgent tasks. Making one person responsible per day allows everyone else to completely check out of Bug Bounty work and focus elsewhere, knowing that any high-risk issues will be handled immediately. In practice, specific members may still be pulled in as subject matter experts in areas related to a submission.
Finally, setting a schedule and committing to consistent triage keeps the inbox from growing without bound. It has also helped us avoid periods of Bug Bounty neglect when other team priorities, such as internal code review for new feature launches, demand our full attention.
First response action items
To ensure consistency throughout the team and create a smoothly flowing process for researchers, we have created some guidelines for the First Responder’s handling of initial triage. These are not a strict set of rules imposed on the First Responder, but a workflow that we have found most useful for triaging the submissions we receive. Additionally, to streamline the process for the First Responder, we utilize canned responses for a number of common submissions that typically do not make it to the further stages of triage.
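To illustrate the idea, a canned-response lookup can be as simple as a small table keyed by closure reason. The categories and wording below are placeholders, not our actual templates.

```python
# Hypothetical canned responses keyed by closure reason; the categories and
# wording here are illustrative placeholders, not GitHub's actual templates.
CANNED_RESPONSES = {
    "out_of_scope": (
        "Thanks for your report! The asset you tested is currently outside the "
        "scope of our Bug Bounty program, so this submission is not eligible."
    ),
    "known_issue": (
        "Thanks for your report! This behavior is already known to our team and "
        "is considered an accepted, low-risk issue."
    ),
    "ineligible": (
        "Thanks for your report! This class of issue is listed as ineligible in "
        "our program rules."
    ),
}

def first_response(reason):
    """Return the canned response for a closure reason, if one exists."""
    return CANNED_RESPONSES.get(reason)
```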
The general steps taken during the initial handling of a submission are:
- Respond and close out the submission if it falls outside the scope of the Bug Bounty program. By setting a defined scope for the Bounty program, we can strategically release new targets into scope and focus researchers on areas that we think are interesting and important to our core business.
- Respond and close out the submission if the issue has been previously identified, is an issue we are aware of and consider low risk, or is one of our commonly received and ineligible submissions. Sometimes we receive reports that are obviously invalid or very low risk. We close out those reports during this initial stage as well.
- If the submission looks valid, risky, and new, we respond to the researcher letting them know that validation is underway and that we will be in touch once we have an update. Responding before validation allows us to quickly follow up with researchers to let them know we are taking their report seriously, while still being able to take our time to sufficiently vet and understand the issue at hand.
- Validation can typically be performed directly by the Application Security team member performing the triage. All members of the team have a very strong understanding of our products, along with access to testing and development environments, logs, and source code. This allows us to perform the bulk of validation before escalating to the development teams. At this point we can either move the submission forward with triaging or respond back to the submitter asking for more information to help us reproduce the issue.
- If we have validated the issue, or have reached the limits of our initial validation and need expertise from the engineering team, we open an internal tracking issue under the relevant source repository on GitHub (a minimal sketch of this step follows the list). In this issue we provide the full details from the researcher, any details from our initial validation, and typically a set of questions we need clarified by the feature’s or product’s engineers. We then use this issue to work through the root cause of the submission and the best methods for remediation. We communicate with the engineering team to help derive an initial risk rating for the issue, using critical, high, medium, or low severity buckets, and attach it to the issue as a label. This helps the engineering team appropriately prioritize any resulting engineering effort. If the issue was not previously validated by the First Responder, they contact the researcher after the engineers have helped us determine the submission’s validity.
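As referenced in the last step above, here is a minimal sketch of opening such a tracking issue with a severity label through the public GitHub REST API. The repository name, label scheme, and token handling are hypothetical, and our internal workflow is not necessarily implemented this way.

```python
# Minimal sketch: open an internal tracking issue labeled by severity using the
# GitHub REST API. Repository, label names, and token handling are placeholders.
import os
import requests

GITHUB_API = "https://api.github.com"
HEADERS = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}  # placeholder token

def open_tracking_issue(repo, report_title, report_body, severity):
    """Create a tracking issue in the feature's repository, labeled by severity."""
    assert severity in {"critical", "high", "medium", "low"}
    payload = {
        "title": f"[bounty] {report_title}",
        "body": report_body,
        "labels": [f"severity: {severity}"],  # hypothetical label scheme
    }
    response = requests.post(f"{GITHUB_API}/repos/{repo}/issues", json=payload, headers=HEADERS)
    response.raise_for_status()
    return response.json()["html_url"]
```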
When we communicate our decision on the validity of the issue to a researcher, we also detail next steps in the process. These steps include the engineering team or Application Security team developing a fix based on the discussed remediation and the Application Security team determining the finalized risk of the issue.
Risk assessment
To determine the risk used for our internal prioritization and as a mapping to our payout structure for bounty submissions, we group issues into fairly broad buckets:
- Critical: Critical severity issues present a direct and immediate risk to a broad array of our users or to GitHub itself. These issues often impact relatively low-level or foundational components of our application stacks or infrastructure. Some examples of Critical issues would be remote code execution, exploitable SQL injection, or a full authentication or authorization bypass.
- High: High severity issues allow an attacker to read or modify highly sensitive data that they are not authorized to access. High severity issues are generally more narrow in scope than critical issues, though they may still grant an attacker extensive access. Some examples would be Cross-Site Scripting (XSS) that also includes a bypass of our Content Security Policy (CSP) or a gap in our authorization enforcement that allows a significant escalation in privilege.
- Medium: Medium severity issues allow an attacker to read or modify limited amounts of data that they are not authorized to access. Medium severity issues generally grant access to less sensitive information than high severity issues. A couple of examples would be an XSS issue that does not bypass CSP, a bypass of CSRF protection for a low impact endpoint, or an access control issue that provides a very limited disclosure of sensitive information.
- Low: Low severity issues allow an attacker extremely limited, or even no, access to data that they are not authorized to access. Low risk issues frequently violate an expectation of how something is intended to work, but allow nearly no escalation of privilege for an attacker. These issues could potentially be used as part of an exploit chain, but provide little risk on their own. Additionally, issues that are a violation of best practices, but are not exploitable, fall into this bucket.
We use this same rating to determine the payout for a vulnerability, as well as to express prioritization to the engineering teams. The Application Security team also maps each of these risk buckets to a recommended target time to fix, the urgency with which we should escalate the issue, and, if it affects GitHub Enterprise, what the patch release cycle should look like.
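Purely as an illustration, that mapping can be thought of as a small lookup table from risk bucket to payout range and target time to fix. The dollar ranges and fix windows below are placeholder values, not our actual payout table or timelines.

```python
# Illustrative mapping from risk bucket to payout range and target time to fix.
# The dollar ranges and fix windows are placeholder values, not GitHub's actual
# payout table or SLAs.
RISK_BUCKETS = {
    "critical": {"payout_range": (10_000, 20_000), "target_fix_days": 1},
    "high":     {"payout_range": (4_000, 10_000),  "target_fix_days": 7},
    "medium":   {"payout_range": (1_000, 4_000),   "target_fix_days": 30},
    "low":      {"payout_range": (200, 1_000),     "target_fix_days": 90},
}

def triage_guidance(severity):
    """Look up the recommended payout range and fix window for a severity bucket."""
    return RISK_BUCKETS[severity]
```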
Fixing the Issue
All vulnerabilities identified, either internally or externally through the Bug Bounty program, are handled within GitHub’s Engineering teams the same as any other bug would be. Application Security offers our recommendations around remediation and prioritization based on the determined risk. For some issues, depending on the root cause, a member of the Application Security team will fix the issue and request a review by the responsible engineering team. In other cases, such as a larger, more impactful change, the engineering team will take the lead, consulting with the Application Security team for validation of the issue. In all cases, the specific vulnerability is not only fixed, but investigation and analysis is performed to see if other similar code paths could have similar vulnerabilities.
In addition to code fixes, we strongly believe that, as with all bugs, test cases should have caught the issue before it shipped. We work with the engineering teams to ensure proper test coverage is included as part of the remediation work, specifically reviewing that we have negative test cases. Similarly, we have internal tooling in place to perform static analysis during the development lifecycle. We use vulnerabilities as a chance to refine and improve the analysis performed during development to catch similar issues in future development.
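As a sketch of what we mean by a negative test case, the example below exercises a toy authorization rule and asserts that access is denied where it should be. The access model is a stand-in for illustration, not our actual code.

```python
# A hypothetical negative test: verify that the authorization check fails closed
# for users who are not collaborators. The access rule below is a toy stand-in;
# the point is that the unwanted behavior gets regression coverage.
def can_read(repo, user):
    """Toy authorization rule: private repos are visible only to collaborators."""
    return not repo["private"] or user in repo["collaborators"]

def test_private_repo_hidden_from_non_collaborators():
    repo = {"private": True, "collaborators": {"alice"}}
    assert can_read(repo, "alice")        # positive case: collaborator retains access
    assert not can_read(repo, "mallory")  # negative case: outsiders must be denied
```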
Closing the loop with the researcher
Depending on the timeline for the corresponding engineering work, a fix may or may not be shipped by the time we get to the fun part: rewarding our intrepid researchers for their hard work. Using the determined risk bucket, we derive a dollar amount to pay for the submission. Over the past three months, we have paid bounty hunters over $80,000 in rewards, with an average award of $1,200 per payout.
After the payout has been determined and communicated, we use HackerOne to issue the payout amount and send some GitHub Security Swag to the researcher. We then close out the report on HackerOne.
More perks
In addition to a cash payout, there are a few other perks we award to our researchers. As an added bonus, we apply a coupon to the researcher’s GitHub account providing 100% off unlimited private repositories for a year. For repeat researchers, we extend a lifetime coupon.
We will also add researchers to the @GitHubBounty organization. If they accept the team invitation, a Security Bug Bounty Hunter badge is added to their public GitHub profile. We use this organization to enable soon-to-be-released features for researchers, giving them a head start on finding vulnerabilities.
Report transparency
When we started the program in 2014, we wanted a way to be transparent about the submissions we received and fixed. We determined that the best way to do so would be to build and maintain a GitHub Pages and Jekyll-based site at https://bounty.github.com. We use this to give an extra shoutout to our researchers and publish a quick writeup of the submissions we have paid out. We feel that this helps our users know what we are doing to fix and secure our products, gives our researchers credit for their awesome work, and hopefully helps other application security teams learn from our experiences with the program.
Automating the process
In running the program, we noticed that the final two steps, applying coupons and team memberships to a researcher’s GitHub account and writing up posts for the bounty site, were consuming a fair amount of our time. These tasks usually occurred after a fix had shipped and carried less urgency than the rest of the process, sometimes going stale and being forgotten due to the manual steps required.
With HackerOne’s release of an API, we took the opportunity to automate these final steps. By issuing a command in our chat system, we can open a PR to our GitHub Pages repo for the bounty site as well as apply coupons and team membership to the researcher’s GitHub account. The bounty site post is templated with the majority of the required metadata, allowing us to get to the meaty part of the writeup without unnecessary copy and pasting. The hackerone-client library was developed to interface this internal tooling with the HackerOne API.
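Here is a rough sketch of what that chat-triggered step could look like using the public GitHub REST API. The repository, branch naming, post template, and token handling are assumptions for illustration; our internal chatops implementation differs in its details.

```python
# Sketch of the "open a bounty-site PR from chat" step: template a Jekyll post
# from report metadata, commit it to a new branch, and open a pull request.
# Repository names, branch names, and the post layout are hypothetical.
import base64
import os
from datetime import date

import requests

GITHUB_API = "https://api.github.com"
HEADERS = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}  # placeholder token
PAGES_REPO = "example-org/bounty-site"  # placeholder GitHub Pages repository

POST_TEMPLATE = """---
title: "{title}"
severity: {severity}
researcher: "{researcher}"
---

TODO: write up the submission.
"""

def open_bounty_post_pr(report_id, title, severity, researcher):
    """Template a Jekyll post for a paid-out report and open a PR against the Pages repo."""
    branch = f"bounty-report-{report_id}"
    # Jekyll posts expect a date-prefixed filename.
    path = f"_posts/{date.today():%Y-%m-%d}-report-{report_id}.md"
    post = POST_TEMPLATE.format(title=title, severity=severity, researcher=researcher)

    # Branch off the default branch.
    base_sha = requests.get(
        f"{GITHUB_API}/repos/{PAGES_REPO}/git/refs/heads/main", headers=HEADERS
    ).json()["object"]["sha"]
    requests.post(
        f"{GITHUB_API}/repos/{PAGES_REPO}/git/refs",
        json={"ref": f"refs/heads/{branch}", "sha": base_sha},
        headers=HEADERS,
    ).raise_for_status()

    # Commit the templated post to the new branch.
    requests.put(
        f"{GITHUB_API}/repos/{PAGES_REPO}/contents/{path}",
        json={
            "message": f"Add bounty writeup for report {report_id}",
            "content": base64.b64encode(post.encode()).decode(),
            "branch": branch,
        },
        headers=HEADERS,
    ).raise_for_status()

    # Open the pull request for review before publishing.
    pr = requests.post(
        f"{GITHUB_API}/repos/{PAGES_REPO}/pulls",
        json={"title": f"Bounty writeup: {title}", "head": branch, "base": "main"},
        headers=HEADERS,
    )
    pr.raise_for_status()
    return pr.json()["html_url"]
```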
What’s next
GitHub’s Bug Bounty program has been evolving for the past three years and we’ve learned from the peaks and valleys it has experienced. We have seen moments of overwhelming participation that tax our resources, as well as moments of neglect as our team has shifted priorities at times. Because of these experiences, we’ve been able to create a process that allows our team to work smartly and efficiently.
As we expand the program in the future, we will continue to adapt our tools and processes to fit our needs. We would love feedback from both bug bounty researchers as well as other bug bounty teams. Send us a message on Twitter at @GitHubSecurity.
GitHub’s Application Security team is also looking to expand. If you are interested in working alongside us, check out our Application Security Engineer job listing.