Bug Bounty Service
Vulnerability Disclosure Philosophy
Researchers should follow these guidelines:
* Respect the rules. Operate within the rules set forth by the Security Team, or speak up if in strong disagreement with the rules.
* Respect privacy. Make a good faith effort not to access or destroy another user's data.
* Be patient. Make a good faith effort to clarify and support your Reports upon request.
* Do no harm. Act for the common good through the prompt reporting of all found vulnerabilities. Never willfully exploit others without their permission.
Security Teams should follow these guidelines:
* Prioritize security. Make a good faith effort to resolve reported security issues in a prompt and transparent manner.
* Respect Researchers. Give Researchers public recognition for their contributions.
* Reward research. Financially incentivize security research when appropriate.
* Do no harm. Never take unreasonable punitive actions against Researchers, such as making legal threats or referring matters to law enforcement.
We are committed to protecting the interests of Researchers. However, vulnerability disclosure is an inherently murky process. The more closely your behavior matches these guidelines, the better we can protect you if a difficult disclosure situation escalates.
Security Teams will publish a program policy designed to guide security research into a particular service or product. You should always carefully review this program policy prior to submission, as it will supersede these guidelines in the event of a conflict.
If you believe you have found a vulnerability, please submit a Report to the appropriate program on the BUGCAMP platform. The Report should include a detailed description of your discovery with clear, concise steps to reproduce or a working proof-of-concept. If you don't explain the vulnerability in detail, the disclosure process may be significantly delayed, which is undesirable for everyone.
The Report is updated when significant events occur, such as when a vulnerability is validated, additional information is needed, or the Researcher qualifies for a bounty.
Vulnerability Disclosure Process
The contents of the Report are immediately provided to the Security Team and initially kept non-public to allow sufficient time for the Security Team to publish remediation measures. After the Report has been closed, both the Researcher and the Security Team may request the disclosure of information regarding the vulnerability.
* Default: If neither party raises an objection, the contents of the Report will be made public within 30 days.
* Mutual agreement: We encourage the Researcher and Security Team members to remain in open communication regarding disclosure timelines. If both parties are in agreement, the contents of the Report can be made public on a mutually agreed timeline.
* Protective disclosure: If the Security Team has evidence of active exploitation or imminent public harm, they may immediately provide remediation details to the public so that users can take protective action.
* Extension: Due to complexity and other factors, some vulnerabilities will require longer than the default 30 days to remediate. In these cases, the Report may remain non-public to ensure the Security Team has an adequate amount of time to address a security issue. We encourage Security Teams to remain in open communication with the Researcher when these cases occur.
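The four disclosure paths above can be read as a simple decision rule. As an illustrative sketch only (the names and fields here are hypothetical, not part of the BUGCAMP platform), the logic might look like:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Default window: Report contents become public 30 days after closure.
DEFAULT_WINDOW = timedelta(days=30)

@dataclass
class ClosedReport:
    """Hypothetical model of a closed Report awaiting disclosure."""
    closed_on: date
    agreed_date: Optional[date] = None      # set when both parties agree on a timeline
    active_exploitation: bool = False       # evidence of exploitation or imminent harm
    extension_until: Optional[date] = None  # remediation needs more than 30 days

def disclosure_date(report: ClosedReport, today: date) -> date:
    """Return the date on which the Report contents become public."""
    if report.active_exploitation:
        # Protective disclosure: publish remediation details immediately.
        return today
    if report.agreed_date is not None:
        # Mutual agreement: a jointly chosen timeline overrides the default.
        return report.agreed_date
    if report.extension_until is not None:
        # Extension: the Report stays non-public until remediation completes.
        return report.extension_until
    # Default: contents become public within 30 days of closure.
    return report.closed_on + DEFAULT_WINDOW
```

Note how protective disclosure takes precedence over every other path, and the 30-day default applies only when neither party has raised an objection.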
Some Researchers may receive invitations to private Programs. Your participation in a private Program is entirely optional and subject to strict non-disclosure by default. Prior to accepting an invitation to a private Program, Researchers should carefully review any program policies and non-disclosure agreements required for participation. Researchers who intend any form of public disclosure should not participate in private Programs.
You may receive public recognition for your find if:
1) you are the first person to file a Report for a particular vulnerability;
2) the vulnerability is confirmed to be a valid security issue; and
3) you have complied with these guidelines.
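The three criteria combine as a simple conjunction. As an illustrative sketch only (this helper is hypothetical, not BUGCAMP platform code):

```python
# Hypothetical helper mirroring the three recognition criteria above;
# all three must hold for public recognition.
def eligible_for_recognition(first_to_report: bool,
                             confirmed_valid: bool,
                             followed_guidelines: bool) -> bool:
    """Return True only when every recognition criterion is satisfied."""
    return first_to_report and confirmed_valid and followed_guidelines
```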
* Security Team
A team composed of individuals responsible for addressing security issues found in a product or service. Depending on the circumstances, this team may consist of an organization's formal security team, a group of volunteers within an open-source project, or an independent panel of volunteers (such as the Internet Bug Bounty).
* Researcher
Also known as hackers. Anyone who has investigated a potential security issue in some form of technology, including academic security researchers, software engineers, system administrators, and even casual technologists.
* Report
A Researcher's description of a potential security vulnerability in a particular product or service. On BUGCAMP, Reports always start out as non-public submissions to the appropriate Security Team.
* Vulnerability
A software bug that would allow an attacker to perform an action in violation of an expressed security policy. A bug that enables escalated access or privilege is a vulnerability. Design flaws and failures to adhere to security best practices may qualify as vulnerabilities. Weaknesses exploited by viruses, malicious code, and social engineering are not considered vulnerabilities unless the Security Team says otherwise in the program's policy.
* Program
Security Teams may publish a Program and Program Policy designed to guide security research into a particular service or product. If this Program is private, your participation is entirely optional and subject to non-disclosure by default.