cyber.dhs.gov's Introduction

Welcome to cisagov

Welcome to cisagov, the GitHub home for the Cybersecurity and Infrastructure Security Agency (CISA)!

This repository aims to make it easier for people who work at or with CISA to get started with GitHub and Free and Open Source Software (FOSS).

For developer-focused documentation and guides, please visit our development-guide repository.

Common questions

Getting started with GitHub

  • How do I make a GitHub account?
  • Why should I add my work email instead of making a separate work-only GitHub account?
    • GitHub's terms of service say to use one account per person. Any commits made will be associated with the user who created them, and GitHub allows for granular, role-based access that can also be revoked when someone departs CISA.
  • How do I use GitHub? Where do I start?

GitHub access

Policies and content guides

  • What belongs in cisagov versus another GitHub organization?
    • Here are some questions to ask when considering posting a project:
      • Does CISA use or develop the software? Is it developed by or for one of the groups or divisions within CISA?
        • If not, we recommend the authors create their own GitHub organization and post their work there
      • What source control system is in place currently?
        • Many source control systems and hosting platforms, such as Mercurial and GitLab, can export the entire development history for import into GitHub; this is by far the preferred method
      • Is the project still under active development or is it in maintenance?
        • If the project no longer has a team performing maintenance, we recommend the repository be archived to make that clear to people who may want to use it
  • What belongs on cisa.gov versus on cisagov?
    • The cisa.gov site is aimed primarily at an audience outside of CISA, such as Critical Infrastructure partners or the public
    • cisagov serves internal users, external users, and partners alike; it exists to share projects with the public as well as within CISA
  • Working in public (dos and don'ts, best practices)
    • As a best practice, use the cisagov organization issue templates and pull request templates. These templates are available by default in all repositories created in the cisagov organization.
    • As a best practice, we require code reviews before merging pull requests. This is done using branch protection.
  • When should I talk to CISA External Affairs (EA)?
    • Early and often!
  • What is CISA's open source policy?
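
The review-before-merge practice described under "Working in public" can be enforced with GitHub branch protection. A minimal sketch of the protection settings follows (field names as in the GitHub REST API's branch-protection endpoint; the one-approval threshold is an example for illustration, not a stated cisagov policy):

```json
{
  "required_pull_request_reviews": {
    "required_approving_review_count": 1,
    "dismiss_stale_reviews": true
  },
  "required_status_checks": {
    "strict": true,
    "contexts": []
  },
  "enforce_admins": true,
  "restrictions": null
}
```

Applied to a branch (for example via `PUT /repos/{owner}/{repo}/branches/{branch}/protection`), settings like these block merging until at least one review approves and required status checks pass.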

Feedback and contact

Have an idea about how to make these pages better? File an issue!

For any repository-specific questions or feedback, please make an issue in that repository so the appropriate team will see it.

For more about CISA as an agency or any of its subcomponents, please visit the About CISA page on cisa.gov.

For other GitHub-related questions, feel free to email us.

Developer resources

We have a cisagov development-guide repository, which contains coding standards, steps for setting up a development environment, and other information.

Contributing

We welcome contributions! Please see CONTRIBUTING.md for details.

Thanks

We would like to thank the General Services Administration and 18F, the Consumer Financial Protection Bureau, Department of Defense, and Office of Management and Budget for their work in blazing the path for the use of FOSS in the U.S. federal government.

License

This project is in the worldwide public domain.

This project is in the public domain within the United States, and copyright and related rights in the work worldwide are waived through the CC0 1.0 Universal public domain dedication.

All contributions to this project will be released under the CC0 dedication. By submitting a pull request, you are agreeing to comply with this waiver of copyright interest.

cyber.dhs.gov's People

Contributors

antifreeze, cablej, climber-girl, dependabot[bot], fartbagxp, felddy, h-m-f-t, hallewellgov, joshuastankus, jsf9k, konklone, mlove-9, rpeaston, sckooop, snyk-bot, todrobbins, wa9ace, yoaaiz


cyber.dhs.gov's Issues

Explicit requirement for inclusion of agency OGC in policy development

Develop and Publish a Vulnerability Disclosure Policy

[...]
3. Publish a vulnerability disclosure policy as a web page in plain
text or HTML.
[...]
a) The policy must include:
[...]
iv. A commitment to not recommend or pursue legal action
against anyone for security research activities that the
agency concludes represents a good faith effort to follow
the policy, and deem that activity authorized.

It's worth considering that this BOD will probably initially land amongst OCIO (i.e., the "techy" crowd), but the required deliverables need to be worked alongside the "lawyerly" folks at any given agency. Specifically, the section quoted above could use a subordinate clause that states OGC has reviewed and concurred with any assertions made regarding pursuit of legal action.

I can work a pull request with some draft language.

Consider adding references to alternative prioritization schemes

Footnote 20:

One approach is to attach a risk score to the vulnerability, which can help to establish priority. The goal of risk scoring at this stage is to quickly provide an organization a sense of the severity and potential impact of a vulnerability. These scores will be subjective. An agency might score the potential impact of the disclosed vulnerability to their system or service’s confidentiality, integrity, and availability with severity rankings of ‘low’, ‘moderate’, ‘high’, ‘not applicable’ (out of scope, negligible, not enough information), and ‘incident’ (should any of those already be compromised) for each metric. See the TTS/18F Handbook in the prior footnote.

I'd suggest adding a reference to Prioritizing Vulnerability Response: A Stakeholder-Specific Vulnerability Categorization from Carnegie Mellon University's Software Engineering Institute https://resources.sei.cmu.edu/library/asset-view.cfm?assetid=636379

Another related reference is FAIR https://www.fairinstitute.org/what-is-fair

(Disclaimer: I'm one of the coauthors of the SEI report.)
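
As a concrete illustration of the footnote's scheme, here is a minimal sketch in Python (the ranking names come from the footnote; the ordering and the "worst metric wins" rule are assumptions for illustration):

```python
# Severity rankings from the footnote, ordered from least to most severe.
# Treating "not applicable" as lowest is an assumption for this sketch.
RANK_ORDER = ["not applicable", "low", "moderate", "high", "incident"]

def triage_priority(confidentiality: str, integrity: str, availability: str) -> str:
    """Return the worst ranking across the three impact metrics."""
    return max([confidentiality, integrity, availability], key=RANK_ORDER.index)

print(triage_priority("low", "high", "moderate"))  # -> high
```

A real scoring scheme would likely weight the metrics and fold in exploitability, but even this coarse "worst metric" rule is enough to sort an intake queue.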

Binding Operational Directive 20-01: on Public Disclosure

While it's noted in the final paragraph before the FAQ:

Agencies may require that the researcher give the agency a defined window of time to address the vulnerability before public disclosure, but should not seek to limit publication after this window of time has passed, or after the vulnerability has been addressed.

This information is unfortunately absent from the directives on producing the policy itself.

Information on how and when a researcher will be able to disclose information to the public is a vital part of the vulnerability disclosure process. Without clearly stated guidelines on public disclosure, disclosure programs naturally tend to become closed disclosure programs, as researchers fear legal reprisal for their work if they publish. Those who run disclosure programs often pay less attention to timely remediation if the researcher is strongly disinclined to, or explicitly disallowed from, making information about the issue and its remediation timeline public, and the program becomes a way to hide such information.

There should be clear guidance in "Develop and Publish a Vulnerability Disclosure Policy" on what public disclosure policies are appropriate, and direction to publish one as part of the disclosure policy. If at all possible, it'd be great to see a stipulation of a maximum reasonable disclosure period. It's understandable that government organizations implementing this directive might have disclosure periods longer than industry standard, but they shouldn't be forever.

Add DKIM to domains that don't send email

We're about to update our own guidance on this:
https://www.gov.uk/guidance/protect-domains-that-dont-send-email
to suggest people add a wildcard DKIM record to make it more likely email will get rejected.

"Create an empty DKIM record with:
type: TXT
host or name: *._domainkey
value: v=DKIM1; p=

As this is a wildcard record you cannot check it other than to look in the DNS host admin panel.
Revoke all existing DKIM selectors in both TXT and CNAME records.
This record will make email servers more likely to reject email from your domain."
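
In zone-file form, the empty wildcard DKIM record described above would look something like this (example.gov is a placeholder domain):

```
*._domainkey.example.gov.  IN  TXT  "v=DKIM1; p="
```

Because the p= tag is empty, verifiers treat any selector under the domain as having a revoked key, making it more likely that spoofed, DKIM-signed mail is rejected.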

Comment on target timelines for Response & Remediation activities

Wonderful to see the directive require "Target Timelines" for various VDP activities. In my experience, organizational alignment around these metrics drives healthy programs and increasingly progressive targets can be a powerful tool for further increasing security maturity. Two suggestions here:

(1) Consider elevating your recommendations from footnotes 21, 22, and 23 into the text of the directive itself. These recommended targets are great defaults and strike me as being worthy of highlighting front and center else I'd be concerned that they are easily missed or ignored.

(2) For any organization, Response Targets are a journey, not a destination. Consider encouraging agencies to (a) set more progressive targets where feasible and (b) perform a periodic evaluation to determine if targets can be improved further.

Emailed comment from the General Services Administration

  • Recommend DHS provide additional guidance on security.txt fields. For example, DHS could provide some more information about which fields it thinks we should have and which ones are optional.
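
    For reference, a minimal security.txt with commonly used fields might look like this (field names per the security.txt specification, RFC 9116; the addresses and URLs are placeholders):

    ```
    Contact: mailto:security@example.gov
    Expires: 2026-12-31T23:59:00.000Z
    Policy: https://example.gov/vulnerability-disclosure-policy
    Preferred-Languages: en
    ```

    Under RFC 9116, Contact and Expires are the required fields; the rest are optional.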

  • BOD-20-01 requires a 15 day turnaround for adding information to the Org/Security POC field. For some agencies, completing this task within the allocated time frame might be a large amount of work. This could be changed to 30-45 days, as this is not high risk. If there is not a compelling threat forcing a 15 day turnaround, we recommend relaxing it.

  • Has DHS consulted with privacy and legal experts on this? We are reviewing this from an IT security and operational perspective. We recommend coordination with the Privacy Council.

  • Can DHS provide guidance or consider updating the BOD to provide policy flexibility for vendor-owned/vendor-operated systems? There might be contractual issues that can impact applying the VDP to them. We had similar issues with BOD 18-01.

  • For Section “Vulnerability Disclosure Handling Procedures”: Handling the vulnerability reports once the policy is published will require additional resources to address, but this is an unfunded mandate.

  • Is a BOD the correct policy vehicle for the VDP requirement? An alternative would be to start this process as guidance and possibly move towards a BOD as agencies learn more about setting up these programs.

Sincerely,

Armando Quintananieves
Director (acting), Security Operations Division
Office of the Chief Information Security Officer
U.S. General Services Administration

Add an FAQ regarding the pct flag

@seanthegeek makes an excellent point in the trustymail repo about how an agency could sidestep the intent of the BOD: if a pct flag isn't used, the implicit value is 100, but in October an agency could have p=reject applied to less than 100 percent of mail from their domain. That's bad, and they should feel bad, so we should add some guidance.
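
A check for the sidestep described above could look like the following sketch (the tag parsing is deliberately simplified; `dmarc_effective_reject` is a hypothetical helper for illustration, not part of trustymail):

```python
def dmarc_effective_reject(record: str) -> bool:
    """Return True only if p=reject applies to 100% of a domain's mail."""
    tags = {}
    for part in record.split(";"):
        if "=" in part:
            key, _, value = part.strip().partition("=")
            tags[key.lower()] = value.strip()
    policy = tags.get("p", "none").lower()
    pct = int(tags.get("pct", "100"))  # no pct tag means an implicit 100
    return policy == "reject" and pct == 100

print(dmarc_effective_reject("v=DMARC1; p=reject"))          # True
print(dmarc_effective_reject("v=DMARC1; p=reject; pct=50"))  # False
```

The second record would pass a naive "is p=reject present?" check while still letting half the domain's mail through unenforced, which is exactly the loophole the FAQ should call out.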

Order BODs on nav from most recent to least?

Right now, the oldest BODs are on the top of the side nav. It seems likely that people will be coming to the site more for recent BODs than older ones, and putting them at the top of the nav bar would make them more discoverable.

Recommend Availability of Encryption (i.e.: PGP) to secure report submission

While a web-based form will naturally use a secure communication protocol (HTTP over TLS, for example), email is often not secure end-to-end.

Where the severity of a potential vulnerability warrants it, or at a security researcher's discretion, a secure email communication channel should be made available. This may be as simple as adding a "PGP Key" (the public component of a public/private key pair) to the security.txt file.

To be clear, I mean that secure end-to-end email reporting and discussion should be available to researchers and agencies, not that it should be mandatory.
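
Concretely, advertising a key for encrypted reports could be a single additional field in security.txt (the addresses and URL here are placeholders; the Encryption field is defined in the security.txt specification):

```
Contact: mailto:security@example.gov
Encryption: https://example.gov/pgp-key.txt
```

Researchers who want end-to-end confidentiality can fetch the key and encrypt their report; everyone else can ignore the field, which keeps the channel optional rather than mandatory.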

Display absolute dates for deadlines somewhere

The BOD uses relative times (120 days from...), and it would be much more helpful to just have specific dates shown somewhere.

I would definitely recommend updating the Checklist to show those (even if in addition to the relative dates).

It would also be great to change the text on the homepage, but if you want to keep that text matched to the PDF, then I would link the "120 days from..." text to the Checklist and have the Checklist express the deadlines as absolute dates.
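
For illustration, converting a relative deadline to an absolute date is a one-liner; a sketch in Python (the issuance date below is hypothetical):

```python
from datetime import date, timedelta

# Hypothetical issuance date; substitute the directive's actual date.
issuance = date(2020, 9, 2)

# "120 days from issuance" as an absolute date.
deadline = issuance + timedelta(days=120)
print(deadline.isoformat())  # -> 2020-12-31
```

The Checklist could be generated this way from a single issuance date, so the relative and absolute forms never drift apart.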

Emailed Comment from IBM X-Force IRIS

Hello,

I reviewed the VDP and request rewording of paragraph 6 on page 6.

Within 270 calendar days after the issuance of this directive, and within every 90 calendar days thereafter, the scope of the VDP must increase by at least one internet-accessible system or service.

I wasn't clear on what this paragraph was communicating to the reader.

Thank you,

Carlos Carrillo

Emailed comment from the Department of Energy

Comment No. Location Comment (and type*) or Recommended Edit Rationale
1 Pg. 1, §1 Critical: A Binding Operational Directive (BOD) is an ill-suited vehicle to mandate this type of policy. DOE does support the development of vulnerability disclosure policies; however, DOE believes this should be done through an alternative policy deliberation process that allows for more discussion and thoughtful review of costs, operational and security impacts, and exceptions. Per FISMA, the purpose of a BOD is "to safeguard Federal information and information systems from a known or reasonably suspected information security threat, vulnerability, or risk," yet the BOD does not directly identify a "known or reasonably suspected information security threat, vulnerability, or risk". DOE has seen no data to suggest that a lack of VDP has significantly hindered federal cybersecurity, particularly given the significantly aggressive timeline (e.g., 15 days). Given this, the development of VDP through the policy process would be preferred.
2 Pg. 1, §1 Substantive: Request the risk rationale and cost-benefit analysis supporting the promulgation of this draft proposed policy. Appropriate use of the BOD process – Due to the very high cost and mission impact of imposing a Binding Operational Directive, this regulatory vehicle should be held to a very high bar for appropriateness and prioritized risk reduction. DHS does not disclose the process used to determine that this proposed directive addresses the most critical risk; the directive has no risk reduction target, and its metrics are aligned to ensure reporting, not the reduction of risk.
3 Pg. 1, §1 Critical: Insert exclusion clause for out of scope systems such as classified National Security Systems (NSS). The DOE has a unique set of missions that require coordination with both the DoD and the Intelligence Community, that requires additional consideration. NIST provides the following definition for NSS: "any information system (including any telecommunications system) used or operated by an agency or by a contractor of an agency, or other organization on behalf of an agency—(i) the function, operation, or use of which involves intelligence activities; involves cryptologic activities related to national security; involves command and control of military forces; involves equipment that is an integral part of a weapon or weapons system; or is critical to the direct fulfillment of military or intelligence missions (excluding a system that is to be used for routine administrative and business applications, for example, payroll, finance, logistics, and personnel management applications); or (ii) is protected at all times by procedures established for information that have been specifically authorized under criteria established by an Executive Order or an Act of Congress to be kept classified in the interest of national defense or foreign policy." Based on this definition, DOE will exclude systems and services related to nuclear technology, other technologies deemed sensitive or related to national security, and all systems and services that interact directly with DoD or the Intelligence Community from the scope of this BOD and will make note of such exception within its policy and handling procedures.
4 Pg. 3, §1 Substantive: Remove all “Bug Bounty” references and discussion. Discussion of “Bug Bounty” is not relevant to the direction provided in the draft proposed policy. Rather, the current language only provides notification of OMB's intent to convene an unnamed group of agencies to consider “leveraging bug bounty programs.” Inclusion of Bug Bounty references is confusing and generally reinforces the misconception that VDP and Bug Bounty are the same.
5 Pg. 4, §1 Substantive: Extend .gov registrar information from 15 business days to 60 days. While this may be accomplished for the “DOE.GOV” domain, the vast majority of the Department's “.GOV” footprint is managed in the Management & Operating (M&O) contract space to include .org and .edu addresses. For example, the DOE has a minimum of 35 second-level .gov domains and 3,878 subdomains. The inferred “capability to receive unsolicited reports about potential security vulnerabilities” for the diverse, federated DOE community will require additional time considering the Department's diverse internet domain footprint.
6 Pg. 4, §3 Critical: Recommend rewording the requirement to direct Agencies to develop a plan for a VDP and handling procedures within 180 days of BOD issuance and begin implementing their VDP for internet-facing systems and services according to their plans' prescribed timeline. As written, this BOD will have significant cost implications considering the DOE's federated nature and broad internet presence. Given a lack of clear correlation between unreported vulnerabilities and federal risk, it is hard to justify the resources necessary to meet the timelines and requirements as written. Should agencies have the opportunity to develop their respective plans for VDP within 180 days and have additional time to implement, this would allow each Agency to take their respective challenges into account without significant unbudgeted costs. This is an unfunded mandate with sweeping impact to Agency contracts, which will be made significantly more difficult given the tight timeline. If Agencies were instead directed to develop a plan for implementing VDP within 180 days (as opposed to having their VDP in place within 180 days), they could assess and consider total costs and impacts as implemented. Notably, the DOD model required 11 full-time equivalents (FTEs) to implement a similar VDP; replicating this at DOE is not included in the existing budget allocations. Due to the nature of the DOE's organization, National Laboratories are often contractor owned, contractor operated (COCO) or government owned, contractor operated (GOCO), each organization type requiring strict contracts and agreements to manage operations and costs. Applicability of the BOD to federated elements – an open question remains as to the applicability of the BOD process to government owned, contractor operated M&O institutions. Contract modifications for either organizational model will be required for each National Laboratory to implement the requirements as prescribed.
Extensive contract modifications will be lengthy and necessary with major cost implications, which are not currently budgeted for. In addition the cost in additional FTEs to execute VDP requirements will likely be higher on Laboratories as the rates will be based on contractor and vendor wage rates.
7 Pg. 4, §3 Critical: If the request to require a plan for VDP and handling procedure development within 180 days is rejected, DOE requests extending the requirement for publication of a VDP from 180 calendar days to one year of directive issuance. The accelerated DOE policy publication process takes an average of 7-8 months, making it difficult to comply with the current timeline as provided. Due to the DOE's federated environment, proposed policies must undergo extensive program office reviews to ensure alignment with contractual and legal requirements. In addition, ensuring clear guidance on how to submit vulnerability reports requires that tools and resources to facilitate that process be identified, procured, and ready for use by the time the policy is published. Ensuring the proper tools and resources are in place will require more than 180 calendar days to organize and confirm, and, as mentioned, extensive contract modifications will be needed for existing laboratories and operations, which can take over a year to finalize and go into effect. The compressed timeline for policy issuance, underlying timelines for implementation, and expanded scope may force DOE to outsource vulnerability disclosure reporting, handling procedures, and the program itself to third-party bug bounty platforms at greater cost in order to meet the compliance schedule instead of building an internal program the DOE can manage and mature. The cost and process impact to DOE sites will have a material impact on DOE field elements. This impact will be disproportionately acute for the smaller sites.
8 Pgs. 4, 5, 6; §3.a.i., 5, and 8.a.iv Critical: Request clarification on definition of internet-accessible production system or service to handle unique contractual relationships and obligations. There is ambiguity over what internet accessible systems and services are in scope and whether that includes internet-accessible systems and services owned and controlled by an Agency or any internet-accessible system or service used by an Agency or which contains Agency information. Similar to other Agencies, the DOE may use internet-accessible systems and services (e.g., cloud service providers and software as a service) to execute operations but those systems and services may be leased from, contractually owned, and provided by a third party service provider or partnering institution. The DOE's VDP cannot sanction testing of third party-owned assets or offer any legal protections. This is of particular concern to the DOE given its unique structure whereby Laboratories often share systems and services with external partners and third parties, or where information on such systems is not owned by the DOE but is research owned by other institutions that is being held on DOE assets.
9 Pgs. 4, 5, §3.a.iii, 3.a.v Critical: If the request to require a plan for VDP and handling procedure development within 180 days is rejected, DOE requests extending the requirement for publication of vulnerability disclosure handling procedures from 180 calendar days to one year of directive issuance. The accelerated DOE policy publication process takes an average of 7-8 months, making it difficult to comply with the current timeline as provided. Due to the DOE's federated environment, proposed policies must undergo extensive program office reviews to ensure alignment with contractual and legal requirements. The requirement to describe how vulnerability reports may be submitted, including where and what information to include, requires a clear submission process and access to the proper tools or resources needed to execute that process. Determination and deployment of such processes or supporting tools and resources are likely to require more than 180 days to complete, which makes documenting them in formal policy difficult to achieve.
10 Pg. 5, §3.a.v Substantive: Add a clause that enables agencies to define the level of remediation detail provided based on the sensitivity in VDP rules of engagement and as applicable to mission delivery. Agencies should have the flexibility to determine the level of information that should be shared with reporters depending on the sensitivity of the vulnerability. With a range of missions from open science to nuclear, the level of transparency should be appropriate for the program risk.
11 Pg. 5, §3.b.ii Critical: Agencies should not be restricted in their ability to register authorized parties for testing with minimal burden of personal information and to limit the participation of others. Due to the potential sensitivity of IT assets, information, and operations, the ability to flag potential testers in alignment with each Agency's mission should be provisioned. Although allowing anyone, anywhere to test federal systems is a noble goal, the additional resources needed to respond to all un-vetted testers will have a significant impact on cyber operations. Without a method to vet and communicate with reporters, agencies have no way to constrain resource loading (e.g., on cloud usage), with direct impact on staff (e.g., SOC resources on false positives) and funding levels. It will also make it harder to separate malicious actors from authentic security researchers. For example, registered users could be allowed to use a valid VPN for testing, tracking, and flow management.
12 Pg. 5, §3.b.iii Critical: Add clause to enable agencies to define a required time-limited response period as part of their "good faith" criteria that may not extend past 180 days. By only allowing for a 'request for a reasonably time-limited response period', agencies have even less control over how to prioritize and address reported vulnerabilities and may expend limited resources to remediate all vulnerabilities immediately to meet the time-limited response period prior to disclosure to other parties. DOE is a unique agency that has significant interactions with DoD and the Intelligence Community. Given that the current BOD also does not allow for limiting testers to vetted reporters, there is significant risk that this would increase DOE's exposure to malicious actors if reporters are given the ability to immediately report vulnerabilities to third parties as well. For example, since reporters are not precluded from submitting their identified vulnerability to 'others' and can do so at any time, even with a request by an Agency for a time-limited response time, this could encourage active testing to then provide criminal or malicious actors with information on methods for exploitation. Since 'others' is not defined, any third party could be contacted by the reporter, including nation-state actors and registered or known federal hackers and criminal actors. With the requirement under 3.b.v to be transparent about remediation efforts, that same information could be presented to 'others' who could then execute an exploit on the vulnerability either prior to remediation or immediately after, having found a new method of exploiting the vulnerability. It is premature for this directive to identify a time restricted period because agencies do not know the volume of reports they will receive and the amount of time it will take to respond and remediate.
Without this information, agencies cannot realistically determine what a reasonable defined time would be, and it will vary from agency to agency. However, while the time-limited response period will vary according to each agency, the volume of vulnerabilities received, and the severity of those vulnerabilities, because agencies will not be able to exceed 180 days the spirit and goal of VDP will remain intact.
13 Pg. 6, § 7 Editorial: Recommend limiting initial scope to include only domains and systems/services owned by an Agency. Some DOE sites use .edu and other domains, based on their contract and research relationships, which complicates the potential process. Scope of this draft proposed policy is explicitly limited to “.gov” domains. This will not capture DOE M&O footprint and will incentivize movement away from the “.gov” domain. Additionally, there are a number of systems and services used by DOE contractors but not owned by any DOE elements or program offices. Should all systems and services used by an Agency be in scope, this will have significant legal ramifications should unrelated third parties be exposed to testing through an Agency VDP.
14 Pg. 7, §9.a-c Substantive: Recommend identifying a preferred or required method for submission of reports to CISA. The method for how an agency should report any valid or credible reports, or escalate reports to CISA for action or visibility, is not identified. To ensure consistent reporting and compliance, and that all reporting is channeled to the proper point of contact for escalation and consideration, a specific mailbox or reporting method is required. The method should also identify the core data fields needed by CISA. Procedures setting expectations for CISA response and review of submissions are also requested to ensure timely and transparent communication with agency stakeholders, system or service owners, and reporters, per requirement 3.a.v.
15 Pg. 7, §9.c Substantive: Strike section entirely. Current language is too broad and vague. "Any situation that is deemed helpful or necessary" is too open ended, and almost any instance could be construed to fit under that definition. With an inundation of reports that can broadly align themselves to this criterion, agencies will expend significant resources to review and address all reported potential vulnerabilities (or even customer complaints).
16 Pg. 7, §10 Substantive: Execution of requirements under item #10 will require an update to the FISMA CIO Metrics through OMB. Coordination with OMB is required, but its status is unknown. FISMA quarterly reporting is based on requirements set forth by OMB through the FISMA CIO Metrics. Inclusion of additional reporting requirements in FISMA quarterly submissions will require alignment with OMB and the FISMA CIO Metrics; we recommend removing an equal number of other metrics to keep the reporting burden neutral.
17 Pg. 7, §10 Critical: Recommend extending requirement for updated FISMA quarterly reporting metrics from 270 calendar days from directive issuance to 270 days from issuance of Agency policy and handling procedures. Compliance with associated requirements will take time to implement. Following publication of an Agency policy and handling procedures, the DOE's federated departmental elements, program offices, sites, and national laboratories will need time to not only incorporate the policy and procedures but also execute them properly. Gathering metrics with integrity for inclusion in FISMA reporting will thus require more time than the prescribed timeline.

* Critical: Fundamental error or omission, Substantive: Preferred change, Editorial: Grammatical or stylistic

Emailed comment from the Department of Commerce

Most federal agencies lack a formal mechanism to receive information from third parties about potential security vulnerabilities on their systems. Many agencies have no defined strategy for handling reports about such issues shared by outside parties. Only a few agencies have clearly stated that those who disclose vulnerabilities in good faith are authorized.

The root cause of agencies' "lack of a formal mechanism" should also be addressed to encourage transparency and openness. That root cause tends to be embarrassment at having the vulnerabilities, which discourages sharing them. Doing more to socialize the idea that the presence of vulnerabilities shouldn't be hidden will also help increase transparency and sharing.

Agencies should recognize that “a reporter or anyone in possession of vulnerability information can disclose or publish the information at any time”. A key benefit of a vulnerability disclosure policy is to reduce risk to agency infrastructure and the public by incentivizing coordinated disclosure so there is time to fix the vulnerability before it is publicly known.

There have been instances in the past when public disclosure of vulnerabilities on Federal systems has led to prosecution. So any “coordinated disclosure” needs to include a mechanism to ensure prosecution will not result from the disclosure.

Within 15 business days after the issuance of this directive, update the following at the .gov registrar:

  1. The security contact field for each .gov domain registered. The email address defined as the security contact must be regularly monitored, and personnel managing it must be capable of triaging unsolicited security reports for the entire domain.

A potential risk: publicizing the ".gov registrar" email address in this way opens it up to malicious actors, who could create a DoS attack by flooding the address with fake vulnerability information.

Within 180 calendar days after the issuance of this directive: 3. Publish a vulnerability disclosure policy as a web page in plain text or HTML.

Rather than assigning the creation of a policy to agencies to create independently, recommend that CISA lead an effort to coordinate the creation of a policy that federal agencies can use, which takes into account other CISA initiatives, such as CDM. This would allow for a more efficient and successful development and short and long term implementation of the policy.

Thank you,
DOC IT Security

Emailed comment from the Cybersecurity Coalition

December 26, 2019

Cybersecurity and Infrastructure Security Agency
Department of Homeland Security
245 Murray Lane
Washington, D.C. 20528

Submitted electronically to [email protected]

Re: Draft of Binding Operational Directive on Developing a Vulnerability Disclosure Policy

The Cybersecurity Coalition (“Coalition”) submits this comment in response to the request for comment issued by the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (“CISA”) on November 27, 2019, titled “Binding Operational Directive 20-01 (draft), Develop and Publish a Vulnerability Disclosure Policy.” (The “BOD”). The Coalition appreciates the opportunity to provide these comments and commends CISA for providing guidance to Federal agencies on the development and implementation of vulnerability disclosure policies and vulnerability disclosure handling procedures. The Coalition further commends CISA for soliciting public commentary on the Binding Operational Directive.

The Coalition is composed of leading companies with a specialty in cybersecurity products and services dedicated to finding and advancing consensus policy solutions that promote the development and adoption of cybersecurity technologies.[1] We seek to ensure a robust marketplace that will encourage companies of all sizes to take steps to improve their cybersecurity risk management. We are supportive of efforts to identify and promote the adoption of cybersecurity best practices, information sharing, and voluntary standards throughout the global community.

The Coalition applauds CISA for promoting the adoption of vulnerability disclosure policies ("VDP") and handling procedures in Federal agencies and providing guidance on the development and implementation of vulnerability disclosure policies. Policymakers and government bodies have key roles to play in driving broader adoption of coordinated vulnerability disclosure (“CVD”) principles, especially by adopting CVD processes for government agencies and integrating CVD into cybersecurity guidance consistent with international standards and industry best practices.[2] The Coalition has taken the position that government agencies, at all levels, should be required to adopt an internal CVD program based on existing, widely adopted, international standards.[3] CVD should already be a consideration for Federal agencies since CVD is a core practice in the NIST Cybersecurity Framework,[4] which agencies are directed to use for cyber risk management.[5]

Receiving, evaluating, and responding to vulnerability disclosures will require resources. The Coalition urges CISA, OMB, and Congress to work together to ensure agencies have access to adequate funding, workforce, and other resources necessary to successfully implement their VDPs and CVD processes. To prepare for implementation, agencies should also be encouraged to proactively scan their internal assets as soon as possible, mitigate high priority vulnerabilities, and ensure their vulnerability management processes are effective.

The Coalition supports the phase-in approach to the scope of the VDP, as well as the goal of bringing agencies' internet-accessible assets within scope of the VDP. However, we recognize that some agencies may face challenges in applying the VDP requirement to all such assets within two years. We encourage CISA to be responsive to such concerns and provide agencies with flexibility on the timeline for expanding the scope of their VDPs to ensure the pace of expansion is assertive yet proportionate to agencies' skill levels and resources.

The Coalition appreciates that the BOD references widely-adopted international standards on CVD – ISO/IEC 29147 (2018) and ISO/IEC 30111 (2019) – as key normative sources.[6] While this reference to the standards is helpful, the Coalition recommends that CISA explicitly urge agencies to align their vulnerability disclosure and handling practices with the ISOs to the degree practical. Alignment with international standards is crucial to set consistent expectations and strengthen norms around vulnerability disclosure and handling, especially as some countries consider regulations that deviate sharply from those standards.[7]

The Coalition urges CISA to clarify that the BOD does not require a deadline for agencies to mitigate vulnerabilities. The draft BOD directs agencies to set timelines for mitigation of disclosed vulnerabilities, and recommends agencies set the timeline for 90 days or less.[8] The BOD further suggests, in the implementation guide, that agencies should “specify a target time for resolution, in days.”[9] It is appropriate for agencies to have general internal guidelines and a target time for mitigation, and to work to improve on that time in the long term. However, a fixed deadline should not apply in all circumstances, as evaluation and mitigation of some vulnerabilities may be too complex to meet the deadline, as the BOD recognizes in footnote 23.[10] Missing an artificial deadline may result in unmet expectations and loss of trust with vulnerability reporters, and may prompt premature public disclosure of unmitigated vulnerabilities, creating additional risk of exploitation. CISA should explicitly guide agencies to clarify in their VDP documentation that the target deadline does not apply to circumstances that require a longer mitigation timeline.

International standards do not recommend specific mitigation timeframes; rather, they advise that vendors balance the need to develop remediation as soon as possible ‘with the overall testing required to ensure the remediation does not negatively impact affected users due to quality issues’ - meaning the completeness and effectiveness of the proposed mitigation.[11] The Coalition recommends CISA further align the BOD with international standards and clarify that agencies should apply mitigations as quickly as possible and within reasonable timeframes, taking into consideration the completeness and effectiveness of the proposed mitigation as well as the severity of the vulnerability, without mandating specific, fixed timeframes. This aligns with the VDP language proposed by the BOD (asking the reporter to: “[p]rovide us a reasonable amount of time to resolve the issue before you disclose it publicly”).[12]

International standards require minimizing the number of people who handle vulnerability information, including at DHS and relevant agencies, to only those essential to mitigation development. Adequate processes and protections should be put in place to maintain the confidentiality of the information and prevent its circulation to entities not essential to mitigation development, in line with international standards.

Lastly, under international standards and industry-wide adopted CVD best practices, external discoverers of a vulnerability are encouraged to report the relevant information to the potentially impacted manufacturer, developer, or owner of the technology at hand, who is best positioned to lead the coordination and mitigation efforts.[13] Consistent with these standards and best practices, the BOD should suggest that agencies' VDPs encourage vulnerability reporters to disclose directly to the vendor of impacted third party products or services that the agency uses, when possible.[14] This will help ensure the vendor, who is best positioned to lead the mitigation development process, is made aware of the vulnerability quickly, rather than relying solely on the agency to pass it along.

The Coalition appreciates CISA’s efforts to incorporate the larger cybersecurity community into the protection of Federal information assets and systems. As CISA continues to develop policies and standards around vulnerability disclosure, the Coalition looks forward to serving as a resource concerning both technical and policy questions.

Sincerely,

Ari Schwartz
Executive Coordinator


[1]: The views expressed in this comment reflect the consensus views of the Coalition and do not necessarily reflect the views of any individual Coalition member. For more information on the Coalition, see www.cybersecuritycoalition.org.
[2]: Cybersecurity Coalition, Policy Priorities for Coordinated Vulnerability Disclosure and Handling, Feb. 25, 2019, pgs. 9-11, https://www.cybersecuritycoalition.org/policy-priorities.
[3]: Id.
[4]: National Institute of Standards and Technology, Framework for Improving Critical Infrastructure Cybersecurity version 1.1, RS.AN-5, pg. 42, Apr. 16, 2018, https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.04162018.pdf.
[5]: White House, Executive Order 13800, Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure, Sec. 1(c)(ii), May. 11, 2017, https://www.whitehouse.gov/presidential-actions/presidential-executive-order-strengthening-cybersecurity-federal-networks-critical-infrastructure.
[6]: According to the Binding Operational Directive 20-01 November 27, 2019 (draft): “International standards ISO 29147 (vulnerability disclosure) and ISO 30111 (vulnerability handling processes) are high quality normative resources. As vulnerability disclosures can come from anyone across the globe, aligning with international best practices minimizes potential friction”.
[7]: See comments of the Cybersecurity Coalition and the Cyber Threat Alliance, Cybersecurity Vulnerabilities Administrative Regulation, Jul. 17, 2019, https://www.cybersecuritycoalition.org/cybersecurity-vulnerabilities.
[8]: See CISA draft BOD: "b) Set target timelines for and track: iii. Resolution of vulnerabilities, including notification of the outcome to the reporter." See also footnote 23: "CISA recommends no more than 90 days from the receipt of the report... Complex situations, including those that involve multi-party coordination, might require additional time."
[9]: See CISA draft BOD: "For instance, a policy ought to: ...Specify a target time for resolution, in days." https://cyber.dhs.gov/bod/20-01/#implementation-guide
[10]: International Standards recognize that the time needed to develop, test, and deploy mitigations in a manner that will incentivize adoption by end users varies according to the technology and vulnerability. In certain complex environments, the mitigation of vulnerabilities may require taking action at multiple and interdependent layers of the system (i.e. multi-party CVD) to validate the vulnerability, develop and test the mitigations in various environments, and effectively deliver the mitigations to end-users. See Center for Cybersecurity Policy and Law, Improving Hardware Component Vulnerability Disclosure (2019), available at https://centerforcybersecuritypolicy.org/improving-hardware-component-vulnerability-disclosure. See also FIRST, Guidelines and Practices for Multi-Party Vulnerability Coordination and Disclosure, available at https://first.org/global/sigs/vulnerability-coordination/multiparty/guidelines-v1.0.
[11]: See ISO/IEC 30111 (2019), Section 7.2.5 (Remediation Development): ‘When determining the best remediation, the vendor should attempt to balance the need to create a remediation quickly, with the overall testing required to ensure the remediation does not negatively impact affected users due to quality issues.’ See also Section 7.2 with respect to vulnerability handling phases monitoring.
[12]: CISA VDP template: https://cyber.dhs.gov/bod/20-01/vdp-template
[13]: ISO/IEC 30111 (2019) Section 7.2.4 Remediation development (“The vendor develops and performs appropriate tests to ensure the vulnerability issue has been addressed on all supported platforms”). See also Section 5.6.3 ISO/IEC 29147 (2018) (“A reporter identifies potential vulnerabilities in products or services and notifies the vendor”).
[14]: The draft BOD implementation guide suggests that agencies direct reporters to disclose vulnerabilities to vendors of third party products when the reporter approaches an agency due to a perceived regulatory role. However, when the disclosure concerns a vulnerability in a third party service that the agency uses, the draft BOD does not suggest that agencies encourage reporters to disclose to the vendor in addition to the agency.

Emailed comment from Amélie Koran

I would recommend looking at a processing and oversight mechanism that is already in place for agencies - Inspectors General - who already do audit assessments and have the contact points and relationships to get things patched. The IG Empowerment Act provides leverage and doesn’t require more or other legislation or rulemaking to enforce compliance outside of current authorities. IGs also typically have the knowledge and processes in place, built over many years of operations, to know how to work within agency operations, frameworks, and missions to accomplish what is suggested in this BOD. I would seriously suggest re-evaluating the use of bug bounty and bug bounty-like platforms to outsource some of this, given the recent HackerOne breach.

This talk, given at BSides NoVA a few years ago, details the challenges and opportunities of this approach.

https://youtu.be/PFFRzrGyyh0

Add content around STARTTLS

The /guide page doesn't elaborate on STARTTLS, or on where and what (exactly) is in scope. Make this clear, and include port-related information.
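For context, STARTTLS applies where SMTP begins in plaintext, commonly port 25 (relay) and port 587 (submission); port 465 uses implicit TLS instead. As a rough, hypothetical sketch of what a scope check involves (the hostname is a placeholder, and this probe is illustrative, not the guide's actual scanning method), one could test whether a mail server advertises STARTTLS in its EHLO reply:

```python
import smtplib

# STARTTLS is relevant where SMTP starts in plaintext: port 25 (relay)
# and port 587 (submission). Port 465 uses implicit TLS instead.
STARTTLS_PORTS = (25, 587)

def ehlo_advertises_starttls(ehlo_message: bytes) -> bool:
    """True if an EHLO reply (with "250-" codes stripped, as smtplib
    returns it) lists the STARTTLS extension."""
    return any(line.split()[:1] == [b"STARTTLS"]
               for line in ehlo_message.upper().splitlines())

def probe_starttls(host: str, port: int = 25, timeout: float = 10.0) -> bool:
    """Connect to an SMTP server and report whether it offers STARTTLS."""
    with smtplib.SMTP(host, port, timeout=timeout) as smtp:
        _, message = smtp.ehlo()
        return ehlo_advertises_starttls(message)
```

For example, `probe_starttls("mail.example.gov", 587)` would report whether the submission port offers STARTTLS.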

Link headers

Throughout the site, headers are given `id`s, making them easily linkable. Make the headers link to themselves, which will help shareability without folks needing to open dev tools.
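One way to implement this (a hypothetical sketch of a static-site build step, assuming headings already carry `id` attributes; this is not the site's actual tooling) is to post-process the rendered HTML and wrap each id'd heading's text in a link to its own anchor:

```python
import re

# Match any h1-h6 element that carries an id attribute.
HEADING_RE = re.compile(
    r'<(h[1-6])([^>]*\bid="([^"]+)"[^>]*)>(.*?)</\1>',
    re.DOTALL,
)

def link_headings(html: str) -> str:
    """Wrap each id'd heading's text in a self-referencing anchor link."""
    def repl(match: re.Match) -> str:
        tag, attrs, heading_id, text = match.groups()
        return f'<{tag}{attrs}><a href="#{heading_id}">{text}</a></{tag}>'
    return HEADING_RE.sub(repl, html)
```

Running this over `<h2 id="scope">Scope</h2>` yields a heading whose text links to `#scope`; headings without an `id` are left untouched.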

Emailed comment from Adam Bernstein, HUD OIG

US government agencies currently have many systems in operation with known vulnerabilities and weaknesses and limited funds or resources to mitigate the issues. The agencies continue to operate the systems because they accept the security risk in order to not impede the agency mission. If the systems become non-operational during an attack, the assumption is that appropriations will then be provided to mitigate the issue. These legacy and underfunded systems should never be part of any vulnerability disclosure program, because discovering more vulnerabilities without the ability to remediate them will only further weaken the country’s IT systems.

Suggestions on "Consequences of complying with this policy"

I applaud the directive's emphasis on the inclusion of authorization "safe harbor" language. Such language is an essential but often overlooked component of a successful vulnerability disclosure policy. A few comments on the existing language from the provided VDP Template:

(1) The provided VDP template instructions strongly encourage the implementing agency not to modify the provided authorization language. This is perhaps a necessary balance, but there is an opportunity here for CISA to standardize this essential language for consistency across government. Could you consider making specific authorization language mandatory?

(2) The provided VDP template authorization language includes a phrase that appears to be unrelated to authorization. Recommend striking "we will work with you to understand and resolve the issue quickly" from this section. It is perhaps better suited under "What you can expect from us".

(3) The provided VDP template authorization language does not make a commitment to defend security researchers against vexatious or frivolous litigation. The DOJ's framework suggests: "If legal action is initiated by a third party against a party who complied with the vulnerability disclosure policy, the organization will take steps to make it known, either to the public or to the court, that the individual’s actions were conducted in compliance with the policy."

Two suggested variations for your consideration.

Adapted from the existing VDP Template:

If you make a good faith effort to comply with this policy during your security research, we will consider your research to be authorized. {Agency Name} will not recommend or pursue legal action related to your authorized research and will make this authorization known should legal action be initiated by a third party against you.

Adapted from Dropbox:

{Agency Name} considers activities conducted consistent with this policy to constitute “authorized” conduct. We will not pursue civil action or initiate a complaint to law enforcement for accidental, good faith violations of this policy. If legal action is initiated by a third party against you, we will take steps to make it known that your actions were conducted in compliance with this policy.

Emailed comment from the Center for Democracy & Technology (CDT)

(Note: Edits suggested by CDT are at https://github.com/cisagov/cyber.dhs.gov/pull/107/files)

December 16, 2019
Department of Homeland Security
Cybersecurity & Infrastructure Security Agency
Via: [email protected]

Re: Draft Binding Operational Directive 20-01 to Develop and Publish a Vulnerability Disclosure Policy

CDT has a long history of supporting information security researchers who help defend information systems and networks, and discover and repair new weaknesses in systems that are used by Americans every day[1]. Ethical and law-abiding research and evaluation are an essential part of defending against an ever-changing landscape of security threats[2]. Yet researchers hesitate to report vulnerabilities and weaknesses to organizations for fear of facing legal retribution. A Vulnerability Disclosure Policy (VDP) offers a way to overcome that “chilling effect” by providing a legitimate communication pipeline between researchers and organizations. That trust-based relationship is even more important when those systems are owned and operated by the government.

DHS should be commended for providing guidance to Federal Government agencies on implementing their own VDP. The guidance sets the standard for agencies that their VDP should clearly communicate, in plain language, the expectations of the agencies and the rules of engagement for researchers. This creates a legal safe harbor to shield researchers from legal threats while protecting system operators and vendors from jeopardizing critical operations. This tracks with two resources that agencies can consult for guidance: disclose.io and the Department of Justice (DOJ). Disclose.io is “a cross-industry, vendor-agnostic standardization project for safe harbor best practices to enable good-faith security research”[3] available as an open source project. The Computer Crime & Intellectual Property Section (CCIPS) of the DOJ drafted A Framework for a Vulnerability Disclosure Program for Online Systems[4] for use by private and public sector organizations considering a formal VDP. Agencies can also look to managed crowd-sourced vulnerability disclosure service providers, such as HackerOne and Bugcrowd, to vet researchers and triage reported findings in a manner that is manageable before making the commitment to administer a program in-house. Simply launching a “Hack The ____” program to entice hackers with cash payouts is a fast way to be overwhelmed with problems without the necessary infrastructure to fix them[5].

In addition to a vulnerability disclosure service provider or a traditional web-based form, agencies should consider a model used by news organizations designed to protect whistleblowers[6]. A Tor-based website and SecureDrop could greatly minimize the amount of researcher PII collected. There is still a place for unstructured exploration by unvetted researchers who would not otherwise participate in organization-sanctioned events[7].

Coordinated vulnerability discovery and disclosure is an established business practice that should continue to be normalized in the public sector. No software is or will ever be 100% bug-free[8]. Existing civil and criminal laws have been effective in chilling legitimate research efforts, but have not deterred malicious actors from exploiting vulnerabilities that are not reported to organizations. Managed crowd-sourced vulnerability disclosure service providers can help agencies control the flow and quality of incoming vulnerabilities while vetting and protecting researchers. Unvetted researchers should still have the opportunity to disclose vulnerabilities in a manner that minimizes the collection of their PII.

Disclosure shouldn't hurt – either the researcher or the agency. Ultimately, mandatory reporting of these efforts is the only way that the public will be aware of the efficacy of these programs, and for other government agencies at the federal, state, and local levels to see the value in establishing their own programs.

Sincerely,
[signed]
Maurice Turner
Deputy Director, Internet Architecture Project


[1]: Eric Stallman, Improve Cybersecurity by Allowing Vulnerability Research, (February 13, 2015), https://cdt.org/blog/improve-cybersecurity-by-allowing-vulnerability-research/
[2]: CDT, The Importance of Security Research: Four case studies, (December 15, 2017), https://cdt.org/insight/the-importance-of-security-research-four-case-studies/
[3]: https://disclose.io
[4]: DOJ, A Framework for a Vulnerability Disclosure Program for Online Systems, (July 2017), https://www.justice.gov/criminal-ccips/page/file/983996/download
[5]: https://twitter.com/k8em0/status/1186313218576150531
[6]: Senator Ron Wyden, “Letter to Council of the Inspectors General on Integrity and Efficiency to evaluate cybersecurity technologies that better protect the identity of whistleblowers”, (October 23, 2019), https://www.wyden.senate.gov/imo/media/doc/102019%20Wyden%20SecureDrop%20Tor%20Letter%20to%20IG%20Council.docx.pdf
[7]: Maurice Turner, Everything is Broken—And That’s OK, Because It’s Getting Better, (September 13, 2019),
https://cdt.org/blog/everything-is-broken-and-thats-ok-because-its-getting-better/
[8]: Michelle Richardson, “The Cyber” Part III: The Role of Vulnerability Disclosure and Bug Bounties, (April 11, 2017),
https://cdt.org/blog/the-cyber-part-iii-the-role-of-vulnerability-disclosure-and-bug-bounties/

Reorient problem statements to future desired state

Great work! A few suggestions:

Choosing to disclose a vulnerability can be an exercise in frustration for the reporter when an agency has not defined a vulnerability disclosure policy -- the effect being that those who would help ensure the public's safety are turned away:

  • The reporter cannot determine how to report: Federal agencies do not always make it clear where a report should be sent. When individuals cannot find an authorized disclosure channel (often a web page or an email address of the form [email protected]) they may resort to their own social network or seek out security staff's professional or personal contact information on the internet. Or, if the task seems too onerous, they may decide that reporting is not worth their time or effort.

  • The reporter has no confidence the vulnerability is being fixed: If a reporter receives no response from the agency or gets a response deemed unhelpful, they may assume the agency will not fix the vulnerability. This may prompt the reporter to resort to uncoordinated public disclosure to motivate a fix and protect users, and they may default to that approach in the future.

  • The reporter is afraid of legal action: To many in the security community, the federal government has a reputation for being defensive or litigious in dealing with outside security researchers. Compounding this, many government information systems are accompanied by strongly worded legalistic statements warning visitors against unauthorized use. Without clear, warm assurances that good faith security research is welcomed and authorized, researchers may fear legal reprisal, and some may choose not to report at all.

I would turn the bold headings in this bullet list into positive "should" statements: "The reporter should be able to report," "The reporter should have confidence that the vulnerability will be fixed," "The reporter should not fear legal action."

The substance of the paragraphs is good; perhaps bookend each with a reiteration of the goal: this is how it should be, this is how it currently is (and why that's bad), and this is how it should change.


Which systems are in scope. At least one internet-accessible production system or service must be in scope at the time of publication.

It would be good to make this both broader and more specific - e.g., all major systems and websites that will be reported to Congress for 21st Century IDEA need to be included from the get-go.


When agencies integrate vulnerability reporting into their existing cybersecurity risk management activities, they can weigh and fix a wider array of concerns.

I think a VDP should be presented as one tool in the suite of scanning tools used to screen for vulnerabilities on an ongoing basis. If it's treated as just another control that systems need to satisfy, there may be objections about which baseline applies.


Within 15 business days after the issuance of this directive

This doesn't anticipate the likely lag between when the BOD goes out and when word reaches the relevant POCs for each domain that they need to take action.


A vulnerability disclosure policy facilitates an agency's awareness of otherwise unknown vulnerabilities. It commits the agency to authorize good faith security research and respond to vulnerability reports, and sets expectations for reporters.

I think there's a possibility that agency staff need a complementary policy, internal to the agency, about how the agency will respond to vulnerabilities reported through the VDP. Or the response to VDP-reported issues needs to be incorporated into the larger vulnerability remediation policies, so that system owners know how and when they need to respond.


Create a security.txt[^15] file at the "/.well-known/" path[^16] of the agency's primary .gov domain.

Give a full example path here; there's a possibility that people will not click through and will misunderstand what's meant by the well-known path. Also, please make the outbound link go straight to the security.txt documentation, rather than burying it in a footnote.


Within 180 calendar days following the issuance of this directive, CISA will begin scanning for security.txt files.

Provide a template for the security.txt file format, so that the files posted will match what the CISA scanners are expecting to find.
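To illustrate, a minimal file served at the full well-known path (e.g. https://example.gov/.well-known/security.txt) might look like the sketch below, loosely following the securitytxt draft specification; every value here is a placeholder, not CISA-mandated content:

```
Contact: mailto:security@example.gov
Policy: https://example.gov/vulnerability-disclosure-policy
Preferred-Languages: en
Canonical: https://example.gov/.well-known/security.txt
```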


Actual, past impact (i.e., not those that occurred in the discovery/reporting of the vulnerability) will be assessed and treated as an incident, as applicable.

Revise to "Actual past incidents related to the reported vulnerability (i.e., not those that occurred in the discovery/reporting of the vulnerability) will be assessed and treated as an incident, as applicable."

[20-01] Agency expectations, budget, and performance

I see three major issues, however I do not have constructive recommendations. Still, I will share my thoughts and experiences.

I think there will need to be verbiage that acknowledges and offers support for the agencies that utilize or rely on external (or internal) services provided by other agencies, such as the Department of Homeland Security or others that offer security services. There is an expectation that those agencies have the budget to support improving their cybersecurity posture, or the ability to acquire additional funding.

Additionally, establishing a Vulnerability Disclosure Policy (and later a program) can negatively impact the individuals or teams responsible for cybersecurity. As we know, cybersecurity programs come under great scrutiny when expectations are not met, and that scrutiny feeds into performance reviews that determine whether an individual's or team's efforts are satisfactory and meeting needs.

Both a pro and a con: a VDP can and will reveal deficiencies in the Systems Development Lifecycle process (dev, test, QA, etc.), where security vulnerabilities should be identified and remediated but the capability to do so is lacking, which creates the need to recruit external support via the VDP.

So while requiring a VDP is a good approach, it will not solve the fundamental, underlying issues each agency faces; instead it may place more stress on cybersecurity teams and negatively impact culture.

Add "safety" to "security and privacy"

In Background, change

The primary purpose of fixing vulnerabilities is to protect people by maintaining or enhancing their security and privacy.

to

The primary purpose of fixing vulnerabilities is to protect people by maintaining or enhancing their security, privacy, and safety.

The increasing prevalence of software and connectivity in safety-critical contexts raises the likelihood that safety impacts will be as important as security and privacy issues going forward.

Emailed comment from the Information Technology Industry Council (ITI)

December 26, 2019

Honorable Chris Krebs

Cybersecurity and Infrastructure Security Agency
Department of Homeland Security
245 Murray Lane
Washington, D.C. 20528-0380
[email protected]

Re: CISA Binding Operational Directive 20-01 - Develop and Publish a Vulnerability Disclosure Policy

Dear Mr. Krebs,

The Information Technology Industry Council (ITI) appreciates the opportunity to submit the following comments in response to the Cybersecurity and Infrastructure Security Agency (CISA)’s Binding Operational Directive (BOD) 20-01 Draft to Develop and Publish a Vulnerability Disclosure Policy (VDP).

ITI is the premier global advocate and thought leader for the information and communications technology (ICT) industry, representing leading companies from across the ICT sector, including hardware, software, digital services, semiconductor, network equipment, cybersecurity and Internet companies. The tech industry shares the U.S. government’s goal of improving cybersecurity, and we believe our interests are fundamentally aligned with the Administration in this area. Our members are global companies with complex global supply chains that provide robust cybersecurity products and services. We appreciate the opportunity to share our views and look forward to supporting CISA in developing its federal vulnerability disclosure policy.

Effectively implementing a vulnerability disclosure policy across federal agencies and departments with varying needs, expertise, experience, and volume of internet-accessible assets, especially for the first time, will take adequate resources, funding, and a sufficiently trained workforce. ITI encourages CISA, the Office of Management and Budget (OMB), and Congress to work together to ensure agencies have what they need in this regard. Agencies should also be encouraged to get ahead of the VDP policy rollout by scanning their own systems for vulnerabilities, mitigating known critical vulnerabilities, and ensuring their existing vulnerability management programs are operational and ready to handle incoming disclosures.

Comments and Recommendations

We are encouraged by CISA’s leadership and focus on improving cybersecurity protection by guiding and supporting the development of vulnerability disclosure processes for federal departments and agencies. This is a crucial cybersecurity topic for the ICT industry, and we appreciate the timely discussion the U.S. government is leading to reduce the risk of vulnerabilities. In general, we encourage the U.S. government to consider the various well-understood and broadly adopted best practices and industry standards developed globally in the field of Coordinated Vulnerability Disclosure (CVD) and vulnerability handling.

ITI commends CISA for identifying the international standards ISO/IEC 29147 (2018) and ISO/IEC 30111 (2019) as key normative sources on Coordinated Vulnerability Disclosure (CVD) matters, and CISA’s recognition of the importance of alignment with globally adopted international standards given the global nature of technological supply chains and CVD processes.[1] ITI supports further alignment of the BOD with industry best practices and international standards, particularly with respect to minimizing the potential risk of exploitation of vulnerabilities and avoiding specific timeframes for remediation that may stifle the ability of agencies and departments to prioritize mitigation development for higher risk vulnerabilities and effectively deliver tested mitigations. Further, we encourage alignment of the BOD’s scope with existing VDP efforts in the government to ensure gradual and consistent adoption of the policy. Specifically, ITI proposes that the initial scope of applicability of the BOD be limited to “any public-facing website owned, operated, or controlled by [agency or department], including web applications hosted on those sites,” in line with the Department of Defense (DoD) VDP program, and that the scope of the term “internet accessible systems” in the context of the BOD should be further clarified and aligned with such definition.[2]

We also encourage CISA to consider further aligning the proposed BOD with broadly adopted industry best practices, such as articulated in the ISO/IEC international standards ISO/IEC 30111 (2013)[3] and ISO/IEC 29147 (2018),[4] that can usefully inform the development and adoption of CVD processes by federal agencies. Such alignment is key given the technologically intertwined nature of vulnerability management processes. ITI has identified a few key areas in which such alignment could be beneficial, including:

Encourage the Reporting of Vulnerabilities to the Key Affected Manufacturer, Developer, or Owner of the Technology at Hand

Under International standards and industry-wide adopted CVD best practices, the external finders or entities that are made aware of the vulnerability (reporters) are encouraged to report the relevant information to the potentially affected manufacturer, developer, or owner of the technology at hand, as that is the party best-positioned to lead the coordination efforts, validate the vulnerability, develop remediation, deliver the remediation to users, and publish the security advisory.[5]

We acknowledge that the BOD notes, in the Implementation Guide, that “Your agency may receive reports covering the online services of organizations in the sector your agency participates in or oversees. To communicate expectations, you might consider sharing something about this in your VDP: ‘Vulnerabilities in aviation, financial systems should be reported to the vendor or system owner, not to the Federal Aviation Administration, Department of the Treasury’.”[6]

However, such recommendation should not be limited to this particular case. We recommend that the BOD instruct agencies to implement processes and VDPs that encourage more broadly the reporting of the vulnerability directly to the developer of the affected system if such vendor is identified by the reporter. Accordingly, language similar to the above should be included in the VDP template and required under article 3.a., section “Develop and Publish a Vulnerability Disclosure Policy” of the BOD and not limited to “regulated” systems.

Not Mandating or Recommending Specified Timelines for Mitigation Resolution

International standards and industry best practices in CVD recognize the time needed to develop, test, and deploy mitigations in a manner that will incentivize adoption by end users varies according to both the technology and vulnerability. In certain complex environments, the mitigation of vulnerabilities might require taking action at multiple and interdependent layers of the system[7] as well as broad coordination within the technical ecosystem to validate the vulnerability, develop the mitigations, test the mitigation(s) in various environments, and finally effectively deliver the mitigation(s) to end users. Proposing specific timelines might hinder the ability of agencies to prioritize the resolution of higher severity vulnerabilities and effectively tested mitigations. Accordingly, international standards do not recommend any timeframes but rather that vendors balance the need to develop remediation as soon as possible with the overall testing required to ensure the remediation does not negatively impact affected users due to quality issues, to ensure both the completeness and effectiveness of the proposed mitigation.[8]

We appreciate CISA’s recognition, in footnote 23 of the draft BOD, of the complexity of coordination in certain environments. Yet, the BOD further notes that “many in the security research community consider public disclosure of a vulnerability to be appropriate between 45 to 90 days after the first communication with the affected entity...” and that “CISA recommends no more than 90 days from the receipt of the report.”[9] The BOD further suggests, in the implementation guide, that agencies should “specify a target time for resolution, in days.”

This proposal, as explained above, diverges from international standards. Specifying a target time, as opposed to publishing general statistics covering resolution and response times, is not a common practice in VDPs in some sectors, nor is it included, for example, in the DoD VDP.[10]

Some countries have proposed mandating specific timelines for CVD, an approach which diverges from international standards and undermines the ability to deliver effective and stable mitigations, thus potentially putting end users at risk and compromising the impacted systems and infrastructure.[11] We recommend CISA modify the approach proposed in the draft BOD to further align with international standards and clarify that mitigations should be developed as quickly as possible and in a reasonable timeframe, taking into consideration the completeness and effectiveness of the proposed mitigation and the severity of the vulnerability, but not mandating or supporting specific, targeted timeframes.[12]

Minimize Users’ Risk and Harm by Ensuring the Confidentiality and Limited Disclosure of Information Concerning Unmitigated Potential Vulnerabilities

One key objective of CVD and vulnerability handling processes is to minimize users’ risk and potential harm associated with the vulnerability. To minimize this risk, under international standards, information concerning potential vulnerabilities is kept in confidence and only disclosed to parties that are essential for the development, testing and deployment of a mitigation.[13]

Premature disclosure of vulnerabilities to entities that are not essential to mitigation development may potentially increase risk of exploitation of the vulnerability. We recommend further processes and guardrails be put in place to ensure the confidentiality of the information and that the circulation of information concerning potential vulnerabilities is limited to entities essential to mitigation development, in line with international standards.

Regarding disclosure restrictions and timelines for patching, we note that the draft BOD strikes the right balance in these areas and we recommend CISA keep this language.

Finally, ITI recommends additional dialogue, coordination and alignment among the various federal efforts to implement CVD processes across federal agencies, including proposed legislation.[14]

Conclusion

ITI looks forward to continuing to work with CISA and other key stakeholders to advance the U.S. government’s vulnerability disclosure policy. We welcome further discussion of any of the topics above.

Respectfully submitted,
{signed}

John Miller, Senior Vice President of Policy
Information Technology Industry Council


[1]: See Binding Operational Directive 20-01 November 27, 2019 (draft): “International standards ISO 29147 (vulnerability disclosure) and ISO 30111 (vulnerability handling processes) are high quality normative resources. As vulnerability disclosures can come from anyone across the globe, aligning with international best practices minimizes potential friction”.
[2]: Id.
[3]: ISO/IEC 30111:2013 Information Technology – Security Techniques – vulnerability handling processes.
https://www.iso.org/standard/53231.html.
[4]: ISO/IEC 29147:2018 Information Technology – Security Techniques – vulnerability disclosure.
https://www.iso.org/standard/72311.html.
[5]: ISO/IEC 30111 (2019) Section 7.2.4 Remediation development (“The vendor develops and performs appropriate tests to ensure the vulnerability issue has been addressed on all supported platforms.”). See also Section 5.6.3 ISO/IEC 29147 (2018) (“A reporter identifies potential vulnerabilities in products or services and notifies the vendor.”)
[6]: Binding Operational Directive 20-01 November 27, 2019 (draft).
[7]: See Center for Cybersecurity Policy and Law, Improving Hardware Component Vulnerability Disclosure (2019), https://centerforcybersecuritypolicy.org/improving-hardware-component-vulnerability-disclosure. See also FIRST, Guidelines and Practices for Multi-Party Vulnerability Coordination and Disclosure, available at https://first.org/global/sigs/vulnerability-coordination/multiparty/guidelines-v1.0.
[8]: See ISO/IEC 30111 (2019), Section 7.2.5 (Remediation Development): ‘When determining the best remediation, the vendor should attempt to balance the need to create a remediation quickly, with the overall testing required to ensure the remediation does not negatively impact affected users due to quality issues.’ See also Section 7.2 with respect to vulnerability handling phases monitoring.
[9]: Binding Operational Directive 20-01 November 27, 2019 (draft), available at https://cyber.dhs.gov/bod/20-01/.
[10]: DoD Vulnerability Disclosure Policy, https://hackerone.com/deptofdefense.
[11]: See a letter joined by ITI addressing such a proposed regulation. https://www.digitaleurope.org/resources/joint-industry-letter-on-cybersecurity-vulnerabilities-administrative-regulation-response-to-miit-published-draft-for-comments/.
[12]: This aligns with the VDP language proposed by the BOD (asking the reporter to: “provide us a reasonable amount of time to resolve the issue before you disclose it publicly”).
[13]: See ISO/IEC 29147 (2018) (mainly section 5.8) and ISO/IEC 30111 (2019) (mainly section 7.4).
[14]: “The IoT Cybersecurity Improvement Act of 2019” (S. 734/H.R. 1668), was introduced in March 2019, with amendments made at committee markup in the House and Senate in June 2019. See also committee report by HSGAC on S. 734 (S. Rept. 116-112).

Reporting requirements and metrics may under-represent long-lived reports

In Reporting Requirements and Metrics:

  1. After 270 calendar days following the issuance of this directive, within the first FISMA reporting cycle and quarterly thereafter, report the following metrics through CyberScope:
    a) Number of vulnerability disclosure reports
    b) Number of reported vulnerabilities determined to be valid (e.g., in scope and not false-positive)
    c) Number of currently open and valid reported vulnerabilities
    d) Number of currently open and valid reported vulnerabilities older than 90 days from the receipt of the report
    e) Median time to validate a submitted report
    f) Median time to remediate/mitigate a valid report
    g) Median time to initially respond to the reporter

We (CERT/CC) have seen vulnerability reports sit unresolved for multiple years.

The combination of (d) and (f) could lead to misleading reports that under-represent the duration of longer-lived reports. Presumably, (f) will only cover reports that are already closed, so if older reports are stacking up unclosed they will be hidden from this metric unless they are in the majority (which may never happen).

Merely counting the number of older reports, as in (d), says nothing about how old those reports are. You could easily have a handful of very old reports that are not being addressed, but they'll look insignificant next to the high churn of quick/easy fixes.

Three suggestions here. Consider adding:

  1. Counts of older reports by priority/risk level
  2. Median age of reports older than 90 days (it'd be better to have 75%ile or 95%ile if possible but that might be asking too much)
  3. Median age of all open reports (again, 75%ile or 95%ile would be useful too)
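The under-counting above is easy to demonstrate with a small sketch. The report ages below are hypothetical, chosen only to illustrate how metric (f), computed over closed reports, hides a few very old open reports, and how the suggested median/percentile-age metrics surface them (the `percentile` helper is an illustrative nearest-rank implementation, not part of any proposed requirement):

```python
from statistics import median

# Hypothetical report ages in days (illustrative data only).
closed_report_ages = [5, 7, 9, 12, 14, 20, 30]         # quick/easy fixes dominate
open_report_ages = [3, 10, 45, 120, 400, 800, 1100]    # includes multi-year stragglers

# Metric (f): median time to remediate, over *closed* reports only.
# The multi-year open reports are invisible to it.
print(median(closed_report_ages))   # 12

# Metric (d): count of open reports older than 90 days.
over_90 = [age for age in open_report_ages if age > 90]
print(len(over_90))                 # 4 -- says nothing about *how* old they are

# Suggested additions:
print(median(over_90))              # 600: median age of reports older than 90 days
print(median(open_report_ages))     # 120: median age of all open reports

def percentile(data, p):
    """Nearest-rank percentile (0 < p <= 100)."""
    s = sorted(data)
    k = max(0, int(round(p / 100 * len(s))) - 1)
    return s[k]

print(percentile(open_report_ages, 95))   # 1100: the oldest stragglers show up here
```

With these numbers, metric (f) reports a healthy-looking 12 days while three reports have sat open for over a year; the median and 95th-percentile ages of open reports make that backlog visible.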

Consider an information sharing "industry day" to help agencies fully understand BOD requirements

I'm pleased that DHS is working with all federal agencies to follow the path DoD started a number of years ago to increase security and allow a crowdsourced approach to doing it. As one of the people who helped launch Hack the Pentagon and the DoD's Vulnerability Disclosure Policy while at DoD, I appreciate that DHS included a Frequently Asked Questions section in the draft BOD, as it provides a host of good information for agencies. After the success of Hack the Pentagon, many civilian agencies began asking questions such as these about how to replicate it in their agencies. One thing that worked very well and helped streamline question-asking and answering was a government-only industry day, to get as many government agencies together to learn as much as possible BEFORE they stood up their own VDPs.

A one-day industry day hosted by DHS may help reduce the possibility of agencies misunderstanding BOD requirements and help each develop a strategy that works best for its own agency. Specifically, we found that agencies needed the most help in understanding:

  • the legalities of how this all works, including what safe harbor language is prudent;
  • the contracting possibilities for those seeking a third party to help fulfill the requirement;
  • how to expand their existing vulnerability management system to account for the increase in vulnerabilities needing remediation and mitigation;
  • how to train their workforce to communicate with the researcher community and react when information about vulnerabilities is submitted; and
  • market research on which companies have expertise in this area, so they didn't need to start from scratch.

While many might think this pertains mostly to security personnel, consider also inviting lawyers, contracting personnel, public relations personnel and program managers, as all will be involved in launching an agency VDP.  The earlier an information sharing day such as this one occurs, the more likely it is that each federal agency will be able to meet the requirements of the DHS BOD, as well as the timelines associated with it.  

Emailed comment from Microsoft

Department of Homeland Security
Cybersecurity and Infrastructure Security Agency (CISA)
245 Murray Lane
Washington, D.C. 20528-0380

Submitted via electronic mail to [email protected]: Microsoft’s Response to Binding Operational Directive 20-01 (draft), Develop and Publish a Vulnerability Disclosure Policy

To whom it may concern,

Microsoft appreciates the opportunity to review and comment on the draft Binding Operational Directive 20-01 (draft Directive), Develop and Publish a Vulnerability Disclosure Policy. For more than a decade, Microsoft has been developing and evolving its approach to vulnerability disclosure and handling, seeking to protect our customers and enhance ecosystem resilience by collaborating with security researchers and other vulnerability finders and reporters. Our experience has demonstrated the value that such collaboration can have for security as well as some of the challenges that may arise in the creation and implementation of effective vulnerability disclosure policies and handling procedures.

We’re encouraged that CISA is taking the critical step of requiring Federal civilian executive branch agencies to publish and operationalize vulnerability disclosure policies, and we welcome the opportunity to support this important initiative. Given the significant work that has been done to date to develop and share international standards and guidance for vulnerability disclosure and handling, we also appreciate CISA’s efforts to align the draft Directive with existing standards and good practices.

Overall, the draft Directive’s requirements, guidance, and template highlight critical elements of and provide useful context for developing and publishing vulnerability disclosure policies and implementing appropriate handling procedures. However, we encourage CISA to take into account, both in evolving the draft Directive and assessing how to further direct and support agencies moving forward, that developing policies and describing procedures is often more straightforward than managing implementation. In particular, we anticipate that establishing new procedures and using them consistently and at scale, especially given the complicated nature of Federal information technology infrastructure, may pose significant challenges. In addition, we offer the following specific feedback for CISA’s consideration as it refines the draft Directive’s requirements and guidance and explores the role of CISA and other resources:

  • Partner effectively with vulnerability reporters (a.k.a. “security researchers” or “finders”) by clearly defining policies and expectations, prioritizing consistent communication, and recognizing reporters’ contributions to security when vulnerabilities are confirmed;
  • Ensure sufficient capacity for processing and remediating confirmed vulnerabilities by supporting operations, including by exploring additional roles for CISA and/or other resources; and
  • Prepare for complicated vulnerability disclosure and handling scenarios by facilitating readiness to both coordinate closely with third parties and mitigate risks to legacy infrastructure.

Partner effectively with vulnerability reporters

Our experience has demonstrated that consistent communication, clearly defined expectations, and transparency are foundational to effective partnerships with security researchers. In 2011, our Microsoft Security Response Center (MSRC) announced that we were adopting a new policy, called Coordinated Vulnerability Disclosure (CVD), in response to feedback from the security community and to demonstrate our focus on protecting technology users.[1] Since then, prioritizing communication throughout the vulnerability investigation and remediation process and providing appropriate transparency regarding shifts in expectations has helped to foster collaboration. Our support for these practices was also validated during our participation in a multi-stakeholder process on cybersecurity vulnerabilities convened by the National Telecommunications and Information Administration.[2] This process led to the publication of “Vulnerability Disclosure Attitudes and Actions: A Research Report.”[3] Demonstrating that communication is key, the report conveyed results of a survey in which most security researcher respondents shared that they expect regular updates on their reported vulnerability, including regarding its investigation and remediation; 95 percent expected to be notified when their reported vulnerability was resolved.[4]

The survey further demonstrated support for prioritizing communication over adhering to a resolution timeline. While many surveyed researchers expressed a desire for an anticipated resolution timeline:

...only 18% of the researchers that expressed an expectation of a resolution timeline thought that vendors should conform to a timeline without regard [for] the circumstances of a particular bug. Maintaining a definite resolution date, then, is less important than communicating the decision-making involved in determining resolution priority in a transparent manner, allowing [researchers] to calibrate their expectations.[5]

Consistent with the finding that communication and transparency related to the resolution process are more critical for collaboration than holding to resolution timelines, and given broader risk management issues associated with timelines, Microsoft’s CVD policy is focused on coordination and does not include timelines for resolution of a confirmed vulnerability. Specifically, our policy asks that the security researcher allows the vendor the opportunity to diagnose and offer fully tested updates, workarounds, or other corrective measures before any party discloses detailed vulnerability or exploit information to the public.[6]

While CISA’s draft Directive demonstrates the importance of communicating with vulnerability reporters and conveying expectations, there are further opportunities to bolster and prioritize that focus. For example, the draft Directive appropriately requires agencies to specify where reports should be sent (i.e., to provide a “front door” for reporters) and include within their policies a statement explaining when reporters can anticipate acknowledgement of their reports – both of which are foundational for establishing good communication and setting expectations. To further encourage collaboration, we recommend that CISA make clear to agencies that ongoing communication throughout the investigation and potential remediation process are critical to the successful implementation of policies.

Moreover, we encourage CISA to help agencies shift their focus away from providing resolution timelines[7] and instead recognize the importance of committing to regular cadences by which they will engage in meaningful follow up with reporters. Sticking to resolution timelines may be difficult and even undermine security due to variability in: a) risk prioritization of different vulnerabilities; and b) challenges associated with different investigations, remediation efforts, or compatibility testing circumstances. However, communicating updates, providing context for shifting expectations, and otherwise collaborating directly with security researchers is consistently achievable. What’s more, focusing on ongoing communication rather than resolution timelines will not only strengthen collaboration with reporters that better understand remediation efforts but also improve security and operations, ensuring that such efforts reflect risk priorities and that mitigations work in practice (e.g., fixes are sufficiently tested prior to deployment).

While we have learned from first-hand experience that imposing deadlines on resolving vulnerabilities is unhelpful for managing risk, timelines for other activities may be valuable. As such, in requiring agencies to set target timelines and track metrics, we encourage CISA to differentiate between timelines for acknowledging reports, assessing the validity of reports, resolving confirmed vulnerabilities, and notifying reporters of outcomes. Acknowledging reports and notifying reporters of outcomes (i.e., after a resolution has been achieved) can and should happen according to a more consistent timeline. Delay in confirming receipt of reports, especially when a policy has committed an agency to responding within a specific timeframe, risks undermining an agency’s rapport and ability to coordinate with reporters. However, timelines for investigating and resolving vulnerabilities may reasonably vary, and without context regarding types of vulnerabilities or circumstances, metrics about median timeframes may be less meaningful.

Exploring how individual agencies and CISA might recognize researchers for their contributions to security could also foster collaboration between the Federal government and vulnerability finders and incentivize coordinated disclosure. Beyond bug bounties, Microsoft seeks to recognize and reward researchers for their contributions through a variety of mechanisms. For instance, as part of our Security Update Guide, as applicable and appropriate for each security vulnerability that’s addressed, MSRC acknowledges both internal and external researchers for their efforts.[8] To consistently measure and enable acknowledgment of especially impactful contributions, MSRC has developed a Researcher Recognition Program.[9] We also recognize our “most valuable” researchers across dimensions like high accuracy, high volume, and high impact in reporting vulnerabilities.[10] Likewise, the draft Directive could encourage individual agencies to acknowledge researchers that submitted confirmed vulnerabilities after they resolve them. In addition, CISA could look across agencies and acknowledge researchers that are making especially impactful contributions to the mission of protecting Federal civilian executive branch agencies.

Ensure sufficient capacity for processing and remediating confirmed vulnerabilities

Through Microsoft’s efforts to manage vulnerability handling processes across a wide array of products and services and our partnership with other organizations as they stand up and operationalize vulnerability disclosure policies, our experience consistently demonstrates that implementing policies is often challenging, in particular at the outset when report influx and resource demand are uncertain. Moreover, to the extent that implementation challenges disrupt efforts to operate according to defined policies (e.g., to meet timelines for response or remediation), challenges may be exacerbated by vulnerability reporters who are frustrated by inconsistent communication and unmet expectations.

Given this context, the draft Directive’s attention to helping agencies scale up slowly – by starting with at least one internet-accessible system or service in scope and adding others over time – is especially important. Likewise, while bug bounty programs can helpfully incentivize researcher focus on high-priority systems and services, from our engagement with others, we’ve learned that the experience of some organizations has been that bug bounty programs can also risk driving more vulnerability reports and exhausting bandwidth when teams are unprepared for an influx. As a result, exclusively focusing on vulnerability disclosure policies and handling processes at the outset may be a reasonable approach for agencies. CISA might also consider the following ideas to support agency capacity and operations:

  • Highlight potential challenges with vulnerability mitigation or resolution, the importance of testing fixes to limit disruptions, and resources to support such efforts.
  • Acknowledge that agencies might consider leveraging external services, such as those provided by HackerOne or Bugcrowd, to support their efforts to establish and implement a vulnerability disclosure policy and handling process; note that many organizations, including the Dept. of Defense and private sector companies of various sizes, have benefited from doing so.
  • Offer a way to help address vulnerability reports for systems and services that are out of scope.
  • Clarify that, at an agency’s request, CISA will assist with disclosure of reports to vendors when they’re inappropriately sent to agencies (e.g., because of a real or perceived regulatory role); note that while the current draft Directive acknowledges that CISA may do so, ensuring that reports reach vendors that can address vulnerabilities is critical to ecosystem security and an activity that, taken on by CISA, may enable agencies to focus their resources on other efforts.
  • List all agency webforms and points of contact for vulnerability reports in a central location on a CISA webpage along with a CISA email alias to which vulnerability reporters should reach out if they do not receive a response to their agency outreach; note that, while accomplishing the same objective of CISA testing whether it receives a response after contacting an agency through its identified mechanism (e.g., webform and/or email alias), this approach has the additional benefits of helping to support a potentially overwhelmed agency and guarding against circumstances in which a frustrated reporter might proceed with public disclosure of a vulnerability.

Prepare for complicated vulnerability disclosure and handling scenarios

As CISA’s draft Directive acknowledges, some scenarios may complicate agency efforts to craft clear vulnerability disclosure policies or implement consistent vulnerability handling procedures. For instance, the draft Directive highlights that agencies may receive reports of potential vulnerabilities impacting not only their systems and services but also third-party products or services because: 1) systems or services included in their policy’s scope incorporate third-party products or services; or 2) reporters perceive that, due to their regulatory role, agencies should receive vulnerability information relating to the products or services of organizations within their sector. In addition, reporters may submit information to agencies if they do not want to report vulnerabilities directly to vendors or do not receive a response from vendors. They may also submit information to agencies when, in order to successfully exploit a system or service, a chain of vulnerabilities must be leveraged – and that chain implicates both an agency and a third party.

When reports of potential vulnerabilities in third-party products or services are submitted to agencies, two key issues must be addressed; both are identified in the draft Directive, but we encourage CISA to consider further emphasizing them in any future versions. First, ensuring that affected vendors have visibility into potential vulnerability information is critical to both mitigating agency risk and improving ecosystem security; whether through an enhanced CISA role (as requested by agencies), coordination with CERT/CC, or otherwise, the Directive should underline the importance of disclosure to vendors and establish appropriate processes accordingly. Second, to the extent that agencies wish to include within the scope of their policies any third-party products or services (for example, as dependencies of an agency’s own systems or services), the draft Directive’s reference to Dept. of Justice (DOJ) guidance[11] is critical. In implementing such guidance, we recommend coordinating tightly with third party providers, both to facilitate visibility into potential vulnerability information and to ensure that research activities are appropriately scoped to mitigate risks imposed on such third party offerings. (By way of example and as highlighted by the DOJ’s guidance, where the third party is a provider of shared platforms, such as cloud computing services, research activities must be scoped in a way that prevents any risk to co-tenants.)

Beyond instances in which third parties or both agencies and third parties are implicated by vulnerability disclosure, agencies may receive reports of potential vulnerabilities that affect many vendors and organizations (e.g., Heartbleed). In such circumstances, agencies should also be aware of the need to share vulnerability information more broadly. CISA could reference guidance from FIRST, a confederation of incident response teams that cooperate and share best practices,[12] for multi-party disclosure.[13]

In addition, agencies may receive reports of potential vulnerabilities affecting legacy infrastructure that they own and operate but for which fixes or mitigations are especially complex – given that they do not have access to the code impacted by a vulnerability, updates are not covered in contracts, or related circumstances. For example, in the case of Heartbleed, given the number of contractors implicated and the security maturity of affected infrastructure, for some Federal environments, there was a need to rebuild executables from the original code. CISA could acknowledge such potential challenges and point agencies to additional resources for handling these more complicated scenarios.

As stated at the outset, we applaud CISA for pursuing an important initiative – requiring Federal civilian executive branch agencies to publish and operationalize vulnerability disclosure policies – and for the approach taken in the draft Directive, which both focuses on many critical elements of vulnerability disclosure policies and handling processes and leverages existing standards and good practices to support implementation. We appreciate the opportunity to provide input and would welcome any further opportunities to engage with CISA as it moves forward with refining and issuing the draft Directive, including through policy and/or operational discussions with our MSRC experts.

Sincerely,
{signed}

Angela L. McKay
Senior Director
Customer Security & Trust
Microsoft

[1]: https://msrc-blog.microsoft.com/2011/04/19/coordinated-vulnerability-disclosure-from-philosophy-to-practice/; https://blogs.technet.microsoft.com/ecostrat/2011/04/19/coordinated-vulnerability-disclosure-reloaded/.
[2]: https://www.ntia.doc.gov/other-publication/2016/multistakeholder-process-cybersecurity-vulnerabilities.
[3]: https://www.ntia.doc.gov/files/ntia/publications/2016_ntia_a_a_vulnerability_disclosure_insights_report.pdf.
[4]: Id.
[5]: Id. at 6.
[6]: https://www.microsoft.com/en-us/msrc/cvd. This approach is also consistent with the draft Directive’s finding that “it is generally ideal for any public disclosure to occur after a vulnerability has been fixed...”
[7]: For example, the current draft Directive notes that “While it is generally ideal for any public disclosure to occur after a vulnerability has been fixed, agencies have the primary responsibility of addressing vulnerabilities in a timely manner....Many in the security research community consider public disclosure of a vulnerability to be appropriate between 45 to 90 days after the first communication with the affected entity...Agencies may require that the researcher give the agency a defined window of time to address the vulnerability before public disclosure, but should not seek to limit publication after this window of time has passed...”
[8]: https://portal.msrc.microsoft.com/en-us/security-guidance/acknowledgments.
[9]: https://www.microsoft.com/en-us/msrc/researcher-recognition-program.
[10]: https://msrc-blog.microsoft.com/2019/08/07/announcing-2019-msrc-most-valuable-security-researchers/.
[11]: https://www.justice.gov/criminal-ccips/page/file/983996/download#page=4
[12]: https://www.first.org/about/mission
[13]: https://www.first.org/global/sigs/vulnerability-coordination/multiparty/

Emailed comment from BSA | The Software Alliance

January 7, 2020

Cybersecurity and Infrastructure Security Agency
Department of Homeland Security
245 Murray Lane
Washington, DC 20528-0380

Via email to: [email protected]

Re: Comments on Binding Operational Directive 20-01 (draft), Develop and Publish a Vulnerability Disclosure Policy

To Whom It May Concern:

BSA | The Software Alliance appreciates the opportunity to provide comments on the Department of Homeland Security's (DHS's) draft Binding Operational Directive (BOD) on publishing a vulnerability disclosure policy, and appreciates DHS's decision to solicit input on this important policy from impacted stakeholders. BSA is the leading advocate for the global software industry before governments and in the international marketplace. Software powers technologies that enhance our personal lives and businesses in every sector, as well as throughout United States Government departments and agencies. BSA's members[1] are at the forefront of software-enabled innovation, providing the federal government with solutions that dramatically improve the government's ability to deliver timely and effective services to the citizens it governs. Moreover, BSA's members are pioneers in the field of software security, leading the development of principles relating to the secure software development lifecycle (SDLC) and coordinated vulnerability disclosure (CVD).

Over the last several years, the software community has developed best practices to help software developers and vendors, as well as security researchers, improve their ability to identify, mitigate, and disclose vulnerabilities in software products and services through CVD programs. BSA's members have unparalleled experience in both the development of CVD best practices and international standards and in the successful implementation of CVD programs.

Building on the experience of our members, BSA earlier this year released Guiding Principles for Coordinated Vulnerability Disclosure, a document that outlines best practices for establishing and implementing a successful CVD program.[2] We commend this document to your attention as a useful resource for informing agency CVD programs under the proposed BOD. This document also informs our comments on the draft BOD, which follow.

Promoting Harmonization and Further Alignment with International Standards

BSA commends DHS on its leadership in advancing the adoption of CVD policies. As the administration seeks to address matters of CVD, we support alignment of the BOD language with broadly adopted, globally applied industry best practices, as articulated in ISO/IEC 30111 (2019) and ISO/IEC 29147 (2018), among others.[3] We appreciate that the BOD explicitly refers to these standards as "high quality normative resources" and recognizes that "aligning with international best practices minimizes potential friction." Given that some CVD processes related to BOD implementation may require industry collaboration, we believe further refinement should be considered to ensure the BOD does not unnecessarily incentivize the premature release of vulnerability information, whether by requiring stringent or specific timelines or by requiring disclosure, prior to public disclosure, to entities that are not taking part in mitigation development. Such alignment is key because of the globally intertwined nature of technology, and of vulnerability handling processes in particular.

Avoiding mandates for specific timelines or pre-disclosure requirements

Under well-established best practices and international standards, and as noted in BSA's Guiding Principles, third parties (including security researchers) are generally encouraged to report information concerning a potential vulnerability to the manufacturer or developer of the applicable technology, who is best positioned to lead the CVD process, while the information is kept in confidence and shared only with parties essential to the mitigation development and testing processes. This key CVD principle enables the owner of the technology to develop, test, and deliver mitigations to end users, while keeping the information in confidence to limit the potential risk of exploitation while mitigations are not yet available. Mandating or promoting practices that suggest or require the disclosure of vulnerability information to entities that do not take part in mitigation development diverges from these broadly understood global standards and may increase the likelihood of exploitation.

In addition, internationally recognized standards and industry best practices recognize that the time needed to develop, test, and deploy mitigations in a manner that will incentivize adoption by end users varies according to the technology and vulnerability. In certain complex environments, such as hardware, the mitigation of vulnerabilities may require taking action at multiple and interdependent layers of the system (often termed "Multi-Party CVD")[4] and broad coordination within the technical ecosystem to validate the vulnerability, develop mitigations, test them in various environments, and finally deliver them effectively to end users. Thus, the time needed to develop a mitigation for supported products or services differs according to the technology at hand. As such, internationally recognized standards, including those cited above, do not recommend specific timeframes, but guide vendors to balance the need to develop remediation as soon as possible 'with the overall testing required to ensure the remediation does not negatively impact affected users due to quality issues,' meaning the completeness and effectiveness of the proposed mitigation.[5] Requiring adherence to fixed timelines, or premature pre-disclosure to parties that do not take part in mitigation development, undermines vendors' ability to properly prioritize and test mitigations, potentially increasing the risk of exploitation.

While the draft BOD recognizes coordination complexities in a footnote (footnote 23), it also suggests that "[m]any in the security research community consider public disclosure of a vulnerability to be appropriate between 45 to 90 days after the first communication with the affected entity..." and proposes that "CISA recommends no more than 90 days from the receipt of the report".

We recommend that the BOD be amended to clarify that mitigations be developed as quickly as possible and in reasonable timeframes, taking into consideration the completeness and effectiveness of the proposed mitigation, as well as the severity of the vulnerability, but with no specific timeframes outlined.[6]

BSA and its members appreciate your consideration of these recommendations. Software security is one of the most pressing challenges we face in the cybersecurity arena, and BSA and its members are eager to work with DHS to encourage more robust security across the federal government by improving its ability to collaborate with software developers to identify and mitigate vulnerabilities. Thank you for the opportunity to comment on this important matter.

Sincerely,
{signed}

Tommy Ross
Senior Director, Policy

[1]: BSA's members include: Adobe, Akamai, Apple, Atlassian, Autodesk, Bentley Systems, Box, Cadence, CNC/Mastercam, IBM, Informatica, Intel, MathWorks, Microsoft, Okta, Oracle, PTC, Salesforce, ServiceNow, Siemens Industry Software Inc., Sitecore, Slack, Splunk, Trend Micro, Trimble Solutions Corporation, Twilio, and Workday.
[2]: Guiding Principles for Coordinated Vulnerability Disclosure, BSA | The Software Alliance. Available at: https://www.bsa.org/files/policy-filings/2019globalbsacoordinatedvulnerabilitydisclosure.pdf.
[3]: And successor standards as they are developed in ISO/IEC.
[4]: See Center for Cybersecurity Policy and Law, Improving Hardware Component Vulnerability Disclosure (2019), available at https://centerforcybersecuritypolicy.org/improving-hardware-component-vulnerability-disclosure. See also FIRST, Guidelines and Practices for Multi-Party Vulnerability Coordination and Disclosure, available at https://first.org/global/sigs/vulnerability-coordination/multiparty/guidelines-v1.0.
[5]: See ISO/IEC 30111 (2019), Section 7.2.5 (Remediation Development): 'When determining the best remediation, the vendor should attempt to balance the need to create a remediation quickly, with the overall testing required to ensure the remediation does not negatively impact affected users due to quality issues.' See also Section 7.2 with respect to vulnerability handling phases monitoring. Under international standards it is recognized the vendor is generally best positioned to lead the coordination efforts, validate the vulnerability and finally develop remediation and remediation delivery processes while the information is kept in confidence.
[6]: See also https://www.digitaleurope.org/wp/wp-content/uploads/2019/07/Joint-industry-letter-on-Cybersecurity-Vulnerabilities-Administrative-Regulation-response-to-MIIT-final.pdf, joined by BSA.

Emailed comment from the Department of Justice (CISO)

Thank you for the opportunity to review and provide comment on the draft Vulnerability Disclosure Policy (VDP) Binding Operational Directive (BOD). Please find DOJ’s comments below:

  1. DOJ recommends CISA only require adding a Security POC and Organization to the “/.well-known/” .gov domain within 15 days of BOD publication.
  2. DOJ recommends clarifying that “public-facing” sites and services refer to the population of .gov domains scanned by NCATS. CISA may want to consider publishing that list to clarify the total “in scope” testing population for the security research community.
  3. DOJ recommends OMB/CISA identify a standard disclosure hold time (90-120 days) for Departments and Agencies to remediate reported vulnerabilities and within which security researchers cannot further disclose their findings.
    • Departments and Agencies should be free to request additional time from the security research community in the event of major vulnerabilities that require more time to remediate. This would only apply when specifically requested and only for individual vulnerabilities.
    • The BOD explicitly prohibits agencies from “restricting the [researcher’s] ability to disclose discovered vulnerabilities to others, with the exception of a request for a reasonably time-limited response period.” Multiple federal VDPs with different timelines will confuse the security research community.
  4. Please clarify how CISA intends to enforce VDP requirements or measure completion against “all public facing systems and services”?
  5. Please clarify what is meant by the 60 day action in the OMB Memo for CISA to coordinate with DOJ/NIST on “immediate actions to instantiate VDPs.” How is this different from the guidance in draft BOD 20-01?

Thank you again for the opportunity to review and provide comment. DOJ looks forward to working with CISA to finalize and implement this VDP BOD.

Thanks,

Nickolous Ward
Chief Information Security Officer
United States Department of Justice

DMARC with system emails

(disclaimer, I'm brand new to DMARC)

Does DMARC p=reject cause issues with server-generated email from inside the network that isn’t associated with a mailbox? E.g. cron job emails coming from a system where the sender is something like [email protected]?
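For what it's worth, the usual answer is: it can, if the host's mail is neither SPF-authorized nor DKIM-signed, because a subdomain inherits the organizational domain's p=reject policy unless an sp= tag or a DMARC record on the subdomain itself says otherwise. A sketch with placeholder names, addresses, and IPs (none of these are real records):

```dns
; placeholder records for a host that emits cron mail (names/IPs illustrative)
host.agency.example.        IN TXT "v=spf1 ip4:192.0.2.10 -all"

; optional: a subdomain DMARC record takes precedence over the inherited
; organizational policy, so monitoring can be enabled before enforcement
_dmarc.host.agency.example. IN TXT "v=DMARC1; p=none; rua=mailto:dmarc-reports@agency.example"
```

Alternatively, routing system-generated mail through an authorized relay avoids maintaining per-host records entirely.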

Consider a more explicit acknowledgement that not all vulnerabilities are created equal

The directive should express that vulnerability scoring is an important part of a vulnerability disclosure process as it meaningfully influences initial triage, reporter expectations, and the prioritization of remediation efforts. There appears to be some precedent for this in BOD 19-02 Vulnerability Remediation Requirements, which classifies vulnerabilities based on CVSS Qualitative Labels (Critical, High, Medium, Low, None).

The inclusion of CVSS-based severity scoring (or other methodology) would enable several potential improvements to the directive:

(1) CISA and the broader government would gain meaningful insight into risk by amending the "Reporting Requirements and Metrics" section to require metrics in section 10 (b), (c), (d), and (f) to be reported by severity.

(2) As Jack Cable notes in his comment, an agency should have the option to temporarily accept risk beneath a defined severity level.

(3) Agencies should be encouraged to set Resolution Targets for reports based on Severity. The default recommendation of 90 days, for example, may be a rather long default for a Critical finding.
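For reference, the qualitative labels referenced above follow the CVSS v3.1 rating scale, which maps base scores to severities; a minimal sketch (the function name is ours, not part of any directive):

```python
def cvss_label(score: float) -> str:
    """Map a CVSS v3.1 base score (0.0-10.0) to its qualitative
    severity label, per the v3.1 specification's rating scale."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score < 4.0:    # 0.1 - 3.9
        return "Low"
    if score < 7.0:    # 4.0 - 6.9
        return "Medium"
    if score < 9.0:    # 7.0 - 8.9
        return "High"
    return "Critical"  # 9.0 - 10.0

print(cvss_label(9.8))  # Critical
```

Reporting metrics bucketed this way would let CISA aggregate severity data consistently across agencies.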

Clarify guidance on SPF null records for zones that send no email

https://cyber.dhs.gov/guide/#what-should-be-done-with-domains-that-do-not-send-mail says:

With DMARC p=reject, it is not necessary to specify SPF “null records” on every active domain in the zone, though doing so is not harmful.

This section is ambiguous on whether SPF null records should be set on the parent domain (the root of the zone).

For a domain like gsatest1.gov, for example, which sends no email and acts as a testing area, what are the minimum records necessary, from both a compliance and security standpoint?
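As a hedged illustration (common practice rather than an official compliance baseline, and the aggregate-report address is a placeholder), the minimal posture for a domain that sends and receives no mail is often given as a null MX (RFC 7505), a deny-all SPF record, and a reject DMARC policy:

```dns
; illustrative minimum for a non-mail-sending domain
gsatest1.gov.        IN MX  0 .
gsatest1.gov.        IN TXT "v=spf1 -all"
_dmarc.gsatest1.gov. IN TXT "v=DMARC1; p=reject; rua=mailto:reports@example.gov"
```

It would help if the guide stated explicitly whether these records are expected on the zone apex as well as on active hostnames within the zone.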

People Reporting Bugs Should Always Be Anonymous

According to the Vulnerability Disclosure Policies (Draft) (20-01) the policy must include "A commitment to not recommend or pursue legal action against anyone for security research activities that the agency concludes represents a good faith effort to follow the policy, and deem that activity authorized".

Anyone who reports a bug to a government agency is most likely doing so with good intentions, regardless of how the bug was found or the initial intent of their actions. While there should be restrictions on what testing is allowed and there should be no guarantee that legal action won't take place, making all bug reports anonymous encourages people to file bug reports. To put it a different way, why would anyone want to help you if there's even a remote threat that you will attempt to harm them?

Binding Operational Directive 20-01: on security@

The Binding Operational Directive 20-01 has a footnote as follows:

CISA recommends using a team email address specifically for these reports and avoiding the use of an individual’s email address. The email address can be the same across multiple domains; it need not be on the domain it is a security contact for. However, we strongly recommend using an address of the form security@, as it is a de facto address used to initiate conversations about security issues on a domain.

security@ is actually part of the ISO/IEC 29147 standard, as blogged about by the UK National Cyber Security Centre. Beyond that, it's the only vaguely standard way of sending vulnerability disclosures. https://www.openbugbounty.org/, for example, will automatically email security@ when a bug is reported in a system linked to a DNS record. It's the closest I've seen to fully-automated disclosure.

It'd be great to see at least some of the ISO/IEC 29147 suggested email addresses as an important part of meeting a vulnerability disclosure baseline.

FAQ for "but this domain doesn't send mail"

This is touched on generally in a few places in the implementation guidance, but there should be a direct and clear answer to the question "my domain doesn't send mail, so I shouldn't have to do anything".

Encourage stronger SSL Protocols

Are there any thoughts about not just saying “don’t use SSLv2 / SSLv3” but also saying “only use TLS1.2”?

I’m in the middle of looking at fixing up a number of internal servers and trying to figure out what we can get away with. It appears that TLS1.2 is supported in all the modern desktop browsers, but maybe we still need TLS1.1 for some mobile and other platforms? This led me to wonder if we should be more aggressive about pushing on the protocol requirements (and there is possibly a similar argument for ciphers).
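As a small illustration of what a TLS 1.2 floor looks like in practice (a sketch, not agency guidance; server stacks expose an equivalent knob, e.g. nginx's ssl_protocols directive):

```python
import ssl

# Build a default client context, then refuse anything older than TLS 1.2.
# SSLv2/SSLv3 are already disabled by default in modern OpenSSL builds;
# this additionally drops TLS 1.0 and TLS 1.1.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# .name avoids the Python-version-dependent printed form of the enum member
print(ctx.minimum_version.name)  # TLSv1_2
```

Whether TLS 1.1 can be dropped is ultimately a client-population question, which is why measuring negotiated protocol versions before tightening the floor is worthwhile.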

Test

Just wanted to see what happens if I comment here.

Clarify guidance around scope of order regarding which domains are covered

We have seen confusion around exactly which domains an agency needs to apply the measures from BOD 18-01 to, in order for the agency to be "in compliance."

Fixing this might be as simple as updating two spots in the Compliance Checklist:

  • For 90 days, second bullet: "Configure all second-level domains and non-mail-sending domains..."
  • Within 1 year: "for all second-level domains, non-mail-sending-domains, and mail-sending hosts."

@h-m-f-t thoughts?

Public Comment from Jack Cable - BOD 20-01

Thank you for the opportunity to provide feedback on Binding Operational Directive 20-01. I commend CISA for its actions promoting vulnerability disclosure policies, and firmly believe that the proposal should move forward. As evidenced by the success of both federal and industry vulnerability disclosure policies, the potential benefit of such policies is immense and can significantly improve the security posture of federal agencies.

In this response, I outline my main pieces of feedback regarding the directive. This contains both the positive aspects of the draft (of which there are many), which I believe should remain as is, and constructive recommendations for making further improvements to the draft. I welcome open discussion on my comments in order to make the directive as effective as possible.

Specific feedback and recommendations:

  1. Legal authorization is one of the most crucial components of establishing a vulnerability disclosure policy, and I am glad to see it front and center in the draft directive. Though the current draft suggests legal safe harbor language, due to its importance I would recommend requiring that agencies use the exact language specified by CISA when establishing their policies. This minimizes the risk of agencies including language that does not offer the requisite protection to researchers.
  2. The directive should maintain its requirement for agencies to add all systems to their scope within 2 years, if not sooner. A VDP only reaches full effectiveness when its scope is completely open. Additional vulnerability information can never harm an agency's security posture — after all, vulnerabilities exist whether or not agencies are aware of them, and it is always better to know. It is the agency's job, not that of the VDP, to prioritize vulnerabilities received and the resources devoted to fix them.
  3. The directive should place greater emphasis on vulnerability classification for remediation. Agencies should be expected to fix a critical vulnerability much faster than a low-priority vulnerability. In particular, the metrics reported to CISA should break down received and outstanding vulnerability counts, as well as remediation times, by severity level (critical/high/medium/low). Additionally, the directive should emphasize that agencies may deem reports to be of minimal severity such that a fix is not warranted, in which case the agency should be able to close the report as an accepted risk. This is particularly important given that VDPs tend to receive many minimal-impact reports, and agencies have only limited resources to devote to processing reports and fixing vulnerabilities.
  4. The directive should enforce greater transparency for remediation. It is insufficient for an agency to simply claim that a vulnerability has been remediated. At minimum, agencies should have a mandatory retesting process where a second set of eyes confirms that a vulnerability has been remediated. This can also involve the researcher who originally reported the vulnerability, although they should not be the only party that retests the vulnerability. Furthermore, agencies should supplement their policies by identifying lessons learned and changes going forward for contracting and/or development to prevent prevalent classes of vulnerabilities.
  5. The directive should continue to emphasize information sharing. Vulnerability reports that agencies receive, such as common misconfigurations, may have potential of affecting a large number of federal systems even if not a zero-day vulnerability. The directive should emphasize that agencies should report any such widespread misconfiguration to CISA. As such, I recommend adding the phrasing "or common misconfigurations" to the language in 9 (a) following mention of "not publicly known vulnerabilities". Upon receiving vulnerability information, CISA should coordinate to ensure that all agencies that may be affected are aware of such information.
  6. The directive should aim to make reporting as easy as possible. The average researcher will not know the difference between cabinet-level agencies, let alone individual agencies within these departments. At minimum, each federal cabinet-level agency should have a single vulnerability disclosure policy and reporting flow. This allows one central team to gain experience interacting with researchers and validating patches, rather than having separate policies which may lead to conflicting terms or varying quality of response. Thus, the directive's instruction that agencies should "to the greatest degree possible optimize for closeness to the system owner" should be revisited; the directive should instead suggest that agencies implement a single reporting channel for all organization assets. This can follow the Department of Defense's model, where one group is responsible for disseminating vulnerability reports to relevant system owners. CISA should additionally establish a companion website to direct researchers to VDPs based on the organization or domain in question.
  7. Discovering an agency's vulnerability disclosure policy should be as easy as possible. Each agency should be required to link to their vulnerability disclosure policy from either the front page of their website or another prominent page. This is also important for ensuring that vulnerability disclosure policies are indexed and appear in search results.
  8. CISA, along with OMB, should plan for the case that agencies fail to adequately remediate reported vulnerabilities. This may be the first time that an agency's response capabilities are tested against real vulnerability reports, and not every agency will be successful in patching their systems effectively and in a timely manner. Failure to patch such reports is indicative of greater challenges in security management. CISA and other policy leadership should plan for when this happens and aid agencies in improving their cybersecurity processes and talent.

Thank you,

Jack Cable

Clarify to whom the vulnerability is being disclosed

In Background, change

Choosing to disclose a vulnerability can be an exercise in frustration for the reporter when an agency has not defined a vulnerability disclosure policy

to

Choosing to disclose a vulnerability to an agency can be an exercise in frustration for the reporter when the agency has not defined a vulnerability disclosure policy

Just clarifying to whom the vulnerability is being disclosed.

Emailed comment from the Consumer Financial Protection Bureau (CFPB)

Thank you for the opportunity to review and comment on the draft Binding Operational Directive 20-01, which would require executive branch agencies to publish a vulnerability disclosure policy (VDP) and establish a process for managing reports of security weaknesses. The Consumer Financial Protection Bureau (CFPB) appreciates the U.S. Department of Homeland Security’s (DHS) work in conducting its review and issuing this draft guidance. Our recommendation can be found below:

Recommendation 1: On page 5, paragraph 3.b.ii – “Limit Testing solely to 'vetted' registered parties”, we recommend that this section be redrafted to “limit REPORTING solely to ‘vetted’ registered parties.” We do not want to invite opportunities for misusing the policy for unsolicited aggressive security testing.

Again, thank you for the opportunity to review and comment on this draft guidance. Please feel free to contact [redacted] if you have any questions. We look forward to working with you in the future.

Velvet D. Johnson
Sr. IT Security Policy Specialist

Trustworthy Email Report

The Trustworthy Email Report tracks full compliance based on a p=reject policy, which is technically not required until October 2018; we recommend an interim measure of compliance reflecting p=none and STARTTLS that is separate and distinct from a full p=reject policy stance.
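For reference, the interim and final postures differ only in the DMARC policy tag; illustrative records (domain and reporting address are placeholders):

```dns
; interim: monitoring only, while aggregate reports are collected
_dmarc.agency.example. IN TXT "v=DMARC1; p=none; rua=mailto:reports@agency.example"

; final posture, required by October 2018: enforcement
_dmarc.agency.example. IN TXT "v=DMARC1; p=reject; rua=mailto:reports@agency.example"
```

Tracking these two stages separately would make the report reflect agencies that are on schedule but not yet at enforcement.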

Emailed comment from Rapid7

Dec. 27, 2019

The Honorable Chris Krebs
Cybersecurity and Infrastructure Security Agency
Department of Homeland Security

Rapid7 submits these comments in response to the Cybersecurity and Infrastructure Security Agency (CISA)'s draft Binding Operational Directive to Develop and Publish a Vulnerability Disclosure Policy.[1] Thank you for the opportunity to provide input.

Rapid7 is a cybersecurity and data analytics firm headquartered in Boston, MA, with offices around the world. Rapid7’s solutions manage cybersecurity risk and simplify the complex, allowing security teams to work more effectively with IT and development to reduce vulnerabilities, monitor for malicious behavior, investigate and shut down attacks, and automate routine tasks. Over 8,400 customers worldwide rely on Rapid7 technology, services, and research to improve cybersecurity outcomes, protect consumers, and securely advance their organizations.

Rapid7 supports the directive

Rapid7 applauds CISA for taking this step to require adoption of vulnerability disclosure policies (VDP) in federal agencies. Adoption of VDP and associated coordinated vulnerability disclosure (CVD) processes is a key component of cybersecurity programs for government agencies and other organizations.[2] These processes should already be a consideration for federal agencies since vulnerability disclosure processes are a core practice in the NIST Cybersecurity Framework,[3] which agencies are directed to use for cyber risk management.[4]

Rapid7 believes the draft directive is thoughtful, thorough, and will strengthen the nation's security. Properly implemented VDPs among federal agencies will help reduce delays in discovery and mitigation of vulnerabilities in agencies' assets and improve agencies' internal vulnerability management. In addition, setting a clear baseline for VDPs will provide consistency of expectations for both the public and agencies, facilitating collaboration and avoiding unnecessary conflicts with members of the public that disclose security vulnerabilities.

Agency funding and preparation

It will be critical for CISA, OMB, and Congress to work together to ensure agencies have access to the resources necessary to successfully manage their VDPs. Agencies' implementation of VDPs will require funding, staffing, and other resources for evaluation and mitigation of vulnerabilities. Agencies' resource needs will vary according to such factors as the maturity of the agency's existing vulnerability management program and the quantity of the agency's internet-accessible assets.

Agencies should also be encouraged to take immediate steps to ensure their vulnerability management programs are capable of evaluating disclosures, prioritizing critical vulnerabilities, streamlining mitigation, and communicating with relevant stakeholders. By scanning their assets – using CISA’s vulnerability scanning services as well as agencies' own tools – and going through the process of remediation, agencies can eliminate low-hanging fruit and be better prepared for disclosures submitted through their VDPs.

Alignment with standards

The CISA directive references international standards related to vulnerability disclosure and handling – ISO/IEC 29147 and ISO/IEC 30111. These references are helpful, though Rapid7 recommends that CISA explicitly urge agencies to align their VDPs with both standards to the extent practical.

The ISO/IEC 29147 and ISO/IEC 30111 standards are generally accepted and in wide use. Alignment with these standards will further strengthen norms related to vulnerability disclosure and handling. This consistency is important to set expectations and streamline processes in the public and private sectors, domestically and internationally. Explicit alignment with common best practices is also important as some countries establish regulations regarding vulnerability disclosure and handling that deviate sharply from international standards.[5] We urge CISA to promote alignment with international standards as agencies develop and implement their VDPs.

Flexible agency guidelines

Agencies should be expected to implement a VDP broadly proportionate to their capabilities, but not more. To that end, the CISA directive provides an appropriate degree of flexibility for agencies. Most agencies will be initiating their VDPs for the first time, and the maturity of agencies' vulnerability management practices may vary. In this context, it is appropriate that the directive provides agencies with discretion regarding which internet-accessible systems and services are in scope and which types of testing are authorized. It is also prudent that the VDP template suggests prohibitions on certain researcher practices (DDoS, social engineering, forced physical entry, data exfiltration, etc.). However, it is also appropriate to expect agencies to mature their vulnerability management programs over time, and grow their VDPs to match.
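In practice, a published VDP is often paired with a machine-readable `security.txt` file (RFC 9116) served from the agency's website, so researchers can find the reporting channel and the policy's scope programmatically. The file below is a hypothetical sketch only; the domain, contact address, and policy URL are illustrative assumptions, not drawn from the directive:

```text
# Hypothetical /.well-known/security.txt for an agency VDP (RFC 9116 format)
# Contact and Expires are the two required fields under the RFC.
Contact: mailto:vulnerability-reports@agency.example.gov
Expires: 2026-12-31T23:59:59Z
# Policy points researchers to the full VDP, including scope and authorized testing.
Policy: https://agency.example.gov/vulnerability-disclosure-policy
Preferred-Languages: en
```

Keeping the `Policy` link current as an agency expands its VDP scope helps set researcher expectations without republishing the reporting instructions themselves.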

Rapid7 supports requiring agencies to maintain a reasonable – but not fixed – timeline for resolution of disclosed vulnerabilities. We urge CISA to clarify that the 90-day target deadline for vulnerability mitigation is a guideline, not a fixed deadline applicable to all circumstances.[6] While it is appropriate for agencies to have a target mitigation time, evaluation and mitigation of some vulnerabilities may be too complex to meet the target. ISO/IEC 30111 (2019) does not specify a fixed mitigation timeline, but guides vendors to “balance the need to create a remediation quickly, with the overall testing required to ensure the remediation does not negatively impact affected users due to quality issues.”[7] Missing a fixed deadline may result in loss of trust with vulnerability reporters due to mismatched expectations about the timing of mitigation and public disclosure. Agency VDPs should explicitly note that the target time is not applicable to vulnerabilities that require a longer period to mitigate. However, CISA and OMB must hold agencies to account for repeated unreasonable delays of mitigation of disclosed vulnerabilities.

Rapid7 supports the goal of expanding the scope of agencies’ VDPs to cover all internet-accessible systems over time.[8] However, we urge CISA and OMB to be responsive to any legitimate agency concerns regarding expansion of the VDP scope to include all internet-accessible systems within two years.

Finally, Rapid7 supports CISA's approach of requiring agencies to adopt VDPs while providing agencies with flexibility regarding whether to adopt a bug bounty program. CISA's directive and associated guidance should continue to make clear that agencies are not required to establish bug bounty or other incentive programs. Regardless of whether an agency adopts a bug bounty program, agencies should implement foundational VDP and CVD processes as part of the agency's internal cybersecurity risk management program.[9] Agencies should maintain internal baseline capabilities to receive, evaluate, mitigate, and communicate about vulnerabilities, and outsourcing too much of these processes may hinder the maturity of the agencies' capabilities.

* * *
Thank you for giving us the opportunity to share our views. For any additional questions or feedback, please contact Harley Geiger, Rapid7's Director of Public Policy.


1 Department of Homeland Security, Binding Operational Directive 20-01, Develop and Publish a Vulnerability Disclosure Policy, Nov. 27, 2019, https://cyber.dhs.gov/bod/20-01.
2 Cybersecurity Coalition, Policy Priorities for Coordinated Vulnerability Disclosure and Handling, Feb. 25, 2019, pgs. 9-11, https://www.cybersecuritycoalition.org/policy-priorities.
3 National Institute of Standards and Technology, Framework for Improving Critical Infrastructure Cybersecurity version 1.1, RS.AN-5, pg. 42, Apr. 16, 2018, https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.04162018.pdf.
4 White House, Executive Order 13800, Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure, Sec. 1(c)(ii), May 11, 2017, https://www.whitehouse.gov/presidential-actions/presidential-executive-order-strengthening-cybersecurity-federal-networks-critical-infrastructure.
5 See comments of the Cybersecurity Coalition and the Cyber Threat Alliance, Cybersecurity Vulnerabilities Administrative Regulation, Jul. 17, 2019, https://www.cybersecuritycoalition.org/cybersecurity-vulnerabilities.
6 See CISA draft BOD: "b) Set target timelines for and track: iii. Resolution of vulnerabilities, including notification of the outcome to the reporter." See also footnote 23: "CISA recommends no more than 90 days from the receipt of the report... Complex situations, including those that involve multi-party coordination, might require additional time."
7 See ISO/IEC 30111 (2019), Section 7.2.5 (Remediation Development).
8 See CISA draft BOD: “At 2 years after the issuance of this directive, all internet-accessible systems or services must be in scope of the policy.”
9 Harley Geiger, Prioritizing the Fundamentals of Coordinated Vulnerability Disclosure, Rapid7, Oct. 31, 2018, https://blog.rapid7.com/2018/10/31/prioritizing-the-fundamentals-of-coordinated-vulnerability-disclosure.
