
controlsassessmentspecification's People

Contributors

adammontville, adelinn, annavlz, apipercis, bishopb, cdhgee, frozen425, ginger-anderson, hanks42, realslimslack, wmunyan


controlsassessmentspecification's Issues

1.2: Use a Passive Asset Discovery Tool

--- Issue 1
M2 = total number of assets (given) appears to be the same list as M1 from Control 1.1, i.e., the count of known devices. If that is the case, should the name change to match:
"List of discoverable Assets from manual Inventory"?

--- Issue 2
Using my example from 1.1, M2 = 207.
With respect to M4 and M5, both are listed as given.

I can see M5 being a "given", for example the 7 days that passive data is stored before being renewed. But what about M4, the time an asset appeared: is that measured from the start of the time period? For example, if:
M4 = 00:00:01 Sunday
M5 = 24:00:00 (1 day)
M3 = 03:45:00

then the freshness (Time to Discover) = 3 hours, 45 minutes.
Usable examples like this would be most helpful for assessors, to make sure they are doing the appropriate assessments.
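For instance, a minimal sketch in Python (the timestamps are hypothetical, and I am assuming M4 marks the start of the passive-collection window):

    from datetime import datetime, timedelta

    # Hypothetical values from the example above: M4 is the start of the
    # passive-collection window, M5 its length, M3 when the asset appeared.
    m4_window_start = datetime(2021, 1, 3, 0, 0, 1)    # 00:00:01 Sunday
    m5_window_length = timedelta(days=1)               # 24:00:00 (1 day)
    m3_asset_seen = datetime(2021, 1, 3, 3, 45, 0)     # 03:45:00

    # Freshness ("time to discover") = when the asset appeared minus the
    # start of the window; flag it if it exceeds the retention window.
    freshness = m3_asset_seen - m4_window_start
    print(freshness)                      # 3:44:59, i.e. ~3 hours, 45 minutes
    print(freshness <= m5_window_length)  # True: discovered within the window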

Thank you for the opportunity to contribute.

1.4: Maintain Detailed Asset Inventory

--- Issue 1
For this sub-control, I see this as a consolidation exercise, where the organization would combine the four detection methods into one list, starting with a new set of measures:

M1 = Physical inventory (from my 1.1 comments, the 207 assets: 100 workstations, 100 IP phones, 2 printers, 4 servers, and a router)
M2 = Assets detected via active scan
M3 = Assets detected via passive scan
M4 = Assets detected via DHCP assignment
M5 = Any asset that is not covered by M1-M4
M6 = Total devices (union of M1-M4, tracking all applicable detection methods)

M7 = Coverage of all devices vs. detection method = (union of M2-M5) / M6
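A minimal sketch of this consolidation in Python (the asset sets are hypothetical, and I fold M5 into the total as well, since assets found by no listed method still belong in the consolidated inventory):

    # Hypothetical asset sets, one per detection method.
    m1_physical = {"ws-001", "ws-002", "printer-01", "router-01"}  # manual inventory
    m2_active   = {"ws-001", "ws-002", "server-01"}                # active scan
    m3_passive  = {"ws-002", "printer-01"}                         # passive scan
    m4_dhcp     = {"ws-001", "server-01"}                          # DHCP logs
    m5_other    = {"camera-01"}                  # anything not covered above

    # M6: the consolidated inventory (union across detection methods)
    m6_total = m1_physical | m2_active | m3_passive | m4_dhcp | m5_other

    # M7: coverage of detected assets (M2-M5) against the consolidated list
    m7_coverage = len(m2_active | m3_passive | m4_dhcp | m5_other) / len(m6_total)
    print(f"M6 = {len(m6_total)} assets, M7 = {m7_coverage:.0%}")  # 6 assets, 83%

Here router-01 appears only in the physical inventory, so it holds M7 below 100% until some detection method actually sees it.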

14.9: Not in IG1

control-14/control-14.9.rst states that 14.9 is in implementation groups 1, 2, 3.

CIS Critical Security Controls Version 8 states 14.9 is in IG2, IG3, but not IG1.

3.1: Run Automated Vulnerability Scanning Tools

--- Issue 1
Vulnerability Scanning Coverage - The ratio of endpoints covered by at least one vulnerability scanning tool to the total number of endpoints

While I agree in principle, not all scans are created equal, so we need to define what a scan means at this level. A ping sweep or SYN scan is far different from a credentialed scan, so we should establish a minimal goal for the scan. Since 3.2 covers authenticated scanning, I assume this is an uncredentialed scan at a minimum. So I think service enumeration, OS detection, a TCP or SYN scan, and any other basic uncredentialed information should be required here.

--- Issue 2
Vulnerability Scanner Configuration Quality

This metric goes to my Issue 1: we need some guidance on what the configuration requirements are.

--- Issue 3
The ratio of SCAP-validated scanners to the total number of vulnerability scanners

So if the organization has a web application scanner, Nessus, and Nmap, the total number of scanners is 3 and the SCAP-validated count is 1, for a ratio of 1/3. Does this look correct? Again, I would include examples here.
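A minimal sketch of both ratios as I read them (the scanner names and endpoint coverage below are hypothetical):

    # Hypothetical inventory: which scanners cover each endpoint.
    endpoints = {
        "ws-001": ["nessus"],
        "ws-002": ["nessus", "nmap"],
        "server-01": [],              # covered by no scanner at all
    }
    scanners = {"nessus", "nmap", "webapp-scanner"}
    scap_validated = {"nessus"}       # assuming only Nessus is SCAP-validated

    # Scanning coverage: endpoints with at least one scanner / all endpoints
    coverage = sum(1 for tools in endpoints.values() if tools) / len(endpoints)

    # SCAP ratio: SCAP-validated scanners / total scanners
    scap_ratio = len(scap_validated) / len(scanners)
    print(f"coverage = {coverage:.0%}, SCAP ratio = {scap_ratio:.2f}")  # 67%, 0.33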

Non-Standard File Encodings

The following files have strange encodings, which are unreadable in certain applications:

$ file ControlsAssessmentSpecification/*/*.rst | grep Non-ISO
ControlsAssessmentSpecification/control-3/control-3.8.rst:    Non-ISO extended-ASCII text, with CRLF line terminators
ControlsAssessmentSpecification/control-4/control-4.12.rst:   Non-ISO extended-ASCII text, with CRLF line terminators
ControlsAssessmentSpecification/control-9/control-9.6.rst:    Non-ISO extended-ASCII text, with CRLF line terminators

Could they be fixed? I think it's just a few special characters that are causing this issue.
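One hedged way to repair them in Python, assuming the stray bytes decode cleanly as Windows-1252 (a common cause of the Non-ISO extended-ASCII classification); each resulting diff should still be checked by hand:

    from pathlib import Path

    # Re-encode the three affected files as UTF-8, assuming the odd bytes
    # are Windows-1252 characters (smart quotes, dashes, etc.).
    for name in ["control-3/control-3.8.rst",
                 "control-4/control-4.12.rst",
                 "control-9/control-9.6.rst"]:
        path = Path("ControlsAssessmentSpecification") / name
        text = path.read_bytes().decode("cp1252")
        path.write_bytes(text.encode("utf-8"))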

1.3: Use DHCP Logging to Update Asset Inventory

--- Issue 1
Inputs
Can we add a third input, meaning extracting assets from the log itself?
3. List of assets that are assigned IP addresses from the DHCP server.

Operations
3. Correlate M1 from 1.1 ("List of discoverable Assets from manual Inventory") with the IPs that have been issued by the DHCP server.
4. For each DHCP server, create a union of the DHCP addresses issued.
5. Use this to launch active scanning, to ensure all DHCP-discovered devices are actively scanned.

Measures
M6 = total IPs issued by DHCP
M7 = M1 from 1.1
M8 = M6 / M1
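A minimal sketch of these measures in Python (the lease data is hypothetical):

    # Hypothetical DHCP leases pulled from two DHCP servers' logs.
    leases_by_server = {
        "dhcp-01": {"10.0.0.5", "10.0.0.6", "10.0.0.7"},
        "dhcp-02": {"10.0.1.5", "10.0.0.6"},   # 10.0.0.6 seen by both servers
    }
    m7_inventory_count = 207   # M1 from 1.1: the manual inventory count

    # Operation 4 above: union of addresses issued across all DHCP servers
    m6_issued = set().union(*leases_by_server.values())

    m8_ratio = len(m6_issued) / m7_inventory_count
    print(f"M6 = {len(m6_issued)} leases, M8 = {m8_ratio:.2%}")  # 4 leases, 1.93%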

Thank you for the opportunity to contribute.

4.2: Complication with Salting

https://github.com/CISecurity/ControlsAssessmentSpecification/blob/master/control-4/control-4.2.rst
https://controls-assessment-specification.readthedocs.io/en/latest/control-4/control-4.2.html

For 4.2, the use of salts when generating password hashes would likely cause problems for the proposed approach. Consider adjusting Input 2 to be the actual default passwords, rather than password hashes for those passwords. Then, the Operations could be modified slightly so that the default passwords are hashed in accordance with the appropriate salting procedures for the system in question before making the comparisons to the actual password hashes on the system.
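A minimal sketch of the adjusted comparison, using the Unix crypt scheme as one example (the stored hash is a placeholder, the default-password list is hypothetical, and the standard-library crypt module is Unix-only and removed in Python 3.13):

    import crypt
    from hmac import compare_digest

    # Input 2 as plaintext default passwords, per the suggestion above.
    default_passwords = ["admin", "changeme", "password"]

    # A stored shadow-style hash from the system; $6$ means SHA-512 crypt,
    # and the salt is embedded between the second and third '$'.
    stored_hash = "$6$Qw8sZb1c$placeholder"   # placeholder, not a real hash

    # Re-hash each default password using the salt taken from the stored
    # hash, so the comparison follows the system's own salting procedure.
    for pw in default_passwords:
        if compare_digest(crypt.crypt(pw, stored_hash), stored_hash):
            print(f"default password in use: {pw!r}")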

4.6

revised -- see the internal notes.

3.2: Perform Authenticated Vulnerability Scanning

--- Issue 1
The issue of what counts as a credentialed scan has been coming up a lot.

The metrics you mention are incomplete; at this point you really need to combine many of the other metrics together.
At a high level you need these metrics (a tallying sketch follows the links below):

  1. Systems scanned (all supported protocols for the OS type).
  2. Systems scanned, but the OS has a low confidence level - this is very important, as the OS detection determines the credentials used, and if the OS detection is wrong, the authentication will be too.
  3. Systems scanned where the OS has a high confidence level - these will most likely have the correct creds.
  4. Systems scanned of a certain type where the OS detection is high - the issue here is that two systems, for example a Cisco router and Debian, both use SSH, but the creds are not the same, so you will want to separate out those aspects.
  5. Systems scanned where no authentication was attempted - in this case, the OS was detected and the scanner could have used creds, but for some reason no authentication attempt was made.
  6. Systems scanned where authentication was attempted but the creds are bad - this helps people identify a misconfiguration or bad creds.
  7. Systems scanned where authentication was good, but without root access - the creds used worked, but did not have privileges to run the needed scan.
  8. Systems scanned where authentication was good, but some checks failed - this could be permissions at the file level, missing files, etc.
  9. Systems scanned where authentication was good and all checks were completed without errors.

The sub-control should really have metrics for all these instances, and then give examples at various levels for several common OSes. Listed below are a few links that explain these steps in more detail using Tenable.sc.

https://www.tenable.com/assurance-report-cards/tracking-debian-ubuntu-and-kali-authentication-scan-results
https://www.tenable.com/assurance-report-cards/tracking-cisco-juniper-and-paloalto-authentication-scan-results
https://www.tenable.com/assurance-report-cards/tracking-solaris-authentication-results
https://www.tenable.com/assurance-report-cards/tracking-red-hatcentos-authentication-scan-results
https://www.tenable.com/assurance-report-cards/tracking-windows-authentication-scan-results
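A minimal sketch of how results might be tallied into the categories above (the record fields are invented for illustration, not taken from any particular scanner's output, and the mapping is simplified to one bucket per system):

    from collections import Counter

    # Hypothetical per-system scan records.
    results = [
        {"os_confidence": "high", "auth_attempted": True,
         "auth_ok": True, "privileged": True, "checks_failed": 0},
        {"os_confidence": "low", "auth_attempted": True,
         "auth_ok": False, "privileged": False, "checks_failed": 0},
        {"os_confidence": "high", "auth_attempted": False,
         "auth_ok": False, "privileged": False, "checks_failed": 0},
    ]

    def categorize(r):
        """Map one scan record onto the numbered categories above."""
        if r["os_confidence"] == "low":
            return 2                  # scanned, but low OS confidence
        if not r["auth_attempted"]:
            return 5                  # no authentication attempted
        if not r["auth_ok"]:
            return 6                  # authentication attempted, bad creds
        if not r["privileged"]:
            return 7                  # authenticated, but not root
        if r["checks_failed"]:
            return 8                  # authenticated, some checks failed
        return 9                      # authenticated, all checks clean

    print(Counter(categorize(r) for r in results))  # e.g. Counter({9: 1, 2: 1, 5: 1})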

New build missing

Hello

The current build of the site does not include some older commits:

200817e

The affected content is therefore still wrong on the site.

Best Regards
Jens

Standardize inputs and measures across sub-controls

Problem

In the specification, each Sub-Control contains a set of Inputs and a set of Measures. Inputs are presented as ordered lists, so that the specification can reference them as "Input x". Measures are denoted as Mx, where x is a positive integer. Input and measure labels are effectively reset for each Sub-Control: the input ordered list at the beginning of each Sub-Control restarts numbering at 1, and an M1 exists in every Sub-Control. More often than not, similarly labeled measures carry different meanings.

From time to time, the same inputs and measures may be used across Sub-Controls. For example:

  • Sub-Control 3.1 includes the following measure: M3 = Count of endpoints (from Input 1), where Input 1 is the "List of endpoints"
  • Sub-Control 4.8 includes the following measure: M7 = Count of endpoints from Input 1, where Input 1 is the "Endpoint Inventory"

In the above example, the inputs for Sub-Controls 3.1 and 4.8 are synonymous, but clearly not expressed using the same language. While "list of endpoints" is intended to be equivalent to "endpoint inventory", the specification needs to use the same language.

It then follows that M3 of Sub-Control 3.1 and M7 of Sub-Control 4.8 are clearly the same measure.

The fact that the same inputs and measures are referenced differently across Sub-Controls makes implementation more challenging.

Proposal

The specification may benefit from standardizing inputs and measures across Sub-Controls using variables.

Input variables may be denoted by Ix, where x is a positive integer starting with 1 for the first input of the first Sub-Control, and will be incremented by one for each new input across Sub-Controls.

Measure variables may be denoted by Mx where x is a positive integer starting with 1 for the first measure of the first Sub-Control, and will be incremented by one for each new measure across Sub-Controls.
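A minimal sketch of the proposal, assuming inputs and measures are first normalized to one canonical phrase and then registered globally (everything here is hypothetical):

    class Registry:
        """Hand out one globally unique variable name per canonical description."""
        def __init__(self, prefix):
            self.prefix, self.vars = prefix, {}

        def get(self, description):
            key = description.strip().lower()   # canonicalization would go here
            if key not in self.vars:
                self.vars[key] = f"{self.prefix}{len(self.vars) + 1}"
            return self.vars[key]

    inputs, measures = Registry("I"), Registry("M")

    # "List of endpoints" (3.1) and "Endpoint Inventory" (4.8) would first be
    # normalized to the same canonical phrase, and then always map to one Ix.
    print(inputs.get("Endpoint inventory"))    # I1
    print(inputs.get("endpoint inventory"))    # I1 again, reused by 4.8
    print(measures.get("Count of endpoints"))  # M1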

Scanning Frequency vs Discovered Timestamp

Controls

Control 1.6
Control 2.6

Comment

Both of these measures are based on the frequency of the scan for approved vs. unapproved software, which measures the process (scanning frequency) rather than the outcome (unapproved software being removed).

An alternative method focused on the outcome, which could also accommodate more "near real time" data collection, could be a comparison between the "Initial Discovery DateTime" and the "Last Seen DateTime" of unapproved software. This would just require that a datetime stamp be added to any asset whenever it is scanned and added to the inventory.

Recommendation

Change the measure from being focused on scan frequency to the difference between the Initial Discovery and Last Seen datetimes; the measure would then be based on how many of those differences fall within the "acceptable" range.
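A minimal sketch of the recommended measure (the records and the acceptable window are hypothetical):

    from datetime import datetime, timedelta

    # Hypothetical unapproved-software records with discovery timestamps.
    records = [
        {"name": "toolbar.exe", "first_seen": datetime(2021, 3, 1, 9, 0),
         "last_seen": datetime(2021, 3, 2, 9, 0)},    # removed within a day
        {"name": "game.exe", "first_seen": datetime(2021, 2, 1, 9, 0),
         "last_seen": datetime(2021, 3, 1, 9, 0)},    # lingered for a month
    ]
    acceptable = timedelta(days=7)   # hypothetical "acceptable" removal window

    # Fraction of unapproved items whose lifetime (Last Seen minus Initial
    # Discovery) fell within the acceptable range.
    within = sum(1 for r in records
                 if r["last_seen"] - r["first_seen"] <= acceptable)
    print(f"{within}/{len(records)} within range")   # 1/2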

1.1: Utilize an Active Discovery Tool

--- Issue 1
The description of the "tools that are compliant", i.e., "M3 = List of compliant tools" and "M4 = List of non-compliant tools", is extremely unclear.

Is this talking about methods of active discovery, authentication protocols, etc.?

If so: if the tool does ICMP, TCP SYN scan, SMB login, and HTTP login, and the asset accepts all 4 types of authentication, we will have 100% for M8.

But if the system also supports MySQL login, then we are at 80% for M8 (4 of 5).

I am just very unclear as to what M3, M4, and M8 are; the sketch below shows one possible reading.
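A minimal sketch of my reading, assuming M8 is the fraction of an asset's supported methods that the tool exercises (all names are hypothetical):

    # Hypothetical capability sets: what the tool does vs. what the asset supports.
    tool_methods  = {"icmp", "tcp-syn", "smb-login", "http-login"}
    asset_methods = {"icmp", "tcp-syn", "smb-login", "http-login", "mysql-login"}

    # My reading of M8: coverage of the asset's supported methods by the tool.
    m8 = len(tool_methods & asset_methods) / len(asset_methods)
    print(f"M8 = {m8:.0%}")   # 80% once MySQL login enters the picture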

--- Issue 2
For M1 (List of discovered assets), I assume this could be a list of known IP addresses in use, or the number of devices purchased. For example, if this is a new company and we have just purchased 100 workstations, 100 IP phones, 2 printers, 4 servers, and a router, we would have 207 discovered assets. The objective of M5 is to scan the network and get a count of 207.

If this is the case, should we rename M1 to "List of discoverable Assets from manual Inventory"? That would leave M2 as the delta between the ICMP scan (M5) and M1.

--- Issue 3
To assist with clarity, providing discussion examples, similar to what I did in Issue 2, would go a long way in helping the reader understand what is being described here.

Thank you for the opportunity to contribute.

Hidden Sub-Controls and Wrong Names

During an activity I found that the CIS sub-controls 18.6 through 18.11 exist in the GitHub documentation:
GitHub 18.6
GitHub 18.7
GitHub 18.8
GitHub 18.9
GitHub 18.10
GitHub 18.11

They were last updated 4 years ago, which makes them somewhat obsolete compared to the official CIS Controls documentation, which only lists sub-controls 18.1 to 18.5.

The problem is that although these pages do not appear in the index of the official documentation, the GitHub repository still contains them, which leads to confusion.

The bigger problem is that the pages still exist in the official documentation but are hidden:
Readthedocs 18.6
Readthedocs 18.7
Readthedocs 18.8
Readthedocs 18.9
Readthedocs 18.10
Readthedocs 18.11

Is it possible to remove them so that no confusion arises? In the current state, it seems like these six sub-controls are semi-official.

Additionally, I found that the names of some sub-controls in the Read the Docs documentation do not match.

This applies to sub-control:
4.3 (https://controls-assessment-specification.readthedocs.io/en/stable/control-4/control-4.3.html#), where the index title (which comes from index.rst) says "Ensure the Use of Dedicated Administrative Accounts", but the page itself is titled "4.3: Configure Automatic Session Locking on Enterprise Assets".

Thank you.

Best Regards
