
wg-policy-prototypes's Introduction

Policy Prototypes

A place for policy work group related proposals and prototypes.

⚠️ Warning: Code and other artifacts in this repository are prototypes and proposals, work-in-progress, not endorsed by any Kubernetes SIG, and not recommended for production use.

Community, discussion, contribution, and support

Learn how to engage with the Kubernetes community on the community page.

You can reach the maintainers of this project at:

Join this repo

File a request at https://github.com/kubernetes/org to be added to @kubernetes-sigs, using the Template.

Once you're a member and are ready to review other people's code, file a PR against our OWNERS file; an approver will need to approve you.

Once you're a reviewer, you can request to become an approver by filing a PR against our OWNERS file; another approver will need to approve you.

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.

Projects - also see Project Board

Backlog or Retired

  • Guardian - Formal verification of policy

Additional Information

wg-policy-prototypes's People

Contributors

adeniyistephen, anusha94, anushkamittal20, degenaro, erikgb, fjogeleit, gparvin, haardikdharma10, jimbugwadia, k8s-ci-robot, mritunjaysharma394, nikhita, qianlei90, realshuting, rficcaglia, rinkiyakedad, sachinkumarsingh092, vishal-chdhry, yindia


wg-policy-prototypes's Issues

Regula Adapter

Regula is a tool that evaluates CloudFormation and Terraform infrastructure-as-code for potential AWS, Azure, and Google Cloud security and compliance violations prior to deployment.

Regula supports the following file types:

  • CloudFormation JSON/YAML templates
  • Terraform HCL code
  • JSON-formatted Terraform plans

Regula includes a library of rules written in Rego, the policy language used by the Open Policy Agent (OPA) project.

Regula produces a JSON output report; see https://regula.dev/report.html.

This can be processed to produce PolicyReport resources via an adapter or script.
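
For illustration, a minimal adapter sketch in Go. It assumes Regula's JSON report shape (a top-level rule_results array with rule_id, rule_summary, and rule_result fields, per the report page above) and uses simplified stand-in types rather than the generated PolicyReport API structs:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// regulaReport is a simplified stand-in for Regula's JSON report.
type regulaReport struct {
	RuleResults []struct {
		RuleID      string `json:"rule_id"`
		RuleSummary string `json:"rule_summary"`
		RuleResult  string `json:"rule_result"` // PASS, FAIL, or WAIVED
	} `json:"rule_results"`
}

// policyReportResult is a simplified stand-in for the PolicyReportResult type.
type policyReportResult struct {
	Policy  string `json:"policy"`
	Rule    string `json:"rule"`
	Message string `json:"message"`
	Result  string `json:"result"` // pass, fail, warn, error, skip
	Source  string `json:"source"`
}

func main() {
	raw, err := os.ReadFile(os.Args[1]) // e.g. the output of a Regula JSON report
	if err != nil {
		panic(err)
	}
	var report regulaReport
	if err := json.Unmarshal(raw, &report); err != nil {
		panic(err)
	}
	results := make([]policyReportResult, 0, len(report.RuleResults))
	for _, rr := range report.RuleResults {
		result := strings.ToLower(rr.RuleResult) // PASS -> pass, FAIL -> fail
		if result == "waived" {
			result = "skip" // WAIVED has no direct counterpart; skip is closest
		}
		results = append(results, policyReportResult{
			Policy:  rr.RuleID,
			Rule:    rr.RuleSummary,
			Message: rr.RuleSummary,
			Result:  result,
			Source:  "regula",
		})
	}
	out, _ := json.MarshalIndent(results, "", "  ")
	fmt.Println(string(out))
}

A real adapter would then wrap these results in a PolicyReport (or ClusterPolicyReport) object and create or update it via the Kubernetes API.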

Policy and rule names are generic across multiple policy violations

Today we use a generic name for the policy and rule across multiple policy violations. We need a focused policy and rule name for each policy violation:

- category: CIS Benchmarks
  message: Ensure that the controller manager pod specification file permissions are
    set to 644 or more restrictive (Automated)
  policy: Master Node Security Configuration
  properties:
    AuditConfig: ""
    AuditEnv: ""
    IsMultiple: "false"
    actual_value: permissions=600
    audit: /bin/sh -c 'if test -e /etc/kubernetes/manifests/kube-controller-manager.yaml;
      then stat -c permissions=%a /etc/kubernetes/manifests/kube-controller-manager.yaml;
      fi'
    expected_result: permissions has permissions 600, expected 644 or more restrictive
    index: 1.1.3
    reason: ""
    remediation: |
      Run the below command (based on the file location on your system) on the master node.
      For example,
      chmod 644 /etc/kubernetes/manifests/kube-controller-manager.yaml
    test_info: |
      Run the below command (based on the file location on your system) on the master node.
      For example,
      chmod 644 /etc/kubernetes/manifests/kube-controller-manager.yaml
    type: ""
  result: pass
  rule: Master Node Configuration Files
  scored: true
  source: ""
  timestamp:
    nanos: 0
    seconds: 0
- category: CIS Benchmarks
  message: Ensure that the controller manager pod specification file ownership is
    set to root:root (Automated)
  policy: Master Node Security Configuration
  properties:
    AuditConfig: ""
    AuditEnv: ""
    IsMultiple: "false"
    actual_value: root:root
    audit: /bin/sh -c 'if test -e /etc/kubernetes/manifests/kube-controller-manager.yaml;
      then stat -c %U:%G /etc/kubernetes/manifests/kube-controller-manager.yaml; fi'
    expected_result: '''root:root'' is present'
    index: 1.1.4
    reason: ""
    remediation: |
      Run the below command (based on the file location on your system) on the master node.
      For example,
      chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml
    test_info: |
      Run the below command (based on the file location on your system) on the master node.
      For example,
      chown root:root /etc/kubernetes/manifests/kube-controller-manager.yaml
    type: ""
  result: pass
  rule: Master Node Configuration Files
  scored: true
  source: ""
  timestamp:
    nanos: 0
    seconds: 0
- category: CIS Benchmarks
  message: Ensure that the scheduler pod specification file permissions are set to
    644 or more restrictive (Automated)
  policy: Master Node Security Configuration
  properties:
    AuditConfig: ""
    AuditEnv: ""
    IsMultiple: "false"
    actual_value: permissions=600
    audit: /bin/sh -c 'if test -e /etc/kubernetes/manifests/kube-scheduler.yaml; then
      stat -c permissions=%a /etc/kubernetes/manifests/kube-scheduler.yaml; fi'
    expected_result: permissions has permissions 600, expected 644 or more restrictive
    index: 1.1.5
    reason: ""
    remediation: |
      Run the below command (based on the file location on your system) on the master node.
      For example,
      chmod 644 /etc/kubernetes/manifests/kube-scheduler.yaml
    test_info: |
      Run the below command (based on the file location on your system) on the master node.
      For example,
      chmod 644 /etc/kubernetes/manifests/kube-scheduler.yaml
    type: ""
  result: pass
  rule: Master Node Configuration Files
  scored: true
  source: ""
  timestamp:
    nanos: 0
    seconds: 0

Inconsistent field names

I noticed in the documentation for v1alpha2 that the JSON field names and the names used in the comments are inconsistent here:

// Subjects is an optional reference to the checked Kubernetes resources
// +optional
Subjects []*corev1.ObjectReference `json:"resources,omitempty"`
// SubjectSelector is an optional label selector for checked Kubernetes resources.
// For example, a policy result may apply to all pods that match a label.
// Either a Subject or a SubjectSelector can be specified. If neither are provided, the
// result is assumed to be for the policy report scope.
// +optional
SubjectSelector *metav1.LabelSelector `json:"resourceSelector,omitempty"`

The Go fields and comments use Subject, but the JSON fields use resource.
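
One possible fix, sketched below, is to rename the JSON tags to match the Go names (renaming the Go fields to Resources/ResourceSelector instead would equally resolve the inconsistency):

// Subjects is an optional reference to the checked Kubernetes resources
// +optional
Subjects []*corev1.ObjectReference `json:"subjects,omitempty"`
// SubjectSelector is an optional label selector for checked Kubernetes resources.
// +optional
SubjectSelector *metav1.LabelSelector `json:"subjectSelector,omitempty"`

Note that renaming serialized field names is a breaking change for existing consumers, so it would have to land in a new API version.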

Timestamp in the report is always set to 0

- category: CIS Benchmarks
  message: Apply Security Context to Your Pods and Containers (Manual)
  policy: Kubernetes Policies
  properties:
    AuditConfig: ""
    AuditEnv: ""
    IsMultiple: "false"
    actual_value: ""
    audit: ""
    expected_result: ""
    index: 5.7.3
    reason: Test marked as a manual test
    remediation: |
      Follow the Kubernetes documentation and apply security contexts to your pods. For a
      suggested list of security contexts, you may refer to the CIS Security Benchmark for Docker
      Containers.
    test_info: |
      Follow the Kubernetes documentation and apply security contexts to your pods. For a
      suggested list of security contexts, you may refer to the CIS Security Benchmark for Docker
      Containers.
    type: manual
  result: warn
  rule: General Policies
  source: ""
  timestamp:
    nanos: 0
    seconds: 0
- category: CIS Benchmarks
  message: The default namespace should not be used (Manual)
  policy: Kubernetes Policies
  properties:
    AuditConfig: ""
    AuditEnv: ""
    IsMultiple: "false"
    actual_value: ""
    audit: ""
    expected_result: ""
    index: 5.7.4
    reason: Test marked as a manual test
    remediation: |
      Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
      resources and that all new resources are created in a specific namespace.
    test_info: |
      Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
      resources and that all new resources are created in a specific namespace.
    type: manual
  result: warn
  rule: General Policies
  source: ""
  timestamp:
    nanos: 0
    seconds: 0

Kube Bench Adapter PolicyReport vs ClusterPolicyReport

Currently the Kube Bench Adapter creates a PolicyReport in the namespace of the CronJob.

My expectation from the work with Kyverno would be a ClusterPolicyReport, because the PolicyReportResults of the individual checks do not relate to resources in a namespace but to the entire cluster.

Possible CRD consumer: Cloud Custodian

I know CNCF sig-security has some GHI templates we should probably use .... and we all probably have some from our day jobs...for now I'll just steal...

Description: what's your idea?

To consume the policy report in a meaningful way as both a practical example and a template for other projects to use.

Impact: Describe the customer impact of the problem. Who will this help? How will it help them?

This will help both the operator community using/evaluating tools to assess compliance, and developers looking to add/maintain/enhance policy compliance tools for collecting policy assessment results and then doing something with them.

Scope: How much effort will this take? It's OK to provide a range of options, or "not yet determined" for initial proposals. Feel free to include proposed tasks below or link a Google doc.

TBD: SWAG: 2 weeks design/requirements, 2 weeks coding and testing, 2 weeks CI/CD and misc, 2 weeks beta testing and bug fixing IRL

TO DO

  • PR Reviewers/approvers
  • Documentation authors/reviewers
  • Testers

Protect Kubernetes community owned CRD

When deploying the current report definition to a cluster, Kubernetes warns with a MissingAnnotation condition:

  - lastTransitionTime: "2020-11-11T02:28:40Z"
    message: protected groups must have approval annotation "api-approved.kubernetes.io",
      see https://github.com/kubernetes/enhancements/pull/1111
    reason: MissingAnnotation
    status: "False"
    type: KubernetesAPIApprovalPolicyConformant

Per kubernetes/enhancements#1111, any CRD in group k8s.io should have annotation api-approved.kubernetes.io defined.

review OSCAL CRD design alignment

see: usnistgov/OSCAL#900

We are working through these design considerations for Kubernetes but don't seem to be quite as far down the road...

One question is how to align control parameters, which often (but not always) align with parameter statements.

So if I grok this, the goal is to have a direct mapping of component and assessment parameter values that can be compared to actual values in the assessment results, or maybe even just to compare the assessment plan values to the control implementation (or both).

A component (internal self-assessment) or SAP (3rd-party assessment) needs to know to map certain key parameters to organizationally defined baselines, and variables (inputs in their parlance) need to reference the actual parameter or value.

In other words: compare actual system values to expected values as defined in control implementation requirements.

For example, from their InSpec prototype:

"extension-name": "maxlogins", "formal-name": "Maximum Concurrent Logins", "description": "\n For AC-10 and additional controls, indicate which parameter for the implemented requirement of a control implementation is relevant.\n Do not put the value, identify the parameter by its identifier, and the tool will bind to its @value, and can be reused across implemented requirements.\n ", "bindings": [ { "pattern": "o:component/os:control-implementation/o:implemented-requirements/o:prop[@class='maxlogins']" } ],

We have discussed how to reference assessment inputs/rules/variables too, e.g. how to map the policy rule and parameters to the component requirement. I think the compliance operator would still be responsible for all this.

Also of relevance:
usnistgov/OSCAL#841

This is maybe more "XCCDF" alignment, but adding it here for review.

Policy Violation CRD, controller, and adapters

The Policy Violation CRD proposal aims to standardize how different Kubernetes policy management tools (OPA, Falco, Kyverno, kube-bench, etc.) can report non-conformance. The goal is to allow consistent reporting and management of policy violations.

There are three potential work areas for this proposal (a minimal CRD sketch follows the list):

  1. Define a Policy Violation CRD
  2. Write a controller to manage policy violation life-cycles
  3. Write adapters for select policy management projects to generate Policy Violation resources
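
As a starting point for work area 1, a hypothetical kubebuilder-style sketch of the type; names and fields are illustrative only, not a settled design:

package v1alpha1

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// PolicyViolation records a single non-conformance reported by a policy engine.
type PolicyViolation struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// Engine identifies the reporting tool, e.g. "opa", "falco", "kyverno".
	Engine string `json:"engine"`
	// Policy and Rule name the violated check.
	Policy string `json:"policy"`
	Rule   string `json:"rule,omitempty"`
	// Severity is e.g. low, medium, high, or critical.
	Severity string `json:"severity,omitempty"`
	// Message is a human-readable description of the violation.
	Message string `json:"message,omitempty"`
	// Resources references the offending Kubernetes objects.
	Resources []corev1.ObjectReference `json:"resources,omitempty"`
}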

Control Catalog for Policy Mappings

This codifies some of the discussions had in the wg-policy Wednesday meetings, specifically during the OSCAL alignment project, and proposes ways to embody those ideas. Many of these fell into the bucket "some operator outside the PolicyReport CR should worry about these"...so now we can worry :)

As part of policy frameworks or requirements like CSA (STAR/CCM) and things like NIST 800-53/171/190, DoD DISA STIG, or risk management frameworks like NIST RMF - policy is not applied in a vacuum. Policy is itself a control or supportive of a control requirement or implementation to manage risks and threats. Policy-as-code even more so.

For a system such as Kubernetes, if one just randomly picks a set of configuration items (or better, uses a tool like the CIS Benchmarks) to write policy rules/checks and reports the results, one can't determine whether the requirements of the framework have been met. In practice, one can't really say much of anything about the security or compliance posture of the cluster other than that some variety of rules are passing. To have an effective, auditable, or even understandable cluster policy implementation, you need to map policy to a set of controls and vice versa.

What is unique to a declarative system like Kubernetes is that the desired state is expressed by API calls (ignoring workload container breakouts for now). The components (or assets) are already inventoried by definition in kube-apiserver/etcd. Thus the controls just need to be defined, and they can be applied to all objects with labels or annotations.

It seems to me that to more efficiently apply, maintain, and report policy in the context of a dynamic (but API-defined and enumerable) system like Kubernetes, it would be fairly simple to define a CRD for a control definition and perhaps a catalog resource. This aligns with OSCAL, of course, which makes it very NIST-compatible from day one...and NIST publishes a nice YAML and JSON catalog we can use as a test harness.

These CRs can be used by policy engines to map (via tags or namespaces or annotations or ... ) policy code rules to object configuration checks both at deploy time (admission control) or at runtime (drift detection).

RHACM and Compliance Operator do have controls defined in OpenControl yaml and these could very easily be adapted.

TBD: Does Argo have something like this already? Cloud Custodian? others?

If there are existing CRDs for this that we can contribute to - that's just fine with me. We don't have to reinvent or fork the wheel.

Anyway, once this catalog and the control objects exist, they can be queried for status/state and whatever other data is needed to quickly assess compliance and security (those are two different, sometimes overlapping, things). Suggested control object data (loosely adapted from NIST 800-53A guidance; a type sketch follows the list):

  • state/behavior specifications (presumably these are tagged policy rules)
    • with parameters - see #50
  • differences between desired and actual state/behavior (these would be tagged PolicyReports)
  • metadata to facilitate analysis and risk-based decision making
  • threshold for "completeness" (ie defect or failure rate) of state/behavior tests/checks
  • timeliness/frequency of policy checks
  • metadata that adjusts or normalizes priority of each control requirement (and thus the risk and impact scores for a given Policy Report?)
  • RACI metadata for human owners of controls and PolicyReports?
  • metadata about confidentiality, integrity, and availability impact?
  • failure modes?
  • TTPs or IOCs eg MITRE ATTACK links?
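
To make the list above concrete, a hypothetical Go sketch of such control types; every name and field here is illustrative and not taken from any published OSCAL schema:

package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// Control captures one control requirement and how it maps to policy code.
type Control struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// ID is the catalog identifier, e.g. "AC-10" in NIST 800-53.
	ID string `json:"id"`
	// PolicyRules are tags or selectors for the policy rules implementing this control.
	PolicyRules []string `json:"policyRules,omitempty"`
	// Parameters are organizationally defined values the rules bind to.
	Parameters map[string]string `json:"parameters,omitempty"`
	// FailureThreshold is the tolerated defect/failure rate for the checks.
	FailureThreshold string `json:"failureThreshold,omitempty"`
	// Owner is RACI-style metadata for the human owner of the control.
	Owner string `json:"owner,omitempty"`
}

// ControlCatalog groups controls, e.g. a NIST 800-53 baseline.
type ControlCatalog struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Controls []Control `json:"controls,omitempty"`
}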

Starboard CRDs should be reviewed wrt. PolicyReport CRDs

Are there gaps we need to fill in? This comes from a wg-policy Slack discussion here.

See also: https://aquasecurity.github.io/starboard/v0.10.1/crds/#vulnerabilityreport

But to summarize:

  1. Should/could Starboard replace VulnerabilityReport, ConfigAuditReport, CISKubeBenchReport, and KubeHunterReport CRDs with "generic" [Cluster]PolicyReport?
  2. does that include a contribution to the Starboard project with a POC? or
  3. an adapter? (see #54 #51)

CIS Benchmarks -> Policy Report Generator

The Policy WG is defining a Policy Report CRD to help unify outputs from multiple policy engines. This helps cluster-admins with managing clusters as policy results can be viewed and managed easily as Kubernetes resources from any Kubernetes management tool (kubectl, dashboard, Octant, etc.)

The project scope is to create a tool that periodically runs a CIS benchmark check like kube-bench and produces a policy report. Additional options could be to integrate the Policy Report into OSS upstream tools like dashboard and/or Octant.

Add time fields

Some choices:

  • lastSeen
  • firstSeen
  • since

Should these be at a result level?
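
For illustration, a purely hypothetical sketch of result-level time fields on the Go struct, using the names from the options above:

package v1alpha2

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// PolicyReportResult with hypothetical result-level time fields added;
// existing fields are elided for brevity.
type PolicyReportResult struct {
	// FirstSeen is when this result was first observed.
	FirstSeen metav1.Timestamp `json:"firstSeen,omitempty"`
	// LastSeen is when this result was most recently observed.
	LastSeen metav1.Timestamp `json:"lastSeen,omitempty"`
}

A report-level time can already be approximated by metadata.creationTimestamp, which is an argument for putting these at the result level.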

Sample policy resource not getting created

When following the Installing guide and adding a sample policy resource to the cluster, the command fails with the following output:

error: unable to recognize "https://github.com/kubernetes-sigs/wg-policy-prototypes/raw/master/policy-report/samples/sample-cis-k8s.yaml": no matches for kind "PolicyReport" in version "policy.kubernetes.io/v1alpha1"

If I'm not mistaken, this is because the sample resource file uses an incorrect apiVersion.
Please let me know if I've got this wrong. I'd like to raise a PR for this once we can confirm that this is the cause. Thanks :)

Image Scanner -> Policy Report Adapter

Develop an adapter that executes a periodic or event-based image scan and converts the results to generate or update a Policy Report custom resource based on the WG Policy CRD (a conversion sketch follows the task list).

The tasks involved are:

  1. Research available OSS image scanners like Clair and Trivy
  2. Design how the scan should be run, i.e., when a new image pull happens or periodically
  3. Run the scan as a CronJob and produce the Policy Report CRD
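
As a rough sketch of task 3's conversion step, a hedged Go fragment mapping scanner findings to result entries; the finding type is a simplified assumption about scanner output, not the exact Clair or Trivy schema:

package main

import "fmt"

// finding is a simplified, assumed shape of one scanner finding.
type finding struct {
	VulnerabilityID string // e.g. CVE-2021-23017
	Severity        string // CRITICAL, HIGH, MEDIUM, LOW, UNKNOWN
	Title           string
}

// policyReportResult is a simplified stand-in for the PolicyReportResult type.
type policyReportResult struct {
	Policy   string
	Rule     string
	Message  string
	Result   string // fail for findings; pass when the image is clean
	Severity string
	Source   string
}

// toResults converts scanner findings for one image into report results.
func toResults(image string, findings []finding) []policyReportResult {
	if len(findings) == 0 {
		return []policyReportResult{{
			Policy: "image-scan", Rule: image, Result: "pass", Source: "image-scanner",
		}}
	}
	results := make([]policyReportResult, 0, len(findings))
	for _, f := range findings {
		results = append(results, policyReportResult{
			Policy:   "image-scan",
			Rule:     f.VulnerabilityID,
			Message:  f.Title,
			Result:   "fail",
			Severity: f.Severity,
			Source:   "image-scanner",
		})
	}
	return results
}

func main() {
	rs := toResults("nginx:1.21", []finding{
		{VulnerabilityID: "CVE-2021-23017", Severity: "HIGH", Title: "nginx resolver off-by-one"},
	})
	fmt.Printf("%+v\n", rs)
}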

Install failed: the YAML file is not found.

It seems that the file's name or location has changed.

+ kubectl create -f https://github.com/kubernetes-sigs/wg-policy-prototypes/raw/master/policy-report/crd/policy.kubernetes.io_policyreports.yaml
error: unable to read URL "https://github.com/kubernetes-sigs/wg-policy-prototypes/raw/master/policy-report/crd/policy.kubernetes.io_policyreports.yaml", server reported 404 Not Found, status code=404

Checklist for writing PolicyReport KEP

The Policy WG presented to SIG-Auth on 7th Dec, 2022 and we have their approval and support to submit a KEP to promote the Policy Report CRD to an official Kubernetes API! 🎉

We want to clean up the APIs and make any other changes that are needed before we write and submit the KEP. The checklist below (not final) tracks all such items. Feel free to update this list or comment if there are any concerns.

Tasks

  • fix #96
  • communicate to all stakeholders (see below) about the KEP proposal and get any feedback on the PolicyReport API
  • any changes to APIs will now be done for v1beta1
  • follow k8s API best practices (K8s API Conventions)
  • raise PRs in repositories consuming PolicyReport CRD with any changes
  • revisit project structure for apis
  • identify KEP prerequisites from this doc

Stakeholders

kube-bench adapter always runs the kube-bench job in the default namespace

The kube-bench adapter always runs the kube-bench job in the default namespace, even if the adapter is deployed in a different namespace. It should be possible to specify the namespace where the kube-bench job is deployed, and by default it should be deployed in the same namespace as the adapter (the CronJob), since that is where the cluster role and cluster role bindings are created.
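
One common in-cluster pattern the adapter could adopt, sketched here: honor an explicit override, otherwise fall back to the namespace the adapter pod itself runs in. The JOB_NAMESPACE variable is a hypothetical name; the service account path is the standard in-cluster mount:

package adapter

import (
	"os"
	"strings"
)

// jobNamespace picks where to run the kube-bench job: an explicit override
// wins, then the adapter pod's own namespace, then "default" as a last resort.
func jobNamespace() string {
	if ns := os.Getenv("JOB_NAMESPACE"); ns != "" {
		return ns
	}
	data, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/namespace")
	if err == nil {
		if ns := strings.TrimSpace(string(data)); ns != "" {
			return ns
		}
	}
	return "default"
}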

fix Timestamp JSON

The JSON for the timestamp field is incorrect:

	// Timestamp indicates the time the result was found
	Timestamp metav1.Timestamp `json:"metadata.creationTimestamp,omitempty"`

It should be:

	// Timestamp indicates the time the result was found
	Timestamp metav1.Timestamp `json:"timestamp,omitempty"`

Inconsistent file versioning of controller-gen

The version of controller-gen is different in the Makefile (v0.2.5) and in crd/wgpolicyk8s.io_policyreports.yaml (v0.2.6). This causes the version in crd/wgpolicyk8s.io_policyreports.yaml to be changed to 0.2.5.

Use wg-policy-prototypes as Go module to import generated code

To implement an adapter out of tree, or just use the CRDs outside of this repo, I'll have to generate client code from the API structs. This is probably something that multiple projects would do. Do you think it makes sense to check in code generated by the client-go generator and update it whenever there's a change in the CRDs?

My use case is aquasecurity/starboard#601, where we'd like to see if we can replace the VulnerabilityReport CRD with PolicyReport, but I don't want to copy the code-gen scaffolding like here into our repo. Preferably I'd just import the wg-policy-prototypes repo as a Go module and use the generated clientset to access PolicyReport in a programmable way.
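
For illustration, the consuming side might look like this if a generated clientset were published from this repo; the import path and the Wgpolicyk8sV1alpha2 accessor are assumptions that depend on how code generation would be configured:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"

	// Hypothetical import path for a generated clientset in this repo.
	policyclient "sigs.k8s.io/wg-policy-prototypes/policy-report/pkg/generated/clientset/versioned"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := policyclient.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// List PolicyReports in the default namespace.
	reports, err := client.Wgpolicyk8sV1alpha2().PolicyReports("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, r := range reports.Items {
		fmt.Println(r.Name, r.Summary)
	}
}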

CRD PolicyReport "source" property

Hey everyone,

I'm using the PolicyReport and ClusterPolicyReport implementation from Kyverno for my tool Policy Reporter, which creates metrics and provides dashboards (standalone and Grafana-based) for PolicyReportResults.

If more tools start to use these CRDs, I would like to be able to filter PolicyReports based on the tool that created the report. This is not easy if tools use different labels or annotations.

So my feature request is a new property, "source" / "policyEngine" or similar, to make it possible to differentiate between the engines behind the reports.

Cleanup unused fields in the PolicyReport API

There are some unused fields in the PolicyReport API. Some of them are -

Remove the above fields from the API.

Kube Bench Adapter has wrong Summary Numbers

The created kube-bench ClusterPolicyReport in my cluster has the following summary:

kubectl get cpolr kube-bench

NAME         PASS   FAIL   WARN   ERROR   SKIP   AGE
kube-bench   63     13     43     0       0      105s

But the ClusterPolicyReport itself contains only 23 results:

(screenshot: Bildschirmfoto 2021-06-29 um 09 45 00)

Policy Engine identifier for the PolicyReport

Currently, we have the source field in the PolicyReportResult struct. This is used to specify the policy engine managing this report.

We do not anticipate a case where different policy engines will govern different policies, thereby creating policy reports from both engines A and B. Move the source field to the top-level PolicyReport schema.
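
A sketch of the proposed shape, with the field hoisted out of the result struct; hedged, since the exact placement would follow API review:

package v1alpha2

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

type PolicyReportResult struct {
	// ...existing fields, with the per-result Source field removed...
}

// PolicyReport declares the managing engine once for all of its results.
type PolicyReport struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// Source identifies the policy engine managing this report, e.g. "kyverno".
	Source  string               `json:"source,omitempty"`
	Results []PolicyReportResult `json:"results,omitempty"`
}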

kube-bench-adapter Job pending in Error State

On my local K3s cluster, the jobs end up in an error state with the following logs:

Found (1) pods
pod (kube-bench-2zd4m) - "Pending"
Found (1) pods
pod (kube-bench-2zd4m) - "Succeeded"
failed to run job of kube-bench: invalid character 'A' looking for beginning of value

Kubernetes Version: v1.21.6+k3s1

Add a Configuration field to PolicyReport

Some scale issues identified for PolicyReport can be found in this thread - open-policy-agent/gatekeeper#2394

One option to tackle this issue is for the PolicyReport API to provide some kind of contract between PolicyReport generators and consumers. The proposal here is to add a Configuration field to the API:

configuration:
  limits:
    maxResults: 100
    statusFilter: ["FAIL", "WARN"]

This defines the maximum number of results that will be stored. It also provides a way to keep only the results the user is interested in: statusFilter filters on the different status values (pass, fail, error, warn, skip).
The setting of these values should be exposed via the PolicyReport generator (for example, policy engines). The PolicyReport API should provide sane defaults for these fields.
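
In Go terms, the proposed field might look like the sketch below, mirroring the YAML above; the names come from the proposal, everything else is illustrative:

package v1alpha2

// Configuration is a contract between PolicyReport generators and consumers.
type Configuration struct {
	Limits Limits `json:"limits,omitempty"`
}

type Limits struct {
	// MaxResults caps how many results are stored in one report.
	MaxResults int `json:"maxResults,omitempty"`
	// StatusFilter keeps only results whose status is listed,
	// e.g. ["FAIL", "WARN"]; empty means keep everything.
	StatusFilter []string `json:"statusFilter,omitempty"`
}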
