
metac's Introduction

Metac, pronounced [meta-see]

It is Metacontroller and more. The long-term vision of Metac is to provide a toolkit that lets users manage their infrastructure on Kubernetes.

Metac started when development on Metacontroller stopped. Metac has implemented most of the major enhancements & issues raised in Metacontroller. In addition, some of Metac's features derive from the production needs of projects such as OpenEBS & LitmusChaos.

Motivation

Metac is an add-on for Kubernetes that makes it easy to write and deploy custom controllers in the form of simple scripts. One can get a feel for implementing controllers from the various sample implementations found in the examples folder. These examples showcase different approaches and programming languages (including jsonnet) for implementing controllers.

Features

These are some of the features that Metac supports:

  • Abstracts Kubernetes code from business logic
  • Implements various meta controllers that help with the above abstraction
    • CompositeController (cluster scoped)
    • DecoratorController (cluster scoped)
    • GenericController (namespace scoped)
  • Business logic (read: reconciliation logic) can be exposed as HTTP services
    • API-based development as a first class citizen
  • Meta controllers are deployed as Kubernetes custom resources
    • However, GenericController (one of the meta controllers) can be deployed either as:
      • 1/ Kubernetes custom resources, or
      • 2/ a YAML config file
  • Ability to import Metac as a Go library
    • GenericController lets business logic be invoked as in-line function call(s)
    • This is an additional way to invoke logic other than HTTP calls
    • Hence, there is no need to write reconcile logic as HTTP services if not desired

Using Metac

If you want to use Metac via web-based hooks, Metac can be deployed as a StatefulSet with images found at this registry. However, if you want to use inline hooks, you need to import Metac into your Go-based controller implementation. In addition, you need to use Go modules to import the master version of Metac into your codebase.

If you want to deploy Metac via Helm, use this helm chart.

Differences from metacontroller

Metac tries to be compatible with the original Metacontroller. However, there may be breaking changes that one needs to be careful about. If one has been using Metacontroller and switches to Metac, one should be aware of the changes below:

  • Metac uses a different api group for the custom resources
    • i.e. apiVersion: metac.openebs.io/v1alpha1
  • Metac uses a different set of finalizers
    • i.e. metac.openebs.io/<controller-name>
  • Metac is installed in the metac namespace by default

If you are migrating from Metacontroller to Metac, you'll need to clean up the old Metacontroller's finalizers. You can use a command like the following:

kubectl get <comma separated list of your resource types here> --no-headers --all-namespaces | awk '{print $2 " -n " $1}' | xargs -L1 -P 50 -r kubectl patch -p '{"metadata":{"finalizers": [null]}}' --type=merge

Roadmap

These are the broad areas of focus for metac:

  • business controllers
  • test controllers
  • debug controllers
  • compliance controllers

Documentation

The existing Metacontroller site provides most of the important details about Metacontroller. Since Metac does not differ from Metacontroller except for new enhancements and fixes, this doc site still holds good.

Contact

Please file GitHub issues for bugs, feature requests, and proposals.

Use the meeting notes/agenda to discuss specific features/topics with the community.

Join #metacontroller channel on Kubernetes Slack.

Contributing

See CONTRIBUTING.md and the contributor guide.

Licensing

This project is licensed under the Apache License 2.0.

Comparison with other operators

Among the articles found on the internet, I find this one to be really informative. However, it talks about Metacontroller, whereas Metac has filled in most of the gaps left by the former.

metac's People

Contributors

Contributors: 11xor6, grzesuav, luisdavim, pumpkinseed


metac's Issues

Need for `Assert` based hooks

Motivation

Kubernetes developers today need to write a lot of code (in Golang) to test their controllers. It ranges from the use of Ginkgo, Gomega, fake objects, the standard testing package, & the list continues. This issue acts as a placeholder for thoughts on simplifying this testing effort and making it more agile (in terms of time taken to write this code, maintenance, rework, integration with DAG-based pipelines, integration with CI/CD tooling, etc.).

High Level Thoughts

One thought for simplifying test code is to have Metac support Assert-based hooks along the lines of its Sync & Finalize hooks. Metac is already a Kubernetes-based, level-triggered reconcile system, which makes it similar to a Directed Acyclic Graph (DAG) based solution. In addition, if Assert-based hooks can enable writing assertion logic in query languages that understand structured document models such as JSON, the entire process of writing test logic becomes simpler. A developer / tester needs only to assert against the JSON document to test various controller scenarios.

Using Metac, a developer needs only to write test logic in a declarative style, and gets much more in return:

  • Kubernetes itself becomes the CI/CD platform,
  • Declarative test specs are executed as a DAG,
  • No need to write assertions in high-level languages like Golang, Python, JavaScript, etc. that require a specific skill set.

How to write test logic as declarative specifications?

Rego is something that I am contemplating. However, there may be more options that I am not aware of.


load generic controllers based on their labels

Problem Statement: As a DevOps admin, I run my controllers as a single binary that makes use of GenericController(s) loaded from a Metac config file. I want Metac to load generic controllers with specific labels & ignore loading the other generic controllers.

Possible Solution:
Use predefined label keys such as:

  • metac.openebs.io/feature-gate
    • label will be set by DevOps admin
    • label value can either be alpha, beta or release
    • If this label is not set, the corresponding controller defaults to the release feature-gate
  • tag.metac.openebs.io/<tag-name>
    • label will be set by DevOps admin
    • tag-name will be provided by DevOps admin
    • label value can either be enabled or disabled
kind: GenericController
metadata:
  name: my-controller
  labels:
    # if following label is missing then the GenericController 
    # will be considered as release grade
    metac.openebs.io/feature-gate: alpha # beta & release are other valid values
---
kind: GenericController
metadata:
  name: my-controller
  labels:
    metac.openebs.io/feature-gate: beta
    tag.metac.openebs.io/my-tag-2: enabled 
---
kind: GenericController
metadata:
  name: my-controller
  labels:
    metac.openebs.io/feature-gate: beta
    tag.metac.openebs.io/my-tag-1: enabled 

The binary can then choose which controllers to load via flags, e.g.:

containers:
- name: my-metac
  image: localhost:5000/my-metac:latest
  command: ["/my-metac"]
  args:
  - --allowed-feature-gate=beta # controllers with either beta or release gate will be loaded
  - --allowed-tags=my-tag-1,my-tag-2 # controllers with these tags will be loaded
  - --logtostderr
  - --run-as-local
  - --workers-count=1 # number of workers per controller
  - --discovery-interval=40s
  - --cache-flush-interval=240s # re-sync interval
  - -v=5

add github actions / workflow

It would be great to add a GitHub Actions based CI/CD workflow to Metac. Metac has Travis doing all the CI/CD work as of now. However, additional CI/CD tooling, especially GitHub Actions, would be good to have considering its acceptance over the last few months.

support status conditions as API in hook response

UseCase: As a developer of controllers that use Metac, I would like first class support for status conditions as part of the hook response API. This will let Metac merge, update, or delete conditions based on the new API structure. This saves developers from bothering about how conditions, which are a list of maps, need to be merged & so on.
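As an illustration of the merge semantics this issue asks for, here is a sketch that upserts conditions by their type key; the Condition struct is a trimmed stand-in for the real Kubernetes condition type (which also carries reason, message, timestamps, etc.):

```go
package main

import "fmt"

// Condition is a minimal stand-in for a Kubernetes status condition.
type Condition struct {
	Type   string
	Status string
}

// mergeConditions upserts each new condition by Type: an existing
// condition of the same Type is replaced, otherwise the condition
// is appended. This is the "merge a list of maps" chore the issue
// wants Metac to handle natively.
func mergeConditions(existing, updates []Condition) []Condition {
	byType := map[string]int{}
	out := append([]Condition{}, existing...)
	for i, c := range out {
		byType[c.Type] = i
	}
	for _, u := range updates {
		if i, ok := byType[u.Type]; ok {
			out[i] = u
		} else {
			byType[u.Type] = len(out)
			out = append(out, u)
		}
	}
	return out
}

func main() {
	got := mergeConditions(
		[]Condition{{Type: "Ready", Status: "False"}},
		[]Condition{{Type: "Ready", Status: "True"}, {Type: "Synced", Status: "True"}},
	)
	fmt.Println(got) // prints [{Ready True} {Synced True}]
}
```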

option to add Update similar to UpdateAny but for specific attachment

Problem Statement: Currently, GenericController supports updating any attachment via the UpdateAny tunable. However, there can be cases where GenericController should support fine-grained update policies on a per-attachment basis. In such a case, UpdateAny will not be set and Update will be set against a particular attachment kind.

rewrite k8s sample controller using metac

User Story: As a developer, I would like to compare writing controllers using client-go versus Metac. This will help me understand / learn the different concepts used in Metac and at the same time give me a fair basis for comparing Metac's approach against other approaches (read: libraries, toolkits, programming languages, etc.).

Possible Solution: It would be ideal to rewrite the sample controller using the combinations below, to help teams/individuals in their decision making process:

  1. different approaches
  • config vs. custom resource
  • web based hooks vs. inline hooks
  • composite controller vs. generic controller
  2. multiple programming languages

Refer - https://github.com/kubernetes/sample-controller

Ability to precondition/construct kubernetes resources by pulling definition from a custom resource

USECASE

The chaos executor is a playbook (itself run as a pod/job) that reads some information (mostly ENVs) from a CR called chaosExperiment, constructs a data structure (a list of key:value pairs), and uses this info to precondition an existing job manifest (a chaos job: which looks like this). This is done via kubectl set env -f <job manifest> --dry-run -o yaml > <preconditioned-job-manifest>.yml.

The current approach is very limiting, considering that it can only precondition the job w.r.t. ENVs. It is possible that the experiment CR needs to inject more complex data into the job via ConfigMaps, Secrets, persistent volumes, etc. (being a CR, there is no end to our imagination w.r.t. how we want to expand the spec & what we want to inject into the job).

Ex: When there are jobs such as this, the set env based approach doesn't work. One of the workarounds used was to extract the desired info from the experiment CR & create those resources imperatively using kubectl run.

REQUIREMENT

Have a generic way to insert a spec into a manifest or a not-yet-created resource. Or, in other words, construct the spec of one resource by reading another.
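To illustrate the requirement, here is a sketch of constructing part of one resource's spec from another: env key:value pairs read from an experiment CR are injected into a (heavily trimmed) job manifest. All structures below are simplified stand-ins for unstructured Kubernetes objects; a real Job nests containers under spec.template.spec:

```go
package main

import "fmt"

// injectEnv preconditions a job manifest with key:value pairs taken
// from an experiment CR, instead of shelling out to `kubectl set env`.
func injectEnv(job map[string]interface{}, env map[string]string) {
	containers := job["spec"].(map[string]interface{})["containers"].([]map[string]interface{})
	for _, c := range containers {
		var list []map[string]interface{}
		for k, v := range env {
			list = append(list, map[string]interface{}{"name": k, "value": v})
		}
		c["env"] = list
	}
}

func main() {
	job := map[string]interface{}{
		"spec": map[string]interface{}{
			"containers": []map[string]interface{}{{"name": "chaos"}},
		},
	}
	injectEnv(job, map[string]string{"TARGET": "nginx"})
	fmt.Println(job["spec"].(map[string]interface{})["containers"].([]map[string]interface{})[0]["env"])
}
```

The same pattern generalizes beyond ENVs: any fragment read from the CR (volumes, configmap refs, resource limits) can be grafted into the target manifest before it is applied.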

add examples to showcase setting status conditions against watched resource

UseCase: As a developer who uses Metac to build my controller, I need Metac to support setting status & conditions against my watched resource specifications.

Solution: Metac as such does not need any changes to support this requirement. We need to verify it by adding the following items:

  • add example(s) that showcase the above use case
  • add integration test(s) that showcase the above use case

Documentation: agree on principles of writing examples

As pointed out in #122 (comment)

I think that having a common agreement on the principles of writing examples would make them easier to use.

Some initial thoughts from my side:

  • separate section on setting up local Kubernetes - can be divided into sections referencing upstream distributions (as they probably have it explained better) + eventually small additions (how to build docker images and use them in the given solution)
  • use plain kubectl without reference to the underlying platform unless necessary. Can point to a general section - like (Before start - prepare an environment of your choice as described in [hyperlink to our docs])

bug: MatchFields errors out if value has a forward slash

The following MatchFields results in an error:

ResourceSelector: v1alpha1.ResourceSelector{
	SelectorTerms: []*v1alpha1.SelectorTerm{
		&v1alpha1.SelectorTerm{
			MatchFields: map[string]string{
				"kind":       "StatefulSet",
				"apiVersion": "apps/v1",
			},
		},
	},
},
got [invalid label value: "apps/v1": at key: "apiVersion": a valid label must be an empty string or 
consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric 
character (e.g. 'MyValue',  or 'my_value',  or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-
z0-9_.]*)?[A-Za-z0-9])?')
            Invalid field expressions: &LabelSelector{MatchLabels:map[string]string{apiVersion: 
apps/v1,kind: StatefulSet,},MatchExpressions:[]LabelSelectorRequirement{},}
            openebs.io/metac/controller/common/selector.(*Evaluation).isFieldMatch

add example that showcases `defaulting` controller

UseCase: As a developer who uses Metac to build my controller, I need Metac to support setting defaults against my watched resource specifications. These defaults should be set against fields of the resource that are not set.

NOTE: This controller is expected to support GitOps requirements. In other words, the specifications without defaults are stored in my git repository, while the resource in etcd has the required defaults set. There should not be any loss of values due to differences between my git version and the etcd version.

Solution: Metac as such does not need any changes to support this requirement. We need to verify it by adding the following items:

  • add example(s) that showcase the above use case
  • add integration test(s) that showcase the above use case

Improve logs starting with `attachment(s) received from hook response`

Problem Statement: The following logs clutter the log file. It would be good to improve this log format.

I1217 11:32:29.417234       1 controller.go:882] WatchGCtl cstorpoolauto/sync-blockdevice: Sync hook completed
I1217 11:32:29.417248       1 controller.go:556] WatchGCtl cstorpoolauto/sync-blockdevice: 304 attachment(s) received from hook response &{map[] map[] map[conditions:[map[lastObservedTime:2019-12-17 11:32:29.417209 reason:CStorClusterStorageSet instance is missing status:True type:StorageToBlockDeviceAssociationError]] phase:Error] [0xc00000f088 0xc0004f1248 ... (remaining ~300 raw attachment pointers elided for brevity) ...] 0 true false}: watch dao.mayadata.io/v1alpha1:Storage:dao:ccsset-5j7tj


local gctl: auto set namespace selector against watch

Problem Statement: I would like to use Metac as a library packaged into my own binary. This binary will be deployed as a Kubernetes Deployment or StatefulSet. I would like to use the MetaController CR as a config that only considers watches in this Deployment/STS namespace.

introduce MatchIntFields to match fields having int values

Problem Statement: When MatchFields is applied against the following resource, it results in an error.

&unstructured.Unstructured{
	Object: map[string]interface{}{
		"kind": "StatefulSet",
		"spec": map[string]interface{}{
			"replicas": 3,
		},
	},
},

This is the error:

.spec.replicas accessor error: 3 is of the type int, expected string
            Field expressions match for key spec.replicas failed
            openebs.io/metac/controller/common/selector.(*Evaluation).isFieldMatch

Solution: Introduce MatchIntFields that caters to nested fields with int values.
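A sketch of what such a MatchIntFields evaluation could do: walk the nested field and compare it numerically instead of forcing it through label-style string validation. The function name and semantics here are assumptions for illustration, not Metac's actual implementation:

```go
package main

import (
	"fmt"
	"strconv"
)

// matchIntField walks path through nested maps (like an unstructured
// Kubernetes object) and compares the leaf value as an integer.
func matchIntField(obj map[string]interface{}, path []string, want int64) bool {
	var cur interface{} = obj
	for _, p := range path {
		m, ok := cur.(map[string]interface{})
		if !ok {
			return false
		}
		cur = m[p]
	}
	switch v := cur.(type) {
	case int:
		return int64(v) == want
	case int64:
		return v == want
	case float64: // JSON decoding yields float64 for numbers
		return int64(v) == want
	case string: // tolerate stringly-typed values too
		n, err := strconv.ParseInt(v, 10, 64)
		return err == nil && n == want
	default:
		return false
	}
}

func main() {
	sts := map[string]interface{}{
		"kind": "StatefulSet",
		"spec": map[string]interface{}{"replicas": 3},
	}
	fmt.Println(matchIntField(sts, []string{"spec", "replicas"}, 3)) // prints true
}
```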

add granular update & delete options to GenericController

Problem Statement: A GenericController can create, update, or delete any attachments. There are no visible issues if the controller deletes or updates attachments that it created itself. However, one would like to exercise finer control via GenericController when deleting or updating resources that were not created by the same controller. GenericController already supports UpdateAny & DeleteAny options, which apply to all the attachments specified in its spec. However, this is too liberal and can create problems if not programmed properly. There are also scenarios where one would like to only Update certain kinds of attachments, Delete certain other kinds, and at the same time do nothing for yet other kinds. All these scenarios call for finer granularity when exercising these Update/Delete/Noop operations.

Possible Solution: One possible approach is to expose new tunables in GenericController's HookResponse API. These new tunables can be set by controller developers who write the reconcile logic. In other words, this will be controlled from inside the hook code & not by Metac. This also allows us to avoid setting liberal tunables like UpdateAny & DeleteAny in the spec.

sample hook response

response: 
  allowedDeletes:
    # gvk: namespace/name
    # __ i.e. double underscore is the separator used
    # / i.e. forward slash can be used instead once verified if it works for all cases
    openebs.io__v1alpha1__Deployment: default/cool
  allowedUpdates:
    openebs.io__v1alpha1__Pod: default/nginx
type GenericControllerWebHookResponse struct {
	// existing fields...

	// New (proposed) fields: keys are group__version__kind strings,
	// values are namespace/name strings
	AllowedUpdates map[string]string `json:"allowedUpdates,omitempty"`
	AllowedDeletes map[string]string `json:"allowedDeletes,omitempty"`
}

Case for Metac

Motivation

Metac's main purpose is to simplify writing Kubernetes controllers.

Let us try to analyse the things developers need to deal with before writing Kubernetes controller logic. The Go snippet below captures most of the thought process that goes in before writing the controller / reconciler business logic.

// Typical controller bootstrap: wire up informers, listers, and
// workqueues before any business logic can run (excerpted from a
// CSI attach controller).
factory := informers.NewSharedInformerFactory(clientset, *resync)

pvLister := factory.Core().V1().PersistentVolumes().Lister()
nodeLister := factory.Core().V1().Nodes().Lister()
vaLister := factory.Storage().V1beta1().VolumeAttachments().Lister()
csiNodeLister := factory.Storage().V1beta1().CSINodes().Lister()
handler := controller.NewCSIHandler(
	clientset,
	pvLister,
	nodeLister,
	csiNodeLister,
	vaLister,
	timeout,
)

ctrl := controller.NewCSIAttachController(
	clientset,
	handler,
	factory.Storage().V1beta1().VolumeAttachments(), // informer
	factory.Core().V1().PersistentVolumes(),         // informer
	workqueue.NewItemExponentialFailureRateLimiter(*retryIntervalStart, *retryIntervalMax),
	workqueue.NewItemExponentialFailureRateLimiter(*retryIntervalStart, *retryIntervalMax),
)

The next set of activities is typically to filter the resources fetched from the above listers or informers. The amount of conditional logic necessary to get at the right resources grows as the number of combinations required in the reconcile logic grows. This is where Metac tries to fill the gap by reducing this boilerplate across various controllers.

How does Metac solve the boilerplate issue?

Metac, when used as a library, aims to provide the business / reconcile function with the set of resources (fetched & filtered via informers & listers). The reconcile function can do whatever it wants with these received objects. Finally, the reconcile function is expected to return the modified objects as its response. Metac then comes back into the picture and applies (Kubernetes server-side apply) these updated objects to the cluster.
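A rough sketch of that flow, using stand-in types (the real inline hook types live in the metac library; the names and fields here are hypothetical):

```go
package main

import "fmt"

// SyncHookRequest / SyncHookResponse are illustrative stand-ins for
// Metac's inline hook request and response types.
type SyncHookRequest struct {
	Watch       map[string]interface{}
	Attachments []map[string]interface{}
}

type SyncHookResponse struct {
	Status      map[string]interface{}
	Attachments []map[string]interface{}
}

// sync holds only business logic: it receives already fetched &
// filtered resources and fills in the desired state; Metac would
// apply the result against the cluster.
func sync(req *SyncHookRequest, resp *SyncHookResponse) error {
	resp.Status = map[string]interface{}{
		"observedAttachments": len(req.Attachments),
	}
	// desired attachments would be appended to resp.Attachments here
	return nil
}

func main() {
	req := &SyncHookRequest{Attachments: []map[string]interface{}{{"kind": "Pod"}}}
	resp := &SyncHookResponse{}
	_ = sync(req, resp)
	fmt.Println(resp.Status["observedAttachments"]) // prints 1
}
```

Note how there is no informer, lister, or workqueue wiring in sight: those concerns stay inside Metac.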

However, I still need to write code!

There are a lot of cases where one needs to write code, and perhaps the above boilerplate savings make no difference. After all, it is better to avoid importing yet another library.

A team can take this decision based on the requirements at hand. However, the current state of affairs in Kubernetes suggests that any given project might need multiple controllers, with each controller targeting a specific use case. In other words, a project might need 10 or more granular controller images (or goroutines). Note that I have considered the lowest possible figures in a Kubernetes setup. So if we are thinking about scale and agility, then Metac might help us achieve the same without hassle.

How do you build controllers that deal with the following?

  • add/update/remove labels,
  • add/update/remove annotations,
  • add/update/remove tolerations,
  • add/update/remove resource limits,
  • add/remove finalizers,
  • apply CRDs before actual controller spins up,
  • apply configs via ConfigMap before actual image is spun,
  • install/upgrade usecases,
  • monitoring usecases,
  • e2e testcases,
  • chaos testcases,
  • & the list continues...

In my opinion, the above cases can be handled easily when Metac can parse the above logic via Jsonnet, GoTemplate, or some other templating technique.

Ability to not watch attachments or child resources

@AmitKumarDas, first off - I just want to thank you for taking over ownership of this project! The work you have done is very much appreciated.

I was wondering if you can think of a way for metac to create a resource but not actually watch it for changes. There is a child resource that we are creating (via CompositeController) that is being continually updated, which is causing a large number of requests on the metacontroller service, which in turn is causing delays in our controller. The child resource is an HPA (Horizontal Pod Autoscaler), and the current CPU utilization of the pods is set on its status by the HPA controller every minute.

We'd like to ignore any updates to the HPA (or any child or attachment), but I don't see an option to do that in the GenericController or the CompositeController. I saw this issue which I think would help, but there has not been any discussion on it - GoogleCloudPlatform/metacontroller#172

Any thoughts are appreciated!

ability to use 'go get' to pull metac

These were some comments/suggestions received from the community in the metacontroller Slack channel.

I'm trying to use metac as a library (following this example): https://github.com/AmitKumarDas/metac/blob/master/examples/gctl/set-status-on-cr/main.go

go get -u github.com/AmitKumarDas/metac
go get: github.com/AmitKumarDas/[email protected]: parsing go.mod:
	module declares its path as: openebs.io/metac
	        but was required as: github.com/AmitKumarDas/metac
exit status 1
go get -u openebs.io/metac
go get openebs.io/metac: unrecognized import path "openebs.io/metac" (parse https://openebs.io/metac?go-get=1: no go-import meta tags ())
exit status 1
...
Thanks.  I'm using go modules with go 1.13, but not vendoring.
I got it working by adding those particular require and replace 
directives from the sample go.mod in my own program.  
It just seemed awkward.

If you want to keep the code hosted on github, but want to 
continue using the module name at openebs.io consider setting 
up the "vanity domain" stuff for that domain, so that go get works: 
https://sagikazarmark.hu/blog/vanity-import-paths-in-go/ (edited) 

I *really* like how `metac` can be imported as a library.  Super nice 
feature.  As a go developer, it would be great to be able to go get 
that, the same as I can for k8s.io modules, etc.
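The workaround described above is to carry the module's canonical path in your own go.mod. A hypothetical fragment could look like the following; the module name and pinned version are placeholders, so substitute a real tag or commit of metac:

```
module example.com/my-controller

go 1.13

require openebs.io/metac v0.2.0

// openebs.io/metac is the module's declared path, but the code is
// hosted on GitHub; the replace directive bridges the two. The
// version here is a placeholder - pin whatever tag/commit you need.
replace openebs.io/metac => github.com/AmitKumarDas/metac v0.2.0
```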

Custom en-queue logic

As a developer, I would like Metac to support custom logic before en-queuing a watch resource.
For example, I want the below en-queue condition to be supported by Metac:

import (
  // storage provides the VolumeAttachment type used below
  storage "k8s.io/api/storage/v1beta1"
  "k8s.io/apimachinery/pkg/api/equality"
)

// shouldEnqueueVAChange checks if a changed VolumeAttachment should be enqueued.
//
// It filters out changes in Status.Attach/DetachError - these were posted by the
// controller just few moments ago. If they were enqueued, Attach()/Detach() would
// be called again, breaking exponential backoff.
func shouldEnqueueVAChange(old, new *storage.VolumeAttachment) bool {
	if old.ResourceVersion == new.ResourceVersion {
		// This is most probably periodic sync, enqueue it
		return true
	}

	if new.Status.AttachError == nil &&
		new.Status.DetachError == nil &&
		old.Status.AttachError == nil &&
		old.Status.DetachError == nil {
		// The difference between old and new must be elsewhere than
		// Status.Attach/DetachError
		return true
	}

	sanitized := new.DeepCopy()
	sanitized.ResourceVersion = old.ResourceVersion
	sanitized.Status.AttachError = old.Status.AttachError
	sanitized.Status.DetachError = old.Status.DetachError

	if equality.Semantic.DeepEqual(old, sanitized) {
		// The objects are the same except Status.Attach/DetachError.
		// Don't enqueue them now. Let them be enqueued due to resync
		// i.e. after sync interval
		return false
	}
	return true
}

feat: add selector expressions for interface{} & map based selections

Problem Statement: It might be a good idea to perform selector matches against fields that are set with interface{} or map[string]interface{} datatypes. Since some of the datatypes used in a resource fall under the above, having a selector expression for them will be helpful.

We should also evaluate matching []interface{} as well as []map[string]interface{}.

NOTE: []map[string]interface{} is popularly known as ListMap in Kubernetes.

Attaching my metacontroller project

hi, I'm not sure if you wish to link it - but I've created a metacontroller based project that handles RabbitMQ. 2 things:

  1. I can change the necessary code to move it to metac instead, if you could give me some info on what should be changed (and whether this is now the official repo?)
  2. I did use the k8s API to read some related secrets. I understand you have a new feature that handles dependent objects (as my custom objects need to refer to some secrets), but I didn't find any documentation about this..

https://github.com/arielb135/rabbitController

add support to update attachments even when they are pending deletion

UserStory: As an infrastructure admin, I want my workload-specific uninstaller to uninstall all native Kubernetes resources, custom resources, as well as related custom resource definitions. I would like to delete the workload-specific namespace & expect all associated pods, deployments, CRDs, & custom resources (even those with finalizers) to get deleted.

Metac ignores reconciliation if an attachment resource is pending deletion. This issue tackles this limitation of Metac by proposing a suitable enhancement.

bug: integration tests are never run

Problem Statement: Integration tests in Metac are never run. An error is encountered & swallowed due to a bug in error handling. This results in integration tests that always pass.

I0416 19:06:58.749589    8281 main.go:139] Waiting for kube-apiserver to be ready
I0416 19:07:59.568576    8281 apiserver.go:97] Stopping kube-apiserver
I0416 19:07:59.568614    8281 apiserver.go:100] kube-apiserver exit status: exit status 1

Update logs don't depict the right workflow

Problem Statement: The current Metac code logs the extra update checks even if the object is not a candidate for update.

Solution: Change the log statements in the following blocks of code:

https://github.com/AmitKumarDas/metac/blob/master/controller/common/manage_attachments.go#L299-L352

e.g.

if oObj.GetDeletionTimestamp() != nil && !e.IsUpdateDuringPendingDelete() {
		glog.V(4).Infof(
			"%s: Can't update %s: Pending deletion", e, DescObjectAsKey(dObj),
		)
		return nil
	}

to

if oObj.GetDeletionTimestamp() != nil && !e.IsUpdateDuringPendingDelete() {
		glog.V(4).Infof(
			"%s: Not eligible for update %s: Pending deletion", e, DescObjectAsKey(dObj),
		)
		return nil
	}

Documentation (/docs) links do not work

When clicking various links inside the docs, GitHub returns a 404 - they should be adjusted. Also, README.md points to Metacontroller, which has diverged from this project.

feat(GenericController): add option to reconcile resources lazily

Problem Statement: As a DevOps admin, I want GenericController to reconcile resources lazily. For example, I would want to reconcile a CRD & its CR at the same time using a single controller. However, this results in error since this CR instance can only be reconciled after its CRD becomes available at the kubernetes api server.

Workaround: We need to deploy two or more GenericControllers to implement this. This works out since the controllers are independent of each other & reconcile eventually (utilising the level-triggered nature of Kubernetes controllers).

However, it would be more convenient, and involve no extra learning curve, if a single GenericController could solve this problem.
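The two-controller workaround might look like the sketch below (the names, hook URLs, and custom resource group are illustrative assumptions, not taken from a real deployment):

```yaml
# Controller 1: registers/reconciles the CRD itself
apiVersion: metac.openebs.io/v1alpha1
kind: GenericController
metadata:
  name: install-crd
spec:
  watch:
    apiVersion: v1
    resource: namespaces       # any resource that already exists
  attachments:
  - apiVersion: apiextensions.k8s.io/v1beta1
    resource: customresourcedefinitions
  hooks:
    sync:
      webhook:
        url: http://my-hooks/sync-crd   # hypothetical hook endpoint
---
# Controller 2: reconciles the CR; its sync keeps retrying
# (level-triggered) until Controller 1 has registered the CRD
apiVersion: metac.openebs.io/v1alpha1
kind: GenericController
metadata:
  name: install-cr
spec:
  watch:
    apiVersion: mycompany.io/v1         # hypothetical CR group/version
    resource: mycustomresources
  hooks:
    sync:
      webhook:
        url: http://my-hooks/sync-cr    # hypothetical hook endpoint
```

Because each controller reconciles independently, the second one converges once the CRD exists; the proposed enhancement would collapse both into one controller.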

usecase: Cron Controller similar to CronJob Controller.

There is a real need for a controller that can perform CRUD operations on some CR, but on a crontab schedule, with all the other functionality supported by the CronJob controller, such as concurrency policy, delayed retry time, and others.

feat: reconcile status in GenericController

Problem Statement: All the metacontrollers avoid reconciling the status field of children or attachments. This has been done to avoid hot loop paths in the control loop. However, GenericController's attachments may not necessarily trigger hot loop paths. It might be advisable to selectively allow reconciling the status of its attachments as well. In addition, this helps a controller developer set the latest error, warnings, or other fields that belong to status.

Solution(s):

  • Allow enabling or disabling reconciling status of attachments in GenericController
  • Default policy should enable reconciling the status (leaving it to the controller developer to avoid hot loop paths)
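One possible shape for such an opt-in/opt-out knob is sketched below. The field name `reconcileStatus` is hypothetical, not an existing metac field, and the resource names are placeholders:

```yaml
apiVersion: metac.openebs.io/v1alpha1
kind: GenericController
metadata:
  name: status-aware-ctrl
spec:
  watch:
    apiVersion: mycompany.io/v1      # hypothetical CR group/version
    resource: mycustomresources
  attachments:
  - apiVersion: v1
    resource: configmaps
    # hypothetical field: per-attachment opt-in/out of status reconciliation
    reconcileStatus: true
```

A per-attachment flag keeps the default behaviour intact for attachments that are prone to hot loops while allowing status updates where they are safe.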

add `run to completion` option for metac process

User Story: As a developer/admin, I want to build my uninstall logic on top of metac. I would like this logic to run as a Kubernetes Job & mark the uninstall complete when this Job completes. This ensures the Job's pod & associated resources (CPU, memory) are freed up. The current version of metac has no way to stop reconciliation completely & subsequently stop the main process, which is required to mark a Kubernetes Job as completed.
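The requested semantics can be sketched as a reconcile loop that returns once there is nothing left to do, letting the process exit so the Job completes. This is a toy model of the control flow (the counters stand in for live cluster state), not metac's actual internals:

```go
package main

import "fmt"

// reconcileOnce simulates one pass of uninstall reconciliation:
// it deletes up to `batch` resources and returns how many remain.
func reconcileOnce(remaining, batch int) int {
	if remaining < batch {
		return 0
	}
	return remaining - batch
}

// runToCompletion loops until nothing is left to uninstall, then
// returns, allowing the process to exit so the Kubernetes Job is
// marked completed. It reports how many passes were needed.
func runToCompletion(remaining int) int {
	passes := 0
	for remaining > 0 {
		remaining = reconcileOnce(remaining, 2)
		passes++
	}
	return passes
}

func main() {
	fmt.Println("uninstall finished after", runToCompletion(5), "passes")
	// process exits here; in a Job this flips the pod to Completed
}
```

The contrast with a normal controller is exactly the issue's point: a standard metac process reconciles forever, whereas a Job needs a terminal "done" condition.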

Filter attachment(s) based on watch

Motivation

Metac's GenericController introduces the concept of watch and attachments. One can think of the watch as the resource observed via an Informer, and attachments as the resources fetched/listed via Lister-based invocations. With this concept in mind, how do we filter out the attachments that belong to a particular watch? Note that GenericController may list attachments that are not owned by the watch resource. This issue is about keeping the attachments arbitrary while still, when required, filtering them based on the watch.

Possible Solution

This is the current GenericController spec, which watches kind: Storage and returns the watch object along with all the PVCs and PVs.

spec:
  watch:
    apiVersion: core/v1alpha1
    resource: storages
  attachments:
  - apiVersion: v1
    resource: persistentvolumeclaims
  - apiVersion: v1
    resource: persistentvolumes

This is the suggested GenericController spec, which watches kind: Storage and returns specific PVCs & PVs. In this particular spec, PVCs that are owned by the Storage and PVs that carry the Storage name as an annotation are selected.

spec:
  watch:
    apiVersion: core/v1alpha1
    resource: storages
  attachments:
  - apiVersion: v1
    resource: persistentvolumeclaims
    matchesWatch:
      expressions:
      - operator: OwnerIsWatch # Does the watch own this PVC?
  - apiVersion: v1
    resource: persistentvolumes
    matchesWatch:
      expressions:
      - key: ddp.openebs.io/storage-name # a key in the PV's annotations
        operator: AnnotationIsWatchName # Is the annotation value the name of the watch?
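The two proposed expressions amount to simple predicates over object metadata. The sketch below shows what `OwnerIsWatch` and `AnnotationIsWatchName` would evaluate, using minimal stand-in structs instead of apimachinery's `unstructured.Unstructured` (an assumption for self-containment):

```go
package main

import "fmt"

// OwnerRef and Object are minimal stand-ins for Kubernetes object
// metadata; real code would inspect metadata.ownerReferences and
// metadata.annotations on unstructured objects.
type OwnerRef struct {
	UID string
}

type Object struct {
	Name        string
	Owners      []OwnerRef
	Annotations map[string]string
}

// ownedByWatch models the proposed OwnerIsWatch expression:
// keep only attachments that list the watch's UID as an owner.
func ownedByWatch(watchUID string, attachments []Object) []Object {
	var out []Object
	for _, a := range attachments {
		for _, o := range a.Owners {
			if o.UID == watchUID {
				out = append(out, a)
				break
			}
		}
	}
	return out
}

// annotationIsWatchName models the proposed AnnotationIsWatchName
// expression: keep attachments whose annotation `key` equals the
// watch's name.
func annotationIsWatchName(watchName, key string, attachments []Object) []Object {
	var out []Object
	for _, a := range attachments {
		if a.Annotations[key] == watchName {
			out = append(out, a)
		}
	}
	return out
}

func main() {
	pvcs := []Object{
		{Name: "pvc-1", Owners: []OwnerRef{{UID: "watch-123"}}},
		{Name: "pvc-2"}, // no owner -> filtered out
	}
	pvs := []Object{
		{Name: "pv-1", Annotations: map[string]string{"ddp.openebs.io/storage-name": "my-storage"}},
	}
	fmt.Println(len(ownedByWatch("watch-123", pvcs)))                                      // 1
	fmt.Println(len(annotationIsWatchName("my-storage", "ddp.openebs.io/storage-name", pvs))) // 1
}
```

Pushing these predicates into the spec means the hook receives only relevant attachments instead of filtering them in user code.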

Use Metac as a library

as-deployment

This picture shows Metac used as a library by the logic implementing the Kubernetes controller(s).

  • Deployment here refers to a Kubernetes Deployment
  • The Deployment has a container image that implements one or more Kubernetes controllers
  • The Deployment image uses Metac as a library
  • Config refers to the configuration used by this Deployment image
  • The Config consists of a part of the Metac controller schema
  • Since this image handles the reconcile (i.e. sync) itself, there is no need to specify the hooks that Metac uses when deployed as a standalone container

Config

The config for the Deployment can look like below:

watch:
  apiVersion: ddp.mayadata.io/v1
  resource: storages
attachments:
- apiVersion: v1
  resource: persistentvolumeclaims
- apiVersion: v1
  resource: persistentvolumes
- apiVersion: v1
  resource: nodes
- apiVersion: csi.k8s.io/v1alpha1
  resource: csinodes
- apiVersion: csi.k8s.io/v1alpha1
  resource: volumeattachments

The above YAML is a controller-specific config that this Deployment uses to get the current (last observed) state of the following resources:

- storages,
- persistentvolumeclaims,
- persistentvolumes,
- nodes,
- csinodes,
- volumeattachments

This observed state of the declared resources is passed directly into a pre-defined function, i.e. a hook (reconcile function), as argument(s). All this without writing any logic around Kubernetes Informers, work queues, code generation, etc. The reconcile function can then manipulate the state and send back the desired state as its response.

Feeding the observed Kubernetes state of the requested resources & applying the updated resources against the cluster is handled auto-magically by the Metac code, used as a library by this Deployment. The only thing the controller code needs to do is validate whether resources are already in their desired state or need changes. In other words, it deals with just the business logic.
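The shape of such a reconcile function can be sketched as below. Plain maps stand in for metac's actual hook request/response types (an assumption made to keep the example self-contained); the logic shown (ensure a PVC named after the watch exists) is purely illustrative:

```go
package main

import "fmt"

// State is a stand-in for an observed/desired Kubernetes object;
// metac's real types carry full unstructured objects.
type State = map[string]interface{}

// reconcile receives the observed watch and attachments and returns
// the desired attachments. It contains business logic only: no
// Informers, work queues, or API calls.
func reconcile(watch State, observed []State) []State {
	name, _ := watch["name"].(string)
	for _, o := range observed {
		if o["name"] == name+"-pvc" {
			return observed // already in desired state: no-op
		}
	}
	// desired state: add the missing PVC; the library applies it
	return append(observed, State{
		"kind": "PersistentVolumeClaim",
		"name": name + "-pvc",
	})
}

func main() {
	desired := reconcile(State{"name": "storage-1"}, nil)
	fmt.Println(len(desired), desired[0]["name"]) // 1 storage-1-pvc
}
```

Note the function is idempotent: calling it again with its own output returns the same desired state, which is what makes level-triggered reconciliation safe.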

reconcile-hook

On a side note, the above abstracts away dealing with a huge number of Kubernetes resources. However, it is not sufficient in the long run, since it loads a lot of resources that then need to be filtered inside the reconcile hook. A better config might look something like below:

watch:
  apiVersion: ddp.mayadata.io/v1
  resource: storages
attachments:
- apiVersion: v1
  resource: persistentvolumeclaims # filter if claimed by watch or fetch all
- apiVersion: v1
  resource: persistentvolumes # filter if claimed by watch or fetch all
- apiVersion: v1
  resource: nodes # filter if claimed by watch or fetch all
- apiVersion: csi.k8s.io/v1alpha1
  resource: csinodes # filter if claimed by watch or fetch all
- apiVersion: csi.k8s.io/v1alpha1
  resource: volumeattachments # filter if claimed by watch or fetch all

Note that the config schema itself is unchanged (the comments only indicate intent); the filtering should happen internally, within the Metac library.

Concluding remarks!!!

🤔 Can we make use of lots of selectors to filter the watch & its attachments?

Yes. Metac will add multiple selectors such as labelSelector, annotationSelector, nameSelector, namespaceSelector & so on. However, a controller should be designed with a single responsibility in mind. In other words, when the config section grows with a lot of attachments, it is appropriate to re-design the config, i.e. break it into multiple smaller configs, feed them to the deployment image, and implement a similar number of reconcile functions. After all, all of them work towards the same desired state.

Metac - use case - mutating admission web hook

Hi @AmitKumarDas,

Just a quick question, if you have time for it.
I'm trying to use metac/metacontroller to patch an existing object's spec (a Service). It works with labels/annotations, but what I need is to inject an additional configuration parameter (in my case, externalIPs). The problem is that metac doesn't update the existing object (not created by itself), but rather tries to create a new one, which already exists. I suspect that although it seemed easy at first to use metac for this, it's the wrong tool, and I have no choice but to implement a custom MutatingAdmissionWebhook controller. What's your opinion?

Kind regards,
Sergei
