projectsyn / commodore

Commodore provides opinionated tenant-aware management of Kapitan inventories and templates. Commodore uses Kapitan for the heavy lifting of rendering templates and resolving a hierarchical configuration structure.

Home Page: https://syn.tools/commodore/

License: BSD 3-Clause "New" or "Revised" License

Languages: Dockerfile 0.39%, Python 94.35%, Jsonnet 4.05%, Shell 0.58%, Makefile 0.62%
Topics: kapitan, kubernetes, cfgmgmt, jsonnet, helm, projectsyn, gitops

commodore's People

Contributors

akosma, anothertobi, bastjan, bittner, ccremer, chloesoe, chrisglass, corvus-ch, daiboruta, debakelorakel, dependabot[bot], glrf, haasad, kidswiss, laserb, marcofl, megian, mhutter, mweibel, psy-q, renovate-bot, renovate[bot], simu, srueg, thebiglee, tobru


commodore's Issues

Update Cookiecutter Component Template

Update the Cookiecutter template for new components to include the following:

  • A license: BSD-3-Clause
  • Option to base a component on a Helm chart (i.e. --helm) -> see #118.
  • Add GitHub actions (jsonnet & yaml lint, others tbd)
  • doc/ subfolder?
  • README.adoc template
  • CHANGELOG.adoc template
  • GitHub templates

What does the release process look like? Keyword: changelog.

  • Release process automation
  • GitHub release preparation

Initialize Catalog Git Repository if there is no HEAD

Currently the catalog repository needs to be initialized before Commodore is able to work with it. Adapt Commodore to be able to initialize a Git repository before working with it.

This removes one manual step from the initialization phase.

Use local Git user information when running Commodore locally

Right now Commodore always configures the Git committer and author to be Commodore <[email protected]>.

The audit trail would be much improved if Commodore used the local Git user information when running locally.

Ideally, Commodore will use the local Git user information by default, and expose configuration options to override this behavior.

Make the Inventory Structure Configurable

Background

Currently the inventory structure is hard coded into Commodore:

  • global.common
  • global.cloud.<cloud-provider>
  • global.cloud.<cloud-provider>.<cloud-region>
  • global.distribution.<k8s-distribution>
  • <customer>.<cluster>

This limits the flexibility of different setups and makes changes to this hierarchy more difficult.

Similar to Puppet Hiera, this structure should be configurable outside of Commodore.

Proposal

Implement a config file which can define the inventory structure. This config file should be created in the global config Git repo with a specific name (e.g. syn.yml). It should allow using facts as placeholders for names in the structure. The leaf of an entry is always a class (i.e. a .yml file).

The referenced classes should be optional and if they don't exist they should be skipped. I.e. if a cloud doesn't have a region, no cloud/${cloud}/${region}.yml file needs to exist.

Possible example:

hierarchy:
  - global/common
  - global/cloud/${cloud}
  - global/cloud/${cloud}/${region}
  - global/distribution/${distribution}
  - global/lieutenant-instance/${lieutenant-instance}
  - ${tenant}/common
  - ${tenant}/${cluster}
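A minimal Python sketch of how such a hierarchy definition could be rendered — interpolating `${fact}` placeholders and skipping optional entries whose class file doesn't exist. The function name and file layout are hypothetical, not the actual Commodore implementation:

```python
import os
import re

def render_hierarchy(entries, facts, inventory_dir):
    """Resolve hierarchy entries into class names.

    Entries with a placeholder whose fact is unset (e.g. a cloud without
    a region) and entries whose .yml file is missing are skipped, per
    the optional-class rule described above."""
    classes = []
    for entry in entries:
        try:
            # Replace each ${name} with the corresponding fact value.
            path = re.sub(r"\$\{([^}]+)\}", lambda m: facts[m.group(1)], entry)
        except KeyError:
            continue  # fact not set for this cluster
        if os.path.isfile(os.path.join(inventory_dir, path + ".yml")):
            classes.append(path.replace("/", "."))
    return classes
```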

Configurable revisions for global and tenant config repositories

Commodore always clones the master branch of the global and the tenant config repositories. Using the --local flag, one can change this behavior. When experimenting with Commodore, however, local mode might not always be possible. Having the ability to use other references (tag, commit) would solve this.

This would have helped in #195. There we had to add a temporary commit that had to be removed right before the merge.

Engineer Renovate support for GitHub

Context

Enhance the Project Syn Renovate to support Commodore Components stored on GitHub. It should be able to open PRs when dependencies are updated. Also define and document where this custom Renovate is running and store the custom code on GitHub.

Persist jsonnet-bundler Lock File

Context

Currently the jsonnetfile.lock.json file is recreated on every catalog compile run and uses the most recent versions referenced in all jsonnetfile.json dependencies.
This leads to changing cluster catalogs without changing any of the inputs (components, inventory and facts).
While we can make sure to reference dependencies by immutable versions (i.e. git SHAs) we can't control sub-dependencies (e.g. of kube-prometheus).
The lock file should be persisted in some way so repeatable catalog compiles are possible.
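One possible shape for that persistence, sketched in Python: copy a previously committed lock file from the catalog repo into the working directory before fetching dependencies, and copy it back afterwards so it is committed with the catalog. Function names and file locations are assumptions, not the actual implementation:

```python
import shutil
from pathlib import Path

def restore_lock_file(catalog_dir, work_dir):
    """Before running jsonnet-bundler, reuse a previously committed
    jsonnetfile.lock.json so pinned versions are honored instead of
    re-resolving the newest ones. Returns True if a lock file was found."""
    lock = Path(catalog_dir) / "jsonnetfile.lock.json"
    if lock.is_file():
        shutil.copy(lock, Path(work_dir) / "jsonnetfile.lock.json")
        return True
    return False

def persist_lock_file(work_dir, catalog_dir):
    """After a successful compile, copy the (possibly updated) lock file
    back into the catalog repo so it gets committed with the catalog."""
    shutil.copy(Path(work_dir) / "jsonnetfile.lock.json",
                Path(catalog_dir) / "jsonnetfile.lock.json")
```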

Introduce meta-parameter which components can use instead of hardcoding their base directory location

Context

This idea came up in a discussion regarding handling of shifting component directory paths when compiling components standalone (e.g. for component testing, cf. projectsyn/component-espejo#8).

Currently components need to hard code where they want to save their dependencies to. For catalog compilation to work, this will usually be dependencies/<component-name>/path/to/dep.

However, when compiling a component outside a Commodore working directory, this does not really work.

Proposal

One way to avoid this issue is to change Commodore to provide a meta-parameter to each component which contains a path to the directory of the component, e.g. parameters.<component_name>._base_directory.

Components can then define their dependencies as

parameters:
  kapitan:
    dependencies:
      - type: https
        source: https://.../crds.yaml
        output_path: ${<component_name>:_base_directory}/manifests/.../crds.yaml

We have precedent for such meta-parameters in #221, where we introduced parameters.<component_name>._instance to uniquely identify component instances.

Alternatives

The current implementation is an alternative.

Refactor error handling

Context

Currently, we tend to throw ClickExceptions for cases where we do handle errors. However, some of the error messages are less than helpful, e.g. component new in a working directory which doesn't contain a previously setup Commodore directory tree (pre #183):

$ commodore component new test
Adding component test...
 > Installing component
Error: While setting up symlinks: [Errno 2] No such file or directory: '../../../dependencies/test/class/test.yml' -> 'inventory/classes/components/test.yml'

This particular case is addressed in #183, but there's probably other error messages that are similarly unhelpful.

To improve the Commodore user experience, we should

  • Come up with an error (and exception) handling concept
  • Document the concept
  • Refactor the current error handling to adhere to the concept
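As an illustration of what such a concept could look like, low-level exceptions could be translated into actionable, user-facing errors at the command boundary. This is a sketch only; the exception class, decorator, and hint text are hypothetical:

```python
import functools

class UserFacingError(Exception):
    """Carries a message meant to be shown to the user as `Error: <msg>`."""

def friendly_errors(hint):
    """Decorator translating low-level OSErrors into a UserFacingError
    with an actionable hint, instead of leaking raw errno messages."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except OSError as e:
                raise UserFacingError(
                    f"{hint} (underlying error: {e.strerror or e})") from e
        return wrapper
    return decorator

@friendly_errors("Commodore working directory not set up; "
                 "run `commodore catalog compile` first")
def setup_symlinks():
    # Simulates the failing symlink creation from the issue above.
    raise FileNotFoundError(2, "No such file or directory")
```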

Alternatives

We can continue doing error handling ad-hoc, but this won't lead to a better user experience for Commodore.

Implement Option to Disable Components

It should be possible to disable a component which was included in the hierarchy. This helps, for example, with local testing.
The list of components to be disabled must be definable on the cluster and, if possible, anywhere in the hierarchy. That way, for example, a local cloud region could be implemented which disables most of the components.

Task Deliverables

  • List of components to be disabled can be specified on a cluster config
  • If possible the list should be specified anywhere in the hierarchy
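A sketch of the resulting filtering step, assuming the disable lists collected from each hierarchy level are merged before compilation (the function and parameter names are hypothetical):

```python
def effective_components(applications, disabled_lists):
    """Return the components that should actually be compiled, given the
    merged application list and the `disabled` lists gathered from every
    level of the hierarchy (reclass appends lists when merging)."""
    disabled = set()
    for lst in disabled_lists:
        disabled.update(lst)
    return [app for app in applications if app not in disabled]
```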

Allow defining full path to Git repository for components

Enable Commodore to use components which are hosted in different repositories. This will likely require some reworking of the current component discovery mechanism, which relies on the COMMODORE_GLOBAL_GIT_BASE configuration to actually clone component repositories.

Task deliverables

  • Commodore can clone components hosted on different Git services
  • A Git base URL is configured as a default for components
  • Overrides can be specified for single components in the config file (commodore.yml)
  • In addition, component_versions can be used to change the URL for a component in the hierarchy

Allow specifying the branch to push the compiled catalog to

Context

The compiled catalog is currently pushed to the master branch of the catalog repository. For review purposes or deployment of the changes at a later stage Commodore should be able to push to a specified branch.

Alternatives

I don't see any alternatives.

FileExistsError on catalog compile

Catalog compile jobs often error because of

FileExistsError: [Errno 17] File exists: '/tmp/tmpq5eh6tin.kapitan/extracted'

Sometimes the job succeeds after a few attempts, sometimes it succeeds after re-triggering the job (because the K8s job correctly fails after 3 attempts), and sometimes nothing helps.

Note: I triggered a recompile for ALL clusters for a customer, and only the clusters that had a (successful) catalog compilation within the last 4h failed.

See the commodore-job-runner on synfra.

Steps to Reproduce the Problem

  1. Trigger a compile job

Actual Behavior

...
Dependency https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/release-1.7/config/v1.7/aws-k8s-cni.yaml: successfully fetched
Dependency https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/release-1.7/config/v1.7/aws-k8s-cni.yaml: saved to dependencies/eks-addon-manager/manifests/aws-k8s-cni.yaml
Dependency https://redhat-cop.github.io/resource-locker-operator/resource-locker-operator/resource-locker-operator-v0.1.2.tgz: successfully fetched
Dependency https://redhat-cop.github.io/resource-locker-operator/resource-locker-operator/resource-locker-operator-v0.1.2.tgz: extracted to dependencies/resource-locker/helmcharts
Dependency https://charts.appuio.ch/k8up-0.6.1.tgz: successfully fetched
Dependency https://kubernetes-charts.storage.googleapis.com/prometheus-pushgateway-1.2.13.tgz: successfully fetched
Dependency https://charts.appuio.ch/k8up-0.6.1.tgz: extracted to dependencies/backup-k8up/helmcharts
Unknown (Non-Kapitan) Error occurred
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/multiprocessing/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/usr/local/lib/python3.8/site-packages/kapitan/dependency_manager/base.py", line 176, in fetch_http_dependency
    os.makedirs(unpack_output)
  File "/usr/local/lib/python3.8/os.py", line 223, in makedirs
    mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/tmp/tmpq5eh6tin.kapitan/extracted'
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/kapitan/targets.py", line 119, in compile_targets
    fetch_dependencies(
  File "/usr/local/lib/python3.8/site-packages/kapitan/dependency_manager/base.py", line 92, in fetch_dependencies
    [p.get() for p in pool.imap_unordered(http_worker, http_deps.items()) if p]
  File "/usr/local/lib/python3.8/site-packages/kapitan/dependency_manager/base.py", line 92, in <listcomp>
    [p.get() for p in pool.imap_unordered(http_worker, http_deps.items()) if p]
  File "/usr/local/lib/python3.8/multiprocessing/pool.py", line 868, in next
    raise value
FileExistsError: [Errno 17] File exists: '/tmp/tmpq5eh6tin.kapitan/extracted'


[Errno 17] File exists: '/tmp/tmpq5eh6tin.kapitan/extracted'

Expected Behavior

Successful catalog compilation

Consider vendoring kube.libsonnet in Commodore

Context

Standalone component compilation needs to vendor/pull in Bitnami's kube.libsonnet somehow if compilation is not executed in a pre-initialized Commodore working directory.

Proposal

One of

  • Vendor kube.libsonnet in commodore/lib, making some vendored version available through the Commodore package/Docker image.
  • Add a static dependency to kube.libsonnet in the Jsonnetfile which is rendered by Commodore.

Alternatives

Each component individually vendors kube.libsonnet as required.

Engineer Automated Component Testing

Context

Implement a GitHub action to automatically test Commodore Components. An initial version could look like this:

  1. Render component locally
  2. Run kubeval over the rendered manifests

Integrate this into the Cookiecutter template.

Cannot compile a component created with Docker named "component-name"

When creating a component using the docker run commodore:v0.2.0 component new component-name command, the compilation does not work.

Steps to Reproduce the Problem

Follow the steps below:

  1. mkdir -p catalog inventory dependencies compiled
  2. docker run -i --rm --network=host --env-file=./.env --user="$(id -u)" --volume $PWD/catalog/:/app/catalog/ --volume $PWD/dependencies/:/app/dependencies/ --volume $PWD/inventory/:/app/inventory/ --volume ~/.ssh:/app/.ssh:ro --volume ~/.gitconfig:/app/.gitconfig:ro projectsyn/commodore:v0.2.0 catalog compile c-ancient-cherry-5082 --api-url=$LIEUTENANT_URL --api-token=$LIEUTENANT_TOKEN
  3. docker run -i --rm --user="$(id -u)" --volume ~/.ssh:/app/.ssh:ro --volume $PWD/catalog/:/app/catalog/ --volume $PWD/dependencies/:/app/dependencies/ --volume $PWD/inventory/:/app/inventory/ --volume ~/.gitconfig:/app/.gitconfig:ro projectsyn/commodore:v0.2.0 component new component-name
  4. docker run -i --rm --user="$(id -u)" --volume ~/.ssh:/app/.ssh:ro --volume $PWD/compiled/:/app/compiled/ --volume $PWD/catalog/:/app/catalog/ --volume $PWD/dependencies/:/app/dependencies/ --volume $PWD/inventory/:/app/inventory/ --volume ~/.gitconfig:/app/.gitconfig:ro projectsyn/commodore:v0.2.0 component compile dependencies/component-name

Actual Behavior

Error: Could not find component class file: /app/dependencies/component-name/class/name.yml

Expected Behavior

I expect the component to compile normally. This error does not happen if the component name is not component-name.

Make Commodore Working Dir Configurable

Context

Currently Commodore uses the current working dir as its workspace and clones/generates everything there.

Implement an environment variable (e.g. $COMMODORE_HOME) to specify the default working dir of Commodore. Add a flag to override this (e.g. -w/--workdir) and set an appropriate default if neither the flag nor the env var are specified (depending on the platform, e.g. ~/.cache/commodore/).

This might also help with #135 to make it more clear that Commodore handles (and might delete) files.

Alternatives

Currently we implement an alternative by always using the current working dir, which in some contexts is confusing. The new approach still allows using another directory, for example with -w $PWD.

Specify Jsonnet-Bundler Dependency Versions in Hierarchy

Context

Currently a component can have a jsonnetfile.json to specify Jsonnet dependencies. This file is not handled by Kapitan and therefore can't make use of the configuration hierarchy. The only way to have different versions of dependencies is with a separate branch of the component (which then can be specified in the hierarchy).

Implement a way to specify Jsonnet dependencies and their versions in the hierarchy so they can be overridden.

For example the kube-prometheus library is specific to the K8s version and therefore needs a different version (git branch/commit) for each K8s version.

Alternatives

As described, an alternative would be to use different branches to handle this.

Drop COMMODORE_GLOBAL_GIT_BASE

Context

The variable COMMODORE_GLOBAL_GIT_BASE is no longer used to determine the location of the global config git repository (See #226). However it is still in use for auto discovering component repositories.

Having discussed this with @simu, we concluded to just drop auto discovery for now. As a consequence, each and every component must be listed in commodore.yml within the global config git repo.

Note that the global git base is for now still used for backwards compatibility: Commodore uses it when talking to older versions of Lieutenant that don't provide the URL of the global config git repo.

Alternatives

Make Lieutenant provide the value needed for component auto discovery, and change Commodore to make use of it.

Refactor CLI commands

Context

The commodore clean command removes the entire inventory and all components. Changes which are not checked into git will be lost. On a regular catalog compile (if not --local mode), the clean command is run first.

A safeguard should interactively ask whether changes should really be dropped when the workspace is dirty. A --force flag could be implemented to drop changes without asking first (for example in a CI/CD or test context).
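A minimal sketch of such a safeguard, with the prompt function injectable so it can be driven non-interactively (names and signature are hypothetical):

```python
def confirm_clean(workspace_dirty, force=False, ask=input):
    """Decide whether `commodore clean` may proceed.

    A clean workspace, or --force, skips the prompt entirely; otherwise
    the user must explicitly confirm dropping uncommitted changes."""
    if not workspace_dirty or force:
        return True
    answer = ask("Workspace has uncommitted changes, drop them? [y/N] ")
    return answer.strip().lower() in ("y", "yes")
```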

Implement Interactive Option for Push

Introduce an option to make the push interactive. After showing the diff, it should ask the user whether the push should be performed.
This ensures that the diff shown is actually what's being pushed.

Task Deliverables

  • Interactive option (i.e. -i/--interactive) for push action

Support for instantiating components multiple times

Context

Currently Commodore components don't support installing multiple instances of the software they manage. There are already some components for which installing multiple instances is a common use case, such as nfs-client-provisioner.

Potential solutions

Multi-instance aware components

The "simple" solution from the perspective of Commodore/Kapitan/Reclass is to simply leave multi-instance support to the component authors. For this solution, component authors would explicitly need to implement multi-instance support. This approach would most likely involve a parameter structure which exposes the multi-instance nature of the component in parameters.<component-name>.

Taking nfs-client-provisioner as an example, the parameters structure could be something like:

parameters:
  nfs_client_provisioner:
    namespace: ...
    common:
      host: ...
    instances:
      instanceA:
        path: ...
      instanceB:
        path: ...
      instanceC:
        host: ...
        path: ...

In this example structure, we allow users to define configurations which are shared between multiple (but maybe not all) instances in a key common. Instances are configured in key instances, and keys in instances are used as instance identifiers. Instance configuration overrides configurations in common.
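The override semantics described above could be expanded into one resolved config per instance roughly like this (a sketch; it applies a shallow override of `common` by each instance, as in the example):

```python
import copy

def instantiate(params):
    """Expand the common/instances structure into one fully-resolved
    config dict per instance: instance values override values shared
    in `common`."""
    common = params.get("common", {})
    resolved = {}
    for name, overrides in params.get("instances", {}).items():
        cfg = copy.deepcopy(common)
        cfg.update(overrides)  # instance config wins over common
        resolved[name] = cfg
    return resolved
```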

The main downside of this approach is that it may not be feasible to implement for components which use Helm charts as their base, unless the Helm chart exposes a similar structure. The reason for this is that Kapitan's Helm templating is expressed in the reclass inventory, and reclass is not flexible enough to instantiate multiple copies of a part of the hierarchy based on another key in the inventory.

Commodore support for component instantiation

Another approach is to not make components multi-instance aware, but instead implement support for instantiating a component multiple times in Commodore. This approach would potentially scale better, as component authors won't have to reinvent the wheel for every component which they want to support multiple instances. Additionally, Commodore can probably juggle things so that Kapitan's helm templating can be used to instantiate a Helm-chart based component multiple times.

This approach would need logic in Commodore which identifies the component(s) to instantiate multiple times based on the included component classes according to some "instance naming scheme".

Commodore would then have to duplicate the relevant files (component class and defaults mainly), rewriting references to the component name in those files (input/output paths, parameters key, ...) to match the "instance name" which would be derived from the class "instance naming scheme".

Users wishing to instantiate a component multiple times would need to include the component using the "instance naming scheme", and provide parameters for each instance separately. Reclass references can potentially be used to share parameter values between instances.

Alternative approaches

Another approach which is not outlined in detail above is that when multi-instance support is required, the Commodore component must install an operator which supports instantiating the required software multiple times. This is naturally dependent on an operator existing for the software that needs to be instantiated multiple times, and is similar to the first proposed solution from the perspective of how multiple instances could then be configured via the hierarchy.

Out of scope

How to manage services for the customer on a Syn-enabled cluster is out of scope for this issue.

Automated update of GitHub Actions

Context

Commodore and Commodore components use GitHub Actions as its CI platform. GitHub actions are versioned. Updates to those actions need to be maintained. Dependabot has support for GitHub actions (see https://docs.github.com/en/free-pro-team@latest/github/administering-a-repository/keeping-your-actions-up-to-date-with-github-dependabot).

  • GitHub Actions of Commodore are automatically kept up to date.
  • GitHub Actions of Commodore component template are automatically kept up to date.

Fake Components During Discovery

Context

During discovery, all components are cloned, even ones that are not included for the current cluster. Faking the components in this step could improve performance a lot. Additionally, it would reduce the side effects components can have on each other.
After the inventory is built, the actually included components of a cluster can be cloned.

Provide Mechanism to Conditionally Install Sub-Components

If a component has many sub-components, there should be a way to install them only if the main component is installed.

Example:
The crossplane component consists of the main crossplane component and the crossplane-aws and crossplane-cloudscale sub-components. If I include the crossplane-cloudscale component in the cloudscale cloud provider class, I only want it installed if the crossplane main component is also installed.

Currently we handle this with an if-statement in the main.jsonnet and app.jsonnet files:

if std.member(inv.classes, 'components.crossplane') then {
  '00_stack': cloudscale_stack,
  '10_provider': cloudscale_provider,
  '20_s3_instance_classes': s3.instance_classes,
} else {}

Maybe there's a better way to model these component dependencies and automatically resolve which ones to (not) install.
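One way to model this automatically, sketched as a dependency-aware filter over the included components (the function and mapping are hypothetical, not an existing Commodore feature):

```python
def resolve_installable(included, requires):
    """Drop sub-components whose main component is not included.

    `requires` maps a sub-component to the main component it depends on.
    Iterates until stable, so chains of dependencies resolve too."""
    installable = set(included)
    changed = True
    while changed:
        changed = False
        for sub, main in requires.items():
            if sub in installable and main not in installable:
                installable.discard(sub)
                changed = True
    return installable
```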

Define a Kapitan target per component

Context

We've found some instances (e.g. #156) where it would be beneficial to have a Kapitan target per component. For #156, having specific targets per component would allow us to reduce the amount of information that must be provided for the postprocessing filter definition (e.g. the component defining the filter would be clear based on the target).

There are some challenges in the implementation, if we want to make this change in a backwards-compatible manner. The current architecture relies on the single target Kapitan inventory for a number of features, the big ones being the dynamic hierarchy and component discovery. Additionally, some approaches for implementing this change would require refactoring all component classes.

Below we outline an approach which should be backwards compatible for the dynamic hierarchy and component discovery implementation, while not requiring any refactoring in the existing components.

Proposal

The best approach which doesn't require component refactoring and allows us to reuse the current implementation for the dynamic hierarchy and component discovery is to reorganize the Kapitan inventory slightly.

The new inventory structure will be constructed with empty files in classes/components/ for each component. This way, the existing class includes of form components.<component-name> do not pull in the component class which has the parameters.kapitan key anymore. The actual component classes will be made available in classes/_components (exact naming TBD).

With this inventory structure, each target can have the dynamic hierarchy class include of global.commodore without defining compilation instructions for each component. Each target still needs to include all component default classes, to ensure all inventory references can be resolved. Additionally each target will need to include class _components.<component-name> to pull in the Kapitan compile instructions for the targeted component.

A target will have the following form:

classes:
  - [ ... all component defaults for the cluster ... ]
  - global.commodore
  - _components.<component-name>

parameters:
  kapitan:
    vars:
      target: <component-name>

Alternatives

An alternative approach which requires refactoring of all components would be to change components to define their Kapitan configuration in parameters.<component-name>.kapitan instead of parameters.kapitan. This allows us to continue using the component classes in the hierarchy exactly the same way as previously.

Commodore could then generate individual targets per component where parameters.kapitan is extended as follows

parameters:
  kapitan: ${<component_name>:kapitan}

Each component is then responsible for ensuring that its parameters.<component-name>.kapitan entry has vars.target set to <component-name>.

Add support for http proxies

Context

I cannot develop syn components locally since my business laptop is behind a web proxy. Is it possible to respect HTTP_PROXY, HTTPS_PROXY and NO_PROXY variables?

Alternatives

No alternative.

Research and Plan usage of Modulesync

As the number of components will grow, we need a way to keep the plumbing in sync over all Project Syn maintained Commodore Components.

Have a look at https://github.com/voxpupuli/modulesync, research alternatives, and figure out how we can use it for keeping plumbing configuration in sync.

This goes hand in hand with the Cookiecutter templates which also need to be kept in sync. Which one will be the main template?

Make use of Kapitan Compile Cache

When Kapitan is called with kapitan compile --cache, a cache file is created in the compiled/ directory. This file contains the hashes of all input values (classes and parameters). Before compiling the catalog, Kapitan can calculate these hashes and compare them to the current ones. If they didn't change, the compilation won't change anything and can be skipped.

.kapitan_cache

folder: {}
inventory:
  cluster:
    classes: ff5655c9b1f7c742936a02cd0399236f649531c2293f98c7a836d2fcc7036729
    parameters: 002430ede00ab44b0677e163eda31fc369b2706f4dba2a4dfe1bab6c5f1ab811

To make use of this, the following changes are required:

  • Verify the actual caching mechanism and if our assumptions are correct
  • Enable the --cache when calling Kapitan
  • Copy the .kapitan_cache from catalog/ to compiled/ before compilation
  • Copy the .kapitan_cache from compiled/ to catalog/ after compilation (and make sure it's committed)

Since we have to clone all inventory & component repos anyway, this won't speed up the compilation very much at first. But once we implement a caching mechanism for all Git repos, this could have a large impact.
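The skip check itself amounts to hashing the inventory inputs and comparing against the stored cache. A sketch (real Kapitan hashes its own serialization of the inventory, so treat the exact digests here as illustrative only):

```python
import hashlib
import json

def inventory_hashes(classes, parameters):
    """Compute per-target hashes analogous to the classes/parameters
    entries in .kapitan_cache shown above."""
    def digest(obj):
        return hashlib.sha256(
            json.dumps(obj, sort_keys=True).encode()).hexdigest()
    return {"classes": digest(classes), "parameters": digest(parameters)}

def can_skip_compile(cached, classes, parameters):
    """True if neither classes nor parameters changed since the cached run."""
    return cached == inventory_hashes(classes, parameters)
```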

commodore compile error

I'm trying to compile with Commodore to get all cluster info into a clean folder.
Note: I'm not using Poetry.

Steps to Reproduce the Problem

Note: I'm not putting env var values here, for security reasons.

docker pull docker.io/projectsyn/commodore:latest

commodore () {
    mkdir -p inventory/classes/global dependencies/lib compiled/ catalog/ cache/
    docker run \
    --interactive=true \
    --tty \
    --rm \
    --user="$(id -u):$(id -u)" \
    --volume "$HOME"/.ssh:/app/.ssh:ro \
    --volume "$PWD"/compiled/:/app/compiled/ \
    --volume "$PWD"/catalog/:/app/catalog \
    --volume "$PWD"/dependencies/:/app/dependencies/ \
    --volume "$PWD"/inventory/:/app/inventory/ \
    --volume ~/.gitconfig:/app/.gitconfig:ro \
    --volume "$PWD"/cache:/app/.cache \
    -e COMMODORE_API_URL=$COMMODORE_API_URL \
    -e COMMODORE_GLOBAL_GIT_BASE=$COMMODORE_GLOBAL_GIT_BASE \
    -e COMMODORE_API_TOKEN=$COMMODORE_API_TOKEN \
    projectsyn/commodore:latest \
    $*
}

commodore catalog compile ${CLUSTER_ID}

Actual Behavior

and the output is this:

...
Fetching components...
Updating Kapitan target...
Updating cluster catalog...
Updating Jsonnet libraries...
Traceback (most recent call last):
  File "/usr/local/bin/commodore", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.8/site-packages/commodore/cli.py", line 174, in main
    commodore.main(prog_name='commodore', auto_envvar_prefix='COMMODORE')
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/click/decorators.py", line 73, in new_func
    return ctx.invoke(f, obj, *args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/commodore/cli.py", line 85, in compile_catalog
    _compile(config, cluster)
  File "/usr/local/lib/python3.8/site-packages/commodore/compile.py", line 159, in compile
    write_jsonnetfile(config)
  File "/usr/local/lib/python3.8/site-packages/commodore/dependency_mgmt.py", line 233, in write_jsonnetfile
    with open("jsonnetfile.json", "w") as file:
PermissionError: [Errno 13] Permission denied: 'jsonnetfile.json'

Expected Behavior

A clean compile that I can use to init terraform

Rework Component Discovery

Context

Component discovery is currently based on the concept of searching the component under COMMODORE_GLOBAL_GIT_BASE or by checking a commodore.yml in the commodore-defaults Git repo. The idea is to get rid of that and store this information in Lieutenant.

Details still tbd.

Support rendering manifests for a single component

To simplify developing components it should be possible to have Commodore render just the manifests generated by that component, ideally without requiring any external APIs. This will also be a useful building block for implementing automated validation and testing of components.

Potentially the changes required for #71 can be reused to implement this feature.

Task deliverables

  • Manifests generated by a single component can be rendered by Commodore
  • No pre-setup workspace or external API interaction is required to render a component's manifests

Make overriding of dictionaries and arrays possible

Context

The inventory hierarchy allows adding new keys to a dictionary, adding values to an array, and changing scalar values. However, it is not possible to override dictionaries and arrays as a whole.

This is an essential feature to effectively define sensible defaults within a component (for example labels, annotations, node selectors and many more).
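To make the desired semantics concrete, here is a sketch of a merge where normal keys deep-merge but a key prefixed with `~` replaces the base value wholesale. The `~` prefix is only an illustration of the requested feature, not reclass syntax guaranteed to work in this setup:

```python
def merge(base, override, replace_marker="~"):
    """Deep-merge `override` into `base`; a `~`-prefixed key replaces
    the base value instead of merging into it."""
    result = dict(base)
    for key, value in override.items():
        if key.startswith(replace_marker):
            result[key[len(replace_marker):]] = value  # replace wholesale
        elif (key in result and isinstance(result[key], dict)
              and isinstance(value, dict)):
            result[key] = merge(result[key], value, replace_marker)
        else:
            result[key] = value
    return result
```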

Alternatives

https://reclass.pantsfullofunix.net/configfile.html suggests placing a reclass-config.yml file to enable overriding lists and dicts with an empty entity (see https://github.com/kapicorp/reclass/blob/fb50054d6332167c34dd2ae8b486943dd9c28746/README-extensions.rst#allow-override-list-and-dicts-by-empty-entitynone-instead-of-merge). However, this doesn't work; the file doesn't appear to be read at all.

poetry run commodore catalog compile $CLUSTER_ID --local
Running in local mode
 > Will use existing inventory, dependencies, and catalog
 > Using target: cluster
 > Reconstructing Cluster API data from target
Registering config...
Registering components...
Configuring catalog repo...
Inventory reclass error: -> cluster
   Cannot merge scalar over dictionary, at openshift4_registry:config:nodeSelector, in yaml_fs://./commodore/inventory/classes/defaults/openshift4-registry.yml; yaml_fs://./commodore/inventory/classes/t-ancient-morning-1764/openshift4.yml; yaml_fs://./commodore/inventory/classes/t-ancient-morning-1764/c-old-fire-5788.yml
Traceback (most recent call last):
  File "~/Library/Caches/pypoetry/virtualenvs/commodore-4c9QClYe-py3.8/lib/python3.8/site-packages/kapitan/resources.py", line 338, in inventory_reclass
    cached.inv = _reclass.inventory()
  File "~/Library/Caches/pypoetry/virtualenvs/commodore-4c9QClYe-py3.8/lib/python3.8/site-packages/kapitan/reclass/reclass/core.py", line 254, in inventory
    entities[n] = self._nodeinfo(n, inventory)
  File "~/Library/Caches/pypoetry/virtualenvs/commodore-4c9QClYe-py3.8/lib/python3.8/site-packages/kapitan/reclass/reclass/core.py", line 229, in _nodeinfo
    node.interpolate(inventory)
  File "~/Library/Caches/pypoetry/virtualenvs/commodore-4c9QClYe-py3.8/lib/python3.8/site-packages/kapitan/reclass/reclass/datatypes/entity.py", line 79, in interpolate
    self._parameters.interpolate(inventory)
  File "~/Library/Caches/pypoetry/virtualenvs/commodore-4c9QClYe-py3.8/lib/python3.8/site-packages/kapitan/reclass/reclass/datatypes/parameters.py", line 294, in interpolate
    self._interpolate_inner(path, inventory)
  File "~/Library/Caches/pypoetry/virtualenvs/commodore-4c9QClYe-py3.8/lib/python3.8/site-packages/kapitan/reclass/reclass/datatypes/parameters.py", line 321, in _interpolate_inner
    new = self._interpolate_render_value(path, value, inventory)
  File "~/Library/Caches/pypoetry/virtualenvs/commodore-4c9QClYe-py3.8/lib/python3.8/site-packages/kapitan/reclass/reclass/datatypes/parameters.py", line 327, in _interpolate_render_value
    new = value.render(self._base, inventory)
  File "~/Library/Caches/pypoetry/virtualenvs/commodore-4c9QClYe-py3.8/lib/python3.8/site-packages/kapitan/reclass/reclass/values/valuelist.py", line 146, in render
    raise TypeMergeError(self._values[n], self._values[n-1], self.uri)
reclass.errors.TypeMergeError: -> cluster
   Cannot merge scalar over dictionary, at openshift4_registry:config:nodeSelector, in yaml_fs://./commodore/inventory/classes/defaults/openshift4-registry.yml; yaml_fs://./commodore/inventory/classes/t-ancient-morning-1764/openshift4.yml; yaml_fs://./commodore/inventory/classes/t-ancient-morning-1764/c-old-fire-5788.yml

Clone components only once

Context

For each compilation of a cluster, Commodore removes all components and clones them again. This wastes bandwidth and time.

Instead, we could clone them into a separate directory. On compilation, Commodore would then only clone components that aren't already present; for all others, a git fetch and git checkout would suffice.

Making the components available within the inventory can then be done with symlinks.

Automated catalog update commit message is formatted incorrectly

When looking at the commit history for a cluster catalog in GitLab, the commit title is empty (which leads to some usability issues with GitLab).

(screenshot: GitLab commit history showing empty commit titles)

Steps to Reproduce the Problem

  1. check existing catalogs

Actual Behavior

Commit title is empty

Expected Behavior

Commit has a proper title

Rework Documentation of Commodore

Currently, the documentation of Commodore is relatively scarce; it should be expanded to make using Commodore a breeze.

The documentation should:

  • describe the concept of Commodore: What does it do? How does it work?
  • show the configuration options
  • describe the inner workings: how it integrates and uses Kapitan
  • describe the commodore.yml
  • describe the configuration hierarchy
  • give an idea about Commodore Components

Also, the start page of https://syn.tools/commodore/index.html should give a short introduction to Commodore, so that readers immediately get an idea what it's all about.

Adhere to https://documentation.divio.com/ for putting the pages and content into the right context.

Provide All Cluster Facts in Target

Add the cluster facts (from the API) to the target in a dedicated facts dict.

Once all components are migrated, delete the old structure.
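A hypothetical sketch of how the dedicated facts dict could appear in a target; the fact names below mirror the ones used elsewhere in the hierarchy (distribution, cloud provider, region), but the final schema is an assumption, not a spec:

```yaml
parameters:
  facts:
    # assumed keys, corresponding to cluster:dist, cloud:provider
    # and cloud:region from the Cluster API
    distribution: openshift4
    cloud: cloudscale
    region: rma1
```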

Refactor postprocessing to define filters in inventory

Context

Currently, we're defining postprocessing filters in each component in a dedicated file postprocess/filters.yml. This has some upsides (we know which component defines each filter, and the definition file tells us where the filters live) but also some downsides (we don't get reclass features such as variable references for free, and some configuration lives outside the inventory).

Alternatives

The current implementation is an alternative for the proposed change

Proposal

Refactor the Commodore postprocessing to extract filters to run from the inventory, e.g. key parameters.commodore.postprocess.filters. This key could be extended by component classes that want to define postprocessing filters, the same way each component provides entries in parameters.kapitan.compile.

This change would allow us to extract filters from the parsed inventory, with all reclass references already resolved.

We would need to adjust some of the code which executes the postprocessing filters, and probably require more information in the filter definitions; in particular, the path to the filter code for custom Jsonnet filters would have to be given in each filter definition.

The idea would be to introduce "inventory filter definitions" for new components and gradually migrate existing components to the new method, before fully removing support for postprocess/filters.yml.
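A hedged sketch of what such an inventory filter definition could look like; the exact schema is the subject of this proposal, so every key below (including the explicit path to the filter code) is an assumption:

```yaml
parameters:
  commodore:
    postprocess:
      filters:
        # hypothetical custom jsonnet filter entry: the filter code
        # path must now be spelled out explicitly, since there is no
        # postprocess/filters.yml to infer it from
        - type: jsonnet
          filter: postprocess/add_namespace.jsonnet
          path: mycomponent/01_deployment.yaml
```

As with parameters.kapitan.compile, component classes would append entries to this list, and Commodore would read the filters from the already-interpolated inventory.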

References

This idea first popped up in #155 where we considered options to avoid having to implement custom code to resolve reclass references "by hand" in Commodore.

Include Classes Based on AND-combinations of Facts

Context

Currently it's possible to include classes based on facts (hard-coded today, dynamically with #93). This only allows OR-combinations of facts though: for example, I can have a class which applies to all Rancher clusters and one which applies to all cloudscale.ch clusters, but not a class which applies only to clusters which are both Rancher AND on cloudscale.ch.

Possible Solution

Enable the ignore-class-not-found feature we already use during component discovery (#138) also during catalog rendering. With this, we could include all possible combinations of facts, regardless of whether the actual class exists. For example, with #93 implemented:

classes:
- global.distribution.${cluster:dist}
- global.distribution.${cluster:dist}.${cloud:provider}
- global.cloud.${cloud:provider}
- global.cloud.${cloud:provider}.${cluster:dist}
- global.cloud.${cloud:provider}.${cloud:region}
- ${customer:name}.${cluster:name}

This approach would allow to also configure this aspect of the hierarchy. The downside is that we might run into issues with classes which should exist but are ignored (e.g. typos, yml vs yaml, etc.).

Alternative

Implement a feature in Commodore to remove classes which don't exist. This should only auto-remove classes which are allowed to be missing, for example the combination classes above.

This approach would still allow configuring the hierarchy as implemented in #93 and additionally skip missing classes. The exact rule for which classes are allowed to be missing needs careful thought.

Investigate Kapitan Upgrade Issues

We tried to upgrade the Kapitan dependency to v0.29.2, which led to various issues:

  • On macOS we run into the fatal: morestack g0 problem which we didn't before (see kapicorp/kapitan#568)
  • Jsonnet compilation fails with RUNTIME ERROR: Builtin function objectHasEx expected (object, string, boolean) but got (string, string, boolean)
  • A URI is added to the AlertManager config: \"uri\": \"yaml_fs:///app/data/inventory/classes/global/commodore.yml\"\

Steps to Reproduce the Problem

  1. poetry update
  2. poetry install
  3. poetry run commodore catalog compile c-old-fire-5788 (for the Jsonnet error)
  4. poetry run commodore catalog compile c-4h8tcd (on macOS for morestack error)
  5. docker-compose run commodore catalog compile c-4h8tcd (for the uri issue)

Actual Behavior

As described above

Expected Behavior

The cluster compilation should succeed and show no diffs compared to a compile with Kapitan v0.29.1.

Provide possibility to fetch dependencies when running Commodore in local mode

Currently Commodore does not touch any dependencies when executed with --local. This is not very helpful, e.g. when adding new jsonnet library dependencies or similar. Implement an option for local mode where dependencies (potentially selectively, or if they don't exist yet) are fetched.

Consider switching Commodore operation to only do git fetch; git checkout when dependencies already exist locally. This would also allow CI pipelines to cache dependencies between runs.

Task deliverables

  • Commodore can optionally fetch dependencies from remote when run in local mode
  • Dependency fetching in local mode does not destroy local changes (which was the original point of local mode in the first place)

Simplify running Commodore locally

Currently, running Commodore locally can be a bit tricky. The Docker image is PoC quality at best, in particular with regard to how SSH credentials are handled, and running Commodore via Pipenv requires a fair amount of setup on the local machine. In addition, the Pipenv dependencies seem to be incomplete on OS X, further complicating the local setup.

Consider providing the option to have Commodore use deploy keys to clone repositories from various Git hosting platforms. Alternatively consider providing a way to "forward" the local ssh agent on the host into the docker container.

A third option to consider (in conjunction with #70) is to provide Commodore as a package on PyPI for users to install.

Task deliverables

  • Make running Commodore while developing components as straight-forward as possible
  • Create good documentation on how to set up Commodore locally
  • Verify that Commodore can be executed locally on Linux and OS X without any manual adjustments

Simplified Installation of Commodore

Context

The current way to install and use Commodore is via Poetry. This is an entry barrier, as Poetry isn't a common tool and is mainly intended for developing Commodore, not for using it.

Engineer a way to make installation and usage of Commodore very easy:

  • pip install commodore
  • apt install commodore
  • yum install commodore
  • yay commodore
  • brew install commodore

Alternatives

The alternative would be to provide a Docker alias as documented at https://syn.tools/commodore/running-commodore.html.

This adds a dependency on a running Docker daemon and is error-prone because of volume permission issues.

Support jsonnet-bundler to manage component jsonnet dependencies

Currently, the mechanisms available in Commodore don't really support pulling in jsonnet dependencies which use jsonnet-bundler.

However, many useful jsonnet libraries (e.g. kube-prometheus) make use of jsonnet-bundler. It's nearly impossible to include such a library in a Commodore component at the moment, or pull it in via the jsonnet_libs configuration which Commodore supports for 3rd party jsonnet libraries.

Task deliverables

  • Commodore supports pulling in jsonnet-bundler dependencies at the component level at compile time
  • Commodore supports pulling in jsonnet-bundler dependencies at the global level at compile time
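For comparison, this is roughly how a third-party library is pulled in today via the jsonnet_libs configuration mentioned above; the key path and field names here are written from memory and should be treated as an assumption:

```yaml
parameters:
  commodore:
    jsonnet_libs:
      # hypothetical entry: clone the library repo and make the given
      # subdirectory available in the Kapitan search path
      - name: kube-prometheus
        repository: https://github.com/coreos/kube-prometheus.git
        targets:
          - jsonnet/kube-prometheus
```

Native jsonnet-bundler support would instead honor the dependency's own jsonnetfile.json, resolving its transitive dependencies at compile time.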
