
component-openshift4-nodes's Introduction

Commodore Component: OpenShift 4 Nodes

This is a Commodore Component for OpenShift 4 Nodes.

This repository is part of Project Syn. For documentation on Project Syn and this component, see syn.tools.

Documentation

The rendered documentation for this component is available on the Commodore Components Hub.

Documentation for this component is written using Asciidoc and Antora. It can be found in the docs folder. We use the Divio documentation structure to organize our documentation.

Run the make docs-serve command in the root of the project, and then browse to http://localhost:2020 to see a preview of the current state of the documentation.

After writing the documentation, please use the make docs-vale command and correct any warnings raised by the tool.

Contributing and license

This library is licensed under BSD-3-Clause. For information about how to contribute, see CONTRIBUTING.

component-openshift4-nodes's People

Contributors

anothertobi, bastjan, ccremer, corvus-ch, debakelorakel, glrf, haasad, happytetrahedron, simu, srueg


component-openshift4-nodes's Issues

Only nodes with the role `worker` can be created

Creating node groups only works when the role is worker.

According to https://docs.openshift.com/container-platform/4.4/machine_management/creating-infrastructure-machinesets.html, creating machine sets for other roles such as infra should be possible, so the generated MachineSet seems to be missing an important piece of configuration.
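
For reference, the linked documentation configures an infra MachineSet roughly as in the following fragment (placeholders and values are illustrative). One detail that may be the missing piece: even for the infra role, the documented MachineSets keep the worker user-data secret, since only the worker and master machine config pools exist by default.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <infrastructure_id>-infra-<zone>
  namespace: openshift-machine-api
spec:
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machine-role: infra
        machine.openshift.io/cluster-api-machine-type: infra
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: ""
      providerSpec:
        value:
          # ... provider-specific fields ...
          userDataSecret:
            name: worker-user-data  # stays "worker-user-data", not "infra-user-data"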

Steps to Reproduce the Problem

    nodeGroups:
      infra:
        role: infra

Actual Behavior

The resulting MachineSet creates the corresponding Machines, but they only reach the state Provisioned and never become Ready. They also do not show up as Nodes.

Expected Behavior

Node groups with any role can be created: the Machines are created, reach the Ready state, and show up as Nodes.

Support NodeGroup Specific Configuration of ProviderSpec

Context

Configuring different resources for each node group is currently not possible for providers other than GCP (and for GCP only because of #15). My specific use case is to configure memoryMiB and numCPUs differently for two node groups using the vSphere provider [1].

It would make sense to allow each node group to define fields of the providerSpec that add to or override the configured default, as sketched below.

[1] https://docs.openshift.com/container-platform/4.6/machine_management/creating_machinesets/creating-machineset-vsphere.html#machineset-yaml-vsphere_creating-machineset-vsphere
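
A possible parameter layout for such overrides (purely a sketch; the per-node-group providerSpec key does not exist today, and the vSphere values are illustrative):

nodeGroups:
  app:
    replicas: 3
    providerSpec:      # hypothetical: merged over the component's default providerSpec
      memoryMiB: 16384
      numCPUs: 4
  build:
    replicas: 2
    providerSpec:
      memoryMiB: 32768
      numCPUs: 8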

Alternatives

An alternative would be to manage such machine sets manually, which is error-prone.

Delete machine sets created by the installer

Context

The installer already creates a MachineSet per availability zone. These MachineSets need to be removed.

This can probably be solved by deploying a Job object whose container runs a script (or some other piece of software) that uses the API to identify and delete those MachineSets.

One note of warning: an IPI cluster can never have zero worker machines. Before deleting the default MachineSets, we might need to ensure that the new ones are up and running. This might not be needed though, as pod disruption budgets and other mechanisms might prevent deletion of nodes if the workload cannot be rescheduled.
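
A minimal sketch of such a Job, assuming the installer names its worker MachineSets <infrastructureID>-worker-<zone>; the image and service account are placeholders, and the check that the replacement MachineSets are ready is deliberately left as a TODO:

apiVersion: batch/v1
kind: Job
metadata:
  name: delete-installer-machinesets
  namespace: openshift-machine-api
spec:
  template:
    spec:
      serviceAccountName: installer-machineset-cleanup   # placeholder; needs RBAC to list/delete machinesets
      restartPolicy: OnFailure
      containers:
        - name: cleanup
          image: quay.io/openshift/origin-cli:latest      # placeholder; any image that provides `oc`
          command:
            - /bin/sh
            - -c
            - |
              set -e
              # TODO: verify the replacement MachineSets are up before deleting anything.
              INFRA_ID=$(oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}')
              for ms in $(oc -n openshift-machine-api get machinesets -o name); do
                case "${ms}" in
                  */${INFRA_ID}-worker-*)
                    oc -n openshift-machine-api delete "${ms}"
                    ;;
                esac
              done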

Alternatives

Make all the details available so that those MachineSets can be brought into the cluster catalogue.

GCP Specific Fields Added to ProviderSpec

The component adds provider-specific fields to the providerSpec [1]. These fields seem to be specific to the GCP provider: the parameter instanceType is set as the value for machineType, but neither field exists in the vSphere provider, and the GCP field zone corresponds to placement.availabilityZone on AWS.

Sample config on AWS: [2]
Sample config on GCP: [3]

[1] https://github.com/appuio/component-openshift4-nodes/blob/master/component/main.jsonnet#L43
[2] https://docs.openshift.com/container-platform/4.6/machine_management/creating_machinesets/creating-machineset-aws.html#machineset-yaml-aws_creating-machineset-aws
[3] https://docs.openshift.com/container-platform/4.6/machine_management/creating_machinesets/creating-machineset-gcp.html#machineset-yaml-gcp_creating-machineset-gcp
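
For comparison, the relevant providerSpec fragments from the linked samples (values are illustrative):

# GCP (per [3]): machineType and zone
providerSpec:
  value:
    machineType: n1-standard-4
    zone: us-central1-a

# AWS (per [2]): instanceType and placement.availabilityZone
providerSpec:
  value:
    instanceType: m5.large
    placement:
      availabilityZone: us-east-1a
      region: us-east-1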

Steps to Reproduce the Problem

  1. Do not configure the instanceType parameter for the component.
  2. Compile the cluster with Commodore.
  3. Compilation fails with an error that instanceType is missing.

Actual Behavior

Fields specific to GCP are added to the ProviderSpec.

Expected Behavior

No provider-specific fields are added to the ProviderSpec.

Effective replica count of multi zone machines can be higher than the requested one

When defining a multi-AZ node group, the resulting sum of replicas is always rounded up to a multiple of the availability zone count. For example, 4 requested replicas across 3 zones become ceil(4/3) = 2 replicas per zone, i.e. 6 machines. This can be higher than the requested count and lead to additional costs.

Steps to Reproduce the Problem

nodeGroups:
  worker:
    replicas: 4
    multiAz: true
availabilityZones:
  - europe-west6-a
  - europe-west6-b
  - europe-west6-c

Actual Behavior

Three MachineSets, each with replicas set to 2, and thus a total of 6 Machines.

Expected Behavior

Three MachineSets: one with replicas set to 2 and two with replicas set to 1, for a total of 4 Machines.

Workaround

nodeGroups:
  worker-a:
    replicas: 2
    multiAz: false
    spec:
      template:
        spec:
          providerSpec:
            zone: europe-west6-a
  worker-b:
    replicas: 1
    multiAz: false
    spec:
      template:
        spec:
          providerSpec:
            zone: europe-west6-b
  worker-c:
    replicas: 1
    multiAz: false
    spec:
      template:
        spec:
          providerSpec:
            zone: europe-west6-c

Get a cluster's infrastructure ID as a fact

Context

The parameter infrastructureID must be set on each cluster. This value is created by the OpenShift 4 installer and can easily be extracted from the cluster using the following command:

oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster

Exposing this value as a fact allows defining a default within this component and removes the need to manually copy the value into each cluster's configuration.
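
With such a fact in place, the component default could reference it instead of a manually maintained value. A sketch, assuming a hypothetical fact name infrastructureID exposed as a dynamic fact:

parameters:
  openshift4_nodes:
    # hypothetical fact reference; the actual fact name and location are to be defined
    infrastructureID: ${dynamic_facts:infrastructureID}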

Alternatives

Unknown

Deprecate top level Kubelet and Container Runtime configs

Context

In #50 we introduced the option to manage machine config pools. We also added the option to specify KubeletConfig and ContainerRuntimeConfig as part of the machine config pool parameter.

There is always a 1:1 mapping from container runtime and kubelet config to machine config pool.
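
This is because each generated config targets its pool via a machineConfigPoolSelector. A minimal KubeletConfig with an illustrative setting, assuming the pool carries the default pools.operator.machineconfiguration.openshift.io/<name> label:

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: infra-kubelet
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/infra: ""   # selects the "infra" pool
  kubeletConfig:
    maxPods: 150   # illustrative setting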

We should deprecate the top-level kubeletConfigs and containerRuntimeConfigs parameters, as specifying them through the machine config pool is sufficient and more convenient. We also need to document how to move to the new way of configuring things.

Create a follow-up issue to have them removed eventually.

Alternatives

Keep two ways of defining KubeletConfig and ContainerRuntimeConfig resources

Allow configuring auto scaling

  • Create a ClusterAutoscaler object and provide parameters to control its behaviour.
  • Create a MachineAutoscaler for each of the created MachineSets (see the sketch after this list).
  • Replace replicas with replicas.min and replicas.max.
  • Defaults for replicas.min and replicas.max should be 1. This disables auto scaling by default, making enabling it a conscious decision.
  • No longer set the replicas field on the MachineSets; it will be controlled by auto scaling and would otherwise confuse ArgoCD.
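
A sketch of the two objects (API versions as used by OpenShift's cluster autoscaler operator; names and limits are illustrative):

apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default              # the operator expects a single object named "default"
spec:
  resourceLimits:
    maxNodesTotal: 20        # illustrative limit
  scaleDown:
    enabled: true
---
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker               # one per generated MachineSet
  namespace: openshift-machine-api
spec:
  minReplicas: 1             # would come from replicas.min
  maxReplicas: 1             # would come from replicas.max; 1/1 effectively disables scaling
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: worker             # the MachineSet created by the component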
