
Comments (5)

wking avatar wking commented on July 27, 2024

Currently we distribute masters around AZs (see here, set via this, this, this, and this), unless:

  • The caller tells us which subnets to use (currently by ID), or
  • The caller tells us which VPC to use (also currently by ID).

So this issue sounds like "extend InstallConfig to support the external-VPC and master-subnet-IDs variables which are currently supported in Terraform". Does that sound right?

If I'm on the right track so far, this is technically possible. But there have been concerns about growing InstallConfig too much (e.g. see these removed docs, originally from #45 and #85). Personally, I don't see a problem with exposing variables this way as long as:

  • They stay optional, so users who don't care (yet) can ignore them, and
  • We have either:
    • The ability to update the cluster on the fly, if the cluster admins decide they want to subsequently update the properties in the live config, or
    • The ability to complain if a cluster admin attempts to change an immutable setting (e.g. the cluster ID).

But we probably need to sit down and work out how we plan to distinguish mutable vs. write-once-at-install-time settings (just with docs? With a ClusterConfig subset of InstallConfig as discussed here? With a cluster-version-operator API for "I'd like to adjust the cluster to $NEW_CONFIG" instead of allowing admins to edit the config in place? Something else?). @abhinavdahiya and @crawford, thoughts?
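One way to draw that mutable vs. write-once line in code (a sketch of the idea only, not any existing installer or operator API; the type and field names are hypothetical) is to validate a proposed config against the live one and reject edits to fields declared immutable:

```go
package main

import "fmt"

// ClusterConfig is a hypothetical subset of InstallConfig that stays
// live in the cluster after installation.
type ClusterConfig struct {
	ClusterID string // write-once: fixed at install time
	VPCID     string // write-once: the cluster cannot move VPCs
	Replicas  int    // mutable: admins may scale after install
}

// validateUpdate is the kind of check a webhook or operator could run
// before accepting a new config: complain if a write-once field changed.
func validateUpdate(oldCfg, newCfg ClusterConfig) error {
	if newCfg.ClusterID != oldCfg.ClusterID {
		return fmt.Errorf("clusterID is immutable")
	}
	if newCfg.VPCID != oldCfg.VPCID {
		return fmt.Errorf("vpcID is immutable")
	}
	return nil
}

func main() {
	live := ClusterConfig{ClusterID: "abc", VPCID: "vpc-1", Replicas: 3}
	scaled := ClusterConfig{ClusterID: "abc", VPCID: "vpc-1", Replicas: 5}
	moved := ClusterConfig{ClusterID: "xyz", VPCID: "vpc-1", Replicas: 3}
	fmt.Println(validateUpdate(live, scaled) == nil) // scaling is allowed
	fmt.Println(validateUpdate(live, moved) != nil)  // cluster-ID change is rejected
}
```

The same comparison could be driven by docs, by struct tags, or by a dedicated API; the point is only that the mutable/immutable split has to be decided somewhere explicit.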

from installer.

dgoodwin avatar dgoodwin commented on July 27, 2024

Yes, there would be a component extending InstallConfig/ClusterConfig, and possibly some work in the AWS machine actuator to balance the machines. We'd want it for masters as well as compute, I think.

The design we were targeting did allow for the user to specify the subnets they want, driven by Dedicated use cases. Linking by subnet "name" rather than ID would be helpful as well because it allows us to define both in the API at once, whereas the IDs would not be known at that time.

They should be optional I agree.

At this point I would not recommend an in-cluster config-modification story; one-time install would still be good enough for now, I think. Hive would be a way to update a cluster config on the fly, and we would enforce what is mutable and what isn't via CRD webhook validation. The right answer here is probably a cloud-specific AWS operator that handles a subnet CRD, with Install/ClusterConfig boiling down to creation of those CRDs, similar to what we've talked about for ongoing maintenance of other cloud infra the installer might create.


wking avatar wking commented on July 27, 2024

The design we were targeting did allow for the user to specify the subnets they want, driven by Dedicated use cases. Linking by subnet "name" rather than ID would be helpful as well because it allows us to define both in the API at once, whereas the IDs would not be known at that time.

I'm not familiar with Dedicated workflows. If the user wants a new cluster near some existing Dedicated hosts, couldn't they give us the IDs of their existing subnets? Or make the subnets on their own and pass us the IDs (bring your own subnets)? Changing from ID to names may not be too bad, but there are already a lot of balls in the air ;).

Hive would be a way to update a cluster config on the fly, and we would enforce what is mutable and what isn't via CRD webhook validation.

👍


twiest avatar twiest commented on July 27, 2024

I'm not familiar with Dedicated workflows. If the user wants a new cluster near some existing Dedicated hosts, couldn't they give us the IDs of their existing subnets?

For some context, OpenShift Dedicated does 1 cluster per VPC. So 2 clusters would never share the same VPC's subnet(s). If 2 clusters needed to communicate, they'd do so over some type of bridge (e.g. VPC Peering).

Or make the subnets on their own and pass us the IDs (bring your own subnets)?

I agree with @dgoodwin that we'd want to do something more automatic here. The goal of 4.x is to drastically reduce the toil that OPS has to do to maintain clusters. Devan's "cloud specific AWS operator that handles a subnet CRD" sounds good to me.


eparis avatar eparis commented on July 27, 2024

At this point we install multi-az out of the box and do not plan to address this issue any further.

