cluster-registry's Issues

Supports guarantees around immutability/identity of clusters in list

/sig multicluster

The cluster registry should provide a way to guarantee that a cluster that has been added has not been changed out for another cluster at the same IP address. This could be via a certificate stored in the cluster that a controller validates against the registry, and that can be provided to the user to validate against the cluster in a defined way.
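As a minimal sketch of what such a check could look like, assuming the registry stores a PEM-encoded CA bundle for each cluster (the function and parameter names below are hypothetical, not part of the current API):

```go
package registry

import (
	"crypto/x509"
	"fmt"
)

// verifyClusterIdentity checks that the serving certificate presented by a
// cluster chains back to the CA bundle recorded for that cluster in the
// registry, so a different cluster answering at the same IP address would
// fail verification. storedCABundle and serverName are hypothetical fields.
func verifyClusterIdentity(storedCABundle []byte, servingCert *x509.Certificate, serverName string) error {
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(storedCABundle) {
		return fmt.Errorf("stored CA bundle contains no valid certificates")
	}
	_, err := servingCert.Verify(x509.VerifyOptions{
		Roots:   pool,
		DNSName: serverName,
	})
	return err
}
```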

From https://github.com/kubernetes/community/blob/master/contributors/design-proposals/multicluster/cluster-registry/project-design-and-plan.md#later

Determine whether generated code needs to be checked in

From a comment on #21, there might be reason to check in the generated artifacts to enable people who do not use bazel to depend on this repository, and perhaps to make the test infrastructure easier to wrangle. I am quite opposed to checking in artifacts unless it really is blocking a workflow, so this issue is tracking whether there is enough need to outweigh the pain of checked-in generated code.

If this is causing you problems, please bump this issue!

clusterregistry should be able to back its storage with custom resources

Generic API servers can store state anywhere. The sample API server (the project I believe this project is based on) happens to only support etcd, but since the cluster-registry expects to store very small amounts of data (~hundreds of objects max?), custom resources don't create a crazy amount of overhead.

I'd like to propose that the clusterregistry should support storing state in custom resources. This would dramatically reduce the amount of overhead it takes to deploy it.
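For illustration only, here is a rough sketch of the CustomResourceDefinition a CRD-backed deployment might register, assuming the clusterregistry.k8s.io API group and the apiextensions v1beta1 types (the details are a guess, not a worked-out design):

```go
package crdstorage

import (
	apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// clusterCRD sketches the CRD that could back the Cluster type instead of a
// dedicated etcd: it would be created once (e.g. via the apiextensions
// clientset), after which Cluster objects are stored as custom resources in
// the host cluster.
func clusterCRD() *apiextensionsv1beta1.CustomResourceDefinition {
	return &apiextensionsv1beta1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "clusters.clusterregistry.k8s.io"},
		Spec: apiextensionsv1beta1.CustomResourceDefinitionSpec{
			Group:   "clusterregistry.k8s.io",
			Version: "v1alpha1",
			Scope:   apiextensionsv1beta1.ClusterScoped,
			Names: apiextensionsv1beta1.CustomResourceDefinitionNames{
				Plural:   "clusters",
				Singular: "cluster",
				Kind:     "Cluster",
			},
		},
	}
}
```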

Version skew is understood, documented and tested

/sig multicluster

We need to test version skew for the cluster registry, the hosting clusters and kubectl. Ideally, we will have upgrade tests and e2e tests that can verify many of these things, but we do need to decide what is worth testing and then write the tests/frameworks necessary.

As a principle, since the cluster registry itself is so simple and foundational, the value of having these sorts of tests is very high: they allow users to upgrade with more confidence, and the tests themselves are not that complicated to write.

From https://github.com/kubernetes/community/blob/master/contributors/design-proposals/multicluster/cluster-registry/project-design-and-plan.md#later

Supports status from active controllers

/sig multicluster

This requires some API design work: the Status object is currently empty. Someone will need to drive building consensus around an initial design for this object. Note that there may be several controllers interacting with the cluster registry as an expected use case, and the status object needs to take this into account.
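As one strawman to seed that discussion, the Status object could borrow the conditions pattern used elsewhere in Kubernetes, with each controller reporting its own condition (all names below are hypothetical, not an agreed design):

```go
package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// ClusterCondition is a strawman per-controller status entry: each controller
// that watches the registry reports its own condition type, so multiple
// controllers can coexist without overwriting each other's state.
type ClusterCondition struct {
	// Type identifies the condition, e.g. "Ready" or "FederationJoined".
	Type string `json:"type"`
	// Status is True, False, or Unknown.
	Status string `json:"status"`
	// Reporter identifies the controller that set this condition.
	Reporter string `json:"reporter,omitempty"`
	// Reason and Message explain the last transition in machine- and
	// human-readable form, respectively.
	Reason  string `json:"reason,omitempty"`
	Message string `json:"message,omitempty"`
	// LastTransitionTime records when the condition last changed.
	LastTransitionTime metav1.Time `json:"lastTransitionTime,omitempty"`
}

// ClusterStatus would then hold a list of conditions rather than being empty.
type ClusterStatus struct {
	// +optional
	Conditions []ClusterCondition `json:"conditions,omitempty"`
}
```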

From https://github.com/kubernetes/community/blob/master/contributors/design-proposals/multicluster/cluster-registry/project-design-and-plan.md#beta

crinit tool name and interface should be considered more carefully

From #65 (comment), we should consider how the crinit tool can be named and how its interface should be organized.

  • renaming crinit to kubecr (credit @font for the name) or something else
  • renaming/regrouping the subcommands to prepare for adding more commands to interact with a cluster registry. This isn't necessarily straightforward, since the tool is intended to support both aggregated and standalone deployments, each of which may have different create/update/delete operations, which may imply a three-level nesting of (e.g.) kubecr <aggregated|standalone> <init|update|delete> (see the sketch below).

I think this is something we'd want to do on a beta timeframe, since at that point we will be expecting the tool's interface not to change significantly.
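For concreteness, here is a rough cobra sketch of what that nesting could look like (kubecr is only a proposed name, and the grouping is illustrative):

```go
package main

import (
	"os"

	"github.com/spf13/cobra"
)

// newKubeCRCommand sketches a possible kubecr command tree: the first level
// picks the deployment mode, the second the operation, leaving room for
// mode-specific flags and future subcommands.
func newKubeCRCommand() *cobra.Command {
	root := &cobra.Command{Use: "kubecr", Short: "Manage cluster registry deployments"}

	aggregated := &cobra.Command{Use: "aggregated", Short: "Operate on an aggregated cluster registry"}
	standalone := &cobra.Command{Use: "standalone", Short: "Operate on a standalone cluster registry"}

	for _, mode := range []*cobra.Command{aggregated, standalone} {
		mode.AddCommand(
			&cobra.Command{Use: "init", Short: "Deploy a cluster registry"},
			&cobra.Command{Use: "update", Short: "Update an existing deployment"},
			&cobra.Command{Use: "delete", Short: "Tear down a deployment"},
		)
		root.AddCommand(mode)
	}
	return root
}

func main() {
	if err := newKubeCRCommand().Execute(); err != nil {
		os.Exit(1)
	}
}
```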

Add doc.go files for GoDoc

GoDoc seems to require doc.go files in packages that it scans. Add them where appropriate, and make sure that there are no other things to do to integrate with GoDoc.
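For example, a minimal doc.go (the package path and wording are illustrative):

```go
// Package v1alpha1 contains the API types served by the cluster registry.
//
// A file like this exists only to hold the package comment that GoDoc
// renders for the package.
package v1alpha1
```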

Flesh out the ClusterStatus field contents and usage patterns

Migrated from the Cluster API doc:

How should the Status field work? How do users intend to use the status field? How does the API indicate that the status is being updated by a tool vs. a user vs. no one? Is that useful information? For one particular example: CloudProvider can be provided by a user or by a tool; does it belong in status?

Publish reference docs for cluster registry API server and crinit

Cluster registry users need reference docs for both the API server and crinit binaries. The help text accessible through the --help command-line flag serves its purpose, but it is also extremely valuable to have it on the web, where it is indexed and quickly accessible through search engines. That saves users from downloading the binaries and running --help every time they want to look something up quickly.

cc @perotinus @font @pmorie

Authentication Config is not versionable

The proposed authentication configuration revolves around a map[string]string:

// Config is a map of values that contains the information necessary for a
// client to authenticate to a Kubernetes API server.
// +optional
Config map[string]string `json:"config,omitempty" protobuf:"bytes,2,rep,name=config"`

My understanding is that in general we are now avoiding untyped maps for anything intended to be system-meaningful. They are hard to version & update, and we risk repeating the same problems we hit when using labels/annotations where we probably should have been using fields.

Should we consider a pattern more akin to health checks or volumes, with an effective union of strongly-typed options-types?
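As a strawman, such a union could look roughly like this (all type and field names below are hypothetical, not a reviewed proposal):

```go
package v1alpha1

// AuthInfo is a hypothetical strongly-typed replacement for the untyped
// Config map. Like VolumeSource, it is an effective union: at most one of
// the members is expected to be set, and new mechanisms are added as new
// optional, versioned fields.
type AuthInfo struct {
	// +optional
	X509 *X509AuthInfo `json:"x509,omitempty"`
	// +optional
	BearerToken *BearerTokenAuthInfo `json:"bearerToken,omitempty"`
	// +optional
	OIDC *OIDCAuthInfo `json:"oidc,omitempty"`
}

// X509AuthInfo references client certificate material rather than embedding it.
type X509AuthInfo struct {
	CertificateSecretRef string `json:"certificateSecretRef,omitempty"`
}

// BearerTokenAuthInfo references a bearer token rather than embedding it.
type BearerTokenAuthInfo struct {
	TokenSecretRef string `json:"tokenSecretRef,omitempty"`
}

// OIDCAuthInfo describes an OIDC issuer the client should authenticate against.
type OIDCAuthInfo struct {
	IssuerURL string `json:"issuerURL,omitempty"`
	ClientID  string `json:"clientID,omitempty"`
}
```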

clusterregistry apiserver delegated vs standalone personality should be considered

From #8 (comment):

How do we want to handle the different personalities that clusterregistry will assume when running as a delegated vs standalone apiserver?

Some things that will need to be addressed:

  • Authn model: currently CR has built-in authn model. Will also need to delegate to core k8s apiserver. Need to allow both.
  • Authz model: currently CR has always-allow authz model. Will also need to delegate to core k8s apiserver. Need to allow both.
  • Necessary flags: handling common and mutually exclusive flags for standalone vs delegated
  • Other stuff?

Some possible options:

  • Add extra flag error checking. Possible confusion when a user specifies combinations that are not compatible. This assumes we have several mutually exclusive options; it's not clear that we do yet, but we may in the future.
    • Roughly 8 for authn standalone and 8 for authn delegated.
  • Do we want to add new sub-commands to handle the different flags that could then trigger different paths for the authn/authz setup? This would seem strange for an apiserver. Are there any other apiservers doing this?
    • This could be something similar to hyperkube, where the first argument specifies which "personality" to execute (see the sketch after this list).
  • Two separate binaries for delegated vs standalone. Probably try to avoid this unless we follow a hyperkube type model.
  • Other options?
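A bare-bones sketch of the hyperkube-style option mentioned above, where the first argument selects the personality and each personality owns its own flag handling (the function names are hypothetical):

```go
package main

import (
	"fmt"
	"os"
)

// runStandalone and runDelegated stand in for the two server setups, each
// owning its own flag set so that incompatible authn/authz flags never mix.
func runStandalone(args []string) error { /* standalone authn/authz wiring */ return nil }
func runDelegated(args []string) error  { /* delegated authn/authz wiring */ return nil }

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: clusterregistry <standalone|delegated> [flags]")
		os.Exit(1)
	}
	var err error
	switch os.Args[1] {
	case "standalone":
		err = runStandalone(os.Args[2:])
	case "delegated":
		err = runDelegated(os.Args[2:])
	default:
		err = fmt.Errorf("unknown personality %q", os.Args[1])
	}
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```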

Motivation for the CloudProvider field?

I'm trying to understand the use case for the CloudProvider field.

What is the cloud provider in a multi-repo, multi-cloudprovider-implementation world? If there are two AWS cloudprovider implementations, should they have the same name or a different name? How are the CloudProvider names registered?

What if a single cluster spans multiple clouds? Should this be a []CloudProvider?

Write a design document

Now that the cluster registry prototype is done, we should write a design based on this knowledge and circulate it more broadly.

Write a user guide

Once the cluster registry is in a stable enough state, it should have a user guide distinct from the developer guide.

This may make sense to host here, or perhaps on a shared Kubernetes page.

cc @pmorie

Multi-tenancy

The cluster registry currently doesn't support namespaces, which are the main way that Kubernetes APIs support multi-tenancy. It's not clear whether we can use RBAC as-is to support multi-tenancy in the cluster registry, or whether we will have to build some alternative support or modify the cluster registry in order to work more fluently with RBAC.

This is one reason that we would implement an authorizer other than "AlwaysAllow": in the simple single-tenant case, we can probably assume that authn==authz (that is, anyone who can authenticate is authorized), since there is only one resource type, the cluster.
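A sketch of what that authn==authz authorizer could look like against the k8s.io/apiserver authorizer interface (the interface's exact signature has varied across releases; this assumes the current Decision-based form and is illustrative only, not code from this repository):

```go
package authz

import (
	"context"

	"k8s.io/apiserver/pkg/authorization/authorizer"
)

// authenticatedOnly allows any request that carries an authenticated user and
// denies everything else: authn==authz for the single-tenant case.
type authenticatedOnly struct{}

func (authenticatedOnly) Authorize(ctx context.Context, a authorizer.Attributes) (authorizer.Decision, string, error) {
	if a.GetUser() == nil || a.GetUser().GetName() == "" {
		return authorizer.DecisionDeny, "request is not authenticated", nil
	}
	return authorizer.DecisionAllow, "authenticated users may access the cluster registry", nil
}
```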

Not a major issue now.

Set up per-PR tests and other CI

Once #19 goes in, there will be an incomplete but runnable suite of bazel tests that should be run before PR submission.

This involves adding some configuration to the Prow config (https://github.com/kubernetes/test-infra/blob/master/prow/config.yaml) and the job config (https://github.com/kubernetes/test-infra/blob/master/jobs/config.json).

Since it is currently necessary to run ./update-codegen.sh and bazel run //:gazelle before running the bazel tests, I expect that we will have to use a custom execution scenario (https://github.com/kubernetes/test-infra/blob/master/scenarios/execute.py) and call the bazel scenario directly after running the codegen commands.

Figure out a coherent strategy for updating vendored dependencies

Based on @ericchiang's comment (#58 (comment)).

dep is currently not quite ready for use with all of Kubernetes, and using it will potentially expose us to incompatibilities; however, there is activity in other k8s repos to improve dep support. Since updating vendored deps is not a common activity, it's not critical to have it be a completely smooth experience, but it should also not be so arcane and messy as to be undoable. We need to evaluate further.

Implement delegated authorization

The cluster registry needs a way to authorize requests. Currently, it allows authenticated callers full access. It should at least support r/w vs readonly usage.

The canonical Kubernetes way to handle this would be RBAC, but we don't currently support creating RBAC policies in the cluster registry, and it's not clear that the cluster registry should have a parallel set of RBAC rules to the main Kubernetes API server (and it probably should defer to another API server if it is an aggregated API server).

Implement non-delegated authorization

The cluster registry should have a way to handle authorization when it is not being run as a delegated API server. It doesn't seem ideal to implement the RBAC APIs in the cluster registry, given the potential for confusion and mismatches in implementing an API for a k8s object. One potential approach here is to use an authenticating/authorizing proxy.

Split from #8, which is tracking work for delegated authorization.

Working integration with Federation

/sig multicluster

The API should be stable enough at this point to support replacing the Federation cluster API with the cluster registry. The details of this integration will need some design: it's not clear whether the Federation API server will aggregate the cluster registry, or include the API types directly, or something else entirely.

From https://github.com/kubernetes/community/blob/master/contributors/design-proposals/multicluster/cluster-registry/project-design-and-plan.md#beta

Cluster API issues

@pmorie raised a few issues in #16 about the cluster registry API that were not addressed:

  • being more explicit about how credentials are handled/whether credentials should be stored
    • work here is waiting for input from SIG-auth about how to handle cluster credentials in the cluster registry
  • optional fields should be pointers
    • this seems reasonable, but we would like to discuss with sig-arch and/or sig-api-machinery to make sure (see the small illustration below)
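As a small illustration of the pointer suggestion (CloudProvider is borrowed from the existing API discussion; the surrounding type is hypothetical):

```go
package v1alpha1

// ClusterSpecSketch illustrates the difference: with a plain string an empty
// CloudProvider is indistinguishable from an unset one, while a pointer lets
// nil mean "not specified".
type ClusterSpecSketch struct {
	// +optional
	CloudProvider *string `json:"cloudProvider,omitempty"`
}
```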

Besides this, there are a few issues from the cluster registry API design that also need to be addressed:

  • using a stronger type for the Kubernetes API server info
    • this was going to be a copy of api.Endpoints from k8s, for reasons described here, but after pushback more discussion is needed
  • spec/status field names/presence
    • this is waiting on discussions with sig-arch and/or sig-api-machinery
  • CABundle in KubeAPIServer
    • waiting for response on comment

And an issue from #29:

  • figure out motivation for or remove CloudProvider field

cf https://docs.google.com/a/google.com/document/d/1Oi9EO3Jwtp69obakl-9YpLkP764GZzsz95XJlX1a960/edit?disco=AAAABFxkK0A

cc @pmorie @quinton-hoole
