
application's People

Contributors

ant31, barney-s, barry8schneider, bgrant0607, cgrant, crimsonfaith91, dlorenc, ensonic, fish-pro, foxish, gyliu513, huyhg, hzxuzhonghu, ironpan, janetkuo, jbrette, k8s-ci-robot, konryd, kow3ns, liyinan926, mattfarina, mortent, myishay, nan-yu, nikhita, rlenferink, sgandon, tamalsaha, tossmilestone, zheddie


application's Issues

Improve end-user installation instructions

The current instructions could use some improvement.

A few suggestions:

  • Make a minimal manifest with only the CRD (i.e. remove namespace, statefulset, etc.)
  • Put the manifest in a friendlier location (i.e. not "hack" directory)
  • Put the relative path to the manifest in the command-line instructions.
  • Nice to have: a section explaining how to manage multiple CRD versions.

Application status

The Application resource is intended to be used by UIs. One of the most useful features of those is reporting the health of the displayed resources.

The "status" of the application (for the sake of discussion, let's simplify to either "OK" or "Error") can be generated in one of the following ways:

  1. With an explicit field in ApplicationStatus.
  2. With a reference to a running pod's readiness probe.
  3. By analyzing Kubernetes-level events involving the application.
  4. By computing the status of each of the Application's components and using the most problematic one.

Each of the approaches carries some trade-offs:

Explicit field in ApplicationStatus

Pros:

  1. Easy to consume.

Cons:

  1. Would have to be populated by the application controller; the logic that would run there is far from obvious and might differ from application to application.

With a reference to a running pod's readiness probe

Pros:

  1. Easy to consume.
  2. Easy to implement custom status logic.
  3. The pod to use might be one that already runs inside the app (e.g. the WordPress app might use the serving pod for this, which would also check the database)

Cons:

  1. The pod would have to be managed with a deployment, growing the overhead for each application.
  2. The application owners / providers would have to include the readiness pod.
  3. Puts additional load on cluster resources.

NOTE: this could be an optional approach, with a default fallback being any of the other approaches.

By analyzing Kubernetes-level events

Pros:

  1. Implementable using only existing Kubernetes mechanisms.
  2. A similar approach works well for existing resources (e.g. Deployment in kube-dashboard)

Cons:

  1. The algorithm for turning a list of events into a status would have to mature through use and will be easy to break, especially in the initial period before the Application resource matures.
  2. Most likely, it will only be able to capture Kubernetes-control-plane-level problems. Making it work for individual components' statuses as well would inherit all of the other solutions' problems.
  3. Will force each consumer to re-implement the status computation logic.

By computing the status of each of the Application's components and using the most problematic one

Pros:

  1. Exhaustive and very likely to match users' expectations.

Cons:

  1. Expensive to compute, even more so when listing all applications in a cluster (where it would pretty much mean querying for all available GroupKinds everywhere). A sketch of this approach follows.
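
For illustration, a minimal sketch of the fourth approach, assuming a hypothetical componentStatus type and severity ordering (these names are illustrative, not repo code):

// severity orders statuses from healthiest to most problematic.
var severity = map[string]int{"OK": 0, "Progressing": 1, "Error": 2}

type componentStatus struct {
	Name   string
	Status string // "OK", "Progressing", or "Error"
}

// aggregate returns the most problematic status among the components.
// An Application with no components is trivially "OK".
func aggregate(components []componentStatus) string {
	worst := "OK"
	for _, c := range components {
		if severity[c.Status] > severity[worst] {
			worst = c.Status
		}
	}
	return worst
}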

Please add a roadmap

It would be great to see a roadmap for this project. What are you planning to do next, and in what timeframe?

tests are failing

Issue

One of the tests fails when it is executed:

make docker-build IMG=my-app-operator
go generate ./pkg/... ./cmd/...
go fmt ./pkg/... ./cmd/...
go vet ./pkg/... ./cmd/...
go run vendor/sigs.k8s.io/controller-tools/cmd/controller-gen/main.go all
CRD manifests generated under '/Users/dabou/Code/go-workspace/src/github.com/kubernetes-sigs/application/config/crds' 
RBAC manifests generated under '/Users/dabou/Code/go-workspace/src/github.com/kubernetes-sigs/application/config/rbac' 
go test ./pkg/... ./cmd/... -coverprofile cover.out
?       github.com/kubernetes-sigs/application/pkg/apis [no test files]
?       github.com/kubernetes-sigs/application/pkg/apis/app     [no test files]
ok      github.com/kubernetes-sigs/application/pkg/apis/app/v1beta1     11.795s coverage: 9.1% of statements
ok      github.com/kubernetes-sigs/application/pkg/component    0.053s  coverage: 40.0% of statements
?       github.com/kubernetes-sigs/application/pkg/controller   [no test files]
2019/02/05 19:25:35 *v1beta1.Application/default/foo Validating spec
2019/02/05 19:25:35 *v1beta1.Application/default/foo Applying defaults
2019/02/05 19:25:35 *v1beta1.Application/default/foo(cmpnt:app)  { reconciling component
2019/02/05 19:25:35 *v1beta1.Application/default/foo(cmpnt:app)  Expected Resources:
2019/02/05 19:25:35 *v1beta1.Application/default/foo(cmpnt:app)  Observed Resources:
2019/02/05 19:25:35 *v1beta1.Application/default/foo(cmpnt:app)  Reconciling Resources:
2019/02/05 19:25:35 *v1beta1.Application/default/foo(cmpnt:app)  } reconciling component
2019/02/05 19:25:35 *v1beta1.Application/default/foo Validating spec
2019/02/05 19:25:35 *v1beta1.Application/default/foo Applying defaults
2019/02/05 19:25:35 *v1beta1.Application/default/foo(cmpnt:app)  { reconciling component
2019/02/05 19:25:35 *v1beta1.Application/default/foo(cmpnt:app)  Expected Resources:
2019/02/05 19:25:35 *v1beta1.Application/default/foo(cmpnt:app)  Observed Resources:
2019/02/05 19:25:35 *v1beta1.Application/default/foo(cmpnt:app)  Reconciling Resources:
2019/02/05 19:25:35 *v1beta1.Application/default/foo(cmpnt:app)  } reconciling component
--- FAIL: TestReconcile (5.23s)
    application_controller_test.go:77: 
        Timed out after 5.004s.
        Expected success, but got an error:
            <*errors.StatusError | 0xc0002b55f0>: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
                    Status: "Failure",
                    Message: "Deployment.apps \"foo-deployment\" not found",
                    Reason: "NotFound",
                    Details: {
                        Name: "foo-deployment",
                        Group: "apps",
                        Kind: "Deployment",
                        UID: "",
                        Causes: nil,
                        RetryAfterSeconds: 0,
                    },
                    Code: 404,
                },
            }
            Deployment.apps "foo-deployment" not found
FAIL
coverage: 75.0% of statements
FAIL    github.com/kubernetes-sigs/application/pkg/controller/application       16.850s
?       github.com/kubernetes-sigs/application/pkg/customresource       [no test files]
ok      github.com/kubernetes-sigs/application/pkg/finalizer    0.043s  coverage: 94.7% of statements
?       github.com/kubernetes-sigs/application/pkg/genericreconciler    [no test files]
ok      github.com/kubernetes-sigs/application/pkg/kbcontroller 0.031s  coverage: 0.0% of statements
ok      github.com/kubernetes-sigs/application/pkg/resource     0.063s  coverage: 27.8% of statements
?       github.com/kubernetes-sigs/application/cmd/manager      [no test files]
make: *** [test] Error 1

Proposal: Flag to adopt resources

This proposal is for adopting resources that match the Application CRD's selectors.

# Flag that indicates matched resources must be adopted
.spec.adopt: true

In some use cases of the Application CRD, the tooling explicitly sets the OwnerReference of the objects that match the Application's selectors to point to the Application object.

Can we make this automatic in the controller, gated by a flag that is turned off by default? A sketch follows.
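
A minimal sketch of what the flag could look like on ApplicationSpec (field name and defaulting are assumptions, not settled API):

// ApplicationSpec (excerpt) with the proposed adoption flag.
type ApplicationSpec struct {
	// Existing fields omitted for brevity.

	// Adopt, when true, instructs the controller to set an OwnerReference
	// pointing at the Application on every resource that matches the
	// Application's selector and componentKinds. Defaults to false.
	Adopt bool `json:"adopt,omitempty"`
}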

Application Readiness

Application Status

Objective

The objective of this proposal is to provide a mechanism to aggregate the status of an Application. We propose a mechanism to compute the readiness, availability, errors, and disruptions associated with an Application. As black-box and white-box health monitoring is a complicated topic that deserves its own treatment, we do not address it in this proposal.

Background

  • We have decided that status should not be computed from phases. Phases are deprecated, since state-machine enumerations are antithetical to the level-triggered world of k8s. See the API conventions and this discussion.
  • Based on the above conventions, and on previous community discussions in the main repository, we use conditions (see 51594) and fields in the .status of a resource to communicate information about its readiness. This allows existing CRDs to opt into the scheme without breaking compatibility with existing tooling and to evolve to use fields. Additionally, it gives us a mechanism to attach additional, human-readable information about the status of an Application's components.

Conditions

Conditions are used across the Kubernetes API surface to indicate the condition of a resource as its controller seeks to realize the declared intent in its specification. They are described by the golang struct below. Throughout this proposal we use conditions in conjunction with fields.

type Condition struct {
	// Type is the type of the condition.
	Type ComponentConditionType `json:"type"`
	// Status is one of True, False, Unknown.
	Status v1.ConditionStatus `json:"status"`
	// LastTransitionTime is the last time the condition's status changed.
	LastTransitionTime metav1.Time `json:"lastTransitionTime,omitempty"`
	// Reason is the reason for the condition's last transition.
	Reason string `json:"reason,omitempty"`
	// Message is a human readable message indicating details about the transition.
	Message string `json:"message,omitempty"`
}

Readiness

From the perspective of Pods, readiness indicates the ability of the Pod to receive network traffic. It also indicates that the resource, from the perspective of the control loops that act on it, is ready for use. We use the same semantics for the components of an Application. Readiness for an Application implies that all of its components are ready. That is, an Application is ready if and only if all of its components are ready. Components that contain no user-declared desired state (i.e. have no spec, e.g. ConfigMaps and Secrets) are always ready post creation.

  1. A controller MAY communicate that a resource is ready by indicating status.ready=true or by including a condition like {"type":"Ready","status":"true"} in the resource's status.conditions.
  2. A controller MAY communicate that a resource is unready by indicating status.ready=false or by including a condition like {"type":"Ready","status":"false"} in the resource's status.conditions.
  3. If a controller communicates conflicting readiness (e.g. setting status.ready=true and including a condition like {"type":"Ready","status":"false"}), the value of the field takes precedence (see the sketch below).
  4. If a controller uses both a status field and a condition to communicate readiness, and if the field and condition are consistent, the condition is treated as a decorator for the field (e.g. setting status.ready=true and including a condition like {"type":"Ready","status":"true","message":"RDBMS is ready for use."} provides a user-readable message about the readiness of the resource).
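
A sketch of the precedence rules above, assuming the resource is accessed as unstructured content (the helper name is illustrative; note that serialized condition statuses are "True"/"False"):

import "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

// readiness resolves a resource's readiness per rules 1-4: an explicit
// status.ready field wins over a Ready condition; the condition alone is
// used when the field is absent.
func readiness(u *unstructured.Unstructured) (ready, known bool) {
	if r, found, _ := unstructured.NestedBool(u.Object, "status", "ready"); found {
		return r, true // rule 3: the field takes precedence
	}
	conditions, found, _ := unstructured.NestedSlice(u.Object, "status", "conditions")
	if !found {
		return false, false
	}
	for _, c := range conditions {
		cond, ok := c.(map[string]interface{})
		if !ok || cond["type"] != "Ready" {
			continue
		}
		return cond["status"] == "True", true
	}
	return false, false
}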

Availability

From the perspective of Deployments, and many out-of-tree resources, availability indicates that all Pods related to the resource remain ready after some configurable duration. This notion is useful for application components in general as an indication that the resource is unlikely to fall victim to infant mortality after creation or mutation. We use availability for an Application's components in this context. As availability is not applicable to all components, it is not aggregated for an Application.

  1. A controller MAY communicate that a resource is available by indicating status.available=true or by including a condition like {"type":"Available","status":"true"} in the resource's status.conditions.
  2. A controller MAY communicate that a resource is unavailable by indicating status.available=false or by including a condition like {"type":"Available","status":"false"} in the resource's status.conditions.
  3. If a controller communicates conflicting availability (e.g. setting status.available=true and including a condition like {"type":"Available","status":"false"}), the value of the field takes precedence.
  4. If a controller uses both a status field and a condition to communicate availability, and if the field and condition are consistent, the condition is treated as a decorator for the field.

Observation

Kubernetes control loops communicate that they have observed modifications of the declared desired state contained in a resource by setting its status.observedGeneration to its generation. A resource for which this is true is said to be observed.

  1. Controllers SHOULD update status.observedGeneration to the value of .metadata.generation to communicate that they have observed the creation of, or a mutation to, a resource they control (sketched below).
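
A sketch of the observation check, using the same unstructured helpers as the readiness sketch above (again illustrative, not repo code):

// observed reports whether the controller has seen the resource's latest generation.
func observed(u *unstructured.Unstructured) bool {
	gen, _, _ := unstructured.NestedInt64(u.Object, "metadata", "generation")
	obs, found, _ := unstructured.NestedInt64(u.Object, "status", "observedGeneration")
	return found && obs == gen
}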

Progress

Kubernetes control loops use various methods to communicate that reconciliation between a resource's specification and the observed state of the system is progressing. Deployments, and many non-core resources, communicate this using the Progressing condition. For an Application, progressing components indicate that the application is updating.

  1. A controller MAY communicate that reconciliation is in progress by setting status.progress to true or to a non-negative 32-bit floating point number in [0,100] (e.g. status.progress=true or status.progress=99.9).
  2. A controller MAY communicate that reconciliation is in progress by including a condition like {"type":"Progressing","status":"true"} in the resource's status.conditions.
  3. If a controller communicates conflicting progress (e.g. setting status.progressing=true and including a condition like {"type":"Progressing","status":"false"}), the value of the field takes precedence.
  4. If a controller uses both a status field and a condition to communicate progress, and if the field and condition are consistent, the condition is treated as a decorator for the field.

Disruptions

Application components may be affected by planned or unplanned disruptions. For instance, the destruction of a Node may disrupt many replicated Pod sets. The application controller MAY use other resources, e.g. PodDisruptionBudgets, to add this condition to a resource's status, and the controller for a resource MAY communicate this directly by adding such a condition.

  1. A controller MAY communicate disruptions by adding a condition like
    {"type":"Disruption","status":"true","reason":"Node unavailable","message":"Auto-scaling in progress"}.
  2. Disruptions are considered to be orthogonal to readiness for the computation of Application readiness. If a disruption makes a resource unready, the resource's controller MUST communicate this via readiness.

Errors

At any point in its lifetime, a controller may encounter errors when realizing the declared intent of the user. These are communicated using the status.conditions of the resource.

  1. A controller MAY communicate an error by including a condition like
    {"type":"Error","status":"true","reason":"Controller Wedged","message":"SharedInformer sync failing."}.
  2. Errors are considered to be orthogonal to readiness for the computation of Application readiness. If an error makes a resource unready, the resource's controller MUST communicate this via readiness.

Compatibility Requirements

Resources and controllers that wish to be compatible with the Application Controller's status computation need only implement the following.

  1. Any resource that does not implement a .spec contains no declarative intent. It is always ready.
  2. The representative resource MUST implement a .status field.
  3. The .status field MUST indicate readiness.
  4. The .status field MAY indicate availability.
  5. The .status field SHOULD indicate observation by the controller.
  6. The .status field MAY contain conditions.
  7. Errors MAY be reported using error conditions in the status.conditions field.
  8. Disruptions MAY be reported using disruption conditions in the status.conditions field.

Core Resource Adaptations

The core resources do not all conform to the schema above. In the future, we may modify them to do so. For the time being, the following describes how the Application controller will compute the status of these resources.

  1. Deployment
    1. Readiness - .spec.replicas, .status.readyReplicas, and .status.replicas are all equal and greater than zero (sketched after this list).
    2. Availability - .status.conditions contains an Available condition.
    3. Progress - true if .status.conditions contains a Progressing condition.
    4. Observed - when .status.observedGeneration is equal to .metadata.generation.
    5. Errors - Failure conditions are converted to Error conditions.
  2. ReplicaSet and ReplicationController
    1. You should not use these directly in an Application. Deployment should be used instead.
    2. Readiness - .spec.replicas is equal to .status.replicas and .status.readyReplicas, and all are greater than zero.
    3. Availability - .spec.replicas is equal to .status.replicas and .status.availableReplicas, and all are greater than zero.
    4. Progress - true if .spec.replicas is not equal to .status.replicas.
    5. Observed - when .status.observedGeneration is equal to .metadata.generation.
    6. Errors - ReplicaFailure conditions will be converted to Error conditions.
  3. StatefulSet
    1. Readiness - .spec.replicas is equal to .status.replicas and .status.readyReplicas, and all are greater than zero.
    2. Availability - N/A, StatefulSet Pods are available when ready.
    3. Progress - true if .status.currentReplicas is not equal to .status.updatedReplicas or if
      .status.replicas is not equal to .spec.replicas.
    4. Observed - when .status.observedGeneration is equal to .metadata.generation.
    5. Errors - N/A
  4. DaemonSet
    1. Readiness - .status.currentNumberScheduled is equal to .status.desiredNumberScheduled and .status.numberReady and all are greater than 0.
    2. Availability - if .status.currentNumberScheduled is equal to .status.desiredNumberScheduled and .status.numberAvailable and all are greater than 0.
    3. Progress - DaemonSet is sensitive to Node cardinality. It is making progress when status.numberUpdated is greater than 0.
    4. Observed - when .status.observedGeneration is equal to .metadata.generation.
    5. Errors - N/A
  5. Pod
    1. Readiness - the Pod has a Ready condition.
    2. Availability - N/A
    3. Progress - N/A
    4. Observed - N/A
    5. Errors - N/A
  6. Ingress
    1. Readiness - the .status.loadBalancer.ingress list is not empty.
    2. Availability - N/A
    3. Progress - N/A
    4. Observed - N/A
    5. Errors - N/A
  7. Service
    1. Readiness - all non-load-balanced Services are ready when created. All load-balanced Services are ready when .status.loadBalancer.ingress is not empty.
    2. Availability - N/A
    3. Progress - N/A
    4. Observed - N/A
    5. Errors - N/A
  8. PersistentVolumeClaim
    1. Readiness - a PersistentVolumeClaim is Ready when its .status.phase is Bound. This may seem strange as PVC implements a Ready phase, but a PVC is not useful to the application until it is bound, and most errors occur post creation and during binding.
    2. Availability - N/A
    3. Progress - N/A
    4. Observed - N/A
    5. Errors - N/A
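
As an illustration of the Deployment adaptation above, a sketch using the typed API (the helper is illustrative, not the controller's actual code):

import appsv1 "k8s.io/api/apps/v1"

// deploymentReady implements the Deployment readiness adaptation:
// .spec.replicas, .status.readyReplicas, and .status.replicas are all
// equal and greater than zero.
func deploymentReady(d *appsv1.Deployment) bool {
	if d.Spec.Replicas == nil {
		return false
	}
	want := *d.Spec.Replicas
	return want > 0 && d.Status.ReadyReplicas == want && d.Status.Replicas == want
}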

API

This section contains the proposed modifications to the API. Here, we modify the ApplicationStatus type to report the observed status of its components. Each ComponentStatus contains a link, resource-identifying information, and the ComponentConditions of the components indicated by the Application's .Spec.ComponentKinds and .Spec.Selector. The status of the application's components is used to compute ApplicationConditions that apply to the application as a whole.

// ComponentConditionType indicates the type of a ComponentCondition.
type ComponentConditionType string

const (
	// ComponentAvailable indicates that the component is available. This is used by the controller to indicate that the
	// resource has been ready for a sufficient period of time after creation or mutation that it is unlikely to suffer
	// from infant mortality.
	ComponentAvailable ComponentConditionType = "Available"
	// ComponentReady indicates that the component is ready to use.
	ComponentReady = "Ready"
	// ComponentProgressing is used to communicate that a component is updating (i.e. a Deployment with a rollout in
	// progress or a StatefulSet with a RollingUpdate in progress).
	ComponentProgressing = "Progressing"
	// ComponentDisrupted is used to indicate that the component is affected by a planned or unplanned disruption.
	ComponentDisrupted = "Disrupted"
	// ComponentError is used to communicate an error condition for a component.
	ComponentError = "Error"
)


// ComponentCondition represents the condition of a component of an application. It is modeled after the Conditions
// used for the Kubernetes workload objects.
type ComponentCondition struct {
	// Type is the type of the condition.
	Type ComponentConditionType `json:"type"`
	// Status is one of True, False, Unknown.
	Status v1.ConditionStatus `json:"status"`
	// LastTransitionTime is the last time the condition's status changed.
	LastTransitionTime metav1.Time `json:"lastTransitionTime,omitempty"`
	// Reason is the reason for the condition's last transition.
	Reason string `json:"reason,omitempty"`
	// Message is a human readable message indicating details about the transition.
	Message string `json:"message,omitempty"`
}

// ComponentStatus contains the status of a generic component selected by the Application.
type ComponentStatus struct {
	// Name is the name of the component to which the condition pertains.
	Name string `json:"name"`
	// Namespace is the namespace containing the component to which the condition pertains.
	Namespace string `json:"namespace,omitempty"`
	// GroupKind indicates the group and kind of the component.
	GroupKind metav1.GroupKind `json:"groupKind"`
	// Link is a link to the resource that represents the component.
	Link string `json:"link,omitempty"`
	// Available indicates that the component is available. This is used by the controller to indicate that the
	// resource has been ready for a sufficient period of time after creation or mutation that it is unlikely to suffer
	// from infant mortality.
	Available *bool `json:"available,omitempty"`
	// Ready indicates that the component is ready to use.
	Ready *bool `json:"ready,omitempty"`
	// Progressing is used to communicate that a component is updating (i.e. a Deployment with a rollout in
	// progress or a StatefulSet with a RollingUpdate in progress).
	Progressing *bool `json:"progressing,omitempty"`
	// Observed indicates that the controller for the component's resource has observed the current generation
	// (i.e. .metadata.generation == .status.observedGeneration).
	Observed bool `json:"observed,omitempty"`
	// Conditions contains the conditions that are applicable to the component.
	Conditions []ComponentCondition `json:"conditions,omitempty"`
}

// ApplicationConditionType indicates the type of an application condition.
type ApplicationConditionType string

const (
	// ApplicationError is used to communicate that an Application has an Error condition.
	ApplicationError ApplicationConditionType = "Error"
)

// ApplicationCondition represents the condition of an Application.
type ApplicationCondition struct {
	// Type is the type of the condition.
	Type ApplicationConditionType `json:"type"`
	// Status is one of True, False, Unknown.
	Status v1.ConditionStatus `json:"status"`
	// LastTransitionTime is the last time the condition's status changed.
	LastTransitionTime metav1.Time `json:"lastTransitionTime,omitempty"`
	// Reason is the reason for the condition's last transition.
	Reason string `json:"reason,omitempty"`
	// Message is a human readable message indicating details about the transition.
	Message string `json:"message,omitempty"`
}

type ApplicationStatus struct {
	// Ready indicates that all of an Application's components are ready.
	Ready *bool `json:"ready,omitempty"`
	// Updating is used to communicate that a component is updating (i.e. a Deployment with a rollout in
	// progress or a StatefulSet with a RollingUpdate in progress).
	Updating *bool `json:"updating,omitempty"`
	// Conditions is a list of ApplicationConditions for the application.
	Conditions []ApplicationCondition `json:"conditions,omitempty"`
	// Components is a list of the statuses of the application's components.
	Components []ComponentStatus `json:"components,omitempty"`
}

Application Status Computation

The Application controller will periodically list the applications residing on the API Server. For each Application resource the controller will do the following.

  1. If the spec.assemblyPhase of the Application is pending, the controller will not update the .status of the Application. This allows application installers time to apply all necessary components prior to application status computation.
  2. If the Application is assembled, the controller will do the following for each GroupKind indicated by the
    Application's .spec.componentKinds.
    1. Use the discovery API to get the default version of the resource.
    2. Retrieve the resource from the API Server.
    3. Determine the readiness, availability, progress, and observed status of each component with respect to the core resource adaptations, use this to create a corresponding ComponentStatus, and append it to the .status.components of the Application.
    4. If one or more components of the Application are Progressing, the status.updating field is set to true.
    5. If the Application is assembled and all of its components are ready, the status.ready field is set to true.
    6. If the Application is assembled and any of its components are not ready, the status.ready field is set to false (see the sketch after this list).
  3. The Application controller will then update the Application to communicate the status to the end user.
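
A sketch of steps 4-6 above, assuming the ComponentStatus type from the API section (illustrative, not the controller's actual code):

// aggregateStatus derives the Application-level ready and updating fields
// from the observed component statuses.
func aggregateStatus(components []ComponentStatus) (ready, updating bool) {
	ready = true
	for _, c := range components {
		if c.Progressing != nil && *c.Progressing {
			updating = true // step 4
		}
		if c.Ready == nil || !*c.Ready {
			ready = false // step 6
		}
	}
	return ready, updating // steps 5 and 6
}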

Example

apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: "wordpress-01"
  labels:
    app.kubernetes.io/name: "wordpress-01"
spec:
  type: "wordpress"
  selector:
    matchLabels:
     app.kubernetes.io/name: "wordpress-01"
  componentKinds:
    - group: core
      kind: Service
    - group: apps
      kind: StatefulSet
  version: "4.9.4"
  description: "WordPress is open source software you can use to create a beautiful website, blog, or app."
  icons:
    - src: "https://s.w.org/style/images/about/WordPress-logotype-wmark.png"
      type: "image/png"
      size: "1000x1000"
    - src: "https://s.w.org/style/images/about/WordPress-logotype-standard.png"
      type: "image/png"
      size: "2000x680"
  maintainers:
    - name: Kenneth Owens
      email: [email protected]
  owners:
    - name: Kenneth Owens
      email: [email protected]
  keywords:
   - "cms"
   - "blog"
   - "wordpress"
  links:
    - description: About
      url: "https://wordpress.org/"
    - description: Web Server Dashboard
      url: "https://metrics/internal/wordpress-01/web-app"
    - description: Mysql Dashboard
      url: "https://metrics/internal/wordpress-01/mysql"
status:
  ready: true
  updating: false
  components:
    - name: wordpress-mysql-hvc
      namespace: default
      groupKind: Service
      link: /api/v1/namespaces/default/services/wordpress-mysql-hvc
      ready: true
    - name: wordpress-mysql
      namespace: default
      groupKind: apps/StatefulSet
      link: /apis/apps/v1/namespaces/default/statefulsets/wordpress-mysql
      ready: true
      available: true
    - name: wordpress-webserver-svc
      namespace: default
      groupKind: Service
      link: /api/v1/namespaces/default/services/wordpress-webserver-svc
      ready: true
    - name: wordpress-webserver
      namespace: default
      groupKind: apps/StatefulSet
      link: /apis/apps/v1/namespaces/default/statefulsets/wordpress-webserver
      ready: true
      available: true

Add documentation

This project needs some clear docs on:

  • Why this project exists (current problem and value proposition)
  • Installing into a cluster
  • How to use the CRD
  • Documentation on what properties can be used
  • How someone should use owner references alongside the CRD
  • The difference between an application and an application instance
  • Contributing documentation on how and when to use kubebuilder with the project
  • Contributing docs on updating the installation of this project in a cluster

I imagine there are others. This is just a first pass.

Move to kube-builder 1.0.x

Move the code base to the new release of kube-builder.
This would be useful to start implementing features in AppCRD.

Owners and Maintainers Structure

The structure for maintainers is:

maintainers:
  - name: Kenneth Owens
    email: [email protected]

The structure for owners is:

owners:
  - "Kenneth Owens [email protected]"

We should make these consistent. I didn't create a PR because I wasn't sure of the naming to use internally. I could use the Maintainer struct, but the name isn't a great fit. Any ideas on the naming? One possibility is sketched below.
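
One hedged naming sketch: reuse a single ContactData struct for both fields, so maintainers and owners share one shape (the field set mirrors the existing maintainers structure):

// ContactData holds contact information for a person or group.
type ContactData struct {
	// Name is the descriptive name of the contact.
	Name string `json:"name,omitempty"`
	// Url could typically be a website address.
	Url string `json:"url,omitempty"`
	// Email is the email address of the contact.
	Email string `json:"email,omitempty"`
}

// Descriptor (excerpt) using the shared type for both fields.
type Descriptor struct {
	// Existing fields omitted for brevity.
	Maintainers []ContactData `json:"maintainers,omitempty"`
	Owners      []ContactData `json:"owners,omitempty"`
}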

Primary link attribute?

Many apps provide a primary link (e.g. PhpMyAdmin, WordPress admin, IP address) that is the user's main destination. It looks like this link is typically put into the "infoItems" array. Would it be useful to define an attribute for 'primaryLink', so it is easier to identify and highlight in a UI? A sketch of what that could look like follows.
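
If such an attribute were added, it might be as simple as an extra descriptor field (hypothetical, not part of the current API):

// Descriptor (excerpt) with the proposed attribute.
type Descriptor struct {
	// Existing fields omitted for brevity.

	// PrimaryLink is the main destination a UI should highlight for the
	// application, e.g. an admin dashboard URL.
	PrimaryLink string `json:"primaryLink,omitempty"`
}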

Splitting application definition from instance selection

The current application definition combines the description for a kind of an application with the selector for finding an instance of that kind of application.

We were thinking about how to use the application definition for OLM, which has many of the same metadata needs as in the Application CRD. The fact that both metadata and selector are combined makes it hard to re-use for us.

For reference, here's the example application in the Readme:

apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: "wordpress-01"
  labels:
    app.kubernetes.io/name: "wordpress-01"
    app.kubernetes.io/version: "3"
spec:
  selector:
    matchLabels:
     app.kubernetes.io/name: "wordpress-01"
  componentKinds:
    - group: core
      kind: Service
    - group: apps
      kind: Deployment
    - group: apps
      kind: StatefulSet
  descriptor:
    version: "4.9.4"
    description: "WordPress is open source software you can use to create a beautiful website, blog, or app."
    icons:
      - src: "https://example.com/wordpress.png"
        type: "image/png"
    type: "wordpress"
    maintainers:
      - name: Kenneth Owens
        email: [email protected]
    owners:
      - "Kenneth Owens [email protected]"
    keywords:
      - "cms"
      - "blog"
      - "wordpress"
    links:
      - description: About
        url: "https://wordpress.org/"
      - description: Web Server Dashboard
        url: "https://metrics/internal/wordpress-01/web-app"
      - description: Mysql Dashboard
        url: "https://metrics/internal/wordpress-01/mysql"

I'd like to propose splitting this into two objects: one which represents the application kind, and one which selects a particular instance of that application.

For the example above, this would look like:

Description of the application type (note the missing selector and the name change):

apiVersion: app.k8s.io/v1beta1
kind: ApplicationMetadata
metadata:
  name: "wordpress"
  labels:
    app.kubernetes.io/name: "wordpress"
    app.kubernetes.io/version: "3"
spec:
  version: "4.9.4"
  description: "WordPress is open source software you can use to create a beautiful website, blog, or app."
  icons:
  - src: "https://example.com/wordpress.png"
    type: "image/png"
  type: "wordpress"
  maintainers:
  - name: Kenneth Owens
    email: [email protected]
  owners:
  - "Kenneth Owens [email protected]"
  keywords:
  - "cms"
  - "blog"
  - "wordpress"
  links:
  - description: About
    url: "https://wordpress.org/"
  - description: Web Server Dashboard
    url: "https://metrics/internal/wordpress-01/web-app"
  - description: Mysql Dashboard
    url: "https://metrics/internal/wordpress-01/mysql"

and a selector for a specific instance of that application type:

apiVersion: app.k8s.io/v1beta1
kind: ApplicationSelector
metadata:
  name: "wordpress-01"
  labels:
    app.kubernetes.io/metadata: "wordpress"
spec:
  metadata: "wordpress"
  selector:
    matchLabels:
     app.kubernetes.io/name: "wordpress-01"
  componentKinds:
    - group: core
      kind: Service
    - group: apps
      kind: Deployment
    - group: apps
      kind: StatefulSet
status:
  assemblyPhase: "Pending"

I can then stamp out multiple instances (say, a production and a staging) without replicating all of the metadata between them.

Application privacy policy

There is a discussion on Helm charts about whether applications can collect analytics or not. It dawned on me that maybe there should be a field for setting a link to a privacy policy document in the application CRD. Is the Links field enough for that?

ref: helm/charts#4697 (comment)

cc: @mattfarina

Status Aggregation for Application CRD

Background

The Application CRD has a list of GroupKinds and a LabelSelector to logically group live objects in the cluster. It also provides a way to attach descriptive metadata to the logical grouping.

What does this solve

This proposes to enhance the Application CRD controller to provide status tracking for the objects that match the selectors. This dramatically simplifies the status tracking requirements for a client that uses the Application CRD.

Proposal

The proposal is to add a reconciler that creates watches for the GroupKinds that the application CRD refers to. The reconciler then inspects the matching objects and updates the aggregate status in the Application CRD.

Please review the attached proposal: #77

Application Garbage Collection

There are at least two ways to deal with garbage collection of application resources. The first, and simplest, approach is to inject an owner reference into each component of the application using the Name, APIVersion, and Kind of the Application CRD. This would work as follows: there is no need to worry about injecting OwnerReferences into intermediate or child objects (e.g. if an OwnerReference is injected into a Deployment, chained garbage collection will delete its ReplicaSets and their Pods); when the Application is deleted, the Garbage Collector will delete all orphaned children. While this has the benefit of simplicity, it has a few drawbacks.

  1. The Application object must be created prior to the creation of any owned children. Otherwise, the Garbage Collector may delete the children prior to creation of the Application.
  2. Adopting existing workloads
  3. All tools would have to implement this uniformly and correctly.

Another approach would be to construct a graph-based controller using the Application's Selector and appropriate labels on child objects to indicate ownership. This controller would watch for the creation of all objects in the system and establish ownership by automatically injecting OwnerReferences. As above, when the Application object is deleted, the Garbage Collector will delete its children. This removes the need for tools to inject OwnerReferences, and it ensures that the Garbage Collector will not delete an Application's children prior to creation of the Application (i.e. there need not be a serialized ordering between observation of Application creation and the creation of its children), but it has drawbacks as well.

  1. The graph-based Application controller is comparable in complexity to the Garbage Collector itself.
  2. It still requires that an Application and its children are appropriately labeled (i.e. it does not fully mitigate the potential for erroneous behavior due to user errors).
  3. It will require memory and CPU resources on par with the Garbage Collector, and it will require that the Application controller watch (potentially) nearly all creation and deletion events in the system.

Another option is to begin with tool-injected OwnerReferences and, when the full requirements of garbage collection become well understood after some period of production usage, automate the process by adding OwnerReference marking in the Application controller. The controller can be implemented in a way that is strictly backward compatible with user-injected OwnerReferences; in fact, in order to function properly, it must be compatible with them. The injection itself is sketched below.
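
For the first approach, the injection step is small; a sketch using apimachinery types (the helper is illustrative, not repo code):

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// injectOwnerReference appends an OwnerReference pointing at the given
// Application to a component object, so that deleting the Application
// causes the Garbage Collector to delete the component.
func injectOwnerReference(component metav1.Object, app *Application) {
	ref := metav1.OwnerReference{
		APIVersion: "app.k8s.io/v1beta1",
		Kind:       "Application",
		Name:       app.Name,
		UID:        app.UID,
	}
	component.SetOwnerReferences(append(component.GetOwnerReferences(), ref))
}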

Build of docker image fails - application_types.go:21:2: cannot find package

I think that the doc should be updated/improved to mention which command should be executed first to install the missing vendor packages, as I get this error during the docker build:

imagebuilder -t my-image -f Dockerfile.controller .
--> Image golang:1.9.3 was not found, pulling ...
--> Pulled 0/8 layers, 1% complete
--> Pulled 1/8 layers, 16% complete
--> Pulled 2/8 layers, 29% complete
--> Pulled 2/8 layers, 46% complete
--> Pulled 3/8 layers, 55% complete
--> Pulled 4/8 layers, 58% complete
--> Pulled 5/8 layers, 71% complete
--> Pulled 6/8 layers, 84% complete
--> Pulled 7/8 layers, 90% complete
--> Pulled 7/8 layers, 98% complete
--> Pulled 8/8 layers, 100% complete
--> Extracting
--> FROM golang:1.9.3 as builder
--> ENV TEST_ASSET_DIR /usr/local/bin
--> ENV TEST_ASSET_KUBECTL $TEST_ASSET_DIR/kubectl
--> ENV TEST_ASSET_KUBE_APISERVER $TEST_ASSET_DIR/kube-apiserver
--> ENV TEST_ASSET_ETCD $TEST_ASSET_DIR/etcd
--> ENV TEST_ASSET_URL https://storage.googleapis.com/k8s-c10s-test-binaries
--> RUN curl ${TEST_ASSET_URL}/etcd-Linux-x86_64 --output $TEST_ASSET_ETCD
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 16.9M  100 16.9M    0     0  2198k      0  0:00:07  0:00:07 --:--:-- 2340k
--> RUN curl ${TEST_ASSET_URL}/kube-apiserver-Linux-x86_64 --output $TEST_ASSET_KUBE_APISERVER
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  200M  100  200M    0     0  2438k      0  0:01:24  0:01:24 --:--:-- 2268k
--> RUN curl https://storage.googleapis.com/kubernetes-release/release/v1.9.2/bin/linux/amd64/kubectl --output $TEST_ASSET_KUBECTL
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 64.2M  100 64.2M    0     0  2507k      0  0:00:26  0:00:26 --:--:-- 2561k
--> RUN chmod +x $TEST_ASSET_ETCD
--> RUN chmod +x $TEST_ASSET_KUBE_APISERVER
--> RUN chmod +x $TEST_ASSET_KUBECTL
--> WORKDIR /go/src/github.com/kubernetes-sigs/application
--> COPY pkg/    pkg/
--> COPY cmd/    cmd/
--> COPY vendor/ vendor/
--> RUN go build -a -o controller-manager ./cmd/controller-manager/main.go
pkg/apis/app/v1alpha1/application_types.go:21:2: cannot find package "k8s.io/kubernetes/pkg/apis/core" in any of:
        /go/src/github.com/kubernetes-sigs/application/vendor/k8s.io/kubernetes/pkg/apis/core (vendor tree)
        /usr/local/go/src/k8s.io/kubernetes/pkg/apis/core (from $GOROOT)
        /go/src/k8s.io/kubernetes/pkg/apis/core (from $GOPATH)
running 'go build -a -o controller-manager ./cmd/controller-manager/main.go' failed with exit code 1

After doing a dep ensure, I get another error

...
--> RUN go build -a -o controller-manager ./cmd/controller-manager/main.go
# github.com/kubernetes-sigs/application/pkg/inject
pkg/inject/inject.go:17:14: undefined: "github.com/kubernetes-sigs/application/vendor/github.com/kubernetes-sigs/kubebuilder/pkg/inject/args".Injector
running 'go build -a -o controller-manager ./cmd/controller-manager/main.go' failed with exit code 2

Remark: build executed against the master branch of this repo.

Owner definition in Application CRD does not agree with README, nor with samples/application.yaml

Note that the README states the type for maintainers and owners is the same:

  • spec.descriptor.maintainers | []ContactData
  • spec.descriptor.owners | []ContactData

However, in install.yaml they are dissimilar:

                maintainers:
                  items:
                    properties:
                      email:
                        type: string
                      name:
                        type: string
                      url:
                        type: string
                    type: object
                  type: array

and

                owners:
                  items:
                    type: string
                  type: array
                type:
                  type: string

Moreover, samples/application.yaml assumes maintainers and owners are the same type:

    maintainers:
      - name: Kenneth Owens
        email: [email protected]
    owners:
      - name: Kenneth Owens
        email: [email protected]

Interestingly, samples/application.yaml successfully creates an application object on Kubernetes 1.8.0,
but it fails on Kubernetes 1.9.1 with this error:
spec.descriptor.owners in body must be of type string: "object"

I didn't try other versions. But it's apparent the definition in install.yaml should be the same. I fixed this in my local repo and would happily create a PR, but my company has not yet given me approval to sign the Contributor License Agreement (CLA) for this project.

Referencing Secrets and other resources

This issue captures one of the threads on #4, where it was proposed to allow the Application to reference values/properties from a Secret, ConfigMap, Ingress, or Service.

Some quotes from the thread:
From @deustis: "[...]On the topic of credentials, we might want to establish a pattern for referencing Secrets. A very common getting started experience for an off-the-shelf app is trying to locate the password for an admin dashboard[...]"

@huyhg proposed a simple templating system, where the template itself would live in spec and the interpolated template would live in status.

@prydonius prefers enumerating reference types explicitly (Secret, ConfigMap, Ingress, Service)

@deustis proposed a templating system that would live in spec, but the evaluation would be the client's responsibility (like @huyhg's proposal, but with no interpolation into spec). Also, the individual fields of the referenced objects would be selected using jsonpath.

@konryd pointed out that secret values are typically evaluated on startup.

How will objects be selected

I am not clear on which values from within the Application CRD will be used to select the objects that will define my Application within the cluster.

My assumption is that values defined in the Application CRD will be used to query cluster objects which will then define my Application.

Is the selector the single value spec.selector, or is the selection also limited by spec.componentKinds?

Is the selection done by the Application controller and/or by the third-party tool? Would the tool provider get the selector from the CRD and then query for the objects? A sketch of my assumption follows.
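
My assumption, sketched with the dynamic client: componentKinds scopes which GroupKinds are listed, and spec.selector filters within each (gvrFor, resolving a GroupKind to a GroupVersionResource via discovery, is passed in as an assumed helper):

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// listComponents lists, per componentKind, the objects matching the
// Application's selector in the given namespace.
func listComponents(ctx context.Context, client dynamic.Interface, namespace string,
	kinds []metav1.GroupKind, selector *metav1.LabelSelector,
	gvrFor func(metav1.GroupKind) schema.GroupVersionResource) ([]unstructured.Unstructured, error) {
	sel, err := metav1.LabelSelectorAsSelector(selector)
	if err != nil {
		return nil, err
	}
	var out []unstructured.Unstructured
	for _, gk := range kinds {
		list, err := client.Resource(gvrFor(gk)).Namespace(namespace).
			List(ctx, metav1.ListOptions{LabelSelector: sel.String()})
		if err != nil {
			return nil, err
		}
		out = append(out, list.Items...)
	}
	return out, nil
}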

[Suggestion] Add EnvVarFromSource as a new InfoItemSourceType

Description

The InfoItemSource allows identifying the type of the information we want to capture using an InfoItem. This information is defined using the InfoItemSourceType.

Unfortunately, the existing list of consts, as defined hereafter, does not include the Kubernetes type EnvVarFrom [1].

Existing Source's types

type InfoItemSource struct {
	// Type of source.
	Type InfoItemSourceType `json:"type,omitempty"`
...
}

// InfoItemSourceType is a string
type InfoItemSourceType string

// Constants for info type
const (
	SecretKeyRefInfoItemSourceType    InfoItemSourceType = "SecretKeyRef"
	ConfigMapKeyRefInfoItemSourceType InfoItemSourceType = "ConfigMapKeyRef"
	ServiceRefInfoItemSourceType      InfoItemSourceType = "ServiceRef"
	IngressRefInfoItemSourceType      InfoItemSourceType = "IngressRef"
)

Suggestion

We propose to extend the list of InfoItemSourceType consts in order to include the EnvVarFromSource:

const (
	EnvVarFromSourceRefInfoItemSourceType    InfoItemSourceType = "EnvVarFromSourceRef" // Refer to either a Secret or ConfigMap
)

[1] : https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/#envvarsource-v1-core

WDYT @ant31 @barney-s @mattfarina ?

Add CI

We need to add CI that runs against master and pull requests.

spec.componentKinds not sufficiently described

In README.md, spec.componentKinds lacks a description for the 'group' key. This seems to be either empty, core, or apps in the samples. Please describe it.
Also, the GroupKind is written as a link, but the link target does not exist.

[Suggestion] Add EnvVar as a new type to be used with InfoItem

Suggestion

The InfoItem allows defining a human-readable key/value pair containing important information about how to access the Application.

One of the most important pieces of information, represented by the K8s EnvVar [1], is currently missing.

Such EnvVars are used by many Java, Node.js, ... applications when they are deployed, as they allow, for example, configuring a datasource to access a database (e.g. database name, username, password, ...), a client accessing a messaging broker, the JVM, ...

By adopting a new InfoItemSourceType and InfoItemSource, we could define such a list of key/value pairs using an Envs array.

Proposition

type InfoItemSource struct {
	// Type of source.
	Type InfoItemSourceType `json:"type,omitempty"`
        ...
	Envs []Env `json:"envs,omitempty"`
}

type Env struct {
	Name  string `json:"name,omitempty"`
	Value string `json:"value,omitempty"`
}

// Constants for info type
const (
	EnvVarInfoItemSourceType InfoItemSourceType = "EnvVar"
)

[1] : https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/#envvar-v1-core

WDYT @ant31 @barney-s @mattfarina ?

Ownership between custom resources and Applications

Issue #7 is about putting an OwnerReference onto the components that comprise an Application. I would like to use this issue to open some discussion on the ownership between custom resources and Applications. Here's a scenario: suppose there's a CRD for a type of application, which needs a set of Kubernetes resources to run, e.g., a StatefulSet, some Services, and also some other resources. The corresponding Kubernetes operator watches instances of the CRD kind and creates the necessary resources for each newly created custom resource instance. When creating the resources, the operator adds an OwnerReference to each of the resources referencing the instance. The reason for this is that when the user deletes the custom resource object, the resources get garbage collected.

Additionally, to allow components of each application instance to be logically grouped, an Application instance is also created by the operator. So the Application instance is created as a result of the operator seeing the custom resource instance, and it probably makes sense to have the custom resource object "own" the Application instance. However, it might also make sense (although debatable, I guess) the other way around, where the custom resource instance is owned by the Application instance as part of the logical group of components supporting the application. The former is straightforward to realize, but the latter faces the same problem #7 discussed.

I would like to hear others' thoughts on this.

Application should comprise components across namespaces

Proposed: add namespace qualification to 'componentKinds'

Discussion:

It is abundantly clear that cloud native applications may be composed of resources from multiple namespaces.

Scenarios:

  • One scenario is that an Application corresponds exactly to a set of resources that are deployed together, and deployed to the same namespace. This scenario is most easily described as a single team, deploying multiple resources with a single helm chart, all to the same namespace.

  • Another scenario is that an Application corresponds to a set of resources, some of which are deployed together (e.g. single team, as above), as well as others that are deployed separately (perhaps shared services) by other teams. We should not assume all teams deploy to the same namespace.

For example, my stock-trader application is comprised of these components:

  1. stocktrader UI (ns=stocktrader)
  2. trader (ns=stocktrader)
  3. quote (ns=stocktrader)
  4. portfolio (ns=stocktrader)
  5. loyalty (ns=loyalty, a shared service)
  6. messaging (ns=services, a shared service)

The preceding list makes the point that an application's components - the very ones the user wants to regard as being part of their application - may be deployed as part of the principal application (i.e. stocktrader, in this example), as well as include services deployed by other teams (e.g. loyalty and messaging).

Describing an application whose components are in various namespaces requires a way to specify that fact. I propose we add namespace to componentKinds, e.g., like this:

componentKinds:
  - group: deployments-stocktrader
    kind: Deployment
    namespace: stocktrader
  - group: deployments-loyalty
    kind: Deployment
    namespace: loyalty
  - group: deployments-services
    kind: Deployment
    namespace: services

Application Dependencies

The original KEP proposed the following notion of application Dependencies:

// ApplicationSpec defines the specification for an Application.
type ApplicationSpec struct {
	// Existing fields omitted for brevity.

	// Dependencies is a list of Applications on which this Application depends.
	Dependencies []string
}

// ApplicationStatus defines the observed state of an Application.
type ApplicationStatus struct {
	// ObservedGeneration is used by the Application Controller to report the last Generation of an Application
	// that it has observed.
	ObservedGeneration int64 `json:"observedGeneration,omitempty"`

	// Installed is a list of the currently installed components and dependencies for an Application as
	// observed by the controller.
	Installed []string `json:"installed,omitempty"`
}

// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// Application
// +k8s:openapi-gen=true
// +resource:path=applications
// The Application object acts as an aggregator for the components that comprise an Application. Its
// Spec.ComponentGroupKinds indicate the GroupKinds of the components that comprise the Application. Its Spec.Selector
// is used to list and watch those components. All components of an Application should be labeled such that the
// Application's Spec.Selector matches.
type Application struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	// The specification object for the Application.
	Spec ApplicationSpec `json:"spec,omitempty"`
	// The status object for the Application.
	Status ApplicationStatus `json:"status,omitempty"`
}

The notion of Dependencies is necessarily namespace bound by this definition. The application controller would simply track the components of the installed application through their life-cycle and update the Status.Installed list when a dependent component is created.

  1. Is being bound to the same namespace too restrictive?
  2. Is this notion of dependencies (admittedly a weak one) useful? That is, should dependency management be in the purview of a package manager rather than the installed application itself? For instance, in Debian-based Linux distros, apt and dpkg manage the dependencies of an application and ensure that shared libraries, as an example, that an application depends on are installed along with the application. The application simply expects its dependencies to be available at runtime.

Create a SECURITY_CONTACTS file.

As per the email sent to kubernetes-dev[1], please create a SECURITY_CONTACTS
file.

The template for the file can be found in the kubernetes-template repository[2].
A description for the file is in the steering-committee docs[3], you might need
to search that page for "Security Contacts".

Please feel free to ping me on the PR when you make it, otherwise I will see when
you close this issue. :)

Thanks so much, let me know if you have any questions.

(This issue was generated from a tool, apologies for any weirdness.)

[1] https://groups.google.com/forum/#!topic/kubernetes-dev/codeiIoQ6QE
[2] https://github.com/kubernetes/kubernetes-template-project/blob/master/SECURITY_CONTACTS
[3] https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance-template-short.md

Does the Application operator support Components?

I'm a bit confused about how the Application operator populates a list of components from an Application.

This code is responsible for getting the components of a resource:

components := rsrc.Components()
for _, component := range components {

but if I look at the code as developed,

// Components returns components for this resource
func (a *Application) Components() []component.Component {
	c := []component.Component{}
	c = append(c, component.Component{
		Handle:   a,
		Name:     "app",
		CR:       a,
		OwnerRef: a.OwnerRef(),
	})
	return c
}

a Component is appended to the list, and the Component is always created using the current application.

Questions :

  • Is the plan for the Application CRD to also contain Components which correspond to other Applications (see the composition proposition as defined within the original KEP)?
  • If the Application CRD will support such a composition, where within the spec do you plan to define such a components list, given that the purpose of the spec's ComponentGroupKinds is to group the k8s resources created, or to be created, for an application?

@ant31 @mattfarina @barney-s

[Proposition] Extend the componentGroupKind to define a Component's kind

The ApplicationSpec type includes the field spec.componentKinds to group, under a name, related Kubernetes resources such as Service, StatefulSet, ConfigMap, Secret, ..., describing globally what the application is composed of.

// ApplicationSpec defines the specification for an Application.
type ApplicationSpec struct {
	// ComponentGroupKinds is a list of Kinds for Application's components (e.g. Deployments, Pods, Services, CRDs). It
	// can be used in conjunction with the Application's Selector to list or watch the Applications components.
	ComponentGroupKinds []metav1.GroupKind `json:"componentKinds,omitempty"`

If we use this Application custom resource to install/configure the environment on Kubernetes and to deploy the needed resources using a controller or operator, then it is also important to have a specialised type able to:

  • Describe what the component is, or the framework, runtime, ... it encapsulates
  • Packaging mode: jar, war, ...
  • Type: application, job, ...
  • Mode to be used to install it
  • Env variables to be configured, or parameters to be passed, to configure the runtime/JVM

Example: as a user, I would like to install a Spring Boot application using version 1.5.15 of the framework and would like to access it externally using a route. The default port of the service is 8080. To convert this requirement into a component's type, the following object could be created:

apiVersion: component.k8s.io/v1alpha1
kind: Component
metadata:
  name: my-spring-boot
spec:
  deployment: innerloop
  packaging: jar
  type: application
  runtime: spring-boot
  version: 1.5.15
  exposeService: true

The advantage of having such a Component custom resource is that a UI or CLI could display the information in a more readable way:

kubectl application describe

NAME                Category      Type               Version       Source       Visible Externally
payment-frontend    runtime       nodejs             0.8           local        yes
payment-backend     runtime       spring-boot        1.5.15        binary       no
payment-database    service       db-postgresql-apb  dev                        no

Proposed Component type:

type ComponentSpec struct {
	// Name is a human-readable string describing, from a business perspective, what this component relates to
	// Example: payment-frontend, retail-backend
	Name string
	// PackagingMode refers to the type of archive file used to package the code
	// Example: jar, war, ...
	PackagingMode string
	// Type relates to how the component is installed: as a pod, job, statefulset, ...
	Type string
	// DeploymentMode indicates the strategy adopted to install the resources into a namespace
	// and then create a pod. Two strategies are currently supported, inner and outer loop,
	// where the outer loop refers to building the code and packaging the application into a container image,
	// while the inner loop installs a pod running a supervisord daemon used to trigger actions such as: assemble, run, ...
	DeploymentMode string `json:"deployment,omitempty"`
	// Runtime is the framework used to start the application within the container
	// It corresponds to one of the following values: spring-boot, vertx, thorntail, nodejs
	Runtime string `json:"runtime,omitempty"`
	// ExposeService indicates whether to expose the service outside of the cluster as a route
	ExposeService bool `json:"exposeService,omitempty"`
	// Cpu is the CPU to be assigned to the pods running the application
	Cpu string `json:"cpu,omitempty"`
	// Memory is the memory to be assigned to the pods running the application
	Memory string `json:"memory,omitempty"`
	// Port is the HTTP/TCP port number used within the pod by the runtime
	Port int32 `json:"port,omitempty"`
	// Storage specifies the capacity and mode (ReadWrite) of the volume to be mounted for the pod
	Storage Storage `json:"storage,omitempty"`
	// Images is the list of images created, according to the DeploymentMode, to install the loop
	Images []Image `json:"image,omitempty"`
	// Envs is an array of env variables containing extra/additional info used to configure the runtime
	Envs []Env `json:"env,omitempty"`
	// Services is the list of services consumed by the runtime and created as service instances from a Service Catalog
	Services []Service
	// Features represent capabilities that must be installed for the component to operate,
	// e.g. a Prometheus backend to collect metrics, an OpenTracing datastore to centralize
	// the runtime's traces/logs, a service mesh, ...
	Features []Feature
}

Resource, Component or Object ?

The readme makes several references to Application resources or components in the cluster:

  • The Application CRD provides a way for you to aggregate individual Kubernetes components
  • allows for the aggregation and display of all the components in the Application.
  • The selector is used to match resources that belong to the Application.
  • All of the application's resources should be labeled such that they match this selector.
  • Users should use the app.kubernetes.io/name label on all components of the Application

In the Kubernetes documentation, the only reference to components is related to the binary components needed to deliver a functioning Kubernetes cluster.

In the Kubernetes documentation, a resource is defined as "A resource is an endpoint in the Kubernetes API that stores a collection of API objects of a certain kind."

I would like to propose that we use Kubernetes terminology as much as possible.

Should the readme examples listed above for components and resources instead be documented as objects (Kubernetes Objects)?

assemblyPhase in spec or status?

Why is the assemblyPhase part of the spec of the application CRD and not the status? It's not something that a user would set, right?

Nest Application metadata fields

I'd separate the metadata fields about an application, like description, keywords, ..., from the fields holding some logic, like matchLabels and componentKinds.

First, it makes it clearer for users and tool authors to spot what is informative only and what modifies the behavior of the controller.
Second, the descriptor spec could more easily be reused by other projects and integrated into different CRDs.

related comment (from the kinflate initial PR): kubernetes/kubernetes#52570 (comment)

  type Application struct {
  	// Metadata about the app.
  	Descriptor Descriptor
  	...
  }

  // Descriptor is a convenience struct gathering metadata to support
  // Manifest search, browse and updates.
  type Descriptor struct {
  	Name        string
  	Version     string
  	Description string
  	Icon        string
  	Keywords    []string
  	Homepage    string
  	...
  }
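
A sketch of what the nested layout could look like in a manifest (apiVersion and field names are illustrative, assuming the descriptor proposal above):

apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: wordpress-01
spec:
  descriptor:            # informative metadata only
    version: "4.9.4"
    description: "WordPress is a content management system"
    keywords: ["cms", "blog"]
  selector:              # behavior-affecting fields stay alongside
    matchLabels:
      app.kubernetes.io/name: wordpress-01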

ingressRef does not support HTTPS protocol

Currently ingressRef supports only the HTTP protocol. I would like it to support HTTPS as well.

Current state:

 - name: WordPress site address
   type: Reference
   valueFrom:
     ingressRef:
       name: $APP_INSTANCE_NAME-wordpress-ingress

Preferred state:

 - name: WordPress site address
   type: Reference
   valueFrom:
     ingressRef:
       name: $APP_INSTANCE_NAME-wordpress-ingress
       protocol: HTTPS

I think the same may be needed for serviceRef as well.
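
Until a protocol field exists, a consumer could infer the scheme from the Ingress itself. A minimal sketch in Go, assuming the extensions/v1beta1 Ingress API and that any TLS entry implies HTTPS:

import extv1beta1 "k8s.io/api/extensions/v1beta1"

// schemeFor guesses the URL scheme for an Ingress referenced by an
// ingressRef: if the Ingress declares any TLS configuration, assume HTTPS.
func schemeFor(ing *extv1beta1.Ingress) string {
	if len(ing.Spec.TLS) > 0 {
		return "https"
	}
	return "http"
}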

matchLabels is insufficient for legacy uses. Can we support matchExpressions too?

This syntax requires me to label all resources that belong to an app with the same label (both key and value). This might be hard for external pieces that I can't easily change.

spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: "wordpress-01"

Why can't we also support this:

spec:
  selector:
    matchExpressions:
      - key: app
        operator: In
        values: ["foo", "bar", "baz"]
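
Supporting this on the controller side should be mechanical, since the standard apimachinery helper already understands both forms. A minimal sketch:

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// selectorFor converts an Application's spec.selector into a labels.Selector.
// metav1.LabelSelectorAsSelector handles matchLabels and matchExpressions
// uniformly, so a controller built on it supports both for free.
func selectorFor(ls *metav1.LabelSelector) (labels.Selector, error) {
	return metav1.LabelSelectorAsSelector(ls)
}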

Enrich the code of the application_controller.go with k8s labels

The existing code of application_controller.go defines the recommended k8s labels [1] here in the code:

https://github.com/kubernetes-sigs/application/blob/master/pkg/controller/application/application_controller.go#L40-L47

const (
	NameLabelKey      = "app.kubernetes.io/name"
	VersionLabelKey   = "app.kubernetes.io/version"
	InstanceLabelKey  = "app.kubernetes.io/instance"
	PartOfLabelKey    = "app.kubernetes.io/part-of"
	ComponentLabelKey = "app.kubernetes.io/component"
	ManagedByLabelKey = "app.kubernetes.io/managed-by"
)

to decorate created k8s resources, but it does not use them within the application_controller code to label the Deployment resource.

I propose adding them where the labels are applied [2] in the code, to support their usage and to explain the value of each label. A hypothetical sketch follows the references below.

[1] https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/#labels
[2] https://github.com/kubernetes-sigs/application/blob/master/pkg/controller/application/application_controller.go#L133
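
A hypothetical helper sketching what applying these labels could look like (the function and its parameters are illustrative, not part of the controller):

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// addRecommendedLabels stamps the recommended labels onto an object's
// metadata before the controller creates or updates it.
func addRecommendedLabels(obj *metav1.ObjectMeta, appName, version string) {
	if obj.Labels == nil {
		obj.Labels = map[string]string{}
	}
	obj.Labels[NameLabelKey] = appName
	obj.Labels[VersionLabelKey] = version
	obj.Labels[ManagedByLabelKey] = "application-controller"
}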

To define ApplicationType as a new CRD

Looking at the Application definition: how do we manage common metadata for the same type of application? For example, if we have a lot of "SpringBoot" applications, some of the metadata will be the same; it will be duplicated in each application of that type and will be hard to maintain.

Is Operator Framework a complement or competitor technology?

Red Hat and the Kubernetes open source community today share the Operator Framework – an open source toolkit designed to manage Kubernetes native applications, called Operators, in a more effective, automated, and scalable way.

  • Operator SDK: Enables developers to build Operators based on their expertise without requiring knowledge of Kubernetes API complexities.
  • Operator Lifecycle Management: Oversees installation, updates, and management of the lifecycle of all of the Operators (and their associated services) running across a Kubernetes cluster.
  • Operator Metering (joining in the coming months): Enables usage reporting for Operators that provide specialized services.
