
autoscaler's Introduction

Kubernetes (K8s)

Kubernetes, also known as K8s, is an open source system for managing containerized applications across multiple hosts. It provides basic mechanisms for the deployment, maintenance, and scaling of applications.

Kubernetes builds upon a decade and a half of experience at Google running production workloads at scale using a system called Borg, combined with best-of-breed ideas and practices from the community.

Kubernetes is hosted by the Cloud Native Computing Foundation (CNCF). If your company wants to help shape the evolution of technologies that are container-packaged, dynamically scheduled, and microservices-oriented, consider joining the CNCF. For details about who's involved and how Kubernetes plays a role, read the CNCF announcement.


To start using K8s

See our documentation on kubernetes.io.

Take a free course on Scalable Microservices with Kubernetes.

To use Kubernetes code as a library in other applications, see the list of published components. Use of the k8s.io/kubernetes module or k8s.io/kubernetes/... packages as libraries is not supported.

To start developing K8s

The community repository hosts all information about building Kubernetes from source, how to contribute code and documentation, who to contact about what, etc.

If you want to build Kubernetes right away, there are two options:

You have a working Go environment:

git clone https://github.com/kubernetes/kubernetes
cd kubernetes
make

You have a working Docker environment:

git clone https://github.com/kubernetes/kubernetes
cd kubernetes
make quick-release

For the full story, head over to the developer's documentation.

Support

If you need support, start with the troubleshooting guide, and work your way through the process that we've outlined.

That said, if you have questions, reach out to us one way or another.

Community Meetings

The Calendar has the list of all the meetings in the Kubernetes community in a single location.

Adopters

The User Case Studies website has real-world use cases of organizations across industries that are deploying/migrating to Kubernetes.

Governance

The Kubernetes project is governed by a framework of principles, values, policies and processes to help our community and constituents towards our shared goals.

The Kubernetes Community is the launching point for learning about how we organize ourselves.

The Kubernetes Steering community repo is used by the Kubernetes Steering Committee, which oversees governance of the Kubernetes project.

Roadmap

The Kubernetes Enhancements repo provides information about Kubernetes releases, as well as feature tracking and backlogs.

autoscaler's People

Contributors

aleksandra-malinowska, bigdarkclown, bpineau, bskiba, dbenque, elmiko, feiskyer, gjtempleton, jayantjain93, jbartosik, k8s-ci-robot, kawych, kgolab, kisieland, krzysied, krzysztof-jastrzebski, losipiuk, maciekpytel, marwanad, mwielgus, olagacek, piosz, schylek, shubham82, tghartland, tkulczynski, towca, voelzmo, x13n, yaroslava-serdiuk

autoscaler's Issues

AWS Cluster Autoscaler Permissions

Using v0.5.4 of the aws-cluster-autoscaler, we're getting this error:

E0609 23:20:59.162974       1 static_autoscaler.go:108] Failed to update node registry: Unable to get first autoscaling.Group for node-us-west-2a.dev.clusters.mydomain.io

It sure looks like a permission problem... But per the instructions, I have the following policy on my instance role named nodes.dev.clusters.mydomain.io:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup"
            ],
            "Resource": "*"
        }
    ]
}

Without this policy attached, I get a different error:

E0609 23:05:48.475214       1 static_autoscaler.go:108] Failed to update node registry: AccessDenied: User: arn:aws:sts::11111111111:assumed-role/nodes.dev.clusters.mydomain.io/i-0472257b3f8d4ec43 is not authorized to perform: autoscaling:DescribeAutoScalingGroups
	status code: 403, request id: 2cf17af0-4d68-11e7-825c-73c99354b20d

So we're thinking that we have the necessary permissions.

For reference here's our execution config:

./cluster-autoscaler
--cloud-provider=aws
--nodes=1:10:node-us-west-2a.dev.clusters.mydomain.io
--nodes=1:10:node-us-west-2b.dev.clusters.mydomain.io
--nodes=1:10:node-us-west-2c.dev.clusters.mydomain.io
--scale-down-delay=10m
--skip-nodes-with-local-storage=false
--skip-nodes-with-system-pods=true
--v=4

Any ideas on what to do?
Is there any strategy for debugging this?

Allow setting 'minimum headroom' for autoscaling

I want to be able to say 'if the cluster is more than X% full, scale up until it is not'. This is important in very dynamic, spiky clusters - we run a Kubernetes cluster for a university, and a large spike of pods starts up when classes begin. If we wait for those pods to fail scheduling before adding more nodes, users get a suboptimal experience (since it can take several minutes for a new node to spin up).

One problem would be defining what 'full' is, in a way that doesn't duplicate what's in the scheduler.
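A pattern sometimes used to keep this kind of headroom (not part of this proposal, and assuming a cluster with pod priority/preemption available) is a low-priority placeholder deployment whose pause pods reserve capacity and get preempted as soon as real workloads arrive; the re-pending placeholders then trigger a scale-up ahead of demand. A minimal sketch, with illustrative names and sizes:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning          # hypothetical name
value: -10                        # lower than the default 0, so any real pod preempts these
globalDefault: false
description: "Placeholder pods used only to reserve scale-up headroom"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: headroom-placeholder      # hypothetical name
spec:
  replicas: 3                     # tune to the amount of headroom wanted
  selector:
    matchLabels:
      app: headroom-placeholder
  template:
    metadata:
      labels:
        app: headroom-placeholder
    spec:
      priorityClassName: overprovisioning
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9
        resources:
          requests:
            cpu: "500m"           # each replica reserves this much capacity
            memory: "512Mi"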

Node deleted but still streaming DeletingNode events

I installed a Kubernetes cluster on AWS and CoreOS hosts with Tack, and the cluster-autoscaler is included as an add-on. This is the yaml they use: https://github.com/kz8s/tack/blob/master/addons/autoscaler/cluster-autoscaler.yml (uses v0.5.2)

After a bit of time with a successful but empty cluster, the autoscaler kicked in and killed 1 of the 3 workers.

The node is no longer shown when doing kubectl get nodes.

The problem is, the worker node is stuck as DeletingNode which can be seen from thousands of events along the lines of:

Deleting Node ip-10-56-0-138.ec2.internal because it's not present according to cloud provider

Example:

$ kubectl get events
LASTSEEN   FIRSTSEEN   COUNT     NAME                          KIND      SUBOBJECT   TYPE      REASON         SOURCE              MESSAGE
3s         6h          4780      ip-10-56-0-138.ec2.internal   Node                  Normal    DeletingNode   controllermanager   Node ip-10-56-0-138.ec2.internal event: Deleting Node ip-10-56-0-138.ec2.internal because it's not present according to cloud provider

(note: count: 4780!)

Checking the configmap that the autoscaler creates shows the worker node that was removed is still somehow registered. i.e.

  Nodes: Healthy (ready=5 unready=0 notStarted=0 longNotStarted=0 registered=6)

Is there a problem with the autoscaler? Is it supposed to unregister the node or is this normal?

Is there a way I can get more info about why the DeletingNode event appears so often? There must be a reason the node can't be fully deleted. At one point, a stateful set put a pv and pvc on the worker that was deleted - I'm not sure if this could cause an issue with it being unregistered. The pv and pvc were manually removed, with no luck curbing the continuing DeletingNode event stream.

Sorry if this issue is not appropriate; feel free to remove it if that's the case. (It's hard to tell whether it's a bug in the autoscaler or just my use-case.)


The config map in full:

$ kubectl get configmap cluster-autoscaler-status -n kube-system -o yaml
apiVersion: v1
data:
  status: |+
    Cluster-autoscaler status at 2017-06-08 17:30:00.417692456 +0000 UTC:
    Cluster-wide:
      Health:      Healthy (ready=5 unready=0 notStarted=0 longNotStarted=0 registered=6)
                   LastProbeTime:      2017-06-08 17:29:59.812893761 +0000 UTC
                   LastTransitionTime: 2017-06-08 10:26:35.872670968 +0000 UTC
      ScaleUp:     NoActivity (ready=5 registered=6)
                   LastProbeTime:      2017-06-08 17:29:59.812893761 +0000 UTC
                   LastTransitionTime: 2017-06-08 10:26:35.872670968 +0000 UTC
      ScaleDown:   NoCandidates (candidates=0)
                   LastProbeTime:      2017-06-08 17:30:00.119227722 +0000 UTC
                   LastTransitionTime: 2017-06-08 10:46:54.809754422 +0000 UTC

    NodeGroups:
      Name:        worker-general-test
      Health:      Healthy (ready=2 unready=0 notStarted=0 longNotStarted=0 registered=2 cloudProviderTarget=2 (minSize=1, maxSize=5))
                   LastProbeTime:      2017-06-08 17:29:59.812893761 +0000 UTC
                   LastTransitionTime: 2017-06-08 10:26:35.872670968 +0000 UTC
      ScaleUp:     NoActivity (ready=2 cloudProviderTarget=2)
                   LastProbeTime:      2017-06-08 17:29:59.812893761 +0000 UTC
                   LastTransitionTime: 2017-06-08 10:26:35.872670968 +0000 UTC
      ScaleDown:   NoCandidates (candidates=0)
                   LastProbeTime:      2017-06-08 17:30:00.119227722 +0000 UTC
                   LastTransitionTime: 2017-06-08 10:46:54.809754422 +0000 UTC

kind: ConfigMap
metadata:
  annotations:
    cluster-autoscaler.kubernetes.io/last-updated: 2017-06-08 17:30:00.417692456 +0000
      UTC
  creationTimestamp: 2017-06-08T10:26:25Z
  name: cluster-autoscaler-status
  namespace: kube-system
  resourceVersion: "60900"
  selfLink: /api/v1/namespaces/kube-system/configmaps/cluster-autoscaler-status
  uid: ed1780d0-4c34-11e7-bb12-0afa88f15a64

Custom/external cloud provider?

Hello,

Would you be interested in an external cloud provider? This would allow the creation of new machines with specific requirements.

Implementation
https://github.com/VioletRainbows/autoscaler/blob/external/cluster-autoscaler/cloudprovider/external/external_cloud_provider.go
Please note that it is a work in progress.

Implementation details
A new cloud provider that calls an external HTTP server that can add and remove nodes.
autoscaler -> external cloud provider -> [homemade API server able to create machines]

Three endpoints would need to be implemented by the user for a default configuration; for instance:

We are currently using it to scale our Azure architecture at my workplace.

Cost & preferred_node expander

This is an umbrella bug for an effort to provide a decent expander function for CA. The function will:

  • Include the cost of nodes and how well the money is spent
  • Promote small nodes in small clusters and larger nodes in large clusters
  • Balance the two above based on the number of pending pods and/or requested capacity.

Feature Question/Request: Support other metrics when determining "scale-up"

Hello,

I wasn't sure where the best place to ask this question would be, as I am not sure what Slack channel the cluster-autoscaler tool falls under. Hope this works.

I was wondering if there could be any other metrics used to determine when a cluster is scaled up. Specifically, I was hoping to see something along the lines of "okay, let's scale up when the average utilization across the cluster is greater than 80%" in addition to the current "scale when pods are pending" approach. Has there been any discussion around this? Our current scale-up takes a few minutes and I want to be able to scale in anticipation of more Pods coming online in addition to when they've already been requested.

Thanks!

autodiscovery with several tags does not work

I'm trying to set up the autoscaler for 2 clusters in a single AWS zone as described here:
Having either single tag out of the two works fine (k8s.io/cluster-autoscaler/enabled or kubernetes.io/cluster/production),
but as soon as I set both tags, the container starts to crash:

- --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,kubernetes.io/cluster/production
I0714 12:48:42.177681       1 status.go:122] Succesfully wrote status configmap with body "Cluster-autoscaler status at 2017-07-14 12:48:41.487387224 +0000 UTC:
Initializing"
I0714 12:48:42.177952       1 auto_scaling.go:96] Starting getAutoscalingGroupsByTag with key=k8s.io/cluster-autoscaler/enabled,kubernetes.io/cluster/production
F0714 12:48:42.397512       1 cloud_provider_builder.go:91] Failed to create AWS cloud provider: Failed to get ASGs: Unable to find ASGs for tag key k8s.io/cluster-autoscaler/enabled,kubernetes.io/cluster/production
goroutine 54 [running]:
k8s.io/autoscaler/cluster-autoscaler/vendor/github.com/golang/glog.stacks(0xc4201c8400, 0xc42086a480, 0xd8, 0x101)
	/gopath/src/k8s.io/autoscaler/cluster-autoscaler/vendor/github.com/golang/glog/glog.go:766 +0xa7
k8s.io/autoscaler/cluster-autoscaler/vendor/github.com/golang/glog.(*loggingT).output(0x2d8c7e0, 0xc400000003, 0xc4206a6370, 0x2cda4be, 0x19, 0x5b, 0x0)
	/gopath/src/k8s.io/autoscaler/cluster-autoscaler/vendor/github.com/golang/glog/glog.go:717 +0x348
k8s.io/autoscaler/cluster-autoscaler/vendor/github.com/golang/glog.(*loggingT).printf(0x2d8c7e0, 0x3, 0x1f0319e, 0x27, 0xc4205bd220, 0x1, 0x1)
	/gopath/src/k8s.io/autoscaler/cluster-autoscaler/vendor/github.com/golang/glog/glog.go:655 +0x14f
k8s.io/autoscaler/cluster-autoscaler/vendor/github.com/golang/glog.Fatalf(0x1f0319e, 0x27, 0xc4205bd220, 0x1, 0x1)
	/gopath/src/k8s.io/autoscaler/cluster-autoscaler/vendor/github.com/golang/glog/glog.go:1145 +0x67
k8s.io/autoscaler/cluster-autoscaler/cloudprovider/builder.CloudProviderBuilder.Build(0x7fff3c1589d0, 0x3, 0x0, 0x0, 0x0, 0x0, 0x0, 0x7fff3c158a2d, 0x4a, 0x0, ...)
	/gopath/src/k8s.io/autoscaler/cluster-autoscaler/cloudprovider/builder/cloud_provider_builder.go:91 +0x68d
k8s.io/autoscaler/cluster-autoscaler/core.NewAutoscalingContext(0xa, 0x3fe0000000000000, 0x8bb2c97000, 0x1176592e000, 0x0, 0x7fff3c158a2d, 0x4a, 0xd18c2e2800, 0x1ed7132, 0xa, ...)
	/gopath/src/k8s.io/autoscaler/cluster-autoscaler/core/autoscaling_context.go:115 +0x13d
k8s.io/autoscaler/cluster-autoscaler/core.NewStaticAutoscaler(0xa, 0x3fe0000000000000, 0x8bb2c97000, 0x1176592e000, 0x0, 0x7fff3c158a2d, 0x4a, 0xd18c2e2800, 0x1ed7132, 0xa, ...)
	/gopath/src/k8s.io/autoscaler/cluster-autoscaler/core/static_autoscaler.go:54 +0x113
k8s.io/autoscaler/cluster-autoscaler/core.(*AutoscalerBuilderImpl).Build(0xc4203366c0, 0x410db8, 0x120)
	/gopath/src/k8s.io/autoscaler/cluster-autoscaler/core/dynamic_autoscaler.go:131 +0x10e
k8s.io/autoscaler/cluster-autoscaler/core.NewPollingAutoscaler(0x2d10c80, 0xc4203366c0, 0x1ce0860)
	/gopath/src/k8s.io/autoscaler/cluster-autoscaler/core/polling_autoscaler.go:36 +0x35
k8s.io/autoscaler/cluster-autoscaler/core.NewAutoscaler(0xa, 0x3fe0000000000000, 0x8bb2c97000, 0x1176592e000, 0x0, 0x7fff3c158a2d, 0x4a, 0xd18c2e2800, 0x1ed7132, 0xa, ...)
	/gopath/src/k8s.io/autoscaler/cluster-autoscaler/core/autoscaler.go:59 +0x5c6
main.run(0xc4202f2320)
	/gopath/src/k8s.io/autoscaler/cluster-autoscaler/main.go:184 +0x253
main.main.func2(0xc4202e6060)
	/gopath/src/k8s.io/autoscaler/cluster-autoscaler/main.go:276 +0x2a
created by k8s.io/autoscaler/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/client/leaderelection.(*LeaderElector).Run
	/gopath/src/k8s.io/autoscaler/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/client/leaderelection/leaderelection.go:150 +0x97

clusters deployed using kube-aws
Cluster Autoscaler version 0.5.4

[addon-resizer] deployment update drops existing toleration fields

Probably due to the version of the k8s Go client used by the addon-resizer.

#kubectl create -f heapster-controller.yaml
deployment "heapster" created
# kubectl get deployment heapster -o yaml | grep tolerations -A6
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: role
        operator: Equal
        value: k8s-edge-node
# kubectl get deployment heapster -o yaml | grep tolerations -A6
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: role
        operator: Equal
        value: k8s-edge-node
# kubectl get deployment heapster -o yaml | grep tolerations -A6
# kubectl get deployment heapster -o yaml | grep tolerations -A6

Go client version

       {
            "ImportPath": "k8s.io/kubernetes/pkg/client/clientset_generated/release_1_3",
            "Comment": "v1.3.0-alpha.5-165-g7476d97",
            "Rev": "7476d97781563b70e8b89a8bd3f99ea75ae6c290"
        },

Failed to drain node - pods remaining after timeout

Is it possible to increase the node drain timeout?

seeing this in logs:
Failed to scale down: Failed to delete ip-10-100-6-220.ec2.internal: Failed to drain node /ip-10-100-6-220.ec2.internal: pods remaining after timeout
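For what it's worth, newer cluster-autoscaler releases expose a flag for this; whether the version above already supports it would need checking against its --help output. A hedged sketch of the relevant container args (node group name illustrative):

      command:
        - ./cluster-autoscaler
        - --cloud-provider=aws
        - --nodes=1:10:my-node-group            # hypothetical node group
        - --max-graceful-termination-sec=600    # seconds CA waits for pods while draining a node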

Support "opt-out of autoscaling" annotation on nodes

We had some user requests to allow node to "opt-out" of being scaled down. We could support that by not scaling down nodes that have a specific annotation. This has an additional benefit of making e2e test behavior more predictable by preventing unexpected scale-down during test setup, etc.

WDYT @mwielgus?
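For reference, the annotation-based opt-out that was eventually documented in the CA FAQ looks roughly like this (node name illustrative; verify the key against your CA version):

apiVersion: v1
kind: Node
metadata:
  name: ip-10-0-0-1.ec2.internal        # hypothetical node
  annotations:
    # nodes carrying this annotation are skipped by scale-down
    cluster-autoscaler.kubernetes.io/scale-down-disabled: "true"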

Cluster autoscaler fails to start

After removing the pod, I get the following error.

I0630 12:37:26.683009       1 leaderelection.go:248] lock is held by cluster-autoscaler-37505376-64n1p and has not yet expired
I0630 12:37:26.683032       1 leaderelection.go:185] failed to acquire lease kube-system/cluster-autoscaler

I tried to remove everything related to CA from the cluster, but it keeps failing.

Node drain simulator disregards PodDisruptionBudget namespace

CA matches pods to PDBs by label only, without considering namespace. So if there's a PDB in a different namespace than the pod, but it matches the pod's labels and has allowed-disruptions=0, the pod will be considered impossible to evict (and will block the node drain attempt), even if that's not actually the case.
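For context, a PodDisruptionBudget is a namespaced object, so it should only ever constrain pods in its own namespace; a minimal example with illustrative names (policy/v1beta1 matches clusters of this era, policy/v1 on current ones):

# With the bug described above, this PDB's selector would also (incorrectly)
# block eviction of pods labeled app=web in other namespaces.
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: web-pdb                 # hypothetical
  namespace: team-a             # hypothetical
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: web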

Think about how to handle dedicated node groups

The way we implement dedicated node groups in Kubernetes is

  1. attach a taint to one or more nodes (like dedicated=foo); pods that are allowed to use those nodes tolerate that taint
  2. attach a label to the same set of nodes (like dedicated=foo; the key/value space of taints is separate from the key/value space of labels); pods that must use those nodes have node affinity or nodeSelector for that label (in addition to having the toleration from (1))

Say we have a pending pod that is supposed to run on a dedicated node. The node created for this pod must have the label (part (2) above). But it does not need to have the taint (part (1) above) since a pod with a toleration will just as well schedule onto a node with or without the corresponding taint. When CA sees such a pending pod, it should presumably create a node with both the label and taint, not just the label, but the logic today probably does not do this.
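A sketch of the convention described above, with hypothetical names: the taint and the label share the dedicated=foo key/value, and the pod carries both the toleration and a nodeSelector for the label.

# Node side: the taint keeps ordinary pods off, the label lets dedicated pods target it.
apiVersion: v1
kind: Node
metadata:
  name: dedicated-node-1            # hypothetical
  labels:
    dedicated: foo
spec:
  taints:
  - key: dedicated
    value: foo
    effect: NoSchedule
---
# Pod side: the toleration allows scheduling onto the tainted node, and the
# nodeSelector forces the pod onto nodes carrying the dedicated=foo label.
apiVersion: v1
kind: Pod
metadata:
  name: dedicated-workload          # hypothetical
spec:
  nodeSelector:
    dedicated: foo
  tolerations:
  - key: dedicated
    operator: Equal
    value: foo
    effect: NoSchedule
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # placeholder image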

Allow pod's graceful termination to override CA GracefulTermination

In a scenario where a PreStop duration is longer than 1 minute (CA's current default GracefulTermination time),
the pod will be killed in the middle of its "graceful shutdown", which can harm the user experience.
For example, say we want to drain WebSocket connections and decide that 20 minutes is enough time for users to finish their task/action. We would expect that setting the pod's graceful termination time to 20 minutes is enough, but currently the socket is simply cut off after 1 minute.
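For reference, the pod-side settings being discussed look roughly like this (name, image and script are illustrative); the request above is for CA to respect the pod's value instead of cutting it short at its own limit:

# Pod asking for a 20-minute graceful shutdown to drain WebSocket connections.
apiVersion: v1
kind: Pod
metadata:
  name: websocket-server               # hypothetical
spec:
  terminationGracePeriodSeconds: 1200  # 20 minutes
  containers:
  - name: app
    image: example.com/websocket-app   # hypothetical image
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "/drain-connections.sh"]   # hypothetical drain script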

Add CA e2e using volumes

#19 was not covered by existing tests and we should add an e2e to make sure something similar won't happen again.

cluster-autoscaler works improperly with several kubernetes clusters in the same region

Moved from: kubernetes-retired/contrib#2711
Author: @it-svit

I have two clusters in two different VPCs.
But a single cluster autoscaler is trying to affect both clusters.

I0808 15:01:14.854588       1 aws_manager.go:190] Regenerating ASG information for kube1-Nodepool2-1BK23XPH5999G-Workers
I0808 15:01:14.888882       1 aws_manager.go:190] Regenerating ASG information for kube2-Nodepool1-1303RAB6NXCBX-Workers
E0808 15:01:14.925243       1 static_autoscaler.go:219] Failed to scale up: failed to find template node for node group kube2-Nodepool1-1303RAB6NXCBX-Workers
W0808 15:01:14.925281       1 clusterstate.go:237] Failed to find readiness information for kube2-Nodepool1-1303RAB6NXCBX-Workers
W0808 15:01:14.925287       1 clusterstate.go:271] Failed to find readiness information for kube2-Nodepool1-1303RAB6NXCBX-Workers

cluster-autoscaler with AWS detects all nodes as unregistered and tries to delete them all...

As you can see below in the cluster-autoscaler logs, it keeps retrying to kill my nodes because it detects them as unregistered...

I0420 13:07:54.181348       1 leaderelection.go:204] succesfully renewed lease kube-system/cluster-autoscaler
I0420 13:07:56.185240       1 leaderelection.go:204] succesfully renewed lease kube-system/cluster-autoscaler
I0420 13:07:58.278044       1 leaderelection.go:204] succesfully renewed lease kube-system/cluster-autoscaler
I0420 13:08:00.281941       1 leaderelection.go:204] succesfully renewed lease kube-system/cluster-autoscaler
I0420 13:08:02.291554       1 leaderelection.go:204] succesfully renewed lease kube-system/cluster-autoscaler
I0420 13:08:02.887964       1 aws_manager.go:187] Regenerating ASG information for k8s-jenkins-nodes
I0420 13:08:02.935320       1 static_autoscaler.go:124] 5 unregistered nodes present
I0420 13:08:02.935335       1 utils.go:161] Removing unregistered node aws:///eu-west-1b/i-0c9cff088fa234079
W0420 13:08:03.005527       1 utils.go:173] Failed to remove node aws:///eu-west-1b/i-0c9cff088fa234079: ValidationError: Currently, desiredSize equals minSize (5). Terminating instance without replacement will violate group's min size constraint. Either set shouldDecrementDesiredCapacity flag to false or lower group's min size.
        status code: 400, request id: 63fa02d8-25ca-11e7-bd04-5525bf04935c
W0420 13:08:03.005554       1 static_autoscaler.go:131] Failed to remove unregistered nodes: ValidationError: Currently, desiredSize equals minSize (5). Terminating instance without replacement will violate group's min size constraint. Either set shouldDecrementDesiredCapacity flag to false or lower group's min size.
        status code: 400, request id: 63fa02d8-25ca-11e7-bd04-5525bf04935c
I0420 13:08:04.374476       1 leaderelection.go:204] succesfully renewed lease kube-system/cluster-autoscaler
I0420 13:08:06.574583       1 leaderelection.go:204] succesfully renewed lease kube-system/cluster-autoscaler
I0420 13:08:08.578247       1 leaderelection.go:204] succesfully renewed lease kube-system/cluster-autoscaler
I0420 13:08:10.582628       1 leaderelection.go:204] succesfully renewed lease kube-system/cluster-autoscaler
I0420 13:08:12.674304       1 leaderelection.go:204] succesfully renewed lease kube-system/cluster-autoscaler
I0420 13:08:13.108901       1 aws_manager.go:187] Regenerating ASG information for k8s-jenkins-nodes
I0420 13:08:13.150615       1 static_autoscaler.go:124] 5 unregistered nodes present
I0420 13:08:13.150632       1 utils.go:161] Removing unregistered node aws:///eu-west-1b/i-0caa38d264ea8e5bd
W0420 13:08:13.227270       1 utils.go:173] Failed to remove node aws:///eu-west-1b/i-0caa38d264ea8e5bd: ValidationError: Currently, desiredSize equals minSize (5). Terminating instance without replacement will violate group's min size constraint. Either set shouldDecrementDesiredCapacity flag to false or lower group's min size.
        status code: 400, request id: 6a11c420-25ca-11e7-bd04-5525bf04935c
W0420 13:08:13.227300       1 static_autoscaler.go:131] Failed to remove unregistered nodes: ValidationError: Currently, desiredSize equals minSize (5). Terminating instance without replacement will violate group's min size constraint. Either set shouldDecrementDesiredCapacity flag to false or lower group's min size.
        status code: 400, request id: 6a11c420-25ca-11e7-bd04-5525bf04935c
I0420 13:08:14.679535       1 leaderelection.go:204] succesfully renewed lease kube-system/cluster-autoscaler
I0420 13:08:16.684021       1 leaderelection.go:204] succesfully renewed lease kube-system/cluster-autoscaler
I0420 13:08:18.778081       1 leaderelection.go:204] succesfully renewed lease kube-system/cluster-autoscaler
I0420 13:08:20.781735       1 leaderelection.go:204] succesfully renewed lease kube-system/cluster-autoscaler
I0420 13:08:22.784877       1 leaderelection.go:204] succesfully renewed lease kube-system/cluster-autoscaler
I0420 13:08:23.371558       1 aws_manager.go:187] Regenerating ASG information for k8s-jenkins-nodes
I0420 13:08:23.410711       1 static_autoscaler.go:124] 5 unregistered nodes present
I0420 13:08:23.410729       1 utils.go:161] Removing unregistered node aws:///eu-west-1b/i-02d32be84bef14074
W0420 13:08:23.468063       1 utils.go:173] Failed to remove node aws:///eu-west-1b/i-02d32be84bef14074: ValidationError: Currently, desiredSize equals minSize (5). Terminating instance without replacement will violate group's min size constraint. Either set shouldDecrementDesiredCapacity flag to false or lower group's min size.
        status code: 400, request id: 702e6712-25ca-11e7-9208-1b1cd043bb1a
W0420 13:08:23.468097       1 static_autoscaler.go:131] Failed to remove unregistered nodes: ValidationError: Currently, desiredSize equals minSize (5). Terminating instance without replacement will violate group's min size constraint. Either set shouldDecrementDesiredCapacity flag to false or lower group's min size.
        status code: 400, request id: 702e6712-25ca-11e7-9208-1b1cd043bb1a
I0420 13:08:24.788189       1 leaderelection.go:204] succesfully renewed lease kube-system/cluster-autoscaler
I0420 13:08:26.874330       1 leaderelection.go:204] succesfully renewed lease kube-system/cluster-autoscaler
I0420 13:08:28.877450       1 leaderelection.go:204] succesfully renewed lease kube-system/cluster-autoscaler
I0420 13:08:29.780194       1 reflector.go:405] k8s.io/contrib/cluster-autoscaler/utils/kubernetes/listers.go:156: Watch close - *v1.Pod total 0 items received
I0420 13:08:30.880923       1 leaderelection.go:204] succesfully renewed lease kube-system/cluster-autoscaler
I0420 13:08:32.886314       1 leaderelection.go:204] succesfully renewed lease kube-system/cluster-autoscaler
I0420 13:08:33.589574       1 aws_manager.go:187] Regenerating ASG information for k8s-jenkins-nodes
I0420 13:08:33.640812       1 static_autoscaler.go:124] 5 unregistered nodes present
I0420 13:08:33.640831       1 utils.go:161] Removing unregistered node aws:///eu-west-1b/i-02d32be84bef14074
W0420 13:08:33.707148       1 utils.go:173] Failed to remove node aws:///eu-west-1b/i-02d32be84bef14074: ValidationError: Currently, desiredSize equals minSize (5). Terminating instance without replacement will violate group's min size constraint. Either set shouldDecrementDesiredCapacity flag to false or lower group's min size.
        status code: 400, request id: 764786f0-25ca-11e7-aff8-b5aec7e2f86b
W0420 13:08:33.707175       1 static_autoscaler.go:131] Failed to remove unregistered nodes: ValidationError: Currently, desiredSize equals minSize (5). Terminating instance without replacement will violate group's min size constraint. Either set shouldDecrementDesiredCapacity flag to false or lower group's min size.
        status code: 400, request id: 764786f0-25ca-11e7-aff8-b5aec7e2f86b
I0420 13:08:34.889530       1 leaderelection.go:204] succesfully renewed lease kube-system/cluster-autoscaler
I0420 13:08:36.892950       1 leaderelection.go:204] succesfully renewed lease kube-system/cluster-autoscaler
I0420 13:08:38.977885       1 leaderelection.go:204] succesfully renewed lease kube-system/cluster-autoscaler
I0420 13:08:40.981849       1 leaderelection.go:204] succesfully renewed lease kube-system/cluster-autoscaler
I0420 13:08:42.985896       1 leaderelection.go:204] succesfully renewed lease kube-system/cluster-autoscaler
I0420 13:08:44.066389       1 aws_manager.go:187] Regenerating ASG information for k8s-jenkins-nodes
I0420 13:08:44.119056       1 static_autoscaler.go:124] 5 unregistered nodes present
I0420 13:08:44.119070       1 utils.go:161] Removing unregistered node aws:///eu-west-1b/i-02d32be84bef14074
W0420 13:08:44.185569       1 utils.go:173] Failed to remove node aws:///eu-west-1b/i-02d32be84bef14074: ValidationError: Currently, desiredSize equals minSize (5). Terminating instance without replacement will violate group's min size constraint. Either set shouldDecrementDesiredCapacity flag to false or lower group's min size.
        status code: 400, request id: 7c86a6a7-25ca-11e7-b7f1-890a3321955a
W0420 13:08:44.185598       1 static_autoscaler.go:131] Failed to remove unregistered nodes: ValidationError: Currently, desiredSize equals minSize (5). Terminating instance without replacement will violate group's min size constraint. Either set shouldDecrementDesiredCapacity flag to false or lower group's min size.
        status code: 400, request id: 7c86a6a7-25ca-11e7-b7f1-890a3321955a

So after some research everywhere, and finally in the code, I've found this:

Kubernetes repo: kubernetes/pkg/kubelet/kubelet.go (l. 2096)

func (kl *Kubelet) updateCloudProviderFromMachineInfo(node *v1.Node, info *cadvisorapi.MachineInfo) {
	if info.CloudProvider != cadvisorapi.UnknownProvider &&
		info.CloudProvider != cadvisorapi.Baremetal {
		// The cloud providers from pkg/cloudprovider/providers/* that update ProviderID
		// will use the format of cloudprovider://project/availability_zone/instance_name
		// here we only have the cloudprovider and the instance name so we leave project
		// and availability zone empty for compatibility.
		node.Spec.ProviderID = strings.ToLower(string(info.CloudProvider)) +
			":////" + string(info.InstanceID)
	}
}

They build the ProviderID like aws:////INSTANCE_ID

autoscaler/cluster-autoscaler/cloudprovider/aws/aws_manager.go (l 220)

// GetAsgNodes returns Asg nodes.
func (m *AwsManager) GetAsgNodes(asg *Asg) ([]string, error) {
	result := make([]string, 0)
	group, err := m.getAutoscalingGroup(asg.Name)
	if err != nil {
		return []string{}, err
	}
	for _, instance := range group.Instances {
		result = append(result,
			fmt.Sprintf("aws:///%s/%s", *instance.AvailabilityZone, *instance.InstanceId))
	}
	return result, nil
}

You build the ProviderID like aws:///AVAILABILITY_ZONE/INSTANCE_ID

autoscaler/cluster-autoscaler/clusterstate/clusterstate.go (l 705)

// Calculates which of the existing cloud provider nodes are not registered in Kuberenetes.
func getNotRegisteredNodes(allNodes []*apiv1.Node, cloudProvider cloudprovider.CloudProvider, time time.Time) ([]UnregisteredNode, error) {
	registered := sets.NewString()
	for _, node := range allNodes {
		registered.Insert(node.Spec.ProviderID)
	}
	notRegistered := make([]UnregisteredNode, 0)
	for _, nodeGroup := range cloudProvider.NodeGroups() {
		nodes, err := nodeGroup.Nodes()
		if err != nil {
			return []UnregisteredNode{}, err
		}
		for _, node := range nodes {
			if !registered.Has(node) {
				notRegistered = append(notRegistered, UnregisteredNode{
					Node: &apiv1.Node{
						ObjectMeta: metav1.ObjectMeta{
							Name: node,
						},
						Spec: apiv1.NodeSpec{
							ProviderID: node,
						},
					},
					UnregisteredSince: time,
				})
			}
		}
	}
	return notRegistered, nil
}

So when the autoscaler checks whether an instance is registered, registered.Has(node) returns false when it should return true, because the two components build the ProviderID differently.

StatefulSet with AntiAffinity prevents cluster-autoscaler from working

Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug report

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T16:36:33Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: gke (machine type n1-highcpu-4)
  • OS (e.g. from /etc/os-release): Alpine Linux v3.5
  • Kernel (e.g. uname -a): Linux cc332daac761 4.9.13-moby SMP Sat Mar 25 02:48:44 UTC 2017 x86_64 Linux

What happened:
We have an issue with the cluster-autoscaler where new pods are stuck on Pending and a new node isn't being created. We see these events in the pod:

  FirstSeen	LastSeen	Count	From			SubObjectPath	Type		Reason			Message
  ---------	--------	-----	----			-------------	--------	------			-------
  3m		7s		22	cluster-autoscaler			Normal		NotTriggerScaleUp	pod didn't trigger scale-up (it wouldn't fit if a new node is added)
  4m		0s		17	default-scheduler			Warning		FailedScheduling	No nodes are available that match all of the following predicates:: Insufficient cpu (2).

What you expected to happen:
The pods request only 1 CPU resource, so pods would definitely fit on a new node of instance type n1-highcpu-4.

How to reproduce it (as minimally and precisely as possible):
We can reproduce this by creating a new simple cluster with the following command:

gcloud container clusters create scale-test --cluster-version 1.6.2 --zone us-east1-b --additional-zones us-east1-c --machine-type n1-highcpu-4 --num-nodes 1 --preemptible --enable-autoupgrade --enable-autorepair --enable-autoscaling --min-nodes 1 --max-nodes 10

We then run kubectl apply -f "deploy.yml" with the following configuration:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: busy-loop
spec:
  replicas: 1
  revisionHistoryLimit: 5
  template:
    metadata:
      labels:
        tier: core
        app: busy-loop
    spec:
      nodeSelector:
        cloud.google.com/gke-preemptible: "true"
      containers:
      - name: busy-loop
        image: <SIMPLE BUSY LOOP IMAGE>
        ports:
        - containerPort: 5950
          name: busy-loop
        resources:
          requests:
            cpu: "1000m"
            memory: "256Mi"
        livenessProbe:
          exec:
            command:
            - cat
            - deploy.yml
          initialDelaySeconds: 5
          periodSeconds: 5
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: busy-loop
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: busy-loop
  minReplicas: 2
  maxReplicas: 100
  targetCPUUtilizationPercentage: 10
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: rethinkdb
spec:
  serviceName: rethinkdb
  replicas: 3
  template:
    metadata:
      labels:
        tier: data
        app: rethinkdb
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: rethinkdb
              topologyKey: kubernetes.io/hostname
      containers:
      - name: rethinkdb
        image: <SIMPLE RETHINKDB IMAGE>
        readinessProbe:
          tcpSocket:
            port: 28015

With this configuration, new nodes are not being created at all, with the NotTriggerScaleUp event message returned by the cluster-autoscaler. When we perform the exact same steps but remove the affinity setting from the configuration, new nodes are created without a problem. It seems that the anti-affinity somehow makes the cluster-autoscaler incorrectly think that there wouldn't be any room on a new node.

Autoscaler does not scale up when only 1 pod is unschedulable

Hi All,

K8S version is: v1.6.1
Autoscaler version is: v0.5.4 the same happened in v0.5.1
Provider: AWS

I'm not sure if it's a bug or maybe a configuration problem. The autoscaler is not scaling up another instance when only 1 pod is unschedulable. I saw it happen with 1 unschedulable pod; maybe the same goes for more.

I'm able to fix this only if I create another pod; then scale_up.go will report "Estimated 1 nodes needed".

Logs:

I0804 13:23:44.597641       1 static_autoscaler.go:130] 1 unregistered nodes present
I0804 13:23:44.597684       1 static_autoscaler.go:197] Filtering out schedulables
I0804 13:23:44.597870       1 static_autoscaler.go:205] No schedulable pods
I0804 13:23:44.597893       1 scale_up.go:44] Pod <namespace>/<pod> is unschedulable
I0804 13:23:44.620660       1 scale_up.go:62] Upcoming 1 nodes
I0804 13:23:44.657738       1 scale_up.go:124] No need for any nodes in k8s-production-asg
I0804 13:23:44.657767       1 scale_up.go:132] No expansion options
I0804 13:23:44.657787       1 static_autoscaler.go:247] Scale down status: unneededOnly=true lastScaleUpTime=2017-08-04 13:17:36.225268892 +0000 UTC lastScaleDownFailedTrail=2017-08-03 18:27:44.967241708 +0000 UTC schedulablePodsPresent=false

After creating another pod:

0804 13:23:55.090329       1 scale_up.go:62] Upcoming 1 nodes
I0804 13:23:55.127798       1 scale_up.go:145] Best option to resize: k8s-production-asg
I0804 13:23:55.127826       1 scale_up.go:149] Estimated 1 nodes needed in k8s-production-asg
I0804 13:23:55.164701       1 scale_up.go:169] Scale-up: setting group k8s-production-asg size to 8
I0804 13:23:55.203865       1 aws_manager.go:124] Setting asg k8s-production-asg size to 8

YAML:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: autoscaler-default-pool
  namespace: kube-system
  labels:
    app: autoscaler-default-pool
    asg-name: k8s-production-asg
spec:
  # replicas not specified on purpose, default 1
  selector:
    matchLabels:
      app: autoscaler-default-pool
  template:
    metadata:
      labels:
        app: autoscaler-default-pool
    spec:
      containers:
        - image: gcr.io/google_containers/cluster-autoscaler:v0.5.4
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 300Mi
          command:
            - ./cluster-autoscaler
            - --kubernetes=https://elb.amazonaws.com:6443
            - --v=4
            - --stderrthreshold=info
            - --cloud-provider=aws
            - --skip-nodes-with-local-storage=true
            - --nodes=2:50:k8s-production-asg
          env:
            - name: AWS_REGION
              value: us-west-1
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt
              readOnly: true
          imagePullPolicy: Always
      volumes:
        - name: ssl-certs
          hostPath:
            path: /etc/pki/tls/certs/ca-bundle.crt

scaling-down nodes that are running only system pods

I'm using:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:33:11Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

What I did:

  • I created a one-node GKE cluster, cluster autoscaling disabled
  • At the time of creation, all kube-system pods were running on the one node
  • I subsequently enabled cluster autoscaling with a minimum cluster size of 1 and a maximum of 5
  • All kube-system pods continued to run on the one original node
  • I ran a Job that taxed the system to the extent that CA added a new node
  • After the job was finished, all pods were removed, leaving only kube-system pods running

What I expected:

  • CA would scale down the cluster to its initial (minimum) size (one node) shortly after the taxing job was complete

What happened:

  • The cluster stayed at two nodes
  • When CA added a new node, the kube-system pods had been redistributed to run across both nodes
  • According to the FAQ here, this happened because CA will not remove nodes running system pods

Suggestion:

  • If a cluster is above its configured minimum size only because kube-system pods are preventing it from shrinking, the superfluous node(s) should be automatically drained, the kube-system pods should be consolidated onto a single node (or rather, onto a number of nodes equal to the cluster's configured minimum), and the superfluous nodes should be removed from the cluster
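A related workaround that is often suggested (and later documented in the CA FAQ) is to give kube-system pods explicit PodDisruptionBudgets so CA is allowed to move them off an otherwise empty node; a hedged sketch with an illustrative selector:

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kube-dns-pdb
  namespace: kube-system
spec:
  minAvailable: 1                # keep at least one replica during the move
  selector:
    matchLabels:
      k8s-app: kube-dns          # illustrative; match your system component's labels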

Scale down bug on AWS

There are 3 nodes in an AWS Auto Scaling group:

  • Node-A, utilization: 0.3, kube-system pods present, "Protect From Scale In".
  • Node-B, utilization: 0.0, may be removed
  • Node-C, utilization: 0.8

Cluster Autoscaler reports that the Node-B will be removed in a given scale-down-delay, but AWS removes Node-A due to the desired capacity shrinking request by user, rather than the Node-B.

At 2017-06-28T06:06:36Z instance i-0b9d was taken out of service 
in response to a user request, shrinking the capacity from 3 to 2.

Node-A has pods in the kube-system namespace, so I guess the termination must have been triggered by AWS before the Cluster Autoscaler took any action. Does Cluster Autoscaler send a request to change the desired capacity first? But there's already ShouldDecrementDesiredCapacity in aws_manager.go.

kubernetes: v1.6.6
cluster-autoscaler: v0.5.4
autoscaling group:

{
    "AutoScalingGroups": [
        {
            "AutoScalingGroupName": "kube-test-asg",
            "AutoScalingGroupARN": "arn",
            "LaunchConfigurationName": "kube-test",
            "MinSize": 1,
            "MaxSize": 5,
            "DesiredCapacity": 2,
            "DefaultCooldown": 300,
            "LoadBalancerNames": [],
            "TargetGroupARNs": [],
            "HealthCheckType": "EC2",
            "HealthCheckGracePeriod": 300,
            "Instances": [
                {
                    "InstanceId": "i-0309",
                    "LifecycleState": "InService",
                    "HealthStatus": "Healthy",
                    "LaunchConfigurationName": "kube-test",
                    "ProtectedFromScaleIn": false
                },
                {
                    "InstanceId": "i-0af4",
                    "LifecycleState": "InService",
                    "HealthStatus": "Healthy",
                    "LaunchConfigurationName": "kube-test",
                    "ProtectedFromScaleIn": false
                }
            ],
            "CreatedTime": "2017-06-27T01:02:31.582Z",
            "SuspendedProcesses": [],
            "VPCZoneIdentifier": "subnet",
            "EnabledMetrics": [],
            "Tags": [],
            "TerminationPolicies": [
                "Default"
            ],
            "NewInstancesProtectedFromScaleIn": false
        }
    ]
}

AWS AutoDiscovery ASG Limits

I'm running Kubernetes V1.6.4, CA V0.6.0 with autodiscovery of ASGs on AWS. If I change the maximum or minimum number of nodes in the AWS console, these changes are not reflected in the AutoScaler's config.

I chose to use the autodiscovery as I thought it would enable me to keep a single source of truth for ASG limits.

If I decrease the maximum number of nodes in AWS, CA can try to scale up but will throw an error when AWS declines the request.

If I increase the maximum number of nodes in AWS because the cluster has run out of resources, CA doesn't notice this and won't scale up even though there is capacity to do so.

Given that AWS is constantly being polled for updates to the ASG, shouldn't changes to the max/min number of nodes be picked up?

I'm not sure whether this behaviour has been implemented and isn't working, or whether this would be classed as a feature request, but it definitely seems like the intuitive behaviour for autodiscovery.
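For comparison, with auto-discovery the CA container is typically configured roughly as below (cluster tag and region are illustrative); since there are no --nodes flags, the ASG's own min/max are the only source of limits, which is why stale limits are surprising:

      command:
        - ./cluster-autoscaler
        - --cloud-provider=aws
        - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/my-cluster   # hypothetical cluster tag
        - --v=4
      env:
        - name: AWS_REGION
          value: eu-west-1        # illustrative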

What to apply AWS Policy to?

Please excuse the stupid question, but I'm a bit confused by the README.
What do I need to apply the suggested IAM policy to?

I've set this up with Helm and it's not working. There are no error messages, but nodes are not scaling up when running against the capacity limit, and I suspect the policy might be the culprit:

echo "Installing helm chart 'aws-cluster-autoscaler'…"
envsubst < autoscaling.template.yml > autoscaling.yml
helm install stable/aws-cluster-autoscaler -f autoscaling.yml
rm autoscaling.yml

With this autoscaling.template.yml:

autoscalingGroups:
  - name: ${AUTO_SCALING_GROUP_NAME}
    maxSize: 12
    minSize: 4

awsRegion: ${AWS_REGION}

image:
  repository: gcr.io/google_containers/cluster-autoscaler
  tag: v0.5.4
  pullPolicy: IfNotPresent

For further debugging purposes, here's kubectl describe pod aws-cluster-autoscaler:

Name:		loping-armadillo-aws-cluster-autoscaler-3021239351-mwxgp
Namespace:	default
Node:		xxx.eu-central-1.compute.internal/xxx.xxx.xxx.xxx
Start Time:	Wed, 07 Jun 2017 19:38:22 +0200
Labels:		app=aws-cluster-autoscaler
		pod-template-hash=3021239351
		release=loping-armadillo
Annotations:	kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"loping-armadillo-aws-cluster-autoscaler-3021239351","uid":"19ea6...
		kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container aws-cluster-autoscaler
Status:		Running
IP:		xxx.xxx.xxx.xxx
Controllers:	ReplicaSet/loping-armadillo-aws-cluster-autoscaler-3021239351
Containers:
  aws-cluster-autoscaler:
    Container ID:	docker://8449a1d6fbc09a2e52d71f2cc67b520720125743f2f0384887b94cafddb6a44f
    Image:		gcr.io/google_containers/cluster-autoscaler:v0.5.4
    Image ID:		docker-pullable://gcr.io/google_containers/cluster-autoscaler@sha256:abe1ed1410c6ea58a80afec69e2b4397740cfa4ffc02484eb0cfbe96d3e81984
    Port:		8085/TCP
    Command:
      ./cluster-autoscaler
      --cloud-provider=aws
      --nodes=4:12:nodes.my-domain.com
      --scale-down-delay=10m
      --skip-nodes-with-local-storage=false
      --skip-nodes-with-system-pods=true
      --v=4
    State:		Running
      Started:		Wed, 07 Jun 2017 19:38:22 +0200
    Ready:		True
    Restart Count:	0
    Requests:
      cpu:	100m
    Environment:
      AWS_REGION:	eu-central-1
    Mounts:
      /etc/ssl/certs/ca-certificates.crt from ssl-certs (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jdpp7 (ro)
Conditions:
  Type		Status
  Initialized 	True
  Ready 	True
  PodScheduled 	True
Volumes:
  ssl-certs:
    Type:	HostPath (bare host directory volume)
    Path:	/etc/ssl/certs/ca-certificates.crt
  default-token-jdpp7:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-jdpp7
    Optional:	false
QoS Class:	Burstable
Node-Selectors:	<none>
Tolerations:	node.alpha.kubernetes.io/notReady=:Exists:NoExecute for 300s
		node.alpha.kubernetes.io/unreachable=:Exists:NoExecute for 300s
Events:		<none>
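As for the question itself: per the README referenced above, the policy is attached to the IAM role of the instances running CA (the nodes' instance profile). An alternative sometimes used is to hand CA explicit credentials via the standard AWS SDK environment variables from a Secret; a hedged fragment with illustrative names:

      containers:
        - name: aws-cluster-autoscaler
          env:
            - name: AWS_REGION
              value: eu-central-1
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: cluster-autoscaler-aws    # hypothetical Secret
                  key: access-key-id
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: cluster-autoscaler-aws
                  key: secret-access-key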
