
Introduction

This project is deprecated. Please see https://github.com/openshift/hive.

cluster-operator

Development Deployment

Initial (One-Time) Setup

  • Install required packages:
    • Fedora: sudo dnf install golang make docker ansible
    • Mac OSX:
  • Change docker to allow insecure pulls (required for oc cluster up) and change the log driver to json-file (more reliable):
    • Edit /etc/sysconfig/docker
    • Change OPTIONS= to include --insecure-registry 172.30.0.0/16 --log-driver=json-file
  • Enable and Start docker:
    • sudo systemctl enable --now docker
  • Install the OpenShift and Kubernetes Python clients:
    • sudo pip install kubernetes openshift
  • Install python SELinux libraries
    • Fedora 27: sudo dnf install libselinux-python
    • Fedora 28 (and later): sudo dnf install python2-libselinux
  • Clone this repo to $HOME/go/src/github.com/openshift/cluster-operator
  • Get cfssl:
    • go get -u github.com/cloudflare/cfssl/cmd/...
  • Get the oc client binary
    • Fedora: Download a recent oc client binary from origin/releases (it doesn't have to be 3.10).
    • Mac OSX: Minishift is the recommended development environment
  • Create a kubectl symlink to the oc binary (if you don't already have it). This is necessary for the kubectl_apply ansible module to work.
    • Note: It is recommended to put the kubectl symlink somewhere in your path.
    • ln -s oc kubectl
  • Start an OpenShift cluster:
    • Fedora: oc cluster up --image="docker.io/openshift/origin"
    • Mac OSX: Follow the Minishift Getting Started Guide
    • Note: Startup output will contain the URL to the web console for your OpenShift cluster; save this for later.
  • Login to the OpenShift cluster as system:admin:
    • oc login -u system:admin
  • Create an "admin" account with cluster-admin role which you can use to login to the WebUI or with oc:
    • oc adm policy add-cluster-role-to-user cluster-admin admin
  • Login to the OpenShift cluster as a normal admin account:
    • oc login -u admin -p password
  • Ensure the following files are available on your local machine:
    • $HOME/.aws/credentials - your AWS credentials; the default section will be used, but it can be overridden with vars when running the create cluster playbook.
    • $HOME/.ssh/libra.pem - the SSH private key to use for AWS
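The /etc/sysconfig/docker edit above ends up looking roughly like this (the --selinux-enabled flag is only a stand-in for whatever flags your distribution already puts in OPTIONS; keep those and append the two new ones):

```
# /etc/sysconfig/docker (excerpt)
OPTIONS='--selinux-enabled --insecure-registry 172.30.0.0/16 --log-driver=json-file'
```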
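The kubectl symlink step can be sketched with a self-contained demo. The bin-demo directory and the oc stub below are assumptions for illustration only; in practice, create the link next to the real oc binary, in a directory on your PATH:

```shell
# Scratch directory standing in for a directory on your PATH (assumption
# for this demo).
mkdir -p "$HOME/bin-demo"

# Stub standing in for the real oc binary you downloaded.
printf '#!/bin/sh\necho oc-stub\n' > "$HOME/bin-demo/oc"
chmod +x "$HOME/bin-demo/oc"

# The actual step: a kubectl symlink pointing at oc, so the kubectl_apply
# Ansible module finds a kubectl on the PATH.
ln -sf "$HOME/bin-demo/oc" "$HOME/bin-demo/kubectl"

# Verify the link resolves and runs.
"$HOME/bin-demo/kubectl"
```

Using `ln -sf` makes the step idempotent, so re-running the setup does not fail if the link already exists.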

Deploy / Re-deploy Cluster Operator

WARNING
By default when using deploy-devel-playbook.yml to deploy cluster operator, fake images will be used. This means that no actual cluster will be created. If you want to create a real cluster, pass -e fake_deployment=false to the playbook invocation.
  • Deploy cluster operator to the OpenShift cluster you are currently logged into (see the oc login instructions above).
    • ansible-playbook contrib/ansible/deploy-devel-playbook.yml
    • This creates an OpenShift BuildConfig and ImageStream for the cluster-operator image (which does not yet exist).
  • deploy-devel-playbook.yml automatically kicks off an image compile. To re-compile and push a new image:
    • If you would just like to deploy Cluster Operator from the latest code in git:
      • oc start-build cluster-operator -n openshift-cluster-operator
    • If you are a developer and would like to quickly compile code locally and deploy to your cluster:
      • Mac OSX only: eval $(minishift docker-env)
      • NO_DOCKER=1 make images
        • This will compile the go code locally, and build both cluster-operator and cluster-operator-ansible images.
      • make integrated-registry-push
        • This will attempt to get your current OpenShift whoami token, login to the integrated cluster registry, and push your local images.
    • Once the push completes, the ImageStream will trigger a new deployment.
    • Re-run these steps to deploy new code as often as you like.

Creating a Test Cluster

  • ansible-playbook contrib/ansible/create-cluster-playbook.yml
    • This will create a cluster named after your username in your current context's namespace, using a fake ClusterVersion. (no actual resources will be provisioned, the Ansible image used will just verify the playbook called exists, and return indicating success)
    • Override -e cluster_name, -e cluster_namespace, or any of the other variables defined at the top of the playbook.
    • This command can be re-run to update the definition of the cluster and test how the cluster operator will respond to the change. (WARNING: do not try to change the name/namespace, as this will create a new cluster)

You can then check the provisioning status of your cluster by running oc describe cluster <cluster_name>

Developing Cluster Operator Controllers Locally

If you are actively working on controller code you can save some time by compiling and running locally:

  • Run the deploy playbooks normally.
  • Disable your controller in the cluster-operator-controller-manager DeploymentConfig using one of the below methods:
    • Scale everything down: oc scale -n openshift-cluster-operator --replicas=0 dc/cluster-operator-controller-manager
    • Disable just your controller: oc edit -n openshift-cluster-operator DeploymentConfig cluster-operator-controller-manager and add an argument for --controllers=-disableme or --controllers=c1,c2,c3 for just the controllers you want.
    • Delete it entirely: oc delete -n openshift-cluster-operator DeploymentConfig cluster-operator-controller-manager
  • make build
    • On Mac you may need to instead build a Darwin binary with: go install ./cmd/cluster-operator
  • bin/cluster-operator controller-manager --log-level debug --k8s-kubeconfig ~/.kube/config
    • You can adjust the controllers run with --controllers clusterapi,machineset,etc. Use --help to see the full list.

Developing With OpenShift Ansible

The Cluster Operator uses its own Ansible image which layers our playbooks and roles on top of the upstream OpenShift Ansible images. Typically our Ansible changes only require work in this repo. See the build/cluster-operator-ansible directory for the Dockerfile and playbooks we layer in.

To build the cluster-operator-ansible image you can just run make images normally.

WARNING: This image is built using OpenShift Ansible v3.10. This can be adjusted by specifying the CO_ANSIBLE_URL and CO_ANSIBLE_BRANCH environment variables to use a different branch/repository for the base openshift-ansible image.

You can run cluster-operator-ansible playbooks standalone by creating an inventory like:

[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_become=true
ansible_ssh_user=centos
openshift_deployment_type=origin
openshift_release="3.10"
oreg_url=openshift/origin-${component}:v3.10.0
openshift_aws_ami=ami-833d37f9

[masters]

[etcd]

[nodes]

You can then run ansible with the above inventory file and your cluster ID:

ansible-playbook -i ec2-hosts build/cluster-operator-ansible/playbooks/cluster-operator/node-config-daemonset.yml -e openshift_aws_clusterid=dgoodwin-cluster
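A minimal sketch of writing such an inventory non-interactively, using the example values above (the /tmp/ec2-hosts path is an assumption for this sketch, and every host var shown is just the example's value; substitute your own SSH user, AMI, and cluster details):

```shell
# Write the example inventory to a file. The heredoc delimiter is quoted so
# that ${component} in oreg_url is kept literal, not expanded by the shell.
cat > /tmp/ec2-hosts <<'EOF'
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_become=true
ansible_ssh_user=centos
openshift_deployment_type=origin
openshift_release="3.10"
oreg_url=openshift/origin-${component}:v3.10.0
openshift_aws_ami=ami-833d37f9

[masters]

[etcd]

[nodes]
EOF

# Sanity-check that the vars section made it into the file before handing
# the inventory to ansible-playbook.
grep -q '^\[OSEv3:vars\]$' /tmp/ec2-hosts && echo "inventory written"
```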

Maintenance

Use of kubectl_ansible and oc_process modules

We're using the Cluster Operator deployment Ansible as a testing ground for the kubectl-ansible modules that wrap apply and oc process. These roles are vendored in, similar to Go's vendoring approach, using a tool called gogitit. The required gogitit manifest and cache are committed, but only the person updating the vendored code needs to install the tool or worry about the manifest. For everyone else the roles are simply available as normal, which means developers do not need to periodically re-run ansible-galaxy install.

Updating the vendored code can be done with:

$ cd contrib/ansible/
$ gogitit sync

Roles Template Duplication

For OpenShift CI, our roles template (which we do not have permission to apply ourselves) had to be copied to https://github.com/openshift/release/blob/master/projects/cluster-operator/cluster-operator-roles-template.yaml. Our copy in this repo is authoritative; whenever the auth/roles definitions change, we need to copy the file, submit a PR, and request that someone run the make target for us.

Utilities

You can build the development utilities binary coutil by running: make coutil. Once built, the binary will be placed in bin/coutil. Utilities are subcommands under coutil and include:

  • aws-actuator-test - invokes AWS actuator actions (create, update, delete) without requiring a cluster to be present.
  • extract-jenkins-logs - extracts container logs from a cluster operator e2e run, given a Jenkins job URL.
  • playbook-mock - used by the fake-ansible image to track invocations of Ansible by cluster operator controllers.
  • wait-for-apiservice - given the name of an API service, waits for the API service to be functional.
  • wait-for-cluster-ready - waits for a cluster operator ClusterDeployment to be provisioned and functional, reporting on its progress along the way.

People

Contributors

abutcher, csrwng, dgoodwin, jhernand, jwforres, openshift-merge-robot, staebler, twiest, warmchang


Issues

Tags and releases?

We have an application that includes the cluster operator as one of its components. Currently, in order to be able to have reproducible deployments of this application, we have a clone of this git repository where we add our own tags. From those tags we build our own versioned images, like cluster-operator:v0.0.3, cluster-operator:v0.0.4, etc. We then use those versioned images to do reproducible deployments to our production environments. We would like to stop doing that, and instead use stable versions released by the cluster operator project. Is there any plan to have those stable releases? Will you use tags to handle them? Will the images that you publish be versioned as well?

API Server crashlooping because it can't find etcd

After running on my cluster for a couple of hours, I started seeing the api server crash loop. Here's a log from a run:

โฏ oc logs -f cluster-operator-apiserver-5fd88df5b9-jm7l5 -c apiserver -n cluster-operator
I0214 20:44:46.929434       1 server.go:59] Preparing to run API server
I0214 20:44:47.100606       1 round_trippers.go:417] curl -k -v -XGET  -H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJjbHVzdGVyLW9wZXJhdG9yIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImNsdXN0ZXItb3BlcmF0b3ItYXBpc2VydmVyLXRva2VuLTlxbmpuIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImNsdXN0ZXItb3BlcmF0b3ItYXBpc2VydmVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiM2VhZWExMTktMTFiZi0xMWU4LWI1MmItNGUyZmI2YmFjYWE3Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmNsdXN0ZXItb3BlcmF0b3I6Y2x1c3Rlci1vcGVyYXRvci1hcGlzZXJ2ZXIifQ.Q0bHyN-_FFpUTvjxO2mBDujZj7Twi8NIF_s6eLCfq1f8YM29wnbY2grlRBfzoK-2OyGtRGgSXQMdD9BcviruL8SkHLSz1GltVBb-IGMaT44FWek1E0LU3hQzKdFQNSqu-rxI0GX2RCk6swtjeoNJsVZ0mh-SqfkmhaW2di5o817lQvzghoUbBioSEHZyMvidgx9UqeWbG0TKXsTjI3d_VO8CvvRPwDlNeOZ4YSAXTfIiJLFTWvrI3dfj7PR7FkKnVE535rrzMwVqiGaj7v1v8KCPBwgHC_MroyWsS9AexkL3dFN9tSHOLN6W_zTIPP5sf8hvWmHfNYkbysJSTJRgqw" -H "User-Agent: cluster-operator/v0.0.0 (linux/amd64) kubernetes/790d284" -H "Accept: application/json, */*" https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication
I0214 20:44:47.111407       1 round_trippers.go:436] GET https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication 200 OK in 10 milliseconds
I0214 20:44:47.111433       1 round_trippers.go:442] Response Headers:
I0214 20:44:47.111437       1 round_trippers.go:445]     Cache-Control: no-store
I0214 20:44:47.111440       1 round_trippers.go:445]     Content-Type: application/json
I0214 20:44:47.111442       1 round_trippers.go:445]     Content-Length: 2690
I0214 20:44:47.111444       1 round_trippers.go:445]     Date: Wed, 14 Feb 2018 20:44:47 GMT
I0214 20:44:47.111648       1 request.go:873] Response Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"extension-apiserver-authentication","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication","uid":"18fe5de7-11bf-11e8-b52b-4e2fb6bacaa7","resourceVersion":"69","creationTimestamp":"2018-02-14T19:41:49Z"},"data":{"client-ca-file":"-----BEGIN CERTIFICATE-----\nMIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu\nc2hpZnQtc2lnbmVyQDE1MTg2MzY0MjMwHhcNMTgwMjE0MTkyNzAyWhcNMjMwMjEz\nMTkyNzAzWjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE1MTg2MzY0MjMw\nggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCuWb2ZHhjbBPrFaJ7Hi1sO\nzYx/u1477bI92MX7ZcL/Kx3/huXj4RkbE+TBjCkpO6xTfjI5tWc0+5jXkc2lt2cP\n1YYJdtP9LWfNVg0TN0HU3nqaBR2OGtkuzqXYYfUfKJNU1e6Kg8zh3xi5BoI1LwNM\np5slISyjJR76FAwWhlcx9fRZOS324EOQBujx0ZuH1qwXfXsrt80oMMZWMGDTMmEt\nr0Kd6WODuYow9KbqouQrCbQdv9RCh9OGBtSRh8WivKq+BntaCZVWbFE8qEGg7shd\ncGTpgW4idHOUSFx49EOqTp4cVDEctUWp9MR6L5TqId7alHwFAHUrOnl7NXcCdyzR\nAgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG\nSIb3DQEBCwUAA4IBAQBTusNrTl/ba/4ZPGaaYBLmzTx5H48JSCugmnWKIeI8JnnJ\nD+XwCqMT8lEPXeveY/dstNBITl3LnVT8+fpAMNsB/bSnb+qirPtr9RzNgFHr2N3M\nUOCNNBYOJ0Yj8tkKum6I3rdsZh0WRRU9SNBmpUmHoAQGCJes1m9+OOEkPnhUi0pI\nw38pq/FjILyyWYH9p+wOgAOheqVs/KFLxaVi5n0fwyaF10Bf8pUFdcon4rzWFH1c\nk0AsPWqftOU2I+p6rvP553gW2XrnfKC/03CiT4fFf5VGWWBPGsiuZmxRmZhgDqjE\nH352IP0AxodUjkJdvvR4GDji7+tvy4BI/cMktyt1\n-----END CERTIFICATE-----\n","requestheader-allowed-names":"[\"aggregator-front-proxy\"]","requestheader-client-ca-file":"-----BEGIN 
CERTIFICATE-----\nMIICnjCCAYagAwIBAgIBATANBgkqhkiG9w0BAQsFADAAMB4XDTE4MDIxNDE5NDEz\nN1oXDTIzMDIxMzE5NDEzOFowADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoC\nggEBAM9iOcCuPDdL4+vToMbIAyWz6FayEL2vcCw9q0TkRN+0B0yt1NWG3M+eeS+s\nmOIj8Wb8/+Z4/Thejrza20QjmLdMraV88BfdXbTb4HnsiVTk1e7c7QbFP7YZZtQ1\n0wz2jtB1+uPEJaC+LfZmJv2mb89WhFwOhuTiTj4NzDvvnDsm1vL9aerdXCH7ZnvZ\nTKlLnl4HdKH4Q6WhMro2HB792tZGoZq7ZBSDRYCGVhhW6Sg10Id5Qc2FP1X2duCW\nYiiN00jWjC8G6UucZUdcspUAQz9z4ZCE9Zjm7pvc2LPRLraireYlyoEMCj5nFyF1\n5sONYVqBvNSPVGK9muHcAoehOgsCAwEAAaMjMCEwDgYDVR0PAQH/BAQDAgKkMA8G\nA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBACsWEX1YGTjbqg9kO0a5\naa38BoagLkygO+/qw6b2cByeGvdM9vGUn5j5LWmIKIT3TVNy7pA2EtpVtw1CUdB5\nGO7O4KeYJ/uxW/9tRYf7Uzokkd0iEwd5RY1bhoZBb1hergQHYsBMgf9jKusfPLH+\nI6fDYZEW5jWKkd6BRNW/XyW5RSEUf6Sh59ZTjNhdbTFjOsuoDMLrARGilP/qYav0\nKIQ1wAostR4TFtEZJ/Kf6Z1ufQVDpZmx6IGyZECEHDIYLPvsK9PwAcYxd7sbHGO4\nfHdy1MD/VHLWZhZ6q5UhylmLGrdyxqimWNXSy93lrdCpBRSAlDpkCygQp+015idA\ngH8=\n-----END CERTIFICATE-----\n","requestheader-extra-headers-prefix":"[\"X-Remote-Extra-\"]","requestheader-group-headers":"[\"X-Remote-Group\"]","requestheader-username-headers":"[\"X-Remote-User\"]"}}
I0214 20:44:47.120872       1 util.go:152] Admission control plugin names: []
I0214 20:44:47.120899       1 server.go:65] Creating storage factory
I0214 20:44:47.120926       1 server.go:103] Completing API server configuration
I0214 20:44:47.121607       1 etcd_config.go:88] Created skeleton API server
I0214 20:44:47.121627       1 etcd_config.go:99] Installing API groups
I0214 20:44:47.121643       1 storage_factory.go:278] storing {clusteroperator.openshift.io clusters} in clusteroperator.openshift.io/v1alpha1, reading as clusteroperator.openshift.io/__internal from storagebackend.Config{Type:"", Prefix:"/clusteroperator", ServerList:[]string{"http://localhost:2379"}, KeyFile:"", CertFile:"", CAFile:"", Quorum:true, Paging:true, DeserializationCacheSize:0, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000}
I0214 20:44:47.121724       1 storage_factory.go:278] storing {clusteroperator.openshift.io clusterversions} in clusteroperator.openshift.io/v1alpha1, reading as clusteroperator.openshift.io/__internal from storagebackend.Config{Type:"", Prefix:"/clusteroperator", ServerList:[]string{"http://localhost:2379"}, KeyFile:"", CertFile:"", CAFile:"", Quorum:true, Paging:true, DeserializationCacheSize:0, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000}
I0214 20:44:47.121763       1 compact.go:54] compactor already exists for endpoints [http://localhost:2379]
I0214 20:44:47.121786       1 storage_factory.go:278] storing {clusteroperator.openshift.io machinesets} in clusteroperator.openshift.io/v1alpha1, reading as clusteroperator.openshift.io/__internal from storagebackend.Config{Type:"", Prefix:"/clusteroperator", ServerList:[]string{"http://localhost:2379"}, KeyFile:"", CertFile:"", CAFile:"", Quorum:true, Paging:true, DeserializationCacheSize:0, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000}
I0214 20:44:47.121825       1 compact.go:54] compactor already exists for endpoints [http://localhost:2379]
I0214 20:44:47.121847       1 storage_factory.go:278] storing {clusteroperator.openshift.io machines} in clusteroperator.openshift.io/v1alpha1, reading as clusteroperator.openshift.io/__internal from storagebackend.Config{Type:"", Prefix:"/clusteroperator", ServerList:[]string{"http://localhost:2379"}, KeyFile:"", CertFile:"", CAFile:"", Quorum:true, Paging:true, DeserializationCacheSize:0, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000}
I0214 20:44:47.121866       1 compact.go:54] compactor already exists for endpoints [http://localhost:2379]
I0214 20:44:47.121874       1 etcd_config.go:111] Installing API group clusteroperator.openshift.io
I0214 20:44:47.123586       1 etcd_config.go:132] Finished installing API groups
I0214 20:44:47.123610       1 server.go:52] Running the API server
[restful] 2018/02/14 20:44:47 log.go:33: [restful/swagger] listing is available at https:///swaggerapi
[restful] 2018/02/14 20:44:47 log.go:33: [restful/swagger] https:///swaggerui/ is mapped to folder /swagger-ui/
I0214 20:44:47.127659       1 healthz.go:74] Installing healthz checkers:"ping", "poststarthook/generic-apiserver-start-informers", "poststarthook/start-cluster-operator-apiserver-informers", "etcd"
I0214 20:44:47.127974       1 serve.go:89] Serving securely on [::]:6443
I0214 20:44:47.128011       1 util.go:166] Starting shared informers
I0214 20:44:47.128017       1 util.go:171] Started shared informers
I0214 20:44:47.132453       1 request.go:873] Request Body: {"kind":"SubjectAccessReview","apiVersion":"authorization.k8s.io/v1beta1","metadata":{"creationTimestamp":null},"spec":{"nonResourceAttributes":{"path":"/healthz","verb":"get"},"user":"system:anonymous","group":["system:unauthenticated"]},"status":{"allowed":false}}
I0214 20:44:47.132514       1 round_trippers.go:417] curl -k -v -XPOST  -H "User-Agent: cluster-operator/v0.0.0 (linux/amd64) kubernetes/790d284" -H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJjbHVzdGVyLW9wZXJhdG9yIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImNsdXN0ZXItb3BlcmF0b3ItYXBpc2VydmVyLXRva2VuLTlxbmpuIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImNsdXN0ZXItb3BlcmF0b3ItYXBpc2VydmVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiM2VhZWExMTktMTFiZi0xMWU4LWI1MmItNGUyZmI2YmFjYWE3Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmNsdXN0ZXItb3BlcmF0b3I6Y2x1c3Rlci1vcGVyYXRvci1hcGlzZXJ2ZXIifQ.Q0bHyN-_FFpUTvjxO2mBDujZj7Twi8NIF_s6eLCfq1f8YM29wnbY2grlRBfzoK-2OyGtRGgSXQMdD9BcviruL8SkHLSz1GltVBb-IGMaT44FWek1E0LU3hQzKdFQNSqu-rxI0GX2RCk6swtjeoNJsVZ0mh-SqfkmhaW2di5o817lQvzghoUbBioSEHZyMvidgx9UqeWbG0TKXsTjI3d_VO8CvvRPwDlNeOZ4YSAXTfIiJLFTWvrI3dfj7PR7FkKnVE535rrzMwVqiGaj7v1v8KCPBwgHC_MroyWsS9AexkL3dFN9tSHOLN6W_zTIPP5sf8hvWmHfNYkbysJSTJRgqw" -H "Accept: application/json, */*" -H "Content-Type: application/json" https://172.30.0.1:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews
I0214 20:44:47.133444       1 round_trippers.go:436] POST https://172.30.0.1:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews 201 Created in 0 milliseconds
I0214 20:44:47.133466       1 round_trippers.go:442] Response Headers:
I0214 20:44:47.133469       1 round_trippers.go:445]     Cache-Control: no-store
I0214 20:44:47.133472       1 round_trippers.go:445]     Content-Type: application/json
I0214 20:44:47.133474       1 round_trippers.go:445]     Content-Length: 301
I0214 20:44:47.133476       1 round_trippers.go:445]     Date: Wed, 14 Feb 2018 20:44:47 GMT
I0214 20:44:47.133488       1 request.go:873] Response Body: {"kind":"SubjectAccessReview","apiVersion":"authorization.k8s.io/v1beta1","metadata":{"creationTimestamp":null},"spec":{"nonResourceAttributes":{"path":"/healthz","verb":"get"},"user":"system:anonymous","group":["system:unauthenticated"]},"status":{"allowed":true,"reason":"allowed by cluster rule"}}
I0214 20:44:47.133941       1 handler.go:160] cluster-operator-apiserver: GET "/healthz" satisfied by nonGoRestful
I0214 20:44:47.133964       1 pathrecorder.go:240] cluster-operator-apiserver: "/healthz" satisfied by exact match
I0214 20:44:47.133972       1 server.go:132] etcd checker called
E0214 20:44:47.134174       1 server.go:141] etcd failed to reach any server
I0214 20:44:47.134193       1 healthz.go:112] healthz check etcd failed: etcd failed to reach any server
I0214 20:44:47.134299       1 wrap.go:42] GET /healthz: (2.127893ms) 500
goroutine 146 [running]:
github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc4202ee700, 0x1f4)
	/Users/cewong/Code/cluster-operator/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:207 +0xdd
github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc4202ee700, 0x1f4)
	/Users/cewong/Code/cluster-operator/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:186 +0x35
github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc42021d7a0, 0x1f4)
	/Users/cewong/Code/cluster-operator/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:188 +0xac
net/http.Error(0x7f54d3ddc3e0, 0xc420106008, 0xc420788540, 0xb4, 0x1f4)
	/usr/local/Cellar/go/1.9.2/libexec/src/net/http/server.go:1930 +0xda
github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f54d3ddc3e0, 0xc420106008, 0xc4204c4700)
	/Users/cewong/Code/cluster-operator/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/server/healthz/healthz.go:121 +0x508
net/http.HandlerFunc.ServeHTTP(0xc42073b160, 0x7f54d3ddc3e0, 0xc420106008, 0xc4204c4700)
	/usr/local/Cellar/go/1.9.2/libexec/src/net/http/server.go:1918 +0x44
github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc42075e940, 0x7f54d3ddc3e0, 0xc420106008, 0xc4204c4700)
	/Users/cewong/Code/cluster-operator/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:241 +0x55a
github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc4202ef7a0, 0x7f54d3ddc3e0, 0xc420106008, 0xc4204c4700)
	/Users/cewong/Code/cluster-operator/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:234 +0xa1
github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x190f857, 0x1a, 0xc42011a6c0, 0xc4202ef7a0, 0x7f54d3ddc3e0, 0xc420106008, 0xc4204c4700)
	/Users/cewong/Code/cluster-operator/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/server/handler.go:161 +0x6ad
github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/server.(*director).ServeHTTP(0xc4206afdc0, 0x7f54d3ddc3e0, 0xc420106008, 0xc4204c4700)
	<autogenerated>:1 +0x75
github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f54d3ddc3e0, 0xc420106008, 0xc4204c4700)
	/Users/cewong/Code/cluster-operator/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:51 +0x37d
net/http.HandlerFunc.ServeHTTP(0xc420216640, 0x7f54d3ddc3e0, 0xc420106008, 0xc4204c4700)
	/usr/local/Cellar/go/1.9.2/libexec/src/net/http/server.go:1918 +0x44
github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f54d3ddc3e0, 0xc420106008, 0xc4204c4700)
	/Users/cewong/Code/cluster-operator/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/server/filters/maxinflight.go:95 +0x318
net/http.HandlerFunc.ServeHTTP(0xc420055a00, 0x7f54d3ddc3e0, 0xc420106008, 0xc4204c4700)
	/usr/local/Cellar/go/1.9.2/libexec/src/net/http/server.go:1918 +0x44
github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f54d3ddc3e0, 0xc420106008, 0xc4204c4700)
	/Users/cewong/Code/cluster-operator/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:49 +0x203a
net/http.HandlerFunc.ServeHTTP(0xc420216690, 0x7f54d3ddc3e0, 0xc420106008, 0xc4204c4700)
	/usr/local/Cellar/go/1.9.2/libexec/src/net/http/server.go:1918 +0x44
github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f54d3ddc3e0, 0xc420106008, 0xc4204c4700)
	/Users/cewong/Code/cluster-operator/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:79 +0x2b1
net/http.HandlerFunc.ServeHTTP(0xc4202166e0, 0x7f54d3ddc3e0, 0xc420106008, 0xc4204c4700)
	/usr/local/Cellar/go/1.9.2/libexec/src/net/http/server.go:1918 +0x44
github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/endpoints/request.WithRequestContext.func1(0x7f54d3ddc3e0, 0xc420106008, 0xc4204c4700)
	/Users/cewong/Code/cluster-operator/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/endpoints/request/requestcontext.go:110 +0xcb
net/http.HandlerFunc.ServeHTTP(0xc4206afde0, 0x7f54d3ddc3e0, 0xc420106008, 0xc4204c4700)
	/usr/local/Cellar/go/1.9.2/libexec/src/net/http/server.go:1918 +0x44
github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc4206afe60, 0x24b4cc0, 0xc420106008, 0xc4204c4700, 0xc420346420)
	/Users/cewong/Code/cluster-operator/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:93 +0x8d
created by github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP
	/Users/cewong/Code/cluster-operator/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:92 +0x1ab

logging error output: "[+]ping ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-cluster-operator-apiserver-informers ok\n[-]etcd failed: reason withheld\nhealthz check failed\n"
 [[kube-probe/.] 172.17.0.1:45424]
I0214 20:44:56.097572       1 handler.go:160] cluster-operator-apiserver: GET "/healthz" satisfied by nonGoRestful
I0214 20:44:56.097607       1 pathrecorder.go:240] cluster-operator-apiserver: "/healthz" satisfied by exact match
I0214 20:44:56.097622       1 server.go:132] etcd checker called
E0214 20:44:56.098034       1 server.go:141] etcd failed to reach any server
I0214 20:44:56.098083       1 healthz.go:112] healthz check etcd failed: etcd failed to reach any server
I0214 20:44:56.098253       1 wrap.go:42] GET /healthz: (768.943µs) 500
I0214 20:44:56.974886       1 handler.go:160] cluster-operator-apiserver: GET "/healthz" satisfied by nonGoRestful
I0214 20:44:56.974962       1 pathrecorder.go:240] cluster-operator-apiserver: "/healthz" satisfied by exact match
I0214 20:44:56.974975       1 server.go:132] etcd checker called
E0214 20:44:56.975415       1 server.go:141] etcd failed to reach any server
I0214 20:44:56.975485       1 healthz.go:112] healthz check etcd failed: etcd failed to reach any server
I0214 20:44:56.975665       1 wrap.go:42] GET /healthz: (874.595µs) 500
I0214 20:45:06.099206       1 request.go:873] Request Body: {"kind":"SubjectAccessReview","apiVersion":"authorization.k8s.io/v1beta1","metadata":{"creationTimestamp":null},"spec":{"nonResourceAttributes":{"path":"/healthz","verb":"get"},"user":"system:anonymous","group":["system:unauthenticated"]},"status":{"allowed":false}}
I0214 20:45:06.099323       1 round_trippers.go:417] curl -k -v -XPOST  -H "Content-Type: application/json" -H "User-Agent: cluster-operator/v0.0.0 (linux/amd64) kubernetes/790d284" -H "Accept: application/json, */*" -H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJjbHVzdGVyLW9wZXJhdG9yIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImNsdXN0ZXItb3BlcmF0b3ItYXBpc2VydmVyLXRva2VuLTlxbmpuIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImNsdXN0ZXItb3BlcmF0b3ItYXBpc2VydmVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiM2VhZWExMTktMTFiZi0xMWU4LWI1MmItNGUyZmI2YmFjYWE3Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmNsdXN0ZXItb3BlcmF0b3I6Y2x1c3Rlci1vcGVyYXRvci1hcGlzZXJ2ZXIifQ.Q0bHyN-_FFpUTvjxO2mBDujZj7Twi8NIF_s6eLCfq1f8YM29wnbY2grlRBfzoK-2OyGtRGgSXQMdD9BcviruL8SkHLSz1GltVBb-IGMaT44FWek1E0LU3hQzKdFQNSqu-rxI0GX2RCk6swtjeoNJsVZ0mh-SqfkmhaW2di5o817lQvzghoUbBioSEHZyMvidgx9UqeWbG0TKXsTjI3d_VO8CvvRPwDlNeOZ4YSAXTfIiJLFTWvrI3dfj7PR7FkKnVE535rrzMwVqiGaj7v1v8KCPBwgHC_MroyWsS9AexkL3dFN9tSHOLN6W_zTIPP5sf8hvWmHfNYkbysJSTJRgqw" https://172.30.0.1:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews
I0214 20:45:06.103588       1 round_trippers.go:436] POST https://172.30.0.1:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews 201 Created in 4 milliseconds
I0214 20:45:06.103619       1 round_trippers.go:442] Response Headers:
I0214 20:45:06.103623       1 round_trippers.go:445]     Cache-Control: no-store
I0214 20:45:06.103626       1 round_trippers.go:445]     Content-Type: application/json
I0214 20:45:06.103628       1 round_trippers.go:445]     Content-Length: 301
I0214 20:45:06.103631       1 round_trippers.go:445]     Date: Wed, 14 Feb 2018 20:45:06 GMT
I0214 20:45:06.103688       1 request.go:873] Response Body: {"kind":"SubjectAccessReview","apiVersion":"authorization.k8s.io/v1beta1","metadata":{"creationTimestamp":null},"spec":{"nonResourceAttributes":{"path":"/healthz","verb":"get"},"user":"system:anonymous","group":["system:unauthenticated"]},"status":{"allowed":true,"reason":"allowed by cluster rule"}}
I0214 20:45:06.103857       1 handler.go:160] cluster-operator-apiserver: GET "/healthz" satisfied by nonGoRestful
I0214 20:45:06.103883       1 pathrecorder.go:240] cluster-operator-apiserver: "/healthz" satisfied by exact match
I0214 20:45:06.103892       1 server.go:132] etcd checker called
E0214 20:45:06.104215       1 server.go:141] etcd failed to reach any server
I0214 20:45:06.104237       1 healthz.go:112] healthz check etcd failed: etcd failed to reach any server
I0214 20:45:06.104439       1 wrap.go:42] GET /healthz: (5.446118ms) 500
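The healthz bodies above follow the standard apiserver format: each check is listed as `[+]name ok` when passing or `[-]name failed` when failing, so the only failing check here is `etcd`. A small sketch for pulling the failing check names out of such a body (a hypothetical helper for log triage, not part of this repo):

```python
import re

def failing_checks(healthz_body: str) -> list:
    """Return the names of failing checks from an apiserver /healthz body.

    Lines look like "[+]ping ok" (passing) or
    "[-]etcd failed: reason withheld" (failing).
    """
    return re.findall(r"\[-\](\S+?) failed", healthz_body)

body = (
    "[+]ping ok\n"
    "[+]poststarthook/generic-apiserver-start-informers ok\n"
    "[+]poststarthook/start-cluster-operator-apiserver-informers ok\n"
    "[-]etcd failed: reason withheld\n"
    "healthz check failed\n"
)
print(failing_checks(body))  # ['etcd']
```

With `etcd` the only failing check, the next step is to confirm the apiserver pod can actually reach the etcd endpoint it was configured with (`http://localhost:2379` per the storage config logged at startup).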

Can't create cluster with AMI and installer for 3.10.34

We are trying to deploy a cluster using version 3.10.34 of openshift-ansible and a corresponding AMI, but we get the following error from the AWS machine controller:

ERROR: logging before flag.Parse: W0910 13:14:44.469526       1 controller.go:136] Unable to create machine jhernand11-5fjvv-master-l69wb: cannot create EC2 instance: InvalidBlockDeviceMapping: Volume of size 100GB is smaller than  snapshot 'snap-06909941fa52515d0', expect size >= 200GB
	status code: 400, request id: 9090a620-f5d8-4284-8e58-cf9af963df97
time="2018-09-10T13:14:44Z" level=error msg="error creating machine: cannot create EC2 instance: InvalidBlockDeviceMapping: Volume of size 100GB is smaller than  snapshot 'snap-06909941fa52515d0', expect size >= 200GB\n\tstatus code: 400, request id: 9090a620-f5d8-4284-8e58-cf9af963df97" controller=awsMachine machine=unified-hybrid-cloud/jhernand11-5fjvv-master-l69wb

Note that, as there are no stable releases yet, we are using our own tagged version of the project, commit a456201. On top of that we have added a patch to use version 3.10.34 of the installer:

diff --git a/Makefile b/Makefile
index 3a9879c5..c461c722 100644
--- a/Makefile
+++ b/Makefile
@@ -378,7 +378,7 @@ cluster-operator-ansible-images: build/cluster-operator-ansible/Dockerfile build
        $(call build-cluster-operator-ansible-image,$(OA_ANSIBLE_URL),"release-3.9",$(CLUSTER_OPERATOR_ANSIBLE_IMAGE_NAME),"v3.9",$(CLUSTER_API_DEPLOYMENT_PLAYBOOK))
 
        # build v3.10 on openshift-ansible:master
-       $(call build-cluster-operator-ansible-image,$(OA_ANSIBLE_URL),"openshift-ansible-3.10.0-0.32.0",$(CLUSTER_OPERATOR_ANSIBLE_IMAGE_NAME),"v3.10",$(CLUSTER_API_DEPLOYMENT_PLAYBOOK))
+       $(call build-cluster-operator-ansible-image,$(OA_ANSIBLE_URL),"openshift-ansible-3.10.34-1",$(CLUSTER_OPERATOR_ANSIBLE_IMAGE_NAME),"v3.10",$(CLUSTER_API_DEPLOYMENT_PLAYBOOK))
 
        # build master/canary
        $(call build-cluster-operator-ansible-image,$(OA_ANSIBLE_URL),$(OA_ANSIBLE_BRANCH),$(CLUSTER_OPERATOR_ANSIBLE_IMAGE_NAME),$(VERSION),$(CLUSTER_API_DEPLOYMENT_PLAYBOOK))

Is this a known issue? Any suggestion on how to address it?
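
The error itself suggests the immediate workaround: the AMI's backing snapshot is 200GB, so the machine set's root volume has to request at least that much (or the AMI has to be rebuilt from a smaller snapshot). A guard along these lines (a hypothetical helper, not code from this repo) would let the machine controller fail fast with a clearer message before calling EC2:

```go
package main

import "fmt"

// validateRootVolume checks a requested root volume size against the
// minimum implied by the AMI's backing snapshot. EC2 rejects block
// device mappings whose volume is smaller than the snapshot, which is
// exactly the InvalidBlockDeviceMapping error above.
func validateRootVolume(requestedGB, snapshotGB int64) error {
	if requestedGB < snapshotGB {
		return fmt.Errorf("root volume of %dGB is smaller than the AMI snapshot (%dGB): increase the machine set's root volume size", requestedGB, snapshotGB)
	}
	return nil
}

func main() {
	// Mirrors the failure above: a 100GB root volume against a 200GB snapshot.
	fmt.Println(validateRootVolume(100, 200))
	fmt.Println(validateRootVolume(200, 200))
}
```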

Building and using your own golden image undocumented

Hi. We (cc @nimrodshn) are trying out cluster-operator according to the README, in fake=false mode.
We see the MachineSet and Machine objects being created, but no AWS instances appear.
Machine status remains at:

  status:
    lastUpdated: null
    providerStatus: null

Looking at pod logs it seems AWS credentials didn't make it into openshift-ansible:

TASK [openshift_aws : fetch master instances] **********************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_aws/tasks/setup_master_group.yml:10
Wednesday 11 July 2018  07:50:04 +0000 (0:00:00.033)       0:00:03.289 ******** 
Using module file /usr/lib/python2.7/site-packages/ansible/modules/cloud/amazon/ec2_instance_facts.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: default
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 && sleep 0'
FAILED - RETRYING: fetch master instances (20 retries left).Result was: {
    "attempts": 1, 
    "changed": false, 
    "instances": [], 
    "invocation": {
        "module_args": {
            "aws_access_key": null, 
            "aws_secret_key": null, 
            "ec2_url": null, 
            "filters": {
                "instance-state-name": "running", 
                "tag:clusterid": "nshneor-gfv8m", 
                "tag:host-type": "master"
            }, 
            "instance_ids": [], 
            "profile": null, 
            "region": "us-east-1", 
            "security_token": null, 
            "validate_certs": true
        }
    }, 
    "retries": 21
}

The pod has AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY set:

nshneor@dhcp-2-169 ~/workspace/go/src/github.com/openshift/cluster-operator (master) $ oc describe pods master-nshneor-gfv8m-nqts5-gcnk8 
Name:           master-nshneor-gfv8m-nqts5-gcnk8
Namespace:      myproject
Node:           localhost/10.35.2.169
Start Time:     Wed, 11 Jul 2018 10:46:12 +0300
Labels:         controller-uid=798a762e-84de-11e8-a192-28d2448581b1
                job-name=master-nshneor-gfv8m-nqts5
Annotations:    openshift.io/scc=restricted
Status:         Running
IP:             172.17.0.4
Controlled By:  Job/master-nshneor-gfv8m-nqts5
Containers:
  install-masters:
    Container ID:   docker://31a09cd730e09b7e739654cc0fdc497a2d2e569f1142ceba566a38599b993e99
    Image:          cluster-operator-ansible:canary
    Image ID:       docker://sha256:2f0c518288260d1f0026dcc12129fa359b4909c4fbdaab83680d7e62fe295e25
    Port:           <none>
    State:          Running
      Started:      Wed, 11 Jul 2018 10:49:59 +0300
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Wed, 11 Jul 2018 10:48:02 +0300
      Finished:     Wed, 11 Jul 2018 10:49:45 +0300
    Ready:          True
    Restart Count:  2
    Environment:
      INVENTORY_FILE:             /ansible/inventory/hosts
      ANSIBLE_HOST_KEY_CHECKING:  False
      OPTS:                       -vvv --private-key=/ansible/ssh/privatekey.pem -e @/ansible/inventory/vars
      AWS_ACCESS_KEY_ID:          <set to the key 'awsAccessKeyId' in secret 'nshneor-aws-creds'>      Optional: false
      AWS_SECRET_ACCESS_KEY:      <set to the key 'awsSecretAccessKey' in secret 'nshneor-aws-creds'>  Optional: false
      PLAYBOOK_FILE:              /usr/share/ansible/openshift-ansible/playbooks/cluster-operator/aws/install_masters.yml
    Mounts:
      /ansible/inventory/ from inventory (rw)
      /ansible/ssh/ from sshkey (rw)
      /ansible/ssl/ from sslkey (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from cluster-installer-token-fvrqc (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  inventory:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      master-nshneor-gfv8m-nqts5
    Optional:  false
  sshkey:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nshneor-ssh-key
    Optional:    false
  sslkey:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nshneor-certs
    Optional:    false
  cluster-installer-token-fvrqc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cluster-installer-token-fvrqc
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     <none>
Events:
  Type     Reason                 Age               From                Message
  ----     ------                 ----              ----                -------
  Normal   Scheduled              4m                default-scheduler   Successfully assigned master-nshneor-gfv8m-nqts5-gcnk8 to localhost
  Normal   SuccessfulMountVolume  4m                kubelet, localhost  MountVolume.SetUp succeeded for volume "inventory"
  Normal   SuccessfulMountVolume  4m                kubelet, localhost  MountVolume.SetUp succeeded for volume "sshkey"
  Normal   SuccessfulMountVolume  4m                kubelet, localhost  MountVolume.SetUp succeeded for volume "sslkey"
  Normal   SuccessfulMountVolume  4m                kubelet, localhost  MountVolume.SetUp succeeded for volume "cluster-installer-token-fvrqc"
  Warning  BackOff                36s               kubelet, localhost  Back-off restarting failed container
  Normal   Pulled                 24s (x3 over 4m)  kubelet, localhost  Container image "cluster-operator-ansible:canary" already present on machine
  Normal   Created                23s (x3 over 4m)  kubelet, localhost  Created container

The secrets do exist:

Name:         nshneor-aws-creds
Namespace:    myproject
Labels:       <none>
Annotations:  
Type:         Opaque

Data
====
awsAccessKeyId:      20 bytes
awsSecretAccessKey:  40 bytes


Name:         nshneor-ssh-key
Namespace:    myproject
Labels:       app=cluster-operator
Annotations:  
Type:         Opaque

Data
====
ssh-privatekey:  1674 bytes

How can we troubleshoot it further?
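
Module args showing aws_access_key: null does not by itself prove the credentials are absent (the ec2 modules can also pick them up from the environment via boto), so a first step is to confirm the variables actually reach the container's process environment, e.g. oc exec <pod> -- env | grep AWS_. That preflight check could also be done in code before ansible runs; a stdlib-only sketch (hypothetical helper, not part of this repo):

```go
package main

import (
	"fmt"
	"os"
)

// missingAWSEnv returns the names of required AWS variables that are
// unset or empty according to lookup. Passing os.LookupEnv checks the
// real environment; a fake lookup makes the logic testable.
func missingAWSEnv(lookup func(string) (string, bool)) []string {
	required := []string{"AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"}
	var missing []string
	for _, name := range required {
		if v, ok := lookup(name); !ok || v == "" {
			missing = append(missing, name)
		}
	}
	return missing
}

func main() {
	if m := missingAWSEnv(os.LookupEnv); len(m) > 0 {
		fmt.Println("missing AWS credentials in environment:", m)
	}
}
```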

Update dependencies to kube 1.9

The dependencies currently use kube 1.8.2. OpenShift is on kube 1.9 now, so we should update our dependencies to kube 1.9 as well.

Integrate with the cluster-registry

Once we have published an alpha version of the cluster-registry, this project should integrate with it. Note, we may want to consider renaming the Cluster resource in this project to something like ProvisionedCluster. One possible integration would be that once a cluster has been created by the cluster operator, an entry could be created for it and maintained in the cluster-registry.

Additional validation needed on cluster resource

Validate that:

  • the machineSet uses the same type of hardware spec as the default in the cluster
  • required fields in the hardware spec are defined either in the cluster's default hardware spec or in the machineSet
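
Under simplified, hypothetical versions of the API types, the two rules could be enforced with something like:

```go
package main

import "fmt"

// Hypothetical, simplified shapes of the real hardware spec types.
type AWSSpec struct {
	InstanceType string
	AMI          string
}

type HardwareSpec struct {
	AWS *AWSSpec
}

// validateMachineSetHardware enforces the two rules above: the machine
// set must use the same hardware (cloud) type as the cluster default,
// and every required field must be set in at least one of the two specs.
func validateMachineSetHardware(clusterDefault, machineSet *HardwareSpec) error {
	if machineSet == nil {
		return nil // machine set inherits the cluster default wholesale
	}
	if (clusterDefault.AWS == nil) != (machineSet.AWS == nil) {
		return fmt.Errorf("machine set hardware type does not match cluster default")
	}
	if machineSet.AWS != nil {
		merged := *clusterDefault.AWS
		if machineSet.AWS.InstanceType != "" {
			merged.InstanceType = machineSet.AWS.InstanceType
		}
		if machineSet.AWS.AMI != "" {
			merged.AMI = machineSet.AWS.AMI
		}
		if merged.InstanceType == "" || merged.AMI == "" {
			return fmt.Errorf("instanceType and ami must each be set on the cluster default or the machine set")
		}
	}
	return nil
}

func main() {
	def := &HardwareSpec{AWS: &AWSSpec{InstanceType: "t2.medium", AMI: "ami-123"}}
	// Machine set overrides nothing: valid, inherits the defaults.
	fmt.Println(validateMachineSetHardware(def, &HardwareSpec{AWS: &AWSSpec{}}))
	// Machine set with no AWS section while the cluster default is AWS: invalid.
	fmt.Println(validateMachineSetHardware(def, &HardwareSpec{}))
}
```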

controller-manager not waiting for api registration

With the move to kube 1.9, the controller-manager is no longer waiting for the cluster-operator api to be registered with the aggregation api server prior to attempting to start the controllers. Consequently, the controllers are skipped as their dependent resources do not exist.

Remote machineset deletion - handle possible cluster controller starvation

Currently the cluster controller will handle connecting to the remote cluster and deleting machinesets within the main sync loop. This means that if it takes a long time to connect to the remote cluster, we could potentially starve the cluster controller's goroutines. If we run into this problem, a possible solution is to handle these remote deletions in a separate queue.
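
The separate-queue idea can be sketched with a dedicated worker goroutine, so the sync loop pays only the cost of a channel send (names are hypothetical, not the actual controller code):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// remoteDeleter drains slow remote-cluster deletions on its own
// goroutine, so the cluster controller's sync workers are never blocked
// on connecting to a remote cluster.
type remoteDeleter struct {
	queue chan string
	wg    sync.WaitGroup
}

func newRemoteDeleter(deleteFn func(clusterName string)) *remoteDeleter {
	d := &remoteDeleter{queue: make(chan string, 64)}
	d.wg.Add(1)
	go func() {
		defer d.wg.Done()
		for name := range d.queue {
			deleteFn(name) // may block on a slow remote connection
		}
	}()
	return d
}

// Enqueue is what the main sync loop would call; it returns immediately
// (as long as the buffer is not full).
func (d *remoteDeleter) Enqueue(clusterName string) { d.queue <- clusterName }

// Stop closes the queue and waits for in-flight deletions to finish.
func (d *remoteDeleter) Stop() { close(d.queue); d.wg.Wait() }

func main() {
	var deleted []string
	d := newRemoteDeleter(func(name string) {
		time.Sleep(10 * time.Millisecond) // stand-in for a slow remote call
		deleted = append(deleted, name)
	})
	d.Enqueue("cluster-a")
	d.Enqueue("cluster-b")
	d.Stop()
	fmt.Println(deleted)
}
```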

Controllers sending empty patches

$ oc logs cluster-operator-controller-manager-86f754b7d9-msbhq | grep "about to patch"
time="2018-01-29T14:37:12Z" level=debug msg="about to patch cluster with {\"status\":{\"conditions\":[{\"lastProbeTime\":\"2018-01-29T14:37:12Z\",\"lastTransitionTime\":\"2018-01-29T14:37:12Z\",\"message\":\"Job cluster-operator/job-infra-dgoodwin-cluster-t9fnm is running since \\u003cnil\\u003e. Pod completions: 0, failures: 0\",\"reason\":\"JobRunning\",\"status\":\"True\",\"type\":\"InfraProvisioning\"}]}}" cluster=cluster-operator/dgoodwin-cluster
time="2018-01-29T14:37:12Z" level=debug msg="about to patch cluster with {}" cluster=cluster-operator/dgoodwin-cluster
time="2018-01-29T14:37:13Z" level=debug msg="about to patch cluster with {}" cluster=cluster-operator/dgoodwin-cluster
time="2018-01-29T14:37:13Z" level=debug msg="about to patch cluster with {}" cluster=cluster-operator/dgoodwin-cluster
time="2018-01-29T14:37:13Z" level=debug msg="about to patch cluster with {}" cluster=cluster-operator/dgoodwin-cluster
time="2018-01-29T14:38:41Z" level=debug msg="about to patch cluster with {\"status\":{\"conditions\":[{\"lastProbeTime\":\"2018-01-29T14:38:41Z\",\"lastTransitionTime\":\"2018-01-29T14:38:41Z\",\"message\":\"Job cluster-operator/job-infra-dgoodwin-cluster-t9fnm completed at 2018-01-29 14:38:41 +0000 UTC\",\"reason\":\"JobCompleted\",\"status\":\"False\",\"type\":\"InfraProvisioning\"},{\"lastProbeTime\":\"2018-01-29T14:38:41Z\",\"lastTransitionTime\":\"2018-01-29T14:38:41Z\",\"message\":\"Job cluster-operator/job-infra-dgoodwin-cluster-t9fnm completed at 2018-01-29 14:38:41 +0000 UTC\",\"reason\":\"JobCompleted\",\"status\":\"True\",\"type\":\"InfraProvisioned\"}],\"provisioned\":true,\"provisionedJobGeneration\":1}}" cluster=cluster-operator/dgoodwin-cluster
time="2018-01-29T14:38:42Z" level=debug msg="about to patch machineset with {\"status\":{\"conditions\":[{\"lastProbeTime\":\"2018-01-29T14:38:42Z\",\"lastTransitionTime\":\"2018-01-29T14:38:42Z\",\"message\":\"Job cluster-operator/provision-machineset-dgoodwin-cluster-master-f7dlr-kj9wm is running since \\u003cnil\\u003e. Pod completions: 0, failures: 0\",\"reason\":\"JobRunning\",\"status\":\"True\",\"type\":\"HardwareProvisioning\"}]}}" machineset=cluster-operator/dgoodwin-cluster-master-f7dlr
time="2018-01-29T14:38:42Z" level=debug msg="about to patch machineset with {}" machineset=cluster-operator/dgoodwin-cluster-master-f7dlr
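
One way to avoid the no-op calls is to compare the serialized old and new statuses and skip the PATCH when nothing changed. A stdlib sketch of that guard (the controllers presumably build strategic merge patches; this only shows the skip logic):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// buildStatusPatch marshals old and new status and returns nil when
// they are identical, so callers can skip the empty "{}" patches seen
// in the log above.
func buildStatusPatch(oldStatus, newStatus interface{}) ([]byte, error) {
	oldJSON, err := json.Marshal(oldStatus)
	if err != nil {
		return nil, err
	}
	newJSON, err := json.Marshal(newStatus)
	if err != nil {
		return nil, err
	}
	if bytes.Equal(oldJSON, newJSON) {
		return nil, nil // nothing changed; don't send a patch at all
	}
	patch := map[string]interface{}{"status": newStatus}
	return json.Marshal(patch)
}

func main() {
	type status struct{ Provisioned bool }
	p, _ := buildStatusPatch(status{true}, status{true})
	fmt.Println(p == nil) // identical statuses produce no patch
	p, _ = buildStatusPatch(status{false}, status{true})
	fmt.Println(string(p))
}
```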

In-progress jobs not deleted when resource deleted

For controllers that do not run a job when the resource is deleted, any in-progress jobs for that controller are left running after the resource is deleted. For example, say the master controller's job is running when the master machine set is deleted: the job will continue running. This does not cause problems or residual resources in the cloud; it only causes jobs and pods to exist and run longer than they need to, and produces some errors in the controller logs when the job does eventually finish.
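
A deletion handler could select the resource's outstanding jobs and delete them (with foreground propagation to reap the pods). A sketch of the selection step, using a hypothetical owner label key:

```go
package main

import "fmt"

// job is a minimal stand-in for a batchv1.Job with its labels.
type job struct {
	Name   string
	Labels map[string]string
}

// jobsToClean selects the in-progress jobs that belong to a deleted
// resource. The label key here is hypothetical; the real controllers
// would use whatever ownership label or owner reference they already set.
func jobsToClean(jobs []job, resourceUID string) []string {
	var names []string
	for _, j := range jobs {
		if j.Labels["cluster-operator.openshift.io/owner-uid"] == resourceUID {
			names = append(names, j.Name)
		}
	}
	return names
}

func main() {
	jobs := []job{
		{Name: "master-abc", Labels: map[string]string{"cluster-operator.openshift.io/owner-uid": "123"}},
		{Name: "infra-xyz", Labels: map[string]string{"cluster-operator.openshift.io/owner-uid": "999"}},
	}
	fmt.Println(jobsToClean(jobs, "123"))
}
```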

Intermittent integration test race failures

This appears to be coming and going in master today. I saw it surface on my PR in Jenkins and locally; after switching to testing off master, it appears there sometimes as well, but not always. It seems relatively easy to reproduce, though; I'm seeing it every couple of runs or so.

==================
WARNING: DATA RACE                                                                                                                                                                                                                     
Read at 0x00c4208829c0 by goroutine 73:
  runtime.chansend()
      /home/dgoodwin/.gvm/gos/go1.9.4/src/runtime/chan.go:128 +0x0
  github.com/openshift/cluster-operator/test/integration.startServerAndControllers.func1()
      /home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/watch/watch.go:135 +0x503
  github.com/openshift/cluster-operator/vendor/k8s.io/client-go/testing.(*SimpleReactor).React()
      /home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/client-go/testing/fixture.go:410 +0x64
  github.com/openshift/cluster-operator/vendor/k8s.io/client-go/testing.(*Fake).Invokes()
      /home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/client-go/testing/fake.go:143 +0x267
  github.com/openshift/cluster-operator/vendor/k8s.io/client-go/kubernetes/typed/batch/v1/fake.(*FakeJobs).Create()
      /home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/client-go/kubernetes/typed/batch/v1/fake/fake_job.go:82 +0x28f                                                                                         
  github.com/openshift/cluster-operator/pkg/controller.(*jobControl).createJob()                                   
      /home/dgoodwin/go/src/github.com/openshift/cluster-operator/pkg/controller/jobcontrol.go:409 +0x873          
  github.com/openshift/cluster-operator/pkg/controller.(*jobControl).ControlJobs()                                 
      /home/dgoodwin/go/src/github.com/openshift/cluster-operator/pkg/controller/jobcontrol.go:235 +0x6e6
  github.com/openshift/cluster-operator/pkg/controller.(*jobSync).Sync()
      /home/dgoodwin/go/src/github.com/openshift/cluster-operator/pkg/controller/jobsync.go:130 +0x54f
  github.com/openshift/cluster-operator/pkg/controller.(JobSync).Sync-fm()
      /home/dgoodwin/go/src/github.com/openshift/cluster-operator/pkg/controller/accept/accept_controller.go:108 +0x5e
  github.com/openshift/cluster-operator/pkg/controller/machineset.(*Controller).processNextWorkItem()
      /home/dgoodwin/go/src/github.com/openshift/cluster-operator/pkg/controller/machineset/machineset_controller.go:233 +0x145
  github.com/openshift/cluster-operator/pkg/controller/machineset.(*Controller).worker()
      /home/dgoodwin/go/src/github.com/openshift/cluster-operator/pkg/controller/machineset/machineset_controller.go:222 +0x38
  github.com/openshift/cluster-operator/pkg/controller/machineset.(*Controller).(github.com/openshift/cluster-operator/pkg/controller/machineset.worker)-fm()
      /home/dgoodwin/go/src/github.com/openshift/cluster-operator/pkg/controller/machineset/machineset_controller.go:203 +0x41
  github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1()
      /home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x6f
  github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil()
      /home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xcd
  github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until()
      /home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x5a

Previous write at 0x00c4208829c0 by goroutine 254:
  [failed to restore the stack]

Goroutine 73 (running) created at:
  github.com/openshift/cluster-operator/pkg/controller/machineset.(*Controller).Run()
      /home/dgoodwin/go/src/github.com/openshift/cluster-operator/pkg/controller/machineset/machineset_controller.go:203 +0x2a6
  github.com/openshift/cluster-operator/test/integration.startServerAndControllers.func4.1()
      /home/dgoodwin/go/src/github.com/openshift/cluster-operator/test/integration/controller_test.go:366 +0x53
  github.com/openshift/cluster-operator/test/integration.startServerAndControllers.func8()
      /home/dgoodwin/go/src/github.com/openshift/cluster-operator/test/integration/controller_test.go:404 +0x5a

Goroutine 254 (finished) created at:
  github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start()
      /home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:69 +0x6f
  github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel()
      /home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:53 +0xd0
  github.com/openshift/cluster-operator/vendor/k8s.io/client-go/tools/cache.(*controller).Run()
      /home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/client-go/tools/cache/controller.go:122 +0x3af
  github.com/openshift/cluster-operator/vendor/k8s.io/client-go/tools/cache.(*sharedIndexInformer).Run()
      /home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/client-go/tools/cache/shared_informer.go:226 +0x80c
==================
E0223 09:57:00.972004   31648 runtime.go:66] Observed a panic: "send on closed channel" (send on closed channel)
/home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:72                                                                                                                  
/home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65                                                                                                                  
/home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/home/dgoodwin/.gvm/gos/go1.9.4/src/runtime/asm_amd64.s:509
/home/dgoodwin/.gvm/gos/go1.9.4/src/runtime/panic.go:491
/home/dgoodwin/.gvm/gos/go1.9.4/src/runtime/chan.go:173
/home/dgoodwin/.gvm/gos/go1.9.4/src/runtime/chan.go:113
/home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/watch/watch.go:135
/home/dgoodwin/go/src/github.com/openshift/cluster-operator/test/integration/controller_test.go:300
/home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/client-go/testing/fixture.go:410          
/home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/client-go/testing/fake.go:143            
/home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/client-go/kubernetes/typed/batch/v1/fake/fake_job.go:82                                                                                                      
/home/dgoodwin/go/src/github.com/openshift/cluster-operator/pkg/controller/jobcontrol.go:409                       
/home/dgoodwin/go/src/github.com/openshift/cluster-operator/pkg/controller/jobcontrol.go:235                       
/home/dgoodwin/go/src/github.com/openshift/cluster-operator/pkg/controller/jobsync.go:130                          
/home/dgoodwin/go/src/github.com/openshift/cluster-operator/pkg/controller/accept/accept_controller.go:108
/home/dgoodwin/go/src/github.com/openshift/cluster-operator/pkg/controller/machineset/machineset_controller.go:233
/home/dgoodwin/go/src/github.com/openshift/cluster-operator/pkg/controller/machineset/machineset_controller.go:222
/home/dgoodwin/go/src/github.com/openshift/cluster-operator/pkg/controller/machineset/machineset_controller.go:203
/home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133      
/home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88                
/home/dgoodwin/.gvm/gos/go1.9.4/src/runtime/asm_amd64.s:2337                            
panic: send on closed channel [recovered]                                                                                     
        panic: send on closed channel                                                                                                                        
                                                                                                                              
goroutine 2408 [running]:                                                                           
github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)                
        /home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x16f
panic(0x231f0a0, 0x27fa9f0)                                                                                                 
        /home/dgoodwin/.gvm/gos/go1.9.4/src/runtime/panic.go:491 +0x2a2                 
github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/watch.(*FakeWatcher).Add(...)                         
        /home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/watch/watch.go:135
github.com/openshift/cluster-operator/test/integration.startServerAndControllers.func1(0x3515dc0, 0xc42046b700, 0x4, 0xc421ec0000, 0xc42012f7d8, 0xc42196cd30, 0x2588514)
        /home/dgoodwin/go/src/github.com/openshift/cluster-operator/test/integration/controller_test.go:300 +0x504
github.com/openshift/cluster-operator/vendor/k8s.io/client-go/testing.(*SimpleReactor).React(0xc42196cd20, 0x3515dc0, 0xc42046b700, 0xc421bef801, 0x80, 0x16794e8, 0x436ee0, 0xc421bef678)
        /home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/client-go/testing/fixture.go:410 +0x65
github.com/openshift/cluster-operator/vendor/k8s.io/client-go/testing.(*Fake).Invokes(0xc42012f7c0, 0x3515dc0, 0xc42046b700, 0x34f86c0, 0xc421c26c00, 0x0, 0x0, 0x0, 0x0)
        /home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/client-go/testing/fake.go:143 +0x268         
github.com/openshift/cluster-operator/vendor/k8s.io/client-go/kubernetes/typed/batch/v1/fake.(*FakeJobs).Create(0xc421959740, 0xc421c26800, 0xe, 0x351c0c0, 0xc421959740)
        /home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/client-go/kubernetes/typed/batch/v1/fake/fake_job.go:82 +0x290
github.com/openshift/cluster-operator/pkg/controller.(*jobControl).createJob(0xc42194c1b0, 0xc4215b4db0, 0x28, 0x35240c0, 0xc42050c6c0, 0x34f2a80, 0xc421aed3e0, 0x3522120, 0xc420666fa0, 0x0, ...)
        /home/dgoodwin/go/src/github.com/openshift/cluster-operator/pkg/controller/jobcontrol.go:409 +0x874    
github.com/openshift/cluster-operator/pkg/controller.(*jobControl).ControlJobs(0xc42194c1b0, 0xc4215b4db0, 0x28, 0x35240c0, 0xc42050c6c0, 0xc421aed301, 0x34f2a80, 0xc421aed3e0, 0xc421244300, 0xc4209a00c0, ...)
        /home/dgoodwin/go/src/github.com/openshift/cluster-operator/pkg/controller/jobcontrol.go:235 +0x6e7
github.com/openshift/cluster-operator/pkg/controller.(*jobSync).Sync(0xc420242380, 0xc4215b4db0, 0x28, 0x0, 0x0)
        /home/dgoodwin/go/src/github.com/openshift/cluster-operator/pkg/controller/jobsync.go:130 +0x550                   
github.com/openshift/cluster-operator/pkg/controller.(JobSync).Sync-fm(0xc4215b4db0, 0x28, 0xc420432dc0, 0x226e100)
        /home/dgoodwin/go/src/github.com/openshift/cluster-operator/pkg/controller/accept/accept_controller.go:108 +0x5f   
github.com/openshift/cluster-operator/pkg/controller/machineset.(*Controller).processNextWorkItem(0xc421242140, 0xc420463e00)
        /home/dgoodwin/go/src/github.com/openshift/cluster-operator/pkg/controller/machineset/machineset_controller.go:233 +0x146
github.com/openshift/cluster-operator/pkg/controller/machineset.(*Controller).worker(0xc421242140)      
        /home/dgoodwin/go/src/github.com/openshift/cluster-operator/pkg/controller/machineset/machineset_controller.go:222 +0x39   
github.com/openshift/cluster-operator/pkg/controller/machineset.(*Controller).(github.com/openshift/cluster-operator/pkg/controller/machineset.worker)-fm()
        /home/dgoodwin/go/src/github.com/openshift/cluster-operator/pkg/controller/machineset/machineset_controller.go:203 +0x42
github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc421147070)       
        /home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x70
github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc421147070, 0x3b9aca00, 0x0, 0x1, 0xc420448000)                                                                     
        /home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xce
github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc421147070, 0x3b9aca00, 0xc420448000)
        /home/dgoodwin/go/src/github.com/openshift/cluster-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x5b
created by github.com/openshift/cluster-operator/pkg/controller/machineset.(*Controller).Run                       
        /home/dgoodwin/go/src/github.com/openshift/cluster-operator/pkg/controller/machineset/machineset_controller.go:203 +0x2a7
FAIL    github.com/openshift/cluster-operator/test/integration  20.598s                                                      
make: *** [Makefile:247: test-integration] Error 1                                                                               

Handle remote machineset deletion returning errors or taking a very long time

If something goes wrong on the target cluster and the controller that handles remote deletions in the root cluster cannot successfully delete the remote machinesets, then we should either set an error status or, after a given amount of time, give up trying to delete the remote machinesets and proceed with the rest of deprovisioning.

Fix formatting of ansible variables in job configmaps

If you examine the yaml for a configmap we mount into an ansible job, the hosts file comes out formatted nicely, but the vars file does not:

apiVersion: v1
data:
  hosts: |2

    [OSEv3:children]
    masters
    nodes
    etcd

    [OSEv3:vars]
    ansible_become=true

    [masters]

    [etcd]

    [nodes]
  vars: "---\n# Variables that are commented in this file are optional; uncommented
    variables\n# are mandatory.\n\n# Default values for each variable are provided,
    as applicable.\n# Example values for mandatory variables are provided as a comment
    at the end\n# of the line.\n\n# ------------------------ #\n# Common/Cluster Variables
[snip]

Having a properly formatted vars file would be nice to have for debugging or re-use.

There are some issues around this for k8s in general, but hopefully we can figure out why one is fine and the other is not and get them consistent.
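A likely explanation (an assumption worth verifying, not confirmed here): YAML emitters will only use the readable literal block style when no line of the string ends in whitespace; a single trailing space anywhere forces the escaped, quoted fallback. The same content in both styles:

```yaml
# Literal block style: an emitter can use this only when no line of the
# string ends in whitespace.
vars: |
  ---
  # Variables that are commented in this file are optional
  openshift_release: "3.10"
---
# Quoted fallback: what the emitter produces when some line carries a
# trailing space (note the space before the first \n).
vars: "---\n# Variables that are commented in this file are optional \nopenshift_release: \"3.10\"\n"
```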

Make failing jobs less obtrusive

  1. Add a backoff delay between failed jobs. Currently the next job starts immediately after the previous one fails.
  2. Limit how long to keep retrying jobs that are failing. Currently, failing jobs will be restarted indefinitely.

It isn't possible to customize OpenShift installer variables

In the application we are building, we want to give users the ability to create different kinds of clusters, which are essentially defined by sets of OpenShift installer variables. Let's say, for example, that we want a kind of cluster that has ansible_service_broker_install set to false. Is there any way to do that without modifying the code of the operator? As far as I can see there is a template hard-coded in the source, but no mechanism to replace it.
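One possible mechanism (a sketch only; the operator has no such option today, and the path and constant below are invented for illustration) would be to let the hard-coded template be overridden from a file, e.g. a mounted ConfigMap:

```go
package main

import (
	"fmt"
	"io/ioutil"
)

// defaultVarsTemplate stands in for the template hard-coded in the source.
const defaultVarsTemplate = "ansible_service_broker_install=true\n"

// loadVarsTemplate prefers an operator-provided override file (for
// example from a mounted ConfigMap) and falls back to the built-in
// template when no override is present. Hypothetical helper.
func loadVarsTemplate(overridePath string) string {
	if b, err := ioutil.ReadFile(overridePath); err == nil {
		return string(b)
	}
	return defaultVarsTemplate
}

func main() {
	fmt.Print(loadVarsTemplate("/etc/cluster-operator/vars-override.yaml"))
}
```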

Generate mocks from make

The mocks used in unit testing should be generated via make. Currently they must be regenerated manually whenever the mocked interfaces change.

go get github.com/golang/mock/gomock
go get github.com/golang/mock/mockgen

go generate ./pkg/controller/...
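Wired into the Makefile, that could look roughly like the following (a sketch; the target name is an assumption):

```make
.PHONY: generate-mocks
generate-mocks:
	go get github.com/golang/mock/gomock
	go get github.com/golang/mock/mockgen
	go generate ./pkg/controller/...
```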

Allow clusters to be auto-updated to the latest vetted golden image

As a user on dedicated, I may want to have my cluster auto-updated to the latest vetted golden image.

Couple ideas to accomplish this:

  1. Use a cluster version that is updated by an OpenShift administrator when the latest vetted golden image is released.
  2. Have an out-of-band process that moves every cluster from the cluster version with the older vetted golden image to the cluster version with the latest one.

Deleting a cluster should not launch provisioning jobs.

Something appears to be wrong with the recent job logic, particularly around cluster deletions. Deleting a cluster appears to kick off repeated machineset provisioning jobs that all fail.

This surfaces when using the fake playbook mock pod rather than AWS, but the situation may be the same there.

Attempting to delete the jobs also leads to them being recreated.

Add taints to nodes

In addition to node labels, we should also allow the user to add taints to nodes.
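For reference, taints use the standard Kubernetes Node API structure below; how they would be surfaced alongside node labels in the machineset spec is the open design question:

```yaml
# Taints as they appear on a Node object (standard Kubernetes API).
spec:
  taints:
  - key: dedicated
    value: infra
    effect: NoSchedule
```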
