
kansible's Introduction

Kansible

Kansible lets you orchestrate operating system processes on Windows or any Unix in the same way you orchestrate your Docker containers with Kubernetes: Ansible provisions the software onto hosts, and Kubernetes orchestrates the processes and the containers as a single system.

kansible logo

Kansible uses:

  • Ansible to install, configure and provision your software onto machines using playbooks
  • Kubernetes to run and manage the processes and perform service discovery, scaling and load balancing, together with centralised logging, metrics, alerts and management.

Kansible provides a single pane of glass, CLI and REST API for all your processes, whether they are inside Docker containers or running as vanilla processes on Windows, AIX, Solaris, HP-UX or old Linux distros that predate Docker.

Kansible lets you migrate to a pure container based Docker world at whatever pace suits you, while using Kubernetes to orchestrate all your containers and operating system processes for your entire journey.

Features

  • All your processes appear as Pods inside Kubernetes namespaces so you can visualise, query and watch the status of your processes and containers in a canonical way
  • Each kind of process has its own Replication Controller to ensure processes keep running, so you can manually or automatically scale the number of processes up or down, up to the number of hosts in your Ansible inventory
  • Reuse Kubernetes liveness checks so that Kubernetes can monitor the state of your process and restart it if it goes bad
  • Reuse Kubernetes readiness checks so that Kubernetes can know when your process can be included into the internal or external service load balancer
  • You can view the logs of all your processes in the canonical kubernetes way via the CLI, REST API or web console
  • Port forwarding works from the pods to the remote processes so that you can reuse Kubernetes Services to load balance across your processes automatically
  • Centralised logging, metrics and alerting work equally across your containers and processes
  • You can open a shell into the remote process machine via the CLI, REST API or web console; this is either a Unix bash shell or a Windows cmd shell, as shown in the fabric8 console screenshot below:

fabric8 console screenshot

Ansible perspective on Kansible

If you already use Ansible, then one way to think about Kansible is that you continue to use Ansible exactly as you have been doing, with reusable, composable playbooks and so forth. The only change Kansible introduces to your playbooks is that you don't run Unix or Windows services (e.g. via systemd / init.d). You install and configure the software via Ansible playbooks, setting up whatever directories, users and permissions you require, but you don't create services or run the software.

Then we use Kubernetes (and kansible pods) as the alternative to Unix and Windows services. The reason we do this is that Kubernetes is a better, distributed version of systemd / init.d / Windows services that also includes features like:

  • service discovery and load balancing
  • health monitoring
  • centralised logging, metrics and alerts
  • manual and automatic scaling up or down
  • a consistent web console, CLI and REST API across processes running via kansible and Docker containers

Kubernetes perspective on Kansible

If you already use Kubernetes then you could look at Kansible as a way of extending the reach of Kubernetes to manage both Docker containers on hosts that support Docker and remote processes on operating systems that don't. That makes Kubernetes the orchestrator of all your software, whether it's Dockerized or not!

All your processes are belong to us! :)

Longer term it would be great for Docker, along with the kubelet, to be ported to more operating systems. Ideally more operating systems could then run native Docker and the kubelet, in which case there would be less need for kansible. But at the time of writing, that goal looks some way off for older versions of Windows along with AIX, Solaris and HP-UX.

What's really compelling about using Kubernetes to manage Docker containers and operating system processes via Kansible is that you can mix and match on a per-microservice basis - use the right tool for the job right now - while using a single orchestration platform, Kubernetes, with a single REST API, CLI and web console - and standard service discovery, load balancing and management functions.

Using Docker is the more optimal approach, so we hope over time that you can use more Docker and less kansible; but it's going to take our industry a while to Dockerize all the things and move everything to Linux, or to have fully working Docker + Kubernetes on Windows and all flavours of Unix. Until then, kansible can help! At least we can now pretend everything's Dockerized and running on Linux from an orchestration and management perspective ;)

How to use Kansible

You use kansible as follows:

  • create or reuse an existing Ansible playbook to install and provision the software you wish to run on a number of machines defined by the Ansible inventory

  • if you reuse an existing playbook, make sure you disable starting the Unix / Windows services, as the kansible pods will run that command instead.

  • run the Ansible playbook either as part of a CI / CD build pipeline when there's a change to the git repo of the Playbook, or using a command line tool, cron or Ansible Tower

  • define a Replication Controller YAML file at kubernetes/$HOSTS/rc.yml for running the command for your process like this example.

  • the RC YAML file contains the command you need to run remotely to execute your process via $KANSIBLE_COMMAND

    • you can think of the RC YAML file as being like the systemd configuration file, describing the command to run to start up the application. Only it's a single file for the entire cluster, stored in Kubernetes. Plus it can include readiness and liveness probes too.
    • You can use the {{ foo_bar }} Ansible variable expressions in the RC YAML to refer to variables from your global Ansible variables file
  • to take advantage of Kubernetes services, you can also define any number of Service YAML files at kubernetes/$HOSTS/service.yml

  • whenever the playbook git repo changes, run the kansible rc command inside a clone of the playbook git repository:

    kansible rc myhosts

where myhosts is the name of the host group you wish to use from the Ansible inventory.
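For orientation, a typical playbook repository then looks roughly like this (the file names follow the conventions above; your playbook and host group names will differ):

inventory                        # Ansible inventory defining the myhosts group
provisioning/site.yml            # Ansible playbook that installs and configures the software
kubernetes/myhosts/rc.yml        # Replication Controller defining $KANSIBLE_COMMAND etc.
kubernetes/myhosts/service.yml   # optional Kubernetes Service definitions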

Kansible will then create or update Secrets for any SSH private keys in your Ansible inventory and create or update a Replication Controller of kansible pods, which will start and supervise your processes, capture the logs and redirect ports to enable liveness checks, centralised metrics and Kubernetes services.

So for each remote process on Windows, Linux, Solaris, AIX or HP-UX, kansible will create a kansible pod in Kubernetes which starts the command and tails the log to stdout/stderr. You can then use Replication Controller scaling to start and stop your remote processes!

Working with kansible pods

  • As processes start and stop, you'll see them appear or disappear inside Kubernetes as kansible pods via the CLI, REST API or the console.
  • You can scale up and down the kansible Replication Controller via CLI, REST API or console.
  • You can then view the logs of any process in the usual kubernetes way via the command line, REST API or web console.
  • Centralised logging then works great on all your processes (providing the command you run outputs logs to stdout / stderr)

Exposing ports

Any ports defined in the Replication Controller YAML file will be automatically forwarded to the remote process. See this example rc.yml file to see how to expose ports.
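As a minimal sketch, ports are declared in the usual Kubernetes way on the container spec (the port number here is illustrative; see the linked example rc.yml for the real definition):

spec:
  template:
    spec:
      containers:
      - ports:
        - containerPort: 8080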

This means you can take advantage of things like centralised metrics and alerting, liveness checks, Kubernetes Services along with the built in service discovery and load balancing inside Kubernetes!

To see the use of Kubernetes Services and load balancing across remote processes with kansible check out the fabric8-ansible-hawtapp demo.

Opening a shell on the remote process

You can open a shell directly on the remote machine via the web console or by running

oc exec -it -p mypodname bash

Then you'll get a remote shell on the Windows or Unix box!

Examples

Before you start with the kansible examples, you'll need a working Kubernetes or OpenShift cluster.

If you don't yet have a Kubernetes cluster to play with, try using the Fabric8 Vagrant image that includes OpenShift Origin as the Kubernetes cluster.

To run the fabric8-ansible-spring-boot example, type the following to set up the VMs and provision them with Ansible:

git clone https://github.com/fabric8io/fabric8-ansible-spring-boot.git
cd fabric8-ansible-spring-boot
vagrant up
ansible-playbook -i inventory provisioning/site.yml -vv

You should now have 2 sample VMs (app1 and app2) with a Spring Boot based Java application provisioned onto the machines in the /opt folder, but with nothing actually running yet.

Now, to set up the kansible Replication Controller, run the following, where appservers is the host group from the Ansible inventory in the inventory file:

kansible rc appservers

This should create a Replication Controller called springboot-demo along with 2 pods, one for each host in the appservers group.

You should be able to look at the logs of those 2 pods in the usual Kubernetes / OpenShift way; e.g. via the fabric8 or OpenShift console or via the CLI:

e.g.

oc get pods
oc logs -f springboot-demo-81ryw

where springboot-demo-81ryw is the name of the pod whose logs you wish to view.

You can now scale down / up the number of pods using the web console or the command line:

oc scale rc --replicas=2 springboot-demo

Important files

The examples use the following files:

This demonstration (the fabric8-ansible-hawtapp demo) is similar to the one above, but it also demonstrates:

  • using both Windows and Linux boxes as the hosts
  • using Kubernetes Services to load balance across the processes

To run this example type the following to setup the VMs and provision things with Ansible:

git clone https://github.com/fabric8io/fabric8-ansible-hawtapp.git
cd fabric8-ansible-hawtapp
vagrant up
ansible-playbook -i inventory provisioning/site.yml -vv

Now, to set up the Replication Controller for the supervisors, run the following, where appservers is the host group from the inventory:

kansible rc appservers

The pods should now start up for each host in the inventory.

Using windows machines

This example uses 1 Windows box and 1 Linux box in the inventory. It shows that kansible can support both operating systems just fine, though it does require the playbooks to handle the differences.

You will also typically need different commands to run on Unix versus Windows, which is configured in the rc.yml file. For more details see the documentation on the KANSIBLE_COMMAND_WINRM environment variable below.

To use Windows hosts you may first need to make sure you've installed pywinrm:

sudo pip install pywinrm

If you open shells via the fabric8 console or via oc exec -it -p podName bash for both running pods, you'll see that one runs on a Linux box and one runs on a Windows machine, as in this example screenshot!

Trying out Kubernetes Services

This example also creates a Kubernetes Service which load balances across the remote processes, thanks to the kubernetes/appservers/service.yml file, which is then exposed via the LoadBalancer type (on OpenShift a Route is created for this).
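As a rough sketch, such a service.yml might look something like this (the ports and selector below are illustrative assumptions; see the kubernetes/appservers/service.yml file in the demo for the real definition):

apiVersion: "v1"
kind: "Service"
metadata:
  name: "hawtapp-demo"
spec:
  type: "LoadBalancer"
  ports:
  - port: 80
    targetPort: 8080
  selector:
    project: "hawtapp-demo"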

If you are using the fabric8 console you'll see the hawtapp-demo service in the Services tab.

You can try out the service in your browser via: http://hawtapp-demo-default.vagrant.f8/camel/hello?name=Kansible

Or using the CLI:

curl http://hawtapp-demo-default.vagrant.f8/camel/hello?name=Kansible

Each request load balances over the available processes. You can scale the Replication Controller down to 1 pod or up to 2 and each request should still work.

Configuration

To configure kansible you need to configure a Replication Controller in a file called kubernetes/$HOSTS/rc.yml.

Specify a name and optionally some labels for the replication controller inside the metadata object. There's no need to specify the spec.selector or spec.template.containers[0].metadata.labels values as those are inherited by default from the metadata.labels.

Environment variables

You can specify the following environment variables in the spec.template.spec.containers[0].env array like the use of KANSIBLE_COMMAND below.

These values can use Ansible variable expressions too.

KANSIBLE_COMMAND

Then you must specify a command to run via the $KANSIBLE_COMMAND environment variable:

apiVersion: "v1"
kind: "ReplicationController"
metadata:
  name: "myapp"
  labels:
    project: "myapp"
    version: "{{ app_version }}"
spec:
  template:
    spec:
      containers:
      - env:
        - name: "KANSIBLE_COMMAND"
          value: "/opt/foo-{{ app_version }}/bin/run.sh"
      serviceAccountName: "fabric8"

KANSIBLE_COMMAND_WINRM

This environment variable lets you provide a Windows-specific command. It works the same as the KANSIBLE_COMMAND environment variable above, but this value is only used for Ansible connections of the form winrm, i.e. it supplies a Windows-only command to execute.

It's quite common to have a foo.sh script to run sh/bash scripts on Unix and then a foo.bat or foo.cmd file for Windows.
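For example, a sketch of supplying both variants in the same rc.yml (the paths below are illustrative):

      containers:
      - env:
        - name: "KANSIBLE_COMMAND"
          value: "/opt/foo-{{ app_version }}/bin/run.sh"
        - name: "KANSIBLE_COMMAND_WINRM"
          value: "C:\\opt\\foo-{{ app_version }}\\bin\\run.bat"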

KANSIBLE_EXPORT_ENV_VARS

Specify a space separated list of environment variable names which should be exported into the remote shell when running the remote command.

Note that sshd typically only accepts a small set of exported environment variables (often just those starting with LC_*), so you may need to configure AcceptEnv in /etc/ssh/sshd_config on the remote hosts to allow your variables through.
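As a sketch, assuming you want to export two hypothetical variables FOO and BAR to the remote shell, you would set something like this in the rc.yml env array:

        - name: "KANSIBLE_EXPORT_ENV_VARS"
          value: "FOO BAR"

and allow those names through sshd on the remote hosts, e.g. in /etc/ssh/sshd_config:

AcceptEnv LANG LC_* FOO BAR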

KANSIBLE_BASH

This defines the path where the bash script will be generated for running a remote bash shell. It allows the bash command inside the kansible pod to remotely execute either /bin/bash, or cmd.exe for Windows machines, on the remote machine when you open a shell inside the Web Console or via:

oc exec -p mypodname bash
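As a sketch, the value is just a writable path inside the kansible pod; the path below is purely illustrative:

        - name: "KANSIBLE_BASH"
          value: "/usr/local/bin/bash"    # illustrative path inside the pod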

KANSIBLE_PORT_FORWARD

Allows port forwarding to be disabled.

export KANSIBLE_PORT_FORWARD=false

This is mostly useful so that the bash command within a pod does not also try to port forward, as that would fail ;)

SSH or WinRM

The best way to configure whether to connect via SSH (for Unix machines) or WinRM (for Windows machines) is via the Ansible inventory.

By default SSH is used on port 22 unless you specify ansible_port in the inventory or specify --port on the command line.

You can configure Windows machines using the ansible_connection=winrm property in the inventory:

[winboxes]
windows1 ansible_host=localhost ansible_port=5985 ansible_user=foo ansible_pass=somepasswd! ansible_connection=winrm

[unixes]
app1 ansible_host=10.10.3.20 ansible_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/app1/virtualbox/private_key
app2 ansible_host=10.10.3.21 ansible_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/app2/virtualbox/private_key

You can also enable WinRM via the --winrm command line flag:

kansible pod --winrm somehosts somecommand

or by setting the KANSIBLE_WINRM environment variable, which is a little easier to configure in the RC YAML:

export KANSIBLE_WINRM=true
kansible pod somehosts somecommand

Checking the runtime status of the supervisors

To see which pods own which hosts run the following command:

oc export rc hawtapp-demo | grep ansible.fabric8  | sort

Where hawtapp-demo is the name of the RC for the supervisors.

The output is of the format:

pod.kansible.fabric8.io/app1: supervisor-znuj5
pod.kansible.fabric8.io/app2: supervisor-1same

where each line is of the form pod.kansible.fabric8.io/$HOSTNAME: $PODNAME

kansible's People

Contributors

chirino, fusesource-ci, gastaldi, jimmidyson, jstrachan, mattfarina, rawlingsj, rhuss


kansible's Issues

Is AIX supported?

The readme file claims that

Windows, AIX, Solaris or HP-UX or an old Linux distros

are supported. But closer examination of the make file shows that the target OSes are

linux darwin freebsd netbsd openbsd solaris windows

And there's also no AIX binary on the release page.

So is there any plan to add AIX support in future releases? It might be a big push for adoption in, say, banks' data centres.

add support for $KANSIBLE_COMMAND_WINRM for commands if using WinRM (i.e. on Windows)

it'd be nice to have Unix and Windows commands in the same rc.yml and pick between them based on whether the ansible_connection type is winrm or not. Then you only type each kind of command once in the rc.yml. If the commands are identical then cool, just use $KANSIBLE_COMMAND in the rc.yml; otherwise supply both it and $KANSIBLE_COMMAND_WINRM if they are different.

create a demo showing how to configure DNS so that it uses the kubernetes DNS resolver to find services

e.g. a sample app using a service called foo via a URL like http://foo/, where we then point /etc/resolv.conf at the OpenShift domain for the namespace of the app.

It looks like we could expose the LOCALDOMAIN environment variable as $namespace.vagrant.f8 or something to hopefully get the process resolving DNS names within the Kubernetes namespace (see http://man7.org/linux/man-pages/man5/resolv.conf.5.html).

Though this will require sshd configuration in the VMs to allow LOCALDOMAIN to be overridden. See https://github.com/fabric8io/kansible#kansible_export_env_vars for more details.

better support for port swizzling?

Imagine you have 5 different microservices as tarballs (springboot / hawtapp / karaf / tomcat / wildfly) that you want to be able to provision so that some hosts get more than one app. They're gonna clash ports pretty quickly.

One thing fabric8 v1 did was automatically port swizzle processes for you, by writing an env.sh / env.bat file with port numbers that the app would use to know what ports to use on startup.

It would be nice to have some kind of support for port swizzling. While folks can always manually use unique port numbers for all ports in all processes, that's soon gonna get error-prone and boring :)

Install time port swizzling

It might be nice to have some simple mechanism to port swizzle apps. Maybe if an environment variable is specified in the pod etc?

e.g. at installation time, if there's a canonical file with the port values inside (say apphome/bin/env.sh or apphome/bin/env.bat) we could maybe have some tool that finds all values of the form FOO_PORT=8080 and swizzles 8080 to 123456 or whatever, then keeps track of the mapping 8080 -> 123456.

So at provision time, we'd swizzle the ports to host unique values. Then we'd write a canonical file somewhere (e.g. apphome/ports.yml) of the form...

---
ports:
  - 8080: 1234567

Then if enabled the kansible pod on startup can load the apphome/ports.yml file and know the actual real ports to expose.

So the tool at install time would need to perform a search/replace of known port values in some file; for each app name + port number we'd look it up in some file on the host and, if the port is known, replace it - otherwise generate a new entry and so forth.

Then installing new versions of the same app would tend to reuse the same port numbers; new apps would get new port numbers associated etc.
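A very rough sketch of that install-time idea (the file names, starting port and the tool itself are all hypothetical; no such tool exists yet):

#!/bin/sh
# hypothetical install-time port swizzler: rewrite *_PORT=nnnn entries in env.sh
# to host-unique values and record the mapping in ports.yml
ENV_FILE=apphome/bin/env.sh
PORTS_FILE=apphome/ports.yml
NEXT_PORT=40000
printf -- '---\nports:\n' > "$PORTS_FILE"
for ORIG in $(grep -oE '_PORT=[0-9]+' "$ENV_FILE" | cut -d= -f2 | sort -u); do
  sed -i "s/_PORT=$ORIG/_PORT=$NEXT_PORT/g" "$ENV_FILE"   # GNU sed
  printf '  - %s: %s\n' "$ORIG" "$NEXT_PORT" >> "$PORTS_FILE"
  NEXT_PORT=$((NEXT_PORT + 1))
done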

Dynamic port allocation

Another option could be that we configure all the ports to use value 0, then use some kind of jolokia probe to figure out all the port numbers at runtime and dynamically write the apphome/ports.yml file as we see each port.

That's way more complex though; figuring all this stuff out statically is much simpler and more efficient :)

use the ports in the RC.yml file to enable port forwarding?

It might be handy, if folks want to take advantage of Kubernetes services, for the gosupervise pod to automatically port forward all ports listed in its PodSpec.Ports, so that you could then define a Service on top of the supervisor pods to create regular Kubernetes services for remote processes.

So for each port number in PodSpec.Ports we'd open that port and do a TCP forward to the host address on the same port number.

Build fail - specific golang version required maybe ?

The build process of kansible fails on MacOSX

dabou:~/Fuse/projects/fabric8/fabric8/kansible$ make bootstrap
go get -u github.com/golang/lint/golint github.com/mitchellh/gox
# github.com/mitchellh/gox
../../../../../MyApplications/go-1.5.1/src/github.com/mitchellh/gox/go.go:4: import /Users/chmoulli/MyApplications/go-1.5.1/pkg/darwin_amd64/bytes.a: object is [darwin amd64 go1.5.1 X:none] expected [darwin amd64 go1.5.2 X:none]

We should perhaps mention in the docs that golang 1.5.2 is required?

avoid the use of executing separate shell commands to apply the kubernetes resources?

it would be nice to make kansible self-contained, so it didn't depend on either oc or kubectl being installed. I tried to do that but couldn't get this apply code to properly be aware of the v1 schema :(

Here's the commented out code:
https://github.com/fabric8io/kansible/blob/master/ansible/ansible.go#L534-L535

I wonder if it's an easy fix?

Here's the code we try to run to apply a file:
https://github.com/fabric8io/kansible/blob/master/k8s/k8s.go#L220

"kansible rc host" command can´t create pods

Hello!

I have spent two days trying to get Kansible to work, but I can't.

I use an Ubuntu (Trusty 64) virtual machine where I have a Kubernetes cluster (v1.1.8) with a single node, which acts as both master and minion.

(screenshot)

First of all, following the example of fabric8-ansible-spring-boot, I provisioned the application via Ansible. Then, I ran Kansible as the example says, pointing to my host group with the command:

kansible rc master --replicas=1

where 'master' is my group of Ansible hosts.

The replication controller was successfully created with the following output:

(screenshot)

However, when I look at the description of the pod, I see that the container can't be created.

(screenshot)

I can't see the container logs either.

(screenshot)

And I can't run 'kubectl exec [command]' nor 'docker exec [command]'.

(screenshot)

(screenshot)

So I ran the Docker image (with docker run -it fabric8/kansible /bin/bash) to manually run the command which is executed by Kansible when the container is created: kansible pod $ANSIBLE_HOSTS. This command looks for the cluster at localhost:8080/api, but in the container's /etc/hosts, localhost points to the internal IP of the container, not to the host machine. The command should look for the cluster at the host's IP (192.168.90.50:8080), not the container's IP.

(screenshot)

Any advice on how I can get it to work?

Thanks in advance!

running commands on Windows hosts don't terminate if the pod terminates

With SSH things work nicely; commands terminate gracefully if the SSH connection dies, which maps to the 'run a pod, it may die' model of Kubernetes.

Unfortunately right now WinRM doesn't kill processes after the pod dies.

Not sure yet how to fix this nicely. We maybe need some manager to keep things running while a pod is running and then terminate them? Wonder if PowerShell sessions would help?

release failing using kubernetes-workflow

The latest release seems to fail but I'm not sure why; docker.io is accessible from the node but we still get the error below. @iocanel can you see what I am missing?

here's the Jenkinsfile

http://jenkins.cd.origin.fabric8.io/job/fabric8io/job/kansible/branch/master/7/console

Caused by: java.net.ProtocolException: Expected HTTP 101 response but was '500 Internal Server Error'
    at com.squareup.okhttp.ws.WebSocketCall.createWebSocket(WebSocketCall.java:123)
    at com.squareup.okhttp.ws.WebSocketCall.access$000(WebSocketCall.java:40)
    at com.squareup.okhttp.ws.WebSocketCall$1.onResponse(WebSocketCall.java:98)
    at com.squareup.okhttp.Call$AsyncCall.execute(Call.java:177)
    at com.squareup.okhttp.internal.NamedRunnable.run(NamedRunnable.java:33)
    ... 3 more

Full stack trace:

[master] Running shell script
[Pipeline] } //withKubernetesPod
[Pipeline] Run build steps as a Kubernetes Pod : End
[Pipeline] } //node
[Pipeline] Allocate node : End
[Pipeline] End of Pipeline
io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
    at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:53)
    at io.fabric8.kubernetes.client.dsl.internal.ExecWebSocketListener.waitUntilReady(ExecWebSocketListener.java:120)
    at io.fabric8.kubernetes.client.dsl.internal.PodOperationsImpl.exec(PodOperationsImpl.java:215)
    at io.fabric8.kubernetes.client.dsl.internal.PodOperationsImpl.exec(PodOperationsImpl.java:51)
    at io.fabric8.kubernetes.workflow.KubernetesFacade.exec(KubernetesFacade.java:203)
    at io.fabric8.kubernetes.workflow.PodExecDecorator$1.launch(PodExecDecorator.java:58)
    at hudson.Launcher$ProcStarter.start(Launcher.java:381)
    at org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:106)
    at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:57)
    at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:98)
    at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:136)
    at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:112)
    at groovy.lang.GroovyObject$invokeMethod.call(Unknown Source)
    at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42)
    at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
    at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:151)
    at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:21)
    at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:75)
    at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:149)
    at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:146)
    at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:123)
    at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:123)
    at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:15)
    at WorkflowScript.run(WorkflowScript:13)
    at io.fabric8.kubernetes.workflow.Kubernetes$Pod.inside(jar:file:/var/jenkins_home/plugins/kubernetes-steps/WEB-INF/lib/kubernetes-steps.jar!/io/fabric8/kubernetes/workflow/Kubernetes.groovy:123)
    at ___cps.transform___(Native Method)
    at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:55)
    at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:106)
    at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:79)
    at sun.reflect.GeneratedMethodAccessor276.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
    at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:100)
    at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:79)
    at sun.reflect.GeneratedMethodAccessor276.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
    at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:57)
    at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:106)
    at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:79)
    at sun.reflect.GeneratedMethodAccessor276.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
    at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
    at com.cloudbees.groovy.cps.Next.step(Next.java:58)
    at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:154)
    at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$001(SandboxContinuable.java:19)
    at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:33)
    at org.jenkinsci.plugins.workflow.cps.SandboxContinuable$1.call(SandboxContinuable.java:30)
    at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:106)
    at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:30)
    at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:164)
    at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:277)
    at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$000(CpsThreadGroup.java:77)
    at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:186)
    at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:184)
    at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:47)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:112)
    at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ProtocolException: Expected HTTP 101 response but was '500 Internal Server Error'
    at com.squareup.okhttp.ws.WebSocketCall.createWebSocket(WebSocketCall.java:123)
    at com.squareup.okhttp.ws.WebSocketCall.access$000(WebSocketCall.java:40)
    at com.squareup.okhttp.ws.WebSocketCall$1.onResponse(WebSocketCall.java:98)
    at com.squareup.okhttp.Call$AsyncCall.execute(Call.java:177)
    at com.squareup.okhttp.internal.NamedRunnable.run(NamedRunnable.java:33)
    ... 3 more
Finished: FAILURE
