💡 If you are in a hurry and want to get hands-on with Istio insanely fast, just go to http://learn.openshift.com/servicemesh and start instantly.
There are three different and super simple microservices in this system, chained together in the following sequence:
customer → preference → recommendation
For now, they have a simple exception-handling solution for dealing with a missing dependent service: they just return the error message to the end user.
You will need for this tutorial:

- minishift
- docker
  - Fedora: dnf install docker
- kubectl
  - Fedora: dnf install kubernetes-client
- oc (eval $(minishift oc-env))
- Apache Maven
  - Fedora: dnf install maven
- stern
  - Mac OS: brew install stern
  - Fedora: sudo curl --output /usr/local/bin/stern -L https://github.com/wercker/stern/releases/download/1.6.0/stern_linux_amd64 && sudo chmod +x /usr/local/bin/stern
- istioctl (will be installed via the steps below)
- curl, gunzip, tar
  - Mac OS: built-in or part of your bash shell
  - Fedora: should already be installed, but just in case… dnf install curl gzip tar
- git
  - dnf install git
Assumes minishift; tested with minishift v1.15.1+a5c47dd.
#!/bin/bash
# add the location of the minishift executable to PATH
# I also keep other handy tools like kubectl and kubetail.sh
# in that directory
minishift profile set kubeboot
minishift config set memory 8GB
minishift config set cpus 3
minishift config set vm-driver virtualbox ## or kvm, for Fedora
minishift config set image-caching true
minishift addon enable admin-user
minishift start
eval $(minishift oc-env) && eval $(minishift docker-env)
oc login $(minishift ip):8443 -u admin -p admin
📎 In this tutorial, you will often be polling the customer endpoint with curl, while simultaneously viewing logs via stern or kubetail.sh and issuing commands via oc and istioctl. Consider using three terminal windows.
#!/bin/bash
# Mac OS:
curl -L https://github.com/istio/istio/releases/download/0.6.0/istio-0.6.0-osx.tar.gz | tar xz
# Fedora:
curl -L https://github.com/istio/istio/releases/download/0.6.0/istio-0.6.0-linux.tar.gz | tar xz
# Both:
cd istio-0.6.0
export ISTIO_HOME=`pwd`
export PATH=$ISTIO_HOME/bin:$PATH
Verify istioctl
istioctl version
The above command should show output like:
Version: 0.6.0
GitRevision: 2cb09cdf040a8573330a127947b11e5082619895
User: root@a28f609ab931
Hub: docker.io/istio
GolangVersion: go1.9
BuildStatus: Clean
oc login $(minishift ip):8443 -u admin -p admin
oc adm policy add-scc-to-user anyuid -z istio-ingress-service-account -n istio-system
oc adm policy add-scc-to-user anyuid -z default -n istio-system
oc adm policy add-scc-to-user anyuid -z grafana -n istio-system
oc adm policy add-scc-to-user anyuid -z prometheus -n istio-system
oc create -f install/kubernetes/istio.yaml
oc project istio-system
oc expose svc istio-ingress
oc apply -f install/kubernetes/addons/prometheus.yaml
oc apply -f install/kubernetes/addons/grafana.yaml
oc apply -f install/kubernetes/addons/servicegraph.yaml
oc expose svc servicegraph
oc expose svc grafana
oc expose svc prometheus
oc process -f https://raw.githubusercontent.com/jaegertracing/jaeger-openshift/master/all-in-one/jaeger-all-in-one-template.yml | oc create -f -
Wait for Istio’s components to be ready
$ oc get pods -w
NAME READY STATUS RESTARTS AGE
grafana-3617079618-4qs2b 1/1 Running 0 4m
istio-ca-1363003450-tfnjp 1/1 Running 0 4m
istio-ingress-1005666339-vrjln 1/1 Running 0 4m
istio-mixer-465004155-zn78n 3/3 Running 0 5m
istio-pilot-1861292947-25hnm 2/2 Running 0 4m
jaeger-210917857-2w24f 1/1 Running 0 4m
prometheus-168775884-dr5dm 1/1 Running 0 4m
servicegraph-1100735962-tdh78 1/1 Running 0 4m
And if you need quick access to the OpenShift console
minishift console
📎 On your first launch of the OpenShift console via minishift, you will receive a warning like "Your connection is not private". For our demo, simply select "Proceed to 192.168.xx.xx (unsafe)" to bypass the warning. Both the username and the password are set to admin, thanks to the admin-user add-on.
- OpenShift Web Console: minishift console
- Jaeger: minishift openshift service jaeger-query --in-browser
- Grafana: minishift openshift service grafana --in-browser
- Prometheus: minishift openshift service prometheus --in-browser
📎 You can see more options via minishift openshift service --help
Make sure you are logged in
oc whoami
and that you have set up the project/namespace:
oc new-project tutorial
oc adm policy add-scc-to-user privileged -z default -n tutorial
Then clone the git repository
git clone https://github.com/workspace7/kubeboot
cd kubeboot
Start deploying the microservice projects, starting with customer
cd customer
mvn clean package
docker build -t example/customer .
docker images | grep customer
📎 Your very first Docker build will take a bit of time as it downloads all the layers. Subsequent rebuilds of the Docker image, updating only the microservice layer, will be very fast.
Make sure istioctl is in your PATH.
Now let’s deploy the customer pod:
oc apply -f ./kubernetes/Deployment.yml -n tutorial
oc create -f ./kubernetes/Service.yml -n tutorial
Since the customer service is the one our users will interact with, let's add an OpenShift Route that exposes that endpoint.
oc expose service customer
oc get route
oc get pods -w
❗ If your pod fails with ImagePullBackOff, it's possible that your current terminal isn't using the proper Docker environment. See Setup environment.
Wait until the status is Running and there are 1/1 pods in the Ready column. To exit, press Ctrl+C. You can also see the pod status via the OpenShift Web Console.
Then test the customer endpoint
curl customer-tutorial.$(minishift ip).nip.io/whereami
The output of the above command should be something like customer ⇒ customer-5dffb8bff9-qntf2c
curl customer-tutorial.$(minishift ip).nip.io
You should see the following error because the preference and recommendation services are not yet deployed.
customer => I/O error on GET request for "http://preference:8080": preference: Name or service not known; nested exception is java.net.UnknownHostException: preference: Name or service not known
Also review the logs
stern customer
You should see a stacktrace containing this cause:
org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://preference:8080": preference; nested exception is java.net.UnknownHostException: preference
Back to the main kubeboot directory
cd ..
cd preference
mvn clean package
docker build -t example/preference .
docker images | grep preference
oc apply -f ./kubernetes/Deployment.yml -n tutorial
oc create -f ./kubernetes/Service.yml -n tutorial
oc get pods -w
Wait until the status is Running and there are 1/1 pods in the Ready column. To exit, press Ctrl+C.
curl customer-tutorial.$(minishift ip).nip.io
It will respond with an error since the recommendation service is not yet deployed.
📎 We could make this a bit more resilient in a future iteration of this tutorial.
customer => 503 preference => I/O error on GET request for "http://recommendation:8080": recommendation: Name or service not known; nested exception is java.net.UnknownHostException: recommendation: Name or service not known
and check out the logs
stern preference
You should see a stacktrace containing this cause:
org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://recommendation:8080": recommendation; nested exception is java.net.UnknownHostException: recommendation
Back to the main kubeboot directory
cd ..
❗ The tag v1 at the end of the image name matters. We will be creating a v2 version of recommendation later in this tutorial. Having both a v1 and a v2 version of the recommendation code will allow us to exercise some interesting aspects of Istio's capabilities.
cd recommendation
mvn clean package
docker build -t example/recommendation:v1 .
docker images | grep recommendation
oc apply -f ./kubernetes/Deployment.yml -n tutorial
oc create -f ./kubernetes/Service.yml -n tutorial
oc get pods -w
Wait until the status is Running and there are 1/1 pods in the Ready column. To exit, press Ctrl+C.
curl customer-tutorial.$(minishift ip).nip.io
It should now return:
customer => preference => recommendation v1 from '99634814-sf4cl': 1
and you can monitor the recommendation logs with
stern recommendation -c recommendation
Back to the main kubeboot directory
cd ..
When you wish to change code (e.g. edit the .java files) and redeploy, simply:
cd {servicename}/
vi src/main/java/com/redhat/developer/demos/{servicename}/{Servicename}{Controller}.java
Make your changes, save them, and then:
mvn clean package
docker build -t example/{servicename} .
oc get pods -o jsonpath='{.items[*].metadata.name}' -l app={servicename}
oc get pods -o jsonpath='{.items[*].metadata.name}' -l app={servicename},version=v1
oc delete pod -l app={servicename},version=v1
Why delete the pod? Based on the Deployment configuration, Kubernetes/OpenShift will recreate the pod from the new Docker image as it attempts to keep the desired number of replicas available:
oc describe deployment {servicename} | grep Replicas
We can experiment with Istio routing rules by making a change to RecommendationController.java like the following and creating a "v2" Docker image.
private static final String RESPONSE_STRING_FORMAT = "recommendation v2 from '%s': %d\n";
The "v2" tag during the Docker build is significant. There is also a second Deployment file, Deployment-v2.yml, to label the new pods correctly.
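The second Deployment presumably differs from the first mainly in its name, version label, and image tag. A sketch of its relevant parts (an assumption about the file's shape; the authoritative version is kubernetes/Deployment-v2.yml in the recommendation project):

```yaml
# Sketch only; field values are assumptions, check Deployment-v2.yml.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: recommendation-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: recommendation   # same app label as v1, so the Service selects both
        version: v2           # distinguishes these pods from the v1 Deployment
    spec:
      containers:
      - name: recommendation
        image: example/recommendation:v2
```

Because the Service selects on the shared app label, both v1 and v2 pods receive traffic; the version label is what Istio route rules will later match on.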
cd recommendation
mvn clean package
docker build -t example/recommendation:v2 .
docker images | grep recommendation
example/recommendation v2 c31e399a9628 5 seconds ago 438MB
example/recommendation v1 f072978d9cf6 8 minutes ago 438MB
Important: we have a second Deployment to manage the v2 version of recommendation.
oc apply -f ./kubernetes/Deployment-v2.yml -n tutorial
oc get pods -w
Wait until the status is Running and there are 1/1 pods in the Ready column. To exit, press Ctrl+C.
NAME READY STATUS RESTARTS AGE
customer-3600192384-fpljb 1/1 Running 0 17m
preference-243057078-8c5hz 1/1 Running 0 15m
recommendation-v1-60483540-9snd9 1/1 Running 0 12m
recommendation-v2-2815683430-vpx4p 1/1 Running 0 15s
cd customer
oc apply -f <(istioctl kube-inject -f ./kubernetes/Deployment.yml) -n tutorial
cd ../preference
oc apply -f <(istioctl kube-inject -f ./kubernetes/Deployment.yml) -n tutorial
cd ../recommendation
oc apply -f <(istioctl kube-inject -f ./kubernetes/Deployment.yml) -n tutorial
oc apply -f <(istioctl kube-inject -f ./kubernetes/Deployment-v2.yml) -n tutorial
oc get pods -w -n tutorial
Wait until the status is Running and the customer, preference, and recommendation pods all show 2/2 in the Ready column. To exit, press Ctrl+C
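The pods now report 2/2 because istioctl kube-inject rewrites each Deployment's pod template to add an Envoy proxy sidecar before oc apply submits it. Roughly (a sketch; the exact image names and arguments are assumptions and vary by Istio version):

```yaml
# Sketch of an injected pod template; details are assumptions.
spec:
  containers:
  - name: customer              # your application container, unchanged
    image: example/customer
  - name: istio-proxy           # the injected Envoy sidecar
    image: docker.io/istio/proxy_debug:0.6.0   # assumed image name for Istio 0.6
    args: ["proxy", "sidecar"]
```

The sidecar intercepts all inbound and outbound traffic for the pod, which is what lets Istio apply routing rules without any change to the application code.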
and test the customer endpoint
curl customer-tutorial.$(minishift ip).nip.io
You will likely see "customer => preference => recommendation v1 from '99634814-d2z2t': 3", where '99634814-d2z2t' is the pod running v1 and 3 is the number of times you have hit the endpoint.
curl customer-tutorial.$(minishift ip).nip.io
You will likely see "customer => preference => recommendation v2 from '2819441432-5v22s': 1", since by default you get round-robin load-balancing when there is more than one Pod behind a Service.
Send several requests to see their responses
#!/bin/bash
while true
do curl customer-tutorial.$(minishift ip).nip.io
sleep .5
done
The default Kubernetes/OpenShift behavior is to round-robin load-balance across all available pods behind a single Service. Add another replica of the recommendation-v2 Deployment.
oc scale --replicas=2 deployment/recommendation-v2
Now, you will see two requests going to v2 for every one going to v1.
customer => preference => recommendation v1 from '99634814-sf4cl': 29
customer => preference => recommendation v2 from '2819441432-qsp25': 37
customer => preference => recommendation v2 from '2819441432-qsp25': 38
Scale back to a single replica of the recommendation-v2 Deployment
oc scale --replicas=1 deployment/recommendation-v2
and back to the main directory
cd ..
With Istio sidecars deployed alongside the applications, it's possible to alter the routing rules between services.
From the main kubeboot directory,
istioctl create -f istiofiles/route-rule-recommendation-v2.yml -n tutorial
curl customer-tutorial.$(minishift ip).nip.io
you should only see v2 being returned
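This rule pins all recommendation traffic to the v2 pods by matching on the version label. A sketch of what route-rule-recommendation-v2.yml presumably contains, assuming the v1alpha2 RouteRule syntax that ships with Istio 0.6 (consult the file in istiofiles/ for the authoritative definition):

```yaml
# Sketch only; the file in istiofiles/ is authoritative.
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: recommendation-default
spec:
  destination:
    namespace: tutorial
    name: recommendation
  precedence: 1
  route:
  - labels:
      version: v2   # send 100% of traffic to pods labeled version=v2
```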
Note: use "replace" instead of "create" since we are overlaying the previous rule
istioctl replace -f istiofiles/route-rule-recommendation-v1.yml -n tutorial
istioctl get routerules -n tutorial
istioctl get routerule recommendation-default -o yaml -n tutorial
Now simply remove the rule
istioctl delete routerule recommendation-default -n tutorial
and you should see the default behavior of load-balancing between v1 and v2 again
curl customer-tutorial.$(minishift ip).nip.io
Canary deployment scenario: push v2 into the cluster but only slowly send end-user traffic to it; if you continue to see success, keep shifting more traffic over time.
oc get pods -l app=recommendation -n tutorial
NAME READY STATUS RESTARTS AGE
recommendation-v1-3719512284-7mlzw 2/2 Running 6 2h
recommendation-v2-2815683430-vn77w 2/2 Running 0 1h
Create the route rule that will send 90% of requests to v1 and 10% to v2
istioctl create -f istiofiles/route-rule-recommendation-v1_and_v2.yml -n tutorial
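A sketch of what a weighted 90/10 split looks like, assuming the v1alpha2 RouteRule syntax of Istio 0.6 (the field values are assumptions; istiofiles/route-rule-recommendation-v1_and_v2.yml is the authoritative version):

```yaml
# Sketch only; weights across the route entries must sum to 100.
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: recommendation-v1-v2
spec:
  destination:
    namespace: tutorial
    name: recommendation
  precedence: 5
  route:
  - labels:
      version: v1
    weight: 90   # 90% of requests go to v1 pods
  - labels:
      version: v2
    weight: 10   # 10% of requests go to v2 pods
```

Shifting the canary forward is then just a matter of replacing the rule with adjusted weights, which is what the 75/25 step below does.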
and send in several requests
#!/bin/bash
while true
do curl customer-tutorial.$(minishift ip).nip.io
sleep .1
done
In another terminal, change the mixture to be 75/25
istioctl replace -f istiofiles/route-rule-recommendation-v1_and_v2_75_25.yml -n tutorial
Clean up
istioctl delete routerule recommendation-v1-v2 -n tutorial
What is your user-agent?
Note: the "user-agent" header is being forwarded by the Customer and Preference controllers so that route rules around recommendation can act on it.
istioctl create -f istiofiles/route-rule-recommendation-v1.yml -n tutorial
istioctl create -f istiofiles/route-rule-safari-recommendation-v2.yml -n tutorial
istioctl get routerules -n tutorial
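The Safari-specific rule matches on the forwarded user-agent header at a higher precedence than the default v1 rule, so it wins whenever the header matches. A sketch, assuming Istio 0.6 v1alpha2 syntax and that the header arrives as baggage-user-agent (both assumptions; consult istiofiles/route-rule-safari-recommendation-v2.yml for the real definition):

```yaml
# Sketch only; header name and field values are assumptions.
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: recommendation-safari
spec:
  destination:
    namespace: tutorial
    name: recommendation
  precedence: 2                # higher than the default v1 rule, so it wins on a match
  match:
    request:
      headers:
        baggage-user-agent:    # assumed name for the forwarded user-agent header
          regex: ".*Safari.*"
  route:
  - labels:
      version: v2
```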
Test with Safari (or even Chrome on Mac, since its user-agent string includes "Safari"); you should only see v2 responses from recommendation. Then test with Firefox; you should only see v1 responses from recommendation.
There are two ways to get the URL for your browser:
minishift openshift service customer --in-browser
That will open the customer service in your browser.
Or, if you just need the URL:
minishift openshift service customer --url
http://customer-tutorial.192.168.99.102.nip.io
You can also use curl's -A flag to test with different user-agent strings.
curl -A Safari customer-tutorial.$(minishift ip).nip.io
curl -A Firefox customer-tutorial.$(minishift ip).nip.io
You can fetch the routerule's YAML to see its configuration
istioctl get routerule recommendation-safari -o yaml -n tutorial
Clean up
istioctl delete routerule recommendation-safari -n tutorial
istioctl delete routerule recommendation-default -n tutorial