voyagermesh / voyager
🚀 Secure L7/L4 (HAProxy) Ingress Controller for Kubernetes
Home Page: https://voyagermesh.com
License: Apache License 2.0
Instead of using Pod endpoints, use their stable DNS names if possible.
- backend:
statefulSetName: jenkins-service
statefulSetPort: "80"
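A StatefulSet pod already has a stable DNS name of the form `<pod-name>.<governing-service>.<namespace>.svc` via its headless Service, so the same pod can also be targeted today through the existing `hostNames` backend field. A hedged sketch (the pod name `jenkins-0` is an assumption for illustration):

```yaml
# Sketch: address a StatefulSet pod by its stable identity instead of
# its Pod IP. Assumes the governing headless Service is
# "jenkins-service"; the pod "jenkins-0" is then reachable at
# jenkins-0.jenkins-service.<namespace>.svc.cluster.local
- backend:
    hostNames:
    - jenkins-0          # hypothetical pod name
    serviceName: jenkins-service
    servicePort: "80"
```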
I stumbled across Voyager today and thought the features looked really cool! I was hoping to get it set up, but ran into some bugs while following the user guide. Unfortunately, it looks like the Slack channel is closed: when I try to get an invite, it says the token is revoked.
ingress.appscode.com/daemon.nodeSelector
and remove ingress.appscode.com/daemon.hostname
username: xyz
password: abc
ingress.appscode.com/weight:<N>
Stats should be exposed by a ClusterIP-type Service named <ingress-name>-stats. This will ensure that stats are not internet accessible. This is needed since the stats endpoint is not secured by SSL, even though we have a way to set a password.
Since stats are no longer internet accessible, using secrets should be optional.
Give users the option to select the stats port using ingress.appscode.com/stats.port. This will allow users to use port 1936 for their own services (if they want to).
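Under this proposal, an Ingress might opt into stats like this. A hedged sketch: the `stats.port` annotation is the proposed knob, and the port value and backend names are illustrative:

```yaml
apiVersion: appscode.com/v1beta1
kind: Ingress
metadata:
  name: frontend
  annotations:
    ingress.appscode.com/stats: "true"
    # Proposed: move stats off 1936 so users can reuse that port.
    ingress.appscode.com/stats.port: "9999"
    # Optional under this proposal, since the stats Service is ClusterIP-only:
    # ingress.appscode.com/stats.secretName: frontend-stats
spec:
  backend:
    serviceName: echoserver
    servicePort: "80"
```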
curl -v https://api.appscode.info:3443/_appscode/api/health/json
says:
ALPN, server did not agree to a protocol
The https://tools.keycdn.com/http2-test check for https://api.appscode.info:3443/_appscode/api/health/json says ALPN is not enabled.
If I open a LoadBalancer type service directly for apiserver, KeyCDN test shows that it supports ALPN:
https://104.197.254.187/_appscode/api/health/json
Compare this to a direct GO server: https://my-ip.space/index.json
$ curl -v https://my-ip.space/index.json
* Trying 67.205.171.99...
* Connected to my-ip.space (67.205.171.99) port 443 (#0)
* found 173 certificates in /etc/ssl/certs/ca-certificates.crt
* found 697 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_256_GCM_SHA384
* server certificate verification OK
* server certificate status verification SKIPPED
* common name: my-ip.space (matched)
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: CN=my-ip.space
* start date: Fri, 30 Dec 2016 10:19:00 GMT
* expire date: Thu, 30 Mar 2017 10:19:00 GMT
* issuer: C=US,O=Let's Encrypt,CN=Let's Encrypt Authority X3
* compression: NULL
* ALPN, server accepted to use http/1.1
The current hypothesis is that the cipher suite includes blacklisted ciphers. More research is needed. http://http2.github.io/http2-spec/#rfc.section.9.2.2
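If the hypothesis holds, restricting the bind line to cipher suites that are not blacklisted by RFC 7540 should let ALPN negotiate h2. A hedged HAProxy sketch, not Voyager's shipped config; the certificate path, backend name, and cipher list are illustrative:

```
# Sketch: a tcp-mode frontend advertising h2 via ALPN, with the cipher
# list limited to HTTP/2-compatible ECDHE+AES-GCM suites.
frontend h2-in
    mode tcp
    bind :443 ssl crt /etc/ssl/cert.pem alpn h2,http/1.1 ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256
    default_backend h2-servers
```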
Make it clear that it should work with any cluster, including GKE.
Show how to use it alongside other ingress controllers (including the hidden GKE one): https://www.reddit.com/r/kubernetes/comments/62cobg/custom_ingress_controller_on_googe_container/dfwutp8/
Document how to install from the Helm chart
Certificate docs do not explain when to use HTTP Ingress
Certificate docs do not explain how to write the provider secret
NodePort
Restrict Voyager to one namespace
Use with an SSH service
Route 53
Multiple Ingresses vs one Ingress with many services
Currently, when the CLOUD_PROVIDER is set to aws, a new Service is created for the Ingress of type LoadBalancer by default.
The behavior I desire is for the LoadBalancer Service to use the "service.beta.kubernetes.io/aws-load-balancer-proxy-protocol": "*"
annotation to enable the Proxy Protocol on the AWS ELB, so that the proper headers like X-Real-IP and X-Forwarded-For are set correctly.
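Under the pass-through annotation proposal referenced in #100, the desired setup might look like the sketch below. The `annotations.service` key mirrors the commented-out example later in this thread and is an assumption, not a shipped feature; the backend names are illustrative:

```yaml
apiVersion: appscode.com/v1beta1
kind: Ingress
metadata:
  name: frontend
  annotations:
    # Proposed: forward this annotation onto the generated LoadBalancer
    # Service so the AWS ELB speaks Proxy Protocol to HAProxy.
    ingress.appscode.com/annotations.service: '{"service.beta.kubernetes.io/aws-load-balancer-proxy-protocol": "*"}'
spec:
  backend:
    serviceName: echoserver
    servicePort: "80"
```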
Here are my deployments:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: voyager-operator
namespace: default
labels:
run: voyager-operator
spec:
replicas: 1
selector:
matchLabels:
run: voyager-operator
template:
metadata:
labels:
run: voyager-operator
spec:
containers:
- name: voyager-operator
image: appscode/voyager:1.5.4
args:
- --cloud-provider=$(CLOUD_PROVIDER)
- --cluster-name=$(CLUSTER_NAME)
- --v=3
ports:
- containerPort: 1234
name: zero
protocol: TCP
env:
- name: CLOUD_PROVIDER
value: aws
- name: CLUSTER_NAME
value: mcclusterface
---
apiVersion: v1
kind: Service
metadata:
name: voyager-operator
namespace: default
labels:
run: voyager-operator
spec:
ports:
- name: zero
port: 1234
targetPort: zero
selector:
run: voyager-operator
Use EventRecorder to record events.
Hi there,
I was wondering if you could explain how the "cross namespace service support" noted in the project's README works. I'm particularly interested in understanding this because I would like to know if I can use this ingress controller to do weighted load balancing to a set of backends in different namespaces. Essentially, I would like to siphon off a percentage of production traffic to services in separate, isolated namespaces for testing. Would something like this be possible? Does Voyager support customization of the HAProxy template (for example, to include weights in the backends section of the config)?
Thanks!
There used to be a Helm chart to install Voyager, but I see it is gone. Any plans to add it back?
https://github.com/appscode/haproxy_exporter
@sadlil, I think we can take the same approach we are taking with k8sdb regarding exporters. A single exporter running in the same namespace as the operator can take care of exposing these metrics for all HAProxy instances.
Add GKE to the list of supported cloud providers. It should be the same as GCE.
https://github.com/appscode/voyager/blob/master/pkg/controller/ingress/handler.go#L44
ref: https://www.reddit.com/r/kubernetes/comments/62cobg/custom_ingress_controller_on_googe_container/
As #100 notes, it will be useful for users to be able to apply annotations to the HAProxy Service and/or Deployment. Users can supply a ConfigMap with their custom annotations that we can use. Users can apply the following annotation to an Ingress:
ingress.appscode.com/annotations: <user-provided-cfgmap-name>
data:
svc: |
{
"a/s1": "v1",
"b/s2": "v2",
}
pods: |
{
"u/up1": "v1",
"w/p2": "v2",
}
kind: ConfigMap
metadata:
name: <user-provided-cfgmap-name>
namespace: <same-as-ingress>
The JSON object under the svc key will be applied to the Service running in front of HAProxy. The JSON object under the pods key will be applied to the pods (Deployment/DaemonSet) running HAProxy. This will allow users to use the full spectrum of keys in annotations.
The keys will be filtered to remove any ingress.appscode.com/ keys, since those should be applied to the Ingress directly.
We should watch these ConfigMaps so that they are dynamically updated. This can be done using an UpdateAnnotations mode.
I think ConfigMaps should be the vector for user annotations. Using Secrets does not make sense, since annotations will ultimately be visible to users.
Voyager can operate in 2 modes: LoadBalancer & Daemon. Soon there will be a NodePort mode. In that light, we should rename Daemon to HostPort, so that these modes describe variations of the Service used to expose HAProxy.
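After the rename, the mode annotation would take one of three values. A hedged sketch; HostPort and NodePort are the proposed names, not yet released:

```yaml
metadata:
  annotations:
    # One of: LoadBalancer | HostPort (formerly Daemon) | NodePort (planned)
    ingress.appscode.com/type: HostPort
```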
TODOs:
So I followed the getting started guide, but am running into some errors when trying to deploy. First, my yamls:
resources.yaml
# following https://github.com/appscode/voyager/blob/master/docs/user-guide/README.md
apiVersion: extensions/v1beta1
kind: ThirdPartyResource
metadata:
name: ingress.appscode.com
description: "Extended ingress support for Kubernetes by appscode.com"
versions:
- name: v1beta1
---
apiVersion: extensions/v1beta1
kind: ThirdPartyResource
metadata:
name: certificate.appscode.com
description: "A specification of a Let's Encrypt Certificate to manage."
versions:
- name: v1beta1
ingress.yaml
apiVersion: appscode.com/v1beta1
kind: Ingress
metadata:
name: voyager-ingress
namespace: default
spec:
rules:
- host: mydomain.com
http:
paths:
- path: "/*"
backend:
serviceName: service1
servicePort: '8282'
- host: myotherdomain.com
http:
paths:
- path: /*
backend:
serviceName: service2
servicePort: '8484'
deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
run: appscode-voyager
name: appscode-voyager
namespace: default
spec:
replicas: 1
selector:
matchLabels:
run: appscode-voyager
template:
metadata:
labels:
run: appscode-voyager
spec:
containers:
- name: appscode-voyager
args:
- --cloud-provider=gke
- --cluster-name=myclustername
image: appscode/voyager:1.5.1
imagePullPolicy: Always
certificate.yaml
apiVersion: appscode.com/v1beta1
kind: Certificate
metadata:
name: voyager-cert
namespace: default
spec:
domains:
- mydomain.com
- myotherdomain.com
email: [email protected]
provider: googlecloud
providerCredentialSecretName: test-gcp-secret
One thing of note is that I'm not sure what the providerCredentialSecretName should be. I'm using Google Container Engine, but wasn't able to find exactly what should go in there. I don't think that's what is causing the issue I'm seeing, though.
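For context, the provider secret for the googlecloud DNS provider might look roughly like the sketch below. The key names follow lego's GCE provider environment variables and are an assumption; check the Voyager docs for the exact keys your version expects:

```yaml
# Sketch of a provider-credential Secret for the googlecloud provider.
# Key names (GCE_PROJECT, GCE_SERVICE_ACCOUNT_DATA) are assumptions.
apiVersion: v1
kind: Secret
metadata:
  name: test-gcp-secret
  namespace: default
stringData:
  GCE_PROJECT: my-gcp-project        # hypothetical project id
  GCE_SERVICE_ACCOUNT_DATA: |        # service-account JSON key (placeholder)
    {"type": "service_account"}
```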
So anyways, I use
kubectl apply -f directory_containing_all_these_files/
and the output I get is:
deployment "appscode-voyager" configured
thirdpartyresource "ingress.appscode.com" configured
thirdpartyresource "certificate.appscode.com" configured
unable to decode "voyager/certificate.yaml": no kind "Certificate" is registered for version "appscode.com/v1beta1"
unable to decode "voyager/ingress.yaml": no kind "Ingress" is registered for version "appscode.com/v1beta1"
Is there a step I skipped?
The benefit of this approach is not clear.
AWS, GCE and GKE support use of the spec.loadBalancerSourceRanges
field on Services of type LoadBalancer to configure the firewall for the load balancer created by Kubernetes.
apiVersion: v1
kind: Service
metadata:
annotations:
ingress.appscode.com/name: frontend
ingress.appscode.com/type: LoadBalancer
name: voyager-frontend
spec:
...
type: LoadBalancer
loadBalancerSourceRanges:
- 130.211.204.1/32
- 130.211.204.2/32
https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/
I talked with @tamalsaha in Slack about this. Basically, when I delete the attached pod, the Voyager pod terminates too, and it keeps restarting for a while until it finally stays running.
skuda@skuda ~ $ kubectl get pods -o wide | grep user
mysql-user-0 2/2 Running 0 2d 10.200.65.19 master-03
mysql-user-1 2/2 Running 0 1h 10.200.75.4 master-02
voyager-mysql-user-ingress-rd3gn 1/1 Running 0 1h 10.0.0.3 master-03
skuda@skuda ~ $ kubectl delete pod mysql-user-0
pod "mysql-user-0" deleted
skuda@skuda ~ $ kubectl get pods -o wide | grep user
mysql-user-0 2/2 Terminating 0 2d 10.200.65.19 master-03
mysql-user-1 2/2 Running 0 1h 10.200.75.4 master-02
voyager-mysql-user-ingress-rd3gn 0/1 Terminating 0 1h 10.0.0.3 master-03
This is the yaml file related to this setup:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: mysql-user
spec:
serviceName: mysql-user-headless
replicas: 2
template:
metadata:
labels:
app: mysql-user
annotations:
pod.alpha.kubernetes.io/initialized: "true"
spec:
restartPolicy: Always
terminationGracePeriodSeconds: 10
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
labelSelector:
matchExpressions:
- key: "app"
operator: In
values:
- mysql-user-headless
topologyKey: "kubernetes.io/hostname"
containers:
- name: mysql
imagePullPolicy: Always
image: percona:5.7
resources:
requests:
memory: "4Gi"
cpu: "2"
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: config-volume
mountPath: /etc/mysql/mysql.cnf
subPath: mysql.cnf
- name: config-volume
mountPath: /etc/mysql/conf.d/extra.cnf
subPath: extra.cnf
- name: data
mountPath: /var/lib/mysql
- name: local-fs
mountPath: /tmp
env:
- name: MYSQL_ROOT_PASSWORD
value: percona
readinessProbe:
tcpSocket:
port: 3306
initialDelaySeconds: 10
periodSeconds: 30
livenessProbe:
tcpSocket:
port: 3306
initialDelaySeconds: 20
periodSeconds: 30
- name: phpmyadmin
image: phpmyadmin/phpmyadmin:4.6
ports:
- containerPort: 80
name: phpmyadmin
resources:
requests:
memory: "500Mi"
cpu: "500m"
env:
- name: PMA_HOST
value: 127.0.0.1
readinessProbe:
httpGet:
path: /index.php
port: 80
initialDelaySeconds: 10
periodSeconds: 30
livenessProbe:
httpGet:
path: /index.php
port: 80
initialDelaySeconds: 20
periodSeconds: 30
volumes:
- name: config-volume
configMap:
name: mysql-user-configmap
- name: local-fs
emptyDir: {}
volumeClaimTemplates:
- metadata:
name: data
annotations:
volume.beta.kubernetes.io/storage-class: fastio
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 32Gi
---
apiVersion: v1
kind: Service
metadata:
name: mysql-user-headless
labels:
app: mysql-user
spec:
ports:
- port: 3306
targetPort: 3306
name: mysql
protocol: TCP
- port: 80
targetPort: 80
name: phpmyadmin
protocol: TCP
clusterIP: None
selector:
app: mysql-user
---
apiVersion: appscode.com/v1beta1
kind: Ingress
metadata:
name: mysql-user-ingress
annotations:
ingress.appscode.com/type: Daemon
ingress.appscode.com/daemon.nodeSelector: "kubernetes.io/hostname=master-03"
spec:
rules:
- host: phpmyadmin.local
http:
paths:
- path: '/1'
backend:
hostNames:
- mysql-user-0
serviceName: mysql-user-headless
servicePort: '80'
rewriteRule:
- ^(GET|POST|HEAD)\ /1/(.*) \1\ /\2
- path: '/2'
backend:
hostNames:
- mysql-user-1
serviceName: mysql-user-headless
servicePort: '80'
rewriteRule:
- ^(GET|POST|HEAD)\ /2/(.*) \1\ /\2
Thanks!
spec.TLS
https://github.com/appscode/voyager/blob/master/docs/user-guide/README.md#deploy-controller shows --cloud-provider and --cluster-name being set, but why is this needed? If this tool is running within the cluster, why does it care what provider it's running on and what the cluster name is?
When you are using Voyager with CLOUD_PROVIDER=null because you are using it on baremetal, or maybe DigitalOcean, Voyager allows you to use these annotations:
ingress.appscode.com/type: Daemon
ingress.appscode.com/daemon.nodeSelector: "kubernetes.io/hostname=foo"
Then, when the DaemonSet starts on the selected node, it automatically binds to TCP port 80. That obviously makes sense, but it would be great if the user were also able to use NodePort as an alternative.
For example, in our case we have a four-node cluster at our hosting provider, so I only have 4 available ports to use with Voyager, but I have thousands of free ports that I would like to use with Voyager and NodePort.
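What is being asked for, roughly, is a third ingress type. A hypothetical sketch; NodePort as an ingress type was not supported at the time of writing:

```yaml
apiVersion: appscode.com/v1beta1
kind: Ingress
metadata:
  name: mysql-user-ingress
  annotations:
    # Hypothetical: expose HAProxy via a NodePort on every node instead
    # of binding host port 80 on one selected node.
    ingress.appscode.com/type: NodePort
```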
Thanks!
Voyager adds a new Firewall() interface to the Kubernetes CloudProvider API. This is a tracking bug to update our modifications.
kubernetes/community#128
Test case:
https://github.com/appscode/voyager/blob/master/cmd/voyager/main.go#L31
This is a problem if the user is using the baremetal provider.
Currently, when I add or update annotations to my Ingress object, none of the changes are made.
For example, before:
apiVersion: appscode.com/v1beta1
kind: Ingress
metadata:
name: frontend
labels:
app: frontend
spec:
# Default backend
backend:
serviceName: echoserver
servicePort: 80
After:
apiVersion: appscode.com/v1beta1
kind: Ingress
metadata:
name: frontend
labels:
app: frontend
annotations:
# Enable the AWS Proxy Protocol
# NOTE(jmodes): We need to have `bind :80 accept-proxy` for HAProxy to accept Proxy Protocol requests.
# ingress.appscode.com/annotations.service: '{"service.beta.kubernetes.io/aws-load-balancer-proxy-protocol": "*"}'
# Number of HAProxy replicas to run
ingress.appscode.com/replicas: "1"
# Open HAProxy stats on port 1936
ingress.appscode.com/stats: "true"
ingress.appscode.com/stats.secretName: frontend-stats
spec:
# Default backend
backend:
serviceName: echoserver
servicePort: 80
After applying the new Ingress, none of the changes were enacted by the controller.
Currently HAProxy supports HTTP/2 only in tcp mode. This is already supported in Voyager. See here: http://discourse.haproxy.org/t/http-2-support-in-1-7/927
This is blocked on HAProxy supporting HTTP/2 in http mode.
I have been trying to deploy voyager to give it a try and seem to be failing at it. I started with kubernetes 1.4.8, but failed. I gave it a try with a fresh cluster (1.5.2) deployed by kops with basically all defaults, heapster and dashboard. Got the same problem:
$ kubectl apply -f https://raw.githubusercontent.com/appscode/k8s-addons/master/api/extensions/ingress.yaml
thirdpartyresource "ingress.appscode.com" created
$ kubectl apply -f https://raw.githubusercontent.com/appscode/k8s-addons/master/api/extensions/certificate.yaml
thirdpartyresource "certificate.appscode.com" created
$ cat ingress-voyager/deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
run: appscode-voyager
name: appscode-voyager
namespace: default
spec:
replicas: 1
selector:
matchLabels:
run: appscode-voyager
template:
metadata:
labels:
run: appscode-voyager
spec:
containers:
- name: appscode-voyager
args:
- --cloud-provider=aws
- --cluster-name=tamas-k8s
image: appscode/voyager:1.5.0
imagePullPolicy: Always
$ kubectl apply -f ingress-voyager/deployment.yml
deployment "appscode-voyager" created
$ kubectl get thirdpartyresource
NAME DESCRIPTION VERSION(S)
certificate.appscode.com A specification of a Let's Encrypt Certificate to manage. v1beta1
ingress.appscode.com Extended ingress support for Kubernetes by appscode.com v1beta1
$ kubectl get po
NAME READY STATUS RESTARTS AGE
appscode-voyager-3269309913-glhpd 1/1 Running 0 4m
Everything gets deployed from what I can tell; however, the logs of the pod indicate an error:
2017-02-24T14:42:14.767629922Z W0224 14:42:14.767468 1 client_config.go:481] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2017-02-24T14:42:14.832700877Z E0224 14:42:14.832589 1 reflector.go:199] github.com/appscode/voyager/vendor/github.com/appscode/k8s-addons/pkg/watcher/objects.go:138: Failed to list *api.Alert: the server could not find the requested resource
2017-02-24T14:42:15.834770679Z E0224 14:42:15.834656 1 reflector.go:199] github.com/appscode/voyager/vendor/github.com/appscode/k8s-addons/pkg/watcher/objects.go:138: Failed to list *api.Alert: the server could not find the requested resource
(this goes on forever)
And then when trying to deploy an ingress:
$ cat jenkins/ingress-voy.yml
apiVersion: appscode.com/v1beta1
kind: Ingress
metadata:
name: jenkins-ingress-voy
namespace: jenkins
spec:
rules:
- host: jenkins.tamas.--omitted--
http:
paths:
- path: "/"
backend:
serviceName: jenkins-ui
servicePort: ui
$ kubectl apply -f jenkins/ingress-voy.yml
error: unable to decode "jenkins/ingress-voy.yml": no kind "Ingress" is registered for version "appscode.com/v1beta1"
Did I miss something or are there extra steps?
Need to update the default setup
https://github.com/appscode/voyager/search?utf8=%E2%9C%93&q=1.7.2-1.5.4&type=