
yggdrasil's Issues

Add listener IP option

I would like to add a flag that configures the IP address that envoy listens on, instead of 0.0.0.0 by default.
As --envoy-port already exists to configure the downstream port, I was thinking --envoy-address?

The default would still be 0.0.0.0, so nothing changes for those who don't specify the flag.

Add Diagram

It'd be great to add a diagram and some more explanation around the ingress class annotations, to make it easier to see where yggdrasil sits and how to get it running (maybe an example Terraform config for the Envoy ASG setup?).

Adopt go modules

What is your opinion on adopting go modules instead of dep?
From my point of view go modules have the advantage of eliminating an additional external tool.

Dynamically get certificates from ingresses' TLS secrets

Declaring all TLS certificates and managing them alongside Yggdrasil can be a challenge when working with many ingresses all using different certificates.

We would like to make Yggdrasil fetch (and watch) TLS secrets declared in ingresses' spec.tls and use them.

To make this functionality transparent to those who don't need it, we could simply add a syncSecrets Yggdrasil configuration option (false by default); when true, Yggdrasil would use the ingresses' TLS secrets instead of the static certificates configuration:

{
  "nodeName": "foo",
  "ingressClasses": ["multi-cluster", "multi-cluster-staging"],
  "certificates": [
    {
      "hosts": ["*.api.com"],
      "cert": "path/to/cert",
      "key": "path/to/key"
    }
  ],
  "clusters": [
    {
      "token": "xxxxxxxxxxxxxxxx",
      "apiServer": "https://cluster1.api.com",
      "ca": "path/to/cluster1/ca"
    }
  ]
}

=>

{
  "nodeName": "foo",
  "ingressClasses": ["multi-cluster", "multi-cluster-staging"],
  "syncSecrets": true,
  "clusters": [
    {
      "token": "xxxxxxxxxxxxxxxx",
      "apiServer": "https://cluster1.api.com",
      "ca": "path/to/cluster1/ca"
    }
  ]
}

Any other approaches in mind? Perhaps one that would allow using both static certificates and TLS secrets at the same time?
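With syncSecrets enabled, the certificate/key pair would be resolved from each ingress's own TLS stanza; a hypothetical example (the secret name is invented):

```yaml
spec:
  tls:
  - hosts:
    - "*.api.com"
    secretName: api-com-tls  # Yggdrasil would fetch and watch this Secret
```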

Support networking.k8s.io ingresses

Kubernetes extensions/v1beta1 Ingress will be removed in 1.22.

In order to support all Kubernetes versions, Yggdrasil could try the networking.k8s.io ingress API first, falling back to extensions/v1beta1 if the cluster doesn't serve the new API.

Any thoughts on other ways to handle both versions?
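One option is to pick the API group once at startup from what the server advertises; a stand-in sketch in Go, where serverGroupVersions stands in for a real discovery-client call (the actual client-go wiring is omitted):

```go
package main

import "fmt"

// pickIngressAPI returns the preferred Ingress API served by a cluster:
// networking.k8s.io first, falling back to the legacy extensions group.
func pickIngressAPI(serverGroupVersions []string) string {
	preferred := []string{"networking.k8s.io/v1", "networking.k8s.io/v1beta1"}
	for _, want := range preferred {
		for _, gv := range serverGroupVersions {
			if gv == want {
				return want
			}
		}
	}
	return "extensions/v1beta1"
}

func main() {
	// A 1.14+ cluster serves the networking group; an older one does not.
	fmt.Println(pickIngressAPI([]string{"apps/v1", "networking.k8s.io/v1beta1"}))
	fmt.Println(pickIngressAPI([]string{"apps/v1", "extensions/v1beta1"}))
}
```

The informer/lister for the chosen group would then be instantiated once, rather than switching per request.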

Add annotation to configure ingress weight

Adding a yggdrasil.uswitch.com/weight annotation would be useful for configuring the lbEndpoint load_balancing_weight of each ingress.
Omitting it would leave the current behavior unchanged.

In addition to that, giving a weight of 0 could be a special case to remove an ingress from an Envoy cluster.
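As a sketch, the proposed annotation (hypothetical, not yet implemented) could sit alongside the existing ingress class annotation:

```yaml
metadata:
  annotations:
    kubernetes.io/ingress.class: multi-cluster
    yggdrasil.uswitch.com/weight: "10"  # proposed: becomes lbEndpoint load_balancing_weight
    # yggdrasil.uswitch.com/weight: "0" # proposed special case: endpoint removed from the cluster
```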

Envoy is not getting k8s ingress cluster config from yggdrasil control-plane

Envoy is not receiving the k8s ingress clusters/listeners from the yggdrasil control plane. I'm using the reference configuration:

Envoy docker container output:

[2019-07-16 03:39:18.751][8][info][main] [source/server/server.cc:207] statically linked extensions:
[2019-07-16 03:39:18.752][8][info][main] [source/server/server.cc:209]   access_loggers: envoy.file_access_log,envoy.http_grpc_access_log
[2019-07-16 03:39:18.752][8][info][main] [source/server/server.cc:212]   filters.http: envoy.buffer,envoy.cors,envoy.ext_authz,envoy.fault,envoy.filters.http.grpc_http1_reverse_bridge,envoy.filters.http.header_to_metadata,envoy.filters.http.jwt_authn,envoy.filters.http.rbac,envoy.filters.http.tap,envoy.grpc_http1_bridge,envoy.grpc_json_transcoder,envoy.grpc_web,envoy.gzip,envoy.health_check,envoy.http_dynamo_filter,envoy.ip_tagging,envoy.lua,envoy.rate_limit,envoy.router,envoy.squash
[2019-07-16 03:39:18.752][8][info][main] [source/server/server.cc:215]   filters.listener: envoy.listener.original_dst,envoy.listener.original_src,envoy.listener.proxy_protocol,envoy.listener.tls_inspector
[2019-07-16 03:39:18.753][8][info][main] [source/server/server.cc:218]   filters.network: envoy.client_ssl_auth,envoy.echo,envoy.ext_authz,envoy.filters.network.dubbo_proxy,envoy.filters.network.mysql_proxy,envoy.filters.network.rbac,envoy.filters.network.sni_cluster,envoy.filters.network.thrift_proxy,envoy.filters.network.zookeeper_proxy,envoy.http_connection_manager,envoy.mongo_proxy,envoy.ratelimit,envoy.redis_proxy,envoy.tcp_proxy
[2019-07-16 03:39:18.753][8][info][main] [source/server/server.cc:220]   stat_sinks: envoy.dog_statsd,envoy.metrics_service,envoy.stat_sinks.hystrix,envoy.statsd
[2019-07-16 03:39:18.754][8][info][main] [source/server/server.cc:222]   tracers: envoy.dynamic.ot,envoy.lightstep,envoy.tracers.datadog,envoy.zipkin
[2019-07-16 03:39:18.755][8][info][main] [source/server/server.cc:225]   transport_sockets.downstream: envoy.transport_sockets.alts,envoy.transport_sockets.tap,raw_buffer,tls
[2019-07-16 03:39:18.756][8][info][main] [source/server/server.cc:228]   transport_sockets.upstream: envoy.transport_sockets.alts,envoy.transport_sockets.tap,raw_buffer,tls
[2019-07-16 03:39:18.756][8][info][main] [source/server/server.cc:234] buffer implementation: old (libevent)
[2019-07-16 03:39:18.766][8][warning][misc] [source/common/protobuf/utility.cc:173] Using deprecated option 'envoy.api.v2.Cluster.hosts' from file cds.proto. This configuration will be removed from Envoy soon. Please see https://github.com/envoyproxy/envoy/blob/master/DEPRECATED.md for details.
[2019-07-16 03:39:18.768][8][info][main] [source/server/server.cc:281] admin address: 0.0.0.0:9901
[2019-07-16 03:39:18.769][8][info][config] [source/server/configuration_impl.cc:50] loading 0 static secret(s)
[2019-07-16 03:39:18.769][8][info][config] [source/server/configuration_impl.cc:56] loading 1 cluster(s)
[2019-07-16 03:39:18.770][8][info][config] [source/server/configuration_impl.cc:60] loading 0 listener(s)
[2019-07-16 03:39:18.770][8][info][config] [source/server/configuration_impl.cc:85] loading tracing configuration
[2019-07-16 03:39:18.770][8][info][config] [source/server/configuration_impl.cc:105] loading stats sink configuration
[2019-07-16 03:39:18.770][8][info][main] [source/server/server.cc:478] starting main dispatch loop
[2019-07-16 03:39:19.064][8][info][upstream] [source/common/upstream/cluster_manager_impl.cc:133] cm init: initializing cds
[2019-07-16 03:39:19.067][8][info][upstream] [source/common/upstream/cluster_manager_impl.cc:137] cm init: all clusters initialized
[2019-07-16 03:39:19.067][8][info][main] [source/server/server.cc:462] all clusters initialized. initializing init manager
[2019-07-16 03:39:19.071][8][info][upstream] [source/server/lds_api.cc:74] lds: add/update listener 'listener_0'
[2019-07-16 03:39:19.071][8][info][config] [source/server/listener_manager_impl.cc:1006] all dependencies initialized. starting workers

yggdrasil docker container output:

time="2019-07-16T03:39:13Z" level=info msg="started snapshotter"
time="2019-07-16T03:39:14Z" level=debug msg="adding &Ingress{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:traefik-web-ui,GenerateName:,Namespace:kube-system-custom,SelfLink:/apis/extensions/v1beta1/namespaces/kube-system-custom/ingresses/traefik-web-ui,UID:37ba4ec6-a6b5-11e9-aa56-12311bc24cf8,ResourceVersion:7291364,Generation:1,CreationTimestamp:2019-07-15 04:01:26 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{\"kubernetes.io/ingress.class\":\"traefik\",\"traefik.ingress.kubernetes.io/frontend-entry-points\":\"http\"},\"name\":\"traefik-web-ui\",\"namespace\":\"kube-system-custom\"},\"spec\":{\"rules\":[{\"host\":\"traefik.cluster1.preprod.com\",\"http\":{\"paths\":[{\"backend\":{\"serviceName\":\"traefik\",\"servicePort\":\"web\"},\"path\":\"/\"}]}}]}}\n,kubernetes.io/ingress.class: traefik,traefik.ingress.kubernetes.io/frontend-entry-points: http,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:IngressSpec{Backend:nil,TLS:[],Rules:[{traefik.cluster1.preprod.com {HTTPIngressRuleValue{Paths:[{/ {traefik {1 0 web}}}],}}}],},Status:IngressStatus{LoadBalancer:k8s_io_api_core_v1.LoadBalancerStatus{Ingress:[],},},}"
time="2019-07-16T03:39:14Z" level=debug msg="took snapshot: {Endpoints:{Version: Items:map[]} Clusters:{Version:2019-07-16 03:39:14.0499594 +0000 UTC m=+1.109360801 Items:map[]} Routes:{Version: Items:map[]} Listeners:{Version:2019-07-16 03:39:14.0499448 +0000 UTC m=+1.109347101 Items:map[listener_0:name:\"listener_0\" address:<socket_address:<address:\"0.0.0.0\" port_value:10000 > > filter_chains:<filters:<name:\"envoy.http_connection_manager\" config:<fields:<key:\"access_log\" value:<list_value:<values:<struct_value:<fields:<key:\"config\" value:<struct_value:<fields:<key:\"format\" value:<string_value:\"{\\\"bytes_received\\\":\\\"%BYTES_RECEIVED%\\\",\\\"bytes_sent\\\":\\\"%BYTES_SENT%\\\",\\\"downstream_local_address\\\":\\\"%DOWNSTREAM_LOCAL_ADDRESS%\\\",\\\"downstream_remote_address\\\":\\\"%DOWNSTREAM_REMOTE_ADDRESS%\\\",\\\"duration\\\":\\\"%DURATION%\\\",\\\"forwarded_for\\\":\\\"%REQ(X-FORWARDED-FOR)%\\\",\\\"protocol\\\":\\\"%PROTOCOL%\\\",\\\"request_id\\\":\\\"%REQ(X-REQUEST-ID)%\\\",\\\"request_method\\\":\\\"%REQ(:METHOD)%\\\",\\\"request_path\\\":\\\"%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%\\\",\\\"response_code\\\":\\\"%RESPONSE_CODE%\\\",\\\"response_flags\\\":\\\"%RESPONSE_FLAGS%\\\",\\\"start_time\\\":\\\"%START_TIME(%s.%3f)%\\\",\\\"upstream_cluster\\\":\\\"%UPSTREAM_CLUSTER%\\\",\\\"upstream_host\\\":\\\"%UPSTREAM_HOST%\\\",\\\"upstream_local_address\\\":\\\"%UPSTREAM_LOCAL_ADDRESS%\\\",\\\"upstream_service_time\\\":\\\"%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)%\\\",\\\"user_agent\\\":\\\"%REQ(USER-AGENT)%\\\"}\\n\" > > fields:<key:\"path\" value:<string_value:\"/var/log/envoy/access.log\" > > > > > fields:<key:\"name\" value:<string_value:\"envoy.file_access_log\" > > > > > > > fields:<key:\"http_filters\" value:<list_value:<values:<struct_value:<fields:<key:\"config\" value:<struct_value:<fields:<key:\"headers\" value:<list_value:<values:<struct_value:<fields:<key:\"exact_match\" value:<string_value:\"/yggdrasil/status\" > > fields:<key:\"name\" 
value:<string_value:\":path\" > > > > > > > fields:<key:\"pass_through_mode\" value:<bool_value:false > > > > > fields:<key:\"name\" value:<string_value:\"envoy.health_check\" > > > > values:<struct_value:<fields:<key:\"name\" value:<string_value:\"envoy.router\" > > > > > > > fields:<key:\"route_config\" value:<struct_value:<fields:<key:\"name\" value:<string_value:\"local_route\" > > fields:<key:\"virtual_hosts\" value:<list_value:<> > > > > > fields:<key:\"stat_prefix\" value:<string_value:\"ingress_http\" > > fields:<key:\"tracing\" value:<struct_value:<fields:<key:\"operation_name\" value:<string_value:\"EGRESS\" > > > > > fields:<key:\"upgrade_configs\" value:<list_value:<values:<struct_value:<fields:<key:\"upgrade_type\" value:<string_value:\"websocket\" > > > > > > > > > > listener_filters:<name:\"envoy.listener.tls_inspector\" > ]}}"
time="2019-07-16T03:39:14Z" level=debug msg="cache controller synced"
time="2019-07-16T03:39:14Z" level=debug msg="starting cache controller: &{config:{Queue:0xc4202b20b0 ListerWatcher:0xc42010c9a0 Process:0xf6b290 ObjectType:0xc42028c2c0 FullResyncPeriod:60000000000 ShouldResync:<nil> RetryOnError:false} reflector:<nil> reflectorMutex:{w:{state:0 sema:0} writerSem:0 readerSem:0 readerCount:0 readerWait:0} clock:0x20d0af0}"
time="2019-07-16T03:39:15Z" level=debug msg="cache controller synced"

yggdrasil.json config:

{
  "nodeName": "foo",
  "ingressClasses": ["multi-cluster", "traefik"],
  "clusters": [
    {
      "token": "xxx1",
      "apiServer": "https://api.cluster1.preprod.com",
      "ca": "cluster1_ca.crt"
    },
    {
      "token": "xxx2",
      "apiServer": "https://api.cluster2.preprod.com",
      "ca": "cluster2_ca.crt"
    }
  ]
}

Envoy v1.10.0 config file:

admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }

dynamic_resources:
  lds_config:
    api_config_source:
      api_type: GRPC
      grpc_services:
        envoy_grpc:
          cluster_name: xds_cluster
  cds_config:
    api_config_source:
      api_type: GRPC
      grpc_services:
        envoy_grpc:
          cluster_name: xds_cluster

static_resources:
  clusters:
  - name: xds_cluster
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    hosts: [{ socket_address: { address: yggdrasil, port_value: 8080 }}]

I expected to see the traefik-web-ui cluster/listener, but Envoy never receives it via discovery; only the yggdrasil/status health-check route was added.

Upgrade Envoy API v2 to v3

Envoy 1.18+ no longer supports the v2 API.
Yggdrasil needs to use the v3 API in order to work with recent versions of Envoy.

Do you think it's conceivable to make Yggdrasil switch straight to the v3 API, thereby completely dropping v2?
It would mean everybody would need to upgrade their Envoy nodes to 1.13+.
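On the Envoy side, pointing a v3-capable node at Yggdrasil means pinning both the resource and transport API versions in the bootstrap; a sketch of how the dynamic_resources stanza from the example configs above would change (cluster name as in the README example):

```yaml
dynamic_resources:
  lds_config:
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
        envoy_grpc:
          cluster_name: xds_cluster
  cds_config:
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
        envoy_grpc:
          cluster_name: xds_cluster
```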

Ingress controllers under loadbalancer

Hi folks,

I was trying to test yggdrasil to achieve load balancing across two k8s clusters. Since yggdrasil uses the ingress controller's IP/hostname, I can't use my ELB here. Is there any workaround for this scenario?

Environment:

Cloud: AWS
Clusters in East and West
Nginx ingress ASG under internal Classic ELB.
External DNS service will update Route53 from ingress rules.

envoy.yaml

admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }

dynamic_resources:
  lds_config:
    api_config_source:
      api_type: GRPC
      grpc_services:
        envoy_grpc:
          cluster_name: dev
  cds_config:
    api_config_source:
      api_type: GRPC
      grpc_services:
        envoy_grpc:
          cluster_name: dev

static_resources:
  clusters:
  - name: dev
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    hosts: [{ socket_address: { address: 172.17.0.2, port_value: 8080 }}]

yggdrasil.conf

{
  "nodeName": "k8s-envoy-agt-w2-1",
  "ingressClasses": ["nginx-internal"],
  "clusters": [
    {
      "token": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
      "apiServer": "https://west.dev.master.kube.com:6443",
      "ca": "ca.crt"
    }
  ]
}

ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: envoy.dev.kube.com
    external-dns.alpha.kubernetes.io/target: internal-dev-k8s-ing-int-w2-xxxxxx.us-west-2.elb.amazonaws.com
    kubernetes.io/ingress.class: nginx-internal
    nginx.ingress.kubernetes.io/backend-protocol: HTTP
    yggdrasil.uswitch.com/healthcheck-path: /
    yggdrasil.uswitch.com/timeout: 30s
  name: hello-world
  namespace: default
spec:
  rules:
  - host: envoy.dev.kube.com
    http:
      paths:
      - backend:
          serviceName: hello-world
          servicePort: 80
        path: /

Errors on envoy startup with provided configuration

Using the latest envoy docker container:

root@f52a1a331546:/# /usr/local/bin/envoy --version
/usr/local/bin/envoy  version: 3cca9eea6befa5b300230a06516d8f9a46f519df/1.9.0-dev/Clean/RELEASE

Given the example configuration file from yggdrasil README, I get the following error when starting up envoy with it:

[2018-11-20 11:32:51.198][000007][critical][main] [source/server/server.cc:84] error initializing configuration '/etc/envoy/envoy.yaml': envoy::api::v2::core::ConfigSource::GRPC must not have a cluster name specified: api_type: GRPC
cluster_names: "xds_cluster"

This is the configuration I am using:

admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }

dynamic_resources:
  lds_config:
    api_config_source:
      api_type: GRPC
      cluster_names: [xds_cluster]
  cds_config:
    api_config_source:
      api_type: GRPC
      cluster_names: [xds_cluster]

static_resources:
  clusters:
  - name: xds_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    hosts: [{ socket_address: { address: 10.132.0.21, port_value: 8080 }}]

Edit:
Tried this configuration change:

dynamic_resources:
  lds_config:
    api_config_source:
      api_type: GRPC
      grpc_services:
        envoy_grpc:
          cluster_name: xds_cluster
  cds_config:
    api_config_source:
      api_type: GRPC
      grpc_services:
        envoy_grpc:
          cluster_name: xds_cluster

And it seems to work. Maybe the example in the README should be updated?
