
rotor's People

Contributors

9len, arsalaanansarideveloper, brirams, brookshelley, chrisgoffinet, falun, felixonmars, k4y3ff, mccv, phedoreanu, protochron, trjordan, zbintliff, zuercher

rotor's Issues

mechanism to configure custom clusters and listeners

The plan is to add flags to read from a file specifying:

  • statically configured listeners (with embedded routes)
  • statically configured clusters
  • a template for new listeners
  • a template for new clusters

This covers many use cases, but will not cover fully dynamic specification of listeners and clusters; we plan to offer that in our paid product.
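For illustration, a hypothetical sketch of what such a file could look like. The top-level listeners:/clusters: layout is an assumption based on this issue's bullet list; the resource bodies follow the Envoy v2 API, as in the static config shown in the tls_context issue further down:

listeners:
- address:
    socket_address: { address: 0.0.0.0, port_value: 80 }
  filter_chains:
  - filters:
    - name: envoy.http_connection_manager
      config:
        stat_prefix: ingress_http
        route_config:
          virtual_hosts:
          - name: default
            domains: ["*"]
            routes:
            - match: { prefix: "/" }
              route: { cluster: example-cluster-1 }
        http_filters:
        - name: envoy.router
          config: {}
clusters:
- name: example-cluster-1
  connect_timeout: 1s
  type: STATIC
  hosts:
  - socket_address: { address: 127.0.0.1, port_value: 8080 }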

Dynamically watch namespaces in Kubernetes

Currently the Kubernetes plugin requires a namespace to be defined to watch for pods. In a multi-tenant cluster I don't think this will scale. I propose we support, similar to Istio, the ability to label the namespaces we want watched, so that Rotor can dynamically watch multiple namespaces.
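A rough sketch of the proposed workflow; the label key is hypothetical (no such Rotor flag exists today):

# Label the namespaces Rotor should watch (hypothetical label key):
kubectl label namespace team-a rotor-watch=true
kubectl label namespace team-b rotor-watch=true

# Rotor would then list pods across all labeled namespaces via a label
# selector, instead of using a single fixed namespace flag.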

zone aware balancing question

If I wanted to configure Envoy to be zone aware, is it correct that I would need a Rotor process in each zone, or can I use a single Rotor instance? Thanks!

how to use WebSocket?

I didn't find any configuration to set that; can you tell me how to modify the protocol?
The config only contains host and port (and metadata), so maybe the config below is misused, I don't know.

- cluster: jenkins.102.co
  instances:
    - host: 10.6.8.102
      port: 8082
      metadata:
        - key: envoy.lb
          value: canary
        - key: use_websocket
          value: true
        - key: service_name
          value: jnkins.102.com

And I found that the listener's config is always set to:

var (
	xdsClusterConfig = envoycore.ConfigSource{
		ConfigSourceSpecifier: &envoycore.ConfigSource_ApiConfigSource{
			ApiConfigSource: &envoycore.ApiConfigSource{
				ApiType:      envoycore.ApiConfigSource_GRPC,
				ClusterNames: []string{xdsClusterName},
				RefreshDelay: ptr.Duration(xdsRefreshDelaySecs * time.Second),
			},
		},
	}
)

I've been stuck on this for a whole day, please help me out.

Thank you...

Not able to get healthy hosts using Rotor

Hi Team,

I am trying to integrate with Turbine Labs Rotor for Envoy xDS.
I was able to bring up Consul, Rotor, and Envoy (configured to talk to Rotor).

Issue: the hosts are not getting updated in Envoy.

Here are my logs from Rotor and Envoy; any help in figuring out why the hosts are not getting updated in Envoy would be deeply appreciated.

Envoy is brought up as below:
docker run -it -d --name envoy-rotor --link rotor -v /envoy/envoy-rotor.yaml:/etc/envoy-rotor.yaml -v /var/tmp/:/var/tmp/ -p 8005:8005 -p 8001:8001 envoyproxy/envoy envoy --v2-config-only -c /etc/envoy-rotor.yaml

Envoy.txt

Rotor is brought up as below:
docker run -it --name rotor -d -e "ROTOR_CMD=consul" -e "ROTOR_CONSUL_DC=xxx" -e "ROTOR_CONSUL_HOSTPORT=consul_ip:8500" -e "ROTOR_CONSOLE_LEVEL=trace" -p 50000:50000 turbinelabs/rotor:0.17.2

Rotor Logs:

2018-06-25T21:38:28.860117000Z [info] 2018/06/25 21:38:28 No API key specified, the API stats backend will not be configured.
2018-06-25T21:38:28.864072000Z [info] 2018/06/25 21:38:28 serving xDS on [::]:50000
2018-06-25T21:38:28.864689000Z [info] 2018/06/25 21:38:28 log streaming enabled
2018-06-25T21:55:29.202937000Z [info] 2018/06/25 21:55:29 respond type.googleapis.com/envoy.api.v2.ClusterLoadAssignment[envoyservice-1] version "rh0E+4kL7sP/bq9Mchn6nQ==" with version "hppZLEmbKZg9WVOtbF1aQQ=="
2018-06-25T21:55:29.218037000Z [info] 2018/06/25 21:55:29 Stream 1, type.googleapis.com/envoy.api.v2.ClusterLoadAssignment: ack response version "hppZLEmbKZg9WVOtbF1aQQ=="
2018-06-25T21:55:29.218281000Z [info] 2018/06/25 21:55:29 open watch 33 for type.googleapis.com/envoy.api.v2.ClusterLoadAssignment[envoyservice-1] from nodeID "{"proxy_name":"default-cluster","zone_name":"default-zone"}", version "hppZLEmbKZg9WVOtbF1aQQ=="
2018-06-25T21:56:29.231249000Z [info] 2018/06/25 21:56:29 respond open watch 33[envoyservice-1] with new version "HpiopUqw68kQRX56Z1eKIg=="
2018-06-25T21:56:29.231556000Z [info] 2018/06/25 21:56:29 respond type.googleapis.com/envoy.api.v2.ClusterLoadAssignment[envoyservice-1] version "hppZLEmbKZg9WVOtbF1aQQ==" with version "HpiopUqw68kQRX56Z1eKIg=="

Envoy Logs:

2018-06-25T21:55:20.720849000Z [2018-06-25 21:55:20.720][5][debug][upstream] source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 3749 milliseconds
2018-06-25T21:55:20.721800000Z [2018-06-25 21:55:20.721][5][debug][upstream] source/common/upstream/logical_dns_cluster.cc:77] async DNS resolution complete for rotor
2018-06-25T21:55:24.683702000Z [2018-06-25 21:55:24.682][5][debug][main] source/server/server.cc:118] flushing stats
2018-06-25T21:55:25.722346000Z [2018-06-25 21:55:25.721][5][debug][upstream] source/common/upstream/logical_dns_cluster.cc:69] starting async DNS resolution for rotor
2018-06-25T21:55:25.722733000Z [2018-06-25 21:55:25.721][5][debug][upstream] source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 3124 milliseconds
2018-06-25T21:55:25.723396000Z [2018-06-25 21:55:25.723][5][debug][upstream] source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 2811 milliseconds
2018-06-25T21:55:25.724564000Z [2018-06-25 21:55:25.724][5][debug][upstream] source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 3749 milliseconds
2018-06-25T21:55:25.725642000Z [2018-06-25 21:55:25.725][5][debug][upstream] source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 4061 milliseconds
2018-06-25T21:55:25.726663000Z [2018-06-25 21:55:25.726][5][debug][upstream] source/common/upstream/logical_dns_cluster.cc:77] async DNS resolution complete for rotor
2018-06-25T21:55:29.204203000Z [2018-06-25 21:55:29.202][5][trace][connection] source/common/network/connection_impl.cc:384] [C0] socket event: 3
2018-06-25T21:55:29.204458000Z [2018-06-25 21:55:29.202][5][trace][connection] source/common/network/connection_impl.cc:452] [C0] write ready
2018-06-25T21:55:29.204700000Z [2018-06-25 21:55:29.202][5][trace][connection] source/common/network/connection_impl.cc:422] [C0] read ready
2018-06-25T21:55:29.204946000Z [2018-06-25 21:55:29.202][5][trace][connection] source/common/network/raw_buffer_socket.cc:21] [C0] read returns: 100
2018-06-25T21:55:29.205206000Z [2018-06-25 21:55:29.202][5][trace][connection] source/common/network/raw_buffer_socket.cc:21] [C0] read returns: -1
2018-06-25T21:55:29.205430000Z [2018-06-25 21:55:29.202][5][trace][connection] source/common/network/raw_buffer_socket.cc:29] [C0] read error: 11
2018-06-25T21:55:29.205693000Z [2018-06-25 21:55:29.202][5][trace][http2] source/common/http/http2/codec_impl.cc:277] [C0] dispatching 100 bytes
2018-06-25T21:55:29.205928000Z [2018-06-25 21:55:29.202][5][trace][http2] source/common/http/http2/codec_impl.cc:335] [C0] recv frame type=0
2018-06-25T21:55:29.206166000Z [2018-06-25 21:55:29.202][5][trace][http] source/common/http/async_client_impl.cc:100] async http request response data (length=91 end_stream=false)
2018-06-25T21:55:29.206403000Z [2018-06-25 21:55:29.202][5][debug][upstream] source/common/config/grpc_mux_impl.cc:160] Received gRPC message for type.googleapis.com/envoy.api.v2.ClusterLoadAssignment at version hppZLEmbKZg9WVOtbF1aQQ==
2018-06-25T21:55:29.206645000Z [2018-06-25 21:55:29.202][5][debug][upstream] source/common/upstream/eds.cc:51] Missing ClusterLoadAssignment for envoyservice-1 in onConfigUpdate()
2018-06-25T21:55:29.206870000Z [2018-06-25 21:55:29.202][5][debug][config] bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_mux_subscription_lib/common/config/grpc_mux_subscription_impl.h:60] gRPC config for type.googleapis.com/envoy.api.v2.ClusterLoadAssignment accepted with 0 resources: []
2018-06-25T21:55:29.207093000Z [2018-06-25 21:55:29.202][5][trace][upstream] source/common/config/grpc_mux_impl.cc:80] Sending DiscoveryRequest for type.googleapis.com/envoy.api.v2.ClusterLoadAssignment: version_info: "hppZLEmbKZg9WVOtbF1aQQ=="
2018-06-25T21:55:29.207295000Z node {
2018-06-25T21:55:29.207533000Z id: "node504"
2018-06-25T21:55:29.207747000Z cluster: "default-cluster"
2018-06-25T21:55:29.208032000Z locality {
2018-06-25T21:55:29.208245000Z zone: "default-zone"
2018-06-25T21:55:29.208453000Z }
2018-06-25T21:55:29.208700000Z build_version: "067f8f6523f63d6f0ccd3d44e6fd2db97804af20/1.7.0-dev/Clean/RELEASE"
2018-06-25T21:55:29.208923000Z }
2018-06-25T21:55:29.209145000Z resource_names: "envoyservice-1"
2018-06-25T21:55:29.209371000Z type_url: "type.googleapis.com/envoy.api.v2.ClusterLoadAssignment"
2018-06-25T21:55:29.211162000Z response_nonce: "33"
2018-06-25T21:55:29.211414000Z
2018-06-25T21:55:29.211659000Z [2018-06-25 21:55:29.202][5][trace][router] source/common/router/router.cc:872] [C0][S9549407309502274984] proxying 217 bytes
2018-06-25T21:55:29.211905000Z [2018-06-25 21:55:29.202][5][trace][http2] source/common/http/http2/codec_impl.cc:292] [C0] dispatched 100 bytes
2018-06-25T21:55:29.212129000Z [2018-06-25 21:55:29.202][5][trace][connection] source/common/network/connection_impl.cc:321] [C0] writing 226 bytes, end_stream false
2018-06-25T21:55:29.212342000Z [2018-06-25 21:55:29.202][5][trace][http2] source/common/http/http2/codec_impl.cc:446] [C0] sent frame type=0
2018-06-25T21:55:29.212595000Z [2018-06-25 21:55:29.202][5][trace][connection] source/common/network/connection_impl.cc:384] [C0] socket event: 2
2018-06-25T21:55:29.212836000Z [2018-06-25 21:55:29.202][5][trace][connection] source/common/network/connection_impl.cc:452] [C0] write ready
2018-06-25T21:55:29.213051000Z [2018-06-25 21:55:29.202][5][trace][connection] source/common/network/raw_buffer_socket.cc:63] [C0] write returns: 226
2018-06-25T21:55:29.213275000Z [2018-06-25 21:55:29.203][5][trace][connection] source/common/network/connection_impl.cc:384] [C0] socket event: 3
2018-06-25T21:55:29.213528000Z [2018-06-25 21:55:29.203][5][trace][connection] source/common/network/connection_impl.cc:452] [C0] write ready
2018-06-25T21:55:29.213742000Z [2018-06-25 21:55:29.203][5][trace][connection] source/common/network/connection_impl.cc:422] [C0] read ready
2018-06-25T21:55:29.213963000Z [2018-06-25 21:55:29.203][5][trace][connection] source/common/network/raw_buffer_socket.cc:21] [C0] read returns: 30
2018-06-25T21:55:29.214194000Z [2018-06-25 21:55:29.203][5][trace][connection] source/common/network/raw_buffer_socket.cc:21] [C0] read returns: -1
2018-06-25T21:55:29.214405000Z [2018-06-25 21:55:29.203][5][trace][connection] source/common/network/raw_buffer_socket.cc:29] [C0] read error: 11
2018-06-25T21:55:29.214651000Z [2018-06-25 21:55:29.203][5][trace][http2] source/common/http/http2/codec_impl.cc:277] [C0] dispatching 30 bytes

Envoy Admin Stats:
cluster.envoyservice-1.bind_errors: 0
cluster.envoyservice-1.lb_healthy_panic: 0
cluster.envoyservice-1.lb_local_cluster_not_ok: 0
cluster.envoyservice-1.lb_recalculate_zone_structures: 0
cluster.envoyservice-1.lb_subsets_active: 0
cluster.envoyservice-1.lb_subsets_created: 0
cluster.envoyservice-1.lb_subsets_fallback: 0
cluster.envoyservice-1.lb_subsets_removed: 0
cluster.envoyservice-1.lb_subsets_selected: 0
cluster.envoyservice-1.lb_zone_cluster_too_small: 0
cluster.envoyservice-1.lb_zone_no_capacity_left: 0
cluster.envoyservice-1.lb_zone_number_differs: 0
cluster.envoyservice-1.lb_zone_routing_all_directly: 0
cluster.envoyservice-1.lb_zone_routing_cross_zone: 0
cluster.envoyservice-1.lb_zone_routing_sampled: 0
cluster.envoyservice-1.max_host_weight: 0
cluster.envoyservice-1.membership_change: 0
cluster.envoyservice-1.membership_healthy: 0
cluster.envoyservice-1.membership_total: 0
cluster.envoyservice-1.original_dst_host_invalid: 0
cluster.envoyservice-1.retry_or_shadow_abandoned: 0
cluster.envoyservice-1.update_attempt: 51
cluster.envoyservice-1.update_empty: 50
cluster.envoyservice-1.update_failure: 0
cluster.envoyservice-1.update_no_rebuild: 0

How to route to my defined clusters in Rotor file mode configuration

Hi,
@9len I want to use Rotor's file mode configuration; these are my configurations:

clusters.yaml

- cluster: example-cluster-1
  instances:
    - host: 172.27.71.209
      port: 8002
    - host: 172.27.71.209
      port: 8001
- cluster: example-cluster-2
  instances:
    - host: 172.27.71.209
      port: 8002

My docker run commands:

docker run --name hello-world1 -d -p 8001:80 containersol/hello-world
docker run --name hello-world2 -d -p 8002:80 containersol/hello-world
docker run -v $(pwd)/:/data    \
  -e 'ROTOR_CMD=file' \
  -e 'ROTOR_CONSOLE_LEVEL=debug' \
  -e 'ROTOR_FILE_FORMAT=yaml' \
  -e 'ROTOR_FILE_FILENAME=/data/clusters.yaml' \
  -p 50000:50000 \
  turbinelabs/rotor:0.19.0
docker run \
  -e 'ENVOY_XDS_HOST=172.27.71.209' \
  -e 'ENVOY_XDS_PORT=50000' \
  -p 9999:9999 \
  -p 80:80 \
  turbinelabs/envoy-simple:0.19.0

Now, when I make a request with curl localhost:80, I don't get any response from my hello-world containers!
Also, if I want to customize the routing so that each cluster is served under a different route, what should I do?

Example file config invalid

file config:

- cluster: example-cluster-1
  instances:
    - host: 127.0.0.1
      port: 8080
    - host: 127.0.0.1
      port: 8081
- cluster: example-cluster-2
  instances:
    - host: 127.0.0.1
      port: 8083

docker-compose file:

version: "2"
services:
  rotor:
    image: turbinelabs/rotor:0.18.0
    ports:
      - 50000:50000
    environment:
      - ROTOR_CONSOLE_LEVEL=debug
      - ROTOR_CMD=file
      - ROTOR_FORMAT=yaml
      - ROTOR_FILE_FILENAME=/data/routes.yml
    volumes:
      - ./data:/data

error log:

rotor_1  | Jul 11 10:20:12 add4810ee356 syslog-ng[11]: EOF on control channel, closing connection;
rotor_1  | *** Running /etc/rc.local...
rotor_1  | *** Booting runit daemon...
rotor_1  | *** Runit started as PID 17
rotor_1  | *** Running /usr/local/bin/rotor.sh...
rotor_1  | Jul 11 10:20:12 add4810ee356 cron[21]: (CRON) INFO (pidfile fd = 3)
rotor_1  | Jul 11 10:20:12 add4810ee356 cron[21]: (CRON) INFO (Running @reboot jobs)
rotor_1  | [info] 2018/07/11 10:20:12 No --api.key specified. Using standalone mode: Envoys will be configured to serve on port 80, for Envoy cluster "default-cluster" in zone "default-zone".
rotor_1  | [info] 2018/07/11 10:20:12 No API key specified, the API stats backend will not be configured.
rotor_1  | [info] 2018/07/11 10:20:12 watching /data/routes.yml
rotor_1  | [info] 2018/07/11 10:20:12 watching /data
rotor_1  | [debug] 2018/07/11 10:20:12 file: reload
rotor_1  | [info] 2018/07/11 10:20:12 serving xDS on [::]:50000
rotor_1  | [info] 2018/07/11 10:20:12 log streaming enabled
rotor_1  | [info] 2018/07/11 10:20:12 Stopping XDS gRPC server
rotor_1  | file: invalid character ' ' in numeric literal
rotor_1  |
rotor_1  | *** /usr/local/bin/rotor.sh exited with status 1.
rotor_1  | *** Shutting down runit daemon (PID 17)...
rotor_1  | *** Running /etc/my_init.post_shutdown.d/10_syslog-ng.shutdown...
rotor_1  | Jul 11 10:20:13 add4810ee356 syslog-ng[11]: syslog-ng shutting down; version='3.5.6'
rotor_1  | Jul 11 10:20:13 add4810ee356 syslog-ng[11]: EOF on control channel, closing connection;
rotor_1  | *** Killing all processes...
rotorandenvoy_rotor_1 exited with code 1

Export a hash for state of the world

One of the issues I've found across various service mesh technologies is that it can be difficult to determine the current state of service discovery. Imagine running multiple Rotor instances for availability. A bug manifests in one of the plugins (e.g. Kubernetes), and the state goes stale (oops, we lost the watch and never recovered!). If you are debugging a complex service mesh, how would you know? I think it would be extremely valuable to export, over the stats backends, a deterministic hash of the state of the world. Then, when monitoring your Rotor instances, you can be confident that they are all sharing the same information with Envoy.

In my experience this is sorely missing in the service mesh world as an aid to debugging when bugs manifest.
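A minimal sketch of the idea in Go, assuming a simplified Cluster/Instance model (these types and the wiring into a stats gauge are placeholders, not Rotor's actual API): serialize the discovery state in canonical order and hash it, so identical states always yield identical hashes across replicas.

package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

type Instance struct {
	Host string
	Port int
}

type Cluster struct {
	Name      string
	Instances []Instance
}

// stateHash returns a deterministic digest of the service-discovery state.
// Clusters and instances are sorted first so that ordering differences
// between Rotor replicas do not change the hash.
func stateHash(clusters []Cluster) string {
	sort.Slice(clusters, func(i, j int) bool { return clusters[i].Name < clusters[j].Name })
	h := sha256.New()
	for _, c := range clusters {
		sort.Slice(c.Instances, func(i, j int) bool {
			a, b := c.Instances[i], c.Instances[j]
			return a.Host < b.Host || (a.Host == b.Host && a.Port < b.Port)
		})
		fmt.Fprintf(h, "%s\n", c.Name)
		for _, in := range c.Instances {
			fmt.Fprintf(h, "  %s:%d\n", in.Host, in.Port)
		}
	}
	return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
	clusters := []Cluster{{Name: "example", Instances: []Instance{{"10.0.0.1", 8080}}}}
	// Two replicas holding the same state emit the same value; export it via
	// the stats backend (e.g. as a tagged gauge) and alert when replicas diverge.
	fmt.Println(stateHash(clusters))
}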

Identify file format based on filename

Hi there,

I was very excited to try Rotor this morning, using Rotor version 0.16.0, and ran into a small documentation/UX issue:

While following the instructions in the README to start with Flat Files Service Discovery, the README doesn't mention that to use a YAML file in the format described, one needs to pass the --format=yaml flag to Rotor.

Otherwise Rotor tries to parse the YAML as JSON, the default in the github.com/turbinelabs/codec library, and spits out file: invalid character ' ' in numeric literal errors with no obvious clue as to the cause.

I looked into guessing the file format from the filename, but it looks like it would require quite a few changes to the way flags are parsed.
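The guess itself is small; per the above, the hard part is wiring it into the flag parsing. A sketch in Go (the function and its fallback are illustrative, not Rotor's actual code):

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// guessFormat picks a codec from the file extension, falling back to the
// explicitly configured format (Rotor currently defaults to JSON).
func guessFormat(filename, flagFormat string) string {
	switch strings.ToLower(filepath.Ext(filename)) {
	case ".yaml", ".yml":
		return "yaml"
	case ".json":
		return "json"
	default:
		return flagFormat // extension is ambiguous; trust the flag
	}
}

func main() {
	fmt.Println(guessFormat("/data/clusters.yaml", "json")) // prints: yaml
}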

Get error on docker build

Just forked the repo and tried to run docker build on it. Got the following errors. Any ideas?

Step 10/21 : RUN go get github.com/turbinelabs/rotor/...
---> Running in 7d210f05d73c
package github.com/lyft/protoc-gen-validate/tests/harness/cases/go: cannot find package "github.com/lyft/protoc-gen-validate/tests/harness/cases/go" in any of:
/usr/local/go/src/github.com/lyft/protoc-gen-validate/tests/harness/cases/go (from $GOROOT)
/go/src/github.com/lyft/protoc-gen-validate/tests/harness/cases/go (from $GOPATH)
package github.com/lyft/protoc-gen-validate/tests/harness/cases/other_package/go: cannot find package "github.com/lyft/protoc-gen-validate/tests/harness/cases/other_package/go" in any of:
/usr/local/go/src/github.com/lyft/protoc-gen-validate/tests/harness/cases/other_package/go (from $GOROOT)
/go/src/github.com/lyft/protoc-gen-validate/tests/harness/cases/other_package/go (from $GOPATH)
package github.com/lyft/protoc-gen-validate/tests/harness/cases/gogo: cannot find package "github.com/lyft/protoc-gen-validate/tests/harness/cases/gogo" in any of:
/usr/local/go/src/github.com/lyft/protoc-gen-validate/tests/harness/cases/gogo (from $GOROOT)
/go/src/github.com/lyft/protoc-gen-validate/tests/harness/cases/gogo (from $GOPATH)
package github.com/lyft/protoc-gen-validate/tests/harness/cases/other_package/gogo: cannot find package "github.com/lyft/protoc-gen-validate/tests/harness/cases/other_package/gogo" in any of:
/usr/local/go/src/github.com/lyft/protoc-gen-validate/tests/harness/cases/other_package/gogo (from $GOROOT)
/go/src/github.com/lyft/protoc-gen-validate/tests/harness/cases/other_package/gogo (from $GOPATH)

Improve Kubernetes example for Rotor

As a best practice we should create a namespace for Rotor, so that when using kubectl create, the namespace is created, service accounts get set up, etc. without the user needing to fiddle with it (see the sketch after the list below).

  • Create a namespace out of the box for Rotor
  • Assign service accounts, etc. to use this namespace
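A minimal sketch of the manifests such an example could ship (all names illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: rotor
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rotor
  namespace: rotor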

Consul health_check does not work

Since Consul rejects : in a service's metadata keys, the health_check support does not work.

{{bold "Health Checks"}}

Node health checks will be added as instance metadata named following the pattern
"check:<check-id>" with the check status as value. Additionally "node-health" is
added for an instance within each cluster to aggregate all the other health
checks on that node that either are 1) not bound to a service or 2) bound to
the service this cluster represents. The value for this aggregate metadata will be:

    passing   if all Consul health checks have a "passing" value
    mixed     if any Consul health check has a "passing" value
    failed    if no Consul health check has the value of "passing"

hashicorp/consul#4422
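For reference, the node-health rollup quoted above amounts to something like the following Go sketch (not Rotor's actual implementation):

// aggregateHealth implements the documented rollup: "passing" if every
// relevant check passes, "failed" if none do, "mixed" otherwise.
func aggregateHealth(checkStatuses []string) string {
	passing := 0
	for _, s := range checkStatuses {
		if s == "passing" {
			passing++
		}
	}
	switch {
	case len(checkStatuses) > 0 && passing == len(checkStatuses):
		return "passing"
	case passing > 0:
		return "mixed"
	default:
		return "failed"
	}
}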

Static listener with tls_context support?

Hey guys,

When I enable tls_context for a static listener, Rotor fails to unmarshal the file:

could not deserialize static resources: json: cannot unmarshal string into Go value of type []json.RawMessage
Static config:
listeners:
- address:
    socket_address:
      address: 0.0.0.0
      port_value: 443
  filter_chains:
  - filters:
    - name: envoy.http_connection_manager
      config:
        codec_type: AUTO
        stat_prefix: ingress_http
        route_config:
          virtual_hosts:
          - name: backend
            domains:
            - "example.com"
            routes:
            - match:
                prefix: "/service/1"
              route:
                cluster: service1
            - match:
                prefix: "/service/2"
              route:
                cluster: service2
        http_filters:
        - name: envoy.router
          config: {}
    tls_context:
      common_tls_context:
        alpn_protocols: h2,http/1.1
        tls_params:
          tls_minimum_protocol_version: TLSv1_2
          tls_maximum_protocol_version: TLSv1_3
          cipher_suites: ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
        tls_certificates:
        - certificate_chain: { filename: /etc/envoy/cert.crt }
          private_key: { filename: /etc/envoy/cert.key }

Any ideas?
Thanks

Improve documentation on Kubernetes plugin

It took me a good minute to figure this out, and I can definitely see how this will bite first-time users. I set up Rotor + k8s service discovery and created a pod with the correct labels, and yet… nothing. Envoy showed nothing in config_dump, and the Rotor logs were not helpful, so I tried lowering Rotor's log level to debug. For some reason my pod was not being picked up. It wasn't until I remembered from my Istio days that you need to set the port name to http, which, when I finally did, let Rotor pick up the pod. I also had to go through the source code to confirm this. I think it would be helpful if the project had a simple webserver example in k8s to show this and to call it out. Rotor should also at the very least log that a pod was found but will not be considered because its port name didn't match http.

Actionable Items

  • Provide an example Kubernetes pod spec with the correct port names and cluster label (see the sketch below).
  • Add better logging to the Rotor Kubernetes plugin so it outputs at INFO that a pod was found but will not be considered because its port did not match the expected name (e.g. http). This would really help users debug mistakes.

I am happy to do this if you agree it's worthwhile.
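For the first item, a minimal sketch of such a pod spec. The label key/value are illustrative (use whatever selector your Rotor deployment is configured to watch); the important detail from this issue is that the container port is named http:

apiVersion: v1
kind: Pod
metadata:
  name: hello-world
  labels:
    # Must match the label selector your Rotor deployment watches
    # (illustrative key/value).
    tbn_cluster: hello-world
spec:
  containers:
  - name: hello-world
    image: containersol/hello-world
    ports:
    - name: http          # Rotor only considers ports named "http"
      containerPort: 80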

No way to enable HTTP2

For CDS, the returned clusters do not have http2_protocol_options set, and there is no way to set them either (cds.go). Hence upstreams that support HTTP/2 don't work. Since gRPC uses HTTP/2, cluster discovery via Rotor does not work with gRPC services.

Can we add another bool parameter to the Cluster struct, called enableHttp2 (cluster.go)? Different plugins can have their own mechanisms for determining the value of this parameter. For example, for the EC2 integration, the presence of an extra tag could indicate whether HTTP/2 should be enabled.
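A rough sketch of the proposed change; the field and function names are hypothetical, and only Http2ProtocolOptions itself is part of the real go-control-plane v2 API:

package rotorsketch

import (
	envoyapi "github.com/envoyproxy/go-control-plane/envoy/api/v2"
	envoycore "github.com/envoyproxy/go-control-plane/envoy/api/v2/core"
)

// Proposed addition to the Cluster struct (cluster.go): a flag each
// collector plugin can set from its own source of truth (an EC2 tag,
// a Kubernetes annotation, file config, ...).
type Cluster struct {
	Name        string
	EnableHttp2 bool
}

// In the CDS translation (cds.go), emit http2_protocol_options when the
// flag is set, so gRPC/HTTP2 upstreams negotiate correctly.
func toEnvoyCluster(c Cluster) *envoyapi.Cluster {
	out := &envoyapi.Cluster{Name: c.Name}
	if c.EnableHttp2 {
		out.Http2ProtocolOptions = &envoycore.Http2ProtocolOptions{}
	}
	return out
}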

Project status

Hi all.

Tell me, please: what is the status of the project after the shutdown of Turbine Labs and the team's transition to Slack? Is it possible to use Rotor in production, or is it better to look for an alternative?
