
xds's Introduction

xDS API Working Group (xDS-WG)

Goal

The objective of the xDS API Working Group (xDS-WG) is to bring together parties across the industry interested in a common control and configuration API for data plane proxies and load balancers, based on the xDS APIs.

Vision

The xDS vision is one of a universal data plane API, articulated at https://blog.envoyproxy.io/the-universal-data-plane-api-d15cec7a. xDS aims to provide a set of APIs that serve as the de facto standard for L4/L7 data plane configuration, similar to the role played by OpenFlow at L2/L3/L4 in SDN.

The existing Envoy xDS APIs constitute the basis for this vision and will incrementally evolve towards supporting a goal of client neutrality. We will evolve the xDS APIs to support additional clients, for example data plane proxies beyond Envoy, proxyless service mesh libraries, hardware load balancers, mobile clients and beyond. We will strive to be vendor and implementation agnostic to the degree possible while not regressing on support for data plane components that have committed to xDS in production (Envoy & gRPC to date).

The xDS APIs have two delineated aspects: a transport protocol and a data model. The xDS transport protocol provides low-latency, versioned, streaming delivery of xDS resources over gRPC. The data model covers common data plane concerns such as service discovery, load balancing assignments, routing discovery, listener configuration, secret discovery, load reporting, and health check delegation.

Repository structure

The xDS APIs are split between this repository and https://github.com/envoyproxy/envoy/tree/main/api. Our long-term goal is to move the entire API to this repository; this will be done opportunistically over time as we generalize parts of the API to be less client-specific.

Mailing list and meetings

We have an open mailing list [email protected] for communication and announcements. We also meet on an ad hoc basis via Zoom.

To monitor activity, you can either subscribe to a GitHub watch on this repository or join the @cncf/xds-wg team for tagging on key PRs and RFCs.

xds's People

Contributors

alyssawilk, caniszczyk, efimki, ggreenway, howardjohn, htuch, keith, kyessenov, markdroth, mattklein123, mmorel-35, phlax, sergiitk, snowp, tyxia, yanavlasov, yousukseung

xds's Issues

ReplicaSet and xDS node IDs

Hi,

I have a question regarding the use of the node ID for Envoy when using xDS.
Is it possible to use the same node ID for Envoy instances when running them in a ReplicaSet? Generally speaking, is it discouraged to use the same node ID for more than one Envoy, even if they should have the exact same config?

Thanks!
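For what it's worth, a minimal Go sketch of one common pattern, assuming the go-control-plane corev3 package and a POD_NAME environment variable populated via the Kubernetes downward API (both assumptions, not anything this repo prescribes): give each replica a distinct node ID while sharing the cluster field, so the control plane can still serve them identical config.

package main

import (
	"fmt"
	"os"

	corev3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
)

func main() {
	// Derive a unique node ID from the pod name so each replica is
	// individually addressable; the shared "cluster" field groups the
	// replicas so they can all receive the same configuration.
	podName := os.Getenv("POD_NAME") // hypothetical, set via downward API
	node := &corev3.Node{
		Id:      "my-proxy-" + podName, // unique per replica
		Cluster: "my-proxy",            // shared across the ReplicaSet
	}
	fmt.Println(node.GetId(), node.GetCluster())
}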

Please upgrade go/go.mod grpc package

Please upgrade google.golang.org/grpc in go/go.mod. Using go mod graph, I found that this repository was the reason I got the following error, because it references v1.25.1:
cloud.google.com/go/compute/metadata: ambiguous import: found package cloud.google.com/go/compute/metadata in multiple modules:

Other modules have begun to reference the cloud.google.com/go/compute/metadata package, which was moved out of cloud.google.com/go. Updating the grpc reference should begin to clear up this issue.

My go mod graph produced the following.
github.com/cncf/xds/[email protected] google.golang.org/[email protected]
google.golang.org/[email protected] github.com/envoyproxy/[email protected]
github.com/envoyproxy/[email protected] google.golang.org/[email protected]
google.golang.org/[email protected] cloud.google.com/[email protected]
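In the meantime, a workaround sketch for downstream modules (the version below is an assumed example of a newer grpc-go release, not a verified minimum): require a newer grpc in your own go.mod so the transitive v1.25.1 requirement no longer drives resolution.

// Downstream go.mod (sketch, not part of this repo).
// Any grpc-go release new enough to depend on the split-out
// cloud.google.com/go/compute/metadata module should work here.
require google.golang.org/grpc v1.56.3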

Consider tagging module "go"?

I found that go packages, say github.com/cncf/xds/go/udpa/annotations, are placed in submodule github.com/cncf/xds/go.

However, it seems that the module github.com/cncf/xds/go is not tagged. According to the Go Modules wiki, a submodule should be tagged like relative-path-to-root/vX.X.X.
As it stands, when trying to import the package github.com/cncf/xds/go/udpa/annotations, downstream modules depend on a pseudo-version of the module github.com/cncf/xds/go.

github.com/cncf/xds/go v0.0.0-20211001041855-01bcc9b48dfe

I think this is not very readable and makes upgrades difficult. It is not conducive to version control either.
So, I propose tagging the submodule properly to better support Go users, for example go/v0.0.1, go/v1.0.0, etc., so that other projects can use a tag to import this module in go.mod.
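To illustrate, if a hypothetical tag go/v0.1.0 existed, a downstream go.mod could carry a readable, upgradable requirement instead of the pseudo-version above:

// Downstream go.mod (sketch; the go/v0.1.0 tag does not exist yet).
require github.com/cncf/xds/go v0.1.0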

Clarification on glob collections behavior

Hello, I have a question on a specific aspect of glob collections. In the definition for glob collections, the following is mentioned:

Globs can exist in arbitrary path locations, e.g. xdstp://some-authority/envoy.config.listener.v3.Listener/some/longer/path/*. Multiple globs may be subscribed to in a DeltaDiscoveryRequest.

Does this mean that the following resource:

xdstp://some-authority/envoy.config.listener.v3.Listener/some/longer/path/resourceName

belongs to each of these collections?

xdstp://some-authority/envoy.config.listener.v3.Listener/some/longer/path/*
xdstp://some-authority/envoy.config.listener.v3.Listener/some/longer/*
xdstp://some-authority/envoy.config.listener.v3.Listener/some/*

Or are glob collection URLs / resource URNs meant to be opaque, without actually implying hierarchy?
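For concreteness, here is what the pure-prefix reading of glob membership would look like, as a Go sketch; this is one possible interpretation for discussion, not an authoritative answer from the spec:

package main

import (
	"fmt"
	"strings"
)

// matchesGlob reports whether resourceName belongs to the glob
// collection URL, assuming (and this is exactly the open question)
// that membership is plain string-prefix matching on the path.
func matchesGlob(glob, resourceName string) bool {
	prefix, ok := strings.CutSuffix(glob, "*")
	if !ok {
		return false // not a glob collection URL
	}
	return strings.HasPrefix(resourceName, prefix)
}

func main() {
	res := "xdstp://some-authority/envoy.config.listener.v3.Listener/some/longer/path/resourceName"
	for _, glob := range []string{
		"xdstp://some-authority/envoy.config.listener.v3.Listener/some/longer/path/*",
		"xdstp://some-authority/envoy.config.listener.v3.Listener/some/longer/*",
		"xdstp://some-authority/envoy.config.listener.v3.Listener/some/*",
	} {
		fmt.Println(glob, "->", matchesGlob(glob, res)) // all true under this reading
	}
}

Under this reading the resource belongs to all three collections; under an opaque-URN reading it would belong only to the first.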

Stable release of Go generated protos

Anyone writing a Go application that wishes to use Envoy protos will need a stable release of this repo. In case you are not aware, proto definitions cannot be generated independently by libraries, since multiple versions of the generated code for a single message that make their way into the same binary will cause a panic at init time. And without a stability guarantee, use of this repo as the official source of the protos is risky. Please create a stable v1.x release of this repo ASAP. Thank you.

Related to cncf/udpa#42 and envoyproxy/go-control-plane#391.

cc @htuch

Go Bazel builds fail to find cel.dev/expr

In #89 the cel.dev/expr based annotations were reintroduced. Unfortunately, there still seem to be issues when trying to use Bazel, Go and Gazelle together, and importing this module.

I've recently added a dependency on github.com/cncf/xds/go/xds/type/matcher/v3 and updated my dependency on go-control-plane to pick up a bugfix (envoyproxy/go-control-plane@77feb56 ).

(I'm not sure my new dependency there matters, tbh. It's just very close and maybe adding confusion.)

This update to go-control-plane bumped the version of this library, and thus picked up the cel.dev/expr change.

Now, my builds fail with:
ERROR: /private/var/tmp/_bazel_randerson/01ba2733315bf63f62b0e43c3a981a72/external/com_github_cncf_xds_go/xds/type/v3/BUILD.bazel:3:11: no such target '@dev_cel_expr//:expr': target 'expr' not declared in package '' defined by /private/var/tmp/_bazel_randerson/01ba2733315bf63f62b0e43c3a981a72/external/dev_cel_expr/BUILD.bazel (Tip: use query "@dev_cel_expr//:*" to see all the targets in that package) and referenced by '@com_github_cncf_xds_go//xds/type/v3:type'

The BUILD file there doesn't actually define any go libraries, so this isn't particularly surprising, but I don't think this commit is working as expected right now.

I'm not really sure if the problem is here or in cel.dev/expr, but ... well, debugging 3 layers of modules down is not fun.

Need documentation

Envoy is taking a dependency on protos from this repo. That means Envoy docs need references to the protos here in a better form than raw source code.

Glob collections behavior on partial responses

I have a question on how clients are supposed to interpret glob collection responses from an xDS control plane. gRPC has a default message limit of 4MB, which can cause clients to reject a response from the control plane if it is too large. In practice, most glob collections will be small enough to fit in a single response, however, at LinkedIn, some clusters teeter over the edge of this limit during high load, which was causing some clients to simply reject the response. This is especially likely during startup since the clients may request multiple collections at once which can easily cross this size threshold. Because the limit is not trivial to raise (and there is no guarantee a single value will fit all usecases), our control plane implementation instead splits the response into multiple "chunks", each representing a subset of the collection, such that each response is smaller than 4MB. However, this raises the question of how the client should behave under such circumstances.

The spec does not dictate that the collection be sent as a whole every time (nor should it, for the reason listed above), but it also provides no way to mark the "end" of a collection or a means to provide the collection's size. This means in some extreme cases the client may receive only a very small subset of the collection on the initial response from the control plane. In this scenario, should the client:

  1. Wait an arbitrary amount of time for the control plane to send the rest of the collection? In the case where the client already received everything, it could introduce unwanted latency.
  2. Simply treat the contents of the response as the full collection, even if it is partial? This is equally bad since it could cause the client to send too much traffic to a subset of hosts if the collection is being used for LEDS.

There is no room in the protocol today to really communicate the size of the collection, and arguably it's something that would provide little to no purpose other than for this specific edge case. My suggestion would be to mimic the glob collection deletion notification, but in reverse. Here is what it would look like (following the example in TP1):

  1. Client requests xdstp://some-authority/envoy.config.listener.v3.Listener/foo/*.
  2. Server responds with resources [xdstp://some-authority/envoy.config.listener.v3.Listener/foo/bar, xdstp://some-authority/envoy.config.listener.v3.Listener/foo/baz, xdstp://some-authority/envoy.config.listener.v3.Listener/foo/*].

By adding the glob collection's name to the response, the control plane can signal to the client that it has sent everything, effectively bookending the response. The client can then wait for this "end-of-glob-collection" notification to unambiguously determine whether it has received every resource in the collection. The resource named after the collection would have to be null or some special value to prevent it from being interpreted as an actual member of the collection. This proposal could require some changes to clients, but the problem seems important to address as more systems adopt the xDS protocol.
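As a rough sketch of the client-side handling this would enable (the types below are simplified stand-ins, not the actual xDS protos):

package main

import "fmt"

// resource is a simplified stand-in for one entry in a delta response.
// A nil/absent body on the resource named after the glob would mark the
// proposed end-of-collection sentinel.
type resource struct {
	name string
}

// collect buffers chunked responses for a glob collection and returns
// once the sentinel resource named after the glob itself arrives.
func collect(glob string, responses <-chan []resource) []resource {
	var members []resource
	for resp := range responses {
		for _, r := range resp {
			if r.name == glob {
				return members // collection received in full
			}
			members = append(members, r)
		}
	}
	return members // stream closed without a sentinel: completeness unknown
}

func main() {
	glob := "xdstp://some-authority/envoy.config.listener.v3.Listener/foo/*"
	ch := make(chan []resource, 2)
	ch <- []resource{{name: "xdstp://some-authority/envoy.config.listener.v3.Listener/foo/bar"}}
	ch <- []resource{{name: "xdstp://some-authority/envoy.config.listener.v3.Listener/foo/baz"}, {name: glob}}
	close(ch)
	fmt.Println(len(collect(glob, ch)), "resources received") // 2 resources received
}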

Add `buf` linting

We are just adding this to Envoy, initially only to stop unused imports, but I think many of the other checks could also be useful.

[question] how to create an xDS data plane from scratch?

I want to create an xDS-compatible data plane from scratch. How should I proceed?

  • Does this repository already have the minimal API set to bootstrap a proof of concept?
  • The PoC will be in Java; how cumbersome is it to build this xDS API using Java? I know nothing about Starlark.
  • Once built, how can I use this API with a control plane, say go-control-plane or Istio? What is the trick to make this work?
  • Is there any documentation/example/e2e-testing for this API in addition to the Envoy website?

Missing CI and DCO

This repo doesn't seem to trigger CI on pull requests.

I'm aware that in the older udpa repo there was (sometimes?) a need to recompile the go binary, which might not get caught without CI.

ref #3

Matching an element in a list

There are several use cases in Envoy where we need to iterate over a list of items and match each item individually:

  • selecting the first ALPN that a nested filter chain matcher accepts;
  • matching an item in the DNS SAN list of a TLS cert;
  • matching an IP value in the XFF header.

Ideally, we want to support all existing matchers for these use cases, instead of adapting all matchers to expect a list.

How should we define this in xds?

Option 1: Add a common input that takes another input and produces an iterator of data items. This requires some changes in the implementations to return an iterator for data inputs.
Option 2: Add a custom matcher that takes another matcher as input. The nested matcher can be any of string, custom input, prefix map, exact map, or custom sub-linear matcher. The custom matcher parses the input and then attempts to match elements one by one.
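As a rough Go illustration of Option 2 (simplified stand-ins, not the actual Envoy matcher protos): a list matcher wraps an inner single-value matcher and accepts as soon as any element matches.

package main

import (
	"fmt"
	"strings"
)

// Matcher is a simplified stand-in for a single-value xDS matcher.
type Matcher func(input string) bool

// anyOf adapts a single-value matcher to a list of items, as in
// Option 2: elements are tried one by one and the first match wins.
func anyOf(inner Matcher) func(items []string) bool {
	return func(items []string) bool {
		for _, item := range items {
			if inner(item) {
				return true
			}
		}
		return false
	}
}

func main() {
	// Example: match any DNS SAN ending in ".example.com".
	matchSAN := anyOf(func(s string) bool {
		return strings.HasSuffix(s, ".example.com")
	})
	fmt.Println(matchSAN([]string{"spiffe://a.test", "b.example.com"})) // true
}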

Ref: envoyproxy/envoy#20666

Get cel.dev/expr error with go get

After importing github.com/cncf/xds/go/xds/type/v3, the go get command fails with an error:

go: main imports
        github.com/cncf/xds/go/xds/type/v3 imports
        cel.dev/expr: cannot find module providing package cel.dev/expr

main.go

package main

import (
        // Blank import so the file compiles on its own while still
        // pulling in the module that triggers the go get failure.
        _ "github.com/cncf/xds/go/xds/type/v3"
)

func main() {}

go.mod

module main

go 1.21

Request to regenerate Go grpc bindings with newer version of the tool

While looking at the generated *.pb.go files in this repository, I noticed that the code had been generated by an older version of the tool that generates Go bindings for gRPC services (protoc-gen-go-grpc). The latest version can be found here.

Specifically, I'm talking about the generated functions which are used to register service handlers. For example, this one:

func RegisterOpenRcaServiceServer(s *grpc.Server, srv OpenRcaServiceServer) {

As one can see, this function takes two inputs:

  • the first one being a *grpc.Server, and
  • the second one being the service handler which is to be registered.

Newer versions of the tool generate code which uses the ServiceRegistrar interface as its first input instead. For example, see here. This is more flexible because it allows concrete types other than *grpc.Server to be used as the registration target.

grpc-go provides a version of a gRPC server which uses xDS to get its configuration. You can find that here. For the gRPC services contained in the *.pb.go files in this repository to be registered on the above-mentioned xDS-enabled gRPC server, they need to be generated using a newer version of protoc-gen-go-grpc.
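To make the difference concrete, a small sketch of the two generated signatures and why the interface form composes better (the register helper is hypothetical; only grpc.NewServer and grpc.ServiceRegistrar are real grpc-go API):

package main

import "google.golang.org/grpc"

// Current generated code in this repo:
//   func RegisterOpenRcaServiceServer(s *grpc.Server, srv OpenRcaServiceServer)
// Newer protoc-gen-go-grpc output (what this issue requests):
//   func RegisterOpenRcaServiceServer(s grpc.ServiceRegistrar, srv OpenRcaServiceServer)

// register accepts the interface, so any server implementation that
// satisfies grpc.ServiceRegistrar works, including grpc-go's
// xDS-enabled server, and not just a concrete *grpc.Server.
func register(s grpc.ServiceRegistrar) {
	_ = s // service registration would happen here
}

func main() {
	register(grpc.NewServer()) // *grpc.Server satisfies ServiceRegistrar
}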

MODULE.bazel depends on a bad protobuf version (for Java)

MODULE.bazel is using protobuf 27.0-rc2 and disagrees with bazel/repository_locations.bzl which uses 21.5. For starters, why use an RC version?

Protobuf versions after 23.x start a new major version in Java, 4.x. However, the ecosystem is blocked on protocolbuffers/protobuf#17247 before it can upgrade. Yes, you build from source with Bazel, but Bazel builds for Java still pull many things from Maven Central (via maven_install), which could have been compiled with older versions of protobuf and thus be incompatible with 27.0-rc2.

In grpc-java we were trying to swap to xds from BCR, but noticed the newer protobuf, which means we can't upgrade as it is simply too dangerous/confusing to users. I'm happy to see #96, but it seems the versions should become somewhat aligned with repository_locations.bzl and protobuf downgraded to 21.7.

I looked through the transitive dependencies, and didn't see any that used such a new protobuf version. But I didn't actually try a build to verify.

CC @keith, @sergiitk

Add breaking change detection

Following up on a Slack conversation about the possibility of using buf breaking change detection.

The context was a discussion about moving Envoy's protos to the xds repo and whether we would be able to avoid running all of Envoy's CI in order to detect breaking changes.

I don't have a concrete answer to that yet; I'm just getting a feel for how buf works. I think the best first step is probably to add the buf dependency and some initial linting (#45).

From there, I would suggest we look at if/how it could be used with the existing xds protos.

My initial (limited) understanding is that it works by checking local protos against published ones. It may not do what we need in terms of the above goals, but it is probably still quite useful even if it doesn't.

Broken build due to missing protobuf:py_proto_library rule

gRPC recently found out that the cncf/xds repo won't work with the HEAD version of protobuf, because protobuf:py_proto_library was removed (technically renamed) by protocolbuffers/protobuf#10132. The build therefore ends up with the following error.

ERROR: Traceback (most recent call last):
        File "/usr/local/google/home/veblush/.cache/bazel/_bazel_veblush/c4652c20fd8d5880d194bf82693e4fee/external/com_github_cncf_udpa/bazel/api_build_system.bzl", line 2, column 66, in <toplevel>
                load("@com_google_protobuf//:protobuf.bzl", _py_proto_library = "py_proto_library")
Error: file '@com_google_protobuf//:protobuf.bzl' does not contain symbol 'py_proto_library'
ERROR: /usr/local/google/home/veblush/git/grpc/BUILD:7688:34: error loading package '@com_github_cncf_udpa//xds/type/v3': Extension file 'bazel/api_build_system.bzl' has errors and referenced by '//:xds_type_upbdefs'

They say that protobuf:py_proto_library is an internal rule, so you may want to move away from it.

Use a precompiled `protoc` toolchain

We have just updated the Envoy repo to make use of a precompiled protoc, which speeds up a load of things.

Recompiling the Go protos, I immediately noticed that I was compiling protoc.

One option is using protobuf from rules_proto, which makes use of the precompiled binaries, but this then ties you to their version of protobuf.

Another option is to patch protobuf, which is what Envoy does.

Better organize Bazel build definitions so that rules_go isn't a hard dependency

Currently, proto targets for different languages are defined in a BUILD file by loading the xds_proto_package macro (e.g. in //xds/data/orca/v3). The bzl file that defines xds_proto_package in turn loads macros from rules_go:

load("@io_bazel_rules_go//proto:def.bzl", "go_grpc_library", "go_proto_library")

This is a bad practice because it forces any other project (see grpc) that depends on any target in //xds/data/orca/v3 to fetch rules_go, even though the project doesn't depend on the go_proto_library target. This is a common pitfall with Bazel; see bazelbuild/bazel#12835.

A better package structure is to split targets into different packages by language, e.g.:

  • proto_library under //xds/data/orca/v3
  • go_proto_library under //xds/data/orca/v3/go
  • py_proto_library under //xds/data/orca/v3/py
  • cc_proto_library under //xds/data/orca/v3/cc

so that dependencies for specific languages are not fetched when the corresponding package is not needed.

To maintain backwards compatibility, you can still have alias targets in //xds/data/orca/v3 that point to the language-specific packages.
