spec's People

Contributors

jedisct1, mathetake, piotrsikora, shukitchan, spacewander


spec's Issues

Offline validation for config passed to proxy wasm

Proxy-Wasm extensions allow people to extend proxy functionality in the request/response lifecycle; they are deployed alongside proxies but have their own lifecycle.

One big issue right now is that the config passed to those extensions is mainly validated at runtime, meaning that users get no feedback about an incorrect config until a deployment has been attempted. For example:

  • in Envoy, a failure in the config crashes the proxy.
  • in Istio, Wasm plugins are eventually deployed, and we have no clue that a plugin failed due to its config unless we look at the logs or describe the pod.

A new proxy_validate_config method could be added to the ABI:

proxy_validate_config
params:
i32 (uint32_t) root_context_id
i32 (size_t) plugin_configuration_size
returns:
i32 (proxy_result_t) call_result

The result could be a list of errors, e.g. "[LINE 1] Unexpected token {", or, going forward, a flat map with the location, e.g. ".rules[0]", but that assumes all configs are structured, which I am not 100% sure about.

Ping @mathetake @PiotrSikora

Related: corazawaf/coraza-proxy-wasm#88 (comment)

How can a module signal unexpected failure?

Heya. 😃

I've been playing with the proxy-wasm spec recently, and I've stumbled upon this: if something goes (terribly) wrong in one of the exported functions of my Wasm module, what can I do about it?

My expectation would be some sort of host function to be imported by the wasm module, say proxy_abort, which would be called in those abysmal circumstances.
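In the style of the other ABI functions, a hypothetical signature for such a host function might look like the following (the name and parameters are purely a sketch, not part of the spec):

proxy_abort
params:
i32 (ptr) message_data
i32 (size_t) message_size
returns:
(none - traps; does not return to the module)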

I may very well just be missing something obvious.... 🤔

The spec should define semantics and values for proxy_result_t

The spec mentions that proxy_result_t can express either a success or an error status, but does not define how it does so. Looking at the implementation (enum class WasmResult), it seems that 0 is success and there is a set of well-defined non-zero error codes. The spec ought to document all of these as part of the ABI.

How to start a proxy-wasm-XXLang-sdk from scratch?

The combination of Envoy and Wasm seems awesome, and I'd like to help add another programming language for it.

There are Rust and C++ SDKs here already; I'm just wondering how to take the first step. Are there any docs or tutorials I should read first?

Add documentation about exported WASI functions

Given that Proxy-Wasm is a sort of "extension" of WASI, this specification should document which WASI functions are expected to be exported to WasmVMs. That would be helpful for both SDK developers and users.

Actually, some WASI functions are already available in cpp-host, though some of them just return __WASI_ESUCCESS and are virtually unimplemented.

Properly document v0.2.1 ABI

Unfortunately, the Proxy-Wasm ABI hasn't been properly documented since the beginning, and it started being used by Envoy and Istio without a proper reference beyond the code bases of Envoy and the Proxy-Wasm SDK/host implementations. That has caused a lot of confusion for anyone interested in this project, not only end users but also those who are willing to contribute.

As the former/current leads of this project discussed in the comments at #38 (comment), to break this standstill, which has lasted for the last three years, we agreed to correctly document the currently implemented v0.2.1 ABI, which is the de facto standard in this space (implemented by Envoy and other proxies).

After this is resolved, we can incrementally discuss and fix the issues associated with the current ABI, such as #5 #32 #38.

cc @vikaschoudhary16 @jcchavezs @anuraaga @PiotrSikora @martijneken @mpwarres @codefromthecrypt @M4tteoP

onConfigure should also return PENDING if config processing is not completed

Here is an example of a whitelist plugin that uses externally managed whitelists.

  1. OnConfigure(white_list_url) --> the URL is valid, so it returns True.
  2. But the URL has not been fetched yet, so the plugin cannot process requests.

Options:

  1. Until fully initialized, the plugin just lets everything pass.
    This is left to the plugin to decide and code correctly.
  2. The listener is kept in "warming" (not ready) until all plugins are operational:
    a. onConfigure returns PENDING
    b. At a later point, on I/O completion, the plugin calls proxy_config_done({success, failure})
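A sketch of what option 2 could look like in ABI terms, in the style of the other functions in this spec (the PENDING return value and proxy_config_done are hypothetical, not part of the current ABI):

proxy_config_done
params:
i32 (uint32_t) root_context_id
i32 (bool) success
returns:
i32 (proxy_result_t) call_result

Here, proxy_on_configure would return PENDING instead of true/false, and the host would keep the listener in warming until proxy_config_done reports success or failure.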

ABI vNEXT: generic vs specialized context functions

The current version of the ABI defines generic proxy_on_context_create() and proxy_on_context_finalize() functions which are used for VM and plugin lifecycle, as well as for per-stream lifecycle, which makes "context" a bit overloaded and confusing.

For VM context:

proxy_on_context_create(VmContext, ...)
proxy_on_vm_start(...)
proxy_on_context_finalize(...)

For plugin context:

proxy_on_context_create(PluginContext, ...)
proxy_on_plugin_start(...)
proxy_on_context_finalize(...)

For HTTP per-stream context:

proxy_on_context_create(HttpContext, ...)
proxy_on_http_request_headers(...)
proxy_on_http_request_body(...)
proxy_on_http_response_headers(...)
proxy_on_http_response_body(...)
proxy_on_context_finalize(...)

Instead, we could have specialized "finalize" functions for each context type, and create context implicitly as part of each "start" function:

For VM context:

proxy_on_vm_start(...)
proxy_on_vm_shutdown(...)

For plugin context:

proxy_on_plugin_start(...)
proxy_on_plugin_shutdown(...)

For HTTP per-stream context:

proxy_on_http_request_headers(...)
proxy_on_http_request_body(...)
proxy_on_http_response_headers(...)
proxy_on_http_response_body(...)
proxy_on_http_finalize(...)

In case proxy_on_http_finalize() returns Action::Pause, the host would wait until proxy_resume() is called before calling proxy_on_http_finalize() again (which matches the behavior of other callbacks that have the ability to pause/resume processing).

@mattklein123 @htuch @yskopets @yuval-k @gbrail @jplevyak @kyessenov any thoughts?

Support Golang SDK

Is there any plan to support a Golang SDK in the near future? My team is planning to use Golang to develop Envoy extensions. We're considering implementing a proxy-wasm Golang SDK based on the given spec if there's no recent plan to do so.

Is the reading host env function workable with istio-proxy?

@mathetake I am very interested in this reading-host-env feature, so I just reviewed all the issues and PRs around this topic, but I am still not very clear on how to use it in a real case. Could you give some tips, such as how to configure it in envoy.yaml and how to write the code in the Wasm filter in C++?
I actually tried it like this:
in the Istio envoyfilter.yaml:

config:
  root_id: my_root_id
  vm_config:
    code:
      local:
        filename: /var/local/lib/wasm-filters/example-filter.wasm
    runtime: envoy.wasm.runtime.v8
    vm_id: "my_vm_id"
    environment_variables: ["HOME"]

in my wasm-filter.cc

    this->theHome = std::getenv("HOME");

Unfortunately, I got no output. Did I do anything wrong here? By the way, is this function supported by istio-proxy, and what is the minimum Istio version required? Looking forward to your help; thanks in advance.

Create TextReadout metrics

Allow Wasm filters to create TextReadout metrics; currently only Counters, Gauges, and Histograms are supported.

Possible use case:

We developed a Wasm filter with asynchronous initialization, which fetches the necessary configuration data. To allow other components to query the initialization status, the filter sets a gauge metric to signal when initialization is complete. Additionally, we want to expose a TextReadout metric that displays an error message if initialization fails. Currently, the filter logs errors, but we want other components to be able to query the error value directly instead of searching through logs.

Support for writing for WASM for UDP

Hey, I'm new to Wasm for Envoy, and while trying the examples I see support for HTTP & TCP, but there is no UDP protocol support. I want to check whether this is something that will be added to the proxy-wasm spec in the future. Thank you.

Can the same sandbox instance be shared with the same extension (such as Filter) ?

Take the proxy_get_buffer API as an example:

Filter A (context ID: 1) and Filter B (context ID: 2) are XxxFilter extension instances. Filter A is handling OnHttpRequestHeader, and Filter B is handling OnHttpResponseBody. Both filter instances invoke proxy_get_buffer to get the HTTP header, but they don't pass the contextId. On the host side, how do you know which HTTP request header to return correctly?

If the contextId were passed to the host, the host could correctly identify the filter instance and return the right HTTP header.

According to my understanding, each Wasm module instance should be equivalent to an isolation sandbox, and sharing the same instance should save resources. If my understanding is incorrect, please correct me.
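For what it's worth, the v0.2.x ABI appears to address this with the notion of an "effective" context: the host sets the active context before delivering each callback, and a module can switch it explicitly via a call along these lines (signature as I understand the current ABI; worth verifying against the spec text):

proxy_set_effective_context
params:
i32 (uint32_t) context_id
returns:
i32 (proxy_result_t) call_result

So proxy_get_buffer does not need a contextId parameter, because the host resolves the call against the currently effective context.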

Is it suitable to add a link to our wasm-nginx-module?

I am writing an Nginx module to introduce Proxy-Wasm to Nginx: https://github.com/api7/wasm-nginx-module

It is still at the WIP stage; only the plugin lifecycle management is finished. I am planning to finish the HTTP part this year.

Since it is a big project to add Proxy-Wasm support in Nginx, I hope we can attract more people to work together.
Therefore, is it suitable to add a link under https://github.com/proxy-wasm/spec#host-environments?

Thanks for your reply.

ABI to customize upstream/origin selection

EDIT1: Example code for reproducing this is at https://github.com/ElvinEfendi/envoy-wasm-regional-routing.

I have been experimenting with https://github.com/proxy-wasm/proxy-wasm-rust-sdk recently. I want to write a filter that for a given request, dynamically picks what Envoy cluster the request should be proxied to.

I tried using the cluster_header: region routing functionality of Envoy and was hoping I could dynamically change the header value in the plugin, but that did not work as I expected. It turns out self.set_http_request_header(&"region", Some(&"us_central1")); changes/sets the header only for the upstream; Envoy's router does not see the updated value when making the routing decision.

Is this an expected behaviour? If so, is there any plan to introduce a dedicated ABI for customizing Envoy's routing decision when it comes to upstream/cluster selection?

FWIW this seems to be possible with a Lua filter: https://medium.com/safetycultureengineering/edge-routing-with-envoy-and-lua-621f3d776c57

ABI version and enumeration constants

I noticed a recent change added an enumeration value, ContinueAndDontEndStream, to the FilterHeadersStatus return codes: https://github.com/envoyproxy/envoy/blob/master/include/envoy/http/filter.h#L75

Envoy WASM doesn't yet have the change [ https://github.com/envoyproxy/envoy-wasm/blob/master/include/envoy/http/filter.h ]

The AssemblyScript runtime has its own version of the enumeration, at https://github.com/solo-io/proxy-runtime/blob/master/assembly/runtime.ts#L96 , which doesn't include the new enumeration. The current AssemblyScript runtime advertises itself as proxy_abi_version_0_2_0.

How will this work in practice when Envoy WASM picks up the new enumeration from Envoy? Do we expect

  • Envoy-Wasm will pick up the Envoy change but re-order the enumeration so that binary enum constant values don't change?
  • Envoy-Wasm will reject plugins with an older ABI point release, forcing filter developers to recompile?
  • Envoy-Wasm will include an enumeration mapping, so that proxy_abi_version_0_2_0 filters can continue to return 4 for FilterHeadersStatus.StopAllIterationAndBuffer alongside proxy_abi_version_0_2_x filters returning 5 for ContinueAndDontEndStream?

Clarify proxy_action_t values

The current draft spec doesn't define anywhere what values proxy_action_t might take.

The C++ and Rust SDKs also seem to diverge on this: the Rust SDK only provides Action::Continue and Action::Pause for all handlers, while the C++ SDK seems to define different actions for HTTP headers, including being able to terminate the stream.

It's not clear what is expected here!

For context, I'm attempting to write a WASM filter for Envoy currently.

I'm writing an L4 (Stream Context) filter that needs to determine whether the caller is authorized to access the service. If not, I need to be able to terminate the connection.

As far as I can see, there is no way to do this currently - the Rust SDK only defines Continue and Pause, which will just hang the connection, not terminate it.

I also tried calling self.done() on the StreamContext to attempt to terminate the TCP stream; however, that causes Envoy to segfault, so I presume it is not the correct way to achieve that. I don't see any other method in the ABI spec that would allow terminating the current stream, though - is this intended to work?

Thanks for your help. I'm looking forward to proxy-wasm stabilizing!

Contribute the host implementation of Golang

Recently I implemented the host side of the proxy-wasm ABI spec in Golang.

The project was originally used for MOSN, another data plane for service mesh (kind of like Envoy). Lately, I realized that it could be contributed to the proxy-wasm project.

Currently, the project only implements the 0.1.0 version of the ABI spec; 0.2.x will be supported as soon as possible.

The project also provides a simple example, demonstrating how to use the proxy-wasm host in an HTTP server.

project link: https://github.com/mosn/proxy-wasm-go-host

I wonder if there are any license requirements or other considerations?
