roadmap's Issues

SHM allocator

Summary

The current SHM allocator requires garbage collection of fragmented and no-longer-used portions of memory.
A possible way to improve the behaviour and the real-time guarantees is to use arena-based allocation in Rust.

Additionally, the current SHM notification mechanism leverages network communication for pub/sub.
The mechanism could be improved to reduce latency even further by leveraging SHM itself.

Intended outcome

By using the new arena-based allocator, users will benefit from deterministic memory allocation and defragmentation.

How will it work?

The arena allocator and the SHM-based notification mechanism will be configurable.
While SHM-based notification will not impact the API, a new allocator may require a dedicated API to be defined.
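
As a purely illustrative sketch of the direction (the types and methods below are hypothetical, not the future zenoh SHM API), a bump arena reclaims a whole pre-allocated region at once instead of garbage collecting individual fragments:

    // Hypothetical sketch, not the zenoh SHM API: a bump arena hands out chunks from one
    // pre-allocated buffer, so allocation is O(1) and reclamation is a single reset instead
    // of garbage collecting fragmented, no-longer-used portions of memory.
    struct BumpArena {
        buf: Vec<u8>,
        offset: usize,
    }

    impl BumpArena {
        fn with_capacity(capacity: usize) -> Self {
            BumpArena { buf: vec![0u8; capacity], offset: 0 }
        }

        // Reserve `len` bytes; returns the reserved range, or None if the arena is exhausted.
        fn alloc(&mut self, len: usize) -> Option<std::ops::Range<usize>> {
            if self.offset + len > self.buf.len() {
                return None;
            }
            let start = self.offset;
            self.offset += len;
            Some(start..self.offset)
        }

        // Deterministic "defragmentation": the whole arena is reclaimed in constant time.
        fn reset(&mut self) {
            self.offset = 0;
        }
    }

    fn main() {
        let mut arena = BumpArena::with_capacity(1024);
        let chunk = arena.alloc(128).expect("arena has room");
        println!("reserved bytes {}..{}", chunk.start, chunk.end);
        arena.reset();
    }

In practice the region would live in shared memory and the SHM-based notification would signal peers without going through the network, but both aspects are left out of this sketch.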

DDS plugin - Instance support

Summary

Currently, the DDS plugin does not support DDS instances.

Intended outcome

The DDS plugin should be extended to support DDS instances out of the box.

How will it work?

DDS instances should be handled out-of-the-box by the DDS plugin.

Clique routing

Summary

When operating in a fully-connected peer-to-peer scenario, peers communicate directly with each other without needing to speak any routing protocol.
Removing the overhead introduced by a routing protocol in such scenarios greatly benefits network bandwidth usage and the stability of the system.
These considerations are of particular interest when many zenoh peers are started at the same time in a given system, e.g. at the bootstrap of a robotics system.

Intended outcome

Zenoh is capable of operating in a clique topology without involving any routing protocol.

How will it work?

In a clique topology, peers will forward data directly to each other without exchanging any routing protocol messages.

Question: Can multiple zenoh networks coexist in the same physical network?

Zenoh nodes can discover each other. But what if someone wants to build a distributed application based on the Zenoh protocol with the guarantee that only instances of this application can be discovered? For example, what if Somesung and Abble both want to use the Zenoh library to interconnect their home appliances, but do not want to route packets for a competitor? How can that be achieved?

Storage - Support for wildcard updates

Summary

Currently, PUT and DEL operations on wildcard key expressions might cause storages to diverge.

Intended outcome

Regardless of the values present in individual storages at the time of receiving the wildcard update, the effect of the update will be the same.

How will it work?

The philosophy behind wildcard updates is that the existing values in the storage will be affected. This includes existing values in all the storages connected to Zenoh.
The wildcard operations will be stored separately and applied to updates when they eventually reach the storage. These operations will be garbage collected when it is safe to do so.
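
As an illustration of that idea (simplified, with a toy matching helper rather than zenoh's key-expression semantics), a storage could keep a log of wildcard operations and replay the ones that are newer than a late-arriving sample:

    // Illustrative sketch only: replaying stored wildcard operations on a sample that reaches
    // the storage after the wildcard update itself. Key-expression matching is oversimplified.
    use std::time::{Duration, SystemTime};

    enum WildcardOp {
        Put { key_expr: String, value: Vec<u8>, ts: SystemTime },
        Del { key_expr: String, ts: SystemTime },
    }

    // Toy matcher: a trailing '*' matches any suffix; real key expressions are richer.
    fn covers(wild: &str, key: &str) -> bool {
        match wild.strip_suffix('*') {
            Some(prefix) => key.starts_with(prefix),
            None => wild == key,
        }
    }

    // Apply every stored wildcard operation newer than the incoming sample; the outcome is
    // the same regardless of the values already present in the storage.
    fn apply_pending(
        ops: &[WildcardOp],
        key: &str,
        sample_ts: SystemTime,
        value: Vec<u8>,
    ) -> Option<Vec<u8>> {
        let mut current = Some(value);
        for op in ops {
            match op {
                WildcardOp::Put { key_expr, value, ts } if covers(key_expr, key) && *ts > sample_ts => {
                    current = Some(value.clone()); // a newer wildcard PUT overrides the sample
                }
                WildcardOp::Del { key_expr, ts } if covers(key_expr, key) && *ts > sample_ts => {
                    current = None; // a newer wildcard DEL removes the entry
                }
                _ => {}
            }
        }
        current
    }

    fn main() {
        let now = SystemTime::now();
        let ops = vec![WildcardOp::Del { key_expr: "demo/*".to_string(), ts: now }];
        let earlier = now - Duration::from_secs(1);
        let result = apply_pending(&ops, "demo/key", earlier, b"old".to_vec());
        assert!(result.is_none()); // the newer wildcard DEL wins over the late sample
    }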

Access control

Summary

When accessing zenoh infrastructure it would be desirable to have some level of access control in place.
The access control could be based on:

  • Zenoh node ID and role (e.g., peer, client, or router)
  • Resources (e.g., /test/resource)

Intended outcome

When in place, routers and peers will be able to whitelist or blacklist access to certain zenoh nodes or resources.

How will it work?

Routers and peers will be configurable in terms of:

  • node authentication parameters (e.g. TLS certificate, user/password, RSA key, etc.)
    • Note that user/password authentication is already provided today (see config)
  • blacklist or whitelist of resources

Tracking discussions #71 #67
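
As one possible illustration of a resource whitelist/blacklist check (the rule format and matching below are hypothetical, not the eventual zenoh configuration), a router or peer could evaluate something like:

    // Illustrative sketch only: a simple allow/deny decision on key expressions, roughly the
    // kind of rule a router or peer could evaluate. The rule format is hypothetical.
    struct AclRules {
        whitelist: Vec<String>, // key-expression prefixes explicitly allowed
        blacklist: Vec<String>, // key-expression prefixes explicitly denied
    }

    impl AclRules {
        fn is_allowed(&self, key_expr: &str) -> bool {
            if self.blacklist.iter().any(|p| key_expr.starts_with(p.as_str())) {
                return false; // deny rules take precedence
            }
            // An empty whitelist means "allow everything that is not blacklisted".
            self.whitelist.is_empty() || self.whitelist.iter().any(|p| key_expr.starts_with(p.as_str()))
        }
    }

    fn main() {
        let acl = AclRules {
            whitelist: vec!["test/".to_string()],
            blacklist: vec!["test/private/".to_string()],
        };
        assert!(acl.is_allowed("test/resource"));
        assert!(!acl.is_allowed("test/private/secret"));
        assert!(!acl.is_allowed("other/resource"));
        println!("ACL checks passed");
    }

Node authentication (TLS certificates, user/password, keys) would gate which nodes can open a session in the first place; the check above only illustrates the resource-level part.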

User guide for consistency models

Summary

Currently, users of Zenoh have no guidelines on the consistency models achievable through Zenoh.

Intended outcome

Documentation on how to achieve different consistency levels in Zenoh.

How will it work?

Users will be able to refer to the guidelines to achieve different consistency levels for their applications using Zenoh.

Routing refactor

Summary

In order to support more diverse scenarios, zenoh should be able to speak different routing protocols.
This would allow zenoh to efficiently route data on different types of networks and systems that are characterised by various levels of stability, bandwidth, latency, and dynamicity.

Therefore, the code dealing with data routing in zenoh needs to be enhanced and modularised to accommodate the requirement of supporting multiple routing protocols.

This roadmap item will:

  1. enable the new protocol (see issue #2 ) to support diverse routing protocols on the wire;
  2. allow a truly async API (see issue #47 ) where routing code will no longer be traversed but queried.

Intended outcome

It will be possible to implement and configure multiple routing protocols in zenoh without needing to change the core of the routing engine.

How will it work?

Users will be able to implement and configure multiple routing protocols in zenoh without needing to change the core of the routing engine.
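
A hedged sketch of what "queried rather than traversed" routing could look like (the trait, its methods, and the clique example are illustrative names, not the zenoh routing code):

    // Illustrative only: a trait-based routing abstraction where the routing engine is queried
    // for next hops instead of being traversed inline. Names do not reflect the actual zenoh code.
    type NodeId = u64;

    trait RoutingProtocol {
        // Human-readable name of the protocol (e.g. "linkstate", "gossip", "clique").
        fn name(&self) -> &'static str;
        // Given a destination key expression, return the set of next-hop nodes.
        fn next_hops(&self, key_expr: &str) -> Vec<NodeId>;
    }

    // A trivial clique "protocol": every known peer is a direct next hop.
    struct CliqueRouting {
        peers: Vec<NodeId>,
    }

    impl RoutingProtocol for CliqueRouting {
        fn name(&self) -> &'static str {
            "clique"
        }
        fn next_hops(&self, _key_expr: &str) -> Vec<NodeId> {
            self.peers.clone() // fully connected: forward directly to every peer
        }
    }

    fn main() {
        // The core engine only sees the trait object, so protocols can be swapped via configuration.
        let protocol: Box<dyn RoutingProtocol> = Box::new(CliqueRouting { peers: vec![1, 2, 3] });
        println!("{} -> {:?}", protocol.name(), protocol.next_hops("test/resource"));
    }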

Query with payload

Summary

When issuing a query it would be desirable to allow the user to attach a custom payload delivered to any queryable.

This roadmap item depends on issue #2 since the new protocol will enable zenoh to support queries with payload on the wire.

Intended outcome

Any custom payload can be attached to any query.

How will it work?

Users will be able to provide a custom payload when using the query API.
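
As a conceptual illustration only (the types below are a mock, not the zenoh API, whose final shape depends on issue #2), the new capability is essentially a query that carries optional user bytes which the queryable can inspect before answering:

    // Conceptual mock, not the zenoh API: a query carrying an optional payload.
    struct Query {
        selector: String,
        payload: Option<Vec<u8>>, // the new capability: user-provided bytes attached to the query
    }

    fn handle_query(query: &Query) -> Vec<u8> {
        // A queryable can now branch on the attached payload instead of only on the selector.
        match &query.payload {
            Some(body) => format!("echo {} bytes for {}", body.len(), query.selector).into_bytes(),
            None => format!("no payload for {}", query.selector).into_bytes(),
        }
    }

    fn main() {
        let query = Query {
            selector: "service/compute".to_string(),
            payload: Some(b"request parameters".to_vec()),
        };
        println!("{}", String::from_utf8(handle_query(&query)).unwrap());
    }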

Truly async API

Summary

While Zenoh exposes an asynchronous API, some internal Zenoh code is fully synchronous.
As a result, the asynchronous behaviour could be improved compared to today's status quo.

Intended outcome

An internal refactor (enabled by #5 ) will hence allow the asynchronous API to fully leverage the asynchronous read/write from the network.

How will it work?

No changes are expected to the API. All the work will be carried out on the internal code.

Logging system

Summary

Current logging includes three levels:

  • Info
  • Debug
  • Trace

There is a need for more fine-grained control of the logging level for better and more effective debugging.
Moreover, since zenoh is a distributed system, there is a need to correlate the logs generated by multiple applications (potentially running on different machines) in order to track distributed user operations.

Intended outcome

The log output should facilitate fine-grained filtering (e.g. more levels than just three) and make it easy to group logs related to the same operation.

How will it work?

A well-defined format for the log output needs to be defined, as well as the internal mechanism to trace operations throughout the whole zenoh stack.
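
One possible way to obtain both finer levels and operation grouping, shown purely as an assumption rather than a chosen design, is structured span-based logging in the style of the `tracing` and `tracing-subscriber` crates (the latter built with its "env-filter" feature):

    // Assumption, not a decision from this roadmap item: span-based structured logging where a
    // span and an operation id correlate all log lines belonging to the same distributed operation.
    // Requires the `tracing` and `tracing-subscriber` (with "env-filter") crates.
    use tracing::{debug, info, info_span};

    fn route_sample(key_expr: &str) {
        // Every log emitted inside this span carries the operation context, so logs from
        // different components (and, with a propagated id, different machines) can be grouped.
        let span = info_span!("route_sample", key_expr, op_id = 42u64);
        let _guard = span.enter();
        debug!("looking up matching subscribers");
        info!("sample forwarded");
    }

    fn main() {
        // Fine-grained filtering, e.g. RUST_LOG="zenoh::routing=trace,zenoh::transport=info".
        tracing_subscriber::fmt()
            .with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
            .init();
        route_sample("demo/example");
    }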

Integrating ROS2 security with Zenoh

Discussed in #75

Originally posted by ros2torial February 10, 2023
Hi,
We have used ROS2 security on the robot IPC and want to use zenoh to transfer the messages to the cloud, but it seems like zenoh-bridge-dds doesn't have any feature to include the ROS2 or DDS security features. The security setup we use only includes new nodes if the security environment variables are available. We also tried running the zenoh dds bridge as a node, but that didn't work either.

ros2 run zenoh_bridge_dds zenoh_bridge_dds -c client.json -d 4

Without security everything is working fine, but in order to keep everything secure we want to use ROS2 security.

Transport reliability - Tentative

Summary

Zenoh hop-to-hop reliability is today completely delegated to the underlying transport protocol, e.g. TCP, TLS, QUIC.
No hop-to-hop reliability is provided by Zenoh when using a non-reliable transport protocol, e.g. UDP or serial.

Intended outcome

Zenoh will be able to retransmit lost messages, e.g. due to network/host congestion, network TX errors, etc.
This mechanism is intended to work for Reliable messages only and to be limited to hop-to-hop communication.
It is not by any means intended to be fully end-to-end.
It is worth highlighting that in a stable system, end-to-end reliability can be achieved by chaining multiple hop-to-hop reliable communication.
However, in case of a node failure (e.g. a router going down), end-to-end reliability cannot be ensured by the hop-to-hop reliability mechanism.

How will it work?

Reliable messages are acknowledged and retransmitted in case they have not been received.
No changes are expected to the API.
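
A toy sketch of the mechanism (the sequence numbers, ACK bookkeeping, and retransmission timer below are illustrative, not the zenoh transport internals):

    // Toy illustration of hop-to-hop reliability: the sender keeps unacknowledged reliable
    // frames and retransmits those whose timer expired. Not the actual zenoh transport code.
    use std::collections::HashMap;
    use std::time::{Duration, Instant};

    struct PendingFrame {
        payload: Vec<u8>,
        sent_at: Instant,
    }

    struct ReliableHop {
        next_sn: u64,
        pending: HashMap<u64, PendingFrame>, // sn -> frame waiting for an ACK from the next hop
        rto: Duration,                       // retransmission timeout
    }

    impl ReliableHop {
        fn send(&mut self, payload: Vec<u8>) -> u64 {
            let sn = self.next_sn;
            self.next_sn += 1;
            self.pending.insert(sn, PendingFrame { payload, sent_at: Instant::now() });
            sn // the frame with this sequence number goes on the wire
        }

        // Called when the next hop acknowledges a sequence number.
        fn on_ack(&mut self, sn: u64) {
            self.pending.remove(&sn);
        }

        // Returns the sequence numbers that must be retransmitted and restarts their timers.
        fn due_for_retransmission(&mut self) -> Vec<u64> {
            let now = Instant::now();
            let mut due = Vec::new();
            for (sn, frame) in self.pending.iter_mut() {
                if now.duration_since(frame.sent_at) >= self.rto {
                    frame.sent_at = now;
                    due.push(*sn);
                }
            }
            due
        }
    }

    fn main() {
        let mut hop = ReliableHop { next_sn: 0, pending: HashMap::new(), rto: Duration::from_millis(100) };
        let sn = hop.send(b"reliable sample".to_vec());
        hop.on_ack(sn); // acknowledged: nothing left to retransmit
        assert!(hop.due_for_retransmission().is_empty());
    }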

Storage alignment - Same key expression

Summary

Storages on Zenoh subscribing to the same key expression might diverge due to missing samples. Currently there is no way to converge these storages.

Intended outcome

The storages, even though temporarily diverged, will eventually receive all updates.
The latest value for each resource in both key-value stores and time-series datastores that subscribe to the same key expression will be the same.

How will it work?

A background process will take care of aligning the storages. It periodically generates digests based on the latest stable state of the storage and publishes them to the other storages subscribing to the same key expression. A storage that receives a digest from a remote replica will identify the updates it is missing. The storage then queries the missing updates from the remote storage and stores them.
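
A simplified illustration of the digest exchange follows; the digest structure and timestamps are placeholders, not the actual alignment protocol:

    // Simplified illustration of digest-based anti-entropy between replicated storages.
    // The digest here is just a map of (key, timestamp); the real protocol is more compact.
    use std::collections::HashMap;

    type Timestamp = u64;

    struct Digest {
        entries: HashMap<String, Timestamp>, // latest known timestamp per key
    }

    struct Storage {
        data: HashMap<String, (Timestamp, Vec<u8>)>,
    }

    impl Storage {
        fn digest(&self) -> Digest {
            Digest { entries: self.data.iter().map(|(k, (ts, _))| (k.clone(), *ts)).collect() }
        }

        // Compare a remote digest against local state and return the keys this storage is
        // missing (or holds only in an older version); these are then queried from the remote.
        fn missing_from(&self, remote: &Digest) -> Vec<String> {
            let mut missing = Vec::new();
            for (key, remote_ts) in &remote.entries {
                let outdated = match self.data.get(key) {
                    Some((local_ts, _)) => local_ts < remote_ts,
                    None => true,
                };
                if outdated {
                    missing.push(key.clone());
                }
            }
            missing
        }
    }

    fn main() {
        let local = Storage { data: HashMap::from([("demo/a".to_string(), (1, b"x".to_vec()))]) };
        let remote = Storage {
            data: HashMap::from([
                ("demo/a".to_string(), (2, b"y".to_vec())), // newer version of an existing key
                ("demo/b".to_string(), (1, b"z".to_vec())), // key missing locally
            ]),
        };
        let missing = local.missing_from(&remote.digest());
        println!("keys to fetch from the remote replica: {:?}", missing);
    }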

[Zenoh Flow] WebUI

Summary

To ease the creation of the data flow YAML description, it would be beneficial to leverage a web-based graphical interface.

Intended outcome

It is possible to create and edit the YAML description of a data flow through a web-based graphical interface.

Steps

TBD.

Storage alignment - Time series support

Summary

Storages on Zenoh subscribing to the same key expression might diverge due to missing samples. Currently there is no way to converge these storages.
Time-series data stores might also have different purging policies. They differ from key-value stores in the amount of metadata required to keep a log of all updates, and hence need a different strategy.

Intended outcome

The time-series datastores, even though temporarily diverged, will eventually receive all updates.
All the values up to the highest lower bound (considering the oldest values across all datastores) in time-series datastores that subscribe to the same key expression will be the same.

How will it work?

A background protocol periodically generates digests based on the latest stable state of the time-series data store and publishes them to the other time-series data stores subscribing to the same key expression. A time-series store that receives a digest from a remote replica will identify the updates it is missing. The storage then queries the missing updates from the remote storage and stores them. The structure of the digest will be different, tailored to handling time-series values.

API alignment

Summary

APIs are not fully consistent within the same binding, nor across bindings.
A higher level of coherency is required and the documentation should be updated accordingly.

Intended outcome

API will be more aligned across languages resulting in a better user experience.

How will it work?

Users will be able to use the zenoh API in a similar manner regardless of the programming language of their choice.

Zenoh 1.0.0 release

Target date: April 22nd, 2024.

A dedicated project has been created to track all the PRs and issues that will land in the release.
The high-level set of features is the following:

API polishing

  • Review the API and verify that it follows Zenoh's API principles
  • Polish out some rough edges of the API

API alignment

  • Provide a coverage matrix for the API and all supported languages
  • Make sure the API is aligned across all language bindings

Shared Memory

  • Provide Shared Memory support in any topology, not only in clique topologies
  • New API for plugging different types of SHM backends

Protocol improvement

  • Review of the protocol
  • Improvement of declarations propagation

Tokio porting

  • Move Zenoh's async runtime to Tokio
  • Provide additional control on threads

CI/CD improvement

  • Improve cross-repo integration
  • Streamline release process

Eclipse Review

Java binding

Summary

At the moment there is no Java binding for zenoh.

Intended outcome

Users will be able to use Java to interact with zenoh via a dedicated binding.

How will it work?

A jar will be provided to users to be included as a dependency.

Kafka plugin

Summary

A Kafka plugin for zenohd, in the same way as the DDS plugin.
Also available as a standalone zenoh-bridge-kafka binary.

Intended outcome

A new Zenoh plugin (and standalone bridge) that will map the Kafka producer/consumer to the Zenoh pub/sub.
This will allow Zenoh/Kafka integration use cases such as:

  • Zenoh publications re-published to Kafka
  • Zenoh subscribers consuming data from Kafka

How will it work?

The plugin will monitor the topics created in Kafka and, for each of them (configurable), create:

  • a Zenoh subscriber on a KeyExpr similar to the topic name, which will receive Zenoh publications and re-publish them to the Kafka topic
  • a Kafka Consumer that will re-publish on a Zenoh KeyExpr similar to the topic name

0.6.0-beta.1 release

Summary

The 0.5.0-beta.9 release was published in November 2021.
Since then, many changes have been introduced in the upstream repositories of the Zenoh project.

The 0.6.0-beta.1 release should hence be published.

The full progress of 0.6.0-beta.1 release is tracked by a dedicated project.

Intended outcome

The next Zenoh beta version is published.

How will it work?

Users can obtain the new Zenoh beta version via crates.io and pip.

Multicast support

Summary

As of today, the main implementation of Zenoh (Rust) can communicate only over unicast connections for peer-to-peer, client-to-router, and router-to-router sessions.
At the same time, zenoh-pico can already operate over UDP multicast for peer-to-peer sessions.

Intended outcome

Zenoh can operate over UDP multicast and is capable of interoperating with zenoh-pico.

How will it work?

Zenoh peers and routers will be able to communicate over UDP multicast.
No changes are expected to the API; all the changes are expected to take place in the internal mechanics of Zenoh.

Monitoring tool productization

Summary

A first version of a zenoh administration tool is available here.

Intended outcome

The initial prototype is expected to evolve into a product that allows monitoring and visualising the real-time status of a zenoh system.

How will it work?

A GUI-first administration/monitoring tool will be developed.
This tool will be closed source.

Zenoh protocol update

Summary

The current zenoh protocol definition makes extensibility and future backward compatibility hard to achieve in a multitude of scenarios.
Therefore, a new wire protocol definition is required to minimise any future changes.

In particular, the new protocol definition needs to provide:

  • The capability of introducing extensions without requiring the partial or complete re-deployment of a zenoh infrastructure (a sketch follows after these lists)
  • The support of multiple routing protocols that can be plugged in without completely changing the wire protocol
  • Support for adding custom payload to queries
  • More fine-grained QoS control
  • The possibility of performing put operations with acknowledgments
  • Allow marking of express traffic to not be batched

This would allow in the future to support:

  • Operation and Management mechanisms for live troubleshooting of a zenoh infrastructure
  • Session migration and mobility
  • Multicast operations
  • Different consolidation mechanisms for storages
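
To illustrate the extensibility requirement above, here is a hedged sketch of a type-length-value style extension record that an implementation can skip when it does not understand it; the flag layout and names are illustrative, not the actual zenoh wire format:

    // Illustrative only: a TLV-style extension record that a receiver can skip when it does not
    // understand the extension id. This is not the actual zenoh wire format.
    struct Extension {
        id: u8,          // extension identifier; unknown ids are skipped, not rejected
        mandatory: bool, // if true, a receiver that does not know the id must drop the message
        body: Vec<u8>,   // opaque payload, length-prefixed on the wire
    }

    fn encode(ext: &Extension, out: &mut Vec<u8>) {
        let header = (ext.id & 0x7F) | if ext.mandatory { 0x80 } else { 0x00 };
        out.push(header);
        out.push(ext.body.len() as u8); // single-byte length for the sake of the example
        out.extend_from_slice(&ext.body);
    }

    // A receiver that does not understand `id` can still advance past the extension,
    // which is what keeps old and new implementations interoperable.
    fn skip_unknown(buf: &[u8]) -> Option<&[u8]> {
        let len = *buf.get(1)? as usize;
        buf.get(2 + len..)
    }

    fn main() {
        let mut wire = Vec::new();
        encode(&Extension { id: 0x01, mandatory: false, body: vec![0xAA, 0xBB] }, &mut wire);
        let rest = skip_unknown(&wire).unwrap();
        println!("skipped unknown extension, {} bytes remaining", rest.len());
    }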

Intended outcome

The new zenoh protocol will be implemented in zenoh and zenoh-pico and the two implementations will be able to interoperate.

How will it work?

Users will not directly interact with the new protocol.
This is part of the Zenoh internal implementation.

ROS1 bridge for Zenoh

Discussed in #59

Originally posted by yellowhatter November 2, 2022

There is a task to support ROS1 in zenoh.

I have a plan to implement a zenoh plugin working with ROS1 in the same manner as the zenoh-dds plugin. Of course the details of the realization are different, but from the outside the plugins would be similar. The plugin is in Rust, working as a client on the ROS1 side. I'm also aware of the details (ROS1-ROS2 message compatibility, some optimizations, etc.) and aim to build a complete solution which will be able to provide ROS1<->ROS1 and ROS1<->ROS2 communication.

Why not replace the Master?
Because the ROS1 Master is not involved in the data exchange process: ROS1 peers establish direct data connections (rostcp or rosudp protocol) between each other, while the Master works as a kind of name service (XML-based protocol) providing peers with information on topics and the details needed to establish a direct data connection with a peer (== subscribe to the topic). For Zenoh there is no point in reimplementing the Master (as it won't give us optimizations on data transfer) - it is enough to create a special client based on the ROS1 Client Library.
The ROS1 Client Library has enough functionality (it implements rostcp/rosudp for data exchange and also the XML-based protocol for the Master), and there is no need to reimplement and support the ROS1 protocol stack on our side - we can just use this library to do the trick.

Some optimizations could be made
There are currently some ideas on optimizations:

  1. Convert ROSx to ROSy data on the receiving side as the default behavior. When publishing data, the ROSx publisher doesn't know whether its data will be consumed by somebody expecting ROSy or not, so there is no need to waste CPU cycles on converting it on the publisher's side. The ROSx publisher will simply publish the data marking it as having the "ROSx" format, and the subscriber can perform the conversion to ROSy if needed.
  2. Select the publishing format by configuration. For the previous case, if there are some special needs, the publisher can be forced to convert the data to the ROSy format by configuration. That could be useful in many cases (if many subscribers expect the ROSy format, or there is no CPU on the subscriber's side to perform the conversion, etc.).
  3. Subscribe on the ROS1-Zenoh layer only to the needed topics. The topic list to subscribe to could be determined automatically.

[Zenoh Flow] Documentation

Summary

We should provide a comprehensive documentation of Zenoh Flow.

Intended outcome

The documentation will appear on our website.

Steps

  • Understand Zenoh Flow.
  • Try Zenoh Flow.
  • Deploy a data flow on multiple daemons.
  • Input Rules.
  • (Rust) Hello World.
  • (Python) Hello World.
  • (C++) Hello World.

S3 Backend

A Zenoh backend providing Zenoh storages based on AWS S3 (or any S3-compatible Cloud storage such as MinIO).

This follows discussion in #53

Disable batching for high traffic priorities

Summary

Zenoh supports publishing data with different priorities.
As of today, 7 traffic classes are available:

  • RealTime
  • InteractiveHigh
  • InteractiveLow
  • DataHigh
  • Data
  • DataLow
  • Background

A batching technique is automatically applied to all traffic classes to optimise network bandwidth and maximise throughput.
Automatic batching, however, comes at the cost of slightly higher latency (a few microseconds) for any data publication.
This is at odds with traffic classes like RealTime, InteractiveHigh, and InteractiveLow, which are meant for latency-sensitive data.

Intended outcome

Automatic batching should be configurable and disabled by default for latency-sensitive data.

How will it work?

Queue configuration should include a parameter to enable/disable automatic batching per traffic class.
No changes are expected to the API.
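
As a purely hypothetical illustration of such a per-traffic-class setting (the field names below do not reflect the actual zenoh configuration schema):

    // Hypothetical sketch of a per-priority queue configuration; field names are illustrative
    // and do not reflect the actual zenoh configuration schema.
    enum Priority {
        RealTime,
        InteractiveHigh,
        InteractiveLow,
        DataHigh,
        Data,
        DataLow,
        Background,
    }

    struct QueueConf {
        batching: bool,    // disabled for latency-sensitive classes, enabled for throughput
        batch_size: usize, // maximum batch size in bytes when batching is enabled
    }

    fn default_queue_conf(priority: Priority) -> QueueConf {
        match priority {
            // Latency-sensitive classes: send each message immediately.
            Priority::RealTime | Priority::InteractiveHigh | Priority::InteractiveLow => {
                QueueConf { batching: false, batch_size: 0 }
            }
            // Throughput-oriented classes: keep automatic batching on.
            _ => QueueConf { batching: true, batch_size: 65_535 },
        }
    }

    fn main() {
        let conf = default_queue_conf(Priority::RealTime);
        println!("RealTime: batching={}, batch_size={}", conf.batching, conf.batch_size);
    }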

MQTT plugin

Summary

An MQTT plugin for zenohd, in the same way as the DDS plugin.
Also available as a standalone zenoh-bridge-mqtt binary.

Intended outcome

A new Zenoh plugin (and standalone bridge) that will map the MQTT pub/sub to the Zenoh pub/sub.
Possible use cases:

  • MQTT + Zenoh integration
  • Routing MQTT from the device to the Edge and to the Cloud
  • Bridging 2 distinct MQTT systems across the Internet, with NAT traversal
  • Pub/sub to MQTT via the Zenoh REST API
  • MQTT-DDS/ROS2 (robot) communication (via zenoh)
  • MQTT record/replay with InfluxDB as a storage

How will it work?

Similar architecture to the DDS plugin:
the discovery of an MQTT publisher (or subscriber) will trigger in the plugin the creation of a matching MQTT subscriber (or publisher) and a corresponding Zenoh publisher (or subscriber).

Async executor control

Summary

Zenoh's futures executor is based on async-std and tokio, depending on the various plugins and transports configured.
This leads to an unpredictable and large number of threads being spawned by the various libraries and/or frameworks.
As a result, the Zenoh runtime uses more resources than needed and, above all, has no control over them.

Intended outcome

The async executor should have a predictable usage of resources, in particular the number of threads being spawned across the full Zenoh runtime: core, transport, and plugins.
Additionally, the user should be able to pass their own executor to be used by Zenoh.

How will it work?

Users will be able to configure the async executor in terms of the threads being used or, alternatively, pass their own executor.
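
A sketch of the kind of control intended here: the tokio runtime builder calls below are real tokio APIs, while the way such a runtime would be handed over to Zenoh (the `with_runtime` call in the comment) is purely hypothetical:

    // The builder calls are standard tokio APIs; the integration point with Zenoh is hypothetical.
    use tokio::runtime::Builder;

    fn main() {
        // Cap the executor to a predictable number of worker threads.
        let runtime = Builder::new_multi_thread()
            .worker_threads(2)
            .thread_name("zenoh-worker")
            .enable_all()
            .build()
            .expect("failed to build runtime");

        runtime.block_on(async {
            // Hypothetical: a future API could accept a user-provided executor, e.g.
            // let session = zenoh::open(config).with_runtime(handle).await?;
            println!("running on a user-controlled executor");
        });
    }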

Buffer refactoring in zenoh core

Discussed in https://github.com/eclipse-zenoh/roadmap/discussions/12

Originally posted by Mallets April 11, 2022

Summary

The current ZBuf and WBuf implementations are used ubiquitously in the zenoh code and bindings.
As a result, the current implementation needs to accommodate a large set of requirements, making it hard to scale and optimise for particular scenarios.
A recent PR introduced common traits for the ZBuf and WBuf implementations.
This makes it possible to have targeted buffer implementations that expose a common trait while keeping the internal structure tailored to the use case (e.g. serialisation, defragmentation, shared memory, etc.).

Intended outcome

Targeted implementations of ZBuf and WBuf for different places in the code.
As a result, memory management and performance should benefit from it.

How will it work?

Users will still use the same zenoh API that is based on the common traits.
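
A hedged sketch of the kind of common reader/writer traits meant here (trait and type names are illustrative, not the actual zenoh buffer traits):

    // Illustrative only: common traits that different buffer implementations (contiguous,
    // fragmented, shared-memory backed, ...) could expose. Not the actual zenoh buffer traits.
    trait BufReader {
        fn remaining(&self) -> usize;
        fn read_into(&mut self, dst: &mut [u8]) -> usize;
    }

    trait BufWriter {
        fn write_all(&mut self, src: &[u8]);
    }

    // A contiguous implementation backed by a Vec<u8>.
    struct ContiguousBuf {
        bytes: Vec<u8>,
        pos: usize,
    }

    impl BufReader for ContiguousBuf {
        fn remaining(&self) -> usize {
            self.bytes.len() - self.pos
        }
        fn read_into(&mut self, dst: &mut [u8]) -> usize {
            let n = dst.len().min(self.remaining());
            dst[..n].copy_from_slice(&self.bytes[self.pos..self.pos + n]);
            self.pos += n;
            n
        }
    }

    impl BufWriter for ContiguousBuf {
        fn write_all(&mut self, src: &[u8]) {
            self.bytes.extend_from_slice(src);
        }
    }

    // Code that only depends on the traits works unchanged with a fragmented or SHM-backed buffer.
    fn copy_through<R: BufReader, W: BufWriter>(reader: &mut R, writer: &mut W) {
        let mut scratch = [0u8; 64];
        while reader.remaining() > 0 {
            let n = reader.read_into(&mut scratch);
            writer.write_all(&scratch[..n]);
        }
    }

    fn main() {
        let mut src = ContiguousBuf { bytes: b"hello zenoh".to_vec(), pos: 0 };
        let mut dst = ContiguousBuf { bytes: Vec::new(), pos: 0 };
        copy_through(&mut src, &mut dst);
        println!("{}", String::from_utf8_lossy(&dst.bytes));
    }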
