eclipse-zenoh / roadmap
License: Other
The current SHM allocator requires garbage collection of fragmented and no-longer-used portions of memory.
A possible way to improve the behaviour and the real-time guarantees is to use arenas in Rust (e.g. here, here, and here).
Additionally, current SHM notification mechanism leverages network communication for pub/sub.
The mechanism could be improved to reduce latency even further by leveraging SHM itself.
By using the new arena-based allocator, the user benefits from deterministic memory allocation and defragmentation.
The arena allocator and the SHM-based notification will be configurable.
While SHM-based notification will not impact the API, a new allocator may require a dedicated API to be defined.
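As an illustration of why arenas improve determinism, here is a minimal bump-arena sketch in plain Rust. This is not Zenoh's actual SHM allocator and all names are illustrative; it only shows that allocation becomes a constant-time pointer bump and reclamation a single reset, with no garbage-collection pass.

```rust
/// A minimal bump arena: allocation is a pointer increment, and the whole
/// region is reclaimed at once by resetting the cursor -- no per-block
/// garbage collection and no fragmentation within an arena's lifetime.
pub struct BumpArena {
    buf: Vec<u8>,
    cursor: usize,
}

impl BumpArena {
    pub fn with_capacity(capacity: usize) -> Self {
        Self { buf: vec![0; capacity], cursor: 0 }
    }

    /// O(1) allocation: round the cursor up to `align` and advance it.
    pub fn alloc(&mut self, size: usize, align: usize) -> Option<&mut [u8]> {
        assert!(align > 0, "alignment must be non-zero");
        let start = (self.cursor + align - 1) / align * align;
        let end = start.checked_add(size)?;
        if end > self.buf.len() {
            return None; // arena exhausted: a deterministic failure, no GC pause
        }
        self.cursor = end;
        Some(&mut self.buf[start..end])
    }

    /// O(1) "defragmentation": drop everything at once.
    pub fn reset(&mut self) {
        self.cursor = 0;
    }
}
```

The trade-off is that individual blocks cannot be freed early; memory is reclaimed arena-wide, which is exactly what makes the timing predictable.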
Currently, the DDS plugin does not support DDS instances.
The DDS plugin should be extended so that DDS instances are handled out of the box.
When operating in a fully-connected peer-to-peer scenario, peers directly communicate with each other without the need of speaking any routing protocol.
Removing the overhead introduced by a routing protocol in such scenarios is of great benefit in terms of network bandwidth savings and system stability.
These considerations are of particular interest when many zenoh peers are started at the same time in a given system, e.g. at the bootstrap of a robotics system.
Zenoh is capable of operating in a clique topology without involving any routing protocol.
Zenoh nodes can discover each other. But what if someone wants to build a distributed application based on the Zenoh protocol with the guarantee that only instances of this application can be discovered? For example, what if Somesung and Abble want to use the Zenoh library to interconnect their home appliances, but do not want to route packets for a competitor? How can that be achieved?
Currently, PUT and DEL on wildcards might cause storages to diverge.
Regardless of the list of values in individual storages at the time of receiving the wildcard update, the effect of the update will be the same.
The philosophy behind wildcard updates is that the existing values in the storage will be affected. This includes existing values in all the storages connected to Zenoh.
The wildcard operations will be stored separately and applied to all updates when they eventually reach the storage. These operations will be garbage collected when it is safe to do so.
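A toy sketch of this idea in Rust (illustrative only, not Zenoh's storage implementation, and with key prefixes standing in for real key expressions): the storage keeps wildcard operations in a side log and checks them against every incoming sample, so the final state does not depend on arrival order.

```rust
use std::collections::HashMap;

/// A stored wildcard operation: a DEL on a key prefix at a logical timestamp.
struct WildcardDel {
    prefix: String,
    timestamp: u64,
}

/// A toy storage keeping a log of wildcard ops and replaying it against any
/// sample, including samples that arrive after the wildcard update.
struct Storage {
    values: HashMap<String, (u64, String)>,
    wildcard_dels: Vec<WildcardDel>,
}

impl Storage {
    fn new() -> Self {
        Self { values: HashMap::new(), wildcard_dels: Vec::new() }
    }

    fn put(&mut self, key: &str, timestamp: u64, value: &str) {
        // Store the sample only if no later wildcard DEL covers it, so the
        // outcome is the same whatever order the messages arrive in.
        let deleted = self.wildcard_dels.iter()
            .any(|d| key.starts_with(d.prefix.as_str()) && d.timestamp >= timestamp);
        if !deleted {
            self.values.insert(key.to_string(), (timestamp, value.to_string()));
        }
    }

    fn wildcard_del(&mut self, prefix: &str, timestamp: u64) {
        // Affect all existing values covered by the wildcard...
        self.values.retain(|k, (ts, _)| !(k.starts_with(prefix) && *ts <= timestamp));
        // ...and remember the operation for samples that are still in flight.
        self.wildcard_dels.push(WildcardDel { prefix: prefix.to_string(), timestamp });
    }
}
```

In a real implementation the log entries would eventually be garbage collected once no in-flight sample can predate them.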
When accessing zenoh infrastructure it would be desirable to have some level of access control in place.
The access control could be based on:
When in place, routers and peers will be able to whitelist or blacklist access to certain zenoh nodes or resources.
Router and peers will be configurable in terms of:
Currently, users of Zenoh have no guidelines on the consistency models achievable through Zenoh.
Documentation on how to achieve different consistency levels in Zenoh.
Users will be able to refer to guidelines to achieve different consistency levels for their applications using Zenoh.
In order to support more diverse scenarios, zenoh should be able to speak different routing protocols.
This would allow zenoh to efficiently route data on different types of networks and systems that are characterised by various levels of stability, bandwidth, latency, and dynamicity.
Therefore, the code dealing with data routing in zenoh needs to be enhanced and modularised to accommodate the requirement of supporting multiple routing protocols.
This roadmap item will:
Users will be able to implement and configure multiple routing protocols in zenoh without changing the core of the routing engine.
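One way to picture the modularisation is a sketch like the following (the trait and names are hypothetical, not Zenoh's internals): the routing engine depends only on a trait, and each protocol is a separate implementation that can be swapped in via configuration.

```rust
/// A hypothetical pluggable routing interface: each routing protocol
/// computes, for a given key, the set of next hops among the neighbours.
trait RoutingProtocol {
    fn name(&self) -> &'static str;
    fn next_hops(&self, key: &str, neighbours: &[String]) -> Vec<String>;
}

/// In a clique (fully-connected) topology, routing degenerates to
/// "send to every neighbour": no routing protocol state is needed.
struct CliqueFlooding;

impl RoutingProtocol for CliqueFlooding {
    fn name(&self) -> &'static str { "clique-flooding" }
    fn next_hops(&self, _key: &str, neighbours: &[String]) -> Vec<String> {
        neighbours.to_vec()
    }
}

/// The engine only depends on the trait object, so adding a protocol
/// does not require touching the engine itself.
fn route(p: &dyn RoutingProtocol, key: &str, neighbours: &[String]) -> Vec<String> {
    p.next_hops(key, neighbours)
}
```

A distance-vector or link-state protocol would simply be another implementation of the same trait, selected by configuration.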
When issuing a query it would be desirable to allow the user to attach a custom payload delivered to any queryable.
This roadmap item depends on issue #2 since the new protocol will enable zenoh to support queries with payload on the wire.
Any custom payload can be attached to any query.
Users will be able to provide a custom payload when using the query API.
While Zenoh exposes an asynchronous API, some internal Zenoh code is fully synchronous.
As a result, the asynchronous behaviour is not fully exploited and could be improved compared to today's status quo.
An internal refactoring (enabled by #5) will hence allow the asynchronous API to fully leverage the asynchronous read/write from the network.
No changes are expected to the API. All the work will be carried out on the internal code.
Current logging includes three levels:
There is a need for finer-grained control over the logging level for better and more effective debugging.
Moreover, since zenoh is a distributed system, there is a need to correlate the logs generated by multiple applications (potentially running on different machines) to track distributed user operations.
The log output should facilitate fine-grained filtering (e.g. more levels than just three) and make it easy to group logs related to the same operation.
A well-defined format for the log output needs to be defined, as well as an internal mechanism to trace operations throughout the whole zenoh stack.
Originally posted by ros2torial February 10, 2023
Hi,
We have used ROS2 security on the robot IPC and want to use zenoh to transfer the messages to the cloud, but it seems that zenoh-bridge-dds doesn't have any feature to support the ROS2 or DDS security features. Our security setup only includes new nodes if the security environment variables are available. We also tried running the zenoh DDS bridge as a node, but that didn't work either.
ros2 run zenoh_bridge_dds zenoh_bridge_dds -c client.json -d 4
Without security everything is working fine, but in order to keep everything secure we want to use ROS2 security.
Originally posted by Mallets April 11, 2022
Please ignore
Zenoh hop-to-hop reliability is today completely delegated to the underlying transport protocol, e.g. TCP, TLS, QUIC.
No hop-to-hop reliability is provided by Zenoh when using a non-reliable transport protocol, e.g. UDP or serial.
Zenoh will be able to retransmit lost messages, e.g. due to network/host congestion, network TX errors, etc.
This mechanism is intended to work for Reliable messages only and to be limited to hop-to-hop communication.
It is not by any means intended to be fully end-to-end.
It is worth highlighting that in a stable system, end-to-end reliability can be achieved by chaining multiple hop-to-hop reliable links.
However, in case of a node failure (e.g. a router going down), end-to-end reliability cannot be ensured by the hop-to-hop reliability mechanism.
Reliable messages are acknowledged and retransmitted in case they have not been received.
No changes are expected to the API.
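A minimal sketch of hop-to-hop retransmission for Reliable messages (illustrative only; Zenoh's actual mechanism may differ in framing, ACK scheme, and timers): the sender keeps unacknowledged messages keyed by sequence number until the next hop acknowledges them, and everything still pending is a candidate for retransmission.

```rust
use std::collections::BTreeMap;

/// Per-hop reliability state on the sending side of a link.
struct ReliableHop {
    next_sn: u64,
    unacked: BTreeMap<u64, Vec<u8>>,
}

impl ReliableHop {
    fn new() -> Self {
        Self { next_sn: 0, unacked: BTreeMap::new() }
    }

    /// Send: assign a sequence number and keep a copy until it is acked.
    fn send(&mut self, payload: Vec<u8>) -> u64 {
        let sn = self.next_sn;
        self.next_sn += 1;
        self.unacked.insert(sn, payload);
        sn
    }

    /// A cumulative ACK from the next hop frees everything up to `sn`.
    fn ack_up_to(&mut self, sn: u64) {
        self.unacked.retain(|&k, _| k > sn);
    }

    /// Sequence numbers still unacked, i.e. candidates for retransmission
    /// (e.g. after a timeout or a NACK from the next hop).
    fn to_retransmit(&self) -> Vec<u64> {
        self.unacked.keys().copied().collect()
    }
}
```

Because the state is per hop, a router failure drops that hop's log, which is exactly why this scheme alone cannot provide end-to-end guarantees.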
The list of features to be included in 0.6.0-beta.2 release is available here.
Storages on Zenoh subscribing to the same key expression might diverge due to missing samples. Currently there is no way to converge these storages.
The storages, even though temporarily diverged, will eventually receive all updates.
The latest value for each resource in both key-value stores and time-series datastore that subscribe to the same key expression will be the same.
A background process will take care of aligning the storages. It periodically generates digests, based on the latest stable state of the storage, and publishes them to the other storages subscribing to the same key expression. A storage that receives a digest from a remote replica will identify the updates it is missing. The storage then queries the missing updates from the remote storage and stores them.
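The digest exchange can be sketched as follows (a simplification: real digests are built from the stable state and are richer than a per-key timestamp map, but the comparison step is the same in spirit). A replica that receives a remote digest compares it against its own state to find the keys it must query.

```rust
use std::collections::HashMap;

/// Simplified digest: for each key, the timestamp of the latest stable value.
type Digest = HashMap<String, u64>;

/// Build the digest of a storage mapping key -> (timestamp, value).
fn digest(storage: &HashMap<String, (u64, String)>) -> Digest {
    storage.iter().map(|(k, (ts, _))| (k.clone(), *ts)).collect()
}

/// Keys the local storage is missing, or holds at an older timestamp,
/// compared to the remote digest: these are the updates to query.
fn missing_updates(local: &HashMap<String, (u64, String)>, remote: &Digest) -> Vec<String> {
    let mut keys = Vec::new();
    for (key, remote_ts) in remote {
        let outdated = match local.get(key) {
            Some((local_ts, _)) => local_ts < remote_ts,
            None => true,
        };
        if outdated {
            keys.push(key.clone());
        }
    }
    keys.sort(); // deterministic order for the follow-up queries
    keys
}
```

Exchanging compact digests instead of full states keeps the background alignment traffic small relative to the data itself.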
To ease the creation of the data flow YAML description, it would be beneficial to leverage a web-based graphical interface.
It is possible to create and edit the YAML description of a data flow through a web-based graphical interface.
TBD.
Originally posted by Mallets June 28, 2022
Please ignore
Storages on Zenoh subscribing to the same key expression might diverge due to missing samples. Currently there is no way to converge these storages.
Time-series datastores might also have different purging policies. They differ from key-value stores in the amount of metadata required to keep a log of all updates, and hence need a different strategy.
The time-series datastores, even though temporarily diverged, will eventually receive all updates.
All the values up to the highest lower bound (considering the oldest values across all datastores) in time-series datastores that subscribe to the same key expression will be the same.
A background protocol periodically generates digests, based on the latest stable state of the time-series datastore, and publishes them to the other time-series datastores subscribing to the same key expression. A time-series store that receives a digest from a remote replica will identify the updates it is missing. The storage then queries the missing updates from the remote storage and stores them. The structure of the digest will be different, tailored to handling time-series values.
APIs are not fully consistent within the same binding, nor across bindings.
A higher level of coherency is required, and the documentation should be updated accordingly.
The API will be more aligned across languages, resulting in a better user experience.
Users will be able to use the zenoh API in a similar manner regardless of the programming language of their choice.
Target date: April 22nd, 2024.
A dedicated project has been created to track all the PRs and issues that will land in the release.
The high-level set of features is the following:
API polishing
API alignment
Shared Memory
Protocol improvement
Tokio porting
CI/CD improvement
Eclipse Review
At the moment there is no Java binding for zenoh.
Users will be able to use Java to interact with zenoh via a dedicated binding.
A jar will be provided to users to be included as a dependency.
Tracking discussion #27.
A Kafka plugin for zenohd, in the same way as the DDS plugin.
Also available as a standalone zenoh-bridge-kafka binary.
A new Zenoh plugin (and standalone bridge) that will map the Kafka producer/consumer to the Zenoh pub/sub.
This will allow Zenoh/Kafka integration use cases such as:
The plugin will monitor the topics created in Kafka and create for each (configurable):
The 0.5.0-beta.9 release was published in November 2021.
Since then, many changes have been introduced in the upstream repositories of the Zenoh project.
The 0.6.0-beta.1 release should hence be published.
The full progress of 0.6.0-beta.1 release is tracked by a dedicated project.
The next Zenoh beta version is published.
Users can use the new Zenoh beta version via crates.io and pip.
As of today, the main implementation of Zenoh (Rust) can communicate only over unicast connections for peer-to-peer, client-to-router, and router-to-router sessions.
At the same time, zenoh-pico can already operate over UDP multicast for peer-to-peer sessions.
Zenoh can operate over UDP multicast and is capable of interoperating with zenoh-pico.
Zenoh peers and routers will be able to communicate over UDP multicast.
No changes are expected to the API, all the changes are expected to take place in the internal mechanics of Zenoh.
A first version of a zenoh administration tool is available here.
The initial prototype is expected to evolve into a product allowing users to monitor and visualise the real-time status of a zenoh system.
A GUI-first administration/monitoring tool will be developed.
This tool will be closed source.
Current zenoh protocol definition makes extensibility and future backward compatibility hard to achieve in a multitude of scenarios.
Therefore, a new wire protocol definition is required to minimise any future changes.
In particular, the new protocol definition needs to provide:
In the future, this would allow support for:
The new zenoh protocol will be implemented in zenoh and zenoh-pico and the two implementations will be able to interoperate.
Users will not directly interact with the new protocol.
This is part of the Zenoh internal implementation.
Originally posted by yellowhatter November 2, 2022
I have a plan to implement a zenoh plugin working with ROS1, in the same manner as the zenoh-dds plugin. Of course the implementation details are different, but from the outside the plugins would be similar. The plugin is in Rust, working as a client on the ROS1 side. I'm also aware of the details (ROS1-ROS2 message compatibility, some optimizations, etc.) and aim to build a complete solution able to provide ROS1<->ROS1 and ROS1<->ROS2 communication.
Why not replace the Master?
Because the ROS1 Master is not involved in the data exchange process: ROS1 peers establish direct data connections (rostcp or rosudp protocol) with each other, while the Master works as a kind of name service (XML-based protocol) providing peers with information on topics and whatever is needed to establish a direct data connection with a peer (i.e. to subscribe to a topic). For Zenoh there is no point in reimplementing the Master (as it won't give us optimizations on data transfer); it is enough to create a special client based on the ROS1 Client Library.
The ROS1 Client Library has enough functionality (it implements rostcp/rosudp for data exchange and also the XML-based protocol for the Master), so there is no need to reimplement and support the ROS1 protocol stack on our side: we can just use this library to do the trick.
Some optimizations could be made
There are currently some ideas on optimizations:
We should provide a comprehensive documentation of Zenoh Flow.
The documentation will appear on our website.
Zenoh supports publishing data with different priorities.
As of today, 7 traffic classes are available:
A batching technique is automatically applied to all traffic classes to optimise network bandwidth and maximise throughput.
However, automatic batching comes at the cost of slightly higher latency (a few microseconds) for any data publication.
Nevertheless, specific traffic classes like RealTime, InteractiveHigh, and InteractiveLow are meant for latency-sensitive data.
Automatic batching should be configurable and disabled by default for latency sensitive data.
Queue configuration should include a parameter to enable/disable automatic batching per traffic class.
No changes are expected to the API.
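A hypothetical configuration sketch of what such a per-traffic-class switch could look like in the zenoh JSON5 config. The key names below are illustrative assumptions, not the actual schema; only the idea of a boolean per class is what this item proposes.

```json5
{
  transport: {
    link: {
      tx: {
        queue: {
          // Hypothetical per-traffic-class batching switch:
          batching: {
            // keep batching for throughput-oriented classes...
            data: true,
            background: true,
            // ...but disable it for latency-sensitive ones
            real_time: false,
            interactive_high: false,
            interactive_low: false,
          }
        }
      }
    }
  }
}
```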
An MQTT plugin for zenohd, in the same way as the DDS plugin.
Also available as a standalone zenoh-bridge-mqtt binary.
A new Zenoh plugin (and standalone bridge) that will map the MQTT pub/sub to the Zenoh pub/sub.
Possible use cases:
A similar architecture to the DDS plugin:
the discovery of an MQTT publisher (or subscriber) will trigger in the plugin the creation of a matching MQTT subscriber (or publisher) and a corresponding Zenoh publisher (or subscriber).
Zenoh's futures executor is based on async-std and tokio, depending on the various plugins and transports configured.
This leads to an unpredictable and large number of threads being spawned by the various libraries and/or frameworks.
As a result, the Zenoh runtime makes use of far more resources than needed and, above all, has no control over them.
The async executor should have a predictable usage of resources, as well as a predictable number of threads spawned across the full Zenoh runtime: core, transport, and plugins.
Additionally, the user should be able to pass their own executor to be used by Zenoh.
Users will be able to configure the async executor in terms of the threads used or, alternatively, to pass their own executor.
Originally posted by Mallets April 11, 2022
Current ZBuf and WBuf implementations are ubiquitously used in zenoh code and bindings.
As a result, the current implementation needs to accommodate a large set of requirements, making it hard to scale and optimise for particular scenarios.
A recent PR introduced common traits for the ZBuf and WBuf implementations.
This allows targeted buffer implementations that expose a common trait while leaving the internal structure tailored to the use case (e.g. serialisation, defragmentation, shared memory, etc.).
Targeted implementations of ZBuf and WBuf for different places in the code.
As a result, memory management and performance should benefit.
Users will still use the same zenoh API, which is based on the common traits.
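The idea behind the common traits can be sketched as follows (the trait and types are illustrative, not the actual ZBuf/WBuf definitions): callers program against one trait, while each implementation keeps an internal layout tailored to its use case, e.g. contiguous for serialisation, segmented for defragmentation.

```rust
/// A hypothetical common buffer trait: readers only see this interface.
trait Buffer {
    fn len(&self) -> usize;
    fn read_at(&self, pos: usize) -> Option<u8>;
}

/// Contiguous buffer: a plain byte vector, good for serialisation.
struct ContiguousBuf(Vec<u8>);

impl Buffer for ContiguousBuf {
    fn len(&self) -> usize { self.0.len() }
    fn read_at(&self, pos: usize) -> Option<u8> { self.0.get(pos).copied() }
}

/// Segmented buffer: a chain of slices, as produced by defragmentation
/// or by zero-copy references into shared memory segments.
struct SegmentedBuf(Vec<Vec<u8>>);

impl Buffer for SegmentedBuf {
    fn len(&self) -> usize { self.0.iter().map(Vec::len).sum() }
    fn read_at(&self, mut pos: usize) -> Option<u8> {
        for seg in &self.0 {
            if pos < seg.len() {
                return seg.get(pos).copied();
            }
            pos -= seg.len();
        }
        None
    }
}
```

Code written against the trait works identically with both layouts, which is what lets each hot path in zenoh pick the representation that performs best without changing its callers.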