google / tarpc
An RPC framework for Rust with a focus on ease of use.
License: MIT License
Ctx is Clone right now as a workaround for needing to use the ctx in a move closure; but if the move closure is never executed (because no thread is available), the ctx is still needed to reply. So right now it's just cloned, but we can be smarter about this.
let (tx, rx) = mpsc::channel();
match thread_pool.execute(move || {
    // Only runs if a worker picks up the job; the ctx arrives over the channel.
    let ctx = rx.recv().unwrap();
    ctx.reply(..);
}) {
    // Job accepted: hand the ctx to the worker.
    Ok(()) => tx.send(ctx).unwrap(),
    // No worker available: reply from the current thread instead.
    Err(_) => ctx.reply(BUSY),
}
Compiling tarpc-plugins v0.1.0 (https://github.com/google/tarpc#f9ff2c4e)
error[E0554]: #[feature] may not be used on the stable release channel
--> ~/.cargo/git/checkouts/tarpc-87655b19c07c6cd1/f9ff2c4/src/plugins/src/lib.rs:1:1
|
1 | #![feature(plugin_registrar, rustc_private)]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
error: aborting due to previous error
error: Could not compile `tarpc-plugins`.
rustc version : rustc 1.15.1 (021bd294c 2017-02-08)
cargo version : cargo 0.16.0-nightly (6e0c18c 2017-01-27)
OS version : macOS Sierra 10.12.3
Has this been discussed before?
I see you've said on reddit that you're thinking about making this library transport-agnostic. Here's my two cents:
urpc can listen on a single Read + Write object, which is perfect for my use case: I have a library called rusty-sandbox which creates a pipe between two processes and gives access to it as a File. Please add support for this!
Can't do this for FutureService, since it doesn't manage its own reactor, but we can and should do it for SyncService.
This removes a potential footgun of blocking the thread driving the sync server. It also requires changing listen to take a SocketAddr rather than A: ToSocketAddrs, since we can't return an address anymore.
Options::default() sets Options.reactor = None, which causes Client::connect to use REMOTE.spawn. To my great surprise, REMOTE is a static event loop driven by a single thread. Thus, if Options::default() is used, all RPC calls block on a single thread. This should be clearly documented, as that thread will quickly become a big bottleneck for high-performance systems.
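For illustration, a minimal sketch of working around this by driving the client from a reactor you own. It assumes the client-side Options exposes a handle(...) setter like the server-side Options used later on this page, and that FutureClient::connect takes an address plus Options and returns a connect future; exact names may differ by version.
fn connect_on_own_reactor(addr: std::net::SocketAddr) {
    // Run a dedicated reactor instead of relying on the global single-threaded REMOTE.
    let mut core = tokio_core::reactor::Core::new().unwrap();
    // Assumed API: pin the client to this reactor's handle.
    let options = Options::default().handle(core.handle());
    let client = core.run(FutureClient::connect(addr, options)).unwrap();
    // RPC futures returned by `client` are now driven by whichever thread runs `core`.
    let _ = client;
}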
warning: use of deprecated item: renamed to `send`
 --> src\main.rs:23:1
   |
23 |   service! {
   |  _^ starting here...
24 | | rpc simplify(src: String, target: String) -> String;
25 | | }
   | |_^ ...ending here
   |
   = note: #[warn(deprecated)] on by default
   = note: this error originates in a macro outside of the current crate
I'm using the code from the example on the master branch.
The examples right now show how to use a client to talk to a server in the same process. It would probably be more useful if the examples showed how to make a server binary (similar to the example at www.tokio.rs) and a client binary. This kind of setup is closer to the actual way people are likely to use tarpc.
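For illustration, a hedged sketch of what a standalone client binary might look like, assuming the service definition lives in a shared crate and the service! macro generates a SyncClient with a connect(addr, options) constructor (exact names and signatures may differ by version):
fn main() {
    // Hypothetical client binary: connect to a server running in a separate process.
    let client = SyncClient::connect("127.0.0.1:7000".first_socket_addr(),
                                     client::Options::default()).unwrap();
    println!("{:?}", client.foo()); // `foo` is the rpc from the shared service definition
}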
As far as I can tell there are three changes: removal of poll_ready from the Service trait in tokio-service and tokio-proto; rename and changes to tokio_core::easy; changes to the ways that NewService can be implemented.
I was able to work around the first two by removing implementations of poll_ready and pulling an older copy of easy.rs into tarpc. I am struggling to understand how to respond to the NewService change, though. This problem shows up when I try to build readme_future.rs.
rustc version: rustc 1.17.0-nightly (134c4a0f0 2017-03-20)
error: no field `value` on type `&mut syntax::ast::Attribute`
--> src\plugins\src\lib.rs:49:76
|
49 | if let NameValue(Spanned { node: Str(ref mut doc, _), .. }) = attr.value.node {
What is the recommended way to evolve an API when using tarpc?
E.g. when serving thousands of clients that are built using the same subcrate with the service definition, how do you update the whole system (server and all clients) to a new version?
Since there are no numbered fields like in proto, is the only way to evolve the API (e.g. when methods require more arguments) to introduce new functions (like appending the version at the end: _v1, _v2, ...), and then remove the old functions once no client is running the old version anymore?
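A hypothetical sketch of that versioning-by-new-functions approach, using the service! syntax shown elsewhere on this page (get_user and get_user_v2 are invented names):
service! {
    // Original method, kept until no client still calls it.
    rpc get_user(id: u64) -> String;
    // New signature added alongside rather than changed in place.
    rpc get_user_v2(id: u64, include_email: bool) -> String;
}
The server implements both methods for as long as old clients are deployed; the old one can be dropped afterwards.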
Bincode supports it now so there shouldn't be any blockers.
cc @compressed
Clone / Send / 'static
FutureService with SyncClient
Only SyncService needs to be Send.
If we do this, does that mean Client::connect will never return connection_refused? Does it mean we'll need to set timeouts on all requests, since it could loop endlessly trying to connect?
I get the following while trying to use tarpc:
Compiling tarpc-plugins v0.1.0 (https://github.com/google/tarpc#a441fcb7)
error[E0061]: this function takes 1 parameter but 0 parameters were supplied
--> /root/.cargo/git/checkouts/tarpc-87655b19c07c6cd1/a441fcb/src/plugins/src/lib.rs:24:33
|
24 | let mut item = match parser.parse_trait_item() {
| ^^^^^^^^^^^^^^^^ expected 1 parameter
error[E0061]: this function takes 1 parameter but 0 parameters were supplied
--> /root/.cargo/git/checkouts/tarpc-87655b19c07c6cd1/a441fcb/src/plugins/src/lib.rs:90:33
|
90 | let mut item = match parser.parse_impl_item() {
| ^^^^^^^^^^^^^^^ expected 1 parameter
error: aborting due to 2 previous errors
error: Could not compile `tarpc-plugins`.
It also looks like you're failing in CI: https://travis-ci.org/google/tarpc
As previously discussed in #56, finer-grained control over how work is distributed to multiple cores is useful. FutureService tries to cater to that, but I have not yet been able to make it work when using multiple threads.
In particular, I run the following code, which I believe is how FutureService should be run with multiple threads(?):
let s = Server;
let addr = "localhost:7000".first_socket_addr();
for _ in 0..4 {
    let s = s.clone();
    thread::spawn(move || {
        let mut core = reactor::Core::new().unwrap();
        let lf = s.listen(addr, Options::default().handle(core.handle()));
        core.run(lf).unwrap();
    });
}
loop {}
When run, three of the four threads crash with an "Address already in use" error:
$ rustup run nightly cargo run
Compiling test v0.1.0 (file:///home/jon/tmp/tarpc-multi)
Finished dev [unoptimized + debuginfo] target(s) in 4.86 secs
Running `target/debug/main`
thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: Error { repr: Os { code: 98, message: "Address already in use" } }', /buildslave/rust-buildbot/slave/nightly-dist-rustc-linux/build/src/libcore/result.rs:870
note: Run with `RUST_BACKTRACE=1` for a backtrace.
thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: Error { repr: Os { code: 98, message: "Address already in use" } }', /buildslave/rust-buildbot/slave/nightly-dist-rustc-linux/build/src/libcore/result.rs:870
thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: Error { repr: Os { code: 98, message: "Address already in use" } }', /buildslave/rust-buildbot/slave/nightly-dist-rustc-linux/build/src/libcore/result.rs:870
Tarpc configures listeners to allow the reuse of ports, so it should work.
But this seems counter to the behavior I observe above. Am I using the API incorrectly?
[package]
name = "test"
version = "0.1.0"
[dependencies]
futures = "0.1.9"
tokio-core = "0.1"
tarpc = {git="https://github.com/google/tarpc.git"}
tarpc-plugins = {git = "https://github.com/google/tarpc"}
[[bin]]
name = "main"
path = "src/main.rs"
// src/main.rs
#![feature(conservative_impl_trait, plugin)]
#![plugin(tarpc_plugins)]
#[macro_use]
extern crate tarpc;
extern crate futures;
extern crate tokio_core;
use tarpc::util::Never;
use tokio_core::reactor;
use tarpc::util::FirstSocketAddr;
use std::thread;
use tarpc::server::Options;
pub mod ext {
    service! { rpc foo() -> (); }
}
use self::ext::*;

#[derive(Clone)]
struct Server;

impl ext::FutureService for Server {
    type FooFut = futures::Finished<(), Never>;
    fn foo(&self) -> Self::FooFut {
        futures::finished(())
    }
}

fn main() {
    let s = Server;
    let addr = "localhost:7000".first_socket_addr();
    for _ in 0..4 {
        let s = s.clone();
        thread::spawn(move || {
            let mut core = reactor::Core::new().unwrap();
            let lf = s.listen(addr, Options::default().handle(core.handle()));
            core.run(lf).unwrap();
        });
    }
    loop {}
}
Potentially replace this with something like tokio's NewService trait.
cc @jonhoo
Please refer to #39 for the prior discussion.
For example, https://github.com/kmcallister/urpc abstracts the service over the Read + Write traits. The benefits of doing this include:
Considerations:
@tikue do you have any additional comment?
Currently, the Service trait generated by service! passes immutable &self references to all methods. However, Service implementations that want to update some internal state in response to an RPC message can't update anything on self. Thus it might be nice to have some sort of extra pattern, e.g. rpc mut, that would generate a trait method that allows mutation. Not sure if the behavior I'm asking about is supported in some other way, as I'm fairly new to Rust, so feel free to tell me I'm being silly here.
From Future to IntoFuture, for more flexibility?
I don't see a way to serve multiple services (for different 'concerns') on one server/port. Is there a way (like in gRPC)?
Similarly, if one client executable wants to use multiple services, can it reuse the same connection among the different instantiated clients? (like in gRPC)
Currently it's hardcoded to always be a result of some fashion -- by default the err variant is Never -- which allows us to flatten the returned future in the common case from Future<Item=Result<T, E>, Error=io::Error> to Future<Item=T, Error=tarpc::Error>. The benefit of not doing this would be further generalization (no need to deal with uninhabited types), slightly simpler macro code, as well as potentially the complete removal of tarpc::Error.
Serialize errors are swallowed by the Sink half of the framed stream; the client just sees a broken pipe. So the names should reflect that.
The spawner process would act as the server and its children as clients. The child process's stdin and stdout can be treated like a socket. The child process could initiate the connection in its main.
I would use that to build a plugin system that allows live load/reload/unload-ing the plugins.
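A minimal sketch of the spawner side using only the standard library, assuming a transport-agnostic tarpc could wrap any Read + Write pair; "./my_plugin" is a hypothetical child binary.
use std::process::{Command, Stdio};

fn spawn_plugin() {
    let mut child = Command::new("./my_plugin")
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()
        .expect("failed to spawn plugin process");
    let to_child = child.stdin.take().unwrap();    // write half of the "socket"
    let from_child = child.stdout.take().unwrap(); // read half of the "socket"
    // A Read + Write transport built from (from_child, to_child) would be handed
    // to the server here; the child does the same with its own stdin/stdout.
    let _ = (to_child, from_child);
}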
Hi!
Looks like current master is broken due to issues with bincode (using rustc 1.17.0-nightly (a559452b0 2017-03-17) on Ubuntu 16.04):
Compiling tarpc v0.6.0 (file:///tmp/tarpc)
error: no associated item named `Infinite` found for type `bincode::SizeLimit` in the current scope
--> src/protocol.rs:50:33
|
50 | SizeLimit::Infinite)
| ^^^^^^^^^^^^^^^^^^^
error: no associated item named `Infinite` found for type `bincode::SizeLimit` in the current scope
--> src/protocol.rs:94:60
|
94 | SizeLimit::Infinite);
| ^^^^^^^^^^^^^^^^^^^
error: aborting due to 2 previous errors
error: Could not compile `tarpc`.
It looks like upstream bincode also has trouble, although in a different way (cargo crashes; this also happens locally for me on bincode v1.0.0-alpha4).
Any ideas how to solve this, or should we simply wait until cargo and bincode are sorted out?
Currently we hardcode bincode. If we support generic serializers then bincode should become a default feature so it can be disabled.
And the default should be sane. A few megs perhaps.
I looked through the source code (this branch) trying to understand how to use UDP as the underlying transport. I saw this. Does it contain all available variants?
I thought about implementing TryFrom<Something>, but you've already used it for TCP.
I know that branch is WIP, but could you share your ideas about UDP support?
UPD: not only Stream, but Listener too, of course.
PS: Sorry if my English is bad.
It would be useful to have metadata like in gRPC or HTTP Headers for authentication data that should be sent with every request.
(And then on top of that it would be useful to have a capability for something like BeforeMiddleware that can check if a client is authenticated before a request is processed. But then it's not pure RPC anymore?)
Any plans to add support for SSL/TLS?
It allows people to use tarpc in a more composable fashion with the rest of the tokio libraries (say, if they want to handle the incoming stream themselves) and the larger ecosystem.
Likely I'd want to reserve the right to change the actual return type, because once impl Trait works with traits it'd be nice to just return -> impl NewService<..>.
cc @jonhoo
SocketAddr into ClientConfig / ServerConfig.
Client::new("127.0.0.1:9000", None) seems pretty cryptic if you don't remember what the type signature is. I think it would make sense to provide the following functions instead:
fn new<A: ToSocketAddrs>(addr: A) -> ...
fn with_timeout<A: ToSocketAddrs>(addr: A, timeout: Duration) -> ...
In my opinion, this makes the intent of the programmer more clear, especially in cases when you don't want a timeout.
If you like the idea I could submit a PR. This would probably require bumping the version of the library to 0.3.0 though.
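For comparison, hypothetical call sites for the constructors proposed above (this is not an existing API; the return type is assumed to stay a Result):
use std::time::Duration;

fn example() {
    let client = Client::new("127.0.0.1:9000").unwrap();
    let slow_client = Client::with_timeout("127.0.0.1:9000", Duration::from_secs(5)).unwrap();
    let _ = (client, slow_client);
}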
One argument against is that the user can stop running their reactor::Core whenever they please. However, that would be an all-or-nothing situation. It might still be useful to shut down some services but not all.
Unless I'm mistaken, tarpc currently spawns a new thread for every RPC. For a high-traffic service where each RPC handler exits quickly, this adds a lot of overhead. It'd be nice if a thread pool could be used instead to save some of the overhead. Is this something that's on the roadmap? I noticed it was mentioned as a drawback in @shaladdle's Reddit post announcing the project in March, but haven't seen it mentioned anywhere else since.
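For reference, a rough standard-library sketch of the thread-pool idea (not tarpc's implementation): a fixed set of workers pulls boxed handlers off a shared channel instead of spawning one thread per RPC.
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

fn spawn_pool(workers: usize) -> mpsc::Sender<Box<dyn FnOnce() + Send>> {
    let (tx, rx) = mpsc::channel::<Box<dyn FnOnce() + Send>>();
    let rx = Arc::new(Mutex::new(rx));
    for _ in 0..workers {
        let rx = Arc::clone(&rx);
        thread::spawn(move || loop {
            // The mutex guard is held across recv() here, which is fine for a sketch.
            let job = match rx.lock().unwrap().recv() {
                Ok(job) => job,
                Err(_) => break, // all senders dropped: the pool is shutting down
            };
            job();
        });
    }
    tx
}
Each RPC handler would then be submitted with pool.send(Box::new(move || { ... })) rather than a fresh thread::spawn.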
client::Either, __tarpc_service_Request, etc.
It would be nice if the server could notify the clients on its own, without needing the client to poll the server. A client can register listeners for an event, and when the server triggers the event, it is multiplexed to all the listeners. Rough idea of what the code might look like:
service! {
    event fn something_changed(olddata: i32, newdata: i32);
}

let client = Client::new(...);
client.something_changed(|old, new| {
    println!("It changed!");
});

impl Server for MyServer {
    fn some_method(&self) {
        self.something_changed(self.old_data, self.new_data);
    }
}
There is a precedent: the Deluge RPC protocol supports events.
From @jonhoo:
An interesting observation: we get ~10-20% higher throughput by removing all the debug logging calls in tarpc.
Things like trace! compile to a memory load and a branch in the final code, which actually ends up mattering for common calls.
I know slog (https://github.com/slog-rs/slog) has unused logging calls be completely compiled away, and it also composes a bit more nicely.
Replace with tokio-rs/tokio-proto#135 (review)
Not currently possible, because the shutdown hook is only usable via TcpServer, which we don't use.
Please consider updating the bincode dependency. The new version is 0.6.0. Main changes: a new version of serde with performance improvements.
Right now services must be Clone, but it's not obvious (IMHO at least) why they are Clone. Users might make incorrect assumptions about what mutating their service object within a service implementation does. They might think they're modifying the one copy of the struct, while really they're just modifying a clone.
struct MyServer {
    counter: u64,
}

impl Service for MyServer {
    fn do_thing(&mut self) {
        self.counter += 1; // Only ever increments the copy made by the tarpc layer :(
    }
}
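A sketch of the usual workaround under the current design, assuming the handle really is cloned by the tarpc layer: keep the shared state behind an Arc so every clone updates the same value.
use std::sync::{Arc, Mutex};

#[derive(Clone)]
struct MyServer {
    counter: Arc<Mutex<u64>>,
}

impl MyServer {
    fn do_thing(&self) {
        *self.counter.lock().unwrap() += 1; // all clones share this one counter
    }
}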
Hello,
Could you provide documentation on how to write the .proto files in Rust code?
I'm trying to connect a Rust client with a Java server; so far I can't seem to reproduce my proto file in Rust.
Thanks
Following the observations in #88, I wanted to modify the code such that the thread making an RPC call is the one that drives the event loop. However, there doesn't seem to be any way to do this with SyncClient as far as I can tell? When you make the function call representing an RPC method, you have no way to also make that thread drive the core until the call completes. Of course, with a FutureClient you could implement this yourself pretty trivially, but having this be a supported mode of operation for SyncClient would be really nice.
Hi there, I'm scouting for a protocol to use for efficient transfer of statsd-type events sometime next year; do you think tarpc would be a good fit? I'm still in the conceptual stages, but a high-level view would be:
Theoretically with tokio/mio under the hood I might be able to get something like that spun up? I'm really interested in efficient transmission.
cc @tikue
Currently, we force users to call server.listen(reactor_handle, addr);. Forcing users to use a handle is a little opinionated, since users of the futures API might want to be able to explicitly control how they run their server on their reactors.
This would change FutureService to expose a bind call, which would either return the bound address and a future which encapsulates the server itself, or just a future which exposes a local_addr method. Then users can choose how to run this future on their reactor(s).
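A hypothetical shape of that proposal (bind and local_addr are invented here to match the description; Server, first_socket_addr, and Options come from the multi-threading example above):
fn run_bound_server(s: Server) {
    let mut core = tokio_core::reactor::Core::new().unwrap();
    // bind would return a future encapsulating the listening server and expose the
    // bound address, leaving the choice of reactor entirely to the caller.
    let server = s.bind("localhost:7000".first_socket_addr(), Options::default());
    println!("listening on {}", server.local_addr());
    core.run(server).unwrap();
}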
Serde 1.0 was recently released. It would be nice if tarpc supported this new version.
It should grow as needed but shrink to a configurable size when threads are not busy.