thruster-rs / thruster
A fast, middleware-based web framework written in Rust.
License: MIT License
Currently, when I try to use route parameters, I get a panic with the message "Chain out of cycle" coming from the thruster-core-async-await crate (L39).
It would be nice if those contexts were drop-in replacements for one another, so that I could switch between backend implementations.
I'm new to Rust and Thruster, coming from a Node.js background. I would like to know if there is a way to gracefully shut down a Thruster server and do some cleanup before the process ends.
In Node, I can listen for a SIGTERM and prepare for the shutdown like this:
process.on('SIGTERM', () => {
  log.warn('SIGTERM received. Stopping server.');
  myServices.stopAll();
  server.close();
});
Is there a way to do something similar with Thruster?
Thanks!
Hello, is there any way to use Thruster with Unix domain sockets?
If there is no way to do that now, I wonder if it is possible to add a method .build_from_incoming which uses hyper::Server::builder instead of hyper::Server::bind to create the underlying hyper::Server, or maybe just a new server type thruster::UdsHyperServer which implements the ThrusterServer trait but ignores the port argument, like this:
use thruster::{
    context::basic_hyper_context::{generate_context, BasicHyperContext as Ctx, HyperRequest},
    async_middleware, middleware_fn,
    App, Context, ThrusterServer,
    MiddlewareNext, MiddlewareResult,
};
use hyper::{
    service::{make_service_fn, service_fn},
    server::accept,
    Body, Request, Response, Server,
};
use std::sync::Arc;
use async_trait::async_trait;
use tokio::net::UnixListener;

pub struct UdsHyperServer<T: 'static + Context + Send> {
    app: App<HyperRequest, T>,
}

impl<T: 'static + Context + Send> UdsHyperServer<T> {}

#[async_trait]
impl<T: Context<Response = Response<Body>> + Send> ThrusterServer for UdsHyperServer<T> {
    type Context = T;
    type Response = Response<Body>;
    type Request = HyperRequest;

    fn new(app: App<Self::Request, T>) -> Self {
        UdsHyperServer { app }
    }

    async fn build(mut self, path: &str, _port: u16) {
        self.app._route_parser.optimize();
        let arc_app = Arc::new(self.app);

        async move {
            let service = make_service_fn(|_| {
                let app = arc_app.clone();

                async {
                    Ok::<_, hyper::Error>(service_fn(move |req: Request<Body>| {
                        let matched = app.resolve_from_method_and_path(
                            &req.method().to_string(),
                            &req.uri().to_string(),
                        );
                        let req = HyperRequest::new(req);
                        app.resolve(req, matched)
                    }))
                }
            });

            let mut listener = UnixListener::bind(path).unwrap();
            let incoming = listener.incoming();
            let incoming = accept::from_stream(incoming);
            let server = Server::builder(incoming).serve(service);

            server.await?;
            Ok::<_, hyper::Error>(())
        }
        .await
        .expect("hyper server failed");
    }
}

#[middleware_fn]
async fn plaintext(mut context: Ctx, _next: MiddlewareNext<Ctx>) -> MiddlewareResult<Ctx> {
    let val = "Hello, World!";
    context.body(val);
    Ok(context)
}

fn main() {
    println!("Starting server...");

    let mut app = App::<HyperRequest, Ctx>::create(generate_context);
    app.get("/plaintext", async_middleware!(Ctx, [plaintext]));

    let server = UdsHyperServer::new(app);
    server.start("/tmp/thruster.sock", 4321);

    // Test the server with the following command:
    // curl --unix-socket /tmp/thruster.sock http://host/plaintext
}
We should have a testing harness akin to supertest in Node.js. That is, calling the harness would look something like:
use thruster::test;
use super::my_app::{Context, init};

// ...

let app: App<Context> = init();
let test_app = test::wrap(app);

let result = test_app.get("test/route");
assert!(result == "Hello, world!");
It might make sense to automatically wrap the response in an object as well?
I have an error when I test basic middleware:
16 | let ctx_future = chain.next(context)
| ^^^^^^^^^^ `futures::Future<Error=std::io::Error, Item=thruster::BasicContext> + std::marker::Send` does not have a constant size known at compile-time
|
= help: the trait `std::marker::Sized` is not implemented for `futures::Future<Error=std::io::Error, Item=thruster::BasicContext> + std::marker::Send`
= note: all local variables must have a statically known size
most_basic.rs
extern crate thruster;
extern crate futures;

use std::boxed::Box;
use futures::future;
use thruster::{App, BasicContext as Ctx, MiddlewareChain, MiddlewareReturnValue};

fn index(mut context: Ctx, _chain: &MiddlewareChain<Ctx>) -> MiddlewareReturnValue<Ctx> {
    context.body = "Hello, Index!".to_owned();
    Box::new(future::ok(context))
}

fn profiling(mut context: Ctx, chain: &MiddlewareChain<Ctx>) -> MiddlewareReturnValue<Ctx> {
    println!("before");
    let ctx_future = chain.next(context)
        .and_then(move |ctx| {
            println!("after");
            future::ok(ctx)
        });
    Box::new(ctx_future)
}

fn main() {
    println!("Starting server...");

    let mut app = App::<Ctx>::new();
    app.use_middleware("/", profiling);
    app.get("/", vec![index]);

    App::start(app, "0.0.0.0", 4321);
}
Ami44
Will you be adding support for server-side HTML generation? For a full-stack web framework, the following would be required:
- page generation
- layouts
- asset management, i.e. webpack (for CSS, JS, and images)
Hi!
I would like to share state between requests by storing it in the request context.
The proposal is to create a new trait with a generate method:

pub trait ContextGenerator<R, T> {
    fn generate(req: R) -> T;
}

The changes involved:
- create a new constructor
- create a basic implementation of ContextGenerator in order to support the older constructor
https://github.com/trezm/Thruster/blob/9ddae50167e3332fdea23bedfc1a36fff579b5a2/src/app.rs#L87
https://github.com/trezm/Thruster/blob/9ddae50167e3332fdea23bedfc1a36fff579b5a2/src/app.rs#L103
https://github.com/trezm/Thruster/blob/9ddae50167e3332fdea23bedfc1a36fff579b5a2/src/app.rs#L196
Would love to get the home-grown implementation of the HTTP encoder/decoder more in line with hyper's performance.
Perf-focused users can now easily use hyper as the backend, but it would be nice to be on a level playing field in the future.
Hello
rustc --version
-> rustc 1.28.0 (9634041f0 2018-07-30)
cargo run --example most_basic
http://127.0.0.1:4321/plaintext
Ami44
Discussion issue for error handling in Thruster.
Are there plans to allow storing application state in some form? Like an R2D2 connection manager that can be used within routes to retrieve a database session?
What do you think about using the status codes from the http crate or implementing something similar? I think a defined status code type is way more ergonomic to use and less error prone for a developer than needing to enter a number or string of a status code.
The request method is the generic version of the thruster_testing request methods, and is the only way to pass headers when writing tests. However, despite allowing the developer to write the method as a String, it only allows testing GET requests, because the route resolver does not use the method param:
https://github.com/trezm/Thruster/blob/master/thruster-app/src/testing.rs#L26
We run multiple regex tests when we could likely run a single one. Just noodling locally, it looks like this could drastically increase our speed.
This issue is for an investigation and a subsequent implementation. Important questions to answer will be:
request.body_stream() // do something with said stream
but I am unclear how that would play with the existing body field.
Hi there,
I'm trying Thruster (good job!) and I would like to group my routes under the same base path. Example:
/admin
/admin/:adminId/posts
Middleware (with a naive approach):

    _app.get("/admin", vec![show_admins]);
    _app.get("/admin/:adminId/posts", vec![show_admin_posts]);

How do you associate a different middleware with /admin/:adminId/posts? Currently, this route matches the middleware of /admin first (which returns a JSON response, for example).
Thank you!
I was wondering why it isn't possible to use a different type of context in the sub-apps. I haven't done much work with this library yet, but this seems to hinder usability, doesn't it? I can imagine a scenario where I'd have an /api endpoint that automatically parses all requests as JSON (and stores the data on the context) while the other endpoints treat the requests differently.
I am getting the following error:
error[E0277]: the trait bound `futures::future::FutureResult<thruster::BasicContext, _>: futures::future::Future` is not satisfied
--> src/main.rs:21:3
|
21 | Box::new(future::ok(context))
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `futures::future::Future` is not implemented for `futures::future::FutureResult<thruster::BasicContext, _>`
|
= note: required for the cast to the object type `futures::future::Future<Item=thruster::BasicContext, Error=std::io::Error> + std::marker::Send`
Also, it seems that there is no need to list serde, serde_json, and tokio explicitly in the example project.
Rust version: 1.27 and Nightly (2018-06-19).
P.S. I had to comment out #![feature(test)] in lib.rs to get it compiling to this state on stable Rust.
Hello,
How can I run println!("{}", "do some other actions after start") (or a function) after thruster::App::start(app, host, port)?
main.rs:
....
fn main() {
    ...
    println!("Starting server {}://{}:{}", &protocol, &host, &port); // ok, displays
    thruster::App::start(app, host, port);
    println!("{}", "do some other actions after start"); // <== never displays
}
Thanks
Ami44
I tried to set a Server header, but it was getting doubled with the static header text in the Response. Having Thruster as the default is a great idea, but if it went through the regular set method on the response, a program could remove it when desired.
Standard approach in my microservices is (or at least what I expected to work):
#[middleware_fn]
async fn server(mut context: Ctx, next: MiddlewareNext<Ctx>) -> Ctx {
    context = await!(next(context));
    context.set("Server", &format!("{} v{}", PKG_NAME, PKG_VERSION));
    context
}
RFC:
I'm proposing moving middleware chains to traits rather than static fns. This would make it significantly easier to add objects to chains, rather than creating a new function for each chain item. Moreover, trait objects can respond dynamically, unlike a static function whose definition can't be changed.
This is similar to how Nickel.rs does it. You can take a look at that here:
https://github.com/nickel-org/nickel.rs/blob/master/src/middleware.rs
I only checked the examples for this feature, so maybe it already exists (?).
It's useful if you want to have multiple tokio servers in your application, but only one (tokio) event loop.
It's also discussed here (more elaborately):
While using the hyper server, this should already be possible. This is the tracking issue to make that support first class, along with an example and short guide.
Ideally the upgrade for the socket will be handled via a single middleware function, but we should consider the following:
As a new user, I find it constraining to write my own context.rs each time I start a new project; it's not intuitive.
Some basic methods should be provided on BasicContext by default that allow defining headers (add, del) and setting the return code (e.g. 404).
Thanks
Ami44
Setting a wildcard route doesn't actually propagate down; in other words, curling /test/a/b/2 causes an exception.
fn main() {
    let host = "0.0.0.0";
    let port = 8080;
    println!("Starting server, accessible from: http://{}:{}", host, port);

    let mut app = App::create(generate_context);
    app.use_middleware("/", profiling);
    app.get("/*", vec![not_found]);
    app.get("/test/a/b", vec![test1]);
    app.get("/test/a/c", vec![test1]);

    App::start(app, host, port);
}
Thank you for the BasicContext update (and cookies), very nice.
I am looking for a minimal middleware example that returns file content (an image, a favicon, ...) as an async example with tokio, not a sync example.
Thanks
ami44
We should be using best practices in this repository. So:
In the basic example of the README the endpoint declaration is currently:
app.get("/plaintext", middleware![plaintext]);
But the correct way seems to be:
app.get("/plaintext", middleware![Ctx => plaintext]);
Might want to adjust it; it threw me off at first and might do the same to others.
app.get("/", vec![index]); is not recognized!
127.0.0.1:4321/plaintext : ok
127.0.0.1:4321/ (or 127.0.0.1:4321) : ko, it always displays the 404 page
How do I catch the index page?
Ami44
most_basic.rs:
extern crate thruster;
extern crate futures;

use std::boxed::Box;
use futures::future;
use thruster::{App, BasicContext as Ctx, MiddlewareChain, MiddlewareReturnValue};

fn index(mut context: Ctx, _chain: &MiddlewareChain<Ctx>) -> MiddlewareReturnValue<Ctx> {
    context.body = "Hello, Index!".to_owned();
    Box::new(future::ok(context))
}

fn plaintext(mut context: Ctx, _chain: &MiddlewareChain<Ctx>) -> MiddlewareReturnValue<Ctx> {
    context.body = "Hello, Plaintext!".to_owned();
    Box::new(future::ok(context))
}

fn page404(mut context: Ctx, _chain: &MiddlewareChain<Ctx>) -> MiddlewareReturnValue<Ctx> {
    context.body = "Hello, 404!".to_owned();
    Box::new(future::ok(context))
}

fn main() {
    println!("Starting server...");

    let mut app = App::<Ctx>::new();
    app.get("/", vec![index]);
    app.get("/plaintext", vec![plaintext]);
    app.get("/*", vec![page404]);

    App::start(app, "0.0.0.0", 4321);
}
Thruster should be able to have gRPC support like tonic. Update this issue with more details as they evolve.
The generate_context method is very simple and useful as a baseline. I think it'd be convenient if it were exported by the library itself.
RE: tokio-rs/tokio#1087 (comment)
For the time being, for await support, you must depend on tokio via GitHub.
Checking tokio-async-await v0.1.7
error[E0432]: unresolved import `std::await`
--> /Users/ckarper/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-async-await-0.1.7/src/lib.rs:35:9
|
35 | pub use std::await as std_await;
| ^^^^^^^^^^^^^^^^^^^^^^^ no `await` in the root
Given our tree structure, which I believe is fairly comprehensive since it hasn't changed in a long time, I'd like to better formalize the algorithm for matching routes. Right now the matching code has been taped up many times and is smelling pretty bad. With a more formal algorithm we could drastically clean it up.
As the title says, responses of significant size take a long time, such that I can't get more than 300 requests per second for a 1 MiB image that is cached in RAM.
Suppose I want to make an HTTP request and return a context which depends on it when I receive a request on my Fanta endpoint.
I'd think I need to create a hyper Client with a tokio Handle and then return the future request (so that Fanta makes sure the future is run and the response from the future is used).
Is this already possible?
I'm not sure where else to ask this. I like your library, but it's missing a CONTRIBUTIONS file.
This repo would get way more traction with a name like "Thrustr". You could even capitalize the "r" as a nod to Rust. Something like "ThrustR"?
I'll submit a PR.
With the ability to return trait types, we should move away from boxing all future responses in middleware chains.
Windows
cargo run --example most_basic
-> error[E0432]: unresolved import `net2::unix`
Is Thruster Linux-only?
Ami44
Hello and thanks for working on Thruster!
I have been experimenting with various Rust web frameworks over the past few days to potentially replace my usage of Rocket. So far Thruster has been one of the more promising candidates. I love the simplicity of the API and how easy it is to create, manipulate, and pass around the App type.
I have a few questions on how to use the framework correctly:
- Is the only way to return a file's content context.body(entire_file_content)? This seems very inefficient if that is the case :(
- Is there a way to mount a route such as /swagger to serve the corresponding files in a ./docs directory on disk? For instance, hitting /swagger/img/logo.png should serve /docs/img/logo.png.
- Regarding App::create, it would be very useful if it were possible to pass in a closure as the generate_context argument. As it stands, I am not sure how to shove any data determined during application startup into the context objects. EDIT: I guess this is the answer: #130
Many thanks in advance!
https://docs.rs/thruster/0.4.4/thruster/struct.Request.html#method.raw_body
It seems odd that thruster does extra work in this method when it's actually holding the body as a bunch of bytes internally.
Hello,
Currently, the set404 function sets the middleware used when no route is successfully matched. This function could be renamed to be less specific, since it's the developer who defines the logic.
My proposition: we could rename set404 to something like set_default_behavior / set_default_middleware ... or whatever name you want; the discussion is open :)
Hello
thanks
ami44
https://github.com/TechEmpower/FrameworkBenchmarks
If you're not familiar with this repo, it's a collection of benchmarks across frameworks to compare performance. They're rendered here every so often: https://www.techempower.com/benchmarks/.
I think it would be good to add a Thruster benchmark, just a simple one. I can PR it if you like, just wanted to get permission first!
I'm curious if you've thought about moving some of your types to the http crate. This would reduce the amount of code you're having to maintain, and also be more familiar to people coming from other frameworks (Hyper, etc).
No bother if you can't or don't want to; just thought it might be worth a suggestion! The reason I bring it up now is that it'd be easier to migrate now than later :p
I have created a little thruster template: https://github.com/ami44/thruster-basic-template with tests, coverage and livereload.
Can you check whether it works, and fork it if you need to?
I'm going to be out of town from now on. I hope to integrate improvements when you have solved other problems (but without guarantee).
Ami44
You really should reference the Gitter channel somewhere in the README, and maybe even in the docs. I accidentally saw the channel mentioned in a closed issue; otherwise I wouldn't have known it exists. So make it more visible, so that people join ;)