async-rs / async-std
Async version of the Rust standard library
Home Page: https://async.rs
License: Apache License 2.0
Currently, all CI tests run on x86_64. We should have a policy and testing for other platforms, such as ARM or non-64-bit x86 targets.
While IMHO this shouldn't be considered for 1.0, it would be good to start discussing what an API could look like and what requirements different folks have here.
Also, while this is kind of related to #60, my main point here is about having control over the lifetime of the reactor/executor, being able to run multiple of them, and deciding which one is used when and where. See also rustasync/runtime#42 for a similar issue of mine for the runtime crate, on which everything that follows is based.
Currently the executor, reactor, and thread pools are all global and lazily started when they're first needed, and there's no way to e.g. start them earlier, stop them at some point, or run multiple separate ones.
This simplifies the implementation a lot at this point (extremely clean and easy-to-follow code right now!) and is also potentially more performant than passing around state via thread-local storage (like e.g. tokio does).
However, it limits usability in at least two scenarios where I'd like to make use of async-std.
Anyway, reasons why this would be useful to have (I'm going to call the reactor/executor/thread-pool combination a runtime in the following): async-std etc. is included, so unloading a plugin also requires being able to shut down the runtime at a specific point and to ensure that none of the plugin's code is running anymore.
Hi,
I'm trying to replace tokio with async-std in my own project and it's truly amazing. However, there's no channel equivalent of std::sync::mpsc in async-std, so I have to use the futures crate's version.
I think it would be very nice to have mpsc, oneshot, etc. from the futures crate re-exported in this crate's namespace for consistency and convenience. Any plan for that?
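For reference, a minimal std-only sketch of the blocking std::sync::mpsc API whose async counterpart is being requested here; the key difference is that recv() blocks the calling thread, which an async channel would avoid by returning a future:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // `channel()` returns a (Sender, Receiver) pair. `recv()` blocks
    // the calling thread until a message arrives, which is exactly
    // the behavior an async version must replace with a future.
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || {
        tx.send(String::from("hello from another thread")).unwrap();
    });

    println!("{}", rx.recv().unwrap());
}
```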
Push book to Netlify.
Required:
mdbook build docs is added to .travis.yml
Hello, sorry for the ignorance, but I would like to know if this crate has some macro/mechanism available to join futures so they can run concurrently, like the futures crate has.
#![feature(pin, async_await, futures_api)]
use async_std::io;
use async_std::task;
use serde_derive::Deserialize;
#[macro_use]
extern crate futures;

#[derive(Deserialize, Debug)]
struct Post {
    #[serde(rename = "userId")]
    user_id: usize,
    id: usize,
    title: String,
    completed: bool,
}

fn main() {
    task::block_on(async {
        let post_fut = surf::get("https://jsonplaceholder.typicode.com/todos/1").recv_json::<Post>();
        let post2_fut = surf::get("https://jsonplaceholder.typicode.com/todos/2").recv_json::<Post>();
        let (result1, result2) = join!(post_fut, post2_fut);
        println!("{:?}", result1.unwrap());
        println!("{:?}", result2.unwrap());
    });
}
Most of our examples currently use different import styles. We should settle on just one.
In https://docs.rs/async-std/0.99.3/async_std/io/trait.BufRead.html#examples-2, this example is provided:
use async_std::fs::File;
use async_std::io::BufReader;
use async_std::prelude::*;
let file = File::open("a.txt").await?;
let mut lines = BufReader::new(file).lines();
let mut count = 0;
for line in lines.next().await {
    line?;
    count += 1;
}
I used it in this full program:
#![feature(async_await)]
use std::env::args;
use async_std::fs::File;
use async_std::io::{self, BufReader};
use async_std::prelude::*;
use async_std::task;

fn main() -> io::Result<()> {
    let path = args().nth(1).expect("missing path argument");
    let mut count = 0u64;
    task::block_on(async {
        let file = File::open(&path).await?;
        let mut lines = BufReader::new(file).lines();
        let mut count = 0;
        for line in lines.next().await {
            line?;
            count += 1;
        }
        println!("The file contains {} lines.", count);
        Ok(())
    })
}
However, running that counts 1 line for any file with >= 1 line that I run it on. In contrast, this full program works correctly:
#![feature(async_await)]
use std::env::args;
use async_std::fs::File;
use async_std::io::{self, BufReader};
use async_std::prelude::*;
use async_std::task;

fn main() -> io::Result<()> {
    let path = args().nth(1).expect("missing path argument");
    let mut count = 0u64;
    task::block_on(async {
        let file = File::open(&path).await?;
        let mut lines = BufReader::new(file).lines();
        while let Some(line) = lines.next().await {
            line?;
            count += 1;
        }
        println!("The file contains {} lines.", count);
        Ok(())
    })
}
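The difference between the two loops mirrors plain iterators: `for x in iter.next()` iterates over a single `Option`, so the body runs at most once, while `while let Some(x) = iter.next()` keeps going until `None`. A std-only sketch of the same bug:

```rust
fn main() {
    // Buggy pattern: `iter.next()` yields one `Option<_>`, and `for`
    // iterates over that Option, so the body runs at most once.
    let mut iter = [1, 2, 3].iter();
    let mut count_for = 0;
    for _item in iter.next() {
        count_for += 1;
    }

    // Correct pattern: keep calling `next()` until it returns `None`.
    let mut iter = [1, 2, 3].iter();
    let mut count_while = 0;
    while let Some(_item) = iter.next() {
        count_while += 1;
    }

    println!("for: {}, while let: {}", count_for, count_while);
}
```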
The select! macro, together with FutureExt::fuse, FusedStream, and FusedFuture, is a pretty powerful mechanism which isn't all that easy to understand. The current version of the doc only mentions select! in passing and doesn't address fuse at all.
Linking to the respective section in the async book, or to the trait and function definitions, might help, although I am not sure how stable they are.
The async_std::task::Builder and async_std::task::spawn methods assume that some kind of environment is present (i.e. a background thread pool) where tasks can be spawned.
Lines 172 to 192 in 532c73c
Crates that provide some sort of hidden environment generally provide a way to configure how it works. Example of what I mean:
Similarly, I think async_std should provide some sort of set_task_spawner function that allows configuring how that works.
The use case I have in mind is the browser environment, where you want to drive tasks by using spawn_local (which is implemented using setTimeout).
In the current version of the book, the final code of the Handling Disconnection tutorial doesn't work:
use futures::{
channel::mpsc,
SinkExt,
FutureExt,
select,
};
---
error[E0432]: unresolved import `futures::select`
--> src/main.rs:13:5
|
13 | select,
| ^^^^^^ no `select` in the root
futures-preview 0.3.0-alpha.17 actually defines two select! macros, futures::future::select and futures::stream::select; see its lib.rs.
Changing the import to stream::select (I hope I understood it correctly, given that we are working with streams; either way, importing future::select has the same issue) makes the import work, but it still cannot resolve the macro:
async fn client_writer(
    messages: &mut Receiver<String>,
    stream: Arc<TcpStream>,
    mut shutdown: Receiver<Void>,
) -> Result<()> {
    let mut stream = &*stream;
    loop {
        select! {
            msg = messages.next().fuse() => match msg {
                Some(msg) => stream.write_all(msg.as_bytes()).await?,
                None => break,
            },
            void = shutdown.next().fuse() => match void {
                Some(void) => match void {},
                None => break,
            },
        }
    }
    Ok(())
}
---
error: cannot find macro `select!` in this scope
--> src/main.rs:92:9
|
92 | select! {
| ^^^^^^
error: cannot find macro `select!` in this scope
--> src/main.rs:126:21
|
126 | let event = select! {
| ^^^^^^
warning: unused import: `future::select`
--> src/main.rs:13:5
|
13 | future::select,
| ^^^^^^^^^^^^^^
I want a feature like tokio's framed streams (https://tokio.rs/docs/going-deeper/frames/); does async-std support this?
Similar to #129 for streams, this issue tracks what's left to port from std::io to async_std::io.
prelude
copy
empty
repeat
sink
stderr
stdin
stdout
BufReader
BufWriter
Bytes
Chain
Cursor
Empty
Error
IntoInnerError
IoSlice
IoSliceMut
LineWriter
Lines
Repeat
Sink
Split
Stderr
StderrLock
Stdin
StdinLock
Stdout
StdoutLock
Take
Read methods:
Read::by_ref
Read::bytes
Read::chain
Read::read_exact
Read::read_to_end
Read::read_to_string
Read::read_vectored
Read::take
Write methods:
Write::by_ref
Write::write_all
Write::write_fmt
Write::write_vectored
BufRead methods:
BufRead::buffer
BufRead::consume
BufRead::lines
BufRead::read_line
BufRead::read_until
BufRead::split
BufWriter methods:
BufWriter::buffer
BufWriter::get_mut
BufWriter::get_ref
BufWriter::into_inner
BufWriter::new
BufWriter::with_capacity
BufReader methods:
BufReader::fill_buf
BufReader::get_mut
BufReader::get_ref
BufReader::into_inner
BufReader::new
BufReader::with_capacity
Currently this is hidden as an implementation detail of the network driver. Exposing it would make it possible to hook up arbitrary Evented instances, e.g. for other kernel event sources.
Perhaps this doesn't belong in async-std... in that case, maybe it could be extracted to another crate?
Collecting a few of them here before making a single commit with all fixes:
In general, Title Case is not followed consistently for titles.
[…] you link those in.. Both uses […]
There are two periods when there should be either one or three.
[…] we introducece functionality […]
Typo for "introduce".
[…] in which case we give at least 3 month of ahead notice.
This sounds a bit off to me. Maybe "we will give a notice at least 3 months ahead" is better.
[…] a very simplified view suffices for us:
The list that follows starts its items with a lowercase letter. However, the list immediately below starts them with an uppercase letter. This is a bit distracting, and not consistent. Perhaps using a sentence is more appropriate, such as "Computation is a sequence of composable operations which can branch based on a decision, and either run to succession and yield a result, or they can yield an error".
[…] and how to react on potential events the... well... Future
Probably something like "and how to react on potential events in the… well, Future" is better.
I noticed here that code blocks are not syntax-highlighted. Is there a reason for this?
When this function is called, it will produce a Future<Output=String>
That's not the case though, is it? The function is async fn ... -> Result<String, io::Error>, not async fn ... -> String.
[…] a value available sometime later
Should that be "available some time later" or "some later time"?
we will introduce you to tasks, which we need to actually run Futures
A bit earlier it was said that calling poll repeatedly was enough to drive a future to completion. So is "need" the right word here?
Now that we know what Futures are, we now want to run them!
"Now" is repeated too soon. Maybe "Now that we know what Futures are, we want to run them!" works better.
[…] task can also has a name and an ID, just like a thread
Task can also have a name.
The carry desirable metadata for debugging
They carry.
[…] task api handles […]
task API.
[…] mix well with they concurrent execution […]
with the concurrent.
Result<T,E>
Missing space after the comma.
client_writer uses HashMap but doesn't have a use for HashMap.
futures::select! requires that the streams selected over implement futures::FusedStream. There's a .fuse() method on futures::StreamExt which fuses any stream. The problem is, to use StreamExt::fuse one has to import this trait, and then the .next method collides between async_std::Stream and futures::StreamExt.
See also async-rs/a-chat#1 (comment)
Land #61 and make sure it doesn't break netlify ;).
As the title says, async_std should provide a macro like #[async_std::test], similar to #[tokio::test] or #[runtime::test], for writing tests.
Right now I feel we should go with the following prelude:
pub use crate::future::Future;
pub use crate::io::BufRead as _;
pub use crate::io::Read as _;
pub use crate::io::Seek as _;
pub use crate::io::Write as _;
pub use crate::stream::Stream;
pub use crate::time::Timeout as _;
I don't think we should add or remove anything from this list, but I am unsure about which traits should be anonymously imported (as _) and which shouldn't. I feel like fully importing traits like Read could be a potential source of conflicts if the user uses std::io::Read in their code at the same time.
What about the Stream trait? Should that one be anonymous or not? In a way, it is fundamental just like Iterator, which is even in the std prelude. So maybe we should import it fully.
But I'm still not 100% decided...
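A std-only sketch of what an anonymous `as _` import does: the trait's methods become callable, but the trait's name is not brought into scope, so it cannot clash with a same-named trait such as std::io::Read (the `Describe` trait below is hypothetical, for illustration only):

```rust
mod traits {
    pub trait Describe {
        fn describe(&self) -> String;
    }

    impl Describe for i32 {
        fn describe(&self) -> String {
            format!("the number {}", self)
        }
    }
}

// Anonymous import: `describe` is usable below, but the name
// `Describe` stays free for e.g. a same-named trait from elsewhere.
use traits::Describe as _;

fn main() {
    println!("{}", 42.describe());
}
```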
Hey!
Thanks for this awesome initiative :)
I can build and run separate binaries that link to async-std as an extern crate (like the examples in the readme), but I can't run the examples from within async-std.
johannes@jm:~/dev/async-test % rustc -V
rustc 1.39.0-nightly (53df91a9b 2019-08-27)
johannes@jm:~/dev/async-test % cargo -V
cargo 1.39.0-nightly (3f700ec43 2019-08-19)
johannes@jm:~/dev/async-test % uname -a
FreeBSD jm 13.0-CURRENT FreeBSD 13.0-CURRENT r349834+a82ad980c917(dell-fix_iichid-evdev) DELL-NODEBUG amd64
This is what I get
johannes@jm:~/dev/async-std % cargo run --example hello-world
Compiling libnghttp2-sys v0.1.2
Compiling openssl-sys v0.9.49
Compiling backtrace-sys v0.1.31
Compiling mime_guess v2.0.1
Compiling mime v0.3.13
Compiling tempdir v0.3.7
Compiling proc-macro-hack v0.5.9
Compiling rand_chacha v0.2.1
error: failed to run custom build command for `libnghttp2-sys v0.1.2`
Caused by:
process didn't exit successfully: `/usr/home/johannes/dev/async-std/target/debug/build/libnghttp2-sys-4c6f3caedee97f80/build-script-build` (signal: 11, SIGSEGV: invalid memory reference)
warning: build failed, waiting for other jobs to finish...
error: failed to run custom build command for `backtrace-sys v0.1.31`
Caused by:
process didn't exit successfully: `/usr/home/johannes/dev/async-std/target/debug/build/backtrace-sys-78dbde0feafa0d65/build-script-build` (signal: 11, SIGSEGV: invalid memory reference)
--- stdout
cargo:rustc-cfg=rbt
warning: build failed, waiting for other jobs to finish...
error: failed to run custom build command for `openssl-sys v0.9.49`
Caused by:
process didn't exit successfully: `/usr/home/johannes/dev/async-std/target/debug/build/openssl-sys-7d3ff8c9464a6e09/build-script-main` (signal: 11, SIGSEGV: invalid memory reference)
warning: build failed, waiting for other jobs to finish...
error: build failed
Any idea what this might depend on?
let (reader, writer) = &mut (&stream, &stream);
io::copy(reader, writer).await?;
The text says " [cargo add][cargo-add] " (instead of being the link).
client_writer uses Arc but doesn't have a use for Arc.
With #125 out, it's probably worth looking at which other parts of std::iter we can port to async_std::stream. This issue is intended to track what's left for us to port.
from_fn
repeat_with
successors
DoubleEndedStream
ExactSizeStream
Extend
FusedStream
Product
Sum
Stream::all
Stream::any
Stream::by_ref
Stream::chain
Stream::cloned
Stream::cmp
Stream::collect
Stream::copied
Stream::count
Stream::cycle
Stream::enumerate
Stream::eq
Stream::filter
Stream::filter_map
Stream::find
Stream::find_map
Stream::flat_map
Stream::flatten
Stream::fold
Stream::for_each
Stream::fuse
Stream::ge
Stream::gt
Stream::inspect
Stream::last
Stream::le
Stream::lt
Stream::map
Stream::max
Stream::max_by
Stream::max_by_key
Stream::min
Stream::min_by
Stream::min_by_key
Stream::ne
Stream::nth
Stream::partial_cmp
Stream::partition
Stream::peekable
-> wip #366
Stream::position
Stream::product
Stream::rev
Stream::rposition
Stream::scan
Stream::size_hint
Stream::skip
Stream::skip_while
Stream::step_by
Stream::sum
Stream::take
Stream::take_while
Stream::try_fold
Stream::try_for_each
Stream::unzip
Stream::zip
IntoStream impls:
Currently not possible. See #129 (comment)
FromStream impls:
FromStream<()> for ()
FromStream<char> for String
FromStream<String> for String
FromStream<&'a char> for String
FromStream<&'a str> for String
FromStream<T> for Cow<'a, [T]> where T: Clone
FromStream<A> for Box<[A]>
FromStream<A> for VecDeque<A>
FromStream<Result<A, E>> for Result<V, E> where V: FromStream<A>
FromStream<Option<A>> for Option<V> where V: FromStream<A>
FromStream<(K, V)> for BTreeMap<K, V> where K: Ord
FromStream<(K, V)> for HashMap<K, V, S> where K: Eq + Hash, S: BuildHasher + Default
FromStream<T> for BinaryHeap<T> where T: Ord
FromStream<T> for BTreeSet<T> where T: Ord
FromStream<T> for LinkedList<T>
FromStream<T> for Vec<T>
FromStream<T> for HashSet<T, S> where T: Eq + Hash, S: BuildHasher + Default
DoubleEndedStream
DoubleEndedStream::poll_next_back
DoubleEndedStream::next_back
DoubleEndedStream::nth_back
DoubleEndedStream::rfind
DoubleEndedStream::rfold
DoubleEndedStream::try_rfold
First, thanks a lot for this great library and its accompanying documentation!
This description of std::future::Future from the book doesn't sound quite correct:
In some sense, the std::future::Future can be seen as a minimal subset of futures::future::Future
https://book.async.rs/overview/std-and-library-futures.html
Actually both traits are the same. It's just a re-export:
The current version of the Handling Disconnects section of the book states:
In the shutdown case we use match void {} as a statically-checked unreachable!().
Please explain the significance of this statement. In what sense is this a statically checked version of unreachable!?
This is the first time I am encountering this pattern and I am confused by the statement.
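A minimal sketch of the pattern: `Void` is an enum with no variants, so a value of it can never be constructed, and `match void {}` compiles with zero arms. The compiler proves the arm is unreachable at compile time, whereas `unreachable!()` only panics at runtime if the assumption turns out wrong:

```rust
// An uninhabited type: no variants means no values can exist.
enum Void {}

fn unwrap_never<T>(res: Result<T, Void>) -> T {
    match res {
        Ok(value) => value,
        // `void` has no possible values, so this inner match needs no
        // arms. If `Void` ever gained a variant, this would become a
        // compile error rather than a latent runtime panic.
        Err(void) => match void {},
    }
}

fn main() {
    let res: Result<i32, Void> = Ok(7);
    println!("{}", unwrap_never(res));
}
```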
msg gets .trim() called on it once when first sliced, then again before the .to_string() call.
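Since `trim` returns a subslice of the original string and is idempotent, the second call just rescans the string for no benefit; a std-only sketch:

```rust
fn main() {
    let msg = "  hello  ";

    // Both produce the same subslice; the second `trim` is redundant.
    let once = msg.trim();
    let twice = msg.trim().trim();

    println!("{:?} {:?} equal: {}", once, twice, once == twice);
}
```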
Here is the tasks example:
use async_std::fs::File;
use async_std::task;

async fn read_file(path: &str) -> Result<String, io::Error> {
    let mut file = File::open(path).await?;
    let mut contents = String::new();
    file.read_to_string(&mut contents).await?;
    contents
}

fn main() {
    let reader_task = task::spawn(async {
        let result = read_file("data.csv").await;
        match result {
            Ok(s) => println!("{}", s),
            Err(e) => println!("Error reading file: {:?}", e),
        }
    });
    println!("Started task!");
    task::block_on(reader_task);
    println!("Stopped task!");
}
But it does not work; File seemingly does not have a read_to_string method.
Here is what I had to do to get it to work:
async fn read_file(path: &str) -> io::Result<String> {
    //let mut file = File::open(path).await?;
    fs::read_to_string(path).await
}

fn main() {
    let reader_task = task::spawn(async {
        let result = read_file("data.csv").await;
        match result {
            Ok(s) => println!("{}", s),
            Err(e) => println!("Error reading file: {:?}", e),
        }
    });
    println!("Started task!");
    task::block_on(reader_task);
    println!("Stopped task!");
}
Let's use cargo-deadlinks with the following command on Travis:
cargo deadlinks --check-http
This currently fails with some errors which are due to re-exports from std:
Found invalid urls in /home/stjepang/work/async-std/target/doc/async_std/io/type.Result.html:
Linked file at path /home/stjepang/work/async-std/target/doc/async_std/result/enum.Result.html does not exist!
Found invalid urls in /home/stjepang/work/async-std/target/doc/async_std/io/struct.Error.html:
Linked file at path /home/stjepang/work/async-std/target/doc/std/io/struct.Error.html does not exist!
Linked file at path /home/stjepang/work/async-std/target/doc/std/io/enum.ErrorKind.html does not exist!
Linked file at path /home/stjepang/work/async-std/target/doc/async_std/ffi/struct.NulError.html does not exist!
The way we resolve these errors is by writing shim docs for re-exports from std, similarly to how we did that here:
async-std/src/os/unix/net/mod.rs
Line 14 in addda39
The idea is that under the docs feature flag we generate "fake" docs linking to async-std's types, but otherwise re-export the real types from std.
When I add async-std as a dependency and try to compile with the beta channel I get:
error[E0554]: `#![feature]` may not be used on the beta release channel
--> /home/chris/.cargo/registry/src/github.com-1ecc6299db9ec823/async-std-0.99.3/src/lib.rs:30:1
|
30 | #![feature(async_await)]
| ^^^^^^^^^^^^^^^^^^^^^^^^
cargo-generate is great; we should ship a template for an async-std app with:
src/main.rs
TCP/UDP listeners and streams take A: std::net::ToSocketAddrs. Some ToSocketAddrs impls will resolve domain names. However, ToSocketAddrs is not futures-aware, so DNS lookups are synchronous.
Possible solutions:
Replace ToSocketAddrs with an async version (probably more efficient)
Timeouts are confusing. @spacejam recently wrote an example that contains the following piece of code:
stream
    .read_to_end(&mut buf)
    .timeout(Duration::from_secs(5))
    .await?;
The problem here is that we need two ?s after .await and it's easy to forget that.
I think the confusing part is that the .timeout() combinator looks like it just transforms the future, in a similar vein to .map() or .and_then(), but it really does not! Instead, .timeout() bubbles the result of the future so that its type becomes Result<Result<_, io::Error>, TimeoutError>.
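The double-? requirement can be reproduced with plain nested `Result`s, independent of async; a std-only sketch using `String` errors in place of the hypothetical TimeoutError and io::Error layers:

```rust
// Mirrors the shape produced by `.timeout(...)`: the outer Result is
// the timeout outcome, the inner one is the I/O outcome.
fn read_with_timeout() -> Result<Result<String, String>, String> {
    Ok(Ok(String::from("data")))
}

fn run() -> Result<String, String> {
    // One `?` per Result layer: the first unwraps the timeout outcome,
    // the second the I/O outcome. Forgetting one leaves you holding a
    // Result where you expected the value.
    let data = read_with_timeout()??;
    Ok(data)
}

fn main() {
    println!("{}", run().unwrap());
}
```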
Perhaps it would be less confusing if timeout() were a free-standing function in the future module rather than a method on the time::Timeout extension trait?
future::timeout(
    stream.read_to_end(&mut buf),
    Duration::from_secs(5),
)
.await??;
This timeout() function would stand alongside ready(), pending(), and maybe some other convenience functions in the future module.
Here's another idea. What if we had an io::timeout() function that resolves to a Result<_, io::Error> instead of bubbling the results? Then we could write the following with a single ?:
io::timeout(
    stream.read_to_end(&mut buf),
    Duration::from_secs(5),
)
.await?;
Now it's also more obvious that we're setting a timeout for an I/O operation and not for an arbitrary future.
In addition to that, perhaps we could delete the whole time module? I'm not really a fan of it because it looks nothing like std::time, and I generally dislike extension traits like Timeout.
Note that we already have a time-related function, task::sleep(), which is not placed in the time module, so we probably shouldn't worry about grouping everything time-related into the time module. I think it's okay if we have io::timeout() and future::timeout().
Finally, here's a really conservative proposal. Let's remove the whole time module and only have io::timeout(). A more generic function for timeouts can then be left for later design work. I think I prefer this option the most.
We should make a pass over the docs soon, making sure they are free of warnings.
Hi,
And kudos for this very promising project.
I'm currently trying to replace all instances of futures.rs and tokio with async-std.
However, hyper requires streams that implement the tokio::io::AsyncRead and AsyncWrite traits.
Given a stream obtained from async-std, such as a TcpStream, how can I get something that implements tokio's traits?
Thanks again for async-std!
Currently, we have a Stream API implementation. For async processing of incoming data, we can add a Flow API.
Imagine I have a stream like this:
let mut k = stream::cycle(vec![1, 2, 3]);
I want to dispatch events to different processing stages. Like I want to add all the 1s, multiply all the 2s, etc. There is no convenient way to do that except initially creating different streams for them. This is what I call partition.
Then I want to merge these and create a unified stream, process things in this merged stream, and continue doing it. This is what I call merge and priority-selection-based merge.
I want to create fully replicated streams from a single stream. That I call broadcast.
Design document with active discussion:
https://paper.dropbox.com/doc/async-process--Ae7VXYrJ4sSucoYlMC7XYBQvAg-Fbg2Jq7UbhqihtnWpc1EY
External process execution is clearly asynchronous. This API should make it possible to both asynchronously handle IO streams and also just wait for them to finish their execution.
There is already a BufReader, so people would expect to have a BufWriter too. A lazy implementation is to simply wrap BufWriter from futures-0.3.
To support executor agnostic libraries it would be useful to have a handle that can be passed to them if they require the ability to spawn new tasks.
Equivalent to https://doc.rust-lang.org/std/iter/trait.Iterator.html#method.size_hint. I don't think this needs to be marked async. This would've been useful to calculate the right pre-alloc size for calling collect into a Vec in #125. Thanks!
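The synchronous analogue shows how `size_hint` lets a collector pre-allocate; the proposal is the same signature on Stream. A std-only sketch:

```rust
fn main() {
    // `map` preserves the exact length of the underlying range, so
    // both the lower and upper bounds are known.
    let iter = (0..100).map(|x| x * 2);
    let (lower, upper) = iter.size_hint();
    println!("lower: {}, upper: {:?}", lower, upper);

    // A collect-like helper can use the lower bound to pre-allocate
    // and avoid repeated reallocation while pushing items.
    let mut out = Vec::with_capacity(lower);
    for item in iter {
        out.push(item);
    }
    println!("collected {} items", out.len());
}
```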
Instead of re-exporting the types Empty, Sink, and Cursor from std::io into async_std::io, I believe we should create our own equivalents of those types.
The problem with these is that they implement synchronous and asynchronous traits at the same time, and I think that is a mistake. My intuition says that's wrong because types should be either synchronous or asynchronous, never both at the same time.
As a more concrete example, consider the fact that Sink implements two methods named write_all, one coming from std::io::Write and the other coming from async_std::io::Write. Here's an attempt at calling write_all while both traits are imported:
use std::io::prelude::*;
use async_std::io;
use async_std::prelude::*;
use async_std::task;

fn main() -> io::Result<()> {
    task::block_on(async {
        let s = io::sink();
        s.write_all(b"hello world").await?;
        Ok(())
    })
}
This errors out with:
error[E0034]: multiple applicable items in scope
--> examples/foo.rs:10:11
|
10 | s.write_all(b"hello world").await?;
| ^^^^^^^^^ multiple `write_all` found
|
= note: candidate #1 is defined in an impl of the trait `std::io::Write` for the type `std::io::Sink`
= help: to disambiguate the method call, write `std::io::Write::write_all(s, b"hello world")` instead
= note: candidate #2 is defined in an impl of the trait `async_std::io::write::Write` for the type `_`
= help: to disambiguate the method call, write `async_std::io::write::Write::write_all(s, b"hello world")` instead
Having types that implement both synchronous and asynchronous traits at the same time is never a convenience, I think; it can only be a nuisance, like in this example.
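The collision can be reproduced with any two traits that define a method with the same name; fully qualified syntax is the standard disambiguation. A sketch with two hypothetical traits standing in for std::io::Write and async_std::io::Write:

```rust
trait SyncGreet {
    fn greet(&self) -> String;
}

trait AsyncishGreet {
    fn greet(&self) -> String;
}

struct Both;

impl SyncGreet for Both {
    fn greet(&self) -> String {
        String::from("sync")
    }
}

impl AsyncishGreet for Both {
    fn greet(&self) -> String {
        String::from("asyncish")
    }
}

fn main() {
    let b = Both;
    // `b.greet()` would be rejected with E0034 ("multiple applicable
    // items in scope"). Fully qualified syntax picks one explicitly:
    println!("{}", SyncGreet::greet(&b));
    println!("{}", AsyncishGreet::greet(&b));
}
```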
cc @yoshuawuyts
Long-running blocking requests are panicking the blocking thread pool because the max thread count on OS X is not 10_000; it is 4096. The current method panics.
Solution: make the max thread count variable, based on the errors coming up from the thread pool while spawning dynamic threads.
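std already exposes a fallible spawn via `thread::Builder`, which returns an `io::Result` instead of panicking, so a pool could back off when it hits the OS limit. A minimal std-only sketch of the proposed error-driven approach:

```rust
use std::thread;

fn main() {
    // Unlike `thread::spawn`, `Builder::spawn` surfaces OS errors
    // (e.g. hitting the platform thread limit) as an `io::Result`.
    let result = thread::Builder::new()
        .name(String::from("worker"))
        .spawn(|| 2 + 2);

    match result {
        Ok(handle) => println!("joined: {}", handle.join().unwrap()),
        // A pool could stop growing here instead of panicking.
        Err(e) => println!("spawn failed: {}", e),
    }
}
```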
Hi! I was using async-std in my cacache library. One of the things that I'm trying to do is implement AsyncWrite for it, but it turns out I'm using a tmpfile library that does sync I/O. Because of that, I pretty much copy-pasted the AsyncWrite impl from async-std's File, and it turns out that with the latest version, task::blocking is now private, so I can't just... do that. (To clarify, I was using async-std pre-release, when I needed to use async-pool for this, and I just started porting the code over tonight when I ran into this.)
For the sake of compatibility, it would be nice to have this available. My code that's doing this is over here, in case there turns out to be a Better Way™ to do what I'm trying to do that hopefully doesn't involve reimplementing tmpfile logic: https://github.com/zkat/cacache-rs/blob/zkat/async/src/content/write.rs#L147-L256
Cheers!
There are quite a few functions in the book that run a while loop inside and are supposed to be called via task::spawn, e.g. server, client, client_writer. I think it makes sense to extend those names to explicitly set the expectation of a loop inside, like server_loop, client_loop, client_writer_loop. Would be happy to provide a PR if that sounds like a helpful change.
Another thought about naming: the client function technically is not about a "client", it's about a "connection". Maybe it should be connection_loop? That makes disconnect handling easier to read.
Our CI times regressed heavily by building mdbook. We should move to downloading it instead: https://github.com/rust-lang-nursery/mdBook/releases
This would also make implementing things like #77 much easier.
In the 2016 futures announcement post, the join combinator was shown as something that would make choosing between several futures easy.
In rust-lang/futures-rs#1215 and beyond this was changed to several methods: join, join1, join2, join3, plus the join! and try_join! macros (proc macros since 0.3.0-alpha.18, so statements can be written inline).
Future::join
It still seems incredibly useful to be able to join multiple futures together, and having a single combinator to do that seems like the simplest API, even if the resulting code might not look completely symmetrical. I propose we add Future::join:
use async_std::future;

let a = future::ready(1);
let b = future::ready(2);
let pair = a.join(b);
assert_eq!(pair.await, (1, 2));
Future::try_join
The futures-preview library also exposes a try_join method. This is useful when you want to unwrap two results. Internally it uses TryFuture as a reference, which means this method should only exist on futures where Output = Result<T, E>, and I'm not entirely sure if that's feasible. However, if it is, it might be convenient to also expose:
use async_std::future;

let a = future::ready(Ok::<i32, i32>(1));
let b = future::ready(Ok::<i32, i32>(2));
let pair = a.try_join(b);
assert_eq!(pair.await, Ok((1, 2)));
Future::join_all
The third join combinator present is Future::join_all. The docs don't make a big sell on it (inefficient, the set can't be modified after polling has started, prefer futures_unordered), but it's probably still worth mentioning. I don't think we should add this combinator, but instead point people to use fold:
don't do this:
use async_std::future::join_all;

async fn foo(i: u32) -> u32 { i }

let futures = vec![foo(1), foo(2), foo(3)];
assert_eq!(join_all(futures).await, [1, 2, 3]);

do this instead:
let futures = vec![foo(1), foo(2), foo(3)];
let futures = futures.fold(|p, n| p.join(n));
assert_eq!(futures.await, [1, 2, 3]);
note: I haven't tested this, but in general I don't think we need to worry about this case too much, as handling the unordered case seems much more important and would cover this too.