sergejjurecko / mio_httpc
mio based async and sync http client
License: Apache License 2.0
I'm getting a ResponseTooBig error. I could adjust max_response, but then my memory usage would increase.
Could I instead 'stream' to a file, in chunks sized relative to disk IO and network IO speed?
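For illustration, the file-sink half of this question can be sketched with the standard library alone (save_chunks is a hypothetical helper, not a mio_httpc API):

```rust
use std::fs::File;
use std::io::{BufWriter, Write};
use std::path::Path;

// Illustrative sketch, not mio_httpc's actual API: append each received
// body chunk to a file as it arrives, so peak memory use is bounded by
// the chunk size rather than the full response size.
fn save_chunks(path: &Path, chunks: &[&[u8]]) -> std::io::Result<()> {
    let mut out = BufWriter::new(File::create(path)?);
    for chunk in chunks {
        // In a real client loop each chunk would come off the socket.
        out.write_all(chunk)?;
    }
    out.flush()
}
```

With a sink like this, max_response could stay small because only one chunk is ever held in memory at a time.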
The flate2 crate is significantly slower than the miniz_oxide crate, and most HTTP clients use miniz_oxide. mio_httpc already depends on miniz_oxide transitively via the failure crate; it would be nice to switch to it for HTTP stream decoding too.
The only thing missing from miniz_oxide is gzip header decoding, but that is provided by the flate2 crate that wraps it.
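For scale, the missing piece is small: the gzip header format (RFC 1952) is a fixed 10 bytes plus a few optional flag-gated fields. A sketch of computing the header length (a hypothetical helper, not miniz_oxide or flate2 code):

```rust
// RFC 1952: a gzip member starts with a fixed 10-byte header
// (magic 0x1f 0x8b, method 8 = deflate, flags, mtime(4), xfl, os),
// optionally followed by flag-gated fields. Returns the total header
// length, or None if the input is not a valid gzip header prefix.
fn gzip_header_len(data: &[u8]) -> Option<usize> {
    if data.len() < 10 || data[0] != 0x1f || data[1] != 0x8b || data[2] != 8 {
        return None;
    }
    let flags = data[3];
    let mut len = 10;
    if flags & 0x04 != 0 {
        // FEXTRA: 2-byte little-endian length, then that many payload bytes.
        let xlen = u16::from_le_bytes([*data.get(len)?, *data.get(len + 1)?]) as usize;
        len += 2 + xlen;
    }
    if flags & 0x08 != 0 {
        // FNAME: zero-terminated original file name.
        len += data.get(len..)?.iter().position(|&b| b == 0)? + 1;
    }
    if flags & 0x10 != 0 {
        // FCOMMENT: zero-terminated comment.
        len += data.get(len..)?.iter().position(|&b| b == 0)? + 1;
    }
    if flags & 0x02 != 0 {
        // FHCRC: 2-byte CRC16 of the header.
        len += 2;
    }
    if data.len() < len {
        return None;
    }
    Some(len)
}
```

Everything after the header is a raw deflate stream, which miniz_oxide already handles.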
The parsing of /etc/resolv.conf to determine a nameserver isn't working; because parsing fails, this line is never executed:
Line 280 in 1804900
The file /etc/resolv.conf contains lines like the one below to configure a nameserver:
nameserver 127.0.0.53
The IP address is currently being parsed into a SocketAddr struct (inferred to be that type from the linked line of source above). This parsing always fails due to the lack of a :port after the IP address.
Parsing operates correctly if the inferred type is IpAddr and a fixed port number of 53 is provided, as in the replacement line below:
srvs.push(SocketAddr::new(adr, 53));
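A self-contained sketch of the corrected parse (parse_nameservers is an illustrative helper, not the crate's actual function):

```rust
use std::net::{IpAddr, SocketAddr};

// Parse "nameserver <ip>" lines from resolv.conf content.
// "127.0.0.53" parses as an IpAddr but NOT as a SocketAddr (no :port),
// so we parse the bare IP and attach the standard DNS port 53.
fn parse_nameservers(resolv_conf: &str) -> Vec<SocketAddr> {
    let mut srvs = Vec::new();
    for line in resolv_conf.lines() {
        let mut parts = line.split_whitespace();
        if parts.next() == Some("nameserver") {
            if let Some(addr) = parts.next() {
                if let Ok(adr) = addr.parse::<IpAddr>() {
                    srvs.push(SocketAddr::new(adr, 53));
                }
            }
        }
    }
    srvs
}
```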
Is it possible to add a code example for WebSocket server implementation?
I don't know about other OSes, but on Windows an Instant - Duration subtraction panics if the Duration is greater than the time since Windows started.
This subtraction was used here:
Line 20 in 739cec6
This line has been causing a panic when using mio_httpc during the first five minutes after Windows starts.
A PR for this issue is coming soon. It might also be wise to audit other Instant/Duration uses in case there's another problem like this.
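A defensive pattern for this class of bug (an assumption about the eventual fix, not necessarily the PR's exact change) is Instant::checked_sub, which reports the underflow as None instead of panicking:

```rust
use std::time::{Duration, Instant};

// `now - d` panics when the result would precede the platform clock's
// epoch (on Windows, boot time). `checked_sub` returns None in that
// case, so the caller can fall back instead of crashing.
fn instant_minus(now: Instant, d: Duration) -> Option<Instant> {
    now.checked_sub(d)
}
```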
Many websites, e.g. http://globalvision2000.com, take over a minute to download with mio_httpc, while curl completes in just a few seconds.
curl:
0.01user 0.01system 0:02.04elapsed 1%CPU (0avgtext+0avgdata 12728maxresident)k
0inputs+0outputs (0major+834minor)pagefaults 0swaps
mio_httpc:
0.00user 0.02system 1:07.08elapsed 0%CPU (0avgtext+0avgdata 4244maxresident)k
0inputs+0outputs (0major+277minor)pagefaults 0swaps
There are at least 70,000 such websites in the top million, although I didn't test the entire million because this issue made the test unbearably slow. I'm using the Tranco list generated on the 3rd of February.
Archive with all the occurrences I've encountered: mio_httpc-timeouts.tar.gz
Exact code used for testing mio_httpc, with Cargo.lock and all: https://github.com/Shnatsel/rust-http-clients-smoke-test/blob/f206362f2e81521bbefb84007cdd25242f6db590/mio_httpc-smoke-test/src/main.rs
This issue is distinct from #25 because it still happens in v0.9.3.
On some websites, e.g. http://yakitoriya.ru , mio_httpc panics with the following error:
thread 'main' panicked at 'index out of bounds: the len is 5 but the index is 5', /home/shnatsel/.cargo/registry/src/github.com-1ecc6299db9ec823/mio_httpc-0.9.3/src/types.rs:285:50
The error message can also point to index 6 rather than 5.
curl and Firefox work fine.
There are at least 43 such websites in the top million (I'm using the Tranco list generated on the 3rd of February).
Archive with all occurrences (although I didn't run the test on the entire million): mio_httpc-oob-panic.tar.gz
Code used for testing: https://github.com/Shnatsel/rust-http-clients-smoke-test/blob/f206362f2e81521bbefb84007cdd25242f6db590/mio_httpc-smoke-test/src/main.rs
On some websites, e.g. http://trueleafmarket.com, mio_httpc fails with the following error:
Httparse error: too many headers
curl works fine for the same websites. reqwest doesn't seem to have this issue either, despite also using the httparse crate.
There are at least 7295 such websites in the top million (I'm using the Tranco list generated on the 3rd of February).
Archive with all occurrences (although I didn't run the test on the entire million): mio_httpc-too-many-headers.tar.gz
Code used for testing: https://github.com/Shnatsel/rust-http-clients-smoke-test/blob/f206362f2e81521bbefb84007cdd25242f6db590/mio_httpc-smoke-test/src/main.rs
According to the documentation on call (and simple_call), "CallBuilder is invalid after this call and will panic if used again". If using the CallBuilder again should be an invalid operation, why not make call a "by-value" function? That way, it's impossible for the user to use the CallBuilder struct again.
This would also allow you to remove the Option part of the cb field as well:
pub fn call(self, httpc: &mut Httpc, poll: &Poll) -> Result<Call> {
    httpc.call::<CONNECTOR>(self.cb, poll)
}
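The by-value pattern proposed here can be shown in isolation (Builder is a stand-in type, not mio_httpc's CallBuilder):

```rust
struct Builder {
    url: String,
}

impl Builder {
    fn new(url: &str) -> Self {
        Builder { url: url.to_string() }
    }

    // Taking `self` by value consumes the builder: a second `.call()`
    // on the same builder is a compile error rather than a runtime
    // panic, and no Option/None sentinel is needed in the struct.
    fn call(self) -> String {
        format!("calling {}", self.url)
    }
}
```

After `let b = Builder::new("x"); b.call();`, any further use of `b` is rejected by the borrow checker.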
Kept debugging this up to the point where I had a Cargo.toml with:
[package]
name = "projectname"
version = "0.1.0"
authors = ["Samuel Marks @SamuelMarks"]
build = "build.rs" # your https://github.com/SergejJurecko/mio_httpc/blob/5e82852/build.rs
edition = "2018"
[dependencies]
mio = "0.6.16"
mio_httpc = "0.8.3"
crypto-hash = "0.3.1"
ring = "0.13.2"
webpki = "0.18.1"
webpki-roots = "0.15"
rustls = {version="0.14", features = ["dangerous_configuration"]}
openssl = { version = "0.10.*", features = ["v102", "v110"] }
native-tls = "0.2"
[target.'cfg(any(target_os = "macos", target_os = "ios"))'.dependencies]
core-foundation = "0.6"
core-foundation-sys = "0.6"
Yet whenever I run this function, it dies with the No TLS error:
pub fn download() {
    let poll = Poll::new().unwrap();
    let urls = vec!["https://github.com/SergejJurecko/mio_httpc"];
    let cfg = if let Ok(cfg) = HttpcCfg::certs_from_path(".") {
        cfg
    } else {
        Default::default()
    };
    let mut htp = Httpc::new(10, Some(cfg));
    for url in urls {
        println!("Get {}", url);
        let call = CallBuilder::get()
            .url(url)
            .expect("Invalid url")
            .timeout_ms(10000)
            .https() // also tried without this line
            .simple_call(&mut htp, &poll)
            .expect("Call start failed");
        do_call(&mut htp, &poll, call);
        println!("Open connections={}", htp.open_connections());
    }
}
Hello,
I've been briefly investigating the bug reported here on reddit.
I added some printf debugging to observe what is happening:
diff --git a/src/call.rs b/src/call.rs
index ef6ed0b..5a95bb9 100644
--- a/src/call.rs
+++ b/src/call.rs
@@ -391,6 +391,7 @@ impl CallImpl {
con.reg(cp.poll, Ready::writable())?;
return Ok(SendState::Wait);
} else {
+ println!("ie {:?}", ie);
return Err(::Error::Closed);
}
}
@@ -424,6 +425,7 @@ impl CallImpl {
}
}
_ => {
+ println!("io_ret {:?}", io_ret);
return Err(::Error::Closed);
}
}
@@ -444,6 +446,7 @@ impl CallImpl {
// }
loop {
io_ret = con.read(&mut buf[orig_len..]);
+ println!("io_ret {:?}", io_ret);
match &io_ret {
&Err(ref ie) => {
if ie.kind() == IoErrorKind::Interrupted {
@@ -474,6 +477,7 @@ impl CallImpl {
}
match io_ret {
Ok(0) => {
+ println!("read zero");
return Err(::Error::Closed);
}
Ok(bytes_rec) => {
@@ -583,4 +587,4 @@ impl CallImpl {
}
}
}
-}
\ No newline at end of file
+}
This yields the following output:
$ RUST_BACKTRACE=1 CARGO_LOG=debug cargo run --example get --features rustls -- "https://edition.cnn.com"
Finished dev [unoptimized + debuginfo] target(s) in 0.0 secs
Running `target/debug/examples/get 'https://edition.cnn.com'`
Get https://edition.cnn.com
io_ret Err(Error { repr: Os { code: 11, message: "Resource temporarily unavailable" } })
io_ret Err(Error { repr: Os { code: 11, message: "Resource temporarily unavailable" } })
io_ret Err(Error { repr: Os { code: 11, message: "Resource temporarily unavailable" } })
io_ret Err(Error { repr: Os { code: 11, message: "Resource temporarily unavailable" } })
io_ret Err(Error { repr: Os { code: 11, message: "Resource temporarily unavailable" } })
io_ret Err(Error { repr: Os { code: 11, message: "Resource temporarily unavailable" } })
io_ret Err(Error { repr: Os { code: 11, message: "Resource temporarily unavailable" } })
io_ret Err(Error { repr: Os { code: 11, message: "Resource temporarily unavailable" } })
io_ret Err(Error { repr: Os { code: 11, message: "Resource temporarily unavailable" } })
io_ret Err(Error { repr: Os { code: 11, message: "Resource temporarily unavailable" } })
io_ret Err(Error { repr: Os { code: 11, message: "Resource temporarily unavailable" } })
io_ret Ok(2759)
io_ret Err(Error { repr: Os { code: 11, message: "Resource temporarily unavailable" } })
io_ret Err(Error { repr: Os { code: 11, message: "Resource temporarily unavailable" } })
io_ret Err(Error { repr: Os { code: 11, message: "Resource temporarily unavailable" } })
io_ret Ok(11036)
io_ret Err(Error { repr: Os { code: 11, message: "Resource temporarily unavailable" } })
io_ret Ok(2589)
io_ret Ok(2759)
io_ret Err(Error { repr: Os { code: 11, message: "Resource temporarily unavailable" } })
io_ret Ok(5518)
io_ret Ok(2759)
io_ret Ok(5348)
io_ret Ok(0)
read zero
thread 'main' panicked at 'Call failed: Closed', /checkout/src/libcore/result.rs:906:4
So the failure is that we're interpreting rustls returning 0 as an EOF, which is probably fair enough. Rustls will report this when it has no more plaintext to read -- it's a temporary condition resolved by feeding rustls more ciphertext.
If I alter tls-api-rustls to attempt to feed rustls more ciphertext on these occasions, everything works. I'll submit a PR to that project.
Cheers,
Joe
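The distinction the report draws can be sketched as a tiny classification (illustrative types, not tls-api-rustls's actual API): a TLS layer handing back 0 plaintext bytes is not necessarily EOF.

```rust
#[derive(Debug, PartialEq)]
enum TlsRead {
    Plaintext(usize), // decrypted bytes delivered to the caller
    NeedCiphertext,   // 0 bytes now, but the TLS session is still open
    Eof,              // peer closed the session / connection
}

// A plaintext read of 0 only means EOF when the TLS session itself has
// been closed; otherwise the session just needs more ciphertext fed in
// from the socket before it can decrypt anything.
fn interpret(plaintext_read: usize, session_closed: bool) -> TlsRead {
    match (plaintext_read, session_closed) {
        (0, false) => TlsRead::NeedCiphertext,
        (0, true) => TlsRead::Eof,
        (n, _) => TlsRead::Plaintext(n),
    }
}
```

Treating the NeedCiphertext case as Eof is exactly the premature `Error::Closed` observed above.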
On some websites mio_httpc v0.9.4 fails to uphold its configured timeout of 40 seconds and gets killed by an external watchdog after 60 seconds in my tests.
There are 2045 such occurrences in my test of the top million websites (I'm using the Tranco list generated on the 3rd of February).
Archive with test tool output for all occurrences: mio_httpc-hangs.tar.gz
Code used for testing: https://github.com/Shnatsel/rust-http-clients-smoke-test/blob/8e3285a45e1d657744a2697ced1bd8461031fb86/mio_httpc-smoke-test/src/main.rs
Similar issues have been observed in other clients on long redirect chains; in that case the clients used to reset the timeout on every redirection.
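The fix other clients adopted for the redirect case can be sketched as a wall-clock deadline fixed once at call start, which redirects must not reset (illustrative, not mio_httpc's internals):

```rust
use std::time::{Duration, Instant};

// A deadline computed once when the call starts. Each redirect hop
// checks it but never recreates it, so a long redirect chain cannot
// stretch the total time past the configured timeout.
struct Deadline(Instant);

impl Deadline {
    fn new(timeout: Duration) -> Self {
        Deadline(Instant::now() + timeout)
    }
    fn expired(&self) -> bool {
        Instant::now() >= self.0
    }
}
```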
On some websites, e.g. http://usmilitaryfsbo.com, mio_httpc v0.9.4 fails with the following error:
IO error: failed to fill whole buffer
Firefox, curl and ureq work fine.
There are 22 such websites in the top million (I'm using the Tranco list generated on the 3rd of February).
Archive with all occurrences: mio_httpc-0.9.4-buffer-half-empty.tar.gz
Code used for testing: https://github.com/Shnatsel/rust-http-clients-smoke-test/blob/8e3285a45e1d657744a2697ced1bd8461031fb86/mio_httpc-smoke-test/src/main.rs
On some websites, e.g. http://lastgreatliar.com, mio_httpc v0.9.4 fails with the following error:
Error: Chunk was larger than configured CallBuilder::chunked_max_chunk. 262144
Firefox, curl and ureq work fine.
There are 13628 such websites in the top million (I'm using the Tranco list generated on the 3rd of February).
Archive with all occurrences: mio_httpc-0.9.4-cannot-chuck-the-chunk.tar.gz
Code used for testing: https://github.com/Shnatsel/rust-http-clients-smoke-test/blob/8e3285a45e1d657744a2697ced1bd8461031fb86/mio_httpc-smoke-test/src/main.rs
This makes me wonder, why is the maximum chunk size limited in the first place (as opposed to limiting the size of the entire response)? Is the entire chunk retained in memory?
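For context on the question: chunked framing itself doesn't require buffering a whole chunk. Only the chunk-size line needs parsing before body bytes can be streamed out; the rest of the chunk can be forwarded incrementally. A minimal sketch of that parse (per RFC 7230, the size is hexadecimal, with optional chunk extensions after ';'):

```rust
// Parse a chunk-size line such as "1a2b;ext=1" into the byte count.
// Extensions after ';' are ignored, as a lenient client should.
fn parse_chunk_size(line: &str) -> Option<u64> {
    let size_part = line.split(';').next()?.trim();
    u64::from_str_radix(size_part, 16).ok()
}
```

After this line, a decoder knows exactly how many body bytes follow and can hand them to the caller piece by piece; whether mio_httpc retains the whole chunk instead is a question only the author can answer.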
Hey,
I tried to run the examples and received the following errors:
$ RUST_BACKTRACE=1 cargo run --example get --features native -- "https://edition.cnn.com"
Finished dev [unoptimized + debuginfo] target(s) in 0.0 secs
Running `target/debug/examples/get 'https://edition.cnn.com'`
thread 'main' panicked at 'Call failed: Io(Error { repr: Os { code: 17, message: "File exists" } })', /checkout/src/libcore/result.rs:906:4
stack backtrace:
0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
at /checkout/src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
1: std::sys_common::backtrace::print
at /checkout/src/libstd/sys_common/backtrace.rs:68
at /checkout/src/libstd/sys_common/backtrace.rs:57
2: _ZN3std9panicking12default_hook28_$u7b$$u7b$closure$u7d$$u7d$17h6b0e028b9e47eeccE.llvm.C2EB5BE0
at /checkout/src/libstd/panicking.rs:381
3: _ZN3std9panicking12default_hook17h5c0ea1fecbcb832fE.llvm.C2EB5BE0
at /checkout/src/libstd/panicking.rs:397
4: std::panicking::rust_panic_with_hook
at /checkout/src/libstd/panicking.rs:577
5: _ZN3std9panicking11begin_panic17hd62d509897a218a3E.llvm.C2EB5BE0
at /checkout/src/libstd/panicking.rs:538
6: std::panicking::begin_panic_fmt
at /checkout/src/libstd/panicking.rs:522
7: rust_begin_unwind
at /checkout/src/libstd/panicking.rs:498
8: core::panicking::panic_fmt
at /checkout/src/libcore/panicking.rs:71
9: core::result::unwrap_failed
at /checkout/src/libcore/macros.rs:23
10: <core::result::Result<T, E>>::expect
at /checkout/src/libcore/result.rs:799
11: get::main
at examples/get.rs:32
12: __rust_maybe_catch_panic
at /checkout/src/libpanic_unwind/lib.rs:101
13: std::rt::lang_start
at /checkout/src/libstd/panicking.rs:459
at /checkout/src/libstd/rt.rs:58
14: main
15: __libc_start_main
16: _start
Same error for the websockets example. For streaming, I got this:
$ RUST_BACKTRACE=1 cargo run --example get_streaming --features native -- "https://edition.cnn.com"
Finished dev [unoptimized + debuginfo] target(s) in 0.0 secs
Running `target/debug/examples/get_streaming 'https://edition.cnn.com'`
thread 'main' panicked at 'Failed while sending entity already exists', examples/get_streaming.rs:34:24
stack backtrace:
0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
at /checkout/src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
1: std::sys_common::backtrace::print
at /checkout/src/libstd/sys_common/backtrace.rs:68
at /checkout/src/libstd/sys_common/backtrace.rs:57
2: _ZN3std9panicking12default_hook28_$u7b$$u7b$closure$u7d$$u7d$17h6b0e028b9e47eeccE.llvm.C2EB5BE0
at /checkout/src/libstd/panicking.rs:381
3: _ZN3std9panicking12default_hook17h5c0ea1fecbcb832fE.llvm.C2EB5BE0
at /checkout/src/libstd/panicking.rs:397
4: std::panicking::rust_panic_with_hook
at /checkout/src/libstd/panicking.rs:577
5: _ZN3std9panicking11begin_panic17hd62d509897a218a3E.llvm.C2EB5BE0
at /checkout/src/libstd/panicking.rs:538
6: std::panicking::begin_panic_fmt
at /checkout/src/libstd/panicking.rs:522
7: get_streaming::main
at examples/get_streaming.rs:34
8: __rust_maybe_catch_panic
at /checkout/src/libpanic_unwind/lib.rs:101
9: std::rt::lang_start
at /checkout/src/libstd/panicking.rs:459
at /checkout/src/libstd/rt.rs:58
10: main
11: __libc_start_main
12: _start
On some websites, e.g. http://humaxdigital.com, mio_httpc panics with the following error:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: InvalidQueryType { code: 3176 }', /home/shnatsel/.cargo/registry/src/github.com-1ecc6299db9ec823/mio_httpc-0.9.3/src/resolve/mod.rs:18:37
Checking the website with Firefox, it appears that it redirects to an invalid domain name.
There are at least 7 such websites in the top million (I'm using the Tranco list generated on the 3rd of February).
Archive with all occurrences (although I didn't run the test on the entire million): mio_httpc-unwrap-panic.tar.gz
Code used for testing: https://github.com/Shnatsel/rust-http-clients-smoke-test/blob/f206362f2e81521bbefb84007cdd25242f6db590/mio_httpc-smoke-test/src/main.rs
Flagging so you are aware; unfortunately I have not identified the root issue here. I have a program using mio_httpc that has been chugging along fine for quite some time, but today it experienced a crash on startup, which traced back to this line in the v0.8.x codebase:
Line 18 in d29d56f
I started using a VPN for work in the past few days, and when I disconnected the problem went away. I don't know enough about DNS to understand why that is, but figured I would flag it, as the code obviously did not contemplate that it was possible to fail there.
here is the backtrace:
thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: UnexpectedEOF', ~/.cargo/registry/src/github.com-1ecc6299db9ec823/mio_httpc-0.8.9/src/resolve/mod.rs:18:37
stack backtrace:
0: 0x55ad00671b7c - std::backtrace_rs::backtrace::libunwind::trace::h2ab374bc2a3b7023
at /rustc/9bb77da74dac4768489127d21e32db19b59ada5b/library/std/src/../../backtrace/src/backtrace/libunwind.rs:90:5
1: 0x55ad00671b7c - std::backtrace_rs::backtrace::trace_unsynchronized::h128cb5178b04dc46
at /rustc/9bb77da74dac4768489127d21e32db19b59ada5b/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5
2: 0x55ad00671b7c - std::sys_common::backtrace::_print_fmt::h5344f9eefca2041f
at /rustc/9bb77da74dac4768489127d21e32db19b59ada5b/library/std/src/sys_common/backtrace.rs:67:5
3: 0x55ad00671b7c - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h213003bc5c7acf04
at /rustc/9bb77da74dac4768489127d21e32db19b59ada5b/library/std/src/sys_common/backtrace.rs:46:22
4: 0x55ad0069a88c - core::fmt::write::h78bf85fc3e93663f
at /rustc/9bb77da74dac4768489127d21e32db19b59ada5b/library/core/src/fmt/mod.rs:1126:17
5: 0x55ad0066a2b5 - std::io::Write::write_fmt::he619515c888f21a5
at /rustc/9bb77da74dac4768489127d21e32db19b59ada5b/library/std/src/io/mod.rs:1667:15
6: 0x55ad00673610 - std::sys_common::backtrace::_print::hf706674f77848203
at /rustc/9bb77da74dac4768489127d21e32db19b59ada5b/library/std/src/sys_common/backtrace.rs:49:5
7: 0x55ad00673610 - std::sys_common::backtrace::print::hf0b6c7a88804ec56
at /rustc/9bb77da74dac4768489127d21e32db19b59ada5b/library/std/src/sys_common/backtrace.rs:36:9
8: 0x55ad00673610 - std::panicking::default_hook::{{closure}}::h2dde766cd83a333a
at /rustc/9bb77da74dac4768489127d21e32db19b59ada5b/library/std/src/panicking.rs:210:50
9: 0x55ad006731c7 - std::panicking::default_hook::h501e3b2e134eb149
at /rustc/9bb77da74dac4768489127d21e32db19b59ada5b/library/std/src/panicking.rs:227:9
10: 0x55ad00673e34 - std::panicking::rust_panic_with_hook::hc09e869c4cf00885
at /rustc/9bb77da74dac4768489127d21e32db19b59ada5b/library/std/src/panicking.rs:624:17
11: 0x55ad006738e0 - std::panicking::begin_panic_handler::{{closure}}::hc2c6d70142458fc8
at /rustc/9bb77da74dac4768489127d21e32db19b59ada5b/library/std/src/panicking.rs:521:13
12: 0x55ad00672024 - std::sys_common::backtrace::__rust_end_short_backtrace::had58f7c459a1cd6e
at /rustc/9bb77da74dac4768489127d21e32db19b59ada5b/library/std/src/sys_common/backtrace.rs:141:18
13: 0x55ad00673849 - rust_begin_unwind
at /rustc/9bb77da74dac4768489127d21e32db19b59ada5b/library/std/src/panicking.rs:517:5
14: 0x55ad00698381 - core::panicking::panic_fmt::hf443e5eeb6cc9eab
at /rustc/9bb77da74dac4768489127d21e32db19b59ada5b/library/core/src/panicking.rs:96:14
15: 0x55ad00698633 - core::result::unwrap_failed::h885d3f7beb571353
at /rustc/9bb77da74dac4768489127d21e32db19b59ada5b/library/core/src/result.rs:1617:5
16: 0x55ad00112daf - mio_httpc::resolve::dns_parse::he24239990e18e59e
17: 0x55ad000f70a2 - mio_httpc::connection::ConTable::event_send::h3259495a5535a252
18: 0x55ad0012409e - mio_httpc::httpc::HttpcImpl::call_send::hc9c3473a10bf6a8d
19: 0x55ad000ef43c - mio_httpc::api::simple_call::SimpleCall::perform::h5574351afdae176d
20: 0x55ad000446b1 - [redacted]
21: 0x55acffffb1c2 - [redacted]
22: 0x55ad00017fcf - std::sys_common::backtrace::__rust_begin_short_backtrace::hae0f040f8475adec
23: 0x55ad0000fd00 - core::ops::function::FnOnce::call_once{{vtable.shim}}::h377c60db722ec711
24: 0x55ad0067ab63 - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::h59eef3b9c8a82350
at /rustc/9bb77da74dac4768489127d21e32db19b59ada5b/library/alloc/src/boxed.rs:1636:9
25: 0x55ad0067ab63 - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::hb5bbe017c347469c
at /rustc/9bb77da74dac4768489127d21e32db19b59ada5b/library/alloc/src/boxed.rs:1636:9
26: 0x55ad0067ab63 - std::sys::unix::thread::Thread::new::thread_start::h62931528f61e35f5
at /rustc/9bb77da74dac4768489127d21e32db19b59ada5b/library/std/src/sys/unix/thread.rs:106:17
27: 0x7f8e45eff609 - start_thread
at /build/glibc-YbNSs7/glibc-2.31/nptl/pthread_create.c:477:8
28: 0x7f8e45cba293 - clone
at /build/glibc-YbNSs7/glibc-2.31/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:95
29: 0x0 - <unknown>
The get_streaming.rs example with the "https://api.hitbtc.com/api/2/public/currency" url returns an empty body, while browsers/hyper/reqwest return the full body.
This trait, along with a myriad of other #[derive]-able ones, is missing from some of the core structs.
Is there any reason not to derive them?
I've used mio_httpc on past projects successfully and am planning to on another one currently. As part of my planning/review, I looked at whether mio_httpc could be easily modified to accommodate the changes in mio's api between v0.6 and v0.7. After about an hour of trying to make the modifications myself, I gave up (at least temporarily). It seems that the changes are deep enough that it would require modifying the types that mio_httpc uses internally to represent a connection, among many other things.
I am writing to inquire whether you plan to modify mio_httpc in the near term to use mio's v0.7 api. The mio project authors have stated that mio v1.0 will have a similar api to v0.7. However, while the v0.7 api is arguably cleaner and simpler than v0.6, it brings no significant new functionality. There are no advertised performance advantages to v0.7, either.
My conclusion from this review is that my current project will need to settle on using mio v0.6, which is not what I had hoped for, but will certainly work fine. Any thoughts you have on this would be greatly appreciated. Thanks again for your work on this excellent library!
On some websites, e.g. http://eagle.ca, mio_httpc v0.9.4 fails with the following error:
Error parsing chunked transfer
Firefox, curl and ureq work fine.
There are 154 such websites in the top million (I'm using the Tranco list generated on the 3rd of February).
Archive with all occurrences: mio_httpc-error-parsing-chunked-transfer.tar.gz
Code used for testing: https://github.com/Shnatsel/rust-http-clients-smoke-test/blob/8e3285a45e1d657744a2697ced1bd8461031fb86/mio_httpc-smoke-test/src/main.rs
Fetching some URLs, notably http://google.com, times out. Other clients such as curl, ureq, reqwest, etc. work fine.
A self-contained program for reproducing the issue can be found here:
https://github.com/Shnatsel/rust-http-clients-smoke-test/blob/34ecea4ceacda1ac6134c4037c1bc9227254c87b/mio_httpc-smoke-test/src/main.rs
I'm trying to use mio_httpc to get an infinite MJPEG stream from a Hikvision camera. The camera's HTTP server doesn't send the Content-Length header, which makes sense since the stream is infinite. However, the mio_httpc streaming example (https://github.com/SergejJurecko/mio_httpc/blob/master/examples/get_streaming.rs) returns an empty body instead of returning chunks forever as expected.
That behavior seems incorrect: https://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.4 says in paragraph 5 that, barring the special cases, the length of an HTTP body with no Content-Length header is determined by the server closing the connection at the end of the body.
This simple change has fixed the issue for me: dvtomas@e100706 . I can turn it into a PR if wanted, but I'm not sure whether the change breaks something else; I've only tried it with my use case.
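The RFC rule cited above reduces to "deliver bytes until the peer closes the connection". A sketch of that loop (stream_until_close is an illustrative helper, not the crate's code), which also suits the MJPEG case because chunks are handed out as they arrive rather than after read-to-end:

```rust
use std::io::Read;

// With no Content-Length and no chunked encoding, RFC 7230 §3.3.3
// (formerly RFC 2616 §4.4) says the body runs until the server closes
// the connection. Deliver data incrementally; read() == 0 marks EOF.
fn stream_until_close<R: Read, F: FnMut(&[u8])>(
    conn: &mut R,
    mut on_chunk: F,
) -> std::io::Result<()> {
    let mut buf = [0u8; 4096];
    loop {
        let n = conn.read(&mut buf)?;
        if n == 0 {
            return Ok(()); // server closed: end of body
        }
        on_chunk(&buf[..n]);
    }
}
```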
Hi, I'm making an HTTP request to a wrong address, let's say wrongexample.com. It gets Cref(10), then it gets stuck resolving, and in this loop:
for cref in self.httpc.timeout().into_iter() { println!("{:?}", &cref) }
after a while I start to see Cref(11). Due to this error I can't shut down this stuck HTTP connection.
Thanks for upgrading Mio to 0.7!
It seems that the docs are not upgraded yet; a simple cargo test will reveal where.
On some websites, e.g. http://odnoklassniki.ru, mio_httpc fails with the following error:
Chunk was larger than configured CallBuilder::chunked_max_chunk. 48669
The exact chunk size given in the error message varies somewhat.
curl works fine for the same websites.
There are at least 32979 such websites in the top million (I'm using the Tranco list generated on the 3rd of February).
Archive with all occurrences (although I didn't run the test on the entire million): mio_httpc-chunk-cannot-chuck.tar.gz
Code used for testing: https://github.com/Shnatsel/rust-http-clients-smoke-test/blob/f206362f2e81521bbefb84007cdd25242f6db590/mio_httpc-smoke-test/src/main.rs
I'm tinkering with the async CallBuilder & SimpleCall API, and I use a different offset, i.e. Httpc::new(2048, None). It doesn't seem to work, as the Calls are still created with ids starting from 10, and therefore the responses don't match (incoming events have the correct ids starting from 2048).
Hi,
are there any plans to bump the mio dependency to v0.8? It's been out for about a year now.
I might be wrong, since I am just looking at the code:
https://github.com/SergejJurecko/mio_httpc/blob/master/src/api/websocket.rs#L465-L481
It seems that when a control frame follows a non-FIN text/binary frame, the code treats the control frame as a text or binary frame, which seems incorrect.
RFC 6455 says:
o Control frames (see Section 5.5) MAY be injected in the middle of
a fragmented message. Control frames themselves MUST NOT be
fragmented.
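The dispatch the RFC requires can be sketched by classifying frames on their opcode alone (illustrative types, not the crate's websocket module): control frames are identified by opcode, never by their position in a fragment sequence.

```rust
#[derive(Debug, PartialEq)]
enum FrameKind {
    Continuation, // opcode 0x0: next fragment of an in-progress message
    Data,         // 0x1 text / 0x2 binary: starts a (possibly fragmented) message
    Control,      // 0x8 close / 0x9 ping / 0xA pong: handle standalone, immediately
}

// RFC 6455 §5.5: control frames MAY be injected in the middle of a
// fragmented message and MUST NOT be treated as part of it, so the
// opcode, not the fragmentation state, decides how a frame is handled.
fn classify(opcode: u8) -> FrameKind {
    match opcode {
        0x0 => FrameKind::Continuation,
        0x1 | 0x2 => FrameKind::Data,
        _ if opcode >= 0x8 => FrameKind::Control,
        _ => FrameKind::Data, // reserved data opcodes 0x3-0x7
    }
}
```

A decoder that first checks the fragmentation state and only then the opcode gets the bug described above; the check order must be reversed.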
It seems impossible to make basic post/put requests using this library.
Take this snippet:
let mut builder = CallBuilder::post(vec![]);
builder.host("127.0.0.1").port(8000).exec().unwrap();
Start a simple local server with python3 -m http.server and then run the Rust client code. The server error is code 400, message "Bad HTTP/0.9 request type ('POST')", and the mio_httpc result is:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Httparse(Version)'
Is there a way to change mio_httpc's internal HTTP version? As I understand it, it defaults to HTTP/0.9, which only supports GET requests and is very old.
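For reference, and as an assumption about the cause rather than a confirmed diagnosis: python's http.server reports a request as "HTTP/0.9" when the request line lacks a version token, so the question is really what request line mio_httpc emitted. A well-formed HTTP/1.1 POST head looks like the output of this sketch (post_request_head is a hypothetical helper):

```rust
// Build a minimal HTTP/1.1 POST request head. A request line missing
// the path or the "HTTP/1.1" version token is what old servers
// interpret as an HTTP/0.9 request.
fn post_request_head(host: &str, path: &str, body_len: usize) -> String {
    format!(
        "POST {} HTTP/1.1\r\nHost: {}\r\nContent-Length: {}\r\n\r\n",
        path, host, body_len
    )
}
```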