algesten / ureq
A simple, safe HTTP client
License: Apache License 2.0
Right now certificate management is a compile-time decision. Ideally we could specify an optional server certificate via the Request API: if it is set, HTTPS would use that certificate; otherwise it would fall back to the default provided by configure_certs. I'm happy to write this code but would like feedback before submitting a PR.
In a world of async-everything and big dependency chains, this is very nice!
https://tools.ietf.org/html/rfc7230#section-3.2.2
A recipient MAY combine multiple header fields with the same field
name into one "field-name: field-value" pair, without changing the
semantics of the message, by appending each subsequent field value to
the combined field value in order, separated by a comma.
Note: In practice, the "Set-Cookie" header field ([RFC6265]) often
appears multiple times in a response message and does not use the
list syntax, violating the above requirements on multiple header
fields with the same name. Since it cannot be combined into a
single field-value, recipients ought to handle "Set-Cookie" as a
special case while processing header fields.
Right now, response.header() only returns the value of the first header field with the requested name. It would be good to handle the case where there are repeats. Fortunately, the HTTP spec allows doing so by concatenation, which lets us preserve a nice simple API.
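For illustration, a sketch of that folding rule over a plain list of (name, value) pairs (illustrative, not ureq's internal header type), with Set-Cookie as the RFC's special case:
/// Combine repeated header values per RFC 7230 section 3.2.2.
fn combined_header(name: &str, headers: &[(String, String)]) -> Option<String> {
    let mut values = headers
        .iter()
        .filter(|(n, _)| n.eq_ignore_ascii_case(name))
        .map(|(_, v)| v.as_str());
    if name.eq_ignore_ascii_case("set-cookie") {
        // Set-Cookie cannot be comma-joined; return the first value and
        // let callers fetch the rest individually.
        return values.next().map(str::to_string);
    }
    let joined = values.collect::<Vec<_>>().join(", ");
    if joined.is_empty() { None } else { Some(joined) }
}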
Hi, I have this error when making a POST:
No cached session for DNSNameRef("discordapp.com")
Not resuming any session
Code
let resp = ureq::post(&url) // &url == discord webhook
.send_json(json!({ "content": "test" }));
info!("Response: {} Status: {} StatusLine: {}", resp.status(), resp.status_text(), resp.status_line());
When sending a request with 'Accept-Encoding: identity', a response with "Transfer-Encoding: gzipped" still seems to be possible.
When running the same request with curl on the same server, the response is not gzipped.
One possible source of error seems to be here:
Line 51 in 3014f58
"Transfer-Encoding" seems to be a response header, not a request header, so should this be checking Accept-Encoding? (reference)
Because adding rustls requires a gcc toolchain (on Windows) and Perl for building ring (a rustls dependency).
I'm probably doing something stupid. I cloned the repo (my HEAD is at 09dabbd) and ran cargo test. I get:
failures:
---- src/lib.rs - (line 9) stdout ----
error[E0432]: unresolved import `ureq::json`
--> src/lib.rs:11:5
|
5 | use ureq::json;
| ^^^^^^^^^^ no `json` in the root
error: cannot determine resolution for the macro `json`
--> src/lib.rs:16:16
|
10 | .send_json(json!({
| ^^^^
|
= note: import resolution is stuck, try simplifying macro imports
error[E0599]: no method named `send_json` found for type `&mut ureq::Request` in the current scope
--> src/lib.rs:16:6
|
10 | .send_json(json!({
| ^^^^^^^^^ method not found in `&mut ureq::Request`
error: aborting due to 3 previous errors
Some errors have detailed explanations: E0432, E0599.
For more information about an error, try `rustc --explain E0432`.
Couldn't compile the test.
failures:
src/lib.rs - (line 9)
test result: FAILED. 47 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out
I assume it's supposed to pass because CI is working. My rustc is stable-x86_64-apple-darwin unchanged - rustc 1.40.0 (73528e339 2019-12-16)
Hi! Seems like ureq is not compatible with some websites. Any idea why?
Here is an example:
let resp = ureq::get("https://www.okex.com/api/spot/v3/products")
.set("Connection", "keep-alive")
.set("Accept-Encoding", "identity")
.timeout_connect(5_000)
.timeout_read(5_000)
.timeout_write(5_000)
.call();
if !resp.ok() {
eprintln!("Error! Code {}, line {}", resp.status(), resp.status_line());
}
It prints: Error! Code 500, line HTTP/1.1 500 Bad Status
Same URL works fine via firefox/chrome/curl/hyper/mio_httpc.
Same thing happens for URL https://api.fcoin.com/v2/public/symbols
Hey!
It seems as if the only way to access response headers right now is explicitly by header name. There are some cases where I'd like to iterate over all existing headers.
Maybe I've missed something, but otherwise it would be nice if there was an accessor for the headers field.
Cheers!
Right now, connections stay in the pool indefinitely. If someone makes requests to a wide variety of hosts, that can quickly fill up the pool. Each entry in the pool uses an FD, and eventually a program will hit ulimit_nofile and stop working.
We should set a max size for the pool, and when a new connection needs to be added but the pool is full, remove one of the existing connections. A couple of options for how to remove:
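Whichever removal policy is chosen, here is a minimal sketch of a size-capped pool (hypothetical types; the real PoolKey carries more than a host string):
use std::collections::VecDeque;
use std::net::TcpStream;

struct Pool {
    max: usize,
    entries: VecDeque<(String /* "host:port" */, TcpStream)>,
}

impl Pool {
    fn add(&mut self, key: String, conn: TcpStream) {
        if self.entries.len() >= self.max {
            // Evict the least recently added connection; dropping the
            // TcpStream closes the socket and frees its FD.
            self.entries.pop_front();
        }
        self.entries.push_back((key, conn));
    }
}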
Is it possible to make the cookie dependency optional? When making server-to-server requests it's not often that you use cookies, and the cookie dependency brings in quite a few others:
│ ├── cookie v0.12.0
│ │ ├── time v0.1.42
│ │ │ └── libc v0.2.65 (*)
│ │ │ [dev-dependencies]
│ │ │ └── winapi v0.3.8
│ │ └── url v1.7.2
│ │ ├── idna v0.1.5
│ │ │ ├── matches v0.1.8
│ │ │ ├── unicode-bidi v0.3.4
│ │ │ │ └── matches v0.1.8 (*)
│ │ │ └── unicode-normalization v0.1.8
│ │ │ └── smallvec v0.6.10
│ │ ├── matches v0.1.8 (*)
│ │ └── percent-encoding v1.0.1
So, would it be acceptable to add a cookie feature (enabled by default, perhaps) that disables this dependency?
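For illustration, the manifest wiring might look like this (a sketch; feature names illustrative, version as in the tree above):
[dependencies]
cookie = { version = "0.12", optional = true }

[features]
default = ["tls", "cookie"]  # opt out via default-features = false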
While the library does many things without the dependency baggage, this one feature is crucial for many uses of POST. Not supporting it precludes usage of this library. Please consider expediting.
It looks like PoolKeys only consider host and port, which can lead to problems if a proxy is being used as a gateway to internal addresses, e.g. proxy1 connects to 1.2.3.4 in one private network while proxy2 connects to 1.2.3.4 in another. Another (perhaps more limiting) option might be to make proxies agent-scoped, so that different connection pools are used.
The doc build for 1.2.0 at https://docs.rs/crate/ureq/1.2.0/builds/261220 is failing due to:
[INFO] [stderr] Documenting ureq v1.2.0 (/opt/rustwide/workdir)
[INFO] [stderr] error: You have both the "tls" and "native-tls" features enabled on ureq. Please disable one of these features.
[INFO] [stderr] --> src/lib.rs:185:1
[INFO] [stderr] |
[INFO] [stderr] 185 | / std::compile_error!(
[INFO] [stderr] 186 | | "You have both the \"tls\" and \"native-tls\" features enabled on ureq. Please disable one of these features."
[INFO] [stderr] 187 | | );
[INFO] [stderr] | |__^
The Cargo manifest suggests that this project is dual licensed under MIT OR Apache-2.0. However, there are no LICENSE files and the README suggests it's only licensed under MIT.
If maximum compatibility with the Rust ecosystem is desired, I would follow the recommendations at https://rust-lang-nursery.github.io/api-guidelines/necessities.html#crate-and-its-dependencies-have-a-permissive-license-c-permissive.
At https://github.com/algesten/ureq/blob/master/src/unit.rs#L167, requests that fail on bad_status_read are retried, but only if no body bytes were sent. The HTTP RFC says:
A user agent MUST NOT automatically retry a request with a non-
idempotent method unless it has some means to know that the request
semantics are actually idempotent, regardless of the method, or some
means to detect that the original request was never applied.
It's possible to have a POST request with an empty body; that would be non-idempotent, but would also have zero body bytes sent.
Relatedly, the comments in unit.rs discuss "body bytes sent", which suggests this code could run if a request with a body was made but the error happened before any bytes of the body were sent. However, body_bytes_sent is only set once the whole body has been sent successfully. I think it would be clearer to retry only if the body's size is known and is zero.
Right now ureq has no way to time out DNS lookups. It uses to_socket_addrs, which says:
Note that this function may block the current thread while resolution is performed.
Under the hood, I believe this uses getaddrinfo on Linux, which does not allow setting a timeout.
Some documentation about how curl handles this is here: https://github.com/curl/curl/blob/26d2755d7c3181e90e46014778941bff53d2309f/lib/hostip.c#L91-L115. It sounds like the options are:
This may not be a terribly big priority because in practice getaddrinfo does have built-in timeouts on many systems. For instance, on Linux the default config has a timeout of 5s for name resolution. The Windows default is 15s.
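For reference, the usual workaround is to run the blocking lookup on a helper thread and give up after a deadline, a sketch (the thread is leaked if the lookup never returns, which is the standard cost of this approach):
use std::io;
use std::net::{SocketAddr, ToSocketAddrs};
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn resolve_with_timeout(netloc: String, timeout: Duration) -> io::Result<Vec<SocketAddr>> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // e.g. netloc = "example.com:80"
        let res = netloc.to_socket_addrs().map(|it| it.collect::<Vec<_>>());
        let _ = tx.send(res);
    });
    rx.recv_timeout(timeout)
        .unwrap_or_else(|_| Err(io::Error::new(io::ErrorKind::TimedOut, "DNS lookup timed out")))
}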
I had a question about coding style. I noticed there are a bunch of empty comments, like so:
fn from_str(s: &str) -> Result<Self, Self::Err> {
//
let line = s.to_string();
Are these meant to denote something? Should I clean them up?
Some crates, for example tar, take a Write object to send their data to, instead of creating a Read implementation that data can be pulled from. It would be nice if there was a way to write the body of a request.
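In the meantime the two styles can be bridged in userland with an in-memory pipe: a Write half pushing chunks onto a channel and a Read half draining them. A sketch (hypothetical helper, not a ureq API; the writing side must run on its own thread, since the request consumes the Read half):
use std::io::{self, Read, Write};
use std::sync::mpsc::{channel, Receiver, Sender};

/// Write half: each write becomes one chunk on the channel.
struct ChannelWriter(Sender<Vec<u8>>);

impl Write for ChannelWriter {
    fn write(&mut self, data: &[u8]) -> io::Result<usize> {
        self.0.send(data.to_vec())
            .map_err(|_| io::Error::new(io::ErrorKind::BrokenPipe, "reader dropped"))?;
        Ok(data.len())
    }
    fn flush(&mut self) -> io::Result<()> { Ok(()) }
}

/// Read half: drains chunks; a hung-up sender means EOF.
struct ChannelReader { rx: Receiver<Vec<u8>>, buf: Vec<u8>, pos: usize }

impl Read for ChannelReader {
    fn read(&mut self, out: &mut [u8]) -> io::Result<usize> {
        while self.pos >= self.buf.len() {
            match self.rx.recv() {
                Ok(chunk) => { self.buf = chunk; self.pos = 0; }
                Err(_) => return Ok(0), // writer gone: EOF
            }
        }
        let n = out.len().min(self.buf.len() - self.pos);
        out[..n].copy_from_slice(&self.buf[self.pos..self.pos + n]);
        self.pos += n;
        Ok(n)
    }
}

fn pipe() -> (ChannelWriter, ChannelReader) {
    let (tx, rx) = channel();
    (ChannelWriter(tx), ChannelReader { rx, buf: Vec::new(), pos: 0 })
}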
ureq currently defaults to connections with no timeout, meaning it will hang indefinitely if the remote host doesn't reply. This leads to a resource leak. This post explains the problem in detail: https://medium.com/@nate510/don-t-use-go-s-default-http-client-4804cb19f779
For reference, curl has saner defaults: it sets a default connection timeout of 300 seconds and leaves the rest indefinite.
I am looking for an HTTP request crate that would be tiny (LoC-wise) compared to reqwest, with the ability to disable anything other than the simplest plain HTTP, suitable for security-conscious environments (like querying a Bitcoin Core RPC port, etc.). ureq was suggested to me, so I decided to review the source.
Here's the result: dpc/crev-proofs@42a3b5c
Sorry for giving a negative review, but the point of reviewing is to judge and point out problems. I also reserve a right to be wrong about some parts. :)
I hope at least it will help you improve some stuff.
It would be useful to be able to save multiple streams to the same address in the connection pool - for example, when doing chunked downloading via Range header, multiple connections can be used and reused.
Some websites transmit data really slowly, but ureq drops the connection almost immediately after establishing it, without actually downloading the content, and without reporting an error either.
An example of where this happens is 7911game.com (warning: malicious website; it ships some kind of VBScript, so I assume it exploits Internet Explorer). curl takes a long time to download it and loads it gradually, as does https://github.com/jayjamesjay/http_req
Was curious about your thoughts on having the proxy code support reading from environment variables (http_proxy, https_proxy)? Easy enough to do it in my library, but I wanted to sync here first to see if it made sense to upstream it.
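For reference, a sketch of picking up the conventional variables (curl also consults ALL_PROXY; none of this is current ureq behavior):
use std::env;

fn proxy_from_env(scheme: &str) -> Option<String> {
    // Prefer the scheme-specific variable, then the catch-all.
    let var = match scheme {
        "https" => "https_proxy",
        _ => "http_proxy",
    };
    env::var(var).or_else(|_| env::var("ALL_PROXY")).ok()
}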
#67 introduced TCP listeners that act as HTTP servers for the purpose of testing. These rely on spawning threads. Ideally we'd like the thread running the accept loop to exit when the test case is over. That's a bit challenging because the listener.incoming() iterator blocks.
Right now there are a few possible solutions:
(2) is unsatisfying for a real server because you need a sleep() to avoid spinning the CPU, but that sleep necessarily delays acceptance of new connections. However, it is probably good enough for test cases.
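As a sketch of option (2): a nonblocking accept loop that polls a stop flag, trading a small sleep for a clean shutdown:
use std::io;
use std::net::TcpListener;
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;
use std::time::Duration;

fn accept_loop(listener: TcpListener, stop: Arc<AtomicBool>) -> io::Result<()> {
    listener.set_nonblocking(true)?;
    while !stop.load(Ordering::Relaxed) {
        match listener.accept() {
            Ok((stream, _addr)) => {
                // Serve the canned test response on its own thread.
                thread::spawn(move || drop(stream));
            }
            Err(ref e) if e.kind() == io::ErrorKind::WouldBlock => {
                // Nothing pending; sleep briefly to avoid spinning the CPU,
                // at the cost of delaying acceptance by up to 10ms.
                thread::sleep(Duration::from_millis(10));
            }
            Err(e) => return Err(e),
        }
    }
    Ok(())
}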
Hi,
I'm using ureq in a project to do ADFS authentication for AWS. The login mechanics basically work like this:
- GET the login page, which will set an auth-request cookie
- POST credentials (along with the auth-request cookie)
- GET the redirect URL with the auth-request and response cookies, to which the server replies with an HTML page that includes a form with a SAML assertion that can be used to log in to AWS.
(If this sounds somewhat circuitous, it is, but that's how ADFS works.)
I was having trouble getting this to work with ureq. I'm using an Agent (for automatic cookie persistence) but the login procedure kept failing at step 4. On a hunch I used redirect(0) on the POST in step 2, extracted the cookies from the response and did the redirect request "manually", and things suddenly worked. This seems to indicate that cookies set during a request in a chain of redirects are not used in subsequent requests.
I cannot handle the error in a case like this:
use std::io;
fn main() {
let mut reader = ureq::get("https:://123").call().into_reader();
let mut out = Vec::new(); // error message will be there
io::copy(&mut reader, &mut out).unwrap(); // no error
dbg!(String::from_utf8_lossy(&out));
}
It looks like no error propagates from the read to io::copy.
Hi, thanks to ureq I have achieved what I wanted to a large extent in my project. Is there any multipart support/example right now?
Hi there!
I tried to build without default features:
[dependencies]
ureq = { version = "0.4", default-features = false }
Sadly, this results in:
error[E0432]: unresolved import `native_tls`
--> /home/lukas/.cargo/registry/src/github.com-1ecc6299db9ec823/ureq-0.4.8/src/error.rs:1:5
|
1 | use native_tls::Error as TlsError;
| ^^^^^^^^^^ Maybe a missing `extern crate native_tls;`?
error[E0432]: unresolved import `native_tls`
--> /home/lukas/.cargo/registry/src/github.com-1ecc6299db9ec823/ureq-0.4.8/src/error.rs:2:5
|
2 | use native_tls::HandshakeError;
| ^^^^^^^^^^ Maybe a missing `extern crate native_tls;`?
error[E0599]: no variant named `Https` found for type `stream::Stream` in the current scope
--> /home/lukas/.cargo/registry/src/github.com-1ecc6299db9ec823/ureq-0.4.8/src/stream.rs:28:17
|
12 | pub enum Stream {
| --------------- variant `Https` not found here
...
28 | Stream::Https(_) => "https",
| ^^^^^^^^^^^^^^^^ variant not found in `stream::Stream`
|
= note: did you mean `stream::Stream::Http`?
error[E0599]: no variant named `Https` found for type `stream::Stream` in the current scope
--> /home/lukas/.cargo/registry/src/github.com-1ecc6299db9ec823/ureq-0.4.8/src/stream.rs:41:13
|
12 | pub enum Stream {
| --------------- variant `Https` not found here
...
41 | Stream::Https(_) => true,
| ^^^^^^^^^^^^^^^^ variant not found in `stream::Stream`
|
= note: did you mean `stream::Stream::Http`?
error[E0599]: no variant named `Https` found for type `stream::Stream` in the current scope
--> /home/lukas/.cargo/registry/src/github.com-1ecc6299db9ec823/ureq-0.4.8/src/stream.rs:59:13
|
12 | pub enum Stream {
| --------------- variant `Https` not found here
...
59 | Stream::Https(stream) => stream.read(buf),
| ^^^^^^^^^^^^^^^^^^^^^ variant not found in `stream::Stream`
|
= note: did you mean `stream::Stream::Http`?
error[E0599]: no variant named `Https` found for type `stream::Stream` in the current scope
--> /home/lukas/.cargo/registry/src/github.com-1ecc6299db9ec823/ureq-0.4.8/src/stream.rs:71:13
|
12 | pub enum Stream {
| --------------- variant `Https` not found here
...
71 | Stream::Https(stream) => stream.write(buf),
| ^^^^^^^^^^^^^^^^^^^^^ variant not found in `stream::Stream`
|
= note: did you mean `stream::Stream::Http`?
error[E0599]: no variant named `Https` found for type `stream::Stream` in the current scope
--> /home/lukas/.cargo/registry/src/github.com-1ecc6299db9ec823/ureq-0.4.8/src/stream.rs:80:13
|
12 | pub enum Stream {
| --------------- variant `Https` not found here
...
80 | Stream::Https(stream) => stream.flush(),
| ^^^^^^^^^^^^^^^^^^^^^ variant not found in `stream::Stream`
|
= note: did you mean `stream::Stream::Http`?
I can't connect to a website:
let agent = ureq::agent();
agent.get("https://redacted.com/Default.asp?procedura=ERRORE").call();
I get Network Error: Connection reset by peer (os error 54), but curl handles it fine.
* Trying 217.19.150.244...
* TCP_NODELAY set
* Connected to redacted.com (0.0.0.0) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: OU=Domain Control Validated; CN=*.redacted.com
* start date: Feb 3 18:49:20 2020 GMT
* expire date: Mar 6 12:51:01 2022 GMT
* subjectAltName: host "redacted.com" matched cert's "*.redacted.com"
* issuer: C=BE; O=GlobalSign nv-sa; CN=AlphaSSL CA - SHA256 - G2
* SSL certificate verify ok.
> GET /Default.asp?procedura=ERRORE HTTP/1.1
> Host: redacted.com
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
In my ureq application, when I set the timeout to 20ms, I get an Io(Custom { kind: TimedOut, error: "timed out reading response" }) error. However, when I set the timeout to 100ms, I get a Custom { kind: InvalidData, error: "Failed to read JSON: timed out reading response" } error in the request.into_json() part.
I expect there to be only a single error associated with a timeout, not two.
In one of my apps, I use ureq to read/write from a key/value server. When I perform writes, the status response is sufficient to know failure or success. However, if I do not read the response body, the connection doesn't return to the pool, since the data remains on the stream. This seems odd, and it requires my app to add an explicit read on the response in order to allow the stream to return to the pool. I was curious why this is the case and whether or not we could just flush the buffer:
https://github.com/algesten/ureq/blob/master/src/pool.rs#L218
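Until then, a caller-side workaround is to drain the body explicitly before dropping the response; a sketch, assuming the 1.x API used elsewhere in these reports:
use std::io;

fn main() {
    let agent = ureq::agent();
    let resp = agent.get("http://127.0.0.1:8080/key").call();
    println!("status: {}", resp.status());
    // Drain whatever body remains so the underlying stream is clean and
    // the connection can return to the agent's pool for reuse.
    let mut reader = resp.into_reader();
    let _ = io::copy(&mut reader, &mut io::sink());
}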
I've tested ureq by downloading the homepages of the top million websites with it. I've found a panic in ring, and 13 out of 1,000,000 websites triggered a panic in ureq::stream::connect_https.
Steps to reproduce:
Run this simple program with "yardmaster2020.com" given as the only command-line argument.
The same website opens fine in Chrome. Full list of websites where this happens: amadriapark.com, bda.org.uk, egain.cloud, gdczt.gov.cn, hsu.edu.hk, mathewingram.com, roadrover.cn, srichinmoyraces.org, thetouchx.com, tradekorea.com, utest.com, wlcbcgs.cn, yardmaster2020.com
Backtrace:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: InvalidDNSNameError', src/libcore/result.rs:1189:5
stack backtrace:
0: backtrace::backtrace::libunwind::trace
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/libunwind.rs:88
1: backtrace::backtrace::trace_unsynchronized
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/mod.rs:66
2: std::sys_common::backtrace::_print_fmt
at src/libstd/sys_common/backtrace.rs:77
3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
at src/libstd/sys_common/backtrace.rs:59
4: core::fmt::write
at src/libcore/fmt/mod.rs:1057
5: std::io::Write::write_fmt
at src/libstd/io/mod.rs:1426
6: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:62
7: std::sys_common::backtrace::print
at src/libstd/sys_common/backtrace.rs:49
8: std::panicking::default_hook::{{closure}}
at src/libstd/panicking.rs:195
9: std::panicking::default_hook
at src/libstd/panicking.rs:215
10: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:472
11: rust_begin_unwind
at src/libstd/panicking.rs:376
12: core::panicking::panic_fmt
at src/libcore/panicking.rs:84
13: core::result::unwrap_failed
at src/libcore/result.rs:1189
14: ureq::stream::connect_https
15: ureq::unit::connect_socket
Would you be able to upgrade the current dependency on webpki from 0.19 to the latest, 0.21? I'm getting version conflicts when combining with other crates that would be solved by this upgrade.
Disclaimer: this is a nitpick.
Since ureq seems to get a new update about every week, every week I get a ping from dependabot. Updating is no big deal, but the lack of a changelog makes it a bit less convenient, since I have to check the commits to see what changed.
It would be more convenient to either have a maintained CHANGELOG.md or just some short bullet points on the tags.
Request::new can be called with something that's not a URL, e.g. /path, and it will automatically prepend http://localhost/. I think that's the wrong thing in most cases. We should instead make it an error to construct a Request with a string that doesn't parse as a URL.
This will probably break a lot of doctests, but I think we should update the doctests to use real URLs. For now those can be http://localhost/; when #82 is fixed, those can be http://example.com/, with an override to point example.com to localhost so the tests run quickly and don't hit the network.
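A sketch of the stricter behavior, using the url crate that is already in the dependency tree:
use url::Url;

/// Reject anything that does not parse as an absolute URL instead of
/// guessing http://localhost/.
fn parse_request_url(s: &str) -> Result<Url, url::ParseError> {
    // Url::parse("/path") fails with RelativeUrlWithoutBase.
    Url::parse(s)
}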
While working on a documentation issue, I tried to check out an earlier version of ureq (v1.1.2), but I found that there are no tagged releases in this repo. Would you be willing to start git tag'ing releases? There's some discussion about using git tag for crate releases at https://users.rust-lang.org/t/psa-please-git-tag-your-crates-io-releases/22223/10.
https://tools.ietf.org/html/rfc7230#section-3.2.4
No whitespace is allowed between the header field-name and colon. In
the past, differences in the handling of such whitespace have led to
security vulnerabilities in request routing and response handling. A
server MUST reject any received request message that contains
whitespace between a header field-name and colon with a response code
of 400 (Bad Request). A proxy MUST remove any such whitespace from a
response message before forwarding the message downstream.
It looks like ureq::Error doesn't implement the std::error::Error trait (or failure::Fail). That makes its use as a cause for other errors somewhat problematic. Do you think it is possible to implement it?
It seems that if a URL with a port is used – for example, "http://localhost:9222/json/version" – then the "Host" header is not set correctly (the port seems to be missing).
I connected to the HTTP front end of a DevTools protocol server, which returns a WebSocket URL.
When I connected via ureq, this WebSocket URL did not include the port of the server (and connecting to the WebSocket URL therefore failed). When I inspected the response of the server with different clients (Chrome and curl), the server returned the correct WebSocket URL. After I added the correct "Host" header to the ureq request myself, the server returned the correct WebSocket URL as well. Therefore I assume that ureq does not set the "Host" header correctly.
According to MDN, if no port is included in the "Host" header, it defaults to 80 (for HTTP) or 443 (for HTTPS), which in the above case is incorrect.
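A sketch of the expected Host computation, using the url crate from ureq's dependency tree; url strips scheme-default ports at parse time, so port() is Some only for an explicit non-default port such as :9222:
use url::Url;

fn host_header(url: &Url) -> Option<String> {
    let host = url.host_str()?;
    Some(match url.port() {
        // Explicit non-default port: include it, e.g. "localhost:9222".
        Some(port) => format!("{}:{}", host, port),
        // Default port: the bare host is correct per RFC 7230 section 5.4.
        None => host.to_string(),
    })
}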
The connection pool does not cope with HTTPS servers closing the connection, due to rustls being mean. If I make a request, then wait for the connection to time out, then make another request, I see the error: BadStatus. This means that every other request I make fails.
This test fails, because https://fau.xxx/ is currently running an nginx with a keepalive_timeout of 2s:
#[test]
fn connection_reuse() {
    use std::io::Read;
    use std::time::Duration;
    let agent = ureq::Agent::default().build();
let resp = agent.get("https://fau.xxx/").call();
// use up the connection so it gets returned to the pool
assert_eq!(resp.status(), 200);
resp.into_reader().read_to_end(&mut vec![]).unwrap();
// wait for the server to close the connection
std::thread::sleep(Duration::from_secs(3));
// try and make a new request on the pool
let resp = agent.get("https://fau.xxx/").call();
if let Some(err) = resp.synthetic_error() {
panic!("boom! {:?}", err);
}
assert_eq!(resp.status(), 200);
}
nginx defaults to 75s, some servers have much longer timeouts, some much shorter, but everyone will eventually see this problem.
I was hoping that attempting a write during send_prelude would trigger the retry code, #8, but this does not help.
I do not see a way to fix this right now. A read(&mut [])? in send_prelude doesn't trigger it, and we aren't expecting any data to be readable at that point, so clever buffering wouldn't help.
Right now ureq will always use to_socket_addrs to lookup hostnames. For testing, it would be useful to force all lookups to resolve to localhost and a test server running there. This is particularly useful when a variety of hostnames are needed, or when testing large transfers that would be expensive to send over a real network.
We should add an internal Resolver interface that uses to_socket_addrs by default, but can be overridden during testing to provide mocked-out results.
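A sketch of that interface (names illustrative):
use std::io;
use std::net::{SocketAddr, ToSocketAddrs};

trait Resolver {
    fn resolve(&self, netloc: &str) -> io::Result<Vec<SocketAddr>>;
}

/// Default: delegate to the system resolver via to_socket_addrs.
struct StdResolver;

impl Resolver for StdResolver {
    fn resolve(&self, netloc: &str) -> io::Result<Vec<SocketAddr>> {
        Ok(netloc.to_socket_addrs()?.collect())
    }
}

/// Test double: send every hostname to a local test server.
struct LocalhostResolver { port: u16 }

impl Resolver for LocalhostResolver {
    fn resolve(&self, _netloc: &str) -> io::Result<Vec<SocketAddr>> {
        Ok(vec![SocketAddr::from(([127, 0, 0, 1], self.port))])
    }
}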
I'm using ureq 0.11.2, and I can't seem to set cookies on a request correctly.
I have the following project:
# Cargo.toml
[package]
name = "ureq-cookies"
version = "0.1.0"
authors = ["Alex Chan <[email protected]>"]
edition = "2018"
[dependencies]
ureq = "0.11.2"
// main.rs
extern crate ureq;
fn main() {
let agent = ureq::agent();
let cookie = ureq::Cookie::new("name", "value");
agent.set_cookie(cookie);
let resp = agent.get("http://127.0.0.1:5000/").call();
println!("{:?}", resp);
}
This code uses set_cookie to attach a cookie name=value to the agent before making a request.
This is based on the example code given in https://github.com/algesten/ureq/blob/master/src/agent.rs#L189-L196
I'd expect this snippet to send the cookie name=value to http://127.0.0.1:5000, but the server isn't receiving the cookie. Am I doing something wrong?
If I set the Cookie header manually, the server does receive the cookie, but this seems to defeat the point of having a set_cookie() method:
extern crate ureq;
fn main() {
let mut agent = ureq::agent();
agent.set("Cookie", "name=value");
let resp = agent.get("http://127.0.0.1:5000/").call();
println!("{:?}", resp);
}
This is the body of set_cookie():
Lines 199 to 205 in da42f2e
If it can't acquire the state as mutable, the cookie is quietly dropped. I wonder if that's what's happening here?
At http://127.0.0.1:5000, I'm running a small Python web server. On every request, it prints the headers and the cookies it received:
#!/usr/bin/env python
# -*- encoding: utf-8
import flask
app = flask.Flask(__name__)
@app.route("/")
def index():
print("\nGot a request!")
print("Headers: %r" % dict(flask.request.headers))
print("Cookies: %r" % flask.request.cookies)
return "hello world"
if __name__ == "__main__":
app.run(port=5000)
This is the output:
# Using set_cookie("name", "value")
Got a request!
Headers: {'Host': '127.0.0.1', 'User-Agent': 'ureq', 'Accept': '*/*'}
Cookies: {}
# Using set("Cookie", "name=value")
Got a request!
Headers: {'Host': '127.0.0.1', 'User-Agent': 'ureq', 'Accept': '*/*', 'Cookie': 'name=value'}
Cookies: {'name': 'value'}
On ureq version 0.12.0:
When using the Request.send method, no headers are set to indicate body content to the server. This causes some servers (like tiny_http) to completely ignore the body. The Request.send method should automatically enable the chunked encoding.
A workaround for this issue is to set the Transfer-Encoding header prior to sending the request to enable the chunked transfer:
request.set("Transfer-Encoding", "chunked");
let response = request.send(body_reader);
I am filing this issue more as a warning for people like me who will stumble upon this gotcha. I have noticed that you are actually working on a brand new version of ureq that has probably fixed this issue already, so it's maybe not a good idea to spend time fixing it. Maybe it should just be mentioned in the documentation of Request.send that the user may want to set the chunked transfer.
ureq currently does not allow specifying a timeout for the entire request (i.e. until the request body is finished transferring), which means an established connection will keep going indefinitely if the remote host keeps replying.
This is fine for the simple use cases, like a user downloading something interactively, but enables denial-of-service attacks in automated scenarios: if the remote host keeps transferring data at a really low speed, e.g. several bytes per second, the connection will be open indefinitely. This makes it easy for an attacker who can submit URLs to the server to cause denial of service through some kind of resource exhaustion - running out of RAM, networking ports, etc.
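One mitigation that fits a blocking client is a whole-transfer deadline enforced in the body reader; a sketch (it assumes per-read timeouts are also set, so a single read cannot block forever):
use std::io::{self, Read};
use std::time::Instant;

struct DeadlineReader<R> { inner: R, deadline: Instant }

impl<R: Read> Read for DeadlineReader<R> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        // Checked once per read, this bounds total transfer time even
        // against a server dripping a few bytes per second.
        if Instant::now() >= self.deadline {
            return Err(io::Error::new(io::ErrorKind::TimedOut, "overall request deadline exceeded"));
        }
        self.inner.read(buf)
    }
}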
Hi @algesten, we are the Veloren team (a Rust game) and we decided to use your ureq crate a few months ago for our auth API, to replace reqwest. We love the fact that it's so small and straight to the point.
But we just discovered an issue. We are getting a 500 synthetic error BadStatus, similar to #10 (but not exactly).
We updated to 1.3 just to be sure, but the error still persists. It seems to come sporadically, about every 5 minutes, and looks TLS-related.
I've read your reasoning for the synthetic errors, but when something is bad inside ureq, or maybe in the way we are handling it, it makes stuff super hard to debug.
Do you have a default approach to take on errors like this? By the way, we are using default-features = false with only the "tls" feature.
Looks like the GitHub Actions PR was merged (#18) but it hasn't been enabled for this repo.
I believe it's still under beta, so you have to sign up for it at https://github.com/features/actions/signup/. Then, you have to enable it under the "Actions" tab in the repo.
Many thanks for your endeavors. We have actually included ureq in the trusted computing base of Libra. I would prefer that we standardize its usage across our code base, but there are concerns that it does not currently support HTTP proxies. Is this something you could consider? And if not, worst-case scenario, would you accept a 3rd-party contribution?
Are there any plans to support providing a client certificate or custom root certificates?
It would be nice to support DEFLATE compression. miniz_oxide is the fastest Rust DEFLATE crate, and it's 100% safe code. It is already used for this purpose in reqwest, attohttpc, etc. This dependency can be easily made optional if desired.
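For illustration, decoding is a one-liner per variant; HTTP "deflate" is nominally zlib-wrapped, though some servers send raw deflate streams (a sketch, assuming the body bytes are already read):
fn inflate(compressed: &[u8]) -> Option<Vec<u8>> {
    // Try the zlib-wrapped form first, then fall back to a raw stream.
    miniz_oxide::inflate::decompress_to_vec_zlib(compressed)
        .or_else(|_| miniz_oxide::inflate::decompress_to_vec(compressed))
        .ok()
}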