Comments (17)

omarkj commented:

I don't understand this issue.


ferd commented:

1xx codes can be used for temporary or incomplete responses. See for example:

102 Processing (WebDAV; RFC 2518). As a WebDAV request may contain many sub-requests involving file operations, it may take a long time to complete the request. This code indicates that the server has received and is processing the request, but no response is available yet.[3] This prevents the client from timing out and assuming the request was lost.

People could arguably decide to emit a string of 1xx codes like "102: almost going", "103: progress 25%", then "105: okay this looks done", I guess. I'm not sure we should handle them in a very fancy way. We should handle code 100 at the very least.
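
For illustration, a client on a recent enough Node (10+, where the http client emits an 'information' event for each 1xx interim response) could observe such a string of codes with something like the sketch below; the host and path are placeholders:

var http = require('http');

var req = http.request({ host: 'example.com', path: '/slow-job' }, function(res) {
    // Only the final (non-1xx) response reaches this callback.
    console.log('final status:', res.statusCode);
    res.resume();
});

// Fires once per 1xx interim response (102, 103, ...), excluding 101 Upgrade.
req.on('information', function(info) {
    console.log('interim status:', info.statusCode, info.statusMessage);
});

req.end();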


omarkj commented:

Thanks for clarifying.


omarkj commented:

This one is solved as well, right?


archaelus commented:

Do we relay intermediate responses? Do we eat them? Do we crash?


ferd commented:

We relay one, but error out if we get a second one, since a 100 Continue cannot be followed by another non-terminal status code -- except for 101 Switching Protocols, which HTTPbis asks us to support. Any other case we choke on.
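
Roughly, that rule boils down to something like the following sketch (JavaScript pseudocode, not vegur's actual Erlang; the function and state names are made up), for a request that carried Expect: 100-continue:

// state.continueRelayed starts out false for each proxied request.
function onInterimStatus(state, status) {
    if (status === 100 && !state.continueRelayed) {
        state.continueRelayed = true;
        return 'relay';     // the first 100 Continue is passed through
    }
    if (status === 101) {
        return 'upgrade';   // 101 switches protocols; hand the connection over
    }
    return 'error';         // any other non-terminal status at this point is refused
}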


omarkj commented:

Is this still an issue? Open for 3 months.


ferd commented:

I'd vote to close this one as something we don't want to support, but that would require @archaelus's approval given he opened it.


archaelus commented:

We need to document this on devcenter: "HTTP intermediate responses: we don't support these except in the following cases: 100 Continue and 101 Switching Protocols."

(I also don't think we should implement it unless someone comes up with a compelling use case and we know that there's actually browser support).


ferd commented:

WEBDAV is currently noted as not being supported in the new docs. Are there any other 1xx responses in there that people would expect to use? Is it legit to just add as many as we want?


archaelus commented:

What bits of WebDAV don't we support? What bits of WebDAV could we easily support?


ferd commented:

I think the rest of WEBDAV is implicitly supported as far as statuses go, but there are some ambiguous cases: http://en.wikipedia.org/wiki/List_of_HTTP_status_codes

  • 102 Processing (WebDAV; RFC 2518)
  • 207 Multi-Status (WebDAV; RFC 4918) (the response body carries multiple XML sub-responses, each with its own status code)
  • 208 Already Reported (WebDAV; RFC 5842)
  • 422 Unprocessable Entity (WebDAV; RFC 4918)
  • 423 Locked (WebDAV; RFC 4918)
  • 424 Failed Dependency (WebDAV; RFC 4918)
  • 507 Insufficient Storage (WebDAV; RFC 4918)
  • 508 Loop Detected (WebDAV; RFC 5842)

And methods (we take anything):

  • PROPFIND — used to retrieve properties, stored as XML, from a web resource. It is also overloaded to allow one to retrieve the collection structure (a.k.a. directory hierarchy) of a remote system.
  • PROPPATCH — used to change and delete multiple properties on a resource in a single atomic act
  • MKCOL — used to create collections (a.k.a. a directory)
  • COPY — used to copy a resource from one URI to another
  • MOVE — used to move a resource from one URI to another
  • LOCK — used to put a lock on a resource. WebDAV supports both shared and exclusive locks.
  • UNLOCK — used to remove a lock from a resource

There's also an HTTP 'If' header and dependencies on ETags, but I haven't looked at it in depth.
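
Since we take any method, a WebDAV call is just an opaque request from the proxy's point of view; for example, a PROPFIND issued through it looks like any other request, with a 207 Multi-Status reply carrying an XML body. A minimal sketch (dav.example.com and the path are placeholders):

var http = require('http');

var req = http.request({
    host: 'dav.example.com',
    path: '/files/',
    method: 'PROPFIND',                        // arbitrary methods pass through unchanged
    headers: { 'Depth': '1', 'Content-Type': 'application/xml' }
}, function(res) {
    console.log('status:', res.statusCode);    // typically 207 Multi-Status
    res.pipe(process.stdout);                  // XML body with one sub-status per resource
});

req.end('<?xml version="1.0"?><propfind xmlns="DAV:"><allprop/></propfind>');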


archaelus commented:

I think the 102 Processing status is the only way we'd break WebDAV through vegur. Presumably servers issue it repeatedly until they're done. If we supported that, we'd also give Heroku customers a way to 'do long-running jobs easily'. (While you generate the PDF, emit 'HTTP/1.1 102 Processing\r\n\r\n' every 25s.)
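
A minimal sketch of what that heartbeat could look like on the app side, assuming a Node version recent enough that the response object exposes writeProcessing(); generatePdf stands in for whatever long-running work is being done:

var express = require('express'),
    app = express();

app.get('/report.pdf', function(req, res) {
    // Keep the connection alive with 102 Processing while the real work runs.
    var heartbeat = setInterval(function() {
        res.writeProcessing();                 // writes "HTTP/1.1 102 Processing\r\n\r\n"
    }, 25000);

    generatePdf(function(err, pdf) {           // hypothetical long-running job
        clearInterval(heartbeat);
        if (err) return res.status(500).end();
        res.type('application/pdf').send(pdf); // the terminal response
    });
});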


ferd commented:

I think it would be possible to make it work if we supported it outside of the 100-continue workflow, which would otherwise be way too confusing. In that case, it's a question of looping on every status line while Status < 200, without relaying it.

If you really want that feature in, I guess it's workable. It just can't be mixed with a 100 Continue sent in response to an Expect: 100-continue header, because any non-terminal status that follows a 100 Continue in that context is an error and should be denied.
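
In other words, something along these lines on the response-reading side (hypothetical helper names, just sketching the loop, not vegur code):

// Keep reading status lines from the backend; drop everything below 200
// and only relay the first terminal response to the client.
function readFinalStatus(readStatusLine, relay) {
    var status;
    do {
        status = readStatusLine();   // e.g. parses the next "HTTP/1.1 NNN ..." line
    } while (status < 200);          // 1xx interim responses are swallowed here
    relay(status);
    return status;
}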


ferd commented:

After re-reading the HTTP spec, there is nothing explicitly forbidding sending two consecutive 100 Continue responses -- the spec only mandates that a server must send a terminal status once it's done processing the request. I've updated the docs to reflect this, and the commits referenced above go in that direction.


daguej commented:

What is the current status of this?

I just uploaded an app to Heroku:

var express = require('express'),
    app = express();

app.get('/long', function(req, res) {
    // Every 10 seconds, push a raw "102 Processing" interim response.
    // res._writeRaw is an undocumented internal Node API that writes
    // straight to the socket, bypassing the normal header machinery.
    var ivl = setInterval(function() {
        res._writeRaw('HTTP/1.1 102 Processing\r\n\r\n');
    }, 10000);

    // After 35 seconds, stop the heartbeat and send the final 200 response.
    setTimeout(function() {
        clearInterval(ivl);
        res.send({ ok: true });
    }, 35000);
});

app.listen(process.env.PORT || 3003);

...and then:

$ curl -i http://h102.herokuapp.com/long
HTTP/1.1 102 Processing
Server: Cowboy
Date: Tue, 25 Oct 2016 21:04:28 GMT
Connection: close
Via: 1.1 vegur

HTTP/1.1 102 Processing

HTTP/1.1 102 Processing

HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 11
ETag: W/"b-gjgNHiY7YJPzx1NWkPzddQ"
Date: Tue, 25 Oct 2016 21:04:53 GMT
Connection: close

{"ok":true}

...which is more or less what I'd expect to see. The 102s are relayed to me and eventually I see the 200. This works properly in Chrome, Firefox, and IE.

What is interesting is that the Heroku router adds its headers to the original 102 and then appears to treat the rest of the response as HTTP body bytes. From the logs:

at=info method=GET path="/long" host=h102.herokuapp.com request_id=dfc11416-7f80-4c88-b76a-7887df232ee1 fwd="x.x.x.x" dyno=web.1 connect=1ms service=35044ms status=102 bytes=293

...oops.

This should be supported. The spec says:

A client MUST be able to parse one or more 1xx responses received prior to a final response, even if the client does not expect one. A user agent MAY ignore unexpected 1xx responses.

A proxy MUST forward 1xx responses unless the proxy itself requested the generation of the 1xx response. For example, if a proxy adds an "Expect: 100-continue" field when it forwards a request, then it need not forward the corresponding 100 (Continue) response(s).


ferd commented:

From the README:

The proxy will return a configurable error code if the server returns a 100 Continue following an initial 100 Continue response. The proxy does not yet support infinite 1xx streams.

And:

Not Supported [...] HTTP Extensions such as WEBDAV, relying on additional 1xx status responses

Unfortunately, the Vegur application was written before RFC 7231, at a time when the spec had this to say:

Proxies MUST forward 1xx responses, unless the connection between the
proxy and its client has been closed, or unless the proxy itself
requested the generation of the 1xx response. (For example, if a
proxy adds a "Expect: 100-continue" field when it forwards a request,
then it need not forward the corresponding 100 (Continue)
response(s).)

That wording was ambiguous and let us implement the current behaviour.

To my knowledge, there is currently no time allocated within Heroku projects for changes such as WebDAV support or updating the implementation to match RFC 7231, even though I would like to do it at this point.

The best I can offer at this time is to bring this to our product managers to see what can be done, or to hope for open-source contributions, which we'd be happy to help guide and eventually deploy.

I can let you know what comes out of this.

