swift-nio-extras's Issues

Length-delimited frame decoder.

It would be useful to have a general ChannelInboundHandler that can decode a length-delimited framing protocol. This is a protocol that has messages prefixed with an encoding of their length.

Such a protocol would look like this:

<fixed width integer n in network byte order><n bytes>

The simplest version of this would be one that uses a fixed-width integer.

For now we won't bother with varint support.
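A minimal sketch of what such a decoder could look like, assuming a 4-byte big-endian length prefix (the type name is illustrative; the shipped version of this idea is the LengthFieldBasedFrameDecoder discussed in later issues):

```swift
import NIOCore

/// Illustrative length-prefixed frame decoder: <UInt32 length, network
/// byte order><length bytes of payload>.
final class FixedLengthFieldFrameDecoder: ByteToMessageDecoder {
    typealias InboundOut = ByteBuffer

    func decode(context: ChannelHandlerContext, buffer: inout ByteBuffer) throws -> DecodingState {
        // Peek at the length field without consuming it, in case the
        // full frame hasn't arrived yet. getInteger defaults to big
        // endian, i.e. network byte order.
        guard let length = buffer.getInteger(at: buffer.readerIndex, as: UInt32.self),
              buffer.readableBytes >= 4 + Int(length) else {
            return .needMoreData
        }
        buffer.moveReaderIndex(forwardBy: 4)
        // Safe to force-unwrap: we just checked the bytes are there.
        let frame = buffer.readSlice(length: Int(length))!
        context.fireChannelRead(self.wrapInboundOut(frame))
        return .continue
    }
}
```

It would be installed wrapped in a `ByteToMessageHandler`, e.g. `channel.pipeline.addHandler(ByteToMessageHandler(FixedLengthFieldFrameDecoder()))`.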

Should NIOHTTP1 Swift HTTPTypes conversion code be made public?

I'm talking about the code that can be found here

The NIOWebSocket upgrade helpers rely on the NIOHTTP1 types, specifically HTTPRequestHead and HTTPHeaders. If a project is built using the new HTTP Types, it will need to convert from the supplied NIOHTTP1 types to the new types. In theory the project could have its own conversion code, but it might be a good idea to make the NIOExtras conversion code the official implementation.

Another situation where this conversion is being made is in the OpenAPI transports. Both Vapor and Hummingbird rely on the NIOHTTP1 types and have to convert to the new HTTP types to pass on the HTTP request to the openapi runtime and convert back to return the HTTP response.
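For illustration, one direction of such a conversion might look roughly like this (a hand-rolled sketch, not the NIOExtras code linked above; it ignores pseudo-header handling and the reverse direction):

```swift
import NIOHTTP1
import HTTPTypes

/// Convert NIOHTTP1 headers into swift-http-types fields.
func fields(from headers: HTTPHeaders) -> HTTPFields {
    var fields = HTTPFields()
    for (name, value) in headers {
        // HTTPField.Name's initializer is failable: it rejects strings
        // that are not valid HTTP field names.
        if let fieldName = HTTPField.Name(name) {
            fields.append(HTTPField(name: fieldName, value: value))
        }
    }
    return fields
}
```

Having one official implementation would keep details like this validation behavior consistent across Vapor, Hummingbird, and the OpenAPI transports.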

get rid of `do { ... } catch { ... }` for expected errors

The

do {
    try someOperation()
    XCTFail("should throw") // easy to forget
} catch let error as SomethingError {
    XCTAssertEqual(.something, error)
} catch {
    XCTFail("wrong error")
}

pattern is not only very long, it's also very error prone. If you forget
any of the XCTFails, you might not be testing what it looks like you're testing.

XCTAssertThrowsError(try someOperation()) { error in
    XCTAssertEqual(.something, error as? SomethingError)
}

is much safer and shorter.

We should replace all uses of this pattern in Tests/**.

See also apple/swift-nio#1430 which does the same for swift-nio.

create a new module(?) that makes the universal bootstrap easy to use

I just had to create quite a long example of how to use the universal bootstrap over at apple/swift-nio-examples#48. That is too long and unwieldy. NIO needs to do better.

We should create a new module(?) in this repo (swift-nio-extras), possibly called NIOUniversal or so, which collects tools that are useful for using NIO universally across multiple networking platforms (BSD Sockets, Network.framework, and maybe soon IOCP).
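For reference, a minimal sketch of what the universal bootstrap requires today without such a module (module and type names approximate; TLS and the Network.framework branch are elided):

```swift
import NIOCore
import NIOPosix

let group = MultiThreadedEventLoopGroup(numberOfThreads: 1)

// BSD-sockets path; no TLS for brevity.
let bootstrap = NIOClientTCPBootstrap(ClientBootstrap(group: group),
                                      tls: NIOInsecureNoTLS())

// On Apple platforms one would instead wrap NIOTSConnectionBootstrap from
// swift-nio-transport-services behind `#if canImport(Network)`; hiding
// exactly that dance is what the proposed module would do.
let channelFuture = bootstrap.connect(host: "example.com", port: 80)
```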

Standard Way of Expressing Shared vs. Internal EventLoopGroup

As we've seen so far in the SSWG pitches/proposals for the nio-postgres and nio-http packages (nio-redis was initially modeled after the former), there's been this idea of NIO-based library authors providing users a means either to pass in their own EventLoopGroup, which they might intend to share but still maintain ownership over its lifecycle, or to let the library decide that detail for itself.

NIORedis used to have an implementation and this was previously proposed as a part of NIO core but was rejected as it was too broad of a solution to be in NIO proper.

Could this have a home here? (bikeshedding the naming, of course)

public enum EventLoopGroupProvider {
    // library is given access to use this as a resource, but is not allowed to manage the lifecycle
    case shared(EventLoopGroup)
    // library is left to decide for itself how to create one as an implementation detail
    // and is fully responsible for this resource
    case createNew
}

Perhaps a third case, .unique(EventLoopGroup), could be added for cases where a library wants to be the one providing EventLoopGroups, but hands ownership over to whoever asked for the EventLoopGroupProvider?
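A sketch of how a library might consume the proposed provider (the client type is hypothetical; the enum is the one proposed above):

```swift
import NIOCore
import NIOPosix

/// Hypothetical library client showing the ownership split the
/// proposed EventLoopGroupProvider enum encodes.
final class MyLibraryClient {
    private let group: EventLoopGroup
    private let ownsGroup: Bool

    init(eventLoopGroupProvider: EventLoopGroupProvider) {
        switch eventLoopGroupProvider {
        case .shared(let group):
            self.group = group
            self.ownsGroup = false   // caller manages the lifecycle
        case .createNew:
            self.group = MultiThreadedEventLoopGroup(numberOfThreads: 1)
            self.ownsGroup = true    // we must shut it down ourselves
        }
    }

    func shutdown() throws {
        if self.ownsGroup {
            try self.group.syncShutdownGracefully()
        }
    }
}
```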

NIOWritePCAPHandler crashes if used with UNIX Domain Sockets

When using NIOWritePCAPHandler with UNIX Domain Sockets, it crashes.

That's because UNIX Domain Sockets don't have IP addresses and ports.

If the user passes a fake local/remote address, everything works, but we should supply a default one in case the user doesn't.

LengthFieldBasedFrameDecoder can't handle packets where the length field counts itself

Currently, the LengthFieldBasedFrameDecoder reads a length, and then reads however many bytes the length specifies. Some protocols include the length field itself in the declared length of a message. Without a way to make the LengthFieldBasedFrameDecoder subtract the size of the length field from the number of bytes to read, it can't handle these sorts of protocols.

HTTPResponseCompressor's API doesn't support selective compression

The current HTTPResponseCompressor API doesn't support selective compression:

https://github.com/apple/swift-nio/blob/ed28803a78b187e202c27a62c7a497fbe9cfbbd7/Sources/NIOHTTP1/HTTPResponseCompressor.swift#L59-L63

It's a duplex handler because it needs to grep the accept-encoding header out of the request in order to compress the response. The problem is that if the client says it supports compression, the response compressor will then always compress...

The only way that I can see to not compress a certain response is to remove the response compressor just before sending a response and then to immediately create a new compressor that then needs to be added to the pipeline. And that will also only work without pipelining (or with the pipelining helper enabled). I don't think the current API is good enough, tbh.

HTTPServerWithQuiescingDemo prints an error if the main code is wrapped in a function

Expected behavior

The demo should be able to shutdown gracefully on SIGINT.

Actual behavior

If the top-level code is wrapped in a function like so:

func run() throws {
    // main top-level code, unchanged
}

try run()

then upon receiving SIGINT the app prints:

ERROR: Cannot schedule tasks on an EventLoop that has already shut down.
    This will be upgraded to a forced crash in future SwiftNIO versions.

and seems to shut down everything and exit normally.

This is a pretty big inconvenience as in any more or less complex project the code that bootstraps the server will most likely be in a function.

The main suspicion is that some destructor is called at the end of run() that tries to use an EventLoop. No idea which object in the main code that might be. Any help would be greatly appreciated!

SwiftNIO-Extras version/commit hash

Current HEAD, a33bb16

Swift & OS version

Swift 5.10
macOS 14.3.1 (23D60)

Compile error on Ubuntu 22.10 x64

Expected behavior

Package builds without errors.

Actual behavior

Package builds with errors on Ubuntu 22.10 x64:

/root/***/.build/checkouts/swift-nio-extras/Sources/CNIOExtrasZlib/include/CNIOExtrasZlib.h:17:10: error: 'zlib.h' file not found
#include <zlib.h>
         ^

Steps to reproduce

  1. git clone git@github.com:apple/swift-nio-extras.git
  2. cd swift-nio-extras/
  3. swift build

If possible, minimal yet complete reproducer code (or URL to code)

n/a

SwiftNIO-Extras version/commit hash

n/a

Swift & OS version (output of swift --version && uname -a)

Swift version 5.8 (swift-5.8-RELEASE)
Target: x86_64-unknown-linux-gnu
Linux vapor-tg-bot 5.19.0-23-generic #24-Ubuntu SMP PREEMPT_DYNAMIC Fri Oct 14 15:39:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

NIOHTTPResponseDecompressor doesn't work without Content-Length header

Expected behavior

Compressed responses that do not contain a Content-Length header should be decompressed.

Actual behavior

If the Content-Length header is missing, the handler won't act on the body data and will pass the still-compressed data straight to the next handlers.

Steps to reproduce

Configure a pipeline as usual with a NIOHTTPResponseDecompressor and give it a compressed response without a Content-Length header
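A sketch of that repro using an EmbeddedChannel (the gzipped body bytes are elided; type names are from NIOHTTP1/NIOHTTPCompression):

```swift
import NIOCore
import NIOEmbedded
import NIOHTTP1
import NIOHTTPCompression

let channel = EmbeddedChannel()
try channel.pipeline.addHandler(NIOHTTPResponseDecompressor(limit: .none)).wait()

// A chunked response carries no Content-Length header, which is the
// case the decompressor currently ignores.
let headers = HTTPHeaders([("Content-Encoding", "gzip"),
                           ("Transfer-Encoding", "chunked")])
let head = HTTPResponseHead(version: .http1_1, status: .ok, headers: headers)
try channel.writeInbound(HTTPClientResponsePart.head(head))
// try channel.writeInbound(HTTPClientResponsePart.body(gzippedBody))
// Reading the inbound body part now yields the still-compressed bytes.
```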

SwiftNIO-Extras version/commit hash

Current version at the time of posting this issue is 1.4.0

Swift & OS version (output of swift --version && uname -a)

Independent of Swift/OS version

HTTPResponseCompressor creates responses that gzip cannot decompress

HTTPResponseCompressor creates response bodies that cannot be decompressed by command line gzip, as well as other tools that use a zlib-based gzip stream decompressor, such as nodejs.

I've found this issue originally because VSCode is in the process of switching to a new javascript debugger, which broke loading of source maps for my Vapor app. It turns out that the reason for this is that they switched to a (very popular) Javascript library called got to load source maps, which pipes compressed responses through a zlib-based stream decoder, and that fails to decompress the responses generated by HTTPResponseCompressor.

The problem can also be reproduced on the command line (see below).

I've originally opened the issue with Vapor here, which has some additional info as well.

For reference, my original issue with vscode-js-debug (which triggered the issue) is here.

Browsers, as well as some other http-related tools, seem to be more lenient with the responses and decompress them just fine, but gzip and the zlib-bindings of nodejs do not (and maybe other zlib-based tools as well).

Expected behavior

Piping a gzip-compressed response through command-line gzip should decompress it just fine. Here is an example from google.com (though it can be tried with any other server able to produce a gzipped response):

xxx@yyy$ curl -H 'Accept-encoding: gzip' --output /tmp/other-compressed-response.gz 'http://www.google.com'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5186  100  5186    0     0  50349      0 --:--:-- --:--:-- --:--:-- 50349

xxx@yyy$ file /tmp/other-compressed-response.gz 
/tmp/other-compressed-response.gz: gzip compressed data, max compression, original size modulo 2^32 11804

xxx@yyy$ gunzip -tv /tmp/other-compressed-response.gz 
/tmp/other-compressed-response.gz:	  OK

That is, gzip should decompress the response fine, and the file utility should show the correct decompressed length of the stream.

Actual behavior

However, using a simple Vapor demo app (which directly uses HTTPResponseCompressor) yields the following results.

xxx@yyy$ curl -H 'Accept-encoding: gzip' --output /tmp/swift-nio-compressed-response.gz 'http://127.0.0.1:8080/index.html'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   247    0   247    0     0  61750      0 --:--:-- --:--:-- --:--:-- 61750

xxx@yyy$ file /tmp/swift-nio-compressed-response.gz
/tmp/swift-nio-compressed-response.gz: gzip compressed data, original size modulo 2^32 4294901760 gzip compressed data, reserved method, ASCII, has CRC, extra field, has comment, encrypted, from FAT filesystem (MS-DOS, OS/2, NT), original size modulo 2^32 4294901760

xxx@yyy$ gunzip -tv /tmp/swift-nio-compressed-response.gz
gunzip: /tmp/swift-nio-compressed-response.gz: unexpected end of file
gunzip: /tmp/swift-nio-compressed-response.gz: uncompress failed
/tmp/swift-nio-compressed-response.gz:	  NOT OK

Using this demo app, gzip fails to decompress the gzip response. The file tool also shows a really weird header: the decompressed size shows up as 4294901760 (that's the actual issue – the failing tools all claim that the stream ends prematurely), along with lots of extra header flags that appear to be unintentional.

The minimal Vapor demo app can be downloaded from the issue at Vapor. Vapor uses the response compressor directly (https://github.com/vapor/vapor/blob/master/Sources/Vapor/HTTP/Server/HTTPServer.swift#L396-L405), so it's likely the problem is with NIO rather than Vapor (though this is an assumption).

Steps to reproduce

  • Download demo.zip from the issue at Vapor.

  • Build it

  • Request the index.html file via cURL, saving it to the filesystem, but do not add the --decompress flag (cURL itself can decompress it too, like all browsers)

  • Check the compressed response with file or try to decompress with gzip

  • Alternatively (for the last step), you can also try it with nodejs's zlib bindings, as used in "got" – I've added sample code for this in the top-most post of the Vapor issue.

If possible, minimal yet complete reproducer code (or URL to code)

demo.zip – It's a Vapor app, though. I'd have to look into how to make a more minimal version, using just swift-nio-extras.

SwiftNIO-Extras version/commit hash

"revision": "7cd24c0efcf9700033f671b6a8eaa64a77dd0b72",
"version": "1.5.1"

Swift & OS version (output of swift --version && uname -a)

Apple Swift version 5.2.4 (swiftlang-1103.0.32.9 clang-1103.0.32.53)
Target: x86_64-apple-darwin19.4.0
Darwin <redacted> 19.4.0 Darwin Kernel Version 19.4.0: Wed Mar  4 22:28:40 PST 2020; root:xnu-6153.101.6~15/RELEASE_X86_64 x86_64

Deflate decompression hangs on trailing garbage

I tried to extend HTTPRequestDecompressorTest with this test case:

    func testDecompressionTrailingData() throws {
        // Valid compressed data with some trailing garbage
        let compressed = ByteBuffer(bytes: [120, 156, 99, 0, 0, 0, 1, 0, 1] + [1, 2, 3])

        let channel = EmbeddedChannel()
        try channel.pipeline.addHandler(NIOHTTPRequestDecompressor(limit: .none)).wait()
        let headers = HTTPHeaders([("Content-Encoding", "deflate"), ("Content-Length", "\(compressed.readableBytes)")])
        try channel.writeInbound(HTTPServerRequestPart.head(.init(version: .init(major: 1, minor: 1), method: .POST, uri: "https://nio.swift.org/test", headers: headers)))

        try channel.writeInbound(HTTPServerRequestPart.body(compressed))
    }

When I try to run it, the test hangs forever.

If confirmed, this is a security issue which can cause a DoS.

New release fails to be fetched using swift package update

Expected behavior

To correctly update package repository

Actual behavior

Resolving https://github.com/apple/swift-nio-extras.git at 1.10.1
... /.build/checkouts/swift-nio-extras: error: Couldn’t update repository submodules:
fatal: No url found for submodule path '.SourceKitten' in .gitmodules

Steps to reproduce

  1. Create a new Swift project, add swift-nio-extras as a dependency
  2. swift package update

SwiftNIO-Extras version/commit hash

Version 1.10.1

Swift & OS version (output of swift --version && uname -a)

Swift version 5.4.2 (swift-5.4.2-RELEASE)
Target: x86_64-unknown-linux-gnu

Feature Request: WebSocket Compression Support

As mentioned here, it would be great if SwiftNIO supported WebSocket compression either out of the box or as a separate library (e.g. this library). RFC 7692 details the client–server negotiation mechanism and an initial compression algorithm that clients/servers can support.

Personally, I’m a user of Vapor, which is built on top of SwiftNIO. I don’t have a clear picture right now how to best implement either of the above. However, here are some potential resources:

compilation warnings

Since #154, compiling swift-nio-extras produces warnings because the executable products aren't correctly identified as .executableTargets.

Provide a "MaxConcurrentConnections" handler

Many kinds of servers want to limit the maximum number of concurrent connections they'll accept in order to bound their resource commitment. It's a bit non-obvious how to do this in NIO, but it's not terribly difficult to do. Things that are non-obvious but straightforward are a fairly good candidate for implementation in this module.

In this case, the mechanism for achieving this is to provide a ChannelDuplexHandler that users would insert .first in the server channel pipeline (using serverChannelInitializer). This handler would delay the read call in cases where the maximum number of connections has been reached. It can keep track of new connections using inboundIn and it can record when connections close by using the Channel.closeFuture.

A really great implementation would be able to tolerate both a "flexible" maximum and a hard cap. In hard-cap mode we'd need to set maxMessagesPerRead to 1 in order to prevent over-reading (though the handler could dynamically twiddle this setting). In "flexible" mode we'd allow a slight overrun, up to maxMessagesPerRead - 1 extra connections.
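A rough sketch of the shape such a handler could take (name and details hypothetical; the hard-cap maxMessagesPerRead handling and handler-removal edge cases described above are omitted):

```swift
import NIOCore

/// Hypothetical handler for the *server* channel pipeline: suppresses
/// read() once the connection limit is reached, so no new child
/// channels are accepted until an existing one closes.
final class MaxConcurrentConnectionsHandler: ChannelDuplexHandler {
    typealias InboundIn = Channel    // accepted child channels
    typealias InboundOut = Channel
    typealias OutboundIn = Never

    private let maxConnections: Int
    private var activeConnections = 0
    private var readPending = false

    init(maxConnections: Int) {
        self.maxConnections = maxConnections
    }

    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        let child = self.unwrapInboundIn(data)
        self.activeConnections += 1
        child.closeFuture.whenComplete { _ in
            // closeFuture callbacks may fire on the child's event loop;
            // hop back to ours before touching our state.
            context.eventLoop.execute {
                self.activeConnections -= 1
                if self.readPending {
                    self.readPending = false
                    context.read()
                }
            }
        }
        context.fireChannelRead(data)
    }

    func read(context: ChannelHandlerContext) {
        if self.activeConnections < self.maxConnections {
            context.read()
        } else {
            // Hold the read until a connection closes.
            self.readPending = true
        }
    }
}
```

It would be inserted via serverChannelInitializer, e.g. `.serverChannelInitializer { channel in channel.pipeline.addHandler(MaxConcurrentConnectionsHandler(maxConnections: 256)) }`.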

New release with support for visionOS?

I’m working on a project where we use gRPC Swift and the latest version 1.19.0 there depends on swift-nio-extras version 1.4.0. I see that there was #201 ~recently that effectively adds support for visionOS, but there’s no release with that PR yet.

My questions:

  • Are there plans for doing a swift-nio-extras release any time soon?
  • Are there any plans to do a gRPC Swift release any time soon that then points to that new swift-nio-extras release? (I’m asking here because it seems @glbrntt does releases for both)

As a workaround, I’m currently adding a dummy dependency to my project that pins swift-nio-extras to the specific commit that merged #201, but I’d like to get rid of that sooner rather than later 🙂

How to get printable debug information out of Debug(In|Out)boundEventsHandler?

Expected behavior

I'd like to print/log plain text conversations between client and server. Using the Debug(In|Out)boundEventsHandler, I'm only getting "NIOAny { ... }", which I don't understand how to convert to plain text.

After making the following change to the handler, I'm able to print the conversation. However, this cannot be done in the logger callback, as it doesn't have access to self.unwrapOutboundIn.

    public func write(context: ChannelHandlerContext, data: NIOAny, promise: EventLoopPromise<Void>?) {
+       let any = self.unwrapOutboundIn(data)
+       let io = any as! IOData
+       if case let IOData.byteBuffer(buffer) = io {
+           print(String(buffer: buffer))
+       }
        logger(.write(data: data), context)
        context.write(data, promise: promise)
    }

Actual behavior

Writing NIOAny { ByteBuffer { readerIndex: 0, writerIndex: 195, readableBytes: 195, capacity: 256, storageCapacity: 256, slice: _ByteBufferSlice { 0..<256 }, storage: 0x0000000102149370 (256 bytes) } } in handler1

SwiftNIO-Extras version

        .package(url: "https://github.com/apple/swift-nio.git", from: "2.25.1"),
        .package(url: "https://github.com/apple/swift-nio-extras.git", from: "1.7.0"),

Swift & OS version

Apple Swift version 5.3.2 (swiftlang-1200.0.45 clang-1200.0.32.28)
Target: x86_64-apple-darwin20.2.0
Darwin MacBook-Pro.localdomain 20.2.0 Darwin Kernel Version 20.2.0: Wed Dec 2 20:39:59 PST 2020; root:xnu-7195.60.75~1/RELEASE_X86_64 x86_64

Deflate decompression doesn't fail for truncated input

I tried to extend HTTPRequestDecompressorTest with this test case:

    func testDecompressionTruncatedInput() throws {
        // Truncated compressed data
        let compressed = ByteBuffer(bytes: [120, 156, 99, 0])

        let channel = EmbeddedChannel()
        try channel.pipeline.addHandler(NIOHTTPRequestDecompressor(limit: .none)).wait()
        let headers = HTTPHeaders([("Content-Encoding", "deflate"), ("Content-Length", "\(compressed.readableBytes)")])
        try channel.writeInbound(HTTPServerRequestPart.head(.init(version: .init(major: 1, minor: 1), method: .POST, uri: "https://nio.swift.org/test", headers: headers)))

        do {
            try channel.writeInbound(HTTPServerRequestPart.body(compressed))
            XCTFail("writeInbound should fail")
        } catch let error as NIOHTTPDecompression.DecompressionError {
            switch error {
            case .inflationError(Int(Z_BUF_ERROR)):
                // ok
                break
            default:
                XCTFail("Unexpected error: \(error)")
            }
        }
    }

As you can see, I'm sending truncated, invalid compressed data. I'd expect to get DecompressionError.inflationError(-5); instead, decompression succeeds.

(I guess the root cause is in z_stream_s.inflatePart(to:minimumCapacity:), which always calls inflate with the Z_NO_FLUSH parameter.)

Is the problem in my test case, or in the NIO code?

Using a vendored zlib SwiftPM package rather than the system copy of zlib

Discussion on which solution is better:

  1. a vendored copy of zlib as a SwiftPM package
  2. the system copy of zlib.

@Lukasa proposed using a separate SwiftPM package that is appropriately versioned and then adding an appropriate dependency on it, rather than integrating a vendored copy of zlib into swift-nio directly (-1 from @Lukasa).

If swift-nio 2.0 goes with the vendored zlib SwiftPM package, then swift-nio adopters can use the same SwiftPM package.

we need to `fflush(stdout)` in the DebugInbound/OutboundEventsHandlers

print(message + " in \(context.name)")

This print will only happen once stdio flushes the stdout buffer. On a terminal that's usually immediate (because stdout is line buffered), but in production you're likely writing to a pipe, in which case stdout is fully buffered, i.e. you'll only see these prints much later.

What we need after the print is an fflush(stdout) to flush it straight away.
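A sketch of the fix, wrapping the print in a helper (the helper name is illustrative, not the handler's API):

```swift
import Foundation

/// Print a debug message and flush stdout immediately, so the output is
/// visible right away even when stdout is fully buffered (e.g. a pipe).
@discardableResult
func debugLog(_ message: String) -> String {
    print(message)
    fflush(stdout)  // push the buffered bytes out now rather than "eventually"
    return message
}
```

In the handlers, `print(message + " in \(context.name)")` would become `debugLog(message + " in \(context.name)")`.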
