apple / swift-nio-extras
Useful code around SwiftNIO.
Home Page: https://swiftpackageindex.com/apple/swift-nio-extras/main/documentation/nioextras
License: Apache License 2.0
This function blocks; we should provide an async-safe alternative.
Many kinds of servers want to limit the maximum number of concurrent connections they'll accept in order to bound their resource commitment. It's a bit non-obvious how to do this in NIO, but it's not terribly difficult to do. Things that are non-obvious but straightforward are a fairly good candidate for implementation in this module.
In this case, the mechanism for achieving this is to provide a ChannelDuplexHandler that users would insert first in the server channel pipeline (using serverChannelInitializer). This handler would delay the read call in cases where the maximum number of connections has been reached. It can keep track of new connections using inboundIn, and it can record when connections close by using the Channel.closeFuture.
A really great implementation would be able to tolerate both a "flexible" maximum and a hard cap. In hard-cap mode we'd need to set maxMessagesPerRead to 1 in order to prevent over-reading (though the handler could dynamically twiddle this setting). In "flexible" mode we'd allow a slight overrun, up to maxMessagesPerRead - 1 extra connections.
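The bookkeeping such a handler would need can be sketched in plain Swift, independent of the NIO types. ConnectionGate and its method names are illustrative, not an existing API; the real handler would wrap this state machine:

```swift
/// A sketch of the gating state a connection-limiting handler would wrap.
/// `ConnectionGate` is hypothetical, not shipped API.
struct ConnectionGate {
    let maxConnections: Int
    private(set) var activeConnections = 0
    private var readPending = false

    /// Called when the server channel wants to `read`: allow it through only
    /// while we are below the cap, otherwise remember that a read is wanted.
    mutating func shouldRead() -> Bool {
        if self.activeConnections < self.maxConnections {
            return true
        }
        self.readPending = true
        return false
    }

    /// Called for each accepted child channel (via `channelRead`).
    mutating func connectionOpened() {
        self.activeConnections += 1
    }

    /// Called from the child channel's `closeFuture`: frees a slot and
    /// reports whether a previously suppressed `read` should now be issued.
    mutating func connectionClosed() -> Bool {
        self.activeConnections -= 1
        let resume = self.readPending
        self.readPending = false
        return resume
    }
}
```

The real handler would call shouldRead() from its read override, and connectionOpened()/connectionClosed() from channelRead and the accepted channel's closeFuture, respectively.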
As mentioned, public convenience init(initialByteBufferCapacity: Int) is now redundant, but it has been left in purely for backwards compatibility when referring to the initializer directly rather than calling it, and it can be removed in a future major version:
swift-nio-extras/Sources/NIOHTTPCompression/HTTPResponseCompressor.swift
Lines 132 to 137 in d1ead62
Currently, the LengthFieldBasedFrameDecoder reads a length, and then reads however many bytes the length specifies. Some protocols include the length of the length field itself in the declared length of a message. Without a way to make the LengthFieldBasedFrameDecoder subtract the length of the length field from the size to read, it can't handle these sorts of protocols.
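To illustrate the adjustment this issue asks for, assume a hypothetical frame whose 4-byte length field declares a value that includes the field itself (the numbers here are made up for illustration):

```swift
// Hypothetical frame: a 4-byte length field declaring 12, followed by the body.
// If the declared length includes the length field itself, the decoder must
// subtract the field's width before reading the body.
let declaredLength = 12
let lengthFieldWidth = 4
let bodyBytesToRead = declaredLength - lengthFieldWidth
// Only `bodyBytesToRead` (8) payload bytes actually follow the length field;
// reading `declaredLength` bytes would over-read into the next frame.
```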
The
do {
    try someOperation()
    XCTFail("should throw") // easy to forget
} catch let error as SomethingError {
    XCTAssertEqual(.something, error)
} catch {
    XCTFail("wrong error")
}
pattern is not only very long, it's also very error prone. If you forget any of the XCTFails, you might not be testing what it looks like you're testing.
XCTAssertThrowsError(try someOperation()) { error in
    XCTAssertEqual(.something, error as? SomethingError)
}
is much safer and shorter.
We should replace all uses of this pattern in Tests/**.
See also apple/swift-nio#1430, which does the same for swift-nio.
We should have API docs for this at https://docs.swiftnio.io.
Compressed responses that do not contain a Content-Length header should be decompressed.
If the Content-Length header is missing, the handler won't act on the body data and will directly pass the compressed data to the next handlers.
Configure a pipeline as usual with a NIOHTTPResponseDecompressor and give it a compressed response without a Content-Length header.
Current version at the time of posting this issue is 1.4.0.
Output of swift --version && uname -a: independent of Swift/OS version.
It would be useful to have a general ChannelInboundHandler that can decode a length-delimited framing protocol. This is a protocol that has messages prefixed with an encoding of their length. Such a protocol would look like this:
<fixed width integer n in network byte order><n bytes>
The simplest version of this would be one that uses a fixed-width integer.
For now we won't bother with varint support.
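Ignoring the NIO plumbing, the framing itself can be sketched in plain Swift. splitFrames is an illustrative helper, not a proposed API; the point is that a decoder waits until the fixed-width prefix and the full body are available, then emits the body as one frame:

```swift
/// Splits a byte stream into frames delimited by a 4-byte big-endian
/// (network byte order) length prefix. Incomplete trailing data is
/// returned as the remainder, mirroring a decoder's "need more data" state.
func splitFrames(_ bytes: [UInt8]) -> (frames: [[UInt8]], remainder: [UInt8]) {
    var frames: [[UInt8]] = []
    var index = bytes.startIndex
    while bytes.count - index >= 4 {
        // Decode the fixed-width UInt32 prefix, most significant byte first.
        let length = bytes[index..<index + 4].reduce(0) { ($0 << 8) | Int($1) }
        // Stop if the full body hasn't arrived yet.
        guard bytes.count - index - 4 >= length else { break }
        index += 4
        frames.append(Array(bytes[index..<index + length]))
        index += length
    }
    return (frames, Array(bytes[index...]))
}
```

A real ByteToMessageDecoder would do the same check per decode call, returning .needMoreData instead of accumulating a remainder.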
We need to link the docs from the README.
I tried to extend HTTPRequestDecompressorTest with this test case:
func testDecompressionTrailingData() throws {
    // Valid compressed data with some trailing garbage
    let compressed = ByteBuffer(bytes: [120, 156, 99, 0, 0, 0, 1, 0, 1] + [1, 2, 3])
    let channel = EmbeddedChannel()
    try channel.pipeline.addHandler(NIOHTTPRequestDecompressor(limit: .none)).wait()
    let headers = HTTPHeaders([("Content-Encoding", "deflate"), ("Content-Length", "\(compressed.readableBytes)")])
    try channel.writeInbound(HTTPServerRequestPart.head(.init(version: .init(major: 1, minor: 1), method: .POST, uri: "https://nio.swift.org/test", headers: headers)))
    try channel.writeInbound(HTTPServerRequestPart.body(compressed))
}
and when I tried to run it, the test hung forever.
If confirmed, this is a security issue which can cause DoS.
As mentioned here, it would be great if SwiftNIO supported WebSocket compression either out of the box or as a separate library (e.g. this library). RFC 7692 details the client–server negotiation mechanism and an initial compression algorithm that clients/servers can support.
Personally, I’m a user of Vapor, which is built on top of SwiftNIO. I don’t have a clear picture right now how to best implement either of the above. However, here are some potential resources:
The demo should be able to shut down gracefully on SIGINT.
If the top-level code is wrapped in a function like so:
func run() throws {
// main top-level code, unchanged
}
try run()
then upon receiving SIGINT the app prints:
ERROR: Cannot schedule tasks on an EventLoop that has already shut down.
This will be upgraded to a forced crash in future SwiftNIO versions.
and seems to shut down everything and exit normally.
This is a pretty big inconvenience as in any more or less complex project the code that bootstraps the server will most likely be in a function.
The main suspicion is that some destructor is called at the end of run()
that tries to use an EventLoop. No idea which object in the main code that might be. Any help would be greatly appreciated!
Current HEAD, a33bb16
Swift 5.10
macOS 14.3.1 (23D60)
I tried to extend HTTPRequestDecompressorTest with this test case:
func testDecompressionTruncatedInput() throws {
    // Truncated compressed data
    let compressed = ByteBuffer(bytes: [120, 156, 99, 0])
    let channel = EmbeddedChannel()
    try channel.pipeline.addHandler(NIOHTTPRequestDecompressor(limit: .none)).wait()
    let headers = HTTPHeaders([("Content-Encoding", "deflate"), ("Content-Length", "\(compressed.readableBytes)")])
    try channel.writeInbound(HTTPServerRequestPart.head(.init(version: .init(major: 1, minor: 1), method: .POST, uri: "https://nio.swift.org/test", headers: headers)))
    do {
        try channel.writeInbound(HTTPServerRequestPart.body(compressed))
        XCTFail("writeInbound should fail")
    } catch let error as NIOHTTPDecompression.DecompressionError {
        switch error {
        case .inflationError(Int(Z_BUF_ERROR)):
            // ok
            break
        default:
            XCTFail("Unexpected error: \(error)")
        }
    }
}
As you can see, I'm sending truncated, invalid compressed data. I'd expect to get DecompressionError.inflationError(-5). Instead, decompression succeeds.
(I guess the root cause is in z_stream_s.inflatePart(to:minimumCapacity:), which always calls inflate with the Z_NO_FLUSH parameter.)
Is the problem in my test case, or in the NIO code?
I'd like to print/log plain-text conversations between client and server. Using the Debug(In|Out)boundEventsHandler, I'm only getting "NIOAny { ... }", which I don't understand how to convert to plain text.
After making the following change to the handler, I'm able to print the conversation. However, this cannot be done in the callback, as it doesn't have access to self.unwrapOutboundIn.
public func write(context: ChannelHandlerContext, data: NIOAny, promise: EventLoopPromise<Void>?) {
+   let any = self.unwrapOutboundIn(data)
+   let io = any as! IOData
+   if case let IOData.byteBuffer(buffer) = io {
+       print(String(buffer: buffer))
+   }
    logger(.write(data: data), context)
    context.write(data, promise: promise)
}
Writing NIOAny { ByteBuffer { readerIndex: 0, writerIndex: 195, readableBytes: 195, capacity: 256, storageCapacity: 256, slice: _ByteBufferSlice { 0..<256 }, storage: 0x0000000102149370 (256 bytes) } } in handler1
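A self-contained version of that idea is a dedicated outbound handler that does the unwrapping itself. This is a sketch assuming the handler sits where the pipeline writes IOData; PlaintextLoggingHandler is an illustrative name, not an existing type:

```swift
import NIOCore

/// Sketch: logs outgoing ByteBuffers as plain text before passing them on.
final class PlaintextLoggingHandler: ChannelOutboundHandler {
    typealias OutboundIn = IOData
    typealias OutboundOut = IOData

    func write(context: ChannelHandlerContext, data: NIOAny, promise: EventLoopPromise<Void>?) {
        // Unlike the closure-based debug handler, a full handler has access
        // to self.unwrapOutboundIn and can look inside the NIOAny.
        if case .byteBuffer(let buffer) = self.unwrapOutboundIn(data) {
            print(String(buffer: buffer))  // interpret the bytes as UTF-8 text
        }
        context.write(data, promise: promise)
    }
}
```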
.package(url: "https://github.com/apple/swift-nio.git", from: "2.25.1"),
.package(url: "https://github.com/apple/swift-nio-extras.git", from: "1.7.0"),
Apple Swift version 5.3.2 (swiftlang-1200.0.45 clang-1200.0.32.28)
Target: x86_64-apple-darwin20.2.0
Darwin MacBook-Pro.localdomain 20.2.0 Darwin Kernel Version 20.2.0: Wed Dec 2 20:39:59 PST 2020; root:xnu-7195.60.75~1/RELEASE_X86_64 x86_64
When using NIOWritePCAPHandler with UNIX Domain Sockets, it crashes.
That's because UNIX Domain Sockets don't have IP addresses and ports.
If the user passes a fakeLocalAddress/fakeRemoteAddress, everything's good, but we should just have a default one in case the user doesn't.
NIOExtras needs a cocoapod
Calling initiateShutdown multiple times is not a good idea. We should document that it's illegal and (deterministically) fatalError if the user still does that.
Discussion on which solution is better: @Lukasa proposed using a separate SwiftPM package that is appropriately versioned, and then adding an appropriate dependency on it, rather than integrating a vendored copy of zlib into swift-nio directly (-1 from @Lukasa). If swift-nio 2.0 goes with the vendored-zlib SwiftPM package, then swift-nio adopters can use the same SwiftPM package.
I'm talking about the code that can be found here
The NIOWebSocket upgrade helpers are reliant on the NIOHTTP1 types, specifically HTTPHead and HTTPHeaders. If a project is built using the new HTTP Types, it will need to convert from the supplied NIOHTTP1 types to the new types. In theory the project could have its own conversion code, but it might be a good idea to make the NIOExtras conversion code the official implementation.
Another situation where this conversion is being made is in the OpenAPI transports. Both Vapor and Hummingbird rely on the NIOHTTP1 types and have to convert to the new HTTP types to pass on the HTTP request to the openapi runtime and convert back to return the HTTP response.
CI should check API breakages
Package builds without errors.
Package builds with errors on Ubuntu 22.10 x64:
/root/***/.build/checkouts/swift-nio-extras/Sources/CNIOExtrasZlib/include/CNIOExtrasZlib.h:17:10: error: 'zlib.h' file not found
#include <zlib.h>
^
git clone [email protected]:apple/swift-nio-extras.git
cd swift-nio-extras/
swift build
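For reference, this error usually just means the zlib development headers aren't installed on the machine; on Ubuntu they come from the zlib1g-dev package (assumption: that is what's missing here):

```shell
# Install zlib's C headers, then retry the build.
apt-get update && apt-get install -y zlib1g-dev
swift build
```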
Output of swift --version && uname -a:
Swift version 5.8 (swift-5.8-RELEASE)
Target: x86_64-unknown-linux-gnu
Linux vapor-tg-bot 5.19.0-23-generic #24-Ubuntu SMP PREEMPT_DYNAMIC Fri Oct 14 15:39:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
HTTPResponseCompressor creates response bodies that cannot be decompressed by command-line gzip, as well as other tools that use a zlib-based gzip stream decompressor, such as Node.js.
I've found this issue originally because VSCode is in the process of switching to a new javascript debugger, which broke loading of source maps for my Vapor app. It turns out that the reason for this is that they switched to a (very popular) Javascript library called got to load source maps, which pipes compressed responses through a zlib-based stream decoder, and that fails to decompress the responses generated by HTTPResponseCompressor.
The problem can also be reproduced on the command line (see below).
I've originally opened the issue with Vapor here, which has some additional info as well.
For reference, my original issue with vscode-js-debug (which triggered the issue) is here.
Browsers, as well as some other http-related tools, seem to be more lenient with the responses and decompress them just fine, but gzip and the zlib-bindings of nodejs do not (and maybe other zlib-based tools as well).
Piping a gzip-compressed response through command-line gzip should decompress it just fine. Here is an example from google.com (though it can be tried with any other server able to produce a gzipped response):
xxx@yyy$ curl -H 'Accept-encoding: gzip' --output /tmp/other-compressed-response.gz 'http://www.google.com'
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 5186 100 5186 0 0 50349 0 --:--:-- --:--:-- --:--:-- 50349
xxx@yyy$ file /tmp/other-compressed-response.gz
/tmp/other-compressed-response.gz: gzip compressed data, max compression, original size modulo 2^32 11804
xxx@yyy$ gunzip -tv /tmp/other-compressed-response.gz
/tmp/other-compressed-response.gz: OK
E.g. gzip should decompress the response fine, and the file utility should show the correct decompressed length of the stream.
However, using a simple Vapor demo app (which directly uses HTTPResponseCompressor) yields the following results.
xxx@yyy$ curl -H 'Accept-encoding: gzip' --output /tmp/swift-nio-compressed-response.gz 'http://127.0.0.1:8080/index.html'
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 247 0 247 0 0 61750 0 --:--:-- --:--:-- --:--:-- 61750
xxx@yyy$ file /tmp/swift-nio-compressed-response.gz
/tmp/swift-nio-compressed-response.gz: gzip compressed data, original size modulo 2^32 4294901760 gzip compressed data, reserved method, ASCII, has CRC, extra field, has comment, encrypted, from FAT filesystem (MS-DOS, OS/2, NT), original size modulo 2^32 4294901760
xxx@yyy$ gunzip -tv /tmp/swift-nio-compressed-response.gz
gunzip: /tmp/swift-nio-compressed-response.gz: unexpected end of file
gunzip: /tmp/swift-nio-compressed-response.gz: uncompress failed
/tmp/swift-nio-compressed-response.gz: NOT OK
Using this demo app, gzip fails to decompress the gzip response. Also, the file tool shows a really weird header, e.g. the decompressed size shows up as 4294901760 (that's the actual issue: the failing tools all claim that the stream ends prematurely), and lots of extra header flags that appear to be unintentional.
The minimal Vapor demo app can be downloaded from the issue at Vapor here -> Vapor uses the response decompressor directly (https://github.com/vapor/vapor/blob/master/Sources/Vapor/HTTP/Server/HTTPServer.swift#L396-L405), so it's likely the problem is with NIO rather than Vapor (though this is an assumption).
Download demo.zip from the issue at Vapor.
Build it.
Request the index.html file via cURL to the filesystem, but do not add the --decompress flag (cURL itself can decompress it too, like all browsers).
Check the compressed response with file, or try to decompress it with gzip.
Alternatively (for the last step), you can also try it with Node.js's zlib bindings, as used in "got" -> I've added sample code for this in the top-most post of the Vapor issue.
demo.zip – It's a Vapor app, though. I'd have to look into how to make a more minimal version, using just swift-nio-extras.
"revision": "7cd24c0efcf9700033f671b6a8eaa64a77dd0b72",
"version": "1.5.1"
Output of swift --version && uname -a:
Apple Swift version 5.2.4 (swiftlang-1103.0.32.9 clang-1103.0.32.53)
Target: x86_64-apple-darwin19.4.0
Darwin <redacted> 19.4.0 Darwin Kernel Version 19.4.0: Wed Mar 4 22:28:40 PST 2020; root:xnu-6153.101.6~15/RELEASE_X86_64 x86_64
The opposite of HTTPResponseCompressor would be nice for use with clients.
If you create a new QuiescingHelper() but then end up not using it, it'll crash your program with a leaked promise. There's probably a better solution for that. If this should remain the API contract, we can probably give the user a better message than the leaked promise?
I just had to create quite a long example on how to use the universal bootstrap over at apple/swift-nio-examples#48 . That is too long and unwieldy. NIO needs to do better.
We should create a new module(?) in this repo (swift-nio-extras), possibly called NIOUniversal or so, which gets tools that are useful for using NIO universally across multiple networking platforms (BSD Sockets, Network.framework, and maybe soon IOCP).
Since #154, compiling swift-nio-extras now produces warnings because the executable products aren't correctly identified as .executableTargets.
The HTTPType conversion ChannelHandlers (HTTP1ToHTTPServerCodec etc.) should conform to RemovableChannelHandler.
The current HTTPResponseCompressor API doesn't support selective compression:
It's a duplex handler because it needs to grab the accept-encoding header from the request in order to compress the response. The problem is that if the client says it supports compression, the response compressor will then always compress...
The only way that I can see to not compress a certain response is to remove the response compressor just before sending a response, and then to immediately create a new compressor that then needs to be added to the pipeline. And that will also only work without pipelining (or with the pipelining helper enabled). Don't think the current API is good enough tbh.
As we've seen so far in the SSWG pitches/proposals for the nio-postgres (nio-redis was initially modeled after this proposal) and nio-http packages, there's been this idea of NIO-based library authors providing a means for users to either pass in their own EventLoopGroup (which they might intend to share, while still maintaining ownership over its lifecycle), or to let the library decide that detail for itself.
NIORedis used to have an implementation, and this was previously proposed as part of NIO core, but it was rejected as too broad a solution to be in NIO proper.
Could this have a home here? (bikeshedding the naming, of course)
public enum EventLoopGroupProvider {
    /// The library is given access to use this as a resource, but is not allowed to manage the lifecycle.
    case shared(EventLoopGroup)
    /// The library is left to decide for itself how to create one as an implementation detail,
    /// and is fully responsible for this resource.
    case createNew
}
Perhaps a third case could be given that is .unique(EventLoopGroup), for the cases where a library might be the one that wants to provide EventLoopGroups, but is giving ownership of it to whoever is asking for an EventLoopGroupProvider?
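For illustration, here is how a library might consume such a provider. ExampleClient and its shutdown method are hypothetical; the point is that only the .createNew path owns (and must shut down) the group:

```swift
import NIOCore
import NIOPosix

/// Hypothetical library type consuming an EventLoopGroupProvider.
final class ExampleClient {
    private let group: EventLoopGroup
    private let ownsGroup: Bool

    init(eventLoopGroupProvider: EventLoopGroupProvider) {
        switch eventLoopGroupProvider {
        case .shared(let group):
            // Borrowed: the caller keeps lifecycle responsibility.
            self.group = group
            self.ownsGroup = false
        case .createNew:
            // Owned: the library must shut this down itself.
            self.group = MultiThreadedEventLoopGroup(numberOfThreads: 1)
            self.ownsGroup = true
        }
    }

    func syncShutdown() throws {
        if self.ownsGroup {
            try self.group.syncShutdownGracefully()
        }
    }
}
```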
NIOWritePCAPHandler should implement RemovableChannelHandler.
To correctly update package repository
Resolving https://github.com/apple/swift-nio-extras.git at 1.10.1
... /.build/checkouts/swift-nio-extras: error: Couldn’t update repository submodules:
fatal: No url found for submodule path '.SourceKitten' in .gitmodules
Version 1.10.1
Output of swift --version && uname -a:
Swift version 5.4.2 (swift-5.4.2-RELEASE)
Target: x86_64-unknown-linux-gnu
CI should enable TSan
I'm working on a project where we use gRPC Swift, and the latest version there, 1.19.0, depends on swift-nio-extras version 1.4.0. I see that #201 was merged ~recently, which effectively adds support for visionOS, but there's no release with that PR yet.
My questions:
As a workaround, I’m currently adding a dummy dependency to my project that pins swift-nio-extras to the specific commit that merged #201, but I’d like to get rid of that sooner rather than later 🙂
This print will only happen once stdio flushes the stdout buffer. On a terminal that'll be immediate (usually, because stdout is line-buffered), but in production you're likely writing to a pipe, and then stdout is fully buffered, i.e. you'll only see these prints much later.
What we need after the print is an fflush(stdout) to flush it straight away.
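A minimal sketch of the fix (assuming Foundation, which re-exports the C stdio symbols on both Darwin and Linux):

```swift
import Foundation

print("handled request")  // lands in stdio's stdout buffer first
fflush(stdout)            // force the buffer out now, even when stdout is a pipe
```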