ricea / compressstream-explainer
Compression Streams Explained
License: Apache License 2.0
We have been discussing whether "CompressStream" is the best name for the interface. Options we are currently considering:
Please let us know what you think.
I'd love to see this be as deterministic as our text encoding setup, even if it needs to evolve over time somehow.
If we want to be forward compatible with non-gzip algorithms, it seems v1 will have to take a dictionary argument with a member that defaults to gzip, so that passing anything else throws.
Otherwise the default behaviour in legacy implementations will be to ignore the passed argument and simply use gzip, which is probably not desirable. Although I suppose if the type of compression is exposed in v2, feature detection will be possible using that as well. So maybe this is all okay. Leaving this here for your consideration.
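As a sketch of the feature-detection idea (using the CompressionStream name that eventually shipped, and assuming the constructor throws for formats it does not recognize, as current implementations do with a TypeError):

```javascript
// Feature detection for compression formats: construct and catch.
// "no-such-format" style names are deliberately bogus; real supported
// values in shipping implementations include "gzip" and "deflate".
function compressionFormatSupported(format) {
  if (typeof CompressionStream === 'undefined') return false;
  try {
    new CompressionStream(format);
    return true;
  } catch {
    return false;
  }
}
```

A legacy implementation that silently ignored the argument would defeat this check, which is why throwing on unknown formats from v1 matters.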
It's ambiguous whether the "deflate" format is raw DEFLATE or includes the zlib header and footer. It might be good to rename it; maybe "deflate-raw".
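A small sketch illustrating the ambiguity: with the zlib-wrapped interpretation of "deflate" (the one implementations adopted), the first output byte is the zlib CMF byte (0x78 for DEFLATE with a 32 KiB window) rather than raw DEFLATE data.

```javascript
// Compress some text and return the first byte of the output.
// For the zlib-wrapped "deflate" format this is the zlib CMF byte
// (0x78); raw DEFLATE output would have no such header.
async function firstCompressedByte(format, text) {
  const compressed = new Blob([text])
    .stream()
    .pipeThrough(new CompressionStream(format));
  const bytes = new Uint8Array(await new Response(compressed).arrayBuffer());
  return bytes[0];
}
```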
The Chromium Intent to Implement thread mentions local storage usage. Wanted to bring up snappy. Reasons why it's a good future candidate:
Not claiming snappy should replace gzip as the default; I just wanted to make you aware of it for future-proof design. gzip/deflate is great for data sent over a network; snappy is great for data stored on disk.
Feedback from the Web Performance group at TPAC 2019 indicated that browsers will typically accept gzip input with incorrect checksums. We need to decide what the behaviour will be for DecompressionStream.
It depends on the internal state and the input, but when a new chunk is written to a CompressStream, the CompressStream has two choices: buffer some input bytes until it can generate complete compressed output, or flush the buffered data (e.g. by performing Zlib's "sync flush").
Let's suppose we want to stream a sequence of JSON objects through a CompressStream, with a DecompressStream (or some non-web decompressor) at the other peer, and there could be some latency between objects. It would be good if the receiver side could start processing new chunks as soon as possible. However, without an API to instruct the stream whether or not to flush, CompressStream needs to decide that by itself.
In terms of efficiency this might be negligible, but from a compatibility point of view it may be worth investigating.
If this kind of usage is just out of scope, that's OK. Never mind :) In that case, I suggest discussing it in the explainer or somewhere more appropriate.
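The scenario above can be sketched as a round trip (using the CompressionStream/DecompressionStream names that eventually shipped, with 'gzip' assumed as the format):

```javascript
// Newline-delimited JSON objects piped through CompressionStream and
// DecompressionStream. Without an explicit flush API, the compressor
// decides when buffered input becomes visible downstream, so a live
// receiver may not see each object as soon as it was written, even
// though the completed round trip is lossless.
async function gzipRoundTrip(objects) {
  const lines = objects.map((o) => JSON.stringify(o) + '\n');
  const decompressed = new Blob(lines)
    .stream()
    .pipeThrough(new CompressionStream('gzip'))
    .pipeThrough(new DecompressionStream('gzip'));
  const text = await new Response(decompressed).text();
  return text.split('\n').filter(Boolean).map(JSON.parse);
}
```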