Comments (12)
This is very odd. When compression is enabled, s3backer uses content encoding deflate, not gzip. I have no idea how blocks were written with encoding gzip....
Are you or were you doing anything unusual?
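One way to check what is actually stored: a bare HEAD request against a block object shows the Content-Encoding it was written with. A minimal libcurl sketch, with a hypothetical bucket and block path (a private bucket would additionally need authentication headers):

#include <stdio.h>
#include <curl/curl.h>

/* Issue a HEAD request and dump the response headers, so the stored
 * Content-Encoding is visible. The bucket and block path here are
 * hypothetical. */
int main(void)
{
    CURL *curl;
    CURLcode rc;

    if ((curl = curl_easy_init()) == NULL)
        return 1;
    curl_easy_setopt(curl, CURLOPT_URL, "https://s3.amazonaws.com/mybucket/myprefix/00000000");
    curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);     /* HEAD: no response body */
    curl_easy_setopt(curl, CURLOPT_HEADER, 1L);     /* write headers to stdout */
    rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return rc == CURLE_OK ? 0 : 1;
}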
from s3backer.
Nothing unusual. It seems the issue occurred because compression was enabled with a blank password by the shell script that was used.
from s3backer.
If this is reproducible, could you create a simple shell script or short test of some kind that demonstrates the problem, when run against a fresh/empty Amazon S3 bucket? Thanks.
from s3backer.
More info is available now. s3backer was tested on HCP (Hitachi Content Platform), and HCP stores the data in gzip format. Disabling the Content-Encoding header makes HCP store the data as it is. According to them, the latest unreleased version of HCP has a fix for this. Is the Content-Encoding header enabled by s3backer only when encryption is enabled?
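For concreteness, the encoding value under discussion is composed roughly like this; a hypothetical sketch mirroring the Accept-Encoding logic in the patch below, not a verbatim excerpt from s3backer:

#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of how the Content-Encoding value for a stored
 * block is composed (mirrors the Accept-Encoding logic in the patch
 * below; not a verbatim excerpt from http_io.c). */
static void
content_encoding_value(char *buf, size_t buflen, const char *cipher)
{
    snprintf(buf, buflen, "deflate");               /* compression contributes "deflate" */
    if (cipher != NULL)                             /* e.g. "AES-128-CBC" when encrypting */
        snprintf(buf + strlen(buf), buflen - strlen(buf), ", encrypt-%s", cipher);
}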
from s3backer.
Looks like HCP is adding 'gzip' compression on the fly when sending the response. This is because s3backer includes the Accept: */* header (actually it is libcurl that is adding this).
This looks like a SHOULD violation of the spec; see http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.3
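Incidentally, libcurl's documented way to drop that default wildcard Accept header is to append the header name with an empty value to the request's header list; a minimal sketch (note the patch below takes a different route and sends an explicit Accept-Encoding instead):

#include <curl/curl.h>

/* libcurl sends a wildcard Accept header by default; per the
 * CURLOPT_HTTPHEADER documentation, appending a header name with an
 * empty value removes that header from the request. */
static struct curl_slist *suppress_default_accept(CURL *curl)
{
    struct curl_slist *headers = NULL;

    headers = curl_slist_append(headers, "Accept:");    /* empty value -> header omitted */
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    return headers;     /* caller frees with curl_slist_free_all() after the transfer */
}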
Try this patch and let me know if this fixes the problem:
diff --git a/http_io.c b/http_io.c
index 0fa47cc..24cb016 100644
--- a/http_io.c
+++ b/http_io.c
@@ -37,6 +37,7 @@
 #define AUTH_HEADER "Authorization"
 #define CTYPE_HEADER "Content-Type"
 #define CONTENT_ENCODING_HEADER "Content-Encoding"
+#define ACCEPT_ENCODING_HEADER "Accept-Encoding"
 #define ETAG_HEADER "ETag"
 #define CONTENT_ENCODING_DEFLATE "deflate"
 #define CONTENT_ENCODING_ENCRYPT "encrypt"
@@ -1022,6 +1023,7 @@ http_io_read_block(struct s3backer_store *const s3b, s3b_block_t block_num, void
     struct http_io_private *const priv = s3b->data;
     struct http_io_conf *const config = priv->config;
     char urlbuf[URL_BUF_SIZE(config)];
+    char accepted_encodings[64];
     const time_t now = time(NULL);
     int encrypted = 0;
     struct http_io io;
@@ -1088,6 +1090,14 @@ http_io_read_block(struct s3backer_store *const s3b, s3b_block_t block_num, void
         io.headers = http_io_add_header(io.headers, "%s: \"%s\"", header, md5buf);
     }
 
+    /* Set Accept-Encoding header */
+    snprintf(accepted_encodings, sizeof(accepted_encodings), "%s", CONTENT_ENCODING_DEFLATE);
+    if (config->encryption != NULL) {
+        snprintf(accepted_encodings + strlen(accepted_encodings), sizeof(accepted_encodings) - strlen(accepted_encodings),
+          ", %s-%s", CONTENT_ENCODING_ENCRYPT, config->encryption);
+    }
+    io.headers = http_io_add_header(io.headers, "%s: %s", ACCEPT_ENCODING_HEADER, accepted_encodings);
+
     /* Add Authorization header */
     if ((r = http_io_add_auth(priv, &io, now, NULL, 0)) != 0)
         goto fail;
from s3backer.
It sort of fixes the problem with the Hitachi Content Platform; however, it now breaks large file transfers going to Amazon S3 for some reason.
from s3backer.
Any more info on this? Are you still having problems with that patch and, if so, can you provide more details?
from s3backer.
We were able to work through this issue by implementing gzip support as a content encoding mechanism that s3backer can understand. We are not using the above proposed patch.
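For reference, accepting gzip alongside deflate is straightforward with zlib, which s3backer already uses: passing windowBits of 15 + 32 to inflateInit2() enables automatic zlib/gzip header detection. A minimal sketch, not the actual change that was made:

#include <string.h>
#include <zlib.h>

/* Inflate a block that may be either zlib/deflate- or gzip-encoded:
 * windowBits of 15 + 32 tells inflateInit2() to auto-detect the format.
 * On success, *dstlenp is updated to the decompressed length. */
static int
inflate_block(const void *src, size_t srclen, void *dst, size_t *dstlenp)
{
    z_stream strm;
    int r;

    memset(&strm, 0, sizeof(strm));
    if (inflateInit2(&strm, 15 + 32) != Z_OK)
        return -1;
    strm.next_in = (Bytef *)src;
    strm.avail_in = srclen;
    strm.next_out = (Bytef *)dst;
    strm.avail_out = *dstlenp;
    r = inflate(&strm, Z_FINISH);
    *dstlenp = strm.total_out;
    inflateEnd(&strm);
    return r == Z_STREAM_END ? 0 : -1;
}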
from s3backer.
Ok thanks.
Next question: now I'm wondering whether this patch is OK to commit. Obviously you won't be using it in this case, but more generally it does seem to fix a bug in s3backer.
However, your comment that:
it now breaks large file transfers going to Amazon S3 for some reason
worries me. Any idea what was going on there?
from s3backer.
I do not believe that issue was caused by this patch. I expect that it was only detected while testing with this patch. @eolson78 should have more history here and be able to clarify.
from s3backer.
OK good, that would make more sense.
from s3backer.
Patch applied in d0e863e.
from s3backer.