Comments (12)
If you are using Linux you might want to take a look at traffic shaping:
http://www.google.com/search?q=linux+traffic+shaping
This is a more general and precise solution to the bandwidth prioritization
problem.
Also, you might consider reducing the number of block cache threads, which directly limits the number of concurrent write operations.
Original comment by [email protected]
on 19 Oct 2009 at 2:27
- Changed state: Accepted
- Added labels: Type-Enhancement
- Removed labels: Type-Defect
from s3backer.
Thanks for the tip. I am aware of the traffic shaping options in Linux, but firstly I'm not using a Linux router to connect to my ADSL line, and secondly I was hoping to avoid the traffic shaping learning curve. I will try to reduce the block cache threads, though!
Original comment by [email protected]
on 19 Oct 2009 at 3:15
Could this cURL option be exposed via a command-line option?
CURLOPT_MAX_SEND_SPEED_LARGE
(http://curl.haxx.se/libcurl/c/curl_easy_setopt.html)
I'd do it myself, but I lack the C experience :(
Original comment by [email protected]
on 21 Oct 2009 at 7:32
My C skills were less rusty than I thought, and the code is quite readable, so I took a stab at it. Please let me know if it's good enough to be included (it's a straightforward copy of existing configuration code).
I tested it and it works, but only in combination with --blockCacheThreads=1, so you might want to force that?
My SSH sessions are (more or less) responsive again! :)
Original comment by [email protected]
on 23 Oct 2009 at 8:10
Attachments:
I just noticed that the change to s3b_config.h is not needed and should be skipped.
Original comment by [email protected]
on 23 Oct 2009 at 8:16
Thanks for the patch. I can't get to this immediately but will work on it soon when time permits.
Original comment by [email protected]
on 26 Oct 2009 at 2:29
Fixed in r420.
Original comment by [email protected]
on 26 Oct 2009 at 6:48
- Changed state: Fixed
Looks good, but with all due respect I don't think it's a good idea to define the upload and download speeds in bits/s. Bits/s are commonly used to describe raw transfer speeds; cURL can only measure real bytes sent and will /never/ be aware of the overhead imposed by the transmission protocol, like TCP/IP. Even cURL defines the parameter in bytes/s, for a good reason.
Original comment by [email protected]
on 26 Oct 2009 at 7:04
It doesn't really matter what the units are... bytes/sec = bits/sec / 8 (s3backer divides the quantity by eight before configuring cURL with it). So the choice should be whatever is most natural for users. This is simply a measure of bandwidth, which is a rate (quantity/time), not an absolute quantity.
I chose bits/sec because that's usually how people refer to bandwidth (e.g., people say "my DSL line gets 1.5Mbps downstream and 384kbps upstream").
The TCP, etc. overhead is not counted or implied in either case, so that's not relevant. In other words, I'm not aware of any convention that "bytes-per-second" implicitly means "not counting overhead" whereas "bits-per-second" means "counting overhead". You seem to be implying that there is... I'm curious where you got that notion. The way I think of it, bandwidth is the same no matter how you logically group the individual bits together.
Original comment by [email protected]
on 26 Oct 2009 at 8:18
It's exactly that distinction that I mean: when speaking in terms of DSL bandwidth, people refer to [M|k]bits/s, but you can hardly calculate that back to real [k]bytes/s because of the IP (and ATM/PPPoE) overhead. So telling s3backer to limit its upload rate to 100kbits/s, as you suggest, will probably end up using up to 110kbits/s of the line. I agree I'm nitpicking here and the choice is trivial, but what I'm trying to say is that it may be misleading to suggest that the upload limit of s3backer translates directly to the upload limit of the DSL line; it's off by at least ~10%. Bits/s is probably better understood (because of the DSL line discussion); bytes/s would just be 'more correct' because of the way cURL calculates it.
In either case, I'm glad the option is implemented upstream ;)
Original comment by [email protected]
on 26 Oct 2009 at 8:33
I think it's at least worth mentioning in the man page that the limits do not count overhead. I'll add something to that effect.
Original comment by [email protected]
on 26 Oct 2009 at 9:38
Original comment by [email protected]
on 22 Oct 2010 at 8:01
- Added labels: AffectsVersion-1.3.1, FixVersion-1.3.2