
Comments (12)

GoogleCodeExporter commented on June 6, 2024

If you are using Linux you might want to take a look at traffic shaping:

  http://www.google.com/search?q=linux+traffic+shaping

This is a more general and precise solution to the bandwidth-prioritization problem.

Also, you might consider reducing the number of block cache threads, which directly limits the number of concurrent write operations.

Original comment by [email protected] on 19 Oct 2009 at 2:27

  • Changed state: Accepted
  • Added labels: Type-Enhancement
  • Removed labels: Type-Defect

from s3backer.

GoogleCodeExporter commented on June 6, 2024

Thanks for the tip. I am aware of the traffic shaping options in Linux, but I'm not using a Linux router to connect to my ADSL line, and secondly I was hoping to avoid the traffic-shaping learning curve. Will try reducing the block cache threads, though!

Original comment by [email protected] on 19 Oct 2009 at 3:15


GoogleCodeExporter commented on June 6, 2024

Could this cURL option be exposed via a command-line option? CURLOPT_MAX_SEND_SPEED_LARGE
(http://curl.haxx.se/libcurl/c/curl_easy_setopt.html)
I'd do it myself, but I lack the C experience :(

Original comment by [email protected] on 21 Oct 2009 at 7:32


GoogleCodeExporter commented on June 6, 2024

My C skills were less rusty than I thought, and the code is quite readable, so I made a stab at it. Please let me know if it's good enough to be included (it's a straightforward copy of existing configuration code). I tested it and it works, but only in combination with --blockCacheThreads=1, so you might want to force it?

My SSH sessions are (more or less) responsive again! :)

Original comment by [email protected] on 23 Oct 2009 at 8:10

Attachments:


GoogleCodeExporter commented on June 6, 2024

I just noticed that the change in s3b_config.h is not needed and should be skipped.

Original comment by [email protected] on 23 Oct 2009 at 8:16


GoogleCodeExporter commented on June 6, 2024

Thanks for the patch. I can't get to this immediately, but will work on it as soon as time permits.

Original comment by [email protected] on 26 Oct 2009 at 2:29


GoogleCodeExporter commented on June 6, 2024
Fixed in r420.

Original comment by [email protected] on 26 Oct 2009 at 6:48

  • Changed state: Fixed


GoogleCodeExporter commented on June 6, 2024

Looks good, but with all due respect I don't think it's a good idea to define the upload and download speeds in bits/s. Bits/s are commonly used to describe raw transfer speeds; cURL can only measure real bytes sent and will /never/ be aware of the overhead imposed by the transmission protocol, like TCP/IP. Even cURL defines the parameter in bytes/s, for a good reason.

Original comment by [email protected] on 26 Oct 2009 at 7:04


GoogleCodeExporter commented on June 6, 2024

It doesn't really matter what the units are... bits/sec = bytes/sec * 8 (s3backer divides the quantity by eight before configuring cURL with it). So the choice should be whatever is most natural for users. This is simply a measure of bandwidth, which is a rate (quantity/time), not an absolute quantity.

I chose bits/sec because that's usually how people refer to bandwidth (e.g., people say "my DSL line gets 1.5Mbps downstream and 384kbps upstream").

The TCP, etc. overhead is not counted or implied in either case, so that's not relevant. In other words, I'm not aware of any convention that "bytes-per-second" implicitly means "not counting overhead" whereas "bits-per-second" means "counting overhead". You seem to be implying that there is... I'm curious where you got that notion. The way I think of it, bandwidth is the same no matter how you logically group the individual bits together.

Original comment by [email protected] on 26 Oct 2009 at 8:18


GoogleCodeExporter commented on June 6, 2024

It's exactly that distinction that I mean: when speaking in terms of DSL bandwidth, people refer to [M|k]bits/s, but you cannot (or can hardly) calculate that back to real [k]bytes/s, because of the IP (and ATM/PPPoE) overhead. So telling s3backer to limit its upload rate to 100kbits/s, as you suggest, will probably result in using up to 110kbits/s of the line. I agree I'm nitpicking here and the choice is trivial, but what I'm trying to say is that it may be misleading to suggest that the upload limit of s3backer translates directly to the upload limit of the DSL line. It's off by at least ~10%. Bits/s is probably better understood (because of the DSL-line discussion); bytes/s would just be 'more correct' because of the way cURL calculates it.
In either case, I'm glad the option is implemented upstream ;)

Original comment by [email protected] on 26 Oct 2009 at 8:33


GoogleCodeExporter commented on June 6, 2024

I think it's at least worth mentioning in the man page that the limits do not count overhead. I'll add something to that effect.
Original comment by [email protected] on 26 Oct 2009 at 9:38


GoogleCodeExporter commented on June 6, 2024

Original comment by [email protected] on 22 Oct 2010 at 8:01

  • Added labels: AffectsVersion-1.3.1, FixVersion-1.3.2

