bwscanner's People

Contributors

aagbsn, david415, dependabot[bot], donnchac, juga0, teor2345

bwscanner's Issues

Excessive memory usage

Possibly related to unhandled errors (#70, #81), a scan eats huge amounts of memory. Here's an example:

2018-02-06T00:04:41+0100 [INFO]: Performing a measurement scan with 6216 relays.

Ten hours later, the Python process consumes about 5 GB of RAM (RSS):

F   UID   PID  PPID PRI  NI    VSZ   RSS WCHAN  STAT TTY        TIME COMMAND
0  1000   996   583  20   0 5156300 5035808 SyS_ep Sl+ pts/1    2:30 /usr/bin/python /home/bwscanner/.local/bin/bwscan --launch-tor --loglevel debug scan

fix bug in two hop circuit generator

If you are unsure why this pull request should be merged, see:
#27

It's because the following short section of code fails by asking for more samples than are in the set. Did we forget that this is a partitioned generator? It only generates circuits within the small domain of its partition!

#!/usr/bin/env python
# Reproduction: each partition contains only num_relays / partitions relays,
# but the buggy call asks random.sample() for num_relays of them, which is
# larger than the population and raises ValueError.

import random

num_relays = 100
partitions = 10
this_partition = 1

# Fixed: only sample as many relays as this partition actually contains.
#for i in random.sample(range(this_partition, num_relays, partitions), num_relays/partitions):
# Buggy: asks for num_relays samples from a population of num_relays/partitions.
for i in random.sample(range(this_partition, num_relays, partitions), num_relays):
    print i
Traceback (most recent call last):
  File "fu.py", line 10, in <module>
    for i in random.sample(range(this_partition, num_relays, partitions), num_relays):
  File "/usr/lib/python2.7/random.py", line 323, in sample
    raise ValueError("sample larger than population")
ValueError: sample larger than population

Do we need to buffer the requested bandwidth files in memory?

The scanner currently fetches a bandwidth file from the Tor Project bwauth server. Each file is buffered in memory and then hashed to check that the download completed successfully. These files can be larger than 50 MB, so this buffering places extra memory demands on the bandwidth authority if many requests are happening simultaneously.

We should consider using an HTTP agent that hashes the data as it is received rather than buffering it. We don't necessarily need a cryptographically secure hash function; a cheaper checksum-style hash would do. A rough sketch of the streaming approach is below.
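A rough sketch of what the streaming approach could look like with Twisted's Agent/IResponse API; the HashingReceiver name and the choice of zlib.crc32 are illustrative assumptions, not code that exists in bwscanner today:

import zlib

from twisted.internet import defer
from twisted.internet.protocol import Protocol


class HashingReceiver(Protocol):
    """Consume a response body chunk by chunk, keeping only a running checksum."""

    def __init__(self, finished):
        self.finished = finished  # Deferred fired when the body is complete
        self.checksum = 0
        self.length = 0

    def dataReceived(self, data):
        # Update the checksum incrementally; the chunk itself is discarded.
        self.checksum = zlib.crc32(data, self.checksum) & 0xffffffff
        self.length += len(data)

    def connectionLost(self, reason):
        # Fire with (checksum, length); callers can inspect `reason` if they
        # need to distinguish a clean finish from a dropped connection.
        self.finished.callback((self.checksum, self.length))


def checksum_response_body(response):
    # `response` is the IResponse returned by Agent.request(); deliverBody()
    # streams the body into our protocol instead of buffering it.
    finished = defer.Deferred()
    response.deliverBody(HashingReceiver(finished))
    return finished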

Tests fail using chutney bwscanner configuration

Running tox with ./chutney start networks/bwscanner fails with errors different from those in #94:

[FAIL]
Traceback (most recent call last):
  File "/home/user/_my/code/tor-related/bwscanner/.tox/py27/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1126, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File "/home/user/_my/code/tor-related/bwscanner/.tox/py27/local/lib/python2.7/site-packages/twisted/python/failure.py", line 389, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "/home/user/_my/code/tor-related/bwscanner/test/test_fetcher.py", line 79, in test_do_failing_request
    yield agent.request("GET", url)
  File "/home/user/_my/code/tor-related/bwscanner/.tox/py27/local/lib/python2.7/site-packages/twisted/trial/_synctest.py", line 358, in __exit__
    self._expectedName, reason.getTraceback()),
twisted.trial.unittest.FailTest: twisted.internet.error.ConnectionRefusedError raised instead of ConnectionRefused:
 Traceback (most recent call last):
  File "/home/user/_my/code/tor-related/bwscanner/.tox/py27/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1184, in gotResult
    _inlineCallbacks(r, g, deferred)
  File "/home/user/_my/code/tor-related/bwscanner/.tox/py27/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1126, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File "/home/user/_my/code/tor-related/bwscanner/.tox/py27/local/lib/python2.7/site-packages/twisted/python/failure.py", line 389, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "/home/user/_my/code/tor-related/bwscanner/test/test_fetcher.py", line 79, in test_do_failing_request
    yield agent.request("GET", url)
--- <exception caught here> ---
  File "/home/user/_my/code/tor-related/bwscanner/test/test_fetcher.py", line 79, in test_do_failing_request
    yield agent.request("GET", url)
twisted.internet.error.ConnectionRefusedError: Connection was refused by other side: 111: Connection refused.

Connect to a running tor only for chutney tests

Since bwscan can launch its own tor, and connecting to an already-running tor is only needed for chutney, it would probably be safer to allow connecting to a running tor only for chutney.

See comment in #85 (comment)

If we want to keep the possibility of connecting to a running tor, it should not fail silently when the user running bwscan lacks the privileges for cookie authentication (this should probably go in another ticket).

Create tool to convert the measurements to a format the BWAuths can use

The TorFlow bandwidth scanner includes an aggregate.py script which takes measurement data and creates the bandwidth file that the bandwidth auths use for calculating their votes. The aggregation script depends on the unmaintained Tor controller library TorCtl.

I have updated the aggregate.py script to use the Stem Tor controller library instead. This torflow branch is at https://github.com/DonnchaC/torflow/compare/remove-torctl-dependency.

Unhandled Errors

I'm running top of develop (943e003) and getting these errors:

It's not the only output I get, but I see a lot of them.

2018-01-25T22:38:43-0600 [DEBUG]: Download took 84.2130088806 for 8 MB
2018-01-25T22:38:43-0600 [INFO]: Download successful for router $4AAD21AD247E6B90D42ADEE8D908B3C6BE023B29.
2018-01-25T22:38:43-0600 [INFO]: Downloading file '64M' over [$547C1CDB516798EC66A01F04A5884DCE1A151919, $A44AE029015BA6FE0E9B90075C55617E0CD1E22B].
2018-01-25T22:38:48-0600 [WARN]: Download failed for router $6CDF0169775404CA9E5664CE327DBC3C5EED6196: <twisted.python.failure.Failure txtorcon.circuit.Circuit: <Circuit 43 FAILED [213.138.109.144] for GENERAL>>.
2018-01-25T22:38:48-0600 [INFO]: Downloading file '64M' over [$32EE911D968BE3E016ECA572BB1ED0A9EE43FC2F, $DA0FE8B5DD9717F52376F55885BC72E619ACD97D].
Unhandled Error
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/click-6.6-py2.7.egg/click/core.py", line 534, in invoke
    return callback(*args, **kwargs)
  File "build/bdist.linux-x86_64/egg/bwscanner/scanner.py", line 103, in scan

  File "/usr/local/lib/python2.7/dist-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/internet/base.py", line 1194, in run
    self.mainLoop()
  File "/usr/local/lib/python2.7/dist-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/internet/base.py", line 1203, in mainLoop
    self.runUntilCurrent()
--- <exception caught here> ---
  File "/usr/local/lib/python2.7/dist-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/internet/base.py", line 825, in runUntilCurrent
    call.func(*call.args, **call.kw)
  File "/usr/local/lib/python2.7/dist-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/internet/tcp.py", line 479, in connectionLost
    self._commonConnection.connectionLost(self, reason)
  File "/usr/local/lib/python2.7/dist-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/internet/tcp.py", line 293, in connectionLost
    protocol.connectionLost(reason)
  File "/usr/local/lib/python2.7/dist-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/protocols/tls.py", line 484, in connectionLost
    ProtocolWrapper.connectionLost(self, reason)
  File "/usr/local/lib/python2.7/dist-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/protocols/policies.py", line 124, in connectionLost
    self.factory.unregisterProtocol(self)
  File "/usr/local/lib/python2.7/dist-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/protocols/policies.py", line 185, in unregisterProtocol
    del self.protocols[p]
exceptions.KeyError: <<class 'twisted.internet.tcp.Client'> to ('127.0.0.1', 43337) at 7f5347412690>

Not sure what else I can provide except even more logs (running with debug logging).

test_measurement.py fails

Test fails running tox with ./chutney start networks/basic-025

[ERROR]
Traceback (most recent call last):
  File "/home/travis/build/juga0/bwscanner/test/test_measurement.py", line 68, in <lambda>
    scan.addCallback(lambda _: check_all_routers_measured(self.tmp))
  File "/home/travis/build/juga0/bwscanner/test/test_measurement.py", line 64, in check_all_routers_measured
    assert measured_relays == all_relays
exceptions.AssertionError: 
test.test_measurement.TestBwscan.test_scan_chutney

It might have to do with the algorithm used to choose the path [0]. Chutney creates fewer than 25 relays, while the slice size is 50.

[0] https://github.com/TheTorProject/bwscanner/blob/develop/bwscanner/circuit.py#L101

Close Tor circuits when their measurement request completes

It appears the bandwidth scanner is not closing Tor circuits when the measurement over that circuit completes. The circuits are only closed later, when the Tor circuit timeout is hit. Keeping all of these unused circuits open puts extra load on the local Tor daemon.

Instead, the circuit should be closed immediately after a request finishes, regardless of whether the download succeeded or failed. A minimal sketch of this is below.
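A minimal sketch of how that could look, assuming a txtorcon Circuit object (whose close() returns a Deferred); measure_over_circuit and do_request are placeholders, not existing bwscanner functions:

def measure_over_circuit(circuit, do_request):
    # do_request() returns a Deferred that fires when the download finishes.
    d = do_request()

    def close_circuit(result_or_failure):
        # Close the circuit whether the download succeeded or failed, and
        # pass the original result (or Failure) through to the caller.
        d_close = circuit.close()
        d_close.addErrback(lambda _failure: None)  # a failed close must not mask the result
        return d_close.addCallback(lambda _ignored: result_or_failure)

    # addBoth runs the teardown on both the success and failure paths.
    return d.addBoth(close_circuit)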

fix integration tests for bandwidth measurements

Currently in the develop branch we have a couple of integration test classes set to be skipped because these tests contain race conditions. The race condition has to do with not sending enough cells over a circuit to make a proper bandwidth estimate because the circuit is closed too quickly after sending the entire test payload.

By the way, I am calling them integration tests since they require chutney instead of relying on mock patterns.

__getitem__ on a NoneType

I see this frequently:

Unhandled Error
Traceback (most recent call last):
Failure: twisted.internet.defer.CancelledError:

2018-02-05T21:46:19+0100 [CRITICAL]: Unhandled Error
Traceback (most recent call last):
Failure: twisted.internet.defer.CancelledError:

2018-02-05T21:46:19+0100 [WARN]: Download failed for router $84CA2CB2A1FA077BEF9B5F1982E3BF3C828E09F1: <twisted.python.failure.Failure exceptions.TypeError: 'NoneType' object has no attribute '__getitem__'>.
2018-02-05T21:46:19+0100 [INFO]: Downloading file '2M' over [$0F1856142DF75D44EDAD8FC34EC98EB0823A1D41, $785600F315953262AFA771AF79D82F9BB309E77F].

OpenSSL Error

scan gives OpenSSL error for some relays:
2018-01-29T14:34:57+0000 [WARN]: Download failed for router $6F33E92A67EC038B559415AC56C860075F6D287F: <twisted.python.failure.Failure twisted.web._newclient.ResponseNeverReceived: [<twisted.python.failure.Failure OpenSSL.SSL.Error: [('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')]>]>.

Check the TLS options in the fetch agent

Add options to control the log verbosity

It's useful to be able to control the verbosity of the log output when running this tool or when trying to debug problems. The tool should be able to output nicely formatted log messages to stdout and to log file(s).

We need to write a Twisted log observer. A rough sketch is below.
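A rough sketch of such an observer built on twisted.python.log; the level names, the custom log_level_name event key, and the formatting are assumptions for illustration, not bwscanner's actual logging code:

import sys
import time

from twisted.python import log

LEVELS = {'DEBUG': 10, 'INFO': 20, 'WARN': 30, 'CRITICAL': 50}


class ConsoleLogObserver(object):
    """Write events at or above min_level to a stream, formatted like the
    timestamped lines elsewhere in this ticket."""

    def __init__(self, min_level='INFO', stream=sys.stdout):
        self.min_level = LEVELS[min_level]
        self.stream = stream

    def __call__(self, event):
        # 'log_level_name' is a custom key the scanner would attach via
        # log.msg(..., log_level_name='DEBUG'); errors default to CRITICAL.
        level = event.get('log_level_name',
                          'CRITICAL' if event.get('isError') else 'INFO')
        if LEVELS.get(level, 20) < self.min_level:
            return
        text = log.textFromEventDict(event)
        if text is None:
            return
        stamp = time.strftime('%Y-%m-%dT%H:%M:%S',
                              time.localtime(event.get('time', time.time())))
        self.stream.write('%s [%s]: %s\n' % (stamp, level, text))
        self.stream.flush()


# Wiring it up: log.addObserver(ConsoleLogObserver(min_level='DEBUG'))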

fix partition detection

Currently our partition detection is broken. The algorithm that generates the circuit permutations really needs to be fixed: it must NOT make sequential connections through the same relay 7,000+ times. Terrabad. Our lazy permutations algorithm has the notion of partitions, so that we can have multiple computers running in parallel, each computing its own partition of the total circuit permutation list. The algorithm behaves correctly if many partitions are used, but it would hose the network if only a few are specified. We should fix this. A toy sketch of the partitioning idea is below.
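A toy sketch of the partitioning idea, not the scanner's actual generator: assign every two-hop pair to a partition round-robin, then shuffle within the partition so the same relay is not reused thousands of times in a row. The names and parameters are placeholders:

import itertools
import random


def circuits_for_partition(relays, this_partition, num_partitions):
    # Enumerate all ordered two-hop pairs deterministically, so every scanner
    # agrees on the numbering, and keep only the pairs owned by this partition.
    pairs = [pair for index, pair in enumerate(itertools.permutations(relays, 2))
             if index % num_partitions == this_partition]
    # Shuffling within the partition breaks up long runs through the same relay.
    random.shuffle(pairs)
    return pairs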

txsocksx dependency is not compatible with Python 3

root@hal:~/Projects/tor/bwscanner-develop# bwscan scan
Traceback (most recent call last):
  File "/root/anaconda3/bin/bwscan", line 6, in <module>
    from pkg_resources import load_entry_point
  File "/root/anaconda3/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3147, in <module>
    @_call_aside
  File "/root/anaconda3/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3131, in _call_aside
    f(*args, **kwargs)
  File "/root/anaconda3/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3160, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/root/anaconda3/lib/python3.6/site-packages/pkg_resources/__init__.py", line 666, in _build_master
    ws.require(__requires__)
  File "/root/anaconda3/lib/python3.6/site-packages/pkg_resources/__init__.py", line 984, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/root/anaconda3/lib/python3.6/site-packages/pkg_resources/__init__.py", line 870, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'txsocksx==1.15.0.2' distribution was not found and is required by bwscanner

root@hal:~/Projects/tor/bwscanner-develop# pip3 install txsocksx==1.15.0.2
Collecting txsocksx==1.15.0.2
  Using cached txsocksx-1.15.0.2.tar.gz
  Complete output from command python setup.py egg_info:
    zip_safe flag not set; analyzing archive contents...

    Installed /tmp/pip-build-aa62oyed/txsocksx/.eggs/vcversioner-2.16.0.0-py3.6.egg
    error in txsocksx setup command: 'install_requires' must be a string or list of strings containing valid project/version requirement specifiers; 'int' object is not iterable

    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-aa62oyed/txsocksx/

race condition in test.test_writer.TestResultSink.test_send_chunk_size

(virtenv-bwscanner) user@subgraph:~/code/bwscanner$ trial test.test_writer.TestResultSink.test_send_chunk_size
test.test_writer
  TestResultSink
    test_send_chunk_size ...                                               [OK]

-------------------------------------------------------------------------------
Ran 1 tests in 0.032s

PASSED (successes=1)
(virtenv-bwscanner) user@subgraph:~/code/bwscanner$ trial test.test_writer.TestResultSink.test_send_chunk_size
test.test_writer
  TestResultSink
    test_send_chunk_size ...                                            [ERROR]

===============================================================================
[ERROR]
Traceback (most recent call last):
  File "/home/user/code/bwscanner/test/test_writer.py", line 52, in <lambda>
    lambda results: walk(self.tmpdir, validateoutput, None)
  File "/home/user/virtenv-bwscanner/lib/python2.7/posixpath.py", line 231, in walk
    func(arg, top, names)
  File "/home/user/code/bwscanner/test/test_writer.py", line 47, in validateoutput
    results = json.load(testfile)
  File "/usr/lib/python2.7/json/__init__.py", line 291, in load
    **kw)
  File "/usr/lib/python2.7/json/__init__.py", line 339, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python2.7/json/decoder.py", line 364, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python2.7/json/decoder.py", line 382, in raw_decode
    raise ValueError("No JSON object could be decoded")
exceptions.ValueError: No JSON object could be decoded

test.test_writer.TestResultSink.test_send_chunk_size
-------------------------------------------------------------------------------
Ran 1 tests in 0.034s

FAILED (errors=1)

NOTE: temporarily disable review on PRs

It is good practice for at least two people to review a third person's PRs; however, right now in this project:

  • we are not using master branch, only develop
  • we are not doing versioning or releases
  • nobody is running this code in production yet
  • there are not enough active developers/reviewers

Therefore, I would disable the review requirement on PRs until there are more reviewers.

Setting DisableDebuggerAttachment too late

The change introduced in 839fc5a, setting DisableDebuggerAttachment=0, makes tor protest with

Failure: txtorcon.torcontrolprotocol.TorProtocolError: 553 Transition not allowed: While Tor is running, changing DisableDebuggerAttachment is not allowed

This was tested with `--launch-tor` and tor-0.3.3.1. A sketch of setting the option at launch time instead is below.
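A sketch of setting the option before tor starts, so it never needs to change on a running process; this assumes the txtorcon TorConfig/launch_tor path and may not match exactly how bwscan launches its tor:

import txtorcon
from twisted.internet import reactor

# Put DisableDebuggerAttachment into the config that tor is launched with,
# instead of sending SETCONF to an already-running tor (which yields the
# "553 Transition not allowed" error above).
config = txtorcon.TorConfig()
config.DisableDebuggerAttachment = 0
d = txtorcon.launch_tor(config, reactor)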

"Download failed" for most of the relays

We should investigate why this fails for so many relays.
The reports only contain the string <twisted.python.failure.Failure twisted.internet.defer.CancelledError: >, which does not give the reason either.

[aggregate] Missing error handling

Running bwscan --launch-tor aggregate:

2018-02-08T11:38:16+0100 [INFO]: Spawning a new Tor instance.
2018-02-08T11:38:16+0100 [INFO]: NoOpProtocolFactory starting on 43457
2018-02-08T11:38:16+0100 [INFO]: Aggregating data from past 1 scans.
2018-02-08T11:38:16+0100 [INFO]: (TCP Port 43457 Closed)
2018-02-08T11:38:16+0100 [INFO]: Spawning tor process with DataDirectory /tmp/tortmpGlQ8n4
2018-02-08T11:39:30+0100 [INFO]: Loading JSON measurement files
2018-02-08T11:39:30+0100 [INFO]: Loaded 85 successful measurements and 37 failures.
2018-02-08T11:39:30+0100 [INFO]: Processing the loaded bandwidth measurements
Unexpected error

Traceback (most recent call last):
Failure: txtorcon.torcontrolprotocol.TorProtocolError: 552 Unrecognized key "ns/id/D33D98BA997A883CA0973BEF5BA85B422E997881"
2018-02-08T11:39:30+0100 [CRITICAL]: Unexpected error

Traceback (most recent call last):
Failure: txtorcon.torcontrolprotocol.TorProtocolError: 552 Unrecognized key "ns/id/D33D98BA997A883CA0973BEF5BA85B422E997881"
2018-02-08T11:39:30+0100 [INFO]: Main loop terminated.

I would guess this is due to an unhandled failure in a get_info_raw() request initiated by write_aggregate_data(), but I don't have any idea where to add error handling code for this. One possible place is sketched after the quoted line below.

        routerstatus_info = yield tor.protocol.get_info_raw('ns/id/' + relay_fp.lstrip("$"))
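One possibility, offered as a sketch rather than a known fix, is to catch the TorProtocolError around the get_info_raw() call and skip relays that have dropped out of the consensus; the wrapper function name and the skip behaviour are illustrative assumptions:

from twisted.internet import defer
from twisted.python import log
from txtorcon.torcontrolprotocol import TorProtocolError


@defer.inlineCallbacks
def get_routerstatus(tor, relay_fp):
    try:
        routerstatus_info = yield tor.protocol.get_info_raw('ns/id/' + relay_fp.lstrip("$"))
    except TorProtocolError as err:
        # 552 "Unrecognized key" means the relay is no longer in the consensus;
        # skip it instead of letting the failure abort the whole aggregation.
        log.msg("Skipping %s: %s" % (relay_fp, err))
        defer.returnValue(None)
    defer.returnValue(routerstatus_info)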

Add new/better bandwidth measurement algorithm

Currently our bandwidth measurements are updated incrementally with each cell that traverses the circuit; instead, we could measure the elapsed time once the entire test payload has been received. A minimal sketch is below.
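A minimal sketch of the whole-payload timing, assuming something that returns a Deferred firing with the downloaded bytes; fetch_body is a placeholder, not an existing bwscanner function:

import time


def measure_download(fetch_body):
    start = time.time()

    def compute_rate(body):
        # One rate for the whole transfer instead of per-cell updates.
        elapsed = time.time() - start
        return len(body) / elapsed if elapsed > 0 else float('inf')

    return fetch_body().addCallback(compute_rate)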

Chutney creates more exit than non-exit relays

The default configuration in chutney for networks/basic-025 [0] creates 16 exits and only 4 non-exit relays (which are actually authorities).
Maybe a custom chutney configuration file should be provided, since in the production Tor network there are more non-exit relays than exits?
The custom configuration could be something like 14 relays and 2 exits (assuming roughly 1 exit per 7 relays).
Maybe this also has to do with #95?
[0] https://github.com/torproject/chutney/blob/master/networks/basic-025#L6

TestBwscan test hangs after succeeding

The TestBwscan test runs the BwScan class to scan and collect measurements across a set of relays. This test is currently passing. However, the trial test runner hangs after the test completes, which causes the test suite to fail with a timeout error.

I'm not sure what is causing it to hang, but it's possible some Deferred or callLater is not being cleaned up properly when the measurements finish.

Unhandled Error on disconnect

Possibly related to #70, I see this (in addition to the backtrace in #70):

2018-02-05T16:04:48+0100 [INFO]: Downloading file '4M' over [$771E0824EF82D85E04D80116554111E37F7EB796, $A3CB2E3D24A1688DC7213988B1BE9321B62AAD8C].
Unhandled Error
Traceback (most recent call last):
  File "/home/bwscanner/.local/lib/python2.7/site-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/web/_newclient.py", line 916, in dispatcher
    return func(*args, **kwargs)
  File "/home/bwscanner/.local/lib/python2.7/site-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/web/_newclient.py", line 1480, in _finishResponse_WAITING
    self._giveUp(Failure(reason))
  File "/home/bwscanner/.local/lib/python2.7/site-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/web/_newclient.py", line 1533, in _giveUp
    self._disconnectParser(reason)
  File "/home/bwscanner/.local/lib/python2.7/site-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/web/_newclient.py", line 1521, in _disconnectParser
    parser.connectionLost(reason)
--- <exception caught here> ---
  File "/home/bwscanner/.local/lib/python2.7/site-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/web/_newclient.py", line 537, in connectionLost
    self.response._bodyDataFinished()
  File "/home/bwscanner/.local/lib/python2.7/site-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/web/_newclient.py", line 916, in dispatcher
    return func(*args, **kwargs)
  File "/home/bwscanner/.local/lib/python2.7/site-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/web/_newclient.py", line 1169, in _bodyDataFinished_CONNECTED
    self._bodyProtocol.connectionLost(reason)
  File "/home/bwscanner/.local/lib/python2.7/site-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/web/client.py", line 2113, in connectionLost
    self.deferred.callback(b''.join(self.dataBuffer))
  File "/home/bwscanner/.local/lib/python2.7/site-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/internet/defer.py", line 393, in callback
    self._startRunCallbacks(result)
  File "/home/bwscanner/.local/lib/python2.7/site-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/internet/defer.py", line 494, in _startRunCallbacks
    raise AlreadyCalledError
twisted.internet.defer.AlreadyCalledError:

2018-02-05T16:04:56+0100 [CRITICAL]: Unhandled Error
Traceback (most recent call last):
  File "/home/bwscanner/.local/lib/python2.7/site-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/web/_newclient.py", line 916, in dispatcher
    return func(*args, **kwargs)
  File "/home/bwscanner/.local/lib/python2.7/site-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/web/_newclient.py", line 1480, in _finishResponse_WAITING
    self._giveUp(Failure(reason))
  File "/home/bwscanner/.local/lib/python2.7/site-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/web/_newclient.py", line 1533, in _giveUp
    self._disconnectParser(reason)
  File "/home/bwscanner/.local/lib/python2.7/site-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/web/_newclient.py", line 1521, in _disconnectParser
    parser.connectionLost(reason)
--- <exception caught here> ---
  File "/home/bwscanner/.local/lib/python2.7/site-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/web/_newclient.py", line 537, in connectionLost
    self.response._bodyDataFinished()
  File "/home/bwscanner/.local/lib/python2.7/site-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/web/_newclient.py", line 916, in dispatcher
    return func(*args, **kwargs)
  File "/home/bwscanner/.local/lib/python2.7/site-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/web/_newclient.py", line 1169, in _bodyDataFinished_CONNECTED
    self._bodyProtocol.connectionLost(reason)
  File "/home/bwscanner/.local/lib/python2.7/site-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/web/client.py", line 2113, in connectionLost
    self.deferred.callback(b''.join(self.dataBuffer))
  File "/home/bwscanner/.local/lib/python2.7/site-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/internet/defer.py", line 393, in callback
    self._startRunCallbacks(result)
  File "/home/bwscanner/.local/lib/python2.7/site-packages/Twisted-16.2.0-py2.7-linux-x86_64.egg/twisted/internet/defer.py", line 494, in _startRunCallbacks
    raise AlreadyCalledError
twisted.internet.defer.AlreadyCalledError:

Pasting two backtraces here since I'm not sure what lines before and after the backtrace belong to which backtrace.

Limit the number of simultaneous requests

It should be possible to configure the number of simultaneous bandwidth measurements. If we have too many running together, the upstream bandwidth limits and the CPU load could distort the measurements.

We can use Twisted's DeferredSemaphore to do this rate limiting, as in the sketch below.
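A small sketch of capping concurrency with DeferredSemaphore; measure_relay and relays are placeholders, and the limit of 10 is arbitrary:

from twisted.internet import defer


def run_measurements(relays, measure_relay, max_concurrent=10):
    sem = defer.DeferredSemaphore(max_concurrent)
    # sem.run() acquires a token, calls measure_relay(relay), and releases the
    # token when the returned Deferred fires, so at most max_concurrent
    # measurements are in flight at once.
    deferreds = [sem.run(measure_relay, relay) for relay in relays]
    return defer.DeferredList(deferreds, consumeErrors=True)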
