magic-wormhole / magic-wormhole
get things from one computer to another, safely
License: MIT License
As per the issue on pyNaCl, it doesn't compile on Windows and, as such, neither does this. Where is pyNaCl used? I wanted to take a look to see if it could be easily replaced with libnacl or something, but I am having a hard time locating it.
EDIT
I found a use of pyNaCl, but only in twisted.transcribe. If that is all, could pyNaCl be moved from the required dependencies to the optional list: "To run a relay server or use the async support, you must also install Twisted and pyNaCl."
Are there any plans for a Node.js version, or simple guidelines for implementing one?
There's room in the nameplate list to include per-nameplate attributes (each nameplate is delivered as a dictionary, and the nameplate ID is just one key, and we can add others). The first attribute I have in mind is a "wordlist identifier", so the sender can use an alternative wordlist, and the recipient can offer tab-completion against the right list. I'm undecided as to whether this identifier should be a simple string or integer, or if it should be a hash of the wordlist (encoded in some suitable form), so it can be a bit more open-ended (and guaranteed to be the same on both sides). The attributes are not protected at all (as they necessarily get processed before the PAKE happens), so we have to be careful with them. E.g. it's probably not a good idea to include the entire wordlist in the attribute, because then an attacker can swap it out for a really short list, and a receiver who isn't paying attention might just hit "xyTABaTABRETURN" and the attacker would have a higher chance than usual of guessing the code.
For the "choose a wormhole code offline, then type it into both machines later" mode, we could improve the receiver's experience (assuming they type it in after the sender does) by setting the wordlist identifier to None, which should disable tab-completion (at least on the wordlist).
We need to think through the user experience here, since it'd be a bit weird to have the receiver's flow change depending upon whether they happened to run before, or after, the sender runs. We have more information to work with if the receiver goes second. (A related question is what the receiver should do when it sees no data in the channel it was told to use: in the machine-generated-code case, that means they probably mistyped the channel-id, and it might be good to emit a warning).
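The per-nameplate attributes described above can be sketched as plain dictionaries. This is an illustrative shape only: the "wordlist" key, its values, and the completion_hint helper are hypothetical names, not the actual protocol fields.

```python
# Sketch: nameplates delivered as dictionaries, with room for extra
# attributes alongside the "id" key. The "wordlist" key is hypothetical.

def completion_hint(nameplate):
    """Return a wordlist identifier for tab-completion, or None to disable it."""
    # None means the code was chosen offline, so the receiver should not
    # offer tab-completion against any wordlist.
    return nameplate.get("wordlist")

nameplates = [
    {"id": "4", "wordlist": "pgp-even-odd"},  # sender used a known list
    {"id": "7", "wordlist": None},            # code chosen offline
]

assert completion_hint(nameplates[0]) == "pgp-even-odd"
assert completion_hint(nameplates[1]) is None
```

Note that the hint only selects which local list to complete against; since the attribute is unauthenticated, the receiver must never treat it as trusted data.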
I tried to pip install magic-wormhole today after watching the pycon16 talk on it. The install failed because the spake2 dependency doesn't have a 0.7 distribution.
At Glyph's suggestion, I think we should replace the messy ad-hoc state machine in Wormhole to use the Automat module.
Hi,
I thought of a feature that I could really use, but I'm not sure if it would fit into the current wormhole architecture. Anyway, reusable codes would be nice, in which you make a code, and then whenever you want to reuse it, perhaps for a specific friend, you can just enter it. That way, instead of having to give them the code every time, perhaps making the transfer less secure depending on the method of transfer of the code, you could just tell them 'You have a file!' or something, and they, assuming they already know the code, could simply enter the predetermined code and get the file.
Thoughts?
-Michael.
At present, the verify() API can only be called once, because it only stashes a single Deferred. (It needs a Deferred because it might be called before or after the verifier becomes available.)
It'd be nice to relax this requirement. To do so, I think I'd use a OneShotObserverList (like the one I implemented in Tahoe, but without the eventual() call that makes it harder to test synchronously).
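The observer-list pattern is straightforward: every subscriber gets the result, whether it subscribes before or after the result arrives. A minimal sketch, using plain callables instead of the Twisted Deferreds the real OneShotObserverList would hand out:

```python
# Minimal one-shot observer list: fire() happens exactly once, and
# when_fired() works the same whether called before or after fire().

class OneShotObserverList:
    def __init__(self):
        self._fired = False
        self._result = None
        self._watchers = []

    def when_fired(self, callback):
        if self._fired:
            # Result already known: notify synchronously.
            callback(self._result)
        else:
            # Result not yet known: remember the caller.
            self._watchers.append(callback)

    def fire(self, result):
        assert not self._fired, "one-shot: may only fire once"
        self._fired = True
        self._result = result
        for cb in self._watchers:
            cb(result)
        self._watchers = None

seen = []
obs = OneShotObserverList()
obs.when_fired(seen.append)   # subscribed before the verifier is known
obs.fire("verifier-bytes")
obs.when_fired(seen.append)   # subscribed afterwards
assert seen == ["verifier-bytes", "verifier-bytes"]
```

In the Deferred version, when_fired() would return a fresh Deferred each time, so multiple verify() calls each get their own callback chain.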
Currently it does this:
(wormhole) ~ $ wormhole help
usage: wormhole SUBCOMMAND (subcommand-options)
wormhole: error: argument subcommand: invalid choice: 'help' (choose from 'server', 'send', 'receive')
README.md appears to claim that only the daemonizing form of wormhole-server start is incompatible with python3.
However, running wormhole-server start --no-daemon under python3 also produces a traceback and then hangs.
2016-07-31T19:06:58-0400 [-] populating new database with schema v3
2016-07-31T19:06:58-0400 [twisted.scripts._twistd_unix.UnixAppLogger#info] twistd 16.2.0 (/usr/bin/python3 3.5.2) starting up.
2016-07-31T19:06:58-0400 [twisted.scripts._twistd_unix.UnixAppLogger#info] reactor class: twisted.internet.epollreactor.EPollReactor.
2016-07-31T19:06:58-0400 [-] PrivacyEnhancedSite starting on 4000
2016-07-31T19:06:58-0400 [wormhole.server.server.PrivacyEnhancedSite#info] Starting factory <wormhole.server.server.PrivacyEnhancedSite object at 0x7f4e3de18278>
2016-07-31T19:06:58-0400 [-] Transit starting on 4001
2016-07-31T19:06:58-0400 [wormhole.server.transit_server.Transit#info] Starting factory <wormhole.server.transit_server.Transit object at 0x7f4e3de18358>
2016-07-31T19:06:58-0400 [-] beginning app prune
2016-07-31T19:06:58-0400 [-] app prune ends, 0 apps
2016-07-31T19:06:58-0400 [-] get_stats took: 0.0011012554168701172
2016-07-31T19:06:58-0400 [-] Unhandled Error
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/twisted/application/service.py", line 283, in startService
service.startService()
File "/usr/lib/python3/dist-packages/twisted/application/internet.py", line 274, in startService
self._loopFinished = self._loop.start(self.step, now=True)
File "/usr/lib/python3/dist-packages/twisted/internet/task.py", line 194, in start
self()
File "/usr/lib/python3/dist-packages/twisted/internet/task.py", line 239, in __call__
d = defer.maybeDeferred(self.f, *self.a, **self.kw)
--- <exception caught here> ---
File "/usr/lib/python3/dist-packages/twisted/internet/defer.py", line 150, in maybeDeferred
result = f(*args, **kw)
File "/usr/lib/python3/dist-packages/wormhole/server/server.py", line 117, in timer
self.dump_stats(now, validity=EXPIRATION_CHECK_PERIOD+60)
File "/usr/lib/python3/dist-packages/wormhole/server/server.py", line 134, in dump_stats
json.dump(data, f, indent=1)
File "/usr/lib/python3.5/json/__init__.py", line 179, in dump
fp.write(chunk)
builtins.TypeError: a bytes-like object is required, not 'str'
2016-07-31T19:06:58-0400 [-] websocket listening on /wormhole-relay/ws
2016-07-31T19:06:58-0400 [-] Wormhole relay server (Rendezvous and Transit) running
2016-07-31T19:06:58-0400 [-] not blurring access times
I had to kill it with Ctrl-C.
I'm using python3-twisted version 16.2.0-1 from debian testing.
I'm afraid I don't know whether the README.md needs to be updated (dropping the mention of "daemonizing") or whether there is some other bug worth fixing.
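The "a bytes-like object is required, not 'str'" TypeError in the traceback is the classic Python 3 symptom of json.dump() writing to a file opened in binary mode. A small sketch reproducing it and the likely fix (the exact dump_stats() code is assumed, not quoted):

```python
import json
import os
import tempfile

data = {"active": 0, "since": "2016-07-31"}
path = os.path.join(tempfile.mkdtemp(), "stats.json")

# Binary mode reproduces the failure on Python 3: json.dump() emits str,
# but a "wb" file only accepts bytes.
failed = False
try:
    with open(path, "wb") as f:
        json.dump(data, f, indent=1)
except TypeError:
    failed = True   # "a bytes-like object is required, not 'str'"

# Text mode works on both Python 2 and Python 3.
with open(path, "w") as f:
    json.dump(data, f, indent=1)

with open(path) as f:
    assert json.load(f) == data
assert failed
```

So if dump_stats() opens its output with "wb", switching to "w" (or encoding the JSON to bytes explicitly) should fix the py3 path.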
If you run wormhole --dump-timing=tx.json send on one side, and wormhole --dump-timing=rx.json receive on the other, then copy both files to the same place, you can run cd misc && python dump-timing.py tx.json rx.json and it will open a web browser and show you a d3.js-based zoomable/pan-able timeline of the messages exchanged. It's pretty handy for finding places to squeeze out roundtrips.
But I broke it when I did the massive 0.8.0 refactoring. The task is to fix the javascript (in misc/web/timeline.js) to make it work once more.
If a transfer fails on the receiver side, the sender currently sees an exception explaining what happened on the receiver side. For instance, if the transfer failed because the receiver already had a file with the same name, the sender sees an exception (with traceback) and a message that states the receiver already had a file by that name:
servo:~/tmp/cdtemp.RGVnNd 0$ wormhole send foo
Sending 0 byte file named 'foo'
On the other computer, please run: wormhole receive
Wormhole code is: 3-cherokee-stormy
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/wormhole/cli/cli.py", line 99, in _dispatch_command
yield maybeDeferred(command)
File "/usr/lib/python3/dist-packages/twisted/internet/defer.py", line 1126, in _inlineCallbacks
result = result.throwExceptionIntoGenerator(g)
File "/usr/lib/python3/dist-packages/twisted/python/failure.py", line 389, in throwExceptionIntoGenerator
return g.throw(self.type, self.value, self.tb)
File "/usr/lib/python3/dist-packages/wormhole/cli/cmd_send.py", line 53, in go
yield d
File "/usr/lib/python3/dist-packages/twisted/internet/defer.py", line 1128, in _inlineCallbacks
result = g.send(result)
File "/usr/lib/python3/dist-packages/wormhole/cli/cmd_send.py", line 140, in _go
% them_d["error"])
wormhole.errors.TransferError: remote error, transfer abandoned: file already exists
ERROR: remote error, transfer abandoned: file already exists
Beyond the fact that it would be good if this exception was caught and presented to the user in a straightforward way, the actual exception message is an unnecessary leak of information about the state of the receiver to the sender. Clearly the protocol has some way of reporting errors from receiver to sender. However, I'm not sure that there is really any reason that the sender needs to know why the transfer failed, other than there was some problem on the receiver side. As receiver, I don't necessarily want the sender to know anything about the state of my machine, such as knowing the names of files in my receiving directory.
I guess the point is that I don't think the receiver should send the sender any information about the state of the receiver's system. It should be sufficient just to say that the transfer failed on the receiver side, with no other context. Sender and receiver can then communicate out-of-band about the failure if they wish.
Also, users should not ever see exception tracebacks for "normal" failure modes. This type of failure should just print a message to the console ("transfer failed due to receiver error") and wormhole should exit with non-zero return code.
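The redaction suggested above can be sketched as a small helper: keep the real reason for local display, and send only a fixed string to the peer. GENERIC_REMOTE_ERROR and make_error_message() are illustrative names, not the actual wormhole protocol fields.

```python
# Sketch: report a generic error to the peer, keep details local.

GENERIC_REMOTE_ERROR = "transfer failed on the receiver side"

def make_error_message(local_exception):
    # The real reason goes to the receiver's own console/log...
    local_text = str(local_exception)
    # ...but only a fixed, information-free string crosses the wire.
    return {"error": GENERIC_REMOTE_ERROR}, local_text

msg, local = make_error_message(
    OSError("refusing to overwrite existing file foo"))
assert msg == {"error": "transfer failed on the receiver side"}
assert "foo" in local              # details stay on the receiver's side
assert "foo" not in msg["error"]   # nothing about the filesystem leaks
```

The sender would then print msg["error"] as a plain one-line message and exit non-zero, with no traceback.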
My plan here is to add a --tor flag, which will:
- connect to a pre-existing Tor daemon if one is available (otherwise txtorcon will be asked to create a new Tor instance)
- use txtorcon to create an ephemeral Hidden Service, and use tor:XYZ.onion:PORT as the "direct hint", instead of scanning and revealing local IP addresses

If both sides are using --tor, they should be able to connect with the HS "direct hints". But if only one side is using Tor, the other won't be able to use the HS, so they must fall back to the relay.
Most (all?) distros require man pages for binaries that are shipped with packages. For Debian, we need man pages for wormhole and wormhole-server. For the initial packaging that I've put together (https://bugs.debian.org/833090) I've made very basic man pages with help2man which do not include any subcommand help. It would be great if we had a good way to generate man pages directly from the python built-in documentation that included subcommand usage for both wormhole and wormhole-server.
This would make magic-wormhole considerably more accessible to non-Python developers.
Just a heads up: I'm working on (incompatible) protocol changes to simplify the API. This will remove the distinction between "Initiator" and "Receiver", which I think will make it easier to use. The change will make wormhole-0.3.0 clients incompatible with the subsequent release, though. Depending upon what feedback I get about the underlying SPAKE2 crypto, I might make more breaking changes (to improve efficiency).
The client currently emits a message if its version does not match whatever the rendezvous server recommends, so at least this won't be a completely silent failure. When I make the next release, I'll bump the server's recommendation to activate this signal.
As mentioned in #19, it seems that --no-listen is currently stuck on. I think this is a one-liner in cli/cli.py where the listen flag is set up (it gets cleared if --no-listen is provided, but doesn't get set if --no-listen is not provided). But I'd like to add a test too, something that parses an empty set of args and makes sure all the flags have their expected values.
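The described bug matches a flag whose "off" action exists but whose default is never established. A sketch with argparse (the project's real parser wiring may differ), including the empty-argv test the issue asks for:

```python
import argparse

def make_parser():
    p = argparse.ArgumentParser(prog="wormhole")
    # The one-line fix: give "listen" an explicit default of True, and
    # let --no-listen store False. The bug is the missing default, which
    # leaves args.listen as None when the flag is absent.
    p.add_argument("--no-listen", dest="listen", action="store_false",
                   default=True)
    return p

# Parse an empty set of args and check the flag's expected value:
args = make_parser().parse_args([])
assert args.listen is True

# And the flag still clears it when provided:
args = make_parser().parse_args(["--no-listen"])
assert args.listen is False
```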
While trying to port magic-wormhole's Tor support to Foolscap, I discovered that the "connect to existing Tor" code path doesn't read the SOCKS port from the running daemon the way it's supposed to. It works, but only because the running daemon happens to be using the default SOCKS port (which txtorcon.TorClientEndpoint uses when you pass in socks_port=None).
What it's supposed to do is to connect to the running daemon, fetch a copy of its config, then extract config.SocksPort and build the TorClientEndpoint from that.
There is a round-trip between the time the sender gets the receiver's PAKE message (and thus computes the session key), and the time the sender gets the receiver's VERSION message (and thus knows that the receiver used the right code). If this takes more than a second or two, it might be good to display a message like "Key established, waiting for confirmation..".
When attempting to receive a file that has the same name as an existing file, multiple uncaught exceptions are thrown:
servo:~/tmp/cdtemp.RGVnNd 0$ wormhole receive 3-cherokee-stormy
Error: refusing to overwrite existing file foo
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/twisted/internet/defer.py", line 1128, in _inlineCallbacks
result = g.send(result)
File "/usr/lib/python3/dist-packages/wormhole/cli/cmd_receive.py", line 114, in _get_data
returnValue(them_d)
File "/usr/lib/python3/dist-packages/twisted/internet/defer.py", line 1105, in returnValue
raise _DefGen_Return(val)
twisted.internet.defer._DefGen_Return: {'offer': {'file': {'filesize': 0, 'filename': 'foo'}}}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/wormhole/cli/cmd_receive.py", line 95, in _go
yield self._parse_offer(them_d[u"offer"], w)
wormhole.cli.cmd_receive.RespondError: file already exists
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/wormhole/cli/cli.py", line 99, in _dispatch_command
yield maybeDeferred(command)
File "/usr/lib/python3/dist-packages/twisted/internet/defer.py", line 1126, in _inlineCallbacks
result = result.throwExceptionIntoGenerator(g)
File "/usr/lib/python3/dist-packages/twisted/python/failure.py", line 389, in throwExceptionIntoGenerator
return g.throw(self.type, self.value, self.tb)
File "/usr/lib/python3/dist-packages/wormhole/cli/cmd_receive.py", line 67, in go
yield d
File "/usr/lib/python3/dist-packages/twisted/internet/defer.py", line 1126, in _inlineCallbacks
result = result.throwExceptionIntoGenerator(g)
File "/usr/lib/python3/dist-packages/twisted/python/failure.py", line 389, in throwExceptionIntoGenerator
return g.throw(self.type, self.value, self.tb)
File "/usr/lib/python3/dist-packages/wormhole/cli/cmd_receive.py", line 98, in _go
raise TransferError(r.response)
wormhole.errors.TransferError: file already exists
ERROR: file already exists
servo:~/tmp/cdtemp.RGVnNd 1$
There are exceptions during exceptions here. The receiver CLI should catch "normal" exceptions and present appropriate error messages to the user.
Let's use the new PEP-508 syntax to declare a dependency on whatever windows-specific things we need (pypiwin32, I think) when we're building on windows.
At the moment, to use the --verify feature, both sides need to turn it on (it tells the sender to print and wait for the verifier string, and it tells the receiver to print the verifier string). It'd be nice if either side could turn on --verify, without it being necessary for the other side to add the flag too.
To support this, I think the sender could put a flag (in the PAKE message, next to the actual SPAKE2 msg1) that tells the receiver "you should print the verifier string". This needs some interaction with the top-level application, though, since currently it is the application which owns the actual displaying of the verifier. I don't know how this should work cleanly.
It might also be interesting to have the receiver be able to provoke verification. In this case, the receiver would send the "please verify" flag in its own PAKE message, and the sender would turn on the display-and-compare behavior. An attacker could obviously strip out the flag, so --verify really only protects the side(s) where it was provided.
I'd love to see this tool available via Homebrew.
I've been reading a bit lately about ICE for NAT traversal (https://en.wikipedia.org/wiki/Interactive_Connectivity_Establishment). Is there any interest in supporting this as the first option instead of the (comparatively) heavy-weight relay service?
Or perhaps you're more interested in routing via tor? That would, of course, solve the NAT problem as well.
We're currently using argparse for CLI argument processing, but I'd like to move to Twisted's native argv-parsing code (twisted.python.usage.Options). The two goals are:
- wormhole rx does exactly the same thing as wormhole receive (I keep forgetting how to spell "receive" when doing a demo, how embarrassing!).

If three or more clients claim a single nameplate, that indicates a problem (since only two can work). That condition is called "crowded", and is supposed to be recorded in the server-side nameplate_usage table (by setting the mood column to crowded). It doesn't seem to do this yet: the CrowdedError is raised when the third claim arrives, but this doesn't mark the mood properly.
The slides have an example, but the README should have a quick "look, here's how easy it is to use" at the top.
I started a wormhole send last night, and didn't do the matching receive until the next morning. The two sides then failed to connect. It looks like both the sender and the server think they have a websocket connection open, but my home router's NAT table had dropped the forwarding entry because it hadn't seen any traffic in a while.
We need to turn on the TCP_KEEPALIVE option on both ends, so they eventually give up on connections that are broken this way. Note that TCP_KEEPALIVE doesn't actually keep anything alive, usually: it merely accelerates connection death. The specific behavior (using the default Linux kernel settings) is: when two hours pass without any outbound (ACK-able) traffic, start sending pings (which should provoke ACKs) every 75 seconds; when 11 of these have been sent without a response (about 10 minutes), drop the connection. The new messages occur far too late to prevent a NAT box from timing out, so this really just reduces the idle-until-dead time from infinity to about two hours.
It's possible to change these settings in the kernel, but almost nobody does, so you always need some application-level timers.
We also need something to send a periodic ping message, maybe once every 5 or 10 minutes, to keep the NAT entry alive. I'll check to see if Autobahn already has an option for this.. it seems like a pretty common issue with long-running websockets (something I'd expect to be included in the spec, in fact).
There's a pathological failure mode here, where the server is doing TCP_KEEPALIVE, but the client is not, and the NAT entry gets deleted. The server sees the socket disconnect (from the timeout), then eventually expires the nameplate (about 20min with the new expiration code). If the second client tries their wormhole receive after this point, they'll wind up re-allocating the same nameplate (since it's now idle), and then they'll wait forever for a sender which has already been lost. And the sender doesn't realize anything's gone wrong, because they have keepalives turned off, so they think they still have a connection.
Fortunately, sending a periodic ping should both prevent the NAT entry from expiring, and will detect it fairly quickly (a few minutes, I think) if it ever does get dropped.
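At the raw-socket level, the keepalive part is a few setsockopt calls. A sketch (the 600s/75s values are illustrative choices, and the Linux-only knobs are guarded because they don't exist on every platform):

```python
import socket

def enable_keepalive(sock):
    # Turn on TCP keepalives so a NAT-dropped connection is eventually
    # noticed. As noted above, this does not keep the NAT entry alive;
    # an application-level ping is still needed for that.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Linux-specific knobs to shorten the ~2-hour default idle period.
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 600)
    if hasattr(socket, "TCP_KEEPINTVL"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 75)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
enable_keepalive(s)
assert s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) != 0
s.close()
```

With Twisted, the equivalent is calling setTcpKeepAlive(True) on the transport once the connection is made; the websocket-level ping would be separate, sent on a timer.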
Both functions currently do similar things (they stash a Deferred so that _signal_error() can errback it), but they do it in different ways:
- input_code stashes a Deferred into self._input_code_waiter
- get_code stashes itself into self._get_code, from which we can get ._allocated_d
I'd like to have a web-capable version. My plans are:
It'd be nice if the wormhole receive side could remind users that tab-completion is available. This might also help avoid the confusion that happens when the sender says (speaks) "the wormhole code is three purple kumquat", and the receiver doesn't guess that there are supposed to be hyphens between the words instead of spaces.
I didn't want to make wormhole receive show too specific an example (e.g. Type in the wormhole code here. Wormhole codes look like "4-purple-kumquat".), for fear that folks would literally type in the example text. But maybe something like Wormhole codes look like "NN-WORD-WORD" might be sufficiently non-specific to avoid confusion.
$ wormhole --version
magic-wormhole 0.8.0
$ wormhole send 104OLYMP
Building zipfile..
main function encountered error
Traceback (most recent call last):
File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 1274, in unwindGenerator
return _inlineCallbacks(None, gen, Deferred())
File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 1128, in _inlineCallbacks
result = g.send(result)
File "/usr/lib/python2.7/site-packages/wormhole/cli/cmd_send.py", line 51, in go
d = self._go(w)
File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 1274, in unwindGenerator
return _inlineCallbacks(None, gen, Deferred())
--- <exception caught here> ---
File "/usr/lib64/python2.7/site-packages/twisted/internet/defer.py", line 1128, in _inlineCallbacks
result = g.send(result)
File "/usr/lib/python2.7/site-packages/wormhole/cli/cmd_send.py", line 63, in _go
offer, self._fd_to_send = self._build_offer()
File "/usr/lib/python2.7/site-packages/wormhole/cli/cmd_send.py", line 215, in _build_offer
num_files += 1
File "/usr/lib64/python2.7/zipfile.py", line 801, in __exit__
self.close()
File "/usr/lib64/python2.7/zipfile.py", line 1347, in close
" would require ZIP64 extensions")
zipfile.LargeZipFile: Central directory offset would require ZIP64 extensions
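The traceback ends with zipfile refusing to build an archive whose central directory offset exceeds the classic ZIP limits. The likely fix is passing allowZip64=True when the offer zipfile is constructed (the parameter exists in Python 2.7's zipfile, where it defaults to False). A small demonstration of the flag:

```python
import io
import zipfile

# Build the archive with ZIP64 extensions enabled, so large directory
# trees no longer raise zipfile.LargeZipFile at close() time.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_DEFLATED,
                     allowZip64=True) as zf:
    zf.writestr("foo.txt", b"hello")

# The resulting archive reads back normally.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    assert zf.read("foo.txt") == b"hello"
```

(A tiny archive obviously doesn't trigger ZIP64, but the flag is harmless when unused and removes the failure for multi-gigabyte sends.)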
Currently, magic-wormhole relies on *nix commands, such as ifconfig, which are not available on Cygwin.
Please consider a compatible way to make it work on Cygwin.
I'm trying to figure out how to split this package up into separate modules, specifically so that:
- pip install magic-wormhole gets the wormhole executable
- the wormhole executable uses Twisted (this makes the Transit/connection stuff easier to implement, and will speed things up by parallelizing connections in a way that's difficult with threads)

My current plan (for which I'm looking for feedback) is to drop the blocking-style Transit library (so blocking-style users get Wormhole, and can exchange messages, but don't get Transit for bulk file transfer), and then split things up into four pypi distributions:

| PyPI name | scripts | import name | dependencies | contains |
|---|---|---|---|---|
| magic-wormhole-lib | | wormhole | | blocking-flavor library, code shared by both blocking+twisted flavors |
| txwormhole | | txwormhole | magic-wormhole-lib (for common code) | twisted-flavor library |
| magic-wormhole | wormhole | wormhole_cli (only for itself) | magic-wormhole-lib, txwormhole | the actual tool |
| magic-wormhole-server | wormhole-server | wormhole_server (only for itself) | twisted, sqlite | Relay and Transit servers |
This would all live in a single git repo (this one), split into four subdirectories (with four separate setup.py files), and the tox.ini file would install all four into a single virtualenv before running the tests. The tests themselves would be moved to a fifth subdirectory.
(this depends upon a Versioneer enhancement (python-versioneer/python-versioneer#61), which is blocked by a pip bug (pypa/pip#3615), but having tox use --editable= might be a temporary workaround, at least until release time)
Running 'tox' on wormhole trunk (currently at cdb5c19) hangs at test_scripts.Cleanup. Downgrading Twisted from the current 16.3.0 to 16.2.0 lets it pass all tests.
@gtank noticed that typing in spaces (instead of hyphens) into the wormhole code causes an exception. I'm guessing the readline library (in particular the completion function I wrote) doesn't appreciate those characters.
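One possible fix is to normalize the typed code before the completer (or the final parser) sees it, collapsing whitespace into the hyphens the wordlist expects. A sketch (normalize_code is a hypothetical helper, not an existing function in the codebase):

```python
# Sketch: treat spaces as hyphens in typed wormhole codes, so
# "3 purple kumquat" is accepted as "3-purple-kumquat".

def normalize_code(typed):
    # split() with no arguments collapses runs of any whitespace,
    # so extra or mixed spacing is also tolerated.
    return "-".join(typed.strip().split())

assert normalize_code("3 purple kumquat") == "3-purple-kumquat"
assert normalize_code("3-purple-kumquat") == "3-purple-kumquat"
assert normalize_code("  3  purple   kumquat ") == "3-purple-kumquat"
```

This wouldn't fix the readline completer mid-typing, but applying it to the final input would at least prevent the exception and accept the natural spoken form.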
The source tree is compatible with py2.6/3.3/3.4/3.5, which occasionally makes for some awkward constructs. There are a lot of strings declared as u"foo" instead of just "foo". If we added from __future__ import unicode_literals, especially to wormhole.py, we could probably clean this up.
We had an idea today, along the lines of #32, but for GPG keys instead of SSH keys. In particular, at a conference, I was asked to sign someone else's GPG public key. We wound up using wormhole send to get me his key, I did gpg --import to add it to my local keyring, then gpg --edit-key to sign it, then gpg --export to write the signed key to a new file, then I did wormhole send to get the signed key back to him, then he did gpg --import to add the new signature to his keyring. The big security benefit of using wormhole for this is that I knew I got the right key, as opposed to having him dictate his keyid and fingerprint to me first.
It might be handy to have a wormhole command that does all this for you. He would run wormhole gpg please-sign, which would find (perhaps ask for) the right public key and send it over to the side that runs wormhole gpg sign, which does the same signing dance, then sends the signed key back. It could have options to specify which key you want to sign with (or it could do gpg --list-secret-keys and then ask the user to pick one).
If this actually turns out to be useful, it might be good to set up a plugin mechanism so that the "gpg" subcommand doesn't need to be shipped with the "magic-wormhole" package directly, but could be installed with some "magic-wormhole-gpg" package instead.
I'd expect an attempt to use a non-responsive or non-available relay to cause "wormhole send" to fail with an error code, but instead it hangs:
0 dkg@alice:~$ wormhole --relay-url ws://no-such-server/v1 send --text 'bananas'
Sending text message (7 bytes)
On the other computer, please run: wormhole receive
Unhandled Error
Traceback (most recent call last):
Failure: twisted.internet.error.DNSLookupError: DNS lookup failed: Couldn't find the hostname 'no-such-server'.
I have to hit Ctrl-C to get out of this state.
Sometimes people have less free space on disk than the size of the zipped data they want to send. In such cases it would be reasonable to stream data and calculate checksums on the fly rather than to create a zipfile first.
A suggestion by @teufen on twitter: allow some way for the receiving user to allow an existing target file to be overwritten. This is especially useful if you're running the same wormhole send multiple times (with a newer version of the file).
It's not safe, in general, to overwrite existing files, especially because the filename is being selected by the sender. But we could give the receiver a way to opt in to the overwrite, if it's happening predictably. And the existing --output-file= option is a pretty clear and predictable signal: any overwriting is going to affect a single user-specified path.
So the rule would be that if the recipient receives a file that already exists, but the filename came from --output-file=, then print an overwriting FILENAME message and overwrite it.
TBD: what to do when --output-file= is a directory: do we rm -rf the old directory and replace it entirely? (also, maybe this suggests that --output-file= is misnamed, if it could refer to directories)
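The proposed rule reduces to a small predicate: overwriting is allowed only when the destination was explicitly named by the receiver. A sketch (may_overwrite is a hypothetical helper, and the directory case above is deliberately left open):

```python
import os

def may_overwrite(dest_path, from_output_file):
    # Writing to a fresh path is always fine.
    if not os.path.exists(dest_path):
        return True
    # The path exists: only overwrite if the receiver chose it
    # explicitly via --output-file=, never for a sender-chosen name.
    return bool(from_output_file)

assert may_overwrite("/nonexistent/brand-new-file", False) is True
assert may_overwrite(os.devnull, False) is False   # exists, sender-named
assert may_overwrite(os.devnull, True) is True     # exists, --output-file=
```

When the predicate returns True for an existing path, the CLI would print the overwriting FILENAME notice before replacing it.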
We came up with a neat feature idea at PyCon: using magic-wormhole to set up SSH pubkeys.
The use case is that Alice owns a computer, and wants to give Bob SSH access to it. Either Alice is root on the host and she's setting up a new account for Bob, or Alice is a normal user (logged in already) and is trying to add her own pubkey.
Alice runs something like wormhole add-ssh, maybe as wormhole add-ssh --user=bob. Then Bob runs wormhole send-ssh. The add-ssh command generates and displays a wormhole code. The send-ssh command looks in ~/.ssh/, finds your pubkeys, asks you which one you want to send, then accepts the wormhole code and sends the pubkey. When add-ssh receives the pubkey, it appends it to ~/.ssh/authorized_keys of the given user account.
In a high-volume relay, an attacker looking to intercept files can use the _list function to find nameplates waiting for a receiver, and use them to attempt file interception. On a high-traffic relay, say 2-3 transfers a second, you have about a 30% chance of an intercept over the course of 2 hours.
If we prevent auto-completion of nameplates, however, an attacker would have to guess nameplates to find one with a sender listening. Moreover, if they are rapidly polling low nameplates, legitimate users would be less likely to receive those nameplates from the server, especially if rate limiting is implemented on lonely nameplates.
Looking towards the future: magic-wormhole uses a single central rendezvous server to allow clients to find each other based upon just a small integer. If/when it becomes more popular, the traffic on this server might be an issue. How could we use multiple server hosts to avoid this bottleneck?
(note that this is independent of the "transit relay" server, which is morally equivalent to a TURN server and helps file transfers to work when both clients are behind NAT boxes, by gluing two TCP streams together)
There are two components to the rendezvous server. The first is channel allocation: the most common workflow is for wormhole send
to allocate a channel by asking the server to assign one, and this needs to be unique (so the server needs to know a complete list of all channel IDs currently in use). These are called "nameplates" in the protocol and the code.
The second is "mailbox" management: a semi-persistent set of messages, and a publish-subscribe protocol for both sides to add and retrieve those messages. Each nameplate maps to a mailbox, each mailbox has a queue/set, and each client connects to a mailbox.
I think the channel-allocation part requires some sort of singular central table. Even if we get multiple servers involved, I think they'll need to talk to a single memcached or something.
The mailboxes could be sharded out among multiple machines, as long as there's a way for every client to get connected to the right machine. For example, we could use DNS round-robin or something to point to a fleet of servers, all of which can use that memcached instance to find out where a given mailbox lives. If a client connects to the wrong server, that server looks up the mailbox in the central table, learns about the correct server, then redirects the client to them. I think that means one non-sharded message per lookup, but everything else scales with the number of servers we use.
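The redirect flow described above can be sketched with an in-memory dict standing in for the central memcached table. All names here (assign_mailbox, connect, the server list) are illustrative, and the placement policy is left open:

```python
# Sketch: shared mailbox-location table plus a wrong-server redirect.

central_table = {}   # mailbox-id -> server name (stand-in for memcached)
SERVERS = ["relay-a", "relay-b", "relay-c"]

def assign_mailbox(mailbox_id):
    # Illustrative placement: hash the id across the fleet. The real
    # policy (load-based, consistent hashing, ...) is an open question.
    server = SERVERS[hash(mailbox_id) % len(SERVERS)]
    central_table[mailbox_id] = server
    return server

def connect(server, mailbox_id):
    home = central_table[mailbox_id]
    if server == home:
        return ("ok", server)
    # One non-sharded lookup, then the client retries at the right host.
    return ("redirect", home)

home = assign_mailbox("m1")
assert connect(home, "m1") == ("ok", home)
other = next(s for s in SERVERS if s != home)
assert connect(other, "m1") == ("redirect", home)
```

Only the lookup touches the central table; once a client lands on the right server, all mailbox traffic scales with the number of servers.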
A friend and I were able to transfer a file directly, when both of us were normally using a VPN (which means the data has to go through the relay, very slow), by switching to a local ad-hoc network. (On OS X, this is done by choosing "Create Network.." from the wifi menu.) To accomplish this, I had to run a relay locally, and he had to use --relay-url=ws://MY-IP-ADDR:4000/v1. Neither of us was connected to the real internet while on the ad-hoc network, so the normal relay wasn't available.
It'd be slick to have a mode that does this without running a local relay. MCT and I were thinking about this a while ago. I'm imagining something like --relay-url=multicast, or --relay-multicast, or MDNS, or some flag to say "use the local network instead of the default relay server".
There might be a cheaper/easier/less-coding way to go, like by having a distinguished side (maybe the sender) run a full rendezvous server (with an in-memory database instead of relay.sqlite on disk), then register it on a well-known MDNS/zeroconf/bonjour name. The receiver could repeatedly try to connect to a relay at that well-known name, and then speak the normal relay protocol.
The Transit object, in .get_connection_hints(), returns a list of IP addresses (gleaned by running ifconfig) that the other side should connect to. We should probably strip 127.0.0.1 from the list, because it is only ever useful when both hosts are running on the same machine, which only ever happens in developer tests.
(It might be nice to have a flag to turn this back on, for the test suite, but for normal CLI use, 127.0.0.1 isn't helpful.)
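The filter itself is a one-liner plus the test-only escape hatch. A sketch (filter_hints and include_loopback are hypothetical names; the prefix check is deliberately naive and would need extending for IPv6 ::1):

```python
# Sketch: drop loopback addresses from the connection-hint list, with a
# flag to keep them for the test suite.

def filter_hints(hints, include_loopback=False):
    if include_loopback:
        return list(hints)
    # Naive IPv4-only check: the whole 127.0.0.0/8 block is loopback.
    return [h for h in hints if not h.startswith("127.")]

hints = ["127.0.0.1:9876", "192.168.1.5:9876", "10.0.0.2:9876"]
assert filter_hints(hints) == ["192.168.1.5:9876", "10.0.0.2:9876"]
assert filter_hints(hints, include_loopback=True) == hints
```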
codes.py has:
readline.parse_and_bind("tab: complete")
On OS X, which uses libedit, you need:
readline.parse_and_bind("bind ^I rl_complete")
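The usual way to pick the right binding at runtime is to sniff for libedit in readline's docstring, since OS X's readline shim reports "libedit" there. A sketch (binding_for is a hypothetical helper):

```python
# Sketch: choose the completion binding based on the readline backend.

def binding_for(readline_doc):
    if readline_doc and "libedit" in readline_doc:
        return "bind ^I rl_complete"   # OS X / libedit
    return "tab: complete"             # GNU readline

assert binding_for("... uses libedit under the hood ...") == "bind ^I rl_complete"
assert binding_for("GNU readline") == "tab: complete"
assert binding_for(None) == "tab: complete"
```

At import time this would be used as: import readline; readline.parse_and_bind(binding_for(readline.__doc__)).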
In the server, I think I've seen cases where messages rows are left lingering even though their corresponding mailbox row has been deleted. I'm thinking that the free_mailbox function isn't correctly DELETEing the rows.
I had to remove the blocking API the other week when performing my massive refactoring. I want to bring it back, by using Crochet.
To quickly reclaim unused channel-IDs, the server should notice when the last client has disconnected, and delete the nameplate/mailbox/messages right away. (in the future, we'll have "persistent wormholes" that don't have this property, but clients will ask explicitly to keep their channel alive).
When a nameplate is released and deleted, the server should avoid re-allocating it to someone else for maybe 10 minutes. This would add some weak protection against an MitM attack: if they guess the code correctly, they'll steal the connection to the first side, but they wouldn't be able to make a connection to the second side until the server releases the channel.
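The cooldown described above amounts to tracking a release timestamp per nameplate and skipping any nameplate released too recently. A sketch (the Nameplates class and its method names are illustrative, not the server's actual schema):

```python
# Sketch: lowest-available nameplate allocation with a re-use cooldown.

COOLDOWN = 600  # 10 minutes, per the suggestion above

class Nameplates:
    def __init__(self):
        self._in_use = set()
        self._cooling = {}   # nameplate -> time it was released

    def allocate(self, now):
        n = 1
        # Skip nameplates that are in use, or released within COOLDOWN.
        while str(n) in self._in_use or (
                str(n) in self._cooling and
                now - self._cooling[str(n)] < COOLDOWN):
            n += 1
        self._in_use.add(str(n))
        return str(n)

    def release(self, nameplate, now):
        self._in_use.discard(nameplate)
        self._cooling[nameplate] = now

np = Nameplates()
assert np.allocate(now=0) == "1"
np.release("1", now=10)
assert np.allocate(now=20) == "2"            # "1" is still cooling down
assert np.allocate(now=10 + COOLDOWN) == "1" # cooldown has expired
```

An attacker who stole the first connection would thus be unable to grab a fresh connection to the second side until the cooldown lapses.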