
user's Introduction

user

user is a CLI renter for Sia. It is an alternative to siad's renter module. The biggest difference is that user is a program that you invoke to perform specific actions, whereas siad is a daemon that runs continuously in the background. The other major difference is that siad manages your contracts for you by selecting good hosts, maintaining a pool of usable contracts, and automatically renewing contracts when necessary. user, by contrast, offloads these responsibilities to a muse server.

user provides some functionality that siad does not. It allows you to upload and download without having to run a full node; it efficiently stores small files; it makes it easy to share files with your friends; and more. On the other hand, since user does not run in the background, it cannot automatically repair your files like siad does.

Setup

First you'll need to install user by checking out this repository and running make. Run user version to confirm that it's installed.
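For example (the repository path here is an assumption; adjust to wherever you checked it out):

$ git clone https://github.com/lukechampine/user && cd user
$ make
$ user version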

user cannot operate on its own; it needs contracts, which it gets from a muse server. You can specify the address of this server using a CLI flag, but it's more convenient to add it to your config file.

Uploading and Downloading Files

user stores and retrieves files using metafiles, which are small files containing the metadata necessary to retrieve and modify a file stored on a host. Uploading a file creates a metafile, and downloading a metafile creates a file. Metafiles can be downloaded by anyone possessing contracts with the file's hosts. You can share a metafile simply by sending it; to share multiple files, bundle their corresponding metafiles in an archive such as a .tar or .zip.

Note that metafiles represent "snapshots" of a file at a particular time; if you share a metafile, and then modify your copy, the recipient will not see your modifications. Likewise, if the recipient modifies their copy, it will not affect your own.

The upload and download commands are straightforward:

$ user upload [file] [metafile]

$ user download [metafile] [file]

file is the path of the file to be read (during upload) or written (during download), and metafile is the path where the file metadata is written (during upload) or read from (during download). The extension for metafiles is .usa (the "a" is for "archive"). If you omit the final argument, the name of the file or metafile is chosen automatically (by either removing or appending the .usa extension).
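For example (filenames hypothetical):

$ user upload movie.mkv
$ user download movie.mkv.usa

The first command writes movie.mkv.usa next to the file; the second recreates movie.mkv from it.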

When uploading, you must specify the desired redundancy of the file, which you can do by passing the -m flag or by setting the min_shards value in your config file. This value refers to the minimum number of hosts that must be reachable for you to download the file. For example, if you have 10 contracts, and you upload with -m 5, you will be able to download as long as any 5 hosts are reachable. The redundancy of the file in this example is 2x.

The upload command erasure-encodes file into "shards," encrypts each shard with a different key, and uploads one shard to each host. The download command is the inverse: it downloads shards from each host, decrypts them, and joins the erasure-encoded shards back together, writing the result to file.
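To make that pipeline concrete, here is a minimal Go sketch of the split-and-encrypt step. It uses github.com/klauspost/reedsolomon and NaCl secretbox purely for illustration; user's actual codec (in the us library) uses its own erasure coding and seed-derived keystreams, so treat this as a sketch of the idea, not the real implementation.

package main

import (
    "crypto/rand"
    "fmt"

    "github.com/klauspost/reedsolomon"
    "golang.org/x/crypto/nacl/secretbox"
)

func main() {
    // 3 hosts with min_shards = 2: any 2 shards recover the file (1.5x redundancy).
    const minShards, totalShards = 2, 3
    enc, err := reedsolomon.New(minShards, totalShards-minShards)
    if err != nil {
        panic(err)
    }

    data := make([]byte, 1<<20)    // stand-in for the file contents
    shards, err := enc.Split(data) // split into minShards data shards
    if err != nil {
        panic(err)
    }
    if err := enc.Encode(shards); err != nil { // fill in the parity shards
        panic(err)
    }

    // Encrypt each shard under its own key; each host receives one shard.
    for i, shard := range shards {
        var key [32]byte
        var nonce [24]byte
        rand.Read(key[:])
        rand.Read(nonce[:])
        sealed := secretbox.Seal(nil, shard, &nonce, &key)
        fmt.Printf("host %d gets shard %d (%d bytes)\n", i, i, len(sealed))
    }
}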

Uploads and downloads are resumable. If metafile already exists when starting an upload, or if file is smaller than the target filesize when starting a download, then these commands will pick up where they left off.

You can also upload or download multiple files by specifying a directory path for both file and metafile. The directory structure of the metafiles will mirror the structure of the files. This variant is strongly recommended when uploading many small files, because it allows user to pack multiple files into a single 4MB sector, which saves lots of bandwidth and money. (Normally, each uploaded file must be padded to 4MB.)
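For example (directory names hypothetical):

$ user upload -m 10 photos/ photos.meta/

This uploads every file under photos/ and writes a mirrored tree of .usa metafiles under photos.meta/, packing small files together where possible.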

It is also possible to redirect a download command:

$ user download [metafile] | wc -l

This means you can pipe downloaded files directly into other commands without creating a temporary file.

Migrating Files

When metafiles have a redundancy greater than 1x, they can still be downloaded even if some of their hosts are unreachable. But if too many hosts become unreachable, the metafile will be lost. For this reason, it is prudent to re-upload your metafiles to better hosts if they are at risk of being lost. In us, this process is called "migration."

If you have a local copy of the original file, you can re-upload it to the new hosts immediately. If you don't have a local copy, you must download the file first, then re-upload it to the new hosts. In user, these options are called file and remote, respectively.

Let's assume that you uploaded a file to three hosts with min_shards = 2, and one of them is now unresponsive. You would like to repair the missing redundancy by migrating the shard on the unresponsive host to a new host. If you had a copy of the original file, you could run:

$ user migrate -file=[file] [metafile]

Unfortunately, in this example, you do not have the original file. However, there are still two good hosts available, so you can download their shards and use them to reconstruct the third shard by running:

$ user migrate -remote [metafile]

Note that in a remote migration, the file is not actually downloaded to disk; it is processed piecewise in RAM. You don't need any free disk space to perform a migration.

Like uploads and downloads, migrations can be resumed if interrupted, and can also be applied to directories.

Configuration

user can be configured via a file named ~/.config/user/config.toml:

# API address of muse server.
# REQUIRED.
muse_addr = "muse.lukechampine.com/<my-muse-id>"

# API address of SHARD server.
# OPTIONAL. If not provided, the muse server will be used instead.
shard_addr = "shard.lukechampine.com"

# Minimum number of hosts required to download a file. Also controls
# file redundancy: uploading to 40 hosts with min_shards = 10 results
# in 4x redundancy.
# REQUIRED (unless the -m flag is passed to user).
min_shards = 10

Extras

Uploading and Downloading with FUSE

FUSE is a technology that allows you to mount a "virtual filesystem" on your computer. The user FUSE filesystem behaves like a normal folder, but behind the scenes, it is transferring data to and from Sia hosts. You can upload a file simply by copying it into the folder, or download by opening a file within the folder.

The command to mount the virtual filesystem is:

$ user mount [metadir] [mnt]

metadir is the directory where metafiles will be written and read. Each such metafile will correspond to a virtual file in the mnt directory. For example, if you create bar/foo.txt in mnt, then bar/foo.txt.usa will appear in metadir.
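For example (paths hypothetical):

$ user mount meta/ mnt/ &
$ cp report.pdf mnt/
$ ls meta/
report.pdf.usa

Copying report.pdf into mnt/ uploads it; the corresponding metafile appears in meta/.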

Unlike most user commands, mount will remain running until you stop it with Ctrl-C. Don't kill it suddenly (e.g. by turning off your computer) or you will almost certainly lose data. If you do experience an unclean shutdown, you may encounter errors accessing the folder later. To fix this, run fusermount -u on the mnt directory to forcibly unmount it.

Downloading over HTTP

user can serve a directory of metafiles over HTTP with the serve command:

$ user serve [metadir]

You can then browse to http://localhost:8080 to view the files in your web browser.
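Files can then also be fetched with ordinary HTTP tools. Assuming the served paths mirror the metafile names (an assumption, not documented behavior):

$ user serve meta/
$ curl -O http://localhost:8080/report.pdf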

user's People

Contributors

eriner, jkawamoto, lukechampine


user's Issues

Feature Request: Contract file removal

Remove expired contract files/links from the ~/.config/user/contracts-available (and -enabled) directories whenever the user program determines, during an upload or download, that a contract has expired.

idea: automatic creation & prolongation of existing contracts

Flexibility is a major requirement for good software, but for usability we also need some automation of our processes.

So I propose the following functionality: implement the next variables in the configuration file.
minimal redundancy
maximal redundancy
minimal shards
maximal shards
forecasted_download_bandwidth_per_month
forecasted_upload_bandwidth_per_month
maximal_price_for_terabyte_per_month (cost of storage + cost of uploads + cost of downloads)
minimal_upload_bandwidth_filter
minimal_download_bandwidth_filter
revision_time (a window of time in which to check that our contract's conditions are met)

Since there are important indicators beyond the total rating at https://siastats.info/hosts, I propose implementing user-defined host filters for creating contracts. For example, I need very good bandwidth for my work - price is always important, but bandwidth can have the highest priority for me. Please implement some filters: for example, to define a minimum bandwidth, we need two separate conditions specifying the minimum required host bandwidth for upload and for download.

Feature Request: Duplicate host contracts

Have user inform the user, when forming new contracts, that an existing non-_old contract with that host is already in the available folder and is not marked as being re-negotiated or expired.

I was able to form two different contracts with a single host (unknowingly), and of course the upload test failed when trying to upload to the same host with two different contracts. It came up with some error about duplicate host contracts (I can't recall exactly; it was a while ago).

Fix short hostkey lookup

This should work:

user scan 4074d1 1 1 1

But instead it returns:

Scan failed: could not lookup host: host announcement not found

This is because the scanHost function is calling ResolveHostKey on the truncated pubkey (4074d1). ResolveHostKey is only supposed to be used with full pubkeys. So at some point (either before scanHost or within it) we need to be calling LookupHost in order to turn the truncated pubkey into a full pubkey.
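A hedged, Go-style sketch of the fix (the signatures below are assumptions for illustration, not the actual us API):

// Widen a truncated pubkey before resolving it.
hostKey := args[0] // e.g. "4074d1"
if len(hostKey) < fullPubkeyLen {
    fullKey, err := LookupHost(hostKey) // truncated -> full pubkey
    if err != nil {
        return err
    }
    hostKey = fullKey
}
addr, err := ResolveHostKey(hostKey) // safe: full pubkeys only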

Feature Request: Ability to specify redundancy

Beyond specifying the number of shards with the -m option during upload, allow specifying a redundancy requirement as well.

For example:

user upload -m 10 -r 3.0 test.zip

This would tell user to use 30 "active" and responding hosts out of the enabled contracts to upload this file to.
Using only "active" and responding hosts would hopefully help prevent upload failures, because user currently uses all enabled hosts and fails the upload if a single host fails with any error.
However, it should also fail the upload if there aren't enough responding hosts among the enabled contracts to fulfill the minimum redundancy requirement.

Panic error when trying to download

After having ~15 of my 50 host contracts expire, trying to download from the rest produces this panic (which appeared about 5 minutes after issuing the download command):

root@sia-test:~# user download DOGP.zip.50hosts.usa
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x5f7906]

goroutine 132 [running]:
lukechampine.com/us/renter.(*ShardDownloader).CopySection(0x0, 0x92af80, 0xc003e65b90, 0x0, 0x400000, 0xc0001b09c0, 0xc003df8e10)
	/root/go/src/lukechampine.com/us/renter/download.go:116 +0x26
lukechampine.com/us/renter/renterutil.(*PseudoFS).fileReadAt.func1(0xc003f2e000, 0xc0000c4000, 0x32, 0x32, 0x0, 0x400000, 0xc003f2c000, 0x32, 0x32, 0xc000179c20)
	/root/go/src/lukechampine.com/us/renter/renterutil/fileops.go:430 +0x11c
created by lukechampine.com/us/renter/renterutil.(*PseudoFS).fileReadAt
	/root/go/src/lukechampine.com/us/renter/renterutil/fileops.go:427 +0x499
root@sia-test:~# 

user with walrus & shard - how to use?!

Dear Luke,
I want to use "user" with walrus & shard so that the contracts from my siad can be managed automatically. I want to try "user" with "fuse", but I need it to manage contracts automatically or use the ones that already exist in siad.

  1. siad is started and has contracts and some SC.
  2. ./walrus -http 127.0.0.1:9999
     Listening on 127.0.0.1:9999...
  3. ./shard -r 127.0.0.1:9385
     2019/11/27 21:14:41 Listening on :8080...
  4. config.toml for user:
     siad_addr = "localhost:9980"
     min_shards = 10
     shard_addr = "127.0.0.1:8080"
     walrus_addr = "127.0.0.1:9999"
  5. ./user mount /mnt/user_meta /mnt/user
     Mounted!
     Create 1.txt: minShards cannot be greater than the number of hosts - this happened when I tried to copy 1.txt into the mounted /mnt/user.

Where is the problem - in my configuration, or somewhere else?

P.S. Repertory is a very good idea, but it has problems with its file-based cache - it's not usable if you want to work with big files over FUSE. A chunk-based cache would of course be better than a file-based one. The main problem is that if I have a big file but want to modify only a small part of it, Repertory tries to re-upload the whole file. If my file is 2 GB and I want to modify only several KB, there is no reason to upload 2 GB again. I hope that "user" will give me the ability to do this. Moving anything onto Sia is a great idea: for example, I want to create an ext4 filesystem in a file over FUSE and then try to host a traditional MySQL or SQL Server database on the blockchain.

Many thanks in advance.

Seed should not be required when scanning

user scan will prompt you for your seed, even though the only reason it needs a wallet is to get an estimate for the contract fees. This is because renterutil.NewWalrusClient requires the seed as a parameter.

I'm not sure how best to address this. I could make NewWalrusClient "lazy," such that it only requires a seed if you call SignTransaction or NextWalletAddress. But that's kind of a weird API. Passing the seed to those functions directly would be reasonable, but then they wouldn't implement the same interface as before. Finally, I could add a separate "unprivileged" client type, but that's a weird API too (why would you not have the seed to your own wallet?). I'll have to sit on this awhile longer.

Inaccurate upload speeds

Currently if you upload a small file with user upload, it will claim that it uploaded the whole file super fast, like 50MB/s or something. Then it will hang for a bit before exiting.

Of course, what's really happening is that the file was simply copied into the upload buffer super fast, and the hanging at the end is when the file was actually being uploaded.

Fortunately, there's an easy fix: just call Sync after each Write call, and treat each Write+Sync as one write when calculating the speed.
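A minimal sketch of that fix, assuming the upload target exposes Write and Sync like an *os.File (all names here are illustrative, not user's actual code):

package main

import (
    "fmt"
    "os"
    "time"
)

// syncWriter counts a Write only after the following Sync returns, so the
// measured rate reflects bytes actually flushed, not bytes merely buffered.
type syncWriter struct {
    f       *os.File
    written int64
    start   time.Time
}

func (w *syncWriter) Write(p []byte) (int, error) {
    n, err := w.f.Write(p)
    if err != nil {
        return n, err
    }
    if err := w.f.Sync(); err != nil { // treat Write+Sync as one write
        return n, err
    }
    w.written += int64(n)
    return n, nil
}

func (w *syncWriter) bytesPerSecond() float64 {
    return float64(w.written) / time.Since(w.start).Seconds()
}

func main() {
    f, err := os.CreateTemp("", "speed")
    if err != nil {
        panic(err)
    }
    defer os.Remove(f.Name())
    w := &syncWriter{f: f, start: time.Now()}
    w.Write(make([]byte, 1<<20))
    fmt.Printf("%.1f MB/s\n", w.bytesPerSecond()/1e6)
}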

Filename argument to form/renew is handled unintuitively

If you run user form a1b2c3d4 10SC 1000 foo.contract, you would expect the resulting contract to be written to foo.contract, but instead it is written to [contract dir]/foo.contract. The contract is also enabled via symlink. If an explicit filename is provided, user should not try to be smart; it should just write the contract to the specified path.

idea: upload & download speed acceleration

After investigating siad & siac, I can say that we have great possibilities to accelerate download & upload speeds, because the original Sia client seemingly doesn't use these possibilities even when it has enough contracts. I tried setting up 150 hosts and have a lot of contracts, but I get very bad speeds, and redundancy in this case is only 3x - the "renter allowance" command gives this information.

For example: we can see a lot of hosts that offer very cheap storage but have very low speeds - say around 1 Mbit upload & 1 Mbit download, usually at a cost near 100 SC per TB. Faster hosts offer speeds near 50 Mbit, but cost near 400 SC per TB.

What can we do to get both cheap prices and fast speeds?

Algorithm.
Redundancy - a user-adjustable variable. For example, 4x.
Hosts - a user-adjustable variable. For example, 40 hosts.
Upload/download speed for each host - 1 Mbit, just for example.
This setup would give us 10 Mbit upload/download speeds even if each individual host connection is only 1 Mbit.

  1. All our data must be divided into 10 parts, which must be uploaded 4 times across the 40 hosts.
  2. We must run the upload/download in 4 passes, because we need to economize our own bandwidth. But each pass (4 passes total, depending on the redundancy) must run 10 transfers at once - this is what increases speed. A second benefit: after the first pass our data is fully uploaded, so we already have a complete copy on the hosts and can start downloading it if the situation requires (e.g. if we want to run a network filesystem like ext4 on top of Sia).
  3. In real life these hosts will all have different speeds - one host 1 Mbit, another 2 Mbit. We want each pass to finish at about the same time, but a worker (the process transferring 1 of the 10 parts) may finish much earlier than the others. In that case it should take over part of the data from a worker that is still uploading - without interrupting that worker, just taking the portion that has not been uploaded yet. For example, one worker is uploading a 100 MB part and has uploaded 40 MB so far; another worker has already uploaded its own 100 MB and is now free, so it can take 30 MB from the worker that still has 60 MB to go. If you ask which of the 10 workers should hand over data when one finishes: of course, always pick the busiest worker, the one with the most data left to upload.
  4. After the first pass, we start the next passes one by one to raise redundancy to the required setting. (A rough sketch of the chunk-queue idea from step 3 appears below.)
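For illustration, here is a hedged Go sketch of the load balancing described in step 3 (all names and numbers are hypothetical). Rather than explicitly stealing from the busiest worker, each host worker pulls fixed-size chunks from a shared queue; fast hosts simply drain more of the queue, which achieves the same effect.

package main

import (
    "fmt"
    "sync"
    "time"
)

type chunk struct{ id, sizeMB int }

// upload simulates a transfer: 1 ms stands in for 1 s of real time.
func upload(host string, c chunk, mbit int) {
    time.Sleep(time.Duration(c.sizeMB*8/mbit) * time.Millisecond)
    fmt.Printf("%s finished chunk %d (%d MB)\n", host, c.id, c.sizeMB)
}

func main() {
    // A 100 MB part split into ten 10 MB chunks.
    chunks := make(chan chunk, 10)
    for i := 0; i < 10; i++ {
        chunks <- chunk{id: i, sizeMB: 10}
    }
    close(chunks)

    hosts := map[string]int{"fast-host": 2, "slow-host": 1} // Mbit/s each
    var wg sync.WaitGroup
    for host, mbit := range hosts {
        wg.Add(1)
        go func(host string, mbit int) {
            defer wg.Done()
            for c := range chunks { // the faster host pulls more chunks
                upload(host, c, mbit)
            }
        }(host, mbit)
    }
    wg.Wait()
}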

wrong number of words in seed phrase

I am trying to use a seed phrase from siad:

export WALRUS_SEED=.....
(The seed phrase is exactly correct.)

./user form 7962d480 8SC 278793
Using WALRUS_SEED environment variable
wrong number of words in seed phrase

Suggestion: Better cost controls

Thanks to metrics logging, it's possible to determine exactly how many coins you spent on a particular operation. However, this currently isn't very user-friendly: you need to process the logs yourself. It would be much better to have a flag like --print-receipt that would cause user to print something like Total cost: 10 mS after an upload/download finishes. (Or update the cost in real-time, like the progress bar does with bytes.)

Going further, I would really like to add the ability to estimate the cost of an operation in advance. Ideally, you could add an --estimate-cost flag to any command, and instead of actually performing the upload/download, user would just print Estimated cost: 6 mS. Unfortunately, implementing this would take quite a bit of work: I would need to duplicate the upload/download functions and strip out the actual transfer logic, which means both adding code bloat and risking desynchronization between the cost-calculation logic and the actual logic. (Perhaps with some really aggressive mocking, the existing logic could be reused, with the I/O bits changed to no-ops... but that would be quite a hack.)

Anyway, this would then enable a much-needed feature: the ability to restrict the cost of a particular operation, aborting if the cost exceeds the limit at any point. One of the known issues with Sia is that hosts can raise their prices after you form contracts with them, tricking you into overpaying. With this feature, you could detect when a host tries to sneakily raise their prices, and immediately blacklist them. (The cost estimation feature also helps with this, but it's annoying to estimate costs before every operation, and hosts can still cheat by waiting to raise their prices until after the operation begins.)

Price-raising attacks are unfortunately unavoidable in Sia, because hosts must be free to vary their prices in response to fluctuations in the cost of storage and the exchange rate of siacoins. However, such attacks are possible in the real world as well: a supplier can agree to a low price initially, and then demand a higher price when the delivery date arrives. Even if they signed a contract, suing the supplier takes time -- often too much time, if you're on a tight schedule -- so in many cases the only option is to pay. Sia is actually an improvement in this regard, because storing redundantly means no single "supplier" can hold your data hostage. You can simply refuse to pay the higher price and blacklist the scammer forever. And since hosts must sign their requested price, you can share the signatures publicly, allowing others to verify the malicious price increase and add the scammer to their own blacklist too.

Upload error does not identify which hosts failed

On uploads, I would like to know which hosts caused a failure. Here is an example of what I am getting, but I have no idea which hosts are failing the upload.

root@sia-test:~# user upload -m 10 fedora30.tar.xz 
fedora30.tar.xz                                                                                                                                                                                                                                94%   44.17 MB    2.06 MB/s    
Upload failed: could not upload to some hosts:
communication error: expected at least 134879004443715567616 to be exchanged, but 133422648888169332736 was exchanged: rejected for high paying renter valid output
communication error: expected at least 256129706665699180544 to be exchanged, but 254770441480522694656 was exchanged: rejected for high paying renter valid output
communication error: expected at least 240066711699578683392 to be exchanged, but 239989039403292950528 was exchanged: rejected for high paying renter valid output
not enough storage remaining to accept sector
communication error: expected at least 61641257413385388032 to be exchanged, but 59932466894914715648 was exchanged: rejected for high paying renter valid output
not enough storage remaining to accept sector
root@sia-test:~# 

As you can see, six hosts are listed as having failed to receive the upload, but I have no idea which of my contracts they correspond to.

Thanks

Error on resuming upload

root@sia-test:~# user upload -m 10 fedora30.tar.xz.50hosts
fedora30.tar.xz.50hosts                                                                                                                                                                                                                        100%   44.17 MB    2.02 MB/s    
root@sia-test:~# cp DOGP.zip DOGP.zip.50hosts
root@sia-test:~# user upload -m 10 DOGP.zip.50hosts
DOGP.zip.50hosts                                                                                                                                                                                                                               38%   217.04 MB   749.8 KB/s    
Upload failed: could not upload to some hosts:
76f9101f: read tcp 192.168.1.4:59218->136.61.3.89:9982: i/o timeout
root@sia-test:~# user upload -m 10 DOGP.zip.50hosts
DOGP.zip.50hosts                                                                                                                                                                                                                               19%   217.04 MB        0 B/s    
Upload failed: file is not writeable

I am going to assume that the file that is not writable is the DOGP.zip.50hosts.usa file. Is that correct?

Could the usa file have remained locked after the error occurred?

As you can see, the first upload of the 44 MB file to the 50 hosts completed successfully.
The second file is 217 MB; it failed once with the i/o timeout error, and the second failure was "file is not writeable". The second upload was started right after the first had finished.

Error on doing file Checkup

root@sia-test:~# user checkup fedora*.usa
panic: nonce must be 24 bytes

goroutine 21 [running]:
lukechampine.com/us/renter.(*KeySeed).XORKeyStream(0xc0000e4020, 0xc0005e4000, 0x36600, 0x36600, 0xc0000242d0, 0x1c, 0x30, 0x0)
	/root/go/src/lukechampine.com/us/renter/meta.go:96 +0x137
lukechampine.com/us/renter.(*ShardDownloader).DownloadAndDecrypt(0xc0000e4000, 0x1, 0xc161e0, 0x64f2003c, 0x0, 0xc0001cc000, 0x0)
	/root/go/src/lukechampine.com/us/renter/download.go:148 +0x2d6
lukechampine.com/us/renter/renterutil.checkup(0xc00014e840, 0xc00008b410, 0xc0000a4a00, 0x7f361fb25550, 0xc0000961e0)
	/root/go/src/lukechampine.com/us/renter/renterutil/scan.go:85 +0x626
created by lukechampine.com/us/renter/renterutil.Checkup
	/root/go/src/lukechampine.com/us/renter/renterutil/scan.go:32 +0x89


root@sia-test:~# user checkup fedora30.tar.xz.usa
panic: nonce must be 24 bytes

goroutine 21 [running]:
lukechampine.com/us/renter.(*KeySeed).XORKeyStream(0xc0000e2020, 0xc0005da000, 0x400000, 0x400000, 0xc000024300, 0x1c, 0x30, 0x0)
	/root/go/src/lukechampine.com/us/renter/meta.go:96 +0x137
lukechampine.com/us/renter.(*ShardDownloader).DownloadAndDecrypt(0xc0000e2000, 0x0, 0xc161e0, 0x24b3b074, 0x0, 0xc0001ce000, 0x0)
	/root/go/src/lukechampine.com/us/renter/download.go:148 +0x2d6
lukechampine.com/us/renter/renterutil.checkup(0xc00014a8a0, 0xc00008b410, 0xc0000a2aa0, 0x7f7dcd48d368, 0xc0000961d0)
	/root/go/src/lukechampine.com/us/renter/renterutil/scan.go:85 +0x626
created by lukechampine.com/us/renter/renterutil.Checkup
	/root/go/src/lukechampine.com/us/renter/renterutil/scan.go:32 +0x89
root@sia-test:~# 

For the first run I used a wildcard; for the second, a specific filename.

Edit: I was able to download the file and un-tar it without any errors.

Checkup all enabled contracts

user checkup performs a "health check" on a contract or metafile by downloading a random piece of it from its hosts. As a convenience, it should be possible to run a checkup on all enabled contracts with one command. In the same vein, perhaps we could allow running a checkup on all metafiles in a folder.

Now that I think of it, checkup is really doing "double-duty:" when you run it on a contract, you're checking whether that contract is usable, but when you run it on a metafile, you're checking whether that metafile is retrievable. These are different things: the fact that a contract is usable (i.e. the renter and host agree on its current state and can revise it) does not imply that the host is actually storing all of the contract data.

For clarity, it might be best to split these into separate commands. Contract checkups should be run frequently, so that unusable contracts are detected as soon as possible and can be replaced. (Also, contract checkups should probably call the Write RPC, since that's a tougher "check" than Read.) Metafile checkups can be run less frequently, and perhaps the checkup should involve downloading a much greater percentage of the file, rather than a single sector.

can't create contract from user via siad

Pubkey Address Version Score Remaining Storage Contract Fee Price (/ TB / Month) Collateral (/ TB / Month) Download Price (/TB) Uptime Recent Scans
2: ed25519:e96501f8eedb642bcd9d315eb1c800d1e0bd4bcb3f4da8d89e9e6a22fad0b881 plumbus-1.asuscomm.com:9982 1.4.1.2 1.24631e+09 5.9050 TB 250 mS 20 SC 75 SC 5 SC 0.882 111111111111111111111111111111
1: ed25519:7962d48088ec70f0c7c28eed10d987601684bcdeed95e37cb4779a2db3b165af siahost1.beawesomeinstead.com:9982 1.4.1.2 3.02773e+09 3.4323 TB 2 SC 20 SC 60 SC 3 SC 0.887 111111111111111111111111111111

./user form 7962d480 8SC 239913

Contract formation failed: FormContract: ReadResponse: communication error: renter proposed a file contract with a too-long duration

./user form 7962d480 8SC 235600

Contract formation failed: FormContract: ReadResponse: communication error: renter proposed a file contract with a too-long duration

./user form 7962d480 8SC 235900

Contract formation failed: FormContract: ReadResponse: communication error: renter proposed a file contract with a too-long duration

./user form 7962d480 20SC 235900

Contract formation failed: FormContract: ReadResponse: communication error: renter proposed a file contract with a too-long duration

./user form 7962d480 50SC 235900

Contract formation failed: FormContract: ReadResponse: communication error: renter proposed a file contract with a too-long duration

./user form 7962d480 60SC 235900

Contract formation failed: FormContract: ReadResponse: communication error: renter proposed a file contract with a too-long duration

./user form e96501f8 60SC 235900

Contract formation failed: FormContract: ReadResponse: communication error: renter proposed a file contract with a too-long duration

Feature Request: Contract list should be sortable

Currently the contracts list command groups the enabled contracts at the top, with the host as the sorting key.

I would like to be able to sort by end height (with or without the grouping), or maybe even by how much of the contract funds remain (less important, but nice to have).

Connection timed out error causing other errors

I've noticed that when there is a connection timed out error it will cause all other good hosts to give the broken pipe error.

root@sia-test:~# user upload -m 10 fedora30.tar.xz
fedora30.tar.xz                                                                                                                                                         94%   44.17 MB   684.8 KB/s    
Upload failed: could not upload to some hosts:
43cd88ca: dial tcp 87.79.165.10:9982: connect: connection timed out
write tcp 192.168.1.4:54370->73.193.37.231:9978: write: broken pipe
write tcp 192.168.1.4:33496->63.155.9.70:9982: write: broken pipe
write tcp 192.168.1.4:39910->87.158.160.17:9982: write: broken pipe
write tcp 192.168.1.4:50618->96.227.220.184:9982: write: broken pipe
write tcp 192.168.1.4:60452->212.232.75.200:9982: write: broken pipe
write tcp 192.168.1.4:45590->79.114.65.81:9982: write: broken pipe

Is this normal behavior? I was able to successfully upload this file after removing the one timed-out host.

Idea: per-host progress bars

https://github.com/vbauerster/mpb is a neat package that would let us implement this. That way you could see in real-time which hosts were fast and which were slow.

There's a problem though, which is that 50+ progress bars is...too many. So we'd probably want to display just the 10 fastest hosts. But that means the bars would be swapping positions all the time, which isn't great either. Might be less of a problem if we colored each host differently, I dunno. Open to suggestions here.

Allow ~ in config paths

This config file:

contracts_enabled = "~/us/contracts"

will result in user looking for a directory called ~ in the current directory. You could argue that this is "behaving as intended" (since ~ is specific to Unix, and what if you want a directory named ~?), but in practice it's confusing and unintuitive.

I don't know if there's a standard way to replace ~ with the actual home dir, but it shouldn't be hard to implement ourselves.
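A minimal sketch using the standard library's os.UserHomeDir (the helper name is hypothetical):

package main

import (
    "fmt"
    "os"
    "path/filepath"
    "strings"
)

// expandTilde replaces a leading "~" with the user's home directory.
// Paths without a leading "~" are returned unchanged.
func expandTilde(path string) (string, error) {
    if path != "~" && !strings.HasPrefix(path, "~/") {
        return path, nil
    }
    home, err := os.UserHomeDir()
    if err != nil {
        return "", err
    }
    return filepath.Join(home, strings.TrimPrefix(path, "~")), nil
}

func main() {
    p, _ := expandTilde("~/us/contracts")
    fmt.Println(p) // e.g. /home/alice/us/contracts
}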
