yadayada / acd_cli
An unmaintained command line interface and FUSE filesystem for Amazon (Cloud) Drive
License: Other
Dealing with a large number of files seems to cause very long journaling in the SQLAlchemy / SQLite backend.
Documents: 6107, 16.5 GiB
Other: 54512, 101.0 GiB
Photos: 91785, 668.0 GiB
Videos: 977, 294.0 GiB
Total: 153381, 1.1 TiB
The last error messages I see when sync is called with --debug are:
15-05-24 11:25:25.723 [INFO] [acdcli.cache.sync] - 587 duplicate folders not inserted.
15-05-24 11:25:25.723 [INFO] [acdcli.cache.sync] - 1 folder(s) updated.
and after an hour and a half, in ~/.local/acd_cli/:
67084288 May 24 13:48 nodes.db
9943440 May 24 13:48 nodes.db-journal
The journal file grows very slowly as it attempts to insert / update records. I'm not sure if the bottleneck has to do with indexing in the sqlite database or is just a performance limitation of the backend, but it seems as though re-syncing isn't going to be logistically scalable for hundreds of thousands of files otherwise. (I'm using a 3-4 year old machine with an SSD drive and plenty of memory, if it makes any difference.)
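For reference, the slow journal growth described above is typical of SQLite when each row lands in its own implicit transaction: the rollback journal is rewritten per insert. Batching the inserts into one transaction (and optionally switching to WAL journaling) usually removes the bottleneck. A minimal sketch, not acd_cli's actual schema or code:

```python
import sqlite3

# Sketch (hypothetical schema): one transaction for the whole batch keeps
# the rollback journal small; per-row transactions rewrite it every insert.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nodes (id TEXT PRIMARY KEY, name TEXT)")

rows = [(str(i), "file_%d" % i) for i in range(100000)]
with conn:  # a single transaction wrapping all inserts
    conn.executemany("INSERT INTO nodes VALUES (?, ?)", rows)

print(conn.execute("SELECT COUNT(*) FROM nodes").fetchone()[0])  # 100000
```

Whether acd_cli's SQLAlchemy layer already batches like this is unknown; if it does, the limit may simply be SQLite index maintenance on a 150k-row table.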
Hi,
I'm getting the following error, could you help me?
15-05-20 00:40:47.269 [INFO] [acd_cli] - Plugin leaf classes: StreamPlugin, TestPlugin
15-05-20 00:40:47.269 [INFO] [acd_cli] - StreamPlugin attached.
15-05-20 00:40:47.269 [INFO] [acd_cli] - TestPlugin attached.
15-05-20 00:40:47.270 [INFO] [acdcli.api.common] - Initializing acd with path "/home/xxx/.cache/acd_cli".
15-05-20 00:40:47.271 [INFO] [acdcli.cache.db] - Initializing cache with path "/home/xxx/.cache/acd_cli".
15-05-20 00:40:47.352 [INFO] [acdcli.cache.db] - Cache considered uninitialized.
Syncing...
15-05-20 00:40:47.695 [INFO] [acdcli.api.metadata] - Getting changes with checkpoint "None".
Traceback (most recent call last):
File "/usr/local/bin/acd_cli", line 9, in <module>
load_entry_point('acdcli==0.2.1', 'console_scripts', 'acd_cli')()
File "/usr/local/bin/acd_cli.py", line 936, in main
sys.exit(args.func(args))
File "/usr/local/bin/acd_cli.py", line 434, in sync_action
r = sync_node_list(full=args.full)
File "/usr/local/bin/acd_cli.py", line 115, in sync_node_list
nodes, purged, ncp, full = metadata.get_changes(checkpoint=None if full else cp, include_purged=not full)
File "/usr/local/lib/python3.4/dist-packages/acdcli/api/metadata.py", line 47, in get_changes
r = BackOffRequest.post(get_metadata_url() + 'changes', data=json.dumps(body), stream=True)
File "/usr/local/lib/python3.4/dist-packages/acdcli/api/common.py", line 44, in <lambda>
get_metadata_url = lambda: endpoint_data['metadataUrl']
KeyError: 'metadataUrl'
If I run the upload command a second time on the same set of directories, will all the uploads be replaced or will only new and changed files be updated?
Hi,
While doing an upload, the Mac crashed, and now I get this error when trying to upload:
Traceback (most recent call last):
File "/Users/eric/.virtualenvs/acd_cli/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
context)
File "/Users/eric/.virtualenvs/acd_cli/lib/python3.4/site-packages/sqlalchemy/engine/default.py", line 442, in do_execute
cursor.execute(statement, parameters)
sqlite3.DatabaseError: database disk image is malformed
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "./acd_cli.py", line 935, in <module>
main()
File "./acd_cli.py", line 920, in main
db.init(CACHE_PATH)
File "/Users/eric/.virtualenvs/acd_cli/acd_cli/acdcli/cache/db.py", line 251, in init
uninitialized = not engine.has_table(Node.__tablename__)
File "/Users/eric/.virtualenvs/acd_cli/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 2056, in has_table
return self.run_callable(self.dialect.has_table, table_name, schema)
File "/Users/eric/.virtualenvs/acd_cli/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1959, in run_callable
return conn.run_callable(callable_, *args, **kwargs)
File "/Users/eric/.virtualenvs/acd_cli/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1467, in run_callable
return callable_(self, *args, **kwargs)
File "/Users/eric/.virtualenvs/acd_cli/lib/python3.4/site-packages/sqlalchemy/dialects/sqlite/base.py", line 966, in has_table
connection, "table_info", table_name, schema=schema)
File "/Users/eric/.virtualenvs/acd_cli/lib/python3.4/site-packages/sqlalchemy/dialects/sqlite/base.py", line 1307, in _get_table_pragma
cursor = connection.execute(statement)
File "/Users/eric/.virtualenvs/acd_cli/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 906, in execute
return self._execute_text(object, multiparams, params)
File "/Users/eric/.virtualenvs/acd_cli/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1054, in _execute_text
statement, parameters
File "/Users/eric/.virtualenvs/acd_cli/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1146, in _execute_context
context)
File "/Users/eric/.virtualenvs/acd_cli/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1332, in _handle_dbapi_exception
exc_info
File "/Users/eric/.virtualenvs/acd_cli/lib/python3.4/site-packages/sqlalchemy/util/compat.py", line 188, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=exc_value)
File "/Users/eric/.virtualenvs/acd_cli/lib/python3.4/site-packages/sqlalchemy/util/compat.py", line 181, in reraise
raise value.with_traceback(tb)
File "/Users/eric/.virtualenvs/acd_cli/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
context)
File "/Users/eric/.virtualenvs/acd_cli/lib/python3.4/site-packages/sqlalchemy/engine/default.py", line 442, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.DatabaseError: (sqlite3.DatabaseError) database disk image is malformed [SQL: 'PRAGMA table_info("nodes")']
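A corrupt nodes.db generally cannot be repaired in place; the quickest diagnosis is SQLite's built-in integrity check (sketch below, not part of acd_cli), after which the usual remedy is deleting the cache file and running a full sync again.

```python
import sqlite3

# Sketch: returns False when the database file reports corruption, as in
# the "database disk image is malformed" traceback above.
def db_is_healthy(path):
    conn = sqlite3.connect(path)
    try:
        return conn.execute("PRAGMA integrity_check").fetchone()[0] == "ok"
    except sqlite3.DatabaseError:
        return False
    finally:
        conn.close()

print(db_is_healthy(":memory:"))  # True: an empty database is valid
```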
Here are some errors. If helpful, I'll paste different ones as I encounter them.
This was after a sync has been done.
15-05-20 08:42:41.859 [ERROR] [acd_cli] - Uploading "DSC03681-Edit.tif" failed. Name collision with non-cached file. If you want to overwrite, please sync and try again.
15-05-20 08:52:55.102 [ERROR] [acd_cli] - Uploading "DSC04718.ARW" failed. Code: 500, msg: {"logref":"6ef2915c-fe8a-11e4-91ae-1f772ce1db15","message":"Internal failure","code":""}
15-05-20 09:32:14.355 [ERROR] [acd_cli] - Uploading "IMG_3942.JPG" failed. Code: 500, msg: {"message":"Internal failure"}
There was another one about a token yesterday, but I lost that log.
The find option currently allows you to search by file name. This is good, but it would be great to also search by hash value, which you already generate, as far as I understand.
For example, ACD automatically renames all photos that you have uploaded from your phone (it adds a time and date stamp). If you now want to check if that file has been uploaded after downloading your photos to your computer, it would be great to be able to check with the file-hash and then upload any missing photos / videos.
I guess this is going in the direction of your proposed smart-sync feature and would be a decent first step in that direction.
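The local side of such a lookup is straightforward; a sketch (hypothetical helper, not an existing acd_cli function) that hashes a file the same way ACD reports MD5s for its nodes, reading in chunks so multi-GiB files don't exhaust memory:

```python
import hashlib

# Sketch: compute the MD5 of a local file for comparison against the
# hashes already stored in the node cache.
def md5sum(path, chunk_size=128 * 1024):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

A hypothetical find-by-hash command would then just compare `md5sum(local_path)` against the cached node hashes and upload only the misses.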
Version 03a5e3a
File overwrite throws this error:
Traceback (most recent call last):
File "C:\acd_cli-master\acd_cli.py", line 940, in <module>
main()
File "C:\acd_cli-master\acd_cli.py", line 936, in main
sys.exit(args.func(args))
File "C:\acd_cli-master\acd_cli.py", line 512, in overwrite_action
ql = QueuedLoader(max_retries=args.max_retries)
AttributeError: 'Namespace' object has no attribute 'max_retries'
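For context, this class of AttributeError appears when a subcommand's argparse parser never defines the option that the handler later reads. A minimal reproduction with a hypothetical overwrite subparser (names illustrative, not acd_cli's actual parser setup):

```python
import argparse

# Sketch of the bug class: without the add_argument line below,
# args.max_retries raises AttributeError exactly as in the traceback.
parser = argparse.ArgumentParser()
sub = parser.add_subparsers(dest="cmd")
overwrite = sub.add_parser("overwrite")
overwrite.add_argument("--max-retries", type=int, default=0)  # the missing definition

args = parser.parse_args(["overwrite"])
print(args.max_retries)  # 0
```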
Updated to the latest version today and ran acd_cli.py clear-cache after obtaining my oauth_data. Running acd_cli.py sync results in the following error on Ubuntu 14.10 server 64-bit:
Syncing... Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 322, in _make_request
httplib_response = conn.getresponse(buffering=True)
TypeError: getresponse() got an unexpected keyword argument 'buffering'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 496, in urlopen
body=body, headers=headers)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 324, in _make_request
httplib_response = conn.getresponse()
File "/usr/lib/python3.4/http/client.py", line 1172, in getresponse
response.begin()
File "/usr/lib/python3.4/http/client.py", line 351, in begin
version, status, reason = self._read_status()
File "/usr/lib/python3.4/http/client.py", line 313, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/usr/lib/python3.4/socket.py", line 371, in readinto
return self._sock.recv_into(b)
File "/usr/lib/python3.4/ssl.py", line 745, in recv_into
return self.read(nbytes, buffer)
File "/usr/lib/python3.4/ssl.py", line 617, in read
v = self._sslobj.read(len, buffer)
ConnectionResetError: [Errno 104] Connection reset by peer
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 327, in send
timeout=timeout
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 546, in urlopen
raise MaxRetryError(self, url, e)
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='cdws.us-east-1.amazonaws.com', port=443): Max retries exceeded with url: /drive/v1/nodes?filters=kind%3AFILE&startToken=<mytoken> (Caused by <class 'ConnectionResetError'>: [Errno 104] Connection reset by peer)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/myhomedir/acd_cli/acd_cli.py", line 573, in <module>
main()
File "/home/myhomedir/acd_cli/acd_cli.py", line 569, in main
args.func(args)
File "/home/myhomedir/acd_cli/acd_cli.py", line 233, in sync_action
sync_node_list()
File "/home/myhomedir/acd_cli/acd_cli.py", line 46, in sync_node_list
files = metadata.get_file_list()
File "/home/myhomedir/acd_cli/acd/metadata.py", line 20, in get_file_list
return get_node_list(filters='kind:FILE')
File "/home/myhomedir/acd_cli/acd/metadata.py", line 16, in get_node_list
return paginated_get_request(oauth.get_metadata_url() + 'nodes', q_params, {})
File "/home/myhomedir/acd_cli/acd/common.py", line 28, in paginated_get_request
headers=dict(headers, **oauth.get_auth_header()))
File "/usr/lib/python3/dist-packages/requests/api.py", line 55, in get
return request('get', url, **kwargs)
File "/usr/lib/python3/dist-packages/requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 456, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 559, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 375, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='cdws.us-east-1.amazonaws.com', port=443): Max retries exceeded with url: /drive/v1/nodes?filters=kind%3AFILE&startToken=<mytoken> (Caused by <class 'ConnectionResetError'>: [Errno 104] Connection reset by peer)
Version 03a5e3a
Windows XP, Windows 7
When downloading a file, the download stops after 100%, the speed goes to 0, and nothing else happens.
When downloading a folder, the download stops at 100% of the first file.
The file is downloaded OK, but it keeps the .__incomplete extension.
Child relationship gets lost on moving a remote folder to trash. Syncing will reattach children.
While doing an upload, acd_cli seems to have stumbled repeatedly over one of the files which contained an unusual file name.
I was attempting to upload a directory which contained this file.
Marvel's Agents of S.H.I.E.L.D - 2x09 - …Ye Who Enter Here.mp4.txt
The verbose logs showed several attempts and failures (many lines that looked like this)
15-05-19 19:40:11.117 [INFO] [root] - Skipping upload of existing file "Marvel's Agents of S.H.I.E.L.D - 2x07 - The Writing on the Wall.mp4.txt".
15-05-19 19:40:11.118 [INFO] [acd_cli] - Uploading /home/user_1/mounts/sd1_usb1/Media/TV_Shows/Marvel's Agents of S.H.I.E.L.D/Season 02/.meta/Marvel's Agents of S.H.I.E.L.D - 2x08 - The Things We Bury.mp4.txt
15-05-19 19:40:11.123 [INFO] [acd_cli] - Remote mtime: 2015-05-19 16:13:27.697999, local mtime: 2015-04-01 07:57:20, local ctime: 2015-04-01 07:57:20
15-05-19 19:40:11.123 [INFO] [root] - Skipping upload of existing file "Marvel's Agents of S.H.I.E.L.D - 2x08 - The Things We Bury.mp4.txt".
15-05-19 19:40:11.124 [INFO] [acd_cli] - Uploading /home/user_1/mounts/sd1_usb1/Media/TV_Shows/Marvel's Agents of S.H.I.E.L.D/Season 02/.meta/Marvel's Agents of S.H.I.E.L.D - 2x09 - …Ye Who Enter Here.mp4.txt
15-05-19 19:40:11.736 [INFO] [acdcli.api.common] - POST "https://content-na.drive.amazonaws.com/cdproxy/nodes"
15-05-19 19:40:11.789 [INFO] [requests.packages.urllib3.connectionpool] - Starting new HTTPS connection (1): content-na.drive.amazonaws.com
15-05-19 19:40:12.164 [ERROR] [acd_cli] - Uploading "Marvel's Agents of S.H.I.E.L.D - 2x09 - …Ye Who Enter Here.mp4.txt" failed. Code: 400, msg: {"message":"{\"name\":\"content\",\"Content-Type\":\"text/plain\"}"}
15-05-19 19:40:12.165 [INFO] [acd_cli] - Uploading /home/user_1/mounts/sd1_usb1/Media/TV_Shows/Marvel's Agents of S.H.I.E.L.D/Season 02/.meta/Marvel's Agents of S.H.I.E.L.D - 2x09 - …Ye Who Enter Here.mp4.txt
15-05-19 19:40:14.160 [ERROR] [acd_cli] - Uploading "Marvel's Agents of S.H.I.E.L.D - 2x09 - …Ye Who Enter Here.mp4.txt" failed. Code: 400, msg: {"message":"{\"name\":\"content\",\"Content-Type\":\"text/plain\"}"}
15-05-19 19:40:14.160 [INFO] [acd_cli] - Uploading /home/user_1/mounts/sd1_usb1/Media/TV_Shows/Marvel's Agents of S.H.I.E.L.D/Season 02/.meta/Marvel's Agents of S.H.I.E.L.D - 2x09 - …Ye Who Enter Here.mp4.txt
15-05-19 19:40:15.327 [ERROR] [acd_cli] - Uploading "Marvel's Agents of S.H.I.E.L.D - 2x09 - …Ye Who Enter Here.mp4.txt" failed. Code: 400, msg: {"message":"{\"name\":\"content\",\"Content-Type\":\"text/plain\"}"}
15-05-19 19:40:15.328 [INFO] [acd_cli] - Uploading /home/user_1/mounts/sd1_usb1/Media/TV_Shows/Marvel's Agents of S.H.I.E.L.D/Season 02/.meta/Marvel's Agents of S.H.I.E.L.D - 2x09 - …Ye Who Enter Here.mp4.txt
15-05-19 19:40:15.603 [WARNING] [acdcli.api.common] - Waiting 6.332509 s because of error(s).
15-05-19 19:40:21.992 [ERROR] [acd_cli] - Uploading "Marvel's Agents of S.H.I.E.L.D - 2x09 - …Ye Who Enter Here.mp4.txt" failed. Code: 400, msg: {"message":"{\"name\":\"content\",\"Content-Type\":\"text/plain\"}"}
15-05-19 19:40:21.993 [INFO] [acd_cli] - Uploading /home/user_1/mounts/sd1_usb1/Media/TV_Shows/Marvel's Agents of S.H.I.E.L.D/Season 02/.meta/Marvel's Agents of S.H.I.E.L.D - 2x09 - …Ye Who Enter Here.mp4.txt
15-05-19 19:40:22.284 [WARNING] [acdcli.api.common] - Waiting 14.163617 s because of error(s).
15-05-19 19:40:36.509 [ERROR] [acd_cli] - Uploading "Marvel's Agents of S.H.I.E.L.D - 2x09 - …Ye Who Enter Here.mp4.txt" failed. Code: 400, msg: {"message":"{\"name\":\"content\",\"Content-Type\":\"text/plain\"}"}
15-05-19 19:40:36.510 [INFO] [acd_cli] - Uploading /home/user_1/mounts/sd1_usb1/Media/TV_Shows/Marvel's Agents of S.H.I.E.L.D/Season 02/.meta/Marvel's Agents of S.H.I.E.L.D - 2x09 - …Ye Who Enter Here.mp4.txt
15-05-19 19:40:36.795 [WARNING] [acdcli.api.common] - Waiting 18.809894 s because of error(s).
15-05-19 19:40:55.679 [ERROR] [acd_cli] - Uploading "Marvel's Agents of S.H.I.E.L.D - 2x09 - …Ye Who Enter Here.mp4.txt" failed. Code: 400, msg: {"message":"{\"name\":\"content\",\"Content-Type\":\"text/plain\"}"}
15-05-19 19:40:55.680 [INFO] [acd_cli] - Uploading /home/user_1/mounts/sd1_usb1/Media/TV_Shows/Marvel's Agents of S.H.I.E.L.D/Season 02/.meta/Marvel's Agents of S.H.I.E.L.D - 2x09 - …Ye Who Enter Here.mp4.txt
15-05-19 19:40:59.141 [ERROR] [acd_cli] - Uploading "Marvel's Agents of S.H.I.E.L.D - 2x09 - …Ye Who Enter Here.mp4.txt" failed. Code: 400, msg: {"message":"{\"name\":\"content\",\"Content-Type\":\"text/plain\"}"}
15-05-19 19:40:59.142 [INFO] [acd_cli] - Uploading /home/user_1/mounts/sd1_usb1/Media/TV_Shows/Marvel's Agents of S.H.I.E.L.D/Season 02/.meta/Marvel's Agents of S.H.I.E.L.D - 2x09 - …Ye Who Enter Here.mp4.txt
15-05-19 19:40:59.427 [WARNING] [acdcli.api.common] - Waiting 78.155362 s because of error(s).
After replacing the single-character ellipsis ('…') with three actual dot characters ('...'), everything now works.
Faulty since change to find_packages?!
Hi,
I guess this is a problem with the Amazon API? Everything worked fine yesterday, but now I can't sync through acd_cli nor get a new token:
Internal Server Error
The server has either erred or is incapable of performing the requested operation.
Traceback (most recent call last):
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1535, in __call__
rv = self.handle_exception(request, response, e)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1529, in __call__
rv = self.router.dispatch(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1278, in default_dispatcher
return route.handler_adapter(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 1102, in __call__
return handler.dispatch()
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 572, in dispatch
return self.handle_exception(e, self.app.debug)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2/webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/s~tensile-runway-92512/1.384480744453469618/main.py", line 84, in get
resp = urllib.urlopen(AMAZON_OA_TOKEN_URL, urllib.urlencode(params))
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/urllib.py", line 88, in urlopen
return opener.open(url, data)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/urllib.py", line 205, in open
return getattr(self, name)(url, data)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/urllib.py", line 433, in open_https
errcode, errmsg, headers = h.getreply()
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/gae_override/httplib.py", line 623, in getreply
response = self._conn.getresponse()
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/gae_override/httplib.py", line 526, in getresponse
raise HTTPException(str(e))
HTTPException: Deadline exceeded while waiting for HTTP response from URL: https://api.amazon.com/auth/o2/token
Great CLI by the way, it works very well for me :)
acd_cli needs bandwidth rate limiting. I have 20mbit, but I don't want to use 100% of it 24/7 if I have a huge upload.
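A simple way to implement such a cap (sketch only; the class and names are illustrative, not an existing acd_cli option) is to sleep after each uploaded chunk so the average rate never exceeds the configured limit:

```python
import time

# Hypothetical throttle: after each chunk, sleep until the moment the chunk
# "should" have finished at the target rate, capping average throughput.
class Throttle:
    def __init__(self, rate_bytes_per_s):
        self.rate = rate_bytes_per_s
        self.allowance_time = time.monotonic()

    def consume(self, nbytes):
        self.allowance_time += nbytes / self.rate  # scheduled finish time
        delay = self.allowance_time - time.monotonic()
        if delay > 0:
            time.sleep(delay)

throttle = Throttle(1_000_000)  # cap at ~1 MB/s
for chunk in [b"x" * 64 * 1024] * 4:
    throttle.consume(len(chunk))  # would be called once per uploaded chunk
```

Hooking this into the actual upload loop would depend on where acd_cli reads file chunks, which is not shown here.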
Hello! Thank you for the beautiful and useful software :)
I have the latest version, and it looks like this time, to trigger a timeout, I have to press Ctrl+C. That way acd_cli doesn't stop completely; it simply skips the stuck upload.
My previous experience was that the timeout generally "activates" without manual intervention (perhaps after 1 minute?). In a scenario with long uploads, manual intervention is not practical.
Here is what happened (I waited 10 minutes before pressing Ctrl+C; no % change during those 10 minutes, in a tmux environment):
Current directory: EXAMPLE
Current file: EXAMPLE1.mp4
[####################### ] 69.3% of 1.4GiB, 1.1MB/s
15-05-06 10:37:20.704 [acd_cli] [WARNING] - Timeout while uploading "EXAMPLE1".
Current file: EXAMPLE2.jpg
[#################################] 100.0% of 203.5KiB,
This happened on a Kimsufi (OVH) server with Ubuntu 14.04. I can't tell whether this is Kimsufi's fault, since timeouts mainly happen with this server; no issues with other servers/VPSs.
I'm still heavily testing acd_cli. Right now I'm uploading 120 GB.
Again, thank you!
ver. 0b7c00d
Download throws a ZeroDivisionError.
Fix:
changing content.py / download_file
from:
...
pgo.progress(0, 0, total_ln, curr_ln)
...
to:
...
pgo.progress(total_ln, curr_ln, 0, 0)
...
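The underlying issue is presumably a division by a zero total (e.g. an empty or not-yet-measured file). A defensive version of such a progress callback (illustrative names and signature, not acd_cli's actual API) simply guards the division:

```python
# Hypothetical progress callback: guard the percentage computation so a
# zero total (empty file) cannot raise ZeroDivisionError.
def progress(total, current):
    pct = (current / total * 100) if total else 100.0
    bar = "#" * int(pct / 5)
    return "[%-20s] %5.1f%%" % (bar, pct)

print(progress(0, 0))     # treated as complete, no ZeroDivisionError
print(progress(200, 50))  # quarter-full bar
```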
When I try to upload a folder which has already been uploaded (to sync new files) I now get:
Traceback (most recent call last):
File "./acd_cli.py", line 546, in <module>
main()
File "./acd_cli.py", line 542, in main
args.func(args)
File "./acd_cli.py", line 263, in upload_action
upload(path, args.parent, args.overwrite, args.force)
File "./acd_cli.py", line 63, in upload
upload_folder(path, parent_id, overwr, force)
File "./acd_cli.py", line 160, in upload_folder
upload(full_path, curr_node.id, overwr, force)
File "./acd_cli.py", line 63, in upload
upload_folder(path, parent_id, overwr, force)
File "./acd_cli.py", line 160, in upload_folder
upload(full_path, curr_node.id, overwr, force)
File "./acd_cli.py", line 66, in upload
upload_file(path, parent_id, overwr, force)
File "./acd_cli.py", line 102, in upload_file
mod_time = mod_time.timestamp()
AttributeError: 'datetime.datetime' object has no attribute 'timestamp'
I'm on latest master and have already tried clearing the cache and resyncing.
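The missing method points at an old interpreter: `datetime.timestamp()` only exists since Python 3.3. A portable fallback (a sketch, assuming the datetime is in UTC) spells out the epoch arithmetic for older interpreters:

```python
import calendar
from datetime import datetime, timezone

# Sketch of a portable mtime conversion: fall back to epoch math when
# datetime.timestamp() (added in Python 3.3) is unavailable.
def to_timestamp(dt):
    if hasattr(dt, "timestamp"):               # Python >= 3.3
        return dt.timestamp()
    return calendar.timegm(dt.utctimetuple())  # assumes dt is in UTC

print(to_timestamp(datetime(1970, 1, 1, tzinfo=timezone.utc)))  # 0.0
```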
Uploading a file named "čšž.txt" throws this error:
15-05-05 20:18:46.235 [acd_cli] [ERROR] - Uploading "čšž.txt" failed. Code: 26, msg: couldn't open file "C:\123\čšž.txt"
Fix that works for me: in content.py, import locale and change upload_file from:
...
c.setopt(c.HTTPPOST, [('metadata', json.dumps(metadata)),
('content', (c.FORM_FILE, file_name.encode('UTF-8')))])
...
to:
...
c.setopt(c.HTTPPOST, [('metadata', json.dumps(metadata)),
('content', (c.FORM_FILE, file_name.encode(locale.getpreferredencoding())))])
...
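Why the fix plausibly works (illustration, not acd_cli code): pycurl hands FORM_FILE paths to libcurl as raw bytes, and those bytes must match the filesystem's real encoding. On Windows that is the ANSI code page, not UTF-8, so a UTF-8-encoded "čšž.txt" names a path that does not exist:

```python
import locale

# The same name yields different byte strings under different codecs; only
# the bytes in the filesystem's own encoding actually open the file.
name = "čšž.txt"
print(name.encode("utf-8"))           # b'\xc4\x8d\xc5\xa1\xc5\xbe.txt'
print(locale.getpreferredencoding())  # the codec the OS expects, e.g. 'cp1250'
```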
Right now the source code directory hosts all data. It would be great to allow the user to configure a data directory where she would place the oauth_data file. At the very least, it should default to the current working directory and not the source code directory, in my opinion.
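A conventional way to pick such a directory on Linux is the XDG environment variables with a fallback (sketch; the ~/.cache/acd_cli path matches what other logs in this thread show, but treat the exact scheme as an assumption, not the project's implementation):

```python
import os

# Sketch: resolve a per-user data directory via XDG conventions, falling
# back to ~/.cache/acd_cli when XDG_CACHE_HOME is unset.
def data_dir(app="acd_cli"):
    base = os.environ.get("XDG_CACHE_HOME") or os.path.expanduser("~/.cache")
    return os.path.join(base, app)

print(data_dir())  # e.g. /home/user/.cache/acd_cli
```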
This is just a placeholder for generating easy packages with FPM. All instructions were successfully executed under Ubuntu 15.04:
fpm -s python -t deb --python-pip /usr/bin/pip3 --python-bin /usr/bin/python3 ./setup.py
fpm -s python -t rpm --python-pip /usr/bin/pip3 --python-bin /usr/bin/python3 ./setup.py
uname -a
Linux OMV 3.2.0-4-amd64 #1 SMP Debian 3.2.65-1+deb7u1 x86_64 GNU/Linux
pip install sqlalchemy
Requirement already satisfied (use --upgrade to upgrade): sqlalchemy in /usr/local/lib/python2.7/dist-packages
./acd_cli.py sync
Traceback (most recent call last):
File "./acd_cli.py", line 11, in
from cache import sync, query, db
File "/root/acd_cli/cache/sync.py", line 6, in
from sqlalchemy.exc import *
ImportError: No module named sqlalchemy.exc
I tested the latest version on two different machines on different networks, and it looks like upload speeds are unstable. I was able to reach an impressive peak of 41 MB/s, but it stays there for ~5 seconds and then drops to a far lower speed (sometimes 5 MB/s, 7 MB/s, 20 MB/s, etc.).
Download, from what I've seen, is even more unstable and didn't go over 17 MB/s.
Hi @yadayada,
great work from what I can see. Have you thought about publishing this through PyPI once there is a "stable" version available? It may make sense to create release versions for this purpose, and since your README.md already contains a list of features and planned work, this should be fairly straightforward.
Cheers!
Now, whenever I try to install, it gives this error:
Running setup.py (path:/tmp/pip-vdq1vkdi-build/setup.py) egg_info for package from file:///root
Traceback (most recent call last):
File "/tmp/pip-vdq1vkdi-build/acdcli/api/content.py", line 10, in <module>
from requests_toolbelt import MultipartEncoder
File "/usr/local/lib/python3.4/dist-packages/requests_toolbelt/__init__.py", line 19, in <module>
from .adapters import SSLAdapter, SourceAddressAdapter
File "/usr/local/lib/python3.4/dist-packages/requests_toolbelt/adapters/__init__.py", line 12, in <module>
from .ssl import SSLAdapter
File "/usr/local/lib/python3.4/dist-packages/requests_toolbelt/adapters/ssl.py", line 13, in <module>
from requests.packages.urllib3.poolmanager import PoolManager
ImportError: No module named 'requests.packages'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 17, in <module>
File "/tmp/pip-vdq1vkdi-build/setup.py", line 4, in <module>
from acd_cli import __version__
File "/tmp/pip-vdq1vkdi-build/acd_cli.py", line 18, in <module>
from acdcli.api import *
File "/tmp/pip-vdq1vkdi-build/acdcli/api/content.py", line 12, in <module>
from acdcli.bundled.encoder import MultipartEncoder
File "/tmp/pip-vdq1vkdi-build/acdcli/bundled/encoder.py", line 12, in <module>
from requests.packages.urllib3 import fields
ImportError: No module named 'requests.packages'
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "/tmp/pip-vdq1vkdi-build/acdcli/api/content.py", line 10, in <module>
from requests_toolbelt import MultipartEncoder
File "/usr/local/lib/python3.4/dist-packages/requests_toolbelt/__init__.py", line 19, in <module>
from .adapters import SSLAdapter, SourceAddressAdapter
File "/usr/local/lib/python3.4/dist-packages/requests_toolbelt/adapters/__init__.py", line 12, in <module>
from .ssl import SSLAdapter
File "/usr/local/lib/python3.4/dist-packages/requests_toolbelt/adapters/ssl.py", line 13, in <module>
from requests.packages.urllib3.poolmanager import PoolManager
ImportError: No module named 'requests.packages'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 17, in <module>
File "/tmp/pip-vdq1vkdi-build/setup.py", line 4, in <module>
from acd_cli import __version__
File "/tmp/pip-vdq1vkdi-build/acd_cli.py", line 18, in <module>
from acdcli.api import *
File "/tmp/pip-vdq1vkdi-build/acdcli/api/content.py", line 12, in <module>
from acdcli.bundled.encoder import MultipartEncoder
File "/tmp/pip-vdq1vkdi-build/acdcli/bundled/encoder.py", line 12, in <module>
from requests.packages.urllib3 import fields
ImportError: No module named 'requests.packages'
----------------------------------------
I guess I'm missing a package.
I'm getting a new error with the latest master. When I sync I get the following:
Syncing...
Traceback (most recent call last):
File "./acd_cli.py", line 606, in <module>
main()
File "./acd_cli.py", line 602, in main
args.func(args)
File "./acd_cli.py", line 265, in sync_action
sync_node_list(full=args.full)
File "./acd_cli.py", line 52, in sync_node_list
r = metadata.get_changes(checkpoint=cp)
File "/home/ssitcm/acd_cli/acd/metadata.py", line 51, in get_changes
if not status['end']:
KeyError: 'end'
This happens on Raspbian with Python 3.4.
When I try using acd_cli I am asked to go to a URL, but when I visit said URL I get the following from Amazon:
Error Summary
400 Bad Request
The redirect URI you provided has not been whitelisted for your application
Request Details
redirect_uri=http%3A%2F%2Fwww.prostipad.si
client_id=amzn1.application-oa2-client.5499b8c9f5ca4892979b82b3099ecd18
response_type=code
scope=clouddrive%3Aread+clouddrive%3Awrite
Amazon support said that I whitelisted the wrong URL. Which one should I whitelist?
Maybe acd_cli isn't really the tool to do this, but what do you think about splitting very large files before uploading them?
acd_cli makes uploading files to Amazon a lot easier. With the unlimited file storage capacity, people will want to store very large files there. It would be great if acd_cli could become the tool to allow this in a smarter fashion than doing it manually.
For example, I have some old, encrypted backup images that I would like to back up to ACD; they are over 100 GB in size. I tried uploading them through the official app, but after waiting for a while ... and reaching 100% in the upload view, it simply gives an error and the file doesn't show up online. This is frustrating and annoying.
I tried to upload such large files (> 100 GB) through acd_cli, and while it appeared to work fine, I received a timeout at the very end and the file didn't show up on ACD.
The largest file I managed to upload was just over 40 GB, which is already pretty good.
Probably by means of split-rar archive creation. Note that you need not compress anything, but simply split the file (which makes it faster; the operation will mostly be limited by disk read/write speed).
Well, if the file is split, each part could be uploaded / downloaded in parallel as discussed in #7 (OK, maybe for uploading this doesn't make too much sense, but for downloading it might very well be worth it). This would greatly improve the speed at which everybody could use ACD.
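The splitting itself is straightforward; a minimal sketch (hypothetical helper, not an acd_cli feature) that writes fixed-size numbered parts next to the source file, with no compression so throughput is limited only by disk speed:

```python
# Sketch: split a file into numbered parts (source.000, source.001, ...),
# returning the list of part paths for subsequent parallel upload.
def split_file(path, part_size):
    parts = []
    with open(path, "rb") as src:
        index = 0
        while True:
            chunk = src.read(part_size)
            if not chunk:
                break
            part_path = "%s.%03d" % (path, index)
            with open(part_path, "wb") as dst:
                dst.write(chunk)
            parts.append(part_path)
            index += 1
    return parts
```

Each returned part could then be uploaded (or later downloaded) independently, and reassembled with a simple concatenation.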
Hello,
I have successfully completed the first sync, but I get the error listed below on the second sync:
root@raspberry:~/acd_cli# acd_cli sync
Traceback (most recent call last):
File "/usr/local/bin/acd_cli", line 9, in <module>
load_entry_point('acdcli==0.2.1', 'console_scripts', 'acd_cli')()
File "/usr/local/bin/acd_cli.py", line 914, in main
if not common.init(CACHE_PATH):
File "/usr/local/lib/python3.2/dist-packages/acdcli/api/common.py", line 61, in init
return oauth.init(path) and _load_endpoints()
File "/usr/local/lib/python3.2/dist-packages/acdcli/api/oauth.py", line 40, in init
_get_data()
File "/usr/local/lib/python3.2/dist-packages/acdcli/api/oauth.py", line 62, in _get_data
oauth_data = json.load(oa)
File "/usr/lib/python3.2/json/__init__.py", line 264, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "/usr/lib/python3.2/json/__init__.py", line 309, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.2/json/decoder.py", line 353, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.2/json/decoder.py", line 369, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Invalid control character at: line 2 column 196 (char 213)
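The decoder is choking on a raw control character (likely a literal newline or tab) inside the saved oauth_data strings. A quick illustration with a made-up token value; rewriting the file with properly escaped JSON is the real fix, but `strict=False` shows what the parser objects to:

```python
import json

# A raw newline inside a JSON string is invalid in strict mode and raises
# the same "Invalid control character" ValueError seen above.
bad = '{"access_token": "abc\ndef"}'
try:
    json.loads(bad)
except ValueError as e:
    print("strict parse failed:", e)

print(json.loads(bad, strict=False)["access_token"])  # accepted with strict=False
```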
Here's what I have when doing a sync on a NAS:
# python acd_cli.py sync
Syncing...
Traceback (most recent call last):
File "acd_cli.py", line 940, in <module>
main()
File "acd_cli.py", line 936, in main
sys.exit(args.func(args))
File "acd_cli.py", line 434, in sync_action
r = sync_node_list(full=args.full)
File "acd_cli.py", line 115, in sync_node_list
nodes, purged, ncp, full = metadata.get_changes(checkpoint=None if full else cp, include_purged=not full)
File "/root/.virtualenvs/acd_cli/acd_cli/acdcli/api/metadata.py", line 71, in get_changes
o = json.loads(line.decode('utf-8'))
File "/usr/local/python3/lib/python3.4/json/__init__.py", line 318, in loads
return _default_decoder.decode(s)
File "/usr/local/python3/lib/python3.4/json/decoder.py", line 343, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/python3/lib/python3.4/json/decoder.py", line 359, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Unterminated string starting at: line 1 column 2472936 (char 2472935)
15-05-21 01:46:31.276 [WARNING] [acdcli.api.metadata] - End of change request not reached.
I'm in the process of uploading 100s of GB to ACD which takes some time. Sometimes, when I check on the progress, I notice that it appears to have stalled or is waiting for something. It would be nice if I could have the option to show a date and time of each operation (similar to a log file) so that I can see how long the operation has been sitting idle.
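Since acd_cli already uses the standard logging module (its log lines above carry timestamps), the console output could get the same treatment by configuring a formatter with `%(asctime)s`. A sketch under that assumption; the exact logger wiring inside acd_cli is not shown here:

```python
import logging

# Sketch: prefix every console message with date and time so idle gaps
# between operations become visible, like in a log file.
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s [%(levelname)s] %(message)s",
                    datefmt="%y-%m-%d %H:%M:%S")
logging.info("Uploading chunk 3/128")  # e.g. "15-05-22 09:12:45 [INFO] Uploading chunk 3/128"
```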
Some of the files I try to upload report an error code 400 on upload. The exact error is below. What is the cause of this?
15-05-22 09:12:45.772 [ERROR] [acd_cli] - Uploading "Super secrete video file name.nfo" failed. Code: 400, msg: {"message":"{"name":"content","Content-Type":"application/octet-stream"}"}
I am syncing before each upload run, but I am uploading a large list of files. 95% of these files are already uploaded, so I get a "Skipping Upload" message. I also see
15-05-22 09:16:19.738 [WARNING] [acdcli.api.common] - Waiting 154.651692s because of error(s).
I'm not sure whether that is caused by the "errors" from the skipped files or not, though.
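A wait of 154 seconds is consistent with exponential backoff after repeated errors; this is an illustration of the general technique only, and acd_cli's real formula and constants may differ:

```python
import random

def backoff_delay(error_count, base=1.0, cap=300.0):
    """Exponential backoff with jitter -- an illustration only;
    acd_cli's actual retry logic may use different constants."""
    return min(cap, base * 2 ** error_count) * random.uniform(0.5, 1.0)

# A handful of prior errors can already push the wait into minutes.
print(round(backoff_delay(8), 1))
```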
I've been using 0.2.1 and it works well, but I am now trying to upgrade to 0.2.2 with
pip3 install --upgrade .
but I get the following:
Traceback (most recent call last):
File "/usr/bin/pip3", line 9, in <module>
load_entry_point('pip==1.5.4', 'console_scripts', 'pip3')()
File "/usr/lib/python3/dist-packages/pkg_resources.py", line 351, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2363, in load_entry_point
return ep.load()
File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2088, in load
entry = __import__(self.module_name, globals(),globals(), ['__name__'])
File "/usr/lib/python3/dist-packages/pip/__init__.py", line 61, in <module>
from pip.vcs import git, mercurial, subversion, bazaar # noqa
File "/usr/lib/python3/dist-packages/pip/vcs/mercurial.py", line 9, in <module>
from pip.download import path_to_url
File "/usr/lib/python3/dist-packages/pip/download.py", line 25, in <module>
from requests.compat import IncompleteRead
ImportError: cannot import name 'IncompleteRead'
Using Ubuntu 14.04 LTS
acd_cli mkdir /video/
acd_cli mkdir /video/test/
Results:
/test
/video
Version:
0.2.2a1
After accepting the auth request, I saved the text to an "oauth_data" file in the application directory. I've also tried converting it to UTF-8 instead of the default ANSI, but that just crashes the JSON decoder completely (comment if you want a stacktrace of that).
C:\acd_cli-master>python acd_cli.py upload C:\test.txt
Invalid ID format.
Neither --verbose nor --debug seems to print any additional info.
The text is valid JSON:
{
"access_token": "<REMOVED FOR SECURITY REASONS>",
"exp_time": 1430170624.685374,
"expires_in": 3600,
"refresh_token": "<REMOVED FOR SECURITY REASONS>",
"token_type": "bearer"
}
In my Amazon account I can see that acd_cli_oa is successfully authorized, so I'm not exactly sure what's wrong.
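One common Windows pitfall worth checking, and this is only a guess at the cause: saving a file as "UTF-8" in Notepad prepends a byte-order mark, which Python's JSON decoder rejects even though the text itself is valid. A quick sketch of detecting and stripping it:

```python
import codecs
import json

# Simulated oauth_data saved by Windows Notepad as "UTF-8": Notepad
# prepends a byte-order mark, which json.loads() rejects.
raw = codecs.BOM_UTF8 + b'{"token_type": "bearer"}'

if raw.startswith(codecs.BOM_UTF8):
    raw = raw[len(codecs.BOM_UTF8):]  # strip the BOM before decoding
data = json.loads(raw.decode('utf-8'))
print(data['token_type'])
```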
I have updated to your newest version and when I try to sync, I get the following error (running with -v as well)
15-05-01 08:14:20.529 [acd_cli] [INFO] - Getting changes with checkpoint "None".
15-05-01 08:14:20.534 [acd.common] [INFO] - Retry 0, waiting 0.000882 secs
15-05-01 08:14:20.543 [urllib3.connectionpool] [INFO] - Starting new HTTPS connection (1): cdws.us-east-1.amazonaws.com
Traceback (most recent call last):
File "./acd_cli.py", line 615, in <module>
main()
File "./acd_cli.py", line 611, in main
args.func(args)
File "./acd_cli.py", line 274, in sync_action
sync_node_list(full=args.full)
File "./acd_cli.py", line 52, in sync_node_list
r = metadata.get_changes(checkpoint=cp)
File "/home/drew/Documents/acd_cli/acd/metadata.py", line 48, in get_changes
ro = str.splitlines(r.text)
File "/usr/lib/python3/dist-packages/requests/models.py", line 711, in text
encoding = self.apparent_encoding
File "/usr/lib/python3/dist-packages/requests/models.py", line 598, in apparent_encoding
return chardet.detect(self.content)['encoding']
File "/usr/lib/python3/dist-packages/chardet/__init__.py", line 30, in detect
u.feed(aBuf)
File "/usr/lib/python3/dist-packages/chardet/universaldetector.py", line 128, in feed
if prober.feed(aBuf) == constants.eFoundIt:
File "/usr/lib/python3/dist-packages/chardet/charsetgroupprober.py", line 64, in feed
st = prober.feed(aBuf)
File "/usr/lib/python3/dist-packages/chardet/sbcharsetprober.py", line 72, in feed
aBuf = self.filter_without_english_letters(aBuf)
File "/usr/lib/python3/dist-packages/chardet/charsetprober.py", line 57, in filter_without_english_letters
aBuf = re.sub(b'([A-Za-z])+', b' ', aBuf)
File "/usr/lib/python3.4/re.py", line 175, in sub
return _compile(pattern, flags).sub(repl, string, count)
MemoryError
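The MemoryError comes from requests running chardet over the entire multi-megabyte response body when .text is accessed without a declared encoding, which is exactly what the traceback shows. Decoding each line with an explicit codec, roughly as later acd_cli versions do, avoids the detection pass; the sample payload and field names below are made up, not the real ACD schema:

```python
import io
import json

# Stand-in for a streamed changes response: one JSON document per line.
body = io.BytesIO(b'{"checkpoint": "abc", "reset": true}\n'
                  b'{"end": true}\n')

# Decoding line by line with an explicit utf-8 codec sidesteps any
# whole-buffer charset detection and keeps memory use bounded.
for raw_line in body:
    doc = json.loads(raw_line.decode('utf-8'))
    print(doc)
```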
One of the major drawbacks of ACD is that you cannot share a folder, or even several files at once; sharing any larger number of files is a major annoyance.
It would be nice if acd_cli supported sharing of files.
Would this be possible?
My environment is Mac OS X 10.10.3 with Python 3.4.3, installed from pyenv. I found that an upload error occurs when the file path contains Unicode characters. For example,
$ acd_cli ul 2000.03.24\ 國樂比賽/_0A_0037.jpg /
15-05-19 22:39:16.263 [ERROR] [acd_cli] - Uploading "_0A_0037.jpg" failed. Code: 400, msg: {"message":"{\"name\":\"content\",\"Content-Type\":\"image/jpeg\"}"}
[ ] 0.0% of 1MiB 0/1 -285.4KB/s
1 file(s) failed.
If I moved the file from 2000.03.24\ 國樂比賽/_0A_0037.jpg to 2000.03.24\ 1/_0A_0037.jpg, the upload succeeded.
$ acd_cli ul 2000.03.24\ 1/_0A_0037.jpg /
[#################################] 100.0% of 1MiB 1/1 229.3KB/s
This error also occurs when the basename contains Unicode characters.
$ acd_cli ul 2000.03.24\ 1/_0A_0037\ 拷貝.jpg /
15-05-19 22:40:04.722 [ERROR] [acd_cli] - Uploading "_0A_0037 拷貝.jpg" failed. Code: 400, msg: {"message":"{\"name\":\"content\",\"Content-Type\":\"image/jpeg\"}"}
[ ] 0.0% of 1MiB 0/1 -69.6KB/s
1 file(s) failed.
Any help?
I get this error with the overwrite command:
Traceback (most recent call last):
File "C:\acd_cli\acd_cli.py", line 692, in <module>
main()
File "C:\acd_cli\acd_cli.py", line 688, in main
args.func(args)
File "C:\acd_cli\acd_cli.py", line 346, in overwrite_action
if utils.is_uploadable(args.file):
AttributeError: 'module' object has no attribute 'is_uploadable'
When trying to list the contents of my Amazon Drive with the tree command, I get the following:
Traceback (most recent call last):
File "./acd_cli.py", line 216, in <module>
main()
File "./acd_cli.py", line 115, in main
print('\n'.join(tree))
UnicodeEncodeError: 'ascii' codec can't encode character '\u0161' in position 1499: ordinal not in range(128)
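This typically happens when the terminal's codec is ASCII (for instance, LANG/LC_ALL unset over SSH). A workaround on the printing side, sketched generically rather than as a patch to acd_cli, is to encode with a replacement fallback instead of relying on the default codec:

```python
import sys

tree = ['music/', '  Šarūnas.mp3']  # sample listing with non-ASCII names

# If sys.stdout.encoding is ASCII, plain print() raises
# UnicodeEncodeError on these names. Encoding with a replacement
# fallback keeps the listing printable on any locale.
encoding = sys.stdout.encoding or 'ascii'
printable = '\n'.join(tree).encode(encoding, errors='replace').decode(encoding)
print(printable)
```

Setting PYTHONIOENCODING=utf-8 in the environment is another common workaround for the same crash.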
Hi there.
Fantastic work. Thanks!
I'd like to request the ability to specify exclude filters for the upload command.
I have a large photo library which also contains video files, but I only want to upload the photos, so that the video files don't take up my quota on Amazon Drive.
For the moment I have simply added these lines to the upload_file function:
# Test to not upload mov and mp4 files
if short_nm.lower().endswith('.mov') or short_nm.lower().endswith('.mp4'):
    print('Skipping "%s" because it\'s a movie.' % short_nm)
    return 0
And while this is of course not an ideal long term fix, it solves my immediate problem.
Keep up the good work.
Thanks,
Stefan
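A slightly more general version of that snippet, using glob-style exclude patterns; the helper name and pattern list here are hypothetical, not part of acd_cli's API:

```python
import fnmatch

def is_excluded(name, patterns):
    """Return True if the file name matches any exclude glob
    (case-insensitive)."""
    return any(fnmatch.fnmatch(name.lower(), p) for p in patterns)

excludes = ['*.mov', '*.mp4']
print(is_excluded('clip.MOV', excludes))
print(is_excluded('photo.jpg', excludes))
```

A command-line flag such as an exclude-pattern option could then feed this check inside the upload loop.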
Hi,
Can this do concurrent uploads? If so, how many?
Thanks!
It would be nice for those of us who run acd_cli from a screen session to have a bit more introspection into what's going on during runs, as the current display is just a status bar and a set of statistics for an upload action.
It's only slightly non-trivial to throw together something with a cross-platform TUI like blessings, PyTVision, or textland. Hell, a pluggable UI architecture wouldn't be a terrible idea (though it would likely take a bit of work).
It would be nice to have subfolders created automatically. For example, if I do
./acd_cli.py create /test/folder
And I do not have a folder called test already, the command would create "test" as well as "folder" inside of test.
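The mkdir-with-parents behavior could be sketched by walking the path one level at a time, creating each missing ancestor before descending; the helper below is an illustration, not acd_cli code:

```python
def parent_chain(path):
    """Yield each ancestor of a remote path, shallowest first, so a
    client can create missing folders one level at a time."""
    cur = ''
    for part in (p for p in path.split('/') if p):
        cur += '/' + part
        yield cur

print(list(parent_chain('/test/folder')))  # ['/test', '/test/folder']
```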
When recursively uploading, if a (large) file that already has an MD5 hashing process running is skipped, that process does not terminate.
This morning I couldn't run acd_cli sync due to what appears to be an expired oauth_token. An error appeared when trying to refresh the token, as shown below (I am running Python 3.4 under Mac OS X 10.10.3).
I initially thought the host https://tensile-runway-92512.appspot.com was down (as reported in the error), but this is not the case. I managed to solve this by removing the existing cache and re-syncing, but in case you are interested, here is the (verbose) output:
python3 acd_cli.py -v s
15-05-21 09:20:55.517 [INFO] [acd_cli] - Plugin leaf classes: StreamPlugin, TestPlugin
15-05-21 09:20:55.517 [INFO] [acd_cli] - StreamPlugin attached.
15-05-21 09:20:55.517 [INFO] [acd_cli] - TestPlugin attached.
15-05-21 09:20:55.518 [INFO] [acdcli.api.common] - Initializing acd with path "~/Library/Caches/acd_cli".
15-05-21 09:20:55.518 [INFO] [acdcli.api.oauth] - Token expired at 2015-05-19 15:56:30.331763.
15-05-21 09:20:55.518 [INFO] [acdcli.api.oauth] - Refreshing authentication token.
15-05-21 09:20:55.539 [INFO] [requests.packages.urllib3.connectionpool] - Starting new HTTPS connection (1): tensile-runway-92512.appspot.com
15-05-21 09:20:55.540 [ERROR] [acdcli.api.oauth] - Error refreshing authentication token.
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py", line 544, in urlopen
body=body, headers=headers)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py", line 341, in _make_request
self._validate_conn(conn)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py", line 761, in _validate_conn
conn.connect()
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connection.py", line 204, in connect
conn = self._new_conn()
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connection.py", line 134, in _new_conn
(self.host, self.port), self.timeout, **extra_kw)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/util/connection.py", line 88, in create_connection
raise err
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/util/connection.py", line 78, in create_connection
sock.connect(sa)
OSError: [Errno 64] Host is down
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/adapters.py", line 370, in send
timeout=timeout
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py", line 597, in urlopen
_stacktrace=sys.exc_info()[2])
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/util/retry.py", line 245, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/packages/six.py", line 309, in reraise
raise value.with_traceback(tb)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py", line 544, in urlopen
body=body, headers=headers)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py", line 341, in _make_request
self._validate_conn(conn)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py", line 761, in _validate_conn
conn.connect()
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connection.py", line 204, in connect
conn = self._new_conn()
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connection.py", line 134, in _new_conn
(self.host, self.port), self.timeout, **extra_kw)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/util/connection.py", line 88, in create_connection
raise err
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/util/connection.py", line 78, in create_connection
sock.connect(sa)
requests.packages.urllib3.exceptions.ProtocolError: ('Connection aborted.', OSError(64, 'Host is down'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "acd_cli.py", line 940, in <module>
main()
File "acd_cli.py", line 920, in main
if not common.init(CACHE_PATH):
File "~/acd_cli-github/acdcli/api/common.py", line 61, in init
return oauth.init(path) and _load_endpoints()
File "~/acd_cli-github/acdcli/api/oauth.py", line 40, in init
_get_data()
File "~/acd_cli-github/acdcli/api/oauth.py", line 67, in _get_data
_get_auth_token()
File "~/acd_cli-github/acdcli/api/oauth.py", line 80, in _get_auth_token
_refresh_auth_token()
File "~/acd_cli-github/acdcli/api/oauth.py", line 111, in _refresh_auth_token
response = requests.post(APPSPOT_URL, data=ref)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/api.py", line 108, in post
return request('post', url, data=data, json=json, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/api.py", line 50, in request
response = session.request(method=method, url=url, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/sessions.py", line 465, in request
resp = self.send(prep, **send_kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/sessions.py", line 573, in send
r = adapter.send(request, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/adapters.py", line 415, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', OSError(64, 'Host is down'))
Failure:
root@linux:~/.acd# ./acd_cli.py upload /root/12.9GB.file /
Current file: /root/12.9GB.file
[##################################################] 100.00% of 12.9GiB
Uploading "12.9GB.file" failed. Code: 504, msg: {"message": "[acd_cli] no body received."}
Success:
root@linux:~/.acd# ./acd_cli.py upload 10GB.file /
Current file: /root/10GB.file
[##################################################] 100.00% of 10.0GiB
I've repeated the experiment several times. Is this a client or service limitation?
I started uploading a folder with about 1000 files and got this error a couple of times:
Traceback (most recent call last):
File "./acd_cli.py", line 46, in upload
r = content.upload_file(path, parent_id)
File "/Users/jure/acd_cli/acd/content.py", line 70, in upload_file
raise RequestError(status, body)
acd.common.RequestError: 400
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./acd_cli.py", line 416, in <module>
main()
File "./acd_cli.py", line 412, in main
args.func(args)
File "./acd_cli.py", line 184, in upload_action
upload(args.file, parent)
File "./acd_cli.py", line 38, in upload
upload_folder(path, parent_id)
File "./acd_cli.py", line 95, in upload_folder
upload_folder(full_path, curr_node.id)
File "./acd_cli.py", line 95, in upload_folder
upload_folder(full_path, curr_node.id)
File "./acd_cli.py", line 97, in upload_folder
upload(full_path, curr_node.id)
File "./acd_cli.py", line 54, in upload
print('Uploading "%s" failed. Code: %s, msg: %s' % e.status_code, e.msg)
TypeError: not enough arguments for format string
Is this a server error? Another time I got a 504. Maybe you could make it continue uploading the rest of the files despite the error, or retry uploading the failed file.
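The TypeError at the end of that traceback is a client-side bug independent of the 400: the % operator binds only to e.status_code, so the format string never receives values for its other placeholders. The fix, sketched here with a stand-in exception class, is to supply one value per placeholder as a single tuple:

```python
class RequestError(Exception):
    """Stand-in for acd.common.RequestError."""
    def __init__(self, status_code, msg):
        self.status_code, self.msg = status_code, msg

e = RequestError(400, 'bad request')
name = 'example.bin'  # hypothetical file name

# Buggy: '%' consumes only e.status_code, leaving two placeholders
# unfilled -- hence "not enough arguments for format string".
# print('Uploading "%s" failed. Code: %s, msg: %s' % e.status_code, e.msg)

# Fixed: pass all substitutions as one tuple.
msg = 'Uploading "%s" failed. Code: %s, msg: %s' % (name, e.status_code, e.msg)
print(msg)
```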
When issuing a sync and a previous checkpoint exists, nodes manually purged from the cloud drive don't get updated.
Workaround: use sync --full until this is fixed.
When I try the command
acd_cli.py tree -4nNUhxdTqWdKCLhJJC_CF
I get back
acd_cli.py: error: unrecognized arguments: -4nNUhxdTqWdKCLhJJC_CF
Is there a way I can pass in folders whose ID starts with a hyphen? I have tried single and double quotes, but I have had no success.
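argparse-based CLIs generally stop option parsing at a bare --, so passing the ID after a -- separator may work; whether acd_cli's parser accepts this is untested here, but the underlying behavior is standard:

```python
import argparse

parser = argparse.ArgumentParser(prog='tree')
parser.add_argument('node')

# Everything after '--' is treated as positional, even if it starts
# with a hyphen, so an ID like '-4nN...' parses cleanly.
args = parser.parse_args(['--', '-4nNUhxdTqWdKCLhJJC_CF'])
print(args.node)  # -4nNUhxdTqWdKCLhJJC_CF
```

On the command line that would look like acd_cli.py tree -- -4nNUhxdTqWdKCLhJJC_CF, assuming acd_cli forwards argv unchanged to argparse.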