
s5cmd's Introduction

Overview

s5cmd is a very fast S3 and local filesystem execution tool. It supports a multitude of operations, and comes with tab completion and wildcard support for files, which can be very handy in your object storage workflow when working with large numbers of files.

There are already other utilities to work with S3 and similar object storage services, thus it is natural to wonder what s5cmd has to offer that others don't.

In short: speed. Thanks to Joshua Robinson for his study and experimentation on s5cmd; to quote his Medium post:

For uploads, s5cmd is 32x faster than s3cmd and 12x faster than aws-cli. For downloads, s5cmd can saturate a 40Gbps link (~4.3 GB/s), whereas s3cmd and aws-cli can only reach 85 MB/s and 375 MB/s respectively.

If you would like to know more about the performance of s5cmd and the reasons for its speed, refer to the Benchmarks section.

Features

s5cmd supports a wide range of object management tasks, both for cloud storage services and local filesystems.

  • List buckets and objects
  • Upload, download or delete objects
  • Move, copy or rename objects
  • Set Server Side Encryption using AWS Key Management Service (KMS)
  • Set Access Control Lists (ACL) for objects/files on upload, copy, and move
  • Print object contents to stdout
  • Select JSON records from objects using SQL expressions
  • Create or remove buckets
  • Summarize object sizes, grouping by storage class
  • Wildcard support for all operations
  • Multiple arguments support for delete operation
  • Command file support to run commands in batches at very high execution speeds
  • Dry run support
  • S3 Transfer Acceleration support
  • Google Cloud Storage (and any other S3 API compatible service) support
  • Structured logging for querying command outputs
  • Shell auto-completion
  • S3 ListObjects API backward compatibility

Installation

Official Releases

Binaries

The Releases page provides pre-built binaries for Linux, macOS and Windows.

Homebrew

For macOS, a homebrew tap is provided:

brew install peak/tap/s5cmd

Unofficial Releases (by Community)

Warning These releases are maintained by the community. They might be out of date compared to the official releases.

MacPorts

You can also install s5cmd from MacPorts on macOS:

sudo port selfupdate
sudo port install s5cmd

Conda

s5cmd is included in the conda-forge channel and can be installed with Conda.

Installing s5cmd from the conda-forge channel can be achieved by adding conda-forge to your channels with:

conda config --add channels conda-forge
conda config --set channel_priority strict

Once the conda-forge channel has been enabled, s5cmd can be installed with conda:

conda install s5cmd

P.S. The above is quoted from the s5cmd feedstock. You can find further instructions in its README.

FreeBSD

On FreeBSD you can install s5cmd as a package:

pkg install s5cmd

or via ports:

cd /usr/ports/net/s5cmd
make install clean

Build from source

You can build s5cmd from source if you have Go 1.19+ installed.

go install github.com/peak/s5cmd/v2@master

⚠️ Please note that building from master is not guaranteed to be stable, since development happens on the master branch.

Docker

Hub

$ docker pull peakcom/s5cmd
$ docker run --rm -v ~/.aws:/root/.aws peakcom/s5cmd <S3 operation>

ℹ️ The /aws directory is the working directory of the image. Mounting your current working directory to it allows you to run s5cmd as if it were installed on your system:

docker run --rm -v $(pwd):/aws -v ~/.aws:/root/.aws peakcom/s5cmd <S3 operation>

Build

$ git clone https://github.com/peak/s5cmd && cd s5cmd
$ docker build -t s5cmd .
$ docker run --rm -v ~/.aws:/root/.aws s5cmd <S3 operation>

Usage

s5cmd supports multi-level wildcards for all S3 operations. This is achieved by listing all S3 objects with the prefix up to the first wildcard, then filtering the results in memory. For example, for the following command;

s5cmd cp 's3://bucket/logs/2020/03/*' .

first a ListObjects request is sent, then the copy operation is executed against each matching object, in parallel.
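
For instance, assuming a hypothetical layout of objects under s3://bucket/logs/<year>/<month>/, a multi-level wildcard like the one below triggers a single listing of the s3://bucket/logs/ prefix, and the full pattern is then matched in memory:

s5cmd cp 's3://bucket/logs/*/03/*.gz' .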

Specifying credentials

s5cmd uses the official AWS SDK to access S3. The SDK requires credentials to sign requests to AWS. Credentials can be provided in a variety of ways:

  • Command-line flags: --profile to use a named profile, and --credentials-file to use a specific credentials file

    # Use your company profile in AWS default credential file
    s5cmd --profile my-work-profile ls s3://my-company-bucket/
    
    # Use your company profile in your own credential file
    s5cmd --credentials-file ~/.your-credentials-file --profile my-work-profile ls s3://my-company-bucket/
  • Environment variables

    # Export your AWS access key and secret pair
    export AWS_ACCESS_KEY_ID='<your-access-key-id>'
    export AWS_SECRET_ACCESS_KEY='<your-secret-access-key>'
    export AWS_PROFILE='<your-profile-name>'
    export AWS_REGION='<your-bucket-region>'
    
    s5cmd ls s3://your-bucket/
  • If s5cmd runs on an Amazon EC2 instance, EC2 IAM role

  • If s5cmd runs on EKS, Kube IAM role

  • Or, you can send requests anonymously with --no-sign-request option

    # List objects in a public bucket
    s5cmd --no-sign-request ls s3://public-bucket/

Region detection

While executing the commands, s5cmd detects the region according to the following order of priority:

  1. --source-region or --destination-region flags of the cp command.
  2. AWS_REGION environment variable.
  3. Region section of the AWS profile.
  4. Auto-detection of the bucket region (via a HeadBucket API call).
  5. us-east-1 as the default region.
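
For example, to pin the bucket region explicitly instead of relying on auto-detection (a sketch; the bucket name is a placeholder):

s5cmd cp --source-region us-west-2 s3://bucket/object.gz .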

Examples

Download a single S3 object

s5cmd cp s3://bucket/object.gz .

Download multiple S3 objects

Suppose we have the following objects:

s3://bucket/logs/2020/03/18/file1.gz
s3://bucket/logs/2020/03/19/file2.gz
s3://bucket/logs/2020/03/19/originals/file3.gz

s5cmd cp 's3://bucket/logs/2020/03/*' logs/

s5cmd will match the given wildcards and arguments by doing an efficient search against the given prefixes. All matching objects will be downloaded in parallel. s5cmd will create the destination directory if it is missing.

logs/ directory content will look like:

$ tree
.
└── logs
    ├── 18
    │   └── file1.gz
    └── 19
        ├── file2.gz
        └── originals
            └── file3.gz

4 directories, 3 files

ℹ️ s5cmd preserves the source directory structure by default. If you want to flatten the source directory structure, use the --flatten flag.

s5cmd cp --flatten 's3://bucket/logs/2020/03/*' logs/

logs/ directory content will look like:

$ tree
.
└── logs
    ├── file1.gz
    ├── file2.gz
    └── file3.gz

1 directory, 3 files

Upload a file to S3

s5cmd cp object.gz s3://bucket/

Setting the server-side encryption (AWS KMS) of the file:

s5cmd cp -sse aws:kms -sse-kms-key-id <your-kms-key-id> object.gz s3://bucket/

Setting the Access Control List (ACL) policy of the object:

s5cmd cp -acl bucket-owner-full-control object.gz s3://bucket/

Upload multiple files to S3

s5cmd cp directory/ s3://bucket/

This will upload all files in the given directory to S3 while keeping the folder hierarchy of the source.
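
You can also upload just a subset by using a wildcard; quote it so the shell does not expand it first (see the "Using wildcards" note below). A sketch:

s5cmd cp 'directory/*.csv' s3://bucket/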

Stream stdin to S3

You can upload data to S3 by piping stdin to s5cmd:

curl https://github.com/peak/s5cmd/ | s5cmd pipe s3://bucket/s5cmd.html

Or you can compress the data before uploading:

tar -cf - file.bin | s5cmd pipe s3://bucket/file.bin.tar

Delete an S3 object

s5cmd rm s3://bucket/logs/2020/03/18/file1.gz

Delete multiple S3 objects

s5cmd rm s3://bucket/logs/2020/03/19/*

This will remove all matching objects:

s3://bucket/logs/2020/03/19/file2.gz
s3://bucket/logs/2020/03/19/originals/file3.gz

s5cmd utilizes the S3 delete batch API: up to 1000 matching objects are deleted in a single request. However, it should be noted that commands such as

s5cmd rm s3://bucket-foo/object s3://bucket-bar/object

are not supported by s5cmd and result in an error (since two different buckets are involved), as this is at odds with the benefit of performing batch delete requests. If needed, you can use s5cmd's run mode for this case, i.e.,

$ s5cmd run
rm s3://bucket-foo/object
rm s3://bucket-bar/object

More details and examples of s5cmd run are presented in a later section.

Copy objects from S3 to S3

s5cmd supports copying objects on the server side as well.

s5cmd cp 's3://bucket/logs/2020/*' s3://bucket/logs/backup/

This will copy all the matching objects to the given S3 prefix, respecting the source folder hierarchy.
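
Copying between two different buckets works the same way; the bucket names below are placeholders:

s5cmd cp 's3://source-bucket/logs/2020/*' s3://backup-bucket/logs/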

⚠️ Copying objects (from S3 to S3) larger than 5GB is not supported yet. We have an open ticket to track the issue.

Using Exclude and Include Filters

s5cmd supports the --exclude and --include flags, which can be used to specify patterns for objects to be excluded or included in commands.

  • The --exclude flag specifies objects that should be excluded from the operation. Any object that matches the pattern will be skipped.
  • The --include flag specifies objects that should be included in the operation. Only objects that match the pattern will be handled.
  • If both flags are used, --exclude has precedence over --include. This means that if an object URL matches any of the --exclude patterns, the object will be skipped, even if it also matches one of the --include patterns.
  • The order of the flags does not affect the results (unlike aws-cli).

The command below will delete only objects that end with .log.

s5cmd rm --include "*.log" 's3://bucket/logs/2020/*'

The command below will delete all objects except those that end with .log or .txt.

s5cmd rm --exclude "*.log" --exclude "*.txt" 's3://bucket/logs/2020/*'

If you wish, you can use multiple flags, like below. It will download objects that start with request or end with .log.

s5cmd cp --include "*.log" --include "request*" 's3://bucket/logs/2020/*' .

Using a combination of --include and --exclude is also possible. The command below will only sync objects that end with .log or .txt, excluding those that start with access_. For example, request.log and license.txt will be included, while access_log.txt and readme.md are excluded.

s5cmd sync --include "*.log" --exclude "access_*" --include "*.txt" 's3://bucket/logs/*' .

Select JSON object content using SQL

s5cmd supports the SelectObjectContent S3 operation, and will run your SQL query against objects matching normal wildcard syntax and emit matching JSON records via stdout. Records from multiple objects will be interleaved, and order of the records is not guaranteed (though it's likely that the records from a single object will arrive in-order, even if interleaved with other records).

$ s5cmd select --compression GZIP \
  --query "SELECT s.timestamp, s.hostname FROM S3Object s WHERE s.ip_address LIKE '10.%' OR s.application='unprivileged'" \
  s3://bucket-foo/object/2021/*
{"timestamp":"2021-07-08T18:24:06.665Z","hostname":"application.internal"}
{"timestamp":"2021-07-08T18:24:16.095Z","hostname":"api.github.com"}

At the moment this operation only supports JSON records selected with SQL. S3 calls this lines-type JSON, but it seems to work even if the records aren't line-delimited. YMMV.

Count objects and determine total size

$ s5cmd du --humanize 's3://bucket/2020/*'

30.8M bytes in 3 objects: s3://bucket/2020/*
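
To break the summary down by storage class (the grouping mentioned in the feature list above), a sketch assuming the --group flag:

s5cmd du --humanize --group 's3://bucket/2020/*'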

Run multiple commands in parallel

The most powerful feature of s5cmd is the commands file. Thousands of S3 and filesystem commands are declared in a file (or simply piped in from another process) and they are executed using multiple parallel workers. Since only one program is launched, thousands of unnecessary fork-exec calls are avoided. This way S3 execution times can reach a few thousand operations per second.

s5cmd run commands.txt

or

cat commands.txt | s5cmd run

commands.txt content could look like:

cp s3://bucket/2020/03/* logs/2020/03/

# line comments are supported
rm s3://bucket/2020/03/19/file2.gz

# empty lines are OK too like above

# rename an S3 object
mv s3://bucket/2020/03/18/file1.gz s3://bucket/2020/03/18/original/file.gz
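
Since run consumes plain text, the command list can also be generated programmatically; a minimal sketch with hypothetical prefixes:

# emit one copy command per month and let s5cmd execute them in parallel
for month in 01 02 03; do
  echo "cp s3://bucket/2020/$month/* logs/2020/$month/"
done | s5cmd run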

Sync

The sync command synchronizes S3 buckets, prefixes, directories, and files: between the local filesystem and S3, and between S3 locations as well. It compares files between source and destination, taking source files as the source of truth; it

  • copies files that do not exist in the destination
  • copies files that exist in both locations if the comparison made by the sync strategy requires it

It performs a one-way synchronization from source to destination, neither modifying any of the source files nor deleting any of the destination files (unless the --delete flag is passed).

Suppose we have the following local files;

   -  29 Sep 10:00 .
5000  29 Sep 11:00 ├── favicon.ico
 300  29 Sep 10:00 ├── index.html
  50  29 Sep 10:00 ├── readme.md
  80  29 Sep 11:30 └── styles.css

and the following objects in the bucket;

$ s5cmd ls s3://bucket/static/
2021/09/29 10:00:01               300 index.html
2021/09/29 11:10:01                10 readme.md
2021/09/29 10:00:01                90 styles.css
2021/09/29 11:10:01                10 test.html

Running

s5cmd sync . s3://bucket/static/

would;

  • copy favicon.ico
    • the file does not exist in the destination.
  • copy styles.css
    • the source file is newer than its remote counterpart.
  • copy readme.md
    • even though the source file is older, its size differs from the destination's; the source file is assumed to be the source of truth.

The executed operations would be:

cp favicon.ico s3://bucket/static/favicon.ico
cp styles.css s3://bucket/static/styles.css
cp readme.md s3://bucket/static/readme.md

Running with the --delete flag would also delete files that do not exist in the source;

s5cmd sync --delete . s3://bucket/static/

rm s3://bucket/static/test.html
cp favicon.ico s3://bucket/static/favicon.ico
cp styles.css s3://bucket/static/styles.css
cp readme.md s3://bucket/static/readme.md

It's also possible to use wildcards to sync only a subset of files.

To sync only the .html files in the S3 bucket above to the same local filesystem;

s5cmd sync 's3://bucket/static/*.html' .

cp s3://bucket/static/index.html index.html
cp s3://bucket/static/test.html test.html

Strategy

Default

By default, s5cmd compares both the size and the modification time of files, treating source files as the source of truth. Any difference in size or modification time causes s5cmd to copy the source object to the destination.

mod time     size        should sync
-----------  ----------  -----------
src > dst    src != dst  yes
src > dst    src == dst  yes
src <= dst   src != dst  yes
src <= dst   src == dst  no

Size only

With the --size-only flag, the strategy compares only file sizes. The source is treated as the source of truth, and any difference in size causes s5cmd to copy the source object to the destination.

mod time     size        should sync
-----------  ----------  -----------
src > dst    src != dst  yes
src > dst    src == dst  no
src <= dst   src != dst  yes
src <= dst   src == dst  no
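
For example, to rely on size comparison only (useful when modification times are not meaningful, e.g., after re-uploading identical files):

s5cmd sync --size-only . s3://bucket/static/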

Dry run

The --dry-run flag outputs the operations that will be performed, without actually carrying them out. Given the following objects;

s3://bucket/pre/file1.gz
...
s3://bucket/pre/last.txt

running

s5cmd --dry-run cp s3://bucket/pre/* s3://another-bucket/

will output

cp s3://bucket/pre/file1.gz s3://another-bucket/file1.gz
...
cp s3://bucket/pre/last.txt s3://another-bucket/last.txt

However, those copy operations will not be performed. The output displays what s5cmd would do when run without --dry-run.

Note that --dry-run can be used with any operation that has a side effect, e.g., cp, mv, rm, mb ...
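
For example, to preview which destination files a sync --delete would remove, without touching anything:

s5cmd --dry-run sync --delete . s3://bucket/static/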

S3 ListObjects API Backward Compatibility

The --use-list-objects-v1 flag will force using S3 ListObjectsV1 API. This flag is useful for services that do not support ListObjectsV2 API.

s5cmd --use-list-objects-v1 ls s3://bucket/

Shell auto-completion

Shell completion is supported for bash, pwsh (PowerShell) and zsh.

Run s5cmd --install-completion to obtain the appropriate auto-completion script for your shell. Note that install-completion does not actually install the auto-completion; it merely prints the instructions to install it. The name is kept as is for backward compatibility.

To actually enable auto-completion:

In bash and zsh:

add the auto-completion script to your .bashrc or .zshrc file.

In pwsh:

save the auto-completion script to a file named s5cmd.ps1 and add the full path of the s5cmd.ps1 file to your profile file (which you can locate with $profile).

Finally, restart your shell to activate the changes.

Note The environment variable SHELL must be accurate for the auto-completion to function properly. That is, it should point to the bash binary in bash, the zsh binary in zsh, and the pwsh binary in PowerShell.

Note The auto-completion is tested with the following versions of the shells:
zsh 5.8.1 (x86_64-apple-darwin21.0)
GNU bash, version 5.1.16(1)-release (x86_64-apple-darwin21.1.0)
PowerShell 7.2.6

Google Cloud Storage support

s5cmd supports S3 API-compatible services, such as GCS, MinIO, or your favorite object storage.

s5cmd --endpoint-url https://storage.googleapis.com ls

or, alternatively, with an environment variable:

S3_ENDPOINT_URL="https://storage.googleapis.com" s5cmd ls

# or

export S3_ENDPOINT_URL="https://storage.googleapis.com"
s5cmd ls

All variants will return your GCS buckets.

s5cmd reads .aws/credentials to access Google Cloud Storage. Populate the aws_access_key_id and aws_secret_access_key fields in .aws/credentials with an HMAC key created using this procedure.
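
A minimal ~/.aws/credentials sketch, with placeholders for the HMAC key pair:

[default]
aws_access_key_id = <your-gcs-hmac-access-id>
aws_secret_access_key = <your-gcs-hmac-secret>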

s5cmd will use virtual-host-style bucket resolving for S3, S3 Transfer Acceleration, and GCS. If a custom endpoint is provided, it will fall back to path-style.

Retry logic

s5cmd uses an exponential backoff retry mechanism for transient or potential server-side throttling errors. Non-retriable errors, such as invalid credentials or authorization errors, will not be retried. By default, s5cmd retries 10 times, for up to a minute. The number of retries is adjustable via the --retry-count flag.

ℹ️ Enable debug level logging for displaying retryable errors.
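
For example, to raise the retry budget and surface retryable errors in the logs (a sketch; --log debug enables debug-level logging):

s5cmd --log debug --retry-count 20 cp 's3://bucket/prefix/*' .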

Integrity Verification

s5cmd verifies the integrity of files uploaded to Amazon S3 by checking the Content-MD5 and X-Amz-Content-Sha256 headers. These headers are added by the AWS SDK for both standard and multipart uploads.

  • Content-MD5 is a checksum of the file's contents, calculated using the MD5 algorithm.
  • X-Amz-Content-Sha256 is a checksum of the file's contents, calculated using the SHA256 algorithm.

If the checksums in these headers do not match the checksum of the file that was actually uploaded, then s5cmd will fail the upload. This helps to ensure that the file was not corrupted during transmission.

If the checksum calculated by S3 does not match the checksums provided in the Content-MD5 and X-Amz-Content-Sha256 headers, S3 will not store the object. Instead, it will return an error message to s5cmd with the error code InvalidDigest for an MD5 mismatch or XAmzContentSHA256Mismatch for a SHA256 mismatch.

Error Code                  Description
InvalidDigest               The checksum provided in the Content-MD5 header does not match the checksum calculated by S3.
XAmzContentSHA256Mismatch   The checksum provided in the X-Amz-Content-Sha256 header does not match the checksum calculated by S3.

If s5cmd receives either of these error codes, it will not retry the upload, and the exit code will be 1.

If the MD5 checksum mismatches, you will see an error like the one below.

ERROR "cp file.log s3://bucket/file.log": InvalidDigest: The Content-MD5 you specified was invalid. status code: 400, request id: S3TR4P2E0A2K3JMH7, host id: XTeMYKd2KECOHWk5S

If the SHA256 checksum mismatches, you will see an error like the one below.

ERROR "cp file.log s3://bucket/file.log": XAmzContentSHA256Mismatch: The provided 'x-amz-content-sha256' header does not match what was computed. status code: 400, request id: S3TR4P2E0A2K3JMH7, host id: XTeMYKd2KECOHWk5S

aws-cli and s5cmd are both command-line tools that can be used to interact with Amazon S3. However, there are some differences between the two tools in terms of how they verify the integrity of data uploaded to S3.

  • Number of retries: aws-cli will retry up to five times to upload a file, while s5cmd will not retry.
  • Checksums: If you enable Signature Version 4 in your ~/.aws/config file, aws-cli will only check the SHA256 checksum of a file while s5cmd will check both the MD5 and SHA256 checksums.

Using wildcards

On some shells, like zsh, the * character is treated as a file-globbing wildcard, which causes unexpected results with s5cmd. You might see output like:

zsh: no matches found

If that happens, you need to wrap your wildcard expression in single quotes, like:

s5cmd cp '*.gz' s3://bucket/

Output

s5cmd supports both structured and unstructured outputs.

  • unstructured output:

$ s5cmd cp s3://bucket/testfile .
cp s3://bucket/testfile testfile

$ s5cmd cp --no-clobber s3://somebucket/file.txt file.txt
ERROR "cp s3://somebucket/file.txt file.txt": object already exists

  • structured output, if the --json flag is provided:
{
    "operation": "cp",
    "success": true,
    "source": "s3://bucket/testfile",
    "destination": "testfile",
    "object": "[object]"
}
{
    "operation": "cp",
    "job": "cp s3://somebucket/file.txt file.txt",
    "error": "'cp s3://somebucket/file.txt file.txt': object already exists"
}
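
The structured output pairs well with line-oriented JSON tools such as jq; the field names below follow the sample output above:

s5cmd --json cp s3://bucket/testfile . | jq -r 'select(.success) | .destination'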

Configuring Concurrency

numworkers

numworkers is a global option that sets the size of the global worker pool. The default value of numworkers is 256. Commands that can benefit from parallelism, such as cp, select, and run, use this worker pool to execute tasks. A task can be an upload, a download, or any command in a run file.

For example, if you are uploading 100 files to an S3 bucket and --numworkers is set to 10, then s5cmd will limit the number of files concurrently uploaded to 10.

s5cmd --numworkers 10 cp '/Users/foo/bar/*' s3://mybucket/foo/bar/

concurrency

concurrency is a cp command option. It sets the number of parts that will be uploaded or downloaded in parallel for a single file. This parameter is used by the AWS Go SDK. The default value of concurrency is 5.

numworkers and concurrency options can be used together:

s5cmd --numworkers 10 cp --concurrency 10 '/Users/foo/bar/*' s3://mybucket/foo/bar/

If you have a few large files to download, setting --numworkers to a very high value will not affect download speed. In this scenario, setting --concurrency to a higher value may have a greater impact on download speed.
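
For example, a sketch that favors per-file part parallelism over worker count for a handful of large objects:

s5cmd --numworkers 4 cp --concurrency 20 's3://bucket/big/*' .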

Benchmarks

Some benchmarks regarding the performance of s5cmd are presented below. For more details, refer to this post, which is the source of these benchmarks.

Upload/download of single large file

get/put performance graph

Uploading large number of small-sized files

multi-object upload performance graph

Performance comparison on different hardware

s3 upload speed graph

So, where does all this speed come from?

There are mainly two reasons for this:

  • It is written in Go, a statically compiled language designed to make development of concurrent systems easy and to make full use of multi-core processors.
  • Parallelization. s5cmd starts out with concurrent worker pools and parallelizes workloads as much as possible while trying to achieve maximum throughput.

performance regression tests

The bench.py script can be used to compare the performance of two different s5cmd builds. Refer to this readme file for further details.

Advanced Usage

Some of the advanced usage patterns below are inspired by the following article (thank you, @joshuarobinson!).

Integrate s5cmd operations with Unix commands

Assume we have a set of objects on S3, and we would like to list them sorted by object name.

$ s5cmd ls s3://bucket/reports/ | sort -k 4
2020/08/17 09:34:33              1364 antalya.csv
2020/08/17 09:34:33                 0 batman.csv
2020/08/17 09:34:33             23114 istanbul.csv
2020/08/17 09:34:33             26154 izmir.csv
2020/08/17 09:34:33               112 samsun.csv
2020/08/17 09:34:33             12552 van.csv

For a more practical scenario, let's say we have an avocado prices dataset, and we would like to take a peek at the first few lines of the data by fetching only the necessary bytes.

$ s5cmd cat s3://bucket/avocado.csv.gz | gunzip | xsv slice --len 5 | xsv table
    Date        AveragePrice  Total Volume  4046     4225       4770   Total Bags  Small Bags  Large Bags  XLarge Bags  type          year  region
0   2015-12-27  1.33          64236.62      1036.74  54454.85   48.16  8696.87     8603.62     93.25       0.0          conventional  2015  Albany
1   2015-12-20  1.35          54876.98      674.28   44638.81   58.33  9505.56     9408.07     97.49       0.0          conventional  2015  Albany
2   2015-12-13  0.93          118220.22     794.7    109149.67  130.5  8145.35     8042.21     103.14      0.0          conventional  2015  Albany
3   2015-12-06  1.08          78992.15      1132.0   71976.41   72.58  5811.16     5677.4      133.76      0.0          conventional  2015  Albany
4   2015-11-29  1.28          51039.6       941.48   43838.39   75.78  6183.95     5986.26     197.69      0.0          conventional  2015  Albany

Beast Mode s5cmd

s5cmd allows you to pass a file containing a list of operations as an argument to the run command, as illustrated in the example above. Alternatively, you can pipe commands into run:

BUCKET=s5cmd-test; s5cmd ls s3://$BUCKET/*test | grep -v DIR | awk '{print $NF}' \
| xargs -I {} echo "cp s3://$BUCKET/{} /local/directory/" | s5cmd run

The above performs two s5cmd invocations: the first searches for files with the test suffix, an intermediate step builds a cp-to-local-directory command for each matching file, and finally those commands are piped into run.

Let's examine another usage instance, where we migrate files older than 30 days to cloud object storage:

find /mnt/joshua/nachos/ -type f -mtime +30 | awk '{print "mv "$1" s3://joshuarobinson/backup/"$1}' \
| s5cmd run

It is worth mentioning that the run command should not be considered a silver bullet for all operations. For example, assume we want to remove the following objects:

s3://bucket/prefix/2020/03/object1.gz
s3://bucket/prefix/2020/04/object1.gz
...
s3://bucket/prefix/2020/09/object77.gz

Rather than executing

rm s3://bucket/prefix/2020/03/object1.gz
rm s3://bucket/prefix/2020/04/object1.gz
...
rm s3://bucket/prefix/2020/09/object77.gz

with the run command, it is better to just use

rm s3://bucket/prefix/2020/0*/object*.gz

The latter sends a single delete request per thousand objects, whereas the former sends a separate delete request for each subcommand provided to run. Thus, there can be a significant runtime difference between the two approaches.

LICENSE

MIT. See LICENSE.

s5cmd's Issues

Add option for json output

It would be really useful if there was an option to print the output in JSON format, so that it can be parsed easily.

Add option for ACL on cp command

Hey!

Awesome work! This is super fast.

Can you add an option for --acl bucket-owner-full-control when using cp please?

Cheers!
Josh

Support for multiple local sources (expanded wildcards)

If a wildcard is expanded to more than one local file, the command fails with an "Invalid parameters" error message. As a workaround, local wildcards should be passed in single quotes to prevent shell wildcard expansion.

Commands with local-file sources should support more than one source to fix this. This would also enable specifying multiple wildcards as sources.

Also keep the built-in wildcard expansion so that piped wildcards in a commands file continue to work.

Add option to cp to stdout

aws s3 cp has a magic "-" option which, on Linux and macOS at least, works like writing to /dev/stdout. s5cmd does not appear to support providing an in-order stream of bytes (writing to stdout or a FIFO fails because they don't allow seeking).

Example use case: We would like to be able to pipe extremely large files (terabytes) from s3 into another program such as sha512sum without writing it to disk or storing the entire file in RAM.

Support Transfer Acceleration

S3 has a feature called transfer acceleration.

Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront’s globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path.

CLI usage is documented here.

Add --no-verify-ssl capability

Hi!

During testing of self hosted s3 storage solutions it is vital to be able to turn off the ssl verification. This functionality is supported by the awscli client with the --no-verify-ssl flag.

It seems the aws-sdk-go has the functionality needed:
aws/aws-sdk-go#2404

Upload happens despite condition checks failing.

When giving a user following permissions:
PutObject
GetBucketLocation

And trying to upload with the -n -s -u options produces no errors, and the upload just happens:

s5cmd cp -n -s -u  test s3://test/test/
                    # Uploading test... (1 bytes)
2020/03/09 16:16:51 +OK "cp krisz-test s3://test/test/test"
 s5cmd cp -n -s -u  krisz-test s3://test/test/
                    # Uploading test... (1 bytes)
2020/03/09 16:18:23 +OK "cp test s3://test/test/test"

According to job_check:

func CheckConditions(ctx context.Context, src, dst *objurl.ObjectURL, opts opt.OptionList) *JobResponse {

We should fail because we get an error from HeadObject due to not having permission, but it just assumes the file is not present.
Also, when giving permission only to ListBucket and doing the same operation, a CloudTrail event is generated for AccessDenied on HeadObject, but no error is thrown.

Change default copy behaviour to --parents

Not 100% sure about this, adding this here to track the issue.

Currently s5cmd flattens the source directory hierarchy while downloading (cp/mv). For example:

s3://bucket/obj1
s3://bucket/obj2
s3://bucket/01/obj2

s5cmd cp 's3://bucket/*' .

The default behaviour is to find each object recursively and download it, ignoring all prefixes after the wildcard. Result:

$ ls 

obj1
obj2

There'll be only 1 obj2 but we're not very confident which one.

We should make --parents the default behaviour, and --flatten an option.

Invalid symbolic link to directory with --parents

Tested with s5cmd 0.6.0

When using the --parents parameter with cp, a symbolic link to a directory is managed as a file:

$ find tests -ls
  3318055      4 drwxrwxr-x   3 murlock  murlock      4096 Jun 17 11:59 test/
  3282596      0 lrwxrwxrwx   1 murlock  murlock        10 Jun 17 11:59 test/toto -> tata/160MB
  3282588      0 lrwxrwxrwx   1 murlock  murlock         4 Jun 17 11:59 test/tutu -> tata
  3281209      4 -rw-r--r--   1 murlock  murlock       111 Jun 17 11:58 test/magic
  3318056      4 drwxrwxr-x   2 murlock  murlock      4096 Jun 17 11:58 test/tata
  3281650      4 -rw-r--r--   1 murlock  murlock       111 Jun 17 11:58 test/tata/magic
  3282576 163844 -rw-rw-r--   1 murlock  murlock  167772160 Jun 17 11:58 test/tata/160MB
$ s5cmd  -endpoint-url http://127.0.0.1:5000 -us 5  cp --parents test/ s3://s5cmd/test2/
                    # Uploading magic... (111 bytes)
                    # Uploading 160MB... (167772160 bytes)
                    # Uploading magic... (111 bytes)
                    # Uploading toto... (167772160 bytes)
                    # Uploading tutu... (4096 bytes)
                    + "cp --parents test/magic s3://s5cmd/test2/magic"
                    + "cp --parents test/tata/magic s3://s5cmd/test2/tata/magic"
                    - "cp --parents test/tutu s3://s5cmd/test2/tutu": MultipartUpload: upload multipart failed upload id: ODgyNzQyZmUtNzkzYy00ZGNjLThjN2MtOGZkODRmMTdiOTJl caused by: EntityTooLarge: Your proposed upload exceeds the maximum allowed object size. status code: 400, request id: tx9cb2eb104ddb4fa0a26f2-005d079de3, host id: tx9cb2eb104ddb4fa0a26f2-005d079de3
                    + "cp --parents test/toto s3://s5cmd/test2/toto"
                    + "cp --parents test/tata/160MB s3://s5cmd/test2/tata/160MB"
2019/06/17 16:04:23 -ERR "cp test/ s3://s5cmd/test2/": Not all jobs completed successfully: 4/5
2019/06/17 16:04:24 # All workers idle, finishing up...

An strace shows that the symbolic link tutu was treated as a file:

10057 newfstatat(AT_FDCWD, "test/tutu", {st_mode=S_IFDIR|0775, st_size=4096, ...}, 0) = 0                                           
10057 openat(AT_FDCWD, "test/tutu", O_RDONLY|O_CLOEXEC <unfinished ...>                                                             
10059 <... connect resumed> )           = -1 EINPROGRESS (Operation now in progress)                                                
10057 <... openat resumed> )            = 10      
10057 lseek(10, 0, SEEK_CUR)            = 0                                                                                         
10057 lseek(10, 0, SEEK_END)            = 9223372036854775807  
10061 pread64(10, 0xc00031c000, 32768, 1844674407370956) = -1 EISDIR (Is a directory)  

and a network trace shows the result of lseek used as-is:

    PUT /s5cmd/test2/tutu?partNumber=1&uploadId=NjEwZTE3MTYtMmUxNC00N2E4LWFjODgtOTFlMzYwYTM3NmJk HTTP/1.1\r\n
    Content-Length: 922337203685478\r\n

brew fails when installing

Getting the following when trying to install s5cmd on mac os x 10.14.6,
brew version :
Homebrew 2.2.1
Homebrew/homebrew-core (git revision 7bbf; last commit 2019-12-12)

I first get this error :
`brew install s5cmd
Error: Formulae found in multiple taps:
* peakgames/s5cmd/s5cmd
* peak/s5cmd/s5cmd

Please use the fully-qualified name (e.g. peakgames/s5cmd/s5cmd) to refer to the formula.`

If I then try

`brew install peak/s5cmd/s5cmd
==> Installing s5cmd from peak/s5cmd
==> Downloading https://github.com/peak/s5cmd/archive/v0.6.0.tar.gz
Already downloaded: /Users/xcclifto/Library/Caches/Homebrew/downloads/764786c429df6e3f007dd53961a9e7778fb4d11fc19113ed0880794d4c8
eb669--s5cmd-0.6.0.tar.gz
==> go build -o /usr/local/Cellar/s5cmd/0.6.0/bin/s5cmd
Last 15 lines from /Users/xcclifto/Library/Logs/Homebrew/s5cmd/01.go: /private/tmp/s5cmd-20191212-45238-1woudal/s5cmd-0.6.0/src/github.com/peak/s5cmd/vendor/github.com/peakgames/s5cmd/complet
e (vendor tree)
/usr/local/Cellar/go/1.13.4/libexec/src/github.com/peakgames/s5cmd/complete (from $GOROOT)
/private/tmp/s5cmd-20191212-45238-1woudal/s5cmd-0.6.0/src/github.com/peakgames/s5cmd/complete (from $GOPATH)
s5cmd.go:20:2: cannot find package "github.com/peakgames/s5cmd/core" in any of:
/private/tmp/s5cmd-20191212-45238-1woudal/s5cmd-0.6.0/src/github.com/peak/s5cmd/vendor/github.com/peakgames/s5cmd/core (v
endor tree)
/usr/local/Cellar/go/1.13.4/libexec/src/github.com/peakgames/s5cmd/core (from $GOROOT)
/private/tmp/s5cmd-20191212-45238-1woudal/s5cmd-0.6.0/src/github.com/peakgames/s5cmd/core (from $GOPATH)
s5cmd.go:21:2: cannot find package "github.com/peakgames/s5cmd/stats" in any of:
/private/tmp/s5cmd-20191212-45238-1woudal/s5cmd-0.6.0/src/github.com/peak/s5cmd/vendor/github.com/peakgames/s5cmd/stats (vendor tree) /usr/local/Cellar/go/1.13.4/libexec/src/github.com/peakgames/s5cmd/stats (from $GOROOT)
/private/tmp/s5cmd-20191212-45238-1woudal/s5cmd-0.6.0/src/github.com/peakgames/s5cmd/stats (from $GOPATH)
s5cmd.go:22:2: cannot find package "github.com/peakgames/s5cmd/version" in any of:
/private/tmp/s5cmd-20191212-45238-1woudal/s5cmd-0.6.0/src/github.com/peak/s5cmd/vendor/github.com/peakgames/s5cmd/version
(vendor tree)
/usr/local/Cellar/go/1.13.4/libexec/src/github.com/peakgames/s5cmd/version (from $GOROOT)
/private/tmp/s5cmd-20191212-45238-1woudal/s5cmd-0.6.0/src/github.com/peakgames/s5cmd/version (from $GOPATH)

If reporting this issue please do so to (not Homebrew/brew or Homebrew/core):
peak/s5cmd

Traceback (most recent call last):
26: from /usr/local/Homebrew/Library/Homebrew/build.rb:194:in <main>' 25: from /usr/local/Homebrew/Library/Homebrew/build.rb:113:in install'
24: from /usr/local/Homebrew/Library/Homebrew/utils.rb:478:in with_env' 23: from /usr/local/Homebrew/Library/Homebrew/build.rb:116:in block in install'
22: from /usr/local/Homebrew/Library/Homebrew/formula.rb:1128:in brew' 21: from /usr/local/Homebrew/Library/Homebrew/formula.rb:2036:in stage'
20: from /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.6.3/lib/ruby/2.6.0/forwardable.rb:230:in stage' 19: from /usr/local/Homebrew/Library/Homebrew/resource.rb:75:in stage'
18: from /usr/local/Homebrew/Library/Homebrew/resource.rb:95:in unpack' 17: from /usr/local/Homebrew/Library/Homebrew/resource.rb:171:in mktemp'
16: from /usr/local/Homebrew/Library/Homebrew/mktemp.rb:57:in run' 15: from /usr/local/Homebrew/Library/Homebrew/mktemp.rb:57:in chdir'
14: from /usr/local/Homebrew/Library/Homebrew/mktemp.rb:57:in block in run' 13: from /usr/local/Homebrew/Library/Homebrew/resource.rb:172:in block in mktemp'
12: from /usr/local/Homebrew/Library/Homebrew/resource.rb:100:in block in unpack' 11: from /usr/local/Homebrew/Library/Homebrew/formula.rb:2060:in block in stage'
10: from /usr/local/Homebrew/Library/Homebrew/utils.rb:478:in with_env' 9: from /usr/local/Homebrew/Library/Homebrew/formula.rb:2061:in block (2 levels) in stage'
8: from /usr/local/Homebrew/Library/Homebrew/formula.rb:1133:in block in brew' 7: from /usr/local/Homebrew/Library/Homebrew/build.rb:145:in block (2 levels) in install'
6: from /usr/local/Homebrew/Library/Taps/peak/homebrew-s5cmd/Formula/s5cmd.rb:20:in install' 5: from /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.6.3/lib/ruby/2.6.0/fileutils.rb:128:in cd'
4: from /usr/local/Homebrew/Library/Homebrew/vendor/portable-ruby/2.6.3/lib/ruby/2.6.0/fileutils.rb:128:in chdir' 3: from /usr/local/Homebrew/Library/Taps/peak/homebrew-s5cmd/Formula/s5cmd.rb:23:in block in install'
2: from /usr/local/Homebrew/Library/Homebrew/formula.rb:1864:in system' 1: from /usr/local/Homebrew/Library/Homebrew/formula.rb:1864:in open'
/usr/local/Homebrew/Library/Homebrew/formula.rb:1927:in `block in system': Failed executing: go build -o /usr/local/Cellar/s5cmd/
0.6.0/bin/s5cmd (BuildError)

9: from /usr/local/Homebrew/Library/Homebrew/brew.rb:38:in <main>' 8: from /usr/local/Homebrew/Library/Homebrew/brew.rb:141:in rescue in

'
7: from /usr/local/Homebrew/Library/Homebrew/exceptions.rb:425:in dump' 6: from /usr/local/Homebrew/Library/Homebrew/exceptions.rb:371:in issues'
5: from /usr/local/Homebrew/Library/Homebrew/exceptions.rb:375:in fetch_issues' 4: from /usr/local/Homebrew/Library/Homebrew/utils/github.rb:293:in issues_for_formula'
3: from /usr/local/Homebrew/Library/Homebrew/utils/github.rb:279:in search_issues' 2: from /usr/local/Homebrew/Library/Homebrew/utils/github.rb:388:in search'
1: from /usr/local/Homebrew/Library/Homebrew/utils/github.rb:213:in open_api' /usr/local/Homebrew/Library/Homebrew/utils/github.rb:259:in raise_api_error': Validation Failed: [{"message"=>"The listed users
and repositories cannot be searched either because the resources do not exist or you do not have permission to view them.", "reso
urce"=>"Search", "field"=>"q", "code"=>"invalid"}] (GitHub::ValidationFailedError)`

The authorization header is malformed

ERROR "AuthorizationHeaderMalformed: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-west-2' status code: 400, request id: 21F0039B......, host id: 8JtyS7dK5tOHL2owqHw6kKtpvD.........+c=" %!v(MISSING)

I am trying to run this command AWS_REGION=us-west-2 s5cmd cp .. inside a Kubernetes pod, but I am getting the above error, though aws configure is set correctly and the keys are in env variables. Any suggestion on what I may be doing wrong?
s5cmd version = v0.7.0-2a123c

Note - Just noticed the ls command works fine though

Upload to S3 always adds metadata "content-type: binary/octet-stream" instead of default per filetype.

The question is: is there a way to control the metadata/content type of the upload? It seems to default to uploading as content-type: binary/octet-stream.

The goal is to either be able to set the metadata content type, or make S3 use the default content type per filetype. When using the painfully slow aws cli, aws s3 cp or aws s3 sync will automatically add a correct content type to the metadata. This is critical for us, as we use S3 to host websites.

I would be happy to look into a PR, but could use a hint or two to where to modify.

This could originate from the difference between using aws s3api and aws s3. I believe this tool is using s3api?

Keep ObjectURL origin

While expanding a given ObjectURL in the storage.List method, keep a reference to the original expandable form.

For example;

originalSrc: s3://bucket/prefix/obj*
expandedSrc: s3://bucket/prefix/object1.gz

expandedSrc.Origin() should return originalSrc.

Remove operation on non-existent file return success

tmp  Δ aws s3 ls 's3://bucket/notfound'
tmp  Δ echo $?
1
tmp  Δ s5cmd rm 's3://bucket/notfound'
2020/02/25 13:08:38 +OK "rm s3://bucket/notfound"
tmp  Δ echo $?
0
tmp  Δ

Removing a non-existent object returns a 0 exit code. Also, the log is misleading: it says the worker finished successfully, but that's not what we're actually interested in.

I expect the status code to be a non-zero value. Also, I don't need the +OK log, because it's not what actually happened.

Workers gets stuck after processing a single command when processing a list of commands

I have a file "commands.txt" with a list of commands like this:

cp -n s3://bucket1/file1 s3://bucket2/file1
cp -n s3://bucket1/file2 s3://bucket2/file2
cp -n s3://bucket1/file3 s3://bucket2/file3
...

When I call s5cmd like

s5cmd -numworkers 2 -f commands.txt

I see output like

2020/02/17 23:24:47 # Using 2 workers
2020/02/17 23:24:47 +OK "cp s3://bucket1/file1 s3://bucket2/file1"
2020/02/17 23:24:47 +OK "cp s3://bucket1/file2 s3://bucket2/file2"

and then it gets stuck, until I hit Ctrl-C and see the following output

2020/02/17 23:23:31 # Got signal, cleaning up...
2020/02/17 23:23:31 # Exiting with code 0
2020/02/17 23:23:31 # Stats: S3 3 0 ops/sec
2020/02/17 23:23:31 # Stats: Total 3 0 ops/sec 1m19.084946532s

The first 2 files are actually copied correctly; the rest of the commands from the file are not executed. Same with taking the commands from standard input.

I just installed the latest version of s5cmd - v0.7.0.

Deleting all the objects from the second bucket works fine with a command like:

s5cmd rm s3://bucket2/*

Any suggestions for workaround?

Thanks.

`s5cmd ls -e` output is broken

Listing a prefix/object mix with the etag parameter results in unaligned output.

Δ s5cmd ls -e 's3://bucket/tmp/a/'
                                      DIR b/
2020/03/16 13:02:07   d41d8cd98f00b204e9800998ecf8427e            0 file2.txt

Add option for md5 check

s5cmd does not have an option to check file contents for S3 operations. It has flags to check the modification time and size of files:

-s         Only overwrite if size differs
-u         Only overwrite if source file/object is newer (update)

However, these arguments do not ensure content matching. That can be done by verifying the ETags of S3 objects.

Correct exit status for ls command

s5cmd ls s3://path/to/non-existent/object exits with code 0. This means the command cannot be used to check whether an object exists in a bucket or not.

missing build instructions / binary releases

Please provide some instructions on how to build this on Linux - even after setting up the Go path and trying go build ., it's not installing a binary - or just supply a binary via releases.

Unexpected subdirectory structure when running cp

I'm running:

AWS_REGION=my-region /home/ubuntu/go/bin/s5cmd cp -u -s --parents s3://my-bucket/my-subdirectory/.local/* /home/ubuntu/.local/

I'm expecting the contents from .local inside my subdirectory to be copied into /home/ubuntu/.local/, instead, they are getting copied to /home/ubuntu/.local/my-subdirectory/.local

Is this expected behaviour? As per the command option documentation, the directory structure is created from the first wildcard onwards, and I recall it working like that in previous versions of s5cmd.

Please advise

Retry on AWS "NoCredentialProviders" error

When AWS credentials are provided by EC2 or Kube IAM roles, operations can fail with the error below. Retrying the same operations works as expected. This seems like an AWS SDK issue. A temporary solution would be to retry on the "NoCredentialProviders" error code.

NoCredentialProviders: no valid providers in chain. Deprecated. For verbose messaging see aws.Config.CredentialsChainVerboseErrors

fcntl: too many open files

I'm on MacOS 10.15.3 and I'm trying to upload to S3 a folder that contains 2616 folders with 1 to 10 files each.

With s5cmd -stats -r 0 -vv cp -n --parents <src> <dest> I immediately see this error:

VERBOSE: wildOperation lister is done with error: fcntl: too many open files

but the uploads seem to proceed, though generally fewer than 100 files end up uploaded.
Stats output:

2020/02/12 15:05:24 # All workers idle, finishing up...
2020/02/12 15:05:24 # Stats: S3             119   52 ops/sec
2020/02/12 15:05:24 # Stats: Failed         130   57 ops/sec
2020/02/12 15:05:24 # Stats: Total          249  109 ops/sec 2.282740841s

If I run the same command with the -numworkers 16 option, the copy ends without errors and all files are correctly uploaded to S3

~$ ulimit -H -n
unlimited

~$ ulimit -S -n
256

~$ launchctl limit maxfiles
maxfiles    10240          10240

Use with S3-compatible service

Hi,
Can't seem to find a way to set the URL in order to use this with an S3-compatible service/server that is not AWS.
If there isn't any way to do that, for example with a command-line flag, would you consider a pull request implementing that feature?

Task queue blocks on too many wildcard operations

Let me start by saying that s5cmd is a big improvement over aws and s3cmd. And wildcard support is by far the best feature of s5cmd.

However, it seems that s5cmd (release v0.6.0) gets stuck when many wildcard commands are submitted using a file or stdin. As soon as the number of wildcard (batch) operations is equal to or larger than numworkers, nothing is actually downloaded. It looks like all workers are busy handling wildcards, waiting for the actual put/get operations to finish, but no workers are available to perform those put/get operations.

This would not be a problem if the number of workers could be set arbitrarily high. And technically it can. But we found that on, say, an i3.2 AWS instance (8 vCPUs), download speed is significantly reduced with 256 workers compared to 16 or 32 workers. One can get around this by submitting listing commands in one file, then making a long list of non-batch operations in another file.

cp local to s3 failing - S3 url should start with s3://

When running command

AWS_REGION=eu-west-1 s5cmd cp -u -s --parents /tmp/backup s3://backup/x/y
2019/11/05 14:57:26 -ERR "cp -u -s --parents /tmp/backup s3://backup/x/y": Invalid parameters to "cp": S3 url should start with s3://

fails with missing s3:// but this is not missing?

Support for kms encryption?

In the vanilla aws cli, one would copy objects that are encrypted with a kms key as follows:
aws s3 cp s3://foo s3://bar --sse aws:kms

Is there something equivalent in s5cmd? Or does the tool currently not support encryption?

PS
This is an amazing tool, especially the auto-completion functionality!

cp -R option is confusing

The cp command's -R option works on local files (as source) only. When downloading, a wildcard is always recursive, so there's no real need for an -R option. Maybe add a dummy/no-op -R option to the "from-remote" cp command?

Currently, having -R in an s3-sourced cp command fails with the confusing error message "File param resembles s3 object", as the command is parsed as having a local source (since -R is not an option for remote-sourced files).

Another fix would be to reverse the parsing order so that arguments get parsed first and options later. Since S3 URLs are expected to always start with the s3:// scheme, this would select the correct command and fail with a "no such option" error message, which is much clearer.
