s3wipe's Introduction

s3wipe

A rapid parallelized AWS S3 key & bucket deleter.

I was recently tasked with deleting a bucket in Amazon's Simple Storage Service (S3) that contained an absolutely massive number of files.

Unfortunately, Amazon does not give you an easy way to do this. Their web interface stalls indefinitely when you delete an "adequately large" number of files, and their CLI tool (aptly named "aws-cli") only deletes files in a single-threaded fashion (i.e. slowly).

After googling around a bit, I came across s3nukem (itself a fork of s3nuke), which looked like the solution to my problem. After a few minutes spent finding the 'right' version of RightAWS (the s3nukem code and README disagree on this), I had it up and running. However, a bit of back-of-the-napkin math suggested it would still take at least a month of running s3nukem before the bucket was gone.

So, I wrote s3wipe. S3wipe, as far as I know, is the only S3 key/bucket deletion tool that:

  • Does parallel, thread-based delete AND list operations (more speed)
  • Performs batch deletes (MOAR SPEED!)
  • Will delete versioned objects (MOAR... well, deletes)

Using s3wipe, I was able to delete 400 million S3 objects in about 24 hours.

Installation

This is just a single-file script, so you can download and run it directly. It does need a reasonably recent version of the "boto" Python module installed, though, so:

pip install boto

or

yum install python-boto

or

apt-get install python-boto

Then:

wget https://raw.github.com/eschwim/s3wipe/master/s3wipe
chmod 755 s3wipe
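
Assuming the script's Python shebang resolves on your system, you can sanity-check the download by printing the built-in help text (the same output shown in the Usage section below):

./s3wipe --help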

Using Docker

Clone the repo:

git clone git@github.com:eschwim/s3wipe.git
cd s3wipe

Build the Docker image:

docker build -t s3wipe:latest .

Then run the script:

docker run s3wipe:latest --help

Usage

usage: s3wipe [-h] --path PATH --id ID --key KEY [--dryrun] [--quiet]
              [--batchsize BATCHSIZE] [--maxqueue MAXQUEUE] [--delbucket]

Recursively delete all keys in an S3 path

optional arguments:
  -h, --help             show this help message and exit
  --path PATH            S3 path to delete (e.g. s3://bucket/path)
  --id ID                Your AWS access key ID
  --key KEY              Your AWS secret access key
  --dryrun               Don't delete. Print what we would have deleted
  --quiet                Suppress all non-error output
  --batchsize BATCHSIZE  # of keys to batch delete (default 100)
  --maxqueue MAXQUEUE    Max size of deletion queue (default 10k)
  --delbucket            If S3 path is a bucket path, delete the bucket also
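
For example (the bucket name and credentials here are purely illustrative), you might do a dry run against a prefix first, then wipe the whole bucket and remove it once it is empty:

./s3wipe --path s3://my-bucket/some/prefix --id AKIAEXAMPLEKEY --key MYSECRETKEY --dryrun
./s3wipe --path s3://my-bucket --id AKIAEXAMPLEKEY --key MYSECRETKEY --delbucket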

Changelog

v0.2

You can now delete all keys under an arbitrary S3 path, instead of only
entire buckets (although that is still an option, as well).

v0.1

Initial version.

s3wipe's People

Contributors

eschwim, dannysauer, geekpete, nicksantamaria, njam


s3wipe's Issues

Enable S3 bucket versioning before deleting files

Hey man,

I used your CLI to delete 170 TB of data and it worked fine, but there is one problem: on line 196 you enable bucket versioning, so in the end I didn't actually delete the 170 TB of data, I only added a delete marker to every file.
I'm not sure why you are doing that, because you can't delete a bucket that still has versioned files inside.
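
For context, truly emptying a versioned bucket means deleting every object version and delete marker, not just adding new delete markers on top of the current keys. A minimal boto 2 sketch of that idea (the bucket name and credentials are placeholders, and this is not the maintainer's actual code):

import boto

# Connect and open the bucket (fill in real credentials and bucket name).
conn = boto.connect_s3("MY_ACCESS_KEY_ID", "MY_SECRET_ACCESS_KEY")
bucket = conn.get_bucket("my-bucket")

# list_versions() yields both object versions and delete markers; removing
# each one by its version_id is what actually frees the storage and lets
# the bucket itself be deleted afterwards.
for version in bucket.list_versions():
    bucket.delete_key(version.name, version_id=version.version_id)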

batchsize and maxqueue recommendations?

Can we get a little more guidance on when to change batchsize and maxqueue from their defaults? When deleting millions of objects I just use the defaults and wait; it seems to adjust dynamically somehow. Could you add something to the README when you get a chance? Thanks. Awesome script, by the way!

Brian

Deleting s3://bucket/foo deletes s3://bucket/foo2

I was trying to delete a directory called foo, but if I passed s3://bucket/foo/ it would not proceed, and if I passed s3://bucket/foo it would also delete s3://bucket/foo2.

I worked around it by deleting s3://bucket/foo/1, s3://bucket/foo/2, and so forth for every first character.

Two possible solutions: allow trailing slashes, or assume a trailing slash after the input (i.e. don't delete partial paths); a sketch of the latter is below.
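
One way to implement that second suggestion, as a hypothetical sketch (not code from s3wipe), is to normalize the user-supplied path into a "directory" prefix before listing keys, so that foo can never match sibling keys such as foo2:

# Hypothetical sketch: force a trailing slash on the supplied prefix.
prefix = "foo"
if prefix and not prefix.endswith("/"):
    prefix += "/"
# Listing with the normalized prefix now matches keys under foo/ but not foo2/.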

Can't use buckets with . in them

Read up here: boto/boto#2836

I fixed this by adding this at the top of the script:

import ssl
if hasattr(ssl, '_create_unverified_context'):
    ssl._create_default_https_context = ssl._create_unverified_context
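
An alternative that avoids disabling certificate verification (my own suggestion, not something from the original issue) is to have boto use path-style addressing, which keeps the dotted bucket name out of the TLS hostname:

import boto
from boto.s3.connection import OrdinaryCallingFormat

# Path-style requests put the bucket name in the URL path rather than the
# hostname, so SSL certificate checks still pass for bucket names with dots.
conn = boto.connect_s3(calling_format=OrdinaryCallingFormat())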

Make the script pip-installable

So that I can declare it as a development dependency using Poetry.

There is an s3wipe package on PyPI, but it's not mentioned in the README, so I don't trust it.

Currently I'm working around it by committing a copy of the script to my own repo like this:

curl "https://raw.githubusercontent.com/eschwim/s3wipe/80cdb19655a4db48830c50969a43dfa36b657015/s3wipe" > bin/s3wipe
chmod +x bin/s3wipe

Minor bug: Float returned by listThreads

In line 208:
listThreads = args.maxthreads / 3

can return a float value which causes an error

For example:

INFO: Starting 66.66666666666666 delete threads...

Modify listThreads to force an integer value and it will work:
listThreads = int(args.maxthreads / 3)

Large amount of unneeded whitespace

s3wipe has a large number of lines with trailing spaces and a few lines that are only spaces -- this is pretty painful if your text editor is configured to highlight unneeded whitespace. Would a PR to clean this up be accepted?

Num threads is number of subdirectories

This is a problem if the number of subdirectories is high, exhausting the number of available file descriptors. Suggest setting a maximum, say --max-threads=100.
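
A hypothetical sketch of such a cap (the variable names are illustrative, not taken from s3wipe):

# Spawn one thread per subdirectory, but never more than MAX_THREADS, so a
# bucket with thousands of prefixes cannot exhaust the file descriptor limit.
MAX_THREADS = 100
subdirectory_count = 5000   # e.g. number of top-level prefixes discovered
num_threads = min(subdirectory_count, MAX_THREADS)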
