
aws-nuke's People

Contributors

bethge, bjoernhaeuser, dependabot[bot], der-eismann, ekristen, ga-paul-t, github-actions[bot], guillermo-menjivar, hligit, hmalphettes, hv202x1, jami, jbmchuck, kurtmc, mikeschouw, mrkenkeller, mrprimate, nelsonjchen, optplx, rbroemeling, richardneililagan, sambattalio, sas-pemcne, sstoops, stephanlindauer, steved, svenwltr, swhite-oreilly, tomvachon, tylersouthwick


aws-nuke's Issues

Endpoints aren't being applied fully

I'm finding places where the operations are not supported in a given region or the API endpoint doesn't exist. Maybe it's not merged or I missed some guidance, but I thought the endpoint discovery in the core code was supposed to fix this.

Release v2.0.0

I want to release v2.0.0 soon, since there are quite a lot of issues which are already solved.

@rebuy-de/prp-aws-nuke Do you see any problems which we still need to tackle before doing the release?

/cc @tomvachon Also, is there anything important we have missed?

runtime error - MacOS

$ aws-nuke-v1.2.1-darwin-amd64 -c aws-nuke-config.yml --profile aws-enterprise-uat

us-east-1 - RDSDBSubnetGroup - 'apiture-stacks-component-rds-databasesubnetgroup-17hxceov4kkcc' - would remove
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x185dbf6]

goroutine 53 [running]:

Listing MQBroker failed

Hi,

Received the following error when using aws-nuke (binary release).

Version: aws-nuke-v2.0.0.alpha1-darwin-amd64

Error message:

    !!! RequestError: send request failed
    !!! caused by: Get https://mq.ap-northeast-1.amazonaws.com/v1/brokers?maxResults=100: dial tcp 92.242.132.24:443: getsockopt: connection refused 

Parallelise API requests

Currently aws-nuke is rather slow, because it does every request sequentially. This means the process is sleeping most of the time. Also, the AWS API can probably handle a lot more concurrent requests (unfortunately I did not find any reliable documentation about this).

By parallelising the API requests we can speed up aws-nuke a lot.
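
A rough sketch of the idea (not aws-nuke's actual code; the Lister type here is made up for illustration): run every lister in its own goroutine and cap the number of AWS calls in flight with a buffered channel.

package main

import (
	"fmt"
	"sync"
)

// Lister stands in for a resource lister; the real aws-nuke interface differs.
type Lister func() ([]string, error)

// scanParallel runs all listers concurrently, with at most maxParallel
// requests in flight at any time.
func scanParallel(listers []Lister, maxParallel int) []string {
	var (
		mu        sync.Mutex
		wg        sync.WaitGroup
		resources []string
	)
	semaphore := make(chan struct{}, maxParallel)

	for _, lister := range listers {
		wg.Add(1)
		go func(l Lister) {
			defer wg.Done()
			semaphore <- struct{}{}        // acquire a slot
			defer func() { <-semaphore }() // release it when done

			items, err := l()
			if err != nil {
				fmt.Println("lister failed:", err)
				return
			}
			mu.Lock()
			resources = append(resources, items...)
			mu.Unlock()
		}(lister)
	}

	wg.Wait()
	return resources
}

func main() {
	listers := []Lister{
		func() ([]string, error) { return []string{"i-0123456789"}, nil },
		func() ([]string, error) { return []string{"sg-abcdef"}, nil },
	}
	fmt.Println(scanParallel(listers, 10))
}

Rate limits would still need retries/backoff on top of this, but even a modest cap like 10 concurrent requests should cut the wall-clock time considerably.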

Unable to remove S3 buckets if versioning is enabled

As per the title, when versioning is enabled on an S3 bucket containing files, aws-nuke is unable to remove the old versions.
Instead it shows this error message:
"BucketNotEmpty: The bucket you tried to delete is not empty. You must delete all versions in the bucket.
status code: 409, request id: XXXXXXXXXXXXX, host id: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"

Several SDK Services - Credential Scope Issues

I'm unable to add cleanup for the autoscalingplans API

ERRO[0003] Listing with resources.ResourceLister failed. Please report this to https://github.com/rebuy-de/aws-nuke/issues/new.
!!! InvalidSignatureException: Credential should be scoped to correct service: 'autoscaling-plans'.
!!! status code: 400, request id: 5dbf75f6-217c-11e8-8a21-5b29c4644711
ERRO[0003] Listing with resources.ResourceLister failed. Please report this to https://github.com/rebuy-de/aws-nuke/issues/new.
!!! InvalidSignatureException: Credential should be scoped to correct service: 'autoscaling-plans'.
!!! status code: 400, request id: 5dda299d-217c-11e8-a778-f18a042972d7
ERRO[0003] Listing with resources.ResourceLister failed. Please report this to https://github.com/rebuy-de/aws-nuke/issues/new.
!!! InvalidSignatureException: Credential should be scoped to correct service: 'autoscaling-plans'.
!!! status code: 400, request id: 5e0f437a-217c-11e8-9aff-8bbaa412de1c
Scan complete: 0 total, 0 nukeable, 0 filtered.

My code is here if you see something: https://github.com/tomvachon/aws-nuke/blob/feature/autoscalingplans/resources/autoscalingplans-scalingplans.go

Segfault on large account

When running aws-nuke against a large account with many resources, it dies during resource enumeration (specifically Lambda functions) with this error:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xc4ca02]

goroutine 37 [running]:
github.com/rebuy-de/aws-nuke/resources.(*RDSNuke).ListInstances(0xc4202021e0, 0xc420054b40, 0xc4215bdfb0, 0xfacb01, 0xca7ed2, 0xc420189101)
/go/src/github.com/rebuy-de/aws-nuke/resources/rds-instances.go:26 +0x152
github.com/rebuy-de/aws-nuke/resources.(*RDSNuke).ListInstances-fm(0xd759e0, 0xc420054b40, 0xc4215bdfb0, 0xe, 0x0)
/go/src/github.com/rebuy-de/aws-nuke/resources/listers.go:95 +0x2a
github.com/rebuy-de/aws-nuke/cmd.Scan.func1(0xc420116180, 0xc4204003b0, 0xc420054b40)
/go/src/github.com/rebuy-de/aws-nuke/cmd/scan.go:22 +0x77
created by github.com/rebuy-de/aws-nuke/cmd.Scan
/go/src/github.com/rebuy-de/aws-nuke/cmd/scan.go:39 +0x9e

Attribute name is a reserved keyword

I encountered this today:

us-east-2 - LambdaFunction - 'devenv-meta' - would remove
us-east-2 - LambdaFunction - 'devenv-carrier-read' - would remove

=============

Listing with resources.ResourceLister failed:

ValidationException: Invalid ProjectionExpression: Attribute name is a reserved keyword; reserved keyword: Path
	status code: 400, request id: 0EHE5HMS8R4PT0RJN55I0K2L43VV4KQNSO5AEMVJF66Q9ASUAAJG

Please report this to https://github.com/rebuy-de/aws-nuke/issues/new.

=============

us-east-2 - DynamoDBTable - 'devenv-us-east-2' - would remove
us-east-2 - DynamoDBTable - 'devenv.sequences' - would remove

My config:

# aws-nuke configuration to exclude base-environment resources from deletion

regions:
  - "us-east-2"
  - "us-east-1"
  - "ap-south-1"
  - "ap-southeast-2"
  - "eu-west-1"

account-blacklist:
- "999999999999" # required dummy value for aws-nuke to run

accounts:
  "738525243859":
    filters:
      IAMUser:
      - "AWSAdmin"
      IAMUserPolicyAttachment:
      - "AWSAdmin -> AdministratorAccess"
      IAMUserAccessKey:
      - "AWSAdmin -> AKIAJVRMKJPZLU4YLKSA"
      IAMRole:
      - "AWSServiceRoleForOrganizations"

      # default vpc resources

      EC2VPC:
      - "vpc-19bd6a7f" # eu-west-1
      EC2Subnet:
      - "subnet-c7182e9c" # eu-west-1
      - "subnet-30996178" # eu-west-1
      - "subnet-03946165" # eu-west-1
      EC2RouteTable:
      - "rtb-995943ff" # eu-west-1
      EC2DHCPOption:
      - "dopt-f57b5e92" # eu-west-1
      EC2NetworkACL:
      - "acl-8b8ff5ed" # eu-west-1
      EC2SecurityGroup:
      - "sg-c6e93bbc" # eu-west-1
      EC2InternetGateway:
      - "igw-706cd517" # eu-west-1
      EC2InternetGatewayAttachment:
      - "igw-706cd517 -> vpc-19bd6a7f" # eu-west-1

Add support for regex in resources filter

Problem

Currently the resource filter only supports exact string matches of the resource identifier (e.g. s3://my-bucket/blubber.txt). We should also support more flexible filter options. A first big step would be support for regular expressions. Later we might want to add additional resource-specific filter methods which could rely on data that is not part of the resource identifier (e.g. based on a tag or the AZ of an EC2 instance, or the access keys of a specific user).

Proposal

My proposal for the configuration file is to optionally allow specifying a map instead of a string in the filter. This map should contain a type and, optionally, the data for the filter.

Example

Currently it looks like this (which should still work in the future):

accounts:
  "00000000000":
    filters:
      S3Bucket:
      - "s3://my-bucket/bish.txt"
      - "s3://my-bucket/bash.txt"
      - "s3://my-bucket/bosh.txt"

It should also be possible to write it like this:

accounts:
  "00000000000":
    filters:
      S3Bucket:
      - type: exact
        id: "s3://my-bucket/bish.txt"
      - type: exact
        id: "s3://my-bucket/bash.txt"
      - type: exact
        id: "s3://my-bucket/bosh.txt"

Using a regex could look like this:

accounts:
  "00000000000":
    filters:
      S3Bucket:
      - type: regex
        id: "s3://my-bucket/b[iao]sh\\.txt"

Alternatives

Alternatively, the map could just contain a single type/value pair, like this:

accounts:
  "00000000000":
    filters:
      S3Bucket:
      - regex: "s3://my-bucket/b[iao]sh\\.txt"

But I think this would make things ambiguous. What if someone specified multiple types in the same filter? How should we handle extra data (in case of an extension to allow resource-specific filters)?


Please discuss.
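
To make the proposal concrete, here is a rough sketch (type and field names are assumed, not taken from the aws-nuke code base) of a filter that unmarshals either a plain string or a type/id map, so the old syntax keeps working:

package main

import (
	"fmt"
	"regexp"

	"gopkg.in/yaml.v2"
)

type Filter struct {
	Type string `yaml:"type"`
	ID   string `yaml:"id"`
}

// UnmarshalYAML keeps the old plain-string syntax working by treating a
// bare string as an exact-match filter.
func (f *Filter) UnmarshalYAML(unmarshal func(interface{}) error) error {
	var id string
	if err := unmarshal(&id); err == nil {
		f.Type, f.ID = "exact", id
		return nil
	}

	type plain Filter // avoid recursing into UnmarshalYAML
	return unmarshal((*plain)(f))
}

func (f Filter) Match(identifier string) (bool, error) {
	switch f.Type {
	case "exact":
		return f.ID == identifier, nil
	case "regex":
		return regexp.MatchString(f.ID, identifier)
	default:
		return false, fmt.Errorf("unknown filter type %q", f.Type)
	}
}

func main() {
	doc := `
- "s3://my-bucket/bish.txt"
- type: regex
  id: "s3://my-bucket/b[iao]sh\\.txt"
`
	var filters []Filter
	if err := yaml.Unmarshal([]byte(doc), &filters); err != nil {
		panic(err)
	}
	for _, f := range filters {
		ok, _ := f.Match("s3://my-bucket/bosh.txt")
		fmt.Printf("%+v matches: %v\n", f, ok)
	}
}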

Delete all versions in S3 bucket

Deleting versioned S3 buckets fails if all versions are not deleted prior to bucket deletion:

us-east-2 - S3Bucket - 's3://application-state' - BucketNotEmpty: The bucket you tried to delete is not empty. You must delete all versions in the bucket.
	status code: 409, request id: 166516D767C46F79, host id: xezvJzBCa84wcRsXRndei9W8Vz9iqDgZZ/Yad5LZfMQL2i9p4LRNfu8wLcR9AMWWCsMJf1A+Sdk=

Delete dependents first

Some AWS resources cannot be deleted before their dependent resources are deleted. Is it possible to add the ability to remove resources in a specific order, to ensure dependents are removed before their dependencies?

us-east-1 - IamRole - 'storage-v2-jdabilio-IamRoleLambda-129UJS1B57Z30' - DeleteConflict: Cannot delete entity, must delete policies first.
	status code: 409, request id: 6e327e18-bd22-11e7-995f-afa65320bd29
us-east-1 - IamRole - 'storage-v2-jdabilio-IamRoleLambda-137J19IBXLR69' - DeleteConflict: Cannot delete entity, must delete policies first.
	status code: 409, request id: 6e5fa903-bd22-11e7-995f-afa65320bd29
us-east-1 - IamRole - 'storage-v2-jdabilio-ScalingRole-N7NQ5CCTG0TK' - DeleteConflict: Cannot delete entity, must delete policies first.
	status code: 409, request id: 6e8d482f-bd22-11e7-995f-afa65320bd29
us-east-1 - IamRole - 'storage-v2-strgdel-IamRoleLambda-397JQ1MQUGNX' - DeleteConflict: Cannot delete entity, must delete policies first.

I know of dependency-graph in the npm ecosystem, but not of an equivalent for Go.

Missing Resource: AMI

Along with the snapshots, we will need to delete the AMIs. These have dependencies on the snapshots, so retries will be needed.
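
For reference, a sketch of the two API calls a resource handler would roughly need (AWS SDK for Go; aws-nuke's own retry handling is omitted):

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := ec2.New(sess)

	// Only images owned by this account ("self") can be deregistered.
	out, err := svc.DescribeImages(&ec2.DescribeImagesInput{
		Owners: []*string{aws.String("self")},
	})
	if err != nil {
		log.Fatal(err)
	}

	for _, image := range out.Images {
		log.Printf("deregistering %s", aws.StringValue(image.ImageId))
		_, err := svc.DeregisterImage(&ec2.DeregisterImageInput{ImageId: image.ImageId})
		if err != nil {
			log.Printf("deregister failed (may need a retry): %v", err)
		}
	}
}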

Error when running script - without no-dry-run parameter

Awesome script, but I got the following error when running it:

=============

Listing with resources.ResourceLister failed:

AccessDenied: Access Denied
status code: 403, request id: 221279D5AEAA9865, host id: UDu8naECiceHFqf0MaRNWauGjWzCHoe+bZOPKENizG3OTcJe9LE+tCViZsv5gYut6W4NQ96mMSE=

Please report this to https://github.com/rebuy-de/aws-nuke/issues/new.

=============

=============

Listing with resources.ResourceLister failed:

AccessDenied: Access Denied
status code: 403, request id: B95637B9B6988946, host id: WBJrDH6j6B7Xjzalz/zeAYZQA8aoB1d1yDmMKu9OVjUzIXJyFWLsRHouRsWW7RZXZSa2k//xulQ=

Please report this to https://github.com/rebuy-de/aws-nuke/issues/new.

=============

The script does complete, though it fails to detect the S3 buckets and objects. The account the script runs under has full admin access, and I can see the S3 buckets with an "aws s3 ls" command.

Reparent AWS-Nuke Project

@svenwltr and @bethge, this project has been getting quite a bit of traction lately. It might be time to reparent it to its own org on GitHub. I'd be happy to help out on the project admin side, but I doubt you want to give me access while it's under the reBuy org.

Thoughts?

support STS login

I use aws-vault to log in to AWS.
As such there are no AWS access keys, only temporary STS tokens.

Using --profile Default works, but it is not as intuitive.

Listing AutoScalingPlansScalingPlan failed

Hi,

Received the following error when running aws-nuke, binary release version.

Version: aws-nuke-v2.0.0.alpha1-darwin-amd64

Error message:

ERRO[0297] Listing AutoScalingPlansScalingPlan failed. Please report this to https://github.com/rebuy-de/aws-nuke/issues/new.
    !!! InvalidSignatureException: Credential should be scoped to correct service: 'autoscaling-plans'. 
    !!! 	status code: 400, request id: 97980bff-4249-11e8-9ded-ed931840eb58 

Include AWS CLI Profile in config file per account

Let's say I have 20 AWS accounts, each of which uses a different AWS CLI profile stored in ~/.aws/config. It would be awesome if I could specify in my config file, per account, which AWS CLI profile to use, so I don't have to loop through running aws-nuke with 20 different config files.

[BUG] Global Region Creates Race Conditions for Unknown Services

When we don't know a service's endpoint, the current code determines it as both "global" and n * regions. As a result, the system tries to delete both at once; the problem arises when there is an implicit parent-child dependency.

For example: using MediaStoreData to delete each item in a MediaStore container requires an endpoint which is unique per container, and containers are also regionally isolated. When the "global" or "regional" delete occurs on the container, the listers keep trying to iterate and it causes a segfault on a nil value reference.

global - MediaStoreContainer - 'sadfdsgdsfgdsfgdsfg' - would remove
global - MediaStoreDataItems - 'ServicesToBlacklist.txt' - would remove
us-east-1 - MediaStoreContainer - 'sadfdsgdsfgdsfgdsfg' - would remove
us-east-1 - MediaStoreDataItems - 'ServicesToBlacklist.txt' - would remove
Scan complete: 4 total, 4 nukeable, 0 filtered.

Do you really want to nuke these resources on the account with the ID 99999999999 and the alias 'itsawsaccount-sandbox1'?
Do you want to continue? Enter account alias to continue.
> itsawsaccount-sandbox1

global - MediaStoreContainer - 'sadfdsgdsfgdsfgdsfg' - ContainerInUseException:
	status code: 400, request id: ES3S7MAXLS3OFWOSUEZH2VDDL24AZTO5NMY3MSM6TXC7SZHSQXF7NCFDFJSUPISXUIUGA53QUJETU2BSLFX7ILQ
global - MediaStoreDataItems - 'ServicesToBlacklist.txt' - triggered remove
us-east-1 - MediaStoreContainer - 'sadfdsgdsfgdsfgdsfg' - triggered remove
us-east-1 - MediaStoreDataItems - 'ServicesToBlacklist.txt' - ObjectNotFoundException: Object not found.
	status code: 404, request id: YNP7NR3DVES4G3GV2MI63GOMFAC5XLH3Y2SKWBKXOXKSGXWGDRFH62LW5U6PYL7NR6XCLL2YBLJKNF2IRMT4Y7A

Removal requested: 2 waiting, 2 failed, 0 skipped, 0 finished

global - MediaStoreContainer - 'sadfdsgdsfgdsfgdsfg' - ContainerInUseException:
	status code: 400, request id: 6LJILOE4ZER5BIP6GCPLVOJS5A377BKXRRP55KMGYYIGRAYEG6YHODVHMKJTYHQFZ5E6IHFNQW4X5BNW7CPAR7I
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x1e8a90e]

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x1e8a90e]

goroutine 1 [running]:
github.com/rebuy-de/aws-nuke/resources.ListMediaStoreDataItems(0xc4204cc480, 0xc4201f0000, 0x2718bfa, 0x13, 0xc4201fb758, 0x90)
	/Users/thv596/gopath/src/github.com/rebuy-de/aws-nuke/resources/mediastoredata-items.go:54 +0x56e

That code is here: https://github.com/tomvachon/aws-nuke/blob/feature/mediastore/resources/mediastoredata-items.go#L54

My proposal

If we cannot determine an endpoint for a given service, we should assume it is regional, to prevent race conditions such as the one above.

open source aws-nuke

  • Pick a “unique” name
  • Add Licence
  • Add README
    • One-line description (#30)
    • Usage (#30)
    • Installing (#30)
    • Fancy badges (#32)
    • Development status (#30)
    • Contribution notes (#30)
    • Contact channels (#30)
  • Add GitHub Topics
  • Disable unused GitHub features (eg Projects, Wiki)
  • Disable squash merging, to honor external contributions
  • Protect master branch
    • Require pull request reviews before merging
      • Require review from Code Owners
      • Dismiss stale pull request approvals when new commits are pushed
    • Require status checks to pass before merging
  • Make project publicly usable
    • no reBuy specific code
    • no dependencies to reBuy infrastructure (eg Jenkins, ECR) (#31)
  • Use semver versions
  • Add Changelog
    • proper GitHub release descriptions might be enough
  • Add Proper issue labels
  • Pin project in rebuy-de organization

  • add proper example config

Include multipart upload objects

I don't believe these are currently caught. I'm not sure how to handle this other than forcing a lifecycle policy if the bucket fails to delete...
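
One possible approach, sketched with the AWS SDK for Go (the bucket name is hypothetical), is to list and abort any in-progress multipart uploads before emptying the bucket:

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := s3.New(sess)
	bucket := aws.String("my-bucket") // hypothetical bucket name

	err := svc.ListMultipartUploadsPages(&s3.ListMultipartUploadsInput{Bucket: bucket},
		func(page *s3.ListMultipartUploadsOutput, lastPage bool) bool {
			for _, upload := range page.Uploads {
				// Aborting discards all parts uploaded so far.
				_, err := svc.AbortMultipartUpload(&s3.AbortMultipartUploadInput{
					Bucket:   bucket,
					Key:      upload.Key,
					UploadId: upload.UploadId,
				})
				if err != nil {
					log.Printf("abort failed for %s: %v", aws.StringValue(upload.Key), err)
				}
			}
			return true // continue to the next page
		})
	if err != nil {
		log.Fatal(err)
	}
}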

Tag Support?

I'm wondering if there might be value in having tag support in aws-nuke. Picture a tag with a timestamp value. From there, you could configure aws-nuke to remove everything more than X days old, plus an exception list.

Auto-cleanup of dev accounts suddenly becomes trivial with the right automation in place to ensure those tags get placed / propagated.
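
The age check itself would be tiny; a sketch with a hypothetical created-at tag in RFC 3339 format:

package main

import (
	"fmt"
	"time"
)

// shouldNuke is a made-up helper: given the value of a hypothetical
// "created-at" tag, decide whether the resource is older than the allowed age.
func shouldNuke(createdAtTag string, maxAge time.Duration) (bool, error) {
	createdAt, err := time.Parse(time.RFC3339, createdAtTag)
	if err != nil {
		return false, err
	}
	return time.Since(createdAt) > maxAge, nil
}

func main() {
	ok, err := shouldNuke("2018-01-01T00:00:00Z", 14*24*time.Hour)
	fmt.Println(ok, err)
}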

New Feature: Re-create Default VPC

Hi there,

this removes the default VPC (which we want to keep). Since AWS has made it easier of late, could the tool issue the create-default-vpc API call at the end for a set of specific regions?
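
For what it's worth, the EC2 API now exposes CreateDefaultVpc, so the final step could be as small as this sketch (the region list is purely illustrative):

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	regions := []string{"us-east-1", "eu-west-1"} // regions to restore, as an example

	for _, region := range regions {
		sess := session.Must(session.NewSession(aws.NewConfig().WithRegion(region)))
		svc := ec2.New(sess)

		// CreateDefaultVpc rebuilds the default VPC with its subnets and route table.
		out, err := svc.CreateDefaultVpc(&ec2.CreateDefaultVpcInput{})
		if err != nil {
			log.Printf("%s: %v", region, err)
			continue
		}
		log.Printf("%s: created %s", region, aws.StringValue(out.Vpc.VpcId))
	}
}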

S3 Buckets not listed by aws-nuke

Hi,

I've likely misunderstood how to configure aws-nuke; however, I've looked and everything appears correct to me. When I list the nukeable resources for my AWSAdmin profile there are no S3 buckets listed, yet listing buckets with the aws-cli returns results that aws-nuke doesn't. What is the possible cause of this discrepancy?

$ ./aws-nuke --config ~/.config/aws-nuke --profile AWSAdmin --no-dry-run

aws-nuke version v1.2.0 - Fri Sep 22 09:08:55 UTC 2017 - c6a7e876160e41915f9ddea00e801ca42507fd32

Do you really want to nuke the account with the ID 738525243859 and the alias 'developer'?
Do you want to continue? Enter account alias to continue.
> developer

us-east-1 - IamRolePolicyAttachement - 'AWSServiceRoleForOrganizations -> AWSOrganizationsServiceTrustPolicy' - would remove
us-east-1 - IamRole - 'awsdev-jacksond.jackson.delahunt' - filtered by config
us-east-1 - IamRole - 'AWSServiceRoleForOrganizations' - would remove
us-east-1 - IamRole - 'storage-v2-deltest-IamRoleLambda-1AU0GW2E61MB7' - would remove
us-east-1 - IamRole - 'storage-v2-deltest-IamRoleLambda-E6UOC555LZI1' - would remove
us-east-1 - IamRole - 'storage-v2-deltest-ScalingRole-1UV0H2HNB9OS9' - would remove
us-east-1 - IamRole - 'storage-v2-deltest-ScalingRole-6DV59BIFN55O' - would remove
us-east-1 - IamRole - 'service-registry-deltest-eu-west-1-lambdaRole' - would remove
us-east-1 - IamRole - 'service-registry-RegistryCreateExecutionR-YDWTINNDIOY6' - would remove
us-east-1 - IamRole - 'service-registry-TopicCreateExecutionRole-D5FJAGQSTJA9' - would remove
us-east-1 - IamUserAccessKeys - 'AWSAdmin -> AKIAJVRMKJPZLU4YLKSA' - filtered by config
us-east-1 - IamUserPolicyAttachement - 'AWSAdmin -> AdministratorAccess' - filtered by config
us-east-1 - IamUser - 'AWSAdmin' - filtered by config
us-east-1 - KMSAlias - 'alias/aws/dynamodb' - cannot delete AWS alias
us-east-1 - KMSAlias - 'alias/aws/ebs' - cannot delete AWS alias
us-east-1 - KMSAlias - 'alias/aws/elasticfilesystem' - cannot delete AWS alias
us-east-1 - KMSAlias - 'alias/aws/es' - cannot delete AWS alias
us-east-1 - KMSAlias - 'alias/aws/lambda' - cannot delete AWS alias
us-east-1 - KMSAlias - 'alias/aws/rds' - cannot delete AWS alias
us-east-1 - KMSAlias - 'alias/aws/redshift' - cannot delete AWS alias
us-east-1 - KMSAlias - 'alias/aws/s3' - cannot delete AWS alias
us-east-1 - KMSAlias - 'alias/aws/ssm' - cannot delete AWS alias
us-east-1 - KMSKey - '919f8ec8-de3a-4a37-bc12-cf0f841d74de' - cannot delete AWS managed key
Scan complete: 23 total, 9 nukeable, 14 filtered.

Do you really want to nuke these resources on the account with the ID 738525243859 and the alias '-developer'?
Do you want to continue? Enter account alias to continue.
> ^C

$ AWS_PROFILE=AWSAdmin aws s3 ls

2017-11-02 16:46:56 instrumentation-gateway-serverlessdeploymentbuck-1w8nw0dzv5sy2
2017-10-30 19:04:39 storage-v2-deltest-backup-ap-southeast-2
2017-10-30 18:59:56 storage-v2-deltest-backup-eu-west-1
2017-10-30 18:55:31 storage-v2-deltest-backup-us-east-2
2017-10-30 19:04:04 storage-v2-deltest-serverlessdeploymentbucket-1b5tn2usljmjp
2017-10-30 18:59:11 storage-v2-deltest-serverlessdeploymentbucket-jtpp4pw8xdys
2017-10-30 18:46:16 service-registry-deltest-state
2017-10-30 18:44:38 service-registry-serverlessdeploymentbuck-1sp84cw2ptgyv
2017-10-30 19:10:51 swagger-registry-deltest-storage

Listing SNSEndpoint failed

Hi,

Error when running aws-nuke, binary release version.

Version: aws-nuke-v2.0.0.alpha1-darwin-amd64

Error message:

ERRO[0198] Listing SNSEndpoint failed. Please report this to https://github.com/rebuy-de/aws-nuke/issues/new.
    !!! InvalidParameter: 1 validation error(s) found.
    !!! - missing required field, ListEndpointsByPlatformApplicationInput.PlatformApplicationArn.

Contribution Question

What would you like to have provided in any pull request? Do you want output from the run, etc.? I have a whole SLEW of services I'm going to add, since I understand this well enough to do a majority of them (thanks for making this go-idiot proof, by the way).

Protect an S3Bucket and all its contents

Would be great if aws-nuke could allow one to protect an S3Bucket and all its contents with minimal configuration.

Perhaps by allowing a wildcard, so that s3://bucket-name/* would catch all the S3Objects in bucket-name.
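
As a sketch of how such a wildcard could be evaluated (the helper name is made up), one option is to translate the pattern into an anchored regular expression:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// wildcardToRegexp converts a simple glob-style pattern (only '*' is
// supported) into an anchored regular expression.
func wildcardToRegexp(pattern string) *regexp.Regexp {
	escaped := regexp.QuoteMeta(pattern)             // escape regex metacharacters
	expr := strings.Replace(escaped, `\*`, `.*`, -1) // re-introduce '*' as ".*"
	return regexp.MustCompile("^" + expr + "$")
}

func main() {
	filter := wildcardToRegexp("s3://bucket-name/*")
	fmt.Println(filter.MatchString("s3://bucket-name/some/object.txt")) // true
	fmt.Println(filter.MatchString("s3://other-bucket/object.txt"))     // false
}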

Thanks for the useful tool.

Filters seem in error

aws-nuke deletes items identified in the filter in us-east-1, us-east-2, ap-southeast-1, and ap-southeast-2,

with nuke-config.yml as

regions:
- ap-south-1
- eu-west-2
- eu-west-1
- ap-northeast-2
- ap-northeast-1
- sa-east-1
- ca-central-1
- ap-southeast-1
- ap-southeast-2
- eu-central-1
- us-east-1
- us-east-2
- us-west-1
- us-west-2

account-blacklist:
- "999999999999" # production

accounts:
  "000000000000": # aws-nuke-example
    filters:
      IAMUser:
      - "my-user"
      IAMUserPolicyAttachment:
      - "my-user -> AdministratorAccess"
      IAMUserAccessKey:
      - "my-user -> ABCDEFGHIJKLMNOPQRST

output to a text file as aws-nuke -c nuke-config.yml --profile xyz --force > output.txt

cat output.txt | grep "would remove" | grep "my-user"

shows that it would delete the IAMUser, IAMUserPolicyAttachment, and IAMUserAccessKey for each of the regions noted above.

With --no-dry-run it does in fact delete the user, and once that is done it of course errors out.

brew formula

Would it be possible to make a brew formula, so it's easy to auto-update?

resource-types exclude/include in config not working

Adding an include or exclude filter via resource-types in the config file seems to have no effect.

Resources are checked and removed anyway.

example config:

resource-types:
  include:
  - foo
  exclude:
  - S3Bucket

I've added a printf after the yaml.Unmarshal in config.go and the returned values are empty:

ResourceTypes:{Targets:[] Excludes:[]}}] ResourceTypes:{Targets:[] Excludes:[]}

I guess this is because types.Collection is used for these values and there seems to be no custom unmarshaler for this type.
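
A rough illustration of the missing piece; the type here is simplified to a []string and the yaml keys are assumed, so this only shows the mechanism:

package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

type Collection []string

// UnmarshalYAML reads a plain YAML sequence of strings into the Collection.
func (c *Collection) UnmarshalYAML(unmarshal func(interface{}) error) error {
	var items []string
	if err := unmarshal(&items); err != nil {
		return err
	}
	*c = Collection(items)
	return nil
}

type ResourceTypes struct {
	Targets  Collection `yaml:"include"`
	Excludes Collection `yaml:"exclude"`
}

func main() {
	doc := `
include:
- foo
exclude:
- S3Bucket
`
	var rt ResourceTypes
	if err := yaml.Unmarshal([]byte(doc), &rt); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", rt) // {Targets:[foo] Excludes:[S3Bucket]}
}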

elasticfilesystem listing fails

I run nukes in lots of regions, attempting to essentially factory-reset a testing account. My output is full of successful executions but also lots of errors, and one of the biggest categories is always around elasticfilesystem. For example:

    !!! RequestError: send request failed
    !!! caused by: Get https://elasticfilesystem.eu-west-3.amazonaws.com/2015-02-01/file-systems: dial tcp: lookup elasticfilesystem.eu-west-3.amazonaws.com on 8.8.8.8:53: no such host

[Performance] Global Items are inspected in every region

For services which we know are global (e.g. IAM), they should only be inspected and deleted in one region (or three, if you want to be exact).

For normal usage, it should be deleted via the first region in the list. We should encourage users to put their highest-usage region at the top for obvious reasons.

Scenario 2 would include GovCloud, so we should ensure that if someone runs this against GovCloud, that region is also interrogated for IAM (or other "global" services).

Like Scenario 2, the CN region should also be investigated separately, in addition to the first "public" region in their list.

Lookup mobile.eu-central-1.amazonaws.com on 127.0.0.53:53: no such host

Hi,

when I try to nuke an AWS account I get this output at the end of the scan for resources:

ERRO[0031] Listing MobileProject failed. Please report this to https://github.com/rebuy-de/aws-nuke/issues/new.
    !!! RequestError: send request failed
    !!! caused by: Get https://mobile.eu-central-1.amazonaws.com/projects?maxResults=100: dial tcp: lookup mobile.eu-central-1.amazonaws.com on 127.0.0.53:53: no such host 
$ nmap 127.0.0.53 -p53

Starting Nmap 7.60 ( https://nmap.org ) at 2018-06-19 14:20 CEST
Nmap scan report for localhost (127.0.0.53)
Host is up (0.00012s latency).

PORT   STATE SERVICE
53/tcp open  domain
$ host mobile.eu-central-1.amazonaws.com
Host mobile.eu-central-1.amazonaws.com not found: 3(NXDOMAIN)

Seems like I cannot resolve that Host. Should I?

Running via assumed role

I am trying to run aws-nuke from my master AWS account, which is blacklisted, but from which I am assuming a role into the target account I want nuked. When I try to do this (via the --profile option), I get this:

Error: You are trying to nuke the account with the ID XXXXXXXXXX, but it is blacklisted. Aborting.

I think this error is wrong though, since I am only using the blacklisted account to assume a role into the account I actually want to nuke.

Delete all versions of s3_object

When aws-nuke wants to delete a bucket with versioned s3_objects inside, it yields this error message:

BucketNotEmpty: The bucket you tried to delete is not empty. You must delete all versions in the bucket.
	status code: 409, request id: XXXXXX, host id: XXXXXX

aws-nuke should be able to remove those versioned s3_objects.
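
A sketch of what that would involve with the AWS SDK for Go (the bucket name is hypothetical): page through every object version and delete marker, delete each one, and only then delete the bucket itself.

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-2")))
	svc := s3.New(sess)
	bucket := aws.String("my-versioned-bucket") // hypothetical bucket name

	// Page through every object version and delete marker, deleting each one.
	err := svc.ListObjectVersionsPages(&s3.ListObjectVersionsInput{Bucket: bucket},
		func(page *s3.ListObjectVersionsOutput, lastPage bool) bool {
			for _, v := range page.Versions {
				if _, err := svc.DeleteObject(&s3.DeleteObjectInput{
					Bucket: bucket, Key: v.Key, VersionId: v.VersionId,
				}); err != nil {
					log.Printf("delete version failed: %v", err)
				}
			}
			for _, m := range page.DeleteMarkers {
				if _, err := svc.DeleteObject(&s3.DeleteObjectInput{
					Bucket: bucket, Key: m.Key, VersionId: m.VersionId,
				}); err != nil {
					log.Printf("delete marker failed: %v", err)
				}
			}
			return true // continue to the next page
		})
	if err != nil {
		log.Fatal(err)
	}

	// Only now can the bucket itself be removed.
	if _, err := svc.DeleteBucket(&s3.DeleteBucketInput{Bucket: bucket}); err != nil {
		log.Fatal(err)
	}
}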

Feature request: Support LogGroups

After aws-nukeing our resources, we are still left with LogGroups. Can you please add support for removing LogGroups?

  An error occurred: RegistryUpdateLogGroup - /aws/lambda/storage-v2-develop-registryUpdate already exists.
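
A minimal sketch of the calls a LogGroup resource would need (AWS SDK for Go; the real implementation would plug into aws-nuke's lister/remover interfaces):

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatchlogs"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := cloudwatchlogs.New(sess)

	err := svc.DescribeLogGroupsPages(&cloudwatchlogs.DescribeLogGroupsInput{},
		func(page *cloudwatchlogs.DescribeLogGroupsOutput, lastPage bool) bool {
			for _, group := range page.LogGroups {
				name := aws.StringValue(group.LogGroupName)
				log.Printf("deleting log group %s", name)
				_, err := svc.DeleteLogGroup(&cloudwatchlogs.DeleteLogGroupInput{
					LogGroupName: group.LogGroupName,
				})
				if err != nil {
					log.Printf("delete failed: %v", err)
				}
			}
			return true // continue to the next page
		})
	if err != nil {
		log.Fatal(err)
	}
}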

Deleting policy Operators fails

If a policy has more than one version, the nuke script is not able to handle it.

IamPolicy - 'arn:aws:iam::863709161118:policy/Operators' - DeleteConflict: This policy has more than one version. Before you delete a policy, you must delete the policy's versions. The default version is deleted with the policy.
        status code: 409, request id: 3257759b-c20b-11e6-86b4-6139370e013f
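
For illustration, a sketch of the order of operations that would handle this (AWS SDK for Go; the ARN is the one from the error above, used purely as an example):

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iam"
)

func main() {
	// IAM is global; the region is only needed for request signing.
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := iam.New(sess)
	arn := aws.String("arn:aws:iam::863709161118:policy/Operators")

	versions, err := svc.ListPolicyVersions(&iam.ListPolicyVersionsInput{PolicyArn: arn})
	if err != nil {
		log.Fatal(err)
	}

	// Non-default versions must go first; the default version is removed
	// together with the policy itself.
	for _, v := range versions.Versions {
		if aws.BoolValue(v.IsDefaultVersion) {
			continue
		}
		_, err := svc.DeletePolicyVersion(&iam.DeletePolicyVersionInput{
			PolicyArn: arn,
			VersionId: v.VersionId,
		})
		if err != nil {
			log.Fatal(err)
		}
	}

	if _, err := svc.DeletePolicy(&iam.DeletePolicyInput{PolicyArn: arn}); err != nil {
		log.Fatal(err)
	}
}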

Resource test harness

The more edge cases we run into, the stronger the argument for integration tests. Even if it is rather involved to bootstrap the resource and validate deletion, the effort would outweigh the cost even in the rather short term, especially for popular AWS services like S3.

@svenwltr and I have been whiteboarding a bit and would:

  • use terraform to create the resource under test. We would add a <resource>.tf file alongside every <resource>.go with the resource declaration and a <resource>_test.go file for test execution.
  • nuke it with --target flag
  • Validate deletion of the resource via output of terraform plan

I would stitch together a prototype to see if this is a viable solution.

Very open to suggestions.

Add resource filters to config

It would be nice to have a filter for the targets [include/exclude] to get selective about which targets to nuke:

AutoScalingGroup
CloudFormationStack
CloudTrailTrail
CloudWatchEventsRule
CloudWatchEventsTarget
EC2Address
EC2DhcpOption
EC2Instance
EC2InternetGateway
EC2InternetGatewayAttachement
EC2KeyPair
EC2NetworkACL
EC2RouteTable
EC2SecurityGroup
EC2Subnet
EC2Volume
EC2Vpc
EC2VpcEndpoint
ELBv2TargetGroup
KMSAlias
KMSKey
LambdaFunction
LaunchConfiguration
RDSDBSubnetGroup
SNSSubscription
SNSTopic

I was able to use a quick for loop and invoke via a script as aws-nuke -c config.yml --profile myuser --target $i --force --no-dry-run &, but it would be nice to be able to filter by service in the config.yml to decide what to nuke or not.

Unable to delete objects in a folder with empty name ("")

Hi,

first: I love this tool. I found it today and it very effectively protected me from unintentionally deleting some roles, and it nuked all the experimentally created resources. Very good job!

I found a small bug:

It is possible to create folders in S3 which have an empty folder name. aws-nuke tries to delete the files in such a folder again and again.

Example output:

eu-central-1 - S3Object - 's3://xyz-dev//eu-central-1/dev/xyz-infrastructure-ecs-2017-12-08-09-22-47-132549Z.json' - waiting

As you can see, between the bucket name "xyz-dev" and the second-level folder name "eu-central-1" there is an empty folder.

I had to delete this folder manually, and it works now.

Thanks again!

DNS Error

This is a run with errors and private data redacted.

The DNS lookups fail at consistent intervals. I've included the full log with the errors inline, so that you might be able to correlate the timing of the errors with a certain part of the resource deletion sequence.

aws-nuke v1.3.0

Take AWS Keys/Tokens from Environment

Hi, IIUC if I don't want to use --profile I have to specify the AWS access key ID, secret access key and session token explicitly, like:

$ aws-nuke-v2.0.1-linux-amd64 -c <someconfig> --access-key-id <foo> --secret-access-key <bar> --session-token <baz>

It did not seem to work to have those values implicitly taken from the usual environment variables: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN. Supporting that would simplify the usage of this great tool massively.

Would you consider this? If I get pointed in the right direction I could provide a PR myself. Thx, Christian.
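
For what it's worth, the AWS SDK for Go already reads these variables; a sketch of building a session strictly from the environment (the STS call is just a sanity check):

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sts"
)

func main() {
	// Explicitly use the environment variables only, ignoring config files.
	sess := session.Must(session.NewSession(&aws.Config{
		Region:      aws.String("us-east-1"), // example region
		Credentials: credentials.NewEnvCredentials(),
	}))

	// Quick sanity check: who am I with these credentials?
	out, err := sts.New(sess).GetCallerIdentity(&sts.GetCallerIdentityInput{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("authenticated as %s", aws.StringValue(out.Arn))
}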

SagemakerNotebookInstance not deleted

Hi,

with a targets configuration like:

  • SageMakerNotebookInstanceState
  • SageMakerNotebookInstance
  • SageMakerModel
  • SageMakerEndpoint
  • SageMakerEndpointConfig

notebooks are stopped but not deleted. I've tried removing SageMakerNotebookInstanceState from the targets, but then the instance is not deleted because it's not in the stopped state.

Regards
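
A sketch of the stop-then-delete sequence such a remover would need (AWS SDK for Go; the instance name is made up):

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sagemaker"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := sagemaker.New(sess)
	name := aws.String("example-notebook") // hypothetical instance name

	// Stop the instance first; DeleteNotebookInstance only succeeds once the
	// instance has left the InService state.
	if _, err := svc.StopNotebookInstance(&sagemaker.StopNotebookInstanceInput{
		NotebookInstanceName: name,
	}); err != nil {
		log.Printf("stop: %v", err)
	}

	// Wait until the instance has actually stopped.
	if err := svc.WaitUntilNotebookInstanceStopped(&sagemaker.DescribeNotebookInstanceInput{
		NotebookInstanceName: name,
	}); err != nil {
		log.Fatal(err)
	}

	if _, err := svc.DeleteNotebookInstance(&sagemaker.DeleteNotebookInstanceInput{
		NotebookInstanceName: name,
	}); err != nil {
		log.Fatal(err)
	}
}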
