simonw / s3-credentials
A tool for creating credentials for accessing S3 buckets
Home Page: https://s3-credentials.readthedocs.io
License: Apache License 2.0
list-user-policies doesn't output JSON at all; it has a weird custom output, so I'm leaving it for the moment.
Originally posted by @simonw in #48 (comment)
With similar options to create, except this only ever outputs policy JSON to standard out.
I can use this for #36.
Originally posted by @simonw in #39 (comment)
Splitting this into a separate issue mainly so I can clearly document how to use Litestream in the comments here.
Goal is to confirm that S3 credentials created using s3-credentials create ... --prefix litestream-test/
can be used with Litestream to back up a SQLite database to that path within the bucket.
#42 would be better with a command for showing the policy (also need that for the README).
In #51 I created the litestream-test-20220117 bucket and the s3.read-write.litestream-test-20220117 user. Need to delete those again.
I believe this will work even if you don't have GetUser permission.
Saw this in https://aws-blog.de/2021/08/iam-what-happens-when-you-assume-a-role.html
https://docs.aws.amazon.com/STS/latest/APIReference/API_GetCallerIdentity.html says:
No permissions are required to perform this operation. If an administrator adds a policy to your IAM user or role that explicitly denies access to the sts:GetCallerIdentity action, you can still perform this operation. Permissions are not required because the same information is returned when an IAM user or role is denied access.
It looks like it may be possible to create policies that only allow users to read and write files with a specified S3 path prefix: https://aws.amazon.com/premiumsupport/knowledge-center/iam-s3-user-specific-folder/
Supporting that as a feature - maybe with a --prefix foo/bar option - could be really neat.
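The knowledge-center pattern could translate into something like this sketch. The function name and exact action list are illustrative assumptions, not s3-credentials' actual output:

```python
import json

# Sketch of a prefix-restricted read-write policy, following the AWS
# knowledge-center pattern linked above.
def prefix_policy(bucket, prefix):
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Listing is a bucket-level action, so restrict it with
                # an s3:prefix condition rather than a key ARN
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": [f"arn:aws:s3:::{bucket}"],
                "Condition": {"StringLike": {"s3:prefix": [f"{prefix}*"]}},
            },
            {
                # Object actions are restricted by the key ARN itself
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": [f"arn:aws:s3:::{bucket}/{prefix}*"],
            },
        ],
    }

print(json.dumps(prefix_policy("my-bucket", "foo/bar/"), indent=2))
```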
Similar to list-users from #4. Useful so that if you've been playing with the tool you can easily see what buckets have been created.
See #11 (comment) for context.
The read-write policy currently uses "Action": "s3:*Object" - and the read-only one uses "Action": "s3:GetObject*".
This is pretty gross - surely explicitly listing the allowed actions is better practice?
I want to be able to see what the tool is going to do (including the policy documents) without actually calling the AWS APIs.
I'd like to be able to use this tool to easily upload and download from the S3 buckets, without having to switch to (and install) a separate tool.
https://www.reddit.com/r/aws/comments/qlu3ag/comment/hj68pmv/ pointed out simulate_custom_policy(), an API method that lets you try out custom policies: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/iam.html#IAM.Client.simulate_custom_policy
Must be some useful stuff I can do with this. A CLI command for testing policies with it would be fantastic.
The initial reason for creating this tool was that I wanted to be able to create long-lived (never expiring) tokens for the kinds of use-cases described in this post: https://simonwillison.net/2021/Nov/3/s3-credentials/
Expiring credentials are fantastic for all sorts of other use-cases. It would be great if this tool could optionally create those instead of creating long-lived credentials.
This would mean the tool didn't have to create users at all (when used in that mode) - it could create a role and then create temporary access credentials for that role using sts.assume_role(): https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sts.html#STS.Client.assume_role
The tests for this project currently run against mocks - which is good, because I don't like the idea of GitHub Action tests hitting real APIs.
But... this project is about building securely against AWS. As such, automated tests that genuinely exercise a live AWS account (and check that the resulting permissions behave as expected) would be incredibly valuable for growing my confidence that this tool works as advertised.
These tests would need quite a high level of administrative access, because they need to be able to create users, roles etc.
I don't like the idea of storing my own AWS administrator account credentials in a GitHub Actions secret though. I think I'll write these tests such that they can be run outside of GitHub Actions, maybe configured via environment variables that allow other project contributors to run tests against their own accounts.
There is enough useful logic in here that it would be good to have it work as a stable, documented Python library (similar to sqlite-utils).
The logic in the create command is the most interesting here.
For --read-only and --write-only: integration tests for these combos would be good too.
Need to pick the actions I'm going to bake into that policy. Spun out from #15.
Current policy is:
s3-credentials/s3_credentials/policies.py
Lines 20 to 35 in 6e56b1b
Since policies are SO important I don't even want people to have to go and read the policies.py file - so I'm going to embed examples of generated policies as an appendix at the bottom of the README, using this pattern: https://til.simonwillison.net/python/cog-to-update-help-in-readme
Need to pick the actions I'm going to bake into that policy. Spun out from #15.
Current policy is:
s3-credentials/s3_credentials/policies.py
Lines 38 to 48 in 6e56b1b
This can be considered a subset of:
Feels useful to have.
boto3 recipe:

```python
import boto3

iam = boto3.client('iam')
paginator = iam.get_paginator('list_users')
for response in paginator.paginate():
    print(response)
```
I already have delete-user - this would be a similar utility but for deleting buckets. Mainly so I don't have to remember how to do it with awscli.
And add a new --csv option.
Originally posted by @simonw in #48 (comment)
E At index 4 diff: "call().get_user_policy(PolicyName='policy-one', UserName='one')" != "call().get_user_policy(UserName='one', PolicyName='policy-one')"
Looks like there's no guarantee of the ordering of the parameters when they are pretty-printed here:
s3-credentials/tests/test_s3_credentials.py
Lines 112 to 117 in a8e10fb
For setting the bucket-level CORS policy on newly created (or maybe even existing?) buckets.
This is the command which creates a user and returns credentials for a specified bucket, optionally creating the bucket as well.
See initial design notes in #1.
Now that I'm building time-limited credentials in #27 it's getting pretty inconvenient to pass them as the --access-key and --secret-key and --session-key arguments.
I'm going to support a new option called --credentials which, if provided, is treated as the path (or - for stdin) to a JSON or INI file containing credentials.
The idea is that this will work:

```
% s3-credentials create mybucket --duration 15m > creds.json
% s3-credentials list-bucket mybucket --credentials=creds.json
```
See #26 for the research. It looks like the way to do this is:
1. Check that a role with the arn:aws:iam::aws:policy/AmazonS3FullAccess policy exists - if it does not, create it. It needs to have a known name - I propose using s3-credentials.AmazonS3FullAccess here, and also populating the Description field. The role needs to be assumable by the current account, see the AssumeRolePolicyDocument example in #26 (comment).
2. Call sts.assume_role() against that role, passing in as a policy the same inline policy document used for non-expiring credentials, using the code in policies.py.
3. Return the AccessKeyId, SecretAccessKey AND the SessionToken - all three are needed to make authenticated calls.

This is just a small suggestion, because I recently worked with boto3.
boto3 includes a stubber. The stubber verifies that your mocked returns conform to the structure of the actual response and has some nice features to check the input of the caller.
https://botocore.amazonaws.com/v1/documentation/api/latest/reference/stubber.html
Mainly useful for ease of testing that the temporary credentials created in #27 actually work.
Based on #11 I'm now thinking that there is value in applying custom policies - since that way people can tweak the policies used and share them with others.
Maybe a --policy policy.json option would be useful?
One challenge: the need to hard-code the name of the bucket into that policy. So perhaps it supports the absolute dumbest template system ever, like literally replacing $!BUCKET_NAME!$ in the JSON with the name of the bucket.
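That "dumbest template system ever" is essentially a one-line string replacement; a sketch (apply_bucket_template is a hypothetical helper name):

```python
import json

# Literal-marker templating: substitute $!BUCKET_NAME!$ wherever it
# appears in the user-supplied policy JSON.
def apply_bucket_template(policy_text, bucket):
    return json.loads(policy_text.replace("$!BUCKET_NAME!$", bucket))

policy = """{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::$!BUCKET_NAME!$/*"
  }]
}"""
print(apply_bucket_template(policy, "my-bucket")["Statement"][0]["Resource"])
# prints arn:aws:s3:::my-bucket/*
```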
Before 1.0 I want to have stable output formats - in particular for the create command.
I want to provide:
~/.AWS
https://github.com/simonw/s3-credentials/blob/main/s3_credentials/policies.py
My suggestions:
- specify individual actions explicitly (no wildcards)
- separate permissions by resource (Buckets vs. Objects)
- Sid is unnecessary

Your read/write policy is good, but instead of *Object, list GetObject and PutObject.
Your read-only policy would be better written like your read/write policy: one section for the bucket permission (ListBucket), one for the object permission (which should be GetObject, no wildcard).
Your write-only policy is great as is.
You may want to add additional permissions to let clients set ACLs. But if it's all simple object-by-object stuff, these very simple policies are great.
Originally posted by @jdub in #7 (comment)
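Applying those suggestions, the read/write policy might look like this sketch (not the project's actual output; the function name is illustrative):

```python
# Explicit actions only, one statement for the bucket resource, one for
# its objects, and no Sid fields.
def read_write_policy(bucket):
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": [f"arn:aws:s3:::{bucket}"],
            },
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": [f"arn:aws:s3:::{bucket}/*"],
            },
        ],
    }
```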
Need to pick the actions I'm going to bake into that policy. Spun out from #15.
Current policy:
s3-credentials/s3_credentials/policies.py
Lines 1 to 17 in 6e56b1b
I wanted to clean up all of the users and buckets I made while testing this tool.
Deleting buckets is easy enough with the aws s3 tool:

```
aws s3 rb s3://simonw-test-bucket-10 --force
```
Deleting users is harder:

```
aws iam delete-user --user-name s3.read-only.simonw-test-bucket-11

An error occurred (DeleteConflict) when calling the DeleteUser operation:
Cannot delete entity, must delete policies first.
```
I'm going to build a s3-credentials delete-user command which deletes the inline policies first.
I got this working, but the files I upload with s3-credentials put-object all have Content-Type: binary/octet-stream.
Originally posted by @simonw in #42 (comment)
It would be useful to have an opt-in option for saying "this bucket should be configured as a website" - because setting that up without a tool is quite fiddly.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteAccessPermissionsReqd.html has the details:
When you configure a bucket as a static website, if you want your website to be public, you can grant public read access. To make your bucket publicly readable, you must disable block public access settings for the bucket and write a bucket policy that grants public read access.
See #20 for "block public access" setting, and #19 for bucket policies.
Require a flag to enable public access to the bucket rather than making the bucket public by default. See docs
Originally posted by @zacaytion in #7 (comment)
I'm not an AWS expert. I would feel a lot more comfortable if some AWS experts could review this tool and make sure that what it is doing makes sense and there are no unpleasant flaws in the approach it is taking.
I just spotted list-buckets has the same not-quite-newline-delimited JSON output format, which is a bad default. I should fix that too.
Originally posted by @simonw in https://github.com/simonw/s3-credentials/issues/28#issuecomment-1014838721
The S3 security best practices in https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html#security-best-practices-prevent suggest:
Use the ListBuckets API to scan all of your Amazon S3 buckets. Then use GetBucketAcl, GetBucketWebsite, and GetBucketPolicy to determine whether the bucket has compliant access controls and configuration.
list-buckets could do this with an extra --details option (since it adds 3 new API calls per bucket).
These will be used by all of the commands, as an optional alternative to the boto3 default (see #1 (comment)).
An optional flag for attaching bucket policies to the new S3 bucket. These are just like IAM user policies, but attached to the bucket itself.
Originally posted by @zacaytion in #7 (comment)
I need to research bucket policies to fully understand what kinds of things they are useful for and how they should be supported by this tool.
The goal of this tool is to provide a CLI for creating IAM access credentials - an access key and a secret key - that are restricted to either reading from a specific bucket, writing to a specific bucket or read/write to a specific bucket.
The goal is to never have to go through the manual process described in dogsheep/dogsheep-photos#4 ever again.
Refs #27. I think the README needs to very explicitly list out what the two different options do to your AWS account.
Originally posted by @zacaytion in #7 (comment)