
Amazon S3 Upload Walk-through and Demo

Dependencies

  • aws-sdk
  • axios
  • express
  • uuid
  • react-spinners
  • react-dropzone
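
If they aren't already installed, all of the above can be pulled in with a single command:

npm install aws-sdk axios express uuid react-spinners react-dropzone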

.gitignore

BEFORE YOU DO ANYTHING ELSE BEYOND THIS POINT

  1. Go into the .gitignore file, add .env on a new line, then save.

    • We will be putting your S3 API keys in a .env file. If you don't add .env to your .gitignore and you push to GitHub, evil people will use your keys for their evil purposes at your expense.
  2. Double check and if necessary review step 1.

  3. Triple check and if necessary review step 1.

Failing to do this step could easily cost you $5,000/day. I wish I were kidding.
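
When you are done, your .gitignore should contain at least the following (a create-react-app project will typically already list node_modules; that entry is an assumption about your setup):

node_modules
.env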

Create a .env File

Create the file at the root of your project and add the following properties:

S3_BUCKET=
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=

In order for your back-end code to work, it is important that the property names in your .env are exactly as shown above.
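
For these values to actually show up on process.env, the server has to load the .env file at startup. The demo's server presumably does this with the dotenv package (an assumption; any .env loader will do):

// At the very top of server.js, before anything reads process.env
// Assumes the dotenv package: npm install dotenv
require('dotenv').config()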

Account Setup

If you haven't already signed up for an Amazon S3 account, you can do so here. S3 does cost money, so you will need to input a credit card. However, AWS offers 12 months of free tier service as long as you don't exceed your limits. See details here.

Once your account is set up, go to https://console.aws.amazon.com and log in.

Create a New User and Generate Access Keys

  1. Once you are on the home page, type 'IAM' in the search box and click on the link to IAM in the search results.

  2. AWS highly recommends that you delete your root access keys, since they provide complete control over all AWS products, and that you instead create a new IAM user with access restricted to specific products. In our case, we want to create a user with access restricted to S3.

    1. Click 'Delete your root access keys', then 'Manage security credentials', then 'Continue to security credentials'.
    2. In the actions column, click 'Delete', then 'Yes' in the confirmation box.
  3. Click 'Users' on the left navigation menu, then 'Add user'.

  4. Type a name for the user and check the 'Programmatic access' checkbox, then click 'Next: Permissions'.

  5. No changes are necessary on this screen, so click 'Next: Tags'.

  6. No changes are necessary on the next screen either, so just click 'Next: Review'.

  7. No changes are necessary here either, so click 'Create user'.

  8. The next screen gives us the Access Key ID and Secret Access Key for the user. Click 'Show' in the secret access key column.

  9. Copy and paste your Access Key ID and Secret Access Key into your .env.

    1. Copy and paste the value from the 'Access key ID' column into the AWS_ACCESS_KEY_ID= field of your .env.
    2. Copy and paste the value from the 'Secret access key' column into the AWS_SECRET_ACCESS_KEY= field of your .env.
  10. Click 'Close' at the bottom right corner of the success screen.

  11. Click on the name of the user that you just created.

  12. Copy the user ARN into a separate note-taking app. You can use your .env; just make sure not to put it in any file that will be committed to GitHub. You can keep a note in your .env by placing a # at the start of the line.
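
Your .env might now look something like this (the keys below are AWS's documentation examples, and the ARN is made up; none of these are real credentials):

S3_BUCKET=
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# user ARN: arn:aws:iam::123456789012:user/s3-demo-user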

Create a New Bucket

  1. Click the services dropdown on the top navbar. Search for S3, or find it under 'storage' in the menu. S3 should also be an option in the 'History' list on the left part of the dropdown menu.

  2. Click 'Create bucket'

  3. Give your bucket a name. Bucket names need to be globally unique, so it may take a few tries to find one that is available. Then select your region. The code in server.js assumes the bucket region is 'US West (N. California)' (us-west-1), so if you pick a different region you will need to change the region name in server.js.

  4. In step 2 of the prompt, we don't need to change anything so click 'Next'.

  5. Un-check the two public-access-blocking boxes on this screen. (The original walkthrough outlined them in yellow in a screenshot that is not included here.)

  6. On this screen, review your bucket details. This is probably a good time to copy your bucket name to your .env in the S3_BUCKET= field.

  7. Once you are finished, click 'Create bucket'

Configure Bucket Permissions

  1. On your S3 dashboard, click the name of your bucket.

  2. Click the 'Permissions' tab at the top.

  3. Click on 'Bucket policy'

  4. Paste the following into the policy editor:

    Starter Bucket Policy
    {
        "Version": "2012-10-17",
        "Id": "Policy1531943908491",
        "Statement": [
            {
                "Sid": "Stmt1531943904542",
                "Effect": "Allow",
                "Principal": {
                    "AWS": "COPY ARN FROM IAM CREATED USER HERE"
                },
                "Action": [
                    "s3:DeleteObject",
                    "s3:GetObject",
                    "s3:PutObject",
                    "s3:Get*",
                    "s3:Put*"
                ],
                "Resource": "arn:aws:s3:::NAME-OF-BUCKET/*"
            }
        ]
    }
    
  5. Two lines in this policy need to be changed in the JSON:

  6. Replace the placeholder on the Principal.AWS line with the ARN of the IAM user you created earlier.

  7. Replace the Resource value with your bucket's ARN, which is shown above the policy editor text box. After your bucket name, make sure to keep the /* before the closing quotation mark.

  8. Once you are finished, click 'Save'
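
For example, with the made-up user ARN from earlier and a hypothetical bucket named my-demo-bucket, the finished policy would read:

{
    "Version": "2012-10-17",
    "Id": "Policy1531943908491",
    "Statement": [
        {
            "Sid": "Stmt1531943904542",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:user/s3-demo-user"
            },
            "Action": [
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:PutObject",
                "s3:Get*",
                "s3:Put*"
            ],
            "Resource": "arn:aws:s3:::my-demo-bucket/*"
        }
    ]
}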

Update CORS Configuration

  1. Click on the 'CORS configuration' button at the top of the page

  2. Paste the following into the text box:

    CORS Configuration
    
    <CORSConfiguration>
        <CORSRule>
            <AllowedOrigin>*</AllowedOrigin>
            <AllowedMethod>GET</AllowedMethod>
            <AllowedMethod>POST</AllowedMethod>
            <AllowedMethod>PUT</AllowedMethod>
            <AllowedHeader>*</AllowedHeader>
        </CORSRule>
    </CORSConfiguration>
    
    
  3. Once you are finished, click 'Save'
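
Note that newer versions of the S3 console accept the CORS configuration as JSON rather than XML. If your console presents a JSON editor, the equivalent of the rules above would look like this:

[
    {
        "AllowedOrigins": ["*"],
        "AllowedMethods": ["GET", "POST", "PUT"],
        "AllowedHeaders": ["*"]
    }
]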

NOTE: The bucket policy and CORS configuration above are meant to get you up and running in development. Prior to using your bucket in a production environment, you should review the AWS S3 Documentation and determine the best CORS configuration and bucket policy for your situation based on what you learn. With proper implementation, the bucket policy and CORS configuration can limit your exposure to tragic situations caused by bad people gaining access to your bucket.

This Demo App Should Now Work

  1. Open one terminal and run nodemon

  2. Open a second terminal and run npm start

  3. If a browser window didn't open automatically, open one and navigate to http://localhost:3000

  4. You can now drag an image into the file drop zone, or click inside the square and select a picture to upload.

  5. You should then see a loading animation inside the drop zone while your file is uploaded to S3.

  6. If your upload is successful, you should see the placeholder URL text at the top of the page change, and shortly after you should see your uploaded image on the screen.

  7. You should now be able to go to your S3 bucket and see that your image is in the bucket. You may need to refresh your browser.

Code Walkthrough

App.js

In the return of the render method, we are using a package called react-dropzone, which can be installed in your project by running npm install react-dropzone. It is essentially a fancy <input type='file' />.

Dropzone

<Dropzone
    onDropAccepted={this.getSignedRequest}
    style={{
        position: 'relative',
        width: 200,
        height: 200,
        borderWidth: 7,
        marginTop: 100,
        borderColor: 'rgb(102, 102, 102)',
        borderStyle: 'dashed',
        borderRadius: 5,
        display: 'flex',
        justifyContent: 'center',
        alignItems: 'center',
        fontSize: 28,
    }}
    accept='image/*'
    multiple={false} >

    { this.state.isUploading
        ? <GridLoader />
        : <p>Drop File or Click Here</p>
    }

</Dropzone>
  • onDropAccepted= The function to run when an acceptable file is dropped. We have designated that function to be this.getSignedRequest, which is explained in the next section.
  • accept= Restricts which file types are allowed to be dropped in the dropzone.
  • multiple= false makes it so only one file can be dropped at a time. If you set this to true, you will need to refactor the code to iterate through the array of files.
  • The code between <Dropzone></Dropzone> is a ternary that renders either a loading animation or text, depending on the value of a boolean property on state. We toggle that value in the getSignedRequest and uploadFile methods.
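
Note that this JSX targets the react-dropzone version the demo was written against (see the Dropzone version note at the end of this document); newer releases replaced the style prop and direct children with a render-prop API. Here is an untested sketch of the same dropzone in that newer style, assuming the style object above has been extracted into a dropzoneStyles variable:

<Dropzone onDropAccepted={this.getSignedRequest} accept='image/*' multiple={false}>
    {({ getRootProps, getInputProps }) => (
        <div {...getRootProps()} style={dropzoneStyles}>
            <input {...getInputProps()} />
            { this.state.isUploading
                ? <GridLoader />
                : <p>Drop File or Click Here</p>
            }
        </div>
    )}
</Dropzone>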

getSignedRequest

getSignedRequest = ([file]) => {
   this.setState({isUploading: true})

   const fileName = `${randomString()}-${file.name.replace(/\s/g, '-')}`

   axios.get('/sign-s3', {
     params: {
       'file-name': fileName,
       'file-type': file.type
     }
   }).then( (response) => {
     const { signedRequest, url } = response.data 
     this.uploadFile(file, signedRequest, url)
   }).catch( err => {
     console.log(err)
   })
}
  1. This method receives the dropped files as an array. Here we destructure the parameter, which names the first item in the array 'file'.

  2. The function then generates a file name from a random string plus the original file name, using a regular expression to replace all of the whitespace with hyphens. (The randomString helper is sketched just after this list.)

  3. We then use axios to make a GET request to our server endpoint '/sign-s3'. The object in the second argument of axios.get() is a cleaner way to send query string parameters. The alternative would have been:

    axios.get(`/sign-s3?file-name=${fileName}&file-type=${file.type}`)

    But doesn't this look much cleaner?

    axios.get('/sign-s3', {
        params: {
            'file-name': fileName,
            'file-type': file.type
        }
    })

  4. At this point, the GET request is sent off to the server.
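
The randomString helper used above isn't defined anywhere in this walkthrough. Since uuid is in the dependency list, it is presumably just a UUID generator; a minimal sketch, assuming a recent version of the uuid package:

// Hypothetical helper; the real implementation isn't shown in this walkthrough.
// On older versions of uuid the import would be: const randomString = require('uuid/v4')
import { v4 as randomString } from 'uuid'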

server.js

const express = require('express');
const aws = require('aws-sdk');

// Load the .env file so the keys below exist on process.env
// (assumes the dotenv package is installed)
require('dotenv').config();

const {
    S3_BUCKET,
    AWS_ACCESS_KEY_ID,
    AWS_SECRET_ACCESS_KEY
} = process.env

const app = express();

app.get('/sign-s3', (req, res) => {

  aws.config.update({
    region: 'us-west-1',
    accessKeyId: AWS_ACCESS_KEY_ID,
    secretAccessKey: AWS_SECRET_ACCESS_KEY
  })
  
  const s3 = new aws.S3();
  const fileName = req.query['file-name'];
  const fileType = req.query['file-type'];
  const s3Params = {
    Bucket: S3_BUCKET,
    Key: fileName,
    Expires: 60,
    ContentType: fileType,
    ACL: 'public-read'
  };

  s3.getSignedUrl('putObject', s3Params, (err, data) => {
    if(err){
      console.log(err);
      return res.end();
    }
    const returnData = {
      signedRequest: data,
      url: `https://${S3_BUCKET}.s3.amazonaws.com/${fileName}`
    };

    return res.send(returnData)
  });
});
  1. Our server endpoint app.get('/sign-s3') receives the request we just made from App.js.
  2. We configure the aws-sdk with our credentials.
  3. Our server then requests a 'signed URL' from AWS. In order to upload our file, we need to authenticate with AWS using our access key ID and secret access key, and this step is how we do it.
  4. AWS responds to that request with a signed URL.
  5. The signed URL is sent back to the front-end (App.js specifically), which then uses it to upload the file. This process keeps our access keys secret since they are stored server-side.
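
You can sanity-check this endpoint on its own, for example with curl (the port here is a placeholder; use whichever port your Express server actually listens on):

curl "http://localhost:4000/sign-s3?file-name=test.png&file-type=image/png"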

App.js

.then inside getSignedRequest

  1. Once our server responds with the signed URL from AWS, the .then() of the GET request fires and pulls the signedRequest and the url from the response. The url will be the URL of the stored photo, which we can then use as the source of an <img /> tag as long as the photo upload is successful.
  2. The uploadFile method is then called with the file itself, the signed upload URL (signedRequest), and the file URL (url) as arguments.

uploadFile

  1. The uploadFile method takes the file to be uploaded, the signed upload URL, and the file's eventual source URL as parameters (a sketch of the method follows this list).
  2. In order for the file to be treated like a file on the PUT request, we need to set a Content-Type header with the file's type.
  3. An axios PUT request is sent to the signed URL along with the file and a configuration object containing that header.
  4. Once the .then of the axios PUT fires, we know that the file upload was successful.
  5. Inside this .then is where you would normally send the URL to the back-end on a POST request to be inserted into the database.
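
The uploadFile method itself isn't reproduced in this walkthrough. Based on the description above, a minimal sketch might look like the following (storing url on state is an assumption about this component):

uploadFile = (file, signedRequest, url) => {
  // Required so S3 treats the PUT body as a file of this type
  const options = {
    headers: {
      'Content-Type': file.type
    }
  }

  axios.put(signedRequest, file, options)
    .then(() => {
      // Upload succeeded: stop the spinner and keep the file's public URL.
      // This is also where you would normally POST `url` to your own back-end.
      this.setState({ isUploading: false, url })
    })
    .catch(err => {
      this.setState({ isUploading: false })
      console.log(err)
    })
}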


Known Issue: Dropzone Version

This tutorial will not work on versions of react-dropzone beyond 8.0 (see the newer-API sketch in the Dropzone section above).
