
s3-static-site-uploader

Deploy static sites to Amazon S3 using Node!

  • Uploads are fast (only changed files are sent).
  • Old files (and files that no longer match any pattern) are deleted, so you can remove an accidental upload simply by changing the configuration and redeploying.
  • Configuration uses Ant Glob syntax. Easy to understand and update.
An example aws-upload.conf.js:

module.exports = {
	credentials: "aws-credentials.json",
	bucketName: "example.com",
	patterns: [
		"scripts/*.js",
		"stylesheets/default.css",
		"images/**/*.jpg",
		"index.html"
	]
}

Install via npm: npm install -g s3-upload

S3 Bucket Setup

Create a bucket

Log into your AWS S3 console and create a new bucket for your site.

Bucket names must conform with DNS requirements:

  • Should not contain uppercase characters
  • Should not contain underscores
  • Should be between 3 and 63 characters long
  • Should not end with a dash
  • Cannot contain two adjacent periods
  • Cannot contain dashes next to periods (e.g., "my-.bucket.com" and "my.-bucket" are invalid)

Configure the Bucket Static Website Hosting

Once the bucket is created, select it and choose Properties > Static Website Hosting.

Choose proper values for the Index Document and Error Document fields (e.g., index.html and 404.html).

The Index Document is resolved relative to the requested folder: http://my.aws.site.com/some/subfolder/ becomes http://my.aws.site.com/some/subfolder/index.html.

The Error Document path is always relative to the root of the site. All errors are redirected to http://my.aws.site.com/404.html.

Configure a Public Readable Policy for the Bucket

Static sites hosted on S3 do not support private files (password protection, etc.). You must make all files publicly accessible. From your bucket's Properties page, choose Permissions > Edit/Add bucket policy. Copy and paste the policy below, replacing YOUR-BUCKET-NAME with the name of the bucket you created previously.

{
	"Version": "2008-10-17",
	"Statement": [
		{
			"Sid": "PublicReadForGetBucketObjects",
			"Effect": "Allow",
			"Principal": {
				"AWS": "*"
			},
			"Action": "s3:GetObject",
			"Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
		}
	]
}
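The same policy can also be generated programmatically when scripting bucket setup. A minimal sketch (the `makePublicReadPolicy` helper is purely illustrative, not part of this tool — it simply fills a bucket name into the policy document above):

```javascript
// Build the public-read bucket policy for a given bucket name.
// Illustrative helper only; it reproduces the policy document shown above.
function makePublicReadPolicy(bucketName) {
  return JSON.stringify({
    Version: "2008-10-17",
    Statement: [
      {
        Sid: "PublicReadForGetBucketObjects",
        Effect: "Allow",
        Principal: { AWS: "*" },
        Action: "s3:GetObject",
        Resource: "arn:aws:s3:::" + bucketName + "/*"
      }
    ]
  }, null, 2);
}

console.log(makePublicReadPolicy("example.com"));
```

The resulting string can be pasted into the console, or passed to aws-sdk's `s3.putBucketPolicy` if you prefer to automate bucket setup.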

S3 User Setup

Log into your AWS Console and go to the Users management console. Click the Create New Users button and enter a username.

Credentials File

Have AWS create a new key pair for the user and copy the contents into an aws-credentials.json file in the root directory of your project. You should add this file to .gitignore (or similar) so that credentials are not checked into version control.

{ 
	"accessKeyId": "PUBLIC_KEY", 
	"secretAccessKey": "SECRET_KEY", 
	"region": "us-west-2" 
}

Note: as the AWS SDK documentation points out, you can also supply these credentials via the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.

User Permissions

From the AWS IAM Users Console select the newly created user, then the Permissions Tab, then click the Attach User Policy button. Paste in the following (substituting BUCKET-NAME as appropriate).

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:DeleteObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Sid": "AllowNewUserAccessToMyBucket",
      "Resource": [
        "arn:aws:s3:::BUCKET-NAME",
        "arn:aws:s3:::BUCKET-NAME/*"
      ],
      "Effect": "Allow"
    }
  ]
}

Create Config

Create a file called aws-upload.conf.js in the root directory of your project and copy and paste in the code below. Modify bucketName and the patterns array as appropriate for your project. All patterns are evaluated relative to the current directory, and the bucket's directory structure will mirror the local one.

module.exports = {
	credentials: "aws-credentials.json",
	bucketName: "example.com",
	patterns: [
		"scripts/*.js",
		"stylesheets/default.css",
		"images/**/*.jpg",
		"index.html"
	]
}
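The pattern matching can be sketched with a tiny glob-to-regex converter. This is an illustration of the Ant-style semantics (`*` within one path segment, `**` across segments), not the tool's actual implementation, which uses a full glob library:

```javascript
// Convert a simplified Ant-style glob to a RegExp:
//   **  matches across path segments
//   *   matches anything within a single segment
function globToRegExp(pattern) {
  const escaped = pattern
    .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
    .replace(/\*\*/g, "\u0000")           // protect ** before handling *
    .replace(/\*/g, "[^/]*")              // * -> within one segment
    .replace(/\u0000/g, ".*");            // ** -> across segments
  return new RegExp("^" + escaped + "$");
}

console.log(globToRegExp("images/**/*.jpg").test("images/a/b/photo.jpg")); // true
console.log(globToRegExp("scripts/*.js").test("scripts/sub/app.js"));      // false
```

So "scripts/*.js" only matches JavaScript files directly inside scripts/, while "images/**/*.jpg" reaches into nested subfolders.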

Upload!

Simply call s3-upload from the same directory as your config file, and the upload will happen.

s3-static-site-uploader's People

Contributors

charlescbeebe, jamestalmage, qpre, rudolf


s3-static-site-uploader's Issues

Question: How to upload to root directory on S3

my app directory

app - dist - index.html (and other build files)

my upload conf

module.exports = {
    credentials:"aws-credentials.json",
    bucketName:"bucketName",
    patterns:[
        "dist/*.html"
    ]
}

S3 directory

root - dist - index.html

What I actually want is the following directory layout on S3

root - index.html

Please tell me how to edit the conf to get the latter directory layout.
Thanks.

BadRequest: null

getting following error while running s3-upload

{ BadRequest: null
at Request.extractError (/Users/tushar/git/nb-skywalker-ui/s3/node_modules/aws-sdk/lib/services/s3.js:557:35)
at Request.callListeners (/Users/tushar/git/nb-skywalker-ui/s3/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
at Request.emit (/Users/tushar/git/nb-skywalker-ui/s3/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
at Request.emit (/Users/tushar/git/nb-skywalker-ui/s3/node_modules/aws-sdk/lib/request.js:683:14)
at Request.transition (/Users/tushar/git/nb-skywalker-ui/s3/node_modules/aws-sdk/lib/request.js:22:10)
at AcceptorStateMachine.runTo (/Users/tushar/git/nb-skywalker-ui/s3/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /Users/tushar/git/nb-skywalker-ui/s3/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request. (/Users/tushar/git/nb-skywalker-ui/s3/node_modules/aws-sdk/lib/request.js:38:9)
at Request. (/Users/tushar/git/nb-skywalker-ui/s3/node_modules/aws-sdk/lib/request.js:685:12)
at Request.callListeners (/Users/tushar/git/nb-skywalker-ui/s3/node_modules/aws-sdk/lib/sequential_executor.js:115:18)
message: null,
code: 'BadRequest',
region: 'ap-southeast-1',
time: 2018-03-13T14:04:30.454Z,
requestId: null,
extendedRequestId: undefined,
cfId: undefined,
statusCode: 400,
retryable: false,
retryDelay: 17.536229402994373 }

Cannot find 'Q' on Ubuntu; need to use 'q'.

Works fine on my Mac, but barfs on an Ubuntu guest VM:

$ s3-upload

module.js:338
    throw err;
          ^
Error: Cannot find module 'Q'
    at Function.Module._resolveFilename (module.js:336:15)
    at Function.Module._load (module.js:278:25)
    at Module.require (module.js:365:17)
    at require (module.js:384:17)
    at TestHook (/home/vagrant/.nvm/versions/node/v0.12.0/lib/node_modules/s3-upload/src/GlobRunner.js:3:14)
    at Object.<anonymous> (/home/vagrant/.nvm/versions/node/v0.12.0/lib/node_modules/s3-upload/src/GlobRunner.js:62:18)
    at Module._compile (module.js:460:26)
    at Object.Module._extensions..js (module.js:478:10)
    at Module.load (module.js:355:32)
    at Function.Module._load (module.js:310:12)

Applying the Q/q substitution here worked for me: pkaminski@bea08cb

This is appending .s3. in a wrong place?

Hi There - I'm getting a timeout because the script is trying to connect to

website.s3.s3-website-ap-southeast

instead of

website.s3-website-ap-southeast (I think that is why)

Any ideas why that is inserted?

Add Ignore Section to Configuration

When you don't want to delete things in the bucket, just because they don't exist (or look different) locally.

Sometimes you want a set of files to remain on the server even when they don't appear locally. Is there a way to do this with the current configuration file?

I'm imagining something like this:

{
    credentials:"aws-credentials.json",
    bucketName:"example.com",
    patterns:[
        "scripts/*.js",
        "stylesheets/default.css",
        "images/**/*.jpg",
        "index.html"
    ],
    ignores:[
        "users/**"
    ]
}
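One way the proposed option could work: compute the deletion set as remote keys that no longer exist locally, then drop any key matched by an ignore pattern. A minimal sketch of that behaviour (the `ignores` option is hypothetical and does not exist in the tool; a simple "dir/**" prefix check stands in for full glob matching):

```javascript
// A remote key is deleted only if it is no longer present locally
// AND it matches none of the ignore patterns.
function keysToDelete(remoteKeys, localKeys, ignores) {
  const local = new Set(localKeys);
  const ignored = key =>
    ignores.some(p => p.endsWith("/**") && key.startsWith(p.slice(0, -2)));
  return remoteKeys.filter(key => !local.has(key) && !ignored(key));
}

console.log(keysToDelete(
  ["index.html", "old.html", "users/alice.json"],
  ["index.html"],
  ["users/**"]
)); // [ 'old.html' ]
```

Here users/alice.json survives even though it has no local counterpart, while old.html is still cleaned up.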

Upload fails with old version of aws-sdk.

A freshly-installed s3-upload fails for me like so:

>> s3-upload: /Users/charles/.nvm/versions/node/v0.12.0/lib/node_modules/s3-upload/node_modules/aws-sdk/lib/sequential_executor.js:261
>>         throw err;
>>               ^
>> TypeError: Cannot read property 'length' of null
>>     at Object.AWS.util.buffer.concat (/Users/charles/.nvm/versions/node/v0.12.0/lib/node_modules/s3-upload/node_modules/aws-sdk/lib/util.js:116:29)
>>     at Request.HTTP_DONE (/Users/charles/.nvm/versions/node/v0.12.0/lib/node_modules/s3-upload/node_modules/aws-sdk/lib/event_listeners.js:214:36)
>>     at Request.callListeners (/Users/charles/.nvm/versions/node/v0.12.0/lib/node_modules/s3-upload/node_modules/aws-sdk/lib/sequential_executor.js:132:20)
>>     at Request.emit (/Users/charles/.nvm/versions/node/v0.12.0/lib/node_modules/s3-upload/node_modules/aws-sdk/lib/sequential_executor.js:100:10)
>>     at Request.emitEvent (/Users/charles/.nvm/versions/node/v0.12.0/lib/node_modules/s3-upload/node_modules/aws-sdk/lib/request.js:408:10)
>>     at IncomingMessage.onEnd (/Users/charles/.nvm/versions/node/v0.12.0/lib/node_modules/s3-upload/node_modules/aws-sdk/lib/event_listeners.js:184:26)
>>     at IncomingMessage.emit (events.js:129:20)
>>     at _stream_readable.js:908:16
>>     at process._tickCallback (node.js:355:11)

Updating to aws-sdk@2 fixes whatever it was that was causing this issue. Please bump the version of aws-sdk in the package.json :)

Log errors to error stream

If errors are logged to the error stream, users can filter out the routine "upload successful" messages and keep only the failure messages, which may be important.
Fixed in #18

Is there a way to enable support for Amazon's V4 authentication scheme?

The newer S3 regions (e.g. Frankfurt = eu-central-1) do not support the old V2 authentication scheme, so I get the following error when running s3-upload:

{ [InvalidRequest: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.]
  message: 'The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.',
  code: 'InvalidRequest',
  name: 'InvalidRequest',
  statusCode: 400,
  retryable: false }

Is there a way to enable V4 aka AWS4-HMAC-SHA256 in s3-uploader?

See also http://stackoverflow.com/questions/26533245
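In aws-sdk v2 the signature version can be forced when the S3 client is constructed; whether s3-upload exposes this is another matter — it may require patching wherever the tool creates its `AWS.S3` instance. A sketch of the relevant client options:

```javascript
// Options for the aws-sdk v2 S3 client constructor. signatureVersion: "v4"
// selects AWS4-HMAC-SHA256 signing, which newer regions such as
// eu-central-1 require.
const s3Options = {
  region: "eu-central-1",
  signatureVersion: "v4"
};

// Somewhere in the uploader this would be passed to the constructor:
//   const s3 = new AWS.S3(s3Options);
console.log(s3Options.signatureVersion); // v4
```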

Is there a way to specify to server with Content-Encoding: gzip?

You can serve assets from S3 gzipped (with Content-Encoding: gzip), but the uploader has to upload them already gzipped; S3 will not do this for you. s3-uploader needs to know to gzip files such as html, js, and css, but not already-compressed files like png and jpeg.

More info:
http://www.rightbrainnetworks.com/blog/serving-compressed-gzipped-static-files-from-amazon-s3-or-cloudfront/
http://stackoverflow.com/questions/5442011/serving-gzipped-css-and-javascript-from-amazon-cloudfront-via-s3

Ability to set Content-Type in config

The current site I'm building contains HTML files without the .html extension, and once you remove that, S3 slaps on an application/octet-stream Content-Type.

This is basically a nice-to-have, to avoid having to jump in the console.
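A workaround pending such an option: determine the Content-Type yourself before upload and pass it as the `ContentType` parameter. A minimal sketch with a hypothetical lookup helper, where extensionless files default to text/html to match the use case described:

```javascript
// Map common extensions to MIME types; files with no extension are
// treated as HTML pages (the use case in this issue).
const MIME_TYPES = {
  ".html": "text/html",
  ".css": "text/css",
  ".js": "application/javascript",
  ".png": "image/png",
  ".jpg": "image/jpeg"
};

function contentTypeFor(key) {
  const dot = key.lastIndexOf(".");
  if (dot === -1) return "text/html"; // extensionless page
  return MIME_TYPES[key.slice(dot)] || "application/octet-stream";
}

console.log(contentTypeFor("about"));          // text/html
console.log(contentTypeFor("app.js"));         // application/javascript
console.log(contentTypeFor("archive.tar.gz")); // application/octet-stream
```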

S3-upload delete but not upload

Hello.

I'm testing the module, but it deletes all my files and doesn't upload them again.

here is my config

module.exports = {
  credentials:"../aws-credentials.json",
  bucketName:"techempleo",
  patterns:[
    "statics/**/*"
  ]
}
