
dumrauf / serverless_static_website_with_basic_auth


Builds a serverless infrastructure in AWS for hosting a static website protected with Basic Authentication and published on a subdomain registered via Route 53

License: MIT License

JavaScript 7.42% Shell 46.08% CSS 0.16% HTML 2.94% HCL 43.40%
aws aws-lambda serverless static-website basic-authentication cloudformation terraform terraform-modules cloudfront

serverless_static_website_with_basic_auth's Introduction

Serverless Static Website With Basic Authentication

This repository contains a collection of Bash scripts and a choice of either a Terraform module or a set of CloudFormation templates that build a serverless infrastructure in AWS to host a static website protected with Basic Authentication. The static website is published on a subdomain registered in Route 53.

A live example can be found at https://serverless-static-website-with-basic-auth.dumrauf.uk/ using the demo username guest and password letmein. Note that access to the underlying S3 bucket hosting the static website is denied.

The master branch in this repository is compliant with Terraform v0.12; a legacy version that is compatible with Terraform v0.11 is available on branch [email protected].

You Have

Before you can use the tools in this repository out of the box, you need

  • an AWS account and the AWS CLI configured with a suitable profile
  • a domain with a hosted zone in Route 53
  • an ACM certificate for the subdomain, issued in us-east-1

If Terraform is the tool of choice then you also need

  • Terraform installed (v0.12 for the master branch)
  • an existing S3 bucket for the CloudFront access logs (see the log_bucket_domain_name input variable)

You Want

After creating the serverless infrastructure in AWS you get

  • a price class 100 CloudFront distribution which serves your static website using HTTPS (including redirect) and the ACM certificate provided in the input
  • a private S3 bucket which contains the static website and serves as the origin for the CloudFront distribution
  • a Lambda@Edge function which runs in the CloudFront distribution and performs the Basic Authentication for all requests
  • a private S3 bucket acting as a serverless code repository
  • potentially significant cost savings over using a dedicated EC2 instance, depending on your traffic
  • the whole thing in one go while getting another coffee

You Don't Want

Using the tools in this repository helps you avoid having

  • to run the static website on a dedicated EC2 instance or in an ECS container
  • to host the static website directly on S3, where it is publicly available to the whole world

For the Impatient

All entry points are Bash scripts located in the scripts folder.

Changing the Passwords

Unless you are happy with the demo username guest and password letmein, replace the username-credentials dictionary const credentials in lambda-at-edge-code/index.js with your own.

See the FAQs section about updating passwords at a later time in case changes are not reflected.

One-Shot Script

The entire serverless infrastructure can be created via

scripts/create_static_serverless_website.sh <parameter_1> ... <parameter_n>

where the parameters differ between CloudFormation and Terraform and additional setup may be required.

CloudFormation

As for CloudFormation, the entire serverless infrastructure can be created via

scripts/create_static_serverless_website.sh <website-directory> <subdomain> <domain> <hosted-zone-id> <acm-certificate-arn> <profile>

An example invocation may look like

scripts/create_static_serverless_website.sh  static-website-content/  static-website mydomain.uk  Z23ABC4XYZL05B  "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012"  default

Note that you need to replace the example values with yours in order for the script to work.
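If you are unsure of the hosted zone ID or certificate ARN, both can be looked up via the AWS CLI. A minimal sketch, where the domain name and profile are placeholders:

# Hosted zone ID for the domain (returned as /hostedzone/<ID>)
aws route53 list-hosted-zones-by-name --dns-name mydomain.uk --profile default --query 'HostedZones[0].Id'

# ACM certificates; CloudFront requires the certificate to live in us-east-1
aws acm list-certificates --region us-east-1 --profile default --query 'CertificateSummaryList[].{Domain:DomainName,Arn:CertificateArn}'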

Under the bonnet, the script calls

  1. bootstrap_serverless_repo.sh
  2. create_serverless_infrastructure.sh
  3. upload_website_to_s3_bucket.sh

creating and uploading the resources as indicated by the corresponding names.
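For orientation, the overall effect is roughly equivalent to the following manual AWS CLI steps. This is only an illustrative sketch, not what the scripts literally run: the stack names are placeholders and the actual parameter keys are the ones defined in the templates.

aws cloudformation deploy \
  --template-file CloudFormation/bootstrap_serverless_code_repository.yaml \
  --stack-name <code-repo-stack> \
  --profile default --region us-east-1

# The main template creates IAM resources, hence the capabilities flags; pass the
# subdomain, domain, hosted zone ID and certificate ARN via --parameter-overrides
aws cloudformation deploy \
  --template-file CloudFormation/serverless_static_website_with_basic_auth.yaml \
  --stack-name <website-stack> \
  --capabilities CAPABILITY_IAM CAPABILITY_NAMED_IAM \
  --parameter-overrides <key>=<value> \
  --profile default --region us-east-1

# Finally, upload the static website content to the website bucket
aws s3 sync static-website-content/ s3://<website-bucket> --profile default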

Terraform

As for Terraform, the input variables for the example website static-website.example.com are defined in Terraform/settings/static-website.example.com.tfvars as

region                  = "us-east-1"
shared_credentials_file = "/path/to/.aws/credentials"
profile                 = "default"
hosted_zone_id          = "Z23ABC4XYZL05B"
subdomain_name          = "static"
domain_name             = "example.com"
acm_certificate_arn     = "arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012"
log_bucket_domain_name  = "<your-log-bucket-domain>"

Note that you need to replace the example values with yours in order for Terraform to work.

With the Terraform configuration done, the entire serverless infrastructure can be created via

scripts/create_static_serverless_website.sh  <website-directory>  <profile>  <workspace-name>

Here, the <workspace-name> has to match the name of the input variables file in settings/, minus the .tfvars extension (in this case static-website.example.com).

An example invocation may look like

scripts/create_static_serverless_website.sh  static-website-content/  default  static-website.example.com

Note that you need to replace the example values with yours in order for the script to work.

Under the bonnet, the script calls

  1. create_serverless_infrastructure.sh
  2. upload_website_to_s3_bucket.sh

Syncing the Local Static Website with the S3 Bucket

The local static website contents can be synced with the corresponding S3 bucket serving as the CloudFront origin via

scripts/upload_website_to_s3_bucket.sh <website-directory> <profile>

If your static website is located at ../static-website-content/, sync it with the corresponding S3 bucket using profile default via

scripts/upload_website_to_s3_bucket.sh  "../static-website-content/"  default
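Under the bonnet the upload script presumably wraps the AWS CLI. A minimal hand-rolled equivalent might look as follows, where the bucket name is a placeholder for whichever bucket the stack or module created:

# Sync the local website content into the origin bucket
# (add --delete if removed files should also be removed from the bucket)
aws s3 sync "../static-website-content/" "s3://<website-bucket>" --profile default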


Using a Least Privileged User for all BAU Website Tasks

By default, an IAM user is also created who is only allowed to

  1. modify objects in the bucket hosting the website and
  2. create CloudFront invalidations

Using this least-privileged user's access keys minimises your potential attack surface and is highly recommended. Note that API access keys are not generated by default but can easily be obtained from the AWS console.
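If you prefer the CLI over the console, access keys for that user can also be created there. A small sketch, assuming the generated user name has been looked up first:

# Find the generated least-privileged user, then create an access key for it
aws iam list-users --profile default
aws iam create-access-key --user-name <least-privileged-user> --profile default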

Invalidating the CloudFront Distribution

After syncing the static website with the S3 bucket, the CloudFront distribution will most likely keep a cached copy of the old static website until it expires.

This process can be expedited by invalidating the cache via

scripts/invalidate_cloudfront_chache.sh <profile> <paths>

The entire CloudFront distribution can be invalidated using profile default via

scripts/invalidate_cloudfront_chache.sh default '/*'

Here, note the single quotes around '/*', which prevent the shell from expanding it as a glob. Note that invalidations can incur costs.
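The invalidation script presumably wraps the corresponding AWS CLI call. A minimal sketch, with the distribution ID as a placeholder:

# Look up the distribution ID, then invalidate all cached paths
aws cloudfront list-distributions --profile default --query 'DistributionList.Items[].{Id:Id,Aliases:Aliases.Items}'
aws cloudfront create-invalidation --distribution-id <distribution-id> --paths '/*' --profile default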

How it Works Underneath the Bonnet

Again, the details differ when it comes to CloudFormation versus Terraform. Here, Terraform seems to simplify things a little.

In the case of CloudFormation, the Bash scripts essentially kick off two CloudFormation templates, namely

  1. bootstrap_serverless_code_repository.yaml and
  2. serverless_static_website_with_basic_auth.yaml

In the case of Terraform, the Bash scripts first switch to the workspace provided in the input, or create it if it doesn't exist. Afterwards, the Bash scripts essentially kick off a simple Terraform configuration in main.tf which utilises the serverless-static-website-with-basic-auth module.
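In Bash terms, this boils down to something along the following lines (only a sketch; the scripts may pass additional arguments), run from the Terraform/ directory with the example workspace from above:

terraform init
# Switch to the workspace for this website, creating it if it doesn't exist yet
terraform workspace select static-website.example.com || terraform workspace new static-website.example.com
# Apply the module using the matching input variables file
terraform apply -var-file="settings/static-website.example.com.tfvars"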

The Serverless Code Repository Template

The Serverless Code Repository template is a CloudFormation-specific implementation. Here, the bootstrap_serverless_code_repository.yaml template creates a private S3 bucket which enforces encryption and acts as a serverless code repository. Another option would be to provide the code inline in the CloudFormation template, but no matter how the code editor is set up, a good chunk of the template always ends up marked as either plain text or plain wrong.

The Serverless Infrastructure Template/Module

Both the serverless_static_website_with_basic_auth.yaml template and the serverless-static-website-with-basic-auth Terraform module create

  1. A Lambda@Edge function version which runs the Basic Authentication code
  2. A role to execute the Lambda@Edge function
  3. A CloudFront origin access identity
  4. A private S3 bucket which enforces encryption and permits the CloudFront origin access identity to read from the S3 bucket
  5. A CloudFront distribution which uses the S3 bucket previously created as the origin and has a CNAME entry for the subdomain to be registered in the next step
  6. A Route 53 RecordSetGroup which adds an A record for the subdomain to be registered and points to the CloudFront distribution URL created in the previous step

FAQs

Why do I have to Provide a Hosted Zone ID?

When using Route 53 as the domain registrar, a default hosted zone is usually created. This hosted zone contains four dedicated name servers. As of December 2017, creating a new hosted zone which uses specific name servers (namely the ones from the default hosted zone) is not possible via CloudFormation.
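The default hosted zone and its delegation set of four name servers can be inspected via the AWS CLI; a small sketch using the example hosted zone ID from above:

# Returns the hosted zone details, including DelegationSet.NameServers
aws route53 get-hosted-zone --id Z23ABC4XYZL05B --profile default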

Why do I have to provide an ACM certificate ARN?

As of December 2017, CloudFormation only allows email validation for ACM certificates it issues; DNS validation is not an option, even if the domain is registered via Route 53. Moreover, the entire stack remains in the CREATE_IN_PROGRESS state until the certificate has been validated, which can introduce long delays. However, the AWS console allows you to create an ACM certificate and add the validation record set to the corresponding hosted zone in Route 53 with one click.
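Outside of CloudFormation, a DNS-validated certificate can also be requested up front via the AWS CLI. A sketch, where the domain is a placeholder and the validation CNAME still has to be added to the hosted zone manually:

# CloudFront requires the certificate to be issued in us-east-1
aws acm request-certificate --domain-name static-website.mydomain.uk --validation-method DNS --region us-east-1 --profile default
# Then add the returned validation CNAME record to the Route 53 hosted zone and wait for the certificate to become ISSUED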

I've Updated the Passwords and Redeployed the Stack but the Changes Haven't Been Reflected?

Here, the problem is that new versions are not automagically published, even if the underlying code has changed. To force a new version, the logical name of the version resource has to be changed in the corresponding CloudFormation template: manually rename BasicAuthAtEdgeLambdaVersion and update all of its uses. Another redeploy should then fix the problem.

Why is there no Alias being used in the Lambda?

As of December 2017, CloudFront can only reference a version in Lambda@Edge. Yup, it seems like early days here. Oh, and good luck deleting that Lambda@Edge via CloudFormation...

What's the Default Root Document for the Static Website?

The default root document is index.html. This value can be changed by updating the DefaultRootObject: index.html entry in the serverless_static_website_with_basic_auth.yaml template.

Why is the Least Privileged User Given Full Access to CloudFront on the cloudfront:CreateInvalidation Permission?

As of January 2018, CloudFront does not seem to provide fine-grained access control for distributions on the cloudfront:CreateInvalidation permission. So much for true least privilege then...

I've got a Bug Fix/Improvement!

Splendid! Open a pull request and let's make things better for everyone!

Credits

The code in this repository builds upon a great article by Leonid Makarov describing the underlying idea as well as providing a Node.js implementation of Basic Authentication.


serverless_static_website_with_basic_auth's Issues

could not archive missing directory

~/Development/serverless_static_website_with_basic_auth/Terraform$ ./scripts/create_static_serverless_website.sh static-website-content/ default blah.com

Error: Error refreshing state: 1 error(s) occurred:

  • module.serverless-static-website-with-basic-auth.data.archive_file.basic_auth_at_edge_lambda_package: 1 error(s) occurred:

  • module.serverless-static-website-with-basic-auth.data.archive_file.basic_auth_at_edge_lambda_package: data.archive_file.basic_auth_at_edge_lambda_package: error archiving directory: could not archive missing directory: ~/Development/serverless_static_website_with_basic_auth/Terraform/lambda-at-edge-code/blah.com/

Add Subdomain/Domain to Assets

The Terraform module creates a number of assets in AWS for a single serverless static website. It's easy to lose track of which assets belong to which serverless static website instance.

It would be beneficial to add the subdomain/domain to the assets wherever possible.

Write CloudFront Access Logs to Single S3 Bucket

Having CloudFront write its access logs to a bucket has been introduced in #5.

At the moment, a new S3 bucket gets created for every serverless static website instance created by the Terraform module. Ironically, the CloudFront access logs for subdomains are already prefixed within the S3 bucket and hence put in their corresponding "directory". This forced creation of a new S3 logging bucket doesn't really seem to be necessary.

It would be nice to specify the S3 bucket used for CloudFront access logging and hence have all logs in one central S3 bucket. By definition of subdomains and domains, there shouldn't be any clashes.

Exclude Lambda@Edge Code from Git

Credentials shouldn't accidentally end up in Git. Here, the credentials are stored in the lambda-at-edge-code directory. Apart from the demo code, everything else should be ignored by Git.

Enable CloudFront Access Logging

CloudFront provides access logging out of the box. It should be enabled and write to a dedicated bucket, ideally one that is as secure as possible.

Hard coded ZoneID

Hi

Thank you for your project.

Hard coded IDs here:
/CloudFormation/serverless_static_website_with_basic_auth.yaml
HostedZoneId: Z2FDTNDATAQYW2

and here:
/Terraform/modules/serverless-static-website-with-basic-auth/main.tf
zone_id = "Z2FDTNDATAQYW2"

License

Can you publish a license with this project to clarify if it can be cloned/reused/changed, etc.?

Provide Terraform Module

CloudFormation is great but having a version in another language would be even better. Terraform seems like a good candidate.

What's needed is a Terraform module which builds a serverless static website with basic auth from scratch.

Terraform initialized in an empty directory

~/Development/serverless_static_website_with_basic_auth/Terraform$ scripts/create_static_serverless_website.sh static-website-content/ default blah.com

output

--------------- CREATING SERVERLESS INFRASTRUCTURE ---------------

Workspace "blah.com" doesn't exist.

You can create this workspace with the "new" subcommand.
Created and switched to workspace "blah.com"!

You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.
Terraform initialized in an empty directory!

Fix: comment out line 18 of create_serverless_infrastructure.sh (pushd ../ > /dev/null).

Split CloudFormation and Terraform Documentation

With CloudFormation sort of becoming legacy at this point, it would make sense to split the documentation and move it into the respective sub-directories.

This would allow for a more independent development of the Terraform module. Plus, less clutter, which is always good.

Provide Option to Use Different Basic Auth Credentials for Different Websites

The current Terraform module gets its Lambda@Edge code from a directory called lambda-at-edge-code containing the single file index.js. Using terraform workspaces for different websites doesn't change that. This implies that all websites deployed via terraform workspaces eventually share the same set of credentials as defined in

    // Credentials definition - customise to fit your needs
    const credentials = {
        'guest': 'letmein'
    };

in file index.js.

There should be a mechanism to define an index.js file per website that gets deployed. This would allow the credentials to be different for every website deployed via terraform workspaces.

Uploaded file must be a non-empty zip

Do you want to perform these actions in workspace "downloads.example.com"?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.

Enter a value: yes

module.serverless-static-website-with-basic-auth.aws_lambda_function.basic_auth_at_edge_lambda: Creating...

Error: Error creating Lambda function: InvalidParameterValueException: Uploaded file must be a non-empty zip
{
RespMetadata: {
StatusCode: 400,
RequestID: "348674d6-a5fc-4afb-8869-f28b2032072b"
},
Message_: "Uploaded file must be a non-empty zip",
Type: "User"
}

on modules/serverless-static-website-with-basic-auth/main.tf line 80, in resource "aws_lambda_function" "basic_auth_at_edge_lambda":
80: resource "aws_lambda_function" "basic_auth_at_edge_lambda" {

/Users/me/src/downloads/serverless_static_website_with_basic_auth/Terraform/scripts/create_serverless_infrastructure.sh: line 32: popd: directory stack empty

When I unzip the auto generated zip, it is missing index.html!
Even though it is present in the source directory.

Unable to Redeploy Infrastructure

When trying to re-deploy the infrastructure with terraform apply ... after running terraform destroy ..., terraform fails as it

  1. Cannot delete the Lambda@Edge Function as it's a replicated function and it can take up to several hours before the function can be deleted
  2. Due to the above issue, a new Lambda@Edge function cannot be created, effectively delaying any rollout until the old Lambda@Edge function can be deleted

A simple solution could be to add some random characters to the function name.

Terraform and Azure DevOps

I have been working with Terraform in Azure DevOps. Before that, I developed the tf files in VS Code and everything worked fine. When I try to move the files from VS Code to Azure DevOps, I get an issue with the archive source file path: it is unable to find the directory. I have searched everywhere but have been unable to resolve this.

The path that worked fine in VS Code was "…/Folder name". I am using the same path in Azure DevOps, as I uploaded the complete folder that I built in VS Code, but it always fails when trying to archive the files as it is unable to find the directory.

terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      # Root module should specify the maximum provider version
      # The ~> operator is a convenient shorthand for allowing only patch releases within a specific minor release.
      version = "~>2.11"
    }
  }
}

provider "azurerm" {
  features {}
  #skip_provider_registration = true
}

locals {
  location = "uksouth"
}

data "archive_file" "file_function_app" {
  type        = "zip"
  source_dir  = "../BlobToBlobTransferPackage"
  output_path = "blobtoblobtransfer-app.zip"
}

module "windows_consumption" {
  source = "./modules/fa"

  archive_file = data.archive_file.file_function_app
}

output "windows_consumption_hostname" {
  value = module.windows_consumption.function_app_default_hostname
}
