toff63 / best-practices

This project forked from hashicorp/best-practices
License: Mozilla Public License 2.0
Take all instructions from Setup forward and paste them into a new "Issue" on your repository; this will allow you to check items off the list as they're completed and track your progress.
Your AWS credentials will need FullAccess to EC2, S3, Route53, and IAM in order for Terraform to create all of the resources.

Note: Terraform creates real resources in AWS that cost money. Don't forget to destroy your PoC environment when finished to avoid unnecessary expenses.
Set the below environment variables if you'll be using Packer or Terraform locally.
$ export AWS_ACCESS_KEY_ID=YOUR_AWS_ACCESS_KEY_ID
$ export AWS_SECRET_ACCESS_KEY=YOUR_AWS_SECRET_ACCESS_KEY
$ export AWS_DEFAULT_REGION=us-east-1
$ export ATLAS_USERNAME=YOUR_ORGANIZATION_NAME
$ export ATLAS_TOKEN=YOUR_ATLAS_TOKEN
Note: The environment variable ATLAS_USERNAME can be set to your individual username or your organization name in Atlas. Typically, this should be set to your organization name - e.g. hashicorp.
There are certain resources in this project that require the use of keys and certs to validate identity, such as Terraform's remote-exec provisioners and TLS in Consul/Vault. For the sake of quicker and easier onboarding, we've created gen_key.sh and gen_cert.sh scripts that can generate these for you.
Note: While using this for PoC purposes, these keys and certs should suffice. However, as you start to move your actual applications into this infrastructure, you'll likely want to replace these self-signed certs with certs that are signed by a CA and use keys that are created with your security principles in mind.
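If you'd like to see roughly what these helper scripts produce, here is a minimal sketch using ssh-keygen and openssl. The file names mirror the walkthrough, but the flags and certificate subject are assumptions for illustration, not the actual contents of gen_key.sh or gen_cert.sh:

```shell
# Illustrative sketch only -- not the actual gen_key.sh/gen_cert.sh contents.
# Generate a "site" key pair (private site.pem, public site.pub):
ssh-keygen -t rsa -b 2048 -m PEM -f site.pem -N '' -q
mv site.pem.pub site.pub

# Generate a self-signed cert and key, roughly as gen_cert.sh might for "site":
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout site.key -out site.crt -days 365 \
  -subj "/O=HashiCorp/CN=hashicorpdemo.com"
```

The real scripts handle naming and placement in setup/ for you; this is only to show the kind of artifacts involved.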
Generate site keys by running sh gen_key.sh site in setup.
- Optionally, you can pass an existing private (.pem) key to generate the public (.pub) key from it (e.g. sh gen_key.sh site ~/.ssh/my-existing-private-key.pem)
- This will place a public (.pub) and private (.pem) key in the setup/. directory

Generate site and vault certs by running sh gen_cert.sh YOUR_DOMAIN YOUR_COMPANY in setup (e.g. sh gen_cert.sh hashicorpdemo.com HashiCorp).
- This will place two self-signed certs in the setup/. directory: one named site (external self-signed cert for browsers) and one named vault (internal self-signed cert for Consul/Vault TLS)

Use the New Build Configuration tool to create each new Build Configuration below. Enter the names provided as you go through the checklist and be sure to leave the Automatically build on version uploads and Connect build configuration to a GitHub repository boxes unchecked for each.
After creating each Build Configuration, there is some additional configuration you'll need to do. A summary of what needs to be completed for each Build Configuration is below; the relevant values are provided as you go through the checklist.
For each Build Configuration, set the following environment variables:
- ATLAS_USERNAME
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- AWS_DEFAULT_REGION: us-east-1

Then point each Build Configuration at the best-practices GitHub repository you just forked, with packer as the Packer directory.

You can then go to "Builds" in the left navigation of each of the Build Configuration(s) and click Queue build; this should create new artifact(s). You'll need to wait for the base artifact to be created before you queue any of the child builds, as we take advantage of Base Artifact Variable Injection.
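Base Artifact Variable Injection works by populating a Packer user variable from the selected base artifact, so a child template can consume the base AMI roughly like this. The variable wiring below is an illustrative sketch, not the exact contents of the project's consul.json:

```json
{
  "variables": {
    "source_ami": ""
  },
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "{{user `source_ami`}}",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu"
  }]
}
```

Until Atlas injects a value for the variable, a child build has no source AMI to work from, which is why the ordering matters.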
You do NOT want to queue builds for aws-us-east-1-ubuntu-nodejs because this Build Template will be used by the application. Queueing a build for aws-us-east-1-ubuntu-nodejs will fail with the error * Bad source 'app/': stat app/: no such file or directory.
aws-us-east-1-ubuntu-base Artifact
- Name: aws-us-east-1-ubuntu-base
- Artifact name: aws-us-east-1-ubuntu-base
- GitHub repository: best-practices repo
- Packer directory: packer
- Packer template: aws/ubuntu/base.json

This builds the base artifact. Wait until the Base Artifact has been created before moving on to the child Build Configurations. These will fail with an error of * A source_ami must be specified until the Base Artifact has been created and selected.
For child Build Configurations, there is one additional step you need to take: in "Settings", set Inject artifact ID during build to aws-us-east-1-ubuntu-base for each.
aws-us-east-1-ubuntu-consul Artifact
- Name: aws-us-east-1-ubuntu-consul
- Artifact name: aws-us-east-1-ubuntu-consul
- Inject artifact ID during build: aws-us-east-1-ubuntu-base
- GitHub repository: best-practices repo
- Packer directory: packer
- Packer template: aws/ubuntu/consul.json
aws-us-east-1-ubuntu-vault Artifact
- Name: aws-us-east-1-ubuntu-vault
- Artifact name: aws-us-east-1-ubuntu-vault
- Inject artifact ID during build: aws-us-east-1-ubuntu-base
- GitHub repository: best-practices repo
- Packer directory: packer
- Packer template: aws/ubuntu/vault.json
aws-us-east-1-ubuntu-haproxy Artifact
- Name: aws-us-east-1-ubuntu-haproxy
- Artifact name: aws-us-east-1-ubuntu-haproxy
- Inject artifact ID during build: aws-us-east-1-ubuntu-base
- GitHub repository: best-practices repo
- Packer directory: packer
- Packer template: aws/ubuntu/haproxy.json
aws-us-east-1-ubuntu-nodejs Build Configuration
- Name: aws-us-east-1-ubuntu-nodejs
- Artifact name: aws-us-east-1-ubuntu-nodejs
- Inject artifact ID during build: aws-us-east-1-ubuntu-base
- GitHub repository: best-practices repo
- Packer directory: packer
- Packer template: aws/ubuntu/nodejs.json

Queue builds for each of the child Build Configurations (but NOT aws-us-east-1-ubuntu-nodejs):
- aws-us-east-1-ubuntu-consul
- aws-us-east-1-ubuntu-vault
- aws-us-east-1-ubuntu-haproxy
We built artifacts for the us-east-1
region in this walkthrough. If you'd like to add another region, follow the Multi-Region setup instructions below.
If you decide to update any of the artifact names, be sure those name changes are reflected in your terraform.tfvars
file(s).
aws-global Environment

Create the aws-global Environment from GitHub
- Name: YOUR_ATLAS_ORG/aws-global
- GitHub repository: YOUR_GITHUB_USERNAME/best-practices
- Terraform directory: terraform

terraform push your environment to Atlas to set the Terraform variables (the GitHub Ingress does not currently pull in variables):

Navigate to the global folder: cd terraform/providers/aws/global/.

$ terraform remote config -backend-config name=$ATLAS_USERNAME/aws-global
$ terraform get
$ terraform push -name $ATLAS_USERNAME/aws-global -var "atlas_token=$ATLAS_TOKEN" -var "atlas_username=$ATLAS_USERNAME" ../../../../terraform/

The path ../../../../terraform/ must be provided so that the Terraform command is run against the entire Terraform project.

In the aws-global environment, set the following environment variables:
- ATLAS_USERNAME
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- AWS_DEFAULT_REGION: us-east-1
- TF_ATLAS_DIR: providers/aws/global

Atlas uses the TF_ATLAS_DIR variable to identify where it should run Terraform commands within the repo.

Update the Terraform variables marked REPLACE_IN_ATLAS:
- Replace domain with your domain (e.g. hashicorpdemo.com)
- Replace atlas_username with your Atlas username
- Replace iam_admins with a comma separated list of users you'd like added to the admin group in IAM (e.g. cameron,jay,jon,kevin)

Queue a plan for global-admin and apply the changes in the aws-global environment.

aws-us-east-1-prod Environment

Create the aws-us-east-1-prod Environment from GitHub
- Name: YOUR_ATLAS_ORG/aws-us-east-1-prod
- GitHub repository: YOUR_GITHUB_USERNAME/best-practices
- Terraform directory: terraform

terraform push your environment to Atlas to set the Terraform variables (the GitHub Ingress does not currently pull in variables):

Navigate to the us_east_1_prod folder: cd terraform/providers/aws/us_east_1_prod/.

$ terraform remote config -backend-config name=$ATLAS_USERNAME/aws-us-east-1-prod
$ terraform get
$ terraform push -name $ATLAS_USERNAME/aws-us-east-1-prod -var "atlas_token=$ATLAS_TOKEN" -var "atlas_username=$ATLAS_USERNAME" ../../../../terraform/

The path ../../../../terraform/ must be provided so that the Terraform command is run against the entire Terraform project.

In the aws-us-east-1-prod environment, set the following environment variables:
- ATLAS_USERNAME
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- AWS_DEFAULT_REGION: us-east-1
- TF_ATLAS_DIR: providers/aws/us_east_1_prod

Atlas uses the TF_ATLAS_DIR variable to identify where it should run Terraform commands within the repo.

Update the Terraform variables marked REPLACE_IN_ATLAS; you will use the contents of the keys and certs created in Generate Keys and Certs as values for most of these variables:
- Replace atlas_token with your Atlas token
- Replace atlas_username with your Atlas username
- Replace site_public_key with the contents of site.pub
- Replace site_private_key with the contents of site.pem
- Replace site_ssl_cert with the contents of site.crt
- Replace site_ssl_key with the contents of site.key
- Replace vault_ssl_cert with the contents of vault.crt
- Replace vault_ssl_key with the contents of vault.key

Queue a plan and apply the changes in the aws-us-east-1-prod environment.
This same process can be repeated for the aws-us-east-1-staging
environment as well as any other regions you would like to deploy infrastructure into. If you are deploying into a new region, be sure you have Artifacts created for it by following the Multi-Region steps below.
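Since every environment follows the same remote config and push pattern, the Terraform directory can be derived from the Atlas environment name. A small illustrative helper (the name-to-directory mapping is an assumption based on the two environments above, not a script in the repo):

```shell
# Illustrative: derive the Terraform directory from an Atlas environment name,
# e.g. aws-us-east-1-prod -> terraform/providers/aws/us_east_1_prod
env_dir() {
  echo "terraform/providers/aws/$(echo "$1" | sed 's/^aws-//; s/-/_/g')"
}

env_dir aws-us-east-1-prod
env_dir aws-us-east-1-staging
```

You could loop over such names to run the remote config, get, and push commands for each environment in turn.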
An HA Vault cluster should have already been provisioned, but you'll need to initialize and unseal Vault to make it work. To do so, SSH into each of the newly provisioned Vault instances and follow the instructions below. The output from your apply in Atlas will tell you how to SSH into Vault.
Initialize Vault
$ vault init | tee /tmp/vault.init > /dev/null
Retrieve the unseal keys and root token from /tmp/vault.init
and store these in a safe place
Shred keys and token once they are stored in a safe place
$ shred /tmp/vault.init
Use the unseal keys you just retrieved to unseal Vault
$ vault unseal YOUR_UNSEAL_KEY_1
$ vault unseal YOUR_UNSEAL_KEY_2
$ vault unseal YOUR_UNSEAL_KEY_3
Authenticate with Vault by entering your root token retrieved earlier
$ vault auth
Shred the token
$ shred -u -z ~/.vault-token
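The retrieve-and-shred steps above can be scripted. This sketch parses a fabricated init output to show the format; the file contents below are made up for the demo, and the actual vault unseal calls are omitted since they need a live cluster:

```shell
# Fabricated example of `vault init` output, to demonstrate the parsing only.
cat > /tmp/vault.init <<'EOF'
Unseal Key 1: key-one
Unseal Key 2: key-two
Unseal Key 3: key-three
Initial Root Token: sample-root-token
EOF

# Extract the unseal keys (one per line) and the root token:
awk -F': ' '/^Unseal Key/ {print $2}' /tmp/vault.init > /tmp/unseal.keys
awk -F': ' '/^Initial Root Token/ {print $2}' /tmp/vault.init

# On a real instance you would then run `vault unseal` with each key and
# `shred /tmp/vault.init /tmp/unseal.keys` once the values are stored safely.
```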
After Vault is initialized and unsealed, update the below variable(s) and apply the changes. Next time you deploy your application, you should see the Vault/Consul Template integration working in your Node.js website!
In the aws-us-east-1-prod environment, update vault_token with the root-token.

Push a change (e.g. git commit --allow-empty -m "Force a change in Atlas") to your demo-app-nodejs repo; this should trigger a new "plan" in aws-us-east-1-prod after a new artifact is built.

In the aws-us-east-1-prod environment, queue a new plan and apply the changes to deploy the new application and see the Vault/Consul Template integration at work.

You'll eventually want to configure Vault specific to your needs and set up appropriate ACLs.
If you'd like to expand outside of us-east-1
, there are a few changes you need to make. We'll use the region us-west-2
as an example of how to do this.
In the base.json Packer template...
Add a new variable for the new region's AMI and a new variable for the new Build name. Note that the AMI will need to be from the region you intend to use.
"us_west_2_ami": "ami-8ee605bd",
"us_west_2_name": "aws-us-west-2-ubuntu-base",
Add an additional builder for the new region
{
"name": "aws-us-west-2-ubuntu-base",
"type": "amazon-ebs",
"access_key": "{{user `aws_access_key`}}",
"secret_key": "{{user `aws_secret_key`}}",
"region": "us-west-2",
"vpc_id": "",
"subnet_id": "",
"source_ami": "{{user `us_west_2_ami`}}",
"instance_type": "t2.micro",
"ssh_username": "{{user `ssh_username`}}",
"ssh_timeout": "10m",
"ami_name": "{{user `us_west_2_name`}} {{timestamp}}",
"ami_description": "{{user `us_west_2_name`}} AMI",
"run_tags": { "ami-create": "{{user `us_west_2_name`}}" },
"tags": { "ami": "{{user `us_west_2_name`}}" },
"ssh_private_ip": false,
"associate_public_ip_address": true
}
Add an additional post-processor for the new region
{
"type": "atlas",
"only": ["aws-us-west-2-ubuntu-base"],
"artifact": "{{user `atlas_username`}}/{{user `us_west_2_name`}}",
"artifact_type": "amazon.image",
"metadata": {
"created_at": "{{timestamp}}"
}
}
Once the updates to base.json have been completed and pushed to master
(this should trigger a new Build Configuration to be sent to Atlas), complete the Child Artifact steps with the new region instead of us-east-1
to build new artifacts in that region.
To deploy these new artifacts...

Create us_west_2_prod and us_west_2_staging environments, following the same environment setup steps as before.

In each of the new "us_west_2" terraform.tfvars files...
- Replace us-east-1 with us-west-2
- Update ami-5fe36434 to ami-9fe2f2af
- Update the azs variable depending on what the subnets in that region support

Finally, push these new environments to master and follow the same steps you completed to deploy your environments in us-east-1.
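The tfvars edits can be done mechanically with sed. Here's an illustrative run against a stand-in file: the directory and file contents are fabricated for the demo, and the AMI IDs are the example values from above:

```shell
# Stand-in tfvars file; your real files live under terraform/providers/aws/.
mkdir -p us_west_2_prod
cat > us_west_2_prod/terraform.tfvars <<'EOF'
region = "us-east-1"
ami    = "ami-5fe36434"
EOF

# Rewrite the region and the region-specific AMI ID in place:
sed -i.bak \
  -e 's/us-east-1/us-west-2/g' \
  -e 's/ami-5fe36434/ami-9fe2f2af/g' \
  us_west_2_prod/terraform.tfvars

cat us_west_2_prod/terraform.tfvars
```

Review the result by hand afterwards; blanket substitutions can touch values (like the azs list) that need region-specific judgment.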
If you want to destroy the environment, run the following command in the appropriate environment's directory
$ terraform destroy -var "atlas_token=$ATLAS_TOKEN" -var "atlas_username=$ATLAS_USERNAME"
There is currently an issue when destroying the aws_internet_gateway resource that requires you to run terraform destroy a second time, as it fails on the first run.
Note: terraform destroy deletes real resources; it is important that you take extra precaution when using this command. Verify that you are in the correct environment, verify that you are using the correct keys, and set any extra configuration necessary to prevent someone from accidentally destroying infrastructure.
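One cheap precaution is a guard that refuses to run unless the caller explicitly names the environment being destroyed. The function and convention below are illustrative, not part of the project:

```shell
# Illustrative guard: only proceed when the caller names the environment
# they actually intend to destroy.
confirm_destroy() {
  expected="$1"
  typed="$2"
  if [ "$typed" != "$expected" ]; then
    echo "refusing to destroy: expected '$expected', got '$typed'"
    return 1
  fi
  echo "confirmed: $expected"
  # terraform destroy -var "atlas_token=$ATLAS_TOKEN" -var "atlas_username=$ATLAS_USERNAME"
}

confirm_destroy aws-us-east-1-staging aws-us-east-1-staging
```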