craigmonson / colonize
A terraform tool to manage environment driven templating.
License: MIT License
So yea... we need this as well... the whole "colonize help" stuff...
I am launching some RDS instances, which can take some time...a lot of time...
Currently, the output is being captured and printed at the end. The downside to this is that while Terraform is waiting for the creation to finish, Colonize looks like nothing is happening.
I am staring at this the entire time until the end (took about 10-15m):
$ colonize apply
Running remote setup
Remote configuration updated
Remote state configured and pulled.
Executing terraform apply
It would be nice to either indicate some sort of status so you know the process is still running OK, or pipe the output from TF as it runs, so the output shows up in real time.
When running a TF plan and there is an error, the output only shows "exit status 1" instead of the full TF error output.
Steps to reproduce: run the plan command.
Actual Output
$ colonize plan -e dev
Removing .terraform directory...
Building combined terraform variable assignment files...
Building combined variable files...
Building combined terraform files...
[]
Building combined derived files...
Building remote config script...
Fetching terraform modules...
Running remote setup
Disabling remote
Executing terraform plan
Plan failed to run: exit status 1
Desired Output
$ colonize plan -e dev
Removing .terraform directory...
Building combined terraform variable assignment files...
Building combined variable files...
Building combined terraform files...
[]
Building combined derived files...
Building remote config script...
Fetching terraform modules...
Running remote setup
Disabling remote
Executing terraform plan
module root: 1 error(s) occurred:
* module 'instance': unknown variable referenced: 'domain'. define it with 'variable' blocks
Plan failed to run: exit status 1
When generating a leaf (or branch) it would be nice if the remote backend configuration could be automatically generated.
In the .colonize.yaml you could, for example, specify a template to be used when generating the main.tf file:
main_template: |
  terraform {
    backend "s3" {
      bucket = "config_bucket"
      key    = "${var.branch}-${var.leaf}.tfconfig"
      region = "us-east-1"
    }
  }
Maybe the interpolation syntax should be different, as you don't really have Terraform variables at this point and it might get confusing.
Copied from PR #28
[...] I think we need some sort of a plan-apply command and/or a way to build a dependency tree with the branch runs.
plan-apply: automatically (maybe a confirmation input and/or a flag to accept) run apply after you have planned (assuming no error)
dependency tree: If possible, do not run plan/apply on dependent leafs until required changes are applied
Here is the scenario where 1 or both of these things would be handy:
Assume you have a project where you have a leaf called security_groups where you define your security groups for your environment. Then you have another leaf that builds your EC2 instances, called webservers. You will need output from security_groups in order to correctly launch your webservers. You create a build_order.txt to make this a branch, where you have both leafs listed, so that it runs security_groups and then webservers.
Now, if you run a colonize plan -e dev to build out your dev environment, it will go through and run the plan on each leaf. The issue is, webservers cannot plan correctly, because the output from security_groups does not exist yet. So, you have to plan, apply, plan, and apply, in order to correctly build your webservers, which is the same as just running at the leaf level each time.
Scenario:
Run plan with the --skip_remote flag and the --remote_state_after_apply flag
Run apply with the --remote_state_after_apply flag
Result:
TF applies correctly, but there is no remote state file, and it states that it was not remote synced.
In the original Makefile system, remote states, like vpc, would NOT generate terraform code when they were for the template being run. Ex: the vpc remote resource wouldn't be included when working on the vpc template. This makes sense, but was specific to the customer. You can achieve the same remote sources, but you cannot skip them.
This should allow some sort of mechanism to skip a template, given some rule. One idea is to include some metadata in a comment section of a template. If some criteria is met (SKIP: template_name == "vpc") then, when combining, it would skip over this template.
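One hypothetical take on that metadata comment, assuming the SKIP syntax floated above (the shouldSkip helper and the exact rule grammar are illustrative, not an existing Colonize feature):

```go
package main

import (
	"fmt"
	"regexp"
)

// skipRule matches a metadata comment such as:
//   # SKIP: template_name == "vpc"
var skipRule = regexp.MustCompile(`(?m)^\s*#\s*SKIP:\s*template_name\s*==\s*"([^"]+)"`)

// shouldSkip reports whether the template source asks to be skipped when
// combining for the named template.
func shouldSkip(source, templateName string) bool {
	m := skipRule.FindStringSubmatch(source)
	return m != nil && m[1] == templateName
}

func main() {
	src := `# SKIP: template_name == "vpc"
data "terraform_remote_state" "vpc" {}`
	fmt.Println(shouldSkip(src, "vpc"))     // true: drop this template when combining vpc
	fmt.Println(shouldSkip(src, "subnets")) // false: keep it everywhere else
}
```

The combiner would just test each template with shouldSkip before appending it.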
Like many tools, an initialize command (init) would be super useful for setting up a new project with colonize. This would create a generic .colonize.yaml in the root of the project, a config ('env') directory, and boilerplate default files in the root config.
The boilerplate files should be controlled by a colonize "template" that would define what files should be created when initializing a new project.
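A generated .colonize.yaml might look like the sketch below. The key names here are guesses based on the defaults mentioned elsewhere in these issues (the env directory, build_order.txt, remote_setup.sh), not the tool's actual schema:

```yaml
# .colonize.yaml -- sketch of what `colonize init` could generate
# (key names are illustrative)
environments_dir: env
build_order_file: build_order.txt
remote_setup_file: remote_setup.sh
```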
With a colonize segment like this:
.
├── env
│ ├── non-prod-dr.tfvars
│ ├── non-prod.tfvars
│ ├── prod-dr.tfvars
│ └── prod.tfvars
├── main.tf
├── main.tf.non-prod
└── main.tf.non-prod-dr
the command colonize -e non-prod prep will source both the main.tf.non-prod and main.tf.non-prod-dr files
colonize version
Colonize v0.1.0-alpha
Some flags are named with dashes and some with underscores. Since CLI flags typically use dashes, I think we should update all of them to use dashes.
Example:
We have --skip_remote, but I think it should be --skip-remote.
Example:
We use -r for --skip-remote; however, -r commonly means recursive in the CLI world. Perhaps -s or -k would be better for this.
When running commands, especially prep/plan/apply/destroy on branches, I think we should make the current task being run more clear and separated from the colonize/terraform output. My suggestion would be to look at Ansible's play headers.
For example, from a recent prep I ran on a branch:
Running app/security_groups
Removing .terraform directory...
Building combined variables files...
Building combined terraform files...
Building combined derived files...
Building remote config script...
Fetching terraform modules...
Running app/database
Removing .terraform directory...
Building combined variables files...
Building combined terraform files...
Building combined derived files...
Building remote config script...
Fetching terraform modules...
Running app/instances
Removing .terraform directory...
Building combined variables files...
Building combined terraform files...
Building combined derived files...
Building remote config script...
Fetching terraform modules...
The Running <branch> line seems to get a little lost in the output.
If we did something like the following, then I think it may be more clear:
Colonize [dev] ****************************************************************
Environments Dir = env
Build Order File = build_order.txt
Remote Setup File = remote_setup.sh
... Other config/options ...
PREP: [app/security_groups] ***************************************************
Removing .terraform directory...
Building combined variables files...
Building combined terraform files...
Building combined derived files...
Building remote config script...
Fetching terraform modules...
PREP: [app/database] **********************************************************
Removing .terraform directory...
Building combined variables files...
Building combined terraform files...
Building combined derived files...
Building remote config script...
Fetching terraform modules...
PREP: [app/instances] *********************************************************
Removing .terraform directory...
Building combined variables files...
Building combined terraform files...
Building combined derived files...
Building remote config script...
Fetching terraform modules...
This would obviously expand to each command, following the pattern of a header for the command being run, followed by its output.
The destroy command is not running prep first, so if I had run a colonize clean and then try to destroy something, I get:
invalid value "/Users/jyore/Code/colonize-test/app/database/_combined.tfvars" for flag -var-file: Error reading /Users/jyore/Code/colonize-test/app/database/_combined.tfvars: open /Users/jyore/Code/colonize-test/app/database/_combined.tfvars: no such file or directory
Running a colonize prep -e <env> prior to the colonize destroy -e <env> works, so I think we just need to make destroy run the prep command first.
The only place you can run colonize at the moment is on the leaf. Like the previous Makefile system, we should make it so you can run a colonize command from a branch, and it'll walk the tree and run the command appropriately.
As they're both respected file extensions, we should allow for both.
Prebuilt binaries for those that do not want to do the Go install would be a good idea. We'll need them for Windows, OSX, & Linux.
I have my provider.tf in the env directory, and I am defining some additional variables in the .tf.<env> fashion. The _combined.tf output gets the correct tf in it, but it also gets some garbage:
provider "aws" {
region = "${var.region}"
}
b0VIM 7.4gchXj8���jyoreJoeys-MacBook-Pro.local~jyore/Code/colonize-test/vpc/subn3210#"! Utp�adf���������|N ����}} } nonprod-application-az2 = "10.20.20.0/24" nonprod-application-az1 = "10.20.10.0/24" nonprod-database-az2 = "10.10.20.0/24" nonprod-database-az1 = "10.10.10.0/24" default = { type = "map"variable "subets" {} default = [ "a", "c" ] type = "list"variable "availability_zones" {
variable "availability_zones" {
type = "list"
default = [ "a", "c" ]
}
variable "subets" {
type = "map"
default = {
nonprod-database-az1 = "10.10.10.0/24"
nonprod-database-az2 = "10.10.20.0/24"
nonprod-application-az1 = "10.20.10.0/24"
nonprod-application-az2 = "10.20.20.0/24"
}
}
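The garbage above is a Vim swap file (note the b0VIM magic) being swept into _combined.tf. A defensive fix is to filter what the combiner will merge; the combinable helper below is a hypothetical sketch (for brevity it only accepts plain .tf files, whereas the real combiner also has to accept environment-suffixed files like main.tf.non-prod):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// combinable reports whether a file in a config dir should be merged into
// _combined.tf: plain .tf files only, skipping hidden files (which covers
// Vim swap files like .subnets.tf.swp) and already-generated _ files.
func combinable(name string) bool {
	base := filepath.Base(name)
	if strings.HasPrefix(base, ".") || strings.HasPrefix(base, "_") {
		return false
	}
	return filepath.Ext(base) == ".tf"
}

func main() {
	for _, f := range []string{"provider.tf", ".subnets.tf.swp", "_combined.tf", "variables.tf"} {
		fmt.Printf("%-16s -> %v\n", f, combinable(f))
	}
}
```

Anything with a leading dot or underscore, or an unexpected extension, would simply never reach the combined output.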
In the make system, and in terraform itself, when you ran a destroy, a warning would be printed to the screen and you would be prompted to enter a confirmation to continue. We should add this "last chance" step back in, along with an auto verify (like a --force) so colonize can be automated.
The current customer specific Makefile setup allows a pre and post apply command shell script hooks. We should duplicate this functionality for at least the apply command.
Not sure why this is happening, but the apply command appears twice when running the help command.
Steps to reproduce
colonize help
Error Output
$ colonize help
A longer description that spans multiple lines and likely contains
examples and usage of using your application. For example:
Cobra is a CLI library for Go that empowers applications.
This application is a tool to generate the needed files
to quickly create a Cobra application.
Usage:
colonize [command]
Available Commands:
apply A brief description of your command
apply A brief description of your command
clean A brief description of your command
plan A brief description of your command
prep A brief description of your command
Flags:
-e, --environment string The environment to colonize
-a, --remote_state_after_apply Run remote state after terraform apply (if it was skipped).
-r, --skip_remote skip execution of remote configuration.
Use "colonize [command] --help" for more information about a command.
I think an --exclude flag for the prep, plan, apply, & destroy commands could be useful, for one-time excluding a specific leaf or leafs from the run.
Should the environment flag be required here?
Just clean may be sufficient:
$ colonize clean
Currently, it gives:
$ colonize clean
environment can not be empty
You can pass any environment name to get it to execute, however:
$ colonize clean -e anything
The "generate" command would add a new directory (leaf or branch), with the config directory initialized with what it knows about the operational environment. IE: It will drop down through the already existing configs, and compile info about the different environments, then output appropriate files into the config.
ex:
/foo
/foo/env/dev.tfvars
/foo/env/prod.tfvars
in the 'foo' directory, running "colonize generate web" would create:
/foo/web/env/dev.tfvars
/foo/web/env/prod.tfvars
prep (and possibly others) don't need the skip-remote or remote-after-apply flags. We should clean these up.
Correct me if I'm wrong, but I feel like the following is sensible to add to the .gitignore by default during the init phase:
_*.tf
_*.tfvars
_remote_setup.sh
*.tfplan
When putting together the derived variables, I have the variables file generated from the content of the tfvars file. I extended this out to the env dirs, so those will automatically generate variable resources. This is OK, but steps away from how Terraform actually works. I think I should revert the changes for non-derived variables.
Right now, every .tf file in a config dir will be combined, unlike those in the leaf. We should allow for the same level of granularity in the config dirs ('env') so you can group config environment templates just like you can in the leaf.
ie:
foo/
foo/env/
foo/env/remote.tf.dev
foo/env/remote.tf.base
etc...
that would be combined all the way up depending on the same rules for the leaf.
Work is already being done on this, but opening this as a placeholder.
The remote state is not being pushed with a simple colonize apply command. It will run and execute correctly; however, the remote state file is not available in s3. The ONLY way to get the remote state file to update is to run the command with the remote_state_after_apply and skip_remote flags (colonize -r -a apply); then it will execute and update the remote state.
I do not think that this is the desired behavior, is it?
Yea... should do that.
In the previous version, it automatically generated remote state blocks for the VPC, Security Groups and IAM leafs.
After a make plan, for example, the VPC remote state would be generated:
$ cat _vpc_remote_state.tf
data "terraform_remote_state" "vpc" {
backend = "s3"
config {
bucket = "terraform-states-bucket"
key = "vpc_account_nonprod.tfstate"
region = "us-west-2"
}
}
This made referencing VPC/account level resources much easier, without having to worry about manually defining these data resources in each leaf.
A possible idea, would be to use the .colonize.yaml file to specify specific leafs or branches to generate remote_state for...or just generate it for each leaf, regardless.
If going down the .colonize.yaml route, something like:
generate_remote_datasources:
- vpc
- security_groups
- iam
This could then scan the leafs, determine the bucket location, and generate a _<leaf-name>_remote_state.tf file for each. Again, this could alternatively just be done for every leaf by default, potentially. They could also all be merged into a single file, if necessary.