atlantis's People

Contributors

anubhavmishra, hussfelt, jkodroff, jwieringa, lkysow, mootpt, nadavshatz, nicholas-wu-hs, nick-hollingsworth-hs, reulan, so0k, suhussai


atlantis's Issues

Ensure logging is consistent and all necessary steps are logged

Logging guidelines

  • logging serves two purposes:
    • debugging why someone's repo is erroring. For this use case, people will want to know how Atlantis is interacting with their repo
    • debugging why Atlantis itself is erroring. For this use case, we need to make sure we're logging what Atlantis is actually doing (for example, real paths)
  • log what happened, not what is about to happen.* This reduces the number of log statements.

Don't

getting modified files
found 10 modified files
# or
getting modified files
error getting modified files

Do

found 10 modified files
# or
error getting modified files
  • *the exception is when subsequent log messages wouldn't make sense without context. For example, when running plan in multiple directories, logging "running plan in {dir}" is needed to understand the logs that follow
  • for commands executed, we should always log the command and the directory. If there is an error, we should include the command's output in the comment back to GitHub, but not in our error logs, since that would make them too noisy
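A hypothetical helper following these guidelines (the function name and signature are illustrative, not the actual Atlantis code) might look like:

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// runAndLog runs a command in dir, logging the command and directory,
// and logs what happened rather than what is about to happen.
func runAndLog(dir, name string, args ...string) ([]byte, error) {
	cmd := exec.Command(name, args...)
	cmd.Dir = dir
	out, err := cmd.CombinedOutput()
	if err != nil {
		// the command's output goes into the GitHub comment, not the log
		log.Printf("error running %q in %q: %s", name, dir, err)
		return out, err
	}
	log.Printf("ran %q in %q", name, dir)
	return out, nil
}

func main() {
	out, _ := runAndLog(".", "echo", "modified files")
	fmt.Print(string(out))
}
```

Logging after the fact means a single line tells you both what was attempted and how it ended.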

Look at how we're commenting/templates and possibly refactor

Guidelines

  • any output from an executable will be in code blocks
    • e.g. terraform output
  • for errors, we try to provide a human-readable reason in regular text, with the actual output in a code block
  • we should distinguish errors from failures. Errors are unexpected, so we won't be able to have a template for each of them. Failures are expected, like a pull request not being approved or no files being modified. Failures should get a simple comment without any detailed error text.

Examples below.

Decide on deployment: atlantis user, atlantis comment, github app

Installation Options

  • install webhook on individual repositories
  • install webhook on organization
  • install as oauth app
  • install as Github App

Can also choose between listening for comments mentioning an @atlantis user or just the string "atlantis"

Webhooks require a two-step installation: create a user and then add the webhook to the repo.
OAuth could be a one-step solution, but we'd need to support the OAuth flow.
I don't think GitHub Apps will work for GitHub Enterprise. CircleCI's instructions ask people to add it as an OAuth App. I haven't done a ton of research into this. Apps are nice because they automatically get a bot user associated with them, but I think they're probably overcomplicated for what we need.

So I think we should recommend adding the webhook on the organization and also setting up an Atlantis user.

We could use the mentions API, but I think we should still use webhooks so we can do things like auto-planning in the future (before Atlantis is mentioned). We can also support both @atlantis and atlantis.

Improve e2e tests

  • Add atlantis apply to e2e tests
  • Add more e2e tests for all types of projects we support

Ensure aws sessions won't expire

Investigate calls to start session and assume role throughout the codebase, and ensure they will work past any timeouts and that we're not spamming AWS sessions.

TODOs stable release

  • Finish all the OSS steps for Hootsuite (Legal etc.)
  • Switch hootsuite/atlantis-example from private to public
  • Update atlantis to point to hootsuite/atlantis-example project instead of airauth/example
  • Update atlantis bootstrap mode to point to the correct documentation
  • Update logo on embedded atlantis website

After Release:

  • Add CLA bot
  • Create 0.1 release

Lock pull request workspace during a run

This issue can be reproduced by commenting plan staging and plan production in quick succession.
Atlantis will error out with something like:

error running plan: 2 error(s) occurred:

* file: open /private/tmp/atlantis/lkysow/atlantis-terraform-test/10/.terraform/modules/717f47b625fc0873bb7bd027de734d00/templates/userdata.tpl: no such file or directory in:

${coalesce(var.user_data_template, file("${path.module}/templates/userdata.tpl"))}
* file: open /private/tmp/atlantis/lkysow/atlantis-terraform-test/10/.terraform/modules/717f47b625fc0873bb7bd027de734d00/templates/userdata.tpl: no such file or directory in:

${coalesce(var.user_data_template, file("${path.module}/templates/userdata.tpl"))}

This happens because we clean the workspace when we get a new event.
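One way to fix this, sketched below under the assumption that runs are keyed by repo and pull number (the type and method names are illustrative, not the actual Atlantis code):

```go
package main

import (
	"fmt"
	"sync"
)

// WorkspaceLocker serializes runs on the same pull request's workspace
// so a second "plan" can't clean the directory out from under the first.
type WorkspaceLocker struct {
	mu    sync.Mutex
	locks map[string]*sync.Mutex
}

func NewWorkspaceLocker() *WorkspaceLocker {
	return &WorkspaceLocker{locks: map[string]*sync.Mutex{}}
}

// Lock blocks until the workspace for this repo/pull is free and
// returns the function that releases it.
func (w *WorkspaceLocker) Lock(repoFullName string, pullNum int) func() {
	key := fmt.Sprintf("%s/%d", repoFullName, pullNum)
	w.mu.Lock()
	m, ok := w.locks[key]
	if !ok {
		m = &sync.Mutex{}
		w.locks[key] = m
	}
	w.mu.Unlock()
	m.Lock()
	return m.Unlock
}

func main() {
	locker := NewWorkspaceLocker()
	unlock := locker.Lock("lkysow/atlantis-terraform-test", 10)
	// ... clone repo, run terraform ...
	unlock()
	fmt.Println("run finished")
}
```

With this in place, "plan staging" and "plan production" on the same pull run back to back instead of racing over the same checkout.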

Don't send accept header for pull review api

Right now we're sending the Accept: application/vnd.github.black-cat-preview+json header because our version of GitHub Enterprise hasn't been updated to the latest. This will cause approval checking to fail on github.com.

We should try without the header and if it fails, send it with the header. Or look into if there's a way to know which version of GitHub is being run.
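A sketch of the try-then-fallback approach (the helper names are hypothetical; the demo uses a fake server rather than the real API):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

const previewAccept = "application/vnd.github.black-cat-preview+json"

// getReviews requests the pull review API without the preview Accept
// header first, and retries with the header if the API rejects the
// request (e.g. an older GitHub Enterprise).
func getReviews(client *http.Client, url string) (*http.Response, error) {
	resp, err := client.Get(url)
	if err != nil {
		return nil, err
	}
	if resp.StatusCode < 400 {
		return resp, nil
	}
	resp.Body.Close()
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Accept", previewAccept)
	return client.Do(req)
}

// demo stands up a fake API that only accepts the preview header, to
// show the fallback kicking in; it returns the final status code.
func demo() int {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Header.Get("Accept") != previewAccept {
			w.WriteHeader(http.StatusUnsupportedMediaType)
			return
		}
		w.WriteHeader(http.StatusOK)
	}))
	defer srv.Close()
	resp, err := getReviews(srv.Client(), srv.URL)
	if err != nil {
		return 0
	}
	resp.Body.Close()
	return resp.StatusCode
}

func main() {
	fmt.Println("status after fallback:", demo())
}
```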

Terminology/Functionality Changes

type Project struct {
	Repo Repo
	// Path to project root in the repo.
	// If "/" then project is at root
	// All paths must end with "/"
	Path string 
}

A project is where you'd run terraform plan/apply from. We have a Path because some repos may have multiple terraform projects. A project may have multiple environments or just the default environment. I don't think we need to have that data in the struct though.

type ProjectLock struct {
	Project Project
	Pull Pull
	Environment string
	Timestamp time.Time
}

I also think we should call this a ProjectLock because we're not locking a Run, the ProjectLock is created as a result of a Run.

func (p Project) Key() string {
    return fmt.Sprintf("%s%s", p.Repo.FullName, p.Path)
}

The Key() is used for looking up a ProjectLock in the database. When a plan command is run, we look up existing locks for that Project with the key.
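For example, under the struct definitions above (Repo is assumed to have a FullName field):

```go
package main

import "fmt"

type Repo struct {
	FullName string // e.g. "hootsuite/atlantis"
}

type Project struct {
	Repo Repo
	Path string // project root in the repo; "/" if at root, always ends with "/"
}

// Key uniquely identifies a Project for ProjectLock lookups.
func (p Project) Key() string {
	return fmt.Sprintf("%s%s", p.Repo.FullName, p.Path)
}

func main() {
	p := Project{Repo: Repo{FullName: "hootsuite/atlantis"}, Path: "/terraform/"}
	fmt.Println(p.Key()) // hootsuite/atlantis/terraform/
}
```

Because Path always ends with "/", the key stays unambiguous even when one project path is a prefix of another.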

Lock detail view

Should be able to click into the list of locks and see more detail.
Should also have a Delete button that makes a DELETE request to
the backend and deletes the lock.
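A minimal sketch of such an endpoint (the route, store, and handler are hypothetical; the real lock store is the bolt database):

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

// LockStore is an in-memory stand-in for the bolt-backed lock database.
type LockStore struct {
	mu    sync.Mutex
	locks map[string]bool
}

// Delete removes the lock and reports whether it existed.
func (s *LockStore) Delete(id string) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if !s.locks[id] {
		return false
	}
	delete(s.locks, id)
	return true
}

// deleteLock handles "DELETE /locks?id={id}" from the detail view's
// Delete button.
func (s *LockStore) deleteLock(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodDelete {
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		return
	}
	if !s.Delete(r.URL.Query().Get("id")) {
		http.Error(w, "no such lock", http.StatusNotFound)
		return
	}
	fmt.Fprintln(w, "lock deleted")
}

func main() {
	store := &LockStore{locks: map[string]bool{"hootsuite/atlantis/": true}}
	http.HandleFunc("/locks", store.deleteLock)
	// http.ListenAndServe(":4141", nil) // left commented so the sketch exits
}
```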

Production-Ready Deployment: SSL? Load balanced?

I saw Mishra's ConfigMgmtCamp talk in Portland today, and this project is exactly what we'd been looking for.

A few questions as to running this in production:

  1. The Production-Ready section makes no mention of SSL, but I assume there would be no issue hosting this behind an SSL-enabled proxy. Would that work?
  2. Could Atlantis be hosted on multiple hosts behind a single load balancer? From my brief look at the code, it appears I'd have to work out a way to share the data dir that is used for the locking bolt database. Anything I'm missing?

Support terraform >0.9 and <= 0.9

How we'll support multiple terraform versions

  • detect the terraform version by running terraform version # todo
  • detect the project root
  • check terraform_version
  • if >= 0.9
    run terraform init
  • if >= 0.9 AND an environment is specified
    run terraform env select {env}; may also need to create it first with terraform env new
  • run pre_plan commands
  • run plan
    • if env/{env}.tfvars exists, add -var-file=env/{env}.tfvars automatically
    • append all options that were specified in atlantis.yaml
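The version branching above could be sketched like this (illustrative only; assumes simple "0.x.y" version strings):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// commandsFor returns the terraform commands to run before plan, per
// the flow above: init on >= 0.9, plus env select when an environment
// is specified.
func commandsFor(tfVersion, env string) []string {
	var cmds []string
	parts := strings.Split(tfVersion, ".")
	minor, _ := strconv.Atoi(parts[1]) // assumes "0.x.y" version strings
	if minor >= 9 {
		cmds = append(cmds, "terraform init")
		if env != "" {
			// may need "terraform env new {env}" first if it doesn't exist
			cmds = append(cmds, "terraform env select "+env)
		}
	}
	return cmds
}

func main() {
	fmt.Println(commandsFor("0.9.8", "staging"))
}
```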

Changes:

  • atlantis.yaml
    • get rid of stash_path
    • add terraform_args that lets users add args to all terraform commands, e.g. -no-color; allow arguments to plan, apply, and init
  • change code flow to support the above ^^

Change S3 Plan Storage

Current Key

The object is the planfile itself, and the key is determined as follows, where prefix is set to /plans:

  • 1: path: root, env: none
    • {prefix}/{owner}/{repo}/{owner}_{repo}_{pullNum}.tfplan
  • 2: path: root, env: env
    • {prefix}/{owner}/{repo}/{owner}_{repo}_{pullNum}.tfplan.{env}
  • 3: path: parent/child, env: none
    • {prefix}/{owner}/{repo}/{owner}_{repo}_{pullNum}_parent_child.tfplan
  • 4: path: parent/child, env: env
    • {prefix}/{owner}/{repo}/{owner}_{repo}_{pullNum}_parent_child.tfplan.{env}

Actions

The key is used as follows

  • apply {env} command comes in
  • pull down all plans with prefix {prefix}/{owner}/{repo}/{owner}_{repo}_{pullNum}
  • look at the suffix and see if it has the correct env. Since "no env" is the empty string, this will also match cases 1 and 3.
  • download the matching plans into {scratchDir}/{owner}_{repo}_{pullNum}{_any sub dirs}.tfplan{.optional env}
  • parse the name and pull out any sub dirs
  • move plans to where the repo is cloned and into their correct sub dirs: {scratchDir}/{owner}/{repo}/{pullNum}/{subdirs}
  • cd into each sub dir and run apply with the -tfvars=env/{env}.tfvars if needed

Needed Functionality

  1. Look up plans by a) repo and pull, b) repo, pull and env so we can run apply and apply {env}
  2. Know subdir and env for each plan so we can put it in the correct location and apply the right terraform env {env} or -tfvars env/{env}.tfvars

Changes

  • Storing information in the key format and then parsing it back out is a bit of a hack when S3 metadata exists. I think we should use metadata to store repo, pull, env, and subDirs. Then our key structure can be changed separately from the file metadata.
  • We could then change the keys, but we'd still need to keep them unique and still be able to pull based on prefix. Maybe {prefix}/{owner}/{repo}/{pullNum}/{parent}/{child}/{env}.tfplan?
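A sketch of building the proposed key (hypothetical helper; the metadata write via the AWS SDK is omitted, and naming the empty env "default" is an assumption to keep keys unique):

```go
package main

import (
	"fmt"
	"strings"
)

// planKey builds the proposed S3 key
// {prefix}/{owner}/{repo}/{pullNum}/{path}/{env}.tfplan, where path may
// contain sub dirs and env may be empty (the default environment).
func planKey(prefix, owner, repo string, pullNum int, path, env string) string {
	if env == "" {
		env = "default" // assumption: name the empty env explicitly so keys stay unique
	}
	key := fmt.Sprintf("%s/%s/%s/%d/%s/%s.tfplan", prefix, owner, repo, pullNum, path, env)
	// collapse the double slash left by an empty (root) path
	return strings.Replace(key, "//", "/", -1)
}

func main() {
	fmt.Println(planKey("plans", "lkysow", "atlantis-terraform-test", 10, "parent/child", "staging"))
	// plans/lkysow/atlantis-terraform-test/10/parent/child/staging.tfplan
}
```

Listing by the {prefix}/{owner}/{repo}/{pullNum}/ prefix then still returns every plan for a pull, which is what the apply flow needs.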

Add post_apply command

Add a post_apply command. This functionality would be super helpful for performing tests after deploying infrastructure.

When extracting data from GitHub objects, check for nil.

Right now we're writing code like

func (r *RequestParser) extractPullData(pull *github.PullRequest, params *CommandContext) error {
	commit := pull.Head.SHA
	if commit == nil {
		return errors.New("key 'pull.head.sha' is null")
	}

but if pull or Head were nil, this would panic. We just need to check these more carefully.
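A nil-safe sketch (the stand-in types mirror the go-github pointer-heavy structs so the example is self-contained):

```go
package main

import (
	"errors"
	"fmt"
)

// Minimal stand-ins for the go-github types.
type PullRequestBranch struct {
	SHA *string
}

type PullRequest struct {
	Head *PullRequestBranch
}

// extractSHA checks every pointer on the way down instead of only the
// leaf, so a nil pull or a nil Head returns an error rather than panicking.
func extractSHA(pull *PullRequest) (string, error) {
	if pull == nil || pull.Head == nil || pull.Head.SHA == nil {
		return "", errors.New("key 'pull.head.sha' is null")
	}
	return *pull.Head.SHA, nil
}

func main() {
	_, err := extractSHA(&PullRequest{}) // Head is nil: error, not panic
	fmt.Println(err)
}
```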

Figure out test format and best practices and apply to existing tests

Proposal

  • place tests under package {package under test}_test to enforce testing the external interfaces
  • if you need to test internals, i.e. access non-exported stuff, name the file {file under test}_internal_test.go
  • use testing_util for easier-to-read assertions: import . "github.com/hootsuite/atlantis/testing_util"
  • don't try to describe the whole test with a camel case test function name. Instead use t.Log statements:
// don't do this
func TestLockingWhenThereIsAnExistingLockForNewEnv(t *testing.T) {
    ...
}

// do this
func TestLockingExisting(t *testing.T) {
    t.Log("if there is an existing lock, lock should...")
    ...

    t.Log("...succeed if the new project has a different path")
    { // optionally wrap in a block so it's easier to read
        ...
    }
}
