spread's People

Contributors

ethernetdan, kern, mfburnett, zoidyzoidzoid

spread's Issues

Centralize and improve build scripts + build docs

Right now we have code for verification and building in:

  • /make.sh (unused)
  • .travis.yml
  • redspread/homebrew-spread

We should centralize these build processes and ensure there is minimal duplication of build logic.

I haven't settled on how to set this up; I like the simplicity of Makefiles but am not sure they justify the dependency on make.

Related tasks

  • enforce gofmt (#80)
  • Remove make.sh (#64)
  • Document build process (#57)

Implement Application entity

Currently with spread there is no way to deploy multiple RCs/pods within the same deployment. To enable this, I would like to introduce a new Entity called Application, which would function a lot like a docker-compose.yml file.

Any Entity can be attached to an Application (potentially even another Application).

This will also clean up some code in cli/deploy.go dealing with the case of only deploying objects from .k2e (if no RCs, Pods, or Containers are found). Instead of having special logic creating objects, input.Build() would return an Application with no attached entities and the objects from .k2e.

Deployable Entities

Deployable Entities are entities that can be deployed to Kubernetes and which can be created by an Input source. Entities represent the hierarchy of Kubernetes configuration objects along with some objects created by Redspread.

Hierarchy

Each Entity holds the state of the configuration it represents. The hierarchy, from highest to lowest, is:

  1. Application
  2. ReplicationController
  3. Pod
  4. Container
  5. Image

ReplicationController, Pod, and Container have struct representations in Kubernetes; Application and Image do not.

Other objects

Objects such as services, secrets, and volumes can also be attached to any level of an Entity. In Entities from directories these are stored in the .kube directory.

Deployment

Entities implement an interface similar to:

type Deployable interface {
    Deployment() Deployment
}

A Deployment represents a collection of every deployable Kubernetes object. It can be deployed to a Kubernetes cluster, using labels to identify its objects as belonging to the deployment. Those labels can be queried to retrieve Deployments from a cluster.

Non-Kube Entities

This representation of an Entity adds two new levels to the existing Kubernetes object hierarchy.

Application

Applications consist of a collection of Entities which are deployed together.

example:

// implements Deployable
type Application struct {
    // ...
    Entities []Deployable
    // ...
}

func (a Application) Deployment() (deployment deploy.Deployment) {
    for _, v := range a.Entities {
        deployment = deploy.Merge(deployment, v.Deployment())
    }
    return deployment
}

Image

The Kubernetes representation of an Image is a single field within the Container struct. Since spread is required to ensure that images have been built, it needs a bit more information, including the Docker context to build and other build options.

Images and Builds

While Kubernetes represents an image as a single line in the Container struct, spread needs more information in order for images to be built during deployment.

Image

Image has the name and tag of the Docker image it refers to. It has an optional Build field which, if set, contains the configuration to build the image.

type Image struct {
    Name     string
    Tag      string
    Registry string
    Build    *Build
}

Build

Build contains the path of the Docker context and build configuration details.

import (
    docker "github.com/fsouza/go-dockerclient"
)

type Build struct {
    ContextPath string
    Config      docker.BuildImageOptions
}

Deployment tests don't compile

It looks like when we started referring to Deployments with pointers we did not update the tests, causing them to fail to compile.

Problem in pkg/deploy/deployment_test.go

Implement building functionality in `spread build`

In addition to creating and updating Kubernetes objects, spread deploy will also locally build a Docker context based on the path specified. It should then push to a Docker registry (if we’re not building it, we assume it shouldn’t be pushed).

Images are marked for building with an ampersand (&) placed immediately before the image name in the container struct. By default, only images that were built are pushed to the registry, using configuration from ~/.docker/config.json.
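
A rough sketch (not the actual implementation) of how the marker might be interpreted; the helper name is made up:

import "strings"

// parseImageField interprets the `&` build-marker convention described above.
// It returns the image name with the marker removed and whether the image
// should be built (and therefore pushed).
func parseImageField(field string) (name string, build bool) {
    if strings.HasPrefix(field, "&") {
        return strings.TrimPrefix(field, "&"), true
    }
    return field, false
}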

Minimal CLI Setup

I've had good experiences with the command line interface library cli.go, so I propose using it for spread. The executable package will be rsprd.com/spread/cmd/spread.

I like that it is able to generate help text and seems to scale to tools of various sizes.
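
A minimal sketch of what the executable package could look like with cli.go; the command shown is only illustrative, and the Action signature varies between cli.go versions:

package main

import (
    "os"

    "github.com/codegangsta/cli"
)

func main() {
    app := cli.NewApp()
    app.Name = "spread"
    app.Usage = "deploy Kubernetes objects from the working directory"

    app.Commands = []cli.Command{
        {
            Name:  "deploy",
            Usage: "deploy objects to a Kubernetes cluster",
            Action: func(c *cli.Context) {
                // deployment logic would go here
            },
        },
    }

    app.Run(os.Args)
}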

Switch to /vendor from Godeps

Kubernetes is having problems with using Godeps rewritten imports. My plan is to switch to the Go 1.5 vendor experiment.

spread deploy doesn't work with non-default kubectl contexts

$ spread deploy . nondefaultcontext

Deploying 3 objects using the context 'nondefaultcontext'.
Deployment found. Did not deploy.: could not get 'default/redspread-registry' (ReplicationController): Get http://localhost:8080/api/v1/namespaces/default/replicationcontrollers/redspread-registry?export=true: dial tcp [::1]:8080: getsockopt: connection refused

Expected behavior is to use the context or return an error if the context doesn't exist.

Enable commands to disable themselves

I came across the need to disable commands when I wanted to add a debugging tool to spread but had no way to do so without exposing it to users.

The reflection logic for menu generation should be altered to look for methods returning a cli.Command pointer rather than a value. This allows a method to return nil if its command is disabled.
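
A sketch of the idea, with hypothetical names for the CLI type and the flag that disables the command:

// SpreadCli and debugEnabled are hypothetical names for illustration.
type SpreadCli struct {
    debugEnabled bool
}

// Debug returns the debug command, or nil when it should not be exposed.
func (s SpreadCli) Debug() *cli.Command {
    if !s.debugEnabled {
        return nil // disabled: left out of the generated menu
    }
    return &cli.Command{
        Name:  "debug",
        Usage: "internal debugging tool",
    }
}

// commands filters out nil (disabled) commands when building the menu.
func commands(cmds ...*cli.Command) (enabled []cli.Command) {
    for _, c := range cmds {
        if c != nil {
            enabled = append(enabled, *c)
        }
    }
    return enabled
}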

Better docs for Entity usage

There are certain methods that are frequently used throughout the entity package that could benefit from better docs.

Specifically (and not exclusively):

  • A package description should be written.
  • The data() method found on several different types of Entity has behavior around error handling and defaults that should be documented.
  • There are several unexported methods with non-obvious usage that could use documentation.
  • Exported methods should be checked for undocumented caveats and error cases.

KubeCluster

KubeCluster provides access to managing a Kubernetes cluster. It abstracts the configuration of the Kubernetes client from its usage.

Requirements

  • should be able to be created from a kubectl context (including default)
  • should be able to be created using kubectl parameters
  • should implement Deployer from #19 (eventually we want to implement Manager)

Implementation

The type wraps the Kubernetes API client, using the configuration specified in the constructor (either a kubectl context or individual parameters).

Deploy method

Deploy iterates through each typed slice of Kubernetes objects in the Deployment and creates each object within the cluster. The Deployment name should be recorded in an annotation. An error is returned if any object cannot be created.
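
A minimal sketch of the annotation bookkeeping, assuming a local KubeObject interface and a hypothetical annotation key (the real implementation would go through the Kubernetes client):

// KubeObject and the annotation key below are assumptions for this sketch,
// not the real spread or Kubernetes client API.
type KubeObject interface {
    GetAnnotations() map[string]string
    SetAnnotations(map[string]string)
}

const deploymentAnnotation = "rsprd.com/deployment" // hypothetical key

// markDeployment stamps every object with the Deployment's name so the
// Deployment can be queried back from the cluster later.
func markDeployment(name string, objects []KubeObject) {
    for _, obj := range objects {
        annotations := obj.GetAnnotations()
        if annotations == nil {
            annotations = map[string]string{}
        }
        annotations[deploymentAnnotation] = name
        obj.SetAnnotations(annotations)
    }
}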

Rollback

I'm not sure of the ideal behavior for rolling back deploys; let's wait for user feedback and decide.

Including a Changelog

A changelog is convenient for users who want a high-level overview of the most important changes between releases.

Error when deploying `kube-mattermost` on Kube 1.2 alpha

I'm running Kubernetes 1.2 alpha 8. I was able to install spread successfully via the instructions on the README, but when deploying I get an error:

~/D/kube-mattermost (master) $ spread deploy .
Deploying 3 objects using the default context.
Did not deploy.: could not get 'default/mattermost-app' (Service): export of "services" is not supported

Remove constructor for FileSource

FileSource currently has a constructor which checks whether a path exists before returning a FileSource. This, however, is unnecessary because each FileSource method has failure logic (through kubectl) for when paths don't exist.

Since FileSource is of type string, creation can be accomplished through conversion.
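
For example, assuming FileSource remains a string type, creation is just a conversion (the path is illustrative):

src := FileSource("rc.yaml") // plain type conversion; no constructor needed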

Check golint output from pkg/deploy

We should check these out

$ go get -u github.com/golang/lint/golint
$ golint rsprd.com/spread/pkg/deploy
/Users/dan/kube/src/rsprd.com/spread/pkg/deploy/deployment.go:46:10: if block ends with a return statement, so drop this else and outdent its block (move short variable declaration to its own line if necessary)
/Users/dan/kube/src/rsprd.com/spread/pkg/deploy/deployment.go:60:10: if block ends with a return statement, so drop this else and outdent its block (move short variable declaration to its own line if necessary)
/Users/dan/kube/src/rsprd.com/spread/pkg/deploy/deployment.go:74:10: if block ends with a return statement, so drop this else and outdent its block (move short variable declaration to its own line if necessary)
/Users/dan/kube/src/rsprd.com/spread/pkg/deploy/deployment.go:88:10: if block ends with a return statement, so drop this else and outdent its block (move short variable declaration to its own line if necessary)
/Users/dan/kube/src/rsprd.com/spread/pkg/deploy/deployment.go:102:10: if block ends with a return statement, so drop this else and outdent its block (move short variable declaration to its own line if necessary)
/Users/dan/kube/src/rsprd.com/spread/pkg/deploy/deployment.go:116:10: if block ends with a return statement, so drop this else and outdent its block (move short variable declaration to its own line if necessary)
/Users/dan/kube/src/rsprd.com/spread/pkg/deploy/deployment.go:130:10: if block ends with a return statement, so drop this else and outdent its block (move short variable declaration to its own line if necessary)
/Users/dan/kube/src/rsprd.com/spread/pkg/deploy/deployment.go:329:2: exported var ErrorObjectNotSupported should have comment or be unexported

Refactor Entity

This might be a good idea.

Poor code reuse

There is a lot of identical behavior shared between Entity constructors. This repetition could be eliminated.
This includes:

  • Kubernetes object/image nil checks
  • Entity base creation
  • Kubernetes object deep copying (the deep copy implementation being used accepts interface{} for the value so can be used interchangeably)
  • Setting default object metadata
  • Validation (maybe)

Generic Entity constructor

func NewEntity(entity deploy.KubeObject, defaults kube.ObjectMeta, source string, objects ...deploy.KubeObject) (Entity, error) {}

Inconvenient external use

As demonstrated here, not having a generic constructor creates extra code. Using this constructor, the same directory search code could be used for Pods, RCs, and eventually Apps (when they are implemented).

Add logging library

We should have logging to help trace the construction of entities and the usage of inputs.

I've used logrus in the past and like it but I'm open to options.
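
If logrus were chosen, tracing entity construction could look roughly like this (the field names are illustrative):

import log "github.com/Sirupsen/logrus"

// traceEntity logs one entry per constructed entity.
func traceEntity(kind, source string) {
    log.WithFields(log.Fields{
        "kind":   kind,   // e.g. "Pod"
        "source": source, // e.g. "pod.yml"
    }).Debug("built entity")
}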

Deployment Interfaces

I propose the creation of 2 interfaces for managing and deploying Deployments, Deployer and Manager.

Deployer

// Deployer can deploy Deployments
type Deployer interface {
    // Deploy creates Deployment using the name requested. The name should be stored so that it can be queried by Manager.
    // Returns error if Deployment exists and replace is false.
    Deploy(d *Deployment, name string, replace bool) error
}

Manager

// Manager provides functionality to manipulate running Deployments
type Manager interface {
    Deployer
    // Deployment returns the named Deployment. Returns ErrNotFound if it doesn't exist.
    Deployment(name string) (*Deployment, error)
    // Stop will remove the requested Deployment
    Stop(name string) error
}
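
A purely illustrative caller-side sketch of how the two interfaces compose:

// ensureDeployed creates the named Deployment, replacing it only when a
// Deployment with that name is already running.
func ensureDeployed(m Manager, d *Deployment, name string) error {
    _, err := m.Deployment(name)
    if err == ErrNotFound {
        return m.Deploy(d, name, false) // nothing running under this name yet
    } else if err != nil {
        return err
    }
    return m.Deploy(d, name, true) // replace the existing Deployment
}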

Integrate code coverage into CI

We should keep track of these numbers, especially for core packages like pkg/entity. In the future we should fail on builds lacking some percentage of coverage.

Refactor Deployment to work more generically

Currently there is a large amount of logic which is repeated for every Kubernetes object type. When I initially wrote Deployment I thought having access to concrete object types would be useful, but it's become clear that in most cases it's easier to deal with them generically.

Instead, KubeObject can be used to refer to objects generically. In instances such as validation, where concrete types are desired, type switches and assertions are more than adequate.

Additionally, as the project has evolved, the role of Deployment should be revisited.

Data type

I see a few ways potentially to store Deployments:

A

import "k8s.io/kubernetes/pkg/api/unversioned"

// Map of GroupKind => Map of `<name>/<namespace>` => KubeObject 
objects map[unversioned.GroupKind]map[string]KubeObject

B

// Map of `<name>/<namespace>` => KubeObject 
objects map[string]KubeObject

C

// Simply a slice of KubeObjects
objects []KubeObject

Trade-offs

  • A allows for fast lookups by GroupKind
    • There has been a need for this and the current solution is messy
  • B and C allow for simple ranging of the entire deployment, A requires some more logic.
  • C will be O(n) for basically every operation
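
To make the trade-off concrete, a lookup under option A might look like this sketch (assuming the map from option A and the existing KubeObject type):

import "k8s.io/kubernetes/pkg/api/unversioned"

// ObjectsOfKind returns every object of a given GroupKind. With option A this
// is a single map lookup; with B or C it would require scanning every object.
func (d *Deployment) ObjectsOfKind(gk unversioned.GroupKind) []KubeObject {
    byName, ok := d.objects[gk]
    if !ok {
        return nil
    }
    objs := make([]KubeObject, 0, len(byName))
    for _, obj := range byName {
        objs = append(objs, obj)
    }
    return objs
}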

Exported

I am uncertain whether or not Deployment should remain unexported.

Package

Deployment currently functions as a snapshot of the state of a set of Kubernetes objects, which is not inherently related to deploying. Given that this is a useful abstraction, Deployment should probably be separated from deploying, both in terms of package and conceptually.

Input Interface

An Interface should be created similar to:

type Input interface {
    Entity() (entity.Entity, error)
}

The Entity should be created with a source that is easily human-identifiable (e.g. pod.yml).

Multiple input sources will be able to coexist together, opening the possibility of hybrid models of input.

Create constructor for Deployment

In order to avoid the creation of illegal references to deployments, we should create a constructor and unexport the struct.

Came up fixing #15.

Implement spread status

We need some way to inspect the current state of objects found in the working directory.

I propose the introduction of a new subcommand, spread status to fulfill this purpose.

The command should at least:

  • enumerate every Entity found using the Redspread directory conventions in the current working directory
  • list the source of each Entity

Rewrite Entity error messages to be less generic

Currently, Entity will return vague error messages which make it hard to figure out what the failed operation was. Error messages should describe what the Entity is and what has gone wrong.

Implement spread deploy

Implement CLI command which is able to deploy Kubernetes objects stored using the Redspread directory convention to a Kubernetes cluster using kubectl contexts.

Solving a dependency nightmare

So I've quickly learned that we're going to run into trouble unless we lock our dependencies to the versions required by Kubernetes. The issue first arose because the image package relies on a newer version of Docker than Kubernetes is able to build with, but this was a long time coming.

I made several attempts at making this happen and ended up in a bit of a mess. My first attempt was to use Godeps (see #21), but I ran into an issue with Kubernetes not playing well with Godeps' rewriting of imports. My second attempt was to switch from Godeps to the vendor experiment; this resolved the import check problem with Kubernetes but left a convoluted commit history. However, I am concerned about the way it was set up.

I am going to rewind back into the commit history and apply this fix. This operation breaks the commit history and leaves dangling PRs, so it's something that should absolutely be avoided in the future.

This is my action plan:

  • Backup git on another private repo + locally
  • Reset to 6880c04 (PR #16)
  • Branch to dep-fix
  • Add implementation of github.com/docker/docker/reference to git index
  • Fix development_test.go go spew import
  • Fix input/input.go
  • Setup GOPATH for Kube and enable alias
  • godep restore on kubernetes repo
  • go get ./… deps into path
  • Run tests
  • Commit as “Prep to version deps: reimplemented some Docker reference methods, fixed wrong spew import, fixed sloppy refactor”
  • Run export GO15VENDOREXPERIMENT=1
  • godep save ./…
  • go get -v to ensure all vendored
  • Run tests
  • Add travis configuration from original
  • Commit as “Versioned dependencies in /vendor (recloses #21, recloses #28)”
  • Confirm travis build and recommit until working
  • Add command disabling
  • Commit as “Added command disabling (recloses #24)”
  • DANCE BREAK!
  • Force push 6880c04 to master on Github
  • Create and merge PR from dep-fix

Setup CI

We should have a CI setup for running tests and validating builds.

It will have to set up a Docker daemon in order to run tests for building.

Integration of Spread with CI

We imagine people will want to integrate Spread with their CI system with deploys triggered by pushed commits. We're open to discussion around how that setup will work.

DirectoryInput

DirectoryInput produces an Entity by scanning the directory structure of a given path and building an Entity tree based on a file naming convention. Inputting identical directories should produce identical Entities.

Schema

Files:

  • rc.yaml - holds RC
  • pod.yaml - holds Pod
  • *.ctr - holds a container; there can be any number of these

Directories:

  • .k2e/ - holds arbitrary Kubernetes objects, will be included in Entities

Note: There can only be 1 Pod and 1 RC per directory.

Type Ordering

For reference, the ordering of Entities is:

  1. ReplicationController
  2. Pod
  3. Container
  4. Image

Operation

DirectoryInput should use the Kubernetes decoding libraries to get Kubernetes objects from the files and directories listed above.

It should operate the following way:

  • Check the directory for the lowest Entity
  • Check the directory for higher Entities (up to Container) and attempt to attach them
    • e.g. a Container could attach to an RC
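
A rough sketch of the file-discovery step under the naming convention above; the helper is hypothetical, and the decoding itself would use the Kubernetes libraries:

import (
    "os"
    "path/filepath"
)

// directoryFiles lists the files a DirectoryInput would consider for one
// directory; decoding them into Kubernetes objects is left to the Kubernetes
// decoding libraries mentioned above.
func directoryFiles(dir string) (rc, pod string, containers []string, err error) {
    if _, statErr := os.Stat(filepath.Join(dir, "rc.yaml")); statErr == nil {
        rc = filepath.Join(dir, "rc.yaml")
    }
    if _, statErr := os.Stat(filepath.Join(dir, "pod.yaml")); statErr == nil {
        pod = filepath.Join(dir, "pod.yaml")
    }
    containers, err = filepath.Glob(filepath.Join(dir, "*.ctr"))
    return rc, pod, containers, err
}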

Deployment

Deployment represents Kubernetes objects that can be deployed. It stores a slice of each type of deployable Kubernetes object. It can be used to create deployments and is how the current state of a deployment is returned.

If the Namespace field is set, then all objects without a namespace explicitly set will default to its value.

Deployments can have sets of Annotations and Labels which will be applied to each deployed object. If an Annotation or Label already exists, it will be overwritten.
For example:

type Deployment struct {
    // Globals
    Name        string
    Namespace   string
    Labels      map[string]string
    Annotations map[string]string

    // Objects
    ReplicationControllers []kube.ReplicationController
    Services               []kube.Service
    // ...
}
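
A hedged sketch of the defaulting and overwrite behavior described above; applyGlobals is a hypothetical helper, and kube refers to the Kubernetes API package as elsewhere in these issues:

// applyGlobals copies the Deployment's namespace, labels, and annotations onto
// an object's metadata; existing keys are overwritten, and the namespace is
// only defaulted when the object does not set one itself.
func (d *Deployment) applyGlobals(meta *kube.ObjectMeta) {
    if meta.Namespace == "" {
        meta.Namespace = d.Namespace
    }
    if meta.Labels == nil {
        meta.Labels = map[string]string{}
    }
    for k, v := range d.Labels {
        meta.Labels[k] = v
    }
    if meta.Annotations == nil {
        meta.Annotations = map[string]string{}
    }
    for k, v := range d.Annotations {
        meta.Annotations[k] = v
    }
}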

Implement Images

A type to represent a Docker image within spread is required.

For entity.Image, a minimal implementation of Image was created. This implementation should be expanded without introducing breaking changes.

Requirements

  • should be able to be produced using the contents of the "Image" field in kube.Container
  • should be able to represent itself in a kube.Container "Image" field
  • should be able to produce a human friendly name
  • should be able to produce a DNS Label for use in Pod and RC naming

The current implementation has a Build field; this will be built out later.
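
A hedged sketch of converting between the kube.Container "Image" string and the Image struct from the earlier issue; the parsing is simplified and ignores registries with ports, digests, and similar cases:

import "strings"

// KubeImage renders the Image as Kubernetes expects it in the Container
// "Image" field (registry/name:tag).
func (i Image) KubeImage() string {
    name := i.Name
    if i.Registry != "" {
        name = i.Registry + "/" + name
    }
    if i.Tag != "" {
        name += ":" + i.Tag
    }
    return name
}

// imageFromString is a simplified inverse that splits the field into name and
// tag; registry handling is omitted for brevity.
func imageFromString(s string) Image {
    name, tag := s, ""
    if idx := strings.LastIndex(s, ":"); idx != -1 {
        name, tag = s[:idx], s[idx+1:]
    }
    return Image{Name: name, Tag: tag}
}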

FileSource error messages

FileSource produces some extraordinarily vague error messages. For example:

$ spread deploy .
build
Error using `.`: yaml: line 22: did not find expected key

It should print the file name when parsing errors occur.

Also, there are a few erroneous print statements, like the "build" above.

Godeps

We need to get the project set up on Godeps. The immediate concern is that the Kubernetes build is broken at the current Docker and Kubernetes HEAD, as documented in kubernetes/kubernetes#18774, but this is something that should be addressed for the long term anyway.

The plan is to lock Kubernetes at its master's current version (kubernetes/kubernetes@58e09b4) and Docker immediately before what seems to be the breaking commit (moby/moby@5fc0e1f3).

Renaming a container creates a duplicate on update

If a previously deployed container has a name change (whether in a pod.yml, rc.yml, or *.ctr), a duplicate object will be created.

This can be resolved by pruning any additional objects or changing the way diffs are made.
