screwdriver's Issues

Store SCM as ID instead of URL

I propose that we store an scm_id in addition to an scm_url in the pipeline model. The ID should be the unique field, and the URL should just be a display value for the user.

We can generate the ID with a call to scmModel.getID(url). For GitHub that would be done via calls to https://developer.github.com/v3/repos/#get and https://developer.github.com/v3/repos/branches/#get-branch, returning something like 123456:master. We know that branches are case-sensitive and that repositories have a unique identifier. Additionally, we can access the repository later (regardless of renames) via https://api.github.com/repositories/:id
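
A minimal sketch, assuming the Node request-promise library and GitHub as the SCM, of how scmModel.getID(url) could combine those two API calls; the URL parsing and error handling here are illustrative only:

// Hypothetical sketch of scmModel.getID(url) for GitHub.
const request = require('request-promise');

function getID(scmUrl) {
    // e.g. "git@github.com:screwdriver-cd/api.git#master"; throws if the URL doesn't match
    const [, owner, repo, branch = 'master'] =
        scmUrl.match(/^git@github\.com:([^/]+)\/(.+?)\.git(?:#(.+))?$/);
    const headers = { 'User-Agent': 'screwdriver' }; // required by the GitHub API

    // https://developer.github.com/v3/repos/#get -> numeric repository id
    return request({ uri: `https://api.github.com/repos/${owner}/${repo}`, headers, json: true })
        .then(repoData =>
            // https://developer.github.com/v3/repos/branches/#get-branch -> canonical branch name
            request({
                uri: `https://api.github.com/repos/${owner}/${repo}/branches/${branch}`,
                headers,
                json: true
            }).then(branchData => `${repoData.id}:${branchData.name}`)); // e.g. "123456:master"
}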

This stems from #82

Secret Management

As a user, I need to have access to my secrets, so I can publish / deploy my application

One of the big blockers with continuous delivery in existing tools is the availability of
secrets to the build. Secrets are needed for publishing packages or code, deploying to
services, and remotely testing whether a service is working.

Existing systems usually fall into one of three camps for secrets:

  • A secret can be available to all people on the shared system
  • A secret can be available just to your pipeline
  • A secret can be available just to a job in your pipeline

Screwdriver should provide a combination of options two and three. This gives developers the
ability to specify secrets that all jobs need to use, and then restrict production secrets
to just the production jobs.

Additionally, developers should be able to customize which secrets are available in a pull
request, even as fine-grained as whether a secret should be available in a forked pull request.

Rules:

  • Saved secrets cannot be viewed by developers; only their names should be available
  • Secrets can only be listed, added, changed, or removed by admins of the pipeline
  • Secrets can be strings or generated SSH keys
  • Secrets should be available as environment variables

Work

Preparation

  • Cucumber feature file
  • Secret interface

Implementation

  • Screwdriver.yaml changes
  • Launcher changes
  • API changes
  • Store changes

Outcome

  • Screwdriver publishing itself
  • Cucumber feature passing
  • Documentation

Notes

List secrets attached to a build

  • endpoint: /v3/builds/:id/secrets
  • filtered by this job and pipeline
  • supports pagination params
  • does not expose values (except when the build credential provided matches this same build ID)
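
A minimal sketch of a build calling this endpoint, assuming the SD_TOKEN environment variable from the launcher carries the build credential and that page/count are the pagination parameter names (both assumptions):

// Hypothetical client-side sketch; header and pagination names are assumptions.
const request = require('request-promise');

function listBuildSecrets(apiUri, buildId) {
    return request({
        uri: `${apiUri}/v3/builds/${buildId}/secrets`,
        qs: { page: 1, count: 50 }, // pagination params
        headers: { Authorization: `Bearer ${process.env.SD_TOKEN}` },
        json: true
    });
    // Values are only included when the credential belongs to this same build.
}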

Feature: #148

date -> date-time

Change 'date' to 'date-time' in swagger.json so that codegen works properly with the way we format dates and times.

Duplicate dependencies on k8s

This is causing a startup error, as it's still trying to read the token from disk.

├─┬ [email protected]
│ └─┬ [email protected]
│   └─┬ [email protected]
│     └── [email protected]
├─┬ [email protected]
│ └─┬ [email protected]
│   └── [email protected]
├─┬ [email protected]
│ └─┬ [email protected]
│   └─┬ [email protected]
│     └── [email protected]
├─┬ [email protected]
│ └─┬ [email protected]
│   └─┬ [email protected]
│     └── [email protected]
└─┬ [email protected]
  └─┬ [email protected]
    └─┬ [email protected]
      └── [email protected]

Rename to screwdriver?

I noticed that both kubernetes and spinnaker have their main repository named after the organization/product. Should we do the same?

If so, the API would be the primary component of SD, and as such we should rename this repository.

Screwdriver Dogfooding - PRs

To ensure that Screwdriver is providing the value we expect, all of our Pull Requests in this organization should be tested through Screwdriver.

Expected features:

  • Opening a PR should create a PR job and start it
  • Syncing a PR should stop the existing PR job and start a new one
  • Closing a PR should stop the PR job and disable it
  • PR commit status should be updated on start and stop
  • Job should checkout from the desired branch and merge the pull request on top of it (emulating what GitHub would do)

Implementation details:

  • Update the commit status using the API tokens of the pipeline admins (a sketch follows this list)
  • Build should run the main job
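
A hedged sketch of the first implementation detail above, updating a commit status through the GitHub statuses API with an admin's OAuth token; the context and description values are illustrative:

// Hypothetical sketch of setting a PR commit status via the GitHub API.
const request = require('request-promise');

function updateCommitStatus(adminToken, owner, repo, sha, state) {
    // https://developer.github.com/v3/repos/statuses/#create-a-status
    return request({
        method: 'POST',
        uri: `https://api.github.com/repos/${owner}/${repo}/statuses/${sha}`,
        headers: {
            Authorization: `token ${adminToken}`,
            'User-Agent': 'screwdriver'
        },
        body: {
            state,                  // 'pending' | 'success' | 'failure'
            context: 'screwdriver', // label shown on the PR checks list
            description: `Build ${state}`
        },
        json: true
    });
}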

Empowered Launcher

This is a major change to the launcher and how we execute it. Previously we would provide fields like: jobId, pipelineId, repository, and branch. We want to reduce the context provided to the launcher and instead empower the launcher to get the information it needs.

This has multiple benefits, such as:

  • Only one place to change (launcher) when adding new information like sha1 or is_forked
  • Launcher can now make requests on behalf of the build

Fields to be provided to executor:

  • buildId - Build Identifier (to look up stuff)
  • container - Container to start in
  • apiUri - URI to hit our API
  • token - Token to be able to read/write from the API

Input provided to launcher:

# docker run ... {{container}} 
SD_TOKEN={{token}} /opt/screwdriver/launch --api-uri {{apiUri}} {{buildId}}
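
A rough sketch of what the launcher could do with this reduced context, walking from the build to its job and pipeline via the API; the endpoint paths and field names are assumptions, not a confirmed contract:

// Hypothetical sketch: the launcher looks everything else up from the buildId.
const request = require('request-promise');

function fetchBuildContext(apiUri, buildId, token) {
    const headers = { Authorization: `Bearer ${token}` };
    const api = path => request({ uri: `${apiUri}${path}`, headers, json: true });

    return api(`/v3/builds/${buildId}`)                          // sha, jobId, ...
        .then(build => api(`/v3/jobs/${build.jobId}`)            // pipelineId, job name
            .then(job => api(`/v3/pipelines/${job.pipelineId}`)  // repository / branch
                .then(pipeline => ({ build, job, pipeline }))));
}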

Build started by non-logged in user

Given a scenario where the user "Jer" has never logged in to Screwdriver and has not authorized it: when that user opens a PR, the API complains that the user does not exist in the ecosystem yet, and the server returns a 500.

Unhandled error crashed the server

The specific bug that triggered it has been resolved in screwdriver-cd/models#50, but the server should never exit like this:

160812/191904.322, [response] http://0.0.0.0:8080: post /v3/webhooks/github {} 500 (893ms)
160812/191906.047, [request,webhook-build,bb12771e794ac72679ea2538219fef33f7789432] data: Received status update to RUNNING
/usr/src/app/node_modules/vogels/lib/serializer.js:35
return new Date(value).toISOString();
^
RangeError: Invalid time value
at Object.internals.serialize.date (/usr/src/app/node_modules/vogels/lib/serializer.js:35:30)
at Object.internals.serializeAttribute.serializer.serializeAttribute [as serializeAttribute]
at /usr/src/app/node_modules/vogels/lib/expressions.js:70:44
at /usr/src/app/node_modules/vogels/node_modules/lodash/index.js:2523:13
at /usr/src/app/node_modules/vogels/node_modules/lodash/index.js:3073:15
at baseForOwn (/usr/src/app/node_modules/vogels/node_modules/lodash/index.js:2046:14)
at /usr/src/app/node_modules/vogels/node_modules/lodash/index.js:3043:18
at baseReduce (/usr/src/app/node_modules/vogels/node_modules/lodash/index.js:2520:7)
at Function.<anonymous> (/usr/src/app/node_modules/vogels/node_modules/lodash/index.js:3446:13)
at Object.exports.serializeUpdateExpression (/usr/src/app/node_modules/vogels/lib/expressions.js:53:18)
npm info lifecycle [email protected]~start: Failed to exec start script
npm ERR! Linux 3.16.0-4-amd64
npm ERR! argv "/usr/local/bin/node" "/usr/local/bin/npm" "start"
npm ERR! node v6.3.1
npm ERR! npm v3.10.3
npm ERR! code ELIFECYCLE
npm ERR! [email protected] start: ./bin/server
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] start script './bin/server'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the screwdriver-api package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! ./bin/server
npm ERR! You can get information on how to open an issue for this project with:
npm ERR! npm bugs screwdriver-api
npm ERR! Or if that isn't available, you can get their info via:
npm ERR! npm owner ls screwdriver-api
npm ERR! There is likely additional logging output above.
npm ERR! Please include the following file with any support request:
npm ERR! /usr/src/app/npm-debug.log
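
Independent of the models fix, a minimal sketch (assuming a plain Node entry point) of catching process-level errors so one bad record is logged rather than taking the whole API down; whether the process should still exit afterwards is a separate decision:

// Sketch only: log process-level failures instead of crashing immediately.
process.on('uncaughtException', (err) => {
    console.error('uncaughtException:', err.stack || err);
    // optionally flush logs / mark the affected build as failed, then decide whether to exit
});

process.on('unhandledRejection', (reason) => {
    console.error('unhandledRejection:', reason);
});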

More Verbose Server Logs

Right now the server logs are quite minimal:

160826/235420.012, [response] http://api.screwdriver.cd: post /v3/webhooks/build {} 204 (533ms)

We should update them to include things like remote address, user agent, etc.
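
A minimal sketch of what that could look like, assuming the pre-v17 Hapi event API in use at the time (server.on rather than server.events.on) and that the existing log reporter picks up server.log entries:

// Sketch: enrich response logging with remote address, user agent, and timing.
const Hapi = require('hapi');

const server = new Hapi.Server();

server.connection({ port: 8080 });

server.on('response', (request) => {
    server.log(['response'], {
        method: request.method,
        path: request.path,
        statusCode: request.response && request.response.statusCode,
        remoteAddress: request.info.remoteAddress,
        userAgent: request.headers['user-agent'],
        responseTime: Date.now() - request.info.received
    });
});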

Logs should be chunked in X-line increments

Right now we upload the entire log of a step to the log service on each update as well as read from the entire step log on each read. This can be computationally (and financially) expensive when we're trying to read from the end of a long step log, especially if the step takes a while.

I propose that we switch to storing logs in chunks instead of one big file per step:

  • /builds/:id/:step/1 - 100 lines
  • /builds/:id/:step/2 - 21 lines

Reading from line 110 would then only have to load the 21-line file at :step/2.
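
A small sketch of the chunk arithmetic, assuming 1-indexed line numbers, a fixed chunk size of 100 lines, and the /builds/:id/:step/:chunk layout proposed above:

// Sketch: map a line number to the chunk file that contains it.
const LINES_PER_CHUNK = 100;

function chunkKey(buildId, step, lineNumber) {
    const chunk = Math.ceil(lineNumber / LINES_PER_CHUNK); // lines 1-100 -> 1, 101-200 -> 2

    return `/builds/${buildId}/${step}/${chunk}`;
}

// chunkKey(1234, 'test', 110) === '/builds/1234/test/2'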

New Logo

This is our current logo:
[image: screwdriver_logo]

It's more a placeholder than an actual logo. Any suggestions, ideas, or implementations for a better logo?


Things to update:

  • Logo on page
  • Logo on homepage
  • Guide logo
  • Favicon
  • iPhone icon
  • GitHub OAuth application
  • NPM Gravatar
  • GitHub Gravatar

Bootstrap Screwdriver Tool

This should be a tool that a user can run to bootstrap a Screwdriver setup:

  • Create Secrets (JWT, Kubernetes, etc)
  • Deploy API (Kubernetes or Locally)
  • Deploy Store (Kubernetes or Locally)
  • Deploy UI (Kubernetes or Locally)
  • Create Database (IMDB or DynamoDB)

Think of it as a replacement/expansion of dynamic-dynamodb

Proposed repo name: bootstrap

500 when Github sends a synchronize event

I guess that's what it sends when you force-push a branch for a PR:

Request URL: http://a4677c9873c9611e6aa7102b92f75d5c-1135862614.us-west-2.elb.amazonaws.com/v3/webhooks/github
Request method: POST
content-type: application/json
Expect:
User-Agent: GitHub-Hookshot/ebd57e0
X-GitHub-Delivery: 51d2f800-656d-11e6-98e9-e30f7b03e6aa
X-GitHub-Event: pull_request
X-Hub-Signature: sha1=f995b5c95a9487d84a4f6725a14d5b4b363840a9


160818/175809.038, [request,webhook-github,51d2f800-656d-11e6-98e9-e30f7b03e6aa] data: Received event pull_request
160818/175809.038, [request,webhook-github,51d2f800-656d-11e6-98e9-e30f7b03e6aa] data: PR #50 synchronize for git@github.com:screwdriver-cd/launcher.git#master
160818/175809.647, [request,webhook-github,51d2f800-656d-11e6-98e9-e30f7b03e6aa,a6cb05361974ef8779c9d684619cf530d7f79fc4] data: PR-50 stopped
160818/175810.094, [request,server] data: TypeError: config.tokenGen is not a function
at BuildModel.start (/usr/src/app/node_modules/screwdriver-models/lib/build.js:150:30)
at getCommitSha.then.then.build (/usr/src/app/node_modules/screwdriver-models/lib/buildFactory.js:140:31)
at process._tickDomainCallback (internal/process/next_tick.js:129:7)
160818/175808.947, [response] http://0.0.0.0:8080: post /v3/webhooks/github {} 500 (1149ms)

Cannot create pipelines

Unable to create a new pipeline.

sd$ pipeline create git@github.com:screwdriver-cd/api.git#master
Ooops { statusCode: 500,
  error: 'Internal Server Error',
  message: 'An internal server error occurred' }
160721/060003.052, [error] message: Uncaught error: Cannot read property 'id' of undefined stack: TypeError: Uncaught error: Cannot read property 'id' of undefined
    at Pipeline.create (/Users/stjohn/Sites/Screwdriver/api/node_modules/screwdriver-plugin-pipelines/lib/create.js:38:60)
    at MyDatastore.save (/Users/stjohn/Sites/Screwdriver/api/node_modules/screwdriver-datastore-imdb/index.js:63:9)
    at get (/Users/stjohn/Sites/Screwdriver/api/node_modules/screwdriver-models/lib/pipeline.js:50:35)
    at MyDatastore.get (/Users/stjohn/Sites/Screwdriver/api/node_modules/screwdriver-datastore-imdb/index.js:48:9)
    at PipelineModel.get (/Users/stjohn/Sites/Screwdriver/api/node_modules/screwdriver-models/lib/base.js:27:31)
    at PipelineModel.create (/Users/stjohn/Sites/Screwdriver/api/node_modules/screwdriver-models/lib/pipeline.js:30:14)
    at Pipeline.get (/Users/stjohn/Sites/Screwdriver/api/node_modules/screwdriver-plugin-pipelines/lib/create.js:29:26)
    at MyDatastore.get (/Users/stjohn/Sites/Screwdriver/api/node_modules/screwdriver-datastore-imdb/index.js:48:9)
    at PipelineModel.get (/Users/stjohn/Sites/Screwdriver/api/node_modules/screwdriver-models/lib/base.js:27:31)
    at config.handler (/Users/stjohn/Sites/Screwdriver/api/node_modules/screwdriver-plugin-pipelines/lib/create.js:21:22)

Streaming/Grouped Logs

As a user, I want to read the logs of my build, both in-progress and after it is done.

For this feature, we need to take the live streaming logs and present them to the user in the web UI, grouped by step. Each step should show its exit code as well as the time it took to complete.

Move all Hapi plugins directly into this repository

Since we will be shipping this whole application as one (plugins and all), should we move all the user interface plugins (repos named plugin-*) directly into this repository to reduce the dependency and development chain?

Cannot login, token not allowed

Attempting to log in gives the following error:

160727/054041.769, [error] message: "token" is not allowed stack: ValidationError: "token" is not allowed
    at Object.exports.process (/usr/src/app/node_modules/joi/lib/errors.js:154:19)
    at _validateWithOptions (/usr/src/app/node_modules/joi/lib/any.js:601:31)
    at root.validate (/usr/src/app/node_modules/vogels/node_modules/joi/lib/index.js:102:23)
    at Schema.validate (/usr/src/app/node_modules/vogels/lib/schema.js:173:14)
    at /usr/src/app/node_modules/vogels/lib/table.js:160:30
    at /usr/src/app/node_modules/vogels/node_modules/async/lib/async.js:52:16
    at Immediate.<anonymous> (/usr/src/app/node_modules/vogels/node_modules/async/lib/async.js:1206:34)
    at runCallback (timers.js:570:20)
    at tryOnImmediate (timers.js:550:5)
    at processImmediate [as _immediateCallback] (timers.js:529:5)

Token is not in the data model: https://github.com/screwdriver-cd/data-schema/blob/master/models/user.js

500 Response from github webhook close event

160813/000203.442, [request,webhook-github,2a050f80-60e9-11e6-900d-ae68a135de5f] data: Received event pull_request
160813/000203.442, [request,webhook-github,2a050f80-60e9-11e6-900d-ae68a135de5f] data: PR #49 closed for git@github.com:screwdriver-cd/models.git#master
160813/000203.574, [request,server] data: ValidationException: Invalid KeyConditionExpression: An expression attribute value used in expression is not defined; attribute value: :jobId
at Request.extractError (/usr/src/app/node_modules/aws-sdk/lib/protocol/json.js:43:27)
at Request.callListeners (/usr/src/app/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
at Request.emit (/usr/src/app/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
at Request.emit (/usr/src/app/node_modules/aws-sdk/lib/request.js:596:14)
at Request.transition (/usr/src/app/node_modules/aws-sdk/lib/request.js:21:10)
at AcceptorStateMachine.runTo (/usr/src/app/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /usr/src/app/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> (/usr/src/app/node_modules/aws-sdk/lib/request.js:37:9)
at Request.<anonymous> (/usr/src/app/node_modules/aws-sdk/lib/request.js:598:12)
at Request.callListeners (/usr/src/app/node_modules/aws-sdk/lib/sequential_executor.js:115:18)
160813/000203.356, [response] http://0.0.0.0:8080: post /v3/webhooks/github {} 500 (220ms)

List active jobs for a pipeline: pipelines/{id}/jobs

Modify the current route so that it:

  • Only lists active jobs (archived = false)
  • Lists jobs in the order they appear in the workflow, followed by open PRs.
  • Archive jobs after they are renamed; for PR jobs, archive when the PR is closed.

This requires changes in this order:

  1. data-schema
  2. datastore-dynamodb
  3. models
  4. API
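
A rough sketch of the filtering and ordering described above (the model API, the archived flag, and the PR job naming are assumptions for illustration):

// Hypothetical sketch: active jobs only, workflow order first, then open PR jobs.
function listActiveJobs(pipeline) {
    return pipeline.getJobs().then((jobs) => {
        const active = jobs.filter(job => !job.archived);
        const workflowJobs = pipeline.workflow
            .map(name => active.find(job => job.name === name))
            .filter(job => job);                                   // keep workflow order
        const prJobs = active.filter(job => /^PR-\d+$/.test(job.name));

        return workflowJobs.concat(prJobs);
    });
}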

Launcher doesn't retry on timeout

We have retries implemented for Screwdriver API calls, but if a request times out, it is never retried.

Example:

2016/09/22 19:22:07 Error running launcher: updating step stop "test": posting to Step Stop: reading response from Screwdriver: Put https://api.screwdriver.cd/v3/builds/322239d37a0c7bb1c7214d45b30082c81a1e1899/steps/test: read tcp 100.96.138.4:35861->54.200.168.202:443: read: connection timed out
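
A small sketch of the retry idea, written in JavaScript for illustration even though the launcher itself may be implemented differently, treating network timeouts as retryable alongside 5xx responses:

// Sketch: retry a request-returning function on timeouts with exponential backoff.
function withRetry(fn, attempts = 5, delayMs = 1000) {
    return fn().catch((err) => {
        const retryable = err.code === 'ETIMEDOUT' || err.code === 'ECONNRESET' ||
            (err.statusCode && err.statusCode >= 500);

        if (attempts <= 1 || !retryable) {
            throw err;
        }

        return new Promise(resolve => setTimeout(resolve, delayMs))
            .then(() => withRetry(fn, attempts - 1, delayMs * 2));
    });
}

// usage: withRetry(() => updateStepStop('test'))  // updateStepStop is hypothetical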

Documentation - creating a build

The documentation for creating a build is outdated. It states that the payload should include container, but that's no longer the case.
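
For reference, a hedged sketch of what the corrected example could look like; apart from container no longer being required, the payload shape here is an assumption:

// Hypothetical sketch of creating a build against the v3 API.
const request = require('request-promise');

function createBuild(apiUri, token, jobId) {
    return request({
        method: 'POST',
        uri: `${apiUri}/v3/builds`,
        headers: { Authorization: `Bearer ${token}` },
        body: { jobId }, // container is no longer part of the payload
        json: true
    });
}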
