screwdriver-cd / screwdriver
An open source build platform designed for continuous delivery.
Home Page: http://screwdriver.cd
License: Other
I propose that we store an `scm_id` in addition to an `scm_url` in the pipeline model. The `id` should be the unique field and the `url` just a display value for the user. We can generate the `id` by a call to `scmModel.getID(url)`. For GitHub that would be done via a call to https://developer.github.com/v3/repos/#get and https://developer.github.com/v3/repos/branches/#get-branch with a return of `123456:master`. We know that branches are case-sensitive and repositories have a unique identifier. Additionally, we can access the repository later (regardless of rename) via https://api.github.com/repositories/:id
This stems from #82
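The `123456:master` shape above suggests a simple compose/parse pair. A minimal sketch, assuming the id is `<repoId>:<branch>` as in the issue; `toScmId` and `parseScmId` are illustrative names, and a real implementation would resolve `repoId` via the GitHub "get repo" call linked above:

```javascript
// Compose the proposed scm_id from a numeric repo id and a branch name.
function toScmId(repoId, branch) {
  return `${repoId}:${branch}`;
}

// Split an scm_id back into its parts. Branch case is preserved, since
// GitHub branch names are case-sensitive.
function parseScmId(scmId) {
  const [repoId, branch] = scmId.split(':');
  return { repoId: Number(repoId), branch };
}

console.log(toScmId(123456, 'master'));   // -> 123456:master
console.log(parseScmId('123456:master')); // -> { repoId: 123456, branch: 'master' }
```

Because the numeric repo id survives renames, looking the repository up later via https://api.github.com/repositories/:id keeps working even if the display `scm_url` goes stale.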
/v3/pipelines/:id/secrets
Feature: #148
As a user, I need to have access to my secrets, so I can publish / deploy my application
One of the big blockers with continuous delivery in the existing tools out there is the availability of secrets to the build. Secrets are needed for publishing packages or code, deploying to services, and remotely testing whether a service is working.
The systems out there usually support secrets in one of three camps:
Screwdriver should provide a combination of options two and three. This gives developers the ability to specify secrets that all jobs need to use, and then restrict production secrets to just the production jobs.
Additionally, developers should be able to customize which secrets are available in a pull request, even as fine-grained as whether a secret is available in a forked pull request.
Rules:
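A minimal sketch of the kind of per-job / per-PR filtering described above. All field names (`jobs`, `allowInPR`, `allowInForkedPR`) are hypothetical, not Screwdriver's actual schema:

```javascript
// Return only the secrets a given build is allowed to see.
function visibleSecrets(secrets, { jobName, isPR, isForkedPR }) {
  return secrets.filter((s) => {
    if (s.jobs && !s.jobs.includes(jobName)) return false; // restricted to listed jobs
    if (isPR && !s.allowInPR) return false;                // hidden from PR builds
    if (isForkedPR && !s.allowInForkedPR) return false;    // hidden from forked PRs
    return true;
  });
}

const secrets = [
  { name: 'NPM_TOKEN', allowInPR: false },
  { name: 'PROD_KEY', jobs: ['deploy'], allowInPR: false },
  { name: 'COVERAGE_TOKEN', allowInPR: true, allowInForkedPR: false }
];

const visible = visibleSecrets(secrets, { jobName: 'main', isPR: true, isForkedPR: false });
console.log(visible.map((s) => s.name)); // -> [ 'COVERAGE_TOKEN' ]
```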
I added an sd.yaml for scm-base to get a build running from an external contributor PR. After closing and reopening the PR, I watched the PR build complete as expected. Then, much to my surprise, GitHub reported that the build was still running. Clicking into the details again, I was surprised to see the "main" build running for the PR, "caused" by some cryptic user id.
screwdriver-cd/scm-base#5
https://cd.screwdriver.cd/builds/dd88e23cf7b306fb9a21503080649a99840d200d
/v3/builds/:id/secrets
Feature: #148
'date' -> 'date-time' in swagger.json so that codegen will work properly with the way we format date and time.
This is causing a startup error as it's trying to read the token from disk still.
├─┬ [email protected]
│ └─┬ [email protected]
│ └─┬ [email protected]
│ └── [email protected]
├─┬ [email protected]
│ └─┬ [email protected]
│ └── [email protected]
├─┬ [email protected]
│ └─┬ [email protected]
│ └─┬ [email protected]
│ └── [email protected]
├─┬ [email protected]
│ └─┬ [email protected]
│ └─┬ [email protected]
│ └── [email protected]
└─┬ [email protected]
└─┬ [email protected]
└─┬ [email protected]
└── [email protected]
If a job is disabled, we shouldn't be able to start a new build from it.
/v3/jobs/:id/secrets
Feature: #148
Requirements for authentication in swagger spec
https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#securitySchemeObject
I noticed that both kubernetes and spinnaker have their main repository named after the git organization/product. Should we do the same?
If so, the API would be the primary component of SD and as such we should rename it.
To ensure that Screwdriver is providing the value we expect, all of our Pull Requests in this organization should test through Screwdriver.
Expected features:
Implementation details:
main job
Talk about:
What are our guidelines? Max page size? Default page size?
We should talk about how we expect people to use this API:
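One possible shape for a pagination guideline, as a sketch; the default and maximum page sizes here are placeholders, not settled values:

```javascript
// Placeholder limits for list endpoints; the discussion above should
// decide the real numbers.
const DEFAULT_COUNT = 50;
const MAX_COUNT = 500;

// Normalize ?page= and ?count= query parameters into safe values.
function paginate(query) {
  const count = Math.min(Number(query.count) || DEFAULT_COUNT, MAX_COUNT);
  const page = Math.max(Number(query.page) || 1, 1);
  return { page, count, offset: (page - 1) * count };
}

console.log(paginate({}));                         // -> { page: 1, count: 50, offset: 0 }
console.log(paginate({ page: '3', count: '25' })); // -> { page: 3, count: 25, offset: 50 }
```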
This is a major change to the launcher and how we execute it. Previously we would provide fields like: `jobId`, `pipelineId`, `repository`, and `branch`. We want to reduce the context provided to the launcher and instead empower the launcher to get the information it needs.
This has multiple benefits, like fewer places to change (only the launcher) when adding new information like `sha1` or `is_forked`.
Fields to be provided to executor:
`buildId` - Build Identifier (to look up stuff)
`container` - Container to start in
`apiUri` - URI to hit our API
`token` - Token to be able to read/write from the API
Input provided to launcher:
# docker run ... {{container}}
SD_TOKEN={{token}} /opt/screwdriver/launch --api-uri {{apiUri}} {{buildId}}
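The reduced executor-to-launcher contract can be sketched as a single function. The four field names come from the issue; `buildLaunchSpec` itself is an illustrative helper, not actual Screwdriver code:

```javascript
// Assemble everything the executor hands to the launcher; the launcher
// fetches all remaining context (repo, branch, sha, etc.) from the API.
function buildLaunchSpec({ buildId, container, apiUri, token }) {
  return {
    image: container,          // container to start in
    env: { SD_TOKEN: token },  // token for API reads/writes
    cmd: ['/opt/screwdriver/launch', '--api-uri', apiUri, buildId]
  };
}

const spec = buildLaunchSpec({
  buildId: 'dd88e23',
  container: 'node:6',
  apiUri: 'https://api.screwdriver.cd',
  token: 'example-token'
});
console.log(spec.cmd.join(' '));
// -> /opt/screwdriver/launch --api-uri https://api.screwdriver.cd dd88e23
```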
There is a possible performance boost on server startup by only calling register once
https://github.com/screwdriver-cd/api/blob/master/lib/registerPlugins.js#L21-L27
Given a scenario where the user "Jer" is not logged in and has not authorized Screwdriver. When that user opens a PR, the API will complain about the user not existing in the ecosystem yet. This results in a 500 being returned by the server.
When you go to /v3/login
from the browser and then refresh the URL, a 500 is returned with:
{
"statusCode": 500,
"error": "Internal Server Error",
"message": "Missing github request token cookie"
}
The specific bug that triggered it has been resolved in screwdriver-cd/models#50, but the server should never exit like this:
160812/191904.322, [response] http://0.0.0.0:8080: post /v3/webhooks/github {} 500 (893ms)
160812/191906.047, [request,webhook-build,bb12771e794ac72679ea2538219fef33f7789432] data: Received status update to RUNNING
/usr/src/app/node_modules/vogels/lib/serializer.js:35
return new Date(value).toISOString();
^
RangeError: Invalid time value
at Object.internals.serialize.date (/usr/src/app/node_modules/vogels/lib/serializer.js:35:30)
at Object.internals.serializeAttribute.serializer.serializeAttribute as serializeAttribute
at /usr/src/app/node_modules/vogels/lib/expressions.js:70:44
at /usr/src/app/node_modules/vogels/node_modules/lodash/index.js:2523:13
at /usr/src/app/node_modules/vogels/node_modules/lodash/index.js:3073:15
at baseForOwn (/usr/src/app/node_modules/vogels/node_modules/lodash/index.js:2046:14)
at /usr/src/app/node_modules/vogels/node_modules/lodash/index.js:3043:18
at baseReduce (/usr/src/app/node_modules/vogels/node_modules/lodash/index.js:2520:7)
at Function.<anonymous> (/usr/src/app/node_modules/vogels/node_modules/lodash/index.js:3446:13)
at Object.exports.serializeUpdateExpression (/usr/src/app/node_modules/vogels/lib/expressions.js:53:18)
npm info lifecycle [email protected]~start: Failed to exec start script
npm ERR! Linux 3.16.0-4-amd64
npm ERR! argv "/usr/local/bin/node" "/usr/local/bin/npm" "start"
npm ERR! node v6.3.1
npm ERR! npm v3.10.3
npm ERR! code ELIFECYCLE
npm ERR! [email protected] start: ./bin/server
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] start script './bin/server'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the screwdriver-api package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! ./bin/server
npm ERR! You can get information on how to open an issue for this project with:
npm ERR! npm bugs screwdriver-api
npm ERR! Or if that isn't available, you can get their info via:
npm ERR! npm owner ls screwdriver-api
npm ERR! There is likely additional logging output above.
npm ERR! Please include the following file with any support request:
npm ERR! /usr/src/app/npm-debug.log
We lower-cased all the scmUrls a while ago: #61
It's giving us problems. The GitHub API actually makes a distinction between lower and upper case:
https://api.github.com/repos/screwdriver-cd/models/branches/newformat gives nothing
https://api.github.com/repos/screwdriver-cd/models/branches/NewFormat gives the branch
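One possible fix, as a sketch: lowercase only the host and org/repo portion, but preserve the branch fragment, since branch names are case-sensitive. The function name and split logic are illustrative, not Screwdriver's actual implementation:

```javascript
// Lowercase the repo portion of an scmUrl while keeping branch case intact.
function normalizeScmUrl(scmUrl) {
  const [repoPart, branch] = scmUrl.split('#');
  return branch === undefined
    ? repoPart.toLowerCase()
    : `${repoPart.toLowerCase()}#${branch}`;
}

console.log(normalizeScmUrl('git@github.com:Screwdriver-CD/Models.git#NewFormat'));
// -> git@github.com:screwdriver-cd/models.git#NewFormat
```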
Right now the server logs are quite minimal:
160826/235420.012, [response] http://api.screwdriver.cd: post /v3/webhooks/build {} 204 (533ms)
We should update them to include things like remote address, user agent, etc.
Right now we upload the entire log of a step to the log service on each update as well as read from the entire step log on each read. This can be computationally (and financially) expensive when we're trying to read from the end of a long step log, especially if the step takes a while.
I propose that we switch to storing logs in chunks of logs instead of one big step file:
/builds/:id/:step/1 - 100 lines
/builds/:id/:step/2 - 21 lines
Reading from line 110 would only have to load the 21-line file from :step/2
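The chunk lookup described above reduces to one line of arithmetic, assuming fixed chunks of 100 lines stored at /builds/:id/:step/&lt;chunk&gt;; `chunkForLine` is an illustrative helper:

```javascript
const LINES_PER_CHUNK = 100;

// Map a 1-indexed log line number to the 1-indexed chunk that contains it.
function chunkForLine(line) {
  return Math.ceil(line / LINES_PER_CHUNK);
}

console.log(chunkForLine(110)); // -> 2 (only the 21-line file is fetched)
console.log(chunkForLine(100)); // -> 1
```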
This should be a tool that a user can run to bootstrap a Screwdriver setup:
Think of it as a replacement/expansion of dynamic-dynamodb
Proposed repo name: bootstrap
This makes it easy to start up with a Docker container:
This does mean we'll need to bundle all the datastores into the container
I guess that's what it sends when you force-push a branch for a PR:
Request URL: http://a4677c9873c9611e6aa7102b92f75d5c-1135862614.us-west-2.elb.amazonaws.com/v3/webhooks/github
Request method: POST
content-type: application/json
Expect:
User-Agent: GitHub-Hookshot/ebd57e0
X-GitHub-Delivery: 51d2f800-656d-11e6-98e9-e30f7b03e6aa
X-GitHub-Event: pull_request
X-Hub-Signature: sha1=f995b5c95a9487d84a4f6725a14d5b4b363840a9
160818/175809.038, [request,webhook-github,51d2f800-656d-11e6-98e9-e30f7b03e6aa] data: Received event pull_request
160818/175809.038, [request,webhook-github,51d2f800-656d-11e6-98e9-e30f7b03e6aa] data: PR #50 synchronize for git@github.com:screwdriver-cd/launcher.git#master
160818/175809.647, [request,webhook-github,51d2f800-656d-11e6-98e9-e30f7b03e6aa,a6cb05361974ef8779c9d684619cf530d7f79fc4] data: PR-50 stopped
160818/175810.094, [request,server] data: TypeError: config.tokenGen is not a function
at BuildModel.start (/usr/src/app/node_modules/screwdriver-models/lib/build.js:150:30)
at getCommitSha.then.then.build (/usr/src/app/node_modules/screwdriver-models/lib/buildFactory.js:140:31)
at process._tickDomainCallback (internal/process/next_tick.js:129:7)
160818/175808.947, [response] http://0.0.0.0:8080: post /v3/webhooks/github {} 500 (1149ms)
Unable to create a new pipeline.
sd$ pipeline create git@github.com:screwdriver-cd/api.git#master
Ooops { statusCode: 500,
error: 'Internal Server Error',
message: 'An internal server error occurred' }
160721/060003.052, [error] message: Uncaught error: Cannot read property 'id' of undefined stack: TypeError: Uncaught error: Cannot read property 'id' of undefined
at Pipeline.create (/Users/stjohn/Sites/Screwdriver/api/node_modules/screwdriver-plugin-pipelines/lib/create.js:38:60)
at MyDatastore.save (/Users/stjohn/Sites/Screwdriver/api/node_modules/screwdriver-datastore-imdb/index.js:63:9)
at get (/Users/stjohn/Sites/Screwdriver/api/node_modules/screwdriver-models/lib/pipeline.js:50:35)
at MyDatastore.get (/Users/stjohn/Sites/Screwdriver/api/node_modules/screwdriver-datastore-imdb/index.js:48:9)
at PipelineModel.get (/Users/stjohn/Sites/Screwdriver/api/node_modules/screwdriver-models/lib/base.js:27:31)
at PipelineModel.create (/Users/stjohn/Sites/Screwdriver/api/node_modules/screwdriver-models/lib/pipeline.js:30:14)
at Pipeline.get (/Users/stjohn/Sites/Screwdriver/api/node_modules/screwdriver-plugin-pipelines/lib/create.js:29:26)
at MyDatastore.get (/Users/stjohn/Sites/Screwdriver/api/node_modules/screwdriver-datastore-imdb/index.js:48:9)
at PipelineModel.get (/Users/stjohn/Sites/Screwdriver/api/node_modules/screwdriver-models/lib/base.js:27:31)
at config.handler (/Users/stjohn/Sites/Screwdriver/api/node_modules/screwdriver-plugin-pipelines/lib/create.js:21:22)
We're currently tied to Node 4 and Node 6 is the latest stable release. Should be a pretty easy upgrade:
response schemas for login, status, and stats are currently not defined
As a user, I want to read the logs of my build, both in-progress and after it is done.
For this feature, we need to take the live streaming logs and present them to the user on a web UI grouped by the step. Steps should contain the exit code as well as the time it took to complete.
If you want to have >1 SD instance per AWS account, you'll need the ability to specify a different set of tables.
Example:
git@github.com:screwdriver-cd/hashr.git
git@github.com:screwdriver-cd/hashr.git#master
git@github.com:screwdriver-cd/HASHR.git
Since we will be shipping this whole application as one (plugins and all), should we move all the user interface plugins (repos named `plugin-*`) directly into this repository, so it reduces the dependency and development chain?
Attempting to login gives the following error:
160727/054041.769, [error] message: "token" is not allowed stack: ValidationError: "token" is not allowed
at Object.exports.process (/usr/src/app/node_modules/joi/lib/errors.js:154:19)
at _validateWithOptions (/usr/src/app/node_modules/joi/lib/any.js:601:31)
at root.validate (/usr/src/app/node_modules/vogels/node_modules/joi/lib/index.js:102:23)
at Schema.validate (/usr/src/app/node_modules/vogels/lib/schema.js:173:14)
at /usr/src/app/node_modules/vogels/lib/table.js:160:30
at /usr/src/app/node_modules/vogels/node_modules/async/lib/async.js:52:16
at Immediate.<anonymous> (/usr/src/app/node_modules/vogels/node_modules/async/lib/async.js:1206:34)
at runCallback (timers.js:570:20)
at tryOnImmediate (timers.js:550:5)
at processImmediate [as _immediateCallback] (timers.js:529:5)
Token is not in the data model: https://github.com/screwdriver-cd/data-schema/blob/master/models/user.js
160813/000203.442, [request,webhook-github,2a050f80-60e9-11e6-900d-ae68a135de5f] data: Received event pull_request
160813/000203.442, [request,webhook-github,2a050f80-60e9-11e6-900d-ae68a135de5f] data: PR #49 closed for git@github.com:screwdriver-cd/models.git#master
160813/000203.574, [request,server] data: ValidationException: Invalid KeyConditionExpression: An expression attribute value used in expression is not defined; attribute value: :jobId
at Request.extractError (/usr/src/app/node_modules/aws-sdk/lib/protocol/json.js:43:27)
at Request.callListeners (/usr/src/app/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
at Request.emit (/usr/src/app/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
at Request.emit (/usr/src/app/node_modules/aws-sdk/lib/request.js:596:14)
at Request.transition (/usr/src/app/node_modules/aws-sdk/lib/request.js:21:10)
at AcceptorStateMachine.runTo (/usr/src/app/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /usr/src/app/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> (/usr/src/app/node_modules/aws-sdk/lib/request.js:37:9)
at Request.<anonymous> (/usr/src/app/node_modules/aws-sdk/lib/request.js:598:12)
at Request.callListeners (/usr/src/app/node_modules/aws-sdk/lib/sequential_executor.js:115:18)
160813/000203.356, [response] http://0.0.0.0:8080: post /v3/webhooks/github {} 500 (220ms)
We can use direct fields like `?foo=bar` as well as a more human search value: https://www.npmjs.com/package/search-query-parser
When a user authenticates against GitHub, we should get their list of repositories that they have access to and store that in the user table. That way we can do most non-write authentication checks for free.
We should use the same format as #160
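The cached-permission idea above could look roughly like this; the `repositories` field and `canRead` helper are hypothetical, not Screwdriver's actual user schema:

```javascript
// Answer a read-permission check from the repo list cached on the user
// record at login, instead of a GitHub API round-trip per check.
function canRead(user, scmUrl) {
  return user.repositories.includes(scmUrl);
}

const user = { repositories: ['git@github.com:screwdriver-cd/models.git'] };
console.log(canRead(user, 'git@github.com:screwdriver-cd/models.git')); // -> true
console.log(canRead(user, 'git@github.com:other/repo.git'));            // -> false
```

Write checks would still need a live call, since the cached list can go stale between logins.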
Please document how I can deploy this into AWS directly.
The Build model's create expects sha to be there.
Right now we only care about pull requests, so we get the sha from the payload. If it's not a PR, this value will be undefined and creating the build will break. We need to do a lookup to get the sha before calling create.
https://github.com/screwdriver-cd/screwdriver/blob/master/plugins/github.js
Modify the current route so that it:
This requires changes in this order:
According to this standard for JSON (http://json-schema.org/latest/json-schema-validation.html#anchor26), which go-swagger follows, string lengths should be validated with maxLength and minLength as opposed to maximum and minimum.
We have retries implemented for Screwdriver API calls, but if a request times out it is never retried.
Example:
2016/09/22 19:22:07 Error running launcher: updating step stop "test": posting to Step Stop: reading response from Screwdriver: Put https://api.screwdriver.cd/v3/builds/322239d37a0c7bb1c7214d45b30082c81a1e1899/steps/test: read tcp 100.96.138.4:35861->54.200.168.202:443: read: connection timed out
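A minimal sketch of the fix: treat a timeout like any other retryable failure by wrapping the call, so a thrown timeout error triggers another attempt. Shown synchronously in JavaScript for brevity; the real launcher is Go and the calls are HTTP requests, so this only illustrates the control flow:

```javascript
// Retry attemptFn up to maxRetries times; any thrown error, including a
// timeout, counts as a retryable failure.
function withRetries(attemptFn, maxRetries) {
  let lastErr;
  for (let attempt = 0; attempt <= maxRetries; attempt += 1) {
    try {
      return attemptFn();
    } catch (err) {
      lastErr = err; // timeouts land here too, so they get retried
    }
  }
  throw lastErr;
}

// Simulate two timeouts followed by a success.
let calls = 0;
const result = withRetries(() => {
  calls += 1;
  if (calls < 3) throw new Error('read: connection timed out');
  return 'ok';
}, 5);
console.log(result, calls); // -> ok 3
```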
We want to start the main jobs on PR and "monitored" branch change.
Example package that could help us https://www.npmjs.com/package/hapi-github-webhooks
When a build of the `main` job is completed, it should trigger the next job in the workflow with the same SHA.
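The chaining rule above can be sketched as a lookup in an ordered workflow; `nextJob` is an illustrative helper, not actual Screwdriver code, and the triggered build would reuse the completed build's sha:

```javascript
// Given an ordered workflow and the job whose build just completed,
// return the next job to trigger, or null at the end of the workflow.
function nextJob(workflow, completedJob) {
  const i = workflow.indexOf(completedJob);
  return i >= 0 && i + 1 < workflow.length ? workflow[i + 1] : null;
}

console.log(nextJob(['main', 'publish', 'deploy'], 'main')); // -> publish
```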
To protect against CSRF, we should serve crumbs and require them for writes.
Probably use https://github.com/hapijs/crumb
The documentation for creating a build is outdated. It states that the payload should include `container`, but that's no longer the case.
Right now I set up some basic ones for priority and impact, but there are a lot of other examples/best practices out in the wild. What do you think we should do?
Links:
`jobs/{id}/builds` route that lists all the builds belonging to that job
`number`