tektoncd / triggers
Event triggering with Tekton!
License: Apache License 2.0
Every PR should have all unit tests (i.e. go test ./...) executed against it.
We don't have any code, or any automation :O
The create webhook task should be tested automatically
The automated test for the create webhook task doesn't exist yet. The task is created in this PR: #82
When users come to the repo for this project, they should be able to easily find an explanation of why the project exists and how it relates to other projects.
This is something we don't yet have for Tekton Pipelines, which results in questions such as tektoncd/pipeline#644. Having docs like this from the start would probably save a lot of time and answer a lot of user questions (not to mention clarify our own thoughts!).
It will be easier for ppl to start adding functionality if we add an empty controller that doesn't actually reconcile anything, just something that can be built and deployed with ko.
Continuing to use knative/pkg is probably our best bet for now, in which case we can get a lot of this by more or less copying https://github.com/tektoncd/pipeline/blob/master/cmd/webhook.
When an EventListener CRD is created, the triggers controller should respond by creating a running container instance that can be invoked with CloudEvents.
Eventually this running container would expose a Knative eventing addressable endpoint which could be invoked with CloudEvents and would create resource instances in response, but in the context of this issue, the goal is just to:
- EventListener creation
- EventListener deletion (thought: should it be allowed to finish whatever it's doing before it terminates? probably! i guess this depends on the design of the running container!)

Creating an EventListener currently does nothing :D
@bobcatfish writing this issue does not know the exact form that this running container should take. She knows it should be a container inside a pod, but is not sure if it should be a stateful set, deployment, or something else :D Anyone with more context (@iancoffey @ncskier @vtereso ) feel free to replace this with something definitive or whoever takes on the issue can decide!
Other thoughts:
Every PR should have logic applied to it to make sure that:
We don't have any code, or any automation :O
This is a proposed feature that would help orchestrating asynchronous pipelines.
The use case is as follows: as a user I want to be able to include in a pipeline a component that is executed asynchronously and possibly externally i.e. Tekton on a different cluster or something that is not Tekton at all.
For instance a pipeline may include a Task that sends a cloud event on completion. The cloud event triggers an external system that does a certain amount of processing, and sends a call back via cloud event. In the pipeline below, tasks A and C are started in parallel. Task B is only executed once A is complete. Task D is only executed when a cloud event is received. In this specific example this event is a call-back from an event that was sent by Task C to an external system.
In this example the pipeline helps us orchestrate the execution of an external task, since the overall pipeline condition exposed for the pipeline resource via the k8s API includes the execution and result of a process on an external system.
The UX model I had in mind for this feature is that a CloudEventPipelineResource could be extended to be used as an input, include a reference to a template, and Tekton would set up a binding implicitly for the Task that has that input resource.
Our codebase should consistently use tabs or spaces, and if code is submitted that is inconsistent, it should be caught by our presubmit linting checks.
Although #4 added linting, #21 added some files with spaces instead of tabs (#21 (comment)) and our linting didn't catch it :O
This is a zenhub epic about adding some baseline functionality to our initial controller and webhook, namely:
Folks will want to run Pipelines periodically, e.g. every night or every few hours they may want to trigger CI. Some use cases:
We've been discussing and designing solutions for event based triggering where the event comes from an external system like GitHub.
Hopefully this design will be super similar :D
Definitely not something we need for our first milestone but ppl have been asking for it so we can track it here and ppl can add details if needed :D
The EventListener Deployment pod should be able to create resources as defined in the associated TriggerTemplate(s) when receiving an event.
Currently, nothing does this.
When the event/message is received, it should be parsed to populate the corresponding TriggerBinding(s), which should then inject their values into the TriggerTemplate resourceTemplates. After value interpolation, each resource should be POSTed to the K8s APIServer.
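The parse → bind → interpolate → POST flow could be sketched like this; the helper names and the ${params.NAME} placeholder handling are illustrative assumptions modeled on the examples in this doc, not the actual implementation:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// extract walks a dot-separated path (e.g. "head_commit.id") through a
// generically-unmarshalled event payload, as a TriggerBinding would.
func extract(event map[string]interface{}, path string) (string, bool) {
	var cur interface{} = event
	for _, key := range strings.Split(path, ".") {
		m, ok := cur.(map[string]interface{})
		if !ok {
			return "", false
		}
		if cur, ok = m[key]; !ok {
			return "", false
		}
	}
	s, ok := cur.(string)
	return s, ok
}

// interpolate substitutes ${params.NAME} placeholders in a resourceTemplate
// body with the values bound from the event.
func interpolate(resourceTemplate string, params map[string]string) string {
	for name, value := range params {
		resourceTemplate = strings.ReplaceAll(resourceTemplate, "${params."+name+"}", value)
	}
	return resourceTemplate
}

func main() {
	payload := []byte(`{"head_commit": {"id": "abc123"}}`)
	var event map[string]interface{}
	if err := json.Unmarshal(payload, &event); err != nil {
		panic(err)
	}

	// The binding maps event paths to template params.
	params := map[string]string{}
	params["gitrevision"], _ = extract(event, "head_commit.id")

	body := interpolate(`{"kind": "PipelineResource", "spec": {"revision": "${params.gitrevision}"}}`, params)
	fmt.Println(body)
	// Each interpolated resource body would then be POSTed to the K8s APIServer.
}
```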
We should have a suite of end to end tests that we can use to test triggers functionality. We can start with a simple one that:
(If #36 happens first, this test can actually look for a running container, otherwise this will be a very light test!)
We've got a dummy test but not actually invoking any of our libs yet :D
One question at this point is do we want to use the knative test lib? I think it's probably simpler to use it, e.g.:
However it's not that complicated to implement our own (this is an example of what's required to hit most of the common stuff we'd need to do) https://github.com/bobcatfish/testing-crds/blob/master/client-go/test/cat_test.go#L136
Up to the implementer of this issue - and also something we can easily change later!
In the current design, users would be able to use the spec.resourceTemplates field to escalate their degree of write permission within the cluster; users of TriggerTemplate are able to create any resource that the service account responsible for creating resources in a TriggerTemplate has permission to create. For example, if a user does not have access to create Pods, but the service account servicing TriggerTemplate does, a user could put a Pod into spec.resourceTemplates and have it created.
We should protect against this form of privilege escalation by having a validating webhook that uses information from the authorizer (this is part of the payload webhooks are passed) to make SAR checks that establish that the user creating or updating a TriggerTemplate has permission to create those resources in the cluster.
TriggerBinding and TriggerTemplate are designed for reuse, which means a single TriggerBinding may be shared by several EventListeners. So if a modification occurs on a TriggerBinding or TriggerTemplate, what should the related EventListeners do?
I think we should enhance the EventListener type definition as below:
type Trigger struct {
	// add this field to define the policy
	IgnoreChange    bool               `json:"ignoreChange,omitempty"`
	TriggerBinding  TriggerBindingRef  `json:"binding"`
	TriggerTemplate TriggerTemplateRef `json:"template"`
}
Currently, nothing handles this.
To achieve this, the controller of EventListener should watch both TriggerBinding and TriggerTemplate.
We should have the structure of the types we want to add for Tekton Triggers clearly defined in this repo and agreed on by all owners:
EventListener
TriggerBinding
TriggerTemplate
Each type should:
- be declared in go such that it can be fed to update-codegen.sh
- have an example in the examples dir (even if we aren't testing this yet, see #11)

The structure of the types currently exists only as example yaml in the design doc, which means that:
Design for these types lives in https://docs.google.com/document/d/1fngeNn3kGD4P_FTZjAnfERcEajS7zQhSEUaN7BYIlTw/edit (visible to members of tekton-dev)
The main strength of tektoncd/triggers, and more specifically a triggerBinding, is that it will act as the glue enabling you to extract fields of an event payload to be used in a triggerTemplate to create arbitrary resources.
We don't have anything for this
Since events can have any structure, we will not always have a definition for the struct to unmarshal into as done prior here. Further, likely only a few of the many fields will be utilized within a triggerTemplate, so unmarshalling the entire structure would be costly/wasteful.
- We could define expected structures for the triggerBinding pods/listeners AOT, but this seems unnecessary, costly (as mentioned previously) and in conflict with the idea of having a dynamic solution.
- We could use the encoding/json library and map[string]interface{}, but there are some tools out there (https://github.com/tidwall/gjson seems like a good option) that are much more performant.

Tekton going forward, as of last week's working group meeting, is following the template style of K8S, which can be interpreted as specifically avoiding a few things. https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/declarative-application-management.md#what-about-parameterization
2/3rds of that K8S document applies directly to how Tekton will want to pursue templates and reuse of items. I am trying to find exceptions to the rules I have found in that document, though I have none currently. Everything that can be expressed with a template within our API/controller could instead be pre-processed outside of Tekton's view and applied as API calls to Tekton.
In fact, if following a couple of the models for apply -f/ apply -k/ and merging YAML/JSON in a way transparent to K8S, there is a chance to additionally unroll patches and recover original values if desired - though that is not going to be required for now.
Where am I going with this?
Tekton will avoid templating most forms of variable interpolation at the API layer for Tekton.
This will also mean avoiding creating any objects named "XYZTemplate" and/or "XYZTemplateType" in favor of allowing external tools to be chosen by a developer for generating the API calls/YAML/JSON files.
I propose any templating options be pushed to a CLI wrapper layer that makes the user experience for Tekton nicer, though leaves the controllers and API simpler for Tekton itself.
So, I think I am not going too far out there to state: Let us not add too many intelligent forms of template logic into the controllers.
It may be argued that Pipelines and PipelineRuns also follow a similar template logic (PipelineRun is just providing a patch of values over the Pipeline template), though that is a discussion for the pipeline team rather than for Triggers.
It follows to me that Tekton can be simpler and still achieve its goals. What to consider for removal? Anything performing variable interpolation/variable substitution that the user could have provided directly instead. Additionally, most other items that might be interpolated can potentially be generated and turned into a full API call/fully filled out YAML/filled out JSON. If not, then the extra state can be considered exposed via ConfigMap, Secret, or creating a resource/file/envvar for ingestion further along (each with different implications for debugging). I am not 100% sure how we might emulate or reuse the DownwardAPIVolumeFiles variant of handing information along, though it is also a pattern to consider from K8S.
I will first refer to https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/declarative-application-management.md for consideration
https://docs.google.com/document/d/1h_3vSApIsuiwGkrqSiegi4NVaYG4oVzBquGAhIN6qGM
Resources that are created by TriggerTemplates will clash if the names defined in the resources are static. Add some postfix/modifier (uuid/timestamp) to resources to prevent naming collisions.

Pipelines uses uuids so it's probably best to follow that convention.
Currently, nothing does this.
If all resources get an automatic and unknown name modifier (like a postfix), it will be impossible to refer to them from within the TriggerTemplate. For this reason, it is important that the same mechanism that uniquely names resources also allows this (sort of uuid) to be exposed for usage throughout the TriggerTemplate.

This can be accomplished by exposing another interpolation variable within the TriggerTemplate or by extending the TriggerTemplate struct.

For example, a variable such as ${id} (or any such name like ${unique}) could be utilized throughout a TriggerTemplate as follows:
apiVersion: tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
name: simple-pipeline-template
namespace: default
spec:
params:
- name: gitrevision
description: The git revision
default: master
- name: gitrepositoryurl
description: The git repository url
- name: namespace
description: The namespace to create the resources
resourcetemplates:
- apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: git-source${id.one}
namespace: ${params.namespace}
labels:
triggertemplated: true
spec:
type: git
params:
- name: revision
value: ${params.gitrevision}
- name: url
value: ${params.gitrepositoryurl}
- apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
name: simple-pipeline-run${id.two}
namespace: default
labels:
triggertemplated: true
spec:
pipelineRef:
name: simple-pipeline
trigger:
type: event
resources:
- name: git-source
resourceRef:
name: git-source${id.one}
In this way, some interpolation base (like ${id}) would have fields that could be implicitly referenced throughout the template (maybe as valid here). In any case, this is similar to how ${params} is functioning.
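One way this could work under the hood — a sketch assuming the ${id.NAME} syntax from the yaml above, with a hypothetical resolveIDs helper. The key property is that every occurrence of the same id name resolves to the same suffix, so intra-template references (like a PipelineRun's resourceRef) stay consistent:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"strings"
)

// resolveIDs generates one random suffix per declared id name and replaces
// every ${id.NAME} occurrence with it. The suffix format is an assumption;
// a real implementation might use a uuid as Pipelines does.
func resolveIDs(template string, names []string) string {
	for _, name := range names {
		b := make([]byte, 4)
		if _, err := rand.Read(b); err != nil {
			panic(err)
		}
		suffix := "-" + hex.EncodeToString(b) // e.g. "-9f86d081"
		template = strings.ReplaceAll(template, "${id."+name+"}", suffix)
	}
	return template
}

func main() {
	tmpl := "name: git-source${id.one} ... resourceRef: git-source${id.one}"
	// Both occurrences of ${id.one} get the same generated suffix.
	fmt.Println(resolveIDs(tmpl, []string{"one"}))
}
```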
To be more explicit, we could add another TriggerTemplateSpec field like Params where users would specify exactly what unique field names they want to expose. For example:
spec:
params:
- name: gitrevision
description: The git revision
default: master
- name: gitrepositoryurl
description: The git repository url
- name: namespace
description: The namespace to create the resources
id:
- idOne
- idTwo
These are not functionally surfaced to the TriggerBinding however, so maybe creating explicit fields is annoying and the verbosity isn't good/as valuable?
This could also be done in a similar way, but instead of providing access to some randomly generated variable that is used for naming, resource names could be automatically modified, and the resources themselves (which still functionally evaluate to their names) could have some identifier to be used within the template. Regarding the prior example, it could look something like this:
apiVersion: tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
name: simple-pipeline-template
namespace: default
spec:
params:
- name: gitrevision
description: The git revision
default: master
- name: gitrepositoryurl
description: The git repository url
- name: namespace
description: The namespace to create the resources
resourcetemplates:
- apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
name: git-source
namespace: ${params.namespace}
labels:
triggertemplated: true
spec:
type: git
params:
- name: revision
value: ${params.gitrevision}
- name: url
value: ${params.gitrepositoryurl}
- apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
name: simple-pipeline-run
namespace: default
labels:
triggertemplated: true
spec:
pipelineRef:
name: simple-pipeline
trigger:
type: event
resources:
- name: git-source
resourceRef:
name: ${1}
Documentation should DEFINITELY be included for this one. Discussion is good!
It will be easier for ppl to start adding functionality if we add an empty webhook that doesn't actually do anything (eventually this webhook controller will be used to verify Trigger types), just something that can be built and deployed with ko, e.g.:
controller := webhook.AdmissionController{
	Client:  kubeClient,
	Options: options,
	Handlers: map[schema.GroupVersionKind]webhook.GenericCRD{
		// nothing actually here cuz we don't have any types yet
	},
	Logger: logger,
}
Continuing to use knative/pkg is probably our best bet for now, in which case we can get a lot of this by more or less copying https://github.com/tektoncd/pipeline/tree/master/cmd/controller.
It looks like the EventListener binary is going to be doing a lot of work in the future, so we should treat its logs like we would logs from our controller, e.g.:
Right now we are using the go log library and logging everything the same way. The user has no control over it.
Probably reasonable just to use the same log library as our controller (zap) and configure it in a similar way, e.g. https://github.com/tektoncd/triggers/blob/master/config/config-logging.yaml
But if whoever is doing this feels like it we could look into other logging libraries :D
It should (maybe!) be possible for users to provide values to TriggerTemplate from sources other than events.
A TriggerTemplate takes parameters, and doesn't need to worry about where those come from e.g.:
spec:
params:
- name: gitrevision
description: The git revision
default: master
- name: gitrepositoryurl
description: The git repository url
- name: namespace
description: The namespace to create the resources
At the moment these values can be provided only by a TriggerBinding which can extract them from json data (an event):
spec:
params: # side note - are these outputs? i think thats the intention...
- name: gitrevision
value: $(event.head_commit.id)
- name: gitrepositoryurl
value: $(event.repository.url)
- name: namespace
value: tekton-pipelines
There is no other way to provide values for the params of a TriggerTemplate.
For dogfooding Tekton Pipelines, I wanted to make use of the Pipeline we have been using for manual releases. However the Pipeline takes a parameter called versionTag:
- name: versionTag
description: The vX.Y.Z version that the artifacts should be tagged with (including `v`)
There are different types of values we want for this:
- For official releases: the version being released (e.g. v0.4) and the number of commits so far (i.e. each subsequent commit bumps the patch value, e.g. v0.4.1)
- For nightly releases: a date + commit based tag (e.g. vYYYYMMDD-commitsha)

With the current implementation for Triggers (and Prow!), in order to accomplish the above, we need to modify the Pipeline itself to do this. And we'd need either 2 totally different Pipelines or a flag to distinguish between the official + nightly use cases.
My opinion is that it's reasonable to create a Pipeline like this that takes a version parameter without being concerned with how to create it, and I think it makes sense for the Triggering system to be able to make this data available.
Here is a proposal for how we could do this:
TriggerBinding would specifically be about extracting values from Events only, so it could be renamed (#62) to something like EventParser or ParmsFromEvent or something better but specifically about "events" - EventBinding?

E.g. EventListener could look like this:
apiVersion: tekton.dev/v1alpha1
kind: EventListener
metadata:
name: listener
namespace: tekton-pipelines
spec:
serviceAccountName: default
triggers:
- bindings:
- name: github-binding
- name: nightly-version-binding
template:
name: pipeline-template
nightly-version-binding would be an instance of a type called something like ParamBinding that is a container specification - the interface would be something like: the container has to write a json file to an expected location with the values it exports, e.g. something like:
apiVersion: tekton.dev/v1alpha1
kind: ParamBinding
metadata:
name: nightly-version-binding
spec:
outputs:
- "versionTag"
container:
- name: generate-release-version
image: alpine/git
workingDir: "/workspace/go/src/github.com/bobcatfish/catservice/"
command:
- /bin/sh
args:
- -ce
- |
set -e
set -x
COMMIT=$(git rev-parse HEAD | cut -c 1-10)
DATE=$(date +"%Y%m%d")
VERSION_TAG="$DATE-$COMMIT"
echo "{\"versionTag\": \"$VERSION_TAG\"}" > "/builder/home/outputs.json"
In the future, we could even extend this to support cases like filtering! For example @wlynch has been looking into use cases where we need additional info from github to make a decision, e.g. "is the user who opened the PR a member of the organization". We could use something like this to provide inputs to the filtering logic.
And this could be extended to allow for caching info - to continue the use case from (3), say we wanted to know if the user who opened the PR is a member of the org - we could have a mechanism that is caching data like this from GitHub.
What I like about (3) and (4) is that it makes it possible to separate the responsibility of obtaining data from the responsibility of acting on that data.
The EventListener Deployment pod needs to be able to reference the TriggerBinding(s)/TriggerTemplates associated with the EventListener in order to be able to create resources with the received event.
Currently, nothing does this.
There are seemingly two ways to do this:

The Deployment pod will likely need to be passed the EventListener ref that created the Deployment (as an env) at the very least, but this could be extended in a brute force way by passing literally everything into the pod. I believe the first way is simpler because, with the brute force approach, if the EventListener or related pieces were updated it would require the Deployment to be updated, and potentially this would need to be modified each time the type definitions change.
Events received by the EventListener addressable service should only be handled if the requester and request are authenticated.
We can use GitHub as an initial use case.
From @khrm :
Github computes the hash of the payload using the above token and then passes the computed hash in the header.
And on the triggers side, if we want to be sure, we compute the hash of the payload using that token (which needs to be stored) and match it with the one received in the header.
Currently, nothing does this.
We have discussed doing this via header matching:
As per the WG discussion, add an http field with an underlying headersMatch field (as showcased in the proposal) to the TriggerBindingSpec. The deployment could intelligently map the received events to the correct TriggerBinding(s), but simply iterating over the bindings is also good as a first pass.
However as mentioned above, more is required than just header matching.
@akihikokuroda included some example code at #78 (comment) to implement this.
Folks who try to follow along with our DEVELOPMENT.md should be able to actually run the commands.
(of course one way to meet this would be to delete the stuff we don't have yet, but I think it would put us in a great position to add all the supporting code we need to just make it work!)
We have no code yet, so most of the contents of the DEVELOPMENT.md would not work.
I copied most of the DEVELOPMENT.md contents from https://github.com/tektoncd/pipeline/blob/master/DEVELOPMENT.md but didn't copy the supporting code (e.g. hack dir).
Some things I noticed that we could probably make work fairly easily, that would set us up nicely:
- Add a hack dir (update-deps, update-codegen)
- Get ko apply -f config/ to work; an easy way to do this would be to copy a lot of the config from https://github.com/tektoncd/pipeline/tree/master/config, maybe without the bit to deploy the controller and webhook cuz we don't have those yet
- Get update-deps.sh (or similar) to work.

In the docs for TriggerBindings, we say:
s, we say:
Although similar, the separation of these two resources was deliberate to encourage TriggerTemplate definitions to be reusable
I don't disagree with this idea and I don't think TriggerTemplate reusability should change.

However, for many users, it may end up being even more important for TriggerBindings to be reusable! TriggerBinding reusability is severely limited in the current design because within the definition, a specific TriggerTemplate is referenced within the TemplateRef.
To give an idea of what I'm saying, imagine a user who is getting tons of GitHub events from various contexts/sources. Depending on the situation, the user has defined several TriggerTemplates, and each will execute different jobs like building code, performing tests, moderating comments, etc. Now, for each of those TriggerTemplates, which are intended to be called upon separately in different contexts, the user must completely redefine or copy-paste the payload->parameter logic, because that information is within TriggerBinding, which is tied to a specific TriggerTemplate via TemplateRef. This could become extremely tedious, especially for users with complex payloads.
Bottom line, I think TriggerTemplates and TemplateRefs should each be reusable for their own reasons, and the way to do this is to move the TriggerBinding→TriggerTemplate connection representation out of the TriggerBinding itself. This could be done by moving the connection information a layer up into EventListener, or by creating a new resource representing that connection and possible conditional logic, and then listing those new connection resources within EventListener.
My proposal would be to move the connection information into EventListener. This would mean removing the templateRef field from TriggerBindings, and then an EventListener could look something like this:
apiVersion: tekton.dev/v1alpha1
kind: EventListener
metadata:
name: simple-listener
spec:
connections:
- triggerBinding: simple-pipeline-binding
triggerTemplate: simple-pipeline-template
- triggerBinding: another-pipeline-binding
triggerTemplate: another-pipeline-template
Conditional/filtering information would also go within the connections field. This way, the payload-variable logic is truly encapsulated and reusable within triggerBinding. I see EventListener as analogous to a PipelineRun or TaskRun, in that it provides project-specific configuration details like connections, filtering, or perhaps default parameter/payload values, while triggerBinding and triggerTemplate can be reused as needed.
The triggering event must be communicated from the event source to the triggers in a secure way, and the event must be authenticated before processing.
I believe that these are necessary in a production environment, so they must be implemented somehow, but my question is whether this is in the scope of triggers.
TLS can be implemented by defining an ingress or route for the event listener service. They terminate the TLS and send the plain event to the event listener.
The authentication needs some additional code in the processing of the event (I just looked at the github webhook case so far). I can think of 2 ways to approach this. The first one is to add a hook in the event listener for plugins to pre-process the events. The other one is to add a sidecar to the event listener pod to receive the event and authenticate it before passing it to the event listener.
Github info:
https://developer.github.com/webhooks/securing/
https://developer.github.com/webhooks/creating/
When an exception occurs, the end user should be able to easily know exactly what happened.
I don't prefer checking the logs of the Pod created by the EventListener controller; it's messy and should be transparent to the end user.
I think the cr of EventListener is a good choice. It's not reusable and the end user knows it.
Check the discussion here: #74 (comment)
I think we need to build a mechanism to make it easy to send exceptions to the cr of EventListener.
Log exceptions to the logs of the Pod created by the EventListener controller.
Users should be able to specify a Conditional against event payload values to ensure that resources are generated dependent on the payload data. For context, webhook POST data from GitHub specifies the source organization/repository/branch. Users will want to make TriggerBindings that execute against particular branches, etc.
Currently, there is no conditional creation.
Add Conditionals to the TriggerBindingSpec. Values should be interpolated from the event into the fields being asserted. Documentation should be included.
The builder package in pipelines is super handy for instantiating instances of the pipeline CRDs in unit tests! Would be great to have something like this (and tests for the builders themselves) in triggers :D
We don't have this yet!
When @vdemeester added this to pipelines it took him quite a long time and I can tell you from experience that refactoring tests to use these builders later is not a ton of fun, so it'd be great if we could get these in sooner rather than later π
The initial types PR did not implement the TriggerResourceTemplate.

The TriggerTemplate has a functional TriggerResourceTemplate and controller logic to handle creating the objects.
This is a Zenhub epic.
This repo should have automated CI running on every PR, just like our other tektoncd projects. Ideally this would be using Tekton itself - in tektoncd/pipeline#922 we are working toward triggering PipelineRuns with Prow, however there are still some barriers (e.g. logs) so in the interests of not blocking this project, we probably want to use Prow at least initially
We currently don't have any CI (or even any code at this point!).
Users of Triggers should have a place to look for docs on each CRD that they'll be interacting with.
We have no docs, but in #10 we'll be adding the types. The only docs we have at the moment are in the design doc.
One decision we may want to make at this point is whether we want to have a markdown file per type, the way that Tekton Pipelines does, or if we want this kind of documentation to live with the code and primarily be visible at https://godoc.org/.
If we decide to rely on godocs, we need to have more carefully curated godocs than https://godoc.org/github.com/tektoncd/pipeline/pkg/apis/pipeline/v1alpha1 does, specifically:
It may also be harder to version godocs and present them nicely in an interface such as https://knative.dev/docs/ or https://skaffold.dev/docs/ (which i think both use netlify?)
When an event is received, the deployment of the EventListener will create at least one PipelineResource and one PipelineRun; those resources then pass out of the hands of Triggers.
That causes a problem: after a while, there will be tens of hundreds of pipeline-related resources left behind.
So I think we need to set up a garbage disposal/housekeeping mechanism for this situation.
I saw #48, but it requires deleting the EventListener to clean up everything. I think the EventListener is designed for long-term running.
Currently, there is nothing for this.
When the webhook notices that one of our trigger CRD types is created, it will invoke each type's validation. When that validation is invoked, it should look at the fields of each CRD and make sure they are valid.
We have the skeleton webhook added, but our validation functions are empty, e.g.
triggers/pkg/apis/triggers/v1alpha1/trigger_template_validation.go
Lines 24 to 30 in 357fee5
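For illustration, here is what a filled-in Validate might check. The field names and error messages here are assumptions based on the example yaml in this doc, not the actual generated types:

```go
package main

import "fmt"

// Param mirrors the params entries in the example TriggerTemplate yaml.
type Param struct {
	Name        string
	Description string
	Default     string
}

// TriggerTemplateSpec is a simplified stand-in for the real spec.
type TriggerTemplateSpec struct {
	Params            []Param
	ResourceTemplates [][]byte // raw resource bodies
}

// Validate checks that required fields are set and param names are unique.
func (s *TriggerTemplateSpec) Validate() error {
	if len(s.ResourceTemplates) == 0 {
		return fmt.Errorf("missing field(s): spec.resourcetemplates")
	}
	seen := map[string]bool{}
	for _, p := range s.Params {
		if p.Name == "" {
			return fmt.Errorf("missing field(s): spec.params[].name")
		}
		if seen[p.Name] {
			return fmt.Errorf("duplicate param name: %s", p.Name)
		}
		seen[p.Name] = true
	}
	return nil
}

func main() {
	empty := &TriggerTemplateSpec{}
	fmt.Println(empty.Validate()) // missing field(s): spec.resourcetemplates

	ok := &TriggerTemplateSpec{
		Params:            []Param{{Name: "gitrevision"}},
		ResourceTemplates: [][]byte{[]byte(`{"kind":"PipelineRun"}`)},
	}
	fmt.Println(ok.Validate()) // <nil>
}
```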
This issue might not make any sense yet since none of the types actually do anything, and we're still working on adding arbitrary event payloads in #19, so feel free to close this if I'm totally off base XD
@akihikokuroda brought up in #78 that in order to use triggers securely, users need to use TLS for communication b/w the EventListener and the requester.
In further discussion it was agreed that this is up to the person setting up triggers:
I agree about the TLS. When the user creates the EventListener CRD, he/she should create the ingress route and use its external address in the webhook configuration.
However we should make this easy on the user by including in our docs (maybe install instructions?) at least recommendations / links around this.
We have no docs about this.
If we're including conditionals and header matching in TriggerBinding, then I would propose renaming it to something like TriggerSifter (open to alternatives).

I suggest this because the TriggerBinding name was created to reflect how it binds values from the event payload onto the TriggerTemplate; however it will now also be doing filtering and header matching. I think the name should change to reflect the fact that more processing is done than just binding values.
Any resources created through Triggers should be properly labelled and annotated to specify their origins.
Currently, nothing does this.
Documentation should be included.
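A sketch of what the labelling could look like when the EventListener creates a resource; the label keys here are hypothetical, not an agreed convention:

```go
package main

import "fmt"

// addOriginLabels stamps origin metadata onto a resource's labels map so
// created resources can be traced back to the EventListener and
// TriggerTemplate that produced them.
func addOriginLabels(labels map[string]string, eventListener, triggerTemplate string) map[string]string {
	if labels == nil {
		labels = map[string]string{}
	}
	labels["triggers.tekton.dev/eventlistener"] = eventListener
	labels["triggers.tekton.dev/triggertemplate"] = triggerTemplate
	return labels
}

func main() {
	labels := addOriginLabels(nil, "listener", "simple-pipeline-template")
	fmt.Println(labels["triggers.tekton.dev/eventlistener"])   // listener
	fmt.Println(labels["triggers.tekton.dev/triggertemplate"]) // simple-pipeline-template
}
```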
When an EventListener resource is created, we expect its status section to have information about the resulting deployment and the endpoint this deployment can be reached at.
The event_listener_types.go file defines the EventListener resource to have an empty status section.
@bobcatfish mentioned here that we can use the status section in our tests.
Please use this issue to discuss what information should go in the EventListener's status. We haven't discussed this yet, but I think some relevant information we should consider includes:
- duckv1beta1.Status
Any examples we add to the repo should have at least a base level of testing applied to them with each PR, e.g. making sure the structure is sound (even if we don't verify the behavior).
In #10 we'll be adding types and examples, but we don't yet have any tests for them.
Once we have #36 the next step is for the running binary to expose a Knative addressable endpoint that can be invoked with a CloudEvent!
It doesn't have to actually do anything with the CloudEvent at this point but maybe could log something fun?
Once we have #36 we'll have a running pod but it won't be listening for CloudEvents yet
If we have #37 at this point, we should expand the end to end test to hit the running EventListener with a predefined CloudEvent. If not, we'll need to create a follow up issue in the #33 epic to add that to the end to end test.
Every PR should have all end to end tests (i.e. tests in the test
dir which use the e2e
build label) executed against it.
We don't have any code, or any automation :O
The Triggers webhook pod should not crash.
The Triggers webhook pod crashes with these logs:
{"level":"info","caller":"logging/config.go:100","msg":"Successfully created the logger.","knative.dev/jsonconfig":"{\n \"level\": \"info\",\n \"development\": false,\n \"sampling\": {\n \"initial\": 100,\n \"thereafter\": 100\n },\n \"outputPaths\": [\"stdout\"],\n \"errorOutputPaths\": [\"stderr\"],\n \"encoding\": \"json\",\n \"encoderConfig\": {\n \"timeKey\": \"\",\n \"levelKey\": \"level\",\n \"nameKey\": \"logger\",\n \"callerKey\": \"caller\",\n \"messageKey\": \"msg\",\n \"stacktraceKey\": \"stacktrace\",\n \"lineEnding\": \"\",\n \"levelEncoder\": \"\",\n \"timeEncoder\": \"\",\n \"durationEncoder\": \"\",\n \"callerEncoder\": \"\"\n }\n}\n"}
{"level":"info","caller":"logging/config.go:101","msg":"Logging level set to info"}
{"level":"warn","caller":"logging/config.go:69","msg":"Fetch GitHub commit ID from kodata failed: open /var/run/ko/HEAD: no such file or directory"}
{"level":"info","logger":"webhook","caller":"webhook/main.go:64","msg":"Starting the Configuration Webhook","knative.dev/controller":"webhook"}
{"level":"info","logger":"webhook","caller":"webhook/webhook.go:315","msg":"Found certificates for webhook...","knative.dev/controller":"webhook"}
{"level":"error","logger":"webhook","caller":"webhook/webhook.go:324","msg":"failed to register webhook","knative.dev/controller":"webhook","error":"failed to create a webhook: mutatingwebhookconfigurations.admissionregistration.k8s.io \"triggers-webhook.tekton.dev\" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: no RBAC policy matched, <nil>","stacktrace":"github.com/tektoncd/triggers/vendor/knative.dev/pkg/webhook.(*AdmissionController).Run\n\t/root/go/src/github.com/tektoncd/triggers/vendor/knative.dev/pkg/webhook/webhook.go:324\nmain.main\n\t/root/go/src/github.com/tektoncd/triggers/cmd/webhook/main.go:106\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:200"}
{"level":"fatal","logger":"webhook","caller":"webhook/main.go:107","msg":"Error running admission controller{error 25 0 failed to create a webhook: mutatingwebhookconfigurations.admissionregistration.k8s.io \"triggers-webhook.tekton.dev\" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: no RBAC policy matched, <nil>}","knative.dev/controller":"webhook","stacktrace":"main.main\n\t/root/go/src/github.com/tektoncd/triggers/cmd/webhook/main.go:107\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:200"}
ko apply -f config/
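For reference, this class of error usually means the webhook's ClusterRole lacks permission on the finalizers subresource of the resource its ownerReference points at. Below is a hedged sketch of the kind of rules that may be missing; the role name and exact verbs are assumptions, not the repo's actual config.

```yaml
# Hypothetical excerpt of a ClusterRole for the triggers webhook.
# Names and verb lists are illustrative; check config/ for the real ones.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tekton-triggers-webhook-admin
rules:
  # The webhook creates and updates its own MutatingWebhookConfiguration.
  - apiGroups: ["admissionregistration.k8s.io"]
    resources: ["mutatingwebhookconfigurations"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  # Setting blockOwnerDeletion on an ownerReference to the webhook's
  # Deployment requires update on the deployments/finalizers subresource.
  - apiGroups: ["apps"]
    resources: ["deployments/finalizers"]
    verbs: ["update"]
```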
It'll be easier to iterate on docs and we'll be in a better position docs-wise in the future if we start with some basic docs now that we can continue to add to.
In #2 we'll add the bare minimum required docs that each Tekton project needs to have, but those are more for the contributors and maintainers of the project, and less about users of the project.
The TriggerBinding should be reusable for the same event and the same pipeline.
It should (maybe!) be possible for users to provide values to TriggerTemplate from sources other than events
from #87
Users can provide static values to the TriggerTemplate by putting them in the params in the TriggerBinding instead of taking them from attribute values in the event.
This method makes the TriggerBindings non-reusable in these cases.
For these cases, the user needs to create a different EventListener and TriggerBinding for each credential or repository.
Adding the capability to pass param values from the EventListener to the TriggerBinding would help: the EventListener still has to be created for each case, but the same TriggerBinding can be used for both cases.
After discussing with @imjasonh and @dibyom I'm thinking it would make sense to restrict the types of Resources a user can create with TriggerTemplates to just the types that we know are needed for executing Tekton Pipelines (e.g. PipelineResources, Conditions, PipelineRuns, etc.).
I think this makes sense because:
- we know we want to create PipelineRuns (maybe TaskRuns too), but it's not clear we want to create anything else, and afaik we don't want to replace other generic event triggering solutions such as Knative Eventing or Argo Events
- it would let us validate the contents of TriggerTemplates before waiting for the EventListener to be invoked

We explicitly decided that it should be possible for a user to create any Resource in a kube cluster, e.g. they could create a Deployment (or a Role!) as long as the service account had permission to.
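If the types were restricted as proposed, the check itself would be simple. Here is a sketch; the allowed set is illustrative (taken from the kinds mentioned above), and the function name is hypothetical.

```go
package main

import "fmt"

// allowedKinds is a hypothetical allowlist of resource kinds a
// TriggerTemplate may create, per the restriction discussed above.
var allowedKinds = map[string]bool{
	"PipelineRun":      true,
	"TaskRun":          true,
	"PipelineResource": true,
	"Condition":        true,
}

// validateTemplateKinds rejects any resource kind outside the allowlist,
// which is what TriggerTemplate validation could call at admission time.
func validateTemplateKinds(kinds []string) error {
	for _, k := range kinds {
		if !allowedKinds[k] {
			return fmt.Errorf("kind %q may not be created by a TriggerTemplate", k)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateTemplateKinds([]string{"PipelineRun", "Deployment"}))
}
```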
Currently, the TriggerBindingSpec is defined as follows:
// +k8s:deepcopy-gen=true
type TriggerBindingSpec struct {
	TemplateBindings []TemplateBinding `json:"templatebindings,omitempty"`
}

// +k8s:deepcopy-gen=true
type TemplateBinding struct {
	TemplateRef TriggerTemplateRef   `json:"templateref,omitempty"`
	Event       TriggerTemplateEvent `json:"triggertemplateevent,omitempty"`
	Params      []pipelinev1.Param   `json:"params,omitempty"`
}
This means we have multiple distinct event bindings within a TriggerBinding. Currently, we have a one-to-many mapping from EventListener to TriggerBinding as well as a one-to-many mapping from TriggerBinding to TriggerTemplates. As I understood it, we wanted to support this?
I believe this may not be the best design decision after considering issues like naming modifiers. To be brief, there will be no way to refer to resources being generated between templates (on the same TriggerBinding).
As much as it makes sense (our design should be as simple as possible and no simpler! :D) we should try to make our CRDs reusable (e.g. we wanted to make TriggerBindings more reusable b/c we figured otherwise folks could end up having to create the same TriggerBindings over and over again).

In light of #55 (see "actual behaviour" below), I propose that:
- EventListeners be responsible for bringing the pieces together
- TriggerBindings be responsible only for binding values from the event payload (their main job)

We could add a Filter type, maybe even something as specific as HTTPFilter, something like:
apiVersion: tekton.dev/v1alpha1
kind: HTTPFilter
metadata:
  name: requester-is-github-push
spec:
  inputParams:
    - name: secret
      description: The git webhook secret
  headerMatches:
    - X-GitHub-Event: push
    - X-GitHub-Secret: $(inputParams.secret)
Or we could take this one step further and have both an HTTPFilter and HTTPValidation, but that feels like it's going too far?
Then TriggerBinding would become just:
apiVersion: tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: my-triggerbinding
spec:
  # I removed conditionals also!
  outputParams:
    - name: gitbranch
      value: $(event.ref)
    - name: gitrevision
      value: $(event.head_commit.id)
    - name: gitrepositoryurl
      value: $(event.repository.url)
At this point TriggerBinding might make sense to rename to something like EventParser?
And then EventListener would be:
kind: EventListener
metadata:
  name: my-eventlistener
spec:
  bindings:
    - filterRef: requester-is-github-push
      params:
        - name: secret
          value: somehow-from-kube-secret
      triggerTemplateRef: my-triggertemplate
      triggerBindingRef: my-triggerbinding
      conditions:
        - name: is-master
          params:
            - $(my-triggerbinding.gitbranch): refs/heads/master
TriggerBindingRef would be responsible for extracting values that are available to conditions, and maybe filters? Or if that's crazy then conditions can go back into TemplateBinding :D
Or something like that! What I do like though is keeping the responsibilities separate, and using EventListener to bring them together.
In our most recent design (#55 (comment)) we have added some values to the TriggerBinding which are for filtering (i.e. only create these resources if conditions is true) and validation (i.e. only create these resources if headerMatches is true; if it isn't, this indicates that the requester is bogus :O - this can be viewed as a type of filtering too?).
This means TriggerBinding now has 3 jobs:
- binding values via outputParams
- filtering via conditions
- validation via headerMatches
apiVersion: tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: my-triggerbinding
spec:
  inputParams:
    - name: secret
      description: The git webhook secret
    - name: branch
      description: The git branch
      default: master
  headerMatches:
    - X-GitHub-Event: push
    - X-GitHub-Secret: $(inputParams.secret)
  conditionals:
    - $(event.ref): refs/heads/$(inputParams.branch)
  outputParams:
    - name: gitrevision
      value: $(event.head_commit.id)
    - name: gitrepositoryurl
      value: $(event.repository.url)
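The $(inputParams.branch) substitution above could be implemented as a simple expansion pass over field values. A sketch follows; the syntax and the leave-unknown-references-alone semantics are assumptions drawn from the example, not an agreed design.

```go
package main

import (
	"fmt"
	"regexp"
)

// paramRef matches $(inputParams.<name>) references in a field value.
var paramRef = regexp.MustCompile(`\$\(inputParams\.([a-zA-Z0-9-]+)\)`)

// expandInputParams replaces each $(inputParams.x) with its supplied value,
// leaving unknown references untouched so validation can flag them later.
func expandInputParams(s string, params map[string]string) string {
	return paramRef.ReplaceAllStringFunc(s, func(m string) string {
		name := paramRef.FindStringSubmatch(m)[1]
		if v, ok := params[name]; ok {
			return v
		}
		return m
	})
}

func main() {
	out := expandInputParams("refs/heads/$(inputParams.branch)", map[string]string{"branch": "master"})
	fmt.Println(out) // refs/heads/master
}
```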
Related issues:
People who want to use Triggers should have an obvious place to look for some quick start docs on how they can get up and running, e.g.:
In #2 we'll add a DEVELOPMENT.md for contributors, but we won't have any getting started docs for users.
May want to use https://github.com/tektoncd/pipeline/blob/master/docs/tutorial.md as an example.
Users should specify a service account (or a role binding?) in the EventListener to be used for resources created per a specific TriggerTemplate.
Currently, there is no implementation.
Currently, the EventListener creates a Deployment and a LoadBalancer Service (to reference the deployment), which enables the pods to be addressable and receive external traffic. The service account tied to the EventListener pod has the permissions necessary to read Triggers resources, but how can it get the permissions necessary for creating the resources defined in a particular TriggerTemplate, which are arbitrary?
This proposal addresses the specifications as outlined in the initial design proposal (https://docs.google.com/document/d/1fngeNn3kGD4P_FTZjAnfERcEajS7zQhSEUaN7BYIlTw/edit#heading=h.iyqzt1brkg3o), namely:
This proposal takes the perspective of getting things out the door "as quickly as possible", but not at the cost of diminishing the Triggers design. Rather, an initial release might provide insights from other users and have us consider things we would not have otherwise. This could help build a base to iterate from.
From the CloudEvents specification:

Events are everywhere. However, event producers tend to describe events differently.

CloudEvents are a standardized way to format messages. The project's justification is that without such a standardized format, "developers must constantly re-learn how to consume events". This seems like a good idea to me, but CloudEvent adapters are where I draw skepticism. The specification mentions adapters as follows:
Since not all event producers generate CloudEvents by default, there is documentation describing the recommended process for adapting some popular events into CloudEvents, see CloudEvents Adapters.
Since the addressable endpoint provided by an EventListener is a terminal (the event is consumed and not emitted further), it seems unclear why Triggers should convert the payload into a CloudEvent. Although adapters could be provided for the "popular events" (e.g., GitHub, GitLab, DockerHub), since these are likely the primary concern, this already seems like a bespoke implementation for events within Triggers. I understand anything bespoke as the inverse of the intent behind the project, since the power of Triggers is in supporting any event. This is accomplished through user-provided event bindings, where the structure of the data must be known. This is the only stipulation, but it allows for the intended decoupling from vendors. CloudEvents is working to standardize payload structures, but in the end, the payload is always going to be pushed down somewhere (https://github.com/cloudevents/sdk-go/blob/master/pkg/cloudevents/event.go#L13) because producers produce different things, which ties back into the idea of handling arbitrary payloads.
If Triggers is coupled to CloudEvents, it will require that we enumerate an adapter for each event type. Assuming this path, one further issue to consider is how to determine whether an event is actually a CloudEvent, because otherwise these adapters will wrap CloudEvents into yet another CloudEvent, which would be unintended. This adapter pattern hinges on the assumption that incoming events are not CloudEvents. First-class CloudEvents also have the side effect that users who are familiar with the payload they should be receiving now have to know to go through the CloudEvent envelope rather than the raw data when constructing event bindings.
TriggerBinding event bindings necessitate that the structure of the data is already known. In this way, it should not matter whether the event is a CloudEvent. Popular events like GitHub and GitLab (and likely others) provide headers that best convey the concept of an "event type", and these can be used agnostic of CloudEvents. This can address the concern of multiple TriggerBindings with distinct events on a single EventListener.
With optional headers to match against event types, TriggerBindings could look as follows:
apiVersion: tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: simple-pipeline-binding
  namespace: default
spec:
  templateRef:
    name: simple-pipeline-template
  serviceAccount: default
  headerMatch:
    - X-GitHub-Event: issues
  params:
    - name: gitrevision
      value: ${event.head_commit.id}
    - name: gitrepositoryurl
      value: ${event.repository.url}
- Triggers should not provide first-class support for CloudEvents
- The value of the CloudEvent abstraction to Triggers is unclear

The pods generated by the EventListener need to be (at least conditionally) addressable external to the cluster. One way to accomplish this is to use Knative Eventing. Knative uses Istio as the underlying technology to expose an address. In any case, the underlying resources created to expose the external endpoint devolve to an Ingress and/or a LoadBalancer Service.
For the sake of decoupling from all unnecessary components, a simplistic solution would involve creating a LoadBalancer Service that refers to the Deployment (as underway here) so that it can properly consume events. Most cloud providers have baked-in implementations for LoadBalancers as well, which allows us to leverage this "without paying". To keep things configurable, we could (later or otherwise) allow for an embedded specification of a service rather than making all the assumptions about how the LoadBalancer should be configured.
Use LoadBalancer Services to make the EventListener pods addressable.