triggers's People

Contributors

abayer, afrittoli, akihikokuroda, alangreene, bigkevmcd, bobcatfish, chmouel, dependabot[bot], dibyom, dlorenc, dorismeixing, dprotaso, gabemontero, guillaumerose, iancoffey, imjasonh, jmcshane, khrm, lbernick, marcelmue, mattmoor, my-git9, navidz, nikhil-thomas, piyush-garg, savitaashture, tualeron, vdemeester, vincent-pli, wlynch

triggers's Issues

Get unit tests running against PRs

Expected behavior

Every PR should have all unit tests (i.e. go test ./...) executed against it.

Actual behavior

We don't have any code, or any automation :O

Create project justification docs

Expected behavior

When users come to the repo for this project, they should be able to easily find an explanation of why the project exists and how it relates to other projects.

Actual behavior

This is something we don't yet have for Tekton Pipelines, which results in questions such as tektoncd/pipeline#644. Having docs like this from the start would probably save a lot of time and answer a lot of user questions (not to mention clarify our own thoughts!).

Create a skeleton controller that does nothing

It will be easier for people to start adding functionality if we add an empty controller that doesn't actually reconcile anything, just:

  • Has all the config needed to build and deploy with ko
  • Will start running and invoke a reconcile loop of some kind that does nothing (maybe just logs hello world?)

Additional info

Continuing to use knative/pkg is probably our best bet for now, in which case we can get a lot of this by more or less copying https://github.com/tektoncd/pipeline/tree/master/cmd/controller.
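
In the meantime, here's a minimal sketch of what that could look like (illustrative only; the real entrypoint would wire up knative/pkg informers and a reconciler):

package main

import (
	"log"
	"time"
)

func main() {
	log.Println("starting skeleton triggers controller")
	// A real controller would process items from informer-fed workqueues;
	// the skeleton just proves the binary builds with ko, deploys, and runs.
	for {
		log.Println("hello world: reconcile tick (nothing to reconcile yet)")
		time.Sleep(30 * time.Second)
	}
}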

Controller should create EventListener pod on creation

Expected Behavior

When an EventListener CRD is created, the triggers controller should respond by creating a running container instance that can be invoked with CloudEvents.

Eventually this running container would expose a Knative eventing addressable endpoint which could be invoked with CloudEvents and would create resource instances in response, but in the context of this issue, the goal is just to:

  • Start the container on EventListener creation
  • Delete the container on EventListener deletion (thought: should it be allowed to finish whatever it's doing before it is deleted? probably! I guess this depends on the design of the running container!)

Actual Behavior

Creating an EventListener currently does nothing :D

Additional Info

@bobcatfish, writing this issue, does not know the exact form that this running container should take. She knows it should be a container inside a pod, but is not sure if it should be a StatefulSet, a Deployment, or something else :D Anyone with more context (@iancoffey @ncskier @vtereso ) feel free to replace this with something definitive, or whoever takes on the issue can decide!
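
Whatever workload type we pick, here is a rough sketch of the create step, assuming plain client-go (current signatures) and a Deployment-backed listener; all names are hypothetical and owner references are omitted:

package reconciler

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	apierrs "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// reconcileEventListener ensures a Deployment backing the named EventListener
// exists. A real implementation would set an owner reference so the
// Deployment is garbage collected when the EventListener is deleted.
func reconcileEventListener(ctx context.Context, kc kubernetes.Interface, name, namespace, image string) error {
	_, err := kc.AppsV1().Deployments(namespace).Get(ctx, name, metav1.GetOptions{})
	if err == nil {
		return nil // already running
	}
	if !apierrs.IsNotFound(err) {
		return err
	}
	labels := map[string]string{"eventlistener": name}
	replicas := int32(1)
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace, Labels: labels},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "event-listener", Image: image}},
				},
			},
		},
	}
	_, err = kc.AppsV1().Deployments(namespace).Create(ctx, d, metav1.CreateOptions{})
	return err
}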

Other thoughts:

  • This means adding an image which we can build for the running EventListener itself!
  • The logic in this image should have tests
  • The image should have a README in cmd

Get linting running against repo

Expected behavior

Every PR should have logic applied to it to make sure that:

  • Go code is formatted as expected
  • Markdown files are formatted as expected and links they contain work

Actual behavior

We don't have any code, or any automation :O

Additional info

  • These are called "build-tests" in the prow config, which generally refers to linting and building: basically a smoke test / linting without actually running any tests. For now we should probably just copy this and do exactly what it does.
  • The config that prow uses to determine what to run against what repo is in this config.yaml and the instructions for updating it are in this readme (you might need the assistance of someone with cluster access to apply the config)

Task trigger within a pipeline

This is a proposed feature that would help orchestrate asynchronous pipelines.

The use case is as follows: as a user, I want to be able to include in a pipeline a component that is executed asynchronously and possibly externally, i.e. Tekton on a different cluster or something that is not Tekton at all.

For instance, a pipeline may include a Task that sends a cloud event on completion. The cloud event triggers an external system that does a certain amount of processing, and sends a callback via cloud event. In the pipeline below, Tasks A and C are started in parallel. Task B is only executed once A is complete. Task D is only executed when a cloud event is received. In this specific example, the event is a callback in response to an event sent by Task C to an external system.

[diagram: task_trigger_in_pipeline]

In this example the pipeline helps us orchestrate the execution of an external task, since the overall pipeline condition, exposed for the pipeline resource via the k8s API, includes the execution and result of a process on an external system.

The UX model I had in mind for this feature is that a CloudEventPipelineResource could be extended to be used as an input, include a reference to a template, and Tekton would set up a binding implicitly for the Task that has that input resource.

[Epic] Initial controller + webhook

This is a zenhub epic about adding some baseline functionality to our initial controller and webhook, namely:

  • Webhook should actually validate the types as we've added them so far
  • Webhook and controller are able to handle arbitrary event payloads
  • Controller should create a running pod when an EventListener resource is created (doesn't need to actually do anything yet tho)

Design for cron triggering

Expected Behavior

Folks will want to run Pipelines periodically, e.g. every night or every few hours they may want to trigger CI. Some use cases:

  • To run long running Pipelines that take too long to run on every commit
  • To pull in external changes on a regular basis (e.g. new versions of dependencies)
  • To do nightly releases

Actual Behavior

We've been discussing and designing solutions for event based triggering where the event comes from an external system like GitHub.

Hopefully this design will be super similar :D

Additional Info

Definitely not something we need for our first milestone, but people have been asking for it, so we can track it here and people can add details if needed :D

Create Resource(s) from Event(s)

Expected Behavior

The EventListener Deployment pod should be able to create resources as defined in the associated TriggerTemplate(s) when receiving an event

Actual Behavior

Currently, nothing does this.

Additional Info

When the event/message is received, it should be parsed to populate the corresponding TriggerBinding(s), which should then inject their values into the TriggerTemplate resourceTemplates. After value interpolation, each resource should be POSTed to the K8s APIServer.
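
A rough sketch of the interpolation step; the naive string substitution and function shape are illustrative only, not the real implementation:

package main

import (
	"fmt"
	"strings"
)

// applyBindings substitutes bound parameter values into a resource template.
// The rendered resource would then be POSTed to the K8s APIServer.
func applyBindings(template string, params map[string]string) string {
	for name, value := range params {
		template = strings.ReplaceAll(template, fmt.Sprintf("$(params.%s)", name), value)
	}
	return template
}

func main() {
	tmpl := `{"kind":"PipelineRun","metadata":{"name":"run-$(params.gitrevision)"}}`
	fmt.Println(applyBindings(tmpl, map[string]string{"gitrevision": "abc123"}))
}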

Create an end to end test (in go) that creates an EventListener

Expected Behavior

We should have a suite of end to end tests that we can use to test triggers functionality. We can start with a simple one that:

  1. Creates an EventListener
  2. Confirms it is created
  3. Deletes the EventListener
  4. Confirms it is deleted

(If #36 happens first, this test can actually look for a running container, otherwise this will be a very light test!)

Actual Behavior

We've got a dummy test but not actually invoking any of our libs yet :D

Additional Info

One question at this point is whether we want to use the knative test lib. I think it's probably simpler to use it, e.g.:

https://github.com/tektoncd/pipeline/blob/b91640d7db3d2de6fcb8c30c618a417047a03c12/test/embed_test.go#L69-L70

However, it's not that complicated to implement our own (this is an example of what's required to hit most of the common stuff we'd need to do): https://github.com/bobcatfish/testing-crds/blob/master/client-go/test/cat_test.go#L136

Up to the implementer of this issue πŸ˜‰ and also something we can easily change later!

Users should only be able to create TriggerTemplates for resources they are permissioned to use

In the current design, users would be able to use the spec.resourceTemplates field to escalate their degree of write permission within the cluster; users of TriggerTemplate are able to create any resource that the service account responsible for creating resources in a TriggerTemplate has permission to create. For example, if a user does not have access to create Pods, but the service account servicing TriggerTemplate does, a user could put a Pod into spec.resourceTemplates and have it created.

We should protect against this form of privilege escalation by having a validating webhook that uses information from the authorizer (this is part of the payload webhooks are passed) to make SAR checks that establish that the user creating or updating a TriggerTemplate has permission to create those resources in the cluster.
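
A sketch of what such a SAR check could look like with client-go (current signatures; presumably the webhook would run this for each resource kind appearing in spec.resourceTemplates):

package webhook

import (
	"context"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// canCreate asks the API server, via SubjectAccessReview, whether the given
// user may create the named resource in the namespace.
func canCreate(ctx context.Context, kc kubernetes.Interface, user, namespace, group, resource string) (bool, error) {
	sar := &authv1.SubjectAccessReview{
		Spec: authv1.SubjectAccessReviewSpec{
			User: user,
			ResourceAttributes: &authv1.ResourceAttributes{
				Namespace: namespace,
				Verb:      "create",
				Group:     group,
				Resource:  resource,
			},
		},
	}
	resp, err := kc.AuthorizationV1().SubjectAccessReviews().Create(ctx, sar, metav1.CreateOptions{})
	if err != nil {
		return false, err
	}
	return resp.Status.Allowed, nil
}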

Define policy for modification of TriggerBinding and TriggerTemplate

Expected Behavior

TriggerBinding and TriggerTemplate are designed for reuse, which means a single TriggerBinding may be shared by several EventListeners.
So if a modification occurs on a TriggerBinding or TriggerTemplate, what should the related EventListeners do?

  • Ignore (dev env)
  • Stop working and send notification to avoid potential exception (production env)

I think we should enhance the EventListener type definition as below:

type Trigger struct {
	// IgnoreChange is the new field proposed here: it defines the policy
	// to apply when a referenced TriggerBinding or TriggerTemplate changes.
	IgnoreChange    bool               `json:"ignoreChange,omitempty"`
	TriggerBinding  TriggerBindingRef  `json:"binding"`
	TriggerTemplate TriggerTemplateRef `json:"template"`
}

Actual Behavior

Nothing handles this yet.

Additional Info

To achieve this, the EventListener controller should watch both TriggerBinding and TriggerTemplate.

Add initial type definitions

Expected behavior

We should have the structure of the types we want to add for Tekton Triggers clearly defined in this repo and agreed on by all owners:

  • EventListener
  • TriggerBinding
  • TriggerTemplate

Each type should:

  • Be defined in go such that it can be fed to update-codegen.sh
  • Be agreed on by all owners
  • Be clearly documented in comments (so that it would be fully documented in godoc form)
  • Have an example in an examples dir (even if we aren't testing this yet, see #11)

Actual behavior

The structure of the types basically just exists as example YAML in the design doc, which means that:

  1. the actual structure and purpose of each field is not clearly defined
  2. each person reviewing + authoring may have a different idea of what the field actually is

Additional info

Design for these types lives in https://docs.google.com/document/d/1fngeNn3kGD4P_FTZjAnfERcEajS7zQhSEUaN7BYIlTw/edit (visible to members of tekton-dev)

Handle arbitrary event payloads

Expected Behavior

The main strength of tektoncd/triggers, and more specifically of a triggerBinding, is that it acts as the glue enabling you to extract fields of an event payload to be used in a triggerTemplate to create arbitrary resources.

Current Behavior

We don't have anything for this

Background

Since events can have any structure, we will not always have a definition for the struct to unmarshal into, as was done prior here. Further, likely only a few of the many fields will be utilized within a triggerTemplate, so unmarshalling the entire structure would be costly/wasteful.

Ideas

  • Have a code gen step that will inject the library and specific type into the triggerBinding pods/listeners AOT, but this seems unnecessary, costly (as mentioned previously), and in conflict with the idea of having a dynamic solution.
  • I made a quick prototype using the standard encoding/json library and map[string]interface{} (a minimal version is sketched below), but there are some tools out there (https://github.com/tidwall/gjson seems like a good option) that are much more performant.
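
A minimal version of that prototype might look like the following; illustrative only, and gjson or similar would likely be faster:

package main

import (
	"encoding/json"
	"fmt"
)

// extract walks a parsed JSON payload along a field path such as
// ("repository", "url"), with no generated struct for the event type.
func extract(payload []byte, path ...string) (interface{}, error) {
	var doc map[string]interface{}
	if err := json.Unmarshal(payload, &doc); err != nil {
		return nil, err
	}
	var cur interface{} = doc
	for _, key := range path {
		m, ok := cur.(map[string]interface{})
		if !ok {
			return nil, fmt.Errorf("%q is not an object", key)
		}
		if cur, ok = m[key]; !ok {
			return nil, fmt.Errorf("no field %q", key)
		}
	}
	return cur, nil
}

func main() {
	event := []byte(`{"repository": {"url": "https://github.com/tektoncd/triggers"}}`)
	v, _ := extract(event, "repository", "url")
	fmt.Println(v) // https://github.com/tektoncd/triggers
}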

K8S-styled templating - remove triggers' XYZTemplates, template types, template definitions, and variable interpolations from trigger controllers and triggers API

As of last week's working group meeting, Tekton is following the K8S style of templating going forward, which can be interpreted as specifically avoiding a few things. https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/declarative-application-management.md#what-about-parameterization

2/3rds of that K8S document applies directly to how Tekton will want to pursue templates and reuse of items. I am trying to find exceptions to the rules laid out in that document, though I have found none so far. Everything that can be expressed with a template within our API/controller could instead be pre-processed outside of Tekton's view and applied as API calls to Tekton.

In fact, if we follow a couple of the models for apply -f / apply -k and merge YAML/JSON in a way transparent to K8S, there is a chance to additionally unroll patches and recover original values if desired, though that is not going to be required for now.

Where am I going with this?

Tekton will avoid templating most forms of variable interpolation at the API layer for Tekton.

This will also mean avoiding creating any objects named "XYZTemplate" and/or "XYZTemplateType" in favor of allowing external tools to be chosen by a developer for generating the API calls/YAML/JSON files.

I propose any templating options be pushed to a CLI wrapper layer that makes the user experience for Tekton nicer, though leaves the controllers and API simpler for Tekton itself.

So, I think I am not going too far out there to state: Let us not add too many intelligent forms of template logic into the controllers.

It may be argued that Pipelines and PipelineRuns also follow a similar template logic (PipelineRun is just providing a patch of values over the Pipeline template), though that is a discussion for the pipeline team rather than for Triggers.

It follows to me that Tekton can be simpler and still achieve its goals. What to consider for removal? Anything performing variable interpolation/substitution that the user could have provided directly instead. Additionally, most other items that might be interpolated can potentially be generated and turned into a full API call / fully filled-out YAML or JSON. If not, then the extra state can be exposed via a ConfigMap, a Secret, or by creating a resource/file/envvar for ingestion further along (each with different implications for debugging). I am not 100% sure how we might emulate or reuse the DownwardAPIVolumeFiles variant of handing information along, though it is also a pattern to consider from K8S.

I will first refer to https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/declarative-application-management.md for consideration

https://docs.google.com/document/d/1h_3vSApIsuiwGkrqSiegi4NVaYG4oVzBquGAhIN6qGM

Add Support For Naming Modifiers

Expected Behavior

Resources that are created by TriggerTemplates will clash if the names defined in the resources are static. Add some postfix/modifier (uuid/timestamp) to resources to prevent naming collisions.

Pipelines uses UUIDs, so it's probably best to follow that convention.
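
A quick sketch of appending a random suffix using apimachinery's rand helper; whether we use this or full UUIDs is up for discussion:

package main

import (
	"fmt"

	utilrand "k8s.io/apimachinery/pkg/util/rand"
)

// suffixName appends a short random suffix, similar in spirit to how
// Kubernetes' generateName avoids collisions.
func suffixName(base string) string {
	return fmt.Sprintf("%s-%s", base, utilrand.String(5))
}

func main() {
	fmt.Println(suffixName("simple-pipeline-run")) // e.g. simple-pipeline-run-xk2bq
}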

Actual Behavior

Currently, nothing does this.

Additional Info

If all resources get an automatic and unknown name modifier (like a postfix), it will be impossible to refer to them from within the TriggerTemplate. For this reason, it is important that the same mechanism that uniquely names resources also allows this (sort of uuid) to be exposed for usage throughout the TriggerTemplate.

This can be accomplished by exposing another interpolation variable within the TriggerTemplate or also extending the TriggerTemplate struct.

For example, a variable such as ${id} (or any such name like ${unique}) could be utilized throughout a TriggerTemplate as follows:

apiVersion: tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: simple-pipeline-template
  namespace: default
spec:
  params:
    - name: gitrevision
      description: The git revision
      default: master
    - name: gitrepositoryurl
      description: The git repository url
    - name: namespace
      description: The namespace to create the resources
  resourcetemplates:
    - apiVersion: tekton.dev/v1alpha1
      kind: PipelineResource
      metadata:
        name: git-source${id.one}
        namespace: ${params.namespace}
        labels:
            triggertemplated: true
      spec:
        type: git
        params:
        - name: revision
          value: ${params.gitrevision}
        - name: url
          value: ${params.gitrepositoryurl}
    - apiVersion: tekton.dev/v1alpha1
      kind: PipelineRun
      metadata:
        name: simple-pipeline-run${id.two}
        namespace: default
        labels:
            triggertemplated: true
      spec:
        pipelineRef:
            name: simple-pipeline
        trigger:
          type: event
        resources:
        - name: git-source
          resourceRef:
            name: git-source${id.one}

In this way, some interpolation base (like ${id}) would have fields that could be implicitly referenced throughout the template (maybe as valid here). In any case, this is similar to how ${params} is functioning.

To be more explicit, we could add another TriggerTemplateSpec field like Params where users would specify exactly what unique field names they want to expose. For example:

spec:
  params:
    - name: gitrevision
      description: The git revision
      default: master
    - name: gitrepositoryurl
      description: The git repository url
    - name: namespace
      description: The namespace to create the resources
  id:
    - idOne
    - idTwo

These are not functionally surfaced to the TriggerBinding, however, so maybe creating explicit fields is annoying and the verbosity isn't good/as valuable?

This could also be done in a similar way, but instead of providing access to some randomly generated variable that is used for naming, resource names could be automatically modified, and the resources themselves (which still functionally evaluate to their names) could have some identifier to be used within the template. Regarding the prior example, it could look something like this:

apiVersion: tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: simple-pipeline-template
  namespace: default
spec:
  params:
    - name: gitrevision
      description: The git revision
      default: master
    - name: gitrepositoryurl
      description: The git repository url
    - name: namespace
      description: The namespace to create the resources
  resourcetemplates:
    - apiVersion: tekton.dev/v1alpha1
      kind: PipelineResource
      metadata:
        name: git-source
        namespace: ${params.namespace}
        labels:
            triggertemplated: true
      spec:
        type: git
        params:
        - name: revision
          value: ${params.gitrevision}
        - name: url
          value: ${params.gitrepositoryurl}
    - apiVersion: tekton.dev/v1alpha1
      kind: PipelineRun
      metadata:
        name: simple-pipeline-run
        namespace: default
        labels:
            triggertemplated: true
      spec:
        pipelineRef:
            name: simple-pipeline
        trigger:
          type: event
        resources:
        - name: git-source
          resourceRef:
            name: ${1}

Documentation should DEFINITELY be included for this one. Discussion is good πŸ˜„

Create a skeleton webhook that does nothing

It will be easier for people to start adding functionality if we add an empty webhook that doesn't actually do anything (eventually this webhook controller will be used to verify Trigger types), just:

  • Has all the config needed to build and deploy with ko
  • Instantiates an admission controller that isn't actually doing anything, e.g.:
	controller := webhook.AdmissionController{
		Client:  kubeClient,
		Options: options,
		Handlers: map[schema.GroupVersionKind]webhook.GenericCRD{
			// nothing actually here cuz we don't have any types yet
		},
		Logger: logger,
	}

Additional info

Continuing to use knative/pkg is probably our best bet for now, in which case we can get a lot of this by more or less copying https://github.com/tektoncd/pipeline/tree/master/cmd/webhook.

Add a log library + logging config to the EventListener binary

Expected Behavior

It looks like the EventListener binary is going to be doing a lot of work in the future, so we should treat its logs like we would logs from our controller, e.g.:

  • Be able to log at different levels (e.g. error vs warn vs debug)
  • Allow the instantiator of the binary to control the log level (e.g. run at warn or error by default, but make it possible to turn on debug / verbose logging)

Actual Behavior

Right now we are using the go log library and logging everything the same way. The user has no control over it.

Additional Info

Probably reasonable just to use the same log library as our controller (zap) and configure it in a similar way, e.g. https://github.com/tektoncd/triggers/blob/master/config/config-logging.yaml

But if whoever is doing this feels like it we could look into other logging libraries :D
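
A minimal sketch of what that could look like with zap, using an AtomicLevel so verbosity can change at runtime; the config-watch wiring is assumed, not shown:

package main

import (
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

func main() {
	// An AtomicLevel lets the running binary change verbosity (e.g. when a
	// logging ConfigMap changes) without a restart.
	level := zap.NewAtomicLevelAt(zapcore.InfoLevel)

	cfg := zap.NewProductionConfig()
	cfg.Level = level
	logger, err := cfg.Build()
	if err != nil {
		panic(err)
	}
	defer logger.Sync()

	logger.Info("EventListener starting")
	logger.Debug("suppressed at info level")

	level.SetLevel(zapcore.DebugLevel) // e.g. triggered by a config watch
	logger.Debug("now visible")
}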

Support for dynamic params to TriggerTemplates beyond just event values

Expected Behavior

It should (maybe!) be possible for users to provide values to TriggerTemplate from sources other than events

Actual Behavior

A TriggerTemplate takes parameters, and doesn't need to worry about where those come from e.g.:

spec:
  params:
    - name: gitrevision
      description: The git revision
      default: master
    - name: gitrepositoryurl
      description: The git repository url
    - name: namespace
      description: The namespace to create the resources

At the moment these values can be provided only by a TriggerBinding which can extract them from json data (an event):

spec:
  params: # side note - are these outputs? i think thats the intention...
    - name: gitrevision
      value: $(event.head_commit.id)
    - name: gitrepositoryurl
      value: $(event.repository.url)
    - name: namespace
      value: tekton-pipelines

There is no other way to provide values for the params of a TriggerTemplate.

Steps to Reproduce the Problem

For dogfooding Tekton Pipelines, I wanted to make use of the Pipeline we have been using for manual releases. However, the Pipeline takes a parameter called versionTag:

https://github.com/tektoncd/pipeline/blob/3950521325d4744760a96c18e3d0c67d86495af3/tekton/publish.yaml#L14-L15

    - name: versionTag
      description: The vX.Y.Z version that the artifacts should be tagged with (including `v`)

There are different types of values we want for this:

  • When we are creating official releases, we want to be able to extract the version from a combo of the name of the branch (e.g. v0.4) and number of commits so far (i.e. each subsequent commit bumps the patch value, e.g. v0.4.1)
  • When we create nightly releases, we want to generate the version tag based on the date and the commit we are building from (e.g. vYYYYMMDD-commitsha)

With the current implementation for Triggers (and Prow!), in order to accomplish the above, we need to modify the Pipeline itself to do this. And we'd need either 2 totally different Pipelines or a flag to distinguish between the official + nightly use cases.

My opinion is that it's reasonable to create a Pipeline like this that takes a version parameter without being concerned with how to create it, and I think it makes sense for the Triggering system to be able to make this data available.

Additional info

Here is a proposal for how we could do this:

  1. TriggerBinding would specifically be about extracting values from Events only, so it could be renamed (#62) to something like EventParser or ParamsFromEvent or something better but specifically about "events" - EventBinding?
  2. We could provide an interface (you guessed it, a container!) for users to provide additional parameters.

E.g. EventListener could look like this:

apiVersion: tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: listener
  namespace: tekton-pipelines
spec:
  serviceAccountName: default
  triggers:
    - bindings:
        - name: github-binding
        - name: nightly-version-binding 
      template:
        name: pipeline-template

nightly-version-binding would be an instance of a type called something like ParamBinding that is a container specification - the interface would be that the container has to write a json file to an expected location with the values it exports, e.g. something like:

apiVersion: tekton.dev/v1alpha1
kind: ParamBinding
metadata:
  name: nightly-version-binding
spec:
  outputs:
   - "versionTag"
  container:
  - name: generate-release-version
    image: alpine/git
    workingDir: "/workspace/go/src/github.com/bobcatfish/catservice/"
    command:
    - /bin/sh
    args:
    - -ce
    - |
      set -e
      set -x

      COMMIT=$(git rev-parse HEAD | cut -c 1-10)
      DATE=$(date +"%Y%m%d")
      VERSION_TAG="$DATE-$COMMIT"

      echo "{'versionTag': '$VERSION_TAG'}" > "/builder/home/outputs.json"
  3. In the future, we could even extend this to support cases like filtering! For example @wlynch has been looking into use cases where we need additional info from GitHub to make a decision, e.g. "is the user who opened the PR a member of the organization". We could use something like this to provide inputs to the filtering logic.

  4. And this could be extended to allow for caching info - to continue the use case from (3), say we wanted to know if the user who opened the PR is a member of the org - we could have a mechanism that caches data like this from GitHub.

What I like about (3) and (4) is that they make it possible to separate the responsibility of obtaining data from the responsibility of acting on that data.

Set up EventListener Deployment for Resource Creation

Expected Behavior

The EventListener Deployment pod needs to be able to reference the TriggerBinding(s)/TriggerTemplates associated with the EventListener in order to be able to create resources with the received event.

Actual Behavior

Currently, nothing does this.

Additional Info

There are seemingly two ways to do this:

  • Set up Tekton/K8s client within the pod to pull the necessary resources
  • Pass the resources into the Deployment

The Deployment pod will likely need to be passed the EventListener ref that created the Deployment (as an env) at the very least, but this could be extended in a brute-force way by passing literally everything into the pod. I believe the first way is simpler: with the brute-force approach, if the EventListener or related pieces were updated, the Deployment would need to be updated too, and this would potentially need to be modified each time the type definitions change.

Enable TriggerBindings to validate requests (e.g. GitHub)

Expected Behavior

Events received by the EventListener addressable service should only be handled if the requester and request are authenticated.

We can use GitHub as an initial use case.

From @khrm :

GitHub computes the hash of the payload using the above token and then passes the computed hash in a header.
And on the triggers side, if we want to be sure, we compute the hash of the payload using that token (which needs to be stored) and match it with the one received in the header.

Actual Behavior

Currently, nothing does this.

We have discussed doing this via header matching:

As per the WG discussion, add an http field with an underlying headersMatch field (as showcased in the proposal) to the TriggerBindingSpec. The deployment could intelligently map the received events to the correct TriggerBinding(s), but simply iterating over the bindings is also good as a first pass.

However as mentioned above, more is required than just header matching.

Additional Info

@akihikokuroda included some example code at #78 (comment) to implement this.
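
For reference, a minimal sketch of the check described above; GitHub signs the request body with an HMAC-SHA1 of the shared webhook secret and sends the digest in the X-Hub-Signature header (an HMAC-SHA256 variant, X-Hub-Signature-256, also exists):

package eventlistener

import (
	"crypto/hmac"
	"crypto/sha1"
	"encoding/hex"
)

// validGitHubSignature compares an X-Hub-Signature header ("sha1=<hex>")
// against the HMAC-SHA1 of the request body computed with the stored secret.
func validGitHubSignature(body []byte, header string, secret []byte) bool {
	mac := hmac.New(sha1.New, secret)
	mac.Write(body) // hash.Hash writes never return an error
	expected := "sha1=" + hex.EncodeToString(mac.Sum(nil))
	// Constant-time comparison avoids leaking the digest via timing.
	return hmac.Equal([]byte(expected), []byte(header))
}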

Make instructions in DEVELOPMENT guide work

Expected behavior

Folks who try to follow along with our DEVELOPMENT.md should be able to actually run the commands.

(of course one way to meet this would be to delete the stuff we don't have yet, but I think it would put us in a great position to add all the supporting code we need to just make it work!)

Actual behavior

We have no code yet, so most of the contents of the DEVELOPMENT.md would not work.

I copied most of the DEVELOPMENT.md contents from https://github.com/tektoncd/pipeline/blob/master/DEVELOPMENT.md but didn't copy the supporting code (e.g. hack dir).

Some things I noticed that we could probably make work fairly easily, that would set us up nicely:

  • adding the supporting scripts in the hack dir (update-deps, update-codegen)
  • making update-codegen work by adding an empty or nearly empty _type.go file.
  • getting ko apply -f config/ to work, an easy way to do this would be to copy a lot of the config from https://github.com/tektoncd/pipeline/tree/master/config maybe without the bit to deploy the controller and webhook cuz we don't have those yet
  • Note that in #8 we'll create a skeleton controller and in #9 we'll create a skeleton webhook

Additional info

  • DEVELOPMENT.md is being added in #2
  • This is going to require vendoring in a lot of the deps we need; the first step to make that work will be to figure out what to vendor to get update-deps.sh (or similar) to work.

Proposal to move TriggerBinding-TriggerTemplate connection out of TriggerBinding definition

In the docs for TriggerBindings, we say:

Although similar, the separation of these two resources was deliberate to encourage TriggerTemplate definitions to be reusable

I don't disagree with this idea and I don't think TriggerTemplate reusability should change.

However, for many users, it may end up being even more important for TriggerBindings to be reusable! TriggerBinding reusability is severely limited in the current design because within the definition, a specific TriggerTemplate is referenced within the TemplateRef.

To give an idea of what I'm saying, imagine a user who is getting tons of GitHub events from various contexts/sources. Depending on the situation, the user has defined several TriggerTemplates, each of which will execute different jobs like building code, performing tests, moderating comments, etc. Now, for each of those TriggerTemplates, which are intended to be called upon separately in different contexts, the user must completely redefine or copy-paste the payload->parameter logic, because that information lives within the TriggerBinding, which is tied to a specific TriggerTemplate via TemplateRef. This could become extremely tedious, especially for users with complex payloads.

Bottom line, I think TriggerTemplates and TriggerBindings should each be reusable for their own reasons, and the way to do this is to move the TriggerBinding→TriggerTemplate connection representation out of the TriggerBinding itself. This could be done by moving the connection information a layer up into the EventListener, or by creating a new resource representing that connection (and possible conditional logic) and then listing those new connection resources within the EventListener.

My proposal would be to move the connection information into EventListener. This would mean removing the templateRef field from TriggerBindings, and then an EventListener could look something like this:

apiVersion: tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: simple-listener
spec:
  connections:
    - triggerBinding: simple-pipeline-binding
      triggerTemplate: simple-pipeline-template
    - triggerBinding: another-pipeline-binding
      triggerTemplate: another-pipeline-template

Conditional/filtering information would also go within the connections field. This way, the payload-variable logic is truly encapsulated and reusable within the triggerBinding. I see the EventListener as analogous to a PipelineRun or TaskRun, in that it provides project-specific configuration details like connections, filtering, or perhaps default parameter/payload values, while the triggerBinding and triggerTemplate can be reused as needed.

Securing triggering

Expected Behavior

The triggering event must be communicated from the event source to the trigger system in a secure way, and the event must be authenticated before processing.

  1. TLS should be optionally enabled for the communication.
  2. The event / payload must be authenticated in the way provided by the event source.

Actual Behavior

I believe these are necessary in a production environment, so they must be implemented somehow, but my question is whether this is in the scope of triggers.

TLS can be implemented by defining an ingress or route for the event listener service. These terminate the TLS connection and send the plain event to the event listener.

The authentication needs some additional code in the processing of the event (I have just looked at the GitHub webhook case so far). I can think of 2 ways to approach this. The first is to add a hook in the event listener for plugins to pre-process events. The other is to add a sidecar to the event listener pod that receives and authenticates the event before passing it on to the event listener.

Additional Info

Github info:
https://developer.github.com/webhooks/securing/
https://developer.github.com/webhooks/creating/

Exception expose

Expected Behavior

When an exception occurs, the end user should be able to easily know exactly what happened.
I don't want users to have to check the logs of the Pod created by the EventListener controller; that's a mess and should be transparent to the end user.
I think the EventListener CR is a good choice: it's not reusable, and the end user knows about it.

Check the discussion here: #74 (comment)
I think we need to build a mechanism that makes it easy to surface exceptions on the EventListener CR.

Actual Behavior

Exceptions only go to the logs of the Pod created by the EventListener controller.

Add Conditionals to Resource Creation

Expected Behavior

Users should be able to specify a Conditional against event payload values to ensure that resources are generated depending on the payload data. For context, webhook POST data from GitHub specifies the source organization/repository/branch. Users will want to make TriggerBindings that execute against particular branches, etc.

Actual Behavior

Currently, there is no conditional creation.

Additional Info

Add Conditionals to the TriggerBindingSpec. Values should be interpolated from the event into the fields being asserted. Documentation should be included.

Create builder package for tests

Expected Behavior

The builder package in pipelines is super handy for instantiating instances of the pipeline CRDs in unit tests! It would be great to have something like this (and tests for the builders themselves) in triggers :D

Actual Behavior

We don't have this yet!

When @vdemeester added this to pipelines it took him quite a long time, and I can tell you from experience that refactoring tests to use these builders later is not a ton of fun, so it'd be great if we could get these in sooner rather than later 💃
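
For flavor, a hypothetical builder in that style; the v1alpha1 import path and the EventListener shape are assumed, not the real API:

package builder

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"github.com/tektoncd/triggers/pkg/apis/triggers/v1alpha1"
)

// EventListenerOp mutates an EventListener under construction.
type EventListenerOp func(*v1alpha1.EventListener)

// EventListener builds an EventListener with the given name and namespace,
// then applies any options, mirroring tektoncd/pipeline's test builders.
func EventListener(name, namespace string, ops ...EventListenerOp) *v1alpha1.EventListener {
	el := &v1alpha1.EventListener{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
	}
	for _, op := range ops {
		op(el)
	}
	return el
}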

Get initial CI setup

This is a Zenhub epic.

Expected behavior

This repo should have automated CI running on every PR, just like our other tektoncd projects. Ideally this would use Tekton itself - in tektoncd/pipeline#922 we are working toward triggering PipelineRuns with Prow - however there are still some barriers (e.g. logs), so in the interest of not blocking this project, we probably want to use Prow at least initially.

Actual behavior

We currently don't have any CI (or even any code at this point!).

Create docs for each type

Expected behavior

Users of Triggers should have a place to look for docs on each CRD that they'll be interacting with.

Actual behavior

We have no docs, but in #10 we'll be adding the types. The only docs we have at the moment are in the design doc.

Additional info

One decision we may want to make at this point is whether we want to have a markdown file per type, the way that Tekton Pipelines does, or if we want this kind of documentation to live with the code and primarily be visible at https://godoc.org/.

If we decide to rely on godocs, we need to have more carefully curated godocs than https://godoc.org/github.com/tektoncd/pipeline/pkg/apis/pipeline/v1alpha1 currently has.

It may also be harder to version godocs and present them nicely in an interface such as https://knative.dev/docs/ or https://skaffold.dev/docs/ (which I think both use netlify?).

House keeper for Pipeline related stuff

Expected Behavior

When an event is received, the EventListener deployment will create at least one PipelineResource and one PipelineRun, and those resources then pass out of Triggers' hands.
That causes a problem: after a while, there will be hundreds of Pipeline-related resources left behind.

So I think we need to set up a garbage collection / housekeeping mechanism for this situation.
I saw #48, but it requires deleting the EventListener to clean everything up, and I think an EventListener is designed for long-term running.

Actual Behavior

Currently, there is nothing for this.

All existing types should have validation

Expected Behavior

When the webhook notices that one of our trigger CRD types is created, it will invoke each type's validation. When that validation is invoked, it should look at the fields of each CRD and make sure they are valid.

Actual Behavior

We have the skeleton webhook added, but our validation functions are empty, e.g.

func (t *TriggerTemplate) Validate(ctx context.Context) *apis.FieldError {
	return t.Spec.Validate(ctx)
}

func (s *TriggerTemplateSpec) Validate(ctx context.Context) *apis.FieldError {
	return nil
}
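
A filled-in version might start with something as simple as requiring a non-empty spec; a sketch using knative.dev/pkg's apis helpers, with the real validation rules still to be decided:

package v1alpha1

import (
	"context"

	"k8s.io/apimachinery/pkg/api/equality"
	"knative.dev/pkg/apis"
)

// Validate rejects a TriggerTemplate whose spec is entirely empty.
func (t *TriggerTemplate) Validate(ctx context.Context) *apis.FieldError {
	if equality.Semantic.DeepEqual(t.Spec, TriggerTemplateSpec{}) {
		return apis.ErrMissingField("spec")
	}
	return t.Spec.Validate(ctx)
}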

Additional Info

This issue might not make any sense yet since none of the types actually do anything, and we're still working on adding arbitrary event payloads in #19, so feel free to close this if I'm totally off base XD

Add clear docs on how to set up TLS correctly for EventListener

Expected Behavior

@akihikokuroda brought up in #78 that in order to use triggers securely, users need to use TLS for communication between the EventListener and the requester.

In further discussion it was agreed that this is up to the person setting up triggers:

I agree about the TLS. When the user creates the EventListener CR, he/she should create the ingress route and use its external address in the webhook configuration.

However, we should make this easy on the user by including at least recommendations / links around this in our docs (maybe install instructions?).

Actual Behavior

We have no docs about this.

Proposal to rename TriggerBinding

If we're including conditionals and header matching in TriggerBinding, then I would propose renaming it to something like TriggerSifter (open to alternatives).

I suggest this because the TriggerBinding name was created to reflect how it binds values from the event payload onto the TriggerTemplate; however, it will now also be doing filtering and header matching. I think the name should change to reflect the fact that more processing is done than just binding values.

Add Triggers information to Resources

Expected Behavior

Any resources created through Triggers should be properly labelled and annotated to specify their origins.

Actual Behavior

Currently, nothing does this.

Additional Info

Documentation should be included.

Define EventListener Status

Expected Behavior

When an EventListener resource is created, we expect its status section to have information about the resulting deployment and the endpoint this deployment can be reached at.

Actual Behavior

The event_listener_types.go file defines the EventListener resource to have an empty status section.

Additional Info

@bobcatfish mentioned here that we can use the status section in our tests.

Please use this issue to discuss what information should go in the EventListener's status. We haven't discussed this yet, but I think that some relevant information we should consider includes the:

  • EventListener's deployment
  • Address for sending an http request to the EventListener's deployment
  • duckv1beta1.Status

Add automated coverage for examples

Expected behavior

Any examples we add to the repo should have at least a base level of testing applied to them with each PR, e.g. making sure the structure is sound (even if we don't verify the behavior).

Actual behavior

In #10 we'll be adding types and examples, but we don't yet have any tests for them.

Update EventListener to consume CloudEvent

Expected Behavior

Once we have #36 the next step is for the running binary to expose a Knative addressable endpoint that can be invoked with a CloudEvent!

It doesn't have to actually do anything with the CloudEvent at this point but maybe could log something fun?

Actual Behavior

Once we have #36 we'll have a running pod but it won't be listening for CloudEvents yet

Additional Info

If we have #37 at this point, we should expand the end to end test to hit the running EventListener with a predefined CloudEvent. If not, we'll need to create a follow up issue in the #33 epic to add that to the end to end test.

Get end to end tests running against PRs

Expected behavior

Every PR should have all end to end tests (i.e. tests in the test dir which use the e2e build label) executed against it.

Actual behavior

We don't have any code, or any automation :O

Webhook pod crash loop "can't set finalizers"

Expected Behavior

The Triggers webhook pod should not crash.

Actual Behavior

The Triggers webhook pod crashes with these logs:

{"level":"info","caller":"logging/config.go:100","msg":"Successfully created the logger.","knative.dev/jsonconfig":"{\n  \"level\": \"info\",\n  \"development\": false,\n  \"sampling\": {\n    \"initial\": 100,\n    \"thereafter\": 100\n  },\n  \"outputPaths\": [\"stdout\"],\n  \"errorOutputPaths\": [\"stderr\"],\n  \"encoding\": \"json\",\n  \"encoderConfig\": {\n    \"timeKey\": \"\",\n    \"levelKey\": \"level\",\n    \"nameKey\": \"logger\",\n    \"callerKey\": \"caller\",\n    \"messageKey\": \"msg\",\n    \"stacktraceKey\": \"stacktrace\",\n    \"lineEnding\": \"\",\n    \"levelEncoder\": \"\",\n    \"timeEncoder\": \"\",\n    \"durationEncoder\": \"\",\n    \"callerEncoder\": \"\"\n  }\n}\n"}
{"level":"info","caller":"logging/config.go:101","msg":"Logging level set to info"}
{"level":"warn","caller":"logging/config.go:69","msg":"Fetch GitHub commit ID from kodata failed: open /var/run/ko/HEAD: no such file or directory"}
{"level":"info","logger":"webhook","caller":"webhook/main.go:64","msg":"Starting the Configuration Webhook","knative.dev/controller":"webhook"}
{"level":"info","logger":"webhook","caller":"webhook/webhook.go:315","msg":"Found certificates for webhook...","knative.dev/controller":"webhook"}
{"level":"error","logger":"webhook","caller":"webhook/webhook.go:324","msg":"failed to register webhook","knative.dev/controller":"webhook","error":"failed to create a webhook: mutatingwebhookconfigurations.admissionregistration.k8s.io \"triggers-webhook.tekton.dev\" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: no RBAC policy matched, <nil>","stacktrace":"github.com/tektoncd/triggers/vendor/knative.dev/pkg/webhook.(*AdmissionController).Run\n\t/root/go/src/github.com/tektoncd/triggers/vendor/knative.dev/pkg/webhook/webhook.go:324\nmain.main\n\t/root/go/src/github.com/tektoncd/triggers/cmd/webhook/main.go:106\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:200"}
{"level":"fatal","logger":"webhook","caller":"webhook/main.go:107","msg":"Error running admission controller{error 25 0  failed to create a webhook: mutatingwebhookconfigurations.admissionregistration.k8s.io \"triggers-webhook.tekton.dev\" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: no RBAC policy matched, <nil>}","knative.dev/controller":"webhook","stacktrace":"main.main\n\t/root/go/src/github.com/tektoncd/triggers/cmd/webhook/main.go:107\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:200"}

Steps to Reproduce the Problem

  1. Running on OpenShift
  2. ko apply -f config/

Add baseline level of user docs

Expected behavior

It'll be easier to iterate on docs and we'll be in a better position docs-wise in the future if we start with some basic docs now that we can continue to add to.

Actual behavior

In #2 we'll add the bare minimum required docs that each tekton project needs to have, but those are more for the contributors and maintainers of the project, and less about users of the project.

Make TriggerBinding more reusable

Expected Behavior

The TriggerBinding should be reusable for the same event and the same pipeline.

Actual Behavior

It should (maybe!) be possible for users to provide values to TriggerTemplate from sources other than events

from #87

Users can provide static values to the TriggerTemplate by putting them in the params of the TriggerBinding instead of extracting attribute values from the event.

This method makes TriggerBindings non-reusable in cases such as:

  • the same task, but different credentials are required to upload the result to the repository
  • the same task, but the result is uploaded to a different repository

For these cases, the user needs to create a different EventListener and TriggerBinding for each credential or repository.

One Idea

Adding the capability to pass param values from the EventListener to the TriggerBinding. With this, an EventListener still has to be created for each case, but the same TriggerBinding can be used for both.

Restrict creation of Resources to just Pipeline Resources?

Expected Behavior

After discussing with @imjasonh and @dibyom I'm thinking it would make sense to restrict the types of Resources a user can create with TriggerTemplates to just types that we know are needed for executing Tekton Pipelines (e.g. PipelineResources, Conditions, PipelineRuns, etc.).

I think this makes sense because:

  • It's easier to expand the scope later than it is to restrict it
  • I don't think we have any concrete use cases for creating other Resources (correct me if I'm wrong); we know we want to start PipelineRuns (maybe TaskRuns too) but it's not clear we want to create anything else - and afaik we don't want to replace other generic event triggering solutions such as knative eventing or argo events
  • If we do restrict the types, we can validate the structure of TriggerTemplates before waiting for the EventListener to be invoked

Actual Behavior

We explicitly decided that it should be possible for a user to create any Resource in a kube cluster, e.g. they could create a Deployment (or a Role!) as long as the service account had permission to.

Rework TriggerBindingSpec

Currently, the TriggerBindingSpec is defined as follows:

// +k8s:deepcopy-gen=true
type TriggerBindingSpec struct {
	TemplateBindings []TemplateBinding `json:"templatebindings,omitempty"`
}

// +k8s:deepcopy-gen=true
type TemplateBinding struct {
	TemplateRef TriggerTemplateRef   `json:"templateref,omitempty"`
	Event       TriggerTemplateEvent `json:"triggertemplateevent,omitempty"`
	Params      []pipelinev1.Param   `json:"params,omitempty"`
}

This means we can have multiple distinct event bindings within a TriggerBinding. Currently, we have a one-to-many mapping from EventListener to TriggerBinding as well as a one-to-many mapping from TriggerBinding to TriggerTemplates. As I understood it, we wanted to support this?

I believe this may not be the best design decision, after consideration of issues like naming modifiers. To be brief, there would be no way to refer to resources being generated between templates (on the same TriggerBinding).

Proposal to separate Validation / Filtering from TriggerBinding

Expected Behavior

As much as it makes sense (our design should be as simple as possible and no simpler! :D) we should try to make our CRDs:

  • Have one clear job
  • Minimize repetition (e.g. in #55 we made TriggerBindings more reusable b/c we figured otherwise folks could end up having to create the same TriggerBindings over and over again).

In light of #55 (see "actual behaviour" below), I propose that:

  • Folks may want to reuse "validation"/filtering logic across many EventListeners
  • Validation/filtering logic is a different responsibility than extracting values from events (which seems to be TriggerBindings main job)

Ideas

We could add a Filter type, maybe even something as specific as HTTPFilter, something like:

apiVersion: tekton.dev/v1alpha1
kind: HTTPFilter
metadata:
  name: requester-is-github-push
spec:
  inputParams:
  - name: secret
    description: The git webhook secret
  headerMatches:
  - X-GitHub-Event: push
  - X-GitHub-Secret: $(inputParams.secret)

Or we could take this one step further and have both an HTTPFilter and HTTPValidation but that feels like it's going too far?

Then TriggerBinding would become just:

apiVersion: tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: my-triggerbinding
spec:
  # I removed conditionals also!
  outputParams:
  - name: gitbranch
    value: $(event.ref)
  - name: gitrevision
    value: $(event.head_commit.id)
  - name: gitrepositoryurl
    value: $(event.repository.url)

At this point TriggerBinding might make sense to rename to something like EventParser?

And then EventListener would be:

kind: EventListener
metadata:
  name: my-eventlistener
spec:
  bindings:
    - filterRef: requester-is-github-push
      params:
        - name: secret
          value: somehow-from-kube-secret
      triggerTemplateRef: my-triggertemplate
      triggerBindingRef: my-triggerbinding
      conditions:
        - name: is-master
          params:
            - $(my-triggerbinding.gitbranch): refs/heads/master

TriggerBindingRef would be responsible for extracting values that are available to conditions - and maybe filters? Or if that's crazy then conditions can go back into TemplateBinding :D

Or something like that! What I do like though is keeping the responsibilities separate, and using EventListener to bring them together.

Actual Behavior

In our most recent design (#55 (comment)) we have added some values to the TriggerBinding which are for filtering (i.e. only create these resources if conditions is true) and validation (i.e. only create these resources if headerMatches is true; if it isn't true, this indicates that the requester is bogus :O - this can be viewed as a type of filtering too?)

This means TriggerBinding now has 3 jobs:

  1. Extract values from events and make them available via outputparams
  2. Extract values from events and use them to filter via conditions
  3. Extract values from events and use them to validate the caller via headerMatches

In that design, a TriggerBinding looks something like this:

apiVersion: tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: my-triggerbinding
spec:
  inputParams:
  - name: secret
    description: The git webhook secret
  - name: branch
    description: The git branch
    default: master
  headerMatches:
  - X-GitHub-Event: push
  - X-GitHub-Secret: $(inputParams.secret)
  conditionals:
  - $(event.ref): refs/heads/$(inputParams.branch)
  outputParams:
  - name: gitrevision
    value: $(event.head_commit.id)
  - name: gitrepositoryurl
    value: $(event.repository.url)

Additional Info

Related issues:

  • Adding conditionals #49
  • Renaming TriggerBinding #62 <-- depending on how this issue goes, it might become more obvious if TriggerBinding should be renamed and to what :D
  • Making TriggerBindings reusable #55

Create getting started guide

Expected behavior

People who want to use Triggers should have an obvious place to look for some quick start docs on how they can get up and running, e.g.:

  • How to install and setup
  • What the types are and how to use them
  • Where to look for more

Actual behavior

In #2 we'll add a DEVELOPMENT.md for contributors, but we won't have any getting started docs for users.

Additional info

May want to use https://github.com/tektoncd/pipeline/blob/master/docs/tutorial.md as an example.

Triggers RBAC

Expected Behavior

Users should be able to specify a service account (or a role binding?) in the EventListener, to be used for the resources created as per a specific TriggerTemplate.

Actual Behavior

Currently, there is no implementation.

Additional Info

Currently, the EventListener creates a Deployment and a load balancer Service (to reference the deployment), which enables the pods to be addressable/receive external traffic. The service account tied to the EventListener listener pod has permissions necessary to read Triggers resources, but how can it get the permissions necessary for creating resources defined in a particular TriggerTemplate, which are arbitrary?

Event/Addressable Proposal

This proposal addresses the specifications as outlined in the initial design proposal (https://docs.google.com/document/d/1fngeNn3kGD4P_FTZjAnfERcEajS7zQhSEUaN7BYIlTw/edit#heading=h.iyqzt1brkg3o), namely:

  • Whether or not to have first class support of CloudEvents
  • How to create an addressable endpoint

This proposal takes the perspective of getting things out the door "as quickly as possible", but not at the cost of diminishing the Triggers design. Rather, an initial release might provide insights from other users and have us consider things we would not have otherwise. This could help build a base to iterate from.

Cloud Events

From the CloudEvents specification,

Events are everywhere. However, event producers tend to describe events differently.

CloudEvents are a standardized way to format messages. The project justification is that without such a standardized format "developers must constantly re-learn how to consume events". This seems like a good idea to me, but CloudEvent adapters is where I draw skepticism. The specification mentions adapters as follows:

Since not all event producers generate CloudEvents by default, there is documentation describing the recommended process for adapting some popular events into CloudEvents, see CloudEvents Adapters.

Since the addressable endpoint provided by an EventListener is a terminal (the event is consumed and not emitted further), it seems unclear why Triggers should convert the payload into a CloudEvent. Although adapters could be provided for the "popular events" (e.g., GitHub, GitLab, DockerHub), since these are likely the primary concern, this already seems like bespoke implementation for events within Triggers. I understand anything bespoke as the inverse of the intent behind the project, since the power of Triggers is in supporting any event. This is accomplished through user-provided event bindings, where the structure of the data must be known. This is the only stipulation, but it allows for the intended decoupling from vendors. CloudEvents is working to standardize payload structures, but in the end, the payload is always going to be pushed down somewhere (https://github.com/cloudevents/sdk-go/blob/master/pkg/cloudevents/event.go#L13) because producers produce different things, which ties back into the idea of handling arbitrary payloads.

If Triggers is coupled to CloudEvents, it will require that we enumerate an adapter for each event type. Assuming this path, one further issue to consider is how to determine whether an incoming event is actually a CloudEvent, because otherwise these adapters would wrap CloudEvents into yet another CloudEvent, which would be unintended. This adapter pattern seems to hinge on the assumption that incoming events are not CloudEvents. First-class CloudEvents also have the side effect that users who are familiar with the payload they should be receiving now have to know to go through the CloudEvent envelope, rather than the raw data, when constructing event bindings.

TriggerBinding event bindings necessitate that the structure of the data is already known. In this way, it should not matter whether the event is a CloudEvent. The popular events like GitHub and GitLab (and likely others) provide headers that best provide the concept of an "event type" that can be used agnostic of CloudEvents. This can address the concern of multiple TriggerBindings with distinct events on a single EventListener.

With optional headers to match against event types, TriggerBindings could look as follows:

apiVersion: tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: simple-pipeline-binding
  namespace: default
spec:
  templateRef:
    name: simple-pipeline-template
  serviceAccount: default
  headerMatch:
    - X-GitHub-Event: issues
  params:
    - name: gitrevision
      value: ${event.head_commit.id}
    - name: gitrepositoryurl
      value: ${event.repository.url}

Conclusion

Triggers should not provide first class support for CloudEvents

  • There is a performance cost in wrapping messages
  • The benefit of the CloudEvent abstraction to Triggers is unclear
  • Wrapping can be potentially error prone
  • Writing adapters is in contrast to the strengths of parsing arbitrary messages
  • Event bindings logic becomes unclear
  • Will take time to implement

Addressable Endpoint

The pods generated by the EventListener need to be (at least conditionally) addressable from outside the cluster. One way to accomplish this is to use Knative eventing. Knative uses Istio as the underlying technology to expose an address. In any case, the underlying resources created to expose the external endpoint devolve to an Ingress and/or a LoadBalancer Service.

For the sake of decoupling from all unnecessary components, a simple solution would involve creating a LoadBalancer Service that refers to the Deployment (as underway here) so that it can properly consume events. Most cloud providers have baked-in implementations for LoadBalancers as well, which allows us to leverage this "without paying". To keep things configurable, we could (later or otherwise) allow for an embedded specification of the Service rather than making all the assumptions about how the LoadBalancer should be configured.

Conclusion

Use LoadBalancer Services to make the EventListener pods addressable

  • Does not require any installation dependencies
  • Supported by most clouds for "free"
  • We have control on how this is configured
  • Less moving parts to fail
  • Should be easy
