drone-runtime's Introduction

The drone runtime package implements the execution model for container-based pipelines. It is effectively a lightweight container orchestration engine optimized for pipelines.

Definition File

The runtime package accepts a pipeline definition file as input, which is a simple json file. This file is not intended to be read or written by humans. It is considered an intermediate representation, and should be generated by a computer program from more user-friendly formats such as yaml.

Example hello world definition file:

{
	"metadata": {
		"uid": "uid_AOTCIPBf3XdTFs2j",
		"namespace": "ns_JVzesGoyteu5koZK",
		"name": "test_hello_world"
	},
	"steps": [
		{
			"metadata": {
				"uid": "uid_8a7IJsL9zSJCCchd",
				"namespace": "ns_JVzesGoyteu5koZK",
				"name": "greetings"
			},
			"docker": {
				"args": [
					"-c",
					"echo hello world"
				],
				"command": [
					"/bin/sh"
				],
				"image": "alpine:3.6",
				"pull_policy": "default"
			}
		}
	],
	"docker": {}
}
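
For illustration, a minimal Go sketch of how this definition file maps onto native structures (the types below cover only the fields in the example above and are illustrative, not the runtime's actual engine types):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Illustrative types covering only the fields shown in the example above.
type Metadata struct {
	UID       string `json:"uid"`
	Namespace string `json:"namespace"`
	Name      string `json:"name"`
}

type DockerStep struct {
	Args       []string `json:"args,omitempty"`
	Command    []string `json:"command,omitempty"`
	Image      string   `json:"image,omitempty"`
	PullPolicy string   `json:"pull_policy,omitempty"`
}

type Step struct {
	Metadata Metadata   `json:"metadata"`
	Docker   DockerStep `json:"docker"`
}

type Spec struct {
	Metadata Metadata `json:"metadata"`
	Steps    []Step   `json:"steps"`
}

func main() {
	raw, err := os.ReadFile("samples/1_hello_world.json")
	if err != nil {
		panic(err)
	}
	var spec Spec
	if err := json.Unmarshal(raw, &spec); err != nil {
		panic(err)
	}
	fmt.Println(spec.Steps[0].Docker.Image) // prints: alpine:3.6
}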

Local Testing

The runtime package includes a simple command line utility that lets you test pipeline execution locally. Use it for local development and testing.

The runtime package includes sample definition files that you can safely execute on any machine with Docker installed. These sample files should be used for research and testing purposes.

Example commands:

drone-runtime samples/1_hello_world.json
drone-runtime samples/2_on_success.json
drone-runtime samples/3_on_failure.json
drone-runtime samples/4_volume_host.json
drone-runtime samples/5_volume_temp.json
drone-runtime samples/6_redis.json
drone-runtime samples/7_redis_multi.json
drone-runtime samples/8_postgres.json
drone-runtime samples/9_working_dir.json
drone-runtime samples/10_docker.json

Example command to test docker login:

drone-runtime --config=path/to/config.json samples/11_requires_auth.json

Kubernetes Engines

The default runtime engine targets Docker; however, there is an experimental runtime engine that targets Kubernetes. Pipeline containers are launched as Pods using the Kubernetes API.

drone-runtime \
  --kube-url=https://localhost:6443 \
  --kube-config=~/.kube/config \
  samples/kubernetes/1_hello_world.json

drone-runtime's People

Contributors

bogdangi, bradrydzewski

drone-runtime's Issues

kubernetes: disable in-cluster authentication for spawned images

Images spawned by kubernetes can access the kubernetes API using "in cluster" authentication. This should be disabled for containers that are spawned by the pipeline, or at least customizable in some way. Until this is resolved, the Kubernetes engine should be internal-use-only (do not use for public repositories that accept pull requests). I was hoping to have a solution in place already, but could use some help if anyone is interested.
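
One possible approach (a sketch only, using the standard k8s.io/api/core/v1 types; the helper name is hypothetical) is to disable service account token automounting on every pod the pipeline spawns:

package kube

import corev1 "k8s.io/api/core/v1"

// withoutServiceAccountToken prevents pipeline pods from receiving
// in-cluster API credentials by disabling token automounting.
func withoutServiceAccountToken(pod *corev1.Pod) *corev1.Pod {
	automount := false
	pod.Spec.AutomountServiceAccountToken = &automount
	return pod
}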

Kubernetes: pipeline jobs created should include labels

Currently the jobs that are created to run pipelines are missing the standard labels. I think we should at least add the label k8s-app=drone for jobs. This is standard for almost every resource on k8s and is currently missing. This would allow a user to run kubectl delete job -l k8s-app=drone for example.

I think we should also consider adding more labels, e.g. for build ID, repo, etc. I see we have annotations for these but not labels. Are the annotations used/read by either Drone itself or drone/controller? If not, then I would move all the annotations to labels. If they are used, then duplicate some of this data to the labels so they can be used with kubectl <verb> -l xxx=yyy 🙂

Allow custom pod annotations (per job)

Right now, to pass AWS secrets into a pipeline I need to create a robot user and share its credentials with a step via secrets.

We're using KIAM in AWS to assign IAM roles to pods. If drone allowed adding pod annotations, as described here https://github.com/uswitch/kiam#overview, to the underlying k8s job/pod, we could use KIAM to pass credentials (via the AWS metadata service).

Example:

kind: pipeline
name: default
k8s_annotations:
   iam.amazonaws.com/role: reportingdb-reader

steps:
  - name: build
    image: quay.io/org/img:0.1.0
...

The annotation should be passed to the drone job-level object, and then to the k8s job and pod correspondingly. Every step inside the job would then see credentials via the AWS metadata service.
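
A rough sketch of that propagation (the helper and the source of the annotations map are hypothetical), copying user-supplied annotations onto the generated job's pod template so KIAM can see them:

package kube

import batchv1 "k8s.io/api/batch/v1"

// applyAnnotations copies pipeline-level annotations, e.g.
// iam.amazonaws.com/role: reportingdb-reader, onto the job's pod template.
func applyAnnotations(job *batchv1.Job, annotations map[string]string) {
	if job.Spec.Template.ObjectMeta.Annotations == nil {
		job.Spec.Template.ObjectMeta.Annotations = map[string]string{}
	}
	for k, v := range annotations {
		job.Spec.Template.ObjectMeta.Annotations[k] = v
	}
}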

set in config.json failed: no such file or directory

I am running the following pipeline using drone 1.0 rc3:

kind: pipeline
name: default

steps:
- name: build
  image: golang
  commands:
  - go build
  - go test

  testing:
    image: golang:1.11
    group: build
    commands:
      - go get -u github.com/golang/lint/golint
      - make test

  build:
    image: golang:1.11
    group: build
    commands:
      - go get -u github.com/golang/dep/cmd/dep
      - dep ensure
      - make build_linux_amd64

When the new namespace is created and the clone pod starts, the result is the following:

NAME                               READY     STATUS               RESTARTS   AGE
b4bfulps892q3o9kgqutcoqoqhvsnwgc   0/1       ContainerCannotRun   0          1m
Events:
  Type     Reason                 Age   From                        Message
  ----     ------                 ----  ----                        -------
  Normal   Scheduled              1m    default-scheduler           Successfully assigned b4bfulps892q3o9kgqutcoqoqhvsnwgc to dc2-dev-node-3-13
  Normal   SuccessfulMountVolume  1m    kubelet, dc2-dev-node-3-13  MountVolume.SetUp succeeded for volume "4ourscyc3ir93chjkjzqtn4pet72be97"
  Normal   SuccessfulMountVolume  1m    kubelet, dc2-dev-node-3-13  MountVolume.SetUp succeeded for volume "vjv54k4zj1b7vzbnhf7gy8i56lawmgd3"
  Normal   Pulling                1m    kubelet, dc2-dev-node-3-13  pulling image "docker.io/drone/git:latest"
  Normal   Pulled                 1m    kubelet, dc2-dev-node-3-13  Successfully pulled image "docker.io/drone/git:latest"
  Normal   Created                1m    kubelet, dc2-dev-node-3-13  Created container
  Warning  Failed                 1m    kubelet, dc2-dev-node-3-13  Error: failed to start container "b4bfulps892q3o9kgqutcoqoqhvsnwgc": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "chdir to cwd (\"/drone/src\") set in config.json failed: no such file or directory"

driver: vmware fusion

Support VMware Fusion. The yaml configuration would look something like the example below. The type parameter would be used to differentiate between vm and container pipelines, and would unmarshal to a different structure.

---
version: 1
kind: pipeline
type: vmwarefusion

settings:
  image: osx.img

platform:
  os: osx
  arch: amd64

steps:
- name: build
  commands:
  - go build
  - go test

docker: implement CPU limits

I could not figure out how to translate kubernetes CPU limits (in string format) to Docker CPU limits. This requires additional research.
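
One possible translation (a sketch only, not a settled approach): parse the Kubernetes quantity string with k8s.io/apimachinery and convert millicores to Docker NanoCPUs, which are billionths of a CPU:

package docker

import "k8s.io/apimachinery/pkg/api/resource"

// toNanoCPUs converts a Kubernetes-style CPU limit such as "500m" or "2"
// into Docker NanoCPUs (1 CPU == 1e9 NanoCPUs), suitable for the
// Resources.NanoCPUs field of the container HostConfig.
func toNanoCPUs(limit string) (int64, error) {
	q, err := resource.ParseQuantity(limit)
	if err != nil {
		return 0, err
	}
	// MilliValue returns the quantity in thousandths of a CPU.
	return q.MilliValue() * 1e6, nil
}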

cleanup condition structure

Clean up the condition structure:

conditions: {
  on_success: true,
  on_failure: true
}

Also consider adding additional parameters to limit execution. Right now these conditions are being evaluated at the yaml level, which we might want to keep. So this change needs more thinking and discussion.

conditions: {
  on_success: true,
  on_failure: true,
+ platform: linux/amd64,
+ vendor: drone
+ user_agent: drone-cli
}

Cannot be used in openshift

I am trying to run drone with kubernetes runtime in openshift. However, it cannot work because:

  volumes:
  - hostPath:
      path: /tmp/drone/zai6xqb3jgbxoh3lxx5xmibp1pxrgvu9/zai6xqb3jgbxoh3lxx5xmibp1pxrgvu9
      type: DirectoryOrCreate

https://kubernetes.io/docs/concepts/storage/volumes/

DirectoryOrCreate | If nothing exists at the given path, an empty directory will be created there as needed with permission set to 0755, having the same group and ownership with Kubelet.

This means that kubernetes will create the folder with 0755 permissions (root is the owner). However, openshift by default does not allow executing containers as root.

That is why we should have the possibility to configure the privileged parameter in all pods.

    securityContext:
      privileged: false

I think privileged can be configured for normal steps. However, we need the possibility to configure it for other steps as well, such as clone.

bump docker dependency

We should bump the docker dependency to a newer version and verify everything still works correctly so that we can take advantage of any bug fixes.

[[constraint]]
  name = "github.com/docker/docker"
  version = "1.13.1"

gcr: handle config.json with both https://gcr.io and gcr.io

If the config.json has entries for both gcr.io and https://gcr.io, we should prune the file and remove the entry with https. If the runtime chooses the https entry, it will result in the error message below.

default: Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication

The config.json order is non-deterministic because it is a map structure; Go maps do not preserve order. As a result the runtime will sometimes use gcr.io and other times use https://gcr.io. So this error can appear to be intermittent.

Note that until we have a solution in place, the workaround is to manually strip superfluous entries from the config.json file before storing it in your secrets.
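
A sketch of the proposed pruning, assuming the usual config.json layout of an auths map keyed by registry host (the helper name is illustrative):

package auths

import "strings"

// pruneSchemes removes scheme-prefixed duplicates such as "https://gcr.io"
// whenever a bare "gcr.io" entry already exists, so the auth lookup is
// deterministic regardless of map iteration order.
func pruneSchemes(auths map[string]interface{}) {
	for host := range auths {
		trimmed := strings.TrimPrefix(strings.TrimPrefix(host, "https://"), "http://")
		if trimmed == host {
			continue
		}
		if _, ok := auths[trimmed]; ok {
			delete(auths, host)
		}
	}
}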

kubernetes: persistent volumes

This helps ensure all pods have access to a shared workspace and can run on the same machine. It also helps us implement temp_dir volumes (as defined in the drone yaml). Persistent volumes are currently disabled while we try to figure out an approach to scheduling pods on specific nodes.

kubernetes: Configmap is mounted to home as directory

When executing a step, the kubernetes runtime mounts, for example, the .netrc file into the /root/ directory. This causes the whole of /root/* to be read-only.

This can be resolved by using a subPath to mount only the specific file from the configmap.

This can result in errors like these when using node:

+ npm i
npm ERR! path /root/.npm
npm ERR! code EROFS
npm ERR! errno -30
npm ERR! syscall mkdir
npm ERR! rofs EROFS: read-only file system, mkdir '/root/.npm'
npm ERR! rofs Often virtualized file systems, or other file systems
npm ERR! rofs that don't support symlinks, give this error.

Kubernetes temporary volumes use all same hostpath

I am using the Kubernetes executor for drone.
This is a snippet from the drone.yml:

services:
- name: foo
  image: bar
  ...
  volumes:
  - name: foo
    path: /var/lib/foo
  - name: bar
    path: /data
  - name: baz
    path: /tmp/cache

volumes:
- name: foo
  temp: {}
- name: bar
  temp: {}
- name: baz
  temp: {}

The resulting kubernetes pod spec

 "spec": {
    "volumes": [
      {
        "name": "b9t3ruvez9zrvw828y46jyydk90ety93",
        "hostPath": {
          "path": "/tmp/drone/k4xzoklmyt0rbsrsiw2ec2wluxkyfycx/k4xzoklmyt0rbsrsiw2ec2wluxkyfycx",
          "type": "DirectoryOrCreate"
        }
      },
      {
        "name": "odbfsvn0obxbar794eko88hzkljw6k2n",
        "hostPath": {
          "path": "/tmp/drone/k4xzoklmyt0rbsrsiw2ec2wluxkyfycx/k4xzoklmyt0rbsrsiw2ec2wluxkyfycx",
          "type": "DirectoryOrCreate"
        }
      },
      {
        "name": "gr3hctwvm4kjettjsm0evx10mk5vqf5l",
        "hostPath": {
          "path": "/tmp/drone/k4xzoklmyt0rbsrsiw2ec2wluxkyfycx/k4xzoklmyt0rbsrsiw2ec2wluxkyfycx",
          "type": "DirectoryOrCreate"
        }
      }
    ],
...
"volumeMounts": [
          {
            "name": "b9t3ruvez9zrvw828y46jyydk90ety93",
            "mountPath": "/var/lib/foo"
          },
          {
            "name": "odbfsvn0obxbar794eko88hzkljw6k2n",
            "mountPath": "/data"
          },
          {
            "name": "gr3hctwvm4kjettjsm0evx10mk5vqf5l",
            "mountPath": "/tmp/cache"
          }
        ],

All three volumes resolve to the same host path, which leads to the problem that their data is mixed together in the same directory.
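
A sketch of one possible fix (paths and naming are illustrative): derive the host path from each volume's own identifier rather than repeating the namespace, so every temp volume gets a distinct directory:

package kube

import "path/filepath"

// tempVolumePath returns a host path unique to the volume, e.g.
// /tmp/drone/<namespace>/<volume-uid>, instead of the same path for every
// temp volume in the pipeline.
func tempVolumePath(namespace, volumeUID string) string {
	return filepath.Join("/tmp/drone", namespace, volumeUID)
}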

Provide a generic label for all jobs

I'm looking at a solution for deleting jobs on pre-1.12 k8s clusters. It would be nice if I could run kubectl delete job -l drone=job or something similar.

driver: aws

Create an aws driver that spins up servers on-demand. The yaml would look something like this:

kind: pipeline
type: aws

config:
  image: ami-1a2b3c4d
  instance: t3.small
  region: us-west-2

credentials:
  from_secret: aws_credentials

steps:
- name: build
  commands:
  - docker build -t foo/bar .
  - docker push foo/bar

Kubernetes: 'cannot create log'

I think this might be a bug.
I am using the kubernetes executor on ARM64 using cri-o runtime.
Drone creates a job and the log from the 'drone-controller' reports

1 client_config.go:548] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
log: exiting because of error: log: cannot create log: open /tmp/drone-controller.drone-job-wgfkw-45jr6.unknownuser.log.WARNING.20190117-211751.1: no such file or directory 

Assumption: Could the /tmp directory be missing?

I noticed that the AMD64 image uses alpine as the base image, while the ARM64 image uses drone/ca-certs.

Can't go get on MacOS

I am unable to install the drone-runtime on MacOS.
The issue is with a vendor library, but I figured I should point it out here.

go get -u github.com/drone/drone-runtime
# docker.io/go-docker
../../../docker.io/go-docker/service_create.go:128:36: cannot use dgst (type "github.com/opencontainers/go-digest".Digest) as type "github.com/docker/distribution/vendor/github.com/opencontainers/go-digest".Digest in argument to reference.WithDigest

kubernetes: use activeDeadlineSeconds

k8s jobs can make use of backoffLimit and activeDeadlineSeconds.
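
A minimal sketch of setting both fields on the generated job (the timeout value and helper name are illustrative):

package kube

import batchv1 "k8s.io/api/batch/v1"

// hardenJob caps retries and total runtime for a pipeline job.
func hardenJob(job *batchv1.Job, timeoutSeconds int64) {
	backoff := int32(0) // do not retry failed pipeline steps
	job.Spec.BackoffLimit = &backoff
	job.Spec.ActiveDeadlineSeconds = &timeoutSeconds // terminate the job after the deadline
}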

kubernetes: Feature Request: Step Pod Annotations

Use Case

Specific use case: Enabling AWS IAM roles for each pod step.

Kube2IAM works by looking at annotations on a given pod to figure out which role to provide info for. Before, we could just annotate the agent pods, but now each step is running in a new pod and we need a way to annotate that.

In general though there are a variety of K8s features that use annotations.

API

From an API perspective, we could treat this a bit like limits were handled in #29. This has the perk that you could apply very specific roles to each step of the pipeline.

driver: qemu

Support QEMU as the runtime engine. The corresponding yaml would be slightly different from docker, and would look something like this:

---
version: 1
kind: pipeline
type: qemu

settings:
  image: debian.img

platform:
  os: linux
  arch: amd64

steps:
- name: build
  commands:
  - go build
  - go test

trim docker for windows errors

docker for windows has long error messages and they can expose private data about the container and host machine. We should try to extract the relevant error message when possible, and always omit the extra info.

example 1:

ServerCore 1809 Images: Error response from daemon: CreateComputeSystem 3ca0aefe96cfba6d138e922ebaca6a4ded77a7ca488f944eee454918c1f3e7cc: Insufficient system resources exist to complete the requested service. (extra info: ....

example 2:
default: Error response from daemon: container e65be3e173123a115278a42af7e256b9a318de91a197e4a1a002a2cb3a34a9f7 encountered an error during CreateProcess: failure in a Windows system call: The system cannot find the file specified. (0x2) extra info:

kubernetes: log lines if \n exists

kubernetes batches log output, which means multiple lines could be received in a single []byte. We should modify the lineWriter to split when a \n exists. Note that most log lines have a trailing \n, so we should take special care not to remove it.

func (w *lineWriter) Write(p []byte) (n int, err error) {
	out := string(p)
	if w.rep != nil {
		out = w.rep.Replace(out)
	}

+	// TODO this is pseudo code and needs refined
+      if strings.Contains(strings.TrimSuffix(out, "\n"), "\n") {
+        parts := strings.Split(out, "\n")
+        // TODO create a line struct for each part
+      }

	line := &Line{
		Number:    w.num,
		Message:   out,
		Timestamp: int64(time.Since(w.now).Seconds()),
	}

	if w.state.hook.GotLine != nil {
		w.state.hook.GotLine(w.state, line)
	}
	w.num++

	w.lines = append(w.lines, line)
	return len(p), nil
}
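
A rough sketch of how the refinement could look (requires the strings package; splitting with strings.SplitAfter keeps the trailing \n on each part):

func (w *lineWriter) Write(p []byte) (n int, err error) {
	out := string(p)
	if w.rep != nil {
		out = w.rep.Replace(out)
	}

	// Kubernetes may batch several log lines into one Write call, so emit
	// one Line per newline-terminated part. SplitAfter keeps the trailing
	// "\n" on each part.
	parts := strings.SplitAfter(out, "\n")
	if last := len(parts) - 1; parts[last] == "" {
		parts = parts[:last] // drop the empty remainder after a final "\n"
	}

	for _, part := range parts {
		line := &Line{
			Number:    w.num,
			Message:   part,
			Timestamp: int64(time.Since(w.now).Seconds()),
		}
		if w.state.hook.GotLine != nil {
			w.state.hook.GotLine(w.state, line)
		}
		w.num++
		w.lines = append(w.lines, line)
	}
	return len(p), nil
}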

Kubernetes: pipeline services do not create kubernetes service

I have a drone pipeline with a services section.
When running on kubernetes, a new namespace is created and the steps and services create pods, but there is no Service in the namespace. Therefore the step pods cannot reach the service pods, since DNS resolution does not succeed.
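
For illustration, a sketch of the kind of object the runtime could create alongside the service pod (the selector label and naming are assumptions, not the engine's actual conventions):

package kube

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// toService builds a Service that fronts the pod created for a pipeline
// service step, e.g. "redis" on port 6379, so other steps can reach it by DNS.
func toService(namespace, name string, port int32) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Namespace: namespace, Name: name},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"io.drone.step.name": name}, // hypothetical pod label
			Ports: []corev1.ServicePort{
				{
					Name:       fmt.Sprintf("port-%d", port),
					Port:       port,
					TargetPort: intstr.FromInt(int(port)),
				},
			},
		},
	}
}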

Rotate Logs when limit exceeded

Instead of truncating logs when the limit is exceeded, we should push new log lines onto the list and pop older lines off the list.
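
A minimal sketch of the rotation (the count-based limit and helper name are illustrative):

// push appends a new line and evicts the oldest entries once the buffer
// grows past limit, instead of truncating and dropping new output.
func (w *lineWriter) push(line *Line, limit int) {
	w.lines = append(w.lines, line)
	if n := len(w.lines) - limit; n > 0 {
		w.lines = w.lines[n:] // pop older lines off the front of the list
	}
}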

driver: firecracker

Support Firecracker as the runtime engine. The corresponding yaml would be slightly different from docker, and would look something like this:

---
version: 1
kind: pipeline
type: firecracker

settings:
  image: debian.img

platform:
  os: linux
  arch: amd64

steps:
- name: build
  commands:
  - go build
  - go test

driver: exec

The exec driver will allow pipeline execution directly on the host machine. This can be useful for workloads that cannot run inside containers, and for supporting additional operating systems, such as osx. Only trusted repositories would be allowed to use this pipeline type. The yaml itself would look something like this:

kind: pipeline
type: exec

platform:
  os: darwin
  arch: amd64

steps:
- name: build
  commands:
  - go build
  - go test

Example of all available fields for this driver type:

kind: pipeline
type: exec

platform:
  os: darwin
  arch: amd64

clone:
  depth: 50
  disable: false
  skip_verify: false

steps:
- name: build
  shell: /bin/sh
  environment:
    GOOS: darwin
    GOARCH: amd64
    PASSWORD:
      from_secret: password
  commands:
  - go test -v
  - go build -o hello-world

Useful for pipelines that consist only of Docker commands:

kind: pipeline
type: exec

steps:
- name: build
  commands:
  - docker build -t foo .
  - docker run -n bar foo
- name: cleanup
  commands:
  - docker rm bar
  - docker rmi foo

Add a prefix to the auto-generated resources by the pipeline controller

Currently the auto-generated resource names from the pipeline controller look like this: n3grlskg6s9ac7c39l5dt80i49iirtue. This makes it extremely hard to monitor them, as we can never predict what they will look like.

Adding a prefix to the auto-generated Namespace and Pod names would solve this.

Generate Build script in Engine

Previously the yaml parser was responsible for generating the shell script and deciding how it was passed to the container. This should be moved to the runtime engine. To do so, we should add a Commands and Shell attribute:

	type Step struct {
		Entrypoint   []string          `json:"entrypoint,omitempty"`
		Command      []string          `json:"command,omitempty"`
+		Commands     []string          `json:"commands,omitempty"`
+		Shell        []string          `json:"shell,omitempty"`
...
	}

The commands and shell should be used by the engine to generate the build script and inject it into the container so that it can be executed as the main entrypoint. Some implementation notes:

  1. If the commands section is specified, it should always override the entrypoint and args
  2. If the shell is empty we should default to /bin/sh on linux and powershell on windows (do we need to specify platform somewhere to enable this?)
  3. The engine driver should choose how it wants to upload the script to the container
  4. The script generator already exists in drone-yaml-v1 and should be moved into this repository in an engine/scripts package so that it can be shared by all engine implementations
  5. The Docker engine uploads the file using docker cp and then executes. We should continue to use this approach for Docker, but should use an engine-specific approach for kubernetes. Perhaps it could be added as a mounted secret? or maybe passed as an environment variable that we execute with /bin/bash -c ${SCRIPT}? Whatever works best.

Once the runtime is patched, we need to update the drone-yaml-v1 parser to populate the commands and shell attributes, and we can remove the script generator code from the yaml package as well.
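
For illustration, a rough sketch of what a shared generator in the proposed engine/scripts package might produce (the function name and script header are assumptions):

package scripts

import (
	"bytes"
	"fmt"
)

// Generate converts a step's commands into a POSIX shell script that echoes
// each command before running it and stops on the first failure.
func Generate(commands []string) string {
	var buf bytes.Buffer
	buf.WriteString("#!/bin/sh\nset -e\n")
	for _, cmd := range commands {
		fmt.Fprintf(&buf, "echo + %q\n", cmd)
		buf.WriteString(cmd + "\n")
	}
	return buf.String()
}

The Docker engine could continue to upload the generated file with docker cp, while the Kubernetes engine might pass it as an environment variable and execute it with /bin/sh -c, per note 5 above.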

cc @metalmatze

Allow to set DRONE_RUNNER_LABELS on kubernetes jobs

When using kubernetes mode there's no way to pass DRONE_RUNNER_LABELS onto jobs, so I can't use the https://docs.drone.io/config/pipeline/nodes/ feature.

My current use case: I have drone servers in dev & prod, both watching the same repo. I want the dev server to execute the dev deploy part of the yaml, and the prod server the prod part. If I could pass DRONE_RUNNER_LABELS to the drone server, which would pass them down to all jobs and pods, I could ensure I have differently labeled workers in dev/prod and use filtering.

Long term: DRONE_RUNNER_LABELS itself might not be suitable for k8s at all, as k8s provides its own scheduling techniques. Instead of using runner labels to separate environments, maybe it makes sense to add a separate server variable for this.

Add internal logger package

We should add an internal logger package similar to go-login. Sometimes we need detailed debugging information and we need a generic way to provide this.

// A Logger represents an active logging object that generates
// lines of output to an io.Writer.
type Logger interface {
	Debug(args ...interface{})
	Debugf(format string, args ...interface{})
	Debugln(args ...interface{})

	Error(args ...interface{})
	Errorf(format string, args ...interface{})
	Errorln(args ...interface{})

	Info(args ...interface{})
	Infof(format string, args ...interface{})
	Infoln(args ...interface{})

	Warn(args ...interface{})
	Warnf(format string, args ...interface{})
	Warnln(args ...interface{})
}

Once enabled, we should provide debug logging for Docker and Kubernetes. Specifically, we need to start with the Docker pull process, to verify whether credentials are being properly passed to the Docker daemon when cloning.

kubernetes: always set resource requests and limits

https://github.com/drone/drone-runtime/pull/29/files implements resource limits perfectly, but it requires users to set the limits per step.

Many pipelines don't use this feature, yet in Kubernetes it is a best practice to set resource requests and limits. If they are not set and the cluster runs out of memory, workloads are killed in a random order.

Setting a moderate default resource limit would prevent this, and also encourage people to set appropriate limits for their steps.

As a conversation opener, the default limit could be 500m CPU and 512M memory.
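
A sketch of applying such a default when a step sets no limits of its own (the values follow the suggestion above and are open to discussion):

package kube

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// defaultResources is applied to any step container that does not declare
// its own requests or limits, so the scheduler never sees unbounded pods.
func defaultResources() corev1.ResourceRequirements {
	return corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("500m"),
			corev1.ResourceMemory: resource.MustParse("512M"),
		},
		Limits: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("500m"),
			corev1.ResourceMemory: resource.MustParse("512M"),
		},
	}
}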

docker: set custom network

The proc.Docker.Networks entries should be added to the network config endpoints:

// helper function returns the container network configuration.
func toNetConfig(spec *engine.Spec, proc *engine.Step) *network.NetworkingConfig {
	endpoints := map[string]*network.EndpointSettings{}
	endpoints[spec.Metadata.UID] = &network.EndpointSettings{
		NetworkID: spec.Metadata.UID,
		Aliases:   []string{proc.Metadata.Name},
	}
	return &network.NetworkingConfig{
		EndpointsConfig: endpoints,
	}
}
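
A hedged sketch of the proposed change (assuming proc.Docker.Networks is a slice of network identifiers), attaching each user-defined network alongside the default pipeline network:

// helper function returns the container network configuration,
// including any user-defined networks attached to the step.
func toNetConfig(spec *engine.Spec, proc *engine.Step) *network.NetworkingConfig {
	endpoints := map[string]*network.EndpointSettings{}
	endpoints[spec.Metadata.UID] = &network.EndpointSettings{
		NetworkID: spec.Metadata.UID,
		Aliases:   []string{proc.Metadata.Name},
	}
	for _, id := range proc.Docker.Networks {
		endpoints[id] = &network.EndpointSettings{NetworkID: id}
	}
	return &network.NetworkingConfig{
		EndpointsConfig: endpoints,
	}
}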

kubernetes: support named ports

When a service has more than one port, the ports need names. Drone names the ports after the port number, but Kubernetes does not seem to accept this.

ports:
- name: 80
  port: 80
- name: 443
  port: 443

The kubernetes documentation doesn't mention any naming restrictions, and yet here we are. So it appears we either need to a) require people to name ports, or b) find a port auto-naming scheme that kubernetes will accept.

kubernetes: pipeline blocking

The following yaml blocks in kubernetes:

kind: pipeline

steps:
- name: ping
  image: redis:4-alpine
  commands:
  - sleep 5
  - echo $REDIS_SERVICE_HOST
  - redis-cli -h $REDIS_SERVICE_HOST ping
- name: greetings
  image: golang:1.11
  commands:
  - echo hello
  - echo world

services:
- name: redis
  image: redis:4-alpine
  ports:
  - 6379

I suspect it has something to do with either the Wait or Tail functions, because I can see the containers are stopped in kubernetes. I will research further and post back my results.

kubernetes: Feature Request: taints + tolerations

This would help my use case of running Drone on kubernetes while having k8s nodes dedicated to running Drone jobs.

This could work by having a cluster admin create k8s nodes with taints:
kubectl taint nodes node1 dedicated=drone:NoSchedule

And so a feature request for supporting config option in Drone to set matching tolerations:

spec:
  tolerations:
    - key: dedicated
      value: drone
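
A sketch of the matching toleration applied to the generated pod spec (key and value mirror the taint above; the helper name is illustrative):

package kube

import corev1 "k8s.io/api/core/v1"

// tolerateDedicatedNodes lets pipeline pods schedule onto nodes tainted
// with dedicated=drone:NoSchedule.
func tolerateDedicatedNodes(spec *corev1.PodSpec) {
	spec.Tolerations = append(spec.Tolerations, corev1.Toleration{
		Key:      "dedicated",
		Operator: corev1.TolerationOpEqual,
		Value:    "drone",
		Effect:   corev1.TaintEffectNoSchedule,
	})
}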

feature: implement depends_on

The depends_on keyword should enable execution of pipeline steps as a DAG. The runtime currently ignores depends_on and executes pipeline steps sequentially.

errignore parameter

Consider including an errignore: true parameter in the container configuration to indicate that a failing container should be ignored and should not fail the overall pipeline.

kubernetes: allow skipping namespace creation

I'd like the kubernetes drone deployment to use a single namespace for all the workloads.

For me it makes more sense than giving drone full cluster-admin control.

Frankly, I expected drone to work like this when I was setting it up, and I ended up with the following rbac config:

apiVersion: v1
kind: Namespace
metadata:
  name: drone-workloads
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: drone-ci-runner
  namespace: drone-workloads
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["batch", "extensions"]
  resources: ["jobs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: drone-ci-runner-binding
  namespace: drone-workloads
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: drone-ci-runner
subjects:
  - kind: ServiceAccount
    name: default
    namespace: drone

Here the drone server lives in the drone namespace and runs its workloads in the drone-workloads namespace.

For some reason, drone attempted to create a namespace, and the build failed with the following:

default: namespaces is forbidden: User "system:serviceaccount:drone-workloads:default" cannot create resource "namespaces" in API group "" at the cluster scope

Actually, as I think about it now, it might be trying to create the drone-workloads namespace itself, or something like that.
In other words, I'm stuck; please help.

If it is trying to create the namespace, it would be a good idea to add a configuration value to skip that step, for cases like mine where I don't want to give cluster-admin to drone.
