
kubedock's Introduction

Kubedock

Kubedock is a minimal implementation of the docker api that will orchestrate containers on a kubernetes cluster, rather than running them locally. The main driver for this project is to be able to run tests that require docker containers inside a container, without the need to run docker-in-docker within resource-heavy containers. Containers orchestrated by kubedock are considered short-lived and ephemeral, and are not intended to run production services. An example use case is running testcontainers-java enabled unit tests in a tekton pipeline. In this use case, running kubedock as a sidecar helps orchestrate the containers inside the kubernetes cluster instead of within the task container itself.

Quick start

Running this locally with a testcontainers enabled unit test requires running kubedock with port-forwarding enabled (kubedock server --port-forward). After that, start the unit tests in another terminal with the below environment variables set, for example:

export TESTCONTAINERS_RYUK_DISABLED=true  ## optional, Ryuk can be left enabled
export TESTCONTAINERS_CHECKS_DISABLE=true ## optional, the startup checks can be left enabled
export DOCKER_HOST=tcp://127.0.0.1:2475
mvn test
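
As a quick sanity check (a sketch; it assumes a regular docker CLI is installed on the client side), you can point a docker client at the kubedock endpoint before running the tests:

export DOCKER_HOST=tcp://127.0.0.1:2475
docker version   # the response should come from kubedock, not a local docker daemon
docker info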

The default configuration for kubedock is to orchestrate in the namespace that has been set in the current context. This can be overruled with the -n argument (or via the NAMESPACE environment variable). The service requires permissions to create pods, services and configmaps in that namespace. If namespace locking is used, the service also requires permissions to create leases in the namespace.
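
A sketch of overriding the target namespace (the namespace name is just an example):

kubedock server --port-forward -n my-ci-namespace
# or equivalently via the environment variable:
NAMESPACE=my-ci-namespace kubedock server --port-forward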

To see a complete list of available options: kubedock --help.

Implementation

When kubedock is started with kubedock server, it will start an API server on port :2475, which can be used as a drop-in replacement for the default docker api server. Additionally, kubedock can also listen on a unix socket (docker.sock).

Containers

Container API calls are translated towards kubernetes pods. When a container is started, kubedock will create a kubernetes service within the cluster and map the ports to those of the container (note that only tcp is supported). This makes it accessible for use within the cluster (e.g. within a containerized pipeline running in that same cluster). It is also possible to create port-forwards for the ports that should be exposed with the --port-forward argument. These are, however, neither very performant nor stable, and are intended for local debugging. If the ports should be exposed on localhost as well, but port-forwarding is not required, they can be made available via the built-in reverse-proxy. This can be enabled with the --reverse-proxy argument, which is mutually exclusive with --port-forward.
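
For illustration, a sketch of the two ways to make container ports reachable on localhost (remember that the flags are mutually exclusive):

# port-forwards, intended for local debugging
kubedock server --port-forward

# built-in reverse-proxy on the kubedock host
kubedock server --reverse-proxy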

Starting a container is a blocking call that will wait until it results in a running pod. By default it will wait for a maximum of 1 minute, but this is configurable with the --timeout argument. The logs API calls will always return the complete history of logs and don't differentiate between stdout and stderr; all log output is sent as stdout. Executions in the containers are supported.

By default, all containers will be orchestrated using kubernetes pods. If a container has been given a specific name, this will be visible in the name of the pod. If the label com.joyrex2001.kubedock.name-prefix has been set, this will be added as a prefix to the name. This can also be set with the environment variable POD_NAME_PREFIX or with the --pod-name-prefix argument.
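
For example (a sketch; the prefix value and image are hypothetical), the prefix can be set globally on the kubedock side, or per container via the label:

kubedock server --pod-name-prefix citest
# or per container, e.g. with a plain docker client:
docker run -d --label com.joyrex2001.kubedock.name-prefix=citest nginx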

The containers will be started with the default service account. This can be changed with the --service-account argument. If required, the uid of the user that runs inside the container can also be enforced with the --runas-user argument or the com.joyrex2001.kubedock.runas-user label.
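
A sketch of both mechanisms (the service account name, uid and image are examples only):

kubedock server --service-account kubedock-runner --runas-user 1000
# or per container via the label:
docker run -d --label com.joyrex2001.kubedock.runas-user=1000 alpine:3.15 sleep 300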

Volumes

Volumes are implemented by copying the source content to the container by means of an init-container that is started before the actual container is started. By default the kubedock image with the same version as the running kubedock is used as the init container. However, this can be any image that has tar available and can be configured with the --initimage argument.
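
For example (a sketch; the mirror location is hypothetical), pointing kubedock at a cached copy of its own image to be used for the init container:

kubedock server --initimage registry.example.com/mirror/joyrex2001/kubedock:latest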

Volumes are one-way copies and ephemeral. This means that any data written into the volume is not available locally. It also means that mounts to devices or sockets are not supported (e.g. mounting a docker-socket). Volumes that point to a single file will be converted to a configmap (and are implicitly always read-only).

Copying data from a running container back to the client is supported as well, but only works if the running container has tar available. Also be aware that copying data to a container will implicitly start the container. This is different from a real docker api, where a container can be in an unstarted state. To work around this, use a volume instead. Alternatively, kubedock can be started with --pre-archive, which will convert copy statements of single files to configmaps when the container has not been started yet. This implicitly makes the target file read-only, and may not work in all use-cases (hence it's not the default).
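
A sketch of the --pre-archive flow, assuming a plain docker client on the client side (file and image names are examples):

kubedock server --pre-archive &
export DOCKER_HOST=tcp://127.0.0.1:2475
docker create --name app alpine:3.15 sleep 300   # container created, not started yet
docker cp ./app.conf app:/etc/app.conf           # single file, stored as a configmap (read-only)
docker start app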

Networking

Kubedock flattens all networking, which basically means that everything will run in the same namespace. This should be sufficient for most use-cases. Network aliases are supported. When a network alias is present, it will create a service exposing all ports that have been exposed by the container. If no ports are configured, kubedock is able to fetch ports that are exposed in the container image. To do this, kubedock should be started with the --inspector argument.

Images

Kubedock implements the images API by tracking which images are requested. It is not able to actually build or import images. If kubedock is started with --inspector, kubedock will fetch configuration information about the image by calling external container registries. This configuration includes the ports that are exposed by the container image itself, and improves network alias support. The registries should be configured by the client (for example by doing a skopeo login). By default, images are deployed with an 'IfNotPresent' pull policy. This can be globally configured with the --pull-policy argument, and can be configured on container level by adding a com.joyrex2001.kubedock.pull-policy label to the container. Possible values are 'never', 'always' and 'ifnotpresent'.
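
A sketch combining these options (the registry host is hypothetical; skopeo is just one way to provide registry credentials):

skopeo login registry.example.com
kubedock server --inspector --pull-policy always

# or override the pull policy for a single container via the label:
docker run -d --label com.joyrex2001.kubedock.pull-policy=never alpine:3.15 sleep 300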

Namespace locking

If multiple kubedock instances are using the same namespace, collisions in network aliases can occur. Since networks are flattened (see Networking), every network alias results in a Service with the name of that alias. To ensure tests don't fail because of these name collisions, kubedock can lock the namespace while it's running. When this is enabled with the --lock argument, kubedock will create a lease called kubedock-lock in the namespace, in which it tracks the current ownership.
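
For example, enabling the lock and inspecting the lease kubedock creates (a sketch; the namespace is an example):

kubedock server --lock
kubectl get lease kubedock-lock -n my-ci-namespace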

Resource requests and limits

By default containers are started without any resource request configuration. This can impact the performance of the tests that run in the containers. Setting resource requests (and limits) allows better scheduling and can improve the overall performance of the running containers. Global requests and limits can be set with --request-cpu and --request-memory, which take regular kubernetes resource request values as found in the kubernetes documentation. Limits are optional and can be configured by appending them after a comma (request,limit). If the values should be configured specifically for a container, they can be set by adding com.joyrex2001.kubedock.request-cpu or com.joyrex2001.kubedock.request-memory labels to the container with their specific requests (and limits). The labels take precedence over the cli configuration.
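
A sketch of both the global flags (using the request,limit format described above) and the per-container labels (all values are examples):

kubedock server --request-cpu 500m,1 --request-memory 512Mi,1Gi

# per container:
docker run -d \
  --label com.joyrex2001.kubedock.request-cpu=400m,1 \
  --label com.joyrex2001.kubedock.request-memory=256Mi,512Mi \
  alpine:3.15 sleep 300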

Active deadline seconds

Sometimes you may want to specify an activeDeadlineSeconds for the pods run by kubedock; this is useful in multi-tenant environments if you want the pods to use resources in the Terminating quota (if activeDeadlineSeconds is not set, pods will use the NotTerminating quota). You can set the default value using --active-deadline-seconds; pod-specific values can be configured by adding a com.joyrex2001.kubedock.active-deadline-seconds label.
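
A sketch of setting a default and overriding it for a single container (the values are examples):

kubedock server --active-deadline-seconds 1800
docker run -d --label com.joyrex2001.kubedock.active-deadline-seconds=600 alpine:3.15 sleep 300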

Pod template

The pods that are created by kubedock can be customized with additional configuration by providing a pod template with --pod-template. If this is provided, all pods that kubedock creates will use the provided pod template as a base. If the template contains a containers definition, the first entry in the list is used as a template for all containers kubedock adds to a pod (including sidecars and init containers). Note that volumes are ignored in these templates. Settings configured via the pod template have the lowest precedence when they can also be configured via other means (cli or labels).
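
A minimal sketch of such a template (this assumes the template is a regular pod manifest; the securityContext and nodeSelector values are purely illustrative and should be adapted to your cluster policies):

cat > podtemplate.yaml <<'EOF'
apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    kubernetes.io/arch: amd64
  containers:
    - name: template
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
EOF
kubedock server --pod-template podtemplate.yaml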

Kubernetes labels and annotations

Labels that are added to container images are added as annotations and labels to the created kubernetes pods. Additional labels and annotations can be added with the --annotation and --label cli arguments. Environment variables that start with K8S_ANNOTATION_ and K8S_LABEL_ will be added as kubernetes annotations or labels as well. For example, K8S_ANNOTATION_FOO will create an annotation foo with the value of the environment variable. Note that annotations and labels added via environment variables or cli will not be processed by kubedock if they have a specific control function; for these occasions specific environment variables and cli arguments are present.
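
A sketch of both mechanisms (the key/value pairs are examples, and the key=value syntax for the cli flags is an assumption):

kubedock server --annotation pipeline=nightly --label team=backend

# or via the environment:
export K8S_ANNOTATION_PIPELINE=nightly
export K8S_LABEL_TEAM=backend
kubedock server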

Resources cleanup

Kubedock will dynamically create pods and services in the configured namespace. If kubedock is requested to delete a container, it will remove the pod and related services. Kubedock will also delete all the resources (services and pods) it created in the running instance before exiting (identified with the kubedock.id label).

Automatic reaping

If a test fails and doesn't clean up its started containers, these resources will remain in the namespace. To prevent unused pods, configmaps and services from lingering around, kubedock will automatically delete these resources. If the resources are owned by the current process, they will be removed if they are older than 60 minutes (the default). If the resources have the label kubedock=true but are not owned by the running process, they will be deleted 15 minutes after the initial reap interval (in the default scenario, after 75 minutes).

Forced cleaning

The reaping of resources can also be enforced at startup. When kubedock is started with the --prune-start argument, it will delete all resources that have the label kubedock=true, before starting the API server. This includes resources that are created by other instances of kubedock.
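
For example (a sketch; the namespace is an example), pruning at startup, or performing a similar cleanup by hand with kubectl using the kubedock=true label mentioned above:

kubedock server --prune-start
# manual cleanup alternative:
kubectl delete pods,services,configmaps -l kubedock=true -n my-ci-namespace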

Docker-in-docker support

Kubedock detects if a docker socket is bound, and will add a kubedock sidecar providing this docker socket to support docker-in-docker use-cases. The sidecar that is deployed for these containers proxies all api calls to the main kubedock instance. This behavior can be disabled with --disable-dind.

Service Account RBAC

As a reference, the below role can be used to manage the permissions of the service account that is used to run kubedock in a cluster. The uncommented rules are the minimal permissions. If --lock is used, the additional (commented) rule is required as well.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubedock
  namespace: cicd
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "get", "list", "delete", "watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["create", "get", "list", "delete"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create", "get", "list", "delete"]
## optional permissions (depending on kubedock use)
# - apiGroups: ["coordination.k8s.io"]
#   resources: ["leases"]
#   verbs: ["create", "get", "update"]


kubedock's People

Contributors

appiepollo14, blarc, dpp23, jeremysprofile, joyrex2001, kahowell, lorenzobenvenuti, lpiepiora, mausch, mikevader, rcgeorge23, stratusjerry, testwill, yaraskm


kubedock's Issues

Ability to influence serviceaccount used in created deployment

I'm looking through the code but can't see a way to influence the serviceaccount used. The default serviceaccount would be unwise and probably has little to no rights in many setups. Would there be a way to specify this as a label as well?

Thanks for kubedock!

Regards,
Gijs van Dulmen

Pod names

Hi Vincent,

Is it possible to have any influence over the names that are assigned to pods when they are created in kubernetes?

Thanks

Docker cp command is failing

Problem Description

When trying to run docker cp against kubedock I get the following error

$ docker cp <local_file> <container_name>:<container_folder>
Error response from daemon: {"message":"command terminated with exit code 2"}

And from kubedock logs:

(screenshot of kubedock log output omitted)

How to reproduce it

The issue can be reproduced on OpenShift Developer Sandbox (a Red Hat Developer account is required but it's free) using the following link: https://workspaces.openshift.com/f?url=https://github.com/l0rd/micronaut-sample


Additional context

When starting a MySQL container using testcontainers, it tries to copy a file into the container and fails. That's the reason I am opening this issue. This can be tested by running ./gradlew test in the Red Hat Developer Sandbox workspace linked above.

--reverse-proxy not working as expected when running kubedock as a standalone service

I have kubedock running as a standalone service in namespace A.

I am running a test that uses kubedock to spin up a MySQL pod in namespace B.

I can see that the MySQL pod has started up. When I exec into a pod in namespace A I can successfully nc MySQL using the pod IP (on port 3306) and cluster IP (on the randomly assigned port).

However I can't connect to the kubedock pod using its pod IP and the randomly assigned MySQL service port.

# nc -vvv 10.52.156.57:34615
nc: 10.52.156.57:34615 (10.52.156.57:34615): Connection refused

I am running kubedock with the following arguments:

      args:
        - "server"
        - "--image-pull-secrets=xxxx"
        - "--namespace=B"
        - "--reverse-proxy"

Would you expect kubedock to work when run like this? Is there anything obvious I'm doing wrong?

From the logs, it looks like the kubedock reverse proxy has started up, e.g.

I0722 10:27:44.843072       1 deploy.go:190] reverse proxy for 34615 to 3306
I0722 10:27:44.843079       1 tcpproxy.go:33] start reverse-proxy localhost:34615->172.20.86.237:3306
I0722 10:27:44.852264       1 copy.go:36] copy 4096 bytes to 83710e5d2443:/

withFileSystemBind does not mount any files

Kubedock Version
0.9.2

Host OS
MacOS

Host Arch
ARM

Environment:
Local Cluster with k3d:
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.10+k3s1", GitCommit:"471f5eb3dbfeaee6b6dd6ed9ab4037c10cc39680", GitTreeState:"clean", BuildDate:"2022-02-24T23:38:19Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/arm64"}
Gitlab CI/CD pipeline k8s cluster:
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.9", GitCommit:"b631974d68ac5045e076c86a5c66fba6f128dc72", GitTreeState:"clean", BuildDate:"2022-01-19T17:45:53Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"linux/amd64"}

Steps to Reproduce:

  • Create a new container
  • Add a withFileSystemBind of any folder
  • Start the container

Expected:

  • Folder is successfully bound to container and files are available

Actual:

  • The folder was not mounted.

Additional Information:
The argument "--pre-archive" needs to be set for kubedock. Otherwise no copy/mount did work at all.

We tried mounting and copying files to the container with all available possibilities. None of them works on its own, meaning that if you use any of the following methods no file is ever mounted or copied.

  • .withFileSystemBind(...)
  • .withCopyToContainer(...)
  • .withCopyFileToContainer(...)
  • .withClasspathResourceMapping(...)

However, when combining multiple of these methods they partly work. Whenever the method ".withFileSystemBind(...)" is present for the container, all other copies work as expected, but the filesystem bind itself does not. In this case there is a config map as well as the mounted files present in the container.

Following shows the describe for the started alpinecontainer.

With all 4 methods called:

	Mounts:
		/config2 from pfiles (rw,path="fbc1ac88b427356b61892951faa93e54")
		/config3/redis.conf from pfiles (rw,path="c7f85a5709e2ec3a35488d25ef0bdde7")
		/config4/redis.conf from pfiles (rw,path="377f7586930fe9b2b041e64a12d5e76e")
		/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r4f4t (ro)
	Volumes:
		pfiles:
			Type: ConfigMap (a volume populated by a ConfigMap)
			Name: e363d58d2cf8
			Optional:  false

The same output without the .withFileSystemBind BUT still with the other 3 copies (no volume present afterwards):

	Mounts:
		/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jn4c8 (ro)

Used example test:

@Test
@SuppressWarnings("unchecked")
void shouldBindDirectory() {
    String configPath = Paths.get("").toAbsolutePath() + "/build/it-config/data";
    File file = new File(configPath + "/redis.conf");
    GenericContainer test = new GenericContainer(DockerImageName.parse("alpine:3.15"))
        .withLogConsumer(new Slf4jLogConsumer(getLogger(this.getClass().getName())))
        .withFileSystemBind(configPath, "/config", BindMode.READ_ONLY)
        .withCopyToContainer(Transferable.of(file.getAbsolutePath()), "/config2/redis.conf")
        .withCopyFileToContainer(MountableFile.forHostPath(file.getAbsolutePath()), "/config3/redis.conf")
        .withClasspathResourceMapping("redis.conf", "/config4/redis.conf", BindMode.READ_ONLY)
        .withCommand("sh", "-c", "sleep 5 && echo started && tail -f /dev/null")
        .waitingFor(Wait.forLogMessage(".*started.*", 1).withStartupTimeout(Durations.ONE_MINUTE));
    test.start();
    try {
        String basePath = test.execInContainer("ls", "/").getStdout();
        log.info(basePath);
        String foo1 = test.execInContainer("ls", "/config").getStdout();
        String foo2 = test.execInContainer("ls", "/config2/").getStdout();
        String foo3 = test.execInContainer("ls", "/config3").getStdout();
        String foo4 = test.execInContainer("ls", "/config4").getStdout();
        log.info("TEST withFileSystemBind: " + foo1);
        log.info("TEST withCopyToContainer: " + foo2);
        log.info("TEST withCopyFileToContainer: " + foo3);
        log.info("TEST withClasspathResourceMapping: " + foo4);
    } catch (IOException | InterruptedException e) {
        throw new RuntimeException(e);
    }
}

Testcontainer in openshift unix:///var/run/docker.sock is not listening

I'm trying to set this up to use testcontainers in openshift, but the maven task can't seem to get a connection to the socket.

My task:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: mvn-test
spec:
  params:
    - default: ''
      name: MAVEN_MIRROR_URL
      type: string
    - default: .
      name: CONTEXT_DIR
      type: string
  workspaces:
      - name: source
      - name: maven-settings
  sidecars:
    - name: kubedock
      image: joyrex2001/kubedock:latest
      args:
        - server
        - --reverse-proxy
        - --unix-socket
        - /var/run/docker.sock
      env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      readinessProbe:
        exec:
          command:
          - sh
          - -c
          - sleep 5 && touch /var/run/docker.sock
        timeoutSeconds: 10
      volumeMounts:
        - name: $(workspaces.source.volume)
          mountPath: $(workspaces.source.path)
        - name: kubedock-socket
          mountPath: /var/run/
  steps:
    - env:
        - name: HOME
          value: /tekton/home
      image: >-
        registry.redhat.io/ubi8/ubi-minimal@sha256:6910799b75ad41f00891978575a0d955be2f800c51b955af73926e7ab59a41c3
      name: mvn-settings
      script: >
        # ...
    - name: step-mvn-test
      image: gcr.io/cloud-builders/mvn
      workingDir: $(workspaces.source.path)/$(params.CONTEXT_DIR)
      command:
        - /usr/bin/mvn
      args:
        - '-s'
        - $(workspaces.maven-settings.path)/settings.xml
        - test
      env:
        - name: HOME
          value: /tekton/home
        - name: TESTCONTAINERS_RYUK_DISABLED
          value: "true"
        - name: TESTCONTAINERS_CHECKS_DISABLE
          value: "true"
      volumeMounts:
        - name: kubedock-socket
          mountPath: /var/run/
  volumes:
    - name: kubedock-socket
      emptyDir: {}

Error log from maven:

13:29:58.318 [main] DEBUG org.testcontainers.utility.TestcontainersConfiguration - Testcontainers configuration overrides will be loaded from file:/tekton/home/.testcontainers.properties
13:29:58.327 [main] WARN org.testcontainers.utility.TestcontainersConfiguration - Attempted to read Testcontainers configuration file at file:/tekton/home/.testcontainers.properties but the file was not found. Exception message: FileNotFoundException: /tekton/home/.testcontainers.properties (No such file or directory)
13:29:58.331 [main] DEBUG org.testcontainers.utility.TestcontainersConfiguration - Testcontainers configuration overrides will be loaded from file:/workspace/source/target/test-classes/testcontainers.properties
13:29:58.333 [main] INFO org.testcontainers.utility.ImageNameSubstitutor - Image name substitution will be performed by: DefaultImageNameSubstitutor (composite of 'ConfigurationFileImageNameSubstitutor' and 'PrefixingImageNameSubstitutor')
13:29:58.359 [main] DEBUG org.testcontainers.dockerclient.RootlessDockerClientProviderStrategy - $XDG_RUNTIME_DIR is not set.
13:29:58.360 [main] DEBUG org.testcontainers.dockerclient.RootlessDockerClientProviderStrategy - '/tekton/home/.docker/run' does not exist.
13:29:58.399 [main] DEBUG org.testcontainers.dockerclient.RootlessDockerClientProviderStrategy - '/run/user/65532' does not exist.
13:29:58.400 [main] DEBUG org.testcontainers.dockerclient.DockerClientProviderStrategy - Trying out strategy: UnixSocketClientProviderStrategy
13:29:58.441 [main] WARN org.testcontainers.dockerclient.DockerClientProviderStrategy - DOCKER_HOST unix:///var/run/docker.sock is not listening
13:29:58.441 [main] DEBUG org.testcontainers.dockerclient.DockerClientProviderStrategy - strategy UnixSocketClientProviderStrategy did not pass the test
13:29:58.446 [main] INFO org.testcontainers.dockerclient.DockerMachineClientProviderStrategy - docker-machine executable was not found on PATH ([/usr/java/openjdk-18/bin, /usr/local/sbin, /usr/local/bin, /usr/sbin, /usr/bin, /sbin, /bin])
13:29:58.447 [main] ERROR org.testcontainers.dockerclient.DockerClientProviderStrategy - Could not find a valid Docker environment. Please check configuration. Attempted configurations were:
As no valid configuration was found, execution cannot continue.
See https://www.testcontainers.org/on_failure.html for more details.

Log from kubedock:

sidecar-kubedock
I0601 13:28:08.357062       1 main.go:28] kubedock 0.11.0-1-g761ec8a (20230525-104822)
I0601 13:28:08.357843       1 main.go:105] kubernetes config: namespace=playground, initimage=joyrex2001/kubedock:0.11.0, ready timeout=1m0s
I0601 13:28:08.358193       1 main.go:129] reaper started with max container age 1h0m0s
I0601 13:28:08.358291       1 main.go:75] enabled reverse-proxy services via 0.0.0.0 on the kubedock host
I0601 13:28:08.358386       1 main.go:102] default image pull policy: ifnotpresent
I0601 13:28:08.358416       1 main.go:105] service account used in deployments: default
I0601 13:28:08.358445       1 main.go:107] using namespace: playground
I0601 13:28:08.358539       1 main.go:46] api server started listening on /var/run/docker.sock
I0601 13:30:08.543594       1 main.go:175] exit signal recieved, removing pods, configmaps and services

Kubedock does not remove pods after test finishes

I am using java / junit / testcontainers with Kubedock to spin up a bunch of containers for a test.

After the test has finished, the containers are not removed immediately, but they are eventually cleaned up after an hour or so.

Is this expected behaviour?

To get around this I have added an afterAll hook that explicitly removes the pods, which works fine, but I wonder whether something is misconfigured as from the docs it sounds like pod removal should happen automatically after the test has finished.

TestContainers Kafka wrapper fails to start Confluent Kafka

The Testcontainers Kafka module fails to start up a container. The following starter example works fine on a standard Docker backend but fails with Kubedock.

import org.junit.jupiter.api.Test;

import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

@Testcontainers
public class KafkaTest {

    @Test
    void testKafkaStartup() {
        KafkaContainer KAFKA_CONTAINER = new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:5.5.0"))
                .withStartupAttempts(3);

        KAFKA_CONTAINER.start();
        KAFKA_CONTAINER.stop();
    }
}

Running this example against a local Kubedock build from master yields:

[GIN] 2022/02/14 - 12:33:51 | 201 |       165.8µs |       127.0.0.1 | POST     "/containers/create"
[GIN] 2022/02/14 - 12:33:53 | 204 |    2.4264976s |       127.0.0.1 | POST     "/containers/44e0da1c4a4ff1260dfe20404f1a9f916c88ab78dcf1f0d0204a51fc228c0cde/start"
I0214 12:33:53.722604   27595 portforward.go:42] start port-forward 37943->9093
I0214 12:33:53.722665   27595 portforward.go:42] start port-forward 52796->2181
[GIN] 2022/02/14 - 12:33:53 | 200 |        86.8µs |       127.0.0.1 | GET      "/containers/44e0da1c4a4ff1260dfe20404f1a9f916c88ab78dcf1f0d0204a51fc228c0cde/json"
I0214 12:33:53.833324   27595 copy.go:36] copy 2048 bytes to 44e0da1c4a4f:/
E0214 12:33:58.908660   27595 v2.go:168] io: read/write on closed pipe
E0214 12:33:58.914197   27595 util.go:18] error during request[500]: command terminated with exit code 2
[GIN] 2022/02/14 - 12:33:58 | 500 |    5.1302852s |       127.0.0.1 | PUT      "/containers/44e0da1c4a4ff1260dfe20404f1a9f916c88ab78dcf1f0d0204a51fc228c0cde/archive?noOverwriteDirNonDir=false&path=%2F"
[GIN] 2022/02/14 - 12:33:59 | 200 |    102.6893ms |       127.0.0.1 | GET      "/containers/44e0da1c4a4ff1260dfe20404f1a9f916c88ab78dcf1f0d0204a51fc228c0cde/logs?stdout=true&stderr=true&since=0"
[GIN] 2022/02/14 - 12:33:59 | 201 |       144.7µs |       127.0.0.1 | POST     "/containers/create"
[GIN] 2022/02/14 - 12:34:01 | 204 |     2.379796s |       127.0.0.1 | POST     "/containers/86abe9fd787d0b557bde2496a269a708f03a9a368b05989116d6cfad93ca41e3/start"
I0214 12:34:01.436944   27595 portforward.go:42] start port-forward 38007->2181
I0214 12:34:01.437094   27595 portforward.go:42] start port-forward 52166->9093
[GIN] 2022/02/14 - 12:34:01 | 200 |       175.1µs |       127.0.0.1 | GET      "/containers/86abe9fd787d0b557bde2496a269a708f03a9a368b05989116d6cfad93ca41e3/json"
I0214 12:34:01.491702   27595 copy.go:36] copy 2048 bytes to 86abe9fd787d:/
E0214 12:34:01.989394   27595 v2.go:168] io: read/write on closed pipe
E0214 12:34:01.990213   27595 util.go:18] error during request[500]: command terminated with exit code 2
[GIN] 2022/02/14 - 12:34:01 | 500 |    547.6007ms |       127.0.0.1 | PUT      "/containers/86abe9fd787d0b557bde2496a269a708f03a9a368b05989116d6cfad93ca41e3/archive?noOverwriteDirNonDir=false&path=%2F"
[GIN] 2022/02/14 - 12:34:02 | 200 |    103.6247ms |       127.0.0.1 | GET      "/containers/86abe9fd787d0b557bde2496a269a708f03a9a368b05989116d6cfad93ca41e3/logs?stdout=true&stderr=true&since=0"
[GIN] 2022/02/14 - 12:34:02 | 201 |       183.3µs |       127.0.0.1 | POST     "/containers/create"
[GIN] 2022/02/14 - 12:34:04 | 204 |    2.4081256s |       127.0.0.1 | POST     "/containers/479e20e215dd43c14171afb588470b239a31c475f8ac44670d54b95dad71c6fd/start"
I0214 12:34:04.518805   27595 portforward.go:42] start port-forward 44148->9093
I0214 12:34:04.518965   27595 portforward.go:42] start port-forward 47341->2181
[GIN] 2022/02/14 - 12:34:04 | 200 |       207.4µs |       127.0.0.1 | GET      "/containers/479e20e215dd43c14171afb588470b239a31c475f8ac44670d54b95dad71c6fd/json"
I0214 12:34:04.573697   27595 copy.go:36] copy 2048 bytes to 479e20e215dd:/
E0214 12:34:05.047982   27595 v2.go:168] io: read/write on closed pipe
E0214 12:34:05.052508   27595 util.go:18] error during request[500]: command terminated with exit code 2
[GIN] 2022/02/14 - 12:34:05 | 500 |    526.6571ms |       127.0.0.1 | PUT      "/containers/479e20e215dd43c14171afb588470b239a31c475f8ac44670d54b95dad71c6fd/archive?noOverwriteDirNonDir=false&path=%2F"
[GIN] 2022/02/14 - 12:34:05 | 200 |    101.7087ms |       127.0.0.1 | GET      "/containers/479e20e215dd43c14171afb588470b239a31c475f8ac44670d54b95dad71c6fd/logs?stdout=true&stderr=true&since=0"

I've looked at this off and on over the last few weeks and was hopeful that perhaps the fix for #6 would resolve this as well.

Testcontainers waiting for container output to contain expected content is not reliable

Hello,

First of all thank you very much for the awesome project!

We've tried using kubedock for our testcontainers tests, but have hit an issue with using the below pattern from the testcontainers docs:

WaitingConsumer consumer = new WaitingConsumer();

container.followOutput(consumer, STDOUT);

consumer.waitUntil(frame -> 
    frame.getUtf8String().contains("STARTED"), 30, TimeUnit.SECONDS);

About 1 out of 5 times, it will time out even though the logs do contain the expected string. Calling container.getLogs() just before the wait confirms that.

Is this a known limitation? I am happy to help debug this, but I'm not sure where to start.

Reverse proxy with random ports

Hi @joyrex2001, really enjoying kubedock so far!

We are trying to move away from using --port-forward, replacing it with --reverse-proxy, unfortunately we have a bunch of TestContainers tests which need to communicate with the container via random ports.
We're seemingly hitting a wall here with --reverse-proxy, with the TestContainers tests ending up failing with timeouts, whereas it works out of the box with --port-forward.

Do you have any suggestions for this use case? This might simply be that I do not fully understand how --reverse-proxy is supposed to work, as there isn't really a lot of documentation on this flag, so feel free to correct me if it isn't designed for this.
Alternatively, what makes --port-forward unreliable, and is it addressable?

We would also like to host kubedock on our cluster, while running the tests remotely on our CI platform, however that requires an extra layer of proxying between kubedock and our CI with something like kubectl port-forward, which makes this problem even worse. Have you thought about this scenario as well?

Setting custom labels on created resources from command argument or env var

Hi!

We want to configure some labels on resources created by kubedock with the current CI job information.

And we don't have control over the container names or labels to use the already present features.

We run kubedock as a service on Gitlab runner.
This mean that when Gitlab launch a job a pod is created with kubedock as a sidecar.
So each kubedock instance is dedicated to one job.

It would be convenient to be able to configure the desired labels as environment variables or command arguments of the kubedock instance.

Lock acquisition hangs with kubedock 0.14.0

After upgrading kubedock from 0.13.0 to 0.14.0 we're experiencing problems with the lease acquisition when using the --lock flag. The startup hangs after the following log message:

leaderelection.go: attempting to acquire leader lease esta-tekton-dev/kubedock-lock...

and the testcontainers fail because the Docker API is not available.

This seems like a comeback of #40. The Lease resource is again left with a non-empty holderIdentity field after kubedock terminates.

We're running Kubedock on Openshift and the environment didn't change. Downgrading to 0.13.0 solves the problem thus it must be related to a change in 0.14.0.

Leader election lease not removed on shutdown

I'm running kubedock on OpenShift to enable Testcontainers within Tekton pipelines.

When using the --lock option, the first time kubedock server starts, it'll create the kubedock-lock lease. But when terminating kubedock (with kill -3), the lease remains and on the next run, the server hangs with the message "leaderelection.go:245] attempting to acquire leader lease esta-tekton-predev/kubedock-lock..." and the testcontainers fail because the Docker API is not available.

Maybe it would make sense to remove the lease when shutting down kubedock. Or is there something I'm missing when using the --lock option?

Testcontainers failure with kafka using latest testcontainer version

Since the testcontainers change testcontainers/testcontainers-java#7333:

Advertised listeners are added with the container hostname in:

    protected String brokerAdvertisedListener(InspectContainerResponse containerInfo) {
        return String.format("BROKER://%s:%s", containerInfo.getConfig().getHostName(), "9092");
    }

However, kubedock is not returning the hostname information for the call /containers/{id}/json

in json path $.Config.Hostname

{
    "Config": {
        "Cmd": [
            "-c",
            "while [ ! -f /testcontainers_start.sh ]; do sleep 0.1; done; /testcontainers_start.sh"
        ],
        "Env": [
            "KAFKA_LOG_FLUSH_INTERVAL_MESSAGES=9223372036854775807",
            "KAFKA_BROKER_ID=1",
            "KAFKA_TRANSACTION_STATE_LOG_MIN_ISR=1",
            "KAFKA_ZOOKEEPER_CONNECT=localhost:2181",
            "KAFKA_OFFSETS_TOPIC_NUM_PARTITIONS=1",
            "KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR=1",
            "KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1",
            "KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9093,BROKER://0.0.0.0:9092",
            "KAFKA_INTER_BROKER_LISTENER_NAME=BROKER",
            "KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS=0",
            "KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=BROKER:PLAINTEXT,PLAINTEXT:PLAINTEXT"
        ],
        "Image": "confluentinc/cp-kafka:7.2.2",
        "Labels": {
            "com.joyrex2001.kubedock.pull-policy": "ifnotpresent",
            "com.joyrex2001.kubedock.runas-user": "0",
            "com.joyrex2001.kubedock.service-account": "default",
            "org.testcontainers": "true",
            "org.testcontainers.lang": "java",
            "org.testcontainers.sessionId": "61f69934-1293-47a7-8a83-c34d45477150",
            "org.testcontainers.version": "1.19.0"
        },
        "Tty": false
    },
    "Created": "2023-09-22T08:27:19Z",
    "HostConfig": {
        "LogConfig": {
            "Config": {},
            "Type": "json-file"
        },
        "NetworkMode": "bridge"
    },
    "Id": "30539bcf9ea636c91c42b9fc88c6cc7e911f75e3da93dc02b723025c8d1cd27b",
    "Image": "confluentinc/cp-kafka:7.2.2",
    "Name": "/",
    "Names": [
        "/30539bcf9ea636c91c42b9fc88c6cc7e911f75e3da93dc02b723025c8d1cd27b",
        "/30539bcf9ea6"
    ],
    "NetworkSettings": {
        "IPAddress": "127.0.0.1",
        "Networks": {
            "bridge": {
                "Aliases": null,
                "IPAddress": "127.0.0.1",
                "NetworkID": "0418b65c2fbf92590ee4ccb1ac52ca2f6c69a865a7d8352d4c634734e77e1061"
            }
        },
        "Ports": {
            "2181/tcp": [
                {
                    "HostIp": "0.0.0.0",
                    "HostPort": "63606"
                }
            ],
            "9093/tcp": [
                {
                    "HostIp": "0.0.0.0",
                    "HostPort": "63605"
                }
            ]
        }
    },
    "State": {
        "Dead": false,
        "Error": "",
        "ExitCode": 0,
        "FinishedAt": "0001-01-01T00:00:00Z",
        "Health": {
            "Status": "healthy"
        },
        "OOMKilled": false,
        "Paused": false,
        "Restarting": false,
        "Running": true,
        "StartedAt": "2023-09-22T08:27:19Z",
        "Status": "Up"
    }
}

Issue similar to #43

Kubedock liveness / readiness probe?

Hi there,

More of a question than an issue...

Does kubedock provide a liveness / readiness endpoint?

I would like to run kubedock as a standalone deployment in k8s, so I'm wondering how to configure the k8s liveness / readiness probes.

Thanks!

Running kubedock throws `error instantiating kubernetes client: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"`

Build from latest master (ae61e80) and trying to run kubedock results in the following error:

./kubedock server     --port-forward     --namespace kubedock     --verbosity 10     --request-cpu 50m     --request-memory 100Mi     --unix-socket /tmp/docker.sock     --timeout 5m0s
I0814 09:23:33.303987 2577016 main.go:28] kubedock 0.8.2-16-gae61e80 (20220814-082213)
F0814 09:23:33.305875 2577016 main.go:37] error instantiating kubernetes client: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

This does not reproduce from tag 0.8.2. The cluster I am running against is v1.20 so likely a version/api mismatch.

Accessing containers - --reverse-proxy / --inspector

I am struggling a bit trying to understand how best to access containers that kubedock has created.

I have created a simple integration test that depends on a mysql container. If I use kubedock in --reverse-proxy mode, I can see that when my test asks testcontainers for the mysql container db url, it is given the fully qualified kubedock hostname with the random port that has been assigned to the mysql instance:

Waiting for database connection to become available at jdbc:mysql://kubedock.xxx.svc.cluster.local:57636/yyy using query 'SELECT 1'

Unfortunately, because I have not exposed this port explicitly in the kubedock service config, it times out while trying to obtain a connection.

If I remove the --reverse-proxy flag and instead try to use --inspector, I can see that a k8s service has been created for my mysql container with the random port exposed, however when my test asks testcontainers for the mysql db url, it is still given the kubedock hostname rather than that of the mysql service.

So I guess my question is, what is the expected use case for the --inspector flag? Is there some way I can get testcontainers to provide me with the container's service name rather than the kubedock service name?

kubedock and insecure registry usage

Hi,

I want to integrate kubedock in our CI flow but I encounter an issue while spinning up the testcontainer containers.
I exported the DOCKER_HOST env to point to kubedock API - ALL OK.
When the maven runs the tests (using testcontainers) I get the following errors:
18:20:58 [2023-09-06T15:20:58.820Z] 15:20:48.252 [main] WARN org.testcontainers.dockerclient.DockerClientProviderStrategy - Could not determine Docker OS type
18:20:58 [2023-09-06T15:20:58.820Z] 15:20:48.253 [main] INFO org.testcontainers.DockerClientFactory - Docker host IP address is 127.0.0.1
18:20:58 [2023-09-06T15:20:58.820Z] 15:20:48.263 [main] INFO org.testcontainers.DockerClientFactory - Connected to docker: ...
18:20:58 [2023-09-06T15:20:58.823Z] Caused by: org.testcontainers.containers.ContainerFetchException: Can't get Docker image: RemoteDockerImage(imageName=nexus3-prod.radcom.co.il:8084/mongo:5.0.5, imagePullPolicy=DefaultPullPolicy(), imageNameSubstitutor=org.testcontainers.utility.ImageNameSubstitutor$LogWrappedImageNameSubstitutor@10895b16)
18:20:58 [2023-09-06T15:20:58.824Z] at org.testcontainers.containers.GenericContainer.getDockerImageName(GenericContainer.java:1371)
18:20:58 [2023-09-06T15:20:58.824Z] Caused by: com.github.dockerjava.api.exception.InternalServerErrorException: Status 500: {"message":"pinging container registry nexus3-prod.radcom.co.il:8084: Get \"https://nexus3-prod.radcom.co.il:8084/v2/\": http: server gave HTTP response to HTTPS client"}
...
In a standard docker engine environment, I would configure the "insecure registries" in the /etc/docker/daemon.json.
But when using Kubedock, I could not find a way to mark a particular registry as insecure.
I know it's not a best practice to use HTTP registries, but at this moment I need to find a solution for this blocking point. Do you have any tips or solutions?
Thanks.

Container not found errors for all containers I start via testcontainers.

I probably am doing something wrong, but I am running a kubedock pod. I have testcontainers running via gitlab ci. When it runs, it connects fine and creates containers. I can see it in the logs. However, when I check the logs it keeps saying this for all booted containers:

W0524 18:09:48.023092 1 util.go:64] container status error: pods "kubedock-eohvuhyspkj8-redis-1-d2c8e83115a7" not found

When I run it with high verbosity it seems to say that the containers were OOMKilled; however, they were not OOMKilled, because I don't see any OOMKilling at all in the kube events or the container statuses. I even tried setting high limits + requests just to test.

I will try now with 0.10.0 since 0.11.0 is new and perhaps some bug was introduced. Will report back.

My startup commands/args:

command: ["/app/kubedock"]
        args: ["server", "-ngitlab-runner", "--reverse-proxy", "--inspector", "--request-memory=1024Mi,1024Mi", "--timeout=4m0s", "--service-account=gitlab-runner-devops"]

I also went ahead and gave the service account cluster-admin for all services accounts in this NS just to eliminate other possible reasons.

I am on GKE 1.25.x

Port forward automatic retry

Hi
I'm using your tool and I really like it. Currently I have a problem where some containers seem to open, close and reopen ports on startup, and this causes the port-forward feature to fail. Is it possible to change the code so that, if an aborted port forward is detected, the tool automatically retries it? This would make this feature more robust.
Thanks
Markus Ritter

P.S. if there is anything I can do to help, please let me know.

Copying a file to a container before starting it

Hi @joyrex2001,

We have a container that we would like to copy some files to before starting it (debezium)

We were using testcontainer's withCopyToContainer to copy these files in, which worked locally but did not work in k8s with kubedock. I found this thread where the same issue was being discussed:

#1

So I updated our test to use withFileSystemBind instead, and while this worked locally it unfortunately didn't seem to work in k8s either.

While investigating the issue, I notice that the debezium connector pod has a kubedock sidecar:

 NAME↑        PF       IMAGE                                                               READY        STATE                   INIT               RESTARTS PROBES(L:R)             CPU       MEM       CPU/R:L       MEM/R:L       %CPU/R       %CPU/L        %MEM/R        %MEM/L PORTS                    AGE            │
 main         ●        eu.gcr.io/my-company/docker.io/debezium/connect:1.9.6.Final         false        PodInitializing         false                     0 off:off                   0         0        1000:0        1000:0            0          n/a             0           n/a kd-tcp-8083:8083         6m32s          │
 setup        ●        joyrex2001/kubedock:0.10.0                                          false        ImagePullBackOff        true                      0 off:off                   0         0          10:0         128:0            0          n/a             0           n/a                          6m32s          │
                                                                                                                                                                                                                                                                                                                            

It looks like this is to do with the way the configmap is created.

As you can see the image cannot be pulled -- this is because <my company> is quite a large bank, so we very quickly get rate limited by docker hub when trying to pull images (all traffic from our infrastructure comes from a small range of IP addresses from the wider internet's perspective).

Our solution to being rate limited is to cache images from docker hub in our own image repo (eu.gcr.io/my-company/...), and reference those ones instead of the public ones from docker hub. Is it possible to tell kubedock to use the cached GCR kubedock image when it spins up the sidecar instead of the docker hub one?

Thanks!

Setting properties in pod template on container level

Hi @joyrex2001,

We are planning to use testcontainers library together with kubedock tool in our CI/CD pipeline on Kubernetes cluster.
The problem is that our IT sets some SecurityContext requirements on both the pod and container level. As a consequence, kubedock is not able to create pods, as its specification does not meet the requirements. We tried to use the pod template option, but it is not possible to set properties on the container level.
Would it be possible to extend logic for pod template to be able to define properties on container level?

Thank you for your great tool!

Cheers,
Vladislav

UDP Support

I would like to know if you plan to support udp.
I see in the README that it's not supported yet.

Best Match target system architecture

Hey @joyrex2001,
Really cool stuff, thanks for this 🙇. I recently ran into a little problem and wanted to ask if a change could be proposed.

In https://github.com/joyrex2001/kubedock/blob/master/internal/server/routes/common/images.go#L57
there is no architecture returned, resulting in an emulated architecture which sometimes runs so slowly that it doesn't survive the health/ready check.

My setup is using kubedock running as a proxy for java-testcontainers.

There is already an evaluation of the server/backend architecture; would it make sense to return the backend architecture at that point?

Error getting events when running `docker run` in OpenShift (but the Pod starts successfully)

Problem Description

I am running kubedock in a container (this is the dockerfile) on OpenShift using the following command:

kubedock server --port-forward

and when I run

export DOCKER_HOST='tcp://127.0.0.1:2475'
docker run --name httpd -d -p 8080:8080 python python -m http.server 8080

I get the following error:

ERRO[0000] error getting events from daemon: Error response from daemon: 404 page not found 

But the Pod is created successfully (c.f. screenshots)

How to reproduce it

The issue can be reproduced on OpenShift Developer Sandbox (a Red Hat Developer account is required but it's free) using the following link: https://workspaces.openshift.com/f?url=https://github.com/l0rd/micronaut-sample

Screenshots

(screenshots omitted)

Implementing docker build command with Kaniko

Hi,

I've observed that our current feature set lacks support for the docker build command. To enrich our capabilities, I propose the use of Kaniko, a tool designed to build Docker images in environments such as Kubernetes, without a Docker daemon.

My suggestion is to trigger a Kaniko Job on each docker build command issued, allowing the Docker image to be built and pushed directly to the specified registry within the Kubernetes cluster.

I believe this addition would significantly streamline Docker image building and deployment processes, especially beneficial in CI/CD contexts.

I'm eager to discuss this further and potentially contribute to its implementation.

Thanks.

Label com.joyrex2001.kubedock.runas-user not working

We want to start a testcontainer with the label com.joyrex2001.kubedock.runas-user

public class KafkaTestResource extends KafkaCompanionResource {

    @Override
    public void init(Map<String, String> initArgs) {
        if (this.kafkaCompanion == null) {
            super.init(initArgs);
            this.kafka.withLabel("com.joyrex2001.kubedock.runas-user", "0");
            this.kafka.withLabel("com.joyrex2001.kubedock.request-cpu", "400m,1");

The label does not change the runAsUser setting in the created Pod.
See the following output of the kubedock log, where the labels are visible but kubedock starts the pod with the user defined in the image:

I0707 08:57:12.146529       1 util.go:78] Request Body: {"Hostname":null,"Domainname":null,"User":null,"AttachStdin":null,"AttachStdout":null,"AttachStderr":null,"PortSpecs":null,"Tty":null,"OpenStdin":null,"StdinOnce":null,"Env":["LOG_DIR=/tmp"],"Cmd":["sh","-c","while [ ! -f /testcontainers_start.sh ]; do sleep 0.1; done; /testcontainers_start.sh"],"Healthcheck":null,"ArgsEscaped":null,"Entrypoint":null,"Image":"quay.io/strimzi-test-container/test-container:0.100.0-kafka-3.1.0","Volumes":{},"WorkingDir":null,"MacAddress":null,"OnBuild":null,"NetworkDisabled":null,"ExposedPorts":{"9092/tcp":{}},"StopSignal":null,"StopTimeout":null,"HostConfig":{"Binds":[],"BlkioWeight":null,"BlkioWeightDevice":null,"BlkioDeviceReadBps":null,"BlkioDeviceWriteBps":null,"BlkioDeviceReadIOps":null,"BlkioDeviceWriteIOps":null,"MemorySwappiness":null,"NanoCpus":null,"CapAdd":null,"CapDrop":null,"ContainerIDFile":null,"CpuPeriod":null,"CpuRealtimePeriod":null,"CpuRealtimeRuntime":null,"CpuShares":null,"CpuQuota":null,"CpusetCpus":null,"CpusetMems":null,"Devices":null,"DeviceCgroupRules":null,"DeviceRequests":null,"DiskQuota":null,"Dns":null,"DnsOptions":null,"DnsSearch":null,"ExtraHosts":[],"GroupAdd":null,"IpcMode":null,"Cgroup":null,"Links":[],"LogConfig":null,"LxcConf":null,"Memory":null,"MemorySwap":null,"MemoryReservation":null,"KernelMemory":null,"NetworkMode":"95cde802aa72b1b9dd5955feb45e93ad6ca3bb2ef27388caa24d85c23a6d13f7","OomKillDisable":null,"Init":null,"AutoRemove":null,"OomScoreAdj":null,"PortBindings":{"9092/tcp":[{"HostIp":"","HostPort":""}]},"Privileged":null,"PublishAllPorts":null,"ReadonlyRootfs":null,"RestartPolicy":null,"Ulimits":null,"CpuCount":null,"CpuPercent":null,"IOMaximumIOps":null,"IOMaximumBandwidth":null,"VolumesFrom":[],"Mounts":null,"PidMode":null,"Isolation":null,"SecurityOpt":null,"StorageOpt":null,"CgroupParent":null,"VolumeDriver":null,"ShmSize":null,"PidsLimit":null,"Runtime":null,"Tmpfs":null,"UTSMode":null,"UsernsMode":null,"Sysctls":null,"ConsoleSize":null,"CgroupnsMode":null},"Labels":{"com.joyrex2001.kubedock.runas-user":"0","org.testcontainers":"true","org.testcontainers.lang":"java","com.joyrex2001.kubedock.request-cpu":"400m,1","org.testcontainers.version":"1.17.6","org.testcontainers.sessionId":"4b7fc7b7-b48b-4059-866c-713f470b8861"},"Shell":null,"NetworkingConfig":{"EndpointsConfig":{"95cde802aa72b1b9dd5955feb45e93ad6ca3bb2ef27388caa24d85c23a6d13f7":{"IPAMConfig":null,"Links":null,"Aliases":["tc-qkimr7Zz"],"NetworkID":null,"EndpointID":null,"Gateway":null,"IPAddress":null,"IPPrefixLen":null,"IPv6Gateway":null,"GlobalIPv6Address":null,"GlobalIPv6PrefixLen":null,"MacAddress":null}}}}
I0707 08:57:12.146797       1 util.go:101] Response Body: {"Id":"d9b8a736e1ddb306f38e172f517b4cf0482d07709d721641d2d2bfb253a1fd29"}
[GIN] 2023/07/07 - 08:57:12 | 201 |     393.293µs |      10.131.1.9 | POST     "/containers/create"
I0707 08:57:12.211618       1 util.go:77] Request Headers: http.Header{"Accept":[]string{"application/json"}, "Accept-Encoding":[]string{"gzip, x-gzip, deflate"}, "Connection":[]string{"keep-alive"}, "Content-Type":[]string{"application/json"}, "User-Agent":[]string{"Apache-HttpClient/5.0.3 (Java/17.0.5)"}, "X-Tc-Sid":[]string{"4b7fc7b7-b48b-4059-866c-713f470b8861"}}
I0707 08:57:12.211645       1 util.go:78] Request Body: 
W0707 08:57:12.211698       1 container.go:174] user not set, will run as user defined in image

I've quickly searched through the repository and found the const for the labels here but there is no const for the label com.joyrex2001.kubedock.runas-user.

We do not want to run any Pod with UID 0, so the flag --runas-user is not a solution.

Am I doing something wrong or is the implementation of the label missing?

Document serviceaccount RBAC configuration

I did not find in the Readme how to correctly configure RBAC for the serviceaccount.
I personally found this to be a minimal working configuration:

        apiVersion: rbac.authorization.k8s.io/v1
        kind: Role
        metadata:
          name: kubedock
        rules:
          - apiGroups: ["apps"]
            resources: ["deployments"]
            verbs: ["create", "get", "list", "delete"]
          - apiGroups: [""]
            resources: ["pods", "pods/log"]
            verbs: ["list", "get"]
          - apiGroups: [""]
            resources: ["services"]
            verbs: ["create", "get", "list"]

Perhaps it should be mentioned in the doc?

Kubedock does not work with recent testcontainers-java kafka (1.16.+)

Hi,

First of all: thanks for making and maintaining this repo. Really useful! I have played around with this repo and especially with Kafka testcontainers on OpenShift with Tekton. I found out that your example works nicely on OpenShift, but my project failed.

Mainly because your examples use version 1.15.3 while my project was using 1.16.3. There have been some changes around the dynamic updating of the Kafka config.

With version 1.16.3 the args of the deployed containers look like:

args:
        - sh
        - '-c'
        - |
          #!/bin/bash
          echo 'clientPort=2181' > zookeeper.properties
          echo 'dataDir=/var/lib/zookeeper/data' >> zookeeper.properties
          echo 'dataLogDir=/var/lib/zookeeper/log' >> zookeeper.properties
          zookeeper-server-start zookeeper.properties &
          echo '' > /etc/confluent/docker/ensure 
          /etc/confluent/docker/run 

while 1.15.3 creates:

args:
        - sh
        - '-c'
        - >-
          while [ ! -f /testcontainers_start.sh ]; do sleep 0.1; done;
          /testcontainers_start.sh

Port from container are not exposed by kubedock

Hello there,

when I'm running e.g. mongodb as a testcontainers image, port 27017 is not exposed by kubedock; kubectl says:

    Container ID:  containerd://f08607808925b030a57c604f02904ce8f74c02fd9fdf43fb317281c21f6f06e0
    Image:         mongo:4.4.10
    Image ID:      docker.io/library/mongo@sha256:2821997cba3c26465b59cc2e863b940d21a58732434462100af10659fc0d164f
    Port:          27017/TCP
    Host Port:     0/TCP
    Args:
      --replSet
      docker-rs

Testcontainers test suite reports log:

org.springframework.dao.DataAccessResourceFailureException: Timed out after 30000 ms while waiting for a server that matches WritableServerSelector. Client view of cluster state is {type=UNKNOWN, servers=[{address=docker:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused}}]; nested exception is com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches WritableServerSelector. Client view of cluster state is {type=UNKNOWN, servers=[{address=docker:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused}}]

docker is the container alias, port 2475 is exposed and docker API is available.

Ports of the target containers are not accessible from the container which uses kubedock. Does anyone know the reason for this?

Cheers,
W

Example for Mounting Files

Hi,

my colleagues and I are having trouble using TestContainers to mount files into our container.

Could you provide an example that demonstrates how kubedock can be used to successfully copy (or even mount) files to a container?

We tried using:

.withCopyFileToContainer(MountableFile.forClasspathResource("/ignite.xml"), "/conf/ignite.xml")

new GenericContainer(...)
        .withClasspathResourceMapping("redis.conf",
                                      "/etc/redis.conf",
                                      BindMode.READ_ONLY)

.withFileSystemBind("./src/test/local/data-grid/conf", "/conf")

but Ignite gives us a FileNotFoundException for /conf/ignite.xml (The config is needed for startup).

This is using kubedock-0.4.0 with Kubernetes 1.21.1

P.S. Thanks for creating kubedock! It's a great-looking solution for getting TestContainers to work nicely with Kubernetes.

kubedock high CPU usage if pods stuck in CrashLoopBackOff

Hi,

today I found this amazing project to close the gap between testcontainers and GitLab CI, which also runs on Kubernetes. Thanks for this awesome work!

Before I start to integrate kubedock, I tested it locally. While the initial tests were fine, running https://github.com/rieckpil/blog-tutorials/tree/master/spring-boot-integration-tests-testcontainers results in high CPU usage.

(screenshot: CPU usage graph)

Logs:

jkr@joe-nb ~ % ~/Downloads/kubedock server --port-forward
I1027 19:35:11.748402   84001 main.go:26] kubedock 0.7.0 (20211008-105904)
I1027 19:35:11.749336   84001 main.go:95] kubernetes config: namespace=vrp-testcontainers-kubernetes, initimage=joyrex2001/kubedock:0.7.0, ready timeout=1m0s
I1027 19:35:11.749668   84001 main.go:117] reaper started with max container age 1h0m0s
I1027 19:35:11.749770   84001 main.go:68] port-forwarding services to 127.0.0.1
I1027 19:35:11.749885   84001 main.go:100] default image pull policy: ifnotpresent
I1027 19:35:11.749926   84001 main.go:102] using namespace: vrp-testcontainers-kubernetes
I1027 19:35:11.750065   84001 main.go:35] api server started listening on :2475
[GIN] 2021/10/27 - 19:35:20 | 200 |     123.807µs |       127.0.0.1 | GET      "/info"
[GIN] 2021/10/27 - 19:35:20 | 200 |      29.519µs |       127.0.0.1 | GET      "/info"
[GIN] 2021/10/27 - 19:35:20 | 200 |      28.059µs |       127.0.0.1 | GET      "/version"
[GIN] 2021/10/27 - 19:35:20 | 200 |      75.252µs |       127.0.0.1 | GET      "/images/json"
[GIN] 2021/10/27 - 19:35:20 | 200 |        78.5µs |       127.0.0.1 | GET      "/images/jboss/keycloak:11.0.0/json"
[GIN] 2021/10/27 - 19:35:20 | 201 |     394.528µs |       127.0.0.1 | POST     "/containers/create"
[GIN] 2021/10/27 - 19:35:35 | 204 | 14.567361081s |       127.0.0.1 | POST     "/containers/815f8424ab54d5c94987109c3049e005dd620b1c2904f2f8aea3580ee2999369/start"
I1027 19:35:35.422243   84001 portforward.go:42] start port-forward 34468->8080
[GIN] 2021/10/27 - 19:35:35 | 200 |     120.223µs |       127.0.0.1 | GET      "/containers/815f8424ab54d5c94987109c3049e005dd620b1c2904f2f8aea3580ee2999369/json"
[GIN] 2021/10/27 - 19:35:35 | 200 |     124.327µs |       127.0.0.1 | GET      "/containers/815f8424ab54d5c94987109c3049e005dd620b1c2904f2f8aea3580ee2999369/json"
E1027 19:35:36.603462   84001 portforward.go:400] an error occurred forwarding 34468 -> 8080: error forwarding port 8080 to pod 97b9789ffbcc98354c10c90815c06897e999e1c69005fcfbec26c9e870795bb1, uid : exit status 1: 2021/10/27 19:35:36 socat[60751] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
E1027 19:35:36.676557   84001 portforward.go:400] an error occurred forwarding 34468 -> 8080: error forwarding port 8080 to pod 97b9789ffbcc98354c10c90815c06897e999e1c69005fcfbec26c9e870795bb1, uid : exit status 1: 2021/10/27 19:35:36 socat[60758] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
E1027 19:35:37.762316   84001 portforward.go:400] an error occurred forwarding 34468 -> 8080: error forwarding port 8080 to pod 97b9789ffbcc98354c10c90815c06897e999e1c69005fcfbec26c9e870795bb1, uid : exit status 1: 2021/10/27 19:35:37 socat[60924] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
E1027 19:35:37.832716   84001 portforward.go:400] an error occurred forwarding 34468 -> 8080: error forwarding port 8080 to pod 97b9789ffbcc98354c10c90815c06897e999e1c69005fcfbec26c9e870795bb1, uid : exit status 1: 2021/10/27 19:35:37 socat[60931] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
E1027 19:35:38.912798   84001 portforward.go:400] an error occurred forwarding 34468 -> 8080: error forwarding port 8080 to pod 97b9789ffbcc98354c10c90815c06897e999e1c69005fcfbec26c9e870795bb1, uid : exit status 1: 2021/10/27 19:35:38 socat[61039] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
E1027 19:35:38.987719   84001 portforward.go:400] an error occurred forwarding 34468 -> 8080: error forwarding port 8080 to pod 97b9789ffbcc98354c10c90815c06897e999e1c69005fcfbec26c9e870795bb1, uid : exit status 1: 2021/10/27 19:35:39 socat[61046] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
E1027 19:35:40.076636   84001 portforward.go:400] an error occurred forwarding 34468 -> 8080: error forwarding port 8080 to pod 97b9789ffbcc98354c10c90815c06897e999e1c69005fcfbec26c9e870795bb1, uid : exit status 1: 2021/10/27 19:35:40 socat[61095] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
E1027 19:35:40.158013   84001 portforward.go:400] an error occurred forwarding 34468 -> 8080: error forwarding port 8080 to pod 97b9789ffbcc98354c10c90815c06897e999e1c69005fcfbec26c9e870795bb1, uid : exit status 1: 2021/10/27 19:35:40 socat[61107] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
[GIN] 2021/10/27 - 19:35:53 | 201 |      84.187µs |       127.0.0.1 | POST     "/containers/815f8424ab54d5c94987109c3049e005dd620b1c2904f2f8aea3580ee2999369/exec"
[GIN] 2021/10/27 - 19:35:54 | 200 |  317.038459ms |       127.0.0.1 | POST     "/exec/808a7da1789efc9f5e8a0b8bdf5b8ca44843e0dddcaeed5ab7e0a331870c2029/start"
[GIN] 2021/10/27 - 19:35:54 | 200 |      68.388µs |       127.0.0.1 | GET      "/exec/808a7da1789efc9f5e8a0b8bdf5b8ca44843e0dddcaeed5ab7e0a331870c2029/json"
[GIN] 2021/10/27 - 19:35:54 | 200 |      83.432µs |       127.0.0.1 | GET      "/containers/815f8424ab54d5c94987109c3049e005dd620b1c2904f2f8aea3580ee2999369/json"
I1027 19:35:54.033603   84001 containers.go:217] ignoring signal
[GIN] 2021/10/27 - 19:35:54 | 204 |      42.978µs |       127.0.0.1 | POST     "/containers/815f8424ab54d5c94987109c3049e005dd620b1c2904f2f8aea3580ee2999369/kill"
[GIN] 2021/10/27 - 19:35:54 | 200 |      90.172µs |       127.0.0.1 | GET      "/containers/815f8424ab54d5c94987109c3049e005dd620b1c2904f2f8aea3580ee2999369/json"
[GIN] 2021/10/27 - 19:35:54 | 204 |  253.744279ms |       127.0.0.1 | DELETE   "/containers/815f8424ab54d5c94987109c3049e005dd620b1c2904f2f8aea3580ee2999369?v=true&force=true"
[GIN] 2021/10/27 - 19:35:54 | 200 |     105.337µs |       127.0.0.1 | GET      "/images/postgres:12/json"
[GIN] 2021/10/27 - 19:35:54 | 201 |     175.279µs |       127.0.0.1 | POST     "/containers/create"
I1027 19:36:19.621781   84001 portforward.go:42] start port-forward 47785->5432
[GIN] 2021/10/27 - 19:36:19 | 204 | 25.308206574s |       127.0.0.1 | POST     "/containers/a1f2506d65208460cd20342ff35c4db80eea8752700e1055f123a18148856935/start"
[GIN] 2021/10/27 - 19:36:19 | 200 |     129.701µs |       127.0.0.1 | GET      "/containers/a1f2506d65208460cd20342ff35c4db80eea8752700e1055f123a18148856935/json"
[GIN] 2021/10/27 - 19:36:19 | 200 |      81.498µs |       127.0.0.1 | GET      "/containers/a1f2506d65208460cd20342ff35c4db80eea8752700e1055f123a18148856935/json"
[GIN] 2021/10/27 - 19:37:19 | 200 |     317.497µs |       127.0.0.1 | GET      "/containers/a1f2506d65208460cd20342ff35c4db80eea8752700e1055f123a18148856935/json"
[GIN] 2021/10/27 - 19:37:19 | 200 |   74.017123ms |       127.0.0.1 | GET      "/containers/a1f2506d65208460cd20342ff35c4db80eea8752700e1055f123a18148856935/logs?stdout=true&stderr=true&since=0"
[GIN] 2021/10/27 - 19:37:19 | 200 |    58.57439ms |       127.0.0.1 | GET      "/containers/a1f2506d65208460cd20342ff35c4db80eea8752700e1055f123a18148856935/logs?stdout=true&stderr=true&since=0"
[GIN] 2021/10/27 - 19:37:19 | 201 |     190.866µs |       127.0.0.1 | POST     "/containers/create"
[GIN] 2021/10/27 - 19:37:23 | 204 |  3.277132909s |       127.0.0.1 | POST     "/containers/2d2f2db70a487e9b10f54bef0580359f9d69a39b99e0057a1ea4492a7e78a2e7/start"
I1027 19:37:23.216054   84001 portforward.go:42] start port-forward 62630->5432
[GIN] 2021/10/27 - 19:37:23 | 200 |      95.659µs |       127.0.0.1 | GET      "/containers/2d2f2db70a487e9b10f54bef0580359f9d69a39b99e0057a1ea4492a7e78a2e7/json"
[GIN] 2021/10/27 - 19:37:23 | 200 |      86.154µs |       127.0.0.1 | GET      "/containers/2d2f2db70a487e9b10f54bef0580359f9d69a39b99e0057a1ea4492a7e78a2e7/json"
[GIN] 2021/10/27 - 19:38:23 | 200 |     116.501µs |       127.0.0.1 | GET      "/containers/2d2f2db70a487e9b10f54bef0580359f9d69a39b99e0057a1ea4492a7e78a2e7/json"
[GIN] 2021/10/27 - 19:38:23 | 200 |   69.953111ms |       127.0.0.1 | GET      "/containers/2d2f2db70a487e9b10f54bef0580359f9d69a39b99e0057a1ea4492a7e78a2e7/logs?stdout=true&stderr=true&since=0"
[GIN] 2021/10/27 - 19:38:23 | 200 |   61.136457ms |       127.0.0.1 | GET      "/containers/2d2f2db70a487e9b10f54bef0580359f9d69a39b99e0057a1ea4492a7e78a2e7/logs?stdout=true&stderr=true&since=0"
[GIN] 2021/10/27 - 19:38:23 | 201 |     285.397µs |       127.0.0.1 | POST     "/containers/create"
[GIN] 2021/10/27 - 19:38:28 | 204 |  4.293655856s |       127.0.0.1 | POST     "/containers/169389ceb3c51085fc7b0c14a4f99972369a4fd039b98b8afe41da50b75830d6/start"
I1027 19:38:28.117308   84001 portforward.go:42] start port-forward 62426->5432
[GIN] 2021/10/27 - 19:38:28 | 200 |     203.466µs |       127.0.0.1 | GET      "/containers/169389ceb3c51085fc7b0c14a4f99972369a4fd039b98b8afe41da50b75830d6/json"
[GIN] 2021/10/27 - 19:38:28 | 200 |     164.063µs |       127.0.0.1 | GET      "/containers/169389ceb3c51085fc7b0c14a4f99972369a4fd039b98b8afe41da50b75830d6/json"
[GIN] 2021/10/27 - 19:39:28 | 200 |     146.367µs |       127.0.0.1 | GET      "/containers/169389ceb3c51085fc7b0c14a4f99972369a4fd039b98b8afe41da50b75830d6/json"
[GIN] 2021/10/27 - 19:39:28 | 200 |   66.033642ms |       127.0.0.1 | GET      "/containers/169389ceb3c51085fc7b0c14a4f99972369a4fd039b98b8afe41da50b75830d6/logs?stdout=true&stderr=true&since=0"
[GIN] 2021/10/27 - 19:39:28 | 200 |   62.444152ms |       127.0.0.1 | GET      "/containers/169389ceb3c51085fc7b0c14a4f99972369a4fd039b98b8afe41da50b75830d6/logs?stdout=true&stderr=true&since=0"

Kubedock: 0.7.0
OS: Mac OS
Kubernetes: Openshift 3.11

jkr@joe-nb ~ % kubectl get all -l kubedock=true
NAME                                READY   STATUS             RESTARTS   AGE
pod/169389ceb3c5-85446cf64b-66qpb   0/1     CrashLoopBackOff   5          3m
pod/2d2f2db70a48-65bdb6dbb7-67rj8   0/1     CrashLoopBackOff   5          4m
pod/a1f2506d6520-c5c57c8b6-x5b65    0/1     CrashLoopBackOff   5          5m

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)              AGE
service/kd-169389ceb3c5   ClusterIP   172.30.173.82   <none>        5432/TCP,62426/TCP   3m
service/kd-2d2f2db70a48   ClusterIP   172.30.227.70   <none>        5432/TCP,62630/TCP   4m
service/kd-a1f2506d6520   ClusterIP   172.30.71.160   <none>        5432/TCP,47785/TCP   5m

NAME                           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/169389ceb3c5   1         1         1            0           3m
deployment.apps/2d2f2db70a48   1         1         1            0           4m
deployment.apps/a1f2506d6520   1         1         1            0           5m

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/169389ceb3c5-85446cf64b   1         1         0       3m
replicaset.apps/2d2f2db70a48-65bdb6dbb7   1         1         0       4m
replicaset.apps/a1f2506d6520-c5c57c8b6    1         1         0       5m

The reason for this could be that the pods are in CrashLoopBackOff. The reason why the pods crash is known (file permission issues), but in such cases kubedock should not generate such a high load. The high load persists even after mvn clean verify has finished. Also, pressing CTRL+C takes some time to terminate the process.

I'm able to reproduce this behavior. If you tell me how, I can provide traces or profiling files. But before doing so, please make sure such profiling files do not contain sensitive information like the kube credentials.

Is `/images/load` API endpoint supported?

We are trying to use kubedock with Testcontainers for Java. I set it up and was able to connect, but we get 404s when trying to call the Docker API endpoint /images/load, and from the logs below and grepping through kubedock, it looks like that endpoint isn't implemented.

I read the kubedock docs, where it says

Kubedock implements the images API by tracking which images are requested. It is not able to actually build images.

But I couldn't find anything about whether or not kubedock is expected to be able to load already-built images, or if it is only able to refer to images stored in some Docker registry. Can you please confirm if this is a bug or user error that I am experiencing, or if this is expected behavior and a purposeful limitation of kubedock to keep it lightweight?

Logs:

I0804 18:10:25.899934       1 main.go:28] kubedock 0.11.0 (20230524-112404)
I0804 18:10:25.900722       1 main.go:105] kubernetes config: namespace=jenkins-ci, initimage=joyrex2001/kubedock:0.11.0, ready timeout=1m0s
I0804 18:10:25.901321       1 main.go:129] reaper started with max container age 1h0m0s
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)

I0804 18:10:25.901646       1 main.go:102] default image pull policy: ifnotpresent
I0804 18:10:25.901718       1 main.go:105] service account used in deployments: default
I0804 18:10:25.901782       1 main.go:107] using namespace: jenkins-ci
[GIN-debug] GET    /info                     --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func2 (6 handlers)
[GIN-debug] GET    /events                   --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func3 (6 handlers)
[GIN-debug] GET    /version                  --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func4 (6 handlers)
[GIN-debug] GET    /_ping                    --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func5 (6 handlers)
[GIN-debug] HEAD   /_ping                    --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func6 (6 handlers)
[GIN-debug] POST   /containers/create        --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func7 (6 handlers)
[GIN-debug] POST   /containers/:id/start     --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func8 (6 handlers)
[GIN-debug] POST   /containers/:id/attach    --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func9 (6 handlers)
[GIN-debug] POST   /containers/:id/stop      --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func10 (6 handlers)
[GIN-debug] POST   /containers/:id/restart   --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func11 (6 handlers)
[GIN-debug] POST   /containers/:id/kill      --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func12 (6 handlers)
[GIN-debug] POST   /containers/:id/wait      --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func13 (6 handlers)
[GIN-debug] POST   /containers/:id/rename    --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func14 (6 handlers)
[GIN-debug] POST   /containers/:id/resize    --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func15 (6 handlers)
[GIN-debug] DELETE /containers/:id           --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func16 (6 handlers)
[GIN-debug] GET    /containers/json          --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func17 (6 handlers)
[GIN-debug] GET    /containers/:id/json      --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func18 (6 handlers)
[GIN-debug] GET    /containers/:id/logs      --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func19 (6 handlers)
[GIN-debug] HEAD   /containers/:id/archive   --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func20 (6 handlers)
[GIN-debug] GET    /containers/:id/archive   --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func21 (6 handlers)
[GIN-debug] PUT    /containers/:id/archive   --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func22 (6 handlers)
[GIN-debug] POST   /containers/:id/exec      --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func23 (6 handlers)
[GIN-debug] POST   /exec/:id/start           --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func24 (6 handlers)
[GIN-debug] GET    /exec/:id/json            --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func25 (6 handlers)
[GIN-debug] POST   /networks/create          --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func26 (6 handlers)
[GIN-debug] POST   /networks/:id/connect     --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func27 (6 handlers)
[GIN-debug] POST   /networks/:id/disconnect  --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func28 (6 handlers)
[GIN-debug] GET    /networks                 --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func29 (6 handlers)
[GIN-debug] GET    /networks/:id             --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func30 (6 handlers)
[GIN-debug] DELETE /networks/:id             --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func31 (6 handlers)
[GIN-debug] POST   /networks/prune           --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func32 (6 handlers)
[GIN-debug] POST   /images/create            --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func33 (6 handlers)
[GIN-debug] GET    /images/json              --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func34 (6 handlers)
[GIN-debug] GET    /images/:image/*json      --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterDockerRoutes.func35 (6 handlers)
[GIN-debug] GET    /containers/:id/top       --> github.com/joyrex2001/kubedock/internal/server/httputil.NotImplemented (6 handlers)
[GIN-debug] GET    /containers/:id/changes   --> github.com/joyrex2001/kubedock/internal/server/httputil.NotImplemented (6 handlers)
[GIN-debug] GET    /containers/:id/export    --> github.com/joyrex2001/kubedock/internal/server/httputil.NotImplemented (6 handlers)
[GIN-debug] GET    /containers/:id/stats     --> github.com/joyrex2001/kubedock/internal/server/httputil.NotImplemented (6 handlers)
[GIN-debug] POST   /containers/:id/update    --> github.com/joyrex2001/kubedock/internal/server/httputil.NotImplemented (6 handlers)
[GIN-debug] POST   /containers/:id/pause     --> github.com/joyrex2001/kubedock/internal/server/httputil.NotImplemented (6 handlers)
[GIN-debug] POST   /containers/:id/unpause   --> github.com/joyrex2001/kubedock/internal/server/httputil.NotImplemented (6 handlers)
[GIN-debug] GET    /containers/:id/attach/ws --> github.com/joyrex2001/kubedock/internal/server/httputil.NotImplemented (6 handlers)
[GIN-debug] POST   /containers/prune         --> github.com/joyrex2001/kubedock/internal/server/httputil.NotImplemented (6 handlers)
[GIN-debug] POST   /build                    --> github.com/joyrex2001/kubedock/internal/server/httputil.NotImplemented (6 handlers)
[GIN-debug] GET    /volumes                  --> github.com/joyrex2001/kubedock/internal/server/httputil.NotImplemented (6 handlers)
[GIN-debug] GET    /volumes/:id              --> github.com/joyrex2001/kubedock/internal/server/httputil.NotImplemented (6 handlers)
[GIN-debug] DELETE /volumes/:id              --> github.com/joyrex2001/kubedock/internal/server/httputil.NotImplemented (6 handlers)
[GIN-debug] POST   /volumes/create           --> github.com/joyrex2001/kubedock/internal/server/httputil.NotImplemented (6 handlers)
[GIN-debug] POST   /volumes/prune            --> github.com/joyrex2001/kubedock/internal/server/httputil.NotImplemented (6 handlers)
[GIN-debug] GET    /libpod/version           --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func2 (7 handlers)
[GIN-debug] GET    /libpod/_ping             --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func3 (7 handlers)
[GIN-debug] HEAD   /libpod/_ping             --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func4 (7 handlers)
[GIN-debug] POST   /libpod/containers/create --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func5 (7 handlers)
[GIN-debug] POST   /libpod/containers/:id/start --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func6 (7 handlers)
[GIN-debug] GET    /libpod/containers/:id/exists --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func7 (7 handlers)
[GIN-debug] POST   /libpod/containers/:id/attach --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func8 (7 handlers)
[GIN-debug] POST   /libpod/containers/:id/stop --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func9 (7 handlers)
[GIN-debug] POST   /libpod/containers/:id/restart --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func10 (7 handlers)
[GIN-debug] POST   /libpod/containers/:id/kill --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func11 (7 handlers)
[GIN-debug] POST   /libpod/containers/:id/wait --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func12 (7 handlers)
[GIN-debug] POST   /libpod/containers/:id/rename --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func13 (7 handlers)
[GIN-debug] POST   /libpod/containers/:id/resize --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func14 (7 handlers)
[GIN-debug] DELETE /libpod/containers/:id    --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func15 (7 handlers)
[GIN-debug] GET    /libpod/containers/json   --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func16 (7 handlers)
[GIN-debug] GET    /libpod/containers/:id/json --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func17 (7 handlers)
[GIN-debug] GET    /libpod/containers/:id/logs --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func18 (7 handlers)
[GIN-debug] HEAD   /libpod/containers/:id/archive --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func19 (7 handlers)
[GIN-debug] GET    /libpod/containers/:id/archive --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func20 (7 handlers)
[GIN-debug] PUT    /libpod/containers/:id/archive --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func21 (7 handlers)
[GIN-debug] POST   /libpod/containers/:id/exec --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func22 (7 handlers)
[GIN-debug] POST   /libpod/exec/:id/start    --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func23 (7 handlers)
[GIN-debug] GET    /libpod/exec/:id/json     --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func24 (7 handlers)
[GIN-debug] POST   /libpod/images/pull       --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func25 (7 handlers)
[GIN-debug] GET    /libpod/images/json       --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func26 (7 handlers)
[GIN-debug] GET    /libpod/images/:image/*json --> github.com/joyrex2001/kubedock/internal/server/routes.RegisterLibpodRoutes.func27 (7 handlers)
[GIN-debug] GET    /libpod/info              --> github.com/joyrex2001/kubedock/internal/server/httputil.NotImplemented (7 handlers)
[GIN-debug] POST   /libpod/images/build      --> github.com/joyrex2001/kubedock/internal/server/httputil.NotImplemented (7 handlers)
I0804 18:10:25.902290       1 main.go:37] api server started listening on :2475
[GIN-debug] Listening and serving HTTP on :2475
[GIN] 2023/08/04 - 18:10:36 | 200 |      26.191µs |       127.0.0.1 | HEAD     "/_ping"
[GIN] 2023/08/04 - 18:10:53 | 404 | 16.815218898s |       127.0.0.1 | POST     "/images/load?quiet=1"
[GIN] 2023/08/04 - 18:10:53 | 200 |       16.67µs |       127.0.0.1 | HEAD     "/_ping"

When running against kubedock, testcontainers tests need to factor pull time into the start timeout

testcontainers will pull the docker image prior to starting the container. This means that when running against docker, test authors need not worry about pull times when specifying the timeout for starting the container.

In the context of kubedock though, pulling an image before creating a deployment/job of course makes no sense, so the pull duration ends up ticking during the start timeout.

Not sure if there is anything that can be done here without it becoming too complex, but I wanted to raise the issue nonetheless.

The only idea I have is to implement the image pull via a daemonset that pulls the image on all nodes. This is not guaranteed to work if there is a scale-up at the time of pod creation, but that is probably a niche case. It will however be quite wasteful, as we will have to pull the image on every single node and spin up n pods for it.

Another option is to use inter-pod affinity. When pulling the image, we start a deployment/job/pod with that image that just sleeps forever. Then we schedule the actual workload we care about, configuring it to collocate with the first one. Once it starts, we can clean up the one we used for the image pull. I've never used inter-pod affinities, so I'm not sure what the implications are here, or whether it will work at all.
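
A stopgap on the test side is to stretch the wait strategy's startup timeout so it also covers the image pull. A minimal sketch with testcontainers-go (the image, port and 5-minute value are just illustrative assumptions; testcontainers-java has an equivalent `withStartupTimeout`):

    package example

    import (
        "context"
        "time"

        "github.com/testcontainers/testcontainers-go"
        "github.com/testcontainers/testcontainers-go/wait"
    )

    // startPostgres starts a postgres container with a startup timeout generous
    // enough to also absorb the kubelet's image pull time.
    func startPostgres(ctx context.Context) (testcontainers.Container, error) {
        req := testcontainers.ContainerRequest{
            Image:        "postgres:12",
            ExposedPorts: []string{"5432/tcp"},
            WaitingFor:   wait.ForListeningPort("5432/tcp").WithStartupTimeout(5 * time.Minute),
        }
        return testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
            ContainerRequest: req,
            Started:          true,
        })
    }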

Kubedock and concurrency

Hi there,

I have a question about concurrency.

We are using kubedock as a standalone service running in Kubernetes. I have managed to get things pretty stable; however, some strange things happen when I try to kick off several builds (which talk to kubedock) at the same time, e.g. multiple copies of the same service being spun up. I have not really dug into this yet, so I am unsure of the cause.

Can kubedock be used in this way? Or is it only intended to be used by one process at a time?

Thanks

Testcontainers failure with 500 error using Quarkus with Kafka

I'm running into an issue with Testcontainers with Quarkus using the smallrye kafka library. The issue seems to be a 500 error that gets returned when the library attempts to start up the container, so it's outside of my control. To reproduce, I'm using the Quarkus quickstart that can be found here: kafka-quickstart. I've tried starting kubedock with both --port-forward and --reverse-proxy. For the Java side I'm running the following:

export TESTCONTAINERS_RYUK_DISABLED=true
export TESTCONTAINERS_CHECKS_DISABLE=true
export DOCKER_HOST=tcp://127.0.0.1:2475
./mvnw test 

The specific error I'm seeing is:

I0801 15:51:32.355254    9237 copy.go:36] copy 3072 bytes to 2be7a7670a6c:/
E0801 15:51:32.514282    9237 v2.go:167] io: read/write on closed pipe
E0801 15:51:32.515727    9237 util.go:18] error during request[500]: command terminated with exit code 2
[GIN] 2023/08/01 - 15:51:32 | 500 |  165.357468ms |       127.0.0.1 | PUT      "/containers/2be7a7670a6c45a7035f976f9114d52f9c946974c579329de976d57f2322b482/archive?noOverwriteDirNonDir=false&path=%2F&copyUIDGID=false"

and on the java side:

2023-08-01 15:51:29,035 INFO  [tc.doc.io/.3.4] (build-17) Creating container for image: docker.io/vectorized/redpanda:v22.3.4
2023-08-01 15:51:29,040 INFO  [org.tes.uti.RegistryAuthLocator] (build-17) Failure when attempting to lookup auth config. Please ignore if you don't have images in an authenticated registry. Details: (dockerImageName: docker.io/vectorized/redpanda:v22.3.4, configFile: /home/user/.docker/config.json, configEnv: DOCKER_AUTH_CONFIG). Falling back to docker-java default behaviour. Exception message: Status 404: No config supplied. Checked in order: /home/user/.docker/config.json (file not found), DOCKER_AUTH_CONFIG (not set)
2023-08-01 15:51:29,133 INFO  [tc.doc.io/.3.4] (build-17) Container docker.io/vectorized/redpanda:v22.3.4 is starting: 2be7a7670a6c45a7035f976f9114d52f9c946974c579329de976d57f2322b482
2023-08-01 15:51:32,331 WARN  [tc.doc.io/.3.4] (build-17) The architecture 'null' for image 'docker.io/vectorized/redpanda:v22.3.4' (ID docker.io/vectorized/redpanda:v22.3.4) does not match the Docker server architecture 'amd64'. This will cause the container to execute much more slowly due to emulation and may lead to timeout failures.
2023-08-01 15:51:32,517 ERROR [tc.doc.io/.3.4] (build-17) Could not start container: com.github.dockerjava.api.exception.InternalServerErrorException: Status 500: {"message":"command terminated with exit code 2"}
        at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.execute(DefaultInvocationBuilder.java:247)
        at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.put(DefaultInvocationBuilder.java:223)
        at org.testcontainers.shaded.com.github.dockerjava.core.exec.CopyArchiveToContainerCmdExec.execute(CopyArchiveToContainerCmdExec.java:34)
        at org.testcontainers.shaded.com.github.dockerjava.core.exec.CopyArchiveToContainerCmdExec.execute(CopyArchiveToContainerCmdExec.java:13)
        at org.testcontainers.shaded.com.github.dockerjava.core.exec.AbstrSyncDockerCmdExec.exec(AbstrSyncDockerCmdExec.java:21)
        at org.testcontainers.shaded.com.github.dockerjava.core.command.AbstrDockerCmd.exec(AbstrDockerCmd.java:35)
        at org.testcontainers.shaded.com.github.dockerjava.core.command.CopyArchiveToContainerCmdImpl.exec(CopyArchiveToContainerCmdImpl.java:167)
        at org.testcontainers.containers.ContainerState.copyFileToContainer(ContainerState.java:313)
        at io.quarkus.kafka.client.deployment.RedPandaKafkaContainer.containerIsStarting(RedPandaKafkaContainer.java:67)
        at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:479)
        at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:344)
        at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
        at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:334)
        at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:322)
        at io.quarkus.kafka.client.deployment.DevServicesKafkaProcessor.lambda$startKafka$5(DevServicesKafkaProcessor.java:237)
        at java.base/java.util.Optional.orElseGet(Optional.java:369)
        at io.quarkus.kafka.client.deployment.DevServicesKafkaProcessor.startKafka(DevServicesKafkaProcessor.java:285)
        at io.quarkus.kafka.client.deployment.DevServicesKafkaProcessor.startKafkaDevService(DevServicesKafkaProcessor.java:95)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at io.quarkus.deployment.ExtensionLoader$3.execute(ExtensionLoader.java:864)
        at io.quarkus.builder.BuildContext.run(BuildContext.java:282)
        at org.jboss.threads.ContextHandler$1.runWith(ContextHandler.java:18)
        at org.jboss.threads.EnhancedQueueExecutor$Task.run(EnhancedQueueExecutor.java:2513)
        at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1538)
        at java.base/java.lang.Thread.run(Thread.java:829)
        at org.jboss.threads.JBossThread.run(JBossThread.java:501)

I have tried this same setup in my local environment with docker and the tests run fine.

Kubernetes Service not created

The Kubernetes Service is missing when running a Testcontainer (e.g. PostgreSQL) with Kubedock. The container does expose the port properly, but there is no Service resource created. I tested this with Kubedock v0.14, v0.13 and v0.12 and was not able to get this running. The last version that did work for me is v0.10.

Adding --inspector option also does not help.

Any hints are much appreciated. Many thanks!

Kubedock sometimes fails to establish reverse proxy

Hi Vincent,

I have noticed that Kubedock often fails to set up a reverse proxy because the local port that it is trying to bind to is already in use, e.g.

E0310 15:02:13.723054       1 deploy.go:206] error setting up port-forward: listen tcp 0.0.0.0:39002: bind: address already in use
I0310 15:02:13.722912       1 deploy.go:192] reverse proxy for 39002 to 8080
I0310 15:02:13.722937       1 tcpproxy.go:33] start reverse-proxy 0.0.0.0:39002->172.20.10.225:8080

(I notice that in the version of Kubedock we're using, the log text is slightly wrong. It says port-forward but it was actually trying to create a reverse proxy -- this seems to be fixed in the version on master)

From looking at the code, I think Kubedock is picking a port at random, and I don't see anywhere that it checks whether the port is actually free. If that is indeed the case, perhaps it would be good to catch the error and retry with a different port when a bind: address already in use error occurs (see the sketch below). What do you think?
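
As an illustration of that idea (purely a sketch, not how kubedock currently works), letting the kernel hand out a free port avoids most collisions, and retrying on bind errors would cover the remaining race:

    package example

    import "net"

    // freeLocalPort asks the OS for a port that is free at the time of the call
    // by binding to port 0 and reading back the assigned port.
    func freeLocalPort() (int, error) {
        ln, err := net.Listen("tcp", "0.0.0.0:0")
        if err != nil {
            return 0, err
        }
        defer ln.Close()
        return ln.Addr().(*net.TCPAddr).Port, nil
    }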

Thanks

ConfigMaps are fetched even when no option requires them

Hi @joyrex2001 ,

According to the minimal RBAC provided in the README.md, it seems no ConfigMap calls should be made by default.

But when running the image I get this kind of errors:

E0808 17:00:59.867275 1 main.go:83] error cleaning k8s containers: configmaps is forbidden: User "system:serviceaccount:XXXXX:YYYYYY" cannot list resource "configmaps" in API group "" in the namespace "XXXXX"

Should I add this rule too, or should kubedock change this behaviour?

Thank you,

`error during exec: unable to upgrade connection: you must specify at least 1 of stdin, stdout, stderr`

Hi @joyrex2001 ,

I'm trying to get kubedock working in my cluster, but I run into this issue:

E0808 17:00:34.188424 1 exec.go:105] error during exec: unable to upgrade connection: you must specify at least 1 of stdin, stdout, stderr

I did give pods/exec to the service account, but it seems to be something else. Did you succeed in making the testcontainers wait strategies work with exec?

I only found a little information about this error on forums... not very helpful 😞

Thank you,

Full logs:

I0808 16:58:59.810173 1 main.go:28] kubedock 0.8.2-13-gf3c81fc (20220802-143408)
I0808 16:58:59.811125 1 main.go:105] kubernetes config: namespace=XXXXXXXXXX, initimage=joyrex2001/kubedock:0.8.2, ready timeout=1m0s
I0808 16:58:59.811939 1 main.go:129] reaper started with max container age 1h0m0s
I0808 16:58:59.812324 1 main.go:106] default image pull policy: ifnotpresent
I0808 16:58:59.812395 1 main.go:108] using namespace: XXXXXXXXXX
I0808 16:58:59.812616 1 main.go:36] api server started listening on :2475
[GIN] 2022/08/08 - 17:00:30 | 200 |     43.003µs |       127.0.0.1 | HEAD     "/_ping"
[GIN] 2022/08/08 - 17:00:30 | 200 |     13.852µs |       127.0.0.1 | HEAD     "/_ping"
[GIN] 2022/08/08 - 17:00:30 | 200 |    132.479µs |       127.0.0.1 | GET      "/networks"
[GIN] 2022/08/08 - 17:00:30 | 200 |    119.406µs |       127.0.0.1 | GET      "/images/postgres:14-alpine/json"
[GIN] 2022/08/08 - 17:00:30 | 201 |     501.03µs |       127.0.0.1 | POST     "/containers/create"
W0808 17:00:30.962715 1 container.go:161] user not set, will run as user defined in image
[GIN] 2022/08/08 - 17:00:34 | 204 | 3.040557031s |       127.0.0.1 | POST     "/containers/6597307e6e6e1f5512f978f6b8620e163315e92d09776791acf75d8eeb8ed3c1/start"
[GIN] 2022/08/08 - 17:00:34 | 201 |    203.883µs |       127.0.0.1 | POST     "/containers/6597307e6e6e1f5512f978f6b8620e163315e92d09776791acf75d8eeb8ed3c1/exec"
E0808 17:00:34.188424 1 exec.go:105] error during exec: unable to upgrade connection: you must specify at least 1 of stdin, stdout, stderr
[GIN] 2022/08/08 - 17:00:34 | 200 |   80.86269ms |       127.0.0.1 | POST     "/exec/6b88f931bb91aface88d07e2c4ccef3d28d6b232c91cce6475517622ef80a471/start"
[GIN] 2022/08/08 - 17:00:34 | 200 |    125.707µs |       127.0.0.1 | GET      "/exec/6b88f931bb91aface88d07e2c4ccef3d28d6b232c91cce6475517622ef80a471/json"
[GIN] 2022/08/08 - 17:00:34 | 200 |    258.235µs |       127.0.0.1 | GET      "/containers/6597307e6e6e1f5512f978f6b8620e163315e92d09776791acf75d8eeb8ed3c1/json"
[GIN] 2022/08/08 - 17:00:34 | 200 |    155.822µs |       127.0.0.1 | GET      "/containers/6597307e6e6e1f5512f978f6b8620e163315e92d09776791acf75d8eeb8ed3c1/json"

-----------
EDIT:
I traced the whole path through your code:

RemoteCmd <- ExecContainer <- ExecStart (retrieving the exec params from `memdb`)

and

ContainerExec (receiving them over HTTP and storing them in `memdb`).

From what I understand, my https://github.com/testcontainers/testcontainers-go client should somehow set the stderr/stdout parameters to "true" (as specified by the decoded JSON struct

Stdout bool `json:"AttachStdout"`
Stderr bool `json:"AttachStderr"`

)... but so far I haven't found how to set them :/

Is there any reason why they are not attached by default if that is what Kubernetes requires (since we cannot set TTY/stdin in your tool)?

Issue parsing environment variables where the value contains an `=` (equals) character

Hi there,

I have come across an issue while trying to set an environment variable that contains an equals character, e.g.

Error from kubedock log:

E1004 12:32:00.263399       1 container.go:74] could not parse env SOME_BASE_64_ENCODED_ENV_VARIABLE=MIIJKAIB...JsXVU2syw3EZ7Y=

It seems that in container.go kubedock decides whether an env variable is valid by requiring exactly one equals character:

	for _, e := range co.Env {
		f := strings.Split(e, "=")
		if len(f) != 2 {
			klog.Errorf("could not parse env %s", e)
			continue
		}
		env = append(env, corev1.EnvVar{Name: f[0], Value: f[1]})
	}

I wonder if splitting only on the first equals sign we encounter would work (see the sketch below)?
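
A sketch of that change (untested against the kubedock code base), using `strings.SplitN` so only the first `=` separates the name from the value and values containing `=` parse fine:

	// sketch: split on the first '=' only, so the value may itself contain '='
	for _, e := range co.Env {
		f := strings.SplitN(e, "=", 2)
		if len(f) != 2 {
			klog.Errorf("could not parse env %s", e)
			continue
		}
		env = append(env, corev1.EnvVar{Name: f[0], Value: f[1]})
	}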

Consider setting custom labels on managed resources

Custom labels are translated to annotations on the managed resources (e.g. deployments, services). I wonder why labels are not also set on the managed resources?

Some platforms distinguish between annotations and labels on resources. I think it would make sense to set both on kubedock-managed resources.

Waiting for running container times out

The following simple python script fails if running against kubedock, but works against docker:

    import docker

    _DOCKER_CLIENT_TIMEOUT = 120  # value assumed for this example; defined elsewhere in the original script

    client = docker.from_env(timeout=_DOCKER_CLIENT_TIMEOUT)

    container = client.containers.run(
        "busybox",
        entrypoint="echo",
        command="hey",
        detach=True,
        stdout=True,
        stderr=True,
        tty=False,
        labels={
            "com.joyrex2001.kubedock.deploy-as-job": "true"
        }
    )
    container.wait(timeout=_DOCKER_CLIENT_TIMEOUT)

    print(container.logs(stdout=True, stderr=True, tail=100))

I can see the job starting and running successfully, however container.wait(timeout=_DOCKER_CLIENT_TIMEOUT) times out even though the pod has finished.

Podman support

Problem Description

When trying to run podman on a host where I have started kubedock I get the following error

$ podman --remote --url "tcp://127.0.0.1:2475" run --name httpd -d -p 8080:8080 python python -m http.server 8080
Cannot connect to Podman. Please verify your connection to the Linux system using `podman system connection list`, or try `podman machine init` and `podman machine start` to manage a new Linux VM
Error: unable to connect to Podman socket: ping response was 404

And from kubedock logs

(screenshot of kubedock logs)

How to reproduce it

The issue can be reproduced on OpenShift Developer Sandbox (a Red Hat Developer account is required but it's free) using the following link: https://workspaces.openshift.com/f?url=https://github.com/l0rd/micronaut-sample

Video recording

2023-04-18.23.59.20.mp4

Is it possible for kubedock to apply annotations?

Hi @joyrex2001,

Is there any way to get kubedock to apply annotations to pods when it starts them?

I can see there's a function that mentions annotations, but I can't see how to configure them.

// getAnnotations will return a map of annotations to be added to the
// container. This map contains the labels as specified in the container
// definition.
func (in *instance) getAnnotations(annotations map[string]string, tainr *types.Container) map[string]string {
	if annotations == nil {
		annotations = map[string]string{}
	}
	for k, v := range tainr.Labels {
		annotations[k] = v
	}
	annotations["kubedock.containername"] = tainr.Name
	return annotations
}
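
From reading that function, it looks like container labels are copied into the annotations of the pod, so perhaps setting labels on the container is already enough to get annotations applied? A sketch with testcontainers-go (untested against kubedock; the label key/value are made up):

    package example

    import (
        "context"

        "github.com/testcontainers/testcontainers-go"
    )

    // startWithLabels sets labels on the container request; based on the
    // getAnnotations function above, these appear to end up as pod annotations.
    func startWithLabels(ctx context.Context) (testcontainers.Container, error) {
        req := testcontainers.ContainerRequest{
            Image:  "nginx:alpine",
            Labels: map[string]string{"example.org/owner": "my-team"}, // hypothetical annotation-like label
        }
        return testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
            ContainerRequest: req,
            Started:          true,
        })
    }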

Thanks!

404 when starting a container

Running a container built from the following Dockerfile:

            FROM alpine
            RUN echo $RANDOM >> /tmp/test.txt
            CMD cat /tmp/test.txt && echo "DONE" && sleep 28800

I get a 404 when calling container.start. When I look at the pods inside k8s, everything looks good, so I think this is a bug inside kubedock.

In the logs I get the following, but I'm not sure if it is a red herring:

[GIN-debug] [WARNING] Headers were already written. Wanted to override status code 404 with 204

I am going to try and figure out a small runnable example to demonstrate this.
