
devpod-provider-kubernetes's People

Contributors

89luca89, cameronraysmith, dependabot[bot], fabiankramm, pascalbreuninger, pbialon, sanmai-nl, solomonakinyemi, tulequ


devpod-provider-kubernetes's Issues

Feature request: Support docker-compose

When using dev containers with GitHub Codespaces, it is possible to reference a docker-compose file in the devcontainer.json:

{
    "name": "C# (.NET) and PostgreSQL",
    "dockerComposeFile": "docker-compose.yml",
    "service": "app",
    "workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}",
   // ...
}

Full example: https://github.com/devcontainers/templates/blob/main/src/dotnet-postgres/.devcontainer/docker-compose.yml

This brings up one pod running the PostgreSQL database and a second one you can use to develop the application. The pods are linked so that the database can be reached from the dev container.
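For illustration, a minimal two-service compose file for such a setup might look like this (service and image names here are assumptions, not copied from the linked template):

```yaml
# Illustrative docker-compose.yml: one service for the dev container, one for the database
services:
  app:
    image: mcr.microsoft.com/devcontainers/dotnet:8.0
    volumes:
      - ..:/workspaces:cached
    command: sleep infinity
  db:
    image: postgres:latest
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: postgres
```

Supporting this in the provider would mean translating each compose service into a container (or pod) on the cluster, which is what the feature request is about.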

Right now this is not possible with the Kubernetes provider:

[12:28:25] info devcontainer up: find docker compose: docker compose is not supported by this provider, please choose a different one

Is that something which is planned for the future?

Control image pull policy

I didn't find a way to specify the image pull policy. Starting a dev container fails for me on a workstation-local Kubernetes deployment, presumably because the image exists only locally and the manifest lookup goes to Docker Hub:

GET https://index.docker.io/v2/library/mylocalimage/manifests/development: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/mylocalimage Type:repository]]
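For a locally built image that exists only in the node's container runtime, the pod spec would need something along these lines (a sketch; the provider currently exposes no option for it, and the container name is illustrative):

```yaml
# Illustrative container spec fragment: skip the registry pull for a local-only image
containers:
  - name: devpod
    image: mylocalimage:development
    imagePullPolicy: Never   # or IfNotPresent
```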

Feature request: Support tolerations in the Kubernetes provider

Currently the Kubernetes provider supports a node selector. A node selector alone cannot place pods on nodes that have been tainted for a specific use case or capability; for example, more expensive GPU nodes are often tainted.

Request: add support for defining tolerations in the Kubernetes provider.
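For illustration, a toleration matching a hypothetical gpu=true:NoSchedule taint would look like this in the pod spec:

```yaml
# Illustrative toleration for nodes tainted with: kubectl taint nodes <node> gpu=true:NoSchedule
tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
```

Combined with the existing node selector, this would let workspaces land on tainted GPU nodes.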

Target architecture detection does not work with sidecars present

Hey there,

we have a cluster with LinkerD activated for namespaces.

We are stuck creating a workspace with this output:

[16:16:54] info Workspace example already exists
[16:16:54] debug Acquire workspace lock...
[16:16:54] debug Acquired workspace lock...
[16:16:54] info Creating devcontainer...
[16:16:54] debug Inject and run command: '/private/var/folders/fk/gnqthmjs5fjc6gkdqd50wr580000gp/T/AppTranslocation/C6DBB645-F78A-46D3-A4CF-A72A2823FE57/d/DevPod.app/Contents/MacOS/devpod-cli' agent workspace up --workspace-info '<redacted>' --debug
[16:16:54] debug Execute command locally
[16:16:54] info Use /Users/patst/.devpod/agent/contexts/default/workspaces/example as workspace dir
[16:16:54] debug Created logger
[16:16:54] debug Received ping from agent
[16:16:54] debug Workspace Folder already exists
[16:16:54] debug Use Kubernetes Namespace 'acc000-example'
[16:16:54] debug Use Kubernetes Context 'aks-1'
[16:16:54] debug Run command: kubectl --namespace acc000-example --context aks-1 get pvc devpod-example-default-3bc0e --ignore-not-found -o json
[16:16:55] info Find out cluster architecture...
[16:16:55] debug Run command: kubectl --namespace acc000-example --context aks-1 run -i devpod-zontts -q --pod-running-timeout=10m0s --rm --restart=Never --image busybox -- sh
[16:26:57] info find out cluster architecture:  error: timed out waiting for the condition
[16:26:57] info exit status 1
[16:26:57] info build image
[16:26:57] info github.com/loft-sh/devpod/pkg/devcontainer.(*Runner).runSingleContainer
[16:26:57] info /Users/runner/work/devpod/devpod/pkg/devcontainer/single.go:51
[16:26:57] info github.com/loft-sh/devpod/pkg/devcontainer.(*Runner).Up

I think the issue is that the detection only works if exactly one container is present. (see

err := k.runCommand(ctx, []string{"create", "ns", k.namespace}, nil, buf, buf)
)

If I run the command locally without -q, the error is shown:

kubectl --namespace acc000-example --context aks-1 run -i devpod-zontts  --pod-running-timeout=10m0s --rm --restart=Never --image busybox -- sh 
Defaulted container "linkerd-proxy" out of: linkerd-proxy, devpod-zontts2, linkerd-init (init)
If you don't see a command prompt, try pressing enter.

Would it be possible to make the check more robust, or to provide a config flag that sets the architecture and skips the detection entirely?

Error running DevPod against the single-node Kubernetes cluster of Docker Desktop (Mac M1 with Docker Desktop and DevPod)

[15:15:28] info Workspace vscode-remote-try-python already exists
[15:15:28] debug Acquire workspace lock...
[15:15:28] debug Acquired workspace lock...
[15:15:28] info Creating devcontainer...
[15:15:28] debug Inject and run command: '/Applications/DevPod.app/Contents/MacOS/devpod-cli' helper ssh-server --stdio --debug
[15:15:28] debug Attempting to create SSH client
[15:15:28] debug Execute command locally
[15:15:28] debug SSH client created
[15:15:28] debug SSH session created
[15:15:28] info Execute SSH server command: bash -c '/Applications/DevPod.app/Contents/MacOS/devpod-cli' agent workspace up --workspace-info 'H4sIAAAAAAAA/9RWTW/buhL9Kw/EW9qWRH1r58psatSxDUkO3msRGBQ5jHUjS4JIOQkC//cLWm7jtnHiuwruVjpnhmfOcDjP6KFu72VDGSza4q6oUISMlYRWGqrePnbGiMOuqbnB6krBo5IGB0G7Uhk/edLYSVZzGLawrRUMVfs0bJ7Upq5eMKO/ZF2hwUsyFD2jgqMIneOiAep+AwyD0A09NEBNwVTXAorQRqlGRoZRN1DdtbTZjO4KtelyKiUoOWL11mC5D4EfcI7t3GMu9n1KQ2zbFKzQ8SxwHZtaYWDauRd6vpOblhnajhVy7lDBbdfYFqytZS3UWZ36SG29Kzi0WldFt/ps910ObQUKJBqgulFFXUn9ezofx9n0Zpr9f51Nr8lilemvnYR22QfhKFJtB/sB+rr6RJI5yUi6jhfzz9MrjdzRstPxje7UJp1NmySKO12696Nl5H8XJZ6Pr0m6HMfkNPf2aahV9la+mm6/H6AtZZui0l7vB6jgcFKcvpRoP0Cy7tq+H+4KlUBTy0LV7dOJub2lBzMv8GI/QKwFqsudFVuQim4bFCFsYmdo4qFpZVYQ2UFk29/QAJVUqpXUpz5FYDOy3AgHGnFsfBShY+frDPQOKqXPXNaMlihCWrNuA6o22ppx05QFOxxCGhPYLWs+ok1jxDpYpaRxTdkiNfq7NWRlgQaI1w9VWVO+Smaviy9roYZyc2QZLZRAJUjjB9HYmSN3ZKIBgkdgfdXzoqJtAYfGOzF1mSxuphOSoOj7M6olilBZVN0jGiDaMq2AbrnnaPUbYPey26II+S7YYFNsMdNzQy9wOIacBww86grXExAE2PaZC9S0feyyMOBhiGnAgVk8B/xSnne1DX9cp+HLJXpdrzXCb3CGB1XDXsx+cEZqu/1dKnMtlwdgCeyZJkDugcdCJoSHcwdjywIcCOzblm26ue/4ImemnimBEIHrcko/UOpBzE+pnLYPRfWmrQE2mWeZpsV9n1seY6HPuedzGtjY5B63mO16vpu7du55Ls8dP8BgBbntUywCyD9Cay/rd1//EPuKsSbGHCh2/BAotZjpY6yd5hbP7dB0vNy3XDcXoc8YzwNqCshDIcwcm+BxR3yk2F+dfSgqXj/IN63FOWVCMJ/hnOUOw45rWkIw27U4pVx4AuzAdWhgcc8LhR/aFGzfcxwRiNwKqfkRao+6em9H8Ahof7vXw5HdQ1uCPEwyXkial4epLWgpD+8Ib4udfn8R66Sqt+gHp5+Dx4/RMxJFxSew05OYFpX+/x399/mV2bj/j4aiW/0EbLf0YtYRrYmKtnegxi3bFAqOO8tZXg8e0lP07QC1XXVh4rarNEEq2qoLKQdsT6qbizl1oykcSlBwIakHo1u9FrCyWLysQ4e94GQfONmU4tkqzUiyThYzcnZViRMyzsjra8rxVX6VN5mmX9fp9NsvBMu8Ks4SFvFXksxImq4n03T8aUYmp9S+Ed/nTq/HV+fVfCGzJUkuAyUkXaySmKRngf9w1Yyz2Xo5zr6cytJXk6nysp3yX7Wh/hJjuZrN1imJE5KlazL/w923Omk2/kRm502YLyZknZIZibNFcha1XEzW1+P59DNJs3VGrpezcXa+A5Y38Xocx7qfrheT87j3WyQlyc00JjraYjU/X+80WyTjK7KOZ+P0XLT9/m8AAAD//wEAAP//oDUimU8OAAA=' --debug
[15:15:28] info Use /Users/tomxu/.devpod/agent/contexts/default/workspaces/vscode-remote-try-python as workspace dir
[15:15:28] debug Created logger
[15:15:28] debug Received ping from agent
[15:15:28] debug Workspace Folder already exists /Users/tomxu/.devpod/agent/contexts/default/workspaces/vscode-remote-try-python/content
[15:15:28] debug Workspace exists, skip downloading
[15:15:28] debug Run findDevContainer driver command: ${KUBERNETES_PROVIDER} find
[15:15:28] info Use Kubernetes Namespace 'my-namespace'
[15:15:28] info Use Kubernetes Config '/users/tomxu/.kube/config'
[15:15:28] info Run command: kubectl --namespace my-namespace --kubeconfig /users/tomxu/.kube/config get pvc devpod-vscode-rem-89596 --ignore-not-found -o json
[15:15:28] info error getting credentials - err: exec: "docker-credential-desktop": executable file not found in $PATH, out: ``
[15:15:28] info retrieve image mcr.microsoft.com/devcontainers/python:0-3.11
[15:15:28] info github.com/loft-sh/devpod/pkg/image.GetImage
[15:15:28] info /Users/runner/work/devpod/devpod/pkg/image/image.go:21
[15:15:28] info github.com/loft-sh/devpod/pkg/image.GetImageConfig
[15:15:28] info /Users/runner/work/devpod/devpod/pkg/image/image.go:42
[15:15:28] info github.com/loft-sh/devpod/pkg/devcontainer.(*runner).inspectImage
[15:15:28] info /Users/runner/work/devpod/devpod/pkg/devcontainer/inspect.go:18
[15:15:28] info github.com/loft-sh/devpod/pkg/devcontainer.(*runner).getImageBuildInfoFromImage
[15:15:28] info /Users/runner/work/devpod/devpod/pkg/devcontainer/build.go:199
[15:15:28] info github.com/loft-sh/devpod/pkg/devcontainer.(*runner).extendImage
[15:15:28] info /Users/runner/work/devpod/devpod/pkg/devcontainer/build.go:111
[15:15:28] info github.com/loft-sh/devpod/pkg/devcontainer.(*runner).build
[15:15:28] info /Users/runner/work/devpod/devpod/pkg/devcontainer/build.go:101
[15:15:28] info github.com/loft-sh/devpod/pkg/devcontainer.(*runner).runSingleContainer
[15:15:28] info /Users/runner/work/devpod/devpod/pkg/devcontainer/single.go:56
[15:15:28] info github.com/loft-sh/devpod/pkg/devcontainer.(*runner).Up
[15:15:28] info /Users/runner/work/devpod/devpod/pkg/devcontainer/run.go:120
[15:15:28] info github.com/loft-sh/devpod/cmd/agent/workspace.(*UpCmd).devPodUp
[15:15:28] info /Users/runner/work/devpod/devpod/cmd/agent/workspace/up.go:380
[15:15:28] info github.com/loft-sh/devpod/cmd/agent/workspace.(*UpCmd).up
[15:15:28] info /Users/runner/work/devpod/devpod/cmd/agent/workspace/up.go:160
[15:15:28] info github.com/loft-sh/devpod/cmd/agent/workspace.(*UpCmd).Run
[15:15:28] info /Users/runner/work/devpod/devpod/cmd/agent/workspace/up.go:94
[15:15:28] info github.com/loft-sh/devpod/cmd/agent/workspace.NewUpCmd.func1
[15:15:28] info /Users/runner/work/devpod/devpod/cmd/agent/workspace/up.go:52
[15:15:28] info github.com/spf13/cobra.(*Command).execute
[15:15:28] info /Users/runner/work/devpod/devpod/vendor/github.com/spf13/cobra/command.go:916
[15:15:28] info github.com/spf13/cobra.(*Command).ExecuteC
[15:15:28] info /Users/runner/work/devpod/devpod/vendor/github.com/spf13/cobra/command.go:1044
[15:15:28] info github.com/spf13/cobra.(*Command).Execute
[15:15:28] info /Users/runner/work/devpod/devpod/vendor/github.com/spf13/cobra/command.go:968
[15:15:28] info github.com/loft-sh/devpod/cmd.Execute
[15:15:28] info /Users/runner/work/devpod/devpod/cmd/root.go:90
[15:15:28] info main.main
[15:15:28] info /Users/runner/work/devpod/devpod/main.go:8
[15:15:28] info runtime.main
[15:15:28] info /Users/runner/hostedtoolcache/go/1.20.5/x64/src/runtime/proc.go:250
[15:15:28] info runtime.goexit
[15:15:28] info /Users/runner/hostedtoolcache/go/1.20.5/x64/src/runtime/asm_arm64.s:1172
[15:15:28] info get image build info
[15:15:28] info github.com/loft-sh/devpod/pkg/devcontainer.(*runner).extendImage
[15:15:28] info /Users/runner/work/devpod/devpod/pkg/devcontainer/build.go:113
[15:15:28] info github.com/loft-sh/devpod/pkg/devcontainer.(*runner).build
[15:15:28] info /Users/runner/work/devpod/devpod/pkg/devcontainer/build.go:101
[15:15:28] info github.com/loft-sh/devpod/pkg/devcontainer.(*runner).runSingleContainer
[15:15:28] info /Users/runner/work/devpod/devpod/pkg/devcontainer/single.go:56
[15:15:28] info github.com/loft-sh/devpod/pkg/devcontainer.(*runner).Up
[15:15:28] info /Users/runner/work/devpod/devpod/pkg/devcontainer/run.go:120
[15:15:28] info github.com/loft-sh/devpod/cmd/agent/workspace.(*UpCmd).devPodUp
[15:15:28] info /Users/runner/work/devpod/devpod/cmd/agent/workspace/up.go:380
[15:15:28] info github.com/loft-sh/devpod/cmd/agent/workspace.(*UpCmd).up
[15:15:28] info /Users/runner/work/devpod/devpod/cmd/agent/workspace/up.go:160
[15:15:28] info github.com/loft-sh/devpod/cmd/agent/workspace.(*UpCmd).Run
[15:15:28] info /Users/runner/work/devpod/devpod/cmd/agent/workspace/up.go:94
[15:15:28] info github.com/loft-sh/devpod/cmd/agent/workspace.NewUpCmd.func1
[15:15:28] info /Users/runner/work/devpod/devpod/cmd/agent/workspace/up.go:52
[15:15:28] info github.com/spf13/cobra.(*Command).execute
[15:15:28] info /Users/runner/work/devpod/devpod/vendor/github.com/spf13/cobra/command.go:916
[15:15:28] info github.com/spf13/cobra.(*Command).ExecuteC
[15:15:28] info /Users/runner/work/devpod/devpod/vendor/github.com/spf13/cobra/command.go:1044
[15:15:28] info github.com/spf13/cobra.(*Command).Execute
[15:15:28] info /Users/runner/work/devpod/devpod/vendor/github.com/spf13/cobra/command.go:968
[15:15:28] info github.com/loft-sh/devpod/cmd.Execute
[15:15:28] info /Users/runner/work/devpod/devpod/cmd/root.go:90
[15:15:28] info main.main
[15:15:28] info /Users/runner/work/devpod/devpod/main.go:8
[15:15:28] info runtime.main
[15:15:28] info /Users/runner/hostedtoolcache/go/1.20.5/x64/src/runtime/proc.go:250
[15:15:28] info runtime.goexit
[15:15:28] debug Connection to SSH Server closed
[15:15:28] debug Done creating devcontainer
[15:15:28] info /Users/runner/hostedtoolcache/go/1.20.5/x64/src/runtime/asm_arm64.s:1172
[15:15:28] info build image
[15:15:28] info github.com/loft-sh/devpod/pkg/devcontainer.(*runner).runSingleContainer
[15:15:28] info /Users/runner/work/devpod/devpod/pkg/devcontainer/single.go:64
[15:15:28] info github.com/loft-sh/devpod/pkg/devcontainer.(*runner).Up
[15:15:28] info /Users/runner/work/devpod/devpod/pkg/devcontainer/run.go:120
[15:15:28] info github.com/loft-sh/devpod/cmd/agent/workspace.(*UpCmd).devPodUp
[15:15:28] info /Users/runner/work/devpod/devpod/cmd/agent/workspace/up.go:380
[15:15:28] info github.com/loft-sh/devpod/cmd/agent/workspace.(*UpCmd).up
[15:15:28] info /Users/runner/work/devpod/devpod/cmd/agent/workspace/up.go:160
[15:15:28] info github.com/loft-sh/devpod/cmd/agent/workspace.(*UpCmd).Run
[15:15:28] info /Users/runner/work/devpod/devpod/cmd/agent/workspace/up.go:94
[15:15:28] info github.com/loft-sh/devpod/cmd/agent/workspace.NewUpCmd.func1
[15:15:28] info /Users/runner/work/devpod/devpod/cmd/agent/workspace/up.go:52
[15:15:28] info github.com/spf13/cobra.(*Command).execute
[15:15:28] info /Users/runner/work/devpod/devpod/vendor/github.com/spf13/cobra/command.go:916
[15:15:28] info github.com/spf13/cobra.(*Command).ExecuteC
[15:15:28] info /Users/runner/work/devpod/devpod/vendor/github.com/spf13/cobra/command.go:1044
[15:15:28] info github.com/spf13/cobra.(*Command).Execute
[15:15:28] info /Users/runner/work/devpod/devpod/vendor/github.com/spf13/cobra/command.go:968
[15:15:28] info github.com/loft-sh/devpod/cmd.Execute
[15:15:28] info /Users/runner/work/devpod/devpod/cmd/root.go:90
[15:15:28] info main.main
[15:15:28] info /Users/runner/work/devpod/devpod/main.go:8
[15:15:28] info runtime.main
[15:15:28] info /Users/runner/hostedtoolcache/go/1.20.5/x64/src/runtime/proc.go:250
[15:15:28] info runtime.goexit
[15:15:28] info /Users/runner/hostedtoolcache/go/1.20.5/x64/src/runtime/asm_arm64.s:1172
[15:15:28] info devcontainer up
[15:15:28] info github.com/loft-sh/devpod/cmd/agent/workspace.(*UpCmd).Run
[15:15:28] info /Users/runner/work/devpod/devpod/cmd/agent/workspace/up.go:96
[15:15:28] info github.com/loft-sh/devpod/cmd/agent/workspace.NewUpCmd.func1
[15:15:28] info /Users/runner/work/devpod/devpod/cmd/agent/workspace/up.go:52
[15:15:28] info github.com/spf13/cobra.(*Command).execute
[15:15:28] info /Users/runner/work/devpod/devpod/vendor/github.com/spf13/cobra/command.go:916
[15:15:28] info github.com/spf13/cobra.(*Command).ExecuteC
[15:15:28] info /Users/runner/work/devpod/devpod/vendor/github.com/spf13/cobra/command.go:1044
[15:15:28] info github.com/spf13/cobra.(*Command).Execute
[15:15:28] info /Users/runner/work/devpod/devpod/vendor/github.com/spf13/cobra/command.go:968
[15:15:28] info github.com/loft-sh/devpod/cmd.Execute
[15:15:28] info /Users/runner/work/devpod/devpod/cmd/root.go:90
[15:15:28] info main.main
[15:15:28] info /Users/runner/work/devpod/devpod/main.go:8
[15:15:28] info runtime.main
[15:15:28] info /Users/runner/hostedtoolcache/go/1.20.5/x64/src/runtime/proc.go:250
[15:15:28] info runtime.goexit
[15:15:28] info /Users/runner/hostedtoolcache/go/1.20.5/x64/src/runtime/asm_arm64.s:1172
[15:15:28] debug Done executing ssh server helper command
[15:15:28] fatal Process exited with status 1
run agent command
github.com/loft-sh/devpod/pkg/devcontainer/sshtunnel.ExecuteCommand.func2
/Users/runner/work/devpod/devpod/pkg/devcontainer/sshtunnel/sshtunnel.go:122
runtime.goexit
/Users/runner/hostedtoolcache/go/1.20.5/x64/src/runtime/asm_arm64.s:1172

When I go back and retry, I get: [15:15:56] info Workspace 'vscode-remote-try-python' is 'NotFound', you can create it via 'devpod up vscode-remote-try-python'

Ability to run devpods as non-root or even in restricted pod security standard environments

Currently, at least the init container seems to require running as root (UID 0; see

RunAsNonRoot: &[]bool{false}[0],

) with no way to configure it.

It would be great if we could deploy devpod to clusters that require running as non-root. Even better, to clusters that use the restricted pod security standard: https://kubernetes.io/docs/concepts/security/pod-security-standards/#restricted.

Is this something that is technically possible?

Make the pod-running timeout configurable

Description

Users have been experiencing issues with timeouts when working with the cluster autoscaler.

In the current setup, if the node selector setting triggers the autoscaler, the system often times out at the devpod level. This is particularly evident in environments where autoscaler response times are slower. Making the timeout user-configurable can help alleviate these issues and provide a more flexible solution for varied user environments.

Proposed solution

Introduce a configurable timeout option for users.

Alternative solutions

Static extension: increase the default timeout value, though this is not a one-size-fits-all solution.

Additional context

Slack discussion: https://loft-sh.slack.com/archives/C056ZDZPJ4W/p1694654884948429
