
marketplace-kubernetes's Introduction

Marketplace Kubernetes


Welcome!

This repository contains the source code and deployment scripts for Kubernetes-based applications listed in the DigitalOcean Marketplace.

You can find more information about DigitalOcean Kubernetes and how to use 1-Click apps in our Kubernetes documentation.

Please review the contributor guidelines before building your app and submitting your PR.

If you have specific questions about your Marketplace app, you can ask the DigitalOcean Community.

marketplace-kubernetes's People

Contributors

alexeigutium, annanay25, arosales, bhagirathhapse, chandansagar, chasdl, denisgolius, devillecodes, ferroin, grellyd, jgannondo, jgustie, keladhruv, muratugureminoglu, nitishdsharma, pachadotdev, paulanunda, prologic, ranjithwingrider, rberrelleza, scott, sharmita3, tiina303, v-bpastiu, v-ctiutiu, valerabr, vladciobancai, wadenick, yanivbh1, zalkar-z


marketplace-kubernetes's Issues

Bitnami metrics-server Helm Chart throws "unable to fully scrape metrics"

I have tried installing metrics-server via the Bitnami Helm chart as described at https://marketplace.digitalocean.com/apps/kubernetes-metrics-server; however, the pod never starts and I get the following:

metrics-server-c6bd8c57-sbsbg      0/1     CrashLoopBackOff   6          5m51s

➜  k logs -n kube-system metrics-server-c6bd8c57-sbsbg  
I0406 09:38:26.370033       1 serving.go:325] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
E0406 09:38:26.828218       1 server.go:132] unable to fully scrape metrics: unable to fully scrape metrics from node worker-pool-8fi9h: unable to fetch metrics from node worker-pool-8fi9h: Get "https://worker-pool-8fi9h:10250/stats/summary?only_cpu_and_memory=true": dial tcp: lookup worker-pool-8fi9h on 10.245.0.10:53: no such host
I0406 09:38:26.842057       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0406 09:38:26.842076       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I0406 09:38:26.842097       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0406 09:38:26.842102       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0406 09:38:26.842117       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0406 09:38:26.842122       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0406 09:38:26.842159       1 secure_serving.go:197] Serving securely on [::]:8443
I0406 09:38:26.842228       1 dynamic_serving_content.go:130] Starting serving-cert::apiserver.local.config/certificates/apiserver.crt::apiserver.local.config/certificates/apiserver.key
I0406 09:38:26.842249       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0406 09:38:26.942329       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
I0406 09:38:26.942471       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
I0406 09:38:26.942719       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I0406 09:38:55.045366       1 configmap_cafile_content.go:223] Shutting down client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0406 09:38:55.045401       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController
I0406 09:38:55.045413       1 configmap_cafile_content.go:223] Shutting down client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0406 09:38:55.045506       1 tlsconfig.go:255] Shutting down DynamicServingCertificateController
I0406 09:38:55.045519       1 dynamic_serving_content.go:145] Shutting down serving-cert::apiserver.local.config/certificates/apiserver.crt::apiserver.local.config/certificates/apiserver.key
I0406 09:38:55.045545       1 secure_serving.go:241] Stopped listening on [::]:8443

helm chart / stack parameter

Hi,
I'm trying to bring KubeMQ (a Kubernetes message broker) to the marketplace via a Helm chart.
For KubeMQ to install properly, a token parameter must be passed during deployment.
Can you guide me on how to do so, or point me to the right example/link?
Thank you,

Kubernetes Monitoring Stack - Insights page not updated

When installing the advanced metrics in a K8s cluster using the following instructions, the Portal's Insights page displays the additional metrics.
https://www.digitalocean.com/docs/kubernetes/how-to/monitor-advanced/

After installing the Kubernetes Monitoring Stack, the Insights page does not display the additional metrics and still prompts to enable them.


I would expect that after installing the Kubernetes Monitoring Stack, the Insights page would display the additional metrics.

generate-stack.sh error

The following fails with sed (GNU sed) 4.7 on Linux chris-msi 4.19.69-1-MANJARO #1 SMP PREEMPT Thu Aug 29 08:51:46 UTC 2019 x86_64 GNU/Linux :

sed -i '' "s/{{STACK_NAME}}/$STACK_NAME/g" $file

Removing the first single quotes makes it behave nicely:

sed -i "s/{{STACK_NAME}}/$STACK_NAME/g" $file

I haven't opened a PR, because I'm not sure whether the existing command works with other sed versions/OSes (the sed -i '' form is what BSD/macOS sed expects).
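For reference, a minimal sketch of a portable workaround (an assumption on my part, not the repository's current approach): detect the sed flavor and pass the in-place flag accordingly.

# GNU sed accepts -i with no argument; BSD/macOS sed requires an (empty) backup suffix after -i.
if sed --version >/dev/null 2>&1; then
  sed -i "s/{{STACK_NAME}}/$STACK_NAME/g" "$file"     # GNU sed
else
  sed -i '' "s/{{STACK_NAME}}/$STACK_NAME/g" "$file"  # BSD/macOS sed
fi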

Apache Pulsar 1-Click App

Description

Apache Pulsar is a distributed, open source pub-sub messaging and streaming platform for real-time workloads, managing hundreds of billions of events per day.

Impacted Areas

None.

Solution

As per the contribution guidelines, the following steps must be performed (see the sketch after this list):

  1. Use the stack generator script provided in this repository (./stack-templates/generate-stack.sh) to have the required scripts created automatically.
  2. Tweak the Helm chart values inside the values.yml file as necessary (we usually don't paste the default values here, because they are merged anyway from the upstream values file of the Helm chart repository).
  3. Validate that the scripts work as expected for each scenario (deploy/upgrade/uninstall).
  4. Create the main README, containing a quick start guide for Apache Pulsar.
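For illustration, a minimal sketch of what the resulting deploy.sh might run, assuming the upstream Apache Pulsar chart repository (the repo URL, release name, and namespace below are assumptions, not the final stack):

# Add the upstream Apache Pulsar chart repository (assumed source).
helm repo add apache https://pulsar.apache.org/charts
helm repo update

# Install or upgrade the chart using the stack's values file; names are placeholders.
helm upgrade --install pulsar apache/pulsar \
  --namespace pulsar \
  --create-namespace \
  -f values.yml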

Kubernetes Monitoring Stack PVC Support

Are there any plans to add Persistent Volume support for the Kubernetes Monitoring Stack?

When installing using the one-click deployment, if there is any kind of disruption, all of the logs are deleted from prometheus-operator on pod restart. This defeats the purpose of having monitoring in the first place since we won't find out anything about the pre-crash state.

Update kube-prometheus-stack to use Helm chart version 30.0.1

Description

The kube-prometheus-stack Helm chart version is a bit old. The current version used by the marketplace stack is 23.3.1. We can upgrade to version 30.0.1 and align it with the Starter Kit Prometheus version, which is more up to date and tested.

For the Helm based installation, the kube-prometheus-stack chart version 30.0.1 corresponds to Prometheus Operator application version 0.53.1.

Impacted Areas

Prometheus Monitoring marketplace stack.

Solution

As per the contribution guidelines, the following steps must be performed (see the command sketch after this list):

  1. Use the stack generator script provided in this repository (./stack-templates/generate-stack.sh) to have the required scripts created automatically.
  2. Tweak the Helm chart values inside the values.yml file as necessary (we usually don't paste the default values here, because they are merged anyway from the upstream values file of the Helm chart).
  3. Validate that the scripts work as expected for each scenario (deploy/upgrade/uninstall).
  4. Create the main README, containing a quick start guide for the Prometheus monitoring stack. Also, make sure to point the user to additional resources and guides to learn more about what Prometheus can do.
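For reference, a hedged sketch of the Helm commands such an upgrade might boil down to (the repo URL is the upstream prometheus-community one; the release and namespace names are assumptions):

# Add/refresh the upstream chart repository.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Upgrade (or install) the stack, pinning chart version 30.0.1,
# which corresponds to Prometheus Operator 0.53.1.
helm upgrade --install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace kube-prometheus-stack \
  --create-namespace \
  --version 30.0.1 \
  -f values.yml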

How to limit resources?

Hey all.

I've been seeing Prometheus use a huge amount of CPU and RAM, and noticed one of the pods does not have any resource limits. However, when trying to edit the manifest (I'm using Lens), I got an error because I'm trying to use declarative object configuration, while the resource seems to have been created using imperative configuration.

How should I update the configuration? Thanks.
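Not an official fix, but if the stack is Helm-managed, a hedged sketch of setting limits through chart values instead of editing the live object (this assumes the kube-prometheus-stack chart and its prometheus.prometheusSpec.resources keys; the release and namespace names are placeholders):

# Set resource requests/limits for the Prometheus pod via Helm values (assumed chart layout).
helm upgrade kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace kube-prometheus-stack \
  --reuse-values \
  --set prometheus.prometheusSpec.resources.requests.cpu=500m \
  --set prometheus.prometheusSpec.resources.requests.memory=1Gi \
  --set prometheus.prometheusSpec.resources.limits.cpu=1 \
  --set prometheus.prometheusSpec.resources.limits.memory=2Gi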

Bizarre result when opening Grafana IP

I one-click installed the kubernetes-monitoring-stack and got my Grafana IP, but opening it reveals this plain HTML page:

<html lang="pt-br"><head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <title>Gazetaweb</title>
    
    
</head>
<body>
    
    

    
    



    Home
</body></html>

This is my first contact with Grafana, but this seems weird and I'm not sure how to proceed.

Monitoring stack failing to install - Kubernetes 1.19.3-do.0

The monitoring stack is failing to install. I even created a new cluster to test this and it failed to install on the new cluster.


I tried to install the Helm charts myself, and there is an issue with persistent storage:

helm install metrics bitnami/metrics-server -n monitoring
helm upgrade metrics bitnami/metrics-server --set apiService.create=true -n monitoring
helm install prom stable/prometheus -n monitoring
helm install grafana bitnami/grafana -n monitoring

Grafana pod error:
mkdir: cannot create directory '/opt/bitnami/grafana/data/plugins': Permission denied

Prometheus pod error

level=info ts=2020-10-31T02:54:36.494Z caller=main.go:343 msg="Starting Prometheus" version="(version=2.20.1, branch=HEAD, revision=983ebb4a513302315a8117932ab832815f85e3d2)"
level=info ts=2020-10-31T02:54:36.494Z caller=main.go:344 build_context="(go=go1.14.6, user=root@7cbd4d1c15e0, date=20200805-17:26:58)"
level=info ts=2020-10-31T02:54:36.494Z caller=main.go:345 host_details="(Linux 4.19.0-11-amd64 #1 SMP Debian 4.19.146-1 (2020-09-17) x86_64 prom-prometheus-server-76cc8fd49c-2rpnm (none))"
level=info ts=2020-10-31T02:54:36.494Z caller=main.go:346 fd_limits="(soft=1048576, hard=1048576)"
level=info ts=2020-10-31T02:54:36.494Z caller=main.go:347 vm_limits="(soft=unlimited, hard=unlimited)"
level=error ts=2020-10-31T02:54:36.494Z caller=query_logger.go:87 component=activeQueryTracker msg="Error opening query log file" file=/data/queries.active err="open /data/queries.active: permission denied"
panic: Unable to create mmap-ed active query log
goroutine 1 [running]:
github.com/prometheus/prometheus/promql.NewActiveQueryTracker(0x7fff5edcb37e, 0x5, 0x14, 0x30898a0, 0xc0007819b0, 0x30898a0)
	/app/promql/query_logger.go:117 +0x4cd
main.main()
	/app/cmd/prometheus/main.go:374 +0x4f08

metrics-server stack needs to be updated

Description

The metrics-server stack seems incomplete, so we need to add the missing parts: documentation (README) and the upgrade.sh/uninstall.sh scripts. Also, the deploy.sh script needs to be updated to use the official kubernetes-sigs Helm repository instead of the Bitnami one.

Impacted Areas

metrics-server stack.

Solution

As per the contribution guidelines, the following steps must be performed (see the command sketch after this list):

  1. Use the stack generator script provided in this repository (./stack-templates/generate-stack.sh) to have the required scripts created.
  2. Tweak the Helm chart values inside the values.yml file as necessary (we usually don't paste the default values here, because they are merged anyway from the upstream values file of the Helm chart).
  3. Validate the metrics-server stack scripts for each scenario (deploy/upgrade/uninstall).
  4. Create the main README, containing a quick start guide for the metrics-server stack. Also, make sure to point the user to additional resources and guides to learn more about what metrics-server can do.
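As a hedged sketch of what the updated deploy.sh might do after the switch, using the official kubernetes-sigs chart repository (the namespace and release name are assumptions):

# Official kubernetes-sigs chart repository for metrics-server.
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm repo update

helm upgrade --install metrics-server metrics-server/metrics-server \
  --namespace metrics-server \
  --create-namespace \
  -f values.yml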

Knative 1-click application

Description

Knative is gaining popularity and offers an all-in-one solution for running serverless applications on Kubernetes. It cuts out the boilerplate needed for spinning up workloads in Kubernetes, such as creating deployments, services, ingress objects, etc. And it doesn't stop there: Knative also implements best practices for production systems (e.g. blue-green and canary deployments), helps with application observability (logs and metrics), and adds support for event-driven applications.

Impacted Areas

None.

Solution

As per the contribution guidelines, the following steps must be performed:

  1. Use the stack generator script provided in this repository (./stack-templates/generate-stack.sh) to have the required scripts created automatically.
  2. Validate that the scripts work as expected for each scenario (deploy/uninstall).
  3. Create the main README, containing a quick start guide for the Knative 1-click app. Also, make sure to point the user to additional resources and guides to learn more about what Knative can do.

Note:

Knative installation and configuration is not Helm-based; it is Operator-based, which is also the official recommendation for production systems. Since Helm is not always officially supported, we must support a second installation option as well (the Operator approach).

kube-state-metrics v2.1.13 is not a valid version

The deploy script for the kube-state-metrics stack pins version 2.1.13.

Since no errors were visible (#285), I didn't know why the installation failed in the first place.

Recreating the deployment through Helm/ArgoCD showed that 2.1.13 is not a valid version for kube-state-metrics, either upstream or with Bitnami:

The chart has also been moved to:

Bitnami also only offers versions newer than v2.2.x

Cert-Manager stack

Description

Cert-Manager is a very popular open source certificate management tool, specifically designed to work with Kubernetes. It can handle all the required operations for obtaining, renewing and using SSL/TLS certificates.

It also complements the ingress-nginx stack, so that you can generate and manage production-ready TLS certificates as well.

Impacted Areas

None.

Solution

As per the contribution guidelines, the following steps must be performed:

  1. Use the stack generator script provided in this repository (./stack-templates/generate-stack.sh) to have the required scripts created automatically.
  2. Tweak the Helm chart values inside the values.yml file as necessary (we usually don't paste the default values here, because they are merged anyway from the upstream values file of the Helm chart).
  3. Validate that the scripts work as expected for each scenario (deploy/upgrade/uninstall).
  4. Create the main README, containing a quick start guide for the cert-manager stack. Also, make sure to point the user to additional resources and guides to learn more about what cert-manager can do.

Are the steps in the marketplace correct?

As a beginner trying out Kubernetes, I'm having trouble setting up the Kubernetes Monitoring Stack (BETA) from the marketplace.

I'm following all the steps: creating a cluster with 2 node pools, a load balancer for the pools, and downloading the config file. But when I run the first command (kubectl -n prometheus-operator get svc prometheus-operator-grafana), it does not work (Error from server (NotFound): services "prometheus-operator-grafana" not found). Accessing the IP address of the load balancer also does not work; it just gives a 503 Service Unavailable error.

What should I be doing that I'm not? Do I need to install Grafana etc. manually? Are there any other resources I could read to get this up and running easily?

Grafana Plugins and missing Pie Chart plugin

Hi,
I recently spun up a Kubernetes cluster and installed this monitoring stack. While looking through the dashboards, a few of them (Kubernetes / Networking / Cluster) claim to be missing the Pie Chart plugin. Just an FYI.

Also, how do you install new plugins, given that I cannot exec into the pod? (In case users want to add new plugins.)

Thanks
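For reference, one hedged way this is commonly handled with the Grafana Helm chart, rather than exec-ing into the pod (the grafana.plugins values path is an assumption about how this stack wires the chart): declare the plugins in values.yml and re-run helm upgrade.

# values.yml excerpt (assumed Grafana subchart layout)
grafana:
  plugins:
    - grafana-piechart-panel   # provides the Pie Chart panel used by the networking dashboards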

Marketplace link in `History of installed applications` is incorrect

After installing PostHog to a K8s cluster using the marketplace, the link to PostHog on the Marketplace shown at the bottom of the K8s cluster page on DigitalOcean is incorrect.

It is linking to
https://marketplace.digitalocean.com/apps/post-hog <- this 404s, as it's an invalid URL
when it should be linking to
https://marketplace.digitalocean.com/apps/posthog-1


PS:
Sorry if this is the wrong spot to open this issue! I'm not sure where else to report it :)

Monitoring stack - Adding custom metrics

Hi
I am new to K8s and struggling to get custom metrics passed on to Grafana.

I have a Rails application whose requests and workload I want to monitor using the yabeda-rails gem.
I'm pretty sure I have set up Rails correctly, as I can access https://MYAPP/metrics and it gives me the correct information (I added a sample at the bottom).

At the bottom I also included my Service and Deployment config (with the clutter removed).
Now, because /metrics is available in my Rails app on port 3000, I was expecting
that I would only need to apply the Prometheus annotations in the Service's or Deployment's metadata.

  annotations:
      prometheus.io/scrape: 'true'
      prometheus.io/port:   '3000'

I should now be able to see the graphs using Grafana dashboard ID 11668.
I only get "No data" in the graphs, which tells me that the scraping doesn't work as I assumed.

I was expecting the current scrape_configs to scrape all pods with prometheus.io/scrape: 'true'; am I wrong?
Do I need to add a custom scrape_config for this, and how do I add new scrape_configs in a convenient way?

Examples tell me that I need to add something like this somehow:

scrape_configs:
- job_name: 'rails'
  scheme: http
  metrics_path: '/metrics'
  static_configs:
    - targets:
      - 'rails:3000'
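For a Helm-based kube-prometheus-stack install, a hedged sketch of adding such a job through the values file (the prometheus.prometheusSpec.additionalScrapeConfigs path is an assumption about how this stack wires the chart; the operator-based stack generally relies on ServiceMonitors rather than the prometheus.io/* annotations):

# values.yml excerpt (assumed kube-prometheus-stack layout)
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: 'rails'
        scheme: http
        metrics_path: /metrics
        static_configs:
          - targets:
              - 'rails.dev.svc.cluster.local:80'   # Service "rails" in namespace "dev" (port 80 -> targetPort 3000)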

Service and Deployment

---
apiVersion: v1
kind: Service
metadata:
  name: rails
  namespace: dev
  annotations:
      prometheus.io/scrape: 'true'
      prometheus.io/port:   '3000'
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 3000
  selector:
    app: budid
    component: api
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rails
  namespace: dev
  labels:
    app: budid
    component: api
spec:
  selector:
    matchLabels:
      app: budid
      component: api
  template:
    metadata:
      labels:
        app: budid
        component: api
    spec:
      containers:
        - name: rails
      ...

Metrics

# TYPE sidekiq_jobs_enqueued_total counter
# HELP sidekiq_jobs_enqueued_total A counter of the total number of jobs sidekiq enqueued.
# TYPE http_request_total counter
# HELP http_request_total A counter of the total number of external HTTP \\\n                         requests.
# TYPE http_response_total counter
# HELP http_response_total A counter of the total number of external HTTP \\\n                         responses.
# TYPE http_response_duration_milliseconds histogram
# HELP http_response_duration_milliseconds A histogram of the response                                                duration (milliseconds).
# TYPE rails_requests_total counter
# HELP rails_requests_total A counter of the total number of HTTP requests rails processed.
rails_requests_total{controller="api/v1/base",action="status",status="200",format="json",method="get"} 2.0
rails_requests_total{controller="api/v1/brokers",action="theme",status="200",format="json",method="get"} 2.0
rails_requests_total{controller="api/v1/biddings",action="get_by_bidder",status="200",format="json",method="get"} 2.0
rails_requests_total{controller="api/v1/bidders",action="show",status="200",format="json",method="get"} 2.0
rails_requests_total{controller="api/v1/banks",action="index",status="200",format="json",method="get"} 2.0
# TYPE rails_request_duration_seconds histogram
# HELP rails_request_duration_seconds A histogram of the response latency.
rails_request_duration_seconds_bucket{controller="api/v1/base",action="status",status="200",format="json",method="get",le="0.005"} 2.0
rails_request_duration_seconds_bucket{controller="api/v1/base",action="status",status="200",format="json",method="get",le="0.01"} 2.0
...

Can we run a script on all nodes prior to installing our application into the cluster?

Hi,

We're looking to have some packages present on all nodes prior to installing our K8s application into the cluster. Since deploy.sh is run on the master, is there any way to access the IP addresses of all nodes in deploy.sh? If not, is there some script that we can add to our stack that is guaranteed to run on all nodes before deploy.sh runs?

Thank you,
James Pedersen
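Not an official answer, but a hedged sketch: because deploy.sh runs with cluster credentials, it can at least list node addresses with kubectl; running software on every node is usually done with a DaemonSet rather than a per-node script.

# Print each node's name and InternalIP (works wherever kubectl is configured for the cluster).
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'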

Tekton Triggers stack addition

Description

Tekton is a cloud native solution for building CI/CD systems. It is specifically engineered to run on Kubernetes, and empowers developers to create CI pipelines using reusable blocks.

Currently, the DigitalOcean 1-click apps marketplace provides the Tekton Pipelines core component. Tekton Triggers is a nice addition to Tekton Pipelines, empowering users to create CI flows that react to various external event sources (such as GitHub).

Impacted Areas

None.

Solution

As per the contribution guidelines, the following steps must be performed:

  1. Use the stack generator script provided in this repository (./stack-templates/generate-stack.sh) to have the required scripts created automatically.
  2. Tweak the Helm chart values inside the values.yml file as necessary (we usually don't paste the default values here, because they are merged anyway from the upstream values file of the Helm chart repository).
  3. Validate that the scripts work as expected for each scenario (deploy/upgrade/uninstall).
  4. Create the main README, containing a quick start guide for Tekton Triggers.

openebs volume cannot be mounted to the Pod

When using:
https://marketplace.digitalocean.com/apps/openebs
the service deploys correctly and the Pods come up as expected. However, an error occurs when trying to mount a volume to a Pod.

Per @paulanunda:

I spun up a 3-node cluster and got OpenEBS installed without issue. When applying the sample YAML file from above, the web pod got stuck in a "ContainerCreating" state. Here are my currently running pods as well as the event logs:

Every 2.0s: kubectl get po -A                                                                                                                                                                Pauls-MacBook-Pro.local: Tue Sep 17 15:05:59 2019

NAMESPACE     NAME                                                             READY   STATUS              RESTARTS   AGE
default       pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-ctrl-789867f77c-x67xg   2/2     Running             0          3m42s
default       pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476-mnfhx    1/1     Running             0          3m32s
default       pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476-nds9t    1/1     Running             0          3m32s
default       pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476-tch8b    1/1     Running             0          3m32s
default       web-559f597df4-zz6j4                                             0/1     ContainerCreating   0          3m43s
kube-system   cilium-7922g                                                     1/1     Running             0          14m
kube-system   cilium-b5vtf                                                     1/1     Running             0          14m
kube-system   cilium-dcwtk                                                     1/1     Running             0          14m
kube-system   cilium-operator-b8c856758-d7l8q                                  1/1     Running             0          16m
kube-system   coredns-9d6bf9876-gb6qv                                          1/1     Running             0          16m
kube-system   coredns-9d6bf9876-h7dpb                                          1/1     Running             0          16m
kube-system   csi-do-node-9s62d                                                2/2     Running             0          14m
kube-system   csi-do-node-vqhj2                                                2/2     Running             0          14m
kube-system   csi-do-node-xxhwh                                                2/2     Running             0          14m
kube-system   do-node-agent-kwtnc                                              1/1     Running             0          14m
kube-system   do-node-agent-t6dkv                                              1/1     Running             0          14m
kube-system   do-node-agent-xbr82                                              1/1     Running             0          14m
kube-system   kube-proxy-7b27j                                                 1/1     Running             0          14m
kube-system   kube-proxy-k29mt                                                 1/1     Running             0          14m
kube-system   kube-proxy-z5f4k                                                 1/1     Running             0          14m
kube-system   kubelet-rubber-stamp-76bd547767-dqcj6                            1/1     Running             0          16m
openebs       openebs-admission-server-854b569848-s4jqn                        1/1     Running             0          13m
openebs       openebs-apiserver-5d55849f5-bmdzs                                1/1     Running             2          13m
openebs       openebs-localpv-provisioner-56865d855c-8p2br                     1/1     Running             0          13m
openebs       openebs-ndm-2nbk6                                                1/1     Running             0          13m
openebs       openebs-ndm-gsrlt                                                1/1     Running             0          13m
openebs       openebs-ndm-lm59h                                                1/1     Running             0          13m
openebs       openebs-ndm-operator-5b886f6d89-dnnsl                            1/1     Running             1          13m
openebs       openebs-provisioner-b8d7f7fbb-lhp5b                              1/1     Running             0          13m
openebs       openebs-snapshot-operator-5bd58bdd8c-qqd6d                       2/2     Running             0          13m

Logs

default       0s          Normal    ScalingReplicaSet         deployment/web                                                       Scaled up replica set web-559f597df4 to 1
default       0s          Normal    SuccessfulCreate          replicaset/web-559f597df4                                            Created pod: web-559f597df4-zz6j4
default       0s          Warning   FailedScheduling          pod/web-559f597df4-zz6j4                                             persistentvolumeclaim "nginx-pvc" not found
default       0s          Warning   FailedScheduling          pod/web-559f597df4-zz6j4                                             persistentvolumeclaim "nginx-pvc" not found
default       0s          Normal    ExternalProvisioning      persistentvolumeclaim/nginx-pvc                                      waiting for a volume to be created, either by external provisioner "openebs.io/provisioner-iscsi" or manually created by system administrator
default       0s          Normal    Provisioning              persistentvolumeclaim/nginx-pvc                                      External provisioner is provisioning volume for claim "default/nginx-pvc"
default       0s          Normal    ScalingReplicaSet         deployment/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep              Scaled up replica set pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7 to 3
default       1s          Normal    SuccessfulCreate          replicaset/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7   Created pod: pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7-bpqbk
default       1s          Normal    Scheduled                 pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7-bpqbk    Successfully assigned default/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7-bpqbk to pool-roqxab73v-b2q3
default       1s          Normal    SuccessfulCreate          replicaset/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7   Created pod: pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7-gkr95
default       0s          Normal    Scheduled                 pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7-gkr95    Successfully assigned default/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7-gkr95 to pool-roqxab73v-b2qu
default       1s          Normal    SuccessfulCreate          replicaset/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7   Created pod: pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7-fvdwv
default       0s          Normal    ScalingReplicaSet         deployment/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-ctrl             Scaled up replica set pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-ctrl-789867f77c to 1
default       0s          Normal    Scheduled                 pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7-fvdwv    Successfully assigned default/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7-fvdwv to pool-roqxab73v-b2q8
default       0s          Normal    SuccessfulCreate          replicaset/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-ctrl-789867f77c   Created pod: pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-ctrl-789867f77c-x67xg
default       0s          Normal    Scheduled                 pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-ctrl-789867f77c-x67xg    Successfully assigned default/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-ctrl-789867f77c-x67xg to pool-roqxab73v-b2qu
default       0s          Normal    ScalingReplicaSet         deployment/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep               Scaled down replica set pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7 to 0
default       0s          Normal    SuccessfulDelete          replicaset/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7    Deleted pod: pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7-fvdwv
default       0s          Normal    ProvisioningSucceeded     persistentvolumeclaim/nginx-pvc                                       Successfully provisioned volume pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e
default       0s          Normal    SuccessfulDelete          replicaset/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7    Deleted pod: pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7-gkr95
default       0s          Normal    SuccessfulDelete          replicaset/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7    Deleted pod: pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7-bpqbk
default       0s          Normal    Scheduled                 pod/web-559f597df4-zz6j4                                              Successfully assigned default/web-559f597df4-zz6j4 to pool-roqxab73v-b2qu
default       0s          Normal    SuccessfulAttachVolume    pod/web-559f597df4-zz6j4                                              AttachVolume.Attach succeeded for volume "pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e"
default       0s          Warning   FailedMount               pod/web-559f597df4-zz6j4                                              MountVolume.WaitForAttach failed for volume "pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e" : executable file not found in $PATH
default       0s          Normal    Pulling                   pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7-bpqbk     Pulling image "quay.io/openebs/jiva:1.1.0"
default       0s          Warning   FailedMount               pod/web-559f597df4-zz6j4                                              MountVolume.WaitForAttach failed for volume "pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e" : executable file not found in $PATH
default       0s          Normal    Pulling                   pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7-gkr95     Pulling image "quay.io/openebs/jiva:1.1.0"
default       0s          Normal    Pulling                   pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-ctrl-789867f77c-x67xg    Pulling image "quay.io/openebs/jiva:1.1.0"
default       0s          Warning   FailedMount               pod/web-559f597df4-zz6j4                                              MountVolume.WaitForAttach failed for volume "pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e" : executable file not found in $PATH
default       0s          Warning   FailedMount               pod/web-559f597df4-zz6j4                                              MountVolume.WaitForAttach failed for volume "pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e" : executable file not found in $PATH
default       0s          Warning   Failed                    pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7-bpqbk     Error: cannot find volume "openebs" to mount into container "pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-con"
default       0s          Normal    Pulled                    pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7-bpqbk     Successfully pulled image "quay.io/openebs/jiva:1.1.0"
default       0s          Normal    Pulled                    pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7-gkr95     Successfully pulled image "quay.io/openebs/jiva:1.1.0"
default       0s          Warning   Failed                    pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7-gkr95     Error: cannot find volume "openebs" to mount into container "pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-con"
default       0s          Normal    Pulled                    pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-ctrl-789867f77c-x67xg    Successfully pulled image "quay.io/openebs/jiva:1.1.0"
default       0s          Normal    Created                   pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-ctrl-789867f77c-x67xg    Created container pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-ctrl-con
default       0s          Normal    Started                   pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-ctrl-789867f77c-x67xg    Started container pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-ctrl-con
default       0s          Normal    Pulling                   pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-ctrl-789867f77c-x67xg    Pulling image "quay.io/openebs/m-exporter:1.1.0"
default       0s          Warning   FailedMount               pod/web-559f597df4-zz6j4                                              MountVolume.WaitForAttach failed for volume "pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e" : executable file not found in $PATH
default       0s          Normal    ScalingReplicaSet         deployment/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep               Scaled up replica set pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476 to 3
default       0s          Normal    SuccessfulCreate          replicaset/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476    Created pod: pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476-nds9t
default       0s          Normal    SuccessfulCreate          replicaset/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476    Created pod: pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476-mnfhx
default       0s          Normal    SuccessfulCreate          replicaset/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476    Created pod: pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476-tch8b
default       0s          Normal    Scheduled                 pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476-nds9t     Successfully assigned default/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476-nds9t to pool-roqxab73v-b2q3
default       0s          Normal    Scheduled                 pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476-mnfhx     Successfully assigned default/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476-mnfhx to pool-roqxab73v-b2qu
default       0s          Normal    Scheduled                 pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476-tch8b     Successfully assigned default/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476-tch8b to pool-roqxab73v-b2q8
default       0s          Normal    Pulling                   pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476-tch8b     Pulling image "quay.io/openebs/jiva:1.1.0"
default       0s          Normal    Pulled                    pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476-nds9t     Container image "quay.io/openebs/jiva:1.1.0" already present on machine
default       0s          Normal    Created                   pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476-nds9t     Created container pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-con
default       0s          Normal    Started                   pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476-nds9t     Started container pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-con
default       0s          Normal    Pulled                    pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476-tch8b     Successfully pulled image "quay.io/openebs/jiva:1.1.0"
default       0s          Warning   FailedMount               pod/web-559f597df4-zz6j4                                              MountVolume.WaitForAttach failed for volume "pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e" : executable file not found in $PATH
default       0s          Normal    Created                   pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476-tch8b     Created container pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-con
default       0s          Normal    Started                   pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476-tch8b     Started container pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-con
default       0s          Normal    Pulled                    pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476-mnfhx     Container image "quay.io/openebs/jiva:1.1.0" already present on machine
default       0s          Normal    Created                   pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476-mnfhx     Created container pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-con
default       0s          Normal    Started                   pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-78c6cb5476-mnfhx     Started container pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-con
default       0s          Normal    Pulled                    pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-ctrl-789867f77c-x67xg    Successfully pulled image "quay.io/openebs/m-exporter:1.1.0"
default       0s          Normal    Created                   pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-ctrl-789867f77c-x67xg    Created container maya-volume-exporter
default       0s          Normal    Started                   pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-ctrl-789867f77c-x67xg    Started container maya-volume-exporter
default       4s          Warning   FailedMount               pod/web-559f597df4-zz6j4                                              MountVolume.WaitForAttach failed for volume "pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e" : executable file not found in $PATH
default       1s          Warning   FailedMount               pod/web-559f597df4-zz6j4                                              MountVolume.WaitForAttach failed for volume "pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e" : executable file not found in $PATH
default       0s          Warning   FailedMount               pod/pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7-fvdwv     Unable to mount volumes for pod "pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7-fvdwv_default(4dfb3d42-efe7-4e97-9c0d-4d9081a4f060)": timeout expired waiting for volumes to attach or mount for pod "default"/"pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e-rep-5b84655cb7-fvdwv". list of unmounted volumes=[openebs default-token-t9l64]. list of unattached volumes=[openebs default-token-t9l64]
default       0s          Warning   FailedMount               pod/web-559f597df4-zz6j4                                              Unable to mount volumes for pod "web-559f597df4-zz6j4_default(ff75a1b5-8309-4b08-80ad-9dcca7cd42bb)": timeout expired waiting for volumes to attach or mount for pod "default"/"web-559f597df4-zz6j4". list of unmounted volumes=[nginx-pvc-vol]. list of unattached volumes=[nginx-pvc-vol default-token-t9l64]
default       0s          Warning   FailedMount               pod/web-559f597df4-zz6j4                                              MountVolume.WaitForAttach failed for volume "pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e" : executable file not found in $PATH
default       0s          Warning   FailedMount               pod/web-559f597df4-zz6j4                                              MountVolume.WaitForAttach failed for volume "pvc-1a4831ff-58a2-4db3-b284-43bfdbb3996e" : executable file not found in $PATH

Service mesh with logging stack

I am using a DigitalOcean Kubernetes cluster and trying to use Linkerd2 for service mesh and Loki for logging for my deployment.
I thought it would be useful if the service mesh and logging were installed at the same time to monitor the Kubernetes cluster.
In addition to service mesh and logging, distributed tracing with Jaeger would make monitoring the cluster even more useful.

Proposal: Add KubeSphere to DigitalOcean Marketplace (Kubernetes)

Hi DOK Team and @marketplace-eng:

KubeSphere is an open-source container platform based on Kubernetes. Users can easily install KubeSphere on DigitalOcean Kubernetes to build an enterprise-grade container platform with full-stack automated IT operations and streamlined DevOps workflows.

We hope the materials above are helpful. If this proposal is approved by the DO Marketplace team, we would like to create a PR to add KubeSphere to the marketplace and maintain it over the long term. :)

OpenEBS Dynamic NFS Provisioner mount existing volume

Hi!
I'm trying to use the OpenEBS Dynamic NFS Provisioner in my application on a DO Kubernetes cluster.
Can you please let me know if it's possible to use an existing volume in a PersistentVolumeClaim created with the nfs-rwx-storage class?
I want to make sure the volume data is not lost even if the PVC is deleted manually. Can I do that? I tried combining the NFS PVC with these instructions (https://github.com/digitalocean/csi-digitalocean/blob/master/examples/kubernetes/pod-single-existing-volume/README.md), but no luck.

[metrics-server] unable to fully scrape metrics from source

User reported:

After installing, I get an error message in the pod logs of the metrics server:

unable to fully scrape metrics from source kubelet_summary:default-ji17: unable to fetch metrics from Kubelet default-ji17 (10.133.247.135): Get https://10.133.247.135:10250/stats/summary?only_cpu_and_memory=true: x509: cannot validate certificate for 10.133.247.135 because it doesn't contain any IP SANs

I could run the metrics server with the --kubelet-insecure-tls flag, but I think the one-click install should just work with the current settings?
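For anyone needing an immediate workaround, a hedged sketch of adding that flag to the running Deployment (it assumes the Deployment is named metrics-server in kube-system and that the flag belongs in the first container's args; adjust to the actual manifest):

# Append --kubelet-insecure-tls to the metrics-server container args (assumed object layout).
kubectl -n kube-system patch deployment metrics-server --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'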

Allow users to upload their values.yml file to be used by deploy.sh script for 1 Click App install

We need a way to pass in a local values.yaml file to get the FusionAuth stack installed from the DigitalOcean Marketplace.

Related issues:

We are currently in the process of fixing the error we get when clicking 'Install app' for our application (FusionAuth) on our Kubernetes cluster in DigitalOcean.


We went through the process described here, but we are unsure how to make our stack installation work without allowing users to pass in a local copy of their values.yaml. A copy of our default values.yml can be found here. The FusionAuth Helm chart requires a database (either PostgreSQL or MySQL) and, optionally, Elasticsearch to run. In the default values.yml you will see empty values such as database.host, database.user, search.host, database.password, etc. We need a way to allow users to pass the host value no matter where they are running their cluster, be it DOKS, AKS, GKE, minikube, etc.

At this point, we could just have users run the helm commands from the deploy.sh script on the command line and pass in their locally modified values.yml file, for example: helm upgrade --install fusionauth fusionauth/fusionauth -f values.yml --atomic --create-namespace --timeout 8m0s --namespace fusionauth --version 0.10.10,
but this defeats the purpose of a 1-Click app.

Can someone on the DigitalOcean team suggest either (a) a way for users to pass in their values.yml file using command line arguments to the deploy.sh script (not currently supported by DO), or (b) a way for us to redesign our Helm chart so users wouldn't have to configure it and dynamic values could still be added? Command line arguments for Helm work, but only outside the deploy.sh script that DO uses for its '1-Click App' installation.

Any clarifications or suggestions are appreciated.

FusionAuth Chart
FusionAuth Chart values yml
FusionAuth Stack values.yml

Resource requirements from do_config.yml not enforced

We have resource requirements set, but they don't seem to be enforced in any way.

Contributing guide says:

Required: In do_config.yml, update minimum resource requirements for your application. We use this information to prevent applications from being installed on undersized clusters.

We have set the following resource requirements

node_count: 2
cpu: "4"
memory: "4Gi"

But when I used the 1-click PostHog app creation, I could try deploying it to a 1-node, 1-vCPU, 2Gi-total cluster (via the create-new-cluster flow); selecting this worked, in the sense that it started deploying the cluster and running the deploy script, which then failed (screenshot).

Follow-up question from a failed install

Additionally, since these resource requirements were way too small to deploy PostHog, the deploy.sh helm command failed, which resulted in me not being able to access any of the cluster info anymore (i.e. clicking on the cluster, I got the error page in the screenshot for a while. Note that I could access it again later, perhaps once uninstall had run; in any case, I was able to reproduce seeing this screen twice).

After a failed install, is there a way to see the error log from deploy.sh? I only see the screen in the attached screenshot.

Loki with persistent volume claim

Hi,
I tried the DigitalOcean one-click deployment of Loki. After deployment on the Kubernetes cluster, the pods seem to work, but the persistent volume claim (PVC) does not.

For me, the PVC worked when I installed Loki using the Helm chart with some --set options, not the YAML file in this repo.
How can I improve this repo to support PVCs and upgraded container versions?
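As a hedged illustration of the kind of --set options meant here (assuming the grafana/loki-stack chart and its loki.persistence values; the chart and keys actually used by this stack are an assumption):

helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

# Enable a PVC for Loki's storage (values path assumed from the upstream chart).
helm upgrade --install loki grafana/loki-stack \
  --namespace loki \
  --create-namespace \
  --set loki.persistence.enabled=true \
  --set loki.persistence.size=10Gi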

Tekton pipelines stack update

Description

Tekton is a cloud native solution for building CI/CD systems. It is specifically engineered to run on Kubernetes, and empowers developers to create CI pipelines using reusable blocks.

Currently, the 1-click marketplace app installs Tekton via kapp-controller. This means two applications are installed in the Kubernetes cluster: Tekton itself, and kapp-controller as a dependency used to bootstrap Tekton.

We can simplify the process and avoid the clutter by using Helm. An official Helm chart for Tekton is now available in the CD Foundation repository.

Impacted Areas

None.

Solution

As per the contribution guidelines, the following steps must be performed:

  1. Use the stack generator script provided in this repository (./stack-templates/generate-stack.sh) to have the required scripts created automatically.
  2. Tweak the Helm chart values inside the values.yml file as necessary (we usually don't paste the default values here, because they are merged anyway from the upstream values file of the Helm chart).
  3. Validate that the scripts work as expected for each scenario (deploy/upgrade/uninstall).
  4. Create the main README, containing a quick start guide for Tekton Pipelines.

Argo CD stack update

Description

Argo CD is another popular open source implementation providing a neat GitOps experience. One of the main features that makes it unique among other existing solutions (such as Flux CD) is the ability to audit and observe all CD pipelines via a graphical web interface.

Currently, the 1-click marketplace app installs and verifies the Argo CD installation via kubectl commands. We can enhance and simplify the process by using Helm. An official Helm chart for Argo CD is now available in the argo project repository.

Impacted Areas

None.

Solution

As per the contribution guidelines, the following steps must be performed:

  1. Use the stack generator script provided in this repository (./stack-templates/generate-stack.sh) to have the required scripts created automatically.
  2. Tweak the Helm chart values inside the values.yml file as necessary (we usually don't paste the default values here, because they are merged anyway from the upstream values file of the Helm chart).
  3. Validate that the scripts work as expected for each scenario (deploy/upgrade/uninstall).
  4. Create the main README, containing a quick start guide for Argo CD.

helm chart custom values input

Hi, I'm trying to bring PostHog to the marketplace via a Helm chart.
Minimally, for PostHog to work nicely I'd need:

  1. a hostname (for TLS, cert-manager, and what users will use to reach the UI), which they need to register and point to the LB IP
  2. an outgoing email SMTP login and password (used for password resets etc.)

I saw these two issues:

  • #34
  • #57 (couldn’t find anything in contributing.md)

Since our charts repo is open source, anyone could already use it directly to create the K8s cluster on DigitalOcean, customize their values file, and deploy via Helm. But I'd love to have a 1-click deployment via the marketplace; I'm just not sure how to get these two parameters from users.

Helm v3 deprecated `helm template --name` named parameter

Hi DO team,

When building the newest version of our application and running render.sh, I noticed that with a current version of Helm the --name argument to helm template has been deprecated; it is now just a positional argument. (https://helm.sh/docs/helm/helm_template/#helm)

You may wish to update the template to make it compatible with Helm v3+: https://github.com/digitalocean/marketplace-kubernetes/blob/master/utils/stack-templates/render.sh#L24. I have edited the render.sh file in our stack to use the new positional parameter.
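For illustration, the change amounts to something like the following (the chart path and variable names are placeholders, not the exact render.sh contents):

# Helm v2 style (deprecated --name flag):
helm template --name "$STACK" ./chart > manifest.yaml

# Helm v3 style (the release name is a positional argument):
helm template "$STACK" ./chart > manifest.yaml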

Happy to create a tiny PR if that will be useful.

Graham

My current Helm version: version.BuildInfo{Version:"v3.1.2", GitCommit:"d878d4d45863e42fd5cff6743294a11d28a9abce", GitTreeState:"clean", GoVersion:"go1.13.8"}

Associated Stack Overflow issue: https://stackoverflow.com/questions/57961162/helm-install-unknown-flag-name

Base offering too much for 1GB/1CPU single node pool

I have a single node (1GB/1vCPU) cluster that I run some very low-traffic stuff on.

Recently I had to upgrade to 1.23.10-do.0. The upgrade was forced, but no big deal. However, when the upgrade finished, the CPU requests exceeded the available CPU.

According to kubectl describe nodes, I get (this text is edited for simplicity):

...
Allocatable:
  cpu:                900m
...
  Name                                              CPU 
  ----                                              ----
  ingress-nginx-controller-7868b8f99b-7g8lp         100m
  cilium-ng6ww                                      310m
  cilium-operator-c9bf9575f-dtmfc                   100m
  coredns-d4c49d69-dwvx9                            100m
  coredns-d4c49d69-zlj79                            100m
  cpc-bridge-proxy-d22g9                            100m
  do-node-agent-z6p2v                               102m  

So, in the base deployment alone we have 912m CPU requested out of an allocatable 900m. In my case, Kubernetes decided the ingress controller was the best pod to keep down, so everything was effectively down.

To fix it, I manually lowered the CPU request for the ingress controller to 50m, but I am now concerned the base offering will not work after upgrades going forward. I expect I'll have to make a manual correction after every future upgrade.
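For reference, a hedged sketch of that manual correction (the namespace and deployment name are assumptions based on a default ingress-nginx install):

# Lower the ingress controller's CPU request to 50m (assumed names).
kubectl -n ingress-nginx set resources deployment ingress-nginx-controller --requests=cpu=50m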

Is there a reasonable approach to fixing this so the base offering continues to work after upgrades? Without doubling my hosting costs, I can't get a node size that offers enough CPU to meet the reservation requests of the base offering.

Thanks,

Update ingress-nginx stack to use Helm chart version 4.0.13

Problem Description

The ingress-nginx Helm chart version is a bit old. The current application version used by the marketplace stack is 1.0.4. We can upgrade to version 1.1.0 and align it with the Starter Kit Nginx version, which is more up to date and tested.

For the Helm based installation, the ingress-nginx chart version 4.0.13 corresponds to Nginx application version 1.1.0.

Impacted Areas

Ingress-Nginx marketplace stack.

Solution

As per the contribution guidelines, the following steps must be performed:

  1. Use the stack generator script provided in this repository (./stack-templates/generate-stack.sh) to have the required scripts created automatically.
  2. Tweak the Helm chart values inside the values.yml file as necessary (we usually don't paste the default values here, because they are merged anyway from the upstream values file of the Helm chart).
  3. Validate that the scripts work as expected for each scenario (deploy/upgrade/uninstall).
  4. Create the main README, containing a quick start guide for the Nginx Ingress stack. Also, make sure to point the user to additional resources and guides to learn more about what Nginx can do.

Kubernetes Monitoring Stack is not persistent

All changes made in Grafana are lost on pod recreation.
It would be good to have a simple way to add a PVC. For now, the Kubernetes Monitoring Stack cannot be used in a production environment because all changes in Grafana can be lost.

Old version of OpenEBS is showing on DIgitalocean market place.

I can see that OpenEBS 1.3 has been updated on the DigitalOcean marketplace.
When installing OpenEBS from the marketplace, it installs version 1.3.0.
But it seems there are still some places where the 1.1 version of OpenEBS is shown.
I have attached a few screenshots for reference.


Automation of updates

We have most marketplace listings automated for our product (sosiv.io), and we would like to automate updates for the DigitalOcean marketplace as well.

Creating a PR in this repo is a non-issue, but we do have questions about how we can automate the request to update the version in the vendor portal on DigitalOcean:

  1. Is automation for this supported? If not, is there an ETA for support, or a reason why it's not supported?
  2. Is there any documentation on the matter?

Proposal: OpenFaaS

I would like to see an entry for OpenFaaS:

Linked issue: openfaas/faas#1261

Installation instructions in the docs: https://docs.openfaas.com/deployment/kubernetes/#a-deploy-with-helm-for-production

Helm chart README: https://github.com/openfaas/faas-netes/blob/master/HELM.md

My understanding is that Tiller is not made available, so we should generate the required YAML files before applying them.

My suggestion in the upstream issue is to use a LoadBalancer.

Sponsor for idea: @wadenick
