
jitsi-deployment's Issues

Errors during deployment

I tried to deploy the code (overlays/production) to both Azure AKS and AWS EKS with Fargate. Both attempts failed with similar errors during deployment, like the ones below:

unable to recognize "STDIN": no matches for kind "Certificate" in version "cert-manager.io/v1alpha2",
unable to recognize "STDIN": no matches for kind "ClusterIssuer" in version "cert-manager.io/v1alpha2",
unable to recognize "STDIN": no matches for kind "Elasticsearch" in version "elasticsearch.k8s.elastic.co/v1",
unable to recognize "STDIN": no matches for kind "Kibana" in version "kibana.k8s.elastic.co/v1",
unable to recognize "STDIN": no matches for kind "DecoratorController" in version "metacontroller.k8s.io/v1alpha1",
unable to recognize "STDIN": no matches for kind "Alertmanager" in version "monitoring.coreos.com/v1",
unable to recognize "STDIN": no matches for kind "PodMonitor" in version "monitoring.coreos.com/v1",
unable to recognize "STDIN": no matches for kind "Prometheus" in version "monitoring.coreos.com/v1",
unable to recognize "STDIN": no matches for kind "PrometheusRule" in version "monitoring.coreos.com/v1",
unable to recognize "STDIN": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"

Below is the error returned from the server specific to Azure AKS (not sure if this is useful)

{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"kubernetes-dashboard"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"kubernetes-dashboard"},"subjects":[{"kind":"ServiceAccount","name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"}]}\n"},"labels":null},"roleRef":{"name":"kubernetes-dashboard"},"subjects":[{"kind":"ServiceAccount","name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"}]}
to:
Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
Name: "kubernetes-dashboard", Namespace: ""
for: "STDIN": ClusterRoleBinding.rbac.authorization.k8s.io "kubernetes-dashboard" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"ClusterRole", Name:"kubernetes-dashboard"}: cannot change roleRef

For your information, I have also replaced the placeholders for the base64 credentials in

/overlays/production/ops/bbb-basic-auth-secret.yaml
/base/jitsi/jitsi-secret.yaml

Below is a screenshot of kubectl get all -n jitsi:
[screenshot not reproduced here]

Also, once everything works, which service is the user-facing interface exposed to the Internet? Is it the web service?
Thank you

Docs

For non-project-specific topics (like the general K8s setup), the docs should make clear that the material is not project specific and should link to external documentation where possible.

unable to recognize "STDIN": no matches for kind "DecoratorController"

Hello, I am having some problems trying to get this set up on DigitalOcean. I have fixed some issues so far, but others are giving me a hard time, like the one below:

unable to recognize "STDIN": no matches for kind "DecoratorController" in version "metacontroller.k8s.io/v1alpha1"

Also, I have some questions…
Can we use nginx-ingress, or does it have to be HAProxy? DigitalOcean already provides load balancers. I had a simple toy setup of Jitsi working fine with nginx-ingress but only one JVB; we want this to be able to autoscale, so this project looks like a perfect fit for us (maybe overkill with all the Kibana/monitoring, but that's OK).
The docs on this site don't really say what we can and cannot modify in the overlays. Also, the passwords and other configs that need changing live in "base", not in the overlays; it would be nice to be able to just pull "base" from a URL to this repo so it stays in sync, instead of having to keep updating it.
Any ideas on what could be wrong with the error above? Thanks.

Prometheus not deploying in namespace

Hi,
First of all, thanks for all the work in the project. It's absolutely amazing.

After I deploy the 'development' overlay there are no Prometheus deployments, only the operator and the adapter. I get the Grafana endpoint in the ingress, but there is no datasource available and no pods with the tag prometheus:k8s.

kubectl get deployments -n monitoring
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
grafana               1/1     1            1           82m
kube-state-metrics    1/1     1            1           82m
prometheus-adapter    1/1     1            1           82m
prometheus-operator   1/1     1            1           82m

But the services are present in the namespace:

NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
alertmanager-main       ClusterIP   10.43.2.2       <none>        9093/TCP                     82m
alertmanager-operated   ClusterIP   None            <none>        9093/TCP,9094/TCP,9094/UDP   81m
bbb                     ClusterIP   None            <none>        9100/TCP                     82m
grafana                 ClusterIP   10.43.191.171   <none>        3000/TCP                     82m
kube-state-metrics      ClusterIP   None            <none>        8443/TCP,9443/TCP            82m
node-exporter           ClusterIP   None            <none>        9100/TCP                     82m
prometheus-adapter      ClusterIP   10.43.58.35     <none>        443/TCP                      82m
prometheus-k8s          ClusterIP   10.43.103.217   <none>        9090/TCP                     82m
prometheus-operator     ClusterIP   None            <none>        8443/TCP                     82m

I've checked the kube-prometheus project and found the raw file for this deployment, but it doesn't work when I apply the file to the namespace.

There are no prometheus pods running.

kubectl get pods -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-main-0                    2/2     Running   0          87m
alertmanager-main-1                    2/2     Running   0          87m
alertmanager-main-2                    2/2     Running   0          87m
grafana-6cfc4fd6db-r2jn4               1/1     Running   0          88m
kube-state-metrics-6b7567c4c7-n54nn    3/3     Running   0          88m
node-exporter-5s8gp                    2/2     Running   0          88m
node-exporter-kdsw4                    2/2     Running   0          88m
node-exporter-rbfvk                    2/2     Running   0          88m
node-exporter-s7l5p                    2/2     Running   0          88m
node-exporter-z7nmv                    2/2     Running   0          88m
prometheus-adapter-b8d458474-s5qvd     1/1     Running   0          88m
prometheus-operator-666946c454-zcnnx   2/2     Running   0          88m

I don't apply the bbb-* files in the kustomization.

My config:

kubectl 1.19 / kustomize 3.5.4
Kubernetes 1.17.9 (with rke/rancher, but I get the same error on a cluster deployed by kubespray)
Nodes with Ubuntu 18.04 and Local volumes
MetalLB as LoadBalancer in L2 mode
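One check that may narrow this down (an assumption about the failure mode, not a confirmed diagnosis): the operator only creates the prometheus-k8s pods after it has seen a Prometheus custom resource, so it is worth verifying that the resource exists and that the operator is not logging reconcile errors:

kubectl get prometheus -n monitoring
kubectl logs -n monitoring deploy/prometheus-operator -c prometheus-operator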

Can you shed some light on this problem?

Thank you.

error at the time of deployment : error validating data: ValidationError(Secret.data): invalid type for io.k8s.api.core.v1.Secret.data:

Hello @simoncolincap @janrenz @mvakert @wolfm89

Everything goes well apart from the error below at the time of deployment.

For your information, I have also replaced the placeholders for the base64 credentials in

/overlays/production/ops/bbb-basic-auth-secret.yaml
/base/jitsi/jitsi-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  namespace: jitsi
  name: jitsi-config
type: Opaque
data:
  JICOFO_COMPONENT_SECRET:6290f3e910afc1b0ded51a9ff5825e08
  JICOFO_AUTH_PASSWORD:b8cb3a744c3a83b9107f6e4115c0bad8
  JVB_AUTH_PASSWORD:119d7ffe32bb2964337068852d6ad078
  JVB_STUN_SERVERS:meet-jit-si-turnrelay.jitsi.net:443
  TURNCREDENTIALS_SECRET:119d7ffe32bb2964337068852d6ad675
  TURN_HOST:52.172.170.79
  STUN_PORT:4000
  TURN_PORT:4001
  TURNS_PORT:4002

error: error validating "STDIN": error validating data: ValidationError(Secret.data): invalid type for io.k8s.api.core.v1.Secret.data: got "string", expected "map"; if you choose to ignore these errors, turn validation off with --validate=false

A swift response would be a lifesaver.

Thanks in advance!

Installation issue

I find this repo has an installation issue related to metacontroller: no prerequisite information is provided. If someone has completed the installation with kustomize 3.5.4, please share your steps; it may save a lot of time.
In my case the pods are unschedulable and show warnings.

contribute to jitsi-contrib/jitsi-kubernetes?

We would like to discuss whether you want to contribute your solution to the new repository jitsi-contrib/jitsi-kubernetes. See also the discussion here.

In that repo I have already placed a 'Single POD solution' based on kustomize. Your solution explained here is more professional, and the idea is to add a subsection under /doc/ presenting this solution.

If you think it's not the right place because your solution is very comprehensive, we could also create just a README file introducing your solution and pointing to your repo.

What do you think?

Running locally in Windows

Hello,

I'm trying to run this locally on Windows.
I have Kubernetes v1.16.5 running via Docker Desktop.

After cloning the repo and setting the username and password in bbb-basic-auth-secret.yaml and the Jitsi credentials in jitsi-secret.yaml, I have been getting this error:

[screenshot of the error not reproduced here]

So, what can I do regarding this error?

If anyone could help me out with this, I'd highly appreciate it.

Another question I have: how can I access it after it runs successfully?
That is, on which port can I reach the Jitsi web UI?
e.g. localhost:80? or localhost:8443?

Thank you
Ayush.

Turn Server config in Prosody configuration

Hello

We have deployed this setup on an AWS EC2 environment using k8s. It works for users connecting from home.
However, some users are behind a corporate firewall, and the UDP ports (30301/30401 etc.) needed to stream media are not open in that firewall.
Can you please suggest which configuration changes are needed, and where to add them, to enable a TURN server?
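For orientation, and only as an observation from the jitsi-config secret shown in another issue in this tracker: the docker-jitsi-meet images already read TURN settings from environment variables, so a TURN server reachable on 443/TCP (the port most likely to pass a corporate firewall) can be advertised by filling in those keys in /base/jitsi/jitsi-secret.yaml. The values below are placeholders:

TURN_HOST: turn.example.com        # placeholder: public address of your TURN server
TURN_PORT: "443"                   # plain TURN over TCP
TURNS_PORT: "443"                  # TURN over TLS
TURNCREDENTIALS_SECRET: <shared-secret-also-configured-on-the-TURN-server>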

Thanks,
Amit

Multiple loadbalanced Jitsi Meet setups

Jitsi uses the term "shard" to describe the composition of single containers for web, jicofo, and prosody, plus multiple jvb containers running in parallel.

Right now we only have a single shard defined. This is not a fail-safe setup as we're only running single pods for web, jicofo, and prosody.

Sharding can be achieved by putting HAProxy in front of all shards and routing incoming requests based on the chat room specified in the path (HAProxy stick tables). More info on this topic can be found here: https://jitsi.org/blog/new-tutorial-video-scaling-jitsi-meet-in-the-cloud/
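A minimal haproxy.cfg sketch of that idea (untested here; hostnames, certificate path, and table sizing are placeholders): a stick table keyed on the room pins every request for the same conference to the same shard.

frontend meet
    bind *:443 ssl crt /etc/haproxy/meet.pem   # placeholder certificate
    default_backend shards

backend shards
    balance roundrobin
    stick-table type string len 128 size 200k expire 12h
    stick on url_param(room)                   # the room name carried in the request
    server shard0 shard0.example.com:443 check
    server shard1 shard1.example.com:443 check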

Cluster-level logging architecture

There is no logging architecture set up right now. Streaming all logs to one place, persisting them, and making them accessible is important, as it gives us the ability to analyze anomalies and failures after they occur.
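One common pattern, sketched here under the assumption that the Elasticsearch/Kibana stack already deployed by this repo is the log sink: a DaemonSet log shipper that tails container logs on every node and forwards them. Image, namespace, and paths are placeholders:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-shipper
  namespace: logging
spec:
  selector:
    matchLabels:
      app: log-shipper
  template:
    metadata:
      labels:
        app: log-shipper
    spec:
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.9.0   # placeholder version
          volumeMounts:
            - name: varlog
              mountPath: /var/log/containers
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log/containers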

CustomResourceWebhookConversion feature is disabled

I can't deploy due to this message:
Error from server (Invalid): error when creating "STDIN": CustomResourceDefinition.apiextensions.k8s.io "certificaterequests.cert-manager.io" is invalid: spec.conversion.webhookClientConfig: Required value: required when strategy is set to Webhook, but not allowed because the CustomResourceWebhookConversion feature is disabled
Error from server (Invalid): error when creating "STDIN": CustomResourceDefinition.apiextensions.k8s.io "certificates.cert-manager.io" is invalid: spec.conversion.webhookClientConfig: Required value: required when strategy is set to Webhook, but not allowed because the CustomResourceWebhookConversion feature is disabled
Error from server (Invalid): error when creating "STDIN": CustomResourceDefinition.apiextensions.k8s.io "challenges.acme.cert-manager.io" is invalid: spec.conversion.webhookClientConfig: Required value: required when strategy is set to Webhook, but not allowed because the CustomResourceWebhookConversion feature is disabled
Error from server (Invalid): error when creating "STDIN": CustomResourceDefinition.apiextensions.k8s.io "clusterissuers.cert-manager.io" is invalid: spec.conversion.webhookClientConfig: Required value: required when strategy is set to Webhook, but not allowed because the CustomResourceWebhookConversion feature is disabled
Error from server (Invalid): error when creating "STDIN": CustomResourceDefinition.apiextensions.k8s.io "issuers.cert-manager.io" is invalid: spec.conversion.webhookClientConfig: Required value: required when strategy is set to Webhook, but not allowed because the CustomResourceWebhookConversion feature is disabled
Error from server (Invalid): error when creating "STDIN": CustomResourceDefinition.apiextensions.k8s.io "orders.acme.cert-manager.io" is invalid: spec.conversion.webhookClientConfig: Required value: required when strategy is set to Webhook, but not allowed because the CustomResourceWebhookConversion feature is disabled
Error from server (BadRequest): error when creating "STDIN": Secret in version "v1" cannot be handled as a Secret: v1.Secret.Data: decode base64: illegal base64 data at input byte 0, error found in #10 byte of ...|ret\u003e","JICOFO_C|..., bigger context ...|"JICOFO_AUTH_PASSWORD":"\u003cbase64-secret\u003e","JICOFO_COMPONENT_SECRET":"\u003cbase64-secret\u0|...
Error from server (BadRequest): error when creating "STDIN": Secret in version "v1" cannot be handled as a Secret: v1.Secret.Data: decode base64: illegal base64 data at input byte 0, error found in #10 byte of ...|ret\u003e","username|..., bigger context ...|v1","data":{"password":"\u003cbase64-secret\u003e","username":"\u003cbase64-secret\u003e"},"kind":"S|...
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"scope\":\"jitsi\",\"service\":\"web\"},\"name\":\"web\",\"namespace\":\"jitsi\"},\"spec\":{\"clusterIP\":\"None\",\"ports\":[{\"name\":\"http\",\"port\":80,\"targetPort\":80}],\"selector\":{\"k8s-app\":\"web\",\"scope\":\"jitsi\"}}}\n"},"labels":{"k8s-app":null,"scope":"jitsi","service":"web"}},"spec":{"$setElementOrder/ports":[{"port":80}],"clusterIP":"None","ports":[{"name":"http","port":80,"targetPort":80},{"$patch":"delete","port":8000},{"$patch":"delete","port":8443}],"selector":{"k8s-app":"web","scope":"jitsi"},"type":null},"status":null}
to:
Resource: "/v1, Resource=services", GroupVersionKind: "/v1, Kind=Service"
Name: "web", Namespace: "jitsi"
Object: &{map["apiVersion":"v1" "kind":"Service" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"k8s-app\":\"jitsi-web\"},\"name\":\"web\",\"namespace\":\"jitsi\"},\"spec\":{\"ports\":[{\"name\":\"8000\",\"port\":8000,\"targetPort\":80},{\"name\":\"8443\",\"port\":8443,\"targetPort\":443}],\"selector\":{\"k8s-app\":\"jitsi-web\"},\"type\":\"NodePort\"},\"status\":{\"loadBalancer\":{}}}\n"] "creationTimestamp":"2020-04-12T18:16:54Z" "labels":map["k8s-app":"jitsi-web"] "name":"web" "namespace":"jitsi" "resourceVersion":"728840" "selfLink":"/api/v1/namespaces/jitsi/services/web" "uid":"c9dbe029-7ce9-11ea-9ac9-42010a940056"] "spec":map["clusterIP":"10.39.243.71" "externalTrafficPolicy":"Cluster" "ports":[map["name":"8000" "nodePort":'\u799d' "port":'\u1f40' "protocol":"TCP" "targetPort":'P'] map["name":"8443" "nodePort":'\u7c63' "port":'\u20fb' "protocol":"TCP" "targetPort":'\u01bb']] "selector":map["k8s-app":"jitsi-web"] "sessionAffinity":"None" "type":"NodePort"] "status":map["loadBalancer":map[]]]}
for: "STDIN": Service "web" is invalid: spec.clusterIP: Invalid value: "None": field is immutable
[unable to recognize "STDIN": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1",
unable to recognize "STDIN": no matches for kind "HorizontalPodAutoscaler" in version "autoscaling/v2beta2",
unable to recognize "STDIN": no matches for kind "Certificate" in version "cert-manager.io/v1alpha2",
unable to recognize "STDIN": no matches for kind "ClusterIssuer" in version "cert-manager.io/v1alpha2",
unable to recognize "STDIN": no matches for kind "Elasticsearch" in version "elasticsearch.k8s.elastic.co/v1",
unable to recognize "STDIN": no matches for kind "Kibana" in version "kibana.k8s.elastic.co/v1",
unable to recognize "STDIN": no matches for kind "DecoratorController" in version "metacontroller.k8s.io/v1alpha1",
unable to recognize "STDIN": no matches for kind "Alertmanager" in version "monitoring.coreos.com/v1",
unable to recognize "STDIN": no matches for kind "PodMonitor" in version "monitoring.coreos.com/v1",
unable to recognize "STDIN": no matches for kind "Prometheus" in version "monitoring.coreos.com/v1",
unable to recognize "STDIN": no matches for kind "PrometheusRule" in version "monitoring.coreos.com/v1",
unable to recognize "STDIN": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"]
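Two distinct failures seem to be mixed together here (an interpretation, not a confirmed diagnosis). The CustomResourceWebhookConversion messages point at the cluster itself: that feature gate is off by default on older Kubernetes versions, so the cluster version and feature-gate configuration are worth checking. The "illegal base64 data" messages suggest the <base64-secret> placeholders in the secret files were never replaced; each value needs to be base64-encoded first, e.g.:

echo -n 'my-secret-password' | base64   # paste the output into the secret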

Alerting

By using kube-prometheus we already have an Alertmanager set up. This instance should be configured correctly and send out alerts if any of the cluster's components show signs of being unhealthy.
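A minimal alertmanager configuration sketch (the receiver is a placeholder; the actual notification channel is a team decision, not something defined in this repo):

route:
  receiver: team-notifications
  group_by: [alertname, namespace]
receivers:
  - name: team-notifications
    webhook_configs:
      - url: http://example.com/alert-hook   # placeholder endpoint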

Installation Issue

I installed the code with git clone https://github.com/hpi-schul-cloud/jitsi-deployment.git and ran kustomize build . | kubectl apply -f - in overlays/development. But I get errors like this:
"Error: accumulating resources: accumulateFile "accumulating resources from 'ops/': '/home/k8smaster/Documents/hpi-schul-jitsi/jitsi-deployment/overlays/development/ops' must resolve to a file", accumulateDirector: "recursed accumulation of path '/home/k8smaster/Documents/hpi-schul-jitsi/jitsi-deployment/overlays/development/ops': accumulating resources: accumulateFile "accumulating resources from '../../../base/ops': '/home/k8smaster/Documents/hpi-schul-jitsi/jitsi-deployment/base/ops' must resolve to a file", accumulateDirector: "recursed accumulation of path '/home/k8smaster/Documents/hpi-schul-jitsi/jitsi-deployment/base/ops': accumulating resources: accumulateFile \"accumulating resources from 'monitoring/': '/home/k8smaster/Documents/hpi-schul-jitsi/jitsi-deployment/base/ops/monitoring' must resolve to a file\", accumulateDirector: \"recursed accumulation of path '/home/k8smaster/Documents/hpi-schul-jitsi/jitsi-deployment/base/ops/monitoring': accumulating resources: accumulateFile \\\"accumulating resources from 'https://github.com/coreos/kube-prometheus?ref=master': YAML file [https://github.com/coreos/kube-prometheus?ref=master] encounters a format error.\\\\nerror converting YAML to JSON: yaml: line 31: mapping values are not allowed in this context\\\\n\\\", accumulateDirector: \\\"couldn't make target for path '/tmp/kustomize-756266654/repo': unable to find one of 'kustomization.yaml', 'kustomization.yml' or 'Kustomization' in directory '/tmp/kustomize-756266654/repo'\\\"\"""
error: no objects passed to apply

Where is the problem?
And thank you for sharing this.
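One way to isolate this (a guess based on the message, which fails while pulling the remote base 'https://github.com/coreos/kube-prometheus?ref=master' from base/ops/monitoring): build the remote base on its own. If that also fails, the problem is the moving master ref of kube-prometheus rather than anything in this repo, and pinning ref to a fixed release tag would be the fix.

kustomize build "https://github.com/coreos/kube-prometheus?ref=master"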

Jitsi related Pods should have liveness/readiness checks

We currently don't have liveness/readiness checks for the prosody and web pods.

I suggest using the return value of prosodyctl status to gain confidence in prosody's readiness.
For the web pod I suggest simply querying the HTTP port and checking for a successful response.
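A sketch of what those probes could look like (untested against this deployment; the delay and period values are placeholders):

# prosody container
readinessProbe:
  exec:
    command: ["prosodyctl", "status"]
  initialDelaySeconds: 15
  periodSeconds: 20

# web container
readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 15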

Copyright Licensing

I noticed that this repo lacks a copyright license. Was this intentional or is there a possibility of adding an Apache license?

Public VPC vs Private VPC

Dear @mvakert, @wolfm89, @janrenz

First of all, thank you so much for all your work! This example has proven super helpful for us! We are using your idea of statefulsets and your way of defining the services. Thank you so much!

Now, we have a question: are you using this in a public VPC where each node receives a public IP, or are you using a private VPC?

If you use a private VPC, how are you handling communication to the pods from the outside?

Thank you so much for all your help!

Deployment Fail

I tried to deploy these services on an IONOS Kubernetes cluster. I deployed the scripts in overlays/production using kustomize, following your guide in the README.md. All scripts executed flawlessly. But when I look into the dashboard, some components failed to run, so they are not available. Kibana is also not available.

My server specification is :

  • Xeon, 8 cores
  • 20 GB RAM
  • 600 GB HDD
  • 600 GB SSD

All participants are muted

I managed to deploy this to a DO k8s cluster, but the web client has some errors and there is no video or audio.
Messaging and joining a call work.
Console log of the errors:
[screenshot not reproduced here]
Are all of these .env variables required? Do I need STUN servers?
[screenshot of the .env variables not reproduced here]
Do I need to change the XMPP domain?
My ingress entrypoint domain is video.domain.com. Do I need to set one for XMPP, or does it already use meet.jitsi inside the cluster?

CONNECTION FAILED: connection.otherError

Hi,

Until now this setup worked perfectly.
But when I installed it afresh, the web client was suddenly not able to connect. I am seeing this issue in the console logs on the client:

2020-08-21T09:45:40.193Z [modules/xmpp/xmpp.js] <A.connectionHandler>: (TIME) Strophe connfail[item-not-found]: 4319.934999999532
Logger.js:154 2020-08-21T09:45:40.195Z [modules/xmpp/xmpp.js] <A.connectionHandler>: (TIME) Strophe disconnected[item-not-found]: 4321.874999999636
Logger.js:154 2020-08-21T09:45:40.196Z [modules/statistics/statistics.js] <Function.S.sendAnalyticsAndLog>: {"type":"operational","action":"connection.failed","attributes":{"error_type":"connection.otherError","error_message":"item-not-found","suspend_time":0,"time_since_last_success":null}}

app.bundle.min.js?v=4074:126 2020-08-21T09:45:55.578Z [connection.js] <a.u>: CONNECTION FAILED: connection.otherError

Logger.js:154 2020-08-21T09:46:22.953Z [connection.js] <a.u>: CONNECTION FAILED: connection.otherError

.....

app.bundle.min.js?v=4074:126 2020-08-21T09:47:55.972Z [connection.js] <a.u>: CONNECTION FAILED: connection.otherError

The UDP ports are open on the AWS EC2 instances.

Can you please suggest what could be the reason?

Thanks
Amit

jicofo

[screenshots of the failing jicofo pod not reproduced here]

Can you please help me with this issue?

Where did I take a wrong step? Please tell me.

The other pods are working, but the jicofo readiness check is not passing. Please help with this issue.

Thank you

please add installation of metacontroller to README

I suggest including a more specific cluster specification to better define the conditions under which this repo is expected to work.
E.g. metacontroller needs to be installed explicitly; this is not mentioned anywhere.

https://metacontroller.github.io/metacontroller/guide/install.html

Installing it like this helped me finish the kubectl apply step:

kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/metacontroller/master/manifests/metacontroller.yaml

Error while deployment: Failed calling webhook

I got the following errors during deployment:

Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "webhook.cert-manager.io": Post https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=30s: dial tcp 192.168.202.12:443: connect: connection refused
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: dial tcp 192.168.168.125:443: connect: connection refused
Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: dial tcp 192.168.168.125:443: connect: connection refused
