helm-charts's People

Contributors

davehorton, javibookline, joan-bookline, mrfabio, pilganchuk, radicaldrew, xquanluu

helm-charts's Issues

taints and tolerations are not working

I have a cluster set up using K3s, and all the nodes are already taking load (at acceptable levels).

Following the deployment guide, I applied the taints and labels from the guide and then ran a helm install (commands given below):

kubectl taint node devops232.com sip=true:NoSchedule

kubectl taint node devops231.com rtp=true:NoSchedule

and
kubectl label node devops231.com voip-environment=rtp
kubectl label node devops232.com voip-environment=sip

Now, when I try to install the solution using the helm chart, I get:

  Warning  FailedScheduling  2m15s             default-scheduler  0/3 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {rtp: true}, that the pod didn't tolerate.
  Warning  FailedScheduling  8s (x1 over 68s)  default-scheduler  0/3 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {rtp: true}, that the pod didn't tolerate.

Questions:

1 -- What is the right procedure for assigning nodes to these pods? Any example will suffice (a sketch follows the node listing below). I can't even disable the taints/tolerations altogether without getting errors.
2 -- What does the error "node(s) didn't have free ports for the requested pod ports" mean?

Please note that I'm deploying this cluster on-prem as an R&D exercise, with the aim of taking it to production. The cluster is based on K3s.

 kubectl get nodes -o wide
NAME               STATUS   ROLES                       AGE     VERSION        INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                               KERNEL-VERSION          CONTAINER-RUNTIME
devops230.com   Ready    control-plane,etcd,master   3d      v1.23.8+k3s2   192.168.2.230   <none>        Red Hat Enterprise Linux 8.4 (Ootpa)   4.18.0-305.el8.x86_64   containerd://1.5.13-k3s1
devops231.com   Ready    control-plane,etcd,master   2d23h   v1.23.8+k3s2   192.168.2.231   <none>        Red Hat Enterprise Linux 8.4 (Ootpa)   4.18.0-305.el8.x86_64   containerd://1.5.13-k3s1
devops232.com   Ready    control-plane,etcd,master   2d23h   v1.23.8+k3s2   192.168.2.232   <none>        Red Hat Enterprise Linux 8.4 (Ootpa)   4.18.0-305.el8.x86_64   containerd://1.5.13-k3s1
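
For reference: given the taints and labels above, a pod can only land on those nodes if its spec carries a matching toleration and a matching nodeSelector. Below is a minimal throwaway probe pod you can apply to verify placement; the taint key/value and label come from the commands above, while the pod name and image are purely illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: sip-placement-test        # illustrative probe pod
spec:
  nodeSelector:
    voip-environment: sip         # label applied above
  tolerations:
    - key: "sip"                  # taint key applied above
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
  containers:
    - name: probe
      image: busybox
      command: ["sleep", "3600"]

As for question 2: that scheduler message typically indicates a hostPort conflict. The SBC pods request host ports (5060 and friends), so a node where another pod already binds one of those ports cannot accept them.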

[Request] Add subpath support to the webapp

Currently, the chart allows the configuration of the hostname for several services:

api.hostname:     api-sip-customer-site.com      or  api.sip.customer.site.com
grafana.hostname: grafana-sip-customer-site.com  or  grafana.sip.customer.site.com
homer.hostname:   homer-sip-customer-site.com    or  homer.sip.customer.site.com
jaeger.hostname:  jaeger-sip-customer-site.com   or  jaeger.sip.customer.site.com
webapp.hostname:  webapp-sip-customer-site.com   or  webapp.sip.customer.site.com

The chart will create an Ingress plus an external IP for each one (if enabled). It would be nice to add the option of using a single Ingress with subpaths for each service, like so:

api.hostname:     customer.site.com/sip/api
grafana.hostname: customer.site.com/sip/grafana
homer.hostname:   customer.site.com/sip/homer
jaeger.hostname:  customer.site.com/sip/jaeger
webapp.hostname:  customer.site.com/sip/webapp

With this we would need only one external IP and fewer firewall rules, making the solution cheaper overall. A sketch of such an Ingress follows.
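
For illustration, a single path-based Ingress along these lines could look like the sketch below. The service names and ports are assumptions, and the annotations presume the ingress-nginx controller.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jambonz-ingress                               # illustrative name
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"     # assumes ingress-nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: customer.site.com
      http:
        paths:
          - path: /sip/api(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: api-server                      # assumed service name
                port:
                  number: 3000                        # assumed port
          - path: /sip/webapp(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: webapp                          # assumed service name
                port:
                  number: 80                          # assumed port

Note the rewrite only helps if each backend can serve its assets from a non-root path, which is exactly the webapp limitation described next.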

A related issue the chart alone won't solve: the webapp does not support serving its resources from a non-root location, since it doesn't allow setting a custom/relative path as the new "root". I ran into similar problems with the Homer webapp.

SIP pod CrashLoopBackOff when configuring jambonz to use SIP over websockets

Hi there, I have successfully deployed jambonz with the Kubernetes helm charts on a GKE cluster (chart version 0.1.33). I'm trying to follow the instructions on GitHub to configure jambonz to use SIP over websockets.

Here is my Issuer:

huyquangng258@cloudshell:~$ kubectl get issuers -n jambonz -owide
NAME        READY   STATUS                AGE
ca-issuer   True    Signing CA verified   4h29m

and here is my Certificate:

huyquangng258@cloudshell:~$ kubectl get certs -n jambonz -owide
NAME                   READY   SECRET            ISSUER      STATUS                                          AGE
drachtio-certificate   True    jambonz-secrets   ca-issuer   Certificate is up to date and has not expired   5h3m

When I use my cert and set sbc.sip.ssl.enabled to true, the SIP pod is always in a CrashLoopBackOff state:

huyquangng258@cloudshell:~$ kubectl get pods -n jambonz
NAME                               READY   STATUS             RESTARTS       AGE
api-server-57f85c559b-qlmrn        1/1     Running            0              27h
cassandra-init-job-45jp8           1/1     Running            0              27h
feature-server-66d45ff6df-pwkjg    3/3     Running            0              130m
jambonz-sbc-rtp-5vc47              2/2     Running            0              27h
jambonz-sbc-sip-r8vsr              1/3     CrashLoopBackOff   10 (85s ago)   4m36s
sbc-call-router-564c766fb9-hcnps   1/1     Running            0              27h
sbc-inbound-77cc969bbc-mqgfx       1/1     Running            3 (55m ago)    17h
sbc-outbound-77bcfd9f65-zbspg      1/1     Running            2 (117m ago)   17h
webapp-9cb5cfb7b-x9n9c             1/1     Running            0              27h

Logs for the SIP pod:

huyquangng258@cloudshell:~$ kubectl logs jambonz-sbc-sip-r8vsr -n jambonz
Defaulted container "drachtio" out of: drachtio, sidecar, smpp, db-create-wait (init)
2023-12-12 08:55:53.947270 Starting drachtio version v0.8.24
2023-12-12 08:55:53.947331 Logging threshold:                     5
2023-12-12 08:55:53.947350 Route for outbound connection:         sip-method: INVITE, http-method: POST, http-url: http://sbc-call-router:3000
2023-12-12 08:55:53.947356 DrachtioController::run: Main thread id: 137362610331328
2023-12-12 08:55:53.947363 DrachtioController::run tls key file:         /etc/ssl/tls.key
2023-12-12 08:55:53.947368 DrachtioController::run tls certificate file: /etc/ssl/tls.crt
2023-12-12 08:55:53.947373 DrachtioController::run listening for applications on tcp port 9022 and tls port 0
2023-12-12 08:55:53.948423 ClientController::ClientController done setting tls options: 
2023-12-12 08:55:53.948439 Client controller thread id: 137362610331328
2023-12-12 08:55:53.948503 ClientController::start_accept_tcp
2023-12-12 08:55:53.948542 DrachtioController::run mtu size for udp packets: 4096
2023-12-12 08:55:53.948551 DrachtioController::run - sipcapture/Homer enabled: udp:heplify-server.monitoring:9060;hep=3;capture_id=10
2023-12-12 08:55:53.948556 DrachtioController::run - blacklist checking config
2023-12-12 08:55:53.948560 DrachtioController::run - blacklist is disabled
2023-12-12 08:55:53.948565 Prometheus support disabled
2023-12-12 08:55:53.948570 tcp keep alives will be sent to clients every 45 seconds
2023-12-12 08:55:53.948628 DrachtioController::run: starting sip stack on local address sip:10.148.0.14:5060;transport=udp,tcp (external address: 34.87.73.165)
2023-12-12 08:55:53.948638 SipTransport::getBindableContactUri: sip:34.87.73.165:5060;transport=udp,tcp;maddr=10.148.0.14
2023-12-12 08:55:53.948651 nta.c:884 nta_agent_create() nta_agent_create: SOFIA_SEARCH_DOMAINS 1, using sres_search instead of sres_query (search option in resolv.conf will be applied
2023-12-12 08:55:53.948687 nta.c:979 nta_agent_create() nta_agent_create: initialized hash tables
2023-12-12 08:55:53.948709 tport.c:529 tport_tcreate() tport_create(): 0x5b608ba40300
2023-12-12 08:55:53.948726 tport_logging.c:204 tport_open_log() events HEP RRR DATA [hep=3;capture_id=10]
2023-12-12 08:55:53.948731 tport_logging.c:204 tport_open_log() events HEP RRR DATA [capture_id=10]
2023-12-12 08:55:53.949615 Client controller thread id: 137362610317056
2023-12-12 08:55:53.949649 ClientController::threadFunc - ClientController: io_context run loop started (or restarted)
2023-12-12 08:55:53.953960 nta.c:2401 agent_create_master_transport() nta: master transport created
2023-12-12 08:55:53.954069 tport.c:1667 tport_bind_server() tport_bind_server(0x5b608ba40300) to */10.148.0.14:5060
2023-12-12 08:55:53.954117 tport.c:1738 tport_bind_server() tport_bind_server(0x5b608ba40300): calling tport_listen for udp
2023-12-12 08:55:53.954130 tport.c:651 tport_alloc_primary() tport_alloc_primary(0x5b608ba40300): new primary tport 0x5b608ba24500
2023-12-12 08:55:53.954220 tport.c:780 tport_listen() tport_listen(0x5b608ba24500): listening at udp/10.148.0.14:5060
2023-12-12 08:55:53.954234 tport.c:1738 tport_bind_server() tport_bind_server(0x5b608ba40300): calling tport_listen for tcp
2023-12-12 08:55:53.954241 tport.c:651 tport_alloc_primary() tport_alloc_primary(0x5b608ba40300): new primary tport 0x5b608ba24780
2023-12-12 08:55:53.954306 tport.c:780 tport_listen() tport_listen(0x5b608ba24780): listening at tcp/10.148.0.14:5060
2023-12-12 08:55:53.954319 nta.c:2355 nta_agent_add_tport() nta: bound to (34.87.73.165:5060;transport=*;maddr=10.148.0.14)
2023-12-12 08:55:53.954328 nta.c:2497 agent_init_via() nta: agent_init_via: SIP/2.0/udp 34.87.73.165 (*)
2023-12-12 08:55:53.954340 nta.c:2497 agent_init_via() nta: agent_init_via: SIP/2.0/tcp 34.87.73.165 (*)
2023-12-12 08:55:53.954357 nta.c:2369 nta_agent_add_tport() nta: Via fields initialized
2023-12-12 08:55:53.954700 nta.c:2377 nta_agent_add_tport() nta: Contact header created
2023-12-12 08:55:53.954717 nta.c:986 nta_agent_create() nta_agent_create: initialized transports
2023-12-12 08:55:53.954724 nta.c:992 nta_agent_create() nta_agent_create: initialized random identifiers
2023-12-12 08:55:53.954730 nta.c:998 nta_agent_create() nta_agent_create: initialized timer
2023-12-12 08:55:53.954827 nta.c:1008 nta_agent_create() nta_agent_create: initialized resolver
2023-12-12 08:55:53.955846 SipTransport::addTransports - creating transport: 0x5b608ba24500: udp/10.148.0.14:5060
2023-12-12 08:55:53.956685 SipTransport::addTransports - creating transport: 0x5b608ba24780: tcp/10.148.0.14:5060
2023-12-12 08:55:53.957990 DrachtioController::run: adding additional internal sip address sips:10.148.0.14:8443;transport=wss (external address: 34.87.73.165)
2023-12-12 08:55:53.958009 SipTransport::getBindableContactUri: sips:34.87.73.165:8443;transport=wss;maddr=10.148.0.14
2023-12-12 08:55:53.958027 tport.c:1667 tport_bind_server() tport_bind_server(0x5b608ba40300) to wss/10.148.0.14:8443
2023-12-12 08:55:53.958041 tport.c:1738 tport_bind_server() tport_bind_server(0x5b608ba40300): calling tport_listen for wss
2023-12-12 08:55:53.958048 tport.c:651 tport_alloc_primary() tport_alloc_primary(0x5b608ba40300): new primary tport 0x5b608ba24a00
2023-12-12 08:55:53.958440 tport.c:780 tport_listen() tport_listen(0x5b608ba24a00): listening at wss/10.148.0.14:8443
2023-12-12 08:55:53.958456 nta.c:2355 nta_agent_add_tport() nta: bound to (34.87.73.165:8443;transport=wss;maddr=10.148.0.14)
2023-12-12 08:55:53.958465 nta.c:2497 agent_init_via() nta: agent_init_via: SIP/2.0/udp 34.87.73.165 (*)
2023-12-12 08:55:53.958476 nta.c:2497 agent_init_via() nta: agent_init_via: SIP/2.0/tcp 34.87.73.165 (*)
2023-12-12 08:55:53.958484 nta.c:2497 agent_init_via() nta: agent_init_via: SIP/2.0/wss 34.87.73.165:8443 (*)
2023-12-12 08:55:53.958498 nta.c:2369 nta_agent_add_tport() nta: Via fields initialized
2023-12-12 08:55:53.958526 nta.c:2377 nta_agent_add_tport() nta: Contact header created
2023-12-12 08:55:53.959305 SipTransport::addTransports - creating transport: 0x5b608ba24a00: wss/10.148.0.14:8443
2023-12-12 08:55:53.959910 DrachtioController::run: adding additional internal sip address sips:10.148.0.14:5061;transport=tls (external address: 34.87.73.165)
2023-12-12 08:55:53.959926 SipTransport::getBindableContactUri: sips:34.87.73.165:5061;transport=tls;maddr=10.148.0.14
2023-12-12 08:55:53.960005 tport.c:1667 tport_bind_server() tport_bind_server(0x5b608ba40300) to tls/10.148.0.14:5061
2023-12-12 08:55:53.960025 tport.c:1738 tport_bind_server() tport_bind_server(0x5b608ba40300): calling tport_listen for tls
2023-12-12 08:55:53.960059 tport.c:651 tport_alloc_primary() tport_alloc_primary(0x5b608ba40300): new primary tport 0x5b608ba24c80
2023-12-12 08:55:53.960074 tport_type_tls.c:223 tport_tls_init_master() tport_tls_init_master(0x5b608ba24c80): tls key file = /etc/ssl/tls.key
2023-12-12 08:55:53.960089 tport_type_tls.c:234 tport_tls_init_master() tport_tls_init_master(0x5b608ba24c80): tls cert file = /etc/ssl/tls.crt
2023-12-12 08:55:53.960101 tport_type_tls.c:252 tport_tls_init_master() tport_tls_init_master(0x5b608ba24c80): tls_policy: 0, tls_verify: 0
2023-12-12 08:55:53.960106 tport_type_tls.c:253 tport_tls_init_master() tport_tls_init_master(0x5b608ba24c80): tls_version: 28, tls_timeout: 300
2023-12-12 08:55:53.960709 tport_tls.c:414 tls_init_context() tls_init_context: error loading CA list: cafile.pem
2023-12-12 08:55:53.960746 tport.c:1165 tport_zap_secondary() tport_zap_secondary(0x5b608ba24c80): zap tport 0x5b608ba24c80 from (null)/(null):(null), count(wss) is 0, count(tcp) is 0
2023-12-12 08:55:53.960782 tport.c:758 tport_listen() tport_listen(0x5b608ba40300): tls_init_master(pf=2 tls/[10.148.0.14]:5061): Input/output error
2023-12-12 08:55:53.960792 nta.c:2345 nta_agent_add_tport() nta: bind(34.87.73.165:5061;transport=tls;maddr=10.148.0.14): Input/output error
2023-12-12 08:55:53.960796 DrachtioController::run: Error adding additional transport

Pod describe output:

huyquangng258@cloudshell:~$ kubectl describe pod jambonz-sbc-sip-r8vsr -n jambonz 
Name:             jambonz-sbc-sip-r8vsr
Namespace:        jambonz
Priority:         0
Service Account:  default
Node:             gke-jambonz-cluster-1-sip-06c86148-x04f/10.148.0.14
Start Time:       Tue, 12 Dec 2023 08:52:43 +0000
Labels:           app=jambonz-sbc-sip
                  controller-revision-hash=85b45ff847
                  pod-template-generation=8
Annotations:      <none>
Status:           Running
IP:               10.148.0.14
IPs:
  IP:           10.148.0.14
Controlled By:  DaemonSet/jambonz-sbc-sip
Init Containers:
  db-create-wait:
    Container ID:  containerd://46271167033d841ad0a425d8e3e3ba1f5f8223be9fecdd65475a6a7fe161b784
    Image:         kanisterio/mysql-sidecar:0.40.0
    Image ID:      docker.io/kanisterio/mysql-sidecar@sha256:7300e6d158fd9a82efef1387ac4cbfbbcebae1de27772dc69ae880721ab9cee4
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      until mysql -u jambones -D jambones -h mysql.db -p${MYSQL_PASSWORD} --protocol=tcp -e "select count(*) from accounts";
      do 
        sleep 5
      done
      
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 12 Dec 2023 08:52:43 +0000
      Finished:     Tue, 12 Dec 2023 08:52:43 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      MYSQL_PASSWORD:  <set to the key 'MYSQL_PASSWORD' in secret 'jambonz-secrets'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jc8qv (ro)
Containers:
  drachtio:
    Container ID:  containerd://b6eba0f7e163b1c617ea0c73804a07ab5d7bb3904f241f96433b853841bcad2d
    Image:         drachtio/drachtio-server:0.8.24
    Image ID:      docker.io/drachtio/drachtio-server@sha256:134ee1d0bfd8190d88270f05d37d4b6191463c2c8a96a6503d017790ad35f68e
    Ports:         9022/TCP, 5060/UDP, 5060/TCP, 8443/TCP
    Host Ports:    9022/TCP, 5060/UDP, 5060/TCP, 8443/TCP
    Args:
      drachtio
      --loglevel
      debug
      --cloud-deployment
      --sofia-loglevel
      9
      --homer
      heplify-server.monitoring:9060
      --homer-id
      10
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 12 Dec 2023 08:58:42 +0000
      Finished:     Tue, 12 Dec 2023 08:58:42 +0000
    Ready:          False
    Restart Count:  6
    Environment:
      DRACHTIO_SECRET:         <set to the key 'DRACHTIO_SECRET' in secret 'jambonz-secrets'>  Optional: false
      SOFIA_SEARCH_DOMAINS:    1
      SOFIA_SRES_NO_CACHE:     1
      CLOUD:                   gcp
      IMDSv2:                  
      DRACHTIO_TLS_CERT_FILE:  /etc/ssl/tls.crt
      DRACHTIO_TLS_KEY_FILE:   /etc/ssl/tls.key
      TLS_PORT:                5061
      WSS_PORT:                8443
    Mounts:
      /etc/drachtio.conf.xml from jambonz-sbc-sip-conf (rw,path="drachtio.conf.xml")
      /etc/ssl/ from drachtio-certs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jc8qv (ro)
  sidecar:
    Container ID:   containerd://20829b0636f6ddbc717994361a708508a2e89e987cd5e99b94ec9b8244e1403b
    Image:          jambonz/sbc-sip-sidecar:0.8.5
    Image ID:       docker.io/jambonz/sbc-sip-sidecar@sha256:c4436a7c7cdfd9ad1905365cbdcfb7bfe37e458dbbcccd6386ad90b5d6791c27
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 12 Dec 2023 08:58:43 +0000
      Finished:     Tue, 12 Dec 2023 08:58:43 +0000
    Ready:          False
    Restart Count:  6
    Environment:
      NODE_ENV:                        production
      K8S:                             1
      JAMBONES_REGBOT_CONTACT_USE_IP:  1
      JAMBONES_LOGLEVEL:               info
      JAMBONES_REDIS_HOST:             redis.db
      JAMBONES_REDIS_PORT:             6379
      JAMBONES_MYSQL_DATABASE:         jambones
      JAMBONES_MYSQL_HOST:             mysql.db
      JAMBONES_MYSQL_USER:             jambones
      JAMBONES_MYSQL_PASSWORD:         <set to the key 'MYSQL_PASSWORD' in secret 'jambonz-secrets'>  Optional: false
      JAMBONES_TIME_SERIES_HOST:       influxdb.monitoring
      ENABLE_METRICS:                  1
      K8S_POD_NAME:                    jambonz-sbc-sip-r8vsr (v1:metadata.name)
      STATS_HOST:                      telegraf.monitoring
      STATS_PORT:                      8125
      STATS_TAGS:                      pod:$(K8S_POD_NAME)
      STATS_PROTOCOL:                  tcp
      STATS_TELEGRAF:                  1
      STATS_SAMPLE_RATE:               1
      DRACHTIO_HOST:                   127.0.0.1
      DRACHTIO_SECRET:                 <set to the key 'DRACHTIO_SECRET' in secret 'jambonz-secrets'>  Optional: false
      DRACHTIO_PORT:                   9022
      JWT_SECRET:                      <set to the key 'JWT_SECRET' in secret 'jambonz-secrets'>  Optional: false
      JAMBONZ_RECORD_WS_USERNAME:      jambonz
      JAMBONZ_RECORD_WS_PASSWORD:      <set to the key 'JWT_SECRET' in secret 'jambonz-secrets'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jc8qv (ro)
  smpp:
    Container ID:   containerd://7c9808e89e8599faf2178a56554bf8f2bea9bd5d861f2474b49ce50dde31c070
    Image:          jambonz/smpp-esme:0.8.5
    Image ID:       docker.io/jambonz/smpp-esme@sha256:ba1684d6208c5451dc3eefc24484058a8f5bfd3fecf62e5f6719bff6092afff6
    Port:           80/TCP
    Host Port:      80/TCP
    State:          Running
      Started:      Tue, 12 Dec 2023 08:52:44 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      NODE_ENV:                 production
      JAMBONES_LOGLEVEL:        info
      K8S:                      1
      HTTP_PORT:                80
      AVOID_UDH:                1
      JAMBONES_REDIS_HOST:      redis.db
      JAMBONES_REDIS_PORT:      6379
      JAMBONES_MYSQL_DATABASE:  jambones
      JAMBONES_MYSQL_HOST:      mysql.db
      JAMBONES_MYSQL_USER:      jambones
      JAMBONES_MYSQL_PASSWORD:  <set to the key 'MYSQL_PASSWORD' in secret 'jambonz-secrets'>  Optional: false
      DRACHTIO_HOST:            127.0.0.1
      DRACHTIO_SECRET:          <set to the key 'DRACHTIO_SECRET' in secret 'jambonz-secrets'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jc8qv (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  jambonz-sbc-sip-conf:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      jambonz-sbc-sip-conf
    Optional:  false
  drachtio-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  drachtio-certs
    Optional:    false
  kube-api-access-jc8qv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              voip-environment=sip
Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
                             sip:NoSchedule op=Exists
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  8m18s                   default-scheduler  Successfully assigned jambonz/jambonz-sbc-sip-r8vsr to gke-jambonz-cluster-1-sip-06c86148-x04f
  Normal   Pulled     8m18s                   kubelet            Container image "kanisterio/mysql-sidecar:0.40.0" already present on machine
  Normal   Created    8m18s                   kubelet            Created container db-create-wait
  Normal   Started    8m18s                   kubelet            Started container db-create-wait
  Normal   Started    8m17s                   kubelet            Started container smpp
  Normal   Created    8m17s                   kubelet            Created container smpp
  Normal   Pulled     8m17s                   kubelet            Container image "jambonz/smpp-esme:0.8.5" already present on machine
  Normal   Started    8m16s (x2 over 8m17s)   kubelet            Started container sidecar
  Warning  BackOff    8m15s                   kubelet            Back-off restarting failed container sidecar in pod jambonz-sbc-sip-r8vsr_jambonz(5fb901bb-df8e-4a8e-8e7c-02e584308aa4)
  Normal   Created    8m1s (x3 over 8m17s)    kubelet            Created container sidecar
  Normal   Pulled     8m1s (x3 over 8m17s)    kubelet            Container image "jambonz/sbc-sip-sidecar:0.8.5" already present on machine
  Normal   Started    8m1s (x3 over 8m17s)    kubelet            Started container drachtio
  Normal   Created    8m1s (x3 over 8m17s)    kubelet            Created container drachtio
  Normal   Pulled     8m1s (x3 over 8m17s)    kubelet            Container image "drachtio/drachtio-server:0.8.24" already present on machine
  Warning  BackOff    3m11s (x26 over 8m15s)  kubelet            Back-off restarting failed container drachtio in pod jambonz-sbc-sip-r8vsr_jambonz(5fb901bb-df8e-4a8e-8e7c-02e584308aa4)

Am I missing any steps here?
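
For what it's worth, the decisive line in the drachtio log above appears to be `tls_init_context: error loading CA list: cafile.pem`: the TLS transport is trying to load a CA chain file that isn't in the mounted secret, so the tls transport fails to initialize and the server exits. With a CA issuer, cert-manager writes ca.crt alongside tls.crt and tls.key into the Certificate's target secret. A sketch of such a Certificate follows; the hostname is illustrative, and the secret name is taken from the drachtio-certs volume in the describe output.

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: drachtio-certificate
  namespace: jambonz
spec:
  secretName: drachtio-certs      # the secret the pod mounts at /etc/ssl/
  issuerRef:
    name: ca-issuer
    kind: Issuer
  commonName: sip.example.com     # illustrative SIP hostname
  dnsNames:
    - sip.example.com

Two things worth double-checking: the Certificate shown earlier writes to the secret jambonz-secrets while the pod mounts the secret drachtio-certs, and drachtio's CA/chain file would still need to point at the mounted ca.crt rather than the default cafile.pem (how to set that depends on the chart version).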

Jambonz -- SIP pod is always in CrashLoopBackOff state

Hi, I have successfully deployed jambonz on an on-prem K3s cluster. Per the setup requirements, it is a brand-new cluster with no pre-existing ingress controller installed.

ef.com is my internal domain, so I can reach it internally.

Here are the main problems that I'm seeing.

Node labels and taints that I applied:

root@devops213:~# kubectl label node devops216.ef.com "voip-environment=sip"
node/devops216.ef.com labeled
root@devops213:~# kubectl label node devops217.ef.com "voip-environment=rtp"
node/devops217.ef.com labeled
root@devops213:~# kubectl taint node devops216.ef.com "sip=true:NoSchedule"
node/devops216.ef.com tainted
root@devops213:~# kubectl taint node devops217.ef.com "rtp=true:NoSchedule"
node/devops217.ef.com tainted

Added the Helm repo:

root@devops213:~# helm repo add jambonz https://jambonz.github.io/helm-charts/
"jambonz" has been added to your repositories

Helm install:

helm install --namespace=jambonz \
--set "global.db.namespace=jambonz-db" \
--set "global.monitoring.namespace=jambonz-monitoring" \
--set "monitoring.grafana.hostname=grafana.ef.com" \
--set "monitoring.homer.hostname=homer.ef.com" \
--set "monitoring.jaeger.hostname=jaeger.ef.com" \
--set "webapp.hostname=portal.ef.com" \
--set "api.hostname=api.ef.com" \
--set cloud=none \
jambonz jambonz/jambonz

1 -- The SIP pod is always in CrashLoopBackOff state:

NAME                               READY   STATUS             RESTARTS         AGE
api-server-57b87d96cb-7j5pt        1/1     Running            0                149m
feature-server-86ff98b7f-xjf72     3/3     Running            0                149m
jambonz-sbc-rtp-vqhw8              2/2     Running            0                149m
jambonz-sbc-sip-9fqmb              1/3     CrashLoopBackOff   22 (2m57s ago)   35m
sbc-call-router-568678c4ff-dxppf   1/1     Running            0                149m
sbc-inbound-56bb6b697b-bzmpm       1/1     Running            5 (146m ago)     149m
sbc-outbound-6789b5bb67-tqx8q      1/1     Running            5 (147m ago)     149m
sbc-register-7bf8d6dc96-7jfpl      1/1     Running            0                149m
webapp-7859685df4-hmrft            1/1     Running            0                149m

2 -- Logs for the pod's individual containers are given below:

 kubectl -n jambonz logs -f jambonz-sbc-sip-9fqmb drachtio
SipTransport::init - contact: sip::5060;transport=udp,tcp
Uncaught exception: SipTransport::init - invalid contact sip::5060;transport=udp,tcp
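
This drachtio failure looks like the root cause: with CLOUD=none, the --cloud-deployment flag apparently resolves no local IP, so the contact sip::5060;transport=udp,tcp ends up with an empty host and the process exits. The sbc-options-handler ECONNREFUSED on 127.0.0.1:9022 below is just a downstream symptom of drachtio's admin port never coming up. For reference, drachtio accepts a wildcard contact that binds every local address; a minimal sketch of the mounted ConfigMap with such a contact is given below (the ConfigMap name matches the describe output further down, but the XML fragment is a sketch, not the chart's actual template).

apiVersion: v1
kind: ConfigMap
metadata:
  name: jambonz-sbc-sip-conf
  namespace: jambonz
data:
  drachtio.conf.xml: |
    <drachtio>
      <sip>
        <contacts>
          <!-- wildcard host: listen on all local addresses, port 5060 -->
          <contact>sip:*:5060;transport=udp,tcp</contact>
        </contacts>
      </sip>
    </drachtio>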

sbc-options-handler

root@devops213:~# kubectl -n jambonz logs -f jambonz-sbc-sip-9fqmb sbc-options-handler

> [email protected] start
> node app

node:events:498
      throw er; // Unhandled 'error' event
      ^

Error: connect ECONNREFUSED 127.0.0.1:9022
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1157:16)
Emitted 'error' event on Srf instance at:
    at Immediate.<anonymous> (/opt/app/node_modules/drachtio-srf/lib/srf.js:84:64)
    at processImmediate (node:internal/timers:466:21) {
  errno: -111,
  code: 'ECONNREFUSED',
  syscall: 'connect',
  address: '127.0.0.1',
  port: 9022
}

Node.js v17.4.0
npm notice
npm notice New minor version of npm available! 8.3.1 -> 8.15.1
npm notice Changelog: <https://github.com/npm/cli/releases/tag/v8.15.1>
npm notice Run `npm install -g npm@8.15.1` to update!
npm notice

SMPP

root@devops213:~# kubectl -n jambonz logs -f jambonz-sbc-sip-9fqmb smpp

> [email protected] start
> node app

{"level":30, "time": "2022-07-28T12:45:22.241Z","pid":19,"hostname":"devops216.ef.com","msg":"jambonz-smpp-esme listening for api requests at http://localhost:80"}
{"level":30, "time": "2022-07-28T12:45:22.249Z","pid":19,"hostname":"devops216.ef.com","msg":"jambonz-smpp-esme listening for smpp at http://localhost:2775"}

Pod describe output:

root@devops213:~# kubectl describe pod jambonz-sbc-sip-9fqmb -n jambonz
Name:         jambonz-sbc-sip-9fqmb
Namespace:    jambonz
Priority:     0
Node:         devops216.ef.com/192.168.2.216
Start Time:   Thu, 28 Jul 2022 17:45:04 +0500
Labels:       app=jambonz-sbc-sip
              controller-revision-hash=7486447d86
              pod-template-generation=1
Annotations:  <none>
Status:       Running
IP:           192.168.2.216
IPs:
  IP:           192.168.2.216
Controlled By:  DaemonSet/jambonz-sbc-sip
Init Containers:
  db-create-wait:
    Container ID:  containerd://2169eddea7d92256c9dfbff048b2444fe826d03c9f46748b002436c9c0082535
    Image:         kanisterio/mysql-sidecar:0.40.0
    Image ID:      docker.io/kanisterio/mysql-sidecar@sha256:7300e6d158fd9a82efef1387ac4cbfbbcebae1de27772dc69ae880721ab9cee4
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      until mysql -u jambones -D jambones -h mysql.jambonz-db -p${MYSQL_PASSWORD} --protocol=tcp -e "select count(*) from accounts";
      do
        sleep 5
      done

    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 28 Jul 2022 17:45:05 +0500
      Finished:     Thu, 28 Jul 2022 17:45:05 +0500
    Ready:          True
    Restart Count:  0
    Environment:
      MYSQL_PASSWORD:  <set to the key 'MYSQL_PASSWORD' in secret 'jambonz-secrets'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4m68s (ro)
Containers:
  drachtio:
    Container ID:  containerd://4d27b46fff58208824e5452b780802a6307b5508c6cd5e6d2df82bd18b36fdfb
    Image:         drachtio/drachtio-server:0.8.17-rc1
    Image ID:      docker.io/drachtio/drachtio-server@sha256:1b0a6e9f0b811ff9f9d70b8b5845337f5237f517992fe1c31a51bf6d544179ce
    Ports:         9022/TCP, 5060/UDP, 5060/TCP
    Host Ports:    9022/TCP, 5060/UDP, 5060/TCP
    Args:
      drachtio
      --loglevel
      info
      --cloud-deployment
      --sofia-loglevel
      3
      --homer
      heplify-server.jambonz-monitoring
      --homer-id
      10
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 28 Jul 2022 19:24:10 +0500
      Finished:     Thu, 28 Jul 2022 19:24:10 +0500
    Ready:          False
    Restart Count:  24
    Environment:
      DRACHTIO_SECRET:       <set to the key 'DRACHTIO_SECRET' in secret 'jambonz-secrets'>  Optional: false
      SOFIA_SEARCH_DOMAINS:  1
      SOFIA_SRES_NO_CACHE:   1
      CLOUD:                 none
      IMDSv2:
    Mounts:
      /etc/drachtio.conf.xml from jambonz-sbc-sip-conf (rw,path="drachtio.conf.xml")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4m68s (ro)
  sbc-options-handler:
    Container ID:   containerd://beddab0598c300f43e3226b41f046069ade3e0e2e567d57c9d89cde7970c05e1
    Image:          jambonz/sbc-options-handler:0.7.5
    Image ID:       docker.io/jambonz/sbc-options-handler@sha256:325e5727907dd35d496e8a67a024ab35693b8448b6190f1aefee0adad4f7397f
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 28 Jul 2022 19:26:25 +0500
      Finished:     Thu, 28 Jul 2022 19:26:31 +0500
    Ready:          False
    Restart Count:  24
    Environment:
      NODE_ENV:                 production
      K8S:                      1
      JAMBONES_LOGLEVEL:        info
      JAMBONES_REDIS_HOST:      redis.jambonz-db
      JAMBONES_REDIS_PORT:      6379
      JAMBONES_MYSQL_DATABASE:  jambones
      JAMBONES_MYSQL_HOST:      mysql.jambonz-db
      JAMBONES_MYSQL_USER:      jambones
      JAMBONES_MYSQL_PASSWORD:  <set to the key 'MYSQL_PASSWORD' in secret 'jambonz-secrets'>  Optional: false
      DRACHTIO_HOST:            127.0.0.1
      DRACHTIO_SECRET:          <set to the key 'DRACHTIO_SECRET' in secret 'jambonz-secrets'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4m68s (ro)
  smpp:
    Container ID:   containerd://7fb01bba5494e45520f3dc779dfa6376758dcae5bf7097c6e60bf02e6c623902
    Image:          jambonz/smpp-esme:0.7.5
    Image ID:       docker.io/jambonz/smpp-esme@sha256:858aab5e25dda9339d60eb0c17448c3fb5ba5f6721ad9913d5bc06dbb7f91b1f
    Port:           80/TCP
    Host Port:      80/TCP
    State:          Running
      Started:      Thu, 28 Jul 2022 17:45:14 +0500
    Ready:          True
    Restart Count:  0
    Environment:
      NODE_ENV:                 production
      JAMBONES_LOGLEVEL:        info
      K8S:                      1
      HTTP_PORT:                80
      AVOID_UDH:                1
      JAMBONES_REDIS_HOST:      redis.jambonz-db
      JAMBONES_REDIS_PORT:      6379
      JAMBONES_MYSQL_DATABASE:  jambones
      JAMBONES_MYSQL_HOST:      mysql.jambonz-db
      JAMBONES_MYSQL_USER:      jambones
      JAMBONES_MYSQL_PASSWORD:  <set to the key 'MYSQL_PASSWORD' in secret 'jambonz-secrets'>  Optional: false
      DRACHTIO_HOST:            127.0.0.1
      DRACHTIO_SECRET:          <set to the key 'DRACHTIO_SECRET' in secret 'jambonz-secrets'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4m68s (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  jambonz-sbc-sip-conf:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      jambonz-sbc-sip-conf
    Optional:  false
  kube-api-access-4m68s:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              voip-environment=sip
Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
                             sip:NoSchedule op=Exists
Events:
  Type     Reason   Age                  From     Message
  ----     ------   ----                 ----     -------
  Warning  BackOff  3m (x484 over 102m)  kubelet  Back-off restarting failed container

Any clue or comment will be much appreciated.

Also thanks for your YouTube video. It worked well for deploying the cluster.
