NATS Streaming has reached its end of life.
It is no longer supported and has been replaced by JetStream.
JetStream is built into the NATS Server and is supported by all major clients. Check the examples here.
NATS Streaming Operator
License: Apache License 2.0
Could anybody share their experience with installing a NATS Streaming cluster on OpenShift or OKD?
Thank you
It would be helpful to have a NatsStreamingCluster example with all possible options, just to see everything that is available.
For example, one of the examples shows setting spec.config.storeDir.
What other options are available here besides storeDir?
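Pulling together the fields that show up across the examples in this repo, a fuller sketch might look like the one below. This is not an authoritative list; the fields shown are only the ones that appear elsewhere in these issues, and the operator's type definitions remain the source of truth:

```yaml
---
apiVersion: "streaming.nats.io/v1alpha1"
kind: "NatsStreamingCluster"
metadata:
  name: "example-stan"
spec:
  size: 3                                   # number of streaming server nodes
  natsSvc: "example-nats"                   # NATS service the nodes connect to
  image: "nats-streaming:latest"            # override the default server image
  configFile: "/etc/stan/config/stan.conf"  # config file provided via a mounted volume
  config:
    storeDir: "/pv/stan"                    # file store location
    ftGroup: "stan"                         # fault-tolerance group name
    debug: true
    trace: true
    raftLogging: true
  template: {}                              # pod template overrides (volumes, containers, annotations, ...)
```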
Similar to the NATS Operator, we could look into whether it is feasible to have a sidecar for reloading the streaming server (though currently the NATS Streaming Server does not support reloading).
kubernetes: v1.9.5
nats operator: latest
nats streaming operator: latest
After starting the NATS cluster as described in the README, everything seems fine, but when starting 3 STAN pods using the command below:
echo '
---
apiVersion: "streaming.nats.io/v1alpha1"
kind: "NatsStreamingCluster"
metadata:
  name: "example-stan"
spec:
  size: 3
  natsSvc: "example-nats"
' | kubectl -n nats-io apply -f -
none of the 3 pods can connect to the NATS cluster; the error message looks like:
[1] 2018/08/02 06:52:48.194822 [INF] STREAM: Starting nats-streaming-server[example-stan] version 0.10.2
[1] 2018/08/02 06:52:48.194920 [INF] STREAM: ServerID: 5bQLEaI0IxgiQ8GbWYkIzr
[1] 2018/08/02 06:52:48.194927 [INF] STREAM: Go version: go1.10.3
[1] 2018/08/02 06:52:50.195774 [INF] STREAM: Shutting down.
[1] 2018/08/02 06:52:50.195843 [FTL] STREAM: Failed to start: nats: no servers available for connection
sometimes one pod has errors:
[1] 2018/08/02 06:52:55.956946 [INF] STREAM: Starting nats-streaming-server[example-stan] version 0.10.2
[1] 2018/08/02 06:52:55.956982 [INF] STREAM: ServerID: qwzK7kMOw6Dxz91FiutahR
[1] 2018/08/02 06:52:55.956988 [INF] STREAM: Go version: go1.10.3
[1] 2018/08/02 06:52:55.974620 [INF] STREAM: Recovering the state...
[1] 2018/08/02 06:52:55.974827 [INF] STREAM: No recovered state
[1] 2018/08/02 06:52:55.974921 [INF] STREAM: Cluster Node ID : qwzK7kMOw6Dxz91Fiutald
[1] 2018/08/02 06:52:55.974928 [INF] STREAM: Cluster Log Path: example-stan/qwzK7kMOw6Dxz91Fiutald
[1] 2018/08/02 06:53:01.082008 [INF] STREAM: Shutting down.
[1] 2018/08/02 06:53:01.082246 [FTL] STREAM: Failed to start: failed to join Raft group example-stan
If I start the STAN pods one by one, sometimes the first one can connect, and sometimes two can.
I'm trying to set up a NATS Streaming cluster with three nodes on local Kubernetes following the operator docs, and I'm getting continuous connection failures from some pods.
As you can see, stan-cluster-poc-1 became cluster leader.
[1] 2020/07/06 19:25:28.972786 [INF] STREAM: Starting nats-streaming-server[stan-cluster-poc] version 0.18.0
[1] 2020/07/06 19:25:28.972914 [INF] STREAM: ServerID: S2CN7Xkm9RPjaSmk17QhUu
[1] 2020/07/06 19:25:28.972921 [INF] STREAM: Go version: go1.14.4
[1] 2020/07/06 19:25:28.972924 [INF] STREAM: Git commit: [026e3a6]
[1] 2020/07/06 19:25:29.076044 [INF] STREAM: Recovering the state...
[1] 2020/07/06 19:25:29.086609 [INF] STREAM: No recovered state
[1] 2020/07/06 19:25:29.092785 [INF] STREAM: Cluster Node ID : "stan-cluster-poc-1"
[1] 2020/07/06 19:25:29.092886 [INF] STREAM: Cluster Log Path: /persistence/stan/raft/stan-cluster-poc-1
[1] 2020/07/06 19:25:29.152058 [INF] STREAM: raft: initial configuration: index=0 servers=[]
[1] 2020/07/06 19:25:29.153059 [INF] STREAM: raft: entering follower state: follower="Node at stan-cluster-poc."stan-cluster-poc-1".stan-cluster-poc [Follower]" leader=
[1] 2020/07/06 19:25:29.158236 [DBG] STREAM: Bootstrapping Raft group stan-cluster-poc as seed node
[1] 2020/07/06 19:25:29.166935 [DBG] STREAM: Discover subject: _STAN.discover.stan-cluster-poc
[1] 2020/07/06 19:25:29.166977 [DBG] STREAM: Publish subject: _STAN.pub.stan-cluster-poc.>
[1] 2020/07/06 19:25:29.166982 [DBG] STREAM: Subscribe subject: _STAN.sub.stan-cluster-poc
[1] 2020/07/06 19:25:29.166985 [DBG] STREAM: Subscription Close subject: _STAN.subclose.stan-cluster-poc
[1] 2020/07/06 19:25:29.166988 [DBG] STREAM: Unsubscribe subject: _STAN.unsub.stan-cluster-poc
[1] 2020/07/06 19:25:29.166991 [DBG] STREAM: Close subject: _STAN.close.stan-cluster-poc
[1] 2020/07/06 19:25:29.170852 [INF] STREAM: Message store is RAFT_FILE
[1] 2020/07/06 19:25:29.171036 [INF] STREAM: Store location: /persistence/stan/stan-cluster-poc-1
[1] 2020/07/06 19:25:29.171376 [INF] STREAM: ---------- Store Limits ----------
[1] 2020/07/06 19:25:29.171513 [INF] STREAM: Channels: 100 *
[1] 2020/07/06 19:25:29.171580 [INF] STREAM: --------- Channels Limits --------
[1] 2020/07/06 19:25:29.172067 [INF] STREAM: Subscriptions: 1000 *
[1] 2020/07/06 19:25:29.172346 [INF] STREAM: Messages : 1000000 *
[1] 2020/07/06 19:25:29.172706 [INF] STREAM: Bytes : 976.56 MB *
[1] 2020/07/06 19:25:29.172915 [INF] STREAM: Age : unlimited *
[1] 2020/07/06 19:25:29.173035 [INF] STREAM: Inactivity : unlimited *
[1] 2020/07/06 19:25:29.173215 [INF] STREAM: ----------------------------------
[1] 2020/07/06 19:25:33.103290 [WRN] STREAM: raft: heartbeat timeout reached, starting election: last-leader=
[1] 2020/07/06 19:25:33.103356 [INF] STREAM: raft: entering candidate state: node="Node at stan-cluster-poc."stan-cluster-poc-1".stan-cluster-poc [Candidate]" term=2
[1] 2020/07/06 19:25:33.121018 [DBG] STREAM: raft: votes: needed=1
[1] 2020/07/06 19:25:33.121087 [DBG] STREAM: raft: vote granted: from="stan-cluster-poc-1" term=2 tally=1
[1] 2020/07/06 19:25:33.121244 [INF] STREAM: raft: election won: tally=1
[1] 2020/07/06 19:25:33.121276 [INF] STREAM: raft: entering leader state: leader="Node at stan-cluster-poc."stan-cluster-poc-1".stan-cluster-poc [Leader]"
[1] 2020/07/06 19:25:33.121463 [INF] STREAM: server became leader, performing leader promotion actions
[1] 2020/07/06 19:25:33.147520 [INF] STREAM: finished leader promotion actions
[1] 2020/07/06 19:25:33.147612 [INF] STREAM: Streaming Server is ready
However, it fails to establish a connection with one or more nodes, depending on the number of cluster nodes (e.g.):
[1] 2020/07/06 19:34:45.125722 [WRN] STREAM: raft: failed to contact: server-id="stan-cluster-poc-3" time=1.000833233s
[1] 2020/07/06 19:34:46.071788 [WRN] STREAM: raft: failed to contact: server-id="stan-cluster-poc-3" time=1.946740647s
[1] 2020/07/06 19:34:46.413845 [ERR] STREAM: raft: failed to heartbeat to: peer=stan-cluster-poc."stan-cluster-poc-3".stan-cluster-poc error="nats: timeout"
[1] 2020/07/06 19:34:54.349017 [ERR] STREAM: raft: failed to appendEntries to: peer="{Voter "stan-cluster-poc-3" stan-cluster-poc."stan-cluster-poc-3".stan-cluster-poc}" error="natslog: read timeout"
On stan-cluster-poc-2 I received the warning below:
[1] 2020/07/06 19:25:33.240858 [WRN] STREAM: raft: failed to get previous log: previous-index=4 last-index=0 error="log not found"
On stan-cluster-poc-3 I received the warning below:
[1] 2020/07/06 19:25:34.340570 [WRN] STREAM: raft: failed to get previous log: previous-index=5 last-index=0 error="log not found"
OSX version 10.14.6
docker 19.03.8
kubernetes version 1.16.5
persistent volume with storageClassName "local-storage" and ReadWriteOnce mode
nats operator 0.7.2
nats-server version 2.1.7
nats streaming operator 0.3.0-v1alpha1
nats-streaming-server version 0.18.0
Trying to create NatsStreaming (STAN) pods with a replica count of 3, using a custom store dir on a persistent volume, but getting the error:
Unable to mount volumes for pod because "volume is already exclusively attached to one node and can't be attached to another"
This is because the NATS Streaming Operator doesn't use a StatefulSet. Is there any way to solve this?
I'm currently trying to get NATS Streaming (file-based persistence) working clustered on K8s on GCP, and was wondering what state this project is in for that particular use case.
Currently the basic setup seems broken: #61
And replies to that issue seem to suggest using automatic node updates isn't possible: #40
And this issue seems to suggest clustered, file-based persistence isn't working: #23
Have I misunderstood any of these issues? And if not, is there a project that meets these needs?
I don't think this operator needs all verbs on these Kubernetes objects. We should make sure it has less access.
# Allow actions on basic Kubernetes objects
- apiGroups: [""]
  resources:
    - configmaps
    - secrets
    - pods
    - services
    - serviceaccounts
    - serviceaccounts/token
    - endpoints
    - events
  verbs: ["*"]
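A sketch of a tighter rule set, assuming the operator mainly creates, reads, updates, and deletes these objects and emits events. The exact verbs it needs would have to be confirmed against the operator's actual API calls before shipping this:

```yaml
# Sketch: scope verbs down instead of granting "*"
# (verify against the calls the operator actually makes)
- apiGroups: [""]
  resources:
    - configmaps
    - secrets
    - pods
    - services
    - serviceaccounts
    - endpoints
  verbs: ["create", "get", "list", "watch", "update", "delete"]
- apiGroups: [""]
  resources:
    - events
  verbs: ["create", "patch"]
```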
I got started with the nats-operator Helm chart. Everything works well there. As far as the streaming operator goes, see TODO.md (in this repo). Thanks for all the excellent work so far!!!
I'm trying to install/update the nats-streaming operator with PostgreSQL attached as storage, and I've recently run into a problem where it does not respond to changes in the NatsStreamingCluster resource.
$ k get NatsStreamingCluster -o yaml
apiVersion: v1
items:
- apiVersion: streaming.nats.io/v1alpha1
  kind: NatsStreamingCluster
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"streaming.nats.io/v1alpha1","kind":"NatsStreamingCluster","metadata":{"annotations":{},"name":"nats-streaming","namespace":"default"},"spec":{"configFile":"/etc/stan/config/secret.conf","natsSvc":"nats","size":2,"store":"SQL","template":{"spec":{"containers":[{"name":"nats-streaming","volumeMounts":[{"mountPath":"/etc/stan/config","name":"stan-secret","readOnly":true}]}],"volumes":[{"name":"stan-secret","secret":{"secretName":"stan-secret"}}]}}}}
    clusterName: ""
    creationTimestamp: "2019-02-26T11:42:42Z"
    generation: 1
    name: nats-streaming
    namespace: default
    resourceVersion: "51146622"
    selfLink: /apis/streaming.nats.io/v1alpha1/namespaces/default/natsstreamingclusters/nats-streaming
    uid: a0426fe2-39bb-11e9-b4aa-42010a9c00b8
  spec:
    configFile: /etc/stan/config/secret.conf
    natsSvc: nats
    size: 2
    store: SQL
    template:
      spec:
        containers:
        - name: nats-streaming
          volumeMounts:
          - mountPath: /etc/stan/config
            name: stan-secret
            readOnly: true
        volumes:
        - name: stan-secret
          secret:
            secretName: stan-secret
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
But after installing there is only one nats-streaming pod:
namespace is set to 'default'
NAME READY STATUS RESTARTS AGE
nats-1 1/1 Running 0 21m
nats-2 1/1 Running 0 21m
nats-3 1/1 Running 0 21m
nats-operator-56c4974b7f-wsp6j 1/1 Running 0 21m
nats-streaming-1 1/1 Running 0 15m
nats-streaming-operator-d5cbd6665-8k6rt 1/1 Running 0 21m
After editing the resource and setting size to a different value (4):
k edit NatsStreamingCluster
the operator didn't respond to the event.
!! There is only one instance when the SQL config is used; with the default installation using the file store, the correct number of instances comes up !!
Env: GKE
After installation from the README (each scenario), there is no way to connect to the NATS Streaming Server: no service is added to the Kubernetes cluster.
cluster installed with following commands
kubectl create namespace $NS
kubectl create secret generic stan-secret --from-file secret.conf
kubectl apply -f https://github.com/nats-io/nats-operator/releases/download/v0.3.0/deployment.yaml
kubectl apply -f https://raw.githubusercontent.com/nats-io/nats-streaming-operator/master/deploy/default-rbac.yaml
kubectl apply -f https://raw.githubusercontent.com/nats-io/nats-streaming-operator/master/deploy/deployment.yaml
kubectl apply -n$NS -f nats-cluster.yaml
# kubectl apply -n$NS -f nats-streaming-cluster-sql.yaml
kubectl apply -n$NS -f nats-streaming-cluster.yaml
where nats-cluster.yaml:
---
apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
  name: "nats"
  # namespace: "nats"
spec:
  size: 3
and nats-streaming-cluster.yaml:
---
apiVersion: "streaming.nats.io/v1alpha1"
kind: "NatsStreamingCluster"
metadata:
  name: "stan"
spec:
  # Number of nodes in the cluster
  size: 3

  # NATS Streaming Server image to use; by default
  # the operator will use a stable version
  image: "nats-streaming:latest"

  # Service to which NATS Streaming Cluster nodes will connect.
  natsSvc: "nats"

  config:
    debug: true
    trace: true
    raftLogging: true
Listing the services, the only ones present point to the NATS cluster:
❯ kgs
namespace is set to 'default'
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nats ClusterIP 10.103.197.224 <none> 4222/TCP 39m
nats-mgmt ClusterIP None <none> 6222/TCP,8222/TCP 39m
Originally I created a STAN cluster of size 3 with the STAN Operator successfully. However, if I change the size to 0, the CRD still keeps one replica up and running. Is this a bug? If not, can I request a feature to allow updating size to 0 in order to scale down the whole STAN cluster?
kubectl logs bmrg-dev-nats-streamer-2
[1] 2019/10/21 09:53:54.901940 [INF] STREAM: Starting nats-streaming-server[bmrg-dev-nats-streamer] version 0.16.2
[1] 2019/10/21 09:53:54.901996 [INF] STREAM: ServerID: h6KMqx3nUR7PZU62q3jKVZ
[1] 2019/10/21 09:53:54.902000 [INF] STREAM: Go version: go1.11.13
[1] 2019/10/21 09:53:54.902005 [INF] STREAM: Git commit: [910d6e1]
[1] 2019/10/21 09:53:56.910836 [INF] STREAM: Shutting down.
[1] 2019/10/21 09:53:56.911025 [FTL] STREAM: Failed to start: nats: no servers available for connection
"nats-operator:latest"
10-deployments.yaml connecteverything/nats-operator:0.6.0
deployment.yaml synadia/nats-streaming-operator:v0.2.2-v1alpha1
I recently implemented service accounts, and they work great for standard NATS communication on the nats-cluster. However, the NatsStreamingCluster isn't making use of the service accounts setup. Do I have to manually define which credentials it uses? And if so, where would I put this in the Kubernetes YAML config?
apiVersion: "streaming.nats.io/v1alpha1"
kind: "NatsStreamingCluster"
metadata:
  name: "stan-cluster"
spec:
  natsSvc: "nats-cluster"
  # ..... (rest of config is irrelevant and relates to persistent storage)
Any ideas?
The operator doesn't handle any updates to the NatsStreamingCluster spec other than the size setting. So if you update the server version, that won't get passed down to the managed pods. Same with annotations.
The operator does pass these properties down when creating new pods, but if the pods are already running, updates don't take effect. This is a serious limitation, since it prevents you from managing the streaming server version being run. It also matters if you are providing the configuration file via a ConfigMap as below, where you want to make sure the pods are restarted when the ConfigMap changes upon deploy, as recommended here:
---
apiVersion: "streaming.nats.io/v1alpha1"
kind: "NatsStreamingCluster"
metadata:
  name: "stan-cluster"
  namespace: {{ .Values.namespace }}
spec:
  natsSvc: {{ .Values.cluster.name }}
  version: "{{ .Values.stan_version }}"
  size: 3
  configFile: "/etc/stan/config/stan.conf"
  config:
    storeDir: "/pv/stan"
    ftGroup: "stan"
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "{{ .Values.metrics_port }}"
        checksum/config: {{ include (print $.Template.BasePath "/configs.yaml") . | sha256sum }}
    spec:
      volumes:
      - name: {{ .Values.config_name }}
        configMap:
          name: {{ .Values.config_name }}
      - name: stan-store-dir
        persistentVolumeClaim:
          claimName: {{ .Values.pvc_name }}
      containers:
      - name: nats-streaming
        volumeMounts:
        - mountPath: /etc/stan/config
          name: {{ .Values.config_name }}
          readOnly: true
        - mountPath: /pv
          name: stan-store-dir
      - name: metrics
        image: synadia/prometheus-nats-exporter:0.2.0
        args: ["-connz", "-routez", "-subz", "-varz", "-channelz", "-serverz", "-DV", "http://localhost:8222/"]
        ports:
        - name: metrics
          containerPort: {{ .Values.metrics_port }}
          protocol: TCP
In the case of the version, not even deleting the pods will get them restarted with the updated streaming server version. This does work with the NATS operator, however.
In your documentation there is an example for a PostgreSQL DB store and a PVC.
Do I have to use one of these, or can I use Mongo, for example? If so, please provide an example config for the secret.
Is there a way to enable HTTP monitoring with the nats-streaming-operator, i.e. /streaming/serverz and such?
I tried adding container args to the k8s config.
---
apiVersion: streaming.nats.io/v1alpha1
kind: NatsStreamingCluster
metadata:
  name: nats-streaming-cluster
spec:
  size: 3
  natsSvc: nats-cluster
  template:
    spec:
      containers:
      - args: ["--http_port=8222", "--stan_debug=true", "--stan_trace=true"]
But the default operator config doesn't create a service that would work for that.
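As a workaround sketch, one could create a Service by hand that targets the monitoring port on the streaming pods. The label selector below (app/stan_cluster) is an assumption based on labels seen elsewhere in these issues and would need to be verified against the labels the operator actually sets:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nats-streaming-monitoring
spec:
  selector:
    app: nats-streaming                   # assumed pod label
    stan_cluster: nats-streaming-cluster  # assumed pod label
  ports:
  - name: monitoring
    port: 8222
    targetPort: 8222  # matches the --http_port=8222 arg above
    protocol: TCP
```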
Currently the README.md points out in the Getting Started section that you need to install nats-operator first, and then it gives you a command installing version v0.5.0 of it. At the moment, the latest version of nats-operator is 0.7.2.
PS: this info tends to get outdated frequently. Should it actually point to a fixed version? If so, a compatibility matrix between nats-operator versions and nats-streaming-operator versions should be maintained as well, right?
If you change the nats-io namespace to, say, default, the streaming server never connects to the service.
Is it OK to ask the operator to run NATS and STAN in a different namespace than the one the operator runs in?
I installed the nats-streaming-operator on my minikube cluster. It installed without any issue. Since the nats service created by nats-operator is a ClusterIP one, I was unable to send data to that service externally. So I tried two approaches to send data.
Create a service like the one below, of type LoadBalancer, and try to send data:
apiVersion: v1
kind: Service
metadata:
  name: nats-streaming
  namespace: default
  labels:
    app: nats-streaming
    stan_cluster: siddhi-stan
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: natsservice
    port: 4222
    protocol: TCP
    targetPort: 4222
  - name: natsservice-proxy
    port: 8222
    protocol: TCP
    targetPort: 8222
  selector:
    app: nats-streaming
    stan_cluster: siddhi-stan
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
Create a separate Kubernetes deployment of https://github.com/nats-io/go-nats-streaming/tree/master/examples and use that deployment to send data.
I used the following command to send requests.
go run $GOPATH/src/github.com/nats-io/go-nats-streaming/examples/stan-pub/main.go --server nats://nats-streaming.default.svc.cluster.local:4222 <SUBJECT> "{\"name\":\"data\"}"
But with both approaches, I got an error:
Can't connect: stan: connect request timeout.
Make sure a NATS Streaming Server is running at: nats://nats-streaming:4222
exit status 1
So I want to know: how can I send data externally to the nats-streaming-server in the Kubernetes cluster?
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:28:14Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
We're running the NATS Streaming operator in GKE, and it was running fine. But after we updated the Kubernetes version, NATS clients are no longer able to connect to the NATS server.
$ kubectl get natsclusters.nats.io nats-cluster -o yaml
apiVersion: nats.io/v1alpha2
kind: NatsCluster
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"nats.io/v1alpha2","kind":"NatsCluster","metadata":{"annotations":{},"name":"nats-cluster","namespace":"default"},"spec":{"size":3}}
  creationTimestamp: "2019-05-09T05:56:01Z"
  generation: 4
  name: nats-cluster
  namespace: default
  resourceVersion: "34972339"
  selfLink: /apis/nats.io/v1alpha2/namespaces/default/natsclusters/nats-cluster
  uid: 1f98955b-721f-11e9-b8dc-42010a8001a9
spec:
  size: 3
status:
  conditions:
  - reason: scaling cluster from 2 to 3 peers
    transitionTime: "2019-06-26T11:04:40Z"
    type: ScalingUp
  - reason: current state matches desired state
    transitionTime: "2019-06-26T11:05:53Z"
    type: Ready
  - reason: scaling cluster from 2 to 3 peers
    transitionTime: "2019-06-26T11:05:54Z"
    type: ScalingUp
  - reason: current state matches desired state
    transitionTime: "2019-06-26T11:06:06Z"
    type: Ready
  - reason: scaling cluster from 2 to 3 peers
    transitionTime: "2019-06-26T19:47:43Z"
    type: ScalingUp
  - reason: current state matches desired state
    transitionTime: "2019-06-26T19:48:12Z"
    type: Ready
  - reason: scaling cluster from 1 to 3 peers
    transitionTime: "2019-06-26T19:51:14Z"
    type: ScalingUp
  - reason: current state matches desired state
    transitionTime: "2019-06-26T19:52:07Z"
    type: Ready
  - reason: scaling cluster from 2 to 3 peers
    transitionTime: "2019-06-26T19:57:13Z"
    type: ScalingUp
  - reason: current state matches desired state
    transitionTime: "2019-06-26T19:57:38Z"
    type: Ready
  currentVersion: 1.4.0
  size: 3
$ kubectl get natsstreamingclusters.streaming.nats.io pharmer-cluster -o yaml
apiVersion: streaming.nats.io/v1alpha1
kind: NatsStreamingCluster
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"streaming.nats.io/v1alpha1","kind":"NatsStreamingCluster","metadata":{"annotations":{},"name":"pharmer-cluster","namespace":"default"},"spec":{"natsSvc":"nats-cluster","size":3}}
  creationTimestamp: "2019-05-09T05:58:23Z"
  generation: 1
  name: pharmer-cluster
  namespace: default
  resourceVersion: "19344769"
  selfLink: /apis/streaming.nats.io/v1alpha1/namespaces/default/natsstreamingclusters/pharmer-cluster
  uid: 747d5286-721f-11e9-b8dc-42010a8001a9
spec:
  natsSvc: nats-cluster
  size: 3
$ kubectl logs -f pharmer-cluster-1
[1] 2019/06/27 04:43:47.580129 [INF] STREAM: Starting nats-streaming-server[pharmer-cluster] version 0.11.2
[1] 2019/06/27 04:43:47.580191 [INF] STREAM: ServerID: ul5c5zg9XBBSq3PMBuRn4j
[1] 2019/06/27 04:43:47.580196 [INF] STREAM: Go version: go1.11.1
[1] 2019/06/27 04:43:47.604905 [INF] STREAM: Recovering the state...
[1] 2019/06/27 04:43:47.605177 [INF] STREAM: No recovered state
[1] 2019/06/27 04:43:47.605272 [INF] STREAM: Cluster Node ID : "pharmer-cluster-1"
[1] 2019/06/27 04:43:47.605287 [INF] STREAM: Cluster Log Path: pharmer-cluster/"pharmer-cluster-1"
[1] 2019/06/27 04:43:52.714067 [INF] STREAM: Shutting down.
[1] 2019/06/27 04:43:52.714455 [FTL] STREAM: Failed to start: failed to join Raft group pharmer-cluster
From the NATS client:
$ kubectl logs -f pharmer-clsuter-868bd8d465-xwv77
stan: connect request timeout
$ kubectl logs -f nats-cluster-1
[1] 2019/06/26 19:51:55.159570 [INF] Starting nats-server version 1.4.0
[1] 2019/06/26 19:51:55.159628 [INF] Git commit [ce2df36]
[1] 2019/06/26 19:51:55.159856 [INF] Starting http monitor on 0.0.0.0:8222
[1] 2019/06/26 19:51:55.159907 [INF] Listening for client connections on 0.0.0.0:4222
[1] 2019/06/26 19:51:55.159915 [INF] Server is ready
[1] 2019/06/26 19:51:55.160129 [INF] Listening for route connections on 0.0.0.0:6222
[1] 2019/06/26 19:51:55.169644 [INF] 10.60.3.20:6222 - rid:1 - Route connection created
[1] 2019/06/26 19:51:55.173249 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-1.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
[1] 2019/06/26 19:51:55.173271 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-3.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
[1] 2019/06/26 19:51:56.183583 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-3.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
[1] 2019/06/26 19:51:56.183612 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-1.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
[1] 2019/06/26 19:51:56.565355 [INF] 10.60.3.20:52526 - rid:2 - Route connection created
[1] 2019/06/26 19:51:57.190034 [INF] 10.60.1.16:53246 - rid:4 - Route connection created
[1] 2019/06/26 19:51:57.190107 [INF] 10.60.1.16:6222 - rid:3 - Route connection created
[1] 2019/06/26 19:51:57.195326 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-3.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
[1] 2019/06/26 19:51:58.207202 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-3.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
[1] 2019/06/26 19:51:59.227256 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-3.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
[1] 2019/06/26 19:52:00.240231 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-3.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
[1] 2019/06/26 19:52:01.251089 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-3.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
[1] 2019/06/26 19:52:02.262939 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-3.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
[1] 2019/06/26 19:52:03.273854 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-3.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
[1] 2019/06/26 19:52:03.409542 [INF] 10.60.1.19:51194 - rid:5 - Route connection created
[1] 2019/06/26 19:52:04.291668 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-3.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
[1] 2019/06/26 19:52:05.305117 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-3.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
[1] 2019/06/26 19:52:06.348944 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-3.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
[1] 2019/06/26 19:52:07.360903 [INF] 10.60.1.19:6222 - rid:10 - Route connection created
[1] 2019/06/26 19:55:57.369605 [ERR] Error trying to connect to route: dial tcp 10.60.3.20:6222: i/o timeout
[1] 2019/06/26 19:55:59.370067 [ERR] Error trying to connect to route: dial tcp 10.60.3.20:6222: i/o timeout
[1] 2019/06/26 19:56:01.370584 [ERR] Error trying to connect to route: dial tcp 10.60.3.20:6222: i/o timeout
[1] 2019/06/26 19:56:03.371988 [ERR] Error trying to connect to route: dial tcp 10.60.3.20:6222: i/o timeout
[1] 2019/06/26 19:56:05.372566 [ERR] Error trying to connect to route: dial tcp 10.60.3.20:6222: i/o timeout
[1] 2019/06/26 19:56:07.373118 [ERR] Error trying to connect to route: dial tcp 10.60.3.20:6222: i/o timeout
[1] 2019/06/26 19:56:09.373677 [ERR] Error trying to connect to route: dial tcp 10.60.3.20:6222: i/o timeout
[1] 2019/06/26 19:56:11.374126 [ERR] Error trying to connect to route: dial tcp 10.60.3.20:6222: i/o timeout
[1] 2019/06/26 19:56:13.374572 [ERR] Error trying to connect to route: dial tcp 10.60.3.20:6222: i/o timeout
[1] 2019/06/26 19:56:15.375083 [ERR] Error trying to connect to route: dial tcp 10.60.3.20:6222: i/o timeout
[1] 2019/06/26 19:56:17.375585 [ERR] Error trying to connect to route: dial tcp 10.60.3.20:6222: i/o timeout
[1] 2019/06/26 19:56:19.376117 [ERR] Error trying to connect to route: dial tcp 10.60.3.20:6222: i/o timeout
[1] 2019/06/26 19:56:21.376767 [ERR] Error trying to connect to route: dial tcp 10.60.3.20:6222: i/o timeout
[1] 2019/06/26 19:56:23.377261 [ERR] Error trying to connect to route: dial tcp 10.60.3.20:6222: i/o timeout
[1] 2019/06/26 19:56:25.377780 [ERR] Error trying to connect to route: dial tcp 10.60.3.20:6222: i/o timeout
[1] 2019/06/26 19:56:26.384664 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-2.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
[1] 2019/06/26 19:56:27.390153 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-2.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
[1] 2019/06/26 19:56:28.397100 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-2.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
[1] 2019/06/26 19:56:29.402200 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-2.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
[1] 2019/06/26 19:56:30.407839 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-2.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
[1] 2019/06/26 19:56:31.417194 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-2.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
[1] 2019/06/26 19:56:32.423944 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-2.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
[1] 2019/06/26 19:56:33.429647 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-2.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
[1] 2019/06/26 19:56:34.436051 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-2.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
[1] 2019/06/26 19:56:35.441762 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-2.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
[1] 2019/06/26 19:56:36.452196 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-2.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
[1] 2019/06/26 19:56:37.458081 [ERR] Error trying to connect to route: dial tcp: lookup nats-cluster-2.nats-cluster-mgmt.default.svc on 10.63.240.10:53: no such host
Hey,
I tried your PV support with EFS RWX, but it wasn't that good, so I have a proposed implementation for PV support inside the operator, e.g.:
# Default values for nats-streaming-cluster
nats:
  replicas: 3
  clusterId: "nats-cluster"
stan:
  replicas: 3
  storage:
    size: 10Gi
    keepOnDelete: true
  clusterId: "nats-streaming-cluster"
where it actually creates the PVs with the pods themselves, or mounts them if they already exist. The only thing I am not sure about is what happens when you delete the operator: you could leave the volumes hanging around if you delete your cluster.
What do you guys think?
For release v0.4.0, the deployment.yaml file attached to the release page contains a reference to a Docker image called synadia/nats-streaming-operator:v0.4.0, which does not exist. Rather, an image called synadia/nats-streaming-operator:0.4.0 exists (the v is missing).
Also, the deployment.yaml file located in the repository at the commit tagged v0.4.0 contains a reference to the Docker image synadia/nats-streaming-operator:v0.3.0-v1alpha1.
You could sort this out by updating the deploy directory before cutting a release, and releasing a v0.4.1 or 0.4.1 release on both GitHub and Docker Hub once it's consistent.
Wasn't resilience supposed to be the great benefit of deploying NATS clusters? I came into the office this morning and found ALL nats-streaming-1-* pods in CrashLoopBackOff with around 500 restarts; meanwhile, ALL messages have obviously been lost.
nats-cluster-1-1 1/1 Running 0 41h
nats-cluster-1-2 1/1 Running 0 42h
nats-cluster-1-3 1/1 Running 0 41h
nats-operator-5b47bc4f8-77glm 1/1 Running 0 42h
nats-streaming-1-1 0/1 CrashLoopBackOff 490 42h
nats-streaming-1-2 0/1 CrashLoopBackOff 486 41h
nats-streaming-1-3 0/1 CrashLoopBackOff 490 42h
nats-streaming-operator-59647b496-v4vv5 1/1 Running 0 42h
$ kubectl logs nats-streaming-1-1
[1] 2019/04/05 10:43:45.406013 [INF] STREAM: Starting nats-streaming-server[nats-streaming-1] version 0.12.2
[1] 2019/04/05 10:43:45.406058 [INF] STREAM: ServerID: PlHofW9bI3tXiJYkkRkQCQ
[1] 2019/04/05 10:43:45.406061 [INF] STREAM: Go version: go1.11.6
[1] 2019/04/05 10:43:45.406064 [INF] STREAM: Git commit: [4489c46]
[1] 2019/04/05 10:43:45.422431 [INF] STREAM: Recovering the state...
[1] 2019/04/05 10:43:45.422755 [INF] STREAM: No recovered state
[1] 2019/04/05 10:43:45.422838 [INF] STREAM: Cluster Node ID : "nats-streaming-1-1"
[1] 2019/04/05 10:43:45.422847 [INF] STREAM: Cluster Log Path: nats-streaming-1/"nats-streaming-1-1"
[1] 2019/04/05 10:43:50.531934 [INF] STREAM: Shutting down.
[1] 2019/04/05 10:43:50.532450 [FTL] STREAM: Failed to start: failed to join Raft group nats-streaming-1
Even if I delete all the pods, it still doesn't recover. I have to delete the whole natsstreamingcluster.streaming.nats.io/nats-streaming-1 and recreate it to make it work.
The Helm chart can be hosted via GitHub Pages, and chart releases can be automated via GitHub Actions.
Finally, the nats-streaming-operator can be added to Helm Hub for better reach.
I'm glad to help with this.
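As a sketch of what that automation could look like (the workflow path, branch name, and chart directory below are assumptions, not something from this repository), a minimal GitHub Actions workflow using the helm/chart-releaser action might be:

```yaml
# .github/workflows/release-chart.yml (hypothetical path)
name: Release Helm chart
on:
  push:
    branches: [master]
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      # Full history is needed so chart-releaser can diff chart versions
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0
      - name: Run chart-releaser
        uses: helm/chart-releaser-action@v1
        with:
          charts_dir: helm   # assumed chart location
        env:
          CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
```

chart-releaser publishes packaged charts as GitHub releases and maintains an index.yaml on the gh-pages branch, which is what makes the GitHub Pages hosting work.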
I've executed the Getting Started section to install the NATS Operator and the NATS Streaming operator by copying those four yaml files locally and running helm install in the project directory.
helm install --name my-app --namespace default helm/test
project
└───helm
│ └───test
│ │ Chart.yaml
│ └───templates
| nats-operator-deployment.yaml
| nats-operator-prereqs.yaml
| nats-streaming-operator-deployment.yaml
| nats-streaming-operator-rbac.yaml
The CRDs get created as expected. The operator pods get created, which I believe is also expected, but I'm not 100% sure from the README.
What I'm not expecting is the example-nats pods getting created.
# kubectl get crd
natsclusters.nats.io 2019-09-19T18:52:58Z
natsserviceroles.nats.io 2019-09-19T18:52:58Z
natsstreamingclusters.streaming.nats.io 2019-09-20T14:29:24Z
# kubectl get pods
NAME READY STATUS RESTARTS AGE
example-nats-1 1/1 Running 0 113s
example-nats-2 1/1 Running 0 112s
example-nats-3 1/1 Running 0 110s
nats-operator-dd7f4945f-l788t 1/1 Running 0 2m12s
nats-streaming-operator-6fbb6695ff-mkkzx 1/1 Running 0 2m12s
The example-nats pods should only be created when the content below is applied, which I have not done.
---
apiVersion: "streaming.nats.io/v1alpha1"
kind: "NatsStreamingCluster"
metadata:
name: "example-stan"
spec:
size: 3
natsSvc: "example-nats"
In the nats-operator logs, I see that it is starting workers and waiting for the example-nats pods to become ready.
I don't see anything in https://github.com/nats-io/nats-operator/releases/download/v0.5.0/10-deployment.yaml, or the other three files, about creating "example-nats".
Is this supposed to be happening under the hood? How can I stop this example-nats cluster from being created when installing the operators?
level=info msg="started workers" pkg=controller
level=info msg="waiting for pod \"default/example-nats-1\" to become ready" cluster-name=example-nats namespace=default pkg=cluster
level=info msg="pod \"default/example-nats-1\" became ready" cluster-name=example-nats namespace=default pkg=cluster
level=info msg="waiting for pod \"default/example-nats-2\" to become ready" cluster-name=example-nats namespace=default pkg=cluster
level=info msg="pod \"default/example-nats-2\" became ready" cluster-name=example-nats namespace=default pkg=cluster
level=info msg="waiting for pod \"default/example-nats-3\" to become ready" cluster-name=example-nats namespace=default pkg=cluster
level=info msg="pod \"default/example-nats-3\" became ready" cluster-name=example-nats namespace=default pkg=cluster
I have a nats-streaming cluster that I've defined to have three pods; however, it seems the operator either cannot find this property of the CRD or is ignoring it.
Operator Log
time="2019-01-23T21:01:52Z" level=info msg="Starting NATS Streaming Operator v0.2.0"
time="2019-01-23T21:01:52Z" level=info msg="Go Version: go1.11"
time="2019-01-23T21:01:53Z" level=info msg="Adding cluster 'logging/nats-streaming-logging-cluster' (uid=25707797-1f51-11e9-997b-166d38be4f4a)"
time="2019-01-23T21:05:30Z" level=info msg="Deleted 'logging/nats-streaming-logging-cluster' cluster (uid=25707797-1f51-11e9-997b-166d38be4f4a)"
time="2019-01-23T21:05:55Z" level=info msg="Adding cluster 'logging/nats-streaming-logging-cluster' (uid=ac99a498-1f52-11e9-997b-166d38be4f4a)"
time="2019-01-23T21:05:55Z" level=info msg="Missing pods for 'logging/nats-streaming-logging-cluster' cluster (size=0/1), creating 1 pods..."
time="2019-01-23T21:05:55Z" level=info msg="Creating pod 'logging/nats-streaming-logging-cluster-1'"
NATS Streaming Cluster Definition
$ k get stancluster/nats-streaming-logging-cluster -o yaml
apiVersion: streaming.nats.io/v1alpha1
kind: NatsStreamingCluster
metadata:
creationTimestamp: "2019-01-23T21:05:55Z"
generation: 1
labels:
app: nats-streaming-logging-cluster
chart: nats-streaming-logging-cluster-0.1.0
heritage: Tiller
release: nats-streaming-logging-cluster
name: nats-streaming-logging-cluster
namespace: logging
resourceVersion: "80341241"
selfLink: /apis/streaming.nats.io/v1alpha1/namespaces/logging/natsstreamingclusters/nats-streaming-logging-cluster
uid: ac99a498-1f52-11e9-997b-166d38be4f4a
spec:
configFile: /etc/stan/config/secret.conf
natsSvc: nats-logging-cluster
size: 3
store: SQL
template:
spec:
containers:
- name: nats-streaming
volumeMounts:
- mountPath: /etc/stan/config
name: nats-streaming-logging-cluster
readOnly: true
volumes:
- name: nats-streaming-logging-cluster
secret:
secretName: nats-streaming-logging-cluster
Is there something obvious that I am missing that would prevent the operator from seeing that size is set to 3?
I noticed that the Dockerfile in docker/operator pulls from alpine:3.8.
It would be nice to add the ability to execute bash on the pod to verify that any configuration files are being deployed correctly.
Example command:
kubectl exec -ti <pod-name> -- /bin/bash
Output:
OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: "/bin/bash": stat /bin/bash: no such file or directory": unknown
Docker File Enhancement:
RUN apk add --no-cache bash
Does this operator not support deploying streaming clusters in namespaces other than the default one, like nats-operator does? I tried modifying the resource definition to be Cluster scoped instead of Namespaced but started getting an error:
msg="Failed to create replica Pod: an empty namespace may not be set during creation"
Is there any way to do this? Or is it just not supported, and must all clusters be in the default namespace?
Thanks
Hi,
First of all thanks a lot for the great work you've been doing with NATS/NATS Streaming.
I'm currently looking for a proper way to deploy NATS Streaming next to our NATS cluster on Kubernetes, and it seems that using file persistence together with an HA topology is pretty complicated at the moment. As the Helm chart stored in this repository uses a Deployment, as does the operator, it becomes very tricky to handle horizontal scaling because volumes won't be created dynamically.
I am already aware of this Helm chart but it is still an external repository.
Would you potentially be interested in:
- adopting a Helm chart like that one into this repository
- switching the Helm chart to a StatefulSet
- having the operator create a StatefulSet instead of a Deployment
In case the last two might be of interest to you, I could definitely take care of creating the PR. I only want to make sure this is something you might be looking for first.
This issue is more or less a duplicate/proposal to fix:
Thanks in advance for reading
Hi,
we are using an authenticated NATS cluster and are big fans of this operator.
Since the operator is overriding the STAN pods' command, I see no way to add --user/--pass.
Is there a way to authenticate NatsStreamingClusters?
Best regards
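One possible workaround, as an untested sketch (the Secret name, mount path, and file contents are assumptions), would be to keep the NATS credentials in a streaming-server config file referenced via spec.configFile and mounted from a Secret, similar to the SQL-store cluster definition shown earlier in this thread. Whether the streaming server actually picks up NATS-connection credentials from that file is worth verifying:

```yaml
apiVersion: "streaming.nats.io/v1alpha1"
kind: "NatsStreamingCluster"
metadata:
  name: "example-stan"
spec:
  size: 3
  natsSvc: "example-nats"
  configFile: /etc/stan/config/stan.conf   # assumed path; the file would carry the credentials
  template:
    spec:
      containers:
      - name: nats-streaming
        volumeMounts:
        - mountPath: /etc/stan/config
          name: stan-config
          readOnly: true
      volumes:
      - name: stan-config
        secret:
          secretName: stan-config   # hypothetical Secret containing stan.conf
```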
helm install --name nats-steaming --namespace nats-io -f values.yaml .
Error:
Error: release nats-steaming-operator failed: Deployment.apps "nats-steaming-operator-nats-streaming-operator" is invalid: spec.strategy: Unsupported value: apps.DeploymentStrategy{Type:"OnDelete", RollingUpdate:(*apps.RollingUpdateDeployment)(nil)}: supported values: "Recreate", "RollingUpdate"
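The error indicates the chart's Deployment sets spec.strategy.type to OnDelete, a value Kubernetes only accepts for StatefulSets and DaemonSets. A likely fix (a sketch to apply in the chart's deployment template) is to switch to one of the two supported values:

```yaml
# In the Deployment's spec section of the chart template
strategy:
  type: Recreate   # or RollingUpdate; OnDelete is not valid for Deployments
```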
It should be possible to configure the certs used for the client connection to the NATS cluster.
If all pods go down, the cluster never comes back up. (In this case I was testing on a single-node minikube, and the node crashed.)
The log will look permanently like this:
[1] 2018/12/05 16:43:14.755467 [INF] STREAM: Starting nats-streaming-server[stan] version 0.11.2
[1] 2018/12/05 16:43:14.755701 [INF] STREAM: ServerID: hX4G9R76qCMRvR1HThDCd5
[1] 2018/12/05 16:43:14.755745 [INF] STREAM: Go version: go1.11.1
[1] 2018/12/05 16:43:14.762222 [INF] STREAM: Recovering the state...
[1] 2018/12/05 16:43:14.762387 [INF] STREAM: No recovered state
[1] 2018/12/05 16:43:14.762463 [INF] STREAM: Cluster Node ID : "stan-2"
[1] 2018/12/05 16:43:14.762494 [INF] STREAM: Cluster Log Path: stan/"stan-2"
[1] 2018/12/05 16:43:19.908446 [INF] STREAM: Shutting down.
[1] 2018/12/05 16:43:19.910918 [FTL] STREAM: Failed to start: failed to join Raft group stan
Is it possible to run a nats-streaming-server cluster without a separate NATS cluster, i.e. with NATS started in embedded mode inside nats-streaming-server?
I'm running on my local minikube, and it seems the pods somehow shut down gracefully:
kubectl get pods -n nats
NAME READY STATUS RESTARTS AGE
nats-operator-5b9df7cbb7-vmxt7 1/1 Running 1 25h
nats-streaming-operator-66479b8d84-qwb2s 1/1 Running 1 25h
wistful-sponge-nast-streaming-nast-1 1/1 Running 0 8h
wistful-sponge-nast-streaming-nast-2 1/1 Running 0 8h
wistful-sponge-nast-streaming-nast-3 1/1 Running 0 8h
wistful-sponge-nast-streaming-nats-streaming-1 0/1 Completed 2 25h
wistful-sponge-nast-streaming-nats-streaming-2 0/1 Completed 1 25h
wistful-sponge-nast-streaming-nats-streaming-3 0/1 Completed 2 25h
Logs:
kubectl logs wistful-sponge-nast-streaming-nats-streaming-2 -n nats
[1] 2019/07/03 09:37:04.768787 [INF] STREAM: Starting nats-streaming-server[wistful-sponge-nast-streaming-nats-streaming] version 0.12.2
[1] 2019/07/03 09:37:04.768858 [INF] STREAM: ServerID: sdR75tHu4vdSxVG7yOHCci
[1] 2019/07/03 09:37:04.768861 [INF] STREAM: Go version: go1.11.6
[1] 2019/07/03 09:37:04.768863 [INF] STREAM: Git commit: [4489c46]
[1] 2019/07/03 09:37:04.786394 [INF] STREAM: Recovering the state...
[1] 2019/07/03 09:37:04.788582 [INF] STREAM: No recovered state
[1] 2019/07/03 09:37:04.788686 [INF] STREAM: Cluster Node ID : "wistful-sponge-nast-streaming-nats-streaming-2"
[1] 2019/07/03 09:37:04.788690 [INF] STREAM: Cluster Log Path: wistful-sponge-nast-streaming-nats-streaming/"wistful-sponge-nast-streaming-nats-streaming-2"
[1] 2019/07/03 09:37:06.837362 [INF] STREAM: Message store is RAFT_FILE
[1] 2019/07/03 09:37:06.837449 [INF] STREAM: Store location: store
[1] 2019/07/03 09:37:06.837537 [INF] STREAM: ---------- Store Limits ----------
[1] 2019/07/03 09:37:06.837571 [INF] STREAM: Channels: 100 *
[1] 2019/07/03 09:37:06.837588 [INF] STREAM: --------- Channels Limits --------
[1] 2019/07/03 09:37:06.837668 [INF] STREAM: Subscriptions: 1000 *
[1] 2019/07/03 09:37:06.837707 [INF] STREAM: Messages : 1000000 *
[1] 2019/07/03 09:37:06.837741 [INF] STREAM: Bytes : 976.56 MB *
[1] 2019/07/03 09:37:06.837757 [INF] STREAM: Age : unlimited *
[1] 2019/07/03 09:37:06.837848 [INF] STREAM: Inactivity : unlimited *
[1] 2019/07/03 09:37:06.837887 [INF] STREAM: ----------------------------------
[1] 2019/07/03 12:36:23.901524 [INF] STREAM: connection "_NSS-wistful-sponge-nast-streaming-nats-streaming-acks" reconnected to NATS Server at "nats://172.17.0.22:4222"
[1] 2019/07/03 12:36:23.906155 [INF] STREAM: connection "_NSS-wistful-sponge-nast-streaming-nats-streaming-general" reconnected to NATS Server at "nats://172.17.0.23:4222"
[1] 2019/07/03 12:36:23.913372 [INF] STREAM: Shutting down.
Is this an issue with the NATS Streaming operator?
nats.yaml
---
apiVersion: "nats.io/v1alpha2"
kind: "NatsCluster"
metadata:
name: "nats-cluster"
namespace: "nats"
spec:
size: 3
nats-streaming.yaml
---
apiVersion: "streaming.nats.io/v1alpha1"
kind: "NatsStreamingCluster"
metadata:
name: "nats-streaming-cluster"
namespace: "nats"
spec:
size: 3
natsSvc: "nats-cluster"
After kubectl apply, no pods are spawned in the given namespace. Is there any workaround for this?
I noticed after running the operator that sometimes the streaming pods would end up on the same node. I prefer spreading pods across multiple nodes in case of a rare node failure.
This is the yaml I usually add to either my deployment or pod yamls:
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nats-streaming
        topologyKey: kubernetes.io/hostname
Is this something that could be of use for the operator?
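If the operator honors pod-template overrides (the spec.template.spec field seen in other cluster definitions in this thread), one way to try this today, as an unverified sketch, would be to embed the anti-affinity directly in the NatsStreamingCluster resource:

```yaml
apiVersion: "streaming.nats.io/v1alpha1"
kind: "NatsStreamingCluster"
metadata:
  name: "example-stan"
spec:
  size: 3
  natsSvc: "example-nats"
  template:
    spec:
      affinity:
        podAntiAffinity:
          # Prefer, but do not require, spreading pods across nodes
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - nats-streaming
              topologyKey: kubernetes.io/hostname
```

Whether the operator merges the affinity field from the template into the pods it creates is something to verify before relying on it.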
How do I configure the Store Limits of the NATS Streaming Server?
For example, I want to set the number of subscriptions to be unlimited as well as increase the message payload. When I was using NATS Streaming Server locally, I was able to pass in different parameters to change these settings.
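Since the operator overrides the pod command, extra CLI flags are hard to inject, but the streaming server can read limits from a config file referenced via spec.configFile. A hedged sketch of such a file follows (option names should be double-checked against the nats-streaming-server configuration docs, and note that max message payload is a core NATS server setting rather than a streaming one):

```
# stan.conf (hypothetical), mounted into the pods and referenced via spec.configFile
store_limits: {
    max_channels: 0   # 0 means unlimited
    max_subs: 0       # unlimited subscriptions per channel
    max_msgs: 1000000
    max_bytes: 10GB
}
```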
Currently the NATS Operator allows for Helm chart installation. It would make the most sense for this operator to support it as well, with the NATS Operator as a dependency.
Hello there! I've been using nats-streaming-server for a while on my Kubernetes clusters and I love it.
I think it would be super great to have nats and/or nats-streaming as a commercial Kubernetes application in Google Cloud Platform Marketplace
Is there any plan to include nats on that marketplace?
Trying to create a NatsStreamingCluster (STAN) with size 3, using a custom store dir on a Persistent Volume, but getting an error:
Unable to mount volumes for pod because "volume is already exclusively attached to one node and can't be attached to another"
This is because the NATS Streaming Operator doesn't use a StatefulSet. Is there any way to solve this?
When we deploy the streaming operator's Deployment on Kubernetes v1.16.0, it fails with an error like the one below.
no matches for kind "Deployment" in version "apps/v1beta2"
The reason is that Kubernetes v1.16.0 removed the deprecated apps/v1beta2 API group. Use apps/v1 instead of apps/v1beta2.
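Concretely, the change in the operator's deployment manifest is the apiVersion line (note that apps/v1 also requires spec.selector to be set on Deployments, so a matching selector may need to be added as well):

```yaml
# before: apiVersion: apps/v1beta2 (removed in Kubernetes v1.16)
apiVersion: apps/v1
kind: Deployment
```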
Streaming Server SQL Store Options:
--sql_driver <string> Name of the SQL Driver ("mysql" or "postgres")
--sql_source <string> Datasource used when opening an SQL connection to the database
--sql_no_caching <bool> Enable/Disable caching for improved performance
--sql_max_open_conns <int> Maximum number of opened connections to the database
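These flags correspond to options in the streaming server's config file, so with the operator they could presumably be supplied via spec.configFile together with store: SQL, as in the cluster definition quoted earlier in this thread. A sketch follows (block and option names are worth verifying against the streaming server docs, and the datasource value is a placeholder):

```
# stan.conf sketch
store: "sql"
sql_options: {
    driver: "postgres"
    source: "dbname=nss_db sslmode=disable"   # placeholder datasource
    no_caching: false
    max_open_conns: 5
}
```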
I got an error when I deployed a NatsStreamingCluster:
[1] 2019/12/26 07:16:45.762521 [FTL] STREAM: Failed to start: discovered another streaming server with cluster ID "example-stan"
I use GKE
full message
[1] 2019/12/26 07:16:45.747712 [INF] STREAM: ServerID: JTmPHIR4BFp2ZuAWkekcIl
[1] 2019/12/26 07:16:45.747715 [INF] STREAM: Go version: go1.11.13
[1] 2019/12/26 07:16:45.747717 [INF] STREAM: Git commit: [910d6e1]
[1] 2019/12/26 07:16:45.760913 [INF] STREAM: Recovering the state...
[1] 2019/12/26 07:16:45.761073 [INF] STREAM: No recovered state
[1] 2019/12/26 07:16:45.762399 [INF] STREAM: Shutting down.
[1] 2019/12/26 07:16:45.762521 [FTL] STREAM: Failed to start: discovered another streaming server with cluster ID "example-stan"