Comments (17)
Hmm... So this shouldn't happen, because we have our own mutex surrounding the ipset logic, and all of the functions that you posted above have the correct locking logic.
However, I can't argue with the logs. Can you please run kube-router with -v 1 as one of the options, re-create the scenario, and post the logs? I'm especially interested in lines like:
Attempting to attain ipset mutex lock
Attained ipset mutex lock, continuing...
Returned ipset mutex lock
and the ordering of those from the various controllers surrounding the errors.
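For reference, the pattern in question looks roughly like this (a minimal sketch of my own, not kube-router's actual code; the function and variable names here are hypothetical, only the three log messages are the ones to grep for):

```go
// Minimal sketch of the locking pattern described above: every controller
// that mutates ipsets serializes through a single shared mutex and logs
// when it waits for, attains, and returns the lock.
package main

import (
	"log"
	"sync"
)

// ipsetMutex is a hypothetical shared lock guarding all ipset mutations.
var ipsetMutex sync.Mutex

// withIpsetLock runs fn while holding the shared ipset lock, emitting the
// same three log lines quoted above so that the ordering across controllers
// can be followed in the -v 1 output.
func withIpsetLock(controller string, fn func()) {
	log.Printf("%s: Attempting to attain ipset mutex lock", controller)
	ipsetMutex.Lock()
	log.Printf("%s: Attained ipset mutex lock, continuing...", controller)
	defer func() {
		ipsetMutex.Unlock()
		log.Printf("%s: Returned ipset mutex lock", controller)
	}()
	fn() // e.g. restore/flush/destroy ipsets here
}

func main() {
	withIpsetLock("policy-controller", func() { /* mutate ipsets */ })
}
```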
I have found out that I cannot reproduce it on any of my development clusters, only in prod. Hence I will do that the next time a kernel security bulletin is issued (and a new Canonical kernel is released): that usually happens at least every 2 weeks.
Here is the entire log, which includes events slightly before and slightly after the incident when a pod could not connect to the service IP.
Timestamps when the pod tried and failed to connect: 21:45:37.945 and 21:45:41.917.
And then a bit later it succeeded at: 21:45:59.723809+00:00
Service IP it tried to connect to: 10.32.131.122
And the entire (mildly anonymised, but with no lines removed) log: https://www.dropbox.com/scl/fi/ima6kknivqs17nj075t1d/kube-router-v1.log?rlkey=aigkfmrl08n35ksik3nddw2nz&dl=0
The log is attached via Dropbox because GitHub does not accept comments longer than 64 kB.
Hmm... So I don't see anything in the logs. It seems that the error that you initially opened the issue for doesn't appear in the logs at all. There's only 1 error, and it's a pretty common one, where an ipset fails to delete because a reference to it hasn't yet been cleared by the kernel; that got corrected less than 2 seconds later in the next sync.
In terms of not having reachability to a pod for the time window that you mentioned, I don't see anything in the logs that would cause that. The failed ipset does get cleared right around there, but I think that is a red herring, as there are numerous syncs, without any errors, between the time that it broke and that error.
You do have a pretty high amount of churn on your host, but all of the controllers seem to be syncing fairly quickly.
In terms of the service IP: that IP doesn't show up in the logs at all, because most things are logged by service name, so you might be able to diagnose that side more than I would be able to.
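For illustration, the tolerate-and-retry behaviour described above looks roughly like this (a sketch of my own, not kube-router's actual implementation; the function name and set name are hypothetical, and running it requires root and the ipset binary):

```go
// Sketch of tolerate-and-retry ipset cleanup. While an iptables rule still
// references a set, the kernel rejects `ipset destroy` with "Set cannot be
// destroyed: it is in use by a kernel component"; the next periodic sync
// simply retries and succeeds once the reference is gone.
package main

import (
	"log"
	"os/exec"
)

func cleanupStaleSets(names []string) {
	for _, name := range names {
		out, err := exec.Command("ipset", "destroy", name).CombinedOutput()
		if err != nil {
			// Tolerate the failure: log it and leave the set for the
			// next sync instead of failing the whole cleanup.
			log.Printf("failed to delete ipset %s: %s (%v)", name, out, err)
			continue
		}
		log.Printf("deleted stale ipset %s", name)
	}
}

func main() {
	cleanupStaleSets([]string{"KUBE-DST-EXAMPLE"}) // hypothetical set name
}
```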
In terms of the service IP: that IP doesn't show up in the logs at all, because most things are logged by service name, so you might be able to diagnose that side more than I would be able to.
It shows:
I0823 21:45:20.292157 1 service_endpoints_sync.go:594] Found a destination 10.31.14.64:5432 (Weight: 1) in service tcp:10.32.131.122:5432 (Flags: [hashed entry]) which is no longer needed so cleaning up
at this point in time it's removed.
And apparently this is when it's added back:
I0823 21:46:00.111395 1 network_services_controller.go:914] Received update to endpoint: org-d-alarms-xtr-k8s-1-prod/db-pgbouncer from watch API
I0823 21:46:00.111533 1 ecmp_vip.go:325] Received update to endpoint: org-d-alarms-xtr-k8s-1-prod/db-pgbouncer from watch API
I0823 21:46:00.111587 1 ecmp_vip.go:184] Updating service org-d-alarms-xtr-k8s-1-prod/db-pgbouncer triggered by endpoint update event
I0823 21:46:00.112829 1 network_services_controller.go:948] Syncing IPVS services sync for update to endpoint: org-d-alarms-xtr-k8s-1-prod/db-pgbouncer
I0823 21:46:00.112874 1 network_services_controller.go:454] Performing requested sync of ipvs services
And in between, there is no IPVS service available.
And yes, sorry for not mentioning which namespace/service name that IP is associated with.
And a tiny update on my previous statement:
And then a bit later it succeeded at: 21:45:59.723809+00:00
This timestamp is when the application started; it definitely tried to connect some time later than that. As the application does not log exactly when it connected to the database during initialisation, I can only tell that it happened between 21:45:59.723809+00:00 (and more precisely, 100% later than 2023-08-23T21:45:59.992235+00:00) and 21:46:02.477+00:00 (into which I0823 21:46:00.112874 falls nicely).
@zerkms - Sorry for missing the service IP in the logs, I must have made a typo or something.
From what I can see, without knowing more about this specific service that you're deploying, it looks like Kubernetes likely told us that the pod was no longer ready or healthy or deployed or some such, and so we withdrew it from the service. Later on it came back so we put it back.
So as far as I can see, again without knowing more, it looks like kube-router did what it was supposed to do. However, I think this error is a bit off topic from the original issue reported. The first one was about kube-router encountering a kernel error where it wasn't able to update IPVS. This one is about something different.
I'd recommend that we keep this thread about the kernel error (of which I can't find any evidence, in the log you provided, that it happened in this case). If you want to pursue this other error, we should probably open another issue with more information about how db-pgbouncer is deployed, along with an even higher log level, maybe log level 3?
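If a new issue does get opened, one way to capture what the API server actually reported (i.e. whether the endpoints really went not-ready) would be to watch the Endpoints object alongside kube-router's logs. A rough client-go sketch of mine, with the namespace/name taken from the logs above and a placeholder kubeconfig path:

```go
// Sketch: watch the Endpoints object for db-pgbouncer and log how many
// addresses are ready vs not-ready over time, to correlate with the
// kube-router withdrawals seen in the logs.
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	w, err := cs.CoreV1().Endpoints("org-d-alarms-xtr-k8s-1-prod").Watch(
		context.Background(),
		metav1.ListOptions{FieldSelector: "metadata.name=db-pgbouncer"},
	)
	if err != nil {
		log.Fatal(err)
	}
	for ev := range w.ResultChan() {
		ep, ok := ev.Object.(*corev1.Endpoints)
		if !ok {
			continue
		}
		for _, ss := range ep.Subsets {
			log.Printf("%s: ready=%d notReady=%d", ev.Type, len(ss.Addresses), len(ss.NotReadyAddresses))
		}
	}
}
```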
From what I can see, without knowing more about this specific service that you're deploying, it looks like Kubernetes likely told us that the pod was no longer ready or healthy or deployed or some such, and so we withdrew it from the service. Later on it came back so we put it back.
That's how I read it too. BUT!! There are 100% healthy pods available there (and they happily serve during the same time frames). And it's not a single service: as you can see in that log, a large batch of them is removed. And those services don't belong to the same (or similar) applications; they are just random services from across the entire cluster.
So as far as I can see, again without knowing more, it looks like kube-router did what it was supposed to do. However, I think this error is a bit off topic from the original issue reported. The first one was about kube-router encountering a kernel error where it wasn't able to update IPVS. This one is about something different.
Agree. Should we close this (as I don't have any more details for the original one) and create a new one?
If you want to pursue this other error, we should probably open another issue with more information about how db-pgbouncer is deployed, along with an even higher log level, maybe log level 3?
As I mentioned above: db-pgbouncer is just one that I picked as an example; those are absolutely different services (see the ports). And yep, I can collect more logs at level 3 (I'm surprised nobody else has ever experienced this; I see it in several different clusters).
What they have in common is that they are all pods from the same node: 10.31.14.xxx
I0823 21:45:20.285541 1 service_endpoints_sync.go:594] Found a destination 10.31.14.58:8080 (Weight: 1) in service tcp:10.32.54.139:80 (Flags: [hashed entry]) which is no longer needed so cleaning up
I0823 21:45:20.286338 1 service_endpoints_sync.go:594] Found a destination 10.31.14.65:5432 (Weight: 1) in service tcp:10.32.157.238:5432 (Flags: [hashed entry]) which is no longer needed so cleaning up
I0823 21:45:20.288207 1 service_endpoints_sync.go:594] Found a destination 10.31.14.249:9153 (Weight: 1) in service tcp:10.32.0.10:9153 (Flags: [hashed entry]) which is no longer needed so cleaning up
I0823 21:45:20.291732 1 service_endpoints_sync.go:594] Found a destination 10.31.14.51:4443 (Weight: 1) in service tcp:10.32.229.27:443 (Flags: [hashed entry]) which is no longer needed so cleaning up
I0823 21:45:20.292157 1 service_endpoints_sync.go:594] Found a destination 10.31.14.64:5432 (Weight: 1) in service tcp:10.32.131.122:5432 (Flags: [hashed entry]) which is no longer needed so cleaning up
I0823 21:45:20.293237 1 service_endpoints_sync.go:594] Found a destination 10.31.14.35:8443 (Weight: 1) in service tcp:10.32.18.253:443 (Flags: [hashed entry]) which is no longer needed so cleaning up
I0823 21:45:20.298051 1 service_endpoints_sync.go:594] Found a destination 10.31.14.204:8080 (Weight: 1) in service tcp:10.32.195.94:80 (Flags: [hashed entry]) which is no longer needed so cleaning up
I0823 21:45:20.298707 1 service_endpoints_sync.go:594] Found a destination 10.31.14.249:53 (Weight: 1) in service tcp:10.32.0.10:53 (Flags: [hashed entry]) which is no longer needed so cleaning up
I0823 21:45:20.299980 1 service_endpoints_sync.go:594] Found a destination 10.31.14.249:53 (Weight: 1) in service udp:10.32.0.10:53 (Flags: [hashed entry]) which is no longer needed so cleaning up
If I had to take a guess: it looks to me like kube-router, under those circumstances, removes services that still have at least one healthy pod running (for whatever reason).
Btw, is it suspicious that the CoreDNS IP/port appears there twice: 10.31.14.249:53?
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
I think it's not stale, but I will bring more logs with an extra-verbose flag next week, on the next kernel upgrade loop.
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
Okay, I forgot about it, sorry :-D Nonetheless, within the next couple of weeks, on the next upgrade cycle, I will provide more logs and will stop bumping the report.
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
Ok, I need some more time.
Okay, it looks like I cannot reproduce it anymore on 1.27.x.
It was easy to reproduce, 100% reliably, on 1.26.x though.
Hence closing :-)
@aauren I am having the same issue. NetworkPolicies do not work for me. I run 1.26.4. In the logs I see:
Failed to cleanup stale ipsets: failed to delete ipset KUBE-DST-3E7NRCUJY5FMHIWS due to ipset v7.17: Set cannot be destroyed: it is in use by a kernel component
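For what it's worth, "in use by a kernel component" usually means an iptables rule still references the set via `-m set --match-set`. A quick way to check is to scan iptables-save output for the set name; a small sketch of mine (needs root, not an official kube-router tool):

```go
// Sketch: find which iptables rules still reference the ipset that refuses
// to be destroyed. Each hit is a rule still pinning the set in the kernel.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const setName = "KUBE-DST-3E7NRCUJY5FMHIWS" // set name from the error above

	out, err := exec.Command("iptables-save").Output()
	if err != nil {
		panic(err)
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, setName) {
			fmt.Println(line)
		}
	}
}
```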
@vladimirtiukhtin can you open a new issue, filling in all of the fields that the template asks for, with as many other details as possible? Maybe some debug logs and reproduction instructions?