
Comments (27)

adrien-f commented on July 17, 2024

👋 Greetings!

Would you be open to me taking a look at this issue to see how we could solve it by integrating the signer from AWS?

JorTurFer commented on July 17, 2024

Would you be open to me taking a look at this issue to see how we could solve it by integrating the signer from AWS?

Yeah, if you are willing to give it a try, that'd be nice!

sameerjoshinice commented on July 17, 2024

@adrien-f
Following is the original scaler, based on the sarama library:
https://github.com/kedacore/keda/blob/80806a73218e7d128bd25945f573c2a91316d1d3/pkg/scalers/kafka_scaler.go
and this is the experimental scaler:
https://github.com/kedacore/keda/blob/80806a73218e7d128bd25945f573c2a91316d1d3/pkg/scalers/apache_kafka_scaler.go
The comments above suggest that the original sarama-based scaler can itself be modified to be made compatible with AWS MSK IAM. Please see the following comment:
#5531 (comment)

adrien-f commented on July 17, 2024

Hey there!

I've implemented the MSK signer! Let me know what you think :)

dttung2905 commented on July 17, 2024

Hi @sameerjoshinice,

Thanks for reporting this to us. An i/o timeout and context deadline exceeded usually indicate a network connection error. I have a few questions:

  • Had this setup been working well for you before you encountered this problem, or is this the first time this scaler has been run, causing the outage?
  • Did you try to debug by setting up a testing pod and making the same SASL + TLS connection using the Kafka CLI instead? If that test does not pass, it means there is a problem with the TLS cert + SASL configuration.
  • How did you determine that the KEDA operator is causing the CPU spike on the AWS MSK brokers? How many brokers out of the AWS MSK fleet were affected?
  • If you could get more logs for troubleshooting, that would be great.

sameerjoshinice commented on July 17, 2024

Hi @dttung2905,
Please see the answers inline.
Had this setup been working well for you before you encountered this problem, or is this the first time this scaler has been run, causing the outage?
[SJ]: This is the first time this scaler has been run, and it caused the outage.
Did you try to debug by setting up a testing pod and making the same SASL + TLS connection using the Kafka CLI instead? If that test does not pass, it means there is a problem with the TLS cert + SASL configuration.
[SJ]: There are other clients contacting the MSK cluster with the same role and they are working fine. Those clients are mostly Java based.
How did you determine that the KEDA operator is causing the CPU spike on the AWS MSK brokers? How many brokers out of the AWS MSK fleet were affected?
[SJ]: There are 3 brokers in the shared MSK cluster and all of them were affected. This happened twice, and both times the issue started as soon as the KEDA scaler's permissions to access the MSK cluster were enabled.
If you could get more logs for troubleshooting, that would be great.
[SJ]: I will try to get more logs as and when I find something of importance.

jared-schmidt-niceincontact commented on July 17, 2024

We also saw this error from the KEDA operator before the timeouts and context deadline errors started happening:

ERROR scale_handler error getting metric for trigger {"scaledObject.Namespace": "mynamespace", "scaledObject.Name": "myscaler", "trigger": "apacheKafkaScaler", "error": "error listing consumer group offset: %!w()"}

jared-schmidt-niceincontact commented on July 17, 2024

Our suspicion is that the scaler caused a flood of broken connections that didn't close properly and eventually caused all of the groups to rebalance, which pegged the CPU. The rebalances can be seen within a few minutes of starting the ScaledObject.

I also have this email which highlights some things AWS was finding at the same time:

I’ve been talking to our AWS TAM and the AWS folks about this issue. They still believe, based on the logs that they have access to (which we don't), that the problems are related to a new IAM permission that is required when running under the newest Kafka version. They are seeing many authentication issues related to the routing pods. My coworker and I have been adjusting permissions to give the application the rights requested by AWS. The CPU on the cluster dropped slightly when we did that; however, we are still getting the following error even after applying the update on the routing pods:

Connection to node -2 () terminated during authentication. This may happen due to any of the following reasons: (1) Authentication failed due to invalid credentials with brokers older than 1.0.0, (2) Firewall blocking Kafka TLS traffic (eg it may only allow HTTPS traffic), (3) Transient network issue.

AWS believes that the authentication sessions on the backend have been marked as expired, but they have not been removed and are treated as invalid. They have been attempting to manually remove them, but have run into problems doing so. They are going to restart all the MSK brokers to clear out the session cache.

jared-schmidt-niceincontact commented on July 17, 2024

Unfortunately, restarting the brokers didn't fix the CPU problems.

JorTurFer commented on July 17, 2024

Did you try restarting the KEDA operator?
I'm checking, and apparently we are closing the connection correctly in case of failure:

// Close closes the kafka client
func (s *apacheKafkaScaler) Close(context.Context) error {
    if s.client == nil {
        return nil
    }
    transport := s.client.Transport.(*kafka.Transport)
    if transport != nil {
        transport.CloseIdleConnections()
    }
    return nil
}

But maybe there is some other way to close the client that we've missed :/

sameerjoshinice commented on July 17, 2024

Following is the analysis from the AWS MSK team for this issue. They see this as a problem in the KEDA scaler. The issue is mainly that the new Apache Kafka scaler keeps retrying constantly with non-renewed credentials, even after session expiry.

Based on the authorizer logs, we see that KEDA is denied access to certain resources. This leads to the same scaler retrying, and the retries continue constantly until the session expires. When the session expires, the credential is not renewed by KEDA, so it attempts to call the cluster with an outdated credential. This leads to a race condition where the requests are constantly in an AUTHENTICATION failed state, which causes the request queue, and then the network queue, to fill up, leading to high CPU.

The behaviour on the MSK side is that the failure is happening in the authorization flow, not the authentication flow. This was hard to reproduce. However, when we add latency between the server and the authorization flow, which might happen when the network queue is overloaded, we can reproduce this behaviour. The rejection of these requests is correct. Under normal circumstances, they would be rejected in the authentication flow, but under heavy load, they are being rejected in the authorization flow.

Mitigation
In order to fix this, the KEDA configuration needs to be fixed to allow it access to all topics and groups. This will stop the retries and allow the clients to be closed before the session expires.

An issue should be raised with KEDA about this. The scaler will always eventually crash if authentication or authorization fails. This can be triggered with any KEDA scaler if permissions are not sufficient: it will keep retrying until the session expires, and then face this issue.

sansmoraxz commented on July 17, 2024

It's been a while since I worked with Kafka; I made the initial commit for the scaler.

If it's an issue with rotating credentials, I think it would be better to raise the issue over at segmentio/kafka-go; they are the ones maintaining the underlying library. Could it be confirmed whether it's the newer versions of Kafka that have the issue, or, for that matter, whether earlier builds of KEDA have the issue? I don't see many changes in this repo that would cause this issue.

sansmoraxz commented on July 17, 2024

The behaviour on the MSK side is that the failure is happening in the authorization flow, not the authentication flow. This was hard to reproduce. However, when we add latency between the server and the authorization flow, which might happen when the network queue is overloaded, we can reproduce this behaviour. The rejection of these requests is correct. Under normal circumstances, they would be rejected in the authentication flow, but under heavy load, they are being rejected in the authorization flow.

Is this specific to RBAC, or can the same problem be seen when an IAM user is used? I don't have access to infra to test this on, tbh.

sameerjoshinice commented on July 17, 2024

It's been a while since I worked with Kafka; I made the initial commit for the scaler.

If it's an issue with rotating credentials, I think it would be better to raise the issue over at segmentio/kafka-go; they are the ones maintaining the underlying library. Could it be confirmed whether it's the newer versions of Kafka that have the issue, or, for that matter, whether earlier builds of KEDA have the issue? I don't see many changes in this repo that would cause this issue.

The version of Kafka being used is 3.5.1. We could not confirm whether it was an issue with earlier versions or not.

sameerjoshinice commented on July 17, 2024

The behaviour on the MSK side is that the failure is happening in the authorization flow, not the authentication flow. This was hard to reproduce. However, when we add latency between the server and the authorization flow, which might happen when the network queue is overloaded, we can reproduce this behaviour. The rejection of these requests is correct. Under normal circumstances, they would be rejected in the authentication flow, but under heavy load, they are being rejected in the authorization flow.

Is this specific to RBAC, or can the same problem be seen when an IAM user is used? I don't have access to infra to test this on, tbh.
This problem was seen with awsRoleArn and using msk_iam as the sasl method.

JorTurFer commented on July 17, 2024

@dttung2905 @zroubalik, do you know what the state of the AWS MSK IAM integration within sarama is? I mean, if we are close to unifying both scalers again, we can just work in that direction.

sameerjoshinice commented on July 17, 2024

I see AWS has released the signer: https://github.com/aws/aws-msk-iam-sasl-signer-go
This signer can be integrated with the IBM sarama library. The signer implements the interface necessary for SASL OAUTHBEARER authentication. This means IBM sarama does not need any changes to support IAM for MSK; the same can be achieved by using SASL OAUTHBEARER authentication with different implementations of token providers, depending on whether a role, profile or credentials are specified. This means the new experimental scaler won't be needed, since IAM support is already achievable with sarama using the AWS-provided SASL signer.
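
For illustration, a minimal sketch of that wiring (not existing KEDA code; the mskAccessTokenProvider type, region and broker address are placeholders I made up, while the signer and sarama calls follow their public APIs as I understand them):

package main

import (
    "context"
    "log"

    "github.com/IBM/sarama"
    "github.com/aws/aws-msk-iam-sasl-signer-go/signer"
)

// mskAccessTokenProvider is a hypothetical sarama.AccessTokenProvider that
// generates a fresh MSK IAM auth token on every Token() call, so credentials
// are renewed instead of reusing an expired session.
type mskAccessTokenProvider struct {
    region string
}

func (m *mskAccessTokenProvider) Token() (*sarama.AccessToken, error) {
    // GenerateAuthToken uses the default AWS credential chain; the signer also
    // provides GenerateAuthTokenFromRole and GenerateAuthTokenFromProfile for
    // role- or profile-based credentials.
    token, _, err := signer.GenerateAuthToken(context.TODO(), m.region)
    if err != nil {
        return nil, err
    }
    return &sarama.AccessToken{Token: token}, nil
}

func main() {
    cfg := sarama.NewConfig()
    cfg.Net.TLS.Enable = true
    cfg.Net.SASL.Enable = true
    cfg.Net.SASL.Mechanism = sarama.SASLTypeOAuth
    cfg.Net.SASL.TokenProvider = &mskAccessTokenProvider{region: "us-east-1"}

    // Placeholder bootstrap broker; in real use this comes from the trigger config.
    client, err := sarama.NewClient([]string{"b-1.example.amazonaws.com:9098"}, cfg)
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()
}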

JorTurFer commented on July 17, 2024

Yeah, we knew about it because someone else told us to use it some weeks ago. IIRC @dttung2905 is checking how to integrate it.

dttung2905 commented on July 17, 2024

I see AWS has released the signer: https://github.com/aws/aws-msk-iam-sasl-signer-go

@sameerjoshinice yes, I think that's the repo to use. It has been mentioned in this Sarama issue: IBM/sarama#1985

@JorTurFer I don't recall looking into that, sorry. But I'm happy to test it out if I can get my hands on a testing MSK environment :D

JorTurFer commented on July 17, 2024

@JorTurFer I don't recall looking into that, sorry

Lol, maybe I'm wrong, but that's what I remember 😰 xD

No worries, IDK why I remembered that, but surely I was wrong.

zroubalik commented on July 17, 2024

@adrien-f thanks, let me assign this issue to you.

adrien-f commented on July 17, 2024

Greetings 👋

I was able to enable IAM authentication on our MSK cluster (1000+ topics). With the current codebase, it connected fine, so that's good news for the current state of the authentication system.

Right away, I noticed the scaler runs the following:

func (s *apacheKafkaScaler) getTopicPartitions(ctx context.Context) (map[string][]int, error) {
    metadata, err := s.client.Metadata(ctx, &kafka.MetadataRequest{
        Addr: s.client.Addr,
    })
    if err != nil {
        return nil, fmt.Errorf("error getting metadata: %w", err)
    }
    s.logger.V(1).Info(fmt.Sprintf("Listed topics %v", metadata.Topics))

That MetadataRequest is not scoped to the topic the scaler is looking at, which means it retrieves information for all topics & partitions on the cluster, and that can get big. I think it would be fair to scope that request to the topic configured for the scaler.
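
As a rough sketch of that suggestion (not a tested patch; the s.metadata.topic field name is an assumption about the scaler's config struct), the metadata request could be scoped to the configured topics and the response reduced to the partition map:

func (s *apacheKafkaScaler) getTopicPartitions(ctx context.Context) (map[string][]int, error) {
    metadata, err := s.client.Metadata(ctx, &kafka.MetadataRequest{
        Addr:   s.client.Addr,
        Topics: s.metadata.topic, // assumed []string with the topics configured on the trigger
    })
    if err != nil {
        return nil, fmt.Errorf("error getting metadata: %w", err)
    }

    // Only the requested topics come back, so the broker no longer has to
    // serialize metadata for every topic on the cluster.
    result := make(map[string][]int, len(metadata.Topics))
    for _, topic := range metadata.Topics {
        partitions := make([]int, 0, len(topic.Partitions))
        for _, partition := range topic.Partitions {
            partitions = append(partitions, partition.ID)
        }
        result[topic.Name] = partitions
    }
    return result, nil
}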

Moreover, the following is from the Kafka protocol documentation:

The client does not need to keep polling to see if the cluster has changed; it can fetch metadata once when it is instantiated and cache that metadata until it receives an error indicating that the metadata is out of date. This error can come in two forms: (1) a socket error indicating the client cannot communicate with a particular broker, (2) an error code in the response to a request indicating that this broker no longer hosts the partition for which data was requested.

  1. Cycle through a list of "bootstrap" Kafka URLs until we find one we can connect to.
  2. Fetch cluster metadata.
  3. Process fetch or produce requests, directing them to the appropriate broker based on the topic/partitions they send to or fetch from.
    If we get an appropriate error, refresh the metadata and try again.

Caching topic partitions might be another option to investigate.
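
A minimal sketch of that caching idea (standalone illustration, not KEDA code; the TTL and struct layout are assumptions), refreshing metadata only after a TTL or on error, in line with the protocol guidance above:

package kafkacache

import (
    "context"
    "sync"
    "time"
)

// partitionCache caches the topic -> partitions map so the scaler does not
// issue a full metadata request on every polling interval.
type partitionCache struct {
    mu        sync.Mutex
    ttl       time.Duration
    fetchedAt time.Time
    cached    map[string][]int
}

// get returns the cached partition map while it is fresh, otherwise it calls
// fetch to refresh it. On error the cache is dropped, mirroring the
// "refresh metadata on error" guidance from the Kafka protocol docs.
func (c *partitionCache) get(ctx context.Context, fetch func(context.Context) (map[string][]int, error)) (map[string][]int, error) {
    c.mu.Lock()
    defer c.mu.Unlock()

    if c.cached != nil && time.Since(c.fetchedAt) < c.ttl {
        return c.cached, nil
    }

    fresh, err := fetch(ctx)
    if err != nil {
        c.cached = nil
        return nil, err
    }
    c.cached = fresh
    c.fetchedAt = time.Now()
    return fresh, nil
}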

I will continue investigating and open a PR with these suggestions 👍

sameerjoshinice commented on July 17, 2024

@adrien-f I thought we were discussing the idea of modifying the existing Kafka scaler based on the sarama library rather than fixing the issue in the experimental scaler based on the segmentio library.

adrien-f commented on July 17, 2024

Hey @sameerjoshinice! Isn't apache-kafka the experimental scaler?

adrien-f commented on July 17, 2024

See https://keda.sh/docs/2.13/scalers/apache-kafka/. The first one, kafka, unless I'm mistaken, does not support MSK IAM auth.

adrien-f commented on July 17, 2024

Got it 👍 I'll look at adding that!

gjacquet commented on July 17, 2024

I think part of the issue might be related to #5806.
I have also faced issues with connections remaining active but using outdated credentials.
