Comments (7)
Thanks @twmb. We have some workloads where we tune for lower latency (financial transactions) at the expense of CPU; it's a careful trade-off. Either way, I don't think it's ideal for a client lib to impose opinionated config when the spec allows it, beyond sanity-checking config values. In that case, the minimum commit interval should ideally be the smallest practical value, something like 1ms.
from franz-go.
Hello! Yes, this is possible, but I'm curious what the use case is here? I worry about the ramifications of consumers committing up to 1,000x / second.
Is the goal to rely on autocommitting but try to commit frequently enough that you avoid the risk of losing data in the event of a server crash? I'm just trying to understand the use case, and I'm happy to drop this to 1ms (as pointed out, it's a two-character patch in the code 😄). I know somebody else ran into problems with another client where their consumers were committing 10,000x / sec / client, which caused undue load on their brokers.
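For reference, the knob under discussion is configured at client construction. A minimal sketch, assuming franz-go's `kgo` package; the broker address, group, and topic names are placeholders:

```go
package main

import (
	"time"

	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	// A group consumer with a tuned autocommit interval.
	// "localhost:9092", "my-group", and "my-topic" are placeholder values.
	cl, err := kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"),
		kgo.ConsumerGroup("my-group"),
		kgo.ConsumeTopics("my-topic"),
		// The default autocommit interval is larger (5s); at the time of
		// this thread, the enforced minimum was 100ms.
		kgo.AutoCommitInterval(100*time.Millisecond),
	)
	if err != nil {
		panic(err)
	}
	defer cl.Close()
}
```

Each tick of this interval issues an OffsetCommit request per group member, which is why very small values multiply broker load.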
Is the goal to rely on autocommitting but try to commit frequently enough that you avoid the risk of losing data in the event of a server crash?
It's to get a message pushed to the stream with the lowest possible latency. Realistically, it won't be a 1ms interval but probably around 5-10ms (as mentioned earlier, at the cost of CPU). With a 100ms commit interval, the latency for messages written can vary from ~1 to 100ms, a wild standard deviation that wouldn't be acceptable for workloads where latency is a critical metric.
Either way, these decisions should be left to the implementer; it's not ideal for a neutral client lib to enforce opinionated values.
And thanks for the lib! 🚀 We've found that it uses significantly less CPU and RAM compared to other Go Kafka libs.
The autocommitting is unrelated to producing and consuming, so it is unrelated to the latency of messages written. Is there an external system monitoring the commits / waiting for commits before consuming (i.e., is this trying to be similar to EOS but not quite)?
Again, the change is simple and I will be making it, but I'm trying to figure out what the use case is here. It could help me add documentation on why a lower value might be chosen, for example.
Ah, sorry, we have our wires crossed here! I've been referring to the producer commit / flush frequency all along. We have a generic stream config abstraction on top of franz, hence the confusion with terms. Consumption autocommit is a non-issue for us.
Please close this issue. That said, I guess the original argument still stands, that the value should ideally be left to the implementer to configure. Thanks for your time!
Cool. I may leave the min setting for now, then, until somebody points out a workflow where it is necessary to drop the min autocommit interval; this is the type of thing that I think people may confuse and then break their cluster a little bit. The configuration knob you're looking for is Linger, which has no default (no lingering at all), and if you do configure it, there is no minimum, only a maximum (1min).
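To illustrate the distinction: produce-side latency is governed by lingering, not by the commit interval. A sketch, assuming franz-go's `kgo.ProducerLinger` option and a placeholder broker address:

```go
package main

import (
	"time"

	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	cl, err := kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"), // placeholder broker
		// With no linger (the default), records are flushed as soon as a
		// batch can be formed. A small linger trades a bounded latency hit
		// for larger, more efficient batches; there is no enforced minimum.
		kgo.ProducerLinger(5*time.Millisecond),
	)
	if err != nil {
		panic(err)
	}
	defer cl.Close()
}
```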
Currently, I think a person looking to autocommit more often than every 100ms may prefer just using manual committing, because to me, wanting sub-100ms autocommits implies looking for behavioral guarantees that autocommitting inherently cannot provide.
However! It may still be useful to drop the min to 10ms or 1ms; I just want to know a valid use case before I do so, so that I can include a bit of documentation for why a person may want a small autocommit interval. So I agree it can be left to the implementer, but having a higher min at the moment may be the forcing function for a person to tell me what documentation I can add. What do you think?
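For completeness, the manual-committing route mentioned above might look roughly like this; a sketch assuming franz-go's `DisableAutoCommit` option and `CommitUncommittedOffsets` method, with placeholder broker, group, and topic names:

```go
package main

import (
	"context"

	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	cl, err := kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"), // placeholder
		kgo.ConsumerGroup("my-group"),     // placeholder
		kgo.ConsumeTopics("my-topic"),     // placeholder
		kgo.DisableAutoCommit(),           // take full control of commit timing
	)
	if err != nil {
		panic(err)
	}
	defer cl.Close()

	ctx := context.Background()
	for {
		fetches := cl.PollFetches(ctx)
		if fetches.IsClientClosed() {
			return
		}
		fetches.EachRecord(func(r *kgo.Record) {
			// process r ...
		})
		// Commit exactly when the application decides processing is done,
		// rather than on a timer.
		if err := cl.CommitUncommittedOffsets(ctx); err != nil {
			// handle or log the commit failure
		}
	}
}
```

This gives the guarantee a sub-100ms autocommit only approximates: offsets are committed only after records are actually processed.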