
Comments (11)

YangXin-Sheep commented on May 24, 2024

Someone's benchmark shows that a buffered channel can speed up channel select, which suggests a buffered channel is more efficient. If requests are queuing up, the CPU is overloaded or I/O is slow. In the CPU-overload case, spawning a goroutine immediately does not help: the queued requests just move from the channel's queue to the Go runtime's scheduler queue, so they are still queued, only somewhere else, and spawning a new goroutine costs time as well. In the slow-I/O case it can only improve input QPS. So I think a buffered channel can improve performance.
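
The buffered-vs-unbuffered claim can be checked with a standard Go micro-benchmark. The sketch below is illustrative (it is not from this thread, and the 128-slot buffer is an arbitrary choice): a single consumer drains the channel while the benchmark loop sends, so the measurement is dominated by channel send/receive overhead. Save it as a `_test.go` file and run `go test -bench .`

```go
package chanbench

import "testing"

// benchChan pushes b.N values through ch while one consumer drains it.
func benchChan(b *testing.B, ch chan int) {
	done := make(chan struct{})
	go func() {
		for range ch {
		}
		close(done)
	}()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		ch <- i
	}
	close(ch)
	<-done
}

// Unbuffered: every send must rendezvous with the receiver.
func BenchmarkUnbuffered(b *testing.B) { benchChan(b, make(chan int)) }

// Buffered: sends only block when the buffer is full.
func BenchmarkBuffered(b *testing.B) { benchChan(b, make(chan int, 128)) }
```

In isolation the buffered version usually wins on ops/sec; whether that translates into an end-to-end win for the gRPC server is the question the maintainers raise below.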

SaveTheRbtz commented on May 24, 2024

Do you have a repro case where NumStreamWorkers with buffering on the serverWorkerChannel is visibly faster?

The current approach is based on the following logic: if all NumStreamWorkers goroutines are busy, then spawning a new goroutine is O(µs), while buffering the request and waiting for a free goroutine is O(ms to seconds). Hence, the current approach tries to minimize the tail latencies of the server.
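
For context, the pattern being described is roughly the following (a simplified sketch, not the actual grpc-go source): hand the stream to an idle worker if one is waiting, otherwise spawn a fresh goroutine instead of blocking.

```go
package dispatch

// startWorkers launches n long-lived workers that drain workerCh, mirroring
// the NumStreamWorkers pool.
func startWorkers(n int, workerCh chan func()) {
	for i := 0; i < n; i++ {
		go func() {
			for work := range workerCh {
				work()
			}
		}()
	}
}

// dispatch hands work to an idle worker if one is currently receiving on the
// unbuffered workerCh; if every worker is busy, it falls back to a one-off
// goroutine rather than letting the request wait in a queue.
func dispatch(workerCh chan func(), work func()) {
	select {
	case workerCh <- work:
		// picked up by an idle worker immediately
	default:
		// all workers busy: pay the O(µs) spawn instead of an O(ms+) wait
		go work()
	}
}
```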

YangXin-Sheep commented on May 24, 2024

That assumes NumStreamWorkers is large enough. In Go's GMP model, a goroutine is a task queued on a Processor (P) and executed by a thread (M). Saying that spawning a new goroutine is O(µs) only accounts for goroutine creation time and ignores goroutine scheduling time: a freshly spawned goroutine may not be scheduled immediately, and can sit queued on a Processor just like a request queued in a channel.
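
The creation-time vs. scheduling-time distinction can be observed directly. The rough sketch below (illustrative, not from the thread) saturates every P with spinning goroutines and then measures how long newly spawned goroutines wait before they actually start running:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

func main() {
	// Saturate every P with spinning goroutines so newly spawned goroutines
	// have to wait in a run queue before they get scheduled.
	stop := make(chan struct{})
	for i := 0; i < runtime.GOMAXPROCS(0); i++ {
		go func() {
			for {
				select {
				case <-stop:
					return
				default:
				}
			}
		}()
	}

	const n = 200
	var (
		wg    sync.WaitGroup
		mu    sync.Mutex
		total time.Duration
	)
	for i := 0; i < n; i++ {
		spawned := time.Now()
		wg.Add(1)
		go func() {
			defer wg.Done()
			d := time.Since(spawned) // spawn-to-first-run latency, not creation cost
			mu.Lock()
			total += d
			mu.Unlock()
		}()
	}
	wg.Wait()
	close(stop)
	fmt.Printf("average spawn-to-run latency under load: %v\n", total/n)
}
```

Under CPU saturation the measured spawn-to-run latency is typically far larger than the microsecond-level creation cost, which is the point being made here.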

YangXin-Sheep commented on May 24, 2024

Do you have a repro case where NumStreamWorkers with buffering on the serverWorkerChannel is visibly faster?

The current approach is based on the following logic: if all NumStreamWorkers goroutines are busy, then spawning a new goroutine is O(µs), while buffering the request and waiting for a free goroutine is O(ms to seconds). Hence, the current approach tries to minimize the tail latencies of the server.

In other words, if the channel buffer starts to accumulate, goroutines would also be accumulating on the Processors. In that case you have to increase the number of threads and CPU cores anyway.

SaveTheRbtz commented on May 24, 2024

You are totally right for the cases where all goroutines are CPU-bound and do not yield. In cases where I/O is involved (e.g. a standard backend that spends most of its time waiting on an upstream database), you can usually find a free P faster than you can process a request end-to-end.

That said, this is all theoretical; if you have a repro where the unbuffered channel here is a bottleneck, it would simplify decision making. Also, if we were to make this channel buffered, what would be the right buffer size? 1, GOMAXPROCS, something else?
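
If the channel were made buffered, the dispatch sketch above would only change in how the channel is created; the sizing question is the open one. A hypothetical helper, purely for illustration (neither the name nor the GOMAXPROCS default comes from grpc-go):

```go
package dispatch

import "runtime"

// newWorkerChannel builds a hypothetical buffered worker channel. bufSize is
// the open question from the discussion: 1, GOMAXPROCS, or something
// workload-specific.
func newWorkerChannel(bufSize int) chan func() {
	if bufSize <= 0 {
		bufSize = runtime.GOMAXPROCS(0) // one candidate default, not grpc-go's behavior
	}
	return make(chan func(), bufSize)
}
```

The trade-off is that work sitting in the buffer waits for a worker instead of getting a fresh goroutine immediately, which is exactly the tail-latency concern raised earlier.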

zasweq commented on May 24, 2024

If you think this approach is faster, can you provide a benchmark with the difference?

YangXin-Sheep commented on May 24, 2024

If you think this approach is faster, can you provide a benchmark with the difference?

https://zhuanlan.zhihu.com/p/101063277 This benchmark shows that a suitably sized buffered channel is faster.

SaveTheRbtz commented on May 24, 2024

On the micro-benchmark side there is no doubt, based on Amdahl's law, that a buffered channel would be faster. But if that were the only concern developers cared about, channels would not even have an unbuffered mode.

It would be better if you showed the benefit of a buffered channel using an end-to-end test: grpc-go with NumStreamWorkers set on a simple web app, driven by something like k6 (ideally with different settings for the buffer size, to see the scaling properties).
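
A minimal server for such an end-to-end test might look like the sketch below (illustrative: the port, the worker count, and using the standard health service as the RPC under load are all assumptions, not anything prescribed in this thread). The same binary can then be driven with k6's gRPC client or a tool such as ghz, once against stock grpc-go and once against a patched build with a buffered serverWorkerChannel.

```go
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/health"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatalf("listen: %v", err)
	}

	// NumStreamWorkers is the (experimental) option under discussion: it
	// pre-spawns a fixed pool of goroutines to process incoming streams.
	s := grpc.NewServer(grpc.NumStreamWorkers(16))

	// Register the standard health service just to have an RPC to load-test;
	// a real comparison would use the actual application service.
	healthpb.RegisterHealthServer(s, health.NewServer())

	log.Println("serving on :50051")
	if err := s.Serve(lis); err != nil {
		log.Fatalf("serve: %v", err)
	}
}
```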

github-actions commented on May 24, 2024

This issue is labeled as requiring an update from the reporter, and no update has been received after 6 days. If no update is provided in the next 7 days, this issue will be automatically closed.

arvindbr8 commented on May 24, 2024

Hi @YangXin-Sheep -- like @SaveTheRbtz suggested, it would be great if you could show us the benefit of a buffered channel using a gRPC-Go example, since the benchmark results you provided only show the benefit of buffered Go channels in isolation.

That being said, even if we decide to enable NumStreamWorkers with buffering on the serverWorkerChannel, it is not clear what the right size of the buffer should be.
