
Comments (5)

giltene commented on September 1, 2024

This is a classic example of correct reporting of experienced latency when the target system simply can’t keep up with the requested rate, and incorporates back pressure in its network protocols (e.g. “uses TCP”, or “puts orders for coffee in a pile of paper slips that baristas pick up and execute”).

Wrk2 reports the response time that would be experienced by clients if those clients actually attempted to access the system at the rate requested, not if the clients were willing to wait and come back when the system is ready to respond, without holding that wait time against it.

Imagine 1,000 people per hour came to a coffee shop and waited in line for coffee, while the shop had 10 baristas furiously picking up orders and executing them at a total rate of 127.56 coffee cups per hour between them. A “coffee shark” watching the baristas take the next pending order from the pile and make it would report an average of 4.7 minutes to complete a cup of coffee (60 * 10 / 127.56). But the median person in line would report “it takes way more than 5 minutes to get a cup of coffee here”.

Your command line asked the test to generate 1,000 reqs per second (with at most 10 requests in flight at any given time, -c 10), and your output shows that under that setup the system was only able to serve 127.56 reqs per second on average. That means a backlog was accumulating throughout your 60-second test, and requests in that backlog experience multi-second latencies dominated by the wait time before they hit the wire for your shark to measure.
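For reference, the setup described above corresponds to an invocation along these lines (the URL and thread count here are placeholders, not taken from the original question):

```shell
# Hold a constant offered rate of 1000 req/s for 60 s over 10 connections.
# -R/--rate is the throughput flag wrk2 adds on top of wrk.
wrk -t2 -c10 -d60s -R1000 --latency http://localhost:8080/
```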

If the service rate were completely stable at the average of 127.56 reqs per second, you would have processed ~7,652 requests during the 60 seconds the test ran for, while accumulating a backlog of ~872 requests every second. A total of 60,000 requests would be eligible to start during the 60-second window at your requested rate of 1,000/sec, but most of those (~52,348) will never complete within the test window. Of the ones that do complete by the 60-second mark, the last one, the ~7,652nd request sent, was supposed to go out ~7.652 seconds after the test period started (at 1,000/sec as requested), and would have taken ~52.348 seconds from when it was eligible to be sent until a response was received. But all that Wireshark will see of that time is the tiny fraction of a second the request spends on the wire during the last gasp of the 60-second test period.
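The backlog arithmetic above can be checked in a few lines; the 127.56 req/s figure is taken from the output quoted in the question, and the results match the ~7,652 / ~52,348 / ~7.652 s / ~52.348 s figures up to rounding:

```python
offered_rate = 1000.0      # requested rate (req/s)
service_rate = 127.56      # measured throughput (req/s)
duration = 60.0            # test length (s)

completed = service_rate * duration          # requests actually served (~7,654)
offered = offered_rate * duration            # requests eligible to start (60,000)
backlog_rate = offered_rate - service_rate   # backlog growth (~872 req/s)
never_complete = offered - completed         # requests that never finish (~52,346)

# The last request that completes was due to be sent completed/offered_rate
# seconds into the run, so it spent the rest of the window waiting.
send_time_of_last = completed / offered_rate  # ~7.65 s
wait_of_last = duration - send_time_of_last   # ~52.35 s

print(completed, backlog_rate, never_complete)
print(send_time_of_last, wait_of_last)
```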

from wrk2.

wenxzhen commented on September 1, 2024

Thanks to @giltene.

from wrk2.

wenxzhen commented on September 1, 2024

@giltene, appending another question here: how does wrk2 calculate the "mean lat." and "rate sampling interval"? Is it possible to do a warm-up before the test?

from wrk2.

giltene commented on September 1, 2024

There is already a built-in warmup: a ~10-second "calibration" phase runs at full speed but is not counted in the eventual results.
https://github.com/giltene/wrk2/blob/master/src/wrk.c#L344 is triggered ~CALIBRATE_DELAY_MS into the start of the run.
Unfortunately, there is currently no flag to control this. CALIBRATE_DELAY_MS is a compile-time constant of 10000. You can always edit wrk.h and recompile wrk2...

from wrk2.

Fatahir commented on September 1, 2024

@giltene do you know how to get the latency of requests per unit time? I think we can do it in the done() function through latency(i), but I don't know how. The latency parameter of the done() function is userdata.
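For what it's worth, here is a sketch of reading latency stats inside a done() handler, using only the accessors wrk's SCRIPTING notes describe on the latency userdata (min, max, mean, stdev, and :percentile()); how latency(i) indexing behaves is best checked against wrk2's script.c:

```lua
-- Sketch: summarize recorded latency at the end of a run.
-- Latency values are reported in microseconds.
function done(summary, latency, requests)
   io.write(string.format("mean latency: %.2f ms\n", latency.mean / 1000))
   for _, p in ipairs({ 50, 90, 99, 99.9 }) do
      io.write(string.format("p%g: %.2f ms\n", p, latency:percentile(p) / 1000))
   end
end
```

Save this in a script file and pass it with -s; done() is called once after the run completes.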

from wrk2.
