statsdcc's People

Contributors

rrusso1982, sdomalap, setaou

statsdcc's Issues

Metrics take too long to show up

It seems that metrics take up to 3 minutes to show up in Grafana (which is rendering from Graphite/Carbon).

I tried configuring different flush frequencies (60 seconds, 30 seconds, 10 seconds, and 1 second). I have also tried several worker and UDP thread counts, without luck.

Our current production metrics, obtained from StatsD, are rendered every minute without any additional lag.

Proxy tuning

The workers' queues are filling up and the proxy starts dropping metrics. I tried increasing the number of workers, but it took 2048 workers to stop the drops, which doesn't sound like the right approach (and CPU usage goes to 100%).

I have also tried increasing the size of Boost's lock-free queue from 10000 to 65000 (I noticed there's a TODO about this in the code as well), without luck. Is it possible to leave the queue without a fixed size?
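
For reference, here is a minimal sketch (not statsdcc's actual code) of the two ways a boost::lockfree::queue can be sized. As far as I understand Boost's documentation, with the default fixed_sized<false> policy the queue allocates extra nodes from the heap when its preallocated freelist is exhausted instead of failing, so an "unbounded" queue seems possible, at the cost of push() not being strictly lock-free while it allocates:

// Illustration only, not statsdcc code: bounded vs. growable boost::lockfree::queue.
#include <boost/lockfree/queue.hpp>
#include <string>

// Bounded: capacity is fixed at compile time, so push() returns false once
// 10000 elements are in flight -- this is when metrics get dropped.
boost::lockfree::queue<std::string*, boost::lockfree::capacity<10000>> bounded;

// Growable: 10000 nodes are preallocated, but with the default
// fixed_sized<false> policy push() allocates additional nodes from the heap
// when the freelist is empty instead of failing (push is then not lock-free).
boost::lockfree::queue<std::string*> growable(10000);

int main() {
  growable.push(new std::string("foo:1|c"));  // should not fail for capacity reasons

  std::string* msg = nullptr;
  while (growable.pop(msg)) delete msg;
  return 0;
}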

I am also noticing high system CPU usage (which I don't see in aggregator mode).

How do you guys tune this in production?

Flush frequency behavior

My config:

{
    "servers": {
      "udp": {
        "port": 8125
      },
      "tcp": {
        "port": 8125
      }
    },
    "frequency": 10,
    "log_level": "debug",
    "backends": {
      "stdout": true
    }
}

Environment: a Docker container running Amazon Linux 2 (amzn2).

How to reproduce:

  1. start statsdcc with the config above
  2. echo "foo:1|c" | nc -u 127.0.0.1 8125
  3. wait more than 10 seconds
  4. echo "bar:1|c" | nc -u 127.0.0.1 8125
  5. info about foo is printed out
  6. bar is never printed if no further metrics are sent to statsdcc

I expected statsdcc to flush metrics automatically every 10 seconds, but it seems the flush is only triggered by a metric arriving after the 10-second window has passed.
Is this behavior by design, or am I missing something?
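
To make the question concrete, here is a rough sketch (purely illustrative; these are not statsdcc's actual functions) of the two behaviors: the one the steps above suggest, where the elapsed time is only checked when a new metric arrives, versus the one I expected, where a timer flushes every `frequency` seconds unconditionally:

// Illustration only: "flush on next metric" vs. "flush on a timer".
#include <chrono>
#include <cstdio>
#include <thread>

void flush_all_metrics() { std::puts("flush"); }  // stand-in for the real flush

// What the observed behavior suggests: the interval is only checked when a
// metric arrives, so the last window is not flushed until new traffic shows up.
void on_metric_received() {
  using clock = std::chrono::steady_clock;
  static auto last_flush = clock::now();
  if (clock::now() - last_flush >= std::chrono::seconds(10)) {
    flush_all_metrics();
    last_flush = clock::now();
  }
  // ... aggregate the incoming metric ...
}

// What I expected: a dedicated thread that flushes every `frequency` seconds,
// whether or not anything new has arrived.
void flush_loop() {
  for (;;) {
    std::this_thread::sleep_for(std::chrono::seconds(10));
    flush_all_metrics();
  }
}

int main() {
  std::thread(flush_loop).detach();  // expected: timer-driven flush
  on_metric_received();              // observed: flush piggybacks on incoming traffic
  std::this_thread::sleep_for(std::chrono::seconds(11));
  return 0;
}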

Metrics drop

I have just set up a statsdcc-proxy with two statsdcc-aggregators and one carbon backend.

After doing this, the reported metrics dropped to roughly a quarter of what they were with a single statsdcc-aggregator.

My understanding is that the proxy hashes each metric so that the same metric is always received by the same statsdcc-aggregator, so the sharding itself shouldn't be the problem.
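
For context, this is the model I have in mind (statsdcc's actual hashing scheme may well be different; the hash-modulo here is only for illustration): each metric name maps deterministically to one aggregator, so each aggregator should see a subset of the names rather than a fraction of every metric's samples.

// Illustration only: a deterministic metric-name -> aggregator mapping.
#include <cstdio>
#include <functional>
#include <string>

std::size_t pick_aggregator(const std::string& metric_name, std::size_t num_aggregators) {
  return std::hash<std::string>{}(metric_name) % num_aggregators;  // simplistic sharding
}

int main() {
  const char* names[] = {"app.node1.requests", "app.node2.requests", "app.node1.latency"};
  for (const char* n : names)
    std::printf("%s -> aggregator %zu\n", n, pick_aggregator(n, 2));
  return 0;
}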

It also seems that I'm losing metrics from certain application nodes that were reporting fine before I added the proxy.

Any ideas why I'm facing this behavior?

Project dependencies

Could you provide a list of dependencies for the project? I tried to build it according to the instructions and got:

/usr/local/include/json/value.h:325:10: note:   initializing argument 1 of ‘Json::Value& Json::Value::operator=(Json::Value)’
   Value& operator=(Value other);
          ^
lib/CMakeFiles/proxy_http_server.dir/build.make:62: recipe for target 'lib/CMakeFiles/proxy_http_server.dir/net/servers/http/proxy/http_server.cc.o' failed
make[2]: *** [lib/CMakeFiles/proxy_http_server.dir/net/servers/http/proxy/http_server.cc.o] Error 1
CMakeFiles/Makefile2:218: recipe for target 'lib/CMakeFiles/proxy_http_server.dir/all' failed
make[1]: *** [lib/CMakeFiles/proxy_http_server.dir/all] Error 2
Makefile:94: recipe for target 'all' failed
make: *** [all] Error 

Docker image

Is there a docker image available for setting up statsdcc?

Proxy Optimization

It seems that once the proxy receives a multi-metric packet, it parses all the metrics, hashes each one, and sends each of them individually to the corresponding aggregator.

I'm wondering what impact sending multi-metric packets to the aggregators could have. This would mean accumulating metrics until a certain size is exceeded (or a certain amount of time has passed) and then sending the packet; see the sketch below.

Have you thought of this? Are you planning to implement it?
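
To be concrete, this is the kind of per-aggregator batching I have in mind (a rough sketch with made-up names, not a patch against statsdcc): buffer metric lines per aggregator and flush when either a size limit or an age limit is hit. The size limit would presumably need to stay safely under the UDP payload size the aggregators accept.

// Illustration only: batch metric lines per aggregator with size and age limits.
#include <chrono>
#include <cstdio>
#include <string>
#include <unordered_map>

// Hypothetical stand-in for the existing per-metric UDP send.
void send_packet(int aggregator, const std::string& payload) {
  std::printf("to aggregator %d:\n%s", aggregator, payload.c_str());
}

class Batcher {
 public:
  Batcher(std::size_t max_bytes, std::chrono::milliseconds max_age)
      : max_bytes_(max_bytes), max_age_(max_age) {}

  // Append one metric line; flush this aggregator's buffer once it is big enough.
  void add(int aggregator, const std::string& line) {
    Buffer& b = buffers_[aggregator];
    if (b.data.empty()) b.started = Clock::now();
    b.data += line;
    b.data += '\n';
    if (b.data.size() >= max_bytes_) flush(aggregator);
  }

  // Call periodically (e.g. from a timer) so quiet aggregators still get their data.
  void flush_stale() {
    for (auto& entry : buffers_) {
      Buffer& b = entry.second;
      if (!b.data.empty() && Clock::now() - b.started >= max_age_) flush(entry.first);
    }
  }

 private:
  using Clock = std::chrono::steady_clock;
  struct Buffer {
    std::string data;
    Clock::time_point started;
  };

  void flush(int aggregator) {
    Buffer& b = buffers_[aggregator];
    send_packet(aggregator, b.data);
    b.data.clear();
  }

  std::size_t max_bytes_;
  std::chrono::milliseconds max_age_;
  std::unordered_map<int, Buffer> buffers_;
};

int main() {
  Batcher batcher(16, std::chrono::milliseconds(200));  // tiny limits for the demo
  batcher.add(0, "foo:1|c");  // buffered
  batcher.add(0, "bar:1|c");  // buffer reaches 16 bytes -> one packet with both lines
  batcher.flush_stale();      // nothing stale left to flush here
  return 0;
}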
