
unbound-telemetry's Introduction

unbound-telemetry



Unbound DNS resolver metrics exporter for Prometheus

Deprecation notice

This repo is archived and will not be maintained anymore. You can use the unbound_exporter by Let's Encrypt instead.

Features

  • Communicates with unbound via TLS, UDS socket or shared memory
  • Compatible with kumina/unbound_exporter; your dashboard should just work
  • Small binary size (~2 MB after strip) and memory footprint (~10 KB)
  • Takes ~10 ms to respond with gathered metrics
  • Blazing fast!

Platform support

This project is developed and tested, both manually and automatically, on Linux.

The following platforms are tested in the CI environment and are expected to work:

  • Windows
  • macOS

FreeBSD, NetBSD, and OpenBSD are expected to work too, but no manual or automated checks exist for them.

Note that communication via a UDS socket or shared memory is not supported on Windows.

Installation

From sources

  1. Rust compiler version 1.46 or newer is required
  2. Clone the repository
  3. Run the following command
    $ cargo build --release
  4. Get the compiled executable from ./target/release/unbound-telemetry

Usage

The HTTP interface is available at http://0.0.0.0:9167 by default; the address can be changed via CLI arguments.
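For scraping, a minimal Prometheus configuration sketch could look like this (job name and target address are assumptions matching the default port above):

```yaml
scrape_configs:
  - job_name: 'unbound'
    # Default exporter address; adjust if you changed it via CLI arguments.
    static_configs:
      - targets: ['127.0.0.1:9167']
```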

TCP socket

First of all, enable the remote-control option in unbound.conf and configure the control interface address and TLS if needed.
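For reference, a minimal unbound.conf sketch for TLS-protected remote control might look like this (file paths, address, and port are assumptions; see unbound.conf(5) for the full set of options):

```
remote-control:
    control-enable: yes
    control-interface: 127.0.0.1
    control-port: 8953
    server-key-file: "/etc/unbound/unbound_server.key"
    server-cert-file: "/etc/unbound/unbound_server.pem"
    control-key-file: "/etc/unbound/unbound_control.key"
    control-cert-file: "/etc/unbound/unbound_control.pem"
```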

Run the following command to see possible flags and options:

$ unbound-telemetry tcp --help

Unix domain socket

Similar to the TCP socket, you need to enable the remote-control option first.

Run the following command to see possible flags and options:

$ unbound-telemetry uds --help
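A Unix-socket control interface can be sketched in unbound.conf like this (the socket path is an assumption; giving a filesystem path instead of an IP address makes unbound listen on a Unix domain socket):

```
remote-control:
    control-enable: yes
    # A path instead of an IP address selects a Unix domain socket.
    control-interface: /run/unbound.ctl
```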

Shared memory

Enable the shm-enable option in unbound.conf and run the following command:

$ unbound-telemetry shm --help
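A minimal unbound.conf sketch enabling shared-memory statistics might look like this (the shm-key value shown is unbound's documented default):

```
server:
    extended-statistics: yes
    shm-enable: yes
    shm-key: 11777
```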

Monitoring

The /healthcheck URL can be used for automated monitoring; if the exporter is unable to reach the unbound instance, an HTTP 500 error is returned and the response body contains a plain-text error description.

Grafana

This Grafana dashboard can be used to show all metrics provided by this exporter.

License

unbound-telemetry is released under the MIT License.

Donations

If you appreciate my work and want to support me or this project, you can do it here.

unbound-telemetry's People

Contributors

catholic-indulgence-vaper, dependabot-preview[bot], svartalf, thergbway


unbound-telemetry's Issues

Telemetry `/healthcheck` method causes `could not SSL_read crypto error` on unbound resolver

I use both unbound-telemetry endpoints: /healthcheck and /metrics.
Everything works fine except /healthcheck: it causes errors on the unbound resolver, so my unbound.log is full of:

[1641860821] unbound[1:0] error: could not SSL_read crypto error:00000000:lib(0):func(0):reason(0)
[1641860823] unbound[1:0] error: could not SSL_read crypto error:00000000:lib(0):func(0):reason(0)
........
[1641860825] unbound[1:0] error: could not SSL_read crypto error:00000000:lib(0):func(0):reason(0)

This doesn't seem like a big problem, but the log fills up with error messages.
After researching, I found that the /healthcheck implementation closes the connection to unbound incorrectly (possibly due to how the tokio library closes the socket). Sending some payload to unbound before closing fixes the problem.

    async fn healthcheck(&self) -> io::Result<()> {
        // This causes the problem on the unbound side:
        // let _ = self.transport.connect().await?;

        // This fixes the problem:
        let mut socket = self.transport.connect().await?;
        socket.write_all(b" ").await?;

        Ok(())
    }

This is just an idea of how to fix it; I believe there is a better solution.

Listen on dual socket

I would like to run the exporter on IPv4 and IPv6 dual stack which is right now not possible.
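As a workaround sketch (not an existing exporter feature), a process could bind separate IPv4 and IPv6 listeners; port 0 is used here for illustration, while the exporter defaults to 9167:

```rust
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // Bind separate IPv4 and IPv6 listeners so both stacks are served;
    // a single socket covers both only when the OS maps IPv4 into IPv6
    // (IPV6_V6ONLY disabled), which is platform-dependent.
    let v4 = TcpListener::bind("0.0.0.0:0")?;
    println!("IPv4 listener on {}", v4.local_addr()?);

    // IPv6 may be unavailable (e.g. disabled in the kernel), so don't fail hard.
    match TcpListener::bind("[::]:0") {
        Ok(v6) => println!("IPv6 listener on {}", v6.local_addr()?),
        Err(e) => eprintln!("IPv6 bind failed: {e}"),
    }
    Ok(())
}
```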

RUSTSEC-2021-0139: ansi_term is Unmaintained

Details
Status unmaintained
Package ansi_term
Version 0.12.1
URL ogham/rust-ansi-term#72
Date 2021-08-18

The maintainer has advised that this crate is deprecated and will not
receive any maintenance.

The crate does not seem to have many dependencies and may or may not be OK to use as-is.

The last release appears to have been three years ago.

Possible Alternative(s)

The below list has not been vetted in any way and may or may not contain alternatives.

See advisory page for additional details.

Unable to parse num.query.udpout (unknown key)

On Unbound 1.16.2 I get these warnings in my syslog:

Aug 17 12:19:29 doh unbound-telemetry[1111]: WARN  [unbound_telemetry::statistics::parser] Unable to parse 'num.query.udpout=2415', unknown key
Aug 17 12:20:29 doh unbound-telemetry[1111]: WARN  [unbound_telemetry::statistics::parser] Unable to parse 'num.query.udpout=3831', unknown key
Aug 17 12:21:29 doh unbound-telemetry[1111]: WARN  [unbound_telemetry::statistics::parser] Unable to parse 'num.query.udpout=4043', unknown key

Pre-built packages

Would be nice to have them at least for Linux, Windows and macOS x86_64

Versioned releases

Would it be possible to tag specific versions / cut releases, so that it's easier to track changes and create versioned artifacts for deployment?

Unable to observe unbound statistics: Unable to parse 'unknown record type' value

I am running Unbound on a FreeBSD 13.0-RELEASE-p3 system and I am attempting to get unbound-telemetry working again after updating Unbound from 1.13.1 to 1.13.2. This same system worked just fine when I was running Unbound 1.13.1. I should add that unbound-control stats shows me statistics just fine.

Here is the debug-level output from unbound-telemetry:

% sudo /usr/local/bin/unbound-telemetry tcp -b 0.0.0.0:9167 --server-cert-file /usr/local/etc/unbound/unbound_server.pem --control-cert-file /usr/local/etc/unbound/unbound_control.pem --control-key-file /usr/local/etc/unbound/unbound_control.key -l debug
INFO  [unbound_telemetry::server] Listening on 0.0.0.0:9167
DEBUG [hyper::proto::h1::io] read 236 bytes
DEBUG [hyper::proto::h1::io] parsed 5 headers
DEBUG [hyper::proto::h1::conn] incoming body is empty
ERROR [unbound_telemetry::server] Unable to observe unbound statistics: Unable to parse 'unknown record type' value
DEBUG [hyper::proto::h1::io] flushed 204 bytes

I tried pulling down and building the latest commits, but I get the same error.

Clearly, the update of Unbound from 1.13.1 to 1.13.2 broke something, but it would be helpful if unbound-telemetry told me which key failed.

I would also appreciate any assistance in debugging what broke.

RUSTSEC-2021-0078: Lenient `hyper` header parsing of `Content-Length` could allow request smuggling

Details
Package hyper
Version 0.13.10
URL GHSA-f3pg-qwvg-p99c
Date 2021-07-07
Patched versions >=0.14.10

hyper's HTTP header parser accepted, according to RFC 7230, illegal contents inside Content-Length headers.
Due to this, an upstream HTTP proxy that chooses to ignore the error may still forward such headers along.

To be vulnerable, hyper must be used as an HTTP/1 server behind an upstream HTTP proxy that ignores the header's contents
but still forwards it. Due to all the factors that must line up, an attack exploiting this vulnerability is unlikely.

See advisory page for additional details.

`num.zero_ttl` renamed to `num.expired` in unbound 1.10.1

Following this commit
NLnetLabs/unbound@f7fe95a

The statistic num.zero_ttl has been renamed to num.expired, causing telemetry to ignore this "new" statistic:

WARN  [unbound_telemetry::statistics::parser] Unable to parse 'thread0.num.expired=0', unknown key
WARN  [unbound_telemetry::statistics::parser] Unable to parse 'thread1.num.expired=0', unknown key
WARN  [unbound_telemetry::statistics::parser] Unable to parse 'total.num.expired=0', unknown key
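One way such renames and new counters could be tolerated is a parser that keeps known keys and silently skips the rest. A minimal sketch, not the project's actual parser, with a hypothetical known-key list:

```rust
use std::collections::HashMap;

/// Parse `unbound-control stats` output into known metrics,
/// silently skipping unknown keys instead of warning or failing.
/// The known-key list here is a tiny hypothetical subset.
fn parse_stats(raw: &str) -> HashMap<String, f64> {
    let known = ["total.num.queries", "total.num.cachehits", "total.num.expired"];
    raw.lines()
        .filter_map(|line| line.split_once('='))
        .filter(|(key, _)| known.contains(key))
        .filter_map(|(key, value)| {
            value.trim().parse::<f64>().ok().map(|v| (key.to_string(), v))
        })
        .collect()
}

fn main() {
    let raw = "total.num.queries=100\ntotal.num.expired=3\nnum.query.udpout=5\n";
    let stats = parse_stats(raw);
    // num.query.udpout is not in the known list, so only two entries remain.
    assert_eq!(stats.len(), 2);
    println!("{stats:?}");
}
```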

Add new metrics

Dec 24 14:40:22 unbound-telemetry[4001737]: WARN  [unbound_telemetry::statistics::parser] Unable to parse 'mem.http.query_buffer=0', unknown key
Dec 24 14:40:22 unbound-telemetry[4001737]: WARN  [unbound_telemetry::statistics::parser] Unable to parse 'mem.http.response_buffer=0', unknown key
Dec 24 14:40:22 unbound-telemetry[4001737]: WARN  [unbound_telemetry::statistics::parser] Unable to parse 'num.query.https=0', unknown key

Unable to observe unbound statistics: Unable to parse 'CLASS0' value

Hi,

I'm getting the following error:
Unable to observe unbound statistics: Unable to parse 'CLASS0' value

Telemetry version: unbound-telemetry 0.1.0

Stats output:

thread0.num.queries=610401755
thread0.num.queries_ip_ratelimited=0
thread0.num.cachehits=588162357
thread0.num.cachemiss=22239398
thread0.num.prefetch=4495936
thread0.num.zero_ttl=0
thread0.num.recursivereplies=22239338
thread0.requestlist.avg=101.415
thread0.requestlist.max=1808
thread0.requestlist.overwritten=0
thread0.requestlist.exceeded=0
thread0.requestlist.current.all=124
thread0.requestlist.current.user=60
thread0.recursion.time.avg=0.311847
thread0.recursion.time.median=0.104955
thread0.tcpusage=3
thread1.num.queries=611224065
thread1.num.queries_ip_ratelimited=0
thread1.num.cachehits=588592233
thread1.num.cachemiss=22631832
thread1.num.prefetch=4496075
thread1.num.zero_ttl=0
thread1.num.recursivereplies=22631768
thread1.requestlist.avg=102.598
thread1.requestlist.max=1919
thread1.requestlist.overwritten=0
thread1.requestlist.exceeded=0
thread1.requestlist.current.all=106
thread1.requestlist.current.user=64
thread1.recursion.time.avg=0.312118
thread1.recursion.time.median=0.104841
thread1.tcpusage=0
thread2.num.queries=609543483
thread2.num.queries_ip_ratelimited=0
thread2.num.cachehits=587407795
thread2.num.cachemiss=22135688
thread2.num.prefetch=4491317
thread2.num.zero_ttl=0
thread2.num.recursivereplies=22135634
thread2.requestlist.avg=100.85
thread2.requestlist.max=2012
thread2.requestlist.overwritten=0
thread2.requestlist.exceeded=0
thread2.requestlist.current.all=110
thread2.requestlist.current.user=54
thread2.recursion.time.avg=0.316015
thread2.recursion.time.median=0.104826
thread2.tcpusage=0
thread3.num.queries=609563048
thread3.num.queries_ip_ratelimited=0
thread3.num.cachehits=587162114
thread3.num.cachemiss=22400934
thread3.num.prefetch=4497989
thread3.num.zero_ttl=0
thread3.num.recursivereplies=22400890
thread3.requestlist.avg=101.969
thread3.requestlist.max=1876
thread3.requestlist.overwritten=0
thread3.requestlist.exceeded=0
thread3.requestlist.current.all=97
thread3.requestlist.current.user=44
thread3.recursion.time.avg=0.315178
thread3.recursion.time.median=0.104789
thread3.tcpusage=4
thread4.num.queries=611964503
thread4.num.queries_ip_ratelimited=0
thread4.num.cachehits=588905687
thread4.num.cachemiss=23058816
thread4.num.prefetch=4502846
thread4.num.zero_ttl=0
thread4.num.recursivereplies=23058733
thread4.requestlist.avg=101.907
thread4.requestlist.max=2091
thread4.requestlist.overwritten=0
thread4.requestlist.exceeded=0
thread4.requestlist.current.all=143
thread4.requestlist.current.user=83
thread4.recursion.time.avg=0.304069
thread4.recursion.time.median=0.104932
thread4.tcpusage=2
thread5.num.queries=612029690
thread5.num.queries_ip_ratelimited=0
thread5.num.cachehits=589898464
thread5.num.cachemiss=22131226
thread5.num.prefetch=4497126
thread5.num.zero_ttl=0
thread5.num.recursivereplies=22131177
thread5.requestlist.avg=101.263
thread5.requestlist.max=1873
thread5.requestlist.overwritten=0
thread5.requestlist.exceeded=0
thread5.requestlist.current.all=96
thread5.requestlist.current.user=49
thread5.recursion.time.avg=0.314101
thread5.recursion.time.median=0.104745
thread5.tcpusage=1
thread6.num.queries=609613369
thread6.num.queries_ip_ratelimited=0
thread6.num.cachehits=586940742
thread6.num.cachemiss=22672627
thread6.num.prefetch=4497511
thread6.num.zero_ttl=0
thread6.num.recursivereplies=22672576
thread6.requestlist.avg=101.29
thread6.requestlist.max=1899
thread6.requestlist.overwritten=0
thread6.requestlist.exceeded=0
thread6.requestlist.current.all=93
thread6.requestlist.current.user=51
thread6.recursion.time.avg=0.306884
thread6.recursion.time.median=0.104772
thread6.tcpusage=0
thread7.num.queries=608759644
thread7.num.queries_ip_ratelimited=0
thread7.num.cachehits=586439335
thread7.num.cachemiss=22320309
thread7.num.prefetch=4490196
thread7.num.zero_ttl=0
thread7.num.recursivereplies=22320247
thread7.requestlist.avg=102.314
thread7.requestlist.max=1940
thread7.requestlist.overwritten=0
thread7.requestlist.exceeded=0
thread7.requestlist.current.all=109
thread7.requestlist.current.user=62
thread7.recursion.time.avg=0.316621
thread7.recursion.time.median=0.104756
thread7.tcpusage=3
total.num.queries=4883099557
total.num.queries_ip_ratelimited=0
total.num.cachehits=4703508727
total.num.cachemiss=179590830
total.num.prefetch=35968996
total.num.zero_ttl=0
total.num.recursivereplies=179590363
total.requestlist.avg=101.703
total.requestlist.max=2091
total.requestlist.overwritten=0
total.requestlist.exceeded=0
total.requestlist.current.all=878
total.requestlist.current.user=467
total.recursion.time.avg=0.312056
total.recursion.time.median=0.104827
total.tcpusage=13
time.now=1609591842.801112
time.up=181085.454982
time.elapsed=181085.454982
mem.cache.rrset=570420018
mem.cache.message=570425143
mem.mod.iterator=16588
mem.mod.validator=140747857
mem.mod.respip=0
mem.mod.subnet=140552
mem.mod.ipsecmod=0
mem.streamwait=0
histogram.000000.000000.to.000000.000001=13735612
histogram.000000.000001.to.000000.000002=0
histogram.000000.000002.to.000000.000004=0
histogram.000000.000004.to.000000.000008=0
histogram.000000.000008.to.000000.000016=0
histogram.000000.000016.to.000000.000032=1
histogram.000000.000032.to.000000.000064=97
histogram.000000.000064.to.000000.000128=377
histogram.000000.000128.to.000000.000256=698
histogram.000000.000256.to.000000.000512=1367
histogram.000000.000512.to.000000.001024=2558
histogram.000000.001024.to.000000.002048=5169
histogram.000000.002048.to.000000.004096=11247
histogram.000000.004096.to.000000.008192=2428458
histogram.000000.008192.to.000000.016384=2384985
histogram.000000.016384.to.000000.032768=408683
histogram.000000.032768.to.000000.065536=180786
histogram.000000.065536.to.000000.131072=117815982
histogram.000000.131072.to.000000.262144=21102257
histogram.000000.262144.to.000000.524288=15426745
histogram.000000.524288.to.000001.000000=2540766
histogram.000001.000000.to.000002.000000=1720689
histogram.000002.000000.to.000004.000000=1360061
histogram.000004.000000.to.000008.000000=139129
histogram.000008.000000.to.000016.000000=112267
histogram.000016.000000.to.000032.000000=120539
histogram.000032.000000.to.000064.000000=29664
histogram.000064.000000.to.000128.000000=22341
histogram.000128.000000.to.000256.000000=11351
histogram.000256.000000.to.000512.000000=12785
histogram.000512.000000.to.001024.000000=13622
histogram.001024.000000.to.002048.000000=2088
histogram.002048.000000.to.004096.000000=38
histogram.004096.000000.to.008192.000000=1
histogram.008192.000000.to.016384.000000=0
histogram.016384.000000.to.032768.000000=0
histogram.032768.000000.to.065536.000000=0
histogram.065536.000000.to.131072.000000=0
histogram.131072.000000.to.262144.000000=0
histogram.262144.000000.to.524288.000000=0
num.query.type.TYPE0=21838
num.query.type.A=4576639648
num.query.type.NS=195714
num.query.type.MF=1
num.query.type.CNAME=223477
num.query.type.SOA=252841
num.query.type.MR=2
num.query.type.NULL=238628
num.query.type.WKS=230227
num.query.type.PTR=8446520
num.query.type.HINFO=231428
num.query.type.MX=34204
num.query.type.TXT=510353
num.query.type.AAAA=151992709
num.query.type.NXT=3
num.query.type.SRV=1883446
num.query.type.NAPTR=1578821
num.query.type.DS=215
num.query.type.DNSKEY=252
num.query.type.TYPE65=140375490
num.query.type.TYPE96=1
num.query.type.ANY=82558
num.query.type.other=17173
num.query.class.CLASS0=2
num.query.class.IN=4882932525
num.query.class.CH=1073
num.query.class.HS=3
num.query.class.CLASS5=1
num.query.class.other=21945
num.query.opcode.QUERY=4882955549
num.query.tcp=353856
num.query.tcpout=8035409
num.query.tls=0
num.query.tls.resume=0
num.query.ipv6=0
num.query.flags.QR=0
num.query.flags.AA=4780
num.query.flags.TC=0
num.query.flags.RD=4881598247
num.query.flags.RA=1963578
num.query.flags.Z=125956
num.query.flags.AD=406164
num.query.flags.CD=348932
num.query.edns.present=1544145
num.query.edns.DO=559536
num.answer.rcode.NOERROR=4541650937
num.answer.rcode.FORMERR=74634
num.answer.rcode.SERVFAIL=27118108
num.answer.rcode.NXDOMAIN=258691024
num.answer.rcode.NOTIMPL=82358
num.answer.rcode.REFUSED=1126718
num.answer.rcode.nodata=143652060
num.query.ratelimited=0
num.answer.secure=50709972
num.answer.bogus=118372
num.rrset.bogus=435
num.query.aggressive.NOERROR=14204
num.query.aggressive.NXDOMAIN=3
unwanted.queries=0
unwanted.replies=189737
msg.cache.count=1800872
rrset.cache.count=1411946
infra.cache.count=986445
key.cache.count=378804
num.query.authzone.up=0
num.query.authzone.down=0
num.query.subnet=0
num.query.subnet_cache=0

RUSTSEC-2021-0124: Data race when sending and receiving after closing a `oneshot` channel

Details
Package tokio
Version 0.2.25
URL tokio-rs/tokio#4225
Date 2021-11-16
Patched versions >=1.8.4, <1.9.0,>=1.13.1
Unaffected versions <0.1.14

If a tokio::sync::oneshot channel is closed (via the
oneshot::Receiver::close method), a data race may occur if the
oneshot::Sender::send method is called while the corresponding
oneshot::Receiver is awaited or try_recv is called.

When these methods are called concurrently on a closed channel, the two halves
of the channel can concurrently access a shared memory location, resulting in a
data race. This has been observed to cause memory corruption.

Note that the race only occurs when both halves of the channel are used
after the Receiver half has called close. Code where close is not used, or where the
Receiver is not awaited and try_recv is not called after calling close,
is not affected.

See tokio#4225 for more details.

See advisory page for additional details.

Unable to observe unbound statistics

First of all, this is by far the best telemetry data I have seen for unbound.

Unfortunately, trying to fetch the metrics or the health check gives me the following error:
Unable to observe unbound statistics: failed to lookup address information: Name does not resolve

The control server seems to be available and reachable. Any ideas on how to resolve this?
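The error suggests the control host name itself does not resolve from the exporter's environment. A small hypothetical debugging sketch checks name resolution with the Rust standard library (host and port are assumptions; 8953 is unbound's default control port):

```rust
use std::net::ToSocketAddrs;

fn main() {
    // Check whether the control host name used in the exporter's
    // flags resolves at all from this machine.
    let host = "localhost:8953";
    match host.to_socket_addrs() {
        Ok(addrs) => println!("resolved: {:?}", addrs.collect::<Vec<_>>()),
        Err(e) => eprintln!("lookup failed: {e}"),
    }
}
```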

Unable to observe unbound statistics: Unable to parse 'NSAP-PTR' value

Hi,

I'm getting the following error:
Unable to observe unbound statistics: Unable to parse 'NSAP-PTR' value

root@ERS-203:~# unbound-control stats
thread0.num.queries=12627360
thread0.num.queries_ip_ratelimited=0
thread0.num.cachehits=12292681
thread0.num.cachemiss=334679
thread0.num.prefetch=363908
thread0.num.zero_ttl=286002
thread0.num.recursivereplies=334671
thread0.requestlist.avg=23.5715
thread0.requestlist.max=905
thread0.requestlist.overwritten=0
thread0.requestlist.exceeded=0
thread0.requestlist.current.all=19
thread0.requestlist.current.user=8
thread0.recursion.time.avg=0.471094
thread0.recursion.time.median=0.0615196
thread0.tcpusage=1
thread1.num.queries=12483349
thread1.num.queries_ip_ratelimited=0
thread1.num.cachehits=12147541
thread1.num.cachemiss=335808
thread1.num.prefetch=379956
thread1.num.zero_ttl=301021
thread1.num.recursivereplies=335799
thread1.requestlist.avg=24.4879
thread1.requestlist.max=855
thread1.requestlist.overwritten=0
thread1.requestlist.exceeded=0
thread1.requestlist.current.all=14
thread1.requestlist.current.user=6
thread1.recursion.time.avg=0.580630
thread1.recursion.time.median=0.0662706
thread1.tcpusage=0
thread2.num.queries=12675382
thread2.num.queries_ip_ratelimited=0
thread2.num.cachehits=12304968
thread2.num.cachemiss=370414
thread2.num.prefetch=392516
thread2.num.zero_ttl=299106
thread2.num.recursivereplies=370411
thread2.requestlist.avg=28.9079
thread2.requestlist.max=823
thread2.requestlist.overwritten=0
thread2.requestlist.exceeded=0
thread2.requestlist.current.all=19
thread2.requestlist.current.user=1
thread2.recursion.time.avg=0.603078
thread2.recursion.time.median=0.0790135
thread2.tcpusage=0
thread3.num.queries=13058721
thread3.num.queries_ip_ratelimited=0
thread3.num.cachehits=12636539
thread3.num.cachemiss=422182
thread3.num.prefetch=484305
thread3.num.zero_ttl=363643
thread3.num.recursivereplies=422150
thread3.requestlist.avg=54.9533
thread3.requestlist.max=874
thread3.requestlist.overwritten=0
thread3.requestlist.exceeded=0
thread3.requestlist.current.all=56
thread3.requestlist.current.user=32
thread3.recursion.time.avg=0.556060
thread3.recursion.time.median=0.0992251
thread3.tcpusage=4
thread4.num.queries=13459709
thread4.num.queries_ip_ratelimited=0
thread4.num.cachehits=12925374
thread4.num.cachemiss=534335
thread4.num.prefetch=506220
thread4.num.zero_ttl=376395
thread4.num.recursivereplies=534324
thread4.requestlist.avg=30.1408
thread4.requestlist.max=808
thread4.requestlist.overwritten=0
thread4.requestlist.exceeded=0
thread4.requestlist.current.all=16
thread4.requestlist.current.user=9
thread4.recursion.time.avg=0.449964
thread4.recursion.time.median=0.0764512
thread4.tcpusage=1
thread5.num.queries=12728479
thread5.num.queries_ip_ratelimited=0
thread5.num.cachehits=12386985
thread5.num.cachemiss=341494
thread5.num.prefetch=413588
thread5.num.zero_ttl=332805
thread5.num.recursivereplies=341489
thread5.requestlist.avg=24.5389
thread5.requestlist.max=826
thread5.requestlist.overwritten=0
thread5.requestlist.exceeded=0
thread5.requestlist.current.all=19
thread5.requestlist.current.user=3
thread5.recursion.time.avg=0.608134
thread5.recursion.time.median=0.0652771
thread5.tcpusage=1
thread6.num.queries=12579941
thread6.num.queries_ip_ratelimited=0
thread6.num.cachehits=12238847
thread6.num.cachemiss=341094
thread6.num.prefetch=377053
thread6.num.zero_ttl=294330
thread6.num.recursivereplies=341093
thread6.requestlist.avg=23.1885
thread6.requestlist.max=751
thread6.requestlist.overwritten=0
thread6.requestlist.exceeded=0
thread6.requestlist.current.all=13
thread6.requestlist.current.user=0
thread6.recursion.time.avg=0.515187
thread6.recursion.time.median=0.0675836
thread6.tcpusage=0
total.num.queries=89612941
total.num.queries_ip_ratelimited=0
total.num.cachehits=86932935
total.num.cachemiss=2680006
total.num.prefetch=2917546
total.num.zero_ttl=2253302
total.num.recursivereplies=2679937
total.requestlist.avg=30.8006
total.requestlist.max=905
total.requestlist.overwritten=0
total.requestlist.exceeded=0
total.requestlist.current.all=156
total.requestlist.current.user=59
total.recursion.time.avg=0.535307
total.recursion.time.median=0.0736201
total.tcpusage=7
time.now=1609950562.475494
time.up=21771.934211
time.elapsed=21771.934211
mem.cache.rrset=453388560
mem.cache.message=548318318
mem.mod.iterator=16588
mem.mod.validator=8912304
mem.mod.respip=0
mem.mod.subnet=140552
mem.streamwait=0
histogram.000000.000000.to.000000.000001=691633
histogram.000000.000001.to.000000.000002=0
histogram.000000.000002.to.000000.000004=0
histogram.000000.000004.to.000000.000008=0
histogram.000000.000008.to.000000.000016=0
histogram.000000.000016.to.000000.000032=9
histogram.000000.000032.to.000000.000064=15
histogram.000000.000064.to.000000.000128=16
histogram.000000.000128.to.000000.000256=50
histogram.000000.000256.to.000000.000512=88
histogram.000000.000512.to.000000.001024=174
histogram.000000.001024.to.000000.002048=412
histogram.000000.002048.to.000000.004096=829
histogram.000000.004096.to.000000.008192=2215
histogram.000000.008192.to.000000.016384=62065
histogram.000000.016384.to.000000.032768=105034
histogram.000000.032768.to.000000.065536=438177
histogram.000000.065536.to.000000.131072=341679
histogram.000000.131072.to.000000.262144=613722
histogram.000000.262144.to.000000.524288=272680
histogram.000000.524288.to.000001.000000=87752
histogram.000001.000000.to.000002.000000=27591
histogram.000002.000000.to.000004.000000=16068
histogram.000004.000000.to.000008.000000=7067
histogram.000008.000000.to.000016.000000=4225
histogram.000016.000000.to.000032.000000=4091
histogram.000032.000000.to.000064.000000=1620
histogram.000064.000000.to.000128.000000=889
histogram.000128.000000.to.000256.000000=640
histogram.000256.000000.to.000512.000000=877
histogram.000512.000000.to.001024.000000=287
histogram.001024.000000.to.002048.000000=26
histogram.002048.000000.to.004096.000000=5
histogram.004096.000000.to.008192.000000=1
histogram.008192.000000.to.016384.000000=0
histogram.016384.000000.to.032768.000000=0
histogram.032768.000000.to.065536.000000=0
histogram.065536.000000.to.131072.000000=0
histogram.131072.000000.to.262144.000000=0
histogram.262144.000000.to.524288.000000=0
num.query.type.TYPE0=2869
num.query.type.A=73456559
num.query.type.NS=19355
num.query.type.MD=1
num.query.type.CNAME=3518
num.query.type.SOA=102213
num.query.type.NULL=8
num.query.type.WKS=2831
num.query.type.PTR=796892
num.query.type.HINFO=2869
num.query.type.MX=5877
num.query.type.TXT=422067
num.query.type.NSAP-PTR=1
num.query.type.AAAA=9258386
num.query.type.SRV=1244403
num.query.type.NAPTR=67774
num.query.type.DS=1922
num.query.type.RRSIG=2242
num.query.type.DNSKEY=578
num.query.type.TYPE65=4139593
num.query.type.TYPE128=2
num.query.type.TYPE136=2
num.query.type.ANY=81211
num.query.type.other=81
num.query.class.CLASS0=7
num.query.class.IN=89604457
num.query.class.CLASS2=1
num.query.class.CH=131
num.query.class.CLASS6=1
num.query.class.CLASS54=1
num.query.class.ANY=120
num.query.class.other=6532
num.query.opcode.QUERY=89611250
num.query.tcp=101689
num.query.tcpout=161659
num.query.tls=0
num.query.tls.resume=0
num.query.ipv6=0
num.query.flags.QR=0
num.query.flags.AA=23271
num.query.flags.TC=0
num.query.flags.RD=89600199
num.query.flags.RA=15645
num.query.flags.Z=1458
num.query.flags.AD=10127
num.query.flags.CD=6886
num.query.edns.present=589847
num.query.edns.DO=16376
num.answer.rcode.NOERROR=76112012
num.answer.rcode.FORMERR=887
num.answer.rcode.SERVFAIL=314176
num.answer.rcode.NXDOMAIN=10929563
num.answer.rcode.NOTIMPL=0
num.answer.rcode.REFUSED=2255430
num.answer.rcode.nodata=5911499
num.query.ratelimited=0
num.answer.secure=2697235
num.answer.bogus=1553
num.rrset.bogus=564
num.query.aggressive.NOERROR=13172
num.query.aggressive.NXDOMAIN=21
unwanted.queries=0
unwanted.replies=1635
msg.cache.count=1724314
rrset.cache.count=1235845
infra.cache.count=9941
key.cache.count=18014
num.query.authzone.up=0
num.query.authzone.down=0
num.query.subnet=0

Unbound statistics

Hi
I'm using unbound version 1.13.1; remote control is active on the system's standard port. When I try to use telemetry via TCP, the metrics tab only shows that unbound is up and nothing else, even with the option "extended-statistics: yes".

If I try to use shm or uds, I get the following error: "# Unable to observe unbound statistics: No such file or directory (os error 2)"

Has this happened for you?

Unable to parse 'other' value

Under certain conditions, Unbound increments an "other" rtype counter, which makes unbound-telemetry unresponsive with the following message:

ERROR [unbound_telemetry::server] Unable to observe unbound statistics: Unable to parse 'other' value
# unbound-control stats_noreset | grep other
num.query.type.other=4

Running it in a Docker container

I have my unbound running in a Docker container and wanted to run the telemetry in a container beside it, but I can't figure out how to do that, or whether it is even possible.

Any help you could give or documentation you could point to would be appreciated, thanks.
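No official image or documented container setup exists for this project; one possible arrangement is sharing the control socket between containers via a volume, sketched here with hypothetical image names (for the exact uds flags, see `unbound-telemetry uds --help`):

```yaml
services:
  unbound:
    image: my-unbound-image        # hypothetical; your existing unbound container
    volumes:
      - unbound-control:/run/unbound
  telemetry:
    image: my-telemetry-image      # hypothetical; built from this repository
    command: ["unbound-telemetry", "uds"]  # add the socket path flag per `uds --help`
    ports:
      - "9167:9167"
    volumes:
      - unbound-control:/run/unbound
volumes:
  unbound-control:
```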

ERROR [unbound_telemetry::server] Unable to observe unbound statistics: Unable to parse 'TYPE0' value

Hello, I am getting the error:
"ERROR [unbound_telemetry::server] Unable to observe unbound statistics: Unable to parse 'TYPE0' value"

unbound version 1.9.0-2+deb10u2
Debian 10.6

Thanks.

unbound-control stats_noreset

thread0.num.queries=227964
thread0.num.queries_ip_ratelimited=0
thread0.num.cachehits=171380
thread0.num.cachemiss=56584
thread0.num.prefetch=0
thread0.num.zero_ttl=0
thread0.num.recursivereplies=56577
thread0.requestlist.avg=13.4816
thread0.requestlist.max=83
thread0.requestlist.overwritten=0
thread0.requestlist.exceeded=0
thread0.requestlist.current.all=15
thread0.requestlist.current.user=7
thread0.recursion.time.avg=0.140696
thread0.recursion.time.median=0.0554801
thread0.tcpusage=0
total.num.queries=227964
total.num.queries_ip_ratelimited=0
total.num.cachehits=171380
total.num.cachemiss=56584
total.num.prefetch=0
total.num.zero_ttl=0
total.num.recursivereplies=56577
total.requestlist.avg=13.4816
total.requestlist.max=83
total.requestlist.overwritten=0
total.requestlist.exceeded=0
total.requestlist.current.all=15
total.requestlist.current.user=7
total.recursion.time.avg=0.140696
total.recursion.time.median=0.0554801
total.tcpusage=0
time.now=1603905778.775090
time.up=31316.719528
time.elapsed=31316.719528
mem.cache.rrset=4456276
mem.cache.message=4456676
mem.mod.iterator=16588
mem.mod.validator=2306628
mem.mod.respip=0
mem.mod.subnet=74504
mem.streamwait=0
histogram.000000.000000.to.000000.000001=5011
histogram.000000.000001.to.000000.000002=0
histogram.000000.000002.to.000000.000004=0
histogram.000000.000004.to.000000.000008=0
histogram.000000.000008.to.000000.000016=0
histogram.000000.000016.to.000000.000032=0
histogram.000000.000032.to.000000.000064=0
histogram.000000.000064.to.000000.000128=2
histogram.000000.000128.to.000000.000256=9
histogram.000000.000256.to.000000.000512=2
histogram.000000.000512.to.000000.001024=1
histogram.000000.001024.to.000000.002048=11
histogram.000000.002048.to.000000.004096=23
histogram.000000.004096.to.000000.008192=2259
histogram.000000.008192.to.000000.016384=3138
histogram.000000.016384.to.000000.032768=8535
histogram.000000.032768.to.000000.065536=13414
histogram.000000.065536.to.000000.131072=12398
histogram.000000.131072.to.000000.262144=7229
histogram.000000.262144.to.000000.524288=2963
histogram.000000.524288.to.000001.000000=1023
histogram.000001.000000.to.000002.000000=337
histogram.000002.000000.to.000004.000000=113
histogram.000004.000000.to.000008.000000=38
histogram.000008.000000.to.000016.000000=17
histogram.000016.000000.to.000032.000000=34
histogram.000032.000000.to.000064.000000=17
histogram.000064.000000.to.000128.000000=3
histogram.000128.000000.to.000256.000000=0
histogram.000256.000000.to.000512.000000=0
histogram.000512.000000.to.001024.000000=0
histogram.001024.000000.to.002048.000000=0
histogram.002048.000000.to.004096.000000=0
histogram.004096.000000.to.008192.000000=0
histogram.008192.000000.to.016384.000000=0
histogram.016384.000000.to.032768.000000=0
histogram.032768.000000.to.065536.000000=0
histogram.065536.000000.to.131072.000000=0
histogram.131072.000000.to.262144.000000=0
histogram.262144.000000.to.524288.000000=0
num.query.type.TYPE0=8
num.query.type.A=180215
num.query.type.NS=11
num.query.type.CNAME=4
num.query.type.SOA=30
num.query.type.WKS=6
num.query.type.PTR=32273
num.query.type.HINFO=6
num.query.type.MX=14
num.query.type.TXT=295
num.query.type.AAAA=11209
num.query.type.SRV=536
num.query.type.NAPTR=91
num.query.type.TYPE65=3257
num.query.type.other=5
num.query.class.IN=227952
num.query.class.other=8
num.query.opcode.QUERY=227960
num.query.tcp=3
num.query.tcpout=438
num.query.tls=0
num.query.tls.resume=0
num.query.ipv6=0
num.query.flags.QR=0
num.query.flags.AA=0
num.query.flags.TC=0
num.query.flags.RD=227954
num.query.flags.RA=7
num.query.flags.Z=2
num.query.flags.AD=0
num.query.flags.CD=0
num.query.edns.present=173
num.query.edns.DO=3
num.answer.rcode.NOERROR=201577
num.answer.rcode.FORMERR=2
num.answer.rcode.SERVFAIL=176
num.answer.rcode.NXDOMAIN=26186
num.answer.rcode.NOTIMPL=0
num.answer.rcode.REFUSED=14
num.answer.rcode.nodata=6304
num.query.ratelimited=0
num.answer.secure=8714
num.answer.bogus=0
num.rrset.bogus=0
num.query.aggressive.NOERROR=148
num.query.aggressive.NXDOMAIN=0
unwanted.queries=0
unwanted.replies=1
msg.cache.count=13202
rrset.cache.count=13671
infra.cache.count=9976
key.cache.count=4394
num.query.authzone.up=0
num.query.authzone.down=0
num.query.subnet=0
num.query.subnet_cache=0

Grafana dashboards show N/A

I run this command:

root@linux:~/build# curl 127.0.0.1:9167/metrics | grep '^unbound_up'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 17143    0 17143    0     0  1674k      0 --:--:-- --:--:-- --:--:-- 1674k
unbound_up 1


statistics for query names

Hi, is there a metric that gives the query name? I mean, I would like to see how many queries are sent to resolve "youtube.com", for example. I would like to put that info in my Grafana dashboard.

Cannot run the script on a Raspberry Pi

Sorry if this solution just cannot work on a Raspberry Pi, but I have a 4B and after building unbound-telemetry I keep getting errors:

Here's my unbound.conf


# /etc/unbound/unbound.conf.d directory.
include: "/etc/unbound/unbound.conf.d/*.conf"
    server:
      # These options should be added to the existing server configuration,
      # overwriting existing values if they're there.

      # This refreshes expiring cache entries if they have been accessed with
      # less than 10% of their TTL remaining
      prefetch: yes

      # This attempts to reduce latency by serving the outdated record before
      # updating it instead of the other way around. Alternative is to increase
      # cache-min-ttl to e.g. 3600.
      cache-min-ttl: 0
      serve-expired: yes
      # I had best success leaving this next entry unset.
      # serve-expired-ttl: 3600 # 0 or not set means unlimited (I think)

      # Use about 2x more for rrset cache, total memory use is about 2-2.5x
      # total cache size. Current setting is way overkill for a small network.
      # Judging from my used cache size you can get away with 8/16 and still
      # have lots of room, but I've got the ram and I'm not using it on anything else.
      # Default is 4m/4m
      msg-cache-size: 128m
      rrset-cache-size: 256m

      # enable shm for stats, default no.  if you enable also enable
      # statistics-interval, every time it also writes stats to the
      # shared memory segment keyed with shm-key.
      #shm-enable: yes

      statistics-interval: 0
      extended-statistics: yes
      statistics-cumulative: yes

remote-control:
# Enable remote control with unbound-control(8) here.
control-enable: yes
# what interfaces are listened to for remote control.
control-interface: 127.0.0.1
# port number for remote control operations.
control-port: 8953



Unbound is running:


pi@pihole1:~/unbound-telemetry $ sudo netstat -pant | grep 8953
tcp        0      0 127.0.0.1:8953          0.0.0.0:*               LISTEN      27479/unbound

Here's the error I'm getting when I click the Metrics hyperlink:
# Unable to observe unbound statistics: Connection reset by peer (os error 104)

Log trace


pi@pihole1:~/unbound-telemetry $ sudo ./unbound-telemetry tcp -b 0.0.0.0:9167 --control-interface 127.0.0.1:8953 --log-level trace
TRACE [mio::poll] registering with poller
INFO  [unbound_telemetry::server] Listening on 0.0.0.0:9167
TRACE [mio::poll] registering with poller
TRACE [mio::poll] registering with poller
TRACE [hyper::proto::h1::conn] Conn::read_head
TRACE [hyper::proto::h1::conn] flushed({role=server}): State { reading: Init, writing: Init, keep_alive: Busy }
TRACE [hyper::proto::h1::conn] Conn::read_head
DEBUG [hyper::proto::h1::io] read 513 bytes
TRACE [hyper::proto::h1::role] Request.parse([Header; 100], [u8; 513])
TRACE [hyper::proto::h1::role] Request.parse Complete(513)
DEBUG [hyper::proto::h1::io] parsed 11 headers
DEBUG [hyper::proto::h1::conn] incoming body is empty
TRACE [mio::poll] registering with poller
TRACE [hyper::proto::h1::conn] flushed({role=server}): State { reading: KeepAlive, writing: Init, keep_alive: Busy }
TRACE [hyper::proto::h1::conn] flushed({role=server}): State { reading: KeepAlive, writing: Init, keep_alive: Busy }
TRACE [hyper::proto::h1::conn] flushed({role=server}): State { reading: KeepAlive, writing: Init, keep_alive: Busy }
TRACE [hyper::proto::h1::conn] flushed({role=server}): State { reading: KeepAlive, writing: Init, keep_alive: Busy }
TRACE [mio::poll] deregistering handle with poller
ERROR [unbound_telemetry::server] Unable to observe unbound statistics: Connection reset by peer (os error 104)
TRACE [hyper::proto::h1::role] Server::encode status=500, body=Some(Known(79)), req_method=Some(GET)
DEBUG [hyper::proto::h1::io] flushed 200 bytes
TRACE [hyper::proto::h1::conn] flushed({role=server}): State { reading: Init, writing: Init, keep_alive: Idle }
TRACE [hyper::proto::h1::conn] Conn::read_head
TRACE [hyper::proto::h1::conn] flushed({role=server}): State { reading: Init, writing: Init, keep_alive: Idle }
TRACE [hyper::proto::h1::conn] Conn::read_head
DEBUG [hyper::proto::h1::io] read 434 bytes
TRACE [hyper::proto::h1::role] Request.parse([Header; 100], [u8; 434])
TRACE [hyper::proto::h1::role] Request.parse Complete(434)
DEBUG [hyper::proto::h1::io] parsed 10 headers
DEBUG [hyper::proto::h1::conn] incoming body is empty
DEBUG [hyper::proto::h1::io] read 0 bytes
TRACE [hyper::proto::h1::conn] found unexpected EOF on busy connection: State { reading: KeepAlive, writing: Init, keep_alive: Busy }
TRACE [hyper::proto::h1::conn] State::close_read()
DEBUG [hyper::server::conn::spawn_all] connection error: connection closed before message completed
TRACE [mio::poll] deregistering handle with poller

Some more debug info:


pi@pihole1:~/unbound-telemetry $ sudo unbound-control stats_noreset
thread0.num.queries=728
thread0.num.queries_ip_ratelimited=0
thread0.num.cachehits=621
thread0.num.cachemiss=107
thread0.num.prefetch=60
thread0.num.zero_ttl=49
thread0.num.recursivereplies=107
thread0.requestlist.avg=0.610778
thread0.requestlist.max=9
thread0.requestlist.overwritten=0
thread0.requestlist.exceeded=0
thread0.requestlist.current.all=0
thread0.requestlist.current.user=0
thread0.recursion.time.avg=0.152097
thread0.recursion.time.median=0.0805916
thread0.tcpusage=0
total.num.queries=728
total.num.queries_ip_ratelimited=0
total.num.cachehits=621
total.num.cachemiss=107
total.num.prefetch=60
total.num.zero_ttl=49
total.num.recursivereplies=107
total.requestlist.avg=0.610778
total.requestlist.max=9
total.requestlist.overwritten=0
total.requestlist.exceeded=0
total.requestlist.current.all=0
total.requestlist.current.user=0
total.recursion.time.avg=0.152097
total.recursion.time.median=0.0805916
total.tcpusage=0
time.now=1625619187.760907
time.up=603.963341
time.elapsed=603.963341
mem.cache.rrset=402442
mem.cache.message=111528
mem.mod.iterator=16504
mem.mod.validator=60967
mem.mod.respip=0
mem.mod.subnet=41372
mem.streamwait=0
histogram.000000.000000.to.000000.000001=4
histogram.000000.000001.to.000000.000002=0
histogram.000000.000002.to.000000.000004=0
histogram.000000.000004.to.000000.000008=0
histogram.000000.000008.to.000000.000016=0
histogram.000000.000016.to.000000.000032=0
histogram.000000.000032.to.000000.000064=0
histogram.000000.000064.to.000000.000128=0
histogram.000000.000128.to.000000.000256=0
histogram.000000.000256.to.000000.000512=0
histogram.000000.000512.to.000000.001024=0
histogram.000000.001024.to.000000.002048=0
histogram.000000.002048.to.000000.004096=0
histogram.000000.004096.to.000000.008192=0
histogram.000000.008192.to.000000.016384=3
histogram.000000.016384.to.000000.032768=17
histogram.000000.032768.to.000000.065536=21
histogram.000000.065536.to.000000.131072=37
histogram.000000.131072.to.000000.262144=10
histogram.000000.262144.to.000000.524288=10
histogram.000000.524288.to.000001.000000=1
histogram.000001.000000.to.000002.000000=4
histogram.000002.000000.to.000004.000000=0
histogram.000004.000000.to.000008.000000=0
histogram.000008.000000.to.000016.000000=0
histogram.000016.000000.to.000032.000000=0
histogram.000032.000000.to.000064.000000=0
histogram.000064.000000.to.000128.000000=0
histogram.000128.000000.to.000256.000000=0
histogram.000256.000000.to.000512.000000=0
histogram.000512.000000.to.001024.000000=0
histogram.001024.000000.to.002048.000000=0
histogram.002048.000000.to.004096.000000=0
histogram.004096.000000.to.008192.000000=0
histogram.008192.000000.to.016384.000000=0
histogram.016384.000000.to.032768.000000=0
histogram.032768.000000.to.065536.000000=0
histogram.065536.000000.to.131072.000000=0
histogram.131072.000000.to.262144.000000=0
histogram.262144.000000.to.524288.000000=0
num.query.type.A=417
num.query.type.AAAA=296
num.query.type.TYPE65=15
num.query.class.IN=728
num.query.opcode.QUERY=728
num.query.tcp=0
num.query.tcpout=0
num.query.tls=0
num.query.tls.resume=0
num.query.ipv6=0
num.query.flags.QR=0
num.query.flags.AA=0
num.query.flags.TC=0
num.query.flags.RD=728
num.query.flags.RA=0
num.query.flags.Z=0
num.query.flags.AD=0
num.query.flags.CD=0
num.query.edns.present=0
num.query.edns.DO=0
num.answer.rcode.NOERROR=728
num.answer.rcode.FORMERR=0
num.answer.rcode.SERVFAIL=0
num.answer.rcode.NXDOMAIN=0
num.answer.rcode.NOTIMPL=0
num.answer.rcode.REFUSED=0
num.answer.rcode.nodata=36
num.query.ratelimited=0
num.answer.secure=240
num.answer.bogus=0
num.rrset.bogus=0
num.query.aggressive.NOERROR=1
num.query.aggressive.NXDOMAIN=0
unwanted.queries=0
unwanted.replies=0
msg.cache.count=351
rrset.cache.count=1795
infra.cache.count=370
key.cache.count=49
num.query.authzone.up=0
num.query.authzone.down=0
num.query.subnet=0
num.query.subnet_cache=0

Any help is greatly appreciated.
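One thing worth ruling out (an assumption on my part, not a confirmed fix): "Connection reset by peer" on the control port is often a TLS handshake mismatch, since unbound's remote-control uses certificates by default while the exporter above was started without any certificate flags. For a loopback-only control interface, unbound.conf allows disabling certificates entirely:

```
remote-control:
    control-enable: yes
    control-interface: 127.0.0.1
    control-port: 8953
    # Plain TCP instead of TLS; only safe because the
    # interface above is restricted to localhost.
    control-use-cert: no
```

Alternatively, keep TLS enabled and pass the server/control certificate and key paths to `unbound-telemetry tcp` so both sides speak TLS.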

Better support for unknown query types and classes

With the current implementation, all unknown query types, classes and opcodes are silently ignored if their textual representation can't be cast into the strict types provided by the domain crate. This behavior is not really future-proof, so something like

enum Rtype {
    Known(domain::Rtype),
    Unknown(String)
}

should be used to accommodate new types appearing in the unbound output.
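A minimal sketch of the proposed fallback (the strict type is stood in for by a local enum here; the real one would be the Rtype from the domain crate):

```rust
use std::str::FromStr;

// Stand-in for the strict type provided by the `domain` crate.
#[derive(Debug, PartialEq)]
enum KnownRtype {
    A,
    Aaaa,
    Ns,
}

#[derive(Debug, PartialEq)]
enum Rtype {
    Known(KnownRtype),
    Unknown(String),
}

impl FromStr for Rtype {
    type Err = std::convert::Infallible;

    // Never fails: anything the strict parser rejects is preserved verbatim
    // instead of being silently dropped from the exported metrics.
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        Ok(match s {
            "A" => Rtype::Known(KnownRtype::A),
            "AAAA" => Rtype::Known(KnownRtype::Aaaa),
            "NS" => Rtype::Known(KnownRtype::Ns),
            other => Rtype::Unknown(other.to_owned()),
        })
    }
}

fn main() {
    assert_eq!("A".parse::<Rtype>().unwrap(), Rtype::Known(KnownRtype::A));
    // A type the strict parser doesn't know keeps its label.
    assert_eq!(
        "TYPE65".parse::<Rtype>().unwrap(),
        Rtype::Unknown("TYPE65".to_owned())
    );
}
```

This way a `num.query.type.TYPE65` line from unbound still produces a labeled metric even before the crate learns about the new type.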

Freeze TLS dependency version

A specific released version of native-tls should be used instead of the git version:

[patch.crates-io]
# Note: fuzzing crate has the same override
native-tls = { git = "https://github.com/Goirad/rust-native-tls.git", branch = "pkcs8-squashed" }

Yet, native-tls does not support PKCS#8 certificates, see sfackler/rust-native-tls#147

As an alternative, rustls could be used if it is able to handle PKCS#8 PEM files and communicate with unbound correctly. Additionally, it would mitigate all the build issues caused by requiring OpenSSL development files and so on.
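Until then, one way to make the override reproducible (a sketch; the `rev` value below is a placeholder, not a real commit) is to pin it to an exact commit instead of a moving branch:

```toml
[patch.crates-io]
# Note: the fuzzing crate needs the same override.
# `rev` pins an exact commit; "0000000" is a placeholder.
native-tls = { git = "https://github.com/Goirad/rust-native-tls.git", rev = "0000000" }
```

Cargo records the resolved commit in Cargo.lock either way, but an explicit `rev` keeps fresh clones from silently picking up new commits on the branch.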

Unable to see unbound metrics with shm

I have unbound and unbound-telemetry running on the same server.

The OS is Ubuntu 20.04.
Unbound version is 1.9.4 and the latest unbound-telemetry.

This is my unbound.conf

server:
    verbosity: 5
    num-threads: 3
    interface: 0.0.0.0
    interface-automatic: no
    port: 53
    access-control: 0.0.0.0/0 allow
    shm-enable: yes
    shm-key: 11777
    chroot: ""
    pidfile: "/tmp/unbound.pid"
    username: ""
    directory: "/tmp"
    logfile: "/var/log/unbound.log"
    log-time-ascii: yes
    log-queries: yes
    target-fetch-policy: "0 0 0 0 0"
    prefetch: yes
    minimal-responses: yes
    outgoing-range: 40460
    do-ip6: no
    do-not-query-localhost: no
    module-config: "iterator"
    edns-buffer-size: 1480
    ## Extended Stats ##
    statistics-interval: 1
    extended-statistics: yes
    statistics-cumulative: yes

# Performance Tuning
    msg-cache-slabs: 2
    rrset-cache-slabs: 2
    infra-cache-slabs: 2
    key-cache-slabs: 2
    rrset-cache-size: 256m
    msg-cache-size: 128m
    so-rcvbuf: 4m
    so-sndbuf: 4m
    num-queries-per-thread: 20000
    infra-cache-numhosts: 100000

remote-control:
       control-enable: yes
       control-interface: 0.0.0.0
       control-port: 8953
       server-key-file: "/etc/unbound/unbound_server.key"
       server-cert-file: "/etc/unbound/unbound_server.pem"
       control-key-file: "/etc/unbound/unbound_control.key"
       control-cert-file: "/etc/unbound/unbound_control.pem"

include: /etc/unbound/unbound.conf.d/*.conf

I am able to get metrics (e.g. localhost:9167/metrics) if I start unbound-telemetry with tcp:

root@pc-199:~/unbound-telemetry-master/target/release# ./unbound-telemetry tcp --server-cert-file /etc/unbound/unbound_server.pem --control-cert-file /etc/unbound/unbound_control.pem --control-key-file /etc/unbound/unbound_control.key
INFO  [unbound_telemetry::server] Listening on 0.0.0.0:9167
curl localhost:9167/metrics
# TYPE unbound_num_threads gauge
# HELP unbound_num_threads The number of threads to create to serve clients
unbound_num_threads 3
# TYPE unbound_time_up_seconds_total counter
# HELP unbound_time_up_seconds_total Uptime since server boot in seconds
...

But I get "curl: (52) Empty reply from server" when curling the same URL (localhost:9167/metrics) if I start unbound-telemetry with shm:

./unbound-telemetry shm -k 11777 -l trace
TRACE [mio::poll] registering with poller
INFO  [unbound_telemetry::server] Listening on 0.0.0.0:9167
TRACE [mio::poll] registering with poller
TRACE [mio::poll] registering with poller
TRACE [hyper::proto::h1::conn] Conn::read_head
TRACE [hyper::proto::h1::conn] flushed({role=server}): State { reading: Init, writing: Init, keep_alive: Busy }
TRACE [hyper::proto::h1::conn] Conn::read_head
DEBUG [hyper::proto::h1::io] read 85 bytes
TRACE [hyper::proto::h1::role] Request.parse([Header; 100], [u8; 85])
TRACE [hyper::proto::h1::role] Request.parse Complete(85)
DEBUG [hyper::proto::h1::io] parsed 3 headers
DEBUG [hyper::proto::h1::conn] incoming body is empty
TRACE [unbound_telemetry::sources::memory::wrappers] Acquiring shared memory region access for key 11777
TRACE [unbound_telemetry::sources::memory::shm] `shmget(11777)` call returned the segment id 8
TRACE [unbound_telemetry::sources::memory::shm] `shmat(8)` call resulted in code 139739723681792
TRACE [unbound_telemetry::sources::memory::shm] `shmget(11778)` call returned the segment id 9
TRACE [unbound_telemetry::sources::memory::shm] `shmat(9)` call resulted in code 139739717521408
DEBUG [unbound_telemetry::sources::memory::wrappers] Successfully acquired an access to the unbound shared memory region with key 11777
thread 'main' panicked at 'not implemented', src/sources/memory/wrappers.rs:52:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
TRACE [mio::poll] deregistering handle with poller

unbound-control stats_noreset returns stats as expected.

$unbound-control stats_noreset
thread0.num.queries=0
thread0.num.queries_ip_ratelimited=0
thread0.num.cachehits=0
...

RUSTSEC-2021-0079: Integer overflow in `hyper`'s parsing of the `Transfer-Encoding` header leads to data loss

Integer overflow in hyper's parsing of the Transfer-Encoding header leads to data loss

Details
Package hyper
Version 0.13.10
URL GHSA-5h46-h7hh-c6x9
Date 2021-07-07
Patched versions >=0.14.10

When decoding chunk sizes that are too large, hyper's code would encounter an integer overflow. Depending on the situation,
this could lead to data loss from an incorrect total size, or in rarer cases, a request smuggling attack.

To be vulnerable, you must be using hyper for any HTTP/1 purpose, including as a client or server, and consumers must send
requests or responses that specify a chunk size greater than 18 exabytes. For a request smuggling attack to be possible,
any upstream proxies must accept a chunk size greater than 64 bits.

See advisory page for additional details.
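The failure mode can be illustrated with a small sketch (not hyper's actual code): accumulating a hex chunk size with checked arithmetic rejects values that no longer fit in a u64, which is roughly the guard the patched versions enforce, whereas wrapping arithmetic would silently truncate them.

```rust
// Parse a hex chunk-size string, refusing anything that overflows u64.
fn parse_chunk_size_checked(hex: &str) -> Option<u64> {
    let mut size: u64 = 0;
    for b in hex.bytes() {
        let digit = (b as char).to_digit(16)? as u64;
        // `checked_*` returns None on overflow instead of wrapping.
        size = size.checked_mul(16)?.checked_add(digit)?;
    }
    Some(size)
}

fn main() {
    // 16 hex digits of `f` is exactly u64::MAX (about 18 exabytes)...
    assert_eq!(parse_chunk_size_checked("ffffffffffffffff"), Some(u64::MAX));
    // ...one more digit overflows and is rejected rather than truncated.
    assert_eq!(parse_chunk_size_checked("10000000000000000"), None);
}
```

Upgrading to hyper >= 0.14.10 (or backporting the fix) is the actual remediation; the sketch only shows why the check matters.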

Unable to observe unbound statistics: Connection reset by peer (os error 104)

root@dns2-2:~# ./unbound-telemetry tcp --log-level trace
TRACE [mio::poll] registering with poller
INFO  [unbound_telemetry::server] Listening on 0.0.0.0:9167
TRACE [mio::poll] registering with poller
TRACE [mio::poll] registering with poller
TRACE [hyper::proto::h1::conn] Conn::read_head
TRACE [hyper::proto::h1::conn] flushed({role=server}): State { reading: Init, writing: Init, keep_alive: Busy }
TRACE [hyper::proto::h1::conn] Conn::read_head
DEBUG [hyper::proto::h1::io] read 599 bytes
TRACE [tracing::span] parse_headers
TRACE [tracing::span::active] -> parse_headers
TRACE [hyper::proto::h1::role] Request.parse([Header; 100], [u8; 599])
TRACE [hyper::proto::h1::role] Request.parse Complete(599)
TRACE [tracing::span::active] <- parse_headers
TRACE [tracing::span] -- parse_headers
DEBUG [hyper::proto::h1::io] parsed 11 headers
DEBUG [hyper::proto::h1::conn] incoming body is empty
TRACE [mio::poll] registering with poller
TRACE [hyper::proto::h1::conn] flushed({role=server}): State { reading: KeepAlive, writing: Init, keep_alive: Busy }
TRACE [hyper::proto::h1::conn] flushed({role=server}): State { reading: KeepAlive, writing: Init, keep_alive: Busy }
TRACE [mio::poll] deregistering handle with poller
ERROR [unbound_telemetry::server] Unable to observe unbound statistics: Connection reset by peer (os error 104)
TRACE [tracing::span] encode_headers
TRACE [tracing::span::active] -> encode_headers
TRACE [hyper::proto::h1::role] Server::encode status=500, body=Some(Known(79)), req_method=Some(GET)
TRACE [tracing::span::active] <- encode_headers
TRACE [tracing::span] -- encode_headers
DEBUG [hyper::proto::h1::io] flushed 200 bytes
TRACE [hyper::proto::h1::conn] flushed({role=server}): State { reading: Init, writing: Init, keep_alive: Idle }
(the same Conn::read_head → ERROR → HTTP 500 cycle repeats for every subsequent scrape; log truncated)


