dns-oarc / dnsperf
DNS Performance Testing Tools
Home Page: https://www.dns-oarc.net/tools/dnsperf
License: Apache License 2.0
I'm trying to build dnsperf from source on CentOS 7.7. This is an AWS machine with very little installed by default:
# lsb_release -a
LSB Version: :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID: CentOS
Description: CentOS Linux release 7.7.1908 (Core)
Release: 7.7.1908
Codename: Core
Following this repo's README, and following my nose a little, I've come up with these steps:
yum install -y git autoconf automake gcc libtool
yum install -y openssl-devel ldns-devel ck-devel libnghttp2-devel
git clone https://github.com/DNS-OARC/dnsperf.git
cd dnsperf
./autogen.sh
./configure
make
make install
The problem is that two of the packages listed in the README don't exist on CentOS 7.7! The -y flag (although sensible in general) hides this from the casual copy-paster:
[root@foo dnsperf]# yum install openssl-devel ldns-devel ck-devel libnghttp2-devel
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: download.cf.centos.org
* extras: download.cf.centos.org
* updates: download.cf.centos.org
Package 1:openssl-devel-1.0.2k-22.el7_9.x86_64 already installed and latest version
Package ldns-devel-1.6.16-10.el7.x86_64 already installed and latest version
No package ck-devel available.
No package libnghttp2-devel available.
Nothing to do
And then when you get four steps further along in the step-by-step above:
[root@foo dnsperf]# ./configure
checking for libssl... yes
checking for libcrypto... yes
checking for TLS_method in -lssl... no
checking for ck... no
checking ck_ring.h usability... no
checking ck_ring.h presence... no
checking for ck_ring.h... no
configure: error: libck headers not found
My question is: how can I get these packages, libck and libnghttp2, on CentOS 7.7? Or, how can I convince dnsperf to build without them?
FWIW, I'm really interested only in the resperf program; I don't need the dnsperf program itself.
This other issue seems related but I can't figure out what the recommended action is.
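For what it's worth, on CentOS 7 the two missing packages are typically provided by EPEL rather than the base repositories; a sketch, assuming EPEL is acceptable in your environment:

```shell
# ck-devel and libnghttp2-devel live in EPEL, not the CentOS 7 base repos:
#
#   yum install -y epel-release
#   yum install -y ck-devel libnghttp2-devel
#
# After that, ./configure should be able to find both ck and libnghttp2.
```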
Hi everyone,
We have been working on integrating the GSS-TSIG method of sending DNS updates into the dnsperf tool. Our code is complete and is going through the testing phase.
There is an issue when we load test: it fails randomly (more on the Debian client than on the Ubuntu client) with the cause below:
root@build-vm-buster:~/Packages# dnsperf -s dnsperfserver.bcdnsperf.com -d add-rec_50000 -u -g -c 30 -W -t 15
DNS Performance Testing Tool
Version 2.10.0
[Status] Command line: dnsperf -s dnsperfserver.bcdnsperf.com -d add-rec_50000 -u -g -c 30 -W -t 15
[Status] Sending updates (to 10.244.105.233:53)
[Status] Started at: Thu Feb 23 17:44:18 2023
[Status] Stopping after 1 run through file
Warning: gss_init_sec_context error: major = 589824, minor = 100002
Error: GSSAPI error: Major = Invalid token was supplied, Minor = Unknown error.
Error: Failed to initialize GSS context: sent = 12232
This error occurs while initializing the security context using the TKEY response from the server.
Note: one approach tried so far is the -Wl,-Bsymbolic-functions linker flag, used for Ubuntu.
Let me know if anyone has any hint about this.
Should I create a merge request so helpers can get a direct look at the code and provide better suggestions?
api-read.facebook.com.\002\004\003\002\002\002\002\005\004\004\003\004\006\005\006\006\006\005\006\006\006\007\009\008\006\007\009\007\006\006\008\011\008\009\010\010\010\010\010\006\008\011\012\011\010\012\009\010\010\010\255\219. A
Error message:
Warning: invalid domain name (or out of space): api-read.facebook.com.\002\004\003\002\002\002\002\005\004\004\003\004\006\005\006\006\006\005\006\006\006\007\009\008\006\007\009\007\006\006\008\011\008\009\010\010\010\010\010\006\008\011\012\011\010\012\009\010\010\010\255\219.
As far as I can tell this is a valid query within protocol-permitted limits; dnspython parses it just fine, wire length 84 bytes.
I'm attempting to use the new TCP support in 2.3.0 against Akamai/Nominum Vantio CacheServe. The server configuration is trivial, and the dnsperf input file is a single query, which will be answered out of the server's cache. I'm sending queries between two reasonably fast machines running CentOS 6.5 in the same lab.
I'm running this command:
dnsperf -s platform-perf-vantio-1 -p 12347 -d in -l10 -c2 -m tcp
and seeing results like this:
Queries sent: 215420
Queries completed: 215420 (100.00%)
Queries lost: 0 (0.00%)
Response codes: NOERROR 215420 (100.00%)
Average packet size: request 29, response 61
Run time (s): 10.064001
Queries per second: 21405.005822
Average Latency (s): 0.004335 (min 0.000075, max 0.080095)
Latency StdDev (s): 0.008782
(for the record, when using UDP instead of TCP, I'm seeing about 250000 queries per second).
When I try using 2.2.1 with the pull request in #34 (and one minor fix for a problem that showed up on Linux but not macOS), using "-z" instead of "-m tcp", I'm seeing this:
Queries sent: 2476115
Queries completed: 2476115 (100.00%)
Queries lost: 0 (0.00%)
Response codes: NOERROR 2476115 (100.00%)
Average packet size: request 31, response 61
Run time (s): 10.007593
Queries per second: 247423.631237
That's 11.5x faster.
As another comparison point, if I use dnsperf from https://github.com/Sinodun/dnsperf-tcp, also using -z, I see this:
Queries sent: 1932941
Queries completed: 1932941 (100.00%)
Queries lost: 0 (0.00%)
Response codes: NOERROR 1932941 (100.00%)
Average packet size: request 31, response 61
Run time (s): 10.039865
Queries per second: 192526.592738
TCP connections: 2
Ave Queries per conn: 966471
TCP HS time per client (s): 0.000000 (0.00%)
TLS HS time per client (s): 0.000000 (0.00%)
Total HS time per client (s): 0.000000 (0.00%)
TCP HS time per connection (s): 0.000000
TLS HS time per connection (s): 0.000000
Total HS time per connection (s): 0.000000
Adjusted Queries/s: 192526.592738
That's 9x faster.
I'm not sure what the code in 2.3.0 is doing (it's pretty opaque and not commented), but I don't understand how it can be so much slower, or why the pull request was rejected.
what ???
./autoreconf --force --install --include=m4
-bash: ./autoreconf: No such file or directory
Please tell me which autoreconf I should be running.
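In case it helps: autoreconf is not part of the dnsperf sources; it is installed system-wide by the autoconf package and run from $PATH. A sketch of the usual sequence (RHEL/CentOS package names assumed):

```shell
# "./autoreconf" fails because autoreconf is not a file in the checkout;
# it comes from the autoconf package and is invoked without "./":
#
#   yum install -y autoconf automake libtool   # or apt-get on Debian/Ubuntu
#   cd dnsperf
#   autoreconf --force --install --include=m4
#
# The repo's ./autogen.sh typically wraps this same invocation.
```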
By default, is DNSPerf synchronous or asynchronous for sending UDP requests? Is there any way to change this behavior?
Relates to Homebrew/homebrew-core#54798.
GitHub Actions run: https://github.com/Homebrew/homebrew-core/runs/681367799
Undefined symbols for architecture x86_64:
"_gss_accept_sec_context", referenced from:
_gss_accept_sec_context_spnego in libdns.a(spnego.o)
(maybe you meant: _gss_accept_sec_context_spnego)
"_gss_acquire_cred", referenced from:
_dst_gssapi_acquirecred in libdns.a(gssapictx.o)
"_gss_delete_sec_context", referenced from:
_dst_gssapi_deletectx in libdns.a(gssapictx.o)
"_gss_display_name", referenced from:
_log_cred in libdns.a(gssapictx.o)
_dst_gssapi_acceptctx in libdns.a(gssapictx.o)
"_gss_display_status", referenced from:
_gss_error_tostring in libdns.a(gssapictx.o)
"_gss_export_sec_context", referenced from:
_gssapi_dump in libdns.a(gssapi_link.o)
"_gss_get_mic", referenced from:
_gssapi_sign in libdns.a(gssapi_link.o)
"_gss_import_name", referenced from:
_dst_gssapi_acquirecred in libdns.a(gssapictx.o)
_dst_gssapi_initctx in libdns.a(gssapictx.o)
"_gss_import_sec_context", referenced from:
_gssapi_restore in libdns.a(gssapi_link.o)
"_gss_init_sec_context", referenced from:
_gss_init_sec_context_spnego in libdns.a(spnego.o)
(maybe you meant: _gss_init_sec_context_spnego)
"_gss_inquire_cred", referenced from:
_log_cred in libdns.a(gssapictx.o)
"_gss_release_buffer", referenced from:
_gssapi_sign in libdns.a(gssapi_link.o)
_gssapi_dump in libdns.a(gssapi_link.o)
_gss_error_tostring in libdns.a(gssapictx.o)
_log_cred in libdns.a(gssapictx.o)
_dst_gssapi_initctx in libdns.a(gssapictx.o)
_dst_gssapi_acceptctx in libdns.a(gssapictx.o)
_gss_accept_sec_context_spnego in libdns.a(spnego.o)
...
"_gss_release_cred", referenced from:
_dst_gssapi_releasecred in libdns.a(gssapictx.o)
"_gss_release_name", referenced from:
_dst_gssapi_acquirecred in libdns.a(gssapictx.o)
_log_cred in libdns.a(gssapictx.o)
_dst_gssapi_initctx in libdns.a(gssapictx.o)
_dst_gssapi_acceptctx in libdns.a(gssapictx.o)
"_gss_verify_mic", referenced from:
_gssapi_verify in libdns.a(gssapi_link.o)
"_krb5_free_context", referenced from:
_check_config in libdns.a(gssapictx.o)
"_krb5_get_default_realm", referenced from:
_check_config in libdns.a(gssapictx.o)
"_krb5_init_context", referenced from:
_check_config in libdns.a(gssapictx.o)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [resperf] Error 1
make[2]: *** Waiting for unfinished jobs....
(the same list of undefined GSSAPI and Kerberos symbols is repeated for the dnsperf link step)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [dnsperf] Error 1
make[1]: *** [install-recursive] Error 1
make: *** [install-recursive] Error 1
As of BIND 9.16, its internal libraries are not for public consumption, and even if #72 works for now it might break in the future. The best option would be to remove the dependency, but that will require a lot of rewriting.
TODO:
- isc_result_t
- opt.c
- net.c, and isc_net* in dnsperf.c
- datafile.c, and change usage in dnsperf.c / resperf.c
- dns.c, and change usage in dnsperf.c / resperf.c
- result.h
Also see #67.
Version: 2.10.0, commit c1bef8b
Sometimes I've noticed a crash in the -m tcp mode:
dnsperf: net.h:133: perf_net_sockeq: Assertion `sock_b' failed.
Shortest reproducer I can come up with is:
while dnsperf -s 127.0.0.1 -d /tmp/qlist -l 0.0001 -m tcp; do true; done
where 127.0.0.1 does not listen on port 53. /tmp/qlist contains just "net. SOA" and nothing else. Repeat it a couple of times and it should crash within a minute.
Unfortunately the machine where this happens is in a farm and does not have a debug build and all the other jazz needed to get a proper traceback.
root@n1:~# /usr/local/bin/dnsperf -d /tmp/dnsperf_file.txt -s 10.1.1.63 -m tcp -p 53
DNS Performance Testing Tool
Version 2.3.2
[Status] Command line: dnsperf -d /tmp/dnsperf_file.txt -s 10.1.1.63 -m tcp -p 53
[Status] Sending queries (to 10.1.1.63)
[Status] Started at: Sat Dec 7 15:15:58 2019
[Status] Stopping after 1 run through file
[Status] Testing complete (end of file)
Statistics:
Queries sent: 123
Queries completed: 123 (100.00%)
Queries lost: 0 (0.00%)
Response codes: NOERROR 123 (100.00%)
Average packet size: request 36, response 71
Run time (s): 0.712423
Queries per second: 172.650237
Average Latency (s): 0.339607 (min 0.009501, max 0.573167)
Latency StdDev (s): 0.182204
root@n1:~# /usr/local/bin/dnsperf -d /tmp/dnsperf_file.txt -s 10.1.1.63 -m tcp -p 53 -l 60
DNS Performance Testing Tool
Version 2.3.2
[Status] Command line: dnsperf -d /tmp/dnsperf_file.txt -s 10.1.1.63 -m tcp -p 53 -l 60
[Status] Sending queries (to 10.1.1.63)
[Status] Started at: Sat Dec 7 15:16:04 2019
[Status] Stopping after 60.000000 seconds
Warning: received a response with an unexpected (maybe timed out) id: 127
Error: failed to receive packet: Connection reset by peer
How can I test DNS with TCP?
We noticed that when testing TCP with DNSPerf (and ResPerf) almost all queries fail with timeouts. We see results that we can't really explain, and we haven't found any config changes that remediate the issue.
I am hoping that maybe you'll be able to shed some light on this issue and maybe point me to the right direction.
The failures disappear when the "act as n clients" (-C) argument is increased: when using resperf with -C 100 (probably 100 different source ports/5-tuples) there are zero queries lost.
Steps to reproduce:
Create a VPC in X region (in our case we used the us-east-1 region).
In it, start up a DNSDist server and another instance for testing (same hardware/VM, e.g. c5n.large in our case).
In a remote VPC/location, start up a PDNS Authoritative server (in our case we used the ap-southeast-2 region) with BIND files as its backend.
DNSDist is configured to forward all requests to the PDNS server IP.
From the test server, create a data file (input.txt) with records that are present in the PDNS BIND backend, install the dnsperf tools, and run the following:
5.1. Single 5-tuple: time resperf -d input.txt -s ${DNSDIST_IP_ADRESS} -r 0 -c 60 -R -m 100 -M tcp -C 1
- most queries failed, and the test duration is around 100 seconds
Warning: received a response with an unexpected id: 493
Warning: received a response with an unexpected id: 494
Warning: received a response with an unexpected id: 495
Warning: received a response with an unexpected id: 496
Warning: received a response with an unexpected id: 497
Warning: received a response with an unexpected id: 498
Warning: received a response with an unexpected id: 499
Warning: received a response with an unexpected id: 500
Warning: received a response with an unexpected id: 501
Warning: received a response with an unexpected id: 502
Warning: received a response with an unexpected id: 503
...
Statistics:
Queries sent: 5999
Queries completed: 238
Queries lost: 5761
Response codes: NOERROR 230 (96.64%), NXDOMAIN 8 (3.36%)
Reconnection(s): 0
Run time (s): 100.000000
Maximum throughput: 100.000000 qps
Lost at that point: 0.00%
real 1m40.005s
user 0m59.683s
sys 0m40.319s
5.2. Multiple clients: time resperf -d input.txt -s ${DNSDIST_IP_ADRESS} -r 0 -c 60 -R -m 100 -M tcp -C 100
- No failures, test done in 60 seconds
Statistics:
Queries sent: 5999
Queries completed: 5999
Queries lost: 0
Response codes: NOERROR 5760 (96.02%), NXDOMAIN 239 (3.98%)
Reconnection(s): 0
Run time (s): 60.188028
Maximum throughput: 100.000000 qps
Lost at that point: 0.00%
real 1m0.193s
user 0m36.248s
sys 0m23.944s
5.3. directly to remote backend: time resperf -d input.txt -s ${PDNS_IP_ADRESS} -r 0 -c 60 -R -m 100 -M tcp -C 1
- No failures, test done in 60 seconds
Statistics:
Queries sent: 5999
Queries completed: 5999
Queries lost: 0
Response codes: NOERROR 5760 (96.02%), NXDOMAIN 239 (3.98%)
Reconnection(s): 0
Run time (s): 60.461496
Maximum throughput: 100.000000 qps
Lost at that point: 0.00%
real 1m0.466s
user 0m36.201s
sys 0m24.068s
Same data file using DNSPerf tool:
6.1. Single 5-tuple: dnsperf -s ${DNSDIST_IP_ADRESS} -m tcp -d input.txt -c 1 -T 1 -l 60 -t 5 -q 100
- most queries failed, 1222 sent and only 24 completed (1.96%)
...
Warning: received a response with an unexpected (maybe timed out) id: 213
[Timeout] Query timed out: msg id 805
Warning: received a response with an unexpected (maybe timed out) id: 214
[Timeout] Query timed out: msg id 806
Warning: received a response with an unexpected (maybe timed out) id: 215
[Timeout] Query timed out: msg id 807
Warning: received a response with an unexpected (maybe timed out) id: 216
[Timeout] Query timed out: msg id 808
Warning: received a response with an unexpected (maybe timed out) id: 217
[Timeout] Query timed out: msg id 809
Warning: received a response with an unexpected (maybe timed out) id: 218
[Timeout] Query timed out: msg id 810
Warning: received a response with an unexpected (maybe timed out) id: 219
[Timeout] Query timed out: msg id 811
Warning: received a response with an unexpected (maybe timed out) id: 220
[Timeout] Query timed out: msg id 812
Warning: received a response with an unexpected (maybe timed out) id: 221
[Timeout] Query timed out: msg id 813
Warning: received a response with an unexpected (maybe timed out) id: 222
[Timeout] Query timed out: msg id 814
...
Statistics:
Queries sent: 1222
Queries completed: 24 (1.96%)
Queries lost: 1198 (98.04%)
Response codes: NOERROR 24 (100.00%)
Average packet size: request 73, response 87
Run time (s): 64.751393
Queries per second: 0.370648
Average Latency (s): 2.659450 (min 0.395701, max 4.923507)
Latency StdDev (s): 1.392166
Connection Statistics:
Reconnections: 0
Average Latency (s): 0.000429 (min 0.000429, max 0.000429)
real 1m4.922s
user 0m2.258s
sys 0m2.704s
6.2. Multiple clients: dnsperf -s ${DNSDIST_IP_ADRESS} -m tcp -d input.txt -c 100 -T 10 -l 60 -t 5 -q 100
- No failures, test done in 60 seconds
Statistics:
Queries sent: 2582120
Queries completed: 2582120 (100.00%)
Queries lost: 0 (0.00%)
Response codes: NOERROR 2517567 (97.50%), NXDOMAIN 64553 (2.50%)
Average packet size: request 73, response 90
Run time (s): 60.093897
Queries per second: 42968.090420
Average Latency (s): 0.002280 (min 0.000335, max 0.595486)
Latency StdDev (s): 0.015051
Connection Statistics:
Reconnections: 0
Average Latency (s): 0.012769 (min 0.000390, max 0.025119)
Latency StdDev (s): 0.006465
real 1m0.121s
user 0m26.466s
sys 1m16.860s
6.3. directly to remote backend: dnsperf -s ${PDNS_IP_ADRESS} -m tcp -d input.txt -c 1 -T 1 -l 60 -t 5 -q 100
- No failures, test done in 60 seconds
Statistics:
Queries sent: 25665
Queries completed: 25665 (100.00%)
Queries lost: 0 (0.00%)
Response codes: NOERROR 25024 (97.50%), NXDOMAIN 641 (2.50%)
Average packet size: request 73, response 90
Run time (s): 60.460280
Queries per second: 424.493568
Average Latency (s): 0.233273 (min 0.196206, max 0.623432)
Latency StdDev (s): 0.073038
Connection Statistics:
Reconnections: 0
Average Latency (s): 0.196184 (min 0.196184, max 0.196184)
real 1m0.466s
user 0m0.264s
sys 0m0.296s
6.4. Increasing client timeout: dnsperf -s ${DNSDIST_IP_ADRESS} -m tcp -d input.txt -c 1 -T 1 -l 60 -t 30 -q 100
Statistics:
Queries sent: 411
Queries completed: 411 (100.00%)
Queries lost: 0 (0.00%)
Response codes: NOERROR 401 (97.57%), NXDOMAIN 10 (2.43%)
Average packet size: request 73, response 89
Run time (s): 79.455371
Queries per second: 5.172715
Average Latency (s): 16.993428 (min 0.394257, max 19.614739)
Latency StdDev (s): 4.818326
Connection Statistics:
Reconnections: 0
Average Latency (s): 0.000405 (min 0.000405, max 0.000405)
real 1m19.461s
user 0m9.069s
sys 0m10.413s
Since there are no failures when sending the queries directly to the remote PDNS or when using multiple 5-tuples, I expect this to work the same when passing through a middleman such as DNSDist.
During the tests we watched the hosts (netstat, ifconfig) and the DNSDist metrics dnsdist_frontend_tcpdiedreadingquery, dnsdist_server_tcpreusedconnections, dnsdist_frontend_tcpavgqueriesperconnection and dnsdist_frontend_queries|dnsdist_frontend_responses. There are no errors or timeouts (e.g. tcpConnectTimeouts, tcpReadTimeouts, tcpWriteTimeouts, tcpGaveUp, tcpTooManyConcurrentConnections, dnsdist_server_drops are all unchanged during the test).
I'll try to list some of the options we tried to change to help narrow this down. All of the below had no effect at all on the results mentioned above (and during the test there are no OS logs or application logs at all).
max-tcp-connections=1024
max-tcp-connections-per-client=0 # also tried with higher values
max-tcp-transactions-per-conn=0 # also tried with higher values
tcp-idle-timeout=10 # also tried with higher values
tcp-fast-open=1
reuseport=yes
# also tested multiple values to:
distributor-threads=n
receiver-threads=n
retrieval-threads=n
max-queue-length=n
queue-limit=n
newServer({
...
maxInFlight=10240 # also tried with higher values,
maxConcurrentTCPConnections=10000 # also tried with higher values,
tcpFastOpen=true,
tcpConnectTimeout=100,
tcpSendTimeout=100,
tcpRecvTimeout=100
})
# Opening multiple sockets on startup
addLocal("3.104.42.175:53", {reusePort=true,tcpFastOpenQueueSize=10000, tcpListenQueueSize=10000, maxInFlight=10000,maxConcurrentTCPConnections=10000})
addLocal("3.104.42.175:53", {reusePort=true,tcpFastOpenQueueSize=10000, tcpListenQueueSize=10000, maxInFlight=10000,maxConcurrentTCPConnections=10000})
addLocal("3.104.42.175:53", {reusePort=true,tcpFastOpenQueueSize=10000, tcpListenQueueSize=10000, maxInFlight=10000,maxConcurrentTCPConnections=10000})
sysctl net.ipv4.tcp_tw_reuse=1 # tested 0 and 2
sysctl net.ipv4.tcp_syncookies # tested both 1 and 0
sysctl net.core.somaxconn=16384 # tested multiple values
sysctl net.ipv4.tcp_max_syn_backlog=2048 # tested multiple values
sysctl net.ipv4.tcp_syncookies=1
sysctl net.ipv4.tcp_orphan_retries=7
sysctl net.ipv4.tcp_reordering=5
sysctl net.ipv4.tcp_fack=1
sysctl net.ipv4.tcp_dsack=1
sysctl net.ipv4.tcp_sack=1
sysctl fs.file-max
echo 20000000 > /proc/sys/fs/nr_open
ulimit # noproc and open files
addLocal("${Private_IP}")
setACL({"0.0.0.0/0"})
my_domains = newSuffixMatchNode()
newServer({
address="${Remote_PDNS_IP}",
name="pdns.example.com",
pool="pdns.example.com"
})
pc = newPacketCache(100000, {
maxTTL=1,
minTTL=0,
temporaryFailureTTL=5,
keepStaleData=true,
staleTTL=5,
dontAge=true
})
getPool("example.com"):setCache(pc)
addAction({"example.com"}, PoolAction("example.com"))
my_domains:add(newDNSName("example.com"))
local-address=${Private_IP}
config-dir=/etc/powerdns
daemon=no
guardian=no
disable-axfr=yes
local-port=53
log-dns-details=yes
loglevel=3
primary=yes
secondary=no
setgid=pdns
setuid=pdns
socket-dir=/var/run/pdns
version-string=powerdns
launch=bind
bind-config=/var/powerdns/bind/named.conf
include-dir=/etc/powerdns/pdns.d
distributor-threads=3
negquery-cache-ttl=10
query-cache-ttl=20
receiver-threads=2
retrieval-threads=2
xfr-cycle-interval=60
webserver=true
allow-notify-from=0.0.0.0
bind-check-interval=300
max-tcp-connections=1024
options {
directory "/var/powerdns/bind";
};
zone "example.com" IN {
type master;
file "example.com";
};
@ 3600 SOA ns1.example.com. admin.example.com. 353 60 7200 604800 60
@ 15 A 1.1.1.1
@ 3600 NS ns1.example.com.
ns1 15 A 1.1.1.1
test 5 A 2.2.2.2
example.com NS
example.com A
ns1.example.com A
test.example.com A
I've hit a problem when using resperf to test DNS performance.
Here is the console output:
$ resperf -d datafile.txt -s xxx
DNS Resolution Performance Testing Tool
Version 2.2.1
[Status] Command line: resperf -d a.txt -s xxx
[Status] Sending
Error: ran out of query data
Here is my datafile:
dns1.ikang.com. A
s-static.ak.fbcdn.net. A
lachicabionica.com. A
www.mediafire.com. A
Can you help me? Thanks!
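For what it's worth, resperf consumes each line of the datafile once, so a four-line file runs out almost immediately. Two possible remedies, both sketches: the -R flag (seen in other reports on this page) reopens the datafile when it runs out, or you can simply enlarge the file, e.g.:

```shell
# Replicate a short query list so it outlasts the ramp-up
# (filenames are placeholders; repetition assumed acceptable for the test):
printf 'dns1.ikang.com. A\ns-static.ak.fbcdn.net. A\n' > datafile.txt
for i in $(seq 1 1000); do cat datafile.txt; done > big_datafile.txt
wc -l < big_datafile.txt   # 2000 lines
```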
Look at changing the default number of threads to the number of cores available, and maybe add an option to -T to query an interface for the number of queues it has and run with that; see ethtool --show-channels ifname.
For backward compatibility, maybe use -T cores to set it to the number of available cores.
When trying to build dnsperf from the git repo on Debian 10 (Buster), I see these errors during the configure step:
[...]
checking for isc/hmacsha.h... yes
./configure: line 9727: syntax error near unexpected token `libssl,'
./configure: line 9727: `PKG_CHECK_MODULES(libssl, libssl)'
Both dependency sets were installed. The subsequent make fails.
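For what it's worth, an unexpanded PKG_CHECK_MODULES line in a generated configure script usually means the pkg-config autoconf macros (pkg.m4) were not installed when the script was generated; a sketch of the usual remedy (Debian package names assumed):

```shell
# "syntax error near unexpected token `libssl,'" with a literal
# PKG_CHECK_MODULES line means autoconf could not expand the macro,
# typically because pkg.m4 (from pkg-config) was missing. Remedy sketch:
#
#   apt-get install -y pkg-config libtool autoconf automake
#   ./autogen.sh          # regenerate configure with the macros present
#   ./configure
```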
It would be awesome if dnsperf had a mode where each connection is used at most N times, where N is a user-specified value. It would allow exercising the connection open/close code paths more heavily than the current "optimal" mechanism, which pipelines everything.
Bonus points if the parameter allows 0, i.e. open a new connection and close it right away without sending a query. Of course it's dumb, but I have a use case where one dnsperf/client instance hammers the server with TCP open/close while another instance tries to get answers to useful DNS queries.
Thank you for considering this.
One query timed out, but the HTTP/2 return codes say all responses were received.
[Status] Command line: dnsperf -n 2000 -m doh -T 1 -c 1 -d doh.csv -Q 1000 -S 1 -O doh-method=POST -O doh-uri=https://dns.google/dns-query -s dns.google -t 10
[Status] Sending queries (to 8.8.4.4:443)
[Status] Started at: Thu May 11 10:00:21 2023
[Status] Stopping after 2000 runs through file
1683792022.208230: 971.944468
1683792023.209306: 1001.921932
[Timeout] Query timed out: msg id 1191
[Status] Testing complete (end of file)
Statistics:
Queries sent: 2000
Queries completed: 1999 (99.95%)
Queries lost: 1 (0.05%)
Response codes: NOERROR 1999 (100.00%)
Average packet size: request 37, response 53
Run time (s): 2.026278
Queries per second: 986.537879
Average Latency (s): 0.008476 (min 0.004251, max 0.020062)
Latency StdDev (s): 0.001934
Connection Statistics:
Reconnections: 0 (0.00% of 1 connections)
Average Latency (s): 0.020142 (min 0.020142, max 0.020142)
DNS-over-HTTPS statistics:
HTTP/2 return codes: 200: 2000
Version: 2.7.0
Reproducer:
echo '\". A' | dnsperf -v
dnsperf results:
> FORMERR \". A 0.000348
Wire format:
"\x00\x00\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00\x01\x22\x2e\x00\x00\x01\x00\x01"
The \x2e above is ASCII ".", which is an extra byte; it should not be there.
Expected results, for comparison:
> NXDOMAIN \". A 0.000348
Wire format:
b'\xb0\xe7\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00\x01\x22\x00\x00\x01\x00\x01'
The same happens with other \-escaped qnames I tested: "\\. A", "\123. A", "\123.xxx A".
I'm a little confused by these two projects. The Nominum web site has a tarball and doesn't link to either GitHub project. Is this project the official source?
Hi team,
Thanks for providing the awesome tool for DNS performance testing.
Is there any way to specify the real client IP, like dig's +subnet option, in this tool?
I see there is a -e option but I have no idea how to use it to simulate the subnet option in EDNS; any example would be appreciated!
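Assuming your dnsperf build supports an -E code:hexvalue option for raw EDNS options, an EDNS Client Subnet (option code 8) payload can be built by hand: two bytes of address family (0001 for IPv4), one byte source prefix length, one byte scope (0), then the significant address bytes. A sketch for 192.0.2.0/24 (server address and filenames are placeholders):

```shell
# EDNS Client Subnet payload for 192.0.2.0/24:
# family=0001 (IPv4), source prefix=24 (0x18), scope=00, address=c0 00 02
ecs_hex=$(printf '0001%02x00' 24; printf '%02x%02x%02x' 192 0 2)
echo "$ecs_hex"   # 00011800c00002
# Then, assuming -E is available in your dnsperf version:
#   dnsperf -d queries.txt -s 127.0.0.1 -e -E "8:$ecs_hex"
```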
Version affected: 2.10.0
First, definition of -Q in the manual page:
-Q max_qps
Limits the number of requests per second. There is no default limit.
(Emphasis added by me.)
This does not match the real behavior of that feature. Right now the code looks like this:
/* Rate limiting */
if (tinfo->max_qps > 0) {
run_time = now - times->start_time;
req_time = (MILLION * stats->num_sent) / tinfo->max_qps;
if (req_time > run_time) {
usleep(req_time - run_time);
now = perf_get_time();
continue;
}
}
It calculates when the next packet should be sent based on the total number of packets, not the number of packets in a given second.
Consequently, if the server is slower than the specified -Q rate for a period of time, unsent packets "accumulate" and the -Q value is then overshot, possibly very significantly.
Here is an example of such a test run, as a table. Columns 2, 3 and 4 are the numbers of packets as reported by dnsperf. See time 25 s and following.
time [s] | sent | recvd | timeouts |
---|---|---|---|
1 | 100021 | 99673 | 0 |
2 | 100031 | 100024 | 0 |
3 | 100028 | 100027 | 0 |
4 | 100027 | 100031 | 0 |
5 | 100028 | 100030 | 0 |
6 | 100023 | 100023 | 323 |
7 | 100022 | 100023 | 0 |
8 | 100018 | 100015 | 0 |
9 | 100025 | 100020 | 0 |
10 | 100019 | 100022 | 0 |
11 | 100130 | 100134 | 0 |
12 | 100026 | 100026 | 0 |
13 | 100028 | 100033 | 0 |
14 | 100018 | 100018 | 0 |
15 | 100032 | 100025 | 0 |
16 | 100023 | 100027 | 0 |
17 | 100026 | 100017 | 0 |
18 | 100022 | 100028 | 0 |
19 | 100025 | 100020 | 0 |
20 | 100026 | 100025 | 0 |
21 | 100034 | 100039 | 0 |
22 | 100026 | 100029 | 0 |
23 | 100028 | 100014 | 0 |
24 | 100023 | 100040 | 0 |
25 | 100028 | 100023 | 0 |
26 | 51631 | 50659 | 0 |
27 | 0 | 0 | 0 |
28 | 0 | 0 | 0 |
29 | 0 | 0 | 0 |
30 | 0 | 0 | 0 |
31 | 9383 | 8383 | 1000 |
32 | 0 | 0 | 0 |
33 | 0 | 0 | 0 |
34 | 0 | 0 | 0 |
35 | 0 | 0 | 0 |
36 | 9388 | 8388 | 1000 |
37 | 0 | 0 | 0 |
38 | 0 | 0 | 0 |
39 | 0 | 0 | 0 |
40 | 0 | 0 | 0 |
41 | 9349 | 8350 | 1000 |
42 | 9 | 8 | 0 |
43 | 0 | 0 | 0 |
44 | 0 | 0 | 0 |
45 | 0 | 0 | 0 |
46 | 98089 | 98080 | 999 |
47 | 215355 | 215341 | 1 |
48 | 214683 | 214645 | 0 |
49 | 214870 | 214909 | 0 |
50 | 214906 | 214837 | 0 |
51 | 215887 | 215965 | 0 |
52 | 219043 | 218387 | 0 |
53 | 209458 | 209342 | 0 |
54 | 213390 | 213553 | 0 |
55 | 215100 | 215093 | 0 |
56 | 214460 | 214249 | 0 |
57 | 214883 | 215026 | 609 |
58 | 213354 | 213398 | 0 |
59 | 212175 | 211942 | 0 |
The simplest reproducer is to run dnsperf with -Q against an echo server, kill the server in the middle of the test, and then restart it. dnsperf will send more queries as it tries to "catch up".
$ yes '. A' | dnsperf -O suppress=timeout -t 1 -Q 100000 -l 10 -s 127.0.0.1 -S1 -p 5353
DNS Performance Testing Tool
Version 2.10.0
[Status] Command line: dnsperf -O suppress=timeout -t 1 -Q 100000 -l 10 -s 127.0.0.1 -S1 -p 5353
[Status] Sending queries (to 127.0.0.1:5353)
[Status] Started at: Thu Feb 2 16:12:30 2023
[Status] Stopping after 10.000000 seconds
1675350751.204563: 99992.803613
1675350752.205596: 100005.694118
1675350753.206756: 34156.378601
1675350754.207943: 0.000000
1675350755.209119: 0.000000
1675350756.210197: 0.000000
1675350757.211227: 226568.634307
1675350758.211343: 324562.350767
1675350759.211482: 114571.074621
[Status] Testing complete (time limit)
Statistics:
Queries sent: 999995
Queries completed: 999595 (99.96%)
Queries lost: 400 (0.04%)
Response codes: NOERROR 999595 (100.00%)
Average packet size: request 17, response 17
Run time (s): 10.000020
Queries per second: 99959.300081
Average Latency (s): 0.000160 (min 0.000005, max 0.000604)
Latency StdDev (s): 0.000142
Hi team,
I have been struggling to get past the ./configure step while trying to build dnsperf from the GitHub repository. The exact error encountered is the following:
./configure: line 3938: syntax error near unexpected token 'disable-static'
./configure: line 3938: LT_INIT(disable-static)
Running echo $? straight after the configure script gives exit status 2.
It goes without saying that I have already installed all the dependencies. The machine I am working with is Ubuntu Linux. These are the steps I took prior to getting stuck on the error above:
apt update
apt-get install -y libssl-dev libldns-dev libck-dev libnghttp2-dev autoconf libtool
git clone https://github.com/DNS-OARC/dnsperf.git
cd dnsperf
./autogen.sh
./configure <-------------WHERE THE PROBLEM OCCURS
I went over the README file 4 times in case there were any other dependencies I had to install but I couldn't see any. Google searching did not help me much either.
Could you please help me with this?
Thanks in advance.
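For what it's worth, an unexpanded LT_INIT(disable-static) in the generated configure script usually means aclocal could not find libtool's m4 macro files when ./autogen.sh ran, so the macro was passed through literally instead of being expanded. A small sketch of that diagnosis (the configure text here is fabricated purely for illustration):

```python
def macros_expanded(configure_text: str) -> bool:
    # If the literal string "LT_INIT" survives into the generated
    # configure script, the libtool macros were never expanded.
    return "LT_INIT" not in configure_text

# Fabricated examples of a broken and a healthy configure script.
print(macros_expanded("LT_INIT(disable-static)\n"))   # macros missing
print(macros_expanded("# expanded shell code only\n"))
```

In that situation, installing libtool (so its .m4 files land where aclocal searches) and re-running ./autogen.sh is typically the fix.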
+ ../resperf -s 1.1.1.1 -m 1 -d ../../../src/test/datafile2 -r 2 -c 2 -M udp -y hmac-sha256:test:Ax42vsuHBjQOKlVHO8yU1zGuQ5hjeSz01LXiNze8pb8=
Failing assertion due to probable leaked memory in context 0x5651d6fb6590 ("") (stats[888].gets == 1).
../../../lib/isc/mem.c:1406: REQUIRE((__builtin_expect(((mctx0) != ((void *)0)), 1) && __builtin_expect((((const isc__magic_t *)(mctx0))->magic == ((('M') << 24 | ('e') << 16 | ('m') << 8 | ('C')))), 1))) failed, back trace
#0 0x7f1593434a4a in ??
#1 0x7f1593434980 in ??
#2 0x7f15934485e5 in ??
#3 0x7f1593448818 in ??
#4 0x7f1593448d87 in ??
#5 0x7f159344bb95 in ??
#6 0x5651d55abdc3 in ??
#7 0x5651d55ae1f7 in ??
#8 0x7f1592eca0b3 in ??
#9 0x5651d55a03be in ??
Aborted (core dumped)
Recently the set of accepted QTYPE strings changed and my old query-set (generated from real traffic) has lots of cases that get skipped now. Issues I see: "*" instead of ANY, and meaningless Reserved and Unassigned TYPE123 entries.
One way of dealing with this would be using ldns; fixing these directly shouldn't need that many code changes either.
I see my query-set even contains some TYPE0, which would need additional code due to unrepresentability in the ldns API, but I think it's fine to drop that weird case.
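A small pre-processing sketch for cleaning such a query-set before feeding it to dnsperf (the mapping choices here are my own assumptions, not dnsperf behavior):

```python
def normalize_qtype(token: str) -> str:
    # Map the legacy wildcard notation to its mnemonic; reject registry
    # labels that carry no usable type code. Illustrative choices only.
    if token == "*":
        return "ANY"
    if token.upper() in ("RESERVED", "UNASSIGNED"):
        raise ValueError("qtype name carries no type code: " + token)
    return token

print(normalize_qtype("*"))        # ANY
print(normalize_qtype("TYPE123"))  # TYPE123
```

Generic TYPEnnn tokens pass through unchanged; only the known-bad strings are rewritten or rejected.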
Reproducer:
# dnsperf -T10 -c 2000 -l 1 -s ::1 -d /tmp/ql
DNS Performance Testing Tool
Version 2.4.0
[Status] Command line: dnsperf -T10 -c 2000 -l 1 -s ::1 -d /tmp/ql
[Status] Sending queries (to ::1)
[Status] Started at: Mon Jan 18 15:47:46 2021
[Status] Stopping after 1.000000 seconds
*** buffer overflow detected ***: terminated
*** buffer overflow detected ***: terminated
Aborted (core dumped)
Content of the query list (-d) is irrelevant, and so are -l and -s.
BIND 9.16 removes isc-config.sh and changes the API of isc_mem_create() and isc_buffer_allocate(). I don't have time to do a full PR-ready diff with autoconf checks to make it work with either version, but to hopefully save someone else a bit of time, this is what I'm using in the OpenBSD port for now.
The API changes were in isc-projects/bind9@1b716a3 and isc-projects/bind9@4459745
Index: src/dns.c
--- src/dns.c.orig
+++ src/dns.c
@@ -137,10 +137,7 @@ perf_dns_createctx(bool updates)
return NULL;
mctx = NULL;
- result = isc_mem_create(0, 0, &mctx);
- if (result != ISC_R_SUCCESS)
- perf_log_fatal("creating memory context: %s",
- isc_result_totext(result));
+ isc_mem_create(&mctx);
ctx = isc_mem_get(mctx, sizeof(*ctx));
if (ctx == NULL) {
@@ -373,9 +370,7 @@ perf_dns_parseednsoption(const char* arg, isc_mem_t* m
option->mctx = mctx;
option->buffer = NULL;
- result = isc_buffer_allocate(mctx, &option->buffer, strlen(value) / 2 + 4);
- if (result != ISC_R_SUCCESS)
- perf_log_fatal("out of memory");
+ isc_buffer_allocate(mctx, &option->buffer, strlen(value) / 2 + 4);
result = isc_parse_uint16(&code, copy, 0);
if (result != ISC_R_SUCCESS) {
Index: src/dnsperf.c
--- src/dnsperf.c.orig
+++ src/dnsperf.c
@@ -389,10 +389,7 @@ setup(int argc, char** argv, config_t* config)
isc_result_t result;
const char* mode = 0;
- result = isc_mem_create(0, 0, &mctx);
- if (result != ISC_R_SUCCESS)
- perf_log_fatal("creating memory context: %s",
- isc_result_totext(result));
+ isc_mem_create(&mctx);
dns_result_register();
Index: src/resperf.c
--- src/resperf.c.orig
+++ src/resperf.c
@@ -226,10 +226,7 @@ setup(int argc, char** argv)
isc_result_t result;
const char* _mode = 0;
- result = isc_mem_create(0, 0, &mctx);
- if (result != ISC_R_SUCCESS)
- perf_log_fatal("creating memory context: %s",
- isc_result_totext(result));
+ isc_mem_create(&mctx);
dns_result_register();
When testing against an authoritative server it can be useful to clear the RD bit: it is not really supposed to be set when a recursive server queries an auth server, and an auth server may decide to respond REFUSED if RD is set.
Line 1024 in 71e810f seems to force-set it.
Any objection to making an option to clear it?
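Until such an option exists, the bit can be cleared in a raw query before it goes on the wire. A sketch of the bit arithmetic (the sample header bytes are made up for illustration):

```python
def clear_rd(packet: bytes) -> bytes:
    # RD (Recursion Desired) is the low bit of the third header byte,
    # i.e. bit 0x01 of the high flags octet in the DNS header.
    return packet[:2] + bytes([packet[2] & ~0x01]) + packet[3:]

# Made-up 12-byte DNS header: id 0x1234, flags 0x0100 (RD set), QDCOUNT=1.
query = bytes([0x12, 0x34, 0x01, 0x00, 0, 1, 0, 0, 0, 0, 0, 0])
print(hex(clear_rd(query)[2]))  # 0x0
```

Only the flags octet changes; the message id and counts are untouched, so response matching still works.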
I believe there's an error in the usage of the offset parameter in the per_thread() function. It's giving me unexpected results: for example, if I specify -T 6 -c 24 I would expect to get exactly four sockets per thread, but I'm actually getting the sequence (5, 5, 5, 5, 4, 4), making a total of 28 clients created instead of the specified 24.
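For comparison, a per-thread distribution that always sums to the requested total can be sketched like this (my own arithmetic for the expected behavior, not dnsperf's code):

```python
def clients_per_thread(total_clients: int, threads: int) -> list[int]:
    # Distribute clients so the counts sum exactly to total_clients:
    # the first (total % threads) threads get one extra client.
    base, extra = divmod(total_clients, threads)
    return [base + (1 if i < extra else 0) for i in range(threads)]

print(clients_per_thread(24, 6))  # [4, 4, 4, 4, 4, 4]
print(clients_per_thread(25, 6))  # [5, 4, 4, 4, 4, 4]
```

With -T 6 -c 24 this yields four clients on every thread, for 24 total rather than 28.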
Thanks for the heads up. I'm setting the dnsperf snap to Unpublished at this time, since I no longer have time to maintain it. If you (or another maintainer) change your mind and would like to release dnsperf as a snap, please let me know and I'll be happy to transfer control of the snap name.
Originally posted by @mpontillo in #1 (comment)
Trying to build from source on a CentOS system. All dependencies installed, but I get the following error:
dns.o: In function `name_fromstring':
/root/dnsperf/dnsperf-2.3.4/src/dns.c:200: undefined reference to `isc_buffer_constinit'
Not sure how to proceed.
I get following issues while building a Docker image based on CentOS8:
Problem 1: conflicting requests
- nothing provides libbind9.so.160()(64bit) needed by dnsperf-2.3.4-1.el8.x86_64
- nothing provides libdns.so.1102()(64bit) needed by dnsperf-2.3.4-1.el8.x86_64
- nothing provides libisc.so.169()(64bit) needed by dnsperf-2.3.4-1.el8.x86_64
- nothing provides libisccfg.so.160()(64bit) needed by dnsperf-2.3.4-1.el8.x86_64
Problem 2: conflicting requests
- nothing provides libbind9.so.160()(64bit) needed by resperf-2.3.4-1.el8.x86_64
- nothing provides libdns.so.1102()(64bit) needed by resperf-2.3.4-1.el8.x86_64
- nothing provides libisc.so.169()(64bit) needed by resperf-2.3.4-1.el8.x86_64
- nothing provides libisccfg.so.160()(64bit) needed by resperf-2.3.4-1.el8.x86_64
The problem is, I have a newer version of the lib from https://centos.pkgs.org/8/centos-appstream-x86_64/bind-libs-9.11.13-5.el8_2.x86_64.rpm.html:
/usr/lib64/libbind9.so.161 vs libbind9.so.160
I assume it's the same for the other libs but did not look further. EPEL is activated.
Add option to pin thread pairs to the same core to test if it gives performance improvements.
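On Linux the pinning mechanism itself is simple; a sketch of what such an option would do under the hood (core 0 is an arbitrary choice here, and this pins only the calling thread):

```python
import os

def pin_to_core(core_id: int) -> None:
    # Restrict the calling thread to a single CPU (Linux-only API).
    os.sched_setaffinity(0, {core_id})

pin_to_core(0)
print(sorted(os.sched_getaffinity(0)))
```

Pinning a sender/receiver thread pair to the same core keeps them sharing a cache, which is the effect the proposed option would let us measure.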
dnsperf 2.11.1 on macOS Ventura using Homebrew
doh-uri extended option not working:
dnsperf -m doh -s blahblah.com -O doh-uri=https://blahblah.com/dns-query
DNS Performance Testing Tool
Version 2.11.1
invalid long option: doh-uri=https://blahblah.com/dns-query
Cannot seem to get it to take a URI; have tried many iterations of the syntax.
robertpaolucci@CNMUS0022 dnsperf-2.11.1 % dnsperf -H
DNS Performance Testing Tool
Version 2.11.1
Usage: dnsperf ... -O <name>[=<value>] ...
Available long options:
latency-histogram: collect and print detailed latency histograms
verbose-interval-stats: print detailed statistics for each stats_interval
num-queries-per-conn=<queries>: Number of queries to send per connection
suppress=<message[,message,...]>: suppress messages/warnings, see man-page for list of message types
doh-method=<doh_method>: the HTTP method to use for DNS-over-HTTPS: GET or POST (default: GET)
doh-uri=<doh_uri>: the URI to use for DNS-over-HTTPS (default: https://localhost/dns-query)
./autogen.sh: line 19: autoreconf: command not found
Dependencies are already installed. My machine is a CentOS system.
strerror_r()
Like Flamethrower, dnsperf could generate detailed metrics for each of its concurrent senders. Metrics would include send and receive counts, timeouts, min, max and average latency, errors, and the like. The output format would be JSON, suitable for ingestion into databases such as Elastic for further processing or visualization. See Flamethrower's -o flag.
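A sketch of what one such per-sender record could look like (the field names here are illustrative, not an existing dnsperf or Flamethrower schema):

```python
import json

def sender_metrics(sender_id, sent, received, timeouts,
                   lat_min, lat_max, lat_avg):
    # Build one per-sender metrics record; keys are hypothetical.
    return {
        "sender": sender_id,
        "sent": sent,
        "received": received,
        "timeouts": timeouts,
        "latency": {"min": lat_min, "max": lat_max, "avg": lat_avg},
    }

record = sender_metrics(0, 1000, 998, 2, 0.0001, 0.012, 0.0008)
print(json.dumps(record))
```

One JSON object per sender per interval would be trivial for Elastic (or any log pipeline) to ingest.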
Attempting to use dnsperf 2.11.2 to characterize performance on a production DNS server (CacheServe) using DoH requests. However, I run into seg faults when using a qps that is lower than the total number of connections, as well as when controlling the number of queries per connection.
Ex. dnsperf -m doh -s -d -q 140 -l 300 -Q 15 -c 60 -T 4 -W -O num-queries-per-conn=1 -O doh-method=GET -S 1
The seg fault can be avoided by reducing the thread count to 1 (-T 1). Attaching a back trace of the above command line. The error occurs for both POST and GET requests.
What's interesting is that I was not able to reproduce the issue on dns.google:
dnsperf -s 8.8.8.8 -l 120 -m doh -T 4 -c 100 -O num-queries-per-conn=1 -d -Q 25 -S 1 -O doh-method=GET -t 10
FreeBSD's package of BIND only includes a static version of libdns, which needs symbols that are for some reason not available in the shared-object version of libcrypto, so you need to link against the static version of it.
/usr/bin/ld: error: undefined symbol: EVP_DigestSign
>>> referenced by openssleddsa_link.c
>>> openssleddsa_link.o:(openssleddsa_sign) in archive /usr/local/lib/libdns.a
/usr/bin/ld: error: undefined symbol: EVP_DigestVerify
>>> referenced by openssleddsa_link.c
>>> openssleddsa_link.o:(openssleddsa_verify) in archive /usr/local/lib/libdns.a
cc: error: linker command failed with exit code 1 (use -v to see invocation)
This was fixed by adding /usr/lib/libcrypto.a to LDFLAGS, but this should not be needed.
Please give me an example
Something is broken with the ramp-up/QPS in resperf using TLS; this data file has >500 entries and it should not run out:
DNS Resolution Performance Testing Tool
Version 2.3.4
[Status] Command line: resperf -s 8.8.8.8 -m 1 -d ../../../src/test/datafile -r 2 -c 2 -M tls
[Status] Sending
[Status] Ramp-up done, sending constant traffic
Error: ran out of query data
Same file works with TCP:
DNS Resolution Performance Testing Tool
Version 2.3.4
[Status] Command line: resperf -s 8.8.8.8 -m 1 -d ../../../src/test/datafile -r 2 -c 2 -M tcp
[Status] Sending
[Status] Ramp-up done, sending constant traffic
[Status] Waiting for more responses
[Status] Testing complete
Statistics:
Queries sent: 2
Queries completed: 2
Queries lost: 0
Response codes: NOERROR 2 (100.00%)
Run time (s): 4.000003
Maximum throughput: 2.000000 qps
Lost at that point: 0.00%
Version 2.11.2
Attempting to use DoH with POST requests often results in a seg fault. The test will run for about 45 s, then experience a flurry of query timeouts, and eventually seg fault. While the test is running normally the CPU sits at 20% on a single core, but it jumps to 100% when the timeouts occur.
Below is command line used to generate the seg fault.
dnsperf -m doh -s -d -q 500 -l 90 -c 50 -Q 2000 -T 4 -W -O doh-method=POST -S 1 -W
Examples from syslog:
May 4 09:57:27 jmeterclient20 kernel: dnsperf[28734]: segfault at 887f90 ip 0000000000887f90 sp 00007f210475c6b8 error 15
May 4 09:57:27 jmeterclient20 abrt-hook-ccpp: Process 28733 (dnsperf) of user 1008 killed by SIGSEGV - dumping core
May 4 10:32:08 jmeterclient20 kernel: dnsperf[9784]: segfault at 0 ip (null) sp 00007f17816c16b8 error 14 in dnsperf[400000+14000]
May 4 10:32:08 jmeterclient20 abrt-hook-ccpp: Process 9783 (dnsperf) of user 1008 killed by SIGSEGV - dumping core
Have experimented with lowering/increasing outstanding queries (-q), reducing the number of threads (-T), limiting the input file to a single entry, and increasing the buffer size (-b). All result in a seg fault. Have been able to avoid the seg fault by reducing the queries per second (-Q) to a rate that's the same as client count or below. Including the output when the seg fault occurs.
Server: ProLiant DL360 Gen10
CPU: Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz
OS: CentOS Linux release 7.7.1908 (Core)
The "original" version 2.1.0.0 had quite nice PDFs in it. Have we lost those? I've never seen any sources for those PDFs, so I'm afraid it's possible we won't be able to get their sources anymore 😢
/usr/bin/ld: cannot find -lprotobuf-c
/usr/bin/ld: cannot find -ljson-c
collect2: error: ld returned 1 exit status
This should have been detected during ./configure.
See the actual problem description below.
Sometimes the reported latency is equal to the current unix timestamp.
I've confirmed in GDB that the recvd[i].sent time is set to 0 while the packet is being processed in the do_recv() inner loop. I don't know how that is possible.
Commit: 432b47f - i.e. before merging support for periodic stats and latency histograms, but with the recent errno fixes.
Log:
$ src/dnsperf -Q 5 -d /tmp/qlist -s 192.168.0.1 -l 10 -S1 -O suppress=unexpected,timeout -l5 -t0.1 -S1
DNS Performance Testing Tool
Version 2.10.0
[Status] Command line: dnsperf -Q 5 -d /tmp/qlist -s 192.168.0.1 -l 10 -S1 -O suppress=unexpected,timeout -l5 -t0.1 -S1
[Status] Sending queries (to 192.168.0.1:53)
[Status] Started at: Wed Feb 1 11:21:45 2023
[Status] Stopping after 5.000000 seconds
1675246906.520453: 4.993758
1675246907.521559: 4.994476
1675246908.522734: 4.994132
1675246909.523736: 4.994995
[Status] Testing complete (time limit)
Statistics:
Queries sent: 25
Queries completed: 25 (100.00%)
Queries lost: 1 (4.00%)
Response codes: NOERROR 25 (100.00%)
Average packet size: request 17, response 92
Run time (s): 5.000322
Queries per second: 4.999678
Average Latency (s): 67009876.346007 (min 0.014435, max 1675246908.065011)
Latency StdDev (s): -nan
PCAP with the session: dns.pcap.zip
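The broken averages follow directly from the zeroed field, assuming latency is computed as receive time minus sent time:

```python
import time

# Why a zeroed 'sent' timestamp shows up as the current unix time:
# latency = receive_time - sent_time, so sent_time == 0 makes the
# "latency" equal the receive timestamp itself.
sent_time = 0.0            # the corrupted value observed in GDB
receive_time = time.time()
latency = receive_time - sent_time
print(latency)
```

One such ~1.6e9-second sample mixed into 25 real samples is exactly what drags the average to 67009876 s and makes the standard deviation overflow to -nan.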
Hello folks,
I wanted to report something I was notified about by my security team after running some DNS performance tests using the dnsperf sample query datafile. It seems the domain osuno.no-ip.biz is considered by some cloud providers, such as AWS, to be a domain name associated with a Command & Control (C&C) server.
I am reporting this here as I didn't find any other place on your website to report this kind of issue. It might be worth removing that domain from the datafile :)
Thanks for your work!
relates to Homebrew/homebrew-core#54450
Current issue I saw is:
checking for BIND 9 libraries... checking for isc-config.sh... no
configure: error: BIND 9 libraries must be installed
The logs are in GHA runs, https://github.com/Homebrew/homebrew-core/runs/659210419
Version: 638e7e7
For mysterious reasons, dnsperf sometimes thinks it is connected over TCP when it really is not, because nothing is listening on the target port.
Reproducer:
while true; do time dnsperf -s 127.0.0.2 -d /tmp/qlist -l 0.0001 -m tcp; done
/tmp/qlist content: net. SOA
DNS Performance Testing Tool
Version 2.10.0
[Status] Command line: dnsperf -s 127.0.0.2 -d /tmp/qlist -l 0.0001 -m tcp
[Status] Sending queries (to 127.0.0.2:53)
[Status] Started at: Mon Jan 16 18:05:21 2023
[Status] Stopping after 0.000100 seconds
[Timeout] Query timed out: msg id 0
[Status] Testing complete (time limit)
Statistics:
Queries sent: 1
Queries completed: 0 (0.00%)
Queries lost: 1 (100.00%)
Response codes:
Average packet size: request 21, response 0
Run time (s): 0.001089
Queries per second: 0.000000
Average Latency (s): 0.000000 (min 0.000000, max 0.000000)
Connection Statistics:
Reconnections: 26
Average Latency (s): 0.000041 (min 0.000025, max 0.000294)
Latency StdDev (s): 0.000054
real 0m5.008s
user 0m1.765s
sys 0m3.235s
This should be "impossible" because nothing is listening on 127.0.0.2 port 53. To verify that I've captured PCAP with all attempts leading to this output:
dnsperf.pcap.zip
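One place to look when a TCP client "connects" to a dead port is the non-blocking connect path: a socket becoming writable does not by itself prove the handshake succeeded, SO_ERROR must be consulted afterwards. A sketch of that check (127.0.0.2:9 is an assumed closed loopback endpoint):

```python
import errno
import select
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setblocking(False)
err = 0
try:
    s.connect(("127.0.0.2", 9))   # nothing should listen here
except BlockingIOError:
    # Connect is in progress: wait for writability, then read SO_ERROR,
    # which holds the real outcome of the handshake.
    select.select([], [s], [], 2)
    err = s.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
except OSError as e:
    err = e.errno                 # some systems fail synchronously
print(errno.errorcode.get(err, str(err)))
s.close()
```

If a client skips the SO_ERROR read, a refused connection can masquerade as an established one until the first write fails, which would match the 26 silent "reconnections" above.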
I have several questions about dnsperf. When I build from sources, I get "undefined reference to `isc_hmacmd5_init'". How can I proceed? I know the question is about the .so libraries.
Perhaps this is similar to the TLS "ran out of query data" problem?
Example:
resperf -V
DNS Resolution Performance Testing Tool
Version 2.4.0
~$ cat network-logcollector-ams2z*.log | grep -v ANY | resperf -s 10.70.250.12 -e -v -M udp
DNS Resolution Performance Testing Tool
Version 2.4.0
[Status] Command line: resperf -s 10.70.250.12 -e -v -M udp
[Status] Sending
Error: ran out of query data
But,
~$ cat network-logcollector-ams2z*.log | wc -l
104190755
At the other end,
2 11:52:46
1 11:52:54
230 11:53:04
1721 11:53:05
3369 11:53:06
5067 11:53:07
6718 11:53:08
8384 11:53:09
10051 11:53:10
11716 11:53:11
11145 11:53:12
2 11:53:26
2 11:53:46
Doesn't add up to 100 million queries to me.
Any ideas?
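For what it's worth, summing the per-second receive counts above confirms the gap:

```python
# Per-second receive counts transcribed from the report above.
per_second = [2, 1, 230, 1721, 3369, 5067, 6718, 8384, 10051, 11716,
              11145, 2, 2]
total = sum(per_second)
print(total)  # 58408 - nowhere near the 104,190,755 lines piped in
```

So resperf gave up after consuming only a tiny fraction of the input, which is what makes the "ran out of query data" error so puzzling here.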
BIND 9.13 has a generic interface for HMAC instead of one per type; use isc_hmac_t.