dns-packet's Introduction

dns-packet

An abstract-encoding compliant module for encoding / decoding DNS packets. Lifted out of multicast-dns as a separate module.

npm install dns-packet

UDP Usage

const dnsPacket = require('dns-packet')
const dgram = require('dgram')

const socket = dgram.createSocket('udp4')

const buf = dnsPacket.encode({
  type: 'query',
  id: 1,
  flags: dnsPacket.RECURSION_DESIRED,
  questions: [{
    type: 'A',
    name: 'google.com'
  }]
})

socket.on('message', message => {
  console.log(dnsPacket.decode(message)) // prints out a response from google dns
})

socket.send(buf, 0, buf.length, 53, '8.8.8.8')

Also see the UDP example.

TCP, TLS, HTTPS

While DNS has traditionally been carried over a datagram transport, it is increasingly carried over TCP for larger responses (commonly including DNSSEC responses) and over TLS or HTTPS for enhanced security. See the examples below for how to use dns-packet to wrap DNS packets in these protocols.
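
A minimal TCP sketch, assuming 8.8.8.8 accepts DNS over TCP on port 53 (the resolver address is only an example). It uses the streamEncode / streamDecode API documented below and waits for the length-prefixed response before decoding:

const dnsPacket = require('dns-packet')
const net = require('net')

const buf = dnsPacket.streamEncode({
  type: 'query',
  id: 1,
  flags: dnsPacket.RECURSION_DESIRED,
  questions: [{
    type: 'A',
    name: 'google.com'
  }]
})

const client = new net.Socket()
let response = Buffer.alloc(0)

client.connect(53, '8.8.8.8', () => {
  client.write(buf)
})

client.on('data', data => {
  response = Buffer.concat([response, data])
  if (response.length >= 2) {
    // The first two bytes of a TCP DNS message carry the payload length.
    const expectedLength = response.readUInt16BE(0)
    if (response.length >= expectedLength + 2) {
      console.log(dnsPacket.streamDecode(response))
      client.destroy()
    }
  }
})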

API

var buf = packets.encode(packet, [buf], [offset])

Encodes a DNS packet into a buffer containing a UDP payload.

var packet = packets.decode(buf, [offset])

Decodes a DNS packet from a buffer containing a UDP payload.

var buf = packets.streamEncode(packet, [buf], [offset])

Encodes a DNS packet into a buffer containing a TCP payload.

var packet = packets.streamDecode(buf, [offset])

Decodes a DNS packet from a buffer containing a TCP payload.

var len = packets.encodingLength(packet)

Returns the number of bytes needed to encode the DNS packet.
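
For example, encodingLength can be combined with the optional buf and offset arguments of encode to write into a pre-allocated buffer. A sketch (the packet shown is just an example):

const dnsPacket = require('dns-packet')

const packet = {
  type: 'query',
  id: 1,
  questions: [{ type: 'A', name: 'google.com' }]
}

// Allocate exactly the space the packet needs and encode into it at offset 0.
const buf = Buffer.alloc(dnsPacket.encodingLength(packet))
dnsPacket.encode(packet, buf, 0)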

Packets

Packets look like this

{
  type: 'query|response',
  id: optionalIdNumber,
  flags: optionalBitFlags,
  questions: [...],
  answers: [...],
  additionals: [...],
  authorities: [...]
}

The bit flags available are

packet.RECURSION_DESIRED
packet.RECURSION_AVAILABLE
packet.TRUNCATED_RESPONSE
packet.AUTHORITATIVE_ANSWER
packet.AUTHENTIC_DATA
packet.CHECKING_DISABLED

To use more than one flag, bitwise-or them together

var flags = packet.RECURSION_DESIRED | packet.RECURSION_AVAILABLE

And to check for a flag, use bitwise-and

var isRecursive = message.flags & packet.RECURSION_DESIRED

A question looks like this

{
  type: 'A', // or SRV, AAAA, etc
  class: 'IN', // one of IN, CS, CH, HS, ANY. Default: IN
  name: 'google.com' // which record are you looking for
}

And an answer, additional, or authority looks like this

{
  type: 'A', // or SRV, AAAA, etc
  class: 'IN', // one of IN, CS, CH, HS
  name: 'google.com', // which name is this record for
  ttl: optionalTimeToLiveInSeconds,
  (record-specific data, see below)
}
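
For example, a response carrying a single A answer could be encoded like this (a sketch; the name and address are placeholders):

const dnsPacket = require('dns-packet')

const response = dnsPacket.encode({
  type: 'response',
  id: 1,
  flags: dnsPacket.AUTHORITATIVE_ANSWER,
  questions: [{ type: 'A', name: 'example.com' }],
  answers: [{
    type: 'A',
    class: 'IN',
    name: 'example.com',
    ttl: 300,
    data: '127.0.0.1' // record-specific data for an A record, see below
  }]
})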

Supported record types

A

{
  data: 'IPv4 address' // e.g. 127.0.0.1
}

AAAA

{
  data: 'IPv6 address' // e.g. fe80::1
}

CAA

{
  flags: 128, // octet
  tag: 'issue|issuewild|iodef',
  value: 'ca.example.net',
  issuerCritical: false
}

CNAME

{
  data: 'cname.to.another.record'
}

DNAME

{
  data: 'dname.to.another.record'
}

DNSKEY

{
  flags: 257, // 16 bits
  algorithm: 1, // octet
  key: Buffer
}

DS

{
  keyTag: 12345,
  algorithm: 8,
  digestType: 1,
  digest: Buffer
}

HINFO

{
  data: {
    cpu: 'cpu info',
    os: 'os info'
  }
}

MX

{
  preference: 10,
  exchange: 'mail.example.net'
}

NAPTR

{
  data:
    {
      order: 100,
      preference: 10,
      flags: 's',
      services: 'SIP+D2U',
      regexp: '!^.*$!sip:[email protected]!',
      replacement: '_sip._udp.example.com'
    }
}

NS

{
  data: nameServer
}

NSEC

{
  nextDomain: 'a.domain',
  rrtypes: ['A', 'TXT', 'RRSIG']
}

NSEC3

{
  algorithm: 1,
  flags: 0,
  iterations: 2,
  salt: Buffer,
  nextDomain: Buffer, // Hashed per RFC5155
  rrtypes: ['A', 'TXT', 'RRSIG']
}

NULL

{
  data: Buffer.from('any binary data')
}

OPT

EDNS0 options.

{
  type: 'OPT',
  name: '.',
  udpPayloadSize: 4096,
  flags: packet.DNSSEC_OK,
  options: [{
    // pass in any code/data for generic EDNS0 options
    code: 12,
    data: Buffer.alloc(31)
  }, {
    // Several EDNS0 options have enhanced support
    code: 'PADDING',
    length: 31,
  }, {
    code: 'CLIENT_SUBNET',
    family: 2, // 1 for IPv4, 2 for IPv6
    sourcePrefixLength: 64, // used to truncate IP address
    scopePrefixLength: 0,
    ip: 'fe80::',
  }, {
    code: 'TCP_KEEPALIVE',
    timeout: 150 // increments of 100ms.  This means 15s.
  }, {
    code: 'KEY_TAG',
    tags: [1, 2, 3],
  }]
}

The options PADDING, CLIENT_SUBNET, TCP_KEEPALIVE and KEY_TAG support enhanced de/encoding. See optioncodes.js for all supported option codes. If the data property is present on an option, it takes precedence. On decoding, data will always be defined.
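
For example, an OPT record carrying a CLIENT_SUBNET option can be attached to a query's additionals section. A sketch (the name and subnet shown are placeholders):

const dnsPacket = require('dns-packet')

const query = dnsPacket.encode({
  type: 'query',
  id: 1,
  questions: [{ type: 'A', name: 'example.com' }],
  additionals: [{
    type: 'OPT',
    name: '.',
    udpPayloadSize: 4096,
    options: [{
      code: 'CLIENT_SUBNET',
      family: 1, // IPv4
      sourcePrefixLength: 24,
      scopePrefixLength: 0,
      ip: '192.0.2.0'
    }]
  }]
})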

PTR

{
  data: 'points.to.another.record'
}

RP

{
  mbox: 'admin.example.com',
  txt: 'txt.example.com'
}

SSHFP

{
  algorithm: 1,
  hash: 1,
  fingerprint: 'A108C9F834354D5B37AF988141C9294822F5BC00'
}

RRSIG

{
  typeCovered: 'A',
  algorithm: 8,
  labels: 1,
  originalTTL: 3600,
  expiration: timestamp,
  inception: timestamp,
  keyTag: 12345,
  signersName: 'a.name',
  signature: Buffer
}

SOA

{
  data:
    {
      mname: domainName,
      rname: mailbox,
      serial: zoneSerial,
      refresh: refreshInterval,
      retry: retryInterval,
      expire: expireInterval,
      minimum: minimumTTL
    }
}

SRV

{
  data: {
    port: servicePort,
    target: serviceHostName,
    priority: optionalServicePriority,
    weight: optionalServiceWeight
  }
}

TLSA

{
  usage: 3,
  selector: 1,
  matchingType: 1,
  certificate: Buffer
}

TXT

{
  data: 'text' || Buffer || [ Buffer || 'text' ]
}

When encoding, scalar values are converted to an array and strings are converted to UTF-8 encoded Buffers. When decoding, the return value will always be an array of Buffer.
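
A round-trip sketch illustrating this (the name and text are placeholders): a string passed in comes back as an array containing a single Buffer.

const dnsPacket = require('dns-packet')

const encoded = dnsPacket.encode({
  type: 'response',
  answers: [{ type: 'TXT', name: 'example.com', data: 'hello' }]
})

const decoded = dnsPacket.decode(encoded)
console.log(decoded.answers[0].data) // [ <Buffer 68 65 6c 6c 6f> ]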

If you need another record type, open an issue and we'll try to add it.

License

MIT

dns-packet's Issues

decoding header flags

Currently, the header flags are decoded as:
flags: flags & 32767

I understand the Q/R flag is pulled out as 'type' but it is technically part of the flags and probably should be included in the flags field. Currently, this breaks a test where I encode a packet and decode it and compare the fields.

Thoughts?

Would it break backward compatibility to change it now? Maybe it's better to leave it alone and mask out the flags comparison in the test.

TXT records can no longer have empty text

Starting with commit f6db3d3, I can no longer send TXT records with an empty payload using multicast-dns:

{name, type: "TXT", ttl, data: ""}

I have identified the problem to be the coalescing of data and options on line 1241 of dns-packet:

return name.encodingLength(a.name) + 8 + renc(a.type).encodingLength(a.data || a.options)

Would it be possible to change that into a comparison with null and/or undefined instead to allow empty TXT records again? Sending empty TXT records may seem pointless, but I found it to be required for interoperability with certain mDNS clients.

ERR_OUT_OF_RANGE

I get this whenever I attempt to use Google's DoH endpoint. I don't get it from cloudflare. Any ideas?

internal/buffer.js:77
  throw new ERR_OUT_OF_RANGE(type || 'offset',
  ^

RangeError [ERR_OUT_OF_RANGE]: The value of "offset" is out of range. It must be >= 0 and <= 785. Received 858
    at boundsError (internal/buffer.js:77:9)
    at Buffer.readUInt16BE (internal/buffer.js:323:5)
    at Object.question.decode (C:\Users\Cam\projects\playground\base64url\node_modules\dns-packet\index.js:1417:31)
    at decodeList (C:\Users\Cam\projects\playground\base64url\node_modules\dns-packet\index.js:1537:19)
    at Object.exports.decode (C:\Users\Cam\projects\playground\base64url\node_modules\dns-packet\index.js:1477:12)
    at IncomingMessage.<anonymous> (C:\Users\Cam\projects\playground\base64url\node_modules\@sagi.io\dns-over-https\index.js:78:31)
    at IncomingMessage.emit (events.js:321:20)
    at IncomingMessage.Readable.read (_stream_readable.js:508:10)
    at flow (_stream_readable.js:979:34)
    at resume_ (_stream_readable.js:960:3) {
  code: 'ERR_OUT_OF_RANGE'
}

what is node010?

Installing this package drops an 11MB (!) file called "node010" in the module dir: what is this file? Is this, as the name might seem to suggest, a single-file compiled node 0.10? What is it used for? Can the README.md be extended to explain what this file is and what its security implications are (because this is a DNS package, there are always security implications)?

how to make query with DNSSEC

I would like to make a DNS query with DNSSEC applied. However, there is no example showing how to query with DNSSEC. Can you give me an example of a DNS query that applies DNSSEC?

name.encode for root

name.encode encodes the root domain incorrectly:
'' is encoded as 00 00 and '.' as 00 00 00, instead of just 00.
According to RFC 1035 section 3.1, the first 00 length byte terminates a name.

Trailing dot in name gets an unfilled byte

When a question's name field has a trailing dot (e.g. 1.0.0.127.in-addr.arpa.), the length is computed before the trailing dot is dropped. This results in a buffer that's one byte longer than necessary, and which has a trailing byte that goes unfilled. For example,

        var reversed = '151.1.168.192.in-addr.arpa.';
        resolver.query({
            flags: 1 << 8 | 1 << 5,
            id: getRandomInt(1, 65534),
            questions: [
                {
                    name: reversed,
                    type: 'PTR',
                    class: 'IN'
                },
            ],
            additionals: [
                {
                    name: '.',
                    type: 'OPT',
                    udpPayloadSize: 0x1000,
                },
            ],
        });

results in a buffer len of 56, but only 55 bytes are written.

The relevant code is here:

name.encode = function (str, buf, offset) {
  if (!buf) buf = Buffer.allocUnsafe(name.encodingLength(str)) // <-- allocate based on str len
  if (!offset) offset = 0
  const oldOffset = offset

  // strip leading and trailing .
  const n = str.replace(/^\.|\.$/gm, '') // <-- mutate str
  if (n.length) {
    const list = n.split('.') // <-- loses the last byte

This was discovered while diagnosing #52 (also trying to diagnose mafintosh/multicast-dns#13).

As a workaround, manually dropping the trailing dot when constructing the address seems to work (but it doesn't fix my issue, oh well).

Unify header flag names (breaking)

We currently have these header flag booleans:

flag_auth
flag_trunc
flag_rd
flag_ra
flag_z
flag_ad
flag_cd

I noticed that there is a registered list of 2-character flag names and I think we should use a flag_{xx} naming scheme. Note that this is a breaking change for flag_auth, flag_trunc and flag_z (which is reserved and should likely be removed, unless there's a use case for it).

cc: @pusateri

Replace `Buffer` with `Uint8Array`

Currently, this library cannot directly be used in non-Node environments (without Buffer polyfills). Are there any issues with switching to ArrayBuffer instead? Just wondering if such a contribution would be accepted.

Decode TXT responses as text

These are currently encoded as a Buffer, but I think it would be more useful to have them as a text string. If we decode them as text, should ASCII or UTF8 be used? I'd wager the spec only supports ASCII but I see no harm if we do UTF8 as it is a superset of ASCII.

Cannot decode multicast DNS packet

This packet cannot be decoded:

000084000000000600000000045f686170045f746370056c6f63616c00000c000100001194001411496e646f6f7243616d20324b2d30413143c00c09496e646f6f7263616dc016002f8001000000780005c0a5000140c027002f8001000011940009c02700050000800040c03b00018001000000780004c0a801f3c0270021800100000078000800000000b6f8c03bc0270010800100001194004c0463233d310466663d321469643d34413a34413a33373a37383a34443a3030086d643d54383430300670763d312e310573233d31340473663d300563693d31370b73683d4a6b6e5466773d3d

This is the error:

…/node_modules/dns-packet/index.js:83
        throw new Error('Cannot decode name (bad pointer)')
        ^

Error: Cannot decode name (bad pointer)
    at name.decode (…/node_modules/dns-packet/index.js:83:15)
    at rnsec.decode (…/node_modules/dns-packet/index.js:1147:28)
    at answer.decode (…/node_modules/dns-packet/index.js:1451:18)
    at decodeList (…/node_modules/dns-packet/index.js:1625:19)
    at exports.decode (…/node_modules/dns-packet/index.js:1566:12)
    at Object.<anonymous> (…/dns.js:3:11)
    at Module._compile (node:internal/modules/cjs/loader:1159:14)
    at Module._extensions..js (node:internal/modules/cjs/loader:1213:10)
    at Module.load (node:internal/modules/cjs/loader:1037:32)
    at Module._load (node:internal/modules/cjs/loader:878:12)

Node.js v18.12.1

This is the reproduction case:

const dnsPacket = require('dns-packet');

dnsPacket.decode(
	Buffer.from('000084000000000600000000045f686170045f746370056c6f63616c00000c000100001194001411496e646f6f7243616d20324b2d30413143c00c09496e646f6f7263616dc016002f8001000000780005c0a5000140c027002f8001000011940009c02700050000800040c03b00018001000000780004c0a801f3c0270021800100000078000800000000b6f8c03bc0270010800100001194004c0463233d310466663d321469643d34413a34413a33373a37383a34443a3030086d643d54383430300670763d312e310573233d31340473663d300563693d31370b73683d4a6b6e5466773d3d', 'hex'),
);

Interestingly enough, https://pypi.org/project/dnslib/ cannot decode it either while https://github.com/mdns-js/node-dns-js handles it just fine.

const dns = require('dns-js');

const result = dns.DNSPacket.parse(
	Buffer.from('000084000000000600000000045f686170045f746370056c6f63616c00000c000100001194001411496e646f6f7243616d20324b2d30413143c00c09496e646f6f7263616dc016002f8001000000780005c0a5000140c027002f8001000011940009c02700050000800040c03b00018001000000780004c0a801f3c0270021800100000078000800000000b6f8c03bc0270010800100001194004c0463233d310466663d321469643d34413a34413a33373a37383a34443a3030086d643d54383430300670763d312e310573233d31340473663d300563693d31370b73683d4a6b6e5466773d3d', 'hex'),
);

console.log(result);

Not sure what is going on here.

RangeError [ERR_OUT_OF_RANGE]: The value of "offset" is out of range.

I have seen a specific DNS request for google fail with this error:
RangeError [ERR_OUT_OF_RANGE]: The value of "offset" is out of range. It must be >= 0 and <= 49. Received 73
at Object.answer.decode (/home/pi/bin/nodejs/dns/node_modules/dns-packet/index.js:1353:31)

The DNS request was: 000e0100000100000000000106676f6f676c6503636f6d0000010001010029100000000000000c000a0008121badeb9ce77e55

Ad block dns server?

Could you use this to detect certain ad hosts, return 0.0.0.0 for them, and act like an ad blocker at the DNS level?

RR type encoding length

Currently, the RR type encoding length is set to the length of the RR type data plus 2. RR type A has length 6, RR Type AAAA has length 18, etc. I think this includes the RDATA LEN short but it seems like it shouldn't.

11 MB "node010" file

After installation via NPM, there is an 11MB file in the dns-packet folder.

I can't find a reference to it.
Is there a reason for it being there?

AXFR method returns no answers

For example, creating an AXFR query against the nsztm1.digi.ninja server over UDP4 with zonetransfer.me as the hostname returns an empty answers array:

const dnsPacket = require('dns-packet')
const dgram = require('dgram')

const socket = dgram.createSocket('udp4')

const buf = dnsPacket.encode({
  type: 'query',
  id: 1,
  flags: dnsPacket.RECURSION_DESIRED,
  questions: [{
    type: 'AXFR',
    name: 'zonetransfer.me'
  }]
})

socket.on('message', message => {
  console.log(dnsPacket.decode(message))
})

socket.send(buf, 0, buf.length, 53, 'nsztm1.digi.ninja')

Results:

{
  id: 1,
  type: 'response',
  flags: 257,
  flag_qr: true,
  opcode: 'QUERY',
  flag_aa: false,
  flag_tc: false,
  flag_rd: true,
  flag_ra: false,
  flag_z: false,
  flag_ad: false,
  flag_cd: false,
  rcode: 'FORMERR',
  questions: [ { name: 'zonetransfer.me', type: 'AXFR', class: 'IN' } ],
  answers: [],
  authorities: [],
  additionals: []
}

I was expecting it to respond with the known and unknown records as objects and buffers.

But using the dig command:

$ dig -t AXFR zonetransfer.me @nsztm1.digi.ninja

; <<>> DiG 9.16.1-Ubuntu <<>> -t AXFR zonetransfer.me @nsztm1.digi.ninja
;; global options: +cmd
zonetransfer.me.	7200	IN	SOA	nsztm1.digi.ninja. robin.digi.ninja. 2019100801 172800 900 1209600 3600
zonetransfer.me.	300	IN	HINFO	"Casio fx-700G" "Windows XP"
zonetransfer.me.	301	IN	TXT	"google-site-verification=tyP28J7JAUHA9fw2sHXMgcCC0I6XBmmoVi04VlMewxA"
zonetransfer.me.	7200	IN	MX	0 ASPMX.L.GOOGLE.COM.
zonetransfer.me.	7200	IN	MX	10 ALT1.ASPMX.L.GOOGLE.COM.
zonetransfer.me.	7200	IN	MX	10 ALT2.ASPMX.L.GOOGLE.COM.
zonetransfer.me.	7200	IN	MX	20 ASPMX2.GOOGLEMAIL.COM.
zonetransfer.me.	7200	IN	MX	20 ASPMX3.GOOGLEMAIL.COM.
zonetransfer.me.	7200	IN	MX	20 ASPMX4.GOOGLEMAIL.COM.
zonetransfer.me.	7200	IN	MX	20 ASPMX5.GOOGLEMAIL.COM.
zonetransfer.me.	7200	IN	A	5.196.105.14
zonetransfer.me.	7200	IN	NS	nsztm1.digi.ninja.
zonetransfer.me.	7200	IN	NS	nsztm2.digi.ninja.
_acme-challenge.zonetransfer.me. 301 IN	TXT	"6Oa05hbUJ9xSsvYy7pApQvwCUSSGgxvrbdizjePEsZI"
_sip._tcp.zonetransfer.me. 14000 IN	SRV	0 0 5060 www.zonetransfer.me.
14.105.196.5.IN-ADDR.ARPA.zonetransfer.me. 7200	IN PTR www.zonetransfer.me.
asfdbauthdns.zonetransfer.me. 7900 IN	AFSDB	1 asfdbbox.zonetransfer.me.
asfdbbox.zonetransfer.me. 7200	IN	A	127.0.0.1
asfdbvolume.zonetransfer.me. 7800 IN	AFSDB	1 asfdbbox.zonetransfer.me.
canberra-office.zonetransfer.me. 7200 IN A	202.14.81.230
cmdexec.zonetransfer.me. 300	IN	TXT	"; ls"
contact.zonetransfer.me. 2592000 IN	TXT	"Remember to call or email Pippa on +44 123 4567890 or [email protected] when making DNS changes"
dc-office.zonetransfer.me. 7200	IN	A	143.228.181.132
deadbeef.zonetransfer.me. 7201	IN	AAAA	dead:beaf::
dr.zonetransfer.me.	300	IN	LOC	53 20 56.558 N 1 38 33.526 W 0.00m 1m 10000m 10m
DZC.zonetransfer.me.	7200	IN	TXT	"AbCdEfG"
email.zonetransfer.me.	2222	IN	NAPTR	1 1 "P" "E2U+email" "" email.zonetransfer.me.zonetransfer.me.
email.zonetransfer.me.	7200	IN	A	74.125.206.26
Hello.zonetransfer.me.	7200	IN	TXT	"Hi to Josh and all his class"
home.zonetransfer.me.	7200	IN	A	127.0.0.1
Info.zonetransfer.me.	7200	IN	TXT	"ZoneTransfer.me service provided by Robin Wood - [email protected]. See http://digi.ninja/projects/zonetransferme.php for more information."
internal.zonetransfer.me. 300	IN	NS	intns1.zonetransfer.me.
internal.zonetransfer.me. 300	IN	NS	intns2.zonetransfer.me.
intns1.zonetransfer.me.	300	IN	A	81.4.108.41
intns2.zonetransfer.me.	300	IN	A	167.88.42.94
office.zonetransfer.me.	7200	IN	A	4.23.39.254
ipv6actnow.org.zonetransfer.me.	7200 IN	AAAA	2001:67c:2e8:11::c100:1332
owa.zonetransfer.me.	7200	IN	A	207.46.197.32
robinwood.zonetransfer.me. 302	IN	TXT	"Robin Wood"
rp.zonetransfer.me.	321	IN	RP	robin.zonetransfer.me. robinwood.zonetransfer.me.
sip.zonetransfer.me.	3333	IN	NAPTR	2 3 "P" "E2U+sip" "!^.*$!sip:[email protected]!" .
sqli.zonetransfer.me.	300	IN	TXT	"' or 1=1 --"
sshock.zonetransfer.me.	7200	IN	TXT	"() { :]}; echo ShellShocked"
staging.zonetransfer.me. 7200	IN	CNAME	www.sydneyoperahouse.com.
alltcpportsopen.firewall.test.zonetransfer.me. 301 IN A	127.0.0.1
testing.zonetransfer.me. 301	IN	CNAME	www.zonetransfer.me.
vpn.zonetransfer.me.	4000	IN	A	174.36.59.154
www.zonetransfer.me.	7200	IN	A	5.196.105.14
xss.zonetransfer.me.	300	IN	TXT	"'><script>alert('Boo')</script>"
zonetransfer.me.	7200	IN	SOA	nsztm1.digi.ninja. robin.digi.ninja. 2019100801 172800 900 1209600 3600
;; Query time: 228 msec
;; SERVER: 81.4.108.41#53(81.4.108.41)
;; WHEN: mar mar 29 10:59:39 -03 2022
;; XFR size: 50 records (messages 1, bytes 1994)

DoH query id should always be 0 according to RFC 8484

According to RFC 8484 section 4.1, the id should always be 0 to optimize caching:

In order to maximize HTTP cache friendliness, DoH clients using media formats that include the ID field from the DNS message header, such as "application/dns-message", SHOULD use a DNS ID of 0 in every DNS request. HTTP correlates the request and response, thus eliminating the need for the ID in a media type such as "application/dns-message". The use of a varying DNS ID can cause semantically equivalent DNS queries to be cached separately.

id: getRandomInt(1, 65534),

Packet encoding fails: The value of "offset" is out of range.

Experimenting a bit with creating packets using the encode method. The packet below returns an error:
RangeError [ERR_OUT_OF_RANGE]: The value of "offset" is out of range. It must be >= 0 and <= 971. Received 1013

const dnsPacket = require('dns-packet')
const buf = dnsPacket.encode({
  type: 'query',
  id: 23751,
  flags: 256,
  questions: [ { type: 'A', name: 'google.com' } ]
})

Perhaps this is related to the issue here: #60

Stacktrace:
at Object.question.decode (C:\sources\lab\node_modules\dns-packet\index.js:1417:31)
at decodeList (C:\sources\lab\node_modules\dns-packet\index.js:1537:19)
at Object.exports.decode (C:\sources\lab\node_modules\dns-packet\index.js:1477:12)
at IncomingMessage.<anonymous> (C:\sources\lab\buildpacket.js:30:35)

Make coverage checking repeatable

Proposal:

  • add nyc as a dev dependency
  • Add a coverage script to package.json that generates HTML
  • Add coverage outputs to .gitignore

Optional, if desired (trivial for me to do, but more controversial):

  • Install coveralls in .travis.yml
  • On successful run, send output to coveralls.io
  • Add support for auto-refresh of the coverage HTML (makes writing coverage tests fun)

class option

To do version queries, like:

var socket = require("dns-socket")();

socket.query({
  questions: [{
    type: "TXT",
    class: "CH",
    name: "version.bind"
  }]
}, 53, "4.2.2.2", function(err, res) {
  console.log(err, res);
});

I'd imagine support for these classes could be useful:

Class Value
IN 0x0001
CH 0x0003
HS 0x0004
NONE 0x00FE
ANY 0x00FF

parse and generate EDNS0 options

Better support for "popular" EDNS0 options from the IANA list. I have the start of a patch for ECS that allows this:

    options: [ {
      code: 8,
      ip: 'fe80::/64'
    } ]

on parse, you get this:

       options:
        [ { code: 8,
            data: <Buffer 00 02 40 00 fe 80 00 00 00 00 00 00>,
            family: 2,
            sourcePrefixLength: 64,
            scopePrefixLength: 0,
            ip: 'fe80::' } ] } ] }

note that the data field is retained for backward compatibility.

If this approach makes sense and is interesting to the maintainers, I'll send a PR eventually with support for all of the options in the IANA list that have RFCs that are clear enough to make sense to me, including tests.
