
api.distributed.press's People

Contributors

akhileshthite, asotnetworks, benhylau, fauno, jackyzha0, rangermauve, uditvira, yurkowashere


api.distributed.press's Issues

Test publishing of fil.org website

The fil.org website has their code here; we can try to publish the static site with Distributed Press. Instructions from their dev:

  1. Clone the repo and make sure you have Node installed locally (tested on Node v14, v16 seems a little unstable still)
  2. Build with npm ci && npm run generate
  3. This creates a /dist directory in the site root, which can be tarballed for your purposes

(You can also just run it as a served resource from Node on localhost with npm ci && npm run dev, but I imagine you just want the static version)

Support new protocol: Earthstar

Please review Support New Protocol and address how the considerations can be fulfilled by the suggested protocol. In cases where they cannot be fulfilled, please explain why and offer alternatives.

Notes

Earthstar isn't quite ready for this publishing use-case yet, but we're working in this direction! It might be a couple of months until we're ready.

Background on Earthstar

Earthstar is first and foremost a data format for collaboration using signed documents (specification), along with a server and client library for handling that format, written in TypeScript.

You can think of us as "like SSB or Dat, but mutable and deletable instead of append-only, with easier collaboration on data, support for multiple devices per user, and it works in regular browsers". Or "like CouchDB, but with signed documents so anyone can help run servers without requiring full trust."

We've started with HTTP servers and regular browser clients. We'll add direct p2p support later (via hyperswarm). Any way you can get two peers to connect, you can sync their data; we just haven't focused much on the connection part yet.

Earthstar data is stored in workspaces which are like shared folders or Slack groups. Workspaces hold documents which are signed lumps of data, basically files. The data does not form a hash chain or other structure; each document is separate.

The data is hosted on pub server(s). Pub servers have no authority, are interchangeable and redundant. Anyone can run their own pub server to add redundancy to a workspace.

There is no auto-discovery of peers and no global network. Each workspace is a little island. You can only join the workspaces you know about, and you have to know which pub servers are hosting them. Workspace names and pub addresses use "anyone who knows the link" style privacy. So data does not spread very far unless you announce it outside of Earthstar.

What flavor of distributed is this?

Earthstar is best for collaborative tools (wikis, blogs, games, todo lists, etc) but it should also support this publishing use-case.

Without direct p2p support yet, this is more like "Mastodon but with fungible, redundant servers that anyone can host easily". Or like hosting a website mirror: you can easily host a "mirror pub" which syncs with the original pub. "Easily archivable sites".

Users can also sync the entire content down to their device for use offline.

How to operate a node

Pubs vs Apps

The pub server is a small Node.js program that stores its data in an SQLite file. It serves an HTTP API for syncing and a basic HTML view for browsing workspaces. But normally you don't look at pubs directly; you go through apps.

Apps are standalone single-page apps, hosted in the traditional way, that use the Earthstar TypeScript library to reach out to pub server(s) and sync data into the browser's localStorage. They can do dynamic things like render chatrooms or let you write comments on blog entries. An example app is Twodays Crossing, a chat app where messages disappear after 2 days.

To run a pub server, use the earthstar-pub package:

npm install -g earthstar-pub
earthstar-pub --help

Settings: A pub server can be "open" or "closed" (does it accept newly pushed workspaces?). It can also be "discoverable" or not (can visitors learn about existing workspaces?)

URL scheme and domain name system

Mauve and I have been working out a URL scheme here, for use in Agregore browser. This is early work:

https://github.com/earthstar-project/earthstar-fetch/tree/initial-implementation

Usually apps fetch Earthstar data through the pub HTTP API and render it in custom ways. This repo above is a shift to a model where the pub server directly hosts and renders Earthstar documents as webpages.

Publishing process and cost (computation or financial) for content and DNS

Earthstar is not a blockchain and nothing costs money except having some kind of hosting on the internet somewhere.

Each published file must be hashed and signed once, which is very fast. We'd need a little script to read a directory of HTML files and dump them into a local Earthstar pub, and then you'd sync that pub to your production pub servers.

You can update your content as frequently as you like. Clients can even live-stream changes as they're published.

Pub servers need to

  • be publicly routable
  • have HTTPS so browsers won't complain about mixed content
  • have a persistent filesystem to store their SQLite files

We've run them on glitch.com, raspberry pis with PageKite, DigitalOcean droplets, etc.

HTTP gateway support

Right now, our apps are hosted anywhere as static sites, and they reach out to pub servers to get content.

A simpler model is to have the pub server also serve the public-facing website content, especially if it's just HTML and not interactive.

It would be pretty easy to add some URLs to the pub server to do this.

We also need to add a separation between reading and writing permissions, so that you can publish without giving away your write key. This is also fairly simple but it's a few months down our roadmap. We call it "invite-only workspaces" (the invite is for writing; anyone can read if they know the address.)

Thanks!

We're just a couple of people building a non-corporate tool for communities, and we want it to be useful. Feedback is very welcome; please reach out with questions on our Discord.

💜 🌱 Cinnamon

APIv1: Creating a site returns an error

When I try to create a new site I get an error, but nothing is logged on the server.

<!DOCTYPE html>
<html lang="en">
<head> 
<meta charset="utf-8">
<title>Error</title>
</head> 
<body> 
<pre>Cannot POST /v1/sites</pre> 
</body> 
</html> 

Spin up a dev server

Context from email:

  1. hyperdrive-publisher sync. RangerMauve/hyperdrive-publisher#8 Diego has a WIP patch. Mauve and I discussed standing up a dev instance of Distributed Press and giving SSH access to y'all so we can verify easily. (The "official" instance of Distributed Press at api.distributed.press is now serving the production COMPOST magazine and our cooperative's website; that's why we are avoiding testing on it.) It's on me to set up the instance.

So let's spin up a temporary server on Hypha's infrastructure, parallel to api.distributed.press, called api.dev.distributed.press. Everyone who wants SSH access can paste their pubkey in this thread.

Error while removing website

https://0xacab.org/sutty/sutty/-/issues/12909

https://0xacab.org/sutty/distributed-press-api-client/-/blob/antifascist/lib/distributed_press/v1/client/site.rb#L96

When the DP toggle is disabled on Sutty, Sutty sends a removal request to DP but it's returning an error "file does not exist".

Steps to reproduce:

  • Enable DP toggle on Sutty
  • Publish changes
  • Wait for confirmation e-mail, it should contain all URLs for DP on it
  • Disable DP toggle on Sutty
  • Check error on srv2.distributed.press

Integration with Sutty: Publishing changes to Distributed Press

Related to #26

The current APIv0 only accepts a full tarball of the website before publishing it. Adding support for incremental synchronization would make publication faster. distributed-press-api-client streams the tarball so it doesn't have to tarball-then-send.

Some ideas:

  • SSH+Rsync: well-established, well-known, and secure methods for synchronizing directories. Distributed Press could run such a server, and it would only need a public SSH key to authenticate site owners. An authorized_keys file can be generated for each key with the exact command it allows for synchronization (i.e. rrsync), providing rw access to the site directory and nothing else. This can be achieved using the AuthorizedKeysCommand directive (see sshd_config(5)), a pseudo-shell like GitLab and Gitea use, or even statically generated authorized_keys files (prepending options to a public key; see sshd(8) § "AUTHORIZED_KEYS FILE FORMAT").

    Also, SSH private keys can be used to sign authorization tokens on recent OpenSSH versions (I think it's supported since v8).

    Sutty already supports pushing changes to SSH+Rsync servers. This replaced Syncthing for synchronization between servers a few months ago; we had to switch because ST was creating issues with filename UTF-8 normalization and sometimes simply refused to synchronize some files.

  • Neocities-like API: provides synchronization over HTTPS; clients can detect file changes locally by comparing checksums. I'm not sure of the details, but client-side it's like Rsync over HTTPS.

  • In any case, an API endpoint for running hooks could be added, so it can be pinged after a successful synchronization to run publication tasks.
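The authorized_keys idea from the first bullet can be sketched as a single statically generated entry (the paths, key, and user are hypothetical; plain `rrsync <dir>` grants read-write access, `rrsync -ro <dir>` would be read-only):

```
# One entry per site owner: the key may only run rrsync, scoped to that site's directory.
restrict,command="/usr/bin/rrsync /srv/sites/example.org" ssh-ed25519 AAAA... publisher@example.org
```

With this in place, `rsync -az ./public/ dp@distributed.press:` would sync into `/srv/sites/example.org` and nothing else.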

Test video publish and playback

๐ŸŽŸ๏ธ Re-ticketed from: #
๐Ÿ“… Due date: Feb 13, 2021
๐ŸŽฏ Success criteria: Make sure we can publish large video and play back.

Task Summary

Test a 500 MB video file upload via the publish API in Distributed Press. Test playback on different browsers, or determine whether we need to embed video.js.

Test video link in chat.

To Do

  • Test publishing
  • Check WWW, IPFS, Hypercore
  • Check playback on desktop and mobile browsers

Support new protocol: Mirror distributed.press on a decentralized Handshake (HNS) name

How to operate a node

https://github.com/handshake-org/hsd or https://github.com/handshake-org/hnsd
...

URL scheme and domain name system

Instead of renting the "distributed" subdomain from the owner of the ".press" top-level domain, Handshake names are decentralized top-level domains, which means Distributed Press could fully control its own domain names. One caveat is that most major browsers do not yet natively support Handshake, so visitors will need to use a Handshake-compatible browser like Puma Browser, a gateway like HNS.to, a public resolver like HDNS.io, an application that resolves Handshake like NextDNS, or a Handshake node to access Handshake domains.

One option is to use a Handshake name as a "backup" for distributed.press, so that in the event where the distributed.press domain name is seized, its contents can still be accessible via the Handshake name.

docs.namebase.io will probably be helpful for this effort.
...

Publishing process and cost (computation or financial) for content and DNS

Handshake works identically to the existing DNS system, so there shouldn't be any issues here.
...

HTTP gateway support

I don't totally understand what's being addressed here but it should be possible.
...

PUTing tarball takes a long time

Synchronizing a 44MB site takes ~5m, I'm attaching logs.

CMS side:

{"@timestamp":"2023-02-08 20:10:53 +0000","@version":1,"content_length":"718","http_method":"GET","message":"[HTTParty] 200 \"GET /v1/sites/compost.testing.sutty.nl\" 718 ","path":"/v1/sites/compost.testing.sutty.nl","response_code":200,"severity":"info","tags":["HTTParty"]}
{"@timestamp":"2023-02-08 20:15:45 +0000","@version":1,"content_length":"0","http_method":"PUT","message":"[HTTParty] 200 \"PUT /v1/sites/compost.testing.sutty.nl\" 0 ","path":"/v1/sites/compost.testing.sutty.nl","response_code":200,"severity":"info","tags":["HTTParty"]}

DP side:

{"time":"15:10:58","reqId":"req-2p","req":{"method":"GET","url":"/v1/sites/compost.testing.sutty.nl"},"msg":"incoming request"}
{"time":"15:10:58","reqId":"req-2p","res":{"statusCode":200},"responseTime":3.6780920028686523,"msg":"request completed"}

{"time":"15:11:59","reqId":"req-2q","req":{"method":"PUT","url":"/v1/sites/compost.testing.sutty.nl"},"msg":"incoming request"}
{"time":"15:11:59","reqId":"req-2q","msg":"Downloading tarfile for site"}
{"time":"15:11:59","reqId":"req-2q","msg":"Processing tarball: /tmp/4a1dad813f85a909.gz"}
{"time":"15:11:59","reqId":"req-2q","msg":"Deleting old files"}
{"time":"15:11:59","reqId":"req-2q","msg":"Extracting tarball"}
{"time":"15:12:01","reqId":"req-2q","msg":"Performing sync with site"}
{"time":"15:12:01","reqId":"req-2q","msg":"[hyper] Sync Start"}
{"time":"15:12:01","reqId":"req-2q","msg":"[ipfs] Sync Start"}
{"time":"15:12:05","reqId":"req-2q","msg":"[hyper] Published: hyper://dc17wdupgqk75men4p8ywtimeq7fbajutnsa3j1997ni1s6in6py/"}

{"time":"15:15:46","reqId":"req-2q","msg":"[ipfs] Sync start"}
{"time":"15:15:46","reqId":"req-2q","msg":"[ipfs] Generated key: k51qzi5uqu5dmgqcsmmd4y5g717crmhzep30z88ic3cgc1jy32ufxpj0j0ybj3"}
{"time":"15:15:46","reqId":"req-2q","msg":"[ipfs] Got root CID: bafybeig367my34zdt77krv27zhliocol6iqpooryxkekptcdv2vzjxzs6m, performing IPNS publish (this may take a while)..."}
{"time":"15:15:49","reqId":"req-2q","msg":"[ipfs] Published to IPFS under k51qzi5uqu5dmgqcsmmd4y5g717crmhzep30z88ic3cgc1jy32ufxpj0j0ybj3: /ipfs/bafybeig367my34zdt77krv27zhliocol6iqpooryxkekptcdv2vzjxzs6m"}
{"time":"15:15:49","reqId":"req-2q","msg":"Finished sync"}
{"time":"15:15:49","reqId":"req-2q","res":{"statusCode":200},"responseTime":230262.7318353653,"msg":"request completed"}

{"time":"15:17:08","reqId":"req-2r","req":{"method":"GET","url":"/v1/sites/compost.testing.sutty.nl"},"msg":"incoming request"}
{"time":"15:17:08","reqId":"req-2r","res":{"statusCode":200},"responseTime":3.9629344940185547,"msg":"request completed"}

I've grouped log lines by time proximity, converted to readable timestamps and removed some unrelated info.

req-2p is the client sending a request to update site local data (links, enabled protocols), req-2q is started immediately after req-2p but is logged a minute later when the tarball finishes being received. All log lines are sent together when the tarball finishes being extracted, so I think there's something blocking there.

Then nothing happens for some minutes; htop shows high CPU usage from the ipfs daemon and node processes, then a new block of log lines appears and the request finishes. I think this block is intended to be logged at different times too, and the ipfs sync starts twice!

I'm not sure what req-2r is! I'm only making two requests.

Support X-Ipfs-Path headers for static hosting

IPFS Companion can redirect traditional HTTP requests to IPFS if the x-ipfs-path response header is provided.

Additionally, some browser vendors like Brave may display an Open using IPFS button on the address bar when this header is returned for the root document in the current tab.

source

I think this can be done statically on Nginx config with something like this:

server {
  add_header "X-Ipfs-Path" "/ipns/$ssl_server_name";
}

Social API design

Motivations & goals

The Social API allows sharing of social messages across fragmented DWeb networks using different protocols. The design relies on Webmention and Microformats2.

Traditionally, websites employing Webmention either have to run a backend process or defer to a centralized third-party service to process Webmentions. We are exploring how we can operate an ephemeral service to process Webmentions and utilize the DWeb as a shared content store. Being able to "like" or "repost" immutable content also opens up a lot of longer term possibilities, such as versioned cross-referencing or linking monetization with social interactions.

This also solves one of the great shortcomings of the DWeb: fragmentation of social spaces. Fragmentation limits authors' abilities to build large audiences or capture useful metrics about the success of their work. With Webmention and direct payments like Web Monetization to facilitate a social and feedback layer, we can build strong disintermediated and federated networks with rich feedback loops.

Design proposal

The Distributed Press server's Social API HTTP endpoint will receive Webmentions from any website, then store a copy on our web server, pinned onto a Hypercore key and an IPNS DNSLink. The server essentially ingests social messages on the HTTP endpoint and seeds them on DWeb networks.
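For reference, a Webmention is just a form-encoded POST naming a source and a target URL (the endpoint path below is hypothetical; the source/target form fields are from the Webmention spec):

```
POST /v1/social/webmention HTTP/1.1
Host: api.distributed.press
Content-Type: application/x-www-form-urlencoded

source=https://commenter.example/reply-to-foreword&target=https://one.compost.digital/foreword/
```

The server would then fetch `source`, verify it actually links to `target`, and store the resulting social message.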

The tentative design is to store Webmentions as structured files on disk, in a hierarchy that makes them easy to reconcile with the posts they belong to. By relying on files over complex database servers, we'll ensure that the content can be stored and replicated on IPFS and Hypercore. (Review unwalled.garden for a possible schema.)
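One hypothetical on-disk hierarchy, keyed by site and post path so each Webmention file sits next to the post it references (the source doesn't fix a layout; this is illustrative):

```
webmentions/
  one.compost.digital/
    foreword/
      2021-05-01T12-00-00Z-<hash-of-source-url>.json   # one Webmention per file
      2021-05-02T09-30-00Z-<hash-of-source-url>.json
```

Flat files like these can be added to IPFS and Hypercore as-is, and a client can list a post's directory to render its collected mentions.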

What can we do with it?

Each post in the published static website essentially has an associated "Dropbox" seeded on IPFS and Hypercore, storing Webmentions-based social messages. When a reader loads up this website, on WWW or DWeb, they can also see the Webmentions collected for each post.

This "DWeb Dropbox of social messages" bridge conversations together, without centralizing them in a single database. Clients can also choose not to broadcast their discussions (e.g. a thread on Secure Scuttlebutt is meant to be semi-private).
We will implement the standard interactions enabled by Webmentions (like, reply, quotation, citation, share) with basic Microformats2 vocabularies.

With our Social API, DWeb publications may receive existing Webmentions from the IndieWeb community, and also create social connections and references to that existing content base.

Bump Pinned Version in Ansible Script

Hey folks!

I recently deployed my own instance of distributed.press and found that because the version of Distributed Press is pinned to v1.0.0 in Ansible, hypercore doesn't work out of the box, instead serving a blank index page! 😅

I wasn't sure if there was a reason for this, so I haven't opened a PR, but I'm more than happy to.

Content API design

Motivations & goals

The Content API will serve metadata about the website content published by the Distributed Press server. For example, what is the list of URLs at which we can access one.compost.digital/foreword/? In this case, they are:

  • https://one.compost.digital/foreword/
  • ipns://one.compost.digital/foreword/
  • hyper://one.compost.digital/foreword/

Or what are the historic versions of this page? For example:

These should be served by the Content API.
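The source doesn't define a schema, but a hypothetical Content API response for the example above might look like:

```json
{
  "path": "/foreword/",
  "links": {
    "http": "https://one.compost.digital/foreword/",
    "ipns": "ipns://one.compost.digital/foreword/",
    "hyper": "hyper://one.compost.digital/foreword/"
  },
  "versions": [
    { "version": "1", "ipfs": "/ipfs/<cid-of-v1>" }
  ]
}
```

The `versions` array is where historic snapshots of the page could be referenced.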

In addition, to facilitate sharing of content into distributed social networks, the API should serve alternative formats of content. For example, if I am reading something on Beaker, I should be able to request a markdown version that I can share to Scuttlebutt or Aether (both markdown based).

Full text can be made available by multiple formats, signed by authors, editors, and third-parties (e.g. truth verification groups).

Design proposal

See:

Identity and signature schemes should be multi-provider.

What can we do with it?

  • Locate content on the DWeb across different protocols
  • Reference historic versions of content on the DWeb
  • Share full text of content in different formats to distributed social networks where communities gather and discussions can take place
  • Sign and verify full text of content in different formats (e.g. a markdown blob reproduced to a social network can be signature verified)

Cronjob shouldn't have output

The staging server logs are full of emails sent to root (plus postfix complaining about the hostname). Inspecting /var/mail/root shows it's full of "All done" emails from cron.

Not sure if this is going to be kept for APIv1, but it could just exit 0 and be verbose only on errors, so you don't get as many useless emails; or run it with chronic, which swallows output if everything goes well.
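A sketch of the chronic approach (chronic comes from moreutils; the script path and schedule are hypothetical):

```
# /etc/cron.d/dp-pin — chronic only emits output (and thus cron only mails root)
# when the wrapped command exits non-zero.
*/15 * * * * root chronic /usr/local/bin/dp-pin-sync
```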

Optionally support the host option

To run swagger-codegen-cli from Docker I needed #31 so the server could bind to any address (::). I did this in JS:

diff --git a/v1/index.ts b/v1/index.ts
index 3eba15a..4f48053 100644
--- a/v1/index.ts
+++ b/v1/index.ts
@@ -1,8 +1,9 @@
 import apiBuilder from './api/index.js'
 
 const PORT = Number(process.env.PORT ?? '8080')
+const HOST = process.env.HOST ?? '::1';
 const server = await apiBuilder({ useLogging: true, useSwagger: true, usePrometheus: true })
-server.listen({ port: PORT }, (err, _address) => {
+server.listen({ port: PORT, host: HOST }, (err, _address) => {
   if (err != null) {
     server.log.error(err)
     process.exit(1)

But it's not the same as not sending the host option.
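One way to omit the option entirely when HOST is unset is a conditional spread, so the default binding behaviour is preserved; a sketch, not the project's code:

```typescript
// Only include `host` in the listen options when the HOST env var is set;
// otherwise the options object has no `host` key at all.
const PORT = Number(process.env.PORT ?? '8080')
const HOST = process.env.HOST // undefined when not provided

const listenOpts: { port: number, host?: string } = {
  port: PORT,
  ...(HOST != null ? { host: HOST } : {})
}
```

Passing `listenOpts` to `server.listen()` then behaves identically to today's code when HOST is unset.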

Upload site content with cURL

I am attempting to upload site content to a running DP instance with cURL:

curl -X POST -H "Authorization: Bearer TOKEN" -F 'file=@public.tar.gz' https://my-dp-instance.com/v1/sites/my-site.com

This fails with:

{
  "statusCode":400,
  "error":"Bad Request",
  "message":"body must be object"
}

The docs say that the PUT body must be of type multipart/form-data.

I am fairly certain that my DP instance is configured correctly because I can upload the same public.tar.gz file from within a Node 16 REPL, as in this example.

Any advice is appreciated!!

Thank you,

Joseph

Don't run public gateway

Set Gateway.NoFetch to true.

When set to true, the gateway will only serve content already in the local repo and will not fetch files from the network.

This prevents the IPFS gateway from being abused as a public general gateway.
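On a Kubo (go-ipfs) node this is a one-line config change; the daemon needs a restart to pick it up:

```
ipfs config --json Gateway.NoFetch true
```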

Certbot shouldn't contain all custom domains

Right now, deploying a custom domain involves adding it to a common certificate, but this has some issues:

  • IIRC SubjectAltName is limited to 50 domains, so eventually a single cert for everything will stop working
  • The cert publishes the site names for all sites hosted at DP
  • When you remove domains from the inventory, certbot generates a new certificate under a new path and borks the configuration files

IMO, nginx should have a catch-all configuration for static sites (I can provide an example conf) and api.distributed.press should issue new certificates for individual domains on site creation through a publisher token, so there's no need to run a playbook to set custom domains.

But at least the playbook could issue individual certs, so adding and removing custom domains is easier and certbot doesn't do things that are difficult to debug. (I personally dislike how it modifies your nginx conf files; last time I used it, it "fixed" their indentation.)
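A minimal sketch of the catch-all idea, assuming each site lives in a per-domain directory (the paths are hypothetical, and TLS is omitted for brevity):

```nginx
# One server block serves every hosted site, keyed on the Host header,
# instead of one generated block per custom domain.
server {
  listen 80 default_server;
  server_name _;
  root /srv/sites/$host;
  try_files $uri $uri/ =404;
}
```

Per-domain certificates issued at site-creation time would then slot into this without touching the nginx config again.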

Integration with Sutty: DNS and DNSLinks

Hi! I'm opening this in a two part proposal.

Right now, Distributed Press requires domains to be delegated and managed by it, so it can keep DNSLinks and other records updated. In Sutty we have a mixed bag for DNS management and in many cases it won't be possible to delegate them to Distributed Press. I need to write about our DNS in more detail, but basically each Sutty node is its own nameserver. If a node is down, its information goes down with it, so we don't have to make changes to other nodes.

So, if Distributed Press could act as a DNS authoritative server for the _dnslink zone of each of the domains it hosts:

  • Authoritative name servers (Sutty's or otherwise) for an example.org domain would publish _dnslink NS nsX.distributed.press records, where X is the nameserver number. This delegates control of _dnslink subdomain to Distributed Press, so it doesn't need full control of the DNS zone, only the _dnslink subdomain.

  • The Distributed Press backend would become a DNS server serving _dnslink TXT dnslink=... records. These could be generated on the fly via a key-value map of domain to latest CID.

  • TTLs could be kept low, i.e. 60 seconds.

  • Adding support for AXFR/IXFR to the backend would allow keeping any number of nsX.distributed.press nameservers. The Distributed Press API could even be served privately, and more established DNS nameservers (like Knot or NSD) could become public replicators, with rate limits and other security features.

This not only makes DNSLink lookup more efficient by only returning relevant TXT records but enables you to improve the security of an automated setup or delegate control over your DNSLink records to a third party without giving away complete control over the original DNS zone. source (June, 2022)
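The delegation described above could look like this in zone-file form (nameserver names, TTLs, and the CID are illustrative):

```
; In example.org's authoritative zone (Sutty's or otherwise):
_dnslink.example.org.  3600  IN  NS   ns1.distributed.press.
_dnslink.example.org.  3600  IN  NS   ns2.distributed.press.

; Answer generated on the fly by the Distributed Press nameserver, TTL kept low:
_dnslink.example.org.  60    IN  TXT  "dnslink=/ipfs/<latest-cid>"
```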

Questions

  • How would it play with other protocols? I tried to find documentation on Hypercore but couldn't find anything, how does hyper://example.org work?

Nextra Folder Structure isn't served properly

Nextra supports having index pages for folders. However, the structure they use is slightly different from most sites. Instead of having folder/index.md, Nextra forces you to name it folder.md

See more: https://nextra.site/docs/docs-theme/page-configuration#folders-with-index-page

I suspect this may be due to the NGINX config. This causes URLs like https://docs.distributed.press/deployment to fail to load even though the files are present (https://github.com/hyphacoop/docs.distributed.press/blob/main/pages/deployment.mdx)

Security: Copied symlinks are not mangled

Payloads for sites can contain relative symlinks that point to sensitive content on the host machine. If a file in the payload is a symlink, the process of syncing to protocols may follow it and upload the target. The danger is that the symlink's target may be outside the actual uploaded folder.

Reproducing

Steps to reproduce:

  1. Create a new empty folder with a pwned.txt file and symlink it to ../../../../../../api.distributed.press/README.md
  2. Create a new site using Distributed Press and upload the content containing the symlink
  3. Wait for publish
  4. Visit the IPFS gateway link
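The payload from steps 1–2 can be sketched as follows (the symlink target is taken from the issue; directory names are arbitrary):

```shell
# Build a payload containing a symlink that climbs out of the uploaded folder.
mkdir -p payload
ln -sf ../../../../../../api.distributed.press/README.md payload/pwned.txt
# Tarball it the way a site upload would; tar stores the symlink itself.
tar -czf public.tar.gz -C payload .
```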

Proof-of-concept (courtesy of fauno): https://yolandia-sutty-nl.ipns.ipfs.hypha.coop/pwned.txt

This is fine on Hyper gateway as I don't think it follows symlinks (https://yolandia-sutty-nl.hyper.hypha.coop/pwned.txt) but our IPFS gateway does (https://yolandia-sutty-nl.ipns.ipfs.hypha.coop/pwned.txt)

As an additional note, creating a recursive symlink may also cause Distributed Press to hang when uploading (this may have caused the observed 504s).

Solution

rsync has an option to mangle symlinks, we should probably do something similar: https://www.man7.org/linux/man-pages/man1/rsync.1.html (search for --munge-links)
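Short of full --munge-links-style mangling, a minimal sketch of a guard (a hypothetical helper, not the project's API) that flags symlinks whose resolved target escapes the extraction root:

```typescript
import { resolve, dirname, sep } from 'node:path'

// Given the extraction root, a symlink's path relative to that root, and the
// symlink's target string, decide whether following the link leaves the root.
function symlinkEscapesRoot (root: string, linkPath: string, target: string): boolean {
  const resolvedRoot = resolve(root)
  // Relative targets resolve against the directory containing the symlink.
  const linkDir = dirname(resolve(resolvedRoot, linkPath))
  const resolvedTarget = resolve(linkDir, target)
  return resolvedTarget !== resolvedRoot &&
    !resolvedTarget.startsWith(resolvedRoot + sep)
}
```

During tarball extraction, entries for which this returns true could be skipped or rewritten instead of being synced to IPFS.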

Corrupt dat-store

Found that dat-store got corrupt after several months, even though we are only pinning several text files of API responses every 15 min. To recover, we had to:

  1. Wipe ~/.local/share/dat-store-nodejs
  2. Delete all ~/.distributed-press/data/projects/*/private/dat-seed-*

This is related to existing hypercore problems using our current stack.

After performing the above, this link works again.

Nameserver should respond with A and AAAA records for websites

Since GoDaddy doesn't support ALIAS records, we had to delegate a domain name to dns.he.net to create an ALIAS to api.distributed.press (so it keeps IP addresses in sync) and an NS record on _dnslink for P2P protocols.

If the DP nameserver could craft answers to A/AAAA queries for websites, pointing to its own IP address, it would be easy to just delegate the domain to DP without having to configure a third-party nameserver.

Right now it seems to ignore the query type; I ask for A records and it returns TXT:

drill a custom.domain @api.distributed.press
;; ->>HEADER<<- opcode: QUERY, rcode: NOERROR, id: 57997
;; flags: qr rd ; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0 
;; QUESTION SECTION:
;; custom.domain. IN      A

;; ANSWER SECTION:
custom.domain.    60      IN      TXT     "dnslink=/ipns/XXXX/"
custom.domain.    60      IN      TXT     "dnslink=/hyper/XXXX/"

;; AUTHORITY SECTION:

;; ADDITIONAL SECTION:

;; Query time: 179 msec
;; SERVER: 198.50.215.13
;; WHEN: Wed Oct 25 19:04:32 2023
;; MSG SIZE  rcvd: 222
