
tsnsrv's People

Contributors

andrew-d, antifuchs, cdzombak, combine-prs-app-for-boinkor-net[bot], dependabot[bot], efarrer, github-actions[bot], natsukium, patka-123, peterkeen, supersandro2000


tsnsrv's Issues

Only logs the first-time auth URL at verbose intensity

When starting a new tsnsrv service under a new name (and without a preauth key), you have to authenticate to Tailscale before the service gets added to your tailnet. The tsnet library logs an authentication URL so an administrator can go and do that, but it logs that URL at verbose intensity only.

This has been reported to me twice now; I think it's a problem with the tsnet library, but maybe we can do something about it.

Adding a licence

It would be nice if the project had a licence, so we know who can use it, and in what setting.

Add a non-scratch-based Docker image?

TL;DR: The current Docker image is based on scratch and cannot be used directly to proxy connections to another container. A solution could be to publish a second set of container images that ship the tsnsrv binary along with a minimal OS.

Hey 👋,
first of all, really cool project, and I am trying to integrate it into my own lab now! However, I came across a small issue (which may well be due to me not knowing how to use the tool properly). For context, I am using podman and have set up a pod for each service I intend to run, along with a tsnsrv instance.

Issue

Since the base Docker image created in CI is based on scratch, running it directly fails in multiple ways. Running it with

podman run \
    --name test \
    -e TS_AUTHKEY=<mykey> \
    tsnsrv:main tsnsrv -name test http://127.0.0.1:80

results in tsnet not being able to connect with the following error:

2023/09/30 19:52:01 could not connect to tailnet: tsnet.Up: tsnet: neither $XDG_CONFIG_HOME nor $HOME are defined

This error can be remedied by adding a "fake" $HOME, e.g. -e HOME=/root, which allows tsnet to establish a connection and show up in the Tailscale UI. But hitting the actual endpoint in the tailnet then results in an error such as

2023/09/30 20:05:56 http: TLS handshake error from 100.88.179.118:37224: 500 Internal Server Error: acme.GetReg: Get "https://acme-v02.api.letsencrypt.org/directory": tls: failed to verify certificate: x509: certificate signed by unknown authority

This bug is already documented (see tailscale/tailscale/issues/9437) and its solution also applies here.
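Until better images exist, both failure modes can also be worked around at run time: Go's TLS stack honors the $SSL_CERT_FILE/$SSL_CERT_DIR environment variables for root-CA lookup, so the host's CA bundle can be mounted into the scratch container. A sketch (the certificate paths assume a typical Debian/Ubuntu host; adjust for your distribution):

```shell
podman run \
    --name test \
    -e TS_AUTHKEY=<mykey> \
    -e HOME=/tmp \
    -e SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt \
    -v /etc/ssl/certs:/etc/ssl/certs:ro \
    tsnsrv:main tsnsrv -name test http://127.0.0.1:80
```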

Solution

What I ended up doing was creating a new image based on Ubuntu (it could just as well be Alpine or anything else) which contains ca-certificates and properly sets $HOME; that allows me to use the image as I want in my setup.
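For reference, a minimal Dockerfile along these lines might look as follows. This is an untested sketch: the location of the binary inside the upstream scratch image (`/tsnsrv`) and the base image tag are assumptions.

```dockerfile
# Hypothetical sketch: wrap the scratch-based tsnsrv image in a minimal OS.
FROM ghcr.io/boinkor-net/tsnsrv:main AS upstream

FROM ubuntu:24.04
# Root CAs, so ACME/Let's Encrypt certificate verification works.
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/*
# tsnet needs $HOME (or $XDG_CONFIG_HOME) for its default state directory.
ENV HOME=/root
# The binary's path inside the upstream image is an assumption.
COPY --from=upstream /tsnsrv /usr/local/bin/tsnsrv
ENTRYPOINT ["/usr/local/bin/tsnsrv"]
```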

I was wondering whether you would be open to adding a second set of container images that make tsnsrv usable directly in this way? I understand the benefit of having a scratch-based Docker image of just the binary, but I am personally not a fan of having to create images that combine multiple services. I would be happy to tackle this myself, but wanted to hear your opinion first.

doesn't seem to compile?

./main.go:245:11: s.mux undefined (type *validTailnetSrv has no field or method mux)

I looked through the code, and I can't imagine how it would compile. I also couldn't figure out which commit the docker container was compiled from, to try to reverse-engineer when this happened, but I glanced through the recent commit history and it seems like this has been there a while...

error: A definition for option `services.tsnsrv.defaults.certificateFile' is not of type `path'

I suspect #106 broke non-custom-TLS configuration:

warning: Git tree '/Users/asf/home' is dirty
copying path '/nix/store/kii30vrsz832yg94v9w2imlwn8pp69xq-source' from 'https://cache.nixos.org'...
error:
       … while calling the 'head' builtin

         at /nix/store/4y2xyaq03yjkzh2rp7yqcwikznns72pl-source/lib/attrsets.nix:922:11:

          921|         || pred here (elemAt values 1) (head values) then
          922|           head values
             |           ^
          923|         else

       … while evaluating the attribute 'value'

         at /nix/store/4y2xyaq03yjkzh2rp7yqcwikznns72pl-source/lib/modules.nix:807:9:

          806|     in warnDeprecation opt //
          807|       { value = builtins.addErrorContext "while evaluating the option `${showOption loc}':" value;
             |         ^
          808|         inherit (res.defsFinal') highestPrio;

       (stack trace truncated; use '--show-trace' to show the full trace)

       error: A definition for option `services.tsnsrv.defaults.certificateFile' is not of type `path'. Definition values:
       - In `/nix/store/42f0fibk91ld4pfg1irad7x8rpvwj0ja-source/flake.nix#nixosModules.default': ""

I think we should have tests for those nixos modules... but let's fix this first.

Docker image cannot access other containers by name

I have been using tsnsrv inside a docker-compose stack to serve private services. An example is this stack, which I spun up a month or so ago and has been running since. Note that tsnsrv accesses the registry-ui container by name.

Trying to start a new service following the same pattern this week, using the pre-built tsnsrv main image, I'm seeing connections to tsnsrv hang, and the proxied service does not seem to be receiving any traffic.

Here's the docker-compose stack that's not working:

---
version: "3"
services:
  aptly:
    container_name: aptly
    hostname: aptly
    image: urpylka/aptly:latest
    restart: unless-stopped
    volumes:
      - ./aptly.conf:/etc/aptly.conf:ro
      - /srv/dev-disk-by-label-storage/aptly:/opt/aptly

  aptly-tsnsrv:
    container_name: "aptly-tsnsrv"
    hostname: "aptly.tailnet-003a.ts.net"
    image: "ghcr.io/boinkor-net/tsnsrv:main"
    command: ["-name", "aptly", "http://aptly:80"]
    environment:
      - "TS_AUTHKEY=${APTLY_TS_AUTHKEY}"
      - "TS_STATE_DIR=/var/lib/tailscale"
    volumes:
      - /opt/docker/data/aptly/lib_tailscale:/var/lib/tailscale
    cap_add:
      - NET_ADMIN
      - NET_RAW
    restart: "unless-stopped"

tsnsrv logs the following when I make a request to it:

2023/10/14 16:58:32 wgengine: idle peer [BTi8b] now active, reconfiguring WireGuard
2023/10/14 16:58:32 wgengine: Reconfig: configuring userspace WireGuard config (with 1/9 peers)
2023/10/14 16:58:32 wg: [v2] [BTi8b] - UAPI: Created
2023/10/14 16:58:32 wg: [v2] [BTi8b] - UAPI: Updating endpoint
2023/10/14 16:58:32 wg: [v2] [BTi8b] - UAPI: Removing all allowedips
2023/10/14 16:58:32 wg: [v2] [BTi8b] - UAPI: Adding allowedip
2023/10/14 16:58:32 wg: [v2] [BTi8b] - UAPI: Adding allowedip
2023/10/14 16:58:32 wg: [v2] [BTi8b] - UAPI: Updating persistent keepalive interval
2023/10/14 16:58:32 wg: [v2] [BTi8b] - Starting
2023/10/14 16:58:32 wg: [v2] [BTi8b] - Received handshake initiation
2023/10/14 16:58:32 wg: [v2] [BTi8b] - Sending handshake response
2023/10/14 16:58:32 [v1] magicsock: derp route for [BTi8b] set to derp-12 (shared home)
2023/10/14 16:58:32 [v1] peer keys: [BTi8b]
2023/10/14 16:58:32 [v1] v1.50.1-ERR-BuildInfo peers: 148/92
2023/10/14 16:58:32 magicsock: disco: node [BTi8b] d:40d6f5b1c01fdd5c now using 192.168.1.10:41641
2023/10/14 16:58:32 Accept: TCP{100.85.220.69:65072 > 100.82.127.34:443} 64 tcp ok
2023/10/14 16:58:32 [unexpected] localbackend: got TCP conn without TCP config for port 443; from 100.85.220.69:65072
2023/10/14 16:58:32 Accept: TCP{100.85.220.69:65072 > 100.82.127.34:443} 52 tcp non-syn
2023/10/14 16:58:32 Accept: TCP{100.85.220.69:65072 > 100.82.127.34:443} 387 tcp non-syn
2023/10/14 16:58:34 netcheck: [v1] report: udp=true v6=false v6os=false mapvarydest=false hair=false portmap= v4a=24.247.165.150:54236 derp=12 derpdist=1v4:44ms,12v4:27ms,21v4:35ms
2023/10/14 16:58:43 wg: [v2] [BTi8b] - Receiving keepalive packet
2023/10/14 16:58:58 netcheck: [v1] report: udp=true v6=false v6os=false mapvarydest=false hair=false portmap= v4a=24.247.165.150:54236 derp=12 derpdist=1v4:74ms,12v4:39ms,21v4:74ms
2023/10/14 16:59:05 Accept: TCP{100.85.220.69:65072 > 100.82.127.34:443} 52 tcp non-syn
2023/10/14 16:59:05 WARN proxy error error="context canceled"
2023/10/14 16:59:05 Accept: TCP{100.85.220.69:65072 > 100.82.127.34:443} 40 tcp non-syn
2023/10/14 16:59:05 Accept: TCP{100.85.220.69:65072 > 100.82.127.34:443} 40 tcp non-syn

(WARN proxy error error="context canceled" occurs when I kill the curl process making the request.)


If I expose the aptly port via 8003:80, and change the aptly-tsnsrv definition to include the following, then tsnsrv successfully connects to the proxied service and behaves as expected:

    command: ["-name", "aptly", "http://host.docker.internal:8003"]
    extra_hosts:
      - "host.docker.internal:host-gateway"

tsnsrv logs the following for this (successful) request:

2023/10/14 17:03:46 wgengine: idle peer [BTi8b] now active, reconfiguring WireGuard
2023/10/14 17:03:46 wgengine: Reconfig: configuring userspace WireGuard config (with 1/9 peers)
2023/10/14 17:03:46 wg: [v2] [BTi8b] - UAPI: Created
2023/10/14 17:03:46 wg: [v2] [BTi8b] - UAPI: Updating endpoint
2023/10/14 17:03:46 wg: [v2] [BTi8b] - UAPI: Removing all allowedips
2023/10/14 17:03:46 wg: [v2] [BTi8b] - UAPI: Adding allowedip
2023/10/14 17:03:46 wg: [v2] [BTi8b] - UAPI: Adding allowedip
2023/10/14 17:03:46 wg: [v2] [BTi8b] - UAPI: Updating persistent keepalive interval
2023/10/14 17:03:46 wg: [v2] [BTi8b] - Starting
2023/10/14 17:03:46 [v1] peer keys: [BTi8b]
2023/10/14 17:03:46 [v1] v1.50.1-ERR-BuildInfo peers: 0/0
2023/10/14 17:03:46 wg: [v2] [BTi8b] - Received handshake initiation
2023/10/14 17:03:46 wg: [v2] [BTi8b] - Sending handshake response
2023/10/14 17:03:46 [v1] magicsock: derp route for [BTi8b] set to derp-12 (shared home)
2023/10/14 17:03:46 magicsock: disco: node [BTi8b] d:40d6f5b1c01fdd5c now using 192.168.1.10:41641
2023/10/14 17:03:46 Accept: TCP{100.85.220.69:49785 > 100.82.127.34:443} 64 tcp ok
2023/10/14 17:03:46 [unexpected] localbackend: got TCP conn without TCP config for port 443; from 100.85.220.69:49785
2023/10/14 17:03:46 Accept: TCP{100.85.220.69:49785 > 100.82.127.34:443} 52 tcp non-syn
2023/10/14 17:03:46 Accept: TCP{100.85.220.69:49785 > 100.82.127.34:443} 387 tcp non-syn
2023/10/14 17:03:47 INFO served original=/api rewritten=http://host.docker.internal:8003/api [email protected] origin_node=XX.tailnet-003a.ts.net. duration=1.919322ms http_status=301

I'm not knowledgeable about how the images are being built after #57, but: is it possible the new build process results in a binary that's unable to use DNS to look up the proxied service by name?

Automatic tests for the nixos module

We currently support an increasing number of ways to use the nixos module, and it's getting a bit unwieldy to test the different paths. Having automatic tests for the various variants would be great.

Ship a nixos module for registering oci-container sidecars

Since tsnsrv is available as a docker image, it would be great if we could integrate that in nixos as well! (Can you tell I run a few oci-containers in a nixos host that would benefit from a sidecar?)

This requires a few steps:

  • An automated step, similar to the SRI/narhash update workflow, that pulls the image with nix-prefetch-docker and updates a pinned docker reference, defining a dockerContainers attribute in flake.nix that builds an image containing tsnsrv.
  • The actual module. It should take:
    • The regular tsnsrv customization options
    • A container to sidecar the tsnsrv container to (i.e. share network namespace with)
    • An optional docker image spec, defaulting to what we generate above.

Brittle behavior when starting subsequent tsnsrv instances.

When a user runs tsnsrv as follows:

TS_AUTHKEY=... tsnsrv --name first http://localhost:9090

Then everything works wonderfully.

However, if I try to start a second instance:

TS_AUTHKEY=... tsnsrv --name second http://localhost:9090

Then the second instance will fail, and the Tailscale console will report the error "Duplicate node key" under the Machines tab.

The conflict is caused by tsnsrv always using the .config/tsnet-tsnsrv directory (on Linux) to store configuration. This can be avoided by passing -stateDir or TS_STATE_DIR for the second instance, but the failure is surprising and will likely lead to confusion. A better solution would be to use a unique directory name (based on the --name parameter) for each tsnsrv instance, such as .config/tsnet-tsnsrv-<name>. This would eliminate the surprise and confusion for users.

The cost of this proposal is that it is a backwards-incompatible change: existing users would need to recreate their instances (i.e., use a new key). If desired, tsnsrv could warn users that have a .config/tsnet-tsnsrv directory that they should delete it and recreate their instances.
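With the current behavior, the workaround is to give each instance its own state directory explicitly, using the -stateDir flag (or TS_STATE_DIR variable) mentioned above. A sketch, mirroring the proposed per-name naming scheme:

```shell
# Workaround today: one explicit state directory per instance, so the two
# tsnsrv processes don't collide on .config/tsnet-tsnsrv.
TS_AUTHKEY=... tsnsrv --name first  --stateDir ~/.config/tsnet-tsnsrv-first  http://localhost:9090 &
TS_AUTHKEY=... tsnsrv --name second --stateDir ~/.config/tsnet-tsnsrv-second http://localhost:9090 &
```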

problem installing using "go install"

I get the following error:

go install github.com/boinkor-net/tsnsrv@latest
go: downloading github.com/boinkor-net/tsnsrv v0.0.0-20231225155417-6fcdde841ca9
go: github.com/boinkor-net/tsnsrv@latest: github.com/boinkor-net/[email protected]: parsing go.mod:
	module declares its path as: github.com/antifuchs/tsnsrv
	        but was required as: github.com/boinkor-net/tsnsrv
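Until the module path in go.mod is fixed, cloning the repository and building locally side-steps go install's module-path check:

```shell
# go install enforces that the module path matches the requested path;
# a local build does not.
git clone https://github.com/boinkor-net/tsnsrv.git
cd tsnsrv
go build .   # main.go lives at the repo root, per the compile error above
```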
