
Comments (67)

mash-graz commented on August 28, 2024

hmmm... i really like the idea of preserving the current CLI behavior until the next major release, in conformance with semantic versioning, but nevertheless we should try to reduce the needed changes to the bare minimum.

i would therefore suggest:

  1. add --publish without a short option for the ingress forwarding right now (= next minor release)
  2. support --api-port/-a in parallel to the existing/deprecated --port/-p

that way users could use the new syntax from now on, and any removal or redefinition of --port/-p in one of the next major releases wouldn't affect them anymore.

btw: i was just looking again at how docker interprets all the possible colon-separated variants:

https://docs.docker.com/engine/reference/run/#expose-incoming-ports

this myriad of variants is perhaps overkill for our purpose; nevertheless i would at least try to stay reasonably close and compatible to these well-known conventions...
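for reference, a few of the variants documented there (nginx is just a stand-in image):

docker run -p 8080:80 nginx              # host port 8080 -> container port 80
docker run -p 127.0.0.1:8080:80 nginx    # additionally bind to a single host interface only
docker run -p 8080:80/udp nginx          # explicit protocol
docker run -p 80 nginx                   # container port 80 -> random ephemeral host port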

andyz-dev commented on August 28, 2024

@iwilltry42 @mash-graz O.K. I will stick with --publish for now, and add --add-port as its alias.

zeerorg commented on August 28, 2024

Maybe issue #6 is relevant in this regard? Though for port-forwarding, the current recommended way of doing it is using kubectl port-forward
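For example (service name and ports are placeholders):

kubectl port-forward svc/my-service 8080:80

which tunnels localhost:8080 on the host to port 80 of the service for as long as the command runs.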

goffinf commented on August 28, 2024

@iwilltry42 That works for me.

mash-graz commented on August 28, 2024

if a working minor release could be realized with any usable solution, i'll be happy. :)
i don't want to convince anybody, just to help find a working solution or rethink it from another point of view...

How about we change it to --api-port, -a, which takes a string argument in the form of 'ip:host-port'?

that's an interesting amendment, because in some cases it could indeed make sense to bind the exposed API connectivity to just one specified host-IP/network instead of 0.0.0.0, for security reasons!

goffinf commented on August 28, 2024

My 4 cents on complexity ... and forgive me for stating the obvious.

The experience for a Dev and for an Ops should be largely similar in terms of configuration and deployment, as for any Kubernetes installation. This isn’t some ‘hello-world’ demo environment but intended to be ‘production ready’, at least for edge devices (although personally I don’t draw any distinction or try and pigeonhole k3d/k3s in that way).

In my case, and I suspect many others, I need to provide a local Kubernetes environment for all the application developer teams that are migrating existing apps into a micro-services / container-based solutions architecture, and to support the majority of new apps that we build. For that environment to be genuinely useful we need to be able to use the same set of Kubernetes constructs to deploy and expose workloads, and use the same automation through build and deploy pipelines as we do for our hosted environments. Without that it will just create a misalignment which I suspect will relegate k3d to ‘just for demos and training’. That’s not what I need; there are plenty of those tools available already.

So a key aspect is being able to route external traffic into this local cluster through external load balancers and Ingress. That’s what we do ‘in the wild’, so that’s what we need to do here. I am aware we could use NodePort or host port mapping, but we don’t use that approach much, so for me, Ingress is the preferred option. I want to rock up to my local k3d cluster and run my cluster config scripts and my application helm install (the exact same ones I use in Production - give or take), and a few seconds later be doing Development work or running integration tests, whatever.

So .... from a complexity perspective, tool chain builders always get the most challenging work to do, but the benefit is to the thousands of others who consume the platform and don’t care how the magic happens and complain when it doesn’t or it behaves differently in some (sometimes small and acceptable) way. Such is the lot for those of us that build tools for others, but it’s the most exciting job of all.

Please don’t see this as a rant against k3d or the difficult choices in terms of what needs to be supported or not, and which features have priority. I am massively appreciative of the work and integrity of everyone who is contributing whether that’s code or comments. I just wanted to orient my ‘ask’ in such a way that clearly says that I don’t really want to be in a position of introducing k3d to my dev teams by starting off with all the things that it doesn’t support or worse, does support but completely differently to the next environment along.

Like many others we believe in the whole immutable-infrastructure ‘Kool-Aid’ and would be loath to move from that position and maintain different solutions for the same ‘thing’.

HTHs

Fraser.

mash-graz commented on August 28, 2024

a few additional considerations concerning the security impact of this feature:

i don't think we should forward/expose any network ports or traffic belonging to interfaces which are conceptually understood as internal-only in k8s (e.g. bound to the ClusterIP).

only those network endpoints which are intended to be publicly accessible via ExternalIPs in k8s should be available outside of k3d's docker cage and forwarded by this feature! everything else should stay isolated from the outer world, or only be accessible via more sophisticated and secure mechanisms like kubectl proxy!

i think we really should respect this very important general security requirement, otherwise we defeat k3d's main goal and work against the idea of reliable sandbox utilization.
k3d should be a useful tool to encapsulate a working cluster installation and naturally provide access to its public network endpoints -- just like the real thing! -- but it shouldn't go further and undermine important k8s security concepts.
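e.g. kubectl proxy keeps everything bound to loopback:

kubectl proxy --port=8001
# the API (and services behind it) are then only reachable from the local machine:
curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services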

iwilltry42 commented on August 28, 2024

I would also opt for kubectl port-forward as @zeerorg said.
But, since ingress can in many cases be the only service that needs ports mapped to the host, I could imagine adding an extra flag to k3d create for ingress port mapping. E.g. k3d create --ingress-http 30080 --ingress-https 30443 for mapping http/https ports to the host system.
Or a single flag for mapping any arbitrary port.

WDYT?

mash-graz commented on August 28, 2024

a working solution to specify the port forwarding during k3d creation would indeed be very helpful!

goffinf commented on August 28, 2024

Unsurprisingly I can confirm that using kubectl port-forward does work, but ... I would still much prefer to define Ingress resources

mash-graz commented on August 28, 2024

but ... I would still much prefer to define Ingress resources

+1

the current behavior looks rather inconvenient and insufficient to me.

if forwarding the API port to the public network is possible, the same should be available and configurable for ingress ports as well. without this feature k3d is IMHO hardly usable for serious work.

iwilltry42 commented on August 28, 2024

So I'd go with an additional (string-slice) flag like --extra-port <host:k3s> here.
Now, since it's mostly wanted for ingress use, it should be enough to expose the ports specified here on the server node, right?
Or we take a more sophisticated approach and extend it to --extra-port <host:k3s:node>, where node can be either server, workers, all or <node name>.
Any opinions on that?

goffinf commented on August 28, 2024

Being able to specify the node ‘role’ is more flexible if we are just talking about exposing ports in the general sense. I’m not sure I can think of a use case for using these for an Ingress object for Control Plane or Etcd (and as yet there is no separation of these roles - but that might happen in the future?), but it’s still better to have the option. So here the prototype would be something like ...

--add-port <role>:<port>

Where role can be worker (default), controlplane, or etcd (or just server if cp and etcd will always be combined).

mash-graz commented on August 28, 2024

i'm not sure if it really makes sense to search for a more abstract / future-proof / complicated command line syntax in this particular case?

in fact, we just want to utilize the same very simple docker-API "PortBindings" functionality in all of these cases -- don't we?

i would therefore simply extend the usability of the existing -p/--port command line option -- i.e. make it usable multiple times [for API connectivity and an arbitrary list of ingress ports] and allow "container-port:host-port" pairs for usage scenarios with more than one instance in parallel. this would look rather similar to the expose syntax of docker's CLI, i.e. a natural and commonly expected simple wrapper behavior.
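purely as an illustration of this proposal (none of this syntax exists yet):

# a single integer keeps today's meaning: the API port
k3d create -p 6443
# colon pairs additionally forward ingress ports (container-port:host-port)
k3d create -p 6443 -p 80:80 -p 443:443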

iwilltry42 commented on August 28, 2024

I agree with you @mash-graz that we could re-use the --port flag, but I think that right now it would break UX, since in the original sense --port X simply mapped port X to the K8s API port in the server container. This functionality we would break by introducing the --port c:h syntax, so we would at least need to find a transitional solution.

I also think, as supported by @goffinf, that it would be a good thing to narrow it down to node roles, where setting the role would be optional.
@goffinf: I think --add-port <role>:<port> also needs a notion of host and container port.
To stick to the docker syntax I'd go with --add-port <hostport>:<containerport>:role, say "map this port from my host to this port on those nodes".

@mash-graz: any idea how we could transition from our current --port <server-port> syntax to the syntax mentioned above? And would it fulfill your needs?

mash-graz commented on August 28, 2024

@iwilltry42

any idea how we could transition from our current --port syntax to the syntax mentioned above? And would it fulfill your needs?

yes -- i also have some doubts concerning the backwards compatibility of such a change.
and indeed, an additional --add-port option could solve this risk in a very reliable manner.
but is it really necessary?

  • if the -p/--port option isn't specified on the command line, it's interpreted just like a single usage of -p 6443, because k3d is hardly usable without exposing the kubernetes API on the default port, reachable from the public network.

  • all other useful invocations of this parameter will need the colon notation, because the actual ingress port on the container network has to be specified anyway [only the port on the host side, i.e. the number after the colon, could be seen as optional when using the same port]. so we would just have to look for the colon sign, i.e. differentiate between a single int argument and int:[int] sequences.

  • users should even be free to use crazy setups like k3d create --server-arg=https-listen-port=8999 -p 8999:6443 (or similar) in the suggested -p/--port syntax without breaking the system logic.

nevertheless i could accept the --add-port alternative just as well.

iwilltry42 commented on August 28, 2024

You know what? Why not both?
We could create a minor release of 1.x with the --add-port flag and a major release 2.0.0 (which can totally bring in breaking changes) with the extended --port flag.

andyz-dev commented on August 28, 2024

Borrowing from the Docker CLI, we could also consider using --publish for mapping host ports into the k3s node ports. In fact, I am working on a pull request for it. It would be great to assign the -p shorthand to this option as well. (I am also o.k. with --add-port if that is preferred.)

I think it is useful to keep the api port spec separate from the --publish option, since the worker nodes need to know where the API port is for joining the cluster. How about we change it to --api-port, -a, which takes a string argument in the form of 'ip:host-port'?
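To sketch that separation (nothing here is implemented yet; values are examples):

k3d create --api-port 127.0.0.1:6443 --publish 8080:80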

iwilltry42 commented on August 28, 2024

You're right there, I didn't think of that...
I like your suggestion with the api-port 👍

iwilltry42 commented on August 28, 2024

So to wrap things up, I'd suggest doing the following:

For the next minor release:

  • add --add-port option
  • hint on deprecation of --port and breaking change in next major version

For the next major release:

  • re-use the --port, -p flag for generic port mapping in the style of --port <hostport>[:<containerport>][:<node roles>], where the default for <node role> would be all and <containerport> would be the same as <hostport> if left blank.
  • introduce --api-port, -a <hostport>[:<containerport>] with the old functionality of the --port flag (example below)
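Purely illustrative, with made-up ports:

k3d create --api-port 6550:6443 --port 8080:80:server --port 8443:443:workers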

Any final comments on that?
@andyz-dev , are you already working on a PR for this? If not, I'd have time now 👍

iwilltry42 commented on August 28, 2024

BTW: If you didn't already, you might want to consider joining our slack channel #k3d on https://rancher-users.slack.com after signing up here: https://slack.rancher.io/ 😉

andyz-dev commented on August 28, 2024

@iwilltry42 I already have --publish working, just polishing it before sending out the pull request. I will also rename it to --add-port.

I am not working on --api-port, nor on deprecating --port. Please feel free to take them up.

iwilltry42 commented on August 28, 2024

@andyz-dev Alright, I'll base the next steps on the results of your merge, so I don't have to deal with all the merge conflicts 😁

iwilltry42 commented on August 28, 2024

@mash-graz, yep, I like that procedure 👍
Awaiting @andyz-dev's PR now for --publish or --add-port and will base the other changes on top of that.

Regarding all the possibilities of port-mappings, I'm on your side there that we should stick close to what docker does. Though I'd really like to put the notion of node roles (or at some point in the future also node IDs) in there somehow so that we can specify which nodes should have those ports mapped to the host.

mash-graz commented on August 28, 2024

Regarding all the possibilities of port-mappings, I'm on your side there that we should stick close to what docker does. Though I'd really like to put the notion of node roles (or at some point in the future also node IDs) in there somehow so that we can specify which nodes should have those ports mapped to the host.

yes -- it definitely makes sense to catch the different ways of exposing services in k8s with more adequate/selective forwarding strategies in the long run...

iwilltry42 commented on August 28, 2024

Thanks for the PR #32 @andyz-dev !
@goffinf and @mash-graz , maybe you want to have a look there as well 👍

mash-graz commented on August 28, 2024

thanks @andyz-dev ! 👍
that's a much more complex patch than expected.

please correct me if i'm totally wrong, but i don't think this forwarding of all worker nodes is necessary or useful for typical ingress/LoadBalancer scenarios -- e.g. when k3d's traefik default installation is utilized. in this case, all the internal routing is already concentrated/bound to just one single IP-addr/port pair within the docker context. we only have to forward it from this internal docker network to the real public outer world -- i.e. one of the more common networks of the host.

but again: maybe i'm totally wrong concerning this point? -- please don't hesitate to correct me!

but your approach could make some sense for some of the other network exposing modes of k8s.
although i would at least suggest steps of 1000 as the port offset in this case, to minimize conflicts with e.g. other daemons listening on privileged standard ports (<=1024).

andyz-dev commented on August 28, 2024

thanks @andyz-dev ! 👍
that's a much more complex patch than expected.

I feel the same way. Any suggestion on how to simplify it?

please correct me if i'm totally wrong, but i don't think this forwarding of all worker nodes is necessary or useful for typical ingress/LoadBalancer scenarios -- e.g. when k3d's traefik default installation is utilized. in this case, all the internal routing is already concentrated/bound to just one single IP-addr/port pair within the docker context. we only have to forward it from this internal docker network to the real public outer world -- i.e. one of the more common networks of the host.

In the product we are working on, we need our LB to run on the worker nodes. For HA, we usually run more than 2 LBs. I think there is a need to expose ports on more than one worker node.

I agree with you that exposing ports on all workers is overkill. Would the "node role" concept proposed by @iwilltry42 work for you? Maybe we should add it soon.

but again: maybe i'm totally wrong concerning this point? -- please don't hesitate to correct me!

but your approach could make some sense for some of the other network exposing modes of k8s.
although i would at least suggest steps of 1000 as the port offset in this case, to minimize conflicts with e.g. other daemons listening on privileged standard ports (<=1024).

Notice we are only stepping the host ports.
Maybe we can add a --publish-stepping option. Then again, this may be a moot point with "node role".

iwilltry42 commented on August 28, 2024

@mash-graz I agree with you there, it got way more complex than I first expected it to be.
But I cannot think of a simpler solution than what @andyz-dev did (good job!).
I already worked on combining his solution with the node role (and regexp validation of input), but it will make the whole thing even more complex (but also cool).
I'd go with an extended docker syntax like ip:hostPort:containerPort/protocol@node where only containerPort is mandatory and @node can be used multiple times (either node role or node name, while all is default).

For the port clashes I was thinking of something like a --auto-remap flag instead of doing it by default?
We could even go as far as automatically checking if ports are already being used with a simple net.Listen() to avoid crashes.
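For reference, a manual version of such a check on the host, assuming a netcat build that supports -z:

nc -z 127.0.0.1 8081 && echo "8081 already in use" || echo "8081 is free"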

iwilltry42 commented on August 28, 2024

Thanks for the explanation @goffinf , but I think that you don't need to justify yourself.
I'm totally on your side there, I'd value functionality over complexity as well 👍
Originally I built k3d from @zeerorg's idea, just because the other tools like kind, minikube, etc. didn't fulfill my needs for local development... so I guess we have similar needs there ;)

mash-graz commented on August 28, 2024

i think we all agree on the perspective and criteria expressed by @goffinf. we just have to find a sufficient solution to realize these challenging demands in practice.

i still see typical ingress and LB setups utilizing a single 'Endpoint' entry as the most realistic usage scenario. this kind of simple one [external] port A -> one [docker] port B mapping should therefore be supported without too much confusing implementation overhead and rethinking on the user's side.

maybe we could achieve this goal by just stipulating the proposed role "server" instead of "all" as the default behavior, to get just this simple one-to-one mapping for k3s. nevertheless i have my doubts whether this really works for multi-master clusters, and it also doesn't really correspond to the intended semantics of this role notation.

if we really want to support other network exposing capabilities of k8s besides these relatively simple one-to-one ingress scenarios, we may indeed be forced to forward and map the ports of all worker nodes to the host. nevertheless i would see this kind of network access translation more as a workaround for test and development purposes, because it differs too much from the internal topology and addressing. it's IMHO just a workaround to get access to more fancy k8s internals and network exposing capabilities, but from my point of view it doesn't seem to be necessary for the more common practical usage scenarios.

i also wouldn't underestimate all the necessary provisions to support dynamic changes in this case -- i.e. adding and removing worker nodes -- and to update the port mapping in a consistent manner during all these changes.

the same could be said for more demanding/exotic HA/LB setups, which IMHO cannot simply be 'simulated', i.e. transferred to docker environments and arbitrary bundles of incrementally numbered ports, but would need at least multiple [virtual] IP endpoints on the host side as well to work as expected [i.e. to simulate a takeover by utilizing the same port numbers] in this kind of translated context.

but again: maybe i really don't get the point?

iwilltry42 commented on August 28, 2024

Merged PR #34 by @andyz-dev (Thank you!).
Pre-Release with the --publish/--add-port flags pushed: https://github.com/rancher/k3d/releases/tag/v1.2.0-beta.0

iwilltry42 commented on August 28, 2024

I will base my work of adding the notion of node roles on top of that.

mash-graz commented on August 28, 2024

thanks @iwilltry42 for this beta!
i'll try to test it tonight...

but could you please elaborate a little bit more on your idea concerning this "role" specifier?

should it be understood just as a kind of label-based selection mechanism, which would e.g. allow exposing ports for only a subset of services, or is it more intended as a flag which helps to handle the various available "Types" of k8s network exposure by different and in each case adequate means (e.g. selectively binding only to the desired network within the docker context)?

it's perhaps also useful to find a sufficient documentation string or explanation for the default behavior (i.e. if 'role' isn't explicitly specified).

thanks

iwilltry42 commented on August 28, 2024

You're welcome @mash-graz , but the credits should go to our new collaborator @andyz-dev who implemented it 👍

The "role" specifier is nothing that complex. It's only to map ports only from certain types of nodes that you create with k3d. Let's call it node-specifier from now on.
Then you could e.g. write --publish 0.0.0.0:1234:4321/tcp@workers to map port 4321 from the worker nodes (containers in k3d) to localhost (for now by using the offset to avoid port clashes) or --publish 3333@k3d-mycluster-worker-1@k3d-mycluster-worker-2 to map 3333 from the two specified nodes to localhost. So the specifier could be either one of server, workers, all or <node-name>.
Default would then be all which makes the whole thing backwards compatible to what we have right now with the beta.

But feel free to drop suggestions on what you would expect 👍

mash-graz commented on August 28, 2024

You're welcome @mash-graz , but the credits should go to our new collaborator @andyz-dev who implemented it +1

sorry -- i had the impression that you merged improvements from your "new-publish" branch as well.
and frankly, i like those additional changes much more than the original PR proposed by @andyz-dev, because his implementation IMHO simply ignores the specific needs of a sufficient k8s "ingress" handling.

concerning your node-specifier, i'm still contemplating whether it looks useful/worthwhile to me.
at least it opens some capabilities which simply couldn't be realized otherwise...

nevertheless i would see a more sufficient consideration of the different expose variants and their proper handling as much more important. but i don't think this can/should be handled/accomplished/intermixed with your proposed notation.

andyz-dev commented on August 28, 2024

I like the node-specifier concept in general. I am fine with 'all', 'server' and 'workers'. Since the node name is something internally generated, users may not know what it is until it has been generated (in theory, we could even change the way it is generated from release to release). How about we also allow a number as a valid node name specifier?

For example, to create a five node cluster with the first two worker nodes as ingress nodes, the k3d command line will look like:

k3d create --workers 4 --publish 8080:80@0@1

It will be shorter and easier to type as well.

For my eyes, the following is a bit easier to read than the above:

k3d create --workers 4 --publish 8080:80[0,1]

To be clear, I am not suggesting that we do away with supporting the full node name. In a blog post, the full node name will be more readable.

mash-graz commented on August 28, 2024

hmm... i'm definitely not happy with all these questionable half-baked workarounds:

therefore a radical new proposal:

shouldn't we simply implement a command line option to enable NetworkMode=host, just as we can set it in docker's CLI?

unfortunately this docker feature isn't available or working on macOS and windows, but on linux it would make the cluster accessible from the outside on the common network device of the host system, i.e. use the public host IP as External-IP within k8s.

because k8s almost exclusively utilizes private address space internally, this option wouldn't induce much harm -- i personally would say: less than dirty arbitrary port forwarding. nevertheless users could still enable all sorts of direct access to specific nodes and their ports from within k8s by well-defined and already available k8s control options in this case.
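for illustration, this is roughly what that would map to on plain docker (image tag taken from this thread; all the other flags k3d sets are omitted here):

docker run --network host rancher/k3s:v0.5.0 server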

what do you think about this kind of alternative solution?

andyz-dev commented on August 28, 2024

@mash-graz, I don't follow the objections raised above. So let me make an attempt to understand better. If the following is obvious, apologies in advance.

If the main goal is to implement a one host port to one docker port mapping for a k3s cluster:

The following should create the desired cluster:
$ k3d create --publish 8080:80

k3s by default comes with the traefik ingress controller, which listens on port 80. Ingress rules can be configured the normal k8s way (i.e. via yaml file). Once ingress is configured,

$ curl http://localhost:8080/xxxxx should just work.

If one needs a larger cluster, but only one ingress node, something like the following can work (needs @iwilltry42's changes to merge in):

$ k3d create --publish 8080:80@server --workers 5

If one needs a larger cluster, but two ingress nodes (for HA), then the following can work:
$ k3d create --publish 8080:80@server@0 --workers 5

If the goal is to provide remote access to the k3d cluster, the following should work, assuming <ip> is the desired remotely routable IP:

$ k3d create --publish <ip>:8080:80

On the other hand, if the goal is not to provide any remote access to the k3d cluster, but only to use it for local development and testing:

$ k3d create --publish 127.0.0.1:8080:80

I understand and agree that node port and host port are not commonly used, but they are valid k8s configurations. To be honest, I don't see a downside in supporting them, in addition to the common use cases we mostly care about.

Or probably you have some other ingress use cases in mind?

goffinf commented on August 28, 2024

I don’t have any objection to supporting nodeport and host port mapping. But I don’t think we should assume that they are the same as Ingress other than in a very generalised sense. I want to be able to support the Kubernetes Ingress object, not a simulation of it.

Currently in k3s (in its docker implementation), Ingress is only supported through a specific port mapping in the compose file, and that doesn’t work if you have more than ONE worker. There is an open issue for that, which I was rather hoping that k3d would address given that its focus is the docker implementation specifically.

Some of the discourse above has started to venture into much more significant customisation and, as others have said, this is very likely to give rise to all sorts of edge cases to manage, which would move k3d from being a light-weight wrapper around k3s to something more uncomfortably substantial I think .. not sure .. but I see some early warning signs of creeping complexity.

I still think the role type has some legs since in many cases I don’t want to expose application ports on the server node.

Certainly some of @andyz-dev’s examples look pretty simple. I’m not 100% convinced about using a node ordinal (a name may be ok though).

mash-graz commented on August 28, 2024

your explanation looks correct and plausible, but it still doesn't convince me.

when we started this debate, i also simply thought that this issue could be solved by a simple workaround and a few injected port forward instructions, at least for trivial typical usage scenarios. but after watching your attempts to stretch these possibilities much wider and bypass lots of abstractions and indirections utilized by a clean k8s system, i simply had to change my mind. you somehow demonstrated convincingly why we shouldn't take this path...

if you look at the behavior of k3d without your patch on a linux system, you see that it is already using ExternalIPs, i.e. providing access from the host via the docker0 bridge:

k3s$ kubectl get svc -n kube-system 
NAME       CLUSTER-IP     EXTERNAL-IP    PORT(S)                      AGE
kube-dns   10.43.0.10     <none>         53/UDP,53/TCP,9153/TCP       3d
traefik    10.43.227.46   192.168.16.2   80:32448/TCP,443:31643/TCP   3d

and http access works if you utilize this particular IP:

~$ wget -O- http://192.168.16.2
--2019-05-10 01:51:58--  http://192.168.16.2/
Connecting to 192.168.16.2:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2019-05-10 01:51:58 ERROR 404: Not Found.

it's just a pity that it is only listening on docker0 and not on our usual network, which would obviate the need for any further routing solution or redirection. but on the other hand, that's exactly what we should expect and like about docker and similar sandboxing environments: it isolates the networks!

so it's really just a question of how we can overcome this particular behavior without too many side effects and unforeseeable troubles. we don't need a super strong brute-force approach which bypasses most of k8s' fundamental network principles, but a nice, clean and simple solution to just enable this communication between the public network and the already designated external IP communication endpoints in k8s.

given all these worries and dissatisfaction concerning port gambling solutions, i'm thinking more and more about the other alternative already expressed above. it also looks like a more promising strategy to cope with dynamic changes and reconfiguration within the clusters, which cannot be handled in a sufficient manner by our one-shot docker configuration attempts.

andyz-dev commented on August 28, 2024

@goffinf Thanks for the input. Is there a reference to the issue you mentioned on k3s not being able to support more than one ingress node? I'd like to take a closer look and make sure multi-node ingress works on k3d. We should probably also take a fresh look at the ingress from the top down (instead of coding it up) to make sure the end result works for the community.

@mash-graz Thanks for the input, I do value it very much, and the end solution will be better if we think about it collectively. That's why @iwilltry42 made the pre-release available, so that more folks can try it out and provide feedback. FWIW, from the tip of master, on macOS, the svc listing for traefik also gave the external IP of docker0 (on macOS, docker0 runs inside a VM, of course). Sounds like you are already thinking of an alternative design. If helpful, I will be happy to continue the discussion (maybe we can make use of the slack channel), or be a sounding board to help you flesh out your design ideas.

iwilltry42 commented on August 28, 2024

Hey there, now that was a long read 😁
I was a bit busy in the last days, but now I took the time to read through all the comments again to figure out what's the best thing to do.

So as far as I understand it, the main risk/concern in the current approach is that we expose ports on each k3d node (container) with an auto-offset on the host.
Since we often don't need this, it could just introduce security issues.

The main goal of this should be to support single hostport to single containerport bindings, to e.g. allow ingress or LB port forwarding for local development.

Since the port mapping is the most flexible thing that we can do at creation time without too much extra engineering overhead, I think we should stick with it, but in a different way than now.

I would still propose to enhance the docker notation with the node specifier, i.e. [<ip>:][<host-port>:]container-port[/<protocol>]@<node-specifier>, where container-port and node-specifier are mandatory (first change). The auto-offset of host ports would then be an optional feature (e.g. --port-auto-offset) and 1:1 port-mapping would be emphasized by enforcing the node-specifier in --publish.
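For example (all of this is still the proposal; ports and node names are made up):

k3d create --publish 80:80@server                            # plain 1:1 ingress mapping
k3d create --publish 127.0.0.1:8081:80/tcp@k3d-mycluster-worker-0  # bound to loopback, one specific node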

Still, I think that @mash-graz's idea of somehow connecting to the docker network is a super cool one, and I think that we might be able to support this at some point when we've had more time for researching it.
The network-mode=host feature we could add with a hint that it will only work for Linux users.
I think there are more people interested in this (#6).

iwilltry42 commented on August 28, 2024

After the first easy implementation, we can think of further getting rid of auto-offset, e.g. by allowing port ranges for host-port, so that we will generate a 1:1 mapping out of a single flag.
E.g.

  • we pass --workers 5 --publish 6550-6554:1234@workers to have port 1234 of each worker mapped to the host
  • We get 0.0.0.0:6550:1234/tcp@worker-0, 0.0.0.0:6551:1234/tcp@worker-1, ..., 0.0.0.0:6554:1234/tcp@worker-4
    Since this requires quite a bit more code and thought, I will keep it out of my initial PR.

mash-graz commented on August 28, 2024

thanks @iwilltry42 for preparing this patch!

So as far as I understand the main risk/concern in the current approach is that we expose ports with on each k3d node (containers) with an auto-offset on the host.
Since we often don't need this, this could just introduce security issues.
The main goal of this should be to support single hostport to single containerport bindings to e.g. allow ingress or LB port forwarding for local development.

yes -- that's more or less my point of view too. :)

we should really try to concentrate on this simple one-to-one port mapping (LB/ingress), although the more complex cases (auto offset...) should be supported as well as possible.

I would still propose to enhance the docker notation with the node specifier, ... , where container-port and node-specifier are mandatory.

does it really make sense to require the node-specifier as a mandatory field?

if we understand the one-to-one port mapping case as the most common usage scenario, we can simply assume it as the default mode of operation as long as no node specifier is explicitly given on the command line.
this would make it easier to use and need less lengthy command line options in practice.

i still see the problem of how the one-to-one port mapping can be specified in an unambiguous manner by the proposed notation.
a ...@server will only work for single-master clusters, but in the future, when k3s will finally be able to support multi-master clusters as well, it unfortunately will not always signify a unique node, i.e. a one-to-one mapping, anymore...

i guess it's a kind of fundamental misunderstanding or oversimplification if we presuppose a congruence between docker container entities and k8s' more complex network abstractions. both accomplish orthogonal goals by different means. utilizing port mappings/forwarding as a workaround to satisfy some practical access requirements should always be seen as a rather questionable and botchy shortcut.

there are already some interesting tools and improvements on kubectl port-forward available, which seem to solve similar kinds of network routing and node access in a comfortable fashion that respects k8s idiosyncrasies:

https://github.com/txn2/kubefwd
https://github.com/flowerinthenight/kubepfm

they are most likely a more suitable choice if one wants to handle demanding network access to remote clusters and k3s running within docker or VMs. in comparison with our docker-specific approach, this kind of solution comes with a few pros and cons:

pros:

  • much cleaner and unambiguous access to k8s services
  • works even in case of dynamic changes happening in the cluster

cons:

  • they need root privileges to forward privileged ports
  • it's a little bit less efficient and slower in comparison to other variants of direct access
  • it can be only used, when the cluster is already running

The network-mode=host feature we could add with a hint that it will only work for Linux users.

yes, i still think this variant could be a worthwhile and extraordinarily user-friendly option on linux machines. i'll try to test it and prepare a PR for this feature as soon as possible.

goffinf commented on August 28, 2024

I've had limited success with kubefwd, which I also thought might have some legs for this problem (having Kelsey Hightower endorse a product must give it some street cred I suppose). Anyway, my environment is Windows Subsystem for Linux (WSL). I appreciate that's not the case for everyone, but in corporates it's pretty common.

Running kubefwd in a docker container, even providing an absolute path to the kubeconfig just results in connection refused ...

docker run --name fwd -it --rm -v $PWD/.kube/config:/root/.kube/config txn2/kubefwd services -n default
2019/05/13 23:27:21  _          _           __             _
2019/05/13 23:27:21 | | ___   _| |__   ___ / _|_      ____| |
2019/05/13 23:27:21 | |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |
2019/05/13 23:27:21 |   <| |_| | |_) |  __/  _|\ V  V / (_| |
2019/05/13 23:27:21 |_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|
2019/05/13 23:27:21
2019/05/13 23:27:21 Version 1.8.2
2019/05/13 23:27:21 https://github.com/txn2/kubefwd
2019/05/13 23:27:21
2019/05/13 23:27:21 Press [Ctrl-C] to stop forwarding.
2019/05/13 23:27:21 'cat /etc/hosts' to see all host entries.
2019/05/13 23:27:21 Loaded hosts file /etc/hosts
2019/05/13 23:27:21 Hostfile management: Backing up your original hosts file /etc/hosts to /root/hosts.original
2019/05/13 23:27:21 Error forwarding service: Get https://localhost:6443/api/v1/namespaces/default/services: dial tcp 127.0.0.1:6443: connect: connection refused
2019/05/13 23:27:21 Done...

I know that kubeconfig works ...

kubectl --kubeconfig=$PWD/.kube/config get nodes
NAME                       STATUS   ROLES    AGE     VERSION
k3d-k3s-default-server     Ready    <none>   6h38m   v1.14.1-k3s.4
k3d-k3s-default-worker-0   Ready    <none>   6h38m   v1.14.1-k3s.4
k3d-k3s-default-worker-1   Ready    <none>   6h38m   v1.14.1-k3s.4
k3d-k3s-default-worker-2   Ready    <none>   6h38m   v1.14.1-k3s.4

Using the binary was better and did work ...

sudo kubefwd services -n default

2019/05/14 00:32:44  _          _           __             _
2019/05/14 00:32:44 | | ___   _| |__   ___ / _|_      ____| |
2019/05/14 00:32:44 | |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |
2019/05/14 00:32:44 |   <| |_| | |_) |  __/  _|\ V  V / (_| |
2019/05/14 00:32:44 |_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|
2019/05/14 00:32:44
2019/05/14 00:32:44 Version 1.8.2
2019/05/14 00:32:44 https://github.com/txn2/kubefwd
2019/05/14 00:32:44
2019/05/14 00:32:44 Press [Ctrl-C] to stop forwarding.
2019/05/14 00:32:44 'cat /etc/hosts' to see all host entries.
2019/05/14 00:32:44 Loaded hosts file /etc/hosts
2019/05/14 00:32:44 Hostfile management: Original hosts backup already exists at /home/goffinf/hosts.original
2019/05/14 00:32:44 WARNING: No backing pods for service kubernetes in default on cluster .
2019/05/14 00:32:44 Forwarding: nginx-demo:8081 to pod nginx-demo-76d6b7f896-855r2:80

and a curl FROM WITHIN WSL works ...

curl http://nginx-demo:8081
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
...
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

HOWEVER ... http://nginx-demo:8081 is NOT available from the host (from a browser) itself unless you update the Windows hosts file to match the entry in /etc/hosts (WSL will inherit the Windows hosts file on start-up but doesn't add entries to it as it does with /etc/hosts) .... e.g.

You need to add this to /c/Windows/System32/drivers/etc/hosts (which is what kubefwd added to /etc/hosts in this example)

127.1.27.1 nginx-demo nginx-demo.default nginx-demo.default.svc.cluster.local

You can use the 127.1.27.1 IP without altering the Windows hosts file but that's not particularly satisfactory ..

e.g. this will work from a browser on the host ... http://127.1.27.1:8081

In some ways this is WORSE than kubectl port-forward since at least there I can access the service on localhost:8081 without needing to mess with the Windows hosts file.

So TBH, neither of these is especially attractive to me even if they do leverage features native to the platform.

@iwilltry42 I'll try out your patch tomorrow (I have a bit of time off work).

I do agree with much that @mash-graz has said, but I'm minded to at least move forwards even if what is implemented now becomes redundant later on.

mash-graz commented on August 28, 2024

Running kubefwd in a docker container, even providing an absolute path to the kubeconfig just results in connection refused ...

 docker run --name fwd -it --rm -v $PWD/.kube/config:/root/.kube/config txn2/kubefwd services -n default
 ...
 2019/05/13 23:27:21 Error forwarding service: Get https://localhost:6443/api/v1/namespaces/default/services: dial tcp 127.0.0.1:6443: connect: connection refused

I know that kubeconfig works ...

your kubeconfig seems to use an API server entry which points to localhost:6443.
this will only work on your local machine and not for remote access to your cluster. using virtual machine environments or docker sandboxes has to be seen as a kind of remote access in this respect. localhost doesn't connect to the same machine in this case...

just edit the server entry of your kubeconfig and use the IP of your machine's network card instead.
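one way to do that without opening an editor (the cluster name "default" is what k3s typically uses -- check with kubectl config get-clusters; the IP is just an example):

kubectl config set-cluster default --server=https://192.168.0.29:6443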

concerning the mentioned windows hosts-file synchronization issues, i would answer: yes, it's just another dirty workaround.

but this mechanism does have some important advantages in practice. faked name entries like this are nearly indispensable if you have to develop and test services behind L7 reverse proxies, i.e. name-based dispatching. just forwarding ports to arbitrary IPs doesn't work in this case.
but again: it's just another tricky workaround, and by no means a clean and exemplary solution. ;)

iwilltry42 commented on August 28, 2024

Thanks for the feedback!
@mash-graz So you would make @server the default for the time that we don't have HA (= multiple masters) in k3s? As soon as we get HA mode, we could think of adding a loadbalancer in front so that we don't have multiple host ports open for the same ports on the master nodes.
Also, I'll have a look into the two projects you mentioned 👍

@goffinf , maybe WSL2 will bring you some improvements soon 😉

mash-graz commented on August 28, 2024

@mash-graz So you would make @server the default for the time that we don't have HA (= multiple masters) in k3s? As soon as we get HA mode, we could think of adding a loadbalancer in front so that we don't have multiple host ports open for the same ports on the master nodes.

yes -- i would definitely make it the default behavior!

it may look a little bit crazy and ignorant that i'm still insisting on this particular little implementation detail, but i think it indeed makes an important difference for end-users. in most cases they'll only want to forward LB/ingress http/s access on the standard ports 80 and 443 on the host's public network -- that's at least my expectation -- and this trivial usage scenario should be supported as simply as possible. it shouldn't need any unfamiliar and complex command line options and should just work reliably, as commonly expected, out of the box.

Also, I'll have a look into the two projects you mentioned

these other alternatives do not render our port forwarding efforts redundant, because they are designed more to realize safe remote access to the network services of a cluster than to just make ports accessible on the public network -- i.e. they accomplish a slightly different purpose -- nevertheless it's interesting to study how they overcome some of the obstacles and ambiguities related to this forwarding challenge.

iwilltry42 commented on August 28, 2024

Your reasoning makes total sense @mash-graz , so I updated the default node to server in my PR #43
UPDATE: need to change a different thing later actually...

goffinf commented on August 28, 2024

@mash-graz just following up with your comment ....

... your kubeconfig seems to use an API server entry, which points to localhost:6443. ... just edit the server entry of your kubeconfig and use the IP of your machines network card instead

Unfortunately that doesn't appear to work. No kubectl commands succeed with that amendment and WSL also crashes. Obviously the default for k3s is localhost.

I thought that I might be able to pass this via the bind-address server arg as you can with k3s ...

sudo k3s server --bind-address 192.168.0.29 ...

but I couldn't see anything in the k3d docs which suggests how k3s server args are exposed. Do you know?

goffinf commented on August 28, 2024

@iwilltry42 So I have installed v1.2.0-beta.1 and run k3d with this ...

k3d create --publish 8081:8081@server --workers 2

I can see port 8081 published on the server ..

docker container ls -a
CONTAINER ID        IMAGE                COMMAND                  CREATED              STATUS              PORTS                                            NAMES
c367af69df28        rancher/k3s:v0.5.0   "/bin/k3s agent"         59 seconds ago       Up 56 seconds                                                        k3d-k3s-default-worker-1
0211bedcfb27        rancher/k3s:v0.5.0   "/bin/k3s agent"         About a minute ago   Up 58 seconds                                                        k3d-k3s-default-worker-0
e30c8789d6da        rancher/k3s:v0.5.0   "/bin/k3s server --h…"   About a minute ago   Up About a minute   0.0.0.0:6443->6443/tcp, 0.0.0.0:8081->8081/tcp   k3d-k3s-default-server

I have a deployment and service for nginx where the service is listening on 8081

apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
#  type: NodePort
  ports:
    - port: 8081
      targetPort: 80
      name: http
  selector:
    app: nginx-demo

Would you expect to be able to successfully call that service on 8081? If I try curl ...

curl http://localhost:8081 -v
* Rebuilt URL to: http://localhost:8081/
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8081 (#0)
> GET / HTTP/1.1
> Host: localhost:8081
> User-Agent: curl/7.58.0
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact

Added an Ingress (no change) ...

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-demo
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx-demo
          servicePort: 8081

What am I missing?

Thanks

Fraser.

iwilltry42 commented on August 28, 2024

@mash-graz just following up with your comment ....

... your kubeconfig seems to use an API server entry, which points to localhost:6443. ... just edit the server entry of your kubeconfig and use the IP of your machines network card instead

Unfortunately that doesn't appear to work. No kubectl commands succeed with that amendment and WSL also crashes. Obviously the default for k3s is localhost.

I thought that I might be able to pass this via the bind-address server arg as you can with k3s ...

sudo k3s server --bind-address 192.168.0.29 ...

but I couldn't see anything in the k3d docs which suggests how k3s server args are exposed. Do you know?

You can pass k3s server args to k3d using the --server-arg/-x flag.
E.g. k3d create -x "--bind-address 192.168.0.29" or k3d create -x --bind-address=192.168.0.29

iwilltry42 commented on August 28, 2024

@iwilltry42 So I have installed v1.2.0-beta.1 and run k3d with this ...

k3d create --publish 8081:8081@server --workers 2

I can see port 8081 published on the server ..

docker container ls -a
CONTAINER ID        IMAGE                COMMAND                  CREATED              STATUS              PORTS                                            NAMES
c367af69df28        rancher/k3s:v0.5.0   "/bin/k3s agent"         59 seconds ago       Up 56 seconds                                                        k3d-k3s-default-worker-1
0211bedcfb27        rancher/k3s:v0.5.0   "/bin/k3s agent"         About a minute ago   Up 58 seconds                                                        k3d-k3s-default-worker-0
e30c8789d6da        rancher/k3s:v0.5.0   "/bin/k3s server --h…"   About a minute ago   Up About a minute   0.0.0.0:6443->6443/tcp, 0.0.0.0:8081->8081/tcp   k3d-k3s-default-server

I have a deployment and service for nginx where the service is listening on 8081

apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
#  type: NodePort
  ports:
    - port: 8081
      targetPort: 80
      name: http
  selector:
    app: nginx-demo

Would you expect to be able to successfully call that service on 8081? If I try curl ...

With the manifest above I wouldn't expect it to work, since NodePort is commented out, so no port is exposed on the node. But even then, the NodePort range is 30000-32767, so one of those ports has to be set and exposed for it to work.

curl http://localhost:8081 -v
* Rebuilt URL to: http://localhost:8081/
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8081 (#0)
> GET / HTTP/1.1
> Host: localhost:8081
> User-Agent: curl/7.58.0
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact

Added an Ingress (no change) ...

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-demo
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx-demo
          servicePort: 8081

What am I missing ?

You didn't map the ports for ingress, so that wouldn't work either. I'll create a demo for this 👍

mash-graz commented on August 28, 2024

You can pass k3s server args to k3d using the --server-arg/-x flag.
E.g. k3d create -x "--bind-address 192.168.0.29" or k3d create -x --bind-address=192.168.0.29

yes -- that's the correct answer to the question, but i don't think it will solve the troubles described by @goffinf.

it doesn't matter to which IP the k3s server API is bound inside the container, because from the outside it's always reached via the port forwarding internally specified by k3d (0.0.0.0:6443->6443/tcp), which maps it to all interfaces on the host side by this 0.0.0.0 notation. it should therefore be reachable on the host as https://localhost:6443 just as well as via the public server name or one of the external IPs of the machine.
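this mapping is easy to verify on the host (the output shown is what a default cluster typically reports):

$ docker port k3d-k3s-default-server
6443/tcp -> 0.0.0.0:6443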

perhaps @goffinf is fighting some windows/WSL specific issues, but on linux i do not have any trouble reaching the API from outside of k3d's docker instance, neither locally on the host nor by remote access, and it doesn't make a difference whether kubectl or kubefwd is used.

iwilltry42 commented on August 28, 2024

@goffinf this is a simple example of what I tested with k3d (on Linux):

  1. Create a cluster, mapping the ingress port 80 to localhost:8081
    k3d create --api-port 6550 --publish 8081:80 --workers 2

  2. Get the kubeconfig file
    export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"

  3. Create a nginx deployment
    kubectl create deployment nginx --image=nginx

  4. Create a ClusterIP service for it
    kubectl create service clusterip nginx --tcp=80:80

  5. Create an ingress object for it with kubectl apply -f

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
  6. Curl it via localhost
    curl localhost:8081/

That works for me.

iwilltry42 commented on August 28, 2024

@goffinf or the same using a NodePort service:

  1. Create a cluster, mapping the port 30080 from worker-0 to localhost:8082
    k3d create --publish 8082:30080@k3d-k3s-default-worker-0 --workers 2 -a 6550

...

  2. Create a NodePort service for it with kubectl apply -f
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  ports:
  - name: 80-80
    nodePort: 30080
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: NodePort
  3. Curl it via localhost
    curl localhost:8082/

goffinf commented on August 28, 2024

@iwilltry42 I can confirm that with the latest version (1.2.0-beta.2) the Ingress example works as expected on WSL. I can use curl localhost:8081 directly from WSL and within a browser on the host.

Moreover, Ingress works using a domain also. In this case I created the k3d cluster and mapped port 80:80 for the server (default), providing access to the Ingress Controller on that port rather than 8081 ...

k3d create --publish 80:80 --workers 2
...
docker container ls
CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS                                        NAMES
eedb8c962387        rancher/k3s:v0.5.0   "/bin/k3s agent"         30 seconds ago      Up 27 seconds                                                    k3d-k3s-default-worker-1
96ca910c7949        rancher/k3s:v0.5.0   "/bin/k3s agent"         32 seconds ago      Up 29 seconds                                                    k3d-k3s-default-worker-0
e10a95dc10b4        rancher/k3s:v0.5.0   "/bin/k3s server --h…"   34 seconds ago      Up 32 seconds       0.0.0.0:80->80/tcp, 0.0.0.0:6443->6443/tcp   k3d-k3s-default-server

Then defined the deployment, service and ingress as follows (noting the ingress now defines the host domain) ...

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo-dom
  labels:
    app: nginx-demo-dom
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo-dom
  template:
    metadata:
      labels:
        app: nginx-demo-dom
    spec:
      containers:
      - name: nginx-demo-dom
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo-dom
  labels:
    app: nginx-demo-dom
spec:
  ports:
    - port: 8081
      targetPort: 80
      name: http
  selector:
    app: nginx-demo-dom
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-demo-dom
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: k3d-ingress-demo.com
    http:
      paths:
      - backend:
          serviceName: nginx-demo-dom
          servicePort: 8081

Using curl the services was reachable ..

curl -H "Host: k3d-ingress-demo.com" http://localhost

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
...
</html>
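
As an aside (not part of the original test), instead of sending the Host header explicitly one could map the domain in /etc/hosts, assuming admin rights; a browser on the Windows host would need the equivalent entry in the Windows hosts file rather than the WSL one:

echo '127.0.0.1 k3d-ingress-demo.com' | sudo tee -a /etc/hosts
curl http://k3d-ingress-demo.com/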

So congrats, the publish capability and Ingress are working fine and very naturally in respect of k8s. Great work!

Changing the URL to something non-existent returns the default backend's 404 response as expected ...

curl -H "Host: k3d-ingress-demox.com" http://localhost
404 page not found

curl localhost
404 page not found

curl localhost/foo
404 page not found

Finally (again as expected but good to confirm) requests are properly load balanced across the 2 replicas that were defined in the deployment, alternating on each request.
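
A hypothetical way to make the alternation visible (my own sketch, not from the original test; it sends a few requests and then tails each pod's access log):

for i in 1 2 3 4; do curl -s -H "Host: k3d-ingress-demo.com" http://localhost/ > /dev/null; done
kubectl get pods -l app=nginx-demo-dom -o name | while read p; do
  echo "--- $p"; kubectl logs "$p" | tail -n 2
done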

Regards

Fraser.

from k3d.

goffinf avatar goffinf commented on August 28, 2024

@iwilltry42 In your example, which now appears in the GitHub README, was there a reason you chose to use the --api-port arg? It doesn't seem to materially affect whether the example works, so I wasn't sure if you were showing it for some other reason.

k3d create --api-port 6550 ...

from k3d.

iwilltry42 avatar iwilltry42 commented on August 28, 2024

Hey @goffinf , thank you very much for your feedback and for confirming the functionality of the new feature!
No, it was just that 6443 is constantly in use on my machine, and I left it in there so that people see the --api-port flag instead of the --port flag, which we want to "deprecate" (i.e. change the functionality of).
Do you think it's too confusing? Then I'd rather remove it 👍

UPDATE: I removed the -a 6550 from the NodePort example and added a note regarding the --api-port flag to the ingress example 👍
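
A quick, hedged way to verify where the API ended up (assuming the default cluster name; output details depend on the docker version):

docker port k3d-k3s-default-server
kubectl cluster-info   # the API server URL should now end in :6550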

from k3d.

goffinf avatar goffinf commented on August 28, 2024

Haha, beat me to it. I was going to suggest that it would not be confusing if you added a note.

In general I prefer plenty of examples that show off one, or a small number of features, rather than a single example that has everything packed into it, especially where there might be a difference in behaviour for particular combinations. You’ve done that now, so that’s perfect.

Talking of documentation and examples, the question I asked a few days ago about passing additional server args is, I think, worth documenting (i.e. using --server-arg or -x), and it provides an opportunity to talk briefly about the integration between k3d and k3s. I don't know whether it's possible to mirror every k3s arg (if it is, you could simply link through to the k3s docs rather than repeat it all, I guess)?
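
For illustration, a hedged sketch of that syntax (--no-deploy=traefik is a k3s v0.x server flag; whether any given k3s flag behaves as expected inside k3d is exactly the open question):

k3d create --workers 2 --server-arg --no-deploy=traefik
# or with the short form:
k3d create --workers 2 -x --no-deploy=traefik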

I suspect others might also be interested in how, or indeed if, k3d will track the life-cycle of k3s and respond as/if/when new features are added or changed. IMO that's an important consideration when selecting tools that app devs might adopt. Everyone accepts the ephemeral nature of open source projects, and the concern is smaller when, as in this case, the user experience is intuitive enough that the skills investment isn't high, but it's still nice to back tools with a strong likelihood of a longer shelf-life and an active community. Just a thought.

I note the new FAQ section. Happy to help out here although I am aware of how important it is to ensure that all docs are accurate and up-to-date.

from k3d.

iwilltry42 avatar iwilltry42 commented on August 28, 2024

Well... with --server-arg you can pass any argument to the k3s server... but we cannot verify whether it will work in the end.
It'd be a huge amount of additional work to ensure/verify that all the k3s settings work in a dockerized environment. E.g. to support the --docker flag for k3s, you'd have to put it in a dind image and/or pass the docker socket through from the host system.
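
To make that concrete, a purely illustrative sketch (this assumes a --volume flag for mounting host paths into the k3s containers; whether k3s's --docker flag then actually works inside the container is precisely the unverified part):

k3d create --volume /var/run/docker.sock:/var/run/docker.sock --server-arg --docker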

Anyways, I'm totally in for adding additional documentation and would be super happy about your contributions to them, since you appear to be a very active user :)

Maybe we can come to the point where we'll be able to create a compatibility matrix for k3d and k3s 👍

from k3d.

goffinf avatar goffinf commented on August 28, 2024

Precisely. I spend a good deal of time at my place of employment writing up a variety of docs, from best practice guides and standard prototypes to run books. I can’t claim to be brilliant at it, but I do recognise the importance of clear information which illustrates the key use cases through descriptions and examples and, importantly, sets out the scope. The latter plays to your comment about any potential tie-in (or not) with k3s, since many no doubt view k3d as a sister project or one that implies some level of dependency. I think it would be good to set that out and the extent to which it is true, perhaps especially so as docker as a container run-time has somewhat less focus these days (you can take Darren’s comment about k3s ... of course I did a DinD implementation ... in a couple of ways I guess).

I have noted from our conversations and other issues, both here and on k3s and k3os (I tend to read them all since there is much to be learned from other people’s concerns, as well as an opportunity to help sometimes), that there is still a level of ‘hidden’ configuration that is not obvious. That is not to say it’s deliberate; it is most often to do with the time available to work on new features vs. documenting existing ones, and of course an assumed level of (pre) knowledge.

Anyways, I am active because I think this project has merit and potential for use by me and my work colleagues. So anything I can do to help I will.

I note Darren commented recently that WSL2 and k3d would be a very satisfactory combination, and I agree. But, since we aren’t in the business of vapourware, there’s still much to offer without WSL2 imo.

I think the next non-rc release might provide a good moment to review docs and examples.

from k3d.

iwilltry42 avatar iwilltry42 commented on August 28, 2024

I'm looking forward to your contributions to k3d's docs :)
Maybe we can open a new issue/project for docs, where we can add parts, which users might like to see there 👍

Anyways... I think this issue is growing a bit too big. I guess the main pain point of this issue has been solved, right? So can it be closed then @goffinf?

from k3d.

mash-graz avatar mash-graz commented on August 28, 2024

"The network-mode=host feature we could add with a hint that it will only work for Linux users."

yes, i still think this variant could be a worthwhile and extraordinarily user-friendly option on linux machines. i'll try to test it and prepare a PR for this feature as soon as possible.

i finally managed to implement this alternative manner of exposing the most common network access variants via a simple --host/--hostnetwork option and opened PR #53.

it has some pros (e.g. you don't have to specify all the ports up front and can reconfigure them via k8s mechanisms), but also cons (e.g. it will most likely only work on the linux platform).

in fact it's only exposing the server on the host network, because remapping multiple workers and their control ports on one machine isn't a trivial task. connecting the workers to the server on the host network is also a bit tricky, because most of docker's internal name services do not work across different networks or aren't available on linux machines. i therefore had to use the gateway IP of our custom network as a workaround to reach the host...
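
for reference, a hedged illustration of that gateway-IP workaround (the network name is assumed to be the default one k3d creates):

docker network inspect k3d-k3s-default -f '{{ (index .IPAM.Config 0).Gateway }}'
# e.g. 172.18.0.1, the address through which containers on the custom network can reach the host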

i'm not sure if it is really a useful improvement after all the wonderful recent port mapping improvements developed by @iwilltry42 and @andyz-dev, but nevertheless i would be happy if you could take a look at it.

from k3d.

iwilltry42 avatar iwilltry42 commented on August 28, 2024

Thanks for your PR @mash-graz , I just have to dig a bit deeper into the networking part to leave a proper review.

from k3d.

goffinf avatar goffinf commented on August 28, 2024

@iwilltry42 My thoughts exactly. This issue has served its purpose and an initial implementation has been delivered. Thank you. I am happy to close this down and raise any additional work as new issues.

from k3d.
