
samba-container's People

Contributors

anoopcs9, dmulder, obnoxxx, phlogistonjohn, spuiuk, synarete


samba-container's Issues

Remove "force user" ?

https://github.com/obnoxxx/samba-container/blob/96855006968ebf1b17b34e758d7d1d994359b3fe/images/samba/smb.import.conf#L18

I was able to remove this line and things continued to work. The uid of the files written matched that in /etc/passwd.

However, I realize that statically baking in uids is probably not the best approach. Plus, some of the container orchestration systems, like kubernetes, may impose constraints on what the smbd in the container can impersonate.

I'd like to discuss how the container might set up IDs/Username/Passwords more dynamically.

Support to add local users to local groups

I'm evaluating if and how to migrate from a local samba installation to samba-container. It seems that most of my requirements are satisfied. I'm working with local users and groups, and I'm happy to see that both are supported sufficiently.

Unfortunately, the final piece of the puzzle is missing: adding local users to local groups.

Since this feature is quite important for me, I created a workaround by manually post-processing the /etc/group file between init and server start.

This workaround has become quite complicated because the components don't play well together:

  • Since I need to modify the group file, I'd like to mount it from the host.
  • samba-container's user configuration doesn't change the content of existing files /etc/passwd and /etc/group
    but creates new files and renames/moves them. This makes it impossible to bind mount these files.
    Fortunately, we have arguments --etc-passwd-path & --etc-group-path to relocate them to a directory that can be mounted from the host.
  • The users_passdb command ignores --etc-passwd-path & --etc-group-path and expects passwd and group in /etc.
    So the files have to be writable in some host-mounted directory and additionally readable in /etc.
    This can be managed either by 3 mounts or by a custom entrypoint that creates symlinks (see the sketch after this list).
  • With this setup, it's not possible to combine initialization and server run,
    because the group file must be modified after user initialization and before server start.
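
For the record, my workaround entrypoint boils down to something like this. This is a minimal sketch: the exact CLI invocations and paths are illustrative (subcommand and flag names are the ones referenced above), and the group edit is hard-coded just to show the idea:

#!/bin/sh
# Hypothetical wrapper entrypoint; /host-etc is a host-mounted directory.
set -e
# 1. Initialize users/groups, relocated out of /etc so the host can see them.
samba-container init --etc-passwd-path=/host-etc/passwd --etc-group-path=/host-etc/group
# 2. Post-process group memberships -- the missing feature this issue asks for.
sed -i 's/^readers:\([^:]*\):2000:.*/readers:\1:2000:alice,bob/' /host-etc/group
# 3. Symlink the files into /etc, where users_passdb expects to find them.
ln -sf /host-etc/passwd /etc/passwd
ln -sf /host-etc/group /etc/group
samba-container users_passdb
# 4. Finally start the server.
exec samba-container run smbd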

I hope you understand that I'd like to avoid this situation.
So I'd like to ask if samba-container can be extended by support for adding users to groups.
The configuration could look like this:

users:
  all_entries:
    - name: alice
      uid: 1001
      gid: 1001
      password: foo
    - name: bob
      uid: 1002
      gid: 1002
      password: bar
groups:
  all_entries:
    - name: readers
      gid: 2000
      members: [ alice, bob ]
    - name: writers
      gid: 2001
      members: [ alice ]

Last but not least: Thanks for maintaining samba-container!

Occasionally clients can't discover AD Global Catalog server

I've been debugging a big Cockpit AD test flake for three days now, and still can't put my finger on it, so maybe you have an idea. This started failing when we moved from https://github.com/Fmstrat/samba-domain/ to https://quay.io/repository/samba.org/samba-ad-server , i.e. the client side didn't change. What this test does is roughly this:

  • Start one "services" VM with a samba-ad-server podman container (called f0.cockpit.lan), with exporting all ports
  • Start one "client/cockpit" VM x0.cockpit.lan with realmd, adcli and such.
  • Create an alice user in Samba AD on "services"
  • On the client, join the domain, and wait until the alice user is visible, i.e. id alice succeeds.

This works most of the time. After joining:

# sssctl domain-status cockpit.lan
Online status: Online

Active servers:
AD Global Catalog: f0.cockpit.lan
AD Domain Controller: f0.cockpit.lan

But in about 10% of local runs and 50% of runs in CI, it looks like this:

Online status: Offline

Active servers:
AD Global Catalog: not connected
AD Domain Controller: cockpit.lan

and /var/log/sssd/sssd_cockpit.lan.log has a similar error:

   *  (2023-11-17  0:47:14): [be[cockpit.lan]] [dp_get_account_info_send] (0x0200): Got request for [0x1][BE_REQ_USER][[email protected]]
   *  (2023-11-17  0:47:14): [be[cockpit.lan]] [dp_attach_req] (0x0400): [RID#5] DP Request [Account #5]: REQ_TRACE: New request. [sssd.nss CID #4] Flags [0x0001].
   *  (2023-11-17  0:47:14): [be[cockpit.lan]] [dp_attach_req] (0x0400): [RID#5] [CID #4] Backend is offline! Using cached data if available
   *  (2023-11-17  0:47:14): [be[cockpit.lan]] [dp_attach_req] (0x0400): [RID#5] Number of active DP request: 1
   *  (2023-11-17  0:47:14): [be[cockpit.lan]] [sss_domain_get_state] (0x1000): [RID#5] Domain cockpit.lan is Active
   *  (2023-11-17  0:47:14): [be[cockpit.lan]] [_dp_req_recv] (0x0400): DP Request [Account #5]: Receiving request data.
   *  (2023-11-17  0:47:14): [be[cockpit.lan]] [dp_req_destructor] (0x0400): DP Request [Account #5]: Request removed.
   *  (2023-11-17  0:47:14): [be[cockpit.lan]] [dp_req_destructor] (0x0400): Number of active DP request: 0
   *  (2023-11-17  0:47:14): [be[cockpit.lan]] [sbus_issue_request_done] (0x0040): sssd.dataprovider.getAccountInfo: Error [1432158212]: SSSD is offline
********************** BACKTRACE DUMP ENDS HERE *********************************

(2023-11-17  0:47:15): [be[cockpit.lan]] [ad_sasl_log] (0x0040): [RID#6] SASL: GSSAPI Error: Unspecified GSS failure.  Minor code may provide more information (Server krbtgt/[email protected] not found in Kerberos database)
   *  ... skipping repetitive backtrace ...
(2023-11-17  0:47:15): [be[cockpit.lan]] [sasl_bind_send] (0x0020): [RID#6] ldap_sasl_interactive_bind_s failed (-2)[Local error]
   *  ... skipping repetitive backtrace ...
(2023-11-17  0:47:15): [be[cockpit.lan]] [sdap_cli_connect_recv] (0x0040): [RID#6] Unable to establish connection [1432158227]: Authentication Failed
   *  ... skipping repetitive backtrace ...
(2023-11-17  0:47:19): [be[cockpit.lan]] [resolv_gethostbyname_done] (0x0040): querying hosts database failed [5]: Input/output error
********************** PREVIOUS MESSAGE WAS TRIGGERED BY THE FOLLOWING BACKTRACE:

This is a race condition -- I can gradually strip down the test until it doesn't involve Cockpit at all any more -- the only effect that it has is to cause some I/O and CPU noise (like packagekit checking for updates). I can synthesize this with client-side commands like this:

        m.write("/etc/realmd.conf", "[cockpit.lan]\nfully-qualified-names = no\n", append=True)
        m.spawn("for i in $(seq 10); do grep -r . /usr >&2; done", "noise")
        time.sleep(1)
        self.assertIn("cockpit.lan", m.execute("realm discover"))
        m.execute(f"echo '{self.admin_password}' | realm join -vU {self.admin_user} cockpit.lan")m
        m.execute('while ! id alice; do sleep 5; done', timeout=300)

This is cockpit test API lingo, but m.execute just runs a shell command on the client VM, while m.spawn() runs it in the background.

Do you happen to have any idea to investigate further what exactly fails here?

deploying smbd+winbindd in a pod requires shared net namespace

Currently, when using smbd and winbindd in tandem to provide shares as a domain member, the running containers must share a net namespace. smbd fails to start when the net namespace is not shared. Errors are similar to:

Security token: (NULL)
UNIX token of user 0
Primary group is 0 and contains 0 supplementary groups
Failed to fetch domain sid for ZZZ-BEST
pop_sec_ctx (0, 0) - sec_ctx_stack_ndx = 0
push_sec_ctx(0, 0) : sec_ctx_stack_ndx = 1
push_conn_ctx(0) : conn_ctx_stack_ndx = 0
setting sec ctx (0, 0) - sec_ctx_stack_ndx = 1
Security token: (NULL)
UNIX token of user 0
Primary group is 0 and contains 0 supplementary groups
Could not find map for sid S-1-5-32-544
create_builtin_administrators: Failed to create Administrators
pop_sec_ctx (0, 0) - sec_ctx_stack_ndx = 0
Failed to check for local Administrators membership (NT_STATUS_INVALID_PARAMETER_MIX)
push_sec_ctx(0, 0) : sec_ctx_stack_ndx = 1
push_conn_ctx(0) : conn_ctx_stack_ndx = 0
setting sec ctx (0, 0) - sec_ctx_stack_ndx = 1
Security token: (NULL)
UNIX token of user 0
Primary group is 0 and contains 0 supplementary groups
Could not find map for sid S-1-5-32-545
create_builtin_users: Failed to create Users
pop_sec_ctx (0, 0) - sec_ctx_stack_ndx = 0
push_sec_ctx(0, 0) : sec_ctx_stack_ndx = 1
push_conn_ctx(0) : conn_ctx_stack_ndx = 0
setting sec ctx (0, 0) - sec_ctx_stack_ndx = 1
Security token: (NULL)
UNIX token of user 0
Primary group is 0 and contains 0 supplementary groups
Could not find map for sid S-1-5-32-546
create_builtin_guests: Failed to create Guests
pop_sec_ctx (0, 0) - sec_ctx_stack_ndx = 0
Failed to check for local Guests membership (NT_STATUS_INVALID_PARAMETER_MIX)
create_local_token failed: NT_STATUS_INVALID_PARAMETER_MIX
ERROR: failed to setup guest info.

This is a minor issue, as a shared net namespace may be needed for other reasons anyway, but I thought it was worth logging.
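
For anyone hitting this, a minimal working layout with podman might look like the following sketch. The image tag, the winbindd subcommand, and the omitted volume/config arguments (...) are illustrative; "run smbd" is the form used by the image's CMD:

podman pod create --name samba --share net,ipc,uts
podman run -d --pod samba ... quay.io/samba.org/samba-server:latest run smbd
podman run -d --pod samba ... quay.io/samba.org/samba-server:latest run winbindd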

rebuild samba-container containers when updating sambacc

Since the containers built in samba-container depend on sambacc, it is important to rebuild the containers every time sambacc is updated. This is not automatically kicked off at this time.

For example:
Sambacc has been updated by commit
4e2a4e3 commands: Add 'check ctdb-nodestatus' command
I would like to update samba-operator to use this new sambacc command. The samba-operator uses the samba-container image built in this repository. However, this repository hasn't rebuilt the image to incorporate the new sambacc, so the image cannot be considered up to date.
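
One way to automate this (a sketch only; nothing like this exists yet, and the workflow file name and secret name are hypothetical) would be a repository_dispatch hook in sambacc that pokes this repository on every push:

# .github/workflows/notify-samba-container.yml in sambacc (hypothetical)
on:
  push:
    branches: [master]
jobs:
  notify:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger samba-container image rebuild
        run: |
          curl -sf -X POST \
            -H "Authorization: token ${{ secrets.DISPATCH_TOKEN }}" \
            -H "Accept: application/vnd.github+json" \
            https://api.github.com/repos/samba-in-kubernetes/samba-container/dispatches \
            -d '{"event_type": "sambacc-updated"}'

The build workflow on this side would then need to list repository_dispatch with that event type among its triggers.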

github actions generates many warnings

I figure this is because some of the actions were configured a while ago and are now "out of date".

Example (15 warnings):
https://github.com/samba-in-kubernetes/samba-container/actions/runs/3391037070

They seem to boil down to:

  • Node.js 12 actions are deprecated.
  • The set-output command is deprecated and will be disabled soon

The latter may be more of a priority for the "... will be disabled soon" bit. Of course, they don't say how soon.
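
For reference, the documented replacement writes key=value pairs to the $GITHUB_OUTPUT file instead (the output name below is just an example):

# old, deprecated form
echo "::set-output name=image_tag::latest"
# new form
echo "image_tag=latest" >> "$GITHUB_OUTPUT"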

Anyone want to update the github actions? It may also be worth checking on the up-to-date-ness of the actions in some of the other sink repos too.

Research the need for Samba VFS fileid

We need to check:

  • do we need a special fileid handling?
  • if yes, is the existing vfs_fileid module sufficient? (see the snippet after this list)
  • if not, we need to code what is missing
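
For reference, enabling the module is a small smb.conf change; the algorithm choice below is only illustrative, not a recommendation:

[global]
    vfs objects = fileid
    fileid:algorithm = fsid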

Configurable Samba AD container.

We have our Samba AD DC container (used for testing) but it has a largely hard-coded configuration. We should make (parts of) this changeable so that at least we can vary parameters during testing.

Tests suites regularly failing: test-ad-server-kubernetes on default,opensuse,amd64

Over the last week or so the test suite has been failing with regularity. One consistent failure case is the suite test-ad-server-kubernetes running with the build parameters default, opensuse, amd64.

I kicked off a rerun recently and the same failure is exhibited.
This needs investigation.

Examples:
https://github.com/samba-in-kubernetes/samba-container/actions/runs/6438421335
https://github.com/samba-in-kubernetes/samba-container/actions/runs/6444959862
https://github.com/samba-in-kubernetes/samba-container/actions/runs/6477174091
https://github.com/samba-in-kubernetes/samba-container/actions/runs/6490445394
https://github.com/samba-in-kubernetes/samba-container/actions/runs/6521431790

Pursue/complete existing work to add JSON output to smbstatus

As part of the metrics story started in #41 we think the cleanest approach is to get the metrics from smbstatus as JSON. There's existing (unmerged) work to add JSON output to samba commands. If we were able to help complete this work for smbstatus it could serve as the basis for our metric collection.
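
For illustration: assuming the flag lands as smbstatus --json, collection could then be scripted along these lines (the jq field name here is speculative):

smbstatus --json | jq '.sessions | length'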

Document the image publish workflow

While this is only directly relevant to project maintainers and devs, it's still good to document the workflow around adding and managing the workflow that publishes images to quay.io, especially the part that lives on quay.io and isn't part of the workflow YAMLs.

Missing DNS forwarder setting

Hello everyone,
just playing with quay.io/samba.org/samba-ad-server:v0.3 to create an on-the-fly AD DC for testing purposes, as a single container not running under K8s.

For my use case it would be of great value to have the possibility to (optionally) set dns forwarder = ...: I found nothing related at https://github.com/samba-in-kubernetes/sambacc/blob/627c6c09a9f198f6f8ad46412bf970f72ad6745e/sambacc/addc.py#L99C1-L100 and I'm not sure how to properly force it into the container.
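
The best workaround I can think of in the meantime is an untested sketch: patch the generated config in the running container and ask Samba to reload it (the forwarder address and container name are examples):

podman exec samba sed -i 's/dns forwarder = .*/dns forwarder = 192.168.1.1/' /etc/samba/smb.conf
podman exec samba smbcontrol all reload-config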

TIA,
Matteo

deploying smbd+winbindd in a pod requires shared pid namespace

When using smbd and winbindd in tandem to provide shares as a domain member the running containers must share a pid namespace. The samba services somehow use the pids to initialize the datagram messaging layer and this fails when the servers both start as pid 1.
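
Building on the pod sketch in the net-namespace issue above, adding pid to the shared set avoids this failure as well (a sketch; pid is not shared by default):

podman pod create --name samba --share net,ipc,uts,pid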

AD: default to a safer example domain

Because I originally cooked up the scripts we use for the AD image as a quick-and-dirty prototype, they set up the Samba AD server with a phony subdomain of a domain I currently have registered for my own personal use.
While it's good that I know there will not be any conflicts, we should probably start cleaning this up sooner rather than later, and prevent something so linked to me from proliferating in the scripts and tests.

I did a quick search and found https://tools.ietf.org/html/rfc6761
Based on this I think we're best off defaulting to ".test." as our domain for local, ephemeral, test domains. At the same time I can make either the whole string or the "" bit based on an environment variable to start making the scripts more usable for other cases and reducing the amount of stuff that is hard coded.

Opinions, @obnoxxx ?

Including additional VFS modules

Currently when we install samba we leave out certain VFS modules. These are modules that are packaged separately in fedora:
samba-vfs-cephfs
samba-vfs-glusterfs
samba-vfs-iouring

Should we just include them in all container images? Or would it be better to have some sort of alternate image/layer to handle it?
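
If we go the layered route, the add-on could be as small as this (a sketch; the base tag and the package set are the ones discussed above):

FROM quay.io/samba.org/samba-server:latest
RUN dnf install -y samba-vfs-cephfs samba-vfs-glusterfs samba-vfs-iouring \
    && dnf clean all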

Makefile: Show better error message when podman/docker command fails

Hi,

there could be a more meaningful error message if the podman/docker command (CONTAINER_CMD) fails.

Example:
If 'podman version' does not work because the machine is not up and running, 'make build' fails with:

build --tag samba-container:latest --tag quay.io/samba.org/samba-server:latest -f images/server/Dockerfile.fedora images/server
make: build: No such file or directory
make: *** [.build.server] Error 1

It would be good to see the podman error to know what actually does not work properly and be able to address this.
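
One possible approach, as a sketch (the actual variable wiring in the Makefile may differ):

# Fail early with a clear message instead of running an empty command.
CONTAINER_CMD ?= $(shell command -v podman 2>/dev/null || command -v docker 2>/dev/null)
ifeq ($(CONTAINER_CMD),)
$(error CONTAINER_CMD is empty: podman/docker not found or not running)
endif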

Thanks!

Cheers,
Karolin

error in build-image script in CI

The new build-image script encountered an exception while running the push job in the CI:
https://github.com/samba-in-kubernetes/samba-container/actions/runs/5640169117/job/15276476477

Step 24/24 : CMD ["run", "smbd"]
 ---> Running in d71e6b4cc928
Removing intermediate container d71e6b4cc928
 ---> c863549f232c
Successfully built c863549f232c
Successfully tagged quay.io/samba.org/samba-server:default-fedora-amd64
Successfully tagged quay.io/samba.org/samba-server:latest
Successfully tagged quay.io/samba.org/samba-server:fedora-latest
Successfully tagged samba-server:default-fedora-amd64
Successfully tagged samba-server:fedora-latest
/home/runner/work/samba-container/samba-container/hack/build-image --without-repo-bases --container-engine=docker  --kind=server --package-source=default --distro-base=fedora  --repo-base=quay.io/samba.org/  --push
Traceback (most recent call last):
  File "/home/runner/work/samba-container/samba-container/hack/build-image", line 593, in <module>
    main()
  File "/home/runner/work/samba-container/samba-container/hack/build-image", line 583, in main
    _action(cli, img)
  File "/home/runner/work/samba-container/samba-container/hack/build-image", line 401, in push
    if tag.endswith(("-latest", "-nightly")):
AttributeError: 'tuple' object has no attribute 'endswith'
make: *** [Makefile:94: push-server] Error 1
Error: Process completed with exit code 2.

Toolbox image build is failing

I also suspect a chicken-and-egg problem is preventing this from resolving naturally.

The error:

 Total                                           5.7 MB/s |  34 MB     00:05     
Running transaction check
Transaction check succeeded.
Running transaction test
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'dnf clean packages'.
Error: Transaction test error:
  file /usr/bin/systemd-tmpfiles from install of systemd-250.3-8.fc36.x86_64 conflicts with file from package systemd-standalone-tmpfiles-250.9-1.fc36.x86_64

The command '/bin/sh -c dnf -y install samba-test' returned a non-zero code: 1
make: *** [Makefile:138: .build.toolbox] Error 1

I was unable to reproduce this locally, which leads me to think there's a chicken-and-egg problem. What I suspect is that at some point the fedora base image and the systemd update conflicted in the image that was created as samba-server:latest. Now every build is based on the samba-server:latest on quay.io, which has the problem. Then, because the build fails, a newer samba-server:latest is not pushed to quay, since all jobs must succeed to trigger the push.

I see two options:

  1. We do not require all jobs to run for push. Each "chain" of actions relates to one image
  2. During the build the job(s) responsible for building the toolbox share/reuse the samba-server image that was just built. This may mean combining the steps responsible for building those images.

I prefer item 2 above. It may seem messy at first glance, but it reflects an underlying relationship between those images.

nightly builds not using nightly samba rpms

+ dnf install --setopt=install_weak_deps=False -y findutils python-pip python3-jsonschema python3-samba python3-pyxattr samba samba-client samba-winbind samba-winbind-clients tdb-tools ctdb
Warning: failed loading '/etc/yum.repos.d/samba-nightly-master.repo', skipping.
Last metadata expiration check: 0:00:06 ago on Tue Jan 17 02:40:06 2023.
Package findutils-1:4.9.0-1.fc36.x86_64 is already installed.

It ends up having the distro packages rather than the nightly builds from centos ci.
I was looking for one thing and found this instead.

Fix centos toolbox container base image

In PR #135 I added a comment to images/toolbox/Containerfile.centos:

# FIXME - this is not a real tag publicly available in the
# quay.io/samba.org/samba-client repository. This only works if you build
# the centos client locally first or acquire the image from a side channel.
# This needs to be converted to something public and/or configurable
# later.
FROM quay.io/samba.org/samba-client:centos-latest

Because the "centos-latest" tag is not universally available. It only ever exists locally on a system that first builds the centos client image. Thus the quay.io/samba.org/ part is a bit of a fib. This issue exists to discuss and do something about this


This raises two questions:
1. What is the appropriate tag we should use?

I don't know. All I know is this will break in some circumstances. But I didn't want this PR to become sidetracked by that issue, so I didn't try to fix it and just left a breadcrumb to follow up on later.

2. Should we still use `samba-client` as the base image for `toolbox`, or maybe use another base (centos, fedora, opensuse)?

I think layering the images is fine, but there are some downsides to it. Let's discuss this more in a new issue or meeting, etc. :-)

Originally posted by @phlogistonjohn in #135 (comment)

ci: reuse images across build steps

I think it should be possible to share the images built by the dedicated build steps with the steps that execute the test scripts in the github ci. At the very least I'd like to investigate if it is possible and know once and for all.

The goal would be to speed up the CI and avoid redundant build steps.
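
One plausible mechanism (a sketch only; job, step, and artifact names are made up) is docker save/load combined with workflow artifacts:

# in the build job
- name: Save image
  run: docker save samba-server:latest | gzip > samba-server.tar.gz
- uses: actions/upload-artifact@v3
  with:
    name: samba-server-image
    path: samba-server.tar.gz

# in the test job
- uses: actions/download-artifact@v3
  with:
    name: samba-server-image
- name: Load image
  run: docker load < samba-server.tar.gz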

How to get a [global] option into smb.conf?

I am trying to get ldapmodify to work in the quay.io/samba.org/samba-ad-server container (after installing /usr/bin/ldapmodify). For that I need to set ldap server require strong auth = no option in smb.conf's [global] section. But despite https://github.com/samba-in-kubernetes/sambacc/blob/master/docs/configuration.md and various examples, it just doesn't seem to stick:

my ~/samba-ad.json:

{
  "samba-container-config": "v0",
  "configs": {
    "demo": {
      "instance_features": ["addc"],
      "domain_settings": "sink",
      "instance_name": "f0",
      "globals": ["default"]
    }
  },
  "domain_settings": {
    "sink": {
      "realm": "COCKPIT.LAN",
      "short_domain": "COCKPIT",
      "admin_password": "foobarFoo123"
    }
  },
  "globals": {
    "default": {
      "options": {
        "ldap server require strong auth": "no"
      }
    }
  }
}

podman run -it --rm --name samba --privileged --network=host \
    -v /root/samba-ad.json:/etc/samba/container.json \
    -h f0.cockpit.lan quay.io/samba.org/samba-ad-server

And yet there's no sign of it:

# podman exec -it samba cat /etc/samba/smb.conf
# Global parameters
[global]
	dns forwarder = 127.0.0.53
	netbios name = F0
	realm = COCKPIT.LAN
	server role = active directory domain controller
	workgroup = COCKPIT
	idmap_ldb:use rfc2307 = yes

[sysvol]
	path = /var/lib/samba/sysvol
	read only = No

[netlogon]
	path = /var/lib/samba/sysvol/cockpit.lan/scripts
	read only = No

I also tried other options, like "guest ok": "no" which is from /usr/share/sambacc/examples/example1.json

How does this work?

Thanks in advance!

Update to actions/checkout@v4

GitHub is emitting complaints that the Node.js version used by our older version of actions/checkout is too old. This issue is a reminder for me to update this. :-)
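
The fix itself should be a one-line bump in each workflow file:

- uses: actions/checkout@v4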

Add AARCH64 Support

Hi, on the x86/Ubuntu nodes of my K8s cluster it works fine (thanks for that), but it's a multi-arch cluster and I can't migrate the container to my arm64/Alpine nodes. Are there any plans to make the container available for the arm architecture?
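
For what it's worth, one plausible route (assuming the base images support arm64) might be a multi-arch build with buildx, e.g.:

docker buildx build --platform linux/amd64,linux/arm64 \
    -t quay.io/samba.org/samba-server:latest \
    -f images/server/Dockerfile.fedora images/server --push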
