omd-labs-docker's Introduction

omd-labs-docker

OMD "Labs" Edition (https://labs.consol.de/de/omd/) on Docker with Ansible support.

Author: Sven Nierlein, sven.nierlein at consol.de

Original Author: Simon Meggle

Automated builds, branches & tags

Each image build is triggered by the OMD Labs build system as soon as new OMD packages become available.

Automated builds are triggered for the following branches:

  • master => :nightly (=snapshot builds)
  • vX.XX => :vX.XX (=stable version)
  • latest => :latest (=latest stable version)

Each image already contains a "demo" site.

Usage

run the "demo" site

Run the "demo" site in OMD Labs Edition:

# Rocky 9
docker run -p 8443:443 consol/omd-labs-rocky
# Debian 11
docker run -p 8443:443 consol/omd-labs-debian
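
Once started, the web UI answers on the mapped HTTPS port. A quick smoke test (assuming the default site name "demo"; -k skips verification of the self-signed certificate):

# the site answers at https://localhost:8443/demo/
curl -k https://localhost:8443/demo/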

Use the Makefile to work with locally built images:

# run a local image
make -f Makefile.omd-labs-rocky start
# build a "local/" image without overwriting the consol/ image
make -f Makefile.omd-labs-rocky build
# start a bash shell in the container
make -f Makefile.omd-labs-rocky bash

The container will log its startup process:

Config and start OMD site: demo
--------------------------------------
Checking for volume mounts...
--------------------------------------
 * local/: [No Volume]
 * etc/: [No Volume]
 * var/: [No Volume]


Checking for Ansible drop-in...
--------------------------------------
Nothing to do (/root/ansible_dropin/playbook.yml not found).

omd-labs: Starting site demo...
--------------------------------------
Preparing tmp directory /omd/sites/demo/tmp...Starting rrdcached...OK
Starting npcd...OK
Starting naemon...OK
Starting dedicated Apache for site demo...OK
Initializing Crontab...OK
OK

Notice the section "Checking for volume mounts". In this case, no host-mounted data volumes were used. Because no volumes were mounted, the start.sh script renamed all .ORIG folders back to their original names.
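
A sketch of that idea (illustrative pseudocode, not the actual start.sh contents):

# no-volume case: move the prepared .ORIG folders back into place
for d in etc local var; do
    mv $OMD_ROOT/${d}.ORIG $OMD_ROOT/$d
done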

Custom sites

Change the default sitename

The default sitename "demo" can be changed. Build a custom image while SITENAME is set:

  • clone this repository, cd into the folder containing the Dockerfile, e.g. omd-labs-rocky
  • build a local image:
export SITENAME=mynewsite; make -f Makefile.omd-labs-rocky build

Each container instance of this image will start the site "mynewsite" instead of "demo".
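
The locally built image can then also be started directly (a sketch, assuming the Makefile tags the local build as local/omd-labs-rocky:nightly):

docker run -p 8443:443 local/omd-labs-rocky:nightly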

Add another site dynamically

If you need to set the site name dynamically without building the whole image from scratch (see above), you can create derived images with another OMD site in addition to the default one. Create a custom Dockerfile which uses the original image as the base image:

FROM consol/omd-labs-rocky:nightly
...
...

The site name in your custom OMD image can now be changed by setting the variable NEW_SITENAME to a new value:

export NEW_SITENAME=anothersite

The ONBUILD instructions in the original Dockerfile do not run when the parent image itself is built; they execute whenever a child image is built FROM it. Think of an ONBUILD instruction as one the parent Dockerfile hands down to the child Dockerfile.
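
A minimal sketch of such a derived build (the tag local/omd-anothersite is an example; this assumes the parent's ONBUILD instructions consume NEW_SITENAME as a Docker build argument, which is also how the docker-compose examples in the issues below pass it):

# Dockerfile
FROM consol/omd-labs-rocky:nightly

# build the child image; the parent's ONBUILD instructions run now
# and create the site "anothersite" instead of "demo"
docker build --build-arg NEW_SITENAME=anothersite -t local/omd-anothersite .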

Use data containers

Host mounted data folders

When the container dies, all monitoring data (configuration files, RRD data, InfluxDB, log files, etc.) is lost with it. To keep the data persistent, use host-mounted volumes.

This command

  make -f Makefile.omd-labs-rocky startvol

starts the container with three volume mounts:

  • ./site/etc => $OMD_ROOT/etc.mount
  • ./site/local => $OMD_ROOT/local.mount
  • ./site/var => $OMD_ROOT/var.mount

On the very first start, these folders are created on the host file system. In that case, start.sh continuously synchronizes the content of the original folders (etc, local, var) into the mounted volumes (etc.mount, local.mount, var.mount) via lsyncd:

  • $OMD_ROOT/etc => $OMD_ROOT/etc.mount
  • $OMD_ROOT/local => $OMD_ROOT/local.mount
  • $OMD_ROOT/var => $OMD_ROOT/var.mount

The startup log of such a first start shows the initial sync:

Config and start OMD site: demo
--------------------------------------
Checking for volume mounts...
--------------------------------------
 * local/: [EXTERNAL Volume] at /opt/omd/sites/demo/local.mount
   * mounted volume is writable
   => local.mount is empty; initial sync from local local ...
   * writing the lsyncd config for local.mount...
 * etc/: [EXTERNAL Volume] at /opt/omd/sites/demo/etc.mount
   * mounted volume is writable
   => etc.mount is empty; initial sync from local etc ...
   * writing the lsyncd config for etc.mount...
 * var/: [EXTERNAL Volume] at /opt/omd/sites/demo/var.mount
   * mounted volume is writable
   => var.mount is empty; initial sync from local var ...
   * writing the lsyncd config for var.mount...

lsyncd: Starting lsyncd ...
--------------------------------------
16:38:44 Normal: --- Startup, daemonizing ---
16:38:44 Normal: --- Startup, daemonizing ---

Checking for Ansible drop-in...
--------------------------------------
Nothing to do (/root/ansible_dropin/playbook.yml not found).

omd-labs: Starting site demo...
--------------------------------------
Preparing tmp directory /omd/sites/demo/tmp...Starting rrdcached...OK
Starting npcd...OK
Starting naemon...OK
Starting dedicated Apache for site demo...OK
Initializing Crontab...OK
OK

On subsequent starts, the folders are no longer empty and are used as usual.

Checking available space on mount point

Before OMD starts, each of the three mount points (etc, local, var) can be checked for free disk space to ensure that the container can store its data. The threshold is passed as an environment variable to the docker run command. If there is not enough space on any mount point, the startup script fails. On a container orchestration platform like OpenShift, this should be handled by your deployment config (i.e. the running pod only gets shut down once the new pod has started properly).

docker run -d -p 8443:443 \
  -v $(pwd)/site/local:/omd/sites/demo/local.mount \
  -v $(pwd)/site/etc:/omd/sites/demo/etc.mount     \
  -v $(pwd)/site/var:/omd/sites/demo/var.mount     \
  -e VOL_VAR_MB_MIN=700000  \
  -e VOL_ETC_MB_MIN=500     \
  -e VOL_LOCAL_MB_MIN=6000  \
  consol/omd-labs-rocky:nightly

docker logs 91992828cc1dca7839cb2842933897b94329fe2c6b395c5ccb8b9fa056057679
Config and start OMD site: demo
--------------------------------------
Checking for volume mounts...
--------------------------------------
 * local/: [EXTERNAL Volume] at /opt/omd/sites/demo/local.mount
   * OK: Free space on /opt/omd/sites/demo/local.mount is 499826MB (required: 6000MB)
   * OK: mounted volume is writable
   <= Volume contains data; sync into local local ...
   * writing the lsyncd config for local.mount...
 * etc/: [EXTERNAL Volume] at /opt/omd/sites/demo/etc.mount
   * OK: Free space on /opt/omd/sites/demo/etc.mount is 499825MB (required: 500MB)
   * OK: mounted volume is writable
   <= Volume contains data; sync into local etc ...
   * writing the lsyncd config for etc.mount...
 * var/: [EXTERNAL Volume] at /opt/omd/sites/demo/var.mount
   * ERROR: Mounted volume has only 499825MB left (required: 700000MB, set by VOL_VAR_MB_MIN).
no crontab for demo
Removing Crontab...Stopping Nagflux.... Not running.
Stopping dedicated Apache for site demo...(not running)...OK
Stopping naemon...not running...OK
Stopping Grafana.... Not running.
Stopping influxdb.... Not running.

Start OMD-Labs with data volumes

To test if everything worked, simply start the container with

  make startvol

This starts the container with the three data volumes. Everything the container writes into one of those three folders is synchronized into the persistent file system.

(make startvol is just a handy shortcut to bring up the container. In Kubernetes/OpenShift you won't need this.)
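
For reference, the same three mounts can also be expressed as a docker-compose sketch (the host paths and the site name "demo" are examples; adjust them to your setup):

services:
  omd:
    image: consol/omd-labs-rocky:nightly
    ports:
      - "8443:443"
    volumes:
      - ./site/local:/omd/sites/demo/local.mount
      - ./site/etc:/omd/sites/demo/etc.mount
      - ./site/var:/omd/sites/demo/var.mount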

Ansible drop-ins

OMD-Labs has shipped with full Ansible support for some time, which we can use to modify the container instance on startup. How does this work?

start sequence

By default, the OMD-labs containers start with the CMD /root/start.sh. This script

  • checks if there is a playbook.yml in $ANSIBLE_DROPIN (default: /root/ansible_dropin, changeable via environment variable). If found, the playbook is executed. It is completely up to you whether you place only a single task in playbook.yml or also include Ansible roles. (Beyond a certain level of complexity, you should think about a separate image, though...)
  • starts the OMD site "demo" & Apache as a foreground process

Include Ansible drop-ins

Just mount a folder containing a valid playbook into the container:

docker run -it -p 8443:443 -v $(pwd)/my_ansible_dropin:/root/ansible_dropin consol/omd-labs-debian
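
The mounted folder only needs a playbook.yml at its top level; any roles it includes can live next to it (my_role is just an example name):

my_ansible_dropin/
  playbook.yml
  my_role/
    tasks/
      main.yml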

Login & Password

When starting the container, OMD will create a random default password for the omdadmin user. There are several ways to handle this:

  1. use a data volume to bring your own htpasswd file
  2. set your default omdadmin password via an Ansible drop-in, e.g. like this:

playbook.yml:

---
- hosts: all
  tasks:
  - shell: sudo su - demo -c "set_admin_password omd"

Debugging

If you want to see more verbose output from Ansible to debug your role, set the environment variable ANSIBLE_VERBOSITY to e.g. 3:

docker run -it -p 8443:443 -e ANSIBLE_VERBOSITY=3 -v $(pwd)/my_ansible_dropin:/root/ansible_dropin consol/omd-labs-debian

omd-labs-docker's People

Contributors

datamuc, dependabot[bot], pottah, sni


omd-labs-docker's Issues

unable to build debian

I tried to build a Debian Docker image with make, but it fails, probably because of an extra %3D at the end of the URL make tries to download; in fact, without the %3D I can get the file with a browser.
Is there a directory where I can put the downloaded file so the build uses the one I downloaded via Firefox?

Please let me know; below is the full transcript of make.


$ make -f Makefile.omd-labs-debian build
./hooks/build 
CLUTTER_IM_MODULE=xim
COLORTERM=truecolor
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
DEFAULTS_PATH=/usr/share/gconf/ubuntu.default.path
DESKTOP_SESSION=ubuntu
DISPLAY=:0
DOCKER_REPO=index.docker.io/consol/omd-labs-debian
GDMSESSION=ubuntu
GJS_DEBUG_OUTPUT=stderr
GJS_DEBUG_TOPICS=JS ERROR;JS LOG
GNOME_DESKTOP_SESSION_ID=this-is-deprecated
GNOME_SHELL_SESSION_MODE=ubuntu
GNOME_TERMINAL_SCREEN=/org/gnome/Terminal/screen/1d63c7dd_f085_467c_9ef8_392577038b2c
GNOME_TERMINAL_SERVICE=:1.86
GPG_AGENT_INFO=/run/user/1000/gnupg/S.gpg-agent:0:1
GTK_IM_MODULE=ibus
GTK_MODULES=gail:atk-bridge
HOME=/home/magowiz
IMAGE_NAME=local/omd-labs-debian:nightly
IM_CONFIG_PHASE=2
LANG=it_IT.UTF-8
LESSCLOSE=/usr/bin/lesspipe %s %s
LESSOPEN=| /usr/bin/lesspipe %s
LOGNAME=magowiz
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:
MAKEFLAGS=
MAKELEVEL=1
MAKE_TERMERR=/dev/pts/3
MAKE_TERMOUT=/dev/pts/3
MANDATORY_PATH=/usr/share/gconf/ubuntu.mandatory.path
MFLAGS=
OLDPWD=/home/magowiz/docker
PATH=/home/magowiz/script:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
PWD=/home/magowiz/docker/omd-labs-docker
QT4_IM_MODULE=xim
QT_ACCESSIBILITY=1
QT_IM_MODULE=ibus
SESSION_MANAGER=local/dell-vs15:@/tmp/.ICE-unix/6988,unix/dell-vs15:/tmp/.ICE-unix/6988
SHELL=/bin/bash
SHLVL=2
SOURCE_BRANCH=master
SSH_AGENT_PID=8341
SSH_AUTH_SOCK=/run/user/1000/keyring/ssh
TERM=xterm-256color
TEXTDOMAINDIR=/usr/share/locale/
TEXTDOMAIN=im-config
USER=magowiz
USERNAME=magowiz
_=/usr/bin/env
VTE_VERSION=5201
WINDOWPATH=2
XAUTHORITY=/run/user/1000/gdm/Xauthority
XDG_CONFIG_DIRS=/etc/xdg/xdg-ubuntu:/etc/xdg
XDG_CURRENT_DESKTOP=ubuntu:GNOME
XDG_DATA_DIRS=/usr/share/ubuntu:/home/magowiz/.local/share/flatpak/exports/share/:/var/lib/flatpak/exports/share/:/usr/local/share/:/usr/share/:/var/lib/snapd/desktop:/home/magowiz/snap/anbox/common/app-data
XDG_MENU_PREFIX=gnome-
XDG_RUNTIME_DIR=/run/user/1000
XDG_SEAT=seat0
XDG_SESSION_DESKTOP=ubuntu
XDG_SESSION_ID=2
XDG_SESSION_TYPE=x11
XDG_VTNR=2
XMODIFIERS=@im=ibus
Sending build context to Docker daemon  197.1kB
Step 1/35 : FROM debian:8
8: Pulling from library/debian
3d77ce4481b1: Already exists 
error pulling image configuration: Get https://production.cloudflare.docker.com/registry-v2/docker/registry/v2/blobs/sha256/4e/4eb8376dc2a33d7b87d6aa2a5b4bd024023382f7094a7e5b1c3bdd0fdaba19f9/data?verify=1529329612-e6Z2JYW9JsC7oOWVMvrzEj2S2Ik%3D: net/http: TLS handshake timeout
Makefile.omd-labs-debian:31: recipe for target 'build' failed
make: *** [build] Error 1

400 Bad Request (plain HTTP to an SSL-enabled port) via reverse proxy to port 80

Hello,

I have a reverse proxy (Traefik) which forwards my requests to port 80 of the container. I access the reverse proxy via HTTPS (port 443). In the container, I have set the Apache mode to "own" via "omd config".
In "own" mode, I assume that the container normally speaks HTTP via port 80, but this does not seem to be the case.

Message in browser:

Bad Request
Your browser sent a request that this server could not understand.
Reason: You're speaking plain HTTP to an SSL-enabled server port.
Instead use the HTTPS scheme to access this URL, please.

curl from container:
to path / (OK):

# curl http://127.0.0.1:80/
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>302 Found</title>
</head><body>
<h1>Found</h1>
<p>The document has moved <a href="http://127.0.0.1/main/">here</a>.</p>
<hr>
<address>Apache/2.4.56 (Debian) Server at 127.0.0.1 Port 80</address>
</body></html>

to site /main/ (NOT OK):

# curl http://127.0.0.1:80/main/
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br />
Reason: You're speaking plain HTTP to an SSL-enabled server port.<br />
 Instead use the HTTPS scheme to access this URL, please.<br />
</p>
</body></html>

My docker-compose.yml

version: '3.7'

services:
  omd:
    build:
      context: .
      pull: true
      dockerfile_inline: |
        FROM consol/omd-labs-debian:latest
      args:
        NEW_SITENAME: main
    restart: unless-stopped
    environment:
      - TZ=Europe/Berlin
    volumes:
      - ./data/local:/opt/omd/sites/main/local.mount
      - ./data/etc:/opt/omd/sites/main/etc.mount
      - ./data/var:/opt/omd/sites/main/var.mount
    networks:
      - traefik-net
      - default
    labels:
      - "traefik.enable=true"
      - "traefik.http.services.omd.loadbalancer.server.port=80"
      - "traefik.http.routers.omd-http.entrypoints=http"
      - "traefik.http.routers.omd-http.rule=Host(`my.host.name`)"
      - "traefik.http.routers.omd-http.middlewares=https-redirect@file"
      - "traefik.http.routers.omd-https.entrypoints=https"
      - "traefik.http.routers.omd-https.rule=Host(`my.host.name`)"
      - "traefik.http.routers.omd-https.middlewares=websocket-header@file"

networks:
  traefik-net:
    external: true
    name: traefik-net
  default:

Am I missing something, does something need to be configured differently, or is this a bug?

omd-labs-ubuntu: docker run fails with wrong permissions

root@###:~# docker run consol/omd-labs-ubuntu
container_linux.go:247: starting container process caused "exec: "/root/start.sh": permission denied"
docker: Error response from daemon: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused "exec: \"/root/start.sh\": permission denied"\n".

Apache 2 error

When I try to start the container, I always get this error:

AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message

I even tried using a docker-compose file and setting the ServerName in Apache's 000-default.conf file, but it still didn't work.

PNP4Nagios reactivates after container rebuild

Monitoring core: naemon
Using omd with docker compose.

  1. Start omd-labs-docker
  2. Open shell, e.g. docker compose exec omd /bin/bash
  3. Switch to site user (here main): su - main
  4. Go to ~/etc/naemon/naemon.d/
  5. There is a pnp4nagios.cfg, because PNP4Nagios is enabled by default
  6. Run omd stop && omd config set PNP4NAGIOS off && omd start
  7. File is gone, PNP4Nagios is disabled.
  8. Run a docker compose restart, PNP4Nagios is still disabled, pnp4nagios.cfg doesn't exist
  9. Run a docker compose down && docker compose up -d, pnp4nagios.cfg exists, so it creates perfdata files
  10. Again open shell (see 2. & 3.), run omd config show PNP4NAGIOS
  11. It shows "off", which is clearly not the case.

I don't know if this is an omd or an omd-labs-docker problem, but because a restart works and only a rebuild fails, I think this is a Docker problem.

My docker-compose.yml:

version: '3.7'

services:
  omd:
    build:
      context: .
      pull: true
      dockerfile_inline: |
        FROM consol/omd-labs-debian:latest
        RUN apt-get update \
            && apt-get install -y iputils-ping nano \
            && rm -rf /var/lib/apt/lists/*
        RUN ln -s /bin/ping /usr/bin/ping && \
            ln -s /bin/ping4 /usr/bin/ping4 && \
            ln -s /bin/ping6 /usr/bin/ping6
      args:
        NEW_SITENAME: main
    restart: unless-stopped
    environment:
      - TZ=Europe/Berlin
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./data/.ssh:/opt/omd/sites/main/.ssh
      - ./data/local:/opt/omd/sites/main/local.mount
      - ./data/etc:/opt/omd/sites/main/etc.mount
      - ./data/var:/opt/omd/sites/main/var.mount

Creating a custom site doesn't work

I tried to create a custom site, but the described way
export NEW_SITENAME=CUSTOMNAME; make -f MAKEFILE build
only builds the site demo. The correct environment is shown in the first step of the build process.
I cloned the GitHub master today.

Volume mounts not working with NEW_SITENAME ARG set

I tried to set a new site name with the ARG parameter, which looked like it worked for creating the new site, but my volume mounts were not being detected. How can I get volume mounts detected for new sites? Here is the docker-compose.yaml I used to test.

version: "3"

services:
  monitor:
    build:
      context: .
      args:
        NEW_SITENAME: "mysite"
    ports:
      - "8443:443"
    volumes:
      - "./etc:/opt/omd/sites/mysite/etc.mount"
      - "./local:/opt/omd/sites/mysitelocal.mount"
      - "./var:/opt/omd/sites/mysite/var.mount"

Dockerfile

FROM consol/omd-labs-ubuntu

Output on terminal:

$ docker-compose build
Building monitor
Step 1/1 : FROM consol/omd-labs-ubuntu
# Executing 3 build triggers
 ---> Using cache
 ---> Running in f04694a3b75f
Removing intermediate container f04694a3b75f
 ---> Running in c494c77fba11
CREATE new site:mysite
Removing Crontab...no crontab for demo
Stopping dedicated Apache for site demo...(not running)...OK
Stopping naemon...not running...OK
npcd was not running... could not stop
Stopping rrdcached...not running.
Removing /omd/sites/demo/tmp from /etc/fstab...OK
Deleting user and group demo...OK
Adding /omd/sites/mysite/tmp to /etc/fstab.
Preparing tmp directory /omd/sites/mysite/tmp...OK
Adding /omd/sites/mysite/tmp to /etc/fstab.
Created new site mysite with version 2.91.20190223-labs-edition.

  The site can be started with omd start mysite.
  The default web UI is available at https://c494c77fba11/mysite/

  The admin user for the web applications is omdadmin with password: xRDEf3kS
  (It can be changed with the 'set_admin_password' command as site user.)

  Please do a su - mysite for administration of this site.

Removing intermediate container c494c77fba11
 ---> 77e65ab25b7d
Successfully built 77e65ab25b7d
Successfully tagged albatross-monitoring_monitor:latest

$ docker-compose up -d && docker-compose logs -f
Recreating albatross-monitoring_monitor_1 ... done
Attaching to albatross-monitoring_monitor_1
monitor_1  | Config and start OMD site: mysite
monitor_1  | --------------------------------------
monitor_1  | Checking for volume mounts...
monitor_1  | --------------------------------------
monitor_1  |  * local/: [No Volume]
monitor_1  |  * etc/: [No Volume]
monitor_1  |  * var/: [No Volume]
monitor_1  | 
monitor_1  | 
monitor_1  | Checking for Ansible drop-in...
monitor_1  | --------------------------------------
monitor_1  | Nothing to do (/root/ansible_dropin/playbook.yml not found).
monitor_1  | 
monitor_1  | crond: Starting ...
monitor_1  | --------------------------------------
monitor_1  | /root/start.sh: line 87: crond: command not found
monitor_1  | 
monitor_1  | omd-labs: Starting site mysite...
monitor_1  | --------------------------------------
monitor_1  | Preparing tmp directory /omd/sites/mysite/tmp...Starting rrdcached...OK
monitor_1  | Starting npcd...OK
monitor_1  | Starting naemon...OK
monitor_1  | Starting dedicated Apache for site mysite...OK
monitor_1  | Initializing Crontab...OK
monitor_1  | OK
monitor_1  | 
monitor_1  | omd-labs: Starting Apache web server...
monitor_1  | --------------------------------------
monitor_1  | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.24.0.2. Set the 'ServerName' directive globally to suppress this message

High CPU load because of rsync

Hi,

we see a lot of CPU load because of the sync processes for the mounted external paths. Is it possible to mount them directly to the correct path instead of the *.mount destination? I think this would reduce the load a lot, or are there issues I'm not aware of?

Config tool does not restart naemon core

When applying config changes through the config tool GUI in Thruk, the restart of the naemon core times out. The resulting output is:


Command Output:
--
Reloading naemon configuration (PID: 421)... Stopping naemon........................................................................................................................... sending SIGKILL....................ERROR Unable to stop naemon. Terminating... Starting naemon...OK  Reloading naemon configuration (PID: 421)... Stopping  naemon........................................................................................................................... sending SIGKILL....................ERROR Unable to stop naemon. Terminating... Starting naemon...OK Warning: waiting for core reload failed.

I have to resort to a docker-compose restart on the command line to apply configuration changes.

Debian image building.

I got this error on two different tries.
Do you know what the problem might be?

Is there some requirement on the host system?
I am building the image on CentOS 7 from China.

I edited Dockerfile.omd-labs-debian a little.
I added these two lines before RUN $HOME/install_common.sh debian:

RUN echo "deb http://mirrors.huaweicloud.com/debian/ stable main contrib" > /etc/apt/sources.list
RUN echo "deb http://security.debian.org/ stable/updates main contrib" >> /etc/apt/sources.list
make -f Makefile.omd-labs-debian build
.....
.....
update-alternatives: using /usr/bin/bsd-mailx to provide /usr/bin/mailx (mailx) in auto mode
Setting up libsmbclient:amd64 (2:4.9.5+dfsg-5+deb10u1) ...
Setting up php-gd (2:7.3+69) ...
Setting up bind9-host (1:9.11.5.P4+dfsg-5.1+deb10u1) ...
Setting up dnsutils (1:9.11.5.P4+dfsg-5.1+deb10u1) ...
Setting up omd-4.01.20200724-labs-edition (1.debian10) ...
update-alternatives: using /omd/versions/4.01.20200724-labs-edition to provide /omd/versions/default (omd) in auto mode
Adding system group omd
sysctl: setting key "net.ipv4.ping_group_range": Read-only file system
dpkg: error processing package omd-4.01.20200724-labs-edition (--configure):
 installed omd-4.01.20200724-labs-edition package post-installation script subprocess returned error exit status 255
dpkg: dependency problems prevent configuration of omd-labs-edition-daily:
 omd-labs-edition-daily depends on omd-4.01.20200724-labs-edition; however:
  Package omd-4.01.20200724-labs-edition is not configured yet.

dpkg: error processing package omd-labs-edition-daily (--configure):
 dependency problems - leaving unconfigured
Processing triggers for libc-bin (2.28-10) ...
Processing triggers for systemd (241-7~deb10u4) ...
Processing triggers for mime-support (3.62) ...
Errors were encountered while processing:
 omd-4.01.20200724-labs-edition
 omd-labs-edition-daily
E: Sub-process /usr/bin/dpkg returned an error code (1)
The command '/bin/sh -c $HOME/install_omd.sh debian $OMD_VERSION' returned a non-zero code: 100
make: *** [build] Error 100

verbose output for ansible dropins

Since dec2fdf, the output of the Ansible drop-in is verbose:


Checking for Ansible drop-in...
--------------------------------------
Executing Ansible drop-in...
Using /root/ansible_dropin/ansible.cfg as config file
Set default localhost to localhost
Loading callback plugin default of type stdout, v2.0 from /omd/versions/default/lib/python/ansible/plugins/callback/__init__.pyc

PLAYBOOK: playbook.yml *********************************************************
1 plays in /root/ansible_dropin/playbook.yml

PLAY [Configure Sakuli E2E checks for Site: demo] ******************************
META: ran handlers

TASK [kyo_checks : Copy Nagios config file] ************************************
task path: /root/ansible_dropin/kyo_checks/tasks/main.yml:3
looking for "./" at "/root/ansible_dropin/kyo_checks/files/./"
Using module file /omd/versions/default/lib/python/ansible/modules/files/stat.py
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'echo ~ && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1497448694.19-142366157854728 `" && echo ansible-tmp-1497448694.19-142366157854728="` echo /root/.ansible/tmp/ansible-tmp-1497448694.19-142366157854728 `" ) && sleep 0'
<localhost> PUT /tmp/tmpyDyk7E TO /root/.ansible/tmp/ansible-tmp-1497448694.19-142366157854728/stat.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1497448694.19-142366157854728/ /root/.ansible/tmp/ansible-tmp-1497448694.19-142366157854728/stat.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1497448694.19-142366157854728/stat.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1497448694.19-142366157854728/" > /dev/null 2>&1 && sleep 0'
<localhost> EXEC /bin/sh -c 'echo ~ && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1497448694.83-64183978601546 `" && echo ansible-tmp-1497448694.83-64183978601546="` echo /root/.ansible/tmp/ansible-tmp-1497448694.83-64183978601546 `" ) && sleep 0'
<localhost> PUT /root/ansible_dropin/kyo_checks/files/kyocera_e2e_nagios_objects.cfg TO /root/.ansible/tmp/ansible-tmp-1497448694.83-64183978601546/source
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1497448694.83-64183978601546/ /root/.ansible/tmp/ansible-tmp-1497448694.83-64183978601546/source && sleep 0'
Using module file /omd/versions/default/lib/python/ansible/modules/files/copy.py
<localhost> PUT /tmp/tmpPuZfaj TO /root/.ansible/tmp/ansible-tmp-1497448694.83-64183978601546/copy.py
<localhost> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1497448694.83-64183978601546/ /root/.ansible/tmp/ansible-tmp-1497448694.83-64183978601546/copy.py && sleep 0'
<localhost> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1497448694.83-64183978601546/copy.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1497448694.83-64183978601546/" > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
    "changed": true, 
    "checksum": "2afc346bc35bf44d9f753e4627c488aee6289a1b", 
    "dest": "/opt/omd/sites/demo/etc/nagios/conf.d/kyocera_e2e_nagios_objects.cfg", 
    "gid": 0, 
    "group": "root", 
    "invocation": {
        "module_args": {
            "attributes": null, 
            "backup": false, 
            "content": null, 
            "delimiter": null, 
            "dest": "/opt/omd/sites/demo/etc/nagios/conf.d/kyocera_e2e_nagios_objects.cfg", 
            "directory_mode": null, 
            "follow": false, 
            "force": true, 
            "group": null, 
            "mode": null, 
            "original_basename": "kyocera_e2e_nagios_objects.cfg", 
            "owner": "demo", 
            "regexp": null, 
            "remote_src": null, 
            "selevel": null, 
            "serole": null, 
            "setype": null, 
            "seuser": null, 
            "src": "/root/.ansible/tmp/ansible-tmp-1497448694.83-64183978601546/source", 
            "unsafe_writes": null, 
            "validate": null
        }
    }, 
    "md5sum": "572369036cdcac3519ec554346bbc4c2", 
    "mode": "0644", 
    "owner": "demo", 
    "size": 1211, 
    "src": "/root/.ansible/tmp/ansible-tmp-1497448694.83-64183978601546/source", 
    "state": "file", 
    "uid": 1000
}
META: ran handlers
META: ran handlers

PLAY RECAP *********************************************************************
localhost                  : ok=1    changed=1    unreachable=0    failed=0   

Running Demo Container Fails With Permissions Error

Docker version 17.04.0-ce, build 4845c56

Symptom:

$ sudo docker run -p 8443:443 consol/omd-labs-debian
container_linux.go:247: starting container process caused "exec: \"/root/start.sh\": permission denied"
docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"/root/start.sh\": permission denied".
ERRO[0000] error getting events from daemon: net/http: request canceled 

This is caused by the start.sh script not being executable inside the container, as visible from:

$ sudo docker run --entrypoint "/bin/bash" -p 8443:443 consol/omd-labs-debian -c "ls -l /root/start.sh; id"
-rw-r--r-- 1 root root 1569 Jun 14 05:26 /root/start.sh
uid=0(root) gid=0(root) groups=0(root)

A temporary workaround is to run the container in the following manner:

sudo docker run --entrypoint "/bin/bash" -p 8443:443 consol/omd-labs-debian -c "chmod +x /root/start.sh; /root/start.sh" 

Starting / restarting site in running container fails mounting tmpfs (debian, ubuntu)

I'm having trouble in Docker with the tmpfs mount step that happens during startup... this happens with omd-labs-docker and has also happened in my own local custom Docker container that I was using to install OMD from scratch. The started container reports all OK with running OMD services, but after stopping the site in the running container, I can't start or restart without encountering a mount error at the temporary filesystem step. This (at least partially) breaks my restore.

Executing post-update script "thruk"...OK
Finished update to version 4.60-labs-edition.
Creating temporary filesystem /omd/sites/examplesite/tmp...mount: /opt/omd/sites/examplesite/tmp: permission denied.
ERROR

fstab looks like this:

# UNCONFIGURED FSTAB FOR BASE SYSTEM
tmpfs  /omd/sites/examplesite/tmp tmpfs noauto,user,mode=755,uid=examplesite,gid=examplesite 0 0

Not sure if this is something I'm missing, or a bug. I've tried on a few different Ubuntu host systems, using both debian and ubuntu makefiles.

In the example here, I'm trying to restore an older (2.4) OMD backup. The upgrade appears to work well, but I get the mount error, and none of the backup's *.cfg objects end up in my running site. The same error occurs when starting or restarting from within a running container, though services seem OK at initial container startup.

Process caused \"exec: \\\"/root/start.sh\\\": permission denied\"\n".

When trying to start the image via the command

sudo docker run -p 8443:443 consol/omd-labs-ubuntu

Tested on 2 different machines: one virtual machine running Ubuntu 16.04 and one physical machine also running Ubuntu 16.04.

The following error pops up

container_linux.go:247: starting container process caused "exec: "/root/start.sh": permission denied"
docker: Error response from daemon: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused "exec: \"/root/start.sh\": permission denied"\n".

Full console log:

sudo docker run -p 8443:443 consol/omd-labs-ubuntu
Unable to find image 'consol/omd-labs-ubuntu:latest' locally
latest: Pulling from consol/omd-labs-ubuntu

660c48dd555d: Pull complete
4c7380416e78: Pull complete
421e436b5f80: Pull complete
e4ce6c3651b3: Pull complete
be588e74bd34: Pull complete
c2b037e20e73: Pull complete
764bf2ce64b7: Pull complete
54f4d3152e06: Pull complete
0b94f502c12f: Pull complete
95a2ede5fb50: Pull complete
3b474f12d095: Pull complete
08b795a537ef: Pull complete
2677429f8eae: Pull complete
Digest: sha256:b4626c7ac6b5bcd55784aee809058f3dc0d1819f6154eb4f38e26c89e0de053c
Status: Downloaded newer image for consol/omd-labs-ubuntu:latest
container_linux.go:247: starting container process caused "exec: "/root/start.sh": permission denied"
docker: Error response from daemon: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused "exec: \"/root/start.sh\": permission denied"\n".

Cron in container stops

Is it expected that cron is always running? It currently is not running within the container.

If it is expected to always be running, shouldn't there be a supervisor service included in the container to ensure that cron is always running?
