
docker-unms's People

Contributors

ernsteeldert, lemmi, nico640, oliviermichaelis, oznu, ptorsten


docker-unms's Issues

Add support for ARM64

Hi @Nico640! Since I've been running a 64-bit OS on my Raspberry Pis, I was interested in adding support for the ARM64 architecture to the UNMS docker image.
Only a couple of small changes were necessary to the docker-s6-debian-node Dockerfile and some more to the Dockerfile in docker-unms, which you can check out here.
The changes were basically all about downloading the correct binaries for s6-overlay and node, as well as bumping the LUA_NGINX_VERSION from 0.10.13 to 0.10.14 which includes arm64 support.

If you're interested in supporting arm64 images I'd be willing to post a proper PR with all necessary changes.

migration to 1.3.7 failed

Well, the good news is that 1.3.7 is here; here is the bad news :(

tail ~/unms/config/unms/logs/unms.log

Running docker-entrypoint /home/app/unms/index.js
Sentry release:
Version: 1.3.7+08951661e0.2021-01-24T13:54:56+01:00
Waiting for database containers
NOTICE: extension "timescaledb" does not exist, skipping
DROP EXTENSION
Restoring backups and/or running migrations
yarn run v1.22.4
$ yarn backup:apply && yarn migrate && yarn check-crm-db
$ node ./cli/apply-backup.js
$ node ./cli/migrate.js
{"name":"UNMS","hostname":"83b53e3ba0c3","pid":17234,"level":30,"msg":"Connected to SiriDB server version: 2.0.42","time":"2021-02-01T23:37:11.568Z","v":0}
{"name":"UNMS","hostname":"83b53e3ba0c3","pid":17234,"level":30,"msg":"SiriDB database 'unms' does not exists. Creating...","time":"2021-02-01T23:37:11.573Z","v":0}
Migration failed: SiriDbError: database directory already exists: /config/siridb/unms/
    at SiriDbConnection.resolveRequest (/home/app/unms/lib/siridb/connection.js:206:17)
    at SiriDbConnection.handleData (/home/app/unms/lib/siridb/connection.js:148:14)
    at Socket.emit (events.js:315:20)
    at addChunk (_stream_readable.js:295:12)
    at readableAddChunk (_stream_readable.js:271:9)
    at Socket.Readable.push (_stream_readable.js:212:10)
    at TCP.onStreamRead (internal/stream_base_commons.js:186:23) { tp: 96 }
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Running docker-entrypoint /home/app/unms/index.js
Sentry release:
Version: 1.3.7+08951661e0.2021-01-24T13:54:56+01:00
Waiting for database containers
NOTICE: extension "timescaledb" does not exist, skipping
DROP EXTENSION
Restoring backups and/or running migrations
yarn run v1.22.4
$ yarn backup:apply && yarn migrate && yarn check-crm-db
$ node ./cli/apply-backup.js
{"name":"netflow","hostname":"83b53e3ba0c3","pid":760,"level":40,"msg":"DB schema version mismatch. DB schema version: 1.2.7, UNMS version: 1.3.7","time":"2021-02-01T23:37:15.668Z","v":0}
$ node ./cli/migrate.js
{"name":"UNMS","hostname":"83b53e3ba0c3","pid":17378,"level":30,"msg":"Connected to SiriDB server version: 2.0.42","time":"2021-02-01T23:37:17.845Z","v":0}
{"name":"UNMS","hostname":"83b53e3ba0c3","pid":17378,"level":30,"msg":"SiriDB database 'unms' does not exists. Creating...","time":"2021-02-01T23:37:17.851Z","v":0}
Migration failed: SiriDbError: database directory already exists: /config/siridb/unms/
    at SiriDbConnection.resolveRequest (/home/app/unms/lib/siridb/connection.js:206:17)
    at SiriDbConnection.handleData (/home/app/unms/lib/siridb/connection.js:148:14)
    at Socket.emit (events.js:315:20)
    at addChunk (_stream_readable.js:295:12)
    at readableAddChunk (_stream_readable.js:271:9)
    at Socket.Readable.push (_stream_readable.js:212:10)
    at TCP.onStreamRead (internal/stream_base_commons.js:186:23) { tp: 96 }
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Running docker-entrypoint /home/app/unms/index.js
Sentry release:
Version: 1.3.7+08951661e0.2021-01-24T13:54:56+01:00
Waiting for database containers
NOTICE: extension "timescaledb" does not exist, skipping
DROP EXTENSION
Restoring backups and/or running migrations
yarn run v1.22.4
$ yarn backup:apply && yarn migrate && yarn check-crm-db
$ node ./cli/apply-backup.js
$ node ./cli/migrate.js

.. and so on.

Can't add Super User

I upgraded from 0.13.3 to 1.0.6 and my only user was not a super user. I cannot find a way to run the command to create the super user. There is no unms-cli in this package
sudo ~unms/app/unms-cli set-superadmin --username USERNAME
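A minimal sketch for looking for an equivalent entry point inside the container itself; the container name unms and the /home/app/unms application path are taken from other issues in this list, and whether a matching CLI script ships in this image is an assumption:

# open a shell in the running container (container name assumed to be "unms")
sudo docker exec -it unms bash
# look for a CLI script that could handle super-admin setup; the path is an assumption
find /home/app/unms -maxdepth 2 -iname '*cli*'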

Can't launch after update

Just updated on a Pi4, and can't get UNMS to load in web browser.

Upgrade steps:

sudo docker pull nico640/docker-unms:armhf
sudo docker stop unms
sudo docker rm unms
sudo docker run -d --name unms -p 80:80 -p 443:443 -p 2055:2055/udp -v /var/lib/docker-config/unms:/config nico640/docker-unms:armhf

Ensured the container is started: sudo docker start unms

Launched URLs from Chrome (on the same network as the Pi) - none work:
https://192.168.10.3/nms/
https://192.168.10.3/nms/login
https://192.168.10.3/nms/dashboard

I've rebooted the Pi and restarted the service, and confirmed it's running:

sudo docker ps
CONTAINER ID   IMAGE                       COMMAND   CREATED          STATUS         PORTS                                                              NAMES
1df4ecd47203   nico640/docker-unms:armhf   "/init"   45 minutes ago   Up 7 minutes   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:2055->2055/udp   unms

It does this and then fails to load... I'd appreciate help straightening out what I did wrong. :) A UniFi controller I have installed on the same Pi (not in a Docker container) is running fine and accessible.
[screenshots attached]
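Since docker ps reports the container as up, the next place to look is the container's own log output; a minimal sketch, assuming the container is still named unms as in the run command above:

# follow the last part of the container log to see why nginx/UNMS never answers
sudo docker logs --tail 200 -f unms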

UNMS now failing to create device backup

I've got two devices on my UNMS install; both have been backing up correctly. However, one no longer backs up (last backup around September). I've attempted a manual backup and this fails. Running UNMS 1.0.5 on x86-64 hardware (unRAID box on an Intel Xeon E3).

Change Inform Port?

Hi, how can I change the inform port from 443 to 8443?

I'm aware you can remap ports in Docker, but the UNMS key still shows 443.
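For reference, a hedged sketch using the PUBLIC_HTTPS_PORT and PUBLIC_WS_PORT variables that appear in other issues in this list; whether they also change the port embedded in the device key is an assumption:

sudo docker run -d --name unms \
  -p 80:80 -p 8443:443 -p 2055:2055/udp \
  -e PUBLIC_HTTPS_PORT=8443 \
  -e PUBLIC_WS_PORT=8443 \
  -v /var/lib/docker-config/unms:/config \
  nico640/docker-unms:latest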

Custom SSL certificates

Hello,

I have a problem with custom SSL certificates.
I don't know if it's related to your Docker image or to UNMS (1.0.7) itself.
I did not have this problem in UNMS 1.0.6 (afaik).

Content of my 'cert' folder is:

cert -> /config/cert
live.crt -> ./FQDN.crt
live.key -> ./FQDN.key
FQDN.crt
FQDN.key

where FQDN is the fully qualified domain name of my Raspberry Pi.

UNMS startup fails with an 'Oops ...' error, and a reload fails with a Chrome warning about a wrong SSL certificate.
When I look into the 'cert' folder, both of my certificates have been replaced with newly generated self-signed certificates.

I can work around it by replacing these self-signed certificates with my own certificates,
logging into the container (docker-compose exec unms bash)
and restarting the web server (nginx -s reload).

Is it possible to avoid overwriting custom certificates?

Thanks
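For reference, a minimal sketch of the approach used in a later issue in this list: keep the files in the usercert directory and point the SSL_CERT/SSL_CERT_KEY variables at them rather than editing cert/ directly. Whether this avoids the overwrite in 1.0.7 is an assumption, and the host path is a placeholder:

# FQDN.crt and FQDN.key are expected in /path/on/host/unms/usercert/
sudo docker run -d --name unms \
  -p 80:80 -p 443:443 -p 2055:2055/udp \
  -e SSL_CERT=FQDN.crt -e SSL_CERT_KEY=FQDN.key \
  -v /path/on/host/unms:/config \
  nico640/docker-unms:armhf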

Issue upgrading to 4.1.2

I upgraded to 4.1.2 earlier today. When the log said the migration script had started I left it; I got back around 5 hours later and found that it could not start up. I have attached a log file.

_unms_logs.txt

1.2.4 Never Starts Up

Pi4 Docker (all latest updates)
Did the same as usual to upgrade - removed the service, container etc. (retaining the config dir) and then ran:

docker service create --name=unms \
--publish=80:80/tcp \
--publish=443:443/tcp \
--publish 2055:2055/udp \
--restart-condition on-failure \
--env TZ=Europe/London \
--mount=type=bind,src=/home/pi/unms,dst=/config \
nico640/docker-unms:armhf

Now UNMS never gets beyond "UNMS is starting... This should not take more than a minute." And after 30 mins the splash changes to "Oops! Something went wrong".

Anything I might be missing?

armhf build not working on pi4?

Hi, I just had to redeploy your UNMS package into Docker on a Pi 4. I had it working okay a few weeks back, but trying to run the same Docker deployment script using the armhf tag I'm now getting an unsupported platform error.

Any known issues for this in the current build?

Thanks

postgres: error while loading shared libraries

This is from a QNAP environment, running on an Annapurna Labs ARM processor.
The container starts, but unfortunately comes up with errors regarding postgres and shared libraries.
So the GUI sort of comes up, but advises that it is starting and ... never starts due to the issues it is having.

A short extract from the errors:

(...)
Protocol 'inet_tcp': register/listen error: econnrefused
Starting rabbitmq-server...
Starting postgres...
postgres: error while loading shared libraries: libsystemd.so.0: ELF load command alignment not page-aligned
/usr/lib/erlang/erts-8.2.1/bin/epmd: error while loading shared libraries: libsystemd.so.0: ELF load command alignment not page-aligned
/var/run/postgresql:5432 - no response
Waiting for postgres to come up...
Starting postgres...
postgres: error while loading shared libraries: libsystemd.so.0: ELF load command alignment not page-aligned
{"name":"netflow","hostname":"UNMS","pid":1115,"level":50,"msg":"Failed to get DB schema version: connect ECONNREFUSED 127.0.0.1:5432","time":"2020-11-28T14:36:30.000Z","v":0}
/var/run/postgresql:5432 - no response
Waiting for postgres to come up...
/var/run/postgresql:5432 - no response
Waiting for postgres to come up...
Starting postgres...
postgres: error while loading shared libraries: libsystemd.so.0: ELF load command alignment not page-aligned
2020/11/28 14:36:31 [error] 2399#2399: *9 connect() failed (111: Connection refused) while connecting to upstream, client: 10.10.10.237, server: , request: "GET /nms/api/v2.1/nms/heartbeat HTTP/1.1", upstream: "http://127.0.0.1:8081/nms/api/v2.1/nms/heartbeat", host: "10.10.10.6", referrer: "https://10.10.10.6/nms
/"
(...)
and so it goes in a loop, showing the same errors as above.
Could you please advise?

Crm bad gateway

Hi,

I upgraded to 1.0.6. First of all, all of our devices (176) went to disconnected with a wrong key error message, and CRM won't start, saying Bad Gateway.

Sudo Lecture

Hello,

I found a new problem with SSL certificate renewal:
The daily certificate renewal procedure now fails because it needs sudo.
Sudo is now configured to print the lecture message, so the error looks like:

Last refresh of SSL certificate had failed.
Timestamp: Today at 6:00
Error:
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
sudo: no tty present and no askpass program specified
Failed to create temp key from '/usercert/***.key'.
Keeping existing certificate for '***@***.***'. >

It does not break UNMS functionality, as it keeps using the existing user certificates.
It will break functionality if somebody uses Let's Encrypt certificates.

Most probably this can be solved by adding the line

Defaults lecture="none"

to /etc/sudoers

Best Regards
Jiri

Question

Are you maintaining this repo, forked from oznu's?

Reverse Proxy / Ability to Configure WSS Ports

Hello!

I first want to say thank you for picking up and starting to maintain a docker version of UNMS. I will also apologize in advance as I'm relatively new to docker.

I was wondering whether anybody you are aware of has tried to put this container behind a reverse proxy. I'm currently using unRaid and the Letsencrypt Nginx reverse proxy to reverse proxy to several different containers. I'm currently able to access UNMS on my local network no problem, but when I try to access it through the reverse proxy I receive a 502 Bad Gateway error. It appears that the proxy can't reach the server.

I've been searching for examples and trying to adapt them to this container and haven't had any luck. Would I need to map the various WSS ports to the same secure port that my reverse proxy is listening on?

Thank you!
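For anyone hitting the same 502, a minimal sketch of an NGINX proxy block that also upgrades WebSocket connections; the server name and upstream address are placeholders, and the exact file layout depends on the proxy image being used:

server {
    listen 443 ssl;
    server_name unms.example.com;

    location / {
        # address where the UNMS container's HTTPS port is published
        proxy_pass https://192.168.1.50:443;
        proxy_set_header Host $host;
        # pass WebSocket upgrade headers so the wss device connections survive the proxy
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}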

UNMS 1.0.5 - Raspberry Pi 3 B+ hangs after some minutes

After a fresh docker-compose installation, the Raspberry Pi 3 B+ hangs after some minutes and I have to power-cycle it.

Here is my docker-compose file:

version: '2'
services:
  unms:
    image: nico640/docker-unms:armhf # use "armhf" instead of "latest" for arm devices
    restart: always
    ports:
      - 80:80
      - 443:443
      - 2055:2055/udp
    environment:
      - TZ=Europe/Berlin
    volumes:
      - ./volumes/unms:/config

Log Rotation

The log file at /srv-internal/data/unms/config/unms/logs/unms.log filled its partition today at 12G and caused the apps to stop functioning.

This is the same issue I believe was described in oznu#48

I see this in my running container:

root@ckv26000:/srv-internal/data/unms# docker exec 305b79531701 cat /etc/logrotate.d/unms
/config/unms/logs/*.log {
size 10M
copytruncate
missingok
rotate 7
compress
delaycompress
}

But it doesn't seem to have any effect.
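One way to narrow this down is to force a rotation run inside the container and see whether the config itself works or whether nothing ever triggers it; a minimal sketch, assuming logrotate and ps are available in the image (the container ID and config path come from the output above):

# force logrotate to process the UNMS config once, with verbose output
docker exec 305b79531701 logrotate -vf /etc/logrotate.d/unms
# check whether a cron daemon is running at all to trigger rotation on schedule
docker exec 305b79531701 ps aux | grep -i cron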

Trying to setup on Synology Nas

On startup I get this error:

{"name":"netflow","hostname":"nico640-docker-unms2","pid":1370,"level":30,"msg":"Cannot determine UNMS IP address (3 ms)","time":"2020-07-08T08:33:15.265Z","v":0}
Checking CRM database schema version.
No data returned from the query.
Retrying in 10s

I have everything set up with the default ports.

DNS requests to UNMS's own FQDN

Small thing I noticed: I see a DNS request from the UNMS controller for its own FQDN every 15s.

Any thoughts as to why? That entry has a 1800s TTL.

Wrong ownership

HINT:  The server must be started by the user that owns the data directory.,
/var/run/postgresql:5432 - no response,
Starting postgres...,
FATAL:  data directory "/config/postgres" has wrong ownership,
2020/04/26 13:46:08 [error] 436#436: *204 connect() failed (111: Connection refused) while connecting to upstream, client: 172.17.0.1, server: , request: "GET /nms/api/v2.1/nms/heartbeat HTTP/1.1", upstream: "http://127.0.0.1:8081/nms/api/v2.1/nms/heartbeat", host: "127.0.0.1", referrer: "https://127.0.0.1/nms/",
Waiting for postgres to come up...,
2020/04/26 13:46:20 [error] 436#436: *204 connect() failed (111: Connection refused) while connecting to upstream, client: 172.17.0.1, server: , request: "GET /nms/api/v2.1/nms/heartbeat HTTP/1.1", upstream: "http://127.0.0.1:8081/nms/api/v2.1/nms/heartbeat", host: "127.0.0.1", referrer: "https://127.0.0.1/nms/"

Running on Docker for Windows with Linux subsystem

Issues with custom certificate

I can't seem to get this to work with a custom SSL cert. I created the self-signed cert using my CA in pfSense. That CA is trusted on my browser as I can access services using self-signed certs without an SSL error. I created the following directories

/docker-data/unms
/docker-data/unms/usercert

I downloaded the CRT and KEY files from pfSense and placed them in /docker-data/unms/usercert

These directories and files show ownership as administrator:administrator (my default user on this host). The files are named custom.crt and custom.key and have permissions equal to 664 (-rw-rw-r--)

Here is my compose file

version: '2'
services:
  unms:
    container_name: unms-controller
    image: nico640/docker-unms:latest
    restart: always
    ports:
      - 5080:80
      - 7443:443
      - 3055:2055/udp
    environment:
      - TZ=America/New_York
      - PUBLIC_HTTPS_PORT=7443
      - PUBLIC_WS_PORT=7443
      - SSL_CERT=custom.crt
      - SSL_CERT_KEY=custom.key
    volumes:
      - /docker-data/unms:/config

And the ls -l of the directories just for clarity

administrator@docker01:/docker-data$ ls -l
drwxrwxr-x  3 administrator administrator 4096 Nov 23 19:23 unms

administrator@docker01:/docker-data$ cd unms/

administrator@docker01:/docker-data/unms$ ls -l
total 8
-rw-rw-r-- 1 administrator administrator  402 Nov 23 19:12 docker-compose.yml
drwxrwxr-x 2 administrator administrator 4096 Nov 23 19:23 usercert

administrator@docker01:/docker-data/unms$ cd usercert/

administrator@docker01:/docker-data/unms/usercert$ ls -l
total 8
-rw-rw-r-- 1 administrator administrator 1631 Nov 23 19:23 custom.crt
-rw-rw-r-- 1 administrator administrator 1704 Nov 23 19:23 custom.key
administrator@docker01:/docker-data/unms/usercert$

After running the compose file the directory structure looks like this

administrator@docker01:/docker-data$ ls -l
drwxrwxr-x  9           911           911 4096 Nov 23 19:31 unms

administrator@docker01:/docker-data$ cd unms/

administrator@docker01:/docker-data/unms$ ls -l
total 32
drwxr-xr-x  2 administrator administrator 4096 Nov 23 19:32 cert
-rw-rw-r--  1 administrator administrator  402 Nov 23 19:12 docker-compose.yml
drwxr-xr-x  2 nobody        nogroup       4096 Nov 23 19:31 logs
drwx------ 19 messagebus    crontab       4096 Nov 23 19:32 postgres
drwxr-xr-x  2           911           911 4096 Nov 23 19:31 redis
drwxr-xr-x  3 root          root          4096 Nov 23 19:32 siridb
drwxr-xr-x  8 root          root          4096 Nov 23 19:32 unms
drwxrwxr-x  2 administrator administrator 4096 Nov 23 19:30 usercert

administrator@docker01:/docker-data/unms$ cd usercert/

administrator@docker01:/docker-data/unms/usercert$ ls -l
total 8
-rw-rw-r-- 1 administrator administrator 1631 Nov 23 19:23 custom.crt
-rw-rw-r-- 1 administrator administrator 1704 Nov 23 19:23 custom.key

administrator@docker01:/docker-data/unms/usercert$ cd ..

administrator@docker01:/docker-data/unms$ cd cert/

administrator@docker01:/docker-data/unms/cert$ ls -l
total 8
-rw-r--r-- 1 administrator administrator 1631 Nov 23 19:31 custom.crt
-rw------- 1 administrator administrator 1704 Nov 23 19:31 custom.key
lrwxrwxrwx 1 administrator nogroup         12 Nov 23 19:32 live.crt -> ./custom.crt
lrwxrwxrwx 1 administrator nogroup         12 Nov 23 19:32 live.key -> ./custom.key

So the UNMS directory changes to 911:911, the cert directory gets created as administrator:administrator, and the custom.key file gets copied into the cert directory and has permissions as 600.

The links in this directory are administrator:nogroup.

The logs show this during startup:

unms-controller | Enabling UNMS https and wss connections on port 443
unms-controller | Updating custom certificate.
unms-controller | mv: cannot create regular file '/cert/custom.key': Permission denied
unms-controller | Failed to copy key.
unms-controller | No certificate found.
unms-controller | Generating self-signed certificate for 'localhost'.
unms-controller | Waiting for rabbit@5a2a3efbb160 ...
unms-controller | pid is 428 ...
unms-controller |
unms-controller |               RabbitMQ 3.6.6. Copyright (C) 2007-2016 Pivotal Software, Inc.
unms-controller |   ##  ##      Licensed under the MPL.  See http://www.rabbitmq.com/
unms-controller |   ##  ##
unms-controller |   ##########  Logs: /var/log/rabbitmq/[email protected]
unms-controller |   ######  ##        /var/log/rabbitmq/[email protected]
unms-controller |   ##########
unms-controller |               Starting broker...
unms-controller | FATAL:  role "root" does not exist
unms-controller | /var/run/postgresql:5432 - accepting connections
unms-controller | FATAL:  role "root" does not exist
unms-controller | /var/run/postgresql:5432 - accepting connections
unms-controller | 1000
unms-controller | /usr/src/ucrm/scripts/init_log.sh
unms-controller | /usr/src/ucrm/scripts/dirs.sh
unms-controller | Failed to generate self-signed certificate for 'localhost'
unms-controller | 2020/11/23 19:31:11 [emerg] 913#913: open() "/etc/nginx/ip-whitelist.conf" failed (2: No such file or directory) in /etc/nginx/conf.d/unms-https+wss.conf:36
unms-controller | nginx: [emerg] open() "/etc/nginx/ip-whitelist.conf" failed (2: No such file or directory) in /etc/nginx/conf.d/unms-https+wss.conf:36
unms-controller | Starting nginx...

And trying to access the controller after that results in a privacy error "NET::ERR_CERT_INVALID"; when I click Advanced there is no option to proceed anyway, and it states:

192.168.1.30 normally uses encryption to protect your information. When Google Chrome tried to connect to 192.168.1.30 this time, the website sent back unusual and incorrect credentials. This may happen when an attacker is trying to pretend to be 192.168.1.30, or a Wi-Fi sign-in screen has interrupted the connection. Your information is still secure because Google Chrome stopped the connection before any data was exchanged.

You cannot visit 192.168.1.30 right now because the website sent scrambled credentials that Google Chrome cannot process. Network errors and attacks are usually temporary, so this page will probably work later.

The cert that was created has a common name of unms.ad.mydomain.com with an alternate name of 192.168.1.30. This is how I created my other certs and accessing them via fqdn or IP results in no SSL warnings.

If I attempt to connect via the fqdn I get the same thing

unms.ad.mydomain.com normally uses encryption to protect your information. When Google Chrome tried to connect to unms.ad.mydomain.com this time, the website sent back unusual and incorrect credentials. This may happen when an attacker is trying to pretend to be unms.ad.mydomain.com, or a Wi-Fi sign-in screen has interrupted the connection. Your information is still secure because Google Chrome stopped the connection before any data was exchanged.

You cannot visit unms.ad.mydomain.com right now because the website sent scrambled credentials that Google Chrome cannot process. Network errors and attacks are usually temporary, so this page will probably work later.

I'm assuming it's some sort of permissions error with all the different ownership and permissions that are created when the container is first run, but I am unsure. If I run the container without the custom SSL env variables it creates a localhost self-signed cert and I am able to connect to the controller via IP (after clicking proceed anyway on the SSL warning). However, after a couple of minutes the session seems to disconnect: there's a warning popup on the controller that the connection has been lost, and if I refresh the page I am presented with the SSL warning again. Seems to happen every 2-3 minutes.
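One hedged thing to try, based on the "mv: cannot create regular file '/cert/custom.key': Permission denied" line and the config tree ending up owned by UID 911: hand the whole host directory to that UID before starting the container. That 911 is the user the entrypoint copies certificates as is an assumption drawn from the ls output above, not a confirmed fix:

# stop the stack, give the config tree to the in-container user (UID/GID 911), start again
docker-compose down
sudo chown -R 911:911 /docker-data/unms
docker-compose up -d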

Postgres not initializing

I had this up and running fine, and then when I tried to access it last week it wasn't coming up properly. I blew away the config directory, installed it again, but now when it starts up it does not seem to be creating the proper config files.

It creates these directories:

cert/ logs/ postgres/ redis/ unms@ usercert/

but, for example, the postgres dir is empty and when you look at the logs it's just the following in a loop:


Running entrypoint.sh
Will use existing SSL certificate
Entrypoint finished
Calling exec
postgres: could not access the server configuration file "/config/postgres/postgresql.conf": No such file or directory
2020/01/17 22:06:39 [emerg] 7699#7699: open() "/data/log/ucrm/nginx/suspended_service_access.log" failed (40: Too many levels of symbolic links)

nginx: [emerg] open() "/data/log/ucrm/nginx/suspended_service_access.log" failed (40: Too many levels of symbolic links)

/var/run/postgresql:5432 - no response
Waiting for postgres to come up...
/var/run/postgresql:5432 - no response
Waiting for postgres to come up...
Starting nginx...
Running entrypoint.sh
Starting postgres...
Will use existing SSL certificate
Entrypoint finished
Calling exec
postgres: could not access the server configuration file "/config/postgres/postgresql.conf": No such file or directory
2020/01/17 22:06:40 [emerg] 7716#7716: open() "/data/log/ucrm/nginx/suspended_service_access.log" failed (40: Too many levels of symbolic links)

nginx: [emerg] open() "/data/log/ucrm/nginx/suspended_service_access.log" failed (40: Too many levels of symbolic links)


Is there something obvious I'm missing?

Unable to remove 2FA via cli

Hello, I'm trying to disable 2FA for a user. I'm having trouble finding how to run

unms-cli disable-two-factor --username <username>

on this container

Container on QNAP error

Just noticed that UNMS wasn't working and was throwing the following errors in the log:

[cont-init.d] 10-adduser: exited 0.,
[cont-init.d] 20-set-timezone: executing... ,
[cont-init.d] 20-set-timezone: exited 0.,
[cont-init.d] 40-prepare: executing... ,
ln: failed to create symbolic link '/www/firmwares/firmwares': File exists,
ln: failed to create symbolic link '/cert/cert': File exists,
ln: failed to create symbolic link '/usercert/usercert': File exists,
[cont-init.d] 40-prepare: exited 0.,
[cont-init.d] 50-postgres: executing... ,
Database already configured,
[cont-init.d] 50-postgres: exited 0.,
[cont-init.d] done.,
[services.d] starting services,
Starting postgres...,
Waiting for rabbitmq to start...,
Starting redis...,
Starting rabbitmq-server...,
[services.d] done.,
Starting siridb-server...,
[Redis 3.2.6 (00000000/0) 64 bit startup banner - running in standalone mode, port 6379, PID 370]
370:M 03 Dec 10:44:37.282 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.,
370:M 03 Dec 10:44:37.282 # Server started, Redis version 3.2.6,
370:M 03 Dec 10:44:37.282 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.,
Starting nginx...,
370:M 03 Dec 10:44:37.285 * DB loaded from append only file: 0.002 seconds,
370:M 03 Dec 10:44:37.285 * The server is now ready to accept connections on port 6379,
[W 2020-12-03 10:44:37] We want to set a max-open-files value which exceeds 50% of the current hard limit.,
,
We will use 32000 as max_open_files for now.,
Please increase the hard-limit using:,
ulimit -Hn 65535,
Running entrypoint.sh,
LOG: could not bind IPv6 socket: Cannot assign requested address,
HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.,
Will use existing SSL certificate,
[E 2020-12-03 10:44:37] Unexpected Series ID 1606348805 is found in shard 1606348800 (/config/siridb/unms/shards/1606348800.sdb) at position 38. This indicates that this shard is probably corrupt. The next optimize cycle will most likely fix this shard but you might loose some data.,
[C 2020-12-03 10:44:37] Index and/or shard corrupt: '/config/siridb/unms/shards/1606348802.sdb',
[E 2020-12-03 10:44:37] Error while loading shard: '1606348802.sdb',
[E 2020-12-03 10:44:37] Could not read shards for database 'unms',
[E 2020-12-03 10:44:37] Could not load 'unms'.,
Entrypoint finished,
Calling exec ,
LOG: database system was shut down at 2020-12-03 10:44:32 EST,
LOG: MultiXact member wraparound protections are now enabled,
LOG: database system is ready to accept connections,
LOG: autovacuum launcher started,
FATAL: role "root" does not exist,
/var/run/postgresql:5432 - accepting connections,
FATAL: role "root" does not exist,
/var/run/postgresql:5432 - accepting connections,
1000,
2020/12/03 10:44:37 [alert] 419#419: detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html),
nginx: [alert] detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html),
/usr/src/ucrm/scripts/init_log.sh,
/usr/src/ucrm/scripts/dirs.sh,
{"message":"Creating directories.","channel":"dirs.sh","datetime":"2020-12-03T10:44:37-05:00","severity":"INFO","level":200},
Running docker-entrypoint npm start,
Sentry release: ,
Version: 1.2.7+258cbb8a49.2020-09-18T14:42:15+02:00,
Waiting for database containers,
LOG: incomplete startup packet,
NOTICE: extension "timescaledb" does not exist, skipping,
DROP EXTENSION,
{"message":"Done creating directories.","channel":"dirs.sh","datetime":"2020-12-03T10:44:38-05:00","severity":"INFO","level":200},
{"message":"Creating symbolic links.","channel":"dirs.sh","datetime":"2020-12-03T10:44:38-05:00","severity":"INFO","level":200},
2020/12/03 10:44:38 [error] 461#461: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.0.1, server: , request: "GET // HTTP/1.1", upstream: "http://127.0.0.1:8082//", host: "192.168.0.55",
{"message":"Creating symbolic link for /usr/src/ucrm/app/data.","channel":"dirs.sh","datetime":"2020-12-03T10:44:38-05:00","severity":"INFO","level":200},
{"message":"Creating symbolic link for /usr/src/ucrm/app/EmailQueue.","channel":"dirs.sh","datetime":"2020-12-03T10:44:39-05:00","severity":"INFO","level":200},
Waiting for rabbit@unms ...,
pid is 404 ...,
{"message":"Creating symbolic link for /usr/src/ucrm/app/logs.","channel":"dirs.sh","datetime":"2020-12-03T10:44:39-05:00","severity":"INFO","level":200},
{"message":"Creating symbolic link for /usr/src/ucrm/web/uploads.","channel":"dirs.sh","datetime":"2020-12-03T10:44:39-05:00","severity":"INFO","level":200},
{"message":"Creating symbolic links for /data/updates.","channel":"dirs.sh","datetime":"2020-12-03T10:44:39-05:00","severity":"INFO","level":200},
{"message":"Publishing current /usr/src/ucrm/app/config/version.yml.","channel":"dirs.sh","datetime":"2020-12-03T10:44:39-05:00","severity":"INFO","level":200},
{"message":"Done creating symbolic links.","channel":"dirs.sh","datetime":"2020-12-03T10:44:40-05:00","severity":"INFO","level":200},
/usr/src/ucrm/scripts/parameters.sh,
{"message":"Replacing configuration parameters.","channel":"parameters.sh","datetime":"2020-12-03T10:44:40-05:00","severity":"INFO","level":200},
[RabbitMQ 3.6.6 startup banner - logs under /var/log/rabbitmq/]
Starting broker...,
completed with 0 plugins.,
Starting unms-netflow...,
370:M 03 Dec 10:44:43.375 * Background append only file rewriting started by pid 1118,
Background append only file rewriting started,
Restoring backups and/or running migrations,
370:M 03 Dec 10:44:43.519 * AOF rewrite child asks to stop sending diffs.,
1118:C 03 Dec 10:44:43.519 * Parent agreed to stop sending diffs. Finalizing AOF...,
1118:C 03 Dec 10:44:43.519 * Concatenating 0.00 MB of AOF diff received from parent.,
1118:C 03 Dec 10:44:43.520 * SYNC append only file rewrite performed,
1118:C 03 Dec 10:44:43.520 * AOF rewrite: 0 MB of memory used by copy-on-write,
370:M 03 Dec 10:44:43.546 * Background AOF rewrite terminated with success,
370:M 03 Dec 10:44:43.546 * Residual parent diff successfully flushed to the rewritten AOF (0.00 MB),
370:M 03 Dec 10:44:43.546 * Background AOF rewrite finished successfully,
yarn run v1.21.1,
$ yarn backup:apply && yarn migrate && yarn check-crm-db,
{"name":"netflow","hostname":"unms","pid":1106,"level":30,"msg":"Listening for netflow packets on port 2055 (9 ms)","time":"2020-12-03T15:44:44.383Z","v":0},
{"name":"netflow","hostname":"unms","pid":1106,"level":30,"msg":"Initializing database","time":"2020-12-03T15:44:44.391Z","v":0},
{"name":"netflow","hostname":"unms","pid":1106,"level":30,"msg":"Database initialized (51 ms)","time":"2020-12-03T15:44:44.442Z","v":0},
{"name":"netflow","hostname":"unms","pid":1106,"level":30,"msg":"Initializing RabbitMQ","time":"2020-12-03T15:44:44.442Z","v":0},
{"name":"netflow","hostname":"unms","pid":1106,"level":30,"msg":"RabbitMQ initialized (115 ms)","time":"2020-12-03T15:44:44.557Z","v":0},
{"name":"netflow","hostname":"unms","pid":1106,"level":30,"msg":"Subscribed for managed networks change notifications at queue 'settings.networks.changed'","time":"2020-12-03T15:44:44.578Z","v":0},
{"name":"netflow","hostname":"unms","pid":1106,"level":30,"msg":"Subscribed for managed networks change notifications at queue 'settings.blacklist.changed'","time":"2020-12-03T15:44:44.592Z","v":0},
{"name":"netflow","hostname":"unms","pid":1106,"level":30,"msg":"Subscribed for site IP addresses change notifications at queue 'site.ips.changed'","time":"2020-12-03T15:44:44.603Z","v":0},
{"name":"netflow","hostname":"unms","pid":1106,"level":30,"msg":"Subscribed for device authorizations at queue 'device..authorized'","time":"2020-12-03T15:44:44.614Z","v":0},
{"name":"netflow","hostname":"unms","pid":1106,"level":30,"msg":"Subscribed for device authorizations at queue 'device.
.settings.changed'","time":"2020-12-03T15:44:44.625Z","v":0},
{"name":"netflow","hostname":"unms","pid":1106,"level":30,"msg":"Subscribed for settings change notifications at queue 'settings.changed'","time":"2020-12-03T15:44:44.636Z","v":0},
{"name":"netflow","hostname":"unms","pid":1106,"level":30,"msg":"Subscribed for log verbosity change notifications at queue 'server.log.verbosity'","time":"2020-12-03T15:44:44.647Z","v":0},
{"name":"netflow","hostname":"unms","pid":1106,"level":30,"msg":"Loaded list of managed networks from database (10 ms) [ '192.168.0.0/24', '10.0.0.0/24', '11.0.0.0/24' ]","time":"2020-12-03T15:44:44.657Z","v":0},
{"name":"netflow","hostname":"unms","pid":1106,"level":30,"msg":"Loaded blacklist from database (3 ms) []","time":"2020-12-03T15:44:44.662Z","v":0},
{"name":"netflow","hostname":"unms","pid":1106,"level":30,"msg":"Determined UNMS IP address '192.168.0.55' (5 ms)","time":"2020-12-03T15:44:44.679Z","v":0},
$ node ./cli/apply-backup.js,
$ node ./cli/migrate.js,
{"message":"Done replacing configuration parameters.","channel":"parameters.sh","datetime":"2020-12-03T10:44:47-05:00","severity":"INFO","level":200},
/usr/src/ucrm/scripts/cron_jobs_disable.sh,
crontab /tmp/crontabs/server,
/usr/src/ucrm/scripts/database_ready.sh,
{"message":"Waiting for database.","channel":"database_ready.sh","datetime":"2020-12-03T10:44:47-05:00","severity":"INFO","level":200},
{"message":"Database ready.","channel":"database_ready.sh","datetime":"2020-12-03T10:44:48-05:00","severity":"INFO","level":200},
/usr/src/ucrm/scripts/database_create_extensions.sh,
{"message":"Creating database extensions.","channel":"database_create_extensions.sh","datetime":"2020-12-03T10:44:48-05:00","severity":"INFO","level":200},
{"message":"Extension "citext" exists.","channel":"database_create_extensions.sh","datetime":"2020-12-03T10:44:48-05:00","severity":"INFO","level":200},
{"message":"Extension "unaccent" exists.","channel":"database_create_extensions.sh","datetime":"2020-12-03T10:44:49-05:00","severity":"INFO","level":200},
Extension 'uuid-ossp:1.1 installed.,
Extension 'pgcrypto:1.3 installed.,
Extension 'cube:1.2 installed.,
Extension 'earthdistance:1.1 installed.,
Extension 'unaccent:1.1 installed.,
Migrations finished in 0.096s,
Setting database schema version to 1.2.7,
Done,
{"message":"Extension "uuid-ossp" exists.","channel":"database_create_extensions.sh","datetime":"2020-12-03T10:44:49-05:00","severity":"INFO","level":200},
{"message":"Finished creating database extensions.","channel":"database_create_extensions.sh","datetime":"2020-12-03T10:44:49-05:00","severity":"INFO","level":200},
/usr/src/ucrm/scripts/migrate.sh,
{"message":"Start replace shared views.","channel":"migrate.sh","datetime":"2020-12-03T10:44:49-05:00","severity":"INFO","level":200},
$ node cli/check-crm-db.js,
{"message":"Done replace shared views.","channel":"migrate.sh","datetime":"2020-12-03T10:44:50-05:00","severity":"INFO","level":200},
{"message":"Start database migration.","channel":"migrate.sh","datetime":"2020-12-03T10:44:50-05:00","severity":"INFO","level":200},
Checking CRM database schema version.,
No data returned from the query.,
Retrying in 10s,
{"message":"Done database migration.","channel":"migrate.sh","datetime":"2020-12-03T10:44:51-05:00","severity":"INFO","level":200},
{"message":"Start shared views migration.","channel":"migrate.sh","datetime":"2020-12-03T10:44:51-05:00","severity":"INFO","level":200},
{"message":"Done shared views migration.","channel":"migrate.sh","datetime":"2020-12-03T10:44:52-05:00","severity":"INFO","level":200},
{"message":"Start notification templates migration.","channel":"migrate.sh","datetime":"2020-12-03T10:44:52-05:00","severity":"INFO","level":200},
{"message":"Done notification templates migration.","channel":"migrate.sh","datetime":"2020-12-03T10:44:53-05:00","severity":"INFO","level":200},
{"message":"Bumping UNMS version from ENV variable to database.","channel":"migrate.sh","datetime":"2020-12-03T10:44:53-05:00","severity":"INFO","level":200},
{"message":"Done bumping UNMS version from ENV variable to database.","channel":"migrate.sh","datetime":"2020-12-03T10:44:53-05:00","severity":"INFO","level":200},
/usr/src/ucrm/scripts/database_migrations_ready.sh,
{"message":"Waiting for migrations.","channel":"database_migrations_ready.sh","datetime":"2020-12-03T10:44:53-05:00","severity":"INFO","level":200},
{"message":"Migrations ready.","channel":"database_migrations_ready.sh","datetime":"2020-12-03T10:44:54-05:00","severity":"INFO","level":200},
/usr/src/ucrm/scripts/web.sh,
{"message":"Preparing server configuration.","channel":"web.sh","datetime":"2020-12-03T10:44:54-05:00","severity":"INFO","level":200},
cp: cannot stat '/etc/nginx/available-servers/ucrm.conf': No such file or directory,
{"message":"Done preparing server configuration.","channel":"web.sh","datetime":"2020-12-03T10:44:54-05:00","severity":"INFO","level":200},
{"message":"Updating environment variables.","channel":"web.sh","datetime":"2020-12-03T10:44:54-05:00","severity":"INFO","level":200},
{"message":"Done updating environment variables.","channel":"web.sh","datetime":"2020-12-03T10:44:55-05:00","severity":"INFO","level":200},
{"message":"Updating UCRM version.","channel":"web.sh","datetime":"2020-12-03T10:44:55-05:00","severity":"INFO","level":200},
{"message":"Done UCRM version.","channel":"web.sh","datetime":"2020-12-03T10:44:56-05:00","severity":"INFO","level":200},
Setting up the Rabbit MQ fabric,
{"message":"Building search index.","channel":"web.sh","datetime":"2020-12-03T10:44:57-05:00","severity":"INFO","level":200},
2020/12/03 10:44:58 [error] 461#461: *3 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.0.1, server: , request: "GET // HTTP/1.1", upstream: "http://127.0.0.1:8082//", host: "192.168.0.55",
Checking CRM database schema version.,
CRM database schema version is '1.2.7'.,
Done in 17.49s.,
Exec npm start with memory limit 2048,
,

[email protected] prestart /home/app/unms,
yarn backup:apply && yarn migrate && yarn check-crm-db,
,
yarn run v1.21.1,
$ node ./cli/apply-backup.js,
{"message":"Generating static suspension page.","channel":"web.sh","datetime":"2020-12-03T10:45:05-05:00","severity":"INFO","level":200},
{"message":"Done generating static suspension page.","channel":"web.sh","datetime":"2020-12-03T10:45:06-05:00","severity":"INFO","level":200},
{"message":"Re/generating plugin symlinks and config files.","channel":"web.sh","datetime":"2020-12-03T10:45:06-05:00","severity":"INFO","level":200},
Done in 2.40s.,
yarn run v1.21.1,
$ node ./cli/migrate.js,
{"message":"Done re/generating plugin symlinks and config files.","channel":"web.sh","datetime":"2020-12-03T10:45:07-05:00","severity":"INFO","level":200},
{"message":"Resetting in-progress states.","channel":"web.sh","datetime":"2020-12-03T10:45:07-05:00","severity":"INFO","level":200},
Extension 'uuid-ossp:1.1 installed.,
Extension 'pgcrypto:1.3 installed.,
Extension 'cube:1.2 installed.,
Extension 'earthdistance:1.1 installed.,
Extension 'unaccent:1.1 installed.,
Migrations finished in 0.101s,
Setting database schema version to 1.2.7,
Done,
Done in 1.53s.,
{"message":"Done resetting in-progress states.","channel":"web.sh","datetime":"2020-12-03T10:45:09-05:00","severity":"INFO","level":200},
yarn run v1.21.1,
{"message":"Synchronizing with UNMS.","channel":"web.sh","datetime":"2020-12-03T10:45:09-05:00","severity":"INFO","level":200},
$ node cli/check-crm-db.js,
{"message":"Waiting for UNMS (127.0.0.1:8081).","channel":"unms_ready.sh","datetime":"2020-12-03T10:45:09-05:00","severity":"INFO","level":200},
Checking CRM database schema version.,
CRM database schema version is '1.2.7'.,
Done in 1.54s.,
,
[email protected] start /home/app/unms,
node --max_old_space_size=2048 index.js,
,
{"name":"UNMS","hostname":"unms","pid":2049,"level":30,"msg":"Master 2049 is running","time":"2020-12-03T15:45:12.013Z","v":0},
{"message":"Waiting for UNMS (127.0.0.1:8081).","channel":"unms_ready.sh","datetime":"2020-12-03T10:45:12-05:00","severity":"INFO","level":200},
{"name":"UNMS","hostname":"unms","pid":2049,"level":30,"msg":"Connected to SiriDB server version: 2.0.34","time":"2020-12-03T15:45:14.682Z","v":0},
{"name":"UNMS","hostname":"unms","pid":2049,"level":30,"msg":"SiriDB database 'unms' does not exists. Creating...","time":"2020-12-03T15:45:14.684Z","v":0},
[W 2020-12-03 10:45:14] Error handling manage request: database directory already exists: /config/siridb/unms/,
{ SiriDbError: database directory already exists: /config/siridb/unms/,
at SiriDbConnection.resolveRequest (/home/app/unms/lib/siridb/connection.js:155:17),
at SiriDbConnection.handleData (/home/app/unms/lib/siridb/connection.js:95:14),
at Socket.emit (events.js:198:13),
at Socket.EventEmitter.emit (domain.js:448:20),
at addChunk (_stream_readable.js:288:12),
at readableAddChunk (_stream_readable.js:269:11),
at Socket.Readable.push (_stream_readable.js:224:10),
at TCP.onStreamRead [as onread] (internal/stream_base_commons.js:94:17) tp: 96, name: 'SiriDbError' } 'Unexpected error in master process',
npm ERR! code ELIFECYCLE,
npm ERR! errno 1,
npm ERR! [email protected] start: node --max_old_space_size=2048 index.js,
npm ERR! Exit status 1,
,
npm ERR! Failed at the [email protected] start script.,
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.,
,
npm ERR! A complete log of this run can be found in:,
npm ERR! /home/app/.npm/_logs/2020-12-03T15_45_14_719Z-debug.log,
FATAL: role "root" does not exist,

Backup restore, http error 413 due to reverse proxy

I'm opening this and will close it momentarily, this is for documentation only.

When bringing up this container on k8s I ended up being unable to restore; it would error out with HTTP 413. I confirmed that the docker image/UNMS itself was not the problem - it was related to the ingress-nginx service. Below is an example of an ingress resource that has the appropriate annotations.

The key is nginx.ingress.kubernetes.io/proxy-body-size: "0", which sets an unlimited body size for this application. For other reverse proxy systems it might be a slightly different annotation.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: unms-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/backend-protocol: "https"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
spec:
  tls:
  - hosts:
    - unms.mydomain.com
    secretName: le-unms
  rules:
  - host: unms.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: unms-service
          servicePort: https

This is running on a Raspberry Pi cluster using Longhorn for distributed storage. It's not as fast as my Intel server, but it's fast enough. Initial startup was a solid 10 minutes, same for restoring the backup, but once running it's great.

$ kubectl top pods -l app=unms
NAME     CPU(cores)   MEMORY(bytes)   
unms-0   298m         888Mi

Reverse proxy setup not working

Hello
Why is it not possible to run behind a reverse proxy? I get too many redirects to HTTPS when hitting my root URL...
Docker exposes 10081:80 and 12055:2055, and the reverse proxy forwards from the root URL (standard 443) -> localhost:10081.
How do I need to set the env vars for this setup?

edit: It seems that it's not possible to expose only HTTP, right? Or do you know how to set it up via env vars?

Cheers
Alex
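For reference, a hedged sketch using the PUBLIC_HTTPS_PORT/PUBLIC_WS_PORT variables mentioned in other issues here. The assumption is that they should name the port the proxy exposes publicly (443) while the proxy forwards to the remapped HTTPS port of the container; the 10443 mapping is a placeholder added for this example:

sudo docker run -d --name unms \
  -p 10081:80 -p 10443:443 -p 12055:2055/udp \
  -e PUBLIC_HTTPS_PORT=443 \
  -e PUBLIC_WS_PORT=443 \
  -v /path/to/unms:/config \
  nico640/docker-unms:latest
# the reverse proxy then forwards https://root-url/ -> https://localhost:10443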

Failed to create endpoint

I'm not sure if I'm doing something wrong. I reinstalled Docker and it didn't change the error.
The current error:
23a8d28c9d6443f6e87e6e6fe2515773e7f7751724e70366bfaf1c21d9b40ad3 docker: Error response from daemon: failed to create endpoint unms3 on network bridge: failed to add the host (vethbe530b8) <=> sandbox (vethe716f2d) pair interfaces: operation not supported.

Command run:
sudo docker run -d --name unms3 -p 80:80 -p 443:443 -p 2055:2055/udp -v /var/lib/docker-config/unms:/config nico640/docker-unms:armhf

It is running on a Raspberry Pi 4 with 4 GB of RAM.

Node keeps on restarting

Hello @Nico640

I'm starting to use your image but I keep hitting an issue: some Node process inside the image keeps restarting.

Here is a log; I did not get anything out of it, to be honest. I hope you can make sense of it.

cat 2020-06-06T07_34_19_082Z-debug.log
0 info it worked if it ends with ok
1 verbose cli [ '/usr/local/bin/node', '/home/app/unms/npm', 'start' ]
2 info using [email protected]
3 info using [email protected]
4 verbose run-script [ 'prestart', 'start', 'poststart' ]
5 info lifecycle [email protected]~prestart: [email protected]
6 verbose lifecycle [email protected]~prestart: unsafe-perm in lifecycle true
7 verbose lifecycle [email protected]~prestart: PATH: /home/app/unms/npm/node_modules/npm-lifecycle/node-gyp-bin:/home/app/unms/node_modules/.bin:/home/app/unms/node_modules/.bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/postgresql/9.6/bin
8 verbose lifecycle [email protected]~prestart: CWD: /home/app/unms
9 silly lifecycle [email protected]~prestart: Args: [ '-c',
9 silly lifecycle   'yarn backup:apply && yarn migrate && yarn check-crm-db' ]
10 silly lifecycle [email protected]~prestart: Returned: code: 0  signal: null
11 info lifecycle [email protected]~start: [email protected]
12 verbose lifecycle [email protected]~start: unsafe-perm in lifecycle true
13 verbose lifecycle [email protected]~start: PATH: /home/app/unms/npm/node_modules/npm-lifecycle/node-gyp-bin:/home/app/unms/node_modules/.bin:/home/app/unms/node_modules/.bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/postgresql/9.6/bin
14 verbose lifecycle [email protected]~start: CWD: /home/app/unms
15 silly lifecycle [email protected]~start: Args: [ '-c', 'node --max_old_space_size=2048 index.js' ]
16 silly lifecycle [email protected]~start: Returned: code: 132  signal: null
17 info lifecycle [email protected]~start: Failed to exec start script
18 verbose stack Error: [email protected] start: `node --max_old_space_size=2048 index.js`
18 verbose stack Exit status 132
18 verbose stack     at EventEmitter.<anonymous> (/home/app/unms/npm/node_modules/npm-lifecycle/index.js:332:16)
18 verbose stack     at EventEmitter.emit (events.js:198:13)
18 verbose stack     at ChildProcess.<anonymous> (/home/app/unms/npm/node_modules/npm-lifecycle/lib/spawn.js:55:14)
18 verbose stack     at ChildProcess.emit (events.js:198:13)
18 verbose stack     at maybeClose (internal/child_process.js:982:16)
18 verbose stack     at Process.ChildProcess._handle.onexit (internal/child_process.js:259:5)
19 verbose pkgid [email protected]
20 verbose cwd /home/app/unms
21 verbose Linux 4.14.24-qnap
22 verbose argv "/usr/local/bin/node" "/home/app/unms/npm" "start"
23 verbose node v10.19.0
24 verbose npm  v6.14.5
25 error code ELIFECYCLE
26 error errno 132
27 error [email protected] start: `node --max_old_space_size=2048 index.js`
27 error Exit status 132
28 error Failed at the [email protected] start script.
28 error This is probably not a problem with npm. There is likely additional logging output above.
29 verbose exit [ 132, true ]

It should be a clean installation with almost the standard docker-compose file you provided, so I don't understand why this is happening.

Ciao!
M

Problems after fresh install

Hey guys,

I got a problem on step 2 after a fresh install of this package:

...
unms_1 | {"name":"UNMS","hostname":"cc7e3b6f1498","pid":1084,"component":"os","level":20,"totalmem":"976MB","freemem":"28MB","loadavg":"2.7939453125, 2.11572265625, 2.13525390625","uptime":"1181758s","msg":"","time":"2019-08-02T07:21:43.732Z","v":0}
unms_1 | {"name":"UNMS","hostname":"cc7e3b6f1498","pid":1084,"component":"proc","level":20,"delay":"25.81ms","cpu":"22.84%","memory":"151MB","uptime":"216s","msg":"[api]","time":"2019-08-02T07:21:43.760Z","v":0}
unms_1 | {"name":"UNMS","hostname":"cc7e3b6f1498","pid":1084,"level":50,"err":{"message":"Uncaught error","name":"Error","stack":"Error: Uncaught error\n at _error (/home/app/unms/node_modules/hapi/lib/protect.js:56:32)\n at module.exports.internals.Protect.internals.Protect._onError (/home/app/unms/node_modules/hapi/lib/protect.js:38:16)\n at Domain.emit (events.js:189:13)\n at Domain.EventEmitter.emit (domain.js:441:20)\n at Domain._errorHandler (domain.js:239:21)\n at Object.setUncaughtExceptionCaptureCallback (domain.js:139:29)\n at process._fatalException (internal/bootstrap/node.js:493:31)"},"msg":"[error] Uncaught error","time":"2019-08-02T07:21:43.770Z","v":0}
unms_1 | Debug: internal, implementation, error
unms_1 | Error: Uncaught error
unms_1 | at _error (/home/app/unms/node_modules/hapi/lib/protect.js:56:32)
unms_1 | at module.exports.internals.Protect.internals.Protect._onError (/home/app/unms/node_modules/hapi/lib/protect.js:38:16)
unms_1 | at Domain.emit (events.js:189:13)
unms_1 | at Domain.EventEmitter.emit (domain.js:441:20)
unms_1 | at Domain._errorHandler (domain.js:239:21)
unms_1 | at Object.setUncaughtExceptionCaptureCallback (domain.js:139:29)
unms_1 | at process._fatalException (internal/bootstrap/node.js:493:31)
unms_1 | {"name":"UNMS","hostname":"cc7e3b6f1498","pid":1084,"level":30,"msg":"[response] POST /nms/api/v2.1/nms/setup 500 (1143ms)","time":"2019-08-02T07:21:43.974Z","v":0}
unms_1 | {"name":"UNMS","hostname":"cc7e3b6f1498","pid":1084,"component":"scheduler","level":20,"task":"synchronizeSiteStatus","curr":"255.18 ms","avg":"26.02 ms","msg":"Finished task","time":"2019-08-02T07:21:43.981Z","v":0}
...

The message is that I'm not authorized for this action?!

No idea what happened there.

UNMS is starting ... but it does not start

Hello!

After a long period of regular operation, UNMS doesn't start. I have recreated the container, but it seems the problem is not in the container itself: the container starts normally and is operational. The UNMS page shows the rocket until it goes away and a message says that UNMS couldn't start: "Oops! Something went wrong."
unms.log has the following errors:
{"name":"UNMS","hostname":"7d0b24011e93","pid":1934,"level":30,"msg":"SiriDB database 'unms' does not exists. Creating...","time":"2020-10-11T18:25:36.037Z","v":0}
{ SiriDbError: database directory already exists: /config/siridb/unms/
at SiriDbConnection.resolveRequest (/home/app/unms/lib/siridb/connection.js:155:17)
at SiriDbConnection.handleData (/home/app/unms/lib/siridb/connection.js:95:14)
at Socket.emit (events.js:198:13)
at Socket.EventEmitter.emit (domain.js:448:20)
at addChunk (_stream_readable.js:288:12)
at readableAddChunk (_stream_readable.js:269:11)
at Socket.Readable.push (_stream_readable.js:224:10)
at TCP.onStreamRead [as onread] (internal/stream_base_commons.js:94:17) tp: 96, name: 'SiriDbError' } 'Unexpected error in master process'
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] start: node --max_old_space_size=2048 index.js
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR! /home/app/.npm/_logs/2020-10-11T18_25_36_090Z-debug.log
Running docker-entrypoint npm start
Sentry release:
Version: 1.2.7+258cbb8a49.2020-09-18T14:42:15+02:00

Just in case, ucrm.log seems OK except this:

{"message":"Waiting for UNMS (127.0.0.1:8081).","channel":"unms_ready.sh","datetime":"2020-10-07T07:00:15+02:00","severity":"INFO","level":200}
{"message":"UNMS did not start in time, restarting container.","channel":"web.sh","datetime":"2020-10-07T07:00:15+02:00","severity":"WARNING","level":300}
Makefile:12: recipe for target 'server_with_migrate' failed
make: *** [server_with_migrate] Error 1
/usr/src/ucrm/scripts/init_log.sh
/usr/src/ucrm/scripts/dirs.sh
{"message":"Creating directories.","channel":"dirs.sh","datetime":"2020-10-07T07:00:15+02:00","severity":"INFO","level":200}
{"message":"Done creating directories.","channel":"dirs.sh","datetime":"2020-10-07T07:00:16+02:00","severity":"INFO","level":200}

There were no changes in configuration, no updates - just normal operation. As far as I know, the UNMS version is 1.2.6 or 1.2.7. How can I approach solving this situation? Thank you!

UNMS 1.3.7

Hi there,

Now that 1.3.7 is official, any chance of getting an updated container?

Thanks again for maintaining this!

Raspberry Pi Docker Edgerouter issues

Hi - I've been trying to get this configured for ages - everything is up and running in a Docker container. I have NGINX using ports 80 and 443, so I tried the following:

Using Portainer, so I set these env variables:
WS_PORT 443
PUBLIC_WS_PORT 32770
PUBLIC_HTTPS_PORT 32770

Mapped Ports:
32770 > 443 (tcp)
2055 > 2055 (udp)

I see all the devices - when I try to add my EdgeRouter, it authenticates, updates the UNMS config in the router with, what I think is, the correct stuff for the IP address of my Pi:

wss://192.168.0.173:32770+etc...

But then sits there saying "waiting for..." (can't see the rest of the message)

I tried stopping NGINX and leaving ports as default, mapping 80, 443 and 2055 to the same, but still the same behaviour.

Is there something about the EdgeRouter communicating back with the UNMS container? I know 443 gets through normally because NGINX uses it. Losing the will to live trying to get this working! So close and yet...

Cheers
Andy

0.14.4 > 1.0.2

I'm getting prompted to update from 0.14.4 to 1.0.2 inside the current stable image. I don't see any tags for the 1.x builds, and if I'm not mistaken, ubnt is now dropping the pre-1.0 builds (so no more UNMS stable builds without CRM). Will you be doing the same for your images, or what are your plans for the new stable versions?

Thanks,

->g.

Reduction of writes to drive

I have moved to running my Docker containers on an SSD due to noise and speed constraints.
One big drawback is that SSDs have a limited number of writes. Once this amount is reached the warranty expires and the chance that the drive will go bad increases.

UNMS is a container that writes a constant stream of data. In my case I can see a constant write stream of 0.5 MB/s, which does not seem like much at first glance, but calculated over a year (0.5 MB/s × 3600 × 24 × 365) this equates to roughly 15 TB yearly. Add several other containers to the mix and the warranty is gone.

Is there a way to reduce the amount of disk writes for UNMS (reduce logging, etc.)?
Would it be possible to expose a variable to enable the "low writes mode"?

Thanks and regards
Marcus

PS: Thanks for providing this awesome image :)

SiriDB errors on Docker Swarm with Glusterfs filesystem

Hi Nico640, first of all thank you for your work making UNMS run on the ARM architecture.
I installed a 3-node Raspberry Pi cluster running Docker Swarm, GlusterFS persistent storage and a Traefik reverse proxy. All works like a charm if I configure the /config folder as a bind mount on the local storage of the Raspberry Pi, but I need to put the mount on the GlusterFS mount path to have persistent storage if the container changes the node where it runs.
In the log these lines are repeated every two or three seconds:

[W 2020-12-18 20:52:40] Database directory not found, creating directory '/config/siridb/'.,
[E 2020-12-18 20:52:40] Cannot create directory '/config/siridb/'.,
[W 2020-12-18 20:52:40] Closing SiriDB Server (version: 2.0.34)

Any idea of what's happening?

Thanks in advance
Gianmarco

UNMS 1.3.1 available

Hey Nico640,

could you be so kind as to build UNMS with the new version 1.3.1, which came out today?

Regards,
Snot

Upgrade question

Re:

"You can upgrade UNMS by downloading the latest version of this image."

Sorry, can I ask for a little Docker noob upgrade help? This is the first time I've had to upgrade a Docker install. I've been Googling around but don't find a consistent answer (maybe because there are multiple ways?) to upgrade to the new release. I'm a bit confused.

Is it as simple as removing the current unms container and repeating the command below that I used the first time I installed?

sudo docker run -d --name unms -p 80:80 -p 443:443 -p 2055:2055/udp -v /var/lib/docker-config/unms:/config nico640/docker-unms:armhf
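Roughly, yes: pull the new image first, then recreate the container. The data lives in the mounted /config directory, so it survives removing the container. A minimal sketch based on the run command above (an assumption about the usual upgrade flow, not an official procedure):

sudo docker pull nico640/docker-unms:armhf    # fetch the newer image
sudo docker stop unms
sudo docker rm unms                           # config stays in /var/lib/docker-config/unms
sudo docker run -d --name unms -p 80:80 -p 443:443 -p 2055:2055/udp \
  -v /var/lib/docker-config/unms:/config nico640/docker-unms:armhf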

Update current version

Hello, and thank you for the great job you are doing!

I have a problem with updating the current version through the web interface. This problem existed with older versions as well.
If I try to update from the initial "New UNMS version available" message or from the "Update available" button in the upper status line, after some time I get the message "The system update failed. Update timed out." every time. At the same time I can update via the command line (docker-compose pull unms and docker-compose up -d).

Setup Volume parameters for Synology DSM Docker

First, many thanks for the image! It is very helpful.
I am a beginner with Docker. The container is running well, but I would like to have all of the container's data in a custom directory (for example /volume1/docker/uisp). I do not know what parameter I should put under Advanced Settings/Volume in Mount Path. I tried /config but the container did not run. Could you explain how to set up the volume in DSM Docker? Thank you a lot.
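A hedged sketch of the intended mapping: in DSM's Docker UI the host folder (File/Folder) would be /volume1/docker/uisp and the Mount path would be /config, which is the path the container expects in the other compose examples in this list. The equivalent docker run command, with the image tag as an assumption, would be:

sudo docker run -d --name unms \
  -p 80:80 -p 443:443 -p 2055:2055/udp \
  -v /volume1/docker/uisp:/config \
  nico640/docker-unms:latest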

ERROR: relation "crm_db_version_view" does not exist at character 19

I haven't used UNMS since oznu deprecated the repo, so I set up a fresh installation based on the example docker-compose.yml.

The only error output I see on the container is:

id: 'unms': no such user
Waiting for unms to create user...
Checking CRM database schema version.
ERROR:  relation "crm_db_version_view" does not exist at character 19
STATEMENT:  SELECT value FROM crm_db_version_view
relation "crm_db_version_view" does not exist
Retrying in 10s

Nothing else comes up on the container.

Googling the issue suggests that the migration didn't run properly, but executing it manually throws hostname resolution errors.

Switch Firmwareupgrade fail

When upgrading switches through the UNMS web interface I get the following error:

xyz_switch upgrade from firmware version 1.8.2 to firmware version 1.9.0 has failed because of error (SSL_connect failed with error -313: revcd alert fatal error).

Tried with different switches.

Upgrading routers works flawlessly...

The deployment uses a Traefik reverse proxy.
