
docker-alerta's Introduction

Alerta Release 9.1


The Alerta monitoring tool was developed with the following aims in mind:

  • distributed and de-coupled so that it is SCALABLE
  • minimal CONFIGURATION that easily accepts alerts from any source
  • quick at-a-glance VISUALISATION with drill-down to detail

[Screenshot: Alerta web UI]


Requirements

Release 9 only supports Python 3.9 or higher.

The only mandatory dependency is MongoDB or PostgreSQL. Everything else is optional.

  • Postgres version 13 or better
  • MongoDB version 6.0 or better

Installation

To install MongoDB on Debian/Ubuntu run:

$ sudo apt-get install -y mongodb-org
$ mongod

To install MongoDB on CentOS/RHEL run:

$ sudo yum install -y mongodb
$ mongod

To install the Alerta server and client run:

$ pip install alerta-server alerta
$ alertad run

To install the web console run:

$ wget https://github.com/alerta/alerta-webui/releases/latest/download/alerta-webui.tar.gz
$ tar zxvf alerta-webui.tar.gz
$ cd dist
$ python3 -m http.server 8000

>> browse to http://localhost:8000
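As a quick smoke test once the server is running, an alert can be posted to the API. Below is a minimal Python sketch using only the standard library; the endpoint, API key, and alert field values are illustrative, not defaults shipped with Alerta.

```python
import json
import urllib.request

# Illustrative values; substitute your own API endpoint and key.
ENDPOINT = "http://localhost:8080"
API_KEY = "demo-key"

def make_alert(resource, event, environment="Production",
               severity="major", service=None, text=""):
    """Build a minimal alert payload for POST /alert."""
    return {
        "resource": resource,
        "event": event,
        "environment": environment,
        "severity": severity,
        "service": service or [],
        "text": text,
    }

def post_alert(payload):
    """POST the alert to a running Alerta API (network required)."""
    req = urllib.request.Request(
        ENDPOINT + "/alert",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Key " + API_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

alert = make_alert("host678:eth0", "HW:NIC:FAILED",
                   service=["Network"],
                   text="Network interface eth0 is down.")
```

The same payload can be sent with curl or the alerta CLI; only the `Authorization: Key ...` header and JSON body matter.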

Docker

Alerta and MongoDB can also run using Docker containers, see alerta/docker-alerta.

Configuration

To configure the alertad server, override the default settings in /etc/alertad.conf or use the ALERTA_SVR_CONF_FILE environment variable:

$ ALERTA_SVR_CONF_FILE=~/.alertad.conf
$ echo "DEBUG=True" > $ALERTA_SVR_CONF_FILE
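Since the config file is executed as plain Python, a slightly fuller example might look like the sketch below. Every setting shown is mentioned elsewhere on this page, but the values are illustrative.

```python
# Example /etc/alertad.conf -- the file is plain Python.
# Values are illustrative; see http://docs.alerta.io for the full list.
DEBUG = True
AUTH_REQUIRED = False
DATABASE_URL = 'mongodb://localhost:27017/monitoring'
PLUGINS = ['reject']
```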

Documentation

More information on configuration and other aspects of Alerta can be found at http://docs.alerta.io

Development

To run in development mode, listening on port 5000:

$ export FLASK_APP=alerta FLASK_DEBUG=1
$ pip install -e .
$ flask run

To run in development mode, listening on port 8080, using Postgres and reporting errors to Sentry:

$ export FLASK_APP=alerta FLASK_DEBUG=1
$ export DATABASE_URL=postgres://localhost:5432/alerta5
$ export SENTRY_DSN=https://8b56098250544fb78b9578d8af2a7e13:[email protected]/153768
$ pip install -e .[postgres]
$ flask run --debugger --port 8080 --with-threads --reload

Troubleshooting

Enable debug log output by setting DEBUG=True in the API server configuration:

DEBUG=True

LOG_HANDLERS = ['console','file']
LOG_FORMAT = 'verbose'
LOG_FILE = '$HOME/alertad.log'

It can also be helpful to check the web browser developer console for JavaScript logging, network problems and API error responses.

Tests

To run all the tests, a local Postgres and a local MongoDB database must be running. Then run:

$ TOXENV=ALL make test

To just run the Postgres or MongoDB tests run:

$ TOXENV=postgres make test
$ TOXENV=mongodb make test

To run a single test run something like:

$ TOXENV="mongodb -- tests/test_search.py::QueryParserTestCase::test_boolean_operators" make test
$ TOXENV="postgres -- tests/test_queryparser.py::PostgresQueryTestCase::test_boolean_operators" make test

Cloud Deployment

Alerta can be deployed to the cloud easily using Heroku https://github.com/alerta/heroku-api-alerta, AWS EC2 https://github.com/alerta/alerta-cloudformation, or Google Cloud Platform https://github.com/alerta/gcloud-api-alerta.

License

Alerta monitoring system and console
Copyright 2012-2023 Nick Satterly

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

docker-alerta's People

Contributors

bd5872, creganfl, dependabot[bot], doodle-tnw, faisal-khalique, guillaumeouint, h-phil, headphonejames, hex2dec, hugoshaka, jackdown, klavsklavsen, lotooo, m4ce, moeterich, mwasilew2, nabadger, nick-ax, nlgotz, pad92, pantelis-karamolegkos, pbabics, satterly, schnidrig, smossber, tjanson, wagnst, wimfabri, xtavras, ydkn


docker-alerta's Issues

API Endpoints all return 404s

Seems like this is missing something, as none of the API endpoints seem to exist.

/# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d1525bedbd9c alerta/alerta-web:latest "/bin/sh -c '/config 3 minutes ago Up 30 minutes 0.0.0.0:80->80/tcp alerta-web
b40d673bc3ff mongo:latest "/entrypoint.sh mong 4 minutes ago Up 36 minutes 27017/tcp alerta-db

/# ENDPOINT=${1:-http://localhost:80}
/# curl -s -XPOST -H "Content-type: application/json" -H "Authorization: Key demo-key" ${ENDPOINT}/alert -d '
{
"resource": "host678:eth0",
"event": "HW:NIC:FAILED",
"group": "Hardware",
"severity": "major",
"environment": "Production",
"service": [
"Network"
],
"text": "Network interface eth0 is down.",
"value": "error"
}'

<title>404 Not Found</title>

404 Not Found


nginx/1.4.6 (Ubuntu)

Reverse proxy support with base path is incomplete

It's not possible to run the docker image behind a reverse proxy on a certain path, e.g. /alerta/. It's possible to set BASE_URL=/alerta/api but that only affects the API.

  • The UI is still served from / - that can be worked around by altering the path on the proxy to remove /alerta from the path
  • Resources are loaded relative to the UI, so that's working
  • The callback URL for OAuth (e.g. for GitHub) is always http(s)://mydomain.com/, so the path part is missing.

Alpine Dockerfile

Hi,
Is it possible to adapt this Dockerfile to an Alpine environment? If yes, can you please share the link for it?

Container restart issue: NameError: name 'PLUGINS' is not defined

The docker-entrypoint.sh script dynamically creates a file "/etc/alertad.conf" on the first container run. This file is missing a plugins definition. On a subsequent restart the container will fail to start with the following in the logs:

++ echo
++ cut -d, -f1
+ ADMIN_USER=
++ echo mongodb://mongo:27017/monitoring
++ sed -e 's/mongodb:\/\///'
+ MONGO_ADDR=mongo:27017/monitoring
+ '[' '!' -f /app/config.js ']'
+ '[' '!' -f /etc/alertad.conf ']'
++ python -c 'exec(open('\''/etc/alertad.conf'\'')); print('\'','\''.join(PLUGINS))'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
NameError: name 'PLUGINS' is not defined
+ PLUGINS=

On the second container startup, the following on line 33 causes the error:

PLUGINS=$(python -c "exec(open('$ALERTA_SVR_CONF_FILE')); print(','.join(PLUGINS))")

If you would like I could spend a few minutes and put in a pull request that fixes it (and still honors the PLUGINS environment variable if passed in from docker.)
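For what it's worth, the fragile one-liner could be replaced by a small helper that tolerates a config file without a PLUGINS setting. A sketch (the function name and the empty default are mine, not the project's):

```python
def plugins_from_config(source, default=()):
    """Exec Python-syntax config text (as /etc/alertad.conf is) and
    return the PLUGINS list, falling back to a default when the
    setting is absent instead of raising NameError."""
    namespace = {}
    exec(source, namespace)  # alertad.conf files are plain Python
    return list(namespace.get("PLUGINS", default))
```

The entrypoint would call this with the contents of $ALERTA_SVR_CONF_FILE and join the result with commas, as the current script attempts to.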

Environment variables for GITLAB_URL is not used in docker-entrypoint.sh

Hi,

I am trying to set up Gitlab auth and noticed that GITLAB_URL is not present in here.

I would expect the config to be something like this:

'use strict';
angular.module('config', [])
  .constant('config', {
    'endpoint'    : "${BASE_URL}",
    'provider'    : "${PROVIDER}",
    'gitlab_url'   : "${GITLAB_URL}",
    'client_id'   : "${OAUTH2_CLIENT_ID}",
    'colors'      : {}
  });

There seems to be a huge difference and some options are available in the kubernetes frontend config.
https://github.com/alerta/docker-alerta/blob/master/contrib/kubernetes/frontend/config.js.template

Are there any plans to support all these settings in the standard docker-entrypoint.sh?
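For illustration, the kind of ${VAR} substitution the entrypoint performs can be sketched with Python's string.Template. The template mirrors the config.js snippet above; whether the official script will accept GITLAB_URL is exactly what this issue asks, so treat the variable set as an assumption.

```python
from string import Template

# Template mirrors the config.js sketch in this issue; values illustrative.
CONFIG_TEMPLATE = Template("""'use strict';
angular.module('config', [])
  .constant('config', {
    'endpoint'  : "${BASE_URL}",
    'provider'  : "${PROVIDER}",
    'gitlab_url': "${GITLAB_URL}",
    'client_id' : "${OAUTH2_CLIENT_ID}",
    'colors'    : {}
  });
""")

def render_config(env):
    """Substitute environment-style values into the template.
    safe_substitute leaves unknown placeholders untouched rather
    than raising KeyError."""
    return CONFIG_TEMPLATE.safe_substitute(env)
```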

OAUTH2_CLIENT_ID and OAUTH2_CLIENT_SECRET

Hello,

Using the variables OAUTH2_CLIENT_SECRET and OAUTH2_CLIENT_ID for Gmail auth is not working.

Setting them inside alertad.conf was ignored:

alerta@ae5a9a8df24b:/$ cat /app/alertad.conf 
DEBUG = True
AUTH_REQUIRED = False
OAUTH2_CLIENT_ID = 'xxxxxx'
OAUTH2_CLIENT_SECRET = 'yyyyyyyyyyyy'

Setting them via docker-compose.yml was also ignored:

    environment:
      - DEBUG=1  # remove this line to turn DEBUG off
      - DATABASE_URL=postgres://postgres:postgres@db:5432/monitoring
      - AUTH_REQUIRED=False
      - OAUTH2_CLIENT_ID=xxxx
      - OAUTH2_CLIENT_SECRET=yyyy

My /login is still the old one =/


The debug output shows it's looking for my username in the pgsql database, instead of using google.com auth.

Thanks

New feature - Add comments when shelved or closed alerts

As described in the title, it would be good to be able to add a comment when an alert gets shelved or closed, so that anyone who wants to know why it's in that state can check there instead of asking everyone on the team.
It would also be good to be able to add a simple comment to open alerts, so that if an alert is still there and someone is working on it, others can tell.

best way to upgrade alerta and keep the db

Hi Team,

Do you know the best way to upgrade Alerta and keep the existing db? We are using it with Docker Compose and it reports duplicate keys on admin when we try to upgrade.

Cheers

Plugins fail to install

version: 6.5.0

When specifying plugins via INSTALL_PLUGINS I get the following error.

Installing plugin 'reject'
+ '[' -n '' ']'
+ IFS=,
+ for plugin in ${INSTALL_PLUGINS}
+ echo 'Installing plugin '\''reject'\'''
+ /venv/bin/pip install git+https://github.com/alerta/alerta-contrib.git#subdirectory=plugins/reject
The directory '/home/alerta/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/home/alerta/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting git+https://github.com/alerta/alerta-contrib.git#subdirectory=plugins/reject
  Cloning https://github.com/alerta/alerta-contrib.git to /tmp/pip-req-build-edsodrg2
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/venv/lib/python3.6/tokenize.py", line 452, in open
        buffer = _builtin_open(filename, 'rb')
    FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pip-req-build-edsodrg2/plugins/reject/setup.py'
    
    ----------------------------------------

remove cron from container

Cron has proved to be buggy in Docker and can be removed from the container with this simple hack in docker-compose.yaml:

  alerta_cron_heartbeats:
    image: alerta/alerta-web:latest
    restart: always
    depends_on:
      - alerta-db
    volumes:
      - ./alerta/heartbeat.sh:/heartbeat.sh
    command: /heartbeat.sh

and

# cat ./alerta/heartbeat.sh
#!/bin/sh
sleep 60
alerta heartbeats --alert

This keeps everything simpler: the container is restarted by Docker, so it behaves like cron.

[docs] Things to do

  • pre-install plugins as a layer
  • Config file volumes
  • What is alerta
  • Environment variable info
  • not linking db container
  • Test “how to use this image” against entry point image
  • Use labels?
  • Alpine linux version?
  • write tutorial for newbies

Kubernetes Backend image permissions still not right

Just noticed when trying to add plugins to the install images that the permissions still aren't quite right: it's fine for running alertad, but the alerta user doesn't have access to install pip packages.

Will add a pull request to fix this.

Heartbeat API Key is invalid

2019-01-29 10:14:48,550 DEBG 'heartbeats' stdout output:
send: b'GET /api/heartbeats HTTP/1.1\r\nHost: localhost:8080\r\nUser-Agent: python-requests/2.21.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\nAuthorization: Key DEBUG:raven.base.Client:Configuring\r\n\r\n'
2019-01-29T10:14:48.551123807Z 
2019-01-29 10:14:48,569 DEBG 'uwsgi' stdout output:
[pid: 90|app: 0|req: 243/1192] 127.0.0.1 () {40 vars in 507 bytes} [Tue Jan 29 10:14:48 2019] GET /api/heartbeats => generated 143 bytes in 17 msecs (HTTP/1.1 401) 4 headers in 143 bytes (2 switches on core 0)
2019-01-29T10:14:48.569806524Z 
2019-01-29 10:14:48,574 DEBG 'nginx' stdout output:
ip=\- [\29/Jan/2019:10:14:48 +0000] "\GET /api/heartbeats HTTP/1.1" \401 \143 "\-" "\python-requests/2.21.0"
2019-01-29T10:14:48.574827806Z 
2019-01-29 10:14:48,575 DEBG 'heartbeats' stdout output:
reply: 'HTTP/1.1 401 UNAUTHORIZED\r\n'
header: Server: nginx/1.10.3
header: Date: Tue, 29 Jan 2019 10:14:48 GMT
header: Content-Type: application/json
header: Content-Length: 143
header: Connection: keep-alive
header: Access-Control-Allow-Origin: http://localhost
header: Vary: Origin
2019-01-29T10:14:48.575785624Z 
body: {
  "code": 401, 
  "errors": null, 
  "message": "API key parameter 'DEBUG:raven.base.Client:Configuring' is invalid", 
  "status": "error"
}
2019-01-29T10:14:48.575900229Z 
Error: API key parameter 'DEBUG:raven.base.Client:Configuring' is invalid
2019-01-29T10:14:48.575919092Z 

Alerta instance created by run script does not connect to mongo

Hi,

I have pulled the latest version of the docker-alerta repo and run the run.sh script (with my local customisations for email addresses and auth etc)

However whilst monitoring the logs on the mongo instance I see no connections being made to mongo.

I have tried this both on AWS and on centos 7.

Any pointers or further logs I can give you?

How to share apikeys between containers on docker environment?

It seems that there's no supported way to add a predefined key as an apikey anywhere? Alerta generates keys automatically, but I don't see how I can share them between multiple containers. Alternatively, is there a way to disable apikey authentication or use another kind of authentication on API calls?

The problem is that another container needs access to the Alerta API, but it doesn't know the apikeys beforehand, and creating them and configuring them afterwards from outside (e.g. in the UI and then configuring them into the containers) isn't really an option, since the same configuration is deployed multiple times by machine.

Dockerfile does not create required user

The plugin installer in the docker-entrypoint script fails for me, example:

I have no name!@3fa332413aa6:/$ /venv/bin/pip install git+https://github.com/alerta/alerta-contrib.git#subdirectory=plugins/prometheus
The directory '/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting git+https://github.com/alerta/alerta-contrib.git#subdirectory=plugins/prometheus
  Cloning https://github.com/alerta/alerta-contrib.git to /tmp/pip-sttxqumy-build
fatal: unable to look up current user in the passwd file: no such user
Command "git clone -q https://github.com/alerta/alerta-contrib.git /tmp/pip-sttxqumy-build" failed with error code 128 in None

I'll submit a PR for this which resolves the issue (unless I'm missing something and there's a better workaround?)

Version 6 breaks Kubernetes contrib

The new config.json and config.js and a few other bits that have changed between v5 and v6 cause the docker build in the contrib to fail.

angular-alerta-webui-master/web.js
mv: cannot stat '/usr/share/nginx/html/config.js': No such file or directory
The command '/bin/sh -c tar zxvf /tmp/web.tar.gz -C /tmp &&     rm -rf /usr/share/nginx/html &&     mv /tmp/angular-alerta-webui-master/app /usr/share/nginx/html &&     mv /usr/share/nginx/html/config.js /usr/share/nginx/html/config.js.orig' returned a non-zero code: 1
Makefile:10: recipe for target 'container' failed

probably could do with the same cleanup as the main alerta-web image

Unable to deploy with mongodb replset

Hi,

I'm trying to deploy this container with a mongodb replica set for HA, as advised by the documentation.
The docker-entrypoint script uses a mongodb client version 2.6 which needs a connection uri in the form:

<replset>/<host0>,<host1>...

If I set the MONGO_URI env var in that form to make the mongo client happy, alerta fails to connect to mongodb because of the invalid connection uri.

Setting MONGO_URI in alertad.conf doesn't help either because it seems the env var overwrites it.

PS: Thanks! Really awesome project 👍

Ack button disabled on some alerts.

We're using Alerta 6.0.1 and I've noticed that the Ack button doesn't work for alerts that were sent from Prometheus Alertmanager, even though the user has the admin role and scope and can execute all the other commands (Delete, Close, Open, Watch, Shelve). Sending a similar alert with the alerta CLI produces an alert that can be acknowledged by the user. I'm wondering how this can happen. Was there a bug in this version, or is there a problem with the way we've set up the admin user, such that it doesn't have permission to ack but can do everything else? We plan to upgrade to Alerta 6.6 but don't have the time right now.

Add CA file(s) for SSL support

Hi,

I'm using the prometheus-alerta plugin - this lets me ack alerts from Alerta to Alertmanager.

Our alertmanager runs internally on SSL, so we need to provide the CA (I need to make the CA available inside this container).

I can get this working by having something like this in the entrypoint file:

# Install certs
if [ ! -z "$(ls -A /certs)" ]; then
  cp /certs/*.crt  /usr/local/share/ca-certificates/ 2>/dev/null
  update-ca-certificates
fi

This assumes that the user has mounted their certs into /certs/.

In addition to this we need to set the ca-bundle path for the python requests-lib.

Something like:
ENV REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt

The problem I have is that update-ca-certificates needs to run as root and the container user is alerta.

I've tried a few workarounds (i.e. run the container as root, but su - alerta to run the rest of the entrypoint), but these end up rather messy (with multiple permissions issues).

Although I have it working in my fork, I'm running everything as root (which I don't think is right, so don't want to create a PR yet).

Do you have any advice? Not sure if this is something you have dealt with before on this project.

Thanks

404 errors when running mailer integration

I've custom built a docker container containing the mailer integration:
alerta-mailer-DockerFile

FROM alerta/alerta-web

USER root
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get autoclean
RUN apt-get clean
USER 1001

RUN /venv/bin/pip install --upgrade pip
RUN /venv/bin/pip install git+https://github.com/alerta/alerta-contrib.git#subdirectory=integrations/mailer

I've spun up the new image in a container and everything works from the web gui.
I exec into the alerta container and run: alerta-mailer
I see the following output:

DEBUG:mailer:Looking for rules files in /app/alerta.rules.d
Python dns.resolver unavailable. The skip_mta option will be forced to False
Error 99 connecting to localhost:6379. Cannot assign requested address.
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): localhost:8080
Starting new HTTP connection (1): localhost:8080
DEBUG:urllib3.connectionpool:http://localhost:8080 "POST /heartbeat HTTP/1.1" 404 None
http://localhost:8080 "POST /heartbeat HTTP/1.1" 404 None
DEBUG:urllib3.connectionpool:http://localhost:8080 "POST /heartbeat HTTP/1.1" 404 None
http://localhost:8080 "POST /heartbeat HTTP/1.1" 404 None

This looks like it's trying to post to the heartbeats page in alerta but failing as the path is incorrect.
I've run curl http://localhost:8080 and can confirm the page is returned. However, requesting http://localhost:8080/heartbeat fails with a 404 error, which I expect is what alerta-mailer is seeing:
curl -X POST http://localhost:8080/heartbeat

I get a 404 Not Found.

How is CORS_ORIGINS set?

I'm building a Helm chart for Alerta, and I'd like to know how the CORS_ORIGINS are set. It's currently an env var, but I want to know if it's comma separated or something, so I know if I can loop through it.
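Assuming CORS_ORIGINS follows the comma-separated convention of other list-valued variables (an assumption; confirming it is exactly what this issue asks), the split on the receiving side is straightforward:

```python
def split_env_list(value, sep=","):
    """Split a comma-separated env var (e.g. CORS_ORIGINS) into a
    clean list, dropping surrounding whitespace and empty entries.
    The comma convention is assumed here, not confirmed by the docs."""
    return [item.strip() for item in value.split(sep) if item.strip()]
```

On the chart side this would mean joining the values list with commas before putting it in the environment.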

docker-compose fails "ERROR: manifest for alerta/alerta-web:latest not found"

Using the example docker-compose.yml, `docker-compose up` fails with the following error message:

ERROR: manifest for alerta/alerta-web:latest not found

Checking on https://hub.docker.com/r/alerta/alerta-web/tags/ shows there is no image with tag latest. If the docker-compose.yml is amended to

  ...
  web:
    image: alerta/alerta-web:5.2.4
    ports:
  ...

the example can be executed.

Not sure whether this should be changed in docker-alerta or in alerta-web to provide a Docker image with the tag latest.

Linking containers is deprecated

Warning: The --link flag is a deprecated legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link. One feature that user-defined networks do not support that you can do with --link is sharing environmental variables between containers. However, you can use other mechanisms such as volumes to share environment variables between containers in a more controlled way.

https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/

routing plugin issues

Hi,

I'm attempting to create a routing plugin within the docker version of alerta. With that said I've customized the Dockerfile from the repo a bit because I don't want to run the web console on the same container as my API container.

I'm basically following the tutorial here and the following is in routing.py:

def rules(alert, plugins):
    if alert.duplicateCount > 2:
        return [plugins['slack']]
    else:
        return []

The setup.py file is basically identical to the tutorial. I exec into the API server container, create routing.py using the above and setup.py. I activate the venv and install the plugin:

alerta@33958ff7cd34:/$ cd /app/plugins/routing/
alerta@33958ff7cd34:/app/plugins/routing$ . /venv/bin/activate
(venv) alerta@33958ff7cd34:/app/plugins/routing$ python setup.py install
running install
running bdist_egg
running egg_info
writing alerta_routing.egg-info/PKG-INFO
...

I restart the API server container, and when I try to send some test alerts with the alerta CLI I'm always seeing the following in the logs:

2018-08-26 19:21:05,083 DEBG 'uwsgi' stdout output:
2018-08-26 19:21:05,083 - alerta.plugins[30]: WARNING - Plugin routing rules failed: 'Alert' object has no attribute 'duplicateCount' [in /venv/lib/python3.6/site-packages/alerta/utils/plugin.py:45]

Any idea what the issue might be? I tried changing routing.py to use alert.duplicate_count (per the Alert model source) but this didn't seem to work either.
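Not the project's code, but one defensive way to write the rule so it works regardless of which attribute name the installed version exposes:

```python
def rules(alert, plugins):
    """Route alerts with more than two duplicates to the 'slack'
    plugin, checking both attribute spellings seen in this thread
    (duplicate_count vs duplicateCount) and defaulting to 0 when
    neither is present or the value is None."""
    count = getattr(alert, "duplicate_count", None)
    if count is None:
        count = getattr(alert, "duplicateCount", 0)
    if (count or 0) > 2 and "slack" in plugins:
        return [plugins["slack"]]
    return []
```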

Github Authentication and Authorization

Hi,

I've set up authentication with GitHub and it's working for all the users in my organisations. However, none of the GitHub users have admin access. We can manage users only in Basic Auth, so I can't change scopes and permissions...
Is there a way to set all the GitHub users as admin, or a workaround to be able to log in as an admin?

PLUGINS not defined when running docker compose

c9b52479:docker-alerta nsatterl$ docker-compose up
Starting dockeralerta_db_1
Starting dockeralerta_web_1
Attaching to dockeralerta_db_1, dockeralerta_web_1
db_1   | 2017-01-17T09:26:01.712+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=76bfb95f2b4b
db_1   | 2017-01-17T09:26:01.712+0000 I CONTROL  [initandlisten] db version v3.4.1
db_1   | 2017-01-17T09:26:01.712+0000 I CONTROL  [initandlisten] git version: 5e103c4f5583e2566a45d740225dc250baacfbd7
db_1   | 2017-01-17T09:26:01.712+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1t  3 May 2016
db_1   | 2017-01-17T09:26:01.712+0000 I CONTROL  [initandlisten] allocator: tcmalloc
db_1   | 2017-01-17T09:26:01.712+0000 I CONTROL  [initandlisten] modules: none
db_1   | 2017-01-17T09:26:01.712+0000 I CONTROL  [initandlisten] build environment:
db_1   | 2017-01-17T09:26:01.712+0000 I CONTROL  [initandlisten]     distmod: debian81
db_1   | 2017-01-17T09:26:01.712+0000 I CONTROL  [initandlisten]     distarch: x86_64
db_1   | 2017-01-17T09:26:01.712+0000 I CONTROL  [initandlisten]     target_arch: x86_64
db_1   | 2017-01-17T09:26:01.712+0000 I CONTROL  [initandlisten] options: {}
db_1   | 2017-01-17T09:26:01.715+0000 W -        [initandlisten] Detected unclean shutdown - /data/db/mongod.lock is not empty.
db_1   | 2017-01-17T09:26:01.722+0000 I -        [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
db_1   | 2017-01-17T09:26:01.726+0000 W STORAGE  [initandlisten] Recovering data from the last clean checkpoint.
db_1   | 2017-01-17T09:26:01.728+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=486M,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
db_1   | 2017-01-17T09:26:03.546+0000 I CONTROL  [initandlisten]
db_1   | 2017-01-17T09:26:03.547+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
db_1   | 2017-01-17T09:26:03.547+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
db_1   | 2017-01-17T09:26:03.547+0000 I CONTROL  [initandlisten]
web_1  | Traceback (most recent call last):
web_1  |   File "<string>", line 1, in <module>
web_1  | NameError: name 'PLUGINS' is not defined
db_1   | 2017-01-17T09:26:03.566+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
db_1   | 2017-01-17T09:26:03.568+0000 I NETWORK  [thread1] waiting for connections on port 27017
db_1   | 2017-01-17T09:26:04.412+0000 I FTDC     [ftdc] Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost. OK
dockeralerta_web_1 exited with code 1
web_1  | Collecting git+https://github.com/alerta/alerta-contrib.git#subdirectory=plugins/geoip
web_1  |   Cloning https://github.com/alerta/alerta-contrib.git to /tmp/pip-xrYHeR-build
web_1  | fatal: unable to access 'https://github.com/alerta/alerta-contrib.git/': Could not resolve host: github.com
web_1  | Command "git clone -q https://github.com/alerta/alerta-contrib.git /tmp/pip-xrYHeR-build" failed with error code 128 in None
web_1  | Traceback (most recent call last):
web_1  |   File "<string>", line 1, in <module>
web_1  | NameError: name 'PLUGINS' is not defined
web_1  | Traceback (most recent call last):
web_1  |   File "<string>", line 1, in <module>
web_1  | NameError: name 'PLUGINS' is not defined
web_1  | Traceback (most recent call last):
web_1  |   File "<string>", line 1, in <module>
web_1  | NameError: name 'PLUGINS' is not defined
dockeralerta_web_1 exited with code 1
web_1  | Collecting git+https://github.com/alerta/alerta-contrib.git#subdirectory=plugins/geoip
web_1  |   Cloning https://github.com/alerta/alerta-contrib.git to /tmp/pip-xrYHeR-build
web_1  | fatal: unable to access 'https://github.com/alerta/alerta-contrib.git/': Could not resolve host: github.com
web_1  | Command "git clone -q https://github.com/alerta/alerta-contrib.git /tmp/pip-xrYHeR-build" failed with error code 128 in None
web_1  | Traceback (most recent call last):
web_1  |   File "<string>", line 1, in <module>
web_1  | NameError: name 'PLUGINS' is not defined
web_1  | Traceback (most recent call last):
web_1  |   File "<string>", line 1, in <module>
web_1  | NameError: name 'PLUGINS' is not defined
web_1  | Traceback (most recent call last):
web_1  |   File "<string>", line 1, in <module>
web_1  | NameError: name 'PLUGINS' is not defined
web_1  | Traceback (most recent call last):
web_1  |   File "<string>", line 1, in <module>
web_1  | NameError: name 'PLUGINS' is not defined
web_1  | Traceback (most recent call last):
web_1  |   File "<string>", line 1, in <module>
web_1  | NameError: name 'PLUGINS' is not defined
dockeralerta_web_1 exited with code 1
Killing dockeralerta_web_1 ... done
Killing dockeralerta_db_1 ... done
c9b52479:docker-alerta nsatterl$ vi config/alertad.conf.example
c9b52479:docker-alerta nsatterl$ docker-compose rm
Going to remove dockeralerta_web_1, dockeralerta_db_1
Are you sure? [yN] y
Removing dockeralerta_web_1 ... done
Removing dockeralerta_db_1 ... done
c9b52479:docker-alerta nsatterl$ docker-compose up

Installation of mailer fails

Hi,

It looks like mailer fails to install as it doesn't reside in the "plugins" directory in the project.
The docker-entrypoint.sh script in alerta runs the following:

# Install plugins
  IFS=","
  for plugin in ${INSTALL_PLUGINS}
  do
    echo "Installing plugin '${plugin}'"
    /venv/bin/pip install git+https://github.com/alerta/alerta-contrib.git#subdirectory=plugins/$plugin
  done
  touch ${RUN_ONCE}
fi

This fails to install mailer (I'm assuming, of course, that I'm supposed to define it under plugins, as there doesn't appear to be anywhere else I can define it in the alertad.conf file and very little direction as to how to integrate it with Alerta).

To get around this, I can run:

/venv/bin/pip install git+https://github.com/alerta/alerta-contrib.git#subdirectory=integrations/mailer
Building wheels for collected packages: alerta-mailer
  Building wheel for alerta-mailer (setup.py) ... done
  Stored in directory: /tmp/pip-ephem-wheel-cache-2e1nah9o/wheels/9a/49/c1/b19cd89b13b7dc6802bbe04dd0edb93d0a23dcc54948c2c916
Successfully built alerta-mailer
Installing collected packages: vine, amqp, kombu, redis, alerta-mailer
Successfully installed alerta-mailer-5.2.0 amqp-2.4.1 kombu-4.3.0 redis-3.1.0 vine-1.2.0

from within the container, but that is obviously not ideal. In addition, despite it saying the alerta-mailer-5.2.0 package has been built and dropped into /tmp/pip-ephem-wheel-cache-2e1nah9o/wheels/9a/49/c1/b19cd89b13b7dc6802bbe04dd0edb93d0a23dcc54948c2c916, that directory does not exist. Even if it did, there are no clear instructions on how to make it work with Alerta.

If anyone can point out if I'm missing something, I'd appreciate it.
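One possible generalisation of the entrypoint loop would be to let INSTALL_PLUGINS entries name their contrib subdirectory explicitly, so that integrations/mailer installs from the right place. A sketch; this prefix convention is a hypothetical extension, not something the current docker-entrypoint.sh supports:

```python
ALERTA_CONTRIB = "https://github.com/alerta/alerta-contrib.git"

def contrib_pip_url(entry, repo=ALERTA_CONTRIB):
    """Map an INSTALL_PLUGINS entry to a pip VCS URL. Entries that
    contain '/' (e.g. 'integrations/mailer') name the subdirectory
    explicitly; bare names default to plugins/. The convention is
    illustrative, not what docker-entrypoint.sh currently does."""
    subdir = entry if "/" in entry else "plugins/" + entry
    return "git+{}#subdirectory={}".format(repo, subdir)
```

The entrypoint loop would then pass each generated URL to `/venv/bin/pip install` unchanged.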

Storing the configuration in volumes

Hi,

I'm running Alerta in a Docker container on the DC/OS platform. Every time I restart Alerta, it creates a new container, so I have to save the configuration in volumes to keep it across reboots. Currently, the API keys generated via the WebUI are lost. Can you tell me which files I need to put in volumes to make the configuration persistent?

API URL doesn't work after running Docker container

When running the API server in the Docker container the web UI works, but when I go to access the API link I get the following...

{"code":500,"errors":["Traceback (most recent call last):\n File "/venv/lib/python3.6/site-packages/werkzeug/routing.py", line 1538, in match\n rv = rule.match(path, method)\n File "/venv/lib/python3.6/site-packages/werkzeug/routing.py", line 776, in match\n raise RequestSlash()\nwerkzeug.routing.RequestSlash\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/venv/lib/python3.6/site-packages/flask/app.py", line 1813, in full_dispatch_request\n rv = self.dispatch_request()\n File "/venv/lib/python3.6/site-packages/flask/app.py", line 1791, in dispatch_request\n self.raise_routing_exception(req)\n File "/venv/lib/python3.6/site-packages/flask/app.py", line 1774, in raise_routing_exception\n raise request.routing_exception\n File "/venv/lib/python3.6/site-packages/flask/ctx.py", line 336, in match_request\n self.url_adapter.match(return_rule=True)\n File "/venv/lib/python3.6/site-packages/werkzeug/routing.py", line 1542, in match\n safe='/:|+') + '/', query_args))\nwerkzeug.routing.RequestRedirect: 301 Moved Permanently: None\n"],"message":"301 Moved Permanently: None","status":"error"}

Is there a way to fix this? I followed the directions for the docker container to the letter.
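The RequestSlash/301 in that traceback is Werkzeug's strict-slashes handling: a route defined with a trailing slash answers a slash-less request with a redirect, which in this setup fails to build and surfaces as a 500. A minimal, generic Flask reproduction of the redirect behaviour (these are not Alerta's actual routes):

```python
from flask import Flask

app = Flask(__name__)

@app.route('/api/')  # trailing slash => Werkzeug redirects '/api' to '/api/'
def api_root():
    return 'ok'

client = app.test_client()
# '/api' gets a redirect (301 or 308 depending on the Werkzeug version);
# '/api/' returns 200.
print(client.get('/api').status_code)
print(client.get('/api/').status_code)
```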

403 http response when used in conjunction with nagios plugin

I get a 403 HTTP response when using the Docker Alerta container with the Nagios integration and trying to set an environment. It looks like I have to set an allowed environment in alertad.conf? But there doesn't appear to be an environment variable to set that at docker run time.
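If it helps anyone landing here: the server setting is `ALLOWED_ENVIRONMENTS`, a Python list in alertad.conf (the file is parsed as Python by alertad). The values below are illustrative:

```python
# Fragment of /etc/alertad.conf -- settings are plain Python assignments.
# Alerts whose environment is not in this list are rejected.
ALLOWED_ENVIRONMENTS = ['Production', 'Development', 'Staging']
```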

Basic LDAP not logging

Basic LDAP integration is not working and is not logging any errors. Also, Alerta's login.html is not prepared to support a username without an email address on the login page.

Read plugin settings from environment variables

Some problems I faced

  1. AMQP

error: Unknown option auto_start_request
error: Unknown option use_greenlets

I had to comment out lines in the file /usr/local/lib/python2.7/dist-packages/kombu/transport/mongodb.py:

            # 'auto_start_request': True,
            # auto_start_request=options['auto_start_request'],

            # use_greenlets=_detect_environment() != 'default'

because these options are deprecated.

  2. Twilio

I entered the container shell, cloned the contrib repo, ran 'python setup' for Twilio, and had to edit the file:

/usr/local/lib/python2.7/dist-packages/alerta_twilio-0.1.0-py2.7.egg/twilio_sms.py

with my credentials!

Exporting the variables did not work.

Then I needed to enable this in alertad.conf.sh:

PLUGINS = ['AMQP', 'reject', 'twilio_sms']

AMQP_URL = 'mongodb://localhost:27017/kombu'
AMQP_QUEUE = 'alerts'
AMQP_TOPIC = 'notify'

Every change required a stop/start of the container!

I just wanted to share this in case someone else runs into it!

Command line options fail to run

Describe the bug
Running any of the command line options when exec'ed into a docker container fails with the same error messages.

To Reproduce
Steps to reproduce the behavior:

  1. sudo docker exec -it alerta /bin/bash
  2. Run 'alerta watch' or 'alerta top' or any other switch for alerta and the same error messages will be returned:
alerta@071c919c2588:/$ alerta watch
Traceback (most recent call last):
  File "/venv/bin/alerta", line 10, in <module>
    sys.exit(cli())
  File "/venv/lib/python3.6/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/venv/lib/python3.6/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/venv/lib/python3.6/site-packages/click/core.py", line 1134, in invoke
    Command.invoke(self, ctx)
  File "/venv/lib/python3.6/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/venv/lib/python3.6/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/venv/lib/python3.6/site-packages/click/decorators.py", line 17, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/venv/lib/python3.6/site-packages/alertaclient/cli.py", line 54, in cli
    config.get_remote_config()
  File "/venv/lib/python3.6/site-packages/alertaclient/config.py", line 57, in get_remote_config
    remote_config = r.json()
  File "/venv/lib/python3.6/site-packages/requests/models.py", line 897, in json
    return complexjson.loads(self.text, **kwargs)
  File "/usr/local/lib/python3.6/json/__init__.py", line 354, in loads
    return _default_decoder.decode(s)
  File "/usr/local/lib/python3.6/json/decoder.py", line 339, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/local/lib/python3.6/json/decoder.py", line 357, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

Expected behavior
A terminal view of the alerts, or some other data back such as uptime, etc.

Desktop:

  • OS: Ubuntu 18.04 - Docker 18.9.2
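The JSONDecodeError at the bottom of that traceback just means get_remote_config() received something other than JSON, typically an HTML page or an empty body, which suggests the client endpoint (ALERTA_ENDPOINT or ~/.alerta.conf) is pointing at the web UI rather than the API. The failure mode itself is easy to illustrate:

```python
import json

# What a web-UI URL returns instead of the API's JSON config:
html = '<html><body>not the API</body></html>'

try:
    json.loads(html)  # the same parse that r.json() performs
except json.JSONDecodeError as exc:
    print(exc)  # Expecting value: line 1 column 1 (char 0)
```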

Docker image alerta/alerta-web:5.2.4 runs alerta-server 5.2.3

See below the result of pip freeze in alerta-web:5.2.4 container

docker run -ti --rm --name alerta --entrypoint "bash" alerta/alerta-web:5.2.4
alerta@1b466c311dae:/$ source venv/bin/activate
(venv) alerta@1b466c311dae:/$ pip freeze | grep "alerta"
alerta==5.2.0
alerta-server==5.2.3

Is this expected behaviour?

container is getting exited as soon as it is up

As soon as I run the following command: docker run --name alerta-web --link alerta-db:mongo -d -p 80:80 alerta/alerta-web
I see the container is in Exited status, like below.
[root@master docker-alerta]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
579634042fd3 alerta/alerta-web "/docker-entrypoin..." 5 seconds ago Exited (1) 4 seconds ago alerta-web
b19bf630f240 mongo "docker-entrypoint..." 39 minutes ago Up 39 minutes 27017/tcp alerta-db

What could be the fix for this?
