
ansible-grafana's People

Contributors

bengoa, bitchkat, bitphage, bngsudheer, bolek2000, boutetnico, bwolf, cloudalchemybot, dreeg, faxm0dem, hyzth, krzyzakp, lae, leitgab, madeinoz67, mxbossard, nicosto, nikosgraser, nikosmeds, obitech, paulfantom, rdemachkovych, richardheelin, rnhurt, rockandska, ruzickap, sarphram, superq, till, wiktor2200


ansible-grafana's Issues

Pass variables through playbook

Hi there, I've created a playbook which tries to pass some mandatory vars:

- hosts: all
  become: true
  vars:
    grafana_security.admin_password: adm_pwd
  roles:
    - role: cloudalchemy.grafana

...but it doesn't work:

ERROR! Invalid variable name in vars specified for Play: 'grafana_security.admin_password' is not a valid variable name

Please advise how to do this correctly.
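
For reference, a minimal sketch of a vars block that avoids this error: Ansible variable names cannot contain dots, so grafana_security has to be defined as a dictionary rather than a dotted key.

- hosts: all
  become: true
  vars:
    # grafana_security is a dict; admin_password is a key inside it
    grafana_security:
      admin_password: adm_pwd
  roles:
    - role: cloudalchemy.grafana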

Datasource provision does not specify the apiVersion

When we add a new datasource, jsonData does not work because apiVersion is not declared.
apiVersion should be declared at the top of the provisioning YAML; otherwise the file is treated as an apiVersion: 0 file and raises this warning:

[Deprecated] the datasource provisioning config is outdated. please upgrade

See here for the differences between the versions: github.com/grafana

It is currently possible to get jsonData working by renaming it to json_data, but IMHO it is better to specify that we are using apiVersion: 1.

A PR implementing apiVersion: 1 is incoming.
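
A hedged sketch of what the generated provisioning file could look like with the version declared (field names follow Grafana's datasource provisioning format; the Prometheus values are placeholders):

# /etc/grafana/provisioning/datasources/ansible.yml
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    jsonData:
      tlsSkipVerify: true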

500 internal server error with packagecloud.io

While executing the cloudalchemy ansible-grafana role, I get the following error:

TASK [cloudalchemy.grafana : Update Apt cache] *************************************************************************************************************
FAILED - RETRYING: Update Apt cache (5 retries left).
FAILED - RETRYING: Update Apt cache (4 retries left).
FAILED - RETRYING: Update Apt cache (3 retries left).
FAILED - RETRYING: Update Apt cache (2 retries left).
FAILED - RETRYING: Update Apt cache (1 retries left).
fatal: [192.168.33.10]: FAILED! => {"attempts": 5, "changed": false, "msg": "Failed to update apt cache: "}

When I look on the machine I see something like this:

vagrant@ubuntu-bionic:~$ sudo apt update
Hit:1 http://archive.ubuntu.com/ubuntu bionic InRelease
Hit:2 http://archive.ubuntu.com/ubuntu bionic-updates InRelease                                                                                     
Hit:3 http://archive.ubuntu.com/ubuntu bionic-backports InRelease                                                                                   
Hit:4 http://security.ubuntu.com/ubuntu bionic-security InRelease                                                                                   
Err:5 https://packagecloud.io/grafana/stable/debian jessie InRelease                                                          
  500  Internal Server Error [IP: 50.97.198.58 443]
Reading package lists... Done
Building dependency tree       
Reading state information... Done
173 packages can be upgraded. Run 'apt list --upgradable' to see them.
W: Failed to fetch https://packagecloud.io/grafana/stable/debian/dists/jessie/InRelease  500  Internal Server Error [IP: 50.97.198.58 443]
W: Some index files failed to download. They have been ignored, or old ones used instead.

Is this error persistent, or is packagecloud having problems at the moment?

support multiple grafana versions

This would allow applying new Grafana features before they arrive in a new release. It would also make it easier to maintain Grafana in production. A sketch of what pinning a version could look like is below.
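
A minimal sketch, assuming the role keeps the grafana_version variable mentioned in a later issue (default 'latest'): supporting multiple versions could mean pinning an exact release per host or group.

# hypothetical group_vars/grafana.yml
grafana_version: 5.4.3   # pin an exact release instead of 'latest'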

flush_handlers task does not support when conditional

TASK [cloudalchemy.grafana : Install plugins] *********************************************************************************************************************************************************************
 [WARNING]: flush_handlers task does not support when conditional

The problem is:

I have a task A that flushes handlers in role B.
Role A is executed before grafana, role B after it.
On the first run the play fails, because a flush_handlers task does not support a when conditional and is therefore executed every time.
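
For context, a hypothetical illustration of the pattern that triggers the warning: meta: flush_handlers ignores when and always runs.

- name: Flush handlers
  meta: flush_handlers
  # [WARNING]: flush_handlers task does not support when conditional
  when: some_condition   # this condition is silently ignored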

Setup Notification channel

This looks like a feature request, so please tag it as such.
The idea is to add a feature that sets up notification channel(s).
Typically this is done once at server setup, so it makes sense to handle it at the "Ansible stage".
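
A hedged sketch of how such a role variable could look; the grafana_alert_notifications structure below mirrors the one shown in a later issue in this list.

grafana_alert_notifications:
  - name: "Send alerts into Slack"
    type: slack
    isDefault: true
    settings:
      url: "REDACTED"          # Slack webhook URL
      recipient: "#monitoring"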

Issues provisioning datasources on Grafana in private subnet

All API calls use grafana_url as the base; however, if Grafana is being provisioned on a server in a private subnet, the public grafana_url may not always line up with the Grafana server's DNS name in the private subnet.

Example set up:

  • web-server Nginx on server in public subnet, reverse proxying https://monitoring.app.com/grafana to http://monitoring.production.app.com:3000
  • public monitoring.app.com is accessed from browser, resolved via public hosted zone
  • private monitoring.production.app.com is accessed by servers in the VPC, resolved via private hosted zone
  • grafana-server Grafana running on monitoring.production.app.com in the same VPC in a private subnet, exposing port 3000.

When running ansible-playbook, API calls are made to monitoring.app.com/grafana (configured via grafana_url), which is not correct. API calls should be made to monitoring.production.app.com.

I'm open to solutions and also happy to open a PR after deciding how to handle this. Maybe another variable, grafana_api_url, that can override grafana_url and is used for the API calls (sketched below)?
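
A hypothetical sketch of the proposed override; grafana_api_url is the variable suggested above and is not part of the role yet.

grafana_url: "https://monitoring.app.com/grafana"              # public URL, written to grafana.ini
grafana_api_url: "http://monitoring.production.app.com:3000"   # private URL used for API calls during provisioning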

proxy support

Hello,
this role doesn't support proxies, because the package module is run without environment: "{{ grafana_environment }}".

It thus doesn't work behind a proxy :(
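
A hedged sketch of the requested change, assuming grafana_environment holds the proxy settings (http_proxy/https_proxy):

- name: Install Grafana
  package:
    name: grafana
    state: latest
  environment: "{{ grafana_environment }}"   # pass proxy variables through to apt/yum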

WTF bug with grafana_dashboards_dir == dashboards

If grafana_dashboards_dir is set to dashboards, fileglob finds roles/cloudalchemy.grafana/files/dashboards and ignores all other folders in the playbook directory.

This causes some big WTF moments: grafana_dashboards_dir=myboards will load dashboards from it (e.g. myproject/dashboards, with the role inside myproject/roles/cloudalchemy.grafana), while grafana_dashboards_dir=dashboards won't find anything, because it finds myproject/roles/cloudalchemy.grafana/files/dashboards and stops there instead of looking into myproject/dashboards.

I think this should be noted in the documentation (defaults/main.yaml) or, better, fixed to work with the 'dashboards' name. A sketch of the lookup involved is below.
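
For context, a hypothetical illustration of the lookup involved (task name and structure are assumptions, not the role's exact code): with_fileglob searches the role's files/ directory before the playbook directory, so a directory named dashboards inside files/ shadows one next to the playbook.

- name: Import local dashboards
  copy:
    src: "{{ item }}"
    dest: /tmp/dashboards/
  with_fileglob:
    - "{{ grafana_dashboards_dir }}/*.json"   # resolved relative to the role's files/ first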

preflight checks

We should sanitize the configuration specified by the user in the preflight.yml tasks. These checks should cover as many user-defined parameters as are needed to start Grafana.
This is essential, since Grafana doesn't have a configuration validation command.
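
A minimal sketch of the kind of check preflight.yml could perform; the admin-password assertion mirrors the failure message quoted in a later issue.

- name: Fail when grafana admin password isn't set
  fail:
    msg: "Please specify grafana admin password (grafana_security.admin_password)"
  when: grafana_security.admin_password | default('') | length == 0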

User creation

The role could create additional users where possible (whether it is possible should be checked in tasks/preflight.yml).

This can probably be accomplished via the Grafana API.
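
A hedged sketch using Grafana's admin users endpoint (the endpoint is from Grafana's HTTP API docs; the user fields are placeholders):

- name: Create grafana user
  uri:
    url: "{{ grafana_url }}/api/admin/users"
    method: POST
    user: "{{ grafana_security.admin_user }}"
    password: "{{ grafana_security.admin_password }}"
    force_basic_auth: true
    body_format: json
    body:
      name: "Example User"
      login: example
      password: "changeme"
    status_code: 200
  no_log: true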

[Question] Dashboard add is done twice

Hi,

Not sure if I've misunderstood the purpose of this, but it seems to me that the role does exactly the same operation twice: when the datasource doesn't already exist it is created both via the API and via the provisioning capability, but when it already exists it is updated only via the provisioning capability.

Why not make a choice and use only the API or only the provisioning?

Regards,

- name: Create grafana datasource
  uri:
    url: "{{ grafana_url }}/api/datasources"
    user: "{{ grafana_security.admin_user }}"
    password: "{{ grafana_security.admin_password }}"
    force_basic_auth: true
    method: POST
    body_format: json
    body: "{{ item | to_json }}"
  with_items: "{{ grafana_datasources }}"
  no_log: true
  when: ((datasources['json'] | selectattr("name", "equalto", item['name'])) | list) | length == 0

- name: Create datasources file
  copy:
    dest: "/etc/grafana/provisioning/datasources/ansible.yml"
    content: |
      delete_datasources: []
      datasources:
      {{ grafana_datasources | to_nice_yaml }}
    backup: false
  notify: restart grafana

Grafana dashboard import fails

It looks like the Grafana REST API for importing dashboards may have changed; I needed to modify dashboards.yml:

 - name: import grafana dashboards through API
   uri:
-    url: "{{ grafana_api_url }}/api/dashboards/db"
+    url: "{{ grafana_api_url }}/api/dashboards/import"

wrong file perm for datasources/ansible.yml

Hi,

The file is created with the default umask for root and can't be read by Grafana.

[root@dadplx120 datasources]# ls -la
total 8
drwxr-xr-x. 2 root grafana   44 Aug 29 09:14 .
drwxr-xr-x. 4 root grafana   43 Aug 29 08:50 ..
-rw-------. 1 root grafana  164 Aug 29 09:14 ansible.yml
-rw-r-----. 1 root grafana 1505 Aug 29 08:50 sample.yaml

thanks
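
A hedged sketch of a possible fix: set explicit ownership and mode on the provisioning file so the grafana group can read it (0640 is an assumption, matching the sample.yaml permissions above).

- name: Create datasources file
  copy:
    dest: "/etc/grafana/provisioning/datasources/ansible.yml"
    content: |
      delete_datasources: []
      datasources:
      {{ grafana_datasources | to_nice_yaml }}
    owner: root
    group: grafana
    mode: "0640"
  notify: restart grafana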

Package is always updated to latest version

I would prefer that the grafana package (on Debian) is not updated to the latest version but kept at the currently installed version.

E.g. by setting grafana_version: present.
However, this would need a modification in vars/debian.yml where grafana_package is set:
grafana_package: "grafana{{ (grafana_version != 'latest') | ternary('=' ~ grafana_version, '') }}"

Or did I miss how to achieve that behaviour?

Provisioning folders

Grafana v5 provides a new way of grouping dashboards into folders, which is also compatible with its provisioning method. We need to support this.
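
A hedged sketch of a dashboard provider entry that uses the folder field (format per Grafana v5's dashboard provisioning docs; the folder name and path are placeholders):

# /etc/grafana/provisioning/dashboards/ansible.yml
apiVersion: 1

providers:
  - name: ansible
    folder: 'Provisioned'
    type: file
    options:
      path: /var/lib/grafana/dashboards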

Preflight "tags: always" conflicts with --tags option

Hi hi!

I have a playbook that configures a full Prometheus + Grafana stack on a node via task dependencies, with each dependency tagged, e.g. tags: prometheus, tags: grafana.

The problem here is that if I just want to run the Prometheus reconfiguration (--tags=prometheus), bits and pieces of Grafana stuff still get run, since tags: always overrides the command-line --tags.

This is compounded by the fact that I have, as best practice dictates, the Grafana password stored as an Ansible vaulted variable. This means "Fail when grafana admin password isn't set" fails even when I don't want to touch the Grafana configuration at all, unless I have somehow entered the vault password.

import grafana dashboards failed

TASK [cloudalchemy.grafana : import grafana dashboards] **********************************************
task path: /usr/share/ansible/roles/cloudalchemy.grafana/tasks/dashboards.yml:66
Using module file /usr/lib/python2.7/site-packages/ansible/modules/net_tools/basics/uri.py
ESTABLISH LOCAL CONNECTION FOR USER: devops
EXEC /bin/sh -c 'echo ~ && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /home/devops/.ansible/tmp/ansible-tmp-1532592377.55-113906658255373" && echo ansible-tmp-1532592377.55-113906658255373="echo /home/devops/.ansible/tmp/ansible-tmp-1532592377.55-113906658255373" ) && sleep 0'
PUT /tmp/tmpzaS9f_ TO /home/devops/.ansible/tmp/ansible-tmp-1532592377.55-113906658255373/uri.py
EXEC /bin/sh -c 'chmod u+x /home/devops/.ansible/tmp/ansible-tmp-1532592377.55-113906658255373/ /home/devops/.ansible/tmp/ansible-tmp-1532592377.55-113906658255373/uri.py && sleep 0'
EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-dohzkqlqtjtbwmxbnvdmwlvldbhsmcta; /usr/bin/python /home/devops/.ansible/tmp/ansible-tmp-1532592377.55-113906658255373/uri.py; rm -rf "/home/devops/.ansible/tmp/ansible-tmp-1532592377.55-113906658255373/" > /dev/null 2>&1'"'"' && sleep 0'
failed: [lo] (item=None) => {
"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result"
}
to retry, use: --limit @/home/devops/ansible/playbook/install-grafana.retry

Readme

Readme needs to be updated. Following sections should be added/extended:

  • "Requirements" - at least list supported ansible versions and packages needed on deployer host
  • "Role Variables" - link to defaults/main.yml with additional links to grafana docs
  • "Dependencies" - should list "None", as this role isn't dependent on others
  • "License'

Mostly this should look similar to the template generated by ansible-galaxy init.

documentation example missing admin user requirement

Hello!

Could you please mention in the examples that it is mandatory to define the admin user and password variables?

This is the error with the default values:

TASK [cloudalchemy.grafana : Fail when grafana admin password isn't set] ******************************************************************************************************************************************************************
fatal: [grafana.vlab.lcl]: FAILED! => {"changed": false, "msg": "Please specify grafana admin password (grafana_security.admin_password)"}
	to retry, use: --limit @/home/andrasb/playbooks/vlab.retry

After I defined group vars it worked fine:

---
grafana_security:
  admin_user: admin
  admin_password: "myverysecurepassword"

Thank you in advance!

Importing dashboards will fail if datadir != /var/lib/grafana

If you set grafana_data_dir to a path different from /var/lib/grafana, the task Import grafana dashboards will fail with the following error:

Destination directory /var/lib/grafana/dashboards does not exist

This happens because the path is static (hardcoded) and does not use the dynamic value from grafana_data_dir. I have also noticed the same path in other parts of the role, which might cause similar issues. I'll send a PR.
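
A minimal sketch of the direction the PR could take, building the destination from grafana_data_dir instead of the hardcoded path (the task shown is an assumption, not the role's exact code):

- name: Import grafana dashboards
  copy:
    src: "{{ item }}"
    dest: "{{ grafana_data_dir }}/dashboards/"   # was hardcoded as /var/lib/grafana/dashboards
  with_fileglob:
    - /tmp/dashboards/*.json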

Problem installing on Debian: problems with the repo and its GPG key

Hello, I've got a problem installing Grafana with this role.
I get this error:

TASK [ansible-grafana : Import Grafana GPG signing key [Debian/Ubuntu]] ********
changed: [MY_IP]

TASK [ansible-grafana : Add Grafana repository [Debian/Ubuntu]] ****************
FAILED - RETRYING: Add Grafana repository [Debian/Ubuntu] (5 retries left).
ok: [MY_IP4]

TASK [ansible-grafana : Install Grafana] ***************************************
FAILED - RETRYING: Install Grafana (5 retries left).
FAILED - RETRYING: Install Grafana (4 retries left).
FAILED - RETRYING: Install Grafana (3 retries left).
FAILED - RETRYING: Install Grafana (2 retries left).
FAILED - RETRYING: Install Grafana (1 retries left).
fatal: [MY_IP]: FAILED! => {"attempts": 5, "changed": false, "msg": "No package matching 'grafana' is available"}

And when I SSH to server and try to install grafana manually I've got error:

Err:9 https://packagecloud.io/grafana/stable/debian stretch InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 40F370A1F9081B64
Reading package lists... Done
W: GPG error: https://packagecloud.io/grafana/stable/debian stretch InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 40F370A1F9081B64

I assume the problem is related to packagecloud.io and its GPG key.

Maybe you should replace the packagecloud repository with the official one, available here: http://docs.grafana.org/installation/debian/
Installation works with the official repository.
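
A hedged sketch of what switching to the official repository could look like; the key URL and repo line here are assumptions based on the Grafana Debian install docs linked above.

- name: Import Grafana GPG signing key [Debian/Ubuntu]
  apt_key:
    url: https://packages.grafana.com/gpg.key
    state: present

- name: Add Grafana repository [Debian/Ubuntu]
  apt_repository:
    repo: deb https://packages.grafana.com/oss/deb stable main
    state: present
    update_cache: true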

Alert deployment ?

Using Ansible to deploy our Grafana server, I was happy to find that the datasource and dashboard provisioning mechanisms work very well by creating YAML/JSON files. I assign the dashboards to a folder 'provisioned' so it's easy for our users to understand which dashboards are provided and which ones they have created on their own, without the latter getting overwritten.

The next item is the alerting system. Is there work in progress to support it in a similar fashion, or am I better off using sqlite to write the definitions into the alerts table directly?

Alerting notifications are not updated after being created

If you try to update the settings of an existing alerting notification channel after it was created, this role will not update it.

Steps to reproduce:

  1. Create a notification channel:

     grafana_alert_notifications:
       - name: "Send alerts into Slack"
         type: slack
         isDefault: true
         settings:
           url: "REDACTED"
           recipient: "#monitoring"

  2. Ensure the channel is created in Grafana.


  3. Update the settings of the notification channel (notice the addition of username):

     grafana_alert_notifications:
       - name: "Send alerts into Slack"
         type: slack
         isDefault: true
         settings:
           url: "REDACTED"
           recipient: "#monitoring"
           username: Grafana

  4. Check the notification settings in the Grafana UI.

Current result: settings are not updated, username remains blank.

Expected result: settings are updated, username = Grafana

Incorrect location for datasources provisioning directory

The datasources YAML file is currently being persisted in the Grafana data directory (/var/lib/grafana/provisioning/), whereas by default Grafana expects these files under /etc/grafana/provisioning. As a result, the YAML files currently being created have no effect. We should start saving the files under /etc/grafana/provisioning.

Datasources provisioning

Hello, your roles are very cool, but I can't configure datasource creation... I didn't touch the sample values:

# Datasources to configure
grafana_datasources: []
 - name: "Prometheus"
   type: "prometheus"
   access: "proxy"
   url: "http://prometheus.mydomain"
   basicAuth: true
   basicAuthUser: "admin"
   basicAuthPassword: "password"
   isDefault: true
   jsonData: '{"tlsAuth":false,"tlsAuthWithCACert":false,"tlsSkipVerify":true}'

And got this error :

ERROR! Syntax Error while loading YAML.
  did not find expected key

The error appears to have been in '/home/.../Ansible/monitoring/roles/grafana/defaults/main.yml': line 153, column 2, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

grafana_datasources: []
 - name: "Prometheus"
 ^ here

I don't understand what I'm doing wrong... Thank you, and sorry if it's a stupid question.
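
For reference, a minimal sketch of the corrected variable: the default grafana_datasources: [] has to be replaced entirely rather than followed by list items, because [] already assigns an empty list and the trailing items make the YAML invalid.

grafana_datasources:
  - name: "Prometheus"
    type: "prometheus"
    access: "proxy"
    url: "http://prometheus.mydomain"
    basicAuth: true
    basicAuthUser: "admin"
    basicAuthPassword: "password"
    isDefault: true
    jsonData: '{"tlsAuth":false,"tlsAuthWithCACert":false,"tlsSkipVerify":true}'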

adding datasource fails

Task:

  - name: add prometheus datasource
    grafana_datasource:
      name: prometheus
      grafana_url: "http://localhost:3000"
      url: "http://prometheus:9090"
      ds_type: prometheus
      basic_auth_user: admin
      basic_auth_password: admin

Error:

TASK [add prometheus datasource] **************************************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: a bytes-like object is required, not 'str'
fatal: [node01]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n  File \"/tmp/ansible_0qkfiabg/ansible_module_grafana_datasource.py\", line 540, in <module>\n    main()\n  File \"/tmp/ansible_0qkfiabg/ansible_module_grafana_datasource.py\", line 523, in main\n    result = grafana_create_datasource(module, module.params)\n  File \"/tmp/ansible_0qkfiabg/ansible_module_grafana_datasource.py\", line 376, in grafana_create_datasource\n    auth = base64.b64encode(to_bytes('%s:%s' % (data['grafana_user'], data['grafana_password'])).replace('\\n', ''))\nTypeError: a bytes-like object is required, not 'str'\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 1}
$ ansible --version
ansible 2.6.1
  python version = 3.6.5 (default, Jun 17 2018, 12:13:06) [GCC 4.2.1 Compatible Apple LLVM 9.1.0 (clang-902.0.39.2)]

Issue with grafana restart when including one or more datasources

Hi,

For some weird reason, the copy of the datasource file (/etc/grafana/provisioning/datasources/ansible.yml) is not done with the right ownership, and the mode doesn't allow all users to read it, making grafana-server fail to start. Could I kindly ask you to correct that, or do you want me to open a PR?

Thanks in advance

Dashboard title cannot be empty

It's similar to #77.

I can't import my Grafana dashboard:

- hosts: grafana
  vars:
    grafana_security:
      admin_user: admin
      admin_password: password42
    grafana_datasources:
      - name: prometheus
        type: prometheus
        url: 'http://localhost:9090'
        access: proxy
        basicAuth: false
    grafana_dashboards:
      - dashboard_id: 42
        revision_id: 1
        datasource: prometheus

    prometheus_targets:
      node:
      - targets:
        - localhost:9100
        - x.x.x.x:9100
  roles:
    - cloudalchemy.grafana
    - cloudalchemy.prometheus

After disabling the no_log: true, here is the output:

failed: [x.x.x.x] (item=/tmp/dashboards/42.json) => {"changed": false, "connection": "close", "content": "{\"message\":\"Dashboard title cannot be empty\"}", "content_length": "45", "content_type": "application/json", "date": "Tue, 06 Nov 2018 16:36:48 GMT", "item": "/tmp/dashboards/42.json", "json": {"message": "Dashboard title cannot be empty"}, "msg": "Status code was 400 and not [200]: HTTP Error 400: Bad Request", "redirected": false, "status": 400, "url": "http://0.0.0.0:3000/api/dashboards/db"}

Any ideas?

Prometheus dashboard cannot be imported - Invalid alert data. Cannot save dashboard

configuration:

grafana_dashboards:
  - dashboard_id: 5984  # Linux Alert nodes
    revision_id: 1
    datasource: prometheus
  - dashboard_id: 2121  # RabbitMQ
    revision_id: 1
    datasource: prometheus
  - dashboard_id: 617   # AWS EC2
    revision_id: 3
    datasource: cloudwatch

Error:

ok: [10.1.1.26] => (item=/tmp/dashboards/617.json)
failed: [10.1.1.26] (item=/tmp/dashboards/alerts-linux-nodes_rev1.json) => {"changed": false, "connection": "close", "content": "{\"message\":\"Invalid alert data. Cannot save dashboard\"}", "content_length": "55", "content_type": "application/json", "date": "Tue, 13 Nov 2018 15:40:45 GMT", "item": "/tmp/dashboards/alerts-linux-nodes_rev1.json", "json": {"message": "Invalid alert data. Cannot save dashboard"}, "msg": "Status code was 500 and not [200]: HTTP Error 500: Internal Server Error", "redirected": false, "status": 500, "url": "http://0.0.0.0:3000/api/dashboards/db"}
failed: [10.1.1.26] (item=/tmp/dashboards/2121.json) => {"changed": false, "connection": "close", "content": "{\"message\":\"Invalid alert data. Cannot save dashboard\"}", "content_length": "55", "content_type": "application/json", "date": "Tue, 13 Nov 2018 15:40:46 GMT", "item": "/tmp/dashboards/2121.json", "json": {"message": "Invalid alert data. Cannot save dashboard"}, "msg": "Status code was 500 and not [200]: HTTP Error 500: Internal Server Error", "redirected": false, "status": 500, "url": "http://0.0.0.0:3000/api/dashboards/db"}
ok: [10.1.1.26] => (item=/tmp/dashboards/aws-ec2_rev3.json)
failed: [10.1.1.26] (item=/tmp/dashboards/rabbitmq-metrics_rev1.json) => {"changed": false, "connection": "close", "content": "{\"message\":\"Invalid alert data. Cannot save dashboard\"}", "content_length": "55", "content_type": "application/json", "date": "Tue, 13 Nov 2018 15:40:47 GMT", "item": "/tmp/dashboards/rabbitmq-metrics_rev1.json", "json": {"message": "Invalid alert data. Cannot save dashboard"}, "msg": "Status code was 500 and not [200]: HTTP Error 500: Internal Server Error", "redirected": false, "status": 500, "url": "http://0.0.0.0:3000/api/dashboards/db"}
failed: [10.1.1.26] (item=/tmp/dashboards/5984.json) => {"changed": false, "connection": "close", "content": "{\"message\":\"Invalid alert data. Cannot save dashboard\"}", "content_length": "55", "content_type": "application/json", "date": "Tue, 13 Nov 2018 15:40:47 GMT", "item": "/tmp/dashboards/5984.json", "json": {"message": "Invalid alert data. Cannot save dashboard"}, "msg": "Status code was 500 and not [200]: HTTP Error 500: Internal Server Error", "redirected": false, "status": 500, "url": "http://0.0.0.0:3000/api/dashboards/db"}

grafana log

t=2018-11-13T10:40:45-0500 lvl=eror msg="Invalid alert data. Cannot save dashboard" logger=context userId=1 orgId=1 uname=admin error="Invalid alert data. Cannot save dashboard"
t=2018-11-13T10:40:45-0500 lvl=eror msg="Request Completed" logger=context userId=1 orgId=1 uname=admin method=POST path=/api/dashboards/db status=500 remote_addr=127.0.0.1 time_ms=42 size=55 referer=
t=2018-11-13T10:40:46-0500 lvl=eror msg="Invalid alert data. Cannot save dashboard" logger=context userId=1 orgId=1 uname=admin error="Invalid alert data. Cannot save dashboard"
t=2018-11-13T10:40:46-0500 lvl=eror msg="Request Completed" logger=context userId=1 orgId=1 uname=admin method=POST path=/api/dashboards/db status=500 remote_addr=127.0.0.1 time_ms=39 size=55 referer=
t=2018-11-13T10:40:47-0500 lvl=eror msg="Invalid alert data. Cannot save dashboard" logger=context userId=1 orgId=1 uname=admin error="Invalid alert data. Cannot save dashboard"
t=2018-11-13T10:40:47-0500 lvl=eror msg="Request Completed" logger=context userId=1 orgId=1 uname=admin method=POST path=/api/dashboards/db status=500 remote_addr=127.0.0.1 time_ms=36 size=55 referer=
t=2018-11-13T10:40:47-0500 lvl=eror msg="Invalid alert data. Cannot save dashboard" logger=context userId=1 orgId=1 uname=admin error="Invalid alert data. Cannot save dashboard"
t=2018-11-13T10:40:47-0500 lvl=eror msg="Request Completed" logger=context userId=1 orgId=1 uname=admin method=POST path=/api/dashboards/db status=500 remote_addr=127.0.0.1 time_ms=35 size=55 referer=

grafana_dashboards_dir is checked on target hosts and not on deployer

Hi, if I understand the grafana_dashboards_dir property correctly, the role should pick up the dashboards from this local directory and copy them to a temp directory. Is that the expected behaviour?

For now, the task named "Check if there are any dashboards in {{ grafana_dashboards_dir }}" in main.yml does not do this on localhost but on each remote host. This seems to be a mistake.

Moreover, at the end of dashboards.yml, every dashboard is imported through the API. I think that if we choose the provisioning method (grafana_use_provisioning), the files in the local /tmp/dashboards should simply be copied to the remote /etc/grafana/provisioning/dashboards.

What do you think?
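
A hedged sketch of the first suggestion, delegating the local-directory check to the deployer (the task name matches the one quoted above; the module choice and register name are assumptions):

- name: Check if there are any dashboards in {{ grafana_dashboards_dir }}
  find:
    paths: "{{ grafana_dashboards_dir }}"
    patterns: "*.json"
  register: found_dashboards
  delegate_to: localhost
  become: false
  run_once: true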
