Comments (7)

emollusion commented on July 18, 2024

Hi!

Yes, this did the trick.

Running on port 9491 was temporary to be able to run services side by side.

Thank you!

emollusion commented on July 18, 2024

I attempted to use this setup to define the second array now, and the Grafana dashboard picks it up without any trouble, so I'm not sure what is different between the two setups that was causing trouble in Grafana.

chrroberts-pure commented on July 18, 2024

Hi @emollusion, thanks for the issue! I am going to tag in the owner of those Grafana dashboards, who may be able to give a little more context on how the panels get their data and what variables are propagated through the panels.

@james-laing

With that said, if you are using our pre-built suggested dashboards, please follow the configurations as closely as possible.

We'd also be happy to schedule a Zoom session at the beginning of the year. Please email our distribution list [email protected]

james-laing commented on July 18, 2024

@emollusion this does appear to be an issue with the Prometheus config rather than the Exporter.

Are you able to share your prometheus.yaml config with us?

emollusion commented on July 18, 2024

Hi!

This is the working configuration at the moment; the pure-fa exporter service runs with its default config and without a token file. This maps both FA1 and FA2 in Grafana as expected.
prometheus.yml:

# PureStorage

# Scrape job for one Pure Storage FlashArray scraping /metrics/array
# Each Prometheus scrape requires a job name. In this example we have structured the name `exporter_endpoint_arrayname`
  - job_name: 'purefa_array_FA1'
    # Specify the array endpoint from /metrics/array
    metrics_path: /metrics/array
    # Provide FlashArray authorization API token
    authorization:
      credentials: apikey_fa1
    # Provide parameters to pass the exporter the device to connect to. Provide FQDN or IP address
    params:
      endpoint: ['fqdn_fa1']

    static_configs:
    # Tell Prometheus which exporter should make the request
    - targets:
      - ipv4_exporter:9490
      # Finally provide labels to the device.
      labels:
        # Instance should be the device name and is used to correlate metrics between different endpoints in Prometheus and Grafana. Ensure this is the same for each endpoint for the same device.
        instance: hostname_fa1
        # location, site and env are specific to your environment. Feel free to add more labels, but maintain these three to minimize changes to Grafana, which expects location, site and env as filter variables.
        location: location
        site: site
        env: environment

# Each Prometheus scrape requires a job name. In this example we have structured the name `exporter_endpoint_arrayname`
  - job_name: 'purefa_array_FA1_vol'
    metrics_path: /metrics/volumes
    authorization:
      credentials: apikey_fa1
    params:
      endpoint: ['fqdn_fa1']

    static_configs:
    - targets:
      - ipv4_exporter:9490
      labels:
        instance: hostname_fa1
        location: location
        site: site
        env: environment

# Each Prometheus scrape requires a job name. In this example we have structured the name `exporter_endpoint_arrayname`
  - job_name: 'purefa_array_FA1_hosts'
    metrics_path: /metrics/hosts
    authorization:
      credentials: apikey_fa1
    params:
      endpoint: ['fqdn_fa1']

    static_configs:
    - targets:
      - ipv4_exporter:9490
      labels:
        instance: hostname_fa1
        location: location
        site: site
        env: environment

# Each Prometheus scrape requires a job name. In this example we have structured the name `exporter_endpoint_arrayname`
  - job_name: 'purefa_array_FA1_pods'
    metrics_path: /metrics/pods
    authorization:
      credentials: apikey_fa1
    params:
      endpoint: ['fqdn_fa1']

    static_configs:
    - targets:
      - ipv4_exporter:9490
      labels:
        instance: hostname_fa1
        location: location
        site: site
        env: environment

# Each Prometheus scrape requires a job name. In this example we have structured the name `exporter_endpoint_arrayname`
  - job_name: 'purefa_array_FA1_dir'
    metrics_path: /metrics/directories
    authorization:
      credentials: apikey_fa1
    params:
      endpoint: ['fqdn_fa1']

    static_configs:
    - targets:
      - ipv4_exporter:9490
      labels:
        instance: hostname_fa1
        location: location
        site: site
        env: environment

  - job_name: 'purefa_array_FA2'
    # Specify the array endpoint from /metrics/array
    metrics_path: /metrics/array
    # Provide FlashArray authorization API token
    authorization:
      credentials: apikey_fa2
    # Provide parameters to pass the exporter the device to connect to. Provide FQDN or IP address
    params:
      endpoint: ['fqdn_fa2']
    static_configs:
    # Tell Prometheus which exporter should make the request
    - targets:
      - ipv4_exporter:9490
      # Finally provide labels to the device.
      labels:
        # Instance should be the device name and is used to correlate metrics between different endpoints in Prometheus and Grafana. Ensure this is the same for each endpoint for the same device.
        instance: hostname_fa2
        # location, site and env are specific to your environment. Feel free to add more labels, but maintain these three to minimize changes to Grafana, which expects location, site and env as filter variables.
        location: location
        site: site
        env: environment

# Each Prometheus scrape requires a job name. In this example we have structured the name `exporter_endpoint_arrayname`
  - job_name: 'purefa_array_FA2_vol'
    metrics_path: /metrics/volumes
    authorization:
      credentials: apikey_fa2
    params:
      endpoint: ['fqdn_fa2']

    static_configs:
    - targets:
      - ipv4_exporter:9490
      labels:
        instance: hostname_fa2
        location: location
        site: site
        env: environment

# Each Prometheus scrape requires a job name. In this example we have structured the name `exporter_endpoint_arrayname`
  - job_name: 'purefa_array_FA2_hosts'
    metrics_path: /metrics/hosts
    authorization:
      credentials: apikey_fa2
    params:
      endpoint: ['fqdn_fa2']

    static_configs:
    - targets:
      - ipv4_exporter:9490
      labels:
        instance: hostname_fa2
        location: location
        site: site
        env: environment

# Each Prometheus scrape requires a job name. In this example we have structured the name `exporter_endpoint_arrayname`
  - job_name: 'purefa_array_FA2_pods'
    metrics_path: /metrics/pods
    authorization:
      credentials: apikey_fa2
    params:
      endpoint: ['fqdn_fa2']

    static_configs:
    - targets:
      - ipv4_exporter:9490
      labels:
        instance: hostname_fa2
        location: location
        site: site
        env: environment

# Each Prometheus scrape requires a job name. In this example we have structured the name `exporter_endpoint_arrayname`
  - job_name: 'purefa_array_FA2_dir'
    metrics_path: /metrics/directories
    authorization:
      credentials: apikey_fa2
    params:
      endpoint: ['fqdn_fa2']

    static_configs:
    - targets:
      - ipv4_exporter:9490
      labels:
        instance: hostname_fa2
        location: location
        site: site
        env: environment

The configuration below is the one that is expected to work; it runs the pure-fa exporter service with a token file. The metrics appear to be mapped the same way for both FAs using either method.
I have compared the outputs of both methods, and the metric mappings are no different; the only differences are in the actual metric values, which is expected. But as soon as I use the method below, there are no metrics in Grafana. Grafana still picks up the FA hostnames just as with the method above, so both arrays appear in the list at the top right of the dashboard, but they show no metrics while using the method below.

systemd:

[Unit]
Description="PureStorage Multi FA prometheus exporter"
After=network.target

[Service]
ExecStart=/path/to/pure-fa-openmetrics-exporter -a 127.0.0.1 -p 9491 -t /path/to/tokens.yml

Type=exec
Restart=always
SyslogIdentifier=Pure-fa

[Install]
WantedBy=multi-user.target

tokens.yml:

<hostname_fa1>:
  address: fqdn_fa1
  api_token: apikey_fa1
<hostname_fa2>:
  address: fqdn_fa2
  api_token: apikey_fa2

prometheus.yml:

  - job_name: 'purefa_array'
    honor_timestamps: true
    scrape_interval: 30s
    scrape_timeout: 10s
    metrics_path: /metrics/array
    scheme: http
    follow_redirects: true
    #enable_http2: true
    relabel_configs:
    - source_labels: [job]
      separator: ;
      regex: (.*)
      target_label: __tmp_prometheus_job_name
      replacement: $1
      action: replace
    - separator: ;
      regex: (.*)
      target_label: job
      replacement: purefa_array
      action: replace
    - source_labels: [__address__]
      separator: ;
      regex: (.*)
      target_label: __param_endpoint
      replacement: $1
      action: replace
    - source_labels: [__param_endpoint]
      separator: ;
      regex: (.*)
      target_label: instance
      replacement: $1
      action: replace
    - separator: ;
      regex: (.*)
      target_label: __address__
      replacement: 127.0.0.1:9491  #  <== your exporter address and port goes here
      action: replace
    static_configs:
    - targets:           #  <== the list of your FlashArrays goes here
      - hostname_fa1
      - hostname_fa2
  - job_name: 'purefa_array_directories'
    honor_timestamps: true
    scrape_interval: 30s
    scrape_timeout: 10s
    metrics_path: /metrics/directories
    scheme: http
    follow_redirects: true
    #enable_http2: true
    relabel_configs:
    - source_labels: [job]
      separator: ;
      regex: (.*)
      target_label: __tmp_prometheus_job_name
      replacement: $1
      action: replace
    - separator: ;
      regex: (.*)
      target_label: job
      replacement: purefa_array_dir
      action: replace
    - source_labels: [__address__]
      separator: ;
      regex: (.*)
      target_label: __param_endpoint
      replacement: $1
      action: replace
    - source_labels: [__param_endpoint]
      separator: ;
      regex: (.*)
      target_label: instance
      replacement: $1
      action: replace
    - separator: ;
      regex: (.*)
      target_label: __address__
      replacement: 127.0.0.1:9491  #  <== your exporter address and port goes here
      action: replace
    static_configs:
    - targets:           #  <== the list of your FlashArrays goes here
      - hostname_fa1
      - hostname_fa2
  - job_name: 'purefa_array_hosts'
    honor_timestamps: true
    scrape_interval: 30s
    scrape_timeout: 10s
    metrics_path: /metrics/hosts
    scheme: http
    follow_redirects: true
    #enable_http2: true
    relabel_configs:
    - source_labels: [job]
      separator: ;
      regex: (.*)
      target_label: __tmp_prometheus_job_name
      replacement: $1
      action: replace
    - separator: ;
      regex: (.*)
      target_label: job
      replacement: purefa_array_hosts
      action: replace
    - source_labels: [__address__]
      separator: ;
      regex: (.*)
      target_label: __param_endpoint
      replacement: $1
      action: replace
    - source_labels: [__param_endpoint]
      separator: ;
      regex: (.*)
      target_label: instance
      replacement: $1
      action: replace
    - separator: ;
      regex: (.*)
      target_label: __address__
      replacement: 127.0.0.1:9491  #  <== your exporter address and port goes here
      action: replace
    static_configs:
    - targets:           #  <== the list of your FlashArrays goes here
      - hostname_fa1
      - hostname_fa2
  - job_name: 'purefa_array_pods'
    honor_timestamps: true
    scrape_interval: 30s
    scrape_timeout: 10s
    metrics_path: /metrics/pods
    scheme: http
    follow_redirects: true
    #enable_http2: true
    relabel_configs:
    - source_labels: [job]
      separator: ;
      regex: (.*)
      target_label: __tmp_prometheus_job_name
      replacement: $1
      action: replace
    - separator: ;
      regex: (.*)
      target_label: job
      replacement: purefa_array_pods
      action: replace
    - source_labels: [__address__]
      separator: ;
      regex: (.*)
      target_label: __param_endpoint
      replacement: $1
      action: replace
    - source_labels: [__param_endpoint]
      separator: ;
      regex: (.*)
      target_label: instance
      replacement: $1
      action: replace
    - separator: ;
      regex: (.*)
      target_label: __address__
      replacement: 127.0.0.1:9491  #  <== your exporter address and port goes here
      action: replace
    static_configs:
    - targets:           #  <== the list of your FlashArrays goes here
      - hostname_fa1
      - hostname_fa2
  - job_name: 'purefa_array_volumes'
    honor_timestamps: true
    scrape_interval: 30s
    scrape_timeout: 10s
    metrics_path: /metrics/volumes
    scheme: http
    follow_redirects: true
    #enable_http2: true
    relabel_configs:
    - source_labels: [job]
      separator: ;
      regex: (.*)
      target_label: __tmp_prometheus_job_name
      replacement: $1
      action: replace
    - separator: ;
      regex: (.*)
      target_label: job
      replacement: purefa_array_vol
      action: replace
    - source_labels: [__address__]
      separator: ;
      regex: (.*)
      target_label: __param_endpoint
      replacement: $1
      action: replace
    - source_labels: [__param_endpoint]
      separator: ;
      regex: (.*)
      target_label: instance
      replacement: $1
      action: replace
    - separator: ;
      regex: (.*)
      target_label: __address__
      replacement: 127.0.0.1:9491  #  <== your exporter address and port goes here
      action: replace
    static_configs:
    - targets:           #  <== the list of your FlashArrays goes here
      - hostname_fa1
      - hostname_fa2
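
One quick way to narrow this down is to query the exporter directly, bypassing Prometheus. This is only a sketch using the placeholders from the config above: the endpoint query parameter is the same one the __param_endpoint relabel rule makes Prometheus append, and no Authorization header is used here because this model relies on the exporter's tokens.yml:

# Request the same URL Prometheus builds after relabelling (endpoint = target hostname)
curl 'http://127.0.0.1:9491/metrics/array?endpoint=hostname_fa1'
curl 'http://127.0.0.1:9491/metrics/array?endpoint=hostname_fa2'

If both commands return metrics, the exporter and token file are working, and the problem is more likely in the labels Prometheus attaches to the scraped data.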

james-laing commented on July 18, 2024

If you wish to use the relabelling prometheus.yaml model, we need to add an environment label for the Grafana dashboard.

We need to add the env label to each static_configs: section.

For example:

    static_configs:
    - targets:           #  <== the list of your FlashArrays goes here
      - hostname_fa1
      - hostname_fa2
      labels:
        env: env

Using this model we cannot easily customise an environment variable for each array, but it will meet the Grafana dashboard requirements.
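
For reference, a minimal sketch of that static_configs block with the labels filled in, reusing the placeholder values from the per-array configuration above (which also labelled each target with location and site, the other dashboard filter variables):

    static_configs:
    - targets:           #  <== the list of your FlashArrays goes here
      - hostname_fa1
      - hostname_fa2
      labels:
        # same placeholder values as in the per-array jobs above
        env: environment
        location: location
        site: site

If each array needs its own env value, one workaround would be separate static_configs entries within the same job, each with its own targets list and labels block, at the cost of some repetition.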

I notice you have specified port 9491 for the FlashArray Exporter, perhaps for the token-file instance of the exporter. Just note that the FlashBlade Exporter uses port 9491 by default, in case you have a FlashBlade.

james-laing commented on July 18, 2024

Excellent - I'm glad you're up and running.
