Comments (29)
The exporter is deliberately built without a configuration file and with a very minimal set of configuration parameters, so there is nothing you need to pass it in order for it to work.
All you need is a working Prometheus instance with a configuration file containing a job for each FlashArray/FlashBlade you want to collect metrics from. A very basic Prometheus config file can be seen at https://github.com/PureStorage-OpenConnect/pure-fa-openmetrics-exporter/blob/master/examples/config/k8s/prometheus-configmap.yaml, as the value of the 'prometheus.yml' key. Because of the authentication the exporter specifically requires, you cannot leverage Prometheus labeling as a means to keep the configuration concise in a single job. Instead you need to define a job for each appliance you want to monitor, which keeps each job simple but leads to a proliferation of entries in the file.
from pure-fa-openmetrics-exporter.
I have set up that configuration file and placed it in a temp folder, but how do I get the Docker container to see that file? Or is there a specific place I need to put it so the container sees it automatically?
Not being able to take advantage of the labeling seems like a bad idea. Is there a reason both exporters (FA and FB) were created this way?
I ran this command and the container still isn't seeing the config file:
docker run -d --restart unless-stopped -p 9490:9490 --name Pure-Metrics -v /etc/docker/configs/pure-configmap.yaml:/etc/prometheus/prometheus.yml quay.io/purestorage/pure-fa-om-exporter:1.0.1
@andrewm659 I completely understand your point, but the issue arises from the fact that providing authentication/authorization credentials or tokens as query parameters is not good practice, so we decided to remove that possibility. Prometheus provides the authorization config key for that specific purpose. This approach unfortunately requires defining a job for each target FlashArray, as it is not possible to create a unique API token for a pool of arrays.
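As a rough sketch of what that one-job-per-array layout looks like (the job names, tokens, and addresses below are placeholders, not real values):

```yaml
scrape_configs:
  # One job, and one API token, per FlashArray
  - job_name: 'purefa-array-01'
    metrics_path: /metrics/array
    authorization:
      credentials: <api-token-for-array-01>
    params:
      endpoint: ['array-01.example.com']
    static_configs:
      - targets: ['exporter-host:9490']
  - job_name: 'purefa-array-02'
    metrics_path: /metrics/array
    authorization:
      credentials: <api-token-for-array-02>
    params:
      endpoint: ['array-02.example.com']
    static_configs:
      - targets: ['exporter-host:9490']
```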
@genegr Does my command look correct? When it loads, it doesn't seem to be seeing the config.
@ApickettLGA The command you are using runs the exporter properly; it is the additional volume you are trying to pass to the container that is not understood/used by the exporter. The exporter does not use any config file, and it has only the few options that are shown when it is executed with the -h/--help flag:
docker run quay.io/purestorage/pure-fa-om-exporter:v1.0.2 -h
Usage of /usr/local/bin/pure-fa-om-exporter:
  -debug
        Debug
  -host string
        Address of the exporter (default "0.0.0.0")
  -port int
        Port of the exporter (default 9490)
All the configuration happens on the Prometheus side: in its config file you add the target array as a query parameter of the scraped endpoint, like this:
...
scrape_configs:
  - job_name: 'purestorage-fa'
    metrics_path: /metrics
    authorization:
      credentials: 2b74f9eb-a35f-40d9-a6a6-33c13775a53c  # API token
    params:
      endpoint: ['10.11.112.6']  # array management IP address (or hostname)
    static_configs:
      - targets:
          - 10.10.8.4:9490  # exporter IP address (or hostname) and port
        labels:
          location: uk
          site: London
          instance: fa-prod-01
...
@genegr I now see the server itself needs this installed and then the scrape can be configured, but I did that and it's still not loading the config:
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label job=<job_name>
  # to any timeseries scraped from this config.
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["localhost:9090"]

  - job_name: 'purestorage-fa'
    metrics_path: /metrics
    authorization:
      credentials: blah
    params:
      endpoint: ['10.6.100.71']
    static_configs:
      - targets:
          - 10.6.25.132:9490
        labels:
          location: Frederick
          site: Maryland
          instance: mapsfa01
When I go to the page:
http://mvlmddstor01:9490/metrics/array?endpoint=host
Target authorization token is missing
I was hoping I could get some more assistance, as I know this is close but I'm not sure what else would be missing.
@genegr or anyone: I configured Prometheus on the Ubuntu server and set up the config as above, but the Pure exporter container is still not seeing the correct information. Is there something I'm missing, or anything I can check to troubleshoot further?
@ApickettLGA it's a little hard to read your prometheus.yaml config as it's not in a code block and markdown has reformatted it all. It looks like Prometheus is not picking up your authorization credentials, as YAML expects specific indentation.
There are some enhancements to the README files on the way. In the meantime, take a look at this example for configuring prometheus.yaml.
We recently posted some additional content on how to deploy Prometheus and Grafana with overview dashboards for FA/FB.
https://github.com/PureStorage-OpenConnect/pure-fa-openmetrics-exporter/tree/master/extra/grafana
There is a troubleshooting section there for Prometheus; try running this to prove your config works.
> promtool check config /etc/prometheus/prometheus.yml
Checking prometheus.yml
SUCCESS: prometheus.yml is valid prometheus config file syntax
Also, I notice you are running the query against /metrics. While this will work, it is an expensive query; it may take longer than the timeout and therefore fail. It is recommended to configure a job for each specific metric endpoint: /metrics/array, /metrics/volumes, /metrics/hosts, etc.
I have gone to the effort of formatting your YAML configuration, included jobs for /metrics/array, /metrics/volumes and /metrics/hosts, and run it through an online YAML validator to ensure it conforms to YAML formatting.
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]

  # Pure Storage OpenMetrics Exporter - mapsfa01
  - job_name: 'purefa_array_mapsfa01'
    metrics_path: /metrics/array
    # Bearer authorization token
    authorization:
      credentials: a12345bc6-d78e-901f-23a4-56b07b89012
    params:
      endpoint: ['10.6.100.71']
    static_configs:
      # purefa openmetrics exporter
      - targets:
          - 10.6.25.132:9490
        labels:
          location: Frederick
          site: Maryland
          instance: mapsfa01
          env: prod

  - job_name: 'purefa_hosts_mapsfa01'
    metrics_path: /metrics/hosts
    # Bearer authorization token
    authorization:
      credentials: a12345bc6-d78e-901f-23a4-56b07b89012
    params:
      endpoint: ['10.6.100.71']
    static_configs:
      # purefa openmetrics exporter
      - targets:
          - 10.6.25.132:9490
        labels:
          location: Frederick
          site: Maryland
          instance: mapsfa01
          env: prod

  - job_name: 'purefa_volumes_mapsfa01'
    metrics_path: /metrics/volumes
    # Bearer authorization token
    authorization:
      credentials: a12345bc6-d78e-901f-23a4-56b07b89012
    params:
      endpoint: ['10.6.100.71']
    static_configs:
      # purefa openmetrics exporter
      - targets:
          - 10.6.25.132:9490
        labels:
          location: Frederick
          site: Maryland
          instance: mapsfa01
          env: prod
Run promtool check config prometheus.yaml and restart Prometheus.
There are some troubleshooting steps in the README.md which might help you with each component.
For example, to check the exporter is working we need to pass the bearer token.
curl -H 'Authorization: Bearer a12345bc6-d78e-901f-23a4-56b07b89012' -X GET http://10.6.25.132:9490/metrics/array?endpoint=10.6.100.71
@james-laing Thank you for all your help; it's really appreciated. I have taken your information and applied it, and it seems to be passing the check, and I can run the curl, but when I go to the page I'm still not getting any results back.
Prometheus check:
promtool check config /etc/prometheus/prometheus.yml
Checking /etc/prometheus/prometheus.yml
SUCCESS: /etc/prometheus/prometheus.yml is valid prometheus config file syntax
And Curl Check looks fine:
curl -H 'Authorization: Bearer abc-blah' -X GET http://10.6.25.132:9490/metrics/array?endpoint=10.6.100.71
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 3.9432e-05
go_gc_duration_seconds{quantile="0.25"} 0.000119313
go_gc_duration_seconds{quantile="0.5"} 0.000154718
go_gc_duration_seconds{quantile="0.75"} 0.000213092
go_gc_duration_seconds{quantile="1"} 0.011197259
go_gc_duration_seconds_sum 369.193937972
go_gc_duration_seconds_count 2.215424e+06
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 93
Browser Check:
Page:
http://mvlmddstor01.lgamerica.com:9490/metrics?endpoint=host
Result:
Target authorization token is missing
@ApickettLGA The browser can't pass an authorisation token, which is why your browser check is failing with:
Result: Target authorization token is missing
The curl output you provided looks truncated. You need to scroll down and check you can see purefa_ metrics.
It looks like you are almost there, please try the troubleshooting section of the setup README.md.
https://github.com/PureStorage-OpenConnect/pure-fa-openmetrics-exporter/blob/master/extra/grafana/README.md
If there is an issue please provide feedback. If you are still requiring consultation to pull data from FlashArray, Pure Storage Professional Services offer a Monitoring and Observability service specifically for this. Just get in touch with your sales representative.
@ApickettLGA Did you see any purefa_xxxx results returned from the curl command?
Did you see targets listed in Prometheus, and can you see results with a PromQL query?
@james-laing Thank you for your response; sorry, we have had tons of outages so I couldn't get back to this.
When I run the curl command I can see it respond back:
go_memstats_stack_sys_bytes 917504
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 5.49562496e+08
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 9
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 1.46949344e+06
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 51
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 1.96665344e+08
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.67122622665e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.237245952e+09
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes 1.8446744073709552e+19
So I know it is connecting with the API key. I'm going to try troubleshooting Prometheus now. Thank you!
@james-laing I was able to connect to Prometheus and I'm able to run active queries against my Pure Storage and see live data.
@ApickettLGA great news, excellent!
Are you planning on pointing Grafana to your Prometheus TSDB to view broad and historical metrics?
https://github.com/PureStorage-OpenConnect/pure-fa-openmetrics-exporter/blob/master/extra/grafana/README.md
@james-laing - Well, right now I'm able to see the data, which is very good, but I'm still having an issue with the container web page; I'm still getting "Target authorization token is missing" when I visit the page.
And the plan is to have our system monitoring software DataDog query the page, pull the information, and put it on their page. This way all the stats will be collected, but I need the container page to work. It seems like I have all the pieces working except the container.
@ApickettLGA just out of curiosity why not use the Pure FlashArray DataDog integration that we already have?
https://docs.datadoghq.com/integrations/purefa/
Funny enough, DataDog sent us down the rabbit hole of using this approach to send data to DD. I'm trying the link you sent to see how that works. Thank you!
@andrewm659 this as well: http://theansibleguy.com/datadog-and-pure-storage/
@sdodsley Reading the DataDog steps, it does require me to have the container set up:
"The Pure Storage Prometheus exporter is installed and running in a containerized environment. Refer to the GitHub repo for installation instructions."
So this leads me back to my page not working correctly. If I can get the container webpage to work then I would be done with this step. I have Prometheus working, but it doesn't seem like the page is working from the container.
@ApickettLGA if your objective is to get data into Datadog, you would simplify your solution by using the Datadog integration for Pure FA. It works with both the deprecated pure-exporter and pure-fa-openmetrics-exporter.
Configure /etc/datadog-agent/conf.d/purefa.d/mapsfa01.conf.yaml like the following example:
init_config:
  timeout: 60

instances:
  # Pure Storage OpenMetrics Exporter - mapsfa01
  - openmetrics_endpoint: http://10.6.25.132:9490/metrics/array?endpoint=10.6.100.71
    tags:
      - env:prod
      - host:mapsfa01
      - fa_array_name:mapsfa01
    headers:
      Authorization: Bearer a12345bc6-d78e-901f-23a4-56b07b89012
    min_collection_interval: 30
Restart Datadog and check the metrics are being pulled with datadog-agent status in the purefa section:
purefa (1.0.99.22)
------------------
Instance ID: purefa:123456789abcdef [OK]
Configuration Source: file:/etc/datadog-agent/conf.d/purefa.d/mapsfa01.conf.yaml
Total Runs: 10
Metric Samples: Last Run: 414, Total: 7,731,703
Events: Last Run: 0, Total: 0
Service Checks: Last Run: 1, Total: 19,638
Average Execution Time : 1.273s
Last Execution Date : 2023-01-30 22:01:53 UTC (1675116113000)
Last Successful Execution Date : 2023-01-30 22:01:53 UTC (1675116113000)
If the data is already in Prometheus and you wish to continue with this solution, you will need to point Datadog to the Prometheus instance, not the exporter.
In your previous comments you've stated you are trying to connect to the pure-fa-openmetrics-exporter from the browser. This won't work, as the browser cannot pass the bearer token the way cURL, Prometheus and Datadog can.
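To illustrate the difference: clients like cURL, Prometheus and Datadog attach the token in an Authorization request header, which a plain browser address bar cannot do. A minimal Python sketch (the URL and token are the placeholder values from this thread, not real credentials):

```python
import urllib.request

# Placeholder values from earlier in the thread -- substitute your own
url = "http://10.6.25.132:9490/metrics/array?endpoint=10.6.100.71"
token = "a12345bc6-d78e-901f-23a4-56b07b89012"

# The bearer token travels in a request header, not in the URL. Pasting the
# bare URL into a browser sends no such header, hence the exporter replies
# "Target authorization token is missing".
req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
print(req.get_header("Authorization"))  # prints: Bearer a12345bc6-d78e-901f-23a4-56b07b89012

# For a live exporter, the metrics body would then be fetched with:
# body = urllib.request.urlopen(req).read()
```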
Hope this helps.
That makes sense; I didn't realize it wouldn't work from my browser. So I have configured my DD agent and am now waiting for my networking team to open the port for the connection, and then I think I might be good to go!
Thank you all for your assistance!
To correct @james-laing's comment: the current published datadog-purefa==1.0.1 does NOT include all metrics required for both pure-fa-openmetrics-exporter and pure-exporter.
I'm working on datadog-purefa==1.1.0, which will add the other required metrics and dashboards for pure-fa-openmetrics-exporter. I'll be submitting that to DataDog this week.
The datadog-purefa==1.0.99.22 that we are running in our test env as above is the beta/RC build.
Please let me know if you would like a wheel of datadog-purefa==1.0.99.22 to install on your datadog-agent.
@james-laing Thank you for all your help! @chrroberts-pure I finally have everything up and running and we are getting some metrics into DataDog, but we aren't seeing all of them, and now DataDog is asking us to upgrade to 1.1.0, but I don't see this version published. Can I upgrade to the latest version to get more metrics, or do you know when 1.1 will be released?
Hi, @ApickettLGA - Datadog PureFA integration v1.1 was released on Feb 28, 2023.
DataDog/integrations-extras#1750
https://docs.datadoghq.com/integrations/purefa/ also lists the version as v1.1.0
Also, feel free to reach out to the Observability channel in our Slack workspace; I'll be in there if you'd like to connect.
https://code-purestorage.slack.com/messages/C0357KLR1EU
Closing this as the latest OME release and DD integration provide all the fixes.