
sql_exporter's Introduction

Prometheus SQL Exporter

Database-agnostic SQL exporter for Prometheus.

Overview

SQL Exporter is a configuration-driven exporter that exposes metrics gathered from DBMSs, for use by the Prometheus monitoring system. Out of the box, it provides support for MySQL, PostgreSQL, Microsoft SQL Server and ClickHouse, but any DBMS for which a Go driver is available may be monitored after rebuilding the binary with the DBMS driver included.

The collected metrics and the queries that produce them are entirely configuration-defined. SQL queries are grouped into collectors -- logical groups of queries, e.g. query stats or I/O stats, mapped to the metrics they populate. Collectors may be DBMS-specific (e.g. MySQL InnoDB stats) or custom, deployment-specific (e.g. pricing data freshness). This means you can quickly and easily set up custom collectors to measure data quality, whatever that might mean in your specific case.

Per the Prometheus philosophy, scrapes are synchronous (metrics are collected on every /metrics poll) but, in order to keep load at reasonable levels, minimum collection intervals may optionally be set per collector, producing cached metrics when queried more frequently than the configured interval.
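
For example, a collector that runs expensive queries can declare its own minimum interval in its definition file, overriding global.min_interval; scrapes arriving within that interval are served from cache. A minimal sketch (the collector name and interval are illustrative):

# expensive_stats.collector.yml
collector_name: expensive_stats

# Run these queries at most once every 5 minutes, regardless of how often
# Prometheus polls /metrics; more frequent scrapes get the cached metrics.
min_interval: 5m

metrics:
  # metric definitions go here, as usual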

Usage

Get Prometheus SQL Exporter, either as a packaged release, as a Docker image or build it yourself:

$ go install github.com/free/sql_exporter/cmd/sql_exporter

then run it from the command line:

$ sql_exporter

Use the -help flag to get help information.

$ ./sql_exporter -help
Usage of ./sql_exporter:
  -config.file string
      SQL Exporter configuration file name. (default "sql_exporter.yml")
  -web.listen-address string
      Address to listen on for web interface and telemetry. (default ":9399")
  -web.metrics-path string
      Path under which to expose metrics. (default "/metrics")
  [...]
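
For example, to point the exporter at a specific configuration file and a different listen port (the path and port here are illustrative):

$ ./sql_exporter -config.file=/etc/sql_exporter/sql_exporter.yml -web.listen-address=:9400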

Configuration

SQL Exporter is deployed alongside the DB server it collects metrics from. If both the exporter and the DB server are on the same host, they will share the same failure domain: they will usually be either both up and running or both down. When the database is unreachable, /metrics responds with HTTP code 500 Internal Server Error, causing Prometheus to record up=0 for that scrape. Only metrics defined by collectors are exported on the /metrics endpoint. SQL Exporter process metrics are exported at /sql_exporter_metrics.
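
On the Prometheus side, the exporter is scraped like any other target. A minimal scrape configuration sketch (the job name and host are illustrative; :9399 is the exporter's default listen port):

scrape_configs:
  - job_name: 'sql_exporter'
    static_configs:
      - targets: ['dbserver1.example.com:9399']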

The configuration examples listed here only cover the core elements. For a comprehensive and comprehensively documented configuration file, check out documentation/sql_exporter.yml. You will find ready-to-use "standard" DBMS-specific collector definitions in the examples directory. You may contribute your own collector definitions and metric additions if you think they could be more widely useful, even if they are merely different takes on already covered DBMSs.

./sql_exporter.yml

# Global settings and defaults.
global:
  # Subtracted from Prometheus' scrape_timeout to give us some headroom and prevent Prometheus from
  # timing out first.
  scrape_timeout_offset: 500ms
  # Minimum interval between collector runs: by default (0s) collectors are executed on every scrape.
  min_interval: 0s
  # Maximum number of open connections to any one target. Metric queries will run concurrently on
  # multiple connections.
  max_connections: 3
  # Maximum number of idle connections to any one target.
  max_idle_connections: 3

# The target to monitor and the list of collectors to execute on it.
target:
  # Data source name always has a URI scheme that matches the driver name. In some cases (e.g. MySQL)
  # the scheme gets dropped or replaced to match the DSN format expected by the driver.
  data_source_name: 'sqlserver://prom_user:prom_password@dbserver1.example.com:1433'

  # Collectors (referenced by name) to execute on the target.
  collectors: [pricing_data_freshness]

# Collector definition files.
collector_files: 
  - "*.collector.yml"

Collectors

Collectors may be defined inline, in the exporter configuration file, under collectors, or they may be defined in separate files and referenced in the exporter configuration by name, making them easy to share and reuse.
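
For instance, the collector defined in the standalone file below could equally be inlined into sql_exporter.yml (a minimal sketch of the inline form):

# In sql_exporter.yml, instead of (or alongside) collector_files:
collectors:
  - collector_name: pricing_data_freshness
    metrics:
      - metric_name: pricing_update_time
        # ... same metric definition as in the standalone file below ...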

The collector definition below generates gauge metrics of the form pricing_update_time{market="US"}.

./pricing_data_freshness.collector.yml

# This collector will be referenced in the exporter configuration as `pricing_data_freshness`.
collector_name: pricing_data_freshness

# A Prometheus metric with (optional) additional labels, value and labels populated from one query.
metrics:
  - metric_name: pricing_update_time
    type: gauge
    help: 'Time when prices for a market were last updated.'
    key_labels:
      # Populated from the `market` column of each row.
      - Market
    static_labels:
      # Arbitrary key/value pair
      portfolio: income
    values: [LastUpdateTime]
    query: |
      SELECT Market, max(UpdateTime) AS LastUpdateTime
      FROM MarketPrices
      GROUP BY Market
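
When scraped, this collector produces output along these lines (the label and timestamp values are illustrative):

# HELP pricing_update_time Time when prices for a market were last updated.
# TYPE pricing_update_time gauge
pricing_update_time{market="US",portfolio="income"} 1.609459e+09
pricing_update_time{market="EU",portfolio="income"} 1.609455e+09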

Data Source Names

To keep things simple and yet allow fully configurable database connections, SQL Exporter uses DSNs (like sqlserver://prom_user:prom_password@dbserver1.example.com:1433) to refer to database instances. However, because the Go sql library does not allow for automatic driver selection based on the DSN (i.e. an explicit driver name must be specified), SQL Exporter uses the scheme part of the DSN (the part before the ://) to determine which driver to use.

Unfortunately, while this works out of the box with the MS SQL Server and PostgreSQL drivers, the MySQL driver's DSN format does not include a scheme and the ClickHouse one uses tcp://. So SQL Exporter does a bit of massaging of DSNs for the latter two drivers in order for this to work:

DB         | SQL Exporter expected DSN                                            | Driver sees
-----------|----------------------------------------------------------------------|--------------------------------------------------------------
MySQL      | mysql://user:passw@protocol(host:port)/dbname                        | user:passw@protocol(host:port)/dbname
PostgreSQL | postgres://user:passw@host:port/dbname                               | unchanged
SQL Server | sqlserver://user:passw@host:port/instance                            | unchanged
ClickHouse | clickhouse://host:port?username=user&password=passw&database=dbname  | tcp://host:port?username=user&password=passw&database=dbname
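
For example, a MySQL target would be configured with the mysql:// form, which the exporter strips down to the DSN format the driver expects (the credentials, host and collector name here are illustrative):

target:
  data_source_name: 'mysql://prom_user:prom_password@tcp(dbserver1.example.com:3306)/mysql'
  collectors: [mysql_standard]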

Why It Exists

SQL Exporter started off as an exporter for Microsoft SQL Server, for which no reliable exporters existed. But what is the point of a configuration-driven SQL exporter if you're going to use it alongside two other exporters with wholly different world views and configurations, because you also have MySQL and PostgreSQL instances to monitor?

A couple of alternative database-agnostic exporters are available -- https://github.com/justwatchcom/sql_exporter and https://github.com/chop-dbhi/prometheus-sql -- but they both do the collection at fixed intervals, independent of Prometheus scrapes. This is partly a philosophical issue, but practical issues are not hard to imagine: jitter; duplicate data points; or collected but not scraped data points. The control they provide over which labels get applied is limited, and the base label set is spammy. And finally, configurations are not easily reused without copy-pasting and editing across jobs and instances.

sql_exporter's People

Contributors

alin-amana, burningalchemist, free, james-m-tubbs, progesteves, xxorde


sql_exporter's Issues

Reload config file while the exporter is running

Hey, first of all I wanted to say that I'm using the exporter and it's amazing!
I wanted to ask how to reload the config file while sql_exporter is still running on a Windows machine.
I saw the answer you gave to Tangxuye, but I tried to do it myself and didn't quite get it.
If you can help me, that would be amazing!
Thank you

Feature request: Support metrics from DataDog-style stored procedures

DataDog supports collecting metrics by calling stored procedures (at least on MS SQL Server databases). These return metric, type, value and tag columns. (It seems this is mainly limited to SQL Server databases; other databases have other methods for custom metrics...)

These are directly usable if all rows return the same metric, but cases with multiple metrics in the output require filtering the output and running the stored procedure multiple times.

Support for this would make migration from DataDog to Prometheus simpler.

(DataDog also supports a different format for PostgreSQL custom metrics, which is closer to what is currently supported.)

IBM DB2 support

I have adapted this project to support IBM DB2 and added some example metrics: https://github.com/nubenum/sql_exporter

Maybe it comes in handy for someone else. I would be glad to make a pull request; however, since there are so many added dependencies, it's probably not worth adding them for everyone. I'm not sure whether the build process could be adapted to conditionally include only the required dependencies. Maybe you want to have a look at it; otherwise, this issue is just informational and can be closed.

How to run sql_exporter without being DB Local

How can we run sql_exporter without being DB-local? We have a use case that requires connecting to multiple databases (MySQL/Oracle) and to multiple instances of these databases. Being DB-local is not an option for us, as most of these are production databases and getting approval to run exporters locally would take a really long time. Is there any way to run exporters externally and configure them to run against multiple databases, e.g. having multiple targets with multiple collectors and DSNs?

Last gathering time in SQL query

Is it possible to pass the scrape interval into the SQL query? It would be useful to get information for the last <scrape_interval> seconds.
For example, query may look like this:
SELECT COUNT(*) FROM <TABLE> WHERE <TABLE>.TIME > NOW() - <scrape_interval>

Report an inscrutable message

When I used the sql_exporter-0.5.linux-amd64.tar.gz package, I could not connect to the SQL Server address I had configured in the sql_exporter.yaml file. When sql_exporter is running, it always shows me this message: No metrics gathered, [from Gatherer #1] Unable to open tcp connection with host 'localhost:1433': dial tcp [::1]:1433: connect: connection refused. Why is it localhost?

Query output value mapping

I'm using your sql_exporter... it's great! However, I'm having a hard time mapping the query output back to Prometheus. Can you please share a document if you have one? For example, I want to check for disabled users; my query is like:

- metric_name: mssql_account_locked
  type: counter
#  type: gauge
  help: 'Account locked status'
  key_labels:
    - username
  value_label: status
  values: [cntr_value]
  query_ref: account_locked_out


- query_name: account_locked_out
  query: |
    SELECT
      name as username, is_disabled as cntr_value
    FROM sys.sql_logins

I should see a metric named "mssql_account_locked" in Prometheus, right?

However, I can't see it.

Query latency as a metric

It would be very nice to be able to expose the latency of a certain (custom) query as a metric.

Stored procedure with parameter

Hello there,

first of all, thanks for your exporter; it's been serving us very nicely for a few months now :)
However, I have a problem I can't wrap my head around and it's driving me nuts!

We mostly use stored procedures to extract data from our server, and so far we've been able to get the values we wanted. But recently we've added a stored procedure which requires a parameter; here's the configuration from the collector:

- metric_name: sql_dbtools_size
  type: gauge
  help: 'Tools Database size'
  key_labels:
    - DB_NAME
  values: [(SizeMB)]
  query: TOOLS.Monitoring.P_DBsize @Dbname='TOOLS'

If you call it from an MSSQL GUI, you get something like this:

    SERVER_NAME  | DB_NAME | (SizeMB) | (SizeGB)
    mssql_server |   TOOLS |      350 |     0,35

The problem is, I don't get any values at all when I query the exporter's /metrics endpoint.
I can't find the corresponding "HELP" or "TYPE" strings, and there's no line with "sql_dbtools_size" :(

The query seems to run fine. I've got some "Extra column" logs, but I get the same logs with other, working queries (and getting rid of them by adapting the configuration with the correct key_labels and values/value_label doesn't solve the problem anyway).

I've tried putting the query in another collector, changing the quotes (single, double) around the dbname parameter, and using a different query syntax (using EXEC); nothing seems to work, even though it should (the logs don't show any errors).

Do you have any idea what's happening? Is there a problem with the use of stored procedures?

Thank you for any information you could give me !

Passing DSN as argument fails if data_source_name field is missing from config

Hey there! We use the config.data-source-name flag to pass the DSN to this exporter more securely. It appears that the exporter fails to start if the data_source_name field is missing from the config, even though it's not used in this case.

This appears to be because the unmarshalling validation done here happens before config.data-source-name is evaluated a couple of lines later.

A fix might be to make the unmarshalling validation aware of this flag? Not sure... what do you think?

Support HTTPS config

I'm a big fan of sql_exporter, using it extensively! So first of all, thank you for this nice piece of software. 🙂

One thing I'd really like to see is the ability to run sql_exporter with HTTPS, maybe similar to how node_exporter handles it. Are you planning on adding such a feature? If not, would you be interested in accepting a pull request?

Getting: 194513 promhttp.go:38] Error gathering metrics: [from Gatherer #1]...Execution canceled by operator

Hi

I added support for the Vertica database.
On some clusters I get the following errors:

systemctl status vertica_exporter.service -l
● vertica_exporter.service - prometheus vertica exporter
Loaded: loaded (/etc/systemd/system/vertica_exporter.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2020-05-21 08:36:34 UTC; 1h 25min ago
Main PID: 194513 (sql_exporter)
CGroup: /system.slice/vertica_exporter.service
└─194513 /etc/prometheus/sql_exporter/sql_exporter --config.file=/etc/prometheus/sql_exporter/vertica/config/vertica_sql_exporter.yml

May 21 09:59:02 vrtc003.taboolasyndication.com sql_exporter[194513]: I0521 09:59:02.216867 194513 promhttp.go:38] Error gathering metrics: [from Gatherer #1] [, collector="vertica_1_hour", query="vertica_global_status_top_ros_containers_tables_per_node"] Error: [57014] Execution canceled by operator

sql_exporter[194513]: * [from Gatherer #1] [, collector="vertica_standard_1_minute", query="vertica_global_status_catalog_locks"] context deadline exceeded

I tried to enable logging, but it prints the same info.
How do I set the log to debug mode?

Thanks.

No result returned for log shipping monitor query

I used the following yml file to run a specific query.

If you copy and paste the query into SSMS, you get a list of databases and the restored delta returned.

However, when using SQL Exporter, the service started up fine, but when browsing to http://localhost:9399/metrics I received an HTTP 500 internal error.

I'm running MSSQL 2017 on Windows Server 2016.

# A collector defining standard metrics for Microsoft SQL Server.
#
# It is required that the SQL Server user has the following permissions:
#
#   GRANT VIEW ANY DEFINITION TO
#   GRANT VIEW SERVER STATE TO
#
collector_name: mssql_standard

# Similar to global.min_interval, but applies to the queries defined by this collector only.
#min_interval: 0s

metrics:
  - metric_name: mssql_monitor_secondary_logshipping
    type: gauge
    help: 'Monitor secondary sql restored delta.'
    key_labels:
      - secondary_db
    values: [restored_delta]
    query: |
      SELECT secondary_database as secondary_db, DATEDIFF(mi, last_restored_date, GETDATE()) as restored_delta FROM msdb..log_shipping_monitor_secondary

Support data source configuration outside of sql_exporter.yml

Instead of specifying the data_source_name directly in sql_exporter.yml, provide an option to point to an external file or environment variable that contains the data source config.

Use case: I want to store the credentials to the database securely in a Kubernetes secret, while the sql_exporter.yml would be a Kubernetes config map. There is no easy way to interpolate the secret into the config map. But if I were able to point to an external file or env variable, the problem would be solved.

Metrics from some (not all) queries are missing

Hi
I use sql_exporter with PostgreSQL (I tried the last release and the trunk version).
I use a collector with three queries (queries run 20-60 seconds each).

I see strange behavior: in some cases (always on the first request), the service does not return some of the rows of the SQL query.

Example below (the last two requests contain all the rows -- see the size).
Can you tell me how this can be fixed?

Thank you!

root@han:~/run# wget   http://localhost:9400/metrics
--2018-04-24 16:40:06--  http://localhost:9400/metrics
Resolving localhost (localhost)... ::1, 127.0.0.1
Connecting to localhost (localhost)|::1|:9400... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2541 (2.5K) [text/plain]
Saving to: ‘metrics.5’

metrics.5                    100%[===========================================>]   2.48K  --.-KB/s    in 0s

2018-04-24 16:40:19 (233 MB/s) - ‘metrics.5’ saved [2541/2541]

root@han:~/run# wget   http://localhost:9400/metrics
--2018-04-24 16:40:23--  http://localhost:9400/metrics
Resolving localhost (localhost)... ::1, 127.0.0.1
Connecting to localhost (localhost)|::1|:9400... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4929 (4.8K) [text/plain]
Saving to: ‘metrics.6’

metrics.6                    100%[===========================================>]   4.81K  --.-KB/s    in 0s

2018-04-24 16:40:34 (308 MB/s) - ‘metrics.6’ saved [4929/4929]

root@han:~/run# wget   http://localhost:9400/metrics
--2018-04-24 16:40:36--  http://localhost:9400/metrics
Resolving localhost (localhost)... ::1, 127.0.0.1
Connecting to localhost (localhost)|::1|:9400... connected.
HTTP request sent, awaiting response... 200 OK
Length: 6738 (6.6K) [text/plain]
Saving to: ‘metrics.7’

metrics.7                    100%[===========================================>]   6.58K  --.-KB/s    in 0s

2018-04-24 16:40:47 (51.0 MB/s) - ‘metrics.7’ saved [6738/6738]

root@han:~/run# wget   http://localhost:9400/metrics
--2018-04-24 16:40:53--  http://localhost:9400/metrics
Resolving localhost (localhost)... ::1, 127.0.0.1
Connecting to localhost (localhost)|::1|:9400... connected.
HTTP request sent, awaiting response... 200 OK
Length: 6738 (6.6K) [text/plain]
Saving to: ‘metrics.8’

metrics.8                    100%[===========================================>]   6.58K  --.-KB/s    in 0s

2018-04-24 16:41:04 (362 MB/s) - ‘metrics.8’ saved [6738/6738]

Interpolate environment variables in config

As a general solution to the type of problem which #7 deals with, it would be nice to be able to reference environment variables in some or all config. For example, if we were to use golang templates, it might look like:

target:
  data_source_name: 'sqlserver://prom_user:{{.PROM_PASSWORD}}@dbserver1.example.com:1433'

This could be done either for the entire file or for specific keys. I think it would be straightforward enough and I'd be happy to do it if you think it would be useful.

Run as a Windows service

It would be great to have the ability to run sql_exporter as a Windows service. Currently I register the service with sc.exe, but it crashes with error 7000:

The sql_exporter service failed to start due to the following error:
The service did not respond to the start or control request in a timely fashion.

can I add static labels to target?

Hi, I want to add some static labels to a target. Is this possible?

Example:

target:
  # Data source name always has a URI scheme that matches the driver name. In some cases (e.g. MySQL)
  # the scheme gets dropped or replaced to match the DSN format expected by the driver.
  data_source_name: 'sqlserver://prom_user:prom_password@dbserver1.example.com:1433'

  # Collectors (referenced by name) to execute on the target.
  collectors: [pricing_data_freshness]

  # My static labels
  labels:
    - my_label: 'my-label-value'
    - some-another-label: 'another-label-value'

errors in metric and target files

Hi, I cloned the repo and tried to build it, but I got the following errors:

.\metric.go:88:28: cannot use mf.config.ValueType() (type "github.com/free/sql_exporter/vendor/github.com/prometheus/client_golang/prometheus".ValueType) as type "github.com/prometheus/client_golang/prometheus".ValueType in return argument
.\metric.go:234:12: undefined: "github.com/prometheus/client_golang/prometheus".LabelPairSorter
.\target.go:67:12: undefined: "github.com/prometheus/client_golang/prometheus".LabelPairSorter
PS C:\Users\marco esquivel\Documents\GitHub\sql_exporter>

Maybe I'm missing something. Can you help me with it?

Getting sql_exporter.yml: no such file or directory exception while running this app

Hello,

I am getting the below exception after running go install:

I0506 16:57:12.334059 22679 main.go:52] Starting SQL exporter (version=, branch=, revision=) (go=go1.14.2, user=, date=)
I0506 16:57:12.334665 22679 config.go:18] Loading configuration from sql_exporter.yml
F0506 16:57:12.334701 22679 main.go:56] Error creating exporter: open sql_exporter.yml: no such file or directory
goroutine 1 [running]:
github.com/golang/glog.stacks(0xc00026dd00, 0xc000274000, 0x74, 0xd0)
/Users/debanitaghosh/go/pkg/mod/github.com/golang/[email protected]/glog.go:769 +0xb8
github.com/golang/glog.(*loggingT).output(0x1cb7a40, 0xc000000003, 0xc000262a80, 0x1c500ea, 0x7, 0x38, 0x0)
/Users/debanitaghosh/go/pkg/mod/github.com/golang/[email protected]/glog.go:720 +0x372
github.com/golang/glog.(*loggingT).printf(0x1cb7a40, 0x3, 0x16cd763, 0x1b, 0xc00017dee8, 0x1, 0x1)
/Users/debanitaghosh/go/pkg/mod/github.com/golang/[email protected]/glog.go:655 +0x14b
github.com/golang/glog.Fatalf(...)
/Users/debanitaghosh/go/pkg/mod/github.com/golang/[email protected]/glog.go:1148
main.main()
/Users/debanitaghosh/go/pkg/mod/github.com/free/[email protected]/cmd/sql_exporter/main.go:56 +0x542

But I do have sql_exporter.yml under the documentation folder. I'm not sure why I am getting this error.

Thanks in advance.

Add tagged builds to Docker Hub

I recently bumped into an issue: sql_exporter is used across the environment, but since the only image tag available on Docker Hub is latest, it turned out that some containers were still on v0.4 or even v0.3.

Since you already have automated builds in Docker (#3) and also tagged releases, it's extremely easy to add versioned image tags, so users could explicitly pin images and be aware of versions without looking into logs.

In this case, I'd just recommend adding the settings from the following picture (particularly, line three) to the Docker Hub settings:

I'm not sure if Docker Hub will pick it up immediately (according to the ticket), so maybe a quick re-tag is needed, but it would be absolutely great to have them.

I might provide some additional information or help, if needed. :)

Propagate static_labels from target collectors config to metrics to enable re-use of query config across a fleet of identical but unique server DSNs

This is a usability feature request more than anything, but it enables use at scale across immense fleets without needless duplication of config entries.

In the following scenario, having the ability to propagate a static_label from the target collectors config to the metrics as an inherited label would be amazing. Granted, you can't technically label the target, but the metrics should not have to be uniquely configured for each and every unique DSN in a unique metric collector config when they are identical in every way except for the label. Adding the ability to specify a static label at the target/collector level would enable clean and reproducible filtering of targets in queries and dashboards without unnecessary duplicative metric collector configs.

In our deployment scenario, we have no static collector hosts, and all scrapers are serverless hosts with dynamic DNS/Host names dynamically registered to Prometheus.

Scenario:

  • multiple db servers with identical databases, all uniquely identified by a static cluster identifier
  • all db servers require the same monitoring queries; the only difference between each db server / cluster is the label.

Problem:

  • Currently you have to create a UNIQUE metric config file for EACH and EVERY individual DSN to get UNIQUE labels on them, to enable filtering in Prometheus per UNIQUE ID without breaking queries, dashboards, and alerts every time the scraper cycles.
  • In our case, each db server has a remote sql_exporter scraper running in an ephemeral docker container with dynamic names and addresses. The addresses are dynamically registered to SRV and picked up by Prometheus when the container cycles, to ensure monitoring continuity on redeploy updates of the container and container hardware failure / lifecycle.

Result:

  • Thousands of individual metric config templates, each differing only by the unique labels identifying the DSN/server/cluster they are associated with.
  • Thousands of metric configurations differing only by a cluster_identifier label (this seems needlessly duplicative; if propagated from the target, it would enable re-use of metric configs across all servers of a similar persuasion).

Preferred Result:

  • 1 config per DSN
  • 1 shared config for all DSNs, propagating the unique label / identifier specified in the DSN config
  • Eliminates needless config duplication, and minimizes the chance of accidental config drift when adding, removing, and updating monitoring queries across thousands of sql_exporter containers, since all configs are identical save for the DSN
  • Makes templating of the config generation far less painful and duplicative
  • Perhaps this identifier label could be applied to the collector_files/collector_names to minimize duplication?

Current State Example:

  • unique-server-id.yml
global:
  scrape_timeout: 5m
  # Subtracted from Prometheus' scrape_timeout to give us some headroom and prevent Prometheus from timing out first.
  scrape_timeout_offset: 500ms
  # Minimum interval between collector runs: by default (0s) collectors are executed on every scrape.
  min_interval: 30s
  # Maximum number of open connections to any one target. Metric queries will run concurrently on multiple connections,
  # as will concurrent scrapes.
  max_connections: 3
  # Maximum number of idle connections to any one target. Unless you use very long collection intervals, this should
  # always be the same as max_connections.
  max_idle_connections: 3

# The target to monitor and the collectors to execute on it.
target:
  data_source_name: "postgres://secret_monitoring_user:[email protected]:5439/unique_identifier?sslmode=require"
  collectors: [unique-server-id]

# Collector files specifies a list of globs. One collector definition is read from each matching file.
collector_files:
  - "unique-server-id.collector.yml"

(Example with only one metric instead of dozens, for clarity and brevity. The primary problem lies in requiring a static_label per metric_name, when the static_label is really just a static label for the DSN to enable metric filtering based on that label.)

  • "unique-server-id.collector.yml"

metrics:
  - metric_name: prefix_something_nothing_age_seconds
    static_labels:
      cluster_identifier: unique-server-id
    type: gauge
    help: "Age in seconds of something from nothing"
    values: [age_seconds]
    query: |
      SELECT datediff(s, timestamp, getdate()) AS age_seconds FROM something.nothing ORDER BY timestamp DESC limit 1;

Redshift Support

Hi,

I need to add a Redshift DB for people to add metrics against in SQL Exporter. I have tried every combination of postgresql/redshift/jdbc, but I end up getting: Jan 02 17:40:42 prometheus1 sql_exporter[26793]: I0102 17:40:42.782689 26793 promhttp.go:38] Error gathering metrics: [from Gatherer #1] [job="redshift-prod", target="****"] sql: unknown driver "postgresql" (forgotten import?)

These are the configurations that don't work:

redshift_source: 'postgresql://username:password@tcp(host:port)/schema'
redshift_source: 'jdbc:postgresql://username:password@tcp(host:port)/schema'
redshift_source: 'jdbc:redshift://username:password@tcp(host:port)/schema'
redshift_source: 'redshift://username:password@tcp(host:port)/schema'

Maybe I am missing something super basic, or Redshift is not supported.

I have a similar target for a MySQL datasource which works fine:
mysql_source: 'mysql://username:password@tcp(host:port)/schema'

I guess my question comes down to: is the Redshift driver supported? If yes, what would the connection string look like? If not, can we expect support for it, and when? What are the alternatives?

How to write values if I want to query multiple fields

If I want to query multiple fields, how do I write values? It seems that values only supports one field.
Example:

- metric_name: oracle_tablespace_detail
  type: gauge
  help: 'oracle_tablespace_detail.'
  values:
    - free
    - usage
    - total
  key_labels:
    - tablespace_name
    - autoextend
  query: |
......................................................

error:

[root@localhost database_exporter]#  ./database_exporter -config.file /usr/local/database_exporter/database_exporter/config/oracle_exporter.yml
I0312 10:24:03.484394   32528 main.go:53] Starting Database Exporter (version=, branch=, revision=) (go=go1.10.3, user=, date=)
I0312 10:24:03.484992   32528 config.go:18] Loading configuration from /usr/local/database_exporter/database_exporter/config/oracle_exporter.yml
F0312 10:24:03.488097   32528 main.go:57] Error creating exporter: value_label must be defined for metric with multiple values "oracle_tablespace_detail"
goroutine 1 [running]:
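
For reference, the error message points at the fix: when a metric has multiple values, a value_label must be defined to tell the resulting series apart. A minimal sketch using the value_label/values mechanism shown elsewhere in this document (the label name `dimension` is illustrative):

- metric_name: oracle_tablespace_detail
  type: gauge
  help: 'oracle_tablespace_detail.'
  key_labels:
    - tablespace_name
    - autoextend
  # Each row yields three samples, distinguished by the `dimension` label.
  value_label: dimension
  values: [free, usage, total]
  query: |
    ...  # the tablespace query, unchanged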

How to show through grafana

I am new to this. I want to display the monitored metrics through Grafana, but I haven't found a suitable dashboard. Can you suggest a solution?

Context deadline exceeded error doesn't go away

We have a collector that checks the responsiveness of a CrateDB cluster:

collector_name: responsivity_collector

# This metric is intended to alert us to when CrateDB Cloud clusters are unresponsive.
# When a cluster became unresponsive, queries such as selecting from sys.tables remained
# responsive, but queries against sys.shards or sys.nodes were hanging.
# This query tests this responsivity in order to give us an indication that the cluster
# is hanging, instead of discovering it through a customer complaint.
# We are not actually interested in the output *specifically* of this metric, only
# that it is returned.
metrics:
- metric_name: responsivity
  type: gauge
  help: 'Indicates whether the CrateDB node is responding to queries. Will not return if the node is stuck.'
  value_label: responsive
  values: [responsive, states]
  query: |
    SELECT count(state) as states, 1 AS responsive
    FROM sys.shards; 

The problem, though, is that when a Crate node goes down, we get an error from the SQL exporter saying that the context deadline has been exceeded (exactly what we would expect, and exactly what we want); but even after the node comes back up and Crate is responsive again, the context deadline is still being exceeded.

What we would expect/want is to get the context deadline exceeded error while CrateDB is unresponsive, but for the error to stop when CrateDB becomes responsive again.

To deal with this issue, we currently have to manually restart the SQL exporter so that it connects successfully again.

Note: the connection to CrateDB is through postgres wire protocol

Does the config file support multiple targets?

Hello,

I'm currently writing an Ansible role to deploy and configure your exporter (https://github.com/lrk/ansible-role-prometheus-sql-exporter if you want to check it out).

I'm not sure whether the config file supports multiple "target" entries. Right now I have a hash model with multiple targets/datasources configurable, and I generate multiple configuration files (each with one target inside).
I have to create multiple systemd services, each corresponding to one config file (and one target), as I want the exporter to run and restart automatically after failure or reboot.

I'm not sure if it was somewhere in the documentation, but I haven't found any clues; config.go seems to support multiple (undocumented) jobs, but only one target.

If multiple targets per config are not supported, I would like to request that feature.

Thank you in advance.
Best regards

An existing connection was forcibly closed by the remote host

First off, thanks for this exporter.

I've followed the instructions and installed it in the following environment:

Windows Server 2008 R2 Datacenter
Microsoft SQL SERVER 2008 R2

I've built sql_exporter from source using go 1.9.2 windows/amd64.
I'm using the default example yml files, except for changing the sqlserver:// configuration to use the proper login and host.

sql_exporter.exe starts up fine with the "Listening on :9399" message.
I then browse to the server:9399/metrics page.

And I receive the following error message:

No metrics gathered, [from Gatherer #1] Login error: read tcp 172.x.x.x:51778->172.x.x.x:1433: wsarecv: An existing connection was forcibly closed by the remote host.

I've double- and triple-checked my connection credentials separately using Microsoft SQL Server Management Studio, and the login credentials work correctly.

Any thoughts on why I'm getting this error?

Thanks

context deadline exceeded despite high timeout

I'm seeing an issue where scrapes of this exporter are failing with error context deadline exceeded.

I can't see how this is happening when:

  1. My timeout is high.
  2. The error occurs very soon after the scrape GET.

SQL Exporter logs

Mar 22 14:20:20 host sql_exporter[3668]: I0322 14:20:20.893037    3668 main.go:52] Starting SQL exporter (version=0.4, branch=master, revision=6dfbe982f1e42340e2a87e0d30599383c02f0bed) (go=go1.9.1, user=alin@wheat, date=2017 1128-15:26:05)
Mar 22 14:20:20 host sql_exporter[3668]: I0322 14:20:20.923174    3668 config.go:18] Loading configuration from /etc/sql_exporter/sql_exporter.yml
Mar 22 14:20:20 host sql_exporter[3668]: I0322 14:20:20.937702    3668 config.go:131] Loaded collector "foo-collector" from /etc/sql_exporter/collectors/foo-collector.collector.yml
Mar 22 14:20:20 host sql_exporter[3668]: I0322 14:20:20.937795    3668 collector.go:67] [, collector="foo-collector"] Non-zero min_interval (30s), using cached collector.
Mar 22 14:20:20 host sql_exporter[3668]: I0322 14:20:20.937849    3668 main.go:67] Listening on :9399
Mar 22 14:22:09 host sql_exporter[3668]: I0322 14:22:09.040451    3668 sql.go:88] Database handle successfully opened with driver postgres.
Mar 22 14:22:09 host sql_exporter[3668]: I0322 14:22:09.040451    3668 sql.go:88] Database handle successfully opened with driver postgres.
Mar 22 14:22:09 host sql_exporter[3668]: I0322 14:22:09.108490    3668 collector.go:124] [, collector="foo-collector"] Collecting fresh metrics: min_interval=30.000s cache_age=9223372036.855s
Mar 22 14:22:18 host sql_exporter[3668]: I0322 14:22:18.986887    3668 promhttp.go:38] Error gathering metrics: [from Gatherer #1] [, collector="foo-collector"] context deadline exceeded
Mar 22 14:22:52 host sql_exporter[3668]: I0322 14:22:52.166343    3668 promhttp.go:38] Error gathering metrics: 4 error(s) occurred:
Mar 22 14:22:52 host sql_exporter[3668]: * [from Gatherer #1] [, collector="foo-collector", query="query1"] context deadline exceeded
Mar 22 14:22:52 host sql_exporter[3668]: * [from Gatherer #1] [, collector="foo-collector", query="query2"] sql: Rows are closed
Mar 22 14:22:52 host sql_exporter[3668]: * [from Gatherer #1] [, collector="foo-collector", query="query2"] context deadline exceeded
Mar 22 14:22:52 host sql_exporter[3668]: * [from Gatherer #1] [, collector="foo-collector", query="query3"] context deadline exceeded

Postgres logs:

2019-03-22 14:22:18.779 GMT [3824] LOG:  duration: 9660.997 ms  execute 1: query4
2019-03-22 14:22:19.090 GMT [3825] LOG:  duration: 9971.390 ms  execute 1: query 1
2019-03-22 14:22:19.333 GMT [3826] LOG:  duration: 10206.832 ms  execute 1: query2
2019-03-22 14:22:52.032 GMT [3822] LOG:  duration: 42916.951 ms  execute 1: query3

SQL Exporter config:

global:
  scrape_timeout_offset: 3s
  min_interval: 30s
  max_connections: 16
  max_idle_connections: 16

Prometheus config:

  scrape_interval: 2m
  scrape_timeout: 115s

SQL exporter logs show that the database queries fail after 9 seconds. How can this happen when SQL exporter has 112 seconds to return in?

How can I add multiple oracle instances

Hi Team,

How can I add multiple database instances to the "database_exporter.yml" file? I observed that it takes only the last database connection string if I add them like below:
data_source_name: 'oracle://XX/XX@XX:1521/XX'
data_source_name: 'oracle://XX/XX@XX:1521/XX'
data_source_name: 'oracle://XX/XX@XX:1521/XX'
data_source_name: 'oracle://XX/XX@XX:1521/XX'

Please help

Update to README regarding MSSQL on Windows

Hello,

I wanted to make a note regarding scraping metrics from an MSSQL server with Prometheus. In addition to allowing Prometheus through the instance-level firewall (allow port 9399 from Prometheus in an AWS security group or equivalent), a Windows firewall rule also has to be created to allow the connection from Prometheus: create a rule in Windows to allow port 9399.

I ran into this today and figured it would be helpful for others to know.

Thanks!

Can't load package when downloading

Hi, when I try the command "go install github.com/free/sql_exporter/cmd/sql_exporter" on my Linux server, it doesn't download; the message says "Can't load package". I've searched here for how to get the package but couldn't find anything, and I don't know how to proceed with wget. Can you help me?

How does it run in docker?

Running it in Docker returns the message: "Error creating exporter: open sql_exporter.yml: no such file or directory".

Allow metric names to be generated as well as labels

For certain metrics, it would be nice for column names or values to be included in the metric name instead of the labels. For example, in Oracle, gv$resource_limit contains the current value as well as limit for each metric, and the correct way to do this would be to output:

oracle_enqueue_locks_limit{instance="1"} 70188
oracle_enqueue_locks_total{instance="1"} 1327
oracle_enqueue_locks_limit{instance="2"} 70188
oracle_enqueue_locks_total{instance="2"} 1174

instead of:

oracle_resource_limits{instance="1",name="enqueue_locks",stat="limit"} 70188
oracle_resource_limits{instance="1",name="enqueue_locks",stat="total"} 1327
oracle_resource_limits{instance="2",name="enqueue_locks",stat="limit"} 70188
oracle_resource_limits{instance="2",name="enqueue_locks",stat="total"} 1174

Currently, to get the first example, I'd need to create a duplicate query for each metric name, which results in a whole load more requests to the DB. Some sort of key_suffixes/value_suffix feature would be great!
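
A hypothetical sketch of what such a collector configuration might look like (the value_suffixes key does not exist in sql_exporter today; it is purely illustrative of the request):

metrics:
  - metric_name: oracle_enqueue_locks
    type: gauge
    help: 'Enqueue lock usage and limit per instance.'
    key_labels: [instance]
    # Hypothetical: append a per-column suffix to the metric name, yielding
    # oracle_enqueue_locks_total and oracle_enqueue_locks_limit.
    value_suffixes:
      _total: current_utilization
      _limit: limit_value
    query: |
      SELECT inst_id AS instance, current_utilization, limit_value
      FROM gv$resource_limit
      WHERE resource_name = 'enqueue_locks'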

How many data_source can sql_exporter configure?

Hi,
I have a point of confusion: how many data_source entries can be configured in the 'sql_exporter.yaml' file? I tried to put two data_source_name entries under the target node, but it ran with errors in the log.

Adding Vertica support

Hello,

Is there a chance that support for Vertica could be added to this project? I can add it myself but it would be easier to maintain one package instead of two :)

Thanks

non numeric labels from query

I apologise if I have missed something obvious, but I'm trying to attach string values as labels and failing, as all columns appear to be converted to either ints or floats.

For example, given the following collector.yml, I'd like the Version to be included as a label. Version is a semantic version string.

metrics:
  - metric_name: hk_record_count
    type: counter
    help: 'Count of records identified as ready for housekeeping, and removed by housekeeping.'
    key_labels:
      - TableName
    static_labels:
      # Arbitrary key/value pair
      host: test
    value_label: 'Counters'
    values: [ExpiredCount,DeletedCount]
    value_label: 'Version'
    values: [Version]
    query_ref: hk_metrics

queries:
  # housekeeping scorecards
  - query_name: hk_metrics
    query: |
      select
        TABLE_NAME as TableName,
        VERSION as Version,
        TOTAL_EXPIRED as ExpiredCount,
        TOTAL_DELETED as DeletedCount,
        THREAD_COUNT as THREAD_COUNT
      from HOUSEKEEP_SCORECARD
      order by TABLE_NAME

compilation failure - undefined: prometheus.LabelPairSorter

I have tried to compile database_exporter but I got a compilation error:

# go get github.com/Corundex/database_exporter
# cd go/src/github.com/Corundex/database_exporter/
# go build
# github.com/Corundex/database_exporter/exporter
go/src/github.com/Corundex/database_exporter/exporter/metric.go:232:12: undefined: prometheus.LabelPairSorter
go/src/github.com/Corundex/database_exporter/exporter/target.go:67:12: undefined: prometheus.LabelPairSorter

Incorrect value rounding when working with intervals.

I tried to graph database process delays with:

select time_to_sec(timediff(now(), last_update)) as time_skew from some_table;

The code always works for me when executed from the MySQL console; however, when I query the metrics I sometimes get:

time_skew 3.020399e+06

The value itself never exceeds 1000.

Load labels from JSON column

It would be great to have the possibility to load multiple labels for a value from a JSON column returned by the SQL query.

Example:

- metric_name: data_check
  type: gauge
  help: 'Data processing status'
  key_labels:
    - id
  json_labels: labels
  values: [ok]
  query: |
    SELECT id, ok, labels
    FROM data_monitoring_check

Database result:

| 888 |  0 | {"procid": "1", "procname": "download"}
| 889 |  1 | {"procid": "2", "procname": "upload"} 

Metrics:

data_check{id="888",procid="1",procname="download"} 0
data_check{id="889",procid="2",procname="upload"} 1

How to reload the config when sql_exporter is still running?

Well, I mean: if the configuration file has been modified, how do I reload it while sql_exporter is running on a Linux server? This is similar to the nginx command nginx -s reload. Sometimes restarting the service or killing the process is not suitable in a production environment.
