
sql_exporter's People

Contributors

alex1989hu, alin-amana, allanger, burningalchemist, corporate-gadfly, dependabot[bot], dupondje, elturkym, eugenekainara, fapiko, ferrous26, fffinkel, free, hhromic, james-m-tubbs, mafrosis, progesteves, puneetloya, qfritz, rc5hack, unix-way, xxorde

sql_exporter's Issues

RPM installation issues

Hi,
I just installed the RPM on a CentOS 7 machine.

I found the following issues.

  1. The -log.level parameter defined at ExecStart in the sql_exporter.service for the RPM packaging has a typo: it reads "-log-level" instead of "-log.level", which prevents the application from starting.
  2. As the /etc/default/prometheus-sql-exporter environment file is missing after the install, the user and group for sql_exporter are also not created. Is this intended?
  3. Related to 2., the /etc/sql_exporter directory does not get created due to the missing env file, which means sql_exporter.yml and mssql_example.collector.yml are not copied.

PS: Awesome application.

cheers,
Horfic

Does this exporter support IBM DB2 and Oracle DB?

Is your feature request related to a problem? Please describe.
I am looking for a solution to export metrics from DB2 and Oracle databases.
Currently we use https://github.com/albertodonato/query-exporter as our business monitoring solution.
However, the configuration format in albertodonato/query-exporter is hard to adapt to a Helm chart.

Describe the solution you'd like
The solution I need is to make the DB, metrics, and query SQL more flexible and reusable with a Helm chart.
There are many sites in our company, such as site A, site B, and site C, and we need to export the same metrics with the same SQL from each site's DB.
For example,
metric: goods_inventory_count
query: select count from tbl_inventory where <condition>
Both of the above are the same for every site, and I currently have to deploy 3 sql_exporters in 3 sites to connect to the different DBs.
So I want to build a Helm chart package that produces the same metrics and queries.

After surveying the options, sql_exporter seems to fit my needs, except that DB2 and Oracle are not supported. :(

Describe alternatives you've considered
Currently none, other than trying to build a config template for query-exporter myself, and I am sure that's really not easy.

If DB2 and Oracle could be supported, it would be much appreciated. Thanks a lot.

ARM64 support

Is your feature request related to a problem? Please describe.
I'd like to use sql_exporter on AWS Graviton instances, but right now only x86_64 images are published.

Describe the solution you'd like
Build ARM64 images and push them to Dockerhub in addition to the existing x86_64 images.

docker buildx will let you publish both images simultaneously. Example command:

docker buildx build --platform linux/amd64,linux/arm64 -t burningalchemist/sql_exporter:0.11 -t burningalchemist/sql_exporter:latest --push .

"up" metric only show in jobs mode

Hello, I have noticed that the "up" metrics only seem to work when I use the job mode in the sql_exporter.yml file. Specifically, I have defined a job with the following content:

jobs:
  - job_name: test
    collectors: [mssql_*]
    static_configs:
      - targets:
          mssql: 'sqlserver://prom_user:[email protected]:1433'

However, if I change to target mode by using the following configuration:

target:
   data_source_name: 'sqlserver://prom_user:[email protected]:1433'
   collectors: [mssql_*]

The "up" metrics disappear.

The up metric shows in jobs mode (this is what I want):
up{instance="mssql",job="test"} 0

Can anyone tell me what's the problem?

Collected Before Error

Describe the bug
At the moment any metrics the exporter collects create an error: "was collected before with the same name and label values"

To Reproduce
Steps to reproduce the behavior:

  1. Create a metric that won't change value on the next run

Expected behavior
I wouldn't have thought this was an error; surely this is a common occurrence?

No metrics for collector: query works in one case, not in other

Hi! I got one collector working but have issues with the second one. It's something with the query, which will be a long-running one in the end, like 15-30 minutes (it will run once a day). Its purpose is to scan all indexes on some tables across all DBs.

I started chopping the query up to find out when it works and when it won't. On some occasions I got "context deadline exceeded" messages. I read some old threads saying that one might need to increase the Prometheus and SQL Exporter scrape_timeout, but this didn't make a difference. Eventually I cut the query down bit by bit, and even though it takes less than 1 second in Microsoft SQL Server Management Studio, no metrics would be provided - timing shouldn't be an issue then, right?

Finally I got it working by selecting indexes for only one specific DB; this provides metrics in the UI:

      SELECT
          DB_NAME(DDIPS.database_id) AS 'database',
          DDIPS.index_id AS 'index',
          DDIPS.avg_fragmentation_in_percent AS 'avgFragmentationInPercent'
      FROM 
          sys.dm_db_index_physical_stats (DB_ID('<DatabaseName>'), NULL, NULL, NULL, NULL) AS DDIPS
      WHERE 
          DDIPS.database_id = DB_ID('<DatabaseName>')
      -- AND DDIPS.object_id = OBJECT_ID('AvlEvent')

This query provides results instantly when starting up. BUT, if I un-comment the last line, then suddenly I get no results. Both versions work instantly in SSMS.

SQL Exporter config:

# Global defaults.
global:
  scrape_timeout: 120s
  # Subtracted from Prometheus' scrape_timeout to give us some headroom and prevent Prometheus from timing out first.
  scrape_timeout_offset: 500ms
  # Minimum interval between collector runs: by default (0s) collectors are executed on every scrape.
  min_interval: 0s
  # Maximum number of open connections to any one target. Metric queries will run concurrently on multiple connections,
  # as will concurrent scrapes.
  max_connections: 3
  # Maximum number of idle connections to any one target. Unless you use very long collection intervals, this should
  # always be the same as max_connections.
  max_idle_connections: 3
  # Maximum amount of time a connection may be reused. Expired connections may be closed lazily before reuse.
  # If 0, connections are not closed due to a connection's age.
  max_connection_lifetime: 5m

Prometheus config:

global:
  scrape_interval:     115s # Default is 1m.
  evaluation_interval: 1m # Evaluate rules every 1 minute. Default is 1m.
  scrape_timeout: 110s     # For how long we let a scrape run before timing it out and it fails. Default is 10s.

Very grateful for any ideas on this!

P.S. I first ran version 0.5 but have changed to 0.9 now (I misunderstood which GitHub page was the active one). In the earlier version I could change the log level with "-v 4", but the new flag "-log.level debug" won't work (I wrote a comment here), so I can't get logging on this currently. The issue persists in the new version anyhow.

Call a specific job scrape via a query string

Hello,

I'm new to Prometheus and trying to have a different scraping interval per collector, without having to configure two sql_exporter services.

Despite using the example here: #37 (comment), I did not find a way for Prometheus to call one specific 'job' defined in sql_exporter.yml.

jobs:
  - job_name: test1
    collectors: [ports]
    static_configs:
      - targets:
          data1: 'mysql://test:test@localhost:3306/test'

  - job_name: test2
    collectors: [devices]
    static_configs:
      - targets:
          data2:  'mysql://test:test@localhost:3306/test'

With this configuration it adds an instance (target DB name) and job label, but I can't find how to call only one "sql_exporter job" from Prometheus or through curl, as I can't find a way to make params work.

Am I missing something?

v0.10.0 seems to have broken the configuration

Describe the bug
v0.10.0 seems to have broken the configuration

To Reproduce
Steps to reproduce the behavior:

  1. Upgrade to 0.10.0
  2. Append '-log.level debug'
  3. Start app
  4. See error
ts=2023-03-27T12:05:07.792Z caller=klog.go:108 level=warn func=Warningf msg="Starting SQL exporter (version=0.10.0, branch=HEAD, revision=3e9475ff001b3ab1c246ac97892e1f9cd02bd364) (go=go1.20.1, platform=linux/amd64, user=root@buildkitsandbox, date=20230228-23:45:09)"
ts=2023-03-27T12:05:07.792Z caller=klog.go:84 level=debug func=Infof msg="Loading configuration from /etc/sql_exporter/sql_exporter.yml"
ts=2023-03-27T12:05:07.793Z caller=klog.go:84 level=debug func=Infof msg="Loaded collector 'mssql_standard' from /etc/sql_exporter/collectors/mssql_standard.collector.yml"
ts=2023-03-27T12:05:07.795Z caller=klog.go:143 level=error func=Fatalf msg="Error creating exporter: unknown collector \"mssql_standard\" referenced in target"

Expected behavior
Run with configuration loaded.

Configuration
Take the example configuration.

Panic on metrics request when config file is empty

Describe the bug
When the config file is empty, the application starts normally but then panics on a request to /metrics:

ts=2023-01-19T12:20:27.778Z caller=klog.go:108 level=warn func=Warningf msg="Starting SQL exporter (version=0.9.2, branch=HEAD, revision=e1a5775cca58bf1110516baebc6fe7854d18279b) (go=go1.19.4, platform=linux/amd64, user=root@46fa4bc8dec6, date=20221222-15:10:06)"
ts=2023-01-19T12:20:27.778Z caller=klog.go:84 level=debug func=Infof msg="Loading configuration from sql-exporter.yaml"
ts=2023-01-19T12:20:27.778Z caller=klog.go:96 level=warn func=Warning msg="Listening on :9399"
ts=2023-01-19T12:20:27.779Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9399
ts=2023-01-19T12:20:27.779Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9399

// A request to `/metrics` is done here

2023/01/19 12:20:36 http: panic serving 172.17.0.1:43220: runtime error: invalid memory address or nil pointer dereference
goroutine 30 [running]:
net/http.(*conn).serve.func1()
        /usr/local/go/src/net/http/server.go:1850 +0xbf
panic({0x10e49a0, 0x1c74650})
        /usr/local/go/src/runtime/panic.go:890 +0x262
main.contextFor(0xc0004d8100, {0x1467240, 0xc0004b58c0})
        /app/cmd/sql_exporter/promhttp.go:77 +0x3c
main.ExporterHandlerFor.func1({0x7f2ebd6f93b8, 0xc0004b26e0}, 0xc0004d8100)
        /app/cmd/sql_exporter/promhttp.go:32 +0x72
net/http.HandlerFunc.ServeHTTP(0x11208a0?, {0x7f2ebd6f93b8?, 0xc0004b26e0?}, 0xff4bb8?)
        /usr/local/go/src/net/http/server.go:2109 +0x2f
github.com/prometheus/client_golang/prometheus/promhttp.InstrumentHandlerInFlight.func1({0x7f2ebd6f93b8, 0xc0004b26e0}, 0x1466100?)
        /go/pkg/mod/github.com/prometheus/[email protected]/prometheus/promhttp/instrument_server.go:60 +0xd4
net/http.HandlerFunc.ServeHTTP(0x1466188?, {0x7f2ebd6f93b8?, 0xc0004b26e0?}, 0x0?)
        /usr/local/go/src/net/http/server.go:2109 +0x2f
github.com/prometheus/client_golang/prometheus/promhttp.InstrumentHandlerCounter.func1({0x1466188?, 0xc000682000?}, 0xc0004d8100)
        /go/pkg/mod/github.com/prometheus/[email protected]/prometheus/promhttp/instrument_server.go:146 +0xb8
net/http.HandlerFunc.ServeHTTP(0xc00012daf0?, {0x1466188?, 0xc000682000?}, 0x0?)
        /usr/local/go/src/net/http/server.go:2109 +0x2f
net/http.(*ServeMux).ServeHTTP(0x0?, {0x1466188, 0xc000682000}, 0xc0004d8100)
        /usr/local/go/src/net/http/server.go:2487 +0x149
net/http.serverHandler.ServeHTTP({0xc0004b5fb0?}, {0x1466188, 0xc000682000}, 0xc0004d8100)
        /usr/local/go/src/net/http/server.go:2947 +0x30c
net/http.(*conn).serve(0xc0004c4b40, {0x1466da8, 0xc0004b5ec0})
        /usr/local/go/src/net/http/server.go:1991 +0x607
created by net/http.(*Server).Serve
        /usr/local/go/src/net/http/server.go:3102 +0x4db

To Reproduce
Steps to reproduce the behavior:

  • Start the exporter using a blank/empty config file
  • Do a request to /metrics
  • See panic

Expected behavior
The application should not start with a blank/empty config file as one of jobs or target is required.

Configuration
Just an empty file is sufficient to trigger this panic.

Additional context
It can be argued that a blank/empty config file is not a use case, but the application should not panic regardless. A (mistakenly) invalid config file is a very possible situation, and a proper error message is better than a panic.

Probably is related to #182 in the way the YAML unmarshalling is performed.

SQL Exec function not supported? - Database FreeSpace

Hello,

Sql-exporter Go1.17
Mssql 12.0.41

I've tried to execute a query to get the FreeSpace, DatabaseMaxSize and DatabaseSize for all the databases the user has access to:

EXEC sp_msforeachdb
'USE [?];
SELECT DB_NAME() AS DbName,
name AS FileName,
type_desc,
size/128.0 AS CurrentSizeMB,
size/128.0 - CAST(FILEPROPERTY(name, ''SpaceUsed'') AS INT)/128.0 AS FreeSpaceMB,
max_size/128 as MaxSizeMB
FROM sys.database_files'

When I run it in Management Studio it works OK. When I run it in the exporter it only gives me 1 occurrence.

I understand that the query has to be run multiple times, and I think that is the reason sql_exporter does not give me the full info I get when I run it in Management Studio.

Is this feasible to be altered in sql-exporter or am I doing something wrong?

I went this way:

SELECT DB_NAME(database_id) AS DbName,
type_desc,
name AS FileName,
size/128.0 AS CurrentSizeMB,
--size/128.0 - CAST(FILEPROPERTY(name, ''SpaceUsed'') AS INT)/128.0 AS FreeSpaceMB,
max_size/128.0 AS MaxSizeMB
FROM sys.master_files
WHERE database_id > 6 AND type IN (0,1)

But cannot get the FreeSpaceMB from it as it does not exist in sys.master_files but only exists in sys.database_files.

When I try to join them both, sys.database_files only returns 1 database per query and sys.master_files returns everything.

SELECT DB_NAME(database_id) AS database_name,
b.name,
a.type_desc,
a.name AS FileName,
a.size/128.0 AS CurrentSizeMB,
b.size/128.0 - CAST(FILEPROPERTY(b.name, 'SpaceUsed') AS INT)/128.0 AS FreeSpaceMB,
a.max_size/128.0 AS MaxSizeMB
FROM sys.master_files as A --, sys.database_files as B WHERE a.name=b.name
INNER JOIN sys.database_files as B on a.name=b.name
--WHERE database_id > 6 AND type IN (0,1)

It only shows 1 row...

I need to show this information for all databases....

please help :-)

regards,

MRT

More useful connection logging

Is your feature request related to a problem? Please describe.

I'm currently trying to use sql_exporter to fetch data from a Clickhouse database.
For now, I'm just trying to get a simple query to work.
Here's my configuration:

global:
  scrape_timeout_offset: 500ms
  min_interval: 0s
  max_connections: 3
  max_idle_connections: 3
  max_connection_lifetime: 5m

target:
  data_source_name: 'ch://user:pass@myhost:8123/default'
  collectors: [my_clickhouse]

collector_files:
  - "*.collector.yml"

With the collector file:

collector_name: my_clickhouse

metrics:
  - metric_name: row_count_my_table
    type: gauge
    help: 'Number of rows in table my_table.'
    values: [num_rows]
    query: |
      SELECT COUNT(*) AS num_rows FROM my_table;

When I run it, I get the following output:

$ ./sql_exporter -log.level debug
ts=2022-11-14T08:30:25.473Z caller=klog.go:108 level=warn func=Warningf msg="Starting SQL exporter (version=0.9.1, branch=HEAD, revision=aa29c91a6d31d614dc2b20c429d6fd8699a12d1c) (go=go1.19.1, user=root@11ce5d80fd8d, date=20220921-15:11:16)"
ts=2022-11-14T08:30:25.473Z caller=klog.go:84 level=debug func=Infof msg="Loading configuration from sql_exporter.yml"
ts=2022-11-14T08:30:25.473Z caller=klog.go:84 level=debug func=Infof msg="Loaded collector 'my_clickhouse' from clickhouse.collector.yml"
ts=2022-11-14T08:30:25.473Z caller=klog.go:96 level=warn func=Warning msg="Listening on :9399"
ts=2022-11-14T08:30:25.473Z caller=tls_config.go:195 level=info msg="TLS is disabled." http2=false
ts=2022-11-14T08:30:31.754Z caller=klog.go:84 level=debug func=Infof msg="Database handle successfully opened with 'clickhouse' driver"
ts=2022-11-14T08:30:41.254Z caller=klog.go:84 level=debug func=Infof msg="Error gathering metrics: [from Gatherer #1] context deadline exceeded"

As you can see, even with debug logging, I have no idea what the problem is:

  • I've verified that the credentials are correct, using a different clickhouse client
  • I've verified that the SQL is correct and works

As a test, I tried changing the database connection string:

  • from "ch" to "sqlserver" or "mysql"
  • using a nonexistent username
    The result was the exact same log message (just with the driver name changed)!

Describe the solution you'd like

At the very least, the logging should provide more information about what's going on with the database. As it is, I have no idea what the problem is, much less what I might do about it.

  • Did the database connection work? Obviously there's something wrong if I can use the wrong driver and it still thinks it connected OK.
  • Did the query time out? Or was there an error in the query? Or was it unable to connect?
  • Shouldn't the program exit with an error if it can't connect to the database?

Describe alternatives you've considered

Using a different tool

Additional context

New docker image release?

Hi there,

Loving the exporter, it's just what we need, but we'd love to have the globbing syntax for collectors in the main YAML file, and I don't really want to start building my own images from scratch and lose upstream updates.

Any chance of new release soon, or have I failed to spot your release schedule?

Thanks for the great tool!

dashboard

Is there a dashboard for this sql_exporter?

Not supporting special characters on passwords for the connection string

Something minor I noted post-update.
After updating to the latest version, 0.10.1 (from 0.8.4), I noticed that the exporter is not reporting metrics due to a special character in the password (#). This password was generated through KeePass long ago and was working with the previous version.

Steps to reproduce the behavior:

  1. Have a password with a special character (in this case #, for example PWDPART1#PWDPART2) and sql_exporter v0.8.4.0 working;
  2. Update sql-exporter to version 0.10.1;
  3. Access the metrics endpoint: curl http://127.0.0.1:9399/metrics
  4. See error:
    No metrics gathered, [from Gatherer #1] invalid port ":PWDPART1" after host

Expected behavior:
SQL exporter to return metrics instead of an error;

Additional context:
Using SQL_Exporter to extract a couple of metrics from a specific table, the query on its own works fine before and after the update;

Workaround:
Other special characters can be used to ensure a safe password, but not '#'
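
A possible workaround (untested, and assuming newer versions parse the DSN as a URL, which the "invalid port" error suggests) is to percent-encode the '#' as '%23' in the connection string. The host below is hypothetical:

target:
  data_source_name: 'sqlserver://prom_user:PWDPART1%23PWDPART2@dbhost:1433'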

Debian package for sql_exporter

Is your feature request related to a problem? Please describe.
I would like a properly packaged binary for deployment on Debian (and possibly other Debian-based systemd systems).

Describe the solution you'd like
A downloadable .DEB, or (better still) inclusion in the regular Debian repositories.

Describe alternatives you've considered
It is possible to deploy using the release TAR, but the point is to have a native DEB for easy installation.

Additional context
I have done some preliminary work, and managed to create a .DEB package that seems to install correctly. However, I am not an expert on the matter, and I am quite sure that it is possible to improve.

Is this something you would consider? If yes, what would be the next steps?

Regards, Kevin

Different interval per query

Is your feature request related to a problem? Please describe.
Greetings, I have queries that should be scraped at different intervals, because they take more than 10m or 30m to run.

Describe the solution you'd like
I would like to be able to configure the interval on a per-query basis; I don't know if this is possible at the moment.

Describe alternatives you've considered
I've tested other query exporters and SQL exporters; some have this option, but those tools come with different problems or new challenges.

Multiple queries under 1 metric

Hello!
It would be very useful to allow for multiple select (and other) queries under 1 metric

An example use case:
Monitoring query completion time

DECLARE @t1 DATETIME;
DECLARE @t2 DATETIME;

SET @t1 = GETDATE();
SELECT /* query */;
SET @t2 = GETDATE();
SELECT DATEDIFF(millisecond,@t1,@t2) AS elapsed_ms;

And elapsed_ms could go in values to be scraped

It seems that right now the exporter only takes the first SELECT query and ignores the others, whether they have an AS alias or not.

Spaces in %SQL_EXPORTER_PATH% prevents service startup

Describe the bug
The exporter service fails to start when the path includes a space

To Reproduce
Steps to reproduce the behavior:

  1. Download necessary files to C:\Program Files\sql-exporter
  2. Use examples in README.md to create service entry in windows
  3. Open services dialogue, click on created service and run
  4. See error: windows could not start the SqlExporterSvc service on local computer error 1067

Expected behavior
Service to start as expected

Configuration
Default configs using documentation example (entered in db info of course)

Additional context
This only seems to happen for me when I'm trying to place the service in a path with spaces in its name. If I put the extracted release folder in the root of C:\ everything behaves as expected.
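
A possible workaround, if the service was registered with sc.exe (an untested sketch; the service name and paths follow this report, and the -config.file value is hypothetical): wrap the full command in quotes and escape the inner quotes around the paths that contain spaces:

sc.exe create SqlExporterSvc start= auto binPath= "\"C:\Program Files\sql-exporter\sql_exporter.exe\" -config.file \"C:\Program Files\sql-exporter\sql_exporter.yml\""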

Lock Go version

Is your feature request related to a problem? Please describe.

I found in my previous PR that the GitHub setup-go action is configured to 1.17.z with a caret comparator (https://github.com/npm/node-semver#caret-ranges-123-025-004), which is why my first try failed.

As of now go.mod is set to Go 1.16 but the setup-go action will always use the latest 1.minor version.

On the other hand, the Dockerfile builder image is not locked.

Describe the solution you'd like
Match the Go major.minor version in go.mod and in the Dockerfile, and use any patch version from the same major.minor in the setup-go configuration.

Describe alternatives you've considered
Lock a specific version in the setup-go configuration and in the Dockerfile, and set the major.minor version in go.mod.

Additional context
I would use Go 1.19 and run rm -f go.sum && go mod tidy -go=1.19 as part of this change.

Add support for globbing to `resolveCollectorRefs()`

Is your feature request related to a problem? Please describe.

Currently, the target configuration requires a list of collector names in collectors.
The names must be previously defined via collector_files or inline in the top-level collectors config.

These collector names are resolved from defined collectors using resolveCollectorRefs():

func resolveCollectorRefs(
	collectorRefs []string, collectors map[string]*CollectorConfig, ctx string,
) ([]*CollectorConfig, error) {
	resolved := make([]*CollectorConfig, 0, len(collectorRefs))
	for _, cref := range collectorRefs {
		c, found := collectors[cref]
		if !found {
			return nil, fmt.Errorf("unknown collector %q referenced in %s", cref, ctx)
		}
		resolved = append(resolved, c)
	}
	return resolved, nil
}

However, the current implementation only resolves full collector names. When you leverage the handy collector_files config, which supports globbing, it is not sufficient to just add new files that match the globbing pattern: any new collector names must also be added to target.collectors. Therefore, despite using globbing patterns for the collector file names, one is still forced to keep target.collectors updated manually.

Describe the solution you'd like

A nice enhancement to the target.collectors configuration could be to add support for globbing patterns for the collector names. In this way, it would be possible to easily tell the exporter to "use all collectors whose names match these patterns". For example:

global: {}
target:
  data_source_name: driver://host/dbname
  collectors: ['users_*', 'prices_*', 'stocks']
collector_files: ['*.collector.yaml']

For example, a main configuration like this:

global: {}
target:
  data_source_name: driver://host/dbname
  collectors: ['*']
collector_files: ['*.collector.yaml']

Would simply use all available collectors found via collector_files, without having to alter the main config file whenever new collector YAML files are added.
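
For illustration only (this is a minimal sketch, not the project's implementation), a glob-aware resolveCollectorRefs() could use filepath.Match while keeping the current exact-match behaviour for plain names; it assumes the "fmt", "path/filepath" and "strings" imports:

func resolveCollectorRefs(
	collectorRefs []string, collectors map[string]*CollectorConfig, ctx string,
) ([]*CollectorConfig, error) {
	resolved := make([]*CollectorConfig, 0, len(collectorRefs))
	for _, cref := range collectorRefs {
		if strings.ContainsAny(cref, "*?[") {
			// Glob pattern: collect every defined collector whose name matches.
			matched := false
			for name, c := range collectors {
				ok, err := filepath.Match(cref, name)
				if err != nil {
					return nil, fmt.Errorf("bad collector pattern %q in %s: %w", cref, ctx, err)
				}
				if ok {
					resolved = append(resolved, c)
					matched = true
				}
			}
			if !matched {
				return nil, fmt.Errorf("no collectors matching pattern %q referenced in %s", cref, ctx)
			}
			continue
		}
		// Plain name: keep the existing exact-match behaviour.
		c, found := collectors[cref]
		if !found {
			return nil, fmt.Errorf("unknown collector %q referenced in %s", cref, ctx)
		}
		resolved = append(resolved, c)
	}
	return resolved, nil
}

Note that iteration over the collectors map is unordered, so the order in which glob matches are resolved would not be deterministic in this sketch.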

Describe alternatives you've considered

At the moment, we are using a wrapper shell script that scans all the available *.collector.yaml files and extracts the collector_name values from them. Then, we manually generate an array of these names to provide in target.collectors in a generated main config file before starting the SQL exporter.

EDIT: Here is the POSIX sh snippet that collects the collector names from existing files to populate target.collectors:

# collect the names of all available collectors
COLLECTORS=
for COLL_FILE in "$COLLECTORS_DIR"/*.collector.yaml; do
  [ -e "$COLL_FILE" ] || [ -L "$COLL_FILE" ] || continue
  COLLECTOR=$(sed -rn '/^collector_name: / s/collector_name: (.+)/'"'"'\1'"'"'/g p' "$COLL_FILE")
  if [ -z "$COLLECTORS" ]; then
    COLLECTORS=$COLLECTOR
  else
    COLLECTORS="$COLLECTORS, $COLLECTOR"
  fi
done

The above generates a COLLECTORS variable with names surrounded by quotes, ready to use in the YAML file.
As can be seen, having a native solution would be much appreciated to avoid these kinds of workarounds.

Additional context

A bit of context: we are using the SQL exporter inside a Docker container, where we want to just provide new collector YAML configuration files without needing to update the main config file inside of the container every time.

Connection Closure

I was not sure where to ask this; if this is not the right place, please point me in the right direction. Firstly, thank you for maintaining this project. My question is regarding connection closure after a query is executed: it looks like connections are kept open and queries are executed on every scrape. If I wanted to close the connection after the query is executed, would this be possible? I use the exporter to get information out of databases and also to test connectivity.

basic authentication is not implemented

Hi

I downloaded the latest version for Windows and installed it.

But I can't see the option "--web.config.file" for basic authentication indicated in the documentation.

######################################################
flag provided but not defined: -web.config.file
Usage of C:\Program Files\sql_exporter\sql_exporter.exe:
-add_dir_header
If true, adds the file directory to the header of the log messages
-alsologtostderr
log to standard error as well as files (default true)
-config.data-source-name string
Data source name to override the value in the configuration file with.
-config.enable-ping
Enable ping for targets (default true)
-config.file string
SQL Exporter configuration filename (default "sql_exporter.yml")
-log_backtrace_at value
when logging hits line file:N, emit a stack trace
-log_dir string
If non-empty, write log files in this directory
-log_file string
If non-empty, use this log file
-log_file_max_size uint
Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
-logtostderr
log to standard error instead of files (default true)
-one_output
If true, only write logs to their native severity level (vs also writing to each lower severity level)
-skip_headers
If true, avoid header prefixes in the log messages
-skip_log_headers
If true, avoid headers when opening log files
-stderrthreshold value
logs at or above this threshold go to stderr (default 2)
-v value
number for the log level verbosity
-version
Print version information
-vmodule value
comma-separated list of pattern=N settings for file-filtered logging
-web.enable-reload
Enable reload collector data handler
-web.listen-address string
Address to listen on for web interface and telemetry (default ":9399")
-web.metrics-path string
Path under which to expose metrics (default "/metrics")
#######################################################

Is it possible to include this option?

Thanks for any help.
Regards

Dependabot - google/flatbuffers 1.12 checksum mismatch

Dependabot failed to update your dependencies because there was an error resolving your Go dependency files.

Dependabot encountered the following error:
verifying github.com/google/[email protected]: checksum mismatch

The issue happens only in the GitHub-native Dependabot flow, and is most likely related to a release quirk of a problematic module.

Looking for ways to find a workaround in Dependabot.

Add full support for config based on environment variables

Is your feature request related to a problem? Please describe.

We use the exporter inside a Docker container, where we dynamically generate a configuration file in an entrypoint shell script on start based on a set of environment variables passed to the container to pre-configure the SQL exporter.

The entrypoint script we made needed to address the following requirements:

  • Be able to configure database aspects from separate environment variables (host, port, dbname, etc)
  • Be able to read database secrets from an env variable or a file (i.e. to use container secrets feature)
  • Allow configuring any global setting of the exporter without rebuilding the image
  • Be able to configure the exporter only from mounted collector files, and not need to mount a static global config.

We are aware that at the moment there is partial support for env variables in the exporter, namely:

  • SQLEXPORTER_DEBUG
  • SQLEXPORTER_CONFIG
  • SQLEXPORTER_TARGET_DSN

But we would like to be able to override any configuration in the config files, not just target.dsn.

Describe the solution you'd like

We would like the SQL exporter to be fully configurable from environment variables, which would greatly improve usability and user experience when running in containers (not only Docker), because entrypoint scripts could be avoided entirely.

If you are open to this feature request, I'm happy to implement it and send a PR for reviewing.

Describe alternatives you've considered

To fulfil the requirements, our current entrypoint script is as follows:

#!/usr/bin/env sh

die() { printf "fatal: %s\n" "$*" > /dev/stderr; exit 1; }
uri() { printf "%s" "$(jq -nr --arg v "$*" '$v | @uri')"; }

# define working directories and files
SHM_DIR=/dev/shm
CONFIG_FILE=$SHM_DIR/sql-exporter.yaml
COLLECTORS_DIR=  # use root directory of container

# check required environment variables
[ -z "$DB_HOST" ] && die "missing required environment variable: DB_HOST"
[ -z "$DB_USER" ] && die "missing required environment variable: DB_USER"
if [ -z "$DB_PASSWORD" ] && [ -z "$DB_PASSWORD_FILE" ]; then
  die "missing required environment variable: one of DB_PASSWORD or DB_PASSWORD_FILE"
fi
[ -z "$DB_DATABASE" ] && die "missing required environment variable: DB_DATABASE"

# prepare optional environment variables
SQLEXPORTER_GLOBAL_SCRAPE_TIMEOUT=${SQLEXPORTER_GLOBAL_SCRAPE_TIMEOUT:-10s}
SQLEXPORTER_GLOBAL_SCRAPE_TIMEOUT_OFFSET=${SQLEXPORTER_GLOBAL_SCRAPE_TIMEOUT_OFFSET:-500ms}
SQLEXPORTER_GLOBAL_MIN_INTERVAL=${SQLEXPORTER_GLOBAL_MIN_INTERVAL:-0s}
SQLEXPORTER_GLOBAL_MAX_CONNECTIONS=${SQLEXPORTER_GLOBAL_MAX_CONNECTIONS:-3}
SQLEXPORTER_GLOBAL_MAX_IDLE_CONNECTIONS=${SQLEXPORTER_GLOBAL_MAX_IDLE_CONNECTIONS:-3}
DB_PORT=${DB_PORT:-5433}

# read password from file if provided
[ -n "$DB_PASSWORD_FILE" ] && DB_PASSWORD=$(cat "$DB_PASSWORD_FILE")

# collect the names of all available collectors
COLLECTORS=
for COLL_FILE in "$COLLECTORS_DIR"/*.collector.yaml; do
  [ -e "$COLL_FILE" ] || [ -L "$COLL_FILE" ] || continue
  COLLECTOR=$(sed -rn '/^collector_name: / s/collector_name: (.+)/'"'"'\1'"'"'/g p' "$COLL_FILE")
  if [ -z "$COLLECTORS" ]; then
    COLLECTORS=$COLLECTOR
  else
    COLLECTORS="$COLLECTORS, $COLLECTOR"
  fi
done

# build the data source name (DSN) to use
DSN=driver://$(uri "$DB_USER"):$(uri "$DB_PASSWORD")
DSN=${DSN}@${DB_HOST}:${DB_PORT}/${DB_DATABASE}

# generate configuration file for the SQL exporter
cat > "$CONFIG_FILE" << __EOF__
global:
  scrape_timeout: $SQLEXPORTER_GLOBAL_SCRAPE_TIMEOUT
  scrape_timeout_offset: $SQLEXPORTER_GLOBAL_SCRAPE_TIMEOUT_OFFSET
  min_interval: $SQLEXPORTER_GLOBAL_MIN_INTERVAL
  max_connections: $SQLEXPORTER_GLOBAL_MAX_CONNECTIONS
  max_idle_connections: $SQLEXPORTER_GLOBAL_MAX_IDLE_CONNECTIONS
target:
  data_source_name: '$DSN'
  collectors: [$COLLECTORS]
collector_files: ['$COLLECTORS_DIR/*.collector.yaml']
__EOF__

# execute SQL exporter with the generated configuration file
exec sql_exporter --config.file="$CONFIG_FILE"

As can be seen, the above script is quite a lot of boilerplate that could instead be handled natively by the SQL exporter if it had full support for environment variables, especially for the global config.

Is connecting to SQL 2005 supported?

I tried to connect to an instance I have here but I got the following error message.
I0609 13:48:51.560255 6 promhttp.go:38] Error gathering metrics: [from Gatherer #1] Login error: EOF
I0609 13:49:21.536107 6 promhttp.go:38] Error gathering metrics: [from Gatherer #1] Login error: EOF
I0609 13:49:51.527936 6 promhttp.go:38] Error gathering metrics: [from Gatherer #1] Login error: read tcp prometheusIP:Port->SQLIP:Port: read: connection reset by peer

The same configuration works just fine in SQL 2012.

Using your repo as the base for our Vertica centric exporter repo

Hi,
Sorry, I couldn't find any other way to reach out. I work for Vertica in the Partner Engineering/Integrations group. We recently pulled your repo to use as a starting point for a Vertica-centric open source exporter, since last year you added Vertica support for connectivity. We are basically stripping out the other databases and adding Vertica-centric enhancements, including Vertica metric file examples, a different logging mechanism, optimized Docker builds, pull request test suites, etc. Because these were breaking changes we opted not to branch your code but to download the zip and create our own project. We wanted to let you know our intentions prior to release (currently hoping for early Oct). We plan on adding some kudos to you in our Readme.md, noting that your repo was the solid base we used as a jumping-off point. If you have any interest in discussing this, please let me know how best to reach you.
Thanks and regards,
Stephen Crossman
Vertica, a Microfocus business unit.

Is it backward compatible?

Hi! Is this fork backward compatible with the binary from the original repo? Do the commands and functions that work in the original version also work in this one?

Multiples Datasources on the same sql-exporter

Hello,

I've read the config but haven't found a reference to this.

Is it possible, in the same sql-exporter service, to have several data sources with the same or different collectors?

My idea is that I need the database size of several databases that are on different MS SQL servers.

Can the data sources be placed in the collector files, with sql_exporter.yml just told to use the collector files? Or is this supported in some other way?

I've tried adding a new data_source_name, but it tells me that it is already defined... even a new target didn't work.

best regards,

MRT
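
For reference, the jobs mode shown in other issues on this page configures several targets, each with its own DSN, under a single exporter; a sketch of that shape for two MS SQL servers (hostnames and credentials hypothetical) could look like:

jobs:
  - job_name: db_sizes
    collectors: [mssql_*]
    static_configs:
      - targets:
          server1: 'sqlserver://user:pass@mssql-server1:1433'
          server2: 'sqlserver://user:pass@mssql-server2:1433'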

go v1.16 - build fails

Running make build with Go v1.16 on the master branch results in an error:

>> building binaries
 >   sql_exporter
../../../go/pkg/mod/github.com/jackc/[email protected]/auth_scram.go:27:2: missing go.sum entry for module
providing package golang.org/x/xerrors (imported by github.com/jackc/pgx/v4/stdlib); to add:
	go get github.com/jackc/pgx/v4/[email protected]

There are some changes in the latest release of Go, but executing go mod tidy should fix the go.sum file.
The code builds properly afterwards.

Golang Github - Issue 44129
Golang Blog - Module Changes

Thundering herd prevention?

Is your feature request related to a problem? Please describe.

I'm always frustrated when sql_exporter taxes the DB by running all of its queries at once. It creates a thundering herd that runs up the CPU usage.

Describe the solution you'd like

There should be a configurable jitter option that staggers the queries over time

Describe alternatives you've considered

I increased min_interval to 600s but every 10 mins you see a huge spike in Azure SQL DTU%.

Additional context

(Screenshot attached in the original issue: "Screen Shot 2023-03-09 at 10 02 43".)

Add possibility to collect metrics without SSL

Is your feature request related to a problem? Please describe.
I am getting the error No metrics gathered, [from Gatherer #1] pq: SSL is not enabled on the server
while trying to collect metrics from a server without SSL.

Describe the solution you'd like
I would like to have a configuration option e.g.:

target:
  data_source_name: 'postgres://192.168.152.23:5432'
  ssl: false

?sslmode=disable surely does the job, but a separate option would IMO be more human-friendly and readable.
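
For reference, the workaround mentioned above goes directly into the DSN; the credentials and database name below are hypothetical:

target:
  data_source_name: 'postgres://user:pass@192.168.152.23:5432/dbname?sslmode=disable'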

Re-load on change of the configuration file or the collector files

It would come in quite handy if a change to the configuration file or the collector files were detected and a reload triggered. This would be especially handy when running it on k8s and providing the configuration as a Kubernetes ConfigMap.

I hope this makes sense.

thanks for the great work and best greetings,
Walter

MSSQL connect IPv6 on instance XYZ

Hi all,

I found no hint on how to connect to the MSSQL database instance INST01 and bind the exporter metrics endpoint to a specific IPv6 address.

Solved for IPv6: -web.listen-address [1000:da9:f:2023:4fc:0:7492:979]:9399

Open: how to include the instance name in the connection string.

Thanks for any help
Andi

Azure Key Vault auth w/ MS SQL

  1. Does something similar exist to issue #25, but for Azure Key Vault? I'd like to use this exporter to query multiple databases, but we don't want to just specify a static username/password in a configuration file. If not, should I open a feature request for this?
  2. In addition, I'd like to check the Microsoft SQL Server driver repo to see if something like this is available through the driver, but I am unsure what driver this project uses for MS SQL.
  3. Also, can more than one DSN be specified in the SQLEXPORTER_TARGET_DSN environment variable? Or do I have to have multiple containers running (each with a unique DSN) to accomplish this?

Question about sql_exporter reading the config file

Hello,

I'm running sql_exporter in a Kubernetes container, and I've got a solution in place to read the database user and password from an Azure Key Vault by mounting the secret as a volume (e.g. /mnt/secret/DB-PASSWORD) and then dynamically modifying the Data Source Name in the configuration file (sql_exporter.yml) with the credentials of the database. This works well when the pod (container) first starts up, but when the password is rotated (updated) and the configuration file gets updated while the container is running, are there any mechanisms in sql_exporter to check the config file for changes? For example, if sql_exporter can't connect to the database, does it read the config file again to check for changes? Or does the application only read the configuration file on startup? I assume it's the latter.

Thank you in advance for your help.

Collect from multiple databases

Hi,

Is there a way to collect from multiple databases? I am using a MySQL data source and would like to read from two different databases on the same server.
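
If jobs mode (shown in other issues on this page) applies here as well, a sketch with one target per database on the same server (database names and credentials hypothetical) could look like:

jobs:
  - job_name: mysql_databases
    collectors: [example]
    static_configs:
      - targets:
          db_one: 'mysql://user:pass@localhost:3306/db_one'
          db_two: 'mysql://user:pass@localhost:3306/db_two'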

Panic on start when `global` key is missing in config file

Describe the bug

When the global key in the main config file is missing, the application crashes with a panic:

ts=2023-01-19T12:19:18.220Z caller=klog.go:108 level=warn func=Warningf msg="Starting SQL exporter (version=0.9.2, branch=HEAD, revision=e1a5775cca58bf1110516baebc6fe7854d18279b) (go=go1.19.4, platform=linux/amd64, user=root@46fa4bc8dec6, date=20221222-15:10:06)"
ts=2023-01-19T12:19:18.220Z caller=klog.go:84 level=debug func=Infof msg="Loading configuration from sql-exporter.yaml"
ts=2023-01-19T12:19:18.221Z caller=klog.go:84 level=debug func=Infof msg="Loaded collector 'tables' from collectors/tables.yaml"
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
        panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x8ced68]

goroutine 1 [running]:
gopkg.in/yaml%2ev3.handleErr(0xc0005abbd0)
        /go/pkg/mod/gopkg.in/[email protected]/yaml.go:294 +0x6d
panic({0x10e49a0, 0x1c74650})
        /usr/local/go/src/runtime/panic.go:884 +0x212
github.com/burningalchemist/sql_exporter/config.(*Config).UnmarshalYAML(0xc0001d2000, 0x112ed60?)
        /app/config/config.go:76 +0x168
gopkg.in/yaml%2ev3.(*decoder).callObsoleteUnmarshaler(0xc0001d2070, 0xc0004bc1e0, {0x7fc5cd458ab0, 0xc0001d2000})
        /go/pkg/mod/gopkg.in/[email protected]/decode.go:379 +0xa7
gopkg.in/yaml%2ev3.(*decoder).prepare(0x0?, 0x0?, {0x11d7f40?, 0xc0001d2000?, 0x0?})
        /go/pkg/mod/gopkg.in/[email protected]/decode.go:427 +0x245
gopkg.in/yaml%2ev3.(*decoder).unmarshal(0xc0001d2070, 0xc0004bc1e0, {0x11d7f40?, 0xc0001d2000?, 0x0?})
        /go/pkg/mod/gopkg.in/[email protected]/decode.go:502 +0x2a5
gopkg.in/yaml%2ev3.(*decoder).document(0x0?, 0x0?, {0x11d7f40?, 0xc0001d2000?, 0xc0002c1800?})
        /go/pkg/mod/gopkg.in/[email protected]/decode.go:527 +0x5b
gopkg.in/yaml%2ev3.(*decoder).unmarshal(0xc0001d2070, 0xc0004bc140, {0x11d7f40?, 0xc0001d2000?, 0x7fc5f4954a68?})
        /go/pkg/mod/gopkg.in/[email protected]/decode.go:498 +0x42d
gopkg.in/yaml%2ev3.unmarshal({0xc0004c0000, 0xf1, 0x200}, {0x112ed60?, 0xc0001d2000}, 0x50?)
        /go/pkg/mod/gopkg.in/[email protected]/yaml.go:167 +0x426
gopkg.in/yaml%2ev3.Unmarshal(...)
        /go/pkg/mod/gopkg.in/[email protected]/yaml.go:89
github.com/burningalchemist/sql_exporter/config.Load({0x7fff28047e35, 0x12})
        /app/config/config.go:29 +0x115
github.com/burningalchemist/sql_exporter.NewExporter({0x7fff28047e35?, 0x1b?})
        /app/exporter.go:41 +0x33
main.main()
        /app/cmd/sql_exporter/main.go:80 +0x529

EDIT: I pasted the wrong panic before, now it is correct.

To Reproduce
Steps to reproduce the behavior:

  1. Try to start the exporter using the config shown in Configuration. Make sure at least one collector file is present.
  2. See panic

Expected behavior
In the example file below, it is expected that all the configuration options under global are set to their defaults.

Configuration

target:
  data_source_name: 'driver://user:pass@dbhost:5433/dbname'
  collectors: [example]
collector_files: ['collectors/*.yaml']

Additional context
The panic can be avoided (and defaults for globals applied) by defining an empty-object global: {} key in the config:

global: {}
target:
  data_source_name: 'driver://user:pass@dbhost:5433/dbname'
  collectors: [example]
collector_files: ['collectors/*.yaml']

Metrics only produced for few initial scrapes

Describe the bug
One of my collectors produces metrics during a few initial scrapes, like 4-5; then no more metrics are shown on the /metrics endpoint. The collector uses the query mentioned in this issue: #154 (comment), but since the query has now proved to work, I suspect there must be some other issue at play here.

To Reproduce

  1. Run collector with query from issue above

Expected behavior
Metrics should be produced every 6 hours according to min_interval; see the config below. Even if the query fails to produce new metrics, surely the cached metrics should still show up on the /metrics endpoint, no?

Configuration
Prometheus config:

global:
  scrape_interval:     60s 
  evaluation_interval: 1m
  scrape_timeout: 15s

Sql-exporter config:
Note the min_interval of 6h, so scrapes are only done every 6 hours

global:
  scrape_timeout: 1h
  scrape_timeout_offset: 500ms
  min_interval: 6h 
  max_connections: 3
  max_idle_connections: 3
  max_connection_lifetime: 5m

Not sure, but might the issue be that the Prometheus scrape_timeout: 15s cuts off the sql-exporter scrape_timeout: 1h, so that any sql-exporter query running for longer than 15s fails? Do I perhaps need to manually set scrape_timeout: 1h on the sql-exporter job in Prometheus?
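
For reference, Prometheus allows overriding scrape_timeout per scrape config (it must not exceed that job's scrape_interval); a sketch for the sql-exporter job, with a hypothetical job name and target, would be:

scrape_configs:
  - job_name: sql_exporter
    scrape_interval: 6h
    scrape_timeout: 1h
    static_configs:
      - targets: ['localhost:9399']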

Additional information:
One of my sql-exporters would, after a day or so, not provide any metrics at all, only errors on the /metrics endpoint. It would say Error gathering metrics: [from Gatherer #1] for each collector, saying that "context deadline exceeded". Not sure if this is related or not.

I have been running sql-exporter as a Windows service, so I don't have any logs of these issues. Now I've restarted the exporters from the command line with debug logging to monitor this further.

Long running query times out

Describe the bug
I have an MSSQL query that takes about 2 minutes to complete. One of my main reasons for wanting to use sql-exporter is so that I can pull the results of all these queries into Prometheus and see the values in Grafana without having to wait for the queries to run every time I refresh a dashboard, and also so I can see how the values change over time.

When I try and run the query through sql_exporter, I get:
ts=2023-02-17T00:25:21.925Z caller=klog.go:84 level=debug func=Infof msg="Error gathering metrics: [from Gatherer #1] [collector="qb-test", query="ar_balance"] context deadline exceeded"

I can see that the query is running, and it gets this error before the query is expected to return the data.

To Reproduce
Try to run a query that takes longer than 20 seconds, and you will get this error.

Expected behavior
I couldn't find a way to set the timeout for running a query; it would be great to have this in the .yml file so that it could be adjusted for more use cases.
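
For what it's worth, other configs quoted on this page set a global scrape_timeout in sql_exporter.yml; whether raising it (together with the Prometheus-side scrape_timeout) covers this long-running query case is an assumption, but a sketch would be:

global:
  scrape_timeout: 5m
  scrape_timeout_offset: 500ms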

Environment variables to override connection string and collectors list

When run as a containerised app, the current implementation doesn't seem to support using a secret and/or an environment variable to set the secret parts of the connection string, or to adjust the list of collectors. These are both in the root config file, which, although it can be treated as an entire secret, means that whoever needs to configure e.g. the scrape timeout offset would also need access to the secrets.

A solution I have implemented locally allows an optional environment variable to set the DSN or the collectors list (I am not sure the second is as useful). The code is very simple:

func (t *TargetConfig) UnmarshalYAML(unmarshal func(interface{}) error) error {
	type plain TargetConfig
	if err := unmarshal((*plain)(t)); err != nil {
		return err
	}

	// My new block below:
	// look for an env variable to overwrite the DSN from the config file.
	if val, ok := os.LookupEnv("SQLEXPORTER_DSN"); ok {
		t.DSN = Secret(val)
	}

	// Check required fields
	if t.DSN == "" {
		return fmt.Errorf("missing data_source_name for target %+v", t)
	}
	// ... the rest of the existing validation is unchanged ...
	return nil
}

I am not sure if there are other ways to achieve the same thing, but this allows my deployment tool to inject an environment variable with the correct DSN password per environment.

If you are happy with this, I am happy to create a PR, but as a Go noob I don't know whether I have missed any edge cases (like nil errors).
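
For illustration, with the snippet above the DSN could then be injected at run time like this (SQLEXPORTER_DSN is the variable defined in the local patch above; the DSN value is hypothetical):

SQLEXPORTER_DSN='sqlserver://user:secret@dbhost:1433' ./sql_exporter -config.file sql_exporter.yml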
