
script_exporter's Issues

Script with space character in name or path

Hi,
Could you please advise how to specify, in the config file, the path to a script that contains spaces? An example of what I mean:

scripts:
  - name: test_spaces
    script: /opt/test dir/test_spaces.sh
    timeout:
      max_timeout: 55
      enforced: true

For this config I am getting the following error:

script_exporter: 2022/05/18 14:43:24 Script failed: fork/exec /opt/test: no such file or directory

Whatever escaping I try (quoting, commas, etc.), it is still not able to execute the script.

Thanks in advance
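(One workaround, sketched here on the assumption that a wrapper script is acceptable: since the script value is split on spaces (see the "Define script parameters in config as array" issue further down), point the config at a wrapper living at a space-free path.)

#!/bin/sh
# /opt/test_spaces_wrapper.sh - hypothetical wrapper at a space-free path
exec "/opt/test dir/test_spaces.sh" "$@"

and in the config:

scripts:
  - name: test_spaces
    script: /opt/test_spaces_wrapper.sh
    timeout:
      max_timeout: 55
      enforced: true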

Caching metrics

Hi Rico.

Great work - I love your little exporter here.

Is it possible to add a caching feature to the exporter?

I have some scripts that I would prefer to run on an hourly basis (for performance reasons), but to avoid Prometheus staleness I cannot set the scrape_interval higher than 5m.

Can the exporter perhaps cache the script output and return the last values for a given set of parameters?

Or is there a better way to accomplish this ?

Best regards
Torben
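(A later issue in this list, "gaps between the metrics", shows a cacheDuration setting being used in newer versions. A sketch of what that might look like for an hourly script; the script name and values are illustrative, and availability depends on your version:)

scripts:
  - name: hourly_check            # illustrative name
    command: /opt/scripts/hourly_check.sh
    cacheDuration: 3600s          # serve the cached output between real runs
    timeout:
      max_timeout: 600
      enforced: false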

Cannot obtain the script output/result from the script_exporter probe

Looking at the probe page for my custom script execution, and following the README examples, there is no way to obtain the script's stdout result.
For an example script that only and always returns the value "50", the probe returns only:

# HELP script_success Script exit status (0 = error, 1 = success).
# TYPE script_success gauge
script_success{script="pint01"} 0
# HELP script_duration_seconds Script execution time, in seconds.
# TYPE script_duration_seconds gauge
script_duration_seconds{script="pint01"} 0.005414
# HELP script_exit_code The exit code of the script.
# TYPE script_exit_code gauge
script_exit_code{script="pint01"} 0

without any trace of the output.
The README only explains how to ignore the output in prometheus.yaml, not how to force it.
What am I missing? :(
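(For comparison, the scripts in these issues that do surface output print Prometheus exposition format on stdout, i.e. HELP/TYPE lines followed by a sample. A minimal sketch, assuming only output in that format is passed through; the metric name is made up:)

#!/bin/sh
# Emit the value in Prometheus exposition format instead of a bare "50".
echo "# HELP pint01_value Value reported by the pint01 script."
echo "# TYPE pint01_value gauge"
echo "pint01_value 50"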

Provide an example of docker-compose.yml

I am trying to configure a docker-compose.yml file, but it doesn't seem to work. On the other hand, with a docker run command I have no problem.

What works:

docker run -d -p 9469:9469/tcp -v /data/script_exporter/examples:/opt/examples ricoberger/script_exporter:v2.4.0 -config.file /opt/examples/config.yaml -web.listen-address ":9469"

Script Exporter

Metrics

Probe

  • version: v2.4.0
  • branch: HEAD
  • revision: 5eb48ef
  • go version: go1.17.1
  • build user: root
  • build date: 20210915-14:23:11

What doesn't work:

version: '3'
services:
  script_exporter:
    command:
      - '-config.file=/opt/examples/config.yaml'
      - '-web.listen-address=":9469"'
    container_name: 'script_exporter'
    image: 'ricoberger/script_exporter:v2.4.0'
    ports:
      - '9469:9469'
    volumes:
      - '/data/script_exporter/examples:/opt/examples'

Below is the log when I run docker-compose:

Building with native build. Learn about native build in Compose here: https://docs.docker.com/go/compose-native-build/
Recreating script_exporter ... done
Attaching to script_exporter
script_exporter    | Starting server (version=v2.4.0, branch=HEAD, revision=5eb48ef1b53c11f98a4a6609667389e5c7140a42)
script_exporter    | Build context (go=go1.17.1, user=root, date=20210915-14:23:11)
script_exporter    | script_exporter listening on ":9469"
script_exporter    | 2021/09/29 20:17:17 listen tcp: address tcp/9469": unknown port
script_exporter exited with code 1

As your documentation doesn't specify anything about the parameters for docker-compose, could you tell me how to make your image work with docker-compose, please? Thanks
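(The error message, address tcp/9469": unknown port, suggests the inner double quotes in '-web.listen-address=":9469"' are reaching the exporter as part of the address; with docker run the shell strips them. A compose sketch without the inner quotes, untested:)

version: '3'
services:
  script_exporter:
    command:
      - '-config.file=/opt/examples/config.yaml'
      - '-web.listen-address=:9469'
    container_name: 'script_exporter'
    image: 'ricoberger/script_exporter:v2.4.0'
    ports:
      - '9469:9469'
    volumes:
      - '/data/script_exporter/examples:/opt/examples'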

env: expects list of strings, not a map.

Howdy, Rico!

I have a sensitive password that I'd like to provide via env vars.

I tried to follow this snippet as an example:

scripts:
  - name: test_env
    command: /tmp/my_script.sh
    env:
      http_proxy: http://proxy.example.com:3128
      https_proxy: http://proxy.example.com:3128

But it throws this message:

script_exporter[2622]: ts=2023-07-03T22:48:51.713Z caller=exporter.go:57 level=error err="yaml: unmarshal errors:\n line 5: cannot unmarshal !!map into []string"

Version: v2.12.0

Long shot:
When I provide the env vars as a list of strings, it works:

scripts:
  - name: test_env
    command: /tmp/my_script.sh
    env:
    - http_proxy
    - https_proxy

Add more script discovery options

I would like to use the great script_exporter discovery, but I am missing a few options there, like dynamic parameters and scrape_interval with scrape_target.

I wrote an initial PR, #89, to show the idea I'm talking about.

Powershell output comma converted to dot

Hello,

I'm trying to send PowerShell output:
windows_scheduled_task_job_execution { taskname="Generate Monthly Report", LastTaskResult="0", LastTaskStatus="Success"} 0

via the script_exporter to Prometheus, but the output seems to be reformatted from commas to dots:

# HELP script_exit_code The exit code of the script.
# TYPE script_exit_code gauge
script_exit_code{script="ExportScheduledTaskMetric"} 0
windows_scheduled_task_job_execution { taskname="Generate Monthly Report". LastTaskResult="0". LastTaskStatus="Success"} 0

Any help would be greatly appreciated.

Thank you in advance

Script failed: fork/exec, exec format error

Hi

Can anyone help me understand why this fails?

$ /app/dvkdbhk/home/dvkdbhkx/REPOS/lbrown15/kdb-data-services/lbrown15/script/sh/metrics/latency_by_exchange
#HELP latency_by_exchange Average Latency by exchange for the last 30 seconds
#TYPE latency_by_exchange gauge
latency_by_exchange{exchange="CU2", host="unycasd20556"} 0.6871429
latency_by_exchange{exchange="DTB", host="unycasd20556"} 0.7325
latency_by_exchange{exchange="EUX", host="unycasd20556"} 0.324
latency_by_exchange{exchange="ICE", host="unycasd20556"} 0.5640802
latency_by_exchange{exchange="LIF", host="unycasd20556"} 646.997
latency_by_exchange{exchange="LME", host="unycasd20556"} 5332.213
latency_by_exchange{exchange="OSA", host="unycasd20556"} 0.3487727

$ ./script_exporter-v2.0.1-linux-amd64 -config.file ./script_exporter.yml -web.listen-address :4522
Starting server (version=v2.0.1, branch=master, revision=92a7645e2e084df7334b71a17b65eb04bbda0e5c)
Build context (go=go1.13.4, user=ricoberger, date=20191213-09:08:44)
script_exporter listening on :4522
2020/03/18 05:14:20 Script failed: fork/exec /app/dvkdbhk/home/dvkdbhkx/REPOS/lbrown15/kdb-data-services/lbrown15/script/sh/metrics/latency_by_exchange: exec format error
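(An "exec format error" from fork/exec usually means the kernel does not recognize the file as an executable, most often because a script has no shebang line or the binary was built for a different architecture. Assuming latency_by_exchange is a shell script, a sketch of the usual fix is to make its very first line an interpreter line; note also that the exposition format expects a space after # in the HELP/TYPE lines:)

#!/bin/sh
# With an interpreter line at the top, the kernel can execute this file
# directly when script_exporter fork/execs it.
echo "# HELP latency_by_exchange Average Latency by exchange for the last 30 seconds"
echo "# TYPE latency_by_exchange gauge"
# ... emit the latency_by_exchange{...} samples as before ...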

Call does not wait for script execution to finish

Hello guys,

I would like some help.

I'm using script_exporter to run a shell script, which runs another powershell script.

Executing the shell script directly, I get the result successfully, even though it is not immediate, taking 30s for example.

Even though I configure the scrape interval and the timeout, whenever I curl the exporter the response is immediate, and so it does not return the echo (the variable created with the expected value).

Would you have any idea why?

Could it be because the shell script runs another PowerShell script?

Thank you very much!
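(One possible cause, purely an assumption since nothing in this report confirms it, is that the inner PowerShell call is started in the background, so the wrapper returns before it finishes. A sketch of a wrapper that waits for the inner command and only then prints the metric; the paths and names are hypothetical:)

#!/bin/sh
# Run the inner PowerShell script in the foreground and capture its output.
result="$(pwsh -File /opt/scripts/inner.ps1)"   # hypothetical path; on Windows, powershell.exe -File
echo "# HELP inner_script_result Result reported by the inner PowerShell script."
echo "# TYPE inner_script_result gauge"
echo "inner_script_result $result"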

how to get shell command metrics?

Mr. Rico Berger, I want to use script_exporter to monitor the pxf-cli status of Greenplum, so I wrote a script like:

#!/bin/sh
result="$(pxf-cli cluster status | grep 'running on [0-9] out of' | cut -b 19)"
echo "$result"
echo "PXF is running on $result out of 4 hosts"

I want to obtain the result, which equals 4,
but at localhost/metrics I only get:

scripts_requests_total{script="pxf_status_check"} 2

pxf_status_check is the name of my shell script.

Many thanks to you; I look forward to your response.
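(Two things worth noting, based on the other issues here rather than on the docs: the script output appears on the /probe endpoint, e.g. /probe?script=pxf_status_check, not on /metrics, and only output in Prometheus exposition format seems to be passed through. A sketch of the script emitting a proper gauge; the metric name is made up:)

#!/bin/sh
result="$(pxf-cli cluster status | grep 'running on [0-9] out of' | cut -b 19)"
echo "# HELP pxf_hosts_running Number of hosts PXF is currently running on."
echo "# TYPE pxf_hosts_running gauge"
echo "pxf_hosts_running $result"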

release

I'd like to use the unreleased improvements in master; can you build a new release?

Running script exporter on Kubernetes

Hi, I have a problem understanding how your exporter should work on Kubernetes. It may be due to my limited knowledge of the Kubernetes environment, but I hope you can explain a few things to me. Using it locally isn't a problem, but on Kubernetes it is.

Suppose I have a cluster named my-cluster and in it a few sample pods which serve a hello world page. My job is to get data about specific files from the containers in which the programs run; for example, at the path ~/var/app/log I have two files: log_1.log and log_2.log (in every container). I would like to calculate how many days lie between the creation/update of log_1.log and log_2.log, export that to Prometheus, and create a Grafana diagram of this information for every container.

Should I install the script exporter in every container and expose the information about the file differences, or can I run the script exporter as another pod in my cluster and access the filesystem of every container to get the required data? If the second way is possible, could you explain how it should look?

Thank you very much in advance for your time.

Paweł

missing port in address

Hello, I have trouble running the exporter on a specific port.
When I use the command ./script_exporter -config.file config.yml -web.listen-address 19469, there is an error message: "address 19469: missing port in address".
When I change it to ./script_exporter -config.file config.yml -web.listen-address 127.0.0.1:19469, the exporter seems to work, but I can't access "host:port/metrics" from a browser. Please help!
Thanks!!
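(-web.listen-address expects a host:port value; a bare port number needs a leading colon to listen on all interfaces, and binding to 127.0.0.1 would explain why the page is unreachable from a browser on another machine. For example:)

./script_exporter -config.file config.yml -web.listen-address :19469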

expected label value, got "INVALID"

hi,

I am trying the script exporter; I am able to make it work, and it outputs the expected values for a curl request.
curl "http://server.stag.use1b.com:9469/probe?script=check_service_script"

but when I try to scrape it using Prometheus, I get an expected label value, got "INVALID" error in Prometheus.
Prometheus config:

  - job_name: 'script_check'
    scrape_interval: 30s
    scrape_timeout: 10s
    metrics_path: /probe
    params:
      script: ['check_service_script']
    static_configs:
    - targets: ['server.stag.use1b.com:9469']

Can you tell me what the issue could be here?

script exporter version
[root@server examples]# script_exporter-linux-amd64 --version
script_exporter, version v2.2.0 (branch: HEAD, revision: b698e33)
build user: runner
build date: 20200508-06:03:16
go version: go1.13.10

Using scrape config with discovery does not trigger the scripts

Using the following scrape config in Prometheus:

      - job_name: 'script-exporter-discovery'
        http_sd_configs:
          - url: http://script-exporter-svc:9469/discovery

I can see script_success and the rest of the "basic" metrics, but the script itself does not get triggered.

Define script parameters in config as array

Hi there!

Currently, the script string is split on spaces to generate the program name and any fixed arguments. This approach means that the program name and fixed arguments can't contain spaces.

I propose that fixed arguments in the exporter config should be defined as an array instead of being defined in the script property, e.g. something like this:

Example 1:

scripts:
  - name: "example1"
    command: "./examples/connectivity-check.sh"
    args:
    - "google.com"
    - "bing.com"

Example 2:

scripts:
  - name: "example2"
    command: "netcat"
    args:
    - "-vzw"
    - "2"
    - "example.com"

The change can be implemented in a backwards compatible way:

  1. Keep script property with current behaviour.
  2. Require either script or command to be defined.
  3. Error if script is combined with args or command. args is always optional.

This is also how arguments are passed to a process in a Kubernetes pod manifest which means it will feel natural to many users of script_exporter:

apiVersion: v1
kind: Pod
metadata:
  name: command-demo
spec:
  containers:
  - name: command-demo-container
    image: debian
    command:
    - "printenv"
    args:
    - "HOSTNAME"
    - "KUBERNETES_PORT"

@ricoberger let me know what you think, I will be happy to implement this change.

Update Dockerfile to fix CRITICAL vulnerabilities linked to the used version of the base image

Name and Version
ricoberger/script_exporter:v2.1.1

What is the problem this feature will solve?
This change will fix critical security vulnerabilities; the trivy scan is attached as a screenshot.

For ricoberger/script_exporter:v2.1.1 that uses:

  • base image : alpine:3.10
  • build image : golang:1.13-alpine3.10

What is the feature you are proposing to solve the problem?
Update Dockerfile using:

  • base image : alpine:3.11.12
  • build image : golang:1.13-alpine3.11

The trivy scan for the suggested build and base images is attached as a screenshot as well.

Is it possible to run multiple scripts on the same prometheus job/endpoint

Hi,
I have a situation where I want to add a Prometheus job to scrape an endpoint, and on the server script_exporter would execute, let's say, 3 scripts. Is it possible to scrape them in the same job like this?

  - job_name: "oracle-scripts"
    scrape_interval: "1h"
    scrape_timeout: "1m"
    scheme: "http"
    metrics_path: "/probe"
    params:
      script:
        - "check_script1"
        - "check_script2"
        - "check_script3"
    file_sd_configs:
      - files:
          - "/etc/prometheus/file_sd/targets.yml"

If I run the above, I get the metrics of only the first script, check_script1, meaning the other two scripts are not executed.
The endpoint scraped from above would be:
http://target_server:9469/probe?script=check_script1&script=check_script2&script=check_script3

Thanks,
Enid

alpine lacks bash?

The Dockerfile starts with FROM golang:1.17-alpine3.14 as build, and the resulting image lacks bash. As such:

# docker run --rm -it --entrypoint /bin/bash ricoberger/script_exporter:v2.5.0
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/bin/bash": stat /bin/bash: no such file or directory: unknown.

Yet all the examples provided use bash (e.g. https://github.com/ricoberger/script_exporter/blob/master/examples/ping.sh).

Am I missing something here?
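(For what it's worth, the Alpine-based image still ships /bin/sh, so the container can be inspected with that instead; bash would have to be added in a derived image, or the example scripts switched to #!/bin/sh. To get a shell:)

docker run --rm -it --entrypoint /bin/sh ricoberger/script_exporter:v2.5.0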

Proposal: provide internal Prometheus metrics for script_exporter

The Prometheus Blackbox exporter both answers your probes with probe-specific metrics and provides additional internal metrics for its own state. Right now, script_exporter has no equivalent of the latter, and I think having it would be potentially handy, especially if the standard Go client metrics were augmented with metrics about what script_exporter is doing and has done so that you could see things like how many scripts were currently active.

I've put together a preliminary version of this to show what I'm thinking of, at https://github.com/siebenmann/script_exporter/tree/internal-metrics. The metrics are always exposed on /metrics; the code arranges to share this URL path with the script handler if necessary. I can further develop this into something worth a pull request if you're interested.

(Changing the default script handler path away from /metrics to, say, /probe, is probably too much of a breaking change. It turned out to be easy enough to share the two based on whether or not there are URL query parameters.)

add a helm chart

Hi! It would be great if there was a helm chart for this project

How to run scripts for windows?

Hi, what scripts can I use on Windows? I tried to use a Python script with the shebang #! python, but got the error "Script parameter is missing".
scripts:
  - name: my_script
    script: C:\my_script.py

What am I doing wrong?

Request Service Account support in helm chart

Very useful project, thanks! We have one small issue with EKS service accounts

In order to access some of the resources we need to report metrics on, the script-exporter instance needs to use an EKS workload identity provided via the AWS IAM integration with the EKS k8s service account. In order to leverage that in the pod we need to be able to assign a service account to the pod. This requires a small change to the deployment template to take a service account name and bind it to the pod.

See:

After setting a service account on the pod, the AWS CLI may be run inside the pod with the appropriate permissions to scrape the data we need to report metrics for.
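(For reference, the requested change amounts to something like the following in the chart's deployment template; the values key is a guess, not the chart's actual schema:)

# values.yaml (hypothetical key)
serviceAccount:
  name: script-exporter

# deployment.yaml pod spec (sketch)
spec:
  template:
    spec:
      serviceAccountName: {{ .Values.serviceAccount.name }}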

script_exporter's script_exit_code differs from the actual script's exit code

When executed from CLI:

# /opt/script_exporter/check_type_current.sh
# echo $?
0

===
At the same time, script_exporter shows an exit code of -1 (and script_success as 0):

#  curl localhost:9469/probe?script=check_type_current
# HELP script_success Script exit status (0 = error, 1 = success).
# TYPE script_success gauge
script_success{script="check_type_current"} 0
# HELP script_duration_seconds Script execution time, in seconds.
# TYPE script_duration_seconds gauge
script_duration_seconds{script="check_type_current"} 0.001040
# HELP script_exit_code The exit code of the script.
# TYPE script_exit_code gauge
script_exit_code{script="check_type_current"} -1

I made sure to check the config.yaml file so it points to the proper script:

# grep check_type_current config.yaml
 - name: 'check_type_current'
   script: /opt/script_exporter/check_type_current.sh

When the script exits with non-0 code, output is not provided to prometheus

Hello,

Unless I am mistaken in how the exporter is meant to be used, when the script exits with a non-zero return code, its output is not provided to Prometheus.
I believe it comes from here: https://github.com/ricoberger/script_exporter/blob/main/pkg/exporter/metrics.go#L67

Example OK:

[~]# cat script_exiting_0
#!/bin/bash
echo "EXIT 0"
exit 0
[~]# 
[~]# curl -s "http://0:9469/probe?script=script_exiting_0"
# HELP script_success Script exit status (0 = error, 1 = success).
# TYPE script_success gauge
script_success{script="script_exiting_0"} 1
# HELP script_duration_seconds Script execution time, in seconds.
# TYPE script_duration_seconds gauge
script_duration_seconds{script="script_exiting_0"} 0.011737
# HELP script_exit_code The exit code of the script.
# TYPE script_exit_code gauge
script_exit_code{script="script_exiting_0"} 0
EXIT 0

Example NOK:

[~]# cat script_exiting_1
#!/bin/bash
echo "EXIT 1"
exit 1
[~]# 
[~]# curl -s "http://0:9469/probe?script=script_exiting_1"
# HELP script_success Script exit status (0 = error, 1 = success).
# TYPE script_success gauge
script_success{script="script_exiting_1"} 0
# HELP script_duration_seconds Script execution time, in seconds.
# TYPE script_duration_seconds gauge
script_duration_seconds{script="script_exiting_1"} 0.002808
# HELP script_exit_code The exit code of the script.
# TYPE script_exit_code gauge
script_exit_code{script="script_exiting_1"} 1

about executing docker scripts

Hi, I'm using your script_exporter, it's great!!! However, when I executed a shell script involving Docker with script_exporter, it returned an error result, while it returns the correct result when I execute it via sh script.sh. I can't see the log and I don't know why. The script:

#!/bin/sh
source /etc/profile
result="$(sudo docker exec -it mysql_slave1 mysql -utest -p'test'  -e "show slave status\G" |grep "Slave_IO_Running: Yes"|wc -l)"
echo "# HELP mysql_slave1_io_running"
echo "# TYPE mysql_slave1_io_running gauge"
echo "mysql_slave1_io_running{label=\"mysql_slave1_io_running\"} $result"
My config.yaml:

tls:
  enabled: false
  crt: server.crt
  key: server.key

basicAuth:
  enabled: false
  username: admin
  password: admin

bearerAuth:
  enabled: false
  signingKey: my_secret_key

scripts:
  - name: mysql_slave1_sql_running
    script: ./test.sh
    timeout:
      max_timeout: 60
  - name: sleep
    script: sleep 120
    timeout:
      enforced: true

Can you help me?
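(One common cause when a docker exec works interactively but fails from an exporter or cron job, an assumption here since the error output isn't shown, is the -t flag: it requests a TTY, which is not available when script_exporter runs the script. A sketch of test.sh without it:)

#!/bin/sh
. /etc/profile
# -i keeps stdin open, but -t (TTY allocation) is dropped because no terminal is attached.
result="$(sudo docker exec -i mysql_slave1 mysql -utest -p'test' -e 'show slave status\G' | grep -c 'Slave_IO_Running: Yes')"
echo "# HELP mysql_slave1_io_running Whether the MySQL slave IO thread is running."
echo "# TYPE mysql_slave1_io_running gauge"
echo "mysql_slave1_io_running{label=\"mysql_slave1_io_running\"} $result"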

Script parameter is missing

Hi
I have tried your script exporter, but after calling http://localhost:9469/ I see:

Script Exporter
Metrics

Probe

version: v2.2.0
branch: master
revision: b698e33
go version: go1.13.7
build user:
build date:

After clicking Probe I see "Script parameter is missing". I am only testing your example setup:
./bin/script_exporter -config.file ./examples/config.yaml

Could you please advise me where the problem could be?

Thanks a lot

Vojtech
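(The Probe link on the index page points at /probe without a query string, which is exactly what produces "Script parameter is missing"; the exporter needs the script name as a query parameter. For example, with a script name guessed from the bundled examples, use whatever examples/config.yaml actually defines:)

http://localhost:9469/probe?script=ping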

gaps between the metrics

Since we started using script_exporter, we have found a weird issue that makes us uncomfortable. We use script_exporter to execute some long-running scripts, like smartctl metrics. Even though we have a reasonable cache duration set up, we still see gaps between the metrics that correspond (roughly) to the script execution time. The script-exporter.yml looks like:

scripts:
...
- name: node_smartmon
  command: "/usr/lib/prometheus/custom_metric_node_smartmon.sh"
  cacheDuration: 1200s
  timeout:
    max_timeout: 600
    enforced: false

The result is shown in the attached screenshot.

Is there any good way to get rid of these gaps?

windows example

Is it possible to have an example with a PowerShell file on a Windows host?

Include `jq` in the Docker image

Hey!

I was creating scripts that parse a REST API with jq. I was wondering if it was possible to include jq in the default Docker image, since it's a fairly standard tool for parsing JSON in bash scripts.

Obviously, I can build my own image, but perhaps more people could benefit / save time if some common scripting tools were shipped with the Docker image.

Thank you in advance for creating and maintaining this project :)

how to set up a script with params

Hi, how should I set the configs if I want to have script args set in prometheus.yml?

I've tried this, but the param doesn't seem to reach my script; otherwise, the script executes fine:

params:
  script: [my_script]
  params: [my_param]

Thanks!
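(Judging by the probe URL format shown in a later issue, probe?script=echo&params=s,t&s=foo&t=bar, the params entry only names the parameters; each value has to be passed as its own query parameter as well. A sketch for prometheus.yml, with an illustrative value:)

params:
  script: [my_script]
  params: [my_param]
  my_param: [my_value]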

Proposal: expose scrape timeout information to scripts and optionally enforce it

When Prometheus makes a scrape, it exposes information about the scrape timeout in a special HTTP header, which is used by Blackbox to set timeouts for its probes (other exporters may also use it, I'm not sure). I propose exposing this information to scripts in some environment variables and optionally having script_exporter enforce this by using exec.CommandContext with a Context that has a deadline (on the grounds that this saves people from having to wrap various scripts in a boilerplate timeout $SCRIPT_TIMEOUT cmd ...).

If this is of interest, I've put together a branch with an implementation of this: https://github.com/siebenmann/script_exporter/tree/script-timeouts

I've split the implementation into two commits (first adding the environment variables, then adding the optional enforcement) in case you'd like to not have the optional enforcement side. There's also a tiny cleanup of command line argument handling. I can revise any or all of these, or make a pull request, as you'd like.

script-exporter does not handle process reaping, leaving zombie processes behind

I'm running it in Kubernetes and use kubectl, jq, etc. to get the data to produce a metric. I noticed that the script execution fails intermittently with the script getting killed, even though it did not hit the timeout or log any errors.

Then I noticed this in the container:

script-exporter-5dd8cbfc8-xjv7r:$ ps faux
PID   USER     TIME  COMMAND
    1 nobody    0:07 /bin/script_exporter -config.file /etc/script-exporter/script-exporter.yml
 3300 nobody    0:00 [kubectl]
 3700 nobody    0:00 [kubectl]
 3798 nobody    0:00 [kubectl]
 3996 nobody    0:00 [kubectl]
 4900 nobody    0:00 [kubectl]
 6894 nobody    0:00 [kubectl]
 7895 nobody    0:00 [kubectl]
15302 nobody    0:00 [jq]
...

script-exporter-5dd8cbfc8-xjv7r:$ cat /proc/3300/status
Name:   kubectl
State:  Z (zombie)
Tgid:   3300
...

There are dozens of zombie processes sitting around. Perhaps script-exporter should run under tini or something else that handles reaping defunct processes? This is on containerd, by the way, which seemingly does not handle this the way Docker does.
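(Until the exporter or its image handles reaping, one Kubernetes-side mitigation, sketched here and not something the project documents, is to enable process namespace sharing so the pause container becomes PID 1 and reaps orphaned zombies:)

apiVersion: v1
kind: Pod
metadata:
  name: script-exporter
spec:
  shareProcessNamespace: true   # pause container becomes PID 1 and reaps zombies
  containers:
    - name: script-exporter
      image: ricoberger/script_exporter
      args:
        - -config.file=/etc/script-exporter/script-exporter.yml   # path mirrors the ps output above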

how to disable param config in url (security concern)

Params can be edited in the URL; that's a good feature, but it could also be a security problem. Can we disable this feature?

I tried

  - name: "echo"
    script: "echo"
  - name: "echo2"
    command: "echo"

but they both allow customizing the command parameters via the URL.

http://localhost:9469/probe?script=echo&params=s,t&s=foo&t=bar
http://localhost:9469/probe?script=echo2&params=s,t&s=foo&t=bar

Both pages show foo bar, meaning the parameters in the URL are passed to the executed command, which could lead to a security problem.

Script Results not showing on the web since upgrade to Redhat 8.9

Hi

We have been running the prometheus-script-exporter on Red Hat 7, but we have had to upgrade our machines to Red Hat 8.9; this also included an upgrade of Java from 8 to 11, and Tomcat went from v8 to v9.

Whilst many of our scripts are still working (these are all very basic Bash output), we have a slightly more complex script that connects to an etcd cluster to get some GitHub hashes.

The first line of this script is export PATH="${PATH}:/usr/local/bin" which now includes the v11 Java.

If I run the script manually on the server I can see the results; when I hit the endpoint URL with probe?script=script_name I see some of the starting text, all the way up to:

# HELP script_success Script exit status (0 = error, 1 = success).
# TYPE script_success gauge
script_success{script="script_name"} 1
# HELP script_duration_seconds Script execution time, in seconds.
# TYPE script_duration_seconds gauge
script_duration_seconds{script="script_name"} 0.170699
# HELP script_exit_code The exit code of the script.
# TYPE script_exit_code gauge
script_exit_code{script="script_name"} 0

but I don't see anything below this where the actual results of the script would come out.

This is the only script not working, and we are trying to figure it out. If you have any notion of prerequisites, or of how changes in the OS could affect things, that would help.

Many thanks
MP

Script Failure does not provide indication in script_success metric as to what script failed

Hi,
When a script fails, I am not able to correlate back to the script that actually failed. It would be great if it were included as a dimension in the script_success metric. Alternatively or additionally, it would also be great if the output parameter were respected, as in my case the script outputs an up/down metric that I can build my alerting off of.

Currently

# curl "localhost:9469/probe?output=true&script=db_tablespace"
# HELP script_success Script exit status (0 = error, 1 = success).
# TYPE script_success gauge
script_success{} 0
# HELP script_duration_seconds Script execution time, in seconds.
# TYPE script_duration_seconds gauge
script_duration_seconds{} 1.788413

Imagining

script_success{script="db_tablespace"} 0

And/Or optionally

script_success{script="db_tablespace"} 0
# HELP Script Output param output != ignore
db_tablespace_probe_up{dbhost="dbserver",database="CUST",port="1521"} 0

Please let me know your thoughts on the above.
Regards,
Chris Whelan

params not being honored despite flag noargs not present

I am testing v2.14.0 in Docker as shown in the README.

/examples # ps -ef
PID USER TIME COMMAND
1 root 0:00 /bin/script_exporter -config.file /examples/config.yaml

I slightly modified the helloworld.sh script as below:
/examples # cat helloworld.sh
#!/bin/sh
echo "hello_world{params="$1","$2"} 1"

I then query as below:
http://localhost:9469/probe?script=helloworld&prefix=mypref&params=argv

The output shows (among other lines):
mypref_hello_world{params="test",""} 1

I also tried the ping script (ping.sh) and no argument is passed.

Suggestions?
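(Going by the probe URL format in the "how to disable param config in url" issue, params=s,t&s=foo&t=bar, params only lists the parameter names; the values have to be supplied as query parameters of their own, otherwise $1/$2 stay empty. A guess at the intended request, values illustrative:)

http://localhost:9469/probe?script=helloworld&prefix=mypref&params=arg1,arg2&arg1=foo&arg2=bar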

Script Exporter max_timeout not taking effect if query or header timeouts not specified

Configuring the maximum script timeout in Script Exporter's config.yaml file does not take effect unless query or header timeouts are specified alongside it. Scripts execute successfully even when their duration exceeds the configured max_timeout.

I created a test_script that sleeps for 5 seconds before executing an echo command.

ping 192.0.2.0 -n 1 -w 5000 >nul
ECHO Script executed successfully 

Then, in Script Exporter's config.yaml file, the script's max_timeout was set to 1 second.

scripts:
  - name: test_script
    command: .\test_script.bat
    timeout:
      max_timeout: 1.0
      enforced: true 

The script was expected to be killed. Instead, it executed successfully.


However, the max_timeout worked as expected when passed as a header in the GET request on Script Exporter's probe endpoint. As a result, the script was killed and did not return any value.

ts=2023-03-17T11:11:19.895Z caller=metrics.go:77 level=error msg="Run script failed" err="exit status 1" 

The max_timeout also killed the test_script when the timeout was passed as a query in Script Exporter's probe endpoint.

ts=2023-03-23T10:23:41.217Z caller=metrics.go:77 level=error msg="Run script failed" err="exit status 1"

I think I found out how this is happening: https://github.com/ricoberger/script_exporter/blob/v2.11.0/pkg/exporter/scripts.go#L121

The code clearly shows that this is the expected behavior, but is it the intended behavior or is it a bug?

Logger dumps sensitive env vars into logs

Hello @ricoberger !

Thank you for maintaining this great tool!

I recently discovered that the script executor dumps all environment variables into logs on errors:

"env", strings.Join(cmd.Env, " "),

which can be very helpful for debugging, but it can also dump sensitive values such as passwords/access tokens/etc, for example if a script is something like

curl -s -u $GITHUB_USERNAME:$GITHUB_TOKEN $somegithublink

and thanks to log shipping that immediately ends up exposed to anyone with access to logs.

It would be great to (conditionally?) avoid dumping env vars to logs.

script_exporter not passing custom params to script

Hi, I have a script_exporter configured with the following file:

scripts:
  - name: my_script
    script: /full/path/to/script/my_script.py value3 value4

The script my_script.py is designed to output the received args to a file.
When I use this command:
curl http://127.0.0.1:9469/probe?script=my_script&params=param1&param1=value1
I would have expected to see value1 in the file output; however, that is not the case: I see only the params value3 and value4. I cannot figure out what is wrong.
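(One thing worth ruling out first, an assumption since only the command line is shown: without quotes around the URL, the shell treats each & as a background operator, so curl never receives the params=param1&param1=value1 part. The other curl examples in these issues quote the URL:)

curl "http://127.0.0.1:9469/probe?script=my_script&params=param1&param1=value1"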

Docker image

Thanks for the nice project!!

I am looking for a container-based (Docker) image.

RFE: Support directly running scripts and switch to it by default

Currently, the script exporter runs specified scripts by invoking the specified shell (/bin/sh by default) with the command line arguments script [arg ...]. The problem with this is that it is very easy to believe that you can specify a binary executable as the script: setting (a belief that is encouraged by two out of the three examples starting with #! /bin/bash). This doesn't work, and in fact it fails explosively; some shells will try to interpret your binary executable as a shell script and all sorts of crazy things proceed to happen.

I propose two changes (and I can submit a pull request to implement one or both). First, support directly running programs, instead of invoking the shell, by setting an empty shell on the command line: script_exporter -config.shell "". Second, make this the default and require people to specifically set the shell (even to /bin/sh) if they want the behavior of running through the shell.

(This requires only minor changes because exec.Command() already pretty much supports this usage; you just run args[0] instead of *shell, with small other changes.)

impossible to launch a windows service from script_exporter-windows-amd64.exe

I created a Windows service with sc.exe, and when I try to start the service, error 1053 appears: "The service did not respond to the start or control request in a timely fashion".

It seems that script_exporter.go is missing the Service Control Manager integration: the SCM does not receive a "service started" notice from the service within the time-out period.

Is it possible to fix this?

Get script exit code as a metric

hi,
Could you please advise whether there is a way in the current release to get the script's original exit code returned as a metric?

Thanks in advance
