prometheus-podman-exporter's Introduction

prometheus-podman-exporter

Prometheus exporter for Podman v4 and v5 environments, exposing information about containers, pods, images, volumes and networks.

prometheus-podman-exporter uses the Podman (libpod) library to fetch statistics, so there is no need to enable the podman.socket service unless you are using the container image.

Installation

Building from source, using the container image, and installing packaged versions are detailed in the install guide.

Usage and Options

Usage:
  prometheus-podman-exporter [flags]

Flags:
  -t, --collector.cache_duration int          Duration (seconds) to retrieve container size and refresh the cache. (default 3600)
  -a, --collector.enable-all                  Enable all collectors by default.
      --collector.enhance-metrics             Enhance all metrics with the same fields as their podman_<...>_info metrics.
  -i, --collector.image                       Enable image collector.
  -n, --collector.network                     Enable network collector.
  -o, --collector.pod                         Enable pod collector.
  -b, --collector.store_labels                Convert pod/container/image labels on prometheus metrics for each pod/container/image.
  -s, --collector.system                      Enable system collector.
  -v, --collector.volume                      Enable volume collector.
  -w, --collector.whitelisted_labels string   Comma separated list of pod/container/image labels to be converted
                                              to labels on prometheus metrics for each pod/container/image.
                                              collector.store_labels must be set to false for this to take effect.
  -d, --debug                                 Set log level to debug.
  -h, --help                                  help for prometheus-podman-exporter
      --version                               Print version and exit.
      --web.config.file string                [EXPERIMENTAL] Path to configuration file that can enable TLS or authentication.
  -e, --web.disable-exporter-metrics          Exclude metrics about the exporter itself (promhttp_*, process_*, go_*).
  -l, --web.listen-address string             Address on which to expose metrics and web interface. (default ":9882")
  -m, --web.max-requests int                  Maximum number of parallel scrape requests. Use 0 to disable (default 40)
  -p, --web.telemetry-path string             Path under which to expose metrics. (default "/metrics")

By default only the container collector is enabled. To enable all collectors use --collector.enable-all, or enable individual collectors with the corresponding --collector.<name> flags.

Example: enable all available collectors:

$ ./bin/prometheus-podman-exporter --collector.enable-all
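
To have Prometheus scrape the exporter, a job similar to the following can be added to prometheus.yml (a minimal sketch assuming the default listen address and telemetry path shown above; adjust the target host as needed):

scrape_configs:
  - job_name: "podman"
    metrics_path: /metrics            # default --web.telemetry-path
    static_configs:
      - targets: ["localhost:9882"]   # default --web.listen-address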

By default the exporter exposes metrics over plain HTTP without any form of authentication. Use --web.config.file with a configuration file to enable TLS for confidentiality and/or authentication. Visit this page for more information about the syntax of the configuration file.
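
As a sketch, a web configuration file following the exporter-toolkit format can enable TLS and basic authentication; the file paths, user name and hash below are placeholders, and the linked documentation remains the authoritative reference for the syntax:

tls_server_config:
  cert_file: /path/to/server.crt
  key_file: /path/to/server.key
basic_auth_users:
  # user name mapped to a bcrypt hash of the password
  prometheus: $2y$10$...

$ ./bin/prometheus-podman-exporter --web.config.file web-config.yml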

Collectors

The table below lists all existing collectors and their descriptions.

Name       Description
container  exposes containers information
image      exposes images information
network    exposes networks information
pod        exposes pod information
volume     exposes volume information
system     exposes system (host) information
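
For example, to enable only the image, pod and volume collectors in addition to the default container collector, using the flags listed above:

$ ./bin/prometheus-podman-exporter --collector.image --collector.pod --collector.volume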

Collector example outputs

container

# HELP podman_container_info Container information.
# TYPE podman_container_info gauge
podman_container_info{id="19286a13dc23",image="docker.io/library/sonarqube:latest",name="sonar01",pod_id="",pod_name="",ports="0.0.0.0:9000->9000/tcp"} 1
podman_container_info{id="482113b805f7",image="docker.io/library/httpd:latest",name="web_server",pod_id="",pod_name="",ports="0.0.0.0:8000->80/tcp"} 1
podman_container_info{id="642490688d9c",image="docker.io/grafana/grafana:latest",name="grafana",pod_id="",pod_name="",ports="0.0.0.0:3000->3000/tcp"} 1
podman_container_info{id="ad36e85960a1",image="docker.io/library/busybox:latest",name="busybox01",pod_id="3e8bae64e9af",pod_name="pod01",ports=""} 1
podman_container_info{id="dda983cc3ecf",image="localhost/podman-pause:4.1.0-1651853754",name="3e8bae64e9af-infra",pod_id="3e8bae64e9af",pod_name="pod01",ports=""} 1

# HELP podman_container_state Container current state (-1=unknown,0=created,1=initialized,2=running,3=stopped,4=paused,5=exited,6=removing,7=stopping).
# TYPE podman_container_state gauge
podman_container_state{id="19286a13dc23",pod_id="",pod_name=""} 2
podman_container_state{id="482113b805f7",pod_id="",pod_name=""} 4
podman_container_state{id="642490688d9c",pod_id="",pod_name=""} 2
podman_container_state{id="ad36e85960a1",pod_id="3e8bae64e9af",pod_name="pod01"} 5
podman_container_state{id="dda983cc3ecf",pod_id="3e8bae64e9af",pod_name="pod01"} 2

# HELP podman_container_block_input_total Container block input.
# TYPE podman_container_block_input_total counter
podman_container_block_input_total{id="19286a13dc23",pod_id="",pod_name=""} 49152
podman_container_block_input_total{id="482113b805f7",pod_id="",pod_name=""} 0
podman_container_block_input_total{id="642490688d9c",pod_id="",pod_name=""} 1.41533184e+08
podman_container_block_input_total{id="ad36e85960a1",pod_id="3e8bae64e9af",pod_name="pod01"} 0
podman_container_block_input_total{id="dda983cc3ecf",pod_id="3e8bae64e9af",pod_name="pod01"} 0

# HELP podman_container_block_output_total Container block output.
# TYPE podman_container_block_output_total counter
podman_container_block_output_total{id="19286a13dc23",pod_id="",pod_name=""} 1.790976e+06
podman_container_block_output_total{id="482113b805f7",pod_id="",pod_name=""} 8192
podman_container_block_output_total{id="642490688d9c",pod_id="",pod_name=""} 4.69248e+07
podman_container_block_output_total{id="ad36e85960a1",pod_id="3e8bae64e9af",pod_name="pod01"} 0
podman_container_block_output_total{id="dda983cc3ecf",pod_id="3e8bae64e9af",pod_name="pod01"} 0

# HELP podman_container_cpu_seconds_total total CPU time spent for container in seconds.
# TYPE podman_container_cpu_seconds_total counter
podman_container_cpu_seconds_total{id="19286a13dc23",pod_id="",pod_name=""} 83.231904
podman_container_cpu_seconds_total{id="482113b805f7",pod_id="",pod_name=""} 0.069712
podman_container_cpu_seconds_total{id="642490688d9c",pod_id="",pod_name=""} 3.028685
podman_container_cpu_seconds_total{id="ad36e85960a1",pod_id="3e8bae64e9af",pod_name="pod01"} 0
podman_container_cpu_seconds_total{id="dda983cc3ecf",pod_id="3e8bae64e9af",pod_name="pod01"} 0.011687

# HELP podman_container_cpu_system_seconds_total total system CPU time spent for container in seconds.
# TYPE podman_container_cpu_system_seconds_total counter
podman_container_cpu_system_seconds_total{id="19286a13dc23",pod_id="",pod_name=""} 0.007993418
podman_container_cpu_system_seconds_total{id="482113b805f7",pod_id="",pod_name=""} 4.8591e-05
podman_container_cpu_system_seconds_total{id="642490688d9c",pod_id="",pod_name=""} 0.00118734
podman_container_cpu_system_seconds_total{id="ad36e85960a1",pod_id="3e8bae64e9af",pod_name="pod01"} 0
podman_container_cpu_system_seconds_total{id="dda983cc3ecf",pod_id="3e8bae64e9af",pod_name="pod01"} 9.731e-06

# HELP podman_container_created_seconds Container creation time in unixtime.
# TYPE podman_container_created_seconds gauge
podman_container_created_seconds{id="19286a13dc23",pod_id="",pod_name=""} 1.655859887e+09
podman_container_created_seconds{id="482113b805f7",pod_id="",pod_name=""} 1.655859728e+09
podman_container_created_seconds{id="642490688d9c",pod_id="",pod_name=""} 1.655859511e+09
podman_container_created_seconds{id="ad36e85960a1",pod_id="3e8bae64e9af",pod_name="pod01"} 1.655859858e+09
podman_container_created_seconds{id="dda983cc3ecf",pod_id="3e8bae64e9af",pod_name="pod01"} 1.655859839e+09

# HELP podman_container_started_seconds Container started time in unixtime.
# TYPE podman_container_started_seconds gauge
podman_container_started_seconds{id="19286a13dc23",pod_id="",pod_name=""} 1.659253804e+09
podman_container_started_seconds{id="482113b805f7",pod_id="",pod_name=""} 1.659253804e+09
podman_container_started_seconds{id="642490688d9c",pod_id="",pod_name=""} 1.660642996e+09
podman_container_started_seconds{id="ad36e85960a1",pod_id="3e8bae64e9af",pod_name="pod01"} 1.66064284e+09
podman_container_started_seconds{id="dda983cc3ecf",pod_id="3e8bae64e9af",pod_name="pod01"} 1.66064284e+09

# HELP podman_container_exit_code Container exit code, if the container has not exited or restarted then the exit code will be 0.
# TYPE podman_container_exit_code gauge
podman_container_exit_code{id="19286a13dc23",pod_id="",pod_name=""} 0
podman_container_exit_code{id="482113b805f7",pod_id="",pod_name=""} 0
podman_container_exit_code{id="642490688d9c",pod_id="",pod_name=""} 0
podman_container_exit_code{id="ad36e85960a1",pod_id="3e8bae64e9af",pod_name="pod01"} 130
podman_container_exit_code{id="dda983cc3ecf",pod_id="3e8bae64e9af",pod_name="pod01"} 0

# HELP podman_container_exited_seconds Container exited time in unixtime.
# TYPE podman_container_exited_seconds gauge
podman_container_exited_seconds{id="19286a13dc23",pod_id="",pod_name=""} 1.659253805e+09
podman_container_exited_seconds{id="482113b805f7",pod_id="",pod_name=""} 1.659253805e+09
podman_container_exited_seconds{id="642490688d9c",pod_id="",pod_name=""} 1.659253804e+09
podman_container_exited_seconds{id="ad36e85960a1",pod_id="3e8bae64e9af",pod_name="pod01"} 1.660643511e+09
podman_container_exited_seconds{id="dda983cc3ecf",pod_id="3e8bae64e9af",pod_name="pod01"} 1.660643511e+09

# HELP podman_container_mem_limit_bytes Container memory limit.
# TYPE podman_container_mem_limit_bytes gauge
podman_container_mem_limit_bytes{id="19286a13dc23",pod_id="",pod_name=""} 9.713655808e+09
podman_container_mem_limit_bytes{id="482113b805f7",pod_id="",pod_name=""} 9.713655808e+09
podman_container_mem_limit_bytes{id="642490688d9c",pod_id="",pod_name=""} 9.713655808e+09
podman_container_mem_limit_bytes{id="ad36e85960a1",pod_id="3e8bae64e9af",pod_name="pod01"} 0
podman_container_mem_limit_bytes{id="dda983cc3ecf",pod_id="3e8bae64e9af",pod_name="pod01"} 9.713655808e+09

# HELP podman_container_mem_usage_bytes Container memory usage.
# TYPE podman_container_mem_usage_bytes gauge
podman_container_mem_usage_bytes{id="19286a13dc23",pod_id="",pod_name=""} 1.029062656e+09
podman_container_mem_usage_bytes{id="482113b805f7",pod_id="",pod_name=""} 2.748416e+06
podman_container_mem_usage_bytes{id="642490688d9c",pod_id="",pod_name=""} 3.67616e+07
podman_container_mem_usage_bytes{id="ad36e85960a1",pod_id="3e8bae64e9af",pod_name="pod01"} 0
podman_container_mem_usage_bytes{id="dda983cc3ecf",pod_id="3e8bae64e9af",pod_name="pod01"} 49152

# HELP podman_container_net_input_total Container network input.
# TYPE podman_container_net_input_total counter
podman_container_net_input_total{id="19286a13dc23",pod_id="",pod_name=""} 430
podman_container_net_input_total{id="482113b805f7",pod_id="",pod_name=""} 430
podman_container_net_input_total{id="642490688d9c",pod_id="",pod_name=""} 4323
podman_container_net_input_total{id="ad36e85960a1",pod_id="3e8bae64e9af",pod_name="pod01"} 0
podman_container_net_input_total{id="dda983cc3ecf",pod_id="3e8bae64e9af",pod_name="pod01"} 430

# HELP podman_container_net_output_total Container network output.
# TYPE podman_container_net_output_total counter
podman_container_net_output_total{id="19286a13dc23",pod_id="",pod_name=""} 110
podman_container_net_output_total{id="482113b805f7",pod_id="",pod_name=""} 110
podman_container_net_output_total{id="642490688d9c",pod_id="",pod_name=""} 12071
podman_container_net_output_total{id="ad36e85960a1",pod_id="3e8bae64e9af",pod_name="pod01"} 0
podman_container_net_output_total{id="dda983cc3ecf",pod_id="3e8bae64e9af",pod_name="pod01"} 110

# HELP podman_container_pids Container pid number.
# TYPE podman_container_pids gauge
podman_container_pids{id="19286a13dc23",pod_id="",pod_name=""} 94
podman_container_pids{id="482113b805f7",pod_id="",pod_name=""} 82
podman_container_pids{id="642490688d9c",pod_id="",pod_name=""} 14
podman_container_pids{id="ad36e85960a1",pod_id="3e8bae64e9af",pod_name="pod01"} 0
podman_container_pids{id="dda983cc3ecf",pod_id="3e8bae64e9af",pod_name="pod01"} 1

# HELP podman_container_rootfs_size_bytes Container root filesystem size in bytes.
# TYPE podman_container_rootfs_size_bytes gauge
podman_container_rootfs_size_bytes{id="19286a13dc23",pod_id="",pod_name=""} 1.452382e+06
podman_container_rootfs_size_bytes{id="482113b805f7",pod_id="",pod_name=""} 1.135744e+06
podman_container_rootfs_size_bytes{id="642490688d9c",pod_id="",pod_name=""} 1.72771905e+08
podman_container_rootfs_size_bytes{id="ad36e85960a1",pod_id="3e8bae64e9af",pod_name="pod01"} 1.135744e+06
podman_container_rootfs_size_bytes{id="dda983cc3ecf",pod_id="3e8bae64e9af",pod_name="pod01"} 1.035744e+06

# HELP podman_container_rw_size_bytes Container top read-write layer size in bytes.
# TYPE podman_container_rw_size_bytes gauge
podman_container_rw_size_bytes{id="19286a13dc23",pod_id="",pod_name=""} 0
podman_container_rw_size_bytes{id="482113b805f7",pod_id="",pod_name=""} 0
podman_container_rw_size_bytes{id="642490688d9c",pod_id="",pod_name=""} 26261
podman_container_rw_size_bytes{id="ad36e85960a1",pod_id="3e8bae64e9af",pod_name="pod01"} 3551
podman_container_rw_size_bytes{id="dda983cc3ecf",pod_id="3e8bae64e9af",pod_name="pod01"} 0

pod

# HELP podman_pod_state Pods current state (-1=unknown,0=created,1=error,2=exited,3=paused,4=running,5=degraded,6=stopped).
# TYPE podman_pod_state gauge
podman_pod_state{id="3e8bae64e9af"} 5
podman_pod_state{id="959a0a3530db"} 0
podman_pod_state{id="d05cda23085a"} 2

# HELP podman_pod_info Pod information
# TYPE podman_pod_info gauge
podman_pod_info{id="3e8bae64e9af",infra_id="dda983cc3ecf",name="pod01"} 1
podman_pod_info{id="959a0a3530db",infra_id="22e3d69be889",name="pod02"} 1
podman_pod_info{id="d05cda23085a",infra_id="390ac740fa80",name="pod03"} 1

# HELP podman_pod_containers Number of containers in a pod.
# TYPE podman_pod_containers gauge
podman_pod_containers{id="3e8bae64e9af"} 2
podman_pod_containers{id="959a0a3530db"} 1
podman_pod_containers{id="d05cda23085a"} 1

# HELP podman_pod_created_seconds Pods creation time in unixtime.
# TYPE podman_pod_created_seconds gauge
podman_pod_created_seconds{id="3e8bae64e9af"} 1.655859839e+09
podman_pod_created_seconds{id="959a0a3530db"} 1.655484892e+09
podman_pod_created_seconds{id="d05cda23085a"} 1.655489348e+09

image

# HELP podman_image_info Image information.
# TYPE podman_image_info gauge
podman_image_info{id="48565a8e6250",parent_id="",repository="docker.io/bitnami/prometheus",tag="latest",digest="sha256:4d7fdebe2a853aceb15019554b56e58055f7a746c0b4095eec869d5b6c11987e"} 1
podman_image_info{id="62aedd01bd85",parent_id="",repository="docker.io/library/busybox",tag="latest",digest="sha256:6d9ac9237a84afe1516540f40a0fafdc86859b2141954b4d643af7066d598b74"} 1
podman_image_info{id="75c013514322",parent_id="",repository="docker.io/library/sonarqube",tag="latest",digest="sha256:548f3d4246cda60c311a035620c26ea8fb21b3abc870c5806626a32ef936982b"} 1
podman_image_info{id="a45fa0117c2b",parent_id="",repository="localhost/podman-pause",tag="4.1.0-1651853754",digest="sha256:218169c5590870bb95c06e9f7e80ded58f6644c1974b0ca7f2c3405b74fc3b57"} 1
podman_image_info{id="b260a49eebf9",parent_id="",repository="docker.io/library/httpd",tag="latest",digest="sha256:ba846154ade27292d216cce2d21f1c7e589f3b66a4a643bff0cdd348efd17aa3"} 1
podman_image_info{id="c4b778290339",parent_id="b260a49eebf9",repository="docker.io/grafana/grafana",tag="latest",digest="sha256:7567a7c70a3c1d75aeeedc968d1304174a16651e55a60d1fb132a05e1e63a054"} 1

# HELP podman_image_created_seconds Image creation time in unixtime.
# TYPE podman_image_created_seconds gauge
podman_image_created_seconds{id="48565a8e6250",repository="docker.io/bitnami/prometheus",tag="latest"} 1.655436988e+09
podman_image_created_seconds{id="62aedd01bd85",repository="docker.io/library/busybox",tag="latest"} 1.654651161e+09
podman_image_created_seconds{id="75c013514322",repository="docker.io/library/sonarqube",tag="latest"} 1.654883091e+09
podman_image_created_seconds{id="a45fa0117c2b",repository="localhost/podman-pause",tag="4.1.0-1651853754"} 1.655484887e+09
podman_image_created_seconds{id="b260a49eebf9",repository="docker.io/library/httpd",tag="latest"} 1.655163309e+09
podman_image_created_seconds{id="c4b778290339",repository="docker.io/grafana/grafana",tag="latest"} 1.655132996e+09

# HELP podman_image_size Image size
# TYPE podman_image_size gauge
podman_image_size{id="48565a8e6250",repository="docker.io/bitnami/prometheus",tag="latest"} 5.11822059e+08
podman_image_size{id="62aedd01bd85",repository="docker.io/library/busybox",tag="latest"} 1.468102e+06
podman_image_size{id="75c013514322",repository="docker.io/library/sonarqube",tag="latest"} 5.35070053e+08
podman_image_size{id="a45fa0117c2b",repository="localhost/podman-pause",tag="4.1.0-1651853754"} 815742
podman_image_size{id="b260a49eebf9",repository="docker.io/library/httpd",tag="latest"} 1.49464899e+08
podman_image_size{id="c4b778290339",repository="docker.io/grafana/grafana",tag="latest"} 2.98969093e+08

network

# HELP podman_network_info Network information.
# TYPE podman_network_info gauge
podman_network_info{driver="bridge",id="2f259bab93aa",interface="podman0",labels="",name="podman"} 1
podman_network_info{driver="bridge",id="420272a98a4c",interface="podman3",labels="",name="network03"} 1
podman_network_info{driver="bridge",id="6eb310d4b0bb",interface="podman2",labels="",name="network02"} 1
podman_network_info{driver="bridge",id="a5a6391121a5",interface="podman1",labels="",name="network01"} 1

volume

# HELP podman_volume_info Volume information.
# TYPE podman_volume_info gauge
podman_volume_info{driver="local",mount_point="/home/navid/.local/share/containers/storage/volumes/vol01/_data",name="vol01"} 1
podman_volume_info{driver="local",mount_point="/home/navid/.local/share/containers/storage/volumes/vol02/_data",name="vol02"} 1
podman_volume_info{driver="local",mount_point="/home/navid/.local/share/containers/storage/volumes/vol03/_data",name="vol03"} 1

# HELP podman_volume_created_seconds Volume creation time in unixtime.
# TYPE podman_volume_created_seconds gauge
podman_volume_created_seconds{name="vol01"} 1.655484915e+09
podman_volume_created_seconds{name="vol02"} 1.655484926e+09
podman_volume_created_seconds{name="vol03"} 1.65548493e+09

system

# HELP podman_system_api_version Podman system api version.
# TYPE podman_system_api_version gauge
podman_system_api_version{version="4.1.1"} 1

# HELP podman_system_buildah_version Podman system buildahVer version.
# TYPE podman_system_buildah_version gauge
podman_system_buildah_version{version="1.26.1"} 1

# HELP podman_system_conmon_version Podman system conmon version.
# TYPE podman_system_conmon_version gauge
podman_system_conmon_version{version="2.1.0"} 1

# HELP podman_system_runtime_version Podman system runtime version.
# TYPE podman_system_runtime_version gauge
podman_system_runtime_version{version="crun version 1.4.5"} 1

License

Licensed under the Apache 2.0 license.

prometheus-podman-exporter's People

Contributors

dependabot[bot], eesprit, fpoirotte, ingobecker, jasyip, mjtrangoni, navidys, rahilarious, rhatdan

prometheus-podman-exporter's Issues

Flag `--version` requires conmon to be installed

Describe the bug
The flag --version can't be used in a CI as it needs the conmon utility to be available, and it seems that we are not exposing any conmon information at this stage.

To Reproduce
Steps to reproduce the behavior:

Compile the program and try to use the flag --version

$ ./bin/prometheus-podman-exporter --version
2022/10/11 12:11:59 could not find a working conmon binary (configured options: [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon]): invalid argument

Expected behavior

$ bin/prometheus-podman-exporter --version
prometheus-podman-exporter (version=1.2.0, branch=main, revision=dev.1)

Additional context
Building inside of a rockylinux:8 container image, without conmon installed.

SIGSEGV on startup until user runs "podman system reset"

I have a few Proxmox machines running a multitude of VMs, each with multiple users running various podman containers.

The VMs themselves are running Debian bookworm.

Each user runs prometheus-podman-exporter under a systemd service:

[Unit]
Description=Prometheus Podman Exporter
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers

StartLimitIntervalSec=300
StartLimitBurst=3

[Service]
Restart=on-failure
RestartSec=30

ExecStart=/bin/bash -c ' \
    exec /opt/prometheus/podman-exporter/prometheus-podman-exporter \
    --web.listen-address "127.0.0.1:$((40000 + %U))" \
    --collector.enable-all \
'

[Install]
WantedBy=default.target

This service is run by the various podman user accounts using systemctl --user. This works perfectly on many of my machines, but one of them segfaults whenever I start the service on any VM and any user:

ts=2024-04-25T18:46:14.302Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.11.0, branch=, revision=1)"
ts=2024-04-25T18:46:14.302Z caller=exporter.go:69 level=info msg=metrics enhanced=false
ts=2024-04-25T18:46:14.302Z caller=handler.go:94 level=info msg="enabled collectors"
ts=2024-04-25T18:46:14.302Z caller=handler.go:105 level=info collector=container
ts=2024-04-25T18:46:14.302Z caller=handler.go:105 level=info collector=image
ts=2024-04-25T18:46:14.302Z caller=handler.go:105 level=info collector=network
ts=2024-04-25T18:46:14.302Z caller=handler.go:105 level=info collector=pod
ts=2024-04-25T18:46:14.302Z caller=handler.go:105 level=info collector=system
ts=2024-04-25T18:46:14.302Z caller=handler.go:105 level=info collector=volume
ts=2024-04-25T18:46:14.306Z caller=events.go:17 level=debug msg="starting podman event streamer"
ts=2024-04-25T18:46:14.306Z caller=events.go:20 level=debug msg="update images"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0xfaf217]

goroutine 1 [running]:
github.com/containers/common/libimage.(*Runtime).ListImages(0x0, {0x1c9a700, 0xc0003cbbf0}, {0x0?, 0xc00022d740?, 0x4176e5?}, 0x30?)
        /opt/prometheus/podman-exporter/v1.11.0/vendor/github.com/containers/common/libimage/runtime.go:587 +0x97
github.com/containers/podman/v5/pkg/domain/infra/abi.(*ImageEngine).List(0xc000178000, {0x1c9a700, 0xc0003cbbf0}, {0x0?, {0x0?, 0x763b80?, 0x27d8680?}})
        /opt/prometheus/podman-exporter/v1.11.0/vendor/github.com/containers/podman/v5/pkg/domain/infra/abi/images_list.go:25 +0x174
github.com/containers/prometheus-podman-exporter/pdcs.updateImages()
        /opt/prometheus/podman-exporter/v1.11.0/pdcs/image.go:48 +0x7c
github.com/containers/prometheus-podman-exporter/pdcs.StartEventStreamer({0x1c8b720?, 0xc00007f740}, 0x1)
        /opt/prometheus/podman-exporter/v1.11.0/pdcs/events.go:21 +0x1d3
github.com/containers/prometheus-podman-exporter/exporter.Start(0x0?, {0x0?, 0x0?, 0x0?})
        /opt/prometheus/podman-exporter/v1.11.0/exporter/exporter.go:93 +0x5fb
github.com/containers/prometheus-podman-exporter/cmd.run(0x27ebd40?, {0xc00007f480?, 0x4?, 0x19c8839?})
        /opt/prometheus/podman-exporter/v1.11.0/cmd/root.go:53 +0x1c
github.com/spf13/cobra.(*Command).execute(0x27ebd40, {0xc0000400b0, 0x4, 0x4})
        /opt/prometheus/podman-exporter/v1.11.0/vendor/github.com/spf13/cobra/command.go:987 +0xaa3
github.com/spf13/cobra.(*Command).ExecuteC(0x27ebd40)
        /opt/prometheus/podman-exporter/v1.11.0/vendor/github.com/spf13/cobra/command.go:1115 +0x3ff
github.com/spf13/cobra.(*Command).Execute(...)
        /opt/prometheus/podman-exporter/v1.11.0/vendor/github.com/spf13/cobra/command.go:1039
github.com/containers/prometheus-podman-exporter/cmd.Execute()
        /opt/prometheus/podman-exporter/v1.11.0/cmd/root.go:61 +0x1e
main.main()
        /opt/prometheus/podman-exporter/v1.11.0/main.go:8 +0xf

The VMs are running podman 4.3.1.

I have tried both prometheus-podman-exporter v1.10.1 and v1.11.0. The packages were built using make clean && make binary using the debian golang package provided by bookworm-backports for go 1.21.8.

This happens no matter how many / which containers are running, and even if no containers are running. The segfault also happens independently of the user (even users who have not ever run a podman container), except for root - in which case everything works fine.

The proxmox machines, VMs and containers are for the most part provisioned through ansible, and as such the VMs should be virtually the same, just running different containers.

The issues go away once each user runs podman system reset once.

Apologies if this isn't enough information, but I have spent quite a bit of time trying to figure out what is wrong, and am wondering if you might have any ideas. I am happy to provide any extra information.

arm64 images

Is your feature request related to a problem? Please describe.
It would be really appreciated if there were builds for arm64 architecture.

Describe the solution you'd like
Official arm64 builds (I was not able to get them running by building images myself, using the same Containerfile but with "arm64" as the ARCH variable)

Thanks and best regards

Exporter crashes when attempting to export metrics for a pod with no namespace

Describe the bug
The exporter will crash as soon as the metrics endpoint is accessed when the --collector.pod flag is enabled and a pod has been created without an infrastructure container and namespace.

These are the default configurations used in the podman-compose up and podman-compose systemd commands in the podman compose package.

To Reproduce

Create at least one pod with no infrastructure container.

podman pod create --infra=false --share="" <pod_name>
podman pod ls -ns
POD ID        NAME           STATUS      CREATED       INFRA ID               CGROUP      NAMESPACES  # OF CONTAINERS
5e76f99ca  <pod_with_infra>     Running     44 hours ago  997264653234c603  user.slice          2
1f3fb57f3  <pod_with_no_infra>  Running     5 days ago                          user.slice              1
prometheus-podman-exporter -o -d

Expected behavior
The metrics endpoints are displayed.

Error Logs

prometheus-podman-exporter -o -d
ts=2022-10-20T16:31:34.408Z caller=exporter.go:57 level=info msg="Starting podman-prometheus-exporter" version="(version=1.0.0, branch=, revision=1)"
ts=2022-10-20T16:31:34.408Z caller=handler.go:93 level=info msg="enabled collectors"
ts=2022-10-20T16:31:34.408Z caller=handler.go:104 level=info collector=container
ts=2022-10-20T16:31:34.408Z caller=handler.go:104 level=info collector=pod
ts=2022-10-20T16:31:34.408Z caller=exporter.go:68 level=info msg="Listening on" address=:9882
ts=2022-10-20T16:31:34.409Z caller=tls_config.go:195 level=info msg="TLS is disabled." http2=false
ts=2022-10-20T16:31:51.484Z caller=handler.go:34 level=debug msg="collect query:" filters="unsupported value type"
panic: runtime error: slice bounds out of range [:12] with length 0

goroutine 48 [running]:
github.com/containers/prometheus-podman-exporter/pdcs.Pods()
        /builddir/build/BUILD/prometheus-podman-exporter/pdcs/pod.go:30 +0x396
github.com/containers/prometheus-podman-exporter/collector.(*podCollector).Update(0xc00026b0e0, 0x0)
        /builddir/build/BUILD/prometheus-podman-exporter/collector/pod.go:58 +0x38
github.com/containers/prometheus-podman-exporter/collector.execute({0x14bfd51, 0x0}, {0x16ddc60, 0xc00026b0e0}, 0x0, {0x16ddee0, 0xc000269780})
        /builddir/build/BUILD/prometheus-podman-exporter/collector/collector.go:115 +0x9c
github.com/containers/prometheus-podman-exporter/collector.PodmanCollector.Collect.func1({0x14bfd51, 0x0}, {0x16ddc60, 0xc00026b0e0})
        /builddir/build/BUILD/prometheus-podman-exporter/collector/collector.go:103 +0x3d
created by github.com/containers/prometheus-podman-exporter/collector.PodmanCollector.Collect
        /builddir/build/BUILD/prometheus-podman-exporter/collector/collector.go:102 +0xd5

Desktop (please complete the following information):

  • OS RHEL 9.0
  • Podman Version 4.1.1

Where is the packaged version?

I read the install file, and there's supposed to be a packaged version available in the EPEL repo.
I've enabled the EPEL repo on my RHEL 8 VM and tried installing the podman exporter, but it can't find the package.

[root@vm-local-1 prometheus-podman-exporter-1.11.0]# sudo dnf -y install prometheus-podman-exporter
Last metadata expiration check: 0:14:20 ago on Sat 23 Mar 2024 11:37:35 AM UTC.
No match for argument: prometheus-podman-exporter
Error: Unable to find a match: prometheus-podman-exporte

I also took a look at the repo files itself, but the package isn't present.

Please help to support alpine arm64 as using postmarketos for testing

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is.

Describe the solution you'd like
A clear and concise description of what you want to happen.

Additional context
Add any other context or screenshots about the feature request here.

feat: allow cached scraping of container size

Is your feature request related to a problem? Please describe.
When creating containers one can miss mounting volumes to important locations.

  • This way important data can be written to the container layer instead of a volume or a mount. If the container is deleted (or stopped with the --rm flag), the data is gone with it.
  • The container could also have logs or other data that grows without bound that is not part of volume and may be difficult to resize.

Ideally containers would be of size 0 with all data in volumes but this is rarely the case. This would be a good sanity check.

Describe the solution you'd like
I would like to see tunable cached container size metrics added. Specifically the size attributes from podman ps --size --format=json.

  • This is an expensive metric to collect because containers are usually mutable and their size unpredictable so it would need to be cached.
    • Cache duration should be tunable as this will affect different systems differently.
    • The default cache duration should be once every 24 hours.
    • Once the cache expires serve the old data and start fetching new data?
    • Ideally size collection for each container would be distributed over the course of the tunable period to avoid sending too much IO to the system at the same time.

Additional context

  • It is likely not important for this metric to be collected frequently. I think it will be most helpful with long lived containers.
  • Image and volume sizes do not cover this aspect.

Thank you for considering this.

image names are parsed incorrectly for repositories that have port numbers.

Describe the bug
Incorrect parsing of repository name for images

To Reproduce
Include a port number in repository name

Expected behavior
The port number should be part of the repo name, not part of the tag

Screenshots
Example metric from this exporter:
podman_image_info{id="20091fb4b6c2",repository="registry.test.example.org",tag="5000/example/ipxe"} 1
The registry name is actually registry.test.example.org:5000, the image is example/ipxe, and the tag is not present in the parsed value.

Desktop (please complete the following information):

  • Fedora 36
  • Podman 4.1.1

Additional context
Podman output:

# podman images |grep ipxe
registry.test.example.org:5000/example/ipxe                        20220428            20091fb4b6c2  5 days ago     507 MB

Recurring error in logs "error gathering metrics....was collected before with the same name and label values"

Describe the bug
Every 30 seconds in the logs I get the following error:

Aug 21 08:21:50 lvps podman-exporter_run[51905]: ts=2022-08-21T15:21:50.073Z caller=stdlib.go:105 level=error msg="error gathering metrics: 2 error(s) occurred:\n* [from Gatherer #2] collected metric \"podman_image_size\" { label:<name:\"id\" value:\"9c6f07244728\" > gauge:<value:5.830724e+06 > } was collected before with the same name and label values\n* [from Gatherer #2] collected metric \"podman_image_created_seconds\" { label:<name:\"id\" value:\"9c6f07244728\" > gauge:<value:1.660065593e+09 > } was collected before with the same name and label values"
Aug 21 08:21:50 lvps podman[51871]: ts=2022-08-21T15:21:50.073Z caller=stdlib.go:105 level=error msg="error gathering metrics: 2 error(s) occurred:\n* [from Gatherer #2] collected metric \"podman_image_size\" { label:<name:\"id\" value:\"9c6f07244728\" > gauge:<value:5.830724e+06 > } was collected before with the same name and label values\n* [from Gatherer #2] collected metric \"podman_image_created_seconds\" { label:<name:\"id\" value:\"9c6f07244728\" > gauge:<value:1.660065593e+09 > } was collected before with the same name and label values"

To Reproduce

Run prometheus-podman-exporter with --collector.enable-all. Have two identical images with different tags:

[ansible@lvps ~]$ podman images | grep 9c6f072
docker.io/library/alpine     3.16          9c6f07244728  11 days ago        5.83 MB
docker.io/library/alpine     latest        9c6f07244728  11 days ago        5.83 MB

Removing one of the images makes the error go away.

Expected behavior

I wouldn't expect error messages every 30 seconds in the logs for the above duplicate images. I believe it's not abnormal to have a latest tag that matches a numbered tag for a given image.

Desktop (please complete the following information):

  • OS CentOS Stream 9
  • Podman Version 4.1.1

Thanks!

feature: add status change timestamp

A feature of e.g. podman ps -a is to show the relative time of the last state change (Up 3 minutes ago).

Would an additional gauge with this timestamp be possible?

Right now I can't detect restart loops because the state does not stay exited long enough for Prometheus to notice.

Thanks for this project! :)

Exporter only returns stats about pods from current running user?

Describe the bug
(I'm sorry to call this a "bug"; it's likely just a lack of my understanding over how to use this exporter successfully)

When I run prometheus-podman-exporter, it only exports container-level stats about the containers that are being run as the same user that I'm running the exporter as. Therefore, I don't know how to deploy prometheus-podman-exporter as a server-level monitoring tool -- it seems that I would need to run an instance of it for every user that could run pods?

To Reproduce
Steps to reproduce the behavior:

  • Run prometheus-podman-exporter as a user account (ie. non-root user)
  • Observe metrics like podman_container_cpu_seconds_total only reports id=* for containers running as the same user
  • Repeat, running as root -- even as a superuser, only root's containers are displayed

Expected behavior

I expected that running as root would return metrics from all pods. Alternatively, I expected the documentation in the repo to indicate the right way, or the expected runtime model, for prometheus-podman-exporter to be used effectively for monitoring at a server-level.

Screenshots
N/A; can provide if needed.

Desktop (please complete the following information):
NixOS 22.11

Additional context
I'm currently working on packaging prometheus-podman-exporter for NixOS. Building the software was straight-forward, but determining how to set up a systemd unit to run it has left me confused because of this. While it could run as a root user, which isn't typically how I'd want it to run anyway, it also wouldn't collect info on all the containers... leaving me a little confused on the recommended packaging approach.

make `--collector.image` option use less IO and time by not calculating image size on every scrape

Is your feature request related to a problem? Please describe.
When the --collector.image option is used the scrape time increases significantly along with I/O. For me hundreds of MBs of reads happen on every scrape.

  • I am not certain if containers/storage#1060 is required for this to be done.
    • here uidmap is listed as part of the problem but we have proper uidmap support now and it is still a problem.
  • containers/podman#13810 - tried to make size calculation optional in the podman repo for podman image ls but the change was reverted before it shipped.

Describe the solution you'd like
Do size calculation only when new images are added or detected. (I presume they should not change size without the ID changing).

Additional context
Using podman version 4.4.1

Exclude infra containers?

Is your feature request related to a problem? Please describe.
More of a question, but is it ever useful to monitor infra containers? It's easy enough to filter those but if they never do anything useful then it could be nice to prune them here.

[Question] Is there anyway to make this work with podman v3.0.1?

Hello, I'm using Podman on Debian 11, and Podman v4.x hasn't been added to the Debian repository yet; however, this project satisfies all of my container monitoring needs. I know you currently only support Podman v4.x, but since using the image version requires enabling and mounting podman.sock anyway, would this exporter work with Podman v3.0.1 in container mode with podman.sock enabled and mounted?

feat: enhance metrics ( include more labels )

Is your feature request related to a problem? Please describe.
For example, New Relic does not support joining two metrics with the same label ATOW. This makes it very hard to understand graphs with only the container ID in the legend.

Describe the solution you'd like
Would be superb to enhance all metrics with the same fields as for the podman_container_info metric. This should also include feature request #34.

--enhance-metrics=true

Additional context
Add any other context or screenshots about the feature request here.

Support Ubuntu Jammy 22.04 / 22.10

Is your feature request related to a problem? Please describe.
Currently, Ubuntu ships podman 3.4.x, and the podman exporter errors out with:

❯ podman run -it -e CONTAINER_HOST=unix:///run/podman/podman.sock -v $XDG_RUNTIME_DIR/podman/podman.sock:/run/podman/podman.sock --userns=keep-id --security-opt label=disable -u 1000 quay.io/navidys/prometheus-podman-exporter
2022/11/24 10:18:27 unable to connect to Podman socket: server API version is too old. Client "4.0.0" server "3.4.4"

But the API is accessible and works well; maybe some parts might not work with podman v3.4, e.g.:
curl --unix-socket /run/user/1000/podman/podman.sock http://d/v4.3.1/libpod/info

Describe the solution you'd like
Allow to use podman-exporter even with ubuntu 22.04 / 22.10

Additional context
Add any other context or screenshots about the feature request here.

podman_container_mem_usage_bytes metrics disappeared

Describe the bug
I used to query podman_container_mem_usage_bytes in Prometheus, but now noticed it's not exposed anymore. A manual crawl of the /metrics endpoint confirms this.

To Reproduce
Unfortunately I can't really say what has changed. At first I assumed a permission problem on the podman socket and created a separate one only for the prometheus-podman-exporter container:

/ $ id
uid=65534(nobody) gid=65534(nobody)
/ $ ls -la /run/podman/podman.sock 
srw-------    1 nobody   nobody           0 Dec  5 17:35 /run/podman/podman.sock
/ $ 

logs (last 7 lines repeat):

ts=2022-12-05T17:35:47.710Z caller=exporter.go:63 level=info msg="Starting podman-prometheus-exporter" version="(version=1.3.0, branch=main, revision=dev.1)"
ts=2022-12-05T17:35:47.711Z caller=handler.go:93 level=info msg="enabled collectors"
ts=2022-12-05T17:35:47.711Z caller=handler.go:104 level=info collector=container
ts=2022-12-05T17:35:47.711Z caller=handler.go:104 level=info collector=image
ts=2022-12-05T17:35:47.711Z caller=handler.go:104 level=info collector=network
ts=2022-12-05T17:35:47.711Z caller=handler.go:104 level=info collector=pod
ts=2022-12-05T17:35:47.711Z caller=handler.go:104 level=info collector=system
ts=2022-12-05T17:35:47.711Z caller=handler.go:104 level=info collector=volume
ts=2022-12-05T17:35:47.711Z caller=exporter.go:74 level=info msg="Listening on" address=127.0.0.1:9882
ts=2022-12-05T17:35:47.712Z caller=tls_config.go:232 level=info msg="Listening on" address=127.0.0.1:9882
ts=2022-12-05T17:35:47.712Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=127.0.0.1:9882
ts=2022-12-05T17:35:58.307Z caller=handler.go:34 level=debug msg="collect query:" filters="unsupported value type"
ts=2022-12-05T17:35:58.312Z caller=collector.go:135 level=debug msg="collector succeeded" name=network duration_seconds=0.002719641
ts=2022-12-05T17:35:58.323Z caller=collector.go:135 level=debug msg="collector succeeded" name=pod duration_seconds=0.013724029
ts=2022-12-05T17:35:58.349Z caller=collector.go:135 level=debug msg="collector succeeded" name=volume duration_seconds=0.040115157
ts=2022-12-05T17:35:58.363Z caller=collector.go:135 level=debug msg="collector succeeded" name=container duration_seconds=0.053379088
ts=2022-12-05T17:35:58.511Z caller=collector.go:135 level=debug msg="collector succeeded" name=system duration_seconds=0.202251355
ts=2022-12-05T17:35:58.558Z caller=collector.go:135 level=debug msg="collector succeeded" name=image duration_seconds=0.24884938

testing the socket itself seems OK

$ echo -e "GET /containers/json HTTP/1.0\r\n" | podman unshare nc -U ${SOCKET} 
HTTP/1.0 200 OK
Api-Version: 1.41
Content-Type: application/json
Libpod-Api-Version: 4.3.1
Server: Libpod/4.3.1 (linux)
X-Reference-Id: 0xc0003c8000
Date: Mon, 05 Dec 2022 18:13:37 GMT

[{"Id":"1ef8fdf2102ba787e29125f3def643e6a4c5da4b22266daccffd2b24423b2549","Names" [...]

Expected behavior
have podman_container_mem_usage_bytes available in metrics exposed by prometheus-podman-exporter

environment

  • CentOS 9 Stream
  • Podman Version 4.3.1
  • exporter run in a podman container: /bin/podman_exporter --collector.enable-all --collector.store_labels --debug --web.listen-address 127.0.0.1:9882

Additional context
Any help in debugging this is welcome

feat: Add image_id to podman_container_info

Is your feature request related to a problem? Please describe.
I'm trying to have the leanest dashboard possible. I gather info about the running containers, but I'd love to be able to show container image age. So far, to achieve this, I have to create a separate panel to display container image information.

Describe the solution you'd like
If podman_container_info could contain the used image ID, I would be able to create a query that displays container image info in the same panel.

Additional context
Here are the two panels I'd love to combine: a container info panel and an image info panel (screenshots omitted).

Thank you very much for taking this request into consideration

Unable to get this working with podman.sock

Struggling to get this running just on localhost to test it with the user podman.sock. I'm not sure what to use for CONTAINER_HOST, so I assumed the unix:// prefix and bind-mounted the podman socket. Am I missing something obvious here?

$ podman run -it --rm -e CONTAINER_HOST=unix:///run/podman/podman.sock -v /run/user/1000/podman/podman.sock:/run/podman/podman.sock:Z quay.io/navidys/prometheus-podman-exporter:latest
2022/07/21 21:14:10 unable to connect to Podman socket: Get "http://d/v4.1.1/libpod/_ping": dial unix /run/podman/podman.sock: connect: permission denied

Thanks!
(Fedora 36, podman 4.1.1)

feat: Add support for HTTPS & authentication

Is your feature request related to a problem? Please describe.
Currently, the only way to add encryption or authentication to the exporter is to introduce a reverse proxy.
I would like to do this without relying on yet an additional tool.

Describe the solution you'd like
Add native support for HTTPS & authentication, using available features from the exporter_toolkit, which is already a dependency for prometheus-podman-exporter.

Additional context
n/a.

feat: export also container/image labels

Is your feature request related to a problem? Please describe.
In an environment where the only determining field is a label, it is hard to visualize metrics from prometheus-podman-exporter, because it does not export labels.

Describe the solution you'd like
Would be superb to have an option to export also container/image labels, like cAdvisor does:

-store_container_labels=false -whitelisted_container_label="com.docker.compose.config-hash,com.docker.compose.container-number,com.docker.compose.oneoff,com.docker.compose.project,com.docker.compose.service"

Additional context
No

bug: podman_images_* series include metrics for non-existing images

Hello,

I've created a dashboard to monitor the number of images on the podman host and noticed that the metrics don't line up with the host.
There are a lot of <none> images that do not show up in the CLI when running podman images.

Where are these coming from? (I've tried to run podman system prune, but that made no difference, as there were no dangling images on the system.)

root@podman1:~>podman system prune
WARNING! This command removes:
        - all stopped containers
        - all networks not used by at least one container
        - all dangling images
        - all dangling build cache

Are you sure you want to continue? [y/N] y
Total reclaimed space: 0B
root@podman1:~>

podman images -> 14 items

root@podman1:~>podman images
REPOSITORY                                                                TAG               IMAGE ID      CREATED      SIZE
refinst-docker-dev-local.artifactory.io/grafana-agent1           latest            eabd977236ca  2 days ago   457 MB
refinst-docker-dev-local.artifactory.io/grafana_agent_latest     latest            ea71004b1aaf  2 days ago   457 MB
refinst-docker-dev-local.artifactory.io/vmware-exporter          latest            5e883f1bedfc  5 days ago   937 MB
refinst-docker-dev-local.artifactory.io/veeam-em-exporter        latest            e09053d9feab  2 weeks ago  883 MB
refinst-docker-dev-local.artifactory.io/hpilo-exporter           latest            c81f066fce6d  2 weeks ago  882 MB
refinst-docker-dev-local.artifactory.io/python39                 latest            c801b1ebfa4b  2 weeks ago  880 MB
refinst-docker-dev-local.artifactory.io/web_discard              latest            516038f49add  2 weeks ago  404 MB
refinst-docker-dev-local.artifactory.io/netapp-harvest-exporter  latest            50b147ce0c4b  2 weeks ago  316 MB
refinst-docker-dev-local.artifactory.io/web_dashboard            latest            24dd88e557b0  5 weeks ago  319 MB
refinst-docker-dev-local.artifactory.io/web_tq                   latest            9d2260fd7289  6 weeks ago  413 MB
refinst-docker-dev-local.artifactory.io/web_gapcheck             latest            7a64a7f08f58  6 weeks ago  413 MB
refinst-docker-dev-local.artifactory.io/web_rds                  latest            3a07e63ee435  7 weeks ago  414 MB
refinst-docker-dev-local.artifactory.io/web_rdf                  latest            8a5e4434c4b0  8 weeks ago  416 MB
localhost/podman-pause                                           4.6.1-1692961697  4ce25834cda0  4 weeks ago  810 kB
root@podman1:~>

podman_image_info -> 38 items

root@podman1:~>curl -s  http://127.0.0.1:9882/metrics | grep -i "podman_image_info"
# HELP podman_image_info Image information.
# TYPE podman_image_info gauge
podman_image_info{digest="sha256:0a6ed5c7ac19ac85ea3408247032e36590c97fcf9b92202131dde940994cbc9e",id="2905c0f74e2b",parent_id="ec7bf4cae86f",repository="<none>",tag="<none>"} 1
podman_image_info{digest="sha256:0aca56789602469679f03cd9a74abcc5cfb23e505f6a9dade9dac872fc7bf9d3",id="4ce25834cda0",parent_id="",repository="localhost/podman-pause",tag="4.6.1-1692961697"} 1
podman_image_info{digest="sha256:0c033e7a4a25b4bb3857aae3a9985eb6cc51674fa6841c5d6ab6e520b744037c",id="6f7fa9a134b1",parent_id="897420c14ff9",repository="<none>",tag="<none>"} 1
podman_image_info{digest="sha256:113446045e5beb631735d4b78606865b417d43aec3f5d0e10cce7138da10fe20",id="7a64a7f08f58",parent_id="",repository="refinst-docker-dev-local.artifactory.io/web_gapcheck",tag="latest"} 1
podman_image_info{digest="sha256:1673345dc3f19269ae8c87c337e6d0bbfc42ab3596abbfc2635fc5a33d8a1987",id="9a2807da1e7b",parent_id="ef5d4631a596",repository="<none>",tag="<none>"} 1
podman_image_info{digest="sha256:33442ed707926c7a5f95cee14ef921c79e01f6f0b6faa8c52821b4f89b9c1040",id="6d03d1c6deee",parent_id="ed57561df995",repository="<none>",tag="<none>"} 1
podman_image_info{digest="sha256:33c34a64e8b27039fbc546580395ef509f8943b55a2d4382c9efc75654bc2e1c",id="ed57561df995",parent_id="a207e1233108",repository="<none>",tag="<none>"} 1
podman_image_info{digest="sha256:3b76c4fa1298ce14181b89e09f336db94241698ef03571bda82618fd5ca33794",id="24dd88e557b0",parent_id="",repository="refinst-docker-dev-local.artifactory.io/web_dashboard",tag="latest"} 1
podman_image_info{digest="sha256:3fd696005267537e7901cea0e49d9498cea6889fc0ee17e5ea90be418d6f09b8",id="ec7bf4cae86f",parent_id="9a2807da1e7b",repository="<none>",tag="<none>"} 1
podman_image_info{digest="sha256:488e9d0d8eed2abacbc8e9aea9355e87e65f6a0c51acb387cf8994249202d741",id="a207e1233108",parent_id="",repository="<none>",tag="<none>"} 1
podman_image_info{digest="sha256:48bd1c5e531374e5763509cc21d0973ace66b43b0278a4f32645eb56264ee22c",id="8a5e4434c4b0",parent_id="",repository="refinst-docker-dev-local.artifactory.io/web_rdf",tag="latest"} 1
podman_image_info{digest="sha256:4fbe71d302dfcdd052541087701895dd0897cc05fa5f94d112efdad50a138021",id="50b147ce0c4b",parent_id="6f29191cc431",repository="refinst-docker-dev-local.artifactory.io/netapp-harvest-exporter",tag="latest"} 1
podman_image_info{digest="sha256:656bc3649d5854472637fdf47a46c56234b8bb11ad6ff2296349a61b360ce0a3",id="ea71004b1aaf",parent_id="",repository="refinst-docker-dev-local.artifactory.io/grafana_agent_latest",tag="latest"} 1
podman_image_info{digest="sha256:680500d57432c0d0efb312f7b4419b74e38db15ce54a85677beec73107d17432",id="516038f49add",parent_id="06e50700f88a",repository="refinst-docker-dev-local.artifactory.io/web_discard",tag="latest"} 1
podman_image_info{digest="sha256:7016d3a807af13f15cf6d06e06e7ca0c6457eabe795ad48ed8a074a7e438142f",id="87131e84e92f",parent_id="32c82d478acf",repository="<none>",tag="<none>"} 1
podman_image_info{digest="sha256:72fbd7c8d40fba25881926da0320c928bc6d96fa2e4ecf82c55c2e98948049e5",id="75547269d5dd",parent_id="c801b1ebfa4b",repository="<none>",tag="<none>"} 1
podman_image_info{digest="sha256:7435f71b84f40e8e11a79a00ef6ddf5b3a87fac4831c53b827d404e77eccf9f7",id="3a07e63ee435",parent_id="",repository="refinst-docker-dev-local.artifactory.io/web_rds",tag="latest"} 1
podman_image_info{digest="sha256:84fb87a5fdf985050f71d9bdea1478ee38216efc5ce575a543ec33fa54a82e48",id="a1534bf3bdb4",parent_id="2905c0f74e2b",repository="<none>",tag="<none>"} 1
podman_image_info{digest="sha256:8ea031fa8ae8966d5c98da5befd753adde29e6d456db4159302a8829da9a172f",id="32c82d478acf",parent_id="b4dcaebd7ad6",repository="<none>",tag="<none>"} 1
podman_image_info{digest="sha256:8ffa2a6e92204235f7c352c84101c1cba46cbbce6441c99d2abbf8cbd289c031",id="ef5d4631a596",parent_id="c801b1ebfa4b",repository="<none>",tag="<none>"} 1
podman_image_info{digest="sha256:92902126a72240481acc03525b8ead478a5a73750c29bf866d46f2e79a6b83af",id="c79f51b7c547",parent_id="5a7233b9a946",repository="<none>",tag="<none>"} 1
podman_image_info{digest="sha256:95bfa65e3fc9b2537d59f701edd5dc02db0fe4238e2ff2b68ff7bfe4889a516c",id="513fe78ae290",parent_id="ea71004b1aaf",repository="<none>",tag="<none>"} 1
podman_image_info{digest="sha256:986d1e26e6aed7df4ede6b236cc7fb6f3ac6f83bae752cfd13a0315f89fb08c1",id="c801b1ebfa4b",parent_id="",repository="refinst-docker-dev-local.artifactory.io/python39",tag="latest"} 1
podman_image_info{digest="sha256:996de721756a9656eda92bf31adadeedfec4246ecb4747accf301cba97e43806",id="d87698835e56",parent_id="c79f51b7c547",repository="<none>",tag="<none>"} 1
podman_image_info{digest="sha256:a08b74bfd4dee20fede7459800d26de0e65bf9ea6ad328083e600e3b1731ff9b",id="6f29191cc431",parent_id="87131e84e92f",repository="<none>",tag="<none>"} 1
podman_image_info{digest="sha256:a5f22f326e8c3376014ccbd0a3e81edda656f9e603e0367b25a40470e3e1e4d2",id="5a7233b9a946",parent_id="5d2e027dce4f",repository="<none>",tag="<none>"} 1
podman_image_info{digest="sha256:aad54800aa00ff2837730b51be85d00da21edb40dcd0f5e41c5a6d4334fffa65",id="55cb705da813",parent_id="c801b1ebfa4b",repository="<none>",tag="<none>"} 1
podman_image_info{digest="sha256:aed453013c0f102e953bacebb05dfa3d1355947f572665813834d439b5d67dc3",id="9d2260fd7289",parent_id="",repository="refinst-docker-dev-local.artifactory.io/web_tq",tag="latest"} 1
podman_image_info{digest="sha256:bce8cc01a18804d07163078d78febfe89144c3a6501900ad04e2a79473ac0b1a",id="897420c14ff9",parent_id="75547269d5dd",repository="<none>",tag="<none>"} 1
podman_image_info{digest="sha256:c364e2faffae61d860b23651baad3d73586c58a779387305cfe6e2af1c609f1d",id="06e50700f88a",parent_id="6d03d1c6deee",repository="<none>",tag="<none>"} 1
podman_image_info{digest="sha256:d0192458dbbbf72f0fe9f59cb0c98fc46cc3573918e866a0331d3177429a448f",id="c81f066fce6d",parent_id="bf4fc3d5cec8",repository="refinst-docker-dev-local.artifactory.io/hpilo-exporter",tag="latest"} 1
podman_image_info{digest="sha256:d601b23ecef1ceafefcfe2f122858aebb7d2873586d47fc9afd0fe5af67c80fd",id="bf4fc3d5cec8",parent_id="a1534bf3bdb4",repository="<none>",tag="<none>"} 1
podman_image_info{digest="sha256:e03b62dae1dc0163b6b9c90e1defe4d3f4a1808306ca133d8bdb8ee5aad58c9f",id="b4dcaebd7ad6",parent_id="4399df180324",repository="<none>",tag="<none>"} 1
podman_image_info{digest="sha256:e05ca2be96483cfce3ebc425ce478d24a4da427b065d744d1e9ff5a038861d46",id="5d2e027dce4f",parent_id="55cb705da813",repository="<none>",tag="<none>"} 1
podman_image_info{digest="sha256:e7f4b3705e8575ddd73bb326cb756686b1e874ba2257bb01178ba9d1d29a69ce",id="e09053d9feab",parent_id="6f7fa9a134b1",repository="refinst-docker-dev-local.artifactory.io/veeam-em-exporter",tag="latest"} 1
podman_image_info{digest="sha256:f80f755d353c7367cabf625b1b7723bfe8f850dbab3f72c8ee3b07d037ad8c53",id="4399df180324",parent_id="",repository="<none>",tag="<none>"} 1
podman_image_info{digest="sha256:f81453457f17aff712282c95d5c6ddb6407f487ae94f5c5c4b281291b6f2e055",id="5e883f1bedfc",parent_id="d87698835e56",repository="refinst-docker-dev-local.artifactory.io/vmware-exporter",tag="latest"} 1
podman_image_info{digest="sha256:fff48b6cab26df80408134707a6d8fcf10227c5dc6cf3c59755201f383985b4e",id="eabd977236ca",parent_id="513fe78ae290",repository="refinst-docker-dev-local.artifactory.io/grafana-agent1",tag="latest"} 1
root@podman1:~>

Add metric for volume storage size

I'd like to be able to monitor the amount of storage currently being utilized by my volumes. Looks like this data should be available:
Command line: podman system df -v
API: https://docs.podman.io/en/latest/_static/api.html#tag/system/operation/SystemDataUsageLibpod

SystemDf(ctx context.Context, options SystemDfOptions) (*SystemDfReport, error)

bug: --collector.store_labels causing empty collection

Describe the bug
Upgraded to version 1.8.0 (on Red Hat 9.3), and when I launch the exporter with the new flag --collector.store_labels, there aren't any metrics exported.

● prometheus-podman-exporter.service - Prometheus exporter for podman (v4) machine
     Loaded: loaded (/etc/systemd/system/prometheus-podman-exporter.service; enabled; preset: disabled)
     Active: active (running) since Tue 2024-02-27 13:37:26 UTC; 14s ago
   Main PID: 139424 (prometheus-podm)
      Tasks: 14 (limit: 48830)
     Memory: 21.9M
        CPU: 76ms
     CGroup: /system.slice/prometheus-podman-exporter.service
             └─139424 /usr/bin/prometheus-podman-exporter --collector.store_labels --collector.enable-all --web.listen-address 127.0.0.1:9882

Feb 27 13:37:26 vlado1 prometheus-podman-exporter[139424]: ts=2024-02-27T13:37:26.309Z caller=handler.go:93 level=info msg="enabled collectors"
Feb 27 13:37:26 vlado1 prometheus-podman-exporter[139424]: ts=2024-02-27T13:37:26.309Z caller=handler.go:104 level=info collector=container
Feb 27 13:37:26 vlado1 prometheus-podman-exporter[139424]: ts=2024-02-27T13:37:26.309Z caller=handler.go:104 level=info collector=image
Feb 27 13:37:26 vlado1 prometheus-podman-exporter[139424]: ts=2024-02-27T13:37:26.309Z caller=handler.go:104 level=info collector=network
Feb 27 13:37:26 vlado1 prometheus-podman-exporter[139424]: ts=2024-02-27T13:37:26.309Z caller=handler.go:104 level=info collector=pod
Feb 27 13:37:26 vlado1 prometheus-podman-exporter[139424]: ts=2024-02-27T13:37:26.309Z caller=handler.go:104 level=info collector=system
Feb 27 13:37:26 vlado1 prometheus-podman-exporter[139424]: ts=2024-02-27T13:37:26.309Z caller=handler.go:104 level=info collector=volume
Feb 27 13:37:26 vlado1 prometheus-podman-exporter[139424]: ts=2024-02-27T13:37:26.328Z caller=exporter.go:82 level=info msg="Listening on" address=127.0.0.1:9882
Feb 27 13:37:26 vlado1 prometheus-podman-exporter[139424]: ts=2024-02-27T13:37:26.328Z caller=tls_config.go:313 level=info msg="Listening on" address=127.0.0.1:9882
Feb 27 13:37:26 vlado1 prometheus-podman-exporter[139424]: ts=2024-02-27T13:37:26.329Z caller=tls_config.go:316 level=info msg="TLS is disabled." http2=false address=127.0.0.1:9882

results in

root@vlado1:/>curl http://localhost:9882/metrics
curl: (52) Empty reply from server
root@vlado1:/>

when I remove the new flag --collector.store_labels

root@vlado1:/>vim /etc/systemd/system/prometheus-podman-exporter.service
root@vlado1:/>systemctl daemon-reload
root@vlado1:/>systemctl restart prometheus-podman-exporter
root@vlado1:/>systemctl status prometheus-podman-exporter
● prometheus-podman-exporter.service - Prometheus exporter for podman (v4) machine
     Loaded: loaded (/etc/systemd/system/prometheus-podman-exporter.service; enabled; preset: disabled)
     Active: active (running) since Tue 2024-02-27 13:39:02 UTC; 7s ago
   Main PID: 139658 (prometheus-podm)
      Tasks: 14 (limit: 48830)
     Memory: 22.0M
        CPU: 51ms
     CGroup: /system.slice/prometheus-podman-exporter.service
             └─139658 /usr/bin/prometheus-podman-exporter --collector.enable-all --web.listen-address 127.0.0.1:9882

Feb 27 13:39:02 vlado1 prometheus-podman-exporter[139658]: ts=2024-02-27T13:39:02.892Z caller=handler.go:93 level=info msg="enabled collectors"
Feb 27 13:39:02 vlado1 prometheus-podman-exporter[139658]: ts=2024-02-27T13:39:02.892Z caller=handler.go:104 level=info collector=container
Feb 27 13:39:02 vlado1 prometheus-podman-exporter[139658]: ts=2024-02-27T13:39:02.892Z caller=handler.go:104 level=info collector=image
Feb 27 13:39:02 vlado1 prometheus-podman-exporter[139658]: ts=2024-02-27T13:39:02.892Z caller=handler.go:104 level=info collector=network
Feb 27 13:39:02 vlado1 prometheus-podman-exporter[139658]: ts=2024-02-27T13:39:02.892Z caller=handler.go:104 level=info collector=pod
Feb 27 13:39:02 vlado1 prometheus-podman-exporter[139658]: ts=2024-02-27T13:39:02.892Z caller=handler.go:104 level=info collector=system
Feb 27 13:39:02 vlado1 prometheus-podman-exporter[139658]: ts=2024-02-27T13:39:02.892Z caller=handler.go:104 level=info collector=volume
Feb 27 13:39:02 vlado1 prometheus-podman-exporter[139658]: ts=2024-02-27T13:39:02.908Z caller=exporter.go:82 level=info msg="Listening on" address=127.0.0.1:9882
Feb 27 13:39:02 vlado1 prometheus-podman-exporter[139658]: ts=2024-02-27T13:39:02.908Z caller=tls_config.go:313 level=info msg="Listening on" address=127.0.0.1:9882
Feb 27 13:39:02 vlado1 prometheus-podman-exporter[139658]: ts=2024-02-27T13:39:02.908Z caller=tls_config.go:316 level=info msg="TLS is disabled." http2=false address=127.0.0.1:9882
root@vlado1:/>curl http://localhost:9882/metrics
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 5.7077e-05
go_gc_duration_seconds{quantile="0.25"} 5.87e-05
go_gc_duration_seconds{quantile="0.5"} 0.000106128
go_gc_duration_seconds{quantile="0.75"} 0.00020925
go_gc_duration_seconds{quantile="1"} 0.00020925
go_gc_duration_seconds_sum 0.000431155
...

Pod names are not displayed in all metrics

Hello,

We need to have the pod name appear in all metrics. Currently only one metric contains the name of the pod, and that is 'podman_container_info'.

It is vital to have the pod name in all other metrics so that the pod name can be displayed in the chart legend in Grafana!

That would avoid unnecessary joins of the form:

join(podman_container_mem_limit_bytes{id=~"$id"}, 1s, podman_container_info{id=~"$id"})
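
For reference, the join() expression above is pseudocode rather than valid PromQL; in PromQL this kind of join is usually written with group_left. The sketch below assumes podman_container_info exposes a pod_name label in the deployed version:

podman_container_mem_limit_bytes{id=~"$id"} * on(id) group_left(pod_name) podman_container_info{id=~"$id"}

Having pod_name directly on podman_container_mem_limit_bytes would make this extra step unnecessary.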

Thank you!
Georgios

Generate multi-arch Container images

Is your feature request related to a problem? Please describe.
The published container image is only built for the amd64 arch. It can't run on other archs, like a Raspberry Pi 4 with arm64.

Describe the solution you'd like
Build and publish the container image for multiple archs.

Additional context
I might work on this, but I don't see any existing pipeline to build and publish the container image.
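
As a rough sketch only (not the project's actual release pipeline), a multi-arch image can be produced with podman's --platform and --manifest options; the registry path below is a placeholder:

$ podman build --platform linux/amd64,linux/arm64 --manifest prometheus-podman-exporter .
$ podman manifest push --all prometheus-podman-exporter docker://quay.io/<namespace>/prometheus-podman-exporter:latest

On a single amd64 host, cross-building the arm64 variant typically also requires qemu-user-static (binfmt) support to be installed.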

Exporter does not list all rootless containers running under different users

I have RHEL8 servers with podman, and I have several rootless containers running under different users (this is a security requirement in my env).

The podman-exporter is currently running in a container under the root account, and it can only list metrics related to the containers running under root.

Is there a way to scrape the metrics from all rootless containers running under different users?
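
One possible workaround (not a feature of the exporter itself) is to run a separate exporter instance for each user, each as a systemd user service listening on its own port, and scrape them all from Prometheus. A minimal sketch, assuming a hypothetical per-user prometheus-podman-exporter.service unit exists for each account:

$ loginctl enable-linger appuser1                            # keep appuser1's user manager running without a login session
$ systemctl --user enable --now prometheus-podman-exporter   # run as appuser1
# each per-user unit passes its own port, for example:
#   ExecStart=/usr/bin/prometheus-podman-exporter --web.listen-address 127.0.0.1:9883

Whether this fits depends on the security requirements, since each instance only sees the containers of the user it runs as.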

bug: --collector.store_labels does not work - even with 1.90 version

Describe the bug
I'm expecting, with version 1.90, to get the container name in the container metrics.

To Reproduce
Install RHEL 9.3, run a container, install podman_exporter 1.90.

Service is up and running

● prometheus-podman-exporter.service - Prometheus exporter for podman (v4) machine
     Loaded: loaded (/etc/systemd/system/prometheus-podman-exporter.service; enabled; preset: disabled)
     Active: active (running) since Tue 2024-03-12 15:50:44 UTC; 2h 26min ago
   Main PID: 861 (prometheus-podm)
      Tasks: 18 (limit: 100374)
     Memory: 161.2M
        CPU: 15.711s
     CGroup: /system.slice/prometheus-podman-exporter.service
             └─861 /usr/bin/prometheus-podman-exporter --collector.store_labels --collector.enable-all --web.listen-address 127.0.0.1:9882

Mar 12 15:50:44 podman1 prometheus-podman-exporter[861]: ts=2024-03-12T15:50:44.651Z caller=handler.go:104 level=info collector=container
Mar 12 15:50:44 podman1 prometheus-podman-exporter[861]: ts=2024-03-12T15:50:44.651Z caller=handler.go:104 level=info collector=image
Mar 12 15:50:44 podman1 prometheus-podman-exporter[861]: ts=2024-03-12T15:50:44.651Z caller=handler.go:104 level=info collector=network
Mar 12 15:50:44 podman1 prometheus-podman-exporter[861]: ts=2024-03-12T15:50:44.651Z caller=handler.go:104 level=info collector=pod
Mar 12 15:50:44 podman1 prometheus-podman-exporter[861]: ts=2024-03-12T15:50:44.651Z caller=handler.go:104 level=info collector=system
Mar 12 15:50:44 podman1 prometheus-podman-exporter[861]: ts=2024-03-12T15:50:44.651Z caller=handler.go:104 level=info collector=volume
Mar 12 15:50:44 podman1 podman[861]: 2024-03-12 15:50:44.824891671 +0000 UTC m=+0.418275272 system refresh
Mar 12 15:50:45 podman1 prometheus-podman-exporter[861]: ts=2024-03-12T15:50:45.132Z caller=exporter.go:82 level=info msg="Listening on" address=127.0.0.1:9882
Mar 12 15:50:45 podman1 prometheus-podman-exporter[861]: ts=2024-03-12T15:50:45.132Z caller=tls_config.go:313 level=info msg="Listening on" address=127.0.0.1:9882
Mar 12 15:50:45 podman1 prometheus-podman-exporter[861]: ts=2024-03-12T15:50:45.133Z caller=tls_config.go:316 level=info msg="TLS is disabled." http2=false address=127.0.0.1:9882
root@podman1:~>curl -s  http://127.0.0.1:9882/metrics | grep podman_container_mem_usage_bytes
# HELP podman_container_mem_usage_bytes Container memory usage.
# TYPE podman_container_mem_usage_bytes gauge
podman_container_mem_usage_bytes{id="6ac5cbf017e9",pod_id="",pod_name=""} 9.555968e+06
podman_container_mem_usage_bytes{id="751fd678fd81",pod_id="",pod_name=""} 4.0337408e+07
podman_container_mem_usage_bytes{id="77fded31009b",pod_id="",pod_name=""} 3.8244352e+07
podman_container_mem_usage_bytes{id="872c35951a71",pod_id="",pod_name=""} 1.10112768e+08
podman_container_mem_usage_bytes{id="9c4b67edce43",pod_id="a02f37390a33",pod_name="grafana-agent"} 2.4784896e+07
podman_container_mem_usage_bytes{id="ad411a795efc",pod_id="a02f37390a33",pod_name="grafana-agent"} 2.5776128e+07
podman_container_mem_usage_bytes{id="b51846f7b274",pod_id="",pod_name=""} 1.9447808e+07
podman_container_mem_usage_bytes{id="c3f7409463e1",pod_id="",pod_name=""} 2.9732864e+07
podman_container_mem_usage_bytes{id="d769648ee679",pod_id="a02f37390a33",pod_name="grafana-agent"} 5.9138048e+07
podman_container_mem_usage_bytes{id="e96d06c68e45",pod_id="a02f37390a33",pod_name="grafana-agent"} 430080
podman_container_mem_usage_bytes{id="ef392d0f4f36",pod_id="a02f37390a33",pod_name="grafana-agent"} 57344
podman_container_mem_usage_bytes{id="f220eaa7b913",pod_id="a02f37390a33",pod_name="grafana-agent"} 2.07048704e+08

Expected behavior
I expect to see the container name in all container metrics.
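
For illustration only, the expected output would look roughly like the line below, where the name label is hypothetical (taken from the container's name) and not something the current output shows:

podman_container_mem_usage_bytes{id="6ac5cbf017e9",name="<container-name>",pod_id="",pod_name=""} 9.555968e+06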

Screenshots
If applicable, add screenshots to help explain your problem.

Desktop (please complete the following information):

root@podman1:~>cat /etc/redhat-release
Red Hat Enterprise Linux release 9.3 (Plow)
root@podman1:~>
root@podman1:~>podman -v
podman version 4.6.1

feat: add a metric representing a container's health

Is your feature request related to a problem? Please describe.
I would like to be able to query a container's health in addition to its state.

Describe the solution you'd like
I propose to add a new gauge metric named podman_container_health. The metric would map the values defined in podman/libpod/define/healthchecks.go (healthy, unhealthy, and starting) to integers, and the help message would map those values back to their intended meaning (similar to the way podman_container_state is defined).
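
A purely hypothetical sketch of what the exposition could look like; the integer mapping below is an assumption for illustration, not a decided design:

# HELP podman_container_health Container health (0=healthy, 1=unhealthy, 2=starting).
# TYPE podman_container_health gauge
podman_container_health{id="9c4b67edce43",pod_id="a02f37390a33",pod_name="grafana-agent"} 0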

Additional context
n/a

feat: enhance metrics (include more labels)

Opened as a feature request from #35.

Is your feature request related to a problem? Please describe.
For example, New Relic does not support joining two metrics on the same label at the time of writing. This makes it very hard to understand graphs with only a container-id legend.

Describe the solution you'd like
It would be superb to enhance all metrics with the same fields as the podman_container_info metric. This should also include feature request #34.

--enhance-metrics=true
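
As an illustration of the request (hypothetical output, not what the exporter emits today), an image size metric enhanced with the podman_image_info fields might look like the line below; the size value is made up, and only the id/repository/tag labels are taken from the podman_image_info example earlier on this page:

podman_image_size{id="c81f066fce6d",repository="refinst-docker-dev-local.artifactory.io/hpilo-exporter",tag="latest"} 2.1474836e+08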

Additional context
Add any other context or screenshots about the feature request here.

noob questions (sorry): How do you filter podman_container_state on value?

Again, sorry for the noob question here.

So I just want to do this:

podman_container_state=2

so that it only shows me the running containers.

I've looked at examples of other queries, like CPU times, and they just state the metric name and then the value, like

cpu_memory_time>300

but I can't get it working with podman_container_state.
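
In PromQL, filtering a metric by its value uses a comparison operator rather than =; assuming 2 is the value for running containers (as in the question above), the query would be:

podman_container_state == 2

or, to keep a 0/1 result for every container instead of dropping the non-matching ones:

podman_container_state == bool 2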
