
nifi-prometheus-reporter's Introduction

Nifi Prometheus Reporter

A reporting task for Nifi that sends monitoring statistics as Prometheus metrics to a Prometheus pushgateway, from which the Prometheus server then scrapes the metrics.

Getting Started

For setting up the requirements, there is a docker-compose file in docker/prometheus that sets up the Pushgateway, the Prometheus server and a Grafana server. After starting the docker containers, Nifi needs to be downloaded and the ReportingTask has to be copied into its lib directory.

To set up the test environment, Docker and docker-compose need to be installed.

docker-compose up -d

This will bootstrap:

  • a Prometheus Pushgateway
  • a Prometheus server
  • a Grafana server

A sample dashboard can be found here: Sample Dashboard

After setting up a simple flow and the ReportingTask, the flow can be started and the results should be visible in the Grafana dashboard.

Docs

See the docs for more details:

  1. Configuration

Prerequisites

To test or use the PrometheusReportingTask, the following systems should be set up and running.

  • Running Prometheus instance
  • Running Prometheus Pushgateway instance
  • Running Nifi instance

The tools can be set up with Docker or manually.

Install to a running Nifi instance

First download the current release and copy the .nar file into your Nifi lib folder (most often under /opt/nifi//lib).

After this, just restart Nifi.

Limitations

The Reporting Task can't send custom metrics from processors to the Pushgateway. If you want something like this, you have to set up your own processor that reads FlowFiles, generates custom metrics and sends them to a Pushgateway. Because this is so use-case specific, it can't be done with this Reporting Task and is out of scope for this project.
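For anyone who does want to push custom metrics, a minimal sketch of such a push using the Prometheus Java simpleclient (the same client library bundled in this NAR) could look as follows; the metric, job name and Pushgateway address are illustrative assumptions, not part of this project:

import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.Gauge;
import io.prometheus.client.exporter.PushGateway;

public class CustomMetricPush {
    public static void main(String[] args) throws Exception {
        // Register an example metric in its own registry
        CollectorRegistry registry = new CollectorRegistry();
        Gauge processedFiles = Gauge.build()
                .name("my_custom_metric")
                .help("Example metric pushed outside of this reporting task.")
                .register(registry);
        processedFiles.set(42.0);

        // Push to the same Pushgateway the reporting task targets
        PushGateway pushGateway = new PushGateway("localhost:9091");
        pushGateway.pushAdd(registry, "my_custom_job");
    }
}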

Build it yourself

The project can be built with Maven in the standard fashion for nifi-processor-bundles. The following snippet shows the entire setup with a pre-installed Nifi:

# Clone project
git clone https://github.com/mkjoerg/nifi-prometheus-reporter.git
# Move into cloned dir
cd nifi-prometheus-reporter

# Build project
mvn clean install

The previously built .nar archive has to be copied into the nifi/lib directory and can be used after a restart of nifi.

# Copy .nar into Nifi's lib folder
cp nifi-prometheus-nar/target/nifi-prometheus-nar-1.9.2.nar NIFI_HOME/lib/nifi-prometheus-nar-1.9.2.nar

# Start nifi
NIFI_HOME/bin/nifi.sh start
# Or restart if already running
NIFI_HOME/bin/nifi.sh restart


Authors

  • Matthias Jörg - Initial work - mkjoerg (https://github.com/mkjoerg)
  • Daniel Seifert - Initial work - Daniel-Seifert (https://github.com/Daniel-Seifert)


nifi-prometheus-reporter's Issues

How to push additional metrics

Hi,
I appreciate your work, but I was hoping to expand the per-processor "status history" capability to see usage over a larger time range than 5 minutes.
Is there any chance that this reporter could expose some metrics from nifi.components.status.repository?

If not, could you confirm that in order to add new metrics, you just use a vanilla pushgateway and an HTTP POST processor within nifi?
If so, any chance of getting a template of such a POST? (A rough sketch follows below.)

Thanks and regards,
Julien
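For reference, pushing to a vanilla Pushgateway is indeed just an HTTP POST whose body is in the Prometheus text exposition format, so the body can be built in a flow and sent with a processor such as InvokeHTTP. A rough sketch in Java, with a made-up URL, job and metric name:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RawPushgatewayPost {
    public static void main(String[] args) throws Exception {
        // Body in the Prometheus text exposition format; must end with a newline
        String body = "# TYPE my_metric gauge\nmy_metric{source=\"nifi\"} 1.0\n";
        URL url = new URL("http://localhost:9091/metrics/job/my_job");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/plain; version=0.0.4; charset=utf-8");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("Pushgateway response code: " + conn.getResponseCode());
        conn.disconnect();
    }
}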

Java Lang Error when enabling JVM flag

Getting java.lang.reflect.InaccessibleObjectException: "Unable to make public long...". It works fine when I have the JVM flag set to false.

Java Version 9.0.1

Config and error screenshots were attached to the original issue (not reproduced here).

What do I need to do?

Limitation about Metrics sent to prometheus

Hi
As I've read, this project doesn't handle custom metrics from processors.

1/ Do you mean that the only metrics sent are these:
process_group_amount_bytes_total
process_group_amount_flowfiles_total
process_group_amount_items
process_group_amount_threads_total
process_group_size_content_total
push_time_seconds

2/ Is there any option to get these metrics with the processor name as a tag on the metrics?

3/ SiteToSiteMetricsReportingTask sends events including the processorName; is it possible to combine it with your reporting task to get more detailed metrics?

Thanks

Connection Refused

I ran the simple setup and all the containers are up, but I get this:
ERROR [Timer-Driven Process Thread-9] o.a.n.r.p.PrometheusReportingTask PrometheusReportingTask[id=6feba8f7-0168-1000-f1f2-6fa3a81f4871] Failed pushing JVM-metrics to Prometheus PushGateway due to java.net.ConnectException: Connection refused (Connection refused); routing to failure: {}

The reporting task is set up in NiFi with default settings, i.e. the Pushgateway running on localhost:9091. Screenshots of the NiFi error and of my task configuration were attached (not reproduced here).

I don't understand why it's throwing this error.
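One generic way to check whether the Pushgateway is actually reachable from the NiFi host is a plain socket probe; a minimal sketch (host and port mirror the default configuration above; this is not part of the project):

import java.net.InetSocketAddress;
import java.net.Socket;

public class PushgatewayReachability {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket()) {
            // Throws java.net.ConnectException if nothing listens on localhost:9091
            socket.connect(new InetSocketAddress("localhost", 9091), 5000);
            System.out.println("Pushgateway port is reachable");
        }
    }
}

If this fails from the machine NiFi runs on (for example because NiFi runs inside a container where localhost is not the Docker host), the reporting task will see the same ConnectException.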

Can't show prometheus in Reporting Task

Hello,

I restarted my Nifi server with nifi-prometheus-nar-1.9.0.nar in /nifi/lib. The server works.
I checked my nifi log and I have this:
INFO [main] org.apache.nifi.nar.NarClassLoaders Loaded NAR file: /opt/application/nifi/work/nar/extensions/nifi-prometheus-nar-1.9.0.nar-unpacked as class loader org.apache.nifi.nar.NarClassLoader[/opt/application/nifi/work/nar/extensions/nifi-prometheus-nar-1.9.0.nar-unpacked]
org.apache.nifi.reporting.prometheus.PrometheusReportingTask
org.apache.nifi:nifi-prometheus-nar:1.9.0 || /opt/application/nifi/work/nar/extensions/nifi-prometheus-nar-1.9.0.nar-unpacked

I have this in my extensions directory:

.
├── META-INF
│   ├── DEPENDENCIES
│   ├── LICENSE
│   ├── MANIFEST.MF
│   ├── maven
│   │   └── org.apache.nifi
│   │       └── nifi-prometheus-nar
│   │           ├── pom.properties
│   │           └── pom.xml
│   └── NOTICE
├── NAR-INF
│   └── bundled-dependencies
│       ├── metrics-core-2.2.0.jar
│       ├── nifi-prometheus-reporting-task-1.9.0.jar
│       ├── nifi-utils-1.9.0.jar
│       ├── simpleclient-0.5.0.jar
│       ├── simpleclient_common-0.5.0.jar
│       ├── simpleclient_hotspot-0.5.0.jar
│       ├── simpleclient_pushgateway-0.5.0.jar
│       └── simpleclient_servlet-0.5.0.jar
└── nar-md5sum

But in my Nifi UI I can't see the prometheus reporting task in the Reporting Task list (screenshot attached, not reproduced here).

And there is nothing about prometheus in bootstrap.log.

Error when running mvn build for 1.8 version

I'm getting a build failure when running mvn clean install on Mac for version 1.8, hence I'm not able to generate the nar file.

This is the error I'm receiving.
[INFO]
[INFO] Results:
[INFO]
[ERROR] Errors:
[ERROR] TestPrometheusReportingTask.testOnTrigger:140 » InaccessibleObject Unable to m...
[ERROR] TestMetricsService.testGetVirtualMachineMetrics:121 » InaccessibleObject Unabl...
[INFO]
[ERROR] Tests run: 4, Failures: 0, Errors: 2, Skipped: 0
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for nifi-prometheus-bundle 1.8.0:
[INFO]
[INFO] nifi-prometheus-bundle ............................. SUCCESS [ 2.218 s]
[INFO] nifi-prometheus-reporting-task ..................... FAILURE [ 3.238 s]
[INFO] nifi-prometheus-nar ................................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 7.629 s
[INFO] Finished at: 2019-01-11T14:58:42-05:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.20.1:test (default-test) on project nifi-prometheus-reporting-task: There are test failures.

Using pushgateway in a NiFi Cluster

Hello - This is more of a question.

How would you use the pushgateway in a NiFi cluster? Say, in a 3-node cluster, do we run a pushgateway on every node?

Problems for push data monitoring in NIFI with separated tools

I have a scenario where I get the following error even though everything is configured correctly according to the Pushgateway documentation:

2019-10-15 08:14:54,789 ERROR [Timer-Driven Process Thread-4] o.a.n.r.p.PrometheusReportingTask PrometheusReportingTask[id=cf0d9548-016d-1000-7798-424e9933586b] Failed pushing Nifi-metrics to Prometheus PushGateway due to java.io.IOException: Response code from http://localhost:9091/metrics/job/nifi_reporting_job/instance/user-Dell was 200; routing to failure: {}
java.io.IOException: Response code from http://localhost:9091/metrics/job/nifi_reporting_job/instance/user-Dell was 200
at io.prometheus.client.exporter.PushGateway.doRequest(PushGateway.java:304)
at io.prometheus.client.exporter.PushGateway.pushAdd(PushGateway.java:178)
at org.apache.nifi.reporting.prometheus.PrometheusReportingTask.onTrigger(PrometheusReportingTask.java:170)
at org.apache.nifi.controller.tasks.ReportingTaskWrapper.run(ReportingTaskWrapper.java:44)
at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

The installation for that is:

  • Nifi 1.9.2 with nifi-prometheus-reporter.nar in the lib folder, installed locally outside of docker/docker-compose, with the ReportingTask configured as in the attached screenshot (not reproduced here).

  • Apache Livy, installed locally outside of docker/docker-compose.

  • Pushgateway, Prometheus and Grafana as separate docker containers with --network host set, running on the same machine as the other software; in the real installation, Nifi and Livy run on one server and the docker containers in the cloud.

I have one ExecuteScript component in a ProcessGroup in Nifi (flow screenshot attached, not reproduced here).

The ExecuteScript accesses the Livy API from Nifi: I'm listening for json files in a local folder and passing their parameters for Spark jobs to the Livy batch API via Python:

import json
import pprint
import requests

from org.apache.commons.io import IOUtils
from java.nio.charset import StandardCharsets
from org.apache.nifi.processor.io import InputStreamCallback


# Callback that reads the FlowFile content into self.val
class PyReadStreamCallback(InputStreamCallback):
    def __init__(self):
        pass

    def process(self, ins):
        self.val = IOUtils.toString(ins, StandardCharsets.UTF_8)


flowFile = session.get()
if flowFile is not None:
    obj = PyReadStreamCallback()
    session.read(flowFile, obj)
    parsedJson = json.loads(obj.val)

    headers = {'Content-Type': 'application/json'}
    batchs_url = 'http://localhost:8998/batches'
    host = 'http://localhost:8998'

    # Submit the batch job to the Livy API
    r = requests.post(batchs_url, data=obj.val, headers=headers)
    pprint.pprint(r.json())

    # Check the status of the created batch
    statement_url = host + r.headers['location']
    r = requests.get(statement_url, headers=headers)
    pprint.pprint(r.json())

    # with open('/data/personal.json', 'w') as json_file:
    #     json.dump(parsedJson, json_file)

    session.transfer(flowFile, REL_SUCCESS)

Testing with the docker-compose for nifi-prometheus-reporter I'm getting the same error.

What is the problem here? I'm getting some scraped data on the Pushgateway, yet I also get that error.

Using Reporter on Openshift Cluster

Hi there,

thanks for implementing this reporter!
Currently I am trying to run it on an openshift cluster (which I am quite new to, hence sorry if this is the wrong place to ask). Unfortunately the reporter does not send the metrics to pushgateway, although I added the built nifi-prometheus-nar-1.6.0.nar file to the lib directory in the Dockerfile by

COPY ./nifi-conf/nifi-prometheus-nar-1.6.0.nar /opt/nifi/lib/nifi-prometheus-nar-1.6.0.nar

Prometheus, pushgateway and nifi are running in three pods respectively; the connection between pushgateway and prometheus seems to be fine. Therefore I expect the problem lies in the setup of the nifi pod.
Since I copy the nar-file to nifi beforehand, I do not restart nifi as you mention in your README.

Did I miss something crucial? Again, sorry if this is more of an openshift problem.
Thanks in advance.

More JVM metrics

Add more JVM metrics which will be supported in the next release of the io.prometheus.PushGateway
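For reference, the simpleclient_hotspot module that this NAR already bundles exposes the JVM collectors individually; a brief sketch (registry and collector selection are illustrative, not this project's code):

import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.hotspot.MemoryPoolsExports;
import io.prometheus.client.hotspot.StandardExports;
import io.prometheus.client.hotspot.ThreadExports;

public class JvmMetrics {
    public static void main(String[] args) {
        CollectorRegistry registry = new CollectorRegistry();
        // Each collector contributes one family of JVM metrics
        new StandardExports().register(registry);    // CPU time, open FDs, start time
        new MemoryPoolsExports().register(registry); // heap and non-heap memory pools
        new ThreadExports().register(registry);      // thread counts and states
    }
}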

Added URL authentication

Hi there,

This repo saved me a lot of time.
Just made a small modification in order to add authentication on the URL connection:

static final PropertyDescriptor USERNAME = new PropertyDescriptor.Builder()
        .name("Username")
        .description("Username for URL authentication")
        .build();
static final PropertyDescriptor PASSWORD = new PropertyDescriptor.Builder()
        .name("Password")
        .description("Password for URL authentication")
        .sensitive(true)
        .build();

properties.add(USERNAME);
properties.add(PASSWORD);

final PushGatewayExt pushGateway = new PushGatewayExt(metricsCollectorUrl,
        context.getProperty(USERNAME) != null ? context.getProperty(USERNAME).toString() : null,
        context.getProperty(PASSWORD) != null ? context.getProperty(PASSWORD).toString() : null);

I created a new class:

package org.apache.nifi.reporting.prometheus.extend;

import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.exporter.PushGateway;
import io.prometheus.client.exporter.common.TextFormat;

import java.io.BufferedWriter;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.util.Map;

public class PushGatewayExt extends PushGateway {

    protected final String username;
    protected final String password;

    public PushGatewayExt(String address, String username, String password) {
        super(address);
        this.username = username;
        this.password = password;
    }

    @Override
    public void pushAdd(CollectorRegistry registry, String job) throws IOException {
        this.doAuthRequest(registry, job, null, "POST");
    }

    void doAuthRequest(CollectorRegistry registry, String job, Map<String, String> groupingKey, String method) throws IOException {
        // Build the push URL, appending any grouping-key segments
        String url = this.gatewayBaseURL + URLEncoder.encode(job, "UTF-8");
        if (groupingKey != null) {
            for (Map.Entry<String, String> entry : groupingKey.entrySet()) {
                url = url + "/" + entry.getKey() + "/" + URLEncoder.encode(entry.getValue(), "UTF-8");
            }
        }

        HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
        connection.setRequestProperty("Content-Type", "text/plain; version=0.0.4; charset=utf-8");
        if (!method.equals("DELETE")) {
            connection.setDoOutput(true);
        }

        // Set the HTTP basic auth header if credentials were configured
        if (this.username != null || this.password != null) {
            String userPass = this.username + ":" + this.password;
            String basicAuth = "Basic " + javax.xml.bind.DatatypeConverter.printBase64Binary(userPass.getBytes());
            connection.setRequestProperty("Authorization", basicAuth);
        }

        connection.setRequestMethod(method);
        connection.setConnectTimeout(10000);
        connection.setReadTimeout(10000);
        connection.connect();

        try {
            if (!method.equals("DELETE")) {
                BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(connection.getOutputStream(), "UTF-8"));
                TextFormat.write004(writer, registry.metricFamilySamples());
                writer.flush();
                writer.close();
            }

            int response = connection.getResponseCode();
            if (response != 202) {
                throw new IOException("Response code from " + url + " was " + response);
            }
        } finally {
            connection.disconnect();
        }
    }
}
Hope it helps someone.

Best regards and thanks.

nifi-prometheus NIFI version compatibility

Hello,

I've installed this NAR and restarted NIFI.
It is correctly installed, but in the "Controller Settings" section on my NIFI cluster there is nothing in the "Reporting Task" list.

Does it work only with NIFI 1.6? My version is NIFI 1.2.

thanks

Dashboard

Hi,

Can you tell us which metrics are sent to Prometheus?
Do you have some metrics or a dashboard to share with us in the README file?
Thanks

Process Group ID discovery

Thanks for the reporting task, it's been helpful.

A question about the "Process group ID(s)" field in the task's configuration. I understand you can get metrics for a specific PG by specifying its ID, or global metrics if the field is empty. However, we've got a large number of PGs - is there a query/method for automatic discovery of PG IDs without manually specifying them?
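There is no discovery feature in the reporting task itself, but the standard NiFi REST API can enumerate child process groups. A hedged sketch (host, port and the "root" group alias are assumptions about a default, unsecured NiFi install):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ListProcessGroups {
    public static void main(String[] args) throws Exception {
        // Lists the process groups directly under the root group
        URL url = new URL("http://localhost:8080/nifi-api/process-groups/root/process-groups");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            // The JSON response has a "processGroups" array; each entry
            // carries the group's "id" and its component name
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
        conn.disconnect();
    }
}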
