
pfSense Analytics

This project aims to give you better insight into what's going on on your pfSense firewall. It's based on some heavy lifting already done by devopstales and opc40772. Since their setup was a bit clumsy and outdated, I wrapped some docker-compose glue around it to make it a little easier to get up and running. It should work hassle-free on any current Linux that has docker and docker-compose. Thanks as well to MatthewJSalerno for streamlining the Graylog provisioning process.

I have recently updated the whole stack to Graylog 4, Elasticsearch 7, and Grafana 7. I don't include any directions for upgrading GL3/ES6 to GL4/ES7.

This doc has been tested with the following versions:

| Component     | Version  |
|---------------|----------|
| Elasticsearch | 7.11.1   |
| Grafana       | 7.4.2    |
| Graylog       | 4.0.3    |
| Cerebro       | 0.9.3    |
| pfSense       | 2.5.0 CE |

If it's easier for you, you can find a video guide here: https://youtu.be/uOfPzueH6MA (still the guide for GL3/ES6; I'll make a new one some day).

The whole metric approach is split into several subtopics.

| Metric type           | Stored via             | Stored in     | Visualisation |
|-----------------------|------------------------|---------------|---------------|
| pfSense IP filter log | Graylog                | Elasticsearch | Grafana       |
| NTOP DPI data         | NTOP timeseries export | InfluxDB      | Grafana       |

Optionally, Suricata/Snort logs can be pushed to Elasticsearch; Graylog has ready-made extractors for this, but it is not yet covered in this documentation.

What you get is eye candy like this:

DPI Data: dpi2

More DPI Data: dpi1

Firewall Insights: fw1

Moar Insights: fw2

This walkthrough has been made with a fresh install of Ubuntu 18.04 Bionic but should work flawlessly with any Debian-ish Linux distro.

0. System requirements

Since this involves Elasticsearch 7, a few GB of RAM are required. Don't bother with less than 8 GB; it just won't run.
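As a quick sanity check before pulling the stack, you can read the host's total memory from /proc/meminfo. This is just a sketch; the 8 GB threshold mirrors the recommendation above:

```shell
# Warn early if the host has less than the recommended 8 GB of RAM.
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
if [ "$mem_kb" -lt $((8 * 1024 * 1024)) ]; then
  echo "warning: less than 8GB RAM, the stack will likely not run"
else
  echo "memory looks OK: ${mem_kb} kB"
fi
```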

Please install docker, docker-compose, and git as basic prerequisites.

sudo apt install docker.io docker-compose git

1. Prepare compose environment

Let's pull this repo to the server where you intend to run the analytics front- and backend.

git clone https://github.com/lephisto/pfsense-analytics.git
cd pfsense-analytics

We have to adjust some system limits to allow Elasticsearch to run:

sudo sysctl -w vm.max_map_count=262144

To make it permanent, edit /etc/sysctl.conf and add the line:

vm.max_map_count=262144
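If you prefer not to edit the file by hand, the persistence step can be scripted idempotently. A sketch, assuming you run the append with root rights (the CONF variable is only there so you can test against a scratch file):

```shell
# Append vm.max_map_count=262144 to /etc/sysctl.conf only if it is not
# already present, so repeated runs don't duplicate the line.
CONF="${CONF:-/etc/sysctl.conf}"
LINE='vm.max_map_count=262144'
if ! grep -qxF "$LINE" "$CONF" 2>/dev/null; then
  echo "$LINE" >> "$CONF" 2>/dev/null \
    || echo "could not write $CONF (run as root, or use sudo tee -a)"
fi
# sudo sysctl -p   # reload the setting (needs root)
```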

Next edit the ./Docker/graylog.env file and set some values:

Set the proper time zone (see https://en.wikipedia.org/wiki/List_of_tz_database_time_zones):

  • GRAYLOG_TIMEZONE=Europe/Berlin

The URL you want your Graylog to be available under:

  • GRAYLOG_HTTP_EXTERNAL_URI (e.g. http://localhost:9000/)

A salt for encrypting your Graylog passwords:

  • GRAYLOG_PASSWORD_SECRET (change that now)

Edit Docker/graylog/getGeo.sh and insert your license key for the MaxMind GeoIP database. Create an account at https://www.maxmind.com/en/account/login, go to "My Account -> Manage License Keys -> Generate new License key", and copy that key to the placeholder in your getGeo.sh file. If you skip this, the GeoIP lookup feature for IP addresses won't work.
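For reference, authenticated GeoLite2 downloads go through MaxMind's permalink API; whether getGeo.sh uses exactly this form is an assumption, and YOUR_LICENSE_KEY is a placeholder. The sketch below only assembles and prints the URL, with the actual fetch left commented out:

```shell
# Build the GeoLite2-City permalink URL used for key-authenticated downloads.
LICENSE_KEY="YOUR_LICENSE_KEY"   # placeholder: paste your MaxMind key here
URL="https://download.maxmind.com/app/geoip_download?edition_id=GeoLite2-City&license_key=${LICENSE_KEY}&suffix=tar.gz"
echo "$URL"
# curl -fSL "$URL" -o mm.tar.gz && tar -xzf mm.tar.gz
```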

Finally, spin up the stack with:

cd ./Docker
sudo docker-compose up -d

Note: the graylog image is built the first time you run docker-compose. The step below is only needed later, to update the GeoLite2-City database; it rebuilds the graylog image, which re-runs getGeo.sh:

cd ./Docker
sudo docker-compose up -d --no-deps --build graylog

This should expose the following services externally:

| Service | URL                    | Default login | Purpose |
|---------|------------------------|---------------|---------|
| Graylog | http://localhost:9000  | admin/admin   | Configure data ingestion and extractors for log information |
| Grafana | http://localhost:3000  | admin/admin   | Draw nice graphs |
| Kibana  | http://localhost:5601/ | none          | Default Elastic data exploration tool. Not required. |
| Cerebro | http://localhost:9001  | none (connect to the ES API at http://elasticsearch:9200) | ES admin tool. Only required for setting up the index. |

Depending on your hardware, a few minutes later you should be able to connect to your Graylog instance at http://localhost:9000. Log in with username "admin", password "admin".
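Rather than refreshing the browser, you can check readiness from the shell. A minimal sketch, assuming Graylog is exposed on localhost:9000 as above (it always exits 0, so it is safe to re-run until the first message appears):

```shell
# One-shot readiness check: report whether Graylog answers on port 9000 yet.
curl -fs -o /dev/null --max-time 5 http://localhost:9000 \
  && echo "graylog is up" \
  || echo "graylog not (yet) reachable"
```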

2. Initial Index creation

Next we have to create the index for the pfSense logs in Elasticsearch, under System / Indices.

Index

Set index shards to 4 and index replicas to 0. The index can be rotated on a time basis, and for retention you can delete or close indices once a maximum number of indices is reached, or do nothing. In my case I rotate monthly and delete indices after 12 months; in short, there are many ways to set up rotation. The index is created immediately.

Indices

3. GeoIP Plugin activation

In Graylog go to System->Configurations and:

  1. Change the order of the message processors to the following sequence:
     1. AWS Instance Name Lookup
     2. Message Filter Chain
     3. Pipeline Processor
     4. GeoIP Resolver

This should look like:

Index

  2. In the Plugins section, enable the Geo-Location Processor.

4. Content Packs

Custom Content Pack

This content pack includes an rsyslog-type input, extractors, lookup tables, data adapters for the lookup tables, and caches for the lookup tables. You could set all of this up manually, but it is preconfigured for what we want, so you don't have to fight with lookups, data adapters, etc.

We can take it from the Git directory or sideload it from GitHub to the workstation you do the deployment from:

https://raw.githubusercontent.com/lephisto/pfsense-analytics/master/pfsense_content_pack/graylog4/pfanalytics.json

Once it's uploaded, press the Install button. If everything went well it should look like:

dpi1

Note the "pfintel" on the bottom of the list.

5. Assign Streams

Now edit the stream: in Streams, assign the pfsense index we created initially. Also tick the option to remove matches from the default 'All messages' stream, so messages are stored only in the pfsense index.

Content Pack

6. Cerebro

This part might be a little bit confusing, so read carefully!

As previously explained, by default Graylog generates its own template for each index it creates and applies it every time the index rotates. If we want our own template, we must create it in Elasticsearch itself. We will convert the fields dest_ip_geolocation and src_ip_geolocation to type geo_point so they can be used in the World Map panels, since Graylog does not use this format.

Get the index template from the Git repo you cloned or sideload it from:

https://raw.githubusercontent.com/lephisto/pfsense-analytics/master/Elasticsearch_pfsense_custom_template/pfsense_custom_template_es7.json
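For orientation, the part of that template that matters for the map panels is the mapping that types the two geolocation fields as geo_point. This is only a minimal sketch of such a mapping fragment; the full file from the repo is authoritative and should be pasted as-is:

```json
{
  "mappings": {
    "properties": {
      "dest_ip_geolocation": { "type": "geo_point" },
      "src_ip_geolocation":  { "type": "geo_point" }
    }
  }
}
```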

To import the custom template, open Cerebro and go to "more" -> "index templates".

Content Pack

Create a new template.

Content Pack

Name it pfsense-custom, open the file from the Git repo that contains the template, and paste its contents here.

Content Pack

Then press the create button.

!!! IMPORTANT: Now stop the graylog service so we can delete the index through Cerebro.

sudo docker-compose stop graylog

In Cerebro, hover over the index, unfold the options, and select "delete index".

Content Pack

Start the graylog service again; this will recreate the index with the new template.

sudo docker-compose start graylog

Once this procedure is done we don't need Cerebro for daily work, so it can be disabled in docker-compose.yml.

7. Configure pfSense

We will now prepare pfSense to send logs to Graylog. Under Status / System Logs / Settings, modify the options that allow us to do so.

Go to the Remote Logging Options section and, under Remote log servers, specify the IP address of the Docker host and the port configured in the content pack's pfsense input in Graylog, which in this case is 5442.

Pfsense

We save the configuration.
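If you don't want to wait for real traffic, you can hand-feed one fake syslog line over UDP using bash's /dev/udp. A sketch under the assumption that the content pack's input listens on UDP port 5442; HOST is a placeholder for your Docker host's address:

```shell
# Fire one RFC3164-style test line at the Graylog input; it should show up
# in the input's received messages within a few seconds.
HOST="${HOST:-127.0.0.1}"   # replace with your Docker host's IP
echo '<134>Feb 21 17:39:55 test filterlog[1]: test message' \
  > "/dev/udp/$HOST/5442" 2>/dev/null \
  || echo "could not send (is $HOST reachable?)"
```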

Check Graylog

Now go to Graylog, select the pfsense stream, and you will see how it parses the log messages and creates the fields.

Graylog

Check Grafana

Dashboards and Datasource are auto-provisioned to Grafana. Log in at http://localhost:3000 with admin/admin and you should see your Firewall Logs pouring in.

DPI

Now that we have the firewall logs, we want to get some intel about the legit traffic on our network.

  • On your pfSense go to System->Package Manager->Available Packages and install ntopng.
  • Head to Diagnostics -> ntopng Settings and do basic Configuration
  • Update GeoIP Data there as well. (Install "PFSENSE-9211: Fix GeoIP DB" if it fails)
  • Go to Diagnostics -> ntopng Settings and log in to ntopng
  • Go to Settings -> Preferences -> timeseries

Configure according to your needs; I propose the following settings:

| Setting | Value | Remarks |
|---------|-------|---------|
| Timeseries Driver | InfluxDB | |
| InfluxDB URL | http://yourdockerserverip:8086 | |
| InfluxDB Database | ndpi | |
| InfluxDB Authentication | off | unless you have it enabled |
| InfluxDB Storage | 365d | |
| Interface TS: Traffic | on | |
| Interface TS: L7 Applications | per Protocol | |
| Local Host Timeseries: Traffic | on | |
| Local Host Timeseries: L7 Applications | per Protocol | |
| Device Timeseries: Traffic | on | |
| Device Timeseries: L7 Applications | per Category | |
| Device Timeseries: Retention | 30d | |
| Other Timeseries: TCP Flags | off | |
| Other Timeseries: TCP OoO, Lost, Retrans | off | |
| Other Timeseries: VLANs | on | |
| Other Timeseries: Autonomous Systems | on | |
| Other Timeseries: Countries | on | |
| Database Top Talker Storage | 365d | |
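Once ntopng has been exporting for a few minutes, you can ask InfluxDB whether the ndpi database exists. A sketch using the InfluxDB 1.x HTTP query endpoint; INFLUX_URL is an assumption, so point it at your Docker host:

```shell
# List databases via the InfluxDB 1.x HTTP API; "ndpi" should appear once
# ntopng has written its first points.
INFLUX_URL="${INFLUX_URL:-http://yourdockerserverip:8086}"
curl -sG "$INFLUX_URL/query" --data-urlencode 'q=SHOW DATABASES' \
  || echo "influxdb not reachable at $INFLUX_URL"
```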

Disable Cerebro.

Since Cerebro is mainly used for applying the custom index template, we don't need it in our daily routine and can disable it. Edit your docker-compose.yml and uncomment the lines in the Cerebro service block:

  cerebro:
    image: lmenezes/cerebro
    entrypoint: ["echo", "Service cerebro disabled"]

No need to restart the whole stack; just stop Cerebro:

sudo docker-compose stop cerebro

That should do it. Check your DPI Dashboard and enjoy :)

pfsense-analytics's People

Contributors

lephisto, matthewjsalerno, opc40772


pfsense-analytics's Issues

Graylog refused to connect

I followed the steps in the README.md file but got stuck when accessing the Graylog web URL.
The browser error message is "refused to connect", while the URLs for Cerebro and Grafana work fine.

docker-compose.yml
GRAYLOG_HTTP_EXTERNAL_URI=http://localhost:9000/

sudo netstat -tulpn | grep 9000

sudo netstat -tulpn | grep 3000
tcp6       0      0 :::3000                 :::*                    LISTEN      14130/docker-proxy

sudo netstat -tulpn | grep 9001
tcp6       0      0 :::9001                 :::*                    LISTEN      14170/docker-proxy
lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 18.04.3 LTS
Release:	18.04
Codename:	bionic
sudo ufw status
Status: inactive

Host Variable drop down for all panels

It would be really helpful to have a variable drop down at the top for specific hosts, I have the variable defined but when I apply it the changes only take place on the Traffic by Host panel and not the NDPI or distribution panels. Likewise it would be neat if the already defined ndpicat variable applied to the Traffic by host panel as well.

I am a noob at this stuff but I am still trying to figure this out, if anyone has a fix to get these variables to apply to all panels please let me know.

I will include my host variable that I am using just in case it gets someone with more grafana experience started.

https://i.imgur.com/8LdsPJi.png

InfluxDB latest now pulls 2.0

image: 'influxdb:latest' pulls InfluxDB 2.0. As a fix I set the docker compose file to pull influxdb:1.8.4-alpine.

Nice Job

This is not an issue.
Thanks for your mention. Nice job. I will build on and reuse your work. I have worked on other projects such as Suricata, Zimbra, Squid and Wazuh, but they are still on Graylog 2.4. The Graylog version change no longer allowed me to use my old Content Packs. Recently a friend asked me to update everything to the current versions of Graylog and Elasticsearch. You have given me a reason to update and thus collaborate with each other. Best regards.

No visuals on Grafana dashboard for Firewall Logs

Hi
I have set this up on a Ubuntu 18.04 and graylog is receiving the logs from the pfsense. However, I don't see anything on the Grafana dashboard. I also tested the data source pfsensefw (http://elasticsearch:9200) and it doesn't complain.

Following is the log output from the CLI and screenshots from Grafana.

tcpdump -i ens160 not port 22 | grep 5442
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens160, link-type EN10MB (Ethernet), capture size 262144 bytes
23:18:26.196359 IP xx.xx.xx.xx.syslog > pf-analytics.local.5442: SYSLOG local5.info, length: 275
23:18:26.313221 IP xx.xx.xx.xx.syslog > pf-analytics.local.5442: SYSLOG local0.info, length: 195
23:18:26.338964 IP xx.xx.xx.xx.syslog > pf-analytics.local.5442: SYSLOG local5.info, length: 276
23:18:27.190880 IP xx.xx.xx.xx.syslog > pf-analytics.local.5442: SYSLOG local5.info, length: 275
23:18:28.193809 IP xx.xx.xx.xx.syslog > pf-analytics.local.5442: SYSLOG local5.info, length: 275
23:18:29.196005 IP xx.xx.xx.xx.syslog > pf-analytics.local.5442: SYSLOG local5.info, length: 275
23:18:29.240726 IP xx.xx.xx.xx.syslog > pf-analytics.local.5442: SYSLOG local5.info, length: 311
23:18:29.332991 IP xx.xx.xx.xx.syslog > pf-analytics.local.5442: SYSLOG local0.info, length: 183
23:18:30.195597 IP xx.xx.xx.xx.syslog > pf-analytics.local.5442: SYSLOG local5.info, length: 275
23:18:31.197824 IP xx.xx.xx.xx.syslog > pf-analytics.local.5442: SYSLOG local5.info, length: 275
23:18:31.242791 IP xx.xx.xx.xx.syslog > pf-analytics.local.5442: SYSLOG local5.info, length: 301
23:18:32.195049 IP xx.xx.xx.xx.syslog > pf-analytics.local.5442: SYSLOG local5.info, length: 275
23:18:32.374036 IP xx.xx.xx.xx.syslog > pf-analytics.local.5442: SYSLOG local0.info, length: 183
23:18:33.193469 IP xx.xx.xx.xx.syslog > pf-analytics.local.5442: SYSLOG local5.info, length: 275
23:18:34.193956 IP xx.xx.xx.xx.syslog > pf-analytics.local.5442: SYSLOG local5.info, length: 275
23:18:34.252609 IP xx.xx.xx.xx.syslog > pf-analytics.local.5442: SYSLOG local5.info, length: 299
23:18:35.190906 IP xx.xx.xx.xx.syslog > pf-analytics.local.5442: SYSLOG local5.info, length: 275

image

image

Graylog server time - no way to change

This is what I have on System>Overview;
User 2020-05-07 18:47:00 -4:00
Your web browser 2020-05-07 18:47:00 -4:00
Graylog server 2020-05-07 22:48:00 +0:00

That is an issue with the logs. I can only see the last 8 hours of data on Grafana.

I have changed the timezone on the server without luck.
graylog.env has GRAYLOG_TIMEZONE=America/Puerto_Rico

Any ideas?

Thanks!

*Feature Request* Client Specific DPI

Has anyone been able to get the NDPI dashboard to be able to show specific types of traffic per host?

I am a grafana noob and don't have the skills to make this happen but would like to be able to select a specific client ip address and then have the NDPI interface show just the traffic for that client.

Service 'graylog' failed to build

tost@siem_soc:~/pfsense-analytics/Docker$ sudo docker-compose up -d
[sudo] password for tost:
Building graylog
Step 1/6 : FROM graylog/graylog:3.1
---> ca38a27808e3
Step 2/6 : USER root
---> Using cache
---> 069030ba8761
Step 3/6 : RUN mkdir -pv /etc/graylog/server/
---> Using cache
---> b85bcd69ff78
Step 4/6 : COPY ./getGeo.sh /etc/graylog/server/
---> Using cache
---> b023f4e1942e
Step 5/6 : RUN chmod +x /etc/graylog/server/getGeo.sh && /etc/graylog/server/getGeo.sh
---> Running in 02dd88419fb8
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (6) Could not resolve host: geolite.maxmind.com
tar (child): /etc/graylog/server/mm.tar.gz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
ERROR: Service 'graylog' failed to build: The command '/bin/sh -c chmod +x /etc/graylog/server/getGeo.sh && /etc/graylog/server/getGeo.sh' returned a non-zero code: 2

Blank histogram/no messages displayed; index growing

Hello,

I am not seeing data in stream messages in Graylog. I have followed the video and the guide at https://github.com/lephisto/pfsense-analytics/tree/master. I have checked for the flow of messages. What steps can I use to troubleshoot?

I have verified that data is flowing from the pfsense box to the vm port 5442. I can see the message count fluctuate under graylog/streams > pfsense, but when I click on the stream I see a blank histogram for message count as well as no messages. There is no data displayed in Grafana. The elasticsearch pfsense_0 has thousands of docs and is steadily growing. I have rebuilt a new vm since you updated the repo earlier today. Everything works up to step 6/Check Graylog.

Graylog Streams e.g. 17 messages/second. Must match all of the 2 configured stream rules.
Graylog Index e.g. pfsense-logs 1 index, 33,114 documents, 8.8MiB

pfsense filter.log extract
Feb 21 17:39:55 remote filterlog[35405]: 76,,,100000101,re0,match,pass,in ...
Feb 21 17:39:55 remote filterlog[35405]: 76,,,100000101,re0,match,pass,in ...
Feb 21 17:39:55 remote filterlog[35405]: 4,,,1000000003,re0,match,block,in ...

Jake

max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]

Hi! I'm trying to set this up but I'm stuck at step 5, Cerebro.
Cerebro can't connect to Elasticsearch. I used docker stats to see if Elasticsearch was running; it was actually crash-looping. I looked at the logs (docker logs -f pfanalytics_elasticsearch_1) and saw:

[2020-10-04T18:56:56,262][INFO ][o.e.n.Node               ] [fEncsgq] starting ...
[2020-10-04T18:56:56,406][INFO ][o.e.t.TransportService   ] [fEncsgq] publish_address {172.18.0.8:9300}, bound_addresses {0.0.0.0:9300}
[2020-10-04T18:56:56,417][INFO ][o.e.b.BootstrapChecks    ] [fEncsgq] bound or publishing to a non-loopback address, enforcing bootstrap checks
ERROR: [1] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2020-10-04T18:56:56,424][INFO ][o.e.n.Node               ] [fEncsgq] stopping ...
[2020-10-04T18:56:56,458][INFO ][o.e.n.Node               ] [fEncsgq] stopped
[2020-10-04T18:56:56,458][INFO ][o.e.n.Node               ] [fEncsgq] closing ...
[2020-10-04T18:56:56,470][INFO ][o.e.n.Node               ] [fEncsgq] closed

I think I will need to raise the max file descriptors limit, but I'm stuck; I don't know how to do this. (I'm setting up pfsense-analytics on my Asustor NAS, and there is no /etc/security/limits.conf.) Could you please help?

Firewall Dashboard works fine but DPI dashboard shows no data

I had to reinstall pfsense-analytics since I screwed up some backups. It was working fine until then. Now after reinstalling, the Firewall dashboard looks fine:
image

but the DPI dashboard is empty (the panels are all there, they just don't have any data)
image

So, in order to troubleshoot, how can I verify logs are being stored in Graylog?

empty Stream in Graylog and empty Dashboard in Grafana

Thank you very much for setting this how-to up, and also thank you for providing your step-by-step guide.

I followed your guide exactly and was able to set everything up without any error messages or other hiccups.
Sadly, however, I cannot see any data in the pfsense stream in Graylog, and therefore I also have no data in the Grafana dashboard.

Does anybody have any idea why the Graylog server gets no data in the stream?
I'm not very experienced with Linux and therefore a little lost in how to find the error myself.

Thanks to everyone who can help here.

Best Regards

Can't run RUN chmod +x /etc/graylog/server/getGeo.sh

Please can you help me I get this error in step 5/6

`Step 5/6 : RUN chmod +x /etc/graylog/server/getGeo.sh && /etc/graylog/server/getGeo.sh
---> Running in b30dbe5bac10
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 20 100 20 0 0 112 0 --:--:-- --:--:-- --:--:-- 112

gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
ERROR: Service 'graylog' failed to build: The command '/bin/sh -c chmod +x /etc/graylog/server/getGeo.sh && /etc/graylog/server/getGeo.sh' returned a non-zero code: 2`

Thanks.

NDPI Traffic by host graph showing incorrect traffic spikes on WAN IP

image

I get these strange spikes about every 30 minutes. The IP is my WAN (PPPoE) IP; when the IP changes, the counter resets and begins increasing again. It makes no sense, since it keeps increasing even beyond my maximum internet bandwidth (600 Mb).
Checking ntopng directly, there is no trace of this bandwidth usage either.

Content Pack import fails

org.graylog2.contentpacks.exceptions.FailedConstraintsException: Failed constraints: [PluginVersionConstraint{type=plugin-version, pluginId=org.graylog.plugins.threatintel.ThreatIntelPlugin, version=>=3.1.3}, PluginVersionConstraint{type=plugin-version, pluginId=org.graylog.plugins.threatintel.ThreatIntelPlugin, version=>=3.1.2}]

For some reason, I do not see the ThreatIntelPlugin listed in the config. Since it's not enabled and is a dependency of the content pack, the install of the content pack fails.

I'm looking through the Graylog docs to see if I missed anything.

Setup error on docker_elasticsearch_1; docker_mongodb_1; elasticsearch

Hello, I have reached this point where I get an error and the installation fails. Any ideas on what I can check? Thank you!

ERROR: for docker_elasticsearch_1 Cannot start service elasticsearch: b'OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:359: setting rlimits for ready process caused \\\"error setting rlimit type 8: operation not permitted\\\"\"": unknown'

ERROR: for docker_mongodb_1 UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)

ERROR: for elasticsearch Cannot start service elasticsearch: b'OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:359: setting rlimits for ready process caused \\\"error setting rlimit type 8: operation not permitted\\\"\"": unknown'

ERROR: for mongodb UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).

Can't run RUN chmod +x /etc/graylog/server/getGeo.sh

Hello,

is anyone experiencing the same issue when running the 5/6 step?

`Step 5/6 : RUN chmod +x /etc/graylog/server/getGeo.sh && /etc/graylog/server/getGeo.sh
---> Running in b30dbe5bac10
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 20 100 20 0 0 112 0 --:--:-- --:--:-- --:--:-- 112

gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
ERROR: Service 'graylog' failed to build: The command '/bin/sh -c chmod +x /etc/graylog/server/getGeo.sh && /etc/graylog/server/getGeo.sh' returned a non-zero code: 2`

Thanks.

Graylog Indice/notification error

I get the following error message when attempting to create an Indice under "Create index sets" in graylog:

"Please fill out this field" under the Alerts tab

I tried to set up notifications, however error persists. How do I overcome this?

image

No Data Appearing on the Maps in Grafana

Everything else is working fine in Grafana and Graylog. My only issue is that no map data is shown in either the Firewall Logs or DPI dashboards. I used #15 to help me refer to my licence key within getGeo.sh in order to build Graylog in the first place, but even though it was built, there is still nothing shown on either map in Grafana. Is anyone else having this issue?

Thanks for the help
grafana

geolite.maxmind.com

Is anyone else seeing this error and happen to know the fix? I'm guessing it is due to the recent MaxMind sign-in requirement, but haven't dug into the script to figure out how to fix yet.

% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                               Dload  Upload   Total   Spent    Left  Speed
0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (6) Could not resolve host: geolite.maxmind.com
tar (child): /etc/graylog/server/mm.tar.gz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
ERROR: Service 'graylog' failed to build: The command '/bin/sh -c chmod +x /etc/graylog/server/getGeo.sh && /etc/graylog/server/getGeo.sh' returned a non-zero code: 2

How can I change the InfluxDB port number

After fixing the MaxMind link, I now stumble across this. It looks like it's because I already have InfluxDB running on the machine on port 8086, for things like my cable modem signal statistics and all of my Ubiquiti UniFi data, which I'm already graphing in Grafana locally on the machine.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/compose/service.py", line 625, in start_container
container.start()
File "/usr/lib/python3/dist-packages/compose/container.py", line 241, in start
return self.client.start(self.id, **options)
File "/usr/lib/python3/dist-packages/docker/utils/decorators.py", line 19, in wrapped
return f(self, resource_id, *args, **kwargs)
File "/usr/lib/python3/dist-packages/docker/api/container.py", line 1095, in start
self._raise_for_status(res)
File "/usr/lib/python3/dist-packages/docker/api/client.py", line 263, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "/usr/lib/python3/dist-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 500 Server Error: Internal Server Error ("b'driver failed programming external connectivity on endpoint pfanalytics_influxdb_1 (b9e0960f863784eb35a07ee1b0976e43ac6c407b060c5be250f698e38ea16e27): Error starting userland proxy: listen tcp 0.0.0.0:8086: bind: address already in use'")

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/bin/docker-compose", line 11, in
load_entry_point('docker-compose==1.25.0', 'console_scripts', 'docker-compose')()
File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 72, in main
command()
File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 128, in perform_command
handler(command, command_options)
File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 1107, in up
to_attach = up(False)
File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 1088, in up
return self.project.up(
File "/usr/lib/python3/dist-packages/compose/project.py", line 565, in up
results, errors = parallel.parallel_execute(
File "/usr/lib/python3/dist-packages/compose/parallel.py", line 112, in parallel_execute
raise error_to_reraise
File "/usr/lib/python3/dist-packages/compose/parallel.py", line 210, in producer
result = func(obj)
File "/usr/lib/python3/dist-packages/compose/project.py", line 548, in do
return service.execute_convergence_plan(
File "/usr/lib/python3/dist-packages/compose/service.py", line 545, in execute_convergence_plan
return self._execute_convergence_create(
File "/usr/lib/python3/dist-packages/compose/service.py", line 460, in _execute_convergence_create
containers, errors = parallel_execute(
File "/usr/lib/python3/dist-packages/compose/parallel.py", line 112, in parallel_execute
raise error_to_reraise
File "/usr/lib/python3/dist-packages/compose/parallel.py", line 210, in producer
result = func(obj)
File "/usr/lib/python3/dist-packages/compose/service.py", line 465, in
lambda service_name: create_and_start(self, service_name.number),
File "/usr/lib/python3/dist-packages/compose/service.py", line 457, in create_and_start
self.start_container(container)
File "/usr/lib/python3/dist-packages/compose/service.py", line 627, in start_container
if "driver failed programming external connectivity" in ex.explanation:
TypeError: a bytes-like object is required, not 'str'

GeoIP location panel on Grafana doesn`t work

Hi everyone and thanks for this magnific guide.
I have just installed all and it is working except the map on grafana which shows no points.

Please someone could help me finding the problem??

I have already enabled Geo_Location Processor on Graylog configuration.

Thanks to all.

image

firewall logs

Hi, everything is working fine in Grafana. DPI is showing correct but only i dont see any firewall Logs. Firewall Logs keeps empty.
Is there a way to fix this?
image

Step 5/6 problem

Hello
When running docker-compose up:

Step 5/6 : RUN chmod +x /etc/graylog/server/getGeo.sh && /etc/graylog/server/getGeo.sh
---> Running in 7ba489eb790e
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (6) Could not resolve host: geolite.maxmind.com

My server can't resolve geolite.maxmind.com, because MaxMind has discontinued that host in favour of sign-in downloads.

How do I change that?

curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number

No connection to https://geolite.maxmind.com/
This causes the error:
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
tar (child): /etc/graylog/server/mm.tar.gz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
ERROR: Service 'graylog' failed to build: The command '/bin/sh -c chmod +x /etc/graylog/server/getGeo.sh && /etc/graylog/server/getGeo.sh' returned a non-zero code: 2

OS Debian 10
I would be glad for help or ideas on how to get around this problem.

Grafana Dashboard Error

When I go into the new dashboard with data and scroll down the page, changes are apparently made. It wants me to save every time I leave, and it adds this metadata...

{
  "__inputs": [
    {
      "name": "DS_PFS_GRAYLOG",
      "label": "PFS Graylog",
      "description": "",
      "type": "datasource",
      "pluginId": "elasticsearch",
      "pluginName": "Elasticsearch"
    }
  ],
  "__requires": [
    {
      "type": "datasource",
      "id": "elasticsearch",
      "name": "Elasticsearch",
      "version": "1.0.0"
    },
    {
      "type": "grafana",
      "id": "grafana",
      "name": "Grafana",
      "version": "6.4.3"
    },
    {
      "type": "panel",
      "id": "grafana-piechart-panel",
      "name": "Pie Chart",
      "version": "1.3.9"
    },
    {
      "type": "panel",
      "id": "grafana-worldmap-panel",
      "name": "Worldmap Panel",
      "version": "0.2.1"
    },
    {
      "type": "panel",
      "id": "graph",
      "name": "Graph",
      "version": ""
    },
    {
      "type": "panel",
      "id": "savantly-heatmap-panel",
      "name": "Heatmap",
      "version": "0.2.0"
    },
    {
      "type": "panel",
      "id": "singlestat",
      "name": "Singlestat",
      "version": ""
    },
    {
      "type": "panel",
      "id": "table",
      "name": "Table",
      "version": ""
    }
  ],

Data seems wrong

When looking at the dashboards I am seeing numbers that don't seem to make sense.

Some totals are in the TB range for just the last 6 hours. Most data seems wrong on the NDPI interface dashboard. Any ideas on where to start looking? I followed your instructions and all went well; it just seems like something is off, as there is no way the data is correct.

Thanks.

firewall logs with ipv6 addresses

Graylog recognizes and filters IPv4 addresses correctly, but IPv6 addresses are not extracted or recognized from log messages.

Is this a known issue, or does it only occur in my installation?

Content pack issue, Graylog v3.3.4

Getting an error uploading the Content Pack on Graylog 3.3.4 -- from the log file:

Caused by: org.graylog2.contentpacks.exceptions.DivergingEntityConfigurationException: Expected Grok pattern for name "COMMONAPACHELOG": <%{IPORHOST:clientip} %{HTTPDUSER:ident} %{USER:auth} [%{HTTPDATE:timestamp}] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)>; actual Grok pattern: <%{IPORHOST:clientip} %{HTTPDUSER:ident} %{USER:auth} [%{HTTPDATE:timestamp;date;dd/MMM/yyyy:HH:mm:ss Z}] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)>

Looks like it has to do with this entry in the content pack:

{ "v": "1", "type": { "name": "grok_pattern", "version": "1" }, "id": "7482a5f4-868c-4ef2-839f-a22141445c5c", "data": { "name": "COMMONAPACHELOG", "pattern": "%{IPORHOST:clientip} %{HTTPDUSER:ident} %{USER:auth} \\[%{HTTPDATE:timestamp}\\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes}|-)" }, "constraints": [ { "type": "server-version", "version": ">=3.1.2+9e96b08" } ] },

Removing this entry allows the content pack to install, but breaks some of the output.

Not sure why this is happening. Any ideas?
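One workaround, sketched below against Graylog's REST API: delete the pre-existing, diverging COMMONAPACHELOG pattern first, then re-upload the content pack so it can create its own version. The `/system/grok` routes and the mandatory `X-Requested-By` header are standard Graylog API conventions; the host, credentials, and the jq-free grep are assumptions, so adjust them to your setup.

```shell
GL=http://localhost:9000/api
AUTH=admin:admin   # assumed default credentials

# List grok patterns and locate the conflicting COMMONAPACHELOG entry
# (field order inside the JSON objects is assumed; use jq if available).
curl -s -u "$AUTH" "$GL/system/grok" | grep -o '"name":"COMMONAPACHELOG"[^}]*'

# Delete it by id (placeholder below), then retry the content pack upload.
curl -s -u "$AUTH" -X DELETE -H 'X-Requested-By: cli' "$GL/system/grok/<pattern-id>"
```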

Removed index prefix in instructions

In your newest README, the updated screenshot in the index-creation instructions no longer shows the index prefix being set. Isn't that step important?

Can't connect to mongodb

So, I had everything working and then had some hardware problems. After resolving them, things wouldn't start cleanly. I didn't know if it was something I did or not, so I burned all the containers and started from scratch (a CentOS 8 VM; the VM itself was not rebuilt, only the Docker stack).

Now, as before, Graylog can't connect to mongo. All I get is this:
graylog_1 | 2020-06-23 13:23:35,123 INFO : org.mongodb.driver.cluster - Cluster description not yet available. Waiting for 30000 ms before timing out
graylog_1 | 2020-06-23 13:23:36,172 INFO : org.mongodb.driver.cluster - Exception in monitor thread while connecting to server mongo:27017
graylog_1 | com.mongodb.MongoSocketOpenException: Exception opening socket
graylog_1 | at com.mongodb.connection.SocketStream.open(SocketStream.java:62) ~[graylog.jar:?]
graylog_1 | at com.mongodb.connection.InternalStreamConnection.open(InternalStreamConnection.java:126) ~[graylog.jar:?]
graylog_1 | at com.mongodb.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:114) [graylog.jar:?]
graylog_1 | at java.lang.Thread.run(Thread.java:748) [?:1.8.0_232]
graylog_1 | Caused by: java.net.NoRouteToHostException: No route to host (Host unreachable)
graylog_1 | at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:1.8.0_232]
graylog_1 | at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[?:1.8.0_232]
graylog_1 | at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) ~[?:1.8.0_232]
graylog_1 | at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[?:1.8.0_232]
graylog_1 | at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:1.8.0_232]
graylog_1 | at java.net.Socket.connect(Socket.java:607) ~[?:1.8.0_232]
graylog_1 | at com.mongodb.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:59) ~[graylog.jar:?]
graylog_1 | at com.mongodb.connection.SocketStream.open(SocketStream.java:57) ~[graylog.jar:?]
graylog_1 | ... 3 more
graylog_1 | 2020-06-23 13:24:05,124 ERROR: org.graylog2.bindings.providers.MongoConnectionProvider - Error connecting to MongoDB: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, servers=[{address=mongo:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.NoRouteToHostException: No route to host (Host unreachable)}}]
graylog_1 | 2020-06-23 13:24:05,144 INFO : org.graylog2.shared.buffers.InputBufferImpl - Message journal is enabled.

and it doesn't start right. Grafana also errors out with a message about not being able to connect to download the pie-chart plugin. Elasticsearch and Cerebro start fine, as far as I can tell.

Any ideas?
T.
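One direction worth checking, sketched below: "No route to host" between containers on the same compose network is, on CentOS 8 in particular, often a firewalld/nftables interaction with Docker rather than anything in this stack. The container and network names are assumptions based on default docker-compose naming; the firewalld workaround is a common one, so verify it fits your security policy before applying it.

```shell
# Can the graylog container reach mongo at all?
# (names assumed from default docker-compose project naming)
docker network inspect docker_default
docker exec docker_graylog_1 ping -c1 -W2 mongo

# On CentOS 8, firewalld can drop inter-container traffic. Compose networks
# use a bridge named br-<first 12 chars of the network id>; trusting that
# interface is a common workaround:
NET_ID=$(docker network inspect -f '{{.Id}}' docker_default)
BRIDGE="br-$(printf %s "$NET_ID" | cut -c1-12)"
sudo firewall-cmd --zone=trusted --add-interface="$BRIDGE" --permanent
sudo firewall-cmd --reload
```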

No Docker

Hi there,

Might be a really dumb question, but are you considering creating a non-Docker version of this? I already have a Graylog & Elasticsearch instance running on bare metal and would like to use that.

Map not showing map

Here is a picture of what I am talking about: the world map is missing some tiles. I can zoom in and find detail. The entire map was there last night. Is anyone else having the same issue? See pic:
Screenshot_5

Grafana dashboard has no data after start of month

Hi,

Thanks for setting this up. It worked great through July, but after 1 August the logfile rotation kicked in and the indices don't seem to be working correctly, so the dashboard has no new updates.

Where might I need to look to check that the indexing/search is updating correctly at month end?

I am still getting pfSense messages in Graylog, and indices are still running.

pfsense-logs 2 indices, 55,209,052 documents, 33.9GiB default

Rotation Strategy: Index Time
Rotation Period : P1M
Index Retention Configuration - Delete Index (after Max number of indices = 12)
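A few checks worth running, sketched with curl against the stock Elasticsearch REST API. The `pfsense-logs` prefix is taken from the index set above; the `_deflector` write-alias naming is Graylog's convention (`<prefix>_deflector`). If Graylog is still writing fine, the usual suspect is the Grafana side: an Elasticsearch datasource pointed at a single concrete index stops matching once rotation creates `pfsense-logs_2`, so check that the datasource uses a wildcard index pattern such as `pfsense-logs_*`.

```shell
ES=http://localhost:9200

# Did rotation create a new index, and is it receiving documents?
curl -s "$ES/_cat/indices/pfsense-logs*?v"

# Which concrete index does Graylog's write alias point at now?
curl -s "$ES/pfsense-logs_deflector/_alias?pretty"
```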

Timeseries Database

Hello

When I try to configure InfluxDB in NTOP, the following error appears:

(image)

ERROR: org.graylog2.bootstrap.CmdLineTool

Hello

After a long time without problems, a restart now causes this issue: the Graylog server won't start up.

Any help?


2020-05-19 11:58:28,830 ERROR: org.graylog2.bootstrap.CmdLineTool - Invalid configuration
com.github.joschi.jadconfig.ParameterException: Couldn't convert value for parameter "root_timezone"
at com.github.joschi.jadconfig.JadConfig.processClassFields(JadConfig.java:141) ~[graylog.jar:?]
at com.github.joschi.jadconfig.JadConfig.process(JadConfig.java:99) ~[graylog.jar:?]
at org.graylog2.bootstrap.CmdLineTool.processConfiguration(CmdLineTool.java:351) [graylog.jar:?]
at org.graylog2.bootstrap.CmdLineTool.readConfiguration(CmdLineTool.java:344) [graylog.jar:?]
at org.graylog2.bootstrap.CmdLineTool.run(CmdLineTool.java:178) [graylog.jar:?]
at org.graylog2.bootstrap.Main.main(Main.java:50) [graylog.jar:?]
Caused by: com.github.joschi.jadconfig.ParameterException: Couldn't convert value "UTC +2" to DateTimeZone.
at com.github.joschi.jadconfig.jodatime.converters.DateTimeZoneConverter.convertFrom(DateTimeZoneConverter.java:26) ~[graylog.jar:?]
at com.github.joschi.jadconfig.jodatime.converters.DateTimeZoneConverter.convertFrom(DateTimeZoneConverter.java:12) ~[graylog.jar:?]
at com.github.joschi.jadconfig.JadConfig.convertStringValue(JadConfig.java:167) ~[graylog.jar:?]
at com.github.joschi.jadconfig.JadConfig.processClassFields(JadConfig.java:139) ~[graylog.jar:?]
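The stack trace points at the cause: "UTC +2" is not a valid Joda-Time zone ID, so DateTimeZoneConverter rejects it. A sketch of the fix, assuming the timezone is set via Graylog's standard environment-variable override in docker-compose.yml; use a canonical zone ID, or a fixed-offset Etc/ zone (note that Etc/GMT-2 means UTC+2, since the Etc/ zones invert the sign):

```yaml
# docker-compose.yml, graylog service (sketch; variable name follows
# Graylog's standard GRAYLOG_<parameter> override convention)
environment:
  - GRAYLOG_ROOT_TIMEZONE=Europe/Berlin   # or Etc/GMT-2 for a fixed +02:00 offset
```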


The last commit is breaking Kibana

Hi lephisto,

First of all, thanks for the work you are putting into this project; it makes me reconsider using pfSense as my home firewall (I used Untangle up until now). Your effort is really appreciated.

Now, on the topic: I saw an issue while starting the containers, as Kibana exits right away and there is no way to start it. After looking at the docker-compose file, I saw you added a line to disable Kibana; I understand this is something we can do once everything is up and running and we no longer need Kibana. Here is the commit; line 63 was the one causing the issue:

commit dc2fcd8

I think we should probably have instructions in the README regarding this portion, but it would be better to comment it out for those who try to set up the whole thing from scratch.

Again, thanks for your time on this project!

Have a nice one!

Nikolay

Service 'graylog' failed to build

I am running 64-bit Ubuntu Server on a Raspberry Pi 4 with 8 GB of RAM and a 128 GB SD card. When I try to run the docker-compose process I get the following error:

ubuntu@ubuntu:~/pfsense-analytics/Docker$ sudo docker-compose up -d
Building graylog
Step 1/6 : FROM graylog/graylog:3.1
 ---> ca38a27808e3
Step 2/6 : USER root
 ---> Using cache
 ---> 08a76a36e96d
Step 3/6 : RUN mkdir -pv /etc/graylog/server/
 ---> Running in d7d70564f82a
standard_init_linux.go:211: exec user process caused "exec format error"
ERROR: Service 'graylog' failed to build: The command '/bin/sh -c mkdir -pv /etc/graylog/server/' returned a non-zero code: 1

Not sure what the issue is here. Any assistance would be very much appreciated.
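"exec format error" at the very first RUN step usually means the image architecture does not match the host: graylog/graylog:3.1 is published for amd64 only, while a Raspberry Pi 4 running 64-bit Ubuntu is arm64. A quick way to confirm the mismatch, using standard uname and docker CLI calls:

```shell
# Host architecture: prints aarch64 (arm64) on a Pi 4 with 64-bit Ubuntu.
uname -m

# Architecture the pulled image was built for; an amd64 image will not run
# on an arm64 host without emulation.
docker image inspect graylog/graylog:3.1 --format '{{.Architecture}}'
```

If the architectures differ, you would need an arm64 build of the Graylog image or qemu user-mode emulation; there is no docker-compose flag that makes the amd64 image run natively.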

Error in Cerebro

Receiving this error:

"error": "JsResultException(errors:List((,List(JsonValidationError(List('attributes' is undefined on object: {\"name\":\"vJ9IPvf\",\"transport_address\":\"172.18.0.2:9300\",\"host\":\"172.18.0.2\",\"ip\":\"172.18.0.2\",\"version\":\"6.8.5\",\"build_flavor\":\"oss\",\"build_type\":\"docker\",\"build_hash\":\"78990e9\",\"roles\":[\"master\",\"data\",\"ingest\"],\"os\":{\"refresh_interval\":\"1s\",\"refresh_interval_in_millis\":1000,\"name\":\"Linux\",\"pretty_name\":\"CentOS Linux 7 (Core)\",\"arch\":\"amd64\",\"version\":\"4.15.0-101-generic\",\"available_processors\":4,\"allocated_processors\":4},\"jvm\":{\"pid\":1,\"version\":\"13.0.1\",\"vm_name\":\"OpenJDK 64-Bit Server VM\",\"vm_version\":\"13.0.1+9\",\"vm_vendor\":\"AdoptOpenJDK\",\"start_time\":\"2020-05-26T03:05:04.547Z\",\"start_time_in_millis\":1590462304547,\"mem\":{\"heap_init\":\"1gb\",\"heap_init_in_bytes\":1073741824,\"heap_max\":\"990.7mb\",\"heap_max_in_bytes\":1038876672,\"non_heap_init\":\"7.3mb\",\"non_heap_init_in_bytes\":7667712,\"non_heap_max\":\"0b\",\"non_heap_max_in_bytes\":0,\"direct_max\":\"0b\",\"direct_max_in_bytes\":0},\"gc_collectors\":[\"ParNew\",\"ConcurrentMarkSweep\"],\"memory_pools\":[\"CodeHeap 'non-nmethods'\",\"Metaspace\",\"CodeHeap 'profiled nmethods'\",\"Compressed Class Space\",\"Par Eden Space\",\"Par Survivor Space\",\"CodeHeap 'non-profiled nmethods'\",\"CMS Old 
Gen\"],\"using_compressed_ordinary_object_pointers\":\"true\",\"input_arguments\":[\"-Xms1g\",\"-Xmx1g\",\"-XX:+UseConcMarkSweepGC\",\"-XX:CMSInitiatingOccupancyFraction=75\",\"-XX:+UseCMSInitiatingOccupancyOnly\",\"-Des.networkaddress.cache.ttl=60\",\"-Des.networkaddress.cache.negative.ttl=10\",\"-XX:+AlwaysPreTouch\",\"-Xss1m\",\"-Djava.awt.headless=true\",\"-Dfile.encoding=UTF-8\",\"-Djna.nosys=true\",\"-XX:-OmitStackTraceInFastThrow\",\"-Dio.netty.noUnsafe=true\",\"-Dio.netty.noKeySetOptimization=true\",\"-Dio.netty.recycler.maxCapacityPerThread=0\",\"-Dlog4j.shutdownHookEnabled=false\",\"-Dlog4j2.disable.jmx=true\",\"-Djava.io.tmpdir=/tmp/elasticsearch-125629727188914072\",\"-XX:+HeapDumpOnOutOfMemoryError\",\"-XX:HeapDumpPath=data\",\"-XX:ErrorFile=logs/hs_err_pid%p.log\",\"-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m\",\"-Djava.locale.providers=COMPAT\",\"-XX:UseAVX=2\",\"-Des.cgroups.hierarchy.override=/\",\"-Des.path.home=/usr/share/elasticsearch\",\"-Des.path.conf=/usr/share/elasticsearch/config\",\"-Des.distribution.flavor=oss\",\"-Des.distribution.type=docker\"]}}),WrappedArray())))))"
