
Comments (5)

Cyb3rSn0rlax commented on July 20, 2024

Hi,
You can use something similar to this :


- pipeline.id: beats_processor
  config.string: |
    input { beats { port => 5044 } }
    output { pipeline { send_to => [beats] } }

- pipeline.id: syslog_processor
  #queue.type: persisted
  config.string: |
    input { syslog { port => 514 } }
    output { pipeline { send_to => [firewalls] } }

- pipeline.id: firewalls
  #queue.type: persisted
  path.config: "/etc/logstash/conf.d/01-input.conf"

- pipeline.id: observer_enrichment
  #queue.type: persisted
  path.config: "/etc/logstash/conf.d/11-observer_enrichment.conf"

- pipeline.id: forcepoint_leef_parser
  #queue.type: persisted
  path.config: "/etc/logstash/conf.d/12-forcepoint_leef_parser.conf"

- pipeline.id: kv_syslog
  #queue.type: persisted
  path.config: "/etc/logstash/conf.d/21-kv_syslog.conf"

- pipeline.id: fortigate_2_ecs
  #queue.type: persisted
  path.config: "/etc/logstash/conf.d/31-fortigate_2_ecs.conf"

- pipeline.id: fortiweb_2_ecs
  #queue.type: persisted
  path.config: "/etc/logstash/conf.d/32-fortiweb_2_ecs.conf"

- pipeline.id: geo_enrichment
  #queue.type: persisted
  path.config: "/etc/logstash/conf.d/41-geo_enrichment.conf"

- pipeline.id: blacklist
  #queue.type: persisted
  path.config: "/etc/logstash/conf.d/blacklist.conf"

- pipeline.id: logstash_enrichment
  #queue.type: persisted
  path.config: "/etc/logstash/conf.d/51-logstash_enrichment.conf"

- pipeline.id: drop
  #queue.type: persisted
  path.config: "/etc/logstash/conf.d/61-drop.conf"

- pipeline.id: output
  #queue.type: persisted
  path.config: "/etc/logstash/conf.d/02-output.conf"

- pipeline.id: beats
  #queue.type: persisted
  path.config: "/etc/logstash/conf.d/02-beats.conf"

I changed my input file since I receive everything on port 514, so my configuration first receives everything over syslog and then tags events according to their log-source IP, but you get the idea.
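The send_to targets above are virtual addresses; each downstream pipeline picks its events up with a pipeline input. The actual contents of 01-input.conf are not shown in this thread, so this is only a minimal sketch of what the firewalls pipeline could look like; the example IPs and the hand-off to observer_enrichment are assumptions based on the pipeline names:

input {
  pipeline { address => firewalls }
}
filter {
  # Tag events by the sending device (addresses are placeholders)
  if [host] == "192.0.2.10" {
    mutate { add_tag => ["fortigate"] }
  } else if [host] == "192.0.2.20" {
    mutate { add_tag => ["fortiweb"] }
  }
}
output {
  # Assumed next hop, inferred from the pipeline names above
  pipeline { send_to => [observer_enrichment] }
}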


Whysmerhill commented on July 20, 2024

Hi,

Thanks a lot for this example!
But is it resource-efficient to run that many pipelines?
As I understand from the Logstash documentation, it comes with a cost:

That said, it’s important to take into account resource competition between the pipelines, given that the default values are tuned for a single pipeline

Did you need to reduce the workers per pipeline, or in your case does it not change much having one pipeline vs. many?


Cyb3rSn0rlax commented on July 20, 2024

Well, pipelines come with pros and cons, and it depends on how you are going to use them; for instance, on which architectural pattern you choose:

The distributor pattern
The output isolator pattern
The forked path pattern
The collector pattern

Each one is suitable for a particular use case; the distributor pattern, for instance, is sketched below.
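For reference, the distributor pattern from the Logstash pipeline-to-pipeline documentation looks roughly like this: one intake pipeline routes each event to a specialized downstream pipeline based on a condition (the [type] values here are illustrative, not from this project):

input { beats { port => 5044 } }
output {
  # Route each event to the matching downstream pipeline
  if [type] == "apache" {
    pipeline { send_to => [weblogs] }
  } else if [type] == "system" {
    pipeline { send_to => [syslog_pipe] }
  } else {
    pipeline { send_to => [fallback] }
  }
}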
One of the cons of an all-in-one pipeline is that if something fails I won't get all of my data. For me the goal was to define the logic well, make it as modular as I can, and make it easier to read, since I am receiving all the firewall data on port 514 (which is not the case for this project). In my case I am using a 32 GB RAM all-in-one node, with 16 GB for Elasticsearch, 3 GB for Logstash, and 1 primary shard per index. So far I am able to receive logs from 3 firewalls, 1 WAF, 1 Windows server, and 1 Linux box. The only time I started seeing some latency was when I started adding dashboards and Canvas visualizations, since querying also reduces performance. Anyway, this was just for testing purposes.


enotspe commented on July 20, 2024

I agree, pipelines come with a cost, but the flexibility is worth it. I manage 5 Logstash instances, none below 16 GB RAM, and I use central pipeline management from Kibana. There are no fewer than 10 firewalls per Logstash, with log-all on most (all) policies, including the implicit deny. At the very least, we don't want Logstash to be a bottleneck for ingesting logs.

You can tune your resources per pipeline for better performance, and ES is improving its per-pipeline monitoring capabilities, so I think pipelines are the way to go.
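For instance, pipelines.yml accepts per-pipeline worker and batch settings; a sketch with purely illustrative values:

- pipeline.id: syslog_processor
  pipeline.workers: 2
  pipeline.batch.size: 250
  path.config: "/etc/logstash/conf.d/01-input.conf"

- pipeline.id: fortigate_2_ecs
  pipeline.workers: 4
  path.config: "/etc/logstash/conf.d/31-fortigate_2_ecs.conf"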

However, I agree that the number of pipelines should be reduced. In the latest commits I have deprecated the logstash_enrichment pipeline; we replaced it with environment-variable calls on the input pipelines. The next step is to move all lookups to the enrich processor.
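A minimal sketch of the environment-variable approach, assuming a hypothetical OBSERVER_NAME variable (Logstash expands ${VAR:default} references in the config at load time; the actual variable names used in the repo are not shown here):

filter {
  mutate {
    # OBSERVER_NAME is a hypothetical per-node environment variable;
    # ":unknown" is the fallback value if it is unset
    add_field => { "[observer][name]" => "${OBSERVER_NAME:unknown}" }
  }
}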

We needed a flexible way to centrally manage all Logstash instances and separate the ingest logic from the enrichment data. We can load different dictionaries into every Logstash while keeping the logic the same.

Where we struggle is with SNMP; there doesn't seem to be a flexible way to poll our firewalls. I mean, IPs and OIDs need to be hardcoded in Logstash pipelines.
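To illustrate the limitation: with the snmp input, both the target hosts and the OIDs are baked into the pipeline config (the host and OID below are placeholders; 1.3.6.1.2.1.1.3.0 is sysUpTime):

input {
  snmp {
    # Hosts and OIDs must be hardcoded per pipeline,
    # which is the inflexibility described above
    hosts    => [{ host => "udp:192.0.2.1/161" community => "public" }]
    get      => ["1.3.6.1.2.1.1.3.0"]
    interval => 60
  }
}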


enotspe commented on July 20, 2024

pipelines.yml

