
Comments (23)

enotspe commented on July 20, 2024

I really haven't tried any other formats besides UDP. I am sorry, man.

enotspe commented on July 20, 2024

If src_port does not exist, it should not generate source.port either.

I have plenty of logs that don't have all the ECS fields under "copy".

Can you post an example of the logs that are not reaching ES?
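
For illustration, a minimal sketch of that conditional behavior in a Logstash filter; the field names follow the discussion, but the exact filter is an assumption, not the project's actual pipeline:

    filter {
      # Only create the ECS field when the corresponding Fortinet field exists
      if [srcport] {
        mutate { copy => { "srcport" => "[source][port]" } }
      }
      if [srcip] {
        mutate { copy => { "srcip" => "[source][ip]" } }
      }
    }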

nicpenning commented on July 20, 2024

Here is an example that does not make it to Elastic:

<190>date=2020-04-17 time=15:02:01 devname="FIRE-ITF-Internet-2" devid="FG39E6T019900014" logid="1059028704" type="utm" subtype="app-ctrl" eventtype="app-ctrl-all" level="information" vd="root" eventtime=1587153721 appid=24466 user="TEST12009" group="FSAE Webmail" srcip=192.168.205.71 dstip=8.8.8.8 srcport=1 dstport=8 srcintf="port3" srcintfrole="dmz" dstintf="port2" dstintfrole="wan" proto=1 service="PING" direction="outgoing" policyid=449 sessionid=162230838 applist="Internet_AC_Webmail" appcat="Network.Service" app="Ping" action="pass" incidentserialno=2026057643 msg="Network.Service: Ping," apprisk="elevated"

Note: I don't have any DROP filters in my pipeline.


enotspe commented on July 20, 2024

Does it never get to Elastic? I mean, if you generate the same log again (at a different time), does it get to Elastic?

enotspe commented on July 20, 2024

I don't see anything wrong with your log.

nicpenning commented on July 20, 2024

I am not sure where it is going. It does not get properly parsed, and I can't search on srcip/dstip or source.ip/destination.ip. I tried looking by session ID and other values but can't seem to find it anywhere. Also, I don't see any errors being written to the error index either. Any other thoughts?

whataboutpereira commented on July 20, 2024

Just a stab. UDP dropout?

nicpenning commented on July 20, 2024

Not sure what a UDP dropout is, but I can guarantee that the packet makes it to the Logstash server since I see it in Wireshark. The text I posted is from the packet. Does that help?

whataboutpereira commented on July 20, 2024

Do you have a dead letter queue in Logstash?

nicpenning commented on July 20, 2024

No, but looking into that sounds like a genius idea! Any tips/hints on using the DLQ for this use case?

nicpenning commented on July 20, 2024

More details:

So my original test is running ping from a Windows host to a specific address and watching the information arrive in SIEM A while also watching for it in SIEM B (Elastic).

SIEM A gets the log every time, but SIEM B (Elastic) is intermittent.

I ran ping twice, and only 1 of the 2 logs made it through to Elastic and was indexed. The other is still somewhere in the unknown.

In SIEM A, both logs show up with no problem, and I can verify that both packets made it to the Logstash server.

Maybe Logstash is overloaded and some events are silently dropping? I am hoping the DLQ will provide some answers, as I have no clue where the second document is going.

So, long story short, the data will sometimes make it through to Elastic and sometimes it won't.

Here is the rawest form of the packet I can obtain as printable text:

Success
PV¾°,,kõxE$æ=D¤ÿ¾¤C#¬y
ó<190>date=2020-04-21 time=08:46:17 devname="FIRE-ITF-Internet-2" devid="FG39E6T019900014" logid="1059028704" type="utm" subtype="app-ctrl" eventtype="app-ctrl-all" level="information" vd="root" eventtime=1587476777 appid=24466 user="TEST12009" group="FSAE Webmail" srcip=192.168.205.71 dstip=8.8.8.8 srcport=1 dstport=8 srcintf="port3" srcintfrole="dmz" dstintf="port2" dstintfrole="wan" proto=1 service="PING" direction="outgoing" policyid=449 sessionid=2467483494 applist="Internet_AC_Webmail" appcat="Network.Service" app="Ping" action="pass" incidentserialno=2028263106 msg="Network.Service: Ping," apprisk="elevated"

Failure
PV¾°,,kõxEQ3=Ûö¤ÿ¾¤C#¬yô<190>date=2020-04-21 time=08:46:52 devname="FIRE-ITF-Internet-2" devid="FG39E6T019900014" logid="1059028704" type="utm" subtype="app-ctrl" eventtype="app-ctrl-all" level="information" vd="root" eventtime=1587476812 appid=24466 user="TEST12009" group="FSAE Webmail" srcip=192.168.205.71 dstip=8.8.8.8 srcport=1 dstport=8 srcintf="port3" srcintfrole="dmz" dstintf="port2" dstintfrole="wan" proto=1 service="PING" direction="outgoing" policyid=449 sessionid=2467483494 applist="Internet_AC_Webmail" appcat="Network.Service" app="Ping" action="pass" incidentserialno=2028263588 msg="Network.Service: Ping," apprisk="elevated"

Maybe Logstash isn't handling the UDP traffic on the second event, since it seems not to have that line break or something? I will continue to evaluate.
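
As a hedged aside (not something tried in the thread), one quick way to see whether the operating system itself is discarding datagrams is to watch its UDP statistics while the test traffic is flowing:

    # Windows: check the "Received Errors" and "No Ports" counters
    netstat -s -p udp

    # Linux equivalent: look for "packet receive errors" / "receive buffer errors"
    netstat -su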


nicpenning commented on July 20, 2024

After looking at more events, I will say 90% of this log type doesn't get ingested. I am opening a ticket with Elastic to see if Logstash is at fault here.

nicpenning commented on July 20, 2024

I took the failing syslog messages, added them to a file called ingest.txt, and used input { file {} } to read the data in and run it through the pipeline, and it worked every time.

I have tried adding a tag called testing when the dstip is my testing IP. When the log gets ingested I see it in my output, but when it doesn't get ingested, it never makes it to the output at all. So the issue lives somewhere between the input and the filter.
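
For reference, a minimal sketch of that file-based replay test; the path, the kv parsing, and the tag condition are illustrative assumptions (8.8.8.8 is the test destination from the logs above):

    input {
      file {
        path           => "/tmp/ingest.txt"   # file holding the captured syslog lines
        start_position => "beginning"
        sincedb_path   => "/dev/null"         # re-read from the start on every run
      }
    }

    filter {
      kv {
        source      => "message"              # parse the key=value pairs
        value_split => "="
      }
      if [dstip] == "8.8.8.8" {
        mutate { add_tag => [ "testing" ] }   # mark the test traffic
      }
    }

    output {
      stdout { codec => rubydebug }
    }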

enotspe commented on July 20, 2024

From what you said, it looks like it is more on the input side. If it were the filter, it would not have worked when ingesting from a file either. However, the UDP input is quite straightforward, so not many ideas come to mind. I would definitely suggest you enable the dead letter queue as @whataboutpereira recommended.

Just dead_letter_queue.enable: true should be enough to start logging errors.
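
A minimal sketch of enabling the dead letter queue in logstash.yml. Worth noting: the DLQ only captures events that the Elasticsearch output rejects (mapping errors and the like), not UDP datagrams that never reach the pipeline in the first place.

    # logstash.yml
    dead_letter_queue.enable: true
    # Optional: cap the per-pipeline DLQ size on disk (1024mb is the default)
    dead_letter_queue.max_bytes: 1024mb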

nicpenning commented on July 20, 2024

The DLQ did not show any errors besides the sent/recv delta with the long number, which is covered in a separate issue.

It seems very clean. I am trying to debug the input to see if the packet makes it into the input at all.
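
One hedged way to check whether datagrams are at least being turned into events by the input is Logstash's monitoring API, which reports how many events have entered and left the pipeline:

    # Event counters for the running Logstash instance (default API port 9600)
    curl -s 'http://localhost:9600/_node/stats/events?pretty'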

nicpenning commented on July 20, 2024

@enotspe, do you ingest logs straight from the Fortigate firewalls, or do you send them to a different syslog server before forwarding them on to Logstash?

enotspe commented on July 20, 2024

@nicpenning straight from the Fortigate to Logstash.

nicpenning commented on July 20, 2024

Okay, thanks for letting me know. The last possibility is that there are a lot of events coming in for such a small test environment, and maybe Logstash can't handle them all.

nicpenning commented on July 20, 2024

Working with support, I believe this is the best answer to this situation, given the erratic nature of when logs get ingested. I am leveraging the persistent queue now instead of the in-memory queue, and it seems to have increased the chances of my logs making it through the input. However, due to limited resources in the testing environment, the logs are 10 minutes behind. I will need to tweak some settings, but I firmly believe this is the most valid answer to this problem:

When a Logstash pipeline is applying back-pressure to its inputs (e.g., when all of its workers are busy processing and either using the in-memory queue or a Persistent Queue that is full), those inputs become blocked and are unable to push new events into the queue. In the case of the UDP input, this kind of back-pressure prevents it from reading additional bytes from the buffers of inbound packets. These buffers are maintained by the OS, and the OS will reap these buffers instead of letting them hog resources.

If this is the case, there are a number of knobs we can use to increase the likelihood that events make it through (see the sketch after this list):

  1. Tune the OS's UDP receive buffers by increasing their size. How this is done is OS-dependent, but it will give the OS a bigger pool of space in which to put inbound buffers, decreasing its need to reap existing buffers as new ones arrive.
  2. Use the Persistent Queue: this decouples the inputs from the rest of the pipeline, allowing the UDP input in this case to read from the inbound UDP buffers as quickly as possible, reducing the likelihood of the OS reaping them before they can be read. The PQ allows a Logstash pipeline to absorb bursts in traffic, but if the input rate consistently exceeds the pipeline's processing throughput, a full PQ will put back-pressure on the inputs in the same manner as above.
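
A minimal sketch of knob 2 in logstash.yml, with knob 1 noted in comments; the values are illustrative assumptions, not settings from the thread:

    # logstash.yml -- persistent queue so the udp input can drain the OS buffers quickly
    queue.type: persisted
    queue.max_bytes: 4gb        # illustrative size; must fit on the disk under path.data

    # Knob 1 is OS-dependent. On Linux, for example, the kernel's UDP receive buffer
    # ceiling can be raised with something like:
    #   sysctl -w net.core.rmem_max=33554432
    # On Windows, the AFD registry parameters mentioned later in this thread play a similar role.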

enotspe commented on July 20, 2024

Thanks for sharing, @nicpenning!

nicpenning commented on July 20, 2024

You're welcome, @enotspe!

Quick question in regard to this: is there any reason why the syslog input is not being used, but instead the udp input is?

nicpenning commented on July 20, 2024

Going down this rabbit hole, I found that Logstash may not support the type of "reliable" syslog format that the Fortigate sends out. I tried every variation of TCP, syslog, relp, etc.

If it is possible to use reliable syslog, please do let me know!

This is what I see when I try to use TCP and reliable mode from the Fortigate:
[screenshot not reproduced]
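
For reference, one of the plainer TCP-based variations one might try is sketched below; whether it can handle the Fortigate's reliable-mode framing is exactly the open question here, so this is illustrative only:

    input {
      tcp {
        port  => 5140     # illustrative port
        codec => line     # one newline-delimited record per event
      }
    }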

nicpenning commented on July 20, 2024

My solution was increasing the UDP buffers by configuring a setting in the Windows registry to use roughly a 100 MB buffer instead of the default, just to be safe.

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Afd\Parameters]
DefaultReceiveWindow = 163840
DefaultSendWindow = 163840

http://smallvoid.com/article/winnt-winsock-buffer.html

I also used persistent queues to have a buffer there as well. Lastly, I used a dedicated Logstash node with 16 GB of RAM and 6 cores to handle all of the incoming logs. At about 5k events per second, I am no longer seeing dropped logs, and they are searchable within 5 seconds.
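
On the Logstash side of the same problem, the udp input itself exposes buffer- and worker-related options; a hedged sketch with illustrative values, not the settings used in the thread:

    input {
      udp {
        port                 => 5140        # illustrative syslog port
        workers              => 4           # threads pulling packets off the socket
        queue_size           => 10000       # in-memory packet queue inside the input
        receive_buffer_bytes => 16777216    # request a larger socket receive buffer from the OS
      }
    }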

The lesson learned is that while UDP may not be reliable, it is crucial to have significant resources so that ingestion stays efficient and Logstash does not miss UDP packets in the Windows buffer because of back-pressure, as mentioned above.

Thanks everyone!

