enotspe / fortinet-2-elasticsearch
Fortinet products logs to Elasticsearch
License: Apache License 2.0
Hi,
I'm not an ELK specialist!
I'm trying to use this in my ELK stack, but I can't get it installed.
Is there any help on where each file should go?
I've downloaded the raw NDJSON files, but when I go into the UI (Stack Management → Kibana → Saved Objects → Import), I get "Sorry, there was an error. The file could not be processed". I've tried all 4 of the NDJSON files in the GitHub repository. I'm coming to Fortinet from Palo Alto, so I'd love to have pre-built dashboards that I can look at and tear apart to see how it all works.
Any suggestions?
Thank you
Hi @enotspe
I get an error in the ruby filter when using the config.
I haven't changed any of the config, only the input port and the Elasticsearch output.
[ERROR][logstash.filters.ruby ][main][a87af76ae105d59b87fe27c4e7659d1c6cc7ec07a265cd75c57200456445fbc9] Ruby exception occurred: can't convert Array into an exact number {:class=>"TypeError", :backtrace=>["org/jruby/RubyTime.java:510:in `localtime'", "(ruby filter code):5:in `block in filter_method'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-ruby-3.1.8/lib/logstash/filters/ruby.rb:96:in `inline_script'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-ruby-3.1.8/lib/logstash/filters/ruby.rb:89:in `filter'", "/usr/share/logstash/logstash-core/lib/logstash/filters/base.rb:159:in `do_filter'", "/usr/share/logstash/logstash-core/lib/logstash/filters/base.rb:178:in `block in multi_filter'", "org/jruby/RubyArray.java:1821:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/filters/base.rb:175:in `multi_filter'", "org/logstash/config/ir/compiler/AbstractFilterDelegatorExt.java:134:in `multi_filter'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:300:in `block in start_workers'"]}
Can you reproduce this case?
Thank you
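For reference, that TypeError comes from Ruby's time handling: per the backtrace, `Time#localtime` (called on line 5 of the inline ruby filter code) received an Array where it expects a numeric UTC offset — often a sign that the field fed into it was parsed twice and became an array. A minimal sketch reproducing the same exception (the offset value here is only an illustration):

```ruby
# Passing an Array where Time#localtime expects a numeric UTC offset
# raises the same TypeError seen in the Logstash log.
begin
  Time.now.localtime([-18000])   # e.g. a field that became ["-18000", "-18000"]
rescue TypeError => e
  puts e.message   # can't convert Array into an exact number
end
```

Checking the raw event for duplicated date/time fields (or guarding the ruby filter against array values) would be a reasonable first step.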
Hi, could you help resolve this issue?
I copied the templates, the dashboard JSON, pipelines.yml, and all the .conf files to a VM with a fresh install of Elasticsearch and Kibana. When starting the Logstash service there is a warning:
logstash[8167]: [2022-11-17T13:20:39,686][WARN ][org.logstash.plugins.pipeline.PipelineBus][syslog-fortinet-fortigate_2_ecs][320c3995cf79ebc0724f34bd360b3e9193d7d44220d69c92749327fb9930cde9] Attempted to send event to 'syslog-fortinet-common_ecs-output' but that address was unavailable. Maybe the destination pipeline is down or stopping? Will Retry.
I've already modified the output {} to point to my Elasticsearch server, set the user and password, enabled SSL, and pointed the certificate setting at Elasticsearch's http_ca.crt as well.
So I made a fork where I added my README/docs for Rsyslog. Maybe you can check whether I missed something? See https://github.com/thetuxinator/fortinet-2-elasticsearch-rsyslog/blob/master/README-RSYSLOG.md — maybe the "omelasticsearch" rule is wrong?
regards
Hello,
How can I check that everything is right with the installation?
I imported all the dashboards into Kibana,
made PUT requests to import the templates for ecs-* and fortigate-*,
and configured syslog-ng to send data to Elasticsearch with the names the dashboards expect.
This is part of a JSON message generated by syslog-ng:
{"url":{"full":""},"source":{"user":{"email":""},"port":"62945","packets":"1","nat":{"port":"62945","ip":"185.183.174.41"},"mac":"64:31:50:37:fc:59","ip":"10.9.0.206","domain":"","bytes":"52"},"rule":{"uuid":"540e8a82-28a4-51ea-4ed6-a35f5b0063df","ruleset":"traffic","name":"","id":"23","category":"forward"},"observer":{"serial_number":"FG100D3G12801312","name":"FORTI_111","ingress":{"interface":{"role":"lan","name":"9 VLAN"}},"egress":{"interface":{"role":"wan","name":"wan2"}}},"network":{"iana_number":"6","application":""},"message":"","log":{"level":"notice"},"host":{"vendor":"HP","type":"","os":{"version":"7","name":"Windows","family":""},"name":"","mac":"64:31:50:37:fc:59","ip":"10.9.0.206"},"fortios":{"service":"tcp/29622"}}
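One quick end-to-end check is to verify that a sample line actually parses as JSON and carries the ECS field names the dashboards query. A small sketch, not part of the project — the sample is trimmed from the message above:

```ruby
require 'json'

# Parse a trimmed sample of the syslog-ng output and confirm a few of the
# ECS top-level objects the dashboards rely on are present.
sample = '{"source":{"ip":"10.9.0.206","port":"62945"},' \
         '"observer":{"serial_number":"FG100D3G12801312","name":"FORTI_111"},' \
         '"log":{"level":"notice"}}'
event = JSON.parse(sample)
%w[source observer log].each do |key|
  raise "missing ECS object: #{key}" unless event.key?(key)
end
puts event.dig("source", "ip")   # 10.9.0.206
```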
Running into an issue with my setup. Some background:
I installed everything as per the instructions, but I'm getting this error:
Nov 14 13:32:17 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:17,417][INFO ][logstash.javapipeline ] Pipeline `syslog-fortinet-common_ecs-output` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
Nov 14 13:32:17 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:17,515][WARN ][org.logstash.plugins.pipeline.PipelineBus][syslog-fortinet-fortigate-input5424-kv][6aab6188921cec0832a0712bc324ef942bf88174229dcfed0e9b06c29785d59a] Attempted to send event to 'syslog-fortinet-fortigate_2_ecsv2' but that address was unavailable. Maybe the destination pipeline is down or stopping? Will Retry.
Nov 14 13:32:17 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:17,555][INFO ][logstash.outputs.elasticsearch][syslog-fortinet-common_ecs-output] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://172.17.7.134:9200"]}
Nov 14 13:32:17 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:17,573][WARN ][logstash.outputs.elasticsearch][syslog-fortinet-common_ecs-output] You have enabled encryption but DISABLED certificate verification, to make sure your data is secure set `ssl_verification_mode => full`
Nov 14 13:32:18 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:18,125][INFO ][logstash.outputs.elasticsearch][syslog-fortinet-common_ecs-output] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://logstash_internal:[email protected]:9200/]}}
Nov 14 13:32:18 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:18,514][INFO ][logstash.javapipeline ][syslog-fortinet-fortigate_2_ecsv2] Pipeline Java execution initialization time {"seconds"=>2.38}
Nov 14 13:32:18 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:18,516][WARN ][org.logstash.plugins.pipeline.PipelineBus][syslog-fortinet-fortigate-input5424-kv][6aab6188921cec0832a0712bc324ef942bf88174229dcfed0e9b06c29785d59a] Attempted to send event to 'syslog-fortinet-fortigate_2_ecsv2' but that address was unavailable. Maybe the destination pipeline is down or stopping? Will Retry.
Nov 14 13:32:18 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:18,524][INFO ][logstash.javapipeline ][syslog-fortinet-fortigate_2_ecsv2] Pipeline started {"pipeline.id"=>"syslog-fortinet-fortigate_2_ecsv2"}
Nov 14 13:32:18 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:18,825][WARN ][logstash.outputs.elasticsearch][syslog-fortinet-common_ecs-output] Restored connection to ES instance {:url=>"https://logstash_internal:[email protected]:9200/"}
Nov 14 13:32:18 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:18,827][INFO ][logstash.outputs.elasticsearch][syslog-fortinet-common_ecs-output] Elasticsearch version determined (8.10.4) {:es_version=>8}
Nov 14 13:32:18 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:18,828][WARN ][logstash.outputs.elasticsearch][syslog-fortinet-common_ecs-output] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
Nov 14 13:32:18 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:18,843][WARN ][logstash.filters.grok ][syslog-fortinet-common_ecs-output] ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
Nov 14 13:32:19 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:19,807][WARN ][org.logstash.plugins.pipeline.PipelineBus][syslog-fortinet-fortigate_2_ecsv2][29a6aa27ca7002ac905931a3f66296c9a559f80ec562f0a6bc6cce6e7d356a3a] Attempted to send event to 'syslog-fortinet-common_ecs-output' but that address was unavailable. Maybe the destination pipeline is down or stopping? Will Retry.
Nov 14 13:32:20 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:20,031][INFO ][logstash.filters.geoip.downloadmanager] new database version detected? true
Nov 14 13:32:20 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:20,821][WARN ][org.logstash.plugins.pipeline.PipelineBus][syslog-fortinet-fortigate_2_ecsv2][29a6aa27ca7002ac905931a3f66296c9a559f80ec562f0a6bc6cce6e7d356a3a] Attempted to send event to 'syslog-fortinet-common_ecs-output' but that address was unavailable. Maybe the destination pipeline is down or stopping? Will Retry.
Nov 14 13:32:21 zavpemblogs31 logstash[6565]: [2023-11-14T13:32:21,822][WARN ][org.logstash.plugins.pipeline.PipelineBus][syslog-fortinet-fortigate_2_ecsv2][29a6aa27ca7002ac905931a3f66296c9a559f80ec562f0a6bc6cce6e7d356a3a] Attempted to send event to 'syslog-fortinet-common_ecs-output' but that address was unavailable. Maybe the destination pipeline is down or stopping? Will Retry.
Hello enotspe,
Is there any documentation? I'm not an ELK expert ^^
Thanks.
Regards.
Hi, I'm a newbie with the ELK Stack. I'm working now with FortiManager & FortiAnalyzer and I'm curious to try FortiDragon :). I started with the Bitnami ELK stack distribution and followed your guide step by step, then stopped and uninstalled Filebeat. I see syslog traffic coming in on the UDP port, but I don't see any Fortinet data stream... Any idea??
Thanks for your help. If you need to see any configuration files or logs I can show you; it's a lab environment.
Hi, I followed the instructions in the README, ran the script, copied the syslog-fortinet-fortigate-input5424-kv.conf, syslog-fortinet-fortigate_2_ecsv2.conf, and syslog-fortinet-common_ecs-output.conf files for Logstash, and added the following lines to pipelines.yml.
I get the following in logstash log
[2024-06-11T11:15:59,960][INFO ][logstash.runner ] Log4j configuration path used is: /etc/logstash/log4j2.properties
[2024-06-11T11:15:59,965][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"8.14.0", "jruby.version"=>"jruby 9.4.7.0 (3.1.4) 2024-04-29 597ff08ac1 OpenJDK 64-Bit Server VM 17.0.11+9 on 17.0.11+9 +indy +jit [x86_64-linux]"}
[2024-06-11T11:15:59,967][INFO ][logstash.runner ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, -Dlogstash.jackson.stream-read-constraints.max-number-length=10000, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11]
[2024-06-11T11:15:59,969][INFO ][logstash.runner ] Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to 200000000
[2024-06-11T11:15:59,969][INFO ][logstash.runner ] Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to 10000
[2024-06-11T11:16:00,847][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2024-06-11T11:16:02,070][INFO ][org.reflections.Reflections] Reflections took 266 ms to scan 1 urls, producing 132 keys and 468 values
[2024-06-11T11:16:02,990][INFO ][logstash.javapipeline ] Pipeline `syslog-fortinet-fortigate-input5424-kv` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2024-06-11T11:16:03,055][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][syslog-fortinet-fortigate-input5424-kv] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: send_to. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2024-06-11T11:16:03,073][WARN ][logstash.filters.grok ][syslog-fortinet-fortigate-input5424-kv] ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
[2024-06-11T11:16:03,243][INFO ][logstash.javapipeline ][syslog-fortinet-fortigate-input5424-kv] Starting pipeline {:pipeline_id=>"syslog-fortinet-fortigate-input5424-kv", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/syslog-fortinet-fortigate-input5424-kv.conf"], :thread=>"#<Thread:0x53ce660 /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2024-06-11T11:16:04,920][INFO ][logstash.javapipeline ][syslog-fortinet-fortigate-input5424-kv] Pipeline Java execution initialization time {"seconds"=>1.67}
[2024-06-11T11:16:04,968][WARN ][logstash.inputs.udp ][syslog-fortinet-fortigate-input5424-kv] 'source_ip_fieldname' is user customized, please check is has an ECS compatible name
[2024-06-11T11:16:04,977][INFO ][logstash.javapipeline ][syslog-fortinet-fortigate-input5424-kv] Pipeline started {"pipeline.id"=>"syslog-fortinet-fortigate-input5424-kv"}
[2024-06-11T11:16:05,053][INFO ][logstash.inputs.udp ][syslog-fortinet-fortigate-input5424-kv][fe74ca5748d39fae61e59663f13166ad5f93cbb8dc831b1c2cf24c15cbb69f56] Starting UDP listener {:address=>"0.0.0.0:5141"}
[2024-06-11T11:16:05,075][INFO ][logstash.inputs.udp ][syslog-fortinet-fortigate-input5424-kv][fe74ca5748d39fae61e59663f13166ad5f93cbb8dc831b1c2cf24c15cbb69f56] UDP listener started {:address=>"0.0.0.0:5141", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
[2024-06-11T11:16:06,261][WARN ][org.logstash.plugins.pipeline.PipelineBus][syslog-fortinet-fortigate-input5424-kv][1e52065c8a624f94557ccb67505d3ef84a30fd6dacdf0f92a1aaf76b98411ef9] Attempted to send event to 'syslog-fortinet-fortigate_2_ecsv2' but that address was unavailable. Maybe the destination pipeline is down or stopping? Will Retry.
[2024-06-11T11:16:07,072][INFO ][logstash.javapipeline ] Pipeline `syslog-fortinet-fortigate_2_ecsv2` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2024-06-11T11:16:07,162][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][syslog-fortinet-fortigate_2_ecsv2] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: send_to. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2024-06-11T11:16:07,165][INFO ][logstash.filters.kv ][syslog-fortinet-fortigate_2_ecsv2] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[2024-06-11T11:16:07,170][WARN ][logstash.filters.grok ][syslog-fortinet-fortigate_2_ecsv2] ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
[2024-06-11T11:16:07,277][WARN ][org.logstash.plugins.pipeline.PipelineBus][syslog-fortinet-fortigate-input5424-kv][1e52065c8a624f94557ccb67505d3ef84a30fd6dacdf0f92a1aaf76b98411ef9] Attempted to send event to 'syslog-fortinet-fortigate_2_ecsv2' but that address was unavailable. Maybe the destination pipeline is down or stopping? Will Retry.
[2024-06-11T11:16:07,280][INFO ][logstash.javapipeline ][syslog-fortinet-fortigate_2_ecsv2] Starting pipeline {:pipeline_id=>"syslog-fortinet-fortigate_2_ecsv2", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/syslog-fortinet-fortigate_2_ecsv2.conf"], :thread=>"#<Thread:0x3290ff10 /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2024-06-11T11:16:07,472][ERROR][logstash.plugins.registry] Unable to load plugin. {:type=>"filter", :name=>"tld"}
[2024-06-11T11:16:07,487][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:syslog-fortinet-common_ecs-output, :exception=>"Java::JavaLang::IllegalStateException", :message=>"Unable to configure plugins: (PluginLoadingError) Couldn't find any filter plugin named 'tld'. Are you sure this is correct? Trying to load the tld filter plugin resulted in this error: Unable to load the requested plugin named tld of type filter. The plugin is not installed.", :backtrace=>["org.logstash.config.ir.CompiledPipeline.(CompiledPipeline.java:120)", "org.logstash.execution.AbstractPipelineExt.initialize(AbstractPipelineExt.java:186)", "org.logstash.execution.AbstractPipelineExt$INVOKER$i$initialize.call(AbstractPipelineExt$INVOKER$i$initialize.gen)", "org.jruby.internal.runtime.methods.JavaMethod$JavaMethodN.call(JavaMethod.java:847)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuper(IRRuntimeHelpers.java:1363)", "org.jruby.ir.instructions.InstanceSuperInstr.interpret(InstanceSuperInstr.java:139)", "org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:363)", "org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:66)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:128)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:115)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:90)", "org.jruby.RubyClass.newInstance(RubyClass.java:949)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:90)", "org.jruby.ir.instructions.CallBase.interpret(CallBase.java:548)", "org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:363)", "org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:66)", 
"org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:88)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:238)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:225)", "org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:228)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:291)", "org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:324)", "org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:66)", "org.jruby.ir.interpreter.Interpreter.INTERPRET_BLOCK(Interpreter.java:118)", "org.jruby.runtime.MixedModeIRBlockBody.commonYieldPath(MixedModeIRBlockBody.java:136)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:66)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:58)", "org.jruby.runtime.Block.call(Block.java:144)", "org.jruby.RubyProc.call(RubyProc.java:354)", "org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:111)", "java.base/java.lang.Thread.run(Thread.java:840)"]}
[2024-06-11T11:16:08,281][WARN ][org.logstash.plugins.pipeline.PipelineBus][syslog-fortinet-fortigate-input5424-kv][1e52065c8a624f94557ccb67505d3ef84a30fd6dacdf0f92a1aaf76b98411ef9] Attempted to send event to 'syslog-fortinet-fortigate_2_ecsv2' but that address was unavailable. Maybe the destination pipeline is down or stopping? Will Retry.
[2024-06-11T11:16:08,568][INFO ][logstash.javapipeline ][syslog-fortinet-fortigate_2_ecsv2] Pipeline Java execution initialization time {"seconds"=>1.29}
[2024-06-11T11:16:08,586][INFO ][logstash.javapipeline ][syslog-fortinet-fortigate_2_ecsv2] Pipeline started {"pipeline.id"=>"syslog-fortinet-fortigate_2_ecsv2"}
[2024-06-11T11:16:09,613][WARN ][org.logstash.plugins.pipeline.PipelineBus][syslog-fortinet-fortigate_2_ecsv2][29a6aa27ca7002ac905931a3f66296c9a559f80ec562f0a6bc6cce6e7d356a3a] Attempted to send event to 'syslog-fortinet-common_ecs-output' but that address was unavailable. Maybe the destination pipeline is down or stopping? Will Retry.
[2024-06-11T11:16:09,626][ERROR][logstash.plugins.registry] Unable to load plugin. {:type=>"filter", :name=>"tld"}
[2024-06-11T11:16:09,634][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"Java::JavaLang::IllegalStateException", :message=>"Unable to configure plugins: (PluginLoadingError) Couldn't find any filter plugin named 'tld'. Are you sure this is correct? Trying to load the tld filter plugin resulted in this error: Unable to load the requested plugin named tld of type filter. The plugin is not installed.", :backtrace=>["org.logstash.config.ir.CompiledPipeline.(CompiledPipeline.java:120)", "org.logstash.execution.AbstractPipelineExt.initialize(AbstractPipelineExt.java:186)", "org.logstash.execution.AbstractPipelineExt$INVOKER$i$initialize.call(AbstractPipelineExt$INVOKER$i$initialize.gen)", "org.jruby.internal.runtime.methods.JavaMethod$JavaMethodN.call(JavaMethod.java:847)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuper(IRRuntimeHelpers.java:1363)", "org.jruby.ir.instructions.InstanceSuperInstr.interpret(InstanceSuperInstr.java:139)", "org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:363)", "org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:66)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:128)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:115)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:446)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:92)", "org.jruby.RubyClass.newInstance(RubyClass.java:949)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:446)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:92)", "org.jruby.ir.instructions.CallBase.interpret(CallBase.java:548)", 
"org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:363)", "org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:66)", "org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:88)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:238)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:225)", "org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:228)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:476)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:293)", "org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:324)", "org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:66)", "org.jruby.ir.interpreter.Interpreter.INTERPRET_BLOCK(Interpreter.java:118)", "org.jruby.runtime.MixedModeIRBlockBody.commonYieldPath(MixedModeIRBlockBody.java:136)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:66)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:58)", "org.jruby.runtime.Block.call(Block.java:144)", "org.jruby.RubyProc.call(RubyProc.java:354)", "org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:111)", "java.base/java.lang.Thread.run(Thread.java:840)"]}
[2024-06-11T11:16:10,617][WARN ][org.logstash.plugins.pipeline.PipelineBus][syslog-fortinet-fortigate_2_ecsv2][29a6aa27ca7002ac905931a3f66296c9a559f80ec562f0a6bc6cce6e7d356a3a] Attempted to send event to 'syslog-fortinet-common_ecs-output' but that address was unavailable. Maybe the destination pipeline is down or stopping? Will Retry.
Can this deployment pull from FortiAnalyzer syslog feeds or is this only possible from a FortiGate?
We have multiple FortiGates all connected to a FortiAnalyzer, so we'd like to run a single feed for all gateways.
Hello,
Data is received, ingested, and processed fine. But when a new day starts, ingestion gives this error:
Validation Failed: 1: this action would add [2] shards, but this cluster currently has [999]/[1000] maximum normal shards open
Any idea?
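That error means the cluster has hit its open-shard limit (`cluster.max_shards_per_node`, 1000 per data node by default), so the indices for the new day cannot be created. The durable fix is an ILM policy that deletes or shrinks old indices; a temporary workaround is raising the limit. A sketch of the settings body one might send with `PUT _cluster/settings` (the value 1500 is an arbitrary example, not a recommendation):

```ruby
require 'json'

# Build the request body for PUT _cluster/settings that raises the
# open-shard limit. Actually sending it (curl, Kibana Dev Tools, etc.)
# is left out; this only shows the shape of the setting.
body = { "persistent" => { "cluster.max_shards_per_node" => 1500 } }
puts JSON.pretty_generate(body)
```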
As per the title, it would be great if you could add dashboards for OpenSearch 2.x as well.
Logstash itself works the same, so that piece won't be needed.
@enotspe Excellent project. We are reviewing it for an academic project. I'm trying to test it; from the information given in the installation issue, the architecture corresponds to a local Logstash collector plus Elastic + Kibana Cloud.
I have followed the recommendations, but I still cannot get the collector working with the pipelines (apparently I am missing plugins). Could you please help me with the plugins that need to be installed on the collector and with the locations of the files (heartbeats, index, ingest, MIBs)?
I really appreciate your time and help.
Hello,
I am using the mutate filter copy => { "[ipaddr]" => "[dns][resolved_ip]" }, and ipaddr contains multiple values ("127.0.0.1, 192.168.0.3, 192.168.0.4"). When I try to ingest this into Elasticsearch, the dns.resolved_ip field is mapped as ip, so the error I am receiving is:
"status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [dns.resolved_ip] of type [ip] in document with id 'w3ZAWnEBlAHVcZpD_2dx'. Preview of field's value: '127.0.0.1, 192.168.0.3, 192.168.0.4'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"'127.0.0.1, 192.168.0.3, 192.168.0.4' is not an IP string literal."
I think the ipaddr value needs to be parsed to break the values out into an array, so the end result looks like this:
"dns": {
"resolved_ip": [
"127.0.0.1",
"192.168.0.3",
"192.168.0.4"
],
Instead of
"dns": {
"resolved_ip": [
"127.0.0.1, 192.168.0.3, 192.168.0.4"
],
Which is not an IP address but just a string of text.
Let me know if you need any clarifications. I am working on a solution to parse the ipaddr data but if you already have one, please provide!
Thanks!
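In case it helps, the transformation being asked for is just splitting the comma-separated string into an array of clean IP strings before the copy. A sketch of that parsing step (field names as in the message above):

```ruby
# Split the comma-separated value into an array of IP strings — the form
# an ip-mapped field like dns.resolved_ip will accept.
ipaddr = "127.0.0.1, 192.168.0.3, 192.168.0.4"
resolved = ipaddr.split(",").map(&:strip)
p resolved   # ["127.0.0.1", "192.168.0.3", "192.168.0.4"]
```

In Logstash itself this corresponds to a `mutate { split => ... }` on the field (using ", " as the separator) before or instead of the plain copy.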
Hello
I'd like to ask whether there are any future plans on the roadmap to integrate field- and document-level security for the data ingested by fortinet-2-elasticsearch. We have a UTM with many different customer integrations, the only differentiator being the security profile names.
I have seen with some of the dashboards it's possible to filter by these security profiles or even the subnets allocated to a customer but I would like to expand on that and create a "Space" for that customer granting read only access to the needed Indices. I have asked the elasticsearch team on how one can prevent a user from seeing data that's not relevant to them and I was shown the below:
I have however noticed that this option (grant access to specific fields) does not exist in our ELK stack. I'd like to clarify whether this is due to the way the data is ingested or due to our subscription level (currently Free and open Basic)?
General Info:
Fortigate Version: v7.0.11
ELK Stack Version: 8.12.1
Hello,
After following your implementation guide, I am getting the following error in my logstash logs:
{"create"=>{"_index"=>"logs-fortinet.fortigate.traffic,traffic,traffic-default", "_id"=>nil, "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"data_stream [logs-fortinet.fortigate.traffic,traffic,traffic-default] must not contain the following characters ['\\','/','*','?','\"','<','>','|',' ',',']"}}
I am running FortiOS v7.2.4 and a fresh installation of Logstash on Ubuntu.
Am I reading the above error correctly and Logstash is trying to insert into an index named "logs-fortinet.fortigate.traffic,traffic,traffic-default"? Any ideas on what I can do to troubleshoot?
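Yes, that is what the error says: the data stream name contains commas, which are forbidden characters. A hedged guess at the mechanism: if the field interpolated into the data stream name (the dataset part) was set three times, Logstash stores it as an array, and interpolating an array joins its values with commas — which would produce exactly `traffic,traffic,traffic`. A sketch of that behavior:

```ruby
# If the dataset field has become a three-element array, interpolation with
# a comma join yields the malformed data stream name from the error.
dataset = ["traffic", "traffic", "traffic"]
index_name = "logs-fortinet.fortigate.#{dataset.join(',')}-default"
puts index_name   # logs-fortinet.fortigate.traffic,traffic,traffic-default
```

It would be worth checking whether the pipeline that sets the dataset runs more than once (for example, duplicate config files in conf.d being concatenated into one pipeline).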
Hello,
Is it possible to skip Logstash and use the new Filebeat Fortinet module?
Regards
fortios.url contains the full URL when the type is utm with the subtype of virus.
It seems this is the only UTM subtype that does this instead of just putting in the path.
With the current way the pipelines are laid out in the project, I don't think we can inject an if/else if into the copy statements.
I re-engineered the pipelines to use if [field] { mutate { copy ... } } for all of them for more granular control.
This was my solution to ensure that the UTM virus logs put the full URL into url.full:
if [subtype] == "virus" {
  mutate { copy => { "[fortios][url]" => "[url][full]" } }
}
else if [fortios][url] {
  mutate { copy => { "[fortios][url]" => "[url][path]" } }
}
The full section would look like this; I'm not sure whether you want a PR for a change this large, or whether you have a better way of handling this logic.
if [type] == "traffic" {
if [app] {mutate { copy => { "[app]"=> "[network][application]" }}}
if [collectedemail] {mutate { copy => { "[collectedemail]"=> "[source][user][email]" }}}
if [comment] {mutate { copy => { "[comment]"=> "[rule][description]" }}}
if [dstcollectedemail] {mutate { copy => { "[dstcollectedemail]"=> "[destination][user][email]" }}}
if [dstintf] {mutate { copy => { "[dstintf]"=> "[observer][egress][interface][name]" }}}
if [dstintfrole] {mutate { copy => { "[dstintfrole]"=> "[observer][egress][interface][role]" }}}
if [dstip] {mutate { copy => { "[dstip]"=> "[destination][ip]" }}}
if [dstmac] {mutate { copy => { "[dstmac]"=> "[destination][mac]" }}}
if [dstname] {mutate { copy => { "[dstname]"=> "[destination][address]" }}}
if [dstport] {mutate { copy => { "[dstport]"=> "[destination][port]" }}}
if [duration] {mutate { copy => { "[duration]"=> "[event][duration]" }}}
if [group] {mutate { copy => { "[group]"=> "[source][user][group][name]" }}}
if [msg] {mutate { copy => { "[msg]"=> "[message]" }}}
if [policyid] {mutate { copy => { "[policyid]"=> "[rule][id]" }}}
if [policyname] {mutate { copy => { "[policyname]"=> "[rule][name]" }}}
if [policytype] {mutate { copy => { "[policytype]"=> "[rule][ruleset]" }}}
if [poluuid] {mutate { copy => { "[poluuid]"=> "[rule][uuid]" }}}
if [proto] {mutate { copy => { "[proto]"=> "[network][iana_number]" }}}
if [rcvdbyte] {mutate { copy => { "[rcvdbyte]"=> "[destination][bytes]" }}}
if [rcvdpkt] {mutate { copy => { "[rcvdpkt]"=> "[destination][packets]" }}}
if [sentbyte] {mutate { copy => { "[sentbyte]"=> "[source][bytes]" }}}
if [sentpkt] {mutate { copy => { "[sentpkt]"=> "[source][packets]" }}}
if [fortios][service] {mutate { copy => { "[fortios][service]"=> "[network][protocol]" }}}
if [sessionid] {mutate { copy => { "[sessionid]"=> "[network][session_id]" }}}
if [srcdomain] {mutate { copy => { "[srcdomain]"=> "[source][domain]" }}}
if [srcintf] {mutate { copy => { "[srcintf]"=> "[observer][ingress][interface][name]" }}}
if [srcintfrole] {mutate { copy => { "[srcintfrole]"=> "[observer][ingress][interface][role]" }}}
if [srcip] {mutate { copy => { "[srcip]"=> "[source][ip]" }}}
if [srcmac] {mutate { copy => { "[srcmac]"=> "[source][mac]" }}}
if [srcport] {mutate { copy => { "[srcport]"=> "[source][port]" }}}
if [tranip] {mutate { copy => { "[tranip]"=> "[destination][nat][ip]" }}}
if [tranport] {mutate { copy => { "[tranport]"=> "[destination][nat][port]" }}}
if [transip] {mutate { copy => { "[transip]"=> "[source][nat][ip]" }}}
if [transport] {mutate { copy => { "[transport]"=> "[source][nat][port]" }}}
if [unauthuser] {mutate { copy => { "[unauthuser]"=> "[source][user][name]" }}}
if [fortios][url] {mutate { copy => { "[fortios][url]"=> "[url][path]" }}}
if [dstunauthuser] {mutate { copy => { "[dstunauthuser]"=> "[destination][user][name]" }}}
if [fortios][user] {mutate { copy => { "[fortios][user]"=> "[source][user][name]" }}}
# ECS categorization fields
mutate {
add_field => { "[event][kind]" => "event" }
add_field => { "[event][category]" => "network" }
add_field => { "[event][type]" => "connection" }
}
if [action] == "deny" or [utmaction] == "block" {
mutate { add_field => { "[event][type]" => "denied" } }
}
else {
mutate { add_field => { "[event][type]" => "allowed" } }
}
if [action] == "start" {
mutate { add_field => { "[event][type]" => "start" } }
}
else {
mutate { add_field => { "[event][type]" => "end" } }
}
if [action] in [ "dns" , "ip-conn" ] {
mutate { add_field => { "[event][type]" => "error" } }
}
if [network][application] {
mutate { add_field => { "[event][type]" => "protocol" } }
}
}
# type=dns for version 6.0 and below. On 6.2, dns is subtype of utm
else if [type] == "utm" or [type] == "dns" {
if [fortios][agent] {mutate { copy =>{ "[fortios][agent]"=> "[user_agent][original]" }}}
if [app] {mutate { copy =>{ "[app]"=> "[network][application]" }}}
if [appcat] {mutate { copy =>{ "[appcat]"=> "[rule][category]" }}}
if [applist] {mutate { copy =>{ "[applist]"=> "[rule][ruleset]" }}}
if [dir] {mutate { copy =>{ "[dir]"=> "[network][direction]" }}}
if [dst_int] {mutate { copy =>{ "[dst_int]"=> "[observer][egress][interface][name]" }}}
if [dst_port] {mutate { copy =>{ "[dst_port]"=> "[destination][port]" }}}
if [dstintfrole] {mutate { copy =>{ "[dstintfrole]"=> "[observer][egress][interface][role]" }}}
if [dstip] {mutate { copy =>{ "[dstip]"=> "[destination][ip]" }}}
if [duration] {mutate { copy =>{ "[duration]"=> "[event][duration]" }}}
if [fortios][error] {mutate { copy =>{ "[fortios][error]"=> "[error][message]" }}}
if [errorcode] {mutate { copy =>{ "[errorcode]"=> "[error][code]" }}}
if [event_id] {mutate { copy =>{ "[event_id]"=> "[event][id]" }}}
if [eventtype] {mutate { copy =>{ "[eventtype]"=> "[event][action]" }}}
if [filehash] {mutate { copy =>{ "[filehash]"=> "[file][hash][crc32]" }}}
if [filename] {mutate { copy =>{ "[filename]"=> "[file][name]" }}}
if [filesize] {mutate { copy =>{ "[filesize]"=> "[file][size]" }}}
if [filetype] {mutate { copy =>{ "[filetype]"=> "[file][extension]" }}}
if [fortios][group] {mutate { copy =>{ "[fortios][group]"=> "[source][user][group][name]" }}}
if [ipaddr] {mutate { split => { "[ipaddr]" => ", " }}}
if [ipaddr] {mutate { copy =>{ "[ipaddr]"=> "[dns][resolved_ip]" }}}
if [msg] {mutate { copy =>{ "[msg]"=> "[message]" }}}
if [policy_id] {mutate { copy =>{ "[policy_id]"=> "[rule][id]" }}}
if [profile] {mutate { copy =>{ "[profile]"=> "[rule][ruleset]" }}}
if [proto] {mutate { copy =>{ "[proto]"=> "[network][iana_number]" }}}
if [qclass] {mutate { copy =>{ "[qclass]"=> "[dns][question][class]" }}}
if [qname] {mutate { copy =>{ "[qname]"=> "[dns][question][name]" }}}
if [qtype] {mutate { copy =>{ "[qtype]"=> "[dns][question][type]" }}}
if [rcvdbyte] {mutate { copy =>{ "[rcvdbyte]"=> "[destination][bytes]" }}}
if [reason] {mutate { copy =>{ "[reason]"=> "[event][reason]" }}}
if [sentbyte] {mutate { copy =>{ "[sentbyte]"=> "[source][bytes]" }}}
if [fortios][service] {mutate { copy =>{ "[fortios][service]"=> "[network][protocol]" }}}
if [session_id] {mutate { copy =>{ "[session_id]"=> "[network][session_id]" }}}
if [src_int] {mutate { copy =>{ "[src_int]"=> "[observer][ingress][interface][name]" }}}
if [src_port] {mutate { copy =>{ "[src_port]"=> "[source][port]" }}}
if [srcdomain] {mutate { copy =>{ "[srcdomain]"=> "[source][domain]" }}}
if [srcintfrole] {mutate { copy =>{ "[srcintfrole]"=> "[observer][ingress][interface][role]" }}}
if [srcip] {mutate { copy =>{ "[srcip]"=> "[source][ip]" }}}
if [srcmac] {mutate { copy =>{ "[srcmac]"=> "[source][mac]" }}}
if [unauthuser] {mutate { copy =>{ "[unauthuser]"=> "[source][user][name]" }}}
# Inconsistencies in UTM logging force us to place the UTM virus URL path into url.full, since it contains everything, not just the path.
if [subtype] =="virus" {
mutate { copy =>{ "[fortios][url]"=> "[url][full]" }}
}
else if [fortios][url] {
mutate { copy =>{ "[fortios][url]"=> "[url][path]" }}
}
if [vrf] {mutate { copy =>{ "[vrf]"=> "[network][vrf]" }}}
if [xid] {mutate { copy =>{ "[xid]"=> "[dns][id]" }}}
if [hostname] {mutate { copy =>{ "[hostname]"=> "[url][domain]" }}}
if [catdesc] {mutate { copy =>{ "[catdesc]"=> "[rule][category]" }}}
if [direction] {mutate { copy =>{ "[direction]"=> "[network][direction]" }}}
if [dstintf] {mutate { copy =>{ "[dstintf]"=> "[observer][egress][interface][name]" }}}
if [eventid] {mutate { copy =>{ "[eventid]"=> "[event][id]" }}}
if [locip] {mutate { copy =>{ "[locip]"=> "[source][ip]" }}}
if [locport] {mutate { copy =>{ "[locport]"=> "[source][port]" }}}
if [policyid] {mutate { copy =>{ "[policyid]"=> "[rule][id]" }}}
if [sessionid] {mutate { copy =>{ "[sessionid]"=> "[network][session_id]" }}}
if [srcintf] {mutate { copy =>{ "[srcintf]"=> "[observer][ingress][interface][name]" }}}
if [fortios][user] {mutate { copy =>{ "[fortios][user]"=> "[source][user][name]" }}}
if [remip] {mutate { copy =>{ "[remip]"=> "[destination][ip]" }}}
if [remport] {mutate { copy =>{ "[remport]"=> "[destination][port]" }}}
if [dstport] {mutate { copy =>{ "[dstport]" => "[destination][port]" }}}
if [srcport] {mutate { copy =>{ "[srcport]" => "[source][port]" }}}
}
else if [type] == "event" {
if [fortios][agent] {mutate { copy =>{ "[fortios][agent]"=> "[user_agent][original]" }}}
if [daddr] {mutate { copy =>{ "[daddr]"=> "[destination][address]" }}}
if [direction] {mutate { copy =>{ "[direction]"=> "[network][direction]" }}}
if [dstip] {mutate { copy =>{ "[dstip]"=> "[destination][ip]" }}}
if [dstport] {mutate { copy =>{ "[dstport]"=> "[destination][port]" }}}
if [duration] {mutate { copy =>{ "[duration]"=> "[event][duration]" }}}
if [fortios][error] {mutate { copy =>{ "[fortios][error]"=> "[error][message]" }}}
if [error_num] {mutate { copy =>{ "[error_num]"=> "[error][code]" }}}
if [fortios][file] {mutate { copy =>{ "[fortios][file]"=> "[file][name]" }}}
if [filesize] {mutate { copy =>{ "[filesize]"=> "[file][size]" }}}
if [fortios][group] {mutate { copy =>{ "[fortios][group]"=> "[user][group][name]" }}}
if [hostname] {mutate { copy =>{ "[hostname]"=> "[url][domain]" }}}
if [msg] {mutate { copy =>{ "[msg]"=> "[message]" }}}
if [policyid] {mutate { copy =>{ "[policyid]"=> "[rule][id]" }}}
if [proto] {mutate { copy =>{ "[proto]"=> "[network][iana_number]" }}}
if [rcvdbyte] {mutate { copy =>{ "[rcvdbyte]"=> "[destination][bytes]" }}}
if [saddr] {mutate { copy =>{ "[saddr]"=> "[source][address]" }}}
if [sentbyte] {mutate { copy =>{ "[sentbyte]"=> "[source][bytes]" }}}
if [fortios][service] {mutate { copy =>{ "[fortios][service]"=> "[network][protocol]" }}}
if [sess_duration] {mutate { copy =>{ "[sess_duration]"=> "[event][duration]" }}}
if [source_mac] {mutate { copy =>{ "[source_mac]"=> "[source][mac]" }}}
if [fortios][user] {mutate { copy =>{ "[fortios][user]"=> "[user][name]" }}}
if [fortios][url] {mutate { copy =>{ "[fortios][url]"=> "[url][path]" }}}
if [dst_host] {mutate { copy =>{ "[dst_host]"=> "[destination][address]" }}}
if [srcmac] {mutate { copy =>{ "[srcmac]"=> "[source][mac]" }}}
if [srcport] {mutate { copy =>{ "[srcport]"=> "[source][port]" }}}
if [srcip] {mutate { copy =>{ "[srcip]"=> "[source][ip]" }}}
}
Hello! I'm seeing an odd field in the index patterns for ecs-fortigate-*: Technology\"cat
I can only suspect it's erroneous. :)
Hi all,
I'm trying to install the solution on my ELK stack but I'm stuck running the load.sh script.
I have cloned the repo on the ELK server and run load.sh. I input my Elasticsearch URL http://10.10.10.1:5601 and use the credentials I use to log in to the ELK platform, which definitely work, but I get the error in the title: "Failed to connect to Elasticsearch. Please check your credentials and try again." Is there something else to be done? Is there another URL implied that I am not aware of?
Thank you!
Can you provide some examples of each of the dictionaries?
Hello,
I'm new to Logstash, so forgive me if my question is stupid.
What is the purpose of this line (file 40-fortigate_2_ecs, line 49):
remove_field => [ "agent", "error", "file", "group", "hash", "host", "interface", "log", "process", "server", "service", "url", "user" ]
Doesn't the preceding rename replace the selected fields anyway?
Best regards,
Hello,
I have found some additional fields in our Fortinet logs which seem useless and could be removed, like the N/A ones in 40-fortigate_2_ecs:
if [srccountry] == "Reserved" { mutate { remove_field => ["srccountry"] } }
if [dstcountry] == "Reserved" { mutate { remove_field => ["dstcountry"] } }
if [dstdevcategory] == "None" { mutate { remove_field => ["dstdevcategory"] } }
I don't think the "Reserved" value tells us anything meaningful, so do you think those fields should be removed too?
Best regards.
Hi,
First of all, this looks really good, and I appreciate all the effort you have put in to get it this far! I am very interested in ingesting Fortinet logs into Elasticsearch.
1st Question: Do you have a slack, gitter, or other project communication channel to ask questions like these and help with the project?
2nd Question: Does the syslog input need to be in the regular format or in CEF? I have started down this path with Filebeat and CEF to ingest all the CEF fields, but wasn't sure how this project was going about that.
3rd Question: Is there any other documentation to get this set up? I can fumble through and pick up different pieces and parts, but I wanted to make sure there wasn't anything available that made this process easier.
Thanks and keep up the good work!
A small suggestion, if it is aligned with your vision for the project: enable people to add bad IPs to their events and change event.kind to alert once a bad IP is detected, in order to raise it in the SIEM app.
This is especially beneficial when you have multiple FortiXX instances or many other solutions: you can centralize your blacklist and enrich your logs even further in a nice and easy way. I can make a PR if you want.
I wanted to bring this up to see if anyone has experienced the issue of some logs not getting ingested.
So I did a packet capture pre logstash and made sure I was getting the syslog data and I could validate that 100%.
What is the expected behavior of the filter pipeline when using mutate copy => and the data to copy doesn't exist? For example, if I had:
mutate {
  copy => { "[src_port]" => "[source][port]" }
}
and src_port does not exist as a field in my data, does it just carry on without that data copied to the new [source][port] field? I think I am missing logs because I have a lot of mutate copies and sometimes the data doesn't exist in the log, therefore the doc never makes it to the output. Is that likely?
After I added an if [src_port] { ... } check around the mutate copy as it was before, I finally got the log I was looking for. I still have many other missing logs, but maybe I need to check every single field for existence prior to the copy to ensure the document makes it to Elastic.
I did not see any errors about docs getting dropped, but to be fair I was using log level info. Any thoughts would be greatly appreciated!
I went ahead and added existence checks to most of the mutates to see if this increases the log volume, and it appears so. More to come on this.
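For reference, the guarded form described above looks like this (a minimal sketch; src_port is the example field from the question):

```conf
filter {
  # Copy only when the source field exists; events without it pass through unchanged
  if [src_port] {
    mutate { copy => { "[src_port]" => "[source][port]" } }
  }
}
```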
Just detected a parsing issue with this log:
<185>date=2020-03-28 time=21:37:11 devname="MASTER_CALLEUNO" devid="FG5H1E5818909999" logid="0419016384" type="utm" subtype="ips" eventtype="signature" level="alert" vd="root" eventtime=1585449431 severity="high" srcip=51.81.126.39 srccountry="United States" dstip=192.168.253.169 srcintf="port1" srcintfrole="wan" dstintf="port2" dstintfrole="lan" sessionid=2060097095 action="dropped" proto=6 service="HTTP" policyid=13 attack="HTTP.URI.SQL.Injection" srcport=58637 dstport=80 hostname="somehostname.com" url="/Miercoles/Portal/MME/descargar.aspx?archivo=A1A44AFA-694A-4264-8F8B-14BA4595D993.PDF AND 1=1 UNION ALL SELECT 1,NULL,'<script>alert("XSS")</script>',table_name FROM information_schema.tables WHERE 2>1--/**/; EXEC xp_cmdshell('cat ../../../etc/passwd')" direction="outgoing" attackid=15621 profile="all_default" ref="http://www.fortinet.com/ids/VID15621" incidentserialno=1846760869 msg="web_misc: HTTP.URI.SQL.Injection," crscore=30 crlevel="high"
The issue is on url="/Miercoles/Portal/MME/descargar.aspx?archivo=A1A44AFA-694A-4264-8F8B-14BA4595D993.PDF AND 1=1 UNION ALL SELECT 1,NULL,'<script>alert("XSS")</script>',table_name FROM information_schema.tables WHERE 2>1--/**/; EXEC xp_cmdshell('cat ../../../etc/passwd')"
it gets parsed like
fortios.url= "/Miercoles/Portal/MME/descargar.aspx?archivo=A1A44AFA-694A-4264-8F8B-14BA4595D993.PDF
I am missing half of the value. I will do some troubleshooting.
When I run
PUT _index_template/logs-fortinet.fortigate.event
{
"priority": 200,
"index_patterns": [
"logs-fortinet.fortigate.event*"
],
"data_stream": {
"hidden": false,
"allow_custom_routing": false
},
"composed_of": [
"ecs-base",
"ecs-user",
"ecs-user_agent",
"ecs-observer",
"ecs-destination",
"ecs-source",
"ecs-network",
"ecs-error",
"ecs-url",
"ecs-rule",
"ecs-data_stream",
"ecs-organization",
"ecs-ecs",
"ecs-host",
"logs-fortinet.fortigate.event@ilm",
"strings_as_keyword@mappings",
"auto_expand_replicas@settings",
"refresh_interval@settings",
"logs-fortinet.fortigate@mappings",
"ecs-log-modified",
"ecs-event-modified",
"ecs-file-modified",
"synthetic_source@mappings"
]
}
I get this error:
{
"error": {
"root_cause": [
{
"type": "illegal_argument_exception",
"reason": "composable template [logs-fortinet.fortigate.event] template after composition with component templates [ecs-base, ecs-user, ecs-user_agent, ecs-observer, ecs-destination, ecs-source, ecs-network, ecs-error, ecs-url, ecs-rule, ecs-data_stream, ecs-organization, ecs-ecs, ecs-host, logs-fortinet.fortigate.event@ilm, strings_as_keyword@mappings, auto_expand_replicas@settings, refresh_interval@settings, logs-fortinet.fortigate@mappings, ecs-log-modified, ecs-event-modified, ecs-file-modified, synthetic_source@mappings] is invalid"
}
],
"type": "illegal_argument_exception",
"reason": "composable template [logs-fortinet.fortigate.event] template after composition with component templates [ecs-base, ecs-user, ecs-user_agent, ecs-observer, ecs-destination, ecs-source, ecs-network, ecs-error, ecs-url, ecs-rule, ecs-data_stream, ecs-organization, ecs-ecs, ecs-host, logs-fortinet.fortigate.event@ilm, strings_as_keyword@mappings, auto_expand_replicas@settings, refresh_interval@settings, logs-fortinet.fortigate@mappings, ecs-log-modified, ecs-event-modified, ecs-file-modified, synthetic_source@mappings] is invalid",
"caused_by": {
"type": "illegal_argument_exception",
"reason": "invalid composite mappings for [logs-fortinet.fortigate.event]",
"caused_by": {
"type": "illegal_argument_exception",
"reason": "field [error.stack_trace] of type [wildcard] doesn't support synthetic source"
}
}
},
"status": 400
}
After I remove
"synthetic_source@mappings"
it succeeds.
My question:
why does the component template "synthetic_source@mappings"
give the above error?
I checked, and I have already added this component template.
Thank you
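For context: on some Elasticsearch versions, wildcard fields do not support synthetic _source, and the ecs-error component template maps error.stack_trace as wildcard, which is exactly what the nested caused_by reason says. Besides dropping synthetic_source@mappings, one workaround sketch is to override that single field with a type synthetic source does support (the component template name here is made up):

```json
PUT _component_template/error-stack_trace-keyword-override
{
  "template": {
    "mappings": {
      "properties": {
        "error": {
          "properties": {
            "stack_trace": { "type": "keyword", "ignore_above": 1024 }
          }
        }
      }
    }
  }
}
```

Add it to composed_of after ecs-error, since later entries in composed_of override earlier ones during composition.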
Sorry if I'm mistaken, but from what I'm seeing, the line
output { pipeline { send_to => "drop" } }
isn't working properly. I forwarded a log from a host that isn't in the "host_org.yml" dictionary,
and supposedly the order should be
10-input_syslog.conf -> 20-observer_enrichment.conf -> 70-drop.conf -> 80-output.conf
But according to the line
[2020-04-06T12:41:41,278][DEBUG][logstash.util.decorators ][main] filters/LogStash::Filters::Mutate: adding value to field {"field"=>"[ecs][version]", "value"=>["1.2.0"]}
and by the debugging I did, the tag is added in the file 21-snmp_cpu_fortigate_2_ecs.conf.
So, from the debugging I've done, instead of going through the pipelines, Logstash is going through the file names.
DEBUG LOGS:
PS: For the purpose of putting the logs here, I changed the host field to another IP in the 10-input_syslog.conf file with the following line:
add_field => {"host" => "10.0.1.254"}
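On the ordering point: Logstash concatenates every file in a single pipeline's path.config into one configuration and runs the filters in file-name order; the pipeline { } input/output plugins only route events between pipelines when each file is declared as its own pipeline in pipelines.yml. A minimal sender/receiver pair might look like this (file names follow the repo's numbering; the "drop" address is the one from the snippet above, and the receiver is assumed to simply discard events):

```conf
# 20-observer_enrichment.conf -- sender: forward unmatched events to the drop pipeline
output {
  pipeline { send_to => ["drop"] }
}

# 70-drop.conf -- receiver: listens on the "drop" address and discards events
input {
  pipeline { address => "drop" }
}
filter {
  drop { }
}
```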
Hi,
could you publish a sampe of the the yml files used ? ( 2 entries per file would be great )
Hello, first of all congrats for the solution, it's amazing!
I'm trying to deploy it but I get some errors:
[ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:syslog-fortinet-common_ecs-output, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \t\r\n], \"#\", \"input\", \"filter\", \"output\" at line 1, column 1 (byte 1)", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:234:in `initialize'", "org/logstash/execution/AbstractPipelineExt.java:168:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:48:in `initialize'", "org/jruby/RubyClass.java:911:in `new'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:50:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:386:in `block in converge_state'"]}
[WARN ][org.logstash.plugins.pipeline.PipelineBus][syslog-fortinet-fortigate_2_ecsv2][3f0de7dc0df9b79955e21fe0954f4615326ef0e002822839720b6337d266eb85] Attempted to send event to 'syslog-fortinet-common_ecs-output' but that address was unavailable. Maybe the destination pipeline is down or stopping? Will Retry.
Can you help me with these errors?
Thanks!
Hi,
I think it is hard to know which pipeline will be run, and when. I suggest the following architecture for Logstash:
10-input.conf
20-enhancement.conf
30-filter.conf
50-dootherstuff.conf
90-output.conf
then it is easier to quickly understand how those pipelines work together.
I've downloaded the raw NDJSON files, and when going into the UI - Stack Management - Kibana - Saved Objects --> Import, I get "Sorry, there was an error. The file could not be processed". I've tried it with all 4 of the NDJSON files on the github repository. I'm coming from Palo Alto to Fortinet, so I'd love to have pre-filled in dashboards that I can look at and tear apart to see how it works.
Any suggestions?
Thank you
:edit: for clarification - I'm on 7.9.0 for the whole ELK stack.
Let me know or I can try.
Hello,
ECS version: 1.5
Elastic stack 7.6.2
I don't know why I am only finding this repo today; such great work.
I haven't tried this mapping yet, since I created my own, but I noticed that if you want this to work with Elastic SIEM, for more complete and centralized visibility alongside other firewall/endpoint logs, you would need to change/add a few things:
Otherwise you would have flat events. This is because event.category
is set to network;
even though that is what is recommended in the documentation, I had to change it to network_traffic
to get the event renderer working for my FortiGate logs.
More info in this reddit thread here.
The geo fields come out as country_code1
or country_code2
... etc instead of country_iso_code,
which is the one inspected by Elastic SIEM. From the Kibana developer panel, create an ingest pipeline:
PUT _ingest/pipeline/geoip-info-fortinet
{
"description": "Add geoip info",
"processors": [
{
"geoip": {
"field": "srcip",
"target_field": "source.geo"
}
},
{
"geoip": {
"field": "dstip",
"target_field": "destination.geo"
}
}
]
}
Make sure that the node you create this on has node.ingest: true
Hope it helps someone and thanks for the great work.
Hi all,
I'm having some trouble understanding how to deploy your configurations to my ELK stack.
There are features here that I haven't used, and I'm kind of losing my way around all the conf files.
Any chance someone could provide an explanation of how to put all the confs, pipelines, and MIBs on my Logstash?
Thanks
We are seeing an issue with the ELK stack when creating the transforms.
Using the PUT command, I tried to load the first transform via the Dev console and it yields the following:
{
"error": {
"root_cause": [
{
"type": "validation_exception",
"reason": "Validation Failed: 1: Failed to test query, received status: BAD_REQUEST;"
}
],
"type": "validation_exception",
"reason": "Validation Failed: 1: Failed to test query, received status: BAD_REQUEST;",
"caused_by": {
"type": "search_phase_execution_exception",
"reason": "all shards failed",
"phase": "query",
"grouped": true,
"failed_shards": [
{
"shard": 0,
"index": ".ds-logs-fortinet.fortigate.traffic-default-2023.11.10-000034",
"node": "tGJaL1oXRjepu80fEqDmsQ",
"reason": {
"type": "illegal_argument_exception",
"reason": "Fielddata is disabled on [fgt.srchwvendor] in [.ds-logs-fortinet.fortigate.traffic-default-2023.11.10-000034]. Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [fgt.srchwvendor] in order to load field data by uninverting the inverted index. Note that this can use significant memory."
}
}
],
"caused_by": {
"type": "illegal_argument_exception",
"reason": "Fielddata is disabled on [fgt.srchwvendor] in [.ds-logs-fortinet.fortigate.traffic-default-2023.11.10-000034]. Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [fgt.srchwvendor] in order to load field data by uninverting the inverted index. Note that this can use significant memory.",
"caused_by": {
"type": "illegal_argument_exception",
"reason": "Fielddata is disabled on [fgt.srchwvendor] in [.ds-logs-fortinet.fortigate.traffic-default-2023.11.10-000034]. Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [fgt.srchwvendor] in order to load field data by uninverting the inverted index. Note that this can use significant memory."
}
}
}
},
"status": 400
}
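The error means the transform is aggregating or sorting on fgt.srchwvendor, which is mapped as text in that backing index. Rather than enabling fielddata (which the message itself warns against), the usual fix is to map the field as keyword so it supports aggregations; a sketch follows (the template name is illustrative, and the new mapping only applies to backing indices created after the change, e.g. on rollover):

```json
PUT _component_template/fgt-srchwvendor-keyword
{
  "template": {
    "mappings": {
      "properties": {
        "fgt": {
          "properties": {
            "srchwvendor": { "type": "keyword", "ignore_above": 1024 }
          }
        }
      }
    }
  }
}
```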
Hello,
Can you add a pipelines.yml example of how to use the different modules in Logstash?
For now all my files are in the same folder and I use the following pipeline:
But I don't understand how you use pipeline addresses?
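For what it's worth, a minimal pipelines.yml along these lines might look like this (ids and paths are illustrative, not the repo's exact names); each .conf file becomes its own pipeline, and the pipeline { address => ... } and pipeline { send_to => ... } plugins inside those files then route events between them:

```yaml
- pipeline.id: syslog-fortinet-input
  path.config: "/etc/logstash/conf.d/fortinet/10-input_syslog.conf"
- pipeline.id: syslog-fortinet-fortigate_2_ecs
  path.config: "/etc/logstash/conf.d/fortinet/40-fortigate_2_ecs.conf"
- pipeline.id: syslog-fortinet-common_ecs-output
  path.config: "/etc/logstash/conf.d/fortinet/80-output.conf"
```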
Hi, as Logstash is very slow and resource intensive compared to rsyslog (by their natures, Java vs C), what about supporting rsyslog with the omelasticsearch output module?
regards
tuxinator
I have noticed that once in a while an event won't get ingested because the [sentdelta] field contains a value such as 18446744073706429550, which is larger than the long type allows in Elastic. Is there a way to handle big integers in Elastic? Otherwise I might just make this a text field or change the mapping settings to let it index anyway. How has anyone else managed this?
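One option, assuming the cluster is on Elasticsearch 7.10 or later where the type is available: map the counter as unsigned_long, which covers 0 through 2^64-1 and therefore fits 18446744073706429550. A sketch (the template name and field placement are illustrative):

```json
PUT _component_template/fortigate-sentdelta-unsigned_long
{
  "template": {
    "mappings": {
      "properties": {
        "sentdelta": { "type": "unsigned_long" }
      }
    }
  }
}
```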
I'm a newbie in ELK. I don't know how to create an ILM policy, and I also don't know how to import your JSON files.