
logstash-filter-elapsed's Introduction

Logstash Plugin


This is a plugin for Logstash.

It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want in whatever way.

Documentation

Logstash provides infrastructure to automatically generate documentation for this plugin. We use the asciidoc format to write documentation, so any comments in the source code will first be converted into asciidoc and then into HTML. All plugin documentation is placed in one central location.

Need Help?

Need help? Try #logstash on freenode IRC or the https://discuss.elastic.co/c/logstash discussion forum.

Developing

1. Plugin Development and Testing

Code

  • To get started, you'll need JRuby with the Bundler gem installed.

  • Create a new plugin or clone an existing one from the GitHub logstash-plugins organization. We also provide example plugins.

  • Install dependencies

bundle install

Test

  • Update your dependencies
bundle install
  • Run tests
bundle exec rspec

2. Running your unpublished Plugin in Logstash

2.1 Run in a local Logstash clone

  • Edit Logstash Gemfile and add the local plugin path, for example:
gem "logstash-filter-awesome", :path => "/your/local/logstash-filter-awesome"
  • Install plugin
# Logstash 2.3 and higher
bin/logstash-plugin install --no-verify

# Prior to Logstash 2.3
bin/plugin install --no-verify
  • Run Logstash with your plugin
bin/logstash -e 'filter {awesome {}}'

At this point any modifications to the plugin code will be applied to this local Logstash setup. After modifying the plugin, simply rerun Logstash.

2.2 Run in an installed Logstash

You can use the same 2.1 method to run your plugin in an installed Logstash by editing its Gemfile and pointing the :path to your local plugin development directory, or you can build the gem and install it:

  • Build your plugin gem
gem build logstash-filter-awesome.gemspec
  • Install the plugin from the Logstash home
# Logstash 2.3 and higher
bin/logstash-plugin install /your/local/plugin/logstash-filter-awesome.gem

# Prior to Logstash 2.3
bin/plugin install /your/local/plugin/logstash-filter-awesome.gem
  • Start Logstash and proceed to test the plugin

Contributing

All contributions are welcome: ideas, patches, documentation, bug reports, complaints, and even something you drew up on a napkin.

Programming is not a required skill. Whatever you've seen about open source and maintainers or community members saying "send patches or die" - you will not see that here.

It is more important to the community that you are able to contribute.

For more information about contributing, see the CONTRIBUTING file.

logstash-filter-elapsed's People

Contributors

chermenin, colinsurprenant, dedemorton, electrical, frots, jakelandis, jensvandecasteele, jordansissel, jsvd, kares, karsaroth, ph, robbavey, suyograo, untergeek, wiibaa, yaauie, ycombinator


logstash-filter-elapsed's Issues

Timestamp override

Carried over from #7, which was closed because a long-stalled CLA request was never fulfilled.

This adds an option to take the timestamp from a field in the event: use the timestamp_override option to name the field whose value should be used as the timestamp, instead of the event's own timestamp. Useful when parsing historical log files.
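
For illustration, a sketch of how the proposed option might look (timestamp_override is the proposal from this issue, not a released option of the plugin; the tag names, task_id, and log_timestamp are invented for the example):

elapsed {
  start_tag => "taskStarted"
  end_tag => "taskEnded"
  unique_id_field => "task_id"
  # Proposed option: compute elapsed time from this field's value
  # instead of the event's @timestamp (useful for historical logs).
  timestamp_override => "log_timestamp"
}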

Elapsed filter with filebeat

Hi,

I used to work with Elasticsearch locally on my computer (using 'file' as the input), and now we have moved to Filebeat with 'S3' as the input.
Until now everything worked just fine with the elapsed filter, but now the new elapsed events don't appear in Kibana.

I do get the following message: "[2019-10-22T10:41:14,485][INFO ][logstash.filters.elapsed ] Elapsed timeout: 100000 seconds", but as I said there aren't any new elapsed events or any failure tags (like "elapsed_end_without_start", "elapsed", ...).

What am I missing? Do I need any particular configuration to make this work with Filebeat?
Thank you!

Event timeout: Not able to get information fields from start event

Hello :)

Considering the following configuration:

filter {
  grok {
    match => ["message", "STARTING TASK: (?<task_id>.*)"]
    add_tag => [ "TaskStarted" ]
  }
  grok {
    match => ["message", "ENDING TASK: (?<task_id>.*)"]
    add_tag => [ "TaskTerminated"]
  }
  elapsed {
    start_tag => "TaskStarted"
    end_tag => "TaskTerminated"
    unique_id_field => "task_id"
  }
}

When we send the following event:

{
  "message":"STARTING TASK: some_id",
  "foo":"bar
}

and a timeout occurs, a timeout event is generated like this:

{
  "tags":"elapsed",
  "task_id": "some_id"
}

but we cannot access the field named "foo" to restore its value in the timeout event. Restoring it would be very useful because, apart from the task_id, we have no way to identify which event timed out.
It would be good to restore those fields with an add_field-style syntax, like:

elapsed {
    timeout_add_field => { 
      "foo" => "%{foo}"
    }
  }

Alternatively, we could choose which fields to restore from the start event, or just restore them all...

The class cannot calculate the elapsed time

The current LogStash::Timestamp class doesn't support arithmetic operators.
The elapsed time is calculated from two Timestamp instances; we could apply an integer conversion to work around this, but it is better to change how the original class behaves.
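
For context, a minimal sketch of that conversion workaround, written here as a ruby filter with the modern event API rather than as a change inside the plugin (it assumes LogStash::Timestamp responds to to_f, and uses the elapsed_timestamp_start field the plugin adds):

filter {
  ruby {
    code => "
      # Convert both timestamps to epoch seconds before subtracting,
      # since Timestamp instances do not support the '-' operator here.
      start_ts = event.get('elapsed_timestamp_start')
      event.set('elapsed_time', event.get('@timestamp').to_f - start_ts.to_f)
    "
  }
}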

The fix is in elastic/logstash#2061

I can't install elapsed plugin logstash

Hello!

I can't install the elapsed plugin in Logstash. When I type the bin/logstash-plugin install logstash-filter-elapsed command, it sends me this message:

MBP-de-Daniel:bin daniel$ ./logstash-plugin install logstash-filter-elapsed
Validating logstash-filter-elapsed
IOError: No route to host
                      send at org/jruby/ext/socket/RubyUDPSocket.java:441
                      send at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/vendor/jruby/lib/ruby/stdlib/resolv.rb:803
                   request at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/vendor/jruby/lib/ruby/stdlib/resolv.rb:680
   block in fetch_resource at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/vendor/jruby/lib/ruby/stdlib/resolv.rb:536
           block in resolv at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/vendor/jruby/lib/ruby/stdlib/resolv.rb:1108
                      each at org/jruby/RubyArray.java:1734
           block in resolv at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/vendor/jruby/lib/ruby/stdlib/resolv.rb:1106
                      each at org/jruby/RubyArray.java:1734
           block in resolv at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/vendor/jruby/lib/ruby/stdlib/resolv.rb:1105
                      each at org/jruby/RubyArray.java:1734
                    resolv at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/vendor/jruby/lib/ruby/stdlib/resolv.rb:1103
            fetch_resource at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/vendor/jruby/lib/ruby/stdlib/resolv.rb:527
             each_resource at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/vendor/jruby/lib/ruby/stdlib/resolv.rb:517
               getresource at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/vendor/jruby/lib/ruby/stdlib/resolv.rb:498
              api_endpoint at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/lib/bootstrap/patches/remote_fetcher.rb:8
                   api_uri at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/vendor/jruby/lib/ruby/stdlib/rubygems/source.rb:47
                load_specs at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/vendor/jruby/lib/ruby/stdlib/rubygems/source.rb:187
                tuples_for at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/vendor/jruby/lib/ruby/stdlib/rubygems/spec_fetcher.rb:267
  block in available_specs at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/vendor/jruby/lib/ruby/stdlib/rubygems/spec_fetcher.rb:231
                      each at org/jruby/RubyArray.java:1734
               each_source at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/vendor/jruby/lib/ruby/stdlib/rubygems/source_list.rb:98
           available_specs at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/vendor/jruby/lib/ruby/stdlib/rubygems/spec_fetcher.rb:227
     search_for_dependency at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/vendor/jruby/lib/ruby/stdlib/rubygems/spec_fetcher.rb:103
       spec_for_dependency at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/vendor/jruby/lib/ruby/stdlib/rubygems/spec_fetcher.rb:167
          logstash_plugin? at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/lib/pluginmanager/util.rb:29
           validate_plugin at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/lib/pluginmanager/install.rb:119
   block in verify_remote! at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/lib/pluginmanager/install.rb:113
                      each at org/jruby/RubyArray.java:1734
            verify_remote! at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/lib/pluginmanager/install.rb:111
                   execute at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/lib/pluginmanager/install.rb:57
                       run at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:67
                   execute at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/subcommand/execution.rb:11
                       run at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:67
                       run at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/vendor/bundle/jruby/2.3.0/gems/clamp-0.6.5/lib/clamp/command.rb:132
                    <main> at /Users/daniel/Documents/TRABAJO/Elastic/logstash-6.5.4/lib/pluginmanager/main.rb:48
  • Version: Logstash 6.5.4
  • Operating System: macOS Mojave

Filter randomly drops events

Using this filter is really great, but I have noticed that it randomly drops some events: I see end tasks with the tag "elapsed_end_without_start".

I have tried setting the pipeline workers to 1, but it didn't help; it just degraded performance badly.
My system config:
Ubuntu 16.04
logstash 5.3.2
logstash-filter-elapsed (4.0.1)
Elasticsearch 5.3.1

elapsed {
  periodic_flush => true
  start_tag => "startevent"
  end_tag => "endevent"
  unique_id_field => "ID"
  timeout => 600
  new_event_on_match => false
  add_tag => [ "autoelapsed" ]
}

incompatibility with MongoDB Output

Hello
I'm using LS 1.5.2.
When I try to use the mongodb output, it works well until there is an elapsed match.
I'm getting an exception, apparently related to the timestamp in the elapsed fields (maybe the format of elapsed_timestamp_start):

:exception=>#<NoMethodError: undefined method `bson_type' for "2015-08-24T12:26:19.664Z":LogStash::Timestamp>, :backtrace=>[
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/bson-3.2.1-java/lib/bson/hash.rb:44:in `to_bson'",
  "org/jruby/RubyHash.java:1341:in `each'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/bson-3.2.1-java/lib/bson/hash.rb:43:in `to_bson'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/bson-3.2.1-java/lib/bson/encodable.rb:57:in `encode_with_placeholder_and_null'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/bson-3.2.1-java/lib/bson/hash.rb:42:in `to_bson'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/bson-3.2.1-java/lib/bson/array.rb:49:in `to_bson'",
  "org/jruby/RubyArray.java:1613:in `each'",
  "org/jruby/RubyEnumerable.java:978:in `each_with_index'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/bson-3.2.1-java/lib/bson/array.rb:46:in `to_bson'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/bson-3.2.1-java/lib/bson/encodable.rb:57:in `encode_with_placeholder_and_null'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/bson-3.2.1-java/lib/bson/array.rb:45:in `to_bson'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/bson-3.2.1-java/lib/bson/hash.rb:46:in `to_bson'",
  "org/jruby/RubyHash.java:1341:in `each'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/bson-3.2.1-java/lib/bson/hash.rb:43:in `to_bson'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/bson-3.2.1-java/lib/bson/encodable.rb:57:in `encode_with_placeholder_and_null'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/bson-3.2.1-java/lib/bson/hash.rb:42:in `to_bson'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/mongo-2.0.6/lib/mongo/protocol/serializers.rb:155:in `serialize'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/mongo-2.0.6/lib/mongo/protocol/message.rb:153:in `serialize_fields'",
  "org/jruby/RubyArray.java:1613:in `each'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/mongo-2.0.6/lib/mongo/protocol/message.rb:141:in `serialize_fields'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/mongo-2.0.6/lib/mongo/protocol/message.rb:70:in `serialize'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/mongo-2.0.6/lib/mongo/server/connection.rb:123:in `write'",
  "org/jruby/RubyArray.java:1613:in `each'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/mongo-2.0.6/lib/mongo/server/connection.rb:122:in `write'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/mongo-2.0.6/lib/mongo/server/connectable.rb:66:in `dispatch'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/mongo-2.0.6/lib/mongo/loggable.rb:44:in `log'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/mongo-2.0.6/lib/mongo/loggable.rb:67:in `log_debug'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/mongo-2.0.6/lib/mongo/server/connectable.rb:65:in `dispatch'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/mongo-2.0.6/lib/mongo/operation/executable.rb:35:in `execute'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/mongo-2.0.6/lib/mongo/server/connection_pool.rb:99:in `with_connection'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/mongo-2.0.6/lib/mongo/server/context.rb:63:in `with_connection'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/mongo-2.0.6/lib/mongo/operation/executable.rb:34:in `execute'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/mongo-2.0.6/lib/mongo/operation/write/insert.rb:72:in `execute_write_command'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/mongo-2.0.6/lib/mongo/operation/write/insert.rb:62:in `execute'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/mongo-2.0.6/lib/mongo/collection.rb:190:in `insert_many'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/mongo-2.0.6/lib/mongo/collection.rb:175:in `insert_one'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-mongodb-0.1.4/lib/logstash/outputs/mongodb.rb:56:in `receive'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.2.2-java/lib/logstash/outputs/base.rb:88:in `handle'",
  "(eval):513:in `output_func'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.2.2-java/lib/logstash/pipeline.rb:243:in `outputworker'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.2.2-java/lib/logstash/pipeline.rb:165:in `start_outputs'"
], :level=>:warn}

NoMethodError: undefined method `error_code' for #
  receive at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-mongodb-0.1.4/lib/logstash/outputs/mongodb.rb:60
  handle at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.2.2-java/lib/logstash/outputs/base.rb:88
  output_func at (eval):513
  outputworker at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.2.2-java/lib/logstash/pipeline.rb:243
  start_outputs at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.2.2-java/lib/logstash/pipeline.rb:165

Any ideas on how to solve this issue? Thanks.
I've also posted this in the mongodb output repository.

plugin Milestone

The milestone of this plugin is not set; therefore, Logstash refuses to run it. Please fix.

Elapsed filter with load balanced logstash in Filebeat

Hello.

I have Filebeat configured with 4 Logstash servers and load balancing enabled:

  logstash:
    # The Logstash hosts
    hosts: ["logstash1", "logstash2", "logstash3", "logstash4"]

    # Optional load balance the events between the Logstash hosts
    loadbalance: true

And I have logstash-filter-elapsed configured on my logstash servers.

The problem: when the "start" event arrives at one Logstash instance and the "end" event arrives at another, I get elapsed_end_without_start and no elapsed_time is calculated (obviously).

I didn't find any notes about using this filter in a load-balanced Logstash environment.
Are there any recommendations for how this can be configured?
Or should this plugin not be used in a load-balanced environment? If so, it would make sense to add that to the documentation.

Thanks.

Filter seems to ignore timeout value

Hi,

After a few hours of experimentation, I think there may be an issue with the timeout value. I am trying to calculate the elapsed time between two of my events. I have two scenarios:

  1. The first and second events are less than 20 seconds apart (immediate execution). These are the ones I want to measure.
  2. The first and second events are minutes apart; this happens when something is scheduled, and I want these to be ignored.

So I set my config to time out event pairs more than 20 seconds apart. However, all events match and nothing is filtered out.

This is my config:

input {

    stdin{
            "add_field" =>  { "client" => "test" }
    }

}

filter {


        multiline {
                pattern => "^\[%{LOGLEVEL}\]"
                negate => true
                what => "previous"
        }

        grok {
            break_on_match => false
            patterns_dir => "/Users/artur/dev/logstash/config/patterns"
            match => {
                "message" => "\[%{LOGLEVEL:level}\] \[%{IPORHOST:from}\] %{TIMESTAMP_ISO8601:timestamp} \[%{DATA:thread}\] \[%{NOTSPACE:logger}\] %{GREEDYDATA:msg}"
            }
            #remove_tag => ["_grokparsefailure"]
            #named_captures_only => false
        }

        date {
            locale => "en"
            match => ["timestamp", "ISO8601"]
            timezone => "UTC"
            target => "@timestamp"
            add_field => { "debug" => "timestampMatched"}
        }

        mutate
        {
         remove_field => [ "timestamp"]
        }


        grok {
            match => {
                "msg" => "START\: Received schedule request for\: \(%{GREEDYDATA:task_id}\)"
            }
            add_tag => [ "taskStarted" ]
            tag_on_failure => [ ]
        }

        grok {
            match => {
                "msg" => "END\: Poking:.* for Key: \(%{GREEDYDATA:task_id}\)"
            }
            add_tag => [ "taskEnded" ]
            tag_on_failure => [ ]
        }

        elapsed {
            start_tag => "taskStarted"
            end_tag => "taskEnded"
            unique_id_field => "task_id"
            timeout => 2050        
        }


}

output {
    stdout {
            codec => "rubydebug"
    }
}

Here is the input I am using:

[INFO] [SomeId] 2016-02-26T15:34:06.179Z [message-inserter-1] [ClassA] START: Received schedule request for: (test) Scheduled: 2016-02-26T15:34:00
[INFO] [SomeId2] 2016-02-26T16:08:00.813Z [message-sender-3] [ClassB] END: Poking: {"message"} for Key: (test)

Here is the output that I am seeing:

Logstash startup completed
[INFO] [SomeId] 2016-02-26T15:34:06.179Z [message-inserter-1] [ClassA] START: Received schedule request for: (test) Scheduled: 2016-02-26T15:34:00
[INFO] [SomeId2] 2016-02-26T16:08:00.813Z [message-sender-3] [ClassB] END: Poking: {"message"} for Key: (test)
{
       "message" => "[INFO] [SomeId] 2016-02-26T15:34:06.179Z [message-inserter-1] [ClassA] START: Received schedule request for: (test) Scheduled: 2016-02-26T15:34:00",
      "@version" => "1",
    "@timestamp" => "2016-02-26T15:34:06.179Z",
        "client" => "test",
          "host" => "arturk.local",
         "level" => "INFO",
          "from" => "SomeId",
        "thread" => "message-inserter-1",
        "logger" => "ClassA",
           "msg" => "START: Received schedule request for: (test) Scheduled: 2016-02-26T15:34:00",
         "debug" => "timestampMatched",
       "task_id" => "test",
          "tags" => [
        [0] "taskStarted"
    ]
}
{
                    "message" => "[INFO] [SomeId2] 2016-02-26T16:08:00.813Z [message-sender-3] [ClassB] END: Poking: {\"message\"} for Key: (test)",
                   "@version" => "1",
                 "@timestamp" => "2016-02-26T16:08:00.813Z",
                     "client" => "test",
                       "host" => "arturk.local",
                      "level" => "INFO",
                       "from" => "SomeId2",
                     "thread" => "message-sender-3",
                     "logger" => "ClassB",
                        "msg" => "END: Poking: {\"message\"} for Key: (test)",
                      "debug" => "timestampMatched",
                    "task_id" => "test",
                       "tags" => [
        [0] "taskEnded",
        [1] "elapsed",
        [2] "elapsed_match"
    ],
               "elapsed_time" => 2034.634,
    "elapsed_timestamp_start" => "2016-02-26T15:34:06.179Z"
}

As you can see, the elapsed time is 2034 seconds, which is the difference between the timestamps, so the time has been calculated correctly. However, the timeout hasn't kicked in.

Elapsed Plugin Negative elapsed_time

There are no unique IDs in my logs, so I'm creating my own by combining multiple fields, but they may repeat over time.

My log line #1457 (END tag) matches #1559 (START tag) instead of #1450 (START tag).
#1450 (START tag) matches #1556 (END tag) instead of #1457 (END tag).
The behavior is also very non-deterministic when I run it multiple times on the same logs.

I believe it's not reading the logs in a serialized manner; otherwise this problem would never happen. My input config:

input {
  file {
    path => "/home/vsharma/Documents/logstash-5.1.2/bin/varun1.log"
  }
}

Is there any way to get the elapsed plugin to go through my logs sequentially (line 1, line 2, line 3, ...)?
Edit:
It seems like the threading in the elapsed plugin is the main culprit here. Any suggestions?
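
For reference, the commonly tried mitigation for ordering problems is pinning Logstash to a single pipeline worker with the -w flag (as the "Filter randomly drops events" report above also attempted; the config path below is a placeholder). Note that it reduces throughput and did not fully resolve the problem for that reporter:

bin/logstash -f /path/to/your-config.conf -w 1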

Elastic 7.1 elapsed_end_without_start but works in 6.8.1 (same config)

For all general issues, please provide the following details for fast resolution:

  • Version: Logstash 7.1
  • Operating System: RHEL 6.5
  • Config File (if you have sensitive info, please remove it):
...MORE ABOVE
if "_grokparsefailure" in [tags] {
  grok {
    remove_tag => ["_grokparsefailure"]
    patterns_dir => ["/etc/logstash/conf.d/patterns"]
    match => { "message" => "%{YEAR:log_year}\.%{MONTHNUM:log_month}\.%{MONTHDAY:log_day}\|%{TIME:log_time}\|\|%{SESSIONUID:SESUID}\|\|%{SESSIONUID:THREADUID}\|\(null\)\|%{WORD:ThreadName}\|%{WORD:Action}\|Begin\|check_patient_for_completeness%{GREEDYDATA}" }
    add_tag => [ "CheckPatientForCompletenessStart" ]
    add_field => { "event" => "CheckPatientForCompletenessStart" }
  }
}

if "_grokparsefailure" in [tags] {
  grok {
    remove_tag => ["_grokparsefailure"]
    patterns_dir => ["/etc/logstash/conf.d/patterns"]
    match => { "message" => "%{YEAR:log_year}\.%{MONTHNUM:log_month}\.%{MONTHDAY:log_day}\|%{TIME:log_time}\|\|%{SESSIONUID:SESUID}\|\|%{SESSIONUID:THREADUID}\|\(null\)\|%{WORD:ThreadName}\|%{WORD:Action}\|End\|check_patient_for_completeness%{GREEDYDATA}" }
    add_tag => [ "CheckPatientForCompletenessEnd" ]
    add_field => { "event" => "CheckPatientForCompletenessEnd" }
  }
}

elapsed {
  start_tag => "CheckPatientForCompletenessStart"
  end_tag => "CheckPatientForCompletenessEnd"
  unique_id_field => "THREADUID"
}
  • Sample Data:
2019.04.25|09:36:49.850||B8CA3A947330-5CC1B251-1||B8CA3A947330-5CC1B251-146|(null)|MainThread|Generic|Begin|check_patient_for_completeness()|(null)|
2019.04.25|09:36:50.128||B8CA3A947330-5CC1B251-1||B8CA3A947330-5CC1B251-146|(null)|MainThread|Generic|End|check_patient_for_completeness()|(null)|
  • Steps to Reproduce:
    I run the configs with the following command line parameters for 6.8.1:

/logstash-6.8.1/bin# ./logstash -f /etc/logstash/conf.d/performance_log/v2/performance.log.conf -w 1 -r

and for my 7.1 installation:

/usr/share/logstash/bin# ./logstash -f /etc/logstash/conf.d/performance_log/v2/performance.log.conf -w 1 -r

However, I get different results!
6.8.1 correctly identifies the CheckPatientForCompletenessStart tag as the start and the CheckPatientForCompletenessEnd tag as the end.
The 7.1 installation, however, does not do this correctly and constantly produces the "elapsed_end_without_start" tag.
The only thing that has changed between these two runs is the version of logstash.

This smells like a bug, but I'm not sure where. Maybe 7.1 is not interpreting the -w 1 flag correctly?

7.1 Example (notice how it seems to process the end but completely ignores the start):

    {
         "@timestamp" => 2019-04-25T13:36:50.128Z,
    "workstationName" => "ii-rs-hc-mam-04",
              "event" => "CheckPatientForCompletenessEnd",
               "tags" => [
        [0] "CheckPatientForCompletenessEnd",
        [1] "elapsed_end_without_start"
    ],
          "timestamp" => "2019 04 25 09:36:50.128",
           "SiteName" => "USA-SOMEWHERE-XX",
           "@version" => "1",
               "type" => "plain",
          "THREADUID" => "B8CA3A947330-5CC1B251-146",
            "message" => "2019.04.25|09:36:50.128||B8CA3A947330-5CC1B251-1||B8CA3A947330-5CC1B251-146|(null)|MainThread|Generic|End|check_patient_for_completeness()|(null)|\r",
             "Action" => "Generic",
               "host" => "wtlsuv403.sitename.com",
               "path" => "/var/log/perf_logv2/v3/client_performance_USA-SOMEWHERE-XX_ii-rs-hc-mam-04.log",
             "SESUID" => "B8CA3A947330-5CC1B251-1",
         "ThreadName" => "MainThread"
    }
    {
         "@timestamp" => 2019-04-25T13:36:49.850Z,
    "workstationName" => "ii-rs-hc-mam-04",
              "event" => "CheckPatientForCompletenessStart",
               "tags" => [
        [0] "CheckPatientForCompletenessStart"
    ],
          "timestamp" => "2019 04 25 09:36:49.850",
           "SiteName" => "USA-SOMEWHERE-XX",
           "@version" => "1",
               "type" => "plain",
          "THREADUID" => "B8CA3A947330-5CC1B251-146",
            "message" => "2019.04.25|09:36:49.850||B8CA3A947330-5CC1B251-1||B8CA3A947330-5CC1B251-146|(null)|MainThread|Generic|Begin|check_patient_for_completeness()|(null)|\r",
             "Action" => "Generic",
               "host" => "wtlsuv403.sitename.com",
               "path" => "/var/log/perf_logv2/v3/client_performance_USA-SOMEWHERE-XX_ii-rs-hc-mam-04.log",
             "SESUID" => "B8CA3A947330-5CC1B251-1",
         "ThreadName" => "MainThread"
    }

6.8.1 Example (Correct)

    {
               "type" => "plain",
         "ThreadName" => "MainThread",
             "SESUID" => "B8CA3A947330-5CC1B251-1",
           "@version" => "1",
               "path" => "/var/log/perf_logv2/v3/client_performance_USA-SOMEWHERE-XX_ii-rs-hc-mam-04.log",
          "THREADUID" => "B8CA3A947330-5CC1B251-146",
              "event" => "CheckPatientForCompletenessStart",
               "host" => "wtlsuv403.sitename.com",
         "@timestamp" => 2019-04-25T13:36:49.850Z,
    "workstationName" => "ii-rs-hc-mam-04",
             "Action" => "Generic",
          "timestamp" => "2019 04 25 09:36:49.850",
               "tags" => [
        [0] "CheckPatientForCompletenessStart"
    ],
           "SiteName" => "USA-SOMEWHERE-XX"
    }
    {
                       "type" => "plain",
                 "ThreadName" => "MainThread",
                     "SESUID" => "B8CA3A947330-5CC1B251-1",
    "elapsed_timestamp_start" => 2019-04-25T13:36:49.850Z,
                   "@version" => "1",
                       "path" => "/var/log/perf_logv2/v3/client_performance_USA-SOMEWHERE-XX_ii-rs-hc-mam-04.log",
                  "THREADUID" => "B8CA3A947330-5CC1B251-146",
                      "event" => "CheckPatientForCompletenessEnd",
                       "host" => "wtlsuv403.sitename.com",
                 "@timestamp" => 2019-04-25T13:36:50.128Z,
            "workstationName" => "ii-rs-hc-mam-04",
                     "Action" => "Generic",
                  "timestamp" => "2019 04 25 09:36:50.128",
                       "tags" => [
        [0] "CheckPatientForCompletenessEnd",
        [1] "elapsed",
        [2] "elapsed_match"
    ],
               "elapsed_time" => 0.278,
                   "SiteName" => "USA-SOMEWHERE-XX"
    }

Multiple elapsed plugins for one event

Hi,

I've been measuring the time difference between some events and came across a case where two tasks are stopped by the same event. However, when calling the elapsed plugin twice, only the first match is recorded. What should I do to make elapsed record both?

Example config:

filter {
  grok {
    match => ["message", "STARTING TASK1: (?<task_id>.*)"]
    add_tag => [ "Task1Started" ]
  }
  grok {
    match => ["message", "STARTING TASK2: (?<task_id>.*)"]
    add_tag => [ "Task2Started" ]
  }
  grok {
    match => ["message", "ENDING TASK: (?<task_id>.*)"]
    add_tag => [ "Task1Terminated", "Task2Terminated" ]
  }
  elapsed {
    start_tag => "Task1Started"
    end_tag => "Task1Terminated"
    unique_id_field => "task_id"
  }
  elapsed {
    start_tag => "Task2Started"
    end_tag => "Task2Terminated"
    unique_id_field => "task_id"
  }
}

Thanks for any help on this issue!

Disable logstash-filter-elapsed print log to logstash-plain.log

logstash-filter-elapsed 4.0.1 prints lots of INFO messages to logstash-plain.log:

[2017-06-02T16:48:04,840][INFO ][logstash.filters.elapsed ] Elapsed, 'end event' received {:end_tag=>"endtask", :unique_id_field=>"taskID"}
[2017-06-02T16:48:04,842][INFO ][logstash.filters.elapsed ] Elapsed, 'start event' received {:start_tag=>"starttask", :unique_id_field=>"taskID"}
[2017-06-02T16:48:04,846][INFO ][logstash.filters.elapsed ] Elapsed, 'end event' received {:end_tag=>"endtask", :unique_id_field=>"taskID"}
[2017-06-02T16:48:04,849][INFO ][logstash.filters.elapsed ] Elapsed, 'start event' received {:start_tag=>"starttask", :unique_id_field=>"taskID"}

How can I disable these logs?
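
One possible approach, sketched here as an assumption (untested): raise the log level for just this filter's logger in Logstash's config/log4j2.properties, using the logger name shown in the messages above:

logger.elapsedfilter.name = logstash.filters.elapsed
logger.elapsedfilter.level = warn

Alternatively, starting Logstash with --log.level=warn raises the threshold globally, at the cost of hiding other INFO messages too.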

Hand over fields from the start event to the end event

I have an event stream where the start event contains the details of the process and the end event only records that it ended. With the elapsed filter, the elapsed time is added to the end event. This makes it impossible to view everything in a single event.
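
For what it's worth, one possible workaround uses the separate logstash-filter-aggregate plugin rather than this one. A sketch under assumed names (the taskStarted/taskEnded tags and the task_id and details fields are invented for the example): copy the details from the start event into an aggregate map keyed by the shared ID, then write them onto the end event.

filter {
  if "taskStarted" in [tags] {
    aggregate {
      task_id => "%{task_id}"
      # Stash the start event's details in the shared map.
      code => "map['process_details'] ||= event.get('details')"
    }
  }
  if "taskEnded" in [tags] {
    aggregate {
      task_id => "%{task_id}"
      # Copy the stashed details onto the end event.
      code => "event.set('process_details', map['process_details'])"
      end_of_task => true
      timeout => 120
    }
  }
}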
