logstash-plugins / logstash-input-cloudwatch
A Logstash input to pull events from the Amazon Web Services CloudWatch API
License: Apache License 2.0
Hi, does it keep poll state?
If you run multiple Logstash instances at the same time, how do you avoid duplicated metrics?
Should it keep shared state between instances, perhaps somewhere in DynamoDB?
The verbosity of the logstash-input-cloudwatch plugin when running with INFO-level logging is quite high.
Currently, using this config I get the following info level messages on every iteration:
input {
  cloudwatch {
    namespace => "AWS/EC2"
    metrics => [
      "CPUUtilization",
      "DiskReadOps",
      "DiskWriteOps",
      "NetworkIn",
      "NetworkOut",
      "CPUCreditBalance",
      "CPUCreditUsage",
      "StatusCheckFailed_Instance",
      "StatusCheckFailed_System"
    ]
    filters => { "tag:Monitoring" => "Yes" }
    region => "us-east-1"
    interval => 600
  }
}
This means in an environment with 100 instances (I am currently running in the 90s on average) I get approximately 919 log messages per 10-minute iteration:
(start) + (2 × metrics) + (metrics × instances) = (total log messages)
1 + 18 + 9 × 100 = 919
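The arithmetic checks out if the 18 metric-level messages are read as two per metric, one "Polling metric" line and one "Filters" line (an assumption based on the log excerpts elsewhere in this thread):

```ruby
metrics   = 9    # metrics listed in the config above
instances = 100  # rough instance count

# 1 start-of-iteration message, 2 messages per metric,
# and 1 get_metric_statistics log line per metric per instance
total = 1 + 2 * metrics + metrics * instances
puts total  # => 919
```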
Not only is this a lot of logs, but the logs are exceptionally verbose, as most include JSON payloads related to the message.
Compared to, for instance, the logstash-input-s3 plugin, where the only info-level log messages relate to registering the plugin (https://github.com/logstash-plugins/logstash-input-s3/blob/master/lib/logstash/inputs/s3.rb#L67, https://github.com/logstash-plugins/logstash-input-s3/blob/master/lib/logstash/inputs/s3.rb#L278-L282), this seems excessive. It certainly made troubleshooting a recent issue difficult for me.
Please post all product and debugging questions on our forum. Your questions will reach our wider community members there, and if we confirm that there is a bug, then we can open a new issue here.
For all general issues, please provide the following details for fast resolution:
input {
  cloudwatch {
    namespace => "AWS/EC2"
    metrics => [ "CPUUtilization" ]
    filters => { "tag:Name" => "CaseWork" }
    region => "us-east-1"
    statistics => "Maximum"
    period => 300
    access_key_id => "xx"
    secret_access_key => "xx"
  }
}
output {
  elasticsearch {
    hosts => ["ip-172-31-55-42:9200"]
    index => "metrics2-%{+YYYY.MM.dd}"
  }
}
This is my config and there is no output. Help! I need this as soon as possible.
Version: 2.0.0
Operating System: Amazon Linux
Config File (if you have sensitive info, please remove it):
Sample Data:
[2017-06-02T06:57:19,078][DEBUG][logstash.inputs.cloudwatch] DPs: {:datapoints=>[], :label=>"CPUUtilization", :response_metadata=>{:request_id=>"b82e647f-4760-11e7-b7e9-67d8000f58e5"}}
[2017-06-02T06:57:19,078][INFO ][logstash.inputs.cloudwatch] Polling resource InstanceId: i-00ceb117581e228e8
[2017-06-02T06:57:19,101][INFO ][logstash.inputs.cloudwatch] [AWS CloudWatch 200 0.022 0 retries] get_metric_statistics(:dimensions=>[{:name=>"InstanceId",:value=>"i-00ceb117581e228e8"}],:end_time=>"2017-06-02T06:57:19+00:00",:metric_name=>"CPUUtilization",:namespace=>"AWS/EC2",:period=>300,:start_time=>"2017-06-02T06:42:19+00:00",:statistics=>["Maximum"])
Steps to Reproduce:
Followed the ELK article on elastic.co.
Any interval that is not a multiple of the period (the default period is 300, so e.g. 400, but not 900) results in this error:
cloudwatch {
  namespace => "AWS/EC2"
  metrics => [ "CPUUtilization" ]
  interval => 400
  filters => { "tag:Monitoring" => "Yes" }
  region => "us-east-1"
}
{:timestamp=>"2016-06-18T22:04:57.214000+0000", :message=>"Pipeline aborted due to error", :exception=>#<RuntimeError: Interval must be divisible by period>, :backtrace=>[
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-cloudwatch-1.1.3/lib/logstash/inputs/cloudwatch.rb:126:in `register'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:330:in `start_inputs'",
  "org/jruby/RubyArray.java:1613:in `each'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:329:in `start_inputs'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:180:in `start_workers'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:136:in `run'",
  "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/agent.rb:473:in `start_pipeline'"
], :level=>:error}
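The failing check can be sketched in plain Ruby. This is a minimal sketch of the register-time validation, not the plugin's actual source; the method name is hypothetical and the default period of 300 seconds is taken from the parameter dumps in this thread:

```ruby
# Hypothetical stand-in for the plugin's register-time check.
def validate_interval!(interval, period = 300)
  raise "Interval must be divisible by period" unless (interval % period).zero?
end

validate_interval!(900)     # ok: 900 = 3 * 300
begin
  validate_interval!(400)   # 400 % 300 == 100, so this raises
rescue RuntimeError => e
  puts e.message
end
```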
Hi all. I am working with Logstash version 7.10.0.
Several months ago I could install the plugin without issue using the same Dockerfile.
My Dockerfile:
FROM docker.elastic.co/logstash/logstash:7.10.0
RUN rm -f /usr/share/logstash/pipeline/*
RUN bin/logstash-plugin install logstash-input-cloudwatch
This is the output of docker build:
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/usr/share/logstash/vendor/jruby/lib/ruby/stdlib/jopenssl.jar) to field java.security.MessageDigest.provider
WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Validating logstash-input-cloudwatch
Installing logstash-input-cloudwatch
My command window is stuck on the last line, "Installing logstash-input-cloudwatch". Nothing happens even after waiting several minutes...
Thank you!
Migrated from EagerELK/logstash-input-cloudwatch#18
cloudwatch {
  access_key_id => "XXXXXXXXXX"
  secret_access_key => "YYYYYYYYYYY"
  interval => 300
  namespace => "AWS/EC2"
  metrics => [ "CPUCreditBalance", "CPUCreditUsage", "CPUSurplusCreditBalance", "CPUSurplusCreditsCharged", "CPUUtilization", "DiskReadBytes", "DiskReadOps", "DiskWriteBytes", "DiskWriteOps", "NetworkIn", "NetworkOut", "NetworkPacketsIn", "NetworkPacketsOut", "StatusCheckFailed", "StatusCheckFailed_Instance", "StatusCheckFailed_System" ]
  region => "us-east-1"
  filters => {
    "instance-state-code" => "running"
  }
  add_field => {
    "instance-state-code" => "running"
    "techstack" => "XYZ"
    "source" => "CloudWatch"
    "region" => "us-east-1"
  }
}
The error log is as follows (the config file is above):
[2018-05-04T02:22:12,493][ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::CloudWatch access_key_id=>"XXXXXXXXX", secret_access_key=><password>, interval=>300, namespace=>"AWS/EC2", metrics=>["CPUCreditBalance", "CPUCreditUsage", "CPUSurplusCreditBalance", "CPUSurplusCreditsCharged", "CPUUtilization", "DiskReadBytes", "DiskReadOps", "DiskWriteBytes", "DiskWriteOps", "NetworkIn", "NetworkOut", "NetworkPacketsIn", "NetworkPacketsOut", "StatusCheckFailed", "StatusCheckFailed_Instance", "StatusCheckFailed_System"], region=>"us-east-1", filters=>{"instance-state-code"=>"running"}, add_field=>{"instance-state-code"=>"running", "techstack"=>"ZZZZZZZZ", "source"=>"CloudWatch", "region"=>"us-east-1"}, id=>"XXXXXXXXXXXXX", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"XXXXXXX", enable_metric=>true, charset=>"UTF-8">, role_session_name=>"logstash", use_ssl=>true, statistics=>["SampleCount", "Average", "Minimum", "Maximum", "Sum"], period=>300, combined=>false>
Error: no implicit conversion of LogStash::Util::Password into String
Exception: TypeError
Stack: org/jruby/RubyString.java:1144:in `+'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/aws-sdk-v1-1.67.0/lib/aws/core/signers/version_4.rb:93:in `derive_key'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/aws-sdk-v1-1.67.0/lib/aws/core/signers/version_4.rb:58:in `sign_request'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/aws-sdk-v1-1.67.0/lib/aws/core/client.rb:708:in `block in sign_request'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/aws-sdk-v1-1.67.0/lib/aws/core/client.rb:492:in `block in client_request'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/aws-sdk-v1-1.67.0/lib/aws/core/response.rb:175:in `build_request'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/aws-sdk-v1-1.67.0/lib/aws/core/response.rb:114:in `initialize'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/aws-sdk-v1-1.67.0/lib/aws/core/client.rb:203:in `new_response'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/aws-sdk-v1-1.67.0/lib/aws/core/client.rb:490:in `block in client_request'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/aws-sdk-v1-1.67.0/lib/aws/core/client.rb:391:in `log_client_request'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/aws-sdk-v1-1.67.0/lib/aws/core/client.rb:477:in `block in client_request'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/aws-sdk-v1-1.67.0/lib/aws/core/client.rb:373:in `return_or_raise'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/aws-sdk-v1-1.67.0/lib/aws/core/client.rb:476:in `client_request'
(eval):3:in `list_metrics'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-cloudwatch-2.1.1/lib/logstash/inputs/cloudwatch.rb:255:in `block in metrics_available'
org/jruby/RubyHash.java:711:in `default'
org/jruby/RubyHash.java:1100:in `[]'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-cloudwatch-2.1.1/lib/logstash/inputs/cloudwatch.rb:245:in `metrics_for'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-cloudwatch-2.1.1/lib/logstash/inputs/cloudwatch.rb:144:in `block in run'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:20:in `interval'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-cloudwatch-2.1.1/lib/logstash/inputs/cloudwatch.rb:141:in `run'
/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:514:in `inputworker'
/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:507:in `block in start_input'
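This TypeError class of failure can be reproduced in plain Ruby: String#+ requires a String argument, and here the signer receives the wrapped secret instead of its underlying value. The Password class below is a stand-in for illustration only, not the real LogStash::Util::Password:

```ruby
# Stand-in secret wrapper (assumption: the real class similarly refuses
# implicit conversion to String).
class Password
  def initialize(secret)
    @secret = secret
  end

  def value
    @secret
  end
end

key = Password.new("xyz")   # placeholder secret
begin
  "AWS4" + key              # String#+ raises without a real String
rescue TypeError => e
  puts "#{e.class}: #{e.message}"
end
puts "AWS4" + key.value     # works once the wrapped value is unwrapped
```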
Hi,
I was wondering if someone could help me.
I have followed your readme, and it seems like the requests to CloudWatch are successful, but there is no output.
I have included my config below along with verbose log output.
input {
  cloudwatch {
    namespace => "AWS/EC2"
    metrics => [ "CPUUtilization" ]
    filters => { "tag:Monitoring" => "Yes" }
    region => "eu-west-1"
    access_key_id => "MYKEY"
    secret_access_key => "Secret Access"
  }
}
output {
  stdout {}
}
[AWS CloudWatch 200 1.788 0 retries] list_metrics(:namespace=>"AWS/EC2")
{:level=>:info}
Polling metric CPUUtilization {:level=>:info}
Filters: [{:name=>"tag:Monitoring", :values=>["Yes"]}] {:level=>:info}
[AWS EC2 200 0.389 0 retries] describe_instances(:filters=>[{:name=>"tag:Monitoring",:values=>["Yes"]}])
{:level=>:info}
Cheers
Brock
As of last week, CloudWatch added support for high resolution metrics: https://aws.amazon.com/about-aws/whats-new/2017/07/amazon-cloudwatch-introduces-high-resolution-custom-metrics-and-alarms/
Previously, the resolution was 1 data point per minute. Now you can have 1 data point per second.
The way this is done is to specify a StorageResolution value of 1 in the CloudWatch PutMetricData API call. There are currently only 2 valid values for this setting: 1 or 60. It defaults to 60 if not specified.
Obviously you don't have to feed in data every second just because you set a StorageResolution of 1. You can freely feed in, for example, a point every 10 seconds, and 6 points per minute will show in CloudWatch.
It would be great if logstash-input-cloudwatch supported high-resolution metrics.
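The constraint and the 10-second example above can be expressed as a small sketch (the helper below is illustrative, not part of any AWS SDK):

```ruby
VALID_STORAGE_RESOLUTIONS = [1, 60].freeze  # the only values PutMetricData accepts

# Pick the coarsest valid StorageResolution for a publishing cadence.
def storage_resolution_for(seconds_between_points)
  seconds_between_points < 60 ? 1 : 60
end

res = storage_resolution_for(10)  # sub-minute cadence needs high resolution
puts res                          # => 1
puts 60 / 10                      # => 6 points per minute visible in CloudWatch
raise unless VALID_STORAGE_RESOLUTIONS.include?(res)
```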
We need to periodically collect a few metrics from CloudWatch, and we are using the logstash-input-cloudwatch plugin for this purpose.
Here is how the setup goes:
The Logstash pipeline has a number of inputs configured, one of which is this CloudWatch input plugin. Logstash is configured to auto-reload config changes. However, we observed that pipeline restarts are blocked by this input plugin. We see the error message below when we try to auto-reload pipeline changes or gracefully stop the service. The only way to stop Logstash is to issue a "kill -9".
Did anyone face this issue?
Could someone help with solutions or ideas for overcoming this?
Thanks in advance!
We currently have a number of AWS Lambda functions that are reporting metrics to CloudWatch using their own namespace. I've configured the logstash-cloudwatch-plugin like this:
input {
  cloudwatch {
    namespace => "CustomNamespace"
    metrics => [ "CustomMetric" ]
    filters => {}
    region => "us-east-1"
    type => "cloudwatch"
  }
}
filter {
}
output {
  elasticsearch {
    index => "cloudwatch-%{+YYYY.MM.dd}"
    hosts => ["http://my.ece.elasticsearch.instance.ip.es.io:9200"]
    user => //
    password => //
  }
}
I have confirmed that the plugin correctly collects EC2 metrics. It's unclear what the filter statement should be, but it is required.
And here’s what the plugin’s logs show:
[2018-07-23T14:39:53,469][INFO ][logstash.inputs.cloudwatch] Polling CloudWatch API
[2018-07-23T14:39:53,470][DEBUG][logstash.inputs.cloudwatch] Polling metric CustomMetric
[2018-07-23T14:39:53,470][DEBUG][logstash.inputs.cloudwatch] Filters: []
I'm running Logstash 6.3.2 on an Amazon Linux EC2 instance. The EC2 instance that hosts Logstash has an IAM role (arn:aws:iam::??????????:role/logstash) whose inline policy allows the following:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1444715676000",
      "Effect": "Allow",
      "Action": [
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:ListMetrics"
      ],
      "Resource": "*"
    },
    {
      "Sid": "Stmt1444716576170",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances"
      ],
      "Resource": "*"
    }
  ]
}
/usr/share/logstash/config/logstash.yml
http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline
/usr/share/logstash/pipeline/logstash.conf
input {
  cloudwatch {
    namespace => "AWS/EC2"
    metrics => [ "CPUUtilization" ]
    filters => { "tag:Monitoring" => "Yes" }
    role_arn => "arn:aws:iam::??????????:role/logstash"
    region => "us-west-2"
  }
}
output {
  elasticsearch {
    hosts => ["elastic.com:9200"]
    user => "elastic"
    password => "password"
    ssl => false
    ssl_certificate_verification => false
    index => "cloudwatch-metrics-%{+YYYY.MM.dd}"
  }
}
Dockerfile:
FROM docker.elastic.co/logstash/logstash:7.6.0
RUN bin/logstash-plugin install logstash-input-cloudwatch
RUN bin/logstash-plugin install logstash-output-elasticsearch
Run the container with the following:
docker run -it \
-v $(current_dir)/config:/usr/share/logstash/config \
-v $(current_dir)/pipeline:/usr/share/logstash/pipeline \
logstash-with-plugins:latest
Error:
[ERROR] 2020-05-01 16:20:00.805 [[main]<cloudwatch] javapipeline - A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::CloudWatch namespace=>"AWS/EC2", metrics=>["CPUUtilization"], filters=>{"tag:Monitoring"=>"Yes"}, id=>"3dcfbed65cb898284f8766782e4041abdf2b6e1d085b6bdeca03ddd96ca817ef", role_arn=>"arn:aws:iam::???????????:role/logstash-role", region=>"us-west-2", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_58ed5d50-ec8a-4751-b78c-0e27722fa906", enable_metric=>true, charset=>"UTF-8">, role_session_name=>"logstash", statistics=>["SampleCount", "Average", "Minimum", "Maximum", "Sum"], interval=>900, period=>300, combined=>false>
Error: unable to sign request without credentials set
Exception: Aws::Errors::MissingCredentialsError
See https://github.com/lukewaite/logstash-input-cloudwatch-logs as an example
Migrated from EagerELK/logstash-input-cloudwatch#13.
I can get the metric data with the AWS CLI command below.
But I can't configure the Logstash input to collect it, because the AWS/ES namespace needs 2 dimensions, DomainName and ClientId, to return data. When I configure both in Logstash, they are split into 2 separate polling calls.
Please help.
Here are my configuration file, and the error I receive:
input {
  cloudwatch {
    namespace => "AWS/ES"
    metrics => [ "CPUUtilization" ]
    filters => {
      "DomainName" => "my-ES-domainName"
      "ClientId" => "my-AWS-AccountId"
    }
    aws_credentials_file => "/etc/logstash/aws_credential.yml"
    region => "ap-southeast-1"
  }
}
output {
  if [namespace] == "AWS/ES" {
    stdout { codec => rubydebug }
  }
}
Log received while running:
[2017-09-27T09:01:59,531][INFO ][logstash.inputs.cloudwatch] [AWS CloudWatch 200 1.549 0 retries] list_metrics(:namespace=>"AWS/ES")
[2017-09-27T09:01:59,533][INFO ][logstash.inputs.cloudwatch] Polling metric CPUUtilization
[2017-09-27T09:01:59,534][INFO ][logstash.inputs.cloudwatch] Filters: [{:name=>"DomainName", :values=>["my-ES-domainName"]}, {:name=>"ClientId", :values=>["my-AWS-AccountId"]}]
[2017-09-27T09:01:59,534][INFO ][logstash.inputs.cloudwatch] Polling resource DomainName: my-ES-domainName
[2017-09-27T09:01:59,587][INFO ][logstash.inputs.cloudwatch] [AWS CloudWatch 200 0.05 0 retries] get_metric_statistics(:dimensions=>[{:name=>"DomainName",:value=>"my-ES-domainName"}],:end_time=>"2017-09-27T09:01:59+00:00",:metric_name=>"CPUUtilization",:namespace=>"AWS/ES",:period=>300,:start_time=>"2017-09-27T08:46:59+00:00",:statistics=>["SampleCount","Average","Minimum","Maximum","Sum"])
[2017-09-27T09:01:59,588][INFO ][logstash.inputs.cloudwatch] Polling resource ClientId: my-AWS-AccountId
[2017-09-27T09:01:59,647][INFO ][logstash.inputs.cloudwatch] [AWS CloudWatch 200 0.059 0 retries] get_metric_statistics(:dimensions=>[{:name=>"ClientId",:value=>"my-AWS-AccountId"}],:end_time=>"2017-09-27T09:01:59+00:00",:metric_name=>"CPUUtilization",:namespace=>"AWS/ES",:period=>300,:start_time=>"2017-09-27T08:46:59+00:00",:statistics=>["SampleCount","Average","Minimum","Maximum","Sum"])
aws cloudwatch get-metric-statistics --namespace AWS/ES --metric-name CPUUtilization --dimensions Name=DomainName,Value=my-ES-domainName Name=ClientId,Value=my-AWS-AccountId --start-time 2017-09-27T08:26:58.000Z --end-time 2017-09-27T08:41:58.000Z --period 300 --statistics SampleCount Average
Result:
{
  "Datapoints": [
    {
      "SampleCount": 5.0,
      "Timestamp": "2017-09-27T08:26:00Z",
      "Average": 8.0,
      "Unit": "Percent"
    },
    {
      "SampleCount": 5.0,
      "Timestamp": "2017-09-27T08:36:00Z",
      "Average": 6.0,
      "Unit": "Percent"
    },
    {
      "SampleCount": 5.0,
      "Timestamp": "2017-09-27T08:31:00Z",
      "Average": 7.6,
      "Unit": "Percent"
    }
  ],
  "Label": "CPUUtilization"
}
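For what it's worth, the parameter dumps elsewhere in this thread show a combined option on the input (defaulting to false). Assuming it does what its name suggests — pass all filters as dimensions of a single get_metric_statistics call — a config along these lines may be worth trying; verify against the plugin docs for your version:

```
input {
  cloudwatch {
    namespace => "AWS/ES"
    metrics => [ "CPUUtilization" ]
    filters => {
      "DomainName" => "my-ES-domainName"
      "ClientId" => "my-AWS-AccountId"
    }
    combined => true
    region => "ap-southeast-1"
  }
}
```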
Hi
It would be a nice feature to not have to provide a filter; instead, all possible metrics should be fetched when no filter has been provided. Alternatively, an empty filter (i.e. filters => {}) could be supported as a "catch-all" alternative.
BR,
David Westlund
This is more like a question about how to do the following:
I have 2 metrics stored in CloudWatch: number of searches and number of detail page views. I want to use Logstash to calculate a conversion percentage of these 2, per hour. This data should be put back into another metric in Logstash.
Another requirement is that if Logstash exits for some reason, it has to be able to continue where it left off.
Is this even possible at all? I don't see an option to specify a start time, so I assume it's not.
I tried asking for help and debugging on both the forum and ServerFault, with no success. I am able to make AWS API calls from my server's command line, and I've verified that my aws credentials file allows read access to all users on the system (not that I'm crazy about that, but for debugging purposes I'm doing it).
input {
  cloudwatch {
    metrics => ["CPUUtilization"]
    filters => { "tag:Monitoring" => "Yes" }
    region => "us-east-1"
    namespace => "AWS/EC2"
    aws_credentials_file => "/home/pvencill/.aws/credentials" # this file is currently world-readable
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
Sample Data: ?? not sure what this is; it's whatever your plugin is querying from cloudwatch
Steps to Reproduce: Install the ELK stack, install the plugin, configure credentials, start the service.
Result is regular log entries of the format:
{:timestamp=>"2016-11-02T10:23:24.746000-0400", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n Plugin: <LogStash::Inputs::CloudWatch metrics=>[\"CPUUtilization\"], filters=>{\"tag:Monitoring\"=>\"Yes\"}, region=>\"us-east-1\", namespace=>\"AWS/EC2\", aws_credentials_file=>\"/home/pvencill/.aws/credentials\", codec=><LogStash::Codecs::Plain charset=>\"UTF-8\">, use_ssl=>true, statistics=>[\"SampleCount\", \"Average\", \"Minimum\", \"Maximum\", \"Sum\"], interval=>900, period=>300, combined=>false>\n Error: No metrics to query", :level=>:error}
Version: 6.3.0
Steps to Reproduce:
We are getting this error from the CloudWatch input plugin. This appears to be related to an unresolved thread here: https://discuss.elastic.co/t/cloudwatch-input-plugin-no-metrics-to-query-error-in-6-3-0/137032
Can you please help identify whether there is a specific issue related to the namespace or IAM profile in this instance? There are some indications in the discuss thread that this worked in 6.2.1. Not sure if this is a new bug?
[2019-02-01T13:08:39,269][INFO ][logstash.inputs.cloudwatch] Polling CloudWatch API
[2019-02-01T13:08:39,270][ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:cloudwatch
Plugin: <LogStash::Inputs::CloudWatch access_key_id=>"secret", secret_access_key=><password>, namespace=>"AWS/EBS", metrics=>["VolumeQueueLength"], filters=>{"tag:Monitoring"=>"Yes"}, region=>"us-west-2", id=>"304367d8a0857399d9c633e61239f211e0b842b926b448773fa8b72b8ae166aa", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_43df151a-dca4-4e32-b603-f45d9bcf6a05", enable_metric=>true, charset=>"UTF-8">, role_session_name=>"logstash", statistics=>["SampleCount", "Average", "Minimum", "Maximum", "Sum"], interval=>900, period=>300, combined=>false>
Error: No metrics to query
Exception: RuntimeError
Stack: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-cloudwatch-2.2.4/lib/logstash/inputs/cloudwatch.rb:154:in `run'
/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:512:in `inputworker'
/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:505:in `block in start_input'
input {
  cloudwatch {
    access_key_id => "secret"
    secret_access_key => "secret"
    namespace => "AWS/EBS"
    metrics => ["VolumeQueueLength"]
    filters => { "tag:Monitoring" => "Yes" }
    region => "us-west-2"
  }
}
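For context, here is a minimal sketch of how this class of error can arise. This is an assumption about the plugin's logic based on the stack trace pointing at run, not a confirmed reading of its source: the metrics available in the namespace are intersected with the configured ones, and an empty result aborts the poll.

```ruby
# Hypothetical illustration; variable names do not come from the plugin source.
available_in_namespace = []                    # e.g. ListMetrics returned nothing visible
configured             = ["VolumeQueueLength"]

queryable = available_in_namespace & configured
puts "No metrics to query" if queryable.empty? # the plugin raises RuntimeError here
```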
Hi,
I'm reading the docs and the source code for this input plugin. You've said that the filter parameter is to be provided as an array; however, all your examples show the filter as a hash. For example:
filters => { "tag:Group" => "API-Production" }
I'm confused. Is the filter an array or a hash? It certainly looks like a hash to me. I'm curious how the validation in the input source code allows this:
config :filters, :validate => :array
Could you please shed some light on this for me?
Many thanks,
Nick
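One plausible explanation, offered as an assumption about Logstash's config coercion rather than a confirmed reading of logstash-core: in Ruby, a Hash splats into an array of [key, value] pairs, so a hash literal can satisfy an :array validation:

```ruby
filters = { "tag:Group" => "API-Production" }

# Splatting a Hash yields its entries as [key, value] pairs,
# which is one way a hash literal can pass :validate => :array.
coerced = [*filters]
puts coerced.inspect   # => [["tag:Group", "API-Production"]]
```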
I was trying to use this plugin to collect Lambda metrics and post them to Elasticsearch.
Config:
cloudwatch {
  namespace => "AWS/Lambda"
  type => "cloudwatch_lambda"
  metrics => [ "Invocations", "Errors", "Duration", "ConcurrentExecutions" ]
  tags => [ "lambda-metric-logs" ]
  filters => { "tag:name" => "tag_name" }
  interval => 900
  region => "$APPLICATION_REGION"
}
The Logstash logs are below (surprisingly, they show the call succeeding, but with no data):
{
  "level": "INFO",
  "loggerName": "logstash.inputs.cloudwatch",
  "timeMillis": 1606209424153,
  "thread": "[main]<cloudwatch",
  "logEvent": {
    "message": "[Aws::CloudWatch::Client 200 0.076219 0 retries] get_metric_statistics(namespace:\"AWS/Lambda\",metric_name:\"Invocations\",start_time:2020-11-24 09:02:04 UTC,end_time:2020-11-24 09:17:04 UTC,period:300,statistics:[\"SampleCount\",\"Average\",\"Minimum\",\"Maximum\",\"Sum\"],dimensions:[{name:\"tag:sku\",value:\"[FILTERED]\"}]) \n"
  }
}
Logstash version - 7.9.0
Use the config above to reproduce the issue.
When I try to use the plugin, although it seems to retrieve the metrics, they never get outputted by logstash. I added a few extra logger.info lines to try to troubleshoot, but I just don't know enough about the plugin architecture to get far.
Polling CloudWatch API {:level=>:info, :file=>"logstash/inputs/cloudwatch.rb", :line=>"133", :method=>"run"}
[AWS CloudWatch 200 2.694 0 retries] list_metrics(:namespace=>"AWS/EC2")
{:level=>:info, :file=>"aws/core/client.rb", :line=>"410", :method=>"log_response"}
Polling metric CPUUtilization {:level=>:info, :file=>"logstash/inputs/cloudwatch.rb", :line=>"139", :method=>"run"}
Filters: [{:name=>"tag:Monitoring", :values=>["Yes"]}] {:level=>:info, :file=>"logstash/inputs/cloudwatch.rb", :line=>"140", :method=>"run"}
Pushing flush onto pipeline {:level=>:debug, :file=>"logstash/pipeline.rb", :line=>"450", :method=>"flush"}
Pushing flush onto pipeline {:level=>:debug, :file=>"logstash/pipeline.rb", :line=>"450", :method=>"flush"}
[AWS EC2 200 1.079 0 retries] describe_instances(:filters=>[{:name=>"tag:Monitoring",:values=>["Yes"]}])
{:level=>:info, :file=>"aws/core/client.rb", :line=>"410", :method=>"log_response"}
AWS/EC2 Instances: ["i-7ec3f7dd"] {:level=>:debug, :file=>"logstash/inputs/cloudwatch.rb", :line=>"264", :method=>"resources"}
Polling resource InstanceId: i-7ec3f7dd {:level=>:info, :file=>"logstash/inputs/cloudwatch.rb", :line=>"180", :method=>"fetch_resource_events"}
[AWS CloudWatch 200 0.678 0 retries] get_metric_statistics(:dimensions=>[{:name=>"InstanceId",:value=>"i-7ec3f7dd"}],:end_time=>"2016-02-17T18:49:41Z",:metric_name=>"CPUUtilization",:namespace=>"AWS/EC2",:period=>300,:start_time=>"2016-02-17T18:44:41Z",:statistics=>["SampleCount","Average","Minimum","Maximum","Sum"])
{:level=>:info, :file=>"aws/core/client.rb", :line=>"410", :method=>"log_response"}
DPs: {:datapoints=>[{:timestamp=>2016-02-17 18:44:00 UTC, :sample_count=>5.0, :unit=>"Percent", :minimum=>1.02, :maximum=>4.25, :sum=>10.66, :average=>2.132}], :label=>"CPUUtilization", :response_metadata=>{:request_id=>"343c9b68-d5a7-11e5-9777-4f5c72bb6080"}} {:level=>:info, :file=>"logstash/inputs/cloudwatch.rb", :line=>"183", :method=>"fetch_resource_events"}
DPArray: 1 {:level=>:info, :file=>"logstash/inputs/cloudwatch.rb", :line=>"184", :method=>"fetch_resource_events"}
Event: {:timestamp=>2016-02-17 18:44:00 UTC, :sample_count=>5.0, :unit=>"Percent", :minimum=>1.02, :maximum=>4.25, :sum=>10.66, :average=>2.132} {:level=>:info, :file=>"logstash/inputs/cloudwatch.rb", :line=>"186", :method=>"fetch_resource_events"}
Pushing flush onto pipeline {:level=>:debug, :file=>"logstash/pipeline.rb", :line=>"450", :method=>"flush"}
Pushing flush onto pipeline {:level=>:debug, :file=>"logstash/pipeline.rb", :line=>"450", :method=>"flush"}
Pushing flush onto pipeline {:level=>:debug, :file=>"logstash/pipeline.rb", :line=>"450", :method=>"flush"}
Pushing flush onto pipeline {:level=>:debug, :file=>"logstash/pipeline.rb", :line=>"450", :method=>"flush"}
Are you guys open to adding different fields to the output event, like tags or the IP address in the case of the EC2 namespace?
Here is an AWS document about custom metrics that can be performed on EC2 instances: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html
I would like to get support for these custom metrics. Specifically, MemoryUtilization and DiskSpaceUtilization.
Using the default values for Interval (15 minutes) and Period (5 minutes), AWS/RDS provides 3 datapoints for each interval (900 s / 300 s = 3, as expected), while AWS/EC2 only gives 2.
Request for ELB-namespaced CloudWatch metrics.
Also, I'd be willing to write this as well; I just need to find the resources key (which is probably ElbId or something similar).
V1:
input {
  # ElasticSearch AZ1
  cloudwatch {
    type => "custom-metrics"
    namespace => "CWAgent"
    metrics => [ "disk_used_percent", "disk_free", "disk_used", "disk_total" ]
    filters => [
      { path => "/" },
      { InstanceId => "i-0348f41427efbe150" },
      { device => "nvme0n1p1" },
      { fstype => "ext4" }
    ]
    region => "ap-southeast-1"
    aws_credentials_file => "/etc/logstash/conf.d/aws_credentials_file"
  }
}
output {
  stdout { codec => rubydebug }
}
This outputs an error:
[ERROR] 2018-09-28 08:36:12.816 [[main]<cloudwatch] pipeline - A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::CloudWatch aws_credentials_file=>"/etc/logstash/conf.d/aws_credentials_file", namespace=>"CWAgent", metrics=>["disk_used_percent", "disk_free", "disk_used", "disk_total"], filters=>[{"path"=>"/"}, {"InstanceId"=>"i-0348f41427efbe150"}, {"device"=>"nvme0n1p1"}, {"fstype"=>"ext4"}], id=>"795992bdd431e8496ae8d2faf950f5e656fd817606706f92699bad7184acd473", type=>"custom-metrics", region=>"ap-southeast-1", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_6dfad351-a845-4e8a-8ff1-a3400dc757ce", enable_metric=>true, charset=>"UTF-8">, role_session_name=>"logstash", statistics=>["SampleCount", "Average", "Minimum", "Maximum", "Sum"], interval=>900, period=>300, combined=>false>
Error: undefined method `each_pair' for #<Array:0x16180ab5>
Did you mean? each_entry
Exception: NoMethodError
Stack: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-cloudwatch-2.2.2/lib/logstash/inputs/cloudwatch.rb:175:in `from_resources'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-cloudwatch-2.2.2/lib/logstash/inputs/cloudwatch.rb:161:in `block in run'
org/jruby/RubyArray.java:1734:in `each'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-cloudwatch-2.2.2/lib/logstash/inputs/cloudwatch.rb:155:in `block in run'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:20:in `interval'
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-cloudwatch-2.2.2/lib/logstash/inputs/cloudwatch.rb:149:in `run'
/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:408:in `inputworker'
/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:402:in `block in start_input'
V2:
input {
  # ElasticSearch AZ1
  cloudwatch {
    type => "custom-metrics"
    namespace => "CWAgent"
    metrics => [ "disk_used_percent", "disk_free", "disk_used", "disk_total" ]
    filters => {
      path => "/"
      InstanceId => "i-0348f41427efbe150"
      device => "nvme0n1p1"
      fstype => "ext4"
    }
    region => "ap-southeast-1"
    aws_credentials_file => "/etc/logstash/conf.d/aws_credentials_file"
  }
}
output {
  stdout { codec => rubydebug }
}
This instead queries the metric once per filter: the plugin issues 4 API requests for one metric. We need all the dimensions to be queried in a single request; otherwise it doesn't work.
[INFO ] 2018-09-28 08:39:46.143 [[main]<cloudwatch] cloudwatch - Polling CloudWatch API
[INFO ] 2018-09-28 08:39:46.158 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2018-09-28 08:39:46.818 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2018-09-28 08:39:52.931 [[main]<cloudwatch] cloudwatch - [Aws::CloudWatch::Client 200 4.485635 0 retries] list_metrics(namespace:"CWAgent")
[INFO ] 2018-09-28 08:39:53.579 [[main]<cloudwatch] cloudwatch - [Aws::CloudWatch::Client 200 0.361178 0 retries] get_metric_statistics(namespace:"CWAgent",metric_name:"disk_total",start_time:2018-09-28 08:24:52 UTC,end_time:2018-09-28 08:39:52 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"path",value:"[FILTERED]"}])
[INFO ] 2018-09-28 08:39:53.659 [[main]<cloudwatch] cloudwatch - [Aws::CloudWatch::Client 200 0.054871 0 retries] get_metric_statistics(namespace:"CWAgent",metric_name:"disk_total",start_time:2018-09-28 08:24:53 UTC,end_time:2018-09-28 08:39:53 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"InstanceId",value:"[FILTERED]"}])
[INFO ] 2018-09-28 08:39:53.692 [[main]<cloudwatch] cloudwatch - [Aws::CloudWatch::Client 200 0.025595 0 retries] get_metric_statistics(namespace:"CWAgent",metric_name:"disk_total",start_time:2018-09-28 08:24:53 UTC,end_time:2018-09-28 08:39:53 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"device",value:"[FILTERED]"}])
[INFO ] 2018-09-28 08:39:53.730 [[main]<cloudwatch] cloudwatch - [Aws::CloudWatch::Client 200 0.032873 0 retries] get_metric_statistics(namespace:"CWAgent",metric_name:"disk_total",start_time:2018-09-28 08:24:53 UTC,end_time:2018-09-28 08:39:53 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"fstype",value:"[FILTERED]"}])
[INFO ] 2018-09-28 08:39:53.781 [[main]<cloudwatch] cloudwatch - [Aws::CloudWatch::Client 200 0.03495 0 retries] get_metric_statistics(namespace:"CWAgent",metric_name:"disk_used",start_time:2018-09-28 08:24:53 UTC,end_time:2018-09-28 08:39:53 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"path",value:"[FILTERED]"}])
[INFO ] 2018-09-28 08:39:53.898 [[main]<cloudwatch] cloudwatch - [Aws::CloudWatch::Client 200 0.107968 0 retries] get_metric_statistics(namespace:"CWAgent",metric_name:"disk_used",start_time:2018-09-28 08:24:53 UTC,end_time:2018-09-28 08:39:53 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"InstanceId",value:"[FILTERED]"}])
[INFO ] 2018-09-28 08:39:53.931 [[main]<cloudwatch] cloudwatch - [Aws::CloudWatch::Client 200 0.027331 0 retries] get_metric_statistics(namespace:"CWAgent",metric_name:"disk_used",start_time:2018-09-28 08:24:53 UTC,end_time:2018-09-28 08:39:53 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"device",value:"[FILTERED]"}])
[INFO ] 2018-09-28 08:39:53.974 [[main]<cloudwatch] cloudwatch - [Aws::CloudWatch::Client 200 0.033452 0 retries] get_metric_statistics(namespace:"CWAgent",metric_name:"disk_used",start_time:2018-09-28 08:24:53 UTC,end_time:2018-09-28 08:39:53 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"fstype",value:"[FILTERED]"}])
[INFO ] 2018-09-28 08:39:54.009 [[main]<cloudwatch] cloudwatch - [Aws::CloudWatch::Client 200 0.025037 0 retries] get_metric_statistics(namespace:"CWAgent",metric_name:"disk_free",start_time:2018-09-28 08:24:53 UTC,end_time:2018-09-28 08:39:53 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"path",value:"[FILTERED]"}])
[INFO ] 2018-09-28 08:39:54.033 [[main]<cloudwatch] cloudwatch - [Aws::CloudWatch::Client 200 0.018691 0 retries] get_metric_statistics(namespace:"CWAgent",metric_name:"disk_free",start_time:2018-09-28 08:24:54 UTC,end_time:2018-09-28 08:39:54 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"InstanceId",value:"[FILTERED]"}])
[INFO ] 2018-09-28 08:39:54.131 [[main]<cloudwatch] cloudwatch - [Aws::CloudWatch::Client 200 0.087238 0 retries] get_metric_statistics(namespace:"CWAgent",metric_name:"disk_free",start_time:2018-09-28 08:24:54 UTC,end_time:2018-09-28 08:39:54 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"device",value:"[FILTERED]"}])
[INFO ] 2018-09-28 08:39:54.168 [[main]<cloudwatch] cloudwatch - [Aws::CloudWatch::Client 200 0.02437 0 retries] get_metric_statistics(namespace:"CWAgent",metric_name:"disk_free",start_time:2018-09-28 08:24:54 UTC,end_time:2018-09-28 08:39:54 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"fstype",value:"[FILTERED]"}])
[INFO ] 2018-09-28 08:39:54.210 [[main]<cloudwatch] cloudwatch - [Aws::CloudWatch::Client 200 0.025173 0 retries] get_metric_statistics(namespace:"CWAgent",metric_name:"disk_used_percent",start_time:2018-09-28 08:24:54 UTC,end_time:2018-09-28 08:39:54 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"path",value:"[FILTERED]"}])
[INFO ] 2018-09-28 08:39:54.258 [[main]<cloudwatch] cloudwatch - [Aws::CloudWatch::Client 200 0.036867 0 retries] get_metric_statistics(namespace:"CWAgent",metric_name:"disk_used_percent",start_time:2018-09-28 08:24:54 UTC,end_time:2018-09-28 08:39:54 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"InstanceId",value:"[FILTERED]"}])
[INFO ] 2018-09-28 08:39:54.294 [[main]<cloudwatch] cloudwatch - [Aws::CloudWatch::Client 200 0.020601 0 retries] get_metric_statistics(namespace:"CWAgent",metric_name:"disk_used_percent",start_time:2018-09-28 08:24:54 UTC,end_time:2018-09-28 08:39:54 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"device",value:"[FILTERED]"}])
[INFO ] 2018-09-28 08:39:54.387 [[main]<cloudwatch] cloudwatch - [Aws::CloudWatch::Client 200 0.080744 0 retries] get_metric_statistics(namespace:"CWAgent",metric_name:"disk_used_percent",start_time:2018-09-28 08:24:54 UTC,end_time:2018-09-28 08:39:54 UTC,period:300,statistics:["SampleCount","Average","Minimum","Maximum","Sum"],dimensions:[{name:"fstype",value:"[FILTERED]"}])
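The log above confirms the fan-out: one list_metrics call followed by one get_metric_statistics call per (metric, dimension) pair. A quick sketch of that arithmetic for this config:

```ruby
metrics    = ["disk_used_percent", "disk_free", "disk_used", "disk_total"]
dimensions = ["path", "InstanceId", "device", "fstype"]

# One list_metrics call, then one get_metric_statistics call per
# (metric, dimension) pair -- matching the 16 get_metric_statistics
# requests visible in the log output above.
api_calls = 1 + metrics.length * dimensions.length
puts api_calls  # 17
```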
V3:
input {
  # ElasticSearch AZ1
  cloudwatch {
    type => "custom-metrics"
    namespace => "CWAgent"
    metrics => [ "disk_used_percent", "disk_free", "disk_used", "disk_total" ]
    filters => {
      path => "/",
      InstanceId => "i-0348f41427efbe150",
      device => "nvme0n1p1",
      fstype => "ext4"
    }
    region => "ap-southeast-1"
    aws_credentials_file => "/etc/logstash/conf.d/aws_credentials_file"
  }
}
output {
  stdout { codec => rubydebug }
}
This will output an error on configuration:
[ERROR] 2018-09-28 08:43:12.050 [Converge PipelineAction::Create<main>] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, {, } at line 8, column 20 (byte 217) after input {\n # ElasticSearch AZ1\n cloudwatch {\n type => \"custom-metrics\"\n namespace => \"CWAgent\"\n metrics => [ \"disk_used_percent\", \"disk_free\", \"disk_used\", \"disk_total\" ]\n filters => {\n path => \"/\"", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:41:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:49:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:11:in `block in compile_sources'", "org/jruby/RubyArray.java:2486:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:10:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:157:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:22:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:90:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:38:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:309:in `block in converge_state'"]}
Can you please let me know whether I am doing something wrong, or whether this is really an issue? The documentation also does not explain how to use multiple filters for this; it only specifies the type as Array, which in my case did not work.
Thanks!
Hendry
Hi,
I'm trying to use this plugin to import custom metrics. However, I noticed that the metric's statistics timestamp is overridden by the end_time of the requested period, which seems inconsistent to me:
def cleanup(event)
  event.delete :statistics
  event.delete :dimensions
  event[:start_time] = Time.parse(event[:start_time]).utc
  event[:end_time] = Time.parse(event[:end_time]).utc
  event[:timestamp] = event[:end_time]
  LogStash::Util.stringify_symbols(event)
end
Any reason for this behavior?
Thanks!
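For reference, a hedged sketch of one possible workaround (hypothetical name, not the plugin's actual code): keep the timestamp CloudWatch attached to the datapoint itself, and fall back to end_time only when the datapoint carries none.

```ruby
require "time"

# Hypothetical alternative cleanup that preserves the datapoint's own
# CloudWatch timestamp rather than stamping every event with the
# end_time of the polled window. Names here are illustrative.
def cleanup_keeping_datapoint_time(event)
  event.delete(:statistics)
  event.delete(:dimensions)
  event[:start_time] = Time.parse(event[:start_time]).utc
  event[:end_time]   = Time.parse(event[:end_time]).utc
  # ||= keeps an existing datapoint timestamp and only falls back
  # to end_time when the event has no timestamp of its own.
  event[:timestamp] ||= event[:end_time]
  event
end

event = {
  statistics: ["Average"],
  dimensions: [{ name: "path", value: "/" }],
  start_time: "2018-09-28 08:24:54 UTC",
  end_time:   "2018-09-28 08:39:54 UTC",
  timestamp:  Time.parse("2018-09-28 08:30:00 UTC").utc
}
cleanup_keeping_datapoint_time(event)
puts event[:timestamp]  # keeps the datapoint's 08:30:00 time, not end_time
```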
Thanks a lot for this plugin, I'm glad I found it, it does exactly what I needed it to do. 😄
However, when installing it with the Logstash plugin manager, I got version 1.1.0 (the latest on rubygems.org), which happens to be broken. I just tested the latest master from this repo and it works great.
Would really help with automation if a working version was in rubygems.org. Any chance of that happening any time soon?