
cloudwatch-logs-subscription-consumer's Issues

Setup in an existing VPC

I intend to set this up in an existing VPC: I already have the VPC, subnets, internet gateway, route table, and network ACL.
It would be nice to have a template that works with existing resources, along with a step-by-step guide to get it done.

@dvassallo any workaround for this?
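
One possible direction, assuming you're comfortable editing the template: replace the VPC, subnet, and gateway resources with parameters that reference your existing resources, then swap every Ref to the created VPC/subnet for those parameters. A minimal sketch of the kind of parameters involved (the names here are hypothetical, not from the shipped template):

    "Parameters": {
      "ExistingVpcId": {
        "Type": "AWS::EC2::VPC::Id",
        "Description": "Hypothetical parameter: ID of the existing VPC to deploy into"
      },
      "ExistingSubnetId": {
        "Type": "AWS::EC2::Subnet::Id",
        "Description": "Hypothetical parameter: existing subnet for the instances"
      }
    }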

How to map a geo_point in indices that are created automatically by elasticsearch-kopf?

I would really appreciate any help with the following issue:

Take a look at the mapping for the following index:

{
  "_default_": {
    "properties": {
      "@timestamp": {
        "format": "dateOptionalTime",
        "doc_values": true,
        "type": "date"
      },
      "@message": { "type": "string" },
      "@id": { "type": "string" }
    },
    "_all": { "enabled": false }
  },
  "development": {
    "properties": {
      "@timestamp": {
        "format": "dateOptionalTime",
        "doc_values": true,
        "type": "date"
      },
      "@log_stream": { "type": "string" },
      "@message": { "type": "string" },
      "Context": {
        "properties": {
          "LocationId": { "type": "string" },
          "SubCategoryId": { "type": "string" },
          "HttpServerName": { "type": "string" },
          "HttpRequestUri": { "type": "string" },
          "CategoryId": { "type": "string" },
          "RequestId": { "type": "string" },
          "Coordinate": { "type": "string" },
          "ServiceId": { "type": "string" },
          "UserId": { "type": "string" },
          "HttpMethod": { "type": "string" }
        }
      },
      "Message": { "type": "string" },
      "@id": { "type": "string" },
      "Thread": {
        "properties": {
          "Name": { "type": "string" },
          "Id": { "type": "long" },
          "Priority": { "type": "long" }
        }
      },
      "Timestamp": {
        "format": "dateOptionalTime",
        "type": "date"
      },
      "Marker": { "type": "string" },
      "@log_group": { "type": "string" },
      "@owner": { "type": "string" }
    },
    "_all": { "enabled": false }
  }
}

From the mapping above, you can see that the Coordinate property is mapped as type string, but I need a way to ensure that this property is of type geo_point.

Keep in mind that if I manually change the mapping for Coordinate to geo_point, it works and Kibana recognizes it as a geo_point type. However, when another daily index is created automatically, Coordinate is mapped as a string again and Kibana reports a mapping conflict.

https://cdn.discourse.org/elastic/uploads/default/optimized/2X/9/99d5ed421e580787948626382ab547ec17bbbd56_1_690x339.png
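
A common fix for this class of problem is an Elasticsearch index template: a template applies a mapping automatically to every newly created index whose name matches a pattern, so each daily index would get geo_point for Coordinate at creation time. A minimal sketch, assuming ES 1.x syntax and the consumer's default cwl-YYYY.MM.DD index naming (adjust the pattern and field path to your setup):

    curl -XPUT 'http://localhost:9200/_template/cwl-geopoint' -d '{
      "template": "cwl-*",
      "mappings": {
        "_default_": {
          "properties": {
            "Context": {
              "properties": {
                "Coordinate": { "type": "geo_point" }
              }
            }
          }
        }
      }
    }'

Note that a template only affects indices created after it is installed; existing daily indices would still need to be reindexed.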

CloudFormation template failure

Hi,

My colleague and I have been trying to set up a test environment to see whether this will be useful for our daily work. We hit an error when first deploying the supplied CloudFormation template, /configuration/cloudformation/cwl-elasticsearch.template. Is this something you have come across before? We've logged into the created instance and cfn-init is indeed accessible there. The error details are below:

16:27:58 UTC+0100 CREATE_FAILED AWS::CloudFormation::WaitCondition WaitCondition WaitCondition received failed message: 'failed to run cfn-init' for uniqueId: i-c8939d0a
Physical ID:arn:aws:cloudformation:us-west-1:532578262755:stack/test-stack-cf/ade64480-3abc-11e5-82fb-500c335a70e0/WaitHandle
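
For anyone debugging this: "failed to run cfn-init" is the generic signal from the template's user-data, and the real cause is usually in the cfn-init logs on the instance. If you can SSH in before the stack rolls back (or disable "Rollback on failure" when creating the stack), something like this should surface the failing step:

    # cfn-init writes its own logs; the failing command's output is usually here
    sudo tail -n 100 /var/log/cfn-init.log
    sudo tail -n 100 /var/log/cfn-init-cmd.log
    # user-data / cloud-init output can also help on Amazon Linux
    sudo tail -n 100 /var/log/cloud-init-output.log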

Support for multiple log groups

I have multiple log groups that I want to stream into this CloudFormation stack. So far I've only been able to stream one of them. Is there a way to support multiple log groups?
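
CloudWatch Logs allows a subscription filter per log group, and several log groups can target the same Kinesis stream; the consumer indexes whatever arrives on the stream. A sketch using the AWS CLI (the ARNs below are placeholders; reuse the stream and role created by the stack):

    # Repeat once per additional log group, all pointing at the same stream
    aws logs put-subscription-filter \
      --log-group-name "my-second-log-group" \
      --filter-name "cwl-to-kinesis" \
      --filter-pattern "" \
      --destination-arn "arn:aws:kinesis:us-east-1:123456789012:stream/YOUR_STREAM" \
      --role-arn "arn:aws:iam::123456789012:role/YOUR_CWL_TO_KINESIS_ROLE"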

Errors using consumer with ES 2.0

Hi - I'm attempting to use the consumer with our ES 2.0 cluster and am receiving errors. I know there were some breaking changes with the Java API in ES 2.0, so I'm assuming that is the issue here, but I'm hoping to give as much info as possible in order to get this resolved.

The consumer was compiled as instructed in the Readme with default configuration other than the ES cluster details and TRACE logging.

The error I am receiving on the consumer side is:

2015-11-13 19:53:47,418 INFO  transport - [Blockbuster] failed to get local cluster state for [#transport#-1][localhost][inet[elasticsearch-int.x.com/10.xx.xx.xx:9300]], disconnecting...
org.elasticsearch.transport.RemoteTransportException: Failed to deserialize exception response from stream
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize exception response from stream
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:178)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:130)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.StreamCorruptedException: Unsupported version: 1
    at org.elasticsearch.common.io.ThrowableObjectInputStream.readStreamHeader(ThrowableObjectInputStream.java:46)
    at java.io.ObjectInputStream.<init>(ObjectInputStream.java:299)
    at org.elasticsearch.common.io.ThrowableObjectInputStream.<init>(ThrowableObjectInputStream.java:38)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:175)
    ... 23 more

The error that I am receiving on the Elasticsearch side is:

[2015-11-13 19:56:03,117][WARN ][transport.netty          ] [Mister X] exception caught on transport layer [[id: 0x45211071, /10.xx.xx.xx:26266 => /10.xx.xx.xx:9300]], closing connection
java.lang.IllegalStateException: Message not fully read (request) for requestId [24], action [cluster/state], readerIndex [34] vs expected [49]; resetting
        at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:120)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:75)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
        at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
        at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
        at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

Both the ES cluster and the consumer are using Java 1.7.0_91.

Please let me know if there is anything else I can add.

Thanks.
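
For anyone hitting the same trace: "StreamCorruptedException: Unsupported version" on port 9300 is the classic symptom of a transport-client/cluster version mismatch, and the consumer bundles a 1.x Elasticsearch client that cannot speak the 2.0 transport protocol. A quick way to confirm the cluster version, plus the kind of dependency change a rebuild would need (the version below is illustrative, and the 2.x Java API changes may also require code fixes):

    # Confirm the cluster version over the REST port
    curl -s 'http://elasticsearch-int.x.com:9200' | grep '"number"'

    # Then rebuild the consumer with a matching client in pom.xml:
    #   <dependency>
    #     <groupId>org.elasticsearch</groupId>
    #     <artifactId>elasticsearch</artifactId>
    #     <version>2.0.0</version>
    #   </dependency>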

Consumer doesn't restart after instance reboot

The Elasticsearch instance got (accidentally) rebooted, and afterwards no data was being put into Elasticsearch. I checked Kinesis in the AWS Console and it showed 0 get requests. I emailed Daniel, who got back to me super quickly (thanks Daniel!) and suggested I rerun this portion of the CF script: https://github.com/awslabs/cloudwatch-logs-subscription-consumer/blob/master/configuration/cloudformation/cwl-elasticsearch.template#L675-L682

I dug the variables out of the CF resources and ran that script. It looks like something is happening, but I'm still not getting any data into ES. Daniel said this was probably worthy of an issue, so here it is.

I'll gladly send any logs that you need, but I don't even know what to send at this point.

Thanks,
matthew
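
A related note: the template starts the consumer once from instance user-data, so nothing relaunches it after a reboot. One workaround sketch, assuming the start commands from the template lines linked above are saved into a script on the instance (the path is arbitrary):

    # Save the consumer start commands (template lines 675-682) into a script,
    # e.g. /usr/local/bin/start-cwl-consumer.sh, and make it executable:
    sudo chmod +x /usr/local/bin/start-cwl-consumer.sh
    # Then have cron relaunch it on every boot:
    ( crontab -l 2>/dev/null; echo "@reboot /usr/local/bin/start-cwl-consumer.sh" ) | crontab -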

Kopf-Elasticsearch Version Mismatch

We were able to successfully deploy this CF template but Kopf is complaining that:

This version of kopf is not compatible with your ES version
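
kopf branches track Elasticsearch major versions, so the fix is usually reinstalling the branch that matches the cluster. A sketch for an ES 1.x cluster using the 1.x plugin tool (verify the branch name against the kopf README before running):

    sudo /usr/share/elasticsearch/bin/plugin --remove kopf
    sudo /usr/share/elasticsearch/bin/plugin --install lmenezes/elasticsearch-kopf/1.0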

Not consuming existing flow logs

I have about two weeks of flow logs collected in a log group, and I am able to spin this stack up successfully (thank you, great work on this!).

But it will only index and visualize data that arrived in the log group after I spun up the stack. How do I get this system to pick up everything already in the log group from the past X days?
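
A subscription filter only forwards events ingested after it is created, so the two weeks of existing flow logs never reach the Kinesis stream. One backfill approach is to read the old events out of the log group directly and index them into Elasticsearch yourself. A sketch with the AWS CLI (times are epoch milliseconds; reshaping and bulk-posting the output to ES is left out):

    # Pull the last 14 days of events out of the log group
    START=$(( ($(date +%s) - 14*24*3600) * 1000 ))
    aws logs filter-log-events \
      --log-group-name "my-flow-logs-group" \
      --start-time "$START" \
      --output json > old-events.json
    # old-events.json can then be transformed and POSTed to the ES _bulk API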

Fails to send JSON with a string prefix and no extracted fields to elasticsearch

There was a change here, 9eef3cd, to support JSON extracted fields with a non-JSON prefix. However, it misses the case where there are no extracted fields but the message still has a string prefix before the JSON:
https://github.com/awslabs/cloudwatch-logs-subscription-consumer/blob/master/src/main/java/com/amazonaws/services/logs/connectors/elasticsearch/CloudWatchLogsElasticsearchDocument.java#L100

For example, putting a log event with a message:

test json: {"hello": "world"}

This fails with the following exception in /var/log/cloudwatch-logs-subscription-consumer.log:

2015-12-23 18:47:39,825 ERROR KinesisConnectorRecordProcessor - Failed to transform record com.amazonaws.services.logs.subscriptions.CloudWatchLogsEvent@732d486e to output type
java.io.IOException: Error serializing the Elasticsearch document to JSON: A JSONObject text must begin with '{' at 1 [character 2 line 1]
        at com.amazonaws.services.logs.connectors.elasticsearch.ElasticsearchTransformer.fromClass(ElasticsearchTransformer.java:65)
        at com.amazonaws.services.logs.connectors.elasticsearch.ElasticsearchTransformer.fromClass(ElasticsearchTransformer.java:33)
        at com.amazonaws.services.kinesis.connectors.KinesisConnectorRecordProcessor.transformToOutput(KinesisConnectorRecordProcessor.java:150)
        at com.amazonaws.services.kinesis.connectors.KinesisConnectorRecordProcessor.processRecords(KinesisConnectorRecordProcessor.java:135)
        at com.amazonaws.services.kinesis.clientlibrary.lib.worker.V1ToV2RecordProcessorAdapter.processRecords(V1ToV2RecordProcessorAdapter.java:42)
        at com.amazonaws.services.kinesis.clientlibrary.lib.worker.ProcessTask.call(ProcessTask.java:172)
        at com.amazonaws.services.kinesis.clientlibrary.lib.worker.MetricsCollectingTaskDecorator.call(MetricsCollectingTaskDecorator.java:48)
        at com.amazonaws.services.kinesis.clientlibrary.lib.worker.MetricsCollectingTaskDecorator.call(MetricsCollectingTaskDecorator.java:23)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: com.amazonaws.util.json.JSONException: A JSONObject text must begin with '{' at 1 [character 2 line 1]
        at com.amazonaws.util.json.JSONTokener.syntaxError(JSONTokener.java:422)
        at com.amazonaws.util.json.JSONObject.<init>(JSONObject.java:196)
        at com.amazonaws.util.json.JSONObject.<init>(JSONObject.java:323)
        at com.amazonaws.services.logs.connectors.elasticsearch.CloudWatchLogsElasticsearchDocument.getFields(CloudWatchLogsElasticsearchDocument.java:101)
        at com.amazonaws.services.logs.connectors.elasticsearch.CloudWatchLogsElasticsearchDocument.<init>(CloudWatchLogsElasticsearchDocument.java:43)
        at com.amazonaws.services.logs.connectors.elasticsearch.ElasticsearchTransformer.fromClass(ElasticsearchTransformer.java:47)
        ... 11 more
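
For reference, a minimal way to reproduce this with the AWS CLI, assuming the log group and stream already exist and this is the first event on the stream (later puts require a sequence token):

    aws logs put-log-events \
      --log-group-name "my-group" \
      --log-stream-name "my-stream" \
      --log-events timestamp=$(date +%s000),message='test json: {"hello": "world"}'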

CloudTrail to CloudWatch Issues

Hi,

I'm setting this up manually and pointing it at our existing ES cluster (not Amazon ES). I have CloudTrail logging to CloudWatch Logs and a subscription to Kinesis, and all of that is working great; the consumer connects to Kinesis fine too, so that's not the issue.

The issue I'm seeing is this:

I get these WARN and ERROR messages in the logs about the nested JSON object returned in the requestParameters field:

2015-10-30 19:12:46,022 WARN  ElasticsearchEmitter - Returning 1 records as failed
2015-10-30 19:12:56,028 ERROR ElasticsearchEmitter - Record failed with message: MapperParsingException[failed to parse [requestParameters.iamInstanceProfile]]; nested: ElasticsearchIllegalArgumentException[unknown property [arn]];

In Elasticsearch I do get some data, but the bulk index operations fail and the JSON isn't being parsed properly (I've cut this output down a lot for readability):

failed to execute bulk item (index) index {[cwl-2015.10.30][CloudTrail [3225203955xxxxxxxxxxxxxxxxxxxx1130203408060579867], source[{"eventID":"e95e5.....
.....06xx","@owner":"xxxxxxxxxxxxxx","@id":"322xxxxxxxxxxxxxxxxxxxxxxx071130203408060579867"}]}
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [requestParameters.iamInstanceProfile]
at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:411)
at org.elasticsearch.index.mapper.object.ObjectMapper.serializeObject(ObjectMapper.java:554)
at org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:487)
at org.elasticsearch.index.mapper.object.ObjectMapper.serializeObject(ObjectMapper.java:554)
at org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:487)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:544)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:493)
at org.elasticsearch.index.shard.IndexShard.prepareCreate(IndexShard.java:466)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:418)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:148)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.performOnPrimary(TransportShardReplicationOperationAction.java:574)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase$1.doRun(TransportShardReplicationOperationAction.java:440)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.ElasticsearchIllegalArgumentException: unknown property [arn]
at org.elasticsearch.index.mapper.core.StringFieldMapper.parseCreateFieldForString(StringFieldMapper.java:331)
at org.elasticsearch.index.mapper.core.StringFieldMapper.parseCreateField(StringFieldMapper.java:277)
at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:401)
... 15 more

I'm not a Java developer, so I hesitate to jump into the code.

Thanks!
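
The root cause here is that CloudTrail's requestParameters varies in shape between events: iamInstanceProfile arrives as a plain ARN string in some events and as an {arn, id} object in others, and Elasticsearch's dynamic mapping locks in whichever shape it sees first. One blunt workaround is to stop indexing that subtree with an index template; the data stays in _source and the rest of the event remains searchable. A sketch, assuming the consumer's default cwl-* index naming:

    curl -XPUT 'http://localhost:9200/_template/cwl-cloudtrail' -d '{
      "template": "cwl-*",
      "mappings": {
        "_default_": {
          "properties": {
            "requestParameters": { "type": "object", "enabled": false },
            "responseElements": { "type": "object", "enabled": false }
          }
        }
      }
    }'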

Trying to set this up.

I have CloudWatch Logs and an Elasticsearch cluster with Kibana. I'm trying to set up the cloudwatch-logs-subscription-consumer from this repo.

Unfortunately, the documentation doesn't really tell me HOW to set it up. I know there is a CloudFormation template, but that template is not an option for me.
Do you have any other instructions that can help?
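
For a manual setup, the project is a standard Maven build, so the rough shape is: build the jar, fill in the connector properties (Kinesis stream, region, Elasticsearch endpoint), and run it. The artifact and class names below are placeholders; check the repository's README and sample properties for the real ones:

    git clone https://github.com/awslabs/cloudwatch-logs-subscription-consumer.git
    cd cloudwatch-logs-subscription-consumer
    mvn clean package
    # Configure stream/region/ES endpoint in the connector properties, then run
    # the build output (names are placeholders; see the README):
    java -cp target/<BUILT_JAR> <MAIN_CLASS>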

Sample CloudFormation template doesn't start

Hi, I am a beginner AWS operator.

I want Kinesis, Elasticsearch, and Kibana, so I searched Google and discovered this sample.
I would earnestly like to get this CloudFormation sample started.
But creating the CloudFormation stack from the sample template fails with this error:

19:12:55 UTC+0900 ROLLBACK_IN_PROGRESS AWS::CloudFormation::Stack fff The following resource(s) failed to create: [WaitCondition]. . Rollback requested by user.
19:12:53 UTC+0900 CREATE_FAILED AWS::CloudFormation::WaitCondition WaitCondition WaitCondition received failed message: 'failed to run cfn-init' for uniqueId: i-c227a060
Physical ID:arn:aws:cloudformation:ap-northeast-1:432113265161:stack/fff/f9925e20-5090-11e5-911f-5001aba75438/WaitHandle
19:09:45 UTC+0900 CREATE_COMPLETE AWS::AutoScaling::AutoScalingGroup ElasticsearchServerGroup
19:07:57 UTC+0900 CREATE_IN_PROGRESS AWS::AutoScaling::AutoScalingGroup ElasticsearchServerGroup Resource creation Initiated
19:07:56 UTC+0900 CREATE_IN_PROGRESS AWS::CloudFormation::WaitCondition WaitCondition Resource creation Initiated

Please help me.

Record failed with message: UnavailableShardsException

It seems I'm not able to index logs from CloudWatch:

Record failed: {"index":"cwl-2016.12.19","type":"logging","source":"{\"timestamp\":\"10.0.0.107\",\"@log_stream\":\"production.10.0.1.87.cows.http_access\",\"@timestamp\":1482185012343,\"@message\":\"10.0.0.107 - - [19/Dec/2016:22:03:32 +0000] \\\"HEAD /health HTTP/1.1\\\" 200 - \\\"-\\\" \\\"lua-resty-http/0.08 (Lua) ngx_lua/10005\\\"\",\"request_id\":\"-\",\"event\":\"- [19/Dec/2016:22:03:32 +0000] \\\"HEAD /health HTTP/1.1\\\" 200 - \\\"-\\\" \\\"lua-resty-http/0.08 (Lua) ngx_lua/10005\\\"\",\"@id\":\"33053830297342209648292451756418042022894691916625608704\",\"@log_group\":\"logging\",\"@owner\":\"412642013128\"}","id":"33053830297342209648292451756418042022894691916625608704","version":null,"ttl":null,"create":true}
2016-12-19 22:01:36,404 ERROR ElasticsearchEmitter - Record failed with message: UnavailableShardsException[[cwl-2016.12.19][7] Not enough active copies to meet write consistency of [QUORUM] (have 1, needed 2). Timeout: [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@7ac0d0eb]
2016-12-19 22:01:36,404 INFO  ElasticsearchEmitter - Emitted 0 records to Elasticsearch
2016-12-19 22:01:36,405 WARN  ElasticsearchEmitter - Cluster health is YELLOW.
2016-12-19 22:01:36,405 WARN  ElasticsearchEmitter - Returning 86 records as failed

I suspect I'm simply misunderstanding the proper configuration for Kibana 4, ES, and the CloudWatch log group. Here are my parameters:
[screenshot: CloudFormation stack parameters, 2016-12-19]
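
For what it's worth: "Not enough active copies to meet write consistency of [QUORUM] (have 1, needed 2)" together with YELLOW cluster health usually means a single data node holding indices configured with one replica, so a two-copy quorum can never be met. On a one-node test cluster, dropping replicas to zero is the usual fix. A sketch:

    # Existing daily indices
    curl -XPUT 'http://localhost:9200/cwl-*/_settings' \
      -d '{"index": {"number_of_replicas": 0}}'
    # And a template so future daily indices are created the same way
    curl -XPUT 'http://localhost:9200/_template/cwl-replicas' -d '{
      "template": "cwl-*",
      "settings": { "number_of_replicas": 0 }
    }'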

Keeps getting rolled back

I don't know if anyone else has hit this, but I keep getting rollback failures. The rollback itself also eventually failed due to dependencies.

Can't be used with Amazon Elasticsearch Service

This connector can't be used in conjunction with the Amazon Elasticsearch Service because it requires the ES transport protocol, which Amazon ES doesn't expose. Would it be possible for this connector to use the REST protocol?
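
For context, a REST-based emitter would talk to the signed HTTPS endpoint that Amazon ES does expose, e.g. via the bulk API. A sketch of the equivalent write (the domain endpoint below is a placeholder, and unless the domain's access policy is IP-based the request must also be SigV4-signed):

    printf '%s\n' \
      '{"index":{"_index":"cwl-2016.01.01","_type":"logs"}}' \
      '{"@message":"hello from REST"}' |
      curl -XPOST 'https://search-mydomain-abc123.us-east-1.es.amazonaws.com/_bulk' \
        --data-binary @-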
