graylog-plugin-aws's Introduction

AWS Plugin For Graylog

This plugin provides the following functionality:

  • AWS Logs input: Reads a stream of CloudWatch logs from a specific AWS CloudWatch log group matching a particular log filter pattern.
  • AWS Flow Logs input: Reads a stream of network flow logs for a VPC, subnet, or network interface.
  • AWS entity translation: Looks up additional data for an AWS entity (such as an EC2 instance or an ELB load balancer) referenced in a message.

Graylog Version Compatibility

Plugin Version | Graylog Version
3.0.x          | 3.0.x
2.5.x          | 2.5.x
2.4.x          | 2.4.x
2.3.x          | 2.3.x
1.3.2          | 2.2.2
1.2.1          | 2.1.3
0.6.0          | 2.0.x

Installation

Since Graylog version 2.4.0, this plugin is included in the Graylog server installation package by default.

Download the plugin and place the .jar file in your Graylog plugin directory. By default, the plugin directory is the plugins/ folder relative to your graylog-server directory; it can be configured in your graylog.conf file.

Restart graylog-server and you are done.
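
For example, on a typical DEB/RPM installation this boils down to the following two commands (the plugin path is an assumption; check the plugin_dir setting in your graylog.conf):

sudo cp graylog-plugin-aws-*.jar /usr/share/graylog-server/plugin/
sudo systemctl restart graylog-server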

General setup

After installing the plugin you will have a new cluster configuration section at “System -> Configurations” in your Graylog Web Interface. Make sure to complete the configuration before using any of the modules this plugin provides. You’ll see a lot of warnings in your graylog-server log file if you fail to do so.

Note that the AWS access key and secret key are currently not stored encrypted. Encrypted storage will follow shortly, before the final v1.0 release of this plugin.

AWS entity translation

The plugin configuration has a parameter that controls whether AWS entity translation should be attempted. When enabled, the plugin looks for certain fields, such as a source IP address, and automatically enriches the log message with additional information about the corresponding AWS entity (an EC2 instance, an ELB load balancer, an RDS database, …).

IAM permissions required to use this feature:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1469415911000",
            "Effect": "Allow",
            "Action": [
                "elasticloadbalancing:DescribeLoadBalancerAttributes",
                "elasticloadbalancing:DescribeLoadBalancers"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Sid": "Stmt1469415936000",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:DescribeNetworkInterfaceAttribute",
                "ec2:DescribeNetworkInterfaces"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

AWS Flow Logs/AWS Logs

Graylog can read AWS CloudWatch logs regardless of where they originated. As long as the logs end up in a CloudWatch log group, Graylog should be able to read them. The log messages for a particular CloudWatch log group must be forwarded to a Kinesis stream (via CloudWatch subscription filters), from which Graylog reads and processes them. Using a Kinesis stream allows Graylog to read the log messages sequentially. If the input is temporarily stopped and then started again, it will continue reading messages where it left off (as long as the data retention period on the Kinesis stream is long enough).

AWS Flow Logs and AWS Logs inputs both read logs from CloudWatch log groups. However, the AWS Flow Logs input internally uses a special decoder that parses fields from each message according to the AWS flow log message format.

The AWS Logs input automatically parses the aws_log_group and aws_log_stream fields, and leaves the remaining field extraction up to the user to define.

Several flow logs integration and analysis examples are described in this graylog.org blog post.

Preparation:

This input uses the AWS SDK to communicate with various AWS resources. Therefore, HTTPS communication must be permitted between the Graylog server and each of the resources. If communication on the network segment containing the Graylog cluster is restricted, please make sure that communication to the following endpoints is explicitly permitted.

monitoring.<region>.amazonaws.com
cloudtrail.<region>.amazonaws.com
dynamodb.<region>.amazonaws.com
kinesis.<region>.amazonaws.com
logs.<region>.amazonaws.com 
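
If you are unsure whether these endpoints are reachable from the Graylog node, a quick connectivity check with curl can help (the region below is just an example):

# Any HTTP status code means the endpoint is reachable; a timeout indicates blocked egress.
curl -sS -o /dev/null -w "%{http_code}\n" https://kinesis.eu-west-1.amazonaws.com
curl -sS -o /dev/null -w "%{http_code}\n" https://logs.eu-west-1.amazonaws.com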

Step 1: Enable Flow Logs

This step is only needed when setting up the AWS Flow Logs input (skip if setting up the AWS logs input). There are two ways to enable Flow Logs for an AWS network interface:

  • For a specific network interface: in your EC2 console, under the “Network Interfaces” main navigation link.
  • For all network interfaces in your VPC: using the VPC console.
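
Flow Logs can also be enabled with the AWS CLI. This is only a sketch: the VPC ID and the IAM role that grants CloudWatch Logs delivery permissions are placeholders you need to replace with your own values.

aws ec2 create-flow-logs \
    --resource-type VPC \
    --resource-ids vpc-xxxxxxxx \
    --traffic-type ALL \
    --log-group-name "my-flowlogs" \
    --deliver-logs-permission-arn "arn:aws:iam::123456789012:role/flowlogs-role"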

After a few minutes (usually about 15 minutes, but it can take up to an hour), AWS will start writing Flow Logs and you can view them in your CloudWatch console.

Now let’s instruct AWS to write the Flow Logs to a Kinesis stream.

Step 2: Set up Kinesis stream

Create a Kinesis stream using the AWS CLI tools:

aws kinesis create-stream --stream-name "flowlogs" --shard-count 1

Now get the Stream details:

aws kinesis describe-stream --stream-name "flowlogs"

Copy the StreamARN from the output. We'll need it later.
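
If you are scripting this, the ARN can also be extracted directly using the CLI's --query option (an optional shortcut):

aws kinesis describe-stream --stream-name "flowlogs" --query "StreamDescription.StreamARN" --output text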

Next, create a file called trust_policy.json with the following content:

{
  "Statement": {
    "Effect": "Allow",
    "Principal": { "Service": "logs.eu-west-1.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }
}

Make sure to change the region in the Service value (eu-west-1 above) to the region you are running in.

Now create a new IAM role using the trust policy file we just created:

aws iam create-role --role-name CWLtoKinesisRole --assume-role-policy-document file://trust_policy.json

Copy the ARN of the role you just created. You'll need it in the next step.

Create a new file called permissions.json and set both ARNs to the ones you copied above:

{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "kinesis:PutRecord",
      "Resource": "[YOUR KINESIS STREAM ARN HERE]"
    },
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "[YOUR IAM ARN HERE]"
    }
  ]
}

Now attach this policy to the role:

aws iam put-role-policy --role-name CWLtoKinesisRole --policy-name Permissions-Policy-For-CWL --policy-document file://permissions.json

The last step is to create the actual subscription filter that will write the Flow Logs to Kinesis. Note that the CloudWatch log group passed via --log-group-name must already exist and be the group your Flow Logs are written to:

aws logs put-subscription-filter \
    --filter-name "MatchAllValidFilter" \
    --filter-pattern "OK" \
    --log-group-name "my-flowlogs" \
    --destination-arn "[YOUR KINESIS STREAM ARN HERE]" \
    --role-arn "[YOUR IAM ARN HERE]"

You should now see Flow Logs being written into your Kinesis stream.
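
To double-check that the subscription filter was created on the right log group, you can list the filters (optional):

aws logs describe-subscription-filters --log-group-name "my-flowlogs"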

Step 3: Create AWS user for running the input

IAM permissions required for the input user:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "dynamodb:CreateTable",
        "dynamodb:DescribeTable",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:Scan",
        "dynamodb:UpdateItem",
        "ec2:DescribeInstances",
        "ec2:DescribeNetworkInterfaceAttribute",
        "ec2:DescribeNetworkInterfaces",
        "elasticloadbalancing:DescribeLoadBalancerAttributes",
        "elasticloadbalancing:DescribeLoadBalancers",
        "kinesis:GetRecords",
        "kinesis:GetShardIterator",
        "kinesis:ListShards"
      ],
      "Resource": "*"
    }
  ]
}
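
If you prefer to create this user with the AWS CLI, a minimal sketch looks like the following (the user and policy names are examples, and input_permissions.json is the policy document above saved to a file; the last command prints the access key pair):

aws iam create-user --user-name graylog-aws-input
aws iam put-user-policy --user-name graylog-aws-input \
    --policy-name graylog-aws-input-permissions \
    --policy-document file://input_permissions.json
aws iam create-access-key --user-name graylog-aws-input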

Once the user is created, take note of the Access key ID and Secret access key. You will need these to configure the input in Graylog.

Step 4: Launch input

Now go into the Graylog Web Interface and start a new AWS Flow Logs input. It will ask you for some simple parameters like the Kinesis Stream name you are writing your Flow Logs to.

You should see something like this in your graylog-server log file after starting the input:

2017-06-03T15:22:43.376Z INFO  [InputStateListener] Input [AWS FlowLogs Input/5932d443bb4feb3768b2fe6f] is now STARTING
2017-06-03T15:22:43.404Z INFO  [FlowLogReader] Starting AWS FlowLog reader.
2017-06-03T15:22:43.404Z INFO  [FlowLogTransport] Starting FlowLogs Kinesis reader thread.
2017-06-03T15:22:43.410Z INFO  [InputStateListener] Input [AWS FlowLogs Input/5932d443bb4feb3768b2fe6f] is now RUNNING
2017-06-03T15:22:43.509Z INFO  [LeaseCoordinator] With failover time 10000 ms and epsilon 25 ms, LeaseCoordinator will renew leases every 3308 ms, takeleases every 20050 ms, process maximum of 2147483647 leases and steal 1 lease(s) at a time.
2017-06-03T15:22:43.510Z INFO  [Worker] Initialization attempt 1
2017-06-03T15:22:43.511Z INFO  [Worker] Initializing LeaseCoordinator
2017-06-03T15:22:44.060Z INFO  [KinesisClientLibLeaseCoordinator] Created new lease table for coordinator with initial read capacity of 10 and write capacity of 10.
2017-06-03T15:22:54.251Z INFO  [Worker] Syncing Kinesis shard info
2017-06-03T15:22:55.077Z INFO  [Worker] Starting LeaseCoordinator
2017-06-03T15:22:55.279Z INFO  [LeaseTaker] Worker graylog-server-master saw 1 total leases, 1 available leases, 1 workers. Target is 1 leases, I have 0 leases, I will take 1 leases
2017-06-03T15:22:55.375Z INFO  [LeaseTaker] Worker graylog-server-master successfully took 1 leases: shardId-000000000000
2017-06-03T15:23:05.178Z INFO  [Worker] Initialization complete. Starting worker loop.
2017-06-03T15:23:05.203Z INFO  [Worker] Created new shardConsumer for : ShardInfo [shardId=shardId-000000000000, concurrencyToken=9f6910f6-4725-3464e7e54251, parentShardIds=[], checkpoint={SequenceNumber: LATEST,SubsequenceNumber: 0}]
2017-06-03T15:23:05.204Z INFO  [BlockOnParentShardTask] No need to block on parents [] of shard shardId-000000000000
2017-06-03T15:23:06.300Z INFO  [KinesisDataFetcher] Initializing shard shardId-000000000000 with LATEST
2017-06-03T15:23:06.719Z INFO  [FlowLogReader] Initializing Kinesis worker.
2017-06-03T15:23:44.277Z INFO  [Worker] Current stream shard assignments: shardId-000000000000
2017-06-03T15:23:44.277Z INFO  [Worker] Sleeping ...

It can take a few minutes until the first logs come in.

Important: AWS delivers Flow Logs intermittently in batches (usually in 5 to 15 minute intervals), and sometimes out of order. Keep this in mind when searching over messages in a recent time frame.

Throttling

Starting in version 2.5.0, the AWS Flow Logs and AWS Logs inputs can throttle themselves if contention occurs in the Graylog Journal. Throttling slows the rate of Kinesis stream intake for these inputs by pausing processing until the node catches up and the Journal contention clears. If the contention lasts for more than 60 seconds, the Kinesis consumer is temporarily stopped until the Journal contention is resolved. This setting can help to slow down the processing of large, intermittent log batches. If frequent or disruptive throttling occurs, additional hardware resources may need to be allocated to the Graylog node running the input.

To enable throttling, edit the input and check the Allow throttling this input checkbox.

See the Input Throttling section of the Graylog docs for more information about how and when throttling will occur.

CloudTrail setup and configuration

This input uses the AWS SDK to communicate with various AWS resources. Therefore, HTTPS communication must be permitted between the Graylog server and each of the resources. If communication on the network segment containing the Graylog cluster is restricted, please make sure that communication to the following endpoints is explicitly permitted.

monitoring.<region>.amazonaws.com
cloudtrail.<region>.amazonaws.com
sqs.<region>.amazonaws.com
sqs-fips.<region>.amazonaws.com
<bucket-name>.s3-<region>.amazonaws.com 

Step 1: Enabling CloudTrail for an AWS region

Start by enabling CloudTrail for an AWS region:

Configuring CloudTrail

  • Create a new S3 bucket: Yes
  • S3 bucket: Choose anything here, you do not need it for configuration of Graylog later
  • Log file prefix: Optional, not required for Graylog configuration
  • Include global services: Yes (you might want to change this when using CloudTrail in multiple AWS regions)
  • SNS notification for every log file delivery: Yes
  • SNS topic: Choose something like cloudtrail-log-write here. Remember the name.
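
The same trail can also be created with the AWS CLI. This is only a sketch: the trail name and bucket name are placeholders, the S3 bucket must already exist with a CloudTrail bucket policy attached, and depending on your account you may need to create the SNS topic and its access policy separately.

aws cloudtrail create-trail \
    --name graylog-trail \
    --s3-bucket-name my-cloudtrail-logfiles \
    --sns-topic-name cloudtrail-log-write \
    --include-global-service-events
aws cloudtrail start-logging --name graylog-trail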

Step 2: Set up SQS for CloudTrail write notifications

Navigate to the AWS SQS service (in the same region as the just enabled CloudTrail) and hit Create New Queue.

Creating a SQS queue

You can leave all settings on their default values for now but write down the Queue Name because you will need it for the Graylog configuration later. Our recommended default value is cloudtrail-notifications.

CloudTrail will write notifications about the log files it writes to S3 to this queue, and Graylog needs this information. Now let’s subscribe the SQS queue to the CloudTrail SNS topic you created in the first step:

Subscribing SQS queue to SNS topic

Right click on the new queue you just created and select Subscribe Queue to SNS Topic. Select the SNS topic that you configured in the first step when setting up CloudTrail. Hit subscribe, but make sure not to check the Raw message delivery option. See this AWS docs page for more info on raw message delivery.
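
The queue and the subscription can also be created from the CLI. This is a sketch with placeholder ARNs; note that, unlike the console's Subscribe Queue to SNS Topic action, the CLI does not automatically add an SQS access policy allowing SNS to deliver messages to the queue, so you may have to set that policy yourself.

aws sqs create-queue --queue-name cloudtrail-notifications
aws sns subscribe \
    --topic-arn arn:aws:sns:eu-west-1:123456789012:cloudtrail-log-write \
    --protocol sqs \
    --notification-endpoint arn:aws:sqs:eu-west-1:123456789012:cloudtrail-notifications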

That's it! You're all done with the AWS configuration.

Step 3: Install and configure the Graylog CloudTrail plugin

Copy the .jar file that you received to your Graylog plugin directory, which is configured in your graylog.conf configuration file using the plugin_dir variable.

Restart graylog-server and you should see the new input type AWS CloudTrail Input at System -> Inputs -> Launch new input. The required input configuration should be self-explanatory.

Important: The IAM user you configured in “System -> Configurations” has to have permissions to read CloudTrail logs from S3 and to receive and delete notifications from SQS:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1411854479000",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::cloudtrail-logfiles/*"
      ]
    }
  ]
}
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1411834838000",
      "Effect": "Allow",
      "Action": [
        "sqs:DeleteMessage",
        "sqs:ReceiveMessage"
      ],
      "Resource": [
        "arn:aws:sqs:eu-west-1:450000000000:cloudtrail-write"
      ]
    }
  ]
}

(Make sure to replace resource values with the actual ARNs of your environment)

More required IAM policies: The way we communicate with Kinesis requires us to store some metadata in AWS DynamoDB, and we also write some metrics back to AWS CloudWatch. For this to work, you have to attach the following standard AWS IAM policies to your AWS API user:

  • CloudWatchFullAccess
  • AmazonDynamoDBFullAccess
  • AmazonKinesisReadOnlyAccess

Note that these are very broad standard permissions. We recommend using them for a test setup, but then narrowing them down so they only allow read and write access to the DynamoDB table we automatically created (you'll see it in the list of tables) and only allow the cloudwatch:PutMetricData call. How to get the ARNs and how to create custom policies is out of scope for this guide.
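
As a rough sketch (not a drop-in policy), such a narrowed-down custom policy could look like the following. The DynamoDB table ARN is a placeholder for the table the plugin created, and you would keep AmazonKinesisReadOnlyAccess (or the Kinesis permissions listed earlier) attached separately:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "cloudwatch:PutMetricData",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:DescribeTable",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:Scan",
        "dynamodb:UpdateItem"
      ],
      "Resource": "arn:aws:dynamodb:<region>:<account-id>:table/<table-created-by-plugin>"
    }
  ]
}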

Usage

You should see CloudTrail messages coming in after launching the input. (Note that it can take a few minutes, depending on how frequently systems access your AWS resources.) You can even stop Graylog and it will catch up on all CloudTrail messages that were written while it was stopped once it is started again.

Now do a search in Graylog. Select “Search in all messages” and search for: source:"aws-cloudtrail"

Troubleshooting

Enable Debug Logging

To troubleshoot the AWS plugin, it may be useful to turn on debug logging for this plugin specifically. Note that changing the Graylog subsystem logging level to DEBUG in System > Logging does not affect the logging level for the AWS plugin. You will need to use the Graylog API to enable logging for this plugin. Execute this curl command against the Graylog node running the AWS plugin to enable DEBUG logging for it:

curl -I -X PUT http://<graylog-username>:<graylog-password>@<graylog-node-ip>:9000/api/system/loggers/org.graylog.aws/level/debug
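
Once you are done troubleshooting, you can switch the logger back (assuming INFO is your default level) using the same endpoint:

curl -I -X PUT http://<graylog-username>:<graylog-password>@<graylog-node-ip>:9000/api/system/loggers/org.graylog.aws/level/info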

CloudTrail troubleshooting

If the CloudTrail input is starting, and the debug log messages show that messages are being received, but no messages are visible when searching in Graylog, then make sure the SQS subscription is not set to deliver the messages in raw format.

Build

This project is using Maven 3 and requires Java 8 or higher.

You can build a plugin (JAR) with mvn package.

DEB and RPM packages can be built with mvn jdeb:jdeb and mvn rpm:rpm respectively.

Plugin Release

We are using the maven release plugin:

$ mvn release:prepare
[...]
$ mvn release:perform

This sets the version numbers, creates a tag and pushes to GitHub. Travis CI will build the release artifacts and upload to GitHub automatically.

graylog-plugin-aws's People

Contributors

a-yiorgos, antonebel, bernd, bronius, chunters, danielgrant, danotorrey, dennisoelkers, dependabot-preview[bot], dependabot[bot], edmundoa, gally47, garybot2, jalogisch, janheise, joschi, kmerz, kroepke, kyleknighted, linuspahl, moesterheld, mpfz0r, radykal-com, rongutierrez, sturman, thll, waab76

graylog-plugin-aws's Issues

Cloudtrail plugin is not working

Hi,

I have done the configuration from the section "CloudTrail setup and configuration". I have created the policies, configured the plugin, etc., but I can't see any CloudTrail messages in the search menu of the Graylog2 Web Interface.

I also can't see any related messages in the Graylog2 server.log ... I have no errors.

I am using Graylog 2.2.3+7adc951 on ip-10-XXX-X-XXX.ec2.internal (Oracle Corporation 1.8.0_121 on Linux 3.10.0-327.10.1.el7.x86_64)

I would like to know how to enable any debug log ....

Regards,

'The specified log group does not exist' error

Question, I can't get the last aws logs command to work. I get the error: The specified log group does not exist. Note that I've confirmed the ARNs are accurate

    aws logs put-subscription-filter --filter-name "MatchAllValidFilter"  --filter-pattern "OK"  --log-group-name "my-flowlogs"  --destination-arn "arn:aws:kinesis:us-west-2:xxxxxxxxxx:stream/flowlogs" --role-arn "arn:aws:iam::xxxxxxxxxx:role/CWLtoKinesisRole"

    An error occurred (ResourceNotFoundException) when calling the PutSubscriptionFilter operation: The specified log group does not exist.

My guess is either I missed a step or there is a step missing in the instructions. It would appear that 'my-flowlogs' needs to be created before running aws logs put-subscription-filter? I tried --log-group-name "flowlogs", but that produced the same error.

You might want to also point out that the graylog Omnibus AMI on AWS runs Ubuntu 14.04, where apt-get install awscli will by default grab an old version of the AWS CLI (which doesn't include the aws logs command). It took me a while to realize I needed to uninstall it and install directly from Git (using pip).

Results from previous commands:

$ aws kinesis describe-stream --stream-name "flowlogs"
{
    "StreamDescription": {
        "RetentionPeriodHours": 24, 
        "StreamName": "flowlogs", 
        "Shards": [
            {
                "ShardId": "shardId-000000000000", 
                "HashKeyRange": {
                    "EndingHashKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", 
                    "StartingHashKey": "0"
                }, 
                "SequenceNumberRange": {
                    "StartingSequenceNumber": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
                }
            }
        ], 
        "StreamARN": "arn:aws:kinesis:us-west-2:xxxxxxxxxxx:stream/flowlogs", 
        "EnhancedMonitoring": [
            {
                "ShardLevelMetrics": []
            }
        ], 
        "StreamStatus": "ACTIVE"
    }
}

$ aws iam get-role --role-name CWLtoKinesisRole
{
    "Role": {
        "AssumeRolePolicyDocument": {
            "Version": "2008-10-17", 
            "Statement": [
                {
                    "Action": "sts:AssumeRole", 
                    "Effect": "Allow", 
                    "Principal": {
                        "Service": "logs.us-west-2.amazonaws.com"
                    }
                }
            ]
        }, 
        "RoleId": "xxxxxxxxxxx", 
        "CreateDate": "2017-05-19T02:45:09Z", 
        "RoleName": "CWLtoKinesisRole", 
        "Path": "/", 
        "Arn": "arn:aws:iam::xxxxxxxxxx:role/CWLtoKinesisRole"
    }
}

Compatibility with graylog 2.3

So I tried installing this plugin in the first beta of graylog 2.3. The plugin installs fine and shows in the list of plugins, but there's no configuration section showing up for the plugin. Is support for 2.3 in the pipeline?

Plugin no longer working

Hi,
Thanks for the plugin.
Not sure exactly when it stopped working; the only changes to the environment were Graylog updates, as nothing changed on the AWS side.

These are the errors I get in the logs

2016-01-05_15:38:39.89265 ERROR [CloudTrailSubscriber] Could not read messages from SNS. This is most likely a misconfiguration of the plugin. Going into sleep loop and retrying.
2016-01-05_15:38:39.89309 com.amazonaws.AmazonServiceException: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
2016-01-05_15:38:39.89376
2016-01-05_15:38:39.89422 The Canonical String for this request should have been
2016-01-05_15:38:39.89491 'POST
2016-01-05_15:38:39.89528 /cloudtrail-notifications
2016-01-05_15:38:39.89572
2016-01-05_15:38:39.89621 host:sqs.us-east-1.amazonaws.com
2016-01-05_15:38:39.89810 user-agent:aws-sdk-java/1.9.20.1 Linux/3.13.0-74-generic Java_HotSpot(TM)_64-Bit_Server_VM/25.66-b17/1.8.0_66
2016-01-05_15:38:39.89917 x-amz-date:20160105T153838Z
2016-01-05_15:38:39.89939
2016-01-05_15:38:39.89976 host;user-agent;x-amz-date
2016-01-05_15:38:39.90057 62bd803266d1241d4d977f450bc1dec1a924d61a9fe6e7ca76a26c6acf706134'
2016-01-05_15:38:39.90077
2016-01-05_15:38:39.90098 The String-to-Sign should have been
2016-01-05_15:38:39.90172 'AWS4-HMAC-SHA256
2016-01-05_15:38:39.90196 20160105T153838Z
2016-01-05_15:38:39.90248 20160105/us-east-1/sqs/aws4_request
2016-01-05_15:38:39.90288 d1c80d30412173588b376b144733e905b8f3c4a0d25ca2637f9c679d85de0fb8' (Service: AmazonSQS; Status Code: 403; Error Code: SignatureDoesNotMatch; Request ID: 7a595cfc-0ab5-50d0-84f9-13f9ef0ac36e)
2016-01-05_15:38:39.90334       at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1077)
2016-01-05_15:38:39.90442       at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:725)
2016-01-05_15:38:39.90478       at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:460)
2016-01-05_15:38:39.90522       at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:295)
2016-01-05_15:38:39.90618       at com.amazonaws.services.sqs.AmazonSQSClient.invoke(AmazonSQSClient.java:2339)
2016-01-05_15:38:39.90660       at com.amazonaws.services.sqs.AmazonSQSClient.receiveMessage(AmazonSQSClient.java:1072)
2016-01-05_15:38:39.90764       at com.graylog2.input.cloudtrail.notifications.CloudtrailSQSClient.getNotifications(CloudtrailSQSClient.java:41)
2016-01-05_15:38:39.90810       at com.graylog2.input.cloudtrail.CloudTrailSubscriber.run(CloudTrailSubscriber.java:80)

Thanks

Role Based Authentication

Would it be possible to include role and cross-account role access rather than just access/secret keys?

This would be very beneficial when running Graylog in AWS.

Proxy configuration does not seem to work

The plugin configuration object has a proxyEnabled attribute, but it cannot be set anywhere. All AWS calls check for this attribute and do not use the configured proxy if it is set to false - which it is by default, and there is currently no way to set it to true.

I don't know if I'm completely missing something here, but it looks to me like this simply cannot work.

sns events to graylog

I would like to send SNS events to Graylog. This is a feature that is available in competitive products (e.g. https://www.loggly.com/docs/amazon-sns/).

It seems like this Plugin already has a lot of the pieces to make this happen. It can already poll an SQS queue subscribed to an SNS topic for Cloudtrail log notifications. But it doesn't appear to support the more generic case of simply wanting to send events from any SNS topic into Graylog.

Is there a way to make this work with graylog-plugin-aws as-is? If not, would it make sense to support the SNS->Graylog use case in this plugin?

Downloaded the latest AWS plugin will not load

I see this error after download the jar file and restarting my GL instance.
2017-10-20T19:55:59.801Z ERROR [CmdLineTool] Plugin "AWS plugins" requires version 2.4.0-beta.1 - not loading!

Version: 2.3.2+3df951e, codename Tegernseer
JVM: PID 47140, Oracle Corporation 1.8.0_141 on Linux 3.13.0-105-generic

More documentation about IAM setup

Background: I am trying to set things up so that my CloudWatch Logs are pushed through to a Graylog instance. My Graylog is running in a Docker container, and all of the AWS infrastructure is created and configured using Terraform scripts.

In the Graylog UI, the AWS Plugin Configuration popup says the following regarding the AWS access key and secret key fields:

Please consult the documentation for suggested rights to assign to the underlying IAM user.

The only documentation I can find is the README.

This contains a section about permissions required for AWS entity translation, so in my Terraform configs I've created a new user and attached this policy.

The documentation then goes on to explain the requirements for Flowlogs, then CloudTrail. The CloudTrail part will definitely be of interest but my immediate aim is to get access to CloudWatch Logs, and there isn't a specific section about that in the document.

The UI for the "AWS Logs" input has similar references to the README.

Access key of an AWS user with sufficient permissions. (See documentation)

The name of the Kinesis stream that receives your messages. See README for instructions on how to connect messages to a Kinesis Stream.

Is there any documentation which covers the specific requirements for the AWS Logs input?

Thanks in advance.

Cannot translate [us-east-2] into AWS region

Hi there

I've got the current plugin running, and in the Configuration area I entered a full list of AWS regions (just so I didn't need to remember to edit it later)

ap-northeast-1,ap-northeast-2,ap-south-1,ap-southeast-1,ap-southeast-2,ca-central-1,eu-central-1,eu-west-1,eu-west-2,sa-east-1,us-east-1,us-east-2,us-west-1,us-west-2

Well the graylog-server.log file is full of the following

2017-03-03T00:50:23.070Z INFO  [AWSPluginConfiguration] Cannot translate [ca-central-1] into AWS region. Make sure it is a correct region code like for example 'us-west-1'.
2017-03-03T00:50:23.071Z INFO  [AWSPluginConfiguration] Cannot translate [eu-west-2] into AWS region. Make sure it is a correct region code like for example 'us-west-1'.
2017-03-03T00:50:23.071Z INFO  [AWSPluginConfiguration] Cannot translate [us-east-2] into AWS region. Make sure it is a correct region code like for example 'us-west-1'.

Any ideas what's behind that? Maybe the account I'm using doesn't have access to those regions, but as I only have one "Input" AWS channel and that's "aws_sqs_region: us-east-1", I don't understand where these comments about (say) "ca-central-1" come from, especially as I also don't have anything in (say) eu-central-1 - and that isn't showing up as an error message

Jason

AWS plugin stopped processing messages

Hello!

Plugin just stopped working, I can see the following in the logs:

2017-09-29T09:25:13.256Z ERROR [CloudTrailSubscriber] Could not read messages from SQS. This is most likely a misconfiguration of the plugin. Going into sleep loop and retrying.
java.lang.RuntimeException: Could not parse SNS notification: {
  "Type" : "Notification",
  "MessageId" : "5b0a73e6-a4f8-11e7-8dfb-8f76310a10a8",
  "TopicArn" : "arn:aws:sns:eu-west-1:123456789012:cloudtrail-log-write",
  "Subject" : "[AWS Config:eu-west-1] AWS::RDS::DBSnapshot rds:instance-2017-09-03-23-11 Dele...",
  "Message" : "{\"configurationItemDiff\":{\"changedProperties\":{\"Relationships.0\":{\"previousValue\":{\"resourceId\":\"vpc-12345678\",\"resourceName\":null,\"resourceType\":\"AWS::EC2::VPC\",\"name\":\"Is associated with Vpc\"},\"updatedValue\":null,\"changeType\":\"DELETE\"},\"SupplementaryConfiguration.Tags\":{\"previousValue\":[],\"updatedValue\":null,\"changeType\":\"DELETE\"},\"SupplementaryConfiguration.DBSnapshotAttributes\":{\"previousValue\":[{\"attributeName\":\"restore\",\"attributeValues\":[]}],\"updatedValue\":null,\"changeType\":\"DELETE\"},\"Configuration\":{\"previousValue\":{\"dBSnapshotIdentifier\":\"rds:instance-2017-09-03-23-11\",\"dBInstanceIdentifier\":\"instance\",\"snapshotCreateTime\":\"2017-09-03T23:11:38.218Z\",\"engine\":\"mysql\",\"allocatedStorage\":200,\"status\":\"available\",\"port\":3306,\"availabilityZone\":\"eu-west-1b\",\"vpcId\":\"vpc-12345678\",\"instanceCreateTime\":\"2015-04-09T07:08:07.476Z\",\"masterUsername\":\"root\",\"engineVersion\":\"5.6.34\",\"licenseModel\":\"general-public-license\",\"snapshotType\":\"automated\",\"iops\":null,\"optionGroupName\":\"default:mysql-5-6\",\"percentProgress\":100,\"sourceRegion\":null,\"sourceDBSnapshotIdentifier\":null,\"storageType\":\"standard\",\"tdeCredentialArn\":null,\"encrypted\":false,\"kmsKeyId\":null,\"dBSnapshotArn\":\"arn:aws:rds:eu-west-1:123456789012:snapshot:rds:instance-2017-09-03-23-11\",\"timezone\":null,\"iAMDatabaseAuthenticationEnabled\":false},\"updatedValue\":null,\"changeType\":\"DELETE\"}},\"changeType\":\"DELETE\"},\"configurationItem\":{\"relatedEvents\":[],\"relationships\":[],\"configuration\":null,\"supplementaryConfiguration\":{},\"tags\":{},\"configurationItemVersion\":\"1.2\",\"configurationItemCaptureTime\":\"2017-09-28T19:54:47.815Z\",\"configurationStateId\":1234567890123,\"awsAccountId\":\"123456789012\",\"configurationItemStatus\":\"ResourceDeleted\",\"resourceType\":\"AWS::RDS::DBSnapshot\",\"resourceId\":\"rds:instance-2017-09-03-23-11\",\"resourceName\":\"rds:instance-2017-09-03-23-11\",\"ARN\":\"arn:aws:rds:eu-west-1:123456789012:snapshot:rds:instance-2017-09-03-23-11\",\"awsRegion\":\"eu-west-1\",\"availabilityZone\":null,\"configurationStateMd5Hash\":\"b026324c6904b2a9cb4b88d6d61c81d1\",\"resourceCreationTime\":null},\"notificationCreationTime\":\"2017-09-28T19:54:48.311Z\",\"messageType\":\"ConfigurationItemChangeNotification\",\"recordVersion\":\"1.2\"}",
  "Timestamp" : "2017-09-28T19:54:58.543Z",
  "SignatureVersion" : "1",
  "Signature" : "...",
  "SigningCertURL" : "https://sns.eu-west-1.amazonaws.com/SimpleNotificationService-....pem",
  "UnsubscribeURL" : "https://sns.eu-west-1.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:eu-west-1:123456789012:cloudtrail-log-write:5b0a73e6-a4f8-11e7-8dfb-8f76310a10a8"
}
        at org.graylog.aws.inputs.cloudtrail.notifications.CloudtrailSNSNotificationParser.parse(CloudtrailSNSNotificationParser.java:36) ~[graylog-plugin-aws-2.3.1.jar:?]
        at org.graylog.aws.inputs.cloudtrail.notifications.CloudtrailSQSClient.getNotifications(CloudtrailSQSClient.java:51) ~[graylog-plugin-aws-2.3.1.jar:?]
        at org.graylog.aws.inputs.cloudtrail.CloudTrailSubscriber.run(CloudTrailSubscriber.java:86) [graylog-plugin-aws-2.3.1.jar:?]
Caused by: com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "configurationItemDiff" (class org.graylog.aws.inputs.cloudtrail.json.CloudtrailWriteNotification), not marked as ignorable (2 known properties: "s3ObjectKey", "s3Bucket"])
 at [Source: {"configurationItemDiff":{"changedProperties":{"Relationships.0":{"previousValue":{"resourceId":"vpc-12345678","resourceName":null,"resourceType":"AWS::EC2::VPC","name":"Is associated with Vpc"},"updatedValue":null,"changeType":"DELETE"},"SupplementaryConfiguration.Tags":{"previousValue":[],"updatedValue":null,"changeType":"DELETE"},"SupplementaryConfiguration.DBSnapshotAttributes":{"previousValue":[{"attributeName":"restore","attributeValues":[]}],"updatedValue":null,"changeType":"DELETE"},"Configuration":{"previousValue":{"dBSnapshotIdentifier":"rds:instance-2017-09-03-23-11","dBInstanceIdentifier":"instance","snapshotCreateTime":"2017-09-03T23:11:38.218Z","engine":"mysql","allocatedStorage":200,"status":"available","port":3306,"availabilityZone":"eu-west-1b","vpcId":"vpc-12345678","instanceCreateTime":"2015-04-09T07:08:07.476Z","masterUsername":"root","engineVersion":"5.6.34","licenseModel":"general-public-license","snapshotType":"automated","iops":null,"optionGroupName":"default:mysql-5-6","percentProgress":100,"sourceRegion":null,"sourceDBSnapshotIdentifier":null,"storageType":"standard","tdeCredentialArn":null,"encrypted":false,"kmsKeyId":null,"dBSnapshotArn":"arn:aws:rds:eu-west-1:123456789012:snapshot:rds:instance-2017-09-03-23-11","timezone":null,"iAMDatabaseAuthenticationEnabled":false},"updatedValue":null,"changeType":"DELETE"}},"changeType":"DELETE"},"configurationItem":{"relatedEvents":[],"relationships":[],"configuration":null,"supplementaryConfiguration":{},"tags":{},"configurationItemVersion":"1.2","configurationItemCaptureTime":"2017-09-28T19:54:47.815Z","configurationStateId":1234567890123,"awsAccountId":"123456789012","configurationItemStatus":"ResourceDeleted","resourceType":"AWS::RDS::DBSnapshot","resourceId":"rds:instance-2017-09-03-23-11","resourceName":"rds:instance-2017-09-03-23-11","ARN":"arn:aws:rds:eu-west-1:123456789012:snapshot:rds:instance-2017-09-03-23-11","awsRegion":"eu-west-1","availabilityZone":null,"configurationStateMd5Hash":"b026324c6904b2a9cb4b88d6d61c81d1","resourceCreationTime":null},"notificationCreationTime":"2017-09-28T19:54:48.311Z","messageType":"ConfigurationItemChangeNotification","recordVersion":"1.2"}; line: 1, column: 27] (through reference chain: org.graylog.aws.inputs.cloudtrail.json.CloudtrailWriteNotification["configurationItemDiff"])
        at com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:62) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.DeserializationContext.handleUnknownProperty(DeserializationContext.java:834) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.deser.std.StdDeserializer.handleUnknownProperty(StdDeserializer.java:1093) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownProperty(BeanDeserializerBase.java:1478) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownVanilla(BeanDeserializerBase.java:1456) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:282) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:140) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3814) ~[graylog.jar:?]
        at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2858) ~[graylog.jar:?]
        at org.graylog.aws.inputs.cloudtrail.notifications.CloudtrailSNSNotificationParser.parse(CloudtrailSNSNotificationParser.java:30) ~[?:?]
        ... 2 more

Configuration looks fine, seems it just cannot parse the message.

Cloudtrail plugin is not reading messages from SQS

Hi,

I have done the necessary configurations for cloudtrail notifications. I can see messages being delivered in my SQS but for some reason graylog is not reading those messages. I checked the server logs too. There isn't any errors being thrown. Is there any way to enable debug for the plugin? I'm using the latest graylog version 2.4.3.

Thanks in advance.

Instructions to create IAM CWLtoKinesisRole role fail

Possible that AWS instructions are outdated?

aws iam create-role --role-name CWLtoKinesisRole --assume-role-policy-document file://trust_policy.json

output:

A client error (MalformedPolicyDocument) occurred when calling the CreateRole operation: This policy contains invalid Json

InstanceLookupTable throws errors when not running inside AWS

The InstanceLookupTable refresh throws errors when not running inside an AWS environment. It should not run when disabled or the Graylog setup does not reside inside AWS.

2017-10-04 01:17:59,851 ERROR: org.graylog.aws.processors.instancelookup.InstanceLookupTable - Error when trying to refresh AWS instance lookup table in [us-east-1]
com.amazonaws.SdkClientException: Unable to execute HTTP request: ec2.us-east-1.amazonaws.com
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1068) ~[aws-java-sdk-core-1.11.174.jar:?]
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1034) ~[aws-java-sdk-core-1.11.174.jar:?]
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:741) ~[aws-java-sdk-core-1.11.174.jar:?]
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:715) ~[aws-java-sdk-core-1.11.174.jar:?]
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:697) ~[aws-java-sdk-core-1.11.174.jar:?]
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:665) ~[aws-java-sdk-core-1.11.174.jar:?]
	at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:647) ~[aws-java-sdk-core-1.11.174.jar:?]
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:511) ~[aws-java-sdk-core-1.11.174.jar:?]
	at com.amazonaws.services.ec2.AmazonEC2Client.doInvoke(AmazonEC2Client.java:13930) ~[aws-java-sdk-ec2-1.11.174.jar:?]
	at com.amazonaws.services.ec2.AmazonEC2Client.invoke(AmazonEC2Client.java:13906) ~[aws-java-sdk-ec2-1.11.174.jar:?]
	at com.amazonaws.services.ec2.AmazonEC2Client.executeDescribeNetworkInterfaces(AmazonEC2Client.java:7290) ~[aws-java-sdk-ec2-1.11.174.jar:?]
	at com.amazonaws.services.ec2.AmazonEC2Client.describeNetworkInterfaces(AmazonEC2Client.java:7266) ~[aws-java-sdk-ec2-1.11.174.jar:?]
	at com.amazonaws.services.ec2.AmazonEC2Client.describeNetworkInterfaces(AmazonEC2Client.java:7302) ~[aws-java-sdk-ec2-1.11.174.jar:?]
	at org.graylog.aws.processors.instancelookup.InstanceLookupTable.reload(InstanceLookupTable.java:68) [classes/:?]
	at org.graylog.aws.processors.instancelookup.AWSInstanceNameLookupProcessor$1.run(AWSInstanceNameLookupProcessor.java:82) [classes/:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_144]
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_144]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_144]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_144]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_144]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_144]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]
Caused by: java.net.UnknownHostException: ec2.us-east-1.amazonaws.com
	at java.net.InetAddress.getAllByName0(InetAddress.java:1280) ~[?:1.8.0_144]
	at java.net.InetAddress.getAllByName(InetAddress.java:1192) ~[?:1.8.0_144]
	at java.net.InetAddress.getAllByName(InetAddress.java:1126) ~[?:1.8.0_144]
	at com.amazonaws.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:27) ~[aws-java-sdk-core-1.11.174.jar:?]
	at com.amazonaws.http.DelegatingDnsResolver.resolve(DelegatingDnsResolver.java:38) ~[aws-java-sdk-core-1.11.174.jar:?]
	at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112) ~[httpclient-4.5.3.jar:4.5.3]
	at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:359) ~[httpclient-4.5.3.jar:4.5.3]
	at sun.reflect.GeneratedMethodAccessor78.invoke(Unknown Source) ~[?:?]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_144]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_144]
	at com.amazonaws.http.conn.ClientConnectionManagerFactory$Handler.invoke(ClientConnectionManagerFactory.java:76) ~[aws-java-sdk-core-1.11.174.jar:?]
	at com.amazonaws.http.conn.$Proxy198.connect(Unknown Source) ~[?:?]
	at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:381) ~[httpclient-4.5.3.jar:4.5.3]
	at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:237) ~[httpclient-4.5.3.jar:4.5.3]
	at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185) ~[httpclient-4.5.3.jar:4.5.3]
	at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) ~[httpclient-4.5.3.jar:4.5.3]
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) ~[httpclient-4.5.3.jar:4.5.3]
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56) ~[httpclient-4.5.3.jar:4.5.3]
	at com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72) ~[aws-java-sdk-core-1.11.174.jar:?]
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1189) ~[aws-java-sdk-core-1.11.174.jar:?]
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1029) ~[aws-java-sdk-core-1.11.174.jar:?]
	... 20 more

AWS CloudTrail behind proxy

Greetings,

Install: AWS AMI
Graylog: v2.3.2
AWS plugin: v2.3.1

I run graylog java instance with -Dhttp.proxyHost, -Dhttp.proxyPort and -Dhttp.nonProxyHosts options.
The whole infrastructure is behind a corporate proxy. All AWS endpoints are whitelisted on the proxy server.
I checked the proxy logs and there is no incoming connection to SQS from the AWS CloudTrail plugin. The Graylog logs show that the connection to the AWS SQS service times out.

Does the AWS plugin work behind a proxy?

Configuration issue

How does one configure the latest version of this plugin?
The documentation states the following.

After installing the plugin you will have a new cluster configuration section at “System -> Configurations” in your Graylog Web Interface.

Yet navigating to System -> Configurations shows no options for this plugin (see attached image).

The plugin is properly installed, the logs show

2016-09-08 15:33:59,208 WARN : org.graylog.aws.processors.instancelookup.AWSInstanceNameLookupProcessor - AWS plugin is not fully configured. No instance lookups will happen.
2016-09-08 15:33:59,723 WARN : org.graylog.aws.processors.instancelookup.AWSInstanceNameLookupProcessor - AWS plugin is not fully configured. No instance lookups will happen.
2016-09-08 15:33:59,887 WARN : org.graylog.aws.processors.instancelookup.AWSInstanceNameLookupProcessor - AWS plugin is not fully configured. No instance lookups will happen.

I can see the plugin in the Graylog Inputs, but it offers no options for configuring credentials or other settings; it only asks for Region (see image).

Obviously the input can't be started without the proper configuration.

What am I missing?
Thanks

Check if the local lookup table is instantiated too often

2016-11-26 19:07:11,333 INFO : org.graylog.aws.processors.instancelookup.InstanceLookupTable - Reloading AWS instance lookup table.
2016-11-26 19:07:11,394 INFO : org.graylog.aws.processors.instancelookup.InstanceLookupTable - Reloading AWS instance lookup table.
2016-11-26 19:07:12,020 INFO : org.graylog.aws.processors.instancelookup.InstanceLookupTable - Reloading AWS instance lookup table.
2016-11-26 19:07:12,020 INFO : org.graylog.aws.processors.instancelookup.InstanceLookupTable - Reloading AWS instance lookup table.

Running with 4 ProcessBufferProcessor.

Cannot save plugin configuration

When trying to save the AWS plugin configuration on the "Configurations" page I get a HTTP 400 with the following error in the graylog-server log:

2018-01-16T13:36:02.001-06:00 ERROR [ClusterConfigResource] Couldn't parse cluster configuration "org.graylog.aws.config.AWSPluginConfiguration".
com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "flowlogs_last_run" (class org.graylog.aws.config.AutoValue_AWSPluginConfiguration), not marked as ignorable (5 known properties: "secret_key", "lookup_regions", "lookups_enabled", "access_key", "proxy_enabled"])
 at [Source: org.glassfish.jersey.message.internal.EntityInputStream@1a5a0937; line: 1, column: 164] (through reference chain: org.graylog.aws.config.AutoValue_AWSPluginConfiguration["flowlogs_last_run"])
	at com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:62) ~[graylog.jar:?]
	at com.fasterxml.jackson.databind.DeserializationContext.handleUnknownProperty(DeserializationContext.java:834) ~[graylog.jar:?]
	at com.fasterxml.jackson.databind.deser.std.StdDeserializer.handleUnknownProperty(StdDeserializer.java:1093) ~[graylog.jar:?]
	at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownProperty(BeanDeserializerBase.java:1478) ~[graylog.jar:?]

graylog-plugin-aws 0.6.0 not working on new instance of Graylog 2.0.3

This is strange since the plugin IS working on another instance of Graylog 2.0.3 that I upgraded from a Graylog 1.3.4 instance.

BTW, this is the same error that @123dev reported in issue #9 after it was closed. Didn't see a response over there so opened this new issue.

This is the error:

2016-07-14_17:06:19.87439 2016-07-14 11:06:19,874 INFO : com.graylog2.input.cloudtrail.CloudTrailTransport - Starting cloud trail subscriber
2016-07-14_17:06:19.87516 2016-07-14 11:06:19,874 INFO : org.graylog2.inputs.InputStateListener - Input [AWS CloudTrail Input/5787c68b4335af0398c2fd03] is now STARTING
2016-07-14_17:06:19.87725 Exception in thread "Thread-16" java.lang.NoClassDefFoundError: Could not initialize class com.amazonaws.http.conn.ssl.SdkTLSSocketFactory
2016-07-14_17:06:19.87827       at com.amazonaws.http.apache.client.impl.ApacheConnectionManagerFactory.getPreferredSocketFactory(ApacheConnectionManagerFactory.java:87)
2016-07-14_17:06:19.87958       at com.amazonaws.http.apache.client.impl.ApacheConnectionManagerFactory.create(ApacheConnectionManagerFactory.java:65)
2016-07-14_17:06:19.88028       at com.amazonaws.http.apache.client.impl.ApacheConnectionManagerFactory.create(ApacheConnectionManagerFactory.java:58)
2016-07-14_17:06:19.88089       at com.amazonaws.http.apache.client.impl.ApacheHttpClientFactory.create(ApacheHttpClientFactory.java:50)
2016-07-14_17:06:19.88136       at com.amazonaws.http.apache.client.impl.ApacheHttpClientFactory.create(ApacheHttpClientFactory.java:38)
2016-07-14_17:06:19.88238       at com.amazonaws.http.AmazonHttpClient.<init>(AmazonHttpClient.java:259)
2016-07-14_17:06:19.88286       at com.amazonaws.AmazonWebServiceClient.<init>(AmazonWebServiceClient.java:145)
2016-07-14_17:06:19.88360       at com.amazonaws.AmazonWebServiceClient.<init>(AmazonWebServiceClient.java:136)
2016-07-14_17:06:19.88405       at com.amazonaws.AmazonWebServiceClient.<init>(AmazonWebServiceClient.java:121)
2016-07-14_17:06:19.88493       at com.amazonaws.services.sqs.AmazonSQSClient.<init>(AmazonSQSClient.java:229)
2016-07-14_17:06:19.88546       at com.amazonaws.services.sqs.AmazonSQSClient.<init>(AmazonSQSClient.java:209)
2016-07-14_17:06:19.88640       at com.graylog2.input.cloudtrail.notifications.CloudtrailSQSClient.<init>(CloudtrailSQSClient.java:39)
2016-07-14_17:06:19.88734       at com.graylog2.input.cloudtrail.CloudTrailSubscriber.run(CloudTrailSubscriber.java:63)
2016-07-14_17:06:19.88764 2016-07-14 11:06:19,877 INFO : org.graylog2.inputs.InputStateListener - Input [AWS CloudTrail Input/5787c68b4335af0398c2fd03] is now RUNNING

What am I missing in this new instance? What else can I try to troubleshoot?

FWIW, I tried deleting and re-adding the Input with the same result.

Thanks,
Steve.

Flowlogs: Only messages with exactly 15 fields supported?

I have set up "Detailed monitoring" in AWS RDS instance. It is logging to Cloudwatch logs.
After setting up Kinesis filter stream, this is what I see in Graylog:

2016-11-07 18:54:06,894 WARN : org.graylog.aws.inputs.flowlogs.FlowLogCodec - Received FlowLog message with not exactly 15 fields. Skipping. Message was: [14785344844000 {"engine":"Postgres","instanceID":null,"instanceResourceID":"db-...","timestamp":"2016-11-07T18:54:04Z","version":1.00,"uptime":"23:53:00","numVCPUs":4,"cpuUtilization":

Is there some kind of limitation? Is 15 fields some kind of magic number?

appeared to have corrupted aws plugin - cannot remove

Hi there

I just upgraded to graylog-plugin-aws-1.3.2.jar and upon restarting found there's no configuration area to edit any more. It's weird: I still have my existing Input config, and can see "AWS Instance Name Lookup" in the "Message Processors Configuration" area, but there's nowhere to edit what my OAUTH creds are. Graylog is now generating endless "access denied" errors as it tries to download the S3 buckets without any creds.

I then removed graylog-plugin-aws-1.3.2.jar and restarted in order to re-initialize, but the "AWS Instance Name Lookup" still shows up in the Configuration page - should that be the case?

So how can I totally wipe out the AWS data so I can create a new Input from scratch again?

Thanks

Jason

Please add AWS Config Logs to this plugin

This is one of the key gaps between this and Splunk AWS app at the moment. Config log data follows the exact process as CloudTrail and I've gotten as far as launching a new input in Graylog that is correctly receiving notifications when new Config items arrive but obviously the data structure and elements are different between CloudTrail and Config so the plug in exceptions out when it sees fields that it doesn't have a variable for.

Can't receive messages from SQS

I have tried creating an AWS CloudTrail input and I know all the variables within are correct.

This is the output in my Graylog log file:

2017-12-06T15:57:32.846Z ERROR [CloudTrailSubscriber] Could not read messages from SQS. This is most likely a misconfiguration of the plugin. Going into sleep loop and retrying.
com.amazonaws.services.sqs.model.QueueDoesNotExistException: The specified queue does not exist for this wsdl version. (Service: AmazonSQS; Status Code: 400; Error Code: AWS.SimpleQueueService.NonExistentQueue; Request ID: b596-50e5-b285-f11b09ea677c)

I googled the message and I'm sure the region is correct; I have also tried the SQS URL and have still had no joy.

Can anyone help?

Graylog version: Graylog v2.4.0-beta.2
Plugin Version: graylog-plugin-aws-2.4.0-beta.1.jar

Add ability to configure lookup credentials for multiple AWS accounts

The detail lookups configuration for this plugin only provides the option to enter credentials for a single AWS account. I am currently collecting logs from kinesis streams in multiple accounts but the detail lookup only works for one of the accounts, which means I can only filter/stream on entity types for a single AWS account.

It would be useful to have this feature as maintaining multiple AWS accounts is now a common use case.

Add CloudWatch Log Group and CloudWatch Log Stream fields to AWS Flow Logs and Logs

Currently, org.graylog.aws.cloudwatch.CloudWatchLogData only extracts the logEvents field from a CloudWatch payload, and ignores additional useful metadata - specifically, the logGroup and logStream fields.

We have encountered use cases where this information is not only useful, but essential, e.g.:

  • When running ECS tasks, the task definition can be configured to use the awslogs driver and write to a CloudWatch Log Group. This results in multiple tasks writing to different streams within the same group. There is currently no way to distinguish between these streams (and therefore, the individual tasks responsible for generating the log entries) in Graylog.

  • When using Auto Scaling Groups that create and destroy EC2 instances based on CloudWatch Alarms, the user data defined in the Launch Configuration attached to the Auto Scaling Group can install and configure the CloudWatch Logs Agent to stream various system logs from the EC2 instance to a CloudWatch Log Group. Each individual EC2 instance writes to its own stream within the log group. There is currently no way to distinguish between these streams (and therefore, the different EC2 instances responsible for generating the log entries) in Graylog.

We would propose that the Graylog AWS plugin be updated to consume the logGroup and logStream fields from the CloudWatch payload, and apply these fields to the log entries in Graylog, so that Graylog is capable of distinguishing between the constituent streams of a log group.

AWS plugin v 1.2.0 needs dynamo DB access - not mentioned in documentation

2016-10-31_08:33:41.17625 Caused by: com.amazonaws.AmazonServiceException: User: arn:aws:iam::************:user/graylog-cloudtrail is not authorized to perform: dynamodb:CreateTable on resource: arn:aws:dynamodb:eu-west-1:***********:table/graylog-aws-plugin (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: AccessDeniedException; Request ID: 2UPIJA2L0UE3OMFG40EFQEN4Q3VV4KQNSO5AEMVJF66Q9ASUAAJG)
2016-10-31_08:33:41.17657       at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1377) ~[?:?]
2016-10-31_08:33:41.17735       at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:923) ~[?:?]
2016-10-31_08:33:41.17780       at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:701) ~[?:?]
2016-10-31_08:33:41.17877       at com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:453) ~[?:?]
2016-10-31_08:33:41.17908       at com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:415) ~[?:?]
2016-10-31_08:33:41.17998       at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:364) ~[?:?]
2016-10-31_08:33:41.18041       at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:2048) ~[?:?]
2016-10-31_08:33:41.18148       at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:2018) ~[?:?]
2016-10-31_08:33:41.18188       at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.createTable(AmazonDynamoDBClient.java:878) ~[?:?]
2016-10-31_08:33:41.18272       at com.amazonaws.services.kinesis.leases.impl.LeaseManager.createLeaseTableIfNotExists(LeaseManager.java:127) ~[?:?]
2016-10-31_08:33:41.18315       at com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisClientLibLeaseCoordinator.initialize(KinesisClientLibLeaseCoordinator.java:227) ~[?:?]
2016-10-31_08:33:41.18418       at com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker.initialize(Worker.java:396) ~[?:?]
2016-10-31_08:33:41.18462       ... 7 more

After adding a policy allowing dynamodb:* on arn:* (which I really need to narrow down) to the user, the error messages stopped.
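For reference, a hedged example of how that policy could be narrowed to the lease table the plugin tried to create (the table name and region are taken from the error above; the exact set of DynamoDB actions the Kinesis Client Library requires may vary by version):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:CreateTable",
                "dynamodb:DescribeTable",
                "dynamodb:GetItem",
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
                "dynamodb:DeleteItem",
                "dynamodb:Scan"
            ],
            "Resource": "arn:aws:dynamodb:eu-west-1:*:table/graylog-aws-plugin"
        }
    ]
}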

Doesn't appear to be working on 2.0

I added the plugin to our new Graylog 2.0 server, but it never receives messages. The AWS side is exactly the same as before, and the configuration was copied from the old 1.2 server.

Nothing in the log files, it just never receives any messages. Let me know what I can do to help track this down.

Plugin doesn't work for all regions?

Hi there

I've got this plugin up and running in US-WEST-2, and then configured all other regions identically.

However, now I get this error in /var/log/graylog-server/server.log

The bucket you are attempting to access must be addressed using the specified endpoint
(Service: Amazon S3; Status Code: 301; Error Code: PermanentRedirect; Request ID: 23esadd323s), S3 Extended Request ID:.....

Googling that suggests it has something to do with the S3 bucket not being global. So I tried to create the same S3 bucket in all other regions but got an "already exists" error, which implies the bucket is actually OK.

Any other reason it should act like this? By the way, us-east-1 works fine, so I actually have two working regions, but the rest do not, even though the S3 bucket shows logs from all of them. It looks like an authentication problem, but it's the same account for all of them, and it obviously has access to the S3 bucket, otherwise it wouldn't be working for any of them.

Thanks
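For what it's worth, the 301 PermanentRedirect error usually means the S3 client is built for one region while the bucket lives in another: bucket names are global, but requests must be sent to the bucket's home region. A hedged sketch of resolving the bucket's region first with the AWS SDK for Java v1 (the bucket name is a placeholder, and whether the plugin exposes such a setting is a separate question):

import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class BucketRegionProbe {
    public static void main(String[] args) {
        // Sketch only: ask S3 where the bucket lives, then build a client for that region.
        AmazonS3 probe = AmazonS3ClientBuilder.standard().withRegion(Regions.US_EAST_1).build();

        String location = probe.getBucketLocation("my-cloudtrail-bucket"); // placeholder bucket name
        String region = "US".equals(location) ? "us-east-1" : location;    // SDK v1 reports us-east-1 buckets as "US"

        AmazonS3 s3 = AmazonS3ClientBuilder.standard().withRegion(region).build();
        System.out.println("S3 client now targets region: " + region);
    }
}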

Hitting API request limits

I've set up reading flowlogs into Graylog for our dev VPC to test out and see what information we can get from them.

I've almost immediately hit a problem where a number of requests get 400 responses from AWS:

2016-09-07_07:31:26.66398 2016-09-07 08:31:26,663 ERROR: org.graylog.aws.inputs.flowlogs.FlowLogReader - Could not read AWS FlowLogs from stream [eni-b8******-reject].
2016-09-07_07:31:26.66440 com.amazonaws.AmazonServiceException: Rate exceeded (Service: AWSLogs; Status Code: 400; Error Code: ThrottlingException; Request ID: 1ebbcb03-74cd-11e6-8780-9bffc48ed044)
2016-09-07_07:31:26.66496       at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1377) ~[graylog-plugin-aws-1.0.0.jar:?]
2016-09-07_07:31:26.66611       at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:923) ~[graylog-plugin-aws-1.0.0.jar:?]
2016-09-07_07:31:26.66660       at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:701) ~[graylog-plugin-aws-1.0.0.jar:?]
2016-09-07_07:31:26.66744       at com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:453) ~[graylog-plugin-aws-1.0.0.jar:?]
2016-09-07_07:31:26.66787       at com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:415) ~[graylog-plugin-aws-1.0.0.jar:?]
2016-09-07_07:31:26.66875       at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:364) ~[graylog-plugin-aws-1.0.0.jar:?]
2016-09-07_07:31:26.66921       at com.amazonaws.services.logs.AWSLogsClient.doInvoke(AWSLogsClient.java:1962) ~[graylog-plugin-aws-1.0.0.jar:?]
2016-09-07_07:31:26.67057       at com.amazonaws.services.logs.AWSLogsClient.invoke(AWSLogsClient.java:1932) ~[graylog-plugin-aws-1.0.0.jar:?]
2016-09-07_07:31:26.67096       at com.amazonaws.services.logs.AWSLogsClient.getLogEvents(AWSLogsClient.java:1431) ~[graylog-plugin-aws-1.0.0.jar:?]
2016-09-07_07:31:26.67204       at org.graylog.aws.inputs.flowlogs.FlowLogReader.readStream(FlowLogReader.java:153) [graylog-plugin-aws-1.0.0.jar:?]
2016-09-07_07:31:26.67253       at org.graylog.aws.inputs.flowlogs.FlowLogReader.run(FlowLogReader.java:122) [graylog-plugin-aws-1.0.0.jar:?]
2016-09-07_07:31:26.67346       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_101]
2016-09-07_07:31:26.67388       at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_101]
2016-09-07_07:31:26.67472       at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_101]
2016-09-07_07:31:26.67521       at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_101]
2016-09-07_07:31:26.67658       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_101]
2016-09-07_07:31:26.67715       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_101]
2016-09-07_07:31:26.67804       at java.lang.Thread.run(Thread.java:745) [?:1.8.0_101]

Originally I was seeing many of these per second, but after I disabled the IP-to-name lookups the volume dropped to several per minute.

I asked AWS support if they'd increase the limit for GetLogEvents calls and their response was "yes, but..."

The solution is to use the Subscriptions facility, which turns egress from the CloudWatch Logs service from a polling interface into a "push" interface that scales.
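For reference, a hedged sketch of creating such a subscription filter with the AWS SDK for Java v1; the log group name, filter name, Kinesis stream ARN, and IAM role ARN below are placeholders:

import com.amazonaws.services.logs.AWSLogs;
import com.amazonaws.services.logs.AWSLogsClientBuilder;
import com.amazonaws.services.logs.model.PutSubscriptionFilterRequest;

public class SubscriptionFilterExample {
    public static void main(String[] args) {
        AWSLogs logs = AWSLogsClientBuilder.standard().withRegion("eu-west-1").build();

        // Sketch only: push all events from a log group into a Kinesis stream,
        // so consumers read from Kinesis instead of polling GetLogEvents.
        logs.putSubscriptionFilter(new PutSubscriptionFilterRequest()
                .withLogGroupName("my-flow-logs-group")
                .withFilterName("graylog")
                .withFilterPattern("") // empty pattern forwards every event
                .withDestinationArn("arn:aws:kinesis:eu-west-1:123456789012:stream/my-stream")
                .withRoleArn("arn:aws:iam::123456789012:role/cwl-to-kinesis"));
    }
}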

AWS Logs stream can't connect to Kinesis due to table name issue

errors:

2017-10-26 11:36:09,575 ERROR: com.amazonaws.services.kinesis.leases.impl.LeaseManager - Failed to get table status for graylog-aws-plugin-arn:aws:kinesis:eu-west-1:534996215098:stream/stage_ecs
com.amazonaws.services.kinesis.leases.exceptions.DependencyException: com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: 1 validation error detected: Value 'graylog-aws-plugin-arn:aws:kinesis:eu-west-1:534996215098:stream/stage_ecs' at 'tableName' failed to satisfy constraint: Member must satisfy regular expression pattern: [a-zA-Z0-9_.-]+ (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID: O38921VR7HBC99LRCI561THGJRVV4KQNSO5AEMVJF66Q9ASUAAJG)

Using Graylog 4 beta 1 with the plugin preinstalled; the AWS user has the recommended permissions for CloudWatch, DynamoDB, and Kinesis streams.
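The DynamoDB table name constraint is [a-zA-Z0-9_.-]+, so a lease table name built from a full stream ARN (which contains ':' and '/') can never pass validation. A hedged sketch of how the name could be derived instead (the method name and prefix handling are illustrative, not the plugin's actual code):

public class LeaseTableName {
    // Sketch only: derive a lease table name that satisfies DynamoDB's
    // [a-zA-Z0-9_.-]+ constraint even when a full stream ARN is supplied.
    static String leaseTableNameFor(String streamNameOrArn) {
        int slash = streamNameOrArn.lastIndexOf('/');
        String streamName = slash >= 0 ? streamNameOrArn.substring(slash + 1) : streamNameOrArn;
        return "graylog-aws-plugin-" + streamName.replaceAll("[^a-zA-Z0-9_.-]", "-");
    }

    public static void main(String[] args) {
        // Prints "graylog-aws-plugin-stage_ecs"
        System.out.println(leaseTableNameFor("arn:aws:kinesis:eu-west-1:534996215098:stream/stage_ecs"));
    }
}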

Input stuck in starting state if plugin not configured previously

When installing the plugin for the first time (by dropping the jar into the plugin directory, for example) and then creating a new CloudTrail input from the Graylog web interface, the newly created input remains in the "starting" state forever when you try to start it.
To fix it, go to the plugin configuration page and simply press the save button, without needing to fill in any of the fields. Then go back to the inputs page and try to start the input again; it will start immediately and begin working.

It seems the configuration page performs some kind of initialization, and the input cannot start if this has not been done previously.

CloudWatch Logs Input

Feature request: a CloudWatch Logs input that allows collecting CloudWatch logs into Graylog.

Add AWS S3 Logs

Hello,
Is it possible to add S3 logs to your plugin?
Regards,
Nicolas Prochazka

Error with Endpoint Grabbing

I believe S3 now requires you to specify the endpoint in the S3 configuration.

com.amazonaws.services.s3.model.AmazonS3Exception: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint. (Service: Amazon S3; Status Code: 301; Error Code: PermanentRedirect; Request ID: 12345, S3 Extended Request ID: 12345

Using it with AssumeRoles

I have an account with no IAM users, only IAM roles attached to each EC2 instance. How can I use this plugin in that environment?
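A hedged sketch of how role-based credentials can be obtained with the AWS SDK for Java v1, either from the EC2 instance profile or by explicitly assuming a role; the role ARN and session name are placeholders, and whether the plugin accepts a custom credentials provider is exactly what this question is about:

import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.InstanceProfileCredentialsProvider;
import com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider;

public class RoleCredentialsExample {
    public static void main(String[] args) {
        // Credentials from the EC2 instance profile (no access/secret key required).
        AWSCredentialsProvider instanceProfile = InstanceProfileCredentialsProvider.getInstance();

        // Or: assume a specific role (placeholder ARN and session name).
        AWSCredentialsProvider assumedRole = new STSAssumeRoleSessionCredentialsProvider.Builder(
                "arn:aws:iam::123456789012:role/graylog-aws-read", "graylog-aws-plugin").build();

        // Either provider could be passed to an AWS client builder via withCredentials(...).
        System.out.println(instanceProfile.getClass().getSimpleName());
        System.out.println(assumedRole.getClass().getSimpleName());
    }
}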

Need Security Token support for AWS account access

There is no way to pass a security token via the AWS credentials class, because it uses the BasicAWSCredentials class, which has no such provision.
It needs to implement DefaultAWSCredentialsProviderChain, or provide an option to use InstanceProfileCredentialsProvider, as I am deploying the Graylog server on EC2 clusters.

Faulty code snippet:

...
public AWSCredentials getCredentials() {
    return new BasicAWSCredentials(awsConfig.accessKey(), awsConfig.secretKey());
}
...

in graylog-plugin-aws/src/main/java/org/graylog/aws/inputs/flowlogs/FlowLogReader.java

Please fix this so the plugin can be used.
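A hedged sketch of what a fixed version could look like (not the plugin's actual code); AWSStaticCredentialsProvider, BasicAWSCredentials, and DefaultAWSCredentialsProviderChain come from com.amazonaws.auth, and awsConfig is the same configuration object used in the snippet above:

...
public AWSCredentialsProvider credentialsProvider() {
    // Use the configured static keys when present; otherwise fall back to the
    // default chain, which supports session tokens and EC2 instance profiles.
    if (awsConfig.accessKey() != null && !awsConfig.accessKey().isEmpty()) {
        return new AWSStaticCredentialsProvider(
                new BasicAWSCredentials(awsConfig.accessKey(), awsConfig.secretKey()));
    }
    return new DefaultAWSCredentialsProviderChain();
}
...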

AWS Plugin loses connection when using proxy

Hi,

We are trying to read our data from AWS via a proxy. It works fine at first, but then we receive loads of errors and the AWS connection is lost:

2016-09-20T12:54:16.613Z INFO [AmazonHttpClient] Unable to execute HTTP request: Connect to sapcloudtrail.s3.amazonaws.com:443 [sapcloudtrail.s3.amazonaws.com/54.231.72.51] failed: connect timed out
org.apache.http.conn.ConnectTimeoutException: Connect to sapcloudtrail.s3.amazonaws.com:443 [sapcloudtrail.s3.amazonaws.com/54.231.72.51] failed: connect timed out
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:150) ~[graylog-plugin-aws-1.0.0.jar:?]
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353) ~[graylog-plugin-aws-1.0.0.jar:?]
at sun.reflect.GeneratedMethodAccessor47.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_101]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_101]
at com.amazonaws.http.conn.ClientConnectionManagerFactory$Handler.invoke(ClientConnectionManagerFactory.java:76) ~[graylog-plugin-aws-1.0.0.jar:?]
at com.amazonaws.http.conn.$Proxy154.connect(Unknown Source) ~[?:?]
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380) ~[graylog-plugin-aws-1.0.0.jar:?]
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) ~[graylog-plugin-aws-1.0.0.jar:?]
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184) ~[graylog-plugin-aws-1.0.0.jar:?]
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184) ~[graylog-plugin-aws-1.0.0.jar:?]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82) ~[graylog-plugin-aws-1.0.0.jar:?]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55) ~[graylog-plugin-aws-1.0.0.jar:?]
at com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72) ~[graylog-plugin-aws-1.0.0.jar:?]
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:858) ~[graylog-plugin-aws-1.0.0.jar:?]
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:701) [graylog-plugin-aws-1.0.0.jar:?]
at com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:453) [graylog-plugin-aws-1.0.0.jar:?]
at com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:415) [graylog-plugin-aws-1.0.0.jar:?]
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:364) [graylog-plugin-aws-1.0.0.jar:?]
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3964) [graylog-plugin-aws-1.0.0.jar:?]
at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1259) [graylog-plugin-aws-1.0.0.jar:?]
at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1134) [graylog-plugin-aws-1.0.0.jar:?]
at org.graylog.aws.s3.S3Reader.readCompressed(S3Reader.java:28) [graylog-plugin-aws-1.0.0.jar:?]
at org.graylog.aws.inputs.cloudtrail.CloudTrailSubscriber.run(CloudTrailSubscriber.java:108) [graylog-plugin-aws-1.0.0.jar:?]
Caused by: java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:1.8.0_101]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[?:1.8.0_101]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) ~[?:1.8.0_101]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[?:1.8.0_101]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:1.8.0_101]
at java.net.Socket.connect(Socket.java:589) ~[?:1.8.0_101]
at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:337) ~[?:?]
at com.amazonaws.http.conn.ssl.SdkTLSSocketFactory.connectSocket(SdkTLSSocketFactory.java:132) ~[?:?]
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:141) ~[?:?]
... 23 more
2016-09-20T12:54:16.614Z ERROR [CloudTrailSubscriber] Could not read CloudTrail log file for . Skipping.
com.amazonaws.AmazonClientException: Unable to execute HTTP request: Connect to sapcloudtrail.s3.amazonaws.com:443 [sapcloudtrail.s3.amazonaws.com/54.231.72.51] failed: connect timed out
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:713) ~[graylog-plugin-aws-1.0.0.jar:?]
at com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:453) ~[graylog-plugin-aws-1.0.0.jar:?]
at com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:415) ~[graylog-plugin-aws-1.0.0.jar:?]
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:364) ~[graylog-plugin-aws-1.0.0.jar:?]
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3964) ~[graylog-plugin-aws-1.0.0.jar:?]
at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1259) ~[graylog-plugin-aws-1.0.0.jar:?]
at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1134) ~[graylog-plugin-aws-1.0.0.jar:?]
at org.graylog.aws.s3.S3Reader.readCompressed(S3Reader.java:28) ~[graylog-plugin-aws-1.0.0.jar:?]
at org.graylog.aws.inputs.cloudtrail.CloudTrailSubscriber.run(CloudTrailSubscriber.java:108) [graylog-plugin-aws-1.0.0.jar:?]
Caused by: org.apache.http.conn.ConnectTimeoutException: Connect to sapcloudtrail.s3.amazonaws.com:443 [sapcloudtrail.s3.amazonaws.com/54.231.72.51] failed: connect timed out
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:150) ~[?:?]
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353) ~[?:?]
at sun.reflect.GeneratedMethodAccessor47.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_101]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_101]
at com.amazonaws.http.conn.ClientConnectionManagerFactory$Handler.invoke(ClientConnectionManagerFactory.java:76) ~[?:?]
at com.amazonaws.http.conn.$Proxy154.connect(Unknown Source) ~[?:?]
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380) ~[?:?]
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) ~[?:?]
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184) ~[?:?]
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184) ~[?:?]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82) ~[?:?]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55) ~[?:?]
at com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72) ~[?:?]
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:858) ~[?:?]
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:701) ~[?:?]
... 8 more
Caused by: java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:1.8.0_101]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[?:1.8.0_101]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) ~[?:1.8.0_101]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[?:1.8.0_101]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:1.8.0_101]
at java.net.Socket.connect(Socket.java:589) ~[?:1.8.0_101]
at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:337) ~[?:?]
at com.amazonaws.http.conn.ssl.SdkTLSSocketFactory.connectSocket(SdkTLSSocketFactory.java:132) ~[?:?]
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:141) ~[?:?]
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353) ~[?:?]
at sun.reflect.GeneratedMethodAccessor47.invoke(Unknown Source) ~[?:?]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_101]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_101]
at com.amazonaws.http.conn.ClientConnectionManagerFactory$Handler.invoke(ClientConnectionManagerFactory.java:76) ~[?:?]
at com.amazonaws.http.conn.$Proxy154.connect(Unknown Source) ~[?:?]
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380) ~[?:?]
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) ~[?:?]
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184) ~[?:?]
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184) ~[?:?]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82) ~[?:?]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55) ~[?:?]
at com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72) ~[?:?]
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:858) ~[?:?]
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:701) ~[?:?]
... 8 more

The proxy is configured properly:

root@mo-81185639b:/var/log/graylog-server# curl https://sapcloudtrail.s3.amazonaws.com

<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>43F82724745C9EC2</RequestId><HostId>x6duNrRHR6HUwuvkuUn1A0bCte5YYScHJP7g5Dby9/AxbkI8LcZpoiEWYMlFHDuE</HostId></Error>
root@mo-81185639b:/var/log/graylog-server#

Any Ideas?
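For reference, a hedged sketch of how the AWS SDK for Java v1 is normally pointed at an HTTP proxy; the proxy host, port, region, and timeout values are placeholders, and whether the plugin passes such a ClientConfiguration through to its S3 client is exactly what this report is about:

import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class ProxyClientExample {
    public static void main(String[] args) {
        // Sketch only: route S3 traffic through an HTTP proxy and allow slow
        // proxy connections a longer connect timeout.
        ClientConfiguration config = new ClientConfiguration()
                .withProxyHost("proxy.example.com") // placeholder
                .withProxyPort(3128)                // placeholder
                .withConnectionTimeout(30_000);     // milliseconds

        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withClientConfiguration(config)
                .withRegion("eu-west-1")            // placeholder
                .build();

        System.out.println("S3 client uses proxy " + config.getProxyHost() + ":" + config.getProxyPort());
    }
}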

Not working in Graylog 2.2.3

Getting this error:
org.graylog.aws.processors.instancelookup.AWSInstanceNameLookupProcessor - AWS plugin is not fully configured. No instance lookups will happen.

Error: no readable build.config.js

I just pulled down the latest master (I have never built this before) and ran into this when running mvn package.
Any thoughts on how to fix this?

[ERROR] It seems like there is no readable build.config.js file:  { [Error: ENOENT: no such file or directory, lstat '/graylog2-server/graylog2-web-interface']
[ERROR]   errno: -2,
[ERROR]   code: 'ENOENT',
[ERROR]   syscall: 'lstat',
[ERROR]   path: '/graylog2-server/graylog2-web-interface' }
[ERROR] 
[ERROR] npm ERR! Linux 4.4.0-47-generic
[ERROR] npm ERR! argv "/tmp/graylog-plugin-aws/node/node" "/tmp/graylog-plugin-aws/node/node_modules/npm/bin/npm-cli.js" "run" "build"
[ERROR] npm ERR! node v4.4.3
[ERROR] npm ERR! npm  v3.8.6
[ERROR] npm ERR! code ELIFECYCLE
[ERROR] npm ERR! [email protected] build: `webpack`
[ERROR] npm ERR! Exit status 255
[ERROR] npm ERR! 
[ERROR] npm ERR! Failed at the [email protected] build script 'webpack'.
[ERROR] npm ERR! Make sure you have the latest version of node.js and npm installed.
[ERROR] npm ERR! If you do, this is most likely a problem with the AWSPlugin package,
[ERROR] npm ERR! not with npm itself.
[ERROR] npm ERR! Tell the author that this fails on your system:
[ERROR] npm ERR!     webpack
[ERROR] npm ERR! You can get information on how to open an issue for this project with:
[ERROR] npm ERR!     npm bugs AWSPlugin
[ERROR] npm ERR! Or if that isn't available, you can get their info via:
[ERROR] npm ERR!     npm owner ls AWSPlugin
[ERROR] npm ERR! There is likely additional logging output above.
[ERROR] 
[ERROR] npm ERR! Please include the following file with any support request:
[ERROR] npm ERR!     /tmp/graylog-plugin-aws/npm-debug.log
