
terraform-aws-logs's Introduction

Supports two main use cases:

  • Creates and configures a single private S3 bucket for storing logs from various AWS services, which are nested as bucket prefixes. Logs expire after a default of 90 days, with an option to configure the retention period.
  • Creates and configures a single private S3 bucket for storing logs from a single AWS service. Logs expire after a default of 90 days, with an option to configure the retention period.

Logging from the following services is supported for both cases, as well as in AWS GovCloud: ALB, CloudTrail, CloudWatch, Config, ELB, NLB, Redshift, and S3.

Usage for a single log bucket storing logs from all services

# Allows all services to log to bucket
module "aws_logs" {
  source         = "trussworks/logs/aws"
  s3_bucket_name = "my-company-aws-logs"
}

Usage for a single log bucket storing logs from a single service (ELB in this case)

module "aws_logs" {
  source         = "trussworks/logs/aws"
  s3_bucket_name = "my-company-aws-logs-elb"
  default_allow  = false
  allow_elb      = true
}

Usage for a single log bucket storing logs from multiple specified services (ALB and ELB in this case)

module "aws_logs" {
  source         = "trussworks/logs/aws"
  s3_bucket_name = "my-company-aws-logs-lb"
  default_allow  = false
  allow_alb      = true
  allow_elb      = true
}

Usage for a single log bucket storing CloudTrail logs from multiple accounts

module "aws_logs" {
  source              = "trussworks/logs/aws"
  s3_bucket_name      = "my-company-aws-logs-cloudtrail"
  default_allow       = false
  allow_cloudtrail    = true
  cloudtrail_accounts = [data.aws_caller_identity.current.account_id, aws_organizations_account.example.id]
}

Usage for a single log bucket storing logs from multiple application load balancers (ALB) and network load balancers (NLB)

module "aws_logs" {
  source            = "trussworks/logs/aws"
  s3_bucket_name    = "my-company-aws-logs-lb"
      default_allow     = false
      allow_alb         = true
      allow_nlb         = true
      alb_logs_prefixes = [
       "alb/hello-world-prod",
       "alb/hello-world-staging",
       "alb/hello-world-experimental",
      ]
      nlb_logs_prefixes = [
       "nlb/hello-world-prod",
       "nlb/hello-world-staging",
       "nlb/hello-world-experimental",
      ]
    }

Requirements

Name Version
terraform >= 1.0
aws >= 3.75.0

Providers

Name Version
aws >= 3.75.0

Modules

No modules.

Resources

Name Type
aws_s3_bucket.aws_logs resource
aws_s3_bucket_acl.aws_logs resource
aws_s3_bucket_lifecycle_configuration.aws_logs resource
aws_s3_bucket_logging.aws_logs resource
aws_s3_bucket_ownership_controls.aws_logs resource
aws_s3_bucket_policy.aws_logs resource
aws_s3_bucket_public_access_block.public_access_block resource
aws_s3_bucket_server_side_encryption_configuration.aws_logs resource
aws_s3_bucket_versioning.aws_logs resource
aws_caller_identity.current data source
aws_elb_service_account.main data source
aws_iam_policy_document.main data source
aws_partition.current data source
aws_region.current data source

Inputs

Name Description Type Default Required
alb_account Account for ALB logs. By default limits to the current account. string "" no
alb_logs_prefixes S3 key prefixes for ALB logs. list(string) [ "alb" ] no
allow_alb Allow ALB service to log to bucket. bool false no
allow_cloudtrail Allow CloudTrail service to log to bucket. bool false no
allow_cloudwatch Allow CloudWatch service to export logs to bucket. bool false no
allow_config Allow Config service to log to bucket. bool false no
allow_elb Allow ELB service to log to bucket. bool false no
allow_nlb Allow NLB service to log to bucket. bool false no
allow_redshift Allow Redshift service to log to bucket. bool false no
allow_s3 Allow S3 service to log to bucket. bool false no
cloudtrail_accounts List of accounts for CloudTrail logs. By default limits to the current account. list(string) [] no
cloudtrail_logs_prefix S3 prefix for CloudTrail logs. string "cloudtrail" no
cloudtrail_org_id AWS Organization ID for CloudTrail. string "" no
cloudwatch_logs_prefix S3 prefix for CloudWatch log exports. string "cloudwatch" no
config_accounts List of accounts for Config logs. By default limits to the current account. list(string) [] no
config_logs_prefix S3 prefix for AWS Config logs. string "config" no
control_object_ownership Whether to manage S3 Bucket Ownership Controls on this bucket. bool true no
create_public_access_block Whether to create a public_access_block restricting public access to the bucket. bool true no
default_allow Whether all services included in this module should be allowed to write to the bucket by default. Alternatively select individual services. It's recommended to use the default bucket ACL of log-delivery-write. bool true no
elb_accounts List of accounts for ELB logs. By default limits to the current account. list(string) [] no
elb_logs_prefix S3 prefix for ELB logs. string "elb" no
enable_mfa_delete A bool that requires MFA to delete the log bucket. bool false no
enable_s3_log_bucket_lifecycle_rule Whether the lifecycle rule for the log bucket is enabled. bool true no
force_destroy A bool that indicates all objects (including any locked objects) should be deleted from the bucket so the bucket can be destroyed without error. bool false no
logging_target_bucket S3 Bucket to send S3 logs to. Disables logging if omitted. string "" no
logging_target_prefix Prefix for logs going into the log_s3_bucket. string "s3/" no
nlb_account Account for NLB logs. By default limits to the current account. string "" no
nlb_logs_prefixes S3 key prefixes for NLB logs. list(string) [ "nlb" ] no
noncurrent_version_retention Number of days to retain non-current versions of objects if versioning is enabled. string 30 no
object_ownership Object ownership. Valid values: BucketOwnerEnforced, BucketOwnerPreferred or ObjectWriter. string "BucketOwnerEnforced" no
redshift_logs_prefix S3 prefix for Redshift logs. string "redshift" no
s3_bucket_acl Set bucket ACL per AWS S3 Canned ACL list. string null no
s3_bucket_name S3 bucket to store AWS logs in. string n/a yes
s3_log_bucket_retention Number of days to keep AWS logs around. string 90 no
s3_logs_prefix S3 prefix for S3 access logs. string "s3" no
tags A mapping of tags to assign to the logs bucket. Please note that tags with a conflicting key will not override the original tag. map(string) {} no
versioning_status A string that indicates the versioning status for the log bucket. string "Disabled" no

Outputs

Name Description
aws_logs_bucket ID of the S3 bucket containing AWS logs.
bucket_arn ARN of the S3 logs bucket.
configs_logs_path S3 path for Config logs.
elb_logs_path S3 path for ELB logs.
redshift_logs_path S3 path for Redshift logs.
s3_bucket_policy S3 bucket policy.

Upgrade Paths

Upgrading from 14.x.x to 15.x.x

Version 15.x.x updates the module to account for changes made by AWS in April 2023 to the default security settings of new S3 buckets.

Version 15.x.x of this module adds the following resource and variables. How to use the new variables will depend on your use case.

New resource:

  • aws_s3_bucket_ownership_controls.aws_logs

New variables:

  • allow_s3
  • control_object_ownership
  • object_ownership
  • s3_bucket_acl
  • s3_logs_prefix

Steps for updating existing buckets managed by this module:

  • Option 1: Disable ACLs. This module's default values for control_object_ownership, object_ownership, and s3_bucket_acl follow the new AWS recommended best practice. For a new S3 bucket, these settings disable S3 access control lists for the bucket and set object ownership to BucketOwnerEnforced. For an existing bucket that stores S3 server access logs, the bucket ACL permissions for the S3 log delivery group must first be migrated to the bucket policy. The changes must be applied in multiple steps.

Step 1: Update the log bucket policy to grant s3:PutObject permission to the logging service principal (logging.s3.amazonaws.com).

Example (this statement goes inside the aws_iam_policy_document that backs the bucket policy; the placeholders stand in for your bucket ARN and logging prefix):

  statement {
    sid    = "s3-logs-put-object"
    effect = "Allow"
    principals {
      type        = "Service"
      identifiers = ["logging.s3.amazonaws.com"]
    }
    actions   = ["s3:PutObject"]
    resources = ["BUCKET_ARN_PLACEHOLDER/LOGGING_PREFIX_PLACEHOLDER/*"]
  }

Step 2: Change s3_bucket_acl to private.

Step 3: Change object_ownership to BucketOwnerEnforced.
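
As a sketch, assuming the existing bucket still has ACLs enabled (the bucket name is illustrative), the intermediate state after Step 2 might look like the following, with the Step 3 change applied in a separate run:

module "aws_logs" {
  source         = "trussworks/logs/aws"
  s3_bucket_name = "my-company-aws-logs"

  # Step 2: move off the log-delivery-write canned ACL first
  s3_bucket_acl    = "private"
  object_ownership = "ObjectWriter" # Step 3: change to "BucketOwnerEnforced" in a later apply
}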

  • Option 2: Continue using ACLs. To continue using ACLs, set s3_bucket_acl to "log-delivery-write" and set object_ownership to ObjectWriter or BucketOwnerPreferred.
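
A minimal sketch of that configuration (the bucket name is illustrative):

module "aws_logs" {
  source         = "trussworks/logs/aws"
  s3_bucket_name = "my-company-aws-logs"

  # Keep ACL-based log delivery instead of the new defaults
  s3_bucket_acl    = "log-delivery-write"
  object_ownership = "ObjectWriter"
}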

See Controlling ownership of objects and disabling ACLs for your bucket for further details and migration considerations.

terraform-aws-logs's People

Contributors

amber-h, amitch23, avanti-joshi, bazbremner, brainsik, carterjones, cblkwell, chrisgilmerproj, chtakahashi, dependabot-preview[bot], dependabot[bot], eeeady, esacteksab, exequielrafaela, github-actions[bot], jsclarridge, kilbergr, kodiakhq[bot], lgallard, mdawn, mdrummerboy09, mr337, nyanbinaryneko, pjdufour-truss, ralren, renovate-bot, renovate[bot], rpdelaney, sojeri, thefotios

terraform-aws-logs's Issues

Need an upgrade guide between 1.7.1 and 2.1.0

There are a number of things that changed between the 1.7.1 and 2.1.0 releases of this module. A migration guide would be very helpful and should cover a few things:

  • CloudTrail management disappeared between versions. Previously it was managed by the module; users now need to provide their own implementation.
  • Template Provider 1.0.0 used to work with the module; now you need Template Provider 2.x or later.
  • Several resource names were changed, which means you need terraform state mv to manage the transition in your own state (see the sketch below).
  • The bucket policy was moved out of the aws_s3_bucket resource and into an aws_s3_bucket_policy resource. While this is potentially a no-op, it makes it look like your policy is being removed completely unless you set default_allow = true as a variable. And then you have to compare your new and old policies to make sure they contain what you want.
  • It adds an aws_s3_bucket_public_access_block with sane defaults, but there's no way to change the behavior if you didn't have these set before.

A transition guide would be really helpful for folks using this module to help them move forward. If I have time I'll try to make one but I want the issue open here to remind me.
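
For the renamed resources, the state moves would look something like this (the old address is hypothetical; the new resource names appear in the Resources list above, and your plan output shows the real old and new addresses):

terraform state mv \
  'module.logs.aws_s3_bucket.log_bucket' \
  'module.logs.aws_s3_bucket.aws_logs'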

Tag 8.4.0 requires tf13?

In the project description there's a statement that Terraform 0.12 users should pin to module version 8.x; however, the latest tagged release specifies Terraform 0.13 as a requirement in versions.tf.
Is there a reason for this, or was it just an oversight during backporting?

Add option for s3 logs to be written to another bucket

From AWS SecurityHub:

[CIS.2.6] S3 Bucket Access Logging generates a log that contains access records for each request made to your S3 bucket. An access log record contains details about the request, such as the request type, the resources specified in the request, and the time and date the request was processed. It is recommended that bucket access logging be enabled on the CloudTrail S3 bucket.

An option to send access logs from a bucket created by this module to another bucket could be used to remediate this finding.
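
For what it's worth, the module's logging_target_bucket and logging_target_prefix inputs (see Inputs above) appear to cover this; a minimal sketch with illustrative bucket names:

module "aws_logs" {
  source                = "trussworks/logs/aws"
  s3_bucket_name        = "my-company-aws-logs"
  logging_target_bucket = "my-company-s3-access-logs" # pre-existing log destination bucket
  logging_target_prefix = "s3/"
}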

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

This repository currently has no open or pending branches.

Detected dependencies

github-actions
.github/workflows/validate.yml
terraform
versions.tf
  • aws >= 3.75.0
  • hashicorp/terraform >= 1.0


Organizational CloudTrail logs have different path, causes creation failure

The normal CloudTrail log path for account 123456789012 is:

my-logs-bucket/cloudtrail/AWSLogs/123456789012

When is_organization_trail = true is set on the aws_cloudtrail resource, the path becomes:

my-logs-bucket/cloudtrail/AWSLogs/o-somehash/123456789012

and so the module's

cloudtrail_resources = toset(formatlist("${local.bucket_arn}/${local.cloudtrail_logs_path}/%s/*", local.cloudtrail_accounts))

won't match, and CloudTrail creation fails.

Not sure what to suggest as a fix.
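
The cloudtrail_org_id input (see Inputs above) looks intended for exactly this case; a hedged sketch, assuming the organization ID comes from an aws_organizations_organization resource elsewhere in your configuration:

module "aws_logs" {
  source            = "trussworks/logs/aws"
  s3_bucket_name    = "my-company-aws-logs-cloudtrail"
  default_allow     = false
  allow_cloudtrail  = true
  cloudtrail_org_id = aws_organizations_organization.main.id # hypothetical resource name
}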

policy.tpl error?

Hi there, trying out the module for the first time, and ran into the following error:

module "s3_logs" {
  source         = "trussworks/logs/aws"
  s3_bucket_name = "${var.corp}-${var.env}-logs"
  region         = "${var.aws_region}"
}

Terraform apply:

Error: Error running plan: 1 error(s) occurred:

* module.s3_logs.aws_s3_bucket_policy.bucket_policy: "policy" contains an invalid JSON: invalid character '{' after array element

Any ideas? Thanks!

EDIT: Testing on Terraform 0.11.x

Dependabot can't parse your go.mod

Dependabot couldn't parse the go.mod found at /go.mod.

The error Dependabot encountered was:

go: github.com/gruntwork-io/[email protected] requires
	github.com/google/[email protected] requires
	github.com/vdemeester/[email protected] requires
	k8s.io/[email protected] requires
	google.golang.org/[email protected]: invalid version: git fetch -f origin refs/heads/*:refs/heads/* refs/tags/*:refs/tags/* in /opt/go/gopath/pkg/mod/cache/vcs/30a5dbaa452c7ca9354df264080379bbcf24496036c60968495fa0ec4a41888c: exit status 128:
	error: RPC failed; HTTP 502 curl 22 The requested URL returned error: 502 Bad Gateway
	fatal: The remote end hung up unexpectedly

View the update logs.

Pin template provider to 2.X series

This module doesn't work with the template provider at 1.0.0. It's not clear from the error message why this is:

Releasing state lock. This may take a few moments...

Error: Error running plan: 2 errors occurred:
        * module.logs.aws_s3_bucket_policy.bucket_policy: "policy" contains an invalid JSON: invalid character '%' looking for beginning of value
        * module.logs_us_east_1.aws_s3_bucket_policy.bucket_policy: "policy" contains an invalid JSON: invalid character '%' looking for beginning of value

Can we pin the provider by including a providers.tf file?
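
With the Terraform 0.11-era syntax in use here, a pin in providers.tf could look like this (a sketch; the constraint follows the 2.x series named in the issue title):

provider "template" {
  version = "~> 2.0"
}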

Error creating bucket. AccessControlListNotSupported: The bucket does not allow ACLs.


To Reproduce
Steps to reproduce the behavior:

  1. Clone https://github.com/trussworks/terraform-aws-logs
  2. cd to terraform-aws-logs/examples/s3
  3. terraform apply

Expected behavior
Able to create bucket successfully.


Code Snippet

 ❯ terraform apply
var.force_destroy
  Enter a value: 1

var.region
  Enter a value: us-east-2

var.s3_logs_prefix
  Enter a value: pref

var.test_name
  Enter a value: tf-aws-logs-test-bucket


An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

module.aws_logs.aws_s3_bucket_lifecycle_configuration.aws_logs: Creation complete after 32s [id=tf-aws-logs-test-bucket]

Error: error creating S3 bucket ACL for tf-aws-logs-test-bucket-source: AccessControlListNotSupported: The bucket does not allow ACLs
	status code: 400, request id: 6X4TZRP5EQ2YPDQ3, host id: 9mHxqkGpu11rF2zIiAn99oeV3SMHQG+zMjmnl7CK1eG6f9nQBtF7MgrAsXfpwYWCZwGRp9bBD7k=

  on main.tf line 15, in resource "aws_s3_bucket_acl" "log_source_bucket":
  15: resource "aws_s3_bucket_acl" "log_source_bucket" {



Error: error creating S3 bucket ACL for tf-aws-logs-test-bucket: AccessControlListNotSupported: The bucket does not allow ACLs
	status code: 400, request id: 3G4V9C00DNN2ZCAT, host id: OPOKoxFPHI+fhxMTT3M2kh25bWr7gOH02pY7Fp4aMjoHbdTh9Ud1+RAh1V7hvi6CIvsigoVVlX8=

  on ../../main.tf line 424, in resource "aws_s3_bucket_acl" "aws_logs":
 424: resource "aws_s3_bucket_acl" "aws_logs" {

Additional context
Possibly related to https://aws.amazon.com/about-aws/whats-new/2022/12/amazon-s3-automatically-enable-block-public-access-disable-access-control-lists-buckets-april-2023/

provide kms key to encrypt s3 bucket for logs

Is your feature request related to a problem? Please describe.
This module creates a bucket without the option to provide a kms key to encrypt objects. This produces a finding in AWS Config "Checks if the S3 buckets are encrypted with AWS Key Management Service (AWS KMS). The rule is NON_COMPLIANT if the S3 bucket is not encrypted with an AWS KMS key."

Describe the solution you'd like
I'd like a KMS key to be an input to this module, to be used instead of the default encryption.
Describe alternatives you've considered
I have explored no alternatives
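
If such an input were added, it would presumably be wired into the module's existing encryption resource; a sketch of the underlying pattern, where kms_key_arn is a hypothetical new variable:

resource "aws_s3_bucket_server_side_encryption_configuration" "aws_logs" {
  bucket = aws_s3_bucket.aws_logs.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = var.kms_key_arn # hypothetical new input
    }
  }
}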

`allow_s3` Doesn't work as expected

The variable allow_s3 is used in two places in the module:

https://github.com/trussworks/terraform-aws-logs/blob/master/main.tf#L83

and

https://github.com/trussworks/terraform-aws-logs/blob/master/main.tf#L111

In the first instance it appears to toggle the ACL between "log-delivery-write" and "private", and in the second instance it seems to control whether the S3 bucket policy exists. Which would mean:

True = "private" + no policy
False = "log-delivery-write" + policy

There is also no associated policy statement for S3 in the template file, as there is for the other allow_* variables. I'm not sure it's doing what is intended, based on this.

For my use case, I'd love to configure the module like this:

module "logs" {
  source  = "trussworks/logs/aws"
  version = "~> 2.1.0"

  allow_alb        = true
  allow_cloudtrail = true
  allow_config     = true
  allow_cloudwatch = true
  allow_s3         = true

  default_allow = false

  s3_log_bucket_retention = "${local.log_retention_days}"
  region                  = "${local.region}"
  s3_bucket_name          = "${local.aws_logs_bucket}"
}

But I get no attached policy, which doesn't seem to be what I want. Instead I have to use this to get the template policy:

module "logs" {
  source  = "trussworks/logs/aws"
  version = "~> 2.1.0"

  default_allow = true

  s3_log_bucket_retention = "${local.log_retention_days}"
  region                  = "${local.region}"
  s3_bucket_name          = "${local.aws_logs_bucket}"
}

But then I have no control over the individual statements inside the template.

Support for VPC Flow Logs

Is your feature request related to a problem? Please describe.
It would be nice to be able to use this module for VPC Flow Logs as well.

Describe the solution you'd like
Add an allow_vpc_flow_logs parameter, and possibly extra ones related to VPC Flow Logs.

Describe alternatives you've considered
Using your private bucket module.

Additional context
Most of these are security-related logs, so it would be nice if they could be put under one umbrella.
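
Until something like allow_vpc_flow_logs exists, one workaround is to point an aws_flow_log resource at the bucket directly (names are illustrative; note that flow log delivery also requires bucket policy permissions for the log delivery service, which this module does not currently grant):

resource "aws_flow_log" "main" {
  log_destination      = module.aws_logs.bucket_arn # ARN output from this module
  log_destination_type = "s3"
  traffic_type         = "ALL"
  vpc_id               = aws_vpc.main.id # hypothetical VPC resource
}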
