
terraform-aws-dynamodb-table's Introduction

AWS DynamoDB Table Terraform module

Terraform module to create a DynamoDB table.

Usage

module "dynamodb_table" {
  source   = "terraform-aws-modules/dynamodb-table/aws"

  name     = "my-table"
  hash_key = "id"

  attributes = [
    {
      name = "id"
      type = "N"
    }
  ]

  tags = {
    Terraform   = "true"
    Environment = "staging"
  }
}

Notes

Warning: enabling or disabling autoscaling can cause your table to be recreated

There are two separate Terraform resources used for the DynamoDB table: one for when autoscaling is enabled, the other for when it is disabled. If your table is already created and you then change the autoscaling_enabled variable, your table will be recreated by Terraform. In this case you will need to move the old aws_dynamodb_table resource that is being destroyed to the new resource that is being created. For example:

terraform state mv module.dynamodb_table.aws_dynamodb_table.this module.dynamodb_table.aws_dynamodb_table.autoscaled

Warning: autoscaling with global secondary indexes

When using an autoscaled provisioned table with GSIs, you may find that applying Terraform changes while a GSI is scaled up will reset the capacity; there is an open issue for this on the AWS provider. To work around this issue you can enable the ignore_changes_global_secondary_index setting. However, using this setting means that any changes to GSIs will be ignored by Terraform and will therefore have to be applied manually (or via some other automation).
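To enable the workaround described above, the flag can be set on the module. A minimal sketch (the surrounding arguments mirror the usage example earlier in this README):

```hcl
module "dynamodb_table" {
  source = "terraform-aws-modules/dynamodb-table/aws"

  name         = "my-table"
  hash_key     = "id"
  billing_mode = "PROVISIONED"

  autoscaling_enabled = true

  # GSI capacity changes made by Application Auto Scaling will no longer be
  # reverted by `terraform apply`, but GSI changes must then be applied
  # outside of Terraform.
  ignore_changes_global_secondary_index = true

  attributes = [
    {
      name = "id"
      type = "N"
    }
  ]
}
```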

NOTE: Setting ignore_changes_global_secondary_index after the table is already created causes your table to be recreated. In this case, you will need to move the old aws_dynamodb_table resource that is being destroyed to the new resource that is being created. For example:

terraform state mv module.dynamodb_table.aws_dynamodb_table.autoscaled module.dynamodb_table.aws_dynamodb_table.autoscaled_ignore_gsi

Module wrappers

Users of this Terraform module can create multiple similar resources by using the for_each meta-argument within a module block, which became available in Terraform 0.13.
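The for_each approach can be sketched like this (the table names here are illustrative):

```hcl
module "dynamodb_table" {
  source   = "terraform-aws-modules/dynamodb-table/aws"
  for_each = toset(["users", "orders"])

  # One table per element of the set; each.key is the table name.
  name     = each.key
  hash_key = "id"

  attributes = [
    {
      name = "id"
      type = "S"
    }
  ]
}
```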

Users of Terragrunt can achieve similar results by using the modules provided in the wrappers directory, if they prefer to reduce the amount of configuration files.

Examples

Requirements

Name Version
terraform >= 1.0
aws >= 5.21

Providers

Name Version
aws >= 5.21

Modules

No modules.

Resources

Name Type
aws_appautoscaling_policy.index_read_policy resource
aws_appautoscaling_policy.index_write_policy resource
aws_appautoscaling_policy.table_read_policy resource
aws_appautoscaling_policy.table_write_policy resource
aws_appautoscaling_target.index_read resource
aws_appautoscaling_target.index_write resource
aws_appautoscaling_target.table_read resource
aws_appautoscaling_target.table_write resource
aws_dynamodb_table.autoscaled resource
aws_dynamodb_table.autoscaled_gsi_ignore resource
aws_dynamodb_table.this resource

Inputs

Name Description Type Default Required
attributes List of nested attribute definitions. Only required for hash_key and range_key attributes. Each attribute has two properties: name - (Required) The name of the attribute, type - (Required) Attribute type, which must be a scalar type: S, N, or B for (S)tring, (N)umber or (B)inary data list(map(string)) [] no
autoscaling_defaults A map of default autoscaling settings map(string) {"scale_in_cooldown": 0, "scale_out_cooldown": 0, "target_value": 70} no
autoscaling_enabled Whether or not to enable autoscaling. See note in README about this setting bool false no
autoscaling_indexes A map of index autoscaling configurations. See example in examples/autoscaling map(map(string)) {} no
autoscaling_read A map of read autoscaling settings. max_capacity is the only required key. See example in examples/autoscaling map(string) {} no
autoscaling_write A map of write autoscaling settings. max_capacity is the only required key. See example in examples/autoscaling map(string) {} no
billing_mode Controls how you are billed for read/write throughput and how you manage capacity. The valid values are PROVISIONED or PAY_PER_REQUEST string "PAY_PER_REQUEST" no
create_table Controls if DynamoDB table and associated resources are created bool true no
deletion_protection_enabled Enables deletion protection for table bool null no
global_secondary_indexes Describe a GSI for the table; subject to the normal limits on the number of GSIs, projected attributes, etc. any [] no
hash_key The attribute to use as the hash (partition) key. Must also be defined as an attribute string null no
ignore_changes_global_secondary_index Whether to ignore changes lifecycle to global secondary indices, useful for provisioned tables with scaling bool false no
import_table Configurations for importing s3 data into a new table. any {} no
local_secondary_indexes Describe an LSI on the table; these can only be allocated at creation so you cannot change this definition after you have created the resource. any [] no
name Name of the DynamoDB table string null no
point_in_time_recovery_enabled Whether to enable point-in-time recovery bool false no
range_key The attribute to use as the range (sort) key. Must also be defined as an attribute string null no
read_capacity The number of read units for this table. If the billing_mode is PROVISIONED, this field should be greater than 0 number null no
replica_regions Region names for creating replicas for a global DynamoDB table. any [] no
server_side_encryption_enabled Whether or not to enable encryption at rest using an AWS managed KMS customer master key (CMK) bool false no
server_side_encryption_kms_key_arn The ARN of the CMK that should be used for the AWS KMS encryption. This attribute should only be specified if the key is different from the default DynamoDB CMK, alias/aws/dynamodb. string null no
stream_enabled Indicates whether Streams are to be enabled (true) or disabled (false). bool false no
stream_view_type When an item in the table is modified, StreamViewType determines what information is written to the table's stream. Valid values are KEYS_ONLY, NEW_IMAGE, OLD_IMAGE, NEW_AND_OLD_IMAGES. string null no
table_class The storage class of the table. Valid values are STANDARD and STANDARD_INFREQUENT_ACCESS string null no
tags A map of tags to add to all resources map(string) {} no
timeouts Updated Terraform resource management timeouts map(string) {"create": "10m", "delete": "10m", "update": "60m"} no
ttl_attribute_name The name of the table attribute to store the TTL timestamp in string "" no
ttl_enabled Indicates whether ttl is enabled bool false no
write_capacity The number of write units for this table. If the billing_mode is PROVISIONED, this field should be greater than 0 number null no
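Taken together, a provisioned table with autoscaling enabled might look like the following. This is a sketch; capacity numbers and cooldowns are illustrative, and the authoritative version lives in the examples/autoscaling directory of the repository:

```hcl
module "dynamodb_table" {
  source = "terraform-aws-modules/dynamodb-table/aws"

  name         = "my-table"
  hash_key     = "id"
  billing_mode = "PROVISIONED"

  # Initial provisioned capacity; autoscaling takes over from here.
  read_capacity  = 5
  write_capacity = 5

  autoscaling_enabled = true

  autoscaling_read = {
    scale_in_cooldown  = 60
    scale_out_cooldown = 30
    target_value       = 70
    max_capacity       = 20
  }

  autoscaling_write = {
    scale_in_cooldown  = 60
    scale_out_cooldown = 30
    target_value       = 70
    max_capacity       = 20
  }

  attributes = [
    {
      name = "id"
      type = "N"
    }
  ]
}
```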

Outputs

Name Description
dynamodb_table_arn ARN of the DynamoDB table
dynamodb_table_id ID of the DynamoDB table
dynamodb_table_stream_arn The ARN of the Table Stream. Only available when var.stream_enabled is true
dynamodb_table_stream_label A timestamp, in ISO 8601 format of the Table Stream. Only available when var.stream_enabled is true

Authors

Module is maintained by Anton Babenko with help from these awesome contributors.

License

Apache 2 Licensed. See LICENSE for full details.

terraform-aws-dynamodb-table's People

Contributors

antonbabenko, bangpound, betajobot, bryantbiggs, dev-slatto, huzaifa-binafzal, leoalves100, lobsterdore, magreenbaum, majoras-masque, max-rocket-internet, mnylensc, mukta-puri, semantic-release-bot, szesch, szymonpk, zyntogz


terraform-aws-dynamodb-table's Issues

Support for addition of GSI

Is your request related to a new offering from AWS?

No, this is a higher level abstraction

Is your request related to a problem? Please describe.

AWS does not support adding GSI to existing tables. A new table needs to be created for adding a GSI. Destroying a table would cause data loss.

Describe the solution you'd like.

Ideally, I would love for Terraform to take care of the data migration from the old table to the new one when a "must recreate"-level change is detected.

If the option is activated, the module should take care of:

  • backing up the table
  • creating the new table with the new GSI
  • restoring the backup into the new table
  • destroying the old table

Describe alternatives you've considered.

Alternatively, everything has to be done by hand

Additional context

I'm not sure if this is already supported and there is no proper place to ask in this repository. Maybe a "question" type of issue?

Documentation for ignore_changes_global_secondary_index should warn that it will destroy the table

Is your request related to a problem? Please describe.

Documentation states that

Warning: autoscaling with global secondary indexes

When using an autoscaled provisioned table with GSIs you may find that applying TF changes whilst a GSI is scaled up will reset the capacity, there is an open issue for this on the AWS Provider. To get around this issue you can enable the ignore_changes_global_secondary_index setting however, using this setting means that any changes to GSIs will be ignored by Terraform and will hence have to be applied manually (or via some other automation).

However, it should maybe also warn you that trying to set this variable after the table is already created will cause the table to be recreated.

Describe the solution you'd like.

Mention this caveat and maybe offer a migration path in the documentation.

It seems that using terraform state mv "module.dynamodb.aws_dynamodb_table.autoscaled[0]" "module.dynamodb.aws_dynamodb_table.autoscaled_gsi_ignore[0]" before applying the change from ignore_changes_global_secondary_index = false -> true will preserve the table without recreating it.

KMS CMK key for DynamoDB global table

Hi, I wanted to know how we can supply a KMS CMK for a DynamoDB global table, since KMS keys are region-specific and for a global table Terraform will create the table in another region.
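The module's replica_regions input accepts a per-replica kms_key_arn, so a region-specific CMK can be supplied for each replica. A sketch (the key ARNs and regions are placeholders):

```hcl
module "dynamodb_table" {
  source = "terraform-aws-modules/dynamodb-table/aws"

  name     = "my-global-table"
  hash_key = "id"

  # CMK in the table's home region
  server_side_encryption_enabled     = true
  server_side_encryption_kms_key_arn = "arn:aws:kms:us-east-1:111122223333:key/primary-key-id"

  replica_regions = [{
    region_name = "us-east-2"
    # CMK created in the replica's region
    kms_key_arn = "arn:aws:kms:us-east-2:111122223333:key/replica-key-id"
  }]

  attributes = [
    {
      name = "id"
      type = "S"
    }
  ]
}
```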

Problem updating resource every time due to `table_class` argument

Hello 🙂

Description

  • ✋ I have searched the open/closed issues and my issue is not listed.

CONTEXT: I have a PAY_PER_REQUEST table for prod and PROVISIONED for the other accounts.
PROBLEM: when specifying the table_class argument, Terraform wants to update the resource every time.
Initial problem found in version 3.1.2 of this module (latest).

My DynamoDB config (partial content):

module "svc_propects_dynamodb" {
  source  = "terraform-aws-modules/dynamodb-table/aws"
  version = "3.1.2"

  name = local.dynamodb_table_name

  # prod: onDemand
  # others: provisioned 1-10 R/W
  billing_mode = local.env_vars.billing_mode
  table_class  = "STANDARD"

When doing a terraform plan we hit this issue where Terraform wants to update table_class every time:

module.svc_propects_dynamodb.aws_dynamodb_table.this[0]: Refreshing state... [id=svcProspects-tfmanaged-sdbx_with_prod_spec]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # module.svc_propects_dynamodb.aws_dynamodb_table.this[0] will be updated in-place
  ~ resource "aws_dynamodb_table" "this" {
        id             = "svcProspects-tfmanaged-sdbx_with_prod_spec"
        name           = "svcProspects-tfmanaged-sdbx_with_prod_spec"
      + table_class    = "STANDARD"
        tags           = {
            "Name" = "svcProspects-tfmanaged-sdbx_with_prod_spec"
        }
        # (8 unchanged attributes hidden)

I looked in your release notes and you recently worked on adding table_class for PAY_PER_REQUEST tables. See #64.

This is a recent problem caused by those changes.

I tested different versions to see if we have the bug.

PAY_PER_REQUEST has the bug in: 3.1.2, 3.1.1 and 3.1.0
PROVISIONED has the bug in: 3.1.2 and 3.1.1 (not in 3.1.0)

⚠️ Note

  • ✋ I have read the note below.

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

  • Module version [Required]: 3.1.2, 3.1.1 and 3.1.0

  • Terraform version: 1.2.0 & 1.3.5 (latest)

  • Provider version(s): AWS 4.38 & 4.40 (latest)

Reproduction Code [Required]

Steps to reproduce the behavior:

Workspace used

Expected behavior

Terraform should not update this resource

Actual behavior

Update infinite loop on the table_class argument

updating Amazon DynamoDB Table: updating replicas, while creating: updating replica point in time recovery: updating PITR: ValidationException: 1 validation error detected: Invalid AWS region

Hi, I really hope that you can help me with this issue, as I already spent 3 days troubleshooting and trying different things.
Thanks in advance.

Description

We have several DynamoDB tables in our main region (us-east-1) and want to create replica (global) tables in another region (us-east-2). We have PITR enabled in the main region but we do not want to enable it for tables in the replica region. So, I have a Terraform configuration which in theory should work fine, and it actually creates all the replica tables, but the process completes with a very weird error:

Error: updating Amazon DynamoDB Table (arn:aws:dynamodb:us-east-1:<aws_account_id>:table/<table_name>): updating replicas, while creating: updating replica (us-east-2) point in time recovery: updating PITR: ValidationException: 1 validation error detected: Invalid AWS region in 'arn:aws:dynamodb:us-east-1:<aws_account_id>:table/<table_name>'

This very same error shows for each table created in the replica region, though as I mentioned the global tables have been created successfully. I think it somehow tries to configure PITR for the replica region as well, but I don't understand why.

I first tried not passing the point_in_time_recovery parameter in the replica_regions block at all, as I see that your module will set it to null, and the default value for the aws_dynamodb_table resource is false in the aws provider according to the documentation.

Then I tried updating the configuration and passing the point_in_time_recovery parameter as false explicitly in the replica_regions block, but I got the same error. I cannot find anything related on the Internet. I understand that it is a ValidationException returned from the AWS API, but I don't understand what I am missing.

  • ✋ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version [Required]: 4.0.1

  • Terraform version: 1.7.4

  • Provider version(s): 5.46.0

Reproduction Code [Required]

main.tf

module "dynamodb_table" {
  source  = "terraform-aws-modules/dynamodb-table/aws"
  version = "4.0.1"

  name                               = "table_name"
  hash_key                           = "HK"
  range_key                          = "SK"
  point_in_time_recovery_enabled     = true
  ttl_enabled                        = true
  ttl_attribute_name                 = "expire_ttl"
  server_side_encryption_enabled     = true
  server_side_encryption_kms_key_arn = data.aws_kms_key.general_key.arn
  table_class                        = "STANDARD"
  deletion_protection_enabled        = true
  stream_enabled                     = true
  attributes                         = local.dynamodb_attributes
  global_secondary_indexes           = local.dynamodb_global_secondary_indexes
  replica_regions                    = local.dynamodb_replica_regions
  tags                               = local.tags
}

local.tf

locals {
  environment    = "prod"
  region         = "us-east-1"
  replica_region = "us-east-2"

  tags = {
    Terraform   = "true"
    Environment = local.environment
  }

  dynamodb_attributes = [
    {
      name = "HK"
      type = "S"
    },
    {
      name = "SK"
      type = "S"
    },
    {
      name = "GSI1_HK"
      type = "S"
    },
    {
      name = "GSI1_SK"
      type = "S"
    },
    {
      name = "GSI2_HK"
      type = "S"
    },
    {
      name = "GSI2_SK"
      type = "S"
    }
  ]

  dynamodb_global_secondary_indexes = [
    {
      name            = "GSI1"
      hash_key        = "GSI1_HK"
      range_key       = "GSI1_SK"
      projection_type = "ALL"
    },
    {
      name            = "GSI2"
      hash_key        = "GSI2_HK"
      range_key       = "GSI2_SK"
      projection_type = "ALL"
    },
  ]

  dynamodb_replica_regions = [{
    region_name            = local.replica_region
    kms_key_arn            = data.aws_kms_key.dynamodb_replica_cmk.arn
    propagate_tags         = true
    point_in_time_recovery = false
  }]
}

data.tf

data "aws_kms_key" "dynamodb_replica_cmk" {
  provider = aws.replica
  key_id   = "alias/replica-cmk"
}

data "aws_kms_key" "general_key" {
  key_id = "alias/general-key"
}

provider.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.21"
    }
  }
}

provider "aws" {
  region = local.replica_region
  alias  = "replica"
}

Steps to reproduce the behavior:

- Are you using workspaces?
  No, we are not using workspaces.

- Have you cleared the local cache (see Notice section above)?
  Yes, I have cleared the local cache.

List steps in order that led up to the issue you encountered:
- Just run terraform apply, that's it. The plan shows everything is good, but applying generates the error mentioned above.

Expected behavior

This should create replica (global) tables in replica region without PITR.

Actual behavior

It actually creates replica tables, but process completes with error (for each table described in terraform configuration) already mentioned above:

Error: updating Amazon DynamoDB Table (arn:aws:dynamodb:us-east-1:<aws_account_id>:table/<table_name>): updating replicas, while creating: updating replica (us-east-2) point in time recovery: updating PITR: ValidationException: 1 validation error detected: Invalid AWS region in 'arn:aws:dynamodb:us-east-1:<aws_account_id>:table/<table_name>'

Terminal Output Screenshot(s)

Additional context

After the global tables were created despite the fact that terraform apply failed, now if I run it one more time, even terraform plan completes with this error:

Error: reading Amazon DynamoDB Table (arn:aws:dynamodb:us-east-1:<aws_account_id>:table/<table_name>): describing Continuous Backups: ValidationException: 1 validation error detected: Invalid AWS region in 'arn:aws:dynamodb:us-east-1:<aws_account_id>:table/<table_name>' status code: 400, request id: AND2DAN15FOAP7KSA3LK5VCDDRVV4KQNSO5AEMVJF66Q9ASUAAJG

That's why I think that it tries to configure PITR for replica tables as well, even that I set it to false explicitly.

Separated target_value for GSI autoscaling

Is your request related to a problem? Please describe.

Hello, maybe I read it incorrectly, but it seems to me that for autoscaling on a GSI you cannot actually choose a different target_value for reads and for writes. Both of them will share the same value. Is this intentional?

Describe the solution you'd like.

If it's the case, then I'd like to have a separate value for read_target_value and write_target_value.
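For reference, per-index autoscaling currently flows through the autoscaling_indexes input, with the target value coming from the shared defaults. A sketch of the shape (key names assumed from the examples/autoscaling directory; capacities are illustrative):

```hcl
autoscaling_indexes = {
  GSI1 = {
    read_min_capacity  = 5
    read_max_capacity  = 30
    write_min_capacity = 5
    write_max_capacity = 30
  }
}

# target_value is currently taken from the shared defaults,
# which is exactly what this issue is questioning:
autoscaling_defaults = {
  scale_in_cooldown  = 0
  scale_out_cooldown = 0
  target_value       = 70
}
```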

Dynamodb Global Table version

I have an issue where the global table version that is created by default is 2017.11.29, not the 2019 version. The code that creates this is:

resource "aws_dynamodb_global_table" "global-cats-table" {
  depends_on = [
    module.dynamodb_table_globalCats_eu_central_1,
    module.dynamodb_table_globalCats_us_west_1,
  ]

  name = "globalCats"

  replica {
    region_name = "eu-central-1"
  }

  replica {
    region_name = "us-west-2"
  }
}

What needs to be changed to implement the 2019 version?

Terraform ignores changes attribute type

Description

Created a new DynamoDB table, then updated the attribute type. Terraform thinks nothing has changed. I'd expect it to destroy the table and recreate it.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

It does

Versions

  • Version:
    Terraform v1.0.2 (tested with Terraform v1.0.3 too)
  • Provider(s):
    registry.terraform.io/hashicorp/aws v3.50.0 (on linux_amd64)
  • Module: aws_dynamodb_table

Reproduction

Deploy aws_dynamodb_table
Change Attribute ID
run plan or apply

Code Snippet to Reproduce

resource "aws_dynamodb_table" "tablename" {
  name           = "tablename${var.Environment}"
  billing_mode   = "PROVISIONED"
  read_capacity  = 5
  write_capacity = 5
  hash_key       = "id"

  attribute {
    name = "id"
    type = "N"
  }
}

change to:

resource "aws_dynamodb_table" "tablename" {
  name           = "tablename${var.Environment}"
  billing_mode   = "PROVISIONED"
  read_capacity  = 5
  write_capacity = 5
  hash_key       = "id"

  attribute {
    name = "id"
    type = "S"
  }
}

Expected behavior

Terraform will remove the table and recreate with the new attribute type

Actual behavior

Terraform thinks nothing has changed

Terminal Output Screenshot(s)

No changes. Your infrastructure matches the configuration.

Additional context

ResourceInUseException

Getting the following error on table creation:

module.TEST_TABLE.aws_dynamodb_table.this[0]: Creating...
module.TEST_TABLE.aws_dynamodb_table.this[0]: Still creating... [10s elapsed]
module.TEST_TABLE.aws_dynamodb_table.this[0]: Creation complete after 20s [id=test-log]

Error: error creating DynamoDB Table: ResourceInUseException: Attempt to change a resource which is still in use: Table is being created: hkdev-dynamo-fp-sale-log

Version:

Terraform v0.13.7
+ provider registry.terraform.io/hashicorp/aws v4.9.0

"propagate_tags" is not expected error when creating a new ddb table

Description


  • ✋ I have searched the open/closed issues and my issue is not listed.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

  • Module version [Required]: 3.0.0

  • Terraform version: v1.2.6

  • Provider version(s): v4.27.0

Reproduction Code [Required]

Steps to reproduce the behavior:

  1. create a new project with main.tf
  2. add the example resource taken from README
module "dynamodb_table" {
  source   = "terraform-aws-modules/dynamodb-table/aws"

  name     = "my-table"
  hash_key = "id"

  attributes = [
    {
      name = "id"
      type = "N"
    }
  ]

  tags = {
    Terraform   = "true"
    Environment = "staging"
  }
}
  3. terraform init and terraform plan

Expected behavior

expect terraform plan succeed without errors

Actual behavior

terraform plan failed with an error.

Terminal Output Screenshot(s)

╷
│ Error: Unsupported argument
│
│   on .terraform/modules/dynamodb_table/main.tf line 63, in resource "aws_dynamodb_table" "this":
│   63:       propagate_tags = lookup(replica.value, "propagate_tags", null)
│
│ An argument named "propagate_tags" is not expected here.
╵

Additional context

stream arn reference

The stream ARN in the outputs seems incorrect:

value = var.stream_enabled ? try(aws_dynamodb_table.this[0].id, aws_dynamodb_table.autoscaled[0].stream_arn, "") : null

Shouldn't this be:

value = var.stream_enabled ? try(aws_dynamodb_table.this[0].stream_arn, aws_dynamodb_table.autoscaled[0].stream_arn, "") : null
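Spelled out as a complete output block, the reporter's suggested correction would read roughly as follows (a sketch, not the module's current source; the description text is taken from the Outputs table above):

```hcl
output "dynamodb_table_stream_arn" {
  description = "The ARN of the Table Stream. Only available when var.stream_enabled is true"
  value       = var.stream_enabled ? try(aws_dynamodb_table.this[0].stream_arn, aws_dynamodb_table.autoscaled[0].stream_arn, "") : null
}
```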

Table class is not supported in this module, which causes an issue for the tables already created with the table class set.

Description

Table class is not supported in this module, which causes an issue for the tables already created with the table class set. Please see https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/dynamodb_table#table_class.

Versions

  • Module version [Required]: Latest

  • Terraform version:
    Latest

  • Provider version(s):
    AWS (Latest)

Reproduction Code [Required]

No way to provide the table class in this module!

Steps to reproduce the behavior:

N/A

Expected behavior

Table class attribute should be supported.

Actual behavior

Table class is not set, leaving it null (which AWS does not consider a correct value).


Terminal Output Screenshot(s)

N/A

Additional context

N/A

Option to ignore replica changes and play nice with aws_dynamodb_table_replica

See: #73.

Opening a new issue because the above issue was "automatically locked as resolved", but it was not resolved. See the comments: my last comment, which I feel was perfectly valid, was not addressed; it seems like it was completely ignored. Is this module even maintained anymore? Judging by how the above issue was handled, and the other issue(s) and PR(s) in this repo, nobody seems to be looking at them at all.

propagate_tags variable is not supported in the module, which is preventing the propagation of tags to DynamoDB replica tables in another region

Is your request related to a new offering from AWS?

Is this functionality available in the AWS provider for Terraform? See CHANGELOG.md, too.

Is your request related to a problem? Please describe.

Currently we're facing an issue where there is no way to propagate tags to DynamoDB replica tables in another region.

Describe the solution you'd like.

This issue has been fixed in the new release v4.23.0 of terraform-provider-aws; details can be found at https://github.com/hashicorp/terraform-provider-aws/blob/v4.23.0/CHANGELOG.md ("resource/aws_dynamodb_table: Add replica.*.propagate_tags argument to allow propagating tags to replicas", hashicorp/terraform-provider-aws#25866). We need to update the module by adding a propagate_tags variable inside the replica_regions variable that we already have.
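Once exposed by the module, the requested usage would look roughly like this (a sketch; the region name is illustrative):

```hcl
replica_regions = [{
  region_name = "eu-west-1"
  # Requested: copy the table's tags onto this replica
  propagate_tags = true
}]
```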

Failure when trying to import existing AutoScaling policy

Description

Using this module I want to import existing resources created by the web console.

Everything goes fine until I try to import the resources related to aws_appautoscaling_policy.table_read_policy[0] and aws_appautoscaling_policy.table_write_policy[0], where the error says that the desired resource is not found:
Error: Cannot import non-existent remote object

After multiple searches on the Internet and going through the documentation, I tried to import the existing resources. According to how the module is assembled, the import ID should be: dynamodb/table/example/dynamodb:table:ReadCapacityUnits/DynamoDBReadCapacityUtilization:table/example

This identifier belongs to the structure:
service-namespace = dynamodb
resource-id = table/example
scalable-dimension = dynamodb:table:ReadCapacityUnits
policy-name = DynamoDBReadCapacityUtilization:table/example

However, when executing the command: aws application-autoscaling describe-scaling-policies --service-namespace dynamodb

I obtain the following structure

            "PolicyARN": "arn:aws:autoscaling:us-west-2:00000000:scalingPolicy:xxxxxxxxxx-98xx:resource/dynamodb/table/example:policyName/$example",
            "PolicyName": "$example-scaling-policy",
            "ServiceNamespace": "dynamodb",
            "ResourceId": "table/example",
            "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
            "PolicyType": "TargetTrackingScaling",
            "TargetTrackingScalingPolicyConfiguration": {
                "TargetValue": 70.0,
                "PredefinedMetricSpecification": {
                    "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
                }
            },

Using this value I construct the following identifier which does work:
dynamodb/table/example/dynamodb:table:WriteCapacityUnits/$example-scaling-policy

The problem is that Terraform is now trying to destroy the resource:

  module.dynamodb.aws_appautoscaling_policy.table_read_policy[0] must be replaced
-/+ resource "aws_appautoscaling_policy" "table_read_policy" {
      ~ alarm_arns         = [                                                                                                           
          - "arn:aws:cloudwatch:us-west-2:000000000000:alarm:TargetTracking-table/example-AlarmHigh-xxxxxxxxxx",
          - "arn:aws:cloudwatch:us-west-2:000000000000:alarm:TargetTracking-table/example-AlarmLow-xxxxxxxxxx",
          - "arn:aws:cloudwatch:us-west-2:000000000000:alarm:TargetTracking-table/example-ProvisionedCapacityHigh-xxxxxxxxxx",
          - "arn:aws:cloudwatch:us-west-2:000000000000:alarm:TargetTracking-table/example-ProvisionedCapacityLow-xxxxxxxxxx",
        ] -> (known after apply)                         
      ~ arn                = "arn:aws:autoscaling:us-west-2:000000000000:scalingPolicy:xxxxxxxxxx:resource/dynamodb/table/example:policyName/$example-scaling-policy" -> (known after apply)
      ~ id                 = "$example-scaling-policy" -> (known after apply)
      ~ name               = "$example-scaling-policy" -> "DynamoDBReadCapacityUtilization:table/example" # forces replacement     
    }
  • [x] ✋ I have searched the open/closed issues and my issue is not listed.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (! ONLY if state is stored remotely, which hopefully you are following that best practice!): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

  • Module version [Required]: 4.0.0

  • Terraform version: v1.6.3

  • Provider version(s): v5.25.0

Change of attribute type not triggering a new plan

Description

I created a table with some attributes related to the hash_key and range_key.
After some internal discussion, the company decided to change the type of these attributes.
However, after the modification, Terraform does not detect any change in the plan.

Versions

  • Module version [Required]:
registry.terraform.io/terraform-aws-modules/dynamodb-table/aws 2.0.0
  • Terraform version:
Terraform v1.2.4
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v4.22.0

Reproduction Code [Required]

First version of the file:

module "User" {
  source   = "terraform-aws-modules/dynamodb-table/aws"

  name                = "User"
  hash_key            = "UserId"
  range_key           = "AppId"
  billing_mode        = "PROVISIONED"
  read_capacity       = 1
  write_capacity      = 1
  autoscaling_enabled = true

  autoscaling_read = {
    target_value       = 70
    max_capacity       = 10
  }

  autoscaling_write = {
    target_value       = 70
    max_capacity       = 10
  }

  attributes = [
    {
      name = "UserId"
      type = "S"
    },
    {
      name = "AppId"
      type = "S"
    }
  ]

  tags = {
    Terraform   = "true"
    Environment = "staging"
  }
}

After the modification:

module "User" {
  source   = "terraform-aws-modules/dynamodb-table/aws"

  name                = "User"
  hash_key            = "UserId"
  range_key           = "AppId"
  billing_mode        = "PROVISIONED"
  read_capacity       = 1
  write_capacity      = 1
  autoscaling_enabled = true

  autoscaling_read = {
    target_value       = 70
    max_capacity       = 10
  }

  autoscaling_write = {
    target_value       = 70
    max_capacity       = 10
  }

  attributes = [
    {
      name = "UserId"
      type = "N"
    },
    {
      name = "AppId"
      type = "N"
    }
  ]

  tags = {
    Terraform   = "true"
    Environment = "staging"
  }
}

Response for Terraform plan after the change:

module.User.aws_dynamodb_table.autoscaled[0]: Refreshing state... [id=User]
module.User.aws_appautoscaling_target.table_read[0]: Refreshing state... [id=table/User]
module.User.aws_appautoscaling_target.table_write[0]: Refreshing state... [id=table/User]
module.User.aws_appautoscaling_policy.table_write_policy[0]: Refreshing state... [id=DynamoDBWriteCapacityUtilization:table/User]
module.User.aws_appautoscaling_policy.table_read_policy[0]: Refreshing state... [id=DynamoDBReadCapacityUtilization:table/User]

No changes. Your infrastructure matches the configuration.

Expected behavior

terraform plan should trigger a table modification, as the attributes changed their types.

Actual behavior

terraform plan states no changes.

Support removing a table without deleting it

Is your request related to a problem? Please describe.

Right now, removing a table's configuration destroys the table.

Expected behaviour: a way to remove a table's configuration without deleting the table.

Describe the solution you'd like.

We could add a prevent_destroy boolean variable (defaulting to false to retain existing behaviour) that would add lifecycle { prevent_destroy = true } to the aws_dynamodb_table resource.
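A minimal sketch of what this could look like on the table resource. Note that Terraform requires lifecycle meta-arguments to be literal values, so the flag cannot simply reference a variable; the module would likely need a second, parallel resource block (as it already does for autoscaling):

```hcl
resource "aws_dynamodb_table" "this" {
  name         = "my-table"
  hash_key     = "id"
  billing_mode = "PAY_PER_REQUEST"

  attribute {
    name = "id"
    type = "S"
  }

  # Lifecycle arguments must be literals, so this cannot be driven by a
  # variable directly; a parallel resource block with prevent_destroy
  # hard-coded would be one way for the module to expose the option.
  lifecycle {
    prevent_destroy = true
  }
}
```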

Describe alternatives you've considered.

Right now, the best workaround is manually removing the table from the Terraform state. That is super inconvenient and doesn't mesh well with CI/CD practices.

Option to ignore replica changes and play nice with aws_dynamodb_table_replica

Is this functionality available in the AWS provider for Terraform?

Yes, see the NOTEs here: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/dynamodb_table_replica.html and here https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/dynamodb_table.html.

Is your request related to a problem? Please describe.

This feature would provide a better workaround than the one mentioned in hashicorp/terraform-provider-aws#13097 (although that issue is closed, there is still no proper solution).

Describe the solution you'd like.

A variable that would ultimately result in lifecycle { ignore_changes = [replica] } on the table resource.
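As a sketch, the resulting resource would carry an ignore_changes entry for the replica block. The same literal-value restriction on lifecycle arguments applies, so the module would need a dedicated resource variant rather than a variable-driven lifecycle block:

```hcl
resource "aws_dynamodb_table" "this" {
  name         = "my-table"
  hash_key     = "id"
  billing_mode = "PAY_PER_REQUEST"

  attribute {
    name = "id"
    type = "S"
  }

  lifecycle {
    # Replicas are managed externally via aws_dynamodb_table_replica,
    # so drift in the replica block is ignored here.
    ignore_changes = [replica]
  }
}
```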

Additional context

This module does not provide an option to ignore replicas, making it impossible to use together with aws_dynamodb_table_replica. The reason for this request stems from this outstanding unresolved issue: DynamoDB with Global table v2019.11.21, PROVISIONED billing, and autoscaling generates a Table Capacity and/or GSI capacity ValidationException. The workaround for that issue is to run Terraform without the replica block and then re-run with it, which is unacceptable for obvious reasons.

My situation is that I have two separate Terraform configurations: one is responsible for all resources in us-east-1, and the other for all resources in us-west-2. My solution to the issue linked above is to create the source table in my us-east-1 configuration, ignoring the replica block. Once this config is applied, the us-west-2 configuration can then create a replica of the us-east-1 source table using aws_dynamodb_table_replica. My us-east-1 configuration that creates the table with autoscaling is just a copy of this module with the one addition of lifecycle { ignore_changes = [replica] }. I would love to avoid maintaining my own module and use this module instead, but since there is no option to ignore replica changes, that isn't possible: a subsequent run of my us-east-1 configuration ends up deleting the replica that was created with aws_dynamodb_table_replica in my us-west-2 configuration.
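A minimal sketch of that two-configuration split, assuming provider aliases aws.use1 and aws.usw2 (names hypothetical) and a streams-enabled table as required for replicas. In truly separate root configurations, the source table's ARN would be passed via remote state or a data source rather than a direct reference:

```hcl
# us-east-1 configuration: source table, with replica drift ignored
resource "aws_dynamodb_table" "source" {
  provider         = aws.use1
  name             = "example"
  hash_key         = "id"
  billing_mode     = "PAY_PER_REQUEST"
  stream_enabled   = true
  stream_view_type = "NEW_AND_OLD_IMAGES"

  attribute {
    name = "id"
    type = "S"
  }

  lifecycle {
    ignore_changes = [replica]
  }
}

# us-west-2 configuration: the replica is managed as its own resource
resource "aws_dynamodb_table_replica" "west" {
  provider         = aws.usw2
  global_table_arn = aws_dynamodb_table.source.arn
}
```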

Introduce the option to specify lifecycle rules on the DynamoDB tables

Is your request related to a new offering from AWS?

Is this functionality available in the AWS provider for Terraform? See CHANGELOG.md, too.

  • Yes βœ…

Is your request related to a problem? Please describe.

The module does not currently offer an option to specify the prevent_destroy lifecycle rule on the DynamoDB table

Describe the solution you'd like.

I would like to be able to introduce a variable similar to:

variable "prevent_destruction_of_table" {
  description = "Enable to prevent destruction of the DynamoDB table. No other resources are protected by this flag."
  default     = false
}

This could then be used to toggle lifecycle rules on the DynamoDB table in main:

lifecycle {
  prevent_destroy = var.prevent_destruction_of_table
}

Describe alternatives you've considered.

Currently unable to specify lifecycle rules on modules as per this feature request hashicorp/terraform#18367

Schedule scaling

Is your request related to a problem? Please describe.

If you use PROVISIONED billing with autoscaling and you have an application with a high spike at a specific time, DynamoDB autoscaling is not quick enough. For that reason, you can use scheduled scaling (https://docs.aws.amazon.com/autoscaling/application/userguide/get-started-exercise.html). This feature is currently missing from this module.
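For reference, a scheduled action can already be attached alongside the module using a standalone aws_appautoscaling_scheduled_action resource. A minimal sketch, assuming a provisioned table named my-table (table name, schedule, and capacities are placeholders):

```hcl
# Scale read capacity up ahead of a known daily spike.
resource "aws_appautoscaling_scheduled_action" "read_spike" {
  name               = "pre-spike-read-scale-up"
  service_namespace  = "dynamodb"
  resource_id        = "table/my-table"
  scalable_dimension = "dynamodb:table:ReadCapacityUnits"
  schedule           = "cron(0 8 * * ? *)" # 08:00 UTC daily

  scalable_target_action {
    min_capacity = 100
    max_capacity = 1000
  }
}
```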

Describe the solution you'd like.

The best case would be to have it as an option implemented in this module.

Describe alternatives you've considered.

Alternatively, this module could output the required values so that an additional module can be used together with this one.

Additional context

#74 One possible approach

DynamoDB Output ID

Hey @antonbabenko @max-rocket-internet,

I'm currently building a side project with your wonderful serverless.tf modules ✨ I love it so far πŸ₯°

One thing I couldn't figure out yet was this line here:

output "this_dynamodb_table_id" {

My goal is to use the ID of the DynamoDB table in my AppSync module. Apparently, when I consume it in the AppSync module, it adds the ARN twice:

arn:aws:dynamodb:eu-central-X:XXXXXXXXXX:table/arn:aws:dynamodb:eu-central-X:XXXXXXXXXX::table/my-table

I want to use the output of the ID in my AppSync module. So what I did is:

  1. Define a variable:

variable "this_dynamodb_table_id" {
  type = string
}

  2. Jump into the data source configuration in my AppSync module and add the variable:

    dynamodb1 = {
      type = "AMAZON_DYNAMODB"

      # Note: dynamic references (module.dynamodb_table1.this_dynamodb_table_id) do not work unless you create this resource in advance
      table_name = var.this_dynamodb_table_id
      region     = var.region
    }

  3. Last step: go to my main.tf in the root and add the module:

module "my_test_api" {
  source                 = "./modules/my-test-api"
  region                 = var.region
  this_dynamodb_table_id = module.my_table.this_dynamodb_table_arn
}

After executing terraform plan I receive this output:

arn:aws:dynamodb:eu-central-X:XXXXXXXXXX:table/arn:aws:dynamodb:eu-central-X:XXXXXXXXXX::table/my-table

When I just add a string value without using a variable, it works as expected.

Maybe someone can tell me what I'm doing wrong here, or is it a bug because of the concat function?

Cheers
AndrΓ©
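
For what it's worth, the doubled ARN in the output above suggests the table ARN output is being wired into a parameter the AppSync module treats as a table name, which it then prefixes into an ARN again. Passing the module's id output (the table name) instead should avoid the duplication. A sketch, assuming the module names from the snippets above:

```hcl
module "my_test_api" {
  source = "./modules/my-test-api"
  region = var.region

  # Pass the table name (the id output), not the ARN, where the
  # AppSync data source expects a table_name.
  this_dynamodb_table_id = module.my_table.this_dynamodb_table_id
}
```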

Error creating application autoscaling target

I've been getting errors when trying to create tables with autoscaling. The following error occurs, and then if I run terraform apply again, it works.

Error:

Error: Error creating application autoscaling target: ValidationException: DynamoDB table does not exist: table/MyTable

  on .terraform/modules/dynamodb-table-MyTable/terraform-aws-dynamodb-table-0.5.0/autoscaling.tf line 29, in resource "aws_appautoscaling_target" "table_write":
  29: resource "aws_appautoscaling_target" "table_write" {

Config:

module "dynamodb-table-MyTable" {
  source  = "terraform-aws-modules/dynamodb-table/aws"
  version = "0.5.0"

  name = "MyTable"

  billing_mode = "PROVISIONED"

  read_capacity     = 5
  write_capacity    = 5
  autoscaling_read  = { max_capacity = 1000 }
  autoscaling_write = { max_capacity = 1000 }

  hash_key  = "foo"
  range_key = "bar"

  attributes = [
    {
      name = "foo"
      type = "S"
    },
    {
      name = "bar"
      type = "S"
    }

  ]

  tags = local.tags
}
