
terraform-aws-rds-cluster's Issues

How to migrate from the inline Security Group rules to SG rules as separate resources

This PR #80 changed the Security Group rules from inline to resource-based.

This is a good move, since using inline SG rules is a bad practice. Inline rules have many issues; one of them is that you can't add new rules to the security group, because it's not possible to mix inline rules with rules defined as separate resources.

At the same time, this introduced a breaking change: if you want to update the module to the latest version, Terraform will try to add the new resource-based rules to the security group and will fail since the same rules already exist and we can't mix inline rules with resource-based rules.

Note that it's not possible to taint and destroy the security group since it has a dependent object (an Elastic Network Interface), which in turn has its own dependencies.

One possible solution would be to destroy the Aurora RDS cluster completely and recreate it. While possible in some cases (e.g. in dev environments), it may not be feasible in others (e.g. a production database has data, and a long outage is not acceptable).

A better way is to destroy just the inline security group rules without destroying the security group itself (or any other Aurora resources), and then add the resource-based security group rules.

Here are the steps to do that:

  1. Create a new branch of terraform-aws-rds-cluster module, e.g. strip-inline-sg-rules

  2. In the new branch, comment out all the aws_security_group_rule resources for resource "aws_security_group" "default"

  3. Add empty ingress and egress lists to the security group. NOTE: you can't skip ingress and egress completely, because Terraform would then not detect any changes to the inline rules (this is a bug/feature of TF):

resource "aws_security_group" "default" {
  name        = ...
  vpc_id      = var.vpc_id

  ingress = []
  egress  = []
}

NOTE: The branch strip-inline-sg-rules has already been created in this repository and steps 1-3 have already been performed.
The strip-inline-sg-rules branch can be used to perform the next steps.

  4. Update the Aurora cluster project to use the strip-inline-sg-rules branch of the terraform-aws-rds-cluster module:

module "aurora_postgres_cluster" {
  source = "git::https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=strip-inline-sg-rules"
  # ... the rest of the module configuration stays unchanged
}

  5. Apply the project. Terraform will just remove the inline rules from the security group without destroying the SG itself or any of the Aurora resources

  6. Update the Aurora cluster project to use the latest release of the terraform-aws-rds-cluster module:

module "aurora_postgres_cluster" {
  source = "git::https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=tags/0.34.0"
  # ... the rest of the module configuration stays unchanged
}

  7. Apply the project. Terraform will add the external resource-based SG rules

It takes a few minutes to go through all the steps, so the disruption to the production database will be minimal.

Action of deleting serverlessv2_scaling_configuration has no effect

Describe the Bug

The serverlessv2_scaling_configuration block cannot be deleted.

Expected Behavior

No change should be detected.

From AWS's documentation, it seems there is no way to delete these settings. But the Terraform plan makes it look like it is going to delete them. It would be great not to report this type of change (setting the values to null) until it can actually be applied, so the change doesn't reappear again and again.

Steps to Reproduce

  1. Create a regional RDS cluster with one writer with terraform.
  2. From the AWS console, add a reader with DB instance class Serverless v2
  3. Delete the reader from the AWS console. Now if we click "Modify" on the cluster, we will see the leftover Serverless v2 capacity settings such as Minimum ACUs and Maximum ACUs.
  4. When we run terraform plan, it always detects changes like:
     ~ serverlessv2_scaling_configuration {
          - max_capacity = 128 -> null
          - min_capacity = 2 -> null
        }

But it won't actually change these settings or delete the whole Serverless v2 capacity settings section from the cluster when we run terraform apply.
  5. When we rerun terraform plan, the above change shows up again.

Screenshots

No response

Environment

  • Module version: 0.44.0
  • Terraform version: 1.3.2

Additional Context

No response

Use an existing db cluster parameter group instead of creating new one

Describe the Feature

Since the quota on DB cluster parameter groups is not adjustable, always creating a new cluster parameter group is not feasible in a large system.

Expected Behavior

Add a new variable for the DB cluster parameter group name.
Use an existing DB cluster parameter group when it is specified.

Use Case

We have a large development team that creates a lot of RDS serverless clusters for development and testing.
Since the quota on DB cluster parameter groups is not adjustable, we can't create more.

Because nearly all of these RDS clusters are for testing only, a shared default cluster parameter group is acceptable in our environment.

Describe Ideal Solution

Add a new variable for the DB cluster parameter group name.
Use an existing DB cluster parameter group when it is specified.
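
A rough sketch of how this could look inside the module, assuming a new input variable (names here are hypothetical, not the module's actual implementation):

variable "db_cluster_parameter_group_name" {
  type        = string
  default     = ""
  description = "Name of an existing DB cluster parameter group to use; when empty, the module creates its own"
}

resource "aws_rds_cluster" "primary" {
  # ...
  # fall back to the parameter group the module creates (resource name assumed)
  db_cluster_parameter_group_name = coalesce(var.db_cluster_parameter_group_name, aws_rds_cluster_parameter_group.default.name)
}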

Alternatives Considered

Switching to a regular DB instance is a workaround, but it's not cost-efficient. A serverless cluster is a very good fit for our R&D testing.

Additional Context

No response

enable_http_endpoint not working for serverlessv2 configurations

Describe the Bug

When instance_type is "db.serverless" (for Serverless v2), the engine_mode does not accept the value "serverless", but that value is required by the module to enable the Data API via enable_http_endpoint = true. As a result, the condition only applies to Serverless v1.

Expected Behavior

That

...
instance_type        = "db.serverless"
enable_http_endpoint = true
...

would enable the Data API for serverless V2

Steps to Reproduce

...
instance_type        = "db.serverless"
enable_http_endpoint = true
...

Screenshots

No response

Environment

OSX, M1

Additional Context

No response

Second destroy will fail if snapshot is not skipped due to snapshot conflict


Describe the Bug

If the cluster is created and destroyed, then created again and destroyed again, the second destroy will fail because a final snapshot with the same name already exists from the first destroy.

│ Error: error deleting RDS Cluster (aurora-example-shared): DBClusterSnapshotAlreadyExistsFault: Cannot create the cluster snapshot because one with the identifier aurora-example-shared already exists.

Expected Behavior

Add a random suffix to the final snapshot identifier when the cluster is created, to avoid conflicts.
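
A minimal sketch of that idea, assuming a random_id resource next to the cluster (illustrative only, not the module's actual code):

resource "random_id" "final_snapshot_suffix" {
  byte_length = 4
}

resource "aws_rds_cluster" "default" {
  # ...
  skip_final_snapshot       = false
  final_snapshot_identifier = "${var.cluster_identifier}-final-${random_id.final_snapshot_suffix.hex}"
}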

Question Regarding parameter groups

I have a question regarding parameter groups. I have tried a couple of things, but I have not been able to construct a list of parameters for the aws_rds_cluster_parameter_group resource in the module. For example, I would like to set:

character_set_client=utf8
character_set_connection=utf8

Do you have an example definition for cluster_parameters?

Cheers
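
For reference, a minimal example of how the module's cluster_parameters list is typically populated (assuming a list of objects with name, value, and apply_method, as in recent module versions):

cluster_parameters = [
  {
    name         = "character_set_client"
    value        = "utf8"
    apply_method = "pending-reboot"
  },
  {
    name         = "character_set_connection"
    value        = "utf8"
    apply_method = "pending-reboot"
  }
]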

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Edited/Blocked

These updates have been manually edited so Renovate will no longer make changes. To discard all commits and start over, click on a checkbox.

Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Detected dependencies

terraform
enhanced-monitoring.tf
  • cloudposse/label/null 0.25.0
examples/basic/main.tf
examples/complete/main.tf
  • cloudposse/dynamic-subnets/aws 2.4.2
  • cloudposse/vpc/aws 2.1.0
examples/complete/versions.tf
  • aws >= 4.17.0
  • null >= 2.0
  • hashicorp/terraform >= 1.1.0
examples/enhanced_monitoring/main.tf
examples/postgres/main.tf
  • cloudposse/dynamic-subnets/aws 2.4.2
  • cloudposse/vpc/aws 2.1.0
examples/postgres/versions.tf
  • aws >= 4.17.0
  • null >= 2.0
  • hashicorp/terraform >= 1.1.0
examples/serverless_mysql/main.tf
examples/serverless_mysql57/main.tf
examples/serverlessv2_postgres/main.tf
  • cloudposse/dynamic-subnets/aws 2.4.2
  • cloudposse/vpc/aws 2.1.0
examples/serverlessv2_postgres/versions.tf
  • aws >= 4.12
  • null >= 2.0
  • hashicorp/terraform >= 1.1.0
examples/with_cluster_parameters/main.tf
main.tf
  • cloudposse/route53-cluster-hostname/aws 0.12.2
  • cloudposse/route53-cluster-hostname/aws 0.12.2
versions.tf
  • aws >= 4.23.0
  • null >= 2.0
  • hashicorp/terraform >= 1.0.0

  • Check this box to trigger a request for Renovate to run again on this repository

db_port not working as expected

db_port = 5454

I have defined the db_port value as 5454 in my .hcl file, but after applying, the RDS instances (reader and writer) are created with port 5432.
I am using the RDS configuration below:

engine         = "aurora-postgresql"
engine_version = "10.14"
cluster_family = "aurora-postgresql10"

Incorrect cluster_instance_count calculation when autoscaling_enabled = true

For Aurora, autoscaling only applies to read replicas. However, the Terraform code here does not support creating an autoscaling configuration with one read replica.
This seems related to #61, but is slightly different, I believe.

The instance_count and cluster_instance_count calculations are not correct when autoscaling_enabled = true.

Steps to reproduce:

  1. Create a cluster with autoscaling_enabled = false (default)
  2. Change autoscaling_enabled to true, and accept the default value for autoscaling_min_capacity (default: 1)

Result: two new resources are created:

  • resource "aws_appautoscaling_policy" "replicas"
  • resource "aws_appautoscaling_target" "replicas"

and one resource is deleted:

  • resource "aws_rds_cluster_instance" "default"

The aws_rds_cluster_instance is deleted because the value of local.cluster_instance_count has changed from 2 (the default if autoscaling_enabled = false) to 1 (based on different logic when autoscaling_enabled = true).

I confirmed this by setting autoscaling_min_capacity to 2. With this value, the resource aws_rds_cluster_instance is unmodified. However, in this case, the number of read replicas created is 2.

Potential fix:

min_instance_count = var.autoscaling_enabled ? var.autoscaling_min_capacity + 1 : var.cluster_size

Missing arguments


Describe the Bug

Looking at https://www.terraform.io/docs/providers/aws/r/rds_cluster.html and https://www.terraform.io/docs/providers/aws/r/rds_cluster_instance.html the following arguments are missing:

aws_rds_cluster - missing arguments:

availability_zones
cluster_identifier_prefix
db_subnet_group_name
port

aws_rds_cluster_instance - missing arguments:

identifier_prefix
apply_immediately
promotion_tier
preferred_backup_window
preferred_maintenance_window
auto_minor_version_upgrade
copy_tags_to_snapshot
ca_cert_identifier

Expected Behavior

Arguments included and configurable if necessary

Missing required db_cluster_instance_class variable when creating Multi A-Z RDS cluster


Describe the Bug

When setting up a provisioned Multi-AZ Postgres RDS cluster, we need to specify the db_cluster_instance_class attribute; otherwise the apply fails with the following error:

Error: error creating RDS cluster: InvalidParameterValue: DBClusterInstanceClass is required. status code: 400

Expected Behavior

When db_cluster_instance_class is specified, the RDS cluster should be created normally.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Set the variables for a Multi-AZ RDS cluster:
availability_zones = ["us-east-2a", "us-east-2b", "us-east-2c"]
engine = "postgres"
engine_mode = "provisioned"
engine_version = "13.4"
db_cluster_instance_class = "db.m5d.large"
allocated_storage = 100
storage_type = "io1"
iops = 1000
  2. Do a terraform apply
  3. See error

CreateDBInstance can't be used to create a DB instance in a Multi-AZ DB cluster. Use CreateDBCluster instead.

Describe the Bug

I am using a minimal config to provision the DB cluster. The cluster shows up fine in the console, but the Terraform run fails at the end with the error message:

│ Error: creating RDS Cluster (prod-mysql) Instance (prod-mysql-1): InvalidParameterValue: CreateDBInstance can't be used to create a DB instance in a Multi-AZ DB cluster. Use CreateDBCluster instead.
│   status code: 400, request id: 7ec7b266-62c3-46b0-89f3-8ad0782e73ef
│
│   with module.rds_mysql_idp.aws_rds_cluster_instance.default[0],
│   on .terraform/modules/rds_mysql/main.tf line 251, in resource "aws_rds_cluster_instance" "default":
│  251: resource "aws_rds_cluster_instance" "default" {

Expected Behavior

The apply should not fail, as the cluster is up and running.

Steps to Reproduce

source  = "cloudposse/rds-cluster/aws"
version = "1.9.0"

name                      = "name"
cluster_family            = "mysql8.0"
engine                    = "mysql"
engine_mode               = "provisioned"
engine_version            = "8.0"
cluster_size              = 1
namespace                 = var.namespace
stage                     = var.environment
admin_user                = var.db_admin_username
admin_password            = var.db_admin_password
db_name                   = "db_name"
db_port                   = 3306
db_cluster_instance_class = var.db_instance_type
vpc_id                    = var.vpc_id
security_groups           = []
subnets                   = var.subnets
zone_id                   = var.zone_id
storage_type              = "io1"
iops                      = 1000
allocated_storage         = 100

The Terraform configuration used is above. The full error:

│ Error: creating RDS Cluster (bloom-prod-idpmysql) Instance (bloom-prod-idpmysql-1): InvalidParameterValue: CreateDBInstance can't be used to create a DB instance in a Multi-AZ DB cluster. Use CreateDBCluster instead.
│   status code: 400, request id: 7ec7b266-62c3-46b0-89f3-8ad0782e73ef
│
│   with module.rds_mysql_idp.aws_rds_cluster_instance.default[0],
│   on .terraform/modules/rds_mysql_idp/main.tf line 251, in resource "aws_rds_cluster_instance" "default":
│  251: resource "aws_rds_cluster_instance" "default" {

Screenshots

No response

Environment

Module version: 1.9.0
Terraform v1.5.0 on darwin_amd64

  • provider registry.terraform.io/hashicorp/aws v4.67.0
  • provider registry.terraform.io/hashicorp/local v2.5.1
  • provider registry.terraform.io/hashicorp/null v3.2.2
  • provider registry.terraform.io/hashicorp/random v3.6.0
  • provider registry.terraform.io/hashicorp/tls v4.0.5

Additional Context

No response

Support serverless v2


Describe the Feature

Add serverless v2 support

Expected Behavior

Be able to create a Serverless v2 cluster with this module

Use Case

Create a Serverless v2 cluster with this module

Describe Ideal Solution

Add a new config section like serverlessv2_scaling_configuration

Alternatives Considered

Create the cluster with the AWS provider directly

Additional Context

The cluster instance class is "db.serverless"
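
For context, a hedged sketch of what Serverless v2 support looks like at the AWS provider level (a serverlessv2_scaling_configuration block on the cluster plus db.serverless instances); the module's eventual variable names may differ:

resource "aws_rds_cluster" "example" {
  cluster_identifier = "serverless-v2-example"
  engine             = "aurora-postgresql"
  engine_mode        = "provisioned"
  engine_version     = "14.6" # any engine version that supports Serverless v2
  master_username    = "admin_user"
  master_password    = "change-me"

  serverlessv2_scaling_configuration {
    min_capacity = 0.5
    max_capacity = 16
  }
}

resource "aws_rds_cluster_instance" "example" {
  cluster_identifier = aws_rds_cluster.example.id
  instance_class     = "db.serverless"
  engine             = aws_rds_cluster.example.engine
  engine_version     = aws_rds_cluster.example.engine_version
}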

Add Aurora Postgresql Serverless Example

This config worked for me:

module "aurora_postgres_serverless" {
  source                   = "git::https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=tags/0.15.0"
  namespace                = "${var.namespace}"
  stage                    = "${var.stage}"
  name                     = "${var.postgres_name}"
  engine                   = "aurora-postgresql"
  engine_mode              = "serverless"
  engine_version           = "10.7"
  cluster_family           = "aurora-postgresql10"
  cluster_size             = "0"
  admin_user               = "${local.postgres_admin_user}"
  admin_password           = "${local.postgres_admin_password}"
  db_name                  = "${local.postgres_db_name}"
  db_port                  = "5432"
  vpc_id                   = "${data.terraform_remote_state.backing_services.vpc_id}"
  subnets                  = ["${data.terraform_remote_state.backing_services.public_subnet_ids}"]
  zone_id                  = "${local.zone_id}"
  publicly_accessible      = "true"
  allowed_cidr_blocks      = ["0.0.0.0/0"]
  enabled                  = "${var.postgres_cluster_enabled}"

  scaling_configuration = [
    {
      auto_pause               = true
      max_capacity             = "384"
      min_capacity             = "8"
      seconds_until_auto_pause = 300
    }
  ]
}

Valid capacity units for Postgres are 8, 16, 32, 64, 192, and 384, per https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.create.html

Update to latest available parameters


Describe the Feature

I noticed that some parameters are not exposed to consumers. One is the major version upgrade flag. There may be others.

Error: Failed to modify RDS Cluster (sharedpostgres): InvalidParameterCombination: The AllowMajorVersionUpgrade flag must be present when upgrading to a new major version.
        status code: 400, request id: 3bfeabd4-6459-4cc3-a789-5e5e2663ac95

Expected Behavior

  • All configurable vars available to consumers

Use Case

  • Upgrading from version Postgres 10 to 11

Describe Ideal Solution

  • Upgrade worked and I could control it within my project (see the sketch below)
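
For illustration, a hedged sketch of how the flag could be passed through once exposed (assuming the module variable mirrors the provider's allow_major_version_upgrade argument, as later module versions do; version numbers are placeholders):

module "rds_cluster" {
  source = "cloudposse/rds-cluster/aws"
  # ...
  engine                      = "aurora-postgresql"
  engine_version              = "11.9"
  cluster_family              = "aurora-postgresql11"
  allow_major_version_upgrade = true
  apply_immediately           = true
}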

Alternatives Considered

  • Destroyed the whole cluster then recreated it

Additional Context

...

Add Example Usage

what

  • Add example invocation

why

  • We need this so we can soon enable automated continuous integration testing of the module

Implement rolling update for instances.

Describe the Feature

While updating the instance_type recently in preparation for a major version upgrade, both instances were upgraded in parallel, resulting in significant downtime. I found a simple fix for this, which I will submit as a pull request.

Expected Behavior

At least one new node is in service at all times.

Use Case

A zero- or minimal-downtime deploy.

Describe Ideal Solution

A rolling update.

Alternatives Considered

I considered a blue/green update, which I was even able to implement using create_before_destroy. I can provide this implementation if anyone is interested.
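
A rough sketch of that blue/green alternative, assuming it is applied to the module's instance resource (not the module's actual code):

resource "aws_rds_cluster_instance" "default" {
  # ...
  lifecycle {
    # bring up the replacement instance before destroying the old one
    create_before_destroy = true
  }
}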

Additional Context

No response

Allow point in time restoration using a specific datetime

Describe the Feature

The AWS Console and the vanilla aws_rds_cluster resource allow specifying a datetime, as opposed to using the latest restorable time.

Expected Behavior

Have the option to pass in restore_to_time as a UTC datetime string instead of passing use_latest_restorable_time (or passing it as false).

Use Case

Having this option is really valuable for running data recovery following an incident where the data at the latest restorable time may be corrupt.

Describe Ideal Solution

Have a new RDS Cluster created using restored data from a particular point in time (not necessarily the latest point in time).
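
The building block already exists at the provider level; a hedged sketch of what the module would ultimately need to render (identifiers and timestamps are placeholders):

resource "aws_rds_cluster" "restored" {
  # ...
  restore_to_point_in_time {
    source_cluster_identifier = "source-cluster-id"
    restore_type              = "copy-on-write"
    # restore_to_time conflicts with use_latest_restorable_time, so only one is set
    restore_to_time           = "2023-05-01T02:00:00Z"
  }
}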

Alternatives Considered

No response

Additional Context

No response

Cannot treat Security Group egress in the same way as we do with ingress

Describe the Bug

When configuring security group ingress, I can specify either a list of CIDR blocks or an additional security group.
With egress, instead, I can only either disable it or have it fully open (any port, any protocol, 0.0.0.0/0).

Expected Behavior

Being able to specify CIDR blocks and security groups for egress as well.
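
For comparison, a hedged sketch of the kind of restricted egress rule the module could create (the egress variable is assumed, not an existing module input):

resource "aws_security_group_rule" "egress" {
  type              = "egress"
  from_port         = var.db_port
  to_port           = var.db_port
  protocol          = "tcp"
  cidr_blocks       = var.egress_cidr_blocks # assumed variable
  security_group_id = aws_security_group.default.id
}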

Steps to Reproduce

N/A

Screenshots

No response

Environment

No response

Additional Context

No response

Creating Postgres Multi A-Z RDS cluster running into error InvalidParameterValue: CreateDBInstance


Describe the Bug

When trying to create a Multi-AZ Postgres cluster, it runs into the following error:

Error: error creating RDS Cluster (eg-test-rds-cluster) Instance: InvalidParameterValue: CreateDBInstance can't be used to create a DB instance in a Multi-AZ DB cluster. Use CreateDBCluster instead.
│ 	status code: 400, request id: xxx-xxxx-xxxxx-xxxxx
│
│   with module.rds_cluster.aws_rds_cluster_instance.default[0],
│   on ../../main.tf line 240, in resource "aws_rds_cluster_instance" "default":

The resource aws_rds_cluster_instance is specifically meant for Aurora engine types like aurora, aurora-mysql, and aurora-postgresql; see the AWS provider documentation.

Expected Behavior

When setting up non-Aurora engine types, creation of the aws_rds_cluster_instance resource should be skipped.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Create a non-Aurora Multi-AZ RDS cluster with the following vars:
availability_zones = ["us-east-2a", "us-east-2b", "us-east-2c"]
engine = "postgres"
engine_mode = "provisioned"
engine_version = "13.4"
db_cluster_instance_class = "db.m5d.large"
allocated_storage = 100
storage_type = "io1"
iops = 1000
  2. Do a terraform apply
  3. See error

Additional Context

Add any other context about the problem here.

Reiterate on BridgeCrew warnings

Describe the Bug

A bunch of unrelated warnings appeared in PR #126.
I think BridgeCrew has updated their database.

Expected Behavior

Keep master clean so that people don't get confused when they contribute

Support for missing storage variables


Describe the Feature

The aws_rds_cluster resource supports specifying storage options.
We should be able to specify storage_type, iops, and allocated_storage via this module.

Use Case

  • More flexibility for sizing RDS storage with a specified storage type and IOPS.

Required variables

There's nowhere I could find the minimum variables needed for creating a cluster.

I believe the required ones are (a minimal invocation sketch follows the list):

  • vpc_id
  • security_groups
  • zone_id
  • admin_password
  • subnets
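
A hedged sketch of a minimal invocation built from that list (values are placeholders; everything else falls back to module defaults):

module "rds_cluster" {
  source = "cloudposse/rds-cluster/aws"

  vpc_id          = var.vpc_id
  security_groups = [var.app_security_group_id]
  zone_id         = var.dns_zone_id
  admin_password  = var.admin_password
  subnets         = var.private_subnet_ids
}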

Cluster name and current stage

Our goal was to setup a single-instance development database (1 cluster member), and then a production cluster that scaled as usage grew.

However, when we created a single node, the route53 record didn't include the stage, so it will conflict with our production cluster (when created)

If I set name = "${var.stage}-${var.name}", then my cluster name is zw-dev-dev-application (which I can live with).

Should stage be in the route53 records?

Unable to set `performance_insights_enabled` to false


Describe the Bug

Unable to set the variable performance_insights_enabled to false. When it is set to false, Terraform throws the following error:

Error: creating RDS Cluster (dev-db) Instance (dev-db-1): InvalidParameterCombination: To enable Performance Insights, EnablePerformanceInsights must be set to 'true'

Expected Behavior

In our dev environment we may not want to enable Performance Insights in order to save money. I would have expected to be able to tell the module to set it to false. It would be great if we could make this a bit more dynamic.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Create an rds_cluster/aws module instance
  2. Set performance_insights_enabled to false
  3. Run terraform apply
  4. See error
Error: creating RDS Cluster (dev-db) Instance (dev-db-1): InvalidParameterCombination: To enable Performance Insights, EnablePerformanceInsights must be set to 'true'

Screenshots

If applicable, add screenshots or logs to help explain your problem.

Environment (please complete the following information):

Alpine Linux
Terraform 1.2.5

apply_method on cluster_parameters

Hello!

I have some cluster_parameters modifications defined. If I use the "immediate" apply_method, the database is created correctly the first time, but then AWS internally changes the parameter to pending-reboot, so every time I reapply my code Terraform detects the difference and applies it again.

Is there a correct way to avoid this?
Also, apply_method is not mandatory for Terraform (it defaults to immediate), but it is for your module. Why?

Thanks a lot
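
One hedged workaround for the perpetual diff is to declare static parameters with the apply_method that AWS reports back, i.e. pending-reboot (the parameter below is illustrative):

cluster_parameters = [
  {
    name         = "character_set_server"
    value        = "utf8"
    apply_method = "pending-reboot"
  }
]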

Add option for RDS/Aurora Managed Master Passwords via Secrets Manager

Describe the Feature

We want to use the RDS integration with Secrets Manager so that the master password is managed by RDS and rotated by Secrets Manager.
This option is available in Terraform via the manage_master_user_password argument:
Set it to true to let RDS manage the master user password in Secrets Manager. It cannot be set if master_password is provided.
Currently the Cloud Posse module does not allow enabling this feature.

Expected Behavior

The module allows enabling the managed master user password feature in RDS.

Use Case

Managed secrets are more secure and easier to use.

Describe Ideal Solution

  • Add a variable to enable the managed master user password option in RDS.
  • Add an output that contains the secret ARN (see the master_user_secret reference in the Terraform docs); a sketch follows below.
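
A hedged sketch of the underlying provider pieces (the module's eventual variable and output names may differ):

resource "aws_rds_cluster" "default" {
  # ...
  manage_master_user_password = true # cannot be combined with master_password
}

output "master_user_secret_arn" {
  value = one(aws_rds_cluster.default.master_user_secret[*].secret_arn)
}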

Alternatives Considered

No response

Additional Context

No response

Invalid parameter value while trying to use engine as aurora-mysql

Terraform version 0.12.24

I am trying to create an Aurora MySQL serverless RDS cluster with the configuration below, and I run into an InvalidParameterValue error when I use aurora-mysql. It works fine if I use aurora as the engine. terraform plan does not give me any error.

provider "aws" {
  region = "us-east-1"
}

resource "aws_rds_cluster" "serverless" {
  cluster_identifier   = "serverless-dev"
  engine               = "aurora-mysql"
  engine_mode          = "serverless"
  master_username      = "dba_admin"
  master_password      = "changemepass"
  skip_final_snapshot  = true
  db_subnet_group_name = "serverless-vpc"
}

Error: error creating RDS cluster: InvalidParameterValue: The engine mode serverless you requested is currently unavailable.
status code: 400, request id: 2294c942-fec5-4f45-a9e0-7520e33b73b8

Support deterministic versioning of RDS

Describe the Feature

auto_minor_version_upgrade defaults to true and tells AWS to update minor versions during the set maintenance window.

Expected Behavior

The variable should be available to set, but it is not.

Use Case

Desire more control over whether updates are applied automatically or not.
Perhaps true in staging but false in production.
It is not always possible to rely on ZDP (zero-downtime patching), so some updates will cause downtime or at least an app interruption (app reconnects).

Describe Ideal Solution

Expose variable in module.

Alternatives Considered

Forking module.

Additional Context

During terraform plan we can see the value defaults to true:

  # module.eeva_aurora_mysql.aws_rds_cluster_instance.default[1] must be replaced                                             
-/+ resource "aws_rds_cluster_instance" "default" {                                                                           
      + apply_immediately               = (known after apply)                                                                 
      ~ arn                             = "arn:aws:rds:<snip>" -> (known after apply)
        auto_minor_version_upgrade      = true       
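
A hedged sketch of the requested passthrough (the variable name is assumed to mirror the provider argument):

variable "auto_minor_version_upgrade" {
  type    = bool
  default = true
}

resource "aws_rds_cluster_instance" "default" {
  # ...
  auto_minor_version_upgrade = var.auto_minor_version_upgrade
}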

Dropping variable availability_zones ?

Hi,

availability_zones is an EC2-Classic-era parameter; I believe the module and the examples will get better if EC2-Classic support is dropped. The current examples mix EC2-Classic params with VPC params.

availability_zones - (Optional) A list of EC2 Availability Zones that instances in the DB cluster can be created in

Cannot restore cluster from snapshot without removing auto-scaling profile

We are using this module to provision an auto-scaling read replica, and it is working well. However, when we try to rebuild the cluster from a snapshot, the apply process fails with the following error.

Error: error deleting Database Instance "db-instance-1": AccessDenied: User: arn:aws:sts::xxxxxxxxxxxx:assumed-role/jenkins is not authorized to perform: rds:DeleteDBInstance on resource: arn:aws:rds:us-east-2:xxxxxxxxxxx:db:db-instance-1
status code: 403, request id: a43bf094-e294-4ecd-ad51-6d7ad78689b8

To make this work, we need to remove the read replicas and the auto-scaling profile of the Aurora cluster before restoring RDS from the snapshot.

Cross-region replication not working


Describe the Bug

I am trying to configure an Aurora Global Cluster spanning 2 regions. I create the aws_rds_global_cluster Terraform resource externally, and then use your module to deploy the 2 sub-clusters in the 2 regions. The main cluster works fine; the secondary raises errors:

creating RDS Cluster (): InvalidParameterCombination: Cannot specify database name for cross region replication cluster
creating RDS Cluster (): InvalidParameterCombination: Cannot specify user name for cross region replication cluster

I am using global_cluster_identifier to enable the cross-region replication feature. For the secondary cluster I am also specifying source_region to link it to the main one.

Using a local fork of your module, I made it work by commenting out 3 lines in the source:

# https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/rds_cluster#replication_source_identifier
resource "aws_rds_cluster" "secondary" {
  count              = local.enabled && !local.is_regional_cluster ? 1 : 0
  cluster_identifier = var.cluster_identifier == "" ? module.this.id : var.cluster_identifier
  # database_name                       = var.db_name
  # master_username                     = local.ignore_admin_credentials ? null : var.admin_user
  # master_password                     = local.ignore_admin_credentials ? null : var.admin_password

Expected Behavior

I was expecting a second cluster to be deployed in the second region, connected to the global cluster.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Create a global cluster using aws_rds_global_cluster Terraform resource
  2. Create an Aurora MySQL cluster using your module, specifying global_cluster_identifier
  3. Create another Aurora MySQL cluster using your module, specifying global_cluster_identifier and source_region
  4. See the error

Screenshots

If applicable, add screenshots or logs to help explain your problem.

Environment (please complete the following information):

Anything that will help us triage the bug will help. Here are some ideas:

  • OS: [e.g. Linux, OSX, WSL, etc]
  • Version [e.g. 10.15]

Additional Context

Add any other context about the problem here.

Cluster is recreated with every apply

This is linked to hashicorp/terraform#16724 and might be fixed by #35

Currently terraform plan shows that the RDS cluster will be recreated with every apply; the cause seems to be a wonky availability_zones attribute.
Somehow this triggers a new resource. See the plan output below.

-/+ module.rds_cluster_aurora_mysql.aws_rds_cluster.default (new resource required)
      id:                                  "namespace-stage-project" => <computed> (forces new resource)
      apply_immediately:                   "true" => "true"
      arn:                                 "arn:aws:rds:eu-central-1:123456789:cluster:namespace-stage-project" => <computed>
      availability_zones.#:                "3" => "2" (forces new resource)
      availability_zones.1126047633:       "eu-central-1a" => "eu-central-1a"
      availability_zones.2903539389:       "eu-central-1c" => "" (forces new resource)
      availability_zones.3658960427:       "eu-central-1b" => "eu-central-1b"
      backup_retention_period:             "5" => "5"
      cluster_identifier:                  "namespace-stage-project" => "namespace-stage-project"
      cluster_identifier_prefix:           "" => <computed>
      cluster_members.#:                   "2" => <computed>
      cluster_resource_id:                 "cluster-AAAXXXX" => <computed>
      database_name:                       "project" => "project"
      db_cluster_parameter_group_name:     "namespace-stage-project" => "namespace-stage-project"
      db_subnet_group_name:                "namespace-stage-project" => "namespace-stage-project"
      endpoint:                            "namespace-stage-project.cluster-sensitive.eu-central-1.rds.amazonaws.com" => <computed>
      engine:                              "aurora-mysql" => "aurora-mysql"
      engine_mode:                         "provisioned" => "provisioned"
      engine_version:                      "5.7.12" => <computed>
      final_snapshot_identifier:           "namespace-stage-project" => "namespace-stage-project"
      hosted_zone_id:                      "Z1RLSENSITIVE" => <computed>
      iam_database_authentication_enabled: "false" => "false"
      kms_key_id:                          "arn:aws:kms:eu-central-1:123456789:key/xxx" => <computed>
      master_password:                     <sensitive> => <sensitive> (attribute changed)
      master_username:                     "project" => "project"
      port:                                "3306" => <computed>
      preferred_backup_window:             "07:00-09:00" => "07:00-09:00"
      preferred_maintenance_window:        "wed:03:00-wed:04:00" => "wed:03:00-wed:04:00"
      reader_endpoint:                     "namespace-stage-project.cluster-ro-sensitive.eu-central-1.rds.amazonaws.com" => <computed>
      skip_final_snapshot:                 "false" => "false"
      storage_encrypted:                   "true" => "true"
      tags.%:                              "3" => "3"
      tags.Name:                           "namespace-stage-project" => "namespace-stage-project"
      tags.Namespace:                      "namespace" => "namespace"
      tags.Stage:                          "stage" => "stage"
      vpc_security_group_ids.#:            "1" => "1"
      vpc_security_group_ids.636648702:    "sg-080d3cfa4609edea8" => "sg-080d3cfa4609edea8"

-/+ module.rds_cluster_aurora_mysql.aws_rds_cluster_instance.default[0] (new resource required)
      id:                                  "namespace-stage-project-1" => <computed> (forces new resource)
      apply_immediately:                   "" => <computed>
      arn:                                 "arn:aws:rds:eu-central-1:123456789:db:namespace-stage-project-1" => <computed>
      auto_minor_version_upgrade:          "true" => "true"
      availability_zone:                   "eu-central-1a" => <computed>
      cluster_identifier:                  "namespace-stage-project" => "${aws_rds_cluster.default.id}" (forces new resource)
      db_parameter_group_name:             "namespace-stage-project" => "namespace-stage-project"
      db_subnet_group_name:                "namespace-stage-project" => "namespace-stage-project"
      dbi_resource_id:                     "db-SENSITIVE0" => <computed>
      endpoint:                            "namespace-stage-project-1.sensitive.eu-central-1.rds.amazonaws.com" => <computed>
      engine:                              "aurora-mysql" => "aurora-mysql"
      engine_version:                      "5.7.12" => <computed>
      identifier:                          "namespace-stage-project-1" => "namespace-stage-project-1"
      identifier_prefix:                   "" => <computed>
      instance_class:                      "db.t2.small" => "db.t2.small"
      kms_key_id:                          "arn:aws:kms:eu-central-1:123456789:key/xxx" => <computed>
      monitoring_interval:                 "0" => "0"
      monitoring_role_arn:                 "" => <computed>
      performance_insights_enabled:        "false" => <computed>
      performance_insights_kms_key_id:     "" => <computed>
      port:                                "3306" => <computed>
      preferred_backup_window:             "07:00-09:00" => <computed>
      preferred_maintenance_window:        "mon:04:25-mon:04:55" => <computed>
      promotion_tier:                      "0" => "0"
      publicly_accessible:                 "false" => "false"
      storage_encrypted:                   "true" => <computed>
      tags.%:                              "3" => "3"
      tags.Name:                           "namespace-stage-project" => "namespace-stage-project"
      tags.Namespace:                      "namespace" => "namespace"
      tags.Stage:                          "stage" => "stage"
      writer:                              "false" => <computed>

-/+ module.rds_cluster_aurora_mysql.aws_rds_cluster_instance.default[1] (new resource required)
      id:                                  "namespace-stage-project-2" => <computed> (forces new resource)
      apply_immediately:                   "" => <computed>
      arn:                                 "arn:aws:rds:eu-central-1:123456789:db:namespace-stage-project-2" => <computed>
      auto_minor_version_upgrade:          "true" => "true"
      availability_zone:                   "eu-central-1b" => <computed>
      cluster_identifier:                  "namespace-stage-project" => "${aws_rds_cluster.default.id}" (forces new resource)
      db_parameter_group_name:             "namespace-stage-project" => "namespace-stage-project"
      db_subnet_group_name:                "namespace-stage-project" => "namespace-stage-project"
      dbi_resource_id:                     "db-SENSITIVE1" => <computed>
      endpoint:                            "namespace-stage-project-2.sensitive.eu-central-1.rds.amazonaws.com" => <computed>
      engine:                              "aurora-mysql" => "aurora-mysql"
      engine_version:                      "5.7.12" => <computed>
      identifier:                          "namespace-stage-project-2" => "namespace-stage-project-2"
      identifier_prefix:                   "" => <computed>
      instance_class:                      "db.t2.small" => "db.t2.small"
      kms_key_id:                          "arn:aws:kms:eu-central-1:123456789:key/xxx" => <computed>
      monitoring_interval:                 "0" => "0"
      monitoring_role_arn:                 "" => <computed>
      performance_insights_enabled:        "false" => <computed>
      performance_insights_kms_key_id:     "" => <computed>
      port:                                "3306" => <computed>
      preferred_backup_window:             "07:00-09:00" => <computed>
      preferred_maintenance_window:        "sun:03:45-sun:04:15" => <computed>
      promotion_tier:                      "0" => "0"
      publicly_accessible:                 "false" => "false"
      storage_encrypted:                   "true" => <computed>
      tags.%:                              "3" => "3"
      tags.Name:                           "namespace-stage-project" => "namespace-stage-project"
      tags.Namespace:                      "namespace" => "namespace"
      tags.Stage:                          "stage" => "stage"
      writer:                              "true" => <computed>

Config:

module "rds_cluster_aurora_mysql" {
  source             = "git::https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=master"
  engine             = "aurora-mysql"
  cluster_family     = "aurora-mysql5.7"
  cluster_size       = "${var.rds_cluster_size}"
  namespace          = "dc"
  stage              = "${element(split("-", var.name), 1)}"
  name               = "${element(split("-", var.name), 0)}"
  admin_user         = "${element(split("-", var.name), 0)}"
  admin_password     = "${random_string.password.result}"
  db_name            = "${element(split("-", var.name), 0)}"
  instance_type      = "${var.rds_instance_type}"
  vpc_id             = "${aws_vpc.this.id}"
  availability_zones = ["${var.azs}"]
  security_groups    = ["${module.security_group_webapp.this_security_group_id}", "${module.security_group_bastion.this_security_group_id}"]
  subnets            = ["${aws_subnet.private.*.id}"]
  # zone_id            = "${aws_route53_zone.internal.zone_id}"
  storage_encrypted  = true
  maintenance_window = "wed:03:00-wed:04:00"
  skip_final_snapshot = false
}

I assume that it would work fine with the availability_zones variable dropped.
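
A hedged sketch of the other common workaround, ignoring drift on that attribute inside the module (not necessarily the fix that was adopted upstream):

resource "aws_rds_cluster" "default" {
  # ...
  lifecycle {
    # AWS may report more AZs than were requested; ignore the resulting diff
    ignore_changes = [availability_zones]
  }
}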

Upgrading DB version fails with InvalidParameterCombination for instance parameter group


Describe the Bug

Attempting to do a major version upgrade of an Aurora Postgres cluster from 11.13 to 12.9, on the latest module version (0.50.2) with AWS provider 3.63.0. Below is my module config:

module "postgres" {
  source                      = "cloudposse/rds-cluster/aws"
  version                     = "0.50.2"
  name                        = "api-db"
  engine                      = "aurora-postgresql"
  cluster_family              = "aurora-postgresql12"
  engine_version              = "12.9"
  allow_major_version_upgrade = true
  apply_immediately           = true
  cluster_size                = 1
  admin_user                  = data.aws_ssm_parameter.db_admin_user.value
  admin_password              = data.aws_ssm_parameter.db_admin_password.value
  db_name                     = "api"
  db_port                     = 5432
  instance_type               = "db.t3.medium"
  vpc_id                      = var.vpc_id
  security_groups             = concat([aws_security_group.api.id], var.rds_security_group_inbound)
  subnets                     = var.rds_subnets
  storage_encrypted           = true
}

When running apply I get the error:

 Failed to modify RDS Cluster (api-db): InvalidParameterCombination: The current DB instance parameter group api-db-xxxxxxx is custom. You must explicitly specify a new DB instance parameter group, either default or custom, for the engine version upgrade.

Environment (please complete the following information):

Anything that will help us triage the bug will help. Here are some ideas:

  • OS: OSX
  • Module version: 0.50.2
  • AWS provider: 3.63.0

Aurora Serverless "BackupWindow" parameter not working

Describe the Bug

It seems like AWS does not support (or maybe this is a bug) preferred_maintenance_window and preferred_backup_window for Aurora Serverless.
Existing AWS issue: aws-cloudformation/cloudformation-coverage-roadmap#396
This was also reported internally to the AWS team.

Expected Behavior

Everything should work as expected

Steps to Reproduce

Steps to reproduce the behavior:

  1. Create a serverless database from a snapshot.
  2. Right after it finishes, Terraform throws:

Error: error modifying RDS Cluster (tc-staging-shared-main-rds): InvalidParameterCombination: You currently can't modify BackupWindow with Aurora Serverless. status code: 400, request id: aa5042ba-f0be-49ea-a695-e68da91a01f8

  3. If you run terraform again, it will say the cluster is tainted and must be replaced.

Moreover, a lot of values are just wrong. The created resource has totally different values than those specified in the Terraform code:

  • backup_retention_period (1 day instead of the configured 7)
  • preferred_backup_window
  • preferred_maintenance_window
  • master_username (the module passes a default, but the instance was created from a snapshot)


Environment (please complete the following information):

  • OS: Windows 10

Additional Context

I think you could have a dedicated resource for serverless clusters with the above fields omitted

Count not evaluating properly for mysql aurora serverless

local.cluster_instance_count fails to evaluate to 0 when specifying:

cluster_size          = 0
autoscaling_enabled   = false

and the module still attempts to create the resource:

resource "aws_rds_cluster_instance" "default" {

Additionally, specifying false for var.enabled is ineffective.

So the enabled variable doesn't appear to be working? How can the cluster instance be disabled when using Aurora Serverless?

Add Example Usage

what

  • Add example invocation

why

  • We need this so we can soon enable automated continuous integration testing of the module

Consider making zone_id an optional parameter

Hi there,

I noticed that the zone_id parameter is mandatory. Although it is nice to have friendly DNS names for RDS endpoints, this may cause issues when SSL connections are used. Perhaps it's worth considering making the zone_id parameter optional.

// Siert
