cloudposse / terraform-aws-rds-cluster
Terraform module to provision an RDS Aurora cluster for MySQL or Postgres
Home Page: https://cloudposse.com/accelerate
License: Apache License 2.0
This PR #80 changed the Security Group rules from inline to resource-based.
This is a good move since using inline SG rules is a "bad practice". Inline rules have many issues (one of them is that you can't add new rules to the security group since it's not possible to mix the inline rules and rules as separate resources).
At the same time, this introduced a breaking change: if you want to update the module to the latest version, Terraform will try to add the new resource-based rules to the security group and will fail since the same rules already exist and we can't mix inline rules with resource-based rules.
Note that it's not possible to taint and destroy the security group, since it has a dependent object (an Elastic Network Interface), which in turn has its own dependencies.
One possible solution would be to destroy the Aurora RDS cluster completely and recreate it. While possible in some cases (e.g. in dev environments), it may not be feasible in others (e.g. a production database has data, and a long outage is not acceptable).
A better way would be to just destroy the inline security group rules without destroying the security group itself (and any other Aurora resources), and then add the resource-based security group rules.
Here are the steps to do that:
1. Create a new branch of the terraform-aws-rds-cluster module, e.g. strip-inline-sg-rules
2. In the new branch, comment out all the aws_security_group_rule resources for resource "aws_security_group" "default"
3. Add empty ingress and egress lists to the security group. NOTE: you can't skip ingress and egress completely, since terraform will then not detect any changes to the inline rules (this is a bug/feature of TF):
resource "aws_security_group" "default" {
name = ...
vpc_id = var.vpc_id
ingress = []
egress = []
}
NOTE: Branch strip-inline-sg-rules has already been created in this repository, and steps 1-3 have already been performed. The branch strip-inline-sg-rules can be used to perform the next steps.
4. Update the Aurora cluster project to use the strip-inline-sg-rules branch of the terraform-aws-rds-cluster module:
module "aurora_postgres_cluster" {
  source = "git::https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=strip-inline-sg-rules"
  # ...
}
5. Apply the project. Terraform will just remove the inline rules from the security group without destroying the SG itself or any of the Aurora resources.
6. Update the Aurora cluster project to use the latest release of the terraform-aws-rds-cluster module:
module "aurora_postgres_cluster" {
  source = "git::https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=tags/0.34.0"
  # ...
}
It takes a few minutes to go through all the steps, so the disruption to the production database will be minimal.
The serverlessv2_scaling_configuration cannot be deleted.

Expected behavior: no change should be detected.

From AWS's documentation, it seems there is no way to delete these settings. But the Terraform plan makes it look like it's going to delete them. It would be great to not report this type of change (setting the value to null) until it can actually be performed, so the change doesn't reappear again and again.

Steps to reproduce: create a Serverless v2 cluster with Minimum ACUs and Maximum ACUs set, then run terraform plan. It always detects changes like:

~ serverlessv2_scaling_configuration {
    - max_capacity = 128 -> null
    - min_capacity = 2 -> null
  }

But it won't actually change these settings or delete the whole Serverless v2 capacity settings section from the cluster when we terraform apply. When we rerun terraform plan, the above change will show up again.
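A possible mitigation sketch, assuming direct access to the underlying aws_rds_cluster resource (from inside a module you cannot inject a lifecycle block, so this only applies when the resource is managed directly); note it also suppresses intentional capacity changes:

resource "aws_rds_cluster" "default" {
  # ... other configuration ...
  lifecycle {
    # Suppress the perpetual "-> null" diff on the scaling block
    ignore_changes = [serverlessv2_scaling_configuration]
  }
}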
resource "aws_rds_cluster" "example" {
# ... other configuration ...
master_password = "${data.aws_kms_secrets.example.plaintext["master_password"]}"
master_username = "${data.aws_kms_secrets.example.plaintext["master_username"]}"
}
https://www.terraform.io/docs/providers/aws/d/kms_secrets.html
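For completeness, a sketch of the matching data source from that page (the ciphertext payloads are placeholders):

data "aws_kms_secrets" "example" {
  secret {
    name    = "master_username"
    payload = "AQECAHg..." # base64-encoded KMS ciphertext (placeholder)
  }
  secret {
    name    = "master_password"
    payload = "AQECAHg..." # base64-encoded KMS ciphertext (placeholder)
  }
}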
Since the DB cluster parameter group quota is not adjustable, it's not feasible to always create a new cluster parameter group in a large system.

Proposed: add a new db cluster parameter group name variable, and use an existing db cluster parameter group if one is specified.

Use case: we have a large development team that creates a lot of RDS serverless clusters for development and testing. Since the DB cluster parameter group quota is not adjustable, we can't create more. Because nearly all of these RDS clusters are for testing only, a shared default cluster parameter group is acceptable in our environment.

Workaround: change to a DB instance; however, it's not cost-efficient. A serverless cluster is very good for us for R&D testing.
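A hypothetical sketch of what such an input could look like (the variable name and wiring are assumptions, not the module's actual API):

variable "db_cluster_parameter_group_name" {
  type        = string
  description = "Existing DB cluster parameter group to use instead of creating one"
  default     = null
}

resource "aws_rds_cluster" "default" {
  # ...
  db_cluster_parameter_group_name = var.db_cluster_parameter_group_name != null ? var.db_cluster_parameter_group_name : aws_rds_cluster_parameter_group.default.name
}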
When instance_type is "db.serverless" (for Serverless v2), the engine_mode does not accept the value "serverless", but this value is required to enable the Data API via enable_http_endpoint = true. As a result, the co-condition only applies to Serverless v1.

Expected: that

instance_type        = "db.serverless"
enable_http_endpoint = true

would enable the Data API for Serverless v2.
OSX, M1
If the cluster is created and destroyed, then created again and a destroy attempted again, the last destroy will fail because there is already a snapshot with the same name from the previous destroy.
│ Error: error deleting RDS Cluster (aurora-example-shared): DBClusterSnapshotAlreadyExistsFault: Cannot create the cluster snapshot because one with the identifier aurora-example-shared already exists.
Add a random id to the final snapshot identifier when the cluster is created, to avoid conflicts.
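A minimal sketch of the suggestion, using a random_id resource (resource names are placeholders):

resource "random_id" "snapshot_suffix" {
  byte_length = 4
}

resource "aws_rds_cluster" "default" {
  # ... other configuration ...
  final_snapshot_identifier = "aurora-example-shared-${random_id.snapshot_suffix.hex}"
}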
I have a question regarding parameter groups. I have tried a couple of things, but I have not been able to construct a list of parameters for the aws_rds_cluster_parameter_group resource in the module. For example, I would like to set:
character_set_client=utf8
character_set_connection=utf8
Do you have an example definition for cluster_parameters?
Cheers
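A sketch of what such a definition might look like, assuming the module's cluster_parameters input takes name/value/apply_method objects (see the with_cluster_parameters example in this repo):

cluster_parameters = [
  {
    name         = "character_set_client"
    value        = "utf8"
    apply_method = "immediate"
  },
  {
    name         = "character_set_connection"
    value        = "utf8"
    apply_method = "immediate"
  }
]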
This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.
These updates have been manually edited so Renovate will no longer make changes. To discard all commits and start over, click on a checkbox.
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
enhanced-monitoring.tf
  cloudposse/label/null 0.25.0
examples/basic/main.tf
examples/complete/main.tf
  cloudposse/dynamic-subnets/aws 2.4.2
  cloudposse/vpc/aws 2.1.0
examples/complete/versions.tf
  aws >= 4.17.0
  null >= 2.0
  hashicorp/terraform >= 1.1.0
examples/enhanced_monitoring/main.tf
examples/postgres/main.tf
  cloudposse/dynamic-subnets/aws 2.4.2
  cloudposse/vpc/aws 2.1.0
examples/postgres/versions.tf
  aws >= 4.17.0
  null >= 2.0
  hashicorp/terraform >= 1.1.0
examples/serverless_mysql/main.tf
examples/serverless_mysql57/main.tf
examples/serverlessv2_postgres/main.tf
  cloudposse/dynamic-subnets/aws 2.4.2
  cloudposse/vpc/aws 2.1.0
examples/serverlessv2_postgres/versions.tf
  aws >= 4.12
  null >= 2.0
  hashicorp/terraform >= 1.1.0
examples/with_cluster_parameters/main.tf
main.tf
  cloudposse/route53-cluster-hostname/aws 0.12.2
  cloudposse/route53-cluster-hostname/aws 0.12.2
versions.tf
  aws >= 4.23.0
  null >= 2.0
  hashicorp/terraform >= 1.0.0
db_port = 5454
I have defined the db_port value as 5454 in the .hcl file, but after applying, the RDS instances (reader and writer) are created with port 5432.
I am using the below RDS configuration:
engine = "aurora-postgresql"
engine_version = "10.14"
cluster_family = "aurora-postgresql10"
It is possible to pass the network_type parameter - https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/rds_cluster

Proposed: support the network_type parameter (IPV4 or DUAL).

Use case: access clusters from IPv6-only networks.
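A sketch with the raw resource (the module would need to expose a corresponding input):

resource "aws_rds_cluster" "default" {
  # ...
  network_type = "DUAL" # or "IPV4"
}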
For Aurora, autoscaling only applies to read replicas. However, the terraform code here does not support creation of an autoscaling group with one read replica.
This seems related to #61, but is slightly different, I believe
The instance_count and cluster_instance_count calculations are not correct when autoscaling_enabled = true.
Steps to reproduce:
result: two new resources are created
The aws_rds_cluster_instance is deleted because the value for local.cluster_instance_count has changed from 2 (the default if autoscaling_enabled=false) to 1 (based on different logic when autoscaling_enabled=true).
I confirmed this by setting autoscaling_min_capacity to 2. With this value, the resource aws_rds_cluster_instance is unmodified. However, in this case, the number of read replicas created is 2.
Potential fix:
min_instance_count = var.autoscaling_enabled ? var.autoscaling_min_capacity + 1 : var.cluster_size
Looking at https://www.terraform.io/docs/providers/aws/r/rds_cluster.html and https://www.terraform.io/docs/providers/aws/r/rds_cluster_instance.html the following arguments are missing:
aws_rds_cluster - missing arguments:
availability_zones
cluster_identifier_prefix
db_subnet_group_name
port
rds_cluster_instance - missing arguments:
identifier_prefix
apply_immediately
promotion_tier
preferred_backup_window
preferred_maintenance_window
auto_minor_version_upgrade
copy_tags_to_snapshot
ca_cert_identifier
Arguments included and configurable if necessary
When setting up a provisioned multi-AZ postgres RDS cluster, we need to specify the db_cluster_instance_class attribute, otherwise it leads to the following error during the apply:
Error: error creating RDS cluster: InvalidParameterValue: DBClusterInstanceClass is required. status code: 400
When the missing db_cluster_instance_class is specified, the RDS cluster should be created normally.
Steps to reproduce the behavior:
availability_zones = ["us-east-2a", "us-east-2b", "us-east-2c"]
engine = "postgres"
engine_mode = "provisioned"
engine_version = "13.4"
db_cluster_instance_class = "db.m5d.large"
allocated_storage = 100
storage_type = "io1"
iops = 1000
Use aws_security_group_rule instead of inline rules
I am using a minimal config to provision the DB cluster. The cluster works properly in the console, but the terraform script fails at the end with the error message:

│ Error: creating RDS Cluster (prod-mysql) Instance (prod-mysql-1): InvalidParameterValue: CreateDBInstance can't be used to create a DB instance in a Multi-AZ DB cluster. Use CreateDBCluster instead.
│   status code: 400, request id: 7ec7b266-62c3-46b0-89f3-8ad0782e73ef
│
│   with module.rds_mysql_idp.aws_rds_cluster_instance.default[0],
│   on .terraform/modules/rds_mysql/main.tf line 251, in resource "aws_rds_cluster_instance" "default":
│  251: resource "aws_rds_cluster_instance" "default" {

The script should not fail, as the cluster is up and running.
The tf script used:

source  = "cloudposse/rds-cluster/aws"
version = "1.9.0"

name           = "name"
cluster_family = "mysql8.0"
engine         = "mysql"
engine_mode    = "provisioned"
engine_version = "8.0"
cluster_size   = 1

namespace = var.namespace
stage     = var.environment

admin_user     = var.db_admin_username
admin_password = var.db_admin_password
db_name        = "db_name"
db_port        = 3306

db_cluster_instance_class = var.db_instance_type

vpc_id          = var.vpc_id
security_groups = []
subnets         = var.subnets
zone_id         = var.zone_id

storage_type      = "io1"
iops              = 1000
allocated_storage = 100
│ Error: creating RDS Cluster (bloom-prod-idpmysql) Instance (bloom-prod-idpmysql-1): InvalidParameterValue: CreateDBInstance can't be used to create a DB instance in a Multi-AZ DB cluster. Use CreateDBCluster instead.
│   status code: 400, request id: 7ec7b266-62c3-46b0-89f3-8ad0782e73ef
│
│   with module.rds_mysql_idp.aws_rds_cluster_instance.default[0],
│   on .terraform/modules/rds_mysql_idp/main.tf line 251, in resource "aws_rds_cluster_instance" "default":
│  251: resource "aws_rds_cluster_instance" "default" {
module version : 1.9.0
Terraform v1.5.0
on darwin_amd64
Add serverless v2 support.

Expected: can create a Serverless v2 cluster with this module, e.g. via a new config section like serverlessv2_scaling_configuration.

Current workaround: create the cluster with the AWS provider directly. The cluster instance class is "db.serverless".
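A sketch of that workaround with the raw AWS provider resources (identifiers, credentials, and capacities are placeholders):

resource "aws_rds_cluster" "serverless_v2" {
  cluster_identifier = "example"
  engine             = "aurora-postgresql"
  engine_mode        = "provisioned" # Serverless v2 clusters use the provisioned engine mode
  master_username    = "exampleuser"
  master_password    = var.admin_password # placeholder

  serverlessv2_scaling_configuration {
    min_capacity = 0.5
    max_capacity = 8
  }
}

resource "aws_rds_cluster_instance" "serverless_v2" {
  cluster_identifier = aws_rds_cluster.serverless_v2.id
  instance_class     = "db.serverless"
  engine             = aws_rds_cluster.serverless_v2.engine
  engine_version     = aws_rds_cluster.serverless_v2.engine_version
}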
Error downloading modules: Error loading modules: module rds_cluster_aurora_mysql: Error parsing .terraform/modules/9ab520df0bb135021263cffe5f895638/main.tf: At 3:16: Unknown token: 3:16 IDENT var.namespace
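This parse error is characteristic of running a module written for Terraform 0.12+ (HCL2, bare expressions) under Terraform 0.11 or older, which only accepts interpolation syntax; a sketch of the two forms for comparison:

namespace = "${var.namespace}" # Terraform <= 0.11
namespace = var.namespace      # Terraform >= 0.12 (HCL2)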
Can this module enable enhanced monitoring on a new aurora cluster?
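The repo ships an enhanced-monitoring.tf and an examples/enhanced_monitoring example (visible in the dependency list above), which suggests it can. A hypothetical sketch, with the input name assumed rather than confirmed (check the example for the real inputs):

module "rds_cluster" {
  source = "cloudposse/rds-cluster/aws"
  # ...
  rds_monitoring_interval = 60 # hypothetical input name
}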
This config worked for me:
module "aurora_postgres_serverless" {
source = "git::https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=tags/0.15.0"
namespace = "${var.namespace}"
stage = "${var.stage}"
name = "${var.postgres_name}"
engine = "aurora-postgresql"
engine_mode = "serverless"
engine_version = "10.7"
cluster_family = "aurora-postgresql10"
cluster_size = "0"
admin_user = "${local.postgres_admin_user}"
admin_password = "${local.postgres_admin_password}"
db_name = "${local.postgres_db_name}"
db_port = "5432"
vpc_id = "${data.terraform_remote_state.backing_services.vpc_id}"
subnets = ["${data.terraform_remote_state.backing_services.public_subnet_ids}"]
zone_id = "${local.zone_id}"
publicly_accessible = "true"
allowed_cidr_blocks = ["0.0.0.0/0"]
enabled = "${var.postgres_cluster_enabled}"
scaling_configuration = [
{
auto_pause = true
max_capacity = "384"
min_capacity = "8"
seconds_until_auto_pause = 300
}
]
}
Valid capacity units for Postgres are 8, 16, 32, 64, 192, and 384, per https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.create.html
I noticed there are missing parameters that should be available to consumers. One is the major version upgrade param. There may be others.
Error: Failed to modify RDS Cluster (sharedpostgres): InvalidParameterCombination: The AllowMajorVersionUpgrade flag must be present when upgrading to a new major version.
status code: 400, request id: 3bfeabd4-6459-4cc3-a789-5e5e2663ac95
...
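A sketch of the input this calls for, matching the allow_major_version_upgrade variable the module later gained (see the upgrade config further down this page):

module "postgres" {
  source = "cloudposse/rds-cluster/aws"
  # ...
  allow_major_version_upgrade = true
}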
While updating the instance_type recently in preparation for a major version upgrade, both instances upgraded in parallel, resulting in significant downtime. I found a simple fix for this, which I will submit as a pull request.
At least one new node is in service at all times.
A zero or minimal downtime deploy.
A rolling update.
I considered a blue/green update, which I was even able to implement using create_before_destroy; a sketch follows. I can provide this implementation if anyone is interested.
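A sketch of that idea, assuming direct access to the instance resource:

resource "aws_rds_cluster_instance" "default" {
  # ...
  lifecycle {
    create_before_destroy = true # stand up the replacement node before retiring the old one
  }
}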
The AWS Console and AWS's vanilla aws_rds_cluster resource allow specifying a date-time, as opposed to using the latest restorable time.

Proposed: have the option to pass in restore_to_time as a UTC datetime string instead of passing use_latest_restorable_time (or passing it as false).
Having this option is really valuable for running Data Recovery following an incident where the latest restorable time's data may be corrupt.
Have a new RDS Cluster created using restored data from a particular point in time (not necessarily the latest point in time).
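A sketch with the raw resource, which already supports this (identifiers and timestamp are placeholders):

resource "aws_rds_cluster" "restored" {
  # ...
  restore_to_point_in_time {
    source_cluster_identifier = "source-cluster" # placeholder
    restore_type              = "copy-on-write"
    restore_to_time           = "2023-01-01T00:00:00Z" # instead of use_latest_restorable_time = true
  }
}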
When configuring security group ingress, I can specify either a list of CIDR blocks or an additional security group. With egress, instead, I can only either disable it or have it fully open (any port, any protocol, 0.0.0.0/0).

Proposed: being able to specify CIDRs and security groups for egress as well.
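A sketch of what configurable egress might look like with resource-based rules (port and CIDR are placeholders):

resource "aws_security_group_rule" "egress_cidr" {
  type              = "egress"
  from_port         = 5432
  to_port           = 5432
  protocol          = "tcp"
  cidr_blocks       = ["10.0.0.0/16"]
  security_group_id = aws_security_group.default.id
}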
When trying to create a Multi-AZ postgres cluster, it runs into the following error:
Error: error creating RDS Cluster (eg-test-rds-cluster) Instance: InvalidParameterValue: CreateDBInstance can't be used to create a DB instance in a Multi-AZ DB cluster. Use CreateDBCluster instead.
│ status code: 400, request id: xxx-xxxx-xxxxx-xxxxx
│
│ with module.rds_cluster.aws_rds_cluster_instance.default[0],
│ on ../../main.tf line 240, in resource "aws_rds_cluster_instance" "default":
The resource aws_rds_cluster_instance is specifically used for Aurora engine types like aurora, aurora-mysql, and aurora-postgresql. Check here.

When trying to set up other, non-Aurora engine types, creation of the aws_rds_cluster_instance resource should be skipped (see the sketch below).
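A hypothetical sketch of that skip (names assumed, not the module's actual implementation):

locals {
  is_aurora = length(regexall("^aurora", var.engine)) > 0
}

resource "aws_rds_cluster_instance" "default" {
  count = local.is_aurora ? var.cluster_size : 0
  # ...
}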
Steps to reproduce the behavior:
availability_zones = ["us-east-2a", "us-east-2b", "us-east-2c"]
engine = "postgres"
engine_mode = "provisioned"
engine_version = "13.4"
db_cluster_instance_class = "db.m5d.large"
allocated_storage = 100
storage_type = "io1"
iops = 1000
A bunch of unrelated warnings appeared in a PR: #126
I think BridgeCrew has updated their database.
Keep master clean so that people don't get confused when they contribute.
It can be a pain when the security group name changes, as the old group cannot be destroyed while it is still attached - potentially using this pattern would work: https://github.com/terraform-aws-modules/terraform-aws-security-group/blob/master/main.tf#L34

Expected: able to create the new security group and assign it prior to destroying the old one.
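A sketch of the linked pattern - name_prefix plus create_before_destroy:

resource "aws_security_group" "default" {
  name_prefix = "example-" # avoids a fixed name so a replacement can coexist
  vpc_id      = var.vpc_id

  lifecycle {
    create_before_destroy = true
  }
}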
The aws_rds_cluster resource has the ability to have the storage options specified. We should be able to specify storage_type, iops, and allocated_storage via this module; see the sketch below.
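A sketch of the relevant arguments on the raw resource:

resource "aws_rds_cluster" "default" {
  # ...
  storage_type      = "io1"
  iops              = 1000
  allocated_storage = 100
}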
There's nowhere I could find the minimum variables needed for creating a cluster.
I believe the required ones are:
Our goal was to setup a single-instance development database (1 cluster member), and then a production cluster that scaled as usage grew.
However, when we created a single node, the route53 record didn't include the stage, so it will conflict with our production cluster (when created).
If I set name = "${var.stage}-${var.name}", then my cluster name is zw-dev-dev-application (which I can live with).
Should stage be in the route53 records?
Unable to set the variable performance_insights_enabled to false. While it is set to false, Terraform throws the following error:

Error: creating RDS Cluster (dev-db) Instance (dev-db-1): InvalidParameterCombination: To enable Performance Insights, EnablePerformanceInsights must be set to 'true'

In our dev environment we may not want to enable Performance Insights in order to save money. I would have expected to be able to tell the module to set it to false. It would be great if we could make this a bit more dynamic.
Steps to reproduce the behavior:
performance_insights_enabled
to falseterraform apply
Error: creating RDS Cluster (dev-db) Instance (dev-db-1): InvalidParameterCombination: To enable Performance Insights, EnablePerformanceInsights must be set to 'true'
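A hypothetical sketch of the kind of fix this suggests - only forward the Performance Insights settings when they are enabled (names assumed, not the module's actual code):

resource "aws_rds_cluster_instance" "default" {
  # ...
  performance_insights_enabled    = var.performance_insights_enabled
  performance_insights_kms_key_id = var.performance_insights_enabled ? var.performance_insights_kms_key_id : null
}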
Alpine linux
terraform 1.2.5
Hello!
I have some cluster_parameters modifications defined, and if I use the "immediate" apply_method, the first time it creates the database correctly; but then AWS changes the parameter to pending_reboot internally, so every time I reapply my code it detects the difference and applies it again.
Is there a correct way to avoid this?
Also, the apply_method is not mandatory for terraform (it defaults to immediate), but it is for your module - why?
Thanks a lot
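One workaround sketch, based on the behavior described above: declare such parameters with apply_method = "pending-reboot" up front, so the stored value matches what AWS reports back (the parameter name/value are placeholders):

cluster_parameters = [
  {
    name         = "max_connections"
    value        = "500"
    apply_method = "pending-reboot"
  }
]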
We want to use RDS integration with secret manager so that master password will be managed by RDS and rotated by secret manager.
This option is available in Terraform by using the variable manage_master_user_password:
Set to true to allow RDS to manage the master user password in Secrets Manager. Cannot be set if master_password is provided.
Currently the cloudposse module does not allow enabling this feature.

Expected: the module allows enabling the managed master user password feature in RDS.
Managed secrets are more secure and easy to use.
(See the master_user_secret reference in the terraform docs.)
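A sketch with the raw resource; the module would need to expose a corresponding input:

resource "aws_rds_cluster" "default" {
  # ...
  manage_master_user_password = true # master_password must not be set together with this
}

output "master_user_secret_arn" {
  value = aws_rds_cluster.default.master_user_secret[0].secret_arn
}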
Terraform version 0.12.24
I am trying to create an Aurora MySQL serverless RDS with the below configuration, but I run into an invalid parameter value error when I use aurora-mysql. It works fine if I use the engine aurora. Plan does not give me any error.
provider "aws" {
region = "us-east-1"
}
resource "aws_rds_cluster" "serverless" {
cluster_identifier = "serverless-dev"
engine = "aurora-mysql"
engine_mode = "serverless"
master_username = "dba_admin"
master_password = "changemepass"
skip_final_snapshot = true
db_subnet_group_name = "serverless-vpc"
}
Error: error creating RDS cluster: InvalidParameterValue: The engine mode serverless you requested is currently unavailable.
status code: 400, request id: 2294c942-fec5-4f45-a9e0-7520e33b73b8
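Serverless v1 is only available for specific engine versions; one sketch is to pin a 5.7-compatible Aurora MySQL version (the exact value is illustrative and should be verified against the currently supported Serverless v1 versions for the region):

engine         = "aurora-mysql"
engine_mode    = "serverless"
engine_version = "5.7.mysql_aurora.2.08.3" # illustrative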
auto_minor_version_upgrade defaults to true and tells AWS to update minor versions during the set maintenance window.

A variable should be available to set it, but it is not.

Desire more control over whether updates are applied automatically or not - perhaps true in staging but false in production. It is not always possible to rely on ZDP, so some updates will mean downtime, or at least app interruption (app reconnects).
Expose variable in module.
Forking module.
During terraform plan we can see the value is defaulted to true:
# module.eeva_aurora_mysql.aws_rds_cluster_instance.default[1] must be replaced
-/+ resource "aws_rds_cluster_instance" "default" {
+ apply_immediately = (known after apply)
~ arn = "arn:aws:rds:<snip>" -> (known after apply)
auto_minor_version_upgrade = true
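A hypothetical sketch of exposing the setting (variable name assumed to mirror the resource argument):

variable "auto_minor_version_upgrade" {
  type        = bool
  description = "Automatically apply minor engine upgrades during the maintenance window"
  default     = true
}

resource "aws_rds_cluster_instance" "default" {
  # ...
  auto_minor_version_upgrade = var.auto_minor_version_upgrade
}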
Hi,
availability_zones is an EC2 Classic-era parameter; I believe the module and the examples will get better if EC2 Classic support is dropped. The current examples mix EC2 Classic params with VPC params.
availability_zones - (Optional) A list of EC2 Availability Zones that instances in the DB cluster can be created in
We are using this module to provision an auto-scaling read replica, and it is working well. However, when we try to rebuild the cluster from a snapshot, the apply process fails with the following error.
Error: error deleting Database Instance "db-instance-1": AccessDenied: User: arn:aws:sts::xxxxxxxxxxxx:assumed-role/jenkins is not authorized to perform: rds:DeleteDBInstance on resource: arn:aws:rds:us-east-2:xxxxxxxxxxx:db:db-instance-1
status code: 403, request id: a43bf094-e294-4ecd-ad51-6d7ad78689b8
In order to allow this to work, we need to remove the read replicas and the auto-scaling profile of the Aurora cluster before restoring RDS from the snapshot.
I am trying to configure an Aurora Global Cluster spanned across 2 regions. I create the "aws_rds_global_cluster" terraform resource externally, and then I am trying to use your module to deploy the 2 sub-clusters in the 2 regions. The main cluster works fine; the secondary raises errors:
creating RDS Cluster (): InvalidParameterCombination: Cannot specify database name for cross region replication cluster
creating RDS Cluster (): InvalidParameterCombination: Cannot specify user name for cross region replication cluster
I am using global_cluster_identifier to enable the cross-region replication feature. For the secondary cluster I am also specifying source_region to link it to the main one.
Using a local fork of your module, I made it work by commenting out 3 lines in the source:
# https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/rds_cluster#replication_source_identifier
resource "aws_rds_cluster" "secondary" {
count = local.enabled && !local.is_regional_cluster ? 1 : 0
cluster_identifier = var.cluster_identifier == "" ? module.this.id : var.cluster_identifier
# database_name = var.db_name
# master_username = local.ignore_admin_credentials ? null : var.admin_user
# master_password = local.ignore_admin_credentials ? null : var.admin_password
I was expecting a second cluster to be deployed in the second region, connected to the global cluster.
This is linked to hashicorp/terraform#16724 and might be fixed by #35
Currently a terraform plan shows that the RDS cluster is recreated with every apply; the cause seems to be a wonky availability_zones attribute. Somehow this seems to trigger a new resource. See the plan output.
-/+ module.rds_cluster_aurora_mysql.aws_rds_cluster.default (new resource required)
id: "namespace-stage-project" => <computed> (forces new resource)
apply_immediately: "true" => "true"
arn: "arn:aws:rds:eu-central-1:123456789:cluster:namespace-stage-project" => <computed>
availability_zones.#: "3" => "2" (forces new resource)
availability_zones.1126047633: "eu-central-1a" => "eu-central-1a"
availability_zones.2903539389: "eu-central-1c" => "" (forces new resource)
availability_zones.3658960427: "eu-central-1b" => "eu-central-1b"
backup_retention_period: "5" => "5"
cluster_identifier: "namespace-stage-project" => "namespace-stage-project"
cluster_identifier_prefix: "" => <computed>
cluster_members.#: "2" => <computed>
cluster_resource_id: "cluster-AAAXXXX" => <computed>
database_name: "project" => "project"
db_cluster_parameter_group_name: "namespace-stage-project" => "namespace-stage-project"
db_subnet_group_name: "namespace-stage-project" => "namespace-stage-project"
endpoint: "namespace-stage-project.cluster-sensitive.eu-central-1.rds.amazonaws.com" => <computed>
engine: "aurora-mysql" => "aurora-mysql"
engine_mode: "provisioned" => "provisioned"
engine_version: "5.7.12" => <computed>
final_snapshot_identifier: "namespace-stage-project" => "namespace-stage-project"
hosted_zone_id: "Z1RLSENSITIVE" => <computed>
iam_database_authentication_enabled: "false" => "false"
kms_key_id: "arn:aws:kms:eu-central-1:123456789:key/xxx" => <computed>
master_password: <sensitive> => <sensitive> (attribute changed)
master_username: "project" => "project"
port: "3306" => <computed>
preferred_backup_window: "07:00-09:00" => "07:00-09:00"
preferred_maintenance_window: "wed:03:00-wed:04:00" => "wed:03:00-wed:04:00"
reader_endpoint: "namespace-stage-project.cluster-ro-sensitive.eu-central-1.rds.amazonaws.com" => <computed>
skip_final_snapshot: "false" => "false"
storage_encrypted: "true" => "true"
tags.%: "3" => "3"
tags.Name: "namespace-stage-project" => "namespace-stage-project"
tags.Namespace: "namespace" => "namespace"
tags.Stage: "stage" => "stage"
vpc_security_group_ids.#: "1" => "1"
vpc_security_group_ids.636648702: "sg-080d3cfa4609edea8" => "sg-080d3cfa4609edea8"
-/+ module.rds_cluster_aurora_mysql.aws_rds_cluster_instance.default[0] (new resource required)
id: "namespace-stage-project-1" => <computed> (forces new resource)
apply_immediately: "" => <computed>
arn: "arn:aws:rds:eu-central-1:123456789:db:namespace-stage-project-1" => <computed>
auto_minor_version_upgrade: "true" => "true"
availability_zone: "eu-central-1a" => <computed>
cluster_identifier: "namespace-stage-project" => "${aws_rds_cluster.default.id}" (forces new resource)
db_parameter_group_name: "namespace-stage-project" => "namespace-stage-project"
db_subnet_group_name: "namespace-stage-project" => "namespace-stage-project"
dbi_resource_id: "db-SENSITIVE0" => <computed>
endpoint: "namespace-stage-project-1.sensitive.eu-central-1.rds.amazonaws.com" => <computed>
engine: "aurora-mysql" => "aurora-mysql"
engine_version: "5.7.12" => <computed>
identifier: "namespace-stage-project-1" => "namespace-stage-project-1"
identifier_prefix: "" => <computed>
instance_class: "db.t2.small" => "db.t2.small"
kms_key_id: "arn:aws:kms:eu-central-1:123456789:key/xxx" => <computed>
monitoring_interval: "0" => "0"
monitoring_role_arn: "" => <computed>
performance_insights_enabled: "false" => <computed>
performance_insights_kms_key_id: "" => <computed>
port: "3306" => <computed>
preferred_backup_window: "07:00-09:00" => <computed>
preferred_maintenance_window: "mon:04:25-mon:04:55" => <computed>
promotion_tier: "0" => "0"
publicly_accessible: "false" => "false"
storage_encrypted: "true" => <computed>
tags.%: "3" => "3"
tags.Name: "namespace-stage-project" => "namespace-stage-project"
tags.Namespace: "namespace" => "namespace"
tags.Stage: "stage" => "stage"
writer: "false" => <computed>
-/+ module.rds_cluster_aurora_mysql.aws_rds_cluster_instance.default[1] (new resource required)
id: "namespace-stage-project-2" => <computed> (forces new resource)
apply_immediately: "" => <computed>
arn: "arn:aws:rds:eu-central-1:123456789:db:namespace-stage-project-2" => <computed>
auto_minor_version_upgrade: "true" => "true"
availability_zone: "eu-central-1b" => <computed>
cluster_identifier: "namespace-stage-project" => "${aws_rds_cluster.default.id}" (forces new resource)
db_parameter_group_name: "namespace-stage-project" => "namespace-stage-project"
db_subnet_group_name: "namespace-stage-project" => "namespace-stage-project"
dbi_resource_id: "db-SENSITIVE1" => <computed>
endpoint: "namespace-stage-project-2.sensitive.eu-central-1.rds.amazonaws.com" => <computed>
engine: "aurora-mysql" => "aurora-mysql"
engine_version: "5.7.12" => <computed>
identifier: "namespace-stage-project-2" => "namespace-stage-project-2"
identifier_prefix: "" => <computed>
instance_class: "db.t2.small" => "db.t2.small"
kms_key_id: "arn:aws:kms:eu-central-1:123456789:key/xxx" => <computed>
monitoring_interval: "0" => "0"
monitoring_role_arn: "" => <computed>
performance_insights_enabled: "false" => <computed>
performance_insights_kms_key_id: "" => <computed>
port: "3306" => <computed>
preferred_backup_window: "07:00-09:00" => <computed>
preferred_maintenance_window: "sun:03:45-sun:04:15" => <computed>
promotion_tier: "0" => "0"
publicly_accessible: "false" => "false"
storage_encrypted: "true" => <computed>
tags.%: "3" => "3"
tags.Name: "namespace-stage-project" => "namespace-stage-project"
tags.Namespace: "namespace" => "namespace"
tags.Stage: "stage" => "stage"
writer: "true" => <computed>
Config:
module "rds_cluster_aurora_mysql" {
source = "git::https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=master"
engine = "aurora-mysql"
cluster_family = "aurora-mysql5.7"
cluster_size = "${var.rds_cluster_size}"
namespace = "dc"
stage = "${element(split("-", var.name), 1)}"
name = "${element(split("-", var.name), 0)}"
admin_user = "${element(split("-", var.name), 0)}"
admin_password = "${random_string.password.result}"
db_name = "${element(split("-", var.name), 0)}"
instance_type = "${var.rds_instance_type}"
vpc_id = "${aws_vpc.this.id}"
availability_zones = ["${var.azs}"]
security_groups = ["${module.security_group_webapp.this_security_group_id}", "${module.security_group_bastion.this_security_group_id}"]
subnets = ["${aws_subnet.private.*.id}"]
# zone_id = "${aws_route53_zone.internal.zone_id}"
storage_encrypted = true
maintenance_window = "wed:03:00-wed:04:00"
skip_final_snapshot = false
}
I assume that it would work fine with the availability_zones variable dropped.
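A common mitigation sketch, assuming direct access to the resource (not possible from inside the module), since the AWS API can return more AZs than were requested:

resource "aws_rds_cluster" "default" {
  # ...
  lifecycle {
    ignore_changes = [availability_zones]
  }
}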
Attempting to do a major version upgrade of an Aurora Postgres instance from 11.13 to 12.9, on the latest module version 0.50.2 and AWS provider 3.63.0. Below is my module config:
module "postgres" {
source = "cloudposse/rds-cluster/aws"
version = "0.50.2"
name = "api-db"
engine = "aurora-postgresql"
cluster_family = "aurora-postgresql12"
engine_version = "12.9"
allow_major_version_upgrade = true
apply_immediately = true
cluster_size = 1
admin_user = data.aws_ssm_parameter.db_admin_user.value
admin_password = data.aws_ssm_parameter.db_admin_password.value
db_name = "api"
db_port = 5432
instance_type = "db.t3.medium"
vpc_id = var.vpc_id
security_groups = concat([aws_security_group.api.id], var.rds_security_group_inbound)
subnets = var.rds_subnets
storage_encrypted = true
}
When running apply I get the error:
Failed to modify RDS Cluster (api-db): InvalidParameterCombination: The current DB instance parameter group api-db-xxxxxxx is custom. You must explicitly specify a new DB instance parameter group, either default or custom, for the engine version upgrade.
It seems like AWS does not support (or maybe it is bugged) "preferred_maintenance_window" and "preferred_backup_window" for Aurora Serverless.
Existing AWS issue: aws-cloudformation/cloudformation-coverage-roadmap#396
This was also reported internally in AWS team
Everything should work as expected
Steps to reproduce the behavior:
Create a serverless database from a snapshot. Right after it's finished terraform will throw:
Error: error modifying RDS Cluster (tc-staging-shared-main-rds): InvalidParameterCombination: You currently can't modify BackupWindow with Aurora Serverless. status code: 400, request id: aa5042ba-f0be-49ea-a695-e68da91a01f8
If you run terraform again, it will say the cluster is tainted and must be replaced.
Moreover, a lot of values are just wrong. The created resource has totally different values than those specified in the terraform code:
I think you could have a dedicated resource for serverless clusters, with the above fields omitted; a sketch follows.
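A sketch of the simpler fix inside the existing resource - omit the windows for serverless clusters (variable names assumed):

resource "aws_rds_cluster" "default" {
  # ...
  preferred_backup_window      = var.engine_mode == "serverless" ? null : var.backup_window
  preferred_maintenance_window = var.engine_mode == "serverless" ? null : var.maintenance_window
}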
local.cluster_instance_count fails to evaluate to 0 when specifying:

cluster_size        = 0
autoscaling_enabled = false

and the module still attempts to create the resource (see terraform-aws-rds-cluster/main.tf, line 92 at commit d95acc1).

Additionally, specifying false for var.enabled is ineffective. So the enabled variable doesn't appear to be working? How can the cluster instance be disabled when using Aurora serverless?
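A hypothetical sketch of the expected gating (names taken from the issue, not the module's actual code):

locals {
  cluster_instance_count = var.enabled ? (var.autoscaling_enabled ? var.autoscaling_min_capacity : var.cluster_size) : 0
}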
Hi there,
I noticed that the zone_id parameter is mandatory. Although it is nice to have friendly DNS names for RDS endpoints, this may cause issues when SSL connections are used. Perhaps it's worth considering making the zone_id parameter optional.
// Siert