terraform-aws-rds-cluster

Terraform module to provision an RDS Aurora cluster for MySQL or Postgres.

Supports Amazon Aurora Serverless.

Tip

👽 Use Atmos with Terraform

Cloud Posse uses atmos to easily orchestrate multiple environments using Terraform.
It works with GitHub Actions, Atlantis, or Spacelift.

Watch demo of using Atmos with Terraform
Example of running atmos to manage infrastructure from our Quick Start tutorial.

Usage

For a complete example, see examples/complete.

For automated tests of the complete example using bats and Terratest (which tests and deploys the example on AWS), see test.

Basic example

module "rds_cluster_aurora_postgres" {
  source = "cloudposse/rds-cluster/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version     = "x.x.x"

  name            = "postgres"
  engine          = "aurora-postgresql"
  cluster_family  = "aurora-postgresql9.6"
  # 1 writer, 1 reader
  cluster_size    = 2
  # 1 writer, 3 reader
  # cluster_size    = 4
  # 1 writer, 5 reader
  # cluster_size    = 6
  namespace       = "eg"
  stage           = "dev"
  admin_user      = "admin1"
  admin_password  = "Test123456789"
  db_name         = "dbname"
  db_port         = 5432
  instance_type   = "db.r4.large"
  vpc_id          = "vpc-xxxxxxxx"
  security_groups = ["sg-xxxxxxxx"]
  subnets         = ["subnet-xxxxxxxx", "subnet-xxxxxxxx"]
  zone_id         = "Zxxxxxxxx"
}
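
As a quick sketch (not part of the upstream examples), the endpoints the module exposes can be consumed via its documented outputs; the module name below matches the basic example above:

```hcl
# Consume the module's documented outputs (see "Outputs" below)
output "rds_writer_endpoint" {
  description = "Writer (master) DNS endpoint of the Aurora cluster"
  value       = module.rds_cluster_aurora_postgres.endpoint
}

output "rds_reader_endpoint" {
  description = "Read-only endpoint, load-balanced across replicas"
  value       = module.rds_cluster_aurora_postgres.reader_endpoint
}
```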

Serverless Aurora MySQL 5.6

module "rds_cluster_aurora_mysql_serverless" {
  source = "cloudposse/rds-cluster/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version     = "x.x.x"
  namespace            = "eg"
  stage                = "dev"
  name                 = "db"
  engine               = "aurora"
  engine_mode          = "serverless"
  cluster_family       = "aurora5.6"
  cluster_size         = 0
  admin_user           = "admin1"
  admin_password       = "Test123456789"
  db_name              = "dbname"
  db_port              = 3306
  instance_type        = "db.t2.small"
  vpc_id               = "vpc-xxxxxxxx"
  security_groups      = ["sg-xxxxxxxx"]
  subnets              = ["subnet-xxxxxxxx", "subnet-xxxxxxxx"]
  zone_id              = "Zxxxxxxxx"
  enable_http_endpoint = true

  scaling_configuration = [
    {
      auto_pause               = true
      max_capacity             = 256
      min_capacity             = 2
      seconds_until_auto_pause = 300
    }
  ]
}

Serverless Aurora MySQL 5.7 (engine version 2.07.1)

module "rds_cluster_aurora_mysql_serverless" {
  source = "cloudposse/rds-cluster/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version     = "x.x.x"
  namespace            = "eg"
  stage                = "dev"
  name                 = "db"
  engine               = "aurora-mysql"
  engine_mode          = "serverless"
  engine_version       = "5.7.mysql_aurora.2.07.1"
  cluster_family       = "aurora-mysql5.7"
  cluster_size         = 0
  admin_user           = "admin1"
  admin_password       = "Test123456789"
  db_name              = "dbname"
  db_port              = 3306
  vpc_id               = "vpc-xxxxxxxx"
  security_groups      = ["sg-xxxxxxxx"]
  subnets              = ["subnet-xxxxxxxx", "subnet-xxxxxxxx"]
  zone_id              = "Zxxxxxxxx"
  enable_http_endpoint = true

  scaling_configuration = [
    {
      auto_pause               = true
      max_capacity             = 16
      min_capacity             = 1
      seconds_until_auto_pause = 300
      timeout_action           = "ForceApplyCapacityChange"
    }
  ]
}

With cluster parameters

module "rds_cluster_aurora_mysql" {
  source = "cloudposse/rds-cluster/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version     = "x.x.x"
  engine          = "aurora"
  cluster_family  = "aurora-mysql5.7"
  cluster_size    = 2
  namespace       = "eg"
  stage           = "dev"
  name            = "db"
  admin_user      = "admin1"
  admin_password  = "Test123456789"
  db_name         = "dbname"
  instance_type   = "db.t2.small"
  vpc_id          = "vpc-xxxxxxx"
  security_groups = ["sg-xxxxxxxx"]
  subnets         = ["subnet-xxxxxxxx", "subnet-xxxxxxxx"]
  zone_id         = "Zxxxxxxxx"

  cluster_parameters = [
    {
      name  = "character_set_client"
      value = "utf8"
    },
    {
      name  = "character_set_connection"
      value = "utf8"
    },
    {
      name  = "character_set_database"
      value = "utf8"
    },
    {
      name  = "character_set_results"
      value = "utf8"
    },
    {
      name  = "character_set_server"
      value = "utf8"
    },
    {
      name  = "collation_connection"
      value = "utf8_bin"
    },
    {
      name  = "collation_server"
      value = "utf8_bin"
    },
    {
      name         = "lower_case_table_names"
      value        = "1"
      apply_method = "pending-reboot"
    },
    {
      name         = "skip-character-set-client-handshake"
      value        = "1"
      apply_method = "pending-reboot"
    }
  ]
}
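
With replica autoscaling

A minimal sketch using the autoscaling inputs documented under Inputs below (autoscaling_enabled, autoscaling_min_capacity, autoscaling_max_capacity, autoscaling_target_metrics, autoscaling_target_value); all IDs are placeholders:

```hcl
module "rds_cluster_aurora_postgres_autoscaled" {
  source = "cloudposse/rds-cluster/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version     = "x.x.x"
  engine          = "aurora-postgresql"
  cluster_family  = "aurora-postgresql9.6"
  cluster_size    = 2
  namespace       = "eg"
  stage           = "dev"
  name            = "db"
  admin_user      = "admin1"
  admin_password  = "Test123456789"
  db_name         = "dbname"
  db_port         = 5432
  instance_type   = "db.r4.large"
  vpc_id          = "vpc-xxxxxxxx"
  security_groups = ["sg-xxxxxxxx"]
  subnets         = ["subnet-xxxxxxxx", "subnet-xxxxxxxx"]
  zone_id         = "Zxxxxxxxx"

  # Scale the reader fleet between 1 and 5 instances,
  # targeting 75% average reader CPU utilization
  autoscaling_enabled        = true
  autoscaling_min_capacity   = 1
  autoscaling_max_capacity   = 5
  autoscaling_target_metrics = "RDSReaderAverageCPUUtilization"
  autoscaling_target_value   = 75
}
```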

With enhanced monitoring

# create IAM role for monitoring
resource "aws_iam_role" "enhanced_monitoring" {
  name               = "rds-cluster-example-1"
  assume_role_policy = data.aws_iam_policy_document.enhanced_monitoring.json
}

# Attach Amazon's managed policy for RDS enhanced monitoring
resource "aws_iam_role_policy_attachment" "enhanced_monitoring" {
  role       = aws_iam_role.enhanced_monitoring.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonRDSEnhancedMonitoringRole"
}

# allow rds to assume this role
data "aws_iam_policy_document" "enhanced_monitoring" {
  statement {
    actions = [
      "sts:AssumeRole",
    ]

    effect = "Allow"

    principals {
      type        = "Service"
      identifiers = ["monitoring.rds.amazonaws.com"]
    }
  }
}

module "rds_cluster_aurora_postgres" {
  source = "cloudposse/rds-cluster/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version     = "x.x.x"
  engine          = "aurora-postgresql"
  cluster_family  = "aurora-postgresql9.6"
  cluster_size    = 2
  namespace       = "eg"
  stage           = "dev"
  name            = "db"
  admin_user      = "admin1"
  admin_password  = "Test123456789"
  db_name         = "dbname"
  db_port         = 5432
  instance_type   = "db.r4.large"
  vpc_id          = "vpc-xxxxxxx"
  security_groups = ["sg-xxxxxxxx"]
  subnets         = ["subnet-xxxxxxxx", "subnet-xxxxxxxx"]
  zone_id         = "Zxxxxxxxx"

  # enable monitoring every 30 seconds
  rds_monitoring_interval = 30

  # reference iam role created above
  rds_monitoring_role_arn = aws_iam_role.enhanced_monitoring.arn
}
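
Serverless v2

A sketch based on the serverlessv2_scaling_configuration input documented below; see the examples/serverlessv2_postgres directory for the maintained example. Aurora Serverless v2 uses the "provisioned" engine mode with the special "db.serverless" instance class; the engine version and cluster family shown here are illustrative assumptions, so check the AWS documentation for currently supported values:

```hcl
module "rds_cluster_aurora_postgres_serverless_v2" {
  source = "cloudposse/rds-cluster/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version     = "x.x.x"
  namespace      = "eg"
  stage          = "dev"
  name           = "db"
  engine         = "aurora-postgresql"
  # Serverless v2 runs in "provisioned" engine mode
  # with the "db.serverless" instance class
  engine_mode    = "provisioned"
  cluster_family = "aurora-postgresql14"
  cluster_size   = 1
  instance_type  = "db.serverless"
  admin_user     = "admin1"
  admin_password = "Test123456789"
  db_name        = "dbname"
  db_port        = 5432
  vpc_id          = "vpc-xxxxxxxx"
  security_groups = ["sg-xxxxxxxx"]
  subnets         = ["subnet-xxxxxxxx", "subnet-xxxxxxxx"]

  # Range of Aurora Capacity Units (ACUs) the cluster may scale between
  serverlessv2_scaling_configuration = {
    min_capacity = 0.5
    max_capacity = 4
  }
}
```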

Important

In Cloud Posse's examples, we avoid pinning modules to specific versions to prevent discrepancies between the documentation and the latest released versions. However, for your own projects, we strongly advise pinning each module to the exact version you're using. This practice ensures the stability of your infrastructure. Additionally, we recommend implementing a systematic approach for updating versions to avoid unexpected changes.

Examples

Review the complete example to see how to use this module.

Makefile Targets

Available targets:

  help                                Help screen
  help/all                            Display help for all targets
  help/short                          This help short screen
  lint                                Lint terraform code

Requirements

Name Version
terraform >= 1.0.0
aws >= 4.23.0
null >= 2.0

Providers

Name Version
aws >= 4.23.0

Modules

Name Source Version
dns_master cloudposse/route53-cluster-hostname/aws 0.12.2
dns_replicas cloudposse/route53-cluster-hostname/aws 0.12.2
enhanced_monitoring_label cloudposse/label/null 0.25.0
this cloudposse/label/null 0.25.0

Resources

Name Type
aws_appautoscaling_policy.replicas resource
aws_appautoscaling_target.replicas resource
aws_db_parameter_group.default resource
aws_db_subnet_group.default resource
aws_iam_role.enhanced_monitoring resource
aws_iam_role_policy_attachment.enhanced_monitoring resource
aws_rds_cluster.primary resource
aws_rds_cluster.secondary resource
aws_rds_cluster_activity_stream.primary resource
aws_rds_cluster_instance.default resource
aws_rds_cluster_parameter_group.default resource
aws_security_group.default resource
aws_security_group_rule.egress resource
aws_security_group_rule.ingress_cidr_blocks resource
aws_security_group_rule.ingress_security_groups resource
aws_security_group_rule.traffic_inside_security_group resource
aws_iam_policy_document.enhanced_monitoring data source
aws_partition.current data source

Inputs

Name Description Type Default Required
activity_stream_enabled Whether to enable Activity Streams bool false no
activity_stream_kms_key_id The ARN for the KMS key used to encrypt Activity Stream data. When specifying activity_stream_kms_key_id, activity_stream_enabled needs to be set to true string "" no
activity_stream_mode The mode for the Activity Streams. async and sync are supported. Defaults to async string "async" no
additional_tag_map Additional key-value pairs to add to each map in tags_as_list_of_maps. Not added to tags or id.
This is for some rare cases where resources want additional configuration of tags
and therefore take a list of maps with tag key, value, and additional configuration.
map(string) {} no
admin_password Password for the master DB user. Ignored if snapshot_identifier or replication_source_identifier is provided string "" no
admin_user Username for the master DB user. Ignored if snapshot_identifier or replication_source_identifier is provided string "admin" no
allocated_storage The allocated storage in GBs number null no
allow_major_version_upgrade Enable to allow major engine version upgrades when changing engine versions. Defaults to false. bool false no
allowed_cidr_blocks List of CIDR blocks allowed to access the cluster list(string) [] no
apply_immediately Specifies whether any cluster modifications are applied immediately, or during the next maintenance window bool true no
attributes ID element. Additional attributes (e.g. workers or cluster) to add to id,
in the order they appear in the list. New attributes are appended to the
end of the list. The elements of the list are joined by the delimiter
and treated as a single ID element.
list(string) [] no
auto_minor_version_upgrade Indicates that minor engine upgrades will be applied automatically to the DB instance during the maintenance window bool true no
autoscaling_enabled Whether to enable cluster autoscaling bool false no
autoscaling_max_capacity Maximum number of instances to be maintained by the autoscaler number 5 no
autoscaling_min_capacity Minimum number of instances to be maintained by the autoscaler number 1 no
autoscaling_policy_type Autoscaling policy type. TargetTrackingScaling and StepScaling are supported string "TargetTrackingScaling" no
autoscaling_scale_in_cooldown The amount of time, in seconds, after a scaling activity completes and before the next scaling down activity can start. Default is 300s number 300 no
autoscaling_scale_out_cooldown The amount of time, in seconds, after a scaling activity completes and before the next scaling up activity can start. Default is 300s number 300 no
autoscaling_target_metrics The metrics type to use. If this value isn't provided the default is CPU utilization string "RDSReaderAverageCPUUtilization" no
autoscaling_target_value The target value to scale with respect to target metrics number 75 no
backtrack_window The target backtrack window, in seconds. Only available for aurora engine currently. Must be between 0 and 259200 (72 hours) number 0 no
backup_window Daily time range during which the backups happen string "07:00-09:00" no
ca_cert_identifier The identifier of the CA certificate for the DB instance string null no
cluster_dns_name Name of the cluster CNAME record to create in the parent DNS zone specified by zone_id. If left empty, the name will be auto-assigned using the format master.var.name string "" no
cluster_family The family of the DB cluster parameter group string "aurora5.6" no
cluster_identifier The RDS Cluster Identifier. Will use generated label ID if not supplied string "" no
cluster_parameters List of DB cluster parameters to apply
list(object({
apply_method = string
name = string
value = string
}))
[] no
cluster_size Number of DB instances to create in the cluster number 2 no
cluster_type Either regional or global.
If regional will be created as a normal, standalone DB.
If global, will be made part of a Global cluster (requires global_cluster_identifier).
string "regional" no
context Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as null to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
any
{
"additional_tag_map": {},
"attributes": [],
"delimiter": null,
"descriptor_formats": {},
"enabled": true,
"environment": null,
"id_length_limit": null,
"label_key_case": null,
"label_order": [],
"label_value_case": null,
"labels_as_tags": [
"unset"
],
"name": null,
"namespace": null,
"regex_replace_chars": null,
"stage": null,
"tags": {},
"tenant": null
}
no
copy_tags_to_snapshot Copy tags to backup snapshots bool false no
db_cluster_instance_class This setting is required to create a provisioned Multi-AZ DB cluster string null no
db_name Database name (default is not to create a database) string "" no
db_port Database port number 3306 no
deletion_protection If the DB instance should have deletion protection enabled bool false no
delimiter Delimiter to be used between ID elements.
Defaults to - (hyphen). Set to "" to use no delimiter at all.
string null no
descriptor_formats Describe additional descriptors to be output in the descriptors output map.
Map of maps. Keys are names of descriptors. Values are maps of the form
{ format = string, labels = list(string) }
(Type is any so the map values can later be enhanced to provide additional options.)
format is a Terraform format string to be passed to the format() function.
labels is a list of labels, in order, to pass to format() function.
Label values will be normalized before being passed to format() so they will be
identical to how they appear in id.
Default is {} (descriptors output will be empty).
any {} no
egress_enabled Whether or not to apply the egress security group rule to the default security group. Defaults to true bool true no
enable_global_write_forwarding Set to true, to forward writes to an associated global cluster. bool false no
enable_http_endpoint Enable HTTP endpoint (data API). Only valid when engine_mode is set to serverless bool false no
enabled Set to false to prevent the module from creating any resources bool null no
enabled_cloudwatch_logs_exports List of log types to export to cloudwatch. The following log types are supported: audit, error, general, slowquery list(string) [] no
engine The name of the database engine to be used for this DB cluster. Valid values: aurora, aurora-mysql, aurora-postgresql string "aurora" no
engine_mode The database engine mode. Valid values: parallelquery, provisioned, serverless string "provisioned" no
engine_version The version of the database engine to use. See aws rds describe-db-engine-versions string "" no
enhanced_monitoring_attributes The attributes for the enhanced monitoring IAM role list(string)
[
"enhanced-monitoring"
]
no
enhanced_monitoring_role_enabled A boolean flag to enable/disable the creation of the enhanced monitoring IAM role. If set to false, the module will not create a new role and will use rds_monitoring_role_arn for enhanced monitoring bool false no
environment ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT' string null no
global_cluster_identifier ID of the Aurora global cluster string "" no
iam_database_authentication_enabled Specifies whether mappings of AWS Identity and Access Management (IAM) accounts to database accounts are enabled bool false no
iam_roles IAM roles for the Aurora cluster list(string) [] no
id_length_limit Limit id to this many characters (minimum 6).
Set to 0 for unlimited length.
Set to null to keep the existing setting, which defaults to 0.
Does not affect id_full.
number null no
instance_availability_zone Optional parameter to place cluster instances in a specific availability zone. If left empty, will place randomly string "" no
instance_parameters List of DB instance parameters to apply
list(object({
apply_method = string
name = string
value = string
}))
[] no
instance_type Instance type to use string "db.t2.small" no
intra_security_group_traffic_enabled Whether to allow traffic between resources inside the database's security group. bool false no
iops The amount of provisioned IOPS. Setting this implies a storage_type of 'io1'. This setting is required to create a Multi-AZ DB cluster. Check TF docs for values based on db engine number null no
kms_key_arn The ARN for the KMS encryption key. When specifying kms_key_arn, storage_encrypted needs to be set to true string "" no
label_key_case Controls the letter case of the tags keys (label names) for tags generated by this module.
Does not affect keys of tags passed in via the tags input.
Possible values: lower, title, upper.
Default value: title.
string null no
label_order The order in which the labels (ID elements) appear in the id.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
list(string) null no
label_value_case Controls the letter case of ID elements (labels) as included in id,
set as tag values, and output by this module individually.
Does not affect values of tags passed in via the tags input.
Possible values: lower, title, upper and none (no transformation).
Set this to title and set delimiter to "" to yield Pascal Case IDs.
Default value: lower.
string null no
labels_as_tags Set of labels (ID elements) to include as tags in the tags output.
Default is to include all labels.
Tags with empty values will not be included in the tags output.
Set to [] to suppress all generated tags.
Notes:
The value of the name tag, if included, will be the id, not the name.
Unlike other null-label inputs, the initial setting of labels_as_tags cannot be
changed in later chained modules. Attempts to change it will be silently ignored.
set(string)
[
"default"
]
no
maintenance_window Weekly time range during which system maintenance can occur, in UTC string "wed:03:00-wed:04:00" no
name ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
This is the only ID element not also included as a tag.
The "name" tag is set to the full id string. There is no tag with the value of the name input.
string null no
namespace ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique string null no
parameter_group_name_prefix_enabled Set to true to use name_prefix to name the cluster and database parameter groups. Set to false to use name instead bool true no
performance_insights_enabled Whether to enable Performance Insights bool false no
performance_insights_kms_key_id The ARN for the KMS key to encrypt Performance Insights data. When specifying performance_insights_kms_key_id, performance_insights_enabled needs to be set to true string "" no
performance_insights_retention_period Amount of time in days to retain Performance Insights data. Either 7 (7 days) or 731 (2 years) number null no
publicly_accessible Set to true if you want your cluster to be publicly accessible (such as via QuickSight) bool false no
rds_monitoring_interval The interval, in seconds, between points when enhanced monitoring metrics are collected for the DB instance. To disable collecting Enhanced Monitoring metrics, specify 0. The default is 0. Valid Values: 0, 1, 5, 10, 15, 30, 60 number 0 no
rds_monitoring_role_arn The ARN for the IAM role that permits RDS to send enhanced monitoring metrics to CloudWatch Logs string null no
reader_dns_name Name of the reader endpoint CNAME record to create in the parent DNS zone specified by zone_id. If left empty, the name will be auto-assigned using the format replicas.var.name string "" no
regex_replace_chars Terraform regular expression (regex) string.
Characters matching the regex will be removed from the ID elements.
If not set, "/[^a-zA-Z0-9-]/" is used to remove all characters other than hyphens, letters and digits.
string null no
replication_source_identifier ARN of a source DB cluster or DB instance if this DB cluster is to be created as a Read Replica string "" no
restore_to_point_in_time List of point-in-time recovery options. The only valid attributes are source_cluster_identifier, restore_type and use_latest_restorable_time
list(object({
source_cluster_identifier = string
restore_type = string
use_latest_restorable_time = bool
}))
[] no
retention_period Number of days to retain backups for number 5 no
s3_import Restore from a Percona Xtrabackup in S3. The bucket_name is required to be in the same region as the resource.
object({
bucket_name = string
bucket_prefix = string
ingestion_role = string
source_engine = string
source_engine_version = string
})
null no
scaling_configuration List of nested attributes with scaling properties. Only valid when engine_mode is set to serverless
list(object({
auto_pause = bool
max_capacity = number
min_capacity = number
seconds_until_auto_pause = number
timeout_action = string
}))
[] no
security_groups List of security groups to be allowed to connect to the DB instance list(string) [] no
serverlessv2_scaling_configuration serverlessv2 scaling properties
object({
min_capacity = number
max_capacity = number
})
null no
skip_final_snapshot Determines whether a final DB snapshot is created before the DB cluster is deleted bool true no
snapshot_identifier Identifier of the snapshot to create this cluster from, if any string null no
source_region Source Region of primary cluster, needed when using encrypted storage and region replicas string "" no
stage ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release' string null no
storage_encrypted Specifies whether the DB cluster is encrypted. The default is false for provisioned engine_mode and true for serverless engine_mode bool false no
storage_type One of 'standard' (magnetic), 'gp2' (general purpose SSD), 'io1' (provisioned IOPS SSD), 'aurora', or 'aurora-iopt1' string null no
subnet_group_name Database subnet group name. Will use generated label ID if not supplied. string "" no
subnets List of VPC subnet IDs list(string) n/a yes
tags Additional tags (e.g. {'BusinessUnit': 'XYZ'}).
Neither the tag keys nor the tag values will be modified by this module.
map(string) {} no
tenant ID element _(Rarely used, not included by default)_. A customer identifier, indicating who this instance of a resource is for string null no
timeouts_configuration List of timeout values per action. Only valid actions are create, update and delete
list(object({
create = string
update = string
delete = string
}))
[] no
vpc_id VPC ID to create the cluster in (e.g. vpc-a22222ee) string n/a yes
vpc_security_group_ids Additional security group IDs to apply to the cluster, in addition to the provisioned default security group with ingress traffic from existing CIDR blocks and existing security groups list(string) [] no
zone_id Route53 DNS Zone ID as list of string (0 or 1 items). If empty, no custom DNS name will be published.
If the list contains a single Zone ID, a custom DNS name will be published in that zone.
Can also be a plain string, but that use is DEPRECATED because of Terraform issues.
any [] no

Outputs

Name Description
activity_stream_arn Activity Stream ARN
activity_stream_name Activity Stream Name
arn Amazon Resource Name (ARN) of the cluster
cluster_identifier Cluster Identifier
cluster_resource_id The region-unique, immutable identifier of the cluster
cluster_security_groups Default RDS cluster security groups
database_name Database name
dbi_resource_ids List of the region-unique, immutable identifiers for the DB instances in the cluster
endpoint The DNS address of the RDS instance
master_host DB Master hostname
master_username Username for the master DB user
reader_endpoint A read-only endpoint for the Aurora cluster, automatically load-balanced across replicas
replicas_host Replicas hostname
security_group_arn Security Group ARN
security_group_id Security Group ID
security_group_name Security Group name

Related Projects

Check out these related projects.

Tip

Use Terraform Reference Architectures for AWS

Use Cloud Posse's ready-to-go terraform architecture blueprints for AWS to get up and running quickly.

✅ We build it with you.
✅ You own everything.
✅ Your team wins.

Request Quote

📚 Learn More

Cloud Posse is the leading DevOps Accelerator for funded startups and enterprises.

Your team can operate like a pro today.

Ensure that your team succeeds by using Cloud Posse's proven process and turnkey blueprints. Plus, we stick around until you succeed.

Day-0: Your Foundation for Success

  • Reference Architecture. You'll get everything you need from the ground up built using 100% infrastructure as code.
  • Deployment Strategy. Adopt a proven deployment strategy with GitHub Actions, enabling automated, repeatable, and reliable software releases.
  • Site Reliability Engineering. Gain total visibility into your applications and services with Datadog, ensuring high availability and performance.
  • Security Baseline. Establish a secure environment from the start, with built-in governance, accountability, and comprehensive audit logs, safeguarding your operations.
  • GitOps. Empower your team to manage infrastructure changes confidently and efficiently through Pull Requests, leveraging the full power of GitHub Actions.

Request Quote

Day-2: Your Operational Mastery

  • Training. Equip your team with the knowledge and skills to confidently manage the infrastructure, ensuring long-term success and self-sufficiency.
  • Support. Benefit from a seamless communication over Slack with our experts, ensuring you have the support you need, whenever you need it.
  • Troubleshooting. Access expert assistance to quickly resolve any operational challenges, minimizing downtime and maintaining business continuity.
  • Code Reviews. Enhance your team's code quality with our expert feedback, fostering continuous improvement and collaboration.
  • Bug Fixes. Rely on our team to troubleshoot and resolve any issues, ensuring your systems run smoothly.
  • Migration Assistance. Accelerate your migration process with our dedicated support, minimizing disruption and speeding up time-to-value.
  • Customer Workshops. Engage with our team in weekly workshops, gaining insights and strategies to continuously improve and innovate.

Request Quote

✨ Contributing

This project is under active development, and we encourage contributions from our community.

Many thanks to our outstanding contributors:

For ๐Ÿ› bug reports & feature requests, please use the issue tracker.

In general, PRs are welcome. We follow the typical "fork-and-pull" Git workflow.

  1. Review our Code of Conduct and Contributor Guidelines.
  2. Fork the repo on GitHub
  3. Clone the project to your own machine
  4. Commit changes to your own branch
  5. Push your work back up to your fork
  6. Submit a Pull Request so that we can review your changes

NOTE: Be sure to merge the latest changes from "upstream" before making a pull request!

🌎 Slack Community

Join our Open Source Community on Slack. It's FREE for everyone! Our "SweetOps" community is where you get to talk with others who share a similar vision for how to rollout and manage infrastructure. This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build totally sweet infrastructure.

📰 Newsletter

Sign up for our newsletter and join 3,000+ DevOps engineers, CTOs, and founders who get insider access to the latest DevOps trends, so you can always stay in the know. Dropped straight into your inbox every week, and usually a 5-minute read.

📆 Office Hours

Join us every Wednesday via Zoom for your weekly dose of insider DevOps trends, AWS news and Terraform insights, all sourced from our SweetOps community, plus a live Q&A that you can't find anywhere else. It's FREE for everyone!

License

Preamble to the Apache License, Version 2.0

Complete license is available in the LICENSE file.

Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.

Trademarks

All other trademarks referenced herein are the property of their respective owners.


Copyright © 2017-2024 Cloud Posse, LLC


terraform-aws-rds-cluster's Issues

Cluster name and current stage

Our goal was to set up a single-instance development database (1 cluster member), and then a production cluster that scaled as usage grew.

However, when we created a single node, the route53 record didn't include the stage, so it will conflict with our production cluster (when created)

If I set name = "${var.stage}-${var.name}", then my cluster name is zw-dev-dev-application (which I can live with).

Should stage be in the route53 records?

Support deterministic versioning of RDS

Describe the Feature

auto_minor_version_upgrade defaults to true and tells AWS to update minor versions during the set maintenance window.

Expected Behavior

The variable should be available to set, but it is not.

Use Case

Desire more control over whether updates are applied automatically or not.
Perhaps true in staging but false in production.
It is not always possible to rely on ZDP, so some updates will be downtime or at least app interruption (app reconnects).

Describe Ideal Solution

Expose variable in module.

Alternatives Considered

Forking module.

Additional Context

During terraform plan we can see the value defaults to true:

  # module.eeva_aurora_mysql.aws_rds_cluster_instance.default[1] must be replaced                                             
-/+ resource "aws_rds_cluster_instance" "default" {                                                                           
      + apply_immediately               = (known after apply)                                                                 
      ~ arn                             = "arn:aws:rds:<snip>" -> (known after apply)
        auto_minor_version_upgrade      = true       
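A minimal sketch of the requested change, assuming a new hypothetical `auto_minor_version_upgrade` variable wired through to the instance resource (not the actual module source):

```hcl
variable "auto_minor_version_upgrade" {
  type        = bool
  default     = true
  description = "Automatically apply minor engine upgrades during the maintenance window"
}

resource "aws_rds_cluster_instance" "default" {
  # ...existing arguments...
  auto_minor_version_upgrade = var.auto_minor_version_upgrade
}
```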

Add Aurora Postgresql Serverless Example

This config worked for me:

module "aurora_postgres_serverless" {
  source                   = "git::https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=tags/0.15.0"
  namespace                = "${var.namespace}"
  stage                    = "${var.stage}"
  name                     = "${var.postgres_name}"
  engine                   = "aurora-postgresql"
  engine_mode              = "serverless"
  engine_version           = "10.7"
  cluster_family           = "aurora-postgresql10"
  cluster_size             = "0"
  admin_user               = "${local.postgres_admin_user}"
  admin_password           = "${local.postgres_admin_password}"
  db_name                  = "${local.postgres_db_name}"
  db_port                  = "5432"
  vpc_id                   = "${data.terraform_remote_state.backing_services.vpc_id}"
  subnets                  = ["${data.terraform_remote_state.backing_services.public_subnet_ids}"]
  zone_id                  = "${local.zone_id}"
  publicly_accessible      = "true"
  allowed_cidr_blocks      = ["0.0.0.0/0"]
  enabled                  = "${var.postgres_cluster_enabled}"

  scaling_configuration = [
    {
      auto_pause               = true
      max_capacity             = "384"
      min_capacity             = "8"
      seconds_until_auto_pause = 300
    }
  ]
}

Valid capacity units for Postgres are 8, 16, 32, 64, 192, and 384, per https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.create.html

Count not evaluating properly for mysql aurora serverless

local.cluster_instance_count fails to evaluate to 0 when specifying:

cluster_size          = 0
autoscaling_enabled   = false

and module still attempts to create resource

resource "aws_rds_cluster_instance" "default" {

Additionally, specifying false for var.enabled is ineffective.

So the enabled variable doesn't appear to be working? How can the cluster instance be disabled when using Aurora Serverless?
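For reference, the expected behavior could be sketched roughly like this (a simplified, hypothetical version of the count logic, not the module's actual source):

```hcl
locals {
  # No instances should be created when the module is disabled,
  # or for serverless v1 clusters where cluster_size = 0
  cluster_instance_count = var.enabled && var.engine_mode != "serverless" ? var.cluster_size : 0
}

resource "aws_rds_cluster_instance" "default" {
  count = local.cluster_instance_count
  # ...existing arguments...
}
```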

How to migrate from the inline Security Group rules to SG rules as separate resources

This PR #80 changed the Security Group rules from inline to resource-based.

This is a good move since using inline SG rules is a "bad practice". Inline rules have many issues (one of them is that you can't add new rules to the security group since it's not possible to mix the inline rules and rules as separate resources).

At the same time, this introduced a breaking change: if you want to update the module to the latest version, Terraform will try to add the new resource-based rules to the security group and will fail since the same rules already exist and we can't mix inline rules with resource-based rules.

Note that it's not possible to taint and destroy the security group since it has a dependent object (an Elastic Network Interface), which in turn has its own dependencies.

One possible solution would be to destroy the Aurora RDS cluster completely and recreate it. While possible in some cases (e.g. in dev environments), it may not be feasible in others (e.g. a production database has data, and a long outage is not acceptable).

A better way would be to just destroy the inline security group rules without destroying the security group itself (and any other Aurora resources), and then add the resource-based security group rules.

Here are the steps to do that:

  1. Create a new branch of terraform-aws-rds-cluster module, e.g. strip-inline-sg-rules

  2. In the new branch, comment out all the aws_security_group_rule resources for resource "aws_security_group" "default"

  3. Add empty ingress and egress lists to the security group. NOTE: you can't skip the ingress and egress completely since terraform will not detect any changes to the inline rules (this is a bug/feature of TF):

resource "aws_security_group" "default" {
  name        = ...
  vpc_id      = var.vpc_id

  ingress = []
  egress  = []
}

NOTE: Branch strip-inline-sg-rules has been already created in this repository and steps 1-3 already performed.
The branch strip-inline-sg-rules can be used to perform the next steps.

  4. Update the Aurora cluster project to use the strip-inline-sg-rules branch of the terraform-aws-rds-cluster module:

module "aurora_postgres_cluster" {
  source = "git::https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=strip-inline-sg-rules"

  5. Apply the project. Terraform will just remove the inline rules from the security group without destroying the SG itself or any of the Aurora resources

  6. Update the Aurora cluster project to use the latest release of the terraform-aws-rds-cluster module:

module "aurora_postgres_cluster" {
  source = "git::https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=tags/0.34.0"

  7. Apply the project. Terraform will add the external resource-based SG rules

It takes a few minutes to go through all the steps, so the disruption to the production database will be minimal.

Upgrading DB version fails with InvalidParameterCombination for instance parameter group

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

Attempting to do a major version upgrade of an Aurora Postgres cluster from 11.13 to 12.9, on module version 0.50.2 with AWS provider 3.63.0. Below is my module config:

module "postgres" {
  source                      = "cloudposse/rds-cluster/aws"
  version                     = "0.50.2"
  name                        = "api-db"
  engine                      = "aurora-postgresql"
  cluster_family              = "aurora-postgresql12"
  engine_version              = "12.9"
  allow_major_version_upgrade = true
  apply_immediately           = true
  cluster_size                = 1
  admin_user                  = data.aws_ssm_parameter.db_admin_user.value
  admin_password              = data.aws_ssm_parameter.db_admin_password.value
  db_name                     = "api"
  db_port                     = 5432
  instance_type               = "db.t3.medium"
  vpc_id                      = var.vpc_id
  security_groups             = concat([aws_security_group.api.id], var.rds_security_group_inbound)
  subnets                     = var.rds_subnets
  storage_encrypted           = true
}

When running apply I get the error:

 Failed to modify RDS Cluster (api-db): InvalidParameterCombination: The current DB instance parameter group api-db-xxxxxxx is custom. You must explicitly specify a new DB instance parameter group, either default or custom, for the engine version upgrade.

Environment (please complete the following information):

Anything that will help us triage the bug will help. Here are some ideas:

  • OS: OSX
  • Module version: 0.50.2
  • Terraform AWS provider: 3.63.0

Cluster is recreated with every apply

This is linked to hashicorp/terraform#16724 and might be fixed by #35

Currently a terraform plan shows that the RDS cluster is recreated with every apply; the cause seems to be a wonky availability_zones attribute.
Somehow this triggers a new resource. See plan output.

-/+ module.rds_cluster_aurora_mysql.aws_rds_cluster.default (new resource required)
      id:                                  "namespace-stage-project" => <computed> (forces new resource)
      apply_immediately:                   "true" => "true"
      arn:                                 "arn:aws:rds:eu-central-1:123456789:cluster:namespace-stage-project" => <computed>
      availability_zones.#:                "3" => "2" (forces new resource)
      availability_zones.1126047633:       "eu-central-1a" => "eu-central-1a"
      availability_zones.2903539389:       "eu-central-1c" => "" (forces new resource)
      availability_zones.3658960427:       "eu-central-1b" => "eu-central-1b"
      backup_retention_period:             "5" => "5"
      cluster_identifier:                  "namespace-stage-project" => "namespace-stage-project"
      cluster_identifier_prefix:           "" => <computed>
      cluster_members.#:                   "2" => <computed>
      cluster_resource_id:                 "cluster-AAAXXXX" => <computed>
      database_name:                       "project" => "project"
      db_cluster_parameter_group_name:     "namespace-stage-project" => "namespace-stage-project"
      db_subnet_group_name:                "namespace-stage-project" => "namespace-stage-project"
      endpoint:                            "namespace-stage-project.cluster-sensitive.eu-central-1.rds.amazonaws.com" => <computed>
      engine:                              "aurora-mysql" => "aurora-mysql"
      engine_mode:                         "provisioned" => "provisioned"
      engine_version:                      "5.7.12" => <computed>
      final_snapshot_identifier:           "namespace-stage-project" => "namespace-stage-project"
      hosted_zone_id:                      "Z1RLSENSITIVE" => <computed>
      iam_database_authentication_enabled: "false" => "false"
      kms_key_id:                          "arn:aws:kms:eu-central-1:123456789:key/xxx" => <computed>
      master_password:                     <sensitive> => <sensitive> (attribute changed)
      master_username:                     "project" => "project"
      port:                                "3306" => <computed>
      preferred_backup_window:             "07:00-09:00" => "07:00-09:00"
      preferred_maintenance_window:        "wed:03:00-wed:04:00" => "wed:03:00-wed:04:00"
      reader_endpoint:                     "namespace-stage-project.cluster-ro-sensitive.eu-central-1.rds.amazonaws.com" => <computed>
      skip_final_snapshot:                 "false" => "false"
      storage_encrypted:                   "true" => "true"
      tags.%:                              "3" => "3"
      tags.Name:                           "namespace-stage-project" => "namespace-stage-project"
      tags.Namespace:                      "namespace" => "namespace"
      tags.Stage:                          "stage" => "stage"
      vpc_security_group_ids.#:            "1" => "1"
      vpc_security_group_ids.636648702:    "sg-080d3cfa4609edea8" => "sg-080d3cfa4609edea8"

-/+ module.rds_cluster_aurora_mysql.aws_rds_cluster_instance.default[0] (new resource required)
      id:                                  "namespace-stage-project-1" => <computed> (forces new resource)
      apply_immediately:                   "" => <computed>
      arn:                                 "arn:aws:rds:eu-central-1:123456789:db:namespace-stage-project-1" => <computed>
      auto_minor_version_upgrade:          "true" => "true"
      availability_zone:                   "eu-central-1a" => <computed>
      cluster_identifier:                  "namespace-stage-project" => "${aws_rds_cluster.default.id}" (forces new resource)
      db_parameter_group_name:             "namespace-stage-project" => "namespace-stage-project"
      db_subnet_group_name:                "namespace-stage-project" => "namespace-stage-project"
      dbi_resource_id:                     "db-SENSITIVE0" => <computed>
      endpoint:                            "namespace-stage-project-1.sensitive.eu-central-1.rds.amazonaws.com" => <computed>
      engine:                              "aurora-mysql" => "aurora-mysql"
      engine_version:                      "5.7.12" => <computed>
      identifier:                          "namespace-stage-project-1" => "namespace-stage-project-1"
      identifier_prefix:                   "" => <computed>
      instance_class:                      "db.t2.small" => "db.t2.small"
      kms_key_id:                          "arn:aws:kms:eu-central-1:123456789:key/xxx" => <computed>
      monitoring_interval:                 "0" => "0"
      monitoring_role_arn:                 "" => <computed>
      performance_insights_enabled:        "false" => <computed>
      performance_insights_kms_key_id:     "" => <computed>
      port:                                "3306" => <computed>
      preferred_backup_window:             "07:00-09:00" => <computed>
      preferred_maintenance_window:        "mon:04:25-mon:04:55" => <computed>
      promotion_tier:                      "0" => "0"
      publicly_accessible:                 "false" => "false"
      storage_encrypted:                   "true" => <computed>
      tags.%:                              "3" => "3"
      tags.Name:                           "namespace-stage-project" => "namespace-stage-project"
      tags.Namespace:                      "namespace" => "namespace"
      tags.Stage:                          "stage" => "stage"
      writer:                              "false" => <computed>

-/+ module.rds_cluster_aurora_mysql.aws_rds_cluster_instance.default[1] (new resource required)
      id:                                  "namespace-stage-project-2" => <computed> (forces new resource)
      apply_immediately:                   "" => <computed>
      arn:                                 "arn:aws:rds:eu-central-1:123456789:db:namespace-stage-project-2" => <computed>
      auto_minor_version_upgrade:          "true" => "true"
      availability_zone:                   "eu-central-1b" => <computed>
      cluster_identifier:                  "namespace-stage-project" => "${aws_rds_cluster.default.id}" (forces new resource)
      db_parameter_group_name:             "namespace-stage-project" => "namespace-stage-project"
      db_subnet_group_name:                "namespace-stage-project" => "namespace-stage-project"
      dbi_resource_id:                     "db-SENSITIVE1" => <computed>
      endpoint:                            "namespace-stage-project-2.sensitive.eu-central-1.rds.amazonaws.com" => <computed>
      engine:                              "aurora-mysql" => "aurora-mysql"
      engine_version:                      "5.7.12" => <computed>
      identifier:                          "namespace-stage-project-2" => "namespace-stage-project-2"
      identifier_prefix:                   "" => <computed>
      instance_class:                      "db.t2.small" => "db.t2.small"
      kms_key_id:                          "arn:aws:kms:eu-central-1:123456789:key/xxx" => <computed>
      monitoring_interval:                 "0" => "0"
      monitoring_role_arn:                 "" => <computed>
      performance_insights_enabled:        "false" => <computed>
      performance_insights_kms_key_id:     "" => <computed>
      port:                                "3306" => <computed>
      preferred_backup_window:             "07:00-09:00" => <computed>
      preferred_maintenance_window:        "sun:03:45-sun:04:15" => <computed>
      promotion_tier:                      "0" => "0"
      publicly_accessible:                 "false" => "false"
      storage_encrypted:                   "true" => <computed>
      tags.%:                              "3" => "3"
      tags.Name:                           "namespace-stage-project" => "namespace-stage-project"
      tags.Namespace:                      "namespace" => "namespace"
      tags.Stage:                          "stage" => "stage"
      writer:                              "true" => <computed>

Config:

module "rds_cluster_aurora_mysql" {
  source             = "git::https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=master"
  engine             = "aurora-mysql"
  cluster_family     = "aurora-mysql5.7"
  cluster_size       = "${var.rds_cluster_size}"
  namespace          = "dc"
  stage              = "${element(split("-", var.name), 1)}"
  name               = "${element(split("-", var.name), 0)}"
  admin_user         = "${element(split("-", var.name), 0)}"
  admin_password     = "${random_string.password.result}"
  db_name            = "${element(split("-", var.name), 0)}"
  instance_type      = "${var.rds_instance_type}"
  vpc_id             = "${aws_vpc.this.id}"
  availability_zones = ["${var.azs}"]
  security_groups    = ["${module.security_group_webapp.this_security_group_id}", "${module.security_group_bastion.this_security_group_id}"]
  subnets            = ["${aws_subnet.private.*.id}"]
  # zone_id            = "${aws_route53_zone.internal.zone_id}"
  storage_encrypted  = true
  maintenance_window = "wed:03:00-wed:04:00"
  skip_final_snapshot = false
}

I assume that it would work fine with the availability_zones variable dropped.
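Until the attribute is dropped, a common workaround (hedged, since ignore_changes has to live inside the module's own resource block) is to ignore diffs on availability_zones:

```hcl
resource "aws_rds_cluster" "default" {
  # ...existing arguments...

  lifecycle {
    # AWS can expand the AZ set after creation, which otherwise
    # forces cluster replacement on every plan
    ignore_changes = [availability_zones]
  }
}
```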

apply_method on cluster_parameters

Hello!

I have some cluster_parameters modifications defined, and if I use the "immediate" apply_method, the first time it creates the database correctly. But then AWS changes the parameter to pending-reboot internally, so every time I reapply my code Terraform sees the difference and applies it again.

Is there a correct way to avoid this?
Also, apply_method is not mandatory in Terraform (it defaults to immediate), but it is mandatory in your module. Why is that?

Thanks a lot
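One way to avoid the perpetual diff, offered here as an assumption rather than a confirmed fix, is to declare the parameter with apply_method = "pending-reboot" so the config matches what AWS stores internally:

```hcl
cluster_parameters = [
  {
    name         = "max_connections"  # hypothetical example parameter
    value        = "500"
    apply_method = "pending-reboot"   # matches what AWS records, avoiding drift
  }
]
```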

db_port not working as expected

db_port = 5454
I have defined the db_port value as 5454 in the .hcl file, but after applying, the RDS instances (reader and writer) are created with port 5432.
I am using the RDS configuration below:
engine = "aurora-postgresql"
engine_version = "10.14"
cluster_family = "aurora-postgresql10"

Support for missing storage variables

Have a question? Please checkout our Slack Community or visit our Slack Archive.

Slack Community

Describe the Feature

The aws_rds_cluster resource has the ability to have storage options specified.
We should be able to specify storage_type, iops and allocated_storage via this module.

Use Case

  • more flexibility for setting the size of rds with specified storage type and iops.
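The requested arguments, sketched as they might be passed through the module (the pass-through variables are hypothetical; the values are illustrative):

```hcl
module "rds_cluster" {
  source = "cloudposse/rds-cluster/aws"
  # version = "x.x.x"

  # hypothetical pass-through of the aws_rds_cluster storage arguments
  storage_type      = "io1"
  iops              = 1000
  allocated_storage = 100
  # ...remaining module arguments...
}
```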

Question Regarding parameter groups

I have a question regarding parameter groups. I have tried a couple of things, but I have not been able to construct a list of parameters for the aws_rds_cluster_parameter_group resource in the module. For example, I would like to set:

character_set_client=utf8
character_set_connection=utf8

Do you have an example definition for cluster_parameters?

Cheers
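A sketch of a cluster_parameters definition for the two settings mentioned, assuming the module's list-of-objects format:

```hcl
cluster_parameters = [
  {
    name         = "character_set_client"
    value        = "utf8"
    apply_method = "pending-reboot"
  },
  {
    name         = "character_set_connection"
    value        = "utf8"
    apply_method = "pending-reboot"
  }
]
```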

Update to latest available parameters

Have a question? Please checkout our Slack Community or visit our Slack Archive.

Slack Community

Describe the Feature

I noticed there are missing parameters available to consumers. One was the major upgrade version param. There may be others.

Error: Failed to modify RDS Cluster (sharedpostgres): InvalidParameterCombination: The AllowMajorVersionUpgrade flag must be present when upgrading to a new major version.
        status code: 400, request id: 3bfeabd4-6459-4cc3-a789-5e5e2663ac95

Expected Behavior

  • All configurable vars available to consumers

Use Case

  • Upgrading from version Postgres 10 to 11

Describe Ideal Solution

  • Upgrade worked and I could control it within my project

Alternatives Considered

  • Destroyed the whole cluster then recreated it

Additional Context

...
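With the parameter exposed, the upgrade could be driven from the consuming project along these lines (a sketch; version numbers are illustrative):

```hcl
module "postgres" {
  source = "cloudposse/rds-cluster/aws"
  # version = "x.x.x"

  engine                      = "aurora-postgresql"
  cluster_family              = "aurora-postgresql11"
  engine_version              = "11.9"
  allow_major_version_upgrade = true  # required when jumping major versions
  apply_immediately           = true
  # ...remaining module arguments...
}
```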

Cross-region replication not working

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

I am trying to configure an Aurora Global Cluster spanned across 2 regions. I create the aws_rds_global_cluster Terraform resource externally, and then I am trying to use your module to deploy the 2 sub-clusters in 2 regions. The main cluster works fine; the secondary raises errors:

creating RDS Cluster (): InvalidParameterCombination: Cannot specify database name for cross region replication cluster
creating RDS Cluster (): InvalidParameterCombination: Cannot specify user name for cross region replication cluster

I am using global_cluster_identifier to enable the cross-region replication feature. For the secondary cluster I am also specifying source_region to link it to the main one.

Using a local fork of your module I made it work by commenting out 3 lines in the source:

# https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/rds_cluster#replication_source_identifier
resource "aws_rds_cluster" "secondary" {
  count              = local.enabled && !local.is_regional_cluster ? 1 : 0
  cluster_identifier = var.cluster_identifier == "" ? module.this.id : var.cluster_identifier
  # database_name                       = var.db_name
  # master_username                     = local.ignore_admin_credentials ? null : var.admin_user
  # master_password                     = local.ignore_admin_credentials ? null : var.admin_password
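The same effect without forking could be achieved by making these arguments conditional, sketched here as a hypothetical patch (the `is_read_replica` local is an assumption):

```hcl
locals {
  # A secondary cluster in a global database must not set these arguments
  is_read_replica = var.global_cluster_identifier != "" && var.source_region != ""
}

resource "aws_rds_cluster" "secondary" {
  # ...existing arguments...
  database_name   = local.is_read_replica ? null : var.db_name
  master_username = local.is_read_replica ? null : var.admin_user
  master_password = local.is_read_replica ? null : var.admin_password
}
```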

Expected Behavior

I was expecting a second cluster to be deployed on the second region, having it connected to the global cluster

Steps to Reproduce

Steps to reproduce the behavior:

  1. Create a global cluster using aws_rds_global_cluster Terraform resource
  2. Create an Aurora MySQL cluster using your module, specifying global_cluster_identifier
  3. Create another Aurora MySQL cluster using your module, specifying global_cluster_identifier and source_region
  4. See the error

Screenshots

If applicable, add screenshots or logs to help explain your problem.

Environment (please complete the following information):

Anything that will help us triage the bug will help. Here are some ideas:

  • OS: [e.g. Linux, OSX, WSL, etc]
  • Version [e.g. 10.15]

Additional Context

Add any other context about the problem here.

Cannot treat Security Group egress in the same way as we do with ingress

Describe the Bug

When configuring security group ingress I can specify either a list of CIDR blocks, or an additional security group.
With egress, instead, I can only either disable it or have it fully open (any port, any protocol, 0.0.0.0/0)

Expected Behavior

Being able to specify CIDR and security groups for egress as well

Steps to Reproduce

N/A

Screenshots

No response

Environment

No response

Additional Context

No response
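Egress could mirror the existing ingress handling, sketched here with hypothetical variable names (`egress_cidr_blocks`, `egress_security_groups` are not current module inputs):

```hcl
resource "aws_security_group_rule" "egress_cidr_blocks" {
  count             = length(var.egress_cidr_blocks) > 0 ? 1 : 0
  type              = "egress"
  from_port         = var.db_port
  to_port           = var.db_port
  protocol          = "tcp"
  cidr_blocks       = var.egress_cidr_blocks
  security_group_id = aws_security_group.default.id
}

resource "aws_security_group_rule" "egress_security_groups" {
  count                    = length(var.egress_security_groups)
  type                     = "egress"
  from_port                = var.db_port
  to_port                  = var.db_port
  protocol                 = "tcp"
  source_security_group_id = var.egress_security_groups[count.index]
  security_group_id        = aws_security_group.default.id
}
```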

Support serverless v2

Have a question? Please checkout our Slack Community or visit our Slack Archive.

Slack Community

Describe the Feature

Add serverless v2 support

Expected Behavior

Can create a serverless v2 cluster by this module

Use Case

Create a serverless v2 cluster by this module

Describe Ideal Solution

Add a new config section like serverlessv2_scaling_configuration

Alternatives Considered

Create the cluster by AWS provider directly

Additional Context

The cluster instance class is "db.serverless"
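In the underlying AWS provider, a serverless v2 cluster pairs a serverlessv2_scaling_configuration block on the cluster with db.serverless instances. A minimal sketch (identifiers and versions are illustrative):

```hcl
resource "aws_rds_cluster" "example" {
  cluster_identifier = "serverless-v2-example"
  engine             = "aurora-postgresql"
  engine_mode        = "provisioned"  # v2 uses provisioned mode, unlike v1
  engine_version     = "13.7"
  master_username    = "admin1"
  master_password    = "change-me"

  serverlessv2_scaling_configuration {
    min_capacity = 0.5
    max_capacity = 4.0
  }
}

resource "aws_rds_cluster_instance" "example" {
  cluster_identifier = aws_rds_cluster.example.id
  instance_class     = "db.serverless"
  engine             = aws_rds_cluster.example.engine
  engine_version     = aws_rds_cluster.example.engine_version
}
```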

CreateDBInstance can't be used to create a DB instance in a Multi-AZ DB cluster. Use CreateDBCluster instead.

Describe the Bug

i am using minimal config to provision the db cluster, the cluster on console works properly but the terraform scripts fails in the end with the error message

│ Error: creating RDS Cluster (prod-mysql) Instance (prod-mysql-1): InvalidParameterValue: CreateDBInstance can't be used to create a DB instance in a Multi-AZ DB cluster. Use CreateDBCluster instead.
│   status code: 400, request id: 7ec7b266-62c3-46b0-89f3-8ad0782e73ef
│
│   with module.rds_mysql_idp.aws_rds_cluster_instance.default[0],
│   on .terraform/modules/rds_mysql/main.tf line 251, in resource "aws_rds_cluster_instance" "default":
│   251: resource "aws_rds_cluster_instance" "default" {

Expected Behavior

script should not fail as cluster is up and running

Steps to Reproduce

source = "cloudposse/rds-cluster/aws"
version = "1.9.0"

name = "name"
cluster_family = "mysql8.0"
engine = "mysql"
engine_mode = "provisioned"
engine_version = "8.0"
cluster_size = 1
namespace = var.namespace
stage = var.environment
admin_user = var.db_admin_username
admin_password = var.db_admin_password
db_name = "db_name"
db_port = 3306
db_cluster_instance_class = var.db_instance_type
vpc_id = var.vpc_id
security_groups = []
subnets = var.subnets
zone_id = var.zone_id
storage_type = "io1"
iops = 1000
allocated_storage = 100

That is the Terraform configuration used; applying it produces:

│ Error: creating RDS Cluster (bloom-prod-idpmysql) Instance (bloom-prod-idpmysql-1): InvalidParameterValue: CreateDBInstance can't be used to create a DB instance in a Multi-AZ DB cluster. Use CreateDBCluster instead.
│   status code: 400, request id: 7ec7b266-62c3-46b0-89f3-8ad0782e73ef
│
│   with module.rds_mysql_idp.aws_rds_cluster_instance.default[0],
│   on .terraform/modules/rds_mysql_idp/main.tf line 251, in resource "aws_rds_cluster_instance" "default":
│   251: resource "aws_rds_cluster_instance" "default" {

Screenshots

No response

Environment

module version : 1.9.0
Terraform v1.5.0
on darwin_amd64

  • provider registry.terraform.io/hashicorp/aws v4.67.0
  • provider registry.terraform.io/hashicorp/local v2.5.1
  • provider registry.terraform.io/hashicorp/null v3.2.2
  • provider registry.terraform.io/hashicorp/random v3.6.0
  • provider registry.terraform.io/hashicorp/tls v4.0.5

Additional Context

No response
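A plausible fix, sketched as a hypothetical change to the module's count logic: Multi-AZ DB clusters (non-Aurora engines) provision their own instances via CreateDBCluster, so the module should not create aws_rds_cluster_instance resources for them.

```hcl
locals {
  # Multi-AZ DB clusters use non-Aurora engines and manage their own
  # instances; only Aurora engines need explicit cluster instances
  is_multi_az_db_cluster = contains(["mysql", "postgres"], var.engine)

  cluster_instance_count = local.is_multi_az_db_cluster ? 0 : var.cluster_size
}
```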

Missing required db_cluster_instance_class variable when creating Multi A-Z RDS cluster

Slack Community

Describe the Bug

When setting up a provisioned Multi-AZ Postgres RDS cluster, we need to specify the db_cluster_instance_class attribute, otherwise it leads to the following error during the apply:

Error: error creating RDS cluster: InvalidParameterValue: DBClusterInstanceClass is required. status code: 400

Expected Behavior

When the missing db_cluster_instance_class is specified the rds cluster should be created normally.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Set the variables for a Multi-AZ RDS cluster:
availability_zones = ["us-east-2a", "us-east-2b", "us-east-2c"]
engine = "postgres"
engine_mode = "provisioned"
engine_version = "13.4"
db_cluster_instance_class = "db.m5d.large"
allocated_storage = 100
storage_type = "io1"
iops = 1000
  2. Do a terraform apply
  3. See error

Second destroy will fail if snapshot is not skipped due to snapshot conflict

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

If the cluster is created and destroyed and then created again and attempted a destroy again, the last destroy will fail because there is a snapshot with the same name as the last one.

โ”‚ Error: error deleting RDS Cluster (aurora-example-shared): DBClusterSnapshotAlreadyExistsFault: Cannot create the cluster snapshot because one with the identifier aurora-example-shared already exists.

Expected Behavior

Add a random id to the final snapshot when the cluster is created to avoid conflicts
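One way to implement the suggestion, sketched with a hypothetical random_id resource inside the module:

```hcl
resource "random_id" "snapshot_suffix" {
  byte_length = 4
}

resource "aws_rds_cluster" "default" {
  # ...existing arguments...

  # Suffix avoids DBClusterSnapshotAlreadyExistsFault on a second destroy
  final_snapshot_identifier = "${var.cluster_identifier}-final-${random_id.snapshot_suffix.hex}"
}
```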

Use an existing db cluster parameter group instead of creating new one

Describe the Feature

Since the db cluster parameter group quota is not adjustable, it's not feasible to always create a new cluster parameter group in a large system.

Expected Behavior

Add a new db cluster parameter group name variable.
Use an existing db cluster parameter group if one is specified.

Use Case

We have a large developing team that creates a lot of rds serverless clusters for development and testing.
Since the db cluster parameter group number is not adjustable, we can't create more.

Because nearly all of these RDS clusters are for testing only, a shared default cluster parameter group is acceptable in our environment.

Describe Ideal Solution

Add a new db cluster parameter group name variable.
Use an existing db cluster parameter group if one is specified.

Alternatives Considered

Change to a db instance as a workaround; however, it's not cost-efficient. A serverless cluster is very good for our R&D testing.

Additional Context

No response
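The requested behavior might be wired up like this (the variable name and local are hypothetical, not the module's current interface):

```hcl
variable "cluster_parameter_group_name" {
  type        = string
  default     = ""
  description = "Name of an existing DB cluster parameter group to use instead of creating one"
}

locals {
  create_parameter_group = var.cluster_parameter_group_name == ""
}

resource "aws_rds_cluster" "default" {
  # ...existing arguments...
  db_cluster_parameter_group_name = local.create_parameter_group ? join("", aws_rds_cluster_parameter_group.default.*.name) : var.cluster_parameter_group_name
}
```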

Missing arguments

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

Looking at https://www.terraform.io/docs/providers/aws/r/rds_cluster.html and https://www.terraform.io/docs/providers/aws/r/rds_cluster_instance.html the following arguments are missing:

aws_rds_cluster - missing arguments:

availability_zones
cluster_identifier_prefix
db_subnet_group_name
port

rds_cluster_instance - missing arguments:

identifier_prefix
apply_immediately
promotion_tier
preferred_backup_window
preferred_maintenance_window
auto_minor_version_upgrade
copy_tags_to_snapshot
ca_cert_identifier

Expected Behavior

Arguments included and configurable if necessary

Incorrect cluster_instance_count calculation when autoscaling_enabled = true

For Aurora, autoscaling only applies to read replicas. However, the terraform code here does not support creation of an autoscaling group with one read replica.
This seems related to #61, but is slightly different, I believe

The instance_count and cluster_instance_count calculations are not correct when autoscaling_enabled = true.

Steps to reproduce:

  1. Create cluster with autoscaling_enabled = false (default)
  2. change autoscaling_enabled to true, and accept the default value for autoscaling_min_capacity (default: 1)

result: two new resources are created

  • resource "aws_appautoscaling_policy" "replicas"
  • resource "aws_appautoscaling_target" "replicas"
    and one resource is deleted
  • resource "aws_rds_cluster_instance" "default"

The aws_rds_cluster_instance is deleted because the value of local.cluster_instance_count has changed from 2 (the default when autoscaling_enabled = false) to 1 (based on different logic when autoscaling_enabled = true).

I confirmed this by setting autoscaling_min_capacity to 2. With this value, the resource aws_rds_cluster_instance is unmodified; however, in this case, the number of read replicas created is 2.

Potential fix:
min_instance_count = var.autoscaling_enabled ? var.autoscaling_min_capacity + 1 : var.cluster_size
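Expanded into surrounding locals, the proposed fix might look like this (a sketch of the idea, not the merged implementation):

```hcl
locals {
  # With autoscaling, keep one writer plus the minimum number of replicas
  # under Terraform's control; the autoscaling target manages the rest
  cluster_instance_count = var.autoscaling_enabled ? var.autoscaling_min_capacity + 1 : var.cluster_size
}
```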

Add option for RDS/Aurora Managed Master Passwords via Secrets Manager

Describe the Feature

We want to use the RDS integration with Secrets Manager so that the master password is managed by RDS and rotated by Secrets Manager.
This option is available in Terraform via the manage_master_user_password argument:
Set to true to allow RDS to manage the master user password in Secrets Manager. Cannot be set if master_password is provided.
Currently the Cloud Posse module does not allow enabling this feature.

Expected Behavior

The module allows enabling the managed master user password feature in RDS.

Use Case

Managed secrets are more secure and easier to use.

Describe Ideal Solution

  • Add a variable to enable managed master user password option in RDS.
  • Add an output block that contains the secret ARN (see the master_user_secret reference in the Terraform docs).
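A sketch of what that wiring could look like (variable and output names are suggestions, not the module's actual interface):

```hcl
variable "manage_master_user_password" {
  type        = bool
  default     = false
  description = "Let RDS manage the master user password in Secrets Manager"
}

resource "aws_rds_cluster" "default" {
  # ...
  # master_password and manage_master_user_password are mutually exclusive,
  # so only one of them may be set at a time.
  manage_master_user_password = var.manage_master_user_password ? true : null
  master_password             = var.manage_master_user_password ? null : var.admin_password
}

output "master_user_secret" {
  description = "Secret details (including ARN) when RDS manages the master password"
  value       = aws_rds_cluster.default.master_user_secret
}
```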

Alternatives Considered

No response

Additional Context

No response

Implement rolling update for instances.

Describe the Feature

While updating the instance_type recently in preparation for a major version upgrade, both instances were upgraded in parallel, resulting in significant downtime. I found a simple fix for this, which I will submit as a pull request.

Expected Behavior

At least one new node is in service at all times.

Use Case

Zero- or minimal-downtime deploys.

Describe Ideal Solution

A rolling update.

Alternatives Considered

I considered a blue/green update, which I was even able to implement using create_before_destroy. I can provide this implementation if anyone is interested.
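For reference, the create_before_destroy approach mentioned above amounts to a lifecycle block on the instance resource (a sketch, not the module's actual code):

```hcl
resource "aws_rds_cluster_instance" "default" {
  # ... existing arguments ...

  lifecycle {
    # Bring the replacement instance into service before destroying the old
    # one, so at least one node keeps serving traffic during instance_type
    # changes. Note this temporarily doubles the instance count.
    create_before_destroy = true
  }
}
```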

Additional Context

No response

Creating Postgres Multi A-Z RDS cluster running into error InvalidParameterValue: CreateDBInstance

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

When trying to create a Multi A-Z postgres cluster, it runs into the following error:

Error: error creating RDS Cluster (eg-test-rds-cluster) Instance: InvalidParameterValue: CreateDBInstance can't be used to create a DB instance in a Multi-AZ DB cluster. Use CreateDBCluster instead.
│ 	status code: 400, request id: xxx-xxxx-xxxxx-xxxxx
│
│   with module.rds_cluster.aws_rds_cluster_instance.default[0],
│   on ../../main.tf line 240, in resource "aws_rds_cluster_instance" "default":

The resource aws_rds_cluster_instance is specifically used for Aurora engine types (aurora, aurora-mysql, aurora-postgresql); see the Terraform documentation.

Expected Behavior

When setting up non-Aurora engine types, creation of the aws_rds_cluster_instance resource should be skipped.
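One way to express that skip, assuming an engine-based flag (the local name is illustrative):

```hcl
locals {
  # Multi-AZ DB clusters (engine = postgres/mysql) create their own instances
  # via CreateDBCluster, so separate aws_rds_cluster_instance resources only
  # apply to Aurora engine types.
  is_aurora = length(regexall("^aurora", var.engine)) > 0
}

resource "aws_rds_cluster_instance" "default" {
  count = local.is_aurora ? var.cluster_size : 0
  # ...
}
```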

Steps to Reproduce

Steps to reproduce the behavior:

  1. Create a non-Aurora Multi-AZ RDS cluster with the following vars:
availability_zones = ["us-east-2a", "us-east-2b", "us-east-2c"]
engine = "postgres"
engine_mode = "provisioned"
engine_version = "13.4"
db_cluster_instance_class = "db.m5d.large"
allocated_storage = 100
storage_type = "io1"
iops = 1000
  2. Run terraform apply
  3. See error

Additional Context

Add any other context about the problem here.

enable_http_endpoint not working for serverlessv2 configurations

Describe the Bug

When instance_type is "db.serverless" (Serverless v2), engine_mode does not accept the value "serverless", but that value is required to enable the Data API via enable_http_endpoint = true. As a result, the condition only applies to Serverless v1.

Expected Behavior

That

...
instance_type        = "db.serverless"
enable_http_endpoint = true
...

would enable the Data API for serverless V2
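For context, Serverless v2 clusters use engine_mode = "provisioned" together with a db.serverless instance class; a sketch of the shape involved (whether enable_http_endpoint is accepted here depends on the engine and provider version, so treat it as an assumption to verify):

```hcl
resource "aws_rds_cluster" "default" {
  engine               = "aurora-postgresql"
  engine_mode          = "provisioned" # Serverless v2 clusters are "provisioned"
  enable_http_endpoint = true          # Data API; support varies by engine/provider version

  serverlessv2_scaling_configuration {
    min_capacity = 0.5
    max_capacity = 4
  }
}

resource "aws_rds_cluster_instance" "default" {
  cluster_identifier = aws_rds_cluster.default.id
  instance_class     = "db.serverless" # Serverless v2 instance class
  engine             = aws_rds_cluster.default.engine
}
```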

Steps to Reproduce

...
instance_type` = "db.serverless"
enable_http_endpoint = true
...

Screenshots

No response

Environment

OSX, M1

Additional Context

No response

Invalid parameter value while trying to use engine as aurora-mysql

Terraform version 0.12.24

I am trying to create an Aurora MySQL serverless RDS cluster with the configuration below, but I run into an InvalidParameterValue error when I use aurora-mysql. It works fine if I use engine = "aurora". terraform plan does not report any error.

provider "aws" {
  region = "us-east-1"
}

resource "aws_rds_cluster" "serverless" {
  cluster_identifier   = "serverless-dev"
  engine               = "aurora-mysql"
  engine_mode          = "serverless"
  master_username      = "dba_admin"
  master_password      = "changemepass"
  skip_final_snapshot  = true
  db_subnet_group_name = "serverless-vpc"
}

Error: error creating RDS cluster: InvalidParameterValue: The engine mode serverless you requested is currently unavailable.
status code: 400, request id: 2294c942-fec5-4f45-a9e0-7520e33b73b8
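A likely cause is the missing engine_version: Serverless v1 for aurora-mysql was only offered on specific Aurora MySQL 5.7-based versions, so the cluster needs an explicit version pin. A sketch (the exact version string below is an assumption; check availability in your region):

```hcl
resource "aws_rds_cluster" "serverless" {
  cluster_identifier  = "serverless-dev"
  engine              = "aurora-mysql"
  engine_mode         = "serverless"
  # Serverless v1 with aurora-mysql requires a supported 5.7-based version:
  engine_version      = "5.7.mysql_aurora.2.07.1"
  master_username     = "dba_admin"
  master_password     = "changemepass"
  skip_final_snapshot = true

  db_subnet_group_name = "serverless-vpc"
}
```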

Allow point in time restoration using a specific datetime

Describe the Feature

The AWS Console and AWS's vanilla aws_rds_cluster resource allow specifying a datetime, as opposed to always using the latest restorable time.

Expected Behavior

Have the option to pass restore_to_time as a UTC datetime string instead of passing use_latest_restorable_time (or passing it as false).

Use Case

Having this option is really valuable for data recovery after an incident where the data at the latest restorable time may be corrupt.

Describe Ideal Solution

Have a new RDS Cluster created using restored data from a particular point in time (not necessarily the latest point in time).
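In the underlying resource this maps to the restore_to_point_in_time block; a sketch of what the module could pass through (identifiers and timestamp are placeholders):

```hcl
resource "aws_rds_cluster" "restored" {
  # ...
  restore_to_point_in_time {
    source_cluster_identifier = "my-source-cluster"
    restore_type              = "copy-on-write"
    # Mutually exclusive with use_latest_restorable_time:
    restore_to_time = "2023-06-01T12:00:00Z"
  }
}
```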

Alternatives Considered

No response

Additional Context

No response

Cannot restore cluster from snapshot without removing auto-scaling profile

We are using this module to provision an auto-scaling read replica and it is working well. However, when we try to rebuild the cluster from a snapshot, the apply fails with the following error.

Error: error deleting Database Instance "db-instance-1": AccessDenied: User: arn:aws:sts::xxxxxxxxxxxx:assumed-role/jenkins is not authorized to perform: rds:DeleteDBInstance on resource: arn:aws:rds:us-east-2:xxxxxxxxxxx:db:db-instance-1
status code: 403, request id: a43bf094-e294-4ecd-ad51-6d7ad78689b8

To work around this, we need to remove the read replicas and the auto-scaling profile of the Aurora cluster before restoring RDS from a snapshot.

Unable to set `performance_insights_enabled` to false

Describe the Bug

Unable to set the variable performance_insights_enabled to false. While it is set to false, Terraform throws the following error:

Error: creating RDS Cluster (dev-db) Instance (dev-db-1): InvalidParameterCombination: To enable Performance Insights, EnablePerformanceInsights must be set to 'true'

Expected Behavior

In our dev environment we may not want to enable Performance Insights in order to save money. I would have expected to be able to tell the module to set it to false. It would be great if we could make this a bit more dynamic.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Create a cluster with the cloudposse/rds-cluster/aws module
  2. Set performance_insights_enabled to false
  3. Run terraform apply
  4. See error
Error: creating RDS Cluster (dev-db) Instance (dev-db-1): InvalidParameterCombination: To enable Performance Insights, EnablePerformanceInsights must be set to 'true'
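A sketch of how the flag could be passed through safely; the key detail is that a Performance Insights KMS key must not be sent when insights are disabled, or RDS rejects the combination (variable names are assumptions):

```hcl
resource "aws_rds_cluster_instance" "default" {
  # ...
  performance_insights_enabled = var.performance_insights_enabled

  # Only pass a KMS key when insights are on; sending one together with
  # EnablePerformanceInsights=false triggers InvalidParameterCombination.
  performance_insights_kms_key_id = var.performance_insights_enabled ? var.performance_insights_kms_key_id : null
}
```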

Screenshots

If applicable, add screenshots or logs to help explain your problem.

Environment (please complete the following information):

Alpine linux
terraform 1.2.5

Required variables

There's nowhere I could find the minimum variables needed for creating a cluster.

I believe the required ones are:

  • vpc_id
  • security_groups
  • zone_id
  • admin_password
  • subnets
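Assuming that list is right, a minimal invocation would look something like this (all values are placeholders; namespace/stage/name follow the label conventions this module already uses):

```hcl
module "rds_cluster" {
  source = "cloudposse/rds-cluster/aws"
  # version = "x.x.x"

  namespace       = "eg"
  stage           = "dev"
  name            = "db"
  vpc_id          = "vpc-xxxxxxxx"
  security_groups = ["sg-xxxxxxxx"]
  subnets         = ["subnet-xxxxxxxx", "subnet-yyyyyyyy"]
  zone_id         = "Zxxxxxxxx"
  admin_password  = "change-me"
}
```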

Add Example Usage

what

  • Add example invocation

why

  • We need this so we can soon enable automated continuous integration testing of module

Consider making zone_id an optional parameter

Hi there,

I noticed that the zone_id parameter is mandatory. Although it is nice to have friendly DNS names for RDS endpoints, this may cause issues when SSL connections are used. Perhaps it's worth considering making the zone_id parameter optional.
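Making it optional would presumably mean guarding the Route 53 record on the variable, along these lines (a sketch; resource and variable names are assumptions):

```hcl
variable "zone_id" {
  type        = string
  default     = ""
  description = "Route53 zone ID; leave empty to skip creating DNS records"
}

resource "aws_route53_record" "master" {
  # Skip the record entirely when no zone is supplied
  count   = length(var.zone_id) > 0 ? 1 : 0
  zone_id = var.zone_id
  # ...
}
```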

// Siert

Reiterate on BridgeCrew warnings

Describe the Bug

A bunch of unrelated warnings appeared in a PR: #126.
I think BridgeCrew has updated their database.

Expected Behavior

Keep master clean so that people don't get confused when they contribute

Aurora Serverless "BackupWindow" parameter not working

Describe the Bug

It seems like AWS does not support (or has a bug with) preferred_maintenance_window and preferred_backup_window for Aurora Serverless.
Existing AWS issue: aws-cloudformation/cloudformation-coverage-roadmap#396
This was also reported internally to the AWS team.

Expected Behavior

Everything should work as expected

Steps to Reproduce

Steps to reproduce the behavior:

  1. Go to '...'
  2. Run '....'
  3. Enter '....'
  4. See error

Screenshots

Create a serverless database from a snapshot. Right after it finishes, Terraform will throw:
Error: error modifying RDS Cluster (tc-staging-shared-main-rds): InvalidParameterCombination: You currently can't modify BackupWindow with Aurora Serverless. status code: 400, request id: aa5042ba-f0be-49ea-a695-e68da91a01f8

If you run Terraform again, it will say the cluster is tainted and must be replaced.
Moreover, a lot of values are just wrong; the created resource has totally different values than those specified in the Terraform code:

  • backup_retention_period (1 day instead of configured 7)
  • preferred_backup_window
  • preferred_maintenance_window
  • master_username (the module passes a default, but the instance was created from a snapshot)


Environment (please complete the following information):

  • OS: Windows 10

Additional Context

I think you could have a dedicated resource for serverless clusters with the above fields omitted.

Action of deleting serverlessv2_scaling_configuration has no effect

Describe the Bug

The serverlessv2_scaling_configuration block cannot be deleted.

Expected Behavior

No change should be detected.

From AWS's documentation, it seems there is no way to delete these settings. But the Terraform plan makes it look like it's going to delete them. It would be great not to report this type of change (setting the values to null) until it can actually be carried out, so the change doesn't reappear again and again.

Steps to Reproduce

  1. Create a regional RDS cluster with one writer with terraform.
  2. From the AWS console, add a reader with DB instance class Serverless v2
  3. Delete the reader from the AWS console. Now if we click "Modify" on the cluster, we will see the leftover Serverless v2 capacity settings such as Minimum ACUs and Maximum ACUs.
  4. When we run terraform plan, it always detects changes like:
     ~ serverlessv2_scaling_configuration {
         - max_capacity = 128 -> null
         - min_capacity = 2 -> null
       }
     But it won't actually change these settings or delete the whole Serverless v2 capacity settings section from the cluster when we run terraform apply.
  5. When we rerun terraform plan, the above change shows up again.
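Until the provider can actually remove the block, a consumer-side workaround is to ignore it (a sketch; note this suppresses intentional capacity changes too, not just the perpetual diff):

```hcl
resource "aws_rds_cluster" "default" {
  # ...
  lifecycle {
    # Suppress the recurring plan diff on leftover Serverless v2 settings
    ignore_changes = [serverlessv2_scaling_configuration]
  }
}
```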

Screenshots

No response

Environment

  • Module version: 0.44.0
  • Terraform version: 1.3.2

Additional Context

No response

Dropping variable availability_zones ?

Hi,

availability_zones is an EC2-Classic-era parameter; I believe the module and the examples would be better if EC2-Classic support were dropped. The current examples mix EC2-Classic params with VPC params.

availability_zones - (Optional) A list of EC2 Availability Zones that instances in the DB cluster can be created in
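In a VPC, placement is driven by the subnet group rather than by availability_zones; a sketch of the VPC-native shape (names and IDs are placeholders):

```hcl
resource "aws_db_subnet_group" "default" {
  name       = "example"
  subnet_ids = ["subnet-xxxxxxxx", "subnet-yyyyyyyy"] # the subnets pin the AZs
}

resource "aws_rds_cluster" "default" {
  # ...
  db_subnet_group_name = aws_db_subnet_group.default.name
  # availability_zones omitted; the AZs follow from the subnet group
}
```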
