cloudposse / terraform-aws-elastic-beanstalk-environment
Terraform module to provision an AWS Elastic Beanstalk Environment
Home Page: https://cloudposse.com/accelerate
License: Apache License 2.0
At the moment, any load-balanced web environment is automatically created with an eb-loadbalancer-logs
bucket and logging enabled.
It would be nice to have an option to disable this behavior.
We don't actually need or want logs on a couple of internal environments.
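A minimal sketch of how such an opt-out could look. The variable name `loadbalancer_logs_enabled` is hypothetical (it is not an existing module input), and the bucket resource is simplified:

```hcl
# Hypothetical input to opt out of the logs bucket (not an existing module variable).
variable "loadbalancer_logs_enabled" {
  type        = bool
  default     = true
  description = "Whether to create the eb-loadbalancer-logs S3 bucket and enable ELB access logging"
}

# The bucket would then be created conditionally via count.
resource "aws_s3_bucket" "elb_logs" {
  count  = var.loadbalancer_logs_enabled ? 1 : 0
  bucket = "eb-loadbalancer-logs"
}
```

Keeping the default at `true` would preserve the current behavior for existing users.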
I use cloudposse modules and see the following warnings. Do you plan to address them?
Warning: Quoted references are deprecated
on .terraform/modules/subnets/private.tf line 45, in resource "aws_subnet" "private":
45: ignore_changes = ["tags.kubernetes", "tags.SubnetType"]
In this context, references are expected literally rather than in quotes.
Terraform 0.11 and earlier required quotes, but quoted references are now
deprecated and will be removed in a future version of Terraform. Remove the
quotes surrounding this reference to silence this warning.
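As the warning says, the fix in the upstream module is simply to drop the quotes around the references; roughly:

```hcl
resource "aws_subnet" "private" {
  # ...

  lifecycle {
    # Terraform 0.12+ expects bare references instead of quoted strings:
    ignore_changes = [tags.kubernetes, tags.SubnetType]
    # index syntax is also accepted: [tags["kubernetes"], tags["SubnetType"]]
  }
}
```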
module "vpc" {
source = "git::https://github.com/cloudposse/terraform-aws-vpc.git?ref=0.8.0"
namespace = var.namespace
stage = var.stage
name = var.name
tags = var.tags
cidr_block = "172.32.0.0/16"
}
module "subnets" {
source = "git::https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=tags/0.16.0"
availability_zones = var.availability_zones
namespace = var.namespace
stage = var.stage
name = var.name
vpc_id = module.vpc.vpc_id
igw_id = module.vpc.igw_id
cidr_block = module.vpc.vpc_cidr_block
nat_gateway_enabled = true
nat_instance_enabled = false
tags = var.tags
}
Found a bug? Maybe our Slack Community can help.
The S3 bucket is not getting created; it returns a 400 response with an "Invalid name" error.
The ELB log S3 bucket should be created.
Hi, just wanted to know if there are any plans to support Terraform v0.12 in the near future.
The AWS provider version constraint ~> 2.0 conflicts with modules that require ~> 3.x versions. For example, my module requires AWS ~> 3.4.0 and I get:
Could not retrieve the list of available versions for provider hashicorp/aws:
no available releases match the given constraints ~> 3.4.0, ~> 2.0, ~> 2.0, ~>
3.4.0
Hi guys,
Thank you for the great work you've made available. I have bumped into this particular use case:
I'm using multi-Docker environments that are reachable through an Application Load Balancer with host headers enabled. Some of the applications are reachable through other host headers, e.g. app.domain1.com and web.domain2.com.
Terraform recently presented a fix for the AWS provider which allows usage of multiple ACM certificates for an Application Loadbalancer (terraform-aws-modules/terraform-aws-alb#26)
.
TL;DR: is it possible to change loadbalancer_certificate_arn to type list?
loadbalancer_certificate_arn = ["${module.acm-application.acm_certificate_arn}"]
Hello,
I recently started using your module and was able to create a new Elastic Beanstalk environment successfully. However, I need my load balancers to be internal-facing only. I could not find a way to specify this within the module. The module requires 'public_subnets', so I assume this functionality is not currently available?
In AWS, you can specify the visibility as shown in the attached screenshot. Thanks!
Some platforms (e.g. .NET) do not support the enhanced health system, yet there is no option to change this to basic. This causes the error:
ConfigurationValidationException: Configuration validation exception: Enhanced health reporting system is not supported by current solution stack
Solution would be to add a variable for this.
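Until a dedicated variable exists, this could presumably be passed through additional_settings; the namespace and option name below are the standard Elastic Beanstalk ones for health reporting:

```hcl
additional_settings = [
  {
    namespace = "aws:elasticbeanstalk:healthreporting:system"
    name      = "SystemType"
    value     = "basic" # instead of the default "enhanced"
  }
]
```

Whether this wins over a value the module sets itself would need testing.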
I'm getting the following error when using the eu-north-1 region:
Error: Invalid index
on .terraform/modules/eu-north-1.foo.elastic_beanstalk_environment/outputs.tf line 22, in output "elb_zone_id":
22: value = var.alb_zone_id[var.region]
|----------------
| var.alb_zone_id is map of string with 15 elements
| var.region is "eu-north-1"
The given key does not identify an element in this collection value.
Taking a look at the code, some regions are missing.
# From: http://docs.aws.amazon.com/general/latest/gr/rande.html#elasticbeanstalk_region
# Via: https://github.com/hashicorp/terraform/issues/7071
variable "alb_zone_id" {
type = map(string)
default = {
ap-northeast-1 = "Z1R25G3KIG2GBW"
ap-northeast-2 = "Z3JE5OI70TWKCP"
ap-south-1 = "Z18NTBI3Y7N9TZ"
ap-southeast-1 = "Z16FZ9L249IFLT"
ap-southeast-2 = "Z2PCDNR3VC2G1N"
ca-central-1 = "ZJFCZL7SSZB5I"
eu-central-1 = "Z1FRNW7UH4DEZJ"
eu-west-1 = "Z2NYPWQ7DFZAZH"
eu-west-2 = "Z1GKAAAUGATPF1"
sa-east-1 = "Z10X7K2B4QSOFV"
us-east-1 = "Z117KPS5GTRQ2G"
us-east-2 = "Z14LCN19Q5QHIC"
us-west-1 = "Z1LQECGX5PH1X"
us-west-2 = "Z38NKT9BP95V3O"
eu-west-3 = "ZCMLWB8V5SYIT"
}
description = "ALB zone id"
}
So the regions below are missing (note that eu-central-1 and sa-east-1 already appear in the map above). Please update them.
Africa (Cape Town) | af-south-1
Asia Pacific (Hong Kong) | ap-east-1
Asia Pacific (Osaka-Local) | ap-northeast-3
Europe (Milan) | eu-south-1
Europe (Stockholm) | eu-north-1
Middle East (Bahrain) | me-south-1
Hi there,
Fantastic module. I've managed to get an EB env up and running with around 13 environment variables and all looks good, except I seem to be getting 3 "rogue" environment variables being set:
DEFAULT_ENV_13
DEFAULT_ENV_27
DEFAULT_ENV_41
and only these 3. I think the way you've handled the environment variables is pretty nice, as all the ugliness of EB variables is hidden in the module. I can't see why I'm only getting properties for these 3 variables, though, because my env_vars map looks like:
env_vars = "${
map(
"AWS_REGION", "${local.aws_region}",
"BUNDLE_WITHOUT", "test:development",
"REDIS_PORT", "6379",
"REDIS_URL", "${local.redis_url}",
"APPLICATION_NAME", "foo",
"LOAD_DB", "none",
"RACK_ENV", "staging",
"RAILS_SKIP_ASSET_COMPILATION", "false",
"RAILS_SKIP_MIGRATIONS", "false",
"RDS_DB_NAME", "mydb",
"RDS_PORT", "5432",
"RDS_USERNAME", "myuser",
"API_URL", "https://api.foo.com"
)
}"
Any thoughts why I might get the 3 rogues?
I think it would be great to allow the use of dynamic blocks to set the additional_settings
block. Right now (as far as I can see) I can only write each of the values by hand.
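A dynamic block inside the module's aws_elastic_beanstalk_environment resource could look roughly like this (a sketch; it assumes additional_settings is a list of objects with namespace/name/value fields):

```hcl
# Inside resource "aws_elastic_beanstalk_environment" "default" { ... }
dynamic "setting" {
  for_each = var.additional_settings
  content {
    namespace = setting.value.namespace
    name      = setting.value.name
    value     = setting.value.value
  }
}
```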
Hi,
Got an email from AWS saying that AWSElasticBeanstalkService, the managed IAM policy used in main.tf, will be deprecated soon and should be switched to AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy.
Was wondering if you plan to switch that.
Thank you
Currently the solution stack attribute is not ignored. Beanstalk automatically updates the environment because managed actions are allowed; when we run Terraform again, it downgrades it back to the version specified in the solution stack attribute.
I tried the dynamic block approach, but apparently the lifecycle block isn't allowed to be dynamic.
As part of recent commit 93860ad, the version of module terraform-null-label has been updated from 0.3.1 to 0.5.0.
This seems to have introduced a breaking change in the way the environment tag is handled.
Following that change, TF plan indicates that the following tags will be created:
tags.%: "" => "4"
tags.Environment: "" => ""
tags.Name: "" => "xxx"
tags.Namespace: "" => "yyy"
tags.Stage: "" => "demo"
The only empty tag value is for tags.Environment.
It then fails when applying with the error:
* aws_elastic_beanstalk_environment.default: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, CreateEnvironmentInput.Tags[1].Value.
This is due to the fact that the label resource of module terraform-aws-elastic-beanstalk-environment does not specify an environment variable.
I tried adding the environment variable to the resource (https://github.com/vlaurin/terraform-aws-elastic-beanstalk-environment/commit/edd54e1416e9ee120a10cdad900e0bc9d74884b7#diff-7a370d8342e7203b805911c92454f0f4R6) and it does resolve the issue.
Beanstalk now supports spot instances:
https://aws.amazon.com/about-aws/whats-new/2019/11/aws-elastic-beanstalk-adds-support-for-amazon-ec2-spot-instances/
The 64bit Amazon Linux 2018.03 v2.11.9 running Multi-container Docker 18.06.1-ce (Generic) solution stack needs at least ecs:RegisterContainerInstance on the EC2 instance role to work properly.
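A sketch of an instance-role policy statement granting the ECS permissions the multi-container Docker platform typically needs. The exact action list may vary; these actions mirror what AWS's managed AWSElasticBeanstalkMulticontainerDocker policy grants:

```hcl
data "aws_iam_policy_document" "ecs_for_eb" {
  statement {
    actions = [
      "ecs:RegisterContainerInstance",
      "ecs:DeregisterContainerInstance",
      "ecs:DiscoverPollEndpoint",
      "ecs:Poll",
      "ecs:StartTask",
      "ecs:StopTask",
      "ecs:Submit*",
    ]
    resources = ["*"]
  }
}
```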
When adding an S3 user with a key and secret, TF 0.14 fails with this error:
Error: Output refers to sensitive values
on .terraform/modules/elastic_beanstalk_environment/outputs.tf line 41:
41: output "setting" {
Expressions used in outputs can only refer to sensitive values if the
sensitive attribute is true.
Terraform code:
module "s3_user_assets" {
source = "git::https://github.com/cloudposse/terraform-aws-iam-s3-user.git?ref=master"
namespace = local.name
stage = local.stage
name = "assets"
s3_actions = ["s3:ListBucket",
"s3:ListBucketMultipartUploads",
"s3:ListBucketVersions",
"s3:GetBucketVersioning",
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject",
"s3:DeleteObjectVersion",
"s3:ListMultipartUploadParts",
"s3:GetObjectVersion",
"s3:AbortMultipartUpload"]
s3_resources = [module.s3_assets.this_s3_bucket_arn, "${module.s3_assets.this_s3_bucket_arn}/*"]
}
Elastic Beanstalk module:
module "elastic_beanstalk_environment" {
source = "cloudposse/elastic-beanstalk-environment/aws"
# Cloud Posse recommends pinning every module to a specific version
version = "0.37.0"
...
additional_settings = [
{
namespace = "aws:elasticbeanstalk:application:environment"
name = "EFS_NAME"
value = aws_efs_file_system.files.dns_name
},
{
namespace = "aws:elasticbeanstalk:application:environment"
name = "S3_ACCESS_KEY_ID"
value = module.s3_user_assets.access_key_id
},
{
namespace = "aws:elasticbeanstalk:application:environment"
name = "S3_SECRET_ACCESS_KEY"
value = module.s3_user_assets.secret_access_key
},
]
...
I worked around this by manually adding the requested sensitive = true to the outputs.tf in the module's cache folder.
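The workaround amounts to something like the following in the module's outputs.tf (the value expression here is illustrative, not the module's exact code):

```hcl
output "setting" {
  value       = aws_elastic_beanstalk_environment.default.all_settings
  sensitive   = true # required by TF 0.14 when settings contain sensitive values
  description = "Settings of the Elastic Beanstalk environment"
}
```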
The variables should be added without error.
Steps to reproduce the behavior:
See source code above.
DeploymentPolicy can currently only be Immutable or Rolling, but I would like to use others like RollingWithAdditionalBatch.
There would be a deployment_policy parameter that can be used to specify the required deployment policy, and it would support all policies: all at once, rolling, rolling with additional batch, immutable, and traffic splitting.
I tried to use additional_settings to define the unsupported policy, but the original one applies instead.
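For reference, the option being overridden lives in the aws:elasticbeanstalk:command namespace, so the attempted additional_settings entry would look like this (as noted above, the module's own hardcoded setting appears to win, so this alone may not take effect):

```hcl
additional_settings = [
  {
    namespace = "aws:elasticbeanstalk:command"
    name      = "DeploymentPolicy"
    value     = "RollingWithAdditionalBatch"
  }
]
```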
When running the example exactly as it is (with the fixed solution stack name as I mentioned in the other issue) with the additional loadbalancer_certificate_arn = "arn:aws:acm:us-east-1:SOME_REAL_ARN_ID",
it fails with:
Error: Error applying plan:
1 error(s) occurred:
* module.elastic_beanstalk_environment.aws_elastic_beanstalk_environment.default: 1 error(s) occurred:
* aws_elastic_beanstalk_environment.default: Error waiting for Elastic Beanstalk Environment (e-d3ep2ub5md) to become ready: 3 errors occurred:
* 2019-04-14 01:35:27.327 +0000 UTC (e-d3ep2ub5md) : Stack named 'awseb-e-d3ep2ub5md-stack' aborted operation. Current state: 'CREATE_FAILED' Reason: The following resource(s) failed to create: [AWSEBV2LoadBalancerListener443, AWSEBInstanceLaunchWaitCondition].
* 2019-04-14 01:35:27.498 +0000 UTC (e-d3ep2ub5md) : Creating Load Balancer listener failed Reason: An SSL policy must be specified for HTTPS listeners (Service: AmazonElasticLoadBalancingV2; Status Code: 400; Error Code: ValidationError; Request ID: 8d9f505c-5e55-11e9-b45a-6b32a0fd16fd)
* 2019-04-14 01:35:27.576 +0000 UTC (e-d3ep2ub5md) : The EC2 instances failed to communicate with AWS Elastic Beanstalk, either because of configuration problems with the VPC or a failed EC2 instance. Check your VPC configuration and try launching the environment again.
Any clues? Is it something wrong with the module or should I make some changes to other resources (VPC/subnets)?
Thanks!
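The second error suggests the HTTPS listener was created without an SSL policy. If the module exposes loadbalancer_ssl_policy (it is mentioned elsewhere in these issues), setting it alongside the certificate may resolve that part; a sketch:

```hcl
module "elastic_beanstalk_environment" {
  # ... other inputs as in the example ...

  loadbalancer_certificate_arn = "arn:aws:acm:us-east-1:SOME_REAL_ARN_ID"
  # Any valid ELB security policy name; this is a common default:
  loadbalancer_ssl_policy      = "ELBSecurityPolicy-2016-08"
}
```

The VPC/subnet communication error may be a downstream symptom of the failed listener rather than a separate misconfiguration.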
I was trying to access my EB host behind the ELB, but I found that when checking with nmap all ports were filtered.
I did have the following settings:
ssh_listener_port set to "22"
ssh_listener_enabled set to "true"
ssh_source_restriction set to "0.0.0.0/0"
associate_public_ip_address set to "true"
And yet I could not access any of the open ports on the instance.
I also had these set for the subnet module:
nat_gateway_enabled set to "true"
map_public_ip_on_launch set to "true"
What did work was setting the private_subnets setting to use module.subnets.public_subnet_ids rather than module.subnets.private_subnet_ids.
(I found this out by adding a host to the same VPC manually but in the public rather than private subnet, and it had access.)
I was wondering if this is intended behavior?
And if so, maybe some additional documentation could help?
It would be great if there was an option in this module to provision an Elastic Beanstalk VPC endpoint to reduce reliance on the internet. The same goes for the RDS module; a VPC endpoint option would be handy to deploy there too.
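Outside the module, such an endpoint can be provisioned alongside it; a sketch, assuming the vpc and subnets modules from the examples above:

```hcl
# Interface endpoint so instances in private subnets can reach the
# Elastic Beanstalk API without traversing the internet/NAT gateway.
resource "aws_vpc_endpoint" "elasticbeanstalk" {
  vpc_id              = module.vpc.vpc_id
  service_name        = "com.amazonaws.${var.region}.elasticbeanstalk"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = module.subnets.private_subnet_ids
  private_dns_enabled = true
}
```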
Are you aware of this?
The following change removed ssh_source_restriction (fe6d0d7#diff-7a370d8342e7203b805911c92454f0f4L494), but it is still present in the docs and is also a variable that can be passed in. Is this intentional?
I have checked a recent environment using these changes, and it no longer locks SSH down to the given source; it is now open to the world.
Hello. I am new to Terraform. Can you provide an example creating an EB environment with an RDS instance? I am using your modules to do that.
I create a VPC and a MySQL database, and then create the application and environment, adding the security group to the allowed_security_groups field in the environment.
I had to add attributes, because the RDS and EB environments were creating security groups with the same name.
The code creates all the resources, but I'm having a problem: the EC2 instances cannot connect to the DB.
Here is the code:
provider "aws" {
profile = "classlolaws"
region = var.region
}
module "vpc" {
source = "git::https://github.com/cloudposse/terraform-aws-vpc.git?ref=tags/0.7.0"
namespace = var.namespace
stage = var.stage
name = var.name
cidr_block = "172.16.0.0/16"
}
module "subnets" {
source = "git::https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=tags/0.16.0"
availability_zones = var.availability_zones
namespace = var.namespace
stage = var.stage
name = var.name
vpc_id = module.vpc.vpc_id
igw_id = module.vpc.igw_id
cidr_block = module.vpc.vpc_cidr_block
nat_gateway_enabled = true
nat_instance_enabled = false
}
module "rds_instance" {
source = "git::https://github.com/cloudposse/terraform-aws-rds.git?ref=tags/0.19.0"
namespace = var.namespace
stage = var.stage
name = var.name
database_name = var.database_name
database_user = var.database_user
database_password = var.database_password
database_port = var.database_port
multi_az = var.multi_az
attributes = var.rds_attributes
storage_type = var.storage_type
allocated_storage = var.allocated_storage
storage_encrypted = var.storage_encrypted
engine = var.engine
engine_version = var.engine_version
instance_class = var.instance_class
db_parameter_group = var.db_parameter_group
publicly_accessible = var.publicly_accessible
vpc_id = module.vpc.vpc_id
subnet_ids = module.subnets.private_subnet_ids
security_group_ids = [module.vpc.vpc_default_security_group_id]
apply_immediately = var.apply_immediately
dns_zone_id = var.dns_zone_id
db_parameter = [
{
name = "myisam_sort_buffer_size"
value = "1048576"
apply_method = "immediate"
},
{
name = "sort_buffer_size"
value = "2097152"
apply_method = "immediate"
}
]
}
module "elastic_beanstalk_application" {
source = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-application.git?ref=tags/0.5.0"
namespace = var.namespace
stage = var.stage
name = var.name
attributes = var.elb_attributes
tags = var.tags
delimiter = var.delimiter
description = "Test elastic_beanstalk_application"
}
module "elastic_beanstalk_environment" {
source = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment.git?ref=tags/0.19.0"
namespace = var.namespace
stage = var.stage
name = var.name
attributes = var.elb_attributes
tags = var.tags
delimiter = var.delimiter
description = var.description
region = var.region
availability_zone_selector = var.availability_zone_selector
dns_zone_id = var.dns_zone_id
dns_subdomain = var.dns_subdomain
wait_for_ready_timeout = var.wait_for_ready_timeout
elastic_beanstalk_application_name = module.elastic_beanstalk_application.elastic_beanstalk_application_name
environment_type = var.environment_type
loadbalancer_type = var.loadbalancer_type
elb_scheme = var.elb_scheme
tier = var.tier
version_label = var.version_label
force_destroy = var.force_destroy
instance_type = var.instance_type
root_volume_size = var.root_volume_size
root_volume_type = var.root_volume_type
autoscale_min = var.autoscale_min
autoscale_max = var.autoscale_max
autoscale_measure_name = var.autoscale_measure_name
autoscale_statistic = var.autoscale_statistic
autoscale_unit = var.autoscale_unit
autoscale_lower_bound = var.autoscale_lower_bound
autoscale_lower_increment = var.autoscale_lower_increment
autoscale_upper_bound = var.autoscale_upper_bound
autoscale_upper_increment = var.autoscale_upper_increment
vpc_id = module.vpc.vpc_id
loadbalancer_subnets = module.subnets.public_subnet_ids
application_subnets = module.subnets.private_subnet_ids
allowed_security_groups = [module.vpc.vpc_default_security_group_id, module.rds_instance.security_group_id]
rolling_update_enabled = var.rolling_update_enabled
rolling_update_type = var.rolling_update_type
updating_min_in_service = var.updating_min_in_service
updating_max_batch = var.updating_max_batch
healthcheck_url = var.healthcheck_url
application_port = var.application_port
solution_stack_name = var.solution_stack_name
additional_settings = var.additional_settings
env_vars = {
"DB_CREATE" = "update"
"HOST_BACKEND" = "http://localhost:8000"
"HOST_FRONTEND" = "http://localhost:3000"
"JDBC_CONNECTION_STRING" = ""
"RDS_DATABASE" = var.database_name
"RDS_HOST" = module.rds_instance.instance_address
"RDS_ENDPOINT" = module.rds_instance.instance_endpoint
"RDS_PORT" = var.database_port
"RDS_USER" = var.database_user
"RDS_PASS" = var.database_password
"RDS_HOST_DNS" = module.rds_instance.hostname
}
}
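One common cause of this symptom is that nothing allows the EB instances' security group into the RDS security group. A hedged sketch of an explicit rule (the output names security_group_id on both modules are assumptions; check the modules' outputs):

```hcl
# Hypothetical: allow the EB instances' security group to reach MySQL on the RDS SG.
resource "aws_security_group_rule" "eb_to_rds" {
  type                     = "ingress"
  from_port                = var.database_port
  to_port                  = var.database_port
  protocol                 = "tcp"
  security_group_id        = module.rds_instance.security_group_id
  source_security_group_id = module.elastic_beanstalk_environment.security_group_id
}
```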
In the README, under the Inputs table, the descriptions of the additional_security_groups and allowed_security_groups input fields are switched. This is merely an issue in the README documentation; the module behaves according to the expected descriptions below.
The README currently shows:
additional_security_groups: List of security groups to be allowed to connect to the EC2 instances
allowed_security_groups: List of security groups to add to the EC2 instances
It should instead show:
additional_security_groups: List of security groups to add to the EC2 instances
allowed_security_groups: List of security groups to be allowed to connect to the EC2 instances
If I set http_listener_enabled="true", loadbalancer_certificate_arn=someArn, and loadbalancer_ssl_policy=whatever, then I can get the load balancer to add an HTTPS listener and an HTTP listener.
But how can I make the HTTP listener create a forwarding rule to HTTPS?
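One approach is to attach a redirect rule to the EB-managed ALB's HTTP listener from outside the module; a sketch (the module output name load_balancers is an assumption based on the underlying resource's attribute):

```hcl
# Look up the HTTP listener on the ALB that Elastic Beanstalk created.
data "aws_lb_listener" "http" {
  load_balancer_arn = module.elastic_beanstalk_environment.load_balancers[0]
  port              = 80
}

# Redirect all HTTP traffic to HTTPS with a permanent redirect.
resource "aws_lb_listener_rule" "redirect_to_https" {
  listener_arn = data.aws_lb_listener.http.arn

  action {
    type = "redirect"
    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }

  condition {
    path_pattern {
      values = ["/*"]
    }
  }
}
```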
Hello,
Thanks for a great module, like all the other modules you have!
Currently we can specify ec2_instance_profile_role_name, and the module will create the instance profile role with a default policy that includes read permissions to a few different services.
This is a bit cumbersome, since you may want to either add other permissions to the role policy or scope the Parameter Store read permission to only the parameters for this specific environment.
It would therefore be great to either be able to provide an existing role for the instances or to provide the policy that should be used.
Thanks!
This is a feature suggestion.
It would be nice if the module exposed an option to customize the AMI ImageId used by Beanstalk:
setting {
name = "ImageId"
namespace = "aws:autoscaling:launchconfiguration"
value = var.custom_ami_imageid
}
for example, exposed like SSLPolicy is.
The error returned:
Creating Load Balancer listener failed Reason: An SSL policy must be specified for HTTPS listeners
As per this issue, the 'Name' tag is reserved for use by AWS elastic beanstalk:
hashicorp/terraform-provider-aws#3963
This results in failures to deploy and non-idempotency issues.
The null label module is taking the provided tags and 'sanitising' them... but for this module the 'Name' tag needs stripping out.
tags.%: "3" => "5"
tags.Name: "" => "mine-staging-jenkins2build-eb-env"
tags.Namespace: "" => "mine"
How to reproduce:
I believe this happens in an already provisioned environment, using terraform-aws-jenkins, when using version 1.10 or 1.11 of the Terraform AWS provider.
With Beanstalk now supporting spot instances, it is possible to pass an array of instance types so Beanstalk will choose which one to select based on price.
instance_type should be of type list(string).
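Until the variable type changes, spot and multiple instance types can also be expressed through Elastic Beanstalk's aws:ec2:instances namespace; a sketch via additional_settings (instance type values are examples):

```hcl
additional_settings = [
  {
    namespace = "aws:ec2:instances"
    name      = "EnableSpot"
    value     = "true"
  },
  {
    namespace = "aws:ec2:instances"
    name      = "InstanceTypes"
    value     = "t3.medium,t3a.medium,t2.medium" # EB picks based on price/availability
  }
]
```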
I have the following setting:
allowed_security_groups = []
With or without that parameter, I get the output below every time I run terraform plan:
~ resource "aws_security_group" "default" {
arn = "arn:aws:ec2:us-east-2:xxxxxxxxxxx:security-group/sg-xxxxxxxxxxxxxx"
description = "Allow inbound traffic from provided Security Groups"
egress = [
{
cidr_blocks = [
"0.0.0.0/0",
]
description = ""
from_port = 0
ipv6_cidr_blocks = []
prefix_list_ids = []
protocol = "-1"
security_groups = []
self = false
to_port = 0
},
]
id = "sg-xxxxxxxxxxxxxx"
~ ingress = [
+ {
+ cidr_blocks = []
+ description = ""
+ from_port = 0
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "-1"
+ security_groups = []
+ self = false
+ to_port = 0
},
]
name = "beanstalk-development"
owner_id = "xxxxxxxxxxxxxx"
revoke_rules_on_delete = false
tags = {
"Name" = "beanstalk-development"
}
vpc_id = "vpc-xxxxxxxxxxxxxxxx"
}
How can that be avoided?
terraform-aws-elastic-beanstalk-environment recreates all settings on each terraform plan/apply:
setting.1039973377.name: "InstancePort" => "InstancePort"
setting.1039973377.namespace: "aws:elb:listener:22" => "aws:elb:listener:22"
setting.1039973377.resource: "" => ""
setting.1039973377.value: "22" => "22"
setting.1119692372.name: "" => "ListenerEnabled"
setting.1119692372.namespace: "" => "aws:elbv2:listener:443"
setting.1119692372.resource: "" => ""
setting.1119692372.value: "" => "false"
setting.1136119684.name: "RootVolumeSize" => "RootVolumeSize"
setting.1136119684.namespace: "aws:autoscaling:launchconfiguration" => "aws:autoscaling:launchconfiguration"
setting.1136119684.resource: "" => ""
setting.1136119684.value: "8" => "8"
setting.1201312680.name: "ListenerEnabled" => "ListenerEnabled"
setting.1201312680.namespace: "aws:elb:listener:443" => "aws:elb:listener:443"
setting.1201312680.resource: "" => ""
setting.1201312680.value: "false" => "false"
This feature/bug was present for years and is still not fixed:
hashicorp/terraform#6729
hashicorp/terraform-provider-aws#901
hashicorp/terraform#6257
hashicorp/terraform-provider-aws#280
hashicorp/terraform#11056
hashicorp/terraform-provider-aws#461
(I tested some ideas from the links above; nothing worked 100%.)
The only possible solution is to add this:
lifecycle {
ignore_changes = ["setting"]
}
but it's a hack, since it will not update the environment if you update any of the settings.
Regarding terraform-aws-elastic-beanstalk-environment recreating the settings all the time, here is what's probably happening: the settings are compared using an advanced algorithm to determine if they are the same. What's a possible solution?
Introduce var.settings (a list of maps) to be able to provide all the required settings from outside the module.
It might work, but in practice it would be very difficult to know all the needed settings and tedious to implement.
Running the example code with the additional tier = "Worker" option fails with:
Error: Error applying plan:
1 error(s) occurred:
* module.elastic_beanstalk_environment.aws_elastic_beanstalk_environment.default: 1 error(s) occurred:
* aws_elastic_beanstalk_environment.default: ConfigurationValidationException: Configuration validation exception: Load Balancer ListenerEnabled setting cannot be applied because AWSEBLoadBalancer doesn't exist.
status code: 400, request id: bd7a5475-7390-4fe9-a213-a728bf9c3895
Manually commenting out all the EB settings that start with aws:elb worked. If I can come up with a better solution I'll propose a PR.
Hello,
I'm trying to add the RDS settings to my EB environment's configuration, and it seems that functionality does not work at all.
Based on your documentation and AWS:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-rdsdbinstance
I'm trying to do it like this:
additional_settings = [
{
namespace = "aws:rds:dbinstance"
name = "DBEngine"
value = "mysql"
},
{
namespace = "aws:rds:dbinstance"
name = "DBEngineVersion"
value = "8.0.21"
},
{
namespace = "aws:rds:dbinstance"
name = "DBInstanceClass"
value = "db.t2.micro"
},
{
namespace = "aws:rds:dbinstance"
name = "MultiAZDatabase"
value = "false"
},
{
namespace = "aws:rds:dbinstance"
name = "DBUser"
value = local.database.user
},
{
namespace = "aws:rds:dbinstance"
name = "DBPassword"
value = random_password.password.result
},
{
namespace = "aws:rds:dbinstance"
name = "DBDeletionPolicy"
value = "Delete"
},
{
namespace = "aws:rds:dbinstance"
name = "DBAllocatedStorage"
value = "5"
}
]
This configuration is not creating any RDS DB, nor attaching one to the EB environment.
Do you have any solution for that?
Hello guys, and thank you for your work on this module.
I would like the possibility to configure multiple scheduled actions in my environment, using additional_settings or a dedicated variable.
I'm trying to configure my Beanstalk environment with some scheduled actions to handle my business's daily load needs and to meet some security compliance requirements.
Since we can do this using an .ebextensions file, we could use that method, but on the infrastructure team side we want to be sure that we can apply a mandatory scheduled action that cannot be changed by application developers (.ebextensions being under the application's control).
I have 2 solutions at the moment.
Solution 1: we could create a dedicated scheduled_actions variable as follows:
variable "scheduled_actions" {
type = list(object({
name = string
minsize = string
maxsize = string
desiredcapacity = string
starttime = string
endtime = string
recurrence = string
suspend = string
}))
default = []
description = "Define a list of scheduled actions"
}
Solution 2: we could modify the additional_settings variable to handle the resource field for this specific case.
terraform-aws-elastic-beanstalk-environment/variables.tf
Lines 380 to 385 in 2d146af
would become:
variable "additional_settings" {
type = list(object({
namespace = string
name = string
value = string
resource = string
}))
terraform-aws-elastic-beanstalk-environment/main.tf
Lines 867 to 875 in 2d146af
dynamic "setting" {
for_each = var.additional_settings
content {
namespace = setting.value.namespace
name = setting.value.name
value = setting.value.value
resource = setting.value.resource
}
}
I've tried to use additional_settings without the resource parameter, just to see what happens:
I specify additional_settings as follows:
additional_settings = [
{
namespace = "aws:autoscaling:scheduledaction"
name = "MinSize"
value = "1"
},
{
namespace = "aws:autoscaling:scheduledaction"
name = "MaxSize"
value = "2"
},
{
namespace = "aws:autoscaling:scheduledaction"
name = "StartTime"
value = "2015-05-14T07:00:00Z"
},
{
namespace = "aws:autoscaling:scheduledaction"
name = "EndTime"
value = "2016-01-12T07:00:00Z"
},
{
namespace = "aws:autoscaling:scheduledaction"
name = "Recurrence"
value = "*/20 * * * *"
},
{
namespace = "aws:autoscaling:scheduledaction"
name = "DesiredCapacity"
value = "2"
}
]
Launching a plan mentions these new settings:
+ setting {
+ name = "DesiredCapacity"
+ namespace = "aws:autoscaling:scheduledaction"
+ value = "2"
}
+ setting {
+ name = "EndTime"
+ namespace = "aws:autoscaling:scheduledaction"
+ value = "2016-01-12T07:00:00Z"
}
+ setting {
+ name = "MaxSize"
+ namespace = "aws:autoscaling:scheduledaction"
+ value = "2"
}
+ setting {
+ name = "MinSize"
+ namespace = "aws:autoscaling:scheduledaction"
+ value = "1"
}
+ setting {
+ name = "Recurrence"
+ namespace = "aws:autoscaling:scheduledaction"
+ value = "*/20 * * * *"
}
+ setting {
+ name = "StartTime"
+ namespace = "aws:autoscaling:scheduledaction"
+ value = "2015-05-14T07:00:00Z"
}
Then I applied and got the following error:
Error: InvalidParameterValue: The scheduled action name cannot be blank.
status code: 400, request id: f4003748-9393-4202-824c-312fbf77b7fc
This is logical, since the resource field is set to an empty string.
This need sounds like both a bug and a feature request to me.
I'm open to discussion on this one and can propose a PR.
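For reference, with Solution 2 in place the scheduled-action settings would carry the action name in the resource field; a sketch (the action name "ScheduledScaleUp" is an example):

```hcl
additional_settings = [
  {
    namespace = "aws:autoscaling:scheduledaction"
    resource  = "ScheduledScaleUp" # the action name EB complains is blank
    name      = "MinSize"
    value     = "1"
  },
  {
    namespace = "aws:autoscaling:scheduledaction"
    resource  = "ScheduledScaleUp"
    name      = "Recurrence"
    value     = "*/20 * * * *"
  },
  # ... the remaining options repeated with the same resource name
]
```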
Hello @osterman @goruha @aknysh
Currently, your module for the Elastic Beanstalk environment doesn't have a way to set the application's tier to "Worker" because it is hardcoded to "WebServer". I am asking that a variable be created for the tier, defaulting to "WebServer" so it disrupts as little as possible. I will put in a PR shortly referencing this issue number.
Thanks,
Lucas Pearson
Simply running the example gives this error on `us-east-1`:
Error: Error applying plan:
1 error(s) occurred:
module.elastic_beanstalk_environment.aws_elastic_beanstalk_environment.default: 1 error(s) occurred:
aws_elastic_beanstalk_environment.default: InvalidParameterValue: No Solution Stack named '64bit Amazon Linux 2018.03 v2.12.2 running Docker 18.03.1-ce' found.
status code: 400, request id: 656e5969-7bb3-4313-9158-2515435f3522
Changing to 64bit Amazon Linux 2018.03 v2.12.10 running Docker 18.06.1-ce seems to work.
The inline policy *-eb-default in main.tf line 138 is generally overreaching with all of its permissions, but in particular Action: ["iam:PassRole"] with Resource: "*" is downright dangerous. See the Unit 42 Cloud Threat Report "Misconfigured IAM Roles Lead to Thousands of Compromised Cloud Workloads" for details on how this can be exploited.
Privilege escalation should not be possible. iam:PassRole (and iam:ListRole) should not be used this way.
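If iam:PassRole must be granted at all, it can be scoped to the specific role and service; a sketch (the role reference aws_iam_role.service is hypothetical, standing in for the module's EB service role):

```hcl
data "aws_iam_policy_document" "pass_role_scoped" {
  statement {
    actions = ["iam:PassRole"]
    # Only the specific EB service role, never "*" (hypothetical role reference).
    resources = [aws_iam_role.service.arn]

    # Further restrict to Elastic Beanstalk as the receiving service.
    condition {
      test     = "StringEquals"
      variable = "iam:PassedToService"
      values   = ["elasticbeanstalk.amazonaws.com"]
    }
  }
}
```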
Use terraform-aws-route53-alias instead of terraform-aws-route53-cluster-hostname in order to create a Route53 ALIAS instead of a CNAME.
I expect Route53 to be configured with an ALIAS and not a CNAME. This way our Beanstalk A record will point to the actual IPs rather than resolving through a CNAME.
It's currently not possible to set this platform-specific setting: aws:elasticbeanstalk:environment:proxy.
The plan goes OK.
After trying to apply my config, a ConfigurationValidationException pops up.
A clearer description of the problem would help. I think I used all the required vars documented, so I have no clue how to proceed.
module "elastic_beanstalk_application" {
source = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-application.git?ref=tags/0.5.0"
name = "AppName"
}
module "elastic-beanstalk-environment" {
source = "cloudposse/elastic-beanstalk-environment/aws"
version = "0.22.0"
elastic_beanstalk_application_name = "AppName"
name = "dev-AppName"
vpc_id = aws_default_vpc.default.id
region = "eu-west-1"
solution_stack_name = "64bit Amazon Linux 2 v5.0.2 running Node.js 12"
application_subnets = [
aws_default_subnet.default_az1.id,
aws_default_subnet.default_az2.id,
aws_default_subnet.default_az3.id,
]
autoscale_max = 2
env_vars = {
API_URL = "https://my-api-url.net"
}
}
`
```
Error: ConfigurationValidationException: Configuration validation exception: Invalid option value: '' (Namespace: 'aws:ec2:vpc', OptionName: 'ELBSubnets'): Specify the subnets for the VPC.
	status code: 400, request id: ac1268fc-77a0-479d-8a5d-37c6b024ce9b

  on .terraform/modules/elastic-beanstalk-environment/terraform-aws-elastic-beanstalk-environment-0.22.0/main.tf line 505, in resource "aws_elastic_beanstalk_environment" "default":
 505: resource "aws_elastic_beanstalk_environment" "default" {
```
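The error says the `ELBSubnets` option is empty, i.e. the load balancer itself has no subnets. A sketch of the likely fix, assuming the module takes a `loadbalancer_subnets` variable separate from `application_subnets` (worth verifying against the module's inputs for 0.22.0):

```hcl
# Subnets for the load balancer, in addition to application_subnets.
loadbalancer_subnets = [
  aws_default_subnet.default_az1.id,
  aws_default_subnet.default_az2.id,
  aws_default_subnet.default_az3.id,
]
```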
We have this:

```hcl
resource "aws_elasticache_replication_group" "redis" {
  at_rest_encryption_enabled    = true
  auth_token                    = data.external.example.result.REDIS_AUTH
  automatic_failover_enabled    = false
  engine                        = "redis"
  engine_version                = "5.0.5"
  node_type                     = lookup(var.aws_elasticache_cluster_node_type, terraform.workspace)
  maintenance_window            = "sun:01:01-sun:23:00"
  number_cache_clusters         = lookup(var.aws_elasticache_cluster_nodes, terraform.workspace)
  parameter_group_name          = aws_elasticache_parameter_group.default.name
  port                          = 6379
  replication_group_id          = "service-${terraform.workspace}"
  replication_group_description = "Service ${terraform.workspace}"
  snapshot_window               = "00:00-01:00"
  subnet_group_name             = aws_elasticache_subnet_group.default.name
  transit_encryption_enabled    = true

  lifecycle {
    prevent_destroy = true
  }
}
```
Every time `auth_token` needs a change, Terraform attempts to destroy and create a new cluster.
In the AWS console it's possible to set a new token and choose either to rotate it or set it in place.
Feels like we need to expose a token update strategy option in Terraform (https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyCacheCluster.html).
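For reference, later AWS provider releases appear to expose this on the resource itself; a sketch assuming an `auth_token_update_strategy` argument is available in your provider version (verify against the provider docs before relying on it):

```hcl
resource "aws_elasticache_replication_group" "redis" {
  # ... existing arguments as above ...

  auth_token = data.external.example.result.REDIS_AUTH

  # "ROTATE" keeps the old token valid alongside the new one; "SET"
  # replaces it in place. Either avoids replacing the cluster.
  auth_token_update_strategy = "ROTATE"
}
```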
Hello!
In my Beanstalk environment, my load balancer listens on HTTPS. I would like it to listen on HTTP too, to redirect HTTP -> HTTPS. I know there is already a patch for this, and I'm using the `http_listener_enabled` parameter.
But... it doesn't work for me.
Here is a screenshot of the AWS console 👍
And the Terraform:
```hcl
module "elastic_beanstalk_environment" {
  source = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment.git?ref=tags/0.4.8"
  ...
  healthcheck_url                     = "${var.healthcheck_url}"
  http_listener_enabled               = true
  loadbalancer_type                   = "application"
  loadbalancer_certificate_arn        = "${aws_acm_certificate.cert.arn}"
  loadbalancer_security_groups        = ["${module.metabase_sg.this_security_group_id}"]
  loadbalancer_managed_security_group = "${module.metabase_sg.this_security_group_id}"
}
```
tfstate:

```
"all_settings.232565.namespace": "aws:elbv2:listener:default",
"all_settings.232565.resource": "",
"all_settings.232565.value": "false",
```
What am I doing wrong?
Thanks
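For comparison, the environment option I would expect the module to emit when `http_listener_enabled = true` is something like this sketch (the tfstate above shows it ending up `"false"` instead):

```hcl
# Beanstalk option enabling the default (port 80) ALB listener.
setting {
  namespace = "aws:elbv2:listener:default"
  name      = "ListenerEnabled"
  value     = "true"
}
```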
Add an option for a network-type load balancer.
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environments-cfg-nlb.html
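A sketch of what that could look like, assuming the module's `loadbalancer_type` variable accepts the value AWS uses for NLBs:

```hcl
module "elastic_beanstalk_environment" {
  # ...
  # Hypothetical usage; NLB listeners are TCP, so the HTTPS-specific
  # ALB options (certificates, listener rules) would not apply.
  loadbalancer_type = "network"
}
```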
When creating a new Beanstalk environment, some resource names should be configurable independently of each other.
For the created EC2 instances, S3 bucket, instance profile, and IAM roles, there should be a configuration option letting us customize the name, like the s3_bucket_access_log_bucket_name variable.
We could use these to customize our resource naming.
We are currently starting to use this module to deploy Beanstalk, but we have specific naming constraints for our resources:
For EC2: Product-Component-Environment-Blue_Green
For IAM roles: Product-Component-Environment-AWS::Region
For S3: Product-Component-Environment-AWS::AccountId-AWS::Region
Since we are doing multi-region deployments, there are naming standards we need to respect to ensure consistent resource names. As you can see, the patterns are similar and could be implemented using the context module mechanics, but they are not exactly the same.
Problem: the module uses the same naming pattern for all of these resources.
For S3 we clearly need the account ID in the name to avoid conflicts, since bucket names are global; the same goes for IAM roles, but not for the EC2 name, which is specific to the account.
Proposed: being able to specify an alternative name for these resources as a variable.
For example we could have these variables to specify explicit names:
Or maybe being able to specify a prefix or a global naming convention for each type of resource.
Before opening this feature request, I tried to use the attributes option and label_order from the context module.

var.tfvars:

```hcl
...
product     = "product"
component   = "component"
environment = "env"
label_order = ["namespace", "name", "environment", "attributes"]
...
```

main.tf:

```hcl
module "elastic_beanstalk_environment" {
  ...
  label_order = var.label_order
  attributes  = [data.aws_caller_identity.current.account_id, var.region]
  ...
```
The plan gives me:

Beanstalk env name: product-comp-env-862853942159-eu-west-1
ec2 name: product-comp-env-862853942159-eu-west-1-eb-ec2
ec2 iam instance profile: product-comp-env-862853942159-eu-west-1-eb-ec2
iam service role: product-comp-env-862853942159-eu-west-1-eb-service
s3 bucket for logs: product-comp-env-862853942159-eu-west-1-eb-loadbalancer-logs

This at least lets us largely respect our convention, but it floods some resource names with useless information.
The 63-character limit on S3 bucket names makes ours hard to fit, since the module appends the -eb-loadbalancer-logs suffix, consuming 21 of the 63 available characters.
That would probably warrant a separate issue, but since this feature would let us work around it, it's good information.
I thought about using multiple context modules to solve this, but in the end that's not possible, since only a single one is referenced.
Hello people,
Is there any possibility to manage .ebextensions using Terraform? My need is: