

terraform-aws-elastic-beanstalk-environment's Issues

Add variable to disable S3 eb-loadbalancer-logs

Describe the Feature

At the moment, any load-balanced web environment is automatically created with an eb-loadbalancer-logs bucket and logging enabled.

Expected Behavior

It would be nice to have an option to disable this behavior.

Use Case

We don't actually need or want logs on a couple of internal environments.

Warnings

I use Cloud Posse modules and see the following warnings. Do you plan to address them?


Warning: Quoted references are deprecated

  on .terraform/modules/subnets/private.tf line 45, in resource "aws_subnet" "private":
  45:     ignore_changes = ["tags.kubernetes", "tags.SubnetType"]

In this context, references are expected literally rather than in quotes.
Terraform 0.11 and earlier required quotes, but quoted references are now
deprecated and will be removed in a future version of Terraform. Remove the
quotes surrounding this reference to silence this warning.
module "vpc" {
  source     = "git::https://github.com/cloudposse/terraform-aws-vpc.git?ref=0.8.0"
  namespace  = var.namespace
  stage      = var.stage
  name       = var.name
  tags       = var.tags
  cidr_block = "172.32.0.0/16"
}

module "subnets" {
  source               = "git::https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=tags/0.16.0"
  availability_zones   = var.availability_zones
  namespace            = var.namespace
  stage                = var.stage
  name                 = var.name
  vpc_id               = module.vpc.vpc_id
  igw_id               = module.vpc.igw_id
  cidr_block           = module.vpc.vpc_cidr_block
  nat_gateway_enabled  = true
  nat_instance_enabled = false
  tags                 = var.tags
}
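The warning text itself describes the fix: in Terraform 0.12+, references inside ignore_changes are written without quotes. A minimal sketch of the corrected lifecycle block (resource body elided):

```hcl
resource "aws_subnet" "private" {
  # ...

  lifecycle {
    # Terraform 0.11 style, now deprecated:
    #   ignore_changes = ["tags.kubernetes", "tags.SubnetType"]
    # Terraform 0.12+ style, bare references:
    ignore_changes = [tags.kubernetes, tags.SubnetType]
  }
}
```

Since the warning comes from inside the subnets module, the practical fix is to upgrade to a module release where this change has been applied; nothing needs to change in the calling code.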

S3 bucket invalid name

Found a bug? Maybe our Slack Community can help.

Slack Community

Describe the Bug

The S3 bucket is not getting created - the API returns a 400 response with an "Invalid name" error.

Expected Behavior

The ELB log S3 bucket should be created.


Doesn't support AWS provider v3.x

Describe the Bug

The module's AWS provider version constraint ~> 2.0 conflicts with modules that require ~> 3.x versions. For example, my module requires AWS ~> 3.4.0 and I get:

Could not retrieve the list of available versions for provider hashicorp/aws:
no available releases match the given constraints ~> 3.4.0, ~> 2.0, ~> 2.0, ~>
3.4.0
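Terraform intersects the provider constraints declared by every module in the configuration, so ~> 2.0 (from this module) and ~> 3.4.0 (from mine) can never both be satisfied. A sketch of the kind of relaxed constraint the module would need to declare; the exact bounds here are an assumption:

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Accept both the 2.x and 3.x provider series so callers
      # pinned to ~> 3.x can still satisfy the intersection.
      version = ">= 2.0, < 4.0"
    }
  }
}
```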

list loadbalancer_certificate_arn

Hi Guys,

Thank you for the great work you have made available. I have bumped into this particular use case:
I use Multi-Docker environments that are reachable through an Application Load Balancer with host headers enabled. Some of the applications are reachable through other host headers, e.g. app.domain1.com and web.domain2.com.

Terraform recently presented a fix for the AWS provider which allows usage of multiple ACM certificates for an Application Loadbalancer (terraform-aws-modules/terraform-aws-alb#26)

TL;DR: is it possible to change loadbalancer_certificate_arn to a list type?

loadbalancer_certificate_arn = ["${module.acm-application.acm_certificate_arn}"]
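Sketched as a variable definition, the requested change would look something like this (the type moves from string to list(string); the default here is an assumption):

```hcl
variable "loadbalancer_certificate_arn" {
  type        = list(string)
  default     = []
  description = "List of ACM certificate ARNs to attach to the load balancer HTTPS listener"
}
```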

Provide a way to specify visibility for Elastic Load Balancers to be internal

Hello,
I recently started using your module and was able to create a new Elastic Beanstalk environment successfully. However, I need my load balancers to be internal-facing only, and I could not find a way to specify this within the module. The module requires 'public_subnets', so I assume this functionality is not currently available?

In AWS, you can specify the visibility as shown in the attached screenshot. Thanks!
image

`enhanced` health not supported by some platforms, but cannot be changed

Some platforms (e.g. .NET) do not support the enhanced health system, yet there is no option to change this to basic. This causes the error:

ConfigurationValidationException: Configuration validation exception: Enhanced health reporting system is not supported by current solution stack

A solution would be to add a variable for this.
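A minimal sketch of such a variable, wired to the aws:elasticbeanstalk:healthreporting:system namespace that controls this option; the variable name is an assumption:

```hcl
variable "enhanced_reporting_enabled" {
  type        = bool
  default     = true
  description = "Use enhanced health reporting; set to false for platforms (e.g. .NET) that only support basic health"
}

resource "aws_elastic_beanstalk_environment" "default" {
  # ...

  setting {
    namespace = "aws:elasticbeanstalk:healthreporting:system"
    name      = "SystemType"
    value     = var.enhanced_reporting_enabled ? "enhanced" : "basic"
  }
}
```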

Missing alb_zone_id

I'm getting the following error when using the eu-north-1 region:

Error: Invalid index

  on .terraform/modules/eu-north-1.foo.elastic_beanstalk_environment/outputs.tf line 22, in output "elb_zone_id":
  22:   value       = var.alb_zone_id[var.region]
    |----------------
    | var.alb_zone_id is map of string with 15 elements
    | var.region is "eu-north-1"

The given key does not identify an element in this collection value.

Taking a look at the code, some regions are missing.

# From: http://docs.aws.amazon.com/general/latest/gr/rande.html#elasticbeanstalk_region
# Via: https://github.com/hashicorp/terraform/issues/7071
variable "alb_zone_id" {
  type = map(string)

  default = {
    ap-northeast-1 = "Z1R25G3KIG2GBW"
    ap-northeast-2 = "Z3JE5OI70TWKCP"
    ap-south-1     = "Z18NTBI3Y7N9TZ"
    ap-southeast-1 = "Z16FZ9L249IFLT"
    ap-southeast-2 = "Z2PCDNR3VC2G1N"
    ca-central-1   = "ZJFCZL7SSZB5I"
    eu-central-1   = "Z1FRNW7UH4DEZJ"
    eu-west-1      = "Z2NYPWQ7DFZAZH"
    eu-west-2      = "Z1GKAAAUGATPF1"
    sa-east-1      = "Z10X7K2B4QSOFV"
    us-east-1      = "Z117KPS5GTRQ2G"
    us-east-2      = "Z14LCN19Q5QHIC"
    us-west-1      = "Z1LQECGX5PH1X"
    us-west-2      = "Z38NKT9BP95V3O"
    eu-west-3      = "ZCMLWB8V5SYIT"
  }

  description = "ALB zone id"
}

https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/variables.tf#L440

The regions below are missing from the map. Please add them.

Africa (Cape Town) | af-south-1
Asia Pacific (Hong Kong) | ap-east-1
Asia Pacific (Osaka-Local) | ap-northeast-3
Europe (Milan) | eu-south-1
Europe (Stockholm) | eu-north-1
Middle East (Bahrain) | me-south-1

(eu-central-1 and sa-east-1 already appear in the default map above.)
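Until the module's default map is updated, a caller can override the variable, since var.alb_zone_id is a plain input. The value below is a placeholder, not a real zone ID; the correct IDs must be taken from the AWS endpoints documentation linked above:

```hcl
module "elastic_beanstalk_environment" {
  source = "cloudposse/elastic-beanstalk-environment/aws"
  # ...

  # Must contain every region you deploy to, since this map
  # replaces (not merges with) the module default.
  alb_zone_id = {
    eu-north-1 = "ZXXXXXXXXXXXXX" # placeholder, look up the real ELB hosted zone ID
    # ...plus the existing entries from variables.tf
  }
}
```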

Some unset env vars are making their way into the environment

Hi there,
Fantastic module. I've managed to get an EB environment up and running with around 13 environment variables and all looks good, except I seem to be getting 3 "rogue" environment variables being set:

DEFAULT_ENV_13
DEFAULT_ENV_27
DEFAULT_ENV_41

and only these 3. I think the way you've handled the environment variables is pretty nice, as all the ugliness of EB variables is hidden in the module. I can't see why I'm getting properties for only these 3 variables, though, because my env_vars map looks like:

   env_vars = "${
      map(
        "AWS_REGION", "${local.aws_region}",
        "BUNDLE_WITHOUT", "test:development",
        "REDIS_PORT", "6379",
        "REDIS_URL", "${local.redis_url}",
        "APPLICATION_NAME", "foo",
        "LOAD_DB", "none",
        "RACK_ENV", "staging",
        "RAILS_SKIP_ASSET_COMPILATION", "false",
        "RAILS_SKIP_MIGRATIONS", "false",
        "RDS_DB_NAME", "mydb",
        "RDS_PORT", "5432",
        "RDS_USERNAME", "myuser",
        "API_URL", "https://api.foo.com"
      )
    }"

Any thoughts on why I might get the 3 rogues?

AWSElasticBeanstalkService AWS IAM policy soon to be deprecated

Hi,
I got an email from AWS saying that AWSElasticBeanstalkService, the managed IAM policy used in main.tf, will be deprecated soon and should be replaced with AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy.
I was wondering if you plan to switch to it.

Thank you

Do not downgrade solution stack if managed actions are enabled

Currently the solution stack attribute is not ignored. Beanstalk automatically updates the environment because managed actions are allowed; when we run Terraform again, it downgrades the environment back to the version specified in the solution stack attribute.

I tried the dynamic block approach; apparently the lifecycle block isn't allowed to be dynamic.
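Since lifecycle blocks must be static, the only workaround inside the module is an unconditional ignore, sketched here; the trade-off is that Terraform will then never change the stack, even when you want it to:

```hcl
resource "aws_elastic_beanstalk_environment" "default" {
  # ...
  solution_stack_name = var.solution_stack_name

  lifecycle {
    # Stop Terraform from downgrading the platform after
    # Beanstalk managed updates bump the version. A lifecycle
    # block cannot be dynamic, so this cannot be toggled by
    # a variable.
    ignore_changes = [solution_stack_name]
  }
}
```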

Validation error: Environment tag cannot be empty

As part of recent commit 93860ad, the version of the terraform-null-label module was updated from 0.3.1 to 0.5.0.

This seems to have introduced a breaking change in the way the environment tag is handled.

Following that change, terraform plan indicates that the following tags will be created:

  tags.%: "" => "4"
  tags.Environment: "" => ""
  tags.Name: "" => "xxx"
  tags.Namespace:  "" => "yyy"
  tags.Stage: "" => "demo"

The only empty tag value is for tags.Environment.
It then fails when applying with the error:

* aws_elastic_beanstalk_environment.default: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, CreateEnvironmentInput.Tags[1].Value.

This is due to the fact that the label module used by terraform-aws-elastic-beanstalk-environment does not set an environment value.

I tried adding the environment variable to the resource (https://github.com/vlaurin/terraform-aws-elastic-beanstalk-environment/commit/edd54e1416e9ee120a10cdad900e0bc9d74884b7#diff-7a370d8342e7203b805911c92454f0f4R6) and it does resolve the issue.
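The gist of that fix is passing an environment value through to the label module, along these lines; this is a sketch under the assumption that the label module accepts an environment input, as the linked commit suggests:

```hcl
module "label" {
  source      = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.5.0"
  namespace   = var.namespace
  stage       = var.stage
  name        = var.name
  environment = var.environment # previously unset, leaving tags.Environment empty
  tags        = var.tags
}
```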

Attach ECS permissions to EC2 instance

The 64bit Amazon Linux 2018.03 v2.11.9 running Multi-container Docker 18.06.1-ce (Generic) solution stack needs at least ecs:RegisterContainerInstance on the EC2 instance role to work properly.
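A sketch of an inline policy granting these ECS permissions, attached to the module's EC2 instance role; the resource names here are hypothetical, and the actions beyond ecs:RegisterContainerInstance follow what ECS container instances conventionally require:

```hcl
resource "aws_iam_role_policy" "ecs" {
  name = "eb-ecs-permissions" # hypothetical name
  role = aws_iam_role.ec2.id  # hypothetical reference to the instance role

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = [
        "ecs:RegisterContainerInstance",
        "ecs:DeregisterContainerInstance",
        "ecs:StartTelemetrySession",
        "ecs:Poll",
        "ecs:Submit*"
      ]
      Resource = "*"
    }]
  })
}
```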

TF0.14: Expressions used in outputs can only refer to sensitive values if the sensitive attribute is true.


Describe the Bug

When adding an S3 user with key and secret, Terraform 0.14 fails with this error:

Error: Output refers to sensitive values

  on .terraform/modules/elastic_beanstalk_environment/outputs.tf line 41:
  41: output "setting" {

Expressions used in outputs can only refer to sensitive values if the
sensitive attribute is true.

Terraform code:

module "s3_user_assets" {
  source    = "git::https://github.com/cloudposse/terraform-aws-iam-s3-user.git?ref=master"
  namespace = local.name
  stage     = local.stage
  name      = "assets"
  s3_actions = ["s3:ListBucket",
    "s3:ListBucketMultipartUploads",
    "s3:ListBucketVersions",
    "s3:GetBucketVersioning",
    "s3:PutObject",
    "s3:GetObject",
    "s3:DeleteObject",
    "s3:DeleteObjectVersion",
    "s3:ListMultipartUploadParts",
    "s3:GetObjectVersion",
  "s3:AbortMultipartUpload"]
  s3_resources = [module.s3_assets.this_s3_bucket_arn, "${module.s3_assets.this_s3_bucket_arn}/*"]
}

Elastic Beanstalk module:

module "elastic_beanstalk_environment" {
  source = "cloudposse/elastic-beanstalk-environment/aws"

  # Cloud Posse recommends pinning every module to a specific version
  version                            = "0.37.0"
...

  additional_settings = [
    {
      namespace = "aws:elasticbeanstalk:application:environment"
      name      = "EFS_NAME"
      value     = aws_efs_file_system.files.dns_name
    },
    {
      namespace = "aws:elasticbeanstalk:application:environment"
      name      = "S3_ACCESS_KEY_ID"
      value     = module.s3_user_assets.access_key_id
    },
    {
      namespace = "aws:elasticbeanstalk:application:environment"
      name      = "S3_SECRET_ACCESS_KEY"
      value     = module.s3_user_assets.secret_access_key
    },
  ]
...

I worked around this by manually adding the requested sensitive = true to the outputs.tf in the module's cache folder.
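The workaround amounts to this change in the module's outputs.tf; the value expression is my assumption of what the output returns, and the sensitive attribute is the point:

```hcl
output "setting" {
  value     = aws_elastic_beanstalk_environment.default.all_settings
  # Required by Terraform 0.14+ because some setting values
  # (the S3 access key and secret) are derived from sensitive values.
  sensitive = true
}
```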

Expected Behavior

The variables should be added without error.

Steps to Reproduce

See the source code above.


Environment (please complete the following information):

  • OS: Mac
  • Version: Terraform 0.14.6


DeploymentPolicy is not fully configurable


Describe the Feature

DeploymentPolicy can currently only be Immutable or Rolling, but I would like to use others, such as RollingWithAdditionalBatch.

Expected Behavior

There would be a deployment_policy parameter that can be used to specify the required deployment policy, and it would support all policies: All at once, Rolling, Rolling with additional batch, Immutable, and Traffic splitting.

Alternatives Considered

I tried to use additional_settings to define the unsupported policy, but the original one applies instead.
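A sketch of such a parameter, wired to the aws:elasticbeanstalk:command namespace that Elastic Beanstalk uses for this option; the variable name and default are assumptions:

```hcl
variable "deployment_policy" {
  type        = string
  default     = "Rolling"
  description = "Deployment policy: AllAtOnce, Rolling, RollingWithAdditionalBatch, Immutable or TrafficSplitting"
}

resource "aws_elastic_beanstalk_environment" "default" {
  # ...

  setting {
    namespace = "aws:elasticbeanstalk:command"
    name      = "DeploymentPolicy"
    value     = var.deployment_policy
  }
}
```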

Example fail if added HTTPS listeners

When running the example exactly as it is (with the fixed solution stack name I mentioned in the other issue) plus the additional loadbalancer_certificate_arn = "arn:aws:acm:us-east-1:SOME_REAL_ARN_ID", it fails with:

Error: Error applying plan:

1 error(s) occurred:

* module.elastic_beanstalk_environment.aws_elastic_beanstalk_environment.default: 1 error(s) occurred:

* aws_elastic_beanstalk_environment.default: Error waiting for Elastic Beanstalk Environment (e-d3ep2ub5md) to become ready: 3 errors occurred:
	* 2019-04-14 01:35:27.327 +0000 UTC (e-d3ep2ub5md) : Stack named 'awseb-e-d3ep2ub5md-stack' aborted operation. Current state: 'CREATE_FAILED'  Reason: The following resource(s) failed to create: [AWSEBV2LoadBalancerListener443, AWSEBInstanceLaunchWaitCondition].
	* 2019-04-14 01:35:27.498 +0000 UTC (e-d3ep2ub5md) : Creating Load Balancer listener failed Reason: An SSL policy must be specified for HTTPS listeners (Service: AmazonElasticLoadBalancingV2; Status Code: 400; Error Code: ValidationError; Request ID: 8d9f505c-5e55-11e9-b45a-6b32a0fd16fd)
	* 2019-04-14 01:35:27.576 +0000 UTC (e-d3ep2ub5md) : The EC2 instances failed to communicate with AWS Elastic Beanstalk, either because of configuration problems with the VPC or a failed EC2 instance. Check your VPC configuration and try launching the environment again.

Any clues? Is something wrong with the module, or should I make changes to other resources (VPC/subnets)?

Thanks!
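The second error names the cause: the 443 listener was created without an SSL policy. A sketch of the setting that satisfies it for an ALB, using one of AWS's predefined security policies as an example value:

```hcl
setting {
  namespace = "aws:elbv2:listener:443"
  name      = "SSLPolicy"
  value     = "ELBSecurityPolicy-2016-08" # example predefined ELB security policy
}
```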

Host not available despite having public IP

I was trying to access my EB host behind the ELB, but when checking with nmap I found that all ports were filtered.

I did have the following settings:

  • ssh_listener_port set to "22"
  • ssh_listener_enabled set to "true"
  • ssh_source_restriction set to "0.0.0.0/0"
  • associate_public_ip_address set to "true"

And yet I could not access any of the open ports on the instance.

I also had these set for the subnet module:

  • nat_gateway_enabled set to "true"
  • map_public_ip_on_launch set to "true"

What did work was setting the private_subnets setting to use module.subnets.public_subnet_ids rather than module.subnets.private_subnet_ids.

(I found this out by adding a host to the same VPC manually but in the public rather than private subnet, and it had access.)

I was wondering if this is intended behavior?
And if so, maybe some additional documentation could help?

elastic beanstalk connecting with rds

Hello. I am new to Terraform. Can you provide an example of creating an EB environment with an RDS instance? I am using your modules to do that.
I create a VPC and a MySQL database, then create the application and environment, adding the security group to the allowed_security_groups field of the environment.
I had to add attributes, because the RDS and EB environment modules were otherwise creating the same security group name.
The code creates all the resources, but the EC2 instance cannot connect to the DB.

Here is the code:

provider "aws" {
  profile = "classlolaws"
  region  = var.region
}

module "vpc" {
  source     = "git::https://github.com/cloudposse/terraform-aws-vpc.git?ref=tags/0.7.0"
  namespace  = var.namespace
  stage      = var.stage
  name       = var.name
  cidr_block = "172.16.0.0/16"
}

module "subnets" {
  source               = "git::https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=tags/0.16.0"
  availability_zones   = var.availability_zones
  namespace            = var.namespace
  stage                = var.stage
  name                 = var.name
  vpc_id               = module.vpc.vpc_id
  igw_id               = module.vpc.igw_id
  cidr_block           = module.vpc.vpc_cidr_block
  nat_gateway_enabled  = true
  nat_instance_enabled = false
}

module "rds_instance" {
  source              = "git::https://github.com/cloudposse/terraform-aws-rds.git?ref=tags/0.19.0"
  namespace           = var.namespace
  stage               = var.stage
  name                = var.name
  database_name       = var.database_name
  database_user       = var.database_user
  database_password   = var.database_password
  database_port       = var.database_port
  multi_az            = var.multi_az
  attributes          = var.rds_attributes
  storage_type        = var.storage_type
  allocated_storage   = var.allocated_storage
  storage_encrypted   = var.storage_encrypted
  engine              = var.engine
  engine_version      = var.engine_version
  instance_class      = var.instance_class
  db_parameter_group  = var.db_parameter_group
  publicly_accessible = var.publicly_accessible
  vpc_id              = module.vpc.vpc_id
  subnet_ids          = module.subnets.private_subnet_ids
  security_group_ids  = [module.vpc.vpc_default_security_group_id]
  apply_immediately   = var.apply_immediately
  dns_zone_id         = var.dns_zone_id

  db_parameter = [
    {
      name         = "myisam_sort_buffer_size"
      value        = "1048576"
      apply_method = "immediate"
    },
    {
      name         = "sort_buffer_size"
      value        = "2097152"
      apply_method = "immediate"
    }
  ]
}

module "elastic_beanstalk_application" {
  source      = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-application.git?ref=tags/0.5.0"
  namespace   = var.namespace
  stage       = var.stage
  name        = var.name
  attributes  = var.elb_attributes
  tags        = var.tags
  delimiter   = var.delimiter
  description = "Test elastic_beanstalk_application"
}

module "elastic_beanstalk_environment" {
  source                     = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment.git?ref=tags/0.19.0"
  namespace                  = var.namespace
  stage                      = var.stage
  name                       = var.name
  attributes                 = var.elb_attributes
  tags                       = var.tags
  delimiter                  = var.delimiter
  description                = var.description
  region                     = var.region
  availability_zone_selector = var.availability_zone_selector
  dns_zone_id                = var.dns_zone_id
  dns_subdomain              = var.dns_subdomain

  wait_for_ready_timeout             = var.wait_for_ready_timeout
  elastic_beanstalk_application_name = module.elastic_beanstalk_application.elastic_beanstalk_application_name
  environment_type                   = var.environment_type
  loadbalancer_type                  = var.loadbalancer_type
  elb_scheme                         = var.elb_scheme
  tier                               = var.tier
  version_label                      = var.version_label
  force_destroy                      = var.force_destroy

  instance_type    = var.instance_type
  root_volume_size = var.root_volume_size
  root_volume_type = var.root_volume_type

  autoscale_min             = var.autoscale_min
  autoscale_max             = var.autoscale_max
  autoscale_measure_name    = var.autoscale_measure_name
  autoscale_statistic       = var.autoscale_statistic
  autoscale_unit            = var.autoscale_unit
  autoscale_lower_bound     = var.autoscale_lower_bound
  autoscale_lower_increment = var.autoscale_lower_increment
  autoscale_upper_bound     = var.autoscale_upper_bound
  autoscale_upper_increment = var.autoscale_upper_increment

  vpc_id                  = module.vpc.vpc_id
  loadbalancer_subnets    = module.subnets.public_subnet_ids
  application_subnets     = module.subnets.private_subnet_ids
  allowed_security_groups = [module.vpc.vpc_default_security_group_id, module.rds_instance.security_group_id]

  rolling_update_enabled  = var.rolling_update_enabled
  rolling_update_type     = var.rolling_update_type
  updating_min_in_service = var.updating_min_in_service
  updating_max_batch      = var.updating_max_batch

  healthcheck_url  = var.healthcheck_url
  application_port = var.application_port

  solution_stack_name = var.solution_stack_name

  additional_settings = var.additional_settings

  env_vars = {
    "DB_CREATE"              = "update"
    "HOST_BACKEND"           = "http://localhost:8000"
    "HOST_FRONTEND"          = "http://localhost:3000"
    "JDBC_CONNECTION_STRING" = ""
    "RDS_DATABASE"           = var.database_name
    "RDS_HOST"               = module.rds_instance.instance_address
    "RDS_ENDPOINT"           = module.rds_instance.instance_endpoint
    "RDS_PORT"               = var.database_port
    "RDS_USER"               = var.database_user
    "RDS_PASS"               = var.database_password
    "RDS_HOST_DNS"           = module.rds_instance.hostname
  }
}

Wrong Descriptions of additional_security_groups and allowed_security_groups

Describe the Bug

In the README, under the Inputs table, the descriptions of additional_security_groups and allowed_security_groups input fields are switched. This is merely an issue in the README documentation - the module behaves according to the expected descriptions below.

Current

The description in the README currently shows

additional_security_groups: List of security groups to be allowed to connect to the EC2 instances
allowed_security_groups: List of security groups to add to the EC2 instances

Expected

additional_security_groups: List of security groups to add to the EC2 instances
allowed_security_groups: List of security groups to be allowed to connect to the EC2 instances

Redirect http to https

If I set http_listener_enabled = "true", loadbalancer_certificate_arn = someArn, and loadbalancer_ssl_policy = whatever, then I can get the load balancer to add an HTTPS listener and an HTTP listener.

But how can I make the HTTP listener create a rule that redirects to HTTPS?

Option to provide instance profile role or role policy

Hello,

Thanks for a great module, like all the other modules you have!

Currently we can specify the ec2_instance_profile_role_name, and the module will create the instance profile role with the default policy that includes read permissions to a few different services.

This is a bit cumbersome, since you may want either to specify other permissions to include in the role policy, or to scope the parameter store read permission to only the parameters for this specific environment.

It would therefore be great to have the possibility either to provide an existing role for the instances or to provide the policy that should be used.

Thanks!

allow to use custom AMI Imageid

This is a feature suggestion.

It would be nice if the module exposed an option to customize the AMI ImageId used by Beanstalk:

setting {
  name      = "ImageId"
  namespace = "aws:autoscaling:launchconfiguration"
  value     = var.custom_ami_imageid
}

When a Beanstalk setting is blank

When a setting such as SSLPolicy is left blank, the error returned is:

Creating Load Balancer listener failed Reason: An SSL policy must be specified for HTTPS listeners

Should *not* be passing the 'Name' tag to elastic beanstalk environment creation

As per this issue, the 'Name' tag is reserved for use by AWS Elastic Beanstalk:

hashicorp/terraform-provider-aws#3963

This results in failures to deploy and non-idempotency issues.

The null label module takes the provided tags and 'sanitises' them, but for this module the 'Name' tag needs stripping out.

 tags.%:                                        "3" => "5"
      tags.Name:                                     "" => "mine-staging-jenkins2build-eb-env"
      tags.Namespace:                                "" => "mine"

How to Reproduce:
I believe this happens in an already-provisioned environment, using terraform-aws-jenkins, with versions 1.10 and 1.11 of the Terraform AWS provider:

  • 2018-07-24 14:09:16.652 +0000 UTC (e-mip7mrnszq) : Service:AmazonCloudFormation, Message:No updates are to be performed.
  • 2018-07-24 14:09:17.325 +0000 UTC (e-mip7mrnszq) : Environment tag update failed.

resource "aws_security_group" "default" is always dirty

I have the following setting

allowed_security_groups = []

With or without that parameter, I get the output below every time I run terraform plan:

  ~ resource "aws_security_group" "default" {
        arn                    = "arn:aws:ec2:us-east-2:xxxxxxxxxxx:security-group/sg-xxxxxxxxxxxxxx"
        description            = "Allow inbound traffic from provided Security Groups"
        egress                 = [
            {
                cidr_blocks      = [
                    "0.0.0.0/0",
                ]
                description      = ""
                from_port        = 0
                ipv6_cidr_blocks = []
                prefix_list_ids  = []
                protocol         = "-1"
                security_groups  = []
                self             = false
                to_port          = 0
            },
        ]
        id                     = "sg-xxxxxxxxxxxxxx"
      ~ ingress                = [
          + {
              + cidr_blocks      = []
              + description      = ""
              + from_port        = 0
              + ipv6_cidr_blocks = []
              + prefix_list_ids  = []
              + protocol         = "-1"
              + security_groups  = []
              + self             = false
              + to_port          = 0
            },
        ]
        name                   = "beanstalk-development"
        owner_id               = "xxxxxxxxxxxxxx"
        revoke_rules_on_delete = false
        tags                   = {
            "Name" = "beanstalk-development"
        }
        vpc_id                 = "vpc-xxxxxxxxxxxxxxxx"
    }

How can that be avoided?

Module recreates all `settings` on each `terraform plan/apply`

terraform-aws-elastic-beanstalk-environment recreates all settings on each terraform plan/apply

    setting.1039973377.name:               "InstancePort" => "InstancePort"
    setting.1039973377.namespace:      "aws:elb:listener:22" => "aws:elb:listener:22"
    setting.1039973377.resource:           "" => ""
    setting.1039973377.value:                "22" => "22"
    setting.1119692372.name:               "" => "ListenerEnabled"
    setting.1119692372.namespace:      "" => "aws:elbv2:listener:443"
    setting.1119692372.resource:           "" => ""
    setting.1119692372.value:                "" => "false"
    setting.1136119684.name:               "RootVolumeSize" => "RootVolumeSize"
    setting.1136119684.namespace:     "aws:autoscaling:launchconfiguration" => "aws:autoscaling:launchconfiguration"
    setting.1136119684.resource:           "" => ""
    setting.1136119684.value:              "8" => "8"
    setting.1201312680.name:             "ListenerEnabled" => "ListenerEnabled"
    setting.1201312680.namespace:   "aws:elb:listener:443" => "aws:elb:listener:443"
    setting.1201312680.resource:         "" => ""
    setting.1201312680.value:             "false" => "false"

This feature/bug was present for years and is still not fixed:

hashicorp/terraform#6729
hashicorp/terraform-provider-aws#901
hashicorp/terraform#6257
hashicorp/terraform-provider-aws#280
hashicorp/terraform#11056
hashicorp/terraform-provider-aws#461

(tested some ideas from the links above, nothing worked 100%)

The only possible solution is to add this:

lifecycle {
    ignore_changes = ["setting"]
}

but it’s a hack, since the environment will no longer be updated if you change any of the settings.

Regarding terraform-aws-elastic-beanstalk-environment recreating the settings all the time, here is what’s probably happening:

  • Terraform sends all settings to AWS, but some of them are not relevant to the environment you are deploying
  • Elastic Beanstalk accepts all settings, applies the relevant ones, and throws away the rest
  • Next time Terraform asks about the settings, Elastic Beanstalk returns a subset of the values and probably in different order
  • Terraform can’t decide whether the settings are the same - they certainly look different (and it would require an advanced algorithm to determine that they are equivalent)
  • Terraform assigns a new ID to the entire array of settings and tries to recreate all of them
  • Elastic Beanstalk accepts the settings, applies the relevant ones, and throws away the rest - the cycle repeats

What’s a possible solution?
Introduce var.settings (a list of maps) to be able to provide all the required settings from outside the module.
It might work, but in practice it would be very difficult to know all the needed settings, and tedious to implement.

module fail with worker tier

Running the example code with the additional tier = "Worker" option fails with:


Error: Error applying plan:

1 error(s) occurred:

* module.elastic_beanstalk_environment.aws_elastic_beanstalk_environment.default: 1 error(s) occurred:

* aws_elastic_beanstalk_environment.default: ConfigurationValidationException: Configuration validation exception: Load Balancer ListenerEnabled setting cannot be applied because AWSEBLoadBalancer doesn't exist.
	status code: 400, request id: bd7a5475-7390-4fe9-a213-a728bf9c3895

Manually commenting out all the EB settings that start with aws:elb worked. If I can come up with a better solution, I'll propose a PR.

The additional_settings & RDS does not work

Hello,

I'm trying to add the RDS settings to my EB environment configuration, and it seems that functionality does not work at all.

Based on your documentation and the AWS documentation:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-rdsdbinstance

I'm trying to do like that:

additional_settings = [
    {
      namespace = "aws:rds:dbinstance"
      name      = "DBEngine"
      value     = "mysql"
    },
    {
      namespace = "aws:rds:dbinstance"
      name      = "DBEngineVersion"
      value     = "8.0.21"
    },
    {
      namespace = "aws:rds:dbinstance"
      name      = "DBInstanceClass"
      value     = "db.t2.micro"
    },
    {
      namespace = "aws:rds:dbinstance"
      name      = "MultiAZDatabase"
      value     = "false"
    },
    {
      namespace = "aws:rds:dbinstance"
      name      = "DBUser"
      value     = local.database.user
    },
    {
      namespace = "aws:rds:dbinstance"
      name      = "DBPassword"
      value     = random_password.password.result
    },
    {
      namespace = "aws:rds:dbinstance"
      name      = "DBDeletionPolicy"
      value     = "Delete"
    },
    {
      namespace = "aws:rds:dbinstance"
      name      = "DBAllocatedStorage"
      value     = "5"
    }
  ]

This configuration does not create any RDS DB or attach one to the EB environment.
Do you have any solution for that?

Allow configuring of Scheduled Actions

Hello guys and thank you for your work on this module.

Describe the Feature

I would like to have the possibility to configure multiple Scheduled actions in my environment using additional_settings or with a dedicated variable.

Use Case

I'm trying to configure my Beanstalk environment with some scheduled actions to handle my business daily load needs and ensure respect of some security compliance requirements.

Since we can do this using an .ebextensions file, we could use that method, but on the infrastructure team side we want to be sure that we can apply a mandatory scheduled action that cannot be changed by application development (.ebextensions being specific to the application side).

Describe Ideal Solution

I have 2 solutions at the moment.

Solution 1: We could create a dedicated scheduled_actions variable as follows:

variable "scheduled_actions" {
  type = list(object({
    name            = string
    minsize         = string
    maxsize         = string
    desiredcapacity = string
    starttime       = string
    endtime         = string
    recurrence      = string
    suspend         = string
  }))
  default     = []
  description = "Define a list of scheduled actions"
}

Advantages:

  • This variable allows us to declare as many scheduled actions as needed, using a dedicated object for each one.
  • Easier to specify than declaring each setting individually (see the second idea).

Drawbacks:

  • Some parameters are optional but will have to be filled anyway with a null value, since we cannot yet specify default values in objects (see: hashicorp/terraform#19898)
  • Adds another variable (not sure if that is a drawback here)

Solution 2: We could modify the additional_settings variable to handle the "resource" field for this specific case.

variable "additional_settings" {
  type = list(object({
    namespace = string
    name      = string
    value     = string
  }))
}

become

variable "additional_settings" {
  type = list(object({
    namespace = string
    name      = string
    value     = string
    resource  = string
  }))
}

  dynamic "setting" {
    for_each = var.additional_settings
    content {
      namespace = setting.value.namespace
      name      = setting.value.name
      value     = setting.value.value
      resource  = ""
    }
  }
become

  dynamic "setting" {
    for_each = var.additional_settings
    content {
      namespace = setting.value.namespace
      name      = setting.value.name
      value     = setting.value.value
      resource  = setting.value.resource
    }
  }
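With Solution 2 in place, a scheduled action could then be declared entirely through additional_settings, the resource field carrying the action name (per the AWS docs for the aws:autoscaling:scheduledaction namespace). The action name "nightly-scale-down" and its values below are purely illustrative:

```hcl
additional_settings = [
  {
    namespace = "aws:autoscaling:scheduledaction"
    resource  = "nightly-scale-down" # scheduled action name (hypothetical)
    name      = "MinSize"
    value     = "0"
  },
  {
    namespace = "aws:autoscaling:scheduledaction"
    resource  = "nightly-scale-down"
    name      = "MaxSize"
    value     = "0"
  },
  {
    namespace = "aws:autoscaling:scheduledaction"
    resource  = "nightly-scale-down"
    name      = "Recurrence"
    value     = "0 19 * * *" # every day at 19:00 UTC
  }
]
```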

Advantages:

  • No new variables
  • Allow us to declare as many scheduled actions as needed
  • Allow users to handle new settings that would require the specific setting resource field later

Drawbacks:

  • Sounds like a breaking change since we will change the object variable
  • Maybe too heavy in terms of readability?

Alternatives Considered

I've tried to use additional_settings without the resource parameter, just to see what happens.

I specified additional_settings as follows:

additional_settings = [
  {
    namespace   = "aws:autoscaling:scheduledaction"
    name        = "MinSize"
    value       = "1"
  },
  {
    namespace   = "aws:autoscaling:scheduledaction"
    name        = "MaxSize"
    value       = "2"
  },
  {
    namespace   = "aws:autoscaling:scheduledaction"
    name        = "StartTime"
    value       = "2015-05-14T07:00:00Z"
  },
  {
    namespace   = "aws:autoscaling:scheduledaction"
    name        = "EndTime"
    value       = "2016-01-12T07:00:00Z"
  },
  {
    namespace   = "aws:autoscaling:scheduledaction"
    name        = "Recurrence"
    value       = "*/20 * * * *"
  },
  {
    namespace   = "aws:autoscaling:scheduledaction"
    name        = "DesiredCapacity"
    value       = "2"
  }
]

Launching a plan mentions these new settings:

      + setting {
          + name      = "DesiredCapacity"
          + namespace = "aws:autoscaling:scheduledaction"
          + value     = "2"
        }
      + setting {
          + name      = "EndTime"
          + namespace = "aws:autoscaling:scheduledaction"
          + value     = "2016-01-12T07:00:00Z"
        }
      + setting {
          + name      = "MaxSize"
          + namespace = "aws:autoscaling:scheduledaction"
          + value     = "2"
        }
      + setting {
          + name      = "MinSize"
          + namespace = "aws:autoscaling:scheduledaction"
          + value     = "1"
        }
      + setting {
          + name      = "Recurrence"
          + namespace = "aws:autoscaling:scheduledaction"
          + value     = "*/20 * * * *"
        }
      + setting {
          + name      = "StartTime"
          + namespace = "aws:autoscaling:scheduledaction"
          + value     = "2015-05-14T07:00:00Z"
        }

Then apply, which produces the following error:

Error: InvalidParameterValue: The scheduled action name cannot be blank.
        status code: 400, request id: f4003748-9393-4202-824c-312fbf77b7fc

Which is logical, since the resource field is set to an empty string.

Additional Context

This sounds like both a bug and a feature request to me.

I'm open to discussion on this one and can propose a PR.

Feature Request - Add variable for environment tier

Hello @osterman @goruha @aknysh

Currently, your module for the Elastic Beanstalk environment doesn't have a way to set the application's tier to "Worker" because it is hardcoded to "WebServer". I am asking that a variable be created for the tier, defaulting to "WebServer" so it disrupts as little as possible. I will put in a PR shortly referencing this issue number.
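A hedged sketch of the change being requested (variable name is my suggestion, not the module's; the tier argument itself is part of the aws_elastic_beanstalk_environment resource and accepts "WebServer" or "Worker"):

```hcl
variable "environment_tier" {
  type        = string
  default     = "WebServer"
  description = "Elastic Beanstalk environment tier, either \"WebServer\" or \"Worker\""
}

resource "aws_elastic_beanstalk_environment" "default" {
  # ... existing arguments unchanged ...
  tier = var.environment_tier
}
```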

Thanks,

Lucas Pearson

Invalid solution stack in example


Simply running the example gives this error on `us-east-1`:

Error: Error applying plan:

1 error(s) occurred:

module.elastic_beanstalk_environment.aws_elastic_beanstalk_environment.default: 1 error(s) occurred:

aws_elastic_beanstalk_environment.default: InvalidParameterValue: No Solution Stack named '64bit Amazon Linux 2018.03 v2.12.2 running Docker 18.03.1-ce' found.
	status code: 400, request id: 656e5969-7bb3-4313-9158-2515435f3522

Changing to `64bit Amazon Linux 2018.03 v2.12.10 running Docker 18.06.1-ce` seems to work.
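Since AWS retires solution stack names over time, hard-coding them in examples goes stale. One way to avoid this is the AWS provider's aws_elastic_beanstalk_solution_stack data source, which resolves the latest matching stack at plan time (the regex below is illustrative):

```hcl
data "aws_elastic_beanstalk_solution_stack" "docker" {
  most_recent = true
  name_regex  = "^64bit Amazon Linux (.*) running Docker (.*)$"
}

module "elastic_beanstalk_environment" {
  # ... other module inputs ...
  solution_stack_name = data.aws_elastic_beanstalk_solution_stack.docker.name
}
```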

LoadBalancer coming blank in beanstalk UI

screen shot 2019-02-11 at 12 01 37 pm

The LoadBalancer section comes up blank in the Elastic Beanstalk UI. I cannot see/list/edit options such as instance port, HTTPS, routing, SSL certificates, S3 log enablement, etc. The screenshot is attached.

Error: ConfigurationValidationException on a apply


Describe the Bug

The plan goes OK.
After trying to apply my config, a ConfigurationValidationException pops up.

Expected Behavior

A clearer explanation of the problem. I think I used all the documented required vars, so I have no clue how to proceed.

Steps to Reproduce

module "elastic_beanstalk_application" {
  source = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-application.git?ref=tags/0.5.0"
  name   = "AppName"
}

module "elastic-beanstalk-environment" {
  source  = "cloudposse/elastic-beanstalk-environment/aws"
  version = "0.22.0"

  elastic_beanstalk_application_name = "AppName"
  name                               = "dev-AppName"
  vpc_id                             = aws_default_vpc.default.id
  region                             = "eu-west-1"
  solution_stack_name                = "64bit Amazon Linux 2 v5.0.2 running Node.js 12"
  application_subnets = [
    aws_default_subnet.default_az1.id,
    aws_default_subnet.default_az2.id,
    aws_default_subnet.default_az3.id,
  ]
  autoscale_max = 2
  env_vars = {
    API_URL = "https://my-api-url.net"
  }
}

Screenshots

Captura de Pantalla 2020-06-04 a les 13 15 48
Error: ConfigurationValidationException: Configuration validation exception: Invalid option value: '' (Namespace: 'aws:ec2:vpc', OptionName: 'ELBSubnets'): Specify the subnets for the VPC.
	status code: 400, request id: ac1268fc-77a0-479d-8a5d-37c6b024ce9b

  on .terraform/modules/elastic-beanstalk-environment/terraform-aws-elastic-beanstalk-environment-0.22.0/main.tf line 505, in resource "aws_elastic_beanstalk_environment" "default":
 505: resource "aws_elastic_beanstalk_environment" "default" {
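The error says the aws:ec2:vpc ELBSubnets option resolved to an empty string, i.e. no subnets were supplied for the load balancer. A hedged fix sketch — in module version 0.22.0 the relevant input appears to be loadbalancer_subnets, but verify the variable name against your version's documented inputs:

```hcl
module "elastic-beanstalk-environment" {
  # ... the inputs from the example above ...
  loadbalancer_subnets = [
    aws_default_subnet.default_az1.id,
    aws_default_subnet.default_az2.id,
    aws_default_subnet.default_az3.id,
  ]
}
```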

Environment:

  • OS: OSX
  • Version: Dockerized Terraform v0.12.21

auth_token changes destroys Redis cluster

We have this configuration:

resource "aws_elasticache_replication_group" "redis" {
  at_rest_encryption_enabled = true
  auth_token = data.external.example.result.REDIS_AUTH
  automatic_failover_enabled = false
  engine = "redis"
  engine_version = "5.0.5"
  node_type = lookup(var.aws_elasticache_cluster_node_type, terraform.workspace)
  maintenance_window = "sun:01:01-sun:23:00"
  number_cache_clusters = lookup(var.aws_elasticache_cluster_nodes, terraform.workspace)
  parameter_group_name = aws_elasticache_parameter_group.default.name
  port = 6379
  replication_group_id = "service-${terraform.workspace}"
  replication_group_description = "Service ${terraform.workspace}"
  snapshot_window = "00:00-01:00"
  subnet_group_name = aws_elasticache_subnet_group.default.name
  transit_encryption_enabled = true
  lifecycle {
    prevent_destroy = true
  }
}

Every time the auth_token needs a change, Terraform attempts to destroy and create a new cluster.
In the AWS GUI it's possible to set a new token and choose either to rotate or to set it.

Feels like we need to expose a token update strategy option in TF (https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyCacheCluster.html)
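For what it's worth, recent versions of the AWS provider appear to expose exactly this as auth_token_update_strategy on aws_elasticache_replication_group (a hedged sketch — check your provider version's documentation for availability and the accepted values):

```hcl
resource "aws_elasticache_replication_group" "redis" {
  # ... existing arguments unchanged ...
  auth_token                 = var.redis_auth_token # placeholder input
  auth_token_update_strategy = "ROTATE"             # or "SET"; modifies the token in place
}
```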

http_listener_enabled not working for me

Hello !

In my Beanstalk env, my load balancer listens on HTTPS. I would like to listen on HTTP too, to redirect HTTP to HTTPS. I know there is already a patch for this, and I'm using the http_listener_enabled parameter.

But... It doesn't work for me.

Here is a screenshot of the AWS console:
screenshot 2019-02-01 at 10 22 42

And terraform :

module "elastic_beanstalk_environment" {
  source = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment.git?ref=tags/0.4.8"

  # ...

  healthcheck_url              = "${var.healthcheck_url}"
  http_listener_enabled        = true
  loadbalancer_type            = "application"
  loadbalancer_certificate_arn = "${aws_acm_certificate.cert.arn}"

  loadbalancer_security_groups        = ["${module.metabase_sg.this_security_group_id}"]
  loadbalancer_managed_security_group = "${module.metabase_sg.this_security_group_id}"
}

tfstate:

"all_settings.232565.namespace": "aws:elbv2:listener:default",
"all_settings.232565.resource": "",
"all_settings.232565.value": "false",

What am I doing wrong?
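The tfstate above shows the aws:elbv2:listener:default option resolving to "false", so the flag is apparently not reaching the environment. As a hedged workaround (assuming — not confirmed for 0.4.8 — that your module version lets you pass raw settings, or that you fork it), the option that http_listener_enabled is meant to drive can be set directly:

```hcl
setting {
  namespace = "aws:elbv2:listener:default"
  name      = "ListenerEnabled" # enables the default HTTP :80 listener on the ALB
  value     = "true"
}
```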

Thanks

Beanstalk created resources naming should be configurable independently

Describe the Feature

When creating a new Beanstalk environment, the naming of some resources should be independently configurable with custom names.

Expected Behavior

For the created EC2 instances, S3 bucket, instance profile, and IAM roles, there should be a configuration that lets us customize their names, like the s3_bucket_access_log_bucket_name variable.

We could use these to customize our resources naming.

Use Case

We are currently starting to use this module to deploy Beanstalk, but we have specific naming constraints for our resources:

For EC2 : Product-Component-Environment-Blue_Green
For IAM Roles: Product-Component-Environment-AWS::Region
For S3: Product-Component-Environment-AWS::AccountId-AWS::Region

Since we are doing multi-region deployments, there are naming standards we need to respect to ensure consistent naming of our resources. As you can see, they are similar and could be implemented using the context module mechanics, but they are not exactly the same.

Problem: the module uses the same naming pattern for these resources.

name = "${module.this.id}-eb-service"

name = "${module.this.id}-eb-ec2"

name = "${module.this.id}-eb-default"

name = "${module.this.id}-eb-ec2"

bucket = "${module.this.id}-eb-loadbalancer-logs"

For S3 we clearly need the account ID in the name to avoid conflicts, since it's a global resource; the same goes for IAM roles, but not for the EC2 name, which is specific to the account.

Describe Ideal Solution

Being able to specify an alternative name for these resources as a variable.
For example, we could have these variables to specify explicit names:

  • iam_role_service_name
  • iam_role_ec2_name
  • iam_role_policy_default_name
  • iam_instance_profile_ec2_name
  • s3_bucket_elb_logs_bucket_name

Or maybe being able to specify a prefix or a global naming convention for each type of resource.
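A hedged sketch of how one such override could work inside the module — variable and resource names here are my proposals, not the module's actual code; the fallback expression mirrors the module's existing "${module.this.id}-eb-loadbalancer-logs" pattern:

```hcl
variable "s3_bucket_elb_logs_bucket_name" {
  type        = string
  default     = null
  description = "Custom name for the load balancer logs bucket; falls back to the module-generated name"
}

resource "aws_s3_bucket" "elb_logs" {
  # Use the explicit override when given, otherwise keep today's generated name
  bucket = coalesce(var.s3_bucket_elb_logs_bucket_name, "${module.this.id}-eb-loadbalancer-logs")
}
```

The same coalesce pattern could apply to the IAM role, policy, and instance profile names.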

Alternatives Considered

Before opening this feature request, I tried to use the attributes option and label_order from the context module.

var.tfvars

...
product = "product"
component = "component"
environment = "env"
label_order = ["namespace", "name", "environment", "attributes"]
...

main.tf

module "elastic_beanstalk_environment" {
  ...
  label_order = var.label_order
  attributes   = [data.aws_caller_identity.current.account_id,var.region]
  ...

The plan gives me:

Beanstalk env name: product-comp-env-862853942159-eu-west-1
ec2 name: product-comp-env-862853942159-eu-west-1-eb-ec2
ec2 iam instance: product-comp-env-862853942159-eu-west-1-eb-ec2
iam role service: product-comp-env-862853942159-eu-west-1-eb-service
s3 bucket for logs: product-comp-env-862853942159-eu-west-1-eb-loadbalancer-logs

Which allows us at least to broadly respect our convention, but floods some resource names with useless information.

Additional Context

The S3 limitation of 63 characters for bucket names makes it hard to name our bucket, since the module appends the -eb-loadbalancer-logs suffix, consuming 21 of the 63 available characters.

That would probably warrant another issue, but since this feature would address it, it's good information to have.

I was thinking about using multiple context modules to solve this, but in the end it's not possible since only a single one is referenced.

Add Example Usage

what

  • Add example invocation

why

  • We need this so we can soon enable automated continuous integration testing of module

manage ebextentions from terraform

Hello people,
Is there any possibility to manage .ebextensions using Terraform? My need is:

  • create resources with explicit names for the elastic load balancer, autoscaling group, and launch configuration; otherwise, they will get names derived from the CloudFormation stack name
