
Comments (31)

herasimau commented on June 13, 2024

Any news about this feature?

encron commented on June 13, 2024

Hey @apparentlymart ,

We have another use case where we want to be able to perform a zero-downtime rolling update of clustered stateful services where we persist data on EBS volumes:

The EBS volume has to be detached and attached to a new instance sequentially for each instance of that cluster. Since we want to ensure there is no downtime, we need to cycle every instance one by one while keeping the cluster itself healthy. However, Terraform will instead perform create/destroy operations on all instances at the same time (although limited by the global -parallelism flag).

So even with create_before_destroy = true, Terraform will create n new instances and then destroy the old ones, all at the same time. This will result in downtime for the gap in which the EBS volumes are being swapped from the old to the new instances, plus the time it takes for the service to come back up afterwards. This would of course not be an issue if we didn't have to swap the EBS volumes, because then we could mark the new instance as "created" only after it joined our cluster and passed health checks, while keeping the old one alive, so we'd always have at least 1 healthy instance running.

Therefore, to fix this issue, ideally, we'd like a way to set some sort of mutex/lock where we can:

  1. acquire lock
  2. create the first new instance of the resource
  3. destroy the old one (and detach the EBS volume)
  4. attach the EBS volume to the new one
  5. perform health checks / wait for it to join the cluster
  6. release lock, go to the next instance and repeat 1-5.

EDIT: after giving this some more thought, I think only having a sort of mutex on the instance resource is actually not enough to perform a clean rolling update, as we'd also need to roll out changes to the EBS volume attachments, route53 records, maybe an ELB, etc. As a workaround for now, we created a simple wrapper that uses -target to apply changes sequentially on predefined resources. However, it would be amazing if we didn't have to rely on wrappers but could use a built-in rolling update system within Terraform instead ;)
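
For reference, a bare-bones sketch of that kind of wrapper (purely illustrative: the resource addresses are hypothetical, and it simply shells out to terraform apply -target for a fixed, ordered list of targets):

package main

import (
  "log"
  "os"
  "os/exec"
)

func main() {
  // Hypothetical ordered list of resource addresses to roll out one at a time.
  targets := []string{
    "aws_instance.cluster[0]",
    "aws_instance.cluster[1]",
    "aws_instance.cluster[2]",
  }

  for _, target := range targets {
    log.Printf("applying %s", target)
    cmd := exec.Command("terraform", "apply", "-auto-approve", "-target="+target)
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
    if err := cmd.Run(); err != nil {
      log.Fatalf("apply of %s failed: %v", target, err)
    }
  }
}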

jrevillard commented on June 13, 2024

I'm also very much interested in this. We use TF to deploy on Openstack, and the anti-affinity mechanism will fail most of the time when we create instances in parallel. Also, as we have a lot of resources, -parallelism=1 is really slow...

matthughes commented on June 13, 2024

Would love this as well. Above use cases ring true. I also have an issue this would solve with OpenStack server_group / anti-affinity.

Scheduling 10 instances with anti-affinity in parallel blows up pretty reliably on our Openstack deployment. It seems like the server maintains a global lock and only allows one instance to be scheduled at a time, leading to timeouts. There is another issue floating around here for custom timeout per resource, but I think this is a more elegant solution for most cases.

OffColour commented on June 13, 2024

This would be extremely useful. It's OK to say count is designed for the case where all instances are identical, but that's only the end state. Changing, say, the size of VMs created with count would result in them all rebooting at the same time.
As it stands, I can't use count at all.

angelroldanabalos commented on June 13, 2024

I've managed to build a workaround:

First we write an initial state; in my case this is in the container where I execute Terraform:

resource "null_resource" "set_initial_state" {
  depends_on = [xxxxxxxxx]
  provisioner "local-exec" {
    interpreter = ["bash", "-c"]
    command     = "echo \"0\" > initial_state.txt"
  }
}

Secondly, with a while loop we block all of the threads except one. I have a timeout of 3 hours because if there's an error the other threads would otherwise be blocked forever:

resource "null_resource" "configure" {

  count = var.XXXXXX
  depends_on = [null_resource.set_initial_state]

# we block all of the threads except the one that is equal to the count we are currently evaluating

  provisioner "local-exec" {
    interpreter = ["bash", "-c"]
    command     = "while [[ $(cat initial_state.txt) != \"${count.index}\" ]]; do echo \"${count.index} is asleep...\";((c++)) && ((c==180)) && break;sleep 60;done"
  }

#  Here you put the code you want to execute serially


#  Finally we increase the state by 1 so that the next thread is unblocked

  provisioner "local-exec" {
    interpreter = ["bash", "-c"]
    command     = "echo \"${count.index + 1}\" > initial_state.txt"
  }

}

It's by no means elegant, but it works, and you don't need to modify -parallelism, which impacts the whole execution.

bflad commented on June 13, 2024

Just in case it is not explained or clear in this discussion, provider resource and data source logic can artificially constrain remote actions via a mutex or semaphore. There are mutex and semaphore implementations in the Go standard library and various community Go modules. AWS and other large scale providers use this technique.

For example, it is possible to limit all instances of a resource type to sequential creation (one at a time) via the Go standard library's sync.Mutex:

// Mutex to make resource creation sequential.
//
// This is a very trivial example to show the concept; other
// Go standard library or community Go modules for mutex
// and semaphore handling may use a different setup, such
// as mapping a condition/string to unique mutex/semaphore.
var resourceExampleCreateMutex sync.Mutex

func resourceExampleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
  resourceExampleCreateMutex.Lock()
  defer resourceExampleCreateMutex.Unlock()

  // ... further resource creation logic ...
}

More complex logic is certainly possible, including using attribute values to tailor mutex and semaphore usage. For example, you could use an attribute value, such as a parent resource identifier, to limit child resource operations per parent resource.

For further inspiration, terraform-plugin-sdk version 1 used to contain helper/mutexkv.MutexKV, which supported mutex handling via a string key. The implementation code from that small helper package can be found here: https://github.com/hashicorp/terraform-plugin-sdk/blob/2c03a32a9d1be63a12eb18aaf12d2c5270c42346/helper/mutexkv/mutexkv.go
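
To illustrate the keyed approach, here is a minimal sketch (not the original helper; imports and the rest of the provider are omitted as in the example above, and the parent_id attribute name is hypothetical):

// mutexKV serializes operations that share the same string key, so that
// child resources of the same parent run one at a time while unrelated
// resources can still proceed in parallel.
type mutexKV struct {
  lock  sync.Mutex
  store map[string]*sync.Mutex
}

func (m *mutexKV) get(key string) *sync.Mutex {
  m.lock.Lock()
  defer m.lock.Unlock()
  if m.store == nil {
    m.store = make(map[string]*sync.Mutex)
  }
  if _, ok := m.store[key]; !ok {
    m.store[key] = &sync.Mutex{}
  }
  return m.store[key]
}

var childResourceMutexKV mutexKV

func resourceChildCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
  // Serialize creations per parent resource; children of other parents are unaffected.
  mutex := childResourceMutexKV.get(d.Get("parent_id").(string))
  mutex.Lock()
  defer mutex.Unlock()

  // ... further resource creation logic ...
  return nil
}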

It is worth mentioning that this type of resource logic does have one particular limitation: it only applies to a single provider instance (alias), since separate provider instances can have separate configurations and are run as separate processes. An external coordination system, such as Consul locks, or Terraform core natively implementing a provider locking mechanism, would be required to work around that particular issue. However, for most providers this is rarely a problem in practice, since provider instance configurations typically differ across remote system boundaries anyway (e.g. AWS services typically have separate operation limits per region since they are different API endpoints, and the AWS provider currently requires separate provider configurations for separate regions).

None of this is to say that the SDK should not natively (re)implement this type of functionality, but hopefully this helps explain something that can already be done today.

MaxFlanders commented on June 13, 2024

+1, this would be very helpful. I am building a clustered system where the masters are identical (and can be built using the same resource) but need to be joined to the cluster one at a time.

Drupi commented on June 13, 2024

I have another use case for this. I'm running AWS EKS fully managed by Terraform, but instead of creating regular ASGs I'm creating a CloudFormation stack per ASG, because it gives me the ability to perform rolling updates of already-existing EC2 instances. The problem is that when I do an upgrade, all of the CF stacks start upgrading their ASGs at the same time, which can cause a denial of service.

zzzuzik commented on June 13, 2024

Same here: one particular resource (out of our control) dictates -parallelism=1 for a whole large-scale infrastructure of thousands of objects.

jeannich commented on June 13, 2024

Sharing here an almost complete piece of code implementing the ideas shared previously.
The main concept is: I want a specific resource "foo.bar" to be deployed sequentially.
To achieve that, I use a file mutex that I acquire before "foo.bar" and release just after "foo.bar".

Here is the pseudo code:

resource "null_resource" "acquire_file_mutex" {
  count = var.sequential_lock_label != null ? 1 : 0

  # Triggers to ensure mutex protection each time the service instance is updated
  triggers = {
    foo_bar_params_hash = base64sha256(var.foo_bar_params)  # put here any parameter that would cause a foo.bar update
    lock_label = var.sequential_lock_label
  }

  provisioner "local-exec" {
    command = "../scripts/file_mutex.sh acquire ${var.sequential_lock_label}"
  }

  # Make sure we release the lock in case this resource is deleted
  provisioner "local-exec" {
    when    = destroy
    command = "../scripts/file_mutex.sh release ${self.triggers.lock_label}"
  }
}


resource "foo" "bar" {
  params       = var.foo_bar_params

  depends_on = [null_resource.acquire_file_mutex]
}



resource "null_resource" "release_file_mutex" {
  count = var.sequential_lock_label != null ? 1 : 0

  # Trigger to ensure that whenever 'acquire_file_mutex' is run, then this 'release_file_mutex' will be run also
  triggers = {
    acquire_lock_id = null_resource.acquire_file_mutex[count.index].id
  }

  provisioner "local-exec" {
    command = "../scripts/file_mutex.sh release ${var.sequential_lock_label}"
  }

  depends_on = [foo.bar]
}

Here is the code of the file_mutex.sh script:

#!/bin/bash

action=$1
lock_name=$2
timeout_sec=${3:-300}   # assumed default (in seconds), since the Terraform config above calls the script without a timeout

root_path="/tmp/file_mutex"

acquire(){

    local locked=0
    local max_sleep_time_sec=4

    local start_time_sec=$(date +%s)
    local elapsed_time_sec=0

    echo "Trying to acquire lock \"$root_path/$lock_name\""
    mkdir -p $root_path  #make sure the root path exists

    while [ $locked -eq 0 ] && [ $elapsed_time_sec -le $timeout_sec ]
    do
        if mkdir $root_path/$lock_name > /dev/null  2>&1; then   # based on the fact that mkdir is an atomic operation
            echo "File mutex acquired"
            locked=1
        else
            random_sleep_time_sec=$((1 + $RANDOM % $max_sleep_time_sec))
            echo "Could not acquire file mutex _ Sleeping for $random_sleep_time_sec sec before retry"
            sleep $random_sleep_time_sec

            current_time_sec=$(date +%s)
            elapsed_time_sec=$((current_time_sec - start_time_sec))
        fi
    done

    if [ $locked -eq 0 ]; then echo "Failed to acquire lock" ; return 1; fi
}

release(){
    if [ -d "$root_path/$lock_name" ]
    then
        rm -rf $root_path/$lock_name
    fi
}



$action

apparentlymart commented on June 13, 2024

Hi all,

For the moment we don't have any specific plans in this area, due to attention being elsewhere. From re-reading the discussion here I see two main use-cases:

  • Backend cannot deal with so many requests at the same time: in an ideal world, this isn't something a Terraform user should be worrying about; instead the provider itself should be insulating Terraform from limitations of the underlying API to make it work within Terraform's model. At the moment the provider SDK lacks any first-class features to help with this, so I expect we will look for ways to improve on this in future SDK changes, but in the meantime some providers use a whole-provider-process synchronization primitive (e.g. a semaphore) to limit concurrency (see the sketch after the example below).

  • The ordering of the creation of the instances needs to be controlled in some way, such as if one of them is "special". We intend count to be for creating multiple instances that are equivalent, so this use-case is slightly outside of its scope. In this case I would lean towards being explicit about it and having the special node be a separate resource, and thus the relationship can be seen clearly in configuration:

    resource "aws_instance" "leader" {
      # ...
    }
    resource "aws_instance" "followers" {
      count = 3
    
      # ...
      user_data = "leader=${aws_instance.leader.private_ip}"
    }  
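
For the first use-case, such a whole-provider-process primitive could look roughly like the following sketch inside a provider's resource code (this is not an SDK feature; it's just a buffered channel used as a counting semaphore, the limit of two concurrent creations is arbitrary, and imports are omitted):

// Buffered channel used as a counting semaphore, shared by every instance
// of this resource type within a single provider process. Capacity 2 means
// at most two create operations run against the backend at the same time.
var resourceExampleCreateSem = make(chan struct{}, 2)

func resourceExampleCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
  resourceExampleCreateSem <- struct{}{}        // acquire a slot (blocks while full)
  defer func() { <-resourceExampleCreateSem }() // release the slot when done

  // ... further resource creation logic ...
  return nil
}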

I expect we will re-visit these use-cases in the long run and look for ways to accommodate them better. #66 is somewhat-related to the first use-case too, since it would allow some providers to reduce load on their backend APIs by coalescing similar operations together.

For the moment, and for the foreseeable future, our attention is unfortunately focused elsewhere.

jturver1 commented on June 13, 2024

Not the recommended approach, and a script or CLI tool to handle the backend API limitations would help, but cloud vendors seem to be moving faster and faster with releasing new capabilities and associated resource types, and provider support for resources you'd like to manage can lag for many months after release. In those cases, being able to optionally configure e.g. for_each on local-exec to run sequentially, or based on an "order" index of some kind, would help a lot.

In the example below, I just need to make sure that one iteration of for_each / local-exec does not overlap with another. Sleep is a temporary hack for demo purposes only.

resource "null_resource" "azfw_policy_network_rule" {
  for_each = try(local.network_rules_map != null ? local.network_rules_map : tomap(false), {}) 
  triggers = {
    key                              = each.key
    subscription_id                  = var.subscription_id
    rule_name                        = each.key
    rule_collection_name             = each.value.rule_collection_name
    rule_collection_group_name       = each.value.rule_collection_group_name
    rule_fw_policy_name              = each.value.rule_fw_policy_name
    rule_fw_policy_rg_name           = each.value.rule_fw_policy_rg_name
    rule_description                 = each.value.rule_description
    rule_source_addresses            = each.value.rule_source_addresses
    rule_destination_addresses       = each.value.rule_destination_addresses
    rule_destination_ports           = each.value.rule_destination_ports
    rule_ip_protocols                = each.value.rule_ip_protocols
  }
  provisioner "local-exec" {
    command = <<COMMAND
        az login --service-principal --username $TF_VAR_client_id --password $TF_VAR_client_secret --tenant $ARM_TENANT_ID
        az account set --subscription ${var.subscription_id}
        sleep $(( $RANDOM % 120 ))
        az network firewall policy rule-collection-group collection rule add --name ${each.key} --collection-name ${each.value.rule_collection_name} --rule-collection-group-name ${each.value.rule_collection_group_name} --policy-name ${each.value.rule_fw_policy_name} --resource-group ${each.value.rule_fw_policy_rg_name} --rule-type ${each.value.rule_type} --description '${each.value.rule_description}' --source-addresses ${each.value.rule_source_addresses} --destination-addresses ${each.value.rule_destination_addresses} --destination-ports ${each.value.rule_destination_ports} --ip-protocols ${each.value.rule_ip_protocols}
        sleep $(( $RANDOM % 20 ))
    COMMAND
  }
  provisioner "local-exec" {
    when = destroy
    command = <<COMMAND
        az login --service-principal --username $TF_VAR_client_id --password $TF_VAR_client_secret --tenant $ARM_TENANT_ID
        az account set --subscription ${self.triggers.subscription_id}
        sleep $(( $RANDOM % 120 ))
        az network firewall policy rule-collection-group collection rule remove --name ${each.key} --collection-name ${self.triggers.rule_collection_name} --rule-collection-group-name ${self.triggers.rule_collection_group_name} --policy-name ${self.triggers.rule_fw_policy_name} --resource-group ${self.triggers.rule_fw_policy_rg_name}
        sleep $(( $RANDOM % 20 ))
    COMMAND
  }
}

m-yosefpor commented on June 13, 2024

We have many use cases for which this feature is essential. Without a rolling mechanism for creating resources, Terraform is painful for immutable infrastructure at large scale. We have many VMs deployed via Terraform in Openstack, and changing an attribute which causes instance recreation is not possible due to the lack of this feature. create_before_destroy does not help, since in this case it would create 200 new VMs and then destroy all 200 old VMs, which is not feasible. Using -parallelism=1 is very slow, as many resources (like VM ports, security_groups, etc.) can be changed and created in parallel. Having per-resource parallelism would help a lot.

Right now, we handle these scenarios by writing scripts to apply Terraform partially (with some set of targets), multiple times.

I've seen the Pulumi project, which seems promising and supports these lifecycle scenarios better than Terraform for these use cases.

dfairhurst commented on June 13, 2024

I'd like to see this. Our Openstack deployment can only handle adding security rules to a group one at a time. It takes all the requests, but processes them sequentially. The problem is that even if Terraform sends 100 requests in parallel (-parallelism 100) they'll only be processed one by one. If I could specify through the resource that the rules are only to be done with parallelism 1, then Terraform would be able to do other operations in parallel instead and optimise the deployment time.

dthvt commented on June 13, 2024

I'll add another use case:

We are deploying a set of servers behind a load balancer. Each server automatically uses LetsEncrypt w/ DNS validation to get certificates (via user_data startup scripting). However, since we don't want to do TLS termination on the load balancer, each server should have a certificate that is valid for "service.domain.com" in addition to "serverX.domain.com". This is supported by LetsEncrypt, however the DNS validation must thus be done sequentially as the servers are deployed. As it stands now, deploying the identical servers using "count = X" results in a race condition where not all servers can successfully validate w/ LetsEncrypt.

AlpayY commented on June 13, 2024

I would like to see this too, especially when provisioning with null_resources. Some provisioning commands need to be executed in sequence, but that shouldn't mean my virtual machines can't be created in parallel.

james-powis commented on June 13, 2024

I have a reservation system for some physical hardware which is allocated and configured on demand using models in Terraform. Unfortunately, the API that finds and reserves the hardware has a pretty nasty race condition which is not easy to overcome.

Disabling parallelism with -parallelism=1 solves the problem, but should not be an absolute requirement.

9numbernine9 commented on June 13, 2024

Hello! Just chiming in with another possible use case for this feature: reliably installing Azure VM extensions (azurerm_virtual_machine_extension). We have multiple VM extensions that we would like to install after the VM resource is created, but attempting to install multiple VM extensions simultaneously may not work. (Also note that we haven't authored these extensions, so we have little-to-no control over how they operate.)

In some cases, a given VM extension may issue commands to the VM's package manager (e.g. apt, dpkg, etc.) to install any software required for those extensions to operate. Running multiple simultaneous instances of a package manager often doesn't work because they use a lock file to prevent simultaneous operation (which is reasonable). Often these extensions will fail when executed with terraform apply the first time, but will succeed on future invocations once the exclusive lock of whichever extension was previously executing has finished.

Our current workarounds are either running terraform apply multiple times (annoying) or running with -parallelism=1 for the entire apply (slow). Being able to control parallelism specifically for one resource type would be really nice in this scenario. 😊

faseyiks commented on June 13, 2024

Hello, I can add another use case for resource-level parallelism: there are cases where a parent/top-level resource keeps track of child/dependent resources and therefore updates itself, and while that top-level update is in progress, further updates from new child resources are held in abeyance or left pending. A case in point is the ibm_lb resource, where each ibm_lb_listener resource forces an update request to the ibm_lb resource. It would appear that the update process/thread is a singleton. When you have count on the number of listener resources, this breaks Terraform with the error: "The load balancer with ID 'rxxxx' cannot be updated because its status is 'UPDATE_PENDING'."

cbus-guy commented on June 13, 2024

We are experiencing issues when attempting to bootstrap Chef using a null resource, or when using the Chef provisioner while building servers, either with VMware or Azure. There are issues with vault permissions being assigned properly to the node in the Chef server. This succeeds when we set parallelism to 1, but fails intermittently (though fairly consistently) when it is not set to 1. It would be nice to set only the bootstrap null resource to a parallelism of 1 and allow everything else to run in parallel.

holmesb commented on June 13, 2024

When selecting the next available IP in Netbox, duplicates are created unless parallelism is 1. It would be better/faster to be able to do this on just the IP resource.

santosh0705 commented on June 13, 2024

I ran into a similar issue. Following...

angelroldanabalos commented on June 13, 2024

I'm also very interested in this feature. Currently I'm migrating a product to the cloud that can increase the number of servers dynamically, but in order to do that the servers have to be configured one by one.

It is not very clean to have 10 blocks of resources each depending on the previous one (if we needed 10 servers), and if we need to scale to 20 we then have to change the code instead of just a variable.

nakulkorde commented on June 13, 2024

+1 would be really helpful to have this feature available.

mkjmdski commented on June 13, 2024

+1, this is still a much-needed feature! As a temporary workaround for slowed-down pipelines, I managed to control Terraform runs with two environment variables set in the Terraform pipeline:

TF_CLI_ARGS_PLAN="-parallelism=80"
TF_CLI_ARGS_APPLY="-parallelism=1"

It seems pretty obvious but might be useful for somebody. If you are able to do many read operations but not many writes, these two environment variables can do the trick.

mbailey-eh commented on June 13, 2024

Got another use case here. AWS Prefix lists.

If you pass a list of CIDR ranges to the AWS managed prefix list in a for_each, it can only modify the list one IP at a time, and if the list is already in the "modify" state, it can't add the next IP until it completes the modification and opens back up to be modified again with the next IP. It will fail unless you explicitly set -parallelism=1.

variable "ip_list" {
  type = list(string)
  default = [
    "1.1.1.1/24",
    "2.2.2.2/24",
    "3.3.3.3/24"
  ]
}

resource "aws_ec2_managed_prefix_list" "managed_prefix_list" {
  name           = "Prefix-List-tf"
  address_family = "IPv4"
  max_entries    = 5


  tags = {
    tf_managed = true
  }
}

resource "aws_ec2_managed_prefix_list_entry" "ip_entry" {
  for_each       = toset(var.ip_list)
  cidr           = each.value
  description    = "samplePrefix-List-tf"
  prefix_list_id = aws_ec2_managed_prefix_list.managed_prefix_list.id
}

hasan4791 commented on June 13, 2024

In IBM Cloud, in order to update a worker, we always perform a worker replace, which needs to be executed sequentially. Due to the Terraform limitation, we aren't able to perform this action, and the only way to achieve it today is by updating the parallelism config to "1" (the default is 10). It would be great if we also had control over resource creation, something like "seq_count" or "for_each_seq".

hasan4791 commented on June 13, 2024

Thank you @bflad for laying it out so clearly. I have it implemented in our provider for testing and it works sweetly. By the way, I'm trying to move some operations from local-exec provisioners into a custom provider. Though sequential execution is achieved, any failure on resource creation doesn't break the loop, and other routines continue to run (which is the intended design, though). Do I break a core Terraform principle if I have this handling implemented in our provider?

bflad commented on June 13, 2024

Glad this is helping, @hasan4791. This is an acceptable design with Terraform, given that the only effect it is having is making core's requests to the provider "take longer" in a time sense. It's not affecting core's operations or behaviors, given that the provider responses are not being modified in any way.

My understanding is that core will finish any in progress operations on failure (controlled by the graph makeup at the time of the failure and parallelism settings), while not starting any new operations. The upstream maintainers would be the authoritative source of information on that topic if you are seeing anything potentially unexpected.

francisferreira commented on June 13, 2024

Not only can we not control parallelism, we cannot control the order in which resources will be created either... Sigh... We have a business requirement that IPs from IP prefixes need to be created with their names reflecting their own IP addresses. For instance, 'pip-101.102.103.104' would have address 101.102.103.104. The way the Azure API handles requests supports that, for the 1st IP in a prefix will surely be the first available address in it, the 2nd will be the second, and so on. But Terraform falls short and sadly does not offer a way to achieve that allocation pattern.

Okay. We can't control parallelism per resource, but we do have the flag '-parallelism=1' to force the resources to be created sequentially. Hurray, let's kill our performance to adhere to our stupid rule! At least that would work, right? Well, think again... Yes, we can control parallelism with that flag. But NO, WE CANNOT ENSURE ORDERING! For some reason known only to Terraform devs (and the Devil himself), a count-loop is NOT executed in order when we use the '-parallelism=1' flag. Example of one such apply:

azurerm_public_ip.smtp_pfix_02[2]: Creating...
azurerm_public_ip.smtp_pfix_02[2]: Creation complete after 7s [id=/...]
azurerm_public_ip.smtp_pfix_02[6]: Creating...
azurerm_public_ip.smtp_pfix_02[6]: Creation complete after 5s [id=/...]
azurerm_public_ip.smtp_pfix_02[3]: Creating...
azurerm_public_ip.smtp_pfix_02[3]: Creation complete after 4s [id=/...]
azurerm_public_ip.smtp_pfix_02[4]: Creating...
azurerm_public_ip.smtp_pfix_02[4]: Creation complete after 7s [id=/...]
azurerm_public_ip.smtp_pfix_02[11]: Creating...
azurerm_public_ip.smtp_pfix_02[11]: Creation complete after 7s [id=/...]
azurerm_public_ip.smtp_pfix_02[8]: Creating...
azurerm_public_ip.smtp_pfix_02[8]: Creation complete after 7s [id=/...]
azurerm_public_ip.smtp_pfix_02[13]: Creating...
azurerm_public_ip.smtp_pfix_02[13]: Creation complete after 7s [id=/...]
azurerm_public_ip.smtp_pfix_02[10]: Creating...
azurerm_public_ip.smtp_pfix_02[10]: Creation complete after 7s [id=/...]

Why? WHY? Why wouldn't a count-loop be executed in order when parallelism is set to 1? I'm honestly crying inside for having to work with Terraform right now... And yeah, some will come with that old excuse: "TF operates on resources, not resource instances, so we cannot use count or for_each loops to create dependencies between resource instances, and bla, bla, bla..." First, we are not talking about physical dependencies (like that of a VM and a Vnet/Subnet), but simply a logical relationship. Second, why would any iteration loop not execute in order if no parallelism is at play? No self-respecting language would randomly iterate through a repetition loop.
