
terraform-provider-yandex's Introduction

Terraform Provider

Requirements

  • Terraform 0.12+
  • Go 1.21 (to build the provider plugin)

Building The Provider

Clone the repository to: $GOPATH/src/github.com/yandex-cloud/terraform-provider-yandex

$ mkdir -p $GOPATH/src/github.com/yandex-cloud; cd $GOPATH/src/github.com/yandex-cloud
$ git clone git@github.com:yandex-cloud/terraform-provider-yandex

Enter the provider directory and build the provider:

$ cd $GOPATH/src/github.com/yandex-cloud/terraform-provider-yandex
$ make build

Using the provider

If you're building the provider, follow the instructions to install it as a plugin. After placing it into your plugins directory, run terraform init to initialize it. Documentation about the provider-specific configuration options can be found on the provider's website. An example of using an installed provider from a local directory:

Write the following config into ~/.terraformrc:

provider_installation {
  dev_overrides {
    "yandex-cloud/yandex" = "/path/to/local/provider"
  }

  direct {}
}
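
Once the override is in place, a minimal configuration that exercises the locally built provider could look like the sketch below; the cloud, folder and zone values are placeholders you would replace with your own.

terraform {
  required_providers {
    yandex = {
      source = "yandex-cloud/yandex"
    }
  }
}

provider "yandex" {
  cloud_id  = "your-cloud-id"   # placeholder
  folder_id = "your-folder-id"  # placeholder
  zone      = "ru-central1-a"
}

While a dev_overrides entry is active, Terraform should use the local binary for that provider and print a warning about the override on each run.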

Developing the Provider

If you wish to work on the provider, you'll first need Go installed on your machine (version 1.21, as listed in the requirements above). You'll also need to correctly set up a GOPATH, as well as add $GOPATH/bin to your $PATH.

To compile the provider, run make build. This will build the provider and put the provider binary in the $GOPATH/bin directory.

$ make build
...
$ $GOPATH/bin/terraform-provider-yandex
...

In order to test the provider, you can simply run make test.

$ make test

In order to run the full suite of Acceptance tests, run make testacc.

Note: Acceptance tests create real resources, and often cost money to run.

$ make testacc
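
Acceptance tests talk to the real API, so credentials have to be present in the environment before running them. The variable names below are an assumption based on the provider's usual configuration options; consult the provider documentation for the authoritative list.

$ export YC_TOKEN="<oauth-or-iam-token>"   # assumed variable name
$ export YC_CLOUD_ID="<cloud-id>"          # assumed variable name
$ export YC_FOLDER_ID="<folder-id>"        # assumed variable name
$ export YC_ZONE="ru-central1-a"           # assumed variable name
$ make testacc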

terraform-provider-yandex's People

Contributors

alex-burmak, alexanderkhaustov, apilikov, art22m, baranov1ch, cgriggs01, chaoticcube1, denchick, diphantxm, elemir, g0djan, gennadyspb, kdudkov, liubakarlinaau, luba239, manykey, nar3k, nickmoro, notanonymousenough, ostinru, ozerovandrei, patsevanton, perekalov, phmx, rkhapov, scaiper, seukyaso, shmel1k, yahor-s, yandex-cloud-bot

terraform-provider-yandex's Issues

yc backend storage

Hello,

I propose implementing a new yc backend storage and contributing it to https://github.com/hashicorp/terraform/tree/master/backend/remote-state, as I suppose hacks around the s3 backend are unacceptable for a company such as Yandex. Let's discuss how the community can be helpful, and whether someone from inside your team has ideas or workarounds for how to implement this storage.

State storage is extremely helpful for team collaboration, and I'm not sure it is OK to use hacks like minio for this functionality.

File Provisioning onto Compute Instance fails

I'm trying to copy some files onto 3x Compute Instances hosting MongoDB via the file provisioner plugin. This configuration works in GCP, where I'm using the exact same machine image.

The file provisioner config is e.g.:

  provisioner "file" {
    source      = "path/to/file1.sh"
    destination = "~/file1.sh"
  }

The error I'm getting with TF_LOG=DEBUG is below:

yandex_compute_instance.mongodb[0]: Still creating... [1m10s elapsed]
2020/01/26 00:26:45 [WARN] Provider "registry.terraform.io/-/yandex" produced an unexpected new value for yandex_compute_instance.mongodb[0], but we are tolerating it because it is using the legacy plugin SDK.
    The following problems may be the cause of any confusing errors from downstream operations:
      - .description: was null, but now cty.StringVal("")
      - .resources[0].gpus: was null, but now cty.NumberIntVal(0)
yandex_compute_instance.mongodb[0]: Provisioning with 'file'...
2020-01-26T00:26:45.979Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:45 using private key for authentication
2020-01-26T00:26:45.979Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:45 [DEBUG] Connecting to xxx.xxx.xxx.xxx:22 for SSH
2020-01-26T00:26:46.048Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:46 [ERROR] connection error: dial tcp xxx.xxx.xxx.xxx:22: connect: connection refused
2020-01-26T00:26:46.048Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:46 [WARN] retryable error: dial tcp xxx.xxx.xxx.xxx:22: connect: connection refused
2020-01-26T00:26:46.048Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:46 [INFO] sleeping for 1s
2020-01-26T00:26:47.049Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:47 [DEBUG] Connecting to xxx.xxx.xxx.xxx:22 for SSH
2020-01-26T00:26:47.116Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:47 [DEBUG] Connection established. Handshaking for user root
2020-01-26T00:26:47.600Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:47 [DEBUG] Telling SSH config to forward to agent
2020-01-26T00:26:47.600Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:47 [DEBUG] Setting up a session to request agent forwarding
2020-01-26T00:26:48.021Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:48 [INFO] agent forwarding enabled
2020-01-26T00:26:48.021Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:48 [DEBUG] starting ssh KeepAlives
2020-01-26T00:26:48.021Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:48 [DEBUG] opening new ssh session
2020-01-26T00:26:48.151Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:48 [DEBUG] Starting remote scp process:  scp -vt ~
2020-01-26T00:26:48.219Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:48 [DEBUG] Started SCP session, beginning transfers...
2020-01-26T00:26:48.219Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:48 [DEBUG] Copying input data into temporary file so we can read the length
2020-01-26T00:26:48.232Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:48 [DEBUG] Beginning file upload...
2020-01-26T00:26:48.301Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:48 [DEBUG] SCP session complete, closing stdin pipe.
2020-01-26T00:26:48.301Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:48 [DEBUG] Waiting for SSH session to complete.
2020-01-26T00:26:48.370Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:48 [ERROR] scp stderr: "Sink: C0644 1207 file1.sh\n"
yandex_compute_instance.mongodb[0]: Provisioning with 'file'...
2020-01-26T00:26:48.371Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:48 using private key for authentication
2020-01-26T00:26:48.372Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:48 [DEBUG] Connecting to xxx.xxx.xxx.xxx:22 for SSH
2020-01-26T00:26:48.442Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:48 [DEBUG] Connection established. Handshaking for user root
2020-01-26T00:26:48.952Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:48 [DEBUG] Telling SSH config to forward to agent
2020-01-26T00:26:48.952Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:48 [DEBUG] Setting up a session to request agent forwarding
2020-01-26T00:26:49.257Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:49 [INFO] agent forwarding enabled
2020-01-26T00:26:49.257Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:49 [DEBUG] starting ssh KeepAlives
2020-01-26T00:26:49.259Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:49 [DEBUG] opening new ssh session
2020-01-26T00:26:49.390Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:49 [DEBUG] Starting remote scp process:  scp -vt /etc
2020-01-26T00:26:49.458Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:49 [DEBUG] Started SCP session, beginning transfers...
2020-01-26T00:26:49.458Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:49 [DEBUG] Copying input data into temporary file so we can read the length
2020-01-26T00:26:49.469Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:49 [DEBUG] Beginning file upload...
2020-01-26T00:26:49.539Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:49 [DEBUG] SCP session complete, closing stdin pipe.
2020-01-26T00:26:49.539Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:49 [DEBUG] Waiting for SSH session to complete.
yandex_compute_instance.mongodb[0]: Still creating... [1m20s elapsed]
2020-01-26T00:26:49.605Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:49 [ERROR] scp stderr: "Sink: C0644 1050 file2.conf\n"
yandex_compute_instance.mongodb[0]: Provisioning with 'file'...
2020-01-26T00:26:49.611Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:49 using private key for authentication
2020-01-26T00:26:49.611Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:49 [DEBUG] Connecting to xxx.xxx.xxx.xxx:22 for SSH
2020-01-26T00:26:49.683Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:49 [DEBUG] Connection established. Handshaking for user root
2020-01-26T00:26:50.197Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:50 [DEBUG] Telling SSH config to forward to agent
2020-01-26T00:26:50.197Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:50 [DEBUG] Setting up a session to request agent forwarding
2020-01-26T00:26:50.476Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:50 [INFO] agent forwarding enabled
2020-01-26T00:26:50.476Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:50 [DEBUG] starting ssh KeepAlives
2020-01-26T00:26:50.478Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:50 [DEBUG] opening new ssh session
2020-01-26T00:26:50.615Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:50 [DEBUG] Starting remote scp process:  scp -vt ~
2020-01-26T00:26:50.684Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:50 [DEBUG] Started SCP session, beginning transfers...
2020-01-26T00:26:50.684Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:50 [DEBUG] Copying input data into temporary file so we can read the length
2020-01-26T00:26:50.694Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:50 [DEBUG] Beginning file upload...
2020-01-26T00:26:50.767Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:50 [DEBUG] SCP session complete, closing stdin pipe.
2020-01-26T00:26:50.767Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:50 [DEBUG] Waiting for SSH session to complete.
2020-01-26T00:26:50.836Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:50 [ERROR] scp stderr: "Sink: C0644 198 file3.js\n"
yandex_compute_instance.mongodb[0]: Provisioning with 'file'...
2020-01-26T00:26:50.839Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:50 using private key for authentication
2020-01-26T00:26:50.839Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:50 [DEBUG] Connecting to xxx.xxx.xxx.xxx:22 for SSH
2020-01-26T00:26:50.924Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:50 [DEBUG] Connection established. Handshaking for user root
2020-01-26T00:26:51.435Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:51 [DEBUG] Telling SSH config to forward to agent
2020-01-26T00:26:51.435Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:51 [DEBUG] Setting up a session to request agent forwarding
2020-01-26T00:26:51.724Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:51 [INFO] agent forwarding enabled
2020-01-26T00:26:51.724Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:51 [DEBUG] starting ssh KeepAlives
2020-01-26T00:26:51.727Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:51 [DEBUG] opening new ssh session
2020-01-26T00:26:51.864Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:51 [DEBUG] Starting remote scp process:  scp -vt ~
2020-01-26T00:26:51.933Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:51 [DEBUG] Started SCP session, beginning transfers...
2020-01-26T00:26:51.933Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:51 [DEBUG] Copying input data into temporary file so we can read the length
2020-01-26T00:26:51.943Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:51 [DEBUG] Beginning file upload...
2020-01-26T00:26:52.012Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:52 [DEBUG] SCP session complete, closing stdin pipe.
2020-01-26T00:26:52.012Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:52 [DEBUG] Waiting for SSH session to complete.
2020-01-26T00:26:52.081Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:52 [ERROR] scp stderr: "Sink: C0644 102 file4.js\n"
yandex_compute_instance.mongodb[0]: Provisioning with 'file'...
2020-01-26T00:26:52.084Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:52 using private key for authentication
2020-01-26T00:26:52.084Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:52 [DEBUG] Connecting to xxx.xxx.xxx.xxx:22 for SSH
2020-01-26T00:26:52.153Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:52 [DEBUG] Connection established. Handshaking for user root
2020-01-26T00:26:52.653Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:52 [DEBUG] Telling SSH config to forward to agent
2020-01-26T00:26:52.653Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:52 [DEBUG] Setting up a session to request agent forwarding
2020-01-26T00:26:52.976Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:52 [INFO] agent forwarding enabled
2020-01-26T00:26:52.976Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:52 [DEBUG] starting ssh KeepAlives
2020-01-26T00:26:52.983Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:52 [DEBUG] opening new ssh session
2020-01-26T00:26:53.112Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:53 [DEBUG] Starting remote scp process:  scp -vt ~
2020-01-26T00:26:53.178Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:53 [DEBUG] Started SCP session, beginning transfers...
2020-01-26T00:26:53.178Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:53 [DEBUG] Copying input data into temporary file so we can read the length
2020-01-26T00:26:53.189Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:53 [DEBUG] Beginning file upload...
2020-01-26T00:26:53.256Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:53 [DEBUG] SCP session complete, closing stdin pipe.
2020-01-26T00:26:53.256Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:53 [DEBUG] Waiting for SSH session to complete.
2020-01-26T00:26:53.323Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:53 [ERROR] scp stderr: "Sink: C0644 1741 file5.js\n"
2020/01/26 00:26:53 [ERROR] <root>: eval: *terraform.EvalApplyPost, err: 1 error occurred:
        * 3 problems:

- Invalid index: The given key does not identify an element in this collection value.
- Invalid index: The given key does not identify an element in this collection value.
- Invalid index: The given key does not identify an element in this collection value.

2020/01/26 00:26:53 [ERROR] <root>: eval: *terraform.EvalSequence, err: 3 problems:

- Invalid index: The given key does not identify an element in this collection value.
- Invalid index: The given key does not identify an element in this collection value.
- Invalid index: The given key does not identify an element in this collection value.

Error: 3 problems:

- Invalid index: The given key does not identify an element in this collection value.
- Invalid index: The given key does not identify an element in this collection value.
- Invalid index: The given key does not identify an element in this collection value.



Error: 3 problems:

- Invalid index: The given key does not identify an element in this collection value.
- Invalid index: The given key does not identify an element in this collection value.
- Invalid index: The given key does not identify an element in this collection value.



Error: 3 problems:

- Invalid index: The given key does not identify an element in this collection value.
- Invalid index: The given key does not identify an element in this collection value.
- Invalid index: The given key does not identify an element in this collection value.

The main lines of interest, I believe:

2020-01-26T00:26:48.219Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:48 [DEBUG] Copying input data into temporary file so we can read the length
2020-01-26T00:26:48.232Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:48 [DEBUG] Beginning file upload...
2020-01-26T00:26:48.301Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:48 [DEBUG] SCP session complete, closing stdin pipe.
2020-01-26T00:26:48.301Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:48 [DEBUG] Waiting for SSH session to complete.
2020-01-26T00:26:48.370Z [DEBUG] plugin.terraform: file-provisioner (internal) 2020/01/26 00:26:48 [ERROR] scp stderr: "Sink: C0644 1207 file1.sh\n"

So, the SSH connection is established fine (so it's not a connection, user access, or credentials issue). The problem seems to occur with scp (a protocol incompatibility?). What's seen on the stderr line is the file's permission mask (0644) and length (1207), which is correct but not very revealing.

Any help appreciated!

Default timeout for creating a snapshot is too small

Hello!

I try to make a snapshot with terraform and after 20 minutes I get:

Error: Error while waiting operation to create snapshot: operation (id=epdasb79satrstnaoyrustwf) wait context done: context deadline exceeded

I suspect the problem is at this place: https://github.com/terraform-providers/terraform-provider-yandex/blob/master/yandex/resource_yandex_compute_snapshot.go#L15

I create a snapshot of a 100 GB disk, and the snapshot takes about 50 minutes to create.
Could you expose the timeout parameter in the configuration, as described here:
https://www.terraform.io/docs/extend/resources/retries-and-customizable-timeouts.html#statechangeconf
Or hint at how I could implement this myself. In your SDK you work with timeouts through a context, and I can't understand how a stateConf could be bound to a context in your SDK.

An example of how this looks for the aws_db_instance configuration:
https://www.terraform.io/docs/configuration/resources.html#operation-timeouts
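
For illustration, the configuration being asked for would look roughly like the sketch below. The timeouts block here is hypothetical: it only has an effect once the resource itself declares customizable timeouts.

resource "yandex_compute_snapshot" "example" {
  name           = "example-snapshot"
  source_disk_id = "disk-id-here"   # placeholder

  timeouts {
    create = "60m"                  # hypothetical until the resource supports it
  }
}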

S3 backend documentation

Hello, could you please clarify the correct configuration for storing state using the s3 backend? I've noticed that this section is not covered by the documentation at all.

The config I tried to use is

provider "yandex" {}
terraform {
  backend "s3" {
    endpoint   = "storage.yandexcloud.net"
    bucket     = "bucket"
    key          = "some/path"
    region     = "us-east-1"
  }
}

Output is

Error configuring the backend "s3": No valid credential sources found for AWS Provider.
	Please see https://terraform.io/docs/providers/aws/index.html for more information on
	providing credentials for the AWS Provider

My friends have kindly suggested the following configuration, but it is still unclear what should be passed in the secret_key and access_key attributes.

terraform {
  backend "s3" {
    endpoint   = "storage.yandexcloud.net"
    bucket     = "bucket"
    key          = "some/path"
    region     = "us-east-1"
    access_key = "?"
    secret_key = "?"

    skip_requesting_account_id  = true
    skip_credentials_validation = true
    skip_get_ec2_platforms      = true
    skip_metadata_api_check     = true
  }
}

In this case the output doesn't differ.

Could you please clarify this use case in the documentation, as it is one of the most common ones.
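
For reference, access_key and secret_key are normally static access keys issued for a service account that has rights on the bucket. A hedged sketch of creating them with this provider is below; the resource names reflect the provider's IAM resources as I understand them, so please verify them against the documentation, and the service account still needs an appropriate storage role on the folder or bucket.

resource "yandex_iam_service_account" "tf_state" {
  name = "terraform-state"
}

resource "yandex_iam_service_account_static_access_key" "tf_state" {
  service_account_id = yandex_iam_service_account.tf_state.id
  description        = "static access key for the s3 backend"
}

output "access_key" {
  value = yandex_iam_service_account_static_access_key.tf_state.access_key
}

output "secret_key" {
  value     = yandex_iam_service_account_static_access_key.tf_state.secret_key
  sensitive = true
}

Because a backend block cannot interpolate resource attributes, the resulting keys have to be supplied literally, via -backend-config arguments, or through the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables that the s3 backend also reads.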

Working with yandex_mdb_postgresql_cluster: questions and suggestions about the provider resource

Greetings!

We are currently actively trying to manage our infrastructure in Yandex.Cloud with Terraform and to migrate the various crutches and home-grown workarounds we invented earlier onto the resources appearing in the provider.

One of the provider resources we actively use right now is the PostgreSQL cluster, and we are trying to use it as much as possible to manage the various configurations we have.

Perhaps this is just our specific usage pattern and it is not in much demand, but I decided to share it. Maybe it will help us too :)

The general idea is that we have a set of clusters for our projects, and in each cluster a set of databases for that project.
I.e.:
project1 - cluster1 - [app-database1, app-database2, ..., app-databaseN]
project2 - cluster2 - [app-database21, app-database22, ..., app-database2N]
and so on.

Access to the databases is separated: each database has its own user.

The reason is very simple: there are many small microservices, and a single service does not load even a minimal cluster enough to justify creating a separate database cluster per application.

Here is what I ran into and what I would like to see fixed, changed (if possible), or understand how it is supposed to be used (maybe we need to change something in our approach).

  1. When importing an existing cluster, the provider proposes recreating all users and almost all databases. There is even a support ticket about this.
    While testing, a curious nuance came up (about the "almost all databases"): the provider does not propose recreating databases that have no extensions installed. If there are extensions, even when they are listed in the description, it still insists on recreation.

  2. Describing a pile of repeating blocks is rather inconvenient, especially when you don't know how many there may be; this concerns the user/database/permission/extension blocks. Here you really want to use Terraform's dynamic blocks, and to move database creation into a separate module, for example, together with the extra resources it needs.
    However, I ran into a problem: when I describe the resource using dynamic blocks, the provider proposes recreating all the resources every time, both databases and users (everything produced from dynamic), which is extremely strange and unclear. It happens on any change, or with no change at all.

  3. We shelved the idea of using dynamic for some time in the future and described all users and databases in one big resource.
    Even then a problem came up: when the database extensions are changed in the database -> extension block, the provider wants to recreate the database, i.e. delete it and create a new one, although this operation is not nearly that radical and extensions can be added/removed from the console/command line/database without such measures.

  4. There is not enough information about the created resources (the IP addresses of the cluster hosts, or a way to get them from some data source).
    The IP addresses are needed, for example, to configure the postgres provider, which does not work by hostname. The postgres provider in general has its own quirks, because it checks all connections already at the plan stage, so all users and permissions must be created before it is used.
    In turn, the postgres provider is needed for a lot of things: from granting rights in a database to a user (for example, read access) to changing database/user configuration that exists in the web console/API but cannot be set through the Yandex Terraform provider.

  5. About users.
    The question is described in general in #78 and arises from the way we use databases and access. With a large number of small databases and users, the default values of things like the connection limit do not fit. However, there is currently no way to set them, and no way to set some basic user parameters either (although for databases at least the locale is there :) ).
    One could do a multi-step workaround, creating the user with the Yandex.Cloud provider and then configuring it with the postgres provider, if not for a few nuances. For example, if there are already a lot of users with limits of 10-15 connections and, say, only 20-30 free connections remain in the cluster, it is no longer possible to create a user automatically.
    You would either have to modify the existing users, shrinking their connection pools (which is risky), then create the user and roll everything back.
    Or create the user by hand and then... and then it is impossible to import it into Terraform.

  6. About creating users.
    It is also rather non-trivial to specify passwords when creating users: nobody wants to store them in code, so we would like to generate them externally and pass them in when the user is created.

  7. More about users and their passwords.
    Working with users looks very strange if you change their password (or import a cluster and then run plan): the provider proposes recreating the user every time, on any change. That is also not great and perhaps would not matter (it is like Vault secrets, where Terraform knows nothing about the secrets' state and recreates them every time) if not for the chance of losing all the user's settings through deletion and recreation. That means losing both the settings on the managed-database side and those inside the database that were made with the postgres provider.

Some of what is described above are technical bugs that (I hope) will be fixed.
The other part are not bugs but relate to the concept of managing users, databases and the cluster in your cloud.
And while with the bugs one can only wait for fixes, with the concepts I sacrilegiously suggest reconsidering and splitting everything up :)

Even in your API (per the documentation), the database cluster, database and user resources/objects are separated and can be used fairly independently.
When I was building my own contraption for creating databases with Terraform last summer, I used that idea as the core one: the cluster is described separately, the user separately, the database separately, along with all the parameters I need for each.

So maybe it makes sense to split them in your provider as well?

Make them separate resources: postgres_cluster, postgres_user, postgres_database.
Then it would (possibly) be easier to add the functionality that currently exists in the API/web console/command line separately for each of these resources, and each could be configured independently.
They could also be moved, separately or all together, into a module (or modules) if someone wants to describe everything as one block.
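
Purely to illustrate the proposal above, hypothetical stand-alone resources might look something like the sketch below. The resource names and attributes are invented for the example (they mirror the proposed cluster/user/database split) and do not describe existing provider resources; the cluster is assumed to be declared elsewhere as yandex_mdb_postgresql_cluster.project1.

variable "app1_password" {
  type = string
}

# Hypothetical stand-alone user resource, as proposed above.
resource "yandex_mdb_postgresql_user" "app1" {
  cluster_id = yandex_mdb_postgresql_cluster.project1.id
  name       = "app1"
  password   = var.app1_password
  conn_limit = 15
}

# Hypothetical stand-alone database resource, as proposed above.
resource "yandex_mdb_postgresql_database" "app1" {
  cluster_id = yandex_mdb_postgresql_cluster.project1.id
  name       = "app1"
  owner      = yandex_mdb_postgresql_user.app1.name

  extension {
    name = "uuid-ossp"
  }
}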

Public static IP

There is no way to create a public static IP via Terraform (or to create a VM with the static IP option). It would be great to have this feature and bring more of the cloud infrastructure under code control.

Data sources for yandex_vpc_subnet and yandex_compute_image do not work by resource name

It is not possible to get data about a resource by resource name - I'm getting an error.

variable "fmg_image_name" {
  description = "FortiManager image id"
  default     = "fmg"
}
variable "folder_id" {
  description = "infra folder id"
  default     = "b1gpv67mve1f5snnl7ut"
}

data "yandex_compute_image" "fmg" {
  folder_id = var.folder_id
  name = var.fmg_image_name
}
data "yandex_vpc_subnet" "fmg" {
  folder_id = var.folder_id
  name = "secure-network-subnet-2"
}

output "image_id" {
  value = data.yandex_compute_image.fmg.id
}
output "subnet_id" {
  value = data.yandex_vpc_subnet.fmg.id
}


output: 
Error: failed to resolve data source image by name: image with name "fmg" not found

  on main.tf line 507, in data "yandex_compute_image" "fmg":
 507: data "yandex_compute_image" "fmg" {



Error: failed to resolve data source subnet by name: subnet with name "secure-network-subnet-2" not found

  on main.tf line 511, in data "yandex_vpc_subnet" "fmg":
 511: data "yandex_vpc_subnet" "fmg" {

But all these resources exist:

yc compute image list --folder-name=infra
+----------------------+------+--------+-------------+--------+
|          ID          | NAME | FAMILY | PRODUCT IDS | STATUS |
+----------------------+------+--------+-------------+--------+
| fd8nn6em7rfj5arksfsa | fmg  |        |             | READY  |
+----------------------+------+--------+-------------+--------+

yc vpc subnet list --folder-name=infra
+----------------------+-------------------------+----------------------+----------------+---------------+-----------------+
|          ID          |          NAME           |      NETWORK ID      | ROUTE TABLE ID |     ZONE      |      RANGE      |
+----------------------+-------------------------+----------------------+----------------+---------------+-----------------+
| b0clt3qn0ellalh7n0l0 | secure-network-subnet-2 | enp0suumfsb1bgn56cif |                | ru-central1-c | [10.100.2.0/24] |
| e2li2lh70ociip68clll | secure-network-subnet-1 | enp0suumfsb1bgn56cif |                | ru-central1-b | [10.100.1.0/24] |
| e9b5g56kr4et5b10hhu9 | secure-network-subnet-0 | enp0suumfsb1bgn56cif |                | ru-central1-a | [10.100.0.0/24] |
+----------------------+-------------------------+----------------------+----------------+---------------+-----------------+

Provide a way to get default folder id from provider config

Folder ID (and Cloud ID) are required in the provider config:

provider "yandex" {
  cloud_id                 = "cloud_id_here"
  folder_id                = "folder_id_here"
}

For some resources it's required to specify a folder id (like for instance_group, why?), and to get an image from a folder other than "standard-images" it would be useful to be able to read the default folder id from the provider config.
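
As a stop-gap, the folder id usually ends up duplicated through a variable that feeds both the provider block and each resource or data source that needs it; a minimal sketch with placeholder ids:

variable "folder_id" {
  default = "folder_id_here"
}

provider "yandex" {
  cloud_id  = "cloud_id_here"
  folder_id = var.folder_id
}

data "yandex_compute_image" "app" {
  folder_id = var.folder_id   # repeated explicitly alongside the provider block
  name      = "my-image"
}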

[PROPOSAL] Switch to Go Modules

As part of the preparation for Terraform v0.12, we would like to migrate all providers to use Go Modules. We plan to continue checking dependencies into vendor/ to remain compatible with existing tooling/CI for a period of time, however go modules will be used for management. Go Modules is the official solution for the Go programming language. We understand some providers might not want this change yet, however we encourage providers to begin looking towards the switch as this is how we will be managing all Go projects in the future.

Would maintainers please react with 👍 for support, or 👎 if you wish to have this provider omitted from the first wave of pull requests. If your provider is in support, we would ask that you avoid merging any pull requests that mutate the dependencies while the Go Modules PR is open (in fact a total code freeze would be even more helpful), otherwise we will need to close that PR and re-run go mod init.

Once merged, dependencies can be added or updated as follows:

$ GO111MODULE=on go get github.com/some/module@master
$ GO111MODULE=on go mod tidy
$ GO111MODULE=on go mod vendor

GO111MODULE=on might be unnecessary depending on your environment; this example will fetch a module @ master and record it in your project's go.mod and go.sum files. It's a good idea to tidy up afterward and then copy the dependencies into vendor/. To remove dependencies from your project, simply remove all usage from your codebase and run:

$ GO111MODULE=on go mod tidy
$ GO111MODULE=on go mod vendor

Thank you sincerely for all your time, contributions, and cooperation!

Unable to use YC Object storage as backend

Here is my backend config:

terraform {
  required_version = "0.12.7"

  backend "s3" {
    bucket     = "my-devel"
    region     = "us-east-1"
    endpoint   = "storage.yandexcloud.net"
    key        = "some/path/tf.state"
    access_key = "R............t1"
    secret_key = "hW...............................jyP"

    skip_region_validation      = true
    skip_credentials_validation = true
    skip_metadata_api_check     = true
  }
}

After I try to init this, I get the following error:

Initializing modules...

Initializing the backend...
Backend configuration changed!

Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.



Error: Error inspecting states in the "s3" backend:
    SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided.
	status code: 403, request id: 1ef7ae2ca045c1b9, host id:

Can't use float for memory

resource "yandex_compute_instance" "xxx" {
  name        = "xxx"
  platform_id = "standard-v1"
  zone        = "ru-central1-b"

  resources {
    cores         = 1
    core_fraction = 20
    memory        = 0.5
  }
  boot_disk {
    disk_id = "xxx"
  }
  network_interface {
    subnet_id = "${data.yandex_vpc_subnet.xxx.id}"
  }
}
Error: Error running plan: 1 error(s) occurred:

* yandex_compute_instance.xxx: 1 error(s) occurred:

* yandex_compute_instance.xxx: unexpected EOF

error google.protobuf.Empty when destroying mongodb cluster

Hi there,

I encountered the error message type "google.protobuf.Empty" isn't linked in when creating/destroying mongodb clusters using the yandex_mdb_mongodb_cluster resource.
The cluster is successfully deleted from Yandex Cloud, but the terraform state isn't updated as the command fails.

Here is an extract of my terraform resource:

data "yandex_vpc_network" "network" {
  name = "network"
}

data "yandex_vpc_subnet" "persistence" {
  name = "development-persistence-private-ru-central1-a"
}

resource "yandex_mdb_mongodb_cluster" "this" {
  name        = "development-cluster"
  environment = "PRESTABLE"
  network_id  = data.yandex_vpc_network.network.id

  cluster_config {
    version = "4.2"
  }

  labels = {
    environment = "development"
  }

  database {
    name = "development-cluster"
  }

  user {
    name     = "mongod"
    password = "password"
    permission {
      database_name = "development-cluster"
    }
  }

  resources {
    resource_preset_id = "b2.nano"
    disk_size          = 16
    disk_type_id       = "network-hdd"
  }

  host {
    zone_id   = "ru-central1-a"
    subnet_id = data.yandex_vpc_subnet.persistence.id
  }
}

Here is a DEBUG log from the command TF_LOG=DEBUG terraform destroy:

2019-12-31T15:36:55.758+0100 [INFO]  plugin.terraform-provider-yandex_v0.27.0_x4: configuring server automatic mTLS: timestamp=2019-12-31T15:36:55.758+0100
2019-12-31T15:36:55.788+0100 [DEBUG] plugin: using plugin: version=5
2019-12-31T15:36:55.788+0100 [DEBUG] plugin.terraform-provider-yandex_v0.27.0_x4: plugin address: network=unix address=/tmp/plugin983585600 timestamp=2019-12-31T15:36:55.788+0100
2019/12/31 15:36:55 [DEBUG] yandex_mdb_mongodb_cluster.this: applying the planned Delete change
2019/12/31 15:36:55 [TRACE] GRPCProvider: ApplyResourceChange
yandex_mdb_mongodb_cluster.this: Destroying... [id=c9q2tjdr541ls6vmscvk]
2019-12-31T15:36:55.864+0100 [DEBUG] plugin.terraform-provider-yandex_v0.27.0_x4: 2019/12/31 15:36:55 [DEBUG] Deleting Mongodb Cluster "c9q2tjdr541ls6vmscvk"
yandex_mdb_mongodb_cluster.this: Still destroying... [id=c9q2tjdr541ls6vmscvk, 10s elapsed]
2019/12/31 15:37:10 [DEBUG] yandex_mdb_mongodb_cluster.this: apply errored, but we're indicating that via the Error pointer rather than returning it: any: message type "google.protobuf.Empty" isn't linked in
2019/12/31 15:37:10 [TRACE] <root>: eval: *terraform.EvalWriteState
2019/12/31 15:37:10 [ERROR] <root>: eval: *terraform.EvalApplyPost, err: any: message type "google.protobuf.Empty" isn't linked in
2019/12/31 15:37:10 [ERROR] <root>: eval: *terraform.EvalSequence, err: any: message type "google.protobuf.Empty" isn't linked in
2019/12/31 15:37:10 [ERROR] <root>: eval: *terraform.EvalOpFilter, err: any: message type "google.protobuf.Empty" isn't linked in
2019/12/31 15:37:10 [TRACE] [walkDestroy] Exiting eval tree: yandex_mdb_mongodb_cluster.this (destroy)

Error: any: message type "google.protobuf.Empty" isn't linked in

Add DBMS settings in yandex_mdb_postgresql_cluster for users

Hi,

I think it could be very useful to be able to set all or some DBMS settings for a user when creating a database cluster and its databases; at least some of these settings, like the connection limit, can be a blocking point when creating a new user/database.
As an example of the issue: we have a lot of small databases in one cluster, and for each database we use a dedicated user.
By default the connection limit for a user is 50 connections, and on an s2.small instance it is not possible to create more than 15 users with the default parameters.
Then we are facing a problem: how to add a new database and create a new user.
Even if we create a user by manual manipulation (decreasing the connection limits for all users to allow creating a new one with 50 connections, then updating everything again), it will still be a problem when we need to recreate this cluster.

The Postgres provider is not very suitable here, as it connects to the database and checks the cluster and databases at the plan step, before any actions, and if there is no user/database yet it will throw an error.
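
For illustration, the kind of setting being asked for would sit inside the existing user block of yandex_mdb_postgresql_cluster, roughly as in the fragment below; conn_limit is hypothetical here and only illustrates the request.

  user {
    name       = "app1"
    password   = "change-me"
    conn_limit = 15          # hypothetical per-user setting illustrating the request

    permission {
      database_name = "app1"
    }
  }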

Load Balancer support

Hey,
I tried to play with Yandex.Cloud but found no way to create a Load Balancer.
Could you please help me with this?

terraform plan shows 'user' in the postgresql resource as changed even if nothing is changed

It is a plain simple postgresql resource:

resource "yandex_mdb_postgresql_cluster" "pg" {
  name        = "pg"
  environment = "PRODUCTION"
  network_id  = var.vpc_id

  config {
    version = 10
    resources {
      resource_preset_id = "b1.medium"
      disk_type_id       = "network-ssd"
      disk_size          = 10
    }
    backup_window_start {
      hours = 0
      minutes = 0
    }
  }
  database {
    name  = "sample"
    owner = var.user
  }

  user {
    name     = var.user
    password = var.user_password
  }

  host {
    zone      = var.az
    subnet_id = var.subnet_id
  }
}

With terraform plan and terraform apply the postgresql cluster is created successfully. But when I run terraform plan again (without any change), it shows:

... omit other output

      - user {
          - grants   = [] -> null
          - login    = true -> null
          - name     = "user" -> null
          - password = (sensitive value)

          - permission {
              - database_name = "sample" -> null
            }
        }
      + user {
          + grants   = []
          + login    = true
          + name     = "user"
          + password = (sensitive value)

          + permission {
              + database_name = (known after apply)
            }
        }

Plan: 0 to add, 1 to change, 0 to destroy.

It plans to recreate my user even if nothing is changed. I am a little puzzled about this behavior.

Yandex provider version: v0.41.1

postgresql resource

Hi there 👋

It would be nice if we could create postgresql clusters with a terraform resource.

Changing yandex_kubernetes_node_group auto_scale parameters leads to node group recreation

Hey!

I'm trying to change max in yandex_kubernetes_node_group resource which looks like this:

resource "yandex_kubernetes_node_group" "worker-node" {
  cluster_id  = yandex_kubernetes_cluster.apps0.id
  name        = "node"
  description = "Worker nodes"
  version     = var.yc_apps_version

  labels = {
    "k8s_cluster" = yandex_kubernetes_cluster.apps0.name
    "k8s_role" = "worker"
    "k8s_group_name" = "node"
  }

  instance_template {
    platform_id = "standard-v2"
    nat         = false

    resources {
      memory = 4
      cores  = 2
    }

    boot_disk {
      type = "network-hdd"
      size = 64
    }

    scheduling_policy {
      preemptible = false
    }
  }

  scale_policy {
    auto_scale {
      min = 2
      initial = 2
      max = 5
    }
  }

  allocation_policy {
    location {
      zone = yandex_vpc_subnet.kube-apps-subnet-a.zone
    }
  }

  maintenance_policy {
    auto_upgrade = true
    auto_repair  = true

    maintenance_window {
      day        = "monday"
      start_time = "04:00"
      duration   = "4h"
    }

    maintenance_window {
      day        = "tuesday"
      start_time = "04:00"
      duration   = "4h"
    }

    maintenance_window {
      day        = "wednesday"
      start_time = "04:00"
      duration   = "4h"
    }
  }
}

I'm changing auto_scale max from 5 to 7 and trying to apply the change with terraform apply, and as a result Terraform wants to destroy my entire node group and create a new one:

~ scale_policy {
          ~ auto_scale {
                initial = 2
              ~ max     = 5 -> 7 # forces replacement
                min     = 2
            }
        }
    }
...
Plan: 1 to add, 0 to change, 1 to destroy.

I've checked the same action through the web console and was able to change max without recreating the node group, but it would be great if I could apply this with Terraform and code.

Provider produced inconsistent final plan

Hello,

When I'm trying to create a yandex_compute_instance resource with a dynamic secondary_disk block, I get an error if the array is empty. I do this because in my plan some instances should have additional disks and others shouldn't.
Example:
.....
dynamic "secondary_disk" {
for_each = lookup(each.value,"optional_disk","false") != "false" ? [ for id in each.value.optional_disk: id ]: []
content {
disk_id = secondary_disk.value
}
}

Output:
Error: Provider produced inconsistent final plan

When expanding the plan for
module.instance.yandex_compute_instance.planned_instances["prod-k8s-vpn1-ru-central1-a"]
to include new values learned so far during apply, provider
"registry.terraform.io/-/yandex" produced an invalid new value for
.secondary_disk: block count changed from 1 to 0.

This is a bug in the provider, which should be reported in the provider's own
issue tracker.

Is it possible to work around this, or maybe you can fix it?

Thanks

UPDATE: It seems the problem is not with an empty array, but with an array whose element count != 1.
Error: Provider produced inconsistent final plan

When expanding the plan for
module.instance.yandex_compute_instance.planned_instances["prod-k8s-mongo2-ru-central1-b"]
to include new values learned so far during apply, provider
"registry.terraform.io/-/yandex" produced an invalid new value for
.secondary_disk: block count changed from 1 to 2.

Specify cloud_id in yandex_resourcemanager_folder datasource

I'm trying to get resource IDs from data sources by resource name (for easier code readability).
It is not possible to get information about a folder without specifying cloud_id in the provider configuration.
I.e. my idea is to set up a configuration like this:

provider "yandex" {
  token = "${var.token}"
}

data "yandex_resourcemanager_cloud" "my_cloud" {
  name = "${var.cloud_name}"
}

data "yandex_resourcemanager_folder" "my_folder" {
  name = "folder_name"
}

And then use variables:
cloud_id = "${data.yandex_resourcemanager_cloud.my_cloud.id}"
folder_id = "${data.yandex_resourcemanager_folder.my_folder.id}"

But this way does not work, as I receive an error:

Error: Error refreshing state: 1 error occurred:
        * data.yandex_resourcemanager_folder.my_folder: 1 error occurred:
        * data.yandex_resourcemanager_folder.my_folder: data.yandex_resourcemanager_folder.my_folder: failed to resolve data source folder by name: failed to find folder with name "folder_name": request-id = db77cb01-6527-4623-8f64-4a6be4292773 rpc error: code = InvalidArgument desc = cloudId: String value is too short

I'm not able to specify a cloud_id for the data source; it needs to be set in the provider configuration.
This behavior looks strange and incorrect.
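
For clarity, the behavior being asked for is roughly the following, where the data source itself accepts a cloud_id; the cloud_id argument on yandex_resourcemanager_folder is hypothetical here.

data "yandex_resourcemanager_cloud" "my_cloud" {
  name = "${var.cloud_name}"
}

data "yandex_resourcemanager_folder" "my_folder" {
  name     = "folder_name"
  cloud_id = "${data.yandex_resourcemanager_cloud.my_cloud.id}"   # hypothetical argument
}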

Yandex.DNS support

Hello, do you have any plans to implement Yandex DNS (pdd.yandex.ru and connect.yandex.ru) support in terraform?

It would be very useful for people who use the yandex.connect and yandex.dns services.

Thanks in advance.

yandex_compute_snapshot context deadline exceeded

Terraform Version

Terraform v0.12.12
+ provider.yandex v0.22.0

Affected Resource(s)

  • yandex_compute_snapshot

Terraform Configuration Files

resource "yandex_compute_instance" "docker_registry" {
  allow_stopping_for_update = true

  count = 1

  name        = "docker-registry-${count.index}"
  hostname    = "docker-registry-${count.index}"
  description = "docker registry instance"

  resources {
    cores  = 1
    memory = 1
  }

  boot_disk {
    initialize_params {
      image_id = "fd81d2d9ifd50gmvc03g"
      size = 50
      type = "network-hdd"
    }
  }

  network_interface {
    subnet_id = "${yandex_vpc_subnet.default_subnet["0"].id}"
    nat       = true
  }

  secondary_disk {
    disk_id = "${yandex_compute_disk.docker_registry[count.index].id}"
  }

  metadata = {
    ssh-keys = "ubuntu:${file("~/.ssh/id_rsa.pub")}"
  }
}

resource "yandex_compute_disk" "docker_registry" {
  count = 1
  type = "network-hdd"
  size = 100
}

resource "yandex_compute_snapshot" "docker_registry" {
  count          = 1
  description    = "snapshot docker-registry-${count.index}"
  source_disk_id = "${yandex_compute_instance.docker_registry[count.index].boot_disk[0].disk_id}"
}

Output

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # yandex_compute_snapshot.docker_registry[0] is tainted, so must be replaced
-/+ resource "yandex_compute_snapshot" "docker_registry" {
      ~ created_at     = "2019-10-31T11:31:37Z" -> (known after apply)
      ~ description    = "snapshot before big configuration changes docker_registry" -> "snapshot before big configuration changes docker-registry-0"
      ~ disk_size      = 50 -> (known after apply)
      ~ folder_id      = "b1gc7vi2ckqausoc5dr7" -> (known after apply)
      ~ id             = "fd8uk4bajc54e93p1aca" -> (known after apply)
      - labels         = {} -> null
        source_disk_id = "fhmdn1f00pulfjqn7vpn"
      ~ storage_size   = 41 -> (known after apply)
    }

Plan: 1 to add, 0 to change, 1 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

yandex_compute_snapshot.docker_registry[0]: Destroying... [id=fd8uk4bajc54e93p1aca]
yandex_compute_snapshot.docker_registry[0]: Still destroying... [id=fd8uk4bajc54e93p1aca, 10s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still destroying... [id=fd8uk4bajc54e93p1aca, 20s elapsed]
yandex_compute_snapshot.docker_registry[0]: Destruction complete after 26s
yandex_compute_snapshot.docker_registry[0]: Creating...
yandex_compute_snapshot.docker_registry[0]: Still creating... [10s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [20s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [30s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [40s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [50s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [1m0s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [1m10s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [1m20s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [1m30s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [1m40s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [1m50s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [2m0s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [2m10s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [2m20s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [2m30s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [2m40s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [2m50s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [3m0s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [3m10s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [3m20s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [3m30s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [3m40s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [3m50s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [4m0s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [4m10s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [4m20s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [4m30s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [4m40s elapsed]
yandex_compute_snapshot.docker_registry[0]: Still creating... [4m50s elapsed]

Error: Error while waiting operation to create snapshot: operation (id=fhm0fqegiu0bu78ou7io) wait context done: context deadline exceeded

  on docker-registry.tf line 43, in resource "yandex_compute_snapshot" "docker_registry":
  43: resource "yandex_compute_snapshot" "docker_registry" {

Expected Behavior

Terraform apply finished without errors, snapshot created succesfully.

Actual Behavior

Terraform apply finished with error after timeout exceeded.

Steps to Reproduce

  1. terraform apply

yandex_compute_instance: cloud-init metadata change does not force re-creation

When using yandex_compute_instance resources, updating the user-data parameter in metadata does not force re-creation of the instance.

Setting allow_stopping_for_update to true does not help.

Example of the terraform code:

resource "yandex_compute_instance" "default" {
  count       = var.count
  name        = var.name
  platform_id = var.platform_id
  zone        = var.zone
  labels      = var.labels
  folder_id   = var.folder_id
  
  allow_stopping_for_update = var.allow_stopping_for_update

  resources   {
      ...
  }

  boot_disk {
      ...
  }

  network_interface {
      ...
  }

  metadata = {
    ssh-keys   = "root:XXXXXXX"
    user-data  = templatefile("${var.user_data_file}", "${var.user_data_vars}")
  }
}

The expected behavior on a user-data change is a forced re-create.
The current behavior is an in-place update, which is confusing, as it requires manually destroying and applying again.

It could be implemented with a flag similar to allow_stopping_for_update.
One could use something like force_recreate_on_user_data_change, but any possibility would be awesome.

The documentation for the user-data parameter is lacking too.

yandex_vpc_subnet.name not validated properly at plan stage

yandex_vpc_subnet.name attribute doesn't allow upper-case letters.

When trying to create a subnet with a name production-VM-ru-central1-a,
terraform plan doesn't raise any errors, but terraform apply fails:

Error: Error while requesting API to create subnet: server-request-id = .... client-request-id = .... client-trace-id = ....
rpc error: code = InvalidArgument desc = name: Invalid resource name

It would be really nice to see these errors at plan stage.
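
Until the provider validates names itself, one configuration-side way to catch this at the plan stage is a variable validation block (Terraform 0.13+). The exact regex below is an assumption about the allowed resource-name pattern, so adjust it if the API documents a different one.

variable "subnet_name" {
  type    = string
  default = "production-VM-ru-central1-a"

  validation {
    # Fails at plan time for names like the default above, which contains upper-case letters.
    condition     = can(regex("^[a-z]([-a-z0-9]{0,61}[a-z0-9])?$", var.subnet_name))
    error_message = "Subnet names must start with a lowercase letter and may contain only lowercase letters, digits and hyphens."
  }
}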

Changing the role of a service account with yandex_resourcemanager_folder_iam_member shows replace but actually updates it

Hello,

It is a minor issue but it is somewhat misleading.

I have a service account in a folder with the "editor" role, and when I update the role with terraform from

resource "yandex_resourcemanager_folder_iam_member" "gterraform_editor" {
  role      = "editor"
  member    = "serviceAccount:ajehfojl9k0li2kjkjv0"
}

to

resource "yandex_resourcemanager_folder_iam_member" "gterraform_editor" {
  role      = "admin"
  member    = "serviceAccount:ajehfojl9k0li2kjkjv0"
}

During plan and update, I'm getting a message:

-/+ destroy and then create replacement

Terraform will perform the following actions:

  # yandex_resourcemanager_folder_iam_member.gterraform_editor must be replaced
-/+ resource "yandex_resourcemanager_folder_iam_member" "gterraform_editor" {
        folder_id = "folder_id"
      ~ id        = "folder_id/editor/serviceAccount:ajehfojl9k0li2kjkjv0" -> (known after apply)
        member    = "serviceAccount:ajehfojl9k0li2kjkjv0"
      ~ role      = "editor" -> "admin" # forces replacement
    }

Plan: 1 to add, 0 to change, 1 to destroy.

But what actually happens is what I would like to see: the binding is not replaced, but updated with the new role.
There is no recreation/replacement or ID change.

It seems better to change the warning so that it says this action will just do an update.

Creating a user without permissions throws an error on apply but creates the user, which then cannot be changed/updated later

I create a cluster using dynamic blocks for databases and users.

variable "pg_user_permissions" {
  type = map(list(object(
    {
      database_name = string
    }
  )))
  default = {
    "test" : [
      {
        database_name = "test"
      }
    ],
    "test2" : [
      {
        database_name = "test2"
      }
    ],
    "test3" : [
      {
        database_name = ""
      }
    ],
  }
}

variable "pg_users" {
  type    = set(string)
  default = ["test", "test2", "test3"]
}

resource "random_password" "pg_passwords" {
  for_each = var.pg_users
  length  = 24
  special = false
}


resource "yandex_mdb_postgresql_cluster" "postgres" {
  name        = "${var.environment}-${var.labels.project_tag}"
  description = var.pg_cluster_description
  folder_id   = data.yandex_resourcemanager_folder.folder.id
  labels      = var.labels

  network_id  = var.network_id
  environment = var.pg_cluster_config.environment


  config {
    version      = var.pg_cluster_config.version
    autofailover = true
    resources {
      resource_preset_id = var.pg_cluster_config.resource_preset
      disk_type_id       = var.pg_cluster_config.disk_type
      disk_size          = var.pg_cluster_config.disk_size
    }
    access {
      data_lens = false
    }
    backup_window_start {
      hours   = var.pg_backup_window.hours
      minutes = var.pg_backup_window.minutes
    }
  }


  dynamic "user" {
    for_each = random_password.pg_passwords
    content {
      name     = user.key
      password = user.value["result"]

      dynamic "permission" {
        for_each = var.pg_user_permissions[user.key]
        content {
          database_name = permission.value.database_name
        }
      }
    }
  }

  dynamic "database" {
    for_each = var.pg_databases
    content {
      name       = database.value.name
      owner      = database.value.owner
      lc_collate = database.value.lc_collate
      lc_type    = database.value.lc_type

      dynamic "extension" {
        for_each = database.value.db_extention
        content {
          name = extension.value
        }
      }
    }
  }

  dynamic "host" {
    for_each = var.db_zones
    content {
      zone             = host.value
      subnet_id        = element(var.subnets, host.key)
      assign_public_ip = var.public_ip
    }
  }
}

And if I do not set the database_name value in permissions for the user, terraform plan shows that everything is OK:

        user {
            grants   = []
            login    = true
            name     = "test2"
            password = (sensitive value)

            permission {
                database_name = "test2"
            }
        }
      + user {
          + grants   = []
          + login    = true
          + name     = "test3"
          + password = (sensitive value)

          + permission {}
        }
        user {
            grants   = []
            login    = true
            name     = "test"
            password = (sensitive value)

            permission {
                database_name = "test"
            }

But on apply I will receive an error:

Error: error while requesting API to update user in PostgreSQL Cluster "c9qutl4lroqjmknjli88": server-request-id = 6e343e9b-542e-b2eb-9670-b831e837a22e server-trace-id = e3efa2caaffd0bac:1a118b34caff0265:e3efa2caaffd0bac:1 client-request-id = f0c3b492-e7a1-4c00-9d05-7a8e22a5018c client-trace-id = a30ea2ba-5c94-4722-b23b-ce8fe1ae6ab1 rpc error: code = InvalidArgument desc = The request is invalid.
permissions.0.databaseName: Missing data for required field.

  on main.tf line 33, in resource "yandex_mdb_postgresql_cluster" "postgres":
  33: resource "yandex_mdb_postgresql_cluster" "postgres" {

Which is correct, but the user (test3 in my case) is still created:

yc managed-postgresql user list --cluster-id=c9qutl4lroqjmknjli88
+--------------------+--------------------------------+------------+
|        NAME        |          PERMISSIONS           | CONN LIMIT |
+--------------------+--------------------------------+------------+
| test               | test                           |         10 |
| test2              | test2                          |         10 |
| test3              |                                |         50 |
+--------------------+--------------------------------+------------+

and then on each update of the cluster terraform will try to change this user (even if I do not change anything in code):

        user {
            grants   = []
            login    = true
            name     = "test2"
            password = (sensitive value)

            permission {
                database_name = "test2"
            }
        }
      + user {
          + grants   = []
          + login    = true
          + name     = "test3"
          + password = (sensitive value)

          + permission {}
        }
      - user {
          - grants   = [] -> null
          - login    = true -> null
          - name     = "test3" -> null
          - password = (sensitive value)
        }

or if I update the user:

        user {
            grants   = []
            login    = true
            name     = "test2"
            password = (sensitive value)

            permission {
                database_name = "test2"
            }
        }
      + user {
          + grants   = []
          + login    = true
          + name     = "test3"
          + password = (sensitive value)

          + permission {
              + database_name = "test2"
            }
        }
      - user {
          - grants   = [] -> null
          - login    = true -> null
          - name     = "test3" -> null
          - password = (sensitive value)
        }

But terraform apply will not actually make any changes.
It reports the changes as applied successfully:

terraform apply -compact-warnings -auto-approve
random_password.pg_passwords["test3"]: Refreshing state... [id=none]
random_password.pg_passwords["test"]: Refreshing state... [id=none]
random_password.pg_passwords["test2"]: Refreshing state... [id=none]
data.yandex_resourcemanager_cloud.decathlon: Refreshing state...
data.yandex_resourcemanager_folder.folder: Refreshing state...
yandex_mdb_postgresql_cluster.postgres: Refreshing state... [id=c9qutl4lroqjmknjli88]
yandex_mdb_postgresql_cluster.postgres: Modifying... [id=c9qutl4lroqjmknjli88]
yandex_mdb_postgresql_cluster.postgres: Modifications complete after 2s [id=c9qutl4lroqjmknjli88]

but with no effect: the user's permissions are not changed, and if I rerun terraform plan/apply, it shows the same pending changes for the user.
To fix it I have to manually update the user and grant permissions to some database, or delete the user.
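
A workaround that seems to avoid this broken state is to filter out entries with an empty database_name before they reach the dynamic permission block, so that no empty permission {} is ever sent to the API. A minimal sketch, assuming the rest of the resource stays as above:

  dynamic "user" {
    for_each = random_password.pg_passwords
    content {
      name     = user.key
      password = user.value["result"]

      # Skip entries with an empty database_name so no empty permission {} block is produced.
      dynamic "permission" {
        for_each = [for p in var.pg_user_permissions[user.key] : p if p.database_name != ""]
        content {
          database_name = permission.value.database_name
        }
      }
    }
  }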

resource yandex_mdb_redis_cluster. Error with labels block: Unsupported block type

Hello.
Am I defining the labels block correctly when creating a Redis cluster resource?

It is similar to compute instance resource:

resource "yandex_mdb_redis_cluster" "redis_cluster" {
  name             = var.redis_cluster["name"]
  environment  = var.redis_cluster["environment"]
  network_id     = var.network_id
  folder_id         = data.yandex_resourcemanager_folder.folder.id

  labels {
    billing_costcenter = "var.billing_costcenter"
    git_project_name  = "var.git_project_name"
  }

  config {
    password = "${random_password.password.result}"
  }

  resources {
    resource_preset_id = var.redis_cluster["resource-preset"]
    disk_size          = var.redis_cluster["disk-size"]
  }

  host {
    zone   = var.zone
    subnet_id = var.subnet_id
  }
}

But on plan I'm getting an error:

Error: Unsupported block type

  on .terraform/modules/redis_cluster/main.tf line 68, in resource "yandex_mdb_redis_cluster" "redis_cluster":
  68:   labels {

Blocks of type "labels" are not expected here. Did you mean to define argument
"labels"? If so, use the equals sign to assign it a value.

yandex_mdb_mongodb_cluster disk size change don't work

When I try to change the disk size for a MongoDB cluster via terraform, nothing happens.

The terraform plan output always shows the following, and there are no real changes in the cluster after applying the plan:

      ~ resources {
          ~ disk_size          = 20 -> 30
            disk_type_id       = "network-ssd"
            resource_preset_id = "s2.micro"
        }

Provider version: 0.31.0

Error on importing yandex_storage_bucket resource

Hello,

How do I import a previously created storage bucket?

I've added the resource to my tf config:

resource "yandex_storage_bucket" "old_bucket" {
  bucket = "bucket.name"
  access_key = "access_key"
  secret_key = "secret_key"
}

But when I'm trying to do an import, I'm getting an error:

terraform import yandex_storage_bucket.old_bucket bucket.name 
yandex_storage_bucket.old_bucket: Importing from ID "bucket.name"...
yandex_storage_bucket.old_bucket: Import prepared!
  Prepared yandex_storage_bucket for import
yandex_storage_bucket.old_bucket: Refreshing state... [id=bucket.name]

Error: error getting storage client: failed to get default storage client
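
What appears to help here is configuring the storage credentials at the provider level, since the resource-level access_key/secret_key do not seem to be picked up during import. A minimal sketch, assuming the provider supports the storage_access_key and storage_secret_key arguments:

provider "yandex" {
  # Credentials for the S3-compatible Object Storage API, used by
  # yandex_storage_* resources, including during terraform import.
  storage_access_key = var.storage_access_key
  storage_secret_key = var.storage_secret_key
}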

Panic when reading vpc subnet created before

After upgrading from 0.40.0 to 0.41.0 I get an error when reading a vpc subnet created earlier. config.sdk.VPC().Subnet().Get returns a subnet with nil in the field subnet.DhcpOptions, so flattenDhcpOptions causes a panic.

panic: runtime error: invalid memory address or nil pointer dereference
2020-06-23T23:25:06.870+0300 [DEBUG] plugin.terraform-provider-yandex_v0.41.0_x4: [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x12765bd]
2020-06-23T23:25:06.870+0300 [DEBUG] plugin.terraform-provider-yandex_v0.41.0_x4: 
2020-06-23T23:25:06.870+0300 [DEBUG] plugin.terraform-provider-yandex_v0.41.0_x4: goroutine 164 [running]:
2020-06-23T23:25:06.870+0300 [DEBUG] plugin.terraform-provider-yandex_v0.41.0_x4: github.com/terraform-providers/terraform-provider-yandex/yandex.flattenDhcpOptions(...)
2020-06-23T23:25:06.870+0300 [DEBUG] plugin.terraform-provider-yandex_v0.41.0_x4:       /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-yandex/yandex/structures.go:1053
2020-06-23T23:25:06.870+0300 [DEBUG] plugin.terraform-provider-yandex_v0.41.0_x4: github.com/terraform-providers/terraform-provider-yandex/yandex.resourceYandexVPCSubnetRead(0xc0003a5d50, 0x14ef000, 0xc000577d00, 0xc0003a5d50, 0x0)
2020-06-23T23:25:06.870+0300 [DEBUG] plugin.terraform-provider-yandex_v0.41.0_x4:       /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-yandex/yandex/resource_yandex_vpc_subnet.go:242 +0x6fd
2020-06-23T23:25:06.870+0300 [DEBUG] plugin.terraform-provider-yandex_v0.41.0_x4: github.com/hashicorp/terraform-plugin-sdk/helper/schema.(*Resource).RefreshWithoutUpgrade(0xc00002f980, 0xc00067d4f0, 0x14ef000, 0xc000577d00, 0xc0001311d0, 0x0, 0x0)
2020-06-23T23:25:06.870+0300 [DEBUG] plugin.terraform-provider-yandex_v0.41.0_x4:       /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-yandex/vendor/github.com/hashicorp/terraform-plugin-sdk/helper/schema/resource.go:455 +0x119
2020-06-23T23:25:06.870+0300 [DEBUG] plugin.terraform-provider-yandex_v0.41.0_x4: github.com/hashicorp/terraform-plugin-sdk/internal/helper/plugin.(*GRPCProviderServer).ReadResource(0xc00000e198, 0x19a5860, 0xc0008a4cc0, 0xc00067d400, 0xc00000e198, 0xc0008a4cc0, 0xc000469b88)
2020-06-23T23:25:06.870+0300 [DEBUG] plugin.terraform-provider-yandex_v0.41.0_x4:       /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-yandex/vendor/github.com/hashicorp/terraform-plugin-sdk/internal/helper/plugin/grpc_provider.go:525 +0x3d8
2020-06-23T23:25:06.870+0300 [DEBUG] plugin.terraform-provider-yandex_v0.41.0_x4: github.com/hashicorp/terraform-plugin-sdk/internal/tfplugin5._Provider_ReadResource_Handler(0x1663240, 0xc00000e198, 0x19a5860, 0xc0008a4cc0, 0xc000089b00, 0x0, 0x19a5860, 0xc0008a4cc0, 0xc0008b4000, 0x18d)
2020-06-23T23:25:06.870+0300 [DEBUG] plugin.terraform-provider-yandex_v0.41.0_x4:       /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-yandex/vendor/github.com/hashicorp/terraform-plugin-sdk/internal/tfplugin5/tfplugin5.pb.go:3153 +0x217
2020-06-23T23:25:06.870+0300 [DEBUG] plugin.terraform-provider-yandex_v0.41.0_x4: google.golang.org/grpc.(*Server).processUnaryRPC(0xc0005481a0, 0x19b2f40, 0xc000643800, 0xc0007b2400, 0xc0008023f0, 0x2728ad0, 0x0, 0x0, 0x0)
2020-06-23T23:25:06.870+0300 [DEBUG] plugin.terraform-provider-yandex_v0.41.0_x4:       /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-yandex/vendor/google.golang.org/grpc/server.go:1082 +0x50a
2020-06-23T23:25:06.870+0300 [DEBUG] plugin.terraform-provider-yandex_v0.41.0_x4: google.golang.org/grpc.(*Server).handleStream(0xc0005481a0, 0x19b2f40, 0xc000643800, 0xc0007b2400, 0x0)
2020-06-23T23:25:06.870+0300 [DEBUG] plugin.terraform-provider-yandex_v0.41.0_x4:       /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-yandex/vendor/google.golang.org/grpc/server.go:1405 +0xcc9
2020-06-23T23:25:06.870+0300 [DEBUG] plugin.terraform-provider-yandex_v0.41.0_x4: google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc00003e360, 0xc0005481a0, 0x19b2f40, 0xc000643800, 0xc0007b2400)
2020-06-23T23:25:06.870+0300 [DEBUG] plugin.terraform-provider-yandex_v0.41.0_x4:       /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-yandex/vendor/google.golang.org/grpc/server.go:746 +0xa1
2020-06-23T23:25:06.870+0300 [DEBUG] plugin.terraform-provider-yandex_v0.41.0_x4: created by google.golang.org/grpc.(*Server).serveStreams.func1
2020-06-23T23:25:06.870+0300 [DEBUG] plugin.terraform-provider-yandex_v0.41.0_x4:       /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-yandex/vendor/google.golang.org/grpc/server.go:744 +0xa1

This patch works for me:

diff --git a/yandex/structures.go b/yandex/structures.go
index d14c138..c3e07d7 100644
--- a/yandex/structures.go
+++ b/yandex/structures.go
@@ -1048,6 +1048,10 @@ func routeDescriptionToStaticRoute(v interface{}) (*vpc.StaticRoute, error) {
 }
 
 func flattenDhcpOptions(dhcpOptions *vpc.DhcpOptions) []interface{} {
+       if dhcpOptions == nil {
+               return nil
+       }
+
        m := make(map[string]interface{})
 
        if dhcpOptions.DomainName != "" {

Resource "yandex_compute_disk" doesn't change name of disk when added as "secondary_disk" in "yandex_compute_instance".

1. cat example.tf
resource "yandex_compute_disk" "example" {
  name = "example"
  size = 1
  zone = local.zone
}

resource "yandex_compute_instance" "example" {
  allow_stopping_for_update = true

  folder_id   = local.folder_id
  hostname    = "example"
  name        = "example"
  platform_id = "standard-v2" # Intel Cascade Lake
  zone        = local.zone
  boot_disk {
    auto_delete = true
    initialize_params {
      image_id = data.yandex_compute_image.default.id
    }
  }

  network_interface {
    subnet_id = data.yandex_vpc_subnet.default.id
  }

  resources {
    cores         = 2
    memory        = 1
    core_fraction = 5
  }

  secondary_disk {
    auto_delete = false
    disk_id     = yandex_compute_disk.example.id
  }

  scheduling_policy {
    preemptible = true
  }
}
2. terraform apply
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # yandex_compute_instance.example will be created
  + resource "yandex_compute_instance" "example" {
      + allow_stopping_for_update = true
      + created_at                = (known after apply)
      + folder_id                 = "ID"
      + fqdn                      = (known after apply)
      + hostname                  = "example"
      + id                        = (known after apply)
      + name                      = "example"
      + platform_id               = "standard-v2"
      + service_account_id        = (known after apply)
      + status                    = (known after apply)
      + zone                      = "ru-central1-a"

      + boot_disk {
          + auto_delete = true
          + device_name = (known after apply)
          + disk_id     = (known after apply)
          + mode        = (known after apply)

          + initialize_params {
              + description = (known after apply)
              + image_id    = "ID"
              + name        = (known after apply)
              + size        = 8
              + snapshot_id = (known after apply)
              + type        = "network-hdd"
            }
        }

      + network_interface {
          + index          = (known after apply)
          + ip_address     = (known after apply)
          + ipv6           = (known after apply)
          + ipv6_address   = (known after apply)
          + mac_address    = (known after apply)
          + nat            = (known after apply)
          + nat_ip_address = (known after apply)
          + nat_ip_version = (known after apply)
          + subnet_id      = "ID"
        }

      + resources {
          + core_fraction = 20
          + cores         = 2
          + memory        = 2
        }

      + scheduling_policy {
          + preemptible = true
        }

      + secondary_disk {
          + auto_delete = false
          + device_name = (known after apply)
          + disk_id     = "ID"
          + mode        = "READ_WRITE"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.
yandex_compute_instance.example: Creating...
yandex_compute_instance.example: Still creating... [10s elapsed]
yandex_compute_instance.example: Creation complete after 16s [id=ID]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Everything is OK!

Next step: change name of disk
3. cat main.tf

resource "yandex_compute_disk" "example" {
  name = "example-changed"
  size = 1
  zone = local.zone
}
... 
4. terraform apply
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # yandex_compute_disk.example will be updated in-place
  ~ resource "yandex_compute_disk" "example" {
        created_at  = "2019-07-10T11:40:54Z"
        folder_id   = "ID"
        id          = "ID"
        labels      = {}
      ~ name        = "example" -> "example-changed"
        product_ids = []
        size        = 1
        status      = "ready"
        type        = "network-hdd"
        zone        = "ru-central1-a"
    }

Plan: 0 to add, 1 to change, 0 to destroy.
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Actual result:
In the last two lines, "Plan" reports 1 to change, but "Apply" reports 0 changed.
The change of the disk name wasn't applied.
Expected result:
In the last two lines, "Plan" reports 1 to change and "Apply" reports 1 changed.
The change of the disk name should be applied.

P.S.
Also, after "apply", if you run terraform plan it always returns:

Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # yandex_compute_disk.example will be updated in-place
  ~ resource "yandex_compute_disk" "example" {
        created_at  = "2019-07-10T11:40:54Z"
        folder_id   = "ID"
        id          = "ID"
        labels      = {}
      ~ name        = "example" -> "example-changed"
        product_ids = []
        size        = 1
        status      = "ready"
        type        = "network-hdd"
        zone        = "ru-central1-a"
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Service account authorization

Hello,

As far as I see, currently there is no support for service account authorization.
My manifest is the following:

provider "yandex" {}
resource "yandex_vpc_network" "production" {
  name = "production"
}

I've tried to pass the service account key ID and secret through the YC_CLOUD and YC_TOKEN variables, but the error I've got says

* yandex_vpc_network.production: Error while requesting API to create network: request-id = 12dee9f4-74aa-44e9-a882-35089eee859e rpc error: code = InvalidArgument desc = Request validation error: Exactly one of 'yandexPassportOauthToken' and 'jwt' should be specified.

It should also be mentioned that an OAuth token works fine, but its TTL is 1 year, which is extremely unsafe for collaborative development. Moreover, the absence of service account authorization makes the whole provider useless and not production-ready.

Could you please take care of this issue, or let's discuss how I could help you to fix it.
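
For anyone reading this later: newer provider versions appear to support key-file authorization through a service_account_key_file argument. A minimal sketch, assuming that argument is available in your provider version and that key.json is an authorized key created for the service account:

provider "yandex" {
  # Assumed argument; check the provider documentation for your version.
  service_account_key_file = "/path/to/key.json"
  cloud_id                 = "your-cloud-id"
  folder_id                = "your-folder-id"
}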

Error creating Instance Group in v0.42.0

Hey! I am not able to create Instance Group in v0.42.0. However, the same code works in v0.41.1.

Resource declaration:

resource "yandex_compute_instance_group" "docker-swarm-managers" {
  name = "docker-swarm-managers"
  service_account_id = yandex_iam_service_account.docker-swarm.id
  allocation_policy {
    zones = ["ru-central1-a", "ru-central1-b", "ru-central1-c"]
  }
  deploy_policy {
    max_unavailable = 1
    max_expansion = 0
  }
  scale_policy {
    fixed_scale {
      size = 3
    }
  }
  instance_template {
    platform_id = "standard-v2"
    resources {
      cores         = 2
      memory        = 1
      core_fraction = 5
    }
    boot_disk {
      initialize_params {
        image_id = "fd8vqk0bcfhn31stn2ts"
        size     = 10
      }
    }
    secondary_disk {
      device_name = "glusterfs"
      initialize_params {
        size = 10
      }
    }
    metadata = {
      ssh-keys = "ubuntu:${file("ssh/yandex-cloud-prod.pub")}"
    }
    scheduling_policy {
      preemptible = true
    }
    network_interface {
      network_id = yandex_vpc_network.main-network.id
      subnet_ids = [yandex_vpc_subnet.subnet-1.id, yandex_vpc_subnet.subnet-2.id, yandex_vpc_subnet.subnet-3.id]
    }
  }
}

crash.log

Can't change instance_group resource with load_balancer defined

I didn't change load_balancer_spec, but terraform prints an error:

yandex_compute_instance_group.prerender: Modifying... [id=xxx]

Error: Error while requesting API to update Instance group "xxx": request-id = b34f12e2-1bb8-48f1-af97-45533163a2b3 rpc error: code = InvalidArgument desc = Validation failed:
  - load_balancer_spec: load_balancer_spec cannot be changed

  on instance_group.tf line 6, in resource "yandex_compute_instance_group" "xxx":
   6: resource "yandex_compute_instance_group" "xxx" {

yandex_vpc_subnet does not detach route_table

After creating a subnet with a route table, I'm unable to detach or delete the route table.
Steps to reproduce:

  • Create subnet with route table
resource "yandex_vpc_subnet" "lab-subnet-a" {
  v4_cidr_blocks = ["10.2.0.0/16"]
  zone           = "ru-central1-a"
  description    = "description"
  network_id     = yandex_vpc_network.this.id
  route_table_id = yandex_vpc_route_table.lab-rt-a.id
}

resource "yandex_vpc_route_table" "lab-rt-a" {
  network_id = yandex_vpc_network.this.id

  static_route {
    destination_prefix = "10.2.0.0/16"
    next_hop_address   = "172.16.10.10"
  }
}
  • Set description and route_table_id to null, or comment out the whole lines. The terraform plan will look like:
resource "yandex_vpc_subnet" "lab-subnet-a" {
  v4_cidr_blocks = ["10.2.0.0/16"]
  zone           = "ru-central1-a"
  description    = null #"description"
  network_id     = yandex_vpc_network.this.id
  route_table_id = null #yandex_vpc_route_table.lab-rt-a.id
}
Terraform will perform the following actions:

  # yandex_vpc_subnet.lab-subnet-a will be updated in-place
  ~ resource "yandex_vpc_subnet" "lab-subnet-a" {
        created_at     = "2020-04-24T13:37:41Z"
      - description    = "description" -> null
        folder_id      = "b1gt0iskjdm3od1v1ugn"
        id             = "e9bmijvqccme1o24k6gb"
        labels         = {}
        network_id     = "enp6nelibq0barsg397n"
        route_table_id = "enp4hs4ss80qb481fqpp"
        v4_cidr_blocks = [
            "10.2.0.0/16",
        ]
        v6_cidr_blocks = []
        zone           = "ru-central1-a"
    }

Obviously, apply does not detach the route table.
In the web console I can also see that the route table is still attached after terraform apply (screenshot omitted).

  • When I try to destroy yandex_vpc_route_table, I get an error

Error: error reading Route table "": server-request-id = 8f4d8d21-1b93-ba6e-b3b2-bd7d1982582c client-request-id = 9ceaa2ec-5e3d-4000-90b2-d8d96a39dff3 client-trace-id = 1528b1d2-bb9c-4761-aa19-fb334c0f051f rpc error: code = FailedPrecondition desc = Route table enp4hs4ss80qb481fqpp is associated with a subnet

However, the same approach works as intended for the description field.

Provider version: 0.38
Terraform version: 0.12.24

provider level folder is ignored in yandex_iam_service_account resource

Resource definition

resource "yandex_iam_service_account" "sa" {
  name      = "robot"
}

terraform plan works OK, but an error occurs during terraform apply

terraform plan -out plan.bin
... omit output

terraform apply -auto-approve plan.bin
...
Error: Error getting folder ID while creating service account: cannot determine folder_id: please set 'folder_id' key in this resource or at provider level

  on main.tf line 29, in resource "yandex_iam_service_account" "sa":
  29: resource "yandex_iam_service_account" "sa" {

folder_id is already defined in provider:

provider "yandex" {
  alias       = "ru_prod"
  version     = "~> 0.40"
  cloud_id    = "xxxxxxxx"
  folder_id   = "xxxxxxxx"
}

The current workaround is to create a data source for the current folder and reference it from the yandex_iam_service_account resource:

resource "yandex_iam_service_account" "sa" {
  folder_id = data.yandex_resourcemanager_folder.folder.folder_id
  name      = "robot"
}
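
The data source referenced by this workaround is not shown in the report; presumably it looks something like this (the folder_id value is an assumption):

data "yandex_resourcemanager_folder" "folder" {
  folder_id = "xxxxxxxx" # the same folder as in the provider block
}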

Yandex provider version: v0.41.1

data source yandex_resourcemanager_cloud requires folder_id to get cloud_id

I'm not sure when this started, but today I've run into the following issue:

I'm trying to get cloud id by its name:

data "yandex_resourcemanager_cloud" "cloud" {
  name = var.cloud_name
}

But at the plan step I'm receiving an error:

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.yandex_resourcemanager_cloud.cloud: Refreshing state...

Error: failed to resolve data source cloud by name: cannot determine folder_id: please set 'folder_id' key in this resource or at provider level

  on main.tf line 89, in data "yandex_resourcemanager_cloud" "cloud":
  89: data "yandex_resourcemanager_cloud" "cloud" {

I'm not sure, but previously it worked without specifying folder_id at any level.
Normally I use the following pipeline in my scripts: get the cloud ID by the cloud name -> get the folder ID by the folder name and the cloud ID.
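
For reference, a minimal sketch of that pipeline in HCL (variable names are assumptions, and it assumes the folder data source accepts a cloud_id when resolving by name):

# Step 1: resolve the cloud by its name.
data "yandex_resourcemanager_cloud" "cloud" {
  name = var.cloud_name
}

# Step 2: resolve the folder by its name within that cloud.
data "yandex_resourcemanager_folder" "folder" {
  name     = var.folder_name
  cloud_id = data.yandex_resourcemanager_cloud.cloud.id
}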
