terraform-provider-elasticsearch's Introduction

terraform-provider-elasticsearch

This is a Terraform provider that lets you provision Elasticsearch and OpenSearch resources, compatible with v6 and v7 of Elasticsearch and v1 of OpenSearch. Based on an original PR to Terraform.

Using the Provider

Terraform 0.13 and above

This package is published on the official Terraform registry. Note: we currently test against the 1.x releases of Terraform. The provider should continue to work with Terraform >= 0.13, however compatibility with those versions is not tested for the >= 2.x releases of this provider.
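For Terraform 0.13 and later, the provider can be declared via a required_providers block. A minimal sketch, using the registry address phillbaker/elasticsearch and an illustrative version pin:

```terraform
terraform {
  required_providers {
    elasticsearch = {
      # Registry source address for this provider.
      source  = "phillbaker/elasticsearch"
      # Pin to a known-good release; adjust the constraint as needed.
      version = "~> 2.0"
    }
  }
}
```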

Terraform 0.12 or manual installation

Alternatively, download a binary and put it somewhere on your system. Then update your ~/.terraformrc to refer to the binary:

providers {
  elasticsearch = "/path/to/terraform-provider-elasticsearch"
}

See the docs for more on manual installation.

Terraform 0.11

Version 2.x of this provider uses version 2.x of the Terraform Plugin SDK, which only supports Terraform 0.12 and higher. Please use the 1.x releases of this provider for Terraform 0.11 support.

Usage

provider "elasticsearch" {
    url = "https://search-foo-bar-pqrhr4w3u4dzervg41frow4mmy.us-east-1.es.amazonaws.com" # Don't include port at the end for aws
    aws_access_key = ""
    aws_secret_key = ""
    aws_token = "" # if necessary
    insecure = true # to bypass certificate check
    cacert_file = "/path/to/ca.crt" # when connecting to elastic with self-signed certificate
    sign_aws_requests = true # only needs to be true if your domain access policy includes IAM users or roles
}

API Coverage

Examples of resources can be found in the examples directory. The resources currently supported from the open-source Elasticsearch, XPack, and OpenDistro/OpenSearch distributions are described below.

Elasticsearch

Kibana

  • Kibana Object
    • Visualization
    • Search
    • Dashboard
  • Kibana Alerts

XPack

OpenDistro/OpenSearch

Examples

resource "elasticsearch_index_template" "test" {
  name = "terraform-test"
  body = <<EOF
{
  "template": "logstash-*",
  "version": 50001,
  "settings": {
    "index.refresh_interval": "5s"
  },
  "mappings": {
    "_default_": {
      "_all": {"enabled": true, "norms": false},
      "dynamic_templates": [ {
        "message_field": {
          "path_match": "message",
          "match_mapping_type": "string",
          "mapping": {
            "type": "text",
            "norms": false
          }
        }
      }, {
        "string_fields": {
          "match": "*",
          "match_mapping_type": "string",
          "mapping": {
            "type": "text", "norms": false,
            "fields": {
              "keyword": { "type": "keyword" }
            }
          }
        }
      } ],
      "properties": {
        "@timestamp": { "type": "date", "include_in_all": false },
        "@version": { "type": "keyword", "include_in_all": false },
        "geoip" : {
          "dynamic": true,
          "properties": {
            "ip": { "type": "ip" },
            "location": { "type": "geo_point" },
            "latitude": { "type": "half_float" },
            "longitude": { "type": "half_float" }
          }
        }
      }
    }
  }
}
EOF
}

# A saved search, visualization or dashboard
resource "elasticsearch_kibana_object" "test_dashboard" {
  body = "${file("dashboard_path.txt")}"
}

Example watches (target notification actions must be set up manually beforehand):

# Monitor cluster status with auth being required
resource "elasticsearch_xpack_watch" "cluster-status-red" {
  watch_id = "cluster-status-red"
  body = <<EOF
{
  "trigger": {
    "schedule": {
      "interval": "1m"
    }
  },
  "input": {
    "http": {
      "request": {
        "scheme": "http",
        "host": "localhost",
        "port": 9200,
        "method": "get",
        "path": "/_cluster/health",
        "params": {},
        "headers": {
          "Authorization": "Basic ${base64encode("username:password")}"
        }
      }
    }
  },
  "condition": {
    "compare": {
      "ctx.payload.status": {
        "eq": "red"
      }
    }
  },
  "actions": {
    "notify-slack": {
      "throttle_period_in_millis": 300000,
      "slack": {
        "account": "monitoring",
        "message": {
          "from": "watcher",
          "to": [
            "#my-slack-channel"
          ],
          "text": "Elasticsearch Monitoring",
          "attachments": [
            {
              "color": "danger",
              "title": "Cluster Health Warning - RED",
              "text": "elasticsearch cluster health is RED"
            }
          ]
        }
      }
    }
  },
  "metadata": {
    "xpack": {
      "type": "json"
    },
    "name": "Cluster Health Red"
  }
}
EOF
}

# Monitor JVM memory usage without auth required
resource "elasticsearch_xpack_watch" "jvm-memory-usage" {
  watch_id = "jvm-memory-usage"
  body = <<EOF
{
  "trigger": {
    "schedule": {
      "interval": "10m"
    }
  },
  "input": {
    "http": {
      "request": {
        "scheme": "http",
        "host": "localhost",
        "port": 9200,
        "method": "get",
        "path": "/_nodes/stats/jvm",
        "params": {
                  "filter_path": "nodes.*.jvm.mem.heap_used_percent"
                },
        "headers": {}
      }
    }
  },
  "condition": {
    "script": {
      "lang": "painless",
      "source": "ctx.payload.nodes.values().stream().anyMatch(node -> node.jvm.mem.heap_used_percent > 75)"
    }
  },
  "actions": {
    "notify-slack": {
      "throttle_period_in_millis": 600000,
      "slack": {
        "account": "monitoring",
        "message": {
          "from": "watcher",
          "to": [
            "#my-slack-channel"
          ],
          "text": "Elasticsearch Monitoring",
          "attachments": [
            {
              "color": "danger",
              "title": "JVM Memory Pressure Warning",
              "text": "JVM Memory Pressure has been > 75% on one or more nodes for the last 5 minutes."
            }
          ]
        }
      }
    }
  },
  "metadata": {
    "xpack": {
      "type": "json"
    },
    "name": "JVM Memory Pressure Warning"
  }
}
EOF
}

For use with AWS OpenSearch domains

Please see the documentation for details.
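As a sketch, a provider configuration for an AWS OpenSearch domain with IAM request signing might look like the following; the aws_region and aws_profile options are assumptions based on this provider's AWS-related settings, so check the documentation for the exact names:

```terraform
provider "elasticsearch" {
  url               = "https://search-example-abc123.us-east-1.es.amazonaws.com"
  aws_region        = "us-east-1"  # assumed option: region used to sign requests
  aws_profile       = "monitoring" # assumed option: named AWS credentials profile
  sign_aws_requests = true         # required when the access policy includes IAM users or roles
}
```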

Development

Requirements

Building the provider from source requires a working Go toolchain:

go build -o /path/to/binary/terraform-provider-elasticsearch

Running tests locally

Start local Elasticsearch and OpenSearch instances with the following:

./script/install-tools
export OSS_IMAGE="opensearchproject/opensearch:1.2.0"
export ES_OPENDISTRO_IMAGE="opensearchproject/opensearch:1.2.0"
export ES_COMMAND=""
export ES_KIBANA_IMAGE=""
export OPENSEARCH_PREFIX="plugins.security"
export OSS_ENV_VAR="plugins.security.disabled=true"
export XPACK_IMAGE="docker.elastic.co/elasticsearch/elasticsearch:7.10.1"
docker-compose up -d
docker-compose ps -a

When running tests, ensure that your test/debug profile has the following environment variables set:

  • ELASTICSEARCH_URL=http://localhost:9200
  • TF_ACC=1

Debugging this provider

Build the executable, and start in debug mode:

$ go build
$ ./terraform-provider-elasticsearch -debuggable # or start in debug mode in your IDE
{"@level":"debug","@message":"plugin address","@timestamp":"2022-05-17T10:10:04.331668+01:00","address":"/var/folders/32/3mbbgs9x0r5bf991ltrl3p280000gs/T/plugin1346340234","network":"unix"}
Provider started, to attach Terraform set the TF_REATTACH_PROVIDERS env var:

        TF_REATTACH_PROVIDERS='{"registry.terraform.io/phillbaker/elasticsearch":{"Protocol":"grpc","ProtocolVersion":5,"Pid":79075,"Test":true,"Addr":{"Network":"unix","String":"/var/folders/32/3mbbgs9x0r5bf991ltrl3p280000gs/T/plugin1346340234"}}}'

In another terminal, you can test your terraform code:

$ cd <my-project/terraform>
$ export TF_REATTACH_PROVIDERS=<env var above>
$ terraform apply

The local provider will be used instead, and you should see debug information printed to the terminal.

Licence

See LICENSE.

Contributing

  1. Fork it ( https://github.com/phillbaker/terraform-provider-elasticsearch/fork )
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create a new Pull Request

terraform-provider-elasticsearch's People

Contributors

9rnt, basvandijk, blamarvt, elrob, enteris, goatherder, gordonbondon, jfroche, jheilcoveo, john-delivuk-rl, krmnn, mindbat, phillbaker, ramenjosh, rm-hull, robsonsutton, rtoma, s-starostin, sam-super, sergei-ivanov, steveteuber, subsetpark, tdegiacinto, tonylovesdevops, tophercullen, travisby, volym3ad, wiston999, yann-soubeyrand, yasumoto


terraform-provider-elasticsearch's Issues

Support for Elasticsearch v6.x+

Hey @phillbaker,

I'm targeting an Elasticsearch cluster which is on version 6.x, but I'm having issues due to breaking API changes from v5 to v6. This is understandable given this provider uses the v5 release branch of github.com/olivere/elastic.

To fix my issue in the short term, I've created a private fork that is now upgraded to use the v6 client.

But ideally I wouldn't want to rely on my workaround; instead I would like a way to use your provider regardless of the target ES version. Have you put any thought into allowing this provider to handle different versions of Elasticsearch?

Alternatively, we could perhaps confirm whether the v6 API/client is backwards compatible with the v5 Index Template, Snapshot Repository and Kibana Object APIs that this provider currently uses. If so, consider upgrading the client to v6, though this doesn't solve the problem in the long term.

[Request] Cut New Release

Currently it seems that 0.8.1 is not current with master. We tried to use it and it fails with an HTTP timeout, but master does not.

Can we get a new release cut from master?

Add import support for elasticsearch_kibana_object

Currently you cannot import existing Kibana objects:

$ terraform import elasticsearch_kibana_object.default_space_advanced_settings config:7.6.2

elasticsearch_kibana_object.default_space_advanced_settings: Importing from ID "config:7.6.2"...

Error: elasticsearch_kibana_object.default_space_advanced_settings (import id: config:7.6.2): import elasticsearch_kibana_object.default_space_advanced_settings (id: config:7.6.2): resource elasticsearch_kibana_object doesn't support import

I thought this might be easy so I had a go on my fork:

https://github.com/tdmalone/terraform-provider-elasticsearch/commit/92ae78dc1859bfa62cb1e08f15001f19e92fb664

and unfortunately it wasn't that easy :P

$ terraform import elasticsearch_kibana_object.default_space_advanced_settings config:7.6.2

elasticsearch_kibana_object.default_space_advanced_settings: Importing from ID "config:7.6.2"...
elasticsearch_kibana_object.default_space_advanced_settings: Import complete!
  Imported elasticsearch_kibana_object (ID: config:7.6.2)
elasticsearch_kibana_object.default_space_advanced_settings: Refreshing state... (ID: config:7.6.2)

Error: elasticsearch_kibana_object.default_space_advanced_settings (import id: config:7.6.2): 1 error occurred:
        * import elasticsearch_kibana_object.default_space_advanced_settings result: config:7.6.2: elasticsearch_kibana_object.default_space_advanced_settings: unexpected end of JSON input

Maybe related: #22 & #70

Add ability to deactivate/activate watches

Watches can be deactivated via a link in Kibana.

Looking at the format of a watch from the API (GET _watcher/watch/<watch-id>), this is stored outside of the actual watch definition, in status.state.active, so currently wouldn't be able to be modified by this provider:

{
  "found" : true,
  "_id" : "test-watch",
  "_version" : 75859,
  "_seq_no" : 11301972,
  "_primary_term" : 39,
  "status" : {
    "state" : {
      "active" : true,
      "timestamp" : "2020-04-26T10:12:04.648Z"
    },
    "last_checked" : "2020-04-27T06:36:23.163Z",
    "actions" : {
      <snip>
    },
    "execution_state" : "execution_not_needed",
    "version" : 75859
  },
  "watch" : {
    "trigger" : {
        <snip>
    },
    "input" : {
      <snip>
    }
...

Would love to see this available as perhaps an optional attribute active, defaulting to true.

(I might have a go at a PR for this if I can figure it out - but happy for anyone else to beat me to it)
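If implemented, usage might look like the following; the active attribute here is hypothetical and not part of the current schema:

```terraform
resource "elasticsearch_xpack_watch" "cluster-status-red" {
  watch_id = "cluster-status-red"
  active   = false # hypothetical attribute: deactivate without deleting the watch
  body     = file("watches/cluster-status-red.json")
}
```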

Interpolation Support for URL

Hi,

I'm trying to use this ElasticSearch provider for Terraform.

My ElasticSearch endpoint URL is not static. There's Terraform code responsible for creating the ElasticSearch cluster (AWS ElasticSearch Service) and there's Terraform code responsible for configuring indexes.
So I have a module creating the ES cluster that outputs the cluster's endpoint (aws_elasticsearch_domain.<name>.endpoint) that I need to use to configure your ElasticSearch provider.

provider "elasticsearch" {
  url = "${module.es.endpoint}"
  version  = "0.5.0"
  insecure = true
}

It seems like the variable interpolation doesn't work here. When setting an explicit value (like https://<blabla>.eu-central-1.es.amazonaws.com) it works fine. But using a variable (module output in this case) has the same result as an empty string:

Error: Error refreshing state: 1 error(s) occurred:

* provider.elasticsearch: health check timeout: no Elasticsearch node available

[terragrunt] 2018/11/05 17:52:34 Hit multiple errors:
exit status 1

So my questions are

  1. is my interpretation correct - variable interpolation does not work for url?
  2. how could I otherwise use a previously unknown endpoint URL?

Cheers + thanks, Christian
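A common workaround is to split cluster creation and cluster configuration into two separate Terraform configurations, so the endpoint is already a known value when the elasticsearch provider is configured. A sketch, assuming an S3 backend and an output named endpoint in the cluster configuration (bucket and key names are illustrative):

```terraform
data "terraform_remote_state" "es" {
  backend = "s3"
  config = {
    bucket = "my-tf-state"                  # illustrative bucket name
    key    = "es-cluster/terraform.tfstate" # illustrative state key
    region = "eu-central-1"
  }
}

provider "elasticsearch" {
  # The endpoint is a resolved value read from the other configuration's state.
  url = "https://${data.terraform_remote_state.es.outputs.endpoint}"
}
```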

resource elasticsearch_index_template doesn't support import

Hi,
First, thanks for your work on this provider :)

When importing index templates an error is thrown saying that:
Error: resource elasticsearch_index_template doesn't support import

Is it expected? Is there any plan to make index templates importable (or reason to not do it)?

Data Sources for index templates and kibana objects

The index templates and kibana objects are JSON files with a specific structure. The current approach of using raw JSON works. However, it would be nice to have data source to create the visualizations, index patterns and dashboards.

For an inspiration, the aws_iam_policy_document. The advantage of the aws_iam_policy_document data source is, the policy is easier to read and can be validated by Terraform. The document itself will be converted to the AWS IAM JSON format. The aws_iam_policy will then be used to create, manage and destroy the actual AWS IAM Policy.

It would be nice to use a similar approach for this provider and have data sources for visualizations, index patterns and dashboards.

For example:

data "elasticsearch_kibana_visualization" "test_visualization_v6" {
  id = "response-time-percentile"
  title = "Total response time percentiles"
  description = ""

  visualization = {
      visState = <<EOF
{
  "title": "Total response time percentiles",
  "type": "line",
  "params": {
    "addTooltip": true,
    "addLegend": true,
    "legendPosition": "right",
    "showCircles": true,
    "interpolate": "linear",
    "scale": "linear",
    "drawLinesBetweenPoints": true,
    "radiusRatio": 9,
    "times": [],
    "addTimeMarker": false,
    "defaultYExtents": false,
    "setYExtents": false
  },
  "aggs": [
    {
      "id": "1",
      "enabled": true,
      "type": "percentiles",
      "schema": "metric",
      "params": {
        "field": "app.total_time",
        "percents": [
          50,
          90,
          95
        ]
      }
    },
    {
      "id": "2",
      "enabled": true,
      "type": "date_histogram",
      "schema": "segment",
      "params": {
        "field": "@timestamp",
        "interval": "auto",
        "customInterval": "2h",
        "min_doc_count": 1,
        "extended_bounds": {}
      }
    },
    {
      "id": "3",
      "enabled": true,
      "type": "terms",
      "schema": "group",
      "params": {
        "field": "system.syslog.program",
        "size": 5,
        "order": "desc",
        "orderBy": "_term"
      }
    }
  ],
  "listeners": {}
}
EOF

     kibanaSavedObjectMeta = <<EOF
{
  "index": "filebeat-*",
  "query": {
    "query_string": {
      "query": "*",
      "analyze_wildcard": true
    }
  },
  "filter": []
}
EOF
  } 
}

resource "elasticsearch_kibana_object" "test_visualization_v6" {
  body = "${data.elasticsearch_kibana_visualization.test_visualization_v6.json}"
}

The advantage of using a for example data source for the visualization instead of a resource for the visualization is, that if something changes or is not possible with the data source. The user can always fall back to raw JSON in the elasticsearch_kibana_object resource. The other advantage is the elasticsearch_kibana_object can stay very generic. The data sources can provide the more specific needs of a visualization, dashboard etc.

Getting Error Error 400 (Bad Request): Rejecting mapping update to [.kibana]

I was trying to create a visualization with the example doc on Elasticsearch v6.4, but got the error: elasticsearch_kibana_object.test_visualization: elastic: Error 400 (Bad Request): Rejecting mapping update to [.kibana] as the final mapping would have more than 1 type: [visualization, doc] [type=illegal_argument_exception]

main.tf

resource "elasticsearch_kibana_object" "test_visualization" {
  body = <<EOF
[
  {
    "_id": "response-time-percentile",
    "_type": "visualization",
    "_source": {
      "title": "Total response time percentiles",
      "visState": "{\"title\":\"Total response time percentiles\",\"type\":\"line\",\"params\":{\"addTooltip\":true,\"addLegend\":true,\"legendPosition\":\"right\",\"showCircles\":true,\"interpolate\":\"linear\",\"scale\":\"linear\",\"drawLinesBetweenPoints\":true,\"radiusRatio\":9,\"times\":[],\"addTimeMarker\":false,\"defaultYExtents\":false,\"setYExtents\":false},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"percentiles\",\"schema\":\"metric\",\"params\":{\"field\":\"app.total_time\",\"percents\":[50,90,95]}},{\"id\":\"2\",\"enabled\":true,\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"@timestamp\",\"interval\":\"auto\",\"customInterval\":\"2h\",\"min_doc_count\":1,\"extended_bounds\":{}}},{\"id\":\"3\",\"enabled\":true,\"type\":\"terms\",\"schema\":\"group\",\"params\":{\"field\":\"system.syslog.program\",\"size\":5,\"order\":\"desc\",\"orderBy\":\"_term\"}}],\"listeners\":{}}",
      "uiStateJSON": "{}",
      "description": "",
      "version": 1,
      "kibanaSavedObjectMeta": {
        "searchSourceJSON": "{\"index\":\"filebeat-*\",\"query\":{\"query_string\":{\"query\":\"*\",\"analyze_wildcard\":true}},\"filter\":[]}"
      }
    }
  }
]
EOF
}

Not sure if I'm doing something wrong

Refreshing state for kibana object

Provider version: 0.2.0
Terraform version: 0.10.8

When a Kibana object was deleted manually or the Elasticsearch cluster was recreated, refreshing state fails on any command like plan, apply, or destroy with errors similar to:

Error: Error refreshing state: 1 error(s) occurred:

* elasticsearch_kibana_object.cloudtrail_kibana_object: 10 error(s) occurred:

* elasticsearch_kibana_object.cloudtrail_kibana_object[4]: elasticsearch_kibana_object.cloudtrail_kibana_object.4: elastic: Error 404 (Not Found)
* elasticsearch_kibana_object.cloudtrail_kibana_object[6]: elasticsearch_kibana_object.cloudtrail_kibana_object.6: elastic: Error 404 (Not Found)

Manually removing them from state helps.

Support for OpenDistro for ElasticSearch?

AWS introduced its own open-source-licensed plugins for Elasticsearch. They call it Open Distro for Elasticsearch.

Their goal is to have the following additional plugins:

  • Security (#45)
  • SQL (doesn't have state, so not applicable for terraform)
  • Alerting
  • Performance Analyzer (doesn't have state, so not applicable for terraform)
  • Index State Management (#50)

@phillbaker are you planning to support those extensions?

AWS seems to slowly introduce those new plugins into their installation through Service Software updates. They started rolling out the alerting feature with AWS ElasticSearch Service upgrade R20190221.

It would be nice to be able to create alerts via Terraform in this plugin.

New release with Elasticsearch 7 features

First off, thanks for this provider!

I noticed that the v0.11.0 release was done 9 days ago and then within a couple of days, Elasticsearch 7 features were released. Do you have time to cut a new release?

Thanks again for curating this provider and helping the community!

OpenDistro Role example

There is an OpenDistro Role example:

resource "elasticsearch_opendistro_role" "test" {
  role_name = "test"
  index_permissions {
    index_patterns = [
      "test*"
    ]
    allowed_actions = [
      "read"
    ]
  }

  tenant_permissions {
    tenant_patterns = [
      "global_tenant"
    ]
    allowed_actions = [
      "kibana_all_write"
    ]
  }
}

How can I specify a few index permissions?
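index_permissions is a repeatable block, so additional permissions can be declared by repeating it. A sketch based on the example above:

```terraform
resource "elasticsearch_opendistro_role" "test" {
  role_name = "test"

  index_permissions {
    index_patterns  = ["test*"]
    allowed_actions = ["read"]
  }

  # Repeat the block once per additional permission set.
  index_permissions {
    index_patterns  = ["logs-*"]
    allowed_actions = ["read", "write"]
  }
}
```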

License

Could you attach a license to this repo? It looks like Terraform's license is MPL 2.0. I'm not certain that's what you'd need or want - just throwing it out there. I'd like to use/contribute to this but can't until it has a license.

ability to create an index

When using ILM, the initial index needs to be created, because it has an alias on it, which cannot be set from the template. It also defines the naming pattern.

It would be good to be able to do this; effectively we need to run:
PUT /%3Cmetricbeat-dev-%7Bnow%2Fy%7Byyyy%7D%7D-000001%3E { "aliases": { "metricbeat-dev": { "is_write_index": true } } }

If the index does not exist. If it does exist, do nothing. On destroy, have the option to delete it.
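A sketch of how this could be expressed with the provider's elasticsearch_index resource, using the URL-decoded date-math name from the request above (support for date-math index names should be verified against the resource documentation):

```terraform
resource "elasticsearch_index" "metricbeat_dev" {
  name = "<metricbeat-dev-{now/y{yyyy}}-000001>"

  # Attach the write alias that the index template cannot set.
  aliases = jsonencode({
    "metricbeat-dev" = {
      is_write_index = true
    }
  })

  # Allow Terraform to delete the index (and its documents) on destroy.
  force_destroy = true
}
```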

'Cannot unmarshal object' when creating a new elasticsearch_kibana_object

I'm unable to create a new elasticsearch_kibana_object that looks like this:

resource "elasticsearch_kibana_object" "test" {
  body = <<EOF
{
  "test": "yes"
}
  EOF
}
elasticsearch_kibana_object.default_space_advanced_settings: Creating...
  body:  "" => "{\n  \"test\": \"yes\"\n}\n  "
  index: "" => ".kibana"

Error: Error applying plan:

1 error occurred:
        * elasticsearch_kibana_object.default_space_advanced_settings: 1 error occurred:
        * elasticsearch_kibana_object.default_space_advanced_settings: json: cannot unmarshal object into Go value of type []map[string]interface {}

It sounds like a provider problem, but just in case it was a problem with the schema of the object I also tried a fully correct object (a space, with top-level space and type keys), and got the same error.

(Also tried without the heredocs, to avoid the extra whitespace around the JSON string, and got the same result).

Maybe related: #22 & #69

Add search template

Hi, I'd like to know if there's a way to add a search template using this elasticsearch provider.

Check here for more info.

I did some tests using elasticsearch_index_template and elasticsearch_index, both worked great!!!

An elasticsearch_script would be amazing! :)
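A stored script / search template resource might look like the following; the resource name elasticsearch_script and its attributes are hypothetical, sketched only to illustrate the request:

```terraform
resource "elasticsearch_script" "test_template" {
  # Hypothetical attribute names.
  script_id = "test-search-template"
  body      = <<EOF
{
  "script": {
    "lang": "mustache",
    "source": {
      "query": {
        "match": { "message": "{{query_string}}" }
      }
    }
  }
}
EOF
}
```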

context deadline exceeded at terraform plan

I got the following error when executing terraform plan using the terraform-provider-elasticsearch binary downloaded from the release page (both static and non-static binaries) on my MacBook.

Error: Head https://*****.ap-northeast-1.es.amazonaws.com: context deadline exceeded

When I use a self-built (git clone & go build) binary, it works fine. Are other Mac users OK?

$ sw_vers
ProductName:	Mac OS X
ProductVersion:	10.15.4
BuildVersion:	19E287

Terraform v0.12.23

terraform-provider-elasticsearch version: 1.0.0

can't build

Hello, trying to build this for Linux, and it's pretty hard since Go is on 1.9 now and dep is the standard for dependency management...

So I made a Docker container, installed Go 1.7, and tried to use glide install to get dependencies; a bunch failed on GitHub auth (even though my GitHub SSH key was in my agent and working).

So I tried dep next and actually got further, but then go build fails with:

root@69f858b7e81b:~/go/src/phillbaker/terraform-provider-elasticsearch# go build
# phillbaker/terraform-provider-elasticsearch/vendor/google.golang.org/grpc/transport
vendor/google.golang.org/grpc/transport/http_util.go:485: f.fr.SetReuseFrames undefined (type *http2.Framer has no field or method SetReuseFrames)
# phillbaker/terraform-provider-elasticsearch/vendor/github.com/hashicorp/terraform/terraform
vendor/github.com/hashicorp/terraform/terraform/context.go:691: undefined: sort.Slice

Please let me know how we can help - we'd really like to use this but we need it built for linux :)

Provider variable interpolation error during import

Hello,
in my current scenario I got an error during the import of an AWS API Gateway resource, due to an elasticsearch provider issue:

$ terraform import aws_api_gateway_rest_api.APIGW-FE <apigw_id>
Acquiring state lock. This may take a few moments...

Error: Provider "elasticsearch" depends on non-var "aws_elasticsearch_domain.ES-DOMAIN-5.0/aws_elasticsearch_domain.ES-DOMAIN-5.N". Providers for import can currently
only depend on variables or must be hardcoded. You can stop import
from loading configurations by specifying `-config=""`.

It seems the provider does not support variable interpolation.
Below is our provider configuration:

  provider "elasticsearch" {
    url = "https://${aws_elasticsearch_domain.ES-DOMAIN-5.endpoint}"
    aws_access_key = "${aws_iam_access_key.ES.id}"
    aws_secret_key = "${aws_iam_access_key.ES.secret}"
    insecure = true
    version = "0.6.0"
  }

Are you able to resolve this issue?
Thank you

Best Regards,
Claudio

Error refreshing State on v1.0.0

Hi,
I'm testing the latest version, and I get this issue:
elasticsearch_ingest_pipeline.logging-kibana-reader: Refreshing state... [id=logging-pipeline]
2020/03/31 20:24:48 [ERROR] : eval: *terraform.EvalReadState, err: missing expected [
2020/03/31 20:24:48 [ERROR] : eval: *terraform.EvalSequence, err: missing expected [
2020/03/31 20:24:48 [ERROR] : eval: *terraform.EvalReadState, err: missing expected [
2020/03/31 20:24:48 [ERROR] : eval: *terraform.EvalSequence, err: missing expected [
2020/03/31 20:24:48 [ERROR] : eval: *terraform.EvalReadState, err: missing expected [
2020/03/31 20:24:48 [ERROR] : eval: *terraform.EvalSequence, err: missing expected [

I'm using exactly the same code as with the latest version.

Update to latest Terraform version

The latest version of Terraform is currently 0.11.5, but the provider uses some old 0.9.x version.

It would be great to get it updated.

Support for elasticsearch pipelines?

Hi @phillbaker

Thanks for this amazing repo!

We currently use GitLab and its CI capabilities to store our index/pipeline templates, and it would be nice if this provider also supported pipeline resources.

Also, a different question: how is the JSON validation performed during plan/apply?

Thanks
D.

Dashboard Example

There is an Dashboard example in the README.md:

# A saved search, visualization or dashboard
resource "elasticsearch_kibana_object" "test_dashboard" {
  body = "${file("dashboard_path.txt")}"
}

However, what does the content of the dashboard_path.txt file look like? It would be nice to have an example.

Use xpack_role

Hi, I'm trying to add a role like this (works with curl):
{
  "applications": [
    {
      "application": "kibana-.kibana",
      "privileges": [
        "feature_discover.all",
        "feature_visualize.all",
        "feature_dashboard.all",
        "feature_maps.all",
        "feature_apm.read",
        "feature_graph.all",
        "feature_timelion.all",
        "feature_uptime.read"
      ],
      "resources": ["*"]
    }
  ],
  "transient_metadata": {
    "enabled": true
  },
  "run_as": [],
  "cluster": [],
  "indices": [
    {
      "privileges": ["read", "monitor"],
      "field_security": {
        "except": [],
        "grant": ["*"]
      },
      "allow_restricted_indices": false,
      "names": ["filebeat-*"]
    }
  ],
  "metadata": {}
}

How can I use the elasticsearch_xpack_role resource?
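As a sketch, the same role might be expressed with elasticsearch_xpack_role roughly as below; the block and attribute names follow the provider's style but should be checked against the resource documentation:

```terraform
resource "elasticsearch_xpack_role" "kibana_reader" {
  role_name = "kibana_reader"

  applications {
    application = "kibana-.kibana"
    privileges  = ["feature_discover.all", "feature_dashboard.all"]
    resources   = ["*"]
  }

  indices {
    names      = ["filebeat-*"]
    privileges = ["read", "monitor"]
  }
}
```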

Make roles, users, and role-mappings importable?

Hey, @phillbaker! Bless you for writing this provider; excited to start using it for the ES clusters on my team :)

I've run into one blocker, though. All of the ES clusters I'd like to put under tf management (with this provider) have existing users, roles, etc. These aren't currently importable, so my tf plans think they need to create new copies of all of these.

I see other resources like indices and index templates are importable. Could we get roles/users/role-mappings to also be importable?

How to get the raw JSON for searches, visualization and dashboard?

How can I get the raw JSON for searches, visualizations and dashboards to be able to use them in this module?

What do I have to do if I want to export existing searches, visualizations and dashboards in the correct format? Is there a specific API endpoint I can use?

Error when update force_destroy from false to true in elasticsearch_index

Hi,

When we have already created an index with force_destroy = false, and after we want to update the value to force_destroy = true, we have the following error:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # elasticsearch_index.index_v1 will be updated in-place
  ~ resource "elasticsearch_index" "index_v1" {
        aliases                           = jsonencode(
            {
                index = {}
            }
        )
        codec                             = "best_compression"
      ~ force_destroy                     = false -> true
        id                                = "index_v1"
        load_fixed_bitset_filters_eagerly = false
        name                              = "index_v1"
        number_of_replicas                = "2"
        number_of_shards                  = "3"
        refresh_interval                  = "30s"
        routing_partition_size            = 0
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

elasticsearch_index.index_v1: Modifying... [id=index_v1]

Error: elastic: Error 400 (Bad Request): Validation Failed: 1: no settings to update; [type=action_request_validation_exception]

  on index.tf line 1, in resource "elasticsearch_index" "index_v1":
   1: resource "elasticsearch_index" "index_v1" {

Steps to reproduce:

  • Create one index with force_destroy = false
  • Update from force_destroy = false to force_destroy = true

Wrong ES version is detected

Running the provider on 0.12.29 against AWS Elasticsearch 6.8 it fails with:

2020-08-11T21:54:20.895Z [DEBUG] plugin.terraform-provider-elasticsearch_v1.4.1: 2020/08/11 21:54:20 [INFO] Using AWS: ap-southeast-2
2020/08/11 21:54:21 [ERROR] module.elasticsearch-sdm-config: eval: *terraform.EvalConfigProvider, err: ElasticSearch is older than 5.0.0!
2020/08/11 21:54:21 [ERROR] module.elasticsearch-sdm-config: eval: *terraform.EvalSequence, err: ElasticSearch is older than 5.0.0!
2020/08/11 21:54:21 [ERROR] module.elasticsearch-sdm-config: eval: *terraform.EvalOpFilter, err: ElasticSearch is older than 5.0.0!
2020/08/11 21:54:21 [ERROR] module.elasticsearch-sdm-config: eval: *terraform.EvalSequence, err: ElasticSearch is older than 5.0.0!
/../
Error: ElasticSearch is older than 5.0.0!
  on ../../../modules/elasticsearch-sdm-config/es_provider.tf line 6, in provider "elasticsearch":
   6: provider "elasticsearch" {

Provider is set here:

provider "elasticsearch" {
    url = "https://masteruser:${var.masteruser_password}@${var.es_endpoint}"
    healthcheck = false
}

Xpack Watches provider validates missing resources wrongly

I ran into an issue with the elasticsearch_watch resource. In the code, we check whether the watch ID already exists by testing whether the elastic client response is anything other than a NotFound error. This is a bad assumption: any other error type is treated as a watch ID collision.

func resourceElasticsearchWatchCreate(d *schema.ResourceData, m interface{}) error {
	// Determine whether the watch already exists.
	watchID := d.Get("watch_id").(string)
	_, err := resourceElasticsearchGetWatch(watchID, m)
	if !elastic6.IsNotFound(err) && !elastic7.IsNotFound(err) {
		log.Printf("[INFO] watch exists: %+v", err)
		return fmt.Errorf("watch already exists with ID: %v", watchID)
	}

We should instead inspect the actual response and verify whether the watch ID was found; the ES API is explicit about this:

{
  "found": false,
  "_id": "this-is-a-missing-watch"
}

I was actually getting a watch already exists with ID error when my issue was getting 403 Forbidden.

Terraform Provider for ECE

Hi Phill, just an update that I've created a new provider for Elastic Cloud Enterprise (ECE) to support creation of ECE clusters. I'm not sure if you're using ECE, but it might be useful if you are:

https://github.com/Ascendon/terraform-provider-ece

I'm also planning to look at adding some additional capabilities to this provider for some upcoming use cases.

Matt

how can we use this for terraform 0.12?

As this syntax

terraform {
  required_providers {
    elasticsearch = {
      source  = "phillbaker/elasticsearch"
      version = "1.4.1"
    }
  }
}

is not available in Terraform 0.12, is there a workaround for using the provider?

thanks!

elasticsearch_kibana_object saved search

I am new to ES/Kibana and I am trying to manage all the configuration in code.

I am having trouble configuring my body to create a saved search. I see some visualization examples in the docs and in the tests but I do not see a saved search in either location. Do you have a generic example you could share with me for ES 7.1 that I could use to correctly craft my configuration?

Thanks in advance for the help!

[elasticsearch_index] Don't recreate datemath index if ILM/ISM created a match

Coming out of #55 (comment)

The main trick here is that the index returned is the resolved name, so the name we know it by (the pattern) and the resolved name are different.
I think we should accept that: if we search for the pattern and get an index back, we can consider it to exist and don't need to re-create it, regardless of whether we made it ourselves or ILM created it. If we search for the pattern and it returns no records, then we need to create an index, even if some older index existed back when our pattern resolved to a different date-based index name.

Given that other settings can change on an index, and that date math could be used for daily indices, the conservative approach is to have the Terraform state map to the originally created index (the fix in #55). In some regards, having multiple services manage these indices is a bit outside of the "pure terraform" approach.

This upstream functionality may be related: hashicorp/terraform#15485

In addition from #55 (comment)

Thinking about it: what if, when creating an index, we inspect the output for a lifecycle policy (in case it was added via the template rather than in the Terraform code itself)? If it has one, AND the name used date math, we track index.lifecycle.rollover_alias instead. As long as there exists an index with the same alias and is_write_index: true, the Terraform code for an "initial" index did its job.

It feels a little too dynamic for most terraform applications, but just a thought on something "stable" we can use for tracking.

Another option might be to add a "dont_look_for_me_in_elasticsearch" boolean flag in index where it only cares about the TF state, not the ES state 😆
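The heuristic proposed above could be sketched as a pure decision function (field names and the date-math check are assumptions for illustration, not the provider's actual behavior):

```go
package main

import (
	"fmt"
	"strings"
)

// usesDateMath is a rough check for date-math index names like
// "<logs-{now/d}-000001>" (assumed heuristic).
func usesDateMath(name string) bool {
	return strings.HasPrefix(name, "<") && strings.Contains(name, "{now")
}

// trackingID picks what Terraform state should track: when the created
// index carries a lifecycle policy and a date-math name, track the
// rollover alias (stable across rollovers) instead of the resolved name.
func trackingID(resolvedName, requestedName, rolloverAlias string, hasLifecycle bool) string {
	if hasLifecycle && usesDateMath(requestedName) && rolloverAlias != "" {
		return rolloverAlias
	}
	return resolvedName
}

func main() {
	fmt.Println(trackingID("logs-2020.08.11-000001", "<logs-{now/d}-000001>", "logs", true))
	fmt.Println(trackingID("plain-index", "plain-index", "", false))
}
```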

Xpack roles:elastic: Error 400 (Bad Request): request body is required [type=parse_exception]

I'm getting this error:

Error 400 (Bad Request): request body is required [type=parse_exception]

on a 7.5.0 cluster, with this configuration:

resource "elasticsearch_xpack_role" "kibana-admin" {
  role_name = "kibana-admin"

  indices {
    privileges     = ["all"]
    field_security = "*"
    names          = ["*"]
  }

  applications {
    application = "kibana-.kibana"
    privileges  = ["all"]
    resources   = ["*"]
  }

  cluster = ["all"]
}

Feature request: create the cluster AND the index in the same run

As far as I understand, I can use this provider only if I already have the ES cluster, because I need its URL. But to have it all in one run, I'd need to write something like the below. There is an old closed issue #11 which suggests this was working once (with a hand-wired https prefix), but if I try it now, terraform plan complains "health check timeout: no Elasticsearch node available" (which is kind of true).

provider "elasticsearch" {
  url = aws_elasticsearch_domain.my-freshly-created-domain.endpoint
}
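For reference, the endpoint attribute on aws_elasticsearch_domain does not include a scheme, so the URL has to be assembled; combined with disabling the provider's health check (a setting shown elsewhere in these issues), a single-run configuration might look like this (untested sketch):

```hcl
provider "elasticsearch" {
  # endpoint has no scheme, so prefix it explicitly
  url = "https://${aws_elasticsearch_domain.my-freshly-created-domain.endpoint}"

  # skip the plan-time health check against a cluster that may not exist yet
  healthcheck = false
}
```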

Request: Darwin build target

It'd be awesome if the Releases included versions for OSes other than Linux. My team is using this on OSX, and we're currently having to build it locally in order to use it.

Watches don't seem to detect diffs made outside of Terraform

Thanks again for putting this provider together! I'm finally getting around to implementing it into our workflows and am looking forward to no longer having to version our Elastic resources 'manually' :)

Doing some quick testing, I think I'm unable to get the provider to generate a diff on a watch that has been edited outside of Terraform. This is obviously a key part of managing resources via Terraform: we want to be able to detect unintended drift in the live resources.

Firstly, some necessary data:

  • Terraform v0.11.14
  • latest release of terraform-provider-elasticsearch (I think - installed it fresh today, unsure how to 100% check the version though as terraform version shows 'unversioned' for the provider)
  • cluster is on Elastic Cloud
  • sniff is set to false in provider config
  • cluster username and password are provided via environment variables
  • there are no other provider settings configured

What I did:

  1. Successfully created a new watch via Terraform, with a very simple config (see below)
  2. Adjusted in Terraform the value of trigger.schedule.interval to 100m, to confirm that the diff is detected and updated (works great!)
  3. Adjusted manually via Kibana the value of trigger.schedule.interval to 102m, and ran terraform apply again. At this point I would expect Terraform to want to set the interval back to 100m; however, it detects no diff and no changes to apply (at this point I also tried changing it back to 10m in my code; Terraform then generates a diff, but it notes that it would be changing '100m' to '10m', i.e. it isn't aware of the 102m that I set manually in the meantime).

It seems like a bug; I can't think of anything I've done wrong here, as it's quite a simple test, but I would appreciate any insight you can offer. I'm very inexperienced in Go, but I might be able to offer a PR given a few pointers!

Watch config used:

resource "elasticsearch_watch" "test" {
  watch_id = "test-watch"

  body = <<EOF
    {
      "trigger": {
        "schedule": {
          "interval": "10m"
        }
      },
      "input": {
        "search": {
          "request": {
            "indices": ["filebeat-*"],
            "body": {
              "query": {
                "match_all" : {}
              }
            }
          }
        }
      },
      "condition": {
        "always": {}
      },
      "actions": {},
      "metadata": {
        "name": "test"
      }
    }
  EOF
}

Missing arguments

Hi,

the example in the README shows template, order, settings, and mapping arguments available for the elasticsearch_index_template resource, but they are not present in the provider itself. The only available arguments are name and body. Am I missing something?

Index Template creation uses create: True

We use create = true in https://github.com/phillbaker/terraform-provider-elasticsearch/blob/master/resource_elasticsearch_index_template.go#L116. This behavior prevents a user from transitioning to Terraform on an existing cluster, as resources might already exist:

Error: elastic: Error 400 (Bad Request): index_template [index1] already exists [type=illegal_argument_exception]

  on elasticsearch/index/index1.tf line 35, in resource "elasticsearch_index_template" "index1":
  35: resource "elasticsearch_index_template" "index1" {



Error: elastic: Error 400 (Bad Request): index_template [index2] already exists [type=illegal_argument_exception]

  on elasticsearch/index/index2.tf line 35, in resource "elasticsearch_index_template" "index2":
  35: resource "elasticsearch_index_template" "index2" {



Error: elastic: Error 400 (Bad Request): index_template [index3] already exists [type=illegal_argument_exception]

  on elasticsearch/index/index3.tf line 35, in resource "elasticsearch_index_template" "index3":
  35: resource "elasticsearch_index_template" "index3" {

resource "elasticsearch_index_template" "index1" {
  name = "index1"
  body = <<EOF
{
    "aliases": {
        "index1": {}
    },
    "index_patterns": [
        "index1-*"
    ],
    "settings": {
        "index": {
            "lifecycle": {
                "name": "index1",
                "rollover_alias": "index1"
            },
            "number_of_shards": "2",
            "number_of_replicas": "1",
            "routing": {
                "allocation": {
                    "require": {
                        "box_type": "hot"
                    }
                }
            }
        }
    }
}
EOF
}

I'm happy to create a PR to always set create to false, but perhaps I'm using the provider the wrong way. Am I?


Support for Elastic Cloud

Trying this out against an Elastic Cloud deployment (similar to ECE). I'm just trying to create an index template. When I do an apply, I get the following:

Error: no active connection found: no Elasticsearch node available

The Elastic Cloud deployment uses port 9243. I can curl the endpoint and get an appropriate response.

When I point the Terraform at a locally installed ES instance, it works without issue, on both port 9200 and 9243.

provider "elasticsearch" {
  url = "https://XXXXXXXXX.us-west-2.aws.found.io:9243"
  username = "XXXXXXXXXX"
  password = "XXXXXXXXXX"
}
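One likely culprit on hosted deployments is client-side node sniffing: the cluster advertises internal node addresses that aren't reachable from outside, which produces exactly this "no Elasticsearch node available" error. Other issues here mention a sniff provider setting; a hedged sketch of the configuration with it disabled (setting name assumed from those reports):

```hcl
provider "elasticsearch" {
  url      = "https://XXXXXXXXX.us-west-2.aws.found.io:9243"
  username = "XXXXXXXXXX"
  password = "XXXXXXXXXX"
  sniff    = false # hosted clusters advertise internal addresses; disable node discovery
}
```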

resource "elasticsearch_index_template" "test" {
  name = "terraform-test"
  body = <<EOF
{
  "template" : "test-*",
  "settings" : {
    "index" : {
      "refresh_interval" : "30s"
    }
  }
}
EOF
}

Add Watcher resource

Firstly, thanks so much for doing the work to put this provider together, in absence of an official provider!

I'd love to have support for Watchers - is this something you might consider adding? It looks like there's an implementation at https://github.com/ansoni/terraform-provider-elastic/blob/master/resource_elasticsearch_watcher.go which might be able to be lifted here with attribution (it's MIT licensed), but I will also post in that repo to ask if the owner is interested in collaborating here (it makes sense to combine forces).
