
terraform-provider-http's Introduction

Terraform Provider: HTTP

The HTTP provider interacts with generic HTTP servers. It provides a data source that issues an HTTP request exposing the response headers and body for use within a Terraform deployment.

Documentation, questions and discussions

Official documentation on how to use this provider can be found on the Terraform Registry. In case of specific questions or discussions, please use the HashiCorp Terraform Providers Discuss forums, in accordance with HashiCorp Community Guidelines.

We also provide:

  • Support page for help when using the provider
  • Contributing guidelines in case you want to help this project
  • Design documentation to understand the scope and maintenance decisions

The remainder of this document will focus on the development aspects of the provider.

Compatibility

Compatibility table between this provider, the Terraform Plugin Protocol version it implements, and Terraform:

HTTP Provider         Terraform Plugin Protocol   Terraform
>= 2.x                5                           >= 0.12
>= 1.1.x, <= 1.2.x    4, 5                        >= 0.11
<= 1.0.x              4                           <= 0.11

Requirements

Development

Building

  1. git clone this repository and cd into its directory
  2. make will trigger the Golang build

The provided GNUmakefile defines additional commands that are generally useful during development, such as running tests, generating documentation, and formatting and linting code. Taking a look at its contents is recommended.

Testing

In order to test the provider, you can run

  • make test to run provider tests
  • make testacc to run provider acceptance tests

It's important to note that acceptance tests (testacc) will actually spawn terraform and the provider. Read more about how they work on the official page.

Generating documentation

This provider uses terraform-plugin-docs to generate documentation and store it in the docs/ directory. Once a release is cut, the Terraform Registry will download the documentation from docs/ and associate it with the release version. Read more about how this works on the official page.

Use make generate to ensure the documentation is regenerated with any changes.

Using a development build

If running tests and acceptance tests isn't enough, it's possible to set up a local terraform configuration to use a development build of the provider. This can be achieved by leveraging the Terraform CLI configuration file development overrides.

First, use make install to place a fresh development build of the provider in your ${GOBIN} (defaults to ${GOPATH}/bin or ${HOME}/go/bin if ${GOPATH} is not set). Repeat this every time you make changes to the provider locally.

Then, set up your environment following these instructions to make your local terraform use your local build.
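
As a sketch, the development override in the Terraform CLI configuration file (~/.terraformrc on Unix-like systems) looks roughly like this; the install path is an example and should match your ${GOBIN}:

```hcl
provider_installation {
  # Use the locally built provider instead of the registry release.
  # The path is an example; point it at your ${GOBIN}.
  dev_overrides {
    "hashicorp/http" = "/home/developer/go/bin"
  }

  # Fall back to normal installation for all other providers.
  direct {}
}
```

With this in place, terraform will print a warning on every run reminding you that a development override is active.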

Testing GitHub Actions

This project uses GitHub Actions to realize its CI.

Sometimes it might be helpful to locally reproduce the behaviour of those actions, and for this we use act. Once installed, you can simulate the actions executed when opening a PR with:

# List of workflows for the 'pull_request' action
$ act -l pull_request

# Execute the workflows associated with the 'pull_request' action
$ act pull_request

Releasing

The release process is automated via GitHub Actions, and it's defined in the Workflow release.yml.

Each release is cut by pushing a semantically versioned tag to the default branch.

License

Mozilla Public License v2.0

terraform-provider-http's People

Contributors

apparentlymart, appilon, austinvalle, automaticgiant, bendbennett, bflad, claire-labry, dependabot[bot], faultymonk, grubernaut, hashicorp-copywrite[bot], hashicorp-tsccr[bot], hc-github-team-tf-provider-devex, jkroepke, katbyte, kgcurran, kmoe, korotovsky, lawliet89, mildwonkey, mkjois, paultyng, radeksimko, russmack, sbgoods, stack72, t0rr3sp3dr0, team-tf-cdk, teamterraform, tombuildsstuff


terraform-provider-http's Issues

Feature request: file upload like multipart/form-data or curl --form option

Hi there,

Do you have any plans to add support for file uploads via HTTP multipart/form-data (Content-Type: multipart/form-data), i.e. the same functionality as curl --request PUT --form "image=@./myimage.png"?

Expected Behavior

File upload via the HTTP multipart/form-data protocol

Actual Behavior

Not supported yet

Body not known until after apply

Terraform Version

v1.0.11

Affected Resource(s)

  • data.http (2.1.0)

Terraform Configuration Files

data "http" "test" {
  url = "https://checkpoint-api.hashicorp.com/v1/check/terraform"
}

Debug Output

  # module.test.data.http.test will be read during apply                            
  # (config refers to values not yet known)         
 <= data "http" "test"  {                                                                                
      + body             = (known after apply)                                                           
      + id               = (known after apply)                                                           
      + response_headers = (known after apply)
      + url              = "https://checkpoint-api.hashicorp.com/v1/check/terraform"                     
    }   

Expected Behavior

I might be wrong here, but I expected the body of the response to be populated during Terraform's "refresh" phase, as per docs:

If the query constraint arguments for a data resource refer only to constant values or values that are already known, the data resource will be read and its state updated during Terraform's "refresh" phase, which runs prior to creating a plan. This ensures that the retrieved data is available for use during planning and so Terraform's plan will show the actual values obtained.

Background

I have stumbled upon this issue while trying to fetch a Kubernetes manifest from an external source. The manifest is a single YAML file that consists of multiple documents, so I'm splitting them up by the document marker (---) and then feeding them into kubernetes_manifest using for_each. See the module below for reference:

data "http" "yaml_manifest" {
  url = var.url
}

locals {
  parsed_manifests = [
    for manifest in split("\n---\n", data.http.yaml_manifest.body) :
    yamldecode(manifest)
  ]
  named_manifests = {
    for manifest in local.parsed_manifests :
    join(":", [
      manifest.apiVersion,
      manifest.kind,
      manifest.metadata.name,
    ]) => manifest
  }
}


resource "kubernetes_manifest" "manifest" {
  for_each = local.named_manifests
  manifest = each.value
}

This results in the following error:

│ Error: Invalid for_each argument
│ 
│   on ../modules/utils/k8s_remote_manifest/main.tf line 22, in resource "kubernetes_manifest" "manifest":
│   22:   for_each = local.named_manifests
│     ├────────────────
│     │ local.named_manifests will be known only after apply
│ 
│ The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to
│ first apply only the resources that the for_each depends on.

So basically the request is not made during the "refresh" phase even though the URL for the datasource is known before apply. Is this an actual bug or am I simply misinterpreting the documentation?

Documentation on http Data Source

This issue was originally opened by @oonisim as hashicorp/terraform#24159. It was migrated here as a result of the provider split. The original body of the issue is below.


Terraform Version

0.12

...

Terraform Configuration Files

As in https://www.terraform.io/docs/providers/http/data_source.html

data "http" "example" {
  url = "https://checkpoint-api.hashicorp.com/v1/check/terraform"

  # Optional request headers
  request_headers {
    "Accept" = "application/json"
  }
}

Debug Output

Crash Output

Expected

data "http" "example" {
  url = "https://checkpoint-api.hashicorp.com/v1/check/terraform"

  # Optional request headers
  request_headers = {    # <----- Map, instead of block?
    "Accept" = "application/json"
  }
}

Actual Behavior

Error: Unsupported block type

  on cf_test.tf line 3, in data "http" "s3_web_index_html":
   3:   request_headers {

Blocks of type "request_headers" are not expected here. Did you mean to define
argument "request_headers"? If so, use the equals sign to assign it a value.

Steps to Reproduce

  1. terraform init
  2. terraform apply

Additional Context

References

Utility Providers Upgrade

Terraform CLI and Provider Versions

Terraform v1.1.9
Provider v2.1.0

Use Cases or Problem Statement

As part of the "uplift" of the Terraform Utility Providers a number of tasks (see below) are being undertaken.

Proposal

  • GitHub action to test all minor Terraform versions >= 0.12
  • Acceptance tests to use TestCheckFunc (see docs and example)
  • Adoption of tflog (see docs)
  • Removal of deprecated fields, types and functions
  • Update Makefile
  • Switch linting to golangci-lint
  • Use terraform-plugin-docs
  • Add DESIGN.md
  • Update README.md
  • Update CONTRIBUTING.md
  • Update SUPPORT.md
  • Go 1.17 upgrade

How much impact is this issue causing?

Low

Additional Information

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

Migrate acceptance testing to terraform-plugin-testing

Terraform CLI and Provider Versions

N/A

Use Cases or Problem Statement

A new terraform-plugin-testing Go module has been released. New testing functionality will only land in terraform-plugin-testing and this should allow most providers to stop depending directly on terraform-plugin-sdk.

Proposal

Follow the migration guide: https://developer.hashicorp.com/terraform/plugin/testing/migrating

How much impact is this issue causing?

Low

Additional Information

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

Add an option to not parse the body

It would be helpful to have a way to ignore the response body when it is binary and you only care about the headers. In addition, an option to change from GET to HEAD would be handy.

Affected Resource(s)

  • data.http

Terraform Configuration Files

data "http" "artifact" {
  url           = "https://example.com/artifact"
  response_body = false
  http_method   = "HEAD"
}

locals {
  hash = lookup(data.http.artifact.response_headers, "x-checksum-sha1", "")
  body = data.http.artifact.body # null
}

State migration for old attribute "body" is missing.

Terraform CLI and Provider Versions

terraform 1.2.5
provider 3.0.0

Terraform Configuration

irrelevant

Expected Behavior

Proper migration of old state.

Actual Behavior

… could not be decoded from the state: unsupported attribute "body". :(

Steps to Reproduce

Create state with old module, update to new module and try to apply again.

How much impact is this issue causing?

High

Logs

No response

Additional Information

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature Request: Handle remote archive types

Given a URL that serves application/zip content (or some other archive content), I would like to be able to reference it in other resources that require it (see example below). I poked around the http and archive providers and noticed that the http provider came the closest to this, but it doesn't handle application/zip content based on the docs:

At present this resource can only retrieve data from URLs that respond with text/* or application/json content types, and expects the result to be UTF-8 encoded regardless of the returned content type header.

It would be nice to be able to reference remote archive types and use them for resources that require archive types.

See example configuration below.

Terraform Version

Terraform v0.11.7 locally, but using Terraform Enterprise as well.

Affected Resource(s)

  • data.http

Terraform Configuration Files

Example:

data "http" "foobar-zip" {
     url = "http://location.to.zip/foobar.zip"
}

resource "aws_lambda_function" "lambda-foobar" {
  function_name    = "lambda-foobar"
  handler          = "com.foobar::FooBar"
  role             = "lambda-role"
  runtime          = "java8"
  filename         = "${file(data.http.foobar-zip)}"
  source_code_hash = "${base64sha256(file(data.http.foobar-zip))}"

  depends_on = [
    "data.http.foobar-zip",
  ]
}

Debug Output

Content-Type is not a text type. Got: application/zip

Expected Behavior

Fetches bytes of the zip and references them as the file for the lambda.

Actual Behavior

It fails the plan with the Debug Output listed above

Relevant Issues

Feature request: Option to ignore failures for data source "http"

Current Terraform Version

$ terraform version
cliv: Executing /home/willis2/.cliv/terraform=1.1.2/bin/terraform
Terraform v1.1.2
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v3.70.0
+ provider registry.terraform.io/hashicorp/http v2.1.0
+ provider registry.terraform.io/hashicorp/template v2.2.0
+ provider registry.terraform.io/hashicorp/tls v3.1.0

Use-cases

We use the http data source to query a list of current IP CIDRs from vendors such as CloudFlare, so we can add those ranges to our security groups. However, occasionally one of those http sources may return a 500 error. We don't need to pull this value on every single terraform apply, because the old values that were already applied will continue to work until the service returns.

Attempted Solutions

# CloudFlare China IPs
data "http" "cloudflare_china_ips" {
  url = "https://api.cloudflare.com/client/v4/ips?china_colo=1"
  request_headers = {
    Accept = "application/json"
  }
}

This is an example of our code. I have not found a workaround for this issue because Terraform simply dies with "500 Error".

An alternative to this would be to use a null resource with a "local-exec" provisioner to call curl or wget, but that would not necessarily be portable (Dockerized versions of Terraform might not include curl, and behavior would have to change on Windows). Since there is already a data source for doing an HTTP GET, we might as well extend its functionality to support ignoring these failures.

(On earlier versions of Terraform, it actually gives no indication that this module was causing the error (even with TF_DEBUG=TRACE); I had to upgrade Terraform to figure out it was coming from this module block.)

Proposal

Support an input to the module such as ignore_errors = true.

Alternate proposal: support specifying the range of return statuses that signify success (http_status_success_range = [ 200, 599 ]).

It also would be great to support passing this as an environment variable so when the error occurs we can temporarily enable/disable this behavior and continue to apply our infrastructure, rather than having to modify the code to ignore errors, which is not necessarily what we want.
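
A hypothetical configuration with the proposed input might look like the following; note that ignore_errors is the proposed attribute, not something the provider supports today:

```hcl
data "http" "cloudflare_china_ips" {
  url = "https://api.cloudflare.com/client/v4/ips?china_colo=1"

  request_headers = {
    Accept = "application/json"
  }

  # Hypothetical: proposed input, does not exist in the provider yet.
  ignore_errors = true
}
```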

References

Enhancement: Allow http headers to be marked as sensitive

Hi there,

Thank you for opening an issue. Please note that we try to keep the Terraform issue tracker reserved for bug reports and feature requests. For general usage questions, please see: https://www.terraform.io/community.html.

Terraform Version

0.12.x

Affected Resource(s)

Please list the resources as a list, for example:
http

Enhancement detail

Headers may contain sensitive information such as persistent auth tokens. It would be useful to be able to mark a header as sensitive so that it is not echoed to any plan/apply output, and is treated the same way other providers treat sensitive values in state files.

v3.2.0 No Longer Supports Proxy Settings

Terraform CLI and Provider Versions

Terraform v1.1.6
on darwin_amd64
+ provider registry.terraform.io/hashicorp/archive v2.2.0
+ provider registry.terraform.io/hashicorp/aws v3.74.3
+ provider registry.terraform.io/hashicorp/http v3.2.0
+ provider registry.terraform.io/hashicorp/null v3.2.0

Terraform Configuration

data "http" "foobar-access-token" {
  url = "https://foo.bar/v2/token?name=${var.token_name}"

  request_headers = {
    X-FOO-BAR-TOKEN: var.foo_session_token
  }
}

Expected Behavior

The HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables should be respected, and the http request should be made using the proxy values defined in them.

Actual Behavior

All proxy values are ignored, and ultimately the request fails with a timeout because direct access to public IPs from our environment is completely blocked.

Steps to Reproduce

  1. terraform apply

How much impact is this issue causing?

High

Logs

No response

Additional Information

PR #125 causes a breaking change for anyone using the provider in an environment that configures HTTP_PROXY, HTTPS_PROXY, and/or NO_PROXY environment variables.

The default behavior of http.Client does not use an empty http.Transport, but instead, has several properties configured in the DefaultTransport. There is also no other way to configure the proxy setting through the provider, since it is not exposed as an input. This renders v3.2.0 completely unusable for anyone that requires use of an HTTP proxy.

This line in the PR creates a new http.Transport{} and only assigns a new tls.Config to it, but doesn't configure any of the other properties that would previously have been configured (Proxy = ProxyFromEnvironment being the primary issue we experienced).

I assume the intention was to maintain compatibility for those of us on previous versions, since it was not released as a Major version. I believe this could have been accomplished by assigning tr := &http.DefaultTransport and then further customizing that instance with the TLSClientConfig, instead of starting "from scratch" with an empty http.Transport.

Code of Conduct

  • I agree to follow this project's Code of Conduct

Feature request: Add timeout and retry option

Hi there,

Sometimes, HTTP servers take a while to respond or are not yet ready to accept requests. It would be nice to add some kind of retry logic and a timeout option to handle those use cases.

Please see hashicorp/terraform-provider-aws#11426 and especially hashicorp/terraform-provider-aws#11426 (comment).

Perhaps hashicorp/go-retryablehttp can help us to achieve this.

I can open a PR for this, but I want to know whether this is something we can add to this provider and whether it is the right direction.

Terraform HTTP not working with content-type xml

This issue was originally opened by @awasilyev as hashicorp/terraform#22913. It was migrated here as a result of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.12.2
+ provider.auth0 v0.2.0
+ provider.http v1.1.1
+ provider.null v2.1.2

Terraform Configuration Files

data "http" "saml_metadata_jenkins" {
  url = "https://${var.domain_name}.auth0.com/samlp/metadata/${auth0_client.jenkins[0].id}"
  request_headers = {
    Accept = "application/xml"
  }
}

Expected Behavior

I was hoping to use the XML from the Auth0 endpoint in my jenkins configuration recipe

A curl -v against the endpoint shows the correct content-type

< HTTP/1.1 200 OK
< Server: nginx
< Date: Thu, 26 Sep 2019 10:42:41 GMT
< Content-Type: application/xml; charset=utf-8

Actual Behavior

Terraform doesn't seem to understand the data it gets from the endpoint.

Error: Content-Type is not a text type. Got: application/xml; charset=utf-8

  on modules/auth_ext/main.tf line 147, in data "http" "saml_metadata_jenkins":
 147: data "http" "saml_metadata_jenkins" {

References

Something similar to hashicorp/terraform#17027

Allow XML Content-Type

Hello,
I believe the issue is that XML content type is not allowed in the return.

Terraform Version

Terraform v0.14.9

Affected Resource(s)

  • data.http

Terraform Configuration Files

data "http" "what_xml" {
  url = "https://whatever.com/file.xml"
  # Optional request headers
  request_headers = {
    Accept = "application/xml"
  }
}

Debug Output

Warning: Content-Type is not recognized as a text type, got "application/xml;charset=UTF-8"

  on keycloak.tf line 24, in data "http" "what_xml":
  24: data "http" "what_xml" {

If the content is binary data, Terraform may not properly handle the contents
of the response.

Panic Output

If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the crash.log.

Expected Behavior

The HTTP request should be saved into the body.

Actual Behavior

An error occurred.

References

Enable go-changelog Automation

The "standard library" Terraform Providers should implement nascent provider development tooling to encourage consistency, foster automation where possible, and discover bugs/enhancements to that tooling. To that end, this provider's CHANGELOG handling should be switched to go-changelog, including:

  • Adding .changelog directory and template files
  • Enabling automation for regenerating the CHANGELOG (e.g. scripts or GitHub Actions)
  • (If enhancements are made available upstream in time) Enabling automation for checking CHANGELOG entries and formatting

Releases missing from Repo

Hi, the releases are missing from this repo, meaning the latest version cannot be queried via the GitHub API.

We use this functionality to dynamically ensure we are running the latest version of terraform. Could this be added please?

Allow PUT requests

Terraform CLI and Provider Versions

Terraform v1.2.8
on darwin_arm64

Your version of Terraform is out of date! The latest version
is 1.3.2. You can update by downloading from https://www.terraform.io/downloads.html

Use Cases or Problem Statement

Can't send PUT requests.

Proposal

Maybe don't validate the method at all or add PUT

How much impact is this issue causing?

Low

Additional Information

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

Migrate Documentation to terraform-plugin-docs

The "standard library" Terraform Providers should implement nascent provider development tooling to encourage consistency, foster automation where possible, and discover bugs/enhancements to that tooling. To that end, this provider's documentation should be switched to terraform-plugin-docs, including:

  • Migrating directory structures and files as necessary
  • Enabling automation for documentation generation (e.g. make gen or go generate ./...)
  • Enabling automated checking that documentation has been re-generated during pull request testing (e.g. no differences)

Do not follow HTTP redirects

Terraform CLI and Provider Versions

terraform version
Terraform v1.2.6
on darwin_amd64

Use Cases or Problem Statement

The current version of this provider does not expose an option to not implicitly follow HTTP redirects.

It also does not document the behaviour of HTTP redirection, as highlighted in #60.

In my use-case, I would like to use this module to call an HTTP server with some specific headers, such as Authorization. This server will then respond with an S3 URL which I would like to pass to another module that I do not want to propagate the origin server-specific request headers to.

Proposal

I propose that the provider add a no_follow_redirects option to explicitly disable HTTP redirection and instead return the response of the first HTTP request made. I suggest no_follow_redirects rather than follow_redirects to make it obvious that the default behaviour is to follow redirects, and that setting the option to true disables this behaviour.

I also propose adding an output named location containing the absolute URL of the request that produced the final response returned by the provider. If the server responds with a Location header, the location attribute is that header value resolved to an absolute URL against the URL of the final request.

Example 1

url is http://example.org which returns HTTP 200 OK

status_code is 200
location is http://example.org

Example 2

url is http://example.org which returns HTTP 302 Found and Location of /redirected
no_follow_redirects is false

<-- provider makes another request -->

status_code is 200
location is http://example.org/redirected

Example 3

url is http://example.org which returns HTTP 302 Found and Location of /redirected
no_follow_redirects is true

status_code is 302
location is http://example.org/redirected

How much impact is this issue causing?

Medium

Additional Information

I have written an implementation of this proposal at https://github.com/relvacode/terraform-provider-http/tree/feature/explicit-follow-redirects

Code of Conduct

  • I agree to follow this project's Code of Conduct

Maintain TF >= 0.12 Compatibility with v3.0.0

Terraform CLI and Provider Versions

TF >= 0.12
Provider 3.0.0

Use Cases or Problem Statement

  • Preserve compatibility with TF >= 0.12 for longer
  • Rely on TF core decision for when to discontinue support for >= 0.12

Proposal

  • Switch provider to use Protocol 5 server
  • update CHANGELOG
  • remove mention of protocol 6 only compatibility
  • add compatibility matrix in README
  • update test providers (use the protocol 5 fields)
  • update milestone and version to be v3.0.0

How much impact is this issue causing?

Low

Additional Information

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

TLS handshake timeout with HTTP datasource

Hi there, the issue here is with the http provider: the moment it tries to fetch data from the given HTTPS URL, it ends up in a TLS handshake timeout.

Terraform Version

Terraform v0.13.5
+ provider registry.terraform.io/hashicorp/azurerm v2.38.0
+ provider registry.terraform.io/hashicorp/http v2.0.0
+ provider registry.terraform.io/hashicorp/null v3.0.0
+ provider registry.terraform.io/hashicorp/random v3.0.0

Affected Resource(s)

Terraform apply/destroy not happening

Terraform Configuration Files

https://github.com/cpu601/terraform-azurerm-hcs/blob/master/main.tf#L10-L12

data "http" "cloud_hcs_meta" {
  url = "https://raw.githubusercontent.com/hashicorp/cloud-hcs-meta/master/ama-plans/defaults.json"
}

Expected Behavior

The Terraform command should go through, but since the http data source is failing, the whole execution fails.

Actual Behavior

Terraform command executions are failing because the http data source is timing out while reaching the URL.

Error: Error making request: Get "https://raw.githubusercontent.com/hashicorp/cloud-hcs-meta/master/ama-plans/defaults.json": net/http: TLS handshake timeout

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform destroy -auto-approve

Important Factoids

Running HCS on Azure

Switch to GitHub Actions and goreleaser Release Process

The "standard library" Terraform Providers should implement nascent provider development tooling to encourage consistency, foster automation where possible, and discover bugs/enhancements to that tooling. To that end, this provider's release process should be switched to goreleaser to match the documented Terraform Registry publishing recommendations. This includes:

  • Creating necessary .goreleaser.yml and .github/workflows/release.yml configurations for tag-based releases (see also: TF-279 RFC)
  • Ensuring necessary GitHub or Vault tokens are in place to fetch release secrets
  • Ensuring provider and internal release process documentation is updated

[Documentation] Unclear what this provider is actually offering from README.md

Hi there,

Thank you for opening an issue. Please note that we try to keep the Terraform issue tracker reserved for bug reports and feature requests. For general usage questions, please see: https://www.terraform.io/community.html.

Terraform Version

Run terraform -v to show the version. If you are not running the latest version of Terraform, please upgrade because your issue may have already been fixed.

Affected Resource(s)

Please list the resources as a list, for example:

  • opc_instance
  • opc_storage_volume

This is about documentation. What does this http provider do and offer? It is not clear from looking at the repo.

Terraform Configuration Files

# Copy-paste your Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key.

Debug Output

Please provide a link to a GitHub Gist containing the complete debug output: https://www.terraform.io/docs/internals/debugging.html. Please do NOT paste the debug output in the issue; just paste a link to the Gist.

Panic Output

If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the crash.log.

Expected Behavior

What should have happened?

Actual Behavior

What actually happened?

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply

Important Factoids

Is there anything atypical about your accounts that we should know? For example: Running in EC2 Classic? Custom version of OpenStack? Tight ACLs?

References

Are there any other GitHub issues (open or closed) or Pull Requests that should be linked here? For example:

  • GH-1234

HTTP errors not displayed

Terraform Version

➜ terraform -v
Terraform v0.12.12
+ provider.azurerm v1.31.0
+ provider.http v1.1.1

Affected Resource(s)

provider.http

Terraform Configuration Files

data "http" "authorized_keys" {
  url = "http://example.com/authorized_keys"
}

Debug Output

Unnecessary, as the issue shows up in the normal output

Expected Behavior

When we have an HTTP error (for example, DNS for the site is not resolved), we should know what the error was.

Actual Behavior

Error displays as Error: Error during making a request: http://example.com/authorized_keys

Steps to Reproduce

  1. Create an http data source with a URL to a site that isn't accessible (for example, one that only resolves from an internal DNS you are not connected to)
  2. Verify that the error message doesn't tell you anything

Important Factoids

The issue appears to be in https://github.com/terraform-providers/terraform-provider-http/blob/master/http/data_source.go#L65, which should be printing "err" instead of "url" (or maybe both?)

Detailed messaging on http error responses

We are using the http provider in several places in our code. When one of our http resources returns an error response like 401 or 404 it can be hard to determine which one failed since the error message doesn't include details about the request.

Terraform Version

Terraform v0.14.3

  • provider registry.terraform.io/hashicorp/http v2.1.0

Affected Resource(s)

http_http datasource

Terraform Configuration Files

# One of these calls fails with a 404... but which one?
data "http" "foobar" {
  url = "https://google.com/foobar"
}

data "http" "helloworld" {
  url = "https://google.com/helloworld"
}

Expected Behavior

It would be nice if the error message indicated which request failed. For example:

Error: HTTP request error from [https://google.com/thispagedoesntexist]. Response code: 404

Actual Behavior

The displayed error is:

Error: HTTP request error. Response code: 404

Regression in http 2.0.0 data source behavior from 0.13.5 to 0.14.0 when fetching url with content type application/x-x509-ca-cert

This issue was originally opened by @tolga-luminary-cloud as hashicorp/terraform#27382. It was migrated here as a result of the provider split. The original body of the issue is below.


Terraform configuration using http data source 2.0.0 behaves differently in 0.13.5 and 0.14.0. Version 0.13.5 correctly outputs the body of response whose content type is "application/x-x509-ca-cert". Version 0.14.0 does not output anything.

Gist here: https://gist.github.com/tolga-luminary-cloud/8c97a5f56ffc2c28f9273624c2c66013

Terraform Version

Any Terraform version 0.14.0 or later regresses; any version 0.13.5 or earlier works fine.

Terraform v0.14.0
+ provider registry.terraform.io/hashicorp/http v2.0.0

Your version of Terraform is out of date! The latest version
is 0.14.3. You can update by downloading from https://www.terraform.io/downloads.html

Terraform Configuration Files

data "http" "auth0_cert_pem" {
  url = "https://exampleco-enterprises.us.auth0.com/pem"
}

output "pem" {
  value = data.http.auth0_cert_pem.body
}

Debug Output

https://gist.github.com/tolga-luminary-cloud/1a0a6a2d39bb1e616726dbcb1651722d

Expected Behavior

Version 0.13.5 results in the following output, which is the expected behavior

Outputs:

pem = -----BEGIN CERTIFICATE-----
MIIDGTCCAgGgAwIBAgIJNzfupXD/AyUaMA0GCSqGSIb3DQEBCwUAMCoxKDAmBgNV
BAMTH2V4YW1wbGVjby1lbnRlcnByaXNlcy5hdXRoMC5jb20wHhcNMTcwOTE2MTcy
NzA3WhcNMzEwNTI2MTcyNzA3WjAqMSgwJgYDVQQDEx9leGFtcGxlY28tZW50ZXJw
cmlzZXMuYXV0aDAuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA
zI/nsoa5PJ/gG57P/aNCJqEvWKXA1GXg9oJ1kCCgQzNC7HlCEmW0WMoi46gla0io
y9VmBi0w2H5OFaiOAoeB8p9Vq3HrEXkxXK3LthWW8E4wQIOCTvQY/Rfj2vgTQDd3
vG9zCwdno+fkMesALdl/tpic5NP/uz/vh/Qjj7yLFAgSLLINWNJPR+MZ+2KS8UlF
OHZ74UTf1IO2xbl/P2xAMH9w5Fb+UPIAFevryzRagS23zKCt6l+8F8hREH8QWqlQ
pCWlzrZ7Qa4GmWkCcL36o6VoV4+ppSqFrQ1Z2MQEJVncbNSs6ypQy0G9QMhM0vBu
cUUEKovFXsi8jLQVjVwTYwIDAQABo0IwQDAPBgNVHRMBAf8EBTADAQH/MB0GA1Ud
DgQWBBR9eXLA97SMUCSBwpmOKkXOZCGZAjAOBgNVHQ8BAf8EBAMCAoQwDQYJKoZI
hvcNAQELBQADggEBALZbQRlOb6J8c3SZv/NGbyI2rCzFTAxTrYF86B8jzJOX8zMd
Fsuru6IPG1h/Y4NLPANnKLhVJAzvUzVr0Xu5HwLFGaLoA+9PRiYxKOshg1QXQ7ql
udItPjL3sfB9CPkqupwEfABFHkyp1pcQeHSXi0BsowHOd9NA6OVjdwH0pF1SiTWc
vaqvtGMEXL1ksiap5QBDjKUO+OMjDdLslatQ107PaRGVYWqWieo/8sxvE+vjBNch
BD6krGd14D4LDHDaGmh9ie41gUDRI+pbtY31P37lskfG9zEDnwSBl6yccnW1nyPZ
2djrWl3OiVTUfa5wWveLyF45kO14DPyxpL10wQY=
-----END CERTIFICATE-----

Actual Behavior

No output was observed

Steps to Reproduce

  1. install terraform version >= 0.14.0
  2. terraform init
  3. terraform apply

Additional Context

The Terraform configuration works in version 0.13.5. I believe this behavior is a regression in Terraform core and not in the http provider plugin, because the same plugin version used with different Terraform versions leads to different behavior.

References

Hard coded response code of 200

I'm not really having an issue per se with this provider, but I would like to ask why the response code check in this provider is hard-coded to 200.

if resp.StatusCode != 200

Why not the full 2xx range? At a minimum:

200 | OK | [RFC7231, Section 6.3.1]
201 | Created | [RFC7231, Section 6.3.2]
202 | Accepted | [RFC7231, Section 6.3.3]
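A range check accepting the whole 2xx success class could look like this (a sketch of the suggestion, not the provider's actual code):

```go
package main

import "fmt"

// is2xx treats any status in the 2xx success class as acceptable,
// instead of comparing against 200 alone.
func is2xx(status int) bool {
	return status >= 200 && status < 300
}

func main() {
	for _, code := range []int{200, 201, 202, 404} {
		fmt.Println(code, is2xx(code))
	}
}
```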

Thanks

Bump Development/Build Minimum Go Version to 1.17

Terraform CLI and Provider Versions

N/A (main branch development)

Use Cases or Problem Statement

Following the Go support policy and given the ecosystem availability and stability of the latest Go minor version, it's time to upgrade. This will ensure that this project can use recent improvements to the Go runtime and standard library functionality, and continue to receive security updates.

Proposal

  • Run the following commands to upgrade the Go module files and remove deprecated syntax such as //+build:
go mod edit -go=1.17
go mod tidy
go fix
  • Ensure any GitHub Actions workflows (.github/workflows/*.yml) use 1.18 in place of any 1.17 and 1.17 in place of any 1.16 or earlier
  • Ensure the README or any Contributing documentation notes the Go 1.17 expected minimum
  • (Not applicable to all projects) Ensure the .go-version is at least 1.17 or later
  • Enable the tenv linter in .golangci.yml and remediate any issues.

How much impact is this issue causing?

Medium

Additional Information

Code of Conduct

  • I agree to follow this project's Code of Conduct

Add timeout and retry to framework version of provider

Terraform CLI and Provider Versions

v1.2.2

Use Cases or Problem Statement

The http provider is being migrated to the Terraform Plugin Framework.

There are outstanding pull requests and issues that have, respectively, implemented or requested timeouts and retries for HTTP requests. Specifically:

Proposal

This PR will incorporate the pull requests and issues that have implemented or requested timeouts and retries for HTTP requests.

Closes: #87
Closes: #71
Closes: #49

How much impact is this issue causing?

Low

Additional Information

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

application/octet-stream for yaml

Terraform Version

Terraform v0.14.8
provider registry.terraform.io/hashicorp/http v2.1.0

Affected Resource(s)

  • data http

Terraform Configuration Files

data "http" "cert_manager" {
  url = "https://github.com/jetstack/cert-manager/releases/download/v1.2.0/cert-manager.yaml"
}

output "content" {
  value = data.http.cert_manager.body
}

Debug Output

Warning: Content-Type is not recognized as a text type, got "application/octet-stream"

  on cert_manager.tf line 34, in data "http" "cert_manager":
  34: data "http" "cert_manager" {

If the content is binary data, Terraform may not properly handle the contents
of the response.

Expected Behavior

Should download yaml file

Actual Behavior

Shows a warning and does nothing

Steps to Reproduce

  1. terraform apply

Description

Please remove the Content-Type restriction for streams. Modern repositories such as jetstack/cert-manager (and many others) don't keep released YAML inside the repository, so it can't be accessed through raw.githubusercontent.com with the right text headers.

Create provider design doc

Maintenance of the standard library providers prioritises stability and correctness relative to the provider's intended feature set. Create a design document describing this feature set, and any other design considerations which influence the architecture of the provider and what can and cannot be added to it.

To include:

Document the design considerations and decisions around including other HTTP verbs in this provider.

Some context: #20 (comment)

#85

Limit output on cli

Terraform CLI and Provider Versions

Terraform version 1.0.11

Use Cases or Problem Statement

The output of the plugin, depending on what you fetch, can be huge, which makes every apply very tedious.

Proposal

Could we either have a flag that limits this output to a few lines, or even better, show only the changes? Otherwise every apply needs to be scrolled through, which makes it very tedious.

How much impact is this issue causing?

High

Additional Information

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

Add ability to store/cache result in state to avoid "will be read during apply"

Terraform CLI and Provider Versions

Terraform v1.3.6
on linux_amd64

Use Cases or Problem Statement

The data source is always doing an HTTP request even if I know the source will not change unless I change the URL.

For example, I get a remote JSON file with some configuration, but this file is static and will not change. In the following example I'm using a file from a git repository, but this can also be some other static file that will not change. For example some static api like https://grafana.com/api/dashboards/9614/revisions/1/download.

locals {
  nginx_ingress_version = "4.4.0"
}

# Fetching static JSON where the response_body will only change when I change the URL
data "http" "grafana_nginx_ingress_controller" {
  request_headers = {
    Accept = "application/json"
  }
  url = "https://raw.githubusercontent.com/kubernetes/ingress-nginx/helm-chart-${local.nginx_ingress_version}/deploy/grafana/dashboards/nginx.json"
  lifecycle {
    postcondition {
      condition     = contains([200], self.status_code)
      error_message = "Error fetching Grafana Dashboard JSON file. Got HTTP Status code ${self.status_code}: ${self.response_body}"
    }
  }
}

The result from the above example is always:

Terraform will perform the following actions:                                      
                                                                                   
  # data.http.grafana_nginx_ingress_controller will be read during apply                                                                         
  # (depends on a resource or a module with changes pending)                       
 <= data "http" "grafana_nginx_ingress_controller" {                               
      + body             = (known after apply)                                     
      + id               = (known after apply)                                     
      + request_headers  = {                                                       
          + "Accept" = "application/json"                                          
        }                                                                          
      + response_body    = (known after apply)                                     
      + response_headers = (known after apply)                                     
      + status_code      = (known after apply)                                     
      + url              = "https://raw.githubusercontent.com/kubernetes/ingress-nginx/helm-chart-4.4.0/deploy/grafana/dashboards/nginx.json"                         
    }

Even if the source data does not change and I've already applied that change, the request is repeated because it is part of the "apply" phase. According to this comment (hashicorp/terraform#25805 (comment)), the data source should cache the result if the input does not change.

I understand why that is not happening here, but for some use cases it might be useful. (for example when using the above use case with a static API)

The reason this poses an issue is that when you're reviewing the plan, it gets filled with these kinds of messages. That makes it harder to validate the plan and to see whether anything happens that should not. It also affects the resources that use this data result, as noted in the following issue: #101

Proposal

It would be good if there is some method to enable the caching of the result in the state unless the input configuration changes. By either some kind of a setting or by default (but that would mean a breaking change).

How much impact is this issue causing?

Medium

Additional Information

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

secrets leaking on error

Hi!
The http provider is leaking secrets on error; the relevant source is the if err != nil { error branch.
It would be ideal to have this dump controlled via the log level.

Terraform Version

  • 1.0.2
  • 1.0.3

Affected Resource(s)

  • http

Terraform Configuration Files

variable "super_sensitive_value" {
  type      = string
  sensitive = true
  default   = "apparently_not_sensitive_enough"
}

data "http" "use_vault_result_in_a_call" {
  url = var.super_sensitive_value
}

Debug Output

│ Error: Error making request: Get "apparently_not_sensitive_enough": unsupported protocol scheme ""
│
│   with data.http.use_vault_result_in_a_call,
│   on main.tf line 18, in data "http" "use_vault_result_in_a_call":
│   18: data "http" "use_vault_result_in_a_call" {

Expected Behavior

The variable super_sensitive_value is expected to be hidden from logs as it is marked as sensitive.

Actual Behavior

The variable super_sensitive_value is leaked to the log.

Steps to Reproduce

  1. terraform apply

References

Enable http data source to accept binary data

Terraform CLI and Provider Versions

 $ terraform version
Terraform v1.1.8
on darwin_amd64
provider "registry.terraform.io/hashicorp/http" {
  version = "2.2.0"
}

Use Cases or Problem Statement

HTTP data sources can be used to download externally provided tools and binary data.
For example, to manage a Kubernetes cluster, you can obtain the same version of the kubectl command as the cluster as follows

data "http" "kubectl" {
  url = "https://storage.googleapis.com/kubernetes-release/release/v1.24.0/bin/darwin/amd64/kubectl"
}

However, the current HTTP data source expects response data to be text, and dumping binary response data, for example as shown below, produces corrupted output.

resource "local_sensitive_file" "kubectl" {
  filename        = "${path.module}/kubectl"
  content_base64  = base64encode(data.http.kubectl.response_body)
  file_permission = "0755"
}

Proposal

Add the base64_response_body read-only attribute, which is mainly intended for binary data.
When reading values through this attribute, the provider is not aware of the content, but treats it only as an octet-stream.
The result is encoded in base64 format for safe handling as a string in Terraform.

Example

data "http" "kubectl" {
  url = "https://storage.googleapis.com/kubernetes-release/release/v1.24.0/bin/darwin/amd64/kubectl"
}

resource "local_sensitive_file" "kubectl" {
  filename        = "${path.module}/kubectl"
  content_base64  = data.http.kubectl.base64_response_body
  file_permission = "0755"
}
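Internally, such an attribute could be populated by reading the raw response bytes and base64-encoding them, roughly like this (a sketch under the proposal's assumptions; `encodeBody` is a hypothetical helper, not provider code):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// encodeBody base64-encodes raw response bytes so binary content
// survives as a plain Terraform string attribute.
func encodeBody(raw []byte) string {
	return base64.StdEncoding.EncodeToString(raw)
}

func main() {
	// A NUL byte like this would corrupt a plain string attribute;
	// base64 round-trips it safely (these are the ELF magic bytes).
	fmt.Println(encodeBody([]byte{0x7f, 'E', 'L', 'F', 0x00}))
}
```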

How much impact is this issue causing?

Medium

Additional Information

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

remote saml_metadata_document

This issue was originally opened by @tomdavidson as hashicorp/terraform#5848. It was migrated here as a result of the provider split. The original body of the issue is below.


I would like to reference a remote document over https in creating a aws_iam_saml_provider rather than a local file, something similar to modules' source = "github.com/.... such as:

resource "aws_iam_saml_provider" "default" {
    name = "myprovider"
    saml_metadata_document = "${file("https://domain.local/idp/shibboleth")}"
}

The remote file seems especially relevant in this case - am I overlooking existing functionality?

Expand documentation to include HEAD and POST requests

Terraform CLI and Provider Versions

Terraform v1.2.5

Use Cases or Problem Statement

Request methods HEAD and POST can be optionally specified, along with a request body, following the changes made in "Allow optionally specifying HTTP request method and body". The documentation only shows how to make GET requests.

Proposal

Update the documentation to illustrate how to make HEAD and POST requests.

How much impact is this issue causing?

Low

Additional Information

References

Code of Conduct

  • I agree to follow this project's Code of Conduct

Getting warning for application/vnd.docker.distribution.manifest.v2+json content types

When using the http data source with Accept = "application/vnd.docker.distribution.manifest.v2+json" request header, we are receiving Content-Type warnings. Shouldn't "application/*+json" Content-Types be treated the same as application/json?

Terraform Version

Terraform v0.14.7

  • provider registry.terraform.io/hashicorp/http v2.1.0

Affected Resource(s)

http data source

Expected Behavior

No warnings should be shown

Actual Behavior

Warning: Content-Type is not recognized as a text type, got "application/vnd.docker.distribution.manifest.v2+json"

Steps to Reproduce

  1. terraform plan
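A lenient matcher that treats application/*+json the same as application/json could look like this (a sketch of the behavior the issue asks for, not the provider's actual check):

```go
package main

import (
	"fmt"
	"regexp"
)

// jsonType matches application/json as well as structured-syntax
// suffixes such as application/vnd.docker.distribution.manifest.v2+json.
var jsonType = regexp.MustCompile(`^application/([a-z0-9.+-]*\+)?json$`)

func main() {
	for _, ct := range []string{
		"application/json",
		"application/vnd.docker.distribution.manifest.v2+json",
		"application/octet-stream",
	} {
		fmt.Println(ct, jsonType.MatchString(ct))
	}
}
```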

Bump Expected Minimum Go Version to 1.18

Terraform CLI and Provider Versions

TF: v1.2.7
Provider: v3.0.1

Use Cases or Problem Statement

Following the Go support policy and given the ecosystem availability of the latest Go minor version, it's time to upgrade. This will ensure that this project can use recent improvements to the Go runtime, standard library functionality, and continue to receive security updates.

Proposal

  • Run the following commands to upgrade the Go module files and automatically fix outdated Go code:
go mod edit -go=1.18
go mod tidy
go fix
  • Ensure any GitHub Actions workflows (.github/workflows/*.yml) use 1.19 in place of any 1.18 and 1.18 in place of any 1.17 or earlier
  • Ensure the README or any Contributing documentation notes the Go 1.18 expected minimum
  • (Not applicable to all projects) Ensure the .go-version is at least 1.18 or later

How much impact is this issue causing?

Low

Additional Information

References

Code of Conduct

  • I agree to follow this project's Code of Conduct

http provider too strict with application/json content type

This issue was originally opened by @sebastien-prudhomme as hashicorp/terraform#15164. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

0.9.7

Affected Resource(s)

"http" datasource

Terraform Configuration Files

data "http" "openstack" {
  url = "http://169.254.169.254/openstack/latest/meta_data.json"
}

Actual Behavior

Not working:

Error refreshing state: 1 error(s) occurred:

* data.http.main: 1 error(s) occurred:

* data.http.main: data.http.main: Content-Type is not a text type. Got: application/json; charset=UTF-8

In source code builtin/providers/http/data_source.go:

func isContentTypeAllowed(contentType string) bool {
	allowedContentTypes := []*regexp.Regexp{
		regexp.MustCompile("^text/.+"),
		regexp.MustCompile("^application/json$"),
	}
	// ...
}

The regexp for "application/json" is too strict, as the charset can also be included by the web server

Allow the accepted types to be specified in the resource

Hi there,
I would like to download the SAML metadata document and use it in my SAML provider in AWS. Unfortunately the http provider does not allow you to do that.

My suggestion is to use the "Accept" request header value to determine whether or not the data is accepted.

Terraform Version

Terraform v0.11.7
+ provider.auth0 v0.1.11
+ provider.http v1.0.1

Affected Resource(s)

  • http

Terraform Configuration Files

data "http" "auth0-saml-metadata" {
  url = "https://${var.auth0_domain}/samlp/metadata/${auth0_client.oauth-cli.client_id}"
  request_headers {
    "Accept" = "application/xml"
  }
}

resource "aws_iam_saml_provider" "default" {
  name                   = "auth0-${replace(var.auth0_domain,".","-")}-provider"
  saml_metadata_document = "${data.http.auth0-saml-metadata.body}"
}

Expected Behavior

The SAML metadata document is downloaded and its body passed into the IAM SAML provider.

Actual Behavior

* data.http.auth0-saml-metadata: data.http.auth0-saml-metadata: Content-Type is not a text type. Got: application/xml; charset=utf-8

Steps to Reproduce

  1. terraform apply
data "http" "auth0-saml-metadata" {
  url = "http://httpbin.org/xml"
}

References

Base64 encode response body

Terraform CLI and Provider Versions

terraform v1.2.2
provider v2.2.0

Use Cases or Problem Statement

Currently, the http provider examines the Content-Type header to determine whether the response body is text.
If binary data is contained within the response body, its conversion to string() renders the response body unusable and potentially problematic for Terraform to handle.

Proposal

Add an attribute which holds a base64 encoded version of the response body.

How much impact is this issue causing?

Low

Additional Information

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

Choose whether IPv4 or IPv6 is used

Terraform CLI and Provider Versions

Terraform: 1.3.6
Provider: 3.2.1

Use Cases or Problem Statement

Sometimes only IPv4 or IPv6 traffic is supported by an endpoint, for example when using this provider to get the current user's IP address from a service like icanhazip.com.

Proposal

A field like protocol_version that can be either 4 or 6
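In Go's HTTP stack, such a field could be wired up by forcing the dialer onto the "tcp4" or "tcp6" network (a sketch under the proposal's assumptions; the protocol_version name comes from the proposal, not the provider):

```go
package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
)

// network maps the proposed protocol_version field to a Go network name.
func network(protocolVersion int) string {
	switch protocolVersion {
	case 4:
		return "tcp4"
	case 6:
		return "tcp6"
	default:
		return "tcp" // no preference: let the resolver decide
	}
}

func main() {
	dialer := &net.Dialer{}
	client := &http.Client{
		Transport: &http.Transport{
			// Force every connection onto the selected IP family.
			DialContext: func(ctx context.Context, _, addr string) (net.Conn, error) {
				return dialer.DialContext(ctx, network(4), addr)
			},
		},
	}
	fmt.Println(network(4), network(6), client != nil)
}
```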

How much impact is this issue causing?

Medium

Additional Information

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

Enhancement: post output as JSON to URL

This issue was originally opened by @MarcelT-NL as hashicorp/terraform#23389. It was migrated here as a result of the provider split. The original body of the issue is below.


Current Terraform Version

0.12.14

Use-cases

We need to run post-provisioning scripts in our CI/CD pipeline which takes the output of Terraform. It would be great if Terraform could post the output JSON to an https endpoint.

E.g. for output of generated:

  • Acme keys
  • IP addresses
  • etc

Attempted Solutions

None, we use the cloud solution at terraform.io. No post options there.

Proposal

Add a post_url to the output command, where the JSON output is sent to:

output "certificate_private_key" {
   ...
   post_url = "https://<url>"
   ...
}

All output should be combined in one JSON and sent in one go.

Feature Request: Corresponding resource type

Howyas

It would be useful to be able to fall back on the http module for cases where other providers don't provide all the necessary functionality. In that case, I'd like a http resource type that:

  • Supported various HTTP verbs
  • Supported specifying different verbs for Create, Read, Update and Delete (e.g. PUT, GET, POST, and DELETE; or POST, GET, POST, and DELETE).
  • Support not making the http call unless some of the parameters change; perhaps some of the existing lifecycle support could enable this.
  • Support specifying a regex pattern ( or list of regex pattern ) for acceptable response headers and HTTP status codes.

support following 302 redirects

I'm using data "http" "someyaml" {} to fetch official CRDs/YAMLs for deploying to Kubernetes. Some of these endpoints send a 302 redirect that the HTTP client does not follow. It would be nice to expose a follow-redirects argument to provide this control.
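In Go, such a toggle maps onto http.Client's CheckRedirect hook: returning http.ErrUseLastResponse surfaces the 302 itself instead of following it (a sketch of the requested behavior; newClient is a hypothetical helper, not provider code):

```go
package main

import (
	"fmt"
	"net/http"
)

// newClient returns an http.Client that either follows redirects
// (the Go default) or stops at the first response.
func newClient(followRedirects bool) *http.Client {
	c := &http.Client{}
	if !followRedirects {
		c.CheckRedirect = func(req *http.Request, via []*http.Request) error {
			// Return the 302 response itself rather than chasing it.
			return http.ErrUseLastResponse
		}
	}
	return c
}

func main() {
	fmt.Println(newClient(true).CheckRedirect == nil)  // default redirect policy
	fmt.Println(newClient(false).CheckRedirect != nil) // redirects suppressed
}
```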
