
terraform-provider-hydra

The Terraform Hydra provider is a plugin for Terraform that allows for declarative management of a Hydra instance. You can find it on the Terraform Registry.

Requirements

To use this provider, you will need the following:

NOTE: Commit 6e53767 is the absolute earliest commit you can use this provider with, but it has a known issue where some internal fields were not nullified (leading to state differences between Hydra and Terraform), so it is not recommended.

Getting started

To get started with this provider, you'll need to create a configuration file that will tell Terraform to use this provider. This will look something like the following snippet:

terraform {
  required_providers {
    hydra = {
      version = "~> 0.1"
      source  = "DeterminateSystems/hydra"
    }
  }
}

After that's done, you'll need to specify where your Hydra instance can be reached and provide credentials for this provider to be able to work its magic:

NOTE: Hard-coded credentials are not recommended, so while it is possible to use them (just uncomment the username and password items and fill them in with valid values), you are urged to use the HYDRA_USERNAME and HYDRA_PASSWORD environment variables.

provider "hydra" {
  host = "https://hydra.example.com"
  # username = "alice"
  # password = "foobar"
}
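With the username and password lines left commented out, the provider falls back to the environment variables mentioned above. A minimal sketch (the values below are placeholders):

```shell
# Supply credentials via the environment instead of hard-coding them
# (values below are placeholders):
export HYDRA_USERNAME="alice"
export HYDRA_PASSWORD="foobar"
# terraform plan   # the provider reads the credentials from the environment
```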

Now that you can connect to Hydra, it's time to create a project with the hydra_project resource:

resource "hydra_project" "nixpkgs" {
  name         = "nixpkgs"
  display_name = "Nixpkgs"
  description  = "Nix Packages collection"
  homepage     = "https://nixos.org/nixpkgs"
  owner        = "alice"
  enabled      = true
  visible      = true
}

You can attach a jobset to this project with the hydra_jobset resource:

NOTE: The check_interval is 0 for this example to prevent Hydra from starting an evaluation on the entirety of Nixpkgs. Change this to a non-zero value if you would like to tell Hydra it can start evaluating this jobset.

resource "hydra_jobset" "trunk-flake" {
  project     = hydra_project.nixpkgs.name
  state       = "enabled"
  visible     = true
  name        = "trunk-flake"
  type        = "flake"
  description = "master branch"

  flake_uri = "github:NixOS/nixpkgs/master"

  check_interval    = 0
  scheduling_shares = 3000
  keep_evaluations  = 3

  email_notifications = true
  email_override      = "[email protected]"
}

That's it for the basic usage of this provider!

You may also want to check out the example configurations inside the examples/ directory.

Importing from an existing Hydra instance

You can migrate from a hand-configured Hydra to Terraform-managed configuration files using our included generator, ./tools/generator.sh.

The generator enumerates the server's projects and jobsets, generating a .tf file for each project. The generator also produces a script of terraform import commands.

The workflow is:

  1. Execute generator.sh
  2. Commit the generated .tf files to your repository
  3. Execute the generated terraform import script
  4. Discard the terraform import script, as it should not be necessary anymore

Your Terraform configuration and state file will now have up-to-date data for all of your existing project and jobset resources, and terraform plan should report that no changes are needed.

$ cd tools
$ nix-shell
# Usage: generator.sh <server-root> <out-dir> <import-file>
#
#     Arguments:
#         <server-root>    The root of the Hydra server to import projects and jobsets from.
#         <out-dir>        The directory to output generated Terraform configuration files to.
#         <import-file>    Where to write the generated list of 'terraform import' statements.
nix-shell$ ./generator.sh hydra.example.com outdir generated-tf-import.sh

Development

In addition to the dependencies for using this provider, hacking on this provider also requires the following:

Running locally

This assumes a running instance of Hydra is available.

$ nix-shell
nix-shell$ make install
nix-shell$ cd examples/default
nix-shell$ terraform init && terraform plan

Regenerating API bindings

This will fetch the latest hydra-api.yaml from Hydra and generate API bindings against that specification.

$ nix-shell
nix-shell$ make api

Running acceptance tests locally

NOTE: You should use a throwaway Hydra instance to prevent anything unexpected from happening.

$ nix-shell
nix-shell$ HYDRA_HOST=http://0.0.0.0:63333 HYDRA_USERNAME=alice HYDRA_PASSWORD=foobar make testacc

Contributing

Pull requests are welcome. When submitting one, please follow the checklist in the template to ensure everything works properly.

The typical contribution workflow is as follows:

  1. Make your change
  2. Format it with make fmt (requires goimports)
  3. Verify it builds with make build
  4. Install it with make install
  5. Spin up a local Hydra server to test with (see the Hydra documentation on Executing Hydra During Development)
  6. Extend one of the examples so that it will exercise your change (or write your own example!)
  7. Remove the .terraform.lock.hcl file (if it exists) and run terraform init && terraform apply
  8. Once everything looks good, write a test for your change
  9. Commit and open a pull request (be sure to follow the checklist in the template)

FAQ

Q. Does this provider support Basic Authentication?

A. Yes! Just set the HYDRA_HOST environment variable to e.g. https://user:password@hydra.example.com. You can also set the host in your configuration this way, but hard-coded credentials are insecure and not recommended.
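For example, assuming the placeholder credentials user/password (keep real values out of the configuration):

```shell
# Basic Auth credentials embedded in the host URL (placeholder values):
export HYDRA_HOST="https://user:password@hydra.example.com"
# terraform plan   # the provider authenticates using the URL's credentials
```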

License

MPL-2.0

terraform-provider-hydra's People

Contributors

cole-h, dependabot[bot], flexiondotorg, grahamc, hoverbear, lheckemann, lucperkins


terraform-provider-hydra's Issues

terraform doesn't track deletion of the declarative spec file project setting

To convert a project from declarative to non-declarative, you remove the spec file from the project configuration (the setting is labeled "Declarative spec file (Leave blank for non-declarative project configuration)").

Expected:
Running terraform apply against a project that has had the spec file removed should re-apply the declarative spec file.

What happens:

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

If you change any of the other metadata (enabled, description, etc.), it's rewritten.

Reading jobsets doesn't read the flake_uri

Consider this:

  # hydra_jobset.flakes_blender-bin will be updated in-place
  ~ resource "hydra_jobset" "flakes_blender-bin" {
      + flake_uri           = "github:edolstra/nix-warez?dir=blender"
        id                  = "flakes/blender-bin"
        name                = "blender-bin"
        # (8 unchanged attributes hidden)
    }

This is after an import. It looks like the code for reading jobset data doesn't save the flake_uri.

Host, username, and password seem to need to be passed in the environment

Terraform CLI and Terraform Hydra Provider Version

Terraform v1.0.5
on linux_amd64
+ provider registry.terraform.io/determinatesystems/hydra v0.1.2

Affected Resource(s)

Provider configuration.

Terraform Configuration Files

terraform {
  required_providers {
    hydra = {
      version = "~> 0.1"
      source  = "DeterminateSystems/hydra"
    }
  }
}

provider "hydra" {
  host     = "https://foobar" 
  username = "[email protected]"
}

Expected Behavior

Terraform should prompt for my password and use the provided host and username.

Actual Behavior

$ terraform apply
provider.hydra.password
  The password for the Hydra user specified in `username`.

  Enter a value: aoeu

╷
│ Error: Missing required attribute
│ 
│   on <input-prompt> line 1:
│   (source code not available)
│ 
│ The attribute "host" is required, but no definition was found.
╵
╷
│ Error: Missing required attribute
│ 
│   on <input-prompt> line 1:
│   (source code not available)
│ 
│ The attribute "username" is required, but no definition was found.
╵

Steps to Reproduce

  1. terraform apply

References

  • #0000

Make test configs more generic

e.g. make it so that we can provide the resource name itself, as well as maybe a way to "compose" configuration snippets? This will let us deduplicate some of the configs that only have slight differences. I'm imagining something like:

func composeConfigSnippets(resourceName string, projectName string, jobsetName string, snippets ...ConfigSnippets) string {
	// TODO
}

where ConfigSnippets is just func() string that will be appended to the config. Maybe it'll accept some args, unknown until this is actually attempted.

Or maybe...

func composeConfigSnippets(commonConfig string, snippets ...ConfigSnippets) string {
	// TODO
}

where commonConfig is stuff that won't change / be tested in our tests.
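A minimal compilable sketch of the second variant, assuming ConfigSnippet is the func() string type described above (the string-building details are my own, not settled API):

```go
package main

import "strings"

// ConfigSnippet produces one fragment of a test configuration.
type ConfigSnippet func() string

// composeConfigSnippets appends each snippet's output to the common
// (unchanging) configuration, separated by blank lines.
func composeConfigSnippets(commonConfig string, snippets ...ConfigSnippet) string {
	var b strings.Builder
	b.WriteString(commonConfig)
	for _, s := range snippets {
		b.WriteString("\n\n")
		b.WriteString(s())
	}
	return b.String()
}
```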

Deploying from Hydra with Terraform

Description

It'd be slick to implement the concepts from https://determinate.systems/posts/hydra-deployment-source-of-truth in Terraform.

Specifically:

  • Get the build information for a job's latest successful build, and latest successful build from a completed evaluation
  • Get information about a specific constituent of an aggregate job

New or Affected Resource(s)

data "hydra_job"
data "hydra_job_aggregate_constituent"

Potential Terraform Configuration

Deploying from a job:

data "hydra_job" "latest_myapp" {
    project = "myapp"
    jobset = "main"
    job = "myapp"
    wait_for_all_jobs = true # latest-successful ... consider a better name?
}

resource "aws_instance" "web" {
  # ...

  provisioner "remote-exec" {
    command = "nix-env -p /nix/var/nix/profiles/myapp --set '${data.hydra_job.latest_myapp.outputs.out.path}'"
  }
}

Deploy the constituent of an aggregate job:

data "hydra_job_aggregate_constituent" "serverconfig" {
    project = "myapp"
    jobset = "main"
    job = "release_gate"
    constituent = "serverconfig"
    wait_for_all_jobs = true
}

resource "aws_instance" "web" {
  # ...

  provisioner "remote-exec" {
    inline = [
      "nix-env --profile /nix/var/nix/profiles/system --set '${data.hydra_job_aggregate_constituent.serverconfig.outputs.out.path}'",
      "/nix/var/nix/profiles/system/bin/switch-to-configuration switch",
    ]
  }
}

Jobsets should complain if the type is flake but nix_expression is set

# hydra_jobset.flakes_dhdm will be updated in-place
  ~ resource "hydra_jobset" "flakes_dhdm" {
        id                  = "flakes/dhdm"
        name                = "dhdm"
        # (8 unchanged attributes hidden)

      + nix_expression {}
    }

I had erroneously provided a nix_expression field, but no flake_uri field. This should be caught as an error.

Provider does not support creating both a project and a jobset at the same time

When applying a tf config containing both a new hydra_project and a new hydra_jobset within that project, it seems like the creation is not applied in the right order / with the right dependencies, and the jobset creation fails because the project does not exist yet. A second terraform apply is able to create the jobset.

(TF version: opentofu v1.8.0, provider registry.opentofu.org/determinatesystems/hydra v0.1.2 - I have not tried the HEAD version of this repo, lmk if this sounds like something that was fixed already and is just not released)

Add an importer script for use with existing Hydra instances

Existing script written by @grahamc:

#!/usr/bin/env nix-shell
#!nix-shell -i bash -p jq shellcheck
# shellcheck shell=bash
set -eu
shellcheck "$0"

server_root="https://hydra.nixos.org/"
inputFile=$(dirname "$0")/projects

strFromNull() (
    local input=$1
    local expr=$2

    echo "$input" | jq -r "if $expr == null then \"\" else $expr end"
)

boolFrom() (
    local input=$1
    local expr=$2

    echo "$input" | jq -r "$expr"
)

intFrom() (
    local input=$1
    local expr=$2

    echo "$input" | jq -r "$expr + 0"
)

jobsetTypeFrom() (
    local input=$1
    local expr=$2

    val=$(echo "$input" | jq -r "$expr + 0")
    case "$val" in
        0)
            echo "legacy";
            ;;
        1)
            echo "flake";
            ;;
        *)
            echo "UNKNOWN";
            ;;
    esac
)

jobsetStateFrom() (
    local input=$1
    local expr=$2

    val=$(echo "$input" | jq -r "$expr + 0")
    case "$val" in
        0)
            echo "disabled";
            ;;
        1)
            echo "enabled";
            ;;
        2)
            echo "one-shot";
            ;;
        3)
            echo "one-at-a-time";
            ;;
        *)
            echo "UNKNOWN";
            ;;
    esac
)

renderProject() (
    proj=$1

    name=$(echo "$proj" | jq -r .name)
    displayname=$(strFromNull "$proj" ".displayname")
    description=$(strFromNull "$proj" ".description")
    homepage=$(strFromNull "$proj" ".homepage")
    owner=$(echo "$proj" | jq -r .owner)
    enabled=$(boolFrom "$proj" .enabled)
    visible=$(boolFrom "$proj" '.hidden == false')

    cat <<-TPL
resource "hydra_project" "$name" {
    name         = "$name"
    display_name = "$displayname"
    homepage     = "$homepage"
    description  = "$description"
    owner        = "$owner"
    enabled      = $enabled
    visible      = $visible
}
TPL
echo terraform import "hydra_project.$name" "$name" >&2
)

inputDefinitionLegacy() (
    jobset=$1

    nixexprinput=$(strFromNull "$jobset" ".nixexprinput")
    nixexprpath=$(strFromNull "$jobset" ".nixexprpath")

    cat <<-TPL
  nix_expression {
    file = "$nixexprpath"
    in   = "$nixexprinput"
  }

TPL

    echo "$jobset" | jq -c '.inputs | to_entries | .[] | .value' | while read -r input; do
        name=$(strFromNull "$input" '.name')
        emailResponsible=$(echo "$input" | jq -r '.emailresponsible == true')
        type=$(strFromNull "$input" '.type')
        value=$(echo "$input" | jq '.value')
        # jq prints the literal string "null" for a missing value
        if [ "$value" = "" ] || [ "$value" = "null" ]; then
            value="\"\""
        fi

    cat <<-TPL
  input {
    name              = "$name"
    type              = "$type"
    value             = $value
    notify_committers = $emailResponsible
  }

TPL
    done

)

inputDefinitionFlake() (
    jobset=$1

    flake=$(strFromNull "$jobset" ".flake")

    cat <<-TPL
  flake_uri = "$flake"

TPL

)


renderJobset() (
    project=$1
    name=$2
    jobset=$3

    echo "$jobset" | jq . >&2

    state=$(jobsetStateFrom "$jobset" ".enabled")
    description=$(strFromNull "$jobset" ".description")
    type=$(jobsetTypeFrom "$jobset" ".type")
    visible=$(boolFrom "$jobset" '.visible == true')
    keep_evaluations=$(intFrom "$jobset" ".keepnr")
    scheduling_shares=$(intFrom "$jobset" ".schedulingshares")
    check_interval=$(intFrom "$jobset" ".checkinterval")

    email_notifications=$(echo "$jobset" | jq -r '.enableemail == true')
    email_override=$(strFromNull "$jobset" ".emailoverride")

    case "$type" in
        "legacy")
            inputdefinition=$(inputDefinitionLegacy "$jobset")
            ;;
        "flake")
            inputdefinition=$(inputDefinitionFlake "$jobset")
            ;;
        *)
            inputdefinition="UNKNOWN INPUT TYPE"
            ;;
    esac

    resourcename=$(echo "${project}_$name" | tr '.' '_')

    cat <<-TPL
resource "hydra_jobset" "$resourcename" {
  project     = hydra_project.$project.name
  state       = "$state"
  visible     = $visible
  name        = "$name"
  type        = "$type"
  description = "$description"

$inputdefinition

  check_interval    = $check_interval
  scheduling_shares = $scheduling_shares

  email_notifications = $email_notifications
  email_override      = "$email_override"
  keep_evaluations    = $keep_evaluations
}

TPL
echo terraform import "hydra_jobset.$resourcename" "${project}/$name" >&2
)


main() (
    jq -c '.[]' < "$inputFile" | (
        i=0
        while read -r project; do
            i=$((i + 1))
            projectname=$(echo "$project" | jq -r .name)
            #if [ "$projectname" == "patchelf" ]; then
            (
                (
                    renderProject "$project"

                    projectname=$(echo "$project" | jq -r .name)
                    echo "$project" | jq -r '.jobsets[]' | while read -r jobsetname; do
                        renderJobset "$projectname" "$jobsetname" "$(curl --silent --header "Accept: application/json" "$server_root/jobset/$projectname/$jobsetname" | jq .)"
                    done
                ) > generated."$projectname".tf
            )&
            #fi
        done

        wait
    )

    echo "done"
)


main

Just need to review and clean it up. Maybe rewrite in another language (Python?). Should go into a new contrib/ directory.
