terraform-azure-portworx's Introduction

azure-portworx

Description

Terraform module to install Portworx into an OCP/ARO/IPI cluster on Azure, compatible with modules from https://modules.cloudnativetoolkit.dev

Prerequisites

This module has two manual steps that must be completed before a successful deployment:

  1. Azure service principal/credentials
  2. Portworx configuration

Azure service principal/credentials

The provided scripts/portworx-prereq.sh script will collect or create the necessary service principal. The script requires the resource group name, cluster name, and cluster type as input. Optionally, the subscription id can be provided; if not provided, the subscription id will be looked up.

  1. Log into your Azure account using the az cli.

  2. Run the scripts/portworx-prereq.sh script.

    ./scripts/portworx-prereq.sh -t aro -g rg-name -n cluster-name
  3. If successful, the output of the script will look like the following. The output values can be provided as input to the automation.

    {
      "azure_client_id": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
      "azure_client_secret": "XXXXXXX",
      "azure_tenant_id": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
      "azure_subscription_id": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
    }
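
For example, the script output could be copied into a terraform.tfvars file. The sketch below assumes the variable names match the module inputs shown in the Example usage section:

    # terraform.tfvars -- values taken from the portworx-prereq.sh output
    azure_client_id       = "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
    azure_client_secret   = "XXXXXXX"
    azure_tenant_id       = "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
    azure_subscription_id = "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"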

Alternatively, you can use known credentials for an existing service principal to allow Portworx to provision volumes for the cluster.

Service principal details

A service principal (service account) is used by the Portworx deployment to provision the storage volumes that Portworx will manage once deployed into the OpenShift cluster. There are some specifics for service principals when deploying Portworx, as detailed below:

  • ARO clusters: you must use the service principal that was created in the background when the ARO cluster was created.

  • IPI clusters: you must create a service principal that has the following permissions (see the role definition sketch after this list):

    • Microsoft.ContainerService/managedClusters/agentPools/read
    • Microsoft.Compute/disks/delete
    • Microsoft.Compute/disks/write
    • Microsoft.Compute/disks/read
    • Microsoft.Compute/virtualMachines/write
    • Microsoft.Compute/virtualMachines/read
    • Microsoft.Compute/virtualMachineScaleSets/virtualMachines/write
    • Microsoft.Compute/virtualMachineScaleSets/virtualMachines/read
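
As an illustration, a custom role carrying exactly these permissions could be defined with the azurerm provider. This is a minimal sketch, not part of this module; the role name and the subscription-level scope are assumptions:

    # Look up the current subscription to use as the role scope
    data "azurerm_subscription" "primary" {}

    # Hypothetical custom role granting only the permissions Portworx needs
    resource "azurerm_role_definition" "portworx" {
      name        = "portworx-ipi-storage"
      scope       = data.azurerm_subscription.primary.id
      description = "Permissions required by Portworx to provision volumes"

      permissions {
        actions = [
          "Microsoft.ContainerService/managedClusters/agentPools/read",
          "Microsoft.Compute/disks/delete",
          "Microsoft.Compute/disks/write",
          "Microsoft.Compute/disks/read",
          "Microsoft.Compute/virtualMachines/write",
          "Microsoft.Compute/virtualMachines/read",
          "Microsoft.Compute/virtualMachineScaleSets/virtualMachines/write",
          "Microsoft.Compute/virtualMachineScaleSets/virtualMachines/read",
        ]
        not_actions = []
      }

      assignable_scopes = [
        data.azurerm_subscription.primary.id,
      ]
    }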

Before attempting to deploy this module, you can log into the az cli and manually run the scripts/portworx-prereq.sh script, which handles both of these cases. The script outputs the credentials required to successfully deploy Portworx into the cluster as a JSON structure like the one shown above.

Portworx configuration

This module requires a Portworx configuration. Portworx is available in two flavors: Enterprise and Essentials.

Portworx Essentials is free forever, but only supports a maximum of 5 nodes per cluster, 200 volumes, and 5 TB of storage.
Portworx Enterprise requires a subscription (with a 30-day free trial), supports over 1000 nodes per cluster, and has unlimited storage.

More detailed comparisons are available at: https://portworx.com/products/features/

Instructions for obtaining your Portworx configuration are available in the portworx config documentation.

You can see an example in the Example usage section below.

Software dependencies

The module depends on the following software components:

Command-line tools

  • terraform >= v0.15

Terraform providers

  • None

Module dependencies

This module makes use of the output from other modules:

  • github.com/cloud-native-toolkit/terraform-ocp-login.git
    • provides the cluster_config_file variable for the azure-portworx module.

Example usage

Note: osb_endpoint and user_id are only required in portworx_config if type is essentials; these values are not required for type enterprise.
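
Assuming the portworx_config object shape shown in the Issues section below, an Essentials configuration might look like the following sketch (all values are hypothetical placeholders):

    portworx_config = {
      type              = "essentials"
      cluster_id        = "my-px-cluster"        # hypothetical cluster id
      enable_encryption = false
      user_id           = "<essentials-user-id>" # from your Portworx Essentials account
      osb_endpoint      = "<osb-endpoint>"       # placeholder; see the portworx config docs
    }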

module "cluster-login" {
  source = "github.com/cloud-native-toolkit/terraform-ocp-login.git"

  server_url = var.server_url
  login_user = var.cluster_username
  login_password = var.cluster_password
  login_token = ""
  ca_cert = var.ca_cert  
}

module "azure-portworx" {
  source = "./module"

  azure_client_id       = var.azure_client_id
  azure_client_secret   = var.azure_client_secret
  azure_subscription_id = var.azure_subscription_id
  azure_tenant_id       = var.azure_tenant_id
  cluster_config_file   = module.cluster-login.platform.kubeconfig
  cluster_type          = "IPI"
  portworx_spec_file    = "${path.module}/px_spec.yaml"
}

Acknowledgements

This module is a derivative of https://github.com/ibm-hcbt/terraform-ibm-cloud-pak/tree/main/modules/portworx_aws


terraform-azure-portworx's Issues

Variables not correct when generating from BOM

First issue: malformed variable in generated output

This is due to formatting in module.yaml. While it is valid Terraform, the parser in iascable is stricter and requires consistent syntax: variable names wrapped in quotes, with proper spacing between variables.

Generated output looks like:

variable "azure-portworx_variable cluster_name {" {
  type = string
  description = "The name of the ARO cluster"
}

Second issue: portworx config is of type string, when it should be a complex object

This could be related to metadata parsing... not sure.

The generated config output looks like:

variable "azure-portworx_portworx_config" {
  type = object
  description = "Portworx configuration"
}

Where it should look like:

variable "azure-portworx_portworx_config" {
  type = object({
    type=string,
    cluster_id=string,
    enable_encryption=bool,
    user_id=string,
    osb_endpoint=string
  })
  description = "Portworx configuration"
}

VM Disks are not removed after testing

Portworx creates VM disks at the Azure level that are not removed by the destroy action. Azure only permits a maximum of 8 disks per VM, which causes problems when running repeated tests against the same cluster.

Went to run 210-azure-portworx and got the following error

This error occurred as part of the Maximo automation:

drwxr-xr-x    3 root     root          4096 May 22 14:35 .
drwxr-xr-x    8 root     root          4096 May 22 14:35 ..
-rw-r--r--    1 root     root           532 May 22 14:35 210-azure-portworx-storage.auto.tfvars
drwxr-xr-x    2 root     root          4096 May 22 14:35 docs
-rw-r--r--    1 root     root          1288 May 22 14:35 main.tf
-rw-r--r--    1 root     root            23 May 22 14:35 providers.tf
lrwxrwxrwx    1 root     root            36 May 22 14:35 terraform.tfvars -> /workspaces/current/terraform.tfvars
-rw-r--r--    1 root     root          2726 May 22 14:35 variables.tf
-rw-r--r--    1 root     root            96 May 22 14:35 version.tf
bash-5.1# terraform init
There are some problems with the configuration, described below.

The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.
╷
│ Error: Argument or block definition required
│
│ On main.tf line 18: An argument or block definition is required here.

Scripts need to include path to binaries

Commands calling kubectl and oc need to include the path to the binaries for Terraform to function within a container. A path variable needs to be added in front of each call.

Current example:

    kubectl delete

Should be:

    ${BIN_DIR}/kubectl delete

Azure credentials do not align with other modules

The Azure credential variables have an azure-* prefix, which does not align with other Azure modules that do not use this prefix. This causes issues when using this module with others in a BOM. The prefix needs to be removed.

Module metadata should depend on cluster interface

The module metadata has a cluster dependency only on ocp_login. Instead, the dependency should use the interface so that any of the cluster modules can be used:

    - id: cluster
      interface: github.com/cloud-native-toolkit/automation-modules#cluster
      refs: []

README does not reflect variables update

The change to the variables handling has not been reflected in the Example usage section of the README. The README refers to the portworx_config variable, which no longer exists and has been replaced by portworx_spec or portworx_spec_file.
