
rover's Introduction


Azure Terraform SRE - Landing zones on Terraform - Rover

โš ๏ธ This solution, offered by the Open-Source community, will no longer receive contributions from Microsoft. Customers are encouraged to transition to Microsoft Azure Verified Modules for Microsoft support and updates.

Azure Terraform SRE provides you with guidance and best practices to adopt Azure.

The CAF rover helps you manage your enterprise Terraform deployments on Microsoft Azure and is composed of two parts:

  • A docker container

    • Provides a consistent developer experience on PC, Mac, and Linux, including the right tools, Git hooks, and DevOps tools.
    • Native integration with Visual Studio Code and GitHub Codespaces.
    • Contains the versioned toolset you need to apply landing zones.
    • Helps you switch component versions quickly by separating the run environment from the configuration environment.
    • Ensures pipeline ubiquity and abstraction: run the rover everywhere, whatever the pipeline technology.
  • A Terraform wrapper

    • Helps you store and retrieve Terraform state files on Azure storage account.
    • Facilitates the transition to CI/CD.
    • Enables seamless experience (state connection, execution traces, etc.) locally and inside pipelines.

The rover is available from Docker Hub.

Getting started with CAF Terraform landing zones

If you are reading this, you will probably also be interested in the documentation: :books: Read our centralized documentation page

Community

Feel free to open an issue for a feature request or bug report, or to submit a PR.

If you have any questions, you can reach out to tf-landingzones at microsoft dot com.

You can also reach us on Gitter.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

Code of conduct

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.


rover's Issues

Execute rover from Azure Container Instance

To deploy the rover in an Azure Container Instance (ACI):

# Create the resource group
az group create --name caf-rover1 --location southeastasia

# Deploy the rover in a container instance
az container create --name rover-13-rc1 \
    -g caf-rover1 --image aztfmod/roverdev:vnext-13-rc1 \
    --secure-environment-variables SSH_PASSWD='Change!Password' \
     --dns-name-label rover --ports 22

# Get the ssh endpoint
az container show --resource-group caf-rover1 --name rover-13-rc1 \
    --query "{FQDN:ipAddress.fqdn,ProvisioningState:provisioningState}" --out table

# Log in to the rover with the FQDN provided by the previous command
ssh [email protected]

# Once on the rover, you have access to the rover command to manage your landing zone deployments
[vscode@wk-caas-2da2cc8585684af3b3bc87465c467f09-398bdb3712ad30b73cb4ae ~]$ rover

  /$$$$$$   /$$$$$$  /$$$$$$$$       /$$$$$$$                                        
 /$$__  $$ /$$__  $$| $$_____/      | $$__  $$                                       
| $$  \__/| $$  \ $$| $$            | $$  \ $$  /$$$$$$  /$$    /$$/$$$$$$   /$$$$$$ 
| $$      | $$$$$$$$| $$$$$         | $$$$$$$/ /$$__  $$|  $$  /$$/$$__  $$ /$$__  $$
| $$      | $$__  $$| $$__/         | $$__  $$| $$  \ $$ \  $$/$$/ $$$$$$$$| $$  \__/
| $$    $$| $$  | $$| $$            | $$  \ $$| $$  | $$  \  $$$/| $$_____/| $$      
|  $$$$$$/| $$  | $$| $$            | $$  | $$|  $$$$$$/   \  $/ |  $$$$$$$| $$      
 \______/ |__/  |__/|__/            |__/  |__/ \______/     \_/   \_______/|__/      
                                                                                     
                                                                                                                                                           
              version: aztfmod/roverdev:vnext-13-rc1

[vscode@wk-caas-2da2cc8585684af3b3bc87465c467f09-398bdb3712ad30b73cb4ae ~]$ exit
logout
Connection to rover.southeastasia.azurecontainer.io closed.

to consider - upload launchpad state even if terraform apply fails

Hello Folks,
Bear with me if I am missing some information, but when deploying the launchpad for the first time: if it creates X resources and then fails on resource X+1 (for example, because resource X+1 already exists in Azure), the tfstate is not uploaded to the storage account. We then have to go through the Azure portal manually, delete all X resources that were created, and run the launchpad again.

Would it be possible, in the section of code below in the initialize_state function, to add a try/catch concept?

Even if apply fails, don't exit the bash script at all, but execute "upload_state" even with a partial state?

"apply")

I am no bash geek, but that should be quite easy using command1 || command2.
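The `||` pattern suggested here could be sketched as follows; this is illustrative only, not the actual rover code, and `upload_state` is a hypothetical function name standing in for the rover's upload logic:

```shell
#!/usr/bin/env bash
# Sketch: attempt the state upload even when apply fails, then propagate
# the original exit code so the pipeline still reports the failure.

apply_and_upload() {
    local exit_code=0
    terraform apply -auto-approve "$@" || exit_code=$?
    # Always attempt to upload whatever (partial) state exists.
    upload_state || echo "warning: state upload failed" >&2
    return "${exit_code}"
}
```

With this shape, a failed apply still leaves the partial tfstate in the storage account, so cleanup can be driven by Terraform instead of the portal.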

thanks

Peter

Implement auto-complete to display the commands available

This issue implements an auto-completion feature in the rover to simplify the discovery of the available commands.

When typing rover, it would display the available rover modes, like:
--landingzone --clone --help

When typing rover --landingzone, it would display the landing zone attributes:
-a -launchpad -env --help
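A minimal bash completion along these lines could be sketched as below; the option names are taken from this issue, and the function and its registration are illustrative, not the rover's actual implementation:

```shell
# Sketch: complete top-level rover modes, and landingzone attributes once
# --landingzone has been typed.
_rover_completions() {
    local cur="${COMP_WORDS[COMP_CWORD]}"
    local opts="--landingzone --clone --help"
    if [ "${COMP_WORDS[1]}" = "--landingzone" ]; then
        opts="-a -launchpad -env --help"
    fi
    COMPREPLY=( $(compgen -W "${opts}" -- "${cur}") )
}
# Register the completion for the rover command:
complete -F _rover_completions rover
```

Sourcing such a script from the container's bashrc would make the modes discoverable with Tab.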

Separate "rover" logic to allow custom container environments?

It seems 99% of the rover container is pretty standard fare, and most of the "magic" is in the rover bash script, which has a clever way of determining the appropriate tfstate location in a standard way.

I have a few clients who love this model of approach, but would like to use custom dev containers with extra tools like Packer/Ansible (yes, it makes things bigger, but standardizing for GitOps is worth it). Are there plans to offer this wrapper separately? It would be a huge win in terms of flexibility. Basically I'm thinking of something like terragrunt: an extra binary that depends on TF, making it ultra portable.

2007 Tooling upgrade

versionTerraform=0.12.29
versionAzureCli=2.9.1
versionKubectl=v1.18.6
versionGit=2.27.0
versionTflint=v0.18.0
versionTfsec=v0.24.1

Too many command line arguments in GitHub Actions

When CI is applying the launchpad for level 200, with the new var-folder argument we get a "Too many command line arguments. Configuration path expected." error,
as per https://github.com/aztfmod/terraform-azurerm-caf/runs/1223930727?check_suite_focus=true

When using the folder argument for the variables:
Expanding variable files: /__w/terraform-azurerm-caf/terraform-azurerm-caf/public/landingzones/caf_launchpad/scenario/200/*.tfvars

tf_action                     : 'apply'
command and parameters        : '-var-file /__w/terraform-azurerm-caf/terraform-azurerm-caf/public/landingzones/caf_launchpad/scenario/200/compute.tfvars -var-file /__w/terraform-azurerm-caf/terraform-azurerm-caf/public/landingzones/caf_launchpad/scenario/200/configuration.tfvars -var-file /__w/terraform-azurerm-caf/terraform-azurerm-caf/public/landingzones/caf_launchpad/scenario/200/diagnostics_definition.tfvars -var-file /__w/terraform-azurerm-caf/terraform-azurerm-caf/public/landingzones/caf_launchpad/scenario/200/diagnotics_log_analytics.tfvars -var-file /__w/terraform-azurerm-caf/terraform-azurerm-caf/public/landingzones/caf_launchpad/scenario/200/dynamic_secrets.tfvars -var-file /__w/terraform-azurerm-caf/terraform-azurerm-caf/public/landingzones/caf_launchpad/scenario/200/iam_azuread_api_permissions.tfvars -var-file /__w/terraform-azurerm-caf/terraform-azurerm-caf/public/landingzones/caf_launchpad/scenario/200/iam_azuread.tfvars -var-file /__w/terraform-azurerm-caf/terraform-azurerm-caf/public/landingzones/caf_launchpad/scenario/200/iam_custom_roles.tfvars -var-file /__w/terraform-azurerm-caf/terraform-azurerm-caf/public/landingzones/caf_launchpad/scenario/200/iam_keyvault_policies.tfvars -var-file /__w/terraform-azurerm-caf/terraform-azurerm-caf/public/landingzones/caf_launchpad/scenario/200/iam_managed_identities.tfvars -var-file /__w/terraform-azurerm-caf/terraform-azurerm-caf/public/landingzones/caf_launchpad/scenario/200/iam_role_mapping.tfvars -var-file /__w/terraform-azurerm-caf/terraform-azurerm-caf/public/landingzones/caf_launchpad/scenario/200/keyvaults.tfvars -var-file /__w/terraform-azurerm-caf/terraform-azurerm-caf/public/landingzones/caf_launchpad/scenario/200/networking_nsg_definition.tfvars -var-file /__w/terraform-azurerm-caf/terraform-azurerm-caf/public/landingzones/caf_launchpad/scenario/200/networking.tfvars -var-file /__w/terraform-azurerm-caf/terraform-azurerm-caf/public/landingzones/caf_launchpad/scenario/200/storage_accounts.tfvars 
-var-file /__w/terraform-azurerm-caf/terraform-azurerm-caf/public/landingzones/caf_launchpad/scenario/200/subscriptions.tfvars -parallelism=30 -var random_length=5 -var prefix=g294693489 \'

When running release aztfmod/rover:2009.0210 have error with AZ CLI

Hello there. I'm using the latest version of the rover image in a DevContainer in Visual Studio Code, with the following docker-compose:

#-------------------------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See https://go.microsoft.com/fwlink/?linkid=2090316 for license information.
#-------------------------------------------------------------------------------------------------------------
 
version: '3.7'
services:
  rover:
    image: aztfmod/rover:2009.0210
    labels:
      - "caf=Azure CAF"
    volumes:
      - ..:/tf/caf
      - volume-caf-vscode:/home/vscode
      - ~/.ssh:/tmp/.ssh-localhost:ro
      - /var/run/docker.sock:/var/run/docker.sock
    # Overrides the default command so things don't shut down after the process ends.
    command: /bin/sh -c "while sleep 1000; do :; done"

volumes:
  volume-caf-vscode:
    labels:
      - "caf=Azure CAF"

However, when trying to run az login, I get the following error:

[vscode@f9b13def0ec9 caf]$ az login
Could not access runpy._run_module_as_main
AttributeError: module 'runpy' has no attribute '_run_module_as_main'

Things worked perfectly in previous releases.

Unable to destroy launchpad for launchpad_opensource_light

When running the destroy for launchpad_opensource_light:

launchpad /tf/launchpads/launchpad_opensource_light destroy -var location=southeastasia

You get the following error message

Error: Error building account: Error getting authenticated object ID: Error parsing json result from the Azure CLI: Error waiting for the Azure CLI: exit status 1

  on main.tf line 1, in provider "azurerm":
   1: provider "azurerm" {


Error on or near line 532; exiting with status 1

This happens only after you deploy and/or destroy landing zones. In order to delete, you have to refresh credentials and context using:

rover logout

And log in again using

rover login

Then you can try deleting the launchpad again using

launchpad /tf/launchpads/launchpad_opensource_light destroy -var location=southeastasia -auto-approve

[bug] pre-commit points to python not found in image

I found a bug in the LZ code and wanted to make the change in the container and push it to a branch on the repo. When trying to commit, I got the following error:


[vscode@92a0aca62a3e caf]$ git checkout -b bug/allowed_resource_type_policy
Switched to a new branch 'bug/allowed_resource_type_policy'

[vscode@92a0aca62a3e caf]$ git status
On branch bug/allowed_resource_type_policy
Changes to be committed:
  (use "git restore --staged <file>..." to unstage)
        modified:   landingzones/landingzone_caf_foundations/blueprint_foundations_governance/policies/builtin/allowed_resource_type.tf

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
        modified:   landingzones/landingzone_caf_foundations/blueprint_foundations.sandpit.auto.tfvars


[vscode@92a0aca62a3e caf]$ git commit -m "Fixed the policy definition by encoding the input array."
/usr/bin/env: python3.7: No such file or directory

I found the .git/hooks/pre-commit file to be the culprit. After removing the file, I was able to commit.

I am running in WSL2 --> Container for VSCode

limitation of having one launchpad per subscription due to hardcoded tags.workspace=="level0"

Hello Folks,
We wanted to have several launchpads within one subscription as a separation between environments (dev, test, ...). We quickly ran into a problem when deploying the second launchpad, because the pipeline always finds the existing one: detection is done by filtering on resource type = storage account with the tags tfstate=level0 and workspace=level0.

Here is reference to the code:

id=$(az storage account list --query "[?tags.tfstate=='level0' && tags.workspace=='level0'].{id:id}" -o json | jq -r .[0].id)

An easy fix would be to make the "workspace" tag a parameter instead of expecting the hardcoded value "level0"; the hardcoded value is what limits us to one launchpad per subscription.
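The suggested fix could be sketched as follows; `build_launchpad_query` is a hypothetical helper introduced here for illustration, not existing rover code:

```shell
# Sketch: build the storage-account filter with the workspace tag as a
# parameter (defaulting to "level0") instead of a hardcoded value.
build_launchpad_query() {
    local workspace="${1:-level0}"
    echo "[?tags.tfstate=='level0' && tags.workspace=='${workspace}'].{id:id}"
}

# Usage (unchanged apart from the parameterized query):
# id=$(az storage account list --query "$(build_launchpad_query dev)" -o json | jq -r .[0].id)
```

Each environment's pipeline would then pass its own workspace name, so several launchpads can coexist in one subscription.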

functions.sh, function plan(), missing var-file

In the shell script functions.sh, the plan() function calls terraform plan without a var-file; subsequently, incorrect names are used during apply.

During bootstrap, the var-file is passed to launchpad.sh. When deploying from scratch, however, the var-file is not passed to the initialize_state function in functions.sh, and hence not to the plan and apply calls mentioned above, causing default values to be used for names in launchpad_opensource (the level0 repo).
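One way to thread the var-file through to plan/apply could look like this; a sketch with hypothetical names (`plan_args`, `tf_var_file`), not the actual functions.sh code:

```shell
# Sketch: build the terraform plan arguments, including the var-file when
# one was passed down from the bootstrap.
plan_args() {
    local var_file="$1"
    local args="-out=launchpad.tfplan"
    if [ -n "${var_file}" ]; then
        args="-var-file=${var_file} ${args}"
    fi
    echo "${args}"
}

# Usage: terraform plan $(plan_args "${tf_var_file}")
```

The same argument string would then be reused for apply, so plan and apply always see identical variable inputs.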

Terraform output fails if workspace folder doesn't exist

The rover needs the /tfstates/<environment>/ folder to output error messages; terraform commands other than init/plan/apply do not create this folder by default, thus causing the following error:

/tf/rover/functions.sh: line 613: /home/vscode/.terraform.cache/tfstates/demo/landingzone_caf_foundations_stderr.txt: No such file or directory
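A defensive fix could be sketched as below, assuming the cache root shown in the error above; the function and the `TF_CACHE_DIR` variable are illustrative names, not rover code:

```shell
# Sketch: make sure the per-environment tfstates folder exists before any
# command redirects its stderr there. TF_CACHE_DIR stands in for the
# rover's cache root (~/.terraform.cache in the error above).
ensure_tfstate_folder() {
    local environment="$1"
    local folder="${TF_CACHE_DIR:-$HOME/.terraform.cache}/tfstates/${environment}"
    mkdir -p "${folder}"
    echo "${folder}"
}

# Usage: stderr_file="$(ensure_tfstate_folder demo)/landingzone_caf_foundations_stderr.txt"
```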

Cannot deploy launchpad-100 if not the owner of subscription

I was a classic service administrator, but not an owner of the subscription. I was getting a permission error on the tfstate storage account, which was fixed by adding the Owner role on the subscription. Suggestion: develop a validation script that checks all the prerequisites on the subscription before creating the launchpad.

Implement the help command

There is an initial implementation to display the help for the rover --clone. This issue extends the implementation to the other commands available

Proposed approach:

rover --help to display all commands available
rover --clone --help to display the clone command
rover --landingzone -h or rover -lz --help to display the landingzone help command

When a value is missing for a given attribute, the rover displays an example and a detailed explanation of the attribute.
For example, rover --clone-branch requires the name of the branch to be set. The rover will fail and display examples and a detailed explanation of --clone-branch.

Add rover --clone to bring landing zones dependencies

Rover clone is used to bring in the landing zone dependencies you need to deploy your landing zone.

By default the rover will clone azure/caf-terraform-landingzones into the local rover folder /tf/caf/landingzones

Examples:
- Clone the launchpad: rover --clone-folder /landingzones/launchpad
- Clone the launchpad in different folder: rover --clone-destination /tf/caf/landingzones/public --clone-folder /landingzones/launchpad
- Clone the launchpad (branch vnext): rover --clone-folder-strip 2 --clone-destination /tf/rover/landingzones --clone-folder /landingzones/launchpad --clone-branch vnext

- Clone the CAF foundations landingzone: rover --clone-folder /landingzones/landingzone_caf_foundations
- Clone the AKS landingzone: rover --clone aztfmod/landingzone_aks --clone-destination /tf/caf/landingzones/landingzone_aks

--clone-branch sets the branch from which to pull the package.
If not set, the master branch is used.

--clone-destination changes the local destination folder.
By default the package is cloned into the /tf/caf/landingzones folder of the rover.

--clone-folder specifies the folder to extract from the original project

  Example: --clone-folder /landingzones/landingzone_caf_foundations will only extract the caf foundations landing zone

--clone-folder-strip is used to strip the base folder structure from the original folder

  In the GitHub package of azure/caf-terraform-landingzones, the data are packaged in the following structure:
  caf-terraform-landingzones-master/landingzones/launchpad/main.tf
  [project]-[branch]/landingzones/[landingzone]
  To reproduce a clean folder structure in the rover, it is possible to set --clone-folder-strip to 2 to remove [project]-[branch]/landingzones and retrieve only the third-level folder.

  Defaults to 2 when using azure/caf-terraform-landingzones and 1 for all other git projects.

--clone specifies a GitHub organization and project in the form org/project.
If not set, the default is azure/caf-terraform-landingzones.

Use EPEL repository to accelerate rover build

Using the EPEL repository might be faster than building Git during rover preparation. Besides accelerating the build, this repository could also be useful for the integration of OpenVPN.

[feature] Add rover support for multiple subscriptions

In order to support multi subscriptions deployment mode, add rover support for additional fields including:

  • Subscription_ID
  • Tenant_ID
  1. Write the specs to support multiple remote backends - one for the Level 0 TF state and the other for the deployment (where the level0 lives [subscription, storage account...] and where the current state for the deployment lives)
  2. validate the specs - needs at least 2 reviewers
  3. implement the code in the rover
  4. unit and integration test
  5. automation (Azure DevOps and GitHub Actions)

Bug: Unable to bring up devops hosted agent in rover 1314

When trying to bring up a DevOps hosted agent using rover 1314, the VM creation fails in the pipeline with:

Error: Code="VMExtensionProvisioningError" Message="VM has reported a failure when processing extension 'install_azure_devops_agent'. Error message: \"Enable failed: failed to execute command: command terminated with exit status=1\n[stdout]\nstart\ninstall Ubuntu packages\nHit:1 http://azure.archive.ubuntu.com/ubuntu xenial InRelease\nHit:2 

Investigating in the CustomScript Extension shows:

Microsoft.Azure.Extensions.CustomScript 2.1.3 Provisioning failed Error 
...
Allowing agent to run docker Docker version 18.09.7, build 2d0083d Rover docker image 2005.1314 Using default tag: latest [stderr] sent invalidate(passwd) request, exiting sent invalidate(group) request, exiting Synchronizing state of docker.service with SysV init with /lib/systemd/systemd-sysv-install... Executing /lib/systemd/systemd-sysv-install enable docker Error response from daemon: pull access denied for 2005.1314, repository does not exist or may require 'docker login' 

It seems the rover version passed to the script does not include the full Docker image name as expected: it should be aztfmod/rover:2005.1314 instead of 2005.1314.

Install Packer binaries in the Rover container

Requesting to add the Packer binaries to the Rover container, as there are requirements to create custom images.

Instructions to be included:

sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
sudo yum -y install packer-${versionPacker}
cd /usr/bin
sudo mv packer packer.io  # CentOS has an existing package with the same name, so Packer has to be renamed.

ref : https://learn.hashicorp.com/tutorials/packer/getting-started-install?in=packer/getting-started

Thank you.

Rover Launchpad Error

Hi

Yesterday it seemed to be working OK. Today when I create the launchpad with "./rover.sh", it runs through and then fails with the following error:

Error: Error reading queue properties for AzureRM Storage Account "tfstatelvz8kdrtv33ip0flk": queues.Client#GetServiceProperties: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: error response cannot be parsed: "\ufeffAuthenticationFailedServer failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.\nRequestId:2d7d39f0-b003-0073-55cc-a6737f000000\nTime:2019-11-29T15:48:02.5881860ZRequest date header too old: 'Fri, 29 Nov 2019 13:04:34 GMT'" error: invalid character 'ï' looking for beginning of value

on storage.tf line 24, in resource "azurerm_storage_account" "stg":
24: resource "azurerm_storage_account" "stg" {

Help on this would be great.

Cheers,

Mark

Extend launchpad tags

Add to rover the capability to add tags depending on the runtime information:

  • rover version
  • launchpad deployed
  • other tags passed as command-line arguments, using "-tags" for instance.

Deployment metadata to be added

Scenario:

When checking at a deployment on a subscription, it is hard to tell how, when and which code was used to deploy a particular TF environment.

Proposed solution:

For each tfstate file on Azure storage, we plan to add the following fields to the Azure blob metadata:

  • Version of the rover used
  • Rover_logged_in user that deployed the environment
  • Checksum of the commit
  • Source repo
  • Branch version

Create a Rover Test image to include GO tooling

I need additional tooling for unit testing in the pipelines, such as Go. We can either add the tools to the rover image:

  • you can test your unit test in the dev environment
  • larger image

or create a dedicated Rover Test image from the rover base image

  • lightweight
  • cannot test in the dev environment

Implement an output feature to Terraform commands

Implement an option to output terraform commands to a specified file.
This would probably require implementing a more structured CLI parameter:

rover landingzone_name tf_command tf_options -o filename
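On the wrapper side, the feature could take a shape like this; `run_and_capture` is a hypothetical helper for illustration, not part of the rover:

```shell
# Sketch: run a command, optionally teeing its output to the file given
# via -o, while preserving the command's exit code.
run_and_capture() {
    local outfile="$1"; shift
    if [ -n "${outfile}" ]; then
        "$@" 2>&1 | tee "${outfile}"
        return "${PIPESTATUS[0]}"
    fi
    "$@"
}

# Usage: run_and_capture "${output_file}" terraform plan
```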

Running the rover to deploy a landingzone without an action fails

Running the rover to deploy a landing zone without specifying an action causes the deployment to fail

with errors

./launchpad.sh: line 146: [: ==: unary operator expected
./launchpad.sh: line 151: [: ==: unary operator expected
./launchpad.sh: line 157: [: ==: unary operator expected
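The classic cause of `[: ==: unary operator expected` is an unquoted, empty variable inside `[ ]`; a sketch of the fix (the variable and function names are illustrative):

```shell
# Broken:  [ $tf_action == "apply" ]   # with tf_action empty, `[` only sees `== apply`
# Fixed: quote the variable so an empty value compares as an empty string.
check_action() {
    local tf_action="${1:-}"
    if [ "${tf_action}" == "apply" ]; then
        echo "applying"
    else
        echo "no action specified"
    fi
}
```

Quoting every `$variable` inside `[ ]` in launchpad.sh would turn the cryptic operator error into a clean "no action specified" message.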

Rover error if Azure CLI output is changed from json

If we change the output format of the Azure CLI configuration from the default json to another format like table, the launchpad commands fail.

Steps to reproduce (inside the rover container):

  • Run the az configure command
  • Do you wish to change your settings? (y/N): y
  • What default output format would you like?
    [1] json - JSON formatted output that most closely matches API responses.
    [2] jsonc - Colored JSON formatted output that most closely matches API responses.
    [3] table - Human-readable output format.
    [4] tsv - Tab- and Newline-delimited. Great for GREP, AWK, etc.
    [5] yaml - YAML formatted output. An alternative to JSON. Great for configuration files.
    [6] yamlc - Colored YAML formatted output. An alternative to JSON. Great for configuration files.
    [7] none - No output, except for errors and warnings.
    Please enter a choice [Default choice(1)]: 3 (I assume that choosing 4, 5 or 6 should also fail)
  • Would you like to enable logging to file? (y/N): n (not important)
  • Microsoft would like to collect anonymous Azure CLI usage data to improve our CLI. Participation is voluntary and when you choose to participate, your device automatically sends information to Microsoft about how you use Azure CLI. To update your choice, run "az configure" again.
    Select y to enable data collection. (Y/n): n (not important)
  • CLI object cache time-to-live (TTL) in minutes [Default: 10]: 10 (not important)
  • Run any launchpad command, for example: launchpad /tf/launchpads/launchpad_opensource_light apply

An error similar to this should happen:

parse error: Invalid numeric literal at line 1, column 5
Error on or near line 54; exiting with status 1

I suspect from the error message that the output is being parsed, and since the format is completely different, it fails.

This is a minor bug, because there is always the workaround of reverting the output format, but fixing it would improve the reliability of the solution.
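One way to harden the scripts against this: pin the output format on every call instead of relying on the user's `az configure` default. A minimal sketch (the `azj` wrapper name is made up):

```shell
# Sketch: force JSON output on every az invocation so downstream jq
# parsing cannot be broken by a user-level output-format setting.
azj() {
    az "$@" -o json
}

# Usage: azj storage account list --query "[?tags.tfstate=='level0']" | jq -r .[0].id
```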

Detecting "owner" privileges on subscription for launchpad

Make sure the logged-in user has the Owner role on the subscription before running the launchpad deployments.

The following command can be used to test whether the logged-in user has the Owner role:

az role assignment list --role "Owner" --assignee ${TF_VAR_logged_user_objectId}
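Building on that command, a fail-fast guard could look like the sketch below, assuming an authenticated az session; the function name is made up for illustration:

```shell
# Sketch: fail fast when the logged-in user has no Owner assignment on the
# subscription, instead of failing later on the tfstate storage account.
require_owner_role() {
    local count
    count=$(az role assignment list --role "Owner" \
        --assignee "${TF_VAR_logged_user_objectId}" -o json | jq 'length')
    if [ "${count}" -eq 0 ]; then
        echo "Error: logged-in user is not Owner on this subscription" >&2
        return 1
    fi
}
```

Calling this at the top of the launchpad deployment would surface the missing prerequisite with a clear message.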

Identify the dependent landing zones when deploying a landing zone

A landing zone can depend on zero, one, or many landing zones. When running the rover to deploy a landing zone, it should detect whether the dependent landing zones are already deployed in the target workspace.

The following example depends on 2 landing zones ("landingzone_caf_foundations.tfstate" and "landingzone_networking.tfstate")

The expected behavior should be:

  • if one or more of the dependent landing zones do not exist, the rover should fail, highlighting the missing landing zones
  • if the landing zones exist, the rover proceeds with the deployment

Deployed landing zones can be checked in the level0 launchpad container.
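The dependency check could be sketched as below, assuming an authenticated session and that the level0 storage account and a "tfstate" container are known (all names are illustrative):

```shell
# Sketch: verify a dependency's tfstate blob exists in the level0 container
# before deploying; prints the az `exists` boolean.
dependency_exists() {
    local storage_account="$1" tfstate_name="$2"
    az storage blob exists --account-name "${storage_account}" \
        --container-name "tfstate" --name "${tfstate_name}" \
        --query exists -o json
}

# Usage:
# for dep in landingzone_caf_foundations.tfstate landingzone_networking.tfstate; do
#     [ "$(dependency_exists "${sa_name}" "${dep}")" = "true" ] || echo "missing dependency: ${dep}"
# done
```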

Passthrough convention does not work properly (prefix-name-postfix)

While using passthrough as the naming convention, the prefix is added even though use_prefix is set to false.

It is hard to trace the source of the issue, as it seems to span multiple locations.
First, when the prefix is not provided, it is set by default to "" (and not null); the conditions in multiple places only check whether it equals null, hence a random string is generated for it.

Also, the CAF naming provider does not specifically handle the passthrough option, and thus works with whatever prefix comes in as input (sometimes a generated one, because of the above).

Handling of the naming convention is very inconsistent across abstraction layers.

consider specific version of providers instead of latest in docker build

Hello Folks,

It would be worth considering an explicit tag when cloning the azuredevops provider in the docker build:

git clone https://github.com/aztfmod/terraform-provider-azurecaf.git && \

My concern is that the rover docker image is not reproducible: every time it is built, even from the same commit, the result can differ, due to the "dynamic" clone of the latest code from the azuredevops provider repository instead of cloning a specific version.

Maybe it is intentional, but I don't see the benefit.

I ran into this problem: I built the rover again today, from the same code base, and couldn't pass the bootstrap steps of the CAF foundation, which were working the days before.

After investigation, I realized that there is a clone from a different repo, as I described above.

What is more, the investigation was difficult because, when I attached inside the container, a hard-coded value is unfortunately used in the name of the azure devops provider - more in this reported issue: microsoft/terraform-provider-azuredevops#341

This could also be fixed in this codebase (./scripts/build.sh): before executing build.sh, overwrite the PROVIDER_VERSION.txt file:

# to force the docker cache to invalidate when there is a new version
ADD "https://api.github.com/repos/microsoft/terraform-provider-azuredevops/git/ref/tags/v${versionAzureDevopsTerraform}" version.json

RUN cd /tmp && \
    git clone --branch "v${versionAzureDevopsTerraform}" https://github.com/microsoft/terraform-provider-azuredevops.git && \
    cd terraform-provider-azuredevops && \
    echo ${versionAzureDevopsTerraform} > PROVIDER_VERSION.txt && \
    ./scripts/build.sh

Rover only seems to run in caf-terraform-landingzones

So I want to expand on the landing zone example in the caf-terraform-landingzones project, but I can't seem to run this container "outside" of it.

For example, if I clone this repo and simply open it, I get errors when it tries to launch the container:

Start: Run in container: cd /home/vscode/.vscode-server/bin/2af051012b66169dde0c4dfae3f5ef48f787ff69; export VSCODE_AGENT_FOLDER=/home/vscode/.vscode-server; /home/vscode/.vscode-server/bin/2af051012b66169dde0c4dfae3f5ef48f787ff69/server.sh --extensions-download-dir /home/vscode/.vscode-server/extensionsCache --install-extension 4ops.terraform --install-extension mutantdino.resourcemonitor --install-extension eamodio.gitlens --force
[8828 ms] Remote-Containers server: events.js:200
      throw er; // Unhandled 'error' event
      ^

Error: listen EACCES: permission denied /home/vscode/.gnupg/S.gpg-agent
    at Server.setupListenHandle [as _listen2] (net.js:1289:21)
    at listenInCluster (net.js:1354:12)
    at Server.listen (net.js:1453:5)
    at internal/util.js:278:30
    at new Promise (<anonymous>)
    at bound  (internal/util.js:277:12)
    at /tmp/vscode-remote-containers-server-fa0151898feb4380a1b0e4d8aebc2a0f76250387.js:1:30280
Emitted 'error' event on Server instance at:
    at emitErrorNT (net.js:1333:8)
    at processTicksAndRejections (internal/process/task_queues.js:81:21) {
  code: 'EACCES',
  errno: 'EACCES',
  syscall: 'listen',
  address: '/home/vscode/.gnupg/S.gpg-agent',
  port: -1
}
[8876 ms] Remote-Containers server terminated (code: 1, signal: null).

The container then hangs on installing extensions and never gets to a state where I have bash access.

If I run the same .devcontainer contents in the caf-terraform-landingzones project, everything loads fine.

Is the container expecting a specific project layout? If I want to split my landing zones into multiple repos with a custom layout, does this fail to account for that?
