azure / data-landing-zone

Template to deploy a single Data Landing Zone of the Data Management & Analytics Scenario (formerly Enterprise-Scale Analytics). The Data Landing Zone is a logical construct and a unit of scale in the architecture that enables data retention and the execution of data workloads for generating insights and value with data.

License: MIT License

Languages: PowerShell 30.35%, Shell 12.69%, Scala 2.59%, Dockerfile 0.28%, Bicep 54.09%
Topics: arm, azure, architecture, data-platform, enterprise-scale, policy-driven, bicep, data-mesh, data-fabric, enterprise-scale-analytics

data-landing-zone's Introduction

Cloud-scale Analytics Scenario - Data Landing Zone

Objective

The Cloud-scale Analytics Scenario provides a prescriptive data platform design coupled with Azure best practices and design principles. These principles serve as a compass for subsequent design decisions across critical technical domains. The architecture will continue to evolve alongside the Azure platform and is ultimately driven by the various design decisions that organizations must make to define their Azure data journey.

The Cloud-scale Analytics architecture consists of two core building blocks:

  1. Data Management Landing Zone, which provides all data management and data governance capabilities for the data platform of an organization.
  2. Data Landing Zone, which is a logical construct and a unit of scale in the Cloud-scale Analytics architecture that enables data retention and the execution of data workloads for generating insights and value with data.

The architecture is modular by design and allows organizations to start small with a single Data Management Landing Zone and Data Landing Zone, and to scale to a multi-subscription data platform environment by adding more Data Landing Zones to the architecture. The reference design thereby supports modern data platform patterns such as data mesh and data fabric as well as traditional data lake architectures. Cloud-scale Analytics is closely aligned with the data mesh approach and is ideally suited to help organizations build data products and share them across the business units of an organization. If the core recommendations are followed, the resulting target architecture will put the customer on a path to sustainable scale.

Cloud-scale Analytics


The Cloud-scale Analytics architecture represents the strategic design path and target technical state for your Azure data platform.


This repository describes the Data Landing Zone, which is where data is persisted and data workloads are executed. A Data Landing Zone is a unit of scale of the Cloud-scale Analytics architecture pattern, and it enables regional deployments, clear separation of ownership, chargeback of cost, in-place data sharing within and across Data Landing Zones, and many other frequently requested benefits. In addition, it is possible to scale within Data Landing Zones with cross-functional Data Integration and Data Product teams. The reference design targets a self-service approach for these teams to overcome bottlenecks and the need for a central team for cloud service deployments. The Data Landing Zone reference implementation creates a consistent setup inside a subscription and deploys storage accounts as well as data processing services such as Azure Synapse, Azure Data Factory, and Azure Databricks.

Note: Before getting started with the deployment, please make sure you are familiar with the complementary documentation in the Cloud Adoption Framework. Also, before deploying your first Data Landing Zone, please make sure that you have deployed a Data Management Landing Zone. The minimal recommended setup consists of a single Data Management Landing Zone and a single Data Landing Zone.

Deploy Cloud-scale Analytics

The Cloud-scale Analytics architecture is modular by design and allows customers to start with a small footprint and grow over time. To avoid a later migration project, customers should decide upfront how they want to organize data domains across Data Landing Zones. All Cloud-scale Analytics architecture building blocks can be deployed through the Azure Portal as well as through GitHub Actions workflows and Azure DevOps Pipelines. The template repositories contain sample YAML pipelines to get started with the setup of the environments more quickly.

Reference implementation Description Deploy to Azure Link
Cloud-scale Analytics Scenario Deploys a Data Management Landing Zone and one or multiple Data Landing Zones all at once. Provides fewer options than the individual Data Management Landing Zone and Data Landing Zone deployment options. Helps you to quickly get started and familiarize yourself with the reference design. For more advanced scenarios, please deploy the artifacts individually. Deploy To Azure
Data Management Landing Zone Deploys a single Data Management Landing Zone to a subscription. Deploy To Azure Repository
Data Landing Zone Deploys a single Data Landing Zone to a subscription. Please deploy a Data Management Landing Zone first. Deploy To Azure Repository
Data Product Batch Deploys a Data Workload template for Data Batch Analysis to a resource group inside a Data Landing Zone. Please deploy a Data Management Landing Zone and Data Landing Zone first. Deploy To Azure Repository
Data Product Streaming Deploys a Data Workload template for Data Streaming Analysis to a resource group inside a Data Landing Zone. Please deploy a Data Management Landing Zone and Data Landing Zone first. Deploy To Azure Repository
Data Product Analytics Deploys a Data Workload template for Data Analytics and Data Science to a resource group inside a Data Landing Zone. Please deploy a Data Management Landing Zone and Data Landing Zone first. Deploy To Azure Repository

Deploy Data Landing Zone

To deploy the Data Landing Zone into your Azure Subscription, please follow the step-by-step instructions:

  1. Prerequisites
  2. Create repository
  3. Setting up Service Principal
  4. Template Deployment
    1. GitHub Action Deployment
    2. Azure DevOps Deployment
  5. Known Issues

Contributing

Please review the Contributor's Guide for more information on how to contribute to this project via Issue Reports and Pull Requests.

data-landing-zone's People

Contributors

abdale, amanjeetsingh, analyticjeremy, elyusubov, esbran, genegc, hallihan, marvinbuss, mboswell, microsoftopensource, mike-leuer, rocavalc, vanwinkelseppe, viniciussouza, xigyenge


data-landing-zone's Issues

Automate adding ADF managed identity into Purview Data Curator role.

Unable to push lineage to Purview because the Purview Data Curator role is not granted to the factory's managed identity, so ADF can't connect to Purview to push lineage.

After adding the ADF managed identity to the Purview Data Curator role, ADF can connect and send data lineage to Purview.

Issue to deploy metadata services [Retryable Error]

Describe the bug
Cannot proceed with operation because resource /subscriptions/ed780b0d-a01c-4a39-982b-949f0c8c84e3/resourceGroups/rpdlz01-dev-network/providers/Microsoft.Network/virtualNetworks/rpdlz01-dev-vnet/subnets/ServicesSubnet used by resource /subscriptions/ed780b0d-a01c-4a39-982b-949f0c8c84e3/resourceGroups/rpdlz01-dev-metadata/providers/Microsoft.Network/networkInterfaces/rpdlz01-dev-sqlserver001-private-endpoint.nic.ccece0ec-b039-4bf9-9945-5538cae5196b is not in Succeeded state. Resource is in Updating state and the last operation that updated/is updating the resource is PutSubnetOperation.

Steps to reproduce

  1. Deploy Data Management Zone
  2. Deploy Data Landing Zone without SHIR
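Since this is a transient conflict caused by parallel operations against the same virtual network, one hedged mitigation is to wait until the VNet reports a Succeeded provisioning state before starting the dependent deployment. A minimal sketch, using the names from the error message above:

# Poll the VNet until it leaves the transient 'Updating' state.
do {
    Start-Sleep -Seconds 15
    $vnet = Get-AzVirtualNetwork -Name "rpdlz01-dev-vnet" -ResourceGroupName "rpdlz01-dev-network"
} while ($vnet.ProvisioningState -ne "Succeeded")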


Warning message in Log Analytics due to Network Isolation settings

Describe the bug

A warning message appears in the Log Analytics workspace after it is created through the global template.

Message: This resource has irregular Network Isolation settings that need your attention. Starting August 16, Network Isolation will be strictly enforced, which will block queries to this workspace.

Steps to reproduce

  1. Create the Data Management Zone and Data Landing Zone with the global template
  2. Open the Log Analytics instance created in the landing zone


Improve SHIR installation

Describe the solution you'd like
Improve SHIR installation and make download mechanism more resilient.
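A minimal sketch of what a more resilient download could look like, with retries and exponential backoff; the download URL is elided and the retry counts are illustrative:

# Hypothetical sketch: retry the SHIR download with exponential backoff.
$uri = "https://download.microsoft.com/..."   # actual SHIR MSI URL elided
$output = Join-Path $env:TEMP "IntegrationRuntime.msi"
$maxAttempts = 5

for ($attempt = 1; $attempt -le $maxAttempts; $attempt++) {
    try {
        Invoke-WebRequest -Uri $uri -OutFile $output -UseBasicParsing
        break   # download succeeded
    }
    catch {
        if ($attempt -eq $maxAttempts) { throw }
        Start-Sleep -Seconds ([math]::Pow(2, $attempt))   # back off: 2, 4, 8, 16 seconds
    }
}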

Databricks automation

Automate the end-to-end setup process of a Databricks workspace on Azure with:

  1. Application logging,
  2. Cluster policies,
  3. Hive metastore setup,
  4. SCIM Enterprise Application setup

Wrong path in ADF param file

On params.dataFactory001.json:

  • The parameters sqlServerId and sqlDatabaseId have the wrong values, pointing to dn001-sqlserver002 (the server that should host the HiveMetastoredb, not the AdfMetastoreDB)
  • This results in a failed deployment of the data factory in the data-node subscription

Add scripts to validate, plan and deploy the IaC

Describe the solution you'd like

Using PowerShell scripts to validate (ARM template toolkit), plan (what-if), and deploy will help keep the pipeline (ADO/GitHub) clean and also allow using and testing it locally with the same confidence.
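A minimal sketch of what such scripts could be built around, using standard Az PowerShell cmdlets; the resource group and file names are hypothetical:

# Hypothetical names for illustration.
$rg     = "dlz-dev-network"
$file   = "infra/network.json"
$params = "infra/params.dev.json"

# Validate: fail fast on invalid templates.
Test-AzResourceGroupDeployment -ResourceGroupName $rg -TemplateFile $file -TemplateParameterFile $params

# Plan: preview the changes without applying them (what-if).
New-AzResourceGroupDeployment -ResourceGroupName $rg -TemplateFile $file -TemplateParameterFile $params -WhatIf

# Deploy.
New-AzResourceGroupDeployment -ResourceGroupName $rg -TemplateFile $file -TemplateParameterFile $params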

Private Endpoints created by integration ADF are stuck in Pending state

Describe the bug
After 15+ minutes, private endpoints created by ADF are still in the Pending state and need to be manually approved.

The deployment finishes successfully; however, the linked services that leverage the private endpoints won't work until the endpoints are approved.

Steps to reproduce

  1. Deploy Data Landing Zone
  2. Check managed private endpoints for ADF in integration services


ACTION REQUIRED: Microsoft needs this private repository to complete compliance info

There are open compliance tasks that need to be reviewed for your data-node repo.

Action required: 4 compliance tasks

To bring this repository to the standard required for 2021, we require administrators of this and all Microsoft GitHub repositories to complete a small set of tasks within the next 60 days. This is critical work to ensure the compliance and security of your Azure GitHub organization.

Please take a few minutes to complete the tasks at: https://repos.opensource.microsoft.com/orgs/Azure/repos/data-node/compliance

  • The GitHub AE (GitHub inside Microsoft) migration survey has not been completed for this private repository
  • No Service Tree mapping has been set for this repo. If this team does not use Service Tree, they can also opt-out of providing Service Tree data in the Compliance tab.
  • No repository maintainers are set. The Open Source Maintainers are the decision-makers and actionable owners of the repository, irrespective of administrator permission grants on GitHub.
  • Classification of the repository as production/non-production is missing in the Compliance tab.

You can close this work item once you have completed the compliance tasks, or it will automatically close within a day of taking action.

If you no longer need this repository, it might be quickest to delete the repo, too.

GitHub inside Microsoft program information

More information about GitHub inside Microsoft and the new GitHub AE product can be found at https://aka.ms/gim or by contacting [email protected]

FYI: current admins at Microsoft include @marvinbuss, @daltondhcp, @esbran

Databricks HMS Jar Files Installation Fails

Describe the bug
When deploying a data landing zone, the SetupDatabricks.ps1 script does not install .jar files as expected

Steps to reproduce

  1. Run SetupDatabricks.ps1
  2. Notice that your .jar files aren't in the DBFS like they're supposed to be

(pull request forthcoming)

Bug Report

Describe the bug
When deploying the SHIR for Purview and ADF within the Data Landing Zone, the deployment fails due to a naming conflict: both SHIRs have the same name. Hence, this must be updated.

Steps to reproduce

  1. Deploy Data Landing Zone
  2. Deploy SHIR for ADF and Purview
  3. Deployment of both SHIR fails


Databricks Private Link

Add a Databricks Private Endpoint for the control plane once available. This reduces the number of IPs that we have to whitelist for outbound connectivity.

Containers on Enriched and Curated Data Lakes

When deploying a Data Landing Zone, the ESA documentation states that the enriched and curated data lake account will have two containers: one for enriched and another for curated data.

The current deployment only creates one container as opposed to the two.

Improve Databricks Cost tagging

  • Use regex rule for cost tagging in Databricks
  • Test regex rule
  • Validate that tags are assigned to VMs and other Azure resources in the Databricks managed resource group
  • Add documentation about the Databricks cost management

Variables Created by "GeneratePipelineVariables.ps1" Are Not Available in Pipeline

Describe the bug
In the ADO deployment pipeline, we are calling the code/GeneratePipelineVariables.ps1 script. It takes a JSON string output from a previous task, unwinds the properties within the JSON object, and creates variables for each property. However, these variables are not being properly exported and cannot be referenced by subsequent tasks in the pipeline.

Example
A "generate variables" task will produce output like this:

Retrieved input: {"storageAccountId":{"type":"String","value":"/subscriptions/17588eb2-2943-461a-ab3f-00a3ceac3112/resourceGroups/jpdlz-integration/providers/Microsoft.Storage/storageAccounts/jpdlzartifactsa001"},"storageAccountName":{"type":"String","value":"jpdlzartifactsa001"},"storageAccountContainerName":{"type":"String","value":"scripts"}}
Setting output 'storageAccountId'
Setting output 'storageAccountName'
Setting output 'storageAccountContainerName'

Given this result, I should be able to reference $(storageAccountName) in my pipeline. However, when the pipeline executes a task trying to use that variable, it fails; AzDO doesn't even try to expand the variable. For example, I get an error message like: Storage account: $(storageAccountName) not found.

This behavior is exhibited on a variety of task types.

WORKAROUND!
We don't need to generate pipeline variables. We can just reference members of the original variable. From the example above, $(storageAccountName) does not work... but $(storageDetails.storageAccountName.value) does work.
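If generating individual pipeline variables is still desired, the likely fix lives in GeneratePipelineVariables.ps1 itself: Azure DevOps only exposes a variable to subsequent tasks when it is emitted via the ##vso[task.setvariable] logging command, with isOutput=true if it must be referenced across tasks. A hedged sketch, with a hypothetical JSON input:

# Hypothetical sketch: export each property of the deployment output JSON
# as a pipeline variable that downstream tasks can actually resolve.
$json = '{"storageAccountName":{"type":"String","value":"jpdlzartifactsa001"}}'
$properties = ($json | ConvertFrom-Json).PSObject.Properties

foreach ($property in $properties) {
    # With isOutput=true, later tasks reference the variable as
    # $(<stepName>.<variableName>), e.g. $(generate_variables_001.storageAccountName).
    Write-Host "##vso[task.setvariable variable=$($property.Name);isOutput=true]$($property.Value.value)"
}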

Feature Request: Resource locks

Given that we would like to encourage CI/CD, it would be useful to have some protection on key resources to discourage them from being deleted by rogue pipelines.

A resource lock may be a good way to achieve this, particularly for storage accounts and key vaults.
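A minimal sketch of what this could look like with the Az PowerShell module; the lock and resource names are hypothetical:

# Hypothetical sketch: protect a storage account from accidental deletion.
New-AzResourceLock `
    -LockName "dlz-raw-storage-lock" `
    -LockLevel CanNotDelete `
    -ResourceGroupName "dlz-dev-storage" `
    -ResourceName "dlzrawsa001" `
    -ResourceType "Microsoft.Storage/storageAccounts" `
    -Force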

Enables the user to create the infrastructure locally in a simple way

Describe the solution you'd like

Today the project relies on the ADO Pipeline or the GitHub Action to deploy the infrastructure, which makes it difficult to run locally.

To remove the pipeline dependency, and to also have a clean pipeline, I suggest adding a wrapper ARM template called azure-deploy.json that calls all the other templates.

Error while trying to run the DevOps pipeline

2021-06-04T14:57:30.1977593Z ##[section]Starting: Upload File to Artifact Storage Account
2021-06-04T14:57:30.1986753Z ==============================================================================
2021-06-04T14:57:30.1987357Z Task : Azure PowerShell
2021-06-04T14:57:30.1987801Z Description : Run a PowerShell script within an Azure environment
2021-06-04T14:57:30.1988223Z Version : 4.185.0
2021-06-04T14:57:30.1988604Z Author : Microsoft Corporation
2021-06-04T14:57:30.1989056Z Help : https://aka.ms/azurepowershelltroubleshooting
2021-06-04T14:57:30.1989537Z ==============================================================================
2021-06-04T14:57:30.3922450Z ## Validating Inputs
2021-06-04T14:57:30.3979434Z ## Validating Inputs Complete
2021-06-04T14:57:30.3980245Z ## Initializing Az module
2021-06-04T14:57:30.3980851Z Generating script.
2021-06-04T14:57:30.3982366Z Formatted command: . '/home/vsts/work/1/s/code/UploadBlob.ps1' -ResourceGroupName "dlzme-dev-integration" -StorageAccountName "dlzmedevartifact001" -StorageAccountContainerName "scripts" -File "/home/vsts/work/1/s/code/installSHIRGateway.ps1" -Blob "installSHIRGateway.ps1"
2021-06-04T14:57:30.3989682Z ## Az module initialization Complete
2021-06-04T14:57:30.3990463Z ## Beginning Script Execution
2021-06-04T14:57:30.4012254Z [command]/usr/bin/pwsh -NoLogo -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -Command . '/home/vsts/work/_temp/fc0a37d5-a5c8-4cb5-b56b-28b55d398197.ps1'
2021-06-04T14:57:30.4076960Z Saved!
2021-06-04T14:57:30.8736589Z ParserError: /home/vsts/work/_temp/fc0a37d5-a5c8-4cb5-b56b-28b55d398197.ps1:4
2021-06-04T14:57:30.8738471Z Line |
2021-06-04T14:57:30.8739744Z    4 |  … /work/1/s/code/installSHIRGateway.ps1" -Blob "installSHIRGateway.ps1"
2021-06-04T14:57:30.8741015Z      |                                                                           ~
2021-06-04T14:57:30.8742423Z      | The string is missing the terminator: ".
2021-06-04T14:57:30.8943309Z ##[error]PowerShell exited with code '1'.
2021-06-04T14:57:30.8954543Z ## Script Execution Complete
2021-06-04T14:57:30.8975207Z ##[section]Finishing: Upload File to Artifact Storage Account

DL Zone deployment in UK South failed with DM Zone in Japan East

Correlation id: 842ddec7-2250-425e-91be-e81b6d3c46fe

DL Zone Region: UK South
DM Zone Region: Japan East

Network Service failure:
{
  "status": "Failed",
  "error": {
    "code": "DeploymentFailed",
    "message": "At least one resource deployment operation failed. Please list the deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.",
    "details": [
      {
        "code": "BadRequest",
        "message": "{ \"error\": { \"code\": \"RemotePeeringIsDisconnected\", \"message\": \"The peering /subscriptions/73996236-393b-4e42-b339-9d61ab0e572c/resourceGroups/dlzone-dev-network/providers/Microsoft.Network/virtualNetworks/dlzone-dev-vnet/virtualNetworkPeerings/dmzonedev-dev-vnet cannot be created or updated because the remote peering /subscriptions/73996236-393b-4e42-b339-9d61ab0e572c/resourceGroups/dmzonedev-dev-network/providers/Microsoft.Network/virtualNetworks/dmzonedev-dev-vnet/virtualNetworkPeerings/dlzone-dev-vnet, which references the parent virtual network /subscriptions/73996236-393b-4e42-b339-9d61ab0e572c/resourceGroups/dlzone-dev-network/providers/Microsoft.Network/virtualNetworks/dlzone-dev-vnet, is in the Disconnected state. Update or re-create the remote peering to return it to the Initiated state. A peering enters the Disconnected state when the remote virtual network or the remote peering is deleted and re-created.\", \"details\": [] } }"
      }
    ]
  }
}

How to reproduce:

  1. Deploy Data Management Zone successfully
  2. Deploy DL Zone in a region
  3. Delete the DL Zone in that region
  4. Create a DL Zone with the same prefix in another region

Missing VM for SHIR

To scan data sources in Azure Purview through Private Link, we currently need a VM on which to deploy and register a self-hosted integration runtime. I believe this has not yet been considered inside the ESA.

  • We need to consider VM(s) for SHIR Inside ESA.
  • Additional design considerations may be required for manageability and security across multiple data landing zones

@abdale

Add parameters to specify VNet address

Today the Virtual Network ARM template uses a fixed address space; it would be nice to allow the user to choose the one that best fits their needs.

The best approach is still not clear to me (perhaps asking for the first octets together with the dataNodeNumber); I just wanted to capture the idea.
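A hedged sketch of how this could surface to the user once the template exposes such a parameter; the parameter name vnetAddressPrefix and the values are hypothetical:

# Hypothetical sketch: pass the address space in at deployment time
# instead of hard-coding it in the template.
New-AzResourceGroupDeployment `
    -ResourceGroupName "dlz-dev-network" `
    -TemplateFile "infra/network.json" `
    -TemplateParameterObject @{ vnetAddressPrefix = "10.1.0.0/16" }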

"FeatureNotSupportedForAccount" for Storage Account

When deploying Storage Account - Raw, it fails with the error:

Error: ERROR: Deployment failed. Correlation ID: b00f71fc-b7d2-44d2-8415-6428b3c4fa9e. {
  "error": {
    "code": "FeatureNotSupportedForAccount",
    "message": "Routing Preferences is not supported for the account."
  }
}

Workaround: commented out the routing preferences settings in the template.

AzDO Deployment Pipeline Uses Tasks That Only Run on Windows

Describe the bug
The jobs defined in the .ado/workflows/dataNodeDeployment.yml AzDO Pipeline are configured to run on Ubuntu VMs. One of the jobs uses an "AzureFileCopy" task, but that task can only run on Windows VMs.

Steps to reproduce

  1. Deploy a data landing zone using AzDO Pipelines
  2. During the step named "Upload file to storage account 001", you receive the error: The current operating system is not capable of running this task. That typically means the task was written for Windows only. For example, written for Windows Desktop PowerShell.

AzureFileCopy Task Fails

Describe the bug

When deploying the data landing zone via ADO Pipelines, an "AzureFileCopy" task is used to upload a PowerShell script to blob storage for use in the SHIR deployment process. However, this task fails with the following error:

##[error]Storage account: "jpdlzartifactsa001" not found. The selected service connection 'Service Principal' supports storage accounts of Azure Resource Manager type only.
  • The storage account "jpdlzartifactsa001" exists in the same subscription to which the ADO ARM Service Connection is attached.
  • The service principal (which was also used to create the storage account) has been granted "Storage Blob Data Contributor" access to the storage account.
  • The same result is observed when using both v3 and v4 of the "AzureFileCopy" task

Microsoft support forums have several reports from both internal and external users who have experienced the same problem. The PG says "it works on my machine" and no one has identified a cause or a solution. Examples:

Suggested Solution
Forget the "AzureFileCopy" task. Use a PowerShell script like we do in the GitHub Actions pipeline:

# Upload the SHIR install script to the artifact storage account.
Write-Host "Uploading file to Storage Account 001"
$storageAccount = Get-AzStorageAccount -ResourceGroupName "${{ env.AZURE_RESOURCE_GROUP_NAME_INTEGRATION }}" -Name "${{ steps.artifact_storage_001_deployment.outputs.storageAccountName }}"
$ctx = $storageAccount.Context
Set-AzStorageBlobContent -Context $ctx -Container "${{ steps.artifact_storage_001_deployment.outputs.storageAccountContainerName }}" -File "infra/SelfHostedIntegrationRuntime/installSHIRGateway.ps1" -Blob "installSHIRGateway.ps1" -Force

This will keep the ADO pipeline more consistent with the GitHub Action, and it will allow us to avoid a troublesome black-box task type.

Feature Request: Install SHIR via Custom Data

Describe the solution you'd like
Use the custom data feature of Azure VMs instead of pointing to a storage account for installing the SHIR Gateway onto a VMSS. This will improve the security of the setup and remove a storage account that currently needs to be deployed as part of ESA. In addition, it will simplify the CI/CD pipelines of the Data Landing Zone.
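A hedged sketch of how the custom data could be wired into the VMSS profile with Az PowerShell; the names are hypothetical, and note that on Windows VMs custom data is only written to C:\AzureData\CustomData.bin and still needs a mechanism (for example a small Custom Script Extension command) to execute it:

# Hypothetical sketch: embed the SHIR install script as base64-encoded
# custom data in the VMSS OS profile instead of downloading it from storage.
$script = Get-Content -Path "code/installSHIRGateway.ps1" -Raw
$customData = [Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($script))

$vmss = New-AzVmssConfig -Location "northeurope" -SkuName "Standard_D4s_v3" -SkuCapacity 1 -UpgradePolicyMode "Manual"
# $adminPassword is assumed to be provided securely elsewhere.
$vmss = Set-AzVmssOsProfile -VirtualMachineScaleSet $vmss `
    -ComputerNamePrefix "shir" `
    -AdminUsername "vmssadmin" `
    -AdminPassword $adminPassword `
    -CustomData $customData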

VNet deployment fails when the Databricks NSG is deployed conditionally

The deployment does not take into account the parameter defined below, which conditions the deployment of the Databricks NSG and defaults to false:
"deployDatabricksNsg": { "type": "bool", "defaultValue": false }

Error received:
{
  "code": "InvalidResourceReference",
  "message": "Resource /subscriptions/2150d511-458f-43b9-8691-6819ba2e6c7b/resourceGroups/DN001-NETWORK/providers/Microsoft.Network/networkSecurityGroups/DN001-DATABRICKS-NSG referenced by resource /subscriptions/2150d511-458f-43b9-8691-6819ba2e6c7b/resourceGroups/dn001-network/providers/Microsoft.Network/virtualNetworks/dn001-vnet was not found. Please make sure that the referenced resource exists, and that both resources are in the same region.",
  "details": [
    {
      "code": "NotFound",
      "message": "Resource /subscriptions/2150d511-458f-43b9-8691-6819ba2e6c7b/resourceGroups/DN001-NETWORK/providers/Microsoft.Network/networkSecurityGroups/DN001-DATABRICKS-NSG not found"
    }
  ]
}

Workaround: deployed the Databricks NSG unconditionally, without the option to skip it.

PowerShell - Add role assignments

The second of the two commands here seems to yield an error:

PS C:\Users\x> New-AzRoleAssignment -ObjectId $spObjectId -RoleDefinitionName "Network Contributor" -Scope "/subscriptions/xyz/resourceGroups/datamgmt-network"


RoleAssignmentId   : /subscriptions/xyz/resourceGroups/datamgmt-network/providers/Microso
                     ft.Authorization/roleAssignments/xyz
Scope              : /subscriptions/xyz/resourceGroups/datamgmt-network
DisplayName        : esasp
SignInName         :
RoleDefinitionName : Network Contributor
RoleDefinitionId   : xyz
ObjectId           : xyz
ObjectType         : ServicePrincipal
CanDelegate        : False
Description        :
ConditionVersion   :
Condition          :



PS C:\Users\x> New-AzRoleAssignment -ObjectId $spObjectId -RoleDefinitionName "Network Contributor" -ResourceGroupName "datamgmt-network"
New-AzRoleAssignment : The role assignment already exists.
At line:1 char:1
+ New-AzRoleAssignment -ObjectId $spObjectId -RoleDefinitionName "Netwo ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : CloseError: (:) [New-AzRoleAssignment], CloudException
    + FullyQualifiedErrorId : Microsoft.Azure.Commands.Resources.NewAzureRoleAssignmentCommand

I'm wondering whether this second command is redundant after the first command is successfully executed?
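It appears to be: -ResourceGroupName is shorthand for the equivalent resource-group -Scope, so the second call tries to create the same assignment again and fails because it already exists. A hedged sketch of an idempotent variant:

# Only create the role assignment if it does not already exist.
$scope = "/subscriptions/xyz/resourceGroups/datamgmt-network"   # placeholder scope from the issue

$existing = Get-AzRoleAssignment -ObjectId $spObjectId `
    -RoleDefinitionName "Network Contributor" -Scope $scope

if (-not $existing) {
    New-AzRoleAssignment -ObjectId $spObjectId `
        -RoleDefinitionName "Network Contributor" -Scope $scope
}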

Duplicate Task Name in AzDO Deployment YAML

Describe the bug
In .ado/workflows/dataNodeDeployment.yml, the task name generate_pipeline_variables_001 is used twice. Azure DevOps Pipelines require each task within a job to have a unique name, so pipeline validation fails.

Steps to reproduce

  1. Start the pipeline in Azure DevOps
  2. Immediately get the error: Job Deployment: The step name generate_pipeline_variables_001 appears more than once. Step names must be unique within a job.

Template failing on privateDnsZoneGroups when deploying to Data Management Zone without Azure Firewall and Private DNS Zones

Describe the bug
Deployment of the templates fails on privateDnsZoneGroups for several resources if deployed to a Data Management Zone without Azure Firewall and Private DNS Zones.

Error message:

{
    "status": "Failed",
    "error": {
        "code": "LinkedAuthorizationFailed",
        "message": "The client 'c37c27fa-5b67-4a0d-8d50-bbad8ecf70e2' with object id 'c37c27fa-5b67-4a0d-8d50-bbad8ecf70e2' has permission to perform action 'Microsoft.Network/privateEndpoints/privateDnsZoneGroups/write' on scope '/subscriptions/3a517887-399a-4758-9d12-e055e804eb9d/resourcegroups/lexdlz01-dev-logging/providers/Microsoft.Network/privateEndpoints/lexdlz01-dev-vault003-private-endpoint/privateDnsZoneGroups/default'; however, it does not have permission to perform action 'Microsoft.Network/privateDnsZones/join/action' on the linked scope(s) '/subscriptions/48fc3def-3fff-4933-afcf-cdcc2a8da06a/resourceGroups/lexdmz-dev-global-dns/providers/Microsoft.Network/privateDnsZones/privatelink.vaultcore.azure.net' or the linked scope(s) are invalid."
    }
}

Please add an option or a suggestion on how to deploy a Data Landing Zone to a Data Management Zone that was deployed without Azure Firewall and Private DNS Zones. See the discussion at https://github.com/Azure/data-management-zone/discussions/211.

Steps to reproduce

  1. Deploy Data Management Zone without Azure Firewall and Private DNS Zones by setting parameter enableDnsAndFirewallDeployment to false in params.dev.json
  2. Deploy Data Landing Zone according to the instructions for Azure DevOps.


Document typos

By opening the project in VS Code with the streetsidesoftware.code-spell-checker extension, I was able to identify some minor typos.
