
azure-network-security's Introduction

Azure Network Security


Welcome to the Azure Network Security community repository. This repo is designed to help:

  • Deploy Azure NetSec resources programmatically using scripts and templates.
  • Manage configuration of NetSec resources at scale using scripts, templates, Logic Apps (API), and Azure Policy.
  • Integrate NetSec resources with other tools, such as Azure Sentinel.
  • Discover best practices and interesting techniques.
  • Contribute back to your community with samples you create and think others would find useful.

Other Resources

What's new?

Please find the latest artifacts in this repo on our What's New Page

Support

All automations within this repository are provided as-is, without SLA or official support. However, if you have an issue, please fill out a bug report and reference the automation artifact so the community can try to solve it.

Wiki

This project has its own Wiki, which provides further information about the Azure Network Security community, how to contribute, templates to use, and other resources.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

Add in your new or updated contributions to GitHub

Note: If you are a first time contributor to this repository, Fork the repo before cloning.

Submit a brand-new contribution, or an update to an existing one, via these methods:

Pull Request

  • After you push your changes, you will need to submit the Pull Request (PR)
  • After submission, check the Pull Request for comments
  • Make changes as suggested and update your branch or explain why no change is needed. Resolve the comment when done.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

azure-network-security's People

Contributors

andrewmathuj, anthony-roman, armudiraj, arun-mudiraj, ashish-kapila, ccmartins, chboeh, cocallaw, danielswart, david-frazee, dependabot[bot], graemefoster, gumoden, igorpag, jchancellor-ms, laragoldstein13, microsoftopensource, mischaboender, mohitkusecurity, ms-sambell, reijoh, saleembseeu, shabaz-github, sushantmsft, swiftsolves-msft, thomash0815, tobystic, tremansdoerfer-microsoft, vansummeren, xstof


azure-network-security's Issues

Securing Virtual Hub with Basic SKU Azure Firewall

Describe the bug
When securing an existing virtual hub with Azure Firewall, it is possible to select the Basic SKU from the configurator and accept Basic policies in the same configurator. However, once created, the firewall is shown and billed as a Standard one.

Reproduce
Steps to reproduce the behavior:
From the Azure portal

  1. Create a Basic tier Firewall policy from Firewall Manager (e.g. DenyAll)
  2. Create a Virtual Wan and an unsecured Virtual HUB
  3. In the Virtual HUB, under the Security menu, select Azure Firewall and Firewall Manager
  4. Jump to the second tab Azure Firewall
  5. Select Basic (preview) tier and select the policy created at step 1.
  6. Jump to the Review + Confirm step and validate that the Basic tier has indeed been selected
  7. Click Create and wait...
  8. Go to the freshly provisioned firewall resource and notice that its tier is "Standard"

Expected behavior

  • In accordance with the billing documentation, it should not be possible to select the "Basic" firewall tier to secure a Virtual Hub (adapt the portal UI configurator and validation process).
    OR
  • If the configurator / validation process is correct, a Basic firewall should be provisioned instead of a Standard one, and the corresponding pricing should appear in the pricing documentation.

Screenshots
image

Environment- if applicable
Azure Portal UI "Version: 11.48.5.1 (v11.48.0.1#9717ae6517.230204-0143) Signed"


Documentation

Describe the bug
What destination table do we need to select for Diagnostic Settings if we want to use Azure Firewall Workbook: "Azure diagnostic" option or "resource specific"?

[Workbook] WAF Triage no longer works by rules

Describe the bug

As a DevOps engineer, I've always loved the Application Gateway WAF Triage Workbook; it used to work perfectly.
Recently I've noticed a strange behaviour when using the "by rules" tab.
The workbook is not able to retrieve the "Requests on selected host and url", and the third column always displays "The query returned no results."

If I use Kusto queries or the "by url" tab, I can see the requests and am therefore able to investigate false-positive issues.

Reproduce
Steps to reproduce the behavior:

  1. Open the WAF triage workbook
  2. Click on "triage by rule" tab
  3. Click on the most common one (or any other rule listed)
  4. Click on the hostname
  5. Click on any of the host paths displayed

Expected behavior

The list of requests that triggered the specific rule on that specific host and path.

Screenshots
By rule triage
By rule triage

By url triage
By url triage

Desktop (please complete the following information if applicable):

  • OS: MacOs Catalina 10.15.7
  • Browser Chrome
  • Version 103.0.5060.53 (Official Build) (x86_64)

Error when using -FetchDnsRecordsFromAzureSubscription in Cloudshell

Describe the bug
When executing .\Get-DanglingDnsRecords.ps1 -FetchDnsRecordsFromAzureSubscription in CloudShell execution fails with the error:
Import-Module:
Line |
13 | … Import-Module -name $module -Scope Local -Force
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| Assembly with same name is already loaded

Reproduce
Steps to reproduce the behavior:

  1. Open CloudShell in Azure Portal
  2. Execute the following command in CloudShell Invoke-WebRequest -Uri "https://raw.githubusercontent.com/Azure/Azure-Network-Security/master/Cross%20Product/Find%20Dangling%20DNS%20Records/Get-DanglingDnsRecords.ps1" -OutFile "Get-DanglingDnsRecords.ps1"
  3. Execute the following command in CloudShell .\Get-DanglingDnsRecords.ps1 -FetchDnsRecordsFromAzureSubscription
  4. See error:
    Import-Module:
    Line |
    13 | … Import-Module -name $module -Scope Local -Force
    | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    | Assembly with same name is already loaded

Expected behavior
A list of dangling CNAME records for the subscription listed.

Environment- if applicable
Azure portal CloudShell

Support dangling ns records

So that we can stop entire sub-domain takeovers
As a network administrator
I would like to be notified of ns records in a dns zone where I don't have a corresponding dns zone for the subdomain.

Describe the solution you'd like
I'd like a new resource type, ns-record, to be added to the list of entries detected as dangling.

Describe alternatives you have considered
Nope - for large dns deployments ns records are routinely used to simplify management.

Additional context
ns-records point to generic azure dns name-servers.
If I

  • own mydomain.com and have Azure DNS managing it,
  • create another Azure DNS zone to manage sub.mydomain.com,
  • create an NS record set against mydomain.com pointing to the name servers controlling sub.mydomain.com (a generic Azure DNS server such as "ns1-09.azure-dns.com"),
  • delete the sub-domain zone without deleting the NS record,

then someone else can now create a DNS zone for sub.mydomain.com and has a chance of getting it on the same ns1-09.azure-dns.com Azure DNS servers. When they do, they've effectively taken over an entire subdomain.

Happy to submit a pull request to detect this!
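
A minimal sketch of how such a check might look (an illustration only, assuming the Az.Dns module and a signed-in session; the repo's script structures this differently): enumerate NS record sets in the zones you own and flag any whose delegated child zone no longer exists in the subscription.

    # Sketch: flag NS record sets whose delegated child zone no longer exists in this subscription.
    # Delegations to zones hosted in other subscriptions or tenants would need extra handling.
    $zones     = Get-AzDnsZone
    $zoneNames = $zones | ForEach-Object { $_.Name.ToLower() }

    foreach ($zone in $zones) {
        $nsSets = Get-AzDnsRecordSet -Zone $zone -RecordType NS |
            Where-Object { $_.Name -ne '@' }        # skip the zone's own apex NS records

        foreach ($ns in $nsSets) {
            $childZone = "$($ns.Name).$($zone.Name)".ToLower()
            if ($zoneNames -notcontains $childZone) {
                Write-Output "Possible dangling NS delegation: $childZone (in zone $($zone.Name))"
            }
        }
    }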

"FakeData" term used

Is your feature request related to a problem? Please describe.
The queries use the term "FakeData"

image

Documentation clarity

Describe the bug
Hi, this Workbook looks awesome. I'm just confused with the Documentation.

To use the Workbook with Resource Specific logs enabled, use the deployment button below.

In Firewall context, what does this mean? What's the difference between the two Workbooks?

When deploying via ARM Template, please make sure you know what Resource ID (Log Analytics Workspace) you're wanting to use.

Why do I need to know it? At least I don't see which Template parameter should take this value. The Workbook itself then has selectors for any Workspace in any Subscription.

Many thanks for clarification. :)

CName shows missing if FQDN has mixed case.

Describe the bug
Getting incorrect results for App Services (could be others as well) with mixed case in the name / FQDN.
Azure Front Door seems fine because Azure shows the mixed case and doesn't convert it to lowercase.

Reproduce
Steps to reproduce the behavior:
Create an App Service with a mixed-case name, for example MyCustomApp01.azurewebsites.net.
The app will show the default domain as all lowercase for App Services.
Create a CNAME, all lowercase, pointing to the mixed-case MyCustomApp01.azurewebsites.net.

Expected behavior
Expecting a match between the App Service domain name and DNS. Since the case does not match, it is shown as not matching. Perhaps adjust the script to force everything to lowercase and then compare.
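
A minimal illustration of the suggested normalization (hypothetical values; the script's own variables differ):

    # Sketch: case differences make a case-sensitive comparison miss; normalizing both sides fixes it.
    $fromDns   = 'mycustomapp01.azurewebsites.net'      # CNAME target as stored in DNS
    $fromAzure = 'MyCustomApp01.azurewebsites.net'      # FQDN as reported by the App Service

    $fromDns -ceq $fromAzure                                      # False: case-sensitive compare misses
    $fromDns.Trim().ToLower() -ceq $fromAzure.Trim().ToLower()    # True: normalized values match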


The script is not fetching any DNS records in my DNS Zone

Describe the bug
When I run the script, it fetches all my Azure resources perfectly, but it is not able to find any Azure DNS zones in my subscriptions even though I have multiple DNS zones.

Reproduce
Steps to reproduce the behavior:
Run the script as provided in the examples; it does not fetch any DNS zones or records.
Expected behavior
I expect it to find all Azure DNS zones and DNS entries, compare them, and let me know if I have any dangling DNS entries lying around.

image
image
image

Custom Service Tags

Is your feature request related to a problem? Please describe.
Within an organization, we may have numerous IPs advertised through BGP, and potentially an on-prem environment. We may also leverage external services over certain public IPs through our partners. For both of these we have to set up inbound allow policies on our Network Security Groups, and if we have numerous subscriptions and regions in our organization, applying these network security groups uniformly and securely is not simple, because we have to maintain the list of IPs we have to allow all the time.

Similarly, if we want to set up UDRs on our subnets to those certain IPs, we have to remember them all the time and manually assign them to the route tables on our subnets. That is a pain and can cause inconsistency.

This can only be simplified via IaC such as Terraform, where we publish modules and release them in each subscription.

If we could have custom service tags, it would make NSGs and route tables simpler, as we could create custom NSGs whose rules dynamically update to the company's approved custom IP service tags. A company's IPs are constantly changing internally, just like Microsoft's service tags. Just as we need to securely allow Microsoft service tags as they get updated without breaking our PaaS workloads, it would be much smoother and more secure for organizations if they could publish their own service tags for their approved resources. When they make updates to their service tags, the changes get picked up more quickly by their NSGs and route tables, rather than slowly performing releases via Terraform everywhere.

Describe the solution you'd like

Custom Service Tags for NSGs and route tables that organizations themselves can supply with their approved IPs.

Describe alternatives you have considered

Terraform module registry, with Terraform version control per subscription per region per environment (prod/nonprod/dev), where we update our prod and non-prod IPs as they change, so our NSGs can block communication between each other.

This is very slow and only covers the approved IP ranges as we release; it leaves some behind until we get to them. It's also difficult for certain subnets with their own custom NSG.

For example, the same NSG can be reused on multiple subnets, and similarly for route tables. And we may have custom subnets with custom NSGs not in use by other subnets, and similarly route tables.

But updating them, as we need to update our approved IPs for whitelisting/blocking/UDRs, statically via individual IPs is not secure.

Being able to do this via custom service tags that we manage ourselves would solve so many problems.

Additional context

Here is Microsoft's own Service Tags which update.
https://www.microsoft.com/en-us/download/details.aspx?id=56519

Having this support for ourselves as well can simplify management and security on our resources.
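
For reference, Microsoft's service tag data can also be retrieved programmatically via Az.Network, which illustrates the kind of dynamic lookup this request is asking to have for customer-defined tags (the region and tag name below are just examples):

    # Pull the current service tag definitions for a region and list one tag's address prefixes.
    $tags = Get-AzNetworkServiceTag -Location westeurope
    ($tags.Values | Where-Object { $_.Name -eq 'Storage.WestEurope' }).Properties.AddressPrefixes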

Question about FQDN vs CNAME fields in Dangling DNS script

Hi everyone,

At my company we've been reading up on Azure's recommendations for how to detect dangling DNS records, and came across this repository. I'm curious if there's a specific reason why the FQDN value is being assigned to the CNAME field for a given record set. Here is one example, but it occurs in multiple places, including in the final logic for determining if a given record is dangling or not.

Thanks for your time!

For Azure Front Door services, the script marks root domains incorrectly as dangled

Describe the bug
Azurefd.net root domains are incorrectly marked as dangled

I have pasted a snippet of the Get-DanglingDnsRecords.ps1 below where it is supposed to be checking Azure Front Door based subdomains and root domains. First it looks at subdomains by matching against the "." prefix and increments the $count variable when it finds them. If it does it adds to matching resources, if not it adds to missing resources.

If the Azure Front Door domain is a root domain, or in other words does not have a "." prefix, $count is not incremented and it goes straight to the missing resources. It will not successfully perform a check to see if the domain exists in the Azure resource hash.

Reproduce
In this hypothetical scenario we have an Azure front door service on test.azurefd.net, with subdomain subdomain.test.azurefd.net.

If you had a CSV CNameDNSMap.csv with the following contents such as:

CName Fqdn
first.test.com test.azurefd.net
second.test.com subdomain.test.azurefd.net

Then executed:
.\Get-DanglingDnsRecords.ps1 -InputFileDnsRecords .\CNameDNSMap.csv

The result would be first.test.com will be detected as dangling, second.test.com will not. The expected result is both are not detected as dangling.

Solution
This could be fixed (for example) by removing the "." prefix match so it will check both subdomain and root domains:
$count = (($AzResourcesHash.GetEnumerator() | Where { $item.FQDN -match $_.key }) | Measure-Object).Count

<--- original code snippet --->

        #Azurefd can have subdomains also which we cannot mark as dangled
        If ($item.FQDN) {
            $key = $item.Fqdn.trim(" ").tolower()

            #Azurefd can have subdomains also which we cannot mark as dangled
            if($item.FQDN -match "azurefd.net")
            {
               $count = (($AzResourcesHash.GetEnumerator() | Where { $item.FQDN -match  "."+$_.key}) | Measure-Object).Count
               if($count -gt 0)
               {
                 [void]$AzCNameMatchingResources.add($item)
               }
               else
               {
                  [void]$AzCNameMissingResources.add($Item)
               }

            }
            else 

<--- original code snippet --->
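
Put together, the adjusted check suggested above would look roughly like this (a sketch based on the proposed one-line change, not a tested patch):

        # Sketch: match both root domains and subdomains against the Azure resource hash.
        if ($item.FQDN -match "azurefd.net")
        {
            $count = (($AzResourcesHash.GetEnumerator() | Where { $item.FQDN -match $_.key }) | Measure-Object).Count
            if ($count -gt 0)
            {
                [void]$AzCNameMatchingResources.Add($item)
            }
            else
            {
                [void]$AzCNameMissingResources.Add($Item)
            }
        }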

the msg_s column does not exist in table AzureDiagnostics, many queries for the workbook need to be updated

The msg_s column does not exist in the AzureDiagnostics table for the Azure Firewall Workbook; many of the workbook's queries need to be updated.

sample one:

let materializedData =
materialize(
AzureDiagnostics
| where Category == "AzureFirewallApplicationRule"
| where Resource in~ (split("{Resource:label}", ", "))
| project msg_s, Resource, TimeGenerated);
union
(
materializedData
| where msg_s has "Web Category:" and msg_s has ". Url"
| parse msg_s with Protocol " request from " SourceIP ":" SourcePort " to " FQDN ":" DestinationPort ". Url:" Url ". Action: " Action ". Rule Collection:" RuleCollection ". Rule:" Rule ". Web Category:" WebCategory
),
(
materializedData
| where msg_s !has "Web Category:" and msg_s has ". Url" and msg_s has ". No rule matched"
| parse msg_s with Protocol " request from " SourceIP ":" SourcePort " to " FQDN ":" DestinationPort ". Url:" Url ". Action: " Action ". No rule matched" *
),
(
materializedData
| where msg_s !has "Web Category:" and msg_s !has ". Url" and msg_s has ". No rule matched"
| parse msg_s with Protocol " request from " SourceIP ":" SourcePort " to " FQDN ":" DestinationPort ". Action: " Action ". No rule matched" *
),
(
materializedData
| where msg_s has "Web Category:" and msg_s !has ". Url"
| parse msg_s with Protocol " request from " SourceIP ":" SourcePort " to " FQDN ":" DestinationPort ". Action: " Action ". Rule Collection:" RuleCollection ". Rule:" Rule ". Web Category:" WebCategory
),
(
materializedData
| where msg_s !has "Web Category:" and msg_s !has ". Url" and msg_s !has "Rule Collection" and msg_s !has " Reason: "
| parse msg_s with Protocol " request from " SourceIP ":" SourcePort " to " FQDN ":" DestinationPort ". Action: " Action ". " RuleCollection ". " Rule
),
(
materializedData
| where msg_s !has "Web Category:" and msg_s !has ". Url" and msg_s !has "Rule Collection" and msg_s !has "TLS extension was missing"
| where msg_s has " Reason:"
| parse msg_s with Protocol " request from " SourceIP ":" SourcePort ". Action: " Action ". Reason: " Rule "."
),
(
materializedData
| where msg_s !has "Web Category:" and msg_s !has ". Url" and msg_s !has "TLS extension was missing" and msg_s !has "No rule matched"
| parse msg_s with Protocol " request from " SourceIP ":" SourcePort " to " FQDN ":" DestinationPort ". Action: " Action ". Rule Collection: " RuleCollection ". Rule: " Rule
),
(
materializedData
| where msg_s !has "Web Category:" and msg_s !has ". Url" and msg_s !has "Rule Collection" and msg_s !has " Reason: "
| where msg_s has "Rule Collection Group"
| parse msg_s with Protocol " request from " SourceIP ":" SourcePort " to " FQDN ":" DestinationPort ". Action: " Action ". Policy:" Policy ". Rule Collection Group:" RuleCollectionGroup ". Rule Collection: " RuleCollection ". Rule: " Rule
)
| where RuleCollection matches regex ".*"
| summarize Count = count(), last_log = datetime_diff("second", now(), max(TimeGenerated)) by RuleCollection, Rule, WebCategory

DNS - Find Dangling DNS Records Error on run "Assembly with same name is already loaded"

Describe the bug
The "DNS - Find Dangling DNS Records" script fails on first time run to scan subscriptions and produce a list of dangling CNAMEs. Script will run successfully in locally from my own machine.

Reproduce
Right after installing, importing, and then running the module as follows:
Get-DanglingDnsRecords -FetchDnsRecordsFromAzureSubscription
I see some progress and it seems to finish fetching subscriptions then begin processing subscriptions & DnsZones/DnsRecordSets
Before the command errors out with the following error message:
Import-Module: Line | 13 | … Import-Module -name $module -Scope Local -Force | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | Assembly with same name is already loaded

Expected behavior
Expected script to finish & produce report

Screenshots
image

Environment- if applicable

  • Using the Cloud Shell within Azure
  • New Cloud Shell being created/tested
  • Installed latest version of module as of today
    image

Desktop (please complete the following information if applicable):
N/A - Cloud Shell

Incorrectly flagging root of domain as dangling

Describe the bug
This tool reports the root (or apex) of the domain as "dangling" when in fact it has been mapped. This appears to be due to the tool ONLY recognising & querying CNAME DNS records. However, the root (or apex) of a domain can NOT have a CNAME associated with it. (A CNAME record is not allowed to coexist with any other data. If a CNAME for the apex is created, DNS resolution will break for the domain. See RFC 1912 section-2.4)

To achieve an Azure mapping for the apex behaviour, an ANAME record must be created in the DNS. This ANAME record can be mapped to the Azure resource, or more usually, mapped to the CNAME that is in turn mapped to an Azure resource.

For example, we may wish to map both the "www.contoso.com" subdomain and the root domain, "contoso.com", to a CDN resource. To accomplish this, we can create a CNAME record for "www.contoso.com" that maps to the CDN endpoint "contosowebsite.azureedge.net". However, to map "contoso.com" to the same endpoint, we can NOT create a CNAME. To map the apex record "contoso.com", we create an ANAME record in the DNS that points to "www.contoso.com" or to "contosowebsite.azureedge.net". This has the effect that DNS lookups for both "www.contoso.com" and "contoso.com" resolve to "contosowebsite.azureedge.net". Thus "contoso.com" is NOT a dangling domain!!

Reproduce
Steps to reproduce the behavior:

  1. Create a CDN custom domain of "contoso.com" to an endpoint
  2. Verify the domain by using the CDNVERIFY method (we can't use auto verification as we can't create a CNAME)
  3. Create an ANAME record in the DNS pointing "contoso.com" to the resource endpoint
  4. Browse to the endpoint using "contoso.com" and note that it is correctly mapped.
  5. Run Get-DanglingDnsRecords.ps1
  6. Note that "contoso.com" is INCORRECTLY flagged as a dangling domain!!

Expected behaviour
The apex (root) of a domain should be checked properly to see if it is in fact dangling.
i.e. check to see if an ANAME exists that is mapped to a valid Azure resource (either explicitly or via mapping to a CNAME that in turn is correctly mapped)
The apex of a domain should NOT be reported as dangling if it in fact resolves to a valid Azure resource.

Screenshots
N/A

Environment- if applicable
N/A

Desktop (please complete the following information if applicable):
N/A

Logs- if applicable
N/A

Additional context
N/A

getting error #need help#

After deploying this workbook, I am getting the below error:
"project-reorder: Failed to resolve attribute as a column entity: policyScopeName_s..."

Linked Resource of LogAnalytics Workspace is not respected if you have access to multiple subscriptions

Describe the bug
If you have multiple subscriptions and deploy the template, the workbook is connected to your default subscription even though you specify the analytics workspace correctly. However, the link is created correctly in the workbook as a Linked Resource, and even the JSON view points to the correct resource.

Reproduce
Steps to reproduce the behavior:
Deploy the Workbook to a different subscription and specify an analytics workspace with the correct value for the template

Expected behavior
The workbook is connected to the correct analytics workspace.


Environment- if applicable
no CLI used. Deployed directly from GIT repository

Desktop (please complete the following information if applicable):

  • OS: Microsoft Windows
  • Browser Edge
  • Version


pre-requisite for syn-flood script

Prerequisites for Flood script:
https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20DDoS%20Protection/Breaking%20Point%20SDK

Please confirm the library still exists, or whether it has been refactored to another name.
pip3 install bpcddos returns an error:
ERROR: Could not find a version that satisfies the requirement bpcddos (from versions: none)

pip3 install bpcddos.zip
ERROR: Could not install packages due to an EnvironmentError: [Errno 2]

Other pre-reqs work

Version:Python 3.7
OS: Windows 10
pip version 20.0.2

Cannot run scripts as-directed because they are not signed

Describe the bug

  1. The documentation page at https://github.com/Azure/Azure-Network-Security/tree/master/Cross%20Product/Find%20Dangling%20DNS%20Records has instructions to download/clone the repo and then run the script in PowerShell.
  2. By default, PowerShell on Windows will not run downloaded scripts unless they are signed. So I get the below error:

PS C:\Users\Stuff\Find Dangling DNS Records> ./Get-DanglingDnsRecords.ps1
./Get-DanglingDnsRecords.ps1 : File C:\Users\Stuff\Find Dangling DNS Records\Get-DanglingDnsRecords.ps1 cannot be loaded.
The file C:\Users\Stuff\Find Dangling DNS Records\Get-DanglingDnsRecords.ps1 is not digitally signed. You cannot run this script on the current system.

Reproduce

(See above)

Expected behavior

I expected this script, from Microsoft no less, to be signed and ready to execute without needing to jump through any hoops to appease the PowerShell security gods.
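
In the meantime, two common workarounds exist; these are standard PowerShell behaviours rather than anything specific to this repo, so check them against your organisation's policy:

    # Option 1: remove the "downloaded from the internet" mark from the cloned files
    # (run from the folder containing the downloaded scripts).
    Get-ChildItem -Recurse -File | Unblock-File

    # Option 2: relax the execution policy for the current process only (reverts when the session ends).
    Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope Process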


Environment- if applicable

PS C:\Users\Stuff\Find Dangling DNS Records> Get-Host


Name             : ConsoleHost
Version          : 5.1.19041.546
InstanceId       : f53aa37d-e823-4130-a656-8b07844a1d45
UI               : System.Management.Automation.Internal.Host.InternalHostUserInterface
CurrentCulture   : en-GB
CurrentUICulture : en-US
PrivateData      : Microsoft.PowerShell.ConsoleHost+ConsoleColorProxy
DebuggerEnabled  : True
IsRunspacePushed : False
Runspace         : System.Management.Automation.Runspaces.LocalRunspace

Desktop (please complete the following information if applicable):

Windows 10 Pro for Workstations, version 2004.

Logs- if applicable

None

Additional context

None

Add Hostname and Request URI as filter in Azure WAF dashboard

Is your feature request related to a problem? Please describe.
Currently, if you have multiple endpoints hosted on Application Gateway / Front Door, the data is way too much for the workbook dashboard and it is not easy to filter.

Describe the solution you'd like
Create filters for Hostname as well as URI so that it is easier for the teams using the dashboard to check data related to their endpoints and URIs.
Workbook filters

Describe alternatives you have considered
Create these filters in the dashboard ourselves, but then it wouldn't be possible to keep in sync with the original dashboard created here.

Give a rough idea as to the costs of deploying this

Currently deploying this for a class, using my personal Azure account. I have concerns as to the cost that is going to be picked up from it. It would be nice to know, for example, that this is going to cost me roughly $10 a day (or whatever it is) as well as the frighteningly shocking cost of the DDOS service option...

(Specific to the "Network Security Lab" template is where my concern lies)

WAF Triage Workbook error

Describe the bug
Error received when connecting to Log Analytics Workspace. There is no policyScopeName_s table.

Reproduce
Steps to reproduce the behavior:
I deployed the Application Gateway WAF triage workbook and chose my Subscription and Log Analytics Workspace.
I then got an error in the middle section (Select the scope of the WAF policy): "project-reorder: Failed to resolve attribute as a column entity: policyScopeName_s..."

Expected behavior
Expected the workbook to show some data

Screenshots
AGW-WAFTriage-Error

Environment- if applicable
Used the link to Deploy to Azure

Desktop (please complete the following information if applicable):

  • OS: Windows 10
  • Browser: Chrome and Edge get same results

Additional context
There is no policyScopeName_s column in Log Analytics. We are using only v2 App Gateways and are checking all boxes under Diagnostic Settings to export.
This is a Log Analytics Workspace for most of our Azure resources, so it has diagnostic logs for both a few FD WAFs and a few AG WAFs. I have also set up the Azure Monitor Workbook for WAF with no issues, pointing to the same Log Analytics Workspace.

CNAMEs pointing to Azure Frontdoor endpoints in Standard Pricing tier report as dangling.

Describe the bug
Dangling DNS Finder is not able to recognize Azure Front Door endpoints that are created against Front Door and CDN profiles in the Azure Front Door Standard pricing tier.

Reproduce

  1. Create a new Standard Front Door and CDN profile, specifying:
     • Azure Front Door offering
     • Quick Create option
  2. On the Create a Front Door profile page, populate the required fields (Azure Subscription / Resource Group / Resource Group Location / Name) to suit your environment, with:
     Profile Details:
     • Tier: Standard
     Endpoint Settings:
     • Endpoint name: www-example-com
     • Origin Type: Custom
     • Origin host name: 1.1.1.1
     • Caching: disabled
     • WAF policy: blank
  3. Create a new CNAME record in your DNS provider with these values:
     • Name: www
     • Zone: example.com
     • Value: www-example-com-00000000000001.z01.azurefd.net
     • TTL: 3600
  4. Create a new json file on your computer called records.json
  5. Populate the records.json file with content containing your public DNS records as follows, then save the file:
[
  {
    "CNAME": "www",
    "FQDN": "www-example-com-00000000000001.z01.azurefd.net",
    "ZoneName": "example.com",
    "ResourceGroup": null,
    "resourceProvider": null
  }
]
  6. Invoke the Dangling DNS finder tool:
Get-DanglingDnsRecords -InputFileDnsRecords ./records.json

Expected behavior
As we have created an endpoint (named www-example-com) against an Azure Standard Front Door within our Azure subscription, we expect the tool not to identify any dangling DNS records for Azure Front Door.

AzureResourceProviderName AzureResourceCount AzureCNameMatchingResources AzureCNameMissingResources                     
------------------------- ------------------ --------------------------- --------------------------                     
Azure API Management                       0                           0                          0                     
Azure Container Instance                   0                           0                          0                     
Azure CDN                                  0                           0                          0                     
Azure Front Door                           0                           1                          0                     
Azure App Service                        111                           1                          0                     
Azure Blob Storage                       111                           0                          0                     
Azure Public IP addresses                  1                           0                          0                     
Azure Classic Cloud                        1                           0                          0                     
Azure Traffic Manager                      0                           0                          0        

Screenshots
NA

Environment- if applicable

Python (Darwin) 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]

Desktop (please complete the following information if applicable):

  • OS: MacOS

Logs- if applicable

Additional context

Application rule log statistics

Hi,

I have deployed "Workbook - Azure Firewall Monitor Workbook". The Application rule log statistics page is not showing any log statistics.

I can see the log statistics for Network rule log statistics but not for Application rule log statistics.

image

Geo location blocking

Lack of geo-blocking. A simple way to handle outbound or inbound traffic filtering is to block different geos at the firewall. This is something that Azure Web Application Firewall supports, but it is not a feature available on Azure Firewall.

Dangling DNS Finder fails with latest version of Az

Describe the bug
The Azure Dangling DNS Finder does not work with Az.ResourceGraph version 0.10.0

Reproduce
Steps to reproduce the behavior:

  1. Issue Install-Module -Name Az and ensure you've pulled the latest version (5.9.0)
  2. Issue Get-Command "Search-AzGraph" and ensure you have version 0.10.0
CommandType     Name                                               Version    Source
-----------     ----                                               -------    ------
Cmdlet          Search-AzGraph                                 0.10.0    Az.ResourceGraph
  3. Run Get-DanglingDnsRecords
  4. See error
InvalidOperation: You cannot call a method on a null-valued expression. 
At C:\Octopus\Tentacle\Work\20210504200347-1591704-260\scripts\dns-management\Get-DanglingDnsRecords.ps1:338 char:13 
+             $key = $psitem.$keyName.trim(" ").tolower() 

Expected behavior
The tool works and is able to retrieve Az-Resources as usual.


Environment- if applicable

  • What version of CLI was used [Az –version] 5.9.0

Desktop (please complete the following information if applicable):

  • OS: Windows 10
  • Browser Firefox

Additional context
It looks like there is a change in Search-AzGraph (version 0.10.0) where the data returned is now encapsulated in a data property.
For example:
instead of

$key = $psitem.$keyName.trim(" ").tolower() 

It would need to be:

$key = $psitem.data.$keyName.trim(" ").tolower() 

My work-around was to:

Option 1. Ensure a previous version of Az.ResourceGraph is used.

Option 2. Update the Dangling dns finder to account for this change.

https://github.com/Azure/Azure-Network-Security/blob/master/Cross%20Product/DNS%20-%20Find%20Dangling%20DNS%20Records/AzDanglingDomain/Export/Get-DanglingDnsRecords.ps1#L308

to look like this:
return ($(Search-AzGraph -Query $query @params)).data
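
A version-tolerant variant of the same idea (a sketch only, assuming the behaviour described above; it unwraps the data property only when it is present):

    # Sketch: handle both old and new Az.ResourceGraph result shapes.
    $result = Search-AzGraph -Query $query @params
    if ($null -ne $result -and $result.PSObject.Properties['data']) {
        $result = $result.data
    }
    return $result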

Validation Error

When clicking button deploy to Azure, once you fill in the template to run it, displays
Error
{"code":"InvalidTemplateDeployment","details":[{"code":"InvalidParameter","target":"imageReference","message":"The following list of images referenced from the deployment template are not found: Publisher: MicrosoftWindowsDesktop, Offer: Windows-10, Sku: rs5-pro, Version: latest. Please refer to https://docs.microsoft.com/en-us/azure/virtual-machines/windows/cli-ps-findimage for instructions on finding available images."}],"message":"The template deployment 'Microsoft.Template-20211019090730' is not valid according to the validation procedure. The tracking id is '1662c5d4-0078-44ca-bab0-3693a7e58d7e'. See inner errors for details."}

Reproduce
Steps to reproduce the behavior:

  1. Go to 'https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20WAF/Lab%20Template%20-%20WAF%20Attack%20Testing%20Lab'
  2. Click on 'Deploy to Azure' blue button
  3. Fill in the form
  4. Click create
  5. See error

Expected behavior
ARM deployment to test WAF Security Components

Screenshots
{
"code": "InvalidTemplateDeployment",
"details": [
{
"code": "InvalidParameter",
"target": "imageReference",
"message": "The following list of images referenced from the deployment template are not found: Publisher: MicrosoftWindowsDesktop, Offer: Windows-10, Sku: rs5-pro, Version: latest. Please refer to https://docs.microsoft.com/en-us/azure/virtual-machines/windows/cli-ps-findimage for instructions on finding available images."
}
],
"message": "The template deployment 'Microsoft.Template-20211019090730' is not valid according to the validation procedure. The tracking id is '1662c5d4-0078-44ca-bab0-3693a7e58d7e'. See inner errors for details."
}

Environment- if applicable

Desktop (please complete the following information if applicable):

  • OS: [Win 10]
  • Browser [Microsoft Edge Version 94.0.992.50 (Official build) (64-bit) ]

Logs- if applicable
Validate Deployment
Tue Oct 19 2021 09:07:34 GMT-0500 (Central Daylight Time)
Summary
Operation name
Validate Deployment
Time stamp
Tue Oct 19 2021 09:07:34 GMT-0500 (Central Daylight Time)
Event initiated by
[email protected]
Error code
InvalidTemplateDeployment
Message
The template deployment 'Microsoft.Template-20211019090730' is not valid according to the validation procedure. The tracking id is '9458c7df-b14e-496b-9849-762156cfc082'. See inner errors for details.

JSON
{
"authorization": {
"action": "Microsoft.Resources/deployments/validate/action",
"scope": "/subscriptions/f9f75a63-4045-41ee-8ad1-589a763fe201/resourceGroups/WAFV2-DEMO/providers/Microsoft.Resources/deployments/Microsoft.Template-20211019090730"
},
"caller": "[email protected]",
"channels": "Operation",
"claims": {
"aud": "https://management.core.windows.net/",
"iss": "https://sts.windows.net/77cba36f-f434-4ff6-ad61-99214f0d6305/",
"iat": "1634651911",
"nbf": "1634651911",
"exp": "1634655811",
"http://schemas.microsoft.com/claims/authnclassreference": "1",
"aio": "ATQAy/8TAAAAFi9O98C/IEmMhRPJ2rrXG0KO6h2KVBsuhVsDYpv7Gzmnqs3l2JXwlWwAjayuvjqy",
"http://schemas.microsoft.com/claims/authnmethodsreferences": "pwd",
"appid": "c44b4083-3bb0-49c1-b47d-974e53cbdf3c",
"appidacr": "2",
"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname": "Admin",
"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname": "Global",
"groups": "1ff665cb-ac55-4155-b638-c580ed23e75c",
"ipaddr": "201.191.218.18",
"name": "GlobalAdmin",
"http://schemas.microsoft.com/identity/claims/objectidentifier": "a838ef36-2bc1-498a-a538-061214816b45",
"puid": "10032000EE078721",
"pwd_exp": "20477",
"pwd_url": "https://portal.microsoftonline.com/ChangePassword.aspx",
"rh": "0.AXYAb6PLdzT09k-tYZkhTw1jBYNAS8SwO8FJtH2XTlPL3zx2ADI.",
"http://schemas.microsoft.com/identity/claims/scope": "user_impersonation",
"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier": "j4PYxXSHn7uCcWu3OqJnwtQO4k3HDrpsB7DOPtQN3fA",
"http://schemas.microsoft.com/identity/claims/tenantid": "77cba36f-f434-4ff6-ad61-99214f0d6305",
"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name": "[email protected]",
"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn": "[email protected]",
"uti": "cnnL8Mc5pU2XnzTYtMIUAQ",
"ver": "1.0",
"wids": "62e90394-69f5-4237-9190-012177145e10",
"xms_tcdt": "1602710383"
},
"correlationId": "9458c7df-b14e-496b-9849-762156cfc082",
"description": "",
"eventDataId": "cfe57627-c650-4b15-b645-fdcfde15e3ab",
"eventName": {
"value": "EndRequest",
"localizedValue": "End request"
},
"category": {
"value": "Administrative",
"localizedValue": "Administrative"
},
"eventTimestamp": "2021-10-19T14:07:34.8482674Z",
"id": "/subscriptions/f9f75a63-4045-41ee-8ad1-589a763fe201/resourceGroups/WAFV2-DEMO/providers/Microsoft.Resources/deployments/Microsoft.Template-20211019090730/events/cfe57627-c650-4b15-b645-fdcfde15e3ab/ticks/637702492548482674",
"level": "Error",
"operationId": "9458c7df-b14e-496b-9849-762156cfc082",
"operationName": {
"value": "Microsoft.Resources/deployments/validate/action",
"localizedValue": "Validate Deployment"
},
"resourceGroupName": "WAFV2-DEMO",
"resourceProviderName": {
"value": "Microsoft.Resources",
"localizedValue": "Microsoft Resources"
},
"resourceType": {
"value": "Microsoft.Resources/deployments",
"localizedValue": "Microsoft.Resources/deployments"
},
"resourceId": "/subscriptions/f9f75a63-4045-41ee-8ad1-589a763fe201/resourceGroups/WAFV2-DEMO/providers/Microsoft.Resources/deployments/Microsoft.Template-20211019090730",
"status": {
"value": "Failed",
"localizedValue": "Failed"
},
"subStatus": {
"value": "BadRequest",
"localizedValue": "Bad Request (HTTP Status Code: 400)"
},
"submissionTimestamp": "2021-10-19T14:09:02.1509723Z",
"subscriptionId": "f9f75a63-4045-41ee-8ad1-589a763fe201",
"tenantId": "77cba36f-f434-4ff6-ad61-99214f0d6305",
"properties": {
"statusCode": "BadRequest",
"serviceRequestId": null,
"statusMessage": "{"error":{"code":"InvalidTemplateDeployment","message":"The template deployment 'Microsoft.Template-20211019090730' is not valid according to the validation procedure. The tracking id is '9458c7df-b14e-496b-9849-762156cfc082'. See inner errors for details.","details":[{"code":"InvalidParameter","target":"imageReference","message":"The following list of images referenced from the deployment template are not found: Publisher: MicrosoftWindowsDesktop, Offer: Windows-10, Sku: rs5-pro, Version: latest. Please refer to https://docs.microsoft.com/en-us/azure/virtual-machines/windows/cli-ps-findimage for instructions on finding available images."}]}}",
"eventCategory": "Administrative",
"entity": "/subscriptions/f9f75a63-4045-41ee-8ad1-589a763fe201/resourceGroups/WAFV2-DEMO/providers/Microsoft.Resources/deployments/Microsoft.Template-20211019090730",
"message": "Microsoft.Resources/deployments/validate/action",
"hierarchy": "f9f75a63-4045-41ee-8ad1-589a763fe201"
},
Additional context
Then, it does not deploy

Inaccurate IP GeoData Lookup

Describe the bug
The external data source that the Azure Firewall Workbook relies upon to determine the location of IP addresses is very out of date.

https://raw.githubusercontent.com/datasets/geoip2-ipv4/master/data/geoip2-ipv4.csv

Reproduce
Steps to reproduce the behavior:

  1. Go to: https://raw.githubusercontent.com/datasets/geoip2-ipv4/master/data/geoip2-ipv4.csv
  2. Search for "85.239.32.0/19" (actual client ip is: 85.239.52.9)
  3. See that it is in Russia
  4. Go to: https://www.iplocation.net/ip-lookup
  5. Search for "85.239.52.9"
  6. See that it is reported as being in Atlanta, Georgia

Expected behavior
IP Address geo-location lookup is accurate

Screenshots
N/A

Environment- if applicable
N/A

Desktop (please complete the following information if applicable):
N/A

Logs- if applicable
N/A

Additional context
The last commit to the public dataset was 6 years ago.

This policy deletes custom DNS Servers on VNETs when enabling DDoS settings

Describe the bug
We deployed this policy to an environment and found that the DNS Servers configuration was removed as a result of the deployment. DDoS settings were enabled, but when we tried to modify the policy to keep the DNS Server configuration in place, the policy would not deploy.
Reproduce
Steps to reproduce the behavior:

  1. Go to Assign the policy and remediate
  2. Check that VNETs no longer have custom DNS Servers Defined
  3. See error

Expected behavior
VNET Deployment should happen without any property changes happening on the resource


Failed to create a 'Settings' from the text 'Assembly references and imported namespaces serialized as XML namespaces'.

Describe the bug
D:\powershell\dns\Get-DanglingDnsRecords.ps1 : Failed to create a 'Settings' from the text 'Assembly references and imported namespaces serialized as XML namespaces'.
At line:1 char:1
+ ./Get-DanglingDnsRecords.ps1 -FetchDnsRecordsFromAzureSubscription - ...
    + CategoryInfo          : OperationStopped: (:) [Get-DanglingDnsRecords.ps1], XamlObjectWriterException
    + FullyQualifiedErrorId : System.Xaml.XamlObjectWriterException,Get-DanglingDnsRecords.ps1

Reproduce
Steps to reproduce the behavior:
Try to run the command
Get-DanglingDnsRecords.ps1 -FetchDnsRecordsFromAzureSubscription
Expected behavior

Add ruleId_s to query output

Is your feature request related to a problem? Please describe.
In order to tune the WAF (i.e. disable offending rules) the actual rule ID is needed. While this can still be found using the text of the message, it takes extra steps.

Describe the solution you'd like
Please return ruleId_s as part of the LAW queries.

Describe alternatives you have considered
I thought about adding this myself to the deployed version of the workbook I have, except those changes would be lost the next time we updated to the latest official version.

Question: automate Azure Firewall rules configuration

Is there a way to handle firewall rules as a parameter from a parameter file? My scenario is that I would like users to request firewall openings and, upon approval, have a script modify a parameter file where it can either add or delete rules. I figured this would be an easier task than having a script modify the whole firewall resource template. Thoughts?

Kind regards
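
One way to sketch this (illustrative only; the file format, names, and the classic New-AzFirewallNetworkRule / Set-AzFirewall flow are assumptions, and a Firewall Policy or an IaC pipeline may be the better fit) is to keep the requested openings in a small JSON file and have a script translate it into a rule collection:

    # rules.json (hypothetical format):
    # [ { "name": "allow-dns", "protocol": "UDP", "source": "10.0.0.0/24",
    #     "destination": "8.8.8.8", "port": "53" } ]

    $rules = Get-Content ./rules.json | ConvertFrom-Json | ForEach-Object {
        New-AzFirewallNetworkRule -Name $_.name -Protocol $_.protocol `
            -SourceAddress $_.source -DestinationAddress $_.destination -DestinationPort $_.port
    }

    $collection = New-AzFirewallNetworkRuleCollection -Name 'from-parameter-file' -Priority 200 `
        -Rule $rules -ActionType 'Allow'

    $azFw = Get-AzFirewall -Name 'my-firewall' -ResourceGroupName 'my-rg'   # hypothetical names
    $azFw.AddNetworkRuleCollection($collection)                             # add the rebuilt collection
    Set-AzFirewall -AzureFirewall $azFw                                     # push the change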

[System.Management.Automation.PSCustomObject] does not contain a method named 'toarray'.

While running ".\Get-DanglingDnsRecordsPsDesktop.ps1 -FetchDnsRecordsFromAzureSubscription" this happens:

Get-DanglingDnsRecordsPsDesktop.ps1 : Method invocation failed because [System.Management.Automation.PSCustomObject] does not contain a method named 'toarray'.

Link is not working (404)

Describe the bug
the link to the DDoS custom policy json is not working (404) (https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20DDoS%20Protection/Azure%20Policy%20Definitions/Restrict%20creation%20of%20Azure%20DDoS%20Protection%20Standard%20Plans%20with%20Azure%20Policy)

This link is not working:
(https://github.com/Azure/Azure-Network-Security/blob/master/Azure%20DDoS%20Protection/Policy%20-%20DDOS%20Restrict%20creation%20of%20Azure%20DDoS%20Protection%20Standard%20Plans%20with%20Azure%20Policy/AzurePolicyRuleDenyDDoSPlan.json)


Resource Specific logs enabled Firewall workbook still using AzureDiagnostics table

Describe the bug
When deploying the Azure Firewall Workbook with Resource Specific logs enabled, the very first metrics in the Azure Firewall Overview tab are using the AzureDiagnostics table which is not populated when using diagnostics settings with Resource Specific tables.

Reproduce
Steps to reproduce the behavior:

  1. Enable Diagnostic settings 'allLogs' and 'allMetrics' on the Firewall with 'Resource specific' as Destination table
  2. Install the Azure Firewall_ResourceSpecific_ARM.json Workbook from this repo
  3. Go to the newly created workbook and select the configured Firewall
  4. Click on the Azure Firewall Overview tab

Expected behavior
All metrics should use Resource Specific Tables when deploying the Resource Specific Azure Firewall Workbook.

Screenshots
Screenshot 2023-04-21 at 5 20 45 PM

Environment- if applicable
used Azure Firewall_ResourceSpecific_Gallery.json

Desktop (please complete the following information if applicable):
N/A

Logs- if applicable
N/A

Additional context
If using both types of diagnostic settings (AzureDiagnostics and Resource Specific), pointing to two different Log Analytics workspaces, I can get the workbook to work correctly by reading both workspaces. But that means storing the same kind of events twice, in different formats.
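
As a side note, a quick way to confirm which destination-table mode a firewall's diagnostic setting uses is to inspect its LogAnalyticsDestinationType (a sketch assuming Az.Monitor; "Dedicated" corresponds to resource-specific tables, and $firewallResourceId is a placeholder):

    # Show each diagnostic setting's name and whether it writes to dedicated (resource-specific) tables.
    Get-AzDiagnosticSetting -ResourceId $firewallResourceId |
        Select-Object Name, LogAnalyticsDestinationType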

Azure Firewall Workbook - Parsing bug

Describe the bug
The Azure Firewall workbook's parsing of unstructured logs isn't functioning for Application and Network rule queries, leading to the dashboard providing incomplete information.

Reproduce
Steps to reproduce the behavior:

  1. Load the Azure Firewall workbook.
  2. Click on the Application tab.
  3. Scroll down to the panel "All IP addresses events"
  4. Look at the Action column

Expected behavior
The log data should be parsed correctly.

Screenshots
MicrosoftTeams-image (8)

Sentinel Playbook - Block IP: Multiple WAF Policies Not Respected in PUT

Describe the bug
The Logic App is assembling all rules from all WAF policies into a single WAF policy. Any differences between WAF policies are combined together. Different policies may have conflicting priority metrics for different custom rules, even if combining the custom rules from all WAF policies into a single policy were the desired behavior.

Reproduce
Steps to reproduce the behavior:

  1. Create multiple WAF policies with differing block/allow rules.
  2. Trigger incident in Sentinel with IP as an entity.
  3. Run playbook from incident.
  4. Inspect PUT request at the end of the Logic App.

Expected behavior
Each WAF policy custom rule set should exist only in the policy in which it belongs.

Additional context
Works fine in environments with a single WAF policy.

Necessary "Role Assignments" for the LogicApp are not added

Describe the bug
Issue behavior: When the Sentinel Playbook (Logic App) is executed, the execution fails at the point where it tries to read the WAF policy attached to the application gateway. Following is an excerpt from the error message
{
"error": {
"code": "AuthorizationFailed",
"message": "The client '{logicapp-systemassignedIdentity's-PrincipalId}' with object id '{logicapp-systemassignedIdentity's-PrincipalId} does not have authorization to perform action 'microsoft.network/applicationgateways/read' over scope '/subscriptions/{SubscriptionId}/resourcegroups/rg-netsecninja/providers/microsoft.network/applicationgateways/soc-ns-ag-wafv2' or the scope is invalid. If access was recently granted, please refresh your credentials."
}
}
Similar issues would be observed when the following actions are performed by the playbook

  • Update the WAF policy with the identified suspicious IP address
  • Read the WAF policy associated with the Front Door Instance
  • Update the WAF policy associated with the FD

This behavior is observed when the playbook gets executed (based on the configured analytics rule).
Reference to the tech community article:
https://techcommunity.microsoft.com/t5/azure-network-security-blog/integrating-azure-web-application-firewall-with-azure-sentinel/ba-p/1720306
The relative path of the playbook:
Azure%20WAF/Playbook%20-%20WAF%20Sentinel%20Playbook%20Block%20IP

Reproduce
Steps to reproduce the behavior:

  1. Deploy the resources needed for the WAF testing lab (Azure WAF/Lab Template - WAF Attack Testing Lab/AzNetSecdeploy_Juice-Shop_AZFW-Rules_Updated.json)
  2. Perform the lab exercise steps 1 through 4 Part1
  3. Create a Sentinel resource atop the Log Analytics workspace that was created before step#1 was performed
  4. Deploy the playbook from this path (Azure WAF/Playbook - WAF Sentinel Playbook Block IP/template.json)
  5. Configure the analytics rule based on the documentation provided in this article
  6. The playbook should be triggered when the analytics rule condition is met.
    The app would error out in the step where it tries to read the WAF policy on the gateway/frontdoor

Expected behavior
The playbook should get executed successfully

Screenshots
Error information has been provided in the Description section

Environment- if applicable
NA


Additional context
A Pull Request with the necessary fixes has been created and is pending approval (as of 08/18) - #173

DNSTimeBrush not getting set. Azure Firewall Webhook / DNS Proxy logs not shown

I deployed the same ARM template in my lower environment in November, and it could collect all the metrics, including "DNS proxy logs". Somehow the template is now throwing "This query could not run because some parameters are not set. Please set: DNSTimeBrush". Please advise; I have enabled DNS logs in the firewall's Diagnostic settings.

Error

I was able to execute this successfully a few months back but am facing the below issue now; I would really appreciate any support.
I have a CSV file containing CNAME and FQDN columns. Using Cloud Shell and getting the error below:
Search-AzGraph: /home/ameya/.local/share/powershell/Modules/AzDanglingDomain/0.0.5/Export/Get-DanglingDnsRecords.ps1:144
..esources = (Search-AzGraph -Query $( -join ($query, ' | count'))).Cou...
Value cannot be null. (Parameter 'source')

Dependencies

Hi,
In order for this workbook to function correctly, do the Azure Firewall and Log Analytics workspace have to be in the same subscription?

I have my firewall in a different subscription and resource group to that of the log analytics workspace.
If I deploy the workbook to the same subscription as the Firewall, I can select the firewall but can't select log analytics workspace.
If I deploy the workbook to the same subscription as the workspace, I can select the workspace but not the Firewall.

Please advise.

Thanks

In Get-DanglingDnsRecords.ps1

When running with only the -InputFileDnsRecords parameter and a CSV file, I get this error:

Search-AzGraph : No subscriptions were found to run query. Please try to add them implicitly as param to your request (e.g. Search-AzGraph -Query '' -Subscription
'11111111-1111-1111-1111-111111111111')

The file has the correct columns, CNAME and FQDN.
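
As the error text hints, Search-AzGraph runs against the subscriptions available in the current Az context, so it is worth confirming one is selected before invoking the script (general Az behaviour, not a confirmed fix for this particular case):

    # Confirm a subscription is selected in the current session before running the script.
    Get-AzContext
    Set-AzContext -Subscription '<subscription id>'    # placeholder value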

DNS entries ending with a dot (Dangling DNS Records)

Most of our DNS entries end with a dot, so they don't find a match in the dangling DNS records check.

I have changed the following and then they match; maybe there is a better way to do this.

In Get-DanglingDnsRecords.ps1, the first line below is line 502:

            $key = $item.Fqdn.trim(" ").tolower()
            if($key.EndsWith('.')) {
                $key = $key -replace ".$"
            } 
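
An equivalent, slightly terser normalization (just an alternative sketch, not part of the script):

    # TrimEnd strips any trailing dot(s) from the FQDN in one call.
    $key = $item.Fqdn.Trim(" ").TrimEnd('.').ToLower()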
