
terraform-aci-nac-aci's People

Contributors

andbyrne, bdewulfpersonal, conmurphy, danischm, dependabot[bot], guilinyan, jgomezve, juchowan, khalil12138, marehler, maxiturne, yil8cisco


terraform-aci-nac-aci's Issues

Feature Request - Multiple Match rules per context in apic.tenants.l3outs.import_route_map.contexts

The current example and usage only allow for a single match rule per context in import_route_map and export_route_map:

          import_route_map:
            description: desc
            type: global
            contexts:
              - name: CONTEXT1
                description: desc1
                action: deny
                order: 2
                match_rule: MATCH1
                set_rule: SET1

It would be nice to be able to define multiple match rules per context, similar to apic.tenants.policies.

New usage and example would be:

          import_route_map:
            description: desc
            type: global
            contexts:
              - name: CONTEXT1
                description: desc1
                action: deny
                order: 2
                match_rule: 
                  - MATCH1
                  - MATCH2
                  - MATCHn
                set_rule: SET1

Maintenance groups

Maintenance groups do not seem to work: when applying them, they show up as failed because no version is specified.

Using them also fails:
(screenshots omitted)

Here is my config:

---
apic:
  node_policies:
    update_groups:
      - name: odd
      - name: even
---
apic:
  node_policies:
    nodes:
      - id: 2101
        pod: 1
        role: leaf
        update_group: odd
      - id: 2102
        pod: 1
        role: leaf
        name: LabDrLeaf2102
        update_group: even
      - id: 2901
        pod: 1
        role: spine
        update_group: odd

Apply service graph template uncertainty.

Good day,

After you have created a service graph template in ACI, you have to right-click it and click Apply. This Apply option does not seem to be available from the module.

Is there something I am missing, or is this simply not implemented yet?

Best regards

Question - Pod Policies and auto generation.

While trying to apply pod policies it fails like this:


│ Error: The post rest request failed

│   with module.aci.module.aci_fabric_pod_profile_auto["1"].aci_rest_managed.fabricRsPodPGrp["pod-1"],
│   on .terraform/modules/aci/modules/terraform-aci-fabric-pod-profile/main.tf line 35, in resource "aci_rest_managed" "fabricRsPodPGrp":
│   35: resource "aci_rest_managed" "fabricRsPodPGrp" {

│ Code: 400 Response: [map[error:map[attributes:map[code:182 text:Validation failed: Validation failed: there is POD selector of type ALL and one of type range. Last considered for validation:
│ Dn0=uni/fabric/podprof-default/pods-default-typ-ALL, ]]]], err: %!s(<nil>). Please report this issue to the provider developers.

Here is my pod policy config:

---
apic:
  pod_policies:
    pods:
      - id: 1
        tep_pool: 10.4.96.0/19
        policy: default

If I try to add this to the fabric policies:

---
apic:
  fabric_policies:
    pod_profiles:
      - name: default
        selectors:
          - name: default
            type: all
  pod_policies:
    pods:
      - id: 1
        tep_pool: 10.4.96.0/19
        policy: default

It fails like this:

│ Error: The post rest request failed

│   with module.aci.module.aci_fabric_pod_profile_manual["pod-1"].aci_rest_managed.fabricPodS["pod-1"],
│   on .terraform/modules/aci/modules/terraform-aci-fabric-pod-profile/main.tf line 25, in resource "aci_rest_managed" "fabricPodS":
│   25: resource "aci_rest_managed" "fabricPodS" {

│ Code: 400 Response: [map[error:map[attributes:map[code:182 text:Validation failed: POD Ids overlap. Dn0=uni/fabric/podprof-pod-1, ]]]], err: %!s(<nil>). Please report this issue to the provider developers.

I have this enabled:

apic:
  auto_generate_switch_pod_profiles: true
  auto_generate_pod_profiles: true
  fabric_policies:
    pod_profile_name: "pod-\\g<id>"
    pod_profile_pod_selector_name: "pod-\\g<id>"

I am also not quite sure how the auto-generation works, so an explanation of that would be appreciated as well.
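If the intent is a single manual pod profile with an ALL selector, one possible workaround (a sketch based on the options shown above; untested, and whether this avoids the overlap error is an assumption) would be to disable pod profile auto-generation so it does not conflict with the manual profile:

```yaml
apic:
  auto_generate_pod_profiles: false   # avoid generating a per-pod profile alongside the manual one
  fabric_policies:
    pod_profiles:
      - name: default
        selectors:
          - name: default
            type: all
  pod_policies:
    pods:
      - id: 1
        tep_pool: 10.4.96.0/19
        policy: default
```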

Multiple sr_mpls_infra_l3out entries in sr_mpls_l3outs ?

Hello,

First of all, thanks for your work, it's great!

I am using four SR-MPLS L3Outs in the infra tenant: SR-MPLS_RT1, SR-MPLS_RT2, SR-MPLS_RT3 and SR-MPLS_RT4.
In a tenant configuration in the NaC YAML files, I would like to reference all four sr_mpls_infra_l3out entries in a single sr_mpls_l3outs entry, like this (shown with two):

      sr_mpls_l3outs:
        - name: SR-MPLS_DEV
          external_endpoint_groups:
              - name: ExtEPG_SR-MPLS_DEV
                subnets:
                  - prefix: 0.0.0.0/0
                contracts:
                  consumers:
                  - Contract_Permit-All_DEV
          vrf: VRF_DEV
          sr_mpls_infra_l3out: SR-MPLS_L3Out_RT1
          outbound_route_map: RM4RC_export_RT1
          inbound_route_map: RM4RC_import_RT1
          sr_mpls_infra_l3out: SR-MPLS_L3Out_RT2
          outbound_route_map: RM4RC_export_RT2
          inbound_route_map: RM4RC_import_RT2

From the online documentation I understand that sr_mpls_infra_l3out, outbound_route_map and inbound_route_map are strings and therefore cannot be repeated.

This is confirmed by Terraform's error:

terraform apply
module.aci.data.utils_yaml_merge.model: Reading...
╷
│ Error: Error reading YAML string
│
│   with module.aci.data.utils_yaml_merge.model,
│   on .terraform\modules\aci\merge.tf line 20, in data "utils_yaml_merge" "model":
│   20: data "utils_yaml_merge" "model" {
│
│ Error reading YAML string: yaml: unmarshal errors:
│   line 137: mapping key "sr_mpls_infra_l3out" already defined at line 133
│   line 138: mapping key "outbound_route_map" already defined at line 134
│   line 139: mapping key "inbound_route_map" already defined at line 135

I was able to create four sr_mpls_l3outs with one sr_mpls_infra_l3out each, but that is not how it should work, because it adds three more L3Outs per tenant.
It would be nice to be able to associate different outbound_route_map and inbound_route_map values (and even external_endpoint_groups) with each sr_mpls_infra_l3out!
Is there a way to do this configuration, or should I wait for an upcoming release with this feature? Or maybe this is a feature request?

Thank you.
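A hypothetical list-based schema, which the module does not currently support (the key name sr_mpls_infra_l3outs and its shape are assumptions for illustration), could look like this:

```yaml
      sr_mpls_l3outs:
        - name: SR-MPLS_DEV
          vrf: VRF_DEV
          sr_mpls_infra_l3outs:   # hypothetical list instead of a single string
            - name: SR-MPLS_L3Out_RT1
              outbound_route_map: RM4RC_export_RT1
              inbound_route_map: RM4RC_import_RT1
            - name: SR-MPLS_L3Out_RT2
              outbound_route_map: RM4RC_export_RT2
              inbound_route_map: RM4RC_import_RT2
```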

Resource changes where values are not defined.

Some resources prompt for changes when values are not defined. This affects:

Bridge Domain, where vmac is changed from "not-applicable" to "" when not defined:

  # module.aci.module.aci_bridge_domain["mgmt/192.168.0.0"].aci_rest_managed.fvBD will be updated in-place
  ~ resource "aci_rest_managed" "fvBD" {
      ~ content    = {
          ~ "vmac"                  = "not-applicable" -> ""
            # (17 unchanged elements hidden)
        }
        id         = "uni/tn-mgmt/BD-192.168.0.0"
        # (3 unchanged attributes hidden)
    }

In-band and out-of-band node addresses, where the IPv6 address and gateway are changed from "::" to "" when not defined:

  # module.aci.module.aci_inband_node_address["1901"].aci_rest_managed.mgmtRsInBStNode will be updated in-place
  ~ resource "aci_rest_managed" "mgmtRsInBStNode" {
      ~ content    = {
          ~ "v6Addr" = "::" -> ""
          ~ "v6Gw"   = "::" -> ""
            # (3 unchanged elements hidden)
        }
        id         = "uni/tn-mgmt/mgmtp-default/inb-inband/rsinBStNode-[topology/pod-1/node-1901]"
        # (3 unchanged attributes hidden)
    }
  # module.aci.module.aci_oob_node_address["1901"].aci_rest_managed.mgmtRsOoBStNode will be updated in-place
  ~ resource "aci_rest_managed" "mgmtRsOoBStNode" {
      ~ content    = {
          ~ "v6Addr" = "::" -> ""
          ~ "v6Gw"   = "::" -> ""
            # (3 unchanged elements hidden)
        }
        id         = "uni/tn-mgmt/mgmtp-default/oob-ooband/rsooBStNode-[topology/pod-1/node-1901]"
        # (3 unchanged attributes hidden)
    }

L3Out Interface Profiles description missing

Hi,

This is a small request for an improvement:

I noticed that the description field of interface_profiles is not available (works as designed/as documented):
(screenshot omitted)

It would be great to add it like this:

l3outs:
  [...]
  node_profiles:
    [...]
    interface_profiles:
      - name: vl10
        description: "SVI Vlan 10" # Support this
        interfaces:
          [...]

Thanks in advance

MCP key configuration not pushed to ACI


apic:
  access_policies:
    mcp:
      action: false
      admin_state: true
      frequency_sec: 5
      initial_delay: 300
      loop_detection: 5
      per_vlan: true
      key: $ECRETKEY1

│ Error: The post rest request failed
│
│   with module.nac-aci.module.aci_mcp[0].aci_rest_managed.mcpInstPol,
│   on .terraform/modules/nac-aci/modules/terraform-aci-mcp/main.tf line 1, in resource "aci_rest_managed" "mcpInstPol":
│    1: resource "aci_rest_managed" "mcpInstPol" {
│
│ Code: 400 Response: [map[error:map[attributes:map[code:182 text:Password is required for MCP Instance Policy.]]]], err: %!s(<nil>). Please report this issue to the provider developers.

The key is not taken into account when pushing this configuration, and changing the key does not result in any change when running terraform apply.

Not sure if this is expected behavior?

NaC version = 0.8.1
Terraform module = v2.13.2

Annotation for Tenant

Hello,
I could not find a way to natively add an annotation to an ACI tenant via Nexus-as-Code.
I used the aci_rest_managed resource and it worked, but it would be good to have this in the NaC module itself.

Thank you.

MGMT EPG only supports oob/inb for validation and NOT default

Within the mgmt tenant there is a default node management EPG (Out-of-Band EPG "default"); however, this value is not accepted by the validation of various fabric policies such as DNS, TACACS, etc.

This is, however, the default EPG used by the APIC controllers, and we cannot change it, resulting in mismatching node management EPGs for leafs/spines vs. the APICs, which in turn affects the contract that needs to be applied.

(screenshot omitted)

It would make sense to support the default OOB EPG available under the mgmt tenant, so that all leafs, spines and APICs use the default OOB EPG.

ERROR:

Error: Invalid value for variable

│ on .terraform/modules/nac-aci/aci_fabric_policies.tf line 160, in module "aci_dns_policy":
│ 160: mgmt_epg_type = try(each.value.mgmt_epg, local.defaults.apic.fabric_policies.dns_policies.mgmt_epg)
│ ├────────────────
│ │ var.mgmt_epg_type is "default"

│ Allowed values are inb or oob.

│ This was checked by the validation rule at .terraform/modules/nac-aci/modules/terraform-aci-dns-policy/variables.tf:16,3-13.
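A possible relaxation of the validation rule, sketched against the variables.tf quoted in the error (adding "default" as an allowed value is an assumption about the desired behavior, not the module's current code):

```hcl
variable "mgmt_epg_type" {
  description = "Management EPG type. Choices: `inb`, `oob`, `default`."
  type        = string
  default     = "inb"

  validation {
    # also accept the pre-existing default OOB EPG of the mgmt tenant
    condition     = contains(["inb", "oob", "default"], var.mgmt_epg_type)
    error_message = "Allowed values are `inb`, `oob` or `default`."
  }
}
```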

Policy location

I don't quite understand why the location of the policies differs. For example:

err_disabled_recovery resides under fabric_policies:

apic:
  fabric_policies:
    err_disabled_recovery:
      interval: 360
      mcp_loop: true
      ep_move: true
      bpdu_guard: true

and mcp resides under access_policies:

apic:
  access_policies:
    mcp:
      action: false
      admin_state: true
      key: cisco
      frequency_sec: 5
      initial_delay: 300
      loop_detection: 5
      per_vlan: false

even though both have the same path in the GUI:

Location in GUI: Fabric » Access Policies » Policies » Global » Error Disabled Recovery Policy
Location in GUI: Fabric » Access Policies » Policies » Global » MCP Instance Policy default

Autogeneration of LF/SPINE fabric/access profiles fails when suffix is added

For example, in aci_fabric_policies.tf, module "aci_fabric_leaf_switch_profile_auto" (starting at code line 368), the replace statement does not account for a suffix being added in the default values for auto-generating fabric/access leaf/spine profiles, etc.

Working: "LEAF\g<id>" -> output is LEAF1001, LEAF1002, etc.
Not working: "LEAF\g<id>_SwPro" -> output is LEAF

The expected behaviour is to account for suffixes added to the naming.
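For reference, Terraform's built-in replace() performs a literal substitution when the search string is not wrapped in "/.../" regex delimiters, which handles prefixes and suffixes alike; a minimal sketch of that behavior (not the module's actual code, and the template name is hypothetical):

```hcl
locals {
  template = "LEAF\\g<id>_SwPro"

  # literal substitution of the \g<id> placeholder; the suffix survives
  name = replace(local.template, "\\g<id>", "1001") # => "LEAF1001_SwPro"
}
```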

Bug: Port-tracking feature configuration push causes the feature to break in ACI

Setting the following feature causes it to break on ACI (tested with version 6.0(4d)) and puts the interfaces into the FabricTrack oper state (see the screenshot output after using the NaC abstraction).

Setting port-tracking

port_tracking:
  admin_state: true
  delay: 120
  min_links: 2

Using the native Terraform resource works as expected if configured directly in main.tf (as part of a test).

Optionally, include support for include_apic_ports in the abstraction layer (https://registry.terraform.io/providers/CiscoDevNet/aci/latest/docs/resources/port_tracking#include_apic_ports).

NaC version = 0.8.1
Terraform ACI module = v2.13.2

(screenshots omitted)

Need enhancement for configuring Fabric L2 MTU / Port MTU size

Hello,

When testing netascode/nac-aci/aci v0.8.0 with the YAML file below as input on an APIC v4.2 simulator, the default Fabric L2 MTU Policy / Port MTU size (bytes) is updated between 9000 and 9216 on every terraform apply with the same input file. That is because the input below is applied by two modules, terraform-aci-fabric-l2-mtu and terraform-aci-l2-mtu-policy, and both modules write the default Fabric L2 MTU Policy / Port MTU size (bytes), but with different values.

apic:
  fabric_policies:
    l2_port_mtu: 9000
    l2_mtu_policies:
      - name: default
        port_mtu_size: 9216

If module aci_l2_mtu_policy is designed for customized L2 MTU policies, how about excluding default as shown below?

Line 150 of https://github.com/netascode/terraform-aci-nac-aci/blob/eda11599284526188037bfaecb6db17beeec7eca/aci_fabric_policies.tf

from:

    for_each = { for policy in try(local.fabric_policies.l2_mtu_policies, []) : policy.name => policy if local.modules.aci_l2_mtu_policy && var.manage_fabric_policies }

to:

    for_each = { for policy in try(local.fabric_policies.l2_mtu_policies, []) : policy.name => policy if local.modules.aci_l2_mtu_policy && var.manage_fabric_policies && policy.name != "default" }


Enhancement to TF module endpoint-loop-protection

Currently only the values "port-disable" and "bd-learn-disable" are accepted; the option to deselect both, so that only a log entry is generated, is missing. Disabling the interface or disabling learning within a BD can be aggressive and have widespread effects (e.g. on a trunk interface towards a legacy network).

ep_loop_protection:
  admin_state: true
  detection_interval: 180
  detection_multiplier: 10
  action: port-disable

thanks

Alexander
