
ansible-junos-evpn-vxlan's Introduction

Ansible Junos Configuration for EVPN/VXLAN

Sample project using Ansible and Jinja2 templates to generate configurations and manage Juniper devices deployed in EVPN/VXLAN fabric mode.

In this project you'll find:

  • (1) A sample Ansible project with playbooks and variables to generate EVPN/VXLAN configuration for a multi-pod EVPN fabric in a multi-tenant environment.
  • (2) Examples of EVPN/VXLAN configuration for QFX5k, QFX10k & MX.
  • (3) Several Jinja2 templates, packaged and documented as Ansible roles, that can be reused in other Ansible projects to easily generate overlay & underlay configuration.
  • (4) A playbook to check the health of an EVPN/VXLAN fabric.

Info on EVPN/VXLAN

White Paper on EVPN/VXLAN available on Juniper.net http://www.juniper.net/assets/us/en/local/pdf/whitepapers/2000606-en.pdf

Documentation

The complete documentation is available here

Examples of configuration

All examples of configuration are available in the config directory. Here are some links to specific features:

Contributing

Please refer to the file CONTRIBUTING.md for contribution guidelines.

Requirements

ansible-junos-evpn-vxlan's People

Contributors

dgarros, ksator, mpergament


ansible-junos-evpn-vxlan's Issues

add channelized interface support

Hello
This is a corner case and not strictly related to EVPN/VXLAN, so feel free to push back on this issue:
if we want to use channelized interfaces (xe-0/0/3:1, for example on a QFX10002 when the fabric switch has only 10 Gbps ports), the interface name currently breaks the YAML structure in sample.topology.yml.
For example, this doesn't work:

 spine-02:
    port1: { name: et-0/0/13,    peer: leaf-01,      pport: port2,     type: ebgp, link: 3,   linkend: 1 }
    port2: { name: et-0/0/12,    peer: leaf-02,      pport: port2,     type: ebgp, link: 4,   linkend: 1 }
    port3: { name: xe-0/0/3:1,    peer: fabric-01,    pport: port2,     type: ebgp, link: 7,   linkend: 1 }
    port4: { name: xe-0/0/2:0,    peer: fabric-02,    pport: port2,     type: ebgp, link: 8,   linkend: 1 }

We just need to add quotes to fix the issue (making the interface name a string, so the colon doesn't break the YAML structure).
This works:

 spine-02:
    port1: { name: et-0/0/13,    peer: leaf-01,      pport: port2,     type: ebgp, link: 3,   linkend: 1 }
    port2: { name: et-0/0/12,    peer: leaf-02,      pport: port2,     type: ebgp, link: 4,   linkend: 1 }
    port3: { name: "xe-0/0/3:1",    peer: fabric-01,    pport: port2,     type: ebgp, link: 7,   linkend: 1 }
    port4: { name: "xe-0/0/2:0",    peer: fabric-02,    pport: port2,     type: ebgp, link: 8,   linkend: 1 }

But we then need to re-channelize the channelized interface, because Ansible will overwrite the running configuration ("set chassis fpc 0 pic 0 port 0 channel-speed 10g", for example).
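For anyone generating topology files programmatically, the quoting workaround can be sketched with a small helper (a hypothetical function, not part of this repo):

```python
def yaml_safe_name(interface):
    """Quote interface names that contain a colon (channelized
    interfaces such as xe-0/0/3:1), since an unquoted colon inside
    a YAML flow mapping is parsed as a key/value separator."""
    if ":" in interface:
        return '"%s"' % interface
    return interface

# Channelized names get quoted, regular names are left untouched.
print(yaml_safe_name("xe-0/0/3:1"))   # "xe-0/0/3:1"
print(yaml_safe_name("et-0/0/13"))    # et-0/0/13
```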

duplicate RD on MX L3

Hello,

this template generates a duplicate RD on MX:
https://github.com/JNPRAutomate/ansible-junos-evpn-vxlan/blob/master/roles/overlay-evpn-mx-l3/templates/main.conf.j2

so each tenant has 2 routing instances (VRF and VS) with the same RD, and the config doesn't commit on MX devices.

route-distinguisher {{ loopback_ip }}:{{tenant.id}}

I fixed it locally using a different value for each routing instance, but we should fix it in this repo:

$ more roles/overlay-evpn-mx-l3/templates/main.conf.j2 | grep disti
        route-distinguisher {{ loopback_ip }}:10{{tenant.id}};
        route-distinguisher {{ loopback_ip }}:{{tenant.id}};

many thanks
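The local fix above can be expressed as a small sketch (a hypothetical helper, assuming the same loopback_ip and tenant.id values the template uses) that derives a distinct RD per routing-instance type:

```python
def route_distinguisher(loopback_ip, tenant_id, instance_type):
    """Build a per-instance route distinguisher. Prefixing the tenant id
    for one instance type (here the VRF, mirroring the '10' prefix used
    in the local fix) keeps the VRF and virtual-switch RDs distinct."""
    prefix = "10" if instance_type == "vrf" else ""
    return "%s:%s%s" % (loopback_ip, prefix, tenant_id)

rd_vrf = route_distinguisher("100.0.0.11", 1, "vrf")  # 100.0.0.11:101
rd_vs = route_distinguisher("100.0.0.11", 1, "vs")    # 100.0.0.11:1
assert rd_vrf != rd_vs  # no duplicate RD, so the commit succeeds
```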

inter PODs L2

Inter-POD L2 between servers is not working.
Because of the as-path term in the bgp-ipclos-out policy on all the spines, there are no VXLAN tunnels between leaves in pod1 and leaves in pod2, so inter-POD L2 between servers is not working.
To fix this, I had to deactivate the term. Once it is deactivated, L2 communication between PODs is OK (because the leaves have a full mesh of VTEPs between them).
I know we need this policy for another purpose, but maybe we should rewrite it differently, otherwise inter-POD traffic is KO.

lab@spine-01> show configuration | compare rollback 1
[edit policy-options policy-statement bgp-ipclos-out]
!     inactive: term as-path { ... }

{master:0}
lab@spine-01> show configuration policy-options policy-statement bgp-ipclos-out
term loopback {
    from {
        protocol direct;
        route-filter 100.0.0.11/32 orlonger;
    }
    then {
        community add MYCOMMUNITY;
        next-hop self;
        accept;
    }
}
inactive: term as-path {
    from {
        as-path asPathLength2;
        community MYCOMMUNITY;
    }
    then reject;
}

pb.save.config.yaml variable issue

Hello
The playbook pb.save.config.yaml is still using the old variable host: "{{ junos_host }}"
we just need to change it to host: "{{ ansible_ssh_host }}"
Many thanks

MCLAG Role

Damien,
Do you think we can create an MCLAG Role, in order to automate/templatize an MCLAG+CLOS+VXLAN architecture? Or is that done somewhere else?
Thoughts?

'p2p' is undefined - pb.generate.variables

Hello,
when I run pb.generate.variables.yaml I get

TASK [generate-underlay-bgp : Generate Underlay YAML] **************************************************************************************
task path: /home/rrusso/ipfabric/roles/generate-underlay-bgp/tasks/main.yaml:3
fatal: [spine-01]: FAILED! => {"changed": false, "failed": true, "msg": "AnsibleUndefinedVariable: 'p2p' is undefined"}
fatal: [spine-02]: FAILED! => {"changed": false, "failed": true, "msg": "AnsibleUndefinedVariable: 'p2p' is undefined"}
fatal: [leaf-02]: FAILED! => {"changed": false, "failed": true, "msg": "AnsibleUndefinedVariable: 'p2p' is undefined"}
fatal: [leaf-01]: FAILED! => {"changed": false, "failed": true, "msg": "AnsibleUndefinedVariable: 'p2p' is undefined"}
fatal: [leaf-03]: FAILED! => {"changed": false, "failed": true, "msg": "AnsibleUndefinedVariable: 'p2p' is undefined"}
fatal: [leaf-04]: FAILED! => {"changed": false, "failed": true, "msg": "AnsibleUndefinedVariable: 'p2p' is undefined"}

The error disappears when I run the playbook again. The problem is that "refresh_inventory" is missing in the second play. I think it should be:

https://github.com/JNPRAutomate/ansible-junos-evpn-vxlan/blob/master/pb.generate.variables.yaml
- name: Generate variables for underlay
  hosts: [spine, leaf]
  connection: local
  gather_facts: no
  pre_tasks:  
    - include_vars: "{{ topology_file }}"
    - meta: refresh_inventory
  roles:
    - generate-underlay-bgp

In this way, group_vars/all.yml, which was created by the first play, is also taken into account in the second play. I'm using Ansible 2.3.1.0.

R

OSPF role fixes

Hi Damien,

I will start adding issues for the changes I made.
Under the OSPF role (roles/underlay-ospf/templates/main.conf.j2):

The {{ host.loopback.ip }} var is not defined:

-    router-id {{ host.loopback.ip }};
+    router-id {{ loopback_ip }};

Under the OSPF interfaces I added BFD and authentication:

 {% for neighbor in underlay.neighbors %}
-            interface {{ neighbor.interface }};
+            interface {{ neighbor.interface }} {
+                               interface-type p2p;
+                               hello-interval 3;
+                               authentication {
+                                       md5 2 key "{{ underlay_ospf_pass_hash }}";
+                               }
+                               bfd-liveness-detection {
+                                       version automatic;
+                                       minimum-interval 300;
+                                       multiplier 3;
+                                       detection-time {
+                                               threshold 1000;
+                                       }
+                               }
+                       }
+
+
 {% endfor %}

For 100G interfaces on the QFX10K you have to define the speed under chassis, so I added a speed var to the topology:

+{% for neighbor in underlay.neighbors %}
+{% if neighbor.speed is defined and neighbor.speed == "100g" %}
+chassis {
+    fpc {{ neighbor.interface.split('/')[-3].split('-')[-1] }} {
+        pic {{ neighbor.interface.split('/')[-2] }} {
+            port {{ neighbor.interface.split('/')[-1] }} {
+                speed 100g;
+            }
+        }
+    }
+}
+{% endif %}
+{% endfor %}

Thanks

Nitzan
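The fpc/pic/port lookups in the chassis snippet rely on splitting the interface name; the same extraction can be checked in plain Python (a sketch mirroring the Jinja2 filters above):

```python
def chassis_location(interface):
    """Extract (fpc, pic, port) from a Junos interface name such as
    'et-0/0/13': the segments are <type>-<fpc>/<pic>/<port>."""
    parts = interface.split("/")        # ['et-0', '0', '13']
    fpc = parts[-3].split("-")[-1]      # '0'  (drop the 'et-' prefix)
    pic = parts[-2]                     # '0'
    port = parts[-1]                    # '13'
    return fpc, pic, port

print(chassis_location("et-0/0/13"))   # ('0', '0', '13')
```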

/* This block of configuration has been generated by the role ... */

Hello,
I think we should remove these lines (which come from the templates in the roles) from the Junos configuration running on the devices:

/* This block of configuration has been generated by the role underlay-ebgp for Ansible */
/* This block of configuration has been generated by the role overlay-evpn-qfx-l2 for Ansible */
...

We can use the Ansible lineinfile module (http://docs.ansible.com/ansible/lineinfile_module.html) to remove these lines before pushing the configuration to the Junos devices, because:

  • we do not know where the block ends on the Junos devices;
  • most important: the Junos device re-orders the CLI, so under the block /* This block of configuration has been generated by the role underlay-ebgp for Ansible */ we find the interfaces configuration, including stanzas that were configured by another role, as we can see below:
lab@leaf-01> show configuration | match block
/* This block of configuration has been generated by the role underlay-ebgp for Ansible */
/* This block of configuration has been generated by the role overlay-evpn-qfx-l2 for Ansible */
lab@leaf-01> show configuration | find block
/* This block of configuration has been generated by the role underlay-ebgp for Ansible */
interfaces {
    xe-0/0/12 {
        description "to access";
        flexible-vlan-tagging;
        unit 10 {
            vlan-id 10;
        }
        unit 11 {
            vlan-id 11;
        }
    }
    xe-0/0/13 {
        description "to access";
        flexible-vlan-tagging;
        unit 10 {
            vlan-id 10;
        }
        unit 11 {
            vlan-id 11;
        }
        unit 12 {
            vlan-id 12;
        }
        unit 13 {
            vlan-id 13;
        }
    }
    et-0/0/48 {
        description " * to spine-01";
        mtu 9192;
        unit 0 {
            family inet {
                mtu 9000;
                address 172.16.0.1/31;
            }
        }
    }
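A minimal sketch of the clean-up suggested above (a hypothetical helper; the playbook itself would more likely use the lineinfile module): drop the role banner comments from the rendered configuration before pushing it:

```python
import re

# Matches the banner comments emitted by the roles, e.g.
# /* This block of configuration has been generated by the role underlay-ebgp for Ansible */
BANNER_RE = re.compile(
    r'^/\* This block of configuration has been generated by the role .* \*/$'
)

def strip_role_banners(config_text):
    """Remove role banner comments; every other line is kept verbatim."""
    kept = [line for line in config_text.splitlines()
            if not BANNER_RE.match(line.strip())]
    return "\n".join(kept)

sample = (
    "/* This block of configuration has been generated by the role underlay-ebgp for Ansible */\n"
    "interfaces {\n"
    "    et-0/0/48;\n"
    "}"
)
print(strip_role_banners(sample))
```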

"Check connectivity ANY2ANY between Leaf" failure (when inter PODs)

The tests in the task Check connectivity ANY2ANY between Leaf (galaxy/junos_ping) of the playbook pb.check.underlay.yaml fail when they cross PODs.

  • inside a POD it is OK.
  • between PODs they fail.

Ideally we should rewrite this task.

They fail for 2 reasons:

    policy-statement bgp-ipclos-out {
        term loopback {
            from {
                protocol direct;
                route-filter {{ loopback_ip }}/32 orlonger;
            }
            then {
{% if underlay.community is defined %}
                community add MYCOMMUNITY;
{% endif %}
                next-hop self;
                accept;
            }
        }
{% if underlay.community is defined %}
        term as-path {
            from {
                as-path asPathLength2;
                community MYCOMMUNITY;
            }
            then reject;
        }
{% endif %}

Provides a role to populate ZTP server

Hi

Can we add a role to generate the configuration for a ZTP server and copy each device's configuration to the ZTP repository?

It would help to quickly set up a POC with a Zero Touch Provisioning approach.

Regards,

TiTom
