
Cloud Native Data Center Networking

Code repository for the O'Reilly book 'Cloud Native Data Center Networking'. You can get the book either via Safari or an online bookseller such as Amazon.

Book Cover

All my code has been tested on an Ubuntu laptop running either 18.04 or 19.04. If you experience an issue, please file a ticket and I'll do what I can to help, though no promises. If you send me a fix via a pull request, I'll be grateful and will incorporate it as quickly as I can.

Software Used

Software Version
Vagrant 2.2.5
vagrant-libvirt
Virtualbox 6.0.6
Ansible 2.8.4
FRRouting 7.2 & version in Cumulus 3.7.x

The vagrant-libvirt link contains instructions for installing libvirt, QEMU, and KVM on various Linux distributions. I use libvirt because it spins up VMs in parallel, making the entire setup a breeze on most modern laptops. For example, on my Lenovo Yoga 920 with an i7-8550U processor and 16GB RAM running Ubuntu 19.04, I can spin up any of the different simulations (using Cumulus rather than Arista) in less than two minutes and still have a functioning laptop, i.e., I can keep browsing, editing code, etc. Virtualbox is more universally supported, such as on Windows and Macs, but is much slower. Remember to use the Vagrant-vbox file to spin up the simulation with Virtualbox.

Even though I tested with Virtualbox 6.0.6, the Virtualbox images for Cumulus VX were built with Virtualbox Guest Additions for 5.1.8. That did not pose a problem in my testing, and I don't use shared folders.

Vagrant Boxes Used

Vagrant uses VM images called boxes for spinning up the VMs. The Vagrantfiles reference boxes that should be downloaded automatically when you run vagrant up. If that doesn't happen, you'll need to download the Vagrant box manually. Some Vagrant boxes, such as Arista's, need to be downloaded from the vendor's website. You can spin up a libvirt image of Arista's VM using the instructions at this link.

The Vagrant boxes used in the simulation include:

Vagrant Box Version
CumulusCommunity/cumulus-vx > 3.6, < 4.0
generic/ubuntu1604 latest
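If the automatic download fails, the boxes in the table above can be added by hand. A sketch for the libvirt provider; the Cumulus VX version shown is only an example within the supported range:

```shell
# Add the boxes manually for the libvirt provider; the Cumulus VX
# version here is an example -- pick any release in the >3.6, <4.0 range.
vagrant box add CumulusCommunity/cumulus-vx --box-version 3.7.11 --provider libvirt
vagrant box add generic/ubuntu1604 --provider libvirt

# Confirm the boxes are now available locally
vagrant box list
```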

I use Ubuntu 16.04 because the playbooks haven't been migrated to Netplan, the network-interface configuration method used in releases starting with Ubuntu 18.04. I also use these specific Ubuntu boxes because they support libvirt images. In many cases, you can convert a Vagrant virtualbox image into a libvirt image via the Vagrant plugin, vagrant-mutate. The docker-ce Ubuntu box removes the need to install Docker. But you can use any other Ubuntu 16.04 image that supports libvirt, if you wish. If you choose a different Ubuntu image than generic/ubuntu1604, remember to change the name at the top of your Vagrantfile.
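For boxes published only in virtualbox format, the vagrant-mutate conversion mentioned above looks roughly like this (the box name here is a placeholder, not one of the boxes this repository uses):

```shell
# Install the mutate plugin once
vagrant plugin install vagrant-mutate

# Fetch the virtualbox-format box, then convert it for libvirt
# ("example/box" is a hypothetical box name)
vagrant box add example/box --provider virtualbox
vagrant mutate example/box libvirt
```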

Repository Organization

There are three main scenarios (each with its own set of subscenarios) that warrant a full and separate simulation. The three scenarios are:

  1. Deploying OSPF in a 2-tier Clos topology. This is described in chapter 13 of the book. This has the following additional subscenarios:

    1. Traditional Numbered OSPF
    2. Unnumbered OSPF
    3. OSPF running on the host with containers
  2. Deploying BGP in a 2-tier Clos topology. This is described in chapter 15 of the book, and has the following additional subscenarios:

    1. Traditional Numbered BGP
    2. Unnumbered BGP
    3. BGP running on the host with containers
  3. Deploying EVPN with VXLAN. This is described in chapter 17 of the book. It too has additional subscenarios which are:

    1. Centralized Routing with eBGP
    2. Distributed Routing with eBGP
    3. OSPF + iBGP with Distributed Routing

Each of these scenarios also has validation playbooks as described in chapter 18. Those validation playbooks have been embedded inside each of the appropriate scenarios.

The topologies used in this repository differ from the ones used in the deployment chapters of the book. They've been both simplified and expanded: simplified by reducing the number of servers, to enable the simulation to run on a 16GB RAM laptop, and expanded by using generic single-attach and dual-attach topologies for all scenarios described in the book. But I've stayed true to the IP addresses and ASNs used in the book. Only in the case of EVPN have the servers under each pair of switches been put in different VLANs, to demonstrate multiple VNIs. I've also stayed true to the interface names used in the book. Thus, the configuration files should look mostly alike.

Singly attached servers are common in larger networks, while dual-attached servers are common in enterprise and smaller networks.

The dual-attached server topology used across all the scenarios looks like this:

Dual-Attach Topology

The singly-attached server topology used across all the scenarios looks like this:

Single-Attach Topology

In the singly-attached servers topology, the peerlinks between the leaves exist, but are not used and CLAG is not configured.

Starting/Stopping the Topology

vagrant doesn't take a filename as an option, so depending on the hypervisor you choose - KVM/libvirt or Virtualbox - you must copy the appropriate Vagrantfile to Vagrantfile. Thus, if you're using KVM, copy Vagrantfile-kvm to Vagrantfile.

Once you have a file called Vagrantfile, you start the topology with vagrant up. If you want to spin up only a subset of the nodes, you can do so by specifying the nodes you want. For example, to spin up the network with just the leaves and spines, without the servers and edges, run vagrant up spine01 spine02 leaf01 leaf02 leaf03 leaf04. The playbooks should all run with the reduced topology too, but you'll see errors (only once) for the nodes that cannot be found.

To destroy the topology, you run vagrant destroy -f. To destroy a subset of nodes, you specify the node names to destroy. For example, run vagrant destroy -f leaf03 leaf04 to destroy only the nodes leaf03 and leaf04.
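Putting the commands above together, a typical KVM/libvirt session looks like this (vagrant status is just a convenience check):

```shell
# Pick the Vagrantfile for your hypervisor (KVM/libvirt here)
cp Vagrantfile-kvm Vagrantfile

# Bring up the full topology, or only a subset of nodes
vagrant up
vagrant up spine01 spine02 leaf01 leaf02 leaf03 leaf04

# Check node state
vagrant status

# Tear down everything, or only specific nodes
vagrant destroy -f
vagrant destroy -f leaf03 leaf04
```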

Running the Playbooks

We use Ansible to run the playbooks. After starting the topology, go to the appropriate playbook directory: ospf, bgp, or evpn. In that directory, you can deploy the configuration via the command ansible-playbook -b deploy.yml. You can switch between subscenarios by running the reset.yml playbook within each scenario before running the deploy.yml playbook. However, for a reason I haven't figured out yet, reset followed by deploy doesn't work for the distributed anycast gateway scenario with EVPN. For that scenario alone, you need a fresh spin-up. I'm troubleshooting why this might be so.

The three subscenarios supported by ospf and bgp are named numbered, unnumbered, and docker. The first configures the switches to run the numbered version of the protocol; the second, the unnumbered version. The third installs Docker and FRR on the servers and is used to test routing on the host.

The -b option informs Ansible to execute the commands as root.

By default, the unnumbered version is run. To run any of the non-default versions, you must pass the name via an extra option when invoking ansible-playbook. For example, to run the numbered version, you'd run the playbook as: ansible-playbook -b -e 'scenario=numbered' deploy.yml.
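To summarize the deploy workflow for the ospf and bgp scenarios described above:

```shell
cd ospf   # or bgp; evpn has its own scenario names

# Default: the unnumbered subscenario
ansible-playbook -b deploy.yml

# Select a non-default subscenario explicitly
ansible-playbook -b -e 'scenario=numbered' deploy.yml
ansible-playbook -b -e 'scenario=docker' deploy.yml

# Reset before switching between subscenarios
ansible-playbook -b reset.yml
```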

The docker scenario takes longer to finish as it requires the installation of both docker and FRR on each server.

The playbooks for each of the scenarios use Ansible as a dumb file copier. The code samples under the Ansible directory provide options for the dual-attach topology and a single scenario with different levels of sophistication.

A common way to test whether the configuration is working correctly is to test reachability between servers and from the servers to the internet. The internet-facing router is provided with an address, 172.16.253.1, that the servers can ping to check that the path is correctly plumbed all the way to the edge of the data center. Pinging from one server to another to verify reachability is also useful. Run the playbook ping.yml via ansible-playbook ping.yml, and it should validate that all appropriate entities, such as the gateway, neighbor nodes, etc., are pingable from a given node.
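A sketch of those validation steps (the server node name below is a hypothetical example; use a name from your Vagrantfile):

```shell
# Playbook-based validation
ansible-playbook ping.yml

# Manual spot check from a server toward the internet-facing router
# ("server01" is a hypothetical node name)
vagrant ssh server01 -c 'ping -c 3 172.16.253.1'
```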

As this book is vendor-neutral - the demonstrations and samples use specific vendors only owing to my familiarity with them and the availability of Vagrant boxes - I've not followed the playbook-writing methodology dictated by any vendor. I've tried to use the most easily understandable, open source code as much as possible. Specifically, in the case of Cumulus, all FRR configuration is viewable in /etc/frr/frr.conf. All bridging (L2) and VXLAN configuration is under /etc/network/interfaces, because FRR does not support any L2 or VXLAN configuration as of version 7.2. You can access FRR's shell via the vtysh command; vtysh provides network operators unfamiliar with Linux with a shell that's more familiar. Cumulus-familiar network operators can also use the NCLU commands available under the net family of commands.
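A few ways to inspect FRR state on a Cumulus node, per the above (which show command applies depends on the protocol the scenario runs):

```shell
# From a switch's shell: query FRR non-interactively via vtysh
sudo vtysh -c 'show ip route'
sudo vtysh -c 'show ip ospf neighbor'   # OSPF scenarios
sudo vtysh -c 'show bgp summary'        # BGP/EVPN scenarios

# Or use the Cumulus NCLU equivalent
net show route
```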

When to Ignore Errors in Running the Playbook

When you run the reset.yml playbook, the reload networking task on the switches fails with a fatal UNREACHABLE error. Ignore this; it is caused by switching the eth0 interface from the mgmt VRF to the default VRF.

What Scenarios Are Working and Tested

The status as of Dec 21, 2019 is as follows:

The following scenarios and subscenarios are working for both topologies:

  • OSPF with all subscenarios. The exit, firewall, and edge nodes are not configured in any of the OSPF subscenarios.

  • BGP and EVPN with all subscenarios. All nodes, including the exit, edge, and firewall nodes, are configured and working.

The remaining OSPF configuration is being worked on and should be up before the end of the year. Validation playbooks, playbooks illustrating better use of Ansible etc. are also not ready and will be added after adding the OSPF configuration support.

cloud-native-data-center-networking's People

Contributors

ddutt


cloud-native-data-center-networking's Issues

How can an unnumbered OSPF interface get the MAC address of its peer's routed interface?

Hi Dinesh,

This is actually an OSPF puzzle I'm trying to figure out rather than an issue. Figure 13-4 on page 274 shows the unnumbered OSPF configuration for a two-tier Clos topology. The routed interfaces at the two ends of a link are in different networks; for example, the IP addresses of the two ends of the link between spine01 and leaf01 are 10.0.0.21/32 and 10.0.0.11/32. So in theory, spine01 can't get the MAC of leaf01 via ARP, because ARP requires that the subnet of the IP in the request be the same as the sender's. Similarly, leaf01 can't get the MAC of spine01 either.

So here is my question: how exactly is the IP-MAC mapping done in unnumbered OSPF?

Thanks,
Huabing

Untitled

chapter 5: BGP’s behavior in a Clos network

  1. I guess the "SSo" is a typo in this sentence: "As an example, traffic from SS2 cannot reach SSo as indicated by the crossed out link." It should be "As an example, traffic from SS2 cannot reach SSp as indicated by the crossed out link."

  2. "To illustrate this, in Figure 5-7, S11 updates the other connected spines about its lost connectivity to S11." should be "L11 updates the other connected spines about its lost connectivity to S11."

Chapter 15: FRR and RFC 5549

  1. The RIB process now adds a static ARP entry for 169.254.0.1 with this MAC address, with the peering interface as the outgoing interface.

I don't understand the bold part since I think there is no outgoing interface for an ARP entry.

chapter 5: BGP’s behavior in a Clos network

  1. L11 withdraws reachability to the loopback IP address of S11 because reachability to the other leaves is still possible via the other leaves.

I think "L11 withdraws reachability to the loopback IP address of S11" just because the link between L11 and S11 went down.

  1. But because the spines could never talk to each other because of how ASNs were configured, L11’s update to the other spines is dropped by them.

L11 is a leaf router, so this statement doesn't make sense. Should "L11" be "S11" here?

Unclear playbook messages

Hi!
I'm very excited to experiment with this setup, but unfortunately I ran into an Ansible error at the very beginning. The vagrant up command works well, and even the dummy.yml playbook runs well on all hosts. I wanted to work with a subset of nodes (leaves+spines+servers; no edges, internet, or firewall). Then ansible-playbook -b deploy.yml in the evpn folder gives me the output below:

 [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'


PLAY [localhost] **********************************************************************************************************************************************

TASK [Check that the scenario is a valid one] *****************************************************************************************************************
ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
}

PLAY [all] ****************************************************************************************************************************************************
skipping: no hosts matched
 [WARNING]: Could not match supplied host pattern, ignoring: network


PLAY [network] ************************************************************************************************************************************************
skipping: no hosts matched
 [WARNING]: Could not match supplied host pattern, ignoring: servers


PLAY [servers] ************************************************************************************************************************************************
skipping: no hosts matched

After that, if I ssh into leaf01, for example, there is no working FRR daemon, and vtysh fails to open. I get the following logs:

vagrant@leaf01:~$ sudo cat /var/log/frr/frr.log 
2019-12-25T21:29:45.050478+00:00 cumulus frr[1979]: Loading capability module if not yet done.
2019-12-25T21:29:45.081096+00:00 cumulus frr[1979]: Starting Frr daemons (prio:10):.
2019-12-25T21:29:45.457553+00:00 cumulus frr[1979]: Exiting: failed to connect to any daemons.
2019-12-25T21:29:45.464037+00:00 cumulus frr[1979]: Exiting from the script
2019-12-25T21:29:45.521550+00:00 cumulus frr[2001]: Stopping Frr monitor daemon: (watchfrr).
2019-12-25T21:29:45.585789+00:00 cumulus frr[2001]: Stopping Frr daemons (prio:0): (zebra) (bgpd) (ripd) (ripngd) (ospfd) (ospf6d) (isisd) (babeld) (pimd) (ldpd) (nhrpd) (eigrpd) (sharpd) (pbrd) (vrrpd).
2019-12-25T21:29:45.608464+00:00 cumulus frr[2001]: Stopping other frr daemons..
2019-12-25T21:29:45.613493+00:00 cumulus frr[2001]: Removing remaining .vty files.
2019-12-25T21:29:45.613992+00:00 cumulus frr[2001]: Removing all routes made by FRR.
2019-12-25T21:29:45.665313+00:00 cumulus frr[2001]: Exiting from the script

I'm not sure if I did something wrong or there is some misconfiguration in my setup. I use Ubuntu 19.10, with Vagrant and KVM/libvirt installed from the Ubuntu repos.

centralized dual evpn has mismatching mtu

when I run suzieq interface assert I find

(suzieq) jpiet@a1:/tmp/pycharm_project_304/suzieq$ python3 suzieq/cli/suzieq-cli  interface assert --namespace=dual-evpn
    namespace  hostname ifname peerHostname peerIfname               timestamp     mtu  peerMtu assert
0   dual-evpn    edge01   eth1       exit01       swp5 2020-04-13 16:55:53.600  1500.0     1500   pass
1   dual-evpn    edge01   eth2       exit02       swp5 2020-04-13 16:55:53.600  1500.0     1500   pass
2   dual-evpn    exit01   swp1      spine01       swp6 2020-04-13 16:55:53.600  9216.0     9216   pass
3   dual-evpn    exit01   swp2      spine02       swp6 2020-04-13 16:55:53.600  9216.0     9216   pass
4   dual-evpn    exit01   swp5       edge01       eth1 2020-04-13 16:55:53.600  1500.0     1500   pass
5   dual-evpn    exit01   swp6     internet       swp1 2020-04-13 16:55:53.600  1500.0     1500   pass
6   dual-evpn    exit02   swp1      spine01       swp5 2020-04-13 16:55:53.600  9216.0     9216   pass
7   dual-evpn    exit02   swp2      spine02       swp5 2020-04-13 16:55:53.600  9216.0     9216   pass
8   dual-evpn    exit02   swp5       edge01       eth2 2020-04-13 16:55:53.600  1500.0     1500   pass
9   dual-evpn    exit02   swp6     internet       swp2 2020-04-13 16:55:53.600  1500.0     1500   pass
10  dual-evpn  internet   swp1       exit01       swp6 2020-04-13 16:55:53.600  1500.0     1500   pass
11  dual-evpn  internet   swp2       exit02       swp6 2020-04-13 16:55:53.600  1500.0     1500   pass
12  dual-evpn    leaf01   swp1      spine01       swp1 2020-04-13 16:55:53.600  1500.0     9216   fail
13  dual-evpn    leaf01   swp2      spine02       swp1 2020-04-13 16:55:53.600  1500.0     9216   fail
14  dual-evpn    leaf01   swp3       leaf02       swp3 2020-04-13 16:55:53.600  1500.0     1500   pass
15  dual-evpn    leaf01   swp4       leaf02       swp4 2020-04-13 16:55:53.600  1500.0     1500   pass
16  dual-evpn    leaf02   swp1      spine01       swp2 2020-04-13 16:55:53.600  1500.0     9216   fail
17  dual-evpn    leaf02   swp2      spine02       swp2 2020-04-13 16:55:53.600  1500.0     9216   fail
18  dual-evpn    leaf02   swp3       leaf01       swp3 2020-04-13 16:55:53.600  1500.0     1500   pass
19  dual-evpn    leaf02   swp4       leaf01       swp4 2020-04-13 16:55:53.600  1500.0     1500   pass
20  dual-evpn    leaf03   swp1      spine01       swp3 2020-04-13 16:55:53.600  1500.0     9216   fail
21  dual-evpn    leaf03   swp2      spine02       swp3 2020-04-13 16:55:53.600  1500.0     9216   fail
22  dual-evpn    leaf03   swp3       leaf04       swp3 2020-04-13 16:55:53.600  1500.0     1500   pass
23  dual-evpn    leaf03   swp4       leaf04       swp4 2020-04-13 16:55:53.600  1500.0     1500   pass
24  dual-evpn    leaf04   swp1      spine01       swp4 2020-04-13 16:55:53.600  1500.0     9216   fail
25  dual-evpn    leaf04   swp2      spine02       swp4 2020-04-13 16:55:53.600  1500.0     9216   fail
26  dual-evpn    leaf04   swp3       leaf03       swp3 2020-04-13 16:55:53.600  1500.0     1500   pass
27  dual-evpn    leaf04   swp4       leaf03       swp4 2020-04-13 16:55:53.600  1500.0     1500   pass
28  dual-evpn   spine01   swp1       leaf01       swp1 2020-04-13 16:55:53.600  9216.0     1500   fail
29  dual-evpn   spine01   swp2       leaf02       swp1 2020-04-13 16:55:53.600  9216.0     1500   fail
30  dual-evpn   spine01   swp3       leaf03       swp1 2020-04-13 16:55:53.600  9216.0     1500   fail
31  dual-evpn   spine01   swp4       leaf04       swp1 2020-04-13 16:55:53.600  9216.0     1500   fail
32  dual-evpn   spine01   swp5       exit02       swp1 2020-04-13 16:55:53.600  9216.0     9216   pass
33  dual-evpn   spine01   swp6       exit01       swp1 2020-04-13 16:55:53.600  9216.0     9216   pass
34  dual-evpn   spine02   swp1       leaf01       swp2 2020-04-13 16:55:53.600  9216.0     1500   fail
35  dual-evpn   spine02   swp2       leaf02       swp2 2020-04-13 16:55:53.600  9216.0     1500   fail
36  dual-evpn   spine02   swp3       leaf03       swp2 2020-04-13 16:55:53.600  9216.0     1500   fail
37  dual-evpn   spine02   swp4       leaf04       swp2 2020-04-13 16:55:53.600  9216.0     1500   fail
38  dual-evpn   spine02   swp5       exit02       swp2 2020-04-13 16:55:53.600  9216.0     9216   pass
39  dual-evpn   spine02   swp6       exit01       swp2 2020-04-13 16:55:53.600  9216.0     9216   pass
Assert failed

Need help with some confusion while translating this book

Hi @ddutt ,

This is Huabing Zhao from China. Your awesome work, "Cloud Native Data Center Networking," is being introduced to the Chinese technical community by China Electric Power Press, and I'm working on the translation with a small team right now.

While we're translating this book, we want to make the translation perfectly correct in all the technical details, to convey the meaning of the original version. So far, we've encountered some confusion in the first few chapters, so I think maybe I could turn to you for help.

Since I can't find another way to contact you, I'm creating this issue to try to reach out to you. If it's possible, I can send you detailed questions by email.

Thanks for your time.

Huabing Zhao

chapter 5: Link-state protocol’s behavior in a Clos network

Every backbone area router recomputes its routes. The spine routers remove S11 from their next-hop list to reach L11 and its prefixes.

Should it be "Every backbone area router recomputes its routes. The super spine routers remove S11 from their next-hop list to reach L11 and its prefixes. "?

chapter 17 Deploying Network Virtualization

Page 368 : "A set of VNIs is mapped to a VRF and the VRF is associated with a unique VNI. "

I would like to make sure I understand this sentence correctly. Do you mean: "Multiple VNIs can be mapped to a VRF, and the id of each VNI is unique?"

chapter 9: Services

The border leaves learn the routes via BGP (or OSPF or IS-IS), which puts routes learned via the internal network in the green VRF and routes learned via the internet router (typically just the default route) in the gray VRF.

Should it be "black" VRF?

Chapter 5:How Routing Table Lookups Work

Each next hop in the list contains the outgoing interface, and optionally the next-hop router's IP address. When the next hop is another router, the next-hop entry contains the IP address of the next-hop router.

These two sentences seem contradictory. The former says IP is optional, the latter says that IP is contained in the entry.

chapter 5: Whom do I talk to?

  1. In FRR, you can configure the interface over which you want BGP to peer with the neighbor.

What's the implication of " which you want BGP to peer with the neighbor"? Could you please elaborate on that?

  1. They use one of the infinite knobs available to allow the leaves to see one another’s prefixes. This is needlessly complex.

What does the metaphor "infinite knobs" refer to?

chapter 9: Hybrid Cloud Connectivity

Furthermore, you can not use multicast or broadcast inside a VPC. The VPC also doesn’t run a routing protocol.

I don't understand how the servers in different subnets of the VPC communicate with each other without any routing protocol in the VPC. Is that because there is only one router in the VPC, so no need to exchange routes?

chapter 5: Link-state protocol’s behavior in a Clos network

All of the leaves also compute the area summary routes to announce to the other backbone area routers. S11, which saw the change, sends the new updates to its neighbors in the backbone area, the super-spines. This update propagates through the backbone area until all routers in the backbone area have received it. This includes leaves in other pods. We’re assuming that the spines are not summarizing in this case (see the “Route Summarization in Clos Networks” on page 105 for details on summarization).

  1. All of the leaves also compute the area summary routes to announce to the other backbone area routers.
    Since leaves are not in backbone area, and spines are ABR, I think it should be"All of the spines also compute the area summary routes to announce to the other backbone area routers(The super spines). "

  2. "This includes leaves in other pods."
    Should it be "This includes spines in other pods."? Because leaves are not in backbone area.

3. If my first two assumptions were correct, then they will contradict this sentence "We’re assuming that the spines are not summarizing in this case (see the “Route Summarization in Clos Networks” on page 105 for details on summarization)."

I think I may misunderstand most of this paragraph, could you please help me with that? Thanks.

Chapter 2: Use of Chassis as a Spine Switch

This below sentence on page 35:
"For example, with a 256-port chassis spine switch and a 64-port leaf switch, you can build 256 × 32 = 8,192 servers instead of the 2,048 switches available with just 64-port switches."

I guess the "switches" in "2048 switches" is a typo, it should be servers.

CHAPTER 13: OSPF Route Types

They use OSPF to learn about the destinations internal to this AS and to inform the internal network about the external destinations

Should it be "They use OSPF to learn about the destinations internal to this AS and to inform the external network about the internal destinations" ?

chapter 15: Routing Policy

So a first stab at a routing policy would be the following:

if prefix equals 172.16.0.0/16 then accept
else if prefix equals 10.0.0.0/24 then accept
else reject

But this would accept anyone accidentally announcing the subnet 10.0.254.0/26, as an example.

Looks like it should be "But this would accept anyone accidentally announcing the subnet 10.0.0.0/26, as an example."

chapter 13: Configuration with Servers Running OSPF: IPv4

Page 282, Example 13-6, the ip address of docker0 is missing.

!
interface docker0
 ip ospf area 1
 ip ospf bfd
!

Would it be like the below configuration?

!
interface docker0
 ip address 192.168.10.x/32
 ip ospf network point-to-point
 ip ospf area 1
 ip ospf bfd
!

chapter 5: Routing Protocols in Clos Networks

I'm confused by this paragraph "the leaf and spine router links belong to the level 1 (area 1). The spine and super-spine router links belong to level 2 (or backbone) area."

Which level do the spine router links belong to? level 1, leve 2, or both? Could you please elaborate on that?

chapter 5: What do I tell them?

"For other interfaces such as loopback, you can add a clause to advertise the address on that interface without attempting to peer."

What's the meaning of "add a clause"? Is it a configuration command?

chapter 5: Link-state protocol’s behavior in a Clos network

With IS-IS or OSPF, each pod is in its own level or area, level 1 or the nonbackbone area. The leaves are the area border routers. The leaves and spines form the level 2 (or backbone) routers.

My understanding is that the spines and super spines form the level 2 routers, more precisely, spines are area border routers between nonbackbone and backbone. So the sentence should be" The spines are the area border routers(connecting nonbackbone and backbone). The spines and super spines form the level 2 (or backbone) routers." Is my understanding correct?

Chapter 14: Multipath Selection

Hi Dinesh,

This paragraph is confusing for me, especially what "virtual services" refers to. Why will multiple servers announce reachability to the same service virtual IP? Are all these servers directly connected to the virtual service? It seems like Service Cluster IPs in a Kubernetes cluster, because a virtual Cluster IP actually exists in an IPTables rule on every node. Could you please elaborate on it? Thanks!

In the second deployment scenario, when virtual services are deployed by servers, multiple servers will announce reachability to the same service virtual IP address. Because the servers are connected to different leaves to ensure reliability and scalability, the spines will again receive a route from multiple different ASNs, for which the AS_PATH lengths are identical, but the specific ASNs within the path itself are not.

Chapter 1: The promise of zero configuration

The two ends of an interface must be configured to be in the same subnet for routing to even begin to work.

Hi, Dinesh, I don't get this sentence. How can an interface have two ends? I only know that a link can have two ends. Could you please explain this to me? Thanks!

chapter 5: Link-State Dissected

I don't get the below paragraph, could you please elaborate it? Thanks.

"R1 and R4, of course, advertise their reachability to their locally attached subnets as well, except that they advertise them via locally attached links to those subnets (there are other ways such as via redistribute, but we’ll ignore them for now). "

chapter 9: Connecting the Clos Topology to the External World

I don't understand the sentence in bold, could you please elaborate it for me? Thanks.

In very large networks, if the number of spines exceeds the port count of a border leaf, it is no longer possible to connect the border leaves to all spines. In such situations, another tier of switches is added to the network to mitigate this problem. Given that you can easily find switches that are 64 ports of 100GbE, you need something approaching that many spines to run into this condition.

chapter 15: Peering with BGP Speakers on the Host

router bgp 65011
  bgp router-id 10.0.0.11
  neighbor peer-group SERVERS
  neighbor SERVERS remote-as external
  neighbor 172.16.1.1 peer-group SERVERS
  neighbor 172.16.1.2 peer-group SERVERS

In the above configuration, neighbor section should start from 172.16.1.2 since 172.16.1.1 is leaf01's interface IP.
