I think now is a good time to recap what we have built so far with VMware Cloud Foundation (VCF). We’ve done a number of activities to date, notably the deployment of the management domain in part 1. We then spent some time deploying the vRealize Suite of products in parts 2, 3 and 4. In part 5, we commissioned some additional ESXi hosts, and most recently we created our first workload domain in part 6, which included the deployment of NSX-T 2.5. Now we come to quite a long section: the deployment of the NSX-T Edge. Why are we doing this? Well, my end goal is to deploy PKS, allowing me to deploy Kubernetes clusters on VCF. At the time of writing, NSX-T Edge deployments have not been automated in VCF, so this will be a manual approach.

A word of warning – this is one of the longest posts I’ve done for some time, but you’ll see why as you read through it. I basically wanted to capture all of the steps involved in deploying an NSX-T Edge appliance, the NSX-T configuration steps, and finally the BGP configuration steps on both NSX-T and your upstream router so that overlay segments built in NSX-T can route out.

PKS and NSX-T – Why?

Let’s talk about why we want to have NSX-T for Enterprise PKS first. With this integration, when we deploy applications in Kubernetes, they automatically get their own network segments, providing per-tenant isolation. These applications also get their own Load Balancers automatically, so developers no longer need to raise “Can you create me a new Load Balancer?” tickets with IT.

The PKS management virtual machines (BOSH, PKS, Harbor) are deployed on their own PKS management network segment. However, these VMs all need to speak to vCenter. Kubernetes masters and workers will be deployed on their own PKS service network segment and need to be able to communicate back to the PKS management VMs on the management network segment (e.g. BOSH for validation). There may also be a requirement for Kubernetes nodes to reach an external repository for container images. Kubernetes nodes also need to speak to vCenter to request the creation of persistent volumes on vSphere storage.

Thus the PKS management and service networks need to be able to communicate to each other, and should also be able to communicate externally to vCenter server, and maybe externally to the internet as well. The end goal of this exercise is to create two overlay networks that can talk to each other, as well as talk externally to my vCenter Server.

NSX-T Edge Overview

I get a mental block when it comes to networking, so for me it is usually good to review what we want to achieve with the NSX-T Edge. If you are already au-fait with all of this, feel free to jump straight to the Configuring section below. Conceptually, the purpose of an NSX-T Edge is straightforward. It provides the ability for traffic to travel east/west in an overlay/tunnel across multiple ESXi hosts and NSX-T Edges. It also facilitates north/south traffic travelling out of the overlay network (or specifically, from VMs on the overlay) to external networks. And then of course there are the services to consider, e.g. NAT, Firewall, BGP, etc.

Transport Nodes and Transport Zones

There is a lot of NSX-T terminology to get to grips with. Let’s start with some basics – transport nodes and transport zones. Transport nodes are simply nodes that are capable of participating in an NSX-T network, for example an ESXi host or an NSX-T Edge. Transport zones control the reach of a layer 2 network. A transport zone can support either overlays (for east/west traffic) or VLANs (for north/south traffic). Transport nodes are part of a transport zone. Each transport node is configured with an N-VDS, an NSX-T virtual switch. For each transport zone that an NSX Edge belongs to, a single N-VDS virtual switch gets added and configured on the NSX Edge. We will see this in practice shortly.

The overlay is often referred to as the TEP (tunnel endpoint) network. It is an internal tunnel between transport nodes (using Geneve encapsulation). The VLAN transport zone is often referred to in docs and posts as the Uplink network. It is used to connect the NSX Edge uplinks to upstream physical routers/switches.

Your NSX-T Edge virtual appliance comes with 4 network connections – 1 for management, 1 for the overlay/tunnel and 2 for VLAN/uplink connections. When you install an NSX-T Edge as a virtual appliance, internal interfaces are created called fp-ethX, where X is 0, 1, 2, and 3. These interfaces are allocated for uplinks to an upstream physical router/switch and for overlay tunneling. When you create the NSX-T Edge transport node, you can select which fp-ethX interfaces to associate with the uplinks and which to associate with the overlay tunnel.
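As a quick reference, the conventional mapping between the appliance’s network connections and the internal interfaces can be sketched as follows. Note this is only an illustration of the typical layout described above – the fp-ethX-to-role assignment is something you choose yourself when configuring the edge transport node, not something fixed by NSX-T:

```python
# Conventional NSX-T Edge VM interface layout (illustrative only; you pick
# which fp-ethX carries overlay vs. uplink traffic during transport node
# configuration).
EDGE_VNIC_LAYOUT = {
    "network 0": {"internal": "eth0",    "role": "management"},
    "network 1": {"internal": "fp-eth0", "role": "overlay/TEP tunnel"},
    "network 2": {"internal": "fp-eth1", "role": "uplink 1 (VLAN)"},
    "network 3": {"internal": "fp-eth2", "role": "uplink 2 (VLAN)"},
}

for vnic, info in EDGE_VNIC_LAYOUT.items():
    print(f"{vnic}: {info['internal']:8s} -> {info['role']}")
```

Keeping this mapping written down makes the later transport node configuration (step 6) much less confusing.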

Network after Workload Domain (WLD) deployment

As part of the Workload Domain deployment in part 6, VCF did a number of tasks automatically when deploying NSX-T. An overlay network was created on all of the hosts, and the VMkernel ports (management, vSAN and vMotion) were all moved to NSX-T network segments. Here is a view of the networking from a vSphere perspective once the WLD has been deployed.

Above, you can see the network segments (logical switches) created by NSX-T for the VMkernel traffic. All VMkernel interfaces have been migrated to NSX-T at this point. This can clearly be seen if you select any of the WLD hosts and examine its Virtual Switches. The distributed portgroups are actually on an NSX-T N-VDS, as shown below.

A word of warning – do not remove the unused vSphere distributed switch. This is required if you ever wish to delete the workload domain, since VCF will migrate networking from NSX-T back to the vSphere distributed switch.

Now, if we then login to the NSX-T Manager, we can see the results of the fully automated workload domain deployment from an NSX-T perspective. First off, here is the appliances view. The creation of the WLD deployed 3 x NSX-T Managers (previously known as controllers) for our domain, and also plumbed up a virtual IP for the cluster.

We can also see that there are two transport zones created for the hosts, one is an overlay and the other is an uplink/VLAN:

There are also a number of network segments created for the VMkernel traffic (along with corresponding logical switches). As seen earlier, these segments/logical switches are also visible in the vSphere UI.

There are a bunch of other network-related items configured as well of course, such as host transport nodes, uplink profiles and transport node profiles. We do not need to worry about these in the context of hosts, but we will discuss them in the context of NSX-T Edges shortly.

Configuring NSX-T for VCF Workload Domain

OK – let’s now get to the purpose of this post: let’s go through the steps involved in getting NSX-T Edges deployed and configured so that our PKS Management and Service VMs can talk to each other and to networks outside of the overlay. But before that, let’s discuss the VLANs that I have available in my lab.

VLAN 70 – NSX-T Host Overlay for Workload Domain

VLAN 80 – NSX-T Edge Overlay for Workload Domain

VLAN 50 – External Network

When deploying this WLD, an overlay network VLAN was requested; in this deployment, I chose VLAN 70. Note that VLANs 70 and 80 are also routed to each other.

Step 1 – Create network segments for edge appliance uplinks

In my lab, I created two network segments, one for the uplink/VLAN network and the other for my overlay/tunnel network. Both overlay and VLAN segments are configured as trunks using VLAN 0-4094. Both are also using the vlan-tz-xxx transport zone that has already been created. Take care to not pick the overlay-tz-xxx for the transport zone.
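For anyone who prefers automation, a pair of trunk segments like these can in principle also be created via the NSX-T Policy REST API rather than the UI. The sketch below is an illustration only: the endpoint follows the Policy API `infra/segments` pattern, but the manager address, credentials and transport-zone path are placeholders for your own environment, and you should verify the request shape against the API reference for your NSX-T version.

```python
import base64
import json
import ssl
import urllib.request

NSX_MGR = "https://wld-nsx-mgr-01.rainpole.com"  # manager cluster VIP (placeholder)
USER, PASSWORD = "admin", "VMware1!"             # placeholder credentials

def build_trunk_segment(name, tz_path):
    """Payload for a VLAN-backed segment trunking all VLAN IDs (0-4094)."""
    return {
        "display_name": name,
        "vlan_ids": ["0-4094"],          # trunk the full VLAN range
        "transport_zone_path": tz_path,  # must be the vlan-tz, NOT the overlay-tz
    }

def create_segment(segment_id, payload):
    """PATCH the segment into the policy tree (creates it if absent)."""
    token = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
    req = urllib.request.Request(
        f"{NSX_MGR}/policy/api/v1/infra/segments/{segment_id}",
        data=json.dumps(payload).encode(),
        method="PATCH",
        headers={"Authorization": f"Basic {token}",
                 "Content-Type": "application/json"},
    )
    # Lab-only: skip certificate verification for the self-signed manager cert.
    urllib.request.urlopen(req, context=ssl._create_unverified_context())

# Example invocation (left commented out – it would contact the manager;
# the transport-zone path is a placeholder):
# tz = "/infra/sites/default/enforcement-points/default/transport-zones/<vlan-tz-id>"
# create_segment("edge-uplink-trunk", build_trunk_segment("edge-uplink-trunk", tz))
# create_segment("edge-overlay-trunk", build_trunk_segment("edge-overlay-trunk", tz))
```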

Step 2 – Deploy the NSX-T Edge appliance

I don’t have much input to add here, apart from the fact that the appliance should be pulled down from My VMware. The configuration of the Edge can be small, medium or large. During deployment, you can pick the networks for the Edge using the segments created in step 1. Typically, the management connection is placed on network 0, the overlay is placed on network 1, and uplink connections are placed on network 2 (and 3 if desired). I also enable SSH on the Edges, as it allows us to query the status of the tunnels, service routers and distributed routers that are built on the Edge as we go through the configuration. I find this extremely useful.

Step 3 – Create Edge Overlay, Edge VLAN and Edge Cluster profiles

Profiles are used when creating the transport zones/N-VDS, ensuring they connect to the correct NSX-T Edge appliance network. To make things easier, I normally create a unique profile for the overlay and for each of the two uplinks. We also create an edge cluster profile, but that just takes the defaults.

1 x Overlay/TEP profile (trunk) – allow PKS networks to talk to each other/internally

Either 1 or 2 x Uplink/VLAN profile (trunk) – allow PKS networks to talk to vCenter/externally

NSX-T Edge Cluster Profile – a common configuration for the edge node

Note that the edge overlay profile should be placed on a different VLAN to the host overlay profile, but these two VLANs should be routable to each other so that the hosts and edges can communicate, as highlighted in the introduction. In the edge-overlay-profile for my edge overlay network, I will use VLAN 80. In this setup, my hosts’ overlay network is on VLAN 70.

Profiles are found in NSX-T Manager under System > Fabric > Profiles. I usually make the name of the active uplink the same as I expect to find on the N-VDS edge. When it comes to configuring the edge later on (step 6), it makes it easier to match.

3.1 edge-overlay-profile

Active Uplinks: Fp-eth0 – this is the second NSX-T Edge appliance virtual NIC/network 1.

VLAN: 80

MTU: 1600. An MTU of 1600 is required to accommodate the tunnel/encapsulation (Geneve) overhead.
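To see where the 1600 figure comes from, here is a rough worked calculation of the Geneve encapsulation overhead. The byte counts are the standard header sizes; Geneve options (and inner VLAN tags) can add more, which is why 1600 is a minimum with headroom rather than an exact fit:

```python
# Byte sizes of the headers that Geneve encapsulation adds around an
# inner Ethernet frame (standard sizes; Geneve options add more).
OUTER_IPV4  = 20  # outer IPv4 header
OUTER_UDP   = 8   # Geneve runs over UDP
GENEVE_BASE = 8   # Geneve base header
INNER_ETH   = 14  # the encapsulated frame's own Ethernet header

# A full-size 1500-byte inner IP packet, once encapsulated, needs:
required_mtu = 1500 + INNER_ETH + GENEVE_BASE + OUTER_UDP + OUTER_IPV4
print(required_mtu)  # 1550 -- hence the 1600 minimum, leaving room for options
```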

3.2 edge-uplink-profile

Active Uplinks: Fp-eth1 – this is the third NSX-T Edge appliance virtual NIC/network 2.

VLAN: 50

MTU: 1500

You could now create a third profile for the other uplink, the fourth NSX-T Edge appliance virtual NIC/network 3. This would be Fp-eth2. It would also use VLAN 50 and an MTU of 1500. However, I’m just building a single uplink for my Edge in this example.

3.3 edge-cluster-profile

Just build this with the defaults. In fact, it may not even be necessary, as a profile with the defaults already exists.

Step 4 – Create an Edge Transport Zone

In this step, we are creating a transport zone for the edge uplink. The edges and the hosts will share the same overlay transport zone (even though they will be on different VLANs, 70 and 80).

Transport zone configurations are found in NSX-T Manager under System > Fabric > Transport Zones. You need to provide the following content:

Provide an N-VDS Name

Set Membership = Standard

Set Traffic Type = VLAN

You should now have 3 transport zones: 2 which were automatically created for your hosts during the WLD deployment, and your new edge one:

Step 5 – Join NSX-T Edge to NSX-T Manager

At this point, we need to add the NSX-T Edge to the NSX-T Manager. This requires command line access to both the NSX-T Manager and NSX-T Edges. The steps are (1) SSH to the NSX-T Manager and retrieve the cluster certificate thumbprint, and (2) use the thumbprint to join the Edge to the Manager.

Note: Do not log into an individual NSX-T manager/controller. If you log into a non-master manager (the terminology may not be exact), you will not be able to retrieve the cluster certificate thumbprint. SSH to the cluster virtual IP address to get the thumbprint instead; this will automatically put you on the master manager.

Here are the steps from my lab environment. Note the timeout warning at the end – this did not appear to have any impact on the operation, and seems to happen sporadically. I am only going to join a single Edge in this example. You will need to repeat this operation on the other Edge in a typical production environment.

cormac@pks-cli:~$ ssh admin@wld-nsx-mgr-01
admin@wld-nsx-mgr-01's password: ******
NSX CLI (Manager, Policy, Controller 2.5.0.0.0.14663978). Press ? for command list or enter: help
wld-nsx-clr-02> get certificate cluster thumbprint
79f68816f7b702b842a0f75d061f0b0133e375fde4190724cd94974ee88e3c8d
wld-nsx-clr-02>

cormac@pks-cli:~$ ssh admin@w01-edge-01.rainpole.com
admin@w01-edge-01.rainpole.com's password: ******
* TIPS: To reconfig management interface, please refer to these CLIs
1) stop service dataplane
2) set interface interface-name vlan vlan-id plane mgmt (for creating vlan sub-interface)
3) set interface interface-name ip x.x.x.x/24 gateway x.x.x.x plane mgmt (for static ip)
   set interface interface-name dhcp plane mgmt (for dhcp)
4) start service dataplane
To config in-band management interface, please refer to these CLIs
1) set interface mac mac-addr vlan vlan-id in-band plane mgmt
2) set interface eth0.vlan ip x.x.x.x/24 gateway x.x.x.x plane mgmt (for static ip)
   set interface eth0.vlan dhcp plane mgmt (for dhcp)
Last login: Fri Jan 24 17:15:22 2020 from 10.27.51.18
NSX CLI (Edge 2.5.0.0.0.14663982). Press ? for command list or enter: help
w01-edge-01> join management-plane wld-nsx-mgr-01.rainpole.com thumbprint \
79f68816f7b702b842a0f75d061f0b0133e375fde4190724cd94974ee88e3c8d username admin
Password for API user:
Node successfully registered as Fabric Node: b62e0136-4346-11ea-a08a-00505682a9a5
Warning: Timeout occurred while waiting for edge-service to connect with Manager
w01-edge-01>

5.1 Examine logical routers on the NSX-T Edge

We will revisit this step from time to time to show the behaviour, but once you are logged into the Edge, you can display the logical routers. At present, since the Edge has not been configured, there are none. But we will see how the tunnel, service routers and distributed routers get added as we go through the setup.

w01-edge-01> get logical-routers
Logical Router
UUID   VRF   LR-ID   Name   Type   Ports
w01-edge-01>

Step 6 – Configure the Edge Transport Nodes

This is where a lot of the work in the previous steps all comes together. During the configuration of the Edge Transport Node, we will use the Transport Zones and the profiles that we created earlier. Edge Node configuration is found in NSX-T Manager under System > Fabric > Nodes > Edge Transport Nodes. When we configure the NSX-T Edge, the first thing we do is choose the transport zones that we want on the edge. In our case, we want the overlay transport zone (the same one used for the hosts) and the new uplink transport zone that we created in step 4.



6.1 Move Transport Zones from Available to Selected

As shown below, move the two transport zones from Available to Selected in the General tab. Once done, select the N-VDS tab next to the General tab for the N-VDS configuration:

6.2 Configure N-VDS for Overlay

Select the correct N-VDS for the overlay Transport Zone. Two will be displayed, one for each Transport Zone selected. Choose the automatically created N-VDS and not the uplink N-VDS you created earlier in step 4. Next, choose the overlay profile from step 3.1, then an IP address, gateway and subnet mask for the overlay N-VDS. This IP address will be plumbed up on the VLAN specified by the overlay profile, in this example VLAN 80. The Virtual NIC is then chosen (which connection on the NSX-T Edge appliance to attach to). Normally the overlay is Fp-eth0, which is why I use the same name for the active uplink when creating the profiles in step 3 – it makes them easy to match here. This is an example from my setup:

6.3 Configure N-VDS for uplink

We need to create a second N-VDS for the uplink transport zone, so simply click + ADD N-VDS. Again, select the N-VDS. This time, the Edge Switch Name will be the N-VDS name that was used when creating the Edge Transport Zone in step 4. The other information required is the uplink profile and the Virtual NIC, in this case Fp-eth1.

If there were another Edge uplink Transport Zone to be configured, it would need to be moved from Available to Selected in step 6.1, and then you would simply repeat step 6.3 for Virtual NIC Fp-eth2. However, at this point our edge is now configured with two transport zones (overlay and uplink):

Step 7 – Create Edge Cluster

Even though there is only a single Edge node, an Edge Cluster must still be created. Edge Cluster configuration is found in NSX-T Manager under System > Fabric > Nodes > Edge Clusters. Simply click +ADD, fill in the name, move the edge transport node from Available to Selected, and click ADD. The profile for the edge cluster was built in step 3.3 and is added here. This step is quite simple, and the edge cluster, once created, should look something like this:

Step 8 – Validate the Overlay Network

This is an important step before we go any further. You need to be able to ping the overlay IP address on the N-VDS of the edge from the overlay network on the hosts with an unfragmented 1600-byte packet (and vice versa). Let’s begin with the hosts. First, here are the overlay interfaces and IP addresses from two hosts. Let’s confirm that they can ping each other first (which they already should, or we probably wouldn’t even be at this point). With the esxcfg-vmknic -l command, we are looking for the vxlan NetStack interfaces. We then use vmkping ++netstack=vxlan -s 1600 -I <interface> <target IP> to test.

[root@esxi-dell-g:~] esxcfg-vmknic -l
Interface  Port Group/DVPort/Opaque Network      IP Family  IP Address                Netmask        Broadcast        MAC Address        MTU   TSO MSS  Enabled  Type               NetStack
.
.
vmk10      10                                    IPv4       147.70.0.3                255.255.255.0  147.70.0.255     00:50:56:61:c6:81  9000  65535    true     DHCP               vxlan
vmk10      10                                    IPv6       fe80::250:56ff:fe61:c681  64                              00:50:56:61:c6:81  9000  65535    true     STATIC, PREFERRED  vxlan
vmk11      11                                    IPv4       147.70.0.2                255.255.255.0  147.70.0.255     00:50:56:6c:64:58  9000  65535    true     DHCP               vxlan
vmk11      11                                    IPv6       fe80::250:56ff:fe6c:6458  64                              00:50:56:6c:64:58  9000  65535    true     STATIC, PREFERRED  vxlan
vmk50      bbf952c9-22d5-4e9d-a16b-1fd39c60ee96  IPv4       169.254.1.1               255.255.0.0    169.254.255.255  00:50:56:67:59:7f  1500  65535    true     STATIC             hyperbus
vmk50      bbf952c9-22d5-4e9d-a16b-1fd39c60ee96  IPv6       fe80::250:56ff:fe67:597f  64                              00:50:56:67:59:7f  1500  65535    true     STATIC, PREFERRED  hyperbus
[root@esxi-dell-g:~]

[root@esxi-dell-l:~] esxcfg-vmknic -l
Interface  Port Group/DVPort/Opaque Network      IP Family  IP Address                Netmask        Broadcast        MAC Address        MTU   TSO MSS  Enabled  Type               NetStack
.
.
vmk10      10                                    IPv4       147.70.0.7                255.255.255.0  147.70.0.255     00:50:56:6b:11:18  9000  65535    true     DHCP               vxlan
vmk10      10                                    IPv6       fe80::250:56ff:fe6b:1118  64                              00:50:56:6b:11:18  9000  65535    true     STATIC, PREFERRED  vxlan
vmk11      11                                    IPv4       147.70.0.6                255.255.255.0  147.70.0.255     00:50:56:64:0f:e2  9000  65535    true     DHCP               vxlan
vmk11      11                                    IPv6       fe80::250:56ff:fe64:fe2   64                              00:50:56:64:0f:e2  9000  65535    true     STATIC, PREFERRED  vxlan
vmk50      e57e18d4-3351-4132-b85b-bd5cf820e4a6  IPv4       169.254.1.1               255.255.0.0    169.254.255.255  00:50:56:62:c6:85  1500  65535    true     STATIC             hyperbus
vmk50      e57e18d4-3351-4132-b85b-bd5cf820e4a6  IPv6       fe80::250:56ff:fe62:c685  64                              00:50:56:62:c6:85  1500  65535    true     STATIC, PREFERRED  hyperbus
[root@esxi-dell-l:~]

[root@esxi-dell-g:~] vmkping ++netstack=vxlan -I vmk10 -s 1600 147.70.0.7
PING 147.70.0.7 (147.70.0.7): 1600 data bytes
1608 bytes from 147.70.0.7: icmp_seq=0 ttl=64 time=0.312 ms
1608 bytes from 147.70.0.7: icmp_seq=1 ttl=64 time=0.188 ms
1608 bytes from 147.70.0.7: icmp_seq=2 ttl=64 time=0.187 ms

--- 147.70.0.7 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.187/0.229/0.312 ms

The hosts’ overlay looks good. Let’s now shift our attention to the NSX-T Edge and log into it once more. We should now have a TUNNEL logical router created. I should now be able to ping the ESXi hosts in the same overlay, even though they are on different VLANs (remember the hosts are on VLAN 70 and my edge is on VLAN 80). We can navigate to the tunnel logical router on the Edge and do our ping tests from there.

w01-edge-01> get logical-routers
Logical Router
UUID                                   VRF   LR-ID   Name     Type   Ports
736a80e3-23f6-5a2d-81d6-bbefb2786666   0     0       TUNNEL          3

w01-edge-01> vrf 0
w01-edge-01(vrf)> get interfaces
Logical Router
UUID                                   VRF   LR-ID   Name     Type
736a80e3-23f6-5a2d-81d6-bbefb2786666   0     0       TUNNEL

Interfaces (IPv6 DAD Status A-Assigned, D-Duplicate, T-Tentative)
    Interface : 9fd3c667-32db-5921-aaad-7a88c80b5e9f
    Ifuid     : 261
    Mode      : blackhole

    Interface : 4dfb6d8f-75b5-5b50-af16-7bc95737c4da
    Ifuid     : 287
    Name      :
    Mode      : lif
    IP/Mask   : 147.80.0.100/24
    MAC       : 00:50:56:82:4f:fe
    LS port   : 4b4a31ff-b7be-5262-a284-5d05900ebc3a
    Urpf-mode : PORT_CHECK
    DAD-mode  : LOOSE
    RA-mode   : RA_INVALID
    Admin     : up
    Op_state  : up
    MTU       : 9000

    Interface : f322c6ca-4298-568b-81c7-a006ba6e6c88
    Ifuid     : 260
    Mode      : cpu

w01-edge-01(vrf)> ping 147.70.0.3
PING 147.70.0.3 (147.70.0.3): 56 data bytes
64 bytes from 147.70.0.3: icmp_seq=0 ttl=63 time=0.901 ms
64 bytes from 147.70.0.3: icmp_seq=1 ttl=63 time=0.984 ms
64 bytes from 147.70.0.3: icmp_seq=2 ttl=63 time=0.475 ms
^C
w01-edge-01(vrf)>
--- 147.70.0.3 ping statistics ---
4 packets transmitted, 3 packets received, 25.0% packet loss
round-trip min/avg/max/stddev = 0.475/0.787/0.984/0.223 ms
w01-edge-01(vrf)>

This looks successful. I am able to reach across the VLANs and communicate to transport nodes in the same overlay. Let’s do one last check back on the hosts to verify that they can reach the Edge.

[root@esxi-dell-g:~] vmkping ++netstack=vxlan -I vmk10 -s 1600 147.80.0.100
PING 147.80.0.100 (147.80.0.100): 1600 data bytes
1608 bytes from 147.80.0.100: icmp_seq=0 ttl=63 time=0.259 ms
1608 bytes from 147.80.0.100: icmp_seq=1 ttl=63 time=0.232 ms
1608 bytes from 147.80.0.100: icmp_seq=2 ttl=63 time=0.201 ms

--- 147.80.0.100 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.201/0.231/0.259 ms
[root@esxi-dell-g:~]

This all looks good. We can now continue with the creation of our tier 0 and tier 1 routers, and enable overlay network segments to reach the external network.

Step 9 – Tier 0 Logical Router

This step involves creating the Tier 0 Logical Router.

9.1 Create an external router port for the T0

First, we need to build a router port for the tier 0 logical router to reach externally. To build a router port, the configuration is found under Advanced Networking and Security > Switching > Switches > +ADD > New Logical Switch. Since my external network is VLAN 50, this is the VLAN that I am adding to this router port/logical switch.



9.2 Create a Tier 0 Logical Router

Now we can build the Tier 0 Logical Router. Tier 0 Logical Router configuration is found under Advanced Networking and Security > Routers > Routers. Click + to add a Tier 0 Router. Provide a name and the name of the NSX-T Edge Cluster. As I have only a single Edge transport node in my cluster, the Failover Mode entries aren’t too relevant. If you plan on having multiple Edge transport nodes in the cluster, refer to the relevant NSX-T 2.5 documentation on how best to configure the Failover Mode for your environment.

9.3 Connect the external router port to the Tier 0 router

To add the external port to the T0, first click on the name of the Tier 0 router. Now select Configuration and Router Ports. Next, click +ADD to add the external router port (created in step 9.1) to the T0 logical router (created in step 9.2). Give the router port a name, ensure Type is Uplink, set MTU to 1500, and set the Transport Node, which will be the NSX-T Edge.

For the Logical Switch, select the logical switch created in step 9.1.

Switch Port Name should be set to ‘Attach to a new switch port‘ (it will be automatically created). Note that the screenshot below might be confusing, since it is an ‘Edit’ of the Router Port after it has already been created. The screenshot is therefore showing that the Logical Switch Port is already attached to an existing switch port, and it displays the Switch Port Name to which it is attached. When creating this uplink connection for the first time, ensure Logical Switch Port is set to ‘Attach to a new switch port‘ and not an existing switch port – this will be the default behaviour anyway.

Finally, for Subnets, you need to add an IP address that will identify this router on your external network and VLAN. Here is my configuration, where this router will be using IP address 192.50.0.253 on VLAN 50.

We will leave the T0 for the moment, but will come back to it later as we need to configure BGP, the Border Gateway Protocol. BGP is needed so that our upstream router can learn about the overlay network segments that we are going to be building, and route traffic appropriately.

9.4 Validate T0 external router

Let’s now verify that the T0 external router can reach externally on VLAN 50. With the T0 in place, the NSX-T Edge now has a service router. Let’s see if it can ping some addresses externally on VLAN 50.

w01-edge-01> get logical-routers
Logical Router
UUID                                   VRF   LR-ID   Name     Type                   Ports
736a80e3-23f6-5a2d-81d6-bbefb2786666   0     0       TUNNEL                          3
267cf518-f1fd-4b13-9008-5e0bf2662a96   5     3074    SR-T0    SERVICE_ROUTER_TIER0   5

w01-edge-01> vrf 5
w01-edge-01(tier0_sr)> get route
Flags: t0c - Tier0-Connected, t0s - Tier0-Static, B - BGP,
t0n - Tier0-NAT, t1s - Tier1-Static, t1c - Tier1-Connected,
t1n: Tier1-NAT, t1l: Tier1-LB VIP, t1ls: Tier1-LB SNAT,
t1d: Tier1-DNS FORWARDER, t1ipsec: Tier1-IPSec,
> - selected route, * - FIB route

Total number of routes: 1

t0c> * 192.50.0.0/24 is directly connected, uplink-307, 00:00:52

w01-edge-01(tier0_sr)> ping 192.50.0.1
PING 192.50.0.1 (192.50.0.1): 56 data bytes
64 bytes from 192.50.0.1: icmp_seq=0 ttl=64 time=4.589 ms
64 bytes from 192.50.0.1: icmp_seq=1 ttl=64 time=1.899 ms
^C
--- 192.50.0.1 ping statistics ---
3 packets transmitted, 2 packets received, 33.3% packet loss
round-trip min/avg/max/stddev = 1.899/3.244/4.589/1.345 ms

w01-edge-01(tier0_sr)> ping 192.50.0.10
PING 192.50.0.10 (192.50.0.10): 56 data bytes
64 bytes from 192.50.0.10: icmp_seq=0 ttl=64 time=2.955 ms
64 bytes from 192.50.0.10: icmp_seq=1 ttl=64 time=1.189 ms
^C
--- 192.50.0.10 ping statistics ---
3 packets transmitted, 2 packets received, 33.3% packet loss
round-trip min/avg/max/stddev = 1.189/2.072/2.955/0.883 ms
w01-edge-01(tier0_sr)>

Looks good – let’s keep going with our configuration. Now to see if we can get our overlay network segments to reach the outside world (in our case, IP addresses on VLAN 50). To do that, we need to build a Tier 1 Logical Router, connect the T1 to the T0, and then connect our overlay network segments to the T1.

Step 10 – Tier 1 Logical Router

Our T1 logical router will have an uplink connection to our T0 logical router and downlink connections to our PKS network segments. This will allow our network segments to reach the outside world, once some additional configuration is put in place, such as Route Redistribution on the T0, and Route Advertising on the T1.

10.1 Create a Tier 1 Logical Router

The T1 configuration is quite straightforward – provide the name of the T1, the name of the T0 router, the Edge Cluster and the Edge Cluster Members, of which I only have one, as seen below. That’s it. If your Edge Cluster has 2 members, which production environments probably will, add them both here.

10.2 Validate Logical Routers

We can quickly check the Edge logical routers and see the distributed router now in place:

w01-edge-01> get logical-router
Logical Router
UUID                                   VRF   LR-ID   Name     Type                       Ports
736a80e3-23f6-5a2d-81d6-bbefb2786666   0     0       TUNNEL                              3
267cf518-f1fd-4b13-9008-5e0bf2662a96   5     3074    SR-T0    SERVICE_ROUTER_TIER0       5
14b5a772-fa42-45b3-b5f5-695584a9f5b6   6     3076    SR-T1    SERVICE_ROUTER_TIER1       5
802116e0-2639-46ad-90b3-33edafadd5d8   7     3073    DR-T0    DISTRIBUTED_ROUTER_TIER0   4
w01-edge-01>

10.3 Route Advertisement

We now need to advertise our routes on our T1. This will be needed later by BGP when it comes to discovering our overlay network segments. Although I have enabled all connections here, you would most likely only use a subset of these in production. Click on the T1 Logical Router name, select Routing > Route Advertisement. Enable it, then move all routes and endpoints to Yes. Note that at the moment, there are 0 networks advertised (shown at the bottom of the screenshot below). Once our network segments are connected to the T1, they should both show up here as advertised.

Step 11 – Build Network Segments (for PKS)

Next, we will build some network segments on the overlay network. As mentioned a few times now, my end goal is to deploy PKS, so I am going to create a network segment for the PKS management VMs and the PKS Service VMs. To do this, select Networking > Segments > Add Segments. Both segments use the overlay-tz-xxx Transport Zone and do not have a VLAN ID set. This action also creates corresponding NSX-T Logical Switches, and the network segments appear in the vSphere UI for use by virtual machines.

Step 12 – Connect Segments (Logical Switches) to Tier 1 Router

The next step is to configure the network segments to reach the external network (VLAN 50). To achieve this, the corresponding logical switches for the network segments are attached to the Tier 1 logical router. Navigate to Advanced Networking and Security > Routers > Routers and click on the Tier 1 router. Select Configuration > Router Ports and click +ADD.

Here are the downlink tier 1 router ports for the PKS Management network and the PKS Service network. You simply provide a router port name, the Logical Switch (created as part of step 11) and a subnet. In my case, I provided a CIDR of /24 for each subnet.

The PKS Management segment router port is 147.70.10.1 and the PKS Service segment router port is 147.70.20.1. Any virtual machines placed on these network segments (CIDR /24) should be able to ping these router port IP addresses.
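These subnet assignments are easy to sanity-check with Python’s standard ipaddress module. A small sketch using the lab values above:

```python
import ipaddress

# Downlink router ports created on the T1 (values from my lab).
segments = {
    "PKS-Management": ("147.70.10.1", "147.70.10.0/24"),
    "PKS-Service":    ("147.70.20.1", "147.70.20.0/24"),
}

for name, (gateway, cidr) in segments.items():
    net = ipaddress.ip_network(cidr)
    gw = ipaddress.ip_address(gateway)
    assert gw in net, f"{name}: gateway {gw} not inside {net}"
    # Addresses usable by VMs on the segment (excluding the network address,
    # broadcast address and the gateway itself):
    usable = net.num_addresses - 2 - 1
    print(f"{name}: gateway {gw}, {usable} usable VM addresses in {net}")
```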

12.1 Validate Network Segments

These network segments now show up on the Edge Service Router.

w01-edge-01> get logical-router
Logical Router
UUID                                   VRF   LR-ID   Name     Type                       Ports
736a80e3-23f6-5a2d-81d6-bbefb2786666   0     0       TUNNEL                              3
267cf518-f1fd-4b13-9008-5e0bf2662a96   5     3074    SR-T0    SERVICE_ROUTER_TIER0       5
14b5a772-fa42-45b3-b5f5-695584a9f5b6   6     3076    SR-T1    SERVICE_ROUTER_TIER1       5
802116e0-2639-46ad-90b3-33edafadd5d8   7     3073    DR-T0    DISTRIBUTED_ROUTER_TIER0   4
07557296-2a54-4e76-9146-9b9da9600d11   9     3075    DR-T1    DISTRIBUTED_ROUTER_TIER1   5

w01-edge-01> vrf 5
w01-edge-01(tier0_sr)> get route
Flags: t0c - Tier0-Connected, t0s - Tier0-Static, B - BGP,
t0n - Tier0-NAT, t1s - Tier1-Static, t1c - Tier1-Connected,
t1n: Tier1-NAT, t1l: Tier1-LB VIP, t1ls: Tier1-LB SNAT,
t1d: Tier1-DNS FORWARDER, t1ipsec: Tier1-IPSec,
> - selected route, * - FIB route

Total number of routes: 6

t0c> * 100.64.128.0/31 is directly connected, downlink-318, 00:06:32
t1c> * 147.70.10.0/24 [3/0] via 100.64.128.1, downlink-318, 00:02:46
t1c> * 147.70.20.0/24 [3/0] via 100.64.128.1, downlink-318, 00:01:53
t0c> * 192.50.0.0/24 is directly connected, uplink-307, 00:08:53
t0c> * fcd1:f05a:91b5:3000::/64 is directly connected, downlink-318, 00:06:32
t0c> * fe80::/64 is directly connected, downlink-318, 00:06:32
w01-edge-01(tier0_sr)>

At this point, I find it useful to deploy a simple VM and connect it to these networks in turn. If you deploy two VMs on the different segments, one on 147.70.10.0 and the other on 147.70.20.0, they should be able to ping each other, as well as the router port IP addresses assigned in step 12. The VMs should also be able to ping the external router port (192.50.0.253) which we configured earlier in step 9.3. At this point, we will still not be able to ping the outside world, since anything we try to ping externally will have no idea how to reply to us. This is where we need BGP, the Border Gateway Protocol.

Step 13 – BGP Configuration

13.1 Upstream Router BGP configuration

Before configuring BGP, you need to get some information from your upstream router. My router already had a BGP configuration set up for another NSX-T Edge deployment, so I just needed to add myself as a new neighbor to the existing BGP configuration on the router. Let’s look at the configuration currently on my upstream router:

console(config)#show running-config | section bgp
router bgp 5700
bgp log-neighbor-changes
bgp router-id 192.50.0.141
timers bgp 60 180
neighbor 192.50.0.254 remote-as 5600
neighbor 192.50.0.254 update-source loopback 1
exit

The neighbor 192.50.0.254 is a T0 router port on my other NSX-T deployment. Here are the BGP changes I made to the running configuration on the upstream router to add my new NSX-T Tier 0 router as a neighbor; the two neighbor 192.50.0.253 lines are the additions:

console(config)#show running-config | section bgp
router bgp 5700
bgp log-neighbor-changes
bgp router-id 192.50.0.141
timers bgp 60 180
neighbor 192.50.0.253 remote-as 5500
neighbor 192.50.0.253 update-source loopback 1
neighbor 192.50.0.254 remote-as 5600
neighbor 192.50.0.254 update-source loopback 1
exit

Note that the new neighbor added to the BGP configuration on my router is the IP address of the NSX-T Edge T0 external router port configured in step 9.3. Note also that my upstream router’s AS is 5700, the original neighbor’s AS is 5600, and my new neighbor’s AS is 5500. (AS is short for Autonomous System.)

13.2 NSX-T Edge T0 Router BGP configuration

Now let’s set up BGP on the NSX-T Edge via the NSX-T Manager. Navigate to Advanced Networking and Security > Routers > Routers and click on the Tier 0 router. From the Routing dropdown, select BGP. Click Edit, change the status to Enabled, and add the Local AS. From the configuration that was added to the upstream router earlier, my local AS is 5500.

Next, we need to add the neighbor details, such as IP address, Remote AS and so on. Here are the details once again from my lab: in the Neighbor tab, I added my upstream router’s IP address (192.50.0.1) and its Remote AS of 5700, leaving the other values at their defaults. In the Local Address tab, I unchecked ‘All Uplinks’, set Type to Uplink, and picked my T0-uplink-RP, which is the router port created in step 9.3. You can see it has the IP address we allocated in that step associated with it. Now save the BGP configuration.
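A common stumbling block at this stage is getting the two sides of the peering out of sync: each side's neighbor statement must point at the other side's interface IP and carry the other side's local AS. This hypothetical checker (my own sketch, not an NSX-T or Dell tool) captures that rule using the lab's values:

```python
# Hypothetical peering sanity check, using this lab's AS numbers and IPs.
# Each peer: its local AS and its interface IP on the shared network.
upstream = {"as": 5700, "ip": "192.50.0.1"}
nsxt_t0  = {"as": 5500, "ip": "192.50.0.253"}

def check_peering(a, a_neighbor_cfg, b, b_neighbor_cfg):
    """a/b are the peers; *_neighbor_cfg is the (ip, remote-as) each side configured."""
    errors = []
    if a_neighbor_cfg != (b["ip"], b["as"]):
        errors.append("side A neighbor statement does not match peer B")
    if b_neighbor_cfg != (a["ip"], a["as"]):
        errors.append("side B neighbor statement does not match peer A")
    # Different AS numbers on each side means this is an external (eBGP) peering.
    kind = "eBGP" if a["as"] != b["as"] else "iBGP"
    return kind, errors

kind, errors = check_peering(
    upstream, ("192.50.0.253", 5500),  # router: neighbor 192.50.0.253 remote-as 5500
    nsxt_t0,  ("192.50.0.1", 5700),    # T0 UI:  neighbor 192.50.0.1, Remote AS 5700
)
print(kind, errors)  # eBGP []
```

If either tuple is wrong (a typo in the IP, or the AS numbers swapped), the session sits in Active/Connect and never reaches Established.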

13.3 BGP validation – Upstream Router

At this point, we can take a look at the upstream router to see if anything has happened. What you need to see is a Peer State of ESTABLISHED. This may take a few moments to change from ACTIVE. The output below is from my Dell N4064 lab upstream router; if you are using a different router, the commands and output may differ.

console(config)# show ip bgp neighbors 192.50.0.253
Remote Address ................................ 192.50.0.253
Remote AS ..................................... 5500
Peer ID ....................................... 192.50.0.253
Peer Admin Status ............................. START
Peer State .................................... ESTABLISHED
Local Interface Address ....................... 192.50.0.1
Local Port .................................... 50977
Remote Port ................................... 179
Connection Retry Interval ..................... 2 sec
Neighbor Capabilities ......................... MP RF
Next Hop Self ................................. Disable
IPv4 Unicast Support .......................... Both
IPv6 Unicast Support .......................... None
Template Name ................................. None
Update Source ................................. Lo1
Configured Hold Time .......................... 90 sec
Configured Keep Alive Time .................... 30 sec
Negotiated Hold Time .......................... 180 sec
Negotiated Keep Alive Time .................... 60 sec
Prefix Limit .................................. 8160
Prefix Warning Threshold ...................... 75
Warning Only On Prefix Limit .................. False
MD5 Password .................................. None
Originate Default ............................. False
Last Error () ................................. None
Last SubError ................................. None
Time Since Last Error ......................... Never
Established Transitions ....................... 1
Established Time .............................. 0 days 00 hrs 01 mins 19 secs
Time Since Last Update ........................ 0 days 00 hrs 01 mins 18 secs
IPv4 Outbound Update Group .................... 0
BFD Enabled to Detect Fast Fallover ........... No
                 Open   Update   Keepalive   Notification   Refresh   Total
Msgs Sent        1      1        2           0              0         4
Msgs Rcvd        1      1        2           0              0         4
Received UPDATE Queue Size: 0 bytes. High: 99 Limit: 392192 Drops: 0
IPv4 Prefix Statistics:
                           Inbound   Outbound
Prefixes Advertised        0         16
Prefixes Withdrawn         0         0
Prefixes Current           0         16
Prefixes Accepted          0         N/A
Prefixes Rejected          0         N/A
Max NLRI per Update        0         16
Min NLRI per Update        0         16

console(config)# show ip bgp neighbors 192.50.0.253 routes
Local router ID is 192.50.0.141
Origin codes: i - IGP, e - EGP, ? - incomplete

Network             Next Hop         Metric     LocPref    Path          Origin
------------------- ---------------- ---------- ---------- ------------- ------

console(config)# show ip bgp neighbors 192.50.0.253 received-routes
Local router ID is 192.50.0.141
Origin codes: i - IGP, e - EGP, ? - incomplete

Network             Next Hop         Metric     LocPref    Path          Origin
------------------- ---------------- ---------- ---------- ------------- ------
console(config)#

So while we have established a BGP peering between the upstream router and the NSX-T Tier 0 router, we do not appear to have advertised any routes yet, as the two commands at the end of the previous output show. This is expected; there is another configuration change we need to make for that to happen.

13.4 BGP validation – NSX-T Tier 0 Router

Before we do the next configuration step, let’s hop onto the Edge and see what it sees as the current state of play. The good news is that it also shows the BGP peering as established. However, when querying the routes learned from the upstream router neighbor, it is not seeing any of our new NSX-T Edge network segments (e.g. 147.70.10.0, 147.70.20.0). It only sees networks that have been advertised by the other neighbor (AS 5600).

w01-edge-01(tier0_sr)> get bgp neighbor 192.50.0.1
BGP neighbor is 192.50.0.1, remote AS 5700, local AS 5500, external link
BGP version 4, remote router ID 192.50.0.141, local router ID 192.50.0.253
BGP state = Established, up for 00:05:00
Last read 00:00:44, Last write 00:00:00
Hold time is 180, keepalive interval is 60 seconds
Configured hold time is 180, keepalive interval is 60 seconds
Neighbor capabilities:
  4 Byte AS: advertised
  AddPath:
    IPv4 Unicast: RX advertised
  IPv4 Unicast Route refresh: advertised and received(new)
  Address Family IPv4 Unicast: advertised and received
  Hostname Capability: advertised (name: plrsr,domain name: n/a) not received
  Graceful Restart Capabilty: advertised
Graceful restart informations:
  Local GR Mode: Helper*
  Remote GR Mode: Disable
  R bit: False
  Timers:
    Configured Restart Time(sec): 180
    Received Restart Time(sec): 0
Message statistics:
  Inq depth is 0
  Outq depth is 0
                       Sent    Rcvd
  Opens:               2       1
  Notifications:       0       0
  Updates:             1       1
  Keepalives:          6       6
  Route Refresh:       0       0
  Capability:          0       0
  Total:               9       8
Minimum time between advertisement runs is 0 seconds
Update source is 192.50.0.253

For address family: IPv4 Unicast
Update group 1, subgroup 1
Packet Queue length 0
Community attribute sent to this neighbor(all)
16 accepted prefixes

Connections established 1; dropped 0
Last reset never
Local host: 192.50.0.253, Local port: 179
Foreign host: 192.50.0.1, Foreign port: 50977
Nexthop: 192.50.0.253
Nexthop global: ::
Nexthop local: ::
BGP connection: shared network
BGP Connect Retry Timer in Seconds: 120
Read thread: on  Write thread: on
BFD Status: Not configured

w01-edge-01(tier0_sr)> get bgp neighbor 192.50.0.1 routes
BGP table version is 16, local router ID is 192.50.0.253
Status flags: > - best, I - internal
Origin flags: i - IGP, e - EGP, ? - incomplete

   Network             Next Hop         Metric  LocPrf  Weight  Path
>  172.16.0.0/24       192.50.0.254     0       100     0       5700 5600 5500 ?
>  172.16.1.0/24       192.50.0.254     0       100     0       5700 5600 5500 ?
>  172.16.2.0/24       192.50.0.254     0       100     0       5700 5600 5500 ?
>  172.16.3.0/24       192.50.0.254     0       100     0       5700 5600 5500 ?
>  172.16.4.0/24       192.50.0.254     0       100     0       5700 5600 5500 ?
>  172.16.5.0/24       192.50.0.254     0       100     0       5700 5600 5500 ?
>  192.168.191.0/24    192.50.0.254     0       100     0       5700 5600 5500 ?
>  192.168.191.64/32   192.50.0.254     0       100     0       5700 5600 5500 ?
>  192.168.191.65/32   192.50.0.254     0       100     0       5700 5600 5500 ?
>  192.168.191.66/32   192.50.0.254     0       100     0       5700 5600 5500 ?
>  192.168.191.67/32   192.50.0.254     0       100     0       5700 5600 5500 ?
>  192.168.191.68/32   192.50.0.254     0       100     0       5700 5600 5500 ?
>  192.168.191.69/32   192.50.0.254     0       100     0       5700 5600 5500 ?
>  192.168.191.70/32   192.50.0.254     0       100     0       5700 5600 5500 ?
>  192.168.191.71/32   192.50.0.254     0       100     0       5700 5600 5500 ?
>  192.168.192.0/24    192.50.0.254     0       100     0       5700 5600 5500 ?

w01-edge-01(tier0_sr)> get bgp neighbor 192.50.0.1 advertised-routes
BGP table version is 16, local router ID is 192.50.0.253
Status flags: > - best, I - internal
Origin flags: i - IGP, e - EGP, ? - incomplete

   Network             Next Hop         Metric  LocPrf  Weight  Path
>  172.16.0.0/24       192.50.0.254     0       100     0       5700 5600 ?
>  172.16.1.0/24       192.50.0.254     0       100     0       5700 5600 ?
>  172.16.2.0/24       192.50.0.254     0       100     0       5700 5600 ?
>  172.16.3.0/24       192.50.0.254     0       100     0       5700 5600 ?
>  172.16.4.0/24       192.50.0.254     0       100     0       5700 5600 ?
>  172.16.5.0/24       192.50.0.254     0       100     0       5700 5600 ?
>  192.168.191.0/24    192.50.0.254     0       100     0       5700 5600 ?
>  192.168.191.64/32   192.50.0.254     0       100     0       5700 5600 ?
>  192.168.191.65/32   192.50.0.254     0       100     0       5700 5600 ?
>  192.168.191.66/32   192.50.0.254     0       100     0       5700 5600 ?
>  192.168.191.67/32   192.50.0.254     0       100     0       5700 5600 ?
>  192.168.191.68/32   192.50.0.254     0       100     0       5700 5600 ?
>  192.168.191.69/32   192.50.0.254     0       100     0       5700 5600 ?
>  192.168.191.70/32   192.50.0.254     0       100     0       5700 5600 ?
>  192.168.191.71/32   192.50.0.254     0       100     0       5700 5600 ?
>  192.168.192.0/24    192.50.0.254     0       100     0       5700 5600 ?
w01-edge-01(tier0_sr)>

So the connection has been established, but the routes our neighbor displays are not coming from our new NSX-T Edge Tier 0; they are coming from elsewhere (other neighbors). That is because we have not yet told it about our network segments. So let’s go ahead and enable our upstream router neighbor to learn about our routes/networks. This is done via the Route Distribution feature on our NSX-T Edge T0 router.

Step 14 – Enable Route Distribution

To enable our upstream router neighbor to learn about our networks, we need to be able to exchange (import/export) routing information with it. This is where Route Distribution is used. Navigate to Advanced Networking and Security > Routers > Routers and click on the Tier 0 router. From the Routing dropdown, select Route Distribution, enable it, then add the redistribution criteria. In my lab, I turned on route distribution from every source. In a production environment you might choose a subset of sources to share. You will need to enable the Tier 1 sources, however, since our network segments are attached to a Tier 1 router.

After adding the criteria, it will look something like this.
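Conceptually, route redistribution is just a filter on route sources: only routes whose source matches the enabled criteria get injected into BGP. This toy sketch uses the prefixes from this lab and source tags matching the flag legend in the earlier get route output; the filtering logic is my own simplification, not the actual NSX-T implementation.

```python
# Routes known to the T0, tagged with where they came from
# (cf. the t0c/t1c flags in the earlier 'get route' output).
routes = [
    ("192.50.0.0/24",  "T0_CONNECTED"),
    ("147.70.10.0/24", "T1_CONNECTED"),
    ("147.70.20.0/24", "T1_CONNECTED"),
]

def redistribute(routes, enabled_sources):
    """Return only the routes that the redistribution criteria allow into BGP."""
    return [prefix for prefix, src in routes if src in enabled_sources]

# With only Tier 0 sources enabled, the overlay segments are NOT advertised:
print(redistribute(routes, {"T0_CONNECTED"}))
# ['192.50.0.0/24']

# Enabling Tier 1 sources as well is what gets 147.70.10.0/24 and
# 147.70.20.0/24 advertised to the upstream router:
print(redistribute(routes, {"T0_CONNECTED", "T1_CONNECTED"}))
```

This is why enabling the T1 sources is not optional here: the Kubernetes-facing segments are T1-connected routes, invisible to BGP until the filter lets them through.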

Now let’s check back on our upstream router and NSX-T Edge and see if anything has changed.

14.1 Validate Route Distribution – NSX-T Edge

Let’s check the upstream router neighbor from the Edge first. Woot! It looks like our network segments are now being advertised.

w01-edge-01(tier0_sr)> get bgp neighbor 192.50.0.1 advertised-routes
BGP table version is 19, local router ID is 192.50.0.253
Status flags: > - best, I - internal
Origin flags: i - IGP, e - EGP, ? - incomplete

   Network             Next Hop         Metric  LocPrf  Weight  Path
>  147.70.10.0/24      0.0.0.0          0       100     32768   ?
>  147.70.20.0/24      0.0.0.0          0       100     32768   ?
>  172.16.0.0/24       192.50.0.254     0       100     0       5700 5600 ?
>  172.16.1.0/24       192.50.0.254     0       100     0       5700 5600 ?
>  172.16.2.0/24       192.50.0.254     0       100     0       5700 5600 ?
>  172.16.3.0/24       192.50.0.254     0       100     0       5700 5600 ?
>  172.16.4.0/24       192.50.0.254     0       100     0       5700 5600 ?
>  172.16.5.0/24       192.50.0.254     0       100     0       5700 5600 ?
>  192.50.0.0/24       0.0.0.0          0       100     32768   ?
>  192.168.191.0/24    192.50.0.254     0       100     0       5700 5600 ?
>  192.168.191.64/32   192.50.0.254     0       100     0       5700 5600 ?
>  192.168.191.65/32   192.50.0.254     0       100     0       5700 5600 ?
>  192.168.191.66/32   192.50.0.254     0       100     0       5700 5600 ?
>  192.168.191.67/32   192.50.0.254     0       100     0       5700 5600 ?
>  192.168.191.68/32   192.50.0.254     0       100     0       5700 5600 ?
>  192.168.191.69/32   192.50.0.254     0       100     0       5700 5600 ?
>  192.168.191.70/32   192.50.0.254     0       100     0       5700 5600 ?
>  192.168.191.71/32   192.50.0.254     0       100     0       5700 5600 ?
>  192.168.192.0/24    192.50.0.254     0       100     0       5700 5600 ?

14.2 Validate Route Distribution – Upstream Router

Let’s pop onto the upstream router and take a look from there. It would appear that BGP has indeed done its thing, and my network segments are now visible to the upstream router. And if I display the full routing table on the upstream router, I can see that my network segments have been derived from BGP (the routes flagged B, via 192.50.0.253, below).

console(config)#show ip bgp neighbors 192.50.0.253 routes
Local router ID is 192.50.0.141
Origin codes: i - IGP, e - EGP, ? - incomplete

Network             Next Hop         Metric     LocPref    Path          Origin
------------------- ---------------- ---------- ---------- ------------- ------
147.70.10.0/24      192.50.0.253     0                     5500          ?
147.70.20.0/24      192.50.0.253     0                     5500          ?
192.50.0.0/24       192.50.0.253     0                     5500          ?

console(config)#show ip bgp neighbors 192.50.0.253 received-routes
Local router ID is 192.50.0.141
Origin codes: i - IGP, e - EGP, ? - incomplete

Network             Next Hop         Metric     LocPref    Path          Origin
------------------- ---------------- ---------- ---------- ------------- ------
147.70.10.0/24      192.50.0.253     0                     5500          ?
147.70.20.0/24      192.50.0.253     0                     5500          ?
192.50.0.0/24       192.50.0.253     0                     5500          ?

console(config)#show ip route
Route Codes: R - RIP Derived, O - OSPF Derived, C - Connected, S - Static
             B - BGP Derived, E - Externally Derived, IA - OSPF Inter Area
             E1 - OSPF External Type 1, E2 - OSPF External Type 2
             N1 - OSPF NSSA External Type 1, N2 - OSPF NSSA External Type 2
             SU - Unnumbered Peer, L - Leaked Route
* Indicates the best (lowest metric) route for the subnet.

No default gateway is configured.

C *1.1.1.0/31 [0/1] directly connected, Lo1
C *10.10.0.0/24 [0/1] directly connected, Vl10
C *10.10.10.0/24 [0/1] directly connected, Vl502
C *10.20.0.0/24 [0/1] directly connected, Vl20
C *10.20.20.0/24 [0/1] directly connected, Vl503
C *11.11.11.0/24 [0/1] directly connected, Lo0
C *147.70.0.0/24 [0/1] directly connected, Vl70
B *147.70.10.0/24 [20/0] via 192.50.0.253, Vl50
B *147.70.20.0/24 [20/0] via 192.50.0.253, Vl50
C *147.80.0.0/24 [0/1] directly connected, Vl80
C *172.3.0.0/24 [0/1] directly connected, Vl3
C *172.4.0.0/24 [0/1] directly connected, Vl4
C *172.6.0.0/24 [0/1] directly connected, Vl6
C *172.16.0.0/24 [0/1] directly connected, Vl504
B  172.16.0.0/24 [20/0] via 192.50.0.254, Vl50
B *172.16.1.0/24 [20/0] via 192.50.0.254, Vl50
B *172.16.2.0/24 [20/0] via 192.50.0.254, Vl50
B *172.16.3.0/24 [20/0] via 192.50.0.254, Vl50
B *172.16.4.0/24 [20/0] via 192.50.0.254, Vl50
B *172.16.5.0/24 [20/0] via 192.50.0.254, Vl50
C *172.30.0.0/24 [0/1] directly connected, Vl30
C *172.30.50.0/24 [0/1] directly connected, Vl501
C *172.30.51.0/24 [0/1] directly connected, Vl500
C *172.32.0.0/24 [0/1] directly connected, Vl32
C *172.40.0.0/24 [0/1] directly connected, Vl40
C *172.64.0.0/24 [0/1] directly connected, Vl505
C *172.200.0.0/24 [0/1] directly connected, Vl200
C *192.50.0.0/24 [0/1] directly connected, Vl50
B  192.50.0.0/24 [20/0] via 192.50.0.253, Vl50
C *192.60.0.0/24 [0/1] directly connected, Vl60
C *192.168.0.0/24 [0/1] directly connected, Vl2
B *192.168.191.0/24 [20/0] via 192.50.0.254, Vl50
B *192.168.191.64/32 [20/0] via 192.50.0.254, Vl50
B *192.168.191.65/32 [20/0] via 192.50.0.254, Vl50
B *192.168.191.66/32 [20/0] via 192.50.0.254, Vl50
B *192.168.191.67/32 [20/0] via 192.50.0.254, Vl50
B *192.168.191.68/32 [20/0] via 192.50.0.254, Vl50
B *192.168.191.69/32 [20/0] via 192.50.0.254, Vl50
B *192.168.191.70/32 [20/0] via 192.50.0.254, Vl50
B *192.168.191.71/32 [20/0] via 192.50.0.254, Vl50
B *192.168.192.0/24 [20/0] via 192.50.0.254, Vl50
console(config)#
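One detail worth noticing in that routing table: 172.16.0.0/24 appears twice, once connected and once BGP-derived, and only the connected entry carries the *. That is administrative distance at work: when multiple protocols offer the same prefix, the source with the lowest distance wins. A toy sketch of the selection (the distance values are the conventional defaults, e.g. 0 for connected, 20 for eBGP):

```python
# Conventional default administrative distances per route source.
AD = {"connected": 0, "static": 1, "ebgp": 20, "rip": 120}

# Candidate routes for each prefix, as seen in the lab routing table.
candidates = {
    "172.16.0.0/24":  [("connected", "Vl504"), ("ebgp", "via 192.50.0.254")],
    "147.70.10.0/24": [("ebgp", "via 192.50.0.253")],
}

def best_route(prefix, candidates):
    """Pick the candidate whose source has the lowest administrative distance."""
    return min(candidates[prefix], key=lambda c: AD[c[0]])

print(best_route("172.16.0.0/24", candidates))   # ('connected', 'Vl504')
print(best_route("147.70.10.0/24", candidates))  # ('ebgp', 'via 192.50.0.253')
```

So the BGP copy of 172.16.0.0/24 stays in the table as a backup but is never used while the connected interface is up.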

A tip about the usefulness of the above command, which displays the full routing table on the upstream router: be aware that BGP will only install routes that the router can actually use. If you are trying to route to a network that is not on the upstream router, or that the upstream router doesn’t know how to reach, then BGP will reject those routes (at least, that is what I observed in my testing). The clue that routes have been rejected is that they show up in the BGP neighbor commands seen earlier, but not in the routing table. You can use the following command to see whether any routes have been rejected:

console(config)#show ip bgp neighbors [ip address of neighbor] rejected-routes
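The rejection behaviour described above largely comes down to a next-hop reachability check: a learned route only makes it into the routing table if the router already has a way to reach that route's next hop. This is a rough illustration of the idea in Python; real BGP best-path selection involves more steps than this.

```python
import ipaddress

# Networks the upstream router can reach on its own (connected routes).
connected = [ipaddress.ip_network("192.50.0.0/24"),
             ipaddress.ip_network("147.70.0.0/24")]

def accept_bgp_route(prefix, next_hop, connected):
    """Accept a learned route only if its next hop is already reachable."""
    nh = ipaddress.ip_address(next_hop)
    return any(nh in net for net in connected)

# Next hop 192.50.0.253 lies on a connected network, so the route is installed.
print(accept_bgp_route("147.70.10.0/24", "192.50.0.253", connected))  # True

# A next hop the router has no route to: the route is not installed, and
# would appear under the rejected-routes command instead.
print(accept_bgp_route("10.99.0.0/24", "172.99.0.1", connected))      # False
```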

14.3 Validate Route Distribution – VM on network segment

Now for the final test. From a virtual machine deployed on one of the overlay network segments in vSphere, can I ping out to the 192.50.0.0 network and get a response? Why yes I can! Woot! Woot!

That completes the deployment of NSX-T Edge on my workload domain in VMware Cloud Foundation (VCF). There is quite a bit to it, isn’t there? If you have read this far, you have my greatest admiration. Thank you. I hope you found it informative; I certainly learnt a lot going through this exercise.

Final word of thanks – kudos to Ianislav Trendafilov from our Sofia office who guided me through some of the nuances of BGP. I also found this blog post from Prasad Kalpurekkal very informative.

Well, at this point everything is set for me to deploy PKS and have it run some Kubernetes clusters.

OK – PKS, here I come.