Tiziano Tofoni wrote a lengthy comment on my EVPN in small data center fabrics blog post continuing the excellent discussion we started over a beer last October. Today I’ll address the first part:

I think that EVPN is an excellent standard for those who love Layer 2 (L2) services, we may say that it is an evolution of the implementation of the VPLS service, which addresses some limits in the original standard (RFCs 4761 and 4762).

I might be missing something, but in my opinion there’s no similarity between EVPN and VPLS (apart from the fact that they’re trying to solve the same problem).

VPLS is the result of the organic evolution of the anything-over-MPLS idea:

The process started with P2P transport of Frame Relay frames and ATM cells over MPLS LSPs;

Ethernet transport was the next logical step, first as P2P circuits, and later as emulated LANs;

BGP control plane was introduced to solve scalability challenges caused by lack of automation.

Want to know how to automate VPN service provisioning? I published a simple case study on GitHub; for more details join the Ansible for Networking Engineers webinar or online course. You might also want to explore the solution Francois Herbet built while attending Building Network Automation Solutions online course.

At no point did VPLS evolve from a wire-focused service into an endpoint-focused service like MPLS/VPN (officially known as BGP/MPLS IP VPN).

EVPN is (almost) like MPLS/VPN, but uses MAC and IP addresses as well as IP prefixes as endpoint identifiers. It also solves a number of other problems, including:

Localization of dynamic MAC learning. MAC addresses are gathered by EVPN PE-routers and transported across the core network in BGP updates, whereas VPLS relied exclusively on dynamic MAC learning. You can use this behavior to block unknown unicast flooding across the EVPN backbone.
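Here's a toy Python model (not a real BGP implementation; all names are illustrative) of the difference: each PE advertises its locally-learned MAC addresses into a shared "BGP" table, so a remote PE can drop unknown unicast instead of flooding it the way VPLS would.

```python
# Toy sketch of EVPN control-plane MAC learning. The shared dict stands
# in for BGP EVPN Type-2 (MAC advertisement) routes.

class EvpnPE:
    def __init__(self, name, bgp_table):
        self.name = name
        self.bgp_table = bgp_table   # shared MAC -> PE table ("BGP")
        self.local_macs = set()

    def learn_local(self, mac):
        """Data-plane learning on an access port, then advertised in BGP."""
        self.local_macs.add(mac)
        self.bgp_table[mac] = self.name   # models an EVPN Type-2 update

    def forward(self, dst_mac):
        """Return the egress PE, or None (drop instead of flooding)."""
        if dst_mac in self.local_macs:
            return self.name
        return self.bgp_table.get(dst_mac)  # unknown unicast: None -> drop

bgp = {}
pe1, pe2 = EvpnPE("PE1", bgp), EvpnPE("PE2", bgp)
pe1.learn_local("00:11:22:33:44:55")
print(pe2.forward("00:11:22:33:44:55"))  # PE1 - learned via BGP, no flooding
print(pe2.forward("de:ad:be:ef:00:01"))  # None - blocked, not flooded
```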

Combining L2 and L3 forwarding information. EVPN endpoint information can include both MAC and IP addresses, enabling proxy ARP functionality on EVPN PE-routers and thus further reducing flooding across the EVPN backbone.
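A minimal sketch of that idea (table layout is illustrative, not any vendor's API): because a Type-2 route can carry a MAC *and* an IP address, the ingress PE can answer an ARP request from its EVPN table instead of flooding it.

```python
# Toy EVPN proxy-ARP sketch: IP-to-MAC bindings populated from
# received MAC/IP advertisement routes.

evpn_table = {
    "10.0.0.10": "00:11:22:33:44:55",
}

def handle_arp_request(target_ip):
    """Reply locally when the binding is known; flood only as a last resort."""
    mac = evpn_table.get(target_ip)
    if mac is not None:
        return f"ARP reply: {target_ip} is-at {mac}"  # no backbone flooding
    return "flood"  # unknown target: fall back to flooding the request
```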

Support for host IP addresses and external prefixes. EVPN can transport IP addresses of attached endpoints or external IP prefixes within the same address family, resulting in an (almost) universal L2+L3 control plane.

The missing bit: IP multicast (well, I’m not missing it ;).
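To illustrate the "single address family" idea, here's a toy model (field names are mine, not the BGP wire format): host routes carried in MAC/IP advertisements (route type 2) and external prefixes (route type 5) land in one table and feed a single longest-prefix-match L3 lookup.

```python
# Toy L3 lookup across EVPN route types 2 (host routes) and 5
# (IP prefixes). Data layout is illustrative only.
import ipaddress

evpn_routes = [
    {"type": 2, "prefix": "10.0.0.10/32",   "next_hop": "PE1"},  # host route
    {"type": 5, "prefix": "192.168.0.0/24", "next_hop": "PE3"},  # external prefix
]

def l3_lookup(dst):
    """Longest-prefix match over both route types."""
    addr = ipaddress.ip_address(dst)
    best = None
    for route in evpn_routes:
        net = ipaddress.ip_network(route["prefix"])
        if addr in net:
            if best is None or net.prefixlen > ipaddress.ip_network(best["prefix"]).prefixlen:
                best = route
    return best["next_hop"] if best else None
```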

Edge multihoming. Have you ever tried to implement multihoming at the VPLS edge? The STP tricks you had to use on top of a mesh of pseudowires got ridiculously complex – someone had to write a whole CiscoPress book dedicated primarily to this topic.

EVPN has built-in support for edge multihoming based on Ethernet Segment Identifiers (ESI). Preventing layer-2 forwarding loops is still tricky, but at least it’s a contained problem solved within the standard, not a heap of kludges.

Edge load balancing. The EVPN standard describes how you can use ESI to enable load balancing across the EVPN backbone toward a device that's connected to two PE-routers (MLAG). Using that functionality, some vendors (starting with Juniper) managed to eliminate the need for MLAG clusters at the network edge – a significant reduction in complexity (note: not everyone agrees with me).
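The mechanism behind this is usually called aliasing (RFC 7432): a remote PE sees a MAC advertised behind an Ethernet Segment by one PE, plus per-ES auto-discovery routes for that segment from several PEs, and builds an ECMP next-hop set from all of them. A toy model (data layout is illustrative, not a real BGP RIB):

```python
# Toy EVPN aliasing sketch: MAC routes point at an ESI, per-ES
# auto-discovery (Type-1) routes list the PEs attached to that ESI.

mac_routes = {"00:11:22:33:44:55": "ESI-1"}   # Type-2: MAC -> segment
ad_routes  = {"ESI-1": ["PE1", "PE2"]}        # Type-1: segment -> PEs

def next_hops(mac):
    """All PEs attached to the MAC's segment form the ECMP set."""
    esi = mac_routes.get(mac)
    return ad_routes.get(esi, [])

next_hops("00:11:22:33:44:55")  # ["PE1", "PE2"]
```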

Want to know more?

I’ll describe the differences between three major network virtualization solutions (VMware NSX, Cisco ACI and EVPN-based transport fabrics) in the NSX, ACI or EVPN webinar on March 1st 2018, and Dinesh Dutt will go deep into how EVPN works on March 6th 2018.

Both webinars (including live sessions) are part of standard ipSpace.net subscription.