In VMware Cloud on AWS (VMC), the default behavior of the NSX-T Distributed Firewall (DFW) is to allow all traffic between compute workloads, even across different logical networks (Segments). Today, this default behavior is not configurable, though it is something the NSX team is looking into for a future update of the VMC Service.



Having said that, it is actually pretty straightforward to create a new Deny All policy that achieves the desired behavior of blocking all traffic by default. Since this topic has come up a few times, I figured it would be useful to share the quick fix. Big thanks to Michael Kolos, one of our VMC Customer Success Engineers, who shared the original tidbit with me.

Before we begin, let's take a look at my environment and what I have already deployed in my SDDC to help explain the expected behaviors. I have created two logical networks, sddc-cgw-network-1 and sddc-cgw-network-2, and for each I have provisioned and attached a few VMs.

sddc-cgw-network-1 (192.168.1.0/24): app-01, db-01, photon-01

sddc-cgw-network-2 (192.168.2.0/24): app-02, db-02



Initial Deployment - No DFW Rules

When you initially deploy an SDDC in VMC, there are no DFW rules defined and the default behavior, as mentioned earlier, is to allow all traffic between the VMs. For example, I can ping from app-01 to app-02 and vice versa.

Deny All Rule

To create a "default" Deny All rule, we need to create a new DFW rule at the very bottom of the "Application Rules" category, which is the last category of rules to be evaluated.



Create a new Section, which I named Default - Deny All, and then a new Rule with the following definition:

Rule: Deny All (you can name it anything you want)

Sources: 0.0.0.0/0 (you will need to create a new NSX-T Group containing this IP Membership)

Destinations: Any

Services: Any

Action: Drop

Click the Publish button on the upper right for the new rule to be applied. At this point, all traffic is now blocked between my VMs. I can no longer ping from within the same logical network or across logical networks, which is what we expect.
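For those who prefer scripting this over the UI, the same Group and Section can also be expressed as NSX-T Policy API payloads. The sketch below only builds the JSON bodies; the group and policy IDs, the cgw domain path, and the PATCH endpoints shown in the comments are assumptions based on the typical VMC Policy API layout, not something verified against a live SDDC.

```python
import json

# Hypothetical IDs for illustration; any unique identifiers would work.
GROUP_ID = "deny-all-ips"
POLICY_ID = "default-deny-all"

# NSX-T Policy Group whose membership is the 0.0.0.0/0 IP range
# (i.e. every IP address) -- this becomes the rule's Source.
deny_group = {
    "display_name": "Deny All IPs",
    "expression": [
        {
            "resource_type": "IPAddressExpression",
            "ip_addresses": ["0.0.0.0/0"],
        }
    ],
}

# Security policy (a "Section" in the UI) in the Application category
# with a single DROP rule matching any destination and any service.
deny_policy = {
    "display_name": "Default - Deny All",
    "category": "Application",
    "rules": [
        {
            "display_name": "Deny All",
            "source_groups": [f"/infra/domains/cgw/groups/{GROUP_ID}"],
            "destination_groups": ["ANY"],
            "services": ["ANY"],
            "action": "DROP",
        }
    ],
}

# Against a live SDDC, these payloads would be sent (e.g. with the
# requests library) as PATCH calls to the VMC NSX-T reverse-proxy URL:
#   PATCH {nsx_url}/policy/api/v1/infra/domains/cgw/groups/{GROUP_ID}
#   PATCH {nsx_url}/policy/api/v1/infra/domains/cgw/security-policies/{POLICY_ID}
print(json.dumps(deny_policy, indent=2))
```

The key detail mirrored from the UI walkthrough is that the Source is the 0.0.0.0/0 group while Destinations and Services stay at Any.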

DFW Rule to enable traffic between specific VMs

If we wish to allow connectivity between App-01/DB-01 and App-02/DB-02 respectively, we can easily do so by creating the following definition:

Section: App Stack 01

Rule: App-01 to DB-01

Sources: App Stack 01 VM (an NSX-T Group containing the specific VMs)

Destinations: App Stack 01 VM (the same group)

Action: Allow

Section: App Stack 02

Rule: App-02 to DB-02

Sources: App Stack 02 VM (an NSX-T Group containing the specific VMs)

Destinations: App Stack 02 VM (the same group)

Action: Allow
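The two App Stack sections above have an identical shape, which makes them easy to template when scripting. Below is a minimal, hypothetical helper that produces the corresponding Policy API payloads; the group IDs and the cgw domain path are illustrative assumptions.

```python
def intra_group_allow_policy(section_name: str, rule_name: str, group_id: str) -> dict:
    """Build a one-rule security policy allowing traffic between the
    members of a single NSX-T Group (used as both Source and Destination)."""
    group_path = f"/infra/domains/cgw/groups/{group_id}"  # assumed cgw domain
    return {
        "display_name": section_name,
        "category": "Application",
        "rules": [
            {
                "display_name": rule_name,
                "source_groups": [group_path],       # group as Source
                "destination_groups": [group_path],  # same group as Destination
                "services": ["ANY"],
                "action": "ALLOW",
            }
        ],
    }

# One section per app stack, mirroring the definitions above.
app_stack_01 = intra_group_allow_policy("App Stack 01", "App-01 to DB-01", "app-stack-01-vms")
app_stack_02 = intra_group_allow_policy("App Stack 02", "App-02 to DB-02", "app-stack-02-vms")
```

Because these sections sit above the Default - Deny All section, their ALLOW rules are evaluated first, which is what lets the intra-stack traffic through while everything else still hits the drop rule.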



After publishing these rules, we can ping from App-01 to DB-01 and from App-02 to DB-02, but not from App-01 to App-02 or from DB-01 to DB-02. Creating the NSX-T Groups is as easy as selecting the specific VMs from our VMC Inventory. We could also have created the groups using IP Membership based on the networks (e.g. 192.168.1.0/24 and 192.168.2.0/24), but for demonstration purposes this shows that we can control access at a per-VM level.

DFW Rule to enable connectivity to NAT'ed VM

Managing East/West network traffic using the DFW is pretty easy, but what about inbound or outbound connectivity, especially for a VM that might be NAT'ed to a public IP Address? In this case, you will need to create the required firewall rules on BOTH the Edge Firewall as well as the DFW. One important thing to note is that when specifying the Destination, the reference must be the Private IP Address of the VM being NAT'ed and the Source must reference the Public IP Address. This means you will need to create two NSX-T Groups, one referring to the VM itself which will give you a mapping to the Private IP but also another group that maps to the Public IP (using IP Membership) used with the NAT'ed VM.
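To make the two-group requirement concrete, here is how those groups, plus an inbound SSH DFW rule, might look as Policy API payloads. This is only a sketch: the VM-name Condition, the placeholder public IP, the group paths, and the predefined /infra/services/SSH path are illustrative assumptions, not values taken from my environment.

```python
# Group that resolves to the VM itself (and therefore its private IP),
# here via a VM-name Condition -- used as the DFW rule's Destination.
photon_private_group = {
    "display_name": "PhotonOS VM",
    "expression": [
        {
            "resource_type": "Condition",
            "member_type": "VirtualMachine",
            "key": "Name",
            "operator": "EQUALS",
            "value": "photon-01",
        }
    ],
}

# Group holding the public IP used by the NAT rule -- referenced when
# matching the NAT'ed traffic (e.g. on the Edge Firewall side).
photon_public_group = {
    "display_name": "PhotonOS Public IP",
    "expression": [
        {
            "resource_type": "IPAddressExpression",
            "ip_addresses": ["203.0.113.10"],  # placeholder public IP
        }
    ],
}

# Inbound SSH DFW rule: any source, destined to the private-IP group.
inbound_ssh_rule = {
    "display_name": "Inbound SSH to PhotonOS",
    "source_groups": ["ANY"],
    "destination_groups": ["/infra/domains/cgw/groups/photon-vm"],
    "services": ["/infra/services/SSH"],  # NSX-T predefined SSH service
    "action": "ALLOW",
}
```

Note how the DFW rule's Destination references the group that maps to the VM's private IP, matching the requirement described above; the public-IP group is what the Edge Firewall side of the rule pair would reference.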

Here is a screenshot of my Edge Firewall definition which allows inbound SSH access as well as general outbound connectivity (e.g. Internet access) for my PhotonOS VM.



Here is a screenshot of my DFW definition which also allows inbound SSH access as well as general outbound connectivity (e.g. Internet access) for my PhotonOS VM.



After BOTH sets of rules have been applied, I can then SSH to my PhotonOS VM as well as access the Internet from within the VM.

The DFW in NSX-T is extremely powerful, and I know many customers will be taking advantage of this capability when working in VMC. For those interested in automating DFW rules, be sure to check out my blog post here using PowerShell and the NSX-T Policy API.