Caution: Articles are written for technical, not grammatical, accuracy. If poor grammar offends you, proceed with caution ;-)

I know you're excited to get right down to the meat of the installation, but there is some housekeeping that we need to get out of the way first. There are a number of prerequisites that we need to ensure exist in the environment.

Prerequisites

- A properly configured vCenter Server with at least one cluster. Ideally (2) clusters: (1) Management Cluster and (1) cluster for everything else.
- Each cluster should have at least (2) hosts. (More would be better. Memory will be important.)
- You will need to be using Distributed Virtual Switches (DvSwitch), NOT Standard vSwitches.
- If you are NOT running vSphere 5.5 you will need to have your physical switches configured for Multicast. (Unicast requires vSphere 5.5.)
- You will need a VLAN on your physical network that you can utilize for VXLAN.

To give you an idea, below is the configuration for the “MoaC” lab that I will be working in.

MoaC Configuration

vCenter 5.5 U2b

(3) Clusters:
- Management Cluster with (2) vSphere ESXi 5.5 U2 hosts, 32GB memory, cluster-only DvSwitch using NIC Teaming.
- Services Cluster with (4) vSphere ESXi 5.5 U2 hosts, 196GB memory, cluster-only DvSwitch using LAG.
- Desktop Cluster with (2) vSphere ESXi 5.5 U2 hosts, 112GB memory, cluster-only DvSwitch using LAG.

Physical VLAN trunked to all vSphere hosts in all clusters.

Installing the NSX for vSphere 6.1 OVA

Before we can deploy the OVA you need to download it. You can download it here.

Once downloaded, you need to deploy the OVA. Right-click on the cluster in which you would like to deploy the NSX Manager OVA and choose “Deploy OVF Template”. When the dialog opens, click “Browse”, select the NSX OVA that you downloaded, then click “Next”.

Check the “Accept extra configuration options” check box and then click “Next”.

“Accept” the terms and conditions and click “Next”

Provide a “Name” for the NSX Manager Server, then choose the “folder” to locate the virtual machine in and click “Next”

Next select the “datastore” you wish to provision the NSX Manager on to and click “Next”.

Next we need to select the “network” that you want to place the NSX Manager Server on, then click “Next”.

*Note – You should select a network that has enough IPs for all the NSX components that we will need to deploy. A management network with at least 12 available IP addresses would be ideal. You may want to consider utilizing a dedicated routed VLAN for this. You will later need to either configure an IP Pool that can be utilized for assigning IP addresses to other components, or alternatively use DHCP. This network should be a DvPortGroup on the DvSwitch in the cluster you are deploying to.
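If you want to sanity-check that headcount before you commit to a network, the arithmetic is simple. Here is a sketch using Python's stdlib `ipaddress` module; the subnet and the in-use count are made-up lab values, so substitute your own:

```python
import ipaddress

# Hypothetical management subnet and in-use address count -- adjust for your lab.
mgmt_net = ipaddress.ip_network("192.168.10.0/27")   # /27 = 32 addresses total
already_used = 5                                     # gateway, vCenter, hosts, etc.

# Subtract the network and broadcast addresses, then what is already taken.
free = mgmt_net.num_addresses - 2 - already_used
print(f"{mgmt_net}: {free} addresses free")          # 25 free in this example
assert free >= 12, "Not enough IPs for NSX Manager, controllers, and friends"
```

A /27 with a handful of addresses in use clears the 12-address bar comfortably; a /28 would be cutting it close once the three controllers and the VTEP interfaces start drawing from the same pool.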

Next we need to enter information to customize the NSX Manager server. Enter an “Admin” and “Privilege Mode” password, “FQDN” hostname, “IP Address”, “Netmask”, “Gateway”, “DNS Servers”, “Search Suffix”, and “NTP Servers”. Optionally check the box for “Enable SSH” and then click “Next”.

Review your information, check “Power on after deployment”, and click “Finish”.

Next open your web browser and navigate to the NSX Manager administration site either by “hostname” or “IP address”. Login with the “admin” user and the “password” you set during the OVA deployment.

Once logged in select the “View Summary” button.

Before we continue we want to ensure that “all” components are in the “Running” state. Once all components are in the running state click on the “Manage” tab.

Next verify the “NTP” configuration and optionally configure a “Syslog” Server and then select the “Network” menu item.

Next verify the “Network” configuration to ensure it is correct and select the “SSL Certificates” Menu item.

Next review the “SSL Certificate”. If the certificate doesn’t meet your needs and you would like to generate a “CSR” and provide a different certificate, now would be a really good time to do it; otherwise select the “Backup & Restore” menu option.

Here you may configure scheduled backups of your NSX Manager. I know it’s tempting to put this off until later, but I highly recommend you configure it now, get it over with, and have the peace of mind that you have backups scheduled. Once you configure your backups, click on the “NSX Management” menu item.



Now we will configure the “Lookup Service (SSO)” and the integration to the “vCenter Server”. You will want to use the “administrator@vsphere.local” account for the user account for both services. First select “Configure” for the “Lookup Service”.

For the lookup service, input the “FQDN” or “IP” address of the “vCenter SSO Server”. This is most likely your vCenter Server, unless you installed the SSO server as a separate component on a separate server. Input the port number “7444”, the SSO admin user “administrator@vsphere.local”, and the appropriate “password”, then click “OK”. Once complete, your “Lookup Service” status should be “Connected”. (UPDATE: For vSphere 6 use 443 as the port, NOT 7444.)

Now we need to connect NSX Manager to “vCenter Server”, so click “Configure” for “vCenter Server”. When the dialog appears, input the “FQDN” or “IP” address of your “vCenter Server”, input “administrator@vsphere.local” as the “User Name” and the appropriate “Password”, then click “OK”.

Now let’s log in to “vCenter” as “administrator@vsphere.local”, and you should see a “Networking & Security” icon on the home screen. Click on the “Networking & Security” icon.

Once the “NSX Home” page appears click on the “Installation” menu item.

Once the “Installation” page appears you will need to add an “NSX Controller Node”. We will actually be deploying three “Controller Nodes”. While you can run NSX with one “controller node”, the recommended number of nodes is three due to the way the controllers handle master node elections.
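The three-node recommendation comes down to majority-based elections: a cluster needs more than half of its members up to elect a master, so an even count buys you nothing. A quick illustration of that quorum arithmetic (my own sketch, not an NSX API):

```python
# Majority-based election: a cluster of n controllers needs floor(n/2) + 1
# members alive to elect a master.
def quorum(n: int) -> int:
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    return n - quorum(n)

for n in (1, 2, 3):
    print(f"{n} controller(s): quorum={quorum(n)}, "
          f"survives {tolerated_failures(n)} failure(s)")
```

Note that two controllers tolerate exactly as many failures as one (zero), while three tolerate one, which is why three is the sensible minimum for a highly available control plane.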

Once the “Add Controller” dialog opens you will need to select the “Datacenter”, the “Cluster/Resource Pool”, “Datastore”, “Host”, and “Folder” where the “NSX Controller” will be deployed. We next need to select the “Network/DvPortGroup” that the “NSX Controller” will be connected to. You will then need to assign an “IP Pool”; you will have the opportunity to create one when you click “Select”. The instructions for the “IP Pool” creation are below. Once you complete the “IP Pool” assignment, set a password and click “OK”.

*Note – You may only deploy (1) controller at a time. You cannot deploy any additional controllers until this one finishes. If you attempt to deploy another while one is still deploying, it will fail.



On the “Add IP Pool” dialog, give the pool a “Name”, define the “Gateway” address of the network you are connecting the “NSX Controller” to, define the network’s “Subnet Bits”, the “Primary & Secondary DNS” servers, and the “DNS Suffix”, then define an “IP Address Range” in the “Static IP Pool” field and click “OK”.
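Before clicking “OK”, it is worth double-checking that the static range actually falls inside the network implied by the gateway and subnet bits; a mismatch here produces controllers that deploy but cannot talk to anything. A quick check with Python's `ipaddress` module (all the addresses below are made-up lab values):

```python
import ipaddress

# Hypothetical pool settings -- substitute your own gateway/bits/range.
gateway     = ipaddress.ip_address("10.20.30.1")
subnet_bits = 24
start       = ipaddress.ip_address("10.20.30.50")
end         = ipaddress.ip_address("10.20.30.59")

# Derive the network from the gateway + prefix, then test the range.
net = ipaddress.ip_network(f"{gateway}/{subnet_bits}", strict=False)
assert start in net and end in net, "Static range is outside the pool's network"
assert net.network_address < start <= end < net.broadcast_address

size = int(end) - int(start) + 1
print(f"Pool OK: {size} addresses in {net}")
```

Ten addresses is plenty for three controllers, with room left over if you later point other components at the same pool.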

When you add additional controllers, take notice that you are not required to input a password, as seen below. This is because the controllers are part of a high availability cluster and share the same login info.

Once you have finished deploying your “NSX Controllers” they should all have a “Status” of “Normal”.

*Note – You will notice my controllers are not 1, 2, & 3. That is because I have removed and re-deployed my controllers in the lab a few times to understand how the overall NSX deployment would behave.

Next click the “Host Preparation” menu item on the “Top Menu”. Here we need to install the vSphere (ESXi) kernel modules and configure the host “VXLAN” settings. To install the kernel modules, simply click the “Install” link under “Installation Status”. Once you click “Install”, you can view the status of each host by clicking the “Arrow” next to the “Cluster”.

Once the “Clusters” status is “Ready” click the “Configure” link under “VXLAN” to configure “VXLAN” for the cluster.

Once the “Configure VXLAN Networking” dialog appears, select the “DvSwitch” on which to configure the “VXLAN VMKernel” interface. Input the “VLAN” you would like to use for the “VXLAN” network traffic, and set the “MTU” to “1600” to account for the “VXLAN” encapsulation. You may choose to use “DHCP” or “IP Pool” for the “IP Address” assignments for the “VXLAN VMKernel” interfaces; if you choose “IP Pool”, you will need to create an additional pool that includes IP addresses in the “VLAN” that you are assigning to the “VXLAN VMKernel”. Next you will need to set the “VMKNic Teaming Policy”. This is very important: you must set this appropriately according to how your NIC teaming is configured on the “DvSwitch”. If this is not set to the appropriate “Teaming Policy”, no traffic will traverse the “VXLAN” interface. You must do this for each cluster, and make sure to use the same VLAN for all clusters.

*Note – To support the “MTU 1600” you must have “Jumbo Frames” enabled on the physical switch.
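Where does 1600 come from? VXLAN carries the VM's entire Ethernet frame inside an outer IP/UDP/VXLAN wrapper, and the transport MTU has to cover all of it. A back-of-the-envelope tally (assuming IPv4 and an untagged inner frame):

```python
# VXLAN encapsulation overhead on top of a standard 1500-byte VM MTU.
inner_payload  = 1500   # the VM's own MTU
inner_ethernet = 14     # the encapsulated frame's Ethernet header
outer_ipv4     = 20
outer_udp      = 8
vxlan_header   = 8

required = inner_payload + inner_ethernet + outer_ipv4 + outer_udp + vxlan_header
print(f"Minimum transport MTU: {required}")   # 1550
assert required <= 1600
```

That lands at 1550, so the 1600 setting simply adds headroom (e.g. for an 802.1Q tag on the inner frame, or IPv6 outer headers) while still requiring jumbo-frame support end to end on the physical network.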

Here you can see the “VMKernel” that was created for the “VXLAN”.



Here you can also see the “VXLAN VMKernel” and “IP Address”.

Once you have all the “VXLAN” configuration completed your “Host Preparation” screen should look like the below image.

Once all is complete select the “Logical Network Preparation” and then select “Segment ID”.

On the “Segment ID” configuration click “Edit”.

*Note – Segment IDs are like VLAN IDs for VXLAN. Each “Logical Switch”, a.k.a. “vWire”, we create will utilize a segment ID.

Input a “Segment ID Pool” of “5000-5999” and click “OK”.
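To make the VLAN-ID analogy concrete: the 5000-5999 pool gives you 1000 segment IDs, and each logical switch you later create consumes exactly one. A toy model of that consumption (the class and switch names are my own invention, not an NSX interface):

```python
# Toy model: each logical switch ("vWire") draws one segment ID from the pool.
class SegmentIdPool:
    def __init__(self, start: int, end: int):
        self.free = list(range(start, end + 1))
        self.in_use: dict[str, int] = {}

    def allocate(self, switch_name: str) -> int:
        seg = self.free.pop(0)          # hand out the lowest free ID
        self.in_use[switch_name] = seg
        return seg

pool = SegmentIdPool(5000, 5999)
print(pool.allocate("web-tier"))        # 5000
print(pool.allocate("app-tier"))        # 5001
print(len(pool.free))                   # 998 IDs remaining
```

A thousand IDs is far more than a lab will ever need, but the range matters in larger environments where multiple NSX Managers must be given non-overlapping pools.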

The next part of the configuration is “Transport Zones”. These deserve their own article, and therefore I will not be covering them as part of this all-encompassing installation post. I will be continuing this in the article “NSX for vSphere 6.1 – Transport Zones”.