Last time, we went through the initial setup steps to get NSX 6.3 deployed in your vSphere 6.5 environment. Today, we’ll finish all of the initial configuration and get your environment to a place where you can start deploying the tasty bits of NSX, like distributed firewalling.
When you first log in with your own account, you may see the following error. This is because rights to the NSX installation and NSX Manager must be granted explicitly; they are initially given only to the account you used to install the service, generally the firstname.lastname@example.org user.
To fix it, log in as the user you used for the installation, go to the Networking & Security section, and select NSX Managers:
Once there, select the NSX Manager on the Navigator tab and, under Manage, select Users. Add the user you'd like to have access to the system (yourself, at least!) and assign the appropriate rights. For my lab, I've given myself the Enterprise Administrator role, which is the NSX god role. You can then log back into the Web Client, where you'll see the NSX Manager listed, and continue with configuring NSX.
Select Installation under Networking & Security. Now we're going to deploy the NSX Controller nodes. In a production environment, three controllers are deployed for each NSX instance. As the boys from Monty Python said, "the number shall be three; two is not enough and four is right out." I'm paraphrasing, obviously, but the gist is that there are exactly three NSX Controller nodes in a supported NSX implementation. I'm only going to deploy one due to resource limitations, but that's only acceptable in a lab environment.
Click on the green plus symbol to add a controller and fill in the dialog box:
If you haven’t created any IP Pools yet, you’ll need to do that before you can continue deploying the controller.
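If you prefer scripting over clicking, controller deployment can also be driven through the NSX Manager REST API (a POST to /api/2.0/vdn/controller). The sketch below only builds the XML request body; all the managed-object IDs and the password are placeholders you'd substitute from your own environment, and you should verify the element names against the API guide for your exact NSX build:

```python
import xml.etree.ElementTree as ET

def controller_spec(name, ip_pool_id, resource_pool_id,
                    datastore_id, connected_to_id, password):
    """Build the <controllerSpec> body for POST /api/2.0/vdn/controller."""
    spec = ET.Element("controllerSpec")
    for tag, value in [("name", name),
                       ("ipPoolId", ip_pool_id),
                       ("resourcePoolId", resource_pool_id),
                       ("datastoreId", datastore_id),
                       ("connectedToId", connected_to_id),
                       ("password", password)]:
        ET.SubElement(spec, tag).text = value
    return ET.tostring(spec, encoding="unicode")

# Placeholder managed-object IDs -- look yours up in your environment.
body = controller_spec("nsx-controller-1", "ipaddresspool-1",
                       "domain-c7", "datastore-11",
                       "dvportgroup-20", "VMware1!VMware1!")
print(body)
```

You'd POST this body (with an XML content type and basic auth) to the NSX Manager; deploying the second and third controllers is just a matter of repeating the call with a new name.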
Click on Installation under Networking & Security and then Host Preparation. Once there, select the cluster you want to install NSX on and under Actions click Install. As you can see, I’m following VMware’s recommended practice of having a resource and a management cluster. If you do this in production, you may also have an Edge cluster to hold any Edge devices you deploy.
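Host preparation can likewise be triggered through the API by POSTing the cluster's managed-object ID to /api/2.0/nwfabric/configure. A minimal sketch, assuming a hypothetical compute cluster MOID of domain-c26 (check yours in the vSphere MOB or via PowerCLI):

```python
import xml.etree.ElementTree as ET

def host_prep_body(cluster_moid):
    """Build the body for POST /api/2.0/nwfabric/configure,
    which installs the NSX VIBs on every host in the cluster."""
    cfg = ET.Element("nwFabricFeatureConfig")
    res = ET.SubElement(cfg, "resourceConfig")
    ET.SubElement(res, "resourceId").text = cluster_moid
    return ET.tostring(cfg, encoding="unicode")

print(host_prep_body("domain-c26"))  # domain-c26 is a placeholder MOID
```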
This is where we install the ESXi VIBs and then complete the configuration of the VXLAN transport network. If you have problems, as I did, you can follow the steps in this KB: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2075600
I ended up trying to manually install the VIB on each of my hosts using this KB about a problem with vSphere Update Manager: https://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2053782. It still failed, but after patching my ESXi hosts to the latest version using the great instructions Vladan has here, I was able to install the VIBs from the GUI.
Once that’s finished, we’ll move on to the creation of the VTEPs (VXLAN Tunnel Endpoints). This will create a portgroup on the distributed virtual switch that I already created. Ultimately, it creates a new VMkernel port on each host, attached to that portgroup, which the system uses as the VTEP.
You’ll need a larger-than-standard MTU for this to work correctly, as the VXLAN encapsulation wraps each original frame in roughly 50 bytes of outer headers. The minimum is 1550 bytes, although the recommended value is 1600. Keep in mind, the underlying physical network must support the increased value end to end. When the system asks for IP addresses for the VTEPs, I recommend using an IP Pool like we did for the controller(s).
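As a sanity check, the 1550-byte minimum falls straight out of the VXLAN header math. A quick sketch (sizes assume an IPv4 outer header and an untagged inner frame):

```python
# Why the VTEP MTU must be at least 1550 bytes.
OUTER_IP = 20     # outer IPv4 header
OUTER_UDP = 8     # outer UDP header
VXLAN_HDR = 8     # VXLAN header
INNER_ETH = 14    # the encapsulated guest Ethernet header
INNER_MTU = 1500  # standard guest IP MTU

overhead = OUTER_IP + OUTER_UDP + VXLAN_HDR + INNER_ETH
min_mtu = INNER_MTU + overhead
print(overhead, min_mtu)  # 50 1550
```

The recommended 1600 simply adds headroom, for example for an inner VLAN tag (+4 bytes) or larger outer headers.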
Click on Not Configured under the VXLAN column, choose the VLAN and IP addressing scheme and click Finish:
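The same VXLAN configuration can be pushed through /api/2.0/nwfabric/configure using the VXLAN feature ID. The element names below follow the NSX 6.x API guide, but treat this as a sketch and verify against the documentation for your build; the cluster, DVS, and IP pool IDs are all placeholders:

```python
import xml.etree.ElementTree as ET

def vxlan_config_body(cluster_moid, dvs_moid, ip_pool_id,
                      vlan_id="0", mtu="1600", vmknics="1",
                      teaming="FAILOVER_ORDER"):
    """Build the VXLAN body for POST /api/2.0/nwfabric/configure."""
    cfg = ET.Element("nwFabricFeatureConfig")
    ET.SubElement(cfg, "featureId").text = "com.vmware.vshield.vsm.vxlan"

    # Per-cluster mapping: which DVS, VLAN, IP pool, and VTEPs per host.
    cluster = ET.SubElement(cfg, "resourceConfig")
    ET.SubElement(cluster, "resourceId").text = cluster_moid
    spec = ET.SubElement(cluster, "configSpec", {"class": "clusterMappingSpec"})
    sw = ET.SubElement(spec, "switch")
    ET.SubElement(sw, "objectId").text = dvs_moid
    ET.SubElement(spec, "vlanId").text = vlan_id
    ET.SubElement(spec, "vmknicCount").text = vmknics
    ET.SubElement(spec, "ipPoolId").text = ip_pool_id

    # Per-DVS context: MTU and teaming policy for the VTEP portgroup.
    vds = ET.SubElement(cfg, "resourceConfig")
    ET.SubElement(vds, "resourceId").text = dvs_moid
    vctx = ET.SubElement(vds, "configSpec", {"class": "vdsContext"})
    sw2 = ET.SubElement(vctx, "switch")
    ET.SubElement(sw2, "objectId").text = dvs_moid
    ET.SubElement(vctx, "mtu").text = mtu
    ET.SubElement(vctx, "teaming").text = teaming
    return ET.tostring(cfg, encoding="unicode")

print(vxlan_config_body("domain-c26", "dvs-30", "ipaddresspool-2"))
```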
When it’s done, you should see the following:
At this point, NSX is configured and operating on your cluster. You can see the portgroup created for the VXLAN traffic by going to Networking and selecting the portgroup on your distributed virtual switch:
Next time, I’ll dive deeper into some of the cooler features of NSX 6.3 and what things you might deploy initially to justify the money you’ll have to spend for the licensing!