In an earlier post, I discussed my focus on a new network design for the lab. This post continues along that journey with a focus on vCenter, plugins, ESXi hosts, and gotchas within a vSphere 5.5 environment. Hope it helps!

Plugin URLs

The first thing I wanted to look at was my vCenter plugin URLs. These are the addresses used to talk with the plugin. If they were hard-coded to an IP address, they’d need to be updated to either the new IP address or a DNS name. I figured it would be easiest to use DNS names going forward, so that future IP changes are trivial.

The fastest way to dump all of the URLs for my plugins was to crank out a quick PowerCLI script. You can snag a copy from my PowerCLI GitHub repo.
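If you’d rather not grab the repo, the gist of it is a quick pass over the ExtensionManager view. Here’s a rough sketch (not the exact script from the repo), assuming an active Connect-VIServer session to vCenter:

```powershell
# Hypothetical sketch: list every registered plugin (extension) and its URL(s).
# Assumes Connect-VIServer has already been run against the vCenter Server.
$extMgr = Get-View ExtensionManager
$extMgr.ExtensionList | ForEach-Object {
    $ext = $_
    foreach ($server in $ext.Server) {
        [PSCustomObject]@{
            Key = $ext.Key
            Url = $server.Url
        }
    }
} | Format-Table -AutoSize
```

Anything showing an IP address instead of a DNS name in the Url column is a candidate for fixing.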

Most of them were fine, or pointed towards the local server itself, but a few – such as my vCenter Operations Manager plugin – would need to be fixed.

Here’s what I did for each entry that I found bound to an “old” IP address:

Removed the plugin using the instructions specific to that plugin (this varies per plugin; make a backup just in case).

Changed the IP address of the server hosting the plugin.

Made sure DNS was up-to-date for that plugin and pinged it from the vCenter Server to validate a fresh DNS entry – use ipconfig /flushdns on the vCenter server if you’re impatient.

Connected the plugin to vCenter using the vCenter DNS name, not the IP address (because vCenter’s IP address is going to change).

Validated that the plugin was operational and healthy. This may take a little while depending on the plugin.
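For the DNS validation step, I like to do it straight from a PowerShell prompt on the vCenter Server. The hostname below is a placeholder for whatever server hosts your plugin:

```powershell
ipconfig /flushdns                 # clear any stale cached records
Resolve-DnsName vcops.lab.local    # needs PowerShell 3.0+ (Server 2012 has it)
ping vcops.lab.local               # confirm reachability on the new address
```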

I also went through any plugins that looked healthy to see how they were connecting back into vCenter. For each one of those, make sure the plugin’s server is using vCenter’s DNS entry, not the IP address. This is likely a good idea for any scenario, but I wanted to do a sanity check to be sure.

Changing vCenter’s IP Address

For whatever it may be worth, my vCenter Server runs on Windows Server 2012. If you’re using the Linux appliance, I’m not sure how much this post is going to help, but I’m sure William Lam can offer some tips if you break it. 🙂

At this point I felt that enough pre-work had been done to mitigate most of the risks I could proactively avoid. It was time to change the IP address. Here’s the list of steps I used:

Created backups of the vCenter Server VM and underlying SQL database. I use Veeam B&R 7 in the lab for this.

Set DRS to manual mode to avoid anything moving around.

Identified the ESXi host running the vCenter VM and connected directly to the host with the vSphere Client.

Closed any sessions I had open to the vCenter Server (Web Client, vSphere Client, etc.).

Opened a console window to the vCenter Server by way of the ESXi host.

Stopped all VMware services.

Changed the IPv4 address and IPv4 gateway. I also gave the server an IPv6 address.

Restarted the server.

Put DRS back to fully automated (optional based on your setup).
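The service-stop step can be done from an elevated PowerShell prompt on the vCenter Server itself. This is a sketch that assumes all of the relevant services have display names starting with “VMware” (true on my 5.5 install, but verify on yours):

```powershell
# Stop every VMware service prior to the address change.
Get-Service -DisplayName "VMware*" | Stop-Service -Force
```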

I let this sit for about 10 minutes to give vCenter time to load up all of its services again, then tried connecting to it. I was sort of blown away that it connected successfully and everything showed healthy. I really thought something would break or I had forgotten a step.

Angry ESX Agent Manager

I diddled around in vSphere for a while to see how things were going, and did notice that the vSphere ESX Agent Manager remained in a red alert status for more than 15 minutes. I figured it must be genuinely broken, because all of the other alerts had vanished after a few minutes.

I’ve seen this issue before, and KB 2009934 provides solid steps on what to do. In the end, I updated “localhost:443” to the FQDN of my server, which matches my SSL certificate. Issue solved.
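If you want to see what URL the EAM extension has registered before touching anything, PowerCLI can pull it from the ExtensionManager. I believe the extension key is com.vmware.vim.eam, but double-check it in the MOB for your build:

```powershell
# Read-only check of the URL the EAM extension registered with vCenter.
$extMgr = Get-View ExtensionManager
$extMgr.FindExtension("com.vmware.vim.eam").Server | Select-Object Url
```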

I also found out that my VMware Update Manager (VUM) service was configured to listen on the old IP address. Rather than tinker with it, I just uninstalled VUM and then installed it fresh while pointing to my existing database.

Make sure that when the wizard asks which interface to use, you change the drop-down from the IP address (the default) to the FQDN of the server.

IP Changes for the ESXi Management Host

It makes sense to work from top to bottom, and since the management components were now on new IP addresses it was time to do the same for my ESXi hosts.

I began work on my management host, which runs my critical workloads that are the heart of my lab and home services. Because my lab services run on this host, I do not rely on DNS entries or the VDS. While a tier 3 or tier 4 data center is nearly guaranteed to avoid power outages, my lab is not nearly so well protected, and I have to be able to survive a complete shutdown of everything.

I found out that ESXi hosts don’t like changing the IPv4 address, IPv4 gateway, and VLAN ID all at the same time. I could change the address and VLAN ID together, but adding the gateway to the mix caused the task to error out. So I dropped a jump box on the Server VLAN, which let me connect to the ESXi host after the IPv4 address changed; being layer 2 adjacent, the stale gateway didn’t matter.

Here’s what I came up with for the management host:

Updated the ESXi host’s DNS entry to the new IPv4 address.

Disconnected the host from vCenter to flush out the database entry.

Changed the IPv4 address and VLAN ID of vmk0. At this point I lost connection to the host.

Reconnected to the ESXi host using the new address and updated the IPv4 gateway.

Added the host back into vCenter and validated all was good.
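The vmk0 steps above can be sketched in PowerCLI against a direct connection to the host. The addresses, VLAN ID, and port group name below are placeholders for my lab values; adjust to taste:

```powershell
# Assumes Connect-VIServer was run against the ESXi host itself, not vCenter.
Get-VirtualPortGroup -Name "Management Network" | Set-VirtualPortGroup -VLanId 40
Get-VMHostNetworkAdapter -VMKernel -Name vmk0 |
    Set-VMHostNetworkAdapter -IP 192.168.40.21 -SubnetMask 255.255.255.0 -Confirm:$false
# The session drops here. Reconnect on the new address, then fix the gateway:
Get-VMHostNetwork | Set-VMHostNetwork -VMKernelGateway 192.168.40.1
```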

This process went flawlessly. I also created an IPv6 address on all of my vmkernel ports for future fun.

IP Changes for the ESXi Resource Hosts

Similar work for the resource hosts, except I was able to use maintenance mode.

Updated the ESXi host’s DNS entry to the new IPv4 address.

Put the host into maintenance mode and waited for VMs to migrate off.

Disconnected the host from vCenter to flush out the database entry.

Changed the IPv4 address and VLAN ID of vmk0. At this point I lost connection to the host.

Reconnected to the ESXi host using the new address and updated the IPv4 gateway.

Added the host back into vCenter and validated all was good.
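If I ever do script this, the per-host flow would look something like the sketch below, run against vCenter. The host name, cluster, and credentials are placeholders:

```powershell
# Drain the host first (assumes DRS can evacuate the running VMs).
Set-VMHost -VMHost "esx2.lab.local" -State Maintenance -Evacuate:$true
# ...change vmk0's address and VLAN ID as on the management host, then rejoin:
Add-VMHost -Name "esx2.lab.local" -Location (Get-Cluster "Lab") -User root -Password "changeme" -Force
```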

Pretty straightforward stuff. If I had more than three hosts to do, I would have looked into building a script, but my time was limited and the steps above only took about 30 minutes. It also gives me the itch to go back to using Razor, or to look into Hanlon, for my lab hosts. I need a cloning machine. 🙂

Thoughts

The remaining servers were easy to move around. Don’t forget to update your DNS SOAs for the domain controllers after changing their IP addresses. I still have to take the time to update my NAS IP addresses to the new subnet; I’ll probably just power down the lab VMs for this to avoid any weird disk access issues. Once that’s done, I’ll go back and edit the NFS vmkernel ports to the new NFS subnet and VLAN ID.

So we’re clear: this is what I did in my lab. It worked for me and went reasonably quickly. I didn’t do this for fun and giggles; it was a means to an end. I imagine folks have better ways to do this – please share them – while understanding that your environment is most certainly constructed differently and will require adjustments. If this helps out a bit, great, but don’t take it as an authoritative view on how things must be done. 🙂