Introduction

Sometimes, you badly need to provide your VMware VMs with more RAM or vCPUs without shutting them down. True, there’s a trick allowing you to do that – CPU Hot-Plug and Memory Hot-Add. In this article, I’ll discuss both these features and how to use them in different environments.

Why am I writing an article about Hot-Plug and Hot-Add in 2018 even though they were introduced back in ESXi 4.0? For some reason, there are very few in-depth studies of how Hot-Add and Hot-Plug behave in different environments. Sure, you can find plenty of good articles about how to enable these features, why you need them, and when you may just want to leave them disabled (yes, they are disabled by default). There have been some studies for the Windows guest OS family, but very little is known about how Hot-Add/Hot-Plug work in Linux. Well, I hope to fill that gap with this article.

Hot Add & Hot Plug features in a nutshell

To start with, let’s clarify the difference between “hot-add” and “hot-plug”. Actually, there’s no difference at all; it’s just semantics. You typically add extra RAM and plug in additional CPUs, right? For my money, it just wouldn’t sound right the other way around! But if you are OK with saying “hot-add CPU”, that’s fine too; it isn’t anything serious. VMware themselves don’t bother with all that and refer to the features as “CPU Hot Plug” and “Memory Hot Plug” in the vCenter interface.

Now that we have clarified some non-technical things, I want to talk a bit about both technologies as a whole. These features appeared back in ESXi 4.0. As the names suggest, they allow adding more RAM (Memory Hot Add) and vCPUs (CPU Hot Plug) without shutting down the VM. Looks awesome! This way, there is no downtime if you just need to add some more power to your VMs. At this point, I’d like to mention that you cannot unplug vCPUs or reduce RAM on the fly. Well, I think there’s a good reason for that: just imagine how a SQL server would behave if you pulled memory out of the system spontaneously!

Another thing I’d like to mention is resource overhead. You see, when Memory Hot-Add is enabled, the guest OS pre-allocates some kernel resources to handle any possible future memory changes. The kernel may thus reserve resources for RAM that you will never actually use! That’s why the feature may cause the maximum size of the paged pool to be smaller than you’d expect, and that’s probably why Memory Hot-Add is disabled in ESXi by default. But no panic: all this overhead is a matter of a few percent.

Requirements and limitations

Before enabling Hot-Add/Hot-Plug, it’s good to know the requirements and limitations:

Your VMs need to run at least virtual hardware version 7.

Your VMware vSphere edition should be higher than Advanced.

You can only hot-add/hot-plug. There’s no way to “hot-reduce”/“hot-unplug”.

The guest OS must support the features. Take a look at the VMware Compatibility Guide on this matter (select Guest OS from the What are you looking for: dropdown, type the guest OS name into the Search compatibility guide: field, and press Search).



Some guest OS from the Linux 64-bit and Windows 32-bit families have trouble with hot-adding memory if the VM starts with less than 3 GB (3072 MB) of RAM. Also, note that you cannot hot-add more than 3 GB of RAM to those OS.

Your current guest OS license may prevent you from adding extra vCPUs or memory. You see, hot-plugging/hot-adding may push you beyond the vCPU or memory limit of that license.

You cannot use vNUMA with CPU Hot-Plug enabled, although the host’s physical NUMA still works as usual!

You cannot change the number of cores per vCPU socket on the fly. So, you need to stick to the number of cores set before you boot the VM.

How to enable Hot-Add/Hot-Plug

Due to all those limitations, both features are disabled by default. To enable them, tick the appropriate checkboxes in the VM’s Virtual Hardware settings.
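If you prefer to skip the UI, the same switches can be set as configuration parameters in the VM’s .vmx file while the VM is powered off. A minimal sketch (these are the commonly used key names; double-check them against your ESXi version before relying on them):

```
mem.hotadd = "TRUE"
vcpu.hotadd = "TRUE"
```

After the next power-on, the corresponding checkboxes in Virtual Hardware appear enabled.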

The VM configuration

First, let’s look at the initial VM configuration.

1 x Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz;

1 x 4 GB RAM;

1 x 20 GB HDD.

Well, I guess it’s pretty boring to play around with just a single guest OS, so I checked how Hot-Add and Hot-Plug work in a bunch of them! Here’s the list:

Windows Server 2016 (Version 10.0.14393);

Windows Server 2012 R2 (Version 6.3.9600);

Windows Server 2008 R2 (Version 6.1.7601);

Suse Linux Enterprise Server 12 SP3;

Ubuntu Server 16.04.5 LTS;

CentOS 7.4.1708 (Core);

FreeBSD 11.1 Release;

Debian 9.4.0 (stretch).

At this point, I’d like to say a couple of words about FreeBSD 11.1 Release. Spoiler: it doesn’t support Hot-Add or Hot-Plug. Why do I still list it here? Just to check how vCenter behaves.

For each guest OS, I created its own VM on the ESXi 6.7 host. To keep things clear, I named the VMs after their guest OS. Here is the VM “destination configuration”:

4 x Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz;

1 x 8 GB RAM.

Hot-adding

I added the host to vCenter. I’m going to manage it via the Flash console today. Next, I installed each guest OS on a separate VM named after that OS.

For each VM, I allocated 8 GB of RAM and plugged in 4 vCPUs at once. Afterward, I checked whether the change had gone smoothly with Task Manager (for Windows) or with the top and lscpu commands (Linux). For FreeBSD, I ran the following command:

sysctl -a | egrep -i 'hw.model|hw.ncpu'
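For the Linux guests, the same check can be scripted in one go. A minimal sketch, assuming standard Linux tooling (nproc from coreutils and /proc/meminfo):

```shell
#!/bin/sh
# Report the vCPU count and total RAM from inside a Linux guest.
echo "vCPUs: $(nproc)"
awk '/^MemTotal/ {printf "RAM:   %d MB\n", $2 / 1024}' /proc/meminfo
```

Run it before and after the hot-add to compare the numbers.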

Windows

Let’s start with Windows. Here’s how you change the VM configuration in Windows Server 2012.

Now, let’s see whether the changes have been applied smoothly. Note that you need to restart Task Manager to see the changes. In Windows Server 2008 R2, you can do that right from the window that pops up once you confirm the configuration changes. For newer OS (Windows Server 2012 R2 and Windows Server 2016), just don’t forget to restart Task Manager yourself.

So, here’s what I saw in Task Manager.

As you can see, all the changes were applied, and there was no need to reboot the VM! Note that the NUMA nodes option is grayed out. Well, there’s always a price to pay for cool things, you know…

To be honest, I did not check how adding RAM on the fly affects the applications that were running at that moment. The system did not crash, so let’s hope the applications are doing well too.

Linux

Now, let’s try changing the Linux VMs’ configurations on the fly. I used the native utilities here (apart from sysctl in FreeBSD).

FreeBSD

One more time: FreeBSD does not support the Hot-Add/Hot-Plug features. I just wanted to show how the hypervisor behaves if you attempt to change a VM’s configuration on the fly when the guest OS does not support it. And the thing is, you won’t see any warnings or error messages pop up! The ESXi host just accepts the changes, but you won’t see them applied until you reboot the VM. Let me show you exactly what I’m talking about.

First, I change the VM configuration just as if the guest OS supported Hot-Add/Hot-Plug.

It looks as if the changes were applied! However, the top output won’t show any changes until you reboot the VM. Find below the top outputs before and after rebooting the FreeBSD VM.

To check whether the number of vCPUs changed, I ran the following command:

sysctl -a | egrep -i 'hw.model|hw.ncpu'

Here are the command outputs before and after rebooting the VM.

Well, this ESXi behavior looks weird to me, but I hope there’s a good reason for it. If you know one, please share it in the comments.

Ubuntu and Debian

Now that we know the ESXi host silently accepts configuration changes on a running FreeBSD VM, let’s look at how you hot-add RAM and hot-plug vCPUs in a guest OS that “allows” such changes. Here, I use lscpu to trace the vCPU number change and top to track memory changes.

Even though they are said to be capable of Hot-Add, Ubuntu and Debian behave strangely. Strictly speaking, Ubuntu is not listed as supporting Hot-Plug, but let’s see how it behaves after a hot-plug anyway! In both cases, RAM was added only after a VM reboot. Changing the number of vCPUs also works weirdly: the system allows hot-plugging, but the newly-added processors stay off-line. Well, this phenomenon has something to do with how Ubuntu and Debian work. Do you have any ideas on how to explain it?

As usual, I hot-add RAM and hot-plug extra vCPUs.

The ESXi Web interface says that everything is great. Let’s run lscpu now to make sure that everything is alright.

The system silently accepts the configuration change… but it does not display the right number of sockets until you reboot the VM.
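One possible explanation is that the hot-plugged vCPUs arrive in the off-line state, and nothing in these distros brings them on-line automatically. In the Linux kernel’s sysfs interface, each CPU exposes an online flag you can flip by hand. Here is a hedged sketch: the sysfs root is a parameter (defaulting to the real path) purely so the logic can be tried without touching a live system; run it as root on an actual guest:

```shell
#!/bin/sh
# Bring any off-line hot-plugged CPUs on-line via sysfs.
# Pass an alternative root directory for a dry run; defaults to the real path.
online_cpus() {
    root="${1:-/sys/devices/system/cpu}"
    for cpu in "$root"/cpu[0-9]*; do
        # cpu0 normally has no 'online' file and cannot be off-lined anyway
        [ -f "$cpu/online" ] || continue
        if [ "$(cat "$cpu/online")" = "0" ]; then
            echo 1 > "$cpu/online"
            echo "brought $(basename "$cpu") online"
        fi
    done
}
```

After running it on the guest, lscpu should report the new processors as on-line without a reboot.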

The amount of RAM also remains the same until you reboot the VM. Check out the top output. The first and second outputs were taken before and after hot-adding RAM; the last one I got after rebooting the VM.

So, at least out of the box, you cannot hot-add RAM in Ubuntu and Debian.
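A likely reason is similar to the vCPU case: in Linux, hot-added RAM shows up as memory blocks under sysfs, and those blocks may arrive in the offline state unless the distro ships a udev rule that on-lines them. A sketch in the same spirit, with the sysfs root as a parameter so the logic can be tried safely; run it as root on an actual guest:

```shell
#!/bin/sh
# On-line any hot-added memory blocks that arrived in the 'offline' state.
# Pass an alternative root directory for a dry run; defaults to the real path.
online_memory() {
    root="${1:-/sys/devices/system/memory}"
    for blk in "$root"/memory[0-9]*; do
        [ -f "$blk/state" ] || continue
        if [ "$(cat "$blk/state")" = "offline" ]; then
            echo online > "$blk/state"
            echo "brought $(basename "$blk") online"
        fi
    done
}
```

If this is indeed what happens on Ubuntu and Debian, the extra RAM should appear in top right after the blocks are on-lined, with no reboot.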

Any other Linux OS that supports Hot-Add/Hot-Plug

To make a long story short, both features work as they should in only two of the reviewed Linux OS: Suse Linux and CentOS.

Change parameters.

Suse immediately applies the change. Note that all vCPUs are up and running, and there’s no need to reboot the VM.

Hot-adding RAM also works well. The nice thing about Suse is that you don’t need to re-run the command, since the configuration is refreshed automatically.

Conclusion

Changing VM configuration on the fly comes in really handy when you build a test environment or need your VM configuration to keep up with your changing needs. Obviously, it is nice to just go to the Virtual Hardware settings and change the VM configuration right there without rebooting the VM. Today, I took a closer look at how you can do that with Hot-Add/Hot-Plug in Windows and Linux environments. It’s up to you whether to enable these features or not; here, I just present how they work.

Unfortunately, these features do not work with all the OS I tested here. Find the Hot-Add/Hot-Plug compatibility table below.