Monday, June 8, 2015

Reserved Memory with vRealize Operations

Over the past few days, I have been analyzing the resource capacity of a cluster that supports a mission-critical application for one of my clients. The application is streamed by Citrix XenApp virtual machines running on vSphere ESXi infrastructure. Each virtual machine has 14,336 MB of memory reserved, which produces a different Workload visual in vRealize Operations than you would typically observe. For instance, in the diagram below, memory demand is well below memory usage. Demand is the active memory workload; usage is what has been delivered. ESXi allocates physical RAM only as needed. The vSphere host below has 7,999 MB allocated, but the virtual machines are currently demanding only 2,349 MB (31% of the usable memory). Because the virtual machines have touched 7,262 MB of physical memory (95% of the usable memory), that is the amount allocated. Usage represents the memory address blocks held in physical memory that may not be in active use by the virtual machines.
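The demand/usage split above can be sketched numerically. This is a hypothetical helper, not a vRealize Operations API; the ~7,600 MB usable figure is an assumption inferred from the 31% and 95% values quoted above:

```python
# Sketch of the demand vs. usage tiers described above.
# USABLE_MB is an assumption inferred from the percentages in the text
# (2,349 MB ~= 31% and 7,262 MB ~= 95% of usable memory).
USABLE_MB = 7600

def memory_tiers(demand_mb, usage_mb, usable_mb=USABLE_MB):
    """Return demand (active working set) and usage (touched physical RAM)
    as a share of the host's usable memory."""
    return {
        "demand_pct": round(100 * demand_mb / usable_mb, 1),
        "usage_pct": round(100 * usage_mb / usable_mb, 1),
    }

tiers = memory_tiers(demand_mb=2349, usage_mb=7262)
print(tiers)
```

The point of the sketch is simply that the two metrics diverge: usage climbs as the guest touches pages, while demand tracks only what is actively being worked on.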

Another thing to keep in mind: ESXi only allocates memory as needed, and if it has plenty of memory resources it doesn't bother to reclaim it. If there is memory contention, however, it goes through a process of reclaiming some of that memory.


Now let's take a look at the mission-critical virtual machine. Inside the virtual machine, demand typically ranges between 760 MB and 2,484 MB, which is around 15% of the configured memory. This virtual machine has Reserve All Guest Memory checked in the virtual machine properties, effectively locking in 14,336 MB of memory. There are eight guest virtual machines on the host with this configuration.



Looking at the host capacity, the server is at 92% memory utilization, with an orange Workload alert bound by memory.

Further examination of the server memory capacity shows the host demand is at 92% of capacity, even with the memory usage (both are at 118,478 MB). On the second bar with the green blocks, the actual demand from the virtual machines is well below the host demand. Because we have locked in the memory reservation for all eight virtual machines, demand equals usage at the host level. In the first diagram above, by contrast, the host demand equaled the virtual machine demand. The two scenarios are significantly different.
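Because Reserve All Guest Memory locks in the full configured memory, the host must back each VM entirely with physical RAM, and host-level demand tracks the reservation rather than in-guest activity. A minimal sketch of that arithmetic (hypothetical helper; the gap between this figure and the 115,720 MB reported later is assumed to be per-VM overhead):

```python
# Hypothetical sketch: with "Reserve All Guest Memory" set, the host
# locks the full configured memory of every VM in physical RAM, so
# host-level demand reflects the reservations, not guest activity.
VM_RESERVATION_MB = 14336  # per-VM reservation from the text
VM_COUNT = 8               # fully reserved VMs on this host

def host_reserved_mb(vm_count=VM_COUNT, reservation_mb=VM_RESERVATION_MB):
    """Memory locked by VM reservations alone (per-VM overhead excluded)."""
    return vm_count * reservation_mb

print(host_reserved_mb())  # 114,688 MB locked before per-VM overhead
```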



There is 131,037 MB of memory configured on the vSphere host. Of that, 1,921 MB is reserved for the hypervisor (the chart below gives an estimate of the overhead required per VM size), leaving 129,116 MB of memory for virtual machine guests.



On this particular host, 115,720 MB is reserved by the virtual machines. Looking at the diagram above, the demand from the virtual machines is actually over the reserved capacity: the host is using 118,478 MB of memory, or roughly 92% of the usable memory. Subtracting that from the available 129,116 MB leaves 10,638 MB, which is not enough room to start a virtual machine with a 14,336 MB reservation during a host failure.
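The capacity math above reduces to a quick failover check. This is just the arithmetic from the text restated as code, using the figures already given:

```python
# Failover headroom check, using the figures from the text.
HOST_CONFIGURED_MB = 131037    # physical memory on the host
HYPERVISOR_OVERHEAD_MB = 1921  # reserved for ESXi itself
HOST_USAGE_MB = 118478         # memory currently in use
VM_RESERVATION_MB = 14336      # reservation on each XenApp VM

usable_mb = HOST_CONFIGURED_MB - HYPERVISOR_OVERHEAD_MB  # 129,116 MB
headroom_mb = usable_mb - HOST_USAGE_MB                  # 10,638 MB

# A fully reserved VM cannot power on unless its entire reservation fits.
can_restart_vm = headroom_mb >= VM_RESERVATION_MB
print(usable_mb, headroom_mb, can_restart_vm)  # 129116 10638 False
```

Since the headroom (10,638 MB) is smaller than a single VM's reservation (14,336 MB), admission control would refuse to power on a failed-over VM here.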

Since the application is load balanced by XenApp across a cluster of 30 hosts, it could effectively lose a server and the other virtual machines in the cluster would pick up the additional workload. The virtual machines on the failed host would be powered off, and the memory demand for the active virtual machines on the remaining hosts would increase; let's say it increases from 15% to 45%. Even with the increased memory demand inside the virtual machines, the overall memory demand on the vSphere host would remain at 92% because the memory is reserved.
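That last point can be sketched as well. In this hypothetical helper, the host charges a fully reserved VM its entire reservation, so the in-guest demand jump from 15% to 45% never surfaces at the host level:

```python
# Hypothetical failover sketch: in-guest demand rises from ~15% to ~45%
# of configured memory, but the host already backs the full reservation,
# so host-level memory demand per VM does not move.
VM_CONFIGURED_MB = 14336  # fully reserved configured memory

def host_demand_per_vm(guest_demand_pct, reservation_mb=VM_CONFIGURED_MB):
    """With a full reservation, the host accounts for at least the
    reservation, regardless of the (smaller) in-guest demand."""
    guest_demand_mb = VM_CONFIGURED_MB * guest_demand_pct / 100
    return max(guest_demand_mb, reservation_mb)

before = host_demand_per_vm(15)  # 14,336 MB
after = host_demand_per_vm(45)   # still 14,336 MB
print(before == after)  # True: host demand unchanged by the failover
```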