Wednesday, October 21, 2015

vRealize Operations 6.1 vSphere VMs Memory dashboard

In today's post, I wanted to explore the vSphere VMs Memory dashboard in vRealize Operations 6.1 to find potential memory contention issues. But before we dive into vRealize Operations, I wanted to review memory management in vSphere.

Let's start by looking at Utilization on the Monitor tab of the vSphere Web Client to understand the memory consumption of a guest virtual machine. In my example below, we are looking at my vRealize Operations Manager 6.1 appliance.

On the Virtual Machine Memory widget, VM Consumed is the amount of physical memory consumed by the virtual machine for guest memory. Unlike host consumed memory, virtual machine consumed memory does not include overhead memory.



The bottom bar is VM Overhead Consumed, which is the amount of machine memory used by the VMkernel to run the virtual machine.
 
Some of the factors that affect memory overhead consumption are the number of virtual CPUs, the amount of virtual memory, the number of virtual devices, and the capabilities of the physical processor. The table below provides a sample of the memory overhead values for a virtual machine.


   
The next widget is Guest Memory, which displays several memory counters. Because of the VMkernel's memory-management techniques for allowing overcommitment, guest physical memory does not have a one-to-one correspondence with host physical memory. Managing memory in the hypervisor enables sharing of identical data (such as redundant copies of the same guest OS memory pages), memory compression, memory swapping, and memory ballooning for efficient reclamation of host memory.


  Here is a description of the memory counters displayed (a small API sketch follows the list):
  • Active Guest Memory: The amount of guest physical memory actively being used (Demand in vRealize Operations)
  • Private: The amount of guest physical memory backed by host physical memory that is not shared with other virtual machines
  • Shared: The amount of guest physical memory shared with other virtual machines through Transparent Page Sharing (TPS)
  • Compressed: The amount of guest physical memory that has been compressed by the VMkernel
  • Swapped: The current amount of guest physical memory swapped out to the virtual machine's swap file by the VMkernel
  • Ballooned: Memory reclaimed from the guest by the balloon driver
  • Unaccessed: The amount of memory that has never been touched by the virtual machine
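These counters are also exposed through the vSphere API as the VirtualMachineQuickStats object, which is handy if you want to pull them outside of the Web Client. Below is a minimal pyVmomi sketch of that idea; the vCenter address, credentials, and VM name are placeholders for a lab environment, not values from this walkthrough.

```python
# Minimal pyVmomi sketch: print the Guest Memory counters for one VM
# using VirtualMachineQuickStats. Hostname, credentials, and VM name
# below are placeholders for a lab setup.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()        # lab only; verify certificates in production
si = SmartConnect(host='vcenter.lab.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ctx)
try:
    content = si.RetrieveContent()
    vms = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True).view
    vm = next(v for v in vms if v.name == 'vRealize Operations Manager Appliance')
    qs = vm.summary.quickStats
    print('Consumed (host memory):   %d MB' % qs.hostMemoryUsage)
    print('Overhead consumed:        %d MB' % qs.consumedOverheadMemory)
    print('Active / demand:          %d MB' % qs.guestMemoryUsage)
    print('Private:                  %d MB' % qs.privateMemory)
    print('Shared (TPS):             %d MB' % qs.sharedMemory)
    print('Compressed:               %d KB' % qs.compressedMemory)
    print('Swapped:                  %d MB' % qs.swappedMemory)
    print('Ballooned:                %d MB' % qs.balloonedMemory)
finally:
    Disconnect(si)
```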
Transparent Page Sharing (TPS) allows identical pages of memory to occupy a single page in host physical memory. TPS can take place within a single virtual machine or across multiple virtual machines on an ESXi host. By default, inter-virtual machine TPS is no longer enabled, but you can adjust this behavior through the host's advanced settings.
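The relevant knob is the Mem.ShareForceSalting host advanced setting: with a non-zero salt (the default of 2 on current builds), pages are only shared within a VM unless VMs are explicitly given a common salt. If you want to check the setting across hosts, here is a hedged pyVmomi sketch; again, the connection details are placeholders.

```python
# Sketch: report the Mem.ShareForceSalting advanced setting for each ESXi host.
# On current ESXi builds the default of 2 restricts TPS to intra-VM sharing.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()        # lab only
si = SmartConnect(host='vcenter.lab.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ctx)
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view
    for host in hosts:
        opt = host.configManager.advancedOption.QueryOptions('Mem.ShareForceSalting')
        print('%-30s Mem.ShareForceSalting = %s' % (host.name, opt[0].value))
finally:
    Disconnect(si)
```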

The memory balloon driver collaborates with the host to reclaim the pages that are considered least valuable by the guest operating system. The driver uses a ballooning technique that provides predictable performance closely matching the behavior of a native system under similar memory constraints. The technique increases or decreases memory pressure on the guest operating system, causing the guest to invoke its own memory management algorithms. When physical memory is under contention, the guest operating system determines which memory pages to give up and, if necessary, swaps them to its own virtual disk.

Host swapping can have a heavy performance impact and is used as a last resort for reclaiming memory. The host swaps lower-priority pages out to physical disk, where they can still be accessed. However, memory backed by physical disk has significantly higher latency than memory backed by physical RAM. Swapping memory out to disk is not necessarily bad, but swapping memory back in from disk can cause significant performance issues for applications.

When the host needs to swap, pages that achieve at least a 50% compression ratio may be compressed and kept in physical memory instead of being swapped out. The host compresses lower-priority pages so that two or more compressed pages can occupy a single page of physical memory. Reading compressed memory carries a small decompression overhead; however, writes to compressed memory can slow down processing more noticeably.
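ESXi's compression cache isn't visible from inside the guest, but the 50% rule is easy to picture: a 4 KB page is only worth compressing if the result fits in 2 KB, so that two compressed pages can share one physical page. The toy sketch below illustrates the threshold with zlib; it is only an illustration, not the VMkernel's actual algorithm.

```python
# Toy illustration of the >= 50% compression rule (not the VMkernel's algorithm).
# A 4 KB page is only kept in the compression cache if it shrinks to 2 KB or less;
# otherwise it would be swapped to disk instead.
import os
import zlib

PAGE_SIZE = 4096
THRESHOLD = PAGE_SIZE // 2     # 2 KB -> two compressed pages fit in one physical page

def classify(page: bytes) -> str:
    compressed = zlib.compress(page)
    if len(compressed) <= THRESHOLD:
        return 'compress (stays in RAM, %d bytes)' % len(compressed)
    return 'swap to disk (only shrank to %d bytes)' % len(compressed)

zero_page   = bytes(PAGE_SIZE)            # highly compressible
random_page = os.urandom(PAGE_SIZE)       # effectively incompressible
print('zero page:   ', classify(zero_page))
print('random page: ', classify(random_page))
```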

vRealize Operations 6.1 provides a dashboard called vSphere VMs Memory. The dashboard includes six widgets: four heatmaps and two Top-N lists (a scripted approximation of the swap-in view follows the list). They include:
  • VMs Heatmap Sized by Memory Demand (%) and Colored by Memory Swapped (KB)
  • VMs Heatmap Sized by Memory Demand (%) and Colored by Memory Balloon (KB)
  • VMs Heatmap Sized by Memory Demand (%) and Colored by Memory Swap In Rate (KBps)
  • VMs Heatmap Sized by Memory Demand (%) and Colored by Memory Compressed (KB)
  • Top 25 VMs by Memory Demand (%) (24h)
  • Top 25 VMs by Mem Swapped-in (KBps) (24h) 
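The Top-N swap-in widget is built on the same swap-in rate statistic that vCenter already collects (mem.swapinRate.average, in KBps). As a rough approximation of that view, the sketch below queries vCenter's PerformanceManager for the last 24 hours and ranks VMs by their peak swap-in rate; the connection details are placeholders, and it assumes the 5-minute (300-second) historical interval is available.

```python
# Sketch: approximate "Top 25 VMs by Mem Swapped-in (KBps)" by querying
# vCenter's mem.swapinRate.average counter over the last 24 hours.
import ssl
from datetime import timedelta
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()        # lab only
si = SmartConnect(host='vcenter.lab.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ctx)
try:
    content = si.RetrieveContent()
    perf = content.perfManager

    # Find the counter id for mem.swapinRate.average (reported in KBps).
    counter_id = next(c.key for c in perf.perfCounter
                      if c.groupInfo.key == 'mem'
                      and c.nameInfo.key == 'swapinRate'
                      and c.rollupType == 'average')

    vms = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True).view
    metric = vim.PerformanceManager.MetricId(counterId=counter_id, instance='')
    end = si.CurrentTime()
    start = end - timedelta(hours=24)

    results = []
    for vm in vms:
        spec = vim.PerformanceManager.QuerySpec(
            entity=vm, metricId=[metric], startTime=start, endTime=end,
            intervalId=300)                   # assumes the 5-minute interval is enabled
        stats = perf.QueryPerf(querySpec=[spec])
        if stats and stats[0].value:
            values = [v for v in stats[0].value[0].value if v >= 0]
            if values:
                results.append((vm.name, max(values)))

    for name, peak in sorted(results, key=lambda r: r[1], reverse=True)[:25]:
        print('%-40s peak swap-in %.2f KBps' % (name, peak))
finally:
    Disconnect(si)
```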
 

Looking at the dashboard, I notice that my vRealize Operations Manager Appliance is colored red for Memory Swapped (KB). I can hover over the box for more detailed information, including a sparkline. If we look at the Top 25 VMs by Mem Swapped-In (KBps) (24h), we notice that the vRealize Operations Manager Appliance is listed as number three, but it is only recording a swap-in rate of 0.03 KBps. Even though my appliance is swapping to disk, it is not swapping much of that memory back in from disk.

On the other hand, my vCloud Connector Server and vCloud Connector Node are experiencing significant swap-in rates of 104.6 and 78.03 KBps, respectively.


If I switch back to the vSphere Web Client, on the Performance view of the Monitor tab I can see that I had a few spikes in the swap-in rate. I am not overly concerned about performance issues, but it is something I may want to explore further.




In vRealize Operations, the Workload for my vRealize Operations Manager Appliance is 46 and green. Workload shows the demand for physical resources. The Workload badge is driven by the most utilized resource; in this instance, memory is the most heavily used resource at 46.27%.
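Conceptually, the badge simply tracks whichever resource has the highest demand relative to its capacity. The sketch below is a simplified illustration of that idea with made-up numbers loosely based on my appliance; it is not the exact vRealize Operations Workload formula.

```python
# Simplified "most constrained resource" score, NOT the exact vROps Workload formula.
# Numbers are hypothetical, loosely based on the appliance discussed above.
demand = {
    'cpu_mhz':   2400.0,
    'memory_mb': 3788.0,      # ~3.7 GB demanded
}
capacity = {
    'cpu_mhz':   10400.0,
    'memory_mb': 8192.0,      # 8 GB configured
}

utilization = {r: 100.0 * demand[r] / capacity[r] for r in demand}
workload = max(utilization.values())          # badge follows the busiest resource
print(utilization)                            # memory ends up around 46%
print('Workload badge (approx): %.0f' % workload)
```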


As we discussed previously, ESXi allocates physical RAM only as needed. My vRealize Operations Manager Appliance has 8 GB of memory allocated, but it is currently only demanding 3.7 GB of memory (Guest Active Memory). Usage is the combination of the virtual machine's private memory and the VM overhead consumed. On my vRealize Operations Manager Appliance, there is 7.48 GB of private memory and 51 MB of VM overhead consumed, which totals 7.53 GB of memory usage.
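For what it's worth, the arithmetic is easy to check with the figures reported above:

```python
# Usage = private memory + VM overhead consumed, using the figures reported above.
private_gb  = 7.48
overhead_gb = 51 / 1024.0                 # 51 MB expressed in GB
usage_gb = private_gb + overhead_gb
print('Memory usage: %.2f GB' % usage_gb)  # ~7.53 GB
```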

Again, everything looks pretty good for my appliance. But I want a deeper examination of my virtual machine's memory. Under Further Analysis, I am going to select Virtual Machine Memory Diagnose to bring up the respective view. Views provide details on the specific object that has been selected.


The Virtual Machine Memory Diagnose view shows the last 24 hours of memory demand, memory consumed, memory ballooning, memory swapping, and memory compression for my vRealize Operations Manager Appliance. The diagnosis can be extended out past 24 hours by clicking the calendar icon, as shown below.

We can see that my vRealize Operations Manager Appliance is showing memory swapping, compression, and sharing; however, there is no ballooning happening. The demand for the virtual machine is consistently around 30% to 40%.



If I want to take a longer-term view, I can select the Virtual Machine Memory Demand Forecast Trend to ensure that demand isn't going to cause a problem in the next 30 days.
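vRealize Operations uses its own forecasting engine for this view, so the sketch below is only a toy stand-in: a straight-line fit over some hypothetical recent demand samples, projected 30 days out, to show the kind of question the Forecast Trend answers.

```python
# Toy 30-day trend projection (a simple straight-line fit with numpy).
# This is NOT the vRealize Operations forecasting algorithm, just the idea of
# extrapolating recent memory demand to see whether it approaches capacity.
import numpy as np

days   = np.arange(14)                                # last two weeks of samples
demand = np.array([31, 33, 32, 35, 34, 36, 37,        # hypothetical demand (%)
                   36, 38, 39, 38, 40, 41, 40], dtype=float)

slope, intercept = np.polyfit(days, demand, 1)        # linear fit
projected = slope * (days[-1] + 30) + intercept       # 30 days past the last sample
print('Projected demand in 30 days: %.1f%%' % projected)
if projected >= 100:
    print('Demand would exceed configured memory -- consider adding capacity.')
```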


The last item I want to look at is Capacity Remaining. Because this virtual machine hasn't been powered on for more than 30 days, I am only getting a partial trend, but it does show that I have enough memory capacity on the virtual machine, and it recommends that I could right-size the resource by lowering the memory on my vRealize Operations Manager Appliance to 5.06 GB.


vRealize Operations Manager gives you great perspective on memory performance, capacity, and trending for your virtual machines, and the vSphere VMs Memory dashboard can be the launching point.