Wednesday, January 23, 2013

Mission Critical Hosting Environment Part 2


Storage
Data Protection
It’s all about recovery: data protection design must protect against all relevant types of failure and minimize data loss. While disk capacity has increased more than 1,000-fold since RAID levels were introduced in 1987, disk I/O rates have increased only about 150-fold. This means that when a disk in a RAID set fails, it can take hours to rebuild it and re-establish full redundancy.
RAID levels:
·    RAID-1: An exact copy (or mirror) of a set of data on two disks.

·    RAID-5: Uses block-level striping with parity data distributed across all member disks.

·    RAID-6: Extends RAID 5 by adding an additional parity block; thus it uses block-level striping with two parity blocks distributed across all member disks.

·    RAID 10: Arrays consisting of a top-level RAID-0 array (or stripe set) composed of two or more RAID-1 arrays (or mirrors). A single-drive failure in a RAID 10 configuration results in one of the lower-level mirrors entering degraded mode, but the top-level stripe may be configured to perform normally (except for the performance hit), as both of its constituent storage elements are still operable—this is application-specific.

·    RAID 0+1: In contrast to RAID 10, RAID 0+1 arrays consist of a top-level RAID-1 mirror composed of two or more RAID-0 stripe sets. A single-drive failure in a RAID 0+1 configuration results in one of the lower-level stripes completely failing (as RAID 0 is not fault tolerant), while the top-level mirror enters degraded mode.

For mission critical systems, the ability to survive the overlapping failure of two disks in a RAID set is important for protecting against data loss. Striping data across mirrored pairs (RAID 10 in the terminology above, though it is often loosely labelled RAID 0+1) gives an excellent level of redundancy, because every block of data is written to a second disk. Rebuild times are also short in comparison to other RAID types, because only the failed mirror member has to be resynchronized.
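To make the double-failure behavior concrete, here is a small Python sketch; the eight-disk layout and the disk numbering are invented for illustration, not taken from any particular array. It enumerates every two-disk failure combination and counts how many of them a striped-mirror (RAID 10) layout survives compared with a mirrored-stripe (RAID 0+1) layout.

    from itertools import combinations

    # Hypothetical 8-disk layout: four mirrored pairs striped together (RAID 10),
    # versus two 4-disk stripes mirrored against each other (RAID 0+1).
    raid10_mirrors = [{0, 1}, {2, 3}, {4, 5}, {6, 7}]   # each pair is a mirror
    raid01_stripes = [{0, 1, 2, 3}, {4, 5, 6, 7}]       # each set is a stripe

    def raid10_survives(failed):
        # Data is lost only if both members of the same mirror fail.
        return not any(mirror <= failed for mirror in raid10_mirrors)

    def raid01_survives(failed):
        # A stripe dies if any of its disks fail; data is lost if both stripes die.
        dead = [any(d in failed for d in stripe) for stripe in raid01_stripes]
        return not all(dead)

    pairs = list(combinations(range(8), 2))
    print("two-disk failures survived by RAID 10 :",
          sum(raid10_survives(set(p)) for p in pairs), "of", len(pairs))
    print("two-disk failures survived by RAID 0+1:",
          sum(raid01_survives(set(p)) for p in pairs), "of", len(pairs))

In this example the striped-mirror layout survives 24 of the 28 possible two-disk failures, while the mirrored-stripe layout survives only 12, which is why the striped-mirror arrangement is the better fit for mission critical data.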

A striped-mirror layout also improves read performance, because reads can be serviced from either copy of the data in parallel. Write performance is dramatically better than RAID-5 as well: a mirrored write only has to go to two disks, whereas RAID-5 requires four operations for every write. It needs to read the old data, read the old parity, write the new data, and then write the new parity. This is known as the RAID-5 write penalty.
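To put numbers on the write penalty, here is a back-of-the-envelope calculation; the 5,000 IOPS workload and the 70/30 read/write mix are made-up figures, not measurements from any real system.

    # Hypothetical front-end workload: 5,000 IOPS at a 70/30 read/write mix.
    front_end_iops = 5000
    read_ratio, write_ratio = 0.70, 0.30

    # Write penalty: a mirrored write touches 2 disks; a RAID-5 write needs
    # 4 operations (read old data, read old parity, write data, write parity).
    WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4}

    for level, penalty in WRITE_PENALTY.items():
        backend = (front_end_iops * read_ratio
                   + front_end_iops * write_ratio * penalty)
        print(f"{level}: {backend:,.0f} backend IOPS")

For this mix the mirrored layout generates 6,500 backend IOPS while RAID-5 generates 9,500; the write penalty shows up directly as extra load on the disks.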
Mirrored RAID volumes offer a high degree of protection, but at the cost of 50 percent of the raw capacity.
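The capacity trade-off can be expressed the same way; the eight 2 TB disks below are an arbitrary example.

    disks, disk_tb = 8, 2  # hypothetical: eight 2 TB disks

    usable_tb = {
        "RAID 10 / RAID 1": disks * disk_tb / 2,   # half the raw capacity
        "RAID 5": (disks - 1) * disk_tb,           # one disk's worth of parity
        "RAID 6": (disks - 2) * disk_tb,           # two disks' worth of parity
    }
    for level, tb in usable_tb.items():
        print(f"{level:16s} {tb:>4.0f} TB usable of {disks * disk_tb} TB raw")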
Multipathing
vSphere hosts connect through HBA adapters and fabric switches to the storage array’s storage processor ports. Using multiple HBA adapters for redundancy creates more than one path to each LUN. The hosts use a technique called “multipathing”, which provides load balancing, path failover management, and aggregated bandwidth. I suggest using the Round Robin multipathing policy, which automatically rotates I/O across all available paths and distributes the load across both HBA adapters. HBA adapters seem to be the component that fails most often; if possible, use four HBA adapters for mission critical applications to ensure you never end up with orphaned virtual machines.
One note about the Round Robin multipathing policy: you are not able to use Microsoft Clustering with Round Robin, so if you do require Microsoft Clustering, stay with the default path selection policy.
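The Round Robin behavior amounts to rotating each new I/O onto the next surviving path. The sketch below is only an illustration of that idea with made-up HBA and storage-processor port names, not the VMkernel’s actual path selection code.

    from itertools import cycle
    from dataclasses import dataclass

    @dataclass
    class Path:
        hba: str
        sp_port: str
        alive: bool = True

    # Hypothetical four paths: two HBAs, each seeing two storage-processor ports.
    paths = [Path("vmhba1", "SP-A0"), Path("vmhba1", "SP-B0"),
             Path("vmhba2", "SP-A1"), Path("vmhba2", "SP-B1")]

    def next_path(rotation):
        # Skip dead paths; if every path is down, the LUN is unreachable.
        for _ in range(len(paths)):
            p = next(rotation)
            if p.alive:
                return p
        raise RuntimeError("all paths down")

    rotation = cycle(paths)
    paths[0].alive = False                       # simulate a failed HBA port
    for io in range(4):
        p = next_path(rotation)
        print(f"I/O {io} -> {p.hba} / {p.sp_port}")

With one path down, I/O simply continues to rotate across the remaining paths, which is the failover behavior you want from the policy.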


Virtual Disks
Virtual disks (VMDKs) are how virtual machines encapsulate their disk devices. Virtual disks come in three formats: Thin Provision, Thick Provision Lazy Zeroed, and Thick Provision Eager Zeroed.
·    Thick Provision Lazy Zeroed: Creates a virtual disk in the default thick format. Space required for the virtual disk is allocated when the virtual disk is created. Data remaining on the physical device is not erased during creation, but is zeroed out on demand on first write from the virtual machine.
·    Thick Provision Eager Zeroed: Space required for the virtual disk is allocated at creation time. In contrast to the lazy zeroed format, the data remaining on the physical device is zeroed out when the virtual disk is created. It might take much longer to create disks in this format than to create other types of disks, but you gain a slight performance improvement.
·    Thin Provision: Use this format to save storage space. For the thin disk, you provision as much datastore space as the disk would require based on the value that you enter for the disk size. However, the thin disk starts small and at first, uses only as much datastore space as the disk needs for its initial operations.


Thick Provision Eager Zeroed virtual disks are true thick disks. In this format, the size of the VMDK file on the datastore equals the size of the virtual disk you create, and the file is pre-zeroed. For example, if you create a 500 GB virtual disk and place 100 GB of data on it, the VMDK file still occupies 500 GB on the datastore filesystem. As I/O occurs in the guest, the VMkernel (the host OS kernel) does not need to zero the blocks before the I/O completes. The result is slightly improved I/O latency and fewer backend storage I/O operations.
Another benefit of Thick Provision Eager Zeroed is that you can’t over-subscribe the LUN the way you can with Thin Provisioned disks.
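A quick accounting sketch shows why; the 2 TB datastore and the VM sizes below are invented for the example, but the arithmetic is the same on any datastore.

    datastore_gb = 2000                 # hypothetical 2 TB VMFS datastore

    # (provisioned GB, currently written GB) for a handful of example VMs
    vms = [(500, 120), (500, 90), (500, 300), (500, 60), (500, 40)]

    provisioned = sum(size for size, _ in vms)
    thin_used   = sum(written for _, written in vms)   # grows over time
    thick_used  = provisioned                          # reserved up front

    print(f"provisioned: {provisioned} GB on a {datastore_gb} GB datastore "
          f"({provisioned / datastore_gb:.0%} subscribed)")
    print(f"thin usage today: {thin_used} GB -> fits now, but can fill the datastore later")
    print(f"thick eager zeroed: {thick_used} GB -> would not fit on this datastore at all")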

Thick Provision Eager Zeroed ensures disk resources are committed to mission critical systems and provides a slight disk I/O improvement. The drawback of this disk format is that it requires more storage capacity than Thin Provisioning, because the entire disk allocation is committed to the datastore up front.

Virtual Machine
Guest OS High Availability

vSphere High Availability can detect operating system failures within virtual machines by monitoring heartbeat information from VMware Tools. If there is no VM heartbeat, no disk I/O, and no network I/O for a period of time, vSphere HA assumes the guest OS has failed and automatically restarts the virtual machine. To help troubleshooting, vSphere also takes a screenshot of the VM’s console right before vSphere HA restarts the VM.

vSphere HA will only restart the VM a maximum of three times within a given time window. If more failures occur within that period, the VM is not restarted again.
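The throttling logic works roughly like the sketch below. The three-restart limit comes from the behavior described above; the one-hour window is an assumption for the example, and the code is only an illustration, not vSphere’s implementation.

    from collections import deque
    import time

    MAX_RESTARTS   = 3
    WINDOW_SECONDS = 3600          # assumed one-hour window for the example

    restart_log = deque()          # timestamps of recent automatic restarts

    def should_restart(now=None):
        # Allow a restart only if fewer than MAX_RESTARTS happened in the window.
        now = now if now is not None else time.time()
        while restart_log and now - restart_log[0] > WINDOW_SECONDS:
            restart_log.popleft()          # forget restarts outside the window
        if len(restart_log) < MAX_RESTARTS:
            restart_log.append(now)
            return True
        return False

    # Simulate four guest OS failures in quick succession.
    for failure in range(4):
        print("failure", failure, "-> restart" if should_restart() else "-> give up")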


VM-VM Anti-Affinity Rules
A VM-VM Anti-Affinity rule specifies which virtual machines are not allowed to run on the same host. Anti-Affinity rules can be used to give host-failure resiliency to mission critical services that are provided by multiple virtual machines using network load balancing (NLB). They also let you separate virtual machines with network intensive workloads; if those were placed on one host, they might saturate the host’s networking capacity.
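Conceptually, an anti-affinity rule is just the constraint that no two members of the rule share a host. The sketch below, with made-up VM and host names, checks a placement against that constraint.

    # Hypothetical placement of a two-node NLB pair plus another workload.
    placement = {
        "web-nlb-01": "esx-host-1",
        "web-nlb-02": "esx-host-1",   # violates the rule in this example
        "app-01":     "esx-host-2",
    }

    anti_affinity_rule = {"web-nlb-01", "web-nlb-02"}

    def violations(rule, placement):
        hosts = {}
        for vm in rule:
            hosts.setdefault(placement[vm], []).append(vm)
        return {host: vms for host, vms in hosts.items() if len(vms) > 1}

    print(violations(anti_affinity_rule, placement) or "rule satisfied")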
Networking
Switches
The ESX host environment is interconnected using switches. Some switches, such as the Cisco Catalyst 4500 series, group front-panel ports so that they share an oversubscribed connection to the switch fabric; when traffic exceeds a port group’s shared capacity, network packets are dropped. This problem has an impact on your ability to perform live migrations in the environment, specifically with transaction intensive applications such as database VMs. The switches supporting mission critical systems should be the latest switch technology to help avoid the issue.
This issue can be exacerbated when using IP-based storage solutions.
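Oversubscription in this kind of port group is simple arithmetic: several front-panel ports share one link to the switch fabric, so their combined line rate can exceed what that link can carry. The figures below are illustrative, not the specification of any particular Catalyst 4500 line card.

    ports_per_group  = 8     # assumed: 8 gigabit ports share one fabric connection
    port_speed_gbps  = 1
    fabric_link_gbps = 2     # assumed shared capacity behind the port group

    offered_load = ports_per_group * port_speed_gbps
    ratio = offered_load / fabric_link_gbps
    print(f"worst case offered load: {offered_load} Gbps over a "
          f"{fabric_link_gbps} Gbps fabric link ({ratio:.0f}:1 oversubscription)")
    print("anything beyond the fabric link's capacity is buffered or dropped")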
Multi-NIC Configuration
By using multiple network adapters, we can separate the VMkernel (host OS kernel) traffic, which includes management, vMotion, and fault tolerance, from virtual machine traffic. In this scenario, VM traffic goes through the virtual distributed switch while VMkernel traffic stays on the virtual standard switch, providing further isolation. This also provides redundancy for every traffic type except fault tolerance, which currently isn’t used in our environment.
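The separation boils down to a mapping of traffic types onto switches and NIC pairs. The vmnic names and the four-NIC layout below are placeholders, not a prescription for your hosts.

    # Hypothetical uplink layout for a host with four physical NICs.
    network_design = {
        "management":       {"switch": "vSwitch0 (standard)",     "uplinks": ["vmnic0", "vmnic1"]},
        "vMotion":          {"switch": "vSwitch0 (standard)",     "uplinks": ["vmnic0", "vmnic1"]},
        "virtual machines": {"switch": "dvSwitch0 (distributed)", "uplinks": ["vmnic2", "vmnic3"]},
    }

    for traffic, cfg in network_design.items():
        redundancy = "redundant uplinks" if len(cfg["uplinks"]) > 1 else "no redundancy"
        print(f"{traffic:17s} -> {cfg['switch']:24s} {cfg['uplinks']} ({redundancy})")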


Infrastructure Maintenance and Deployment Management
So what does this mean for your organization? All IT organizations have limits on their resources: people, time, and money. It is therefore critical to determine what the vital business functions are. By creating a small infrastructure cell dedicated to mission critical core systems in your data center, you can enhance the infrastructure maintenance and deployment processes. In the current production environment, which can encompass very large host clusters and hundreds of virtual machines, it is impossible to upgrade all the infrastructure host systems within strict ITIL infrastructure release windows. Moreover, it is very risky to take more than two hosts out of a cluster at a time to perform maintenance and upgrades. By creating a small cluster that supports no more than 75 virtual machines outside the DMZ and 75 virtual machines inside the DMZ, you can ensure that changes to the mission critical cell are performed only on approved infrastructure release dates.
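Sizing such a cell is straightforward arithmetic once you pick a consolidation ratio; the 20 VMs per host and the N+1 failover allowance below are assumptions for illustration, not a recommendation.

    import math

    vms_per_cell  = 75    # the per-cell limit discussed above
    vms_per_host  = 20    # assumed consolidation ratio
    host_failures = 1     # size the cluster to tolerate one host failure (N+1)

    hosts_needed = math.ceil(vms_per_cell / vms_per_host) + host_failures
    print(f"{hosts_needed} hosts per cell "
          f"({hosts_needed - host_failures} for capacity + {host_failures} for failover)")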
A dedicated cell with more rigid change and release management processes helps you minimize the risk to the business for your vital business functions.