Tuesday, November 20, 2012

Moving vSphere and ESXi Hosts to 10 Gig

Let's face it, networking is one of the most important aspects of setting up a hosted environment. In the end, the virtual machines you deploy are useless until networking is configured and they can communicate across the enterprise. VMware networking provides a tremendous amount of flexibility with vSphere standard switches (vSS), vSphere distributed switches (vDS), and the vShield products. It allows you to create a secure multi-tenant environment that can be configured at the virtual switch level, or all the way down to the individual port if you are using a vSphere distributed switch.

Let's start with a brief overview of vSphere networking.

A vSphere standard switch works much like a physical switch. It is a software-based switch that keeps track of which virtual machines are connected to each of its virtual ports and then uses that information to forward traffic to other virtual machines. A vSphere standard switch (vSS) can be connected to a physical switch by physical uplink adapters; this gives the virtual machines the ability to communicate with the external networking environment and other physical resources. Even though the vSphere standard switch emulates a physical switch, it lacks most of the advanced functionality of physical switches. A vSphere distributed switch (vDS) is a software-based switch that acts as a single switch providing traffic management across all associated hosts in a datacenter. This enables administrators to maintain a consistent network configuration across multiple hosts.
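
To make this concrete, here is a minimal sketch of creating a standard switch on a host with pyVmomi, the Python SDK for the vSphere API. The vCenter address, credentials, host name, uplink, and switch name are all placeholder assumptions, the certificate handling is for lab use only, and exact connection arguments vary a bit between pyVmomi versions.

```python
# A minimal sketch of creating a vSphere standard switch with pyVmomi.
# vCenter address, credentials, host name, uplink, and switch name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=context)
content = si.RetrieveContent()

# Locate the ESXi host that will own the new standard switch.
host = content.searchIndex.FindByDnsName(dnsName="esxi01.example.com", vmSearch=False)

# Build the switch spec: 128 ports, bridged to physical uplink vmnic2.
spec = vim.host.VirtualSwitch.Specification()
spec.numPorts = 128
spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic2"])

host.configManager.networkSystem.AddVirtualSwitch(vswitchName="vSwitch1", spec=spec)
Disconnect(si)
```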



A distributed port is a logical object on a vSphere distributed switch that connects to a host’s VMkernel or to a virtual machine’s network adapter. A port group shares port configuration options; these can include traffic shaping, security settings, NIC teaming, and VLAN tagging policies for each member port. Typically, a single standard switch is associated with one or more port groups.
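
As a quick illustration, the sketch below adds a port group tagged with VLAN 100 to the standard switch created in the previous example. It continues from that sketch, so the pyVmomi import and the host object are assumed to already exist; the port group name and VLAN ID are placeholders.

```python
# Add a port group with VLAN 100 to the standard switch created above.
# 'host' is the HostSystem object from the previous sketch; names are placeholders.
pg_spec = vim.host.PortGroup.Specification()
pg_spec.name = "VM-Network-100"
pg_spec.vlanId = 100                       # 0 = no tagging, 1-4094 = VLAN ID, 4095 = trunk
pg_spec.vswitchName = "vSwitch1"
pg_spec.policy = vim.host.NetworkPolicy()  # inherit the switch-level policies

host.configManager.networkSystem.AddPortGroup(portgrp=pg_spec)
```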

A distributed port group is a port group associated with a vSphere distributed switch; it specifies port configuration options for each member port. Distributed port groups define how a connection is made through the vSphere distributed switch to the network.
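
Creating a distributed port group works against the vDS object itself rather than an individual host. The following is only a sketch: dvs is assumed to be an existing vim.dvs.VmwareDistributedVirtualSwitch located elsewhere in your inventory, and the port group name, port count, and binding type are illustrative.

```python
# Create a distributed port group on an existing vDS.
# 'dvs' is assumed to be a vim.dvs.VmwareDistributedVirtualSwitch found in the inventory.
from pyVmomi import vim

dvpg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
dvpg_spec.name = "dvPG-Production"   # placeholder name
dvpg_spec.numPorts = 64
dvpg_spec.type = "earlyBinding"      # static binding; a port is assigned when a VM connects

task = dvs.AddDVPortgroup_Task(spec=[dvpg_spec])  # returns a Task object to monitor
```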

Additionally, vSphere distributed switches provide advanced features such as private VLANs, Network vMotion, bi-directional traffic shaping, and third-party virtual switch support.

Most corporate environments use multiple 1 Gigabit Ethernet (1GbE) adapters deployed as their physical uplinks. In the diagram below, we are using six uplink adapters connected to a combination of vSphere standard switches for the VMkernel port groups and a vSphere distributed switch for the virtual machine port groups. This gives us the capability of spreading the host network traffic across the uplink adapters for traffic shaping. Furthermore, it provides traffic isolation between the VMkernel activities and the virtual machine traffic.
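
If you want to confirm which physical uplinks back which standard switch on a given host, a quick read-only loop like the one below (reusing the host object from the first sketch) does the job.

```python
# List each standard switch on the host and the physical NICs assigned to it.
# 'host' is the HostSystem object from the first sketch.
for vsw in host.config.network.vswitch:
    print(vsw.name, "uplinks:", list(vsw.pnic))  # pnic entries are keys ending in the vmnic name
```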



Today, many virtualized datacenters are shifting to 10 Gigabit Ethernet (10GbE) network adapters. The use of 10GbE adapters replaces configuring multiple 1GbE network cards. With 10GbE, ample bandwidth is available for multiple traffic flows to coexist and share the same physical 10GbE link. Flows that were limited to the bandwidth of a single 1GbE link can now use as much as the full 10GbE link.

Now let's take a look at 10 Gigabit Ethernet configurations and their impact on our environments. Because we don't have as many uplink adapters, the way we approach traffic shaping and network isolation is different. I am going to demonstrate two scenarios: the first provides traffic shaping and isolation at the uplink-adapter level, and the second is a more dynamic approach that takes advantage of vSphere Network I/O Control (NIOC).

In the first scenario we segment the virtual machine traffic to dvUplink1 and provide failover to dvUplink0; this physically isolates your virtual machine traffic from your management traffic. The VMkernel traffic is pointed to dvUplink0, with dvUplink1 as the failover adapter. If security controls dictate that you segment your traffic, this is a good solution, but there is a good chance that you won't be using the full capabilities of both your 10GbE network adapters.
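
In API terms, this scenario boils down to the uplink failover order on the virtual machine distributed port group. The sketch below is an assumption-heavy outline rather than a finished script: vm_pg stands in for an existing distributed port group object, the uplink names match the diagram, and the nested pyVmomi type names may differ slightly between API versions.

```python
# Pin virtual machine traffic to dvUplink1 with dvUplink0 as standby on the VM port group.
# 'vm_pg' is assumed to be an existing vim.dvs.DistributedVirtualPortgroup object;
# the nested pyVmomi type names below may differ slightly between API versions.
from pyVmomi import vim

teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
teaming.uplinkPortOrder = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy()
teaming.uplinkPortOrder.activeUplinkPort = ["dvUplink1"]    # VM traffic rides dvUplink1
teaming.uplinkPortOrder.standbyUplinkPort = ["dvUplink0"]   # fails over to dvUplink0

port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_config.uplinkTeamingPolicy = teaming

pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
pg_spec.configVersion = vm_pg.config.configVersion  # required when reconfiguring
pg_spec.defaultPortConfig = port_config

vm_pg.ReconfigureDVPortgroup_Task(spec=pg_spec)
```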




In our second scenario we are going to use network resource pools to determine the bandwidth that different network traffic types are given on a vSphere distributed switch.

vSphere Network I/O Control (NIOC) allows diverse workloads to converge on a single network pipe and take full advantage of 10GbE. The NIOC concept revolves around resource pools that are similar in many ways to the ones that already exist for CPU and memory.

In the diagram below, all the traffic is going through the active dvUplinks 0 and 1. We are going to use a load-based teaming (LBT) policy, which was introduced in vSphere 4.1, to provide traffic-load awareness and ensure the physical NIC capacity of the team is used efficiently. Lastly, we are going to set our NIOC share values. I have set virtual machine traffic to High (100 shares), management and fault tolerance to Medium (50 shares), and vMotion to Low (25 shares). The share values are based on the relative importance we placed on the individual traffic types in our environment. Furthermore, you can enforce traffic bandwidth limits on the overall set of dvUplinks on the vDS.
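
Before leaning on those shares, it is worth confirming that NIOC is enabled and seeing how the system network resource pools are currently weighted. The sketch below only enables NIOC and reads the pools; dvs is an assumed handle to our distributed switch, and actually changing the share values would go through the switch's UpdateNetworkResourcePool call, which I have not shown here.

```python
# Enable NIOC on the vDS and print the current share settings for each resource pool.
# 'dvs' is the vim.dvs.VmwareDistributedVirtualSwitch object for our distributed switch.
dvs.EnableNetworkResourceManagement(enable=True)

for pool in dvs.networkResourcePool:
    alloc = pool.allocationInfo
    print(pool.key, "level:", str(alloc.shares.level),
          "shares:", alloc.shares.shares, "limit:", alloc.limit)
```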

Network I/O Control provides the dynamic capability necessary to take full advantage of your 10GbE uplinks. It gives the vSphere administrator sufficient controls, in the form of limits and shares, to ensure predictable network performance when multiple traffic types contend for the same physical network resources.


Resource allocations for NIOC can be found on the Resource Allocation tab of the vSphere distributed switch.


These are just a couple of the scenarios available as you design your 10GbE infrastructure. The nice thing about vSphere's toolbox is that it provides the capabilities to meet your organization's specific needs.

I am going to close with VMware's networking best practices:

■  Separate network services from one another to achieve greater security and better performance. Put a set of virtual machines on a separate physical NIC. This separation allows a portion of the total networking workload to be shared evenly across multiple CPUs. The isolated virtual machines can then better serve traffic from a Web client, for example.

■  Keep the vMotion connection on a separate network devoted to vMotion. When migration with vMotion occurs, the contents of the guest operating system’s memory are transmitted over the network. You can do this either by using VLANs to segment a single physical network or by using separate physical networks (the latter is preferable).

■  When using passthrough devices with a Linux kernel version 2.6.20 or earlier, avoid MSI and MSI-X modes because these modes have significant performance impact.

■  To physically separate network services and to dedicate a particular set of NICs to a specific network service, create a vSphere standard switch or vSphere distributed switch for each service. If this is not possible, separate network services on a single switch by attaching them to port groups with different VLAN IDs. In either case, confirm with your network administrator that the networks or VLANs you choose are isolated in the rest of your environment and that no routers connect them.

■  You can add and remove network adapters from a standard or distributed switch without affecting the virtual machines or the network service that is running behind that switch. If you remove all the running hardware, the virtual machines can still communicate among themselves. If you leave one network adapter intact, all the virtual machines can still connect with the physical network.

■  To protect your most sensitive virtual machines, deploy firewalls in virtual machines that route between virtual networks with uplinks to physical networks and pure virtual networks with no uplinks.

■  For best performance, use vmxnet3 virtual NICs.

■  Every physical network adapter connected to the same vSphere standard switch or vSphere distributed switch should also be connected to the same physical network.

■  Configure all VMkernel network adapters to the same MTU. When several VMkernel network adapters are connected to vSphere distributed switches but have different MTUs configured, you might experience network connectivity problems.
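
That last recommendation is easy to verify programmatically. Here is a small, hedged sketch that walks every host in the inventory and prints the MTU of each VMkernel adapter so mismatches stand out; it assumes a vCenter connection like the one in the first sketch.

```python
# Print the MTU of every VMkernel NIC on every host so mismatched values stand out.
# 'content' is the ServiceContent returned by si.RetrieveContent() in the first sketch.
from pyVmomi import vim

def report_vmk_mtus(content):
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    try:
        for host in view.view:
            for vnic in host.config.network.vnic:
                print(host.name, vnic.device, "MTU =", vnic.spec.mtu)
    finally:
        view.DestroyView()
```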