Virtualization Notes, Best Practices, and Gotchas
I spent last week attending the Virtualization Pro Summit. I came away with a wealth of information that I'm still compiling, wrapping my head around, and figuring out where and how to implement. Below are some of the notes I took away from the conference.
Memory is the first bottleneck in virtualization.
When sizing a server, choose hardware that can handle at least 128GB of RAM. That doesn't mean you need to purchase 128GB off the bat; it just leaves room for proper expansion.
If possible, use DDR3 RAM and buy it in matched sets of three sticks to populate the triple-channel memory controllers.
Balance the memory allocations across the memory channels of each CPU.
Leave a buffer between the total RAM allocated to guests and the physical RAM in the server.
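The RAM-buffer rule above can be sketched as a quick capacity check. This is a minimal Python sketch; the 20% buffer and the guest sizes are illustrative assumptions, not figures from the conference:

```python
# Sketch: sizing host RAM with a safety buffer.
# Rule applied: keep total guest allocations below physical RAM so the
# hypervisor and failover headroom have room to breathe.
# The 20% buffer is an assumed figure for illustration.

def max_guest_ram_gb(physical_gb: float, buffer_fraction: float = 0.2) -> float:
    """Return the RAM budget for guests, reserving a fraction as buffer."""
    return physical_gb * (1 - buffer_fraction)

def fits(guest_allocations_gb: list, physical_gb: float,
         buffer_fraction: float = 0.2) -> bool:
    """True if the combined guest allocations stay inside the budget."""
    return sum(guest_allocations_gb) <= max_guest_ram_gb(physical_gb, buffer_fraction)

# A 128GB host with a 20% buffer leaves 102.4GB for guests.
print(max_guest_ram_gb(128))          # 102.4
print(fits([16, 16, 32, 32], 128))    # True: 96GB allocated, inside budget
print(fits([32, 32, 32, 32], 128))    # False: 128GB allocated, no buffer left
```

Adjust the buffer to taste; the point is simply to check the sum of guest allocations against something smaller than the physical total.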
The speed of your hard disk subsystem is probably the second bottleneck you will encounter.
For iSCSI SANs, use multiple load-balanced connections to the SAN to get the desired bandwidth.
For the best performance, purchase a SAN or DAS that uses newer SAS (Serial Attached SCSI) hard drives.
Fibre Channel over Ethernet (FCoE) will provide better performance since it avoids the overhead of the TCP/IP stack, but for the time being, iSCSI provides the most bang for the buck.
A 3Gbps SAS link provides roughly 300MBps of usable throughput after encoding overhead.
15K SAS drives will provide the best performance.
You can allocate additional RAM to x64 guests for disk caching to compensate for an overloaded hard disk subsystem. This can greatly enhance performance for servers that are heavy on disk I/O, such as Exchange, SQL, and virtual desktops (VDI).
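The storage bandwidth notes above lend themselves to some back-of-the-envelope math. This Python sketch uses nominal link rates and assumes 8b/10b encoding (roughly 80% efficiency); real-world iSCSI throughput will be lower still once TCP/IP and protocol overhead are counted:

```python
# Sketch: back-of-the-envelope storage bandwidth math.
# Link rates are nominal; the 8b/10b efficiency figure is an assumption,
# and real throughput is lower after protocol overhead.

GBIT = 1_000_000_000  # bits per second

def usable_mbps(link_gbps: float, encoding_efficiency: float = 0.8) -> float:
    """Approximate usable MB/s for a serial link using 8b/10b encoding."""
    return link_gbps * GBIT * encoding_efficiency / 8 / 1_000_000

# A 3Gbps SAS link: about 300 MB/s usable after 8b/10b overhead.
print(usable_mbps(3))       # 300.0
# Four load-balanced 1GbE iSCSI paths: about 400 MB/s aggregate.
print(4 * usable_mbps(1))   # 400.0
```

Math like this is useful for deciding how many iSCSI paths you need before the SAN, rather than the network, becomes the limit.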
Target CPU usage is around 60-70%, combined across all guest VMs and the VM host.
A four-to-one ratio of guest vCPUs to physical CPU cores is a good starting point, after taking other factors into account (RAM, disk I/O, networking, etc.). From there you can add or remove VMs as needed.
Multi-socket x64 processors provide the best performance.
For SMP applications, a guest's vCPUs shouldn't outnumber the host's physical CPUs.
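The consolidation guidance above can be expressed as a small starting-point calculator. This is a sketch: the 4:1 ratio is the rule of thumb from these notes, and the socket/core counts are hypothetical:

```python
# Sketch: starting-point vCPU budget from a 4:1 consolidation rule of thumb.
# The ratio and the core counts below are illustrative, not prescriptive.

def vcpu_budget(sockets: int, cores_per_socket: int, ratio: float = 4.0) -> int:
    """vCPUs you can plan for at a given vCPU-to-core consolidation ratio."""
    return int(sockets * cores_per_socket * ratio)

# Two 8-core sockets at 4:1 gives a starting budget of 64 vCPUs.
print(vcpu_budget(2, 8))        # 64
# Tighten the ratio for RAM- or disk-heavy workloads.
print(vcpu_budget(2, 8, 2.0))   # 32
```

Treat the result as a planning ceiling, not a target; RAM and disk I/O usually run out before CPU does.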
Dynamic VM moves (VMware vMotion / Microsoft Live Migration)
Plan your VM clusters so that no single VM host is overloaded. If a VM host in a cluster goes down, its guests fail over to the remaining hosts; without enough headroom, one failure can cause a domino effect.
NIC teaming or 10Gb Ethernet will provide the best performance for heavy usage.
Isolate the console network and protect the VM hosts at all costs.
Isolate the cluster heartbeat (vMotion / Live Migration) traffic on a physically separate switch.
The console network and cluster heartbeat network can be on the same network if need be.
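The cluster-planning note about the domino effect can be sketched as a simple N+1 headroom check. This is illustrative Python: it assumes a failed host's load can be split freely across the survivors, and all capacities and loads are hypothetical numbers:

```python
# Sketch: N+1 failover check for a VM cluster.
# Verifies that if any one host fails, the survivors can absorb its load
# without exceeding a target utilization, avoiding the domino effect.
# Assumes load is freely divisible across hosts; numbers are hypothetical.

def survives_single_failure(host_loads: list, host_capacity: float,
                            target_utilization: float = 0.7) -> bool:
    """True if any single host's load fits in the remaining hosts' headroom."""
    budget = host_capacity * target_utilization
    for failed in range(len(host_loads)):
        survivors = [load for i, load in enumerate(host_loads) if i != failed]
        headroom = sum(budget - load for load in survivors)
        if host_loads[failed] > headroom:
            return False
    return True

# Three hosts of capacity 100 loaded at 40 each: a failed host's 40 units
# fit in the survivors' 2 * (70 - 40) = 60 units of headroom.
print(survives_single_failure([40, 40, 40], 100))  # True
# At 60 each, a failed host's 60 units exceed 2 * (70 - 60) = 20.
print(survives_single_failure([60, 60, 60], 100))  # False
```

The 70% target matches the CPU usage guidance earlier in these notes; the same check works for RAM if you swap in memory figures.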