Friday, January 16, 2009

Partition Alignment in ESX 3

I came across this article on partition alignment in ESX Server today while doing some research on using dd in a VMFS environment. If you aren't familiar with the issues that partition misalignment can cause, it is most definitely a worthwhile read. If your head starts spinning in the first couple of sections and you feel like it's not worth the trouble, skip ahead to the benchmarking section at the end and I'm sure you'll reconsider.

Friday, January 9, 2009

Why Your Virtual Infrastructure Network Should Be Segmented

When you are architecting a Virtual Infrastructure, networking is one of your most important concerns. Consolidating 50 servers onto 5 hosts creates new IO and security challenges. Here is how we have implemented this, with good results:

1) Service Console VLAN - Having a separate VLAN for your service consoles not only raises the level of security around them, but also ensures that the ESX-to-ESX heartbeats used for High Availability cannot be bogged down by other network traffic.

2) vMotion Physically Segmented - The obvious reason for this is performance. Running a vMotion or an svMotion sends an enormous amount of data over the network. To avoid impacting the performance of other applications on your network, and to get the best possible throughput for your migrations, keeping this traffic physically separate is key.

The less obvious reason is security. During a vMotion, the raw contents of a VM's memory are sent across the network unencrypted. Should a malicious party be listening on this VLAN, they would be able to read the entire memory contents of your server. If any type of sensitive data is processed on that server, this creates an additional point of exposure.

3) Virtual Machine Network - In our case, we use the same VLAN for our VMs that we use for our physical servers.

4) Storage Network - If you will be using iSCSI or NFS on the back end, it makes sense to segment this out at least virtually, if not physically, to ensure proper performance of your storage devices. A sample set of service console commands for carving out these port groups follows below.
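
To give an idea of what this looks like in practice, here is a rough sketch of the ESX 3.x service console commands for putting vMotion on its own vSwitch, uplink and VLAN. The vSwitch name, vmnic number, VLAN ID and IP address are made up for illustration, so substitute your own:

# create a vSwitch and give it a dedicated physical uplink
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
# add a port group for the VMkernel interface and tag it with a VLAN ID
esxcfg-vswitch -A VMotion vSwitch1
esxcfg-vswitch -v 20 -p VMotion vSwitch1
# create the VMkernel NIC that vMotion will use
esxcfg-vmknic -a -i 192.168.20.11 -n 255.255.255.0 VMotion

vMotion itself is then enabled on that VMkernel port through the VI Client. The same pattern works for an iSCSI or NFS port group on your storage VLAN.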

Friday, January 2, 2009

A GUI for Storage vMotion

A couple of days ago, I downloaded and installed a VI Client plugin for Storage vMotion which makes using Storage vMotion as easy as a standard vMotion migration. David Davis over at VirtualizationAdmin.com has written up a nice little guide on installing and using this plugin. The plugin itself was written by Andrew Kutz and is provided free of charge on his Google Code page. The only gripe I have is that I was unable to use it to place my VMDKs and VMX files on separate SAN LUNs. I find this especially useful in the case of something like a database, where I may want one VMDK on RAID 5 for my data files and a separate VMDK on RAID 10 for my transaction logs. If anyone knows of a slick GUI that supports this functionality, I'd appreciate the tip.
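
In the meantime, the svmotion command in the VMware Remote CLI does support per-disk placement. The sketch below shows the general shape of the command; the server name, datacenter, datastore and VM paths are placeholders, and the exact argument format is worth double-checking against the Remote CLI documentation. Running svmotion --interactive instead will simply prompt for each of these values.

svmotion --url=https://vcserver/sdk --username=admin --password=secret --datacenter=MyDatacenter --vm="[SAN_LUN1] dbserver/dbserver.vmx:SAN_LUN1" --disks="[SAN_LUN1] dbserver/dbserver_1.vmdk:SAN_LUN2"

That is one command, wrapped here only by the page. In this example the VMX file and remaining disks stay on SAN_LUN1 while the second VMDK is relocated to SAN_LUN2.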

Wednesday, December 31, 2008

CPU Masking

Several months ago, I ended up in an unfortunate situation. We had invested a sizeable amount of money in a powerhouse server to add to our ESX cluster. While the server was the same model as the rest of the machines in our cluster, the vendor had added SSSE3 support in this latest revision, something I was not aware of at the time of purchase. The end result of adding these new instructions to the cluster was vMotion incompatibility between this server and our other ESX hosts.

While not supported by VMware, CPU masking is a feature available in Virtual Center that will allow you to overcome situations like this. While in my case the difference was only between SSE3 hosts and an SSSE3 host, compatibility certainly cannot be guaranteed in these situations, and I must emphasize that you proceed with caution. In addition, if the differences between my CPUs had been more profound (say, AMD vs. Intel), I would never have attempted this, especially in a production environment.

In some situations, Enhanced VMotion Compatibility (EVC) will allow you to overcome minor CPU differences, and I would certainly recommend that as a first attempt. You will need to be running at least Virtual Center 2.5 Update 2 and ESX 3.5 Update 2 to enable it, and you will also need to recreate your cluster. You can find information on EVC as well as supported processors in KB 1003212.

VMware outlines the process of CPU masking in KB 1993. In my case, I wanted to mask only the needed instruction sets, and it took a little while to figure out exactly which combination of masks to apply to each of the registers involved. The end result was to navigate to C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter and edit the vpxd.cfg XML configuration file. The section of text below was all that was needed to accommodate the differences between these processors. Sorry for the screenshot, but formatting XML for web presentation is a pain...
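
For reference, SSSE3 is reported in bit 9 of the ECX register returned by CPUID leaf 1, so whatever mechanism carries the mask, that is the bit being hidden. Purely as an illustration of the notation (this is the per-VM CPU identification mask format, not the literal contents of my vpxd.cfg), forcing that bit to zero looks something like this:

cpuid.1.ecx = "----------------------0---------"

The string is 32 characters, one per bit with bit 31 on the left; the '0' sits at bit 9 and makes SSSE3 report as unavailable, while '-' leaves a bit at its default.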

Tuesday, December 30, 2008

Adding Raw Device Mappings for Maximum Performance

We know that adding an RDM to your SQL and Exchange guests can have an enormous impact on IO performance. In addition, on Windows Server guests prior to Server 2008, the partition must be properly aligned when it sits on a RAID volume in order to achieve maximum performance. This is because, on a disk that reports 64 sectors per track, Windows creates the first partition starting at the sixty-fourth sector, misaligning it with the underlying physical disk. In some cases, aligning a partition can boost IO performance by up to 50 percent! Here is a step-by-step guide to creating, attaching and aligning an RDM partition in ESX 3.5 on Server 2003.
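
Before diving in, here is the arithmetic behind the problem, assuming a 64 KB stripe size on the array (a common default, but check your own storage):

default offset:   63 sectors x 512 bytes = 32,256 bytes  (falls in the middle of a stripe)
align=64 offset: 128 sectors x 512 bytes = 65,536 bytes  (lands exactly on a stripe boundary)

With the default offset, a 64 KB IO from the guest can straddle two stripes and turn into two IOs on the array; with the aligned offset it maps onto one.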

Attaching a Raw Device Mapping

1. Log on to your SAN and create your LUN. Make this LUN available to all ESX hosts in your cluster. The steps needed to do this will vary by SAN and Fibre switch, so consult your vendor’s documentation for more info.
2. Log in to Virtual Infrastructure Client and connect to your Virtual Center instance.
3. Click on an ESX host and choose the Configuration tab.
4. Click on Storage Adapters and rescan for new storage devices. Your new LUN should show up. You do NOT want to create a VMFS here - the LUN will be presented to the guest raw rather than formatted as a datastore.
5. Repeat this procedure for each ESX host in your cluster.
6. Right-click the VM that you will be attaching the RDM to and choose Edit Settings.
7. Click Add on the Hardware tab, choose Hard Disk and then Raw Device Mapping. (A command-line alternative for this step is sketched just after this list.)
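
If you prefer the service console, the rescan in step 4 and the mapping in step 7 can also be done from the command line. A rough sketch - the vmhba path, datastore and file names below are examples, so substitute your own:

# rescan a single HBA for the new LUN
esxcfg-rescan vmhba1
# create the RDM mapping file on an existing VMFS volume;
# -r creates a virtual compatibility mapping, -z a physical compatibility one
vmkfstools -r /vmfs/devices/disks/vmhba1:0:3:0 /vmfs/volumes/datastore1/dbserver/dbserver_rdm.vmdk

The resulting mapping file can then be attached to the VM in Edit Settings as an existing virtual disk.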

Adding your new disk to the guest OS

1. Log on to your guest OS and launch Device Manager.
2. Scan for hardware changes.
3. Open Disk Management and initialize the new disk. Do not create a partition at this time.

Adding a properly aligned partition to the RDM

1. Log on to the server.
2. Type diskpart at the command line to launch the diskpart utility.
3. Type list disk to see a list of disks present. For my example, I will be creating a partition on Disk 2 and it will be the only partition on this disk.
4. Type select disk followed by the disk number. In my example, you would type select disk 2.
5. Type create partition primary align=64 to create a primary partition, aligned to 64 KB, that takes up the entire disk (the align value is specified in kilobytes). You can use the size keyword if you are creating more than one partition on the disk.
6. After you have finished here, you will need to go into Disk Management, format the partition and assign it a drive letter as you would normally. The complete diskpart session, along with a quick way to verify the alignment, is shown below.
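
Putting it all together, the session looks like this (Disk 2 is just my example), followed by a check that the offset landed where you expect:

C:\> diskpart
DISKPART> list disk
DISKPART> select disk 2
DISKPART> create partition primary align=64
DISKPART> exit

C:\> wmic partition get Name, StartingOffset

A StartingOffset of 65536 on the new partition confirms the 64 KB alignment; the default, misaligned value would be 32256.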