
Friday, January 16, 2009

Partition Alignment in ESX 3

While doing some research on using dd in a VMFS environment, I came across this article on partition alignment in ESX Server. If you aren't familiar with the problems partition misalignment can cause, it is definitely a worthwhile read. If your head starts spinning in the first couple of sections and you feel like it's not worth the trouble, skip ahead to the benchmarking section at the end; I'm sure you'll reconsider.
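
If you just want to check whether an existing partition is aligned, you can look at its starting sector from the service console. A minimal sketch, assuming your LUN shows up as /dev/sdb (the device name is just an example):

# List partitions with their start offsets in sectors. With 512-byte
# sectors, a start that is a multiple of 128 sits on a 64 KB boundary
# and is aligned; the classic MBR default of 63 is not.
fdisk -lu /dev/sdb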

Friday, January 9, 2009

Why Your Virtual Infrastructure Network Should Be Segmented

When architecting a Virtual Infrastructure, networking is one of your most important concerns. Consolidating 50 servers onto 5 hosts creates new I/O and security concerns. Here is how we have segmented our networks, with good results (a sketch of the matching vSwitch commands follows the list):

1) Service Console vLAN - Having a separate vLAN for your service consoles not only raises the level of security around them, but also ensures that the ESX-to-ESX heartbeats used for high availability cannot be bogged down by other network traffic.

2) vMotion Physically Segmented - The obvious reason for this is to improve performance. Running a vMotion or an svMotion sends an enormous amount of data over the network. So as to not impact the performance of other applications on your network and to get the best throughput possible in your migrations, having this physically separate is key.

The less obvious reason for this is security. When a vMotion is happening, the raw contents of a VM's memory are sent across the network. Should a malicious party be listening on this vLAN, they would be able to read the entire memory contents of your server. If you have any type of sensitive data being processed on that server, this creates an additional point of exposure.

3) Virtual Machine Network - In our case, we use the same vLAN for our VMs that we use for our physical servers.

4) Storage Network - If you will be using iSCSI or NFS on the back end, it makes sense to segment this out at least virtually, if not physically, to ensure consistent performance from your storage devices.
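
On the ESX side, this segmentation boils down to separate vSwitches or port groups, each with its own VLAN ID and uplinks. A rough sketch from the service console; the vSwitch, port group, uplink and VLAN numbers below are made-up examples, not our actual layout:

# Dedicated vSwitch for VMotion with its own physical uplink
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -A "VMotion" vSwitch1
esxcfg-vswitch -v 105 -p "VMotion" vSwitch1

# Tagged port groups for the service console and VM traffic on vSwitch0
esxcfg-vswitch -v 10 -p "Service Console" vSwitch0
esxcfg-vswitch -v 20 -p "VM Network" vSwitch0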

Tuesday, December 30, 2008

Adding Raw Device Mappings for Maximum Performance

We know that adding an RDM to your SQL and Exchange hosts can have an enormous impact on I/O performance. In addition, on Windows Server guests prior to Server 2008, the partition must be properly aligned when accessing a RAID volume in order to achieve maximum performance. This is because, on a disk that reports 64 sectors per track, Windows creates the partition starting at the sixty-fourth sector, misaligning it with the underlying physical disk. In some cases, aligning a partition can boost I/O performance by up to 50 percent! Here is a step-by-step guide to creating, attaching and aligning an RDM partition in ESX 3.5 for a Windows Server 2003 guest.

Attaching a Raw Device Mapping

1. Log on to your SAN and create your LUN. Make this LUN available to all ESX hosts in your cluster. The steps needed to do this will vary by SAN and Fibre switch, so consult your vendor’s documentation for more info.
2. Log in to Virtual Infrastructure Client and connect to your Virtual Center instance.
3. Click on an ESX host and choose the Configuration tab.
4. Click on Storage adapters and rescan for new storage devices. Your new LUNs should show up. You do NOT want to create a VMFS here.
5. Repeat this procedure for each ESX host in your cluster.
6. Right click on the guest OS that you will be attaching the RDM to and click on Edit Settings.
7. On the Hardware tab, click Add, choose Hard Disk and then Raw Device Mapping. (A command-line equivalent for steps 4 and 7 is sketched after this list.)
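
If you prefer the service console to the VI Client, the rescan and the mapping itself can also be done from the command line. A minimal sketch; the HBA, LUN path, datastore and file names are examples, not values from this environment:

# Rescan an HBA for the newly presented LUN (repeat on each host)
esxcfg-rescan vmhba1

# Create the RDM pointer file on a VMFS datastore for the raw LUN.
# -r creates a virtual compatibility RDM; use -z instead for
# physical compatibility (pass-through) mode.
vmkfstools -r /vmfs/devices/disks/vmhba1:0:1:0 /vmfs/volumes/datastore1/sqlvm/sqlvm_rdm.vmdk

The pointer file can then be attached to the guest as an existing disk in Edit Settings.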

Adding your new disk to the guest OS

1. Log on to your guest OS and launch Device Manager.
2. Scan for hardware changes.
3. Open Disk Management and initialize the new disk. Do not create a partition at this time.

Adding a properly aligned partition to the RDM

1. Log on to the server.
2. Type diskpart at the command line to launch the diskpart utility.
3. Type list disk to see a list of disks present. For my example, I will be creating a partition on Disk 2 and it will be the only partition on this disk.
4. Type select disk followed by the disk number. In my example, you would type select disk 2.
5. Type create partition primary align=64 to create a primary partition on a 64 KB boundary that takes up the entire disk (the align value is in KB). You can use the size keyword if you are creating more than one partition on the disk. The full session is shown after these steps.
6. After you have finished here, you will need to go in to Disk Management, format the partition and assign it a drive letter as you would normally.
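
Putting steps 2 through 5 together, the whole diskpart session looks like this (Disk 2 is just the disk number from my example; substitute your own):

C:\> diskpart
DISKPART> list disk
DISKPART> select disk 2
DISKPART> create partition primary align=64
DISKPART> exit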

Wednesday, December 3, 2008

Running SQL Server on ESX

There is currently a lot of debate about whether or not you should be running SQL Server or Exchange on ESX. Rather than jump into the middle of that debate, I'll describe the best way I know to run a high-performance SQL Server on virtual hardware.



I/O

I/O is by and large the biggest concern when running a database on an ESX cluster. Your main concern here is going to be ensuring that your database files are not impacted by I/O operations on your other VMs. We can ensure this through Raw Device Mappings.

Generally, for a high-performance database, you'll want your data files, log files, tempdb and program/OS/backup files to all live on separate spindles. To accomplish this, we do the following:

Data Files - RAID5 or RAID10 (RAID5 is more economical but you will suffer a hit on write performance).

Log Files - RAID10

TempDB - RAID10

OS/Program Files/Backup/File Share - This can be run from a standard VMDK.


Memory

In addition to I/O, memory is very important. If a database server has sufficient memory, it will not need to go to disk for its data as often, vastly improving performance. To be absolutely sure that your server receives sufficient memory, you can set your memory reservation equal to the amount of memory given to your VM. This will ensure that ESX never swaps out active memory from your database and that the SQL query optimizer can make accurate predictions about hardware performance.
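
The reservation is set in the VI Client under Edit Settings > Resources > Memory, and it ends up in the VM's .vmx file. A rough sketch for an 8 GB VM, assuming the sched.mem.min option (which holds the reservation in MB); verify against your own .vmx after making the change in the client rather than editing the file by hand:

memsize = "8192"
sched.mem.min = "8192"

Here memsize is the memory granted to the VM and sched.mem.min is the reservation, so setting them equal guarantees the full allocation is backed by physical RAM.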


CPU

On my key instances, CPU has never been much of a bottleneck, so I generally treat this as I would with any other VM.

Monday, November 24, 2008

ESX Partitioning

While most of the defaults in an ESX installation are fine, I always take the time to edit the partition scheme. Since most new servers come with drives no smaller than 60GB and I'll be storing all of my VMDKs on shared storage, there's no reason not to give the extra space to the partitions that will use it. Here is my usual breakdown (a scripted-install sketch of the same layout follows the list):

/boot - 100MB - ext3 - The default is fine here.

/ - 10GB - ext3 - If you ever want to update your Service Console, it's nice to have some extra space available.

(none) - 1600MB - swap - The service console can access a maximum of 800MB of RAM, and your swap partition should be at least twice the size of the memory in use. Since this partition cannot be resized without doing a reinstall, I always set it to the max in case I need to allocate more memory to the SC down the road.

/var/log - 2GB - ext3 - Having a separate partition for your logs prevents them from filling up your root partition in the case of system issues.

(none) - 100MB - vmkcore - While this is optional, it holds the kernel dump if you hit a Purple Screen of Death. When you call VMware support in such a case, they will want to look at the contents held here.

/home - ? - ext3 - If you plan on storing scripts and other such files on your ESX server, you may want to carve out an extra home partition.

/vmfs/volumes/xyz - ? - vmfs-3 - Any leftover space can be set aside as a spare VMFS volume.
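
For scripted installs, the same layout can be expressed in the partitioning section of a ks.cfg. A rough sketch, assuming ESX 3.x kickstart syntax with sizes in MB; the exact fstype keywords for the vmkcore and VMFS partitions may differ on your build, and the /home and VMFS sizes here are placeholders, so compare against a ks.cfg generated by the installer:

part /boot    --fstype ext3    --size 100
part /        --fstype ext3    --size 10240
part swap     --size 1600
part /var/log --fstype ext3    --size 2048
part None     --fstype vmkcore --size 100
part /home    --fstype ext3    --size 2048
part None     --fstype vmfs3   --size 1 --grow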

Wednesday, November 5, 2008

ESXKILL Shutdown Script

A little while back, I was looking into getting PowerChute set up for our ESX hosts. The unfortunate news is that APC does not yet support ESX 3.5 U2. While researching this, though, I came across a script that shuts down all guest OSes and then the ESX host itself, which I have slightly modified and deployed to all of my ESX hosts. The original can be found at http://www.tooms.dk/. By copying the script to a location in my PATH, I can quickly log on to each server in the case of disaster and type 'esxkill'. This usually shuts everything down gracefully in 5-10 minutes. Here's the script:


#!/bin/bash
#####################################################################
#
# UPS shutdown script for VMware ESX 3.5 U1
#
# 20060911 First version by tooms@tooms.dk
# 20081006 Revised by ian.reasor@gmail.com
#####################################################################
# Set PATH variable to the location of the ESX utilities
PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin"

#####################################################################
# Attempt a graceful shutdown of each running VM through VMware Tools
count_vm_on=0
for vm in `vmware-cmd -l` ; do
    for VMstate in `vmware-cmd "$vm" getstate` ; do
        # If the VM is powered on, ask VMware Tools for a soft shutdown
        if [ "$VMstate" = "on" ] ; then
            echo " "
            echo "VM: " $vm
            echo "State: is on and will now shut down"
            echo "Shutting down: " $vm
            vmware-cmd "$vm" stop trysoft
            vmwarecmd_exitcode=$?
            # Fall back to a hard power-off if the soft shutdown failed
            if [ $vmwarecmd_exitcode -ne 0 ] ; then
                echo "exitcode: $vmwarecmd_exitcode so will now power off"
                vmware-cmd "$vm" stop hard
            fi
            count_vm_on=$(expr $count_vm_on + 1)
            sleep 2
        # If the VM is already powered off
        elif [ "$VMstate" = "off" ] ; then
            echo " "
            echo "VM: " $vm
            echo "State: is already powered off"
        # If the VM is suspended
        elif [ "$VMstate" = "suspended" ] ; then
            echo " "
            echo "VM: " $vm
            echo "State: is already suspended"
        # Ignore the "getstate" and "=" tokens in the command output
        else
            printf ""
        fi
    done
done

#####################################################################
# Wait for up to 5 minutes for the VMs to shut down
if [ $count_vm_on = 0 ] ; then
    echo " "
    echo "All VMs are powered off or suspended"
else
    echo " "
    vm_time_out=300
    count_vm_on=0
    echo "Waiting for VMware virtual machines."
    for (( second=0; second<$vm_time_out; second=second+5 )); do
        sleep 5
        printf "."
        # Count how many VMs are still powered on
        count_vm_on=0
        for vm in `vmware-cmd -l` ; do
            for VMstate in `vmware-cmd "$vm" getstate` ; do
                if [ "$VMstate" = "on" ] ; then
                    count_vm_on=$(expr $count_vm_on + 1)
                fi
            done
        done
        # Stop waiting as soon as everything is down
        if [ $count_vm_on = 0 ] ; then
            break
        fi
    done
fi

#####################################################################
# Check to see if any VMs are still on and, if so, power them off hard
for vm in `vmware-cmd -l` ; do
    for VMstate in `vmware-cmd "$vm" getstate` ; do
        if [ "$VMstate" = "on" ] ; then
            echo " "
            echo "VM: " $vm
            echo "it is still on but will now be powered off"
            vmware-cmd "$vm" stop hard
            sleep 2
        fi
    done
done

#####################################################################
# Shut down the ESX host itself
echo " "
echo "All VMs have been successfully shut down, halting ESX host"
echo " "
shutdown -h now
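
To deploy it as described above, copy the script somewhere in root's PATH and make it executable; the path below is simply where I put mine:

cp esxkill /usr/local/sbin/esxkill
chmod 755 /usr/local/sbin/esxkill

After that, logging in and typing esxkill kicks off the full shutdown sequence.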