Friday, January 9, 2009

Why Your Virtual Infrastructure Network Should Be Segmented

When architecting a Virtual Infrastructure, networking is one of your most important concerns. Consolidating 50 servers onto 5 hosts concentrates all of their traffic onto a handful of uplinks, which creates new IO and security concerns. Here is how we have segmented our networks, with good results:

1) Service Console vLAN - Having a separate vLAN for your service consoles not only raises the level of security around them, but also ensures that the ESX-to-ESX heartbeats used for High Availability cannot be bogged down by other network traffic.

2) vMotion Physically Segmented - The obvious reason for this is performance. Running a vMotion or an svMotion sends an enormous amount of data over the network. To avoid impacting the performance of other applications on your network and to get the best possible migration throughput, keeping this traffic physically separate is key.

The less obvious reason for this is security. When a vMotion is happening, the raw contents of a VM's memory are sent across the network. Should a malicious party be listening on this vLAN, they would be able to read the entire memory contents of your server. If you have any type of sensitive data being processed on that server, this creates an additional point of exposure.

3) Virtual Machine Network - In our case, we use the same vLAN for our VMs that we use for our physical servers.

4) Storage Network - If you will be using iSCSI or NFS on the back end, it makes sense to segment this out at least virtually, if not physically, to ensure consistent performance from your storage devices.
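The layout above can be sketched with the ESX service console's esxcfg commands. This is only an illustrative sketch: the vSwitch numbers, vmnic assignments, VLAN IDs, and IP addresses are all assumptions you would replace with your own.

```shell
# 1) Service Console on its own vLAN (VLAN ID 10 is an assumption)
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswitch -v 10 -p "Service Console" vSwitch0

# 2) vMotion on a physically separate vSwitch with its own uplink
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -A VMotion vSwitch1
esxcfg-vmknic -a -i 10.0.1.11 -n 255.255.255.0 VMotion

# 3) VM traffic on the same vLAN as the physical servers (VLAN 20 assumed)
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2
esxcfg-vswitch -A "VM Network" vSwitch2
esxcfg-vswitch -v 20 -p "VM Network" vSwitch2

# 4) iSCSI/NFS storage on its own vSwitch and VMkernel port
esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic3 vSwitch3
esxcfg-vswitch -A Storage vSwitch3
esxcfg-vmknic -a -i 10.0.2.11 -n 255.255.255.0 Storage
```

With fewer physical NICs you can collapse some of these vSwitches and fall back to vLAN-only separation, but vMotion and storage are the two you will feel it most if they share an uplink.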

3 comments:

Anonymous said...

Would you please look at Ed Haletky's blog entry and comment upon it relative to your ideas?

http://www.networkworld.com/community/haletky

I hope this is the right URL, it's about his suggestions for setting up ESX depending on how many NICs one has.

Thank you, Tom

Unknown said...

I'm always torn with the vMotion Network. I normally put it on the iSCSI network if available. During implementations, vMotions are hot and heavy but once the environment stabilizes, I don't see an egregious amount of vMotioning happening and I hate that the 'reserved' bandwidth is just sitting there.

But, by the book, you make good points.

Ian Reasor said...

The thing I love about vMotion is that it doesn't impact the performance of any of my VMs. If I were sharing a network between storage and vMotion, every time a vMotion happened, I should expect to see an IO bottleneck on any VMs that heavily utilize my storage network. In my case, I like to overcommit and make extensive use of DRS, so vMotioning happens pretty regularly, even after the environment has stabilized.

In terms of the Network World article, which of the options are you considering? I read through his two pNIC proposal which seems to be pretty sound (if you absolutely must run on 2 NICs), but he has use cases for several other options. How many pNICs are you planning to implement with?