
Re: Best design for using 8 physical NICs on an ESXi 5.1 host


I'm presuming that you are using 1Gb uplinks.

 

If I were looking to make the most use of all the uplinks, then I would do something like the following:

 

vSS0 - Standard Virtual Switch - 3 uplinks (a scripted sketch follows this list)

Management - vmnic0/vmnic3/vmnic7 - active/standby/standby

vMotion1 - vmnic0/vmnic3/vmnic7 - unused/active/standby

vMotion2 - vmnic0/vmnic3/vmnic7 - unused/standby/active
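If you'd rather script that than click through each port group, a rough pyvmomi sketch is below. It's untested here: the vCenter/host names, VLAN IDs and the vmnic0/vmnic3/vmnic7 numbering are just my assumptions, and the three port groups are assumed to already exist on vSS0 (created when you add the vmkernel ports).

```python
# Rough sketch: apply an explicit active/standby failover order to the vSS0
# port groups. Names, VLAN IDs and vmnic numbers are placeholders - adjust them.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ssl._create_unverified_context())
host = si.content.searchIndex.FindByDnsName(dnsName="esx01.example.com", vmSearch=False)
netsys = host.configManager.networkSystem

def set_failover(pg_name, vlan_id, active, standby):
    """Explicit failover order; any vSS0 uplink not listed ends up unused."""
    spec = vim.host.PortGroup.Specification(
        name=pg_name,
        vlanId=vlan_id,
        vswitchName="vSS0",
        policy=vim.host.NetworkPolicy(
            nicTeaming=vim.host.NetworkPolicy.NicTeamingPolicy(
                policy="failover_explicit",
                nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
                    activeNic=active,
                    standbyNic=standby))))
    netsys.UpdatePortGroup(pgName=pg_name, portgrp=spec)

set_failover("Management", 10, ["vmnic0"], ["vmnic3", "vmnic7"])
set_failover("vMotion1",   20, ["vmnic3"], ["vmnic7"])  # vmnic0 unused
set_failover("vMotion2",   20, ["vmnic7"], ["vmnic3"])  # vmnic0 unused

Disconnect(si)
```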

 

vDS1 - Distributed Virtual Switch - 3 uplinks - NIOC enabled - Route based on physical NIC load

VM Networking
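For what it's worth, "Route based on physical NIC load" maps to the "loadbalance_loadbased" policy string in the API, so the vDS1 port group can be set with a small pyvmomi snippet like the one below. Rough sketch only: you supply the distributed port group object however you normally look it up, and NIOC itself is a switch-level setting, not a port group one.

```python
# Rough sketch: set "Route based on physical NIC load" (load-based teaming) on a
# vDS1 port group. Pass in a vim.dvs.DistributedVirtualPortgroup you already hold.
from pyVmomi import vim

def use_load_based_teaming(dv_pg):
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        configVersion=dv_pg.config.configVersion,
        defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
            uplinkTeamingPolicy=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
                policy=vim.StringPolicy(inherited=False, value="loadbalance_loadbased"))))
    dv_pg.ReconfigureDVPortgroup_Task(spec=spec)

# NIOC is enabled on the switch itself, e.g.:
# dvs1.EnableNetworkResourceManagement(enable=True)
```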

 

vDS2 - Distributed Virtual Switch - 2 uplinks - NIOC/SIOC enabled - Route based on IP hash - LACP enabled

NFS
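Same idea for the NFS port group on vDS2, just with the "loadbalance_ip" policy string ("Route based on IP hash"). The LACP/LAG side lives on the vDS uplink port group and on the physical switch, which I'm not showing here; names are again assumed.

```python
# Rough sketch: set "Route based on IP hash" on the vDS2 NFS port group.
# Enabling LACP on the uplink port group and building the channel on the
# physical switch are separate steps and not covered here.
from pyVmomi import vim

def use_ip_hash(nfs_pg):
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        configVersion=nfs_pg.config.configVersion,
        defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
            uplinkTeamingPolicy=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
                policy=vim.StringPolicy(inherited=False, value="loadbalance_ip"),
                notifySwitches=vim.BoolPolicy(inherited=False, value=True))))
    nfs_pg.ReconfigureDVPortgroup_Task(spec=spec)
```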

 

There are a lot of configurations you could use. This isn't a simple design, but it will give you maximum throughput for each of your traffic types. I'm presuming that you have a VLAN available for each traffic type: management, vMotion, VM networking and NFS. I'm further presuming that you are not routing NFS traffic.

 

I don't like putting management on a vDS that vCenter manages or runs from; in fact, I don't like putting the management network on a vDS at all. In a 10GbE environment, where you usually have only 2 uplinks, there isn't a choice in the matter, but in a 1Gb environment you do have a choice. Host profiles and vDS do not always talk nicely to each other, so if you lose the management network, applying the rest of the host profile configuration will fail and you will need to manually re-add the host.

 

Cheers,

Paul

