Author Topic: VMWare ESXi 4.0U1: too many VLAN & NIC options?  (Read 4190 times)

Offline athompso

  • Newbie
  • Posts: 8
VMWare ESXi 4.0U1: too many VLAN & NIC options?
« on: May 18, 2010, 12:38:55 am »
I don't think I'm restarting an exact copy of some other thread; at least, I haven't found anything summarizing this info.

When using multiple segregated networks in a VMWare environment, there are several ways to provision pfSense.  I've now tried two of them, and am wondering if anyone has any long-term field experience or has actually done comparison testing?

My first virtualized pfSense box was pretty conventional: the VMware host had 4 NICs; I assigned one each to the WAN, LAN, and OPT (aka "development & testing") networks, created a separate vSwitch for each, and didn't use VLAN (802.1q) tagging at all.  Created one virtual NIC (e1000 type) for each vSwitch, and assigned em0/em1/em2 in pfSense to the appropriate networks.
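For anyone wanting to reproduce the first scenario from the ESXi 4.x service-console CLI, it comes down to something like the following sketch. The vSwitch, vmnic, and port-group names here are illustrative, not taken from my actual config:

```shell
# Scenario 1 sketch: one vSwitch per physical NIC, no VLAN tagging.
# All names (vSwitch1, vmnic1, "WAN", etc.) are placeholders.
esxcfg-vswitch -a vSwitch1            # create a vSwitch for the WAN segment
esxcfg-vswitch -L vmnic1 vSwitch1     # uplink one dedicated physical NIC
esxcfg-vswitch -A "WAN" vSwitch1      # port group the pfSense vNIC attaches to

esxcfg-vswitch -a vSwitch2            # repeat for LAN
esxcfg-vswitch -L vmnic2 vSwitch2
esxcfg-vswitch -A "LAN" vSwitch2

esxcfg-vswitch -a vSwitch3            # and for OPT (dev & testing)
esxcfg-vswitch -L vmnic3 vSwitch3
esxcfg-vswitch -A "OPT" vSwitch3
```

The same thing can of course be done through the vSphere Client GUI; the CLI form just makes the topology explicit.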

The second virtualized pfSense box was the opposite, in an essentially identical scenario... I used a single switch for all the ethernet connections, using VLANs to segregate them.  Created a 4-way trunk group (static, not LACP) to connect to a VMWare ESXi 4.0U1 server (4 NICs, again).  One vSwitch only, VLAN tagging configured at the vSwitch level, and a single virtual interface for the pfSense box with 802.1q tags passed straight through to the guest.  I used the "flexible" type NIC in this case, after reading VMWare's published results about latency vs. CPU usage vs. throughput; I don't need massive throughput, but I do need better latency if possible.  Created 3 VLANs off the single le0 interface in the pfSense image.
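The second scenario, sketched the same way: on ESXi, passing 802.1q tags straight through to the guest is done by setting the port group's VLAN ID to 4095 (so-called virtual guest tagging), and the VLAN sub-interfaces are then created inside pfSense. Again, all names and VLAN IDs below are illustrative:

```shell
# Scenario 2 sketch: one vSwitch, all four uplinks in the trunk,
# tags passed through to the guest.  Names/IDs are placeholders.
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic0 vSwitch1            # all four physical NICs
esxcfg-vswitch -L vmnic1 vSwitch1            # join the static trunk
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "Trunk" vSwitch1
esxcfg-vswitch -v 4095 -p "Trunk" vSwitch1   # 4095 = pass 802.1q tags through

# Inside pfSense (FreeBSD), the GUI's Interfaces -> VLANs page
# does the equivalent of:
ifconfig vlan10 create vlan 10 vlandev le0
ifconfig vlan20 create vlan 20 vlandev le0
ifconfig vlan30 create vlan 30 vlandev le0
```

The pfSense web GUI is the supported way to create the VLANs (so they persist in the config); the ifconfig lines just show what it does under the hood.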

Both scenarios seem to be working OK; the newer (2nd) scenario has a slight advantage in that it'll probably be easier to turn on jumbo frames, but otherwise I can't really see any difference.
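For reference, turning on jumbo frames in the single-vSwitch scenario would be roughly this (vSwitch and interface names are illustrative; the physical switch and everything on the segment would also have to support the larger MTU end-to-end):

```shell
# Raise the MTU on the whole vSwitch (ESXi 4.x standard vSwitch;
# jumbo frames must also be enabled on the physical switch ports).
esxcfg-vswitch -m 9000 vSwitch1

# Inside pfSense (FreeBSD), match the MTU on the trunk interface:
ifconfig le0 mtu 9000
```

Whether the guest NIC type actually supports a 9000-byte MTU would need checking for the adapter in use.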

I haven't done comparative stress tests between the two environments, even though they're pretty much directly comparable; I can't see any performance difference during normal use.

I can think of at least four more intermediate configurations between these two scenarios; neither scenario has run long enough for me to have any substantial experience yet... can anyone see potential pitfalls that I'm likely to run into?  What's worked for you, and what hasn't?

Thanks,
-Adam Thompson <athompso@athompso.net>

Offline athompso

  • Newbie
  • Posts: 8
Re: VMWare ESXi 4.0U1: too many VLAN & NIC options?
« Reply #1 on: May 18, 2010, 12:52:54 am »
Looks like EddieA is collecting some real data, here: http://forum.pfsense.org/index.php/topic,21510.0.html.

Offline EddieA

  • Full Member
  • Posts: 151
Re: VMWare ESXi 4.0U1: too many VLAN & NIC options?
« Reply #2 on: May 22, 2010, 01:29:37 pm »
Quote from: athompso
Looks like EddieA is collecting some real data, here: http://forum.pfsense.org/index.php/topic,21510.0.html.

I gave up on that shortly after I posted, because I moved my pfSense off the ESXi box onto its own dedicated thin client.

Cheers.