pfSense English Support > Virtualization installations and techniques

IMPORTANT: Xen/KVM networking will not work using default hypervisor settings!


johnkeates:
This is still needed with 2.4-RELEASE, so any version using VirtIO (2.2 and above) is still affected.


If you are reading this, you probably have issues with your virtual network and pfSense, usually when packets need to pass through pfSense for NAT or routing.
You will be able to ping hosts, but TCP (and sometimes, it seems, UDP as well) may fail or transfer extremely slowly.

An issue exists with the VirtIO drivers in combination with the packet filter (pf) when checksum offloading is left enabled on your virtual interfaces.

The reason this happens is that virtual networks don't need checksums to verify the integrity of packets, because there is no wire in a virtual network (it uses shared memory). Packets are therefore not checksummed by the virtual interfaces and arrive at pf with what looks like an invalid checksum. Those packets get dropped! I currently don't know whether this is intended behaviour, but since the packets are technically malformed it seems understandable that they get filtered out and dropped.


Your symptoms should include:

- Ping works with no problem, even over NAT, from LAN to LAN, from WAN to WAN and any cross-subnet combination with the correct NAT/gateway rules.
- TCP connections work one way or between specific hosts, but fail or silently drop in one direction to or from WAN. Extremely slow traffic (~0.4 Kbps) has been observed as well.
- UDP seems to work sometimes, but randomly fails depending on the application (which seems to point towards a TCP initiation).

In case you are not sure, you can apply the offloading change anyway: it won't harm your network, and at its worst it will simply degrade performance by a few percent. Reverting is easy.

This will, however, not fix any VLAN issues; the VirtIO drivers simply do not support VLANs. Circumvent that by either putting the VLANs in your VIF stanzas, creating multiple interfaces on the pfSense side, or by using HVM-emulated network devices.


This is currently triple-confirmed on IRC and this forum for all Xen types (XenServer, Xen from source, etc., on at least 3 major Linux distributions across different releases and kernels), so if you are using Xen and can't seem to get proper packet flow after upgrading, this is probably your problem. For KVM, the only confirmations are reports on this forum, as I didn't need to research it for my own systems and didn't confirm it with people anywhere else. Since both use VirtIO, it seems plausible that the checksumming implementation is the same and therefore presents the same problems. In theory, any virtualisation system using virtual interfaces that don't checksum packets will have this issue.


The solution is to turn off at least tx checksum offloading for the interface that pfSense receives its non-checksummed packets on. Definitely on the pfSense side, and on the hypervisor side as well!


I'm collecting platform-specific settings, but for now, here is a guide using ethtool for Xen hosts with xl as the toolstack and a Linux control domain (dom0):

To fix the checksum problem, do the following:

1. In pfSense, make sure all forms of offloading are turned off. We don't want to offload anything to the VirtIO drivers!
2. On the hypervisor side of the pfSense interfaces (commonly called vifX.Y, where X is the instance ID and Y is the interface ID), turn off at least tx offloading.

Regarding step 2: if your hypervisor has a Linux control domain (which it will in most cases), you can use ethtool to do this, for example:


--- Code: ---$ sudo ethtool -K vif123.3 tx off
--- End code ---

This assumes you are using sudo to gain root privileges, and that vif123.3 is the pfSense netback interface you want to turn tx offloading off for.


(Beginners guide ahead!)

If you do not know which interface belongs to the pfSense VM (domU in Xen lingo), follow these steps (on dom0, the Xen control domain):

1. List the currently running instances using your toolstack; with the current standard (XenLight, or xl):


--- Code: ---$ sudo xl list
--- End code ---

This will give you a list of all the VMs with their names and their current running IDs. Note this ID.

2. List the interfaces for this VM by listing all interfaces on the system and filtering for the vif ID that belongs to your pfSense VM. In this example I use 16 as the ID, so I'll grep for vif16:


--- Code: ---$ sudo ifconfig | grep vif16
--- End code ---

This will give you a list of all the virtual interfaces that your pfSense interfaces are connected to using VirtIO.
It might look like:


--- Code: ---vif16.0   Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff 
vif16.1   Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff 
--- End code ---

If there are lines like "vif16.0-emu", ignore those.

3. Disable offloading for the interfaces to make sure dom0 (Xen's control domain, Linux) recalculates the checksum. If one of those interfaces is bridged to an actual hardware network card, this isn't strictly needed; if you are not sure whether that is the case for you, disable offloading for all interfaces. The CPU cost is a lot lower than it used to be.


--- Code: ---$ sudo ethtool -K vif16.0 tx off
$ sudo ethtool -K vif16.1 tx off
--- End code ---

(Beginners guide ending here!)

From this point on, all traffic should pass as it normally would, and thanks to the VirtIO drivers it should be faster than using the HVM-emulated ethernet cards!

Now, these settings are not persistent in any way, so if you want them to stick, you will have to adjust the vif-script on your hypervisor to apply the settings whenever the domU is created. It's up to you to find out how to do that; this is a pfSense forum, not a Xen forum (and I don't have the time to create a sample script right now :p).


But what if you don't want to have those bloody VirtIO interfaces? Or what if you really need the old VLAN capability?

Well, just disable PV altogether. VirtIO, or paravirtualised networking, relies on communication via a virtual PCI device, which can be turned off for any domU in Xen that you don't want to load PV drivers for. For Xen 4 and above with the xl toolstack, just add this line to your domU configuration file:


--- Code: ---xen_platform_pci=0
--- End code ---

Then stop and start (not restart; we want to re-create the domU with the new settings) the pfSense domU.
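For context, here is a minimal, hypothetical excerpt of such an xl domU configuration file; every value other than xen_platform_pci is a placeholder, so adapt it to your own setup:

```
# Hypothetical pfSense domU config excerpt (xl format); values are examples.
name = "pfsense"
builder = "hvm"
memory = 1024
vcpus = 2
# Disable the Xen platform PCI device so the domU cannot attach PV drivers:
xen_platform_pci = 0
vif = [ 'bridge=xenbr0' ]
```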

A different option would be disabling the enlightenment interfaces in pfSense, but that's a bit hacky, and modifying your pfSense isn't something I would recommend when you can do it reliably from the outside. You could probably use loader settings, or disable loading of the VirtIO kernel modules, but why make such a mess of things if the hypervisor can do it with just one line ;-)


I hope this is clear to everybody, from beginners to SysOps running Xen farms: you now know what to do until a better-documented fix comes along!

(and to mods/admins: pinning or sticky-ing this post might reduce duplicate threads :) )

johnkeates:
Reserved for future use.

duntuk:
Are you supposed to do this at the Windows Device Manager level too?

Example:

Device Manager --> Network adapters --> Intel PRO/1000 PT Dual Port Network Connection --> Advanced --> TCP Checksum Offload (IPv4) --> Value: Disabled

johnkeates:

--- Quote from: duntuk on February 12, 2015, 07:56:56 pm ---Are you supposed to do this at the Windows Device Manager level too?

Example:

Device Manager --> Network adapters --> Intel PRO/1000 PT Dual Port Network Connection --> Advanced --> TCP Checksum Offload (IPv4) --> Value: Disabled

--- End quote ---

No. This is for the Xen hypervisor side, not the domUs.

hvisage:
The "better" place to do this, is in pfSense: System -> Advanced -> Networking (tab) and check the "Disable hardware checksum offload"
