

Messages - TechyTech

I also ran into this issue since my ISP started throttling / rate limiting my connection speed and I saturate the WAN link with VPN traffic.  Easy to reproduce using

This is typical behavior when an upstream service is throttling or rate limiting throughput: packets are delayed (but not dropped) in order to choke back the downstream connection speed.

The problem you're experiencing is that the gateway monitor uses dpinger, which has a configured limit on how long it waits for responses before declaring them "lost".  What's important to note is that the Loss % is not actual data loss, but "missed" ping responses that arrived too late to be counted.

The key item that indicates this is RTTsd: RTT is of course the aggregate ping transit time, while RTTsd is the standard deviation across the received ping responses.  When the link is quiet the RTTsd will generally be fairly low, but when it climbs it means that something upstream is intermittently delaying packets, producing a larger deviation between ping attempts.  Thus if pings are delayed beyond the configured wait time, they are counted as "lost" even though they still arrive.
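
The interplay above can be sketched numerically.  A minimal Python illustration (the RTT samples and the 250 ms cutoff below are made-up values, not dpinger's defaults) of how late-but-delivered replies get reported as loss:

```python
# Illustrative sketch (not dpinger itself): a high RTT standard
# deviation plus a fixed loss interval turns late-but-delivered
# ping replies into reported "loss".
import statistics

rtts_ms = [12, 14, 13, 15, 380, 410, 13, 395, 14, 402]  # hypothetical samples
loss_interval_ms = 250  # replies slower than this are counted as "lost"

avg = statistics.mean(rtts_ms)
sd = statistics.stdev(rtts_ms)   # this is the "RTTsd" column
late = [r for r in rtts_ms if r > loss_interval_ms]
loss_pct = 100 * len(late) / len(rtts_ms)

print(f"RTT avg {avg:.0f} ms, RTTsd {sd:.0f} ms, reported loss {loss_pct:.0f}%")
# Every reply actually arrived; the reported "loss" is only the replies
# that missed the loss interval cutoff.
```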

I was able to work around this by going to System >> Routing >> Gateways and editing each gateway to increase the "Loss Interval" under the advanced section, which extends the time dpinger waits for responses before considering them "lost".  After that, my loss percentages dropped to near 0%, but then the real latency of the delayed packets showed up and skyrocketed, so I had to adjust the latency threshold values as well to keep the gateway from being marked down for excessively high latency when saturated with traffic.

You'll need to do some testing with traffic saturation on your VPN/WAN to come up with monitor values that do not cause the gateway monitor to consider the link offline.  I ended up configuring some fairly high upper latency thresholds to keep the link from being knocked offline under heavy traffic loads.

For anyone who has configured PIA VPN links and found that the gateway monitor (i.e. dpinger) is unable to ping the VPN link gateway, or who is using DNS IPs to monitor PIA VPN links: I figured out a way to ping a monitor IP on the actual link that is neither the gateway nor a DNS server, and after a few weeks of testing it seems stable and working without issue.

I've been scouring forums on how to configure the Monitor IP of a PIA VPN connection, and so far the only workaround offered is the kludge of using an external DNS server to monitor the gateway link.  This poses some problems: 1) pfSense / dpinger configures a static route to the defined monitor IP (in this case a DNS server), which immediately restricts that DNS server's IP to only the link it is defined as the monitor IP for; 2) pinging beyond the gateway itself is subject to "Internet weather", which yields sometimes-erratic ping responses reflecting routing issues beyond the link rather than the actual state of the link itself; and 3) if you have multiple VPN links you need multiple DNS IPs, each of which then gets static-routed to a specific link.  Not a good working solution to me.

I noticed that all the PIA links I've tested always get an IP address in the range 10.xx.10.yy.  And digging into the configuration that PIA pushes to the client, I noticed that it pushes a NET30 topology, not a SUBNET one.

More digging brought me to the pfSense docs on an alternate monitor address for NET30 links.  Though this did not directly address the PIA problem, it got me playing around some more, while remembering that non-contiguous (non-CIDR) netmasks are entirely legal (a netmask is, after all, just a bit mask) and that such masks are used in some routing / NAT situations.

Playing further with the pfSense Diagnostics >> Ping tool, I discovered that I can ping the VPN link "gateway" at the 10.xx.10.1 address of the link subnet.  However, since each reconnection to PIA changes the "xx" portion, and the gateway monitor IP in the pfSense configuration is static, I needed either a fixed address or a way to detect the subnet assignment and update the dpinger configuration accordingly.

Just as I was thinking it would take some coding to come up with a way to monitor a dynamically changing, non-gateway IP, I fell back on the non-contiguous netmask idea and did more testing.

What I came up with is that you can ping the PIA VPN link gateway regardless of the dynamically assigned link subnet.  And the corresponding ping times were not only much lower than pinging a DNS server beyond the link, but appeared far more stable as well, suggesting that with this as a monitor IP I'm getting actual link ping timings.

I also have more than one VPN link, and because the monitor IP is static-routed to the link it's associated with, I needed a unique fixed monitor IP for each VPN link.  That ping also worked, and so far in testing I've been able to ping through with no problems.  So each link gets a unique monitor IP, and I get real gateway latency timings that are not subject to upstream latency issues.

So for all you PIA VPN users who have needed a proper way to monitor your gateway links: configure your Gateway Monitor IP using the 10.xx.10.1 address as described above, and enjoy properly configured gateway monitoring.

Windows 10 updates are .PSF files.  You might want to update your regex.

Thank you for the note on Windows 10 updates.  But these were not "my" regexes; for the sake of example, I extracted them from the built-ins (which have now been removed from the webGUI).

However, my point was not to focus on Windows updates specifically.  I only used them as an example everyone is familiar with, in the context of overriding default Squid behaviour to force caching of content, in order to demonstrate how to limit the scope of range_offset_limit without causing problems for other traffic.

This concept can be applied whenever you need to override the default behaviour for a specific set of traffic, matched by an ACL, without changing the default for all traffic.

Overall, I see people keep mucking with this setting, along with quick_abort_min -1, which causes Squid to keep downloading content in the background ("zombie" downloads).  This generally manifests as performance problems that build over time, because people change these settings to affect one specific type of traffic without realizing what they do to other traffic that was working fine.

And given the number of complaints about Squid performance problems that are indicative of this very issue, and the number of bad examples found throughout the internet (not just related to pfSense), even more so now that pre-built configs have been removed from the pfSense GUI in favour of users adding their own custom configs, there is a much larger chance of people simply cutting and pasting poorly implemented configurations that lead them right back to the same unexpected behaviour and performance problems.

If anything, I think the pfSense GUI for Squid could be augmented to help users create properly ACL-scoped configurations instead of changing the overall default behaviour, simply by enforcing that any added configuration be wrapped in an ACL, so that users can't unintentionally override the default for all traffic just to address an issue with a limited set of traffic.

Instead of having pre-built configs, the GUI could offer config lists where users create a "Traffic Override" that requires an ACL name, sanity-checks the ACL patterns, and ensures that the configs generated for a specific set of traffic are limited by the ACL.  That way users can generate narrowly scoped traffic overrides without causing themselves more problems, which they then blame on pfSense / Squid because they don't understand how these settings affect other forms of traffic.


After finding a problem with the built-in refresh patterns under the dynamic cache settings, as well as several forum posts here and the linked documentation on Squid tuning that have bad examples of how to implement Windows Update caching, there seems to be a misunderstanding of how and why to set the proper settings to cache Windows Update content without affecting other traffic.  So I'm posting this to clear up how to configure Squid to get the desired behaviour of forcing Windows updates to cache, without causing problems for other traffic.

The first step to caching Windows updates, the obvious one, is to set a refresh_pattern that forces the cache to retain Windows Update files.  I think everyone has this part, and it's in all the docs, so no need to elaborate.

The devil is in the details of the second part: Windows Update uses chunked downloads to quietly retrieve updates in the background.  Since a chunked or ranged download is not a complete object it is not cacheable; only the download of a complete file is.  The result is many, many workstations all downloading updates with nothing getting cached.  So we need Squid to fetch the full file, not a ranged chunk, so that subsequent requests can be satisfied from the cache.

And this is not a problem: Squid3 has a configuration directive that turns a ranged download into a full-file retrieval.  It's range_offset_limit, which controls how Squid handles range-offset download requests; when set to -1 it causes Squid to ignore the range request and retrieve the whole file.  With this directive set, the first workstation to request a file causes Squid to retrieve the whole file, so subsequent requests can be satisfied from the cache.

All good yes?  Just what we want to happen to get those windows updates into our cache, yes?

Now here is where the problems occur.  The range_offset_limit directive accepts an ACL to limit the scope of what the setting applies to.  Using an ACL means you can have different values for different purposes, and it also means you do not need to change the system-wide default behaviour (letting Squid forward ranged downloads) just to force caching of Windows updates.  But, as with all Squid directives that take optional ACLs, a directive without an ACL is a wild-card match for all traffic, and Squid evaluates multiple range_offset_limit directives on a first-match basis.  Thus, if you don't use an ACL to limit what range_offset_limit -1 applies to, that un-ACL'd statement is a wild-card match that applies to all traffic, not just the Windows updates we specifically want to force-download.

Many examples show range_offset_limit set to -1 to force update files to be downloaded and cached, but do not show the use of ACLs to limit the scope of the setting to just the intended traffic without overriding the system default.  That then causes problems with content that should be chunked but that Squid instead downloads as full files.  Not bad with small files, but with very large files Squid will appear unresponsive and may even slow down, as it tries to download multiple large files (that should have been mere chunks) until it can get a copy into its cache.

The solution, which avoids breaking ranged downloads for unintended traffic while making sure full retrieval is used for our Windows Update downloads, is to use an ACL to restrict range_offset_limit -1 to just the items we want to force into the cache.

So here is an example that applies range_offset_limit only to Windows updates, getting the forced caching behaviour we want without affecting other traffic.

Code:
# Note: the domain names and URL prefixes below were dropped from the
# original post; the values shown are commonly used Windows Update
# domains, filled in as representative examples.
acl Windows_Update dstdomain
acl Windows_Update dstdomain
acl Windows_Update dstdomain
acl Windows_Update dstdomain

range_offset_limit -1 Windows_Update

refresh_pattern -i*\.(cab|exe|ms[i|u|f]|asf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims
refresh_pattern -i*\.(cab|exe|ms[i|u|f]|asf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims

The acl "Windows_Update" causes range_offset_limit -1 to apply only to requests matched by the stacked ACL "Windows_Update", which in this case is a list of known Windows Update domains, and to no other traffic, leaving all non-Windows-Update traffic to behave according to the default (0), where Squid simply forwards the range request.  Other forms of ACL could of course be used if you wanted to be more specific, but suffice it to say, using any ACL with range_offset_limit -1 keeps it from applying to traffic it should not affect.  After the files are retrieved, the refresh_patterns control how long they are retained in cache.

And of course this applies to any traffic for which you want to override the default chunked-download behaviour without forcing it for all traffic: just set up an ACL and apply it to your range_offset_limit.
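
As an illustration of scoping an override for some other traffic type, here is a hypothetical fragment (the domain names are placeholders, not real mirror hosts) that forces full-file retrieval and caching only for large Linux ISO downloads:

```
# Hypothetical example: placeholder domains, adjust to your mirrors.
acl Distro_ISOs dstdomain
acl Distro_ISOs dstdomain

# Scoped by the ACL: whole-file fetch only for these domains.
range_offset_limit -1 Distro_ISOs

# Keep fetched ISOs in cache; the pattern applies by URL regex.
refresh_pattern -i \.iso$ 10080 90% 43200 reload-into-ims
```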

Now, on to the problem with the current Squid3 built-in refresh patterns under the dynamic-content cache configuration settings: when any of the built-in patterns is enabled, in addition to the refresh_patterns added to squid.conf, each one adds a "range_offset_limit -1" directive with no ACL to the generated squid.conf.  Further, these statements appear in the generated squid.conf before any other GUI-configured sections, so they are the first match ahead of any user settings and thus override the overall default behaviour.

The resulting behaviour is that enabling any of the built-in refresh patterns changes range_offset_limit from the default of 0 to -1 for all traffic.  So until this is fixed in the pfSense GUI, if you do not want this behaviour applied to all traffic, disable the built-in patterns and use your own patterns that correctly implement ACLs to restrict the scope of the range_offset_limit override, so you don't unexpectedly break other traffic.

See the Squid docs for more details about the range_offset_limit directive.

And the docs here should be corrected to properly implement range_offset_limit with an ACL, instead of just warning that other things might break.

So if Squid is misbehaving with chunked downloads, check your squid.conf for unexpected range_offset_limit entries that may be overriding the default for all traffic.

Packages / Re: NEW Package: freeRADIUS 2.x
« on: November 06, 2015, 12:48:04 am »
Good day to everybody,
As you may know, the latest Freeradius2 package, 1.6.15, which contains a FreeRADIUS 2.2.6 daemon, has trouble with EAP-TLS authentication, above all with the latest Android 6.0 Marshmallow.
Some tech details are available here:
Does anyone know if a workaround exists, perhaps by editing some configuration files in FreeRADIUS, or on Android with some app, to avoid this issue?
Thanks a lot in advance for your time and for any suggestion.

Just got the Marshmallow OTA update myself and smacked into this same problem.  An EAP-TLS configuration that had been working fine for quite a while no longer works on the upgraded Nexus 7.

Symptoms are that the device appears to negotiate authentication, and the FreeRADIUS logs indicate the device was authenticated, but the device never finishes joining the network and just keeps retrying.

From the Google thread, the issue is tied to the use of TLSv1.2; when downgrading to TLSv1.1 or 1.0, the final keying is correct.  But downgrading to weaker encryption standards is not what I'd consider a workaround.

From other forum reading, it sounds like this is going to be a quickly growing problem, as Marshmallow is currently being rolled out OTA to all Nexus devices and is expected to hit OEM devices soon.

So really the only question then is how soon an updated release that contains a fix for this issue can be made available.

Cache/Proxy / Re: Squid3 Transparent Proxy with antivirus
« on: November 02, 2015, 12:38:07 pm »
Finally getting to turning on squid3 antivirus and smacked right into this same problem.

Running on pfSense 2.2.5-DEVELOPMENT (amd64) built on Sun Nov 01, with squid3,

The filename to edit is different; it's now /usr/local/pkg/

But editing to change [::1] to now works, and even though the C-ICAP access log still shows ::1, it still passes the EICAR test.

Much thanks for the workaround.

Cache/Proxy / Re: Netflix being filtered..
« on: October 28, 2015, 10:21:46 pm »

The Netflix player uses IP addresses to pull content, so your ACLs by domain name may not work once the player gets going.

See my reply in a similar thread about Netflix streaming and how to find the CIDR range to bypass in squid:

Cache/Proxy / Re: Netflix iOS app via Squid not working
« on: October 28, 2015, 09:48:24 pm »
I'm not using any Apple devices, but for the last few days Netflix has been having problems with video streaming via Squid.  I managed to find a workaround, so I'm throwing this out in case it helps anyone.

I'm using pfSense 2.2.5 beta & Squid3 in transparent mode, routed over a VPN service.

The problem with Netflix over the last few days has been that videos take a very long time to start if they are not started from the beginning.  Everything seems to indicate that the way Netflix retrieves the video portions is causing Squid to do a full download of the content.  I noted that whenever I start a Netflix video, the traffic on the VPN/WAN goes ballistic, well beyond what the player device is consuming to play the video; all the apparent symptoms of download amplification.  Seeing the original post in this thread about byte-range requests not being forwarded made a lot of sense: the app requests a portion of a file, but Squid decides it can't satisfy the range request and attempts to download the whole file, and each portion the app requests to play the next sequence of video gets translated into another full download.

Clearly at this point Squid is the issue, but setting up a bypass needs something that covers all endpoints without a huge list of IPs, such as a CIDR range.

So, finally, going through my logs to see if anything was getting blocked by the firewall, I noticed the IPs of the Netflix content distribution network, and doing a "whois" on one of them got me the address range of the Netflix video distribution network I'm hitting.  Putting this CIDR range in the proxy bypass destination for the transparent proxy, I am now able to watch Netflix without the startup and buffering problems, though Netflix itself still seems to be having other intermittent connectivity issues.

So here are a couple of suggestions for those looking to find the IP range to bypass for Netflix.

In "System Log -> Settings" enable logging of passed packets.

Then, while trying to access Netflix, use System Log -> Firewall, check the IP addresses being connected to, do a reverse lookup on them, and look for any that resolve to Netflix CDN hostnames.  Take one of those IP addresses and use a "whois" tool or website to look up its subnet information; in that info should be a CIDR range for the address block.  You should then be able to use that range as a CIDR bypass for Squid or whatever service you need to bypass for Netflix streaming.


Update: with some more digging I found that Squid (3.3.13) was modified in 2014 to ignore unknown byte ranges due to a security vulnerability.  The modification was to ignore these headers, which would result in the behaviour described above when an "unknown" byte range is used.

+Changes to squid-3.3.13 (28 Aug 2014):
+   - Fix segmentation fault setting up server SSL connnection
+   - HTTP/1.1: Ignore Range headers with unidentifiable byte-range values


I also recently set up pfSense with PIA and have been wanting to use stronger encryption.

I found a note about changing the port to 1196 to get AES-128-CBC to work (SHA only, not SHA256), which is the most I've been able to get beyond the weak defaults.  I tried other ports to try to get AES-256-CBC, but no luck.

Unfortunately, after much digging I found a few obscure forum posts indicating that to get SHA256, or a certificate larger than 2048 bits, you need to use PIA's patched client.  (Anyone who has more or different info, it would be appreciated.)

This should just be a matter of changing standard client settings and should not need a specially patched client.  So I'm a bit disappointed with PIA, with their defaulting to weak encryption and the need for a patched client to get what should be common high-encryption standards working with standard OpenVPN clients.
