
Topics - TechyTech

Tracing through why syslog-ng was not recording log entries from my networks, even though everything, including the firewall logs, showed packets being received, I found that syslog-ng is only binding to the last configured interface.  Looking at the configuration generated by pfSense, it places all the IP addresses to bind to in a single syslog() driver statement.  This results in syslog-ng only binding to the last IP (interface) defined in the syslog() driver declaration.  This can be verified by logging into a command shell and checking the active listening ports with 'netstat -n | grep 5140'.

Looking through the syslog-ng 3.13 documentation, nothing indicates that multiple ip() directives can be used inside a single syslog() driver definition, and the various configuration examples I could find all use multiple source driver statements in the source definition block.

Modifying the configuration file to break the "ip(xx.xx.xx.xx)" bindings up into multiple syslog() driver statements and then manually starting syslog-ng, it correctly binds to all defined interfaces.

Example pfSense generated config (/usr/local/etc/syslog-ng.conf) that will only bind to the last defined interface:
Code: [Select]
# This file is automatically generated by pfSense
# Do not edit manually !
destination _DEFAULT { file("/var/syslog-ng/default.log"); };
log { source(_DEFAULT); destination(_DEFAULT); };
source _DEFAULT { internal(); syslog(transport(udp) port(5140) ip(xx.xx.xx.1) ip(xx.xx.xx.2) ip(xx.xx.xx.3) ip(xx.xx.xx.4) ip(xx.xx.xx.5)); };

Modified configuration that binds to all defined interfaces:
Code: [Select]
destination _DEFAULT { file("/var/syslog-ng/default.log"); };
log { source(_DEFAULT); destination(_DEFAULT); };
source _DEFAULT { internal(); syslog(transport(udp) port(5140) ip(xx.xx.xx.1));
syslog(transport(udp) port(5140) ip(xx.xx.xx.2)); syslog(transport(udp) port(5140) ip(xx.xx.xx.3)); syslog(transport(udp) port(5140) ip(xx.xx.xx.4)); syslog(transport(udp) port(5140) ip(xx.xx.xx.5)); };


Unrelated to the interface bindings, I also noticed errors in the system log about syslog-ng failing daemon stop/start calls:

Code: [Select]
/pkg_edit.php: The command '/usr/local/etc/rc.d/ stop' returned exit code '1', the output was ''
Running /usr/local/etc/rc.d/syslog-ng stop from a command shell produces the following output:
Code: [Select]
Cannot 'stop' syslog_ng.  Set syslog_ng_enable to YES in /etc/rc.conf or use 'onestop' instead of 'stop'.
Running /usr/local/etc/rc.d/syslog-ng onestop or onestart, syslog-ng stops and starts without error.
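The onestop/onestart behaviour is standard FreeBSD rc semantics: a plain 'stop'/'start' is refused unless the service is enabled in rc.conf.  One possible workaround (my assumption, not a pfSense-sanctioned fix, since pfSense normally manages its services itself) is to set the enable knob:

```
# /etc/rc.conf.local (FreeBSD rc.conf syntax) -- hypothetical workaround:
# with this set, /usr/local/etc/rc.d/syslog-ng accepts plain 'stop'/'start'
# instead of requiring the one* variants.
syslog_ng_enable="YES"
```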

During a package update I get a syntax error in /etc/rc.bootup, and the system then fails to configure and falls through to the login prompt.

Error:  Parse error: syntax error, unexpected 'conf_path' (T_STRING) in /etc/rc.bootup on line 88

A screenshot of the console showing the error after the upgrade is attached.

I've been seeing this intermittently, but yesterday and today it has occurred so often that I'm now sure it's not my imagination.  It's happening frequently enough that it's getting difficult to keep my VPN links up and running, given the time I have to spend re-checking the VPN configurations every time something stops working.  So I'm throwing this out to see if anyone has ideas as to what is going on here.

The problem is that with multiple OpenVPN client links, the custom option configurations are intermittently not saving, or worse, getting mangled.

Either the changes to the configuration are not saved, even though the GUI suggests they were, or two of the configuration lines in the custom options section are merged into one line.  The former means changes have to be re-applied once it's discovered that the intended change is no longer present in the configuration; the latter actually causes the VPN link to fail to start due to configuration syntax errors.

An example of the config mangling starts with two config lines:
Code: [Select]
pull-filter ignore "redirect-gateway"
route-delay 3

When the problem occurs, they become:
Code: [Select]
pull-filter ignore "redirect-gateway"route-delay 3
There is never any space between the end of the first directive and the start of the second, and so far it does not seem to be specific to any given combination of directives.
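A quick way to spot this fusion in a generated client config is to look for a quoted argument butting directly against a following directive.  A rough sketch (the directive names in the pattern are just ones I've seen merge, so treat it as illustrative; it's demonstrated on an inline sample rather than a live config file):

```shell
# Flag a line where a quote is immediately followed by another directive
# keyword with no whitespace -- the merge symptom shown above.
line='pull-filter ignore "redirect-gateway"route-delay 3'
if printf '%s\n' "$line" | grep -qE '"(route|keepalive|pull-filter)'; then
  echo "merged line detected"
fi
```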

So far, after frequent recurrences of this problem, I've noted the following:
  • A formerly working configuration is no longer working as configured because two config directives were merged into a single line.
  • A change to a configuration seems to not get applied; the configuration has to be re-edited and re-saved.
  • The problem can occur when saving a configuration change and re-opening the configuration to verify it,
  • or after a reboot of what appears to be a fully functioning configuration: all VPN links up and running before the reboot and failing after it.
  • The problems seem to only occur in the 2nd and 4th (out of 4) VPN clients.  Not sure about the 3rd, but it has definitely not occurred with the first client's configuration; it's always in the 2+ clients.
  • It does not seem to be related to the specific configuration directives; I've seen it occur with various configuration lines.
  • The line merging seems to occur with the last two lines of the custom options, though I'm not entirely positive about this.

I'm assuming the line merging may have something to do with whitespace scrubbing, but that does not explain why a working configuration is suddenly broken after just a reboot.
As for the not-saving or reverting to a previous configuration: this isn't just a couple of lines merging.  The entire block of custom options can be re-arranged, and the next time that client is edited it's back to the previous incarnation, as if the configuration update was never saved.

In either case it has led to a lot of fussing around, re-checking and re-saving configurations to try to get links up and running.  And just when all appears to be working, a reboot leaves the links in various states of broken until the configurations are re-edited and re-saved, again.

Over the past few weeks of build updates I've noticed that Avahi has been having increasing problems starting automatically after reboots (more so than the usual temperamental behavior Avahi tends to exhibit).  What was once intermittent, or just long delayed until all interface transitions complete, has now become fairly consistent.

It either takes a long time to start, with several restart attempts noted by errors in the system log, or it does not start at all.  When it fails to start automatically, a manual start works fine.

The only entries I see in the system log when Avahi won't start are:

Code: [Select]
Found user 'avahi' (UID 558) and group 'avahi' (GID 558).
Successfully dropped root privileges.
avahi-daemon 0.6.31 starting up.
WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
dbus_bus_get_private(): Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
WARNING: Failed to contact D-Bus daemon.
avahi-daemon 0.6.31 exiting.

When started manually, the NSS warning still appears in the log, but Avahi otherwise starts up with no problems.

The Avahi configuration is at defaults, and I've tried both re-installing the package and deleting then installing the package, with no change in the startup behaviour.
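The log above points at the system D-Bus socket not existing yet when Avahi starts, which smells like a startup-ordering race.  A hypothetical wait-for-socket guard (the helper name and retry count are mine, and it's demonstrated against a deliberately missing path so it's self-contained):

```shell
# Poll for a unix socket before starting a daemon that depends on it;
# succeeds once the socket exists, fails after the retries run out.
wait_for_socket() {
  sock=$1
  tries=$2
  while [ "$tries" -gt 0 ]; do
    [ -S "$sock" ] && return 0
    tries=$((tries - 1))
    # sleep 1   # a real guard would pause between checks
  done
  return 1
}

# /var/run/dbus/system_bus_socket is the path from the Avahi log; a
# nonexistent path is used here for demonstration.
if wait_for_socket /nonexistent/system_bus_socket 3; then
  echo "dbus ready, safe to start avahi-daemon"
else
  echo "dbus not ready"
fi
```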

2.4 Development Snapshots / Squid 0.4.43 update breaks squid service.
« on: January 30, 2018, 01:24:20 pm »
Just did an update to the recent build 2.4.3.a.20180129.2021.

Noticed after rebooting that squid and squidguard services no longer show up in the Services menu and of course Squid and related services are not running.

Checked the VGA console and saw in the tail of the boot-up after today's update that the Squid package had been updated.  Rebooted again to see if it would clear things up, but squid did not start and still did not show in any Services menu or status listing.

Package Manager shows the squid package installed.

Had to re-install squid package and reboot to get squid back up and running.

For anyone looking to monitor the dynamically assigned public IP address of any WAN or OpenVPN link, but who does not want or need to create a public DDNS domain, this is a quick way to dummy up the Dynamic DNS client custom configuration to retrieve the public IP so it can be displayed in the GUI using the DDNS client widget.

pfSense DDNS uses the service configured under Services >> Dynamic DNS >> Check IP Services to retrieve the public IP address of a DDNS client, then uses the configured Update URL to update an external service.  The latter is of no concern when we just want to see what the external address is and not update an external dynamic domain.

So for this configuration we are only interested in the public IP that is retrieved and don't care about updating an external DNS service.  The GUI requires entering an Update URL, but it allows not verifying the result of the update, so we just need a dummy entry that will fail quickly, letting us get on with displaying the retrieved / cached public IP.

For each interface you wish to monitor the external Public IP, add & configure a DDNS client as follows:

Service Type: Custom

Interface to monitor & Interface to send update: <set both to interface to retrieve public IP for>

Verbose logging: check (as needed for debugging)

HTTP API DNS Options: Check Force IPv4 DNS Resolution

Username / Password: blank

Update URL: http://localhost

Result Match: blank

Description: External IP

That's it.

For each configured DDNS client interface, pfSense will retrieve and cache the public IP address.  Since we don't care about the update URL or its result, even when it fails you still get an IP lookup that is displayed in the DDNS status page and GUI widget.

Now just add the DDNS client widget to the GUI and have an up-to-date external public NAT'd IP address for each interface.

For anyone who has configured their PIA VPN links and found that the gateway monitor (dpinger) is unable to ping the VPN link gateway, or who is using DNS IPs to monitor PIA VPN links: I figured out a way to ping a monitor IP on the actual link that is neither the gateway nor a DNS server, and after a few weeks of testing it seems to be stable and working without issue.

I've been scouring forums about how to configure the monitor IP of a PIA VPN connection and have so far found only the kludge workaround of using an external DNS server to monitor the gateway link.  This poses some problems: 1) pfSense / dpinger configures a static route to the defined monitor IP (in this case a DNS server), which immediately limits that DNS server IP to only using the link it's defined as the gateway monitor for; 2) pinging beyond the gateway itself is subject to "Internet weather", which results in sometimes erratic ping responses that reflect routing issues beyond the link, not the actual state of the link itself; and 3) if you have multiple VPN links you need to configure multiple DNS IPs, each of which then gets static-routed to a specific link.  Not a good working solution to me.

I noticed that all PIA links I've tested always have an IP address range of 10.xx.10.yy.  And digging into the configurations that PIA pushes to the client, I noticed that it's pushing a NET30, not SUBNET configuration.

More digging brought me to the pfSense docs on using an alternate monitor address for NET30 links.  Though this did not directly address the PIA problem, it got me playing around some more while remembering that non-contiguous (non-CIDR) netmasks are entirely legal, after all a netmask is just a bit mask, and such masks are used in some routing / NAT'ing situations.

Playing further with the pfSense Diagnostics >> Ping tool, I discovered that I can ping the VPN link "gateway" using the 10.xx.10.1 address of the link subnet.  However, since each re-connection to PIA changes the "xx" portion, and the pfSense configuration for the gateway monitor IP is static, I needed either a fixed address or a way to detect the subnet assignment and update the dpinger configuration accordingly.

Just as I was thinking it would take some coding to come up with a way to monitor a dynamically changing, non-gateway IP, I fell back on the non-contiguous netmask idea and did more testing.

What I came up with is that you can ping the PIA VPN link gateway using regardless of the dynamically assigned link subnet.  And the corresponding ping times were not only much lower than pinging a DNS server beyond the link, but appeared to be far more stable as well, suggesting that using this as a monitor IP gives me actual link ping timings.

I also have more than one VPN link, and because the monitor IP is static-routed to the link it's associated with, I needed a unique fixed monitor IP for each VPN link.  The neighboring addresses also worked: so far in testing I've been able to ping through with no problems.  So each link gets a unique monitor IP (,, etc.), and I get real gateway latency timings that are not subject to upstream latency issues.
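So the scheme boils down to one fixed 10.0.10.x monitor address per OpenVPN client.  A trivial sketch of the assignment (the ovpnc1..ovpnc4 instance names are my assumption based on pfSense's usual naming, purely illustrative):

```shell
# Map each OpenVPN client instance to a unique monitor IP in the
# 10.0.10.x range described above.
for n in 1 2 3 4; do
  echo "ovpnc$n -> monitor IP 10.0.10.$n"
done
```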

So for all you PIA VPN users who have needed a proper way to monitor your gateway links: configure your gateway monitor IP using a unique 10.0.10.x address as described above, and enjoy properly configured gateway monitoring.


After finding a problem with the built-in refresh patterns under the dynamic cache settings, as well as several forum posts here and linked documentation on squid tuning that give bad examples of how to implement Windows Update caching, there seems to be a misunderstanding of how and why to set the proper settings to cache Windows Update content without affecting other traffic.  So I'm posting this to clear up how to configure squid to get the desired behaviour of forcing Windows Updates to cache, without causing problems for other traffic.

The first step to caching Windows Updates, the obvious one, is to set a refresh_pattern to force the cache to retain Windows Update files.  This part I think everyone has, and it's in all the docs, so no need to elaborate.

The devil in the details is the second part: Windows Update uses chunked downloads to quietly retrieve updates in the background.  Since chunked or ranged downloads are not a complete object, they are not cacheable; only the download of a complete file is cacheable.  The result is many workstations all downloading updates and nothing getting cached.  So we need squid to fetch a full file, not a ranged chunk, so that subsequent requests can be satisfied from the cache.

And this is not a problem: squid3 has a configuration directive that changes a ranged download into a full-file retrieval.  It's range_offset_limit, which controls how squid handles range offset download requests; when set to -1, it causes squid to ignore the range request and retrieve the whole file.  With this directive set, the first Windows workstation to request a file causes squid to retrieve the whole file, so that subsequent requests can be satisfied from the cache.

All good yes?  Just what we want to happen to get those windows updates into our cache, yes?

Now here is where the problems occur.  The range_offset_limit directive accepts an ACL to limit the scope of what the setting applies to.  Using an ACL means you can have different values for different purposes, but it also means you do not need to change the overall system default behaviour, which lets squid pass ranged downloads through, just to force caching of Windows Updates.  As with all squid directives that take optional ACLs, the lack of an ACL is a wild-card match for all traffic, and squid evaluates multiple range_offset_limit directives on a first-match basis.  Thus if you don't use an ACL to limit what range_offset_limit -1 applies to, that first ACL-less statement is a wild-card match applying to all traffic, not just the Windows Updates we specifically want to force-download.

Many examples show range_offset_limit set to -1 to force the update files to be downloaded and cached, but do not show the use of ACLs to limit the scope of the setting to just the intended traffic without overriding the system default.  That then causes problems with content that should be chunked but which squid instead downloads as full files.  Not bad with small files, but with very large files squid will appear unresponsive and may even slow down, as it tries to download multiple large files that should have been chunks of a file until it can get a copy into its cache.

The solution that doesn't break ranged downloads for unintended traffic, but makes sure full retrieval is used for our Windows Update downloads, is to use an ACL to restrict the range_offset_limit -1 to just the items we want to force into the cache.

So here is an example of setting range_offset_limit to apply only to Windows Updates, to get the forced caching behaviour we want without affecting other traffic.

Code: [Select]
acl Windows_Update dstdomain
acl Windows_Update dstdomain
acl Windows_Update dstdomain
acl Windows_Update dstdomain

range_offset_limit -1 Windows_Update

refresh_pattern -i*\.(cab|exe|ms[i|u|f]|asf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims
refresh_pattern -i*\.(cab|exe|ms[i|u|f]|asf|wm[v|a]|dat|zip) 4320 80% 43200 reload-into-ims

The ACL "Windows_Update" causes range_offset_limit -1 to apply only to requests matched by the stacked "Windows_Update" ACL entries, which in this case are a list of known domains used for Windows Updates, and to no other traffic, leaving all non-Windows-Update traffic to behave according to the default (0), where squid forwards the range request unchanged.  Other forms of ACLs could of course be used if you wanted to be more specific, but suffice to say, using any ACL with range_offset_limit -1 keeps it from applying to traffic it should not be used on.  After the files are retrieved, the refresh_patterns control how long they are retained in cache.

And of course this applies to any traffic for which you want to override the default chunked-download behaviour but don't want to force it for all traffic: just set up an ACL and apply it to your range_offset_limit.
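As a generic sketch of that last point (the ACL name and domain here are made up for illustration):

```
# Force full-object retrieval only for one class of traffic; everything
# else keeps the squid default (range_offset_limit 0, forward the range).
acl Big_Downloads dstdomain
range_offset_limit -1 Big_Downloads
```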

Now, on to the problem with the current squid3 built-in refresh patterns under the dynamic content cache configuration settings: when any of the built-in patterns are enabled, in addition to the refresh_patterns added to squid.conf, each one adds a "range_offset_limit -1" directive, with no ACL, to the generated squid.conf.  Further, these statements appear in the generated squid.conf prior to any other GUI-configured sections, so they are the first match, ahead of any user settings, and thus override the overall default behaviour.

The resulting behaviour is that enabling any of the built-in refresh patterns sets range_offset_limit from the default of 0 to -1 for all traffic.  So until this is fixed in the pfSense GUI, if you do not want this behaviour applied to all traffic, you should disable the built-in patterns and use your own patterns that correctly use ACLs to restrict the scope of the range_offset_limit override, so as not to unexpectedly break other traffic.

See the Squid docs for more details about the range_offset_limit directive.

And the docs here should be corrected to properly implement range_offset_limit with an ACL, instead of just warning that other things might break.

So if you're having problems with squid misbehaving with chunked downloads, check your squid.conf for unexpected range_offset_limit entries that may be overriding the default for all traffic.
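A quick way to audit the generated config for exactly that, an ACL-less range_offset_limit, is to flag lines where the directive has no third field.  A sketch, shown against an inline sample; on a real box point the awk at your generated squid.conf (the /usr/local/etc/squid/squid.conf path is my assumption):

```shell
# Flag range_offset_limit lines that carry no ACL (two fields only),
# i.e. overrides that apply to all traffic.
cat <<'EOF' > /tmp/sample_squid.conf
range_offset_limit -1
range_offset_limit -1 Windows_Update
EOF
awk '$1 == "range_offset_limit" && NF == 2 { print NR ": unscoped -> " $0 }' /tmp/sample_squid.conf
```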
