
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - TechyTech

Hi shimano, I don't have a solution, but I think I ran into a very similar problem myself, and wanted to share a few more details about it as I've observed it so far.

I've been downloading a lot of Linux ISOs, so lots of large, long-running downloads, and file downloads that should only take several minutes were taking hours to days from my Linux hosts.  I also have a dual-boot system with Windows.  When I booted into Windows, the problem was so bad that Windows would not even fetch updates, and web pages would start to load and then ultimately time out.

While trying to do more testing to characterize what was going on, I also noticed that, on Linux, when my file downloads were running slow I could load and refresh web pages seemingly OK, but things like ISO file downloads or PDF files displayed in a browser were significantly slower, to the extent that some pages that normally take only a second or two to load were taking minutes.  On my dual-boot box I found that after booting into Windows, everything on the internet was so slow that even web pages would not load or refresh, even though a Linux host would load the same web pages seemingly fine.  Basically, Windows seems to be more impacted by this than Linux.

Like you, restarting squid did not alleviate the problem, but all of this goes away when the squid proxy is disabled and pfSense is used as a pure router (what you're referring to as NAT).

The nature of the traffic behaviour I'm seeing suggests some form of selective throttling is taking place after each reboot of pfSense. I can reproduce a short burst of high-rate traffic, then after a period (which happens quicker than 1-2 minutes with lots of files downloading simultaneously), watch the traffic suddenly fall off to just a few KB/s, while other things like web pages seemed to download normally (from Linux hosts).

I don't have any traffic throttling enabled, and I verified that squid's bandwidth throttling settings were all at default / disabled, yet the only way I can get full throughput is to not proxy through squid.
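If anyone wants to double-check the same thing on their box: with the package's throttling options left at defaults, the generated squid.conf should contain no delay-pool configuration. The path below is the usual one for the squid package on pfSense; adjust if yours differs.
Code: [Select]
# no output (or "delay_pools 0") means no squid-level throttling is configured
grep -i delay_ /usr/local/etc/squid/squid.conf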

As an added note, my configurations were built from scratch using 2.4.x releases, so I think that rules out a version-migration issue.

Anyway, for now, the problem as I'm seeing it is so bad that I had to completely disable squid so I could get some work done.  But I did want to follow up to share that you are not alone in your observations of bandwidth throughput problems with squid proxying.

Tracing through why syslog-ng was not recording log entries from my networks, even though everything, including the firewall logs, shows packets being received, I found that syslog-ng is only binding to the last configured interface.  Looking at the configuration generated by pfSense, it places all the IP addresses to bind to in a single syslog() driver statement.  This results in syslog-ng only binding to the last defined IP (interface) in the syslog() driver declaration.  This can be verified by logging into a command shell and checking the active listening ports with 'netstat -n | grep 5140'.
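For reference, either of these will show which addresses syslog-ng is actually listening on (sockstat being the FreeBSD-native option):
Code: [Select]
# expect one udp4 listener per configured ip() binding
netstat -an | grep 5140
sockstat -4l | grep 5140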

Looking through the syslog-ng 3.13 documentation, it does not indicate that multiple ip() directives can be used inside a syslog() driver definition, and the various configuration examples I could find show using multiple source driver statements in the source definition block.

Modifying the configuration file to break up the "ip(xx.xx.xx.xx)" bindings into multiple syslog() driver statements and then manually starting syslog-ng, it correctly binds to all defined interfaces.

Example pfSense generated config (/usr/local/etc/syslog-ng.conf) that will only bind to the last defined interface:
Code: [Select]
# This file is automatically generated by pfSense
# Do not edit manually !
destination _DEFAULT { file("/var/syslog-ng/default.log"); };
log { source(_DEFAULT); destination(_DEFAULT); };
source _DEFAULT { internal(); syslog(transport(udp) port(5140) ip(xx.xx.xx.1) ip(xx.xx.xx.2) ip(xx.xx.xx.3) ip(xx.xx.xx.4) ip(xx.xx.xx.5)); };

Modified configuration that binds to all defined interfaces:
Code: [Select]
destination _DEFAULT { file("/var/syslog-ng/default.log"); };
log { source(_DEFAULT); destination(_DEFAULT); };
source _DEFAULT { internal();
syslog(transport(udp) port(5140) ip(xx.xx.xx.1));
syslog(transport(udp) port(5140) ip(xx.xx.xx.2));
syslog(transport(udp) port(5140) ip(xx.xx.xx.3));
syslog(transport(udp) port(5140) ip(xx.xx.xx.4));
syslog(transport(udp) port(5140) ip(xx.xx.xx.5)); };


Unrelated to the interface bindings, but I also noticed errors in the system log about syslog-ng failing daemon stop/start calls:

Code: [Select]
/pkg_edit.php: The command '/usr/local/etc/rc.d/ stop' returned exit code '1', the output was ''
Running /usr/local/etc/rc.d/syslog-ng stop from a command shell produces the following output:
Code: [Select]
Cannot 'stop' syslog_ng.  Set syslog_ng_enable to YES in /etc/rc.conf or use 'onestop' instead of 'stop'.
Running /usr/local/etc/rc.d/syslog-ng with onestop or onestart instead, syslog-ng stops and starts without error.
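In other words, these work from the shell while the plain stop/start forms do not:
Code: [Select]
/usr/local/etc/rc.d/syslog-ng onestop
/usr/local/etc/rc.d/syslog-ng onestart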

Squid works really great for a while, then it gets really sluggish, like I have to wait 1-2 minutes for a page to respond! Everything gets back to normal after I reboot my pfSense and works great for a day or two, then again I have really slow speeds, so I have to reboot again.

What could cause this issue?

Your description doesn't really provide a clear indication that Squid is the root cause.  The 1-2 minutes for a page response sounds like you could be hitting a protocol time-out, with the connection succeeding on retry.

Rebooting resets the whole platform, so it doesn't help isolate whether squid is the sole cause.  I would suggest that the next time it occurs, try restarting only the squid service and see if that clears up the issue or whether it still persists until a full reboot of pfSense.

During a package update I get a syntax error in /etc/rc.bootup, then the system fails to configure and falls to a login prompt.

Error:  Parse error: syntax error, unexpected 'conf_path' (T_STRING) in /etc/rc.bootup on line 88

Screenshot of console with error after upgrade attached.

2.4 Development Snapshots / Re: router dead.. mountroot>
« on: February 16, 2018, 04:57:04 pm »
Just finished migrating from bare metal to VirtualBox and ran into this myself; I thought it was something amiss in the migration.

After finishing the migration, everything was working; then I updated to the latest pfSense build. The update appears to go fine, then after reboot the ZFS root will not mount.

Ended up backing up the configuration, reloading with the latest build image installed as UFS, and restoring the configuration to get going again.

Update:  Dug up some info.

After the first update, a new error appears on the console:

Warning: file_get_contents(): Filename cannot be empty in /etc/inc/ on line 1120

Then once rebooted, the mount failure happens.

On the rebuild using UFS, after each reboot I'm getting crash-dump errors similar to the first error after the upgrade.

PHP Errors:
[16-Feb-2018 <scrub> America/Los_Angeles] PHP Warning:  file_get_contents(): Filename cannot be empty in /etc/inc/ on line 1120

The "America/Los_Angels" part is the timezone.  I've tried changing the timezone, but still get the same error, just different time zone.

Doh, looked right past what was staring me in the face.   :o

But it seems that this oversight is what flushed out the configuration-saving problem.

Now that I've liberally sprinkled semicolons everywhere, I have been able to confirm that there is still some intermittent strangeness occurring when saving OpenVPN client configurations. Specifically, the active running configuration gets updated, but the configuration database is not updated with the changes. This is not isolated to the custom options section, as it occurred with a couple of other GUI checkbox settings as well.

So far, here's what I've been able to confirm:

Make a change to an OpenVPN client configuration and save it.
Wait for the client link and any restarted services to recover and for the link to stabilize.
Re-open the client configuration edit pane.

At this point the client configuration will show the old configuration, prior to being edited and saved.

Then, to verify what state the current running OpenVPN configuration is in, SSH into the firewall and open the active OpenVPN configuration file for the specific client (i.e. /var/etc/openvpn/client1.conf).

The configuration file will have the new saved changes active and running.

In order to eliminate any GUI or browser cache issue at this workstation, go to a completely different computer, open a fresh browser window, log in to pfSense, and open the configuration editor for the specific OpenVPN client.

On this other computer the GUI also shows the previous, un-edited version of the configuration.

Go back and re-list the active OpenVPN configuration file to be sure nothing has changed; it still shows the new configuration settings.

Re-edit and re-save the configuration again, repeat the above, and now the new configuration settings are saved and appear in both the edit GUI and the active configuration.

This behavior also explains why configurations are breaking after a reboot: in the case where missing semicolons resulted in a broken configuration, editing to fix that results in the active config file being updated and the link working, but after a reboot the config file gets regenerated from the old broken configuration, because the change never made it into the configuration database.

I have not noticed a pattern as to when things don't get saved correctly, other than by using the procedure described above to note when a change doesn't show up in the GUI. But I have isolated that the configuration change is being acted on, confirming that changes are being generated into the active running configuration but not saved to the stored configuration database.
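If anyone else wants to reproduce this, here is a quick way to compare the two from a shell. I'm assuming the stored custom options live under a <custom_options> tag in the configuration database at /conf/config.xml; the client config path is the one noted above:
Code: [Select]
# active running config - picks up the edit immediately
cat /var/etc/openvpn/client1.conf
# stored configuration database - this is what intermittently keeps the old values
grep -A 5 '<custom_options>' /conf/config.xml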

Try separating directives with semicolons.

Code: [Select]
pull-filter ignore "redirect-gateway";
route-delay 3

Hadn't thought of that yet.  Will definitely give it a try.  Thank you.

I will note, though, that these are not the only config directives in my custom options; the full set varies from 5 to 20 lines, and it's only the last couple of lines of any given custom options block that seem to get merged.  On that thought, I will have to pay attention to the number of lines in a config and see if that correlates with the problem as well.

I've been seeing this intermittently, but yesterday and today it has occurred so often that I'm now sure it's not my imagination. It is now occurring with enough frequency that it's getting difficult to keep my VPN links up and running, given the time I have to spend re-checking the VPN configurations every time something stops working.  So I'm throwing this out to see if anyone has ideas as to what is going on here.

The problem is that, with multiple OpenVPN client links, the custom option configurations are intermittently not saving, or worse, getting mangled.

Either the changes to the configuration are not saved, even though the GUI suggests they have been, or two of the configuration lines in the custom options section are merged into one line.  The former results in configuration changes needing to be re-applied once it's discovered that the intended change is no longer present; the latter actually causes the VPN link to fail to start due to configuration syntax problems.

An example of the config mangling involves two config lines:
Code: [Select]
pull-filter ignore "redirect-gateway"
route-delay 3

When the problem occurs, they become:
Code: [Select]
pull-filter ignore "redirect-gateway"route-delay 3
There is never any space between the end of the first directive and the start of the second, and so far it does not seem to be specific to a given combination of directives.

So far, after frequent recurrences of this problem, I've noted the following:
  • A formerly working configuration is no longer working as configured because two config directives were merged into a single line.
  • A change to a configuration seems to not get applied; the configuration has to be re-edited and re-saved.
  • The problem can occur when saving a configuration change and re-opening the configuration to verify it, or after a reboot of what appears to be a fully functioning configuration: all VPN links up and running before the reboot, failing after it.
  • The problems seem to occur only in the 2nd and 4th (out of 4) VPN clients. I'm not sure about the 3rd, but it has definitely not occurred with the first client's configuration; it's always in the 2nd-and-later clients.
  • It does not seem to be related to specific configuration directives; I have seen it occur with various configuration lines.
  • The line-merge issue seems to involve the last two lines of the custom options, though I'm not entirely positive about this.

I'm assuming that the line merging may have something to do with whitespace scrubbing, but that does not explain why a working configuration is suddenly broken after just a reboot.
As for changes not saving, or reverting to a previous configuration: this isn't just a couple of lines merging. The entire block of custom options can be re-arranged, and the next time that client is edited, it's back to the previous incarnation, as if the configuration update was never saved.

In either case it has led to a lot of fussing around, re-checking and re-saving configurations to try to get links up and running. And just when all appears to be working, a reboot leaves the links in various states of broken until the configurations are re-edited and re-saved, again.

Over the past few weeks of build updates I've noticed that Avahi has been having increasing problems starting automatically after reboots (more so than the usual temperamental behavior Avahi tends to exhibit).  What was once intermittent, or just long delayed after all interface transitions completed, has now become fairly consistent.

It either takes a long time to start, with several restart attempts noted by errors in the system log, or it doesn't start at all.  When it fails to start automatically, a manual start works fine.

The only entries I see in the system log when Avahi won't start are:

Code: [Select]
Found user 'avahi' (UID 558) and group 'avahi' (GID 558).
Successfully dropped root privileges.
avahi-daemon 0.6.31 starting up.
WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
dbus_bus_get_private(): Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
WARNING: Failed to contact D-Bus daemon.
avahi-daemon 0.6.31 exiting.

When started manually, there is the NSS warning in the log, but it otherwise starts up with no problems.

The Avahi configuration is at defaults, and I've tried both re-installing the package and deleting and re-installing it, with no change in the startup behaviour.
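Given that log, it looks like D-Bus isn't up yet when Avahi tries to start. A quick manual check, assuming the stock rc script from the FreeBSD dbus package is what's installed here:
Code: [Select]
# is the system bus socket there at all?
ls -l /var/run/dbus/system_bus_socket
# check and, if needed, start dbus by hand, then try starting avahi again
/usr/local/etc/rc.d/dbus onestatus
/usr/local/etc/rc.d/dbus onestart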

2.4 Development Snapshots / Squid 0.4.43 update breaks squid service.
« on: January 30, 2018, 01:24:20 pm »
Just did an update to the recent build 2.4.3.a.20180129.2021.

Noticed after rebooting that the squid and squidGuard services no longer show up in the Services menu, and of course Squid and related services are not running.

Checked the VGA console and saw the tail of the boot-up after today's update, showing that the Squid package had been updated.  Rebooted again to see if it would clear things up, but squid did not start and still did not show in any Services menu or status listing.

Package Manager shows the squid package installed.

Had to re-install the squid package and reboot to get squid back up and running.


you do not need to configure a DNS outbound interface under System >> General (unless you have disabled pulling routes in the VPN client.)
It's frequently advised in the forums here to disable pulling routes in the OpenVPN client config.  Lots of people do that, which helps with policy-based routing.

Disabling route pulling and using policy routing are not covered in the "Tutorial" that is the subject of this thread, and shouldn't be for people just trying to get their VPN service up and running, because policy routing cannot be applied to traffic generated by the firewall itself, such as Unbound and the Squid proxy.  This is why so many people have problems with DNS and proxied traffic leaks when using Squid proxy, even though traffic from their LAN passes (via policy rule) out the VPN.

Therefore, whether configured from the VPN's pushed configuration or manually, you need a default route for at least one VPN link, and you need to configure firewall services to use route-table routing (via use of localhost as an outbound interface). Otherwise, traffic that can't go through policy routing will either go out the outbound interface(s) directly (as both Unbound and Squid will do), or be routed via whatever state the routing table is in, including the default route (if the VPN is not configured as the default route), such as the WAN, again resulting in traffic leaks.

The main reason I see for disabling route pulling and managing routes manually is edge cases where the default route gets misdirected to an undesired link, or is not updated during VPN link transitions and falls back to the default WAN, resulting again in traffic leaks.  This gets further into policy routing and the use of gateway groups in multi-WAN/multi-VPN setups, but it does not eliminate the need for a properly configured default route.  But again, using multiple VPN links is outside the scope of a simple tutorial for getting a PIA VPN link up and running with pfSense without leaking traffic out the WAN.

So in a single-link scenario it's better to stick with pulling route configurations, and to understand that default routing is necessary for firewall-initiated services such as DNS and proxy. That keeps the initial configuration simple for people just trying to get their VPN service configured without leaking data.

For anyone looking to monitor the dynamically assigned public IP address of any WAN or OpenVPN link, but who does not want or need to create a public DDNS domain, this is a quick way to dummy up a Dynamic DNS client custom configuration so that the public IP can be retrieved and displayed in the GUI using the DDNS client widget.

pfSense DDNS uses the service configured under Services >> Dynamic DNS >> Check IP Services to retrieve the public IP address for a DDNS client, then uses the configured Update URL to update an external service. The latter is not of concern when we just want to see what the external address is and not actually update an external dynamic domain.

So for this configuration we are only interested in the public IP that is retrieved and don't care about updating an external DNS service.  The GUI requires entering a URL, but it allows not verifying the result of the update URL, so we just need a dummy entry that will fail quickly so we can get to displaying the retrieved / cached public IP.

For each interface whose external public IP you wish to monitor, add and configure a DDNS client as follows:

Service Type: Custom

Interface to monitor & Interface to send update: <set both to interface to retrieve public IP for>

Verbose logging: check (as needed for debugging)

HTTP API DNS Options: Check Force IPv4 DNS Resolution

Username / Password: blank

Update URL: http://localhost

Result Match: blank

Description: External IP

That's it.

For each configured DDNS client interface, pfSense will retrieve and cache the public IP address, and since we don't care about the update URL or its result, even when the update fails you still get an IP lookup that is displayed in the DDNS status page and the GUI widget.

Now just add the DDNS client widget to the dashboard and you have an up-to-date external public (NAT'd) IP address for each interface.
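To sanity-check what the widget should be showing, the same lookup can be done by hand from a shell. This assumes the default Check IP service (checkip.dyndns.org); substitute whatever you have configured under Check IP Services:
Code: [Select]
# returns a small HTML page containing "Current IP Address: x.x.x.x"
curl -4 -s http://checkip.dyndns.org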



cache_dir aufs /var/squid/cache 100000 16 256

From your initially posted configuration to this one, you changed your storage type from ufs to the POSIX-threaded aufs.  This would result in a performance change, since ufs blocks the Squid process on disk I/O while aufs hands it off to threads.

See the description for aufs here:
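For illustration, here are the two storage lines side by side, assuming the rest of the line was unchanged between your two configs:
Code: [Select]
# ufs: Squid performs disk I/O in its main process (blocking)
cache_dir ufs /var/squid/cache 100000 16 256
# aufs: disk I/O is handed off to POSIX threads (asynchronous)
cache_dir aufs /var/squid/cache 100000 16 256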

If I'm running 'Services > DNS Resolver' on pfSense, it looks like (most?) of my DNS queries are still going out the WAN. Is this because the source IP is 'LAN net' on my VPN policy (ports 80, 443, 53) and the Resolver is using my WAN IP for the DNS queries (at least that's what it looks like from tcpdump)?
To fix this, go to Services / DNS Resolver and under "Outgoing Network Interfaces" select only your PIA VPN interface(s), and make sure "All" and "WAN" aren't selected.

This fixes the DNS leak over your regular WAN, but introduces the problem that if your VPN ever goes down, pfSense will not be able to resolve DNS to reconnect the VPN.  To fix this, go to System / General Setup and specify a 3rd-party DNS resolver of your choosing (Google, OpenDNS, Level 3, Verisign, etc.).  This setting only affects outbound DNS queries by localhost, not by anything on your LAN, which should go out the PIA VPN only via Unbound.

This is not entirely true.  You do not need to add a 3rd-party DNS server for WAN use.

When a DNS server under System >> General is configured with an outgoing interface, pfSense creates a static route for that DNS entry to go out that interface, but only when that interface is up.  Otherwise the routing is subject to the routing table's current default route, so DNS queries will go out the default route, usually the WAN.  (This also applies to policy routing rules, which are only added after the target gateway interface is online.)  So you have to plan routing and rules for two states: 1) when the VPN links are down and the WAN is the default route, and 2) when the VPN links are up.
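If you want to see this for yourself, the host route for a configured DNS server shows up in the routing table. The address below is just a placeholder (Google's public resolver); substitute whichever DNS server you configured:
Code: [Select]
# the host route only appears while the chosen interface/gateway is up
netstat -rn | grep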

What's missing in the DNS routing is configuring Unbound to use localhost for outbound queries, so that outbound DNS queries go through default routing.  If Unbound is configured to use only the VPN links as outgoing interfaces, then of course those interfaces will not be usable when the VPN is down, and Unbound will have no other interface for outbound queries.

Additionally, to really prevent unexpected DNS leaks, and extra query traffic, especially when using multi-WAN / multi-VPN, configure Unbound to use only localhost for outbound queries.

The reason is that Unbound will try to query all configured DNS servers on each configured outgoing interface directly, bypassing the route table, and then manage the responses accordingly. This results in leaks if the WAN is configured as an outgoing interface, and in lots of extra activity if you have multiple links.  Configuring Unbound to use only localhost for outbound not only limits the number of outbound query attempts, but also subjects all outbound queries to NAT and routing.  Thus when the VPN links are down, DNS queries are routed out the default route, which should be the WAN; but when the VPN links are up and are the default route, DNS queries go out the VPN default routes (either by default, or as configured for "Outgoing Network Interface" once the link is up).  When Unbound is configured to use only localhost as outbound, you do not need to configure a DNS outbound interface under System >> General (unless you have disabled pulling routes in the VPN client), as outbound queries from localhost will go out the VPN link once it becomes the default route.

And as an added note: since PIA does not support IPv6, Unbound should be configured to disable IPv6 queries, otherwise there is a bit of a performance hit on all DNS queries, as it will try to resolve both A and AAAA records for every query.  Add the Unbound configuration directive "server: do-ip6: no" to turn off AAAA record queries.
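As a rough sketch of the equivalent Unbound custom options (pfSense normally sets the outgoing interface from the GUI's "Outgoing Network Interfaces" selection, so treat these literal lines as illustrative rather than the exact generated config):
Code: [Select]
server:
  # send all upstream queries from localhost so they follow the routing table and NAT
  outgoing-interface:
  # PIA does not support IPv6, so don't use IPv6 for upstream queries
  do-ip6: no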

2.4 Development Snapshots / Re: Dont update to latest snap
« on: January 25, 2018, 12:34:55 pm »
Bleh.  Had to go to a shell and edit the /etc/inc/ file manually.

Line 2514:

Code: [Select]
!isset($config['interfaces'][$iface]]['enable'])) {
Should be:

Code: [Select]
!isset($config['interfaces'][$iface]['enable'])) {
Save the file and reboot.  All is back up.
