
Author Topic: bandwidthd kills my throughput  (Read 240 times)


Offline jlwright77

  • Newbie
  • *
  • Posts: 3
  • Karma: +0/-0
    • View Profile
bandwidthd kills my throughput
« on: October 22, 2017, 08:50:26 pm »
I like the stats from bandwidthd but when I enable it on my Netgate hardware, it kills my throughput.  Anyone have any experience or thoughts on a resolution for this?

I have a gigabit fiber connection that usually tests out right about 940/940.  I NEVER get those speeds with bandwidthd enabled, often only half; sometimes 2/3-3/4, but never the full speed.  CPU usage doesn't appear significantly different during a speedtest with or without bandwidthd.  At the beginning of the test, or shortly in, it will surge to maybe the 800s and then taper off, often ending in the 400-500s.  Turn bandwidthd off, and speeds are immediately restored.

Offline jimp

  • Administrator
  • Hero Member
  • *****
  • Posts: 21409
  • Karma: +1437/-26
    • View Profile
Re: bandwidthd kills my throughput
« Reply #1 on: October 23, 2017, 10:17:06 am »
Do you have the "promiscuous" option set in bandwidthd? If so, uncheck it and test again. Promiscuous mode puts a lot more strain on things, since bandwidthd then has many more packets to process that the firewall wouldn't normally see.

Also, what hardware exactly? Tracking traffic will eat up some CPU time, so there is some performance loss to be expected, but how much depends on the CPU power in that box.
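Whether an interface is actually in promiscuous mode can be confirmed from a shell. A minimal sketch, assuming a FreeBSD-style `ifconfig` flags line — the `igb0` name and the sample line below are stand-ins so the check is self-contained; on the firewall you would pipe in the live output instead:

```shell
#!/bin/sh
# Sketch: check whether an interface reports PROMISC in its flags.
# On the firewall itself you would use live output instead:
#   ifconfig igb0 | head -1
# The line below is a sample FreeBSD ifconfig line; igb0 is an
# assumed interface name, not taken from this box.
line='igb0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500'
case "$line" in
    *PROMISC*) echo "igb0 is in promiscuous mode" ;;
    *)         echo "igb0 is not in promiscuous mode" ;;
esac
```

The NIC flags show what is actually in effect, regardless of what the bandwidthd package settings claim.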
Need help fast? Commercial Support!

Co-Author of pfSense: The Definitive Guide. - Check the Doc Wiki for FAQs.

Do not PM for help!

Offline jlwright77
Re: bandwidthd kills my throughput
« Reply #2 on: October 24, 2017, 06:38:15 am »
jimp, thank you for taking time to respond.

I am using a Netgate SG-4860; the interface says its processor is "Intel(R) Atom(TM) CPU C2558 @ 2.40GHz, 4 CPUs: 1 package(s) x 4 core(s), AES-NI CPU Crypto: Yes (active)".

With bandwidthd DISABLED, a speedtest tests out at 940 Mbps, and processor usage according to the pfSense web interface maxes out at 62%.

Re-enabling bandwidthd, I lose at least 200 Mbps of speed, and processor usage only peaked at 56%. Doesn't seem like it would be a processor issue to me!?

I never had promiscuous mode enabled, either before or during this test just now.  Turn bandwidthd off and the speed is back immediately.

Offline jimp
Re: bandwidthd kills my throughput
« Reply #3 on: October 24, 2017, 07:27:34 am »
I wouldn't normally expect to see that much of a drop, but it's still possible it's due to bandwidthd trying to track all the traffic.

From a shell, watch the output of "top -aSH" while running a speed test, and see what shows up when the firewall is under load.
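A sketch of capturing those snapshots to a file so they can be reviewed after the test — the output path and the five-snapshot count are arbitrary choices, and on non-FreeBSD systems it falls back to `uptime` just so the loop still runs:

```shell
#!/bin/sh
# Sketch: capture a handful of `top -aSH` snapshots while the speed
# test runs, so the busiest threads can be reviewed afterwards.
# The log path and snapshot count are arbitrary.
if [ "$(uname)" = "FreeBSD" ]; then
    SNAP_CMD="top -aSH -b -d 1"   # -b: batch mode, -d 1: one display
else
    SNAP_CMD="uptime"             # portable stand-in elsewhere
fi
OUT="/tmp/top-during-test.log"
: > "$OUT"                        # truncate any previous log
i=1
while [ "$i" -le 5 ]; do
    echo "=== snapshot $i $(date +%T) ===" >> "$OUT"
    $SNAP_CMD >> "$OUT" 2>&1
    sleep 1
    i=$((i + 1))
done
echo "log written to $OUT"
```

Batch mode (`-b`) avoids the interactive screen handling, so the output redirects cleanly.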

Offline jlwright77
Re: bandwidthd kills my throughput
« Reply #4 on: October 25, 2017, 03:51:14 pm »
During a "successful" Speedtest, I get the following output:

last pid: 46610;  load averages:  0.41,  0.25,  0.16  up 3+02:46:41    15:48:56
202 processes: 8 running, 152 sleeping, 42 waiting

Mem: 43M Active, 262M Inact, 391M Wired, 178M Buf, 7192M Free
Swap: 8192M Total, 8192M Free


  PID USERNAME      PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
   11 root          155 ki31     0K    64K RUN     3  73.8H  88.77% [idle{idle: cpu3}]
   11 root          155 ki31     0K    64K RUN     0  73.9H  80.08% [idle{idle: cpu0}]
   11 root          155 ki31     0K    64K RUN     1  73.9H  79.59% [idle{idle: cpu1}]
   11 root          155 ki31     0K    64K RUN     2  73.8H  58.69% [idle{idle: cpu2}]
    0 root          -92    -     0K   560K CPU2    2   3:41  38.28% [kernel{igb1 que (qid 3)}]
   12 root          -92    -     0K   704K WAIT    0  14:30  18.99% [intr{irq256: igb0:que 0}]
   12 root          -92    -     0K   704K CPU1    1  14:27  15.48% [intr{irq257: igb0:que 1}]
   12 root          -92    -     0K   704K WAIT    3  15:08   7.18% [intr{irq260: igb1:que 1}]
   12 root          -92    -     0K   704K RUN     2  17:17   4.05% [intr{irq259: igb1:que 0}]
31309 nobody         23    0 14920K  5084K select  3   3:01   3.08% /usr/local/sbin/darkstat -i igb0 -b 192.168.0.1 -p 666
84405 root           21    0   263M 38676K piperd  0   0:01   0.59% php-fpm: pool nginx (php-fpm)
    0 root          -92    -     0K   560K -       1   0:02   0.20% [kernel{igb0 que (qid 0)}]
    0 root          -92    -     0K   560K -       3   2:43   0.10% [kernel{igb1 que (qid 2)}]
 5207 squid          20    0   281M   132M kqread  3  38:15   0.00% (squid-1) -f /usr/local/etc/squid/squid.conf (squid)
27498 root           20    0 12696K  2356K bpf     3   2:54   0.00% /usr/local/sbin/filterlog -i pflog0 -p /var/run/filterlog.pid
   17 root          -16    -     0K    16K -       0   2:14   0.00% [rand_harvestq]
   12 root          -60    -     0K   704K WAIT    0   1:57   0.00% [intr{swi4: clock (0)}]
 2954 root           20    0 10484K  2540K select  0   1:46   0.00% /usr/sbin/syslogd -s -c -c -l /var/dhcpd/var/run/log -P /var/run/syslog.pid -f /etc/syslog.conf

During an unsuccessful Speedtest with bandwidthd enabled, I get:

top: warning: process display count should be non-negative -- using default
....
last pid: 17290;  load averages:  0.97,  0.48,  0.26  up 3+02:48:48    15:51:03
75 processes:  1 running, 74 sleeping

Mem: 47M Active, 260M Inact, 396M Wired, 178M Buf, 7184M Free
Swap: 8192M Total, 8192M Free
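For picking the busy threads out of listings like the ones above, a small awk filter works. This sketch runs against a few sample lines copied from the earlier output, saved to a temporary file; the 10% threshold is an arbitrary choice:

```shell
#!/bin/sh
# Sketch: list threads over a CPU threshold from a saved `top -aSH`
# listing. The sample lines are copied from the output above; the
# 10% threshold is arbitrary.
cat > /tmp/top-sample.txt <<'EOF'
   11 root          155 ki31     0K    64K RUN     3  73.8H  88.77% [idle{idle: cpu3}]
    0 root          -92    -     0K   560K CPU2    2   3:41  38.28% [kernel{igb1 que (qid 3)}]
31309 nobody         23    0 14920K  5084K select  3   3:01   3.08% /usr/local/sbin/darkstat -i igb0
EOF
# Field 10 is WCPU; strip the trailing % and compare numerically.
awk '{ cpu = $10; sub(/%/, "", cpu); if (cpu + 0 > 10) print cpu "%\t" $11 }' /tmp/top-sample.txt
```

Against the sample above, this keeps the idle and igb1 queue kernel threads and skips darkstat at 3.08%. On a live system, the same filter can be applied to the log captured during the speed test.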