If you use limiters on 2.4 and check the system log, you may have seen this pop up:

load_dn_sched dn_sched FIFO loaded
load_dn_sched dn_sched QFQ loaded
load_dn_sched dn_sched RR loaded
load_dn_sched dn_sched WF2Q+ loaded
load_dn_sched dn_sched PRIO loaded
load_dn_sched dn_sched FQ_CODEL loaded
load_dn_sched dn_sched FQ_PIE loaded
load_dn_aqm dn_aqm CODEL loaded
load_dn_aqm dn_aqm PIE loaded

FQ_CODEL was added to dummynet/ipfw in FreeBSD 11.0, and since 2.4 is based on that, we can enable it by hand without recompiling anything.

Note: This doesn't look like it will officially be in 2.4 via the GUI, and may need more testing. Since we're messing around with the command line, bad things may happen so use at your own risk.

Start with a recent 2.4 snapshot. Create two root limiters, Download and Upload, and set their bandwidth to 95% of your maximum values. Create a queue under each, say LAN under Download and WAN under Upload. For LAN, select destination addresses for the mask; for WAN, select source addresses. Modify the default outgoing firewall rule to use WAN as the "in" pipe and LAN as the "out" pipe.

This generates /tmp/rules.limiter with something like the following:

pipe 1 config bw 85Mb
queue 1 config pipe 1 mask dst-ip6 /128 dst-ip 0xffffffff
pipe 2 config bw 9Mb
queue 2 config pipe 2 mask src-ip6 /128 src-ip 0xffffffff

and the firewall rule adds a "dnqueue( 2,1)" action to the outgoing LAN rule in /tmp/rules.debug.
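To double-check that the rule actually got tagged, you can grep the generated ruleset (the queue numbers will vary with your limiter setup):

```shell
# Confirm the outgoing LAN rule carries the dnqueue action.
# The exact numbers inside dnqueue(...) depend on which limiter
# queues you selected in the firewall rule.
grep dnqueue /tmp/rules.debug
```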

Without messing with PHP, we can manually change this to fq_codel and have it persist across reboots and ruleset reloads.

cp /tmp/rules.limiter /root/rules.limiter

I edited /etc/inc/shaper.inc as follows:

4599c4599,4600
<       mwexec("/sbin/ipfw {$g['tmp_path']}/rules.limiter");
---
>       #mwexec("/sbin/ipfw {$g['tmp_path']}/rules.limiter");
>       mwexec("/sbin/ipfw /root/rules.limiter");

Replace the contents of /root/rules.limiter with:

pipe 1 config bw 85Mb
sched 1 config pipe 1 type fq_codel
queue 1 config sched 1 mask dst-ip6 /128 dst-ip 0xffffffff
pipe 2 config bw 9Mb
sched 2 config pipe 2 type fq_codel
queue 2 config sched 2 mask src-ip6 /128 src-ip 0xffffffff

substituting your own bandwidth values for the 85Mb and 9Mb above.

Trigger a rule reload (disable, apply, and re-enable a rule) and kill states. You might want to run "ipfw pipe flush" before doing that. Then verify from the command line:

[2.4.0-BETA][admin@pfsense.lan]/root: ipfw sched show
00001:  85.000 Mbit/s    0 ms burst 0
q65537  50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
 sched 1 type FQ_CODEL flags 0x0 0 buckets 1 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 1
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip           0.0.0.0/0             0.0.0.0/0     1450  2175000 31 46500   0
00002:   9.000 Mbit/s    0 ms burst 0
q65538  50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
 sched 2 type FQ_CODEL flags 0x0 0 buckets 1 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 2
  0 ip           0.0.0.0/0             0.0.0.0/0       21      840  0    0   0
[2.4.0-BETA][admin@pfsense.lan]/root: ipfw queue show
q00001  50 sl. 0 flows (256 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
    mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
q00002  50 sl. 0 flows (256 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
    mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000

fq_codel is running, and we're passing traffic. Cool.
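For later tweaks, the apply-and-verify dance above can be done in one short sequence from the pfSense shell. This is just a sketch using the paths and commands already shown, run as root on the box:

```shell
# Reload the fq_codel limiter config by hand.
ipfw pipe flush            # clear the existing pipes/queues/scheds
ipfw /root/rules.limiter   # load the edited rules file (as shaper.inc now does)
pfctl -F states            # kill states so traffic re-enters the new queues
ipfw sched show            # confirm the scheds report type FQ_CODEL
```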

I tried using limiters a long time ago but had to stop due to some problems with dropped traffic, probably related to my hardware and igb. I then just had two interface shapers with CODELQ, WAN and LAN, set to 95% of my upload and download. This stopped bufferbloat, but I noticed that most real traffic would actually be half of these values.

So far this has been far superior to the ALTQ CODELQ. Some observations off the top of my head:

Downloads are no longer randomly halved, as they were with codel.

Twitch streams don't buffer under heavy load, such as Steam downloads.

Two heavy-bandwidth, multi-connection programs will share bandwidth evenly.

No more "sendto: No buffer space available" for unbound

A slight latency increase at load, instead of intermittent packet loss.

Works just as well as cake on OpenWrt/LEDE in my limited home testing.

Some points:

1. Since I hadn't been able to use plain limiters until now, the improvement may simply come from dummynet limiting my bandwidth rather than from fq_codel itself. That said, it reaches my configured bandwidth values more consistently than plain limiters with the default WF2Q+ scheduler.
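If anyone wants to A/B this against the default scheduler themselves, ipfw lets you flip the scheduler type at runtime without rebuilding the pipes. A sketch using the sched numbers from my config above (adjust to yours); the syntax mirrors the rules file:

```shell
# Flip both schedulers back to the default WF2Q+ for comparison testing...
ipfw sched 1 config pipe 1 type wf2q+
ipfw sched 2 config pipe 2 type wf2q+
# ...run your bufferbloat/speed tests, then switch back to fq_codel.
ipfw sched 1 config pipe 1 type fq_codel
ipfw sched 2 config pipe 2 type fq_codel
```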

2. Traffic isn't shown under "ipfw queue show", but 0.0.0.0/0 does show up under "ipfw sched show", so I assume the traffic is still being shaped. I noticed the same behavior in the original dummynet AQM paper on the developers' website, so maybe it's by design.

Discuss if you've tried this or have any input. If you use limiters, I'm interested in whether you can actually measure a difference, since I'm coming from ALTQ.