Routing performance: pfSense

DocA

New Around Here
Hello all, I have a n00b question:
Am I right in assuming that pfSense installed on decent hardware would easily outperform any consumer router (like the Asus GT-AC5300) in routing performance? For example, regarding bufferbloat, or when cut-through forwarding is disabled?
 
Yes! Even a 'simple' 1037U or J1900 will give you great performance.

Sent from my A0001 using Tapatalk
 
A dual-core Intel C2*** series @ 1.7 GHz will easily do 500 Mbit/s symmetric under heavy load (100+ users).

If you're planning on building a machine, ensure that the CPU you use has AES-NI instruction support, as this will be required for the future 2.5 release. Also note that with 2.4, a 64-bit capable CPU is required. The 1037U/J1900 CPUs mentioned above do not have AES-NI support.
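As a quick sanity check before committing to hardware, you can look for the `aes` CPU flag. A minimal sketch, assuming a Linux host where CPU flags are exposed via /proc/cpuinfo (on pfSense/FreeBSD itself, AES-NI shows up in the boot-time CPU feature lines instead):

```python
def has_aes_ni(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the 'aes' flag appears in the flags line (Linux)."""
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                # The flags line lists one token per CPU feature.
                return "aes" in line.split()
    return False

# Usage: has_aes_ni() -> True on a CPU with AES-NI support.
```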

pfSense has very granular QoS settings - the wizards out of the box will get you really close, and one can tweak things a bit from there.

Because of the software's flexibility, some things like port forwarding take a bit more effort, but pfSense offers far more functionality than most consumer routers.
 
Performance is highly dependent on hardware, basically the CPU and RAM. For 1 Gb/s, dual-channel DDR2 RAM easily copes; if you want 10 Gb/s, you should get as much RAM bandwidth as you can, e.g. quad-channel DDR3 or DDR4.

The reason is that packets are sent back and forth between RAM and the CPU for processing. The NIC doesn't send them straight to the CPU (it can, but that would cause packet drops when the CPU is busy), so a packet goes to RAM first, then to the CPU for processing, then back to RAM, then to the NIC. So take the rated RAM bandwidth and divide by 4: that gives you the RAM-side throughput limit, as long as the CPU is fast enough, which for x86 is plenty fast compared to MIPS or ARM.
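The divide-by-4 rule above can be turned into a quick back-of-the-envelope calculator. A sketch, where the example bandwidth figures are illustrative theoretical-peak assumptions, not measurements:

```python
def routing_limit_gbit(mem_bandwidth_gb_per_s):
    """Rough RAM-side routing throughput limit in Gbit/s.

    A forwarded packet crosses the memory bus ~4 times:
    NIC -> RAM, RAM -> CPU, CPU -> RAM, RAM -> NIC.
    """
    bits_per_s = mem_bandwidth_gb_per_s * 8  # GB/s -> Gbit/s
    return bits_per_s / 4                    # four bus traversals per packet

# Illustrative peak-bandwidth assumptions:
dual_ddr2 = routing_limit_gbit(12.8)   # dual-channel DDR2-800: ~12.8 GB/s
quad_ddr4 = routing_limit_gbit(76.8)   # quad-channel DDR4-2400: ~76.8 GB/s
```

Even the DDR2 figure comes out well above 1 Gbit/s, which is why the RAM-side limit only starts to matter when you aim for 10 Gb/s and beyond.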

On platforms like TILE, the NICs are CPU-connected: traffic still uses RAM, but it doesn't travel back and forth as much. (On TILE platforms like the MikroTik CCRs, it's also not recommended to have packets travelling between different CPUs.) The net result is more effective use of memory bandwidth.
 
Thanks for the replies. It's for a 1 Gbit FTTH setup. I went a bit overboard on the hardware, considering it's a home setup: a Shuttle DH170, an i5-7600T, and 2x2 GB DDR3.
 
Interesting fact: I remember from a MUM presentation a long time back that running RouterOS x86 on an expensive Intel Xeon with a large cache benefited latency, thanks to the CPU's fast internal memory caching. By tweaking and keeping RouterOS's memory footprint as small as possible, you could effectively run RouterOS within the CPU cache.
 
