Anyone using Data Channel Offload (OVPN-DCO) on any of your client devices/networks yet?


OpenVPN with DCO is significantly faster than OpenVPN without it.
It's also faster than Wireguard (even keeping the transform the same.)
 
> It's also faster than Wireguard (even keeping the transform the same.)
That's probably because CPUs' specialized instructions can accelerate AES cipher operations, but not ChaCha20.
 
> That's probably because CPUs' specialized instructions can accelerate AES cipher operations, but not ChaCha20.

The only problem with your assertion: in addition to AES-GCM, DCO can run ChaCha20/Poly1305, and yes, OpenVPN w/ DCO is still faster than Wireguard running ChaCha20/Poly1305. That's what I was relating when I wrote: "(even keeping the transform the same.)"
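For anyone wanting to reproduce that apples-to-apples comparison: the data channel cipher can be pinned in an OpenVPN 2.5+ config. A minimal sketch (the option name is per the OpenVPN manual; everything else is illustrative):

```
# Restrict negotiation to ChaCha20-Poly1305 so the transform
# matches Wireguard's for a like-for-like benchmark.
data-ciphers CHACHA20-POLY1305
```

Worth remembering that DCO only handles the AEAD ciphers (AES-GCM and ChaCha20-Poly1305); pick a non-AEAD cipher like AES-256-CBC and you're back on the userspace data path.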

There is no good reason for this; after all, both protocols have to do about the same work. But there is a less good one: architecture. Wireguard could be better implemented on Linux and Windows.

Oh, and CPU instructions can accelerate ChaCha20, too. That's why we did the IIMB work for pfSense, and extended it to ARM64 platforms.
 
> Oh, and CPU instructions can accelerate ChaCha20, too. That's why we did the IIMB work for pfSense, and extended it to ARM64 platforms.

Jim,

Which instructions are you referring to regarding ChaCha20-Poly1305 acceleration, and on which architecture classes?

On x86 - even without AES-NI, Intel did a lot of good work using SSE to speed up the AES family...

ChaCha20 does run quite nicely on MIPS32 along with 32-bit ARM (and ARM64 cores that didn't license the crypto extensions like Broadcom's older Pi chips...)
 
> In addition to AES-GCM, DCO can run ChaCha20/Poly1305

Go up in the thread - it's been discussed that DCO supports the AEAD ciphers for AES-128-GCM and ChaCha20-Poly1305 - that's old news...

Anyways nice to see that pfSense is implementing DCO - with the BSD stack, should perform well...

Realizing of course, that DCO is still work in progress, obviously...
 
> Which instructions are you referring to regarding ChaCha20-Poly1305 acceleration, and on which architecture classes? [...]

SSE 4.1/AVX/AVX2, if you have them, will all accelerate ChaCha20 with our implementation.

With AVX-512 or even some recent Atoms, you can run VAES for AES-GCM, and that rips vs. plain AES-NI.
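A quick way to see which of those instruction-set extensions a given box actually exposes is the CPU flags list. A minimal sketch — the sample `flags` string here is made up for illustration; on a real Linux host substitute the `grep` shown in the comment:

```shell
# Sample flags string for illustration; on a real Linux host use:
#   flags=$(grep -m1 '^flags' /proc/cpuinfo)
flags="fpu sse4_1 aes avx avx2"
result=""
# Check the extensions mentioned above: SSE 4.1, AVX2, AES-NI, VAES.
for f in sse4_1 avx2 aes vaes; do
  case " $flags " in
    *" $f "*) result="${result}${f}=yes " ;;
    *)        result="${result}${f}=no "  ;;
  esac
done
echo "$result"
```

On ARM the equivalent is the `Features` line of /proc/cpuinfo (look for `aes` and `pmull`), which is how you spot the ARM64 cores that didn't license the crypto extensions.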
 
> Go up in the thread - it's been discussed that DCO supports the AEAD ciphers for AES-128-GCM and ChaCha20-Poly1305 - that's old news...
>
> Anyways nice to see that pfSense is implementing DCO - with the BSD stack, should perform well...

We have, and it does.

> Realizing of course, that DCO is still work in progress, obviously...

At this point, it’s essentially done.
 
> At this point, it’s essentially done.

Is this going to work, or done already, for the Intel C3558? I have 2x Netgate 6100 and 1x Netgate 5100 units in use based on this platform. All run site-to-site OpenVPN at speeds sufficient for our needs, but I don't mind better. My IT guys have nothing much to do lately. I have one Netgate 6100 spare for experiments.
 
> At this point, it’s essentially done.

Any chance of this making it over to pfSense CE?

Anyways - welcome to the SNB forums - you're always welcome here, just note that some folks are challenged with their filters - they mean well, just saying

;)
 
I'm taking a risk here by reviving a dead thread... but has anyone tried out DCO + QAT on Linux? I took this for a spin and have some head-scratching results, curious if anyone else here has attempted.

Test setup: iperf on two devices on different LANs, both connected to a Linux-based router instance running Intel C3858. Linux 6.6.40, OpenVPN 2.6.3, QAT Linux Driver 4.24.

Device A running built-in macOS ipsec client, Viscosity for OpenVPN. Device B just a standard Ethernet client on the LAN. Both devices are connected to the router using 2.5 GbE NICs.

Scenario                     | Device A Receiving | Device A Sending
LAN to LAN (forwarding only) | 2.35 Gbits/sec     | 2.35 Gbits/sec
OpenVPN (userspace)          | 188 Mbits/sec      | 188 Mbits/sec
OpenVPN (DCO, no QAT)        | 929 Mbits/sec      | 318 Mbits/sec
OpenVPN (DCO with QAT)       | 19.5 Mbits/sec     | 18.4 Mbits/sec
IPSec (AES-CBC with QAT)     | 1.13 Gbits/sec     | 411 Mbits/sec (presumably because macOS is slow)

Meanwhile, props to the pfSense folks, who seem to have it working quite nicely with their BSD-based implementation.
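To put those numbers in relative terms (Device A receiving), a quick back-of-the-envelope calculation from the table:

```shell
# Gains relative to userspace OpenVPN (188 Mbit/s, Device A receiving),
# computed from the figures in the table above.
dco_gain=$(awk 'BEGIN { printf "%.1f", 929 / 188 }')
qat_gain=$(awk 'BEGIN { printf "%.2f", 19.5 / 188 }')
echo "DCO, no QAT : ${dco_gain}x userspace"
echo "DCO with QAT: ${qat_gain}x userspace"
```

So DCO alone is roughly a 5x win over userspace here, while the QAT path is an order of magnitude *slower* than not offloading at all - which is the head-scratcher.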
 
I did some iperf testing today with and without DCO on pfSense with an OpenVPN Windows client (it seems DCO is enabled by default in the app); so far I didn't see much difference.

As the documentation states, those primarily affected by the lack of DCO have been small, low-powered embedded devices. If you're using pfSense, then you're probably already using a much more powerful platform (x86) (both client and server). In fact, one of the typical recommendations for solving this performance issue w/ OpenVPN has been to use more powerful devices! And given that most people are still quite limited in the bandwidth available from their ISP, there just isn't much opportunity for DCO to "strut its stuff" given your current configuration (assuming DCO is even active; who's to say it wasn't *silently* deactivated due to an unsupported configuration).
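On that last point, there's a way to check rather than guess: recent OpenVPN clients log a notice when DCO gets disabled at startup. A sketch - the sample log text below is an assumption modeled on OpenVPN 2.6-era messages, so grep loosely on a real log:

```shell
# Sample log line for illustration; on a real system inspect the client log,
# e.g.:  grep -i 'data channel offload' openvpn.log
line='Note: Kernel support for ovpn-dco missing, disabling data channel offload.'
case "$line" in
  *"disabling data channel offload"*) dco_state="inactive (fell back to userspace)" ;;
  *)                                  dco_state="no fallback notice seen" ;;
esac
echo "DCO: $dco_state"
```

If the notice shows up, one of the unsupported options (compression, non-AEAD ciphers, etc.) is silently pushing you back onto the userspace path.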

IOW, you're already in a configuration that provides much better performance than would be expected from our otherwise lowly consumer-grade routers. You're only going to see significant improvement w/ DCO if line speeds from your ISP increase substantially too. And even then, at least w/ commercial OpenVPN providers, it might be a long time before they increase their own line speeds to match.

As I see it, the problem w/ DCO, at least for small embedded devices (let's face it, in these forums, that tends to be consumer-grade routers), is that it's too little, too late. This would have been a very welcome improvement 5 or 10 years ago. Obviously, it only became worth the effort in the eyes of the OpenVPN developers once Wireguard gained privileged access to the kernel (nothing like competition to move things forward). Eventually, as line speeds do increase for everyone, the benefits of DCO will become more evident. But that could take quite some time, since OpenVPN providers tend to lag behind in order to maximize compatibility. And given that some of the documented restrictions/limitations of DCO are dictated by the server (e.g., NO net30 support), DCO will likely continue to be the exception rather than the rule.

Some OpenVPN providers might even *want* net30, since it prevents OpenVPN clients from being able to communicate w/ each other.
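For context, that behavior comes down to a single server-side option; a sketch per the OpenVPN manual:

```
topology subnet   # one shared subnet for all clients; DCO-compatible
;topology net30   # legacy per-client /30 - isolates clients, not supported by DCO
```

A provider that stays pinned to net30 (whether for legacy clients or deliberate client isolation) is effectively opting its users out of DCO.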

That's why Wireguard will probably continue to have the upper hand. It just works (although it has its own limitations as well, e.g., no bridged tunnels, but DCO doesn't work w/ bridged tunnels either, making the issue moot, at least from a performance perspective).

BTW, fwiw, Brainslayer over @ DD-WRT has apparently backported DCO support to kernel 4.4 and higher starting with build 53787 (although it sounds like it might not be perfected at this time).
 
> I'm taking a risk here by reviving a dead thread... but has anyone tried out DCO + QAT on Linux? [...]
Once upon a time I remember reading that if the appropriate driver isn't installed, QAT may still technically work, but its performance may be worse than if you hadn't enabled QAT at all… but I can't find the source anymore.

But maybe I’m misremembering?
 
> But maybe I’m misremembering?

Alternatively, …..
[attached image: IMG_0060.jpeg]
 
> Once upon a time I remember reading that if the appropriate driver isn’t installed, QAT may still technically work, but its performance may be worse than if you hadn’t enabled QAT at all… [...]

pfSense may support QAT - but one has to consider the chipset in use...

Rangeley was supposed to support it, but eventually did not... Netgate released a number of boxes based on the Rangeley chipset, such as the SG-2440, which was probably the most common as the price was right...

Anyways - all of these crypto accelerators/offload things - sometimes they help, many times they don't. For one, it's very conditional; for another, the offload may free up CPU cycles, but performance on that path may actually be lower - there's a number of OMAP chips I know first-hand there... and more than a few Marvell chips - and those were the better ones of the bunch.


Intel's QAT does work, but it's mostly for data center/cloud use cases...

The more important thing perhaps is Data Plane Development Kit (DPDK) support - this offloads much more than just Crypto and VPN...
 
