Anyone using Data Channel Offload (OVPN-DCO) on any of your client devices/networks yet?


I suppose 300 Mbps on the WAN side was too low to even make a dent in CPU use, with or without DCO. It was like 1-2%.
 
It depends on where the bottleneck is. If the bottleneck is the CPU, then DCO will help get higher throughput. But if the bottleneck is your WAN link, then DCO might only help reduce CPU usage.
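
An easy way to see which one you're hitting (just a sketch; the addresses are placeholders): push iperf3 traffic through the tunnel and watch the router's CPU at the same time.

# On a host behind the far end of the tunnel (placeholder address):
iperf3 -s

# On the client, through the VPN (placeholder tunnel address):
iperf3 -c 10.8.0.1 -t 30

# Meanwhile, on the router: if one core is pegged while throughput sits
# below the WAN rate, the CPU is the bottleneck and DCO should help.
top -d 1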

Did you see they only support the AEAD ciphers?

AES-128-GCM and ChaCha20-Poly1305 - this isn't a bad thing to be honest...

I see this feature as more for the provider side, where supporting 1000's of connections per server makes this a lot more efficient, and perhaps easier to manage...
 
AES-128-GCM and ChaCha20-Poly1305 - this isn't a bad thing to be honest...
It's moving forward, at least. Not that AES-128-CBC is bad, but it's not as efficient since it requires a separate HMAC calculation.
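
(For reference, a minimal sketch of keeping the data channel on the DCO-compatible AEAD ciphers in OpenVPN 2.6; the config file name is just a placeholder:)

# DCO only handles AEAD ciphers, so restrict negotiation to those.
# AES-256-GCM also works, as noted further down the thread.
openvpn --config client.ovpn \
        --data-ciphers AES-256-GCM:AES-128-GCM:CHACHA20-POLY1305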

I see this feature as more for the provider side, where supporting 1000's of connections per server makes this a lot more efficient, and perhaps easier to manage...
One thing I wanted to test myself was whether I could efficiently start using bcmspu now that crypto gets shifted to kernel space, getting rid of the expensive context switch that made bcmspu useless with OpenVPN. Sadly, the newest BCM platform I have access to is still on kernel 4.19, so no DCO is possible.
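
(If anyone wants to poke at this themselves, a rough sketch for checking what the kernel crypto API has registered; whether a bcmspu/hardware driver shows up, and at what priority, depends entirely on the firmware build:)

# List the implementations the kernel crypto API knows about; a hardware
# offload driver shows up with its own "driver" name and usually a higher
# "priority" than the generic software code.
grep -E 'name|driver|priority' /proc/crypto

# Narrow it down to the AES-GCM entries DCO would use:
grep -A10 'name.*: gcm(aes)' /proc/crypto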

Better hardware acceleration support would be beneficial to those trying to scale such large numbers of simultaneous connections.
 
AES-256-GCM is also supported; that's what I used.

@RMerlin, their site says it can be backported to older kernels; not sure if 4.19 is too old?

Found this discussion:

Also, here are the test results below from OpenVPN's website:

Their test setup:
AMD Threadripper 3970X system running Hyper-V as the hypervisor, with Linux and Windows guests.
[Image: OpenVPN DCO benchmark results]
 
AES-256-GCM is also supported; that's what I used.
Key size doesn't matter here; the point was more about CBC vs GCM.
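
The gap is easy to see with openssl's built-in benchmark (a sketch; GCM gets encryption and authentication in one pass, while CBC mode still needs a separate SHA/HMAC pass on top):

# AEAD: encryption and authentication in one construction
openssl speed -evp aes-128-gcm

# CBC: cipher only -- the HMAC (e.g. SHA-256) is an additional pass
openssl speed -evp aes-128-cbc
openssl speed -evp sha256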
 
It's moving forward, at least. Not that AES-128-CBC is bad, but it's not as efficient since it requires a separate HMAC calculation.

Some comparison numbers on ARM - I think they made reasonable choices - AES-128-GCM for cores that have the instruction support, and ChaCha20-Poly1305 for those that don't...

gnutls-bin v3.7.3, ubuntu 23.04 (Armbian)

Cortex-A35 - Amlogic S905Y4 - Khadas VIM1S

AES-128-GCM 0.21 GB/sec
CHACHA20-POLY1305 75.13 MB/sec
AES-128-CBC-SHA1 0.27 GB/sec
AES-128-CBC-SHA256 0.25 GB/sec

Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid

Cortex-A17 - Rockchip RK3288 - Tinker Board

AES-128-GCM 48.96 MB/sec
CHACHA20-POLY1305 125.20 MB/sec
AES-128-CBC-SHA1 48.21 MB/sec
AES-128-CBC-SHA256 38.30 MB/sec


Flags: half thumb fastmult vfp edsp thumbee neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm
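
(If anyone wants to generate numbers like these on their own box, presumably it's the built-in gnutls benchmark; a sketch, assuming a Debian/Ubuntu-style system:)

# Install the gnutls CLI tools and run the raw cipher/MAC benchmark.
# This measures the crypto primitives only, not tunnel throughput.
sudo apt install gnutls-bin
gnutls-cli --benchmark-ciphers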
 
AES-256-GCM is also supported; that's what I used.

@RMerlin, their site says it can be backported to older kernels; not sure if 4.19 is too old?

Found this discussion:



Linux version 4.1.27 (merlin@ubuntu-dev) (gcc version 5.3.0 (Buildroot 2016.02)
….would be a “no-go” then, eh??
Thank you!!

dazono (21 hr. ago):
Correct. 4.1 kernels are unsupported, also by the upstream Linux kernel.
There is support for RHEL 8's kernel (which is v4.18-based, but also carries a lot of support backported from newer kernels by Red Hat).

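(A quick sanity check for whether a given box is even in the running, as a sketch; the module name is an assumption, since it has shipped as both ovpn_dco and ovpn_dco_v2 depending on the release:)

# Kernel needs to be reasonably recent (the RHEL-8 4.18 backport or newer,
# per the comment above); a 4.1.x kernel is a non-starter.
uname -r

# If the out-of-tree DCO module was built and installed, it should show up:
modinfo ovpn_dco_v2 2>/dev/null || modinfo ovpn_dco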
 
Good to see the OpenVPN dev respond. Pretty much in line with what RMerlin said.

As for my testing, it's probably useless at my ISP speeds. I no longer have a Comcast gigabit connection; I'm on Fios 300/300, which peaks at around 330 Mbps and which the VPN test easily maxed out without breaking a sweat in CPU usage, as previously mentioned. Maybe at multi-gigabit speeds, and/or with lots of clients as sfx2000 mentioned, DCO may make a bigger difference. It could also make a difference with much lower-clocked CPUs, since DCO allows multi-threaded encryption.
 
Well - it's a step forward for OVPN - but performance should be similar to non-DCO; it will be more efficient, however...

It keeps OVPN relevant compared to WG - and they've made some good choices on the AEAD ciphers... ChaCha20-Poly1305 for cores that do not implement some kind of AES acceleration, and AES-128-GCM for those that do...

With DCO, there is a cost, and that cost is portability - as long as the OVPN team can support kernel modules/extensions/drivers across operating systems and core ISA's, they're good to go, but the burden is on them to keep it in sync...

Keep in mind that OVPN keeps the relative strengths it's had in the past - it can do Layer 2 (TAP) as well as Layer 3 (TUN), whereas WG, like IPsec, is Layer 3 only - which is useful in certain use cases...
 
Good to see the OpenVPN dev respond. Pretty much in line with what RMerlin said.

As for my testing, it's probably useless at my ISP speeds. I no longer have a Comcast gigabit connection; I'm on Fios 300/300, which peaks at around 330 Mbps and which the VPN test easily maxed out without breaking a sweat in CPU usage, as previously mentioned. Maybe at multi-gigabit speeds, and/or with lots of clients as sfx2000 mentioned, DCO may make a bigger difference. It could also make a difference with much lower-clocked CPUs, since DCO allows multi-threaded encryption.
What is your OVPN client device?
 
A Windows PC with a Ryzen 5800 (non-X, i.e. essentially an OEM 5700X). I later tested on my 2023 MBP 16" and iPhone 14 Pro as well; the latter two don't have DCO support in the OpenVPN app.
 
tested on my 2023 MBP 16" and iPhone 14 Pro as well; the latter two don't have DCO support in the OpenVPN app.

Probably for the best - Apple is very protective of their kernel for security reasons.

Client side there isn't really any improvement, IMHO, this is more for the server side...
 
Probably for the best - Apple is very protective of their kernel for security reasons.

Client side there isn't really any improvement, IMHO, this is more for the server side...
How is it not for the client side?
People using underpowered pfSense/OPNsense hardware, other firewall hardware/VPN appliances, VPN routers, etc. (i.e. anything CPU-bottlenecked) as OVPN clients could all benefit from running OVPN-DCO.
 
Looking at OpenVPN's own results, it definitely looks like it helps on the client side, but it's probably not useful just yet for the average end user with relatively low WAN speeds like myself. Servers/routers dealing with many clients and large loads will likely see a more immediate effect. It's still a welcome improvement.
 
People using underpowered pfSense/OPNsense hardware, other firewall hardware/VPN appliances, VPN routers, etc. (i.e. anything CPU-bottlenecked) as OVPN clients could all benefit from running OVPN-DCO.

Folks running OpenWrt can load up the kernel module and kick the tires...
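
Something along these lines should do it on a recent OpenWrt build (a sketch; the exact package name is a guess, so check opkg first):

# Package names are an assumption -- verify with: opkg list | grep -i dco
opkg update
opkg install kmod-ovpn-dco-v2 openvpn-openssl

# Confirm the module is actually there:
lsmod | grep -i ovpn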


I've got a QCA-Dakota based board on the shelf - GL-Inet B1300, but I'd have to spin up a build environment as I'm not actively working on that one...

Dakota - IPQ4028, which is a quad-core Cortex-A7 @ 717 MHz

Last time I checked it with OVPN, it was around 25 Mbps, whereas WG was around 190 Mbps...
 
Servers/routers dealing with many clients and large loads will likely see a more immediate effect. It's still a welcome improvement.

I agree - for an enterprise security GW, especially with the work-from-home situation, DCO improvements could be profound for capacity reasons - same as for the commercial VPN service providers...

Think about it - if one is running that security gateway on a 2U server with a big AMD EPYC or Intel Xeon - it's got plenty of BW on the wire and tons of CPU resources, but the session overhead drops dramatically with DCO - which is my point...
 
Folks running OpenWRT can load up the kernel module and kick the tires..

I've got a QCA-Dakota based board on the shelf - GL-Inet B1300, but I'd have to spin up a build environment as I'm not actively working on that one...

Hm... so I did spin up a B1300/Dakota build with DCO, but the results are inconclusive, and with a limited number of tests, I'd rather not share results as they really don't prove or disprove the advantages of DCO...

I think if someone wants to tinker with this, it's best to first configure Debian (or whatever) as a router and enable/disable the feature there...

Remember, router SoCs add a layer of complexity with the built-in switch, so when doing A/B testing, it's best to keep things as simple as possible...
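
A bare-bones A/B sketch on a plain Debian box acting as the router (config paths are placeholders; OpenVPN 2.6 uses DCO by default when the module is loaded and only AEAD data ciphers are configured):

# Make the box route first:
sysctl -w net.ipv4.ip_forward=1

# Run A: DCO active (the default when available)
openvpn --config /etc/openvpn/server.conf

# Run B: same config, DCO explicitly off, data channel back in user space
openvpn --config /etc/openvpn/server.conf --disable-dco

# Compare iperf3 throughput and per-core CPU load between the two runs.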

Here's a couple of diagrams that might illustrate the complexity...

ovpn-classic as a user space implementation...

[Diagram: classic OpenVPN user-space data path]


Now here's DCO...

[Diagram: OpenVPN DCO in-kernel data path]


It's down in the network stack where the other dragons live - flow accelerators, QoS management, task offloaders, etc... both HW and SW...

So full integration into something like AsusWRT (if the kernel was new enough) or similar, well, that's a pretty big task, and a lot of testing across different configurations...
 
What is this OpenVPN Data Channel Offload? My wife was having issues with her connectivity and I noticed a network adapter called "OpenVPN Data Channel Offload". How did that get installed?
Is that a Windows update thing?
 
