OpenVPN 2.6 w/DCO


Swammy

New Around Here
Is OpenVPN 2.6 in the future plans for Merlin? More importantly, will DCO be able to run on the Asus Merlin hardware (AX88U, AX86U…)?
I have done some reading on the OpenVPN website, and it sounds like DCO is a feature that will eliminate some of the throughput limitations of OpenVPN. It also sounds like multithreading is available in 2.6.
I would love to see this feature brought to Merlin if it fits in the scope of all the great enhancements that are currently in Merlin.
 
OpenVPN 2.6 yes, but for DCO it will depend on its requirements. Current development builds of the DCO module require a very recent development kernel. DCO would need to be able to work with kernels 4.1 or 4.19 to be usable under Asuswrt.
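For anyone wanting to check where their own box stands, a minimal POSIX shell sketch of the version gate being discussed (the 5.4 minimum here is the upstream ovpn-dco project's stated requirement, an assumption on my part; the thread's point is that Asuswrt's 4.1/4.19 kernels fall well short either way):

```shell
#!/bin/sh
# Compare the running kernel against a minimum major.minor version.
# Illustrative only: 5.4 is assumed as the upstream ovpn-dco minimum.
kernel_at_least() {
    want_major=$1; want_minor=$2
    have=$(uname -r)                 # e.g. "4.19.183" on an Asuswrt router
    have_major=${have%%.*}           # text before the first dot
    rest=${have#*.}
    have_minor=${rest%%.*}           # text between the first two dots
    [ "$have_major" -gt "$want_major" ] || {
        [ "$have_major" -eq "$want_major" ] && [ "$have_minor" -ge "$want_minor" ]
    }
}

if kernel_at_least 5 4; then
    echo "kernel new enough for ovpn-dco"
else
    echo "kernel too old for ovpn-dco"
fi
```

On a stock RT-AX88U (kernel 4.1.x) this prints the "too old" branch, which is exactly the problem being described.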
 
Upgraded my AX-88U to 388.2_alpha2-g9dba35b46c.
It seems there's no DCO support currently?
I didn't find anything related to DCO in the git commits either.
 
DCO is not happening. It requires kernel > 5.2.
 
It has been backported to 4.18 in the Red Hat world. So maybe if Asus bumps the kernel to at least that version, there is a chance to replicate what they did.
And damn, what a difference it would make... On x86 it brings OpenVPN up from 200 Mbps to ~900 Mbps.
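For anyone wanting to reproduce that 200 vs ~900 Mbps comparison on x86: OpenVPN 2.6 engages DCO automatically whenever the kernel module is present, and its `disable-dco` option gives you the userspace baseline. A minimal client-side sketch, with the server name and port as placeholders:

```
# client.conf — illustrative fragment for an A/B throughput test
client
dev tun
proto udp
remote vpn.example.com 1194
# DCO requires AEAD data ciphers:
data-ciphers AES-256-GCM:CHACHA20-POLY1305
# Run once as-is (DCO kernel path), then once with the line below
# uncommented (userspace path), measuring with iperf3 each time:
# disable-dco
```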
 
Red Hat's kernel has a ton of backports, so it's not really pure 4.18; a lot of stuff from 5.x was backported to it.
 
Likely won't happen - as @RMerlin mentions, Red Hat's kernel is somewhat custom, and the HND software dev kit kernel is also custom...

Backporting likely won't happen either - there's been a shedload of changes in that part of the kernel that would make any backport very difficult to implement and, more importantly, maintain over time...

If you want to run a cutting-edge kernel, then OpenWrt-supported devices are likely the best choice there...
 
Got it. But I still think at some point a little bit of magic from Mr. Merlin will make it happen.
I've just finished a migration to 2.6.8 with DCO on server and clients, and it's beyond impressive (on beefy x86 hardware with many simultaneous clients).
I guess I just want more from my router than I paid for, as usual ;)
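For context on what such a migration involves, here is a sketch of the server-side options that keep DCO engaged on a 2.6.x server. The AEAD-cipher requirement and the compression/fragment fallback behavior come from the OpenVPN 2.6 documentation; paths, port, and subnet are placeholders:

```
# server.conf — illustrative fragment; cert/key paths are placeholders
port 1194
proto udp
dev tun
server 10.8.0.0 255.255.255.0
ca ca.crt
cert server.crt
key server.key
dh dh.pem
# DCO only stays engaged with AEAD data ciphers:
data-ciphers AES-256-GCM:CHACHA20-POLY1305
# Options such as compression (comp-lzo/compress) or --fragment force
# a fallback to the userspace data path, so leave them out.
```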
 
No, it won't. I already looked at it. DCO relies on features of the kernel stack that don't exist in 4.xx, and a backport of the kernel changes is largely impossible even for a kernel developer, because Broadcom's drivers were not developed to work with these changes.
 
What's frustrating is that Broadcom decided to stick with 4.19 for their BCM4916 Wifi 7 SDK. I guess I shouldn't be surprised there, considering their SDK still includes dnsmasq 2.78 (from 2017) and curl 7.35 (from 2014). Seriously...
 
