Wireguard Session Manager - Discussion thread (CLOSED/EXPIRED Oct 2021; use http://www.snbforums.com/threads/session-manager-discussion-2nd-thread.75129/)

@Martineau,
I ran into an issue where, after adding and changing RPDB rules (deleting the previous ones), the following error shows up:
Code:
E:Option ==> 4 wg11

        Requesting WireGuard VPN Peer start (wg11)

        wireguard-client1: Initialising Wireguard VPN 'client' Peer (wg11) in Policy Mode to engage.cloudflareclient.com:2408 (# Cloudflare WARP) DNS=192.168.2.1
iptables: Chain already exists.

wireguard-client1: ***ERROR Failed to create -t nat WGDNS1.

On a different topic: the /jffs/addons/wireguard/Scripts directory gets wiped out upon uninstalling the script. I have set up a cron job to keep it safe, but a note during the install or uninstall (for the rest of us) would be welcome.
 
@Torson
I need this device for my Internet.
This is my setup: PC (192.168.1.139) -> Router (192.168.1.1) -> LTU Pro (Bridge, 172.16.253.6) -> LTU Rocket (Bridge, 172.16.253.2) -> ISP

Can I simply change this line in my warp.conf to another IP address?
Code:
Address = 172.16.0.2/32
If so, to which IP address?
Your devices are on a different subnet. So that's not the issue.
The internal address provided in the .conf file cannot be changed, so that's not an option.
 
I ran into an issue where, after adding and changing RPDB rules (deleting the previous ones), the following error shows up:
Code:
E:Option ==> 4 wg11

        Requesting WireGuard VPN Peer start (wg11)

        wireguard-client1: Initialising Wireguard VPN 'client' Peer (wg11) in Policy Mode to engage.cloudflareclient.com:2408 (# Cloudflare WARP) DNS=192.168.2.1
iptables: Chain already exists.

wireguard-client1: ***ERROR Failed to create -t nat WGDNS1.

That appears to be caused by the previous 'stop wg11' request not completing correctly?

i.e. if you switched the 'client' Peer from Policy to non-Policy mode whilst the 'client' Peer was connected, then that would indeed be the cause.

Not sure if you can provide the exact sequence of steps that produced the error? Otherwise, perhaps it may be prudent to only allow modifying the RPDB rules when the 'client' Peer isn't ACTIVE?
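For what it's worth, a common defensive pattern for this class of error is to make the chain creation idempotent; a sketch only, not necessarily how wg_client actually handles it:
Code:
# create the chain only if it is absent; otherwise flush the stale copy
# left behind by the incomplete stop
iptables -t nat -N WGDNS1 2>/dev/null || iptables -t nat -F WGDNS1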

On a different topic: the /jffs/addons/wireguard/Scripts directory gets wiped out upon uninstalling the script. I have set up a cron job to keep it safe, but a note during the install or uninstall (for the rest of us) would be welcome.
Uninstall usually means exactly what it says, so the directory '/jffs/addons/wireguard/' together with all its subdirectories and files is deleted - I assume that is what you would normally expect and have actually observed?

However, if during the uninstall you consciously elect to keep the data directory '/opt/etc/wireguard.d/' (containing the SQL database and all the '.conf' files), the script (since v4.01) is supposed to try to second-guess your intentions and should silently preserve the '/jffs/addons/wireguard/Scripts/' directory by copying it to '/opt/etc/wireguard.d/'

e.g.
Code:
    Press Y to delete ALL WireGuard DATA files (Peer *.config etc.) ('/opt/etc/wireguard.d/') or press [Enter] to keep custom WireGuard DATA files.


    Event scripts

wg11-down.sh
wg11-route-down.sh
wg11-route-up.sh
wg11-up.sh

    Deleted Peer Auto-start @BOOT

    [✖] Statistics gathering is DISABLED

    nat-start updated - no longer protecting WireGuard firewall rules
    Deleted aliases for 'wg_manager.sh'
    Restarting DNSMASQ to remove 'wg*' interfaces

Done.

    WireGuard Uninstall complete for RT-AC86U (v386.2)


but unfortunately there appears to be a typo in the command :oops:


I'll push a Hotfix ASAP
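(For reference, the intended preserve step presumably boils down to something like this sketch; the actual command in wg_manager.sh may well differ:)
Code:
# sketch: copy the user's event scripts to the surviving data directory
# before '/jffs/addons/wireguard/' is deleted
[ -d /jffs/addons/wireguard/Scripts ] && \
    cp -rp /jffs/addons/wireguard/Scripts /opt/etc/wireguard.d/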
 
That appears to be caused by the previous 'stop wg11' request not completing correctly?

i.e. if you switched the 'client' Peer from Policy to non-Policy mode whilst the 'client' Peer was connected, then that would indeed be the cause.

Not sure if you can provide the exact sequence of steps that produced the error? Otherwise, perhaps it may be prudent to only allow modifying the RPDB rules when the 'client' Peer isn't ACTIVE?
I believe you're right - I may have switched from one mode to another while the client was active.

So, "perhaps it may be prudent to only allow modifying the RPDB rules when the 'client' Peer isn't ACTIVE?" sounds like a sensible approach.

That may also be related to the initial auto-start mode being set to 'Y' when the 'client' Peer is imported. Would you entertain the idea of 'N' as the import default, followed by a prompt to choose an option upon 'client' Peer activation?
 
Was browsing through the wg_client code on GitHub and it looks like when the new routing tables for policy routing are created, they only contain a copy of the lan_ifnames entries (together with the WireGuard default route). Why are the new routing tables so sparse? Why not copy the entire routing table so we get routing information for any other subnets that might be in there?

So if I, for example, use policy routing and route my LAN out through the VPN, I would no longer be able to contact my guest network from that LAN, whether it's on br1, br2, wl0.1 or any other subnet, VLAN or bridge I might have created, simply because there is no route to it (well, actually these packets are routed out through the WireGuard interface, where they will be dropped, if you are lucky).

Or am I misreading/misunderstanding the code? Or missing some other functionality? How is this typically done with OpenVPN?

//Zeb
 
I believe you're right - I may have switched from one mode to another while the client was active.

So, "perhaps it may be prudent to only allow modifying the RPDB rules when the 'client' Peer isn't ACTIVE?" sounds like a sensible approach.

That may also be related to the initial auto-start mode being set to 'Y' when the 'client' Peer is imported. Would you entertain the idea of 'N' as the import default, followed by a prompt to choose an option upon 'client' Peer activation?
There is no user available to answer any prompt during the boot process!!!!!

So, having imported the 'client' Peer, why wouldn't you want to auto-start it?

If I set the import default to Auto=N, then no doubt I'll eventually get someone complaining "why didn't the 'client' Peer start @boot?" :rolleyes:
 
Was browsing through the wg_client code on GitHub and it looks like when the new routing tables for policy routing are created, they only contain a copy of the lan_ifnames entries (together with the WireGuard default route). Why are the new routing tables so sparse? Why not copy the entire routing table so we get routing information for any other subnets that might be in there?

So if I, for example, use policy routing and route my LAN out through the VPN, I would no longer be able to contact my guest network from that LAN, whether it's on br1, br2, wl0.1 or any other subnet, VLAN or bridge I might have created, simply because there is no route to it (well, actually these packets are routed out through the WireGuard interface, where they will be dropped, if you are lucky).

Or am I misreading/misunderstanding the code? Or missing some other functionality? How is this typically done with OpenVPN?

//Zeb
Unlike the OpenVPN GUI implementation, I don't offer 4 options and, for simplicity, decided not to differentiate between 'Force Internet traffic through tunnel=Policy Rules' and 'Force Internet traffic through tunnel=Policy Rules (strict)', on the basis that for most users the latter option always works without risk.

However, if a user can indeed correctly configure true VLANs on separate bridges, like yourself (as opposed to using a different subnet), then that is precisely why I accepted your pull-request/enhancement to add the Event trigger exit for edge-case routing such as yours.
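(For anyone with this edge case, the Event trigger exit can put the missing routes back; a sketch, where the guest subnet, bridge and table id are all placeholders for your own values:)
Code:
# hypothetical wg11-route-up.sh: restore a route to a guest-network bridge
# inside the policy routing table used by wg11 (table id is a placeholder)
ip route add 192.168.3.0/24 dev br1 table 121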
 
Unlike the OpenVPN GUI implementation, I don't offer 4 options and, for simplicity, decided not to differentiate between 'Force Internet traffic through tunnel=Policy Rules' and 'Force Internet traffic through tunnel=Policy Rules (strict)', on the basis that for most users the latter option always works without risk.

However, if a user can indeed correctly configure true VLANs on separate bridges, like yourself (as opposed to using a different subnet), then that is precisely why I accepted your pull-request/enhancement to add the Event trigger exit for edge-case routing such as yours.
Understood, and I really appreciate your work either way. I have no problem adding on scripts, but for my own case that won't be necessary since I only route LAN to VPN (without access to anywhere else); all others go through WAN. I'm mostly talking about this in general and only want what is best for your script.

My humble opinion, however, is that access control (forcing internet traffic...) is better left to the firewall rather than done by excluding routes, since that could have adverse effects.

Maybe making a full copy of the main routing table and removing the WAN routes has other adverse effects? Anyone?

I guess an easy enough fix would be to add a policy rule src:0.0.0.0/0 dst:192.168.0.0/16 to use the WAN, something like the sketch below.
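(A sketch; the priority value is arbitrary, it just needs to be matched before the VPN rules:)
Code:
# sketch: send 192.168.x.x destinations via the main table before the
# VPN rules match (a lower priority number is matched earlier)
ip rule add from 0.0.0.0/0 to 192.168.0.0/16 table main priority 9900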

//Zeb
 
There is no user available to answer any prompt during the boot process!!!!!

:oops: Who mentioned the boot process?

As I understand it, once the import completes, the 'client' Peer is set to auto-start but not activated.
Between that time and @boot, events may occur: random chores, or a firewall restart by another script. At least one of those will start the 'client' Peer(s).

There are some familiar programs, services (or firmware components) out there, based on similar concepts, that after importing a configuration file offer the choice to activate the client/service immediately or not, and/or to automatically start it at boot time, etc. I have not seen many user complaints with that approach either.

Regardless of the path chosen for this specific script I appreciate your time and effort and will continue following its development.
 
Dear @Martineau
I noticed this error when stopping the interface (bold font):
Code:
E:Option ==> 4

Requesting WireGuard VPN Peer start (warp)

wireguard-clientp: Initialising Wireguard VPN 'client' Peer (warp) to engage.cloudflareclient.com:2408 (# Cloudflare Warp) DNS=1.1.1.1
wireguard-clientp: Initialisation complete.


ENABLED WireGuard ACTIVE Peer Status: Clients 1, Servers 0



1  = Update Wireguard modules
2  = Remove WireGuard/wg_manager
3  = List ACTIVE Peers Summary [Peer...] [full]
4  = Start [ [Peer [nopolicy]...] | category ] e.g. start clients
5  = Stop [ [Peer... ] | category ] e.g. stop clients
6  = Restart [ [Peer... ] | category ] e.g. restart servers
7  = Display QR code for a Peer {device} e.g. iPhone
8  = Peer management [ "list" | "category" | "new" ] | [ {Peer | category} [ del | show | add [{"auto="[y|n|p]}] ]
9  = Create Key-pair for Peer {Device} e.g. Nokia6310i (creates Nokia6310i.conf etc.)
10 = IPSet management [ "list" ] | [ "upd" { ipset [ "fwmark" {fwmark} ] | [ "enable" {"y"|"n"}] | [ "dstsrc"] ] } ]

? = About Configuration
v = View ('/jffs/addons/wireguard/WireguardVPN.conf')

e = Exit Script [?]

E:Option ==> 5

Requesting WireGuard VPN Peer stop (warp)


[: 99p0: bad number
[: 99p0: bad number
[: 99p0: bad number

wireguard-clientp: Wireguard VPN 'client' Peer (warp) to engage.cloudflareclient.com:2408 (# Cloudflare Warp) Terminated


ENABLED WireGuard ACTIVE Peer Status: Clients 0, Servers 0

My warp.conf:
Code:
# Cloudflare Warp
[Interface]
PrivateKey = hidden
Address = 172.16.0.2/32
DNS = 1.1.1.1

[Peer]
PublicKey = bmXOC+F1FxEMF9dyiK2H5/1SUtzH0JuVo51h2wPfgyo=
AllowedIPs = 0.0.0.0/0
Endpoint = engage.cloudflareclient.com:2408

I'm still struggling to get WARP to work.
I can ping all websites, so it seems I can resolve domain names, but the websites won't load.
So I guess there is a traffic problem somewhere.

Edit:
I was so desperate to figure out my problem that I bought a mobile data SIM and an LTE stick.
I plugged this stick into my AC86U and now I'm using the mobile LTE data SIM as my primary WAN.
Now, WARP is working without problems. So I guess the problem is my other ISP or the special setup I use. (Router->LTU Pro->LTU Rocket->ISP)
 

My ISP told me that in order to properly use WARP I have to do this:
Code:
/ip firewall mangle
add out-interface=pppoe-out protocol=tcp tcp-flags=syn action=change-mss new-mss=1300 chain=forward tcp-mss=1301-65535
But I don't know what this is or how to use it.
Maybe someone can explain to me how I can use this, or where I have to add it?
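(That is a MikroTik RouterOS mangle rule that clamps the TCP MSS of outbound connections to 1300 bytes, i.e. a workaround for a reduced path MTU. The rough iptables equivalent on the router would be something like this sketch, with the tunnel interface name assumed from this thread:)
Code:
# sketch: clamp TCP MSS to the path MTU on traffic leaving the tunnel
# ('warp' is the assumed WireGuard interface name)
iptables -t mangle -A FORWARD -o warp -p tcp --tcp-flags SYN,RST SYN \
    -j TCPMSS --clamp-mss-to-pmtu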

Edit:
I think it has something to do with MTU?
Is there a way to configure the MTU for my warp.conf interface?

Edit:
FINALLY!
I had to lower the MTU for my warp interface to 1280; now the tunnel is working!

How can I permanently set the MTU to 1280 for my warp interface, even after a reboot?
 
How can I permanently set the MTU to 1280 for my warp interface, even after a reboot?
In the GUI:
Advanced > WAN > Account Settings
Don't forget to hit Apply at the bottom of the page...which might trigger a soft reboot.
 
In the GUI:
Advanced > WAN > Account Settings
Don't forget to hit Apply at the bottom of the page...which might trigger a soft reboot.
Sorry, that's only for the ppp0 interface.
I need to permanently set the MTU for the "new" warp/WireGuard interface.
 
@Martineau
Another thing, when I want to restart my warp interface, I get this:
Apr 10 21:24:38 kernel: ^[[0;33;41m[ERROR mcast] bcm_mcast_blog_process,789: blog allocation failure^[[0m
Apr 10 21:24:38 kernel: ^[[0;33;41m[ERROR mcast] bcm_mcast_blog_process,789: blog allocation failure^[[0m
Apr 10 21:24:55 kernel: ^[[0;33;41m[ERROR mcast] bcm_mcast_blog_process,789: blog allocation failure^[[0m
Apr 10 21:24:55 kernel: ^[[0;33;41m[ERROR mcast] bcm_mcast_blog_process,789: blog allocation failure^[[0m
Apr 10 21:25:15 kernel: ^[[0;33;41m[ERROR mcast] bcm_mcast_blog_process,789: blog allocation failure^[[0m
Apr 10 21:25:15 kernel: ^[[0;33;41m[ERROR mcast] bcm_mcast_blog_process,789: blog allocation failure^[[0m

And sometimes this:
Apr 10 21:40:16 kernel: bcm63xx_nand ff801800.nand: timeout waiting for command 0x1
Apr 10 21:40:16 kernel: bcm63xx_nand ff801800.nand: intfc status f80000e0
Apr 10 21:40:32 kernel: bcm63xx_nand ff801800.nand: timeout waiting for command 0x4
Apr 10 21:40:32 kernel: bcm63xx_nand ff801800.nand: intfc status c80000e0
Apr 10 21:45:41 kernel: bcm63xx_nand ff801800.nand: timeout waiting for command 0x1
Apr 10 21:45:41 kernel: bcm63xx_nand ff801800.nand: intfc status f80000e0
Apr 10 21:46:27 kernel: bcm63xx_nand ff801800.nand: timeout waiting for command 0x4
Apr 10 21:46:27 kernel: bcm63xx_nand ff801800.nand: intfc status c80000e0
 
Dear @Martineau
I noticed this error when stopping the interface (bold font):
Code:
E:Option ==> 5

        Requesting WireGuard VPN Peer stop (warp)


[: 99p0: bad number
[: 99p0: bad number
[: 99p0: bad number
        wireguard-clientp: Wireguard VPN 'client' Peer (warp) to engage.cloudflareclient.com:2408 (# Cloudflare Warp) Terminated


ENABLED WireGuard ACTIVE Peer Status: Clients 0, Servers 0
Whilst wgm allows importing of '*.conf' files without mandating that they be renamed to the 'wg1x.conf' convention, there are probably still tests for the prefix 'wg1*' that determine extraction of data fields, run logic etc.

I suggest you turn on debug mode and PM me the output, to identify where the '99p0' text string is being retrieved from.
 
My ISP told me that in order to properly use WARP I have to do this:
Code:
/ip firewall mangle
add out-interface=pppoe-out protocol=tcp tcp-flags=syn action=change-mss new-mss=1300 chain=forward tcp-mss=1301-65535
But I don't know what this is or how to use it.
Maybe someone can explain to me how I can use this, or where I have to add it?

Edit:
I think it has something to do with MTU?
Is there a way to configure the MTU for my warp.conf interface?

Edit:
FINALLY!
I had to lower the MTU for my warp interface to 1280; now the tunnel is working!

How can I permanently set the MTU to 1280 for my warp interface, even after a reboot?
Unfortunately, wgm currently does not allow specifying the MTU for individual WireGuard interfaces.

The MTU is globally set to 1420 in the script wg_client, which works for my Mullvad-hosted WireGuard sessions, on the basis that a larger MTU usually helps eliminate the need for fragmentation.

I'll add the option to specify the MTU in the import process and also allow tweaking of the MTU when using the peer command e.g. peer warp mtu=1280

In the meantime you will need to customise/edit your copy of wg_client to apply MTU=1280 each time you initialise the WireGuard interface after a reboot.
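(Until then, an interim workaround might be an event script that re-applies the MTU whenever the interface comes up; a sketch, assuming wgm honours a 'warp-up.sh' event script analogous to the 'wg11-up.sh' scripts listed earlier:)
Code:
# hypothetical /jffs/addons/wireguard/Scripts/warp-up.sh
ip link set dev warp mtu 1280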
 
@Martineau
Another thing, when I want to restart my warp interface, I get this:
Apr 10 21:24:38 kernel: ^[[0;33;41m[ERROR mcast] bcm_mcast_blog_process,789: blog allocation failure^[[0m
Apr 10 21:24:38 kernel: ^[[0;33;41m[ERROR mcast] bcm_mcast_blog_process,789: blog allocation failure^[[0m
Apr 10 21:24:55 kernel: ^[[0;33;41m[ERROR mcast] bcm_mcast_blog_process,789: blog allocation failure^[[0m
Apr 10 21:24:55 kernel: ^[[0;33;41m[ERROR mcast] bcm_mcast_blog_process,789: blog allocation failure^[[0m
Apr 10 21:25:15 kernel: ^[[0;33;41m[ERROR mcast] bcm_mcast_blog_process,789: blog allocation failure^[[0m
Apr 10 21:25:15 kernel: ^[[0;33;41m[ERROR mcast] bcm_mcast_blog_process,789: blog allocation failure^[[0m
And sometimes this:
Apr 10 21:40:16 kernel: bcm63xx_nand ff801800.nand: timeout waiting for command 0x1
Apr 10 21:40:16 kernel: bcm63xx_nand ff801800.nand: intfc status f80000e0
Apr 10 21:40:32 kernel: bcm63xx_nand ff801800.nand: timeout waiting for command 0x4
Apr 10 21:40:32 kernel: bcm63xx_nand ff801800.nand: intfc status c80000e0
Apr 10 21:45:41 kernel: bcm63xx_nand ff801800.nand: timeout waiting for command 0x1
Apr 10 21:45:41 kernel: bcm63xx_nand ff801800.nand: intfc status f80000e0
Apr 10 21:46:27 kernel: bcm63xx_nand ff801800.nand: timeout waiting for command 0x4
Apr 10 21:46:27 kernel: bcm63xx_nand ff801800.nand: intfc status c80000e0
for possible solutions...
 
Hi @Martineau

I also think there may be something wrong with the stats in syslog:

Code:
Apr 11 17:59:00 (wg_manager.sh): 9265 Clients ^[[97m1^[[95m, Servers ^[[97m0
Apr 11 17:59:01 (wg_manager.sh): 9265 warp:^[[97m transfer: 2.02 GiB received, 72.47 MiB sent        ^[[97m0 Days, 00:54:38 from 2021-04-11 17:04:23 >>>>>>^[[0m
Apr 11 17:59:01 (wg_manager.sh): 9265 warp: period : -2126008812 Bytes received, 72.47 MiB sent (Rx=-2126008812;Tx=75990303)
Apr 11 18:59:00 (wg_manager.sh): 14631 Clients ^[[97m1^[[95m, Servers ^[[97m0
Apr 11 18:59:01 (wg_manager.sh): 14631 warp:^[[97m transfer: 3.63 GiB received, 124.91 MiB sent        ^[[97m0 Days, 01:54:38 from 2021-04-11 17:04:23 >>>>>>^[[0m
Apr 11 18:59:01 (wg_manager.sh): 14631 warp: period : -397284475 Bytes received, 124.91 MiB sent (Rx=-397284475;Tx=130977628)
 
Hi @Martineau

I also think there may be something wrong with the stats in syslog:

Code:
Apr 11 17:59:00 (wg_manager.sh): 9265 Clients ^[[97m1^[[95m, Servers ^[[97m0
Apr 11 17:59:01 (wg_manager.sh): 9265 warp:^[[97m transfer: 2.02 GiB received, 72.47 MiB sent        ^[[97m0 Days, 00:54:38 from 2021-04-11 17:04:23 >>>>>>^[[0m
Apr 11 17:59:01 (wg_manager.sh): 9265 warp: period : -2126008812 Bytes received, 72.47 MiB sent (Rx=-2126008812;Tx=75990303)
Apr 11 18:59:00 (wg_manager.sh): 14631 Clients ^[[97m1^[[95m, Servers ^[[97m0
Apr 11 18:59:01 (wg_manager.sh): 14631 warp:^[[97m transfer: 3.63 GiB received, 124.91 MiB sent        ^[[97m0 Days, 01:54:38 from 2021-04-11 17:04:23 >>>>>>^[[0m
Apr 11 18:59:01 (wg_manager.sh): 14631 warp: period : -397284475 Bytes received, 124.91 MiB sent (Rx=-397284475;Tx=130977628)
The negative stats count might be caused by a 'missing' record in the SQL database.

I suggest you reset the SQL tables.

So stop ALL WireGuard interfaces, then delete both the traffic and session tables:
Code:
E:Option ==> diag sqlX


Use command 'diag sql [ table_name ]' to see the SQL data (might be many lines!)

       Valid SQL Database tables: clients  devices  fwmark   ipset    policy   servers  session  traffic

             e.g. diag sql traffic will show the traffic stats SQL table

    DEBUG: Interactive SQL '/opt/etc/wireguard.d/WireGuard.db'
    Tables:    clients  devices  fwmark   ipset    policy   servers  session  traffic

SQLite version 3.33.0 2020-08-14 13:23:32
Enter ".help" for usage hints.
sqlite> drop table traffic;
sqlite> drop table session;
sqlite> .quit

Now recreate the tables
Code:
e  = Exit Script [?]

E:Option ==> initdb keep

    No Peer entries to auto-migrate from '/jffs/addons/wireguard/WireguardVPN.conf', but you will need to manually import the 'device' Peer '*.conf' files:

warp

    [✔] WireGuard Peer SQL Database initialised OK

Now start the 'client' Peer warp

Every hour @hh:59 the stats are generated, and you can check whether the hourly stats are as expected, i.e. are negative values still shown?
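(A quick way to check for any remaining negative rows directly; a sketch, with the column names taken from the 'diag sql traffic' listing below:)
Code:
sqlite3 /opt/etc/wireguard.d/WireGuard.db \
    "SELECT * FROM traffic WHERE RX < 0 OR TX < 0;"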

e.g. I left two 'client' Peers running and they don't show negative values
Code:
E:Option ==> diag sql traffic

    DEBUG: SQL '/opt/etc/wireguard.d/WireGuard.db'

    Table:traffic
Peer  Timestamp            RX        TX
<snip>
wg11  2021-04-13 02:59:01  16495176  8545302
wg13  2021-04-13 02:59:01  10505     39024
wg11  2021-04-13 03:59:01  6248437   5998447
wg13  2021-04-13 03:59:02  10559     39476
wg11  2021-04-13 04:59:01  17732496  10023794
wg13  2021-04-13 04:59:02  13177     49120
wg11  2021-04-13 05:59:01  7129241   7267224
wg13  2021-04-13 05:59:02  13140     49092
wg11  2021-04-13 06:59:01  18948844  11365972
wg13  2021-04-13 06:59:02  15839     58920
wg11  2021-04-13 07:59:01  8324618   8630372
wg13  2021-04-13 07:59:02  15813     58912
wg11  2021-04-13 08:59:01  20217621  12781550
wg13  2021-04-13 08:59:02  18409     68535
 
My ISP told me that in order to properly use WARP I have to do this:
Code:
/ip firewall mangle
add out-interface=pppoe-out protocol=tcp tcp-flags=syn action=change-mss new-mss=1300 chain=forward tcp-mss=1301-65535
But I don't know what this is or how to use it.
Maybe someone can explain to me how I can use this, or where I have to add it?

Edit:
I think it has something to do with MTU?
Is there a way to configure the MTU for my warp.conf interface?

Edit:
FINALLY!
I had to lower the MTU for my warp interface to 1280; now the tunnel is working!

How can I permanently set the MTU to 1280 for my warp interface, even after a reboot?
Thanks for the additional debug trace.

wgm really doesn't like 'warp' as an interface name, as opposed to one with the numeric suffix provided by the expected 'wg1*' naming convention.

(The numeric suffix is used to determine/select the routing table and RPDB rule priority, hence the weird '99p0' value.)
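(A guess at the mechanism, not the actual wg_manager.sh code: if the RPDB priority is built from the character at the interface name's fourth position, 'wg11' yields a sane number whilst 'warp' contributes the letter 'p':)
Code:
# illustration only - NOT the actual wg_manager.sh code
WG_INTERFACE="warp"
PRIO="99${WG_INTERFACE:3:1}0"   # 'wg11' -> '9910', but 'warp' -> '99p0'
[ "$PRIO" -gt 0 ] && echo OK    # fails here: '[: 99p0: bad number'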

However, I have modified the scripts to try and tolerate your insistence on using the 'warp' WireGuard interface name (rather than renaming it), but Policy Routing is DISABLED.

Code:
e  = Exit Script [?]

E:Option ==> uf

wgm Beta v4.10 now allows you either to define the MTU value during the import

e.g. add the desired MTU directive prior to importing the '.conf'
Code:
# Cloudflare Warp Ubimo
[Interface]
PrivateKey = aOFsuGBj/kphICgdeemBnb/GjIvKa44ih7qvNaJmfGA=
Address = 172.16.0.2/32
DNS = 1.1.1.1
MTU = 1280
[Peer]
PublicKey = bmX0C+F1F/EMF9dyiK2H5//SUtzH0JuVo51h2wPfgyo=
AllowedIPs = 0.0.0.0/0
Endpoint = engage.cloudflareclient.com:2408

or you may accept the import default and manually change/tweak it post-import using the peer interface_name mtu=nnnn command

Code:
e  = Exit Script [?]

E:Option ==> peer warp

    Peers (Auto=P - Policy, Auto=X - External i.e. Cell/Mobile)

Client  Auto  IP             Endpoint                          DNS      MTU   Public                                        Private                                       Annotate
warp    N     172.16.0.2/32  engage.cloudflareclient.com:2408  1.1.1.1  1420  bmX0C+F1FxEEF9dyiK2H5/1SUtzH0Ju/o51H2wPfgyo=  aOFsuGBjXkphICGdeemBnbsgjIvKa44i/7qvNaJmfGA=  # Cloudflare Warp

    No RPDB Selective Routing rules for warp
Code:
e  = Exit Script [?]

E:Option ==> peer warp mtu=1280

    [✔] Updated MTU
Code:
e  = Exit Script [?]

E:Option ==> peer warp

    Peers (Auto=P - Policy, Auto=X - External i.e. Cell/Mobile)

Client  Auto  IP             Endpoint                          DNS      MTU   Public                                        Private                                       Annotate
warp    N     172.16.0.2/32  engage.cloudflareclient.com:2408  1.1.1.1  1280  bmX0C+F1FxEEF9dyiK2H5/1SUtzH0Ju/o51H2wPfgyo=  aOFsuGBjXkphICGdeemBnbsgjIvKa44i/7qvNaJmfGA=  # Cloudflare Warp

    No RPDB Selective Routing rules for warp
 