Very Slow SMB over RT-AX58U


leof13

New Around Here
Hello, I'm experiencing a problem with SMB over my RT-AX58U, both wireless and wired. Most of my network is hardwired through a layer 2 switch, which in turn is connected to the RT-AX58U acting as my router and gateway. My network includes a very fast NAS (SSDs, RAID 5) that transfers data at expected rates to other devices connected to the switch (north of 900 Mbit/s). When I try to transfer data to a wireless device, or even a device that is hardwired to the router rather than the switch, I am lucky to even get 20 Mbit/s. The same is true in reverse (NAS wired to the router, client to the switch), and even if both devices are wired to the router. This behavior is repeatable with both my NAS appliance and standard SMB shares from Windows or Linux PCs.
To make this more perplexing, I do not have any internet speed issues. Devices on both the router and the switch can upload and download data from the Internet at my 200 Mbit/s subscription level without issue. Likewise, other file transfer protocols like FTP do not have this problem when transferring data within the LAN; it seems related only to the SMB protocol.
I have gone through just about every SMB share setting and tweak possible within Windows without meaningful change; SMB v1, v2, and v3 all present similar behavior.
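For anyone retracing those tweaks, the client-side settings can be inspected and changed from an elevated PowerShell prompt. This is just a sketch of the kind of checks involved; the signing tweak is one example of the many settings I went through:

# Show the negotiated SMB dialect for each active connection
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect
# Dump every client-side SMB tuning parameter
Get-SmbClientConfiguration
# Example tweak: drop the client-side signing requirement
Set-SmbClientConfiguration -RequireSecuritySignature $false -Force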

My RT-AX58U is running Asuswrt-Merlin 3004.388.6. I am not running anything complex on the router or switch; there is no VLANing anywhere or anything out of the ordinary other than some DHCP option tweaks. My whole network runs on Cat 6a cables and shows 1Gbit full duplex on all connections.

Has anyone seen anything like this before? If so, how was it fixed? I'm out of ideas on things to check. Any insight is appreciated.
 
Are you running any add-on scripts on the router? If so, which ones?
What features have you enabled on the router (QoS, AiProtection, etc.)?
Do you have a USB hard drive attached to the router? If so, is it being used as a network storage location?
 
I have the same experience with SMB and the RT-AX58U.

My SMB server runs in an LXC container on Proxmox. When I try to upload a huge number of files via SMB, the transfer stalls (the number or size of the files doesn't matter). On the other hand, when I transfer the same batch of files via SSH or SFTP clients such as FileZilla, everything works well at an average speed of 40 MB/s. (I transfer the data to a 7200 rpm rotational HDD.)

I don't think the bottleneck is the router, unless SMB inserts too many management packets into the communication.

But I don't know much about the SMB protocol and I haven't researched it much.
The only thing I noticed is that with SMB the vCPUs assigned to the LXC container go to full load, unlike with SSH or SFTP.
 
Are you running any add-on scripts on the router? If so, which ones?
What features have you enabled on the router (QoS, AiProtection, etc.)?
Do you have a USB hard drive attached to the router? If so, is it being used as a network storage location?
No add-on scripts are being run. No QoS, AiProtection, or any other major feature is enabled. No USB storage is attached to the router. The only things changed from the base Merlin install are some wireless network settings, the DHCP server config, and a few DHCP options pointing to a TFTP server.
 
I have the same experience with SMB and the RT-AX58U.

My SMB server runs in an LXC container on Proxmox. When I try to upload a huge number of files via SMB, the transfer stalls (the number or size of the files doesn't matter). On the other hand, when I transfer the same batch of files via SSH or SFTP clients such as FileZilla, everything works well at an average speed of 40 MB/s. (I transfer the data to a 7200 rpm rotational HDD.)

I don't think the bottleneck is the router, unless SMB inserts too many management packets into the communication.

But I don't know much about the SMB protocol and I haven't researched it much.
The only thing I noticed is that with SMB the vCPUs assigned to the LXC container go to full load, unlike with SSH or SFTP.
I'm basing my belief that it's the router on the fact that when I don't go through it, SMB speeds are as expected. I know SMB is not the most efficient protocol, but even given that, the transfer speeds through the router are much slower than I would expect, especially when I can get substantially more throughput (about 30 to 40 times more) via SSH, SFTP, etc. SMB is just convenient for Windows clients.
 
SMB runs over TCP. Samba includes a feature called SMB multichannel, which spreads a session across multiple NICs (RDMA comes in with the related SMB Direct feature). It's really meant for direct server-to-client links rather than paths with a router in between. You can add up to 4 NICs between a server and a client PC; they can be 1000BASE-T or even 10GBASE-T, but all the NICs must be the same speed, and Samba has to have it enabled in smb.conf. I'm unsure whether the Samba built into the router supports it, but going direct to the server would be highly recommended, either using that or putting a faster NIC (e.g. 2.5G) in your server.
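For reference, on a Linux Samba server that switch lives in smb.conf (a minimal sketch; whether the router's built-in Samba honors it is untested):

[global]
    # Let SMB3 clients open multiple channels across multiple NICs
    server multi channel support = yes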

Also, enabling jumbo frame support might help. On 1000BASE-T you might get at most around 115-120 MB/s over a single TCP connection instead of multichannel. LACP aggregation won't help, as it's not supported by Samba.

There are some optimizations you can make in Samba's configuration, but I've found they don't help much, besides maybe adding vfs_io_uring, which helps with async I/O. Even then I get stalls on certain files. I've been experimenting with adding irqbalance to my server to help spread interrupts across my CPU cores.

I've also tried the kyber I/O scheduler on my server, along with cake and BBR. Samba is still a poor protocol; I wish they would support QUIC over UDP.

Also, I make sure my Proxmox server has async enabled in the mount options for each disk, which again helps ensure async I/O. Proxmox uses io_uring by default, but I also use writethrough and iothread. And if you use the btrfs filesystem, add the mount options "discard=async,ssd", or set the disk's rotational flag to 0 in rc.local.
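As a concrete sketch of those knobs on a Linux/Proxmox host (device names and the fstab line are placeholders; run as root, and persist them via rc.local or sysctl.d since they reset on reboot):

# Use the kyber I/O scheduler for a disk
echo kyber > /sys/block/sda/queue/scheduler
# Mark a disk as non-rotational (e.g. from /etc/rc.local)
echo 0 > /sys/block/sda/queue/rotational
# Default to the cake qdisc with BBR congestion control
sysctl -w net.core.default_qdisc=cake
sysctl -w net.ipv4.tcp_congestion_control=bbr
# Example btrfs entry in /etc/fstab with the mount options mentioned above
# UUID=xxxx /srv/tank btrfs defaults,discard=async,ssd 0 0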


 
SMB runs over TCP. Samba includes a feature called SMB multichannel, which spreads a session across multiple NICs (RDMA comes in with the related SMB Direct feature). It's really meant for direct server-to-client links rather than paths with a router in between. You can add up to 4 NICs between a server and a client PC; they can be 1000BASE-T or even 10GBASE-T, but all the NICs must be the same speed, and Samba has to have it enabled in smb.conf. I'm unsure whether the Samba built into the router supports it, but going direct to the server would be highly recommended, either using that or putting a faster NIC (e.g. 2.5G) in your server.

Also, enabling jumbo frame support might help. On 1000BASE-T you might get at most around 115-120 MB/s over a single TCP connection instead of multichannel. LACP aggregation won't help, as it's not supported by Samba.

There are some optimizations you can make in Samba's configuration, but I've found they don't help much, besides maybe adding vfs_io_uring, which helps with async I/O. Even then I get stalls on certain files. I've been experimenting with adding irqbalance to my server to help spread interrupts across my CPU cores.

I've also tried the kyber I/O scheduler on my server, along with cake and BBR. Samba is still a poor protocol; I wish they would support QUIC over UDP.


But shouldn't the router only be routing the packets between clients? I don't actually have any Samba server enabled on the router itself. Unfortunately I can't do anything about direct connections for wireless Windows clients, as those have to go through the router no matter what, since it's acting as the wireless access point.

Jumbo packets are enabled throughout my network. I realize that going above a 1Gb connection would raise the maximum speed, but it won't help if I'm not saturating the existing 1Gb link through the router. SMB multichannel is disabled on my storage appliance, so it shouldn't be trying to use it. I am able to monitor hardware usage on both the server and clients, and this is definitely not a hardware limitation (as further evidenced by the fact that taking the router out of the equation fixes the issue). Best I can tell, the router just isn't able to handle SMB TCP traffic as fast as my switch, but that would be concerning, as my switch is about 15 years old and nothing particularly special (a Netgear GS724Tv1).
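For anyone wanting to verify the jumbo path, a do-not-fragment ping sized for a 9000 MTU (8972 bytes of payload + 28 bytes of IP/ICMP headers = 9000; the address is a placeholder) shows whether jumbo frames survive end to end:

# Windows
ping -f -l 8972 192.168.1.10
# Linux
ping -M do -s 8972 192.168.1.10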
 
If you're going through any wireless client, then even if jumbo frames are enabled you're limited to the standard 1500 MTU, as that point-to-point WLAN connection can't (or shouldn't) be changed. Jumbo frames are for Ethernet only.


I use these settings for my wireless:

[screenshot of wireless settings]


I also set my channel bandwidth to 80 or 160 MHz. OFDMA DL/UL + MU-MIMO adds full duplex for wireless, which might help.

TCP Optimizer on Windows with its optimal settings might help as well.
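For what it's worth, the kind of global TCP settings that tool touches can also be inspected and set with netsh (a sketch; these two are just examples of the relevant knobs):

:: Show the current global TCP parameters
netsh int tcp show global
:: Example: make sure receive window auto-tuning is on
netsh int tcp set global autotuninglevel=normal
:: Example: enable receive-side scaling
netsh int tcp set global rss=enabled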


If you use Hyper-V on any of your clients with VMs that use a legacy adapter instead of NAT, you're going to limit the whole computer to 100BASE-T speeds. Windows XP uses this adapter; I've run into problems with it and had to add another NIC to serve it separately instead of sharing the connection.

Those tips might help.
 
If you're going through any wireless client, then even if jumbo frames are enabled you're limited to the standard 1500 MTU, as that point-to-point WLAN connection can't (or shouldn't) be changed. Jumbo frames are for Ethernet only.

I use these settings for my wireless:

[screenshot of wireless settings]

I also set my channel bandwidth to 80 or 160 MHz. OFDMA DL/UL + MU-MIMO adds full duplex for wireless, which might help.

TCP Optimizer on Windows with its optimal settings might help as well.

If you use Hyper-V on any of your clients with VMs that use a legacy adapter instead of NAT, you're going to limit the whole computer to 100BASE-T speeds. Windows XP uses this adapter; I've run into problems with it and had to add another NIC to serve it separately instead of sharing the connection.

Those tips might help.
Your settings actually match mine. Not sure it's related to that, though; regardless of whether it's a wireless or a direct hardwired connection to the router, I get approximately the same SMB throughput. To me that suggests the physical medium is not the issue.

A few simple test cases to help clarify:
- NAS appliance: RAID 5 array of 8x 1TB SATA 3 SSDs, quad-core CPU, 16GB DDR4 RAM, 10Gb NIC (running at 1Gb) (I know, way overkill)
- Client: Dell Precision 7530 laptop, 1Gb Ethernet via Intel I219, 802.11ac wireless via Intel Dual Band Wireless-AC 8265, 8-core CPU, 64GB DDR4 RAM
- Switch: Netgear GS724Tv1 1Gb switch
- Router: ASUS RT-AX58U

Case 1:
NAS Appliance <-wired-> 1Gb Switch <-wired-> Client
Download (NAS to Client) of a single 16GB file - 110MB/s average (880Mb/s)

Case 2:
NAS Appliance <-wired-> Client
Download (NAS to Client) of a single 16GB file - 117MB/s average (936Mb/s)

Case 3:
NAS Appliance <-wired-> 1Gb Switch <-wired-> Router <-wired-> Client
Download (NAS to Client) of a single 16GB file - 2.6MB/s average (22Mb/s)

Case 4:
NAS Appliance <-wired-> Router <-wired-> 1Gb Switch <-wired-> Client
Download (NAS to Client) of a single 16GB file - 2.6MB/s average (22Mb/s)

Case 5:
NAS Appliance <-wired-> 1Gb Switch <-wired-> Router <-wireless-> Client
Download (NAS to Client) of a single 16GB file - 2.6MB/s average (22Mb/s)

Case 6:
NAS Appliance <-wired-> Router <-wired-> Client
Download (NAS to Client) of a single 16GB file - 2.6MB/s average (22Mb/s)

Since cases 1 and 2 are more or less consistent with SMB limits over a 1Gb link, I'm scratching my head over cases 3 through 6. I get that I'll never hit those speeds over wireless, but that doesn't explain why I can't get anywhere near them over a wired connection to the router.
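(For reference, raw TCP over these same paths can also be measured with iperf3, which takes SMB and the disks out of the picture entirely; the address is a placeholder.)

# On the NAS end
iperf3 -s
# On the client, through the path under test, for 30 seconds
iperf3 -c 192.168.1.10 -t 30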
 
I'm assuming your switch is unmanaged and there is nothing to configure. Have you tried different Cat 6a cables? While I understand they are showing 1G full duplex, it's possible you have a faulty cable somewhere in the path. RAID can cause some weird problems. My server was running software RAID (mdadm) with the btrfs filesystem; it worked fine, but Samba wouldn't connect to a client computer that was attached to an AiMesh wireless node (the client was directly connected to the node by Cat 6a Ethernet). I removed the RAID and gave my server/Samba access to each individual drive, and it worked flawlessly. The AiMesh node is an RT-AX58U and my main router is a GT-AX11000. Nothing except the path to the drives changed in the configuration, as each extra share was just a copy-paste and name change of the original RAID share. Weird things can cause SMB to not cooperate.

Unfortunately, I don't really know what could be causing your issue. It seems like your switch is able to handle the connections, and the router should just be forwarding packets and handling DHCP, which shouldn't be that taxing. Maybe SSH into the router and check its CPU and I/O usage with htop, just to see whether the router is bottlenecking itself on something while transferring files. The RT-AX58U isn't the strongest router, but it shouldn't cause performance loss like this since it isn't even handling Samba.
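A minimal sketch of that check (router address and username are placeholders; SSH has to be enabled in the router GUI first, and htop needs Entware, though the stock BusyBox top works too):

ssh admin@192.168.50.1
# On the router: watch CPU and load while a transfer runs
top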
 
Can you confirm the speeds without the switch or wireless involved, i.e.

NAS Appliance <-wired-> Router <-wired-> Client

I think you said you tried this in post #1 but I just want to be sure I understood correctly.
 
Can you confirm the speeds without the switch or wireless involved, i.e.

NAS Appliance <-wired-> Router <-wired-> Client

I think you said you tried this in post #1 but I just want to be sure I understood correctly.
Yes. My bad, I forgot to add it to my previous post.
 
I'm assuming your switch is unmanaged and there is nothing to configure. Have you tried different Cat 6a cables? While I understand they are showing 1G full duplex, it's possible you have a faulty cable somewhere in the path. RAID can cause some weird problems. My server was running software RAID (mdadm) with the btrfs filesystem; it worked fine, but Samba wouldn't connect to a client computer that was attached to an AiMesh wireless node (the client was directly connected to the node by Cat 6a Ethernet). I removed the RAID and gave my server/Samba access to each individual drive, and it worked flawlessly. The AiMesh node is an RT-AX58U and my main router is a GT-AX11000. Nothing except the path to the drives changed in the configuration, as each extra share was just a copy-paste and name change of the original RAID share. Weird things can cause SMB to not cooperate.

Unfortunately, I don't really know what could be causing your issue. It seems like your switch is able to handle the connections, and the router should just be forwarding packets and handling DHCP, which shouldn't be that taxing. Maybe SSH into the router and check its CPU and I/O usage with htop, just to see whether the router is bottlenecking itself on something while transferring files. The RT-AX58U isn't the strongest router, but it shouldn't cause performance loss like this since it isn't even handling Samba.
The switch is managed, but I don't see any issues with the configuration; all I've really done to it is raise the MTU. For testing purposes I also tried an unmanaged 8-port desktop switch and got the same results as with my managed switch.
 
Yeah, if all things are equal I'd focus on the router.

Curious though whether you have the same issues while using something like Robocopy, with the thread count set equal to the number of cores (or threads, if the PC Robocopy runs on has hyperthreading). See the sketch below.
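Something like this, as a sketch (share path, destination, and thread count are placeholders; /MT defaults to 8 threads):

:: Copy a share's folder tree with 8 parallel copy threads
robocopy \\NAS\share\folder D:\incoming /E /MT:8 /R:1 /W:1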


I'd also try setting your NIC's interrupt moderation to extreme (or as high as it goes) and disabling packet priority & VLAN on the card.
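On Windows those NIC properties can be scripted with the NetAdapter cmdlets (a sketch; the exact display names and allowed values vary by NIC driver, so list them first):

# List the driver's advanced properties and their current values
Get-NetAdapterAdvancedProperty -Name "Ethernet"
# Examples only: the DisplayName/DisplayValue strings depend on the driver
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Interrupt Moderation Rate" -DisplayValue "Extreme"
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Packet Priority & VLAN" -DisplayValue "Disabled"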

On the router side, if you use QoS, disable it; and if the RT-AX58U has a bandwidth monitor (I can't remember, since I use mine as a node), set the clients to very high priority.

Curious if a brute-force approach works better or the same. You can also monitor NIC speeds in Task Manager. Oh, and I'd disable file locking completely on Samba.
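In smb.conf terms that would look something like this (a sketch; disabling locking is only safe when clients don't write to the same files concurrently):

[global]
    # Turn off byte-range lock enforcement and opportunistic locks
    locking = no
    strict locking = no
    oplocks = no
    level2 oplocks = no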

Just point it at a folder on a Samba network-mapped drive and it should do its thing. Robocopy moves folders, not individual files.

Robocopy can move folders to or from the server from the client's side.
 
Spanning Tree Protocol?
I've certainly had issues with STP and Proxmox. As long as there isn't a loop in your setup, you should be fine with STP disabled. If your VM uses two virtual network adapters, having STP enabled can make file transfers start and stop, and disrupt general network connectivity to the server. It probably wouldn't cause just reduced speeds overall, though.

That said, the OP hasn't indicated whether his NAS is running bare metal or Proxmox (or equivalent). Assuming it's just one NIC into a bare-metal NAS, it shouldn't be an issue to enable STP, since it ensures the DHCP server doesn't die even if there is a loop.
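On a Proxmox host the bridge's STP state is quick to confirm (vmbr0 is Proxmox's default bridge name):

# 0 = STP disabled on the bridge, 1 = enabled
cat /sys/class/net/vmbr0/bridge/stp_state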
 
