
DS1819+ and 10Gb Ethernet and M.2 Adapter Card E10M20-T1

I uncoupled the four 1Gb ports and only the 10Gb port is connected now:
[screenshot attached]
 
The cat /sys check confirms 2500 as the speed.
So the driver is linking at 2.5Gb, but the network profile is 1Gb.
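For reference, the sysfs check being described can be run like this. The interface name eth4 is a placeholder; list your interfaces with ip link to find the one backing the 10GbE card:

```shell
# Interface name is an assumption; find yours with: ip link
IFACE=eth4
cat /sys/class/net/$IFACE/speed   # negotiated link speed in Mbit/s (2500 here)
cat /sys/class/net/$IFACE/mtu     # current MTU for the interface
```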
Is this a Synology install, or did you load this in Docker?
Because you have to add --cap-add=NET_ADMIN to the docker run statement to alter this.
Docker is a containerization program. Some think it provides a layer of safety, but it is quite hackable if exposed to the internet.
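If it is running in Docker, granting the capability looks roughly like this (container and image names here are made up for illustration):

```shell
# Sketch: --cap-add=NET_ADMIN lets processes inside the container change
# interface settings (ip link, MTU, queue lengths), which a default
# container cannot. "net-tuner" and "some-image" are placeholder names.
docker run -d \
  --name net-tuner \
  --cap-add=NET_ADMIN \
  --network host \
  some-image:latest
```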
 
Looking at this, we would have to alter the .json file in /usr/syno/etc/packages/Docker/

 
> Thank you for your inputs and your patience. I will dig into this a bit more when I have some more time.
That's cool, feel free to bug me if you have any questions.
On a side note, I have an old Synology NAS I was going to get running again, but I think I will try getting a copy of Ubuntu IoT OS to run on it instead of Synology's "choices", if there really are any.

To break down Intel's default ip profile setting for 10Gb:
MTU 9000 - the maximum transmission unit at the highest link speed
txqueuelen 2500 - the transmit queue length (2500 packets, based on a 4-port 2.5Gb + 2-port 10Gb switch)
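Assuming a stock Linux userland, those two profile values can be applied by hand with ip. The interface name eth4 is a placeholder, and the MTU should only be raised if every device in the path supports jumbo frames:

```shell
# Placeholder interface name; check yours with: ip link
ip link set dev eth4 mtu 9000          # jumbo frames; needs end-to-end support
ip link set dev eth4 txqueuelen 2500   # transmit queue length in packets
ip link show dev eth4                  # verify both values took effect
```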
 
My DS1819+ was connected via a 4-way 1Gb bond to a multi-gigabit LAN (2.5Gb switches all around). My transfer speeds (copying large files back and forth) were 75 to 80 MB/s, or around 600-650 Mbit/s.
I installed a new 10Gb Ethernet and M.2 Adapter Card E10M20-T1 (which has a 10Gb Ethernet port) and switched to that as the connection to the LAN. To my surprise, my transfer speed dropped by half: I now only get around 40 MB/s.
I changed back to the 4-way bond and the speed went back up to 80 MB/s.
What am I missing? Is this expected behavior? If so, what is the point of the 10Gb port/card?

Update: I have since deleted the 4-way bond (4x 1Gb Ethernet ports) and created a new 5-way bond (4x 1Gb plus the 10Gb port) as an experiment.
As my switches are all 2.5Gb, the bond now shows 4x 1Gb full duplex and 1x 2.5Gb full duplex, for a theoretical maximum of 6.5Gb full duplex.
BUT: if I copy a large file from the NAS to my laptop, I get 90-92 MB/s (fine). When I copy the same large file the other way, from laptop to NAS, I only get 40 MB/s.
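As a back-of-the-envelope check (a general property of link-aggregation bonds, not confirmed for this exact setup): a single transfer usually hashes onto one member link of a bond, and converting the observed rates to bits makes that plausible:

```shell
# Convert MB/s to Mbit/s: 1 byte = 8 bits.
mb_to_mbit() { echo $(( $1 * 8 )); }

mb_to_mbit 90   # prints 720 -> roughly one saturated 1Gb member link
mb_to_mbit 40   # prints 320 -> well below even a single 1Gb link
```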

What could be causing that?
What version of Synology DSM are you currently running?
 
> What version of Synology DSM are you currently running?
Whatever the DS214se came with.
I've been thinking of just retiring it since it's so old, and I do have a little Supermicro server board that I can repurpose as a NAS. I'd just have to find out what I'd need in Linux to read the redundant array it's set up in.
 
> Thank you for your inputs and your patience. I will dig into this a bit more when I have some more time.
Basic question … what kind of Ethernet cable are you using?

Try connecting the NAS's 10GbE port with a different cable, especially one marked Cat5e or Cat6. If that doesn't work, also try plugging it into different 2.5GbE ports on your switch/router. If you can, try plugging into a different switch/router entirely as well.

If any cable/port combination gets you a 2.5GbE connection, your problem isn’t software.
 
> Whatever the DS214se came with.
> I've been thinking of just retiring it since it's so old, and I do have a little Supermicro server board that I can repurpose as a NAS. I'd just have to find out what I'd need in Linux to read the redundant array it's set up in.
I’m sorry, I meant that question for the OP and their DS1819+
 
> I'm sorry, I meant that question for the OP and their DS1819+
That is why I had them execute uname -r. It came back as 4.4.302+.
It's almost as old as my NAS, and it's an old kernel version: the driver it's using links at 2.5Gb but is limited to a 1Gb profile. Last year every OS (even Windows) had to change its kernel to accommodate 2.5Gb interfaces correctly. If they are able to update, it should fix it, but only if an update is available for this model.
 
> 7.2.2-72806
What gets me is that they give you SSH without the ability to change things with commands.
Does it allow editing the files?
See, the driver doesn't look like a big issue even though it loaded a 1Gb interface profile, because the OS will scale it. However, there are other places it could bottleneck, and you will have to edit the file, since you cannot use any net tool command to alter this.

2.5Gb? So if Samba is using a software IRQ budget that is too low, it would be the bottleneck. Linux and BSD use 300 as the default budget, good for about 1.5Gb of throughput. Turning this up to 600 should give you about 2.75Gb of IRQ throughput.
In an SSH session:
Code:
nano /etc/sysctl.conf
Then change:
Code:
net.core.netdev_budget=300
to
Code:
net.core.netdev_budget=600
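To apply the change without a reboot (a sketch, assuming DSM's Linux exposes the standard sysctl interface; run as root):

```shell
# The full sysctl key is net.core.netdev_budget.
sysctl net.core.netdev_budget          # show the current value (default 300)
sysctl -w net.core.netdev_budget=600   # apply the new budget immediately
sysctl -p /etc/sysctl.conf             # or reload the edited config file
```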
 
