slow network throughput, how to improve?


SoftDux-Rudi

Occasional Visitor
Hi all,

I would like some input on this one please.

Two CentOS 5.5 Xen servers, each with 1Gb NICs and connected to a gigabit switch,
transfer files to each other at about 30MB/s.

Both servers have the following setup:
CentOS 5.5 x64
XEN 3.0 (from xm info: xen_caps : xen-3.0-x86_64
xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64)
kernel 2.6.18-194.11.3.el5xen
1Gb NICs
7200rpm SATA HDDs

The hardware configuration can't change; I need to use these servers
as they are. They are both in production with a few Xen domU
virtual machines running on them.
I want to connect them both to a SAN over gigabit connectivity, and
would like to know how I can increase network performance a bit with
the hardware as it is.
The upstream datacentre only supplies a 100Mbps network connection, so
the internet side of it isn't much of a problem. If I do manage to
reach 100Mbps that will be my limit in any case.


root@zaxen02.securehosting.co.za:/vm/xen/template/centos-5-x64-cpanel
root@zaxen02.securehosting.co.za:/
root@zaxen02.securehosting.co.za's password:
centos-5-x64-cpanel.tar.gz                100% 1163MB  29.1MB/s   00:40


iperf indicates that the network throughput is about 930Mbit/s though:

root@zaxen01:[~]$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 196.34.x.x port 5001 connected with 196.34.x.x port 45453
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 1.09 GBytes 935 Mbits/sec

root@zaxen02:[~]$ iperf -c zaxen01
------------------------------------------------------------
Client connecting to zaxen01, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 196.34.x.x port 45453 connected with 196.34.x.x port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.09 GBytes 936 Mbits/sec


Is iperf really that accurate, or reliable in this instance, since the
packet size is so small that the data probably goes straight to memory
instead of to the HDD? But at the same time, changing the packet size
to 10MB, 100MB and 1000MB respectively doesn't seem to degrade
performance much either:

root@zaxen02:[~]$ iperf -w 10M -c zaxen01
------------------------------------------------------------
Client connecting to zaxen01, TCP port 5001
TCP window size: 256 KByte (WARNING: requested 10.0 MByte)
------------------------------------------------------------
[ 3] local 196.34.x.x port 36756 connected with 196.34.x.x port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.07 GBytes 921 Mbits/sec
root@zaxen02:[~]$ iperf -w 100M -c zaxen01
------------------------------------------------------------
Client connecting to zaxen01, TCP port 5001
TCP window size: 256 KByte (WARNING: requested 100 MByte)
------------------------------------------------------------
[ 3] local 196.34.x.x port 36757 connected with 196.34.x.x port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.08 GBytes 927 Mbits/sec
root@zaxen02:[~]$ iperf -w 1000M -c zaxen01
------------------------------------------------------------
Client connecting to zaxen01, TCP port 5001
TCP window size: 256 KByte (WARNING: requested 1000 MByte)
------------------------------------------------------------
[ 3] local 196.34.x.x port 36758 connected with 196.34.x.x port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.04 GBytes 895 Mbits/sec
 
I'm trying to measure network throughput, transferring data from one server to another.

The transfer speed was only 29.1MB/s on a GbE network.
 
Network bandwidth sounds about right. Just realize that when using the -w switch with iperf you are changing the receive window size, not the packet size. Usually packet size is a maximum of 1500 bytes unless you use jumbo frames.

The receive window is the amount of data that a computer can accept without acknowledging the sender (per Wikipedia). Most OSes support a maximum of at least 64k, but it can be configured to go higher. From what I remember, Windows and some Linux versions auto-tune this setting so it does not need to be specifically set. Just figured I would mention that in case you were not aware.

FYI, since iperf sets this specifically when it runs (generally a default of 8k or 16k, which is usually too small for max performance), most people set it to 64k to see what max network performance is.
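For reference, a rough way to check those settings on a CentOS 5 / 2.6.18 box is sketched below; the sysctl names are the standard Linux ones, and eth0, the 64k window and the 30-second run are just example values, so treat it as a sketch rather than a tuning recipe:

# Check whether window scaling and receive-buffer autotuning are enabled
sysctl net.ipv4.tcp_window_scaling    # 1 = TCP window scaling on
sysctl net.ipv4.tcp_moderate_rcvbuf   # 1 = receive-buffer autotuning on
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem   # min / default / max buffer sizes

# Re-run iperf with an explicit 64k window instead of the 8k/16k default
iperf -s -w 64k                  # on zaxen01
iperf -c zaxen01 -w 64k -t 30    # on zaxen02, 30-second run

# Jumbo frames are a separate experiment; only if the switch and both NICs support them
ifconfig eth0 mtu 9000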

As for your bottleneck... since the network looks to be good, next would be the disk subsystems. Just about any SATA drive nowadays can do better than 30 MB/sec for sequential reads/writes of large files, so I would guess they are fine as well. Last would be the program/protocol used to do the file transfers. There are lots of different ways to transfer files over a network connection, such as the SMB, NFS, FTP, iSCSI, or AFP protocols. What protocol and program are you using for your testing?
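If you want to sanity-check the disks themselves, a rough sequential test with dd is sketched below; the path under /vm/xen is just an example, and direct I/O is used so the page cache doesn't inflate the numbers:

# Sequential write test, ~2GB, bypassing the page cache
dd if=/dev/zero of=/vm/xen/ddtest.bin bs=1M count=2048 oflag=direct

# Sequential read test of the same file, then clean up
dd if=/vm/xen/ddtest.bin of=/dev/null bs=1M iflag=direct
rm /vm/xen/ddtest.bin

dd prints the throughput it achieved at the end of each run.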

00Roush
 
Just a quick thought... are you using a managed or an unmanaged switch? An unmanaged switch with some low-speed connections (printers and the like with 10/100 cards) can end up averaging the throughput and thus reducing the max throughput the switch will allow.
 

Nope, it's a managed Layer2 gigabit switch :)
 
Just a quick thought... are you using a managed or an unmanaged switch? An unmanaged switch with some low-speed connections (printers and the like with 10/100 cards) can end up averaging the throughput and thus reducing the max throughput the switch will allow.
100 Mbps clients will have no effect on transfers between Gigabit clients whether or not the switch is managed.
 
Based on my understanding of SCP, it is not really designed for high performance. Is it necessary for you to have the whole file transfer encrypted? If not, NFS or SMB might give better performance.
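One quick way to gauge how much of the 30MB/s is SSH overhead is sketched below: repeat the same copy with a cheaper cipher and compression off. This assumes the arcfour cipher is still enabled in the OpenSSH build shipped with CentOS 5, and the file/paths are just taken from the transcript above; arcfour is weak, so this is only a test, not something to leave in place:

# Same transfer as before, but with a lighter cipher and no compression
scp -c arcfour -o Compression=no \
    /vm/xen/template/centos-5-x64-cpanel.tar.gz root@zaxen01:/vm/xen/template/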

00Roush
 
Well, I used SCP since these machines don't advertise NFS / CIFS on the public network.

So I guess I could set up NFS / CIFS, but since I don't use those protocols, it might not help much.
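If you do decide to test it, a throwaway NFS export over the private LAN is quick to stand up on CentOS 5. The sketch below is only illustrative: the export path, hostnames and mount point are placeholders, and sync/no_root_squash are just reasonable test options:

# On zaxen01 (server): export a test directory to zaxen02 only
echo '/vm/xen/template zaxen02(rw,sync,no_root_squash)' >> /etc/exports
service portmap start
service nfs start
exportfs -ra

# On zaxen02 (client): mount it and time an unencrypted copy
mkdir -p /mnt/nfs-test
mount -t nfs zaxen01:/vm/xen/template /mnt/nfs-test
time cp /mnt/nfs-test/centos-5-x64-cpanel.tar.gz /tmp/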
 
Was your goal to improve the throughput between these two servers, or to a SAN you were going to set up? If you were going to be using a SAN, what protocol were you going to connect to it with?

00Roush
 

I want to connect these servers to a SAN/NAS.

The SAN/NAS hasn't been set up yet, so I have a lot of flexibility with the protocol being used, and right now I'm experimenting with setting up CLVM on 2x iSCSI SANs for higher reliability.


These servers can't take an iSCSI HBA card or 10GbE NICs (they're 1U servers with no space left inside), so I need to try and do everything over the LAN.
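For what it's worth, with no room for an HBA the usual route on CentOS 5 is the software iSCSI initiator over the existing gigabit NICs. A rough sketch is below; the portal IP and target IQN are placeholders, and CLVM itself (clvmd, fencing) is a separate setup on top of the resulting block device:

# Install and start the software iSCSI initiator
yum install iscsi-initiator-utils
service iscsi start

# Discover and log in to the SAN's target (portal IP and IQN are placeholders)
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node -T iqn.2010-09.za.example:storage.lun1 -p 192.168.1.50 --login

# The LUN appears as a new block device (e.g. /dev/sdb) that CLVM can run on
fdisk -l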
 
