What an Intel Dual Port Server card can do for you...

00Roush

Very Senior Member
(Attached screenshot: network2-1.png, dual iperf results)


So here I have two computers running iperf with the -d switch connecting to a single computer which has an Intel PRO/1000 PT Dual Port Server NIC in a teamed configuration. Roughly 215 MB/sec in and out of the network card in the server at the same time.
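
A minimal sketch of how one client's side of this bidirectional iperf run could be scripted. This is not the actual command set used here: the server address is a placeholder, and it assumes the classic iperf2 binary is on the PATH with the server already running "iperf -s". Run it from both clients at once to load both ports of the team.

Code:
import subprocess

SERVER = "192.168.1.10"   # placeholder address for the teamed-NIC server

def run_dual_test(server: str, seconds: int = 30) -> str:
    """Run a simultaneous send/receive (-d) iperf test and return the raw report."""
    cmd = [
        "iperf",
        "-c", server,    # connect to the iperf server
        "-d",            # bidirectional test, both directions at the same time
        "-t", str(seconds),
        "-f", "M",       # report throughput in MBytes/sec
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    print(run_dual_test(SERVER))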

This is just some of the testing I have been doing the last few days. Working on getting one of these cards on both ends of the connection and doing some LACP (Link Aggregation) testing with my new Netgear GS108T switch. Goal is to see higher than 113 MB/sec sustained file transfer speeds between my server and my main PC.

Just wanted to give you all a hint of what's to come. Let me know if there are any particular test scenarios you want me to run.

00Roush
 
Well I doubt I have enough disks here to push that much data at one time. The other thing is... do I test a single RAID array with multiple clients and see how it turns out, or do I set up a separate disk in the server for each client to use? A single RAID array being used by multiple clients is probably the most real world. I will probably test both though, just to see the kind of performance I could get in a best-case scenario.

I have done some tests with just a single client using a single disk in the server, and file transfer speeds seem to be maxed out at 110-113 MB/sec. This is testing with a single file of about 3 GB and a set of 9 files totaling about 17 GB. Preliminary tests with two clients accessing two different disks in the server show file transfer speeds of 80-100 MB/sec for each client. Not sure if I had everything set up right for that test, but not bad either way.
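
For reference, 110-113 MB/sec is right about where a single gigabit link tops out once Ethernet framing and TCP/IP headers are taken off. A quick back-of-the-envelope check (typical overhead figures, not measurements from this setup):

Code:
LINE_RATE_BYTES = 1_000_000_000 / 8         # gigabit Ethernet: 125,000,000 bytes/sec on the wire

MTU_PAYLOAD  = 1500                         # standard Ethernet MTU
ETH_OVERHEAD = 14 + 4 + 8 + 12              # header + FCS + preamble + inter-frame gap
IP_TCP_HDRS  = 20 + 20 + 12                 # IPv4 + TCP + common TCP options

frame_on_wire = MTU_PAYLOAD + ETH_OVERHEAD  # 1538 bytes sent per full-size frame
tcp_payload   = MTU_PAYLOAD - IP_TCP_HDRS   # 1448 bytes of actual file data per frame

ceiling = LINE_RATE_BYTES * tcp_payload / frame_on_wire
print(f"~{ceiling / 1e6:.0f} MB/sec decimal, ~{ceiling / 2**20:.0f} MiB/sec as Windows counts it")
# ~118 MB/sec decimal, ~112 MiB/sec binary; SMB and disk overhead take the last little bit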

In short to answer your question... no the drives can't keep up yet!

Still more testing to do. Once I get the server setup ironed out a bit more I will post up some more info about it along with some power consumption numbers.

00Roush
 
Definitely a drool-worthy graphic! Can your drives keep up?

Yep, definitely drool-worthy... that is part of the reason I posted it! :D I just had to post it up as I was a bit shocked when I first saw the numbers. But seriously, it is more of a theoretical test of max bandwidth than something real world. So I did some more real-world tests with actual file transfers. I set up a two-drive RAID 0 array (Windows software RAID, Hitachi 7K1000.C 1TB drives) in the server to test with. Using two clients I could basically sustain about 200 MB/sec of data flow for at least 15 GB of data. So with both clients writing to the server, each was writing at about 100 MB/sec. Same for reads. One client could also read at 100 MB/sec while the other wrote at 100 MB/sec. Actually, I think with these tests I am being limited by the disks in my clients, since the max read/write speed of the client drives is about 110 MB/sec. Next up I will be trying to get a three-drive RAID 5 array set up in the server and testing with more clients.
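
A simple sketch of how per-client rates like these can be measured: time a large copy to the server's share and divide bytes moved by seconds elapsed. The paths below are placeholders, not the actual test files.

Code:
import os
import shutil
import time

SRC = r"C:\test\bigfile.bin"           # hypothetical large local test file
DST = r"\\server\share\bigfile.bin"    # hypothetical share on the server's RAID 0 array

def timed_copy(src: str, dst: str) -> float:
    """Copy src to dst and return the average throughput in MB/sec."""
    size = os.path.getsize(src)
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    return size / elapsed / 1_000_000

if __name__ == "__main__":
    print(f"{timed_copy(SRC, DST):.1f} MB/sec")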

As promised here is the hardware I am running:

Client 1 (my main PC)
Intel Core i7 860 CPU
MSI P55-GD65 Motherboard (2 onboard Realtek gigabit NICs)
4GB (2x2GB) G. Skill DDR3 1600 RAM
2 320GB WD Caviar Blue hard drives ( C: and E: )
Windows 7 x64

Client 2 (wife's PC)
AMD Athlon 64 X2 5400+ CPU
Asus M3A78-T Motherboard (1 onboard Marvell gigabit NIC)
4GB (2x2GB) Corsair DDR2 800 RAM
1 320GB WD Caviar SE16 hard drive
Windows 7 x64

Server
AMD Phenom II X4 955 BE CPU
MSI 890GXM-GD65 Motherboard (1 onboard Realtek gigabit NIC)
4GB (2x2GB) Corsair DDR3 1600 RAM
1 160GB WD RAID Edition hard drive (OS)
3 1TB Hitachi 7K1000.C hard drives
1 Intel PRO/1000 PT Dual Port Server NIC
Windows Server 2008 R2 Standard

Netgear GS108T 8 port Gigabit Switch

Power consumption for the server depends on how I have it set up. With just the OS drive, Cool'n'Quiet enabled, and C1E enabled I was seeing about 45 watts at idle. Same settings but with the 3 1TB drives hooked up and the Intel NIC installed, I am seeing about 73 watts with the drives spun up, and about 55 watts with all the drives spun down. The biggest problem is that with Cool'n'Quiet enabled, performance takes a hit. So I have been considering just undervolting and underclocking the CPU so I don't have to use Cool'n'Quiet. Another option is to use K10STAT to fine-tune the Cool'n'Quiet settings.

One other thing I wanted to note is that the Intel network card is doing all of the Link Aggregation via Adaptive Load Balancing. I am not even using the IEEE 802.3ad LACP ability of my switch. This was the only way I could get higher than gigabit speeds for both send and receive.
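
A toy illustration of why static 802.3ad hashing alone would not get a single client past gigabit: every frame of a given source/destination pair hashes to the same physical port, so one conversation is stuck on one link. The XOR hash and MAC addresses below are made up and much simpler than what the Intel driver or the switch actually use.

Code:
def pick_port(src_mac: str, dst_mac: str, num_ports: int = 2) -> int:
    """Toy XOR-based link selection, in the spirit of common L2 hash policies."""
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % num_ports

server = "00:1b:21:aa:bb:01"                     # hypothetical MAC of the teamed NIC
for client in ("00:24:1d:11:22:33", "00:1f:c6:44:55:66"):
    print(client, "-> port", pick_port(client, server))

# Each client/server pair always lands on the same port, so a lone client caps at
# ~1 Gbit/sec; adaptive load balancing instead spreads different clients (and, with
# receive load balancing, inbound traffic) across both ports of the team.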

00Roush
 