I bought two MNPA19-XTR Mellanox ConnectX-2 10GbE PCIe x8 cards on eBay so I could do high-speed transfers between machines.
Initially I put one in my Windows Server 2008 R2 box and one in a PC and had all kinds of trouble. I then put both cards in my ESXi machine and ran up two Windows 10 VMs. I tuned the drivers (jumbo packets, buffers, and a few other things you should do for 10GbE), added 8GB RAM disks on both Win10 instances, and pushed a 6GB file across at 700+MB/s. Just what I was looking for. Of course, on ESXi the guests use VMware's own vmxnet3 driver, so no Mellanox drivers are involved there.
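For reference, the VM-side tuning was done through the adapter's advanced properties. A rough PowerShell sketch of that kind of change (the adapter name, registry keywords, and values below are only examples and vary by driver, so list what the driver actually exposes first):

# See which advanced properties the driver exposes
Get-NetAdapterAdvancedProperty -Name "Ethernet0"

# Enable jumbo packets and enlarge the buffers (example keywords/values)
Set-NetAdapterAdvancedProperty -Name "Ethernet0" -RegistryKeyword "*JumboPacket" -RegistryValue 9014
Set-NetAdapterAdvancedProperty -Name "Ethernet0" -RegistryKeyword "*ReceiveBuffers" -RegistryValue 2048
Set-NetAdapterAdvancedProperty -Name "Ethernet0" -RegistryKeyword "*TransmitBuffers" -RegistryValue 2048

Note that the vSwitch/port group MTU on the ESXi host also has to be raised to 9000, or jumbo frames won't actually pass.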
I then put one of the 10GbE cards into an x16 PCIe slot in my Dell T320 (Xeon) machine, installed various Mellanox drivers (very confusing), set up the same RAM disk, and haven't been able to get more than 90MB/s to or from the Win10 VM no matter what I do.
I have verified that the traffic is indeed going across the 10GbE link, and by using RAM disks on both sides I'm taking the disk subsystems out of the equation.
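For anyone who wants to run the same kind of sanity check, something along these lines shows whether jumbo frames pass end-to-end and what the raw link can do with no file copy or disks involved (iperf3 and the 192.168.10.x addresses are just examples, not necessarily my exact setup):

# Confirm jumbo frames actually make it across without fragmenting
# (8972 = 9000-byte MTU minus 28 bytes of IP/ICMP headers)
ping -f -l 8972 192.168.10.2

# Raw TCP throughput with no disks or SMB in the path
# (run "iperf3 -s" on the receiver first, then on the sender:)
iperf3 -c 192.168.10.2 -P 4 -t 30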
Tonight I ran up 2008 R2 as a VM, did the same setup, and got the same 700+MB/s speeds, again using the ESXi vmxnet3 driver.
Does anyone have any insight into how to fix this? There are so many Mellanox drivers that it's very confusing, but I've tried many of them without a fix.
I'm not expecting 700+MB/s once I re-introduce the disk subsystems, but until I fix the transfer problem I'm using the RAM disks to figure out what's going on. If I can get 200-300MB/s to disk I'd be happy; that's double or triple what I'm getting on GbE.
--- update ---
I'm in the process of trying to pass the Mellanox NIC directly through to the guest OS on ESXi. If I'm successful (waiting on some Windows updates to finish), I'll be even closer to the exact configuration of my stand-alone 2008 R2 server (which is the Dell T320), and I can see whether I get a similar slowdown. I'll then try to fix the problem in the VMs before taking the card back to the Dell.
--- update 2 ---
I was able to get the Mellanox card passed through to the VM running Windows Server 2008 R2 and installed version 4.80 of the driver. After tuning it I'm still transferring at 700+MB/s. So what is happening on my Dell T320 that, under a very similar configuration, I only get 90MB/s? This is getting frustrating.
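By "tuning" I mean the usual global TCP settings plus the card's advanced properties again. On 2008 R2 that's roughly this sort of thing from an elevated prompt (these are the commonly adjusted settings, not an exact record of every change I made):

# Check the current global TCP parameters
netsh int tcp show global

# The settings most often adjusted for 10GbE on 2008 R2
netsh int tcp set global autotuninglevel=normal
netsh int tcp set global rss=enabled
netsh int tcp set global chimney=disabled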
Thanks,
Roveer