Any issues getting Windows Server to run 10GbE at full bandwidth?



Has anyone had issues getting Windows Server 2012 to run 10GbE at full bandwidth? Physical links and switches show 10Gb, but data will not transfer at more than 1.2Gbps at best.

What are the specs of the server? Are these file transfers, and from what device to what device? What is the I/O speed of the disks if you run a drive benchmark? What type of network configuration are you passing this through? Switch model?

32GB and 64GB RAM, dual-proc Xeon X5650.

15K SAS; I don't have the I/O numbers for you, but the disks can sustain more than we're getting. USB can do more.
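If a rough ballpark for the disk side helps while you track down proper benchmark figures, even timing a large sequential read gives a usable number. This is only a minimal sketch, the file path is a placeholder, and a dedicated benchmark tool will give more trustworthy results:

    # seq_read.py - rough sequential-read throughput by timing a large existing file.
    # Use a file bigger than RAM, or one not read recently, so the OS cache
    # doesn't inflate the result. The path argument is a placeholder.
    import sys, time

    def seq_read(path, chunk=1 << 22):   # read in 4 MiB chunks
        total, start = 0, time.time()
        with open(path, "rb", buffering=0) as f:
            while True:
                buf = f.read(chunk)
                if not buf:
                    break
                total += len(buf)
        secs = time.time() - start
        print("%.0f MB in %.1f s -> %.0f MB/s (%.2f Gbps)"
              % (total / 1e6, secs, total / 1e6 / secs, total * 8 / 1e9 / secs))

    if __name__ == "__main__":
        seq_read(sys.argv[1])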

We're passing it through an HP converged switch. It is configured for Ethernet (with HP's help); I don't have the model at the moment.

We actually get the same throughput going NIC to NIC, direct attached. That's what's making us look at the OS or the actual server. We've used three different NICs, so it's looking like Windows or the HP servers. I've found through net searches that many have this issue, but I've never seen a resolution.

Our guys are continuing to try to track down the issue, but Neowin is a good resource and perhaps someone has seen this before. Everything says 10Gbps, but it just won't go. We were actually going to go RAM to RAM, but the throughput from the same sources over the 1Gb NICs blows away what we get from the 10GbE NICs by 50%.
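If you do go RAM to RAM, a bare TCP push between the two boxes takes the disks and SMB completely out of the picture. A minimal sketch, assuming Python on both ends; the port number and hostname argument are placeholders:

    # ram2ram.py - memory-to-memory TCP throughput check, no disks involved.
    # Run "python ram2ram.py server" on one box and
    # "python ram2ram.py client <server-host>" on the other.
    import socket, sys, time

    PORT = 5201                      # placeholder port
    CHUNK = b"\x00" * (1 << 20)      # 1 MiB buffer sent repeatedly from RAM
    DURATION = 10                    # seconds to transmit

    def server():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        total, start = 0, time.time()
        while True:
            data = conn.recv(1 << 20)
            if not data:
                break
            total += len(data)
        secs = time.time() - start
        print("received at %.2f Gbps" % (total * 8 / 1e9 / secs))

    def client(host):
        conn = socket.create_connection((host, PORT))
        total, start = 0, time.time()
        while time.time() - start < DURATION:
            conn.sendall(CHUNK)
            total += len(CHUNK)
        conn.close()
        print("sent at %.2f Gbps" % (total * 8 / 1e9 / (time.time() - start)))

    if __name__ == "__main__":
        server() if sys.argv[1] == "server" else client(sys.argv[2])

If that also tops out around 1Gbps, the disks and SMB are off the hook and it really is the NIC, driver, or switch path.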

What NICs are you using, and specifically what bus (PCIe version and lanes)? You could be hitting the limits of PCIe. We run Intel X520-T2s (PCIe 2.0 x8) in SuperMicro systems in our current servers with Windows Server 2012 R2 and consistently see good utilization. We use them for the VM live migration network, so it is literally RAM to RAM, and they can really push the 10G network (we usually see 80-90% utilization during transfers). They are connected through a pair of Dell PowerConnect 10G switches; I don't have the exact models for those offhand. In the attached image I got the screen capture at 6.3 Gbps, but it actually peaked at 7.9. We have seen it peak all the way up to 9 Gbps.

(attached image: 10G.png)
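On the PCIe point, a quick back-of-the-envelope shows how easily a card that trains at fewer lanes, or sits in an older slot, becomes the ceiling. The per-lane figures below are the commonly published effective rates, not measurements from this setup:

    # Rough throughput ceiling of common PCIe configurations vs. a 10GbE NIC.
    # Approximate effective per-lane rates (after encoding overhead), in Gbps:
    #   PCIe 1.x ~2, PCIe 2.0 ~4, PCIe 3.0 ~7.9
    PER_LANE_GBPS = {"1.x": 2.0, "2.0": 4.0, "3.0": 7.9}

    for gen, per_lane in PER_LANE_GBPS.items():
        for lanes in (1, 4, 8):
            total = per_lane * lanes
            verdict = "plenty for 10GbE" if total >= 10 else "would bottleneck 10GbE"
            print(f"PCIe {gen} x{lanes}: ~{total:.0f} Gbps - {verdict}")

A PCIe 2.0 x8 slot (~32 Gbps) is nowhere near the limit for a single 10GbE port, but a card negotiating down to x1 (~4 Gbps) would be, which is why it's worth confirming the negotiated link width rather than just the slot's physical size.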

If we could get anywhere near that we'd be happy. We're definitely not pushing any limits at ~700Mbps. We'll eventually figure it out. We're going 10Gb for Hyper-V as well.

Solarflare SFN5162F Dual Port 10GbE SFP+ (x8)
HP StoreFabric CN1100R (not sure, but probably x8)

What OS are you running?

Windows Server 2012 R2 Datacenter. This particular server is running with the Full GUI.

What kind of server are you putting these cards in? You gave us the CPU specs and some RAM info, but what system board does it use, or what model is it if it's an OEM system? Though it shouldn't matter, are you using copper or fiber? Do you have the specs for the SFP modules you are using?

A ProLiant DL380 Gen9; I don't have the other one on hand. We are using copper; the cables come from HP with the SFPs fixed on them. Cables/SFPs are on the list to check, and I will add checking that the PCIe riser can support x8 (not all x16 slots can downshift to x8; some just do x16 or x1).

We've got it down to the HP NICs and their driver/firmware. Thanks for the input, it helped. The Solarflare is looking good.

So was it a driver issue in the end?

I'm curious because we run 10Gbit at work over iSCSI. It's a similar setup to yours with SFP+ copper. I don't believe I've seen numbers that low, but we don't get anywhere near 10Gbit either (not expecting to).

We haven't resolved the HP adapter issue. The Solarflare is getting 7Gbps to a VTL over iSCSI. We're still having issues with the HP NICs: with iperf we can get up to 10Gbps, but that's not real data. If you send a large file, they quit after a second or two, even though iperf sends at full speed. Windows SMB is slow, but we haven't even looked at that yet; that may be Windows settings and/or drive issues, since it affects both brands of NICs. With the application layer removed, the Solarflare is performing as expected.

I'll update this thread when we get the HP issue resolved, which eventually we will, or HP will. The switch group is saying call the server group.

Converged switches and adapters are new to us, and apparently to HP as well.
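One way to separate a synthetic iperf stream from a real-data transfer, without dragging SMB into it, is to push an actual file over a bare TCP socket and report throughput every second; the point where it stalls then shows up immediately. This is only a sketch, the host, port, and file path are placeholders, and the receiving side can be the same simple listener as in the RAM-to-RAM sketch earlier in the thread:

    # file_send.py - send a real file over plain TCP and print per-second throughput.
    import socket, sys, time

    def send_file(host, path, port=5201, chunk=1 << 20):
        sock = socket.create_connection((host, port))
        total, window, last = 0, 0, time.time()
        with open(path, "rb") as f:
            while True:
                buf = f.read(chunk)
                if not buf:
                    break
                sock.sendall(buf)
                total += len(buf)
                window += len(buf)
                now = time.time()
                if now - last >= 1.0:            # report once per second
                    print("%.2f Gbps" % (window * 8 / 1e9 / (now - last)))
                    window, last = 0, now
        sock.close()
        print("done, %.1f GB sent" % (total / 1e9))

    if __name__ == "__main__":
        send_file(sys.argv[1], sys.argv[2])

If this holds line rate while SMB copies don't, the problem sits above the NIC; if it stalls the same way, it points back at the adapter driver/firmware.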

I think you may well need to consider changing the NIC brand. Does Intel make a NIC that will fit in your servers? To be honest, I'd still prefer Intel-branded NICs in all my hardware, including desktops, simply for reliability. (While the majority of my Intel NIC experience is with wired Ethernet, Intel put a stomping on every other brand there, even 3Com, and that was while 3Com was still not only independent but in its heyday.)

PG, Intel does make 10Gb NICs and CNAs. A little pricier, but worth it for compatibility and stability. I like these Solarstorms as well. With converged networks being new to us, there's just a lot of configuration.

After getting the switches configured properly with HP's help, what was left was the HP CNAs. It turns out all options are configurable from the Windows driver except one: the FCoE or iSCSI personality. You have to download Broadcom's Advanced Configuration Utility to change that one setting. If you're not using fiber there is no autodetect; you can have basic Ethernet connectivity, but when you start to send large amounts of data, things will not go well. Changing this setting got these cards in order. The only remaining issue is that, on the same and better hardware, the HP NICs deliver half the throughput of the Solarstorms, about 3.4Gbps vs. 7.4Gbps. It could be that they're optimized for FCoE.

For anyone moving to CNAs: if you just want 10GbE, don't use a converged NIC; get a dedicated 10GbE adapter instead. It'll cost about 50% less, too.

Thank you to everyone for their input, it definitely helped us out.

It turns out we had to change the card personality from FCoE to iSCSI using Broadcom's configuration tool. The cards deliver 10Gb to our SAN using FC, but as iSCSI, not so great.
