ESXi - HP MicroServer N40L Performance



I always use thick provisioning for running production servers and thin provisioning for testing things out on non-production servers. In fact, I'm going to go and see what I can do with the e1000 adapter. AFAIK it showed up in the list for my Windows VM but not for my Arch Linux VM; not sure if that's because I'm using open-vm-tools or something else :s
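For what it's worth, if you ever want to create the disks from the ESXi CLI rather than the vSphere Client, vmkfstools handles both types. A minimal sketch, assuming a datastore called datastore1 and made-up VM names and sizes:

[CODE]
# Thick (eager-zeroed) disk for a production VM: all blocks allocated and zeroed up front
vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/datastore1/prod-vm/prod-vm.vmdk

# Thin disk for a test VM: space is only allocated as the guest writes
vmkfstools -c 40G -d thin /vmfs/volumes/datastore1/test-vm/test-vm.vmdk
[/CODE]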


Just wondering, but have you tried a Linux VM and tested its Samba performance? I've been trying to track down an issue with three separate physical Windows 2008 R2 servers where their upload speed to other machines is dismal, while other operating systems give great speeds. The three servers are in an OVH data centre in France; I only bring this up because when I tested Linux, everything was A-OK. I'm sure if there were a general problem with W2008R2 it would be well publicised by now, but I am starting to wonder...

Might be worth installing another OS just to check and see what happens.
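If you do try a Linux VM first, smbclient gives you a quick Samba throughput number without installing anything else. A minimal sketch, assuming a share at //10.0.1.5/share; the share name, user, and test file are placeholders:

[CODE]
# Create a ~100MB test file, then push it over SMB.
# smbclient prints an average kb/s figure for the transfer.
dd if=/dev/zero of=testfile bs=1M count=100
smbclient //10.0.1.5/share -U someuser -c 'put testfile'
[/CODE]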


Here is a push from the VM itself to my workstation. This is from the 2k8 R2 Storage Essentials VM; it seems to be pushing a file just fine.

[attached screenshot]


iperf doesn't seem to like my smart switch; I get this output:


[CODE]
C:\Users\cpressland\Downloads>iperf -c 10.0.1.5
------------------------------------------------------------
Client connecting to 10.0.1.5, TCP port 5001
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[128] local 10.0.1.11 port 52443 connected with 10.0.1.5 port 5001
write failed: Software caused connection abort
read on server close failed: Software caused connection abort
[ ID] Interval Transfer Bandwidth
[128] 0.0- 0.0 sec 24.0 KBytes 16.4 Mbits/sec
C:\Users\cpressland\Downloads>

[/CODE]

[quote]
Might be worth installing another OS just to check and see what happens.
[/quote]

Going to be setting up an Ubuntu 12.04 box to test later.

[quote]
Here is a push from the VM itself to my workstation. This is from the 2k8 R2 Storage Essentials VM; it seems to be pushing a file just fine.
[/quote]

Even after zero-filling the drive, I'm not getting that kind of performance.


Well, if you can't even get iperf to run -- seems like something is messed up for sure!


[quote]
Well, if you can't even get iperf to run -- seems like something is messed up for sure!
[/quote]

These are all clean installs.


[quote]
Maybe worth running a filesystem zeroing utility on each VM then, just to expand them out to full size.
[/quote]

That doesn't explain my issues, though.

Can you recommend any free-space zero-filling tools?

Sure: use the vSphere Client. Browse the datastore, locate the .vmdk file, right-click, and choose Inflate.
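If you'd rather do it from the ESXi shell than the vSphere Client, vmkfstools can inflate a thin disk too. A minimal sketch, assuming a made-up datastore path; power the VM off first:

[CODE]
# Inflate a thin-provisioned disk to its full, zeroed size.
# Point it at the descriptor .vmdk, not the -flat.vmdk.
vmkfstools -j /vmfs/volumes/datastore1/myvm/myvm.vmdk
[/CODE]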


Okay: the connection issue was being caused by a service on 'Galactica' already using port 5001.
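For anyone else who hits this: you can also just move iperf off the conflicting port instead of hunting down the service. A minimal sketch; port 5002 is an arbitrary choice:

[CODE]
REM On the server end:
iperf -s -p 5002

REM On the client end:
iperf -c 10.0.1.5 -p 5002 -w 256k
[/CODE]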

I set up the same thing on my Mac Mini, ran the same test, and got the following:


[CODE]
C:\Users\cpressland\Downloads>iperf -c 10.0.1.61 -w 256k
------------------------------------------------------------
Client connecting to 10.0.1.61, TCP port 5001
TCP window size: 256 KByte
------------------------------------------------------------
[128] local 10.0.1.11 port 54281 connected with 10.0.1.61 port 5001
[ ID] Interval Transfer Bandwidth
[128] 0.0-10.0 sec 423 MBytes 355 Mbits/sec

[/CODE]


Well, 355/8 gives you at most ~44 MB/s.

Half of what I am seeing. So that sure ain't helping you out. Why such bad network performance?

You mentioned a smart switch?? Maybe it's not so smart ;) hehehe

Are you still running 9000 MTU on your virtual switch? Do you have promiscuous mode allowed or something? Traffic shaping enabled?

You don't have your VMs in your management port group, do you?
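If you have shell access to the host, you can sanity-check those vSwitch settings from the command line. A minimal sketch; vSwitch0 is an assumption, so check the list output for your actual names:

[CODE]
# List all vSwitches with their uplinks, port groups and MTU
esxcfg-vswitch -l

# If the MTU was bumped to 9000, this puts vSwitch0 back to the default
esxcfg-vswitch -m 1500 vSwitch0
[/CODE]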

[attached screenshot]


[quote]
You mentioned a smart switch?? Maybe it's not so smart ;) hehehe
[/quote]

It's a Netgear GS108Tv2 - as per your recommendation in October ;)



Can you change it out to see if that changes your performance? I am running a GS108Tv1. Do you have something tweaked in it that could be causing issues? I have jumbo frame support off; are you running IGMP snooping, etc.? I used to run IGMP snooping and wish I could turn it back on, but my freaking ecurrent bridge has a freaking multicast MAC on it, and when I turn snooping on it gets blocked ;) I haven't bothered looking into how to let it through while still blocking the other multicast traffic -- might look into that tonight ;)

192.168.1.220 7f-bf-a9-aa-29-5b dynamic

Why they did that I have no freaking idea - and there is no way to fix it locally; I'd have to send it to them, which means it'll be offline for weeks and weeks - it has to go back to the UK. I might just buy the new one that is coming out, and then send this one to them to fix.

But for a test, it sure can't hurt to try a different switch.


I can't see how the switch would be the cause, seeing as File Server > Mac Mini gets 70-90 MB/s.

I'll grab a dumb switch in a few days and try it out!

Any other ideas in the meantime?


Hmm, I think with newer NICs you don't have to have it like that, huh? Don't some of them take care of that for you?


From my FileServer:


[CODE]
C:\Users\CPressland\Downloads>iperf -c 10.0.1.61 -w 256k
------------------------------------------------------------
Client connecting to 10.0.1.61, TCP port 5001
TCP window size: 256 KByte
------------------------------------------------------------
[164] local 10.0.1.5 port 62577 connected with 10.0.1.61 port 5001
[ ID] Interval Transfer Bandwidth
[164] 0.0-10.0 sec 1.08 GBytes 930 Mbits/sec
[/CODE]

I think it's safe to say it's only happening to ESXi.

[quote]
If you don't have a switch with you, try a crossover cable?
[/quote]

Don't have one.

[quote]
Hmm, I think with newer NICs you don't have to have it like that, huh? Don't some of them take care of that for you?
[/quote]

Some of what?


[quote]
Some of what?
[/quote]

The NICs... some of the newer NICs don't need crossover cables...


Yeah, the newer Cisco switches allow you to use either crossover or straight-through cables, and some of the newer, more expensive NICs support it as well. As you've got Intel server NICs, try it with a straight-through cable and see if it acts as a crossover.


Wow, that's pretty amazing :o I swapped out all the e1000s for VMXNET3 (took a lot of effort and editing of files with vi, urgh) and set all MTUs to 9000...
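For anyone else doing the swap by hand: with the VM powered off, the line to change in each VM's .vmx file looks roughly like this (ethernet0 being whichever adapter you're converting; a sketch, not the full procedure):

[CODE]
ethernet0.virtualDev = "vmxnet3"
[/CODE]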

Made a file from /dev/urandom on the Linux VM and sent it to the host via scp:

[CODE]
bigfile                                      100%   78MB  39.1MB/s   00:02
[/CODE]

Holy crap, what an improvement! I'm impressed, nice advice :D
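In case anyone wants to reproduce that test, it was just a random file pushed over SSH. A minimal sketch; the destination host and path are placeholders:

[CODE]
# Generate ~78MB of random data, then copy it to another machine.
# scp prints the average transfer rate when it finishes.
dd if=/dev/urandom of=bigfile bs=1M count=78
scp bigfile user@10.0.1.11:/tmp/
[/CODE]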


Yeah, I would say that with 930 Mbps it's not a switch issue ;)

Unless it's an issue with that NIC in the ESXi box.

But yeah, the switch was a long shot at best. But if you're seeing that kind of throughput between other devices, then I would say it's good. You could move ports, maybe? Change the cables to the ESXi box?

So WTF would cause such bad performance? Did you make any tweaks to that virtual NIC on the VM?

As you saw, I am getting great performance from my VMs -- and other than the NIC itself, I don't see what could be different. Other than just overall bad performance of the drives or VMFS? What if you do iperf between VMs -- what do you see then?
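If you haven't run it VM-to-VM before, it's just one guest as server and one as client. A minimal sketch; the IP is a placeholder for whatever the server VM has:

[CODE]
# On the first VM (server):
iperf -s

# On the second VM (client), pointing at the first VM's address:
iperf -c 192.168.1.10 -w 256k -t 10
[/CODE]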

[attached screenshot]

Question: are you using the built-in N40L NIC for your LAN connection or the add-on card? I am using the built-in NIC on the LAN side of my ESXi host.

[attached screenshot]

If you're using the Intel one, try changing to the built-in one and see if that fixes your issue?


[quote]
But yeah, the switch was a long shot at best. But if you're seeing that kind of throughput between other devices, then I would say it's good. You could move ports, maybe? Change the cables to the ESXi box?
[/quote]

Just swapped out the Cat 5e cables between the N36L and the switch, and between the N40L and the switch, for a couple of brand new Cat 6a cables. All 100% SuperPowered now.

[quote]
So WTF would cause such bad performance? Did you make any tweaks to that virtual NIC on the VM?
[/quote]

No tweaks other than the MTU change, which we've undone.

[quote]
As you saw, I am getting great performance from my VMs -- and other than the NIC itself, I don't see what could be different. Other than just overall bad performance of the drives or VMFS? What if you do iperf between VMs -- what do you see then?
[/quote]

Between VMs, 300 Mbits/sec (that really, really does point to ESXi).

[quote]
Question: are you using the built-in N40L NIC for your LAN connection or the add-on card? I am using the built-in NIC on the LAN side of my ESXi host.
[/quote]

I'm using the built-in as WAN and the expansion card as LAN.

[quote]
If you're using the Intel one, try changing to the built-in one and see if that fixes your issue?
[/quote]

All VMs have been migrated over to VMXNET3 NICs - all except the pfSense one, that is.


So you only see 300 Mbps between VMs testing with iperf?? Something is wrong there for sure -- see my 1.55 Gbps above.

The physical NIC shouldn't even come into play when testing iperf between VMs.. so WTF else could be the problem?? Hmmmm.

I would look into why you're only seeing 300 Mbits, that's for sure!! That is horrific!! Is there some other traffic going on at the same time?? For a test I would shut down all the other VMs -- just leave the two you're testing with, and make sure they are not doing anything. 300 Mbits over the virtual switch -- WTF??

No, I didn't mean VMXNET3 when I suggested trying the other one.. I meant which physical NIC is tied to each virtual switch. I have the built-in one connected to my LAN vSwitch, and the add-on NIC is connected to my WAN vSwitch. But again, that shouldn't have anything to do with VMs talking to each other over the vSwitch -- it shouldn't. You could always remove the NIC from the vSwitch as a test, as sketched below.
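A sketch of that uplink removal from the ESXi shell, if you'd rather script it than click through the client; the vmnic and vSwitch names are assumptions, so check esxcfg-vswitch -l first:

[CODE]
# Detach the physical uplink from the vSwitch (VM-to-VM traffic keeps flowing)
esxcfg-vswitch -U vmnic0 vSwitch0

# Re-attach it after the test
esxcfg-vswitch -L vmnic0 vSwitch0
[/CODE]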


[quote]
300 Mbits over the virtual switch -- WTF??
[/quote]

Glad to see you're just as confused as I am. Makes a nice change.

Shame esxtop is such a useless utility! I'll shut down some VMs tonight and test again!

