
ESXi with HP smart array p410 problems

esxi hp smart array p410 slow rubbish cack

23 replies to this topic

#16 OP n_K

    Neowinian Senior

  • Tech Issues Solved: 3
  • Joined: 19-March 06
  • Location: here.
  • OS: FreeDOS
  • Phone: Nokia 3315

Posted 16 August 2012 - 13:44

It can't be just 6 MB/s through the vSphere Client browser. I uploaded a 10 GB or 20 GB VHD to my Dell ESXi box from my laptop, with only a green (power-saving) switch in the middle and both ends at 1 Gbps, and it transferred pretty quickly, in 5 minutes or so.
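For context, the arithmetic behind that claim: a 10-20 GB image in roughly 5 minutes works out to well above 6 MB/s. A minimal shell sketch, using the figures quoted above:

```shell
# Throughput implied by the transfer described above:
# size in GiB, elapsed time in seconds -> MB/s
gb=10
secs=300
awk -v gb="$gb" -v s="$secs" 'BEGIN { printf "%.1f MB/s\n", gb * 1024 / s }'
# prints "34.1 MB/s" (the 20 GB case would be double that)
```

Either way, an order of magnitude above the 6 MB/s being seen on the HP.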


#17 -ANiMaL-

    the UnLeashed Beast...

  • Joined: 06-July 03
  • Location: Saudi Arabia
  • OS: Windows 7 Ultimate x64
  • Phone: HTC HD7

Posted 20 August 2012 - 00:01

I dont know whether this will help or not. I faced similar problem with the Hyper-V on ProLiant DL380 G5 server. The problem being network speed droping and recovering, causing overall file transfer to be slow. Also inconsistent pings with delays going above 10ms, while it should be always less then 1ms. This used to cause allot of error at hosted VM level (Win2003). After reading allot found it had something to do with Time Stamp Counter (TSC) drift on certain multi-core processors. As we were using Windows we have to use a switch /USEPMTIMER in BOOT.INI to resolve the problem. I understand you have loaded ESXi may be they have covered this problem also.
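For reference, the switch described here is the one from Microsoft's KB article on QueryPerformanceCounter drift (KB 895980); it is appended to the OS entry in BOOT.INI. The ARC path below is illustrative, not taken from the thread:

```ini
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /usepmtimer
```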

#18 OP n_K

Posted 23 August 2012 - 11:32

Hmm, I've just checked and I think I can see the difference that's causing the problems... Under 'Storage Adapters', the Dell shows two devices, an enclosure and a disk, both listed as 'Parallel SCSI', but the HP shows just one, listed as 'Block Device'. I've searched Google and VMware and can't find anything about it, only one other person asking the same question, which never got answered. Anyway, I think that's why the P410 is so poor on ESXi: HP have been lazy and written the driver as if the card were a dumb device.
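For anyone trying to reproduce this comparison, the same adapter and device information can be pulled from the ESXi host shell. A sketch assuming SSH/Tech Support Mode is enabled, using the ESXi 5.x-era esxcfg tools:

```shell
# List storage adapters with the driver each one uses
# (e.g. mptspi on the Dell vs hpsa on the HP):
esxcfg-scsidevs -a

# List attached devices with their reported type:
esxcfg-scsidevs -l

# Check whether the HP Smart Array driver module is loaded:
esxcfg-module -l | grep -i hpsa
```

Comparing the driver names on the two boxes would confirm whether the difference really is in how each controller is presented to ESXi.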

Well, that's sealed the deal for me then. This is the first HP server I've bought and it'll definitely be the last; back to Dell next time.

#19 +BudMan

    Neowinian Senior

  • Tech Issues Solved: 89
  • Joined: 04-July 02
  • Location: Schaumburg, IL
  • OS: Win7, Vista, 2k3, 2k8, XP, Linux, FreeBSD, OSX, etc. etc.

Posted 23 August 2012 - 12:16

Your theory doesn't hold water.

Since you're using the same controller for the disks that are inside your VMs, right? Well, what speeds do you get there?

I see great speeds to mine, and it's listed as a block device. If it were the controller, wouldn't you see poor speeds all around, even to and from the VMs?

#20 OP n_K

Posted 23 August 2012 - 16:26

I haven't tried inside a VM yet; that's what I forgot to do, damn it.
I'll wait for the RAID 5 parts to arrive and test on a RAID 5 datastore.

#21 +BudMan

Posted 23 August 2012 - 16:55

There are hundreds of posts about poor speeds to the datastore. Some say it's because the BusyBox part of ESXi that handles scp, when you use that method, is very limited; some say VMware throttled it on purpose.

I have yet to see any conclusive fix. I see poor speeds to the datastore as well, be it scp, sftp, or the datastore browser of the vSphere Client connected to the host. I grabbed vCenter to test whether that is better, but have not had time to check yet.

In my setup it does not really matter: I don't move VMs around between hosts, and I don't download or even upload that much to the datastore. I can live with the 5-10 MB/s I see to and from it.

I have also read that it's throttled because your VMs are using that same disk, and the I/O to and from the datastore is limited to reserve I/O for your VMs.

Here is a question for you: what speeds do you see compared to your VMs? I agree that if you're seeing 20-30 MB/s to box 1 running the same version of ESXi, you'd expect the same on box 2. But then again, they are different controllers, are they not? So maybe ESXi likes one more than the other.

To be honest, you're comparing apples to oranges, are you not? Now if box 1 had exactly the same hardware as box 2 and one was faster than the other, we would have something to look into.

What speeds do you see to and from your VMs? I see 50 to 90 MB/s, which I am ecstatic about considering the hardware I am using and its cost. I have not cared much that the datastore is a tenth of that speed, because I don't use it much.

Can you move your datastore to the built-in controller? Do you have a different controller to try? Let's see the speed tests from inside your VMs; maybe your controller is bad, and you're only seeing the maximum of what it can do.
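One way to separate the controller from the management path is to time a sequential write on the host itself, which bypasses scp and the datastore browser entirely. A sketch assuming the default datastore name 'datastore1' (adjust the path for your setup); note that without O_DIRECT this still goes through the write cache, so treat it as a rough upper bound:

```shell
# Run from the ESXi host (BusyBox) shell, not from inside a VM.
# Writes 1 GiB straight to the datastore, then removes the test file.
time dd if=/dev/zero of=/vmfs/volumes/datastore1/ddtest.bin bs=1M count=1024
rm /vmfs/volumes/datastore1/ddtest.bin
```

If this is fast while scp and the datastore browser are slow, the controller is fine and the bottleneck is the management interface.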

#22 OP n_K

Posted 23 August 2012 - 23:27

The annoying thing about this Dell server is that my PERC 6/i won't fit in it; I'd have been more than happy to just use that and transfer the disks over, but it's way too tall :(. Anyway, I might be able to get a RAID 5 setup going on the P410 card, move it into the Dell server and see what speeds I get with that.
But at the end of the day, HP push ESXi as being great on their servers and it really isn't; HP's systems with ESXi have a lot of problems, and in terms of network and storage speed it could hardly be worse.

Again, I'll try some things and post results when the RAID capacitor and memory arrive.

#23 +BudMan

Posted 24 August 2012 - 02:27

"HP push ESXi saying it's great on their servers but it really doesn't"

Depends on what you're talking about. I would agree that on my HP MicroServer ESXi rocks! I am nothing but delighted with the performance from my little box that cost me less than $300; shoot, $350 with 8 GB of RAM and a second NIC.

How do your VMs perform? It seems to me you're dwelling on one small aspect of running VMs. If they perform well then, depending on what exactly you're trying to do, the speed of moving files on and off the datastore might not matter at all. It doesn't come into play for my uses, to be honest.

#24 OP n_K

Posted 04 September 2012 - 03:17

So the part finally arrived and I've got a capacitor-backed 1 GB cache on the array. I set up the RAID controller and waited for the 'RAID optimisation' to complete...
Transferring via SSH has got to be an all-time record low: 450 KB/s maximum. It gave an estimated 9 hours to transfer one of the HD images.
So I tried the datastore browser instead; much faster, it transferred in about 20 minutes.

Actual ESXi access to the disks seems pretty poor in every way, which is slightly disappointing.
I did a sysbench fileio test on the Dell but didn't write down the results :/. I've just run one on the HP, but something tells me it's completely inflated and not realistic.
# sysbench --init-rng=on --test=fileio --num-threads=16 --file-num=96 --file-block-size=4K --file-total-size=1200M --file-test-mode=rndrd --file-fsync-freq=0 --file-fsync-end=off run --max-requests=90000
sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 16
Initializing random number generator from timer.


Extra file open flags: 0
96 files, 12.5Mb each
1.1719Gb total file size
Block size 4Kb
Number of random requests for random IO: 90000
Read/Write ratio for combined random IO test: 1.50
Using synchronous I/O mode
Doing random read test
Threads started!
Done.

Operations performed: 91566 Read, 0 Write, 0 Other = 91566 Total
Read 357.68Mb Written 0b Total transferred 357.68Mb (5.4823Gb/sec)
1437141.09 Requests/sec executed

Test execution summary:
total time: 0.0637s
total number of events: 91566
total time taken by event execution: 0.8175
per-request statistics:
min: 0.00ms
avg: 0.01ms
max: 26.83ms
approx. 95 percentile: 0.00ms

Threads fairness:
events (avg/stddev): 5722.8750/1450.64
execution time (avg/stddev): 0.0511/0.01

(I think it's claiming it completed the test faster than it would have even on SSDs... so I'm ignoring it)
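The inflated numbers are consistent with caching: 96 files totalling about 1.2 GB fit comfortably in the guest's page cache (and near the controller's 1 GB cache), so a random-read test at this size mostly measures RAM, hence 1.4 million requests/sec. A hedged rerun that forces reads to actually reach the array, using sysbench 0.4's O_DIRECT flag and a working set larger than guest RAM (the 8G figure is an assumption; pick something above your VM's memory):

```shell
# Prepare test files larger than RAM so reads cannot be served from cache:
sysbench --test=fileio --file-total-size=8G prepare

# Random reads with O_DIRECT, bypassing the page cache:
sysbench --test=fileio --num-threads=16 --file-total-size=8G \
         --file-test-mode=rndrd --file-extra-flags=direct \
         --max-requests=90000 run

# Clean up the test files afterwards:
sysbench --test=fileio --file-total-size=8G cleanup
```

Results from a run like this should be comparable between the Dell and the HP, which the cached numbers above are not.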