So the part finally arrived and I've got a capacitor-backed 1GB array. I set up the RAID controller and waited for the 'RAID optimisation' to complete...
Transferring via SSH has got to be an all-new record low: 450KB/s maximum. It gave an estimated 9 hours to transfer one of the HD images.
So I tried the datastore browser instead, which was MUCH faster: the transfer completed in about 20 minutes.
Actual disk access from within ESXi seems pretty **** in every way, which is slightly disappointing.
I did a sysbench fileio test on the Dell but didn't write down the results. I just ran one on the HP, but something tells me it's completely inflated and not realistic.
# sysbench --init-rng=on --test=fileio --num-threads=16 --file-num=96 --file-block-size=4K --file-total-size=1200M --file-test-mode=rndrd --file-fsync-freq=0 --file-fsync-end=off run --max-requests=90000
sysbench 0.4.12: multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 16
Initializing random number generator from timer.
Extra file open flags: 0
96 files, 12.5Mb each
1.1719Gb total file size
Block size 4Kb
Number of random requests for random IO: 90000
Read/Write ratio for combined random IO test: 1.50
Using synchronous I/O mode
Doing random read test
Operations performed: 91566 Read, 0 Write, 0 Other = 91566 Total
Read 357.68Mb Written 0b Total transferred 357.68Mb (5.4823Gb/sec)
1437141.09 Requests/sec executed
Test execution summary:
    total time:                          0.0637s
    total number of events:              91566
    total time taken by event execution: 0.8175
    approx. 95 percentile:               0.00ms

Threads fairness:
    events (avg/stddev):           5722.8750/1450.64
    execution time (avg/stddev):   0.0511/0.01
(I think it's saying it somehow finished the test faster than SSDs could manage... With only 1.2GB of test files and fsync disabled, the reads were almost certainly served from the page cache rather than the disks, so I'm ignoring it.)
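For what it's worth, a re-run along these lines should take the page cache out of the picture. This is a sketch, not a tested recipe: `--file-extra-flags=direct` (supported by sysbench 0.4.x) opens the test files with O_DIRECT, and the 4G total size is an assumption you'd want to bump to roughly twice the host's RAM.

```shell
#!/bin/sh
# Sketch of a cache-bypassing rerun of the random-read test above.
# Assumptions: sysbench 0.4.x; 4G working set (size to ~2x your RAM).
FILES=96
TOTAL_SIZE=4G

if command -v sysbench >/dev/null 2>&1; then
    # Create the test files first (a separate prepare step).
    sysbench --test=fileio --file-num=$FILES \
        --file-total-size=$TOTAL_SIZE prepare

    # O_DIRECT bypasses the page cache, so the numbers should
    # reflect the array rather than RAM.
    sysbench --init-rng=on --test=fileio --num-threads=16 \
        --file-num=$FILES --file-block-size=4K \
        --file-total-size=$TOTAL_SIZE --file-test-mode=rndrd \
        --file-extra-flags=direct --max-requests=90000 run

    # Remove the test files afterwards.
    sysbench --test=fileio --file-num=$FILES \
        --file-total-size=$TOTAL_SIZE cleanup
else
    echo "sysbench not installed; skipping run"
fi
```

If the throughput still comes out in the Gb/sec range with O_DIRECT, something else is off; a few hundred MB/sec would be far more believable for this array.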