HP DL160 G6 Review



A bit of background;

I've been using/playing around with servers for about 13 years. I started off with a custom dedicated Red Hat box with an AMD chip that really only hosted a web server on a single IP, and I've gone on from there basically sticking to Dells (although I've toyed with some IBMs too). Anyway, for the past 4 years I've been using a 2U Dell PowerEdge 2950 with a PERC 6 and DRAC 5, dual low-voltage quad-core processors at 2GHz per core, 8GB of RAM and 3x 2.5" 73GB SAS drives in RAID-5.

It was getting a little too noisy for me at times due to the heat, and it has a 750W power supply while the server itself uses less than 200W, so it's wasting power. I thought it was time to upgrade to something smaller and quieter, so I looked around for Dells as usual and saw a few that I was interested in, but what I really wanted was DDR3, because the DDR2 the 2950 uses is MUCH more expensive than DDR3 and slower. Then I discovered the HP DL160 G6 server, and it seemed great at first glance...

(By the way, I took the pictures for this review after the server was put into its full permanent position and use, so I can't get pictures of the front or back of the server due to where it is, but those pictures are readily available on Google; it's pictures of the internals you can't find! :p I also apologise for the pretty poor quality of the pictures, I took them with an iPod touch as I don't have an actual dedicated camera to use.) So here it is, my first review!

The Specs;

So the model I was looking at was 1U with 4 drive slots, 2 of them filled with 160GB WD enterprise SATA drives, a single Intel Xeon E5620 processor and 8GB of RAM, and it came with an HP Smart Array P410.

img0063tc.jpg

So I did some research on the web. There are a number of new features this server has compared to the Dell, such as Intel TXT (Trusted Execution Technology), QPI, etc., so the upgrades on the CPU side seemed very good! The memory system is pretty different to what I've been used to before: there are 16 RAM slots in total, but only 8 work with each processor, so having only 1 processor gives you access to only 8 slots. Then, instead of just having normal ECC RAM, there's now UDIMM and RDIMM involved, and a specific order to put the RAM in; the colour-coded black and white RAM slots on the motherboard make it clear when installing the RAM!

Port-wise, at the front it's got 2x USB 2.0 ports along with the 4x 3.5" SAS/SATA drive slots, and at the rear there are 2x USB 2.0 ports, a VGA port, a COM port and 2x Ethernet ports. Inside it's got a few PCI-Express slots: on the motherboard there are 2x 4X and 1x 8X, and on the risers this becomes 1x 16X and 1x 8X (which is pretty stupid in my mind; plus there's one slot without a riser and you couldn't fit a card in there even if you wanted to because of the plastic cooling cover). There's also an internal 5-pin USB header. I looked around and saw cables for it for £20, which I thought was pretty disgusting for a simple USB cable, so I tried to get it working with a home-made connector; the first and last pins are grounding pins and there appears to be no +5V pin, so I wasn't actually able to get it working, but no loss anyway. On the left of this picture is the cable for the front 2x USB ports and on the right is the supposed internal USB port header.

img0066ck.jpg

All the screws on the server are torx screws, and you might at first think that's a pain, but personally I prefer it. They had a nifty idea of giving you a tool that clips into the back of the server that you can pull out to use on the screws, which I've pictured below. Although this does work, it's a bit of a pain, mostly because you have to keep twisting your hand to use it, so personally I use a large screwdriver with torx bits for anything to do with the screws, but it is a nice touch.

img0065zi.jpg

The Smart RAID Array;

As I'd be running ESXi, I looked up the Smart Array to see what support for it on ESXi was like and how it'd run, and there's a HUGE amount of conflicting reports on it: some say it runs beautifully, others that it isn't worth the wrapping paper it came with... Then there were a few sites (including an official VMware statement) mentioning very bad performance with the RAID controller if you have no cache and/or no backup battery for it, which mine didn't have, so I added getting the RAM for the RAID controller to my list of things to do.

By the way, after I'd got the server and was looking for BIOS updates and whatnot, I came across some information on HP's site about the SAAP... Basically, if you get the RAID controller without a battery, you're limited to RAID0 and RAID1 (and JBOD); if you get the RAID RAM and backup battery you also get RAID10 and RAID5, and you would think RAID6 too, like it states on the product information page... But oh no, no no no! You've got to get the HP SAAP to enable RAID6 (among other features that have been standard on all Dell PERCs since I've used them, starting with a PERC/2i), which costs £200. Yes, it costs £200 for a license code to enable a pretty basic feature. And the P410 card itself is about £220 JUST for the card, then you're looking at £50-£200 for the RAID RAM and backup battery depending on whether you want 256MB/512MB BBWC or 1GB FBWC... And that's something you might be thinking 'what?' about! BBWC and FBWC stand for Battery-Backed Write Cache and Flash-Backed Write Cache. What's the difference, you ask? BBWC is the same as traditional RAID battery backup: if power goes out, any data that hasn't yet been saved to disk stays in the volatile RAID RAM until the server's back on, and it'll then be written to disk. FBWC is essentially the same but has additional non-volatile flash memory on the module and a supercapacitor pack (actually 2 capacitors in parallel); when power dies it copies the data from the RAID RAM to the flash memory and writes it back when the server starts up again... So, what's the point/advantage of FBWC over BBWC? According to HP, BBWC will keep the data safe for up to 3 days whilst FBWC will keep it safe for about 10 years... So there you have it: if you're thinking of chucking your server into storage for 5 years as soon as you have a power cut, it's probably safest to get the more expensive FBWC.
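(If you want to check what cache and battery/capacitor the controller thinks it has without rebooting into the Option ROM, the hpacucli tool I cover later in this review will report it - a rough sketch, assuming the controller sits in slot 0:)

hpacucli ctrl all show status
hpacucli ctrl slot=0 show     # reports total cache size, battery/capacitor status and whether the write cache is enabled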

Also, according to comments dotted around the web, the BBWC battery lasts just over a year and then needs replacing (at a cost of about £80 if you get it new from HP), whilst the supercapacitor will last much longer (although I'm not going to lie, if the pack dies I'll be ordering 2 capacitors off eBay and soldering them on at a cost of about £5 instead). And beware if you're thinking of getting the FBWC with this server: the capacitor cable length is tiny (I think it's designed for a heavier-duty HP server), so I had to remove the bracket from my RAID card and put it in the 16X PCI-Express slot so the cable would reach.

img0067in.jpg

(Oh, and one final rant about the server and RAID - the RAID card is 8X and the PCI-Express riser has an 8X slot on it... but the slot it plugs into is only 4X, which I find slightly stupid personally.) The HP Smart Array P410 card has 2x Mini-SAS connectors on it, each one handling up to 4 drives, and since this server only has 4 drive slots, only one channel is used.
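(If you're curious whether the card really trains at 4X rather than 8X, a Linux live disc will tell you - a quick sketch, run as root, where the 05:00.0 PCI address is just an example and you'd use whatever the first command reports:)

lspci | grep -i "Smart Array"                    # find the P410's PCI address
lspci -s 05:00.0 -vv | grep -E "LnkCap|LnkSta"   # LnkCap is what the card supports, LnkSta is the width it actually negotiated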

Testing the server out;

So I did what any other tech geek would do when the server arrived - unboxed it and rushed to power it up! When I turned it on, the noise the fans made was unbelievably loud, pretty much on par with the Dell 2950 when it's in jet-fighter mode (i.e. with the cover removed), and it stayed at this speed/noise for the whole of the first power-on. I got ESXi installed onto it: I put the HP ESXi ISO image onto a 1GB USB stick and booted from it, then once it'd loaded all the installation files to RAM, installed ESXi to the same USB stick and turned it off. I opened the server up and had a look around to check that everything was seated properly and whatnot. After playing I put the cooling cover back on and closed the case down; on the next power-on the fans were loud again, until the HP BIOS screen appeared, whereupon they slowed right down to a much more tolerable noise level (quieter than the Dell's lowest fan speed). From turning it on to getting to the HP BIOS screen takes about 8 seconds.

I tried using one of the 160GB drives as a test datastore for ESXi and wow, was it painful to watch - snail-speed isn't enough to describe how painful it was - so I gave up and turned it off. That's when I looked around the net to find a RAID RAM card for it! :p I found a very reasonably priced 1GB FBWC one that cost a grand total of about £120 from the US, which compares well to some auctions on eBay with the 256MB BBWC going for £60-£80; the only problem with ordering from the US is that it took ~20 days to arrive, when the USPS site boasts that parcels sent using the priority international service take 6-10 days... Oh well.

Adding another CPU and more RAM;

As I'm used to dual-CPU systems, I thought it was worth getting another CPU, and as the 8GB in my old server was getting exhausted I thought 16GB would be a sensible base level for this server. So I went in search of an Intel E5620 CPU and a single 8GB stick of RAM...

The CPU wasn't hard to find, a quick eBay search yields a lot of results, some very cheap too, but you must be VERY CAREFUL as the majority of the CPUs listed are actually engineering samples, NOT final-design processors. You might think 'oh, ES, it's near-final design and cheaper, it'll do', but ES CPUs have potential problems with the CPU dies and L1/L2/L3 caches such as bad memory blocks, they might be missing features or produce more heat, and they generally run quite a bit slower - it's well worth avoiding ES CPUs at all times.
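(One rough check once a chip is in hand, or if a seller will send you a screenshot: retail Xeons report their full model name in the CPU brand string, whereas engineering samples often show a generic 'Genuine Intel(R) CPU' string instead. Not a guarantee, but a quick sketch from any Linux box:)

grep -m1 "model name" /proc/cpuinfo   # a retail chip should read something like "Intel(R) Xeon(R) CPU E5620 @ 2.40GHz"
grep -m1 "stepping" /proc/cpuinfo     # compare against the stepping Intel lists for the retail part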

Sourcing a heatsink for the second CPU was slightly hard too. The Intel heatsinks for E5620 CPUs are almost the same as the heatsinks you need for this server, but they have 4 screws, one at each corner, whilst the HP heatsink has 2 screw holes, positioned between the middle and the ends of the heatsink, so you need to get the HP heatsinks for these servers - I found one for the high price of £50.

img0059ep.jpg

As I said, the DDR3 RAM is a strange beast, with there being UDIMM and RDIMM, plus various single, dual and triple memory channel considerations and single/dual/quad ranked modules... I looked up a UDIMM module on Crucial and got a single 8GB module, as the part number on the original RAM indicated it was UDIMM RAM. So what's the difference between UDIMM and RDIMM? The U and R stand for Unregistered (unbuffered) and Registered: UDIMM is essentially like normal PC RAM, whilst RDIMM adds buffering and better ECC support, plus you can have more RAM in RDIMM servers (the HP manual gives 48GB as the maximum with UDIMMs and 192GB as the maximum with RDIMMs), at the expense of RDIMM being slower than UDIMM and having a higher CAS latency. After getting the RAM, it was a doddle to install after reading the HP manual on it - I stuck it in the first white RAM slot for the second CPU, the one with the A3 slot marking.
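(If you're not sure what's already fitted, dmidecode run as root from a Linux environment on the box reports the DIMM type, speed and part number for every slot - a quick sketch:)

dmidecode -t memory | grep -E "Locator|Size|Type|Speed|Part Number"   # unpopulated slots show "No Module Installed"; registered modules show it under "Type Detail"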

Adding the RAID RAM card;

When the RAID RAM card arrived I saw how short the cable for the capacitor was, and that's when I decided to modify the RAID card bracket so the card could sit in the rear PCI-Express slot. I tried removing the bracket completely (as it's half-height and needed to be full-height) and putting various other brackets on it instead - I tried a Realtek Ethernet card bracket and a Sound Blaster bracket, and neither fitted - so I just removed the bracket completely and slotted it in as-is; the cable is the perfect length now! I then booted the server up and updated the BIOS (which also updated the firmware of the Smart Array controller) using the HP firmware updater on USB. I found it pretty hard to work out HP's versioning system as the BIOS version number doesn't line up with the version number they use on their site, so for anyone else it's probably best just to compare the release date on HP's site with the BIOS build date in the setup utility, which is in MM/DD/YYYY format! The updater itself is also pretty hard to get working using the Linux utilities: unetbootin didn't work, and the USB creator on the DVD image mostly worked but there was a slight problem with it that I needed to fix by hand (I can't remember what it was); when I tried it on Windows recently to update a hard drive firmware, it created the USB perfectly.
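(A quick way to cross-check the flashed ROM date against HP's release notes, from a Linux live disc rather than ESXi itself - a sketch:)

dmidecode -s bios-version        # the system ROM family/version string
dmidecode -s bios-release-date   # the MM/DD/YYYY build date to compare against the date on HP's download page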

So, with 3x 2.5" 73GB 10K RPM SAS hard drives in, I headed to the RAID setup and created the volume, which took a second to make. Another difference from the Dell PERC 6 is that the HP controller doesn't wait while it formats the drives and sets everything up, whilst the PERC 6 does, so you can use the virtual drive right away, but on each boot-up until the drives have been fully initialised you get an error warning you that 'performance will be reduced'; it took a few hours of the array doing nothing but working away slowly in the background for it to be built fully - personally I prefer the PERC! I got a trial code for the SAAP off HP's website and entered it, twice, but got rejected both times. It's worth noting there's a difference between the SAAP1 and SAAP2 packs: SAAP1 works on older Smart Array controllers such as the P410, whilst SAAP2 works on newer cards, and codes from one pack are not compatible with the other. So I quickly re-registered to get a SAAP1 code and entered it, and it was accepted first time. Now, under 'create new volume', RAID6 was a selectable option. The trial license apparently lasts 60 days, but there's no way I can see in the RAID setup to tell when it expires or any information about it - all you can do is display the active license code or delete it, which is pretty poor if I'm honest.
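(For reference, the same volume can also be created from the hpacucli tool I mention later instead of the Option ROM - a rough sketch, assuming the controller is in slot 0 and the three SAS drives are the only unassigned ones:)

hpacucli ctrl slot=0 create type=ld drives=allunassigned raid=5   # build a RAID5 logical drive from every unassigned disk
hpacucli ctrl slot=0 ld all show                                  # watch the new logical drive and its background parity initialisation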

RAID5 performance before the drives were completely ready for use was, as warned by the BIOS, poor: transferring over SSH, a top speed of 6MBps was achieved, which is slightly less than the 10MBps or so that the single drive managed. After the drives were set up and ready for use, I transferred all the VMs over to it using the vSphere Client, so I'm not sure exactly what speeds it was getting, but it was pretty fast as it only took a few hours to upload 80GB of data or so.

Performance;

So, as I've got ESXi installed and running on the server, and as it's being used for live VMs, I won't be running many tests on the bare-metal hardware itself but will instead run them in a VM. After the RAID5 optimisation was finished, I uploaded a large file to the ESXi store using SSH; the server and workstation are both on 1Gbps connections to a 1Gbps green Ethernet switch. Here are the results:

[Test]$ dd if=/dev/zero of=./bigfile bs=100M count=8

8+0 records in

8+0 records out

838860800 bytes (839 MB) copied, 1.31515 s, 638 MB/s

[Test]$ scp bigfile root@192.168.1.2:/vmfs/volumes/Main/ISOs/

Password:

bigfile 100% 800MB 33.3MB/s 00:24

As you can see I'm getting ~33MBps, which is the same as I was getting on my Dell 2950 - as the drives are all 2.5" 73GB 10K RPM SAS drives I dare say you wouldn't notice an improvement. When I tried this test with a single hard drive without RAID or the RAID RAM, I got a pretty awful 1MBps-9MBps, so I'm glad to see it's picked up, which I guess is down to either the addition of the RAID RAM card or the trial HP SAAP license, because that supposedly includes a video-streaming performance boost which might have an impact.
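(Worth noting that the 638MB/s dd reported above is mostly the workstation's RAM cache swallowing the zeros rather than its disk; if you want dd itself to reflect real disk throughput you need to force a flush - a sketch of the variant I'd use, same sizes as above:)

dd if=/dev/zero of=./bigfile bs=100M count=8 conv=fdatasync   # fdatasync makes dd flush the data to disk before it reports a speed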

I've gone back from Windows 7 on my SSD (which I was using to set up the ESXi environment) to Linux on my old trusty 30GB mechanical drive, so I don't have the vSphere Client to time how long it takes to transfer a large file and work out the approximate transfer speed, sorry! So, using my current Arch/Shift2 VM, I got sysbench from the AUR and did some IO testing! (This was performed on a VM guest with an eagerly-zeroed, thickly-provisioned 15GB disk, 8 virtual processors and 4GB of RAM):

[root@Shift2 TEST]# sysbench --init-rng=on --test=fileio --num-threads=16 --file-num=60 --file-block-size=4K --file-total-size=5G --file-test-mode=rndrd --file-fsync-freq=0 --file-fsync-end=off --max-requests=90000 prepare

sysbench 0.4.12: multi-threaded system evaluation benchmark

60 files, 87381Kb each, 5119Mb total

Creating files for the test...

[root@Shift2 TEST]# sysbench --init-rng=on --test=fileio --num-threads=16 --file-num=60 --file-block-size=4K --file-total-size=5G --file-test-mode=rndrd --file-fsync-freq=0 --file-fsync-end=off run --max-requests=90000

sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:

Number of threads: 16

Initializing random number generator from timer.

Extra file open flags: 0

60 files, 85.333Mb each

5Gb total file size

Block size 4Kb

Number of random requests for random IO: 90000

Read/Write ratio for combined random IO test: 1.50

Using synchronous I/O mode

Doing random read test

Threads started!

FATAL: Too large position discovered in request!

(last message repeated 1 times)

Done.

Operations performed: 90010 Read, 0 Write, 0 Other = 90010 Total

Read 351.6Mb Written 0b Total transferred 351.6Mb (7.8285Mb/sec)

2004.10 Requests/sec executed

Test execution summary:

total time: 44.9130s

total number of events: 90010

total time taken by event execution: 664.6948

per-request statistics:

min: 0.00ms

avg: 7.38ms

max: 1785.50ms

approx. 95 percentile: 35.58ms

Threads fairness:

events (avg/stddev): 5625.6250/991.10

execution time (avg/stddev): 41.5434/9.09

(And the only thing I've compared this to is http://serverfault.c...rmance-resolved which shows it's better than a 6x 3.5" disk RAID1+0 setup but, as you'd expect, nowhere near as good as using SSDs!)
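(If you want to repeat this, the test files sysbench creates need removing again afterwards, and swapping --file-test-mode gives you a sequential figure to put against the random one - a sketch using the same options as above:)

sysbench --test=fileio --num-threads=16 --file-num=60 --file-block-size=4K --file-total-size=5G --file-test-mode=seqrd --max-requests=90000 run   # sequential read run for comparison
sysbench --test=fileio --file-num=60 --file-total-size=5G cleanup   # deletes the 60 test files again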

I'll admit I don't really know all that much about sysbench, but I ran a CPU and memory test too so you can see/compare the results. They are as follows (remember this is on an 8 vCPU VM with 4GB RAM, with the server being in low power mode in both the BIOS and ESXi):

[root@Shift2 TEST]# sysbench --init-rng=on --test=cpu --num-threads=16 --file-num=60 --file-block-size=4K --file-total-size=5G --file-test-mode=rndrd --file-fsync-freq=0 --file-fsync-end=off run --max-requests=90000

sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:

Number of threads: 16

Initializing random number generator from timer.

Doing CPU performance benchmark

Threads started!

Done.

Maximum prime number checked in CPU test: 10000

Test execution summary:

total time: 13.6770s

total number of events: 90000

total time taken by event execution: 218.4664

per-request statistics:

min: 1.19ms

avg: 2.43ms

max: 54.52ms

approx. 95 percentile: 14.54ms

Threads fairness:

events (avg/stddev): 5625.0000/60.01

execution time (avg/stddev): 13.6541/0.0

And the memory test:

[root@Shift2 TEST]# sysbench --init-rng=on --test=memory --num-threads=16 --file-num=10 --file-block-size=4K --file-total-size=5G --file-test-mode=rndrd --file-fsync-freq=0 --file-fsync-end=off run --max-requests=9000

sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:

Number of threads: 16

Initializing random number generator from timer.

Doing memory operations speed test

Memory block size: 1K

Memory transfer size: 102400M

Memory operations type: write

Memory scope type: global

Threads started!

Done.

Operations performed: 104857600 (1526531.12 ops/sec)

102400.00 MB transferred (1490.75 MB/sec)

Test execution summary:

total time: 68.6901s

total number of events: 104857600

total time taken by event execution: 137.4314

per-request statistics:

min: 0.00ms

avg: 0.00ms

max: 9.36ms

approx. 95 percentile: 0.00ms

Threads fairness:

events (avg/stddev): 6553600.0000/53930.75

execution time (avg/stddev): 8.5895/0.18

Personally, I'm very happy with how the server performs.

Downsides;

So, everything has downsides, right? Yes. Despite this server being pretty nifty and nice, there are a few things I personally dislike about it. On the classic Dell PowerEdge 2650 the front panel was great: if it was locked you couldn't turn the server on or off (well, you could with a paper-clip actually) nor remove the cover. For some reason on the 2950 Dell decided to allow you to completely remove the cover even if the front panel was locked, quite why I don't think I'll ever understand. But with this HP server you're basically forced to have no front panel at all - if you've got this server in a data centre and there are some thieves around, they can potentially just yank out your drives and walk away as-is. They can also just press the power button, as there's nothing to protect it either. I've seen that the G8 line of HP servers now has a front panel; quite how well that's been made I'm unsure, but it's pretty late to the game in my opinion.

HP's iLO, or rather, uiLO for unintegrated lights-out. Not only does the server not have it by default, it's external! You've got to buy something that clips inside the server, which adds another networking port, and then pay on top of that for various licenses for the iLO itself, which are darn expensive and overpriced. Dell offers something different: I got a DRAC for my 2950, but the server itself had an 'integrated DRAC' if you'd call it that, whereby it shared one of the Ethernet ports on the server and allowed you to partially administer it remotely through that, assuming the server hasn't crashed or isn't off (whilst the DRAC allows you to still do everything even if the server has crashed or is off). I can't even rate how good the uiLO would be on this server due to the license alone being £200, and I haven't got a clue how much the management plug-in port would cost on top of that.

The internal USB port on this server, as I mentioned previously, pretty much ****es me off. £20 for a USB cable? And a non-standard pinout so you can't just make your own? That's pretty low.

The noise on startup of this server: although it's generally much quieter than a 2950 when running, until the BIOS has loaded and POSTed (which takes about 20 seconds, and the fans spin up on each restart too) the fans run at top speed, very loudly. Now, why it does this is obvious - the fans are controlled by the IPMI, I assume, and run at various speeds depending on the temperature of the internal server parts, which is the same on Dells and HPs. The difference is how Dell and HP bring up the IPMI: until it's fully loaded the temperatures are unchecked, so the fans run at full speed in case the server is running very hot (better to be noisy than have a server go on fire or melt), but Dells load the IPMI within 2 seconds of turning the server on, whilst HP decided to load it with the BIOS, so you've got a much longer period of the fans being very noisy. I'd say it's not a huge problem in a data centre, but in a home environment, if I update ESXi and need to reboot late at night, it'd wake everyone around up.

The top panel... Now this is pretty strange, I'll admit, but I really hate the top panel on this server: trying to get the darn thing off is a challenge. You remove the locking screw and hold down the cover release button but it won't move, so you have to pry it off with a flat-headed screwdriver - very annoying! I never had any problems with covers on any Dells.

As mentioned earlier, the second internal PCI-Express slot - why is it even there if you literally cannot ever use it?

The front HD lights and management of the RAID card/system. PERCs and Dell servers have a great front light system on the drives: you've got a simple power/status light which stays solid green when the drive is spinning, flashes red when there's an error with the drive or it's predicted to fail, flashes green if the drive's blink has been activated or it's ready to be removed, and is off if there's no drive present, plus a light below that to indicate whether the drive is being written to or read from - simple and fine! So what the hell is with the HP drive light system? There are 2 lights, and they really don't make much sense. If you're in the RAID card BIOS and hover over a drive, the corresponding drive's light in the system goes from green to blue so you know which drive it is - fine. But hard drive access? The light just seems to always be green or flashing green, and I don't mean each drive light individually, I mean all drives at the same time. One of the drives I have in it has an LED on the drive itself for disk activity which you can see, and it doesn't correspond with what the drive caddy activity light does at all. I'm assuming the other drive light is for when a drive has failed, because I've never actually seen it on at all. Then we come to the RAID-card controlling software, and I mean for the host OS, which in this case is ESXi. On the Dell, you could get the LSI MegaCLI program and the libstore.so file or whatever it's called, plonk them on the ESXi host somewhere and it'd work brilliantly - you can do whatever you want to the RAID card. I will admit the help for it is pretty much absent and the commands are very strange, plus it wants the enclosure IDs and card IDs in odd formats, but other than that it's great: you can add hot spares, prepare drives for removal, etc. The HP RAID card tooling, on the other hand, is pretty much rubbish. After eventually finding the tool on HP's site (it's hpacucli if you're looking for it) and getting the VIB installed on ESXi, it puts it in /opt/hp/hpacucli/bin, and I'll admit the help for it is pretty much faultless, but it doesn't seem anywhere near as good or polished as the LSI utility (the PERC 5/PERC 6 use an LSI RAID chip, which is why the LSI tool works). Also, with a PERC you have everything there out of the box, whereas with HP you've got to pay a whacking amount for a basic feature like RAID6... What a joke.
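(For anyone hunting for it, this is roughly the sequence to get the HP tool onto an ESXi 5.x host and poke the controller - the VIB filename and datastore path are just examples, use whatever HP's download is actually called:)

esxcli software vib install -v /vmfs/volumes/Main/hpacucli-9.x.x.vib   # install the CLI VIB (example filename)
/opt/hp/hpacucli/bin/hpacucli ctrl all show config detail              # full controller, array and drive layout
/opt/hp/hpacucli/bin/hpacucli ctrl slot=0 ld all show                  # per-logical-drive status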

There's no information panel on the front of this HP server either. With the Dell it gave you everything you needed to know: if something went wrong with ANYTHING - hard drives, RAID, CPU, VRMs, RAM, etc. - the system lights would change from steady blue to flashing orange and it'd display the error on the screen. What does HP offer? Well, I don't think they offer anything actually... other than the drive lights. There's also an LED on the front of the server which looks like it's for HDD activity; I've never yet seen it on once, so I'm not sure if it's a DVD-ROM disc activity light or what - and if it is, why? Pretty daft.

The purpose of having PCI-Express slots wired as 2X, 2X and 4X but with riser cards that present them as 4X and 8X still baffles me. If the motherboard only supports PCI-Express 2X and 4X then the risers should be 2X and 4X. Pretty daft if you ask me.

The versioning system of their BIOS: it's not clear in the BIOS whether the date is DD/MM/YYYY or MM/DD/YYYY (it's the latter), and the BIOS version number on HP's website isn't visible in the BIOS at all anyway, so you have to either go by the dates of the updates or run the HP firmware DVD/USB and see if an update is needed or not.

Drive caddies... (This isn't really related to official HP spares or parts, nor the server itself.) You have to be careful with drive caddies: when I got the server I looked in the usual place for drive caddies, eBay. The original caddies that came with it have blue metal beneath the drives, and the light pipes carrying the LED illumination from where the LEDs are placed to where you see them at the front are clearly separated. I didn't realise there are a lot of HP caddy clones, but I bought 2 caddies for £8 each or so, compared to Dell's mad £20 each. The quality isn't all that brilliant: they fit, and the drive release button lets the clip click out, but you then need a screwdriver to lever the lever out so you can grab it and actually pull it, plus the plastic that carries the light from the LEDs at the back to the front leaks light, so even though only one LED is on, it looks like both LEDs are on.

Power usage;

I have a GE 3kVA UPS that my server is connected to. Since moving servers over and whatnot I now also have a BT Openreach VDSL modem and a D-Link green 1Gbps switch plugged into this UPS, so I can't measure the exact amount of power the server alone is using, but all 3 together use between 92-102W apparently. Before, when it was just the Dell 2950, it was using ~160-190W depending on whether I'd recently taken the cover off to clean it (and the fans had sped up to jet-plane noise levels). This is with the HP server set to 'low power mode' in the BIOS AND using the ESXi 'Intel SpeedStep' power control module; I'm assuming this also means the Intel Turbo Boost feature is disabled, which sounded really daft to me anyway, like a con to shorten your CPU's life. Whether you trust my UPS readings or not is up to you - I'm not sure I trust it myself due to it whacking out a few times randomly for no reason.

img0068qn.jpg

The strange thing about this server is that there are 4 different power supplies you can get for it: the normal one (which I have) is a 500W non-hot-plug PSU, there's a 500W hot-plug PSU (although quite how it's hot-plug with only one power cable I still don't know), there's a high-power 750W one, and there's some obscure 480W high-efficiency PSU which I'd personally like, but I've only seen it dotted around various sites for a small fortune. The PSU has plenty of spare cables too: a 4-pin Molex for a tape drive or slim DVD-ROM and a few ATX GPU connectors, and although my server came with an extra GPU (some NVIDIA Quadro), it didn't need any additional cables.

img0060ymu.jpg

So, my thoughts;

I really like this server if I'm honest, it feels well built and uses a pretty low (relative) amount of power. It supports various power-saving features and device passthrough in ESXi, and HP also provide up-to-date ESXi images with the various drivers and VIBs already installed, which is something Dell used to offer but recently seems to have pulled all the images off their site and update lists - plus they weren't current even when they did have them. The HP software VIBs for ESXi are good: unlike on the Dell systems, all the health lights work fine and they give you lots of detailed information about the fans, VRMs, etc. It would, however, be good if HP did something similar to what Dell OpenManage offered - although one thing that made me want to switch my old Dell to something else was that OpenManage for ESXi is a complete farce. It didn't work, and the comments from around the web indicate that for most people it didn't work either, so if you got it working you were lucky! And Dell didn't really seem to care that it didn't work; they didn't reply to any comments about it and the official ESXi OpenManage site had lots of comments from unhappy customers (even some with brand new servers) complaining about it not working. Plus OpenManage remote management doesn't actually work either - again, another Dell farce they don't care about.
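(If you want to see which HP bits actually ended up on a host built from their image, the package list makes it obvious - a quick sketch from the ESXi shell:)

esxcli software vib list | grep -i hp   # lists the HP-provided VIBs (CIM providers, drivers and so on) with their versions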

The extras for this server that would come by default from other server manufacturers carry a high price, which is quite off-putting. If you can do without the features it's fine, but if you really need/want them and are on a limited budget, I'd look at a different vendor first.

But overall it's performing very well, using under half the power the Dell used whilst packing a much mightier punch and keeping my ears sane, so for those reasons I'd personally say this server is definitely worth checking out. (The problems that others mention with RAID don't seem to be happening to me any more, which I think is down to the addition of the RAID RAM card and updating all the software to the latest versions. This server is a G6 and HP are selling G8s now [and, ironically, are still selling new G6s, which I would say proves it's a good, reliable server], and they only released an update to the system last week, which is great to still see!) So I'd give this a great 8 out of 10, being let down by the lack of iLO, the un-needed expense of the SAAP and the cover being a pain to remove! [If you've got any questions, ask away!]

img0058cem.jpg


Were you using a phone from the early 90's to take the snaps? >_<

Top section, 'and I also apologise for the pretty poor quality of the pictures, I took them with an iPod touch as I don't have an actual dedicated camera to use).'


Wow what a ****ty camera you got. But speaking of the review, it's a pain to read these text benchmarks, HD Tune would have done the thing. Anyway, thanks for the review.


Submitted to main, wonderful job! (Y) http://www.neowin.ne...ews-hp-dl160-g6

Awesome, thanks :D.

What's the specs and power consumption/location?

The 2950? Dual Intel Low-Voltage Xeon 5335's, 8GB DDR2 RAM, PERC/6i with BBU, DRAC5, 4x3.5" backplane.

it's a pain to read these text benchmarks, HD Tune would have done the thing. Anyway, thanks for the review.

I would have needed to install Windows for that, which would require a license, and my W7, XP and 2003 licenses are all in use :( sorry.

This is old stuff. Currently HP sells G8 servers. This is at least 2 years old.

Yeah, sorry about not being able to afford £5,000 for a current-gen server, I'll make sure my bank manager deals with you next time so I can review something brand-spanking new off the factory shelf :/. This server isn't used for commercial purposes, it doesn't make me any money; the review is more for people that are interested in home servers and having fun rather than having the latest and greatest.

HP servers are awesome!

Yes they are! :p


Surely beats the crap out of my custom built core i3 nettop server :p

I've been wanting to get an actual server machine, but given the typical noise level and the cost, I never have. Also this was a good review (minus the pictures of course :p). I enjoyed reading it.

