Thanks for the input guys; maybe I've not been all too clear, and yes, I'm a complete noob to ESXi and VMware. I'm only in the "learning" stage of VMware and all the features it provides. Currently the BIOS of the N54L only allows for software RAID 0, 1 or 0+1. I've ignored that and have had each 1.5TB disc independent: a VM here and there, nothing making use of an actual storage array.
Hence why I'm looking at a P410: the four 1.5TB discs can then be in a hardware RAID 5, allowing ~4.5TB of storage which ESXi sees as its storage pool. From there I can set up my VMs for what I need.
Let's just say I hypothetically assign 500GB to the OSes on the array - can each VM then use the remaining 4TB to do what it wants with? Does ESXi manage storage in this way?
WHS2011 sees 4TB, uses 1TB of that, but still provides all the streaming needs. Only uses x% of a CPU core and 2GB of RAM.
Server 2008 R2 sees 4TB but uses 200GB for hosting a website. Only uses x% of a CPU core and 2GB of RAM.
FreeNAS sees 4TB but uses 2TB for file storage and distribution. Only uses x% of a CPU core and 2GB of RAM.
I repeat: I am a noob at all this.
Hypothetically yes, but what n_K says is really important. Also ESXi itself cannot see the RAID 0/1 that comes on-board with the MicroServer. So some other solution is needed if you want hardware RAID (ignoring of course that the on-board RAID is not hardware RAID anyway).
The processor/RAM allocation is the easy bit but like n_K says you are probably just better off letting the hypervisor deal with the CPU aspect. I don't think the 'noisy neighbour' phenomenon is going to be a big problem for you.
Strange use-case, though. You could have FreeNAS see the whole lot and present the rest as iSCSI targets, so FreeNAS 'owns' all of the storage but different chunks are presented to each box.
One other thing to consider is that hardware RAID platforms are not meant to be used with modern consumer-level disks - you really need server-level disks, which have different TLER behaviour (or whatever your favourite brand calls it), etc. Even with 7200RPM disks, a rebuild can take a very long time, and you could quite easily get yourself into the situation whereby an array rebuild after a failed disk causes another disk to fail and breaks the array in its entirety. I would recommend you look at ZFS or something similar.
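For reference, the ZFS equivalent of the four-disk RAID 5 would be a single-parity raidz pool. A minimal sketch from a FreeBSD/FreeNAS shell (device names `ada0`-`ada3` are illustrative - on FreeNAS you would normally do this through the web UI instead):

```shell
# Create a RAID-Z pool from the four 1.5TB disks - single parity like
# RAID 5, but with per-block checksums, so a rebuild (resilver) only
# copies live data rather than every sector on the disk.
zpool create tank raidz ada0 ada1 ada2 ada3

# Check pool health and capacity (~4.5TB raw before ZFS overhead)
zpool status tank
zpool list tank
```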
If you want to avoid VMDKs sitting on VMFS on every disk, then you will need to create some raw device mappings for the local disks - which, for some reason, ESXi cannot do through its GUI - via the command line, see:
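As a sketch of what that looks like from the ESXi shell (the disk device name and datastore/VM paths below are illustrative - list your own devices first):

```shell
# List the local disks ESXi can see
ls -l /vmfs/devices/disks/

# Create a physical-mode raw device mapping. The .vmdk created here is
# just a small pointer file on an existing VMFS datastore; all guest I/O
# goes straight to the physical disk. (Device name is made up - use yours.)
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_SERIAL \
    /vmfs/volumes/datastore1/WHS2011/disk1-rdm.vmdk

# Use -r instead of -z for a virtual-mode RDM, which allows snapshots.
```

You then attach the resulting .vmdk to the VM as an existing disk in the usual way.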
Yes, you can configure it to do that (thin storage provisioning) BUT... it is not wise, because the virtual disk expands each time new blocks are written to it, and when files are deleted you don't get that space back. So if one VM writes 1TB, deletes 500GB, then writes another 1TB, with thin provisioning it will use 2TB.
This example isn't entirely correct - the thin provisioned virtual disk will always remain at the maximum of any data that was on it at any time in the past. In the example given, this will mean that at the end it will be 1.5TB in size, but then if you deleted everything off it, it would still be 1.5TB in size. The performance of virtual disks on VMFS isn't particularly great either.
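A quick way to see the high-water-mark behaviour described in the post above - the thin VMDK only ever grows to the largest amount of data that has existed on it at any point. This is just arithmetic in plain sh (sizes in GB), not anything ESXi-specific:

```shell
#!/bin/sh
# Model a thin-provisioned disk: allocation tracks the high-water mark
# of data in the guest; deletes never shrink the VMDK.
alloc=0
used=0
for delta in 1000 -500 1000; do   # write 1TB, delete 500GB, write 1TB
    used=$((used + delta))
    if [ "$used" -gt "$alloc" ]; then
        alloc=$used
    fi
done
echo "data in guest: ${used} GB"    # 1500 GB
echo "thin VMDK size: ${alloc} GB"  # 1500 GB, not 2000 GB
```

So the VMDK ends up 1.5TB, matching the correction above, and it would stay 1.5TB even if the guest then deleted everything.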
EDIT: Also, if you're looking for a decent RAID card... I'd highly recommend you look at the Dell PERCs (yes, they'll work in non-Dell computers). I've got a 512MB PERC6 in an old Dell 2950 and the 1GB flash-backed P410 in my HP server... OK, so the P410 is much faster when just writing (as it writes to its cache memory before writing to the hard drives), but the HP configuration tool is a complete and total mess, you're locked out of basic features like RAID 6 and some RAID 5 features unless you pay HP for a stupid add-on pack serial number, and I only get 33MB/s read speeds from a RAID 5 array, which is absolutely ridiculous for 15K RPM hard drives. The whole card seems pretty god-awful to me.
I've never owned an HP RAID card, but ignoring performance for just one minute, the point worth considering is that you have to use a special HP build of ESXi which comes with the HP RAID drivers built in. The Dell PERCs are LSI-based cards, and there are others (IBM, Intel, Supermicro, Fujitsu, to name just some) which are essentially the same cards. Some have memory on board whilst others don't; some have battery back-up units whilst others don't. Some can only do RAID 0/1 out of the box, others 0/1/10/5/6, and others 0/1/10/5/6/50/60 - some can be upgraded to a higher RAID level using a hardware licence key (which isn't compatible across the different rebrands). The LSI driver comes built into the standard VMware ESXi build. The LSI client software isn't great either, mind, and can be a real PITA to get working.
I'm not sure I would recommend a PERC6 for a MicroServer, though, because the internal cabling in the box uses an SFF-8087 cable (which would be difficult to change without some interesting modding) - the same connector used on newer LSI-based controllers, including the PERC H series, one of which I do have in mine and it works perfectly.
Sorry for the long post, but this is a topic I have been researching for a very long time, and I wouldn't want you to go through the same.