Virtual Host Builds - post yours



Post up the builds, with links to exact parts if possible, the $ you paid for the parts, what you're running on it for a hypervisor, etc.

 

This should be helpful for our fellow Neowin VM enthusiasts looking to build their own rigs.

 

I will get it started with my own somewhat dated rig, which is still working great.

 

HP Microserver N40L - specs http://www8.hp.com/h20195/v2/getpdf.aspx/c04111079.pdf?ver=17

$ http://www.newegg.com/Product/Product.aspx?Item=N82E16859107052 out of stock, but paid $270 3/15/2012

 

Updated to 8GB from the 2GB it came with

$ http://www.newegg.com/Product/Product.aspx?Item=N82E16820148347 out of stock, but paid $43 3/15/2012

 

Added dual-port gigabit NIC

$ http://www.amazon.com/dp/B000J3OPOU/ref=pe_309540_26725410_item paid $41 back on 3/19/2012

 

Added single-port Intel NIC

$ http://www.newegg.com/Product/Product.aspx?Item=N82E16833106036 paid $35 back 3/19/2012

 

Upgraded datastore to a 250GB SSD

$ http://www.newegg.com/Product/Product.aspx?Item=N82E16820148694 paid $120 back 4/17/2014

 

Notes:

Running ESXi 6 without any issues. If you see slow performance moving files to and from your datastore, break out your vmkernel onto its own NIC vs. sharing a port group on the same physical NIC - see the sketch just below. Make sure when ordering NICs that they are low profile; they should come with a low-profile bracket.
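
If you want to do that breakout from the ESXi shell, something like this is the general idea - a sketch only, where vSwitch1, vmnic1, vmk1, the port group name, and the IP are placeholders for whatever is free on your host:

```
# Create a new vSwitch backed by its own physical NIC
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic1

# Hang a port group and a new vmkernel interface off it
esxcli network vswitch standard portgroup add -v vSwitch1 -p VMkernel-Storage
esxcli network ip interface add -i vmk1 -p VMkernel-Storage
esxcli network ip interface ipv4 set -i vmk1 -t static -I 192.168.1.50 -N 255.255.255.0
```

Then point your file copies at the new vmkernel address so that traffic stays off the NIC your VM port group is using.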

 

I have had trouble with 3TB disks in this system, but I need to retry with ESXi 6 - it may have removed the problem. 2TB disks work fine; I currently have 2TB, 1TB, and 750GB disks raw mapped to my storage VM. Google "raw mapping esxi" for details, or see the sketch below.
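
For reference, the short version of raw mapping: from the ESXi shell you create a pointer vmdk against the raw device with vmkfstools, then attach that vmdk to the VM like any other disk. A minimal sketch - the device ID and paths are made-up examples, find your real ones under /vmfs/devices/disks/:

```
# List the local disk device identifiers
ls -l /vmfs/devices/disks/

# Create a physical-mode RDM pointer vmdk (use -r instead of -z for virtual mode)
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID \
    /vmfs/volumes/datastore1/storagevm/2tb-rdm.vmdk
```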

 

Running the modded BIOS; this should get you started: http://homeservershow.com/hp-proliant-n40l-microserver-build-and-bios-modification-revisited.html

 

Draws very little power, about 50-55W via Kill A Watt meter.

 

edit:

Current VMs

[screenshot of the current VM list]

 

I just deleted a Fedora 20 VM that I had deployed to test something with Samba for a person that had sent me a PM for help. I fire up lots of VMs like that, but since I don't think I would really have any need for a Fedora VM day to day, it's gone after I use it. The ones running are the ones that run 24/7/365 - the Cacti one is new; I've been playing with how to monitor my UPS voltage, etc. The UPS plugged into the host and monitored by the Ubuntu VM was a piece of cake, but the one connected to my Windows workstation is giving me trouble. I might just move its USB connection to the host, have the Ubuntu VM monitor both of them, and just send my PC a shutdown command when low on power, etc. Next time, APC vs. CyberPower - the CyberPower works fine with apcupsd on Linux (config sketch below); I just cannot get its driver working with Windows.
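
If anyone wants to copy the Linux side of the UPS setup: apcupsd basically just needs the cable/type declared in /etc/apcupsd/apcupsd.conf. A minimal sketch of what I mean - the thresholds here are illustrative, not my exact values:

```
# /etc/apcupsd/apcupsd.conf (excerpt)
UPSCABLE usb
UPSTYPE usb
DEVICE              # leave blank to autodetect the USB UPS
BATTERYLEVEL 10     # initiate shutdown at 10% battery remaining
MINUTES 5           # ...or when estimated runtime drops to 5 minutes
NETSERVER on        # publish status so other boxes (or cacti) can poll it
NISPORT 3551
```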


I combined my HP MicroServer and a custom-built Mini ITX PC that ran ESXi into a U-NAS NSC-800 case. I mainly did this to have one machine that did everything; the 8x drive bays would allow for plenty of growth for years to come in regards to storage.

 

At the time (January 2013) I chose the hardware based on power usage reviews and performance; as this build would be left on 24x7, I wanted something that wouldn't eat energy when sitting idle. However, I also wanted good performance for transcoding videos with Plex and running a game server when required. Also, importantly, the build had to be compatible with ESXi.

 

Hardware:

 

Case: U-NAS NSC-800

PSU: FPS 1U FLEX 250W PSU

Motherboard: Gigabyte GA-Z77N-WIFI

Processor: Intel Core i7-3770T Quad-Core Ivy Bridge

CPU Cooler: Akasa AK-CC7118HP01 Low Profile Mini-ITX Cooler

RAM: 2x Corsair Vengeance Jet Black 8GB DDR3 1600MHz CAS 9-9-9-24

SATA Controller: HighPoint Rocket 2720SGL Internal 8x SATA/SAS 6Gb/s HBA - http://www.scan.co.uk/products/highpoint-rocket-2720sgl-internal-8x-sata-sas-6gb-s-hbas-pci-e


My work environment is my VM environment.

 

3 Dell R715s with 284GB of memory and 300GB SAS drives, connected to an EMC VNX5500 and a Dell EMC.

4 EMC RecoverPoint servers repurposed as VM hosts with 764GB of memory, connected to an EMC VNX5500 with 2 shelves of disks.

 

A few dozen standalone VM servers for testing outside of these production environments, running on 2900s, R515s, R415s, and R715s.

 

I have no desire to build anything at home at the moment, mostly due to not having the space for a place to sit and do anything in peace.

 

Total cost: a few hundred grand or so (the latest SAN environment was about $100k).


^ dude.. while it's neat and all, not the point of this thread ;) Should I post up the hundreds of ESXi boxes and Hyper-V boxes with all the NetApp storage that are in the 3 DCs I manage in the US? ;)

 

As to the others.. fantastic - the links to your parts and what you're running are exactly the sort of posts I was hoping for.. I might be building a new rig in the summer and really have my eye on this board.

 

http://www.servethehome.com/intel-xeon-d-1500-platforms-supermicro-d-1540/

 

Dual 10G and dual 1G, 8 cores.. up to 128GB RAM, M.2 slot; seems like a home/lab/SMB wet dream for a VM host.. at a very doable price point I hope.. $800.


Well, build one dude!! It will be fun... A tiny little box can bring a lot of joy and performance to a geek's life ;)


Mine is a work in progress; here are the proposed specs (Hyper-V host):

 

 

- AMD FX 8350 (8 Core)

- 24GB of DDR3 RAM

- 240 GB SSD

- 4 NICs

- 1 TB Drive

- GeForce 750

- Windows Hyper-V Server 2012 R2

- Case: Coolermaster RC 330

 

Budman, what are your thoughts - anything I am missing? Your comments would be really appreciated.


That case does seem to have a lot of drive bays.. but they are hidden.. Might make it hard to move drives in and out, but maybe that is not a concern?

 

5.25" Drive Bays 4 (exposed) 3.5" Drive Bays 1 (exposed), 6 (hidden)

 

Are you going to put trays into the 5.25" bays so you can easily swap disks in and out? I do like the 4 NICs. What motherboard? That video card seems a bit much for a VM host..


Hardware

  • AMD Phenom x4 2.2GHz
  • 8GB (4x2GB DDR2 800MHz)
  • 780G Based board with 6x SATA
  • 1x Intel 520 Series 180GB SSD (Host and VMs)

I think I've posted about my servers on here before but I like talking about them so here goes:

 

Hardware wise:

 

1x Intel Core i3-2120T at 2.6GHz

8GB DDR3-1333 RAM

ASUS H61-I MiniITX motherboard

256GB SanDisk SSD

Realtek Gigabit LAN

Lian Li PC-Q12B case

 

Software wise:

 

Host: Windows Server 2012 R2 with Hyper-V enabled. Also sometimes use this for dev work.

VM: Ubuntu 14.04.1 for DNS

VM: Ubuntu 14.04.1 for Mercurial and ZNC, plus a Linux buildslave for some projects.

VM: Windows Server "10" Tech Preview because why not? I used this one to experiment with VPN stuff, that was fun, it's also my windows buildslave.

VM: Windows Server 2012 R2 as my main webserver and build master.

 

Think that's it. If I forgot something I will edit it in later on.


I've added a new one (and my first portable virtualization build), and it's one of just two designed around Hyper-V.

 

The core is the HP Pavilion dv4-2045dx - http://support.hp.com/us-en/product/HP-Pavilion-dv4-2000-Entertainment-Notebook-PC-series/4031721/model/4041679

 

The CPU is the AMD Turion II, while the graphics are also AMD (specifically, a Mobility Radeon HD 4200, AKA Vision Premium).

 

The OS (replacing Windows 7 Home Premium x64) is Windows 10 Enterprise build 10049.

 

Initially, I'll be doing some VM testing with Hyper-V, because this build of Windows 10 is not yet suitable for development leveraging Hyper-V - however, once that gets green-lighted, I'll be using it for Visual Studio 2015-driven Android and Windows 10 for phones development. (Hyper-V is required for the latter, and is now leverageable by the former - being able to take it mobile is a major plus.)


Not sure if this thread is limited to completely custom stuff or not. So if I'm out of place I'll apologize in advance and temper my continued involvement to be appropriately on topic.

 

My home virtual host details:

 

Hardware:

- Dell R620

-- Upgraded RAM to 64GB of ECC RAM (came with 32GB)

---- http://www.newegg.com/Product/Product.aspx?Item=N82E16820208908

-- Upgraded HDDs to 2.5" 10K RPM SAS @ 600GB (4 drives, 3 in RAID 5; 1 Hot Spare)

---- HDDs were purchased from Amazon Warehouse Deals and are still kicking strong.

-- Added internal dual SD card module; ESXi boots off the dual SD cards in RAID 1.

 

-- CPU: Dual Intel Xeon E5-2590 @ 2.9GHz w/ 8 cores + Hyper-Threading (32 logical cores total...)
-- Dell Perc H710 Hardware RAID controller w/ BBU

 

Software:

- ESXi 6

- pfSense

- 17 other VMs

 

It has a ton of power.

 

Minor Hardware:

- ZyXEL NSA325

- 2 x 1TB HDDs in JBOD format

- Backup target for Veeam to ensure daily differential and weekly full backups of all VMs on the above host.


@LogicalApex - no, whatever you're using for your VM host in a home / very small SMB / lab would be right on target. If the hardware involved is home/lab budget friendly, I think it will help others looking to build or get involved in virtualization beyond running VirtualBox or Workstation/Player, etc. on their desktop machine.

 

There are so many options for small budgets, or even not that small; putting together dedicated hardware is very home-budget possible. It can be done for less than $500 I think, or even free really if you want to run it on your older desktop, etc. Or you could spend a few $K, to be sure.

 

But if your setup is in the tens of $K, for example, this is prob outside the scope of what we're looking for. That sort of budget is prob out of reach for most Neowin readers' lab setups ;)


@budman....

 

Sorry I couldn't answer your question earlier...

 

 

The motherboard is an ASUS AM3+ board with the 970 chipset.

- The question I have, since you mentioned the GeForce 750 is overkill: would a Radeon R7 250X be OK to host 3 VMs?

 

Thanks for your help.


Anything that can display the ESXi console is all that is required for a VM host - you only use the console for install or upgrade, etc. You could even just put the card in when you need to use the console ;)

 

When do you think the host video card comes into play on a VM host?? The only time you would want a good video card in the host is if you were going to actually pass it through to your guest and connect that output to something people were going to watch, say, movies on.


WAF, yeah, going to remember that.. As to the cluster - can you get a pic? Would like to see that ;)

 

Yeah, the 2.5" allows for high density when it comes to space, etc., but it's too costly per GB for my tastes with physical platters. I do love how you can put like 4 or even 6 of them in the space of a 5.25" bay.

Even a new 8-bay one: http://www.icydock.com/goods.php?id=192

 

When I was putting in my SSD I thought about getting one from them.. but then nay, I had some rails and just mounted the single SSD in the bay. An easy in/out tray would be kind of nice though. Thinking about my future summer build of a new host, I kind of like this option http://www.icydock.com/goods.php?id=197 where I can put in 2.5" drives and then maybe a USB front panel, etc.



I came across the 8-bay yesterday and thought about it. 7mm drives though - fine for modern SSDs (not my recycled ones though), but not easy to find spinning platters that thin without going for real laptop drives. 8x 1TB SSDs does sound like a nice prospect though.

As for the photo, I do have a slightly older one with fewer boxes. I'll put it up later.

