
ESXi 5.5 - My Experience (and some questions)


Question

Haggis

Hi Guys

 

(Because this has turned out to be quite long, I have highlighted my questions in red and bold.)

 

I have played with VirtualBox and such in the past, so I am not totally new to virtual machines.

 

I have never used ESXi before, though.

 

I have been speaking to Skiver about it recently and thought, why not give it a shot?

 

 

So I removed all of the drives from my HP Gen7 MicroServer and put in one 500GB hard drive (I removed them mainly so I did not muck up and wipe them).

 

So I installed ESXi 5.5, which was painless, and once it was installed I set it to a static IP (192.168.0.65).
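For reference, the static IP can also be set from the ESXi shell instead of the console menu. A sketch - the interface name vmk0, the addresses and the gateway below are example values, not necessarily my setup:

```shell
# On the ESXi host: set a static IPv4 address on the management
# vmkernel interface (vmk0, addresses are examples):
esxcli network ip interface ipv4 set -i vmk0 -I 192.168.0.65 -N 255.255.255.0 -t static

# Set the default gateway (example address):
esxcfg-route -a default 192.168.0.1

# Confirm the configuration:
esxcli network ip interface ipv4 get
```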

 

 

I use my server for a few things:

 

  • Anything I download from my laptop goes onto the server, to save downloading to my SSD
  • I have a few drives on there that hold photos, videos, TV shows, etc.
  • I also use it as a backup target

 

So I created my first VM. Since I am just playing for now, I accepted most of the defaults; the only thing I changed was the size of the VM's disk, to 40GB. (Before I went to bed I switched my normal HDD back in so the server was usable again.)

 

I installed Debian Wheezy on it, as I am very familiar with it (the server currently runs Debian Jessie), so I was happy with that.

 

Once it was installed (no updates needed, as it was a netinstall), I plugged the other two physical drives, which hold my photos and videos, back into the server.

 

It took me ages to work out how to add these drives; when trying to add them through vSphere it kept saying it would format them (which I did not want). I finally worked out I could add them using the command line in ESXi, so I SSH'd into the server and added the drives :)
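For anyone wondering, the usual command-line trick for passing a whole local disk through to a VM on ESXi 5.x is to create a raw device mapping (RDM) pointer file with vmkfstools. A sketch - the device name and datastore path below are placeholders, not real values:

```shell
# On the ESXi host (over SSH): list the physical disks to find the device name:
ls /vmfs/devices/disks/

# Create a physical-mode RDM pointer file on an existing datastore
# (the t10.ATA device name and "datastore1" are placeholders):
mkdir -p /vmfs/volumes/datastore1/rdms
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_SERIAL \
  /vmfs/volumes/datastore1/rdms/photos.vmdk
```

The resulting .vmdk is then attached to the VM in vSphere as an existing disk, and the guest sees the whole physical drive.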

 

I then added the drive to the fstab in the Debian VM and rebooted; it mounted fine and I could access it as I had previously.
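Inside the guest, a passed-through drive just shows up as another disk (e.g. /dev/sdb), so the fstab entry is the usual sort of thing. A sketch, assuming an ext4 data partition - the device name, UUID and mount point are made-up examples:

```shell
# Inside the Debian VM: find the new disk's UUID (device name is an example):
sudo blkid /dev/sdb1

# Example /etc/fstab line (UUID and mount point are placeholders);
# nofail stops the boot hanging if the disk is absent:
# UUID=0a1b2c3d-...  /srv/media  ext4  defaults,nofail  0  2

# Mount it without a reboot:
sudo mkdir -p /srv/media
sudo mount /srv/media
```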

 

I then set about installing Plex Media Server. I installed it as I had done a few times on my normal system, and confirmed it was working as it should.

 

Once it was all installed and working as I wanted, I took a snapshot. I love this feature: I like to play about with stuff, and sometimes it goes wrong and I break the system (as I am sure has happened to many of you before), so being able to take a snapshot and then roll back to it is amazing.
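Snapshots can also be taken from the ESXi shell with vim-cmd, which is handy for scripting them. A sketch - the VM ID "1" and the snapshot ID are examples; get the real ones from the list commands:

```shell
# On the ESXi host: list VMs to find the numeric VM ID:
vim-cmd vmsvc/getallvms

# Take a snapshot of VM ID 1 (name, description; 0 0 = no memory, no quiesce):
vim-cmd vmsvc/snapshot.create 1 "plex-working" "Plex installed and working" 0 0

# List snapshots, then roll back to one by its snapshot ID:
vim-cmd vmsvc/snapshot.get 1
vim-cmd vmsvc/snapshot.revert 1 <snapshotId> 0
```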

 

So that is where I got to, between 6pm and 10pm last night.

 

I plan on adding other VMs on there for things to play about with, like Squid, DNS, etc.

 

I also want NFS on there so I can access the shares from other devices.

 

OK, here come the questions:

 

I am thinking I should just add this (NFS) to the same VM as Plex - would you agree?

 

Would this benefit from having ESXi on an SSD?

 

If it's on an SSD, would I need to have the VMs on the SSD too, to benefit from it?

 

What else can I do on here? I like to play about, so I don't mind tinkering :)

 

If I want to use this for pfSense and such, am I right in saying I would need to add an additional NIC to the server?

 

Thanks

 

 

 

I will continue to update this if it's of any interest to you guys.

+LogicalApex

You don't really want to waste SSD space on ESXi (IMHO). If the board supports USB booting I'd boot via USB, or SD if the board has a slot. ESXi loads into RAM and runs entirely from RAM, so once it is booted it doesn't care how slow the underlying storage medium is.

 

You can benefit from having your datastore on an SSD, though. This is where VMs load from, and they can really feel the speed of an SSD. I run my datastore on 2.5" 10K SAS drives. Just ensure your VMs aren't overworking your underlying storage medium.

 

For pfSense it is definitely best practice to have at least two NICs in the server, but it isn't absolutely mandatory, depending on what else you have in your environment. Since it is a VM, you could add multiple virtual NICs to the pfSense VM and use VLAN tagging in ESXi to differentiate them. You want a VLAN-capable switch for this, though.
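If you go the VLAN route, the tagging is done on the port groups. A sketch using esxcli - vSwitch0, the port group name and VLAN ID 10 are all example values:

```shell
# On the ESXi host: add a port group for the pfSense WAN and tag it with VLAN 10:
esxcli network vswitch standard portgroup add -v vSwitch0 -p "pfSense-WAN"
esxcli network vswitch standard portgroup set -p "pfSense-WAN" -v 10

# Verify the port groups and their VLAN IDs:
esxcli network vswitch standard portgroup list
```

The pfSense VM's virtual NIC is then attached to that port group, and the switch port facing the host carries the tagged VLAN.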

+BudMan

"You don't really want to waste SSD space on ESXi (IMHO)"

 

Waste what, 8GB? If you're worried about 8GB then you have bigger issues than the slowest possible boot you could ever get ;)

 

When you patch your ESXi install or do anything that requires a reboot, it is like watching paint dry. For what? I put in a 250GB SSD and boot off of it, and my current config still has 55GB free on the SSD - and I have 14 VMs configured currently, with 5 running 24/7/365.

 

If you're going to want to run your router on it, I would HIGHLY suggest adding NICs. There are 4-port or dual-port NICs that are easy to add and should give you enough ports. I have dual, single and onboard NICs on my N40L. What MicroServer do you have?

 

I kind of wish I had gotten this one instead of my dual - still thinking of picking it up ;)

http://smile.amazon.com/dp/B000P0NX3G/ref=wl_it_dp_o_pC_nS_ttl?_encoding=UTF8&colid=16DHDFCDCHI5U&coliid=I3QLRIH7GF0469

neufuse

"You don't really want to waste SSD space on ESXi (IMHO)"

 

Waste what 8GB..  If your worried about 8GB then you have other issues other than the SLOW ass possible boot you could possibly ever get ;)

 

When you patch your esxi install or do anything that would take a reboot - it is like watching paint dry..  For what??  I put in a 250GB ssd, and boot off of it.. And my current config have 55GB free still on my SSD... And I have 14 VMs configured currently with 5 running 24/7/365

 

If your going to want to run your router - I would HIGHLY suggest adding nics..  There is a 4 port nic or dual that are easy to add and should give you enough ports.  I have both dual and single and onboard on my N40L.. What microserver do you have??

 

I kind of wish I had gotten this one vs my dual -- still thinking of picking it up ;)

http://smile.amazon.com/dp/B000P0NX3G/ref=wl_it_dp_o_pC_nS_ttl?_encoding=UTF8&colid=16DHDFCDCHI5U&coliid=I3QLRIH7GF0469

Exactly! And SSDs are getting cheaper and cheaper now; I'd put any host system on an SSD just for that restart situation.

+BudMan

If you're running into a situation where that 8GB you saved by putting ESXi on USB actually matters, then you should have gotten a bigger SSD ;) Or add another one.

While you can do stuff with VLANs if you have a smart switch that supports them - sure, OK. But why, when you have slots open for adding NICs? I picked up the dual-port for like $40, and you can get a 4-porter for $100. I would suggest you add the physical ports - you will be glad you did, just from the ease of use and functionality.

I saw a marked improvement in moving files back and forth between the datastore and the physical network when I was able to break the vmkernel out onto its own physical port. I would say a minimum of 3: WAN, LAN and then vmkernel.

If you get that 4-porter you would have 5 physical ports to work with - and now you could do WAN, LAN, WLAN, DMZ and vmkernel all on physical ports. That would set you up for a nice lab/home setup, to be sure!! Or the ability to LAGG and get more bandwidth. If I had room for another card in my PC I would love to run 2 x 1-gig connections from the PC to the switch, with the LAN side of ESXi having 2 as well - just because, with the speed of SSDs, 1 gig is starting to become a bottleneck, and 10GbE is still a bit pricey for a home setup ;)

Haggis

Thanks for the replies. Some of it is still over my head, but I am getting there :)

 

I am thinking about getting a new SSD for my laptop, so I may just use my old one in the server.

 

 

OK, so correct me if I am wrong here...

 

So, in theory, if I got a dual-NIC card that would give me three NICs.

 

I would have the ISP connection coming into the server on one, and the connection from pfSense to the internal network on the other.

 

I could use the spare one for a DMZ and have, say, an Apache server running on it that is separate from the internal network and open to the outside?

 

 

 

BudMan, I have an N54L.

+BudMan

Yeah, if you get the dual card. This one, for example, should work:

http://www.amazon.com/Nc360t-Pcie-Dp-Gig-Adapter/dp/B008C4GKOG

It's $30; make sure you get the low-profile bracket. This gives you 3 physical ports in total - one for the internet, one for your local network, and then either one for a DMZ, or you could use it for your vmkernel (this is the IP of ESXi, how you access the datastore from your network, etc.).

Example - here is my ESXi host with 4 physical NICs. I have a DMZ, but it's only for VMs so it needs no physical network connection.

[attached screenshot: post-14624-0-56194200-1420732868.png]

Haggis

Can I ask:

 

those switches are software switches within ESXi, yeah?

 

So that allows you to create different networks, even within the same box?

+BudMan

Exactly - those are just virtual switches inside ESXi; the physical NIC then connects them to the physical network.

+LogicalApex

"You don't really want to waste SSD space on ESXi (IMHO)"

 

Waste what 8GB..  If your worried about 8GB then you have other issues other than the SLOW ass possible boot you could possibly ever get ;)

 

When you patch your esxi install or do anything that would take a reboot - it is like watching paint dry..  For what??  I put in a 250GB ssd, and boot off of it.. And my current config have 55GB free still on my SSD... And I have 14 VMs configured currently with 5 running 24/7/365

 

If your going to want to run your router - I would HIGHLY suggest adding nics..  There is a 4 port nic or dual that are easy to add and should give you enough ports.  I have both dual and single and onboard on my N40L.. What microserver do you have??

 

I kind of wish I had gotten this one vs my dual -- still thinking of picking it up ;)

http://smile.amazon.com/dp/B000P0NX3G/ref=wl_it_dp_o_pC_nS_ttl?_encoding=UTF8&colid=16DHDFCDCHI5U&coliid=I3QLRIH7GF0469

It is possible I would have a similar opinion if my boot times were short enough for the SSD to matter. My Dell R620 takes about 3 minutes just to get through the BIOS, between configuring memory, booting the RAID card, loading the lifecycle management controller, etc. Saving 30 seconds via an SSD on the ESXi portion of the boot is nonsensical in my situation. I can make better use of those resources on actual VMs.

 

I reboot my servers once every 6 months or so... If I am watching boots, I probably shouldn't have a server ;)

+BudMan

Well, it's a lot more than saving 30 seconds. But I understand your point - if you're already at 5 minutes, what does it matter if it's 8 ;)

But in my case you're talking 1 minute vs 5 minutes.

Haggis

I was trying to work out why it takes you 8 minutes to boot, then realised that it's not just ESXi booting - it then has to start all the VMs too.

 

See, I have not had that issue yet, as I only have one VM and I just start it manually for now to play with it :)

 

I am going to have a play about with it tonight if I can, and also try graphics passthrough too.

 

 

 

Using the internal switches and VLANs, could I create a software-based DMZ?

 

I can't think of anything I do where I would need separate networks ;)


Yeah, if you get the dual card. This one, for example, should work:
http://www.amazon.com/Nc360t-Pcie-Dp-Gig-Adapter/dp/B008C4GKOG

It's $30

 

 

A small issue:

 

when you go onto UK Amazon

 

this card is close to

+BudMan

As for different networks - the first one to break off would be your wifi. But sure, your DMZ can be VM-only; that is how I have it set up as well.

I never understand the cost difference, to be honest; if that card sells for $30 USD, then in the UK it should be like 20 quid.

+Fahim S.

I had an SSD with ESXi installed on it when I ran ESXi on my MicroServer - it really doesn't make a whole lot of difference besides making boot and shutdown times much shorter (great when patching). It is well worth using an SSD as a datastore, though - the difference is phenomenal.

 

I have a standard Intel desktop NIC which I use as my LAN-facing NIC, leaving the built-in NIC as my WAN-facing NIC.

http://www.amazon.co.uk/Intel-EXPI9301CTBLK-PRO1000-Network-PCIex/dp/B001CY0P7G/ref=sr_1_1?s=computers&ie=UTF8&qid=1420748243&sr=1-1&keywords=intel+desktop+nic

 

I can't remember if you are on fibre, but you can take an Ethernet cable straight from the Openreach-supplied VDSL modem to your MicroServer's WAN-facing NIC, which is connected to a vSwitch that has the WAN-facing interface of your pfSense machine on it, and have it do all of your routing duties. Rock solid and highly recommended.

 

Rather than a multi-port NIC, you could consider a smart switch instead if you are worried about price - you can get a 5-port model for less than

+BudMan

^ If all you want to add is 1 more port then you could get this - maybe it's cheaper in the UK?

http://www.newegg.com/Product/Product.aspx?Item=N82E16833106036

This should also work - it's like 26 quid:

http://www.amazon.co.uk/Intel-EXPI9301CTBLK-PRO1000-Network-PCIex/dp/B001CY0P7G

You then have physical ports for WAN and LAN, which you can share with the vmkernel. And if you need any more segments, sure, you could use VLANs if you have a smart switch.

offroadaaron

"You don't really want to waste SSD space on ESXi (IMHO)"

 

Waste what 8GB..  If your worried about 8GB then you have other issues other than the SLOW ass possible boot you could possibly ever get ;)

 

When you patch your esxi install or do anything that would take a reboot - it is like watching paint dry..  For what??  I put in a 250GB ssd, and boot off of it.. And my current config have 55GB free still on my SSD... And I have 14 VMs configured currently with 5 running 24/7/365

 

Who watches ESX boot, honestly! And once it's in RAM it doesn't need to access the hard drive. I wouldn't bother putting it on an SSD; I'd probably put it on a USB key, to be honest.

Stokkolm

Who watches ESX boot, honestly! And once it's in RAM it doesn't need to access the hard drive. I wouldn't bother putting it on an SSD; I'd probably put it on a USB key, to be honest.

Depends on how often you reboot it for updates. I've had a Dell R720 take 40 minutes to completely boot ESXi 5.5 off of a USB drive.

offroadaaron

Depends on how often you reboot it for updates. I've had a Dell R720 take 40 minutes to completely boot ESXi 5.5 off of a USB drive.

 

Most of that is probably the RAID and the machine POSTing, which would happen anyway. 40 minutes, though... something is wrong there, because I have installed ESX on USB on a MicroServer and for clients, and it has honestly never taken 40 minutes - that's not usual.

Stokkolm

Most of that is probably the RAID and the machine POSTing, which would happen anyway. 40 minutes, though... something is wrong there, because I have installed ESX on USB on a MicroServer and for clients, and it has honestly never taken 40 minutes - that's not usual.

Most of that time was actually watching ESXi load its components. I believe the issue had to do with jumbo framing in the end, though, so you're right about that.

binaryzero

Who watches ESX boot, honestly! And once it's in RAM it doesn't need to access the hard drive. I wouldn't bother putting it on an SSD; I'd probably put it on a USB key, to be honest.

Spot on.

 

 

Depends on how often you reboot it for updates. I've had a Dell R720 take 40 minutes to completely boot ESXi 5.5 off of a USB drive.

Think he'll be ok on his microserver. 

 

OP - if you're unsure of how to use VMware, find yourself a copy of CBT.Nuggets.VMware.vSphere.5.5.VCA-DCV.VCP5-DCV-PRODEV. It'll give you a good understanding.

Praetor

OP: I do have a Gen7 N54L HP MicroServer with ESXi 5.5 on it; it runs great from a good, reliable USB pen drive (or key, for you US guys :)). As for an SSD, it's great as a datastore or for caching (virtual flash caching: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2058983 - the feature requires an Enterprise Plus license), but for storing the hypervisor, not so much - only for boot times, as has been said in here. Since I rarely shut down my server, the advantages of using an SSD for storing the hypervisor would be negligible.

 

Having said that, remember that the free ESXi 5.5 has several limitations when compared with the paid versions.

+BudMan

You guys do understand we're talking about 8GB out of what, 256GB? That's 8/256 = 3%; if you're worried about 3% of your storage space, you're nuts!! Putting the hypervisor on USB has really one advantage: the ability to have multiple copies for backup/recovery, or to boot different OSes off different USB sticks easily on the fly, etc.

 

Putting it on USB to save space is beyond pointless. Now, if you were trying to use a 16GB SSD for a datastore - OK. But when you can pick up a current 256GB model with great speeds for $110 (the MX100 line, for example), why would you worry about 8GB?

 

I currently have 14 different VMs on my datastore and have no worries about space. Now, boot times off USB clearly depend on several factors - for one, the N40L only has USB 2, and those SUCK!!! They are like watching paint dry, even the fast ones. And why would you spend good money on a FAST USB stick that is just going to boot ESXi ;) Wouldn't that money be better spent on a bigger/faster SSD!! :)

 

Now, if you're on USB 3 and you have a decent stick, maybe it's not too bad. But I can tell you booting an N40L off USB is HELL for time. It's not that I watch it, etc.; it's that it's down for so long - when you run your router off of it, the internet is down while you're updating ESXi, which kind of extends the downtime...

Haggis

I am happy to leave ESXi on the normal hard drive; if I do get a new SSD for my laptop I will make the old one my datastore for VMs.

 

Someone mentioned plugging the BT fibre box right into the server for pfSense.

 

I have a Sky router there, so I would need to work out how to replace that first lol

Aergan

I am happy to leave ESXi on the normal hard drive; if I do get a new SSD for my laptop I will make the old one my datastore for VMs.

 

Someone mentioned plugging the BT fibre box right into the server for pfSense.

 

I have a Sky router there, so I would need to work out how to replace that first lol

 

I replaced my router with a VM back in October '14 and couldn't be happier.

My Openreach Huawei FTTC VDSL modem plugs straight into a dedicated Intel NIC for WAN.

I'm using Zentyal over pfSense, as I wanted an Ubuntu Server x64 VM to start with. They provide an apt repository, so you can build your VM yourself to better suit your needs / host resources.
