ESXi Host Storage Upgrade



There's not much I can help you with, but:

Installing to a flash drive is simply cost-effective. I'm seeing 160GB for about 40 bucks, and I'm sure you can find smaller drives at even lower prices.

Just a reflection :)

Where are you finding a 160GB flash drive for $40?

 

Personally, I've never once heard of installing ESX to a flash drive...

 

Also, OP: I highly recommend you stress to your management the importance of getting a bigger budget to do this project properly. In my experience, a lot of companies don't want to spend the proper amount on IT because they don't outwardly 'see' the value. This bites them in the ass when something goes wrong; suddenly, it's your fault.


Where that is useful, then sure, but in a home/lab setup I am not really that worried about decoupling. And there is nothing saying I can't boot ESXi off USB/SD and connect to the datastore on the disk for recovery options, failed updates, etc.

 

I don't have loads of USB sticks of decent size lying around. I have a 32GB USB 3 stick as my main one, and then older ones that I end up giving to friends and family who need stuff. My older 16GB stick would be useful for booting ESXi; it's USB 2, and it's bigger than my current USB 3 stick. But my son has it; I loaned it to him with video of his daughter's dance practice.

 

My point is that a blanket statement saying there is no reason not to install on USB/SD doesn't hold; I am on the other side of the fence. I can see no reason to do it that way ;) It just costs me more money and adds one more thing that could fail. The USB stick could fail, and then I am out of luck ;) hehehe

 

In a production setup where the ESXi host is never rebooted, sure, it makes sense. But I reboot mine all the time when doing stuff, plus power outages; it's not like it's in a datacenter with five nines of power supplied. I just shut it down this morning to update the firmware on the SSD from U3 to U5, and I will probably shut it down again tonight to flash the new modded BIOS I found. So while it doesn't make a drastic difference and the benefit is pretty minor, ESXi does come up a bit faster booting off HDD or SSD. So if I have to reboot it for whatever reason, that is two minutes less of my family asking if I am doing something to the internet, or if the internet is down ;)



We'll have to disagree on where it should be installed :) but now that both points have been laid out, the OP is free to decide.

 

I also don't have the luxury of turning my ESXi host off that frequently. I tend to reboot it once or twice a quarter at most, for stuff like ESXi 5.5 U1 installs or the like. Luckily for me, power outages are extremely rare. If I rebooted frequently, I could see the value in shaving some boot time off the system. Although on my Dell hardware the boot is so slow that an SSD wouldn't make you feel any better... it sits at a "configuring memory..." stage on bootup for 30 seconds or more.


Exactly, I do see your point but disagree as well ;) There are valid points to be made for both methods. It comes down to the user picking the method that best suits their setup.

Another point where USB/SD makes sense is when there is no local storage on the host, which is quite common in an enterprise setup. If there's no local storage, what would you rather boot the host from: an expensive HDD or SSD you put in the box just for boot, or a cheap USB/SD stick? Once the OS is loaded, it runs from RAM anyway.

You could always boot the host via PXE and not need any boot media in the host at all ;)
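As a rough illustration, PXE-booting ESXi typically means pointing DHCP at a TFTP server that serves the SYSLINUX loader, which then chains VMware's mboot.c32. This is only a sketch; the IPs, paths, and filenames below are assumptions, so check VMware's PXE documentation for your exact version:

```
# dhcpd.conf excerpt -- addresses are made up for illustration
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.150;
  next-server 192.168.1.10;     # TFTP server holding the boot files
  filename "pxelinux.0";        # SYSLINUX loader
}

# tftpboot/pxelinux.cfg/default -- chains VMware's mboot.c32
DEFAULT esxi
LABEL esxi
  KERNEL mboot.c32
  APPEND -c esxi/boot.cfg       # boot.cfg copied from the ESXi install media
```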


Keep the posts coming. I'm learning a lot more.

Budman: Could I set up the VMs to see the datastore as storage? I'm thinking that instead of forcing the VMs to be a specific size because the server needs that amount of space, I could make a 10GB VM that has access to storage on the datastore. That would keep the VMs tiny, since they are just running a server install of Linux with a few packages installed. It also gives me the ability to get at the data in case the "server" fails to boot or someone ###### it up. I was thinking I could map the "drive/volume/raid" as a Raw Device Mapping in the VM, but I don't know how that will work.

I am definitely looking to do a USB install/boot of ESXi; going to look into this more before committing.


After some reading, it looks like creating a virtual disk on the datastore and giving the VM access to that is a better idea than RDM, at least in my situation. If my understanding is incorrect, please correct me.
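A minimal sketch of that approach from the ESXi shell, assuming a datastore named datastore1 and made-up folder names (the same disk can be created through the VM's settings in the vSphere Client):

```shell
# Sketch only -- datastore/folder names are assumptions.
# Create a thin-provisioned 100 GB data disk on the datastore:
vmkfstools -c 100G -d thin /vmfs/volumes/datastore1/shared-data/data.vmdk

# Then attach data.vmdk to the VM as an existing hard disk in the vSphere Client.
# For comparison, an RDM is created as a pointer file to a physical device
# (-z = physical compatibility, -r = virtual); the naa device ID is a placeholder:
# vmkfstools -z /vmfs/devices/disks/naa.xxxx /vmfs/volumes/datastore1/myvm/rdm.vmdk
```

Thin provisioning means the vmdk only consumes datastore space as data is written, which fits the "keep the VMs tiny" goal.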


I would've thought, to keep it simple, you'd do the following:

 

- Shutdown all VMs

- Connect to host via vSphere client

- Plug a USB hard drive into the PC that has the vSphere client

- Browse the datastores and copy the VM folders onto the USB drive

- Install new hardware into server

- Fresh install of ESXi 5.5 U1

- Connect to host via vSphere client

- Browse datastore and copy back the VMs from the USB

- Add the VMs to your inventory

 

Yes, I'm sure you could recover from backups, and there are plenty of other options (for instance, if you had multiple hosts you'd just vMotion the VMs to another host), but this seems straightforward to me. I think I've done the process above once or twice in my time.
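If SSH is enabled on the host, the copy and re-register steps above can also be done from the ESXi shell instead of the datastore browser. A sketch, with assumed datastore and VM names:

```shell
# Assumed names throughout -- adjust to your datastores and VMs.
# Before the rebuild: copy the VM folder to temporary storage the host can see
cp -r /vmfs/volumes/datastore1/myvm /vmfs/volumes/tempstore/

# After the fresh install: copy it back and register it in inventory
cp -r /vmfs/volumes/tempstore/myvm /vmfs/volumes/datastore1/
vim-cmd solo/registervm /vmfs/volumes/datastore1/myvm/myvm.vmx
```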

 

Once you've got everything running, look at getting yourself some external storage (NAS/SAN) that fits in budget to do your Veeam backups onto.


So you're going to create another disk on your datastore, the one you want to get rid of? How does that help you? Or are you talking about using this one disk to copy your VMs to before you redo your array?


Do you have any spare HDD slots in your server? If you do, you could create another RAID array using the new disks, then create another datastore on that and copy the VMs over.

Alternatively, do you have a NAS available that can present an iSCSI disk or an NFS mount? Even home NAS units can do that these days. Present a new temporary disk and copy the VMs to that.
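For the NFS route, here's a sketch of mounting a temporary NFS datastore from the ESXi shell, with an assumed NAS address and export path (the same thing can be done in the vSphere Client under storage configuration):

```shell
# Assumed NAS IP and export path -- adjust for your NAS.
esxcli storage nfs add --host=192.168.1.50 --share=/export/vmtemp --volume-name=nfstemp

# Confirm the mount, then copy VMs to /vmfs/volumes/nfstemp
esxcli storage nfs list

# When finished:
# esxcli storage nfs remove --volume-name=nfstemp
```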

 

Also, beware if you are planning to go to ESXi 5.5: do not upgrade the virtual hardware on your VMs if you are not managing the host through vCenter. Once a VM is at virtual hardware version 10, you need the web client for any further changes to the VM, and you can't use the web client unless you're running vCenter.


So you're going to create another disk on your datastore, the one you want to get rid of? How does that help you? Or are you talking about using this one disk to copy your VMs to before you redo your array?

 

No. I'm saying that after the upgrade, instead of having VMs be a specific size because they need that much storage, I just want to be able to make VMs the size of the installed OS plus packages, and then have each VM access a "virtual disk" or the raw drive.

