ESXi Host Storage Upgrade



When I initially ordered this host, I wanted to equip it with 4 x 4TB drives in RAID 5 and leave it at that. But management thought we wouldn't use this server the way we have "for a while," and that by then buying hard drives would be feasible. So I was forced to use old enterprise drives we had lying around, which happened to be 4 x 1TB hard drives.

 

Well, after a year of having this host in production and getting major use out of it, we are now approved "to do whatever to get this thing running like it used to." So I got 4 x 4TB drives and a RAM kit and am all set as far as hardware goes. For software, we will be moving from a 5.1 license to a 5.5 license and taking advantage of the web client, because I hate firing up Windows just to manage the host.

 

Now, the problem I run into is moving this data from the RAID to an external drive. My best option is to attach another drive via internal SATA, have it show up in the vSphere client, and copy the data that way, or log into the shell and cp the data from one datastore to the other (the external drive hooked up via internal SATA).
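[Editor's note] If the shell route is taken, a sketch of cloning a VM's disk with vmkfstools rather than cp: vmkfstools preserves thin provisioning, where a plain cp of the -flat file would not. All paths and names below are made up for illustration, and the script only builds and prints the command as a dry run:

```shell
# Dry-run sketch: copy a VM disk from the old datastore to the external
# drive's datastore using vmkfstools. Substitute real datastore/VM names.
SRC=/vmfs/volumes/old-datastore/myvm/myvm.vmdk
DST=/vmfs/volumes/external-drive/myvm/myvm.vmdk

# -i clones the disk; -d thin keeps the copy thin-provisioned.
CMD="vmkfstools -i $SRC -d thin $DST"
echo "$CMD"   # drop the echo to actually run it on the host
```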

 

The question I have is: is there a better way to do this? I used Veeam to back up a VM that was shut down, 50GB provisioned with 24GB used, and it reported a copy speed of 20MB/s over Ethernet with the bottleneck being "source." This was while the host was up, running about 26 VMs, and actively being used. I figured the best and fastest way to back up and do this upgrade would be to use a physical drive hooked up to the host.

 

I also would like to know if installing ESXi to a flash drive is recommended for production use. I know it's preferred by many, especially since ESXi runs in memory and writes to the drive every 10 minutes. But I'm not sure how reliable it is. I know that all I have to do is replace the bad flash drive with a good one, rescan for datastores, manually add the VMs back into inventory, and turn them on. I just don't know what to expect as far as the life of the drive. I plan on buying a good-brand drive like a PNY, Corsair, or Kingston DataTraveler. Some input would be nice.

 

If anyone has any suggestions at all, please pitch in. I'm trying to limit this to one week of downtime. I really wish management would let me buy another host so I could have a proper ESXi setup and not a ghetto/home setup, but oh well, that's the least of my problems.

 

Thanks for reading!

Sikh


You plan to deploy a virtualization server whose VHDs are on RAID 5? With data deduplication you would be far better off with a 500GB SSD.

 

Depending on the operating systems, you can fit quite a bit more than you would expect.


Believe me, if I could use SSDs I would. But I'm not worried about storing the VMs on the RAID 5, because it's been working really well. The RAID card is compatible with ESXi, and the overall performance of every VM has been what I expected for the drives I'm currently using.

All of the servers except two are Linux servers; the two exceptions are dev Windows servers. Ubuntu and CentOS are the OSes of the Linux servers. They are development servers and web servers.


There's not much I can help you with, but:


Installing to a flash drive is simply cost effective. I'm seeing 160GB for about 40 bucks, and I'm sure you can find smaller drives at even lower prices.

Just a reflection :)


I'm not worried about price or capacity for the flash drive, more about reliability and the disaster recovery scenario.

Link to comment
Share on other sites

Using large drives in RAID 5 while saying you want reliability and performance shows a serious lack of knowledge. RAID 5 with large disks is a recipe for disaster: if one drive goes, you suddenly have no redundancy, you have to swap in a new drive, and a rebuild starts. If a URE (a bad block) occurs during this rebuild, kiss your virtual disk goodbye, and the chance of hitting a URE grows with drive size, since a rebuild has to read every remaining disk in full.

 

With 4TB drives it's almost certain that when a drive fails you will see a URE on rebuild, and you will have to build a new array and restore from backup.

 

(edited for clarity)


duddit2: Thanks for the clarification. I should add that I meant RAID 6; I already know why RAID 5 isn't suitable. I have also thought about using RAID 10 for this setup.

 

This is why I posted my situation, so people could correct me if I'm wrong, so thanks for catching the typo. I use RAID so much at work that I can't think straight afterwards. Just so you know, I manage 326TB of data, so RAID is used a lot, and soon it'll be "RAIN."

 

 


So how much data do you have to move? 2TB would be the max if you're using RAID 6 with only 4 x 1TB drives. 20MBps is horrific for an enterprise setup, really bad even for the most budgeted of home networks. Moving data between datastores via the vmkernel is not going to be very fast. You're probably better off connecting the disk you want to copy to a VM with a raw mapping, and then just copying the files.

Why can you not just leverage your backup: take a backup, put in your new array, and restore?

Where exactly were you seeing 20MBps? Is this an externally connected USB drive? What was the data path?

 

I was seeing 20MBps on the connected PC. Overall bandwidth in our network is NOT this bad. The ESXi host has 4 x 1Gb ports bonded, and the PC had 2 x 1Gb ports bonded. I was surprised how slow it was and figured it was because the host was working.

 

Backups: hahahahaha. I wish we had backups. The best backup I had was a cron job that shut down the VM at 3am on Saturday, tarred the VM, and copied it to a RAID on our file server. That was put on hold because we were running out of space on the file server, and in production vs. backups, production won. So yeah, backups: I wish.
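[Editor's note] A cold-backup cron job like the one described can be sketched roughly as below. The VM id, paths, and names are all placeholders, not the poster's real setup; vim-cmd is the ESXi-side tool for power operations. DRYRUN=1 prints the commands instead of running them:

```shell
# Hypothetical cold-backup script (would run from cron on the ESXi host).
DRYRUN=1
VMID=42                                   # find yours: vim-cmd vmsvc/getallvms
VMDIR=/vmfs/volumes/datastore1/myvm
DEST=/vmfs/volumes/fileserver-backup

run() { if [ "$DRYRUN" = "1" ]; then echo "$@"; else "$@"; fi; }

run vim-cmd vmsvc/power.shutdown "$VMID"  # clean guest shutdown
run tar -czf "$DEST/myvm.tgz" -C "$VMDIR" .
run vim-cmd vmsvc/power.on "$VMID"
```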

 

At most it's 1.2TB of data. I'll have a better estimate Friday, because some of the servers might be better to set up again, being web servers, with all the important data (databases) on the database server. All of the departments in question for these servers should get back to me Friday.

 

As far as raw mapping, I saw a response you posted about how to do it with the CLI, so I'm good with that. But are you saying I should raw map the datastore and the external drive I hook up via SATA? That's the only thing I'm not sure on.
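[Editor's note] For reference, a sketch of what a physical-mode raw device mapping looks like from the CLI: the external SATA disk gets mapped into a VM, not the datastore itself. The device name below is made up; list real ones under /vmfs/devices/disks/. The script only builds and prints the command:

```shell
# Dry-run sketch: create a physical-mode RDM pointer for a local SATA disk
# so it can be attached to a VM. DISK is a placeholder device name.
DISK=/vmfs/devices/disks/t10.ATA_____EXAMPLE_4TB_DISK
RDM=/vmfs/volumes/datastore1/backup-vm/external-rdm.vmdk

# -z = physical (passthrough) RDM; -r would create a virtual-mode RDM.
CMD="vmkfstools -z $DISK $RDM"
echo "$CMD"
```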


Budman,

 

I'm talking about the actual VMs. I am using Veeam to back them up, and the first 50GB VM I did, with only 26GB used, told me that it was backing up at a rate of 20MB/s and the bottleneck was the source. That's why I wasn't treating Veeam as a viable option. I will run it again, let it complete, and see if it gets faster, but I let it run for 5 minutes and wasn't very hopeful.


RAID 5 with 1TB disks is a no-no. :(

 

And no backups?? WTH!!! :o

 

There are several paths for this setup. One is to back up your VMs, destroy the RAID 5 array, build a new array (RAID 10, for example, though with only 4 disks just 8TB would be available), and restore your VMs into the new array. If you do this, run a test backup first to see how much time the total backup + restore would take (also consider the media used; and don't forget to DOUBLE BACKUP! I've seen migrations go bad because the only backup failed during the restore, so a double backup decreases your chances of a screw job).

 

Planning is key here.


Dude, I'm in the middle of doing this stuff myself actually, but on my home crap N40L with $40 NICs, and seeing better speeds than you ;)

 

Here is where I took the backup. Keep in mind this is the stock drive that came with the N40L, and I was VPN'd in through the pfSense running on the host, running this on my home box, etc.

 

[screenshot: Veeam backup job statistics]

 

Here is the restore: a smoking 160MBps restore rate. This is to the SSD datastore, but still over my crap single-gig NICs and a $100 switch. You've got something wrong if you're only seeing 20MBps on your backup.

 

[screenshot: Veeam restore job statistics]


I think he meant to say the processing rate.

 

Thanks Praetor!!

 

Budman, he's right. The processing rate is what I was looking at and was thrown off by. I'm going to run Veeam again tomorrow and see what my actual throughput is.


 

This is from a client of mine: 3 VMs backed up to an SMB share; those 20 MB/s must be severely bottlenecked.

[screenshot: Veeam backup job statistics]


Comes down to how you're doing it. I would assume you have a proxy set up on a VM that has access to the storage. What storage are you using, shared or local?

 

I guess he fired up Veeam Backup on his workstation like I did and was just using the FREE version with the backup-to-zip sort of thing, where the proxy is the box you installed it on and you're just going over the network as your transport mode.

 

[screenshot: Veeam backup proxy / transport mode settings]

 

Sure, if your proxy is on the host or in your VM infrastructure and has access to the storage. And where is the backup repository? Is it on storage the proxy is also connected to? Then sure, you can get some screaming backups. It's possible he could set up a proxy VM, connect his other disk to the VM he installs the Veeam proxy on, and use that as the backup destination.

 

Then his workstation is just managing the backup, and the proxy really does all the work, etc.

 

He would need to own Veeam or get the trial to be able to do that, but very good point on how he can speed up the process. As you saw, even over a home network, a backup of the VMs only takes a few minutes. Odd that he is only seeing 20MBps when on my little N40L, with its low-power CPU, I am seeing better in backup-to-zip mode.

 

My setup here at home doesn't have the resources to be set up the way it should be in an enterprise, etc.

 

As another option, can you shut down the VMs and just browse the datastore and download the VM dirs off the datastore? Note this backs up the provisioned size of the vmdk rather than just the data. Are these disks thin or thick? All mine are thin, and I hole-punch them every now and then to reduce the footprint (I did so specifically before moving to the SSD, etc.). But when you download off the datastore like that, it will be the full size of the vmdk even if only a small portion is used when thin. And the vmkernel doesn't really max out your pipe; I normally see about 260Mbps pulling from the datastore like that.
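[Editor's note] The hole-punching mentioned above can be sketched like this. The path is a placeholder; the VM must be powered off, and free space inside the guest should be zeroed first (e.g. sdelete on Windows, or writing and deleting a zero-filled file on Linux). Dry run only:

```shell
# Dry-run sketch: reclaim zeroed blocks in a thin vmdk before copying it.
VMDK=/vmfs/volumes/datastore1/myvm/myvm.vmdk
CMD="vmkfstools -K $VMDK"   # -K punches out zeroed blocks ("hole punch")
echo "$CMD"
```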


What is the hardware like in the host? Do you have a BBU in place and the write-back cache enabled? I have seen some very poor performance on RAID cards without a BBU with write-back enabled. Possibly this is affecting your setup as well?

 

What drives are you planning to use, and what are you using now? Are you attempting to address slowness in I/O performance?

 

I would recommend booting ESXi off of flash media. There really isn't any reason to waste disk space on ESXi directly. Your disaster scenario can vary depending on what your hardware supports. On my Lenovo host I boot from a USB flash drive, and my recovery option is as you described: add a new flash drive and pull the VMs back into inventory. On my Dell host I have ESXi booting from SD cards in RAID 1. This option allows me to hot-swap a failed SD card without any work needing to be done (provided they don't both fail at the same time).
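[Editor's note] The recovery step described here (re-adding VMs to inventory after swapping the boot media) can be scripted. A sketch, with DRYRUN printing the vim-cmd calls instead of running them on a real host:

```shell
# Hypothetical recovery sketch: after reinstalling ESXi on new flash media,
# walk the datastores and re-register every VM found.
DRYRUN=1
run() { if [ "$DRYRUN" = "1" ]; then echo "$@"; else "$@"; fi; }

for VMX in /vmfs/volumes/*/*/*.vmx; do
    [ -e "$VMX" ] || continue             # skip if the glob matched nothing
    run vim-cmd solo/registervm "$VMX"    # adds the VM back into inventory
done
```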

 

I honestly don't see any reason to do a disk based ESXi install unless you lack an internal USB/SD slot.

 

Whatever you buy, SD card or USB, get a larger size than needed. I use 16GB USB and 32GB SD cards at the moment. This over-provisioning should allow the cards to last longer, as the wear-leveling algorithms will have a lot more area to fall back on.

 

Get a backup strategy in place. Veeam is fantastic. Use something.


"I honestly don't see any reason to do a disk based ESXi install unless you lack an internal USB/SD slot."

 

Or you just don't have need of the space on the SSD or HDD you have in the first place. Why should I use a USB stick or SD media even if I had a slot? Those are better served in my pocket holding the data I need to carry around ;)

 

Why not boot from my SSD, when I don't have any use for all its space in the first place? It's a few GB that I would not use anyway.

 

[screenshot: datastore usage]

 

While I agree it is common to install ESXi to USB or another flash card/disk, if my datastore disk is going to be underutilized anyway, I see no reason to waste portable media that could be used for something more useful.


It completely decouples your ESXi install from your data, allowing you to recover from issues with less risk.

 

I also have more USB thumb drives lying around than I can ever hope to use. Throwing the USB 2 ones in the ESXi host gives them new life :). It's not likely I'll ever use those again at 12MB/s write when I can use my USB 3 drives with over 100MB/s write.

 

I had an issue on my Dell ESXi host where this was useful. I did a firmware upgrade on my NICs from a very old version to the latest. For whatever reason, the new firmware prevented my existing ESXi install from booting; it purple-screened late in the boot process. I didn't have any problem getting it back up and running without jeopardizing my data. Reinstalling ESXi 5.5u1 fixed the issue.

 

I do nightly backups, so I wasn't overly concerned, but it did save me the time of doing a full restore.
