
ESXi Host Storage Upgrade


34 replies to this topic

#1 Sikh

Sikh

    Neowin Addict!

  • Tech Issues Solved: 2
  • Joined: 11-March 07
  • Location: localhost
  • OS: Windows 7 / 10.8 / Ubuntu Server
  • Phone: Nexus 5 PA 4.4.2 / iPhone 5

Posted 21 April 2014 - 21:33

When I initially ordered this host, I wanted to equip it with 4 x 4TB drives in a RAID 5 and leave it at that. But management thought we wouldn't use this server the way we have "for a while", and that by then buying hard drives would be feasible. So I was forced to use old enterprise drives we had lying around, which happened to be 4 x 1TB.

 

Well, after a year of having this host in production and getting major use out of it, we are now approved "to do whatever to get this thing running like it used to". So I got 4 x 4TB drives and a RAM kit, and I'm all set as far as hardware goes. On the software side, we will be moving from a 5.1 license to a 5.5 license and taking advantage of the web client, because I hate firing up Windows just to manage the host.

 

Now, the problem I run into is moving the data off the RAID to an external drive. My best option is to attach another drive via internal SATA, have it show up in the vSphere client, and copy the data that way, or log into the shell and cp the data from one datastore to the other (the extra drive hooked up via internal SATA).
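If the shell route wins out, one wrinkle worth knowing: a plain cp of a thin-provisioned VMDK inflates it to full size on the target, while vmkfstools preserves the provisioning. A rough sketch, with all datastore and VM names invented for illustration:

```shell
# Hypothetical paths -- substitute your real datastore and VM names.
# Clone a powered-off VM's disk to the SATA-attached datastore,
# keeping it thin-provisioned (plain cp would inflate it):
mkdir -p /vmfs/volumes/external-ds/vm1
vmkfstools -i /vmfs/volumes/raid-ds/vm1/vm1.vmdk \
           /vmfs/volumes/external-ds/vm1/vm1.vmdk -d thin

# The small text files (.vmx, .nvram, logs) can be copied normally:
cp /vmfs/volumes/raid-ds/vm1/vm1.vmx /vmfs/volumes/external-ds/vm1/
```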

 

The question I have is: is there a better way to do this? I used Veeam to back up a VM that was shut down - 50GB provisioned, 24GB used - and it reported a copy speed of 20MB/s over Ethernet with the bottleneck listed as "source". This was while the host was up, running about 26 VMs, and actively in use. I figured the fastest way to back up and do this upgrade would be to hook a physical drive up to the host.

 

I also would like to know whether installing ESXi to a flash drive is recommended for production use. I know it's preferred by many, especially since ESXi runs in memory and only writes to the drive every 10 minutes or so, but I'm not sure how reliable it is. I know that if it fails, all I have to do is replace the bad flash drive with a good one, rescan for datastores, manually add the VMs back into inventory, and power them on. I just don't know what to expect as far as the life of the drive. I plan on buying a good brand like PNY, Corsair, or a Kingston DataTraveler. Some input would be nice.
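For what it's worth, those recovery steps can all be driven from the ESXi shell once the replacement flash drive has a fresh ESXi install on it. A sketch, with the datastore path and VM id invented for illustration:

```shell
# Rescan storage adapters so the existing VMFS datastores reappear:
esxcli storage core adapter rescan --all

# Register a VM back into inventory (path is hypothetical):
vim-cmd solo/registervm /vmfs/volumes/raid-ds/vm1/vm1.vmx

# List the inventory to find the new VM id, then power it on:
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/power.on 1
```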

 

If anyone has any suggestions at all, please pitch in. I'm trying to keep this to at most a week of downtime. I really wish management would let me buy another host so I could have a proper ESXi setup instead of this makeshift home-style one, but oh well, that's the least of my problems.

 

Thanks for reading!

Sikh




#2 TPreston

TPreston

    Neowinian Senior

  • Tech Issues Solved: 1
  • Joined: 18-July 12
  • Location: Ireland
  • OS: Windows 8.1 Enterprise & Server 2012R2/08R2 Datacenter
  • Phone: Nokia Lumia 1520

Posted 21 April 2014 - 21:45

You plan to deploy a virtualization server whose VHDs sit on RAID 5? With data deduplication you would be far better off with a 500GB SSD.

 

Depending on the operating systems, you can fit quite a bit more than you would expect.



#3 OP Sikh

Sikh

    Neowin Addict!

  • Tech Issues Solved: 2
  • Joined: 11-March 07
  • Location: localhost
  • OS: Windows 7 / 10.8 / Ubuntu Server
  • Phone: Nexus 5 PA 4.4.2 / iPhone 5

Posted 21 April 2014 - 21:55

You plan to deploy a virtualization server whose VHDs sit on RAID 5? With data deduplication you would be far better off with a 500GB SSD.

 

Depending on the operating systems, you can fit quite a bit more than you would expect.

 

Believe me, if I could use SSDs I would. But I'm not worried about storing the VMs on the RAID 5, because it's been working really well. The RAID card is compatible with ESXi, and the overall performance of every VM has been what I expected for the drives I'm currently using.

All but two of the servers are Linux; the two exceptions are Windows development servers. The Linux machines run Ubuntu and CentOS and act as development and web servers.



#4 +riahc3

riahc3

    Neowin's most indecisive member

  • Tech Issues Solved: 11
  • Joined: 09-April 03
  • Location: Spain
  • OS: Windows 7
  • Phone: HTC Desire Z

Posted 21 April 2014 - 22:06

There's not much I can help you with, but:

I also would like to know whether installing ESXi to a flash drive is recommended for production use. I know it's preferred by many, especially since ESXi runs in memory and only writes to the drive every 10 minutes or so, but I'm not sure how reliable it is. I know that if it fails, all I have to do is replace the bad flash drive with a good one, rescan for datastores, manually add the VMs back into inventory, and power them on. I just don't know what to expect as far as the life of the drive. I plan on buying a good brand like PNY, Corsair, or a Kingston DataTraveler. Some input would be nice.

Installing to a flash drive is simply cost-effective. I'm seeing 160GB for about 40 bucks, and I'm sure you can find smaller drives at even lower prices.

Just a reflection :)

#5 OP Sikh

Sikh

    Neowin Addict!

  • Tech Issues Solved: 2
  • Joined: 11-March 07
  • Location: localhost
  • OS: Windows 7 / 10.8 / Ubuntu Server
  • Phone: Nexus 5 PA 4.4.2 / iPhone 5

Posted 22 April 2014 - 17:03

There's not much I can help you with, but:
Installing to a flash drive is simply cost-effective. I'm seeing 160GB for about 40 bucks, and I'm sure you can find smaller drives at even lower prices.

Just a reflection :)

 

I'm not worried about the price or capacity of the flash drive so much as its reliability and the disaster-recovery scenario.



#6 duddit2

duddit2

    Neowinian Senior

  • Tech Issues Solved: 1
  • Joined: 24-January 10
  • Location: Manchester UK
  • OS: Windows 8 Pro

Posted 22 April 2014 - 17:26

Using large drives in RAID 5 while saying you want reliability and performance shows a serious gap in knowledge. RAID 5 with large disks is a recipe for disaster: if one drive goes, you suddenly have no redundancy; you swap in a new drive and a rebuild starts. If a URE (an unrecoverable read error, i.e. a bad block) occurs during that rebuild, kiss your virtual disk goodbye - and the chance of hitting a URE grows rapidly with drive size.

 

With 4TB drives, the odds are good that when a drive fails you will hit a URE on rebuild and have to build a new array and restore from backup.
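That risk can be put into rough numbers. A back-of-the-envelope sketch, assuming the commonly quoted consumer-drive URE rate of one error per 1e14 bits read (enterprise drives are often rated at 1e15, which changes the picture considerably):

```python
# Chance of hitting at least one URE while rebuilding a degraded
# 4-drive RAID 5 of 4 TB disks: the rebuild must read every bit
# of the 3 surviving drives.
ure_per_bit = 1e-14               # assumed consumer-class URE rate
bits_read = 3 * 4e12 * 8          # 3 surviving drives x 4 TB x 8 bits/byte

p_ure = 1 - (1 - ure_per_bit) ** bits_read
print(f"P(at least one URE during rebuild) = {p_ure:.0%}")  # roughly 62%
```

Even at roughly 62%, the rebuild fails more often than it succeeds, which is the argument for a second parity disk (RAID 6) or RAID 10 at these capacities.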

 

(edited for clarity)


Edited by duddit2, 22 April 2014 - 17:39.


#7 duddit2

duddit2

    Neowinian Senior

  • Tech Issues Solved: 1
  • Joined: 24-January 10
  • Location: Manchester UK
  • OS: Windows 8 Pro

Posted 22 April 2014 - 17:29

http://www.zdnet.com...mall-arrays/483



#8 duddit2

duddit2

    Neowinian Senior

  • Tech Issues Solved: 1
  • Joined: 24-January 10
  • Location: Manchester UK
  • OS: Windows 8 Pro

Posted 22 April 2014 - 17:36

and this http://www.zdnet.com...ing-in-2009/162



#9 OP Sikh

Sikh

    Neowin Addict!

  • Tech Issues Solved: 2
  • Joined: 11-March 07
  • Location: localhost
  • OS: Windows 7 / 10.8 / Ubuntu Server
  • Phone: Nexus 5 PA 4.4.2 / iPhone 5

Posted 22 April 2014 - 17:45

duddit2: Thanks for the clarification. I should add that I meant RAID 6; I already know why RAID 5 isn't suitable. I have also thought about using RAID 10 for this setup.

 

This is why I posted my situation - so people could correct me if I'm wrong - so thanks for catching the typo. I use RAID so much at work that I can't think straight afterwards. Just so you know, I manage 326TB of data, so RAID gets a lot of use, and soon it'll be "RAIN".

 

 



#10 +BudMan

BudMan

    Neowinian Senior

  • Tech Issues Solved: 89
  • Joined: 04-July 02
  • Location: Schaumburg, IL
  • OS: Win7, Vista, 2k3, 2k8, XP, Linux, FreeBSD, OSX, etc. etc.

Posted 22 April 2014 - 18:47

So how much data do you have to move? 2TB would be your max using RAID 6 with only 4 x 1TB drives. 20MBps is horrific for an enterprise setup - #### really bad for even the most budget of home networks. Moving data between datastores through the vmkernel is not going to be very fast. You'd probably be better off connecting the disk you want to copy to a VM with a raw mapping, and then just copying the files.

Why can you not just leverage your backups - take a backup, put in your new array, and restore?

Where exactly were you seeing 20MBps - was this an externally connected USB disk? What was the data path?

#11 OP Sikh

Sikh

    Neowin Addict!

  • Tech Issues Solved: 2
  • Joined: 11-March 07
  • Location: localhost
  • OS: Windows 7 / 10.8 / Ubuntu Server
  • Phone: Nexus 5 PA 4.4.2 / iPhone 5

Posted 22 April 2014 - 22:13

So how much data do you have to move? 2TB would be your max using RAID 6 with only 4 x 1TB drives. 20MBps is horrific for an enterprise setup - #### really bad for even the most budget of home networks. Moving data between datastores through the vmkernel is not going to be very fast. You'd probably be better off connecting the disk you want to copy to a VM with a raw mapping, and then just copying the files.

Why can you not just leverage your backups - take a backup, put in your new array, and restore?

Where exactly were you seeing 20MBps - was this an externally connected USB disk? What was the data path?

 

I was seeing 20MBps on the connected PC. Overall bandwidth in our network is NOT this bad - the ESXi host has 4 x 1Gb ports bonded and the PC had 2 x 1Gb ports bonded. I was surprised by how slow it was and figured it was because the host was busy.

 

Backups: hahahahahaha. I wish we had backups. The best backup I had was a cron job that shut the VM down at 3am on Saturday, tarred it, and copied it to a RAID on our file server. That was put on hold because we were running out of space on the file server, and in production vs. backups, production won. So yeah, backups... I wish.

 

At most it's 1.2TB of data. I'll have a better estimate Friday, because some of the servers may be easier to just rebuild - they're web servers, with all the important data (the databases) living on the database server. All of the departments in question should get back to me Friday.
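To size the copy window, a quick bit of arithmetic on what 1.2TB means at the observed 20MB/s versus a roughly saturated gigabit link (the 110MB/s figure is an assumption for comparison, not a measurement from this thread):

```python
# Transfer-time estimate for ~1.2 TB of VM data at two throughput levels.
data_bytes = 1.2e12  # about 1.2 TB

for label, rate_bps in [("observed 20 MB/s", 20e6),
                        ("~saturated gigabit, 110 MB/s", 110e6)]:
    hours = data_bytes / rate_bps / 3600
    print(f"{label}: {hours:.1f} hours")  # ~16.7 h and ~3.0 h respectively
```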

 

As far as raw mapping goes, I saw a response you posted about how to do it from the CLI, so I'm good there. But are you saying I should raw-map the external drive I hook up via SATA into a VM? That's the only thing I'm not sure on.



#12 +BudMan

BudMan

    Neowinian Senior

  • Tech Issues Solved: 89
  • Joined: 04-July 02
  • Location: Schaumburg, IL
  • OS: Win7, Vista, 2k3, 2k8, XP, Linux, FreeBSD, OSX, etc. etc.

Posted 23 April 2014 - 00:06

I thought you were talking about data - are you talking about the actual VMs? Dude, just grab Veeam Backup and pull them off with that - FAST!! Then put them back onto your new datastore.

 

http://www.veeam.com...n-software.html



#13 OP Sikh

Sikh

    Neowin Addict!

  • Tech Issues Solved: 2
  • Joined: 11-March 07
  • Location: localhost
  • OS: Windows 7 / 10.8 / Ubuntu Server
  • Phone: Nexus 5 PA 4.4.2 / iPhone 5

Posted 23 April 2014 - 00:16

Budman,

 

I'm talking about the actual VMs. I am using Veeam to back them up, and the first one I tried - a 50GB VM with only 26GB used - reported a backup rate of 20MB/s with the bottleneck being the source. That's why I wasn't counting on Veeam as a viable option. I will run it again, let it complete, and see if it speeds up, but I let it run for 5 minutes and wasn't very hopeful.



#14 Praetor

Praetor

    ASCii / ANSi Designer

  • Tech Issues Solved: 3
  • Joined: 05-June 02
  • Location: Lisbon
  • OS: Windows Eight dot One dot One 1!one

Posted 23 April 2014 - 00:17

RAID 5 with 1TB disks is a no-no... :(

 

and no backups?? WTH!!! :o

 

There are several paths for this setup. One is to back up your VMs, destroy the RAID 5 array, build a new array (RAID 10, for example - though with only 4 x 4TB disks that leaves just 8TB usable), and restore your VMs onto it. If you go this route, do a test backup first to see how long the total backup + restore would take (and consider the media involved). Also, don't forget to DOUBLE BACKUP! I've seen migrations go bad because the only backup failed during the restore, so a second backup cuts the chances of a screw job.

 

Planning is key here.


Edited by Praetor, 23 April 2014 - 00:26.


#15 +BudMan

BudMan

    Neowinian Senior

  • Tech Issues Solved: 89
  • Joined: 04-July 02
  • Location: Schaumburg, IL
  • OS: Win7, Vista, 2k3, 2k8, XP, Linux, FreeBSD, OSX, etc. etc.

Posted 23 April 2014 - 00:50

Dude, I'm in the middle of doing this stuff myself, actually - but on my crappy home N40L with $40 NICs, and I'm seeing better speeds than you ;)

 

Here's where I took the backup - keep in mind this is the ###### drive that came with the N40L, and I was VPN'd in through the pfSense instance running on the host, running this from my home box, etc.

 

backup.png

 

Here is the restore - a smoking 160MBps restore rate. This is to the SSD datastore, but still over my crap single-gig NICs and a $100 switch. You've got something wrong if you're only seeing 20MBps on your backup.

 

restore.png