47 replies to this topic

#1 riahc3

riahc3

    Neowin's most indecisive member

  • Tech Issues Solved: 11
  • Joined: 09-April 03
  • Location: Spain
  • OS: Windows 7
  • Phone: HTC Desire Z

Posted 07 October 2013 - 11:12

(Thread dedicated to photo-14624.gif)

Hello,

Before I get started, please let's not get our knowledge mixed up with emotions just because someone prefers to do this or that a certain way...

Also, I do want to make this clear from the start: RAID is NOT a backup solution. That is not its purpose, and it is indeed important to have both offline (completely offline, not connected to any outside source and/or network) and online backups.

This spoiler contains info about RAID. Skip it if it's not of interest or you already know how it works:
Spoiler


Having said that, I've been using RAID5 for a few years. It has saved me quite a few times when a drive has failed. Its read/write performance is not the best (more so with a slow RAID card), but it serves its purpose.

Now, the guru I've dedicated this thread to (and a couple of others) have suggested using different RAID levels. I want one that gives me the same as RAID5 but better (that doesn't even make any sense, but it just came out like that).

Let's (peacefully) debate this :)

Thanks


#2 +BudMan

BudMan

    Neowinian Senior

  • Tech Issues Solved: 106
  • Joined: 04-July 02
  • Location: Schaumburg, IL
  • OS: Win7, Vista, 2k3, 2k8, XP, Linux, FreeBSD, OSX, etc. etc.

Posted 07 October 2013 - 13:45

Already went over some of them in your other thread..

 

Why do we need to repeat ourselves?

 

I am more interested in your statement "It has saved me quite a few times when a drive has failed", after you start out your topic clearly in agreement that RAID is not a backup solution.

 

So what did RAID save you from? Not having to restore from backup? If you were using just a normal drive pool (my critical files are on multiple disks in the pool), the only thing that would have had to be restored from backup is the stuff on that disk that I deemed non-critical in the first place.

 

As I stated in your other thread: what portion of your data is deemed "critical"? It is not always cost-effective or good for performance to keep parity on data that is not critical.

 

The other aspect where RAID fails is that you're spending $$ for 9TB of storage - when you currently have 1TB? Unless you're going to ramp up your needs very, very quickly, you have $ sitting there wasted, using electricity, just waiting to fail ;)

 

So myself and others have already gone over some of the other options - how about you tell us why those options don't sit well with you, other than your legacy attachment to an old technology.



#3 StrikedOut

StrikedOut

    Outside the box

  • Joined: 09-December 08
  • Location: Southampton

Posted 07 October 2013 - 14:14

I use a mixture of RAID 1 and 5: RAID 1 for the OS and files not accessed very often, and RAID 5 for file storage, mainly due to only dropping ~33% of the storage. It has nothing to do with added protection, as both can only recover from a single drive failure.

 

Example: I am putting together a proposal (in its initial stages) for 2 servers to be used for VMs and archived data. I am looking at HP DL380s with 8 3.5" drive bays. I plan to use RAID 1 for the OS, ISOs and similar files that will be rarely used, and then a 3-drive RAID 5 with either 3 or 4TB drives for the VMs and the archive data, leaving 3 drives spare. By the time this RAID 5 has been used up, I will be able to buy larger-capacity drives for less and create a new RAID 5 with larger capacity than the original, and I expect to be replacing the server before I fill that space up.

 

We are only a small office, hence doing things like this. We back up to the cloud and are also planning on creating a local backup for a server failure or similar. If it were a larger enterprise, I would be looking at clustering with SANs and similar technology.



#4 pupdawg21

pupdawg21

    Neowinian

  • Joined: 16-June 09

Posted 07 October 2013 - 14:46

RAID 1, RAID10, or RAID6 is what I typically recommend.

 

RAID-5 has a very high chance of a second disk failing during a rebuild on large arrays with high-capacity 2TB+ disks, rendering all of your data lost.

 

RAID-1 for boot, or for mostly sequential data that you want to protect.

 

RAID-10 or RAID-6 for everything else.

 

RAID-10 has better performance, better protection from disk failures and faster recovery than RAID-6 (since a rebuild is just a straight copy operation), but you lose 1/2 of your capacity.

 

RAID-6 is a decent compromise, capable of surviving 2 failed disks.
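The trade-offs listed above can be summed up with a quick back-of-the-envelope calculation of usable capacity versus guaranteed fault tolerance. This is a rough sketch assuming identical disks and two-way mirrors for RAID 1/10 (the `raid_summary` helper is mine, purely for illustration):

```python
# Back-of-the-envelope usable capacity / guaranteed fault tolerance for
# common RAID levels, assuming n identical disks and two-way mirrors.

def raid_summary(level: str, n: int, size_tb: float):
    if level == "raid1":
        assert n == 2, "simple two-way mirror"
        usable, tolerance = size_tb, 1
    elif level == "raid5":
        assert n >= 3
        usable, tolerance = (n - 1) * size_tb, 1     # one disk's worth of parity
    elif level == "raid6":
        assert n >= 4
        usable, tolerance = (n - 2) * size_tb, 2     # two disks' worth of parity
    elif level == "raid10":
        assert n >= 4 and n % 2 == 0
        # Guaranteed to survive 1 failure; may survive up to n/2 failures
        # if they all land in different mirror pairs.
        usable, tolerance = (n // 2) * size_tb, 1
    else:
        raise ValueError(f"unknown level: {level}")
    return usable, tolerance

for level, n in [("raid1", 2), ("raid5", 3), ("raid6", 6), ("raid10", 4)]:
    usable, tol = raid_summary(level, n, 3.0)
    print(f"{level}: {n} x 3TB -> {usable:g}TB usable, {tol} guaranteed failure(s)")
```

So a 3x3TB RAID 5 and a 4x3TB RAID 10 both net 6TB usable, which matches the ~33% capacity loss mentioned earlier for a 3-drive RAID 5.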



#5 Roger H.

Roger H.

    Neowinian Senior

  • Tech Issues Solved: 22
  • Joined: 18-August 01
  • Location: Germany
  • OS: Windows 8.1
  • Phone: Nexus 5

Posted 07 October 2013 - 14:55

I use a mixture in my server setup at work also.

 

OS  = RAID1 (2 x 136GB SAS 15,000RPM)

Data = RAID10 (4 x 300GB SAS 15,000 RPM)

 

I'll hopefully move those 300GB drives to the OS RAID and then upgrade the RAID10 to 4x1TB drives. :)

 

As to why - the OS is imaged and backed up onsite and offsite in the event of fire or sprinkler malfunction! (GRRRRR :p ) For just a simple HDD crash, however, we can keep running till the drive gets replaced.

 

The data drive - that has some VHDs on it, so it needed faster storage, but I also wanted redundancy in the event of HDD failures. RAID10 is faster than RAID5 for WRITES, so I went that route instead of RAID5. It has the side benefit of allowing 2 drives to fail (only if in separate mirrors - so one of each pair in my case of 4 drives), but I have drives on standby just in case, so if one fails I can replace it before the others start to.



#6 +Bryan R.

Bryan R.

    Neowinian Senior

  • Tech Issues Solved: 1
  • Joined: 04-September 07
  • Location: Palm Beach, FL

Posted 07 October 2013 - 15:02

I'm not sure what there is to debate? All the different RAID levels have their pros and cons, some with more cons than others but it's all common information.



#7 StrikedOut

StrikedOut

    Outside the box

  • Joined: 09-December 08
  • Location: Southampton

Posted 07 October 2013 - 15:21

.....

 

RAID-5 has a very high chance of a second disk failing during a rebuild on large arrays with high-capacity 2TB+ disks, rendering all of your data lost.

.....

 

This had completely slipped my mind! Still a simple change for my server setup: 2x RAID 1. One for the VMs, the other for the archive data, perhaps...



#8 +Bryan R.

Bryan R.

    Neowinian Senior

  • Tech Issues Solved: 1
  • Joined: 04-September 07
  • Location: Palm Beach, FL

Posted 07 October 2013 - 15:47

This had completely slipped my mind! Still a simple change for my server setup: 2x RAID 1. One for the VMs, the other for the archive data, perhaps...

You should just stay away from RAID5 for the most part. Three HDDs in a RAID5 barely give you any performance improvement over a single drive or a mirror, and you just complicate the process, with parity adding overhead. RAID10 is much better with I/O and is what you should be using for heavy use.

 

Pretty much the only scenarios where I use RAID5 are with SSDs, since the performance is so staggering, or with small HDD arrays for backups - but if they get to more than about 6 drives, I'll use RAID6.



#9 StrikedOut

StrikedOut

    Outside the box

  • Joined: 09-December 08
  • Location: Southampton

Posted 07 October 2013 - 15:56

You should just stay away from RAID5 for the most part. Three HDDs in a RAID5 barely give you any performance improvement over a single drive or a mirror, and you just complicate the process, with parity adding overhead. RAID10 is much better with I/O and is what you should be using for heavy use.

 

Pretty much the only scenarios where I use RAID5 are with SSDs, since the performance is so staggering, or with small HDD arrays for backups - but if they get to more than about 6 drives, I'll use RAID6.

 

This server chassis comes in either 8 3.5" bays or, if I remember correctly, 16 2.5" bays, so I will be having another think about the setup when I take another look at this project, which will be very soon. I may take a look at RAID1 with 2 SSDs for the OS, then RAID10 with the largest drives I can find for the system, leaving 10 bays to expand with. Food for thought though.



#10 +BudMan

BudMan

    Neowinian Senior

  • Tech Issues Solved: 106
  • Joined: 04-July 02
  • Location: Schaumburg, IL
  • OS: Win7, Vista, 2k3, 2k8, XP, Linux, FreeBSD, OSX, etc. etc.

Posted 07 October 2013 - 16:07

I believe this discussion is more related to a home or small office - not an enterprise setup. This is the track I took in the other thread, where he was asking for a 4-bay NAS; neither model he was looking at would be used in an enterprise.

So I look at this from the point of view of the type of files I have in my home, and what I serve up off my storage. These are media files, video and music mostly. All of these have no need of parity, since if they are lost I can replace them from their original media sitting on my shelf or, if need be, get them again via other channels ;)

Now, what is critical is a small subset of these files - my home videos, for example. These I have backed up in multiple locations on different storage: cloud, another disk in a different system, optical media on my shelf, another copy at my son's home, etc. So why should I create parity for, say, my rip of Scarface or my Grateful Dead CDs? Now, for peace of mind, these "critical" files are also duplicated onto another disk in the pool automatically, so you get the same sort of protection you get with RAID 1 while only using a subset of your storage pool for these non-replaceable "home movies".

Money spent on that parity seems wasted to me; if the drive where those files are stored died, I could just re-rip them (replace from my backup). There is no critical need for these files to be online in case of disk failure - which is the purpose of RAID. RAID has little use in a home setup, where only a subset of the files in storage is considered of a critical nature and needs to stay online even with a hardware failure such as a disk.

In his example of 1TB of storage - why would RAID 1 not be the better option? He uses 2x3TB or even 2x2TB, and he covers his storage needs at lower cost while still having room for growth that should cover him for quite a bit of time.

RAID 5 is better suited when you need a specific amount of storage but cannot achieve it within specific cost constraints with a mirrored setup. Say he needed 6TB of storage - well, there are no 6TB disks as of yet. So he could do, say, 4x3TB in RAID 10 or 5, or he could do 3x3TB in RAID 5, or 4x2TB in RAID 5, etc. But again, what amount of that storage requires parity? If all of it, then sure, RAID 10 or 5 might make sense.

Or what if he has only 1TB of critical data and 5TB of stuff that is nice to have digital access to - like movies and music? I could accomplish that with 2x4TB in a pool where my 1TB is duplicated on each disk. This leaves me 6TB of storage - 5 of it for my other stuff and 1TB of growth - at a much lower cost and with better flexibility, since I only need 2 disks. And at such time that I need more space, I could add another disk to the pool - its connection and size do not matter; it could be, say, a 2TB eSATA or even USB disk. Now, if I wanted, I could duplicate my 1TB of critical data to all 3 disks in the pool and still have an extra 1TB to play with. Lots of different scenarios are viable in the growth of my storage pool.
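The pool arithmetic above (2x4TB with 1TB duplicated, leaving 6TB) can be sketched in a couple of lines. Here `copies` is just my name for how many disks each critical byte is kept on - it is not any particular product's terminology:

```python
# Usable space in a simple drive pool: each "critical" byte is stored on
# `copies` disks; everything else is stored once. Sizes are marketed TB,
# so real usable space will be a bit lower.

def pool_free_tb(disks_tb, critical_tb, copies):
    return sum(disks_tb) - critical_tb * copies

# 2 x 4TB pool, 1TB duplicated onto both disks: 8 - 2 = 6TB left over
print(pool_free_tb([4, 4], critical_tb=1, copies=2))     # 6
# Add a 2TB disk and keep 3 copies of the critical data: 10 - 3 = 7TB left
print(pool_free_tb([4, 4, 2], critical_tb=1, copies=3))  # 7
```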

Not having to put a minimum of 3 disks into use all at the same time allows me to grow my storage using the size and connection type that gives me the best bang for the buck. As we all know, disks only get bigger, faster and cheaper next month. Such a methodology allows me to stagger disk purchases to take advantage of lower costs when I actually need the storage, without having to calculate how much I need to put online now to have what I need 2 years from now.

This can also allow for retirement of your OLD disks before they fail, as you just naturally grow your storage, replacing older/slower/smaller disks with faster/bigger ones while not requiring more slots.

If need be, I can move the disks in my pool to a new box without having to worry about the RAID controller in it, or the lack of one. Say I need to take a bunch of media to a remote location - I can just take the disk out of the pool and access the files directly via anything that can read the filesystem I used - in my case, just common NTFS.

The software I am using monitors the disks and can notify me of possible issues, be it physical issues reading sectors on the disks or SMART information pointing to possible failure, etc. It can even move files off those disks if space is available elsewhere in the pool.

Let's take a look at your 4x3TB - from the math I have seen, there is something like a 56% chance that in reading 10TB of data you will encounter an unreadable bit and your rebuild will fail. So when your 1 disk fails, it's a coin toss whether you're going to be able to rebuild the array from that parity you spent good money on creating. Also, you more than likely built that array from disks purchased all at the same time, most likely from the same batch - and once 1 disk in a batch fails, the probability of another disk in that same batch failing increases, etc.
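That ~56% figure lines up with the usual back-of-the-envelope URE calculation for consumer disks rated at one unrecoverable read error per 10^14 bits. This is only a sketch: it assumes independent bit errors at the quoted rate, which real drives don't strictly obey:

```python
import math

# Chance of hitting at least one unrecoverable read error (URE) while
# reading `data_tb` terabytes, assuming independent bit errors at the
# 1-in-10^14 rate commonly quoted for consumer drives.

def rebuild_failure_prob(data_tb: float, ure_per_bit: float = 1e-14) -> float:
    bits = data_tb * 1e12 * 8                      # marketed TB -> bits
    # P(at least one URE) = 1 - (1 - p)^bits, computed stably in log space
    return 1.0 - math.exp(bits * math.log1p(-ure_per_bit))

print(f"10TB read:  {rebuild_failure_prob(10):.0%}")    # about 55%
print(f"100TB read: {rebuild_failure_prob(100):.2%}")   # near-certain
```

The same formula reproduces the near-certainty quoted later for a 100TB rebuild.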

What sort of disks are you using to create this RAID in the first place - are they enterprise quality, designed to be in an array where they are read from and written to constantly? Those disks are normally more costly; does the added cost make sense in a home setup used to serve media files?

It's great that you have had success with RAID 5 in the past; that does not mean it meets the needs of today, or makes sense with the size and speed of the disks that are available today and the other ways of merging them together so that their combined space is accessible in one location.

I'm not talking enterprise, where files need to be online 24/7/365 or money is lost - I'm talking a home or small office, etc. Small budgets, etc. Even there you're seeing the enterprise move away from typical RAID arrays, as the larger these disks get, the more the likelihood of failure on a rebuild grows. From the math I have seen, if you're talking 100TB, it's like 99.9% certain you're going to hit an unreadable bit trying to rebuild the array on a disk failure.

edit: BTW, for anyone curious, I am using https://stablebit.com/ and cannot say enough about their support. It just freaking rocks!! For the small cost of their software, you cannot find better support. I recently ran into some issues using a 3TB 4K-sector disk in my N40L, where I use passthru or physical RDM to give the VM access to the disk so that it can read SMART, etc. The Windows VM just was not seeing the full size of the disk or the GPT information correctly. Now, ESXi saw it as 3TB no problem, and could manipulate partitions on it just fine using partedUtil, etc. But Windows was reporting it as -128MB in size, or 0 in Disk Manager. If I connected it to a Linux VM, I had no problems using gdisk to manipulate and verify the GPT, and parted to create partitions, etc. So I just created the 3TB partition in Linux and then attached the disk to my Windows VM. Working great - but the scanner portion of their software was just using the info Windows was giving about the disk, which was not correct. So after a few days chatting with them via their support system, the developer created some new beta versions of the scanner software that look to the disk directly for info when Windows reports odd information. Works great now: scanner and pool both report the correct size of the disk, scanning of all the sectors works, etc. They even offered to remote in and take a look if need be to get their software working, even though the issue is clearly Windows and/or ESXi passthru. In the long run there was no need for this - but we did have the meeting scheduled, etc. I cannot really say enough good things about their product and support.

#11 stokhli

stokhli

    Neowinian

  • Joined: 09-August 07

Posted 07 October 2013 - 16:16

I work in the support department for a large company and I typically deal with all the RAID issues that come around.

 

I want to first start out by saying that the performance hit with RAID 5/6 isn't much of a factor anymore, given the speed and resources of modern-day RAID controllers.

 

RAID 10 is good, but if you don't maintain it, it has a very big weakness. People like to tout how it can survive half the drives failing; however, they always neglect to mention that it can only survive 1 drive from each RAID 1 leg. I've seen it too many times: a drive from a RAID 1 leg fails, but the replacement can't rebuild due to errors on the other member. When that happens, there's nothing you can do other than start that array over from scratch.
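The point about mirror legs can be made concrete by enumerating all double-disk failures: a RAID 10 only dies when both failed disks land in the same mirror pair. This is a toy enumeration (helper name is mine) assuming two-way mirrors and equally likely failures:

```python
from itertools import combinations

# Fraction of equally likely double-disk failures that a RAID 10 made of
# `n_pairs` two-way mirrors survives: it dies only when both failed disks
# are in the same mirror pair.

def raid10_survives_two_failures(n_pairs: int) -> float:
    disks = [(pair, side) for pair in range(n_pairs) for side in (0, 1)]
    outcomes = list(combinations(disks, 2))
    survived = sum(1 for a, b in outcomes if a[0] != b[0])
    return survived / len(outcomes)

print(f"4 disks:  {raid10_survives_two_failures(2):.0%}")   # 4 of 6 -> 67%
print(f"12 disks: {raid10_survives_two_failures(6):.0%}")   # better odds, never 100%
```

So "survives half the drives failing" is only the best case; any given second failure is a gamble on which leg it hits.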

 

RAID 60 is a good solution if you want to go with multiple drives, but I would keep the number of drives limited to about 12 (and with hot spares - you have to have hot spares). Too many drives basically just increases the chances of complete failure exponentially.

 

However, I prefer doing a RAID setup on ZFS to hardware RAID. I used to hate software RAID (and still do, other than RAID-Z2/Z3).



#12 goatsniffer

goatsniffer

    Supercalifragilisticexpialidosh

  • Joined: 11-January 04
  • Location: New York, USA

Posted 07 October 2013 - 16:51

RAID should be used to increase storage performance, and to decrease downtime caused by hardware failure. You should also be using a high-quality dedicated controller for RAID. For any RAID setup, you should be doing automatic disk and array verification, as well as backing up anything you cannot bear to lose to another location.

 

On my workstation I use a RAID-0 configuration, where all the critical data is backed up to a network location. The purpose is to combine storage into one volume, and to increase storage performance.

 

On my home server I use two 4-disk RAID-5 arrays (though I recommend RAID-10 to everyone else). The reason being that, using two 5-in-3 SATA bays, I have capacity for 10 disks, and I prefer the increased storage capacity to the benefits of RAID-10. I back up what is important to an external disk, and have only myself to blame should there be a catastrophic failure.

 

In my professional experience, hot spares are a bad idea. You end up depending on a disk that is as old as the one failing to sustain a trouble-free array rebuild, with a typically large amount of data being written sequentially. In one case, I was evaluating a company's SBS server and decided to run a disk integrity verification task on their RAID-5 hot spare. It failed. IMO, keeping a pre-verified spare on hand and having the array status notifications properly configured is better.



#13 primexx

primexx

    Neowinian Senior

  • Tech Issues Solved: 6
  • Joined: 24-April 05

Posted 07 October 2013 - 17:17

I use mirrored right now, maximal redundancy.



#14 OP riahc3

riahc3

    Neowin's most indecisive member

  • Tech Issues Solved: 11
  • Joined: 09-April 03
  • Location: Spain
  • OS: Windows 7
  • Phone: HTC Desire Z

Posted 07 October 2013 - 17:19

Hello :)

Already went over some of them in your other thread..

Why do we need to repeat ourselves?

I am more interested in your statement "It has saved me quite a few times when a drive has failed", after you start out your topic clearly in agreement that RAID is not a backup solution.

Well, it has saved me in the sense that a disk fails and I do not lose any data. Once a disk fails, my procedure is the following (and I know most of you aren't going to like this):

1: Make sure important backups are up to date (in all of the failures I've had with one HDD, the backups were extremely up to date, so no worries there)
2: Send off the HDD to get a replacement
3: Do not use the PC at all.

Now, number 3 must KILL some of you inside, saying "Double you tee ef! What are you doing without your PC for at least a week?". The truth is that most of that RAID is "replaceable data". There isn't anything life-changing on it (other than the important backups, which I always keep up to date). All I will waste is time gaining it all back. Besides, it's also nice to disconnect a bit from the PC and enjoy other aspects of life.


So what did RAID save you from? Not having to restore from backup? If you were using just a normal drive pool (my critical files are on multiple disks in the pool), the only thing that would have had to be restored from backup is the stuff on that disk that I deemed non-critical in the first place.

As I stated in your other thread: what portion of your data is deemed "critical"? It is not always cost-effective or good for performance to keep parity on data that is not critical.

The other aspect where RAID fails is that you're spending $$ for 9TB of storage - when you currently have 1TB? Unless you're going to ramp up your needs very, very quickly, you have $ sitting there wasted, using electricity, just waiting to fail ;)

So myself and others have already gone over some of the other options - how about you tell us why those options don't sit well with you, other than your legacy attachment to an old technology.

Well, from a certain point of view, it really saved me from nothing. Time is wasted. Efficiency is nulled as I don't use my PC (I do enjoy the break, so it's 50% on purpose and another 50% not). I guess it comes down to pure laziness and the ability to plug in a new drive, say "F it", and have it do it all by itself with hardly any intervention. That might not be the best way to do things but...

Those 9TB are really, as I mentioned, future-proofing. When I got them, I was working with files of less than 1GB; since I now have room to "move around", I'm working with 5GB+ files. I don't feel the need to watch my space - something I REALLY hated in the past. When I needed new space, I would have to make a 2-hour trip or wait a week for a new hard drive to arrive. That personally really got to me.


I'm not sure what there is to debate? All the different RAID levels have their pros and cons, some with more cons than others but it's all common information.

That's what the thread is about: listing the pros/cons in situations myself and others don't see...


I believe this discussion is more related to home or small office - not an enterprise setup. This is the track I took in the other thread. Where he was asking for a 4 bay nas, neither model he was looking at would be used in an enterprise.

Yes, this is SOHO. I would have other ideas to implement in an enterprise.


So I look at this from the point of view of the type of files I have in my home, and what I serve up off my storage. These are media files, video and music mostly. All of these have no need of parity, since if they are lost I can replace them from their original media sitting on my shelf or, if need be, get them again via other channels ;)

Now, what is critical is a small subset of these files - my home videos, for example. These I have backed up in multiple locations on different storage: cloud, another disk in a different system, optical media on my shelf, another copy at my son's home, etc. So why should I create parity for, say, my rip of Scarface or my Grateful Dead CDs? Now, for peace of mind, these "critical" files are also duplicated onto another disk in the pool automatically, so you get the same sort of protection you get with RAID 1 while only using a subset of your storage pool for these non-replaceable "home movies".

Money spent on that parity seems wasted to me; if the drive where those files are stored died, I could just re-rip them (replace from my backup). There is no critical need for these files to be online in case of disk failure - which is the purpose of RAID. RAID has little use in a home setup, where only a subset of the files in storage is considered of a critical nature and needs to stay online even with a hardware failure such as a disk.

To me, I'd rather waste money on parity for replaceable data than have to go out and buy hardware 2+ hours away just to replace my storage. Maybe I'm failing to realize something obvious...


In his example of 1TB of storage - why would RAID 1 not be the better option? He uses 2x3TB or even 2x2TB, and he covers his storage needs at lower cost while still having room for growth that should cover him for quite a bit of time.

I believe it's a pure act of laziness on my part: instead of having 1TB of storage and, when I run out of room (let's say I already have another HDD), just plugging it in, why not have it plugged in already, waiting? Though you brought up a good point about the electricity/wear-and-tear of drives not being used...


RAID 5 is better suited when you need a specific amount of storage but cannot achieve it within specific cost constraints with a mirrored setup. Say he needed 6TB of storage - well, there are no 6TB disks as of yet. So he could do, say, 4x3TB in RAID 10 or 5, or he could do 3x3TB in RAID 5, or 4x2TB in RAID 5, etc. But again, what amount of that storage requires parity? If all of it, then sure, RAID 10 or 5 might make sense.

Or what if he has only 1TB of critical data and 5TB of stuff that is nice to have digital access to - like movies and music? I could accomplish that with 2x4TB in a pool where my 1TB is duplicated on each disk. This leaves me 6TB of storage - 5 of it for my other stuff and 1TB of growth - at a much lower cost and with better flexibility, since I only need 2 disks. And at such time that I need more space, I could add another disk to the pool - its connection and size do not matter; it could be, say, a 2TB eSATA or even USB disk. Now, if I wanted, I could duplicate my 1TB of critical data to all 3 disks in the pool and still have an extra 1TB to play with. Lots of different scenarios are viable in the growth of my storage pool.

Not having to put a minimum of 3 disks into use all at the same time allows me to grow my storage using the size and connection type that gives me the best bang for the buck. As we all know, disks only get bigger, faster and cheaper next month. Such a methodology allows me to stagger disk purchases to take advantage of lower costs when I actually need the storage, without having to calculate how much I need to put online now to have what I need 2 years from now.

This can also allow for retirement of your OLD disks before they fail, as you just naturally grow your storage, replacing older/slower/smaller disks with faster/bigger ones while not requiring more slots.

If need be, I can move the disks in my pool to a new box without having to worry about the RAID controller in it, or the lack of one. Say I need to take a bunch of media to a remote location - I can just take the disk out of the pool and access the files directly via anything that can read the filesystem I used - in my case, just common NTFS.

I want to confirm, because I might not understand the correct terminology of what you said: what do you mean when you say that you put partitions of your HDDs in a pool?

For the record, I like my storage represented by ONE drive letter. I don't like splitting partitions and/or having each drive represent different things (videos, music, etc). That's a personal preference.

Let's take a look at your 4x3TB - from the math I have seen, there is something like a 56% chance that in reading 10TB of data you will encounter an unreadable bit and your rebuild will fail. So when your 1 disk fails, it's a coin toss whether you're going to be able to rebuild the array from that parity you spent good money on creating. Also, you more than likely built that array from disks purchased all at the same time, most likely from the same batch - and once 1 disk in a batch fails, the probability of another disk in that same batch failing increases, etc.

What sort of disks are you using to create this RAID in the first place - are they enterprise quality, designed to be in an array where they are read from and written to constantly? Those disks are normally more costly; does the added cost make sense in a home setup used to serve media files?

I believe I remember that I bought them on different dates and/or from different batches, but I can't confirm that.

In my (I guess) luck, one disk has failed every 2-3 years. Some have been there from the start; another failed 2 months ago or so and quickly got replaced.

One of my mistakes was indeed not buying enterprise quality HDDs.


It's great that you have had success with RAID 5 in the past; that does not mean it meets the needs of today, or makes sense with the size and speed of the disks that are available today and the other ways of merging them together so that their combined space is accessible in one location.

Well, if it's changeable and makes sense, I would change my RAID5 to something else.

Thank you BudMan and others.

#15 +BudMan

BudMan

    Neowinian Senior

  • Tech Issues Solved: 106
  • Joined: 04-July 02
  • Location: Schaumburg, IL
  • OS: Win7, Vista, 2k3, 2k8, XP, Linux, FreeBSD, OSX, etc. etc.

Posted 07 October 2013 - 18:06

"I want to confirm because I might not understand the correct terminology of what you said: What do you mean when you say that you put partiions of your HDDs in a pool?"

My disks look like 1 drive to the system - let's call it H: - just like a RAID array is 1 disk to the system.

Where yours is made up of 3 disks of the same size and speed, my pool can be made up of disks of multiple sizes and/or connection types. Currently it is a 2TB and a 3TB; it used to be a 2TB with a 750GB and another 750GB, giving me 3.5TB total (just using the marketed size of the disks, not the actual usable space that you get - for example, a 3TB disk is only 2.7TB).

What is nice is that I can actually access each disk in the pool directly, while the OS also sees it as part of the pool:

pool.jpg

So, if you notice, rdm2 is currently not part of the pool, since this is one of the older 750GB disks that was showing signs of possible failure. But I can directly access E or F, while their total size is presented to the OS as H.

disks.jpg

So, as you can see here:
duplicated.jpg

My molly folder in the pool is on 2 disks. If I had, say, 5 disks in the pool, I could tell the software to keep everything in the molly folder on all 5 disks, or on 3 of them if so desired. I can even take that down to the file level and say: hey, movie.mp4, make sure you keep that on 3 disks at all times.

This gives me real-time replication of my "critical" data without having to worry about creating parity for every single file I have in the pool - like my Neil Diamond CDs, which I could just re-rip if the disk storing them failed.