Case for 15 Drive RAID Setup



Hey Guys,

I'm planning on building a new server soon with approx. 15 hard drives. I'm looking for a fairly decent case with 9 x 5.25" bays which I can fill with these suckers: CSE-M35T-1B

[Image: CSE-M35T-1B 5-in-3 hot-swap drive cage]

Does anyone have any recommendations for cases I should buy?

Additionally, do you have any personal recommendations for RAID cards or motherboards?

Thanks Guys,

Chris


First, who is this for, you or a company? And what is the budget? 15 drives inside one machine pushes just about any case I've seen to its limits, though I'm sure you could find something if you looked hard enough. At 15 drives I'd be looking more towards an external storage system, perhaps with a Fibre Channel connection to your server, but then you're starting to talk more expensive solutions. If you do try to put it all inside one system, you'll almost certainly want (if not need) separate controller cards for the drives. I don't even know which motherboards would support that many drives, and if they did, it would probably require sharing channels and mean performance degradation. Additionally, I don't think many onboard RAID controllers can handle that many drives efficiently. You'd be talking about an actual enterprise server by then, which would also get expensive.

I'm thinking something kind of like this: http://www.ebay.com/...=item2ebabe7379

Honestly, I had no idea they were so incredibly cheap online. I guess it's because they're so old, and SATA, and everything enterprise is going SAS now. Although I deal with these every day, I don't really know much about all the different models there are, so you might still want to do a little research before you click Buy It Now on that link. Regardless, that's the direction I would look rather than stuffing 15 drives into a case.

Slightly more educated edit: the DS4000 line has limited maximum storage, so you wouldn't be able to put terabyte drives in it, which is probably why it's cheaper. You may want to look at other storage units; that was just the first to come to mind since I have a rack full of them next to me, but they're older and only have 300 GB drives. Unfortunately, if you go for a storage unit that can accommodate more capacity, your price will probably go up rapidly.


Yeah, with 15 drives in a RAID you need to invest in some type of enterprise storage. And if you or the company can't afford it, you likely don't have any business or real need for putting together what you are trying to do.


Yikes, I looked up the DS5100, which is another step up and allows (I think) 16 TB per unit, but it costs as much as a new BMW, haha. IBM's stuff is going to be very expensive, though. You'd have to be a high-demand enterprise customer for a lot of their gear. Gives me a new appreciation for the entire rack of them next to me. Look at some other companies and see what they offer for external storage.


Okay, this is for personal use. It'll be connected to my Server 2008 R2 box over iSCSI for mass storage and backups.

I will be using the Lian Li PC-A77FB with 4 x CSE-M35T-1B, giving me 20 drives total.

Now I've just gotta find a motherboard and a set of PCI-E SATA controllers to run the system. I'll likely be using FreeNAS or Openfiler. I've had a look at unRAID but I simply can't get my head around the redundancy and technology aspects of it, so I won't be using it. RAID-Z on ZFS seems the most logical direction to go.
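As a very rough back-of-the-envelope comparison (drive size is just a guess here, I haven't settled on disks yet), this is how I'm weighing RAID-Z2 against RAID 10 across 20 bays:

```python
# Rough usable-capacity sketch for a 20-bay build.
# The 2 TB drive size and the vdev layouts are assumptions, not decided yet.

DRIVE_TB = 2  # hypothetical 2 TB disks

def raidz_usable_tb(drives_per_vdev, vdevs, parity):
    """Approximate usable TB, ignoring metadata/slop overhead."""
    return (drives_per_vdev - parity) * DRIVE_TB * vdevs

print(raidz_usable_tb(20, 1, 2))  # one wide RAID-Z2 vdev: ~36 TB usable
print(raidz_usable_tb(10, 2, 2))  # two 10-disk RAID-Z2 vdevs: ~32 TB usable
print((20 // 2) * DRIVE_TB)       # RAID 10 over the same 20 disks: ~20 TB usable
```

Narrower vdevs give up a little space but should be kinder on rebuilds.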


Hmm, with ZFS and that much storage you're going to need LOADS of RAM or it's going to be horrid. 32 GB would be best.
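As a very rough ballpark, using the oft-quoted "about 1 GB of RAM per TB of pool" guideline (and guessing at 2 TB disks, since you haven't named drives yet):

```python
# Ballpark RAM sizing from the common "~1 GB RAM per TB of pool" ZFS rule of
# thumb. Drive size is a guess, and the rule itself is rough and workload-dependent.

drives = 20
drive_tb = 2                     # hypothetical 2 TB disks
raw_pool_tb = drives * drive_tb  # 40 TB raw

ram_guideline_gb = raw_pool_tb * 1
print(raw_pool_tb, ram_guideline_gb)  # 40 TB raw -> ~40 GB RAM by the rule of thumb
```

So 32 GB really isn't overkill for a pool that size, and dedup would push the requirement even higher.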

I'd really give unRAID a go. You'll have to buy the Pro licence for that many drives, but it's well worth it.

The redundancy of unRAID allows you to lose one drive and keep going. Should you lose more than one drive, you only lose the data that is on those disks; your other data remains perfectly accessible.
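If it helps to see the idea in miniature, here's a toy sketch of the single-parity principle it relies on (nothing unRAID-specific, just generic XOR parity):

```python
# Toy single-parity demo: the parity "disk" holds the XOR of the same byte
# position across every data disk, so any ONE missing disk can be rebuilt from
# parity plus the survivors. Lose two and only those disks' data is gone; the
# remaining disks are still complete filesystems on their own.
from functools import reduce

data_disks = [b"\x10\x22", b"\x03\x4f", b"\x77\x01"]  # made-up disk contents
parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*data_disks))

lost = 1  # pretend disk 2 dies
survivors = [d for i, d in enumerate(data_disks) if i != lost] + [parity]
rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
assert rebuilt == data_disks[lost]
```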

If you want iSCSI with that many drives, maybe consider RAID 10 and then Openfiler for the iSCSI part. Depends what you want from the system, I guess.


Yes, I have read about the amount of memory ZFS requires to run well. Regarding unRAID, I simply don't understand it in the slightest; I've read tons of guides and watched plenty of videos on it, but it just won't sink in.

I just cannot get my head around it. Data isn't mirrored across volumes, and shares can be tied to a specific disk. I just want one massive pool of storage; I don't want to have to concern myself with which disk a given file is on. I'll set it up in a VM and have a play now, but I really don't understand this.
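The way I'm picturing it (and I might well have this wrong) is something like the sketch below, where a whole file lands on one disk and a share is just a merged view across disks:

```python
# My mental model of unRAID-style allocation -- possibly wrong, and the
# allocation method and names here are made up purely for illustration.

disks = {"disk1": {}, "disk2": {}, "disk3": {}}

def write_file(path, size_gb):
    """Place the whole file on whichever disk currently holds the least."""
    target = min(disks, key=lambda d: sum(disks[d].values()))
    disks[target][path] = size_gb
    return target

for path, size in [("Movies/a.mkv", 8), ("Movies/b.mkv", 12), ("Backups/c.img", 40)]:
    print(path, "->", write_file(path, size))

# A "user share" like Movies would then just be the union of Movies/ across disks:
movies = {p: s for contents in disks.values() for p, s in contents.items()
          if p.startswith("Movies/")}
print(movies)
```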

hilarious - perhaps if the dude has $100k :cool:

That's a negative! lol


I built the same thing you are doing a few years ago. I chose the Coolermaster Centurion 590, then stuck three of the 5 x 3.5" to 3 x 5.25" adapters inside it.

Here is a picture from my build illustrating how it went together: http://i.imgur.com/6bjPI.jpg

And here is what it looked like inside giving you an idea of clearance to the motherboard: http://i.imgur.com/EaEGm.jpg

Now in that picture you can see I used a mATX motherboard. I later swapped that for a Gigabyte UD5, which is a full-sized ATX board, and it still fit perfectly fine. The great thing about this case is that it retails for £50 (or at least it did when I got it), and if you're considering adapter backplanes you know they are not cheap, around £80-£100 each, so getting a cheap case can take some of the bite out of it.

I later upgraded to a Lian Li PC343B, which has 18 x 5.25" slots (9 on each side; it's a perfect 18" cube case), and from that I was able to double my total storage to 30 disks using six adapter backplanes. This case, however, is quite a lot more expensive at £230 retail. It does, though, support 6 drives at the back outside of any backplanes, which is great for keeping SSDs or a boot drive away from the RAID storage.

Hope this helps.


Now this is a genuinely helpful post. Do you have any photos of the internals after wiring? And a list of the PCI/PCI-E SATA cards you used?

Thanks!!


Sure, here is a shot with the 15 SATA cables connected. The yellow cable visible is the boot disk. http://i.imgur.com/3G6vm.jpg

And here is the system all wired up and powered on. http://i.imgur.com/RuD7M.jpg

The only reason I'm using a 1000 Watt PSU in the system is that I had it lying around. I would not use something that beefy for this application otherwise (the server barely pulls 150 Watts according to my UPS).

RAID card wise, in those pictures I am using two HighPoint 2320s; they are identical cards, but one is a later revision, which is why the heatsink is different. They worked fantastically for me for a long time, but I've since switched to an LSI 9260-8i and an HP SAS Expander, which gives me a dedicated processor for parity calculations on the RAID card (which the HighPoints lack) and 36 ports to play with (8 on the controller plus 32 internal ports on the expander, minus the 4 controller ports used to connect the expander, leaving 36 usable for disk drives).
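In case the port maths isn't obvious, it works out like this:

```python
# Where the 36-port figure comes from with my controller + expander combo.
controller_ports = 8          # LSI 9260-8i: two SFF-8087 connectors, 4 lanes each
expander_drive_ports = 8 * 4  # eight internal SFF-8087 ports on the expander
uplink_lanes = 4              # one controller connector is used to feed the expander

usable = (controller_ports - uplink_lanes) + expander_drive_ports
print(usable)  # 36 drive connections
```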


I thought you didn't like the X58 Gigabyte boards :whistle:

Nice case...*nvm the rest here*

I don't. If you remember, in that original thread I had to switch to another X58 board because the Gigabyte was a piece of **** for a gaming setup. I stuck an Asus P6T6 in my desktop and had the Gigabyte left over gathering dust, so eventually I put the UD5 in the server. As there were no graphics cards in the server blocking the power buttons, turning the system on wasn't a problem.

Although Gigabyte's layouts on all their early X58 boards were really poor, the UD5 has been rock-solid stable for me in the server. At one point recently it crossed 100 days of uptime, and I've not had any crashes on the system that weren't due to hardware failure of other components.


Well, I've tried setting up unRAID in VMware Fusion 4.1, but unfortunately it cannot find network drivers and thus only has the loopback 127.0.0.1 IP. I'll figure that out later.

Can you explain to me how that HP SAS Expander works? I've got no experience with SAS beyond the connector built into my current HP MicroServer. Can I connect multiple SATA drives to a single SAS channel, or something similar? *n00b storage question*


Sure. Basically, the HP SAS Expander has 8 x 8087 SAS ports and 1 x 8088 SAS port. Each port is essentially four individual SAS lanes bonded together.

You can use an 8087-to-SATA cable to take one of the 8087 ports and 'fan it out' to connect four hard disks. So with that one expander you can potentially connect 32 disks using 8 x 8087-to-SATA cables. These cables each cost about £8-£10 retail, and you can fit four SATA or SAS hard disks per cable.
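So the fan-out and cabling tallies up roughly like this (ballpark prices only):

```python
# Quick tally of the expander fan-out and cable cost (rough GBP figures).
internal_8087_ports = 8
drives_per_fanout_cable = 4
cable_price_gbp = (8, 10)  # rough retail range per 8087-to-SATA cable

max_drives = internal_8087_ports * drives_per_fanout_cable
cable_cost = (internal_8087_ports * cable_price_gbp[0],
              internal_8087_ports * cable_price_gbp[1])
print(max_drives, cable_cost)  # 32 drives, roughly 64-80 GBP in cables
```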

Now, the 8088 SAS port on the card is the same as an 8087, but it's the external version, so the connector is a bit beefier and it sits on the back of the card's bracket, which sticks outside the case. This is the connector I've chosen to use with my RAID card so I can keep all 8 x 8087 ports free for 32 disks. My RAID card has 2 x 8087 connectors, so I'm using an 8087-to-8088 cable to take one of those connectors outside of the case through a water-cooling grommet and into the back of the expander.

The HP SAS Expander supports 6 Gb/s when used with SAS disks, but only 3 Gb/s when used with SATA disks. This isn't a hardware limitation, just something HP has done in the firmware to make the card more attractive to enterprise customers. It still works fine with 6 Gb/s SATA disks, but it negotiates them at the slower 3 Gb/s speed.

Most SAS RAID cards which support the SAS2008 spec (you'll usually find this listed on their website or in the manual) should work with the HP SAS Expander, but to be honest it's not always that cut and dried. The best cards, compatibility-wise, feature an Intel processor. Cards such as the HighPoint 4330, 4320 and 4321, and the LSI 9260 series (4i, 8i, 16i) all work fine with the expander.

It's cost effective, but only up to a point. The HP SAS Expander retails for around £330 in the UK, and then you still need to shell out around £100 on cables and about £200 minimum on a compatible RAID card. The main benefit is that a 32-port RAID card with RAID 6 support is around £1200, so it does cost less than one of those, and you always have the option of using the expander in a future system. In my case, as I'm using an LSI 9260-8i which has two 8087 connectors, I could buy another expander and double my storage potential to 64 disks.
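Putting rough numbers on that (UK retail, so treat them as ballpark):

```python
# Ballpark cost comparison: expander route vs. a single 32-port RAID 6 card.
expander_gbp = 330
cables_gbp = 100     # roughly, for a full set of fan-out cables
raid_card_gbp = 200  # minimum for a compatible card

expander_route = expander_gbp + cables_gbp + raid_card_gbp
native_32_port_card = 1200
print(expander_route, "vs", native_32_port_card)  # ~630 vs ~1200
```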

Something else to note is that the expander is not seen by the operating system. Although it plugs into PCIe, it only does so to receive power and doesn't announce its presence to the computer at all. That means it's possible to power it separately from the motherboard you're using it in and it still works fine. It also means it 'just works': no configuration is needed beyond plugging in the cables and powering the computer on.


Ahh, now I understand: all disks connect to the SAS ports on the HP SAS Expander card via 8087-to-SATA cables, then the SAS Expander card connects to a SAS RAID controller via an 8088-to-8087 cable. The OS sees the SAS RAID controller, not the HP Expander card, but the RAID controller manages all the disks.

Will JBOD work with this setup?


That is exactly how it works, yes. And yes, JBOD will work; you can use an HBA with it, such as an LSI 9211-4i or an LSI 3081E-R.

Also, in my example I used the 8088-to-8087 cable, but it's also possible to use an 8087-to-8087 cable when connecting your RAID or HBA controller to the expander. Obviously, doing it that way loses you an internal 8087 connector on the expander that you could otherwise use for disks.

All of which means, of course, that you can use the expander with ZFS, unRAID, etc.


Indeed, that sounds like a much more promising and future-proofed way of doing this! I shall begin building in about 2-3 months and will post up details as I go.

Thanks for all your help Vice.
