[Review] Intel X25-M 80GB in RAID 5



Test setup:

GA-P55-UD4 with Intel RAID on the P55 (PCH)

Strip size 128KB

Intel Rapid Storage Technology 9.6.4.1002

Paging file moved off the SSD

Disable Indexing

Disable Superfetch

Enable write caching

Turn off Windows write-caching buffer flushing

Enable C-State (note: Tom's Hardware found that enabling this BIOS setting decreases performance, but disabling it stops Turbo Boost from working).

Software tools for testing:

AS SSD Benchmark

Crystal Disk Mark

HD Tach

ATTO

Windows 7 Experience

Here they are: three Intel X25-M 80GB drives...

[Image: three Intel X25-M SSDs]

So tempting to put them all in RAID 0... but in the end they went into RAID 5...

Want to see some numbers? I'm sure you skipped right to them without reading this... that's OK, I do too.

Windows 7 Experience: 7.9 for the data transfer rate, the best score you can get (the old score was 5.9). So the drives are no longer the slowest part of my system, and I now get a 7.5 base score.

[Image: Windows 7 Experience Index (RAID 5)]

HD Tach (note: the burst speed changes on every run, even with the same zone size!)

8MB zones

[Image: HD Tach, 8MB zones (RAID 5, write-back cache enabled, buffer flushing off)]

32MB zones

[Image: HD Tach, 32MB zones (RAID 5, write-back cache enabled, buffer flushing off)]

CrystalDiskMark results (copied & pasted):

-----------------------------------------------------------------------

CrystalDiskMark 3.0 x64 © 2007-2010 hiyohiyo

Crystal Dew World : http://crystalmark.info/

-----------------------------------------------------------------------

* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]

Sequential Read : 727.729 MB/s

Sequential Write : 146.901 MB/s

Random Read 512KB : 396.183 MB/s

Random Write 512KB : 70.850 MB/s

Random Read 4KB (QD=1) : 12.797 MB/s [ 3124.3 IOPS]

Random Write 4KB (QD=1) : 54.132 MB/s [ 13215.9 IOPS]

Random Read 4KB (QD=32) : 284.913 MB/s [ 69558.8 IOPS]

Random Write 4KB (QD=32) : 52.806 MB/s [ 12892.0 IOPS]

Test : 500 MB [C: 11.9% (17.8/149.0 GB)] (x1)

Date : 2010/10/01 21:41:06

OS : Windows 7 [6.1 Build 7600] (x64)

ATTO

[Image: ATTO (RAID 5, write-back cache enabled, buffer flushing off)]

AS SSD Benchmark

[Image: AS SSD Benchmark (RAID 5, write-back cache enabled, buffer flushing off)]

AS SSD copy Benchmark

[Image: AS SSD Copy Benchmark (RAID 5, write-back cache enabled, buffer flushing off)]

4K read seems a bit low?

Yes, I'm a bit disappointed by this, and I'm not sure if it's the RAID level, the strip size, the drivers, or something else. The lowest strip size available for RAID 5 is 16KB, which is the default for SSDs in RAID 0 and 10, so one day I may test at that strip size, or find a way to test it...

SSD in RAID 5 Conclusion

The read speeds are impressive, apart from the 4K QD1 read. Each X25-M 80GB is rated for sustained sequential reads of up to 250MB/s, and three in RAID get close to three times that (750MB/s), with speed falling off below a 128KB transfer size and dropping sharply after 8KB. Write speeds for RAID 5 are always poor, but most of the time you are reading/loading files, so write speed only matters when you write to the SSD, which is what you mostly want to avoid anyway, since it wears the SSD out. Having said that, the X25-M 80GB is rated for sustained sequential writes of up to 70MB/s; three in RAID would give 210MB/s, of which RAID 5 delivers about 2/3. And at 4K QD1 the write actually beats the read, so the array handles writes very well.
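The back-of-the-envelope maths above can be written out as a quick calculation (a rough model assuming ideal scaling from Intel's rated per-drive figures; real arrays land somewhat below this):

```python
# Rough model of ideal N-drive RAID 5 throughput from per-drive ratings.
# The figures are Intel's rated X25-M 80GB sequential speeds; measured
# results (e.g. the 727MB/s read above) fall a little short of ideal.

DRIVES = 3
SEQ_READ_MBS = 250   # rated sustained sequential read per drive
SEQ_WRITE_MBS = 70   # rated sustained sequential write per drive

# RAID 5 reads stripe across all N drives...
raid5_read = DRIVES * SEQ_READ_MBS            # 750 MB/s ideal

# ...but one drive's worth of each stripe holds parity, so only
# N-1 drives carry new data on writes (2/3 of the RAID 0 figure here).
raid5_write = (DRIVES - 1) * SEQ_WRITE_MBS    # 140 MB/s ideal

print(raid5_read, raid5_write)
```

The measured 146.9MB/s sequential write sits right around the ideal 140MB/s, which suggests the write-back cache is doing its job.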

All in all, another SSD review with a difference, and yes, everything is much faster.

Things about Intel RAID and SSD

Why 128KB strip size for RAID 5?

It is the default listed for SSDs in Rapid Storage Technology; here is the full list:

Default strip size        RAID 0     RAID 5     RAID 10

SATA disks                128KB      64KB       64KB
Solid state disks         16KB       128KB      16KB

Can you see SMART data for SSDs in RAID?

Now this might surprise you: using the Intel Solid State Drive Toolbox, not only can you see SMART data for SSDs in RAID, you can also see SMART data for HDDs in RAID too!

http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&ProdId=3044&DwnldID=18455&ProductFamily=Solid+State+Drives+and+Caching&ProductLine=Intel%c2%ae+High+Performance+Solid+State+Drive&ProductProduct=Intel%c2%ae+X25-M+Solid+State+Drive%2c+80GB+SATA+II+2.5in%2c+MLC%2c+High+Performanceeng

But does TRIM work? And can you TRIM the SSD outside of RAID?

The answer is no to both as of 9.6.4.1002.

If, in the Intel RAID boot ROM, I delete the RAID volume, are all my files deleted?

No, all it does is delete the boot info for the RAID and the partition.

Part 1: If, in the Intel RAID boot ROM, I re-create the RAID volume with the same strip size & array size I had before, will I be able to boot or see my data again?

No, but what you have done is restore the boot info for the RAID, not the partition. To fix the partition after restoring the RAID boot info on a Win 7 bootable array, do the following:

Boot from the Win 7 DVD > when you get to the install screen click Repair your computer > select Restore your computer using a system image that you created earlier > cancel the re-image > and click Startup Repair.

Part 2: What if my RAID was not a bootable array but used for storage? How do I fix the partition after restoring the boot info for the RAID?

I have not tested a tool for this yet, but what I can tell you does work is RAID2RAID, which can see the array so you can move data off it.

http://www.diskinternals.com/raid-to-raid


Update on the low 4K read.

As said, I enabled C-State in order to keep Turbo Boost, and that is the problem, as Tom's Hardware found. But if you run an app that keeps the CPU busy, the 4K read is more like 20MB/s (which is what I was expecting); CrystalDiskMark shows this best and gives the higher numbers.

There are also some more ways, found at the link below, but I have only tested disabling idle (EIST; this might override the BIOS control). You have to go into the registry and find:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Power\PowerSettings\54533251-82be-4824-96c1-47b60b740d00\5d76a2ca-e8c0-402f-a133-2158492d58ad]

and change "Attributes" from 1 to 0 to get the option to show in Power Options in Windows 7.
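For convenience, the same change can be made by importing a .reg file (this simply mirrors the key above; double-check the GUIDs against your own registry before importing):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Power\PowerSettings\54533251-82be-4824-96c1-47b60b740d00\5d76a2ca-e8c0-402f-a133-2158492d58ad]
"Attributes"=dword:00000000
```

Setting the value back to 1 hides the option again.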

http://www.storagereview.com/how_improve_low_ssd_performance_intel_series_5_chipset_environments

It shows it for laptops, but it applies to desktop motherboards too.

So you can either disable C-State in the BIOS and keep EIST, with Turbo Boost only giving you 1 bin of speed and never running faster for single-core apps:

Cut-down results from CrystalDiskMark:

Random Read 4KB (QD=1) : 20.753 MB/s [ 5066.7 IOPS]

Random Write 4KB (QD=1) : 41.321 MB/s [ 10088.2 IOPS]

Test : 500 MB [C: 27.7% (41.3/149.0 GB)] (x1)

...or disable idle and keep C-State enabled (which now likely does nothing), losing EIST, with Turbo Boost again only giving you 1 bin:

Random Read 4KB (QD=1) : 22.926 MB/s [ 5597.2 IOPS]

Random Write 4KB (QD=1) : 33.351 MB/s [ 8142.4 IOPS]

Test : 500 MB [C: 27.7% (41.3/149.0 GB)] (x1)

What I'm doing now (not ideal) is keeping C-State enabled and creating power plan shortcuts, one with idle disabled and one with idle enabled.

http://www.howtogeek.com/howto/windows-vista/create-a-shortcut-or-hotkey-to-switch-power-plans/

AS SSD Benchmark with idle disabled gets just 20MB/s, over 2MB/s less than CrystalDiskMark.

[Image: AS SSD Benchmark with idle disabled (RAID 5, write-back cache enabled, buffer flushing off)]


  • 4 weeks later...

Having SSDs in RAID-5 is a big mistake. The extra parity data will wear the drives down faster, and you've destroyed your write speeds as well by going down that road, since the drives need to write extra data. The read speeds are not what they really are; that'll be the cache buffer producing those numbers. Turn write caching off for real read numbers.

Actually, I'll just say what any other sane person would say: just go to RAID-0 with a 128KB strip size (that's the optimal strip size).


Having SSDs in RAID-5 is a big mistake. The extra parity data will wear the drives down faster, and you've destroyed your write speeds as well by going down that road, since the drives need to write extra data.

Well, that's what I'm testing...

The drives have been online for over 880 hours (over 36 days), host writes for all drives are around 403GB, and the media wearout indicator for all drives is still at 99.

So let's do the maths:

Let's say the wearout indicator drops by 1 right now, at the numbers above.

365 days (1 year) divided by 36 days is just over 10, so the media wearout indicator would drop by about 10 every year, with over 4000GB of host writes per year.

Intel says you can write 20GB of data per SSD per day for 5 years, so:

4000GB (4TB) divided by 20GB is only about 200 days' worth of that daily allowance, which is well under a year's worth, so at MY rate this array will last at least 5 years.
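The projection above, written out with the same numbers (the 20GB/day figure is Intel's stated daily-write allowance for a 5-year life):

```python
# Project SSD write rate from observed host writes vs Intel's allowance.
host_writes_gb = 403      # per drive, observed over the period
days_online = 36          # ~880 power-on hours

gb_per_day = host_writes_gb / days_online      # ~11.2 GB/day actual
gb_per_year = gb_per_day * 365                 # ~4086 GB/year

intel_allowance_gb_per_day = 20                # 20GB/day for 5 years

# One year of writes at this rate uses only ~204 days' worth of
# Intel's daily allowance, so the drives are well under budget:
days_of_allowance_used = gb_per_year / intel_allowance_gb_per_day

print(round(gb_per_day, 1), round(days_of_allowance_used))
```

Since the actual write rate (~11.2GB/day) is barely half the allowance, the 5-year figure is a floor, not a ceiling.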

Intel's write endurance specification is 7.5 terabytes for the 80GB SSD, but that is measured across 100% of the drive's span with a 100% random workload at a 4KB transfer size. That workload is very unlikely in practice, and larger transfer sizes cause less wear on the SSD (as has been tested), so real-world write endurance can be much higher than listed, given the mixed transfer sizes a user actually generates.

Within 36 days I have:

Reinstalled Windows around 4 times (due to another problem, nothing to do with the SSDs).

Reinstalled two games because of that problem.

Have now installed 5 games

Installed a number of apps

A dozen or more benchmarks

Have moved the page file to this array, as testing found it didn't write much to the array anyway.

And that's only 403GB per SSD, with the wearout indicator for all drives still at 99.

So it will not wear out as fast as some might think.

Yes, write speeds will drop down the road, but that happens with any RAID, and with the SSDs filling up it will happen either way. Tests of Intel's SSDs in a used state show them handling it better than other SSDs.


I really don't think you understand what you are actually doing. Granted, RAID-5 offers some redundancy, but a lot of people would agree with me that you should just skip RAID-5 and use RAID-0. SSDs are aimed at performance, solely for that reason. Why not get your money's worth and actually RAID-0 all of them?

Also, it's true about the wear leveling, but as you use more and more space, the remaining free space takes more wear as you create/delete more data.

I have two OCZ Vertex drives, and the one thing I've learned is that the more space you use, the faster wear levelling happens in the free-space area.

If you were really that bothered about using RAID-5, then you shouldn't have bothered getting an SSD (never mind 3 of them) and just kept your normal hard drives. Or better yet, do what everyone else does: put the OS on the SSDs and all your downloads/music/games/important stuff onto a RAID-5 hard drive array.

The problem you have is that not only are the write speeds below that of a single SSD, but your read speed won't be that much different either. You're not seeing what I mean because you've got the write cache turned on. When you start transferring large amounts of data, the cache will fill up and you will see slowdowns while the system waits for the cache to empty as it writes/reads to the SSD.


The problem you have is that not only are the write speeds below that of a single SSD, but your read speed won't be that much different either. You're not seeing what I mean because you've got the write cache turned on. When you start transferring large amounts of data, the cache will fill up and you will see slowdowns while the system waits for the cache to empty as it writes/reads to the SSD.

So transferring a large 250MB or 500MB file will not fill the cache up, and that's why, with the write cache turned on, the speeds look good?


Write cache is there for a reason: to help speed things up, because SSDs may be fast, but they're not efficient or fast enough to run without one. This is how you get 'burst' speeds, where you can max out a SATA/ATA channel for the first few seconds before things slow down. Those read speeds of yours are an example of what I'm saying: you have a stupid amount of speed, but turn your cache off and you'll see the real speeds.

What I'm trying to get across is this: you've bought SSDs (3 of them) for one reason, speed. RAID-5 will not achieve this, not even close. RAID-0 will give you the performance you wanted. And yes, RAID-5 might offer some protection, but even if one drive fails, your RAID array will slow to a crawl (talking from experience) regardless. RAID-5 is aimed at uptime, less at backing up your data.

I'll even bullet-point what I'm saying:

1) You're not getting your money's worth.

2) You are crippling your drives' speed.

3) You're not getting the real protection you think you have.

4) You're killing the drives even faster because of the extra parity data being processed.

The most important thing is this: wear leveling isn't as efficient as you think. You say you can write so much data per day and derive a life cycle from that, but if you already have data on the drive (say, 50% full), you are effectively halving the drive's life cycle: writing the same amount of data, wear leveling only has half the cells to use, which in turn means each cell is used more often.
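A rough way to see that effect (a deliberately simplified model; real wear levelling also rotates static data around, which softens this):

```python
# Simplified wear model: the same daily write volume spread over
# fewer free cells means each free cell is rewritten more often.

CAPACITY_GB = 80          # X25-M 80GB
DAILY_WRITES_GB = 20      # Intel's daily-write allowance

def rewrites_per_free_gb(fill_fraction):
    """Daily rewrites each free GB absorbs at a given fill level."""
    free_gb = CAPACITY_GB * (1 - fill_fraction)
    return DAILY_WRITES_GB / free_gb

empty = rewrites_per_free_gb(0.0)   # 0.25 rewrites/day per free GB
half = rewrites_per_free_gb(0.5)    # 0.50 rewrites/day per free GB

print(empty, half)  # half-full doubles the per-cell wear in this model
```

Under this toy model a half-full drive wears its free cells twice as fast, which is the halved-life-cycle claim above.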

There has been a discussion recently on the OCZ forums about the Vertex (and other drives) about possibly cutting around 10% of the drive's capacity via firmware to help with wear levelling.

I suggest you do the sane thing and just RAID-0 them. Then you can review that, show us some impressive numbers, and see why you even bothered to spend so much. If you were that bothered about RAID-5, you would have built a RAID-5 with 3 or more hard drives, not SSDs. All you've done, at least in my opinion, is show how much you don't understand how SSDs work, what we use RAID-5 for, and how much money you've wasted by achieving absolutely nothing.

Also, to add on to what I'm saying, this disturbs me a little...

Yes, write speeds will drop down the road, but that happens with any RAID, and with the SSDs filling up it will happen either way. Tests of Intel's SSDs in a used state show them handling it better than other SSDs.

SSDs have come a long way since the first-generation drives (which you are referring to). We now have TRIM + garbage collection, which can restore a drive to around 95% of the speed it achieved when new. And while TRIM doesn't work for RAID arrays, garbage collection still works in the background to keep performance up. If anything, you are making it even worse, because garbage collection will need to clear cells on account of the extra parity data.

I think you are using first-generation drives and some silly idea about RAID arrays slowing drives down in the long run to cover the fact that you still don't have any idea what you're talking about.

I don't want you to feel offended by what I'm saying, and I know I'm coming across as patronising, but I have two SSDs in RAID-0, the speeds are superb, and the array has been going for quite a while now (9 months).

It's been good reading a review of what you are achieving, but the question others will ask is: where is your comparison? All you've shown us is benchmarks of the drives in a RAID-5 array. What about single-drive speeds (no, we shouldn't have to Google for that), and where is the comparison with a RAID-0 array? Because a RAID-0 array will give even more impressive numbers, believe me.


Write cache is there for a reason: to help speed things up, because SSDs may be fast, but they're not efficient or fast enough to run without one. This is how you get 'burst' speeds, where you can max out a SATA/ATA channel for the first few seconds before things slow down. Those read speeds of yours are an example of what I'm saying: you have a stupid amount of speed, but turn your cache off and you'll see the real speeds.

Write-back caching works by grouping multiple incoming I/O requests together to improve performance; the grouped data is written more sequentially, making the numbers just as real as when requests are not grouped with write-back cache off. This is why write-back caching works so well: it is not a simple FIFO buffer, it stores multiple I/O requests and writes them more sequentially across the array.
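As an illustration only (a toy model of coalescing, not Intel's actual driver logic), sorting queued writes by address and merging adjacent blocks turns scattered requests into longer sequential runs:

```python
# Toy model of write-back coalescing: sort queued block writes by LBA
# and merge adjacent blocks into sequential runs before flushing.

def coalesce(lbas):
    """Group queued block addresses into (start, length) sequential runs."""
    runs = []
    for lba in sorted(set(lbas)):
        if runs and lba == runs[-1][0] + runs[-1][1]:
            # This block continues the current run: extend it.
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)
        else:
            # Gap in addresses: start a new run.
            runs.append((lba, 1))
    return runs

# Five scattered writes collapse into two sequential runs:
print(coalesce([10, 3, 4, 11, 5]))  # [(3, 3), (10, 2)]
```

Two sequential runs instead of five scattered writes is exactly why cached numbers can still be "real": the drive genuinely services fewer, larger operations.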


Write-back caching works by grouping multiple incoming I/O requests together to improve performance; the grouped data is written more sequentially, making the numbers just as real as when requests are not grouped with write-back cache off. This is why write-back caching works so well: it is not a simple FIFO buffer, it stores multiple I/O requests and writes them more sequentially across the array.

You still don't understand why and when you should use RAID-5, on what hardware, or the optimal strip size (128KB) with an alignment/offset of 128KB (I bet you have set that up wrong, since you're using RAID). You've killed your performance and defeated the purpose of what SSDs are aimed at. Where are the comparisons I pointed out in my last post? Giving us random numbers with no basis for comparison is useless. You have given us no reason why RAID-5 would benefit a user over a single-drive setup or RAID-0, especially those who want SSDs for performance.

If you start messing around with strip sizes like you mentioned in one of your original posts, you'll also run into the RAID degradation problem you mentioned earlier. Windows Vista/7 can set up an SSD with the correct offset, but it cannot do this in a RAID setup, because it doesn't know whether it's an SSD, so the standard alignment/offset is used, which will misalign how your data is stored on the SSD and cause headaches down the line.

All I've gotten from your review is that you don't know what you're doing, and why you would do a silly thing like setting up RAID-5 with SSDs I really don't understand.


This topic is now closed to further replies.