
Why sequential vs random I/O speeds differ for SSD


2 replies to this topic

#1 psyko_x

    Neowinian

  • Joined: 13-July 02

Posted 26 April 2011 - 04:25

Does anyone have a good understanding of why SSD sequential reads/writes are so much faster than random ones? Some things that are bothering me:

1. There are no moving parts, so there's no seek time or rotational latency. Why is there a big speed difference, or any difference at all?
2. Apparently, people have shown that defragmenting an SSD doesn't improve performance. That implies it shouldn't matter whether the pages of a 1 MB file are laid out sequentially on the drive or scattered across it, yet SSD benchmarks clearly show that sequential and random I/O speeds differ (the kind of comparison sketched just below).
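
For reference, here is a minimal sketch of the kind of benchmark I mean: it times 4 KiB reads of the same file in ascending order versus shuffled order. The file name and sizes are arbitrary choices, and a real benchmark would bypass the OS page cache (e.g. O_DIRECT) and use a file larger than RAM; this only shows the mechanics.

    import os
    import random
    import time

    PATH = "ssd_test.bin"              # hypothetical scratch file
    FILE_SIZE = 256 * 1024 * 1024      # 256 MiB of test data
    BLOCK = 4096                       # 4 KiB per request, as in typical benchmarks

    # Create the scratch file once.
    if not os.path.exists(PATH):
        with open(PATH, "wb") as f:
            f.write(os.urandom(FILE_SIZE))

    offsets = list(range(0, FILE_SIZE, BLOCK))

    def throughput(order):
        """Read every 4 KiB block in the given offset order, return MB/s."""
        start = time.time()
        with open(PATH, "rb", buffering=0) as f:
            for off in order:
                f.seek(off)
                f.read(BLOCK)
        return (FILE_SIZE / (1024 * 1024)) / (time.time() - start)

    sequential = throughput(offsets)       # ascending offsets
    shuffled = offsets[:]
    random.shuffle(shuffled)
    randomized = throughput(shuffled)      # same blocks, random order

    print("sequential: %.1f MB/s, random: %.1f MB/s" % (sequential, randomized))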

I'm not sure why write limits are even a concern, considering MLC memory cells have a write limit on the order of 10,000 cycles, which I believe means an entire drive would only die if you repeatedly filled it to 100% and wiped it clean 10,000 times over.
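
For a rough sense of scale, assuming a 120 GB drive (an example size, not anything specific), that fill-and-wipe scenario works out to:

    capacity_gb = 120          # assumed example drive size, not from the thread
    pe_cycles = 10000          # MLC write limit quoted above
    total_gb = capacity_gb * pe_cycles            # ignores write amplification
    print("total raw writes: about %d GB (~%.1f PB)" % (total_gb, total_gb / 1e6))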


#2 Seizure1990

    Neowinian Senior

  • Joined: 17-February 08
  • Location: NYC

Posted 26 April 2011 - 04:44

I can't give you an answer to the first part off the top of my head, but with regard to write limits, they become more of an issue with applications and processes that are write-intensive. Your computer doesn't only access the drive when you explicitly open a file. It is constantly accessing data, and often writing, whether it's saving your browser data to cache, updating your game's save file, etc. All that adds up much more quickly than your write/wipe example, although it will affect individual cells rather than the entire drive at once.

In reality, though, SSDs are still a relatively immature technology, and since you're likely to upgrade again fairly soon, it won't really matter, especially since by the time you do upgrade, write limits will probably have improved considerably.

#3 random_n

    Neowinian

  • Joined: 15-November 05
  • Location: Winnipeg, MB

Posted 26 April 2011 - 04:49

To my understanding, there is a phenomenal amount of overhead once the I/O queue gets flooded with a lot of random requests. The CPU has to coalesce the commands, the storage controller has to interpret them and pass them down to the correct drive, and then the drive works out the best order to process all of those commands (or doesn't, if it has a cheap controller). Random file operations will likely also involve a lot of file table access, further compounding the issue.

It would be like going to the grocery store and buying a sack of potatoes one potato at a time. The spinning-disk version of that would be driving to the farm, pulling up one potato, heading back home, and doing it over and over again. Bulk operations, in just about any context, improve efficiency.
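
A toy way to see the per-request overhead side of that, independent of any SSD specifics: fetch the same data once in bulk versus in many tiny requests (the file name and sizes here are arbitrary). Most of the gap in this sketch comes from per-call overhead and the page cache rather than the drive itself, but the shape of the effect is the same.

    import os
    import time

    PATH = "sack_of_potatoes.bin"      # hypothetical scratch file
    SIZE = 16 * 1024 * 1024            # 16 MiB

    if not os.path.exists(PATH):
        with open(PATH, "wb") as f:
            f.write(os.urandom(SIZE))

    def timed_read(chunk_size):
        """Read the whole file using reads of the given size, return seconds."""
        start = time.time()
        with open(PATH, "rb", buffering=0) as f:
            while f.read(chunk_size):
                pass
        return time.time() - start

    whole_sack = timed_read(SIZE)      # one bulk request
    one_at_a_time = timed_read(64)     # hundreds of thousands of tiny requests
    print("bulk: %.3fs, 64-byte reads: %.3fs" % (whole_sack, one_at_a_time))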


