Why sequential vs random I/O speeds differ for SSD



Does anyone have a good understanding of why SSD sequential reads/writes are so much faster than random ones? Some things that are bothering me:

1. There are no moving parts, so there is no seek time or rotational latency. So why the big speed difference... or any difference at all?

2. Apparently, people have shown that defragmenting an SSD doesn't improve performance. That implies that whether the pages of a 1 MB file are laid out sequentially on the drive or dispersed randomly across it shouldn't matter, but SSD benchmarks clearly show that sequential and random I/O speeds differ (see the sketch below).
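
Here's roughly the kind of micro-benchmark I mean, as a minimal sketch (Linux/macOS only, since it uses os.pread; the file name and counts are made up, and you'd need a test file bigger than RAM, or a dropped page cache, for the numbers to reflect the SSD rather than cached reads):

import os, random, time

PATH = "testfile.bin"     # placeholder: a pre-made multi-GiB test file
BLOCK = 4096              # 4 KiB, a typical benchmark block size
N = 50_000                # reads per pass

size = os.path.getsize(PATH)
fd = os.open(PATH, os.O_RDONLY)
blocks = size // BLOCK

def throughput(offsets):
    start = time.perf_counter()
    for off in offsets:
        os.pread(fd, BLOCK, off)              # one request per block
    elapsed = time.perf_counter() - start
    return len(offsets) * BLOCK / elapsed / 1e6   # MB/s

seq = [i * BLOCK for i in range(min(N, blocks))]
rnd = [random.randrange(blocks) * BLOCK for _ in range(min(N, blocks))]

print("sequential: %.1f MB/s" % throughput(seq))
print("random:     %.1f MB/s" % throughput(rnd))
os.close(fd)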

I'm not sure why write limits are even a concern, considering MLC memory cells have a write limit on the order of 10,000 cycles, which I believe means an entire drive would only die if you repeatedly filled it up 100% and wiped it clean 10,000 times.
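
To put rough numbers on that claim, here's a back-of-the-envelope sketch; every figure is an assumption, including the write-amplification factor, which I gather controllers add on top of whatever the host writes:

DRIVE_GB  = 256        # assumed drive capacity
PE_CYCLES = 10_000     # assumed MLC program/erase limit per cell
DAILY_GB  = 20         # assumed host writes per day
WRITE_AMP = 3          # assumed: controller writes more to flash than the host asks for

total_host_gb = DRIVE_GB * PE_CYCLES / WRITE_AMP
years = total_host_gb / DAILY_GB / 365
print("about %.0f TB of host writes, roughly %.0f years at this rate"
      % (total_host_gb / 1000, years))

Even with a pessimistic write-amplification guess, the cell limit seems to outlast any plausible upgrade cycle at desktop-style write rates.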


I can't give you an answer to the first part off the top of my head, but with regard to write limits, they become more of an issue with applications and processes that are write intensive. Your computer doesn't only access the drive when you explicitly open a file. It is constantly accessing data, and often writing, whether it's saving your browser data to cache, updating your game's save file, etc. All of that adds up much more quickly than your fill-and-wipe example suggests, although it wears individual cells rather than the entire drive at once.
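
If you're on Linux you can watch this happen: diff the kernel's sectors-written counter while the machine sits "idle" for a minute. A quick sketch (assuming your drive shows up as "sda", and the usual 512-byte sector unit):

import time

DEVICE = "sda"   # assumed device name; adjust for your drive

def sectors_written(dev):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == dev:
                return int(fields[9])   # 7th stat after the name: sectors written
    raise ValueError("device %r not found" % dev)

before = sectors_written(DEVICE)
time.sleep(60)                          # sit "idle" for a minute
after = sectors_written(DEVICE)
print("%.1f MB written while idle" % ((after - before) * 512 / 1e6))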

In reality, though, SSDs are still a relatively immature technology, and since you're likely to upgrade again fairly soon, it won't really matter; by the time you do upgrade, the write limit will probably have been greatly increased.


To my understanding, there is a phenomenal amount of overhead once the I/O queue is flooded with a lot of small random requests. The CPU has to coalesce the commands, the storage controller has to interpret them and pass them down to the correct drive, and then the drive works out the best order in which to process them all (or doesn't, if it's a cheap drive controller). Random file operations will likely also involve a lot of file-table access, further compounding the issue.
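
As a toy model of that coalescing step, here's a simplified sketch (my own illustration, not any real scheduler's algorithm): contiguous requests can be merged into one large command, while random offsets can't.

def coalesce(requests):
    """Merge (offset, length) requests that touch end-to-end."""
    merged = []
    for off, length in sorted(requests):
        if merged and merged[-1][0] + merged[-1][1] == off:
            merged[-1][1] += length          # extend the previous request
        else:
            merged.append([off, length])
    return merged

# Eight sequential 4 KiB requests collapse into one 32 KiB command...
print(coalesce([(i * 4096, 4096) for i in range(8)]))
# ...while eight scattered ones stay as eight separate trips down the stack.
print(coalesce([(i * 4096 * 997, 4096) for i in range(8)]))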

It would be like going to the grocery store and buying a sack of potatoes, one potato at a time. The spinning-disk version of that would be driving to the farm, pulling up one potato, heading back home, and doing it over and over again. Bulk operations, in just about any context, improve efficiency.

