Why sequential vs random I/O speeds differ for SSD

Does anyone have a good understanding of why SSD sequential reads/writes are so much faster than random ones? Some things that are bothering me:

1. There are no moving parts, so there's no seek time or rotational latency. Why is there such a big speed difference, or any difference at all?

2. Apparently, people have shown that defragmenting an SSD doesn't improve performance. That implies it shouldn't matter whether the pages of a 1 MB file are laid out sequentially on disk or dispersed randomly across it, yet SSD benchmarks clearly show that sequential I/O and random I/O speeds differ.

I'm not sure why write-limits are even a concern, considering MLC memory cells have a write limit on the order of 10,000 cycles, which I believe means an entire drive would only die if you repeatedly filled it up 100% and wiped it clean 10,000 times.
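To put that intuition in numbers, here is a rough back-of-the-envelope sketch. The capacity, daily write volume, and write-amplification factor are illustrative assumptions, not measured values; real drives have a write-amplification factor above 1, which eats into the rated endurance.

```python
# Back-of-the-envelope SSD endurance estimate (illustrative numbers only).
# Assumes a 256 GB MLC drive rated for 10,000 program/erase cycles and an
# ideal write-amplification factor of 1 (real drives are worse than this).

capacity_gb = 256
pe_cycles = 10_000
write_amplification = 1.0

# Total data the drive can absorb before hitting the rated limit, in TB.
total_writes_tb = capacity_gb * pe_cycles / write_amplification / 1000

# At a heavy 20 GB of writes per day, years until the rated limit.
daily_writes_gb = 20
years = total_writes_tb * 1000 / daily_writes_gb / 365

print(f"Rated endurance: {total_writes_tb:.0f} TB written")
print(f"At {daily_writes_gb} GB/day: roughly {years:.0f} years")
```

Even with a pessimistic write-amplification factor of 10, the estimate stays in the decades, which is why the fill-and-wipe scenario above rarely matters in practice.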

I can't give you an answer to the first part off the top of my head, but in regards to write-limits, it becomes more of an issue with applications and processes that are write intensive. Your computer doesn't only access the drive when you explicitly open a file. It is constantly accessing data, and often writing, whether it's saving your browser data to cache, updating your game's save file, etc. All that adds up much more quickly than your write/wipe example, although it will affect individual cells rather than the entire drive at once.

In reality though, SSDs are still a relatively immature technology, and since you're likely to upgrade again fairly soon, it won't even matter, especially since by the time you do upgrade, the write-limit will probably have been greatly increased.

To my understanding, there is a phenomenal amount of overhead once the I/O queue gets flooded with a lot of random requests. The CPU has to coalesce the commands, the storage controller has to interpret and pass them down to the correct drive, and then the drive works out the best order to process all of these commands (or doesn't, if it's a cheap drive controller). Random file operations will likely also involve a lot of file table access, further compounding the issue.

It would be like going to the grocery store and buying a sack of potatoes, one potato at a time. The spinning-disk version of that would be driving to the farm, pulling up one potato, heading back home, and doing it over and over again. Bulk operations, in just about any context, improve efficiency.
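The per-request overhead described above can be seen even from userspace. Below is a minimal sketch that reads the same file once sequentially and once in a shuffled block order; the file name, block size, and block count are arbitrary choices, and the OS page cache will distort absolute timings, so treat it as an illustration of the two access patterns rather than a rigorous benchmark.

```python
# Sketch: time sequential vs random 4 KiB reads over the same file.
# Not a rigorous benchmark; OS caching and hardware will skew results.
import os
import random
import time

PATH = "testfile.bin"   # hypothetical scratch file
BLOCK = 4096            # 4 KiB, a typical flash page size
NBLOCKS = 2_500         # ~10 MB test file

# Create the test file.
with open(PATH, "wb") as f:
    f.write(os.urandom(BLOCK * NBLOCKS))

def read_blocks(offsets):
    """Read one BLOCK at each offset and return elapsed seconds."""
    t0 = time.perf_counter()
    with open(PATH, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - t0

sequential = [i * BLOCK for i in range(NBLOCKS)]
shuffled = sequential[:]
random.shuffle(shuffled)

seq_t = read_blocks(sequential)
rand_t = read_blocks(shuffled)
print(f"sequential: {seq_t:.3f} s")
print(f"random:     {rand_t:.3f} s")

os.remove(PATH)
```

For a fairer measurement you would bypass the page cache (e.g. `O_DIRECT` on Linux) and use a file much larger than RAM, but even this toy version shows that the request pattern, not head movement, is what the benchmarks are measuring.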
