The real and complete story - Does Windows defragment your SSD?



So essentially he's saying that there is a limited number of recognizable "fragments", so an SSD should be defragmented occasionally to maintain full performance.

That's the first I've heard of such a thing, but I guess I could believe it.  The whole "limited reads/writes" of SSDs is an overblown issue anyway, so I don't see any problem with defragging once in a while if this max fragments thing has any validity to it.


Not one mention of wear leveling? It is not possible to defrag an SSD due to wear leveling, and there is no point either. The fragmentation that defrag fixes is when parts of a file are far apart, causing the disk to spin more than needed; defrag moves those pieces closer together (sequential) to increase performance. This does not matter with SSDs, as there are no moving parts.

Are they talking about old data instead? I wouldn't call that fragmentation. If it were, then your magnetic HDD is always fragmented, since there are forensic tools that can recover data that has been zeroed out.


Show me an article from SSD engineers where the seek time for Cell A is slower or faster than for Cell B. Defragging an SSD is pointless because it takes the same seek time for all cells. There are no moving parts. It does not need to spin to get part of the data.

So please, explain how it is possible to defrag an SSD when it implements wear leveling. How can we guarantee that our data will be sequential if the SSD decides to use a block that has not been used before? Why can't we zero out our SSDs, then, if we can avoid wear leveling?

You guys are mistaking defragging for garbage collection. Wear leveling makes defragging impossible (as it should, so our drives get even use across all cells), so there is no point. And since there is no evidence that Cell A has a higher seek time than Cell B, it would be pointless anyway.

TRIM and garbage collection are not the same as defragging. When you defrag an HDD, it moves data to be sequential so your performance increases, since the mechanical drive does not need to move as much to get all the data.

Also, there are multi-channel drives that benefit from having data more spread out: they can access more of it at once, even if each channel is slower.


"We guys" aren't mistaking anything -- it's all the article. I recommend posting your theories there in the active comments section to see what comes of it.

 

no, it's a BLOG


Why don't you just read it. Windows does defragment SSDs, apparently.
 
Hint: it's not about seek times.

Storage Optimizer will defrag an SSD once a month if volume snapshots are enabled.

 

There is a maximum level of fragmentation that the file system can handle. The file system's metadata keeps track of fragments and can only keep track of so many. Defragmentation in cases like this is not only useful, but absolutely needed.
 
SSDs also have the concept of TRIM. While TRIM (retrim) is a separate concept from fragmentation, it is still handled by the Windows Storage Optimizer subsystem, and the schedule is managed by the same UI from the user's perspective.
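If you want to poke at either operation yourself, the built-in defrag.exe exposes both the analysis/defrag side and the retrim side that Storage Optimizer uses. A minimal sketch in Python (Windows only, needs an elevated prompt; the drive letter is just an example):

```python
import subprocess

def run_defrag(args):
    """Run the built-in defrag.exe and return whatever text it prints (needs admin)."""
    result = subprocess.run(["defrag"] + args, capture_output=True, text=True)
    return result.stdout + result.stderr

# Analyze fragmentation on C: without changing anything.
print(run_defrag(["C:", "/A"]))

# Send TRIM hints for the whole volume (retrim) - roughly what the scheduled
# "Optimize Drives" task does for an SSD on its monthly run.
print(run_defrag(["C:", "/L"]))
```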

I learn new things every day.


Personally, I've only ever defragged one SSD... I wouldn't do it at all on most of them, but I would leave the defragger enabled.

Hint: it's not about seek times.

It's the same effect.

With an SSD there is no rotational latency or seek time to contend with.  Many experts assume that fragmentation is no longer a problem, but application data access speed isn't just defined in those terms.  Each and every I/O request performed takes a measurable amount of time.  SSDs are fast, but they are not instantaneous.  The Windows NTFS file system does not behave any differently because the underlying storage is an SSD vs. HDD, and therefore fragmentation still occurs.  Reducing unnecessary I/Os by preventing and eradicating fragmentation reduces the number of I/O requests, and as a result speeds up application data response time and improves the overall lifespan of the SSD. See more at: http://blog.condusiv.com/post/2012/01/21/Setting-the-Record-Straight-Windows-7-Fragmentation-SSDs-and-You.aspx
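The arithmetic behind that quote is easy to sketch. Here's a toy model, with per-request overhead and throughput numbers invented purely for illustration, showing why splitting one read into thousands of small I/Os costs time even when there is no seek penalty:

```python
# Toy model: time to read one file issued as N separate I/O requests on an SSD.
# Both constants are made-up illustrative figures, not measurements.
PER_IO_OVERHEAD_S = 0.0001     # ~0.1 ms of fixed cost per I/O request
THROUGHPUT_BYTES_S = 500e6     # ~500 MB/s sequential throughput

def read_time(file_bytes, fragments):
    """Total time = transfer time + one fixed overhead per fragment (I/O request)."""
    return file_bytes / THROUGHPUT_BYTES_S + fragments * PER_IO_OVERHEAD_S

size = 100 * 1024 * 1024  # a 100 MiB file
for fragments in (1, 100, 10_000, 100_000):
    print(f"{fragments:>7} fragments -> {read_time(size, fragments) * 1000:.1f} ms")
```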


Hold up here, is it the SSD or the MFT that needs defragging? The MFT could easily get internal fragmentation; it is a database that points to where everything is stored, and having it fragmented could slow down file access. It is not the SSD itself that is the problem but the file system structure. In NTFS's case the MFT can be cleaned up and rearranged to perform better. I know I've seen issues with it even on SSDs, and doing a cleanup of it fixes the speed issues for file access times.
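For what it's worth, you can see how many extents (fragments) the file system is tracking for a given file without defragging anything. A rough sketch, assuming a reasonably recent Windows build where fsutil supports queryextents (needs an elevated prompt; the pagefile path is just an example and the line counting is only approximate):

```python
import subprocess

def extent_count(path):
    """Count the extent lines that 'fsutil file queryextents' prints for a file.
    Each extent line reports a VCN/LCN pair, so counting them approximates
    how fragmented the file is from NTFS's point of view."""
    out = subprocess.run(["fsutil", "file", "queryextents", path],
                         capture_output=True, text=True).stdout
    return sum(1 for line in out.splitlines() if "VCN" in line)

print(extent_count(r"C:\pagefile.sys"))
```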


Hold up here, is it the SSD or the MFT that needs defragging? The MFT could easily get internal fragmentation; it is a database that points to where everything is stored, and having it fragmented could slow down file access. It is not the SSD itself that is the problem but the file system structure. In NTFS's case the MFT can be cleaned up and rearranged to perform better. I know I've seen issues with it even on SSDs, and doing a cleanup of it fixes the speed issues for file access times.

 

Agreed, I do not think the SSD is being defragged.  We actually WANT....yes WANT a fragmented SSD because we want to get use out of all cells equally (called "wear leveling").  And wear leveling makes defragging the SSD impossible.

 

Where in that "article" did it mention wear leveling?  Nowhere.  So again, how can you possibly defrag an SSD that uses wear leveling?


I'm more interested in this quotation:

you can hit maximum file fragmentation (when the metadata can't represent any more file fragments) which will result in errors when you try to write/extend a file

maximum file fragmentation?

I remember deliberately making a single file (almost 2 GiB in size) that was very fragmented on a FAT16 file system (DOS).

While it had a lot slower performance, as expected, DOS could read that file in the proper order, and I could also modify that file's content without any error.
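That experiment is easy to reproduce today if anyone wants to test the NTFS side of it. A sketch that deliberately builds a fragmented file by alternating small appends between two files (file names, chunk size, and loop count are arbitrary, and whether it actually fragments depends on the allocator, though it usually does):

```python
# Deliberately fragment a file by interleaving appends between two files,
# so the allocator is unlikely to hand either one a single contiguous run.
CHUNK = b"x" * 4096  # one 4 KiB write; size chosen arbitrarily

with open("fragmented.bin", "wb") as target, open("spacer.bin", "wb") as spacer:
    for _ in range(50_000):   # ~200 MiB per file; adjust to taste
        target.write(CHUNK)
        target.flush()        # push data out so allocations interleave
        spacer.write(CHUNK)
        spacer.flush()
```

Running a defrag analysis (or the extent check above) on fragmented.bin afterwards should show a large number of fragments.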

 

So why would the newer NTFS have more issues if a file (or the $MFT) became too fragmented?

