The_Decryptor, on 06 January 2013 - 23:21, said:
As mentioned, that's only for files 20MB or smaller, and it's not part of the filesystem; it's OS logic (no functional difference to the end user, but there's a logical break).
Splitting the OS and its proprietary filesystem into two separate entities is a complicated proposition, seeing how HFS+ is strictly a Mac filesystem and its implementation has improved as OS X has evolved. (Is HFS+ journaling an OS feature or a filesystem feature, for instance? And if a new NTFS feature such as quotas or compression ships with a specific version of Windows and is never backported, does it matter that it technically lives in ntfs.sys rather than in Windows itself, when one can't be used without the other?) The on-the-fly defragmentation (triggered when a small, fragmented file is opened) is implemented at the kernel level, and OS X uses an assortment of other techniques, such as delayed allocation, to make defragmentation largely a moot point. Under normal circumstances there is simply not enough of a performance gain to justify the wasted time and extensive disk activity that defragmentation causes.
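For the curious, the general shape of that kernel-level check is visible in Apple's open-source xnu code; the sketch below is a loose C paraphrase, and the struct names, fields, and the extent threshold are hypothetical placeholders from memory, not the actual hfs_vnop_open source:

```c
/* Hypothetical paraphrase of the HFS+ defragment-on-open heuristic.
 * Struct names, fields, and thresholds here are illustrative only. */
#define MAX_AUTODEFRAG_SIZE (20u * 1024 * 1024) /* the 20MB ceiling */
#define MIN_EXTENTS_TO_CARE 8                   /* assumed threshold */

struct hypo_file { unsigned long long size; unsigned extent_count; int busy; };
struct hypo_vol  { int read_only; unsigned long long free_bytes; };

/* Relocate the file into one contiguous run only when it is small,
 * genuinely fragmented, idle, and the volume has room for a copy. */
static int should_relocate(const struct hypo_file *f, const struct hypo_vol *v)
{
    return f->size <= MAX_AUTODEFRAG_SIZE
        && f->extent_count >= MIN_EXTENTS_TO_CARE
        && !f->busy
        && !v->read_only
        && v->free_bytes > f->size;
}
```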
And EXT4 is only resistant to fragmentation as long as you pre-allocate files (yay extents) to their final length (the same goes for other filesystems). If you write a solid 20MB of data and then come back to write 1KB into a gap in the middle, the allocator can't physically slot it into the middle of that 20MB run; it'll be placed somewhere else on disk.
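To make the pre-allocation point concrete, here's a minimal sketch for Linux/ext4 using the standard posix_fallocate() call; the filename and sizes are just placeholders:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define FINAL_SIZE (20u * 1024 * 1024) /* the 20MB file from the example */

int main(void)
{
    int fd = open("data.bin", O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Reserve the full final length up front. On ext4 this becomes one
     * (ideally contiguous) unwritten extent, so the blocks in the
     * "middle" already belong to the file before anything is written. */
    int err = posix_fallocate(fd, 0, FINAL_SIZE);
    if (err) { fprintf(stderr, "posix_fallocate: %s\n", strerror(err)); return 1; }

    /* A later 1KB write into the middle now lands on blocks that were
     * reserved in place, instead of being allocated somewhere else. */
    char buf[1024];
    memset(buf, 0xAB, sizeof(buf));
    if (pwrite(fd, buf, sizeof(buf), FINAL_SIZE / 2) != (ssize_t)sizeof(buf)) {
        perror("pwrite");
        return 1;
    }
    return close(fd);
}
```

(The logical extent count may still tick up as unwritten ranges are converted to written ones, but the physical placement stays contiguous, which is what matters for seek times.)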
Even if a terribly written program did decide to do that, defragmentation would probably do more harm than good to a Linux system as a whole, unless the defragmenter were carefully written to account for the fact that files are deliberately spread across the disk as a strategy to resist fragmentation and improve seek times. The benefits of manual defragmentation, at least under modern versions of Linux and especially Mac OS X, are not readily apparent under normal use. Even under Windows, the built-in defragmentation tool has become less and less thorough, because there's a trade-off between time spent and performance gained.
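Which is also why it's worth measuring before defragmenting anything. On Linux you can ask the kernel how many extents a file actually occupies via the FIEMAP ioctl (the same interface the filefrag tool uses); a minimal sketch:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* With fm_extent_count == 0 the kernel only counts the extents
     * instead of returning them, which is all we need here. */
    struct fiemap fm;
    memset(&fm, 0, sizeof(fm));
    fm.fm_start = 0;
    fm.fm_length = ~0ULL; /* map the whole file */
    fm.fm_extent_count = 0;

    if (ioctl(fd, FS_IOC_FIEMAP, &fm) < 0) { perror("FS_IOC_FIEMAP"); return 1; }

    printf("%s: %u extent(s)\n", argv[1], fm.fm_mapped_extents);
    close(fd);
    return 0;
}
```

A file that comes back as a single extent gains nothing from defragmentation, no matter what a third-party tool claims.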