Pagefile and SSD



I'm in the process of buying a new computer and have been reading a lot about the pros and cons of SSDs, especially the Intel X25-M 80GB. One of the main disadvantages of SSDs is the limited life of flash memory. All modern models use wear leveling, but they still have problems because they don't support the TRIM command, which can keep the wear-leveling algorithm from performing as well as it could. This will hopefully be resolved in time for the Windows 7 RTM.
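
For what it's worth, once you are on Windows 7 you can at least check whether the OS is sending TRIM commands. The fsutil query below is the real command; the little Python wrapper around it is just my own sketch for convenience:

    import subprocess

    # Windows 7 only: DisableDeleteNotify = 0 means TRIM commands are being
    # sent to the SSD, 1 means they are not. Run from an elevated prompt.
    result = subprocess.run(
        ["fsutil", "behavior", "query", "DisableDeleteNotify"],
        capture_output=True, text=True)
    print(result.stdout.strip() or result.stderr.strip())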

My greatest concern right now is the pagefile and how Windows will be making a lot of writes to the SSD. I have always been in the school that keeps the pagefile turned on. The prospect of wearing out the drive has, however, made me go back and analyze the advantages and disadvantages of different pagefile configurations.

The simplest and most compatible way is to just let Windows handle everything as normal. This would probably create the fewest problems and be very fast. It will also be the configuration that wears out the drive the most, and because of the high price of SSDs you wouldn't want a large pagefile taking up space on one.

A second option would be to add a conventional drive and put the pagefile on that while disabling it on the SSD. This would keep compatibility as high as letting Windows decide and would eliminate any wear on the SSD from pagefile activity. The downside is that whenever Windows needs to swap, it will be slow.

A sort of middle-of-the-road solution is to use a USB stick or CF card and put the pagefile on that. It will probably be tricky to get Windows to accept a USB stick as a fixed drive, but it could probably be done. A CF card and a CF-to-SATA adapter would be a simpler solution but requires more additional hardware. The advantage is that even a cheap secondary flash device has better random I/O performance than a spinning drive. The disadvantage is the need for extra hardware and that you just move the wear from one flash device to another. Factoring in the price of USB sticks and CF cards, that would not be a problem even if you had to replace the secondary flash memory every year.

The fourth approach is to keep the pagefile on the SSD but limit its size to something very small. That would address the concern about taking up a lot of expensive space on the SSD. It would perhaps also keep the wear down a bit, as Windows wouldn't be able to swap as much, and it would keep everything running very fast. I'm not sure how this would affect compatibility. (For how the size is actually stored, see the registry sketch at the end of this post.)

And last, the most controversial approach: turn the pagefile off completely. This should probably be avoided as long as possible, as it can lead to strange problems and hurt performance. It would however eliminate the wear on the SSD, and it doesn't require any additional hardware. I can also add that I'm planning to run a 64-bit operating system with 12GB of RAM. I don't know how much that will affect things when running without a pagefile, but it can't hurt.

What are your takes on this subject? The goal is to run the system at peak performance and without compatibility problems while still minimizing wear on the SSD.

I'm writing this in the Vista section, but I haven't decided between Vista and 7 yet and will upgrade to 7 when it is released, so the question is relevant to both OSes.
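
For reference, the pagefile layout behind the second, fourth and fifth approaches all comes down to a single registry multi-string, where each entry has the form "<path> <initial MB> <maximum MB>". Here is a read-only Python sketch of how to look at it; the key path and value name are the standard Memory Management ones, but the 512 MB figure in the comment is only an example, not a recommendation:

    import winreg  # Windows-only sketch

    KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
        paging_files, _ = winreg.QueryValueEx(key, "PagingFiles")  # REG_MULTI_SZ

    # A small fixed pagefile on the SSD would be a single entry like:
    #   ['C:\\pagefile.sys 512 512']      (example figures)
    # A pagefile on a secondary drive would simply point at that drive instead.
    # Writing the value back needs admin rights and a reboot to take effect.
    for entry in paging_files:
        print(entry)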


As long as you never max out your physical RAM, disabling the page file is completely safe (I am not aware of any problems with disabling the PF).

I have an SSD and I have always had the page file disabled on Windows 7 and XP. The only problem I ever got was running out of memory while playing Fallout 3 on 2GB RAM (a memory leak, probably).

Also, make sure to disable SuperFetch, Prefetch and defrag, and make sure that you formatted your SSD with a 128-sector partition offset.
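
As far as I understand it, the "128 offset" means the first partition starts at sector 128, i.e. 65,536 bytes, so writes line up with the SSD's pages. You can check what your partitions currently use; wmic is the real tool here, the Python around it is just a sketch:

    import subprocess  # Windows-only sketch

    # Win32_DiskPartition exposes each partition's starting offset in bytes.
    out = subprocess.run(["wmic", "partition", "get", "StartingOffset"],
                         capture_output=True, text=True).stdout

    for token in out.split():
        if token.isdigit():
            offset = int(token)
            status = "64K-aligned" if offset % 65536 == 0 else "NOT 64K-aligned"
            print("partition offset %d bytes: %s" % (offset, status))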

EDIT: 12GB?! Ok, forget ever running out of RAM.


If you ask me, go with the fourth approach - that sounds about right, like limiting the pagefile to 500MB/1000MB max.


From my experience, running without a pagefile causes more trouble than it's worth. I will however give it a try once I have everything up and running. I might get a surprise!

Does anybody know of a good way to monitor the pagefile? I would like to know bytes/s written and read, total size, page faults and so on... I'm currently using the built-in Performance Monitor plus Process Monitor and Process Explorer from Sysinternals.
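
In case it helps, here is a rough monitoring sketch I put together with the third-party psutil package (pip install psutil). It only reports totals that the package actually exposes - pagefile usage and cumulative disk I/O - so the per-interval figures are just deltas between samples:

    import time
    import psutil  # third-party package: pip install psutil

    prev = psutil.disk_io_counters()
    while True:
        time.sleep(5)
        now = psutil.disk_io_counters()
        swap = psutil.swap_memory()  # reports pagefile totals on Windows
        print("disk: %.1f MB written, %.1f MB read in the last 5s | "
              "pagefile: %d of %d MB in use" % (
                  (now.write_bytes - prev.write_bytes) / 2**20,
                  (now.read_bytes - prev.read_bytes) / 2**20,
                  swap.used // 2**20, swap.total // 2**20))
        prev = now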

I don't see how SuperFetch and Prefetch will wear out my drive (if a satisfactory solution to keep the pagefile from doing it is found), as they don't write massive amounts of data to the drive. SuperFetch should only load things into memory, and as long as there is no swapping it should not pose any problems. Prefetch is a bit more complicated but shouldn't write that much data to the drive. From my understanding, as long as you don't remove the prefetch cache or reboot, it will not write any data unless you run new programs. The big question about both of them is whether they give any performance boost on an SSD, as they were partly created to speed up loading times on a regular spinning drive.

EDIT: SuperFetch, not ReadyBoost...


From my experience, running without a pagefile causes more trouble than it's worth. I will however give it a try once I have everything up and running. I might get a surprise!

Does anybody know of a good way to monitor the pagefile? I would like to know bytes/s written and read, total size, page faults and so on... I'm currently using the built-in Performance Monitor plus Process Monitor and Process Explorer from Sysinternals.

I don't see how ReadyBoost and Prefetch will wear out my drive (if a satisfactory solution to keep the pagefile from doing it is found), as they don't write massive amounts of data to the drive. ReadyBoost should only load things into memory, and as long as there is no swapping it should not pose any problems. Prefetch is a bit more complicated but shouldn't write that much data to the drive. From my understanding, as long as you don't remove the prefetch cache or reboot, it will not write any data unless you run new programs. The big question about both of them is whether they give any performance boost on an SSD, as they were partly created to speed up loading times on a regular spinning drive.

Experiment with it on and off, see if the SSD is being used less and if everything works just fine. If everything works fine, disable the PF and save yourself some space.

Vista has performance monitoring tools, very good ones actually. They will tell you what is reading from and writing to your SSD. I find them to be excellent for diagnosing problems and stuff.

I am not aware of any need for either of those two with an SSD. Prefetch is for hard drives. SuperFetch depends on you; you won't notice any performance difference.

The OCZ forum, which helped me a lot, recommends disabling all of those things.
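
If you want to see what your install currently has before changing anything, both switches are plain DWORDs under the Memory Management key (0 = disabled, 3 = fully enabled); the value names below are the standard ones, not something I have made up. Reading them is harmless; changing them needs admin rights and a reboot. A quick Python sketch:

    import winreg  # Windows-only sketch

    KEY = (r"SYSTEM\CurrentControlSet\Control\Session Manager"
           r"\Memory Management\PrefetchParameters")

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
        for name in ("EnablePrefetcher", "EnableSuperfetch"):
            try:
                value, _ = winreg.QueryValueEx(key, name)
                print(f"{name} = {value}")  # 0 = off ... 3 = boot + applications
            except FileNotFoundError:
                print(f"{name} not set (defaults apply)")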

Also, if you desire, PM me for tweaks for Firefox and IE8 to be more SSD friendly.


Remember XP practically writes to disk every time you move your mouse.

(Desktop.ini, Browser cache, History, Adobe Flash cache)

Trust me, the pagefile doesn't account for even a third of the writes compared to general operating system usage.


Remember XP practically writes to disk every time you move your mouse.

(Desktop.ini, Browser cache, History, Adobe Flash cache)

Trust me, the pagefile doesn't account for even a third of the writes compared to general operating system usage.

Not much of an improvement over the years...

There are other entities that write and read a lot on Vista.

The browser cache is still an enemy of SSDs, as Safari, Firefox, and Internet Explorer still offer no option to disable it in the GUI (but you can with FF and with IE). System files with the *.ini extension should not be touched. History is not an enemy of SSDs. The Search Indexer and logging will write a lot to the SSD as well. You can limit the index and limit logging to problems only to reduce writes...
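
If you want to see which of those culprits is actually doing the writing on your box, per-process I/O counters make it pretty obvious. A quick sketch with the third-party psutil package (the counters are cumulative since each process started, and they cover all write I/O, not only disk, so treat it as a rough indicator):

    import psutil  # third-party package: pip install psutil

    totals = []
    for proc in psutil.process_iter(["name"]):
        try:
            io = proc.io_counters()  # cumulative since the process started
            totals.append((io.write_bytes, proc.info["name"]))
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue  # some system processes cannot be queried

    for written, name in sorted(totals, reverse=True)[:10]:
        print("%10.1f MB written   %s" % (written / 2**20, name))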

Also there are NTFS tweaks to reduce wear on the SSD.


And remember, this is not a quest to completely disable all writing to the drive, only to remove the worst culprits. The problem is that for most programs, if you disable caching or similar features, it's clear how that program's performance will suffer. Disabling caching in Firefox will require you to redownload content the next time you visit a page (well, there is a memory cache also, but let's ignore that for now), disabling thumbnail generation will show the old boring icons instead of thumbnails, and disabling the search indexer will slow down searches.

The difficulty in determining how the pagefile and virtual memory system affect anything is that they affect the system as a whole. I'm not aware of any good tests or benchmarks that have been made with different configurations of the memory subsystem in Vista or 7 when running from an SSD. It basically boils down to SSDs being very new and no current operating system being written with them in mind. We know very well what the problems with regular drives are and how to combat or mitigate them. When we put an SSD in our system, it has completely different shortcomings that will require new solutions.

Hopefully 7 will take care of the worst problems when it's released, but we also have to remember that for the last 20 years magnetic spinning disks have been the standard, and every consumer and server operating system is built around how they operate.

Virtual memory is also not as simple as saying that when we are out of memory we write unused stuff to a big file on a disk. A good example of this is Windows CE, which uses virtual memory but no pagefile. Virtual memory is a way for the system to pretend it has more memory than it actually has. This has several advantages beyond never running out of memory, such as giving every process its own memory space while keeping every other process from accessing it. This also protects the operating system from crashing due to bugs in programs that overwrite important memory locations by accident, as they usually don't have access to that area of memory (or protects it from malicious programs that try to access things they shouldn't). Every modern operating system is built around this concept, from Windows running on your home computer to the operating system running on your mobile phone. Remember, this is a VERY simplified explanation.
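
To make the "pretend there is more" part a bit more concrete: Windows reports physical memory and the commit limit (physical plus pagefile) separately, and each process also gets its own virtual address space on top of that. A small ctypes sketch of mine that just calls the documented GlobalMemoryStatusEx API and prints the figures:

    import ctypes  # Windows-only sketch

    class MEMORYSTATUSEX(ctypes.Structure):
        _fields_ = [
            ("dwLength", ctypes.c_ulong),
            ("dwMemoryLoad", ctypes.c_ulong),
            ("ullTotalPhys", ctypes.c_ulonglong),
            ("ullAvailPhys", ctypes.c_ulonglong),
            ("ullTotalPageFile", ctypes.c_ulonglong),
            ("ullAvailPageFile", ctypes.c_ulonglong),
            ("ullTotalVirtual", ctypes.c_ulonglong),
            ("ullAvailVirtual", ctypes.c_ulonglong),
            ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
        ]

    status = MEMORYSTATUSEX()
    status.dwLength = ctypes.sizeof(status)
    ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))

    gb = 2**30
    print("Physical RAM:                 %.1f GB" % (status.ullTotalPhys / gb))
    print("Commit limit (RAM + pagefile): %.1f GB" % (status.ullTotalPageFile / gb))
    print("User-mode address space of this process: %.0f GB" % (status.ullTotalVirtual / gb))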

The big problem is that everybody seems to know exactly how the pagefile in Windows works when they don't have a clue. Like Udedenkz said, it has several advantages and doesn't usually cause any problems as long as you don't run out of memory. But almost no problems doesn't mean no problems, and I usually trust Microsoft on these topics. A lot of the advice both on these forums and everywhere else is completely wrong and will even slow your system down. But like I said before, SSDs are new territory, and Microsoft is not without its faults as we all know. There are probably many tweaks we can do to both speed our systems up and decrease the wear on our drives. It all comes down to what we can sacrifice to gain performance.

Udedenkz, if you could elaborate some more on the NTFS tweaks or give me a link, I would be very grateful. I'm already planning to turn off indexing, thumbnails, logging and a lot of similar things, and I have been running Firefox without caching enabled for some time now due to 'security' reasons.


For SSDs, disable SuperFetch, Prefetch, indexing (on the SSD only) and defrag (on the SSD only). Also, if your programs allow it, move any cache they use to a mechanical drive. One major issue with SSDs isn't just the lifespan (though the newer drives have a rather long lifespan); they also tend to have a stuttering issue. Doing something even as simple as installing software can lock the SSD up to the point the system is practically unusable until the install is done. The same thing happens with browser caches; sometimes they cache so much the browser locks up for a few seconds.

Other than that they work great; they load programs in a fraction of the time a normal HDD can.

As for the pagefile, I suggest not disabling it if you can. Instead, put the pagefile on a mechanical HDD with a small (maybe 256MB) pagefile on the SSD. This way you can still utilize the pagefile and avoid any issues that can arise from disabling it, but you're minimizing the writes to the SSD pagefile to only error logs and whatnot.

Here is a page with some of the best tweaks: http://www.ocztechnologyforum.com/forum/sh...ead.php?t=47212

I personally don't mess with clearing the pagefile at shutdown, large system cache, second level data cache or NtfsMemoryUsage, but that's all out of preference.


@mwpeck - Newer and more expensive ones are much better at this, it seems - mine ain't, lol. Logging system events and software events and such will still work without a PF.

And remember, this is not a quest to completely disable all writing to the drive, only to remove the worst culprits. The problem is that for most programs, if you disable caching or similar features, it's clear how that program's performance will suffer. Disabling caching in Firefox will require you to redownload content the next time you visit a page (well, there is a memory cache also, but let's ignore that for now), disabling thumbnail generation will show the old boring icons instead of thumbnails, and disabling the search indexer will slow down searches.

The difficulty in determining how the pagefile and virtual memory system affect anything is that they affect the system as a whole. I'm not aware of any good tests or benchmarks that have been made with different configurations of the memory subsystem in Vista or 7 when running from an SSD. It basically boils down to SSDs being very new and no current operating system being written with them in mind. We know very well what the problems with regular drives are and how to combat or mitigate them. When we put an SSD in our system, it has completely different shortcomings that will require new solutions.

Hopefully 7 will take care of the worst problems when it's released, but we also have to remember that for the last 20 years magnetic spinning disks have been the standard, and every consumer and server operating system is built around how they operate.

Virtual memory is also not as simple as saying that when we are out of memory we write unused stuff to a big file on a disk. A good example of this is Windows CE, which uses virtual memory but no pagefile. Virtual memory is a way for the system to pretend it has more memory than it actually has. This has several advantages beyond never running out of memory, such as giving every process its own memory space while keeping every other process from accessing it. This also protects the operating system from crashing due to bugs in programs that overwrite important memory locations by accident, as they usually don't have access to that area of memory (or protects it from malicious programs that try to access things they shouldn't). Every modern operating system is built around this concept, from Windows running on your home computer to the operating system running on your mobile phone. Remember, this is a VERY simplified explanation.

The big problem is that everybody seems to know exactly how the pagefile in Windows works when they don't have a clue. Like Udedenkz said, it has several advantages and doesn't usually cause any problems as long as you don't run out of memory. But almost no problems doesn't mean no problems, and I usually trust Microsoft on these topics. A lot of the advice both on these forums and everywhere else is completely wrong and will even slow your system down. But like I said before, SSDs are new territory, and Microsoft is not without its faults as we all know. There are probably many tweaks we can do to both speed our systems up and decrease the wear on our drives. It all comes down to what we can sacrifice to gain performance.

Udedenkz, if you could elaborate some more on the NTFS tweaks or give me a link, I would be very grateful. I'm already planning to turn off indexing, thumbnails, logging and a lot of similar things, and I have been running Firefox without caching enabled for some time now due to 'security' reasons.

I agree with the first sentence. Using RAM for reading and writing instead of an HD or SSD is faster; performance suffers if an application cannot use the available RAM. You will have plenty of RAM for Firefox to keep its cache in RAM, no problem there.

DO NOT disable thumbnails - very dumb idea - just disable the thumbnail cache - your SSD is fast enough to generate thumbnails without it - thumbnails are rather useful. No one is telling you to disable indexing either; if you don't want it everywhere, you could limit indexing to windir and user profiles, as your SSD is going to be really frigging fast at searching the rest. :)

7 formats better and turns off things like SuperFetch if your SSD is good enough, I think. It is still in beta though.

There is one other problem with turning off the PF aside from just having no additional memory beyond RAM: Windows can't write a crash dump for BSOD debugging when it BSODs, or something like that. That is all I ever came across in the year running without a PF.

I think you are being way too pessimistic about this whole thing.

Many tweaks are posted on the OCZ forums. I tried all of them, even though I do not have an OCZ SSD; they are just general performance tips and such for SSDs.

LINK - Formatting Tips (Seems To Work)

Vista/7 Tips - good, but there are a few things I disagree on:

ClearPageFileAtShutdown should not be touched - it slows shutdown considerably. Stupid IMO.

One of the Tips is just to speed up the interface - useless.

It is much easier (and smarter) to disable Firefox's cache than to do what the guide says. Although putting temporary directories on a RAMDISK is a good idea.
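
If you do go the RAMDISK route, the per-user temp location is just the TEMP/TMP environment variables. A read-only Python peek at what they currently point to; the R: drive letter in the comments is a made-up example, use whatever letter your RAM disk software assigns:

    import os

    # Current per-user temp directories (usually under the user profile).
    print("TEMP =", os.environ.get("TEMP"))
    print("TMP  =", os.environ.get("TMP"))

    # To repoint them at a RAM disk you would persist new values, e.g. from a
    # command prompt (R: is a hypothetical RAM disk drive letter):
    #   setx TEMP R:\Temp
    #   setx TMP  R:\Temp
    # New values only apply to programs started afterwards; a logoff/logon is
    # the safe bet.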

Good Luck!


but they also tend to have a stuttering issue...

From my understanding, the stuttering is only an issue on drives based on the JMicron JMF602A/B controllers. Drives based on the Intel, Samsung or Indilinx controllers don't have this issue. The problem with the JMF602B controller, for example, is that a drive using it has an average write latency of about 500ms at an I/O queue depth of 3 when writing 4kB blocks, which is even slower than a regular hard drive. What I have heard is that it was rushed to market and was very cheap, and now the consumer is paying for corporate greed.

Thanks for the link, will have a look.


@mwpeck - Newer and more expensive ones are much better at this, it seems - mine ain't, lol. Logging system events and software events and such will still work without a PF.

I meant for BSOD logging, as you mentioned, not just general system event logging. But in all honesty, I haven't seen a single BSOD since I started using Windows 7, back when the first beta came out.

DO NOT disable thumbnails - very dumb idea - just disable the thumbnail cache - your SSD is fast enough to generate thumbnails without it - thumbnails are rather useful. No one is telling you to disable indexing either; if you don't want it everywhere, you could limit indexing to windir and user profiles, as your SSD is going to be really frigging fast at searching the rest. :)

For thumbnails, are you talking about FF or image thumbnails in Explorer? If the latter, how do you go about disabling the cache?

As for indexing, it's pretty safe to disable it on the SSD. I personally have it disabled, and it takes maybe 10 seconds for Explorer to find a file (on my desktop) when searching the entire C: drive, even if it's a newly created, never-before-opened file. Probably slower than if I left indexing on, but it's fast enough for me not to be bothered by it. Besides, I typically only search from the Start menu for programs I want to launch that I don't have in the superbar, which it finds nearly instantaneously.

From my understanding, the stuttering is only an issue on drives based on the JMicron JMF602A/B controllers. Drives based on the Intel, Samsung or Indilinx controllers don't have this issue. The problem with the JMF602B controller, for example, is that a drive using it has an average write latency of about 500ms at an I/O queue depth of 3 when writing 4kB blocks, which is even slower than a regular hard drive. What I have heard is that it was rushed to market and was very cheap, and now the consumer is paying for corporate greed.
Could be... I personally have a 120GB Apex, which I believe is based on the JMicron controller. I am not sure what the average write latency is, but HD benchmark programs (can't remember the exact name, I know it wasn't HDTach though) show my latency as 0.2ms, though I think that is for average read and not average write... any way to bench the average write latency?

Could be... I personally have a 120GB Apex, which I believe is based on the JMicron controller. I am not sure what the average write latency is, but HD benchmark programs (can't remember the exact name, I know it wasn't HDTach though) show my latency as 0.2ms, though I think that is for average read and not average write... any way to bench the average write latency?

The Apex uses JMicron JMF602B. One reference I found to this problem is:

http://www.anandtech.com/storage/showdoc.a...i=3531&p=17

The whole article is a good read.
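
As for benchmarking write latency yourself: tools like Iometer or the benchmarks in that article are the proper way, but as a rough sanity check you can time small random writes from a script. This sketch of mine forces each 4 KB write out with fsync; Windows still caches unless you open the file unbuffered, so treat the numbers as indicative only:

    import os
    import random
    import time

    PATH = "write_latency_test.bin"   # place this on the drive you want to test
    FILE_SIZE = 64 * 2**20            # 64 MB scratch file
    BLOCK = os.urandom(4096)
    SAMPLES = 1000

    fd = os.open(PATH, os.O_RDWR | os.O_CREAT | os.O_TRUNC | getattr(os, "O_BINARY", 0))
    try:
        os.write(fd, b"\0" * FILE_SIZE)   # pre-allocate the scratch file
        os.fsync(fd)
        latencies = []
        for _ in range(SAMPLES):
            offset = random.randrange(FILE_SIZE // 4096) * 4096
            os.lseek(fd, offset, os.SEEK_SET)
            start = time.perf_counter()
            os.write(fd, BLOCK)
            os.fsync(fd)                  # push the write past the OS cache
            latencies.append(time.perf_counter() - start)
    finally:
        os.close(fd)
        os.remove(PATH)

    latencies.sort()
    print("avg %.2f ms, median %.2f ms, worst %.2f ms" % (
        sum(latencies) / len(latencies) * 1000,
        latencies[len(latencies) // 2] * 1000,
        latencies[-1] * 1000))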


@mwpeck - For Explorer, just Google something like: disable thumbnail cache in Vista.

@p1p3 - you might also want to research disabling the NTFS Change Journal... it might help. (fsutil usn deletejournal /d C:)

Although, with an expensive SSD like that, shouldn't matter much. :)


The Apex uses JMicron JMF602B. One reference I found to this problem is:

http://www.anandtech.com/storage/showdoc.a...i=3531&p=17

The whole article is a good read.

Thanks. One odd thing though: he mentions you will notice it when saving small files and whatnot, but saving files has never caused an issue for me. When I save a file, it saves and I can close it without noticing any lag at all... I mainly notice it when installing programs.

Wouldn't it be more ideal to just put the pagefile on an external drive?

That's what I personally do, so I still have a pagefile just in case.


It's been a couple of weeks since I read that article, but from what I remember it's not saving files from a program but writing many small files to the disk that is slow. A simple example: when I look at my Adobe folder, where I have Photoshop and Premiere installed, there are over 12,000 files that take only about 2GB in total. Installing that on a JMicron-based drive could be a problem if it's not tweaked right.

The approach of using a secondary hard disk drive for the pagefile was one I mentioned in my first post, and it's a simple approach which should have decent performance. I would however like to keep my workstation completely free of old spinning disks if possible. All my bulk storage will be on my server anyway.


It's been a couple of weeks since I read that article, but from what I remember it's not saving files from a program but writing many small files to the disk that is slow. A simple example: when I look at my Adobe folder, where I have Photoshop and Premiere installed, there are over 12,000 files that take only about 2GB in total. Installing that on a JMicron-based drive could be a problem if it's not tweaked right.

Actually, Photoshop (that's all I use personally) installs faster on my SSD than it ever has on my HDD, despite having tons of files.


If Vista detects that it is installed on an SSD, it will auto-disable defrag, as SSDs don't need defragging. But for you, the best option is to leave the pagefile size at the default, even on an SSD; it will cause the least amount of problems and maintain high compatibility.


I have an X25-M; this is from the manual:

3.5.4 Minimum Useful Life

A typical client usage of 20 GB writes per day is assumed.

Should the host system attempt to exceed 20 GB writes per day by a large margin for an extended period, the drive will enable the endurance management feature to adjust write performance.

By efficiently managing performance, this feature enables the device to have, at a minimum, a five year useful life.

Under normal operation conditions, the drive will not invoke this feature.

If it fails before 5 years, then you can get a new one under warranty.
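
Just to put that 20 GB/day figure in perspective, the arithmetic is pretty forgiving. The P/E cycle count below is my own assumption for MLC flash of that generation, not something from the Intel manual, and it ignores write amplification and wear-leveling overhead entirely:

    CAPACITY_GB = 80
    HOST_WRITES_PER_DAY_GB = 20    # Intel's assumed client workload
    YEARS = 5
    ASSUMED_PE_CYCLES = 10000      # assumption for illustration, not a spec

    host_writes_gb = HOST_WRITES_PER_DAY_GB * 365 * YEARS
    raw_flash_endurance_gb = CAPACITY_GB * ASSUMED_PE_CYCLES

    print("Host writes over %d years: %.1f TB" % (YEARS, host_writes_gb / 1024))
    print("Raw flash endurance at %d cycles: %.0f TB" % (
        ASSUMED_PE_CYCLES, raw_flash_endurance_gb / 1024))
    print("Headroom factor (ignoring write amplification): %.0fx" % (
        raw_flash_endurance_gb / host_writes_gb))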

