Advisable to disable paging file?



Windows is a virtual memory OS and WILL use virtual memory no matter what you do. Until Windows is changed to a non-virtual-memory-based OS, you should leave the virtual memory setting alone. But of course, it's your system, and you can do what you want with it.


Keeping it on the boot volume does serve one purpose, though: it lets you store crash dumps (which are written directly to the sectors of the boot volume where the page file lives).

That's why you create a 50MB page file on the boot drive and a normal one on another drive.


Windows is a virtual memory OS and WILL use virtual memory no matter what you do. Until Windows is changed to a non-virtual-memory-based OS, you should leave the virtual memory setting alone. But of course, it's your system, and you can do what you want with it.

That is not quite how it works. Virtual memory simply means that there isn't a direct relation between the actual physical address space and the virtual address space applications are presented with. Each application gets its own private, contiguous 32-bit/64-bit virtual address space, and the OS maps addresses in this space to physical memory (or other resources) on demand. A side effect of this abstraction is that the OS can map addresses to any resource, not just system memory. You can tell it to map an MP3 file into the virtual address space, and it can be treated as if it were in memory (when it's really being read from disk). The OS can also map addresses into a page/swap file the same way, but this is an additional feature and not a requirement.
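If it helps make that concrete, here's a minimal Win32 sketch of file mapping (the file name is just a placeholder and error handling is stripped down): the file is mapped into the process's virtual address space and can then be read through an ordinary pointer, with the OS paging the data in from disk on demand.

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* "song.mp3" is just a placeholder file name for this example. */
    HANDLE file = CreateFileA("song.mp3", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    /* Create a read-only mapping object backed by the file itself (not the page file). */
    HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
    if (mapping == NULL) { CloseHandle(file); return 1; }

    /* Map the whole file into this process's virtual address space. */
    const unsigned char *view =
        (const unsigned char *)MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
    if (view != NULL) {
        /* The file now looks like ordinary memory; touching it causes the
           memory manager to page the data in from disk on demand. */
        printf("first byte: 0x%02x\n", view[0]);
        UnmapViewOfFile(view);
    }

    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}
```

Anonymous memory that a program commits with VirtualAlloc is handled the same way, except that its backing store is the page file (if one exists); that's the "additional feature" part.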


According to the 1.5x rule, the system will allocate an 18GB pagefile. This pagefile is written to at shutdown to preserve RAM, system files and configuration files, so on a system crash you will be able to recover with "last known good configuration" and your files. Minimizing the pagefile size will prevent the system from saving all the data needed.

The same rule for preserving data applies to sleep mode / hibernation mode.


The following run fine without a page file (tested for about 1.5 years):

Windows XP Professional SP2/3 x32

Windows XP Home Edition SP2/3

Windows Server 2003 SP2 x32

Windows XP Professional SP2 x64

Windows 7 Ultimate x64

The following will not:

*null*

That is, as long as you have enough RAM; otherwise, Windows WILL KILL off any application that goes over the limit.

Enjoy more space for your porn.


This pagefile is written to at shutdown to preserve RAM, system files and configuration files, so on a system crash you will be able to recover with "last known good configuration" and your files. Minimizing the pagefile size will prevent the system from saving all the data needed.

The same rule for preserving data applies to sleep mode / hibernation mode.

The "last known good configuration" data is saved when you log on, not when you log off. I mean, think about it. The point of the feature is to save certain information about the last configuration known to successfully boot. You can't do that by saving anything when the system shuts down, because then you'd potentially be saving the last _bad_ configuration.

The information is also not saved in the page file. Why would it be? The page file is not a centralized database for storing random things. Nothing is saved in the page file except pages*, and they are only valid during the session. The contents of the page file are not, and cannot be, reused after a reboot.

*The exception is if you have crash dumps enabled and you have a page file on the boot volume, in which case Windows will dump crash data to the sectors occupied by the page file (bypassing the file system driver, which is why it cannot just create a new page file for this purpose if you have it disabled). When Windows boots back up, it can detect this and read this data back and write it into a properly formatted crash dump file.


Been running without a pagefile since Vista SP1 on 8GB of memory. Never had any problems. Quite honestly, I believe I've had fewer problems due to the fact that the memory manager's overhead for virtual memory gets eliminated. Less code to run, less chance for faults. QED


The Windows memory manager is efficient enough to make sure that paging doesn't make the system slower.

All the pages in RAM are tracked when they are accessed, and an LRU/MRU list is maintained with information about how frequently and how recently each page was accessed. It only pages out the pages that have not been recently accessed or are predicted not to be accessed soon. So, obviously, the pages that need to be in RAM are not involved in paging, so there's no reason it should be slower.

Also, pages have a "dirty" bit that is set if the page is modified. Now, if an unmodified page that has not yet been overwritten in memory by other pages needs to be brought back into RAM, no disk activity is involved, and the previously discarded copy in memory is simply made active again.

When a program is first started, all the required pages are brought into memory based on the program's working set. Only pages that are not accessed frequently will be paged out. Now, if a lot of memory is available in the system, the previously paged-out pages will not involve any additional disk activity and will simply be marked active in RAM, so again there's no reason it should be slower. And if less memory is available, you definitely need a page file, since the current working set is larger than the available RAM.
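Just to illustrate the LRU and dirty-bit mechanics in the simplest possible terms, here's a small, self-contained C sketch; it's a textbook-style toy, not Windows's actual replacement algorithm, but it shows why a clean (unmodified) victim page never costs a write back to disk.

```c
#include <stdio.h>

#define NUM_FRAMES 4

struct frame {
    int page;          /* which virtual page occupies this frame (-1 = free) */
    unsigned last_use; /* "timestamp" of the most recent access */
    int dirty;         /* set when the page has been modified since it was loaded */
};

static struct frame frames[NUM_FRAMES];
static unsigned clock_ticks = 0;

/* Touch a page; 'is_write' marks it dirty. Returns 1 if a page fault occurred. */
static int access_page(int page, int is_write)
{
    /* Hit: the page is already resident, so no disk activity is needed at all. */
    for (int i = 0; i < NUM_FRAMES; i++) {
        if (frames[i].page == page) {
            frames[i].last_use = ++clock_ticks;
            frames[i].dirty |= is_write;
            return 0;
        }
    }

    /* Miss: pick the least recently used frame as the victim (free frames first). */
    int victim = 0;
    for (int i = 1; i < NUM_FRAMES; i++)
        if (frames[i].last_use < frames[victim].last_use)
            victim = i;

    /* Only a dirty victim has to be written back; a clean one can just be dropped. */
    if (frames[victim].page != -1 && frames[victim].dirty)
        printf("  write page %d back to the page file\n", frames[victim].page);

    printf("  bring page %d into RAM\n", page);
    frames[victim].page = page;
    frames[victim].last_use = ++clock_ticks;
    frames[victim].dirty = is_write;
    return 1;
}

int main(void)
{
    for (int i = 0; i < NUM_FRAMES; i++) frames[i].page = -1;

    /* A tiny reference string: the recently used pages stay resident. */
    int refs[]   = { 1, 2, 3, 4, 1, 2, 5, 1, 2 };
    int writes[] = { 0, 1, 0, 0, 0, 0, 0, 0, 1 };
    for (int i = 0; i < 9; i++) {
        printf("access page %d%s\n", refs[i], writes[i] ? " (write)" : "");
        access_page(refs[i], writes[i]);
    }
    return 0;
}
```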

Of course it is. Why on earth do you believe Windows creates super secret page files on your hard drive even though you've explicitly instructed it not to? What do you base this absurd claim on?

+1


The Windows memory manager is efficient enough to make sure that paging doesn't make the system slower.

I will not go into detail to explain why this statement flouts every known coding principle; suffice it to say that there's no way that adding an extra layer of disk reads/writes is going to make a memory manager MORE efficient.


I will not go into detail to explain why this statement flouts every known coding principle; suffice it to say that there's no way that adding an extra layer of disk reads/writes is going to make a memory manager MORE efficient.

Oh really? Does the word "efficient" just mean more speed? Doesn't it mean using the resources in an efficient manner? The page file allows more programs to be used efficiently and concurrently. Also, regarding speed, just read the previous post once again before commenting on the first line...


I read the whole thread. Still don't agree, sorry. Efficiency I see as fewer processor cycles wasted, thus freeing the processor for other tasks or just plainly heating up less.

More code equals more cycles equals less efficiency.


Unless we can agree on the actual interpretation of efficiency, we can't agree on whether the page file is efficient or not. For me, efficiency is the best possible use of all available resources, and the page file is an important instrument in that.


It doesn't? I've tried and been unable to measure any difference. I've also watched the page fault statistics, and it virtually never pages to disk anyway.

Of course, this depends on your memory usage patterns. You know that.

Windows will page out pages that haven't been accessed for a long time and that have become dormant, even when there is free memory. The idea behind this is that this is memory that probably won't be accessed again (probably memory that the app has "leaked"), and that this will free up memory for any needs down the road (or even for, say, the file system cache).

Normally, this is a useful optimization. Let's say that you're running an app that has leaked 512MB of memory. The leaked pages will become stale and will be paged out, which minimizes the impact that a leaky application has on your system's physical RAM usage. But this optimization is a heuristic, and as with all heuristics, there are cases when Windows will get it wrong. If you have 8GB of RAM, chances are, you don't care if a leaky app is hogging half a GB of RAM (you have plenty to spare!), in which case, there is no point in wasting that disk I/O to page out. Worse, it may be that the memory isn't really "leaked", and that the application just has an unusual memory usage pattern in which large numbers of pages will become stale, only to be accessed again later. In this case, paging will definitely hurt your performance.

And there is a very good example of this in Firefox: there are very few actual true memory leaks in Firefox, because Mozilla is pretty vigilant about using various automated tools to detect them. But there are memory "leaks" in the practical sense, through objects that have long lifetimes (but that do get cleaned up during Gecko's shutdown housekeeping, and are thus not technically leaks) or through address space fragmentation. For example, I once had a Firefox process that had 1.3GB committed, but it was only using 0.3GB of physical, since about 1GB of that was de facto "leaked" and stale (and I was nowhere near exhausting my physical RAM, so Windows was not paging this stuff out due to memory pressure). The problem was that this de facto "leaked" memory was not really "leaked", and when you exit Firefox, it starts cleaning all this stuff up, not by chaotically decommitting a whole range of pages (that wouldn't have been a problem), but rather by deleting objects and freeing stuff from the heap piece by piece, which means that Windows has to page much of that stuff back in for this orderly (though ultimately unnecessary) bookkeeping. That is partly why Firefox shutdowns can sometimes take a long time (emphasis on "partly", since one shouldn't pin it all on this).

The point is, IF you are sure that you have enough physical RAM that you can do without a page file, then there are definitely scenarios (the likelihoods of which are dependent on individual usage patterns) where you really can be "smarter than the memory manager". And even if you're not, at worst, you'll have the same performance as before. At best, you can come out ahead in these sorts of scenarios.


I agree, except that some programs explicitly require the PageFile even if you have 8 gigs of RAM installed (actually, I think even Office prefers some PageFile)

I noticed that even with large amounts of RAM installed, a PageFile with a minimum of 50Meg will be all that is required (to satisfy these programs; we really need a list, but one of them may be the actual memory required for a BSOD crash dump)

So, after all the back-and-forth debating in this thread, we can say that:

1. Yes, disabling the PageFile will disable the PageFile (this part seems obvious, but not to some)

2. The PageFile will be used even before RAM runs out; I'd say always

3. Disabling the PageFile fully will speed up performance, as there is now no managing of the PageFile by the processor

4. The PageFile should be given 50Meg (Min and Max) as a minimum (just in case some program needs it)

5. The 1.5x rule is now old (i.e. 1.5 x 6Gig is a 9Gig PageFile - absolutely ridiculous; although on 512Meg, a ~768Meg PF is OK)

6. The PageFile (minimum 50Meg) can also be allocated on another drive, to improve performance

Are we settled, or are there going to be more comments that the PF can't be disabled (?), or that it needs to be more than 2Gig (?), or that it should be system managed (this is only required because the operator doesn't know what value to place in it for a specific system)?

So the question is, how do we determine the amount of PageFile our system requires?

Here's a hint (I have said it a few times): start at 50Meg Min and Max, and adjust from there

I personally feel a ~1.5Gig PF would be the absolute max (for both Min and Max values) on a system with more than 2Gig of RAM (note: ideally 50Meg)

I wonder what others have given their PageFile, and what works for them?


So the question is, how do we determine the amount of PageFile our system requires?

Stress the system (in terms of memory load) for several days (give it time so that you can also account for memory leaks, which can take time to creep up and manifest themselves). Look at the peak commit charge in Task Manager. Now set your page file size such that it bridges the gap between your physical RAM and that peak commit. Because that's what the page file is for: bridging the gap (if any) between your physical RAM and the amount of memory you actually need. In other words, in general, the less physical you have, the more page file you need. The actual amount depends on what you do with your system and what your memory-stressed load looks like. Any advice that says "set it to #% of your memory or just set it to #GB" is mostly bunk.
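If you'd rather read those numbers programmatically than eyeball Task Manager, here's a rough C sketch using GetPerformanceInfo from psapi; the "suggested page file" arithmetic at the end is just the gap-bridging rule I described above, not any official formula.

```c
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

/* Link against psapi.lib (MSVC) or -lpsapi (MinGW). */
int main(void)
{
    PERFORMANCE_INFORMATION pi;
    pi.cb = sizeof(pi);
    if (!GetPerformanceInfo(&pi, sizeof(pi))) {
        fprintf(stderr, "GetPerformanceInfo failed: %lu\n", GetLastError());
        return 1;
    }

    /* All the counters are in pages; convert to megabytes. */
    double page_mb  = (double)pi.PageSize / (1024.0 * 1024.0);
    double physical = pi.PhysicalTotal * page_mb;
    double commit   = pi.CommitTotal   * page_mb;
    double peak     = pi.CommitPeak    * page_mb;  /* peak commit charge since boot */

    printf("Physical RAM:       %.0f MB\n", physical);
    printf("Current commit:     %.0f MB\n", commit);
    printf("Peak commit charge: %.0f MB\n", peak);

    /* The gap-bridging idea: the page file only needs to cover whatever your
       peak commit exceeds physical RAM by (plus a generous cushion). */
    double gap = peak - physical;
    if (gap > 0)
        printf("Suggested page file: at least %.0f MB plus a cushion\n", gap);
    else
        printf("Peak commit never exceeded physical RAM this session.\n");
    return 0;
}
```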

Or, you can just let the system manage it. Because except in the case where you end up disabling the page file (or setting it so small that it is in effect disabled), there is not going to be any appreciable difference between a manually-sized page file and a system-managed one (except that the latter would probably eat up more disk space, but a few GB is nothing these days).

And out of curiosity, what exactly are these programs that go out of their way to snoop around for a page file? Because I've never seen such an abomination. I find it difficult to believe that any program not related to memory management would ever have a need to know about the state of the page file or that any programmer would waste his time and effort to code in such a check unless there was an actual purpose for it.


Hm, actually, after you answered that, I thought: oh yeah, I knew that

It's all about checking performance in Task Manager while running all the programs that you normally use

But thanks for the recap (and for letting others know, which I should've done!)

As for "which programs"

Hmm I'm sure there were some. I thought even Office, and some games. Could be wrong - maybe nothing needs the pagefile. But I'm sure 50Meg was required for something... I just can't remember. Actually it was the "just in case" part ;)


As for "which programs"

Hmm I'm sure there were some. I thought even Office, and some games. Could be wrong - maybe nothing needs the pagefile. But I'm sure 50Meg was required for something... I just can't remember. Actually it was the "just in case" part ;)

First, I can't imagine any need for a program to even know whether a page file exists (unless the program is supposed to create and display a memory report for the user). Second, if a program did know whether a page file exists or even the size and current usage of the page file, what is it going to do with that information? The existence or non-existence of the page file doesn't enable or hinder anything (except for crash dumps, but only the kernel needs that; regular memory dumps don't--and can't--make use of the page file). There is no feature or functionality that is enabled or disabled based on the existence of a page file. Finally, there is nothing that a program can do with the page file. The kernel and only the kernel has access to the page file (imagine the chaos that there would be if programs could directly access the page file!). No program can directly deal with the page file: it's all handled through the kernel's memory manager, and the process doesn't know (or care to know) whether its memory is paged out or not. The most that the process can do is signal to the memory manager to lock certain pages in physical RAM and never page them out or to signal to the memory manager that it's okay to trim its working set and page some things out, if the memory manager wants to.
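For what it's worth, those two signals look roughly like this in Win32 (a bare-bones sketch with a placeholder buffer): VirtualLock asks the memory manager to keep pages resident, and SetProcessWorkingSetSize with (-1, -1) tells it that it's fine to trim the process's working set.

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Placeholder: pretend this 64 KB buffer holds something latency-critical. */
    SIZE_T size = 64 * 1024;
    void *buf = VirtualAlloc(NULL, size, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (buf == NULL) return 1;

    /* Ask the memory manager to keep these pages resident (never page them out).
       This can fail if it would exceed the process's working-set quota. */
    if (VirtualLock(buf, size))
        printf("buffer locked into physical RAM\n");
    else
        printf("VirtualLock failed: %lu\n", GetLastError());

    /* ... use the buffer ... */

    VirtualUnlock(buf, size);

    /* The opposite signal: tell the memory manager it's OK to trim this
       process's working set (page out whatever it sees fit). */
    if (SetProcessWorkingSetSize(GetCurrentProcess(), (SIZE_T)-1, (SIZE_T)-1))
        printf("working set trimmed\n");

    VirtualFree(buf, 0, MEM_RELEASE);
    return 0;
}
```

Note that neither call touches the page file directly; both are just hints to the kernel's memory manager, which is the only thing that ever deals with the page file.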

So I find it highly unlikely that any program will fail to run in the absence of a token page file. If there is such a program, then the limitation is a purely arbitrary one (i.e., an idiot programmer who has no understanding of the page file checks for a page file and throws up an error message if none is found, but even that scenario seems a bit far-fetched). So I think it's safe to chalk this one up as an old urban legend; if you don't want a page file, just disable it--there's no need for a 50MB token page file.


I think it was the game Crysis that seemed to use the PageFile, for better performance

But if PageFile was disabled, then it would still run, just not at 100%. Again this could be wrong - untested

Oh and I found this on XP: http://support.microsoft.com/kb/257758/

When you set the paging file (Pagefile.sys) on your computer to a size that is lower than the recommended size of 12 megabytes (MB) plus the amount of random access memory (RAM), a temporary paging file (Temppf.sys) may be created

So (again about this) to avoid any "Temppf.sys" file, make the PageFile 50Meg at Minimum


Windows will page out pages that haven't been accessed for a long time and that have become dormant, even when there is free memory. The idea behind this is that this is memory that probably won't be accessed again (probably memory that the app has "leaked"), and that this will free up memory for any needs down the road (or even for, say, the file system cache).

I forgot to add in my original post, that in certain situations, this can be quite important.

For example, I have an old machine (without a lot of RAM) that I use as a build server (for compiling code). I noticed one day that my peak memory usage during compilation never exceeded my physical RAM, and that, furthermore, one of the major bottlenecks was disk I/O. So I decided to see if I could do better by disabling paging; I figured that if I forced the memory manager to never page to disk, then I'd get better disk I/O. This was, of course, a mistake, because I had forgotten that Windows has a file system cache that pretty much uses whatever free physical RAM is available (this is not Superfetch; this is the standard file system cache that has been in NT since the very beginning, and since this is a part of the kernel and not a separate service, the memory used for this does not show up in Task Manager or Process Explorer *), and by preventing the system from paging out rarely-used regions of memory, I had effectively reduced the amount of memory available to the disk cache. In the end, this ended up hurting my disk I/O performance; it turned out to be far better to page the rarely-accessed pages to disk and then use that freed physical memory to assist in general disk I/O, because the data in the latter was being accessed far more than the data in the former. The moral of the story is that you should leave a generous cushion between your peak memory usage and your total memory (physical + page file), because you'll never know when it'll come in handy.**

(*) On an unrelated note, this is why the toning down of Superfetch on Windows 7 isn't such a bad thing, because an aggressive Superfetch just crowds out this traditional file system cache, and you just end up robbing Peter to pay Paul.

(**) Of course, if you have 6 or 8GB of RAM, then chances are, your cushion is already pretty darn big even with no page file, and this would not apply to you.


Unless we are now speaking of (unrelated) caching

I didn't want to get into disk caching and everything caching, because that's offtopic in my view

As for the "handy" part, ie start with 50Meg ! ;)


I think it was the game Crysis that seemed to use the PageFile, for better performance

But if PageFile was disabled, then it would still run, just not at 100%. Again this could be wrong - untested

Let's imagine that this is true. So how exactly would it "use" the page file, since there is no way whatsoever that it can directly access it? If there is a performance boost, then it will probably be the result of the forms of caching that might be crowded out if rarely-used memory can't be paged out (e.g., Windows can hold a memory-mapped file in RAM for longer, or perhaps more system RAM can be used to assist the video card for texture caching).

Oh and I found this on XP: http://support.microsoft.com/kb/257758/

So (again about this) to avoid any "Temppf.sys" file, make the PageFile 50Meg at Minimum

So you only have 38 MB of RAM? :D It says 12+RAM, not 12. And note the age of the article. This article was relevant back when people actually ran out of virtual memory (even then, the 12+RAM is a bad benchmark, since you could need a lot more than 12+RAM if your RAM was low). And if you read it, it talks specifically about low memory conditions where Windows is forced to create a page file so that it can continue to satisfy memory requests--basically, when the page file is insufficient to cover the gap between physical and your peak commit charge. If your physical is sufficiently large that there is no gap for the page file to fill (which would be rare back in those days but is feasible today with 4+ GB systems), then this article is not relevant to you.


I knew you'd pick that up ;)

The specific example is not important (even if its the only example)

My point was Minimum 50Meg, and I've said this a hundred times (that may be an exaggeration!)

Link to comment
Share on other sites

My point was Minimum 50Meg, and I've said this a hundred times (that may be an exaggeration!)

But the problem is that this 50MB doesn't make any technical sense. Have you actually tried no page file vs. a 50 MB page file?

And in the case of the Crysis game, where paging out stale regions may mean more physical RAM that can be used to assist the video card, that would be the result of a real page file, not a token 50 MB page file (which I can assure you would be virtually indistinguishable from having no page file at all in terms of performance)

