Advisable to disable paging file?



Windows is a virtual memory OS and WILL use it no matter what you do. Until Windows is changed to a non-virtual-memory-based OS you should leave the virtual memory setting alone, but of course it's your system and you can do what you want with it.

  hdood said:
Keeping it on the boot volume serves one purpose though. It will let you store crash dumps (which are written directly to the sectors of the boot volume where the page file lives).

That's why you create a 50MB page file on the boot drive and a normal one on another drive.

  soldier1st said:
Windows is a virtual memory OS and WILL use it no matter what you do. Until Windows is changed to a non-virtual-memory-based OS you should leave the virtual memory setting alone, but of course it's your system and you can do what you want with it.

That is not quite how it works. Virtual memory simply means that there isn't a direct relation between the actual physical address space and the virtual address space applications are presented with. Each application gets its own private contiguous 32-bit/64-bit virtual address space, and the OS maps addresses in this to physical memory (or other resources) on demand. A side effect of this abstraction is that the OS can map addresses to any resource, not just the system memory. You can tell it to map an MP3 file into the virtual address space and it can be treated as if it were in memory (when it's really being read from disk). The OS can also map addresses into a page/swap file the same way, but this is an additional feature and not a requirement.
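To make the "map a file into the address space" point concrete, here's a minimal Win32 sketch. The file name is just a placeholder and error handling is stripped down; it only illustrates the idea described above.

```c
/* Minimal sketch: map a file into the process's virtual address space and
 * read it as if it were memory. "song.mp3" is a placeholder path. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE file = CreateFileA("song.mp3", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    /* Create a read-only file-mapping object backed by the file itself
     * (not by the page file). */
    HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
    if (!mapping) { CloseHandle(file); return 1; }

    /* Map the whole file into the virtual address space. The bytes are
     * paged in from disk on demand the first time they are touched. */
    const unsigned char *bytes = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
    if (bytes) {
        printf("First byte of the file, read as if it were memory: 0x%02X\n", bytes[0]);
        UnmapViewOfFile(bytes);
    }

    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}
```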

According to the 1.5x rule the system will allocate an 18GB pagefile. This pagefile is written to at shutdown to preserve RAM, system files and configuration files, so on a system crash you will be able to recover with "last known good configuration" and files. Minimizing the pagefile size will prevent the system from saving all the data it needs.

The same rule for preserving data applies to sleep mode / hibernation mode.

The following run fine without a page file (tested for about 1.5 years):

Windows XP Professional SP2/3 x32

Windows XP Home Edition SP2/3

Windows Server 2003 SP2 x32

Windows XP Professional SP2 x64

Windows 7 Ultimate x64

The following will not:

*null*

That is, as long as you have enough RAM; otherwise Windows WILL kill off any application that goes over the limit.

Enjoy more space for your porn.

  ilev said:
This pagefile is written to at shutdown to preserve RAM, system files and configuration files, so on a system crash you will be able to recover with "last known good configuration" and files. Minimizing the pagefile size will prevent the system from saving all the data it needs.

The same rule for preserving data applies to sleep mode / hibernation mode.

The "last known good configuration" data is saved when you log on, not when you log off. I mean, think about it. The point of the feature is to save certain information about the last configuration known to successfully boot. You can't do that by saving anything when the system shuts down, because then you'd potentially be saving the last _bad_ configuration.

The information is also not saved in the page file. Why would it be? The page file is not a centralized database for storing random things. Nothing is saved in the page file except pages*, and they are only valid during the session. The contents of the page file are not, and cannot be, reused after a reboot.

*The exception is if you have crash dumps enabled and you have a page file on the boot volume, in which case Windows will dump crash data to the sectors occupied by the page file (bypassing the file system driver, which is why it cannot just create a new page file for this purpose if you have it disabled). When Windows boots back up, it can detect this and read this data back and write it into a properly formatted crash dump file.

I've been running without a pagefile since Vista SP1 on 8GB of memory. Never had any problems. Quite honestly, I believe I've had fewer problems due to the fact that the memory manager's overhead for virtual memory gets eliminated. Less code to run, less chance for faults. QED

The Windows memory manager is efficient enough to take care that it doesn't make the system slower with paging.

All the pages in RAM are updated when they are accessed, and an LRU/MRU list is maintained with information about how frequently and how recently each page was accessed. It only pages out the pages that have not been recently accessed or are predicted not to be. So the pages that need to be in RAM are not involved in paging, and there is no reason for it to make things slower.

Also, the pages that are paged out have a "dirty" bit that is set if the page is modified. Now, if an unmodified page that has not yet been overwritten in memory by other pages needs to be brought back into RAM, no disk activity is involved and the previously discarded page in memory is simply made active again.

When a program is first started, all the required pages are brought into memory based on the program's working set. Only the pages that are not accessed frequently will be paged out. If plenty of memory is available in the system, the previously paged-out pages will not involve any additional disk activity; they will just be marked active in RAM, so again there is no reason for it to be slower. And if less memory is available, you definitely need a page file, since the current working set is larger than the available RAM.
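To illustrate the dirty-bit point above, here's a toy sketch (not actual Windows internals, just a simplified model): a clean page can be dropped for free when its frame is needed, while a dirty page has to be written to the page file first.

```c
/* Toy model of LRU eviction with a dirty bit: only modified pages cost a
 * disk write; clean pages are simply discarded and their frames reused. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int      page_id;
    bool     dirty;          /* set when the page has been written to      */
    unsigned last_access;    /* pseudo-timestamp of the most recent access */
} Frame;

/* Evict the least recently used frame; only dirty pages require disk I/O. */
static void evict_lru(Frame *frames, int count, unsigned now)
{
    int victim = 0;
    for (int i = 1; i < count; i++)
        if (frames[i].last_access < frames[victim].last_access)
            victim = i;

    if (frames[victim].dirty)
        printf("page %d: dirty, writing to page file before reuse\n",
               frames[victim].page_id);
    else
        printf("page %d: clean, frame reused with no disk activity\n",
               frames[victim].page_id);

    frames[victim].dirty = false;
    frames[victim].last_access = now;
}

int main(void)
{
    Frame frames[] = { {1, false, 10}, {2, true, 3}, {3, false, 7} };
    evict_lru(frames, 3, 42);   /* evicts page 2: dirty, so it is written out */
    return 0;
}
```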

  hdood said:
Of course it is. Why on earth do you believe Windows creates super secret page files on your hard drive even though you've explicitly instructed it not to? What do you base this absurd claim on?

+1

  ilovetech said:
The Windows memory manager is efficient enough to take care that it doesn't make the system slower with paging.

I will not go into detail to explain why this statement flouts every known coding principle; suffice it to say that there's no way that adding an extra layer of disk reads/writes is going to make a memory manager MORE efficient.

  petrossa said:
I will not go into detail to explain why this statement flouts every known coding principle; suffice it to say that there's no way that adding an extra layer of disk reads/writes is going to make a memory manager MORE efficient.

Oh really? Does the word "efficient" just mean more speed? Doesn't it mean using the resources in an efficient manner? The page file allows more programs to be used efficiently and concurrently. Also, regarding speed, just read the previous post once again before commenting on the first line...

Unless we agree on the actual interpretation of efficiency, we can't agree on whether the page file is efficient or not. For me, efficiency is the best possible use of all available resources, and the page file is an important instrument in that.

  hdood said:
It doesn't? I've tried and been unable to measure any difference. I've also watched the page fault statistics, and it virtually never pages to disk anyway.

Of course, this depends on your memory usage patterns. You know that.

Windows will page out pages that haven't been accessed for a long time and that have become dormant, even when there is free memory. The idea behind this is that this is memory that probably won't be accessed again (probably memory that the app has "leaked"), and that this will free up memory for any needs down the road (or even for, say, the file system cache).

Normally, this is a useful optimization. Let's say that you're running an app that has leaked 512MB of memory. The leaked pages will become stale and will be paged out, which minimizes the impact that a leaky application has on your system's physical RAM usage. But this optimization is a heuristic, and as with all heuristics, there are cases when Windows will get it wrong. If you have 8GB of RAM, chances are you don't care if a leaky app is hogging half a GB of RAM (you have plenty to spare!), in which case there is no point in wasting that disk I/O to page out. Worse, it may be that the memory isn't really "leaked", and that the application just has an unusual memory usage pattern in which large numbers of pages will become stale, only to be accessed again later. In this case, paging will definitely hurt your performance.

And there is a very good example of this in Firefox: there are very few actual true memory leaks in Firefox, because Mozilla is pretty vigilant about using various automated tools to detect them. But there are memory "leaks" in the practical sense, through objects that have long lifetimes (but that do get cleaned up during Gecko's shutdown housekeeping and are thus not technically leaks) or through address space fragmentation. For example, I once had a Firefox process that had 1.3GB committed, but it was only using 0.3GB of physical memory, since about 1GB of that was de facto "leaked" and stale (and I was nowhere near exhausting my physical RAM, so Windows was not paging this stuff out due to memory pressure). The problem was that this de facto "leaked" memory was not really "leaked": when you exit Firefox, it starts cleaning all this stuff up, not by chaotically decommitting whole ranges of pages (that wouldn't have been a problem), but by deleting objects and freeing things from the heap piece by piece, which means that Windows has to page much of that stuff back in for this orderly (though ultimately unnecessary) bookkeeping. That's partly why Firefox shutdowns can sometimes take a long time (emphasis on "partly", since one shouldn't pin it all on this).

The point is, IF you are sure that you have enough physical RAM that you can do without a page file, then there are definitely scenarios (the likelihoods of which are dependent on individual usage patterns) where you really can be "smarter than the memory manager". And even if you're not, at worst, you'll have the same performance as before. At best, you can come out ahead in these sorts of scenarios.

Edited by code.kliu.org

I agree, except that some programs explicitly require the PageFile even if you have 8GB of RAM installed (actually I think even Office prefers some PageFile).

I noticed that even with high amounts of RAM installed, a PageFile with a minimum of 50MB will be all that is required (to satisfy these programs; we really need a list, but one of them may be the actual memory required for a BSOD dump).

So during this thread of back and forth debating, we can say that:

1. Yes, disabling the PageFile will disable the PageFile (this part seems obvious, but not to some)

2. The PageFile will be used even before RAM runs out; I'd say always

3. Fully disabling the PageFile will speed up performance, as there is no longer any managing of the PageFile by the processor

4. The PageFile should be given 50Meg (Min and Max) as a minimum (just in case some program needs it)

5. 1.5 X is now old (ie 1.5 x 6Gig is a 9Gig PageFile - absolutely ridiculous; although on 512Meg > ~768Meg PF is ok)

6. The PageFile (minimum 50Meg) can also be allocated on another drive, to improve performance

Are we settled, or are there going to be more comments that the PF can't be disabled (?), or that it needs to be more than 2GB (?), or system managed (this is only required because the operator doesn't know what value to place in it for a specific system)?

So the question is, how do we determine the amount of PageFile our system requires?

Here's a hint (as I have said a few times): start at 50MB Min and Max, and adjust from there.

I personally feel ~1.5 Gig PF would be absolutely Max (for both Min and Max values) on a system with more than 2 Gig of Ram (note: ideally 50Meg)

I wonder what others have given their PageFile, and what works for them?

  kimsland said:
So the question is, how do we determine the amount of PageFile our system requires?

Stress the system (in terms of memory load) for several days (give it time so that you can also account for memory leaks, which can take time to creep up and manifest themselves). Look at the peak commit charge in Task Manager. Now set your page file size such that it bridges the gap between your physical RAM and that peak commit. Because that's what the page file is for: bridging the gap (if any) between your physical RAM and the amount of memory you actually need. In other words, in general, the less physical RAM you have, the more page file you need. The actual amount depends on what you do with your system and what your memory-stressed load looks like. Any advice that says "set it to #% of your memory or just set it to #GB" is mostly bunk.
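If you want to read that gap programmatically instead of eyeballing Task Manager, here's a rough sketch using GetPerformanceInfo from psapi. Note that the peak it reports is only since the last boot, so it's no substitute for the multi-day stress test described above; treat it as an illustration of the arithmetic, not a tool.

```c
/* Sketch of the sizing rule: compare the peak commit charge against physical
 * RAM; the difference (plus a cushion) is roughly what the page file should
 * cover. Assumes MSVC-style linking against psapi. */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>
#pragma comment(lib, "psapi.lib")

int main(void)
{
    PERFORMANCE_INFORMATION pi;
    pi.cb = sizeof(pi);
    if (!GetPerformanceInfo(&pi, sizeof(pi))) return 1;

    /* Counts are in pages; convert to MB. */
    double mb_per_page = pi.PageSize / (1024.0 * 1024.0);
    double commit_peak = pi.CommitPeak    * mb_per_page;
    double physical    = pi.PhysicalTotal * mb_per_page;

    printf("Peak commit charge (since boot): %.0f MB\n", commit_peak);
    printf("Physical RAM                   : %.0f MB\n", physical);
    if (commit_peak > physical)
        printf("Gap to bridge with a page file : ~%.0f MB (plus a cushion)\n",
               commit_peak - physical);
    else
        printf("Peak commit never exceeded physical RAM on this boot.\n");
    return 0;
}
```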

Or, you can just let the system manage it. Because except in the case where you end up disabling the page file (or setting it so small that it is in effect disabled), there is not going to be any appreciable difference between a manually-sized page file and a system-managed one (except that the latter would probably eat up more disk space, but a few GB is nothing these days).

And out of curiosity, what exactly are these programs that go out of their way to snoop around for a page file? Because I've never seen such an abomination. I find it difficult to believe that any program not related to memory management would ever have a need to know about the state of the page file or that any programmer would waste his time and effort to code in such a check unless there was an actual purpose for it.

Hm, actually after you answered that, I thought: oh yeah, I knew that.

It's all about checking performance in Task Manager, and running all the programs that you normally use.

But thanks for the recap (and letting others know, which I should've done !)

As for "which programs"

Hmm I'm sure there were some. I thought even Office, and some games. Could be wrong - maybe nothing needs the pagefile. But I'm sure 50Meg was required for something... I just can't remember. Actually it was the "just in case" part ;)

  kimsland said:
As for "which programs"

Hmm I'm sure there were some. I thought even Office, and some games. Could be wrong - maybe nothing needs the pagefile. But I'm sure 50Meg was required for something... I just can't remember. Actually it was the "just in case" part ;)

First, I can't imagine any need for a program to even know whether a page file exists (unless the program is supposed to create and display a memory report for the user). Second, if a program did know whether a page file exists or even the size and current usage of the page file, what is it going to do with that information? The existence or non-existence of the page file doesn't enable or hinder anything (except for crash dumps, but only the kernel needs that; regular memory dumps don't--and can't--make use of the page file). There is no feature or functionality that is enabled or disabled based on the existence of a page file. Finally, there is nothing that a program can do with the page file. The kernel and only the kernel has access to the page file (imagine the chaos that there would be if programs could directly access the page file!). No program can directly deal with the page file: it's all handled through the kernel's memory manager, and the process doesn't know (or care to know) whether its memory is paged out or not. The most that the process can do is signal to the memory manager to lock certain pages in physical RAM and never page them out or to signal to the memory manager that it's okay to trim its working set and page some things out, if the memory manager wants to.
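For what it's worth, here's what those two signals look like in a minimal Win32 sketch. It's only an illustration of the API surface the post is talking about (pin pages with VirtualLock, or hint that the working set can be trimmed); the buffer size is arbitrary.

```c
/* The only levers a normal process has over paging: lock pages into physical
 * RAM, or hint that its working set may be trimmed. It never touches the
 * page file itself. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T size = 64 * 1024;  /* 64 KB demo buffer */
    void *buf = VirtualAlloc(NULL, size, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (!buf) return 1;

    /* Ask the memory manager to keep these pages resident (never paged out).
     * May fail if it would exceed the process's working-set minimum. */
    if (VirtualLock(buf, size))
        printf("Buffer locked into physical RAM.\n");

    VirtualUnlock(buf, size);

    /* The opposite hint: tell the memory manager it may trim this process's
     * working set and page things out if it wants to. */
    SetProcessWorkingSetSize(GetCurrentProcess(), (SIZE_T)-1, (SIZE_T)-1);

    VirtualFree(buf, 0, MEM_RELEASE);
    return 0;
}
```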

So I find it highly unlikely that any program will fail to run in the absence of a token page file. If there is such a program, then the limitation is a purely arbitrary one (i.e., an idiot programmer who has no understanding of the page file checks for a page file and throws up an error message if none is found, but even that scenario seems a bit far-fetched). So I think it's safe to chalk this one up as an old urban legend; if you don't want a page file, just disable it--there's no need for a 50MB token page file.

Edited by code.kliu.org

I think it was the game Crysis that seemed to use the PageFile, for better performance

But if PageFile was disabled, then it would still run, just not at 100%. Again this could be wrong - untested

Oh and I found this on XP: http://support.microsoft.com/kb/257758/

  Quote
When you set the paging file (Pagefile.sys) on your computer to a size that is lower than the recommended size of 12 megabytes (MB) plus the amount of random access memory (RAM), a temporary paging file (Temppf.sys) may be created

So (again about this) to avoid any "Temppf.sys" file, make the PageFile 50Meg at Minimum

  code.kliu.org said:
Windows will page out pages that haven't been accessed for a long time and that have become dormant, even when there is free memory. The idea behind this is that this is memory that probably won't be accessed again (probably memory that the app has "leaked"), and that this will free up memory for any needs down the road (or even for, say, the file system cache).

I forgot to add in my original post, that in certain situations, this can be quite important.

For example, I have an old machine (without a lot of RAM) that I use as a build server (for compiling code). I noticed one day that my peak memory usage during compilation never exceeded my physical RAM, and that, furthermore, one of the major bottlenecks was disk I/O. So I decided to see if I could do better by disabling paging; I figured that if I forced the memory manager to never page to disk, then I'd get better disk I/O. This was, of course, a mistake, because I had forgotten that Windows has a file system cache that pretty much uses whatever free physical RAM is available (this is not Superfetch; this is the standard file system cache that has been in NT since the very beginning, and since this is a part of the kernel and not a separate service, the memory used for this does not show up in Task Manager or Process Explorer*). By preventing the system from paging out rarely-used regions of memory, I had effectively reduced the amount of memory available to the disk cache. In the end, this ended up hurting my disk I/O performance; it turned out to be far better to page the rarely-accessed pages to disk and then use that freed physical memory to assist in general disk I/O, because the data in the latter was being accessed far more than the data in the former. The moral of the story is that you should leave a generous cushion between your peak memory usage and your total memory (physical + page file), because you'll never know when it'll come in handy.**

(*) On an unrelated note, this is why the toning down of Superfetch on Windows 7 isn't such a bad thing, because an aggressive Superfetch just crowds out this traditional file system cache, and you just end up robbing Peter to pay Paul.

(**) Of course, if you have 6 or 8GB of RAM, then chances are, your cushion is already pretty darn big even with no page file, and this would not apply to you.

  kimsland said:
I think it was the game Crysis that seemed to use the PageFile, for better performance

But if PageFile was disabled, then it would still run, just not at 100%. Again this could be wrong - untested

Let's imagine that this is true. So how exactly would it "use" the page file, since there is no way whatsoever that it can directly access it? If there is a performance boost, then it will probably be the result of the forms of caching that might be crowded out if rarely-used memory can't be paged out (e.g., Windows can hold a memory-mapped file in RAM for longer, or perhaps more system RAM can be used to assist the video card for texture caching).

  Quote
Oh and I found this on XP: http://support.microsoft.com/kb/257758/

So (again about this) to avoid any "Temppf.sys" file, make the PageFile 50Meg at Minimum

So you only have 38 MB of RAM? :D It says 12+RAM, not 12. And note the age of the article. This article was relevant back when people actually ran out of virtual memory (even then, the 12+RAM is a bad benchmark, since you could need a lot more than 12+RAM if your RAM was low). And if you read it, it talks specifically about low memory conditions where Windows is forced to create a page file so that it can continue to satisfy memory requests--basically, when the page file is insufficient to cover the gap between physical and your peak commit charge. If your physical is sufficiently large that there is no gap for the page file to fill (which would be rare back in those days but is feasible today with 4+ GB systems), then this article is not relevant to you.

  kimsland said:
My point was Minimum 50Meg, and I've said this a hundred times (that may be an exaggeration!)

But the problem is that this 50MB doesn't make any technical sense. Have you actually tried no page file vs. a 50 MB page file?

And in the case of the Crysis game, where paging out stale regions may mean more physical RAM that can be used to assist the video card, that would be the result of a real page file, not a token 50 MB page file (which I can assure you would be virtually indistinguishable from having no page file at all in terms of performance)

This topic is now closed to further replies.