Recommended Posts

Why does it need an i7 when an i5 performs almost the same in games?

The GPU list looks okay; however, we all know game system requirements are bull**** and overstated most of the time.

 

Why do you say that?

 

IME, i3 = Atom/Celeron class, i5 = low/mid-range enterprise desktop, i7 = high-end desktop/workstation.

 

Aren't there dual-core i5s with only 3MB of L3? And don't most game comparisons show only a 10% improvement at most, even using the iGPU? Since devs haven't been taking real advantage of the CPU, how do we know, other than by using them?

 

A measurable 10% is huge btw.

 

While I haven't experienced any next-gen games with an i5, I simply couldn't accept i5 performance, or perceived performance, on my personal desktop while having an i7 at the office. IMO, the i7 is the way to go if you want performance across the board.

Link to comment
Share on other sites

Why do you say that?

 

IME, i3 = Atom/Celeron class, i5 = low/mid-range enterprise desktop, i7 = high-end desktop/workstation.

 

<snip>

 

Where do you get that i3 = Atom/Celeron? That's kind of wrong...

 

EDIT: also, look at i5 vs i7 here. No performance differences when operating at real-world settings (i.e. not >>60 FPS): http://www.ocaholic.ch/modules/smartsection/item.php?itemid=1061

Link to comment
Share on other sites

I was referring to the desktop - the push at the SERVER end came with Server/Exchange 2003 - the first x64 iterations of both.

 

I'm quite aware of the lack of x64 add-ins - however, if you don't need such add-ins (for me, the lack of add-ins for Outlook went away due to a change in Hotmail), why NOT crossgrade?

 

And your point is, in fact, one I agree with, and have been trying to make since Vista's launch (which is when I started the upgrades for my x64-ready users). Among ALL the Windows hardware run by all the folks I support, there is exactly NO case of x32 software - other than browsers - being used where an x64 version is available; none at all. If anything, the lack of x64 penetration has a greater impact in browsers than in productivity applications, let alone suites - there is no x64 version of Google Chrome, for example.

 

Just hearing "Exchange 2003" still sends shivers down my spine. The upgrade from 2003 to 2007 and migrating public folders was the last project before I decided architecture and management were where I would spend the rest of my career.

Link to comment
Share on other sites

While I haven't experienced any next-gen games with an i5, I simply couldn't accept i5 performance, or perceived performance, on my personal desktop while having an i7 at the office. IMO, the i7 is the way to go if you want performance across the board.

 

There's a fair amount of crossover in the i5/i7 range. If you're primarily gaming, even an i3 will be fine, but there's no reason why a K-series i5 wouldn't be as good as an i7. They're both quad-cores, and both are more than fast enough for any modern game.

Link to comment
Share on other sites

Where do you get that i3 = Atom/Celeron? That's kind of wrong...

 

EDIT: also, look at i5 vs i7 here. No performance differences when operating at real-world settings (i.e. not >>60 FPS): http://www.ocaholic.ch/modules/smartsection/item.php?itemid=1061

 

lol, that's just my personal feel/use meter. An i3 is of no value to me; I can't wait on it. I have an Atom-based EliteBook tablet that feels faster than a desktop i3.

 

The thing benchmarks miss is the overall experience. You can run a benchmark where both end up at 60fps, but frames are dropped, screens tear, sometimes sound skips, etc.

If you upgrade to an i7, keep an i5 around. Play your most demanding game, then put the i5 back in, all settings the same. If you get the same experience, then I will concede the issue :)
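The "both end up at 60fps average but the experience differs" point can be made concrete with frame-time percentiles - a toy sketch with invented numbers (nothing here is measured from real hardware):

```python
# Two hypothetical runs with identical average frame times (16.7 ms ~= 60 FPS)
# but very different smoothness. All numbers are invented for illustration.
smooth = [16.7] * 100                    # steady ~60 FPS, no spikes
stutter = [12.0] * 95 + [106.0] * 5      # same average, periodic 106 ms hitches

def avg_fps(frame_times_ms):
    """Average FPS as benchmarks usually report it."""
    return 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))

def percentile(frame_times_ms, pct):
    """Nearest-rank percentile of frame time - high values mean visible hitches."""
    ordered = sorted(frame_times_ms)
    return ordered[int(round(pct / 100.0 * (len(ordered) - 1)))]

print(f"smooth : {avg_fps(smooth):.0f} FPS avg, 99th pct {percentile(smooth, 99):.1f} ms")
print(f"stutter: {avg_fps(stutter):.0f} FPS avg, 99th pct {percentile(stutter, 99):.1f} ms")
```

Both runs report ~60 FPS average, but the 99th-percentile frame time (16.7 ms vs 106 ms) exposes the dropped-frame experience described above.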

Link to comment
Share on other sites

There's a fair amount of crossover in the i5/i7 range. If you're primarily gaming, even an i3 will be fine, but there's no reason why a K-series i5 wouldn't be as good as an i7. They're both quad-cores, and both are more than fast enough for any modern game.

 

You're probably right at the high end.

Link to comment
Share on other sites

My PC at work has an i3-2120 at 3.3GHz and it flies; I've had a couple of virtual machines running along with Photoshop and never had any issues. Compared to the i5-2500K at 3.3GHz in my home PC, I've honestly never personally noticed a difference in processing power.

Granted, I don't play games on my work PC; however, I know someone who also has an i3-2120 and he can play Battlefield 3 as well as my i5 can. Maybe now that games are taking advantage of more cores an i3 would be less viable for gaming, but I certainly would not class the i3 as a bad processor at all.

Maybe I'm just not hardcore enough any more to need so much processing power.

Link to comment
Share on other sites

lol, that's just my personal feel/use meter. An i3 is of no value to me; I can't wait on it. I have an Atom-based EliteBook tablet that feels faster than a desktop i3.

 

The thing benchmarks miss is the overall experience. You can run a benchmark where both end up at 60fps, but frames are dropped, screens tear, sometimes sound skips, etc.

If you upgrade to an i7, keep an i5 around. Play your most demanding game, then put the i5 back in, all settings the same. If you get the same experience, then I will concede the issue :)

 

I'm quite sure the i3 isn't the issue in your desktop. It doesn't make sense that the Atom "feels faster" than the i3; the Atom is quite literally going to be worse across the board spec-wise unless you're talking about architectures generations apart. I'd imagine it's the SSD in your tablet vs. the HDD in your desktop that's the real issue, and that you're attributing the problem to the wrong component...

 

Honestly, I have a 2600K and my brother has a 4670K -- guess which one gets better gaming performance in general? It's almost entirely dependent on the GPU we put in the systems... Riddle me this: how is it that the CPU is bottlenecking the FPS below 60fps if the benchmarks I just showed you get hundreds of frames when you turn down the GPU settings? It stands to reason that that shows the CPU isn't a performance issue; otherwise you'd see performance issues on low GPU settings too. A general rule of thumb in computing (and HPC) is that you are only as slow as your bottleneck, and unless you have a CPU-bound game, you aren't going to bottleneck on your CPU. If you look at the XBone and PS4, this is why they can get away with low-powered netbook-class cores. Based on your argument, you'd see stuttering and frame issues on those systems...

Link to comment
Share on other sites

I'm quite sure the i3 isn't the issue in your desktop. It doesn't make sense that the Atom "feels faster" than the i3; the Atom is quite literally going to be worse across the board spec-wise unless you're talking about architectures generations apart. I'd imagine it's the SSD in your tablet vs. the HDD in your desktop that's the real issue, and that you're attributing the problem to the wrong component...

 

 

I personally don't use i3s. I have a couple just laying around, nothing to use them in. I usually build my own, but I bought a Digital Storm Bolt for the SFF quiet case with a full-size video card. Bought it with the lowest-end CPU and replaced it day 1.

 

Generational differences are hard to measure, and the current generation of Atoms are in some of our mini desktops in tight quarters and give Celerons a run for their money clock for clock. The i3 is good for cheap laptops, I guess, but it's been eliminated on our desktops. Most capable multitasking employees can outwork it.

 

When a CPU upgrade doesn't present significant benefit, trust me, we won't pay for it.

 

I will say there are probably many i5 systems that were upgraded to overpower poorly performing x32 software and that will be quite capable workstations as developers begin actually taking advantage of x64, multiple cores, and >4GB RAM.

 

With the i7, there's little reason to upgrade as it relates to CPU performance now; when x64 matures, there may be in the near future. But I believe i7 prices aren't dropping because it will be some time before we see a true CPU performance boost as a reason to upgrade. In fact, my next upgrade will probably be for DDR4 capability, if then.

 

If you don't have the highest-end i5, I do think an i7 is a worthwhile upgrade and you will notice a performance increase across all your computing tasks.

 

Think the CPU doesn't make a difference? Play your most demanding game or app on your i5 or i7, then overclock the CPU.

Link to comment
Share on other sites

Why do you say that?

 

IME, i3 = Atom/Celeron class, i5 = low/mid-range enterprise desktop, i7 = high-end desktop/workstation.

 

Aren't there dual-core i5s with only 3MB of L3? And don't most game comparisons show only a 10% improvement at most, even using the iGPU? Since devs haven't been taking real advantage of the CPU, how do we know, other than by using them?

 

A measurable 10% is huge btw.

 

While I haven't experienced any next-gen games with an i5, I simply couldn't accept i5 performance, or perceived performance, on my personal desktop while having an i7 at the office. IMO, the i7 is the way to go if you want performance across the board.

An i3 isn't anywhere remotely near Atom territory. Not even close.

Link to comment
Share on other sites

My PC at work has an i3-2120 at 3.3GHz and it flies; I've had a couple of virtual machines running along with Photoshop and never had any issues. Compared to the i5-2500K at 3.3GHz in my home PC, I've honestly never personally noticed a difference in processing power.

Granted, I don't play games on my work PC; however, I know someone who also has an i3-2120 and he can play Battlefield 3 as well as my i5 can. Maybe now that games are taking advantage of more cores an i3 would be less viable for gaming, but I certainly would not class the i3 as a bad processor at all.

Maybe I'm just not hardcore enough any more to need so much processing power.

 

I wouldn't classify any of them as "bad." They all have their sweet spot. But given a choice, with money not limiting that choice, the i7 is the clear one IMO. Not sure about the Extremes; I just know they're not worth it based on cost.

 

I do agree with Skin and others on x32 software. It has been written so badly that the processors seem equal. As for Photoshop, if you have memory headroom, I think at that point you will notice general improvement. Of course it depends on the filters you use and how often you use them. Photoshop does take advantage of multiple cores and will benefit from a faster processor with more cores and cache. The best CPU is probably one of the 30MB-cache Xeons (~$4,000), lol. Or an i7 Extreme. It just depends on what the performance is worth to you. For gaming, obviously it's not "that" important.

Link to comment
Share on other sites

Why do you say that?

 

IME, i3 = Atom/Celeron class, i5 = low/mid-range enterprise desktop, i7 = high-end desktop/workstation.

 

Aren't there dual-core i5s with only 3MB of L3? And don't most game comparisons show only a 10% improvement at most, even using the iGPU? Since devs haven't been taking real advantage of the CPU, how do we know, other than by using them?

 

A measurable 10% is huge btw.

 

While I haven't experienced any next-gen games with an i5, I simply couldn't accept i5 performance, or perceived performance, on my personal desktop while having an i7 at the office. IMO, the i7 is the way to go if you want performance across the board.

Dual-core i5s - yes; they are in portable-form-factor PCs (Ultrabooks). However, these same i5s are closer to desktop i3s in that they support HTT (two virtual cores in addition to their two physical cores), as opposed to desktop i5s (four real cores and no HTT support). However, even multicore support remains largely missing - even though that is NOT a requirement for x64 support.

 

Still, where is the support for larger amounts of per-core cache in games (or anything other than installers, for that matter)? Every Intel CPU from Core 2 up has had at least 1024KB of cache per core - yet how many applications, games, etc. - basically, anything other than installers - take advantage of that?

Link to comment
Share on other sites

An i3 isn't anywhere remotely near Atom territory. Not even close.

 

You are correct. I was being somewhat sarcastic but I did use hard equal signs. :/

 

Obviously processors are tiered for a reason, mostly equitably. They're all good for their segment. I just find the notion - from gamers, of all people - that there's no value in CPU upgrades, particularly between i3, i5, and i7, to be poppycock. And I think we've all been waiting for devs to actually utilize some of this desktop power that's been available for some time. It's dangerous to now complain about the use of it.

 

I like PC gaming, and with what I've seen of the x64 Frostbite Engine, things are headed in the right direction :D. If the next Star Wars Battlefront can provide the graphics and performance of BF4 and NFS:Rivals with the gameplay of the first couple Battlefronts, I'll take a week vacation to sit in front of my PC.

Link to comment
Share on other sites

Dual-core i5s - yes; they are in portable-form-factor PCs (Ultrabooks). However, these same i5s are closer to desktop i3s in that they support HTT (two virtual cores in addition to their two physical cores), as opposed to desktop i5s (four real cores and no HTT support). However, even multicore support remains largely missing - even though that is NOT a requirement for x64 support.

 

Still, where is the support for larger amounts of per-core cache in games (or anything other than installers, for that matter)? Every Intel CPU from Core 2 up has had at least 1024KB of cache per core - yet how many applications, games, etc. - basically, anything other than installers - take advantage of that?

 

I'm with you. Baby steps, and right now going x64 and >4GB RAM is a nice start.

 

BTW, these games will sell more new PCs no?

Link to comment
Share on other sites

Because they are purpose-built and optimised systems. It's a mistake I see time and time again: people look at the sum of the parts (i.e. the GFX chip, the CPU and the RAM) and make comparisons to the same (or similar) components plugged into a PC motherboard running a game on Windows.

 

It's not the same: the whole system (PS4/Xbox One) is custom-designed; the motherboard is different, made especially for the chosen chips and for the main purpose (gaming). Then we have the fact that the specs are set in stone, so developers know exactly how the systems will behave and don't have to account for a multitude of CPUs/GFX chips, RAM speeds and motherboards, RAM amounts and video card vRAM amounts, background processes, etc.

 

The game consoles are on paper nowhere near a decent PC build, but they don't need to be.

 

How can this game perform well on the 360 and PS3?

Link to comment
Share on other sites

Obviously processors are tiered for a reason, mostly equitably. They're all good for their segment. I just find the notion - from gamers, of all people - that there's no value in CPU upgrades, particularly between i3, i5, and i7, to be poppycock. And I think we've all been waiting for devs to actually utilize some of this desktop power that's been available for some time. It's dangerous to now complain about the use of it.

If you're a gamer building your own PC, these are the advantages: i3 - cheap; i5 - quad-core, with K-series available; i7 - ...?

 

I don't know of any feature the i7 offers that really makes a difference for gamers. For sheer power, you could just get a K-series i5 and easily beat it.

Link to comment
Share on other sites

I personally don't use i3s. I have a couple just laying around, nothing to use them in. I usually build my own, but I bought a Digital Storm Bolt for the SFF quiet case with a full-size video card. Bought it with the lowest-end CPU and replaced it day 1.

 

Generational differences are hard to measure, and the current generation of Atoms are in some of our mini desktops in tight quarters and give Celerons a run for their money clock for clock. The i3 is good for cheap laptops, I guess, but it's been eliminated on our desktops. Most capable multitasking employees can outwork it.

 

When a CPU upgrade doesn't present significant benefit, trust me, we won't pay for it.

 

I will say there are probably many i5 systems that were upgraded to overpower poorly performing x32 software and that will be quite capable workstations as developers begin actually taking advantage of x64, multiple cores, and >4GB RAM.

 

With the i7, there's little reason to upgrade as it relates to CPU performance now; when x64 matures, there may be in the near future. But I believe i7 prices aren't dropping because it will be some time before we see a true CPU performance boost as a reason to upgrade. In fact, my next upgrade will probably be for DDR4 capability, if then.

 

If you don't have the highest-end i5, I do think an i7 is a worthwhile upgrade and you will notice a performance increase across all your computing tasks.

 

Think the CPU doesn't make a difference? Play your most demanding game or app on your i5 or i7, then overclock the CPU.

 

I ran a Core 2 Duo laptop with 4GB of RAM and an SSD until recently. Contrast that with my 2600K desktop with 16GB of RAM and an SSD. You'd think there would be perceivable differences? But the reality is that there were rarely noticeable differences under normal workloads. The only time I'd notice a difference was when I specifically threw an HPC-related, CPU-bound workload at it.

 

I'm curious: why does everyone think you can get essentially free performance by using the x64 portion of the ISA? You get a guaranteed SSE2 baseline, ABI changes, and more registers to play with. Performance isn't magically better in general, which is why you haven't seen modern browsers flock to create x64 builds. Hell, most compilers have historically done a pretty poor job of vectorizing, so even if you were throwing around a lot of FP operations you probably wouldn't be making great use of the SSE instructions.

 

In reference to OCing i5s and i7s, I would challenge you to go look at benchmarks, for example: http://techreport.com/review/23246/inside-the-second-gaming-performance-with-today-cpus

 

Notice how there is very little variation between i5s and i7s of the same generation and specs? What really matters is the generation more than anything. And this makes sense if you consider what the differences between the i5 and i7 are: HT and a small amount of cache. Games generally aren't CPU-bound, so there's little chance you are going to peg all of your cores, which means HT is not going to do anything.

 

But unfortunately, we are talking about benchmarks where the GPU isn't struggling. When you throw a struggling video card into the mix, the variations between ALL generations shrink for the i7 architectures (perhaps not for the really poor old results listed). I can't seem to find anything on Google that shows this effect at the moment, though. The problem is that whenever they run CPU tests, they tend to throw in a beefy card to remove GPU bottlenecks and highlight the CPU bottlenecks (so you get >>60 FPS). It isn't exactly fair, because it doesn't represent what the mid-range gamer is going to see, and it isn't a real-world scenario for the majority of gamers.
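The "only as slow as your bottleneck" rule can be put in a toy model (all per-frame millisecond costs invented): when CPU and GPU work overlap, frame time is roughly whichever side is slower, so soaring FPS at low GPU settings tells you the CPU was never the limit.

```python
# Toy bottleneck model: with CPU and GPU work overlapped, frame rate is set
# by whichever side is slower. All per-frame costs are invented numbers.
def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_ms = 8.0  # fixed per-frame simulation/draw-call cost
for gpu_ms, label in [(4.0, "low GPU settings"), (25.0, "ultra GPU settings")]:
    bound = "CPU" if cpu_ms > gpu_ms else "GPU"
    print(f"{label}: {fps(cpu_ms, gpu_ms):.0f} FPS ({bound}-bound)")
```

At low settings the same CPU delivers 125 FPS, so when ultra settings drop the rate to 40 FPS it is the GPU, not the CPU, doing the limiting.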

Link to comment
Share on other sites

<snip>

 

Obviously processors are tiered for a reason, mostly equitably. They're all good for their segment. I just find the notion - from gamers, of all people - that there's no value in CPU upgrades, particularly between i3, i5, and i7, to be poppycock. And I think we've all been waiting for devs to actually utilize some of this desktop power that's been available for some time. It's dangerous to now complain about the use of it.

 

<snip>

 

I'm not a gamer. I'm an electrical and computer engineer who works in the field of HPC. Moreover, I don't think anyone argued that there is no value in upgrading from an i3 to an i5 - just from an i5 to an i7 for gaming.

Link to comment
Share on other sites

I'm curious: why does everyone think you can get essentially free performance by using the x64 portion of the ISA? You get a guaranteed SSE2 baseline, ABI changes, and more registers to play with. Performance isn't magically better in general, which is why you haven't seen modern browsers flock to create x64 builds. Hell, most compilers have historically done a pretty poor job of vectorizing, so even if you were throwing around a lot of FP operations you probably wouldn't be making great use of the SSE instructions.

While it doesn't magically give better performance, it does let the app keep more content in RAM without having to release and reload it. That means less strain on the drives at the very least, which makes performance a bit more stable.
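One concrete way to see the release-and-reload point is a toy LRU asset cache (asset sizes and the access pattern are invented for illustration) where a tighter memory budget forces the same assets back to disk over and over:

```python
from collections import OrderedDict

# Toy asset cache: a 32-bit-sized budget forces eviction and re-reads from
# disk, while a larger 64-bit-friendly budget keeps everything resident.
# Asset sizes and the access pattern are invented for illustration.
def count_disk_loads(budget_mb, accesses, asset_size_mb=512):
    cache, loads = OrderedDict(), 0
    for asset in accesses:
        if asset in cache:
            cache.move_to_end(asset)       # cache hit: no disk read
        else:
            loads += 1                      # miss: (re)load from disk
            cache[asset] = asset_size_mb
            while sum(cache.values()) > budget_mb:
                cache.popitem(last=False)   # evict least-recently-used asset
    return loads

pattern = ["a", "b", "c", "d"] * 8          # game cycling through 4 big assets
print("tight budget (1.5GB):", count_disk_loads(1536, pattern), "disk loads")
print("roomy budget (4GB):  ", count_disk_loads(4096, pattern), "disk loads")
```

With the tight budget the cyclic access pattern thrashes (every access is a reload); with the roomy budget each asset is read from disk exactly once.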

 

WRT CoD: Ghosts - sure, it might not need 6GB, but if they have headroom in the spec they can release new game modes or maps that do use it without worrying about it. On the other side of the same coin there's SimCity, where they're having problems expanding the core game the way they'd like to because of the minimum spec.

 

We live in interesting times, and for once that's a good thing. :D

Link to comment
Share on other sites

If you guys are worried about the system requirements for this game, just wait for the knockoff "Observation Canines" to come out next year. It'll run on Windows XP/Linux with 512MB of RAM.

Link to comment
Share on other sites

I'm with you. Baby steps, and right now going x64 and >4GB RAM is a nice start.

 

BTW, these games will sell more new PCs no?

MorganX - it's not even baby steps.

 

Look at the Celeron DC E3xxx (a historical Celeron DC in LGA775); it became an alternative to (shockingly) the Core 2 Duo in the low-end desktop space because it used less power and cost less. It even fit into the same motherboards - what's more, when the Great Quad-Core Fire Sale happened, you had a ready-made drop-in upgrade. When the Q6xxx went from current to legacy, I was able to go directly from cut-down Wolfdale (which E3xxx is) to full-size dual-dual Conroe (which Kentsfield is) and change nothing else. And considering how awful my personal economic situation was, being able to do so made a LOT of sense. I had already started the OS and application changeover (from x32 to x64), because all Celeron DCs - back to E1xxx - supported x64, as did the Core 2 Duos they were based on. (The one difference that mattered to me - that C2D supported VT-x but Celeron DC did not - was, in fact, fixed with E3xxx, the cut-down Wolfdale - it was why I upgraded, in fact.)

I've been able to stall and stall and stall because nothing forced me NOT to stall (in terms of software) - surprisingly, that has included Windows 8, and now 8.1; the ONLY feature that 8.1 has that 7 lacks - Hyper-V - is also the only feature that absolutely positively requires a CPU socket change. Even that feature is an outlier - a want, not a need. I can still run desktop virtualization software (the same type I used with Windows 7 - Oracle VirtualBox or VMware Player/Workstation) - Windows 8 itself did not require me to move or change, and neither does Windows 8.1. (And that one requirement CAN be end-run simply by dual-booting with a Windows server OS - in my case, Windows Server 2012 R2 - but still, it's an end-run on the cheap. Nothing says that it's a need - even I'm not saying that it's a need.)

Every hardware upgrade I've done since I got this motherboard has been on the cheap, and has been want-driven, not need-driven. None have been expensive, because I'm sensible enough not to overspend. Cheap, cheap, cheap.

 

However, I don't plan, or expect, to be able to get away with staying in the cheap seats forever - and gaming is likely the only category of things I do that would get me out of said seats. Productivity software - not even productivity suites - certainly hasn't done so; operating systems haven't, either. (I am ALREADY running x64 as the default in the productivity and OS categories, and even in the web-browser category; while Chrome in x32 is my most-used browser, it isn't my default - IE11 is.) However, that continues to make me an outlier, even among gamers - though they have not dared admit it, the insistence on keeping DX10, if not DX9c, shows that even gamers are loath to spend money, at least when it comes to GPU-based technology. They will spend a boatload anywhere else, and everywhere else; however, they have consistently stalled on GPU technology. Worse, there's far less reason to continue to stall on GPU technology, as it has fallen in price faster than any other sort of PC technology. Despite going from an HD5450 (AMD) to a GTX550Ti (nVidia) on my desktop, and seeing the bus width triple (from 64-bit to 192-bit), this was, at best, a crossgrade.

 

Crossgrade?  Yes - crossgrade.

 

I'm GPU-bottlenecked in far fewer games - however, I'm display-bottlenecked in quite a few games today, and no cure for that is on the immediate horizon. Pretty much the only reason I WOULD need to upgrade my GPU in the immediate future is to add support for features I currently have no use for (higher levels of AA, for example). I'm not going to have a larger display anytime soon (lack of space on the desk for one), and removing the GPU bottleneck in the games I'm GPU-bottlenecked in would still leave me display-bottlenecked in those same games. (Never mind that removing that GPU bottleneck can still be done relatively cheaply - a GTX660, even non-Ti, would be enough - getting rid of the display bottleneck would have to happen before addressing the GPU bottleneck makes any sense; as long as I have a 1920x1080 display ceiling, I will be display-bottlenecked for quite a stretch.)

Link to comment
Share on other sites

While it doesn't magically give better performance, it does let the app keep more content in RAM without having to release and reload it. That means less strain on the drives at the very least, which makes performance a bit more stable.

 

WRT CoD: Ghosts - sure, it might not need 6GB, but if they have headroom in the spec they can release new game modes or maps that do use it without worrying about it. On the other side of the same coin there's SimCity, where they're having problems expanding the core game the way they'd like to because of the minimum spec.

 

We live in interesting times, and for once that's a good thing. :D

 

Whoops - I had intended to preface that with "other than the 64-bit address range". Completely forgot to in the end.

Link to comment
Share on other sites

MorganX - it's not even baby steps.

 

 

I'm GPU-bottlenecked in far fewer games - however, I'm display-bottlenecked in quite a few games today, and no cure for that is on the immediate horizon. Pretty much the only reason I WOULD need to upgrade my GPU in the immediate future is to add support for features I currently have no use for (higher levels of AA, for example). I'm not going to have a larger display anytime soon (lack of space on the desk for one), and removing the GPU bottleneck in the games I'm GPU-bottlenecked in would still leave me display-bottlenecked in those same games. (Never mind that removing that GPU bottleneck can still be done relatively cheaply - a GTX660, even non-Ti, would be enough - getting rid of the display bottleneck would have to happen before addressing the GPU bottleneck makes any sense; as long as I have a 1920x1080 display ceiling, I will be display-bottlenecked for quite a stretch.)

 

My last GPU upgrade was a 650Ti to a 760, and based on current performance in BF4, NFS, and Warframe, I don't think I'll upgrade again. I agree on productivity: I don't even need to go to CS7, even with Photoshop's highly optimized x64 engine. It would be as an enthusiast only, and I'm not that enthusiastic about it.

 

There won't be much need to upgrade based on OS or productivity software for years. MS has a lot of other things to fix before they worry about taking advantage of high end desktops.

 

The ball is in the devs court now.

Link to comment
Share on other sites

Whoops - I had intended to preface that with "other than the 64-bit address range". Completely forgot to in the end.

 

I'll leave this one to the programmers, but here are a few interesting tidbits on the notion of x64 and performance:

 

http://www.viva64.com/en/k/0003/

 

 

General notes from MS, server related but probably generally applicable:

  • A 64-bit architecture provides more and wider general-purpose registers, which contribute to greater overall application speed. When there are more registers, there is less need to write persistent data to memory and then have to read it back just a few instructions later. Function calls are also faster in a 64-bit environment because as many as four arguments at a time can be passed in registers to a function.
  • Poor performance in 32-bit systems is often not the result of a lack of available memory, but of the unavailability of large enough blocks of contiguous memory. In a typical Windows SharePoint Services 3.0 deployment, Windows, Internet Information Services (IIS), the common language runtime (CLR), ASP.NET, SharePoint Products and Technologies, SSPs, and MDACs can all claim a portion of a server's available virtual memory and can leave a 32-bit address space quite fragmented. When the CLR or SharePoint services request new memory blocks, it can be difficult to find a 64-MB segment in the crowded 32-bit address space. A 64-bit system offers practically unlimited address space for user-mode processes.

  • Even 32-bit applications can benefit from increased virtual memory address space when they are running in a 64-bit environment. For example, although a 32-bit application is still restricted to 4 GB of virtual memory, it no longer has to share that memory space with the operating system. As a result, it receives an effective increase in available virtual memory.
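A quick back-of-the-envelope on the address-space numbers behind those bullets (the 48-bit figure is the x86-64 canonical address width, which is background knowledge rather than something from the quoted notes):

```python
# Address-space arithmetic behind the 32-bit vs 64-bit bullets above.
GIB = 1 << 30

ceiling_32 = (1 << 32) // GIB       # total 32-bit virtual address space
print(f"32-bit address ceiling:         {ceiling_32} GiB")

# By default a 32-bit Windows process shares that space with the OS and
# gets only half of it for user mode - hence the fragmentation pressure
# the notes describe.
user_mode_default = ceiling_32 // 2
print(f"typical 32-bit user-mode share: {user_mode_default} GiB")

# x86-64 implementations expose 48-bit canonical virtual addresses.
print(f"48-bit canonical address space: {(1 << 48) // (1 << 40)} TiB")
```

Running it prints a 4 GiB ceiling, a 2 GiB default user-mode share, and a 256 TiB canonical 64-bit space - which is why "practically unlimited" is a fair description for user-mode processes.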

Link to comment
Share on other sites

This topic is now closed to further replies.