
Can you back up your claim of stability/robustness advantages with some citations and specifics? As far as I know, there is nothing in the x64 portions of the ISA that would yield advantages in these areas. The only effective differences between the modes from a practical standpoint are that you have larger/more registers, can address more memory, and have some additional x86_64-specific instructions.

 

Unless you are talking about security advantages in terms of ASLR, I don't see any inherent reason why 64-bit OSes would be at a stability advantage. I've honestly never heard of anyone suggesting that x64 OSes are inherently more stable before.

Take two VMs with the only difference being x64 in one case.  Same application mix.  Push them until they throw up.


Take two VMs with the only difference being x64 in one case.  Same application mix.  Push them until they throw up.

 

Intentionally creating an OoM situation != stability.

 

Properly written code should be able to gracefully handle OoM issues too.
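
For what it's worth, here is a minimal sketch (my own illustration, nothing from any real game) of what "gracefully handling" an allocation failure can look like in C++: catch std::bad_alloc and shrink the request instead of falling over. The 1 GB starting size is an arbitrary assumption.

```cpp
// Minimal sketch of graceful OoM handling: retry with smaller requests instead of crashing.
#include <iostream>
#include <new>
#include <vector>

int main() {
    std::size_t want = std::size_t(1) << 30;   // hypothetical: ask for a 1 GB buffer up front
    std::vector<char> buffer;
    while (want >= (std::size_t(1) << 20)) {   // give up below 1 MB
        try {
            buffer.resize(want);               // throws std::bad_alloc if it can't be satisfied
            break;
        } catch (const std::bad_alloc&) {
            want /= 2;                         // degrade gracefully: halve the request and retry
        }
    }
    std::cout << "Got a buffer of " << buffer.size() << " bytes\n";
}
```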


Intentionally creating an OoM situation != stability.

 

Properly written code should be able to gracefully handle OoM issues too.

Not necessarily - it's more about finding when (not if) they will throw up under load. With all else equal, x64 can be pushed harder than x32 before it throws up - THAT is what I mean by stability (it takes more punishment before it gets sick, not that it never gets sick).

 

Yes - properly-written code should be able to handle it - however, hasn't poorly-written code been a problem in every operating system, and especially in Windows? Poorly-written driver code has been especially problematic when it comes to Windows - it's why I prefer hardware that has successfully completed the x64 testing gauntlet - far less risk of driver-code sloppiness.


Take two VMs with the only difference being x64 in one case.  Same application mix.  Push them until they throw up.

 

That's rather vague - are you talking about a specific usage scenario with specific virtualization software? I've never had issues running either 32-bit or 64-bit operating systems as VMs. Actually, I would say that in Hyper-V I've had better luck with 32-bit Linuxes than 64-bit, so that would go against what you are saying.

 

Intentionally creating an OoM situation != stability.

 

Properly written code should be able to gracefully handle OoM issues too.

 

If that's what he means. I'm not really sure what he means, considering his example is really vague and implies that if you throw both x32 and x64 Windows into VMs, the 32-bit one will eventually crash. I've done plenty of both myself with large memory footprints resulting from disassembler usage and never seen crashes...


Not necessarily - it's more about finding when (not if) they will throw up under load. With all else equal, x64 can be pushed harder than x32 before it throws up - THAT is what I mean by stability (it takes more punishment before it gets sick, not that it never gets sick).

 

Yes - properly-written code should be able to handle it - however, hasn't poorly-written code been a problem in every operating system, and especially in Windows? Poorly-written driver code has been especially problematic when it comes to Windows - it's why I prefer hardware that has successfully completed the x64 testing gauntlet - far less risk of driver-code sloppiness.

 

Are you implying that if you allocate a ton of memory, both will crash? I'm really confused by these vague statements. Or are you implying that 32-bit Windows' address space will fragment more quickly and run slower as a result?

 

You do realize that vendors put their 32-bit drivers through the very same testing gauntlet as the 64-bit ones these days, right? If they didn't, you'd get giant warnings during driver installs...


Not necessarily - it's more about finding when (not if) they will throw up under load. With all else equal, x64 can be pushed harder than x32 before it throws up - THAT is what I mean by stability (it takes more punishment before it gets sick, not that it never gets sick).

 

Yes - properly-written code should be able to handle it - however, hasn't poorly-written code been a problem in every operating system, and especially in Windows? Poorly-written driver code has been especially problematic when it comes to Windows - it's why I prefer hardware that has successfully completed the x64 testing gauntlet - far less risk of driver-code sloppiness.

 

Well, yes - that's stating the obvious. If you're intentionally creating OoM situations, a 64-bit OS with access to >3GB of RAM will naturally "last longer".

 

But as I said in my previous point, that does not equal more innate stability, you're just increasing the RAM pool. In the reverse situation with <3GB of RAM, a 64-bit OS will OoM faster than a 32-bit one.

 

That does not make one more innately stable than the other, you're just mitigating resource depletion.

 

If that's what he means. I'm not really sure what he means, considering his example is really vague and implies that if you throw both x32 and x64 Windows into VMs, the 32-bit one will eventually crash. I've done plenty of both myself with large memory footprints resulting from disassembler usage and never seen crashes...

 

His posts are very much anecdotal in nature, they remind me of the classic XP-era compulsive "tweaker" stereotype. So I think OoM is what he's referencing here, even if he might not realise it himself.


You'll see an OOM error in a 32-bit app much earlier than in a 64-bit app, purely due to virtual address space. A 32-bit app can address at most 4GB in total (2GB by default on 32-bit Windows), while a 64-bit app will quite happily keep going until the page file takes up your entire HDD.

That's got nothing to do with stability though; a 64-bit app will crash just as hard if it hits an OOM situation (since it's extremely hard to recover from that).
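
A rough way to see the difference yourself - my own sketch, and not something you should run on a machine you care about, since the 64-bit build really will chew through the page file - is to allocate fixed-size blocks until new fails. A 32-bit build hits its address-space ceiling long before a 64-bit one runs out of commit.

```cpp
// Keep allocating 64 MB blocks until allocation fails, then report how far we got.
#include <iostream>
#include <new>
#include <vector>

int main() {
    const std::size_t block = 64 * 1024 * 1024;     // 64 MB per allocation
    std::vector<char*> blocks;
    std::size_t total = 0;
    for (;;) {
        char* p = new (std::nothrow) char[block];   // nothrow new: returns nullptr on failure
        if (!p) break;                              // out of address space (32-bit) or commit (64-bit)
        p[0] = 1;                                   // touch the block so it is actually used
        blocks.push_back(p);
        total += block;
    }
    std::cout << "Allocated ~" << (total >> 20) << " MB before the first failure\n";
    for (char* p : blocks) delete[] p;              // clean up
}
```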


Well, there is the point that the 64-bit versions of Windows have significantly higher handle limits, and that can definitely affect the stability of games and other apps directly. But the last I heard about it being a problem was Vista... I honestly don't know enough about it to say much more.


Well, there is the point that the 64-bit versions of Windows have significantly higher handle limits, and that can definitely affect the stability of games and other apps directly. But the last I heard about it being a problem was Vista... I honestly don't know enough about it to say much more.

 

That's again, a matter of mitigating resource depletion.

 

I'm not fond of analogies or other comparisons as most posters here absolutely brutalise them, but I think this one is close enough to summarise the "spirit" of the argument.

 

Slapping a bigger petrol tank on a car doesn't make said car any more or less likely to break down / crash; you're just giving it more fuel to do its job longer with.


The only reason they are asking for an i7 is the same reason they ask for an FX-8350. It has nothing to do with overshadowing the i5... it is purely because of the number of threads. An i5 can be quad-core, but without HTT it cannot handle 8 threads. The FX-8350, which is literally a much higher-clocked version of the processors in both the X1 and PS4, already has 8 threads. Since it is going to be a port, I assure you: coding for 8 threads is nice and fancy, but trying to separate different workloads according to the number of threads available per processor is a nightmare.

 

So bye-bye, older processors that don't have 8+ threads. In the end, the PC receives quite a few ports which come straight from their implementations on the consoles. Admittedly, good developers will find a way to mitigate this... but again, it is a nightmare, especially with the original code always focused on 8 threads... on both main consoles.


I think they mentioned the HT processors purely due to thread count, because HT actually slows down the CPU in most workloads (e.g. an i5-2500K outperformed a higher-clocked i7-2700K with HT).

Unless Watch_Dogs actually has the type of workload that benefits from HT, but I doubt it (HT tends to hurt independent, heavyweight threads and to help when the threads are all doing the same kind of work, like video encoding).


The only reason they are asking for an i7 is the same reason they ask for an FX-8350. It has nothing to do with overshadowing the i5... it is purely because of the number of threads. An i5 can be quad-core, but without HTT it cannot handle 8 threads. The FX-8350, which is literally a much higher-clocked version of the processors in both the X1 and PS4, already has 8 threads. Since it is going to be a port, I assure you: coding for 8 threads is nice and fancy, but trying to separate different workloads according to the number of threads available per processor is a nightmare.

 

So bye-bye, older processors that don't have 8+ threads. In the end, the PC receives quite a few ports which come straight from their implementations on the consoles. Admittedly, good developers will find a way to mitigate this... but again, it is a nightmare, especially with the original code always focused on 8 threads... on both main consoles.

 

:huh: Where are you getting that a quad-core without HT can't handle 8 threads? That's the entire point of preemptive scheduling. You'd quite literally run 2 threads per core if you needed 8 threads. Also, I think you are misunderstanding the point of HT. HT doesn't give you 8 cores or more resources; it gives you the means to better utilize the resources of the 4 cores you do have, i.e. to do a better job of scheduling those 4 cores. So you will never see 2x the performance gains, and in some workloads you may only see a modest increase of 5-15%. It all depends on how well the scheduling is done on a single core.
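
To make the scheduler point concrete, here's a throwaway sketch of mine (nothing to do with any particular game engine): a 4-core CPU without HT will happily run 8 CPU-bound threads, because the OS simply time-slices them across the cores.

```cpp
// Launch 8 CPU-bound threads regardless of core count; the OS preemptively schedules them.
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    std::cout << "Hardware threads reported: " << std::thread::hardware_concurrency() << "\n";
    std::atomic<long long> total{0};
    std::vector<std::thread> workers;
    for (int i = 0; i < 8; ++i) {
        workers.emplace_back([&total] {
            long long local = 0;
            for (int n = 0; n < 50'000'000; ++n)   // busywork to keep a core pegged
                local += n;
            total += local;
        });
    }
    for (auto& t : workers) t.join();              // all 8 complete even on a quad-core
    std::cout << "Checksum: " << total << "\n";
}
```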

 

Take a look at this. Notice how the i7 is not double the performance of the i5 when pegging all cores:

http://cpuboss.com/cpus/Intel-Core-i7-3770K-vs-Intel-Core-i5-3570K

 

 

Here's a fairer comparison of an 8-core AMD vs an i5. Take note: this 8120 is clocked about twice as high as the PS4's processor, yet even with that advantage and 8 cores it still gets worse overall performance when using all cores compared to an i5 with half as many cores:

http://cpuboss.com/cpus/Intel-Core-i5-2500K-vs-AMD-FX-8120

 

Basically, if we were to generalize this, what it says is that you can do essentially the same workload faster and more efficiently using an i5 than you can with a PS4 processor that has 2x the number of cores. Or, looking at this on a single-core basis, you can finish the same work more than 2x faster on a single core of an i5 than on a PS4 core (check the PassMark scores for the AMD I linked and consider that it is clocked 2x faster than a PS4 core).

 

I think that about sums up this "you need 8 cores for future games" thing.


:huh: Where are you getting that a quad-core without HT can't handle 8 threads? That's the entire point of preemptive scheduling. You'd quite literally run 2 threads per core if you needed 8 threads. Also, I think you are misunderstanding the point of HT. HT doesn't give you 8 cores or more resources; it gives you the means to better utilize the resources of the 4 cores you do have, i.e. to do a better job of scheduling those 4 cores. So you will never see 2x the performance gains, and in some workloads you may only see a modest increase of 5-15%. It all depends on how well the scheduling is done on a single core.

 

Take a look at this. Notice how the i7 is not double the performance of the i5 when pegging all cores:

http://cpuboss.com/cpus/Intel-Core-i7-3770K-vs-Intel-Core-i5-3570K

 

 

Here's a fairer comparison of an 8-core AMD vs an i5. Take note: this 8120 is clocked about twice as high as the PS4's processor, yet even with that advantage and 8 cores it still gets worse overall performance when using all cores compared to an i5 with half as many cores:

http://cpuboss.com/cpus/Intel-Core-i5-2500K-vs-AMD-FX-8120

 

Basically, if we were to generalize this, what it says is that you can do essentially the same workload faster and more efficiently using an i5 than you can with a PS4 processor that has 2x the number of cores. Or, looking at this on a single-core basis, you can finish the same work more than 2x faster on a single core of an i5 than on a PS4 core (check the PassMark scores for the AMD I linked and consider that it is clocked 2x faster than a PS4 core).

 

I think that about sums up this "you need 8 cores for future games" thing.

Nobody said that they cannot handle "8 threads", but considering how the OS even detects such threads as logical processors, well... you get the idea. Take foobar for example: it decodes and encodes as many audio files at once as you have "cores". On an i5 it will still decode only 4 files at once, so why hasn't foobar implemented preemptive scheduling? Again, it's not that easy.


Nobody said that they cannot handle "8 threads", but considering how the OS even detects such threads as logical processors, well... you get the idea. Take foobar for example: it decodes and encodes as many audio files at once as you have "cores". On an i5 it will still decode only 4 files at once, so why hasn't foobar implemented preemptive scheduling? Again, it's not that easy.

 

Preemptive scheduling is part of the OS, not the application (and aided by architectural features -- just to be clear). Foobar just decided to do a 1:1 mapping between audio files and cores. They could have chosen to start up 4 more OS threads if they wanted to and then pin the threads to specific cores. It would literally just be the effort of doubling the number of system calls they currently make... not much effort at all. It really is that easy...

 

The reason they didn't do it is that there was no performance benefit to doing it, and there were potential performance hits in terms of cache misses. Why would you process more audio files at once than you have resources to process them with? It wouldn't gain you anything...
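
As a sketch of what that "start up more threads and pin them" idea would look like - this is my own illustration using the Win32 SetThreadAffinityMask call; decode_file is a hypothetical placeholder, not foobar's actual code:

```cpp
// Decode 8 "files" on a quad-core by pinning two worker threads to each core (Win32).
#include <windows.h>
#include <cstdio>
#include <thread>
#include <vector>

void decode_file(int index) {                   // placeholder for decoding one audio file
    std::printf("decoding file %d\n", index);
}

int main() {
    const int cores = 4;                        // assume a quad-core i5 without HT
    const int files = 8;
    std::vector<std::thread> pool;
    for (int i = 0; i < files; ++i) {
        pool.emplace_back([i, cores] {
            // Pin this worker to core (i % cores): two workers share each core.
            SetThreadAffinityMask(GetCurrentThread(), DWORD_PTR(1) << (i % cores));
            decode_file(i);
        });
    }
    for (auto& t : pool) t.join();
}
```

Which, as said above, buys you nothing when the work is CPU-bound: the second worker on each core just waits its turn.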


That's again, a matter of mitigating resource depletion.

Yes, but it is directly related to stability.  IIRC if you run out of handles the game is just going to crash when it needs one.  There's no 'managing' it....boom, done.  Considerably different from how being low on memory is going to go since that has virtual memory to fall back on.

 

I can't actually find the info I'm referring to since it's old news.  Figured it'd be easier to find since it was Brad Wardell talking about it.


Yes, but it is directly related to stability.  IIRC if you run out of handles the game is just going to crash when it needs one.  There's no 'managing' it....boom, done.  Considerably different from how being low on memory is going to go since that has virtual memory to fall back on.

 

I can't actually find the info I'm referring to since it's old news.  Figured it'd be easier to find since it was Brad Wardell talking about it.

 

No no no no no. It's the exact same concept: you have a set pool of a resource, and you have to manage that resource accordingly. If your code is sloppy and does not properly manage whatever resource, you'll encounter issues.

 

Throwing more of a resource at a problem is nothing more than a mitigation, it does not make sloppy code better.


No no no no no. It's the exact same concept: you have a set pool of a resource, and you have to manage that resource accordingly. If your code is sloppy and does not properly manage whatever resource, you'll encounter issues.

 

Throwing more of a resource at a problem is nothing more than a mitigation, it does not make sloppy code better.

 

I think what he is talking about is actually GDI resource leaks -- so it is exactly what you are saying: buggy code that doesn't manage its memory and then causes the system to run out of memory. That used to be a huge issue back in the day. I don't see how x64 would mitigate the issue. It is solely dependent on how much physical memory you have (not the address space size).

 

EDIT: it is also worth mentioning that there is a GDI handle limit, but it is the same size (64K handles per session) for both 32-bit and 64-bit Windows and has nothing to do with memory limits.
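
Just to make the GDI-leak point concrete, a little sketch of mine (build with gdi32 and user32 linked): the loop below leaks brushes by never calling DeleteObject, and GetGuiResources lets you watch the per-process GDI object count climb. The behaviour is the same on 32-bit and 64-bit Windows; the limit isn't tied to address space.

```cpp
// Deliberately leak GDI brushes and report the per-process GDI object count.
#include <windows.h>
#include <cstdio>

int main() {
    for (int i = 0; i < 1000; ++i) {
        HBRUSH brush = CreateSolidBrush(RGB(255, 0, 0));   // allocates a GDI object
        (void)brush;                                       // BUG on purpose: DeleteObject is never called
    }
    DWORD gdiObjects = GetGuiResources(GetCurrentProcess(), GR_GDIOBJECTS);
    std::printf("GDI objects in use: %lu\n", (unsigned long)gdiObjects);
    return 0;
}
```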


People still use GDI? :whistle:

GDI is something you'd use for a desktop app (ideally one that needs to run on old OSes like XP); these days you really shouldn't need to use GDI.

Edit: What I meant to say was that you wouldn't see a game using it. And GDI is just as bad on 64-bit as it is on 32-bit (or 16-bit).


People still use GDI? :whistle:

GDI is something you'd use for a desktop app (ideally one that needs to run on old OSes like XP); these days you really shouldn't need to use GDI.

Edit: What I meant to say was that you wouldn't see a game using it. And GDI is just as bad on 64-bit as it is on 32-bit (or 16-bit).

 

This is all true :laugh:. But it is the closest thing I have heard of regarding handle issues, so I assume that's what he is referring to.


No no no no no. It's the exact same concept: you have a set pool of a resource, and you have to manage that resource accordingly. If your code is sloppy and does not properly manage whatever resource, you'll encounter issues.

 

Throwing more of a resource at a problem is nothing more than a mitigation, it does not make sloppy code better.

It's not the same. You can't guarantee the OS or other applications won't be using the majority of this resource on a 32-bit system. It isn't per-application. There's no virtual fallback.

 

If there were only one application running on every system, sure, making sure your game uses as few as possible might work fine... as long as your game doesn't actually need anything that's cool.

 

And it has nothing to do with GDI.  - http://blogs.technet.com/b/markrussinovich/archive/2009/09/29/3283844.aspx

 

Anyway, I'm done talking about this because as long as I can't find my source article there's nothing more I can add.


It's not the same. You can't guarantee the OS or other applications won't be using the majority of this resource on a 32-bit system. It isn't per-application. There's no virtual fallback.

 

If there were only one application running on every system, sure, making sure your game uses as few as possible might work fine... as long as your game doesn't actually need anything that's cool.

 

And it has nothing to do with GDI.  - http://blogs.technet.com/b/markrussinovich/archive/2009/09/29/3283844.aspx

 

Alas, go back and read my post again (or even the one before it) carefully. I think I explained the concept quite clearly, so I'm not going to repeat myself further.


Alas, go back and read my post again (or even the one before it) carefully. I think I explained the concept quite clearly, so I'm not going to repeat myself further.

I did.  You're not entirely wrong, but you are very good at not actually acknowledging critical technical differences.


I did.  You're not entirely wrong, but you are very good at not actually acknowledging critical technical differences.

 

Which are completely irrelevant in the context of the discussion taking place, hence why I ignored them.


It's not the same. You can't guarantee the OS or other applications won't be using the majority of this resource on a 32-bit system. It isn't per-application. There's no virtual fallback.

 

If there were only one application running on every system, sure, making sure your game uses as few as possible might work fine... as long as your game doesn't actually need anything that's cool.

 

And it has nothing to do with GDI.  - http://blogs.technet.com/b/markrussinovich/archive/2009/09/29/3283844.aspx

 

Anyway, I'm done talking about this because as long as I can't find my source article there's nothing more I can add.

 

The article you linked says you have approximately the same per-process handle limits (i.e. per application) whether on 32-bit or 64-bit Windows. Why would a game ever be allocating 16 million event handles anyway :huh: (or many Windows event handles at all, for that matter -- it is a game and should be making rendering calls, not hammering Windows subsystems)? The article itself even says that you would exhaust your physical resource limits before you reached the handle limit anyway (in the same way GDI handle leaks do).
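
And since the whole argument keeps coming back to "manage your resources", here's a tiny sketch of my own (UniqueHandle is a hypothetical helper, not a Win32 type) showing the difference between leaking event handles and releasing them deterministically:

```cpp
// An RAII wrapper ensures every CreateEvent gets a matching CloseHandle.
#include <windows.h>
#include <cstdio>

struct UniqueHandle {
    HANDLE h = nullptr;
    explicit UniqueHandle(HANDLE handle) : h(handle) {}
    ~UniqueHandle() { if (h) CloseHandle(h); }        // released on every path out of scope
    UniqueHandle(const UniqueHandle&) = delete;
    UniqueHandle& operator=(const UniqueHandle&) = delete;
};

int main() {
    for (int i = 0; i < 100000; ++i) {
        UniqueHandle evt(CreateEventW(nullptr, FALSE, FALSE, nullptr));
        if (!evt.h) {                                 // only happens if the process is already starved
            std::printf("CreateEvent failed at iteration %d\n", i);
            break;
        }
        // ... use the event ...
    }   // the handle is closed at the end of each iteration, so the count never grows;
        // remove the wrapper (and the CloseHandle) and the process handle count just climbs.
    return 0;
}
```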

