PS4 and Xbox One resolution / frame rate discussion



The PS4's 256 MB of DDR3 is for the standby and background download functions.

The point was that it has separate bandwidth available to avoid memory contention. It doesn't have to do anything directly graphics-related to reduce contention. They have shared memory, so everything it does that the main system memory doesn't have to do reduces that memory contention, graphics-related or not.

For example, I believe some of the functions that memory is used for have to do with voice communication, the game DVR, and compressing/decompressing assets as they're loaded from disk. The PS4 can therefore do these things for "free", since it's using a separate memory bus, while the Xbox One would have to do them in main system RAM, cutting into the bandwidth available for other things (like graphics).

Everything is connected; just because something doesn't directly do graphics doesn't mean it doesn't affect graphics performance.
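
To put rough numbers on that contention argument, here's a back-of-the-envelope sketch (the background-task figure is purely my assumption, not a measurement):

```python
# Back-of-the-envelope contention math (the background figure is an
# assumption, not a measurement). If background tasks run on a separate
# DDR3 bus, their traffic never comes out of the GPU/CPU's shared pool.

UNIFIED_POOL_GBPS = 176.0  # PS4 GDDR5 theoretical peak
BACKGROUND_GBPS = 2.0      # hypothetical: voice chat, game DVR, decompression

shared_left = UNIFIED_POOL_GBPS - BACKGROUND_GBPS  # background on main bus
side_bus_left = UNIFIED_POOL_GBPS                  # background on side bus

print(f"Left for the game, background on main bus: {shared_left:.1f} GB/s")
print(f"Left for the game, background on side bus: {side_bus_left:.1f} GB/s")
```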

Second, the XB1 has 8 GB of DDR3 for the CPU and 32 MB of ESRAM for the GPU. This was intentional, to AVOID the memory contention observed on the PS4 from sharing the same pool of memory between GPU and CPU. The intent is that the GPU should never access the DDR3 directly.

There is no way the GPU can do all of its graphics-related work in 32 MB of RAM such that it never has to access the DDR3 directly. If that were true, PC GPUs would never have needed to go to 1 GB, let alone beyond. GPU functions require a lot of RAM, and 32 MB isn't even remotely close to enough to handle ALL of them.
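
For a sense of scale, here's the standard texture-size arithmetic (the 2048x2048 texture is just an illustrative example):

```python
# Why 32 MB can't hold "all GPU functions": a single uncompressed 2048x2048
# RGBA8 texture with a full mip chain already eats ~21.3 MB of the 32 MB.

def texture_bytes(width, height, bytes_per_texel=4, mips=True):
    total = 0
    while width >= 1 and height >= 1:
        total += width * height * bytes_per_texel
        if not mips:
            break
        width //= 2
        height //= 2
    return total

size_mb = texture_bytes(2048, 2048) / (1024 ** 2)
print(f"2048x2048 RGBA8 + mips: {size_mb:.1f} MB of a 32 MB budget")
```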

You might say 32 MB is not enough memory for 1080p, but that's not true. The primary reasons for low resolutions are a poor ESRAM API and poor profiling tools. This is getting fixed with a new feature within PIX and a new ESRAM API with DirectX 12. Both of these should help future games, although that will depend on the rendering engines. I am sure engines such as Unreal Engine 4, Unity 5, and CryEngine 3 will adopt these features quickly.

It is true. Furthermore, future API changes have nothing to do with current and previously released games, which is what we're talking about.

I find it hard to believe that all of it can be attributed to "lazy developers". There has to be a deeper issue at play here.

 

Whether it's a poor API, a poor SDK, or a hardware issue of some sort remains to be seen.

I think you're lumping me in with other posters here. I did not say anything about "lazy developers". I honestly have no idea what the issue is. It may even be a different reason for every game that has it. My point was just that vcfan claimed "its a known fact that the xbox one has much more memory bandwidth than the ps4", and that's NOT "a known fact". The professional gaming press doesn't seem to know what the issue with AF is in particular, and on bandwidth in general I'll post that last sentence from Digital Foundry again:

"the Sony console has more bandwidth than its Microsoft counterpart and should be able to handle the job just as well, if not better."

That doesn't seem to jibe with vcfan's "known fact", now does it?


lol, ok so what are you doing in this thread exactly then?

Well it's certainly NOT to just believe every absurd claim by any random person on the message board.

I commented on this particular occasion because you stated:

"its a known fact that the xbox one has much more memory bandwidth than the ps4."

and it's absolutely NOT "a known fact".

In fact it's directly contradicted by DF's:

"the Sony console has more bandwidth than its Microsoft counterpart and should be able to handle the job just as well, if not better."

If it's such a "known fact", why would DF say that? I can bring up more articles from the professional gaming press that say the PS4 has more bandwidth as well, if you like.

At best it's a disputed fact, and maybe you can even out-debate me on the topic, but who am I? I'm just another random person on the internet, and I'm not making ANY claim about KNOWING what the issue with AF is. I'm saying I don't know, the gaming press doesn't seem to know, and that directly contradicts your "known fact" claim.

If it comes down to believing you or believing Digital Foundry and other professional gaming press sites, I'm going to have to go with Digital Foundry. It's not an issue of what I say vs. what you say; it's an issue of what you say vs. pretty much everyone else in the gaming press. That's an easy decision for me on who to believe.


snip

from your beloved DF

 

Microsoft's argument seems pretty straightforward then. In theory, Xbox One's circa 200GB/s of "real-life" bandwidth trumps PS4's 176GB/s peak throughput. The question is just to what extent channelling resources through the relatively tiny 32MB of the much faster ESRAM is going to cause issues for developers. Microsoft's point is that game-makers have experience of this already owing to the eDRAM set-up on Xbox 360 - and ESRAM is the natural evolution of the same system.

Baker is keen to tackle the misconception that the team has created a design that cannot access its ESRAM and DDR3 memory pools simultaneously. Critics say that they're adding the available bandwidths together to inflate their figures and that this simply isn't possible in a real-life scenario.

"You can think of the ESRAM and the DDR3 as making up eight total memory controllers, so there are four external memory controllers (which are 64-bit) which go to the DDR3 and then there are four internal memory controllers that are 256-bit that go to the ESRAM. These are all connected via a crossbar and so in fact it will be true that you can go directly, simultaneously to DRAM and ESRAM," he explains.

http://www.eurogamer.net/articles/digitalfoundry-vs-the-xbox-one-architects
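
For reference, the headline figures in that quote can be reproduced from the published clocks and bus widths (a sketch; the 7/8 duty factor for combined ESRAM read+write reflects Microsoft's own caveat that it can't happen on every cycle):

```python
# Reproducing the headline bandwidth figures from published clocks/bus widths.
# (Sketch; the 7/8 read+write duty factor for ESRAM follows Microsoft's own
# caveat that combined read/write can't issue on every cycle.)

def bandwidth_gbps(transfers_mts, bus_bits):
    """Peak bandwidth from transfer rate (MT/s) and bus width (bits)."""
    return transfers_mts * 1e6 * (bus_bits / 8) / 1e9

ddr3 = bandwidth_gbps(2133, 256)            # Xbox One DDR3-2133, 256-bit
esram_one_way = bandwidth_gbps(853, 1024)   # ESRAM, 4x256-bit @ 853 MHz
esram_peak = esram_one_way * (1 + 7 / 8)    # simultaneous read+write, 7/8 duty
gddr5 = bandwidth_gbps(5500, 256)           # PS4 GDDR5-5500, 256-bit

print(f"XB1 DDR3:       {ddr3:6.1f} GB/s")        # ~68.3
print(f"XB1 ESRAM peak: {esram_peak:6.1f} GB/s")  # ~204.7
print(f"PS4 GDDR5:      {gddr5:6.1f} GB/s")       # ~176.0
```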


 

That's great, except I'm not disputing there is a separate memory bus for the ESRAM. AT NO TIME did I say there wasn't one. Furthermore, as you point out, DF KNOWS there is a separate bus, and yet they still said the PS4 has more bandwidth and they still don't know what the deal with the AF is. Your statement that it's "a known fact" is still clearly false.


I don't even understand why this bandwidth discussion has to happen without solid proof. That's totally puzzling, especially on a website visited by tech enthusiasts.

 

Ten-year-old PCs equipped with 1-2 GB of DDR2 and antiquated GPUs with 256-512 MB of GDDR3 were able to push 16x aniso at 1600x1200 in games like Half-Life 2, F.E.A.R., and Oblivion with reasonable framerates (between 50 and 60). And the texture filtering looked better in those games running on those systems than in the screenshots of DMC on PS4 posted by vcfan.

 

In fact, I'm just looking at a review of the 8800 GTS 640MB (GDDR3) paired with a Core 2 Extreme X6800 and 2 GB of DDR2, and it was able to run Oblivion at 1920x1080 with 16x aniso at 40 fps.

 

Now, I don't have the time to check whether such a system has significantly more memory bandwidth than the PS4's shared GDDR5, but I highly doubt it does.
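
It's easy enough to check from the published specs (the 8800 GTS 640MB shipped with a 320-bit bus and GDDR3 at 1600 MT/s effective):

```python
# Checking that hunch with published specs: the 8800 GTS 640MB had a
# 320-bit bus with GDDR3 at 1600 MT/s effective.

def bandwidth_gbps(transfers_mts, bus_bits):
    return transfers_mts * 1e6 * (bus_bits / 8) / 1e9

gts_8800 = bandwidth_gbps(1600, 320)   # ~64 GB/s
ps4 = bandwidth_gbps(5500, 256)        # ~176 GB/s

print(f"8800 GTS 640MB: {gts_8800:.0f} GB/s")
print(f"PS4 GDDR5:      {ps4:.0f} GB/s ({ps4 / gts_8800:.1f}x the 8800 GTS)")
```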

 

I don't know what the problem is, but I would be extremely surprised to learn from a bunch of impartial devs that it is bandwidth-related.


First off, when you guys bring up the bandwidth argument I always see the theoretical numbers being posted, and that is NOT what you get in the real world. As much as you like to toss out that 176 GB/s number for the PS4, that's just not what developers get in practice. It's also a fact that the PS4 passes data to the CPU and GPU through the same bus. Doing that takes bandwidth away from the GPU, because you've just split some off for the CPU. This all comes into play in what your final bandwidth ends up being.

 

MS splits and shares things better, if in a slightly trickier way, which just means developers have to work things out more. With API, SDK, and future DX12 improvements, they'll have an easier time, I expect.

 

As far as the PC goes, it's also split: the CPU doesn't see or use your video card's memory. The 1 or 2 GB it has is all its own. The CPU still uses system RAM to get things going, so at any time you play a game the main traffic is the CPU sending data over PCIe to the GPU. System RAM is never really a bottleneck for gaming; it's always been the CPU itself, followed by the speed of the bus it uses to talk to the GPU.
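
For comparison, the published PCIe figures show how narrow that CPU-to-GPU link is next to local video memory:

```python
# The CPU->GPU link mentioned above is narrow compared with local VRAM.
# PCIe x16 per-generation rates are published figures.

PCIE_X16_GBPS = {"PCIe 2.0 x16": 8.0, "PCIe 3.0 x16": 15.75}
VRAM_EXAMPLE_GBPS = 176.0   # e.g. PS4-class GDDR5, for scale

for link, gbps in PCIE_X16_GBPS.items():
    ratio = VRAM_EXAMPLE_GBPS / gbps
    print(f"{link}: {gbps:5.2f} GB/s; 176 GB/s VRAM is {ratio:.0f}x that")
```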


I think you're lumping me in with other posters here. I did not say anything about "lazy developers". I honestly have no idea what the issue is. It may even be a different reason for every game that has it. My point was just that vcfan claimed "its a known fact that the xbox one has much more memory bandwidth than the ps4", and that's NOT "a known fact". The professional gaming press doesn't seem to know what the issue with AF is in particular, and on bandwidth in general I'll post that last sentence from Digital Foundry again:

"the Sony console has more bandwidth than its Microsoft counterpart and should be able to handle the job just as well, if not better."

That doesn't seem to jibe with vcfan's "known fact", now does it?

 

Sorry - didn't mean to imply that it was your point of view. It just seems to be the general consensus around the internet that it can be blamed on the developers.


That's great, except I'm not disputing there is a separate memory bus for the ESRAM. AT NO TIME did I say there wasn't one. Furthermore, as you point out, DF KNOWS there is a separate bus, and yet they still said the PS4 has more bandwidth and they still don't know what the deal with the AF is. Your statement that it's "a known fact" is still clearly false.

 

This is fact, unless you now think the Xbox One architect is lying. 200 > 100-130.

 

"And then if you say what can you achieve out of an application - we've measured about 140-150GB/s for ESRAM."

"That's real code running. That's not some diagnostic or some simulation case or something like that. That is real code that is running at that bandwidth. You can add that to the external memory and say that that probably achieves in similar conditions 50-55GB/s and add those two together you're getting in the order of 200GB/s across the main memory and internally."

 

 

 

snip

 

That's like saying "I could run a game at 1080p 10 years ago, so why is this game 900p on next-gen consoles?" Games are more complicated today: more effects, lighting, alpha. Bandwidth usage is much crazier.


First off, when you guys bring up the bandwidth argument I always see the theoretical numbers being posted, and that is NOT what you get in the real world. As much as you like to toss out that 176 GB/s number for the PS4, that's just not what developers get in practice. It's also a fact that the PS4 passes data to the CPU and GPU through the same bus. Doing that takes bandwidth away from the GPU, because you've just split some off for the CPU. This all comes into play in what your final bandwidth ends up being.

 

MS splits and shares things better, if in a slightly trickier way, which just means developers have to work things out more. With API, SDK, and future DX12 improvements, they'll have an easier time, I expect.

 

As far as the PC goes, it's also split: the CPU doesn't see or use your video card's memory. The 1 or 2 GB it has is all its own. The CPU still uses system RAM to get things going, so at any time you play a game the main traffic is the CPU sending data over PCIe to the GPU. System RAM is never really a bottleneck for gaming; it's always been the CPU itself, followed by the speed of the bus it uses to talk to the GPU.

 

Doesn't change the fact that old, antiquated GPUs with 512 MB of GDDR3 were able to push 16x aniso without too much trouble at 1680x1050 or 1600x1200, which is not that far from 1920x1080. That's a 15% difference in pixel count for 1680x1050 and even less for 1600x1200. Again, I would be extremely surprised if the problem is bandwidth (hardware) related.
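
The pixel-count arithmetic checks out:

```python
# Verifying the pixel-count comparison above.
resolutions = {"1920x1080": 1920 * 1080,
               "1680x1050": 1680 * 1050,
               "1600x1200": 1600 * 1200}

full_hd = resolutions["1920x1080"]
for name, pixels in resolutions.items():
    deficit = (full_hd - pixels) / full_hd * 100
    print(f"{name}: {pixels:>9,} pixels ({deficit:4.1f}% below 1080p)")
```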


Doesn't change the fact that old, antiquated GPUs with 512 MB of GDDR3 were able to push 16x aniso without too much trouble at 1680x1050 or 1600x1200, which is not that far from 1920x1080. That's a 15% difference in pixel count for 1680x1050 and even less for 1600x1200. Again, I would be extremely surprised if the problem is bandwidth (hardware) related.

Because, like with the ESRAM in the X1, those GPUs didn't have to share the bus with the CPU, so they didn't care how many times they had to load the textures to filter. If you look at FPS performance with AF on PC APUs, there's a large performance hit.


This is fact, unless you now think the Xbox One architect is lying. 200 > 100-130.

It's also a fact that the ESRAM is only 32 MB. Having the VAST MAJORITY of your RAM at less than half the speed of your competitor's and then throwing on a tiny amount that is faster doesn't make your overall memory bandwidth beat the competition.

You act as if, because both can work at the same time, you can just add them together for total memory bandwidth, but you can't, because very little can fit in the ESRAM.

Yes, the ESRAM makes things faster than if it weren't there. Yes, if you can keep both reads and writes filled, that tiny 32 MB is faster than the PS4's memory. But that little boost doesn't close the gap and overcome the fact that the PS4's memory is faster than the vast majority of the memory in the Xbox One.

With respect to AF specifically, it requires the textures to work, and the textures can't fit in the ESRAM. So sure, you can put the frame buffer in the ESRAM, but every time the AF does its texture lookups it has to go to the slower main memory, so it can't just do AF for free in the ESRAM on the side while the rest of the game is running.

People here keep acting like the ESRAM is this great graphics RAM the GPU can just use without having to touch the DDR3, with the DDR3 used only by the OS and CPU, but that's not true. That's more like how the PS3 was with its non-unified memory; the Xbox 360, Xbox One, and PS4 don't work that way.

The ESRAM is a TINY amount of RAM that is fine for certain small tasks (such as holding the frame buffer), but it's too small for most graphics tasks, let alone ALL of them. It adds a nice little boost but in no way makes the overall memory bandwidth of the Xbox One superior to the PS4's.
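
Some quick render-target arithmetic to back that up (standard math; the formats are typical choices, not confirmed for any particular game):

```python
# What fits in 32 MB of ESRAM at 1080p (typical formats; actual games vary).

def target_mb(width, height, bytes_per_pixel, samples=1):
    return width * height * bytes_per_pixel * samples / (1024 ** 2)

color = target_mb(1920, 1080, 4)             # RGBA8 color buffer
depth = target_mb(1920, 1080, 4)             # 32-bit depth/stencil
color_4xmsaa = target_mb(1920, 1080, 4, 4)   # 4xMSAA color alone

print(f"1080p color:        {color:5.1f} MB")
print(f"1080p depth:        {depth:5.1f} MB")
print(f"1080p 4xMSAA color: {color_4xmsaa:5.1f} MB (nearly the whole 32 MB)")
# A full texture set, by contrast, runs to hundreds of MB and lives in DDR3.
```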

Furthermore, it's not even clear that bandwidth IS the issue with AF anyway. Again, I'm not saying I know what is or isn't the issue. All I'm saying is that it seems to be a mystery not just to me but to the gaming press in general (for which I used Digital Foundry as an example, but if you have an issue with them I could use others), and thus your claim that "its a known fact that the xbox one has much more memory bandwidth than the ps4" is false.


snip

There's a reason it's 32 MB: it's meant to hold only the bandwidth-intensive data, such as render targets and shadow maps, that have a lot of overdraw. If a big texture gets loaded into DDR3 and remains untouched until it is copied to the framebuffer, having more bandwidth doesn't change a thing. There will be minor performance differences.

The initial loading of the texture does need bandwidth for the texture filtering, yes, BUT as has been stated, loading a texture can happen simultaneously while the GPU shaders are actually working in the ESRAM, without affecting the shaders' work, making it essentially free because it's on a different bus and happens in the background. Same as if the CPU needs to transfer some data over DDR3: the shaders keep going at the same rate without noticing a thing.
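
A rough sketch of the placement strategy being described (the classification and sizes are hypothetical, for illustration only):

```python
# Sketch of the ESRAM placement strategy described above: put the
# bandwidth-hungry, frequently rewritten surfaces in ESRAM and leave
# read-mostly data in DDR3. (Hypothetical classification for illustration.)

BANDWIDTH_INTENSIVE = {"render_target", "depth_buffer", "shadow_map"}

def place(resource_kind, size_mb, esram_free_mb):
    """Return (pool, remaining ESRAM MB) for one resource."""
    if resource_kind in BANDWIDTH_INTENSIVE and size_mb <= esram_free_mb:
        return "ESRAM", esram_free_mb - size_mb
    return "DDR3", esram_free_mb   # textures etc. stream from the big pool

free = 32.0
for kind, mb in [("render_target", 7.9), ("depth_buffer", 7.9),
                 ("shadow_map", 8.0), ("texture_atlas", 256.0)]:
    pool, free = place(kind, mb, free)
    print(f"{kind:14s} {mb:6.1f} MB -> {pool}  (ESRAM free: {free:.1f} MB)")
```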

Doesn't change the fact that old, antiquated GPUs with 512 MB of GDDR3 were able to push 16x aniso without too much trouble at 1680x1050 or 1600x1200, which is not that far from 1920x1080. That's a 15% difference in pixel count for 1680x1050 and even less for 1600x1200. Again, I would be extremely surprised if the problem is bandwidth (hardware) related.

So why doesn't every single Xbox One and PS4 game use 16x AF? Because people keep forgetting to toggle a register?


So why doesn't every single Xbox One and PS4 game use 16x AF? Because people keep forgetting to toggle a register?

 

Dunno. I'm still totally puzzled by the fact that the new-gen consoles can't do 1920x1080, 16x aniso, and 4x MSAA at a framerate of around 50 fps, without tearing, in all games that aren't graphically groundbreaking.


This new generation is depressing. Such a disappointment so far. There will be good games, of course, but you can find good games on just about every system. From a hardware technical perspective, this is so far the most disappointing generation ever.

 

If I were Nintendo, I would be preparing to cut the grass out from under Sony's and MS's feet in 2017.


While this gen isn't the best hardware-wise, it's not like the goal is something high; 1080p and 60 fps should become more common on both systems as the code gets better and the developers do too.

 

Anyway, Sony may not want to release as soon, but I think Nintendo and MS could put out new systems sooner. Of course, if a few things go well then maybe MS doesn't rush either.

 

It'll be interesting if the next systems aim for 4K; we'll have to see how many TVs are on the market at the time. If we have 40-50 GB games now, though, god, 4K games could be some crazy size.


While this gen isn't the best hardware-wise, it's not like the goal is something high; 1080p and 60 fps should become more common on both systems as the code gets better and the developers do too.

 

Anyway, Sony may not want to release as soon, but I think Nintendo and MS could put out new systems sooner. Of course, if a few things go well then maybe MS doesn't rush either.

 

It'll be interesting if the next systems aim for 4K; we'll have to see how many TVs are on the market at the time. If we have 40-50 GB games now, though, god, 4K games could be some crazy size.

 

The next Xbox should just have expansion slots for GTX Titans that you can load on for 4K.

 

[image: gtxbox.png]


The next Xbox should just have expansion slots for GTX Titans that you can load on for 4K.

 

[image: gtxbox.png]

Hehe, I know it's a joke, but actually, with DX12 being able to pool resources (like SLI but better), they could probably tack a second GPU/co-processor onto one of the USB 3.0 ports to boost performance if they really wanted to. It's not like that hasn't been done in the past.


While this gen isn't the best hardware-wise, it's not like the goal is something high; 1080p and 60 fps should become more common on both systems as the code gets better and the developers do too.

 

 

I hope so. I'm certainly not buying one before it is. I can live with some PC ports of the best-looking PC games running at 900p, but there's no excuse for games developed primarily for next-gen consoles.


It's also a fact that the ESRAM is only 32 MB. Having the VAST MAJORITY of your RAM at less than half the speed of your competitor's and then throwing on a tiny amount that is faster doesn't make your overall memory bandwidth beat the competition.

You act as if, because both can work at the same time, you can just add them together for total memory bandwidth, but you can't, because very little can fit in the ESRAM.

Yes, the ESRAM makes things faster than if it weren't there. Yes, if you can keep both reads and writes filled, that tiny 32 MB is faster than the PS4's memory. But that little boost doesn't close the gap and overcome the fact that the PS4's memory is faster than the vast majority of the memory in the Xbox One.

With respect to AF specifically, it requires the textures to work, and the textures can't fit in the ESRAM. So sure, you can put the frame buffer in the ESRAM, but every time the AF does its texture lookups it has to go to the slower main memory, so it can't just do AF for free in the ESRAM on the side while the rest of the game is running.

People here keep acting like the ESRAM is this great graphics RAM the GPU can just use without having to touch the DDR3, with the DDR3 used only by the OS and CPU, but that's not true. That's more like how the PS3 was with its non-unified memory; the Xbox 360, Xbox One, and PS4 don't work that way.

The ESRAM is a TINY amount of RAM that is fine for certain small tasks (such as holding the frame buffer), but it's too small for most graphics tasks, let alone ALL of them. It adds a nice little boost but in no way makes the overall memory bandwidth of the Xbox One superior to the PS4's.

Furthermore, it's not even clear that bandwidth IS the issue with AF anyway. Again, I'm not saying I know what is or isn't the issue. All I'm saying is that it seems to be a mystery not just to me but to the gaming press in general (for which I used Digital Foundry as an example, but if you have an issue with them I could use others), and thus your claim that "its a known fact that the xbox one has much more memory bandwidth than the ps4" is false.

 

I think MS's solution to this is Tiled Resources and split render targets. You only need to use the ESRAM for what is bandwidth-limited (or maybe latency-limited). Most stuff rendered on screen doesn't need super-high bandwidth, so the render frame can be split between ESRAM and DDR3. According to Brad Wardell, the current ESRAM API is pretty crappy; that's getting improved with an ESRAM profiling tool in PIX, and then an entirely new ESRAM API is coming with DX12.

 

Don't forget MS's first-party games are doing better in terms of resolution and graphics. Third-party multiplats are still struggling, probably because they don't have the same inside access MS does. I think this will change when the XB1 is updated to W10 with DX12. You have to wonder how Playground Games was able to run Forza Horizon 2 at 1080p, locked at 30 fps, with 4x MSAA, and open-world too --- all of this on a meager 32 MB of ESRAM?

 

http://www.redgamingtech.com/gdc-2015-feature-xbox-one-dx12-brad-wardell-says-dx12-looks-realistic-people-didnt-believe/

 

[image: Xbox One DDR3/ESRAM split render target diagram]
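
Here's why that's a genuinely tight fit (standard MSAA arithmetic; whether the game relies on color compression, tiling, or a split render target to make it work isn't public, so that part is speculation):

```python
# Why 1080p + 4xMSAA in 32 MB is a genuinely tight fit: the raw 4xMSAA
# color target alone is ~31.6 MB. (Whether the game relies on MSAA color
# compression, tiling, or a split render target to fit is not public.)

ESRAM_MB = 32.0
msaa_color_mb = 1920 * 1080 * 4 * 4 / (1024 ** 2)   # RGBA8 x 4 samples
print(f"Raw 1080p 4xMSAA color target: {msaa_color_mb:.1f} MB "
      f"of {ESRAM_MB:.0f} MB ESRAM")
```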


The next Xbox should just have expansion slots for GTX Titans that you can load on for 4K.

 

[image: gtxbox.png]

 

 

Though you are joking, this isn't a bad idea. What if consoles were even more PC-like and the next Xbox had an expansion slot (like the Expansion Pak for the N64 back in the day)? A cap could be set on the maximum graphics/resolution and fps that could be supported.

 

Developers could still build all games on a sliding scale, like they do now for Xbox One games.

 

Stock Xbox: comes with 2 expansion slots on the back. The box would still have to maintain its stocky body to ensure proper airflow.

 

Stock Xbox: All games 1080p 60-120fps

Xbox w/ 1 Expansion card:  1440p games 30-60fps

Xbox w/ 2 Expansion cards: 4K 30-60fps.

 

With Xbox becoming more of a platform than a dedicated box, I can see this happening in the future. 


Though you are joking, this isn't a bad idea. What if consoles were even more PC-like and the next Xbox had an expansion slot (like the Expansion Pak for the N64 back in the day)? A cap could be set on the maximum graphics/resolution and fps that could be supported.

 

Developers could still build all games on a sliding scale, like they do now for Xbox One games.

 

Stock Xbox: comes with 2 expansion slots on the back. The box would still have to maintain its stocky body to ensure proper airflow.

 

Stock Xbox: All games 1080p 60-120fps

Xbox w/ 1 Expansion card:  1440p games 30-60fps

Xbox w/ 2 Expansion cards: 4K 30-60fps.

 

With Xbox becoming more of a platform than a dedicated box, I can see this happening in the future. 

 

AKA, kinda like some of the possible Steam Machines that have been mooted.


Face-Off: Battlefield Hardline

However, as with the multiplayer side, we can confirm the PS4's solo gameplay still runs at a 1600x900 native resolution, while Xbox One sits at 1280x720.

But what's the story for the actual settings on each version? Pitting both console editions against PC at its ultra graphics preset (rendering at 1920x1080 to easily match shots), it's surprising to find how little there is to distinguish each. The PS4 and Xbox One are entirely identical, right down to the form of ambient occlusion and their texture filtering. Only a slight quirk on PS4, where texture positions are offset, marks any contrast in our shots.

On to performance, and this is curiously where the PS4 starts to slip. Unlike Battlefield 4's preference for Sony's hardware in the frame-rate stakes, Hardline clearly favours Xbox One across the spectrum of our tests. Both target 60fps with v-sync permanently engaged, but it's only Xbox One that manages to hold this number in areas, particularly with alpha effects involved.

A GPU-side bottleneck is the likely culprit on PS4, though it is unusual to see the stronger console buckle, even with its higher resolution. In one example, a warehouse filled with transparency effects prompts the PS4 to waver between 50-60fps, while Xbox One clears it without an issue. Only one hotel shoot-out prompts any major drops on Microsoft's hardware, while PS4 draws closer to the 50fps line in almost every case. It is worth emphasising that these are the worst-case scenarios though, and the rest of play unfolds at 60fps for each console.

On the console front, the equation is far simpler: Xbox One excels in frame-rate metrics this time, and has a much better grip on 60fps compared to PS4. It's the most improved version of the three in its handling of the Frostbite 3 engine, and often runs above the PS4's update during stress tests. That said the 720p/900p resolution divide, where image quality favours Sony's platform, makes it an apples and pears scenario when choosing between the two.

http://www.eurogamer.net/articles/digitalfoundry-2015-battlefield-hardline-face-off
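
For scale, the resolution gap DF describes works out like this (simple arithmetic):

```python
# The Hardline resolution gap in raw pixels.
ps4_pixels = 1600 * 900   # 1,440,000
xb1_pixels = 1280 * 720   #   921,600
print(f"PS4 renders {ps4_pixels / xb1_pixels:.2f}x the pixels of Xbox One "
      f"({ps4_pixels:,} vs {xb1_pixels:,})")
```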

