PS4 and Xbox One resolution / frame rate discussion



I'm no expert, but DX12 improvements will help even older DX11 games once you have DX12 drivers installed: the CPU overhead reduction that kicks in with the new drivers and WDDM 2.0 doesn't need a developer to change their game code. It's true, though, that you'll get the most out of it with full support, plus all the new features it will add.


I'm no expert, but DX12 improvements will help even older DX11 games once you have DX12 drivers installed: the CPU overhead reduction that kicks in with the new drivers and WDDM 2.0 doesn't need a developer to change their game code. It's true, though, that you'll get the most out of it with full support, plus all the new features it will add.

How would that work if the DX11 games don't use the DX12 API? For example, non-Mantle games don't get a boost even if the hardware supports it.


Sony has two custom APIs for the PS4. One is a high level API, like DX (up until 12) and OpenGL, that's familiar and easier for developers to use. The other is a low level API that's down to the metal, which makes it faster but also less familiar, since it's specific to the particular hardware in the PS4, and being so low level it's more difficult to use (devs have to do manually most of the things the higher level APIs manage for them). The manual control gives developers more flexibility, but it's more work. DX12 and Mantle didn't even exist when Sony made those APIs. Now MS is introducing a similar set of APIs. DX has always been the high level one, and that's continuing... along with DX12 comes DX11.3, which doesn't get much hype but is still a high level API for developers who don't want to get their hands dirty close to the metal. DX12 is an entirely new beast for DX: it provides, for the first time on Windows, a low level API that gives developers across different hardware the kind of low level access the custom console APIs have traditionally offered.
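 

To make the "high level vs. down to the metal" distinction concrete, here's a minimal sketch of the same draw being issued through D3D11 (high level) and D3D12 (low level). It's only an illustration, not anyone's actual engine code: it assumes the device, pipeline state and resources already exist, and it leaves out the fences, barriers and descriptor management a real D3D12 renderer has to handle itself.

#include <d3d11.h>
#include <d3d12.h>

// High level style (D3D11): one call, and the driver tracks hazards, residency
// and command submission behind the scenes.
void DrawHighLevel(ID3D11DeviceContext* ctx, UINT indexCount)
{
    ctx->DrawIndexed(indexCount, 0, 0);
}

// Low level style (D3D12): the application records commands itself and decides
// when to hand them to the GPU queue; synchronization and memory residency
// (omitted here) are also the application's job.
void DrawLowLevel(ID3D12GraphicsCommandList* cmdList, ID3D12CommandQueue* queue, UINT indexCount)
{
    cmdList->DrawIndexedInstanced(indexCount, 1, 0, 0, 0);
    cmdList->Close();
    ID3D12CommandList* lists[] = { cmdList };
    queue->ExecuteCommandLists(1, lists);
}

The extra control is where the reduced CPU overhead comes from, but it's also why the low level route is more work for developers.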

 

If you'd like to look into it more, the high level API for the PS4 is called GNMX. It's similar in capabilities to OpenGL and DX11.x. The low level API is called GNM (I know, it stinks that the difference in API names is only one letter... kind of confusing) and is like Mantle and DX12. Both shipped with the PlayStation 4 at launch. There have been a number of interviews where actual developers have been asked if DX12 is going to give MS a big advantage on Xbox One over PS4, and the developers have said no, because GNM on the PS4 already gives them similar low level hardware access; it's just proprietary and specific to the PS4 instead of being an API that can be used with different graphics cards across Intel, AMD, nVidia, etc. That's not new or unusual. Consoles usually give devs low level access to the hardware. The unusual thing is that the Xbox One did NOT ship with a low level API. Since launch, though, Microsoft has made HUGE improvements in the XDK, stripping out all the things in the API that didn't apply to the Xbox One specifically and adding in new lower level features to improve performance, so it's pretty good now, and it will get better with DX12, which also gives developers a common API that's not just specific to Xbox One hardware.

 

Microsoft even has a shading language called High Level Shader Language (HLSL) as part of DX, and it's proprietary, but Sony made a similar one for the PS4 called PlayStation Shader Language (PSSL), so developers have similar capabilities (OpenGL has GLSL). Again, they had it from launch. Since GNM, GNMX, and PSSL are all Sony's own APIs that they control, they can tweak them and add to them as they like if devs need something the hardware supports that isn't already there. By and large devs seem to be happy with what's there, unlike the initial reaction to the Xbox API, which a lot felt was too high level to open up the full potential of the hardware. Again, though, MS has made HUGE improvements on that front in the XDK updates, so it's not much of an issue anymore. It's hard to get specific about the Sony APIs, though, because they're under an NDA. The public doesn't know EXACTLY what's in them, and the devs that have access to them can't talk about specifics, but their general impression seems to be uniformly good.

I meant that whatever new APIs come as part of DX12, the PS4 will benefit as long as Sony includes those new DX12 APIs in its SDK. I am sure the PS4 has its own API, but I am referring to the DX-friendly wrapper. Sony obviously doesn't have the DX12 API in its SDK (even if it supports all the features in its own API) because DX12 is still in development.

How would that work if the DX11 games don't use the DX12 API? For example, non-Mantle games don't get a boost even if the hardware supports it.

In the case of Mantle it's a completely different API, so the developer has to port their DX code to it in order to use it; there's no compatibility there. Since DX 10.1 each newer version has been a superset of the one before it, and DX12, the way I understand it, is no different: it can run all the DX11 code, but better, with the reduced overhead. Only if a game wants to use the new DX12 features does it have to be written for that specifically.

 

There are also two parts to this: the API improvements, and the kernel-level driver improvements with WDDM 2.0. This is why DX12 can run on a wide range of older graphics cards and isn't limited to just new cards like in the past. Long story short, DX12 should run DX11 games fine and give you performance gains regardless.


In the case of Mantle it's a completely different API, so the developer has to port their DX code to it in order to use it; there's no compatibility there. Since DX 10.1 each newer version has been a superset of the one before it, and DX12, the way I understand it, is no different: it can run all the DX11 code, but better, with the reduced overhead. Only if a game wants to use the new DX12 features does it have to be written for that specifically.

 

There are also two parts to this: the API improvements, and the kernel-level driver improvements with WDDM 2.0. This is why DX12 can run on a wide range of older graphics cards and isn't limited to just new cards like in the past. Long story short, DX12 should run DX11 games fine and give you performance gains regardless.

I don't think that's how it works. There might be some minor improvements as far as the OS using the new API is concerned, but I wouldn't expect the games themselves to change. For example, DX11 brought performance improvements over DX9 & DX10, but that didn't mean games made for the two older APIs ran better if the new one was on the system. Also, Mantle might be a different API, but it uses the same principle as DX12.

 

WARNING! (car analogy): Let's say there's a dirt road and a highway. The driver only knows about the dirt road so that is the only one that he'll use (unless somebody tells him about the highway).

 

What I'm saying is that, unless the old games themselves get patched to use DX12, there's not going to be any magical performance boost. That Star Swarm demo that everybody is using was built specifically for that lower level API access. I would have expected some older DX11 demos to be upgraded for Mantle/DX12 by now if your claim were feasible.


I don't think that's how it works. There might be some minor improvements as far as the OS using the new API is concerned, but I wouldn't expect the games themselves to change. For example, DX11 brought performance improvements over DX9 & DX10, but that didn't mean games made for the two older APIs ran better if the new one was on the system. Also, Mantle might be a different API, but it uses the same principle as DX12.

 

WARNING! (car analogy): Let's say there's a dirt road and a highway. The driver only knows about the dirt road so that is the only one that he'll use (unless somebody tells him about the highway).

 

What I'm saying is that, unless the old games themselves get patched to use DX12, there's not going to be any magical performance boost. That Star Swarm demo that everybody is using was built specifically for that lower level API access. I would have expected some older DX11 demos to be upgraded for Mantle/DX12 by now if your claim were feasible.

 

I already said that you'll get the most out of it if you write to it specifically, but I expect a boost in performance regardless. DX12 can still handle DX11 code, that's why it's called a superset; there's a level of compatibility already present. There's also the fact that we're getting driver improvements here, not just API ones. Everyone knows by now that driver performance gains show up in games without those games needing to be patched.


I don't think that's how it works. There might be some minor improvements as far as the OS using the new API is concerned, but I wouldn't expect the games themselves to change. For example, DX11 brought performance improvements over DX9 & DX10, but that didn't mean games made for the two older APIs ran better if the new one was on the system. Also, Mantle might be a different API, but it uses the same principle as DX12.

 

WARNING! (car analogy): Let's say there's a dirt road and a highway. The driver only knows about the dirt road so that is the only one that he'll use (unless somebody tells him about the highway).

 

What I'm saying is that, unless the old games themselves get patched to use DX12, there's not going to be any magical performance boost. That Star Swarm demo that everybody is using was built specifically for that lower level API access. I would have expected some older DX11 demos to be upgraded for Mantle/DX12 by now if your claim were feasible.

 

Some general performance improvements can already be felt in the way Windows 10 handles DX. Faster load times .. snappier. Naturally we need to wait for a newer version of Windows 10. Not everything revolves around an FPS boost.


I don't think that's how it works. There might be some minor improvements as far as the OS using the new API is concerned, but I wouldn't expect the games themselves to change. For example, DX11 brought performance improvements over DX9 & DX10, but that didn't mean games made for the two older APIs ran better if the new one was on the system. Also, Mantle might be a different API, but it uses the same principle as DX12.

 

WARNING! (car analogy): Let's say there's a dirt road and a highway. The driver only knows about the dirt road so that is the only one that he'll use (unless somebody tells him about the highway).

 

What I'm saying is that, unless the old games themselves get patched to use DX12, there's not going to be any magical performance boost. That Star Swarm demo that everybody is using was built specifically for that lower level API access. I would have expected some older DX11 demos to be upgraded for Mantle/DX12 by now if your claim were feasible.

(I love car analogies)

 

But if they pave that dirt road the driver would still be able to go a little faster on that same road, even if he doesn't take the highway.

 

If the software makes a call and the hardware improves the way it handles that same call, then the software, even staying the same, would still see an improvement.


I already said that you'll get the most out of it if you write to it specifically, but I expect a boost in performance regardless. DX12 can still handle DX11 code, that's why it's called a superset; there's a level of compatibility already present. There's also the fact that we're getting driver improvements here, not just API ones. Everyone knows by now that driver performance gains show up in games without those games needing to be patched.

And like I said, I expect the boost to be on the OS level since DX is closely integrated, but not from DX12's main feature, lower level hardware access. If that is what you're saying, then I don't disagree.

Some general performance improvements can already be felt in the way Windows 10 handles DX. Faster load times .. snappier. Naturally we need to wait for a newer version of Windows 10. Not everything revolves around an FPS boost.

Yeah, but I was interested in what specifically DX12 brings, not its OS integration. But speaking of that, do you have any links to benchmarks comparing W10 with W8, or is what you're saying anecdotal?

(I love car analogies)

 

But if they pave that dirt road the driver would still be able to go a little faster on that same road, even if he doesn't take the highway.

 

If the software makes a call and the hardware improves the way it handles that same call, then the software, even staying the same, would still see an improvement.

(I hate them, it was mostly a joke)

 

A little yes, but still very far from that highway speed.


Yeah, but I was interested in what specifically DX12 brings, not its OS integration. But speaking of that, do you have any links to benchmarks comparing W10 with W8, or is what you're saying anecdotal?

 

 

I was talking about the performance of starting up / Alt-Tabbing / loading games. Naturally this is anecdotal, and you will not see any concrete benchmarks in this regard until graphics drivers and Windows 10 hit RTM.


  • 3 weeks later...

Face-Off: DmC Devil May Cry Definitive Edition



From a visual perspective, we're mostly looking at parity between PS4 and Xbox One - with just a few caveats. Anti-aliasing coverage is a little better on Microsoft's system, although the difference is largely academic, visible only when zooming in on still screenshots. The same post-process algorithm is used across both consoles (creating a sharper image than the PC game) and in motion it's basically impossible to spot the difference.


Instead, the main variable in graphical quality centres on how well the artwork is displayed at a distance and from steeper angles. The Xbox One and PC versions of DmC feature a high level of anisotropic filtering that keeps textures looking crisp and clear, whereas the PS4 game uses a simpler trilinear technique instead. This results in textures appearing blurry from a distance on the Sony platform, diminishing image quality significantly compared to both PC and Xbox One. In comparison, the less aggressive anti-aliasing implementation and the high levels of AF allow the Xbox One to display slightly sharper texture details than the PC release on artwork far away from the camera.


PlayStation 4 favours a predominantly v-synced set-up with minimal screen-tear, although this does seem to vary from scene to scene. In some sequences the engine appears intent on more strictly adhering to v-sync, dropping frames when the renderer goes over budget, while in others the game is allowed to tear, keeping the frame-rate up at the expense of interfering with image integrity. Curiously we also see occasional frame-time dips to 50ms - something that's quite jarring in a game working to a 16ms render budget. In comparison the Xbox One game more consistently adopts the adaptive v-sync approach - where frames are torn if the engine can't sustain its target frame-rate. This leads to more tearing when the engine is under load, but as a consequence frame-rates are slightly higher and there are fewer dips in controller response. Split-second frame-time spikes occur on Xbox One too, but not quite with the same frequency found on the PS4.


Between the two, the mild wobble caused by the tearing on Xbox One is a little more obvious, but it's worth pointing out that both consoles regularly deliver a perceptual 60fps experience where many small blips in performance have no noticeable effect on gameplay or the apparent smoothness of the action.


DmC is a decent port across both current-gen consoles, although both versions have some plus and minus points to consider. The tearing on the Xbox One is a little more intrusive than the short dips in frame-rate on the PS4, though both manage to deliver extended segments of solid 60fps gameplay. Image quality is basically identical, but the lack of anisotropic filtering harms the presentation of the PS4 game, leaving blurry artwork displayed on-screen far more frequently than the Xbox One game, which is clearer and cleaner in comparison. With this in mind, we're inclined to give the Xbox One the final nod here: the dips and tearing are intermittent, while the reduced texture clarity on the PS4 is a more frequent annoyance.


http://www.eurogamer.net/articles/digitalfoundry-2015-dmc-definitive-edition-face-off
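 

As a side note on what the anisotropic vs. trilinear difference looks like from a developer's point of view: on the D3D11 path (PC and, roughly speaking, Xbox One) it comes down to one field in the sampler description. GNM presumably has an equivalent knob, but that API is under NDA, so this is only the public-API view, a minimal sketch assuming you already have a valid ID3D11Device:

#include <d3d11.h>

// Creates a sampler with 16x anisotropic filtering. Swapping the Filter field to
// D3D11_FILTER_MIN_MAG_MIP_LINEAR gives plain trilinear filtering, roughly the
// behaviour the article describes on the PS4 version.
HRESULT CreateAnisoSampler(ID3D11Device* device, ID3D11SamplerState** outSampler)
{
    D3D11_SAMPLER_DESC desc = {};
    desc.Filter         = D3D11_FILTER_ANISOTROPIC;
    desc.AddressU       = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.AddressV       = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.AddressW       = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.MaxAnisotropy  = 16;  // 1-16; higher values keep textures sharp at oblique angles
    desc.ComparisonFunc = D3D11_COMPARISON_NEVER;
    desc.MinLOD         = 0.0f;
    desc.MaxLOD         = D3D11_FLOAT32_MAX;
    return device->CreateSamplerState(&desc, outSampler);
}

Turning AF on is normally that trivial on the API side (the real cost is memory bandwidth at sample time), which is part of what makes its absence in some PS4 versions so puzzling.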


Why is it that several games on the PS4 seem to have issues with AF? It almost sounds like there's an issue with the SDK. Since the PS4 is more powerful than the XB1, it should be able to handle it.


Why is it that several games on the PS4 seem to have issues with AF? It almost sounds like there's an issue with the SDK. Since the PS4 is more powerful than the XB1, it should be able to handle it.

 

this

HUzsJCM.png

will always be a product of memory bandwidth, regardless of how many shaders there are. It's a known fact that the Xbox One has much more memory bandwidth than the PS4. The number of shaders is not the be-all and end-all of the final result.

 

and

 

[embedded tweet]

 

this

HUzsJCM.png

will always be a product of memory bandwidth, regardless of how many shaders there are. It's a known fact that the Xbox One has much more memory bandwidth than the PS4. The number of shaders is not the be-all and end-all of the final result.

 

 

That looks... unreal. It looks like something out of the early 2000s...


will always be a product of memory bandwidth, regardless of how many shaders there are. It's a known fact that the Xbox One has much more memory bandwidth than the PS4. The number of shaders is not the be-all and end-all of the final result.

Where did you get that from? That's far from "a known fact". Having more shaders isn't the only advantage the PS4 has; the other main one is that its GDDR5 memory has 176 GB/s of bandwidth, compared to the Xbox One's DDR3 RAM with its 68.3 GB/s.

Last I checked, 176 was a little higher than 68.3. The Xbox One tries to compensate with its ESRAM, but it's a tiny amount (32MB) and it's 109GB/s, which is still lower than 176.

From Wikipedia:

Eurogamer has been told that for simultaneous read and write operations the ESRAM is capable of a theoretical memory bandwidth of 192 GB/s and that a memory bandwidth of 133 GB/s has been achieved with operations that involved alpha transparency blending

So the theoretical max is 192, which finally is bigger than 176, but the 133 still isn't. So even in the ideal situation the Xbox One has 32MB of 192GB/s memory and 8GB of 68.3GB/s memory, compared to the PS4's 8GB of 176GB/s memory. Now people may prefer other features of the Xbox One (HDMI-in, Kinect, higher clocked CPU, more frequent updates, better media support, etc.), but memory bandwidth is pretty much always an edge given to the PS4. It's not close to "a known fact that the Xbox One has much more memory bandwidth than the PS4", quite the opposite.
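 

For anyone who wants to check where those headline numbers come from, the usual back-of-envelope math is just bus width times effective transfer rate. The figures below use the commonly cited specs (256-bit GDDR5 at 5.5 GT/s, 256-bit DDR3-2133, and a 1024-bit ESRAM path at the 853 MHz GPU clock); treat them as rough peaks, not sustained numbers:

\[ \text{PS4 GDDR5: } \tfrac{256\,\text{bit}}{8} \times 5.5\,\text{GT/s} = 32\,\text{B} \times 5.5 = 176\ \text{GB/s} \]
\[ \text{XB1 DDR3: } \tfrac{256\,\text{bit}}{8} \times 2.133\,\text{GT/s} \approx 68.3\ \text{GB/s} \]
\[ \text{XB1 ESRAM (one direction): } \tfrac{1024\,\text{bit}}{8} \times 0.853\,\text{GT/s} \approx 109\ \text{GB/s} \]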

That looks... unreal. It looks like something out of the early 2000s...

 

It's from a DmC game, I think. Probably using a poorly ported and optimized PS2 engine from the 2000s. But yeah, those textures :x .

 

BTW, I really don't see how a lack of bandwidth would prevent the PS4 from using AF. I'm sorry, but I've been using AF on PC for over 10 years now with no impact on performance. It's one of the first things I put on max quality without bothering to check if it reduces my framerate.

 

Honestly, I think the original Guild Wars, released 10 years ago, had better texture quality, or at least on par with that, running on hardware that was vastly inferior to the PS4 and One. I think I had an Opteron 180 back then with 2GB of DDR2, and the graphics card was probably an 8800 GTS 320 or an X1800 XL.

 

That's definitely at best on par with WoW, and WoW is kind of a technically ugly game (artistically okay) that's able to run on very old PCs.


snip

I have explained this countless times in this thread already: the Xbox One has 2 memory buses, ESRAM + DDR3, that run side by side. ESRAM bandwidth will vary between 109GB/s and 192GB/s (depending on simultaneous read and write ops), plus 68GB/s on the DDR3 bus, making the Xbox One's available memory bandwidth at minimum 177GB/s and up to 260GB/s using both buses simultaneously. This is most likely why the Xbox One has no problem using AF. Texture filtering is a bandwidth hog.
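 

Just to sanity-check those combined figures (plain addition of the two buses' peaks, which assumes both can actually be kept busy at the same time):

\[ 109 + 68 \approx 177\ \text{GB/s (minimum)}, \qquad 192 + 68 = 260\ \text{GB/s (best case)} \]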

One example, like the alpha blending op example you posted, with 133GB/s on the ESRAM: at the same time, a texture with filtering could be in the process of being loaded through the DDR3 using its bandwidth, which is effectively "free" from the perspective of the ESRAM.

The PS4's 176GB/s is only the speed of the bus, and it's theoretical too. Actual performance from the RAM to the GPU varies between 100GB/s and 130GB/s. See the following Sony slide:

KQDrauC.png


Doesn't change the fact that a 10-year-old PC can push proper aniso (8x) without breaking a sweat. So memory bandwidth is certainly not the reason why the PS4 can't.

 

An X1800 XT with 1-2GB of DDR2 could easily push around 30-40 fps in Doom 3 at 1600x1200 with 8x aniso. Can't say the screens you posted look much better than Doom 3 (which is extremely sad, to be honest). And there's not much difference between 8x and 16x aniso from what I remember. Like I said, 16x aniso is one of those given settings gamers always turn on because it doesn't reduce the fps much even on a weak entry-level modern GPU.

 

Personally, I would look at the engine first as the reason. The game is a last-gen game made for 360, PS3, One and PS4, so it's likely using an old last-gen engine like UE3. Maybe this engine, for one reason or another, doesn't support aniso on PS4. Almost all UE3 games I played on PC last gen did not support anything other than FXAA out of the box. Not because the PC can't do 4x MSAA easily, but probably because of engine limitations.


the Xbox One has 2 memory buses, ESRAM + DDR3, that run side by side.

As does the PS4. It has the 8GB GDDR5 RAM as well as 256MB of DDR3 RAM.

The PS4's 176GB/s is only the speed of the bus, and it's theoretical too. Actual performance from the RAM to the GPU varies between 100GB/s and 130GB/s. See the following Sony slide:

That slide is a result of the fact that the SoC is an APU with shared memory, where both the CPU and GPU compete for memory bandwidth. That's true on the Xbox One as well, and thus the 68.3GB/s is just as theoretical for the Xbox One as the 176GB/s is for the PS4.

When the AF issue first came up a while back, I searched and there didn't seem to be any sort of consensus on what the issue was. Now that it's coming up again, I did a quick search to see if anyone had figured it out, and still no luck. If the answer were as simple as you'd have us believe, I'm sure all the major tech sites would have posted as much, but they haven't.

In fact Digital Foundry wrote just yesterday:

 

From a hardware standpoint there's no reason we're aware of why Sony's console can't deliver similarly decent texture filtering to the Xbox One and PC - after all, there's an immense level of commonality between the systems. It's not exactly clear why developers are having problems in this specific area on Sony's console, and it's something we're discussing with contacts right now, with a view to getting to the bottom of what is a rather bizarre mystery. Anisotropic filtering is bandwidth intensive - and that's a precious commodity on both PS4 and Xbox One, but assuming the textures aren't being kept in ESRAM, the Sony console has more bandwidth than its Microsoft counterpart and should be able to handle the job just as well, if not better.

(emphasis added)

 

Now, there is that "assuming" part, but there is no way they can fit the textures in 32MB of RAM, and again, if it were clearly possible then there wouldn't be such a mystery for them to try to get to the bottom of, as they state. Forgive me if I give more credibility to Digital Foundry than to vcfan from the Neowin forums. The professional gaming press seems genuinely confused as to why some games don't use AF on the PS4 but do on the Xbox One. I hope Digital Foundry is able to find something out; I'm not optimistic, though, because as I said we've gone through this before near launch and no one seems to be able to get to the bottom of it.


As does the PS4. It has the 8GB GDDR5 RAM as well as 256MB of DDR3 RAM.

That slide is a result of the fact that the SoC is an APU with shared memory, where both the CPU and GPU compete for memory bandwidth. That's true on the Xbox One as well, and thus the 68.3GB/s is just as theoretical for the Xbox One as the 176GB/s is for the PS4.

When the AF issue first came up a while back, I searched and there didn't seem to be any sort of consensus on what the issue was. Now that it's coming up again, I did a quick search to see if anyone had figured it out, and still no luck. If the answer were as simple as you'd have us believe, I'm sure all the major tech sites would have posted as much, but they haven't.

In fact Digital Foundry wrote just yesterday:

(emphasis added)

 

Now, there is that "assuming" part, but there is no way they can fit the textures in 32MB of RAM, and again, if it were clearly possible then there wouldn't be such a mystery for them to try to get to the bottom of, as they state. Forgive me if I give more credibility to Digital Foundry than to vcfan from the Neowin forums. The professional gaming press seems genuinely confused as to why some games don't use AF on the PS4 but do on the Xbox One. I hope Digital Foundry is able to find something out; I'm not optimistic, though, because as I said we've gone through this before near launch and no one seems to be able to get to the bottom of it.

 

The PS4's 256 MB of DDR3 is for the standby and background download functions. It has nothing to do with graphics. The XB1 has 8 GB of DDR3, 32 MB of ESRAM, and also 8 GB of NAND. The NAND has nothing to do with graphics either; it's used for standby and resume functions and probably caching of system stuff. Point being, the 256 MB of DDR3 is not at all relevant to graphics.

 

Second, the XB1 has 8 GB of DDR3 for the CPU and 32 MB of ESRAM for the GPU. This was intentional, to AVOID the memory contention that's observed on the PS4, which shares the same pool of memory between GPU and CPU. The intent is that the GPU should never access the DDR3 directly. Instead, all that work should be done by the CPU or the Move Engines. This leaves the GPU with just the 32 MB of ESRAM at a full bandwidth of 204 GB/s (peak read/write) or about 140 GB/s (measured read/write). You might say 32 MB is not enough memory for 1080p, but that's not true. The primary reasons for low resolution are a poor ESRAM API and poor profiling tools. This is getting fixed with a new feature within PIX and also a new ESRAM API with DirectX 12. Both of these should help future games, although that will depend on the rendering engines. I am sure engines such as Unreal Engine 4, Unity 5, and CryEngine 3 will adopt these features quickly.
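 

To put the 32 MB figure in perspective, some illustrative arithmetic (this assumes a simple set-up with four full-size 1080p targets kept entirely in ESRAM; real engines vary, and can tile render targets or split them between ESRAM and DDR3):

\[ 1920 \times 1080 \times 4\,\text{B} \approx 8.3\ \text{MB per 32-bit 1080p target} \]
\[ 4\ \text{targets} \times 8.3\,\text{MB} \approx 33\ \text{MB} > 32\ \text{MB of ESRAM} \]

Which is exactly why the ESRAM management API and tooling mentioned above matter so much for hitting 1080p.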

 

http://www.slideshare.net/DevCentralAMD/inside-x-box-one-by-martin-fuller

http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2012/10/Inside-XBox-One-Martin-Fuller_new.ppsx

 

8ZMKLJL.png


Now, there is that "assuming" part, but there is no way they can fit the textures in 32MB of RAM, and again, if it were clearly possible then there wouldn't be such a mystery for them to try to get to the bottom of, as they state. Forgive me if I give more credibility to Digital Foundry than to vcfan from the Neowin forums. The professional gaming press seems genuinely confused as to why some games don't use AF on the PS4 but do on the Xbox One. I hope Digital Foundry is able to find something out; I'm not optimistic, though, because as I said we've gone through this before near launch and no one seems to be able to get to the bottom of it.

 

I find it hard to believe that all of it can be attributed to "lazy developers". There has to be a deeper issue at play here.

 

Whether it's a poor API, a poor SDK, or a hardware issue of some sort remains to be seen.


As does the PS4. It has the 8GB GDDR5 RAM as well as 256MB of DDR3 RAM.

No, it does not. The 256MB is on a different chip, and the CPU and GPU have no access to it.

 

That slide is a result of the fact that the SoC is an APU with shared memory, where both the CPU and GPU compete for memory bandwidth. That's true on the Xbox One as well, and thus the 68.3GB/s is just as theoretical for the Xbox One as the 176GB/s is for the PS4.

It is: 68GB/s theoretical, 50GB/s real world (from an Xbox engineer). Still, ESRAM + DDR3 far exceeds the GDDR5 configuration, by around 100GB/s.

 

Now, there is that "assuming" part, but there is no way they can fit the textures in 32MB of RAM, and again, if it were clearly possible then there wouldn't be such a mystery for them to try to get to the bottom of, as they state.

That would be true if all the GPU was doing was loading textures, but that's not how it works. No developer would waste time only loading textures. The whole point of having compute pipes in addition to graphics pipes is to do multiple things at once.

DF scenario: PS4, load a texture into GDDR5 with your 176GB/s of bandwidth.

XB1, load a texture into DDR3 with your 68GB/s of bandwidth.

The PS4 wins.

Here's a real scenario: the GPU must execute a shader and also load a texture at the same time.

On the PS4: the shader reads/writes and the texture reads all share the RAM bandwidth while both operations execute.

On the Xbox One: the GPU loads a texture using the DDR3 bus while a shader is executing (one that reads and writes to a render target stored in ESRAM).

The end result: from a bandwidth perspective, the Xbox One wins. What do you think the whole point of having such a DDR3 + ESRAM configuration is? Just for fun? Not only can the GPU use both pools, but, as the slide posted above by KevinN206 shows, there can be no memory contention between the CPU and GPU. This saves massive numbers of cycles that would otherwise be wasted. Why do you figure the PS4's GPU bandwidth takes a massive hit once it starts sharing the RAM with the CPU? Contention.

 

Forgive me if I give more credibility to Digital Foundry than to vcfan from the Neowin forums.

Lol, OK, so what are you doing in this thread exactly, then?


This topic is now closed to further replies.