Xbox One developer: upcoming SDK improvements will allow for more 1080p games

Just before Microsoft launched the Xbox One in November, there was a lot of Internet debate and chatter over the fact that many games made for the console would run natively at 720p. These included titles like Call of Duty: Ghosts and Battlefield 4, both of which run at higher native resolutions on Sony's PlayStation 4.

Rebellion's Sniper Elite 3 is due out in 2014 for the Xbox One and other platforms.

Now one developer claims that Microsoft will soon release an updated SDK that should help close this resolution gap between games made for the Xbox One and the PS4. Rebellion's Jean-Baptiste Bolcato, a Senior Producer on the upcoming WWII shooter Sniper Elite 3, stated:

They (Microsoft) are releasing a new SDK that’s much faster and we will be comfortably running at 1080p on Xbox One. We were worried six months ago and we are not anymore, it’s got better and they are quite comparable machines.

So why is it so hard to get Xbox One games running at 1080p at the moment? Bolcato points to Microsoft's decision to pair the console's 8 GB of DDR3 RAM with 32 MB of eSRAM, an amount he says is simply too small to support 1080p rendering. He adds:

It’s such a small size within there that we can’t do everything in 1080p with that little buffer of super-fast RAM. It means you have to do it in chunks or using tricks, tiling it and so on. 

However, it appears the SDK improvements will help developers work around this eSRAM hurdle. Bolcato seems confident that the Xbox One "is gonna catch up" to the PS4 in the months to come.
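
To make the "chunks" idea concrete, here is a rough, purely illustrative sketch of how a 1080p set of render targets might be split into horizontal bands that each fit a 32 MB budget. The buffer layout is an assumption for illustration only; it is not the actual Xbox One SDK, which is not public.

# Illustrative only: splitting a 1080p deferred-rendering G-buffer into
# horizontal bands that each fit a hypothetical 32 MB fast-memory budget.
# The buffer layout below is an assumption, not the Xbox One SDK.

WIDTH, HEIGHT = 1920, 1080
GBUFFER_TARGETS = 4                        # assumed: 4 RGBA8 render targets
BYTES_PER_PIXEL = GBUFFER_TARGETS * 4 + 4  # plus a 32-bit depth/stencil buffer
ESRAM_BUDGET = 32 * 1024 * 1024

def rows_per_band(budget=ESRAM_BUDGET):
    """How many rows of the full G-buffer fit in the fast-memory budget."""
    return budget // (WIDTH * BYTES_PER_PIXEL)

def bands(height=HEIGHT):
    """Yield (start_row, end_row) tiles that each fit the budget."""
    step = rows_per_band()
    for start in range(0, height, step):
        yield start, min(start + step, height)

if __name__ == "__main__":
    tiles = list(bands())
    print(f"{rows_per_band()} rows per band -> {len(tiles)} passes per frame")
    # With this layout the scene has to be rendered in multiple passes,
    # which is the extra tiling work Bolcato is describing.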

And these improvements might actually arrive sooner rather than later. Microsoft's upcoming Xbox One update, due out later today, will include changes to the SDK. It's not clear whether those are the changes mentioned here, but we may be enjoying higher-resolution games in the near future.

Source: GamingBolt | Image via Rebellion

60 Comments


I would definitely be shocked if Microsoft never considered whether the Xbox One could play games natively at 1080p, and the resolution controversy has forced them to make sure 1080p is achievable on the X1. This is why two is better than one in some cases: if the PS4 didn't exist, I think many games would still be stuck at 720p. Thank God!

I'm skeptical about this whole comment.

First, a full 1080p frame at 32bpp is about 8 MB, so the eSRAM buffer could fit about four of them.
Also, whatever Microsoft does in the SDK, if the buffer really is too small, there is nothing that can be done to fix it beyond making it easier for developers to fall back to the slower memory outside the eSRAM.

The Xbox One is the less powerful machine and we will all have to admit it. This is a fact.
Whether an updated SDK can narrow the difference remains to be demonstrated.
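
For reference, the arithmetic behind that estimate:

# Quick check of the figures above: a 1080p frame at 32 bits per pixel.
frame_bytes = 1920 * 1080 * 4              # 8,294,400 bytes
frame_mb = frame_bytes / (1024 * 1024)     # ~7.9 MB
esram_mb = 32
print(f"one 1080p/32bpp target: {frame_mb:.1f} MB")
print(f"targets that fit in eSRAM: {esram_mb / frame_mb:.1f}")  # ~4.0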

TheCyberKnight said,
I'm skeptical about this whole comment.

First, a full 1080p frame at 32bpp is about 8 MB, so the eSRAM buffer could fit about four of them.
Also, whatever Microsoft does in the SDK, if the buffer really is too small, there is nothing that can be done to fix it beyond making it easier for developers to fall back to the slower memory outside the eSRAM.

The Xbox One is the less powerful machine and we will all have to admit it. This is a fact.
Whether an updated SDK can narrow the difference remains to be demonstrated.


Hopefully they'll be able to use it efficiently; they had the eDRAM on the 360, so it's not like developers are strangers to a small dedicated memory pool like that.

I just find it interesting how the development teams on both sides went for different hardware. MS sacrificed die space for eSRAM so it could use DDR3; Sony thought "bugger it, we'll just use the more expensive GDDR5" and added more CUs.

You are forgetting to take into account the alpha channel, the depth buffer, additional buffers for post-processing effects such as HDR lighting, bloom, motion blur and reflection mapping, other stencils, anti-aliasing, and double/triple buffering for smooth video without tearing. All of this can take huge amounts of memory.

Don't forget the additional needs of the Xbox One interface for live TV and snapping, which needs to be buffered and available at all times, even during gaming.

I called this a long time ago in my comment here: http://www.neowin.net/news/mic...ed-and-more#comment-2348491
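
To put rough numbers on that list, here is a sketch of one possible 1080p render-target budget. The formats and buffer counts are assumptions for illustration, not any particular engine's setup:

# Hypothetical 1080p render-target budget, illustrating the point above.
# Formats and buffer counts are assumptions, not any particular engine.
PIXELS = 1920 * 1080
MB = 1024 * 1024

buffers = {
    "back buffers (double-buffered RGBA8)": 2 * PIXELS * 4,
    "depth/stencil (D24S8)":                PIXELS * 4,
    "HDR scene target (RGBA16F)":           PIXELS * 8,
    "bloom / post-process targets (2x)":    2 * PIXELS * 4,
    "velocity buffer for motion blur":      PIXELS * 4,
}

total = sum(buffers.values())
for name, size in buffers.items():
    print(f"{name:40s} {size / MB:6.1f} MB")
print(f"{'total':40s} {total / MB:6.1f} MB vs 32 MB of eSRAM")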


Geezy said,
You are forgetting to take into account the alpha channel, the depth buffer, additional buffers for post-processing effects such as HDR lighting, bloom, motion blur and reflection mapping, other stencils, anti-aliasing, and double/triple buffering for smooth video without tearing. All of this can take huge amounts of memory.

Don't forget the additional needs of the Xbox One interface for live TV and snapping, which needs to be buffered and available at all times, even during gaming.

I called this a long time ago in my comment here: http://www.neowin.net/news/mic...ed-and-more#comment-2348491

You are mostly correct, with one exception.

The 'composer' doesn't need to consume or use eSRAM. Things snapped to the side or other textures being retained in the composer can stay in System RAM and write directly through without touching the eSRAM.

This works a lot like the composer technology in Windows 7/8.

In Vista, the original composer had to maintain copies of textures, and even though it used virtualized system RAM, it had to copy them back to VRAM to display them.

With Windows 7, this changed, and it further changed in Windows 8. Windows 7 no longer needed to use the VRAM step on DX10 or newer hardware, as the composer can write directly from virtualized System RAM, no longer needing to keep additional copies of the texture or waste time transferring it back and forth to VRAM.

The XB1's composer is able to slap any additional textures into the final output without dumping non-game textures into eSRAM if they don't need additional processing.

(Even game content doesn't have to go through eSRAM. Microsoft implies as much when quoting theoretical memory performance, since they add the eSRAM bandwidth to the system RAM bandwidth; BOTH can be filling the final composition at the same time.)


You are correct that more of the eSRAM gets used for 'gaming' composition than just a 1080p frame once effect processing is going on. This is why 16 MB was the absolute minimum and Microsoft bumped it to 32 MB to ensure future headroom.

So the DDR3 bottleneck will be even more of an issue, unless MS doesn't want to guarantee resources to either the game or snapped apps, defeating their multitasking idea. I can't believe they were ever considering 16MB... *smh*

I guess it depends on what type of apps are being snapped, though; perhaps this is why they won't let you snap Skype while gaming.

Microsoft should have spent that little extra money IMO; it will bite them in the ass in years to come as games start using the consoles' full power. I still prefer the Xbone to the PS4 though, never been a Sony fan (but I admit their console is not bad, it's just not my personal preference).

Simon Fowkes said,
Probably will need all 3 due to exclusives, think I will go back to PC gaming...

Couldn't hurt. Titanfall was the only Xbox One exclusive I cared about, and it's coming to PC anyway.

We will see which console is the best as soon as one of them has a game line-up that actually makes me want to buy a console.

Super Mario 3D World on the Wii-U is a must-buy. So there is your answer - the best game I have played for years, on the first console I ever bought - I was a PC gamer for 20 years, but got the Wii-U for my six-year-old son. It can also play all the original Wii games. I will buy a PS4/Xbone when my kids are older...

So you bought the Wii-U for "one" game only.
Well, there can't be much of a choice on the Wii-U then.
But reading your past comments, you love Nintendo anyway. Well, I guess someone has to.

The Nintendo Wii-U is a classic example that it is not just about raw hardware power. The Wii-U has much less powerful hardware than the PS4/Xbone (though I think it is more powerful than the 360/PS3), but it can play some great games:

Super Mario 3D World (60fps @ 1080p)
Need for Speed: Most Wanted U
Lego City Undercover, etc...

The Wii-U is also a classic example of a product manufactured without regard to the competition. I like Nintendo and all, but let's not pretend the Wii-U is selling like hotcakes here...

They are essentially the same hardware, with maybe a small advantage on the GPU. Developers generally build games for the lowest common denominator, so I expect games to be similar on both platforms. The PS4 does not look like it's that far ahead.

Melfster said,
They are essentially the same hardware, with maybe a small advantage on the GPU. Developers generally build games for the lowest common denominator, so I expect games to be similar on both platforms. The PS4 does not look like it's that far ahead.

Exactly. The PS4 advantage probably won't show until 4-5 years down the line, with PS4-exclusive games.

Hell, imagine if MS released a "One S" model with an upgraded GPU. Technically that happened with the 360 S when they combined all three chips: the redesign sped up the GPU, but they had to downclock it to keep compatibility. Do the same with the One, but have games run with higher graphics settings when played on the One S.

Considering the move to x86, either of the companies could pull a move like that down the line. Just ask AMD for a new chip...

+6 CUs, +560 GFlops, +16 ROPs, +8 ACEs/CQs, better GPGPU support, better performing CPU, and faster unified memory is a tiny advantage? okay

This is what's been said for a while now: the eSRAM needs tricks, and the tricks will come in the DX update, with devs having to do nothing to take advantage of the small but incredibly fast RAM. This is what I've said when trying to talk to the fanboys (I'm not a fanboy, btw; I can simply read articles and separate FUD from fact).

Now, I have no idea exactly what performance increase this will give devs with zero changes, but imagine that after this, the Xbox One games currently out could simply be patched to run at 1080p across the board, including BF4. That would upset things a little, wouldn't it? It would be very funny, not because I want one to be better, but because of all the fanboys claiming one thing or another while ignoring key details or not understanding the way games use hardware (and the fact that the low-level software is just as important as the hardware).

And still, improving the bandwidth by using the eSRAM will not make up for the raw shader performance deficit nor for having half of the ROPs.

You just need to understand that all this newer SDK will enable is for games that were bandwidth-starved to run better; it will not magically make the ROPs push more pixels, which is one of the main factors limiting the X1's 1080p capability.

Yeah, I understand this. I also understand that shader performance has less real-world effect than you are making out. Game devs can simply use slightly fewer effects, which most people would be hard pressed to notice, while the faster bandwidth will give a direct, user-noticeable boost to fidelity and performance.

gonchuki said,
And still, improving the bandwidth by using the eSRAM will not make up for the raw shader performance deficit nor for having half of the ROPs.

You just need to understand that all this newer SDK will enable is for games that were bandwidth-starved to run better; it will not magically make the ROPs push more pixels, which is one of the main factors limiting the X1's 1080p capability.

This is correct: even with infinite memory size or bandwidth, if the GPU and ROPs are too weak, there's no point.

If Xbox One games were running at 720p to fit the framebuffer in eSRAM, then at 720p the eSRAM isn't a GPU bottleneck, so the GPU doesn't have any "bottlenecked power" to reclaim when jumping up to 1080p. Even if the eSRAM size/bandwidth issues are taken care of, going from 720p to 1080p will incur the expected framerate penalty.

There is no "free lunch" where an Xbox game can jump from 720p to 1080p while keeping its framerate.

Every time this resolution or frame-rate debate has come up, other anonymous sources have said the XB1's drivers and SDK are behind where they should have been from day one.

Only time will tell and after a few updates we'll see where things stand.

And when Microsoft catches up, the PS4 will most likely pull ahead again. The Xbox One can't compete with the PS4 in terms of raw power, and never will be able to unless some future hardware revision puts in more powerful components.

Yogurth said,
And when Microsoft catches up, the PS4 will most likely pull ahead again. The Xbox One can't compete with the PS4 in terms of raw power, and never will be able to unless some future hardware revision puts in more powerful components.

Except we don't need hardware parity, as Microsoft's vision is different from Sony's. All the games I've played on my Xbox One look very good, regardless of resolution. I could not care less, even if I tried.

Yogurth said,
And when Microsoft catches up, the PS4 will most likely pull ahead again. The Xbox One can't compete with the PS4 in terms of raw power, and never will be able to unless some future hardware revision puts in more powerful components.

Except the 'math' doesn't support your claims.

At most the PS4 has 3-6 FPS of an advantage at 1080p/60fps in terms of pure hardware power.

This is a tiny hardware difference, and that is if you don't fully account for the added eSRAM speed when it doesn't have to wait.

Sony will marginally improve their PS4 SDK, but they are competing against Microsoft, the company that defined and designed how 3D gaming and the hardware works today. OpenGL 4.x would not exist if they didn't adopt and copy Direct3D 10/11, and Sony is working from that, not creating it.

The whole debate keeps overlooking the outliers. For example, CoD and BF on the PS4 do run at a higher resolution, but they don't maintain 60fps, whereas the XB1 at the slightly lower resolution does.

This also matters to gamers. I know professional gamers who prefer the XB1 versions, as they can't stand an FPS drop that gets them killed. Jumping up to 1080p is irrelevant if it comes at the cost of gameplay.

Yogurth said,
And when Microsoft catches up, the PS4 will most likely pull ahead again. The Xbox One can't compete with the PS4 in terms of raw power, and never will be able to unless some future hardware revision puts in more powerful components.

While I'm not going to deny the PS4 has a slight GPU advantage, the targets aren't moving higher for games on both systems, unlike on the PC. By that I mean the target is 1080p and 60fps. Both systems have enough hardware to hit that without issue, so the few extra CUs the PS4 has over the XB1 won't really matter. If the targets were open like on the PC, where you can play games higher than 1080p, be it 1200p or 1440p, then the advantage would help, but at this point that's not the case.

Jarrichvdv said,

Except we don't need hardware parity, as Microsoft's vision is different from Sony's. All the games I've played on my Xbox One look very good, regardless of resolution. I could not care less, even if I tried.

There is a massive difference in how some games look. Forza 5, for example, looks stunning, whereas Dead Rising 3 and Ghosts look like slightly sharper Xbox 360 games with a bit of anti-aliasing added. Need for Speed looks great also; however, being locked at 30fps for a fast-moving game like that ruins the experience a bit, I think.

I do hope things improve. I loved the Xbox 360; however, the Xbox One is a bit underwhelming in both performance and user experience.

The only thing, thus far, the Xbox One has been lacking is the stupid interface. I can't ever (quickly) find what I'm looking for unless I have it pinned to the home screen. Gaming on it (once a game loads, which does seem to take forever) is nice and smooth, and it looks great to me. I still think it will be a few years before any titles take true control of the consoles, and that's where this endless bickering might actually matter, when developers know how to fully utilize the systems.

George P said,

While I'm not going to deny the PS4 has a slight GPU advantage, the targets aren't moving higher for games on both systems, unlike on the PC. By that I mean the target is 1080p and 60fps. Both systems have enough hardware to hit that without issue, so the few extra CUs the PS4 has over the XB1 won't really matter. If the targets were open like on the PC, where you can play games higher than 1080p, be it 1200p or 1440p, then the advantage would help, but at this point that's not the case.


I wouldn't call 50% slight.

http://www.extremetech.com/gam...-the-hardware-specs-compare

Did you read your article?

"Despite the resolution difference and upscaling, though, there is visually very little difference between the Xbox One and PS4. By virtue of being based on the same GPU architecture, games on both consoles will look very, very similar"

Mobius Enigma said,
At most the PS4 has 3-6 FPS of an advantage at 1080p/60fps in terms of pure hardware power.
Where did you get these numbers?

Yogurth said,

Anyone who follows hardware for long will tell you that performance gains don't match 1:1 with hardware gains. You can add 50% more hardware shaders, but that doesn't give you a 50% performance gain; it's never been like that. You also have the difference between theoretical numbers and the real world: you can do the math on paper and come up with a peak performance number, and in real-world testing that number is never hit. I also remember a post a month or two ago that said out of the 18 CUs in the PS4, 4 are used for something else and 14 are for the game; that, if true, makes it 12 vs 14 and not 12 vs 18.

Regardless, my original point stands: both have enough power to hit 1080p and 60fps. If we were talking higher, like 1440p, then the advantage on the hardware side would come more into play.

George P said,
Anyone who follows hardware for long will tell you that performance gains don't match 1:1 with hardware gains. You can add 50% more hardware shaders, but that doesn't give you a 50% performance gain; it's never been like that.
Sometimes they do. If you read video card reviews, for a given architecture a certain x% more shaders/clock speed usually translates into x% more performance, although that varies from game to game. Since the PS4 and Xbox One have a very similar architecture, it's entirely legitimate to expect higher performance from the console that has the higher specs; it's not like the Xbox One has much to compensate with.

While I agree that there are other factors at play, it's vain to hope that the Xbox One will ever fully catch up to the PS4.

Andre S. said,
Sometimes they do. If you read video card reviews, for a given architecture a certain x% more shaders/clock speed usually translates into x% more performance, although that varies from game to game. Since the PS4 and Xbox One have a very similar architecture, it's entirely legitimate to expect higher performance from the console that has the higher specs; it's not like the Xbox One has much to compensate with.

While I agree that there are other factors at play, it's vain to hope that the Xbox One will ever fully catch up to the PS4.

Fully catch up in what way? I don't deny the PS4 advantage, but what are we racing to? The end goal as far as performance goes is 1080p and 60fps, something I fully expect the XB1 to reach in time. After that point, whatever hardware advantage the PS4 has gives it minor differences between games, the ability to have some more effects in-game like particles or a bit better lighting, nothing substantial like, say, 1080p@60 vs 720p@60.

That's really my point in all this: the finish line for both systems is the same mark. Now, if the PS4 wanted to up that to something higher like 1440p, then sure, but that's not going to happen.

George P said,
Fully catch up in what way? I don't deny the PS4 advantage, but what are we racing to? The end goal as far as performance goes is 1080p and 60fps, something I fully expect the XB1 to reach in time.
I don't like spending too much time in speculation, but based on what it's achieved so far and the large hardware advantages of the PS4 (GDDR5, 50% more shaders), I don't see that happening. There's only so much driver updates can do; providing built-in SDK functions for efficient use of ESRAM won't do anything for developers already making good use of it, like BF4 certainly does - DICE has the very best graphics programmers in the world; yet BF4 runs at 720p on Xbox One.

Realistically, I think differences like 720 vs 1080 or 30 vs 60fps will probably happen less as developers learn to use the Xbox One better, but 720 vs 900 or 900 vs 1080 is pretty much what the raw hardware difference seems to substantiate (i.e. 1080p is roughly 50% more pixels than 900p).

Of course developers could always choose to lower the level of detail on Xbox One to keep the same framerate and resolution, but since they practically never do that, they must have some good reasons. Perhaps the sacrifice would be too great to achieve that goal, gameplay would be affected, assets would need redesign, I don't know.
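
A quick check of the pixel counts behind that comparison:

# Pixel counts behind the 720/900/1080 comparison above.
res = {"720p": 1280 * 720, "900p": 1600 * 900, "1080p": 1920 * 1080}
print(f"1080p vs 900p: {res['1080p'] / res['900p']:.2f}x pixels")   # ~1.44x
print(f"900p vs 720p:  {res['900p'] / res['720p']:.2f}x pixels")    # ~1.56x
print(f"1080p vs 720p: {res['1080p'] / res['720p']:.2f}x pixels")   # 2.25x

So "roughly 50% more pixels than 900p" is in the right ballpark; the exact figure is about 44%.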


George P said,
Fully catch up in what way? I don't deny the PS4 advantage, but what are we racing to? The end goal as far as performance goes is 1080p and 60fps, something I fully expect the XB1 to reach in time. After that point, whatever hardware advantage the PS4 has gives it minor differences between games, the ability to have some more effects in-game like particles or a bit better lighting, nothing substantial like, say, 1080p@60 vs 720p@60.

That's really my point in all this: the finish line for both systems is the same mark. Now, if the PS4 wanted to up that to something higher like 1440p, then sure, but that's not going to happen.


If you've ever played on a PC, then you're well aware of the numerous tweaks one could make to a game that could enhance it visually without necessarily pushing the resolution higher.

Andre S. said,
Where did you get these numbers?

First let us stop you from posting this: "PS4 (GDDR5, 50% more shaders)"

Technically this is true, but it is meaningless. You have to compare the processing speed, so even with 2 additional shader cores (not 4, as Sony would have customers believe), the performance difference in shader output is:
1.6-1.7 teraflops PS4
1.4-1.5 teraflops XB1

Notice that even with all the GDDR5 and extra shaders, this is not a 50% gain. Microsoft also closed this gap a bit by increasing the GPU core speed, which they found benefits overall performance more than having more cores.

So you are looking at a technical difference of around 5-10% in FPS on screen. What is 5/10% of 60fps? You have your answer.


Side Notes:
GDDR5 would be far more important if the AMD-based CPU/GPU cores were faster per core; however, they are rather slow, which makes the RAM boost irrelevant on the CPU side for the PS4, while the XB1 is able to compensate for it with the eSRAM on the GPU side.

The slight GPU bump Microsoft gave the XB1 is more significant than throwing more shader cores at the problem. It is like the argument of bandwidth versus latency.

For example: In terms of 'speed' for navigation, gaming, etc. having 50gb of bandwidth doesn't help if you have 500ms latency. Adding in another 50gb of bandwidth won't fix the latency issue; however, reducing latency down to 300ms will help.

AMD's CPU architecture is horribly slow in single thread/core speeds. Adding in more cores only speeds up parallel operations, but does not help get instructions through a single core faster.

This is also why Intel owns the PC gaming market, as games still need fast loops that can't be split between cores. As I have mentioned before, use any site to compare the single core/thread performance difference between Intel and AMD. Intel's i3s trample the highest end AMD CPUs in single core performance by a massive amount.

If you want to look at the AMD difference in just GPU terms, NVidia continues to hold their own with AMD by having much faster clock rates with lower shader counts.


Mobius Enigma said,

First let us stop you from posting this: "PS4 (GDDR5, 50% more shaders)"

Technically this is true, but is meaningless. You have to compare the processing speed, so even with 2 additional shader cores (not 4 like Sony leaves customers to believe), the performance difference of shader output is:
1.6-1.7 teraflops PS4
1.4-1.5 teraflops XB1

Again, where did you get those numbers? Every technical reviewer around the net has put these at 1.3TF for the Xbox One and 1.84TF for the PS4, which is a 41.5% difference. 41.5% is not 50%, but it's enough to amply justify different resolutions or framerates.

http://www.neoseeker.com/news/...-vs-playstation-4-hardware/
http://uk.ign.com/blogs/finalv...ardware-and-specifications/

The idea that the Xbox One fully compensates for slower memory with eSRAM is exactly what this article disproves: developers are saying it is too small to fit all the buffers needed for 1080p rendering, forcing the use of elaborate caching mechanisms. All this memory swapping is not free. Never mind that the GPU doesn't have fast access to anything other than those 32MB of memory, which essentially rules out any computation involving very large textures or datasets, and severely limits the use of temporary buffers and any otherwise memory-hungry technique. Due to the fundamental memory-time tradeoff, this necessarily makes the Xbox One slower, never mind a lot less flexible to work with.
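
For what it's worth, the widely quoted peak figures follow directly from the publicly reported shader counts and clock speeds (768 shaders at 853 MHz for the Xbox One, 1152 at 800 MHz for the PS4), counting a fused multiply-add as two floating-point operations:

# Deriving the widely quoted peak-FLOPS figures from shader count x clock.
# Assumes 2 FLOPs per shader per cycle (one fused multiply-add).
def peak_tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000.0

xb1 = peak_tflops(768, 0.853)    # 12 CUs x 64 shaders
ps4 = peak_tflops(1152, 0.800)   # 18 CUs x 64 shaders
print(f"Xbox One: {xb1:.2f} TFLOPS, PS4: {ps4:.2f} TFLOPS")
print(f"PS4 advantage: {(ps4 / xb1 - 1) * 100:.0f}%")   # ~41%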

Andre S. said,
Again, where did you get those numbers? Every technical reviewer around the net has put these at 1.3TF for the Xbox One and 1.84TF for the PS4, which is a 41.5% difference. 41.5% is not 50%, but it's enough to amply justify different resolutions or framerates.

http://www.neoseeker.com/news/...-vs-playstation-4-hardware/
http://uk.ign.com/blogs/finalv...ardware-and-specifications/

The idea that the Xbox One fully compensates for slower memory with eSRAM is exactly what this article disproves: developers are saying it is too small to fit all the buffers needed for 1080p rendering, forcing the use of elaborate caching mechanisms. All this memory swapping is not free. Never mind that the GPU doesn't have fast access to anything other than those 32MB of memory, which essentially rules out any computation involving very large textures or datasets, and severely limits the use of temporary buffers and any otherwise memory-hungry technique. Due to the fundamental memory-time tradeoff, this necessarily makes the Xbox One slower, never mind a lot less flexible to work with.

Let's just go with your numbers.

I'm going to use the network analogy again - You are measuring BANDWIDTH when it is LATENCY that is important.

I can do a Truck analogy as well.

Say company ABC has 50 trucks and company XYZ has 30 trucks, both hauling from LA to San Diego. The ABC trucks travel at 50 mph and the XYZ trucks travel at 70 mph. The XYZ trucks are going to get their product to San Diego first, even if they don't carry as much cargo.

The AMD APU design can shove TONS of information (more than needed for 1080p); however, shoving the data 'fast enough' is the problem. The PS4 can shove more data, but it does it slower in the INITIAL period of time.

This is where people are failing on this difference.

As long as Microsoft can shove 'enough' data for 1080p, having extra 'trucks' isn't going to help them, as the data will get there TOO LATE.

The update should bring them all the bump they need. With the framework properly managing the eSRAM and handling the more advanced tiling, they have the speed, and they have slightly faster streams.

There is an insane complexity model for this type of computational difference, but this is not the place to jump into this level of math.

The interesting thing is that the XB1 can get the 'changes' there faster than the PS4, (which is important in game rendering) and that is how Microsoft can and does close the hardware gap.


Mobius Enigma said,
I'm going to use the network analogy again - You are measuring BANDWIDTH when it is LATENCY that is important.

I can do a Truck analogy as well.

I understand the difference very well, thank you, but I don't understand why you try to argue that computational power is unimportant. GPUs are by definition highly parallel computers because they are designed to compute an image one pixel (or vertex) at a time where the same, relatively simple computation is done for every pixel (i.e. the idea of a "shader"). A system with 42% more computational power can render a picture with 42% more pixels in the same amount of time. If you have 16ms to render a frame (because you're rendering at 60fps) and it would take 15ms on PS4 and 21ms on Xbox One, either you'll drop the framerate to 30 or render a smaller image. The computational power difference is entirely sufficient to explain resolution or framerate differences.

The AMD APU design can shove TONS of information (more than needed for 1080p); however, shoving the data 'fast enough' is the problem. The PS4 can shove more data, but it does it slower in the INITIAL period of time.
What? If you have to access data from DDR3 because it didn't fit your 32MB ESRAM cache, you're paying the DDR3 latency penalty instead. I don't know the exact numbers but I'm pretty sure there's no significant advantage there for Xbox One. Without numbers it's meaningless to draw conclusions.

It's also well known that in video cards (including AMD's own GCN video cards), GDDR5 versions handily beat DDR3 ones. Again, GPUs are highly parallel computers and therefore designed for throughput rather than low latency. See http://www.redgamingtech.com/p...-one-gddr5-vs-ddr3-latency/


Andre S. said,
I understand the difference very well, thank you, but I don't understand why you try to argue that computational power is unimportant. GPUs are by definition highly parallel computers because they are designed to compute an image one pixel (or vertex) at a time where the same, relatively simple computation is done for every pixel (i.e. the idea of a "shader"). A system with 42% more computational power can render a picture with 42% more pixels in the same amount of time. If you have 16ms to render a frame (because you're rendering at 60fps) and it would take 15ms on PS4 and 21ms on Xbox One, either you'll drop the framerate to 30 or render a smaller image. The computational power difference is entirely sufficient to explain resolution or framerate differences.

What? If you have to access data from DDR3 because it didn't fit your 32MB ESRAM cache, you're paying the DDR3 latency penalty instead. I don't know the exact numbers but I'm pretty sure there's no significant advantage there for Xbox One. Without numbers it's meaningless to draw conclusions.

It's also well known that in video cards (including AMD's own GCN video cards), GDDR5 versions handily beat DDR3 ones. Again, GPUs are highly parallel computers and therefore designed for throughput rather than low latency. See http://www.redgamingtech.com/p...-one-gddr5-vs-ddr3-latency/

Yes, GPU operations are far more parallel than traditional computing. However, there is still a curve, where smaller, "faster" operations put a dent in overall computational performance.

Not everything thrown through the GPU is going to be filling all shaders; in fact, they are seldom fully utilized. (In a related way, this is why moving from separate VS/PS units to a unified shader model was a huge jump in performance, as neither the VS nor the PS units were ever fully utilized.)

So using the truck analogy, if 10 of the trucks are empty 80% of the time, the extra trucks aren't helping, yet the faster trucks still are.

...

It doesn't have to fit in the 32 MB of eSRAM, as the XB1 can shove data from BOTH the system RAM and the eSRAM at the same time. This is why Microsoft's performance model combines the speed of BOTH the eSRAM and the system RAM to illustrate the maximum performance.

http://www.eurogamer.net/artic...-vs-the-xbox-one-architects

So instead of 176GB/s GDDR5 being compared to 68GB/s, with the additional memory controllers and eSRAM the XB1 at minimum is offering 204GB/s, and most of the time is providing 204GB/s plus 68GB/s for a combined speed of 272GB/s - which is significantly faster than the PS4.

The XB1 also has significantly lower latencies than the PS4. - Even pulling from the link you provide, the GDDR5 latency of the PS4 is around 11-12, where the XB1's DDR3 is around 7-8; and as you know the eSRAM essentially has 0 latency.

This truly adds up...
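
For reference, the headline bandwidth numbers in this exchange come from bus width times transfer rate; the 204GB/s eSRAM figure is Microsoft's quoted peak for simultaneous reads and writes and is taken as given here:

# Where the bandwidth numbers above come from: bus width x transfer rate.
def bandwidth_gb_s(bus_bits, transfer_mt_s):
    return bus_bits / 8 * transfer_mt_s / 1000.0

ddr3 = bandwidth_gb_s(256, 2133)    # Xbox One system RAM  -> ~68 GB/s
gddr5 = bandwidth_gb_s(256, 5500)   # PS4 unified RAM      -> 176 GB/s
esram_peak = 204                    # Microsoft's quoted read+write peak
print(f"DDR3:  {ddr3:.0f} GB/s, GDDR5: {gddr5:.0f} GB/s")
print(f"eSRAM peak + DDR3: {esram_peak + ddr3:.0f} GB/s (theoretical)")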

Going back to the original article: the reason the SDK/framework changes will make a significant difference is that developers are NOT using the eSRAM well, and many even admit they don't know what to do with it.

For example:
http://gamingbolt.com/xbox-one...n-dying-light-tech-director

By taking this work off the developers and giving them a clear way to use the system and eSRAM properly, these changes will make a massive difference on the XB1. It should easily get games to 1080p at a competitive level with the PS4 and PC titles.

It is interesting to see how others are perceiving the hardware differences, and why so many people truly think the PS4 has a larger advantage than it effectively does.

I enjoyed this debate and thank you for offering sound and strong arguments.

Mobius Enigma said,
Yes GPU operations are highly more parallel than traditional computing. However, there is still a curve, where smaller 'faster' operations puts a dent in overall computational performance.
If that were significant, DDR3 versions of video cards would perform similarly to GDDR5 variants, which isn't the case. There are tremendous differences.

It's not hard to see how an array composed of more shaders can crunch through a rendering task faster than a smaller one. They just process more pixels per cycle, and the rendering task consists of providing a color value for each pixel. There's no reason all shaders can't be summoned to the task, unless you're running into bandwidth issues feeding them with data. And with 8GB of high-bandwidth memory, the PS4 isn't likely to run into that.

So instead of 176GB/s GDDR5 being compared to 68GB/s, with the additional memory controllers and eSRAM, the XB1 at minimum is offering 204GB/s and most of the time is providing 204GB/s plus 68GB/s for a combined speed of 270GB/s - which is significantly faster than the PS4.
Assuming your data is split in such a way, which isn't practical in any scenario that I can imagine. If you're simply iterating through a large dataset, the ESRAM won't help at all - you'll be entirely bound by DDR3 bandwidth.

The entire idea of a cache is that it's nice for repeatedly accessing the same small set of data. But especially when it comes to high-resolution images, your data quickly stops fitting the cache, and then what matters is system memory bandwidth. It's likely for this reason that the Xbox One has had, and will continue having, a hard time delivering full HD. High resolutions require large amounts of high-bandwidth memory. That's why enthusiast video cards ship with 2GB+ of GDDR5 RAM, not DDR3 RAM and a small cache.

The XB1 also has significantly lower latencies than the PS4. - Even pulling from the link you provide, the GDDR5 latency of the PS4 is around 11-12, where the XB1's DDR3 is around 7-8; and as you know the eSRAM essentially has 0 latency.
You're comparing a CAS latency (i.e. in cycles) with a time (in ns) latency; read again. The article puts both consoles around 10-12ns of latency despite the CAS difference.
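
To make the cycles-versus-nanoseconds point concrete, here is the conversion with assumed, illustrative CAS values and command clocks; the consoles' actual memory timings are not public:

# Converting CAS latency (cycles) to time (ns). The CAS values and command
# clocks below are illustrative assumptions, not confirmed console timings.
def cas_to_ns(cas_cycles, command_clock_mhz):
    return cas_cycles / command_clock_mhz * 1000.0

# DDR3-2133: command clock ~1066 MHz; GDDR5 at 5500 MT/s: command clock ~1375 MHz
print(f"DDR3,  CAS 14 @ 1066 MHz: {cas_to_ns(14, 1066):.1f} ns")   # ~13 ns
print(f"GDDR5, CAS 15 @ 1375 MHz: {cas_to_ns(15, 1375):.1f} ns")   # ~11 ns

Once the clock is accounted for, a smaller cycle count does not automatically mean lower latency, which is the point being made above.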

What "mobius engima" has written is wrong or deceptive, and on the technical level of misterxmedia or astrograd. Latency, move engines, CPU bottlenecks, and offloading are all pointless straw grasping.

Factual PS4 Hardware Advantages: +6 CUs, +560 GFlops, +16 ROPs, +8 ACEs/CQs, better GPGPU support, better performing CPU, and faster unified memory. PS4 OS may also have less overhead or reserves.

There is no software MS can write to overcome that gap. We will continue to see PS4 games outperform Xbox the entire generation. The 8% GPU reserve reduction and better SDK will help, but the gap can't close beyond the limits of the hardware.


Niekess said,
This proves that hardware is not everything. Now a few of my friends can shut up.

You don't really think the Xbox One will be able to run all the same effects and have the same framerate as the PS4 after this SDK is released, do you? It will just improve things; it can't catch up, due to the PS4's 50% more graphics execution units.

torrentthief said,

You don't really think the Xbox One will be able to run all the same effects and have the same framerate as the PS4 after this SDK is released, do you? It will just improve things; it can't catch up, due to the PS4's 50% more graphics execution units.


It's entirely incorrect. Software optimization is equally important. Sony has always had better hardware; even the PS3 was better than the Xbox 360, but most games looked better on the 360. What I wish MS had done with this console is integrate PC games into it; if Steam weren't a dick, they could work together with MS to make it happen.

Potential <> Actual. Just because PS4 has better HW doesn't mean anything. Proof:

Dim bFlag As Boolean = True

While bFlag
    ' loop forever
End While

Software matters, HW is a much smaller piece of the pie.

peashooter said,
Potential <> Actual. Just because PS4 has better HW doesn't mean anything.

Yes true, I'm running COD at 1080p on my intel 80486, boo-yah....

n_K said,
Yes true, I'm running COD at 1080p on my intel 80486, boo-yah....

Kind of snarky, but I got a laugh out of it at least. Software updates will only help things to an extent. It's not going to make miracles happen.

Unless of course, you really are running COD at 1080p in which case that's amazing!

That's really good news; although I find it suspicious that they're the only one reporting on this 'new and much faster SDK'; even Microsoft is dead silent about it.

- or could it be the 'behind the scenes updates and improvements' promised for the Feb 11 Xbox One system update? Very interesting.

Jarrichvdv said,
That's really good news; although I find it suspicious that they're the only one reporting on this 'new and much faster SDK'; even Microsoft is dead silent about it.

- or could it be the 'behind the scenes updates and improvements' promised for the Feb 11 Xbox One system update? Very interesting.

Microsoft 'kind of' talked about this in the fall before the release.

They talked about the DX11.2 improvements and mentioned the XB1; that was later followed up with how the framework and "upcoming tools" (the SDK) would take the work off developers.

Essentially they were talking about how the framework will keep the 32 MB of eSRAM full instead of forcing developers to manage it themselves with extra work.

The DX11.2 tiling changes are the most important part of this, as Bolcato mentions.

Developers can already do some of this with the current SDK, but it is more work for them, especially on cross platform games. The changes take this work off their hands letting the SDK and framework manage it specifically for the XB1 RAM configuration and do the 'tricks' for them that the 32mb eSRAM needs.

That's what it sounds like to me: basically, they'll provide out-of-the-box APIs for doing the eSRAM management that devs are forced to do manually now. While this will make life easier for developers, it won't make the console any faster; if someone was already making good use of eSRAM and still not reaching 1080p, this update won't help. It's also likely that these built-in functions will provide a solid implementation for most cases, but won't be optimal for all.

Tiling will still have its own problems, not being able to keep the entire scene in the same buffer will lead to a lot of redundant fetching. At least they're trying to make it easier on the developer, not that they have any choice.

Anyway, the proof is in the pudding, we'll only be able to see when the games are out.

Geezy said,
Tiling will still have its own problems, not being able to keep the entire scene in the same buffer will lead to a lot of redundant fetching. At least they're trying to make it easier on the developer, not that they have any choice.

Anyway, the proof is in the pudding, we'll only be able to see when the games are out.

Except the net effect is the exact opposite problem of keeping the texture in a different buffer.

The tiling changes will allow a HUGE texture to be kept in the eSRAM, as the non-rendered portions are intelligently discarded.

Letting developers not worry about the eSRAM and letting the framework manage it also means the developers won't have to learn how to use the eSRAM to get the speeds out of it.


The huge texture won't fit in eSRAM, so portions will be streamed in as necessary, rendered out, and then other data will be swapped in. The bottleneck will still be DDR3. Various buffers will still take up a chunk of eSRAM. This didn't work out amazingly well for the PS2 with its tiny 4MB buffer, and it won't work so well with the Xbox One either. Also consider that the PS2 didn't have to composite nearly as much data while doing a rendering pass. 32MB is tiny for this type of stuff!
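
To put a rough number on that streaming cost (the tile size is an assumption, purely back-of-the-envelope):

# Back-of-the-envelope cost of swapping data between DDR3 and eSRAM.
# Tile size is an assumption for illustration.
tile_mb = 8                      # assumed working-set chunk
ddr3_gb_s = 68                   # Xbox One DDR3 peak bandwidth
frame_budget_ms = 1000 / 60      # ~16.7 ms at 60 fps

transfer_ms = tile_mb / 1024 / ddr3_gb_s * 1000
print(f"moving {tile_mb} MB over DDR3: ~{transfer_ms:.2f} ms "
      f"of a {frame_budget_ms:.1f} ms frame budget")

A single swap is cheap in isolation, but repeated fetches per pass consume both frame time and the same DDR3 bandwidth the rest of the frame needs, which is the redundant-fetching concern above.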
