The complete Xbox One architects interview - DF


Recommended Posts

There's a lot of misinformation out there and a lot of people who don't get it.

 

Real-world RAM performance over 200GB/s - roughly 50% more bandwidth than the competition.

 

It will be true that you can go directly, simultaneously to DRAM and ESRAM.

That equivalent on ESRAM would be 218GB/s. However, just like main memory, it's rare to be able to achieve that over long periods of time so typically an external memory interface you run at 70-80 per cent efficiency.

We've measured about 140-150GB/s for ESRAM. That's real code running. That's not some diagnostic or some simulation case or something like that. That is real code that is running at that bandwidth. You can add that to the external memory and say that that probably achieves in similar conditions 50-55GB/s and add those two together you're getting in the order of 200GB/s across the main memory and internally.

Digital Foundry: So 140-150GB/s is a realistic target and you can integrate DDR3 bandwidth simultaneously?

Nick Baker: Yes. That's been measured.
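For the curious, the arithmetic behind those figures is easy to sanity-check. This quick sketch just uses the numbers quoted in the interview (the 218GB/s ESRAM peak, the 70-80 per cent efficiency range, and the measured ESRAM and DDR3 figures):

```python
# Sanity check of the bandwidth figures quoted in the interview.
ESRAM_PEAK_GBS = 218                       # theoretical ESRAM peak cited by Baker

# Applying the "70-80 per cent efficiency" rule of thumb to the peak...
predicted = [round(ESRAM_PEAK_GBS * eff) for eff in (0.70, 0.80)]
print(predicted)                           # [153, 174] -- brackets the measured 140-150

# ...and adding the measured ESRAM and DDR3 figures together
measured_low = 140 + 50                    # GB/s
measured_high = 150 + 55                   # GB/s
print(measured_low, measured_high)         # 190 205 -- "in the order of 200GB/s"
```

So the measured numbers are internally consistent with the efficiency range Baker gives; the "200GB/s" headline is the sum of the two measured paths, not a single-pool figure.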

The biggest thing in terms of the number of compute units, that's been something that's been very easy to focus on. It's like, hey, let's count up the number of CUs, count up the gigaflops and declare the winner based on that. My take on it is that when you buy a graphics card, do you go by the specs or do you actually run some benchmarks? Firstly though, we don't have any games out. You can't see the games. When you see the games you'll be saying, "What is the performance difference between them?" The games are the benchmarks.

 

 

Explaining what balanced means in a system.

The goal of a 'balanced' system is by definition not to be consistently bottlenecked on any one area. In general with a balanced system there should rarely be a single bottleneck over the course of any given frame - parts of the frame can be fill-rate bound, others can be ALU bound, others can be fetch bound, others can be memory bound, others can be wave occupancy bound, others can be draw-setup bound, others can be state change bound, etc. To complicate matters further, the GPU bottlenecks can change within the course of a single draw call!

 

How important the CPU is to frame-rates, and why CPU offloading was a big part of the design

Another very important thing for us in terms of design on the system was to ensure that our game had smooth frame-rates. Interestingly, the biggest source of your frame-rate drops actually comes from the CPU, not the GPU. Adding the margin on the CPU... we actually had titles that were losing frames largely because they were CPU-bound in terms of their core threads. In providing what looks like a very little boost, it's actually a very significant win for us in making sure that we get the steady frame-rates on our console. And so that was a key design goal of ours - and we've got a lot of CPU offload going on.

 

 

 

The scaler sounds cool. It can dynamically change per frame to reduce frame drops.

We've done things on the GPU side as well with our hardware overlays to ensure more consistent frame-rates. We have two independent layers we can give to the titles where one can be 3D content, one can be the HUD. We have a higher quality scaler than we had on Xbox 360. What this does is that we actually allow you to change the scaler parameters on a frame-by-frame basis. I talked about CPU glitches causing frame glitches... GPU workloads tend to be more coherent frame to frame. There doesn't tend to be big spikes like you get on the CPU and so you can adapt to that.
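The per-frame scaler idea can be sketched as a tiny feedback controller: when the GPU misses its frame budget, shrink the 3D layer's render resolution and let the scaler upscale it; when there's headroom, grow it back. The thresholds, step size and function names below are hypothetical - this is an illustration of the concept, not Microsoft's logic:

```python
# Rough sketch of per-frame dynamic scaling (hypothetical controller, not
# Microsoft's implementation): shrink the 3D render resolution when the GPU
# misses its frame budget, grow it back when there is headroom. The HUD
# overlay would stay at native resolution; only the 3D layer is scaled.
FRAME_BUDGET_MS = 1000.0 / 60.0            # ~16.7ms per frame at 60fps

def next_scale(gpu_ms, scale, step=0.05, lo=0.5, hi=1.0):
    if gpu_ms > FRAME_BUDGET_MS:           # over budget: render fewer pixels
        return max(lo, scale - step)
    if gpu_ms < FRAME_BUDGET_MS * 0.85:    # comfortable headroom: scale back up
        return min(hi, scale + step)
    return scale                           # close to budget: hold steady
```

This works precisely because, as Goossen says, GPU workloads are coherent frame to frame - last frame's GPU time is a good predictor of this frame's.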

 

About the function of the eMMC memory.

 

Digital Foundry: Another thing that came up from the Hot Chips presentation that was new information was the eMMC NAND which I hadn't seen any mention of. I'm told it's not available for titles. So what does it do?

Andrew Goossen: Sure. We use it as a cache system-side to improve system response and again not disturb system performance on the titles running underneath. So what it does is that it makes our boot times faster when you're not coming out of the sleep mode - if you're doing the cold boot. It caches the operating system on there. It also caches system data on there while you're actually running the titles and when you have the snap applications running concurrently. It's so that we're not going and hitting the hard disk at the same time that the title is. All the game data is on the HDD. We wanted the title to be moving that head around and not worrying about the system coming in and monkeying with the head at an inopportune time.
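What Goossen describes amounts to a read-through cache sitting in front of the HDD: once a system block has been read, later reads come from flash and never touch the disk. A toy version of that idea (purely illustrative structure - the class and names here are not Microsoft's code):

```python
# Toy read-through cache illustrating the eMMC idea: system reads are served
# from flash after the first access, so the HDD head stays free for game data.
# (Illustrative only -- not Microsoft's actual implementation.)
class ReadThroughCache:
    def __init__(self, backing_read):
        self.backing_read = backing_read   # the slow device (the HDD here)
        self.flash = {}                    # stands in for the eMMC NAND

    def read(self, block):
        if block not in self.flash:        # miss: one HDD access, then cached
            self.flash[block] = self.backing_read(block)
        return self.flash[block]           # hit: no HDD seek at all

hdd_hits = []
cache = ReadThroughCache(lambda b: hdd_hits.append(b) or f"data:{b}")
cache.read("os/boot")
cache.read("os/boot")                      # second read never touches the HDD
print(len(hdd_hits))                       # 1
```

The payoff isn't raw speed so much as isolation: the title gets the HDD to itself while system reads are absorbed by the flash.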

 

Clock increases

Digital Foundry: Can you talk us through how you arrived at the CPU and GPU increases that you did and did it have any effect on production yield?

Nick Baker: We knew we had headroom. We didn't know what we wanted to do with it until we had real titles to test on. How much do you increase the GPU by? How much do you increase the CPU by?

Lots more interesting details about the architecture.

 

 

http://www.eurogamer.net/articles/digitalfoundry-the-complete-xbox-one-interview

  • Like 1
Link to comment
Share on other sites

So for the first few milliseconds of use, the 32MB of ESRAM can transfer at a rate of 150GB/s - but what happens after that, when the data in the ESRAM has been read by the processor and the ESRAM needs to be refilled?

It gets data from the DDR3, which has a transfer rate of 50-55GB/s? Or worse, because they mention that the DDR3 and ESRAM will function simultaneously, fed from the HDD?

 

All Microsoft did here was put an additional step into the processing chain, which gives the impression that it has one thing over the PS4 in terms of system specs. Realistically, that 32MB of ESRAM will be pointless outside of maybe UI usage, as today's games go through 32MB of data almost instantaneously; after that, the next step in the process becomes the bottleneck - the Xbox One's DDR3, then after that the HDD.

 

In regards to the eMMC NAND, it's flash memory, same as an SSD - basically they put a solid-state chip in there with the OS for faster boot times. This is the only system-specs advantage over the PS4, but it's a short-lived one, because the PS4's HDD is user-replaceable and anyone who wants faster boot times can just swap in an SSD. Doing so brings more benefits than a small eMMC NAND flash chip, because it will also improve game load times and future-proof the console for when OS data becomes too big for a size-limited eMMC NAND chip.

 

I know people get upset when someone compares system stats of the Xbox One with the PS4, but as its direct competitor, the PS4 is the benchmark for comparisons.

Link to comment
Share on other sites

I saw this article when it first got published and it's a very good read. It's fascinating how and why they've gone with the design changes they have, especially around the software stack and building the box around virtualisation with no overheads.

 

So for the first few milliseconds of use, the 32MB of ESRAM can transfer at a rate of 150GB/s - but what happens after that, when the data in the ESRAM has been read by the processor and the ESRAM needs to be refilled?

It gets data from the DDR3, which has a transfer rate of 50-55GB/s? Or worse, because they mention that the DDR3 and ESRAM will function simultaneously, fed from the HDD?

 

All Microsoft did here was put an additional step into the processing chain, which gives the impression that it has one thing over the PS4 in terms of system specs. Realistically, that 32MB of ESRAM will be pointless outside of maybe UI usage, as today's games go through 32MB of data almost instantaneously; after that, the next step in the process becomes the bottleneck - the Xbox One's DDR3, then after that the HDD.

 

In regards to the eMMC NAND, it's flash memory, same as an SSD - basically they put a solid-state chip in there with the OS for faster boot times. This is the only system-specs advantage over the PS4, but it's a short-lived one, because the PS4's HDD is user-replaceable and anyone who wants faster boot times can just swap in an SSD. Doing so brings more benefits than a small eMMC NAND flash chip, because it will also improve game load times and future-proof the console for when OS data becomes too big for a size-limited eMMC NAND chip.

 

I know people get upset when someone compares system stats of the Xbox One with the PS4, but as its direct competitor, the PS4 is the benchmark for comparisons.

Why do people continue to question the best system architects in the world and their decisions? It's beyond me, honestly.

 

Have you read any of the article? They aren't just throwing numbers around - they got the 204GB/s mark with real code running on the box. Questioning those rates is invalid, since MS have actually seen those transfer rates on retail hardware.

 

The eSRAM is useless? It's the key to the RAM infrastructure on the X1: free AA, lightning-fast post-processing with little overhead. It's a tool which was used successfully throughout the 360's life cycle. Like it said in the article, if you have an artefact with little overdraw, that can spill over into DDR3, because it simply doesn't need to live in eSRAM - it's not going to need that extra bandwidth to post-process.

 

The X1 turns on instantly when you say "Xbox On" because of its reserved state in the flash. If the PS4 always boots cold, which I'm quite sure it does, it'll never just boot instantly. 

 

This attitude just sums up the threads on N4G and NeoGAF around this - it's ridiculous how false the information people are throwing around is. People claiming 'balance' is a PR term obviously have no knowledge of system architecture at all. In a pipeline, if you have power ratios of 1:1:0.5:1, your power is 0.5. It's a simple analogy.
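That ratio analogy is just the observation that a pipeline runs at the speed of its slowest stage, which is why balance matters more than any single peak number:

```python
# The "1:1:0.5:1" analogy: a pipeline's throughput is bounded by its slowest
# stage, so one undersized stage caps the whole system regardless of the
# peak figures of the other stages.
def pipeline_throughput(stage_ratios):
    return min(stage_ratios)

assert pipeline_throughput([1, 1, 0.5, 1]) == 0.5   # the 0.5 stage gates everything
```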

  • Like 4
Link to comment
Share on other sites

Excuse my ignorance here - I'm just trying to get my head around how these types of memory work.

 

So whilst they are giving raw numbers saying DDR3+ESRAM is faster than the GDDR5 in the PS4, how do the amounts of that RAM affect things?

 

So they have 32MB of memory running at the faster speed, meaning, in my mind, that they can shift 32MB of data at that rate. Whereas the PS4 has GDDR5 at a slower speed than ESRAM, but it has 8GB of it - meaning it can shift 8GB of data at the slower speed, but due to the volume it would still be shifting far more data?

 

Does any of that make sense or is it complete rubbish? I am no hardware engineer - I'm not even an armchair hardware engineer - as I said, just trying to think it through logically and get my head around it.
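One way to untangle this: bandwidth is a rate, not a total, so a small, fast pool can turn its contents over many times per frame. A rough calculation with round figures (150GB/s is the measured ESRAM number from the interview; 176GB/s is the commonly quoted PS4 GDDR5 peak, used here only for scale):

```python
# Bandwidth is a rate, not a total: what matters per frame is how many bytes
# can move in ~16.7ms, regardless of the size of the pool behind the bus.
FRAME_S = 1.0 / 60.0

def mb_per_frame(gb_per_s):
    return gb_per_s * 1024 * FRAME_S       # GB/s -> MB moved in one 60fps frame

print(round(mb_per_frame(150)))            # 2560MB through ESRAM per frame
print(round(mb_per_frame(176)))            # ~3004MB through GDDR5 per frame
# The 32MB ESRAM can turn its contents over roughly 80 times per frame --
# capacity limits what fits at once, not how much data flows through.
```

So capacity and bandwidth answer different questions: the 8GB pool decides what can be resident, while the GB/s figure decides how much work the GPU can feed per frame.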

Link to comment
Share on other sites

So for the first few milliseconds of use, the 32MB of ESRAM can transfer at a rate of 150GB/s - but what happens after that, when the data in the ESRAM has been read by the processor and the ESRAM needs to be refilled?

It gets data from the DDR3, which has a transfer rate of 50-55GB/s? Or worse, because they mention that the DDR3 and ESRAM will function simultaneously, fed from the HDD?

 

Don't be silly. A 32-bit 1080p render target is only 8-12MB. Read the article - it's explained by the engineer, and he says the move engines can shift relevant data in or out of the ESRAM. So let's say you have a 1080p render target in ESRAM that you are working on, and soon you need to swap to another one. What would happen is that the move engine would have already shifted that other render target back into ESRAM from DDR during unused ESRAM memory cycles, so you can switch to it instantaneously with no memory operations when you are ready, and the previous render target will be shifted back out the same way.
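The render target figure in that reply is easy to verify - it's just width × height × bytes per pixel:

```python
# Size of an uncompressed render target: width * height * bytes per pixel.
def target_mb(width, height, bits_per_pixel):
    return width * height * (bits_per_pixel // 8) / (1024 * 1024)

print(target_mb(1920, 1080, 32))   # ~7.91MB -- a 32-bit 1080p colour target
                                   # fits four times over in the 32MB of ESRAM
```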

 

You're also forgetting one huge point: the compression/decompression engines that are part of the move engines. The move engines can shift this data even quicker because it's compressed, so fewer cycles for the move.

 

From the article:

 

Digital Foundry: Obviously though, you are limited to just 32MB of ESRAM. Potentially you could be looking at say, four 1080p render targets, 32 bits per pixel, 32 bits of depth - that's 48MB straight away. So are you saying that you can effectively separate render targets so that some live in DDR3 and the crucial high-bandwidth ones reside in ESRAM?

Andrew Goossen: Oh, absolutely. And you can even make it so that portions of your render target that have very little overdraw... For example, if you're doing a racing game and your sky has very little overdraw, you could stick those subsets of your resources into DDR to improve ESRAM utilisation. On the GPU we added some compressed render target formats like our 6e4 [six bit mantissa and four bits exponent per component] and 7e3 HDR float formats [seven bit mantissa and three bits exponent per component] that were very, very popular on Xbox 360, which instead of doing a 16-bit float per component 64bpp render target, you can do the equivalent with us using 32 bits - so we did a lot of focus on really maximizing efficiency and utilisation of that ESRAM.

You can use the Move Engines to move these things asynchronously in concert with the GPU so the GPU isn't spending any time on the move. You've got the DMA engine doing it. Now the GPU can go on and immediately work on the next render target rather than simply move bits around.
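The 6e4 and 7e3 formats Goossen mentions are small unsigned floats: 10 bits per component instead of 16 still covers an HDR range because the exponent scales the value. A generic minifloat encoder/decoder shows the mechanics - note that the exponent bias and denormal/clamping behaviour below are illustrative assumptions, not the actual hardware format definitions:

```python
# Generic unsigned minifloat encode/decode, illustrating how a format like
# 6e4 (6-bit mantissa, 4-bit exponent) packs an HDR value into 10 bits.
# The bias and edge-case handling are illustrative assumptions; the real
# Xbox formats may define these details differently.
import math

def mf_encode(value, mant=6, exp=4, bias=7):
    if value <= 0.0:
        return 0
    max_e = (1 << exp) - 1
    e = math.floor(math.log2(value)) + bias
    if e < 1:                          # too small: denormal, m/2^mant * 2^(1-bias)
        return min(round(value / 2 ** (1 - bias) * (1 << mant)), (1 << mant) - 1)
    if e > max_e:                      # too large: clamp to max representable
        return (max_e << mant) | ((1 << mant) - 1)
    m = round((value / 2 ** (e - bias) - 1.0) * (1 << mant))
    if m == (1 << mant):               # mantissa rounded up to 2.0: bump exponent
        e, m = e + 1, 0
        if e > max_e:
            return (max_e << mant) | ((1 << mant) - 1)
    return (e << mant) | m

def mf_decode(bits, mant=6, exp=4, bias=7):
    m, e = bits & ((1 << mant) - 1), bits >> mant
    if e == 0:                         # denormal range
        return m / (1 << mant) * 2 ** (1 - bias)
    return (1.0 + m / (1 << mant)) * 2 ** (e - bias)

# Round-trips stay within one mantissa step (under ~1.6% relative error here):
for v in (0.25, 1.0, 3.1416, 100.0):
    assert abs(mf_decode(mf_encode(v)) - v) / v < 1 / (1 << 6)
```

The trade is precision for footprint: halving bytes per pixel doubles how many render targets fit in the 32MB of ESRAM, which is exactly the utilisation point Goossen is making.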

Link to comment
Share on other sites

There's also quite a number of other design aspects and requirements that we put in around things like latency, steady frame-rates and that the titles aren't interrupted by the system and other things like that. You'll see this very much as a pervasive ongoing theme in our system design.

-Andrew Goossen

Link to comment
Share on other sites

I saw this article when it first got published and it's a very good read. It's fascinating how and why they've gone with the design changes they have, especially around the software stack and building the box around virtualisation with no overheads.

 

Why do people continue to question the best system architects in the world and their decisions? It's beyond me, honestly.

 

Have you read any of the article? They aren't just throwing numbers around - they got the 204GB/s mark with real code running on the box. Questioning those rates is invalid, since MS have actually seen those transfer rates on retail hardware.

 

The eSRAM is useless? It's the key to the RAM infrastructure on the X1: free AA, lightning-fast post-processing with little overhead. It's a tool which was used successfully throughout the 360's life cycle. Like it said in the article, if you have an artefact with little overdraw, that can spill over into DDR3, because it simply doesn't need to live in eSRAM - it's not going to need that extra bandwidth to post-process.

 

The X1 turns on instantly when you say "Xbox On" because of its reserved state in the flash. If the PS4 always boots cold, which I'm quite sure it does, it'll never just boot instantly. 

 

This attitude just sums up the threads on N4G and NeoGAF around this - it's ridiculous how false the information people are throwing around is. People claiming 'balance' is a PR term obviously have no knowledge of system architecture at all. In a pipeline, if you have power ratios of 1:1:0.5:1, your power is 0.5. It's a simple analogy.

 

I don't think you read the article either. They got 140-150GB/sec out of the ESRAM running real code - a far cry from that 204GB/sec you are throwing around.

 

Also, there's no such thing as completely "free AA". It didn't exist on the 360, and it likely won't here. Now if you were to say "low cost" then you might be right.

 

Link to comment
Share on other sites

Also, there's no such thing as completely "free AA". It didn't exist on the 360, and it likely won't here. Now if you were to say "low cost" then you might be right.

Actually, there is such a thing as free AA on the 360. The EDRAM die had 192 component processors that could be used to do 4xMSAA without affecting the performance of the GPU. Developers didn't have to use it for AA though, so they could choose some low-cost shader-based AA and use the component processors for something else. GTA4 on the 360 had 2xMSAA at 720p, unlike the PS3 version, and it likely used this method.

Link to comment
Share on other sites

I don't think you read the article either. They got 140-150GB/sec out of the ESRAM running real code - a far cry from that 204GB/sec you are throwing around.

Also, there's no such thing as completely "free AA". It didn't exist on the 360, and it likely won't here. Now if you were to say "low cost" then you might be right.

154+50=204 - is there something I missed? He says this in the article, with running code.

There is such a thing as free AA - well, free in terms of the GPU.

Link to comment
Share on other sites

 

In regards to the eMMC NAND, it's flash memory, same as an SSD - basically they put a solid-state chip in there with the OS for faster boot times. This is the only system-specs advantage over the PS4, but it's a short-lived one, because the PS4's HDD is user-replaceable and anyone who wants faster boot times can just swap in an SSD. Doing so brings more benefits than a small eMMC NAND flash chip, because it will also improve game load times and future-proof the console for when OS data becomes too big for a size-limited eMMC NAND chip.

 

I know people get upset when someone compares system stats of the Xbox One with the PS4, but as its direct competitor, the PS4 is the benchmark for comparisons.

 

 

You are right about the PS4 having the bonus option of using an SSD, but you forgot to list any of the negatives.

 

First of all, if you switch to an SSD, you're going to be extremely limited storage-wise vs a standard drive. Secondly, you're going to pay a lot vs a standard HDD. I would hazard a guess that your average gamer will not be making that switch.

 

I think this is a wash. MS offers a console that already provides the advantages of an SSD without requiring the user to buy one. I'm not sure about the OS data getting bigger over time though - usually Sony and MS shrink the OS footprint over time, so I don't know that it would grow. Maybe you're talking about user data; I don't know.

Link to comment
Share on other sites

This is a brave move on Microsoft's part.

After reading that (only grasping bits and pieces), it seems Microsoft has built one Box (no pun intended) to take on all its challengers in one shot.

 

They really meant it when they said you only need this one device for ALL your entertainment needs.

 

And by design it's left very, very open... I honestly believe app makers and indies are going to be in their glory come CES and E3 '14.

 

As I stated in another post... I was thinking so small for Xbox One. 

Link to comment
Share on other sites

You are right about the PS4 having the bonus option of using an SSD, but you forgot to list any of the negatives.

 

First of all, if you switch to an SSD, you're going to be extremely limited storage-wise vs a standard drive. Secondly, you're going to pay a lot vs a standard HDD. I would hazard a guess that your average gamer will not be making that switch.

 

I think this is a wash. MS offers a console that already provides the advantages of an SSD without requiring the user to buy one. I'm not sure about the OS data getting bigger over time though - usually Sony and MS shrink the OS footprint over time, so I don't know that it would grow. Maybe you're talking about user data; I don't know.

 

I will agree that in order for PS4 users to get the extra benefit they will have to pay a premium (SSDs are fairly expensive in comparison to HDDs). But size-wise you are not limited by an SSD: the standard hard drive in both the PS4 and Xbox One is only 500GB, and 1TB SSDs - double that size - have been on sale to average consumers for a few months now (while 2TB SSDs are available to the not-so-average consumer at a ridiculous price).

- SSDs are also increasing in size, with a bigger one every few months, so the gap will only grow over time.

 

I was under the impression that the OS footprint expands over time, which is why both Sony and Microsoft reserved more RAM for OS operations than they needed, to allow for future updates and OS additions. The thing that changes over time is the reserved RAM: as they get closer to finalising the console OS, they give more RAM to games, since the OS size becomes more definite.

Link to comment
Share on other sites

The article was a good read, but theoretical numbers are a long way from practical ones. So far the performance of the games, from what we can tell, has been less than stellar. Time will tell if the One's architecture is really as good as it needs to be.

Link to comment
Share on other sites

The article was a good read, but theoretical numbers are a long way from practical ones. So far the performance of the games, from what we can tell, has been less than stellar. Time will tell if the One's architecture is really as good as it needs to be.

 

Hasn't it been less than stellar on the PS4 as well? Both of these consoles suck, so it's no big surprise. BF4 looks crap on the PS4; even the first-party titles don't look good at all.

Link to comment
Share on other sites

Hasn't it been less than stellar on the PS4 as well? Both of these consoles suck, so it's no big surprise. BF4 looks crap on the PS4; even the first-party titles don't look good at all.

Console gaming has its limitations. I'm not sure what you were expecting here. :ermm:

Link to comment
Share on other sites

Launch titles are never the benchmark for a new system. Wait for the second batch, where developers have more time to work on their code - that's when you see what the games can do. Hell, when the PS2 first came out, or the PS3 and so on, its launch lineup wasn't anything great either.

Link to comment
Share on other sites

Hasn't it been less than stellar on the PS4 as well? Both of these consoles suck, so it's no big surprise. BF4 looks crap on the PS4; even the first-party titles don't look good at all.

Most launch games were/are being developed on very early SDKs.

Those whose games are coming out in spring 2014 and later down the road get the benefit of switching over to a better SDK.

Link to comment
Share on other sites

I will agree that in order for PS4 users to get the extra benefit they will have to pay a premium (SSDs are fairly expensive in comparison to HDDs). But size-wise you are not limited by an SSD: the standard hard drive in both the PS4 and Xbox One is only 500GB, and 1TB SSDs - double that size - have been on sale to average consumers for a few months now (while 2TB SSDs are available to the not-so-average consumer at a ridiculous price).

- SSDs are also increasing in size, with a bigger one every few months, so the gap will only grow over time.

 

I was under the impression that the OS footprint expands over time, which is why both Sony and Microsoft reserved more RAM for OS operations than they needed, to allow for future updates and OS additions. The thing that changes over time is the reserved RAM: as they get closer to finalising the console OS, they give more RAM to games, since the OS size becomes more definite.

The OS footprint will get better as they clean it up over time. It's the app footprint that gets bigger (as the apps update and get improved)

Link to comment
Share on other sites

Digital Foundry: There's concern that custom hardware may not be utilised in multi-platform games but I'm assuming that hardware-accelerated functions would be integrated into middlewares and would see wide utilisation.

Nick Baker: Yeah, Andrew can talk about the middleware point but some of these things are just reserved for the system to do things like Kinect processing. These are system services we provide. Part of that processing is dedicated to the Kinect.

Andrew Goossen: So a lot of what we've designed for the system and the system reservation is to offload a lot of the work from the title and onto the system. You have to keep in mind that this is doing a bunch of work that is actually on behalf of the title. We're taking on the voice recognition mode in our system reservations whereas other platforms will have that as code that developers will have to link in and pay for out of their budget. Same thing with Kinect and most of our NUI [Natural User Interface] features are provided free for the games - also the Game DVR.

Link to comment
Share on other sites

I will agree that in order for PS4 users to get the extra benefit they will have to pay a premium (SSDs are fairly expensive in comparison to HDDs). But size-wise you are not limited by an SSD: the standard hard drive in both the PS4 and Xbox One is only 500GB, and 1TB SSDs - double that size - have been on sale to average consumers for a few months now (while 2TB SSDs are available to the not-so-average consumer at a ridiculous price).

- SSDs are also increasing in size, with a bigger one every few months, so the gap will only grow over time.

 

I was under the impression that the OS footprint expands over time, which is why both Sony and Microsoft reserved more RAM for OS operations than they needed, to allow for future updates and OS additions. The thing that changes over time is the reserved RAM: as they get closer to finalising the console OS, they give more RAM to games, since the OS size becomes more definite.

 

 

Yes, you can get large SSDs, but you're still going to be limited vs standard hard drives. You can get 2.5" HDDs with large capacities for much less - and I haven't even mentioned the option of using an external 3.5", which allows for even more capacity at a cheaper price. 2.5" HDDs are growing in capacity as quickly as SSDs are at this point.

 

But hey, I would personally love to buy a 2TB SSD and use that; it's just not in my price range.

 

As far as the OS footprint goes, does anyone have any info on whether the OS size grew over time for the PS3 and Xbox 360? I just assumed its size shrank, or at least didn't grow. Heck, MS has even been able to shrink the size of Windows via various updates over time, cutting out non-essential pieces or replacing parts with something that takes up less space.

Link to comment
Share on other sites

I will agree that in order for PS4 users to get the extra benefit they will have to pay a premium (SSDs are fairly expensive in comparison to HDDs). But size-wise you are not limited by an SSD: the standard hard drive in both the PS4 and Xbox One is only 500GB, and 1TB SSDs - double that size - have been on sale to average consumers for a few months now (while 2TB SSDs are available to the not-so-average consumer at a ridiculous price).

- SSDs are also increasing in size, with a bigger one every few months, so the gap will only grow over time.

 

I was under the impression that the OS footprint expands over time, which is why both Sony and Microsoft reserved more RAM for OS operations than they needed, to allow for future updates and OS additions. The thing that changes over time is the reserved RAM: as they get closer to finalising the console OS, they give more RAM to games, since the OS size becomes more definite.

You don't really need a full-blown SSD in the PS4 to match the XB1 - more a hybrid drive, provided the PS4 OS is smart enough to use it (which is asking too much given Sony's abysmal track record).

 

The article was a good read, but theoretical numbers are a long way from practical ones. So far the performance of the games, from what we can tell, has been less than stellar. Time will tell if the One's architecture is really as good as it needs to be.

Same thing applies to the PS4 too. It has a spec advantage (on paper at least), but none of its launch games are impressive yet. One of the most visually impressive games, The Order, is not going to be 1080p, and it's not even a launch title.

The simple answer for both consoles is that we don't know how they will look 2-5 years down the line.

Link to comment
Share on other sites

You don't really need a full-blown SSD in the PS4 to match the XB1 - more a hybrid drive, provided the PS4 OS is smart enough to use it (which is asking too much given Sony's abysmal track record).

 

Same thing applies to the PS4 too. It has a spec advantage (on paper at least), but none of its launch games are impressive yet. One of the most visually impressive games, The Order, is not going to be 1080p, and it's not even a launch title.

The simple answer for both consoles is that we don't know how they will look 2-5 years down the line.

 

 

This time around I don't think we will see massive gains like we saw last generation. The reason is that the hardware is very similar to existing PCs - there is no Cell or PowerPC architecture. There will be some gains as developers get to know the consoles.

Link to comment
Share on other sites

Interesting read. I know Sony said their plan for the PS4 was to reduce or eliminate as many bottlenecks as possible - their whole thing was to keep the pipes flowing, so to speak. They wanted the CPU to be able to read/write to memory as fast as the memory can take it, and the same goes for HDD and GPU access. It should be interesting to see how this all plays out.

Personally I have no want for an XB1; however, a guy down the street from me is all over it (has one pre-ordered). He tried to tell me how it was superior to the PS4, but I told him how it was weaker than the PS4 and how things like Kinect are pointless for me. We ended up agreeing the consoles are pretty close to the same, and that it would be sick if we could both play online on the same game servers (cross-console).

Link to comment
Share on other sites

This topic is now closed to further replies.