Digital Foundry vs. the Xbox One architects



http://www.eurogamer.net/articles/digitalfoundry-vs-the-xbox-one-architects

Long article, read at source.

Two months away from the release of the next generation consoles, many have already made up their minds about which machine offers more gaming power before a single game has been released. Compare basic graphics and memory bandwidth specs side-by-side and it looks like no contest - PlayStation 4 comprehensively bests Xbox One to such a degree that sensible discussion of the respective merits of both consoles seems impossible. They're using the same core AMD technologies, only Sony has faster memory and a much larger graphics chip. But is it really that simple?

In the wake of stories from unnamed sources suggesting that PS4 has a significant advantage over its Xbox counterpart, Microsoft wanted to set the record straight. Last Tuesday, Digital Foundry dialled into a conference call to talk with two key technical personnel behind the Xbox One project - passionate engineers who wanted the opportunity to put their story across in a deep-dive technical discussion where all the controversies could be addressed. Within moments of the conversation starting, it quickly became clear that balance would be the theme.

Baker is keen to tackle the misconception that the team has created a design that cannot access its ESRAM and DDR3 memory pools simultaneously. Critics say that they're adding the available bandwidths together to inflate their figures and that this simply isn't possible in a real-life scenario.

"You can think of the ESRAM and the DDR3 as making up eight total memory controllers, so there are four external memory controllers (which are 64-bit) which go to the DDR3 and then there are four internal memory controllers that are 256-bit that go to the ESRAM. These are all connected via a crossbar and so in fact it will be true that you can go directly, simultaneously to DRAM and ESRAM," he explains.

 

"On the SoC, there are many parallel engines - some of those are more like CPU cores or DSP cores. How we count to fifteen: [we have] eight inside the audio block, four move engines, one video encode, one video decode and one video compositor/resizer," says Nick Baker.

"The audio block is completely unique. That was designed by us in-house. It's based on four tensilica DSP cores and several programmable processing engines. We break it up as one core running control, two cores running a lot of vector code for speech and one for general purpose DSP. We couple that with sample rate conversion, filtering, mixing, equalisation, dynamic range compensation then also the XMA audio block. The goal was to run 512 simultaneous voices for game audio as well as being able to do speech pre-processing for Kinect."

"Every one of the Xbox One dev kits actually has 14 CUs on the silicon. Two of those CUs are reserved for redundancy in manufacturing, but we could go and do the experiment - if we were actually at 14 CUs what kind of performance benefit would we get versus 12? And if we raised the GPU clock what sort of performance advantage would we get? And we actually saw on the launch titles - we looked at a lot of titles in a lot of depth - we found that going to 14 CUs wasn't as effective as the 6.6 per cent clock upgrade that we did."

p.s. Let's drink every time they say "balance".


Such a good article on the architecture; it's answered a lot of my questions regarding some bits of the X1. Dat audio chip, for instance.

 

The clock speed snippet is insightful, and actually having 14 CUs in the silicon while leaving some for manufacturing redundancy, given the size of the APU, is a good engineering choice. Also, hearing an engineer expand on resolution and actually mention "dynamic changes" to the resolution suggests games could hop down from 1080p depending on frame-rate dips. I wonder if KI or Ryse do that?
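No idea whether either game does, but the basic idea behind "dynamic changes" to resolution is simple enough. A hypothetical sketch of a frame-time-driven resolution controller (the thresholds, scales and frame times below are all invented for illustration, not anything from the article):

```python
# Hypothetical dynamic-resolution controller: drop the render resolution when
# frame time blows the budget, creep back up when there's headroom.
# Every constant here is invented purely for illustration.

TARGET_MS = 33.3                    # 30fps frame budget
SCALES    = [1.0, 0.9, 0.8, 0.7]    # fractions of 1080 vertical resolution

def next_scale(current_index, last_frame_ms):
    if last_frame_ms > TARGET_MS * 1.05 and current_index < len(SCALES) - 1:
        return current_index + 1    # over budget: render fewer pixels
    if last_frame_ms < TARGET_MS * 0.85 and current_index > 0:
        return current_index - 1    # comfortable headroom: raise resolution
    return current_index

idx = 0
for frame_ms in [30.1, 36.2, 38.0, 31.5, 27.0, 25.4]:   # fake frame times
    idx = next_scale(idx, frame_ms)
    print(f"frame {frame_ms:5.1f}ms -> render at {int(1080 * SCALES[idx])}p")
```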

 

Having ESRAM and DRAM in the same page tables is also impressive: it means assets can be moved around rapidly between DDR3 and ESRAM at high bandwidth to counteract the smaller size of the ESRAM. That answers a lot of the hate behind the "only 32MB" remarks.
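The 32MB figure makes more sense once you work out what a 1080p frame's working set actually weighs. A rough sketch, with the render target formats being my own assumptions rather than anything from the article:

```python
# Rough sizes of typical 1080p render targets, showing why 32MB is tight and
# why shuffling surfaces between DDR3 and ESRAM matters. Formats are assumed.

WIDTH, HEIGHT = 1920, 1080
MB = 1024 * 1024

def target_mb(bytes_per_pixel):
    return WIDTH * HEIGHT * bytes_per_pixel / MB

frame = {
    "colour (RGBA8)":        target_mb(4),
    "depth/stencil (D24S8)": target_mb(4),
    "G-buffer normals":      target_mb(4),
    "G-buffer albedo":       target_mb(4),
}

total = sum(frame.values())
for name, size in frame.items():
    print(f"{name:24s} {size:5.2f} MB")
print(f"{'total':24s} {total:5.2f} MB of a 32 MB ESRAM budget")
# ~31.6MB: even a fairly modest deferred setup nearly fills ESRAM on its own,
# so being able to page surfaces in and out of DDR3 quickly is what makes the
# small pool workable.
```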

 

The comments on N4G, DF and GAF around this, though, are already mind-blowingly dumb; people don't understand the concept and dismiss the whole topic of software and hardware "balance" as so-called PR speak (engineers use PR speak now?). Like they said, it's one of the most important aspects of a console: if an imbalance creates a bottleneck, that one bottleneck will hold back the whole system.

 

Really goes to show how much they've taken power/performance and cost into account.


The comments on N4G, DF and GAF around this, though, are already mind-blowingly dumb; people don't understand the concept and dismiss the whole topic of software and hardware "balance" as so-called PR speak (engineers use PR speak now?). Like they said, it's one of the most important aspects of a console: if an imbalance creates a bottleneck, that one bottleneck will hold back the whole system.

GAF is somehow turning their discussion about balance into "is MS saying PS4 is not a balanced system???". What a bunch.

GAF is a bad joke. I've said it all this time and will keep repeating it. They offer nothing to the gaming internet but BS these days, and I think many developers have gotten the picture too, as I see less and less of their presence there. MS should just steer clear as well and leave the cesspool to the brain-rotting fools.


Man those comments on the bottom of the webpage (not even going to bother going to GAF). 

 

MS, it seems, have put a fair bit of thought into their system (as Sony undoubtedly has). Sure, there may be a bit of damage control here (which system architect wouldn't want to defend their hard work?), but if I am to believe the commentators on that page, the X1 system architects are nothing more than incompetent PR shills who have designed a system far too convoluted and underpowered to work, rather than engineers who have thought hard about how to design a good system.


GAF is a bad joke. I've said it all this time and will keep repeating it. They offer nothing to the gaming internet but BS these days, and I think many developers have gotten the picture too, as I see less and less of their presence there. MS should just steer clear as well and leave the cesspool to the brain-rotting fools.

 

GAF is a trollboard. It's like gamer 4chan; well, it's more like PlayStation 4chan.


So let's set some things straight, according to what the engineer is saying:

 

1. Adding 2 CUs to the 12 doesn't really add anything. Upping the clock is much more beneficial, since it also speeds up other aspects of the GPU.

 

2. Sony's SDK docs also state their system is balanced for 14 CUs, although all of them can be used (which won't really add any benefit for graphics; we'll see why below. So the extra four will likely be used for GPGPU).

 

Microsoft's approach is to not have extra CUs for GPGPU, opting instead to use co-processors to offload work from the CPU and GPU.

 

It makes sense that there is a number of CUs that makes the system "balanced". We can see what "balanced" means if we look at the AMD GCN architecture documentation:

 

http://developer.amd.com/wordpress/media/2013/06/2620_final.pdf

 

We see that 12 CUs can consume a bandwidth of around 300GB/s (32 bytes per CU per clock), a little more than the memory of either console can provide. Adding more CUs doesn't do much graphically, since there isn't the memory bandwidth to feed the extra CUs. GPGPU doesn't have the same bandwidth requirements, because you throw numbers at the CUs and they crunch and crunch, so Sony adding 4 CUs will provide some GPGPU power, while Microsoft using offload chips will free up CPU/GPU resources for GPGPU. Different approach, same goal.
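The arithmetic behind that ~300GB/s figure, for anyone who wants to check it. The 32 bytes per CU per clock comes from the GCN deck linked above; the 800MHz clock is my assumption (the Xbox One ended up slightly higher at 853MHz):

```python
# Bandwidth the CUs can consume vs. what the memory systems can feed them.
# 32 bytes per CU per clock is from the AMD GCN presentation linked above;
# the 800MHz clock is an assumption (Xbox One ended up at 853MHz).

bytes_per_cu_per_clock = 32
clock_hz = 800e6

def cu_demand_gb_s(cus):
    return cus * bytes_per_cu_per_clock * clock_hz / 1e9

print(f"12 CUs can consume ~{cu_demand_gb_s(12):.0f} GB/s")  # ~307
print(f"18 CUs can consume ~{cu_demand_gb_s(18):.0f} GB/s")  # ~461
# Both consoles' memory tops out well under the 18-CU demand figure, which is
# the point above: past roughly a dozen CUs, graphics work starts waiting on
# memory rather than on ALUs, while GPGPU-style work is less sensitive to that.
```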

 

Now to the ESRAM: it's exactly as I predicted. The GPU works out of the ESRAM (but can also spill to the DDR3), since the four 8MB ESRAM blocks' buses are plugged directly into the GPU (as seen in the Xbox One SDK docs), and the move engines can shift data in and out of ESRAM from the DDR3 while the DDR3 remains free to serve other parts of the system simultaneously. Total bandwidth at any given time can be 204+GB/s ESRAM plus 68GB/s DDR3. As for concerns that ESRAM will be hard to use: the engineer states that the Xbox 360 was very easy to develop for, and it used a smaller chunk of eDRAM. ESRAM is an improvement over eDRAM, and developers have had plenty of experience with this approach for years.
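That "render out of ESRAM while the move engines stream the next chunk in from DDR3" pattern is basically double buffering. A toy sketch of why the overlap matters; the timings are invented purely for illustration and no real Xbox APIs are involved:

```python
# Toy model of overlapping move-engine copies with GPU work, versus doing them
# serially. All timings are invented purely to illustrate the pipelining idea.

copy_ms   = 1.0   # assumed time to DMA one tile of data DDR3 -> ESRAM
render_ms = 3.0   # assumed time for the GPU to render one tile out of ESRAM
tiles     = 8

serial    = tiles * (copy_ms + render_ms)
# Pipelined: copy tile 0 up front, then every later copy hides under a render
# (valid while copy_ms <= render_ms).
pipelined = copy_ms + tiles * render_ms

print(f"serial:    {serial:.1f} ms")     # 32.0 ms
print(f"pipelined: {pipelined:.1f} ms")  # 25.0 ms
# Because the move engines and the DDR3 bus run independently of the GPU's
# ESRAM traffic, the copies come almost for free as long as each one finishes
# before the GPU needs that tile.
```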

 

BEASTING


Well, there's too much crying going on about both systems now; I've done my fair share even though I'm not buying either of them. They both handle it differently, and it sounds like they needed the lower-latency RAM for Kinect too. But seeing as the Xbox's HDMI passthrough will allow a PS4 to be played through it, snapping both games together will make it a lot easier to see which one looks better and by how much.


I think after this they should just let the games speak for themselves. They've customised the inside of the SoC from the CPU to the GPU and so on, with co-processors and other things, to make sure any type of bottleneck is all but eliminated from the system. Without similar details about the PS4 SoC we don't know what custom work Sony has done, or to what extent. Maybe the fact they went with GDDR5, compared to what MS has done, means they've done little custom work on their SoC. I'm surprised they've been so quiet about this; if their advantage was as great as people make it out to be, I have no doubt Sony, of all people, would be talking it up till they were blue in the face.

 

Anyway, to me this just boils down to: the XB1 can do more with slower RAM thanks to the tweaks, while the PS4 does what it does by brute-forcing as much data as possible through GDDR5, much like what we have going on with PC graphics cards. I'm sure Sony has its own tweaks to handle GPGPU and multitasking, but I honestly don't think they've gone anywhere near the amount of customisation MS has done. The PS4 is probably much closer to stock PC CPU/GPU parts compared to the XB1.


Well if this is from an actual engineer for the X1, then I think his comments carry a lot of weight.

 

The fanboy response would be to dismiss his claims as "PR BS", which would be completely unfair. When Mark Cerny talks about the details of the PS4, people don't accuse him of sharing "PR BS", so why do that to an MS engineer? Like Mark, this guy has a passion for the work he has done, and he doesn't want misinformation out there creating a bad vibe around his project. Completely understandable.

 

 

It's a very interesting read. I'm still trying to digest what it means, to be honest. He certainly seems to be making the case that it's a lot closer than the article slamming the X1 hardware was trying to say.

 

In the end, what matters is whether we see enough good games on the X1. The PS4 is a powerful system, and so is the X1, so there is little excuse for either not to have good games come along. The PS4 has some sort of advantage, but we won't really know what that means until we start seeing games that demonstrate it. It could turn out like this gen, where it was ultimately a wash, or it could be something much bigger; I don't know at this point.


Well if this is from an actual engineer for the X1, then I think his comments carry a lot of weight.

 

The fanboy response would be to dismiss his claims as "PR BS", which would be completely unfair. When Mark Cerny talks about the details of the PS4, people don't accuse him of sharing "PR BS", so why do that to an MS engineer? Like Mark, this guy has a passion for the work he has done, and he doesn't want misinformation out there creating a bad vibe around his project. Completely understandable.

 

 

It's a very interesting read. I'm still trying to digest what it means, to be honest. He certainly seems to be making the case that it's a lot closer than the article slamming the X1 hardware was trying to say.

 

In the end, what matters is whether we see enough good games on the X1. The PS4 is a powerful system, and so is the X1, so there is little excuse for either not to have good games come along. The PS4 has some sort of advantage, but we won't really know what that means until we start seeing games that demonstrate it. It could turn out like this gen, where it was ultimately a wash, or it could be something much bigger; I don't know at this point.

 

I would agree. As engineers of a great system, it probably kills them to see boatloads of fanboys who don't actually know anything about the system rant and rave against the thing they created. I'm sure they enjoy being able to actually talk about it for once, and to dismiss them would be ridiculous.

 

Interesting article all around. It'll be more interesting to see what things look like when the consoles come out, though. I'm excited.


It's fairly sad to read the comments on this article.

 

You know a fanboy or otherwise disgruntled person is posting when they discredit everything the MS engineer says while crediting everything said in the article attacking the X1 hardware via anonymous sources.

 

Those guys are unwilling to hear two sides to these stories, unwilling to just take in the info we get from MS and Sony without making it a battle, something to deny simply because of the name attached to it. That happens too much among people who unfairly hate MS or Sony. Guess what? We are all trying to understand how two fairly complex consoles will work.

 

He made a point of talking about how easy it is for developers to make use of the hardware. He seemed to think it should be an easy transition thanks to the fact that they evolved the eDRAM setup from the 360, fixing the limitations it had, so experienced 360 developers should not have a hard time.

 

It's hard to follow all of the technical details, but after reading through this one, I've come to the conclusion that MS and Sony chose different routes to reach the same goal of overall performance. We now know more about MS's choices and path, while we don't know as much about Sony's specific technical details. Regardless, we know both have taken off-the-shelf hardware and customised it. The customisation is the part that seems to confuse people and create these wild claims one way or the other. It's also a part we may not be able to compare at all; we may just have to accept that they are different and let the games show us what each is capable of.

 

I think Sony has the raw spec advantage, but I'm still at a loss to know what that will mean in the real world. Plus, since we don't know how the custom work done by MS or Sony will swing the numbers, it's hard to say what will happen. Once games are out, maybe that will clear up.


"The audio block is completely unique. That was designed by us in-house. It's based on four tensilica DSP cores and several programmable processing engines. We break it up as one core running control, two cores running a lot of vector code for speech and one for general purpose DSP. We couple that with sample rate conversion, filtering, mixing, equalisation, dynamic range compensation then also the XMA audio block. The goal was to run 512 simultaneous voices for game audio as well as being able to do speech pre-processing for Kinect."

But to what extent will this hardware actually see utilisation, especially in cross-platform games?

"So a lot of what we've designed for the system and the system reservation is to offload a lot of the work from the title and onto the system. You have to keep in mind that this is doing a bunch of work that is actually on behalf of the title," says Andrew Goosen.

"We're taking on the voice recognition mode in our system reservations whereas other platforms will have that as code that developers will have to link in and pay out of from their budget. Same thing with Kinect and most of our NUI [Natural User Interface] features are provided free for the games - also the Game DVR."

Can someone help me understand this? Are they saying that developers can offload audio to a system level for their games? Basically a dedicated sound card to relieve stress on the CPU...

So the X1 is almost designed like a musical or a symphony: always the right tune, and never missing a beat... Watch Les Misérables, The Sound of Music or Fantasia, or listen to the Star Wars soundtrack, and close your eyes... It's like the team at Microsoft did that, and when they opened their eyes, the Xbox One was the final product...

