Xbox One Silicon Talk



The backtracking from the tech bloggers/press has begun.

 

http://www.extremetech.com/gaming/16...d-by-microsoft

 

AMD-APU-Diagram.png

 

 

XBO_diagram_WM.jpg

 

 


The Xbox One SoC appears to be implemented like an enormous variant of the Llano/Piledriver architecture we described for the PS4. One of our theories was that the chip would use the same "Onion" and "Garlic" buses. That appears to be exactly what Microsoft did.

 

Here are the important points, for comparison's sake. The CPU cache block attaches to the GPU MMU, which drives the entire graphics core and video engine. Of particular interest for our purposes is this bit: "CPU, GPU, special processors, and I/O share memory via host-guest MMUs and synchronized page tables." If Microsoft is using synchronized page tables, this strongly suggests that the Xbox One supports HSA/hUMA and that we were mistaken in our assertion to the contrary. Mea culpa.

 


You can see the Onion and Garlic buses represented in both AMD's diagram and the Microsoft image above. The GPU has a non-cache-coherent bus connection to the DDR3 memory pool and a cache-coherent bus attached to the CPU. Bandwidth to main memory is 68GB/s using 4×64 DDR3 links or 36GB/s if passed through the cache coherent interface. Cache coherency is always slower than non-coherent access, so the discrepancy makes sense.

 

 


The big picture takeaway from this is that the Xbox One probably is HSA capable, and the underlying architecture is very similar to a super-charged APU with much higher internal bandwidth than a normal AMD chip. That's a non-trivial difference: the 68GB/s of bandwidth devoted to Jaguar in the Xbox One dwarfs the quad-channel DDR3-1600 bandwidth that ships in an Intel X79 motherboard. For all the debates over the Xbox One's competitive positioning against the PS4, this should be an interesting micro-architecture in its own right. There are still questions regarding the ESRAM cache: breaking it into four 8MB chunks is interesting, but doesn't tell us much about how those pieces will be used. If the cache really is 1024 bits wide, and the developers can make suitable use of it, then the Xbox One's performance might surprise us.
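
As a quick sanity check, that 68GB/s figure falls straight out of the memory math, assuming the commonly reported DDR3-2133 speed grade (the speed grade is an assumption here; the quote above only says 4×64 DDR3 links):

# Back-of-the-envelope check of the 68GB/s main-memory figure (assumes DDR3-2133,
# i.e. 2133 mega-transfers per second per pin, across four 64-bit channels).
transfers_per_sec = 2133e6              # DDR3-2133 data rate per pin
bus_width_bytes = 4 * 64 // 8           # four 64-bit channels = 32 bytes per transfer
bandwidth_gb_s = transfers_per_sec * bus_width_bytes / 1e9
print(f"Peak DDR3 bandwidth: {bandwidth_gb_s:.1f} GB/s")   # ~68.3 GB/s
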
Link to comment
Share on other sites

Did you guys read that correctly? The ESRAM may be broken up into 8-megabyte chunks.

 

Therefore the bandwidth for one 8MB chunk is 204GB/s, but there are actually four of these 8MB ESRAM chunks: 204GB/s x 4 = 816GB/s.

 

:rofl: :rofl: :rofl: :rofl: :rofl: :rofl: :rofl: :rofl: :rofl: :rofl:

Link to comment
Share on other sites

These comments are pretty interesting:

 


"Need For Speed Rivals will feature better graphics on one next-gen console than the other, Ghost Games? executive producer Marcus Nilsson has suggested ? but refused to clarify which.

?What we?re seeing with the consoles are actually that they are a little bit more powerful than we thought for a really long time ? ESPECIALLY ONE OF THEM, but I?m not going to tell you which one," Nilsson told VideoGamer.com at Gamescom earlier today.

?And that makes me really happy. But in reality, I think we?re going to have both those consoles pretty much on parity ? maybe one sticking up a little bit. And I think that one will look as good as the PC.?"

 

 

:laugh:

Link to comment
Share on other sites

Well, it's clear that MS did their homework when designing this hardware. All they need now is for developers to take advantage of it. Ultimately, we need to see good-looking games from both consoles to show off the hardware they have chosen.

Link to comment
Share on other sites

It has been known since the announcement that the X1 had unified memory, so why the hell are people questioning it now?

 

Also, the articles you posted, vcfan, honestly don't surprise me. The harmony of software and hardware in a console means much more than baseless numbers, as people who actually have an idea of what they're talking about have been saying. The use of the cloud is also going to create a sizeable difference between the consoles; I recently read an article which shows EA are evaluating and figuring out how to use it in their games, both graphically and logically. If they don't implement the same features on the PS4, which they'd have to fork out for, you're really going to see a difference in the quality of the experience each console provides.

 

Bring on the next-gen.

Link to comment
Share on other sites

Breaking up the ESRAM into 8MB chunks is interesting. I dunno why they'd do it that way, but I suppose it gives developers more control in the end over what to feed into it and how much; instead of it all being one big 32MB chunk of something, it could be broken down into parts holding different data.

 

Unless I'm thinking of it wrong?

Link to comment
Share on other sites

Breaking up the ESRAM into 8MB chunks is interesting. I dunno why they'd do it that way, but I suppose it gives developers more control in the end over what to feed into it and how much; instead of it all being one big 32MB chunk of something, it could be broken down into parts holding different data.

 

Unless I'm thinking of it wrong?

 

Well, I'm no expert on this stuff, but I believe there is a performance advantage when you have pools of storage running in parallel vs one single, larger chunk.

 

Sort of like why running RAM in a PC as a matched pair provides a performance boost over a single stick: dual channel vs single channel.
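
Here's a toy sketch of that idea; the bank count matches the four 8MB chunks, but the line size and addresses are made up purely for illustration and aren't from any real Xbox One documentation:

# Toy illustration: interleaving addresses across four parallel banks spreads a
# sequential stream of accesses over all banks, so they can be serviced in
# parallel (same idea as dual-channel vs single-channel RAM).
NUM_BANKS = 4           # four 8MB pools instead of one 32MB pool
LINE_BYTES = 64         # assumed access granularity (illustrative only)

def bank_for(address):
    # Pick a bank from the low-order line-address bits.
    return (address // LINE_BYTES) % NUM_BANKS

schedule = {}
for i in range(8):                      # eight consecutive line-sized accesses
    addr = i * LINE_BYTES
    schedule.setdefault(bank_for(addr), []).append(hex(addr))

for bank in sorted(schedule):
    print(f"bank {bank}: {schedule[bank]}")
# Each bank ends up with 2 of the 8 accesses, so up to 4 can proceed at once
# instead of all 8 queuing behind a single pool.
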

Link to comment
Share on other sites

One of my bosses had to do processor design as one of his classes in college; he is gonna take a look at the documents and break it down for me tomorrow...

Hopefully I can fully grasp this..

Link to comment
Share on other sites

816GB/sec of bandwidth in the ESRAM... PS4 say whaaat :D Pure speculation, but it would be funny if true :D

 

Hang on, so if I understand this right: the SoC has quad-channel DDR3 memory which can be used as a coherent cache for things like the audio processor, Ethernet and CPU tasks that the GPU obviously doesn't need to touch, but it can also (maybe) be used as hUMA, where the GPU and CPU can share the same processes and data when needed. So CPU tasks benefit from low-latency DDR3, and when it's running games, data is fed to the GPU via unified memory access. But then you also basically have quad-channel eSRAM, each section having a 256-bit data bus with a peak transfer rate of 204GB/sec and a minimum of 109GB/sec. So that's like a 1024-bit memory bus with a minimum transfer rate of 436GB/sec and a maximum of 816GB/sec to feed the GPU (the arithmetic is spelled out in the sketch at the end of this post).

 

Tile-based rendering: I think it has partially resident textures too, so graphics data can be fed into the GPU at 436GB/sec at the very least. Obviously it's speculation till we see the boxes in action, but it's going to destroy the PS4.

 

I think the DDR3 is there as a low-latency link for when you're watching films, TV, messing about with the Kinect etc., so everything is as fast as possible. Should pay off for them.
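
To spell out the arithmetic above (and to be clear, treating 109/204GB/sec as per-chunk figures is the speculative part; the totals only follow if that reading is right):

# Reproduces the arithmetic in this post, strictly under its own assumption that
# the 109GB/s (min) and 204GB/s (peak) figures apply to EACH 8MB eSRAM chunk.
chunks = 4
min_per_chunk = 109    # GB/s, per-chunk reading (speculative)
peak_per_chunk = 204   # GB/s, per-chunk reading (speculative)
print(f"min total : {chunks * min_per_chunk} GB/s")    # 436 GB/s
print(f"peak total: {chunks * peak_per_chunk} GB/s")   # 816 GB/s
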

Link to comment
Share on other sites

Well, if you check Google and look at what actually gives a graphics card more power, you will note that there is a limit to how much increasing memory frequency can actually do. 816GB/sec seems highly unrealistic to me, to be honest, and there is also the fact that you still have to load objects into that memory. Programming on the PS4 will become quite easy indeed, while on the Xbox it may require fine tuning to obtain the same, or better, result, but I seriously have my doubts.

Link to comment
Share on other sites

Well, if you check Google and look at what actually gives a graphics card more power, you will note that there is a limit to how much increasing memory frequency can actually do. 816GB/sec seems highly unrealistic to me, to be honest, and there is also the fact that you still have to load objects into that memory. Programming on the PS4 will become quite easy indeed, while on the Xbox it may require fine tuning to obtain the same, or better, result, but I seriously have my doubts.

 

You're missing ONE important clue there

 

Sony doesn't provide a high-level optimized coding language for the graphics; instead, since they don't have the experience or competence to make one, they give developers low-level access, which is much harder to use and requires devs to code for the hardware themselves and figure out how to optimize their code for it. Most or all third-party devs will never do any optimizing; that will be left to first-party devs. Eventually I guess Sony will do what they did with the PS3 and release a brand new SDK and coding toolkit for the PS4, plus a high-level optimized language for it, four or so years down the line when they get around to making it, and you'll see another bump in graphics from third-party devs then.

 

MS has their Xbox One-optimized version of DirectX. The whole SDK and language is high level and easy to use; on top of that, it's already pre-optimized for the hardware, so developers don't need to worry about special programming tricks to make use of the ESRAM and its four channels, the four channels of DDR3 RAM, the 15 special processors and all this. The SDK and DirectX do all this for them, and for some of it there are ready-made functions they can just drop into the code where appropriate.

Link to comment
Share on other sites

Well, if you check Google and look at what actually gives a graphics card more power, you will note that there is a limit to how much increasing memory frequency can actually do. 816GB/sec seems highly unrealistic to me, to be honest, and there is also the fact that you still have to load objects into that memory. Programming on the PS4 will become quite easy indeed, while on the Xbox it may require fine tuning to obtain the same, or better, result, but I seriously have my doubts.

I really don't know where people like you pull this conclusion from; it's the other way round on a massive scale. The X1's games will be highly optimized out of the box, and we're going to see more game advancement not in the local optimisation of the code but in the further understanding of the cloud. Check the X1 architect panel.

 

The X1 is the only console so far which has been able to provide a 1080p/60fps experience on real retail hardware. I'm getting more worried about the PS4; I voiced this concern at the reveal, when all the games were showing a certain stutter. That suggests rushing and non-final hardware.

Link to comment
Share on other sites

I really don't know where people like you pull this conclusion from; it's the other way round on a massive scale. The X1's games will be highly optimized out of the box, and we're going to see more game advancement not in the local optimisation of the code but in the further understanding of the cloud. Check the X1 architect panel.

 

The X1 is the only console so far which has been able to provide a 1080p/60fps experience on real retail hardware. I'm getting more worried about the PS4; I voiced this concern at the reveal, when all the games were showing a certain stutter. That suggests rushing and non-final hardware.

Well, let's just say that for the moment I think I'm more informed about GPUs, CPUs and APUs (especially on the AMD side, not so much on Intel), and I've also watched pretty much every console all the way from the Atary 2600, so you could say I have some sort of intuition looking at all the history so far; I'm also a software developer myself (although this is more for research than for production or gaming code). Please note that intuition != prediction, hence I accept that I *might* be wrong, but at the same time my own judgement tilts to one side regarding performance, and it's not the X1.

 

Edit: Please note that, if anything, I don't care about consoles at all at the moment; considering all the PC ports that surged because there was no single console winner in the past generation, let's just say consoles are very low-spec compared to my "humble" rig. What I'm more interested in, though, is the consequences of using a PC-like architecture, because this will reflect on the ports; more specifically, if you see my signature you will note that I have pretty much a PC that matches the hardware of said consoles, loosely speaking.

Link to comment
Share on other sites

I really don't know where people like you pull this conclusion from; it's the other way round on a massive scale. The X1's games will be highly optimized out of the box, and we're going to see more game advancement not in the local optimisation of the code but in the further understanding of the cloud. Check the X1 architect panel.

 

The X1 is the only console so far which has been able to provide a 1080p/60fps experience on real retail hardware. I'm getting more worried about the PS4; I voiced this concern at the reveal, when all the games were showing a certain stutter. That suggests rushing and non-final hardware.

People want the X1 to not be as good as it's going to be.

Developers (AAA down to indies) having access to every nook and cranny of the hardware, plus MS's highly touted SDKs, should make for some amazing games on the Xbox One.

I am 100% sure that the X1 is going to grab a TON of attention once it comes out, and a lot of journalists will have to backtrack on some of their views of the X1.

MS should do one more event around late September (I think that's the time around TGS), and just show the X1 and how it all works from an end user perspective. Just to bring it all in, and show the experience people will have on day one. Full Kinect gaming, voice, and gestures... To put a nice shiny wrapper on it for release...

Link to comment
Share on other sites

Well, let's just say that for the moment I think I'm more informed about GPUs, CPUs and APUs (especially on the AMD side, not so much on Intel), and I've also watched pretty much every console all the way from the Atary 2600, so you could say I have some sort of intuition looking at all the history so far; I'm also a software developer myself (although this is more for research than for production or gaming code). Please note that intuition != prediction, hence I accept that I *might* be wrong, but at the same time my own judgement tilts to one side regarding performance, and it's not the X1.

 

Watching so closely that you can't spell 'Atari'?

 

Also, we're supposed to believe your 'intuition' over our knowledge and the documents in this thread? Come on man, you're living in dream world number one.

Link to comment
Share on other sites

Well, let's just say that for the moment I think I'm more informed about GPUs, CPUs and APUs (especially on the AMD side, not so much on Intel), and I've also watched pretty much every console all the way from the Atary 2600, so you could say I have some sort of intuition looking at all the history so far; I'm also a software developer myself (although this is more for research than for production or gaming code). Please note that intuition != prediction, hence I accept that I *might* be wrong, but at the same time my own judgement tilts to one side regarding performance, and it's not the X1.

 

Edit: Please note that, if anything, I don't care about consoles at all at the moment; considering all the PC ports that surged because there was no single console winner in the past generation, let's just say consoles are very low-spec compared to my "humble" rig. What I'm more interested in, though, is the consequences of using a PC-like architecture, because this will reflect on the ports; more specifically, if you see my signature you will note that I have pretty much a PC that matches the hardware of said consoles, loosely speaking.

I've been a software developer for 9 years and have released open-source projects, including game frameworks.

 

If people only go by hard numbers, then in terms of the CPU the 360 and PS3 can be seen as faster than the X1 and the PS4. See what I mean? The harmony of software and hardware with the architecture means far more to a console than sheer numbers, because it is a fixed platform. This doesn't matter in the PC world, because games are optimised in a general sense and not down to individual parts and schematics. In regards to actually putting your game on each platform, the X1 provides far more libraries and optimised APIs to push the hardware to its limit. For instance, on the PS4 it will be more about how the developers line their tasks up to be handled by the GPU/CPU, whereas on the X1 it is more about how things like DirectX work with the GPU, which is heavily optimised. So when Sony say "code to the metal", it's not necessarily a good thing.

 

I'm not trying to bash the PS4; with that extra GPU oomph, you are going to see some beautiful first-party games. Compared to the X1, though, that will take much longer as there's a steeper learning curve.

Link to comment
Share on other sites

Watching so closely that you can't spell 'Atari'?

 

Also, we're supposed to believe your 'intuition' over our knowledge and the documents in this thread? Come on man, you're living in dream world number one.

Come on, guy, is that the best you can do? Pointing out that I typed a 'y' instead of an 'i'? I don't think I need to show my rather long history with games, but I'll tell you one thing: I got through the hell of Dark Chambers back when I couldn't even speak my native language properly.

Link to comment
Share on other sites

Come on, guy, is that the best you can do? Pointing out that I typed a 'y' instead of an 'i'?

 

Maybe you missed the rest of the post if you think that's all I pointed out.

 

My apologies, I didn't realise you were not a native speaker. I'm sure you speak better English than I speak your language, but the rest of my points remain.

Link to comment
Share on other sites

I've been a software developer for 9 years and have released open-source projects, including game frameworks.

 

If people only go by hard numbers, then in terms of the CPU the 360 and PS3 can be seen as faster than the X1 and the PS4. See what I mean? The harmony of software and hardware with the architecture means far more to a console than sheer numbers, because it is a fixed platform. This doesn't matter in the PC world, because games are optimised in a general sense and not down to individual parts and schematics. In regards to actually putting your game on each platform, the X1 provides far more libraries and optimised APIs to push the hardware to its limit. For instance, on the PS4 it will be more about how the developers line their tasks up to be handled by the GPU/CPU, whereas on the X1 it is more about how things like DirectX work with the GPU, which is heavily optimised. So when Sony say "code to the metal", it's not necessarily a good thing.

 

I'm not trying to bash the PS4; with that extra GPU oomph, you are going to see some beautiful first-party games. Compared to the X1, though, that will take much longer as there's a steeper learning curve.

 

Hmmm... if DirectX were that truly optimized, I would already be playing Crysis 3 at 1080p with everything enabled (except perhaps antialiasing) at 60fps on my current rig, which by the way uses Windows 8 and the latest 13.8 Beta 2 drivers on my overclocked Radeon 7950; that isn't the case, at least not a constant 60fps. And besides, isn't OpenGL not that far behind? One good thing about software being software is that it can be improved while the hardware remains as it is... at least on consoles, unless a revision changes that, but that would be a bit of cheating.

Link to comment
Share on other sites

Hmmm... if DirectX were that truly optimized, I would already be playing Crysis 3 at 1080p with everything enabled (except perhaps antialiasing) at 60fps on my current rig, which by the way uses Windows 8 and the latest 13.8 Beta 2 drivers on my overclocked Radeon 7950; that isn't the case. And besides, isn't OpenGL not that far behind? One good thing about software being software is that it can be improved while the hardware remains as it is... at least on consoles, unless a revision changes that, but that would be a bit of cheating.

DirectX is a very competent platform on the PC, but as I said, it can only be generally optimized because it has to cater for a huge number of hardware variations. This means a ton of extra code and extra weight in the platform, which slows it down that bit more. On the other hand, when you have a box which is a fixed platform that you know is never going to change, you can really trim down the code and make it fully functional but directly optimized for the exact way that box works, so it performs at its best.

 

I'm not 100% sure about the PS4, but if it's anything like the PS3, OpenGL there was more of a simple wrapper to allow basic functionality and features with the hardware. Towards the middle of the generation, developers actually stopped using it and looked down on it because it was a clear bottleneck in the system.

Link to comment
Share on other sites

Well, let's just say that for the moment I think I'm more informed about GPUs, CPUs and APUs (especially on the AMD side, not so much on Intel)

 

Yeah well, the thing is of course that the Xbox One APU is so "custom" it's no longer much of an AMD Jaguar CPU. There are parts of a Jaguar in there, but it's a very small part of what is a highly custom-made APU, one that dwarfs the expectations of even the most hardcore CPU and GPU techies at tech magazines and blogs like Ars and ExtremeTech.

 

As they said themselves, it's not like ANYTHING else, and in many ways it even blows Intel processors off the field.

Link to comment
Share on other sites

Hmmm... if DirectX were that truly optimized, I would already be playing Crysis 3 at 1080p with everything enabled (except perhaps antialiasing) at 60fps on my current rig, which by the way uses Windows 8 and the latest 13.8 Beta 2 drivers on my overclocked Radeon 7950; that isn't the case, at least not a constant 60fps. And besides, isn't OpenGL not that far behind? One good thing about software being software is that it can be improved while the hardware remains as it is... at least on consoles, unless a revision changes that, but that would be a bit of cheating.

 

 

I think you're confusing an optimized framework on a multiplatform solution with a fully optimized, single-platform DirectX-based solution.

 

One of them is optimized but still has to support an infinite number of configurations; the other can deliver the same quality and speed as the first with a quarter of the hardware, thanks to being single-platform.

Link to comment
Share on other sites

The CPUs connect to four 64b wide 2GB DDR3-2133 channels for a grand total of 68GB/sec bandwidth. Do note that this number exactly matches the width of a single on-die memory block. One interesting thing to note is that the speed of the CPU MMU's coherent link to the DRAM controller is only 30GBps, something that strongly suggests that Microsoft sticks with Jaguar's half-clock speed NB. If the NB to DRAM controller is 256b/32B wide, that would mean it runs at about 938MHz, 1.88GHz if it is 128b/16B wide.

SemiAccurate would be very surprised if it was 128b wide, wires are cheap, power saving areas not. Why is this important? Unless Microsoft's XBox One architects are masochists that enjoy doing needless and annoying work they would not have reinvented the wheel and put an arbitrarily clockable asynchronous interface between the NB and the CPU cores/L2s. Added complexity, lowered performance, and die penalty for absolutely no useful upside is not a good architectural decision. That means the XBox One's 8 Jaguar cores are clocked at ~1.9GHz, something that wasn't announced at Hot Chips. Now you know.

 

 

 

http://semiaccurate.com/2013/08/29/a-deep-dive-into-microsofts-xbox-ones-architecture/

 

That's a 20% speed advantage. Also add the roughly one core's worth of work that SHAPE offloads, and that's potentially 33% more CPU computing power right there. Considering there are over a dozen custom chips in this thing, the computing power of the Xbox One will be unrivaled.
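
For anyone who wants to check the numbers, here is the SemiAccurate arithmetic spelled out. The 30GB/s coherent-link figure and the half-clock NB assumption come from the quoted article; the 1.6GHz baseline used for the percentage is just the commonly assumed pre-Hot Chips Jaguar clock, so treat that as my assumption:

# NB clock implied by a 30GB/s coherent link, for the two link widths SemiAccurate considers.
coherent_link_bytes_per_sec = 30e9
for width_bytes in (32, 16):                       # 256-bit vs 128-bit NB-to-DRAM link
    nb_clock_mhz = coherent_link_bytes_per_sec / width_bytes / 1e6
    print(f"{width_bytes * 8}-bit link -> NB clock ~{nb_clock_mhz:.0f} MHz")
# 256-bit -> ~938 MHz, 128-bit -> ~1875 MHz

nb_clock_hz = 30e9 / 32                            # SemiAccurate's preferred 256-bit case
core_clock_hz = 2 * nb_clock_hz                    # Jaguar's NB runs at half the core clock
print(f"Implied core clock: ~{core_clock_hz / 1e9:.2f} GHz")            # ~1.88 GHz
print(f"~1.9 GHz vs 1.6 GHz baseline: +{(1.9 / 1.6 - 1) * 100:.0f}%")   # ~19%
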

Link to comment
Share on other sites

This topic is now closed to further replies.