AnandTech: Xbox One vs. PS4 - Hardware comparison



What's interesting to me is the idea that the specs of the One really won't matter, depending on how much computing devs offload to the cloud. I read an article on Venturebeat that said that devs could offload AI processing, physics calculations, and even some rendering tasks to the cloud, and over time, the net raw processing power will increase, as MS replaces their servers.

If this is the case, then the One has a *huge* advantage over the PS4, which makes spec comparisons like this almost irrelevant. Who cares about the CPU/memory speed if there are massive datacenters that can perform computation tasks?
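
To make that idea concrete, here's a rough sketch of how that split could look from a developer's side: latency-tolerant work (say, long-range AI planning) gets submitted asynchronously and picked up whenever the answer comes back, while the frame-critical stuff stays on the box. Everything in it (the remote_ai_plan service, the numbers) is made up for illustration; it's not any real Xbox One API.

```python
# Hypothetical sketch of "cloud offload": frame-critical work stays on the
# console, latency-tolerant work is submitted asynchronously and picked up
# whenever the result arrives. None of these names are real Xbox One APIs.
from concurrent.futures import ThreadPoolExecutor
import time

def remote_ai_plan(world_snapshot):
    """Stand-in for a request to a cloud service (e.g. long-range pathfinding)."""
    time.sleep(0.2)  # pretend network + server time (several frames' worth)
    return {"paths": ["patrol_route_7"], "based_on": world_snapshot["tick"]}

cloud = ThreadPoolExecutor(max_workers=4)   # stands in for the datacenter
pending_plan = None
cached_plan = {"paths": ["idle"], "based_on": 0}

for tick in range(1, 301):                  # ~5 seconds' worth of 60 fps frames
    time.sleep(1 / 60)                      # stand-in for the real frame's work
    # Frame-critical work must happen locally, every frame:
    # simulate_physics(); render_frame()

    # Kick off a cloud job occasionally; never block the frame on it.
    if pending_plan is None and tick % 60 == 0:
        pending_plan = cloud.submit(remote_ai_plan, {"tick": tick})

    # Harvest the result only when it's ready (it may be many frames old).
    if pending_plan is not None and pending_plan.done():
        cached_plan = pending_plan.result()
        pending_plan = None

    # AI uses whatever the latest (possibly stale) cloud answer is.
    # apply_plan(cached_plan)

cloud.shutdown()
print("last plan was computed at tick", cached_plan["based_on"])
```

The catch, of course, is that anything that has to land in the next 16 ms can't take that trip.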

The only thing that might make a slight difference is the GPU - I'm surprised it's so much less powerful than the PS4.

Sony bought a company a few years ago that will be providing a similar service (Gaikai, if I recall?), and they've been working on something similar too. I'd still like to have a console that doesn't have to rely on data centres and servers; sometimes my internet is down, or I'm taking my console away for the weekend or down to a friend's, and what use is it then? I think Sony knew that and doesn't want to rely solely on this service, but I do think they'll release another version of the PS4 that's for streaming purposes only, so people who want that service can just buy a box that streams from their data centres (basically OnLive).


And yet, you're still ignoring the fact that we're still talking about fixed displays at fixed fill rates and fixed FPS, where once you achieve the rates you need, anything more is just wasted cycles. There is simply no denying this.

On PCs, yeah, you have people pushing six displays where you need the fill rate to run at asinine resolutions, but neither the ONE nor the PS4 is filling those roles.

Did you even click and read the link? None of the data there exceeded 1080p, and lots of it didn't exceed 60 fps (and in some cases, for the 7770, 30 fps).
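
Just to put numbers on the "fixed fill rate" point, here's the rough math for what a locked 1080p/60 output actually asks for, against the theoretical pixel fill rates of the two desktop cards everyone keeps comparing against (my ballpark figures, not from the article):

```python
# Back-of-envelope: pixels per second needed for a fixed 1080p/60 target,
# versus the theoretical pixel fill rates of the closest desktop cards.
target_px_per_sec = 1920 * 1080 * 60          # ~124 million pixels/s on screen
print(f"1080p60 output: {target_px_per_sec / 1e6:.0f} Mpixel/s")

# Theoretical fill rates (ROPs * core clock), desktop reference cards:
hd7770 = 16 * 1.000e9                          # 16 ROPs @ 1.00 GHz = 16.0 Gpixel/s
hd7850 = 32 * 0.860e9                          # 32 ROPs @ 0.86 GHz = 27.5 Gpixel/s

for name, rate in [("HD 7770", hd7770), ("HD 7850", hd7850)]:
    print(f"{name}: {rate / 1e9:.1f} Gpixel/s "
          f"(~{rate / target_px_per_sec:.0f}x the raw output rate)")
```

That multiplier is where overdraw, alpha blending, and extra passes go, which is exactly what the two sides of this argument disagree about.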


I would highly recommend that anyone viewing the fanboyish nonsense in this thread first take some time to research the basic function of a 3D graphics engine.

The bizarre focus on the framebuffer and quoting of buzzwords does not address the fact that there is far more data that needs to be moved between VRAM (or in this case, shared system/video RAM) and internal GPU caches (think L1/L2).

You'll thank yourselves in the future, and find you have a new appreciation of the games you play when you understand how they function. Plus, you'll see past all of the console war nonsense.

The PS4 and One will use a shared memory architecture; there is no "VRAM" to move data in or out of.


And yet, you're still ignoring the fact that we're still talking about fixed displays at fixed fill rates and fixed FPS, where once you achieve the rates you need, anything more is just wasted cycles. There is simply no denying this.

On PCs, yeah, you have people pushing six displays where you need the fill rate to run at asinine resolutions, but neither the ONE nor the PS4 is filling those roles.

Just look at the frame rates: the 7770 is barely pushing 30 FPS at 1080p, which means the PS4 will get crap ports, because no developer/publisher is going to be in a situation where one console can push 1080p at 60 FPS and the other 1080p at 30 FPS, or where one looks noticeably different than the other.

Having a fast cache doesn't make up for the difference in GPU performance.


True! It does have more shaders, but then again, at 1080p, will they be necessary? Nvidia has been able to optimize throughput without increasing shader counts... maybe AMD/MS has done some of that voodoo magic.

For example, my 6850 had, what, 960 shaders, but my Nvidia has 480 and smokes it.

Different tech between the AMD and NV cards, so the shader counts aren't comparable; on the new consoles the architecture is the same, so there they will have a big impact on graphics capability.
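
To illustrate why the raw counts mislead across vendors, here's the usual peak-throughput math (counts and clocks quoted from memory, so treat them as approximate); on GCN-vs-GCN the same math is why the CU gap matters:

```python
# Why shader counts alone aren't comparable across vendors: peak throughput
# is shaders * 2 FLOPs (FMA) * shader clock. Figures are approximate.
cards = {
    "Radeon HD 6850 (VLIW5)": (960, 0.775e9),   # 960 SPs @ 775 MHz
    "GeForce GTX 480":        (480, 1.401e9),   # 480 CUDA cores @ 1401 MHz hot clock
}
for name, (shaders, clock) in cards.items():
    gflops = shaders * 2 * clock / 1e9
    print(f"{name}: {shaders} shaders -> ~{gflops:.0f} GFLOPS")

# Half the "shaders" at roughly double the clock lands in the same ballpark,
# and real-world performance then depends on how well each architecture keeps
# those units fed. On the two consoles the architecture is identical (GCN),
# so the CU-count difference translates much more directly.
```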

And yet, you're still ignoring the fact that we're still talking about fixed displays at fixed fill rates and fixed FPS, where once you achieve the rates you need, anything more is just wasted cycles. There is simply no denying this.

On PCs, yeah, you have people pushing six displays where you need the fill rate to run at asinine resolutions, but neither the ONE nor the PS4 is filling those roles.

So you're telling me that there is no difference between a 7770 and a 7850 on my PC @1680? Frame rates are not stuck at 60, and when something intensive is displayed (or is badly optimised) they dip. Take a wild guess which console will suffer more.


Did you even click and read the link? None of the data there exceeded 1080p, and lots of it didn't exceed 60 fps (and in some cases, for the 7770, 30 fps).

The ONE GPU isn't a 7770; the TFLOP limit is a design limit, not the capability of the ROPs and shaders. The 7770 also has a 128-bit bus, not 192-bit (and ironically runs GDDR5 memory, lol).


Just look at the frame rates: the 7770 is barely pushing 30 FPS at 1080p, which means the PS4 will get crap ports, because no developer/publisher is going to be in a situation where one console can push 1080p at 60 FPS and the other 1080p at 30 FPS, or where one looks noticeably different than the other.

Having a fast cache doesn't make up for the difference in GPU performance.

Genuine question (I don't want to open those videos at work): Those 7770 videos, they are from regular PCs - right?


There are no videos; it's just a bar chart comparing the GPUs, but yeah, they are benchmarks on a PC with GPUs as close as you can get to what both consoles are running.


Just look at the frame rates: the 7770 is barely pushing 30 FPS at 1080p, which means the PS4 will get crap ports, because no developer/publisher is going to be in a situation where one console can push 1080p at 60 FPS and the other 1080p at 30 FPS, or where one looks noticeably different than the other.

Having a fast cache doesn't make up for the difference in GPU performance.

The Xbox One isn't a 7770. It's a Xenos with the advances Microsoft has baked in and lessons learned from the 360. The PS4 is based on Radeon, so the PS4 may be more comparable to PC cards in that respect. Having a fast cache makes up for any lack of memory performance in reads/writes, and that helps alleviate cache misses, which translates to smoother operation for the end user.


No, they're not. With the Xbox One seemingly having the 1.2 TFLOP GPU originally rumored, that puts it on a whole different tier of GPU compared to the PS4. To put it in PC GPU terms, it's basically a Radeon 7850 vs a Radeon 7770. The difference between those two cards is quite significant.

OK, granted. And exclusive titles may be able to utilise this extra grunt; time will tell. But my point was, "Game A" on "Console X" cannot be compared to "Game B" on "Console Y". Apples to oranges. I would wager, however, that cross-platform titles will not take advantage of any extra horsepower one console might offer over the other, so from that perspective you have to look at what else the consoles can do besides gaming to decide which is "better". And as "better" is subjective, there is no definitive answer.


The PS4 and One will use a shared memory architecture; there is no "VRAM" to move data in or out of.

The shared nature is irrelevant in this context; it still performs the same role as the primary storage for ancillary data (textures, meshes) when that data is not currently in GPU cache for computation to occur.


The Xbox One isn't a 7770. It's a Xenos with the advances Microsoft has baked in and lessons learned from the 360. The PS4 is based on Radeon, so the PS4 may be more comparable to PC cards in that respect. Having a fast cache makes up for any lack of memory performance in reads/writes, and that helps alleviate cache misses, which translates to smoother operation for the end user.

I know it's not; it's the nearest we can compare to, PC GPU-wise. We know both are Radeon GCN architecture. Based on the leaks, which are correct, the PS4 has 18 CUs and the Xbox One has 12 CUs. They said in the architectural round table that the GPU can do 768 operations per clock, and since we know it's GCN and a CU can do 64 ops/clock, we can deduce that the Xbox One GPU has 12 CUs. Though we don't know the clock speed, it is pretty much guaranteed to be a 1.2 TFLOP GPU.
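
For anyone following along, the arithmetic behind that deduction looks like this (the ~800 MHz clock is the rumored figure, so the totals are estimates):

```python
# GCN math behind the CU deduction and the ~1.2 TFLOP figure.
# 768 ops/clock divided by 64 ops/clock per CU gives the CU count;
# peak FLOPS = CUs * 64 ALUs * 2 FLOPs (FMA) * clock.
ops_per_clock_total = 768
ops_per_clock_per_cu = 64
cus_xbox_one = ops_per_clock_total // ops_per_clock_per_cu   # 12 CUs

clock_hz = 800e6            # rumored ~800 MHz, not confirmed at this point

def peak_tflops(cus, clock):
    return cus * 64 * 2 * clock / 1e12

print("Xbox One:", cus_xbox_one, "CUs ->", round(peak_tflops(cus_xbox_one, clock_hz), 2), "TFLOPS")
print("PS4:     ", 18, "CUs ->", round(peak_tflops(18, clock_hz), 2), "TFLOPS")
# -> roughly 1.23 vs 1.84 TFLOPS at the same clock
```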

All the cache in the world isn't going to make up for the difference in raw graphics performance.


OK, granted. And exclusive titles may be able to utilise this extra grunt; time will tell. But my point was, "Game A" on "Console X" cannot be compared to "Game B" on "Console Y". Apples to oranges. I would wager, however, that cross-platform titles will not take advantage of any extra horsepower one console might offer over the other, so from that perspective you have to look at what else the consoles can do besides gaming to decide which is "better". And as "better" is subjective, there is no definitive answer.

They may or may not, just like there are console ports on PCs that have improved graphics and some that are copy pasta.


If it's anything like corporate computing, in 8 years most of it will be in the cloud anyway... easier to scale, grow, adapt, and change without being limited to on-premises hardware.

Thus, I'm REALLY interested in seeing what MS has done and what Sony will offer in comparison. The idea of changing worlds and dynamic environments because there is a massive computational cluster capable of creating such worlds in the backend is just... awesome.

If the future is cloud computing, I'd be looking forward to what Apple and Google have coming up, with Microsoft a distant third.

For what it's worth, I don't think that cloud gaming is ever going to pan out. As I've mentioned elsewhere, the ability to do this has existed in the PC world for years and it's never caught on in any appreciable way. Not to mention, the majority of the improvements touted in every new console release are real-time, immediate graphic improvements (better lighting, accurate bullet drop, real time tire wear, etc).


The Xbox One isn't a 7770. It's a Xenos with the advances Microsoft has baked in and lessons learned from the 360. The PS4 is based on Radeon, so the PS4 may be more comparable to PC cards in that respect. Having a fast cache makes up for any lack of memory performance in reads/writes, and that helps alleviate cache misses, which translates to smoother operation for the end user.

Do we have a facepalm emoticon here? Because this post saying it's a modified Xenos (the 360 GPU) really needs it.


Different tech between the AMD and NV cards, so the shader counts aren't comparable; on the new consoles the architecture is the same, so there they will have a big impact on graphics capability.

The Console GPUs are not the same.

So you're telling me that there is no difference between a 7770 and a 7850 on my PC @1680? Frame rates are not stuck at 60, and when something intensive is displayed (or is badly optimised) they dip. Take a wild guess which console will suffer more.

Your PC isn't an engineered system built for a specific purpose, and the ONE vs. PS4 isn't literally a 7770 vs. a 7850. If something is badly optimized, it's badly optimized.

The console that will suffer more is the one where developers ship bad software. PC gaming hasn't exactly been at the forefront, even though its specs have grown exponentially faster.


The Console GPUs are not the same.

Yes they are.

We know both are Radeon GCN architecture (an AMD engineer on Twitter who worked on the APU says so). Based on the leaks, which are correct, the PS4 has 18 CUs and the Xbox One has 12 CUs. They said in the architectural round table that the GPU can do 768 operations per clock, and since we know it's GCN and a CU can do 64 ops/clock, we can deduce that the Xbox One GPU has 12 CUs. Though we don't know the clock speed, it is pretty much guaranteed to be a 1.2 TFLOP GPU if they are trying to hit a 100 W TDP.

Sony used its die space for extra GPU power; Microsoft used it for the eSRAM.

All the cache in the world isn't going to make up for the difference in raw graphics performance.


Do we have a facepalm emoticon here? Because this post saying it's a modified Xenos (the 360 GPU) really needs it.

The One is based on the Xenos architecture more than it is based on the Radeon.

It's the whole eDRAM/eSRAM concept you refuse to accept:

http://en.wikipedia.org/wiki/Xenos_(graphics_chip)

No one but MS knows the new code name for the current gen, but that's what it is, regardless of you choosing to accept it or not.

Yes they are.

We know both are Radeon GCN architecture (an AMD engineer on Twitter who worked on the APU says so). Based on the leaks, which are correct, the PS4 has 18 CUs and the Xbox One has 12 CUs. They said in the architectural round table that the GPU can do 768 operations per clock, and since we know it's GCN and a CU can do 64 ops/clock, we can deduce that the Xbox One GPU has 12 CUs. Though we don't know the clock speed, it is pretty much guaranteed to be a 1.2 TFLOP GPU if they are trying to hit a 100 W TDP.

All the cache in the world isn't going to make up for the difference in raw graphics performance.

No one disputes that they're based on AMD's next generation.

But they are engineered for different interfaces and different ways of getting data in and out. Microsoft went with its Xenos-type design with the eSRAM, Sony with GDDR5.

I'm not sure why that's so hard to comprehend. Microsoft isn't new to customizing its chipsets and designing its own, even if it's "loosely" based on the same stuff. The original Xenos was based on what, the R500? "Loosely."


Good lord.

Having fast memory to negate the slower DDR3 isn't going to make up for the lack of the extra 6 CUs/384 shaders, and that's not even touching on the fact that Sony worked with AMD to beef up the ALUs so it can do graphics and compute in parallel rather than waiting for one thread to finish before the other starts.


No need to include that. Both systems are based on 8 gigs of shared memory, or some allotment of shared resources thereof. All of that data should already be in shared memory when the game runs, so there is no shifting unless you're loading from disk, in which case the disk is the limiting factor, not the RAM speed.

And I'm pretty sure Microsoft & Microsoft Research did the math...

It could be argued, in some respects, that the eSRAM will offer better cache hit ratios as content moves between CPU and GPU, versus the pipeline of GPU to GDDR.

It's ENGINEERED for a reason; I'm sure we will soon find out! The PS3 was over-engineered, with fancier hardware... they swore it was the lack of RAM holding them back that generation... are they going to say it's the performance of the RAM now?
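
To make the hit-ratio argument concrete, here's the usual weighted-average way of looking at it; the bandwidth numbers are the ones floating around in the leaks (102.4 GB/s eSRAM, 68.3 GB/s DDR3, 176 GB/s GDDR5), so treat them as illustrative rather than confirmed:

```python
# Weighted-average view of the eSRAM argument: if a fraction of the GPU's
# traffic hits the fast on-die eSRAM, the rest goes to DDR3. Bandwidth
# numbers are the commonly leaked figures, used here only for illustration.
esram_bw = 102.4    # GB/s, 32 MB on-die eSRAM (leaked figure)
ddr3_bw  = 68.3     # GB/s, 256-bit DDR3-2133 (leaked figure)
gddr5_bw = 176.0    # GB/s, PS4's 256-bit GDDR5 (Sony's announced figure)

for hit_rate in (0.3, 0.5, 0.7, 0.9):
    # Effective bandwidth if hit_rate of traffic is served by the eSRAM.
    # (Simplification: ignores latency, contention, concurrent use of both
    # pools, and the 32 MB capacity limit.)
    effective = hit_rate * esram_bw + (1 - hit_rate) * ddr3_bw
    print(f"eSRAM hit rate {hit_rate:.0%}: ~{effective:.0f} GB/s vs GDDR5 {gddr5_bw:.0f} GB/s")
```

The whole argument then boils down to how much of a frame's working set actually fits in, and hits, those 32 MB.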

What you need to account for is that when games run out of video memory, they can start swapping data to and from system memory. Now, when you're talking about fast-paced action games in which the player's viewport changes rapidly, it's highly possible you could end up with a situation in which data is being swapped between video memory and system memory. Having faster system memory and VRAM will reduce lag and microstuttering in such a scenario.


What you need to account for is that when games run out of video memory, they can start swapping data to and from system memory. Now, when you're talking about fast-paced action games in which the player's viewport changes rapidly, it's highly possible you could end up with a situation in which data is being swapped between video memory and system memory. Having faster system memory and VRAM will reduce lag and microstuttering in such a scenario.

There is no VRAM; it's a shared memory architecture. Microsoft went with an SoC-style build with the Xbox 360, and they did lots of things learning from the 360.

Heat was a problem, so they may have eliminated some brute-force power that wasn't necessary. They may have also optimized the GPU so they could achieve the rates they needed at the performance they needed.

While there is a lot in common between the PS4 and ONE, the numbers aren't 1:1 comparisons; there are engineering differences that don't make it so easy to compare.

If you step back and read the entire article, though, even Anand points this out: the PS4 has better specs on paper, but until we know the engineering differences and how developers embrace the platform, specs haven't really done anyone any better before now, have they?


True! It does have more shaders, but then again, at 1080p, will they be necessary? Nvidia has been able to optimize throughput without increasing shader counts... maybe AMD/MS has done some of that voodoo magic.

For example, my 6850 had, what, 960 shaders, but my Nvidia has 480 and smokes it.

Seeing as both systems are designed by AMD and apparently based on the same APU technology, I expect the additional shaders on PS4 to simply make it that much faster. It could justify the need for higher memory bandwidth, i.e. a GPU that does more has to make more memory accesses. Current-generation AMD APUs (Trinity/Richland) are notoriously bandwidth-starved.
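
Here's a very rough 1080p/60 traffic budget to show why "a GPU that does more" leans harder on bandwidth; every per-pixel byte count below is a made-up but plausible ballpark, not a measurement:

```python
# Rough 1080p/60 bandwidth budget: every shaded pixel costs reads and writes,
# so more shader work per pixel generally means more memory traffic.
# Per-pixel byte counts are illustrative ballpark values, not measurements.
pixels_per_frame = 1920 * 1080
fps = 60

bytes_per_pixel = (
    4      # colour write (RGBA8)
  + 8      # depth read + write (32-bit Z)
  + 32     # texture fetches for a modest number of layers
  + 16     # G-buffer / post-processing reads in a deferred-style renderer
)
overdraw = 2.5   # each screen pixel shaded ~2.5x on average

gb_per_sec = pixels_per_frame * fps * bytes_per_pixel * overdraw / 1e9
print(f"~{gb_per_sec:.0f} GB/s of traffic at 1080p60 with these assumptions")
# And that's before shadow-map passes, MSAA, intermediate render targets, and
# the CPU's own traffic on the same shared bus, which is why APUs on ordinary
# DDR3 end up bandwidth-starved.
```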

Good lord.

Having fast memory to negate the slower DDR3 isn't going to make up for the lack of the extra 6 CUs/384 shaders, and that's not even touching on the fact that Sony worked with AMD to beef up the ALUs so it can do graphics and compute in parallel rather than waiting for one thread to finish before the other starts.

Oh boy... they both have 8-core CPUs where the processing happens... and if you only need X power to hit X fill rate (once again, to repeat a broken record) and get X performance, what is the point of having more? Additional heat? Additional silicon? Specs don't make great games.


@spudtrooper

I'm very curious as to what point you're really trying to make in saying that GDDR5 is unnecessary.

Are you trying to stand up for Microsoft by saying that people are implying Microsoft didn't know/didn't do "the math", yet you insist on implying (actually, stating) the same of Sony?

Reading your post makes me think you would make a good politician :)

Regarding memory: the data has to be processed, so the faster the RAM is, the faster data can be read from and written to memory by the CPU/GPU.

Also, latency doesn't determine how fast the data is transferred, only how quickly the memory responds to requests; so the latency may be the same for GDDR3 and GDDR5, but the rate at which data is transferred is not (assuming the bus width is the same).

Another benefit of GDDR5 over GDDR3 is that it generally consumes less power.
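
To put rough numbers on the bandwidth-vs-latency distinction, using the widely reported bus widths and transfer rates for the two consoles (leaked figures, not official spec sheets):

```python
# Bandwidth is bus width * transfer rate, independent of latency.
# Transfer rates and bus widths are the widely reported figures for the
# two consoles, not official spec sheets.
def bandwidth_gb_s(bus_bits, mt_per_s):
    return bus_bits / 8 * mt_per_s / 1000   # bytes/transfer * megatransfers/s -> GB/s

print("Xbox One DDR3-2133, 256-bit :", bandwidth_gb_s(256, 2133), "GB/s")   # ~68 GB/s
print("PS4 GDDR5 5500 MT/s, 256-bit:", bandwidth_gb_s(256, 5500), "GB/s")   # ~176 GB/s

# Latency, by contrast, is on the order of tens of nanoseconds for both DRAM
# types; GDDR5's higher latency numbers are cycles at a much higher clock, so
# the absolute response time ends up in the same ballpark.
```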


Seeing as both systems are designed by AMD and apparently based on the same APU technology, I expect the additional shaders on PS4 to simply make it that much faster. It could justify the need for higher memory bandwidth, i.e. a GPU that does more has to make more memory accesses. Current-generation AMD APUs (Trinity/Richland) are notoriously bandwidth-starved.

Make what faster, though?

All Microsoft had to do was put enough out there to lock in 1080p/60... anything more and you just have more heat and more idle processing.

