AnandTech: Xbox One vs. PS4 - Hardware comparison



The first Xbox used an x86-class processor. The Xbox 360 and PS3 are based on PPC (PowerPC G5, I think?), which was also used in Macs until 2007. Macs are a type of PC, etc. etc.

What I was trying to get at is that consoles have basically been specialized PCs.

The PPC chips in the Xbox 360 and Wii / Wii U were custom designed, not the off-the-shelf chips you got in Macs... they had the same instruction set, but they also had differences that made them not the same...

The first Xbox used what was basically an Intel Celeron chip, and that was just to save time and money to get a system out, so it was kind of a one-off at the time... everyone else had custom-designed chips... In the end they all come from the same lines, CISC or RISC basically... heck, even nowadays Intel's CISC chips have RISC-style cores inside... we went from the old days of making everything custom to today's approach of customizing standard parts.


The first Xbox used an x86-class processor. The Xbox 360 and PS3 are based on PPC (PowerPC G5, I think?), which was also used in Macs until 2007. Macs are a type of PC, etc. etc.

What I was trying to get at is that consoles have basically been specialized PCs.

They were still quite different from any x86 architecture around, especially the PS3 with the Cell CPU. In any case, as a PC user I thought we were finally going to get a jump on the graphics side for a few years, but MS pulls out this console with already-outdated hardware, and we all know what that means: games will be made for the lowest common denominator.


If 4K is for video only, then GDDR5 is wasted entirely...

Lol. To say that GDDR5 is wasted on a gaming machine... you've gotta be insane.

What's interesting to me is the idea that the specs of the One really won't matter, depending on how much computing devs offload to the cloud. I read an article on VentureBeat that said devs could offload AI processing, physics calculations, and even some rendering tasks to the cloud, and that over time the net raw processing power will increase as MS upgrades its servers.

If this is the case, then the One has a *huge* advantage over the PS4, which makes spec comparisons like this almost irrelevant. Who cares about the CPU/memory speed if there are massive datacenters that can perform computation tasks?

The only thing that might make a slight difference is the GPU - I'm surprised it's so much less powerful than the PS4's.

In a perfect world, maybe something like that might be feasible. But this ain't a perfect world. There are too many issues that'll stop that whole 'cloud computing' idea from making the Xbox One a more powerful machine.


Lol. To say that GDDR5 is wasted on a gaming machine... you've gotta be insane.

If the machine is designed to only play games at up to 1080p and movies at 4K, then yes, GDDR5 is wasted. There aren't enough pixels to justify the bandwidth/fill rate. 1080p/60 is what, ~3 Gbps? And no game developer is wasting cycles redrawing every pixel of every scene from scratch; they're working with textures and lighting already resident in memory or in the ROPs.
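To spell out where that ballpark comes from (a rough sketch that counts only scan-out of a plain 24-bit RGB framebuffer, nothing else):

# Back-of-envelope: bandwidth to scan out one 1080p, 24-bit framebuffer at 60 Hz.
width, height = 1920, 1080
bytes_per_pixel = 3                                        # RGB888, no alpha
bytes_per_second = width * height * bytes_per_pixel * 60   # ~373 MB/s
print(f"{bytes_per_second / 1e6:.0f} MB/s = {bytes_per_second * 8 / 1e9:.2f} Gbps")
# prints: 373 MB/s = 2.99 Gbps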

In a perfect world, maybe something like that might be feasible. But this ain't a perfect world. There are too many issues that'll stop that whole 'cloud computing' idea from making the Xbox One a more powerful machine.

Doesn't make sense. The problems with games aren't really CPU/GPU; it's the fact that they're largely scripted, process-driven and event-based: you play one, you've played them all. One of the announced features was dynamic maps and dynamic multiplayer, so the worlds you play would be different each time you play them. This is possible because of the cloud. That's the kind of stuff that will make gaming fun, if you ask me.

The graphics are already amazing, but again, come on, we're talking 1080p. HD is already said and done; we're talking about more interactivity, more personalized experiences, more interaction and more store. More WIN.


I am more interested in the quality of the components and which console will last longer. LOL, there are 30+ year old Ataris that still work, while many people's Xboxes and PlayStations died within a few years of use!


If the machine is designed to only play games at up to 1080p and movies at 4K, then yes, GDDR5 is wasted. There aren't enough pixels to justify the bandwidth/fill rate. 1080p/60 is what, ~3 Gbps? And no game developer is wasting cycles redrawing every pixel of every scene from scratch; they're working with textures and lighting already resident in memory or in the ROPs.

Doesn't make sense. The problems with games aren't really CPU/GPU; it's the fact that they're largely scripted, process-driven and event-based: you play one, you've played them all. One of the announced features was dynamic maps and dynamic multiplayer, so the worlds you play would be different each time you play them. This is possible because of the cloud. That's the kind of stuff that will make gaming fun, if you ask me.

The graphics are already amazing, but again, come on, we're talking 1080p. HD is already said and done; we're talking about more interactivity, more personalized experiences, more interaction and more store. More WIN.

I genuinely don't want to offend or upset you, but you really do need to do a bit more research into console/PC architecture and how memory speed matters to a unified memory system. To put it simply, the happy reaction game developers had to Sony using very fast memory speaks volumes; it is far from 'wasted'.


If the machine is designed to only play games at up to 1080p and movies at 4K, then yes, GDDR5 is wasted. There aren't enough pixels to justify the bandwidth/fill rate.

If the bandwidth of GDDR3 were still adequate, AMD and NVIDIA wouldn't have stopped using it on cards aimed at gaming years ago.


If the bandwidth of GDDR3 were still adequate, AMD and NVIDIA wouldn't have stopped using it on cards aimed at gaming years ago.

People with GDDR5 on PCs are typically playing at higher resolutions, higher refresh rates or on multiple screens. And they're also using discrete components that weren't exactly engineered cohesively, but rather as tools to brute-force one piece or another.

If there were a gaming PC built around 32 MB of eSRAM, a 192-bit memory bus and GDDR3, it would be a very capable machine.

It's not atypical for a gaming PC to run at 1920x1200 at 120 fps, while an HD TV will only do 1920x1080 at 24/48/60 fps; otherwise the TV will drop frames, since it can't refresh as fast.

If you want high-end PC gaming, stick with PCs... your video card will cost more than the entire Xbox One / PS4 anyway...

I genuinely don't want to offend or upset you, but you really do need to do a bit more research into console/PC architecture and how memory speed matters to a unified memory system. To put it simply, the happy reaction game developers had to Sony using very fast memory speaks volumes; it is far from 'wasted'.

I've already spelled out how it is wasted; you're refusing to accept those facts. There just aren't enough pixels to saturate a GDDR3 bus + eSRAM in a way that would matter compared to going GDDR5. The fill rate needed to fill a 1920x1080 display is simply NOT THAT HIGH.

Those PC video cards are being designed to play at 2560x1600 at high frame rates. A TV is 1920x1080 at a fixed 60 Hz/60 fps, or 4K at 24 fps; neither of those would make or break either memory type.

To compare specs: the eSRAM + GPU + GDDR3 memory will still be able to play at what most gamers call "ULTRA" settings, cranked up to the max, at 1080p.

Does Sony plan on doing 4K gaming? If so, current PC video cards will smack them down too, so I'm not sure what the point of this is...


I've already spelled out how it is wasted; you're refusing to accept those facts. There just aren't enough pixels to saturate a GDDR3 bus + eSRAM in a way that would matter compared to going GDDR5. The fill rate needed to fill a 1920x1080 display is simply NOT THAT HIGH.

Those PC video cards are being designed to play at 2560x1600 at high frame rates. A TV is 1920x1080 at a fixed 60 Hz/60 fps, or 4K at 24 fps; neither of those would make or break either memory type.

It's nice that you've figured out how to calculate the required bandwidth for the framebuffer, but you're not giving any thought to the rest of the data that needs to be shifted in and out of RAM: textures, texture masks, depth buffers and mesh data, for instance.
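To put even rough numbers on that (a sketch with illustrative per-frame figures I'm assuming for a 1080p deferred renderer, not measured data from either console):

# Hypothetical per-frame memory traffic at 1080p/60 - every figure here is an assumption.
pixels, fps = 1920 * 1080, 60
gbuffer_writes  = pixels * 4 * 4   # four 32-bit render targets written once
depth_traffic   = pixels * 4 * 3   # depth writes plus later reads
texture_reads   = pixels * 4 * 8   # ~8 four-byte texture samples per shaded pixel
lighting_reads  = pixels * 4 * 6   # G-buffer reads during lighting and post-processing
bytes_per_frame = gbuffer_writes + depth_traffic + texture_reads + lighting_reads
print(f"{bytes_per_frame * fps / 1e9:.1f} GB/s")   # ~10.5 GB/s, versus ~0.37 GB/s for scan-out alone

And that still ignores CPU traffic, shadow maps, anti-aliasing and overdraw, all of which pull from the same unified pool.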


It's nice that you've figured out how to calculate the required bandwidth for the framebuffer, but you're not giving any thought to the rest of the data that needs to be shifted in and out of RAM: textures, texture masks, depth buffers and mesh data, for instance.

No need to include that. Both systems are based on 8 GB of shared memory, or some allotment of shared resources thereof. All of that data should already be in the shared memory when the game runs, so there is no shifting unless you're loading from disk, in which case the disk is the limiting factor, not the RAM speed.

And I'm pretty sure Microsoft & Microsoft Research did the math...

It could be argued, in some respects, that the eSRAM will offer better cache hit ratios as content moves between CPU and GPU, versus the pipeline of GPU to GDDR...

It's ENGINEERED for a reason; I'm sure we will soon find out! The PS3 was over-engineered, fancier hardware... they swore it was the lack of RAM holding them back that generation... are they going to say it's the performance of the RAM now?


I think it's easiest to say that the developers who are actually creating the games know best - http://www.eurogamer.net/articles/2013-02-22-ps4-pc-like-architecture-8gb-ram-delight-developers

Of course developers are delighted to have MORE RAM; the 256 MB they had before was a pain...

No matter what, the laws of physics still apply.


No need to include that. Both systems are based on 8 GB of shared memory, or some allotment of shared resources thereof. All of that data should already be in the shared memory when the game runs, so there is no shifting unless you're loading from disk, in which case the disk is the limiting factor, not the RAM speed.

And I'm pretty sure Microsoft & Microsoft Research did the math...

You are aware that data still has to be continually shifted from RAM into GPU-local caches in order to be computed on, right?


I am more interested in the quality of the components and which console will last longer. LOL, there are 30+ year old Ataris that still work, while many people's Xboxes and PlayStations died within a few years of use!

Speak for yourself: my original black-brick Xbox is still soldiering on fine, now modded with XBMC and 1080i output. Same with my original-release white 360.


You are aware that data still has to be continually shifted from RAM into GPU-local caches in order to be computed on, right?

You are aware that I've already said the bus is wide enough and fast enough to do this without being the bottleneck, right? Also, I've already stated that the eSRAM + multi-CPU + GPU config can offer better cache hit rates (fewer misses) than multi-CPU to GPU direct.

Apparently the latencies of GDDR3 and GDDR5 memory are pretty much the same - a read or write costs about the same amount of time. Where GDDR5 shines is when you need throughput, but we're talking fixed resolutions here, where the throughput doesn't demand GDDR5.

Where the eSRAM shines is that its latency is negligible, because it's on die with the processor and can keep improving as the chip shrinks, whereas GDDR5 has the same latency as GDDR3, so the "performance" of a single memory read/write is EXACTLY THE SAME.

There are some IBM papers about that if you want to read up.
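On the latency-versus-throughput point, here's a quick illustration of the distinction (the latency and the two bandwidth figures are round assumptions for illustration, not either console's spec):

# Time to stream a 4 MB texture over two buses with identical first-access latency.
latency_s = 50e-9            # assume ~50 ns to start the transfer on both buses
asset_bytes = 4 * 1024**2    # a 4 MB texture

for name, bw_gb_s in [("slower bus", 68.0), ("faster bus", 176.0)]:
    total_s = latency_s + asset_bytes / (bw_gb_s * 1e9)
    print(f"{name}: {total_s * 1e6:.1f} microseconds")
# The fixed latency is a rounding error next to the transfer time, so for large
# assets it is sustained bandwidth, not per-access latency, that sets the pace.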


You are aware that I've already said the bus is wide enough and fast enough to do this without being the bottleneck, right? Also, I've already stated that the eSRAM + multi-CPU + GPU config can offer better cache hit rates (fewer misses) than multi-CPU to GPU direct.

There are some IBM papers about that if you want to read up.

How can you claim to know the bus is wide enough when the required bandwidth isn't a fixed value at all and depends entirely on the current workload? All you've done so far is factor in a figure for a 1080p/60 framebuffer.

Honestly, I don't think you know what you're talking about. It seems you've just read some spec sheets and are quoting things verbatim to sound smart.


I am more interested in the quality of the components and which console will last longer. LOL, there are 30+ year old Ataris that still work, while many people's Xboxes and PlayStations died within a few years of use!

The "Fat" PS2 is the last console I've seen that seems to have any longevity.

The original Xbox only has a 3-5 year lifespan in use, since the OS that runs the device lives on a standard IDE hard drive.

The Wii could turn out to be a long-lasting console.

The other issue is the lead-free solder the industry switched to (for environmental reasons), which doesn't hold up as well.

Then there are issues related to heat, be it poor heating/cooling design (original 360) or cheap thermal paste that dries out (PS3).


How can you claim to know the bus is wide enough when the required bandwidth isn't a fixed value at all and depends entirely on the current workload? All you've done so far is factor in a figure for a 1080p/60 framebuffer.

Because I know that both consoles are limited to 1080p/60 output, with a finite number of pixels and a finite refresh rate. Because you have these fixed values, you can do the math.

Honestly, I don't think you know what you're talking about. It seems you've just read some spec sheets and are quoting things verbatim to sound smart.

I'm just doing the math.. You're just guessing.

Look, memory performance is about throughput; the actual READ/WRITE time is almost the same and depends entirely on the configuration and chips used. I'll repeat: GDDR3 and GDDR5 have the same latency cost per read/write. Where GDDR5 shines is when you need more FILL rate, more THROUGHPUT, because you're dealing with larger PIXEL counts or larger TEXTURES. But the beauty of console gaming is that we're talking about FIXED displays with FIXED fill rates and FIXED frame rates, so you can *DO THE MATH* to see what you really need.

There is NOTHING WRONG with GDDR5; it's simply a marketing decision to get people to think bigger is better, when the reality is that GDDR3 is PERFECTLY CAPABLE of 1080p/60 at FULL FILL RATE, and GDDR3/GDDR5 have the same read/write latencies for those texture sizes, so it really is moot.

The eSRAM could actually be beneficial in making sure CPU cache misses are minimized and the smaller bus is used more efficiently, and it may allow developers to programmatically optimize their engines to work around CAS latencies and all that mumbo jumbo that you can go Google but probably won't.


Because I know that both consoles are limited to 1080p/60 output, with a finite number of pixels and a finite refresh rate. Because you have these fixed values, you can do the math.

I'm just doing the math.. You're just guessing.

Oh wow, you've not understood a single word of what I've said, have you?

Congratulations, you did the math to calculate the bandwidth of a 1920x1080 framebuffer with 24-bit pixel depth (assuming RGB888) refreshed 60 times per second. You get a gold star.

You are still, however, completely forgetting that there is data other than the framebuffer that needs to be shifted in and out of RAM. You've only accounted for one small piece of the pie. Do you comprehend now?


I wonder how long this guy will continue to focus on nothing but the size of the framebuffer while ignoring everything else in the rendering process that can eat up bandwidth..


I've already spelled out how it is wasted; you're refusing to accept those facts. There just aren't enough pixels to saturate a GDDR3 bus + eSRAM in a way that would matter compared to going GDDR5. The fill rate needed to fill a 1920x1080 display is simply NOT THAT HIGH.
And you think that memory bandwidth is only there to transfer pixels? Any memory access of any data will use some bandwidth. Higher bandwidth means being able to feed both the GPU and CPU more efficiently. Sony's engineers didn't choose more expensive, more power-hungry memory just to be able to brag about it on tech forums; it will mean a more powerful gaming machine overall. Whether that translates into tangible advantages for gamers remains to be seen (the Xbox 360 had something like five times the bandwidth of the PS3 and games didn't always look better on it), but on theoretical grounds the PS4 is the faster hardware.
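For a sense of scale, here's the framebuffer figure next to the headline main-memory bandwidths both companies quoted around launch (numbers cited from memory, so treat them as approximate):

# Approximate published peak main-memory bandwidths versus 1080p/60 scan-out.
ps4_gddr5_gb_s   = 176.0   # 256-bit GDDR5, as announced
xb1_ddr3_gb_s    = 68.3    # 256-bit DDR3; the 32 MB eSRAM adds a separate ~100+ GB/s pool
framebuffer_gb_s = 1920 * 1080 * 3 * 60 / 1e9   # ~0.37 GB/s for scan-out alone

for name, bw in [("PS4 GDDR5", ps4_gddr5_gb_s), ("Xbox One DDR3", xb1_ddr3_gb_s)]:
    print(f"{name}: scan-out is {framebuffer_gb_s / bw:.2%} of peak")
# Scan-out is well under 1% of either figure; the rest of the bandwidth is there
# for everything else the CPU and GPU read and write every frame.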

I'm just doing the math.. You're just guessing.
You're doing a simplistic calculation and assuming AMD/Sony engineers don't know what they're doing.

Oh wow, you've not understood a single word of what I've said, have you?

Congratulations, you did the math to calculate the bandwidth of a 1920x1080 framebuffer with 24-bit pixel depth (assuming RGB888) refreshed 60 times per second. You get a gold star.

You are still, however, completely forgetting that there is data other than the framebuffer that needs to be shifted in and out of RAM. You've only accounted for one small piece of the pie. Do you comprehend now?

Oy vey, there is nothing shifted in and out of RAM unless it's going through the CPU, and it takes the same amount of time to go from GDDR5 or GDDR3 RAM to the CPU and BACK; the latencies are the same.


Because I know that both consoles are limited to 1080p/60 output, with a finite number of pixels and a finite refresh rate. Because you have these fixed values, you can do the math.

I'm just doing the math.. You're just guessing.

Look, memory performance is about throughput; the actual READ/WRITE time is almost the same and depends entirely on the configuration and chips used. I'll repeat: GDDR3 and GDDR5 have the same latency cost per read/write. Where GDDR5 shines is when you need more FILL rate, more THROUGHPUT, because you're dealing with larger PIXEL counts or larger TEXTURES. But the beauty of console gaming is that we're talking about FIXED displays with FIXED fill rates and FIXED frame rates, so you can *DO THE MATH* to see what you really need.

There is NOTHING WRONG with GDDR5; it's simply a marketing decision to get people to think bigger is better, when the reality is that GDDR3 is PERFECTLY CAPABLE of 1080p/60 at FULL FILL RATE, and GDDR3/GDDR5 have the same read/write latencies for those texture sizes, so it really is moot.

The eSRAM could actually be beneficial in making sure CPU cache misses are minimized and the smaller bus is used more efficiently, and it may allow developers to programmatically optimize their engines to work around CAS latencies and all that mumbo jumbo that you can go Google but probably won't.

The issue is that it's not just video data moving across the bus; the memory is unified, which means it's going to be used to store and transfer game data, not just graphics data.


You're doing a simplistic calculation and assuming AMD/Sony engineers don't know what they're doing.

I'm doing the same thing they did. Sony felt the eSRAM/eDRAM option added complexity, so they went with a flat pool of GDDR5 memory.

Microsoft, on the other hand, is already comfortable with the technology, as it worked on their prior platform (the 360's eDRAM), so they just enhanced it for the Xbox One.

You have to remember that part of the issue with the PS3 was complexity; Microsoft didn't have that problem. Sony may now be swinging too far toward commodity parts and simplicity.

The constraint isn't the difference between GDDR3 and GDDR5 when it comes to television console gaming. It was an engineering decision, and I'm not saying it's a wrong one.

I'm just saying it won't have the impact you guys claim, because what Microsoft chose can still compete at any and all resolutions these systems will be played at.

