
AnandTech: Xbox One vs. PS4 - Hardware comparison


311 replies to this topic

#31 OP Audioboxer

Audioboxer

    Hermit Arcana

  • 35,990 posts
  • Joined: 01-December 03
  • Location: UK, Scotland

Posted 22 May 2013 - 18:19

If the machine is designed to only play games up to 1080p and movies at 4K, then yes, GDDR5 is wasted. There aren't enough pixels to justify the bandwidth/fill rate. 1080p/60 is what, ~3 Gbps, and no game developer is wasting cycles redrawing every pixel of every scene; they're modelling textures and lighting that are already active in memory or in the ROPs.



Doesn't make sense. The problems with games aren't really CPU/GPU; it's the fact they're largely scripted, process-driven and event-based - you play one, you've played them all. One of the announced features was dynamic maps and dynamic multiplayer, so the worlds you play would be different each time you play them. This is possible because of the cloud. That's the kind of stuff that will make gaming fun, if you ask me.

The graphics are already amazing, but again, come on, we're talking 1080p. HD is already said and done; we're talking about more interactivity, more personalized experiences, more interaction and more store. More WIN.


I genuinely don't want to offend or upset you, but you really do need to do a bit more research into console/PC architecture and how memory speed matters to a unified system. To put it simply, the happy reaction game developers had to Sony using very fast memory speaks volumes; it is far from 'wasted'.


#32 Blackhearted

Blackhearted

    .....

  • 3,240 posts
  • Joined: 26-February 04
  • Location: Ohio
  • Phone: Samsung Galaxy S2 (VM)

Posted 22 May 2013 - 18:20

If the machine is designed to only play games up to 1080p and movies at 4K, then yes, GDDR5 is wasted. There aren't enough pixels to justify the bandwidth/fill rate.


If the bandwidth of GDDR3 were still adequate, AMD and Nvidia wouldn't have stopped using it on cards aimed at gaming years ago.

#33 spudtrooper

spudtrooper

    Neowinian Senior

  • 3,095 posts
  • Joined: 19-October 10
  • OS: Windows 8
  • Phone: Nokia 920

Posted 22 May 2013 - 18:30

If the bandwidth of GDDR3 were still adequate, AMD and Nvidia wouldn't have stopped using it on cards aimed at gaming years ago.


People with GDDR5 on PCs are typically playing at higher resolutions, higher refresh rates or across multiple screens. They're also using discrete components that weren't exactly engineered cohesively, but rather as tools to brute-force one piece of the pipeline or another.

If there were a gaming PC built around 32 MB of eSRAM, a 192-bit memory bus and GDDR3, it would be a very capable machine.

It's not atypical for a gaming PC to run 1920x1200 at 120 fps, while an HD TV will only do 1920x1080 at 24/48/60 fps; otherwise the TV drops frames, since it can't refresh any faster.

If you want high-end PC gaming, stick with PCs. Your video card will cost more than the entire Xbox One or PS4 anyway.

I genuinely don't want to offend or upset you, but you really do need to do a bit more research into console/PC architecture and how memory speed matters to a unified system. To put it simply, the happy reaction game developers had to Sony using very fast memory speaks volumes; it is far from 'wasted'.


I've already spelled out how it is wasted; you're refusing to accept those facts. There just aren't enough pixels to saturate a GDDR3 bus plus eSRAM in a way that would matter compared with going GDDR5. The fill rate needed to fill a 1920x1080 display is simply NOT THAT HIGH.

Those PC video cards are designed to play at 2560x1600 at high frame rates. A TV is 1920x1080 at a fixed 60 Hz/60 fps, or 4K at 24 fps; nothing there would make or break either memory type.

To compare specs: the eSRAM + GPU + GDDR3 combination will still be able to play at what most gamers call "ULTRA" settings cranked up to the max at 1080p.

Does Sony plan on doing 4K gaming? If so, current PC video cards will smack them down too, so I'm not sure what the point of this is.
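
To put rough numbers on that fill-rate argument, here's a minimal back-of-the-envelope sketch in Python. It assumes a single 32-bit (RGBA8888) colour buffer written once per refresh; real engines write several buffers per frame, so treat these as lower bounds rather than figures from either console's documentation.

```python
# Back-of-the-envelope framebuffer write traffic, assuming one 32-bit (RGBA8888)
# colour buffer written once per refresh. Real renderers touch far more data.

def framebuffer_gbps(width, height, fps, bytes_per_pixel=4):
    """Write traffic for one full-screen buffer, in gigabits per second."""
    bytes_per_second = width * height * bytes_per_pixel * fps
    return bytes_per_second * 8 / 1e9

console_tv = framebuffer_gbps(1920, 1080, 60)    # fixed 1080p/60 TV output
pc_monitor = framebuffer_gbps(2560, 1600, 120)   # high-refresh PC monitor

print(f"1080p/60 colour buffer : {console_tv:5.1f} Gbit/s")
print(f"2560x1600/120 buffer   : {pc_monitor:5.1f} Gbit/s")
```

On these assumptions the fixed 1080p/60 target works out to roughly 4 Gbit/s of buffer writes versus roughly 16 Gbit/s for the high-end monitor, which is the gap being argued about here.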

#34 Athernar

Athernar

    ?

  • 2,992 posts
  • Joined: 15-December 04

Posted 22 May 2013 - 18:37

I've already spelled out how it is wasted; you're refusing to accept those facts. There just aren't enough pixels to saturate a GDDR3 bus plus eSRAM in a way that would matter compared with going GDDR5. The fill rate needed to fill a 1920x1080 display is simply NOT THAT HIGH.

Those PC video cards are designed to play at 2560x1600 at high frame rates. A TV is 1920x1080 at a fixed 60 Hz/60 fps, or 4K at 24 fps; nothing there would make or break either memory type.


It's nice that you've figured out how to calculate the required bandwidth for the framebuffer, but you're not giving any thought to the rest of the data that needs to be shifted in and out of RAM: textures, texture masks, depth buffers and mesh data, for instance.
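
To illustrate that point, here is a rough sketch of what one frame of a 1080p deferred renderer might read and write beyond the final colour buffer. Every figure below (the G-buffer layout, the per-frame texture and mesh traffic) is an illustrative assumption, not a number from either console:

```python
# Illustrative per-frame memory traffic for a 1080p deferred renderer.
# All sizes below are assumptions made purely for the estimate.

W, H, FPS = 1920, 1080, 60

buffers = {
    "final colour (RGBA8)":          W * H * 4,
    "depth/stencil (D24S8)":         W * H * 4,
    "G-buffer (4 x RGBA8 targets)":  W * H * 4 * 4,
    "HDR light accumulation (FP16)": W * H * 8,
}
texture_reads_per_frame = 200 * 2**20   # assume ~200 MB of texture fetches
mesh_and_misc_per_frame = 50 * 2**20    # assume ~50 MB of vertex/index/constant data

per_frame = sum(buffers.values()) + texture_reads_per_frame + mesh_and_misc_per_frame
print(f"per frame : {per_frame / 2**20:6.0f} MB")
print(f"per second: {per_frame * FPS / 2**30:6.1f} GB/s")
```

Even with these modest assumptions the traffic comes out near 20 GB/s, well above the framebuffer-only figure, which is why the display resolution alone doesn't settle the bandwidth question.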

#35 spudtrooper

spudtrooper

    Neowinian Senior

  • 3,095 posts
  • Joined: 19-October 10
  • OS: Windows 8
  • Phone: Nokia 920

Posted 22 May 2013 - 18:41

It's nice that you've figured out how to calculate the required bandwidth for the framebuffer, but you're not giving any thought to the rest of the data that needs to be shifted in and out of RAM: textures, texture masks, depth buffers and mesh data, for instance.


No need to include that. Both systems are based on 8 GB of shared memory, or some allotment of shared resources thereof. All of that data should already be in the shared memory when the game runs, so there is no shifting unless you're loading from disc, in which case the disc is the limiting factor, not the RAM speed.

And I'm pretty sure Microsoft and Microsoft Research did the math.

It could be argued that, in some respects, the eSRAM will offer better cache hit ratios as content moves between CPU and GPU, versus the pipeline of GPU to GDDR.

It's ENGINEERED for a reason; I'm sure we will soon find out! The PS3 was over-engineered, with fancier hardware. They swore it was the lack of RAM holding them back that generation; are they going to say it's the performance of the RAM now?
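
For a sense of scale on that 32 MB eSRAM, here's a small sketch of how much of a 1080p frame's working set would fit in it at once. The render-target formats are illustrative assumptions; how Xbox One titles actually partition the eSRAM isn't established anywhere in this thread:

```python
# How much of a 1080p frame's working set fits in 32 MB of on-die eSRAM?
# The render-target formats below are illustrative assumptions.

ESRAM_BYTES = 32 * 2**20
W, H = 1920, 1080

targets = {
    "colour (RGBA8)":        W * H * 4,
    "depth/stencil (D24S8)": W * H * 4,
    "HDR target (FP16)":     W * H * 8,
}

used = 0
for name, size in targets.items():
    used += size
    print(f"{name:24s} {size / 2**20:5.1f} MB  (running total {used / 2**20:5.1f} MB)")

print(f"left over of 32 MB eSRAM: {(ESRAM_BYTES - used) / 2**20:5.1f} MB")
```

A handful of full-resolution targets already consume almost the whole 32 MB, so the bulk of texture data would still have to come over the main memory bus.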

#36 OP Audioboxer

Audioboxer

    Hermit Arcana

  • 35,990 posts
  • Joined: 01-December 03
  • Location: UK, Scotland

Posted 22 May 2013 - 18:41

I think it's easiest to say that the developers who are actually creating the games know best - http://www.eurogamer...ight-developers

#37 spudtrooper

spudtrooper

    Neowinian Senior

  • 3,095 posts
  • Joined: 19-October 10
  • OS: Windows 8
  • Phone: Nokia 920

Posted 22 May 2013 - 18:46

I think it's easiest to say that the developers who are actually creating the games know best - http://www.eurogamer...ight-developers


Of course developers are delighted to have MORE RAM; the 256 MB they had before was a pain.

No matter what, the laws of physics still apply.

#38 Athernar

Athernar

    ?

  • 2,992 posts
  • Joined: 15-December 04

Posted 22 May 2013 - 18:46

No need to include that. Both systems are based on 8 GB of shared memory, or some allotment of shared resources thereof. All of that data should already be in the shared memory when the game runs, so there is no shifting unless you're loading from disc, in which case the disc is the limiting factor, not the RAM speed.

And I'm pretty sure Microsoft and Microsoft Research did the math.


You are aware that data still has to be continually shifted from RAM into GPU-local caches in order to perform computation on it, right?

#39 Mando

Mando

    Neowinian Senior

  • 2,120 posts
  • Joined: 05-April 02
  • Location: Scotland, Dundee
  • OS: Win 7 Ultimate x64/Pro x64/Home prem x64
  • Phone: Samsung Note ICS

Posted 22 May 2013 - 18:50

I am more interested in the quality of the components and which console will last longer. LOL, there are 30+ year old Ataris that still work, and many people's Xboxes and PlayStations died within several years of use!

Speak for yourself; my original black-brick Xbox is still soldiering on fine, now modded, with XBMC and 1080i output. Same with my original release white 360.

#40 spudtrooper

spudtrooper

    Neowinian Senior

  • 3,095 posts
  • Joined: 19-October 10
  • OS: Windows 8
  • Phone: Nokia 920

Posted 22 May 2013 - 18:50

You are aware that data still has to be continually shifted from RAM into GPU-local caches in order to perform computation on it, right?


You are aware that I've already said the bus is wide enough and fast enough to do this without the bus being the bottleneck, right? Also, I've already stated that the eSRAM + multi-CPU + GPU config can offer better cache hit rates (fewer misses) than multi-CPU to GPU direct.

Apparently the latencies of GDDR3 and GDDR5 memory are pretty much the same - a read or write costs about the same. Where GDDR5 shines is when you need throughput, but we're talking fixed resolutions here, where the throughput doesn't demand GDDR5.

Where the eSRAM shines is that its latency is negligible, because it's on die with the processor, and it can continue to see improvements as the chip shrinks; whereas GDDR5 has the same latency as GDDR3, so the "performance" of a single memory read/write is EXACTLY THE SAME.

There are some IBM papers about that if you want to read up.
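
The throughput side of that trade-off is easy to put numbers on. Here's a minimal sketch using the memory specs commonly reported for the two consoles around this time (2133 MT/s DDR3-class memory on a 256-bit bus plus the 32 MB eSRAM for the Xbox One, 5500 MT/s GDDR5 on a 256-bit bus for the PS4); these are reported figures, not measurements:

```python
# Peak theoretical bandwidth = effective transfer rate x bus width in bytes.
# The transfer rates and bus widths are the commonly reported 2013 specs.

def peak_gb_per_s(transfers_per_s, bus_bits):
    return transfers_per_s * (bus_bits / 8) / 1e9

xbox_one_main = peak_gb_per_s(2133e6, 256)   # 2133 MT/s DDR3-class, 256-bit bus
ps4_gddr5     = peak_gb_per_s(5500e6, 256)   # 5.5 GT/s GDDR5, 256-bit bus

print(f"Xbox One main RAM : {xbox_one_main:6.1f} GB/s (plus 32 MB eSRAM on die)")
print(f"PS4 GDDR5         : {ps4_gddr5:6.1f} GB/s")
```

Same latency per access or not, the peak throughputs differ by roughly 2.5x, which is what the "fill rate versus fixed resolution" argument really hinges on.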

#41 OP Audioboxer

Audioboxer

    Hermit Arcana

  • 35,990 posts
  • Joined: 01-December 03
  • Location: UK, Scotland

Posted 22 May 2013 - 18:51

There's a lot of good comments here - http://www.reddit.co...stand_the_deal/

Or even here - http://www.playstati...emory-analyzed/

#42 Athernar

Athernar

    ?

  • 2,992 posts
  • Joined: 15-December 04

Posted 22 May 2013 - 18:55

You are aware that I've already said the bus is wide enough and fast enough to do this without the bus being the bottleneck, right? Also, I've already stated that the eSRAM + multi-CPU + GPU config can offer better cache hit rates (fewer misses) than multi-CPU to GPU direct.

There are some IBM papers about that if you want to read up.


How can you claim to know the bus is wide enough when the required bandwidth is entirely dependent on the current workload? All you've done so far is factor in a figure for a 1080p/60 framebuffer.

Really, I don't think you know what you're talking about. It seems you've just read some spec sheets and are quoting things verbatim to sound smart.

#43 Jason Stillion

Jason Stillion

    Neowinian

  • 1,408 posts
  • Joined: 04-April 12
  • Location: United States

Posted 22 May 2013 - 18:57

I am more interested in the quality of the components and which console will last longer. LOL, there are 30+ year old Ataris that still work, and many people's Xboxes and PlayStations died within several years of use!


The "Fat" PS2 is the last console I've seen that seems to have any longevity.
The original xbox only has 3-5 (use) year life span, since the OS to run the device is on a standard ide hard drive.
The Wii could turn out be a long lasting console.

The other is the leadless sodder the industry switched to (environmental) which doesn't hold up as well.
Issues related to heat be it poor heat/cool design (original 360), or cheap thermal paste that dries out (ps3).

#44 spudtrooper

spudtrooper

    Neowinian Senior

  • 3,095 posts
  • Joined: 19-October 10
  • OS: Windows 8
  • Phone: Nokia 920

Posted 22 May 2013 - 19:03

How can you claim to know the bus is wide enough when the required bandwidth is entirely dependent on the current workload? All you've done so far is factor in a figure for a 1080p/60 framebuffer.


Because I know that both consoles are limited to 1080p/60 output, with a finite number of pixels and finite refresh rates. Because you have these fixed values, you can do the math.

Really, I don't think you know what you're talking about. It seems you've just read some spec sheets and are quoting things verbatim to sound smart.


I'm just doing the math. You're just guessing.

Look, memory performance is about throughput; the actual READ/WRITE time is almost the same and entirely dependent on the configuration and chips used. I'll repeat: GDDR3 and GDDR5 have the same latency cost to read/write. Where GDDR5 shines is when you need more FILL rate, more THROUGHPUT, because you're pushing larger PIXEL counts or larger TEXTURES. But the beauty of console gaming is that we're talking about FIXED displays with FIXED fill rates and FIXED frame rates, so you can *DO THE MATH* to see what you really need.

There is NOTHING WRONG with GDDR5; it's simply a marketing decision to get people thinking bigger is better, when the reality is GDDR3 is PERFECTLY CAPABLE of 1080p/60 at FULL FILL RATE, and GDDR3/GDDR5 have the same read/write latencies at those texture sizes, so it really is moot.

The eSRAM could actually be beneficial in making sure CPU cache misses are minimized and the smaller bus is used more efficiently, and it may allow developers to programmatically optimize their engines to work around CAS latencies and all that mumbo jumbo that you can go Google but probably won't.
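
The latency-versus-throughput distinction being argued here can be sketched with a toy model: time per access ≈ fixed latency + bytes transferred ÷ bandwidth. The latency and bandwidth figures below are illustrative assumptions only, not measured values for either console's memory system:

```python
# Toy model: access_time = fixed latency + transfer size / bandwidth.
# Latency dominates small scattered reads; bandwidth dominates large streams.
# All figures below are illustrative assumptions.

def access_time_us(size_bytes, latency_ns, bandwidth_gb_s):
    return latency_ns / 1e3 + size_bytes / (bandwidth_gb_s * 1e9) * 1e6

cases = [
    ("64 B cache line",    64),
    ("4 KB texture tile",  4 * 1024),
    ("8 MB render target", 8 * 2**20),
]

for name, size in cases:
    gddr5_like = access_time_us(size, latency_ns=150, bandwidth_gb_s=176)
    ddr3_like  = access_time_us(size, latency_ns=150, bandwidth_gb_s=68)
    esram_like = access_time_us(size, latency_ns=20,  bandwidth_gb_s=102)
    print(f"{name:19s}  GDDR5-ish {gddr5_like:7.2f} us | "
          f"DDR3-ish {ddr3_like:7.2f} us | eSRAM-ish {esram_like:7.2f} us")
```

On these made-up numbers the on-die memory wins on small, latency-bound accesses, while raw bandwidth decides the big streaming transfers; which of those a real game engine is bound by is exactly what this thread is disagreeing about.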

#45 Athernar

Athernar

    ?

  • 2,992 posts
  • Joined: 15-December 04

Posted 22 May 2013 - 19:09

Because I know that both consoles are limited to 1080p/60 resolutions with finite pixels and finite refresh rates. Because you have these fixed values you can do the math.

I'm just doing the math.. You're just guessing.


Oh wow, you've not understood a single word of what I've said, have you?

Congratulations, you did the math to calculate the bandwidth of a 1920x1080 framebuffer with 24-bit pixel depth (assuming RGB888), refreshed 60 times per second. You get a gold star.

You are still, however, completely forgetting that there is data other than the framebuffer that needs to be shifted in and out of RAM. You've only accounted for one small piece of the pie. Do you comprehend now?
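
For reference, here is the calculation being credited, worked through explicitly (a minimal sketch, assuming exactly the 24-bit RGB888 buffer and 60 Hz refresh described above):

```python
# The framebuffer-only figure: 1920 x 1080 pixels, 3 bytes each (RGB888), 60 Hz.
width, height, bytes_per_pixel, hz = 1920, 1080, 3, 60

bytes_per_frame = width * height * bytes_per_pixel    # ~6.2 MB per refresh
bytes_per_second = bytes_per_frame * hz               # ~373 MB/s sustained
print(f"{bytes_per_second / 1e6:.0f} MB/s  =  {bytes_per_second * 8 / 1e9:.2f} Gbit/s")
```

That works out to roughly 373 MB/s, i.e. the "~3 Gbps" figure quoted near the top of the thread, and it is a small fraction of the tens of gigabytes per second either console's memory system moves per second, which is the point about the rest of the frame's data.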