Rumored Xbox One memory boost won't make a difference


36 replies to this topic

#1 Motoko.

Motoko.

    Neowinian Senior

  • Joined: 04-November 09
  • Location: United States of America
  • OS: Windows 8 Pro

Posted 29 June 2013 - 00:27

This guy right here gives a plain explanation of why eSRAM is more complicated:

 

http://youtu.be/JJW5OKbh0WA?t=38m57s




#2 kaotic

kaotic

    Neowinian

  • Joined: 20-June 04
  • Location: Asheville, NC

Posted 29 June 2013 - 00:32

Correct me if I'm wrong, but isn't MS using eSRAM, not eDRAM?



#3 OP Motoko.

Motoko.

    Neowinian Senior

  • Joined: 04-November 09
  • Location: United States of America
  • OS: Windows 8 Pro

Posted 29 June 2013 - 00:35

Correct me if I'm wrong, but isn't MS using eSRAM, not eDRAM?

The argument remains the same

 

The CPU cannot access that eSRAM at all; it is only there as a GPU cache.
Compute workloads can only run across the slower DDR3. If the GPU needs to return a result to the CPU, it has to be written to the common DDR3, as that is the only coherent memory in the system.

Truth be told, the Xbone's memory arrangement isn't *terrible*; there are a number of systems out there that use something similar. The PS4 with its hUMA configuration is just substantially better.
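
For a rough sense of scale, here is a back-of-envelope sketch (in Python) of the peak bandwidth of the two coherent pools being compared. The 2133 MT/s / 256-bit DDR3 and 5.5 GT/s / 256-bit GDDR5 figures are the commonly reported ones, not confirmed specs, so treat the numbers as assumptions:

```python
# Back-of-envelope peak bandwidth: transfers per second times bytes per transfer.
# The memory speeds and bus widths below are the commonly reported figures for
# the Xbox One (DDR3-2133, 256-bit) and PS4 (5.5 GT/s GDDR5, 256-bit); they are
# assumptions for illustration, not confirmed specifications.

def peak_bandwidth_gb_s(transfer_rate_mt_s: float, bus_width_bits: int) -> float:
    bytes_per_transfer = bus_width_bits / 8
    return transfer_rate_mt_s * 1e6 * bytes_per_transfer / 1e9

ddr3 = peak_bandwidth_gb_s(2133, 256)    # ~68.3 GB/s -- the shared, CPU-coherent pool
gddr5 = peak_bandwidth_gb_s(5500, 256)   # ~176 GB/s  -- the PS4's unified pool

print(f"DDR3-2133 on a 256-bit bus:      {ddr3:.1f} GB/s")
print(f"GDDR5 5.5 GT/s on a 256-bit bus: {gddr5:.1f} GB/s")
```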



#4 vcfan

vcfan

    Straight Ballin'

  • Tech Issues Solved: 3
  • Joined: 12-June 11

Posted 29 June 2013 - 00:41

This Sony guy just admitted the eDRAM method does indeed give you more bandwidth, but it's more complicated. Just because Sony doesn't know how to make it easier for developers to use doesn't mean Microsoft doesn't. And Microsoft isn't even using eDRAM; they are using eSRAM, which is way faster.



#5 HawkMan

HawkMan

    Neowinian Senior

  • Tech Issues Solved: 4
  • Joined: 31-August 04
  • Location: Norway
  • Phone: Nokia Lumia 1020

Posted 29 June 2013 - 00:45

The argument remains the same

 

The CPU cannot access that eSRAM at all; it is only there as a GPU cache.
Compute workloads can only run across the slower DDR3. If the GPU needs to return a result to the CPU, it has to be written to the common DDR3, as that is the only coherent memory in the system.

Truth be told, the Xbone's memory arrangement isn't *terrible*; there are a number of systems out there that use something similar. The PS4 with its hUMA configuration is just substantially better.

 

 

Do you know what's more important than memory architecture? A good API and SDK that allows you to make use of the system, or even makes use of the system for you.

 

MS has been making great SDKs that let you do this since the original Xbox, and if you don't purposely make use of it, it makes use of it for you. Sony's history of making good SDKs is... well, nonexistent.

 

Meanwhile, developers who pretty much lived in the Sony bubble and lived and breathed Sony (i.e. the Metal Gear Solid dude) are saying the difference is minimal and won't have any real effect.



#6 OP Motoko.

Motoko.

    Neowinian Senior

  • Joined: 04-November 09
  • Location: United States of America
  • OS: Windows 8 Pro

Posted 29 June 2013 - 00:54

This Sony guy just admitted the eDRAM method does indeed give you more bandwidth, but it's more complicated. Just because Sony doesn't know how to make it easier for developers to use doesn't mean Microsoft doesn't. And Microsoft isn't even using eDRAM; they are using eSRAM, which is way faster.

 

 

Do you know what's more important than memory architecture? A good API and SDK that allows you to make use of the system, or even makes use of the system for you.

 

MS has been making great SDKs that let you do this since the original Xbox, and if you don't purposely make use of it, it makes use of it for you. Sony's history of making good SDKs is... well, nonexistent.

 

Meanwhile, developers who pretty much lived in the Sony bubble and lived and breathed Sony (i.e. the Metal Gear Solid dude) are saying the difference is minimal and won't have any real effect.

 

Throwing ungodly amounts of bandwidth at a GPU does nothing for it unless the GPU actually has the execution resources to make use of it.
It's like installing an 8-lane highway in a town with only 12 people. You have plenty of wide open lanes, but you can never fill them.

The PS4 has more bandwidth and 50% more ALUs than the Xbone. It has that high bandwidth because it actually has the GPU to use it.
A reference Radeon 7850 with 16 GCN engines has 153.6 GB/s of memory bandwidth.
The PS4's GPU with 18 GCN engines has 176 GB/s of memory bandwidth.
The Xbone only has 12 GCN engines. Giving 12 GCN engines 100,000,000 GB/s of memory bandwidth will literally not improve their performance at all over even 150 GB/s.
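
Putting those numbers side by side as bandwidth per compute unit makes the point concrete. This is only a crude sketch: peak bandwidth per GCN CU ignores caches and access patterns, and the Xbox One entry uses ~192 GB/s as the rumoured figure this thread is about, which is an assumption rather than a confirmed spec:

```python
# Crude bandwidth-per-compute-unit comparison using the figures quoted above.
# Peak numbers only; real utilisation depends on caches, access patterns and
# the workload. The Xbox One entry uses the rumoured (unconfirmed) bandwidth.

rumoured_xbone_bw_gb_s = 192.0  # figure from the rumour under discussion; unconfirmed

gpus = {
    "Radeon 7850 (16 GCN CUs)":  (153.6, 16),
    "PS4 GPU (18 GCN CUs)":      (176.0, 18),
    "Xbox One GPU (12 GCN CUs)": (rumoured_xbone_bw_gb_s, 12),
}

for name, (bandwidth_gb_s, compute_units) in gpus.items():
    print(f"{name}: {bandwidth_gb_s / compute_units:.1f} GB/s per CU")
```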



#7 vcfan

vcfan

    Straight Ballin'

  • Tech Issues Solved: 3
  • Joined: 12-June 11

Posted 29 June 2013 - 01:08

Throwing ungodly amounts of bandwidth at a GPU does nothing for it unless the GPU actually has the execution resources to make use of it.
It's like installing an 8-lane highway in a town with only 12 people. You have plenty of wide open lanes, but you can never fill them.

The PS4 has more bandwidth and 50% more ALUs than the Xbone. It has that high bandwidth because it actually has the GPU to use it.
A reference Radeon 7850 with 16 GCN engines has 153.6 GB/s of memory bandwidth.
The PS4's GPU with 18 GCN engines has 176 GB/s of memory bandwidth.
The Xbone only has 12 GCN engines. Giving 12 GCN engines 100,000,000 GB/s of memory bandwidth will literally not improve their performance at all over even 150 GB/s.

 

 

LOL, so now anything over 150 GB/s is useless... Right. Now that the tables have turned, and it turns out the Xbox One has more bandwidth, all of a sudden it doesn't matter.

 

And a lack of bandwidth caps your card; it doesn't matter if it has 100,000,000 GCN engines. Unless you have benchmarks, you cannot speak of performance. We know Microsoft had the eDRAM configuration last gen, and developers didn't have to jump through crazy hoops to develop games. Microsoft has the right tools to take advantage of their configuration. Microsoft also had an ASIC last gen that took a lot of computation-intensive elements away from the GPU, such as MSAA, alpha blending, and Z-buffering. You don't have the die shots down to the gate and transistor levels.

 

You are also forgetting the fact that the Xbox One will have a custom HD audio engine chip that will free up a whole bunch of CPU cycles. This matters a whole lot when it comes to actual performance.

 

In that video, Sony basically just admitted they used the easier method because they don't want to overcomplicate things for developers, since they've proven last gen that they don't have the software know-how to take advantage of such complex systems. They just did the exact same thing with their PS4 chip, taking the easy way out by using 18 GCNs. Microsoft took the complicated way last time around by using the eDRAM configuration and some other custom logic on their chip. It looks like they are doing the same today, and by the sounds of it, with stuff like this latest news coming out, I wouldn't be surprised if the Xbox One ends up being the one with the much superior and better-performing hardware.



#8 yardmanflex

yardmanflex

    Neowinian

  • Joined: 29-October 03
  • Location: Bronx, NY
  • OS: Windows 8.1
  • Phone: Nokia Lumia 920

Posted 29 June 2013 - 01:16

LOL, so now anything over 150 GB/s is useless... Right. Now that the tables have turned, and it turns out the Xbox One has more bandwidth, all of a sudden it doesn't matter.

 

And a lack of bandwidth caps your card; it doesn't matter if it has 100,000,000 GCN engines. Unless you have benchmarks, you cannot speak of performance. We know Microsoft had the eDRAM configuration last gen, and developers didn't have to jump through crazy hoops to develop games. Microsoft has the right tools to take advantage of their configuration. Microsoft also had an ASIC last gen that took a lot of computation-intensive elements away from the GPU, such as MSAA, alpha blending, and Z-buffering. You don't have the die shots down to the gate and transistor levels.

 

You are also forgetting the fact that the Xbox One will have a custom HD audio engine chip that will free up a whole bunch of CPU cycles. This matters a whole lot when it comes to actual performance.

 

In that video, Sony basically just admitted they used the easier method because they don't want to overcomplicate things for developers, since they've proven last gen that they don't have the software know-how to take advantage of such complex systems. They just did the exact same thing with their chip, taking the easy way out by using 18 GCNs. Microsoft took the complicated way last time around by using the eDRAM configuration and some other custom logic on their chip.

LOL, you see how fanboys can turn ish in their favour...



#9 Blackhearted

Blackhearted

    .....

  • Joined: 26-February 04
  • Location: Ohio
  • Phone: Samsung Galaxy S2 (VM)

Posted 29 June 2013 - 01:26

Throwing ungodly amounts of bandwidth at a GPU does nothing for it unless the GPU actually has the execution resources to make use of it.
It's like installing an 8-lane highway in a town with only 12 people. You have plenty of wide open lanes, but you can never fill them.

The PS4 has more bandwidth and 50% more ALUs than the Xbone. It has that high bandwidth because it actually has the GPU to use it.
A reference Radeon 7850 with 16 GCN engines has 153.6 GB/s of memory bandwidth.
The PS4's GPU with 18 GCN engines has 176 GB/s of memory bandwidth.
The Xbone only has 12 GCN engines. Giving 12 GCN engines 100,000,000 GB/s of memory bandwidth will literally not improve their performance at all over even 150 GB/s.

 

Don't bother trying to explain anything technical to those on team Xbox. As we've seen in the big thread about the PS4/One specs, it's largely a waste of effort, as they don't care much about anything other than their own theories.

 

 

LOL, so now anything over 150 GB/s is useless... Right. Now that the tables have turned, and it turns out the Xbox One has more bandwidth, all of a sudden it doesn't matter.

 

And a lack of bandwidth caps your card; it doesn't matter if it has 100,000,000 GCN engines. Unless you have benchmarks, you cannot speak of performance. We know Microsoft had the eDRAM configuration last gen, and developers didn't have to jump through crazy hoops to develop games. Microsoft has the right tools to take advantage of their configuration. Microsoft also had an ASIC last gen that took a lot of computation-intensive elements away from the GPU, such as MSAA, alpha blending, and Z-buffering. You don't have the die shots down to the gate and transistor levels.

 

You are also forgetting the fact that the Xbox One will have a custom HD audio engine chip that will free up a whole bunch of CPU cycles. This matters a whole lot when it comes to actual performance.

 

On a GPU of the One's power, 150 GB/s is a bit higher than necessary, and anything more than that is definitely more than it could take full advantage of. There's no being a fan of one side or another about it; it's just a simple fact. Another fact is that even if the increased bandwidth gives it a little boost in performance, it still won't make it suddenly able to match the peak performance of a box with a noticeably more powerful GPU.

 

Also, this isn't the '90s anymore. The difference a dedicated audio chip will make to performance on a modern computer or game console is minuscule.



#10 OP Motoko.

Motoko.

    Neowinian Senior

  • Joined: 04-November 09
  • Location: United States of America
  • OS: Windows 8 Pro

Posted 29 June 2013 - 01:38

Don't bother trying to explain anything technical to those on team Xbox. As we've seen in the big thread about the PS4/One specs, it's largely a waste of effort, as they don't care much about anything other than their own theories.

That's fine; at least those who want to read up and remain objective will have the info readily available for them to see.



#11 vcfan

vcfan

    Straight Ballin'

  • Tech Issues Solved: 3
  • Joined: 12-June 11

Posted 29 June 2013 - 01:40

On a GPU of the One's power, 150 GB/s is a bit higher than necessary, and anything more than that is definitely more than it could take full advantage of.

 

Really? Care to give some technical examples to show us why 150 GB/s is more than necessary?

 

There's no being a fan of one side or another about it; it's just a simple fact. Another fact is that even if the increased bandwidth gives it a little boost in performance, it still won't make it suddenly able to match the peak performance of a box with a noticeably more powerful GPU.

 

Again, GCNs are just one aspect of a GPU, just like bus width, RAM configuration, and other things that we don't know. There are examples from last gen that show there are more customized elements of GPUs that massively affect performance.

 

Also, this isn't the '90s anymore. The difference a dedicated audio chip will make to performance on a modern computer or game console is minuscule.

 

Are you kidding? We're not talking about a dumb buffer fill like a CPU-assisted sound card. Try, for example, using some audio processing plugins in your digital audio workstation, and wait for your powerful CPU to process that audio signal. Yeah, not that simple.



#12 OP Motoko.

Motoko.

    Neowinian Senior

  • Joined: 04-November 09
  • Location: United States of America
  • OS: Windows 8 Pro

Posted 29 June 2013 - 01:45

LOL, so now anything over 150 GB/s is useless... Right. Now that the tables have turned, and it turns out the Xbox One has more bandwidth, all of a sudden it doesn't matter.

 

And a lack of bandwidth caps your card; it doesn't matter if it has 100,000,000 GCN engines. Unless you have benchmarks, you cannot speak of performance. We know Microsoft had the eDRAM configuration last gen, and developers didn't have to jump through crazy hoops to develop games. Microsoft has the right tools to take advantage of their configuration. Microsoft also had an ASIC last gen that took a lot of computation-intensive elements away from the GPU, such as MSAA, alpha blending, and Z-buffering. You don't have the die shots down to the gate and transistor levels.

 

You are also forgetting the fact that the Xbox One will have a custom HD audio engine chip that will free up a whole bunch of CPU cycles. This matters a whole lot when it comes to actual performance.

 

In that video, Sony basically just admitted they used the easier method because they don't want to overcomplicate things for developers, since they've proven last gen that they don't have the software know-how to take advantage of such complex systems. They just did the exact same thing with their PS4 chip, taking the easy way out by using 18 GCNs. Microsoft took the complicated way last time around by using the eDRAM configuration and some other custom logic on their chip. It looks like they are doing the same today, and by the sounds of it, with stuff like this latest news coming out, I wouldn't be surprised if the Xbox One ends up being the one with the much superior and better-performing hardware.

Just to answer:

 

The XBone is 60% less powerful than the PS4 in processing; adding 30 times more bandwidth won't help for jack because of that. There's also the 3 GB of RAM restricted to the OS at all times, and 2 for Kinect. What developers have asked for the most is more RAM.


Even if your embedded memory bandwidth is 1000 TB/s, it's still 32 MB, and it's isolated from the main RAM pool, meaning you're going to have to go through those "move engine" co-processors to get there. The more I learn about the XBone's design, the more I think the move engines are really going to be its Achilles' heel. If you're shuffling data in and out of 32 MB of what is essentially glorified cache, those move engines are going to have to be cranking a mile a minute. Developers will likely have to reprogram them to suit their needs depending on the game, and they WILL cause a bottleneck no matter how you slice it.
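
To get a feel for what the move engines would be up against, here is a small sketch. It assumes 32 MB of eSRAM and the commonly reported ~68 GB/s DDR3 pool on the other side of the copy, and it ignores contention entirely, so it is a best case, not a measurement:

```python
# How long does it take to stream the entire 32 MB embedded pool in from DDR3?
# Assumes the commonly reported ~68.3 GB/s DDR3 peak and a contention-free copy,
# so this is an optimistic best case rather than a measurement.

esram_bytes = 32 * 1024**2      # the 32 MB embedded pool
ddr3_bw_bytes_s = 68.3e9        # commonly reported DDR3 peak bandwidth (assumption)
frame_budget_s = 1 / 60         # a 60 fps frame budget

fill_time_s = esram_bytes / ddr3_bw_bytes_s
print(f"One full 32 MB fill: {fill_time_s * 1e3:.2f} ms")
print(f"Maximum full refills per 60 fps frame: {frame_budget_s / fill_time_s:.1f}")
```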



#13 AR556

AR556

    Neowinian Senior

  • Joined: 07-August 03

Posted 29 June 2013 - 01:53

Serious question: Do games these days (or in the near future) on a high-end PC or console actually take that kind of bandwidth and make use of it?



#14 +Brandon Live

Brandon Live

    Seattle geek

  • Joined: 08-June 03
  • Location: Seattle, WA

Posted 29 June 2013 - 01:58

It's simply not true. One of the Xbox 360's greatest advantages was its 10 MB eDRAM buffer. It was not difficult to use, especially at 720p. Even at 1080p, with some extra legwork, it was very effective. The thing to keep in mind is that it's tailored to be used for the most memory-bandwidth-intensive tasks the system performs, which are those with high-throughput, low-size requirements, namely the frame buffer. This frees the main memory to have ample bandwidth for everything else, which tends to be tasks requiring a lot of low-latency memory but not necessarily all that much bandwidth. These are generalizations, of course, but the success of this model in the 360, and the fact that the XB1 has a larger, faster chunk of it (enough for a 1080p frame buffer without any fancy tricks), is encouraging.
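
A quick size check supports the "1080p frame buffer fits" point. This sketch assumes a 32-bit colour target plus a 32-bit depth/stencil buffer and no MSAA, which is an assumption about the render-target format rather than a statement about any particular game:

```python
# Does a simple frame buffer fit in the embedded memory? Assumes 4 bytes/pixel
# of colour plus 4 bytes/pixel of depth/stencil and no MSAA.

def framebuffer_mb(width: int, height: int, bytes_per_pixel: int = 8) -> float:
    return width * height * bytes_per_pixel / 1024**2

print(f"720p colour+depth:  {framebuffer_mb(1280, 720):.1f} MB")   # ~7.0 MB vs the 360's 10 MB eDRAM
print(f"1080p colour+depth: {framebuffer_mb(1920, 1080):.1f} MB")  # ~15.8 MB vs the XB1's 32 MB eSRAM
```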

 

Also, take note of the DX 11.2 work discussed at Build this week. One of the big focuses was on making drastically more efficient use of graphics memory via an impressive new tiled resource architecture. Don't think for a second that they didn't design the XB1 with this in mind.



#15 helios01

helios01

    Neowinian

  • Joined: 30-April 07

Posted 29 June 2013 - 02:04

Throwing ungodly amounts of bandwidth at a GPU does nothing for it unless the GPU actually has the execution resources to make use of it.
 

 And what makes you think that it doesn't...  :s

 

In the same video posted, Mark Cerny states that the GPU is doing far more than just graphics; GPUs can do so much more nowadays.