Rumored Xbox One memory boost won't make a difference



Just look at various graphics cards. You don't see much difference, if any, among cards of roughly the same power as the One's GPU once they're sporting 150 GB/s or more of bandwidth. There's a reason for that, and it should be pretty obvious.

It's been proven that a graphics card of a given power will give better fps in a console than a similar card in a desktop PC, because of the many bottlenecks along the PC GPU's path, especially in software. Therefore comparing such things is useless. Unless you've got some GPU profiler data to show how the bandwidth is not bottlenecking the fps, you have no leg to stand on.

 

Yes, compute is one aspect, that's true, but when you cut down that part of a GPU you usually cut down other parts in the process. Again, look at any modern PC GPU for examples if you're unsure.

 

No, when you cut down that part of the GPU, you're making space to add another part. What you said makes absolutely no sense.

 

 

Too bad this is a game console and not a digital audio workstation. If it were the latter you might actually have some kind of point.

 

Huh? I'm showing you how audio processing sucks up CPU cycles and takes time to process. That's what the example with the DAW was about. Games don't have HD surround? 3D audio effects? You can have better audio effects with the Xbox One's SHAPE chip without affecting the CPU. This is a pretty big deal.
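Just to put a rough number on it (a back-of-envelope sketch only; the voice count and filter size below are assumptions for illustration, not Xbox One figures):

```python
# Rough cost of doing game audio DSP in software (all figures illustrative).
SAMPLE_RATE = 48_000   # samples per second per voice
VOICES      = 64       # simultaneous sounds being mixed
FIR_TAPS    = 128      # one modest per-voice filter/convolution stage

macs_per_second = VOICES * SAMPLE_RATE * FIR_TAPS
print(f"{macs_per_second / 1e9:.2f} billion multiply-accumulates per second")
# ~0.39 GMAC/s for a single filter stage, before mixing, reverb, HRTF, etc. -
# a constant background load that a dedicated audio block can take off the CPU.
```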


The most benefit that eSRAM will be able to provide is the framebuffer advantage that the eDRAM provided on the 360, but that tech is going to be essentially a moot point now because the PS4's unified RAM is more than up to the task of a 1080p framebuffer. If you wanted to do anything ELSE with it, the amount of transferring in and out of main RAM you'd have to do (particularly for CPU tasks exceeding the cache, because the CPU has no direct access to the eSRAM) would essentially nullify any sort of benefit.

 

The eDRAM in the 360 is only there for use as a frame buffer. It can essentially provide free AA. It is not even remotely similar to the eSRAM employed in the Xbone.

Just to clarify further: the eDRAM of the Xbox 360 is not just memory, it also handles half the drawing work of the GPU. Besides the 10 MB of memory, there are 192 ROPs embedded on the memory die itself, which is why the "free" MSAA. The memory draws into itself and asks the GPU for the pixel-shaded data.

Now the Xbone's memory is just memory, so no free MSAA for it.
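For a rough sense of scale (a back-of-envelope sketch; the render-target formats are illustrative assumptions, not either console's actual setup):

```python
# Back-of-envelope render-target sizes (illustrative assumptions only).
def target_mb(width, height, bytes_per_pixel, samples=1):
    return width * height * bytes_per_pixel * samples / (1024 * 1024)

# 360-era case: 720p colour + depth at 4x MSAA
c720 = target_mb(1280, 720, 4, samples=4)   # ~14.1 MB
d720 = target_mb(1280, 720, 4, samples=4)   # ~14.1 MB
print(f"720p 4xMSAA colour+depth: {c720 + d720:.1f} MB vs 10 MB eDRAM")

# 1080p colour + depth, no MSAA
c1080 = target_mb(1920, 1080, 4)            # ~7.9 MB
d1080 = target_mb(1920, 1080, 4)            # ~7.9 MB
print(f"1080p colour+depth: {c1080 + d1080:.1f} MB vs 32 MB eSRAM")
```

So a plain 1080p colour/depth pair fits comfortably in 32 MB, but add MSAA or a fat deferred G-buffer and you spill out into main RAM, which is where the transfer overhead argument above comes in.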

 

I never said the Xbox One is going to have penalty-free AA and all those other things. I'm showing you real-life examples where counting compute units alone doesn't tell you actual performance, because the Xbox 360 had this configuration where it got certain functions penalty-free, whereas if the same work were done on the GPU itself it would eat up a lot of its computation time, and I backed this up with benchmarks of GPUs doing this stuff and having their framerates cut in half. The point I'm trying to make is that we can't compare the actual performance of a chip based on theoretical computation power, because other aspects of the chip can affect performance, as we saw last gen with the 360.


Ok, what the heck is the point of throwing all of these numbers around? I have heard everything from 33% to 60% 'power' difference between the two consoles. The way you guys argue over the numbers makes me think you either don't know what the real situation is or just love rubbing numbers in someone else's face.

 

If this rumor is true, then why get all worked up over it? Why make a thread immediately talking about how it sucks and doesn't do anything? It's as if a Sony fan ran in, got angry seeing the rumor, and just had to run it down. I'm going to assume the OP isn't a fanboy, though, and just wants to share some info. I just don't get the point.

 

I didn't see anyone running out saying how this rumor changed everything and suddenly the X1 is 100x more powerful. Most of the comments I've seen amount to 'cool, increasing performance is always good'.

 

The technical reasons for this being useful are fine to debate, but that only works if we know how the entire system works. Early on, Anandtech did what I consider the best review of the hardware config as we know it, laying out the pluses and minuses for both systems. Their conclusion amounted to everything being a wash apart from Sony's stronger GPU, which they pegged at ~50%. They considered both RAM systems on par performance-wise, but stressed that the PS4 config was simpler. Even after their analysis, they pointed out that we still don't know enough about either system to be sure.

 

In my view, if MS is going to alter the hardware in a positive way, that's good news. The fact that they would put effort into making a change makes me think they had a reason to do so, something that helped the system. Does this change the raw power difference? Not with regard to the GPU, but the rest of the system? I really don't know, but I don't see the reason for the rush to denounce a rumor.

Link to comment
Share on other sites

We had this same argument with PS3 vs. Xbox 360. Last time, Sony told us their console was better because it had more power, would be able to run games at 1080p, and would win that console war.

I think they're basically the same overall on power, and on price once you subtract Kinect, so IMO the question is: which console, based on the specs it releases with, will be able to reduce its price the fastest?

It seems like the Xbox One will be at an advantage here.


I like how people have been sidestepping the whole tiled-resources feature that's part of DX11.2 and used in the XB1. The way they showed it off, and the way I've come to understand it, it will allow developers to bring highly detailed textures into their games as needed on screen without having to pack them all into the GPU's memory. In those moments you don't need super-fast memory bandwidth, because you're grabbing chunks as needed rather than filling up all 8 GB and stuffing it through, as is normally the case.

 

I'm sure there are more technical ways to explain it, but it's clear that with the new ability in DX11.2, plus the faster eSRAM and the fancy new move engines, developers should be able to bring the same level of detail to their games on both systems in the end, even with slower DDR3 compared to GDDR5.
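Purely as a conceptual sketch of the idea (this is not the actual D3D11.2 API; the 64 KB tile size is standard for tiled resources, but the scenario and numbers are made up for illustration):

```python
# Conceptual sketch of tiled resources / partial residency (illustrative only).
TILE_BYTES = 64 * 1024   # D3D tiled resources manage textures in 64 KB tiles

def bytes_streamed(visible_tiles, total_tiles):
    """Bandwidth cost of streaming only the visible tiles vs. the whole texture."""
    return visible_tiles * TILE_BYTES, total_tiles * TILE_BYTES

# e.g. a ~128 MB mega-texture is ~2048 tiles, but a given camera view
# might only actually sample a few hundred of them.
tiled, full = bytes_streamed(visible_tiles=300, total_tiles=2048)
print(f"tiled: {tiled / 2**20:.0f} MB vs full upload: {full / 2**20:.0f} MB")
```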


There is not going to be a whole lot of difference graphically between the One and the PS4. They are very similar architectures this time around. If you really care about graphics, just get a PC; two years from now I am sure it will blow both consoles out of the water.


You guys are leaving a very important point out of this discussion.

EVERY game shown on the X1 is 60fps. That includes Forza 5, MGS5, BF4, Ryse.

Every game on the PS4 struggles to hit 30fps. I was planning on buying both, but unfortunately, if they don't fix the frame rate, it's just the X1 for me.

http://www.eurogamer.net/articles/digitalfoundry-hands-on-with-playstation-4

It's not all about cold hard facts, but there is one fact: the eSRAM makes a huge difference. It's ignorance, and simply being a fanboy, to disregard it.


Throwing ungodly amounts of bandwidth at a GPU does nothing for it unless the GPU actually has the execution resources to make use of it.

It's like installing an 8-lane highway in a town with only 12 people. You have plenty of wide-open lanes, but you can never fill them.

The PS4 has more bandwidth and 50% more ALUs than the Xbone. It has that high bandwidth because it actually has the GPU to use it.

A reference Radeon 7850 with 16 GCN engines has 153.6 GB/s memory bandwidth.

The PS4's GPU with 18 GCN engines has 176 GB/s memory bandwidth.

The Xbone only has 12 GCN engines. Giving 12 GCN engines 100,000,000 GB/s memory bandwidth will literally not improve their performance at all over even 150 GB/s.
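To put rough numbers behind that (a sketch only; the 800 MHz clock is the rumored figure that gets disputed later in this thread, and the peak-rate formula just assumes the usual GCN 64 lanes doing one FMA per cycle per CU):

```python
# Peak single-precision GFLOPS for a GCN GPU: CUs * 64 lanes * 2 ops (FMA) * clock.
def peak_gflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz

ps4   = peak_gflops(18, 0.8)   # ~1843 GFLOPS
xbone = peak_gflops(12, 0.8)   # ~1229 GFLOPS (clock is only the rumored figure)

# Hypothetically feed BOTH GPUs the PS4's 176 GB/s and compare bandwidth per FLOP:
print(f"PS4:   {176 / ps4:.3f} bytes per FLOP")    # ~0.095
print(f"Xbone: {176 / xbone:.3f} bytes per FLOP")  # ~0.143 - the smaller GPU is
                                                   # already better fed, so piling
                                                   # on more bandwidth buys little
```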

 

You do realize the CPU and GPU of both these machines have far more processing power than their bandwidth can feed? But hey, sure, make up arbitrary random reasons for your "opinion".

 

Not sure why you quoted me in there since you didn't respond to my post at all. 


Just to answer

 

The Xbone is 60% less powerful than the PS4 in processing; adding 30 times more bandwidth won't help jack because of that. Also, 3 GB of RAM is restricted to the OS at all times, and 2 for Kinect. What developers have asked for the most is more RAM.

If your embedded memory bandwidth is 1000 TB/sec, it's still 32 MB and it's isolated from the main RAM pool, meaning you're gonna have to go through those "move engine" co-processors to get there. The more I learn about the Xbone's design, the more I think the move engines are really gonna be its Achilles' heel. If you're shuffling data in and out of 32 MB of what is essentially glorified cache, those move engines are going to have to be cranking a mile a minute - developers will likely have to reprogram them to suit their needs depending on the game, and they WILL cause a bottleneck no matter how you slice it.
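As a rough illustration of that shuffling problem (a sketch with placeholder bandwidth figures, since the real throughput feeding the eSRAM isn't something this thread has pinned down):

```python
# How many times could 32 MB of eSRAM be completely refilled in one 60 fps frame,
# for a given transfer bandwidth into it? (Bandwidth values are placeholders.)
ESRAM_GB = 32 / 1024
FRAME_S  = 1 / 60            # ~16.7 ms per frame at 60 fps

def refills_per_frame(bandwidth_gb_per_s):
    return FRAME_S / (ESRAM_GB / bandwidth_gb_per_s)

for bw in (25, 50, 100):     # hypothetical DMA / main-memory rates in GB/s
    print(f"{bw} GB/s -> {refills_per_frame(bw):.0f} full 32 MB swaps per frame")
```

However fast the eSRAM is internally, the working set you can cycle through it each frame is capped by whatever feeds it, which is exactly the bottleneck being described above.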

 

Oh, 60% now. Damn, when they originally did the calculations it was 40%; the fanboys quickly inflated it to 50%, and now we're up to 60%. By launch time we'll be at 100%. And the PS4 will still launch with first-party games struggling to maintain a consistent 30 FPS at 1080p, while third-party devs on the Xbox One flow freely along at 1080p60 without frame drops, thanks to the eSRAM and to dev tools and SDKs that actually let developers make use of the hardware, as opposed to "we didn't bother to make a good dev kit or a proper graphics solution, so we just dumped a low-level coding language on the developers so they have to build everything themselves and figure out how to optimize it". I'm sure in a couple of years the first-party devs will have optimized their first-generation game engines to operate at the same level as the first-generation launch engines on the Xbox One; of course by then those devs will be on second- and third-gen engines running at even higher efficiency.

 

But quality SDKs and APIs are of no importance, right... not like Sony hasn't shown this three times already...


The most benefit that eSRAM will be able to provide is the framebuffer advantage that the eDRAM provided on the 360, but that tech is going to be essentially a moot point now because the PS4's unified RAM is more than up to the task of a 1080p framebuffer. If you wanted to do anything ELSE with it, the amount of transferring in and out of main RAM you'd have to do (particularly for CPU tasks exceeding the cache, because the CPU has no direct access to the eSRAM) would essentially nullify any sort of benefit.

 

The eDRAM in the 360 is only there for use as a frame buffer. It can essentially provide free AA. It is not even remotely similar to the eSRAM employed in the Xbone.

Just to clarify further: the eDRAM of the Xbox 360 is not just memory, it also handles half the drawing work of the GPU. Besides the 10 MB of memory, there are 192 ROPs embedded on the memory die itself, which is why the "free" MSAA. The memory draws into itself and asks the GPU for the pixel-shaded data.

Now the Xbone's memory is just memory, so no free MSAA for it.

 

The eDRAM on the 360, like the eSRAM on the One, can do more than that. In Halo, for example, it was used by the three-pass renderer (multi-pass rendering is how high-end CGI hybrid raytracers operate, by the way, since you can get much better quality much faster) to blend the multiple sub-frames into one frame. To do real-time multi-pass rendering you NEED a solution like the Xbox's eDRAM/eSRAM.

You guys are leaving a very important point out of this discussion.

EVERY game shown on the X1 is 60fps. That includes Forza 5, MGS5, BF4, Ryse.

Every game on the PS4 struggles to hit 30fps. I was planning on buying both, but unfortunately, if they don't fix the frame rate, it's just the X1 for me.

http://www.eurogamer.net/articles/digitalfoundry-hands-on-with-playstation-4

It's not all about cold hard facts, but there is one fact: the eSRAM makes a huge difference. It's ignorance, and simply being a fanboy, to disregard it.

 

And that's what happens when one console is developed by a company with extensive software knowledge that has been making SDKs, developer tools, and a well-optimized compiler for ages, versus a company so bad at software and SDKs that it just threw developers a low-level SDK, making each individual developer figure out the things that should have been figured out for them and optimized in the SDK.


On a GPU with 12 GCN engines clocked at just 800 MHz, yes. That is a fact. Increasing memory bandwidth does literally nothing if the GPU itself can't process workloads fast enough to consistently fill the memory.

No, 800 MHz is not a fact. You just pulled that number out of thin air.


Seeing how quickly MS changed the whole DRM thing, you'd think that if they wanted to boost the GPU a bit they could clock it higher. It depends on yields and all that, but 800 MHz, though I don't know if that's true or not, is probably on the safe side to keep heat down. So who knows? I see no reason why they can't change that. Either way, with the changes in DX11.2 I don't think the bandwidth difference will matter at all; the new hardware-supported tiling feature, if used right, can make up for that difference.


This topic is now closed to further replies.