Witcher 3 dev: No major power diff. Hidden XBO power?



I love how the hypothetical argument right now is developers unlocking more power from the One as time goes on while the PS4 remains the same.

Both consoles will grow and flourish over time; the cloud will be a big weapon in that.


I love how the hypothetical argument right now is developers unlocking more power from the One as time goes on while the PS4 remains the same.

Except we know that the PS4 has a bog-standard chipset, whereas the Xbox is very custom, with several components that would enhance performance a lot but aren't in use yet.

Both consoles would get increased "power" over time naturally. It's just that we know there are significant elements of the Xbox not being used yet. And you know this, so I don't see why you're making arguments you know are pointless.


I love how the hypothetical argument right now is developers unlocking more power from the One as time goes on while the PS4 remains the same.

There is another hypothetical argument as well:

That both consoles will show little improvement over time, because their hardware is so close to standard PC parts.

I don't know who is right in all this back and forth. All I know is that no one here can confirm anything; we all have to wait and see.

Personally, I believe that both consoles will improve over time, maybe not as much as in previous generations, but to some degree. My only question is whether the X1 ends up being able to offer most games at 1080p/60 if a dev chooses to target it. The PS4 could still offer better performance beyond that, but I think most are just looking for the consoles to hit that magic minimum number.


What if your CPU is a bottleneck? You could throw a Titan in there and it won't give you the performance you expect.

That depends on what you do with your CPU and your GPU. It's up to the developer to balance the loads. Nevertheless, 50% more shaders is 50% more GPU computational power - minus a slight clock speed advantage for Xbox One - so their comment comes off as evasive.
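
For the curious, those figures fall straight out of the public specs (PS4: 18 CUs = 1152 shader ALUs at 800 MHz; Xbox One: 12 CUs = 768 ALUs at 853 MHz). A quick back-of-the-envelope sketch:

```python
# Peak GPU throughput from the publicly quoted specs.
# Peak FLOPS = shader ALUs * 2 ops/cycle (fused multiply-add) * clock.

def peak_tflops(alus: int, clock_ghz: float) -> float:
    return alus * 2 * clock_ghz / 1000.0

ps4 = peak_tflops(alus=1152, clock_ghz=0.800)  # 18 CUs x 64 ALUs
xbo = peak_tflops(alus=768, clock_ghz=0.853)   # 12 CUs x 64 ALUs

print(f"PS4: {ps4:.2f} TFLOPS")               # ~1.84
print(f"XBO: {xbo:.2f} TFLOPS")               # ~1.31
print(f"PS4 advantage: {ps4 / xbo - 1:.0%}")  # ~41%, not a flat 50%
```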


That depends on what you do with your CPU and your GPU. It's up to the developer to balance the loads. Nevertheless, 50% more shaders is 50% more GPU computational power - minus a slight clock speed advantage for Xbox One - so their comment comes off as evasive.

It's not just what you do with the CPU, it's what the CPU can do. If there is a limit to the number of draw calls you can make, then it doesn't matter even if you have a million shaders. That's the difference between paper specs and real-world performance. That's four huge developers so far who have come out and made the exact same claim: this guy, Kojima, Carmack, and the Resident Evil creator. I'm sure they've run their benchmarks and whatnot already.


That depends on what you do with your CPU and your GPU. It's up to the developer to balance the loads. Nevertheless, 50% more shaders is 50% more GPU computational power - minus a slight clock speed advantage for Xbox One - so their comment comes off as evasive.

 

But again, you can't just rip out the GPU, say this GPU is 50% faster (theoretically; it doesn't translate to even near 50% more power for graphics at any rate), and say that one console has 50% more graphics power than the other, when they both use APUs and one of them has a vastly more complex APU that changes the way the whole system works internally, optimizing the resources that are there to a whole other level.

 

Right now what you see are either games that look and perform the same on both, or games with higher res and FPS drops on the PS4 versus lower res and stable FPS on the One. Right now there's no proof of any extra power on the PS4, just different utilization, and both consoles have games that are vastly underperforming and under-optimized.


That depends on what you do with your CPU and your GPU. It's up to the developer to balance the loads. Nevertheless, 50% more shaders is 50% more GPU computational power - minus a slight clock speed advantage for Xbox One - so their comment comes off as evasive.

Exactly. Sony's approach is definitely the brute-force GPU approach, with a heavy focus on GPGPU for particles, audio and even AI. This is evident from the sheer size of the GPU and the GDDR memory, which CPUs can't stand, especially APUs. Whereas the X1 definitely hasn't taken that approach, instead going the route of offloading a lot of tasks away from the GPU/CPU: audio, memory movement, etc. This is evident with SHAPE, the 4 move engines (DMEs), and the 15 co-processors. The X1 definitely has the CPU edge with the higher clock speed and DDR3 RAM.

 

In no way can you cater for some of the spec differences in the GPU (32 ROPs, for example), but then again, you can't make up for the CPU either. Swings and roundabouts.


There is clearly a difference in power between the two consoles. That is a fact at this point. Xbox One games would not be running at a lower resolution, i.e. sub-1080p, if that were not the case. Whether CD Projekt will take advantage of that is another thing, but it appears that they have not reached that part of production yet: the optimization part.

Yes, and that clearly explains why BF4 and The Order on PS4 are running sub-1080p, right? As you said, 1080p is everything.

Yes, and that clearly explains why BF4 and The Order on PS4 are running sub-1080p, right? As you said, 1080p is everything.

 

I cannot comment on "The Order", as I have not heard anything, but regarding Battlefield 4: the game was completely broken and was not ready for release. It recently came to light that EA had DICE push the game out before it was ready in order to beat Call of Duty. Now, when the PC versions were almost unplayable, how do you think the PS4 version, with minimal optimization, would turn out?

 

Anyway, my point was that there is a power difference.


Any significant 'power' difference will show up in games down the line, once developers have done all they can to squeeze performance out of a console.

For anyone to take launch titles and hold them up as what a console can really do is just short-sighted, in my opinion.

Oh, and let's try to remember that we should not be shocked if both the PS4 and the X1 continue to have sub-1080p/60 games, as developers decide that hitting that resolution mark is not as important as other things. I think it's more likely that exclusive titles for both hit 1080p as we go along than 3rd-party multiplatform titles.


I cannot comment on "The Order", as I have not heard anything, but regarding Battlefield 4: the game was completely broken and was not ready for release. It recently came to light that EA had DICE push the game out before it was ready in order to beat Call of Duty. Now, when the PC versions were almost unplayable, how do you think the PS4 version, with minimal optimization, would turn out?

 

Anyway, my point was that there is a power difference.

You just admitted that BF4 is broken and rushed. COD doesn't run well at 1080p either: it runs at an unlocked 60fps, with frame drops and big gaps between frames that cause judder, whereas it runs a nice, smooth, locked 60fps at 720p on XBO. The only other game with resolution differences between the two consoles is Assassin's Creed, a last-gen-looking game whose developers optimized the PS4 version specifically to hit 1080p after the game had gone gold, then put out a release basically bragging about it, only because they have an agreement with Sony (they didn't even advertise the Xbox One version in some marketing material, although the Xbox 360 and PS3 versions were present). Every other multiplat is pretty much identical.


It's not just what you do with the CPU, it's what the CPU can do. If there is a limit to the number of draw calls you can make, then it doesn't matter even if you have a million shaders.

Are you suggesting the PS4's CPU is insufficient to drive its GPU? It doesn't take any more draw calls to render a 1600x900 picture than a 1920x1080 one, but it does take about 40% more shader calculations. It doesn't take any more draw calls to do 2x or 4x MSAA. There's a lot you can do without necessarily increasing the number of draw calls. Besides, consoles have low-level, low-latency APIs with very little overhead in terms of draw calls. The issue is not nearly what it is on PC, and on PC we still see great scaling with GPU power on just about any game benchmark you can find.


But again, you can't just rip out the GPU, say this GPU is 50% faster (theoretically; it doesn't translate to even near 50% more power for graphics at any rate), and say that one console has 50% more graphics power than the other, when they both use APUs and one of them has a vastly more complex APU that changes the way the whole system works internally, optimizing the resources that are there to a whole other level.

50% more shaders is 50% more computational power on the GPU. Taking the clock speed difference into account, it's more like 41%, but hey, that's approximately the area difference between a 900p framebuffer and a 1080p one - or between a 720p and a 900p one. And we already have several examples of cross-platform titles exhibiting exactly these resolution differences on the consoles. This is not a coincidence.
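
To spell out the framebuffer arithmetic (these are just the standard resolutions; the only assumption is that per-frame shading cost scales roughly linearly with pixel count for a fixed shader):

```python
# Pixel counts for common render resolutions.
resolutions = {"720p": (1280, 720), "900p": (1600, 900), "1080p": (1920, 1080)}
pixels = {name: w * h for name, (w, h) in resolutions.items()}

print(f"1080p vs 900p: {pixels['1080p'] / pixels['900p'] - 1:.0%} more pixels")  # 44%
print(f"900p vs 720p:  {pixels['900p'] / pixels['720p'] - 1:.0%} more pixels")   # 56%
```

Both gaps sit in the same ballpark as the ~41% compute difference, which is why these particular resolution pairings keep showing up.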

 

Regarding "vastly more complex APU" that "internally optimizes the resources to a whole other level" - do you have any source on the PS4's detailed SoC layout? Or are you simply assuming it doesn't have something equivalent to the Xbox One's co-processors? Or are you referring to something else? If so what? 

 

Whereas the X1 definitely hasn't taken that approach, instead going the route of offloading a lot of tasks away from the GPU/CPU: audio, memory movement, etc. This is evident with SHAPE, the 4 move engines (DMEs), and the 15 co-processors.

Again, on what basis do you think the PS4 doesn't have equivalent hardware functionality?


This is evident from the sheer size of the GPU and the GDDR memory, which CPUs can't stand, especially APUs.

What do you mean by that? GDDR5 is exactly what an APU wants; these crave bandwidth above everything else.


 

Regarding "vastly more complex APU" that "internally optimizes the resources to a whole other level" - do you have any source on the PS4's detailed SoC layout? Or are you simply assuming it doesn't have something equivalent to the Xbox One's co-processors? Or are you referring to something else? If so what? 

 

Well, the SDK, the Sony info about the APU, the X-rays of the CPU, and so on.

 

As for the power: no, GPU power doesn't directly translate to graphics performance. You can buy an exactly twice-as-fast GPU for your computer, and it won't offer anywhere near that performance increase. Several tech sites showed this by using identical computers and inserting graphics cards with the same difference as between the Xbox One and PS4; the real-world gain was at best half of the theoretical, raw-numbers difference between the hardware.


Well, the SDK, the Sony info about the APU, the X-rays of the CPU, and so on.

Which reveal what? Can you actually point out something specific?

 

As for the power: no, GPU power doesn't directly translate to graphics performance. You can buy an exactly twice-as-fast GPU for your computer, and it won't offer anywhere near that performance increase. Several tech sites showed this by using identical computers and inserting graphics cards with the same difference as between the Xbox One and PS4; the real-world gain was at best half of the theoretical, raw-numbers difference between the hardware.

 

Graphics performance is more than GPU computational power, but the PS4 still has about 41% more GPU computational power, which can entirely account for resolution differences, for instance. There's more to graphics than resolution, but the PS4 is likely to keep pushing higher resolutions than the Xbox One, simply because it can run the same shader in the same time on a larger framebuffer. So the Witcher devs stating that there's no significant computational power difference is strange.

As for tests on PCs, these do not take into account GPU compute and bandwidth differences, both of which make the GPU on consoles, and on the PS4 especially, a much more important component than it is on PC. Kaveri should provide a better test bench for getting an idea of how a system with unified memory scales with shader cores.


Again, on what basis do you think the PS4 doesn't have equivalent hardware functionality?

I thought it was pretty much accepted that the PS4 used a more straightforward, PC-like design. They don't have dedicated/custom chips for things like audio or video scaling/processing. They chose to just leverage what AMD offered as part of its APU.

As far as the APU goes, we already know that MS went a much more custom path thanks to the ESRAM setup and the additional cache built right into the CPU side.

Also, keep in mind that many of the choices MS made hardware-wise, such as the custom 'move engine' pieces, are there to reduce or eliminate the drawbacks of relying on a very fast but relatively small pool of ESRAM. This is meant to negate any advantage that using GDDR5 would provide. The downside, obviously, is that developers have to work harder to maximize performance, but the upside is a big reduction in cost and higher part availability. Whether this setup is successful remains to be seen.

None of this means anything, though, until we see more quality games. Much the same goes for the PS4. We love to talk specs, but it matters little if games aren't made to take advantage of them.


I thought it was pretty much accepted that the PS4 used a more straightforward, PC-like design. They don't have dedicated/custom chips for things like audio or video scaling/processing. They chose to just leverage what AMD offered as part of its APU.

As far as the APU goes, we already know that MS went a much more custom path thanks to the ESRAM setup and the additional cache built right into the CPU side.

Well, if it's accepted and there's no real source, then it's a popular myth. As far as I know, we've seen very detailed specifications from Microsoft, and only some numbers and a broad overview of the architecture from Sony. We know there's no ESRAM on the PS4, but it doesn't need that thanks to GDDR5 RAM, and we know there's a hardware audio decode chip. As for video encode/decode and all the other Xbox One co-processors, does the PS4 have something similar or not? I haven't seen any real documentation to that effect. I'd be very curious to see it.

 

EDIT:

from here:

Additionally, the system has the ability to download games in the background, even while it is powered off, using the same chip. Sony says that users can even upload videos to the internet while playing games using the system's secondary background processors. 

So apparently it has co-processors for network processing and video encoding as well. That's a start.


Are you suggesting the PS4's CPU is insufficient to drive its GPU? It doesn't take any more draw calls to render a 1600x900 picture than a 1920x1080 one, but it does take about 40% more shader calculations. It doesn't take any more draw calls to do 2x or 4x MSAA. There's a lot you can do without necessarily increasing the number of draw calls. Besides, consoles have low-level, low-latency APIs with very little overhead in terms of draw calls. The issue is not nearly what it is on PC, and on PC we still see great scaling with GPU power on just about any game benchmark you can find.

 

That's exactly what I'm suggesting. You're assuming the CPUs are processing the same number of draw calls in the same amount of time. That is probably not true. Sony used the extra die space for shaders; Microsoft used it to offload the CPU and GPU with special co-processors. More CPU resources = faster draw calls = less GPU idle. What good are shaders if they're idling? The CPU does many other things that make the GPU much more efficient.

 

One thing I hear nobody really talking about is that pre-patch COD: Ghosts at 720p on PS4 still manages to run worse than the Xbox One version. The framerate is unlocked, so there is a ton of judder, and there are big frame drops that don't exist on the Xbox One version, which runs at a locked, rock-solid 60fps. The 1080p version makes the framerate much worse, and the judder still happens.


That's exactly what I'm suggesting. You're assuming the CPUs are processing the same number of draw calls in the same amount of time. That is probably not true.

Why? They're the same Jaguar cores with the same GCN-based GPU.

 

Sony used the extra die space for shaders; Microsoft used it to offload the CPU and GPU with special co-processors.

Actually, if you've seen a die photo of these chips, most of the shader sacrifice on Xbox One is to give space to the ESRAM. It's quite striking: http://gamrconnect.vgchartz.com/thread.php?id=173161 . As for extra special co-processors, exactly which ones are you talking about?

 

More CPU resources = faster draw calls = less GPU idle. What good are shaders if they're idling? The CPU does many other things that make the GPU much more efficient.

Speed of draw calls has little to do with CPU speed; it's mostly a matter of getting the data over to the GPU. On PC that is extremely slow (although Mantle is supposed to address this); on consoles it's basically a non-issue, if you've been listening to developer talks about Mantle lately.
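
To illustrate why API overhead, rather than raw CPU speed, is the limiter, here's a toy model. The per-call costs are made-up placeholders purely for illustration, not measured figures:

```python
# Toy model of CPU-side draw submission. The point is only that
# draws-per-frame scales inversely with per-call API overhead,
# not with shader count.
FRAME_BUDGET_US = 1_000_000 / 60  # ~16,667 microseconds per frame at 60 fps

def max_draw_calls(overhead_us: float, cpu_share: float = 0.5) -> int:
    """Draw calls per frame if `cpu_share` of the frame budget goes to submission."""
    return int(FRAME_BUDGET_US * cpu_share / overhead_us)

for api, overhead_us in [("high-overhead PC API", 25.0),
                         ("thin console-style API", 2.5)]:
    print(f"{api}: ~{max_draw_calls(overhead_us)} draws/frame")
```

Cut the per-call cost by 10x and you get 10x the draws from the same CPU, which is why a thin API makes the draw-call ceiling a non-issue on consoles.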


Glad someone with the right amount of tech knowledge from the PC side has entered the discussion here; good to read your comments, Andre S.


 

<snip>

 

Speed of draw calls has little to do with CPU speed; it's mostly a matter of getting the data over to the GPU. On PC that is extremely slow (although Mantle is supposed to address this); on consoles it's basically a non-issue, if you've been listening to developer talks about Mantle lately.

 

 

Have they shown any performance numbers for Mantle at this point? I've always been curious how much the call slowdown translates to actual performance loss. I also thought the issue had largely to do with the draw call code being single-threaded (and performing badly when multi-threaded in DX11) in command queuing/construction.


Why? They're the same Jaguar cores with the same GCN-based GPU.

And? They aren't running the same code. You're assuming the free CPU resources are equal.

 

Actually, if you've seen a die photo of these chips, most of the shader sacrifice on Xbox One is to give space to the ESRAM. It's quite striking: http://gamrconnect.vgchartz.com/thread.php?id=173161 . As for extra special co-processors, exactly which ones are you talking about?

[attached image: Xbox One SoC die shot]

You can't possibly see those 28nm transistor gates on the die to determine their function from a block diagram. The ESRAM also isn't purely for the GPU: there is a total of 47MB of memory on the die, and there is ESRAM in the CPU block and the audio block. I'm pretty sure caches and fast memory are part of increasing the efficiency of the CPU as well. There is also hardware-accelerated compression/decompression, as well as compressed render target support.

Oh yeah, there is also the advantage of having two different memories running at the same time. For example, one thread could be working on something in DDR3 memory while another thread has a chunk of ESRAM allocated and works on that, whereas with one unified pool of GDDR5, while one thing is using the memory, everything else is waiting.
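
For what it's worth, here's how the commonly quoted peak bandwidth figures stack up - peaks only, which say nothing about sustained, real-world utilization:

```python
# Commonly quoted peak bandwidth figures (GB/s). The ESRAM number is the
# one-direction peak; MS has also claimed higher figures for mixed
# simultaneous read/write.
xbo_ddr3 = 68.3    # 256-bit DDR3-2133, 8 GB
xbo_esram = 102.4  # 32 MB on-die ESRAM
ps4_gddr5 = 176.0  # 256-bit GDDR5, 8 GB

print(f"XBO combined peak: {xbo_ddr3 + xbo_esram:.1f} GB/s vs PS4: {ps4_gddr5:.1f} GB/s")
# Caveat: only 32 MB sits behind the fast XBO pool, so approaching the
# combined figure means keeping the hottest render targets resident in ESRAM.
```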

 

Speed of draw calls has little to do with CPU speed; it's mostly a matter of getting the data over to the GPU. On PC that is extremely slow (although Mantle is supposed to address this); on consoles it's basically a non-issue, if you've been listening to developer talks about Mantle lately.

You're missing the point. Getting the data to the GPU is one thing; waiting until you can send the data over to the GPU is another. The CPU is also responsible for many things in the game engine: physics, AI, audio, OS, etc. A busy CPU will hurt performance, Mantle or no Mantle. Freeing up these resources makes hardware usage more efficient. Jaguar cores are pretty weak as is.


Well, if it's accepted and there's no real source, then it's a popular myth. As far as I know, we've seen very detailed specifications from Microsoft, and only some numbers and a broad overview of the architecture from Sony. We know there's no ESRAM on the PS4, but it doesn't need that thanks to GDDR5 RAM, and we know there's a hardware audio decode chip. As for video encode/decode and all the other Xbox One co-processors, does the PS4 have something similar or not? I haven't seen any real documentation to that effect. I'd be very curious to see it.

 

EDIT:

from here:

So apparently it has co-processors for network processing and video encoding as well. That's a start.

Did you even read my reply?

I said that Sony is using the audio and video bits that are provided by the stock AMD APU. MS decided, for whatever reason, to create custom parts instead.

I never said Sony was not using audio or video decoding chips. I was just pointing out more proof that Sony went with a more PC-like design, something much more straightforward for developers to make use of quickly.

This isn't some knock against the PS4, mind you.

Heck, there was a thread on this very site about the audio hardware/DSP being the stock one built into the AMD APU (AMD calls it TrueAudio). If you want me to paste articles, I'd be happy to share.

I'll look around for the articles that mentioned the PS4 using the AMD stock hardware video scaler/decoder as well.

As for the ESRAM issue: as I said, the 'move engine' custom co-processors are there because MS thinks it came up with a way to mitigate the downside of having such a small amount of ESRAM while at the same time offering performance to match the GDDR5 in the PS4. I have no idea if it works or not; that's just the info that has been brought up so far.

Maybe that all turns out to be false; I was just replying with the info that's out there right now. Maybe we get a closer look and find out that a lot of the PS4 is custom as well, with extra bits that have not been tapped yet.


Have they shown any performance numbers for Mantle at this point? I've always been curious how much the call slowdown translates to actual performance loss. I also thought the issue had largely to do with the draw call code being single-threaded (and performing badly when multi-threaded in DX11) in command queuing/construction.

10x less overhead than Direct3D, excellent multi-thread scaling (making an FX-8350 actually compete against Haswell i7s), and much larger batch counts (100K vs 20K, IIRC). Oxide had a presentation with concrete numbers and an actual demo running at APU13. I can't seem to find it right now, but Google it; perhaps you'll have better luck.

