Jimquisition: Ubisoft Talks Bollocks About Framerate And Resolution



You have to have guaranteed resources for the tasks you're going to be doing. With the examples I mentioned above, if you have a 32ms CPU thread and off-load 25% of that work to the GPU, that's, say, 24ms for the CPU thread to process. Then your GPU is going to be at 32ms. You've also got to think about the dependencies of the tasks and the order they run in. For example, you don't want to tell the GPU to start rendering the environment when the AI hasn't even been calculated. That's also assuming the GPU isn't already full, which it will be, since Ubisoft uses GPGPU for cloth simulation in Unity.
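To put the arithmetic in one place, here is a minimal sketch of that frame-budget reasoning, using only the illustrative figures above (a 32ms CPU thread, 25% of it offloaded, the GPU ending up at 32ms). The frame only finishes when the slower processor is done, so offloading onto an already-full GPU just moves the bottleneck:

```cpp
// Sketch of the frame-budget reasoning above; all figures are the post's
// illustrative numbers, not measurements from any real engine.
#include <algorithm>
#include <cstdio>

int main() {
    const float cpu_ms_before = 32.0f;  // CPU thread cost before offloading
    const float offload_frac  = 0.25f;  // 25% of that work moved to the GPU
    const float cpu_ms_after  = cpu_ms_before * (1.0f - offload_frac);  // 24 ms
    const float gpu_ms_after  = 32.0f;  // the post's figure: GPU now sits at 32 ms

    // The frame is only done when the slower of the two processors is done.
    std::printf("frame: %.1f ms (%s bound)\n",
                std::max(cpu_ms_after, gpu_ms_after),
                cpu_ms_after > gpu_ms_after ? "CPU" : "GPU");
    return 0;
}
```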

 

The GPU isn't already full. If it were full then the game would be GPU bound, NOT CPU bound (and there would be a much bigger difference in performance between the Xbox One and PS4). That's what those terms mean. Being X bound means that X is at its maximum capacity and the rest of the system is waiting for it. Ubisoft has said Unity is CPU bound, which means the CPU is reaching its limit and the GPU is not; instead, the GPU has to wait for the CPU in order to get the data it needs. This is why there isn't much difference between Xbox One and PS4 performance. They have nearly identical CPUs (MS has a slight MHz boost), and the PS4's superior GPU makes no difference since the CPU can't feed it data fast enough for it to outperform the Xbox One as it should given its hardware advantage. This is POOR "next-gen" game design.

 

Let's use a simple example.

Let's say the Xbox One is using 100% of its CPU and 90% of its GPU to run Unity (thus it's CPU bound: it can't use the extra 10% of GPU because it needs data from the CPU, which is already at 100% and can't deliver it any faster).

The same code on the PS4 would thus be at 100% CPU (and a few fps slower, since its CPU is slightly slower, just like Ubi has said) and 60% GPU because of its superior GPU architecture (50% more compute units). That's a conservative estimate based on hardware specs; if we used Ubisoft's own benchmarks, where the PS4 is capable of almost double the Xbox One's performance in their cloth simulation, it's even worse, with the PS4 using only around 45% of its GPU for the same tasks the Xbox One is at 90% with.
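For what it's worth, the 60% and 45% figures follow mechanically from those assumptions. A small sketch of the arithmetic; the 1.5x factor is the compute-unit ratio and the ~2x factor is the cloth-benchmark reading mentioned above, so these are illustrative ratios rather than measured utilization:

```cpp
// Sketch of the utilization arithmetic above. The 1.5x factor is the CU-count
// ratio (18 vs 12); the 2x factor is the post's reading of Ubisoft's cloth
// benchmark. Illustrative only, not measured data.
#include <cstdio>
#include <initializer_list>

int main() {
    const float xb1_gpu_busy = 0.90f;  // assumed Xbox One GPU utilization

    for (float ps4_speedup : {1.5f, 2.0f}) {
        // The same CPU-limited workload takes the same wall-clock time, so a
        // GPU that is `ps4_speedup` times faster is busy for less of it.
        const float ps4_gpu_busy = xb1_gpu_busy / ps4_speedup;
        std::printf("PS4 GPU busy: %.0f%% at %.1fx GPU speed\n",
                    ps4_gpu_busy * 100.0f, ps4_speedup);
    }
    return 0;
}
```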

 

Now, before compute shaders (such as last console gen), that's just how things happened and there wasn't much you could do about it. This is in line with what you are saying, but that's the prior-gen way to write games, and it at best severely underutilizes or at worst completely ignores the (new to consoles) compute shaders. With compute shaders, pretty much everything the CPU is doing can also be done on the GPU. So whatever tasks are pushing the CPU to 100% can be moved to the GPU. If your AI calculations on the CPU are holding up your GPU, then move your AI calculations to the GPU. The GPUs this generation are FAR more powerful than the CPUs, so developers should be milking every bit of performance out of them. Every console game should be GPU limited. If something is running on the CPU and holding up the GPU then it should be moved to the GPU until the GPU becomes the bottleneck.
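As a rough illustration of what "moving work to the GPU" looks like in code, here is a hedged sketch of a per-entity update expressed as a compute kernel instead of a CPU loop. CUDA is used purely for illustration (the consoles use their own compute APIs), and the Agent struct and update rule are invented for the example:

```cpp
// Hedged sketch: a per-entity update as a compute kernel instead of a CPU
// loop. CUDA is illustration only; console titles use their platform compute
// APIs. The Agent struct and update rule are invented for the example.
#include <cuda_runtime.h>
#include <vector>

struct Agent { float x, y, vx, vy; };

__global__ void updateAgents(Agent* agents, int count, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count) return;            // guard the tail of the last block
    agents[i].x += agents[i].vx * dt;  // same math a CPU loop would do,
    agents[i].y += agents[i].vy * dt;  // but one GPU lane per agent
}

void updateOnGpu(std::vector<Agent>& agents, float dt) {
    Agent* d_agents = nullptr;
    const size_t bytes = agents.size() * sizeof(Agent);
    cudaMalloc(&d_agents, bytes);
    cudaMemcpy(d_agents, agents.data(), bytes, cudaMemcpyHostToDevice);

    const int threads = 256;
    const int blocks  = (static_cast<int>(agents.size()) + threads - 1) / threads;
    updateAgents<<<blocks, threads>>>(d_agents, static_cast<int>(agents.size()), dt);

    cudaMemcpy(agents.data(), d_agents, bytes, cudaMemcpyDeviceToHost);
    cudaFree(d_agents);
}
```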


Your long paragraph becomes invalid, exactly like I said in my previous post, around reserving performance. To think that the GPU is always utilised at around 60% is nonsense. Games go through optimization phases; to think that they haven't thought about these things during game design and the initial phases of their engine, considering their official Unity documentation goes into GPGPU, is stupid. As the guy I quoted said, they only managed to get to 900p/30fps weeks ago. Stop clutching at straws with theories that aren't true. If you're so good at this, you design a modern game engine and sell it to Ubi.


Your long paragraph becomes invalid, exactly like I said in my previous post, around reserving performance. To think that the GPU is always utilised at around 60% is nonsense.

At no point did I say or imply that the GPU is ALWAYS utilized at 60%. I made a simplified example of a bottleneck situation where the GPU was being constrained by a CPU bottleneck, illustrating a situation where a game is CPU limited and not GPU limited, as Ubi has stated. I just picked 90% for the Xbox One because I needed a number for example purposes that was less than 100%, and that led to 60% for the PS4 because of the 50% extra compute units on the PS4 (I even pointed out that Ubi's own documentation has the PS4 going about double instead of 50% faster for compute, but I chose the more conservative performance ratio).

Games go through optimization phases; to think that they haven't thought about these things during game design and the initial phases of their engine, considering their official Unity documentation goes into GPGPU, is stupid.

Oh I'm sure they've THOUGHT of it. The problem is that it's different from how other games are done, so I suspect they just decided it's not worth the effort; optimization takes time and money. PC gamers, for example, typically don't have such weak CPUs relative to their GPUs, so you wouldn't want to try to move everything to the GPU on a PC version. Even on older consoles there were no general purpose compute units, so you couldn't move non-graphics tasks to the GPU the way you can now. It's a brand new capability this console generation, so most developers probably haven't taken great advantage of it yet. Most developers aren't out there advertising that their game is CPU limited, though, saying BS about 30fps being "more cinematic", and badmouthing other developers (Shadow of Mordor) about how their gameplay may be next-gen but their graphics aren't.

As the guy I quoted said, they only managed to get to 900p/30fps weeks ago. Stop clutching at straws with theories that aren't true. If you're so good at this, you design a modern game engine and sell it to Ubi.

I don't care how long it's taken them to get to where they are. I don't care where they are right this second on performance. None of that is relevant to the fact that they say ON RELEASE the game will be CPU limited and 900p@30fps on both platforms with only a couple fps difference between the consoles. No RELEASED game that promotes itself as such a revolutionary "next-gen" title should be CPU bound on consoles. The console CPUs are REALLY weak and if you design your engine so that the slow CPU holds up your fast GPU when general purpose compute units are available to offload pressure from the CPU then that's just poor design.

Yes, we have a deal with Microsoft, and yes we don't want people fighting over it

There's your answer right there.

I believe 50% of the CPU is dedicated to helping the rendering by processing pre-packaged information, and in our case, much like Unreal 4, baked global illumination lighting. The result is amazing graphically; the depth of field and lighting effects are beyond anything you've seen on the market, and may even surpass Infamous and others. Because of this I think the build is a full 50 gigs, filling the Blu-ray to the edge, and nearly half of that is lighting data.

Much like I thought. Ubisoft is offloading more of the work from the GPU to the CPU in order to help the Xbox and create "Parity". It's CPU bound because it's designed that way.

Clearly the PS4 would perform better if the game were designed to offload the work to the GPU, but instead they chose to waste important cycles on "helping out the rendering", aka "helping out the Xbone", thus making it CPU bound. It has nothing to do with the AI; the CPU is just busy making up for the inferior Xbone hardware. It doesn't take a genius to read between the lines.

Craps on a few people's 'parity' parade. Internet makes me laugh.

Quite the opposite. I think it actually confirms people's suspicions. As to Ubisoft's response, I say: methinks thou dost protest too much.

Now, before compute shaders (such as last console gen), that's just how things happened and there wasn't much you could do about it. This is in line with what you are saying, but that's the prior-gen way to write games, and it at best severely underutilizes or at worst completely ignores the (new to consoles) compute shaders. With compute shaders, pretty much everything the CPU is doing can also be done on the GPU. So whatever tasks are pushing the CPU to 100% can be moved to the GPU. If your AI calculations on the CPU are holding up your GPU, then move your AI calculations to the GPU. The GPUs this generation are FAR more powerful than the CPUs, so developers should be milking every bit of performance out of them. Every console game should be GPU limited. If something is running on the CPU and holding up the GPU then it should be moved to the GPU until the GPU becomes the bottleneck.

The problem with your argument is that you assume it's always feasible to simply "move" a task from the CPU to the GPU. The big difference between a CPU and a GPU is that the former excels at serial workloads and the latter excels at massively parallel workloads (such as computing the 2 million independent color values forming the grid of pixels on your monitor). Not every task is an intrinsically parallelizable one, much less a massively parallelizable one. If you have 12 NPCs to compute AI for, how do you spread the work across several hundred compute units? Note that just using 12 compute units will be glacially slow, as these cores aren't very fast taken individually; their strength is in numbers.
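A back-of-the-envelope sketch of that point: with only 12 work items, almost all of the GPU's lanes sit idle and each busy lane is slow, so the CPU loop wins. The per-lane and per-core rates below are invented placeholders; only the 18-CU-by-64-lane layout is the commonly cited PS4 figure:

```cpp
// Rough sketch of the "strength in numbers" point. The per-lane and per-core
// rates are invented placeholders; only the 18 CU x 64 lane layout is the
// commonly cited PS4 figure.
#include <cstdio>

int main() {
    const int   npcs          = 12;       // independent AI entities
    const int   gpu_lanes     = 18 * 64;  // ~1152 lanes on PS4-class hardware
    const float work_units    = 5.0f;     // work per NPC (assumed)
    const float cpu_core_rate = 10.0f;    // units/ms on one CPU core (assumed)
    const float gpu_lane_rate = 1.0f;     // a single GPU lane is much slower (assumed)

    // One NPC per GPU lane: only 12 lanes do anything, and each is slow.
    const float gpu_ms = work_units / gpu_lane_rate;
    // The same 12 NPCs spread across CPU cores, two per core on six cores.
    const float cpu_ms = 2.0f * work_units / cpu_core_rate;

    std::printf("GPU: %.1f ms (%.1f%% of lanes idle), CPU: %.1f ms\n",
                gpu_ms, 100.0f * (gpu_lanes - npcs) / gpu_lanes, cpu_ms);
    return 0;
}
```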


What you're saying may be true on a PC, but the AMD CPUs used in the consoles are far from excelling at serial work. That's why there are eight cores: each individual core is weak. The 8-core versions are unique to the consoles, and the 4-core versions of these CPUs sold for PC are extremely low-end parts, valued primarily for their low power consumption and price, not their performance. I'd wager that the consoles' GPUs are capable of doing the vast majority of tasks at least as fast as the CPUs if properly optimized.

 

I do want to make clear, though, that I'm not saying it's "easy". I'm not saying you can copy the code from the CPU, set some flag that says "run this same code on the GPU", and everything just magically works. If your "next-gen" console game is CPU bound, though, it's a big problem, because the weak CPU is restricting the performance of your GPU.

 

Now maybe your bottlenecking task just can't be moved over. I doubt that's the case (Ubisoft's own presentation says just about anything can be done on the GPU), but let's say it is. Then move your second most CPU-intensive task over, freeing more CPU for the bottlenecking task that couldn't move and using more of your GPU. Even if the GPU is technically worse at a task, it's still important to use the GPU fully (have the game GPU limited) on this generation of consoles. If a task would take 10% of your CPU resources but 30% of your GPU, and your game is currently running at 100% CPU and 60% GPU (CPU bound), then it's still a good idea to move that task (a simplified example to illustrate a point). After such a move you'd have an ideal breakdown, with it taking 90% of both CPU and GPU to do the same work in the same amount of time... so performance would improve (it's not going to run at 90%; it's going to get that same work done faster or do more work in the same time).
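The trade-off in that paragraph can be written out directly. Using the same illustrative percentages (a task costing 10% of the CPU or 30% of the GPU, starting from 100% CPU / 60% GPU), moving the task raises overall throughput even though the GPU does it less efficiently:

```cpp
// Sketch of the trade-off above: moving a task that is 3x less efficient on
// the GPU still raises throughput when the game is CPU bound. Percentages are
// the illustrative figures from the post, not profiler data.
#include <algorithm>
#include <cstdio>

int main() {
    float cpu_load = 1.00f, gpu_load = 0.60f;  // before: CPU bound

    // Frame rate is limited by whichever processor is fuller, so relative
    // throughput scales as 1 / max(load).
    std::printf("before: %.2fx\n", 1.0f / std::max(cpu_load, gpu_load));

    cpu_load -= 0.10f;  // the task cost 10% of the CPU...
    gpu_load += 0.30f;  // ...and costs 30% of the GPU after the move
    std::printf("after:  %.2fx\n", 1.0f / std::max(cpu_load, gpu_load));
    return 0;
}
```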


What you're saying may be true on a PC, but the AMD CPUs used in the consoles are far from excelling at serial work. That's why there are eight cores: each individual core is weak.

You're still comparing an 8-core CPU to a ~1,000-core GPU. It's a fundamentally different design for a different type of workload. The Jaguar isn't a particularly great CPU, but it's still a CPU: it's designed and optimized for serial and lightly threaded workloads with lots of branching, doing a variety of operations on different memory, whereas the GPU is designed for massively parallel workloads with simple logic all working on the same data. Putting a calculation on the GPU doesn't automatically make it faster; first you have to figure out how to decompose your logic into a basic operation that can be processed in parallel by several hundred cores. Depending on the nature of the task, that may not be feasible, or it may be counter-productive.


 

I'm not saying a task necessarily becomes faster when you move it to the GPU. In fact, I used an example where the GPU ran a task 3x slower (in terms of the percentage of the processor it was running on), but it still increased the performance of the app because the game was CPU limited and unable to keep the GPU fed (which is apparently the case with Unity). The overall point here is that because the CPU is weak, you should not design your game to be primarily dependent on it in such a way that it prevents your GPU from being fully utilized, as the GPU is where the platform holders put their focus this generation.

 

Any "next-gen" game that is CPU limited is poorly designed for these consoles.  You used 12 AIs in your earlier example and sure if each AI has to have a core it makes sense but that's a poor GPU design like you said.  Each of those 12 AI is making tons of decisions though so if you make the algorithm per decision instead of per entity then you suddenly get a far more scalable solution.  Again I'm not saying it's easy.  Doing AAA game optimization is undeniably difficult and you can't do things the same way on the GPU as you do on the CPU but they can be done.

 

Let's say they can't, though, for the sake of argument. You've got some process that takes 50% of your CPU and 20+ GB of your app's size but does really nice baked lighting, and it absolutely can't be moved to the GPU. If using it means your game is CPU limited and thus your GPU is only working at 60%, it's still a bad design. You'd almost certainly end up with a better game by using one of the less demanding lighting techniques and allowing the GPU (including compute units for non-graphics tasks) to use more than 60% of its processing power. Maybe that means you'd have worse lighting, but you'd free up so many resources to do other things that the game would almost certainly be better for it. If you are making a game for "next-gen" consoles it should be optimized for their strengths, and designing a game in such a way that it's CPU limited only highlights their weakness.


I agree that if there's a large imbalance between CPU and GPU usage then you're wasting a ton of power that could be used to make the game better. The ideal goal for an optimized console game is using 100% of all resources all the time transforming every cycle of every processor into awesome. That said it's just an ideal and in practice every game will use more of one or the other. As long as the imbalance isn't great it doesn't matter much whether your bottleneck is one or the other, you're still using most of your resources. So I'm not sure where you get that every console game should be GPU-bound. 

 

As for Unity or any specific game I'm not getting into a discussion of how it is or isn't optimized as it's all speculation and I don't see the point.


Let's be clear here: it's not the AI holding it back, it's the rendering batch processing being done on the CPU. If 50% of the CPU is allocated to that one task, there's little left for anything else. I can't say for certain without seeing the code, but it sounds like some of that should be offloaded to the GPU if it's causing a bottleneck and the GPU is underutilised.

My suspicion is that Ubisoft originally designed it around the Xbone, and that decision is now holding it back on the PS4.


I agree that if there's a large imbalance between CPU and GPU usage then you're wasting a ton of power that could be used to make the game better. The ideal goal for an optimized console game is using 100% of all resources all the time transforming every cycle of every processor into awesome. That said it's just an ideal and in practice every game will use more of one or the other. As long as the imbalance isn't great it doesn't matter much whether your bottleneck is one or the other, you're still using most of your resources. So I'm not sure where you get that every console game should be GPU-bound.

The key here is that the CPU is very weak and the GPU is pretty capable, so you want to do the most work on your most capable part. You don't want your weaker part limiting your stronger part. If your CPU and GPU were roughly equal, you're right, it wouldn't matter so much which one didn't hit that magic 100%. Sure, having both hit 100% is the ideal, but I agree it's not really realistic. 10% of these consoles' GPU processing power can do significantly more than 10% of their CPU power, though, especially with their general compute capabilities. So with respect to this generation of consoles specifically, you always want to have your game GPU bound (play to their strengths) if you can't hit that magic 100% on both.
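As a rough sense of scale for that claim, the commonly cited peak figures are about 1.84 TFLOPS for the PS4 GPU and roughly 0.1 TFLOPS for the 8-core Jaguar CPU. Peak FLOPS says nothing about how hard each number is to reach in practice, so treat the ratio below as a ballpark only:

```cpp
// Ballpark comparison behind "10% of the GPU does more than 10% of the CPU".
// ~1.84 TFLOPS (PS4 GPU) and ~0.1 TFLOPS (8-core Jaguar) are commonly cited
// theoretical peaks; real workloads reach different fractions of each.
#include <cstdio>

int main() {
    const double gpu_tflops = 1.84;
    const double cpu_tflops = 0.10;
    std::printf("10%% of GPU: %.3f TFLOPS, 10%% of CPU: %.3f TFLOPS (~%.0fx)\n",
                0.1 * gpu_tflops, 0.1 * cpu_tflops, gpu_tflops / cpu_tflops);
    return 0;
}
```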

There's your answer right there.

Much like I thought. Ubisoft is offloading more of the work from the GPU to the CPU in order to help the Xbox and create "Parity". It's CPU bound because it's designed that way.

Ubisoft brought this kind of thinking on themselves when they uttered the word parity. If it is ever proven that your opinion is true, they will be sunk. If it can be proven to not be true, Ubisoft should try to do that quickly, but then any explanation might be ignored at this point.

I have no idea how the game has been designed, so I really don't know what the truth is here.

Clearly the PS4 would perform better if the game were designed to offload the work to the GPU, but instead they chose to waste important cycles on "helping out the rendering", aka "helping out the Xbone", thus making it CPU bound. It has nothing to do with the AI; the CPU is just busy making up for the inferior Xbone hardware. It doesn't take a genius to read between the lines.

Quite the opposite. I think it actually confirms people's suspicions. As to Ubisoft's response, I say: methinks thou dost protest too much.

Still, all of this is based on assumptions. I would love for some real facts to leak out. How does this confirm suspicions when we don't have all the facts?


Let's be clear here: it's not the AI holding it back, it's the rendering batch processing being done on the CPU. If 50% of the CPU is allocated to that one task, there's little left for anything else. I can't say for certain without seeing the code, but it sounds like some of that should be offloaded to the GPU if it's causing a bottleneck and the GPU is underutilised.

My suspicion is that Ubisoft originally designed it around the Xbone, and that decision is now holding it back on the PS4.

But that doesn't make sense. If the GPU in the X1 can do more than its CPU, then why build the game to be CPU bound if you are targeting the X1? The X1 and PS4 share a lot of high-level design ideas. What works well for one can work well for the other. The biggest differences that must be designed for are the different system memory configurations.

If Ubisoft had made no mention of the parity stuff, I would have thought this sounded like a game that was optimized for the PC platform first and then ported to consoles. On a PC you have much better CPU hardware to work with, so building a game that leverages that makes more sense.


The key here is that the CPU is very weak and the GPU is pretty capable, so you want to do the most work on your most capable part. You don't want your weaker part limiting your stronger part. If your CPU and GPU were roughly equal, you're right, it wouldn't matter so much which one didn't hit that magic 100%. Sure, having both hit 100% is the ideal, but I agree it's not really realistic. 10% of these consoles' GPU processing power can do significantly more than 10% of their CPU power, though, especially with their general compute capabilities. So with respect to this generation of consoles specifically, you always want to have your game GPU bound (play to their strengths) if you can't hit that magic 100% on both.

 

I think what he's saying is that just because X is more powerful than Y doesn't mean you can easily give Y's tasks to X. They are both built to excel at different types of operations. Some operations just can't be done on a GPU's architecture, and vice versa, regardless of where the power lies.


 

First, no one is saying it's easy. Let's get that out of the way. Optimizing AAA video games is EXTREMELY hard. I think we can probably all agree on that.

 

Even if you can't give Y's tasks to X (Ubisoft's own GDC slides say pretty much everything can be done on compute shaders, but let's set that aside for a moment), if X is more powerful than Y then you clearly shouldn't design something where the power of the more powerful X is held back by the weaker Y.

 

If the Y tasks can't be moved and they're holding back your more powerful X, then they should be done in a simpler way to free up your more powerful X to do more things (play to your strengths). Allowing your weaker Y to limit your more powerful X is a poor design.


Well, it's easy to say, "You shouldn't do it then." But honestly without people pushing the envelope nothing will go anywhere. This is the first year of these consoles. The developers are still learning the boundaries of the hardware and toolkits so I think it's fair to give them some leeway in optimization issues. We can't assume to know better than a developer as to how to make their games. There are decisions, discussions, timelines and hurdles we are not aware of that happen internally that lead to the finished product.


 

It's not "pushing the envelope" to allow the weaker part to limit the stronger.  I know it's hard and I know there are time and money constraints so I would be happy to give them a pass if they weren't running around saying B.S. like:

 

30fps is more "cinematic"

 

"the PS4 couldn't handle 1080p 30fps for our game, whatever people, or Sony and Microsoft say."

Yeah it can't handle it because you designed a game that is CPU limited on consoles that have weak CPUs.  I mean what do Sony and Microsoft know about what their consoles can do?

 

They even threw other developers under the bus:

"Mordor has next gen system and gameplay, but not graphics like Unity does."

Really? Was that necessary?

 

So they bash Mordor, say that both Sony and MS are wrong, and then make up excuses about 30fps being more cinematic. To me that opens them up to criticism. Plus, if their game isn't super well optimized because of how early it is in this console generation, you probably shouldn't be talking about how early you got started on the game and how well optimized it is.


It's testing the system, pushing its limits. You have to test the waters before you dive in. I don't really understand why you are phrasing this as if they are intentionally letting themselves get screwed over by limited hardware. You're painting a picture I honestly don't see to be the case. Regardless of what their PR guys spit out, they have nothing to do with the development of the actual game.

 

Well optimized is relative to previous games. It may very well be optimized, but again, we are only so far into the lifecycle of the console. There may be more optimizations they can make in the future. I feel you are very much overreacting to this game.


Let's be clear here: it's not the AI holding it back, it's the rendering batch processing being done on the CPU. If 50% of the CPU is allocated to that one task, there's little left for anything else. I can't say for certain without seeing the code, but it sounds like some of that should be offloaded to the GPU if it's causing a bottleneck and the GPU is underutilised.

My suspicion is that Ubisoft originally designed it around the Xbone, and that decision is now holding it back on the PS4.

Why would they "originally" design around the Xbox given that they had a marketing deal with Sony when the game's development started?

The key here is that the CPU is very weak and the GPU is pretty capable, so you want to do the most work on your most capable part. You don't want your weaker part limiting your stronger part. If your CPU and GPU were roughly equal, you're right, it wouldn't matter so much which one didn't hit that magic 100%. Sure, having both hit 100% is the ideal, but I agree it's not really realistic. 10% of these consoles' GPU processing power can do significantly more than 10% of their CPU power, though, especially with their general compute capabilities. So with respect to this generation of consoles specifically, you always want to have your game GPU bound (play to their strengths) if you can't hit that magic 100% on both.

Yes, technically 10% of the GPU is more FLOPS (or whatever basic arithmetic operation) than the CPU, but depending on what it is your game does, again, it may not be feasible, or it may even be counter-productive, to try to shoehorn intrinsically serial operations onto the GPU. 10% of that CPU is way faster than 10% of the GPU at doing certain things, i.e. not massively parallel number crunching but any complex algorithm with lots of intermediate steps and branching logic. I mean, I get your point, but you can't really reduce this to a question of the "weaker part limiting the stronger part". There's more raw power in the GPU, but it's not as accessible as the CPU's; it's more accurate to think of the two as different, complementary parts than simply a weak one and a strong one.


Why would they "originally" design around the Xbox given that they had a marketing deal with Sony when the game's development started?

Because the Xbox is the lowest common denominator. If it runs adequately there, it'll run the same everywhere else. Look what happened with Black Flag: they put out a post-release patch that upped the PlayStation 4's resolution to 1080p.

Whether this is down to project deadlines or reasons of parity is hard to say. One thing's for certain, though: Ubi has a history of targeting the lowest-hanging fruit (the Xbone), as with Black Flag. It's a real kick in the face to PS4 users, who have been treated as an afterthought, which in itself is pretty crazy considering how many more PS4 users there are than Xbox users.


Your post makes zero sense when read with your other posts in this thread. Are you saying the PS3 held back the X360 for previous Ubi games? I don't remember that being the case.

I'm talking about the latest generation of consoles. Black Flag received a post-release 1080p patch on the PS4. That means Ubisoft could have made it 1080p to start with, but instead went with the lowest common denominator's (the Xbone's) optimum specs at 900p. There's a history of this at Ubisoft, and together with the statement that it was done for reasons of "parity", it only adds fuel to the fire.