Microsoft's Penello: No way is Xbox One giving up 30% power advantage to PS4


Recommended Posts

Hmmm - if they have a mono driver, I wonder if there's a stereo  :D ?

 

But on the point of the PS4 OS - do you have any decent links pointing to how it works/is designed? I'm curious to read about it, because that's one of the things about the PS4 I've heard less about so far.

There's not really much about it:

http://www.geek.com/games/ps4-runs-modified-version-of-the-freebsd-9-0-operating-system-1559866/

Link to comment
Share on other sites

The XB1 isn't using off-the-shelf parts.  That is huge.  It gives them the flexibility and the efficiency to avoid most bottlenecks.

 

Comparing the PS4 to a PC is fine in most cases because it's basically a tweaked off-the-shelf PC.

 

1) The XB1 has a different architecture that can offload processing onto different chips.

2) The XB1 architecture has been designed with maximum efficiency throughout the pipeline and cache.

 

People look at the raw numbers, but those raw numbers may not be obtainable on a PC architecture like the PS4's, because those Compute Units (CUs) have to be used for other things in the game.

 

The problem is that most people are going to look at the raw numbers Sony put out and call it a day.  That would be fine if the XB1 were just a sterile off-the-shelf PC with no "cloud compute"; in that case Sony would be the most advanced, no doubt about it.

Facts are facts.

 

However, Microsoft owns D3D 11 and has built the mono driver to get the most out of their fully optimized, maximum-efficiency architecture, and that is not the same as what ships on PC.  It's a completely different beast.  You cannot compare the two systems unless you compare the final titles on each platform.

 

People out there act like Microsoft is stupid and just made a Wii U+, and that isn't the case at all.  We don't know for sure yet, but Microsoft's console could be efficient enough to outperform the PS4 in a lot of tasks.  Even Albert Penello talks about this in his comparison.

 

Here is an example of what I am talking about... (read the full source link to get the full picture)

 


Watch Dogs hits Xbox One and PS4 later this year when the next-gen consoles hit store shelves. If recent reports are to be believed, however, the Xbox One version might be the way to go. Watch Dogs producer Dominic Guay has stated that Watch Dogs on the Xbox One will have a more dynamic city than the PS4.

 

Source: http://www.gamingtarget.com/article.php?artid=13270 

Link to comment
Share on other sites

But that doesn't mean I have to make up silly reasons to try make the Xbox One seem comparable in terms of raw system performance. 

Cool, me neither  :laugh:

 

Considering most games on the 360 used to dedicate one of the PowerPC cores just to audio, it's a very sensible and fair statement.

One of the guys who I believe was working on the audio chip is over on Beyond3D -

 

 

 

 

In theory, the audio hardware in the One can produce results that could not be replicated by the entire 360 CPU. 

I'd call that a pretty big deal.

 

I am finally falling asleep so I'll just leave the thread for further reading... mind you, it's 18 pages so not a light read.

http://beyond3d.com/showthread.php?t=63677

Link to comment
Share on other sites

I think it's intellectually dishonest to say that the PS4 is 50% more powerful when Microsoft hasn't even released all of the information about its architecture.

 

It went from 30% to 40% and now 50%.  I mean use your heads.  Next time it will be 75% and then 100%.   The bar of ignorance keeps floating.

 

So, no... I don't trust Adrian.  I don't.  I will see it in the final titles that I download and that many websites rate. 

Even then there is a huge boost for cloud compute that Sony won't be able to match.

 

Gaikai is going to be ready for the USA in 2014, but the rest of the world is going to have to wait a long time just to get backwards compatibility with PS2/PS3 content, and by then they will already have been surpassed by Microsoft.

 

Cloud compute for A.I. alone would be huge: you could have 100,000 A.I. creatures in the world, surpassing anything in previous video games.

I don't think some of you guys really get the cloud compute stuff; you think it is marketing, but it's not.
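I'm not a dev on this stuff, but the general idea is easy to sketch. Here's a rough, purely hypothetical Python example (a thread pool standing in for the cloud endpoint, nothing to do with the real Xbox One SDK or Microsoft's actual service) of how latency-tolerant AI planning for ambient creatures could be fired off asynchronously while the frame loop keeps running:

```python
# Hypothetical sketch only: a thread pool stands in for a remote "cloud compute"
# endpoint. Nothing here reflects a real console SDK.
import random
import time
from concurrent.futures import ThreadPoolExecutor


def plan_ambient_ai(agent_ids):
    """Expensive but latency-tolerant work (crowd pathfinding, schedules, etc.)."""
    time.sleep(0.05)  # pretend this is the round trip to the cloud
    return {aid: random.choice(["wander", "flee", "idle"]) for aid in agent_ids}


cloud = ThreadPoolExecutor(max_workers=4)   # stand-in for the remote service
agents = list(range(100_000))               # ambient creatures in the world
pending, plans = None, {}

for frame in range(10):                     # simplified 10-frame game loop
    if pending is None:
        pending = cloud.submit(plan_ambient_ai, agents)  # fire off the request
    elif pending.done():
        plans = pending.result()            # apply results whenever they arrive
        pending = None
    # latency-critical work (player input, combat, rendering) stays on the console
    time.sleep(1 / 60)

cloud.shutdown()
print(f"{len(plans)} ambient agents now have cloud-computed plans")
```

The point is just that the frame loop never waits on the slow work; it picks up the results whenever they come back.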

 

It's a HUGE advantage that isn't being taken advantage of much at launch, just like at the start of every console generation.  Halo will be the first of many titles that use it.

I personally can't wait for it.

 

I am also excited about indies having access to all the features of the box, including cloud compute, voice recognition, 3D scanning support, biometrics, rumble triggers, XB1 VR, etc.

I mean, like I said, every Xbox One can be a full development kit, the same as an AAA studio would have access to. That is huge, my friends. That is no joke.

Link to comment
Share on other sites

So, no... I don't trust Adrian.  I don't.  I will see it in the final titles that I download and that many websites rate. 

Even then there is a huge boost for cloud compute that Sony won't be able to match.

I wouldn't call it dishonest, just ignorant.

 

Speaking of which, Sony have already mentioned they are working with a server hosting provider to offer similar functionality; when it's ready, and whether they do it quite as well, is yet to be seen.

Link to comment
Share on other sites

I am also excited about indies having access to all the features of the box, including cloud compute, voice recognition, 3D scanning support, biometrics, rumble triggers, XB1 VR, etc.

I mean, like I said, every Xbox One can be a full development kit, the same as an AAA studio would have access to. That is huge, my friends. That is no joke.

 

That's going to be an interesting thing, if it's ever used. Having your heart rate (I think that's the only "biometric" thing being sensed?) affect how the game plays could be a really groundbreaking feature for anything from Kinect Sports-type games to horror games.

 

But I can also definitely see people being absolutely creeped out by the fact that the little camera under the TV knows absolutely everything about you - it knows what you look like, what you sound like, what turns you on ( :laugh:) and what scares the **** out of you. The creep factor here I think may turn out to be a much bigger problem for MS than these pre-launch hypothetical spec battles.

Link to comment
Share on other sites

The XB1 isn't using off-the-shelf parts.  That is huge.  It gives them the flexibility and the efficiency to avoid most bottlenecks.

Comparing the PS4 to a PC is fine in most cases because it's basically a tweaked off-the-shelf PC.

1) The XB1 has a different architecture that can offload processing onto different chips.

2) The XB1 architecture has been designed with maximum efficiency throughout the pipeline and cache.

People look at the raw numbers, but those raw numbers may not be obtainable on a PC architecture like the PS4's, because those Compute Units (CUs) have to be used for other things in the game.

The problem is that most people are going to look at the raw numbers Sony put out and call it a day.  That would be fine if the XB1 were just a sterile off-the-shelf PC with no "cloud compute"; in that case Sony would be the most advanced, no doubt about it.

Facts are facts.

However, Microsoft owns D3D 11 and has built the mono driver to get the most out of their fully optimized, maximum-efficiency architecture, and that is not the same as what ships on PC.  It's a completely different beast.  You cannot compare the two systems unless you compare the final titles on each platform.

People out there act like Microsoft is stupid and just made a Wii U+, and that isn't the case at all.  We don't know for sure yet, but Microsoft's console could be efficient enough to outperform the PS4 in a lot of tasks.  Even Albert Penello talks about this in his comparison.

Here is an example of what I am talking about... (read the full source link to get the full picture)

Source: http://www.gamingtarget.com/article.php?artid=13270

 

I haven't seen anyone call the Xbox One the Wii U. It's not that custom either: it's designed by a PC software company and based on PC architecture. It really isn't a big deal that the PS4 has better system specs. The in-game differences will be small, but with that said, both consoles are more than seven years newer than their predecessors, so they are big improvements over the PS3 and Xbox 360 (and even the Wii U).

 

As for the quote you made, the Watch Dogs producer never said that. He said, "It's what I call dynamism; basically, the way the city reacts to you, we are able to push further on the Xbox One", and he says this in comparison to the Xbox 360, not the PS4, which also uses the 'cloud' (aka dedicated servers).

 

I think it's intellectually dishonest to say that the PS4 is 50% more powerful when Microsoft hasn't even released all of the information about its architecture.

It went from 30% to 40% and now 50%.  I mean use your heads.  Next time it will be 75% and then 100%.   The bar of ignorance keeps floating.

So, no... I don't trust Adrian.  I don't.  I will see it in the final titles that I download and that many websites rate.

Even then there is a huge boost for cloud compute that Sony won't be able to match.

Gaikai is going to be ready for the USA in 2014, but the rest of the world is going to have to wait a long time just to get backwards compatibility with PS2/PS3 content, and by then they will already have been surpassed by Microsoft.

Cloud compute for A.I. alone would be huge: you could have 100,000 A.I. creatures in the world, surpassing anything in previous video games.

I don't think some of you guys really get the cloud compute stuff; you think it is marketing, but it's not.

It's a HUGE advantage that isn't being taken advantage of much at launch, just like at the start of every console generation.  Halo will be the first of many titles that use it.

I personally can't wait for it.

I am also excited about indies having access to all the features of the box, including cloud compute, voice recognition, 3D scanning support, biometrics, rumble triggers, XB1 VR, etc.

I mean, like I said, every Xbox One can be a full development kit, the same as an AAA studio would have access to. That is huge, my friends. That is no joke.

 

Cloud compute is on the PS4 also; Gaikai is a game streaming service which allows you to play a game in the 'cloud'.

 

So the PS4 has two cloud systems: an OpenStack cloud, which is very similar to the Azure cloud, and the Gaikai cloud, which is an Nvidia GRID cloud. The Azure-type servers handle cloud compute and running game worlds; the Nvidia GRID servers stream game video while you control it.
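A rough way to picture the difference between the two models (illustrative Python only; the endpoints and message shapes below are made up, not Sony's or Microsoft's actual protocols): with compute offload the console still renders locally and only ships out heavy simulation work, while with Gaikai-style streaming the client just forwards controller input and gets finished video frames back.

```python
# Illustrative only: made-up payloads, not a real protocol from either vendor.

def compute_offload_tick(send, recv, world_state):
    """Azure/OpenStack-style cloud compute: the console still renders locally and
    only ships out heavy, latency-tolerant simulation work."""
    send({"type": "simulate", "world": world_state})
    return recv()   # e.g. AI plans / physics for far-away objects, merged locally


def game_streaming_tick(send, recv, controller_input):
    """Gaikai / NVIDIA GRID-style streaming: the whole game runs remotely; the
    client just forwards input and decodes video."""
    send({"type": "input", "buttons": controller_input})
    return recv()   # a compressed video frame, displayed as-is


# Tiny demo with in-memory stand-ins for the network:
outbox = []
print(compute_offload_tick(outbox.append, lambda: {"ai_plans": 42}, {"npcs": 100}))
print(game_streaming_tick(outbox.append, lambda: b"video-frame-bytes", ["A", "RT"]))
print("messages sent to the 'cloud':", outbox)
```

Same word "cloud", but what travels over the wire is completely different in the two cases.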

 

As for the performance percentages, I've only seen you give contradictory numbers, so I'm not sure why you're complaining about that.

Link to comment
Share on other sites

That's going to be an interesting thing, if it's ever used. Having your heart rate (I think that's the only "biometric" thing being sensed?) affect how the game plays could be a really groundbreaking feature for anything from Kinect Sports-type games to horror games.

 

But I can also definitely see people being absolutely creeped out by the fact that the little camera under the TV knows absolutely everything about you - it knows what you look like, what you sound like, what turns you on ( :laugh:) and what scares the **** out of you. The creep factor here I think may turn out to be a much bigger problem for MS than these pre-launch hypothetical spec battles.

 

The Kinect is the Xbox One's biggest advantage, I believe. The 'creep' factor, as you call it, I personally think will filter out some undesirable players (basically weird, creepy, paranoid types): the ones who think the NSA or some other government organisation really wants to sit down and watch them play games all day while eating junk food and scratching their arses.

Link to comment
Share on other sites

The Kinect is the Xbox One's biggest advantage, I believe. The 'creep' factor, as you call it, I personally think will filter out some undesirable players (basically weird, creepy, paranoid types): the ones who think the NSA or some other government organisation really wants to sit down and watch them play games all day while eating junk food and scratching their arses.

Well, it'll definitely do that (filter out the super-paranoid types), but I think the supposed "creepiness" might be a bit more visible to the general public, especially since the NSA seems to be in the news daily now. It really would be a shame if there was a backlash, because I think this sort of stuff really can move games forward, past the endless COD COD COD (BF!) that takes up the market and mindshare nowadays.

Link to comment
Share on other sites

The Kinect is the Xbox One's biggest advantage, I believe. The 'creep' factor, as you call it, I personally think will filter out some undesirable players (basically weird, creepy, paranoid types): the ones who think the NSA or some other government organisation really wants to sit down and watch them play games all day while eating junk food and scratching their arses.

We can only hope :laugh:

Link to comment
Share on other sites

I have to confess that one of the things that I am so excited about is cloud processing or as Microsoft calls it "Cloud Compute".

This I feel is a game changer (pun intended), and I am very excited about this technology.  It's not magic, but it is science, and it's a science that hasn't been used much thus far.

In consoles, no, but data mining has been around forever (which is what the KI team is using it for).  Also, all MMOs use the server to determine physics and AI, handle combat, etc.; the client in an MMO is only told what to display and do.  The idea for the cloud is to do the same thing we have had for many, many years, and that a fair few games already employ, but never go above and beyond with.
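To put that in code terms, here's a tiny, hypothetical sketch of the server-authoritative split (not taken from any real MMO): the server owns physics, AI and combat, and the client just draws whatever snapshot it is sent, which is essentially what the "cloud compute" pitch wants to do for single-player worlds.

```python
# Minimal sketch of the server-authoritative model MMOs already use.
# The server owns the simulation; the client only receives display state.
import random


class Server:
    def __init__(self):
        self.npcs = {i: {"x": 0.0, "y": 0.0, "hp": 100} for i in range(3)}

    def tick(self):
        # Authoritative simulation: movement, AI and combat are all resolved here.
        for npc in self.npcs.values():
            npc["x"] += random.uniform(-1, 1)
            npc["y"] += random.uniform(-1, 1)
        # Clients only get what they need in order to draw the world.
        return [{"id": i, "x": n["x"], "y": n["y"]} for i, n in self.npcs.items()]


class Client:
    def render(self, snapshot):
        # No game logic here: just display whatever the server says exists.
        for e in snapshot:
            print(f"draw npc {e['id']} at ({e['x']:.1f}, {e['y']:.1f})")


server, client = Server(), Client()
for _ in range(2):   # two simulated ticks
    client.render(server.tick())
```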

Link to comment
Share on other sites

That's my bad; it is a fork of FreeBSD, my memory failed me. An open-source OS that has been modified is not going to outperform an OS built specifically for that platform; logic should tell you that.

 

How is DirectX being 100% optimised for a fixed platform marketing guff? Seriously? It's one of the main reasons why this console has been producing 1080p at 60fps.

Pretty sure the x86 (and x64) versions have kernels optimized for the architecture too.  Also pretty sure DX is the same as OGL in that it is a framework that can communicate with hardware at the kernel level.

 

I have seen Linux games perform better than their Windows counterparts (FPS-wise), so it's 100% plausible to see games running better with OGL than DX, as it does happen on PC.

Also, there is little evidence of the console showing that, as the last time we were supposed to see that, it was proven to be a PC.

Link to comment
Share on other sites

Pretty sure the x86 (and x64) versions have kernels optimized for the architecture too.  Also pretty sure DX is the same as OGL in that it is a framework that can communicate with hardware at the kernel level.

I have seen Linux games perform better than their Windows counterparts (FPS-wise), so it's 100% plausible to see games running better with OGL than DX, as it does happen on PC.

Also, there is little evidence of the console showing that, as the last time we were supposed to see that, it was proven to be a PC.

That doesn't even make sense. You don't optimise a graphics library for the architecture?

 

Your post is all in the PC context; as far as this thread goes, you're not even on topic. Completely different kettle of fish.

Link to comment
Share on other sites

That doesn't even make sense. You don't optimise a graphics library for the architecture?

Your post is all in the PC context; as far as this thread goes, you're not even on topic. Completely different kettle of fish.

A processor is a processor, a GPU is a GPU, and an architecture is an architecture.  Things can be optimized for the hardware and instruction set, yes.  However, it is more about the kernels being optimized: the graphics libraries interact with the kernel, so yes, they may have done some optimization on the DX side (likely cleaner entry points, more CPU offloading, whatever the case), but most of your performance is going to come from how well the kernel can control the hardware.

 

While yes, a console is not your standard PC, you could compare it to a smartphone: it is a closed platform hardware-wise.  Yet we have seen amazing mobile games that run using OGL/GLES on Android and iOS.  My point was that just because one is GL and one is DX doesn't mean DX takes the cake, or that it is more optimized than OGL.

Also, all they did was take the existing Windows kernel and modify it for use with the XB1; it is no different than what Sony did for the PS4.

 

Link to comment
Share on other sites

http://www.theguardian.com/technology/gamesblog/2013/aug/02/xbox-one-gpu-speed-upgraded

So that means exactly what you've said: MS have stripped down and rewritten portions of DirectX so that it sits perfectly with how the architecture of that box works. How's that hard to understand?

 

No, logic tells you that an OS modified for a special purpose is never going to be as efficient as an OS built for that purpose.

 

No, it doesn't mean exactly what I've said. In fact it goes to prove exactly the opposite of what you think: if they've removed parts and rewritten some, then they obviously haven't rewritten all of it. Which means they're using the same base codebase in whatever language (likely C++), so there is still a compiler involved taking guesses and spitting out sub-100% optimal assembler.

 

100% optimisation is something you only see in things like "The Story of Mel".

 

We have not been briefed on the extent of the FreeBSD code utilised and changed, nor do we know the same for Microsoft's offering. What we do know, however, is that both Sony and Microsoft are using forks of existing codebases and then stripping them down and optimising for their specific hardware. So your "logic" is as sound as the Flying Spaghetti Monster.

Link to comment
Share on other sites

No, it doesn't mean exactly what I've said. In fact it goes to prove exactly the opposite of what you think: if they've removed parts and rewritten some, then they obviously haven't rewritten all of it. Which means they're using the same base codebase in whatever language (likely C++), so there is still a compiler involved taking guesses and spitting out sub-100% optimal assembler.

100% optimisation is something you only see in things like "The Story of Mel".

We have not been briefed on the extent of the FreeBSD code utilised and changed, nor do we know the same for Microsoft's offering. What we do know, however, is that both Sony and Microsoft are using forks of existing codebases and then stripping them down and optimising for their specific hardware. So your "logic" is as sound as the Flying Spaghetti Monster.

Do you just ignore everything I say? What do you think 'portions' means? Stop reposting exactly what I've just said, just with some big-boy words, to make it look like you know what you're talking about. This isn't optimisation in the software, but rather optimisation and alignment of the relationship between software and hardware. Are you telling me that the other console's graphics libraries are as fully featured and optimised as DirectX? If so, someone's in fanboy land.

 

No matter how you put it, it's still not as fit for purpose. It's why MS have an upper hand in this area, and every single person knows this other than you, who just tries to one-up everyone.

Link to comment
Share on other sites

A processor is a processor, a GPU is a GPU, and an architecture is an architecture.  Things can be optimized for the hardware and instruction set, yes.  However, it is more about the kernels being optimized: the graphics libraries interact with the kernel, so yes, they may have done some optimization on the DX side (likely cleaner entry points, more CPU offloading, whatever the case), but most of your performance is going to come from how well the kernel can control the hardware.

While yes, a console is not your standard PC, you could compare it to a smartphone: it is a closed platform hardware-wise.  Yet we have seen amazing mobile games that run using OGL/GLES on Android and iOS.  My point was that just because one is GL and one is DX doesn't mean DX takes the cake, or that it is more optimized than OGL.

Also, all they did was take the existing Windows kernel and modify it for use with the XB1; it is no different than what Sony did for the PS4.

 

Nothing can be optimised for an instruction set; it just WORKS with an instruction set. If something wasn't perfectly coded for that instruction set then it would simply corrupt and break. No, the graphics libraries use libraries and resources from the kernel, but mostly talk to the hardware directly. There's no need for another layer.

 

Android runs like a dog with quad-core CPUs, which it doesn't fully use. A perfect example for this argument. Compare Android to WP, which runs butter-smooth on the lowest-spec hardware. In this instance DX does take the cake because it's an instance which is specifically made for the X1. I shouldn't have to give examples; it's just logical sense. OGL, when stripped down and turned into a wrapper for the PS4, leaves most of the work to the developer, who has to organise the processing of the tasks themselves. There's just more legwork involved.

 

Real fanboy talk, guys.

Link to comment
Share on other sites

Do you just ignore everything I say? What do you think 'portions' means? Stop reposting exactly what I've just said, just with some big-boy words, to make it look like you know what you're talking about. This isn't optimisation in the software, but rather optimisation and alignment of the relationship between software and hardware. Are you telling me that the other console's graphics libraries are as fully featured and optimised as DirectX? If so, someone's in fanboy land.

No matter how you put it, it's still not as fit for purpose. It's why MS have an upper hand in this area, and every single person knows this other than you, who just tries to one-up everyone.

 

Optimisation of the graphics API implementation goes hand-in-hand with its design and relationship with the hardware. You made the idiotic claim that it was 100% optimised, and now you lack the ability to back up your claims, because you have no data to analyse and no references to compare. Claims in either direction are without basis.

 

So no, the only fanboy here is the one acting as a drooling PR mouthpiece.

Link to comment
Share on other sites

Optimisation of the graphics API implementation goes hand-in-hand with its design and relationship with the hardware. You made the idiotic claim that it was 100% optimised, and now you lack the ability to back up your claims, because you have no data to analyse and no references to compare. Claims in either direction are without basis.

Considering that the majority of graphics enhancements in the last decade have been driven by DirectX updates, I don't really see where this is coming from. They're almost inherently optimized by being designed alongside the hardware. Plus, DirectX 10 was a major rewrite of the DirectX pipeline, which also came with the improved driver model in Windows Vista and above, thus making DirectX 9 comparisons a bit moot.

 

The biggest API differences between OpenGL and DirectX lie in the fact that OpenGL is procedural (C-style) while DirectX is object-oriented. Being object-oriented tends to add a layer of abstraction that might otherwise not exist in a procedural library, but that is not the reason to pick one library over the other. The reason to pick OpenGL is that you want to avoid tying yourself to Windows, which is particularly relevant with mobile development (e.g. iOS and even Android) these days.
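A toy illustration of that stylistic difference (in Python, so obviously not the real OpenGL or Direct3D calls, just the shape of the two idioms): a C-style procedural API works through free functions and implicit global state, while an object-oriented API routes everything through device and resource objects, which is the extra layer of indirection being talked about.

```python
# Toy contrast only: these are NOT the real OpenGL/Direct3D APIs, just the two idioms.

# "Procedural, C-style" flavour: free functions operating on implicit global state.
_bound_texture = None

def bind_texture(tex_id):
    global _bound_texture
    _bound_texture = tex_id

def draw_quad():
    print(f"drawing quad with texture {_bound_texture}")


# "Object-oriented" flavour: a device object creates resource objects, and draw
# calls go through those objects (one extra level of indirection).
class Texture:
    def __init__(self, tex_id):
        self.tex_id = tex_id

class Device:
    def create_texture(self, tex_id):
        return Texture(tex_id)

    def draw_quad(self, texture):
        print(f"drawing quad with texture {texture.tex_id}")


# Same drawing done both ways:
bind_texture(7)
draw_quad()

device = Device()
device.draw_quad(device.create_texture(7))
```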

Link to comment
Share on other sites

Considering that the majority of graphics enhancements in the last decade have been driven by DirectX updates, I don't really see where this is coming from. They're almost inherently optimized by being designed alongside the hardware. Plus, DirectX 10 was a major rewrite of the DirectX pipeline, which also came with the improved driver model in Windows Vista and above, thus making DirectX 9 comparisons a bit moot.

The biggest API differences between OpenGL and DirectX lie in the fact that OpenGL is procedural (C-style) while DirectX is object-oriented. Being object-oriented tends to add a layer of abstraction that might otherwise not exist in a procedural library, but that is not the reason to pick one library over the other. The reason to pick OpenGL is that you want to avoid tying yourself to Windows, which is particularly relevant with mobile development (e.g. iOS and even Android) these days.

 

I wouldn't say DirectX (in general) is designed alongside the hardware so much as Microsoft, AMD, Nvidia and devs hammer out what they'd like and the hardware is adjusted to support the latest spec. If you look at AMD's GPU releases since they added support for D3D11, there has been a change of architecture each generation (VLIW5/4, GCN).

 

When it comes to the consoles, however, they can make adjustments (optimizations) to the API, but ultimately you're still dealing with an existing (albeit modified) codebase that trades performance for convenience and maintainability. It's a smart trade, as the performance gain is small for the effort required, but it means that it's not 100% optimized, just regular old optimized.

Link to comment
Share on other sites

I was reading the seekingalpha transcript of the AMD presentation at the Citi conference and here are a few things I thought were interesting:

 

Yes, so in terms of how different are the console chips: they are quite different relative to architecture. They use sort of similar IP, like our Jaguar core and our Radeon graphics, but in terms of the architectures, sort of how they decided to put it together, they really are custom designs. We run them as two completely separate teams. Given the confidential nature of the business, we have to ensure two very separate teams, and so they are really separate chips from that standpoint, twofold design efforts. I am sorry, your other questions?

 

 

Yes, so in this particular case both products are starting at the 28 nanometer technology node. That's actually a good thing, because if you think about it, 28 nanometers is a fairly mature technology node. There has been some discussion about the amount of content that's on these chips and the die sizes and stuff. The way I think about it is, when you are designing a game console, you are really designing for a console that's going to last, as I said, a seven-year cycle, or even if it's a shorter five-year cycle, it's a long cycle, and we make sure in this process that these are future-proof designs, that they really have good content...

 

 

I think the other piece of it is, by adopting the x86 plus Radeon graphics architecture, it's actually made it easier for software developers. If you looked at the last generation of game consoles, although the technology was very good, it took a while for all of the software titles to come out, and I think in this generation you see both console guys being very aggressive with their software partnerships. So I think it's early to say; we're very focused on the 2013 holiday launch, but I think we're very optimistic also about the potential of game consoles.

 

 

Read the full transcript here (warning: it's 7 pages long, so you might want to use Readability or something to make it one nice page, or read it after work  :p)

Link to comment
Share on other sites

Well, that does support the notion that both consoles are using custom designs.  It doesn't really help us find out what the customizations are, but it does hint at more going on than the standard CPU/GPU parts would suggest.

 

 

There is a good point made about why they are different.  Even if they are off-the-shelf parts, Sony and MS have to build hardware to last at least 5 years, and more like 7-10 years.  That is a very tough thing to do in a world where CPU and GPU tech improves every 6-12 months.  It requires some smart chip design to max out what they can get from the hardware.

Link to comment
Share on other sites

This topic is now closed to further replies.