Xbox One: Microsoft Claims that Cloud Computing Can Provide Power of 3 Xbox Ones, 32 Xbox 360s



There's no point explaining the technical side again because you and various other people obviously don't listen to it. You're completely wrong.

 

Your framerate comment makes me laugh, though. Did you even see the Build demo?

 

Yeah, a controlled demo, unplayable, and not representative of any real-world games we are currently seeing on these consoles. Remember when Sony chucked hundreds of ducks on a screen to show how the PS2 could handle graphics and physics (and then mockingly revisited it with a PS3 version)? Remember when MS demoed Milo to show us what Kinect 1 was apparently going to be capable of? Pretty much no one takes controlled demos seriously from any company unless they're playable and somehow represent an actual game, not a carefully constructed one-off scenario to push an agenda. Skeptics can eat crow afterwards if needed; saying we should be eating crow right now is not how "eating crow" works. None of the doubts has been proven wrong, by any stretch of the imagination.


People really are throwing the same arguments back and forth about why it can and why they think it can't work, yet if you break it down, it can. I think some people have the wrong idea about what cloud compute and "offloading" really mean in this case. No one's talking about rendering graphics in the cloud and streaming them; this isn't a Gaikai or OnLive type service, and there aren't massive amounts of data going back and forth between you and the server. Offloading in this case means having the cloud servers, with their more powerful hardware, do calculations and send the final answer back to your box. We're talking small amounts of data, not GBs, or in most cases not even MBs.

 

Though I'm not technical enough to break it down into its pieces, the fact is the CPU works on its tasks and these blocks of data aren't big at all: KBs of data flying back and forth between the CPU and the GPU. Even below-average 2 Mbps connections can send and receive KB-sized chunks without any fuss. Add to that the fact that, as has been stated, not everything going on in a game, or in this case the render pipeline, is dependent on latency, and those parts can be offloaded without issue.
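To put rough numbers on that, here's a minimal sketch of the request/response pattern being described. The payload layout, field names and the stand-in "server" function are all made up for illustration; the point is that the console sends a compact description of a job and only the finished answer comes back.

```python
import json
import random

# Hypothetical example of the kind of payload a game might send to a compute
# server. Names and structure are illustrative, not any real Xbox/Azure API.

def build_physics_request(num_chunks: int) -> bytes:
    """Describe a batch of debris chunks whose trajectories we want simulated."""
    chunks = [
        {"id": i,
         "pos": [random.uniform(-50, 50) for _ in range(3)],
         "vel": [random.uniform(-5, 5) for _ in range(3)]}
        for i in range(num_chunks)
    ]
    return json.dumps({"dt": 1 / 30, "steps": 30, "chunks": chunks}).encode()

def fake_cloud_solve(request: bytes) -> bytes:
    """Stand-in for the remote server: integrate each chunk and return only
    the final positions -- the 'answer', not the intermediate work."""
    job = json.loads(request)
    dt, steps = job["dt"], job["steps"]
    results = [
        {"id": c["id"],
         "pos": [p + v * dt * steps for p, v in zip(c["pos"], c["vel"])]}
        for c in job["chunks"]
    ]
    return json.dumps(results).encode()

req = build_physics_request(500)              # 500 debris chunks
resp = fake_cloud_solve(req)
print(f"request  {len(req) / 1024:.1f} KB")   # tens of KB, not MBs
print(f"response {len(resp) / 1024:.1f} KB")
```

Even with 500 chunks in the batch, the request and the reply come out to tens of kilobytes, which a modest connection can move without fuss.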


There's no point explaining the technical side again because you and various other people obviously don't listen to it. You're completely wrong.

 

Your framerate comment makes me laugh, though. Did you even see the Build demo?

 

So you really believe there's a method to offload a portion of the rendering to the cloud without either cutting your framerate or displaying incomplete frames half the time?

 

As for that demo, you mean the questionable one from a few weeks or so ago? Yeah, I've seen it. That doesn't mean I believe it's completely legit though. Mainly because, even with something else calculating the physics, the increase in things being rendered on the client (hint: that's your Xbox) from the enhanced physics will not come for free like they want you to believe. In fact, I'm pretty sure that's why they capped the framerate in that demo: to hide the fact it still wouldn't be 'free'.


Yeah, a controlled demo, unplayable, and not representative of any real-world games we are currently seeing on these consoles. Remember when Sony chucked hundreds of ducks on a screen to show how the PS2 could handle graphics and physics (and then mockingly revisited it with a PS3 version)? Remember when MS demoed Milo to show us what Kinect 1 was apparently going to be capable of? Pretty much no one takes controlled demos seriously from any company unless they're playable and somehow represent an actual game, not a carefully constructed one-off scenario to push an agenda. Skeptics can eat crow afterwards if needed; saying we should be eating crow right now is not how "eating crow" works. None of the doubts has been proven wrong, by any stretch of the imagination.

Playable by them, just not by the general public. A controlled demo which expanded on the idea many disbelieved and showed how it's possible, and one that is feasible in actual games. For example, an online match of Halo 5 where everything is destructible but the destruction is calculated on the server rather than locally.

 

All the demos you listed were possible and true, so I don't get what you're saying. It doesn't invalidate the whole argument; it's a completely different situation.


So you really believe there's a method to offload a portion of the rendering to the cloud without either cutting your framerate or displaying incomplete frames half the time?

 

As for that demo, you mean the questionable one from a few weeks or so ago? Yeah, I've seen it. That doesn't mean I believe it's completely legit though. Mainly because, even with something else calculating the physics, the increase in things being rendered on the client (hint: that's your Xbox) from the enhanced physics will not come for free like they want you to believe. In fact, I'm pretty sure that's why they locked the framerate in that demo: to hide the fact it still wouldn't be 'free'.

I don't believe, I know. There's a fundamental difference.

 

Having 3x the CPU power assigned to each X1 (according to MS) means that heavy CPU tasks could be offloaded into the cloud (physics, particle effects, AI, etc.). With these heavy CPU tasks out of the way, the engine has much more time to work with in a frame. This could be assigned to the GPU for prettier surroundings or given to the CPU to do more work locally. Or, like the BUILD demo shows, you could simply offload tasks like destruction physics for better gameplay while keeping the frame rate stable. It's quite a technical idea, but a very simple one which has been used in scientific and mathematical computing for years. You just have to keep in mind which tasks can be offloaded because they aren't affected by latency. You're not going to move collision detection to the cloud, are you?

 

The problem isn't rendering the squares; draw calls became hardly an issue once the 360 was released. It's the calculation of how the chunks react and move in the environment that takes the time in an engine.
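As a rough illustration of that budgeting argument, here's a sketch that tags per-frame CPU jobs by whether they can tolerate a round trip to a server. The task names and millisecond costs are invented, and real engines overlap work across threads, so treat this only as the shape of the idea:

```python
# Sketch of the budgeting idea above: tag per-frame CPU jobs as
# latency-sensitive (must stay local) or latency-tolerant (candidates for
# the cloud), and see how much of a 60 fps frame budget is freed.
# The task list and costs are made-up illustrative numbers.

FRAME_BUDGET_MS = 1000 / 60  # ~16.6 ms per frame at 60 fps

tasks = [
    # (name, cost in ms, can be offloaded?)
    ("input + collision detection", 2.0, False),  # needs this frame's result
    ("animation / skinning",        2.5, False),
    ("destruction physics",         3.0, True),   # tolerates a frame or two of lag
    ("crowd / ambient AI",          1.5, True),
    ("draw-call submission",        1.8, False),
]

local_ms     = sum(cost for _, cost, offload in tasks if not offload)
offloaded_ms = sum(cost for _, cost, offload in tasks if offload)

print(f"CPU time kept local:       {local_ms:.1f} ms")
print(f"CPU time moved to servers: {offloaded_ms:.1f} ms")
print(f"frame headroom reclaimed:  {offloaded_ms / FRAME_BUDGET_MS:.0%}")
```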


 

And no, sending data to the cloud is not theoretical. You're already doing it on this website. Many games already make use of it, and there are hundreds of technologies out there built explicitly for managing such environments in as fast a way as possible. Facebook, Twitter, Steam, any multiplayer game that's peer-to-peer, etc. All these things send data in real time back and forth. So to assume doing the same with some physics calculations is theoretical is ignorant at best.

That wasn't my point. The issue is whether doing so can improve the gaming experience in a significant manner (i.e. if it allows the XB1 to do things the PS4 can't do). It's all very well offloading physics data to a server but if it can be handled locally with minimal visual or performance difference then it doesn't achieve much, plus there are issues with latency spikes and connection interruptions. Things like physics and AI are very difficult to quantify - if an AI routine is three times as demanding you won't necessarily see much difference in-game.
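To make the latency-spike concern concrete, here is a rough sketch (the remote call is a stand-in, not any real service) of what a game has to do when a cloud answer is late: fall back to a simpler local result. That is exactly why the offloaded work can't be anything a frame strictly depends on.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

# Hedged sketch: ask the server for a result, but if it doesn't arrive within
# roughly a frame's budget, fall back to a cheaper local approximation.
# 'remote_ai_update' simulates a server that is sometimes fast, sometimes spiky.

def remote_ai_update(world_state):
    time.sleep(random.choice([0.02, 0.4]))   # 20 ms normally, 400 ms on a spike
    return {"plan": "detailed", "source": "cloud"}

def local_ai_fallback(world_state):
    return {"plan": "simple", "source": "local"}

executor = ThreadPoolExecutor(max_workers=1)

def ai_for_this_frame(world_state, budget_s=0.033):
    future = executor.submit(remote_ai_update, world_state)
    try:
        return future.result(timeout=budget_s)
    except TimeoutError:
        future.cancel()                      # only cancels if it never started
        return local_ai_fallback(world_state)

for frame in range(5):
    print(frame, ai_for_this_frame({}))
```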

 

So because it hasn't been commercially implemented in games, it's all lies and PR? That's what you're saying.

Basically. It's been touted as a major feature but we haven't seen the developer support or implementations to justify that claim. It just seems to be Microsoft's way to distract people from the performance issues affecting most XB1 games. It's the same with the Kinect - Microsoft has hyped it up and bundled it with the XB1, and yet few games make use of it and those that do are generally pretty gimmicky. The technology is impressive, much more so than the cloud, but it has very little impact upon the gaming experience and reduces immersion.

 

I don't want people to assume that I have something against Microsoft in particular, as that simply isn't true. I think the touch input on the PS4 is gimmicky and immersion-breaking; the Leap Motion looked great in the videos but I have found it to be impractical in day-to-day usage; the Wii U is gimmicky and underpowered. I'm interested in the gaming experience, and right now Sony is delivering the better experience and its work on VR looks very promising - it's still way behind where PC gaming is but it's the best of the "next-gen" consoles.

 

With that logic, why even buy PS4 then? Sony has failed to show innovative experiences simply not attainable on the PS3.

As you yourself pointed out, the PS4 has much better visuals than the PS3, which is an innovation. But for what it's worth I wouldn't buy a PS4, as I consider the PC to be where the innovation is occurring.

 


Basically. It's been touted as a major feature but we haven't seen the developer support or implementations to justify that claim. It just seems to be Microsoft's way to distract people from the performance issues affecting most XB1 games. It's the same with the Kinect - Microsoft has hyped it up and bundled it with the XB1, and yet few games make use of it and those that do are generally pretty gimmicky. The technology is impressive, much more so than the cloud, but it has very little impact upon the gaming experience and reduces immersion.

 

I don't want people to assume that I have something against Microsoft in particular, as that simply isn't true. I think the touch input on the PS4 is gimmicky and immersion-breaking; the Leap Motion looked great in the videos but I have found it to be impractical in day-to-day usage; the Wii U is gimmicky and underpowered. I'm interested in the gaming experience, and right now Sony is delivering the better experience and its work on VR looks very promising - it's still way behind where PC gaming is but it's the best of the "next-gen" consoles.

There wasn't any substantial API support to help with this until the updated Azure SDK that was shown during Build. Incorporating these changes into a game engine takes a lot of time, and it's something that simply hasn't been feasible in the time these consoles have been out. That doesn't mean it doesn't work; it just hasn't been implemented yet. We're five months into these consoles; you must understand that these things take time.

 

VR for me is definitely interesting; just count me out for playing games with a heavy box on my head constantly. I'd definitely use it, but given how much I actually would and how expensive these devices will be, I can't see myself picking one up, let alone the average Joe.


As you yourself pointed out, the PS4 has much better visuals than the PS3, which is an innovation. But for what it's worth I wouldn't buy a PS4, as I consider the PC to be where the innovation is occurring.

How is adding more GPU power innovation when it's been done a million times already?


There wasn't any substantial API support to help with this until the updated Azure SDK that was shown during Build. Incorporating these changes into a game engine takes a lot of time, and it's something that simply hasn't been feasible in the time these consoles have been out. That doesn't mean it doesn't work; it just hasn't been implemented yet. We're five months into these consoles; you must understand that these things take time.

Yes, but Microsoft is hyping the technology now and has been doing so since before launch.

 

How is adding more GPU power innovation when it's been done a million times already?

More graphics power allows developers to produce better and new experiences. Also, being a DX11 chipset means it's capable of tessellation and the APUs are better equipped for processing physics.


Yes, but Microsoft is hyping the technology now and has been doing so since before launch.

That doesn't mean they're lying and that the technology won't help as claimed, which is what you're saying.

That wasn't my point. The issue is whether doing so can improve the gaming experience in a significant manner (i.e. if it allows the XB1 to do things the PS4 can't do). It's all very well offloading physics data to a server but if it can be handled locally with minimal visual or performance difference then it doesn't achieve much, plus there are issues with latency spikes and connection interruptions. Things like physics and AI are very difficult to quantify - if an AI routine is three times as demanding you won't necessarily see much difference in-game.

 

If it can be handled with minimal visual or performance difference then they won't need to use the cloud. You're acting as if developers will use it just because and not for a good reason. The whole point of it is to reduce processing time and load on the local system so if you're not using it for that reason you're using it incorrectly. I don't even get why you'd make such a statement.


Regardless, my point wasn't that servers (the cloud) can't improve games; my point is they already do on PC, PS4, PS3 and Xbox 360. A server hosts data/content and makes calculations which are done in the 'cloud' rather than on the local machine. It's nothing new or specific to the Xbox One.

 

What can an Xbox One do with the servers (cloud) that can't be done on any other machine? I'm sure if you answered that and gave sources to support your claims, fewer people would think Microsoft was trying to hype up the Xbox One to compensate for the fact it's a weaker system than the PS4. I personally think a toaster with an LCD monitor could utilize the cloud as much as an Xbox One; prove me wrong.

Do you guys not follow other posts in this thread?

What MS is doing differently is twofold:

1. Offering access to server hardware for free.

2. Building the X1 in a way to maximize usage of the cloud (they went over that early on, I can point to articles if you need them)

ANY device can connect to a cloud server. When will we get past the false impressions people have? As much as people say MS lies about this stuff, at least point out that they have never claimed that the cloud itself is different from any other cloud. It's all just a collection of servers. MS might claim that Azure has an advantage in the tools developers can use, but again, Azure is not specific to the X1 and can work with any device.

 

TBH, the sooner people get off Microsoft's cloud hype train and come back to reality, the better.

You will not be rendering visuals via a server. It's too latency-sensitive. The only way servers could improve visuals is if they offload enough other stuff to allow the GPU to do more than it otherwise would for visuals. I have no idea if that would amount to anything.

As far as people being on the hype train, what exactly do you mean by that? For one thing, it seems like most people around here are very much against MS' investment in servers, so you're already in the majority opinion. Secondly, I haven't seen anyone around here make the crazy claims about the cloud that so many have focused on. The only people left who aren't opposed to it are focusing on what it can do, you know, the reality. That's where I'm at, anyway.

 

 

Yeah, a controlled demo, unplayable, and not representative of any real-world games we are currently seeing on these consoles. Remember when Sony chucked hundreds of ducks on a screen to show how the PS2 could handle graphics and physics (and then mockingly revisited it with a PS3 version)? Remember when MS demoed Milo to show us what Kinect 1 was apparently going to be capable of? Pretty much no one takes controlled demos seriously from any company unless they're playable and somehow represent an actual game, not a carefully constructed one-off scenario to push an agenda. Skeptics can eat crow afterwards if needed; saying we should be eating crow right now is not how "eating crow" works. None of the doubts has been proven wrong, by any stretch of the imagination.

I think the difference here is that the demo MS showed is not completely unheard of or new. The others you mentioned were completely new things that were only being created to market something. The principle behind the MS demo is not unverifiable. Literally, the techniques being used can be tested and verified elsewhere.

This is the part I don't get. Using servers to offload number crunching is not new, and it was not controversial until now. Now that MS has used it as a feature of its platform, suddenly it's all in question. If MS had not promoted this feature at all, there would be very little blowback.

The part I agree with you about is that in order for it to be a clear advantage for gamers, more games have to come along to demonstrate its usefulness.

As for that demo, you mean the questionable one from a few weeks or so ago? Yeah, I've seen it. That doesn't mean I believe it's completely legit though. Mainly because, even with something else calculating the physics, the increase in things being rendered on the client (hint: that's your Xbox) from the enhanced physics will not come for free like they want you to believe. In fact, I'm pretty sure that's why they capped the framerate in that demo: to hide the fact it still wouldn't be 'free'.

What evidence is there that it was a lie? It's easy to just throw that out there and dismiss something, but I would love to see the evidence that the demo was a fake.

 

Yes, but Microsoft is hyping the technology now and has been doing so since before launch.

Probably because they invested so much into it.

I suspect that they knew there would not be many games making use of the servers beyond the basics for a while, but they also felt they needed to get the word out there to push it as a feature. It's one of those risks to take.

More graphics power allows developers to produce better and new experiences. Also, being a DX11 chipset means it's capable of tessellation and the APUs are better equipped for processing physics.

So would you say that the X1 migrating to DX12 would be considered an innovation since it allows developers to do more?


Yes, but Microsoft is hyping the technology now and has been doing so since before launch.

 

 

More graphics power allows developers to produce better and new experiences. Also, being a DX11 chipset means it's capable of tessellation and the APUs are better equipped for processing physics.    <-----  This is called upgrading.  NOT INNOVATION


People need to move off the fixed idea that this is going to be a boost to graphics visuals; it's not going to work that way, and MS never said it was either. The idea is cloud compute, and it does what the name says: it crunches numbers and returns the finished values back to the system so the CPU isn't swamped and there's no queue build-up, which is what slows down framerates. It doesn't impact graphics quality the way some seem to think, though.

 

This whole sticking point about latency is also something you have to get past: not everything going on in a game is impacted by latency or dependent on it. I don't know how much more simply it can be said: lots of things have wiggle room, and at the end of the day we're talking about sending KBs of data back and forth. It's no different from hitting a website. Lots of parts of a website are small, KBs' worth, while others are bigger, like images and videos; all the little bits, like the JS, don't take up much bandwidth but are important parts that take up CPU time, or, depending on the browser, GPU time.

 

It's the same thinking here: there are countless things you can offload and have the cloud, the server, number-crunch for you so your CPU isn't bogged down and can keep feeding the GPU a nice steady flow to keep performance smooth. There's AI, there's physics, there's lighting and weather. There are other things that don't depend on player interaction and are less "real-time" that can be offloaded, so scenes can be a bit more dynamic, and so on. It's all possible; it just takes time for developers to use more of it. It's only been a few months, and people expected every developer to be using this right away? Let's be a bit realistic here: it's not a switch you can just flip on and use. There's a server-side part that needs to be made and tested and so on. Time is always a factor here, so why not give them some and see where it goes before everyone gets out the pitchforks or calls this BS.
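To make that "wiggle room" point concrete, here's a tiny sketch of the asynchronous pattern being described (the delay and payload names are purely illustrative): work requested on one frame is simply consumed a few frames later, whenever the reply arrives, so a round trip of several frames never stalls rendering.

```python
from collections import deque

# Results requested from a server are not needed the same frame they are
# requested. A request made on frame N is consumed a few frames later,
# whenever its (simulated) reply arrives. All numbers are illustrative.

NETWORK_DELAY_FRAMES = 4          # e.g. ~66 ms round trip at 60 fps
in_flight = deque()               # queue of (ready_frame, payload)

def request_cloud_work(frame, payload):
    in_flight.append((frame + NETWORK_DELAY_FRAMES, payload))

def collect_ready_results(frame):
    ready = []
    while in_flight and in_flight[0][0] <= frame:
        ready.append(in_flight.popleft()[1])
    return ready

for frame in range(10):
    request_cloud_work(frame, f"lighting bake #{frame}")
    for result in collect_ready_results(frame):
        print(f"frame {frame}: applied {result}")
```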


Yes, but Microsoft is hyping the technology now and has been doing so since before launch.

 

 

More graphics power allows developers to produce better and new experiences. Also, being a DX11 chipset means it's capable of tessellation and the APUs are better equipped for processing physics.

Again...how is throwing more hardware at something innovative?

 

More cloud power allows developers to produce better and new experiences. Also, being an Azure cloud means it's capable of scaling and the games are better equipped for processing physics.


People need to move off the fixed idea that this is going to be a boost to graphics visuals; it's not going to work that way, and MS never said it was either. The idea is cloud compute, and it does what the name says: it crunches numbers and returns the finished values back to the system so the CPU isn't swamped and there's no queue build-up, which is what slows down framerates. It doesn't impact graphics quality the way some seem to think, though.

Good post, but I've got to say I don't agree with this part. If you remove that much work from the CPU, the time the CPU used to spend executing that code can be given to the GPU to make the game prettier. It just depends on the engine.

 

For example, to reach 60 FPS you need to render each frame in 16.6 ms. Say you had 8.8 ms for the CPU and 7.8 ms for the GPU, and you remove the physics calculations, which take 3 ms: you could give that 3 ms to the GPU. It completely depends on what runs in parallel and on the dependencies, though. Interesting concept.
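Working that arithmetic through with the same illustrative numbers (and the same simplifying assumption of a strictly serial CPU-then-GPU frame, which real engines don't follow exactly):

```python
# Frame-budget arithmetic from the example above. Illustrative numbers only.

frame_budget = 1000 / 60          # ~16.7 ms target for 60 fps
cpu_ms, gpu_ms = 8.8, 7.8         # serial CPU + GPU cost per frame
physics_ms = 3.0                  # portion of CPU work moved to the server

cpu_after = cpu_ms - physics_ms   # 5.8 ms of local CPU work left

# Option A: spend the reclaimed time on the GPU (prettier frames, still 60 fps)
gpu_budget_after = frame_budget - cpu_after
print(f"GPU budget grows from {gpu_ms} ms to {gpu_budget_after:.1f} ms")

# Option B: keep the same GPU work and let the frame finish earlier
new_frame_ms = cpu_after + gpu_ms
print(f"frame time drops to {new_frame_ms:.1f} ms (~{1000 / new_frame_ms:.0f} fps)")
```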


Good post, but I've got to say I don't agree with this part. If you remove that much work from the CPU, the time the CPU used to spend executing that code can be given to the GPU to make the game prettier. It just depends on the engine.

 

For example, to reach 60 FPS you need to render each frame in 16.6 ms. Say you had 8.8 ms for the CPU and 7.8 ms for the GPU, and you remove the physics calculations, which take 3 ms: you could give that 3 ms to the GPU. It completely depends on what runs in parallel and on the dependencies, though. Interesting concept.

That certainly sounds possible, but I just think it's important to stress that MS is not claiming that the cloud does anything that affects the visuals directly, as in doing any rendering on the servers.

What it can do could result in better visuals, but that is going to vary a lot and is not guaranteed.


Here's why I personally do not get the claim that Forza 5 would not be the same without the power of the cloud. Someone can correct me if I am wrong, but the way I understand the integration with Forza 5 specifically is that yes, the cloud does the calculation for the AI, but if you then play the game offline, you play with whatever the latest Drivatar data is that you downloaded the last time you played connected to the cloud.

So in that respect, is it really the case that the cloud makes it that much better of an experience, and an experience that can only be had thanks to the cloud? Or does the cloud just make things more convenient?

What I mean is that it seems as if these calculations could also be done without the cloud itself. It would obviously mean extended load times, etc. But if you can download the Drivatar information for use when the game is offline, then it is not really the case that the cloud has to be used to get that Drivatar information; it just makes it easier to access and quicker to get.

So it is not really the case that the cloud is solely responsible for making it that much better of a game, just that it makes for a smoother experience overall.

 

The actual calculation of the Drivatar profile is done in the cloud. The Xbox could theoretically also crunch all these many thousands of variables during a race to create a profile, but then you'd need a few extra Xboxes to do it :)

 

It's the actual creation of the Drivatar profile that is done in the cloud, live while you play, since it records thousands of variables for every second you drive and uses them to create and modify the profile. Without the cloud you would only have simple, dumb Drivatars like in the previous games.
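A rough sketch of how that split could look in practice (endpoint, field names and sample rate are invented for illustration): the console only records and batches telemetry, the heavy profile-building happens server-side, and the upload isn't latency-sensitive at all since it can happen between races.

```python
import json

# Illustrative split: the console records driving telemetry; the expensive
# profile-building happens on the server; the console later downloads the
# finished profile for offline use. All names and values are made up.

telemetry = []

def record_sample(t, speed, steering, throttle, brake, racing_line_error):
    telemetry.append({"t": t, "speed": speed, "steer": steering,
                      "throttle": throttle, "brake": brake,
                      "line_err": racing_line_error})

def end_of_race_upload():
    """Send the whole session in one batch; not latency-sensitive, since the
    server can reply with an updated profile whenever it is ready."""
    payload = json.dumps(telemetry).encode()
    print(f"uploading {len(payload) / 1024:.0f} KB of telemetry for server-side training")
    telemetry.clear()

# ~60 samples per second for a 3-minute race
for i in range(60 * 180):
    record_sample(i / 60, speed=50.0, steering=0.0, throttle=1.0, brake=0.0,
                  racing_line_error=0.1)
end_of_race_upload()
```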


They could make an HDMI joiner and have two Xbox Ones outputting to a single HDMI output, with each Xbox One doing one half of the screen, and could even write code in the system to make them communicate over WiFi so you wouldn't need a USB cable. But that's not going to happen. It's too much effort for something no one is going to use; no one is going to buy two Xbox Ones to compete with the competitor's already cheaper system.

 

Incidentally, for previous Forza games you could link up three or four 360s for panoramic output, and people used it.

 

so... 


And for good reason. Rendering (for real-time apps) and high latency do not mix, and the internet has high latency to anything outside the same building. Sure, they could theoretically enhance the visuals with the cloud, but that would come at the cost of your game running at framerates around 10-15 fps because it has to wait ages before it can finish a frame due to the high latency of the cloud.

 

TBH, the sooner people get off Microsoft's cloud hype train and come back to reality, the better.

 

 

You guys seem to have very short memories.

 

You can increase graphics fidelity with the cloud by moving non-graphics GPGPU calculations from the GPU to the cloud, freeing up as much as 50% of the GPU resources in some physics-heavy games. Are you saying 50% more available GPU power won't have an effect on graphics output?

 

Even cloud rendering could be used in conjunction with local rendering to increase fidelity. Imagine this: the server renders the background of the scene, say one of those huge Avatar vistas with trees, animals, and all kinds of things moving in the background. All of this, everything more than 500 meters away, would be rendered in the cloud and streamed to you. Now you're going to say, "Ah, but it'll look weird because it'll lag behind and won't be synced with your movements." True, UNLESS you account for that: say you render 10% bigger than the screen, and the console then locally moves this oversized background matte around to account for the lag. So even cloud-assisted rendering is feasible, even though it's not something MS has said they will use or suggested doing. It's quite possible.
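Out of curiosity, here's a back-of-the-envelope check of that oversized-backdrop idea (all numbers invented): how much camera rotation a 10% overscan margin can actually hide while the streamed background is in flight.

```python
import math

# How much can the camera turn before a 10% wider server-rendered backdrop
# runs out of margin? Field of view, overscan and latency are illustrative.

h_fov_deg = 90.0        # horizontal field of view shown on screen
overscan = 0.10         # server renders 10% wider than the screen
round_trip_ms = 100.0   # latency of the streamed background layer

# Extra angle covered on each side by the wider render.
half_screen = math.tan(math.radians(h_fov_deg / 2))
half_rendered = half_screen * (1 + overscan)
margin_deg = math.degrees(math.atan(half_rendered)) - h_fov_deg / 2

max_turn_rate = margin_deg / (round_trip_ms / 1000)   # deg/s the margin absorbs
print(f"margin per side: {margin_deg:.1f} degrees")
print(f"hides camera turns up to ~{max_turn_rate:.0f} deg/s at {round_trip_ms:.0f} ms latency")
```

With those numbers the margin only covers fairly gentle camera movement, which is one reason the idea would be limited to distant, slowly changing scenery.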



That is awesome... I love how the Xbox One is embracing the cloud. It's a win-win for gamers. Another reason why I am really enjoying this new gen and the Xbox. I thought last gen was awesome on the 360, but this new gen will be so much better.



Three Xbox Ones in the cloud is a pretty bold claim, but if you are talking about bursting for a specific CPU workload it is believable. It's not something anyone should consider sustainable.

