Cloud gaming service OnLive shines at MIT conference


OnLive CEO Steve Perlman details what we can expect from the cloud computing system, including bandwidth requirements. With multiple major publishers on board and another successful tech demo at MIT's conference, the buzz is growing.

By John Timmer | Last updated September 23, 2009


MIT is playing host to Technology Review's EmTech conference, which focuses on up-and-coming companies and the new technology they're bringing to market. Steve Perlman, the founder and CEO of the OnLive gaming service, was given the chance to demonstrate his company's cloud gaming service, and took some time to explain the technology backing it. OnLive is gaming's answer to cloud computing: the applications run on hardware in a server farm, while users only need low-end hardware (including OnLive's own mini-console) and broadband Internet to connect in and play. The service will have some limitations, however, and your experience may vary with network speed.

For starters, Perlman gave some indication of the service's network requirements. Anyone with a 1.5Mbps connection should be able to run the service at standard definition; 5Mbps will be required for HD content. Bandwidth isn't the whole story, though: low-latency connections will be necessary to avoid hitting the user with perceptible lag. OnLive has found that the server has to be within about 1,000 miles of the end user in order to avoid this, so it will be launching the service with four server farms.
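Those two figures (a 1,000-mile server radius and a 5Mbps HD stream) can be sanity-checked with back-of-the-envelope arithmetic. The sketch below is purely illustrative; the assumption that signals travel through fiber at roughly two-thirds of c is mine, not an OnLive figure.

```python
# Back-of-the-envelope check on OnLive's stated limits (illustrative only).

C_KM_PER_MS = 299_792.458 / 1000          # speed of light: ~300 km per millisecond
FIBER_FRACTION = 2 / 3                    # assumed signal speed in fiber (~2/3 of c)

def round_trip_ms(distance_km: float) -> float:
    """Best-case propagation delay there and back, ignoring routing and queueing."""
    return 2 * distance_km / (C_KM_PER_MS * FIBER_FRACTION)

def hd_gigabytes_per_hour(mbps: float = 5.0) -> float:
    """Data consumed by a constant video stream at the given bitrate."""
    return mbps / 8 * 3600 / 1000         # Mbps -> MB/s -> MB/hour -> GB/hour

miles_1000_in_km = 1000 * 1.609
print(f"1,000 miles round trip: {round_trip_ms(miles_1000_in_km):.1f} ms")  # ~16 ms
print(f"5Mbps HD stream: {hd_gigabytes_per_hour():.2f} GB/hour")            # 2.25 GB/hour
```

Even in this best case, the wire alone adds around 16ms of round-trip delay before any encoding, decoding, or routing overhead, which helps explain why OnLive caps the distance at about 1,000 miles.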

Another impressive tech demo

For the purposes of his demo, Perlman connected to a server farm in Virginia. MIT clearly has access to some pretty significant pipes, but the quality of the demos, which included some time showing off an arena in Crysis, was very impressive, given that it was running on a standard MacBook Pro. All sorts of environmental features, from bubbles generated while swimming to crabs scuttling along the beach, were fluid, and provided a very immersive experience. Perlman joked repeatedly about his poor gaming skills as he rushed to show the audience as much as possible before one of the experienced beta testers blew him away.

He also gave a short rundown of the service's additional features, like the ability to save clips of games that can be shared with others. There's also an arena, where people can enter a game environment that someone else is playing and watch how they experience the game. Although these are presented as end-user features, Perlman pointed out that they could provide significant benefits for game developers that could follow along as users flail through problem areas or expose bugs in the software.

Perlman was willing to talk briefly about the hardware that powers things at the server level. The basic functional unit is a standard PC motherboard. Casual games get by on integrated video, while the current generation of games runs on motherboards with high-end graphics hardware from NVIDIA and AMD. The only custom hardware is a single add-on board that handles both compressing the video for transmission to end users and smoothing over the inevitable network hiccups.

Beyond the broadband connection, all the end user needs to be able to do is handle input from controllers and display video, neither of which is especially demanding. That goes a long way towards explaining why OnLive can get away with a mini-console that appeared to be somewhere around the size of a portable laptop hard drive.

PC gaming's audience could be broadened

The pitch to the gaming audience is obvious: no more platform exclusives and a step off the perpetual upgrade treadmill. Whatever you happen to be using, it'll be good enough for OnLive's service. But Perlman also pointed out that there are significant advantages for game makers. A single game can now run on any platform out there, greatly increasing the audience and eliminating porting issues. Since the actual software never goes out to the end users, piracy is essentially a nonissue. Perlman also noted it could kill the secondhand game market; although users might not appreciate that, the publishers will.

The porting process is also extremely simple. Perlman said it typically takes OnLive three weeks and one engineer to handle the process, most of which involves eliminating dialogs and keyboard commands that assume the user is running the game locally on their own hardware.

So far, OnLive has nine major game publishers, including EA, THQ, and Ubisoft, on board.

Will it actually work? The basic principles seem solid, and Perlman was apparently involved in developing the QuickTime video platform, so he appears to have the right experience to put things together. But the ultimate determinant may not be the technology that OnLive has control over. Instead, the local ISPs and home network may have a tremendous impact on whether the games are even playable, much less immersive.

You can sign up for beta access to the service right now, although OnLive gently told Ars that journalists are not, at the moment, being extended invitations.

Source: Ars Technica


I just don't see this working at all on any significant level. Seriously, how many server farms do they need in order to handle, say, millions of users playing at the same time? It's all fine and dandy with a demo that doesn't feature any real high-end graphics. I guess this could work for Wii-like games at SD quality that are fairly light in size and textures because of their resolution.

How in the hell do they expect to serve 1080p, or at worst 720p, gaming with the full quality we have on consoles? There's a reason we have consoles: to offload all processing locally and create an immersive experience.

Another point: I don't see this guy being involved with QuickTime as a huge plus, to be honest. QuickTime is such a clunker and never really got any serious traction outside Apple's hardware.

Overall it's a noble idea, but I just don't see it working. It's not hard to pump up a demo, but put it in a mainstream environment and it will collapse or severely limit the experience.


Network latency and video quality's going to really hurt this.

It'll certainly work, but the video will be out of sync with the user playing the game, and the video won't be of good quality (which is bad, since you need very high-quality encoding).


Another point: I don't see this guy being involved with QuickTime as a huge plus, to be honest. QuickTime is such a clunker and never really got any serious traction outside Apple's hardware.

My guess is that because it's all about streaming video, QuickTime would probably be very handy. The software is a bit clunky, yes, but the codec is quite alright.


Surely they were supposed to be running a semi-open beta over the summer?

No, I think the invitations were handed out over the summer; the actual beta isn't until next year.


Network latency and video quality's going to really hurt this.

It'll certainly work, but the video will be out of sync with the user playing the game, and the video won't be of good quality (which is bad, since you need very high-quality encoding).

They have already stated you will need to be within a certain mileage of their service centers and your latency will need to be in a specific range.

I don't know about you, but on my FiOS connection my latency is less than 40ms to most sites pretty much all day long, and typically it's much lower. OnLive is in discussions with ISPs and backbone partners to host their servers at the ISPs' aggregation points; if they can get such deals, it will work just fine.


They have already stated you will need to be within a certain mileage of their service centers and your latency will need to be in a specific range.

I don't know about you, but on my FiOS connection my latency is less than 40ms to most sites pretty much all day long, and typically it's much lower. OnLive is in discussions with ISPs and backbone partners to host their servers at the ISPs' aggregation points; if they can get such deals, it will work just fine.

Their main problem is distance and the hops between the user and the server. You can throw bandwidth at the problem, and all it's going to affect is the quality of the video (and not by much; another problem is trying to encode the video quickly while retaining high quality). Latency is all in the distance and the hops (I have the same latency on my 24Mbps ADSL2+ connection as I did on my old dial-up connection, because I live in the same house).

Even then, a 40ms latency is 40 times slower than what you're experiencing when you play a game on the local system.
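The stacking effect described above can be made concrete with a toy model; the frame time and encode/decode costs below are illustrative assumptions, not OnLive measurements.

```python
# Toy model of input-to-display latency, local vs. cloud.
# All timings are illustrative assumptions, not measured OnLive figures.

def local_latency_ms(frame_ms: float = 16.7) -> float:
    """Local play: input is reflected roughly one rendered frame later (60fps)."""
    return frame_ms

def cloud_latency_ms(network_rtt_ms: float,
                     frame_ms: float = 16.7,
                     encode_ms: float = 5.0,
                     decode_ms: float = 5.0) -> float:
    """Cloud play adds the network round trip plus video encode/decode to every input."""
    return network_rtt_ms + frame_ms + encode_ms + decode_ms

print(f"local:           {local_latency_ms():.1f} ms")
print(f"cloud @40ms RTT: {cloud_latency_ms(40):.1f} ms")
```

On this model, a 40ms round trip doesn't make play 40 times slower, but it does add the entire network delay on top of the normal frame time, and that sum is the lag the player actually feels.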


  • 1 month later...

So has anyone heard any news about the service yet? Launch date? Anyone in the beta? I'm gonna jump all over this as soon as it's out. I think they have a real winner on their hands here if they can pull through all the hurdles. I really hope these guys do well and stick it to the console market.


So has anyone heard any news about the service yet? Launch date? Anyone in the beta? I'm gonna jump all over this as soon as it's out. I think they have a real winner on their hands here if they can pull through all the hurdles. I really hope these guys do well and stick it to the console market.

From what I've read, they're planning to release OnLive at the end of 2009 in America, then sometime in 2010 for the rest of the world.


I have signed up for the beta several times and heard nothing yet... the end of 2009 is almost here... people keep talking about latency, but with today's technology they might have developed a way to overcome this... heck, I play on servers sometimes with a ping of close to 200 and can't notice much of an effect... they seem to have their heads on straight about it, though... what I'm wondering is whether having a good upload speed will help improve things.


The latency is due in part to the speed of light and certain physical limits; you just can't overcome those.

And even on a game server with 200ms latency, your computer is still doing the rendering and button detection; doing the same on OnLive would result in a 400ms (almost half a second) delay between pushing a button and seeing the result.

Edit: The only way to reduce the latency is to move closer to the server, but even being directly connected to it over a gigabit LAN will be slower than playing the game on your own computer.
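The distinction being drawn here is client-side prediction: a normal game client renders the result of your input immediately and reconciles with the server later, so a 200ms ping is survivable; a streamed game renders nothing locally, so every input waits for the network. A minimal sketch, with illustrative timings:

```python
# Why client-side prediction hides ping locally but can't on a thin client.
# All timings are illustrative assumptions.

def perceived_input_lag_ms(ping_ms: float, streamed: bool,
                           frame_ms: float = 16.7) -> float:
    if streamed:
        # Input travels to the server and the rendered video must come back.
        return ping_ms + frame_ms
    # A local client predicts the result of your input and renders it on the
    # next frame; the server round trip only affects when others see it.
    return frame_ms

print(perceived_input_lag_ms(200, streamed=False))  # 16.7
print(perceived_input_lag_ms(200, streamed=True))   # 216.7
```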


I dunno... I'm not gonna discount this in any way. I think they might have found the key to overcome those hurdles and deliver a good experience, so I'm gonna stay positive and hope for the best. Another question is what kind of computing power these server farms have to run this kind of platform... it must be something god-awful to be able to support millions of simultaneous users. Of course, they'd have to have developed some kind of distributed computing backbone to power it all, otherwise it would take the mother of all supercomputers. I just can't fathom what one of these farms looks like... imagine pushing Crysis to over a million users at once. I think anyone who knows anything about technology has to respect and appreciate that kind of ingenuity and thinking!


I dunno... I'm not gonna discount this in any way. I think they might have found the key to overcome those hurdles and deliver a good experience.

Unless they have found a way to manipulate time, it will be physically, scientifically impossible for them to overcome the ping.

Also, this ingenuity and thinking can only lead to consumers with less and providers with more... imagine a world where all your games are on OnLive, and suddenly there's a game console which actually runs the game on itself; it's gonna be the personal computing revolution all over again, like what happened 30 years ago...

And yeah, I'd love to play a single-player game like Crysis to pass the time when my internet goes down...


I didn't say that competition is bad... competition is good for the consumer. It would be a break from the constant upgrading we all do to our PCs to keep up with the latest games, plus many other benefits.


Latency is all in the distance and the hops (I have the same latency on my 24Mbps ADSL2+ connection as I did on my old dial-up connection, because I live in the same house).

If you have the same latency over dial-up as you do on ADSL2+, then you have issues, plain and simple. The number of hops only matters if the equipment at those hops is saturated. That is why you always look at your average latency and not your min/max.

Also, distance is a factor, but once again, latency does not correlate directly with distance; it depends on who your ISP uses to aggregate their data, and if you never leave the ISP's own network, your latency will be incredibly low.

My latency to my work PC, which is 5 miles from my house, is 43ms using Comcast, going through 17 hops.

Using FiOS, my latency to the same computer is only 18ms, going through 22 hops.

So while there are 5 more hops on my FiOS connection, my latency is still superior, because Verizon happens to use the same peering provider as my company does.

Edit: I just noticed you live in Australia; that right there is an issue, since, as you may or may not be aware, all traffic entering or exiting the continent runs through the same handful of international peering points and undersea cables. This means your latency to sites outside the country is going to hit the same traffic jam every time, regardless of connection, while your connections to sites on your own continent will actually vary. That's also why people in Japan and other countries who boast "I pay 20 bucks for a gigabit connection!" won't ever see gigabit speeds outside their continent. That will soon change thanks to the Unity project, which is specifically meant to increase bandwidth between the US, Japan, and the rest of Asia. Such long fiber runs are where you start to run into the limitations of light transmission, but even that is being overcome now thanks to the invention of "time telescopes."


I'm not saying this will be some huge success, but some of you people need to stop acting like you have everything figured out. Stop thinking that you somehow know more about the situation after reading a few articles when these people have been working on it for years.


If you have the same latency over dial-up as you do on ADSL2+, then you have issues, plain and simple. The number of hops only matters if the equipment at those hops is saturated. That is why you always look at your average latency and not your min/max.

Also, distance is a factor, but once again, latency does not correlate directly with distance; it depends on who your ISP uses to aggregate their data, and if you never leave the ISP's own network, your latency will be incredibly low.

My latency to my work PC, which is 5 miles from my house, is 43ms using Comcast, going through 17 hops.

Using FiOS, my latency to the same computer is only 18ms, going through 22 hops.

So while there are 5 more hops on my FiOS connection, my latency is still superior, because Verizon happens to use the same peering provider as my company does.

Edit: I just noticed you live in Australia; that right there is an issue, since, as you may or may not be aware, all traffic entering or exiting the continent runs through the same handful of international peering points and undersea cables. This means your latency to sites outside the country is going to hit the same traffic jam every time, regardless of connection, while your connections to sites on your own continent will actually vary. That's also why people in Japan and other countries who boast "I pay 20 bucks for a gigabit connection!" won't ever see gigabit speeds outside their continent. That will soon change thanks to the Unity project, which is specifically meant to increase bandwidth between the US, Japan, and the rest of Asia. Such long fiber runs are where you start to run into the limitations of light transmission, but even that is being overcome now thanks to the invention of "time telescopes."

Given the same number of hops (not just router hops but all hops, including repeaters, which won't show up on any traceroute), the same length and type of cable, and the same equipment, latency will be the same on ADSL2+ as on dial-up. Why? Because the electromagnetic wave still has to be transmitted through the same length of cable. This is a physical constraint, not a protocol constraint.

Fundamentally, information cannot travel faster than c (the speed of light). Say, for instance, that the server was physically located on the opposite side of the planet. Even if the signal travelled at c through a single tube containing a perfect vacuum, laid in a 'straight' line around the Earth from you to the server, the wave would still take about 67 milliseconds from source to destination. Then factor in that cables aren't laid in straight lines, that there are repeaters and other equipment in the way that delay the signal, and that the wave is being transmitted through a medium other than vacuum, and the latency goes up even more.

Time telescopes can't reduce latency; in fact, the compression and decompression of the data adds latency on the order of 1ms per hop (and remember, these aren't IP hops, they're hops at the physical layer). So for a 40-hop transfer using time telescopes all the way, that's an additional 40ms traded off for higher throughput, with another 40ms for the response to come back, giving a grand total of 80ms of delay just for compressing the data, without factoring in time between hops, router load, and computation time.

While latency is related to the load on the routers and the routing paths chosen for the packets, don't underestimate the physical limits.
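Both numbers in this post (the roughly 67ms antipode bound and the 80ms of hop overhead) follow directly from the stated assumptions; the sketch below just redoes the arithmetic.

```python
# Re-derive the two figures from the post above.

C_M_PER_S = 299_792_458                    # speed of light in vacuum, m/s

# Half of Earth's circumference: the shortest surface path to the antipode.
EARTH_CIRCUMFERENCE_M = 40_075_000
antipode_m = EARTH_CIRCUMFERENCE_M / 2

one_way_ms = antipode_m / C_M_PER_S * 1000
print(f"antipode at c, one way: {one_way_ms:.0f} ms")   # ~67 ms

# Per-hop compression/decompression cost for a 40-hop path, both directions.
hops, per_hop_ms = 40, 1.0
round_trip_overhead_ms = 2 * hops * per_hop_ms
print(f"40-hop compression overhead, round trip: {round_trip_overhead_ms:.0f} ms")  # 80 ms
```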


I have signed up for the beta several times and heard nothing yet... the end of 2009 is almost here... people keep talking about latency, but with today's technology they might have developed a way to overcome this... heck, I play on servers sometimes with a ping of close to 200 and can't notice much of an effect...

That's because in a typical 3D game the graphics are being done locally on the client. Your screen updates while data is still being processed. With what they're doing, you have to wait for the data to be processed on their server and sent back before your screen updates. Maybe in an ideal situation it could work for some types of games, but for an FPS it's going to have lag.

