NVIDIA's new AI turns videos of the real world into virtual landscapes


Attendees of this year's NeurIPS AI conference in Montreal can spend a few moments driving through a virtual city, courtesy of NVIDIA. While that normally wouldn't be much to get worked up over, the simulation is fascinating because of what made it possible. With the help of some clever machine learning techniques and a handy supercomputer, NVIDIA has cooked up a way for AI to chew on existing videos and use the objects and scenery found within them to build interactive environments.

NVIDIA's research here isn't just a significant technical achievement; it also stands to make it easier for artists and developers to craft lifelike virtual worlds. Instead of having to meticulously design objects and people to fill a space polygon by polygon, they can use existing machine learning tools to roughly define those entities and let NVIDIA's neural network fill in the rest.
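
To make that concrete, here is a toy sketch of the workflow in PyTorch. It is modeled on the general idea behind conditional image synthesis: an artist supplies a coarse semantic label map (one channel per object class, such as road, car, or building), and a trained generator network renders a realistic frame from it. The class count, layer sizes, and the LabelMapGenerator name are illustrative assumptions, not NVIDIA's actual architecture.

```python
# A toy sketch of "roughly define the scene, let the network fill in the rest."
# All names and layer sizes here are illustrative assumptions, not NVIDIA's
# actual architecture.
import torch
import torch.nn as nn

NUM_CLASSES = 20  # assumed number of object classes (road, car, sky, ...)

class LabelMapGenerator(nn.Module):
    """Renders an RGB frame from a one-hot semantic label map."""

    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            # Downsample the layout to a compact representation...
            nn.Conv2d(num_classes, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            # ...then decode it back up to a full-resolution image.
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),  # RGB values in [-1, 1]
        )

    def forward(self, label_map: torch.Tensor) -> torch.Tensor:
        return self.net(label_map)

# An artist "roughly defines" a street scene as a label map; the network
# fills in the pixels.
layout = torch.zeros(1, NUM_CLASSES, 256, 256)
layout[:, 0, 128:, :] = 1.0  # bottom half of the frame: class 0 ("road")
layout[:, 1, :128, :] = 1.0  # top half of the frame: class 1 ("sky")
frame = LabelMapGenerator()(layout)
print(frame.shape)  # torch.Size([1, 3, 256, 256])
```

An untrained network like this one produces noise, of course; the rendering quality comes entirely from training, which the next section describes.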

"Neural networks — specifically generative models — will change how graphics are created," Bryan Catanzano, NVIDIA's vice president of applied deep learning, said in a statement. "This will enable developers, particularly in gaming and automotive, to create scenes at a fraction of the traditional cost."

Here's how it works. Catanzaro told reporters that researchers trained the fledgling neural network on dashcam videos taken from self-driving car trials in cities; training took about a week on one of the company's DGX-1 supercomputers. (NVIDIA CEO Jensen Huang once called the DGX-1 the equivalent of "250 servers in a box," so pulling off a similar feat at home seems all but impossible.)
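
For a sense of what that training involves, here is a minimal sketch of one adversarial training step, reusing the LabelMapGenerator toy from above. It assumes the dashcam footage has been preprocessed into paired examples of a label map and a real frame, and it follows a generic conditional-GAN recipe: a discriminator learns to tell real footage from rendered frames, and the generator learns to fool it. NVIDIA's actual setup, which reportedly ran for about a week on a DGX-1, is far more elaborate; this only illustrates the principle.

```python
# A generic conditional-GAN training step, not NVIDIA's exact recipe.
# Assumes LabelMapGenerator and NUM_CLASSES from the previous sketch.
import torch
import torch.nn as nn
import torch.nn.functional as F

gen = LabelMapGenerator()
disc = nn.Sequential(  # toy discriminator: judges (layout, frame) pairs
    nn.Conv2d(NUM_CLASSES + 3, 64, kernel_size=4, stride=2, padding=1),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(64, 1, kernel_size=4, stride=2, padding=1),  # grid of logits
)
g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)

def train_step(layout: torch.Tensor, real_frame: torch.Tensor) -> None:
    # Discriminator: score real pairs as 1, generated pairs as 0.
    fake_frame = gen(layout).detach()  # detach: don't update gen here
    d_real = disc(torch.cat([layout, real_frame], dim=1))
    d_fake = disc(torch.cat([layout, fake_frame], dim=1))
    d_loss = (
        F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
        + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    )
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: render frames the discriminator scores as real.
    g_logits = disc(torch.cat([layout, gen(layout)], dim=1))
    g_loss = F.binary_cross_entropy_with_logits(
        g_logits, torch.ones_like(g_logits)
    )
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# One step on a dummy batch (in practice: hours of real dashcam frames).
train_step(torch.zeros(1, NUM_CLASSES, 256, 256), torch.zeros(1, 3, 256, 256))
```

A frame-by-frame recipe like this one says nothing about keeping successive frames consistent over time, which coherent video demands; handling that is a large part of what makes the research notable.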

Full article @ Engadget
