Nvidia Uses AI to Render Virtual Worlds in Real Time

Nvidia announced that AI models can now render new worlds without using traditional modeling techniques or graphics rendering engines. The new technology uses a deep neural network that analyzes existing videos and then applies the visual elements it learns to new 3D environments.

Nvidia claims the technology could be a revolutionary step forward in creating 3D worlds: the models are trained on video to automatically render buildings, trees, vehicles, and other objects into new scenes, rather than requiring the usual painstaking process of modeling each element by hand.
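
Under the hood, the approach pairs a conventional engine’s coarse scene layout with a generator network trained on real footage. The sketch below is a minimal PyTorch illustration of that idea only; the Generator class, its layer sizes, and the 20-class layout are stand-in assumptions, not Nvidia’s actual model or code.

```python
# Minimal sketch: a conventional engine supplies only a coarse semantic layout
# (which pixel is road, car, tree, building), and a generator network trained
# on real video paints the photorealistic frame. Architecture is a stand-in.
import torch
import torch.nn as nn

NUM_CLASSES = 20  # assumed number of semantic classes (road, car, tree, ...)

class Generator(nn.Module):
    """Toy image-to-image generator: one-hot semantic map in, RGB frame out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(NUM_CLASSES, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),  # RGB values in [-1, 1]
        )

    def forward(self, semantic_map):
        return self.net(semantic_map)

# At "render" time the engine emits a layout per frame; the network fills in
# the appearance it learned from real video.
generator = Generator().eval()
layout = torch.zeros(1, NUM_CLASSES, 256, 512)  # one-hot layout from the engine
layout[:, 0] = 1.0                              # e.g. everything labelled "road"
with torch.no_grad():
    frame = generator(layout)                   # (1, 3, 256, 512) synthesized frame
print(frame.shape)
```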

But the project is still a work in progress. The sample scene Nvidia showed, generated in real time on an Nvidia Titan V graphics card using its Tensor cores, isn’t as crisp as we would expect from real life, nor as clean as a conventionally modeled 3D scene. The result is much more impressive, however, in the real-time output shown in Nvidia’s YouTube video. The key here is speed: the AI generates these scenes in real time.

Nvidia AI Rendering

Nvidia’s researchers have also used the technique to capture motions, such as dance moves, and then apply those same moves to other characters in real-time video. That raises ethical questions, especially given the proliferation of altered videos such as deepfakes, but Nvidia sees itself as an enabler of the technology and argues the issue should be treated as a security problem, one that requires a technological solution to prevent people from rendering things that aren’t real.
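
As a rough illustration of the motion-transfer idea (not Nvidia’s method), the snippet below rasterizes one frame of made-up 2D pose keypoints into a pose map; in a full pipeline, such a map would condition a generator trained on footage of the target character, much like the layout-to-frame sketch above.

```python
# Hypothetical illustration: extract 2D pose keypoints from a source dancer's
# video, rasterize them into a "pose map" image, and feed that map to an
# image-to-image generator trained on the target character.
import numpy as np

def rasterize_pose(keypoints, height=256, width=256):
    """Draw pose keypoints (x, y in pixels) into a single-channel pose map."""
    pose_map = np.zeros((height, width), dtype=np.float32)
    for x, y in keypoints:
        if 0 <= int(y) < height and 0 <= int(x) < width:
            pose_map[int(y), int(x)] = 1.0
    return pose_map

# One frame's worth of (made-up) joint positions for the source dancer.
frame_keypoints = [(128, 40), (128, 90), (100, 140), (156, 140), (110, 220), (146, 220)]
pose_map = rasterize_pose(frame_keypoints)
# The pose map would then condition the generator to draw the target character
# in this pose, frame after frame, producing a real-time "puppeted" video.
print(pose_map.shape, pose_map.sum())
```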

The big question is when this will come to gaming, but Nvidia cautions that this isn’t a shipping product yet. The company did theorize that the technique would be useful for enhancing older games, analyzing existing scenes and applying trained models to improve the graphics, and that it could also be used to create new levels and content for those games, among many other potential uses. In time, the company expects the technology to spread and become another tool in game developers’ toolbox. Nvidia has open-sourced the project, so anyone can download it and begin using it today, though it is currently geared toward AI researchers.

Nvidia says this type of AI analysis and scene generation can run on any type of processor, provided it delivers enough AI throughput to keep up with the real-time feed. The company expects both performance and image quality to improve over time.
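
As a back-of-envelope way to read that throughput requirement, the snippet below checks whether a hypothetical per-frame inference time fits inside the frame budget; every number in it is an illustrative assumption, not an Nvidia benchmark.

```python
# Back-of-envelope check of the "enough AI throughput" requirement: a
# processor only qualifies if one frame's worth of network inference (plus
# capture/display overhead) fits inside the frame budget. Numbers are assumed.
target_fps = 25                              # demo-style real-time target
frame_budget_ms = 1000.0 / target_fps        # 40 ms per frame

measured_inference_ms = 32.0                 # hypothetical per-frame model time
overhead_ms = 4.0                            # hypothetical capture/display overhead

total_ms = measured_inference_ms + overhead_ms
print(f"Frame budget: {frame_budget_ms:.1f} ms, pipeline: {total_ms:.1f} ms")
print("Real-time capable" if total_ms <= frame_budget_ms else "Too slow for real time")
```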

Nvidia sees this technique eventually taking hold in gaming, automotive, robotics, and virtual reality, but it isn’t committing to a timeline for an actual product. The work remains in the lab for now, but the company expects game developers to begin working with the technology in the future. Nvidia is also conducting a real-time demo of AI-generated worlds at the AI research-focused NeurIPS conference this week.

Source: Nvidia Uses AI to Render Virtual Worlds in Real Time
