The days of artists and developers spending months building virtual worlds for games, AR/VR and other purposes may be coming to an end: At the NeurIPS artificial intelligence conference in Montreal last week, NVIDIA showcased its ability to combine deep learning with dash-cam videos to create virtual cities, a capability that can be valuable in a variety of ways.
One of the most exciting things about this announcement, as Engadget mentioned, is how “NVIDIA has cooked up a way for AI to chew on existing videos and use the objects and scenery found within them to build interactive environments.” Consider image recognition on a single image, which can be complex in itself; now scale that to minutes or hours of video footage, where each second contains 30 frames (images). Simply put, there’s a lot of processing taking place.
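To get a feel for the scale involved, here is a quick back-of-envelope sketch in Python. The 30 frames-per-second figure comes from the article; the clip durations are illustrative assumptions, not details of NVIDIA's demo.

```python
# Back-of-envelope estimate of how many individual frames (images)
# a model must process for a given length of video.
FPS = 30  # frames per second, the rate cited above


def frame_count(hours=0, minutes=0, seconds=0, fps=FPS):
    """Return the number of frames in a clip of the given duration."""
    total_seconds = hours * 3600 + minutes * 60 + seconds
    return total_seconds * fps


print(frame_count(minutes=1))  # one minute of dash-cam video -> 1800 frames
print(frame_count(hours=1))    # one hour -> 108000 frames
```

Even a single hour of footage means over a hundred thousand images, which is why generating scenery from video is so much more demanding than classifying still pictures.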
“Neural networks — specifically generative models — will change how graphics are created,” Bryan Catanzaro, NVIDIA’s vice president of applied deep learning, said in a statement. “This will enable developers, particularly in gaming and automotive, to create scenes at a fraction of the traditional cost.”
Check out the video below from NVIDIA to learn more: