Nvidia rolled out Omniverse in open beta last December – nearly a year before Facebook committed to the concept of a “metaverse” by renaming itself Meta. Omniverse gives 3D designers a shared virtual world in which they can collaborate across different software applications and from different geographic locations.

Since the December launch, more than 70,000 individual creators have downloaded Omniverse. Professionals are also using it at over 700 companies, including BMW Group, CannonDesign, Epigraph, Ericsson, architectural firms HKS and KPF, Lockheed Martin and Sony Pictures Animation. “Virtual worlds are essential for the next era of innovation,” Richard Kerris, VP of Omniverse at Nvidia, told reporters last week.

Nvidia’s version of a virtual world has primarily focused on building “digital twins” – accurate digital replicas of physical entities. Omniverse Replicator is a tool that should ultimately help organizations build better digital twins, and thus better AI-powered tools in the real world. Nvidia is introducing two applications built with Replicator that demonstrate its use cases: Nvidia Drive Sim, a virtual world for hosting the digital twin of vehicles, and Nvidia Isaac Sim, a virtual world for the digital twin of manipulation robots.

Data is a necessary prerequisite for building AI models, but “you never have enough data, and never of high enough quality and diversity to make your system as intelligent as you want,” explained Rev Lebaredian, Nvidia’s VP of simulation technology and Omniverse engineering. “But if you can synthesize your data, you effectively have an unlimited amount … of a quality impossible to extract from the real world.” Autonomous vehicles and robots built using data generated by Replicator can master skills across a range of virtual environments before applying them in the physical world.

While its first two use cases are in robotics and automotive, “this general problem of creating data for AI is one that everyone has,” Lebaredian said. Omniverse Replicator will be available to developers next year to build domain-specific data-generation engines.

To further demonstrate the value of digital twins, Nvidia showcased two customer stories. First, Ericsson is using Omniverse to build digital twins for 5G networks. The telecom equipment maker is building city-scale digital twins to accurately simulate the interplay between 5G cells and the surrounding environment, which should help optimize 5G performance and coverage.

Next, Nvidia is working with Lockheed Martin, as well as the US Department of Agriculture Forest Service and the Colorado Division of Fire Prevention & Control, to run simulations of wildfires in Omniverse. The team will use variables like wind direction, topography and whatever other information is available to create a digital twin of a wildfire and predict how it will play out.
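To make that concrete, here is a toy fire-spread model in the same spirit: a minimal cellular-automaton sketch in Python, not Nvidia’s or Lockheed Martin’s actual system. The grid, ignition probabilities and wind handling are all invented for illustration.

```python
import random

UNBURNED, BURNING, BURNED = 0, 1, 2

def step(grid, wind=(0, 1), base_prob=0.3, wind_bonus=0.4):
    """Advance the fire one tick. `wind` is the (row, col) direction the
    wind blows toward; the downwind neighbor ignites more easily."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != BURNING:
                continue
            new[r][c] = BURNED  # a burning cell burns out after one tick
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nr, nc = r + dr, c + dc
                    if (dr, dc) == (0, 0) or not (0 <= nr < rows and 0 <= nc < cols):
                        continue
                    if grid[nr][nc] == UNBURNED:
                        p = base_prob + (wind_bonus if (dr, dc) == wind else 0.0)
                        if random.random() < p:
                            new[nr][nc] = BURNING
    return new

# Ignite the center of a 20x20 grid and run 15 ticks with an easterly wind.
grid = [[UNBURNED] * 20 for _ in range(20)]
grid[10][10] = BURNING
for _ in range(15):
    grid = step(grid)
```

Even this toy version shows the burn skewing downwind – exactly the kind of behavior a real digital twin must capture at far higher fidelity, with real terrain and weather data.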
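Both customer stories feed the same loop Lebaredian described: simulate, then learn from the simulation. As a rough sketch of what Replicator-style domain randomization means in principle, consider the following Python fragment; every function and field name here is hypothetical, since Replicator’s actual API was not yet public at the time of the announcement.

```python
import random

def randomize_scene():
    """Sample a scene configuration: lighting, camera pose, object layout.
    All parameters are invented for illustration."""
    return {
        "sun_angle_deg": random.uniform(0, 90),
        "camera_height_m": random.uniform(1.0, 3.0),
        "num_pedestrians": random.randint(0, 12),
        "weather": random.choice(["clear", "rain", "fog"]),
    }

def render(scene):
    """Stand-in for a simulator render call. A real renderer would return
    an image plus perfect ground-truth labels (boxes, depth, segmentation)
    'for free', because the simulator knows where everything is."""
    image = [[0] * 64 for _ in range(64)]  # placeholder pixels
    labels = {"pedestrian_count": scene["num_pedestrians"]}
    return image, labels

def generate_dataset(n_frames):
    """Produce n_frames of labeled synthetic training data."""
    return [render(randomize_scene()) for _ in range(n_frames)]

dataset = generate_dataset(1000)
```

The randomization is the point: because the simulator controls the scene, every frame arrives perfectly labeled, at whatever diversity the sampler can produce.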
While building digital twins has clear enterprise value, Nvidia is taking Omniverse beyond replications of the real world with the new Omniverse Avatar platform. Avatar is an end-to-end platform for creating embodied AIs that humans can interact with. It connects Nvidia’s technologies in speech AI, computer vision, natural language understanding, recommendation engines and simulation. Avatars created on the platform are interactive characters with ray-traced 3D graphics; they can see, speak on a wide range of subjects and understand naturally spoken intent.

There is a great deal of justified skepticism about avatars, given that efforts like Second Life failed to catch on. But Nvidia’s Lebaredian argued that “today there are many examples of people who use avatars on a daily basis,” pointing to video games like Fortnite and Roblox.

“You can go to Twitch and see gamers streaming games where they have a virtual avatar in a game engine representing themselves,” he said. “This is very natural for this generation that has grown up with video games and virtual worlds being just like air.”

So far, Nvidia has launched a few initiatives targeting Avatar at specific use cases: Project Tokkio leverages Avatar to build customer support agents, Nvidia Drive Concierge focuses on intelligent services in vehicles, and Project Maxine will help customers build avatars – which may or may not look like their real selves – for video conferencing. The company said it has seen notable interest in Project Tokkio from the retail sector.

The many Nvidia technologies behind Avatar include Riva, a new software development kit for advanced speech AI. Riva recognizes speech across multiple languages and can generate human-like responses using text-to-speech. The platform’s natural language understanding is based on Megatron 530B, a large language model that can recognize, understand and generate human language.

Nvidia has also incorporated Metropolis for computer vision and perception, so avatars can see and understand the humans they’re interacting with, and the Merlin framework for recommendations. Avatar animation is powered by Nvidia Video2Face and Audio2Face, 2D and 3D facial animation and rendering technologies. Avatar applications are processed in real time using the Nvidia Unified Compute Framework, and customers can use Fleet Command to deploy avatars in the field.

Beyond Replicator and Avatar, Nvidia announced a range of other updates to Omniverse, including new AR, VR and multi-GPU rendering features, as well as new integrations for infrastructure and industrial digital-twin applications with software from Bentley Systems and Esri.
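For a sense of how the Avatar pieces described above might chain together, here is a hypothetical sketch of one conversational turn in Python. Every class and function below is a stand-in: the real platform wires Riva, Megatron 530B, Merlin and Audio2Face together as services, not local function calls.

```python
from dataclasses import dataclass

@dataclass
class AvatarTurn:
    transcript: str     # what the user said (speech recognition)
    intent: str         # what they meant (language understanding)
    reply_text: str     # what the avatar says back
    viseme_track: list  # mouth-shape keyframes driving facial animation

# Trivial stand-ins so the sketch runs end to end. In the real platform
# these roles map roughly to Riva (ASR/TTS), a large language model for
# understanding, a recommender for the answer, and Audio2Face for animation.
def speech_to_text(audio: bytes) -> str:
    return "where is the cereal aisle"

def understand(text: str) -> str:
    return "find_product"

def respond(intent: str) -> str:
    return "Aisle 4, right next to the bakery."

def animate_from_text(text: str) -> list:
    return [("AA", 0.10), ("IY", 0.25)]  # fake (viseme, timestamp) pairs

def handle_turn(audio_in: bytes) -> AvatarTurn:
    """One conversational turn: hear, understand, reply, animate."""
    transcript = speech_to_text(audio_in)
    intent = understand(transcript)
    reply = respond(intent)
    return AvatarTurn(transcript, intent, reply, animate_from_text(reply))

print(handle_turn(b"\x00\x01"))  # stand-in audio bytes
```

The retail example mirrors Project Tokkio’s customer-support pitch: the interesting engineering is less in any single stage than in running all of them fast enough for a face-to-face conversation.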