How does the metaverse "feed" artificial intelligence models?


The metaverse has many definitions today. It can be a parallel universe containing digital versions of human beings and the world, a three-dimensional network that replaces today's two-dimensional web, or a graphical interface for predictive analysis and collaborative product design.

This virtual world consists of many moving parts spanning a variety of data types, interfaces, and AI models. Its 3D interfaces carry data types with temporal and spatial attributes, which are essential for capturing and analyzing past trends and predicting future ones.

This kind of visual simulation technology has already been applied in major projects, such as DeepMind's AlphaFold research project, which has predicted the 3D structures of more than 200 million known proteins. Protein folding underpins drug discovery, and AlphaFold has been used in medical research on COVID-19 treatments. In high-performance computing, such simulation gives researchers the means to collaborate in virtual environments.

As one of the metaverse's biggest backers, Nvidia promotes the concept through a product called Omniverse, which bundles a set of artificial intelligence, software, and vision technologies for research and scientific modeling.

Nvidia has been vague about Omniverse's feature set, but it recently revealed some details. The platform uses a complex set of technologies to collect, organize, translate, and correlate data, which is eventually assembled into a dataset. AI models analyze these datasets and then produce visual models for scientific applications, which may include tasks such as understanding planetary movement trends or developing drugs.
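To make those stages concrete, here is a purely conceptual Python sketch of a collect / organize / translate / correlate flow. Every name in it is hypothetical; Omniverse's actual internals are not public.

```python
# Conceptual sketch only: the stages described above (collect, organize,
# translate, correlate) expressed as plain functions. All names here are
# hypothetical illustrations, not Omniverse APIs.
from dataclasses import dataclass

@dataclass
class Record:
    source: str        # e.g. "satellite", "ocean-buoy", "sensor"
    timestamp: float   # seconds since epoch
    payload: dict      # raw measurement values

def collect(sources):
    """Gather raw records from heterogeneous sources."""
    return [rec for src in sources for rec in src]

def organize(records):
    """Order records by time so trends can be analyzed."""
    return sorted(records, key=lambda r: r.timestamp)

def translate(records):
    """Normalize each payload into a common schema (here: flat dicts)."""
    return [{"source": r.source, "t": r.timestamp, **r.payload} for r in records]

def correlate(rows):
    """Group rows by timestamp so co-occurring measurements align."""
    dataset = {}
    for row in rows:
        dataset.setdefault(row["t"], []).append(row)
    return dataset  # the dataset an AI model would then analyze
```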

The platform's latest collaboration: NOAA will use Omniverse and Lockheed Martin technology to visualize climate and weather trend data, then provide that data to researchers for forecasting and other studies.

The data collected by OR3D, a platform developed by Lockheed Martin, is essential for visualizing weather and climate; it includes data from satellites, oceans, past atmospheric trends, and sensors. This data lives in OR3D-specific file formats, so "connectors" will be built to convert it into files conforming to the Universal Scene Description (USD) format.
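A connector in this sense is essentially a format translator. The sketch below shows the shape such a converter might take in Python, using Pixar's open-source pxr USD bindings (which are real); the OR3D format is proprietary, so `parse_or3d` here is a hypothetical stand-in.

```python
# Hedged sketch of a "connector": read records from a source format and
# re-express them as USD prims. parse_or3d is hypothetical; the pxr calls
# are real, open-source USD APIs.
from pxr import Usd, UsdGeom, Gf

def parse_or3d(path):
    # Hypothetical stand-in: yield (name, position) tuples as if read
    # from an OR3D file.
    yield "buoy_01", (12.0, -4.5, 0.0)
    yield "buoy_02", (13.1, -4.2, 0.0)

def convert_to_usd(or3d_path, usd_path):
    stage = Usd.Stage.CreateNew(usd_path)
    for name, pos in parse_or3d(or3d_path):
        # One transform prim per source record.
        xform = UsdGeom.Xform.Define(stage, f"/World/{name}")
        xform.AddTranslateOp().Set(Gf.Vec3d(*pos))
    stage.GetRootLayer().Save()

convert_to_usd("observations.or3d", "observations.usda")
```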

The USD format has operators that combine data such as position, orientation, color, material, and layers into a single 3D file. Converting to USD matters: it lets visualization files be shared and multiple users collaborate, an important consideration in the virtual world. USD also acts as a converter of sorts, decomposing the different data types in an OR3D file into raw inputs for AI models.
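As a minimal illustration of that "everything in one file" property, the following sketch uses the open-source pxr bindings to author position, orientation, and color onto a single prim in a shareable .usda file. The scene path and values are invented for the example.

```python
# Minimal sketch using Pixar's open-source USD Python bindings (pxr):
# position, orientation, and color authored together in one file.
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateNew("scene.usda")
cube = UsdGeom.Cube.Define(stage, "/World/Sensor")

# Positioning and orientation live on the prim as xform ops.
cube.AddTranslateOp().Set(Gf.Vec3d(10.0, 0.0, 2.5))
cube.AddRotateYOp().Set(45.0)

# Color (and, via UsdShade, full materials) attaches to the same prim.
cube.CreateDisplayColorAttr([Gf.Vec3f(0.1, 0.4, 0.9)])

stage.GetRootLayer().Save()
```

Because everything lands in one USD layer, collaborators can open, reference, or override this file from their own stages, which is what makes the format suited to multi-user work.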

These data types can include the time and space elements of 3D images, which is particularly important for visualizing climate and weather data. Past weather trends, for example, need to be captured at second or minute granularity, and maps need to be drawn against those time dependencies.
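USD expresses this time dependency through time-sampled attributes. The sketch below records one illustrative value per second for a minute; the attribute name "temperature" is an assumption for the example, not part of any NOAA schema.

```python
# Sketch of USD's time dimension: an attribute can hold one sample per
# timecode, which is how second-by-second observations could be stored.
from pxr import Usd, Sdf

stage = Usd.Stage.CreateNew("weather.usda")
stage.SetTimeCodesPerSecond(1.0)   # one timecode == one second
stage.SetStartTimeCode(0)
stage.SetEndTimeCode(59)

prim = stage.DefinePrim("/World/Station01")
temp = prim.CreateAttribute("temperature", Sdf.ValueTypeNames.Float)

# One sample per second for a minute of (synthetic) observations.
for t in range(60):
    temp.Set(15.0 + 0.05 * t, t)   # (value, timecode)

stage.GetRootLayer().Save()
```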

One of Nvidia's tools, Nucleus, is the main engine of Omniverse: in this workflow it converts OR3D files into USD and handles runtime behavior, physical simulation, and data mapping from other file formats.

The AI dataset can include weather data updated in real time, which is then fed into the AI model. Nvidia's multi-step process for importing raw image data into USD is complex but scalable: it supports multiple data types and is considered more practical than API connectors, which are application-specific and cannot be extended across different data types in a single complex model.
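To close the loop, this hedged sketch reads the time-sampled attribute from the earlier example back out into a plain NumPy array, the kind of model-ready input the article describes. The real Omniverse ingestion path is not public; this only shows what extraction from USD can look like.

```python
# Read time-sampled USD data back into an array an AI model could consume.
# Reuses the illustrative "temperature" attribute authored above.
import numpy as np
from pxr import Usd

stage = Usd.Stage.Open("weather.usda")
attr = stage.GetPrimAtPath("/World/Station01").GetAttribute("temperature")

times = attr.GetTimeSamples()                    # all authored timecodes
values = np.array([attr.Get(t) for t in times])  # model-ready 1-D series

print(values.shape)  # e.g. (60,) -> input for the model of your choice
```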

The advantage of the USD format is that it can handle the different types of data collected from satellites and sensors in real time, which helps build more accurate AI models. It can also be shared, which lets its data extend to other applications.