GET3D (Nvidia)

Generates high-quality 3D textured shapes directly from images using PyTorch and Nvdiffrast.

About GET3D (Nvidia)

Nvidia's GET3D is a highly performant AI tool developed for content creators to generate high-quality, explicit textured 3D meshes with arbitrary topology and significant geometric details. The 3D generative model synthesizes complex 3D virtual worlds for use in downstream applications in diverse industries such as architecture, gaming, and film. GET3D aims to provide a versatile tool that can efficiently generate diverse shapes with high-quality geometry and texture, eliminating the guesswork from 3D generation.

TLDR

GET3D is an AI tool developed by Nvidia for 3D content creation, generating performant 3D meshes with arbitrary topology and significant geometric details. The tool synthesizes complex 3D virtual worlds for use in downstream applications in industries like gaming and architecture. Unlike previous models, GET3D is end-to-end trainable and generates diverse shapes with high-quality geometry and texture, including buildings, human characters, animals, chairs, and motorbikes. The tool achieves good disentanglement between geometry and texture, enabling smooth transitions between various 3D shapes, and users can finetune the 3D generator with text prompts to yield a range of meaningful shapes. GET3D builds on previous works in 3D generative modeling, including Learning Deformable Tetrahedral Meshes for 3D Reconstruction and Extracting Triangular 3D Models, Materials, and Lighting From Images, to mention just a few. The tool is available for download on GitHub; system requirements include Ubuntu 20.04 LTS, Python 3.7 or higher, an NVIDIA GPU (Volta, Turing, or newer architecture), and CUDA toolkit 11.2 or higher. Interested users can inquire through the company's website for licensing.

Company Overview

GET3D is an AI tool developed by Nvidia for 3D content creation. With the increasing demand for creating massive 3D virtual worlds in various industries, GET3D aims to provide a performant 3D generative model that can synthesize complex, explicit, and textured 3D meshes that can be directly consumed by 3D rendering engines, making them immediately usable in downstream applications.

GET3D generates a 3D SDF and a texture field via two latent codes, utilizing DMTet to extract a 3D surface mesh from the SDF and query the texture field at surface points to get colors. The tool is trained with adversarial losses defined on 2D images, using a rasterization-based differentiable renderer to obtain RGB images and silhouettes. Two 2D discriminators are utilized, one on RGB image and the other on a silhouette, to classify whether the inputs are real or fake. The whole model is end-to-end trainable, generating diverse shapes with arbitrary topology, high-quality geometry, and texture.
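The pipeline described above can be sketched in a few lines. This is a toy illustration only: the sphere SDF, the texture function, and the grid-based surface extraction below are illustrative stand-ins, not GET3D's actual networks or its DMTet-based surface extraction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent latent codes, as in GET3D: one for geometry, one for texture.
z_geo = rng.normal(size=8)
z_tex = rng.normal(size=8)

def sdf(points, z):
    """Toy signed distance field: a sphere whose radius depends on the geometry code."""
    radius = 1.0 + 0.1 * np.tanh(z[0])
    return np.linalg.norm(points, axis=-1) - radius

def texture_field(points, z):
    """Toy texture field: maps 3D points to RGB colors in [0, 1]."""
    raw = np.sin(points @ np.ones(3) + z[:3].sum())
    rgb = np.stack([raw, raw ** 2, np.abs(raw)], axis=-1)
    return (rgb - rgb.min()) / (np.ptp(rgb) + 1e-8)

# Sample random points and keep those near the zero level set of the SDF
# (a crude stand-in for DMTet's differentiable surface extraction).
grid = rng.uniform(-1.5, 1.5, size=(10000, 3))
on_surface = np.abs(sdf(grid, z_geo)) < 0.05
surface_points = grid[on_surface]

# Query the texture field only at surface points to get their colors.
colors = texture_field(surface_points, z_tex)
print(surface_points.shape, colors.shape)
```

In the real model, the surface mesh (not a point cloud) is rasterized by a differentiable renderer into RGB images and silhouettes for the adversarial losses.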

Prior works on 3D generative modeling lacked geometric details, were limited in the mesh topology they could produce, typically did not support textures, or used neural renderers in the synthesis process, which made their use in common 3D software non-trivial. GET3D bridges recent successes in differentiable surface modeling, differentiable rendering, and 2D Generative Adversarial Networks to train its model from 2D image collections. The AI tool is able to generate high-quality 3D textured meshes, ranging from chairs, animals, motorbikes, and human characters to buildings, achieving significant improvements over previous methods.

The tool also offers good disentanglement between geometry and texture, providing meaningful interpolation and a smooth transition between different shapes for all categories. Users can even provide text prompts to finetune the 3D generator by computing a directional CLIP loss on the rendered 2D images and the provided texts.

GET3D builds upon several previous works, such as Learning Deformable Tetrahedral Meshes for 3D Reconstruction and Extracting Triangular 3D Models, Materials, and Lighting From Images, among others. Business inquiries can be made through their website, where a form can be submitted for Nvidia Research Licensing.

Features

Generative Model

Explicit Textured 3D Meshes

GET3D is a generative model that produces explicit textured 3D meshes with rich geometric details and high-quality textures. Unlike more limited previous works on 3D generative modeling, GET3D supports textures and can produce shapes with arbitrary topology, including buildings, cars, human characters, chairs, animals, and motorbikes. The meshes are of high quality and can be directly consumed by 3D rendering engines without any modification, which makes GET3D an excellent tool for content creation.

End-to-End Trainable

GET3D is end-to-end trainable. It generates a 3D SDF and a texture field from two latent codes and queries the texture field at surface points to obtain colors. The model is trained with adversarial losses defined on 2D images, using a rasterization-based differentiable renderer to obtain RGB images and silhouettes. Two discriminators, one on the RGB image and one on the silhouette, classify whether the inputs are real or fake.
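The two-branch adversarial objective can be sketched with random placeholder logits. This assumes the non-saturating logistic GAN loss (the softplus form used in StyleGAN-style training) as a stand-in; the discriminator scores below are synthetic, not outputs of real networks.

```python
import numpy as np

def d_logistic_loss(real_logits, fake_logits):
    """Non-saturating logistic discriminator loss: softplus(-real) + softplus(fake)."""
    return np.mean(np.logaddexp(0, -real_logits)) + np.mean(np.logaddexp(0, fake_logits))

def g_logistic_loss(fake_logits):
    """Non-saturating logistic generator loss: softplus(-fake)."""
    return np.mean(np.logaddexp(0, -fake_logits))

rng = np.random.default_rng(1)
# Hypothetical discriminator scores for a batch of renders: one discriminator
# sees RGB images, the other sees silhouette masks.
rgb_real, rgb_fake = rng.normal(1, 1, 16), rng.normal(-1, 1, 16)
sil_real, sil_fake = rng.normal(1, 1, 16), rng.normal(-1, 1, 16)

# The total adversarial loss sums the RGB and silhouette branches.
d_loss = d_logistic_loss(rgb_real, rgb_fake) + d_logistic_loss(sil_real, sil_fake)
g_loss = g_logistic_loss(rgb_fake) + g_logistic_loss(sil_fake)
print(round(d_loss, 3), round(g_loss, 3))
```

Because the whole pipeline, including the differentiable renderer, is differentiable, these 2D losses can backpropagate all the way to the 3D generator.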

Improved Performance

GET3D achieves significant improvements over previous 3D generative modeling methods. It bridges recent successes in differentiable surface modeling, differentiable rendering, and 2D Generative Adversarial Networks to yield high-quality 3D textured meshes with rich geometric details, complex topology, and high fidelity textures. GET3D can generate high-quality and diverse 3D textured meshes suited for different industries and use cases.

Disentanglement between Geometry and Texture

Meaningful Interpolation

GET3D achieves disentanglement between geometry and texture, as demonstrated through meaningful interpolation of each. In each row of the interpolation grid, the tool generates shapes from the same geometry latent code with varying texture latent codes. Each column shows the generated shapes from the same texture latent code with varying geometry codes. This disentanglement offers greater control and versatility during content creation.
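Such an interpolation grid can be illustrated with simple linear interpolation between latent codes. This is a toy sketch under assumed 8-dimensional codes, not GET3D's code; spherical interpolation is another common choice for GAN latents.

```python
import numpy as np

def lerp(a, b, t):
    """Linearly interpolate between two latent codes."""
    return (1 - t) * a + t * b

rng = np.random.default_rng(2)
dim = 8
geo_a, geo_b = rng.normal(size=dim), rng.normal(size=dim)
tex_a, tex_b = rng.normal(size=dim), rng.normal(size=dim)

# Build a 5x5 grid of (geometry, texture) latent pairs: each row fixes the
# geometry code and sweeps the texture code; each column does the reverse.
steps = np.linspace(0.0, 1.0, 5)
grid = [[(lerp(geo_a, geo_b, ti), lerp(tex_a, tex_b, tj)) for tj in steps]
        for ti in steps]

# Because the codes are disentangled, any geometry can be paired with any texture.
print(len(grid), len(grid[0]), grid[0][0][0].shape)
```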

Local Perturbation

GET3D is able to generate similar-looking shapes with slight differences. By adding small noise to locally perturb the latent codes, the tool produces 3D shapes that vary subtly in different parts of the shape. This feature offers more fine-grained control over individual elements of the shape.

Material Generation and View-Dependent Lighting Effects

Combined with DIB-R++, GET3D can produce meaningful view-dependent lighting effects and generate materials in an unsupervised manner, making it a powerful and versatile tool for content creation in different industries and use cases.

Text-Guided Shape Generation

Large Amount of Meaningful Shapes

GET3D offers a unique text-guided shape generation feature that allows users to provide text prompts describing the kind of shape they want to generate. GET3D finetunes its 3D generator by computing a directional CLIP loss on the rendered 2D images and the provided texts. The tool can generate a wide range of meaningful shapes from diverse text prompts, further enhancing its versatility.
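The directional CLIP loss rewards image-embedding shifts that align with the text-embedding shift from a source description to a target description. The sketch below uses random vectors as stand-in embeddings; in practice these would come from CLIP's image and text encoders.

```python
import numpy as np

def directional_clip_loss(img_src, img_gen, txt_src, txt_tgt):
    """Directional CLIP loss: 1 - cos(delta_image, delta_text), where the deltas
    are the shifts in image and text embedding space."""
    d_img = img_gen - img_src
    d_txt = txt_tgt - txt_src
    cos = d_img @ d_txt / (np.linalg.norm(d_img) * np.linalg.norm(d_txt) + 1e-8)
    return 1.0 - cos

rng = np.random.default_rng(4)
dim = 16
img_src, txt_src, txt_tgt = (rng.normal(size=dim) for _ in range(3))

# If the rendered image's embedding moves exactly along the text direction,
# the loss is near zero; misaligned shifts are penalized.
img_gen_aligned = img_src + (txt_tgt - txt_src)
print(directional_clip_loss(img_src, img_gen_aligned, txt_src, txt_tgt))
```

Minimizing this loss over rendered views nudges the generator toward shapes matching the target text while preserving everything orthogonal to that direction.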

Related Works

Building on Top of Previous Works

GET3D builds on several previous works, including NeurIPS 2020's Learning Deformable Tetrahedral Meshes for 3D Reconstruction, NeurIPS 2021's Deep Marching Tetrahedra: A Hybrid Representation for High-Resolution 3D Shape Synthesis, CVPR 2022's Extracting Triangular 3D Models, Materials, and Lighting from Images and EG3D: Efficient Geometry-aware 3D Generative Adversarial Networks, as well as NeurIPS 2021's DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer and SIGGRAPH Asia 2020's Nvdiffrast – Modular Primitives for High-Performance Differentiable Rendering. These works provide the foundation on which GET3D is built and underpin many of the tool's features.

FAQ

What is GET3D and what does it do?

GET3D is an AI tool developed by Nvidia for 3D content creation. The tool aims to provide a performant 3D generative model that can synthesize complex, explicit, and textured 3D meshes that can be directly consumed by 3D rendering engines, making them immediately usable in downstream applications. GET3D is able to generate high-quality 3D textured meshes, ranging from chairs, animals, motorbikes, and human characters to buildings, achieving significant improvements over previous methods.

How does GET3D work?

GET3D generates a 3D SDF and a texture field via two latent codes, utilizing DMTet to extract a 3D surface mesh from the SDF and query the texture field at surface points to get colors. The tool is trained with adversarial losses defined on 2D images, using a rasterization-based differentiable renderer to obtain RGB images and silhouettes. Two 2D discriminators are utilized, one on RGB image and the other on a silhouette, to classify whether the inputs are real or fake. The whole model is end-to-end trainable, generating diverse shapes with arbitrary topology, high-quality geometry, and texture.

What makes GET3D different from other 3D generative modeling tools?

Prior works on 3D generative modeling lacked geometric details, were limited in the mesh topology they could produce, typically did not support textures, or used neural renderers in the synthesis process, which made their use in common 3D software non-trivial. GET3D bridges recent successes in differentiable surface modeling, differentiable rendering, and 2D Generative Adversarial Networks to train its model from 2D image collections. The AI tool also offers good disentanglement between geometry and texture, providing meaningful interpolation and a smooth transition between different shapes for all categories.

What industries can benefit from using GET3D?

Industries like gaming, film, and architecture can benefit from the use of GET3D in their 3D content creation. The tool can assist in the rapid generation of 3D animals, complex shapes, human characters, unique motorbikes, and even buildings, eliminating the guesswork from 3D creation.

How can users try GET3D, and what are the system requirements?

GET3D is available on GitHub for download and use by developers. The system requirements for the tool include Ubuntu 20.04 LTS, Python 3.7 or higher, NVIDIA GPU (Volta, Turing or newer architecture), and CUDA toolkit 11.2 or higher. For business inquiries, users can visit the GET3D website and submit a form for Nvidia Research Licensing.

GET3D (Nvidia) Alternatives

Company Results

3DFY

3DFY provides cutting-edge 3D scanning, modeling, and visualization services powered by advanced artificial intelligence infrastructure.

CSM

CSM is a Text-to-Code AI tool for 3D world generation, games, and programming, accelerating the process with an intelligent copilot.

A new neural network architecture from Google that creates in-between images for temporal up-sampling and slow-motion effects in videos.

NVIDIA Canvas

NVIDIA Canvas is an easy to use, powerful, and versatile tool that turns simple brushstrokes into vibrant images using NVIDIA GeForce® and RTX™ GPUs.