ABSTRACT
We present an AI-based tool for interactive material generation within the NVIDIA Omniverse environment. Our approach leverages a state-of-the-art latent diffusion model with notable modifications that adapt it to material generation. Specifically, we replace standard convolution layers with circular-padded convolution layers; the circular padding blends opposite image edges, so the generated textures tile seamlessly. We further extend the model by training additional decoders that generate material properties such as surface normals, roughness, and ambient occlusion. Each decoder consumes the same latent tensor produced by the denoising U-Net and emits a specific material channel. Finally, to enhance real-time performance and user interactivity, we optimize our model with NVIDIA TensorRT, improving inference speed for an efficient and responsive tool.
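The circular-padding adaptation described above can be illustrated with a minimal pure-Python sketch (function names and the toy texture are ours, not the paper's; the actual system applies circular padding inside the convolution layers of a latent diffusion U-Net rather than to raw pixels):

```python
# Sketch of the circular-padding idea behind seamless tiling.

def pad_circular(img, p):
    """Pad a 2D array by p pixels on each side with wrap-around
    (torus topology), so opposite edges become neighbours."""
    h, w = len(img), len(img[0])
    rows = [img[(r - p) % h] for r in range(h + 2 * p)]
    return [[row[(c - p) % w] for c in range(w + 2 * p)] for row in rows]

def conv2d(img, kernel):
    """'Valid' 2D convolution (no extra padding) with a square kernel."""
    k = len(kernel)
    h, w = len(img) - k + 1, len(img[0]) - k + 1
    return [[sum(img[r + i][c + j] * kernel[i][j]
                 for i in range(k) for j in range(k))
             for c in range(w)] for r in range(h)]

# With circular padding, a 'valid' convolution keeps the output the
# same size as the input and treats the texture as wrapping around,
# so filter responses at the left edge see pixels from the right edge.
texture = [[float((r * 4 + c) % 7) for c in range(4)] for r in range(4)]
laplacian = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]
filtered = conv2d(pad_circular(texture, 1), laplacian)
```

Because a circularly padded convolution is equivariant to circular shifts, a texture and any wrapped copy of it produce identically shifted filter responses, which is why the resulting tiles blend without visible seams.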
REFERENCES
- AUTOMATIC1111. 2022. Web UI. https://github.com/AUTOMATIC1111/stable-diffusion-webui.
- Zudi Lin, Prateek Garg, Atmadeep Banerjee, Salma Abdel Magid, Deqing Sun, Yulun Zhang, Luc Van Gool, Donglai Wei, and Hanspeter Pfister. 2022. Revisiting RCAN: Improved Training for Image Super-Resolution. arXiv preprint arXiv:2201.11279 (2022).
- Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2021. High-Resolution Image Synthesis with Latent Diffusion Models. arXiv:2112.10752 [cs.CV].
- Olaf Ronneberger, Philipp Fischer, and Thomas Brox. 2015. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015: 18th International Conference, Munich, Germany, October 5–9, 2015, Proceedings, Part III. Springer, 234–241.
Index Terms
- Interactive AI Material Generation and Editing in NVIDIA Omniverse