“Implementing SOTA Generative AI Pipelines in Your 3D Application” by Gold, Chang and Chang

  • 2025 Labs_Gold_Implementing SOTA Generative AI Pipelines in Your 3D Application

Conference:


    SIGGRAPH 2025

Experience Type(s):


    Labs

Labs Type(s):


Title:


    Implementing SOTA Generative AI Pipelines in Your 3D Application

Organizer(s)/Presenter(s):



Description:


    Discover how to integrate state-of-the-art open-source generative AI into your 3D pipeline and go from idea to 3D asset. In this 90-minute session, you’ll build a ComfyUI workflow that transforms concept art into image arrays in any style, culminating in delivery to AI3D endpoints that generate 3D models.
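
    The session description maps onto a two-stage pipeline: a ComfyUI graph that turns concept art into styled image views, followed by an image-to-3D call. The Python sketch below illustrates that shape only. It uses the HTTP routes a default local ComfyUI install exposes (/prompt, /history, /view), while the AI3D URL, route, payload, and response format shown are illustrative assumptions rather than the documented AI3D API or the exact code taught in the session.

    """Minimal sketch: queue a ComfyUI workflow, collect its output images,
    then hand one image to an (assumed) image-to-3D endpoint."""
    import json
    import time
    import requests

    COMFY = "http://127.0.0.1:8188"              # default local ComfyUI server
    AI3D_URL = "https://api.ai3d.dev/generate"   # hypothetical endpoint path


    def run_workflow(workflow: dict) -> str:
        """Queue a workflow graph (API-format JSON) and return its prompt id."""
        r = requests.post(f"{COMFY}/prompt", json={"prompt": workflow})
        r.raise_for_status()
        return r.json()["prompt_id"]


    def wait_for_images(prompt_id: str) -> list[bytes]:
        """Poll /history until the job finishes, then download saved images."""
        while True:
            hist = requests.get(f"{COMFY}/history/{prompt_id}").json()
            if prompt_id in hist:
                break
            time.sleep(1.0)
        images = []
        for node_output in hist[prompt_id]["outputs"].values():
            for img in node_output.get("images", []):
                r = requests.get(f"{COMFY}/view", params={
                    "filename": img["filename"],
                    "subfolder": img["subfolder"],
                    "type": img["type"],
                })
                images.append(r.content)
        return images


    def image_to_3d(image_bytes: bytes, api_key: str) -> bytes:
        """Upload one styled view to the assumed AI3D endpoint; returns mesh bytes."""
        r = requests.post(
            AI3D_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"image": ("view.png", image_bytes, "image/png")},
        )
        r.raise_for_status()
        return r.content  # e.g. a GLB payload, depending on the service


    if __name__ == "__main__":
        # Workflow JSON exported from ComfyUI via "Save (API Format)".
        with open("concept_to_views.json") as f:
            workflow = json.load(f)
        images = wait_for_images(run_workflow(workflow))
        mesh = image_to_3d(images[0], api_key="YOUR_KEY")
        with open("asset.glb", "wb") as f:
            f.write(mesh)

    In practice the ComfyUI graph (checkpoint, style conditioning, number of views) and the 3D backend are the interchangeable pieces; the surrounding queue-poll-upload plumbing stays the same.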

References:


    [1] API.AI3D.dev. 2025. The Unified Gen AI 3D API. AI3D Foundation.

    [2] Yosun Chang. 2023a. Napkinmatic. In SIGGRAPH ’23 Appy Hour. ACM.

    [3] Yosun Chang. 2023b. Napkinmatic3D. In SIGGRAPH ’23 Real-Time Live!. ACM.

    [4] Yosun Chang. 2024a. AI3D Desktop. In SIGGRAPH ’24 Appy Hour. ACM.

    [5] Yosun Chang. 2024b. Napkinmatic App as a Ubiquitous Pocket AGI: Utility-Context-Sensitive 3D AR XR HCI for Vision-to-LLM. In SIGGRAPH ’24 Appy Hour. ACM.

    [6] Yosun Chang. 2025a. AI3D Co-Create with AI3D Render. In SIGGRAPH ’25 Appy Hour. ACM, Vancouver, BC, Canada.

    [7] Yosun Chang. 2025b. AI3D Render. Presented at the CVPR 2025 Demo Track.

    [8] Dmitry Tochilkin et al. 2024. TripoSR: Fast 3D Object Reconstruction from a Single Image. arXiv preprint arXiv:2403.02151.

    [9] Jianfeng Xiang et al. 2024. Structured 3D Latents for Scalable and Versatile 3D Generation. arXiv preprint arXiv:2412.01506.

    [10] Longwen Zhang et al. 2024. CLAY: A Controllable Large-scale Generative Model for Creating High-quality 3D Assets. arXiv preprint arXiv:2406.13897.

    [11] Mark Boss et al. 2024. SF3D: Stable Fast 3D Mesh Reconstruction with UV-unwrapping and Illumination Disentanglement. arXiv preprint arXiv:2408.00653 [cs.CV].

    [12] Ryan Burgert et al. 2025. Go-with-the-Flow: Motion-Controllable Video Diffusion Models Using Real-Time Warped Noise. In CVPR 2025.

    [13] Zixuan Huang et al. 2025. SPAR3D: Stable Point-Aware Reconstruction of 3D Objects from Single Images. arXiv preprint arXiv:2501.04689 [cs.CV].

    [14] Tencent Hunyuan3D Team. 2025. Hunyuan3D 2.0: Scaling Diffusion Models for High Resolution Textured 3D Assets Generation. arXiv preprint arXiv:2501.12202 [cs.CV].


ACM Digital Library Publication:


Overview Page:



Submit a story:

If you would like to submit a story about this experience or presentation, please contact us: historyarchives@siggraph.org