“AI3D Co-Create with AI3D Render” by Chang and Gold – ACM SIGGRAPH HISTORY ARCHIVES


  • 2025 Appy Hour: Chang, AI3D Co-Create with AI3D Render

Conference:


Experience Type(s):


Title:


    AI3D Co-Create with AI3D Render

Developer(s):



Description:


    Use AI to intuitively create 3D objects in reality. The user arranges rough primitives and gives the AI a prompt, then takes a photo from an angle using an AI3D Easel, which iteratively helps refine the image until it is ready for the image-to-3D process. We also introduce AI3D Render.
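
    The capture-refine-reconstruct loop described above can be sketched as follows. This is a minimal illustrative sketch, not the app's actual API: the function names (`refine`, `readiness`, `image_to_3d`), the readiness score, and the iteration cap are all assumptions.

    ```python
    # Hypothetical sketch of the AI3D Easel loop: refine the captured photo
    # under the user's prompt until a readiness check says it can be handed
    # to the image-to-3D step. All names and logic here are illustrative.

    def refine(image: str, prompt: str) -> str:
        """Stand-in for one AI refinement pass over the captured photo."""
        return f"{image}+refined[{prompt}]"

    def readiness(image: str) -> float:
        """Stand-in readiness score: here, simply counts refinement passes."""
        return min(1.0, image.count("+refined") / 3)

    def image_to_3d(image: str) -> dict:
        """Stand-in for the image-to-3D reconstruction step."""
        return {"source": image, "mesh": "mesh.obj"}

    def easel_loop(photo: str, prompt: str,
                   threshold: float = 1.0, max_iters: int = 10) -> dict:
        """Iteratively refine the photo, then reconstruct a 3D mesh."""
        image = photo
        for _ in range(max_iters):
            if readiness(image) >= threshold:
                break
            image = refine(image, prompt)
        return image_to_3d(image)
    ```

    Under these stand-ins, `easel_loop("photo.jpg", "a clay teapot")` applies three refinement passes before reconstruction; the real app would replace the stubs with its AI refinement and reconstruction backends.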

References:


    [1] Anthropic. 2024. Introducing the Model Context Protocol (MCP). Anthropic Developer Documentation. Accessed: 2025-04-10.
    [2] Yosun Chang. 2015. Wanderlust. https://devpost.com/software/wanderlust-qofj3. Presented at SIGGRAPH Appy Hour 2015, Los Angeles, CA.
    [3] Yosun Chang. 2020. DrawmaticAR: Automagical AR content from written words!. In ACM SIGGRAPH 2020 Real-Time Live!. ACM, Virtual Event, USA.
    [4] Yosun Chang. 2023a. Napkinmatic. In SIGGRAPH ’23 Appy Hour. ACM.
    [5] Yosun Chang. 2023b. Napkinmatic3D. In SIGGRAPH ’23 Real-Time Live!. ACM.
    [6] Yosun Chang. 2024a. AI3D Camera Obscura. VisionDevCamp, Best of Show.
    [7] Yosun Chang. 2024b. AI3D Co-Create: Primitives. https://ai3d.dev/co-create.
    [8] Yosun Chang. 2024c. AI3D Desktop. In SIGGRAPH Appy Hour ’24. ACM.
    [9] Yosun Chang. 2024d. AI3D Sculpt. ECCV 2024 Demo Track, Milan, Italy.
    [10] Yosun Chang. 2024e. Napkinmatic App as a Ubiquitous Pocket AGI: Utility-Context-Sensitive 3D AR XR HCI for Vision-to-LLM. In SIGGRAPH Appy Hour ’24. ACM.
    [11] Yosun Chang. 2024f. VolumeMatic and the AI3D.foundation. In SIGGRAPH Appy Hour ’24. ACM.
    [12] Mark Boss et al. 2024. SF3D: Stable Fast 3D Mesh Reconstruction with UV-unwrapping and Illumination Disentanglement. arXiv:2408.00653 [cs.CV].
    [13] Ryan Burgert et al. 2025. Go-with-the-Flow: Motion-Controllable Video Diffusion Models Using Real-Time Warped Noise. In CVPR.
    [14] Ruicheng Wang et al. 2025. MoGe: Unlocking Accurate Monocular Geometry Estimation for Open-Domain Images with Optimal Training Supervision. arXiv:2410.19115 [cs.CV].


ACM Digital Library Publication:


Overview Page:


