Single Image to 3D Rendering with Hunyuan3D-2

Run this workflow on InstaSD

Get started in minutes! Run this ComfyUI workflow online - no setup required.

Description

Bring your 2D images to life by generating detailed 3D models with this cutting-edge ComfyUI workflow powered by Hunyuan3D 2.0. Built on top of the Hunyuan3D-DiT and Hunyuan3D-Paint foundation models, this setup converts a single image into a high-quality, textured 3D mesh, ready for rendering, animation, or creative modification.

🎯 Features

  • Single-Image 3D Generation – Transform any image into a full 3D mesh with rich texture and structural integrity.
  • Hunyuan3D-DiT Backbone – Leverages a scalable flow-based diffusion transformer for precise geometry generation aligned with the input image.
  • High-Resolution Texture Mapping – Uses Hunyuan3D-Paint to synthesize vibrant and realistic textures.
  • Mesh Compatibility – Works with both generated and user-supplied meshes for greater flexibility.
  • Efficient Workflow – Designed for seamless integration into your creative pipeline with minimal setup.

💡 Use Cases

  • Product Prototyping – Visualize physical products from concept art or product sketches.
  • Character Creation – Generate detailed 3D character models from concept images.
  • 3D Asset Generation for Games – Turn 2D assets into game-ready 3D objects.
  • AR/VR Content – Quickly develop immersive content from static images.
  • Educational Visualization – Build 3D teaching aids from reference photos or drawings.

βš™οΈ How It Works

  1. Load the Workflow – Open the pre-configured Hunyuan3D-based workflow in ComfyUI.
  2. Upload Your Input Image – Provide a single frontal image (clear structure and lighting give the best results).
  3. Run Hunyuan3D-DiT – Generates base geometry aligned with your image.
  4. Apply Hunyuan3D-Paint – Synthesizes detailed, high-quality textures over the mesh.
  5. Export or Render – Preview the result in the integrated viewer or export for use in external 3D applications.

🛠 Tip: Ensure the input image has good lighting and minimal background clutter for optimal reconstruction quality.
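
If you prefer to drive the workflow from a script instead of the ComfyUI web UI, the sketch below queues it against ComfyUI's standard /prompt endpoint and polls /history for the finished run. It assumes a local server at 127.0.0.1:8188, a workflow exported with "Save (API Format)" as hunyuan3d_workflow_api.json, and a LoadImage node id of "10" – that id is hypothetical and must be read from your own export.

```python
# Minimal sketch: queue the exported workflow against a local ComfyUI server.
# Assumes ComfyUI is running on 127.0.0.1:8188 and the workflow was exported
# with "Save (API Format)" as hunyuan3d_workflow_api.json. The LoadImage node
# id "10" is hypothetical -- look it up in your own export.
import json
import time
import urllib.request

SERVER = "http://127.0.0.1:8188"

with open("hunyuan3d_workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# Point the LoadImage node at an image already present in ComfyUI/input/.
workflow["10"]["inputs"]["image"] = "my_reference.png"  # hypothetical node id

# Queue the prompt.
req = urllib.request.Request(
    f"{SERVER}/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
prompt_id = json.loads(urllib.request.urlopen(req).read())["prompt_id"]
print("queued:", prompt_id)

# Poll the history endpoint until the run appears, then list its outputs
# (the mesh written by Hy3DExportMesh ends up under ComfyUI/output/).
while True:
    with urllib.request.urlopen(f"{SERVER}/history/{prompt_id}") as resp:
        history = json.loads(resp.read())
    if prompt_id in history:
        print(json.dumps(history[prompt_id]["outputs"], indent=2))
        break
    time.sleep(2)
```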

Models

  • hunyuan3d-dit-v2-0-fp16.safetensors – destination: /ComfyUI/models/diffusion_models (source: Download)
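
If you run the workflow on your own machine rather than on InstaSD, the checkpoint simply needs to land in the destination folder above. A minimal sketch using huggingface_hub follows; the repo_id is a placeholder, so substitute whatever repository the Download link actually points to.

```python
# Sketch: fetch the DiT checkpoint and place it where ComfyUI expects it.
# The repo_id below is a placeholder -- substitute the repository that the
# "Download" link above points to.
from pathlib import Path
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

dest_dir = Path("ComfyUI/models/diffusion_models")
dest_dir.mkdir(parents=True, exist_ok=True)

hf_hub_download(
    repo_id="your-org/hunyuan3d-2-safetensors",  # placeholder repo id
    filename="hunyuan3d-dit-v2-0-fp16.safetensors",
    local_dir=dest_dir,
)
print("files in place:", [p.name for p in dest_dir.iterdir()])
```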

Nodes

SetNode, GetNode, PreviewImage, DownloadAndLoadHy3DPaintModel, DownloadAndLoadHy3DDelightModel, Hy3DRenderMultiView, ImageResize+, Hy3DVAEDecode, Hy3DMeshUVWrap, LoadImage, Hy3DGenerateMesh, Hy3DSampleMultiView, Hy3DApplyTexture, Hy3DExportMesh, Hy3DBakeFromMultiview, Hy3DMeshVerticeInpaintTexture, CV2InpaintTexture, ImageCompositeMasked, MaskToImage, SolidMask, TransparentBGSession+, ImageRemoveBackground+, Preview3D, Hy3DModelLoader, Hy3DIMRemesh, Hy3DPostprocessMesh, Note
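
The Hy3D* nodes (and helpers such as ImageResize+ and TransparentBGSession+) come from custom node packs rather than core ComfyUI, so missing nodes are the most common reason the workflow fails to load locally. A rough sketch for checking a running server through its /object_info endpoint, assuming the default local address:

```python
# Sketch: verify that the custom nodes this workflow needs are registered
# on a running ComfyUI server (default local address assumed).
import json
import urllib.request

SERVER = "http://127.0.0.1:8188"
required = ["Hy3DModelLoader", "Hy3DGenerateMesh", "Hy3DVAEDecode",
            "Hy3DSampleMultiView", "Hy3DApplyTexture", "Hy3DExportMesh"]

with urllib.request.urlopen(f"{SERVER}/object_info") as resp:
    registered = set(json.loads(resp.read()).keys())

missing = [name for name in required if name not in registered]
print("missing nodes:", missing or "none -- workflow should load")
```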