Show HN: Meshcraft – Text-to-3D and image-to-3D with selectable AI engines
otmardev · Saturday, March 07, 2026

Hey HN, I built Meshcraft – a web-based tool that generates 3D models (GLB) from text prompts or images.
What's new since the first Show HN (Feb): Back then it was a basic TripoSR wrapper. A commenter here (thanks vunderba) pointed me to Trellis 2, which produced vastly better results. Since then I've rebuilt the whole thing:
- Two 3D engines: Standard (Trellis 2 via HuggingFace ZeroGPU) and Premium (Hunyuan v3.1 Pro via fal.ai). Standard is free; Premium costs 50 credits and produces ~1.4M-face models with proper PBR materials.

- Four image models for text-to-3D: FLUX 1 Schnell, FLUX 2 Dev, GPT Image 1 Mini, and GPT Image 1.5. You pick a model, type a prompt, and it generates an image, then converts it to 3D.

- Unified credit system with variable costs per action (1–59 credits depending on the engine + image model combo).
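To make the variable pricing concrete, here's a minimal sketch of how a per-action credit table could work. The specific per-model costs below are illustrative assumptions (only the 50-credit Premium engine and the 1–59 range come from the post); `creditCost` and the model names are hypothetical identifiers, not Meshcraft's actual API.

```typescript
type Engine = "standard" | "premium";
type ImageModel = "flux-schnell" | "flux-dev" | "gpt-image-mini" | "gpt-image-1.5";

// Standard (Trellis 2 on ZeroGPU) is free; Premium (Hunyuan v3.1 Pro) is 50 credits.
const ENGINE_COST: Record<Engine, number> = { standard: 0, premium: 50 };

// Illustrative per-image costs chosen so totals span the stated 1–59 range.
const IMAGE_COST: Record<ImageModel, number> = {
  "flux-schnell": 1,
  "flux-dev": 3,
  "gpt-image-mini": 2,
  "gpt-image-1.5": 9,
};

// Total credits for one text-to-3D run: image generation + 3D conversion.
function creditCost(engine: Engine, model: ImageModel): number {
  return ENGINE_COST[engine] + IMAGE_COST[model];
}
```

A flat lookup table like this keeps pricing auditable: the cheapest combo (Standard + FLUX Schnell) lands at 1 credit and the most expensive (Premium + GPT Image 1.5) at 59.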
Stack: Next.js 16 on Netlify, Supabase (auth + DB + storage), Stripe, HuggingFace ZeroGPU H200, fal.ai serverless for Hunyuan and image generation. Background generation via Netlify Background Functions (up to 15 min async).
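For the async generation piece, here's a rough sketch of what a Netlify Background Function handler can look like (Request in, Response out; the `-background` filename suffix is what tells Netlify to run it asynchronously, up to 15 minutes). `runGeneration` is a stub standing in for the real work (call the 3D engine, upload the GLB to Supabase storage); none of this is Meshcraft's actual code.

```typescript
// Stub for the long-running work: generate image -> convert to 3D -> upload GLB.
async function runGeneration(prompt: string): Promise<string> {
  return `models/${prompt.replace(/\s+/g, "-")}.glb`;
}

// In the real project this would be the default export of a file like
// netlify/functions/generate-background.ts.
async function handler(req: Request): Promise<Response> {
  const { prompt } = (await req.json()) as { prompt: string };
  const path = await runGeneration(prompt);
  // Netlify answers the caller with 202 immediately; this response body
  // is only visible in function logs, so the client polls the DB instead.
  return new Response(JSON.stringify({ path }), { status: 200 });
}
```

The client-side pattern that pairs with this is: insert a "pending" job row, fire the background function, then poll (or subscribe to) the row until the GLB path appears.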
What I learned building this:
1. The 3D engine is the quality bottleneck, not the image model. I tested 8 engines before settling on two. Trellis 2 is great for simple objects but struggles with complex geometry (missing fingers, back-side artifacts). Hunyuan v3.1 Pro solves most of these.

2. Image model quality matters less than you'd think for 3D – a $0.003 FLUX Schnell image produces nearly the same 3D result as a $0.009 GPT Image 1.5 image.

3. HuggingFace ZeroGPU is incredible for bootstrapping – free H200 inference with a $9/mo Pro account. Cold starts and queue times are the trade-off.
Free tier: 5 credits/month, no credit card required. Would love feedback on the generation quality and UX.