[R] We open-sourced FASHN VTON v1.5: a pixel-space, maskless virtual try-on model trained from scratch (972M params, Apache-2.0)
We just open-sourced FASHN VTON v1.5, a virtual try-on model that generates photorealistic images of people wearing garments directly in pixel space. We trained it from scratch (not fine-tuned from an existing diffusion model) and have been running it as an API for the past year. Now we're releasing the weights and inference code.

Why we're releasing this

Most open-source VTON models are either research prototypes that require significant engineering to deploy, or they're locked behind restrictive licenses. As state-of-the-art capabilities consolidate into massive generalist models, we think there's value in releasing focused, efficient models that researchers and developers can actually own, study, and extend commercially. We also want to demonstrate that competitive results in this domain don't require massive compute budgets: total training cost was in the $5-10k range on rented A100s. This follows our human parser release from a couple of weeks ago.

Architecture
Key differentiators

- Pixel-space operation: Unlike most diffusion models, which work in a VAE latent space, we operate directly on RGB pixels. This avoids the lossy VAE encoding/decoding that can blur fine garment details like textures, patterns, and text.
- Maskless inference: No segmentation mask is required on the target person. This improves body preservation (no mask-leakage artifacts) and allows unconstrained garment volume. The model learns where clothing boundaries should be rather than being told. (A rough sketch of both points follows below.)
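To make the pixel-space point concrete, here is a minimal sketch of where the VAE round-trip sits in a typical latent-space try-on step and why a pixel-space step skips it. The function and variable names are placeholders for illustration, not our actual code:

```python
import torch

# Placeholder sketch, not the FASHN VTON implementation: it only shows where
# the lossy VAE round-trip sits in a latent-space model and why a
# pixel-space model avoids it.

def latent_space_tryon_step(vae, denoiser, person_rgb: torch.Tensor,
                            garment_rgb: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    # Typical latent diffusion: both images are compressed before denoising
    # and the result is decompressed afterwards. Fine textures, prints, and
    # text can blur in these two lossy steps.
    z_person = vae.encode(person_rgb)
    z_garment = vae.encode(garment_rgb)
    z_out = denoiser(z_person, z_garment, t)
    return vae.decode(z_out)

def pixel_space_tryon_step(denoiser, person_rgb: torch.Tensor,
                           garment_rgb: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    # Pixel-space diffusion: the denoiser consumes and predicts raw RGB,
    # so garment detail never passes through a VAE bottleneck.
    return denoiser(person_rgb, garment_rgb, t)
```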
Practical details

Links
Quick example
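The exact entry points are in the released inference code; the import path, class, and method names below are assumptions, shown only to give a feel for the maskless interface (a person photo plus a garment photo, no segmentation mask):

```python
from PIL import Image
import torch

# Hypothetical usage sketch. The import path, loader, and method names below
# are placeholders, not the released API; only the shape of the call matters:
# one person image and one garment image, and no segmentation mask.

from fashn_vton import FashnVTON  # placeholder import path

model = FashnVTON.from_pretrained("fashn-vton-v1.5").to("cuda").eval()  # hypothetical loader

person = Image.open("person.jpg").convert("RGB")    # photo of the target person
garment = Image.open("garment.jpg").convert("RGB")  # flat-lay or on-model garment shot

with torch.no_grad():
    result = model.try_on(person_image=person, garment_image=garment)  # hypothetical method

result.save("tryon_result.png")
```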
Coming soon
Happy to answer questions about the architecture, training, or implementation.