This script installs all required dependencies and resolves version conflicts between the two underlying projects:
```bash
chmod +x setup.sh
./setup.sh
```

You need to provide a Hugging Face token to access the models:
```bash
chmod +x hugging_auth.sh
./hugging_auth.sh YOUR_HUGGINGFACE_TOKEN
```

Replace `YOUR_HUGGINGFACE_TOKEN` with your actual Hugging Face token. You can generate one at https://huggingface.co/settings/tokens.
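Hugging Face user access tokens start with the `hf_` prefix, so a quick sanity check before running the script can catch copy-paste mistakes. The helper below is a hypothetical convenience, not part of this repository:

```python
import re

def looks_like_hf_token(token: str) -> bool:
    """Rough sanity check: HF user access tokens start with 'hf_'
    followed by alphanumeric characters."""
    return bool(re.fullmatch(r"hf_[A-Za-z0-9]+", token))

print(looks_like_hf_token("hf_abc123DEF"))  # a plausible token shape
print(looks_like_hf_token("my-password"))   # clearly not a token
```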
Run the application once to download and cache the required models:
```bash
python app.py
```

This will download the models to the Hugging Face cache directory.
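By default, `huggingface_hub` stores downloads under `~/.cache/huggingface`, and the `HF_HOME` environment variable overrides that root. A small sketch for locating the cache, assuming the standard layout (the real resolution also honors other variables such as `XDG_CACHE_HOME`):

```python
import os
from pathlib import Path

def hf_cache_dir(env=None) -> Path:
    """Return the Hugging Face cache root: HF_HOME if set,
    otherwise ~/.cache/huggingface."""
    env = os.environ if env is None else env
    return Path(env.get("HF_HOME", str(Path.home() / ".cache" / "huggingface")))

print(hf_cache_dir())  # where the cached models end up
```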
To have permanent local copies of the models:
```bash
python download_models.py --models_dir ./models
```

This will download all required models to the specified directory. The default is `./models/`.
```bash
# Run with models from the Hugging Face cache
python app.py

# Run with local model copies
python app_local.py --models_dir ./models

# Run from the command line without the UI
python app_main.py --model_image path/to/model.jpg --garment_image path/to/garment.jpg --models_dir ./models
```

Additional options:
- `--output`: Output image path (default: `output.png`)
- `--height`: Image height (default: 768)
- `--width`: Image width (default: 576)
- `--seed`: Random seed (default: 0)
- `--steps`: Inference steps (default: 15)
- `--show_type`: Display type (`"follow model image"`, `"follow height & width"`, or `"all outputs"`)
- `--group_offloading`: Enable group offloading for memory management
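The flags above map naturally onto an `argparse` parser. The sketch below mirrors the documented options and defaults; the actual parser in `app_main.py` may differ:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Mirrors the documented app_main.py options; defaults taken from the list above.
    p = argparse.ArgumentParser(description="Virtual try-on CLI (sketch)")
    p.add_argument("--model_image", required=True, help="Path to the model photo")
    p.add_argument("--garment_image", required=True, help="Path to the garment photo")
    p.add_argument("--output", default="output.png", help="Output image path")
    p.add_argument("--height", type=int, default=768)
    p.add_argument("--width", type=int, default=576)
    p.add_argument("--seed", type=int, default=0)
    p.add_argument("--steps", type=int, default=15)
    p.add_argument("--show_type", default="follow model image",
                   choices=["follow model image", "follow height & width", "all outputs"])
    p.add_argument("--group_offloading", action="store_true",
                   help="Enable group offloading for memory management")
    p.add_argument("--models_dir", default="./models")
    return p

args = build_parser().parse_args(["--model_image", "m.jpg", "--garment_image", "g.jpg"])
print(args.height, args.steps, args.group_offloading)  # 768 15 False
```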
Example with all options:
```bash
python app_main.py --model_image examples/model.jpg --garment_image examples/garment.jpg --output result.png --height 800 --width 600 --seed 42 --steps 20 --show_type "follow height & width" --models_dir ./models --group_offloading
```

The `app_main.py` file provides core functionality without UI dependencies, making it ideal for creating API endpoints. You can import the `VirtualTryOn` class from this file to create custom endpoints.
Example:
```python
from app_main import VirtualTryOn

# Load models once during startup
models_dir = "./models"
VirtualTryOn.load_models(models_dir=models_dir)

# Use in API endpoint
def generate_image_endpoint(model_image, garment_image, seed=0):
    result = VirtualTryOn.generate_with_fixed_prompt(
        model_image,
        garment_image,
        seed=seed,
        models_dir=models_dir,
    )
    return result
```

The enhancement feature is enabled by default but can be controlled programmatically.
You can enable or disable the enhancement feature using the `set_enhancement` function:

```python
from app_main import VirtualTryOn

# Disable enhancement
VirtualTryOn.set_enhancement(False)

# Enable enhancement (default)
VirtualTryOn.set_enhancement(True)
```

You can also control enhancement on a per-call basis by setting the `use_enhancement` parameter:
```python
result = VirtualTryOn.generate_with_fixed_prompt(
    model_image,
    garment_image,
    models_dir=models_dir,
    use_enhancement=False,  # Disable enhancement for this generation only
)
```

After running `download_models.py`, your models directory will contain:
- `./models/flux/` - FLUX base model components
- `./models/any2any_tryon/` - Try-on specific LoRA weights
- `./models/esrgan/` - RealESRGAN upsampling model
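To confirm a download completed, you can check that these subdirectories exist. The helper below is a hypothetical convenience built from the layout listed above, not part of the repository:

```python
from pathlib import Path

EXPECTED_SUBDIRS = ("flux", "any2any_tryon", "esrgan")  # from the layout above

def missing_model_dirs(models_dir: str = "./models") -> list[str]:
    """Return the expected subdirectories that are absent under models_dir."""
    root = Path(models_dir)
    return [name for name in EXPECTED_SUBDIRS if not (root / name).is_dir()]

missing = missing_model_dirs("./models")
if missing:
    print("Missing model folders:", ", ".join(missing))
```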