atharva-deopujari/Virtual-Tryon
Virtual Try-On with Image Enhancement

Setup Instructions

1. Run Setup Script

This script installs all required dependencies and resolves version conflicts between the two underlying projects:

chmod +x setup.sh
./setup.sh

2. Authenticate with Hugging Face

You need to provide a Hugging Face token to access the models:

chmod +x hugging_auth.sh
./hugging_auth.sh YOUR_HUGGINGFACE_TOKEN

Replace YOUR_HUGGINGFACE_TOKEN with your actual Hugging Face token. You can generate one at https://huggingface.co/settings/tokens.

3. Initial Run to Cache Models

Run the application once to download and cache the required models:

python app.py

This downloads the models to the Hugging Face cache directory.
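For reference, the cache location follows the standard Hugging Face convention: the `HF_HOME` environment variable if set, otherwise `~/.cache/huggingface`. A small sketch to locate it (the helper name is mine, not part of this repo):

```python
import os
from pathlib import Path

def hf_cache_dir() -> Path:
    """Return the Hugging Face cache root: HF_HOME if set, else ~/.cache/huggingface."""
    return Path(os.environ.get("HF_HOME", str(Path.home() / ".cache" / "huggingface")))

print(hf_cache_dir())
```

Setting `HF_HOME` before the initial run moves the cache, which is useful when the home partition is small.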

4. Download Models Locally

To keep permanent local copies of the models:

python download_models.py --models_dir ./models

This downloads all required models to the specified directory (default: ./models/).

Running the Application

Option 1: Gradio Web Interface with Online Models

python app.py

Option 2: Gradio Web Interface with Local Models

python app_local.py --models_dir ./models

Option 3: Command Line Interface with Local Models

python app_main.py --model_image path/to/model.jpg --garment_image path/to/garment.jpg --models_dir ./models

Additional options:

  • --output: Output image path (default: output.png)
  • --height: Image height (default: 768)
  • --width: Image width (default: 576)
  • --seed: Random seed (default: 0)
  • --steps: Inference steps (default: 15)
  • --show_type: Display type ("follow model image", "follow height & width", or "all outputs")
  • --group_offloading: Enable group offloading for memory management

Example with all options:

python app_main.py --model_image examples/model.jpg --garment_image examples/garment.jpg --output result.png --height 800 --width 600 --seed 42 --steps 20 --show_type "follow height & width" --models_dir ./models --group_offloading

Building API Endpoints

The app_main.py file provides core functionality without UI dependencies, making it ideal for creating API endpoints. You can import the VirtualTryOn class from this file to create custom endpoints.

Example:

from app_main import VirtualTryOn

# Load models once during startup
models_dir = "./models"
VirtualTryOn.load_models(models_dir=models_dir)

# Use in API endpoint
def generate_image_endpoint(model_image, garment_image, seed=0):
    result = VirtualTryOn.generate_with_fixed_prompt(
        model_image,
        garment_image,
        seed=seed,
        models_dir=models_dir
    )
    return result

Image Enhancement Feature

Image enhancement (RealESRGAN upscaling of the generated output) is enabled by default but can be controlled programmatically.

Controlling Enhancement

You can enable or disable the enhancement feature using the set_enhancement function:

from app_main import VirtualTryOn

# Disable enhancement
VirtualTryOn.set_enhancement(False)

# Enable enhancement (default)
VirtualTryOn.set_enhancement(True)

You can also control enhancement on a per-call basis by setting the use_enhancement parameter:

result = VirtualTryOn.generate_with_fixed_prompt(
    model_image,
    garment_image,
    models_dir=models_dir,
    use_enhancement=False  # Disable enhancement for this generation only
)

Model Directory Structure

After running download_models.py, your models directory will contain:

  • ./models/flux/ - FLUX base model components
  • ./models/any2any_tryon/ - Try-on specific LoRA weights
  • ./models/esrgan/ - RealESRGAN upsampling model
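Before launching app_local.py or app_main.py, it can help to confirm the layout above is in place. A hypothetical check (not part of the repo; subdirectory names taken from the list above):

```python
from pathlib import Path

# Expected subdirectories produced by download_models.py (per the list above).
REQUIRED_SUBDIRS = ("flux", "any2any_tryon", "esrgan")

def missing_model_dirs(models_dir="./models"):
    """Return the names of expected model subdirectories that are absent."""
    root = Path(models_dir)
    return [name for name in REQUIRED_SUBDIRS if not (root / name).is_dir()]

missing = missing_model_dirs()
if missing:
    print(f"Run download_models.py first; missing: {', '.join(missing)}")
```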
