Flux generation in a heartbeat, literally


This tutorial demonstrates how to use the pruna package to optimize your Flux model for faster inference. Any execution times given below were measured on an A100 GPU.

The tutorial smashes the Flux model on GPU, so running it requires an A100 or a comparable GPU.
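If you do not have pruna installed yet, a minimal setup might look like the cell below. This is a sketch: the exact package name and any extras you need may depend on your environment, so refer to the Pruna documentation for the authoritative install instructions.

[ ]:
# Install pruna into the current environment (assumed package name on PyPI)
!pip install pruna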

1. Loading the Flux Model

First, load your Flux model. Note that FLUX.1-dev is a gated model on the Hugging Face Hub, so you may need to accept its license and authenticate with your Hugging Face account before the download succeeds.

[ ]:
import torch
from diffusers import FluxPipeline

# Load the Flux pipeline in bfloat16 to reduce memory usage
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
# pipe.enable_model_cpu_offload()  # save some VRAM by offloading the model to CPU; remove this if you have enough GPU memory
pipe.to('cuda')  # move the pipeline to the GPU

2. Initializing the Smash Config

Next, initialize the smash_config.

[ ]:
from pruna import SmashConfig

# Initialize the SmashConfig
smash_config = SmashConfig()
smash_config['compilers'] = ['flux_caching']
smash_config['comp_flux_caching_cache_interval'] = 2 # Higher is faster, but reduces quality
smash_config['comp_flux_caching_start_step'] = 2 # Best to keep it the same as cache_interval
smash_config['comp_flux_caching_compile'] = True # Whether to additionally compile the model for extra speed up
smash_config['comp_flux_caching_save_model'] = False # Whether to save the model after compilation or just use it for inference

3. Smashing the Model

Now, you can smash the model, which can take up to 2 minutes. Don’t forget to replace the token with the one provided by PrunaAI.

[ ]:
from pruna import smash

pipe = smash(
    model=pipe,
    token='<your_token>',  # replace <your_token> with your actual token, or set to None if you do not have one yet
    smash_config=smash_config,
)

4. Running the Model

Run the model to generate images with accelerated inference.

[ ]:
pipe("A red apple", num_inference_steps=50).images[0]

Wrap Up

Congratulations! You have successfully smashed a Flux model. Enjoy the speed-up!