Automate finding the best SmashConfig with the Optimization Agent (Pro)

The OptimizationAgent is an experimental but powerful feature in pruna_pro that automatically finds the best algorithm configuration for your model based on your specific objectives, requirements and constraints.

Not using pruna_pro? Check out the optimization guide to learn how to optimize your model manually.

Basic Optimization Agent Workflow

pruna_pro follows a simple workflow for optimizing models:

graph LR
    User -->|provides| C
    User -->|provides| Task
    User -->|provides| Metrics
    C -->|input to| OptimizationAgent
    Task -->|defines objective for| OptimizationAgent
    Metrics -->|input to| Task
    OptimizationAgent --> InstantSearch["Instant Search"]
    OptimizationAgent --> ProbabilisticSearch["Probabilistic Search"]
    InstantSearch -->|returns| PrunaModel
    ProbabilisticSearch -->|returns| PrunaModel
    User -->|optionally provides| B

    subgraph A["Search Methods"]
        InstantSearch
        ProbabilisticSearch
    end

    subgraph B["Optional Enhancements"]
        direction TB
        Tokenizer
        Processor
        CalibrationData
    end

    C["PreTrained Model"]
    B -->|informs| OptimizationAgent

    style User fill:#bbf,stroke:#333,stroke-width:2px
    style OptimizationAgent fill:#bbf,stroke:#333,stroke-width:2px
    style C fill:#bbf,stroke:#333,stroke-width:2px
    style Task fill:#bbf,stroke:#333,stroke-width:2px
    style Metrics fill:#bbf,stroke:#333,stroke-width:2px
    style PrunaModel fill:#bbf,stroke:#333,stroke-width:2px
    style InstantSearch fill:#f9f,stroke:#333,stroke-width:2px
    style ProbabilisticSearch fill:#f9f,stroke:#333,stroke-width:2px
    style Tokenizer fill:#f9f,stroke:#333,stroke-width:2px
    style CalibrationData fill:#f9f,stroke:#333,stroke-width:2px
    style Processor fill:#f9f,stroke:#333,stroke-width:2px

Let’s see what that looks like in code.

from pruna_pro import OptimizationAgent
from pruna.evaluation.task import Task
from pruna.data.pruna_datamodule import PrunaDataModule
from diffusers import StableDiffusionPipeline

# Define your task with metrics and your model
model = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
task = Task(['latency'], datamodule=PrunaDataModule.from_string('LAION256'))

# Create the optimization agent with the model and your objectives
agent = OptimizationAgent(model=model, task=task)

# Find the best configuration instantly
optimized_model = agent.instant_search()

# Find the best configuration probabilistically
optimized_model = agent.probabilistic_search(
    n_trials=15,              # Number of configurations to try
    n_iterations=15,          # Iterations per evaluation
    n_warmup_iterations=5,    # Warmup iterations per evaluation of efficiency metrics
)
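
Both searches return an optimized model that you can use as a drop-in replacement for the original pipeline. A minimal usage sketch, assuming the returned model forwards calls to the underlying diffusers pipeline (the prompt and file name are placeholders):

# Run inference with the optimized pipeline, as you would with the original diffusers pipeline
image = optimized_model("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")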

Choosing Optimization Strategies

The OptimizationAgent (experimental) offers two strategies for finding a good configuration: instant search and probabilistic search. Instant search returns an optimized model right away, without evaluating multiple candidate configurations against your task. Probabilistic search applies and evaluates one candidate configuration per trial against your task metrics and reports the trials on the Pareto front, so it takes longer but tailors the result to your objectives.

Configure the Optimization Agent

There are a few things you can configure to make the OptimizationAgent work for you.

Configure the Target Metrics

The OptimizationAgent will optimize for all metrics in the task by default.

You can optionally specify target metrics to focus the search on a subset of the metrics defined in the Task:

from pruna_pro import OptimizationAgent
from pruna.evaluation.task import Task
from pruna.data.pruna_datamodule import PrunaDataModule
from diffusers import StableDiffusionPipeline

task = Task(['latency', 'clip_score'], datamodule=PrunaDataModule.from_string('LAION256'))

model = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

agent = OptimizationAgent(model=model, task=task)

# Set the target metrics
agent.set_target_metrics(['latency'])

optimized_model = agent.probabilistic_search(
    n_trials=15,              # Number of configurations to try
    n_iterations=15,          # Iterations per evaluation
    n_warmup_iterations=5,    # Warmup iterations per evaluation of efficiency metrics
)
INFO - Setting up search space for the OptimizationAgent...
INFO - Evaluating base model...
INFO - Base results:
INFO - clip_score: 27.8676700592041
INFO - latency: 7150.48515625
INFO - Starting probabilistic search...

...

INFO - Trial 5 completed with results:
INFO - clip_score: 28.49169921875
INFO - latency: 7980.82841796875
INFO - Tested configuration:
SmashConfig(
'quantizer': 'hqq_diffusers',
'hqq_diffusers_backend': 'bitblas',
'hqq_diffusers_group_size': 32,
'hqq_diffusers_weight_bits': 8,
)
INFO - --------------------------------
INFO - Trial 4 is on the pareto front with results:
INFO - latency: 5006.469954427083
INFO - Trial 11 is on the pareto front with results:
INFO - latency: 1092.8145833333333
INFO - Trial 10 is on the pareto front with results:
INFO - latency: 580.0285685221354
INFO - Trial 6 is on the pareto front with results:
INFO - latency: 519.7881591796875
INFO - --------------------------------

Configure Tokenizers, Processors or Calibration Data

The OptimizationAgent supports adding components that some compression algorithms require, such as a tokenizer, a processor, or calibration data. Specify as many of them as you can: each one unlocks additional compression algorithms for the agent to consider.

from pruna_pro import OptimizationAgent
from pruna.evaluation.task import Task
from pruna.data.pruna_datamodule import PrunaDataModule
from transformers import AutoModelForCausalLM

# Set up model, task and agent
model_id = "facebook/opt-125m"
model = AutoModelForCausalLM.from_pretrained(model_id)
task = Task(['latency'], datamodule=PrunaDataModule.from_string("WikiText"))
agent = OptimizationAgent(model=model, task=task)

# Add a tokenizer if needed
agent.add_tokenizer(model_id)

# Add calibration data if needed
agent.add_data("WikiText")

# Find the best configuration instantly - now with an extended set of compatible compression algorithms
optimized_model = agent.instant_search()
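
Once the search has finished, you will typically want to keep the result rather than repeat it. A short sketch, assuming the returned model exposes PrunaModel's save_pretrained interface; the directory name is a placeholder:

# Persist the optimized model so the search does not have to be rerun
optimized_model.save_pretrained("opt-125m-smashed")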

Note

The data provided to the Task might differ from the data given to the OptimizationAgent in cases where you prefer to calibrate on a different dataset than the one used for evaluation.
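
For instance, you could evaluate latency on WikiText while calibrating the compression algorithms on a different dataset. A minimal sketch, where "SmolTalk" stands in as a hypothetical calibration dataset name:

# Evaluate latency on WikiText ...
task = Task(['latency'], datamodule=PrunaDataModule.from_string("WikiText"))
agent = OptimizationAgent(model=model, task=task)

# ... but calibrate on a different dataset ("SmolTalk" is a placeholder name)
agent.add_data("SmolTalk")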

Best Practices

Define Clear Objectives

Use appropriate metrics in your task definition that represent your optimization goals well.
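
For example, pairing a quality metric with an efficiency metric keeps the search from trading away output quality for speed; a small sketch reusing the metrics and dataset from the examples above:

# Balance output quality (clip_score) against speed (latency)
task = Task(['clip_score', 'latency'], datamodule=PrunaDataModule.from_string('LAION256'))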

Provide Sufficient Trials

For probabilistic_search(), more trials generally lead to better results, and more evaluation iterations give a more accurate estimate of model performance.
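
For example, with a larger search budget you might raise the values used earlier; the numbers below are illustrative rather than recommended defaults:

optimized_model = agent.probabilistic_search(
    n_trials=30,              # explore more configurations
    n_iterations=30,          # average over more iterations for a steadier estimate
    n_warmup_iterations=10,   # warm up before measuring efficiency metrics
)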