Algorithms Overview
At its core, the pruna package is a framework of compression algorithms. By offering a consistent interface, it simplifies the integration of diverse compression algorithms. In this section, we will introduce you to all the algorithms you can currently apply with the package. Algorithms marked with “(Pro)” are only available in the pruna_pro package.
pruna wouldn’t be possible without the amazing work of the authors behind these algorithms. 💜 We’re really grateful for their contributions and encourage you to check out their repositories!
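All of these algorithms are applied through the same workflow: describe the desired compression in a SmashConfig and pass it to smash together with the model. A minimal sketch of the typical workflow (the checkpoint is only an example, and each algorithm's hyperparameters are listed in the tables below):

```python
from pruna import SmashConfig, smash
from transformers import AutoModelForCausalLM

# Any supported model works here; this checkpoint is only an example.
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# Pick one algorithm per group (quantizer, compiler, cacher, ...).
smash_config = SmashConfig()
smash_config["quantizer"] = "half"

# smash returns an optimized model with the same inference interface.
smashed_model = smash(model=model, smash_config=smash_config)
```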
Batchers
Batching groups multiple inputs together to be processed simultaneously, improving computational efficiency and reducing overall processing time.
ifw
Insanely Fast Whisper is an optimized version of Whisper models that significantly speeds up transcription. It achieves lower latency and higher throughput through low-level code optimizations and efficient batching, making real-time speech recognition more practical.
| Parameter | Default | Options | Description |
|---|---|---|---|
|  | 16 | 16 or 32 | Sets the number of bits to use for weight quantization. |
|  | 16 | 1, 2, 4, 8, 16, 32 or 64 | The batch size to use for inference. Higher is faster but needs more memory. |
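As a sketch of how this could be configured (the Whisper checkpoint is only an example, and the prefixed hyperparameter keys are assumptions mirroring the table above):

```python
from pruna import SmashConfig, smash
from transformers import AutoModelForSpeechSeq2Seq

model = AutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-large-v3")

smash_config = SmashConfig()
smash_config["batcher"] = "ifw"
# Assumed keys corresponding to the two parameters above.
smash_config["ifw_weight_bits"] = 16
smash_config["ifw_batch_size"] = 16

smashed_model = smash(model=model, smash_config=smash_config)
```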
whisper_s2t
WhisperS2T is an optimized speech-to-text pipeline built for Whisper models.
Compatible with: c_translate, c_generate, c_whisper, half.

| Parameter | Default | Options | Description |
|---|---|---|---|
|  | False | True, False | Whether to quantize to int8 for inference. |
|  | 16 | 1, 2, 4, 8, 16, 32 or 64 | The batch size to use for inference. Higher is faster but needs more memory. |
Cachers
Caching stores intermediate results of computations so they can be reused in later steps, which is particularly useful for reducing inference time by avoiding repeated work.
deepcache
DeepCache accelerates diffusion pipelines by caching the high-level features produced by U-Net blocks and reusing them across denoising steps.
Compatible with: stable_fast, torch_compile, half, hqq_diffusers, diffusers_int8.

| Parameter | Default | Options | Description |
|---|---|---|---|
|  | 2 | 1, 2, 3, 4 or 5 | Interval at which to cache; 1 disables caching. Higher is faster but might affect quality. |
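For example, combining DeepCache with torch_compile on a diffusion pipeline might look like this (the pipeline checkpoint is only an example, and the interval key is an assumption based on the parameter above):

```python
from diffusers import StableDiffusionPipeline
from pruna import SmashConfig, smash

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

smash_config = SmashConfig()
smash_config["cacher"] = "deepcache"
smash_config["compiler"] = "torch_compile"
# Assumed key: reuse cached U-Net features every 3rd step (1 would disable caching).
smash_config["deepcache_interval"] = 3

smashed_pipe = smash(model=pipe, smash_config=smash_config)
image = smashed_pipe("an astronaut riding a horse on the moon").images[0]
```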
adaptive (Pro)
Adaptive caching adjusts caching dynamically for each prompt, determining the optimal inference steps to reuse cached outputs.
Compatible with: hyper, torch_compile, stable_fast, hqq_diffusers, diffusers_int8.

| Parameter | Default | Options | Description |
|---|---|---|---|
|  | 0.01 | Range 0.001 to 0.2 | How much the difference between the current and previous latent can be before caching. Higher is faster, but reduces quality. |
|  | 4 | 1, 2, 3, 4 or 5 | How many steps are allowed to be skipped in a row. Higher is faster, but reduces quality. |
auto (Pro)
Given a speed_factor (e.g., 0.5 to halve latency), auto caching determines the optimal caching schedule to achieve the desired latency reduction.
Compatible with: hyper, torch_compile, stable_fast, hqq_diffusers, diffusers_int8.

| Parameter | Default | Options | Description |
|---|---|---|---|
|  | 0.5 | Range 0.0 to 1.0 | Controls inference latency. Lower values yield faster inference but may compromise quality. |
flux_caching (Pro)
Flux caching works similarly to periodic caching, but stores outputs of the transformer blocks instead of the output of the whole backbone.
Compatible with: hyper, torch_compile, stable_fast, diffusers_int8.

| Parameter | Default | Options | Description |
|---|---|---|---|
|  | 2 | 1, 2, 3, 4, 5, 6 or 7 | How many model steps to skip in a row. Higher is faster, but reduces quality. |
|  | 2 | 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 or 10 | How many steps to wait before starting to cache. |
periodic (Pro)
After a configurable start_step, periodic caching computes the output of the backbone (can be a UNet or a Transformer) every cache_interval steps and reuses this cached output for the remaining steps.
Compatible with: hyper, torch_compile, stable_fast, hqq_diffusers, diffusers_int8.

| Parameter | Default | Options | Description |
|---|---|---|---|
|  | 2 | 1, 2, 3, 4, 5, 6 or 7 | How many model steps to skip in a row. Higher is faster, but reduces quality. |
|  | 2 | 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 or 10 | How many steps to wait before starting to cache. |
Compilers
Compilation optimizes the model for specific hardware.
c_generate
CGenerate employs a custom runtime that leverages optimizations like weight quantization, layer fusion, and batch reordering to boost performance and reduce memory usage on both CPUs and GPUs for Causal LM models.
Compatible with: whisper_s2t, half.

| Parameter | Default | Options | Description |
|---|---|---|---|
|  | 16 | 8 or 16 | Sets the number of bits to use for weight quantization. |
c_translate
CTranslate employs a custom runtime that leverages optimizations like weight quantization, layer fusion, and batch reordering to boost performance and reduce memory usage on both CPUs and GPUs for causal LM models used for translation.
Compatible with: whisper_s2t, half.

| Parameter | Default | Options | Description |
|---|---|---|---|
|  | 16 | 8 or 16 | Sets the number of bits to use for weight quantization. |
c_whisper
CWhisper employs a custom runtime that leverages optimizations like weight quantization, layer fusion, and batch reordering to boost performance and reduce memory usage on both CPUs and GPUs for Whisper models.
Compatible with: whisper_s2t, half.

| Parameter | Default | Options | Description |
|---|---|---|---|
|  | 16 | 8 or 16 | Sets the number of bits to use for weight quantization. |
onediff
OneDiff achieves acceleration by converting diffusion model modules into optimized static graphs via PyTorch module compilation. This process fuses operations, applies low-level GPU kernel optimizations, and supports dynamic input shapes without the overhead of re-compilation.
Compatible with: half.

Install with `pip install pruna[onediff]` or `pip install pruna[full]`.

stable_fast
Stable-fast is an optimization framework for Image-Gen models. It accelerates inference by fusing key operations into optimized kernels and converting diffusion pipelines into efficient TorchScript graphs.
Compatible with: deepcache, half.

Install with `pip install pruna[stable-fast]`, `pip install pruna[stable-fast-cu11] --extra-index-url https://prunaai.pythonanywhere.com/`, or `pip install pruna[full]`.

torch_compile
Optimizes a given model or function using various backends and is compatible with any model containing PyTorch modules.
| Parameter | Default | Options | Description |
|---|---|---|---|
|  | default | default, reduce-overhead, max-autotune, max-autotune-no-cudagraphs | Compilation mode. |
|  | inductor | inductor, cudagraphs, onnxrt, tvm, openvino, openxla | Compilation backend. |
|  | True | True, False | Whether to discover compilable subgraphs or compile the full input graph. |
|  | None | None, True, False | Whether to use dynamic shape tracing or not. |
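The parameters above correspond to the arguments of PyTorch's own torch.compile; for reference, the equivalent plain PyTorch call looks like this:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

compiled = torch.compile(
    model,
    mode="default",      # or "reduce-overhead", "max-autotune", ...
    backend="inductor",  # or "cudagraphs", "onnxrt", "tvm", "openvino", "openxla"
    fullgraph=False,     # True requires the whole model to compile as a single graph
    dynamic=None,        # None lets PyTorch decide whether to trace dynamic shapes
)

# The first call triggers compilation; subsequent calls reuse the compiled graph.
out = compiled(torch.randn(4, 128))
```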
x_fast (Pro)
Based on stable_fast, this compiler speeds up inference latency for any model using a combination of xformers, triton, cudnn, and torch tracing.
Compatible with: quanto, half, text_to_text_lora, text_to_image_lora.

Install with `pip install pruna[stable-fast]`, `pip install pruna[stable-fast-cu11]`, or `pip install pruna[full]`.

| Parameter | Default | Options | Description |
|---|---|---|---|
|  | True | True, False | Whether to use xformers for faster inference. |
ipex_llm (Pro)
This compiler leverages advanced graph optimizations, quantization, and kernel fusion techniques to accelerate PyTorch-based LLM inference on Intel CPUs.
Compatible with: half.

Install with `pip install pruna_pro[intel] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/cn/` or `pip install pruna_pro[full] --extra-index-url https://prunaai.pythonanywhere.com/ --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/cn/`.

| Parameter | Default | Options | Description |
|---|---|---|---|
|  | 8 | 8 or 4 | The number of bits to use for weight quantization. |
Distillers
Distillation trains a smaller, simpler model to mimic a larger, more complex model.
hyper (Pro)
Hyper-SD is a distillation framework that segments the diffusion process into time-step groups to preserve and reformulate the ODE trajectory. By integrating human feedback and score distillation, it enables near-lossless performance with drastically fewer inference steps.
Compatible with: half, diffusers_int8, deepcache, auto, adaptive, flux_caching, periodic, torch_compile, stable_fast.

| Parameter | Default | Options | Description |
|---|---|---|---|
|  | False | True, False | When set to True, the model is distilled to even fewer steps. |
Pruners
Pruning removes less important or redundant connections and neurons from a model, resulting in a sparser, more efficient network.
torch_structured
Structured pruning removes entire units like neurons, channels, or filters from a network, leading to a more compact and computationally efficient model while preserving a regular structure that standard hardware can easily optimize.
| Parameter | Default | Options | Description |
|---|---|---|---|
|  | MagnitudeImportance | RandomImportance, MagnitudeImportance, LAMPImportance, TaylorImportance, HessianImportance | Importance criterion for pruning. |
|  | 64 | Range 1 to 256 | Number of calibration samples for importance computation. |
|  | False | True, False | Whether to prune head dimensions. |
|  | False | True, False | Whether to prune the number of heads. |
|  | False | True, False | Whether to perform global pruning. |
|  | 0.1 | Range 0.0 to 1.0 | Sparsity level up to which to prune. |
|  | 0.0 | Range 0.0 to 1.0 | Sparsity level up to which to prune heads. |
|  | 1 | Range 1 to 10 | Number of iterations for pruning. |
torch_unstructured
Unstructured pruning sets individual weights to 0 based on criteria such as magnitude, resulting in sparse weight matrices that retain the overall model architecture but may require specialized sparse computation support to fully exploit the efficiency gains.
| Parameter | Default | Options | Description |
|---|---|---|---|
|  | l1 | random, l1 | Pruning method to use. |
|  | 0.1 | Range 0.0 to 1.0 | Sparsity level up to which to prune. |
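For intuition, the same kind of L1-magnitude unstructured pruning can be expressed with PyTorch's built-in pruning utilities; this illustrates the technique itself rather than pruna's wrapper:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

# Zero out the 10% of weights with the smallest L1 magnitude in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.1)
        prune.remove(module, "weight")  # bake the sparsity into the weight tensor
```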
Quantizers
Quantization reduces the precision of the model’s weights and activations, substantially reducing the memory they require.
half
Converting model parameters to half precision (FP16) reduces memory usage and can accelerate computations on GPUs that support it.
Compatible with: ifw, whisper_s2t, deepcache, c_translate, c_generate, c_whisper, stable_fast, onediff, torch_compile, torch_structured, torch_unstructured.

hqq
Half-Quadratic Quantization (HQQ) leverages fast, robust optimization techniques for on-the-fly quantization, eliminating the need for calibration data.
| Parameter | Default | Options | Description |
|---|---|---|---|
|  | 8 | 2, 4 or 8 | Number of bits to use for quantization. |
|  | 64 | 8, 16, 32, 64 or 128 | Group size for quantization. |
hqq_diffusers
Half-Quadratic Quantization (HQQ) leverages fast, robust optimization techniques for on-the-fly quantization, eliminating the need for calibration data and making it applicable to any model. This algorithm is specifically adapted for diffusers models.
| Parameter | Default | Options | Description |
|---|---|---|---|
|  | 8 | 2, 4 or 8 | Number of bits to use for quantization. |
|  | 64 | 8, 16, 32, 64 or 128 | Group size for quantization. |
|  | torchao_int4 | gemlite, bitblas, torchao_int4 or marlin | Backend to use for quantization. |
awq
Activation-aware Weight Quantization (AWQ) selectively quantizes model weights using a calibration dataset, preserving a small fraction that are important for maintaining performance in LLMs. This minimizes quantization loss, allowing models to operate at 4-bit precision without significantly sacrificing accuracy.
Install with `pip install pruna[autoawq]` or `pip install pruna[full]`.

| Parameter | Default | Options | Description |
|---|---|---|---|
|  | 128 | 8, 16, 32, 64 or 128 | Group size for quantization. |
diffusers_int8
BitsAndBytes offers a simple method to quantize models to 8-bit or 4-bit precision. The 8-bit mode blends outlier fp16 values with int8 non-outliers to mitigate performance degradation, while 4-bit quantization further compresses the model and is often used with QLoRA for fine-tuning. This algorithm is specifically adapted for diffusers models.
| Parameter | Default | Options | Description |
|---|---|---|---|
|  | 8 | 4 or 8 | Number of bits to use for quantization. |
|  | False | True, False | Whether to enable double quantization. |
|  | False | True, False | Whether to enable fp32 cpu offload. |
|  | fp4 | fp4, nf4 | Quantization type to use. |
gptq
GPTQ is a post-training quantization technique that independently quantizes each row of the weight matrix to minimize error. The weights are quantized to int4, stored as int32, and then dequantized on the fly to fp16 during inference, resulting in nearly 4x memory savings and faster performance due to custom kernels that take advantage of the lower precision.
| Parameter | Default | Options | Description |
|---|---|---|---|
|  | 8 | 2, 4 or 8 | Sets the number of bits to use for weight quantization. |
|  | True | True, False | Whether to use exllama for quantization. |
|  | 128 | 64, 128 or 256 | Group size for quantization. |
llm_int8
BitsAndBytes offers a simple method to quantize models to 8-bit or 4-bit precision. The 8-bit mode blends outlier fp16 values with int8 non-outliers to mitigate performance degradation, while 4-bit quantization further compresses the model and is often used with QLoRA for fine-tuning.
| Parameter | Default | Options | Description |
|---|---|---|---|
|  | 8 | 4 or 8 | Sets the number of bits to use for weight quantization. |
|  | False | True, False | Whether to enable double quantization. |
|  | False | True, False | Whether to enable fp32 cpu offload. |
|  | fp4 | fp4, nf4 | Quantization type to use. |
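These options mirror the bitsandbytes quantization config exposed by transformers; as an illustration of what they control (not pruna's internal code):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 8-bit mode: outlier fp16 values are kept alongside int8 non-outliers.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_enable_fp32_cpu_offload=False,
)

# 4-bit mode with the remaining options from the table above:
# bnb_config = BitsAndBytesConfig(
#     load_in_4bit=True,
#     bnb_4bit_quant_type="fp4",        # or "nf4"
#     bnb_4bit_use_double_quant=False,  # double quantization
# )

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",  # example checkpoint
    quantization_config=bnb_config,
)
```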
quanto
With Quanto, models with int8/float8 weights and float8 activations maintain nearly full-precision accuracy. Lower bit quantization is also supported. When only weights are quantized and optimized kernels are available, inference latency remains comparable, and device memory usage is roughly reduced in proportion to the bitwidth ratio.
| Parameter | Default | Options | Description |
|---|---|---|---|
|  | qfloat8 | qint2, qint4, qint8 or qfloat8 | Tensor type to use for quantization. |
|  | True | True, False | Whether to calibrate the model. |
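The underlying behaviour can be reproduced with the optimum-quanto library directly; a minimal sketch of float8 weight and activation quantization with a calibration pass (illustrative, not pruna's internal code):

```python
import torch
from optimum.quanto import Calibration, freeze, qfloat8, quantize
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/opt-125m"  # example checkpoint
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Mark weights and activations for float8 quantization.
quantize(model, weights=qfloat8, activations=qfloat8)

# Calibrate activation ranges with a few representative inputs.
with torch.no_grad(), Calibration():
    inputs = tokenizer("Calibration sample text.", return_tensors="pt")
    model(**inputs)

# Replace the original weights with their quantized counterparts.
freeze(model)
```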
torch_dynamic
This technique converts model weights to lower precision (typically int8) dynamically at runtime, reducing model size and improving inference speed with minimal impact on accuracy and without the need for calibration data.
| Parameter | Default | Options | Description |
|---|---|---|---|
|  | qint8 | quint8 or qint8 | Tensor type to use for quantization. |
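For reference, dynamic int8 quantization in plain PyTorch looks like this, illustrating the technique itself rather than pruna's wrapper:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

# Linear weights are converted to int8; activations are quantized on the fly at runtime.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

out = quantized(torch.randn(4, 128))
```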
torch_static
In static quantization, both weights and activations are pre-converted to lower precision (e.g., int8) using a calibration process on representative data, which typically yields greater efficiency gains but requires additional steps during model preparation.
| Parameter | Default | Options | Description |
|---|---|---|---|
|  | qint8 | quint8 or qint8 | Tensor type to use for weight quantization. |
|  | qint8 | quint8 or qint8 | Tensor type to use for activation quantization. |
|  | per_tensor_affine | per_tensor_symmetric, per_tensor_affine | Quantization scheme to use. |
|  | MinMaxObserver | MinMaxObserver, MovingAverageMinMaxObserver, PerChannelMinMaxObserver, HistogramObserver | Observer to use for quantization. |
torchao_autoquant (Pro)
This algorithm compiles, quantizes, and sparsifies weights, gradients, and activations for inference. It is specifically adapted for Image-Gen models.
| Parameter | Default | Options | Description |
|---|---|---|---|
|  | True | True, False | Whether to compile the model after quantization or not. |
higgs (Pro)
HIGGS is a zero-shot quantization method that uses Hadamard preprocessing to transform weights and then selects MSE-optimal quantization grids.
Compatible with: torch_compile, torch_unstructured.

Install with `pip install pruna_pro[higgs] --extra-index-url https://prunaai.pythonanywhere.com/`, `pip install pruna_pro[higgs-cu11] --extra-index-url https://prunaai.pythonanywhere.com/`, or `pip install pruna_pro[full] --extra-index-url https://prunaai.pythonanywhere.com/ --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/cn/`.

| Parameter | Default | Options | Description |
|---|---|---|---|
|  | 4 | 2, 3 or 4 | The number of bits to use for weight quantization. |
|  | 2 | 1 or 2 | The number of groups to use for weight quantization. |
|  | 256 | 64, 128 or 256 | The size of each group. |
|  | 1024 | 512, 1024 or 2048 | The size of the Hadamard matrix. |
|  | 1 | 1, 2, 4, 8 or 16 | The batch size to use when running inference. It is recommended to match this to your inference batch size to take advantage of faster CUDA kernels. |
Recoverers
Recovery (experimental) restores the performance of a model after compression.
text_to_text_perp (Pro)
This is a general-purpose PERP recoverer for text-to-text models that finetunes norms, heads, and biases, optionally together with Hugging Face's LoRA.
Compatible with: half, quanto, torch_dynamic, llm_int8, torch_compile, x_fast.

| Parameter | Default | Options | Description |
|---|---|---|---|
|  | 8 | 4, 8, 16, 32, 64 or 128 | Rank of the LoRA layers. |
|  | 2.0 | 0.5, 1.0 or 2.0 | Alpha/rank ratio of the LoRA layers. |
|  | None | None, all-linear | Target modules for the LoRA layers. |
|  | 1 | Range 1 to 4096 | Batch size for finetuning. |
|  | 1 | Range 1 to 1024 | Number of gradient accumulation steps for finetuning. |
|  | 1.0 | Range 0.0 to 4096.0 | Number of epochs for finetuning. |
|  | 0.0002 | Range 0.0 to 1.0 | Learning rate for finetuning. |
|  | none | none, wandb, tensorboard | Where to report the finetuning results. |
|  | AdamW8bit | AdamW, AdamW8bit, PagedAdamW8bit | Which optimizer to use for finetuning. |
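As a sketch, this recoverer could be combined with llm_int8 quantization, assuming pruna_pro mirrors the pruna interface; the prefixed hyperparameter keys are assumptions based on the table above, and finetuning data still has to be attached to the config as described in the pruna documentation:

```python
from pruna_pro import SmashConfig, smash
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")  # example checkpoint

smash_config = SmashConfig()
smash_config["quantizer"] = "llm_int8"
smash_config["recoverer"] = "text_to_text_perp"
# Assumed keys mirroring the parameters above.
smash_config["text_to_text_perp_lora_rank"] = 8
smash_config["text_to_text_perp_learning_rate"] = 2e-4

smashed_model = smash(model=model, smash_config=smash_config)
```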
text_to_text_inplace_perp (Pro)
This is the same as text_to_text_perp, but without the LoRA layers, which add extra computation and thus slow down inference of the final model.
Compatible with: half, quanto, torch_dynamic, llm_int8, torch_compile, x_fast.

| Parameter | Default | Options | Description |
|---|---|---|---|
|  | 1 | Range 1 to 4096 | Batch size for finetuning. |
|  | 1 | Range 1 to 1024 | Number of gradient accumulation steps for finetuning. |
|  | 1.0 | Range 0.0 to 4096.0 | Number of epochs for finetuning. |
|  | 0.0002 | Range 0.0 to 1.0 | Learning rate for finetuning. |
|  | none | none, wandb, tensorboard | Where to report the finetuning results. |
|  | AdamW8bit | AdamW, AdamW8bit, PagedAdamW8bit | Which optimizer to use for finetuning. |
text_to_image_perp (Pro)
This is a general-purpose PERP recoverer for text-to-image models that finetunes norms, heads, and biases, optionally together with Hugging Face's LoRA.
Compatible with: quanto, torch_dynamic, diffusers_int8, deepcache, flux_caching, torch_compile, x_fast.

| Parameter | Default | Options | Description |
|---|---|---|---|
|  | 4 | 4, 8, 16, 32, 64 or 128 | Rank of the LoRA layers. |
|  | 1.0 | 0.5, 1.0 or 2.0 | Alpha/rank ratio of the LoRA layers. |
|  | 0 | Range 0 to 4096 | Batch size for finetuning. |
|  | 1 | Range 1 to 1024 | Number of gradient accumulation steps for finetuning. |
|  | 1.0 | Range 0.0 to 4096.0 | Number of epochs for finetuning. |
|  | 1e-05 | Range 0.0 to 1.0 | Learning rate for finetuning. |
|  | True | True, False | Whether to use CPU offloading for finetuning. |
|  | AdamW8bit | AdamW8bit, AdamW, Adam | Which optimizer to use for finetuning. |
text_to_image_inplace_perp (Pro)
This is the same as text_to_image_perp, but without the LoRA layers, which add extra computation and thus slow down inference of the final model.
Compatible with: quanto, torch_dynamic, diffusers_int8, deepcache, flux_caching, torch_compile, x_fast.

| Parameter | Default | Options | Description |
|---|---|---|---|
|  | 0 | Range 0 to 4096 | Batch size for finetuning. |
|  | 1 | Range 1 to 1024 | Number of gradient accumulation steps for finetuning. |
|  | 1.0 | Range 0.0 to 4096.0 | Number of epochs for finetuning. |
|  | 1e-05 | Range 0.0 to 1.0 | Learning rate for finetuning. |
|  | True | True, False | Whether to use CPU offloading for finetuning. |
|  | AdamW8bit | AdamW8bit, AdamW, Adam | Which optimizer to use for finetuning. |