Deploy Pruna models

Pruna offers deployment integrations with the following tools to supercharge your workflows.

Pruna is the bridge to the broader AI ecosystem, making sure your optimized models run smoothly across popular deployment and inference platforms. Whether you’re running on Docker, deploying with Triton Inference Server, building in ComfyUI, or serving with vLLM, Pruna fits right in.

Docker

Deploy Pruna in Docker containers for reproducible, GPU-accelerated environments.

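The container itself typically needs little more than a CUDA-enabled Python image with `pip install pruna`; what runs inside is an ordinary Python entrypoint. Below is a minimal sketch of such an entrypoint, assuming pruna's `smash`/`SmashConfig` API with a diffusers pipeline; the model ID and the `deepcache` cacher setting are illustrative assumptions, not requirements.

```python
# entrypoint.py -- a minimal inference script to bake into a Docker image.
# Assumes the image ran `pip install pruna diffusers`; the model ID and
# the "deepcache" cacher are illustrative choices, not the only options.
import torch
from diffusers import StableDiffusionPipeline
from pruna import SmashConfig, smash

# Load a base pipeline onto the GPU exposed to the container.
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Optimize ("smash") the pipeline with Pruna before serving it.
smash_config = SmashConfig()
smash_config["cacher"] = "deepcache"
smashed_pipe = smash(model=pipe, smash_config=smash_config)

# Run one inference as a container smoke test.
image = smashed_pipe("a photo of an astronaut riding a horse").images[0]
image.save("sample.png")
```
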
ComfyUI

Supercharge your Stable Diffusion and Flux workflows with specialized nodes.

NVIDIA Triton Server

Run AI deployments at production scale with NVIDIA's scalable inference server.

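Once a model repository is in place, clients reach Triton over HTTP or gRPC. Below is a client-side sketch using the `tritonclient` package; the model name `pruna_model` and the tensor names and shapes are hypothetical placeholders for whatever your repository declares.

```python
# Query a model served by Triton Inference Server over HTTP.
# Requires `pip install tritonclient[http]`; the model and tensor names
# below are hypothetical placeholders for your own model repository.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a request tensor matching the model's declared input.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("INPUT__0", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)

# Run inference and read back the named output tensor.
result = client.infer(model_name="pruna_model", inputs=[infer_input])
print(result.as_numpy("OUTPUT__0").shape)
```
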
vLLM

High-performance LLM serving with model-level optimizations.

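vLLM exposes both an OpenAI-compatible server and an offline Python API. Below is a minimal offline sketch with vLLM's `LLM`/`SamplingParams` interface; pointing it at a locally exported, Pruna-optimized checkpoint is an assumption about your setup.

```python
# Offline batch generation with vLLM (`pip install vllm`).
# The model path is a hypothetical placeholder; substitute your own
# checkpoint, e.g. one exported after Pruna optimization.
from vllm import LLM, SamplingParams

llm = LLM(model="/models/my-optimized-llm")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

prompts = ["Explain KV-cache reuse in one sentence."]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```
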
AWS AMI

Amazon Machine Images (AMIs) for running optimized models on AWS.

Replicate

An inference platform for running machine learning models in production.

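Models published on Replicate are invoked through its Python client. Below is a minimal sketch using `replicate.run`; the model identifier and input schema are hypothetical placeholders for your own deployment.

```python
# Call a model hosted on Replicate (`pip install replicate`).
# Requires the REPLICATE_API_TOKEN environment variable; the model
# identifier and input keys are hypothetical placeholders.
import replicate

output = replicate.run(
    "your-org/your-pruna-model",
    input={"prompt": "a photo of an astronaut riding a horse"},
)
print(output)
```
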
Koyeb

A serverless platform for deploying and scaling machine learning models in production.

Lightning AI LitServe

A flexible serving engine, built on FastAPI, for self-hosting AI models.

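LitServe wraps a model in a small API class with setup, decode, predict, and encode hooks. Below is a minimal sketch using LitServe's `LitAPI`/`LitServer` interface; the toy model in `setup` is a stand-in for where you would load a Pruna-optimized model.

```python
# Self-host a model with LitServe (`pip install litserve`).
# The lambda "model" is a stand-in; in practice you would load a
# Pruna-optimized model inside setup().
import litserve as ls

class InferenceAPI(ls.LitAPI):
    def setup(self, device):
        self.model = lambda x: x ** 2  # placeholder model

    def decode_request(self, request):
        return request["input"]

    def predict(self, x):
        return self.model(x)

    def encode_response(self, output):
        return {"output": output}

if __name__ == "__main__":
    server = ls.LitServer(InferenceAPI(), accelerator="auto")
    server.run(port=8000)
```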