Use P-Image-Upscale for General Quality Improvements and Final Polish
This notebook shows how to use P-Image-Upscale for general quality improvements on generated or edited images. In practice, one of the best uses is as the final polish pass after generation or editing, once the composition already works.
We will cover three common situations:
you already have an image you like and want better overall quality
you generated an image with p-image and want a cleaner, more finished final result
you edited an image with p-image-edit and want a stronger final version to ship
Hero result
Before any setup, here is the kind of outcome we are targeting.

Starting image

P-Image-Upscale
In this kind of example, the value is not a new composition. The value is:
cleaner edges and stronger subject separation
richer micro-detail and more realistic textures
a more polished final image that looks closer to something you would actually publish
General quality improvements vs. final polish
Use p-image-upscale when the image already works and you want better overall quality: sharper detail, cleaner edges, and a more polished export.
Final polish is the highest-value case of that same idea: generation or editing is already finished, and you want the result to feel cleaner, sharper, and more publishable without redesigning the content.
If the scene still needs meaningful content changes, use p-image-edit first. Upscaling is strongest after those decisions are already done.
Step 1: Setup
Install dependencies, configure model clients, and create an artifact directory.
[1]:
%%capture
%pip install -q replicate pillow requests
[2]:
import base64
import os
from pathlib import Path

import requests
from IPython.display import HTML, display
from replicate.client import Client

replicate_token = os.environ.get("REPLICATE_API_TOKEN")
if not replicate_token:
    replicate_token = input("Replicate API token (r8_...): ").strip()
replicate = Client(api_token=replicate_token)

P_IMAGE_MODEL = "prunaai/p-image"
P_IMAGE_EDIT_MODEL = "prunaai/p-image-edit"
P_IMAGE_UPSCALE_MODEL = "prunaai/p-image-upscale:7135ff723ecea89c0f67afcd51e4904904586e351093465bdc7beed45941b3e0"

UPSCALE_INPUT = {
    "upscale_mode": "target",
    "target": 8,
    "enhance_details": True,
    "enhance_realism": True,
    "output_format": "jpg",
    "output_quality": 50,
}

REPO_ROOT = Path.cwd().parents[2]
ARTIFACT_DIR = REPO_ROOT / ".context/p-image-upscale/notebook"
ARTIFACT_DIR.mkdir(parents=True, exist_ok=True)
print(f"Notebook assets will be written to: {ARTIFACT_DIR}")
Notebook assets will be written to: /Users/davidberenstein/conductor/workspaces/prunatree/dallas/.context/p-image-upscale/notebook
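The notebook's `UPSCALE_INPUT` above uses an aggressive 8x target with `output_quality` at 50. If you want a lighter pass, a variant configuration might look like the sketch below. Note that `UPSCALE_INPUT_LIGHT` is a hypothetical name, and whether the endpoint accepts a smaller `target` value such as 2 is an assumption to verify against the model's schema.

```python
# Hypothetical lighter configuration: smaller upscale factor and a higher JPEG
# quality for exports where file size matters less than fidelity. Field names
# mirror UPSCALE_INPUT above; accepted value ranges are assumptions.
UPSCALE_INPUT_LIGHT = {
    "upscale_mode": "target",
    "target": 2,
    "enhance_details": True,
    "enhance_realism": False,
    "output_format": "jpg",
    "output_quality": 90,
}
```

You could then pass this dict in place of `UPSCALE_INPUT` when calling `replicate.run`, keeping the rest of the workflow unchanged.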
[3]:
def read_output(output):
    if hasattr(output, "url") and hasattr(output, "read"):
        return str(output.url), output.read()
    if hasattr(output, "read"):
        return str(getattr(output, "url", "")), output.read()
    if isinstance(output, list) and output:
        url = str(output[0])
    elif isinstance(output, str):
        url = output
    else:
        raise TypeError(f"Unexpected output type: {type(output)!r}")
    response = requests.get(url, timeout=60)
    response.raise_for_status()
    return url, response.content

def save_artifact(name, image_bytes):
    path = ARTIFACT_DIR / name
    path.write_bytes(image_bytes)
    return path

def run_upscale(image_input, save_as):
    if isinstance(image_input, Path):
        with image_input.open("rb") as handle:
            output = replicate.run(P_IMAGE_UPSCALE_MODEL, input={"image": handle, **UPSCALE_INPUT})
    else:
        output = replicate.run(P_IMAGE_UPSCALE_MODEL, input={"image": image_input, **UPSCALE_INPUT})
    url, image_bytes = read_output(output)
    path = save_artifact(save_as, image_bytes)
    return url, image_bytes, path

def display_slider(left_bytes, right_bytes, left_label, right_label, slider_id):
    left = base64.b64encode(left_bytes).decode("ascii")
    right = base64.b64encode(right_bytes).decode("ascii")
    html = f"""
    <div id='{slider_id}' class='img-comp-slider' data-initial-handle='50' data-initial-zoom='1'>
      <div class='img-comp-wrap'>
        <img class='img-comp-base' src='data:image/jpeg;base64,{left}' alt='{left_label}' />
        <div class='img-comp-overlay-wrap'>
          <img src='data:image/jpeg;base64,{right}' alt='{right_label}' />
        </div>
        <div class='img-comp-handle'></div>
        <span class='img-comp-hint'><-> Drag</span>
        <span class='img-comp-label img-comp-label-left'>{left_label}</span>
        <span class='img-comp-label img-comp-label-right'>{right_label}</span>
      </div>
    </div>
    """
    display(HTML(html))
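The `display_slider` helper embeds both images directly as base64 data URIs, so the comparison still renders in an exported notebook with no external files. A minimal, self-contained sketch of that encoding step (with stand-in bytes instead of a real JPEG):

```python
import base64

# Any image payload works; here a tiny stand-in instead of real JPEG data.
image_bytes = b"\xff\xd8fake-jpeg-payload"
encoded = base64.b64encode(image_bytes).decode("ascii")
data_uri = f"data:image/jpeg;base64,{encoded}"

# Round trip: decoding the URI payload recovers the original bytes exactly.
assert base64.b64decode(encoded) == image_bytes
print(data_uri[:40])
```

The trade-off is notebook size: each slider stores two full base64 copies, which is roughly 4/3 of the raw image bytes per copy.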
Step 2: You already have an image you like and want better overall quality
Here we start from an existing city/lifestyle image and run only p-image-upscale.
[4]:
step2_existing_url = "https://huggingface.co/datasets/pruna-test/documentation-media/resolve/main/pruna-endpoints/retail_image_generation_1.webp?download=true"
step2_existing_bytes = requests.get(step2_existing_url, timeout=60).content
step2_existing_saved = save_artifact("step2_existing_city_starting.jpg", step2_existing_bytes)
step2_upscaled_url, step2_upscaled_bytes, step2_upscaled_path = run_upscale(
    step2_existing_url,
    save_as="step2_existing_city_upscaled.jpg",
)
print("Starting image:", step2_existing_saved)
print("Upscaled image:", step2_upscaled_path)
print("Upscaled URL:", step2_upscaled_url)
display_slider(
    step2_existing_bytes,
    step2_upscaled_bytes,
    "starting image",
    "p-image-upscale",
    "slider-step2",
)
Starting image: /Users/davidberenstein/conductor/workspaces/prunatree/dallas/.context/p-image-upscale/notebook/step2_existing_city_starting.jpg
Upscaled image: /Users/davidberenstein/conductor/workspaces/prunatree/dallas/.context/p-image-upscale/notebook/step2_existing_city_upscaled.jpg
Upscaled URL: https://replicate.delivery/xezq/NeOH3k2sFW1SHaeFeFfOAvZXHqJfvy1cSKa3jbFryQYaCJzyC/upscaled_image.jpg
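Beyond eyeballing the slider, a quick way to confirm the upscale factor is to compare pixel dimensions with Pillow (installed in Step 1). The `describe_scale` helper below is a sketch of my own, demonstrated here on synthetic images; in the notebook you would feed it the before/after bytes from any step.

```python
import io

from PIL import Image

def describe_scale(before_bytes, after_bytes):
    """Return (before_size, after_size, width_scale) for two image payloads."""
    before = Image.open(io.BytesIO(before_bytes)).size
    after = Image.open(io.BytesIO(after_bytes)).size
    return before, after, after[0] / before[0]

# Synthetic example: a 100x50 image versus an 8x-upscaled counterpart.
small = io.BytesIO(); Image.new("RGB", (100, 50)).save(small, format="JPEG")
large = io.BytesIO(); Image.new("RGB", (800, 400)).save(large, format="JPEG")
print(describe_scale(small.getvalue(), large.getvalue()))
# → ((100, 50), (800, 400), 8.0)
```

In this notebook, `describe_scale(step2_existing_bytes, step2_upscaled_bytes)` would report the actual factor the endpoint produced.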
Step 3: You generated with p-image and want a cleaner, more finished final result
Generate a cityscape first, then apply p-image-upscale as a final polish pass.
[5]:
step3_prompt = (
    "Cinematic night cityscape, wet street reflections, neon signs, light fog, "
    "distant traffic bokeh, realistic photography, high detail"
)
step3_output = replicate.run(
    P_IMAGE_MODEL,
    input={"prompt": step3_prompt, "aspect_ratio": "16:9", "seed": 42},
)
step3_generated_url, step3_generated_bytes = read_output(step3_output)
step3_generated_path = save_artifact("step3_p_image_city_generated.jpg", step3_generated_bytes)
step3_upscaled_url, step3_upscaled_bytes, step3_upscaled_path = run_upscale(
    step3_generated_url,
    save_as="step3_p_image_city_upscaled.jpg",
)
print("Generated image:", step3_generated_path)
print("Upscaled image:", step3_upscaled_path)
print("Upscaled URL:", step3_upscaled_url)
display_slider(
    step3_generated_bytes,
    step3_upscaled_bytes,
    "p-image generated",
    "p-image-upscale",
    "slider-step3",
)
Generated image: /Users/davidberenstein/conductor/workspaces/prunatree/dallas/.context/p-image-upscale/notebook/step3_p_image_city_generated.jpg
Upscaled image: /Users/davidberenstein/conductor/workspaces/prunatree/dallas/.context/p-image-upscale/notebook/step3_p_image_city_upscaled.jpg
Upscaled URL: https://replicate.delivery/xezq/jrOz3hkjJvL7AxGxD6EFSnNU6hmwLwgLk9edkgLYFSWMkMLLA/upscaled_image.jpg
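The interactive sliders only render in a live notebook session. If you want a static before/after artifact to share, you can paste the two results into one side-by-side image. This `side_by_side` helper is a sketch of my own using Pillow; the right image is resized only so both halves share a height.

```python
import io

from PIL import Image

def side_by_side(left_bytes, right_bytes):
    """Paste two images next to each other, resizing the right to match heights."""
    left = Image.open(io.BytesIO(left_bytes)).convert("RGB")
    right = Image.open(io.BytesIO(right_bytes)).convert("RGB")
    if right.height != left.height:
        new_width = round(right.width * left.height / right.height)
        right = right.resize((new_width, left.height))
    combined = Image.new("RGB", (left.width + right.width, left.height))
    combined.paste(left, (0, 0))
    combined.paste(right, (left.width, 0))
    return combined

# Synthetic demo: a 160x90 "generated" frame and an 8x "upscaled" version.
a = io.BytesIO(); Image.new("RGB", (160, 90), "navy").save(a, format="JPEG")
b = io.BytesIO(); Image.new("RGB", (1280, 720), "black").save(b, format="JPEG")
print(side_by_side(a.getvalue(), b.getvalue()).size)
# → (320, 90)
```

In this notebook, `side_by_side(step3_generated_bytes, step3_upscaled_bytes)` would produce a single shareable comparison you could save via `save_artifact`.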
Step 4: You edited with p-image-edit and want a stronger final version to ship
Reuse the Step 3 cityscape and add a human face closeup in the foreground with p-image-edit. Then upscale the edited result.
[6]:
step4_edit_prompt = (
    "Add a realistic human face closeup in the foreground, looking toward camera, "
    "while preserving the city lights, perspective, and nighttime atmosphere"
)
step4_edited_output = replicate.run(
    P_IMAGE_EDIT_MODEL,
    input={
        "images": [step3_generated_url],
        "prompt": step4_edit_prompt,
        "aspect_ratio": "16:9",
        "seed": 42,
    },
)
step4_edited_url, step4_edited_bytes = read_output(step4_edited_output)
step4_edited_path = save_artifact("step4_p_image_edit_city_face.jpg", step4_edited_bytes)
step4_upscaled_url, step4_upscaled_bytes, step4_upscaled_path = run_upscale(
    step4_edited_url,
    save_as="step4_p_image_edit_city_face_upscaled.jpg",
)
print("Edited image:", step4_edited_path)
print("Edited + upscaled image:", step4_upscaled_path)
print("Edited + upscaled URL:", step4_upscaled_url)
display_slider(
    step3_generated_bytes,
    step4_edited_bytes,
    "original cityscape",
    "cityscape + face closeup",
    "slider-step4-edit",
)
display_slider(
    step4_edited_bytes,
    step4_upscaled_bytes,
    "edited",
    "edited + upscaled",
    "slider-step4-upscale",
)
Edited image: /Users/davidberenstein/conductor/workspaces/prunatree/dallas/.context/p-image-upscale/notebook/step4_p_image_edit_city_face.jpg
Edited + upscaled image: /Users/davidberenstein/conductor/workspaces/prunatree/dallas/.context/p-image-upscale/notebook/step4_p_image_edit_city_face_upscaled.jpg
Edited + upscaled URL: https://replicate.delivery/xezq/vhJQBpKrjA5EGZdY6i53mPfeIeAvALSeExnV7SyZYAt1hkZZB/upscaled_image.jpg
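`UPSCALE_INPUT` sets `output_quality` to 50, which keeps file sizes manageable at 8x resolutions. If you are tuning that value for shipping, a local JPEG sweep with Pillow gives a feel for the size/quality trade-off. This is only an analogy: the endpoint's quality scale may not map exactly onto Pillow's, and the synthetic gradient image below compresses differently than a real photo.

```python
import io

from PIL import Image

# Build a synthetic image with enough detail that JPEG quality actually matters.
img = Image.new("RGB", (512, 512))
img.putdata([(x % 256, y % 256, (x * y) % 256) for y in range(512) for x in range(512)])

sizes = {}
for quality in (30, 50, 80, 95):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    sizes[quality] = buf.tell()  # bytes written at this quality setting

# Higher quality settings cost more bytes on this image.
assert sizes[30] < sizes[95]
print(sizes)
```

Running the same sweep on your own upscaled bytes (re-encoding them locally) is a quick way to decide whether 50 is acceptable for your distribution channel.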
Step 5: Reuse this pattern with other models or your own images
Use the same run_upscale helper with remote image URLs from any generation/editing model.
url, bytes_, path = run_upscale(
    "https://example.com/image.jpg",
    save_as="custom_url_upscaled.jpg",
)
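`run_upscale` already branches on `Path` versus URL string. If you also want to pass raw bytes (for example, output from another model that you only hold in memory), a small normalizer can sit in front of it. The `as_replicate_input` name is hypothetical, and the assumption that `replicate.run` accepts an in-memory file-like object the same way it accepts an open file handle should be verified against the client's docs.

```python
import io
from pathlib import Path

def as_replicate_input(image_input):
    """Normalize a Path, bytes, or URL string into a value for the image input."""
    if isinstance(image_input, Path):
        return image_input.open("rb")   # file handle, uploaded by the client
    if isinstance(image_input, (bytes, bytearray)):
        return io.BytesIO(image_input)  # in-memory file-like object (assumed accepted)
    if isinstance(image_input, str):
        return image_input              # remote URL, passed through as-is
    raise TypeError(f"Unsupported image input: {type(image_input)!r}")

print(type(as_replicate_input(b"raw-jpeg-bytes")).__name__)
# → BytesIO
```

With a helper like this, `run_upscale` could collapse its two branches into a single `replicate.run` call.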
Use p-image-upscale as the final pass once composition is correct and you want a cleaner, sharper, more publishable result.