Open In Colab

How to Improve P-Video with a First Frame from P-Image

Ever had a static image you wished could come to life? This guide shows you how to go from a simple idea to a short video in about a minute. No video editing skills required.

What we’ll do:

  1. Let an LLM write the prompts — You describe your vision; GPT-4o-mini generates the image and video prompts for you.

  2. Create a seed image — P-Image turns the prompt into a beautiful still image.

  3. Optionally refine it — P-Image-Edit lets you tweak the image before animating.

  4. Set it in motion — P-Video animates your image with the motion you described.

Run the cells below and you’ll see each output appear as we go. By the end, you’ll have a short video ready to download or share.

Models used: p-image, p-image-edit, p-video

Example: a serene landscape like the one below can become a short video with mist drifting and water rippling—all from a single prompt.

Example landscape that can be animated

Setup

First, let’s install the packages we need and connect to Replicate and OpenAI. You’ll need API tokens for both—get them from Replicate and OpenAI.

[1]:
%pip install replicate openai requests
[2]:
import os
import tempfile
import requests
from IPython.display import Image, Video, display
from replicate.client import Client
from openai import OpenAI
[3]:
token = os.environ.get("REPLICATE_API_TOKEN")
if not token:
    token = input("Replicate API token (r8_...): ").strip()
replicate = Client(api_token=token)
[4]:
openai_token = os.environ.get("OPENAI_API_KEY")
if not openai_token:
    openai_token = input("OpenAI API key (sk-...): ").strip()
openai_client = OpenAI(api_key=openai_token)
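A mistyped key only fails at the first API call, with a confusing 401. A tiny sanity check catches the common case earlier (a sketch; `looks_like_token` is our own helper, and the prefixes match the hints in the input prompts above):

```python
def looks_like_token(token: str, prefix: str) -> bool:
    """Cheap sanity check: right prefix and a plausible length."""
    return token.startswith(prefix) and len(token) > len(prefix) + 10

# Replicate tokens start with "r8_", OpenAI keys with "sk-", e.g.:
# assert looks_like_token(token, "r8_"), "That doesn't look like a Replicate token"
# assert looks_like_token(openai_token, "sk-"), "That doesn't look like an OpenAI key"
```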

Step 1: Let the LLM write your prompts

Instead of crafting prompts yourself, describe your idea in plain language and let the LLM do the writing. It will generate three prompts for you:

  • Image prompt — What the still image should look like (e.g., a serene mountain lake at sunrise).

  • Edit prompt (optional) — Refinements like “add more mist” or “warmer lighting.”

  • Video prompt — How the scene should move (e.g., “gentle ripples on the water, mist drifting slowly”).

Run the cell below. You’ll see the generated prompts printed—feel free to tweak the concept variable and run again to try different ideas.

[5]:
concept = "A serene mountain lake at sunrise with mist rising from the water"

response = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": "You generate prompts for AI image and video generation. Return valid JSON with keys: image_prompt, edit_prompt (optional, can be null), video_prompt. image_prompt: detailed text-to-image prompt. edit_prompt: optional refinement for p-image-edit, or null. video_prompt: motion/camera description for image-to-video.",
        },
        {
            "role": "user",
            "content": f"Create prompts for this concept: {concept}",
        },
    ],
    # Ask for strict JSON so json.loads below can't trip on markdown fences.
    response_format={"type": "json_object"},
)
import json

prompts = json.loads(response.choices[0].message.content)
image_prompt = prompts["image_prompt"]
edit_prompt = prompts.get("edit_prompt")
video_prompt = prompts["video_prompt"]
print("Image prompt:", image_prompt)
print("Edit prompt:", edit_prompt)
print("Video prompt:", video_prompt)
Image prompt: A serene mountain lake at sunrise, surrounded by majestic mountains with snow-capped peaks. The soft morning light casts a warm golden glow over the landscape. Mist rises gently from the calm water, creating a mystical atmosphere. Reflections of the mountains and sky can be seen in the lake’s surface. Lush green trees frame the scene, and a few wildflowers bloom at the water's edge.
Edit prompt: None
Video prompt: A slow-motion aerial drone shot moving over the mountain lake at sunrise, capturing the rising mist and the tranquil surface of the water, followed by a gradual zoom towards the shore where wildflowers bloom, with gentle ripples forming in the water as the scene transitions from dawn to daylight.
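Calling `json.loads` on the raw reply works here, but models occasionally wrap JSON in a markdown fence even when asked not to. A small defensive parser makes the step more robust (a sketch; `parse_prompt_json` is our own helper, not part of either SDK):

```python
import json
import re

def parse_prompt_json(raw: str) -> dict:
    """Parse the LLM reply, tolerating an optional ```json ... ``` fence."""
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", raw, re.DOTALL)
    if match:
        raw = match.group(1)
    prompts = json.loads(raw)
    # Fail fast if the keys the later cells rely on are missing.
    for key in ("image_prompt", "video_prompt"):
        if key not in prompts:
            raise KeyError(f"LLM reply is missing {key!r}")
    return prompts
```

You can drop this in place of the bare `json.loads(...)` call above if a run ever fails on a fenced reply.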

Step 2: Generate your seed image

With the image prompt in hand, we’ll generate the seed image with P-Image. In a few seconds, you’ll get a high-quality still image—the first frame of your video that we’ll animate next.

When you run the cell, the image will appear below.

[6]:
output = replicate.run(
    "prunaai/p-image",
    input={"prompt": image_prompt, "aspect_ratio": "16:9"},
)
# Normalize the output: p-image may return a URL string, a list, or a file object.
if isinstance(output, str):
    image_url = output
elif isinstance(output, list):
    image_url = output[0]
else:
    image_url = str(output)
print("Image URL:", image_url)
display(Image(url=image_url))
Image URL: https://replicate.delivery/xezq/JpXb72Bbs1KnIdQXMUJCUDpfCbRvDptsxGpAkMZ8COno66ILA/output_201268.jpeg

Step 3 (optional): Refine the image

As we saw in Step 1, we may have an edit prompt. If so, we’ll refine the image here with P-Image-Edit before animating—great for small adjustments like adding atmosphere, changing lighting, or fine-tuning details. If there’s no edit prompt, we skip straight to the video step.

The refined image will appear below when you run the cell.

[7]:
if edit_prompt:
    edited_output = replicate.run(
        "prunaai/p-image-edit",
        input={"images": [image_url], "prompt": edit_prompt},
    )
    image_url = edited_output if isinstance(edited_output, str) else edited_output[0]
    print("Edited image URL:", image_url)
    display(Image(url=image_url))
else:
    print("Skipping edit step.")
Skipping edit step.
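Edit prompts also compose: you can chain several small refinements instead of one big one, feeding each result into the next call. A sketch of that loop (the `run` parameter is injected so the example runs without network access; with the real client you would pass `replicate.run`):

```python
def apply_edits(run, image_url, edit_prompts):
    """Apply a sequence of edit prompts, feeding each result into the next.

    `run` is any callable with replicate.run's (model, input=...) shape,
    so the loop can be exercised without touching the network.
    """
    for prompt in edit_prompts:
        output = run(
            "prunaai/p-image-edit",
            input={"images": [image_url], "prompt": prompt},
        )
        image_url = output if isinstance(output, str) else output[0]
    return image_url

# Example with a stub standing in for the real client:
def fake_run(model, input):
    return input["images"][0] + "+edited"

final = apply_edits(fake_run, "https://example.com/seed.png",
                    ["add more mist", "warmer lighting"])
```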

Step 4: Animate with P-Video

With our final image ready, we’ll now animate it with P-Video. It takes your image and the motion prompt, then generates a short video (about 5 seconds by default). The result will appear below—you can play it right in the notebook or download it from the URL.

Run the cell and watch your still image come to life.

[ ]:
def _extract_url(obj):
    if isinstance(obj, str):
        return obj
    if hasattr(obj, "url"):
        return obj.url
    if hasattr(obj, "content_url"):
        return obj.content_url
    if isinstance(obj, list) and obj:
        return _extract_url(obj[0])
    if isinstance(obj, dict):
        return obj.get("video") or obj.get("output") or (list(obj.values())[0] if obj else None)
    return str(obj)

video_output = replicate.run(
    "prunaai/p-video",
    input={
        "image": image_url,
        "prompt": video_prompt,
        "duration": 5,
        "aspect_ratio": "16:9",
    },
)
video_url = _extract_url(video_output)
r = requests.get(video_url)
r.raise_for_status()
# tempfile.mktemp is deprecated (race-prone); NamedTemporaryFile is the safe equivalent.
with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as f:
    f.write(r.content)
    vpath = f.name
display(Video(vpath, embed=True))
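A temp-file path is fine for viewing in the notebook, but if you want to keep the video under a readable name, you can derive one from the delivery URL (a small sketch using only the standard library):

```python
from urllib.parse import urlparse
from pathlib import PurePosixPath

def filename_from_url(url: str, default: str = "output.mp4") -> str:
    """Derive a readable local filename from a delivery URL."""
    name = PurePosixPath(urlparse(url).path).name
    return name if name else default

# Use it in place of the temp path when saving, e.g.:
# with open(filename_from_url(video_url), "wb") as f:
#     f.write(r.content)
```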

Conclusion

Congratulations! As we’ve seen, we can use P-Image to generate a beautiful still image from a prompt, and P-Video to animate it with a motion prompt.

Grounding the video in a still image gives you a stable first frame, and lets you iterate on the image and video prompts separately until you get the best possible result.

You can check out other workflows or sign up for our API and get started at https://dashboard.pruna.ai/login