
Qwen-Image-Edit-Plus

This guide walks you through deploying the Pruna-optimized Qwen-Image-Edit-Plus model.

What are the prerequisites?

To run the model, you’ll need:

  • HuggingFace token (HF_TOKEN): Enables you to download the optimized model.

  • Pruna token (PRUNA_TOKEN): Enables you to load and run the model. (A short sketch for reading both tokens from your environment follows this list.)

  • An environment with pruna_pro installed: pip install pruna_pro==0.2.9
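
Both tokens are plain strings that you pass to the loader. A common pattern, sketched below, is to export them as environment variables and read them in Python rather than hard-coding them; the variable names HF_TOKEN and PRUNA_TOKEN are a convention here, not something pruna_pro enforces.

import os

# Assumes you have exported both tokens beforehand, e.g.
#   export HF_TOKEN=hf_xxx
#   export PRUNA_TOKEN=pruna_xxx
hf_token = os.environ["HF_TOKEN"]        # used to download the optimized model
pruna_token = os.environ["PRUNA_TOKEN"]  # used to load and run the model

These variables can then be passed to PrunaProModel.from_pretrained, as shown in the loading section below.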


What inputs does the model support?

  • prompt: Text instruction describing the edit to apply to the input image(s).

  • image: List of input images in PIL format to be edited.

  • seed: Random seed for reproducible results. Leave blank for a random seed.

  • go_fast: Chooses between a very fast preset and a more conservative, higher-quality one. (An example of assembling these inputs follows this list.)
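
To make the expected types concrete, here is a small sketch of how these inputs could be assembled before calling the pipeline; load_image from diffusers returns PIL images, and the file names and prompt are placeholders:

from diffusers.utils import load_image

prompt = "Replace the background with a beach at sunset"  # placeholder edit instruction

image = [                          # a list of PIL images, even for a single input
    load_image("input_1.png"),     # placeholder path or URL
    load_image("input_2.png"),     # placeholder path or URL
]

seed = 42          # any integer for reproducible results, or None for a random seed
go_fast = True     # True = fast preset, False = conservative preset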


How do I load the model?

You can initialize the Pruna-optimized Qwen-Image-Edit-Plus model directly with PrunaProModel.from_pretrained:

from pruna_pro import PrunaProModel

pipe = PrunaProModel.from_pretrained(
    "PrunaAI/Qwen-Image-Edit-Plus",
    token="PRUNA_TOKEN",    # your Pruna token
    hf_token="HF_TOKEN",    # your HuggingFace token
)
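
The returned object is used like a diffusers-style image-editing pipeline in the full example below; assuming that usage, a single edit call looks roughly like this (the input file, prompt, and output name are placeholders):

from diffusers.utils import load_image

source = load_image("input.png")            # placeholder input image
edited = pipe(
    prompt="Make the sky overcast",         # placeholder edit instruction
    image=[source],                         # the model expects a list of PIL images
    true_cfg_scale=1.0,
    num_inference_steps=8,                  # fast preset; use 16 for the conservative one
).images[0]
edited.save("edited.png")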

What does a minimal working example look like?

Below is a complete script that sets up the pipeline and generates an edited image from a prompt and a list of input images.

import torch
from diffusers.utils import load_image
from pruna_pro import PrunaProModel


class Predictor:
    def setup(self):
        import logging

        logging.basicConfig(level=logging.INFO)
        self.pipe = PrunaProModel.from_pretrained(
            "PrunaAI/Qwen-Image-Edit-Plus",
            token="PRUNA_TOKEN",    # your Pruna token
            hf_token="HF_TOKEN",    # your HuggingFace token
            # verbose=True,         # if supported by your pruna_pro version
            # log_level="info",     # if supported by your pruna_pro version
        )

    def predict(self, prompt, image, go_fast=True, seed=None):
        # A fixed seed makes the edit reproducible; otherwise sampling is random.
        generator = (
            torch.Generator("cuda").manual_seed(seed) if seed is not None else None
        )
        # go_fast trades a little quality for speed by halving the number of steps.
        num_inference_steps = 8 if go_fast else 16
        with torch.inference_mode():
            result = self.pipe(
                prompt=prompt,
                image=image,
                true_cfg_scale=1.0,
                num_inference_steps=num_inference_steps,
                generator=generator,
            ).images[0]
        result.save("output.png")
        return "output.png"


if __name__ == "__main__":
    predictor = Predictor()
    predictor.setup()
    image = load_image("https://replicate.delivery/pbxt/NlPRwVX3rSb1lr6dcJw8F1QzBW8dcvFEuvJ3aygYi9iD6W4s/qwen-pose2.png")
    image_2 = load_image("https://replicate.delivery/pbxt/NlPRvwAlhPyevZ04BY6TEQmIjGYGxW1z9QHiLKabroVnmGe7/replicate-prediction-2rq8q6nrg5rmc0csex6818jzk8.jpeg")
    prompt = "The woman in image 2 adopts the pose from image 1"
    output = predictor.predict(prompt=prompt, image=[image, image_2])
    print(output)
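
Save the script as, for example, predict.py and run it with python predict.py. Assuming both tokens are valid and a CUDA-capable GPU is available (the seeded generator is created on "cuda"), the edited result is written to output.png and the script prints that path.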