zsxkib/diffbir

✨DiffBIR: Towards Blind Image Restoration with Generative Diffusion Prior

Input
Configure the inputs for the AI model.

Random seed for reproducibility. Runs with the same seed and inputs produce the same output.

Path to the input image you want to enhance.

The number of enhancement iterations to perform (range: 1–100). More steps might result in a clearer image but can also introduce artifacts.

Whether to use patch-based sampling. This can be useful for very large images to enhance them in smaller chunks rather than all at once.

Size of each tile (or patch) when the 'tiled' option is enabled (range: 0–100). Determines how the image is divided during patch-based enhancement.

For 'faces' mode: Indicates if the input images are already cropped and aligned to faces. If not, the model will attempt to do this.

Distance between the start of each tile when the image is divided for patch-based enhancement (range: 0–100). A smaller stride means more overlap between tiles.
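The relationship between tile size, stride, and overlap can be illustrated with a small sketch (a hypothetical helper, not part of DiffBIR's API) that computes the tile start offsets along one image axis:

```python
def tile_starts(length, tile_size, stride):
    """Start offsets for tiles covering an axis of `length` pixels.
    A stride smaller than tile_size produces overlapping tiles."""
    if length <= tile_size:
        return [0]
    starts = list(range(0, length - tile_size, stride))
    # Always include a final tile flush with the edge so the whole axis is covered.
    starts.append(length - tile_size)
    return starts

# 512-pixel axis, 256-pixel tiles, stride 128 -> neighbouring tiles overlap by half
print(tile_starts(512, 256, 128))  # [0, 128, 256]
```

With stride equal to tile size the tiles would merely abut; the overlap from a smaller stride lets the sampler blend tile borders and hide seams.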

Number of times the enhancement process is repeated by feeding the output back as input (range: 1–10). This can refine the result but might also introduce over-enhancement issues.
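The feedback loop described here is simply repeated application of the enhancer; a minimal sketch, with a stand-in function in place of the real model:

```python
def repeat_enhance(image, enhance, repeat_times):
    # Each pass's output becomes the next pass's input.
    for _ in range(repeat_times):
        image = enhance(image)
    return image

# Stand-in "enhancer" that increments a counter, just to show the chaining.
print(repeat_enhance(0, lambda x: x + 1, 3))  # 3
```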

Use latent image guidance for enhancement. This can help in achieving more accurate and contextually relevant enhancements.

Method used for color correction after enhancement. 'wavelet' and 'adain' offer different styles of color correction, while 'none' skips this step.
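As a rough illustration of what an AdaIN-style correction does (a sketch of the general AdaIN technique, assuming DiffBIR's 'adain' option behaves similarly): shift and scale each colour channel of the enhanced output so its mean and standard deviation match those of the reference input.

```python
import numpy as np

def adain_color_fix(output, reference):
    """Match the per-channel mean and std of `output` to `reference`.
    Both arrays are H x W x C floats; returns the corrected image."""
    out = output.astype(np.float64)
    ref = reference.astype(np.float64)
    mean_o, std_o = out.mean(axis=(0, 1)), out.std(axis=(0, 1)) + 1e-8
    mean_r, std_r = ref.mean(axis=(0, 1)), ref.std(axis=(0, 1))
    # Normalise each channel, then re-apply the reference statistics.
    return (out - mean_o) / std_o * std_r + mean_r
```

This kind of statistic matching counteracts the global colour drift that diffusion sampling can introduce.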

For 'general_scenes': Scale factor for the guidance mechanism (range: 0–100). Adjusts the influence of guidance on the enhancement process.

For 'general_scenes': Determines in which space (RGB or latent) the guidance operates. 'latent' can often provide more subtle and context-aware enhancements.

For 'general_scenes': Number of times the guidance process is repeated during enhancement (range: 0–100).

For 'faces' mode: If multiple faces are detected, only enhance the center-most face in the image.

For 'general_scenes': Specifies the step at which the guidance mechanism stops influencing the enhancement (range: 0–100).

For 'general_scenes': Specifies the step at which the guidance mechanism starts influencing the enhancement (range: 0–100).
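The start and stop steps together define a window of sampling steps in which guidance is applied. A minimal sketch, assuming a half-open [start, stop) interval (the exact boundary convention is an assumption, not documented here):

```python
def guidance_active(step, start_step, stop_step):
    # Guidance influences the sample only inside [start_step, stop_step).
    return start_step <= step < stop_step

# Over a 10-step schedule, guidance from step 2 up to (not including) step 7:
steps_with_guidance = [t for t in range(10) if guidance_active(t, 2, 7)]
print(steps_with_guidance)  # [2, 3, 4, 5, 6]
```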

For 'faces' mode: Model used to upscale the background in images where the primary subject is a face.

For 'faces' mode: Model used for detecting faces in the image. Choose based on accuracy and speed preferences.

Choose the type of model best suited for the primary content of the image: 'faces' for portraits and 'general_scenes' for everything else.

Select the restoration model that aligns with the content of your image. This model performs the restoration step that removes degradations.

Factor by which the input image resolution should be increased (range: 1–4). For instance, a factor of 4 makes the resolution 4 times greater in both height and width.
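Note that the factor scales both dimensions, so the pixel count grows with its square; a quick sketch of the arithmetic:

```python
def upscaled_size(width, height, factor):
    # A factor of 4 quadruples both dimensions, i.e. 16x the pixel count.
    return width * factor, height * factor

print(upscaled_size(512, 384, 4))  # (2048, 1536)
```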

Disables the initial preprocessing (restoration) step that uses SwinIR. Enable this if your input image is already of high quality and doesn't require restoration.

Reload the image restoration model (SwinIR) if set to True. This can be useful if you've updated or changed the underlying SwinIR model.

For 'faces' mode: Size of each tile used by the background upsampler when dividing the image into patches (range: 0–100).

For 'faces' mode: Distance between the start of each tile when the background is divided for upscaling (range: 0–100). A smaller stride means more overlap between tiles.

Output
The generated output will appear here.
