ControlNet Inpaint API — Guided Inpainting
Inpaint images with ControlNet guidance for more controllable region regeneration.
ControlNet inpaint is a feature of the Stable Diffusion API that lets you inpaint images under ControlNet guidance. Any of the eight available ControlNet models can be used to guide the inpainting.
Endpoint: POST https://stablediffusionapi.com/api/v5/controlnet
In addition to inpaint, the controlnet_model parameter accepts the following values:
canny,
depth,
hed,
mlsd,
normal,
openpose,
scribble,
segmentation
key: Your API key.
controlnet_model: Set to inpaint for guided inpainting.
model_id: The Stable Diffusion model ID.
auto_hint: Automatically generate the hint image (yes/no).
prompt: Text prompt describing the desired result; keep it as detailed as possible.
init_image: Direct link to the source image.
mask_image: Direct link to the mask image.
scheduler: The scheduler (sampler) to use.
num_inference_steps: Number of denoising steps (minimum: 1; maximum: 50).
guidance_scale: Scale for classifier-free guidance (minimum: 1; maximum: 20).
seed: Random seed; leave blank to randomize.
enhance_prompt: Enhance the prompt for better results (yes/no; default: yes).
webhook: Webhook URL to call when image generation completes.
track_id: Tracking ID for this API call.
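The parameters above can be assembled and sent with nothing but the Python standard library. The sketch below is illustrative, not official client code: build_inpaint_payload and send_request are hypothetical helper names, and only a subset of the parameters is filled in.

```python
import json
import urllib.request

API_URL = "https://stablediffusionapi.com/api/v5/controlnet"

def build_inpaint_payload(key, prompt, init_image, mask_image,
                          model_id="sd-1.5", steps=30, guidance_scale=7.5):
    """Assemble the JSON body for a ControlNet inpaint request."""
    return {
        "key": key,
        "controlnet_model": "inpaint",   # this endpoint's mode
        "model_id": model_id,
        "auto_hint": "yes",
        "prompt": prompt,
        "init_image": init_image,        # direct image link
        "mask_image": mask_image,        # direct mask link
        "num_inference_steps": str(steps),
        "guidance_scale": guidance_scale,
        "seed": None,                    # null -> randomized seed
    }

def send_request(payload):
    """POST the payload to the ControlNet endpoint and decode the reply."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Calling `send_request(build_inpaint_payload("<your-api-key>", "a model doing photoshoot", init_url, mask_url))` performs the same POST as the raw request body shown below.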
Request Body
{
"key": "",
"controlnet_model": "inpaint",
"model_id": "sd-1.5",
"auto_hint": "yes",
"prompt": "a model doing photoshoot, ultra high resolution, 4K image",
"negative_prompt": null,
"init_image": "https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_openpose.png",
"mask_image": "https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_openpose.png",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "no",
"scheduler": "UniPCMultistepScheduler",
"guidance_scale": 7.5,
"strength": 0.7,
"seed": null,
"webhook": null,
"track_id": null
}
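On completion the API responds with JSON. The exact shape is not specified above, so the sketch below assumes a typical success response of the form {"status": "success", "output": [...]} and fails loudly on anything else; verify the field names against your own responses.

```python
def extract_image_urls(response):
    """Pull generated image URLs out of a response dict.

    Assumes a success response carries status == "success" and a list of
    URLs under "output"; any other status raises instead of failing silently.
    """
    status = response.get("status")
    if status != "success":
        raise RuntimeError(f"generation not ready or failed: {status}")
    return response.get("output", [])

# Hypothetical success response for illustration:
urls = extract_image_urls(
    {"status": "success", "output": ["https://example.com/out.png"]}
)
```

If you set the webhook parameter instead, the same payload is delivered to your URL when generation finishes, so the parsing logic is identical.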