ControlNet Main

You can now control Stable Diffusion with ControlNet. All 8 ControlNet models are available through the API.


curl --request POST 'https://stablediffusionapi.com/api/v5/controlnet'

Pass one of the values below in the "controlnet_model" parameter:
canny, depth, hed, mlsd, normal, openpose, scribble, segmentation

key : Your API key
model_id : Stable Diffusion model ID
controlnet_model : ControlNet model ID from the list above
auto_hint : Automatically generate the hint image; options: yes or no
prompt : Text prompt for the image; keep it as detailed as possible
init_image : Direct link to the input image
scheduler : Scheduler (sampler) you want to use
num_inference_steps : Number of denoising steps (minimum: 1; maximum: 50)
guidance_scale : Scale for classifier-free guidance (minimum: 1; maximum: 20)
seed : Random seed; leave blank to randomize the seed
enhance_prompt : Enhance the prompt for better results; default: yes, options: yes or no
webhook : Webhook URL to call when image generation is completed
track_id : Tracking ID to track this API call

Request Body

{
 "key": "",
 "model_id": "midjourney",
 "controlnet_model": "canny",
 "auto_hint": "yes",
 "prompt": "a model doing photoshoot, ultra high resolution, 4K image",
 "negative_prompt": null,
 "init_image": "https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_openpose.png",
 "width": "512",
 "height": "512",
 "samples": "1",
 "num_inference_steps": "30",
 "safety_checker": "no",
 "enhance_prompt": "no",
 "scheduler": "UniPCMultistepScheduler",
 "guidance_scale": 7.5,
 "strength": 0.7,
 "seed": null,
 "webhook": null,
 "track_id": null
}
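
Putting the endpoint and body together, a complete request looks roughly like the sketch below. This is a minimal example, not the only valid form: the Content-Type header and the placeholder API key are assumptions, and the JSON fields are a subset of the example body above.

curl --request POST 'https://stablediffusionapi.com/api/v5/controlnet' \
  --header 'Content-Type: application/json' \
  --data '{
    "key": "your_api_key_here",
    "model_id": "midjourney",
    "controlnet_model": "canny",
    "auto_hint": "yes",
    "prompt": "a model doing photoshoot, ultra high resolution, 4K image",
    "init_image": "https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_openpose.png",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "guidance_scale": 7.5
  }'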