
Platform For AI:Use ComfyUI to deploy an AI video generation model service

Last Updated:Jul 11, 2025

You can create complex AIGC workflows with ComfyUI to generate short videos and animations. Elastic Algorithm Service (EAS) provides a scenario-based deployment method that lets you deploy an AI video generation service based on ComfyUI and Stable Video Diffusion with only a few parameters to configure. This topic describes how to deploy a service based on a ComfyUI image and the common methods for calling the service.

Select deployment method

The following list describes the editions available for scenario-based deployment: Standard Edition, API Edition, Cluster Edition WebUI, and Serverless.

  • Standard Edition

    • Suitable scenario: A single user calls the service on the WebUI or by using API operations, with the service deployed on a single instance.

    • Call methods: WebUI, online debugging, and API calls (synchronous).

    • Billing method: You are charged based on your deployment configurations. For more information, see Billing of EAS.

  • API Edition

    • Suitable scenario: High concurrency is required. The system creates a queue service instance that requires additional CPU instances.

    • Call method: API calls (asynchronous).

    • Billing method: You are charged based on your deployment configurations. For more information, see Billing of EAS.

  • Cluster Edition WebUI

    • Suitable scenario: Multiple users call the service on the WebUI at the same time, such as for design teams or teaching.

    • Call method: WebUI.

    • Billing method: You are charged based on your deployment configurations. For more information, see Billing of EAS.

  • Serverless

    • Suitable scenario: The demand for computing resources fluctuates significantly. The system automatically scales the service based on your service requests.

    • Call method: WebUI.

    • Billing method: The service deployment is free of charge. You are billed based on the duration of generation.

  • Cluster Edition WebUI: Each user has their own backend environment and working directory. This helps implement efficient GPU sharing and file management. For information about how a Cluster Edition WebUI service works, see How the Cluster Edition WebUI service works.

  • Serverless: The Serverless edition is supported only in the China (Shanghai) and China (Hangzhou) regions.

If scenario-based deployment cannot meet your business requirements, you can deploy a custom model by using the Alibaba Cloud images of the Standard Edition, Cluster Edition WebUI, and API Edition. You can also configure parameters for custom deployment in the console to implement other features.

Deploy a model service in EAS

Method 1: Scenario-based model deployment (recommended)

  1. Log on to the PAI console. Select a region at the top of the page. Then, select the desired workspace and click Enter Elastic Algorithm Service (EAS).

  2. On the Elastic Algorithm Service (EAS) page, click Deploy Service. In the Scenario-based Model Deployment section, click AI Video Generation: ComfyUI-based Deployment.

  3. On the AI Video Generation: ComfyUI-based Deployment page, configure the following key parameters.

    • Basic Information

      • Service Name: Specify a name for the model service.

      • Edition: See Select deployment method.

    • Model Settings: Configure model settings as needed to mount model files. Supported types:

      • OSS: Select an existing Object Storage Service (OSS) directory.

      • NAS: Configure the File Storage NAS (NAS) mount target and source path.

    • Resource Configuration

      • Instance Count: If you select Standard Edition, we recommend that you set the value to 1.

      • Resource Configuration: We recommend the GU30, A10, or T4 GPU types. By default, GPU > ml.gu7i.c16m60.1-gu30 is selected for cost-effectiveness.

      Note: ComfyUI supports only single-GPU mode. Tasks can run on a single-GPU instance or on multiple single-GPU instances, but ComfyUI does not support multi-GPU concurrent operations.

  4. Click Deploy. The deployment requires approximately 5 minutes to complete. If Service Status changes to Running, the service is deployed.

Method 2: Custom deployment

  1. Log on to the PAI console. Select a region at the top of the page. Then, select the desired workspace and click Enter Elastic Algorithm Service (EAS).

  2. Click Deploy Service. In the Custom Model Deployment section, click Custom Deployment.

  3. On the Custom Deployment page, configure the following key parameters.

    • Basic Information

      • Service Name: Enter a name for the service. In this example, the name comfyui_svd_demo is used.

    • Environment Information

      • Deployment Method: Select Image-based Deployment and select Enable Web App.

      • Image Configuration: In the Alibaba Cloud Image list, select comfyui > comfyui:1.9. The image tag suffix indicates the edition:

        • x.x: Standard Edition.

        • x.x-api: API Edition.

        • x.x-cluster: Cluster Edition.

        Note:

        • The image version is updated frequently. We recommend that you select the latest version.

        • For information about the suitable scenarios for each edition, see Select deployment method.

      • Model Settings: Configure model settings as needed to mount model files. Supported types:

        • OSS

          • Uri: Select an existing OSS directory. Example: oss://bucket-test/data-oss/.

          • Mount Path: Enter /code/data-oss. The OSS directory is mounted to the /code/data-oss path of the image.

        • General-purpose NAS

          • File System: Select a NAS file system.

          • Mount Target: The mount target of the NAS file system. The EAS service uses the mount target to access the NAS file system.

          • File System Path: The NAS path in which the files are stored. Example: /data-oss.

          • Mount Path: Enter /code/data-oss. The path of the NAS file system is mounted to the /code/data-oss path of the image.

      • Command: After you configure the image version, the system automatically sets the value to python main.py --listen --port 8000 and the port number to 8000.

        After you configure the model, add the --data-dir parameter to the Command field. The mount directory must be the same as the Mount Path in the Model Settings section. Example: python main.py --listen --port 8000 --data-dir /code/data-oss.

    • Resource Information

      • Resource Type: Select Public Resources.

      • Instances: If you use Standard Edition, set the value to 1.

      • Deployment Resources: Select a GPU-accelerated instance type. We recommend ml.gu7i.c16m60.1-gu30 for cost-effectiveness. If this instance type is unavailable, you can select ecs.gn6i-c16g1.4xlarge instead.

      Note: ComfyUI supports only single-GPU mode. Tasks can run on a single-GPU instance or on multiple single-GPU instances, but ComfyUI does not support multi-GPU concurrent operations.

  4. Click Deploy. The deployment requires approximately 5 minutes to complete. If Service Status changes to Running, the service is deployed.

Call the EAS service

WebUI

You can use the Standard, Cluster, and Serverless editions through the WebUI:

  1. Find the service that you want to call and click View Web App in the Service Type column.

  2. Perform model inference on the WebUI page.

    The ComfyUI image service provides multiple workflow templates. Click Workflow > Browse Templates to view and use these templates. You can also import a custom workflow.

    In this example, the Wan VACE Text to Video template is used. After the workflow is loaded, select a model in the Load models here section, enter prompts in the Prompt section, and then click Run.

    Note

    When you load a new workflow, execution errors may occur because of path changes. We recommend that you reselect the models and parameters to ensure that the workflow runs properly.


    After the workflow is executed, the generated video is displayed in the Save Video section.

API

Standard Edition supports synchronous calls and online debugging. API Edition supports asynchronous calls.

Note

Common definitions of synchronous calls and asynchronous calls in EAS:

  • Synchronous calls directly send requests to the inference instance without using the queue service of EAS.

  • Asynchronous calls use the queue service of EAS to send requests to the input queue. You can subscribe to the queue service to obtain asynchronous inference results.

Because ComfyUI has its own asynchronous queue system, even synchronous calls are essentially processed asynchronously. After you send a request, the system returns a prompt ID. You then use the prompt ID to poll for the inference result.

1. Generate request body

The API request body of ComfyUI depends on the workflow. First, configure the workflow on the WebUI of a Standard Edition service, and then select Workflow > Export (API) in the upper-left corner to export the workflow in API format as a JSON file.

  • For synchronous calls, the JSON file content must be included in the prompt field of the request body.

  • For asynchronous calls, the request body is the JSON file content itself.
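In code, the distinction is simply whether the exported workflow JSON is wrapped in a prompt field. A minimal sketch (the one-node workflow stub here is a hypothetical stand-in for the exported file):

```python
import json


def build_sync_body(workflow: dict) -> dict:
    # Synchronous calls: wrap the exported workflow JSON in a "prompt" field.
    return {"prompt": workflow}


def build_async_body(workflow: dict) -> dict:
    # Asynchronous calls: send the exported workflow JSON as-is.
    return workflow


# A trivial stand-in for the JSON file exported from the WebUI.
workflow = json.loads('{"3": {"class_type": "KSampler", "inputs": {}}}')

assert build_sync_body(workflow) == {"prompt": workflow}
assert build_async_body(workflow) == workflow
```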

The following is a sample request body for synchronous calls of the preceding workflow:

{
    "prompt": {
        "3": {
            "inputs": {
                "seed": 367490676387803,
                "steps": 40,
                "cfg": 7,
                "sampler_name": "dpmpp_sde_gpu",
                "scheduler": "karras",
                "denoise": 1,
                "model": [
                    "4",
                    0
                ],
                "positive": [
                    "6",
                    0
                ],
                "negative": [
                    "7",
                    0
                ],
                "latent_image": [
                    "5",
                    0
                ]
            },
            "class_type": "KSampler",
            "_meta": {
                "title": "K sampler"
            }
        },
        "4": {
            "inputs": {
                "ckpt_name": "LandscapeBING_v10.safetensors"
            },
            "class_type": "CheckpointLoaderSimple",
            "_meta": {
                "title": "Checkpoint loader (simple)"
            }
        },
        "5": {
            "inputs": {
                "width": 720,
                "height": 1280,
                "batch_size": 1
            },
            "class_type": "EmptyLatentImage",
            "_meta": {
                "title": "Empty Latent"
            }
        },
        "6": {
            "inputs": {
                "text": "Rocket takes off from the ground, fire, sky, airplane",
                "clip": [
                    "4",
                    1
                ]
            },
            "class_type": "CLIPTextEncode",
            "_meta": {
                "title": "CLIP text encoder"
            }
        },
        "7": {
            "inputs": {
                "text": "",
                "clip": [
                    "4",
                    1
                ]
            },
            "class_type": "CLIPTextEncode",
            "_meta": {
                "title": "CLIP text encoder"
            }
        },
        "8": {
            "inputs": {
                "samples": [
                    "3",
                    0
                ],
                "vae": [
                    "4",
                    2
                ]
            },
            "class_type": "VAEDecode",
            "_meta": {
                "title": "VAE decoding"
            }
        },
        "9": {
            "inputs": {
                "filename_prefix": "ComfyUI",
                "images": [
                    "8",
                    0
                ]
            },
            "class_type": "SaveImage",
            "_meta": {
                "title": "Save the image"
            }
        },
        "13": {
            "inputs": {
                "seed": 510424455529432,
                "steps": 40,
                "cfg": 2.5,
                "sampler_name": "euler_ancestral",
                "scheduler": "karras",
                "denoise": 1,
                "model": [
                    "17",
                    0
                ],
                "positive": [
                    "16",
                    0
                ],
                "negative": [
                    "16",
                    1
                ],
                "latent_image": [
                    "16",
                    2
                ]
            },
            "class_type": "KSampler",
            "_meta": {
                "title": "K sampler"
            }
        },
        "14": {
            "inputs": {
                "samples": [
                    "13",
                    0
                ],
                "vae": [
                    "18",
                    2
                ]
            },
            "class_type": "VAEDecode",
            "_meta": {
                "title": "VAE decoding"
            }
        },
        "15": {
            "inputs": {
                "filename_prefix": "ComfyUI",
                "fps": 10,
                "lossless": false,
                "quality": 85,
                "method": "default",
                "images": [
                    "14",
                    0
                ]
            },
            "class_type": "SaveAnimatedWEBP",
            "_meta": {
                "title": "Save WEBP"
            }
        },
        "16": {
            "inputs": {
                "width": 512,
                "height": 768,
                "video_frames": 35,
                "motion_bucket_id": 140,
                "fps": 15,
                "augmentation_level": 0.15,
                "clip_vision": [
                    "18",
                    1
                ],
                "init_image": [
                    "8",
                    0
                ],
                "vae": [
                    "18",
                    2
                ]
            },
            "class_type": "SVD_img2vid_Conditioning",
            "_meta": {
                "title": "SVD_Image to Video_Condition"
            }
        },
        "17": {
            "inputs": {
                "min_cfg": 1,
                "model": [
                    "18",
                    0
                ]
            },
            "class_type": "VideoLinearCFGGuidance",
            "_meta": {
                "title": "Linear CFG Bootstrap"
            }
        },
        "18": {
            "inputs": {
                "ckpt_name": "svd_xt_image_decoder.safetensors"
            },
            "class_type": "ImageOnlyCheckpointLoader",
            "_meta": {
                "title": "Checkpoint loader (image only)"
            }
        },
        "19": {
            "inputs": {
                "frame_rate": 10,
                "loop_count": 0,
                "filename_prefix": "comfyUI",
                "format": "video/h264-mp4",
                "pix_fmt": "yuv420p",
                "crf": 20,
                "save_metadata": true,
                "pingpong": false,
                "save_output": true,
                "images": [
                    "14",
                    0
                ]
            },
            "class_type": "VHS_VideoCombine",
            "_meta": {
                "title": "Merge to video"
            }
        }
    }
}

2. Initiate a call

Online debugging

Only Standard Edition services support online debugging.

  1. On the Elastic Algorithm Service (EAS) page, find the service that you want to debug and click Online Debugging in the Actions column.

  2. Send a POST request to obtain the prompt ID.

    1. In the Request Parameters section, enter the prepared request body in the Body code editor. Add /prompt to the request URL input box.

    2. Click Send Request to view the returned result in the Debugging Information section. The following figure shows an example.

  3. Send a GET request to obtain the inference result based on the prompt ID.

    1. In the Request Parameters section, change the request method to GET and enter /history/<prompt id> in the input box. The following figure shows an example.

      Replace <prompt id> with the prompt ID that you obtained in Step 2.

    2. Click Send Request to obtain the inference result.

      You can view the generated inference result in the output directory of the mounted storage.

Synchronous call

Only Standard Edition services support synchronous calls.

  1. View the invocation method.

    1. In the service list, click the name of the Standard Edition service. In the Basic Information section, click View Endpoint Information.

    2. In the Invocation Method panel, obtain the service endpoint and token.

  2. Send a POST request to obtain the prompt ID.

    cURL

    • HTTP request method: POST

    • Request URL: <service_url>/prompt

    • Request header:

      • Authorization: <token> (the authorization key)

      • Content-Type: application/json (the format of the request body)

    • Sample code

      curl --location --request POST '<service_url>/prompt' \
      --header 'Authorization: <token>' \
      --header 'Content-Type: application/json' \
      --data-raw '{
          "prompt":
          ...omitted
      }'

      The following list describes the key parameters.

      • <service_url>: Replace the value with the endpoint that you obtained in Step 1. Delete the forward slash (/) at the end of the endpoint. Example: http://comfyui****.175805416243****.cn-beijing.pai-eas.aliyuncs.com.

      • <token>: Replace the value with the token that you obtained in Step 1. Example: ZGJmNzcwYjczODE1MmVlNWY1NTNiNGYxNDkzODI****NzU2NTFiOA==.

      • data-raw: Set the value to the request body.

        Important: Boolean values (true and false) in the request body must be lowercase.

      The following is a sample request body:

      {
          "prompt": {
              "3": {
                  "inputs": {
                      "seed": 367490676387803,
                      "steps": 40,
                      "cfg": 7,
                      "sampler_name": "dpmpp_sde_gpu",
                      "scheduler": "karras",
                      "denoise": 1,
                      "model": [
                          "4",
                          0
                      ],
                      "positive": [
                          "6",
                          0
                      ],
                      "negative": [
                          "7",
                          0
                      ],
                      "latent_image": [
                          "5",
                          0
                      ]
                  },
                  "class_type": "KSampler",
                  "_meta": {
                      "title": "K sampler"
                  }
              },
              "4": {
                  "inputs": {
                      "ckpt_name": "LandscapeBING_v10.safetensors"
                  },
                  "class_type": "CheckpointLoaderSimple",
                  "_meta": {
                      "title": "Checkpoint loader (simple)"
                  }
              },
              "5": {
                  "inputs": {
                      "width": 720,
                      "height": 1280,
                      "batch_size": 1
                  },
                  "class_type": "EmptyLatentImage",
                  "_meta": {
                      "title": "Empty Latent"
                  }
              },
              "6": {
                  "inputs": {
                      "text": "Rocket takes off from the ground, fire, sky, airplane",
                      "clip": [
                          "4",
                          1
                      ]
                  },
                  "class_type": "CLIPTextEncode",
                  "_meta": {
                      "title": "CLIP text encoder"
                  }
              },
              "7": {
                  "inputs": {
                      "text": "",
                      "clip": [
                          "4",
                          1
                      ]
                  },
                  "class_type": "CLIPTextEncode",
                  "_meta": {
                      "title": "CLIP text encoder"
                  }
              },
              "8": {
                  "inputs": {
                      "samples": [
                          "3",
                          0
                      ],
                      "vae": [
                          "4",
                          2
                      ]
                  },
                  "class_type": "VAEDecode",
                  "_meta": {
                      "title": "VAE decoding"
                  }
              },
              "9": {
                  "inputs": {
                      "filename_prefix": "ComfyUI",
                      "images": [
                          "8",
                          0
                      ]
                  },
                  "class_type": "SaveImage",
                  "_meta": {
                      "title": "Save the image"
                  }
              },
              "13": {
                  "inputs": {
                      "seed": 510424455529432,
                      "steps": 40,
                      "cfg": 2.5,
                      "sampler_name": "euler_ancestral",
                      "scheduler": "karras",
                      "denoise": 1,
                      "model": [
                          "17",
                          0
                      ],
                      "positive": [
                          "16",
                          0
                      ],
                      "negative": [
                          "16",
                          1
                      ],
                      "latent_image": [
                          "16",
                          2
                      ]
                  },
                  "class_type": "KSampler",
                  "_meta": {
                      "title": "K sampler"
                  }
              },
              "14": {
                  "inputs": {
                      "samples": [
                          "13",
                          0
                      ],
                      "vae": [
                          "18",
                          2
                      ]
                  },
                  "class_type": "VAEDecode",
                  "_meta": {
                      "title": "VAE decoding"
                  }
              },
              "15": {
                  "inputs": {
                      "filename_prefix": "ComfyUI",
                      "fps": 10,
                      "lossless": false,
                      "quality": 85,
                      "method": "default",
                      "images": [
                          "14",
                          0
                      ]
                  },
                  "class_type": "SaveAnimatedWEBP",
                  "_meta": {
                      "title": "Save WEBP"
                  }
              },
              "16": {
                  "inputs": {
                      "width": 512,
                      "height": 768,
                      "video_frames": 35,
                      "motion_bucket_id": 140,
                      "fps": 15,
                      "augmentation_level": 0.15,
                      "clip_vision": [
                          "18",
                          1
                      ],
                      "init_image": [
                          "8",
                          0
                      ],
                      "vae": [
                          "18",
                          2
                      ]
                  },
                  "class_type": "SVD_img2vid_Conditioning",
                  "_meta": {
                      "title": "SVD_Image to Video_Condition"
                  }
              },
              "17": {
                  "inputs": {
                      "min_cfg": 1,
                      "model": [
                          "18",
                          0
                      ]
                  },
                  "class_type": "VideoLinearCFGGuidance",
                  "_meta": {
                      "title": "Linear CFG Bootstrap"
                  }
              },
              "18": {
                  "inputs": {
                      "ckpt_name": "svd_xt_image_decoder.safetensors"
                  },
                  "class_type": "ImageOnlyCheckpointLoader",
                  "_meta": {
                      "title": "Checkpoint loader (image only)"
                  }
              },
              "19": {
                  "inputs": {
                      "frame_rate": 10,
                      "loop_count": 0,
                      "filename_prefix": "comfyUI",
                      "format": "video/h264-mp4",
                      "pix_fmt": "yuv420p",
                      "crf": 20,
                      "save_metadata": true,
                      "pingpong": false,
                      "save_output": true,
                      "images": [
                          "14",
                          0
                      ]
                  },
                  "class_type": "VHS_VideoCombine",
                  "_meta": {
                      "title": "Merge to video"
                  }
              }
          }
      }

    Python

    Sample code:

    import requests
    
    url = "<service_url>/prompt"
    
    # The request body: the workflow exported in API format, wrapped in a "prompt" field.
    payload = {
        "prompt":
        ...omitted
    }
    
    session = requests.Session()
    session.headers.update({"Authorization": "<token>"})
    
    response = session.post(url, json=payload)
    if response.status_code != 200:
        raise Exception(response.content)
    
    # The response includes the prompt ID that you use to poll for the result.
    data = response.json()
    print(data)
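Note that Boolean capitalization differs between the raw JSON used with cURL (lowercase true and false) and the Python dict (True and False): json.dumps converts Python booleans to lowercase JSON literals on the wire, so both notations describe the same request. For example:

```python
import json

payload = {"save_metadata": True, "lossless": False}

# Python booleans serialize to lowercase JSON literals.
body = json.dumps(payload)
print(body)  # {"save_metadata": true, "lossless": false}
```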

    The following list describes the key parameters.

    • <service_url>: Replace the value with the endpoint that you obtained in Step 1. Delete the forward slash (/) at the end of the endpoint. Example: http://comfyui****.175805416243****.cn-beijing.pai-eas.aliyuncs.com.

    • <token>: Replace the value with the token that you obtained in Step 1. Example: ZGJmNzcwYjczODE1MmVlNWY1NTNiNGYxNDkzODI****NzU2NTFiOA==.

    • payload: Set the value to the request body.

      Important: In a Python request body, Boolean values (True and False) must be capitalized.

    The following is a sample request body:

    {
        "prompt": {
            "3": {
                "inputs": {
                    "seed": 367490676387803,
                    "steps": 40,
                    "cfg": 7,
                    "sampler_name": "dpmpp_sde_gpu",
                    "scheduler": "karras",
                    "denoise": 1,
                    "model": [
                        "4",
                        0
                    ],
                    "positive": [
                        "6",
                        0
                    ],
                    "negative": [
                        "7",
                        0
                    ],
                    "latent_image": [
                        "5",
                        0
                    ]
                },
                "class_type": "KSampler",
                "_meta": {
                    "title": "K sampler"
                }
            },
            "4": {
                "inputs": {
                    "ckpt_name": "LandscapeBING_v10.safetensors"
                },
                "class_type": "CheckpointLoaderSimple",
                "_meta": {
                    "title": "Checkpoint loader (simple)"
                }
            },
            "5": {
                "inputs": {
                    "width": 720,
                    "height": 1280,
                    "batch_size": 1
                },
                "class_type": "EmptyLatentImage",
                "_meta": {
                    "title": "Empty Latent"
                }
            },
            "6": {
                "inputs": {
                    "text": "Rocket takes off from the ground, fire, sky, airplane",
                    "clip": [
                        "4",
                        1
                    ]
                },
                "class_type": "CLIPTextEncode",
                "_meta": {
                    "title": "CLIP text encoder"
                }
            },
            "7": {
                "inputs": {
                    "text": "",
                    "clip": [
                        "4",
                        1
                    ]
                },
                "class_type": "CLIPTextEncode",
                "_meta": {
                    "title": "CLIP text encoder"
                }
            },
            "8": {
                "inputs": {
                    "samples": [
                        "3",
                        0
                    ],
                    "vae": [
                        "4",
                        2
                    ]
                },
                "class_type": "VAEDecode",
                "_meta": {
                    "title": "VAE decoding"
                }
            },
            "9": {
                "inputs": {
                    "filename_prefix": "ComfyUI",
                    "images": [
                        "8",
                        0
                    ]
                },
                "class_type": "SaveImage",
                "_meta": {
                    "title": "Save the image"
                }
            },
            "13": {
                "inputs": {
                    "seed": 510424455529432,
                    "steps": 40,
                    "cfg": 2.5,
                    "sampler_name": "euler_ancestral",
                    "scheduler": "karras",
                    "denoise": 1,
                    "model": [
                        "17",
                        0
                    ],
                    "positive": [
                        "16",
                        0
                    ],
                    "negative": [
                        "16",
                        1
                    ],
                    "latent_image": [
                        "16",
                        2
                    ]
                },
                "class_type": "KSampler",
                "_meta": {
                    "title": "K sampler"
                }
            },
            "14": {
                "inputs": {
                    "samples": [
                        "13",
                        0
                    ],
                    "vae": [
                        "18",
                        2
                    ]
                },
                "class_type": "VAEDecode",
                "_meta": {
                    "title": "VAE decoding"
                }
            },
            "15": {
                "inputs": {
                    "filename_prefix": "ComfyUI",
                    "fps": 10,
                    "lossless": False,
                    "quality": 85,
                    "method": "default",
                    "images": [
                        "14",
                        0
                    ]
                },
                "class_type": "SaveAnimatedWEBP",
                "_meta": {
                    "title": "Save WEBP"
                }
            },
            "16": {
                "inputs": {
                    "width": 512,
                    "height": 768,
                    "video_frames": 35,
                    "motion_bucket_id": 140,
                    "fps": 15,
                    "augmentation_level": 0.15,
                    "clip_vision": [
                        "18",
                        1
                    ],
                    "init_image": [
                        "8",
                        0
                    ],
                    "vae": [
                        "18",
                        2
                    ]
                },
                "class_type": "SVD_img2vid_Conditioning",
                "_meta": {
                    "title": "SVD_Image to Video_Condition"
                }
            },
            "17": {
                "inputs": {
                    "min_cfg": 1,
                    "model": [
                        "18",
                        0
                    ]
                },
                "class_type": "VideoLinearCFGGuidance",
                "_meta": {
                    "title": "Linear CFG Bootstrap"
                }
            },
            "18": {
                "inputs": {
                    "ckpt_name": "svd_xt_image_decoder.safetensors"
                },
                "class_type": "ImageOnlyCheckpointLoader",
                "_meta": {
                    "title": "Checkpoint loader (image only)"
                }
            },
            "19": {
                "inputs": {
                    "frame_rate": 10,
                    "loop_count": 0,
                    "filename_prefix": "comfyUI",
                    "format": "video/h264-mp4",
                    "pix_fmt": "yuv420p",
                    "crf": 20,
                    "save_metadata": True,
                    "pingpong": False,
                    "save_output": True,
                    "images": [
                        "14",
                        0
                    ]
                },
                "class_type": "VHS_VideoCombine",
                "_meta": {
                    "title": "Merge to video"
                }
            }
        }
    }

    Sample response:

    {
        "prompt_id": "021ebc5b-e245-4e37-8bd3-00f7b949****",
        "number": 5,
        "node_errors": {}
    }

    You can obtain the prompt ID from the response.
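Before you fetch results, it is also worth confirming that node_errors in the response is empty; a non-empty mapping means that the server rejected one or more workflow nodes. The following minimal sketch uses an illustrative helper name (check_submission) and the sample values from the response above:

```python
def check_submission(result: dict) -> str:
    """Return the prompt ID from the submission response, raising if the
    server reported errors for any workflow node."""
    if result.get("node_errors"):
        raise RuntimeError(f"workflow rejected: {result['node_errors']}")
    return result["prompt_id"]

# Values from the sample response above:
prompt_id = check_submission(
    {"prompt_id": "021ebc5b-e245-4e37-8bd3-00f7b949****", "number": 5, "node_errors": {}}
)
```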

  3. Send a request to obtain the inference result.

    cURL

    • HTTP request method: GET

    • Request URL: <service_url>/history/<prompt_id>

    • Request header:

      Authorization: <token> (the authorization key)

    • Sample code:

      curl --location --request GET '<service_url>/history/<prompt_id>' \
           --header 'Authorization: <token>'

      The following list describes the key parameters.

      • <service_url>: Replace the value with the endpoint that you obtained in Step 1. Delete the forward slash (/) at the end of the endpoint. Example: http://comfyui****.175805416243****.cn-beijing.pai-eas.aliyuncs.com.

      • <token>: Replace the value with the token that you obtained in Step 1. Example: ZGJmNzcwYjczODE1MmVlNWY1NTNiNGYxNDkzODI****NzU2NTFiOA==.

      • <prompt_id>: Replace the value with the prompt ID that you obtained in Step 2.

    Python

    Sample code:

    import requests
    
    # Create the request URL.
    url = "<service_url>/history/<prompt_id>"
    
    session = requests.session()
    session.headers.update({"Authorization":"<token>"})
    
    response = session.get(url=f'{url}')
    
    if response.status_code != 200:
        raise Exception(response.content)
    
    data = response.json()
    print(data)

    The following list describes the key parameters.

    • <service_url>: Replace the value with the endpoint that you obtained in Step 1. Delete the forward slash (/) at the end of the endpoint. Example: http://comfyui****.175805416243****.cn-beijing.pai-eas.aliyuncs.com.

    • <token>: Replace the value with the token that you obtained in Step 1. Example: ZGJmNzcwYjczODE1MmVlNWY1NTNiNGYxNDkzODI****NzU2NTFiOA==.

    • <prompt_id>: Replace the value with the prompt ID that you obtained in Step 2.

    Sample response:


    {
        "130bcd6b-5bb5-496c-9c8c-3a1359a0****": {
            "prompt": ...omitted,
            "outputs": {
                "9": {
                    "images": [
                        {
                            "filename": "ComfyUI_1712645398_18dba34d-df87-4735-a577-c63d5506a6a1_.png",
                            "subfolder": "",
                            "type": "output"
                        }
                    ]
                },
                "15": {
                    "images": [
                        {
                            "filename": "ComfyUI_1712645867_.webp",
                            "subfolder": "",
                            "type": "output"
                        }
                    ],
                    "animated": [
                        true
                    ]
                },
                "19": {
                    "gifs": [
                        {
                            "filename": "comfyUI_00002.mp4",
                            "subfolder": "",
                            "type": "output",
                            "format": "video/h264-mp4"
                        }
                    ]
                }
            },
            "status": {
                "status_str": "success",
                "completed": true,
                "messages": ...omitted,
            }
        }
    }
    

    In the outputs section of this sample response, the generated image, WEBP file, and MP4 video are provided. You can find these files by name in the output directory of the mounted storage.
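The history lookup above can be wrapped in a small helper that polls the service until the workflow completes and then collects the generated file names. This is a hedged sketch: wait_for_result and list_output_files are illustrative names, and <service_url>, <token>, and <prompt_id> are the placeholders described above.

```python
import time

def list_output_files(entry: dict) -> list:
    """Collect generated file names from one history entry.

    Save nodes report artifacts under different keys ("images" for SaveImage
    and SaveAnimatedWEBP, "gifs" for VHS_VideoCombine), so walk every node
    output and keep anything that carries a filename.
    """
    files = []
    for node_output in entry.get("outputs", {}).values():
        for artifacts in node_output.values():
            if isinstance(artifacts, list):
                files += [a["filename"] for a in artifacts
                          if isinstance(a, dict) and "filename" in a]
    return files

def wait_for_result(service_url: str, token: str, prompt_id: str,
                    interval: float = 5.0, timeout: float = 600.0) -> dict:
    """Poll <service_url>/history/<prompt_id> until the task completes."""
    import requests  # imported here so list_output_files stays dependency-free
    session = requests.session()
    session.headers.update({"Authorization": token})
    deadline = time.time() + timeout
    while time.time() < deadline:
        response = session.get(f"{service_url}/history/{prompt_id}")
        response.raise_for_status()
        entry = response.json().get(prompt_id)  # absent until the task finishes
        if entry and entry.get("status", {}).get("completed"):
            return entry
        time.sleep(interval)
    raise TimeoutError(f"prompt {prompt_id} did not complete in {timeout} seconds")
```

For the sample response above, list_output_files would return the PNG, WEBP, and MP4 file names, which you can then locate in the output directory of the mounted storage.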

Asynchronous call

Only API Edition services support asynchronous calls, which are sent to the api_prompt endpoint.

  1. View the invocation method.

    Click Invocation Method in the Service Type column of the API Edition service. In the Invocation Method panel, view the service endpoint and token on the Asynchronous Invocation tab.

  2. Send a request.

    Sample code:

    import requests,io,base64
    from PIL import Image, PngImagePlugin
    
    url = "<service_url>"
    session = requests.session()
    session.headers.update({"Authorization":"<token>"})
    
    work_flow = {
        '3': 
        ...omitted
      }
    
    for i in range(5):
      payload = work_flow
      response = session.post(url=f'{url}/api_prompt?task_id=txt2img_{i}', json=payload)
      if response.status_code != 200:
        exit(f"send request error:{response.content}")
      else:
        print(f"send {i} success, index is {response.content}")

    The following list describes the key parameters.

    • <service_url>: Replace the value with the endpoint that you obtained in Step 1. Delete the forward slash (/) at the end of the endpoint. Example: http://175805416243****.cn-beijing.pai-eas.aliyuncs.com/api/predict/comfyui_api.

    • <token>: Replace the value with the token that you obtained in Step 1. Example: ZTJhM****TBhMmJkYjM3M2U0NjM1NGE3OGNlZGEyZTdjYjlm****Nw==.

    • work_flow: The request body (the content of the JSON file of the workflow).

    Important

    The Boolean values (True and False) must start with an uppercase letter because the workflow is defined as a Python dictionary, not a raw JSON document.

    Sample JSON file:

    {
      "3": {
        "inputs": {
          "seed": 1021224598837526,
          "steps": 40,
          "cfg": 7,
          "sampler_name": "dpmpp_sde_gpu",
          "scheduler": "karras",
          "denoise": 1,
          "model": [
            "4",
            0
          ],
          "positive": [
            "6",
            0
          ],
          "negative": [
            "7",
            0
          ],
          "latent_image": [
            "5",
            0
          ]
        },
        "class_type": "KSampler",
        "_meta": {
          "title": "K sampler"
        }
      },
      "4": {
        "inputs": {
          "ckpt_name": "LandscapeBING_v10.safetensors"
        },
        "class_type": "CheckpointLoaderSimple",
        "_meta": {
          "title": "Checkpoint loader (simple)"
        }
      },
      "5": {
        "inputs": {
          "width": 720,
          "height": 1280,
          "batch_size": 1
        },
        "class_type": "EmptyLatentImage",
        "_meta": {
          "title": "Empty Latent"
        }
      },
      "6": {
        "inputs": {
          "text": "Rocket takes off from the ground, fire, sky, airplane",
          "clip": [
            "4",
            1
          ]
        },
        "class_type": "CLIPTextEncode",
        "_meta": {
          "title": "CLIP text encoder"
        }
      },
      "7": {
        "inputs": {
          "text": "",
          "clip": [
            "4",
            1
          ]
        },
        "class_type": "CLIPTextEncode",
        "_meta": {
          "title": "CLIP text encoder"
        }
      },
      "8": {
        "inputs": {
          "samples": [
            "3",
            0
          ],
          "vae": [
            "4",
            2
          ]
        },
        "class_type": "VAEDecode",
        "_meta": {
          "title": "VAE decoding"
        }
      },
      "9": {
        "inputs": {
          "filename_prefix": "ComfyUI",
          "images": [
            "8",
            0
          ]
        },
        "class_type": "SaveImage",
        "_meta": {
          "title": "Save the image"
        }
      },
      "13": {
        "inputs": {
          "seed": 1072245043382649,
          "steps": 40,
          "cfg": 2.5,
          "sampler_name": "euler_ancestral",
          "scheduler": "karras",
          "denoise": 1,
          "model": [
            "17",
            0
          ],
          "positive": [
            "16",
            0
          ],
          "negative": [
            "16",
            1
          ],
          "latent_image": [
            "16",
            2
          ]
        },
        "class_type": "KSampler",
        "_meta": {
          "title": "K sampler"
        }
      },
      "14": {
        "inputs": {
          "samples": [
            "13",
            0
          ],
          "vae": [
            "18",
            2
          ]
        },
        "class_type": "VAEDecode",
        "_meta": {
          "title": "VAE decoding"
        }
      },
      "15": {
        "inputs": {
          "filename_prefix": "ComfyUI",
          "fps": 10,
          "lossless": False,
          "quality": 85,
          "method": "default",
          "images": [
            "14",
            0
          ]
        },
        "class_type": "SaveAnimatedWEBP",
        "_meta": {
          "title": "Save WEBP"
        }
      },
      "16": {
        "inputs": {
          "width": 512,
          "height": 768,
          "video_frames": 35,
          "motion_bucket_id": 140,
          "fps": 15,
          "augmentation_level": 0.15,
          "clip_vision": [
            "18",
            1
          ],
          "init_image": [
            "8",
            0
          ],
          "vae": [
            "18",
            2
          ]
        },
        "class_type": "SVD_img2vid_Conditioning",
        "_meta": {
          "title": "SVD_Image to Video_Condition"
        }
      },
      "17": {
        "inputs": {
          "min_cfg": 1,
          "model": [
            "18",
            0
          ]
        },
        "class_type": "VideoLinearCFGGuidance",
        "_meta": {
          "title": "Linear CFG Bootstrap"
        }
      },
      "18": {
        "inputs": {
          "ckpt_name": "svd_xt_image_decoder.safetensors"
        },
        "class_type": "ImageOnlyCheckpointLoader",
        "_meta": {
          "title": "Checkpoint loader (image only)"
        }
      },
      "19": {
        "inputs": {
          "frame_rate": 10,
          "loop_count": 0,
          "filename_prefix": "comfyUI",
          "format": "video/h264-mp4",
          "pix_fmt": "yuv420p",
          "crf": 20,
          "save_metadata": True,
          "pingpong": False,
          "save_output": True,
          "images": [
            "14",
            0
          ]
        },
        "class_type": "VHS_VideoCombine",
        "_meta": {
          "title": "Merge to video"
        }
      }
    }
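A note on the uppercase booleans above: the workflow is passed to requests as a Python dictionary, so it uses Python literals (True and False). When requests serializes the json= payload, they become valid lowercase JSON booleans on the wire. A quick check with the standard library, using node "19" from the sample above:

```python
import json

# Python dict with Python boolean literals, as in node "19" above:
node_19 = {"save_metadata": True, "pingpong": False, "save_output": True}

# json.dumps (which requests uses for json= payloads) emits lowercase JSON:
print(json.dumps(node_19))
# {"save_metadata": true, "pingpong": false, "save_output": true}
```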
  3. Subscribe to results.

    1. Run the following command to install the eas_prediction SDK:

      pip install eas_prediction --user
    2. Run the following code to obtain the response:

      from eas_prediction import QueueClient
      
      sink_queue = QueueClient('<service_domain>', '<service_name>/sink')
      sink_queue.set_token('<token>')
      sink_queue.init()
      
      watcher = sink_queue.watch(0, 5, auto_commit=False)
      for x in watcher.run():
          if 'task_id' in x.tags:
              print('index {} task_id is {}'.format(x.index, x.tags['task_id']))
          print(f'index {x.index} data is {x.data}')
          sink_queue.commit(x.index)
      

      The following list describes the key parameters.

      • <service_domain>: Replace the value with the service endpoint that you obtained in Step 1. Example: 139699392458****.cn-hangzhou.pai-eas.aliyuncs.com.

      • <service_name>: Replace the value with the name of the EAS service.

      • <token>: Replace the value with the token that you obtained in Step 1.

      Sample response:

      index 42 task_id is txt2img_0
      index 42 data is b'[{"type": "executed", "data": {"node": "9", "output": {"images": [{"filename": "ComfyUI_1712647318_8e7f3c93-d2a8-4377-92d5-8eb552adc172_.png", "subfolder": "", "type": "output"}]}, "prompt_id": "c3c983b6-f92b-4dd5-b4dc-442db4d1736f"}}, {"type": "executed", "data": {"node": "15", "output": {"images": [{"filename": "ComfyUI_1712647895_.webp", "subfolder": "", "type": "output"}], "animated": [true]}, "prompt_id": "c3c983b6-f92b-4dd5-b4dc-442db4d1736f"}}, {"type": "executed", "data": {"node": "19", "output": {"gifs": [{"filename": "comfyUI_00001.mp4", "subfolder": "", "type": "output", "format": "video/h264-mp4"}]}, "prompt_id": "c3c983b6-f92b-4dd5-b4dc-442db4d1736f"}}, {"9": {"images": [{"filename": "ComfyUI_1712647318_8e7f3c93-d2a8-4377-92d5-8eb552adc172_.png", "subfolder": "", "type": "output"}]}, "15": {"images": [{"filename": "ComfyUI_1712647895_.webp", "subfolder": "", "type": "output"}], "animated": [true]}, "19": {"gifs": [{"filename": "comfyUI_00001.mp4", "subfolder": "", "type": "output", "format": "video/h264-mp4"}]}}]'
      index 43 task_id is txt2img_1
      index 43 data is b'[{"9": {"images": [{"filename": "ComfyUI_1712647318_8e7f3c93-d2a8-4377-92d5-8eb552adc172_.png", "subfolder": "", "type": "output"}]}, "15": {"images": [{"filename": "ComfyUI_1712647895_.webp", "subfolder": "", "type": "output"}], "animated": [true]}, "19": {"gifs": [{"filename": "comfyUI_00001.mp4", "subfolder": "", "type": "output", "format": "video/h264-mp4"}]}}]'
      index 44 task_id is txt2img_2
      index 44 data is b'[{"9": {"images": [{"filename": "ComfyUI_1712647318_8e7f3c93-d2a8-4377-92d5-8eb552adc172_.png", "subfolder": "", "type": "output"}]}, "15": {"images": [{"filename": "ComfyUI_1712647895_.webp", "subfolder": "", "type": "output"}], "animated": [true]}, "19": {"gifs": [{"filename": "comfyUI_00001.mp4", "subfolder": "", "type": "output", "format": "video/h264-mp4"}]}}]'
      index 45 task_id is txt
      index 45 data is b'[{"9": {"images": [{"filename": "ComfyUI_1712647318_8e7f3c93-d2a8-4377-92d5-8eb552adc172_.png", "subfolder": "", "type": "output"}]}, "15": {"images": [{"filename": "ComfyUI_1712647895_.webp", "subfolder": "", "type": "output"}], "animated": [true]}, "19": {"gifs": [{"filename": "comfyUI_00001.mp4", "subfolder": "", "type": "output", "format": "video/h264-mp4"}]}}]'
      index 46 task_id is txt2img_4
      index 46 data is b'[{"9": {"images": [{"filename": "ComfyUI_1712647318_8e7f3c93-d2a8-4377-92d5-8eb552adc172_.png", "subfolder": "", "type": "output"}]}, "15": {"images": [{"filename": "ComfyUI_1712647895_.webp", "subfolder": "", "type": "output"}], "animated": [true]}, "19": {"gifs": [{"filename": "comfyUI_00001.mp4", "subfolder": "", "type": "output", "format": "video/h264-mp4"}]}}]'

      You can view the inference result files in the output directory of the mounted storage.
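Each message delivered by the watcher is a JSON array encoded as bytes. In the sample output above, the last element of the array maps node IDs to their artifacts. The following hypothetical helper (files_from_sink_message is an illustrative name) pulls the generated file names out of one message body:

```python
import json

def files_from_sink_message(data: bytes) -> list:
    """Extract generated file names from one sink-queue message body.

    The body is a JSON array; intermediate elements are "executed" events,
    and the final element summarizes every node's outputs by node ID.
    """
    summary = json.loads(data)[-1]
    files = []
    for node_output in summary.values():
        for artifacts in node_output.values():
            if isinstance(artifacts, list):
                files += [a["filename"] for a in artifacts
                          if isinstance(a, dict) and "filename" in a]
    return files
```

For index 43 in the sample output above, this helper returns the PNG, WEBP, and MP4 file names reported by nodes 9, 15, and 19.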

More usage

Use your own workflow in ComfyUI

You can open a workflow from your local file system by selecting Workflow > Open in the upper-left corner of the WebUI page.

image

Mount a custom model and install the ComfyUI plug-in

To use a custom model or install missing nodes (ComfyUI plug-ins), you must mount OSS or NAS storage to the service. If you use custom deployment, you must also append the --data-dir <mount directory> parameter to the command that you enter. For more information, see Method 2: Custom deployment. After you deploy the service, the system automatically creates the directory structure shown in the following figure in the mounted OSS bucket or NAS file system:

image

Where:

  • custom_nodes: This directory is used to store ComfyUI plug-ins.

  • models: This directory is used to store model files.

To do so, perform the following steps:

  1. Upload the model file or plug-in. If you use OSS, see Step 2: Upload files.

    Note

    We recommend that you do not directly install plug-ins in ComfyUI Manager by pulling code from GitHub or other platforms, or download models from the Internet, because the network connection may fail.

    Upload model files

    Upload model files to the appropriate subdirectory of the models directory in the mounted storage. To determine the correct subdirectory, refer to the documentation of the open source project that provides the corresponding node. Examples:

    • For a Checkpoint loader node, you must upload the model to the models/checkpoints path.

    • For a style model loader, you must upload the model to the models/styles path.

    Upload plug-ins

    We recommend that you upload third-party ComfyUI plug-ins to the custom_nodes directory of the mounted storage.

  2. Restart the service.

    After you upload a model or plug-in to the mounted storage, you must restart the service for the model or plug-in to take effect. You can click Restart in ComfyUI Manager. The restart process requires approximately 5 minutes.

    image
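Step 1 above (uploading a model file to the mounted OSS bucket) can be sketched with the official OSS Python SDK (pip install oss2). This is an illustrative sketch, not the only supported upload path: model_dest_key and upload_model are hypothetical helper names, and the bucket details are assumptions; put_object_from_file is the SDK call.

```python
import os
import posixpath

def model_dest_key(model_type: str, local_path: str) -> str:
    """Map a local model file to its key under the models/ directory layout
    described above, e.g. ("checkpoints", "/tmp/svd.safetensors")
    -> "models/checkpoints/svd.safetensors"."""
    return posixpath.join("models", model_type, os.path.basename(local_path))

def upload_model(bucket, model_type: str, local_path: str) -> None:
    # `bucket` is an oss2.Bucket(oss2.Auth(ak, sk), endpoint, bucket_name)
    # instance from the OSS Python SDK.
    bucket.put_object_from_file(model_dest_key(model_type, local_path), local_path)
```

After the upload, restart the service as described in Step 2 so that the model takes effect.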

FAQ

What do I do if the service is stuck or ComfyUI fails to generate images?

In most cases, this occurs because the resource specifications do not meet your business requirements. Check whether the service image and resource specifications are configured as expected. We recommend the GU30, A10, or T4 GPU types. For example, the ml.gu7i.c16m60.1-gu30 instance type is cost-effective.

What do I do if the model loader displays "undefined"?

Check whether the model directory is correctly configured based on the requirements of the model loader.

If you uploaded a model file while the service was running, restart the service for the model to take effect.

How does xFormers accelerate image generation?

xFormers is an open source acceleration library for Transformer-based models that speeds up image and video generation and improves GPU utilization. By default, xFormers acceleration is enabled for ComfyUI image-based deployment. The acceleration effect varies with the workflow, and workloads that run on NVIDIA GPUs benefit the most.

What is the difference between EAS and Function Compute when deploying the Serverless edition of ComfyUI?

  • EAS: suitable for stateful services with a long running duration. The Serverless edition of ComfyUI deployed in EAS allows you to deploy models as online inference services or AI-powered web applications with a few clicks and provides features such as automatic scaling and blue-green deployment. You can deploy ComfyUI in EAS by using scenario-based model deployment or custom deployment.

  • Function Compute: suitable for services that require high-quality image generation. The Serverless edition of ComfyUI deployed in Function Compute is based on a serverless architecture and uses the pay-as-you-go billing method. It provides features such as automatic scaling and allows you to use ComfyUI custom models and install plug-ins. You can create applications, select ComfyUI templates, and specify configuration items in the Function Compute 3.0 console.

How do I view the available model files and ComfyUI plug-ins?

  • For default ComfyUI workflows, check the available model files on the corresponding node. For example, you can view the available model files in the drop-down list of the Checkpoint loader.

  • Right-click the WebUI page and select Add Node from the shortcut menu to view all installed ComfyUI plug-ins.

How do I change the default language of the WebUI page?

  1. On the Elastic Algorithm Service (EAS) page, find the service that you want to call and click View Web App in the Service Type column.

  2. On the WebUI page, click the settings icon (image) in the lower-left corner.

  3. In the Settings dialog box, set the language in the following two locations. After you set both, refresh the page for the changes to take effect.

    • In the left-side navigation pane, select Comfy. In the settings area on the right, set the target language.

    • In the left-side navigation pane, select AGL. In the Locale section on the right, set the target language.

References

  • ComfyUI-based API Edition services use asynchronous queues. For more information about asynchronous calls, see Deploy an asynchronous inference service.

  • You can also use EAS to deploy the following items:

    • You can deploy an LLM application that can be called by using the web UI or API operations. After the LLM application is deployed, use the LangChain framework to integrate enterprise knowledge bases into the LLM application to implement intelligent Q&A and automation features. For more information, see Quickly deploy LLMs in EAS.

    • You can deploy a Retrieval-Augmented Generation (RAG)-based LLM chatbot that is suitable for Q&A, summarization, and other natural language processing (NLP) tasks that rely on specific knowledge bases. For more information, see Deploy a RAG-based LLM chatbot.

Appendixes

How the Cluster Edition WebUI service works

As shown in the following figure:

image
  • The Cluster Edition WebUI service is suitable for multi-user scenarios. The service decouples the client and backend inference instances to allow multiple users to reuse backend inference instances at different times. This improves instance utilization and reduces inference costs.

  • The proxy manages client processes and inference instances. Your operations are processed in your own user process, and you can manage only files in public directories and your personal directory, which isolates the working directories of team members. When a request requires an inference instance, the proxy selects an available instance from the backend pool to process it.