Overview

WaveAPI supports multiple video generation models, all accessible through the unified async task endpoint /v1/tasks. Video generation tasks typically take longer to complete than other task types, so retrieve results via polling or a webhook callback rather than waiting on the request.

Request Endpoint

POST https://qingbo.dev/v1/tasks

Common Request Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| model | string | Yes | Model name |
| action | string | No | Defaults to "generate" |
| prompt | string | Yes | Video description text |
| duration | integer | No | Video duration (seconds) |
| aspect_ratio | string | No | Aspect ratio, e.g., "16:9" |
| first_frame | string | No | First frame image URL (image-to-video) |
| last_frame | string | No | Last frame image URL (requires first_frame) |
| video_url | string | No | Reference video URL |
| video_urls | array | No | Multiple reference video URLs |
| image_url | string | No | Reference image URL |
| image_urls | array | No | Multiple reference image URLs |
| generate_audio | boolean | No | Whether to generate audio simultaneously |
| watermark | boolean | No | Whether to add a watermark |
| callback_url | string | No | Webhook callback URL upon completion |
| extra | object | No | Model-specific pass-through parameters |
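A minimal text-to-video request can be assembled from the common parameters above. The following sketch uses only the standard library; the model identifier, the bearer-token auth header, and the `build_task_payload` helper are assumptions for illustration, not part of the documented API:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # assumption: bearer-token auth; check your dashboard

def build_task_payload(prompt, model, **optional):
    """Assemble a /v1/tasks request body, dropping unset optional fields."""
    payload = {"model": model, "prompt": prompt}
    payload.update({k: v for k, v in optional.items() if v is not None})
    return payload

payload = build_task_payload(
    prompt="A paper boat drifting down a rainy street",
    model="veo3",          # assumption: exact model identifiers may differ
    duration=8,
    aspect_ratio="16:9",
    generate_audio=True,
)

req = urllib.request.Request(
    "https://qingbo.dev/v1/tasks",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# response = urllib.request.urlopen(req)  # uncomment with a valid API key
```

Filtering out `None` values keeps the request body limited to parameters you actually set, so model defaults apply for everything else.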

Response Format

```json
{
  "task_id": "task_xxx",
  "status": "completed",
  "result": {
    "videos": [
      {
        "url": "https://...",
        "expires_at": 1720000000
      }
    ]
  }
}
```
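Since tasks complete asynchronously, a client that does not use webhooks polls until the task reaches a terminal status. The loop below is a sketch: the status values other than "completed" and the existence of a task-lookup call are assumptions (consult the task-status documentation for the actual endpoint), so the fetch function is passed in as a callable:

```python
import time

def poll_task(fetch_status, task_id, interval=5.0, timeout=600.0):
    """Poll until the task finishes or the timeout elapses.

    fetch_status: callable(task_id) -> task JSON in the shape shown above.
    Assumed non-terminal statuses: anything other than "completed"/"failed".
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = fetch_status(task_id)
        if task["status"] == "completed":
            # Return the video URLs; note they carry an expires_at timestamp,
            # so download or re-host them before expiry.
            return [v["url"] for v in task["result"]["videos"]]
        if task["status"] == "failed":
            raise RuntimeError(f"task {task_id} failed")
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish within {timeout}s")
```

For production use, prefer the `callback_url` webhook parameter over polling to avoid holding connections and burning request quota.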

Supported Models

VEO3

Google video generation model

Sora2

OpenAI video generation model

Kling

Kuaishou Kling series

More Models

Hailuo, Doubao, Vidu, Wan, and more

Generation Modes

Most video models support the following modes:
| Mode | Parameters | Description |
|------|------------|-------------|
| Text-to-video | prompt | Generate video from text description |
| Image-to-video | prompt + first_frame or image_url | Generate video from an image |
| First/last frame | first_frame + last_frame | Generate a transition video between specified start and end frames |
Not all models support all modes. Refer to each model’s documentation for specific support details.
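The first/last-frame mode combines the two frame parameters into one request body. A sketch of such a payload (the image URLs are placeholders and the model name is an assumption; use any model whose documentation lists this mode):

```python
# First/last-frame transition request body (sketch; URLs are placeholders).
payload = {
    "model": "kling",  # assumption: substitute a model that supports this mode
    "prompt": "Smooth transition from day to night over the same skyline",
    "first_frame": "https://example.com/day.jpg",
    "last_frame": "https://example.com/night.jpg",  # valid only with first_frame
    "duration": 5,
}
```

Per the parameter table, `last_frame` requires `first_frame`; sending it alone is invalid.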