Overview
WaveAPI supports multiple video generation models, all accessible through the unified async task endpoint `/v1/tasks`. Video generation typically takes longer than other tasks, so use polling or webhook callbacks to retrieve results.
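The polling pattern mentioned above can be sketched as follows. The endpoint `/v1/tasks` comes from this document; the `fetch_task_status` helper, the status values (`"pending"`, `"succeeded"`, `"failed"`), and the response fields are illustrative assumptions, not the documented API shape.

```python
import time

def fetch_task_status(task_id):
    # Stand-in for an HTTP GET to /v1/tasks/{task_id}. Here we simulate
    # a task that reaches "succeeded" on the third poll; in real code
    # this would be an authenticated HTTP request.
    fetch_task_status.calls += 1
    if fetch_task_status.calls < 3:
        return {"task_id": task_id, "status": "pending"}
    return {"task_id": task_id, "status": "succeeded",
            "video_url": "https://example.com/result.mp4"}
fetch_task_status.calls = 0

def wait_for_task(task_id, interval=5, timeout=600):
    """Poll until the task reaches a terminal state or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = fetch_task_status(task_id)
        if task["status"] in ("succeeded", "failed"):
            return task
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish within {timeout}s")

result = wait_for_task("task-123", interval=0)  # interval=0 only for the demo
print(result["status"])  # → succeeded
```

For long videos, prefer `callback_url` over polling where possible: the server notifies you on completion instead of you burning requests on intermediate polls.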
Request Endpoint
Common Request Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model name |
| action | string | No | Defaults to `"generate"` |
| prompt | string | Yes | Video description text |
| duration | integer | No | Video duration in seconds |
| aspect_ratio | string | No | Aspect ratio, e.g., `"16:9"` |
| first_frame | string | No | First-frame image URL (image-to-video) |
| last_frame | string | No | Last-frame image URL (requires `first_frame`) |
| video_url | string | No | Reference video URL |
| video_urls | array | No | Multiple reference video URLs |
| image_url | string | No | Reference image URL |
| image_urls | array | No | Multiple reference image URLs |
| generate_audio | boolean | No | Whether to also generate audio |
| watermark | boolean | No | Whether to add a watermark |
| callback_url | string | No | Webhook callback URL invoked on completion |
| extra | object | No | Model-specific pass-through parameters |
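A minimal request body assembled from the parameters above might look like this. The model name `"veo3"`, the prompt, and the URLs are placeholder values chosen for illustration; check each model's page for accepted model names and parameter support.

```python
import json

# Text-to-video request body built from the common parameters in the
# table above. Only "model" and "prompt" are required; the rest are
# optional and model-dependent.
payload = {
    "model": "veo3",                                          # required
    "prompt": "A red fox running through fresh snow at dawn", # required
    "duration": 8,                                            # seconds
    "aspect_ratio": "16:9",
    "generate_audio": True,
    "callback_url": "https://example.com/webhooks/waveapi",
}

# Serialize and POST this as JSON to /v1/tasks
# (authentication headers omitted here).
body = json.dumps(payload)
print(body)
```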
Response Format
Supported Models
VEO3
Google video generation model
Sora2
OpenAI video generation model
Kling
Kuaishou Kling series
More Models
Hailuo, Doubao, Vidu, Wan, and more
Generation Modes
Most video models support the following modes:

| Mode | Parameters | Description |
|---|---|---|
| Text-to-video | prompt | Generate video from text description |
| Image-to-video | prompt + first_frame or image_url | Generate video from an image |
| First/Last frame | first_frame + last_frame | Generate a transition video between specified start and end frames |
Not all models support all modes. Refer to each model’s documentation for specific support details.
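The three modes in the table above differ only in which parameters accompany the prompt. A sketch of the corresponding request bodies, using `"kling"` and placeholder URLs as illustrative values:

```python
# Text-to-video: prompt only.
text_to_video = {
    "model": "kling",
    "prompt": "Ocean waves crashing on a rocky shore at sunset",
}

# Image-to-video: prompt plus a starting image (first_frame or image_url,
# depending on the model).
image_to_video = {
    "model": "kling",
    "prompt": "The camera slowly zooms in",
    "first_frame": "https://example.com/start.jpg",
}

# First/last frame: the model generates a transition between the two
# frames; last_frame requires first_frame to be set.
first_last_frame = {
    "model": "kling",
    "prompt": "A smooth transition between the two frames",
    "first_frame": "https://example.com/start.jpg",
    "last_frame": "https://example.com/end.jpg",
}
```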