Fully compatible with the Claude Messages API format
Supports multi-turn conversations and single queries
Supports text, images, and other multimodal content
cURL
curl -X POST https://qingbo.dev/v1/messages \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "claude-sonnet-4-5-20250929",
    "max_tokens": 1024,
    "system": "You are a professional AI assistant.",
    "messages": [
      {
        "role": "user",
        "content": "Explain the bubble sort algorithm."
      }
    ]
  }'
{
  "id": "msg_013Zva2CMHLNnXjNJJKqJ2EF",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "Hello! I'm Claude. Nice to meet you."
    }
  ],
  "model": "claude-sonnet-4-5-20250929",
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "usage": {
    "input_tokens": 12,
    "output_tokens": 18
  }
}
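The same request can be issued from Python with the standard library alone. A minimal sketch (the endpoint URL, headers, and model name are taken from the cURL example above; `YOUR_API_KEY` is a placeholder, and `build_request`/`send` are illustrative helper names, not part of the API):

```python
import json
import urllib.request

API_URL = "https://qingbo.dev/v1/messages"
API_KEY = "YOUR_API_KEY"  # placeholder; substitute your real key

def build_request(prompt, system=None,
                  model="claude-sonnet-4-5-20250929", max_tokens=1024):
    """Assemble a Messages API request body from a single user prompt."""
    body = {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    if system is not None:
        body["system"] = system
    return body

def send(body):
    """POST the body with the required headers and return the parsed JSON."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
            "anthropic-version": "2023-06-01",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# reply = send(build_request("Explain the bubble sort algorithm.",
#                            system="You are a professional AI assistant."))
# print(reply["content"][0]["text"])
```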
Authorizations
API key for authentication. Visit the API Key management page to get your API key, then add it to your request headers: Authorization: Bearer YOUR_API_KEY
API version. Specifies the Claude API version to use via the anthropic-version header. Example: 2023-06-01
Body
model: Model name. Supported values:
claude-haiku-4-5-20251001 - Claude 4.5 fast response version
claude-sonnet-4-5-20250929 - Claude 4.5 balanced version
claude-opus-4-1-20250805 - Most powerful Claude 4.1 flagship model
claude-opus-4-1-20250805-thinking - Claude 4.1 Opus deep thinking version
claude-sonnet-4-5-20250929-thinking - Claude 4.5 Sonnet deep thinking version
messages: Message list; supports alternating user and assistant roles. Message object properties:
role: user (user input) or assistant (model response, for multi-turn conversations or prefilling)
content: Message content; a string or an array of content blocks (multimodal)
max_tokens: The maximum number of tokens to generate before stopping. The model may stop before reaching this limit, and maximums vary by model; refer to the model documentation. Minimum: 1
system: System prompt. Sets Claude's role, personality, goals, and instructions. String format:
{
  "system": "You are a professional Python programming tutor"
}
Structured format:
{
  "system": [
    {
      "type": "text",
      "text": "You are a professional Python programming tutor"
    }
  ]
}
temperature: Temperature parameter, range 0-1. Controls output randomness:
Low values (e.g., 0.2): More deterministic, more conservative
High values (e.g., 0.8): More random, more creative
Default: 1.0
top_p: Nucleus sampling parameter, range 0-1. Use either temperature or top_p, not both. Default: 1.0
top_k: Top-K sampling. Samples only from the top K most probable options, removing "long tail" low-probability responses. Recommended for advanced use cases only.
stream: Whether to enable streaming output
true: Stream responses progressively via Server-Sent Events (SSE)
false: Return the complete response at once
Default: false
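With stream: true, the response arrives as Server-Sent Events rather than one JSON body. A minimal sketch of pulling text fragments out of the SSE data lines, assuming the event shapes follow the Anthropic streaming format (the sample lines below are illustrative, not captured output):

```python
import json

def extract_text_deltas(sse_lines):
    """Yield text fragments from content_block_delta SSE data lines."""
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip 'event:' lines and blank keep-alives
        payload = json.loads(line[len("data: "):])
        if payload.get("type") == "content_block_delta":
            delta = payload.get("delta", {})
            if delta.get("type") == "text_delta":
                yield delta["text"]

# Illustrative stream (shape based on the Anthropic streaming event format):
sample = [
    'event: content_block_delta',
    'data: {"type": "content_block_delta", "index": 0,'
    ' "delta": {"type": "text_delta", "text": "Hello"}}',
    'event: content_block_delta',
    'data: {"type": "content_block_delta", "index": 0,'
    ' "delta": {"type": "text_delta", "text": "!"}}',
    'event: message_stop',
    'data: {"type": "message_stop"}',
]
print("".join(extract_text_deltas(sample)))  # prints "Hello!"
```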
stop_sequences: Custom text sequences that stop generation when encountered. Up to 4 sequences, each up to 32 tokens.
metadata: An object for tracking or identifying requests.
Response
id: Unique identifier for the message
type: Object type, always message
role: Role type, always assistant
content: Message content array. Each content block has a type:
text - Text content
tool_use - Tool use (if tools are enabled)
text: Text content (when type is text)
model: The model name actually used
stop_reason: Stop reason. Possible values:
end_turn - Natural completion
max_tokens - Maximum token limit reached
stop_sequence - Stop sequence encountered
tool_use - Tool use
stop_sequence: The stop sequence that triggered the stop (if applicable)
usage: Token usage statistics (input_tokens and output_tokens)
Usage Examples
Single Conversation
{
  "model": "claude-sonnet-4-5-20250929",
  "max_tokens": 1024,
  "messages": [
    { "role": "user", "content": "Explain quantum computing" }
  ]
}
Multi-Turn Conversation
{
  "model": "claude-sonnet-4-5-20250929",
  "max_tokens": 1024,
  "messages": [
    { "role": "user", "content": "Hello" },
    { "role": "assistant", "content": "Hello! I'm Claude." },
    { "role": "user", "content": "Can you explain AI?" }
  ]
}
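A conversation loop is just a matter of appending each turn to the messages array before the next request. A minimal sketch of a history helper (the `Conversation` class and `transport` callable are illustrative names, not part of the API; in real use the transport would be an HTTP client that POSTs the body and returns the response JSON):

```python
class Conversation:
    """Accumulates alternating user/assistant turns for the messages array."""

    def __init__(self, model="claude-sonnet-4-5-20250929", system=None):
        self.model = model
        self.system = system
        self.messages = []

    def ask(self, prompt, transport):
        """Append the user turn, call the API via `transport`, record the reply.

        `transport` is any callable taking the request body and returning the
        parsed response JSON (an HTTP client in real use; stubbed in tests).
        """
        self.messages.append({"role": "user", "content": prompt})
        body = {"model": self.model, "max_tokens": 1024,
                "messages": self.messages}
        if self.system is not None:
            body["system"] = self.system
        reply = transport(body)
        text = reply["content"][0]["text"]
        self.messages.append({"role": "assistant", "content": text})
        return text
```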
Using a System Prompt
{
  "model": "claude-sonnet-4-5-20250929",
  "max_tokens": 1024,
  "system": "You are an experienced data scientist specializing in statistical analysis and machine learning.",
  "messages": [
    { "role": "user", "content": "How do I choose the right machine learning algorithm?" }
  ]
}
Prefilling the Response
{
  "model": "claude-sonnet-4-5-20250929",
  "max_tokens": 1024,
  "messages": [
    { "role": "user", "content": "List 5 Python best practices" },
    { "role": "assistant", "content": "Here are 5 Python best practices:\n\n1." }
  ]
}
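When the last message is an assistant turn, the model continues from it, so the complete answer is the prefill text plus the returned text. A small sketch of building such a request and reassembling the result (helper names are illustrative; the response shape follows the example at the top of this page):

```python
def build_prefill_request(prompt, prefill,
                          model="claude-sonnet-4-5-20250929"):
    """Request body whose final assistant turn seeds the model's reply."""
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": prefill},  # model continues here
        ],
    }

def full_text(prefill, response):
    """The complete answer: prefill + the model's continuation."""
    return prefill + response["content"][0]["text"]
```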