Text Generation (Chat Completions)
Chat-completion interface for the GLM-4 series of language models. Fully compatible with the OpenAI Chat Completions API format; supports streaming output and function calling.
API Endpoints
POST /chat/completions
Create a chat completion
Request Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Required | Model name: GLM-4-Plus, GLM-4-Air, GLM-4-AirX, GLM-4-Long, GLM-4-FlashX, GLM-4-Flash |
| messages | array | Required | Array of conversation messages, each with a role and content |
| temperature | number | Optional | Sampling temperature, range 0-1, default 0.7 |
| max_tokens | integer | Optional | Maximum number of tokens to generate |
| stream | boolean | Optional | Whether to use streaming output, default false |
| top_p | number | Optional | Nucleus sampling parameter, default 0.9 |
| tools | array | Optional | List of available tools (function calling) |
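The `tools` parameter follows the OpenAI function-calling schema: each entry declares a function with a JSON Schema description of its arguments, which the model can then choose to call. A minimal sketch of one such definition (the `get_weather` function, its description, and its fields are hypothetical, not part of this API's built-ins):

```python
# Hypothetical tool definition in the OpenAI function-calling schema.
# The function name and its parameters are illustrative only.
def build_weather_tool():
    return {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"],
            },
        },
    }

# The `tools` request field is an array of such definitions.
tools = [build_weather_tool()]
```

When the model decides to call a declared function, the response's `finish_reason` is `tool_calls` and the assistant message carries the chosen function name and JSON-encoded arguments instead of plain `content`.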
Request Example

```json
{
  "model": "GLM-4-Air",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "Please give an introduction to Zhipu AI"}
  ],
  "temperature": 0.7,
  "top_p": 0.9,
  "stream": false
}
```

Response Example
```json
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "GLM-4-Air",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Zhipu AI is a company focused on large models and cognitive intelligence technology..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 100,
    "total_tokens": 120
  }
}
```

Code Examples
Python

```python
from openai import OpenAI

# Via the proxy API
client = OpenAI(
    api_key="your-api-key",
    base_url="https://your-proxy-domain.com/v1"
)

response = client.chat.completions.create(
    model="GLM-4-Air",
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Please give an introduction to Zhipu AI"}
    ],
    temperature=0.7,
    stream=False
)
print(response.choices[0].message.content)
```

JavaScript
```javascript
import OpenAI from 'openai';

// Via the proxy API
const client = new OpenAI({
  apiKey: 'your-api-key',
  baseURL: 'https://your-proxy-domain.com/v1'
});

async function chat() {
  const response = await client.chat.completions.create({
    model: 'GLM-4-Air',
    messages: [
      { role: 'system', content: 'You are a helpful assistant' },
      { role: 'user', content: 'Please give an introduction to Zhipu AI' }
    ],
    temperature: 0.7,
    stream: false
  });
  console.log(response.choices[0].message.content);
}

chat();
```

cURL
```bash
curl https://your-proxy-domain.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d '{
    "model": "GLM-4-Air",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant"},
      {"role": "user", "content": "Please give an introduction to Zhipu AI"}
    ],
    "temperature": 0.7
  }'
```