# gpt-5.1-2025-11-13

> LLM model from openai, available via the Core.Today API.

- **Provider**: openai
- **Model ID**: gpt-5.1-2025-11-13
- **Category**: chat
- **Pricing Type**: token_based

## API Endpoint

Base URL: `https://api.core.today`

### Chat Completions

`POST /llm/openai/v1/chat/completions`

## Authentication

Header: `Authorization: Bearer YOUR_API_KEY`

Note: Your Core.Today API key (`cdt_xxx`) works as the Bearer token.

## Input Parameters

- `model` (string, **required**): Model identifier. Use `gpt-5.1-2025-11-13`.
- `messages` (array, **required**): Array of message objects with `role` and `content`.
- `temperature` (number, optional): Sampling temperature. Default: `1.0`; range: 0 to 2.
- `max_tokens` (integer, optional): Maximum number of tokens to generate.
- `stream` (boolean, optional): Enable streaming responses. Default: `false`.
- `top_p` (number, optional): Nucleus sampling parameter. Default: `1.0`.

## Example Request

```json
{
  "model": "gpt-5.1-2025-11-13",
  "messages": [
    { "role": "user", "content": "Hello, how are you?" }
  ]
}
```

## Response Format

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "gpt-5.1-2025-11-13",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Response text here"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 20,
    "total_tokens": 30
  }
}
```

## Usage Flow

1. `POST /llm/openai/v1/chat/completions` with `model` and `messages`.
2. The response is returned synchronously (or streamed if `stream=true`).
3. Credits are deducted based on token usage.

## Token Pricing

- Input: 0.0025 credits/token
- Cached Input: 0.00025 credits/token
- Output: 0.02 credits/token

## Tags

openai, chat
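## Example Client Sketch

The endpoint, header, and parameters above can be combined into a request using only Python's standard library. This is a minimal sketch, not an official client: the `build_chat_request` helper is hypothetical, and the `cdt_...` key is a placeholder you must replace with your own.

```python
import json
import urllib.request

# Endpoint details taken from the docs above.
BASE_URL = "https://api.core.today"
CHAT_PATH = "/llm/openai/v1/chat/completions"


def build_chat_request(api_key: str, messages: list, **options) -> urllib.request.Request:
    """Assemble a POST request for the Chat Completions endpoint.

    `options` may carry the optional parameters documented above,
    e.g. temperature, max_tokens, stream, or top_p.
    """
    payload = {"model": "gpt-5.1-2025-11-13", "messages": messages, **options}
    return urllib.request.Request(
        BASE_URL + CHAT_PATH,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            # The Core.Today API key is sent as a Bearer token.
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_chat_request(
    "cdt_your_key_here",  # placeholder -- substitute a real key
    [{"role": "user", "content": "Hello, how are you?"}],
    temperature=0.7,
)

# To actually send the request (requires a valid key and network access):
# with urllib.request.urlopen(req) as resp:
#     reply = json.loads(resp.read())
#     print(reply["choices"][0]["message"]["content"])
```

The send itself is left commented out so the sketch can be read without a live key; any HTTP client (e.g. `requests`, or an OpenAI-compatible SDK pointed at the base URL) should work the same way given the same headers and JSON body.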
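## Estimating Cost

The token pricing table above can be applied to the `usage` object in a response to estimate the credits deducted. The helper below is illustrative, not part of the API; the docs do not name a field for cached input tokens, so the cached count is taken as an optional argument rather than read from the response.

```python
# Rates from the Token Pricing section above (credits per token).
INPUT_RATE = 0.0025
CACHED_INPUT_RATE = 0.00025
OUTPUT_RATE = 0.02


def estimate_credits(prompt_tokens: int, completion_tokens: int,
                     cached_prompt_tokens: int = 0) -> float:
    """Estimate credits deducted for one request.

    Cached prompt tokens are billed at the lower cached-input rate;
    the remainder of the prompt is billed at the full input rate.
    """
    uncached = prompt_tokens - cached_prompt_tokens
    return (uncached * INPUT_RATE
            + cached_prompt_tokens * CACHED_INPUT_RATE
            + completion_tokens * OUTPUT_RATE)


# Using the usage block from the example response (10 prompt, 20 completion):
# 10 * 0.0025 + 20 * 0.02 = 0.025 + 0.4 = 0.425 credits
cost = estimate_credits(10, 20)
```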