# claude-opus-4-6

> LLM model from Anthropic, available via the Core.Today API.

- **Provider**: anthropic
- **Model ID**: claude-opus-4-6
- **Category**: chat
- **Pricing Type**: token_based

## API Endpoint

Base URL: `https://api.core.today`

### Chat Completions

POST /llm/anthropic/v1/messages

## Authentication

Header: `Authorization: Bearer YOUR_API_KEY`

Note: Your Core.Today API key (`cdt_xxx`) works as the Bearer token.

## Input Parameters

- `model` (string, **required**): Model identifier. Use `claude-opus-4-6`.
- `messages` (array, **required**): Array of message objects with `role` and `content`.
- `temperature` (number, optional): Sampling temperature. Default: `1.0`; range: 0 to 2.
- `max_tokens` (integer, optional): Maximum number of tokens to generate.
- `stream` (boolean, optional): Enable streaming responses. Default: `false`.
- `top_p` (number, optional): Nucleus sampling parameter. Default: `1.0`.

## Example Request

```json
{
  "model": "claude-opus-4-6",
  "messages": [
    { "role": "user", "content": "Hello, how are you?" }
  ]
}
```

## Response Format

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "claude-opus-4-6",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Response text here"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 20,
    "total_tokens": 30
  }
}
```

## Usage Flow

1. POST /llm/anthropic/v1/messages with `model` and `messages`.
2. The response is returned synchronously (or streamed if `stream=true`).
3. Credits are deducted based on token usage.

## Token Pricing

- Input: 0.01 credits/token
- Output: 0.05 credits/token

## Tags

anthropic, chat
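The endpoint, header, and parameters above can be sketched as a small request builder. This is an illustrative helper, not part of the API itself: the function name `build_request` and its defaults are assumptions; the URL, header format, and body fields come from the sections above.

```python
import json

API_BASE = "https://api.core.today"  # Base URL from the docs


def build_request(api_key, user_message, model="claude-opus-4-6", stream=False):
    """Assemble URL, headers, and JSON body for a chat completion call.

    Hypothetical helper: only the URL path, Bearer auth header, and
    body fields (model, messages, stream) are taken from the docs.
    """
    url = f"{API_BASE}/llm/anthropic/v1/messages"
    headers = {
        "Authorization": f"Bearer {api_key}",  # cdt_xxx key works as Bearer token
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": stream,
    })
    return url, headers, body
```

The returned triple can be handed to any HTTP client (e.g. `requests.post(url, headers=headers, data=body)`) to perform step 1 of the usage flow.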
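The token pricing above can be turned into a small cost estimator. A minimal sketch, assuming the `usage` object shape shown in the response format; the helper name `estimate_cost` is illustrative, and the rates are the documented credits/token for this model.

```python
def estimate_cost(usage, input_rate=0.01, output_rate=0.05):
    """Estimate credit cost from a response's `usage` block.

    Illustrative helper: rates default to the documented pricing
    (0.01 credits per input token, 0.05 credits per output token).
    """
    return (usage["prompt_tokens"] * input_rate
            + usage["completion_tokens"] * output_rate)


# Using the usage block from the example response:
usage = {"prompt_tokens": 10, "completion_tokens": 20, "total_tokens": 30}
cost = estimate_cost(usage)  # 10 * 0.01 + 20 * 0.05 = 1.1 credits
```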