# gemini-3-pro-image-preview

> LLM model from google, available via the Core.Today API.

- **Provider**: google
- **Model ID**: gemini-3-pro-image-preview
- **Category**: chat
- **Pricing Type**: token_based

## API Endpoint

Base URL: `https://api.core.today`

### Chat Completions

POST /llm/gemini/v1beta/openai/chat/completions

## Authentication

Header: `Authorization: Bearer YOUR_API_KEY`

Note: Your Core.Today API key (`cdt_xxx`) works as the Bearer token.

## Input Parameters

- `model` (string, **required**): Model identifier. Use `gemini-3-pro-image-preview`.
- `messages` (array, **required**): Array of message objects with `role` and `content`.
- `temperature` (number, optional): Sampling temperature (default: `1.0`; range: 0 to 2).
- `max_tokens` (integer, optional): Maximum number of tokens to generate.
- `stream` (boolean, optional): Enable streaming responses (default: `false`).
- `top_p` (number, optional): Nucleus sampling parameter (default: `1.0`).

## Example Request

```json
{
  "model": "gemini-3-pro-image-preview",
  "messages": [
    { "role": "user", "content": "Hello, how are you?" }
  ]
}
```

## Response Format

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "gemini-3-pro-image-preview",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Response text here"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 20,
    "total_tokens": 30
  }
}
```

## Usage Flow

1. POST /llm/gemini/v1beta/openai/chat/completions with `model` and `messages`.
2. The response is returned synchronously (or streamed if `stream=true`).
3. Credits are deducted based on token usage.

## Token Pricing

- Input: 0.004 credits/token
- Text Output: 0.024 credits/token
- Image Output: 0.24 credits/token

Note: When this model generates images, output tokens for image data are charged at the Image Output rate; text output uses the Text Output rate. Approximate token counts: ~1,120 tokens per 1024px image.
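As a concrete sketch of the endpoint, headers, and parameters above, here is a minimal Python client using only the standard library. The `CORE_TODAY_API_KEY` environment variable and the `build_request` helper are illustrative names chosen here, not part of the API.

```python
import json
import os
import urllib.request

# Endpoint and header format taken from this page.
BASE_URL = "https://api.core.today"
ENDPOINT = "/llm/gemini/v1beta/openai/chat/completions"

def build_request(messages, api_key, **options):
    """Build a POST request for the chat completions endpoint.

    Extra keyword arguments (temperature, max_tokens, stream, top_p)
    are passed through into the JSON body.
    """
    payload = {"model": "gemini-3-pro-image-preview", "messages": messages}
    payload.update(options)
    return urllib.request.Request(
        BASE_URL + ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request(
    [{"role": "user", "content": "Hello, how are you?"}],
    api_key=os.environ.get("CORE_TODAY_API_KEY", "cdt_xxx"),
    temperature=1.0,
)
print(req.full_url)
# To send: urllib.request.urlopen(req); the JSON body of the reply
# follows the "Response Format" section above.
```

Because the endpoint follows the OpenAI chat completions shape, OpenAI-compatible SDKs pointed at the base URL above should also work.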
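The per-token rates above can be turned into a quick cost estimate from the `usage` object in a response. A minimal sketch (the `estimate_credits` helper is our own; the rates are taken from the pricing table):

```python
# Credit rates from the Token Pricing table (credits per token).
INPUT_RATE = 0.004
TEXT_OUTPUT_RATE = 0.024
IMAGE_OUTPUT_RATE = 0.24

def estimate_credits(prompt_tokens, text_tokens, image_tokens=0):
    """Estimate the credit cost of one request at this model's rates."""
    return (prompt_tokens * INPUT_RATE
            + text_tokens * TEXT_OUTPUT_RATE
            + image_tokens * IMAGE_OUTPUT_RATE)

# Text-only call, using the usage counts from the example response:
print(round(estimate_credits(10, 20), 3))        # 0.52

# A ~1024px generated image is roughly 1,120 output tokens:
print(round(estimate_credits(10, 0, 1120), 2))   # 268.84
```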
## Image Generation

This model can generate images via the chat completions API. Include an image generation instruction in your message content.

### Example (Image Generation)

```json
{
  "model": "gemini-3-pro-image-preview",
  "messages": [
    { "role": "user", "content": "Generate an image of a sunset over the ocean" }
  ]
}
```

The response may include base64-encoded image data in the content.

## Tags

google, chat