Images API v1

The images API endpoint allows you to generate AI images using various models from different providers like OpenAI, Together AI, Google, and OpenRouter. Generate stunning visuals from text prompts with a unified API interface.


You will need to generate an API key to authenticate your requests. For more information, visit the User/Org API key documentation.

LLM API Keys Required

You'll need to add the LLM API keys for the image generation providers you want to use. The API key should be passed in the LB-LLM-Key header.


POST /v1/images

Generate AI images by sending a text prompt to the images API endpoint.

Headers

  • Content-Type (string, required)

    Request content type. Must be application/json.

  • Authorization (string, required)

    Bearer token for the Langbase API. Replace <YOUR_API_KEY> with your user/org API key.

  • LB-LLM-Key (string, required)

    Your LLM provider API key (OpenAI, Together AI, Google, etc.).


Request Body

  • prompt (string, required)

    A text description of the image you want to generate. Be descriptive for best results.

  • model (string, required)

    The image generation model to use, in the format provider:model-name.

    Examples:

    • openai:gpt-image-1
    • together:black-forest-labs/FLUX.1-schnell-Free
    • google:gemini-2.5-flash-image-preview
    • openrouter:google/gemini-2.5-flash-image-preview

  • width (number, optional)

    The width of the generated image in pixels. Default varies by model. Common values: 1024, 1792, 512.

  • height (number, optional)

    The height of the generated image in pixels. Default varies by model. Common values: 1024, 1792, 512.

  • n (number, optional)

    The number of images to generate. Default: 1.

  • steps (number, optional)

    Number of inference steps, for models that support it. Higher values may produce better quality but take longer.

  • image_url (string, optional)

    Base image URL for image-to-image generation, if supported by the model.
Install the SDK

npm i langbase

Environment variables

.env file

LANGBASE_API_KEY="<YOUR_API_KEY>"
OPENAI_API_KEY="<YOUR_OPENAI_KEY>"

Generate an image

Generate Image

POST
/v1/images
curl https://api.langbase.com/v1/images \
  -X POST \
  -H 'Authorization: Bearer <YOUR_API_KEY>' \
  -H 'LB-LLM-Key: <YOUR_OPENAI_KEY>' \
  -H 'Content-Type: application/json' \
  -d '{
    "prompt": "A futuristic cityscape with flying cars and neon lights",
    "model": "openai:gpt-image-1",
    "width": 1024,
    "height": 1024
  }'
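The same request can be sketched in TypeScript with fetch. The endpoint, header names, and body fields come from this page; the helper names (buildImageRequest, generateImage) and the error handling are our own.

```typescript
// Build the fetch options for POST /v1/images. Keeping request construction
// separate from the network call makes the options easy to inspect.
function buildImageRequest(
  langbaseKey: string,
  llmKey: string,
  body: { prompt: string; model: string; width?: number; height?: number }
): RequestInit {
  return {
    method: "POST",
    headers: {
      Authorization: `Bearer ${langbaseKey}`,
      "LB-LLM-Key": llmKey,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  };
}

async function generateImage(langbaseKey: string, llmKey: string) {
  const res = await fetch(
    "https://api.langbase.com/v1/images",
    buildImageRequest(langbaseKey, llmKey, {
      prompt: "A futuristic cityscape with flying cars and neon lights",
      model: "openai:gpt-image-1",
      width: 1024,
      height: 1024,
    })
  );
  if (!res.ok) throw new Error(`Image request failed: ${res.status}`);
  return res.json();
}
```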

Response

The endpoint returns an ImageGenerationResponse object.

ImageGenerationResponse Object

interface ImageGenerationResponse {
  id: string;
  provider: string;
  model: string;
  object: string;
  created: number;
  choices: Choice[];
  usage: Usage;
}
  • id (string)

    Unique identifier for the image generation request.

  • provider (string)

    The provider used for image generation (e.g., "openai", "together", "google").

  • model (string)

    The model used for image generation.

  • object (string)

    Object type, typically "chat.completion".

  • created (number)

    Unix timestamp of when the image was created.

  • choices (Choice[])

    Array of generated image results. Contains more than one element if n is greater than 1.

    Choice Object

    interface Choice {
      logprobs: null;
      finish_reason: string;
      native_finish_reason: string;
      index: number;
      message: Message;
    }

  • usage (Usage)

    Token usage statistics for the request.

    Usage Object

    interface Usage {
      prompt_tokens: number;
      completion_tokens: number;
      total_tokens: number;
      prompt_tokens_details: PromptTokensDetails;
      completion_tokens_details: CompletionTokensDetails;
    }

API Response

{
  "id": "chatcmpl-xxx",
  "provider": "openai",
  "model": "gpt-image-1",
  "object": "chat.completion",
  "created": 1234567890,
  "choices": [
    {
      "logprobs": null,
      "finish_reason": "stop",
      "native_finish_reason": "stop",
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "images": [
          {
            "type": "image_url",
            "image_url": {
              "url": "https://oaidalleapiprodscus.blob.core.windows.net/..."
            },
            "index": 0
          }
        ]
      }
    }
  ],
  "usage": {
    "prompt_tokens": 15,
    "completion_tokens": 0,
    "total_tokens": 15,
    "prompt_tokens_details": {
      "cached_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "image_tokens": 1
    }
  }
}
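A common follow-up step is pulling the generated image URLs out of the nested choices structure. This is a sketch against the response shape shown above; the sample object and its URL are hypothetical stand-ins for a real API response, and the helper name extractImageUrls is our own.

```typescript
// Minimal slice of the documented response shape needed for URL extraction.
interface ImageEntry {
  type: string;
  image_url: { url: string };
  index: number;
}
interface ResponseLike {
  choices: { message: { images: ImageEntry[] } }[];
}

// Collect every image URL across all choices (there can be several when n > 1).
function extractImageUrls(res: ResponseLike): string[] {
  return res.choices.flatMap((c) => c.message.images.map((img) => img.image_url.url));
}

// Hypothetical sample mirroring the documented example response.
const sample: ResponseLike = {
  choices: [
    {
      message: {
        images: [
          {
            type: "image_url",
            image_url: { url: "https://example.com/image.png" },
            index: 0,
          },
        ],
      },
    },
  ],
};

console.log(extractImageUrls(sample)); // logs the single URL from the sample
```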