Images: langbase.images.generate()

Images, an AI Primitive by Langbase, allows you to generate AI images using various models from different providers like OpenAI, Together AI, Google, and OpenRouter. Create stunning visuals from text prompts with a unified API interface.

Image generation enables you to create custom visuals for your applications, from marketing materials to user-generated content, using state-of-the-art AI models without managing multiple provider integrations.

You can use the images.generate() function to generate AI images from text descriptions. This is particularly useful for content creation, design workflows, and creative applications.


You will need to generate an API key to authenticate your requests. For more information, visit the User/Org API key documentation.

LLM API Keys Required

You'll need to add the LLM API keys for the image generation providers you want to use. Pass your provider API key in the apiKey parameter.


API reference

Generate AI images by running the langbase.images.generate() function.

Function Signature

```ts
langbase.images.generate(options);

// with types
langbase.images.generate(options: ImageGenerationOptions);
```

  • Name
    options
    Type
    ImageGenerationOptions
    Description

    ImageGenerationOptions Object

    prompt: string;
    model: string;
    width?: number;
    height?: number;
    n?: number;
    steps?: number;
    image_url?: string;
    apiKey: string;

    Following are the properties of the options object.


prompt

  • Name
    prompt
    Type
    string
    Required
    Description

    A text description of the image you want to generate. Be descriptive for best results.

model

  • Name
    model
    Type
    string
    Required
    Description

    The image generation model to use. Format: provider:model-name

    Examples:

    • openai:gpt-image-1
    • openai:dall-e-3
    • openai:dall-e-2
    • together:black-forest-labs/FLUX.1-schnell-Free
    • google:gemini-2.5-flash-image-preview
    • openrouter:google/gemini-2.5-flash-image-preview
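The provider prefix is everything before the first colon; the remainder is the model identifier, which can itself contain slashes (as in the OpenRouter example above). A minimal sketch of splitting such a string — `splitModel` is illustrative only, not an SDK function:

```typescript
// Split a "provider:model-name" string on the FIRST colon only, since the
// model identifier may contain further colons or slashes.
function splitModel(model: string): { provider: string; name: string } {
  const idx = model.indexOf(':');
  if (idx === -1) {
    throw new Error(`Expected "provider:model-name", got "${model}"`);
  }
  return { provider: model.slice(0, idx), name: model.slice(idx + 1) };
}
```

For example, `splitModel('openrouter:google/gemini-2.5-flash-image-preview')` yields provider `openrouter` and name `google/gemini-2.5-flash-image-preview`.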

apiKey

  • Name
    apiKey
    Type
    string
    Required
    Description

    Your LLM provider API key (OpenAI, Together AI, Google, etc.).

width

  • Name
    width
    Type
    number
    Description

    The width of the generated image in pixels. Default varies by model.

    Common values: 1024, 1792, 512

height

  • Name
    height
    Type
    number
    Description

    The height of the generated image in pixels. Default varies by model.

    Common values: 1024, 1792, 512
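For example, a landscape-format request might combine the common values above. This is a sketch of an options object only; whether a given width/height pair is accepted depends on the model:

```typescript
// Hypothetical request options for a wide (1792x1024) image. Supported
// dimensions vary by model; these values are from the common values above.
const widescreenOptions = {
  prompt: 'A panoramic mountain landscape at sunrise',
  model: 'openai:dall-e-3',
  width: 1792,
  height: 1024,
  apiKey: process.env.OPENAI_API_KEY ?? '',
};

// Pass to the SDK as usual:
// const result = await langbase.images.generate(widescreenOptions);
```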

n

  • Name
    n
    Type
    number
    Description

    The number of images to generate.

    Default: 1

steps

  • Name
    steps
    Type
    number
    Description

    Number of inference steps (for models that support it).

    Higher values may produce better quality but take longer.

image_url

  • Name
    image_url
    Type
    string
    Description

    Base image URL for image-to-image generation (if supported by the model).
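A sketch of an image-to-image request body, assuming a model that supports it. The model choice and source URL here are placeholders; the parameter name follows the options table above:

```typescript
// Hypothetical image-to-image request: pass a publicly reachable source
// image via image_url alongside the prompt. Only honored by models that
// support image-to-image generation.
const imageToImageOptions = {
  prompt: 'Repaint this photo in a watercolor style',
  model: 'together:black-forest-labs/FLUX.1-schnell-Free',
  image_url: 'https://example.com/source-photo.png', // placeholder base image
  apiKey: process.env.LLM_API_KEY ?? '',
};

// Pass to the SDK as usual:
// const result = await langbase.images.generate(imageToImageOptions);
```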

Usage example

Install the SDK

npm i langbase

Environment variables

.env file

LANGBASE_API_KEY="<USER/ORG-API-KEY>"
OPENAI_API_KEY="<YOUR-OPENAI-KEY>"

langbase.images.generate() examples

langbase.images.generate()

```ts
import { Langbase } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const result = await langbase.images.generate({
    prompt: "A futuristic cityscape with flying cars and neon lights",
    model: "openai:gpt-image-1",
    width: 1024,
    height: 1024,
    apiKey: process.env.OPENAI_API_KEY!
  });

  console.log('Image URL:', result.choices[0].message.images[0].image_url.url);
}

main();
```

Response of langbase.images.generate() is a Promise<ImageGenerationResponse>.

ImageGenerationResponse Type

```ts
interface ImageGenerationResponse {
  id: string;
  provider: string;
  model: string;
  object: string;
  created: number;
  choices: Array<{
    logprobs: null;
    finish_reason: string;
    native_finish_reason: string;
    index: number;
    message: {
      role: string;
      content: string | null;
      images: Array<{
        type: string;
        image_url: {
          url: string;
        };
        index: number;
      }>;
    };
  }>;
  usage: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
    prompt_tokens_details: {
      cached_tokens: number;
    };
    completion_tokens_details: {
      reasoning_tokens: number;
      image_tokens: number;
    };
  };
}
```
  • Name
    id
    Type
    string
    Description

    Unique identifier for the image generation request.

  • Name
    provider
    Type
    string
    Description

    The provider used for image generation (e.g., "openai", "together", "google").

  • Name
    model
    Type
    string
    Description

    The model used for image generation.

  • Name
    object
    Type
    string
    Description

    The object type, typically "chat.completion".

  • Name
    created
    Type
    number
    Description

    Unix timestamp of when the image was generated.

  • Name
    choices
    Type
    Array<Choice>
    Description

    Array of generated image results. Each choice contains the generated images in the message.images array.

  • Name
    usage
    Type
    Usage
    Description

    Token usage statistics for the request.

ImageGenerationResponse Example

```json
{
  "id": "chatcmpl-xxx",
  "provider": "openai",
  "model": "gpt-image-1",
  "object": "chat.completion",
  "created": 1234567890,
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "images": [
          {
            "type": "image_url",
            "image_url": {
              "url": "https://oaidalleapiprodscus.blob.core.windows.net/..."
            },
            "index": 0
          }
        ]
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 15,
    "completion_tokens": 0,
    "total_tokens": 15
  }
}
```
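Given the response shape above, a small helper can collect every generated image URL across all choices. `extractImageUrls` is illustrative, not part of the SDK:

```typescript
// Collect every generated image URL from an ImageGenerationResponse-shaped
// object. Each choice's message.images array may hold several images, so
// flatten across all choices.
type ImageEntry = { image_url: { url: string } };
type ResponseLike = {
  choices: Array<{ message: { images: ImageEntry[] } }>;
};

function extractImageUrls(response: ResponseLike): string[] {
  return response.choices.flatMap(choice =>
    choice.message.images.map(image => image.image_url.url)
  );
}
```

Applied to the example response above, this returns an array with the single blob-storage URL.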