rageval - v0.1.1

    Interface AzureOpenAIProviderConfig

    Configuration for the Azure OpenAI provider.

    Pass an AzureOpenAI client from the openai package — it exposes the same .chat.completions.create() interface as the standard OpenAI client.

    The model field should match the deployment name in your Azure resource, which may differ from the underlying model name (e.g. your deployment might be called "my-gpt4o" even though the underlying model is gpt-4o).

    Custom / self-hosted endpoints (Ollama, LocalAI, vLLM): use type: 'openai' with a custom baseURL on a standard OpenAI client instead:

    import OpenAI from 'openai'
    const client = new OpenAI({ baseURL: 'http://localhost:11434/v1', apiKey: 'ollama' })
    evaluate({ provider: { type: 'openai', client, model: 'llama3' }, ... })
    A typical Azure setup:

    import { AzureOpenAI } from 'openai'
    import { evaluate } from 'rageval'

    const client = new AzureOpenAI({
      endpoint: 'https://my-resource.openai.azure.com',
      apiKey: process.env.AZURE_OPENAI_API_KEY,
      apiVersion: '2025-01-01-preview',
    })

    await evaluate({
      provider: { type: 'azure', client, model: 'gpt-4o' },
      dataset: myDataset,
    })
    interface AzureOpenAIProviderConfig {
        type: "azure";
        client: {
            chat: {
                completions: {
                    create: (
                        params: {
                            model: string;
                            max_tokens: number;
                            temperature?: number;
                            messages: { role: "user" | "assistant" | "system"; content: string }[];
                        },
                        options?: { signal?: AbortSignal },
                    ) => Promise<
                        { choices: { message?: { content?: (...) | (...) | (...) } }[] }
                    >;
                };
            };
        };
        model?: string;
        maxTokens?: number;
        temperature?: number;
        retries?: number;
    }

    Properties

    type: "azure"
    client: {
        chat: {
            completions: {
                create: (
                    params: {
                        model: string;
                        max_tokens: number;
                        temperature?: number;
                        messages: { role: "user" | "assistant" | "system"; content: string }[];
                    },
                    options?: { signal?: AbortSignal },
                ) => Promise<
                    { choices: { message?: { content?: (...) | (...) | (...) } }[] }
                >;
            };
        };
    }

    An AzureOpenAI client instance from the openai package. Uses structural typing — any object with .chat.completions.create() works.
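    Because the client type is structural, a hand-rolled stub can stand in for a real AzureOpenAI client, which is useful in unit tests. A minimal sketch (the stub and its canned "echo" reply are hypothetical, not part of rageval):

    ```typescript
    // Hypothetical stub satisfying the structural client type above,
    // with no openai import and no network calls.
    type ChatMessage = { role: "user" | "assistant" | "system"; content: string };

    const stubClient = {
      chat: {
        completions: {
          create: async (
            params: {
              model: string;
              max_tokens: number;
              temperature?: number;
              messages: ChatMessage[];
            },
            options?: { signal?: AbortSignal },
          ) => ({
            // Echo the last message back as a canned "completion".
            choices: [
              {
                message: {
                  content: `echo: ${params.messages[params.messages.length - 1].content}`,
                },
              },
            ],
          }),
        },
      },
    };

    // The stub can be passed as `client` in the provider config,
    // e.g. { type: 'azure', client: stubClient, model: 'test' }.
    stubClient.chat.completions
      .create({ model: "test", max_tokens: 16, messages: [{ role: "user", content: "hi" }] })
      .then((res) => console.log(res.choices[0].message.content)); // logs "echo: hi"
    ```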

    model?: string

    Azure deployment name (often matches the underlying model name, e.g. 'gpt-4o').

    Default: 'gpt-4o'
    
    maxTokens?: number

    Maximum tokens in the judge's response.

    Default: 2048
    
    temperature?: number

    Sampling temperature. Set to 0 for deterministic evaluation runs.

    retries?: number

    Number of retry attempts for transient errors (rate limits, 5xx).

    Default: 2
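    Taken together, the optional fields compose like this. The values shown are illustrative, not recommendations, and myDataset is assumed from the earlier example:

    ```typescript
    import { AzureOpenAI } from 'openai'
    import { evaluate } from 'rageval'

    const client = new AzureOpenAI({
      endpoint: 'https://my-resource.openai.azure.com',
      apiKey: process.env.AZURE_OPENAI_API_KEY,
      apiVersion: '2025-01-01-preview',
    })

    await evaluate({
      provider: {
        type: 'azure',
        client,
        model: 'my-gpt4o',  // Azure deployment name, not necessarily the model name
        maxTokens: 2048,    // same as the default
        temperature: 0,     // deterministic evaluation runs
        retries: 3,         // one more attempt than the default of 2
      },
      dataset: myDataset,
    })
    ```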