Creates an Azure OpenAI LLM provider for use with `evaluate()`.

Azure OpenAI requires a resource endpoint and an API key (or managed identity). Pass an `AzureOpenAI` client from the `openai` package; it exposes the same `.chat.completions.create()` interface as the standard `OpenAI` client.

**Parameters:** Provider configuration including the `AzureOpenAI` client instance.

**Returns:** An `LlmProvider` ready to be passed to `evaluate()`.

**Example:**

```ts
import { AzureOpenAI } from 'openai'
import { evaluate } from 'rageval'

const client = new AzureOpenAI({
  endpoint: 'https://my-resource.openai.azure.com',
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  apiVersion: '2025-01-01-preview',
})

const results = await evaluate({
  provider: { type: 'azure', client, model: 'gpt-4o' },
  dataset: myDataset,
})
```

Custom or self-hosted endpoints (Ollama, LocalAI, vLLM, etc.) can use `{ type: 'openai' }` with a custom `baseURL` instead:
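A minimal sketch of that configuration, assuming the `{ type: 'openai' }` provider accepts a standard `OpenAI` client; the Ollama URL, model name, and `myDataset` below are illustrative placeholders, not values prescribed by the library:

```typescript
import OpenAI from 'openai'
import { evaluate } from 'rageval'

// Point the standard OpenAI client at a self-hosted,
// OpenAI-compatible server (here: a local Ollama instance).
const client = new OpenAI({
  baseURL: 'http://localhost:11434/v1', // Ollama's OpenAI-compatible endpoint
  apiKey: 'ollama', // placeholder; most local servers ignore the key
})

const results = await evaluate({
  provider: { type: 'openai', client, model: 'llama3' },
  dataset: myDataset,
})
```

Because such servers implement the OpenAI wire format, no Azure-specific options (`endpoint`, `apiVersion`) are needed; only `baseURL` changes.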