An AzureOpenAI client instance from the openai package.
Uses structural typing — any object with .chat.completions.create() works.
- `model` (optional): Azure deployment name (often matches the underlying model name, e.g. `gpt-4o`).
- `max` (optional): Maximum tokens in the judge's response.
- `temperature` (optional): Sampling temperature. Set to `0` for deterministic evaluation runs.
- `retries` (optional): Number of retry attempts for transient errors (rate limits, 5xx responses).
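Because the client is typed structurally, any object that exposes `.chat.completions.create()` can stand in for a real `AzureOpenAI` instance. A minimal sketch of a stub client for unit tests (the interface shape here is illustrative; the real parameter types come from the `openai` package):

```typescript
// The client parameter is checked structurally, not nominally:
// any object exposing .chat.completions.create() is accepted.
interface ChatClientLike {
  chat: {
    completions: {
      create(params: {
        model: string;
        messages: { role: string; content: string }[];
      }): Promise<{ choices: { message: { content: string } }[] }>;
    };
  };
}

// A stub judge client for tests: always returns a fixed verdict,
// so evaluation code can be exercised without network calls.
const stubClient: ChatClientLike = {
  chat: {
    completions: {
      create: async () => ({
        choices: [{ message: { content: '{"score": 1}' } }],
      }),
    },
  },
};
```

Passing a stub like this keeps evaluation pipelines testable offline while the same code path accepts a real client in production.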
Configuration for the Azure OpenAI provider.

Pass an `AzureOpenAI` client from the `openai` package; it exposes the same `.chat.completions.create()` interface as the standard `OpenAI` client.

The `model` field should match the deployment name in your Azure resource, which may differ from the underlying model name (e.g. your deployment might be called `"my-gpt4o"` even though the underlying model is `gpt-4o`).

Custom / self-hosted endpoints (Ollama, LocalAI, vLLM): use `type: 'openai'` with a custom `baseURL` on a standard `OpenAI` client instead.

Example