Creates an OpenAI LLM provider for use with `evaluate`.
Automatically retries transient errors (rate limits, server errors) with exponential back-off. The retry logic checks the SDK's typed `.status` property before falling back to string matching.
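The retry behavior described above can be sketched as follows. This is a hypothetical illustration, not rageval's actual implementation: the helper names (`isTransient`, `withRetries`), the status codes treated as transient, and the base delay are assumptions.

```typescript
// Hypothetical sketch of retry-with-back-off as described above.
// Checks a typed .status property first, then falls back to message matching.

interface MaybeApiError {
  status?: number
  message?: string
}

function isTransient(err: unknown): boolean {
  const e = err as MaybeApiError
  if (typeof e?.status === 'number') {
    // Typed status available: retry on rate limits (429) and server errors (5xx)
    return e.status === 429 || e.status >= 500
  }
  // Fallback: string matching when no typed status is present
  return /rate limit|overloaded|5\d\d/i.test(e?.message ?? '')
}

async function withRetries<T>(
  fn: () => Promise<T>,
  retries = 3,
  baseMs = 500,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn()
    } catch (err) {
      if (attempt >= retries || !isTransient(err)) throw err
      const delayMs = baseMs * 2 ** attempt // exponential back-off
      await new Promise((resolve) => setTimeout(resolve, delayMs))
    }
  }
}
```

Non-transient errors (e.g. a 400 from an invalid request) are rethrown immediately rather than retried.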
Provider configuration including the OpenAI client instance.
An LlmProvider ready to be passed to `evaluate()`.
```typescript
import OpenAI from 'openai'
import { createOpenAIProvider, evaluate } from 'rageval'

const provider = createOpenAIProvider({
  type: 'openai',
  client: new OpenAI(),
  model: 'gpt-4o',
  temperature: 0, // recommended for reproducible evaluation
  retries: 3,
})
```