includeReasoning: when true, the returned instruction requires the LLM to provide a reasoning field explaining its score; when false, reasoning is optional.

Returns: a multi-line string to append verbatim to any judge prompt.
import { jsonInstruction } from 'rageval'
// Without reasoning (default in evaluate())
const compact = jsonInstruction(false)
// Prompts: {"score": <0.0–1.0>} (reasoning optional)
// With reasoning (when includeReasoning: true is passed to evaluate())
const verbose = jsonInstruction(true)
// Prompts: {"score": <0.0–1.0>, "reasoning": "<your step-by-step analysis>"}
Builds the standard JSON instruction appended to every judge prompt.
When includeReasoning is false, the prompt still accepts a reasoning field but marks it as optional; this avoids confusing models that always include it. When it is true, reasoning is explicitly required and described as the step-by-step analysis the model performed.
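To make the two modes concrete, here is a minimal sketch of the kind of string such a helper could return, inferred only from the example prompts above. The function name `buildJsonInstruction` and the exact wording are hypothetical; the actual rageval implementation may differ.

```typescript
// Hypothetical sketch, not rageval's actual source: builds a JSON-format
// instruction whose shape matches the example prompts shown above.
function buildJsonInstruction(includeReasoning: boolean): string {
  // Schema line: include the "reasoning" key only when it is required.
  const schema = includeReasoning
    ? '{"score": <0.0-1.0>, "reasoning": "<your step-by-step analysis>"}'
    : '{"score": <0.0-1.0>}';
  // Note line: mark reasoning as required or optional so models that
  // always emit it are not penalized in the optional case.
  const reasoningNote = includeReasoning
    ? 'The "reasoning" field is required: explain step by step how you arrived at the score.'
    : 'You may optionally include a "reasoning" field.';
  return [
    'Respond with a single JSON object of the form:',
    schema,
    reasoningNote,
  ].join('\n');
}
```

Either string would be appended verbatim to a judge prompt, which is why the helper returns plain text rather than a structured object.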