The raw string returned by the LLM provider.
Object with score clamped to [0, 1] and reasoning string
(empty string when the LLM omitted the reasoning field).
import { parseLlmScore } from 'rageval'
// Direct JSON (common case)
const r1 = parseLlmScore('{"score": 0.85, "reasoning": "The answer is relevant."}')
// r1.score -> 0.85
// r1.reasoning -> "The answer is relevant."
// Markdown-fenced JSON (some models wrap output even when told not to)
const r2 = parseLlmScore('```json\n{"score": 0.4}\n```')
// r2.score -> 0.4
// r2.reasoning -> "" (field absent; defaults to empty string)
// Preamble text before JSON (common with chain-of-thought models)
const r3 = parseLlmScore('Let me evaluate this carefully.\n\n{"score": 0.9, "reasoning": "Strong."}')
// r3.score -> 0.9
// Score as string (some fine-tuned models return strings)
const r4 = parseLlmScore('{"score": "0.75"}')
// r4.score -> 0.75
// Out-of-range score (clamped to [0, 1])
const r5 = parseLlmScore('{"score": 9.5}')
// r5.score -> 1.0 (clamped from 9.5)
Parses a JSON response from the LLM judge.

The LLM is instructed to return {"score": 0.0-1.0, "reasoning": "..."}. This function is resilient to common LLM output variations:

- Markdown code fences (```json ... ```)
- Preamble text before the JSON object
- Score returned as a string ("score": "0.85")
- A missing reasoning field (defaults to empty string)

NOTE: Raw LLM responses are never included in thrown errors, to prevent accidental leakage of user data (question/answer/context content) into error monitoring systems. Use debug logging at the call site if needed.
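The resilience behaviors described above can be sketched roughly as follows. This is a hypothetical illustration (`parseLlmScoreSketch` is a made-up name), not the actual rageval implementation:

```typescript
interface LlmScore {
  score: number;
  reasoning: string;
}

// Sketch of the parsing strategy: strip fences, skip preamble, coerce and
// clamp the score, default the reasoning field. Errors deliberately omit
// the raw response to avoid leaking user data into error monitoring.
function parseLlmScoreSketch(raw: string): LlmScore {
  // Strip a leading ```json / ``` fence and a trailing ``` fence if present.
  let text = raw
    .trim()
    .replace(/^```(?:json)?\s*/, "")
    .replace(/\s*```$/, "");

  // Skip any preamble text: parse from the first '{' onward.
  const start = text.indexOf("{");
  if (start === -1) {
    throw new Error("No JSON object found in LLM response");
  }
  const parsed = JSON.parse(text.slice(start));

  // Coerce string scores ("0.75" -> 0.75), then clamp to [0, 1].
  const score = Math.min(1, Math.max(0, Number(parsed.score)));
  if (Number.isNaN(score)) {
    throw new Error("LLM response score is not numeric");
  }

  // Missing or non-string reasoning defaults to the empty string.
  const reasoning =
    typeof parsed.reasoning === "string" ? parsed.reasoning : "";
  return { score, reasoning };
}
```

Note that `JSON.parse` is applied to everything after the first `{`, so trailing junk after the JSON object still throws; a production parser might instead balance braces to extract exactly one object.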