rageval - v0.1.1

    Function parseLlmScore

    • Parses a JSON response from the LLM judge.

      The LLM is instructed to return {"score": 0.0-1.0, "reasoning": "..."}. This function is resilient to common LLM output variations:

      • Markdown code fences (```json ... ```)
      • Preamble text before the JSON object ("Here is my evaluation: {…}")
      • Postamble text after the JSON object
      • Score as a string instead of a number ("score": "0.85")
      • Score outside the [0, 1] range (clamped silently)
      • Missing reasoning field (defaults to empty string)

      NOTE: Raw LLM responses are never included in thrown errors to prevent accidental leakage of user data (question/answer/context content) into error monitoring systems. Use debug logging at the call site if needed.
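The resiliency behaviors above could be implemented roughly along these lines. This is an illustrative TypeScript sketch, not the library's actual source; the name `parseLlmScoreSketch` and the regex-based JSON extraction are assumptions for demonstration only:

```typescript
// Hypothetical sketch of a parser matching the documented behavior.
function parseLlmScoreSketch(response: string): { score: number; reasoning: string } {
  // Strip markdown code fences (```json ... ```) if present.
  const stripped = response.replace(/```(?:json)?/g, '');
  // Grab the outermost {...} span, skipping preamble/postamble text.
  const match = stripped.match(/\{[\s\S]*\}/);
  // Per the note above: never echo the raw response in thrown errors.
  if (!match) throw new Error('No JSON object found in LLM response');
  let parsed: unknown;
  try {
    parsed = JSON.parse(match[0]);
  } catch {
    throw new Error('Malformed JSON in LLM response');
  }
  const obj = parsed as { score?: unknown; reasoning?: unknown };
  // Accept the score as a number or a numeric string.
  const raw = typeof obj.score === 'string' ? Number(obj.score) : obj.score;
  if (typeof raw !== 'number' || Number.isNaN(raw)) {
    throw new Error('LLM response JSON has no numeric score');
  }
  return {
    // Clamp silently to [0, 1].
    score: Math.min(1, Math.max(0, raw)),
    // Default reasoning to the empty string when the field is absent.
    reasoning: typeof obj.reasoning === 'string' ? obj.reasoning : '',
  };
}
```

Note the greedy `{...}` match is a simplification: it handles a single JSON object with surrounding text, which covers the variations listed above.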

      Parameters

      • response: string

        The raw string returned by the LLM provider.

      Returns { score: number; reasoning: string }

      Object with score clamped to [0, 1] and reasoning string (empty string when the LLM omitted the reasoning field).

      Throws

      When no valid JSON object with a numeric score can be found.

      Example

      import { parseLlmScore } from 'rageval'

      // Direct JSON (common case)
      const r1 = parseLlmScore('{"score": 0.85, "reasoning": "The answer is relevant."}')
      // r1.score -> 0.85
      // r1.reasoning -> "The answer is relevant."

      // Markdown-fenced JSON (some models wrap output even when told not to)
      const r2 = parseLlmScore('```json\n{"score": 0.4}\n```')
      // r2.score -> 0.4
      // r2.reasoning -> "" (field absent; defaults to empty string)

      // Preamble text before JSON (common with chain-of-thought models)
      const r3 = parseLlmScore('Let me evaluate this carefully.\n\n{"score": 0.9, "reasoning": "Strong."}')
      // r3.score -> 0.9

      // Score as string (some fine-tuned models return strings)
      const r4 = parseLlmScore('{"score": "0.75"}')
      // r4.score -> 0.75

      // Out-of-range score (clamped to [0, 1])
      const r5 = parseLlmScore('{"score": 9.5}')
      // r5.score -> 1.0 (clamped from 9.5)