rageval - v0.1.1

    Function toJson

    • Serializes an EvaluationResult to a JSON string.

      The JSON structure matches the EvaluationResult schema exactly and can be parsed back with JSON.parse(), which makes it useful for caching results or comparing evaluation runs over time.
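
      For example, a minimal round-trip sketch for comparing a fresh run against a cached one (the file names are illustrative):

      import { evaluate, toJson } from 'rageval'
      import { readFileSync, writeFileSync } from 'node:fs'

      const result = await evaluate({ ... })
      writeFileSync('eval-latest.json', toJson(result))

      // A cached run parses back into the same shape as `result`:
      const baseline = JSON.parse(readFileSync('eval-baseline.json', 'utf8'))
      console.log('overall delta:', result.scores.overall - baseline.scores.overall)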

      Parameters

      • result: {
            scores: {
                faithfulness?: number;
                contextRelevance?: number;
                answerRelevance?: number;
                contextRecall?: number;
                contextPrecision?: number;
                overall: number;
                [key: string]: unknown;
            };
            samples: {
                id?: string;
                question: string;
                scores: Record<string, number>;
                reasoning?: Record<string, string>;
                tenantId?: string;
                metadata?: Record<string, unknown>;
            }[];
            stats?: Record<
                string,
                { mean: number; min: number; max: number; stddev: number; count: number },
            >;
            meta: {
                totalSamples: number;
                metrics: string[];
                provider: string;
                model: string;
                startedAt: string;
                completedAt: string;
                durationMs: number;
            };
        }

        The evaluation result to export.

        • scores: {
              faithfulness?: number;
              contextRelevance?: number;
              answerRelevance?: number;
              contextRecall?: number;
              contextPrecision?: number;
              overall: number;
              [key: string]: unknown;
          }

          Aggregate scores averaged across all samples.

        • samples: {
              id?: string;
              question: string;
              scores: Record<string, number>;
              reasoning?: Record<string, string>;
              tenantId?: string;
              metadata?: Record<string, unknown>;
          }[]

          Per-sample detailed results.

        • Optional stats?: Record<
              string,
              { mean: number; min: number; max: number; stddev: number; count: number },
          >

          Per-metric score distribution statistics (mean, min, max, stddev, count).

          Keys are metric names (the same keys as in scores, minus overall). Useful for understanding score variance: a high stddev for a metric means its scores swing widely across samples, which you can then trace to individual questions in samples. overall is excluded; a sketch of recomputing it from the per-metric means follows the example below.

          import { evaluate } from 'rageval'

          const { stats } = await evaluate({ ... })
          // A high stddev indicates inconsistent pipeline behaviour:
          if ((stats?.faithfulness?.stddev ?? 0) > 0.15) {
            console.warn('Faithfulness varies widely across samples — review your retrieval.')
          }
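
          Since overall is excluded from stats, here is a minimal sketch of recomputing it as an unweighted mean of the per-metric means (this assumes equal weighting, which may differ from how scores.overall is actually computed):

          // Continuing from the snippet above, an unweighted mean of the
          // per-metric means (assumption: equal weights for every metric):
          const perMetric = Object.values(stats ?? {})
          const recomputedOverall =
            perMetric.reduce((sum, s) => sum + s.mean, 0) / (perMetric.length || 1)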
        • meta: {
              totalSamples: number;
              metrics: string[];
              provider: string;
              model: string;
              startedAt: string;
              completedAt: string;
              durationMs: number;
          }

          Metadata about the evaluation run. A usage sketch follows the field list below.

          • totalSamples: number

            Total number of samples evaluated.

          • metrics: string[]

            Names of the metrics that were evaluated.

          • provider: string

            LLM provider used (e.g. 'anthropic', 'openai').

          • model: string

            LLM model used (e.g. 'claude-opus-4-6').

          • startedAt: string

            ISO 8601 timestamp when evaluation started.

          • completedAt: string

            ISO 8601 timestamp when evaluation completed.

          • durationMs: number

            Wall-clock duration of the evaluation in milliseconds.
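
          Taken together, these fields are enough for a one-line run summary; a minimal sketch (the log format is illustrative):

          import { evaluate } from 'rageval'

          const { meta } = await evaluate({ ... })
          console.log(
            `${meta.provider}/${meta.model}: ${meta.totalSamples} samples, ` +
              `[${meta.metrics.join(', ')}] in ${(meta.durationMs / 1000).toFixed(1)}s`,
          )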

      • pretty: boolean = true

        Whether to pretty-print with 2-space indentation (the default). Pass false for compact output, e.g. when storing in a database.

      Returns string

      JSON string representation of the full evaluation result.

      import { evaluate, toJson } from 'rageval'
      import { writeFileSync } from 'node:fs'

      const result = await evaluate({ ... })
      writeFileSync('eval-result.json', toJson(result))

      // Compact for API responses (assumes an Express-style `res` object):
      res.send(toJson(result, false))