rageval - v0.1.1

    Function toSarif

    • Serializes an EvaluationResult to SARIF 2.1.0 format.

      SARIF (Static Analysis Results Interchange Format) is the standard used by GitHub Advanced Security, Azure DevOps, and other code-quality tools. Upload the SARIF file to GitHub to see evaluation failures as code-scanning alerts on your pull requests -- directly in the diff.

      Each sample that scores below failureThreshold on any metric becomes a SARIF "result" for that metric: severity "error" when the score is below 0.4, otherwise severity "warning". Samples that pass every threshold produce no SARIF results.
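      The severity rule can be sketched as a small helper (hypothetical, not exported by rageval — it simply restates the mapping above):

```typescript
// Hypothetical helper restating the documented rule: scores below 0.4 map
// to "error", scores below the failure threshold map to "warning", and
// passing scores produce no SARIF result (null).
function sarifLevel(
  score: number,
  failureThreshold = 0.6,
): 'error' | 'warning' | null {
  if (score < 0.4) return 'error'
  if (score < failureThreshold) return 'warning'
  return null
}
```

      For example, `sarifLevel(0.35)` yields `'error'`, `sarifLevel(0.5)` yields `'warning'`, and `sarifLevel(0.8)` yields `null`.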

      Parameters

      • result: {
            scores: {
                faithfulness?: number;
                contextRelevance?: number;
                answerRelevance?: number;
                contextRecall?: number;
                contextPrecision?: number;
                overall: number;
                [key: string]: unknown;
            };
            samples: {
                id?: string;
                question: string;
                scores: Record<string, number>;
                reasoning?: Record<string, string>;
                tenantId?: string;
                metadata?: Record<string, unknown>;
            }[];
            stats?: Record<
                string,
                { mean: number; min: number; max: number; stddev: number; count: number },
            >;
            meta: {
                totalSamples: number;
                metrics: string[];
                provider: string;
                model: string;
                startedAt: string;
                completedAt: string;
                durationMs: number;
            };
        }

        The evaluation result from evaluate().

        • scores: {
              faithfulness?: number;
              contextRelevance?: number;
              answerRelevance?: number;
              contextRecall?: number;
              contextPrecision?: number;
              overall: number;
              [key: string]: unknown;
          }

          Aggregate scores averaged across all samples.

        • samples: {
              id?: string;
              question: string;
              scores: Record<string, number>;
              reasoning?: Record<string, string>;
              tenantId?: string;
              metadata?: Record<string, unknown>;
          }[]

          Per-sample detailed results.

        • Optional stats?: Record<
              string,
              { mean: number; min: number; max: number; stddev: number; count: number },
          >

          Per-metric score distribution statistics (mean, min, max, stddev, count).

          Keys are metric names (the same keys as in scores, minus overall). Useful for understanding score variance and spotting metrics with inconsistent results. overall is excluded — compute it from the individual metric stats.

          const { stats } = await evaluate({ ... })
          // High stddev indicates inconsistent pipeline behaviour:
          if ((stats?.faithfulness?.stddev ?? 0) > 0.15) {
            console.warn('Faithfulness varies widely across samples — review your retrieval.')
          }
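          Since overall is excluded from stats, it can be recovered from the per-metric entries; a minimal sketch using a count-weighted mean (the values below are made up, and only the mean and count fields of the documented stats shape are used):

```typescript
// Sketch: recompute an overall mean from per-metric stats. The record
// shape matches the documented stats type; the numbers are illustrative.
const stats: Record<string, { mean: number; count: number }> = {
  faithfulness: { mean: 0.8, count: 10 },
  answerRelevance: { mean: 0.6, count: 10 },
}
const entries = Object.values(stats)
const overallMean =
  entries.reduce((sum, s) => sum + s.mean * s.count, 0) /
  entries.reduce((sum, s) => sum + s.count, 0)
// overallMean === 0.7
```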
        • meta: {
              totalSamples: number;
              metrics: string[];
              provider: string;
              model: string;
              startedAt: string;
              completedAt: string;
              durationMs: number;
          }

          Metadata about the evaluation run.

          • totalSamples: number

            Total number of samples evaluated.

          • metrics: string[]

            Names of the metrics that were evaluated.

          • provider: string

            LLM provider used (e.g. 'anthropic', 'openai').

          • model: string

            LLM model used (e.g. 'claude-opus-4-6').

          • startedAt: string

            ISO 8601 timestamp when evaluation started.

          • completedAt: string

            ISO 8601 timestamp when evaluation completed.

          • durationMs: number

            Wall-clock duration of the evaluation in milliseconds.
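          The meta fields are enough to derive simple throughput figures; for example (field values below are illustrative, the shape matches the documented meta type):

```typescript
// Illustrative meta object matching the documented shape.
const meta = {
  totalSamples: 50,
  metrics: ['faithfulness', 'answerRelevance'],
  provider: 'anthropic',
  model: 'claude-opus-4-6',
  startedAt: '2025-01-01T00:00:00.000Z',
  completedAt: '2025-01-01T00:02:30.000Z',
  durationMs: 150_000,
}
// Average wall-clock time spent per sample:
const msPerSample = meta.durationMs / meta.totalSamples // 3000
```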

      • failureThreshold: number = 0.6

        Score below which a sample is flagged. Default: 0.6.

      Returns string

      SARIF 2.1.0 JSON string.

      import { evaluate, toSarif } from 'rageval'
      import { writeFileSync } from 'node:fs'

      const result = await evaluate({ ... })
      writeFileSync('rageval.sarif', toSarif(result))
      // Upload via GitHub CLI (the code-scanning API expects the SARIF body
      // gzip-compressed and base64-encoded, plus a commit SHA and ref):
      // gh api /repos/{owner}/{repo}/code-scanning/sarifs \
      //   -f commit_sha="$(git rev-parse HEAD)" -f ref="refs/heads/main" \
      //   -f sarif="$(gzip -c rageval.sarif | base64 -w0)"
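      The returned string is a standard SARIF log, so generic SARIF tooling can consume it. A small sketch of reading flagged results back out — the runs[].results[] path comes from the SARIF 2.1.0 schema, and the inline string here is a hand-made stand-in for real toSarif output:

```typescript
// Stand-in SARIF string; in practice this would come from toSarif(result).
const sarifString = JSON.stringify({
  version: '2.1.0',
  runs: [{ results: [{ level: 'warning' }, { level: 'error' }] }],
})
// runs[].results[] is the structure mandated by the SARIF 2.1.0 schema.
const sarif = JSON.parse(sarifString) as {
  runs: { results: { level: string }[] }[]
}
const flagged = sarif.runs.flatMap((r) => r.results)
const errors = flagged.filter((r) => r.level === 'error').length
// flagged.length === 2, errors === 1
```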