rageval - v0.1.1

    Function toJUnit

    • Serializes an EvaluationResult to JUnit XML format.

      JUnit XML is the de facto standard CI test report format, supported by GitHub Actions, GitLab CI, Jenkins, CircleCI, Azure DevOps, and most other CI systems. Upload the XML file as a test artifact to visualize evaluation failures as failed tests directly in your CI dashboard.

      Each sample becomes a <testcase>. A sample "fails" when any metric score is below the provided failureThreshold (default: 0.5). Failures appear as failed tests in your CI dashboard, making quality regressions immediately visible. Passing samples include their scores in <system-out> for traceability.
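
      The same pass/fail rule can be applied in application code, for example to log how many samples will be reported as failed before writing the XML. A minimal sketch using only the documented result shape:

      import { evaluate, toJUnit } from 'rageval'

      const result = await evaluate({ ... })
      const failureThreshold = 0.5

      // A sample fails when any of its metric scores falls below the threshold.
      const failing = result.samples.filter((sample) =>
        Object.values(sample.scores).some((score) => score < failureThreshold),
      )
      console.log(`${failing.length}/${result.meta.totalSamples} samples below the threshold`)

      const xml = toJUnit(result, failureThreshold)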

      Parameters

      • result: {
            scores: {
                faithfulness?: number;
                contextRelevance?: number;
                answerRelevance?: number;
                contextRecall?: number;
                contextPrecision?: number;
                overall: number;
                [key: string]: unknown;
            };
            samples: {
                id?: string;
                question: string;
                scores: Record<string, number>;
                reasoning?: Record<string, string>;
                tenantId?: string;
                metadata?: Record<string, unknown>;
            }[];
            stats?: Record<
                string,
                { mean: number; min: number; max: number; stddev: number; count: number },
            >;
            meta: {
                totalSamples: number;
                metrics: string[];
                provider: string;
                model: string;
                startedAt: string;
                completedAt: string;
                durationMs: number;
            };
        }

        The evaluation result from evaluate().

        • scores: {
              faithfulness?: number;
              contextRelevance?: number;
              answerRelevance?: number;
              contextRecall?: number;
              contextPrecision?: number;
              overall: number;
              [key: string]: unknown;
          }

          Aggregate scores averaged across all samples.

        • samples: {
              id?: string;
              question: string;
              scores: Record<string, number>;
              reasoning?: Record<string, string>;
              tenantId?: string;
              metadata?: Record<string, unknown>;
          }[]

          Per-sample detailed results.

        • Optional stats?: Record<
              string,
              { mean: number; min: number; max: number; stddev: number; count: number },
          >

          Per-metric score distribution statistics (mean, min, max, stddev, count).

          Keys are metric names (the same keys as in scores, minus overall). Useful for understanding score variance and spotting metrics whose worst-case samples score poorly (a low min despite a high mean). overall is excluded — compute it from individual metric stats.

          const { stats } = await evaluate({ ... })
          // High stddev indicates inconsistent pipeline behaviour:
          if ((stats?.faithfulness?.stddev ?? 0) > 0.15) {
            console.warn('Faithfulness varies widely across samples — review your retrieval.')
          }
        • meta: {
              totalSamples: number;
              metrics: string[];
              provider: string;
              model: string;
              startedAt: string;
              completedAt: string;
              durationMs: number;
          }

          Metadata about the evaluation run.

          • totalSamples: number

            Total number of samples evaluated.

          • metrics: string[]

            Names of the metrics that were evaluated.

          • provider: string

            LLM provider used (e.g. 'anthropic', 'openai').

          • model: string

            LLM model used (e.g. 'claude-opus-4-6').

          • startedAt: string

            ISO 8601 timestamp when evaluation started.

          • completedAt: string

            ISO 8601 timestamp when evaluation completed.

          • durationMs: number

            Wall-clock duration of the evaluation in milliseconds.

      • failureThreshold: number = 0.5

        Score below which a sample is marked as failed. Default: 0.5.
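
        To gate CI more strictly, pass a higher threshold; for example:

        // Mark a sample as failed if any metric scores below 0.7 instead of the default 0.5.
        const xml = toJUnit(result, 0.7)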

      Returns string

      JUnit XML string. Safe to write directly to a file.

      import { evaluate, toJUnit } from 'rageval'
      import { writeFileSync } from 'node:fs'

      // In your CI pipeline:
      const result = await evaluate({ ... })
      writeFileSync('junit-results.xml', toJUnit(result))
      // Then configure your CI to pick up junit-results.xml as a test report.

      // GitHub Actions example:
      //   - uses: dorny/test-reporter@v1
      //     with:
      //       name: RAG Quality Report
      //       path: junit-results.xml
      //       reporter: java-junit
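
      Other CI systems can consume the same file. For instance, GitLab CI picks up JUnit reports via artifacts:reports:junit; a sketch (the job and script names below are placeholders):

      // .gitlab-ci.yml example:
      // rag-eval:
      //   script:
      //     - node scripts/run-eval.mjs   # placeholder script that calls evaluate() and toJUnit()
      //   artifacts:
      //     when: always
      //     reports:
      //       junit: junit-results.xml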