rageval - v0.1.1

    Interface MetricInput

    Input passed to each metric's score() function.

    Mirrors one row of the evaluation dataset — all fields come directly from the RagSample passed to evaluate(). The metric implementations receive this object and use whichever fields are relevant to their scoring logic.

    interface MetricInput {
        question: string;
        answer: string;
        contexts: string[];
        groundTruth?: string;
    }
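A sketch of a conforming object, mirroring one row of the evaluation dataset. The interface is re-declared here so the snippet stands alone; the sample values are illustrative, not taken from any real dataset.

```typescript
// Re-declaration of the interface documented above.
interface MetricInput {
    question: string;
    answer: string;
    contexts: string[];
    groundTruth?: string;
}

// One evaluation row as a metric would receive it.
const sample: MetricInput = {
    question: "What is the capital of France?",
    answer: "Paris is the capital of France.",
    contexts: [
        "Paris is the capital and largest city of France.",
        "France is a country in Western Europe.",
    ],
    groundTruth: "Paris", // optional; see the groundTruth property below
};
```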

    Properties

    question: string

    The query that was posed to the RAG pipeline under evaluation.

    answer: string

    The answer generated by the RAG pipeline under evaluation.

    contexts: string[]

    The document chunks retrieved by the RAG pipeline and injected into the LLM's prompt. Presented to the judge in numbered order ([Context 1], [Context 2], ...). Ordering matters for contextPrecision scoring.
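The numbered presentation described above could be produced by a helper like the following. This is an illustrative sketch, not part of the rageval API; only the [Context N] labeling convention comes from the documentation.

```typescript
// Format retrieved chunks for the judge prompt using the
// [Context 1], [Context 2], ... convention. Array order is preserved,
// which is what makes ordering matter for contextPrecision.
function numberContexts(contexts: string[]): string {
    return contexts
        .map((chunk, i) => `[Context ${i + 1}] ${chunk}`)
        .join("\n");
}
```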

    groundTruth?: string

    The expected (reference) answer for this question. Only contextRecall requires this field; all other metrics ignore it. When absent, contextRecall returns skipped: true for this sample.
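A minimal sketch of the skip behavior described above: when groundTruth is absent, a recall-style metric returns skipped: true instead of a score. The MetricResult shape and the placeholder scoring logic are assumptions for illustration; the real metric would delegate scoring to an LLM judge.

```typescript
interface MetricInput {
    question: string;
    answer: string;
    contexts: string[];
    groundTruth?: string;
}

// Hypothetical result shape: either a numeric score or a skip marker.
type MetricResult = { score: number } | { skipped: true };

function contextRecallSketch(input: MetricInput): MetricResult {
    // No reference answer: nothing to measure recall against.
    if (input.groundTruth === undefined) {
        return { skipped: true };
    }
    // Placeholder: real scoring would compare contexts against groundTruth.
    return { score: 0 };
}
```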