Type alias LabeledCriteria

LabeledCriteria: EvalConfig & {
    evaluatorType: "labeled_criteria";
    criteria?: Criteria | Record<string, string>;
    feedbackKey?: string;
    llm?: Toolkit;
}

Configuration to load a "LabeledCriteriaEvalChain" evaluator, which prompts an LLM to determine whether the model's prediction complies with the provided criteria, and which also incorporates a "ground truth" (reference) label into its evaluation.

Type declaration

  • evaluatorType: "labeled_criteria"
  • Optional criteria?: Criteria | Record<string, string>

    The "criteria" to insert into the prompt template used for evaluation. See the prompt at https://smith.langchain.com/hub/langchain-ai/labeled-criteria for more information.

  • Optional feedbackKey?: string

    The feedback (or metric) name to use for the logged evaluation results. If none is provided, this defaults to the evaluationName.

  • Optional llm?: Toolkit

    The language model to use as the evaluator (see the sketch after this list).
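A minimal sketch of how these optional fields fit together. It assumes the llm field accepts a LangChain chat model instance such as ChatOpenAI from @langchain/openai, and uses an illustrative feedback key; neither is prescribed by this page.

import { ChatOpenAI } from "@langchain/openai";

// Assumption: a deterministic chat model instance supplied as the evaluator LLM.
const evaluatorLlm = new ChatOpenAI({ temperature: 0 });

const evalConfig = {
  evaluators: [{
    evaluatorType: "labeled_criteria",
    criteria: "correctness",
    // Illustrative feedback key; defaults to the evaluationName when omitted.
    feedbackKey: "labeled_correctness",
    llm: evaluatorLlm,
  }],
};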

Param

The criteria to use for the evaluator.

Param

The language model to use for the evaluator.

Returns

The configuration for the evaluator.

Example

const evalConfig = {
  evaluators: [{
    evaluatorType: "labeled_criteria",
    criteria: "correctness",
  }],
};

Example

const evalConfig = {
  evaluators: [{
    evaluatorType: "labeled_criteria",
    criteria: { "mentionsAllFacts": "Does the output include all facts provided in the reference?" },
  }],
};
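For context, the sketch below shows one way such a config is typically consumed when evaluating a target over a LangSmith dataset. The import path, the runOnDataset call shape, the accepted target form, and the evaluationConfig option name are assumptions based on the langchain/smith API and may differ between versions; the target function and dataset name are purely illustrative.

import { runOnDataset } from "langchain/smith";

const evalConfig = {
  evaluators: [{
    evaluatorType: "labeled_criteria",
    criteria: "correctness",
  }],
};

// Hypothetical target: an async function mapping dataset inputs to outputs
// (the accepted target shapes may vary by version).
const target = async (input: { question: string }) => {
  return { output: `Echo: ${input.question}` };
};

// "my-dataset" is a hypothetical dataset name.
await runOnDataset(target, "my-dataset", {
  evaluationConfig: evalConfig,
});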
