Class: Langchain::Evals::Ragas::AnswerRelevance

Inherits:
Object
Defined in:
lib/langchain/evals/ragas/answer_relevance.rb

Overview

Answer Relevance refers to the idea that the generated answer should address the actual question that was provided. This metric evaluates how closely the generated answer aligns with the initial question or instruction.

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(llm:, batch_size: 3) ⇒ AnswerRelevance

Returns a new instance of AnswerRelevance.

Parameters:

  • llm (Langchain::LLM::*)

    A Langchain::LLM::* instance, used to generate the comparison questions

  • batch_size (Integer) (defaults to: 3)

    Batch size, i.e., number of generated questions to compare to the original question



# File 'lib/langchain/evals/ragas/answer_relevance.rb', line 15

def initialize(llm:, batch_size: 3)
  @llm = llm
  @batch_size = batch_size
end
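The llm: argument is duck-typed: #score only requires an object that responds to #complete(prompt:) and returns an object exposing a #completion reader. A minimal stand-in illustrating that interface (hypothetical names; the real clients live under Langchain::LLM::):

```ruby
# Minimal stand-in for a Langchain::LLM::* client, shown only to
# illustrate the interface #score relies on. It responds to
# #complete(prompt:) and returns an object with a #completion reader.
StubCompletion = Struct.new(:completion)

class StubLLM
  def complete(prompt:)
    # A real client would call the model here; we echo a canned question.
    StubCompletion.new("What question does this answer address?")
  end
end
```

An instance of such a stub could then be passed as `AnswerRelevance.new(llm: StubLLM.new)` when exercising the evaluator in tests.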

Instance Attribute Details

#batch_size ⇒ Object (readonly)

Returns the value of attribute batch_size.



# File 'lib/langchain/evals/ragas/answer_relevance.rb', line 11

def batch_size
  @batch_size
end

#llm ⇒ Object (readonly)

Returns the value of attribute llm.



# File 'lib/langchain/evals/ragas/answer_relevance.rb', line 11

def llm
  @llm
end

Instance Method Details

#score(question:, answer:) ⇒ Float

Returns Answer Relevance score.

Parameters:

  • question (String)

    Question

  • answer (String)

    Answer

Returns:

  • (Float)

    Answer Relevance score



# File 'lib/langchain/evals/ragas/answer_relevance.rb', line 23

def score(question:, answer:)
  generated_questions = []

  batch_size.times do |i|
    prompt = answer_relevance_prompt_template.format(
      question: question,
      answer: answer
    )
    generated_questions << llm.complete(prompt: prompt).completion
  end

  scores = generated_questions.map do |generated_question|
    calculate_similarity(original_question: question, generated_question: generated_question)
  end

  # Find the mean
  scores.sum(0.0) / scores.size
end
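calculate_similarity is a private helper and does not appear in this listing. In the RAGAS formulation, answer relevance is the mean similarity between the original question and each generated question, typically computed as cosine similarity over their embedding vectors. A minimal sketch of that cosine step over plain Ruby arrays (the embedding call itself is omitted, and these names are illustrative, not the gem's actual private API):

```ruby
# Cosine similarity between two embedding vectors represented as plain
# arrays of floats. Returns a value in [-1.0, 1.0]; vectors pointing in
# the same direction score 1.0, orthogonal vectors score 0.0.
def cosine_similarity(a, b)
  dot    = a.zip(b).sum { |x, y| x * y }
  norm_a = Math.sqrt(a.sum { |x| x * x })
  norm_b = Math.sqrt(b.sum { |x| x * x })
  dot / (norm_a * norm_b)
end
```

Averaging this value across the batch_size generated questions, as #score does with `scores.sum(0.0) / scores.size`, yields the final relevance score.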