Class: Langchain::LLM::Anthropic

Inherits:
Base
  • Object
Defined in:
lib/langchain/llm/anthropic.rb

Overview

Wrapper around Anthropic APIs.

Gem requirements:

gem "anthropic", "~> 0.1.0"

Usage:

anthropic = Langchain::LLM::Anthropic.new(api_key: ENV["ANTHROPIC_API_KEY"])

Constant Summary

DEFAULTS =
{
  temperature: 0.0,
  completion_model_name: "claude-2",
  chat_completion_model_name: "claude-3-sonnet-20240229",
  max_tokens_to_sample: 256
}.freeze

Instance Attribute Summary

Attributes inherited from Base

#client

Instance Method Summary

Methods inherited from Base

#default_dimensions, #embed, #summarize

Methods included from DependencyHelper

#depends_on

Constructor Details

#initialize(api_key:, llm_options: {}, default_options: {}) ⇒ Langchain::LLM::Anthropic

Initialize an Anthropic LLM instance

Parameters:

  • api_key (String)

    The API key to use

  • llm_options (Hash) (defaults to: {})

    Options to pass to the Anthropic client

  • default_options (Hash) (defaults to: {})

    Default options to use on every call to LLM, e.g.: { temperature:, completion_model_name:, chat_completion_model_name:, max_tokens_to_sample: }



# File 'lib/langchain/llm/anthropic.rb', line 30

def initialize(api_key:, llm_options: {}, default_options: {})
  depends_on "anthropic"

  @client = ::Anthropic::Client.new(access_token: api_key, **llm_options)
  @defaults = DEFAULTS.merge(default_options)
end
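
For example, a client can be constructed with custom defaults merged over DEFAULTS. This is a minimal sketch; the option keys mirror the default_options hash described above, and the model name is taken from DEFAULTS:

llm = Langchain::LLM::Anthropic.new(
  api_key: ENV["ANTHROPIC_API_KEY"],
  default_options: {
    temperature: 0.2,                                         # overrides DEFAULTS[:temperature]
    chat_completion_model_name: "claude-3-sonnet-20240229",   # same as the shipped default
    max_tokens_to_sample: 512                                 # overrides DEFAULTS[:max_tokens_to_sample]
  }
)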

Instance Method Details

#chat(messages: [], model: @defaults[:chat_completion_model_name], max_tokens: @defaults[:max_tokens_to_sample], metadata: nil, stop_sequences: nil, stream: nil, system: nil, temperature: @defaults[:temperature], tools: [], top_k: nil, top_p: nil) ⇒ Langchain::LLM::AnthropicResponse

Generate a chat completion for given messages

Parameters:

  • messages (Array<String>) (defaults to: [])

    Input messages

  • model (String) (defaults to: @defaults[:chat_completion_model_name])

    The model that will complete your prompt

  • max_tokens (Integer) (defaults to: @defaults[:max_tokens_to_sample])

    Maximum number of tokens to generate before stopping

  • metadata (Hash) (defaults to: nil)

    Object describing metadata about the request

  • stop_sequences (Array<String>) (defaults to: nil)

    Custom text sequences that will cause the model to stop generating

  • stream (Boolean) (defaults to: nil)

    Whether to incrementally stream the response using server-sent events

  • system (String) (defaults to: nil)

    System prompt

  • temperature (Float) (defaults to: @defaults[:temperature])

    Amount of randomness injected into the response

  • tools (Array<String>) (defaults to: [])

    Definitions of tools that the model may use

  • top_k (Integer) (defaults to: nil)

    Only sample from the top K options for each subsequent token

  • top_p (Float) (defaults to: nil)

    Use nucleus sampling.

Returns:

  • (Langchain::LLM::AnthropicResponse)

Raises:

  • (ArgumentError)


# File 'lib/langchain/llm/anthropic.rb', line 96

def chat(
  messages: [],
  model: @defaults[:chat_completion_model_name],
  max_tokens: @defaults[:max_tokens_to_sample],
  metadata: nil,
  stop_sequences: nil,
  stream: nil,
  system: nil,
  temperature: @defaults[:temperature],
  tools: [],
  top_k: nil,
  top_p: nil
)
  raise ArgumentError.new("messages argument is required") if messages.empty?
  raise ArgumentError.new("model argument is required") if model.empty?
  raise ArgumentError.new("max_tokens argument is required") if max_tokens.nil?

  parameters = {
    messages: messages,
    model: model,
    max_tokens: max_tokens,
    temperature: temperature
  }
  parameters[:metadata] = metadata if metadata
  parameters[:stop_sequences] = stop_sequences if stop_sequences
  parameters[:stream] = stream if stream
  parameters[:system] = system if system
  parameters[:tools] = tools if tools.any?
  parameters[:top_k] = top_k if top_k
  parameters[:top_p] = top_p if top_p

  response = client.messages(parameters: parameters)

  Langchain::LLM::AnthropicResponse.new(response)
end
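
A usage sketch for #chat. The messages array is passed straight through to Anthropic's Messages API, so the role/content hash shape and the response accessor shown here are assumptions about that API and the response wrapper, not something this page documents:

response = llm.chat(
  messages: [{role: "user", content: "What is the capital of France?"}],  # assumed Messages API shape
  system: "You are a terse geography assistant.",
  max_tokens: 128
)
response                  #=> Langchain::LLM::AnthropicResponse
response.chat_completion  # assumed accessor on the response wrapper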

#complete(prompt:, model: @defaults[:completion_model_name], max_tokens_to_sample: @defaults[:max_tokens_to_sample], stop_sequences: nil, temperature: @defaults[:temperature], top_p: nil, top_k: nil, metadata: nil, stream: nil) ⇒ Langchain::LLM::AnthropicResponse

Generate a completion for a given prompt

Parameters:

  • prompt (String)

    Prompt to generate a completion for

  • model (String) (defaults to: @defaults[:completion_model_name])

    The model to use

  • max_tokens_to_sample (Integer) (defaults to: @defaults[:max_tokens_to_sample])

    The maximum number of tokens to sample

  • stop_sequences (Array<String>) (defaults to: nil)

    The stop sequences to use

  • temperature (Float) (defaults to: @defaults[:temperature])

    The temperature to use

  • top_p (Float) (defaults to: nil)

    The top p value to use

  • top_k (Integer) (defaults to: nil)

    The top k value to use

  • metadata (Hash) (defaults to: nil)

    The metadata to use

  • stream (Boolean) (defaults to: nil)

    Whether to stream the response

Returns:

  • (Langchain::LLM::AnthropicResponse)

Raises:

  • (ArgumentError)


# File 'lib/langchain/llm/anthropic.rb', line 49

def complete(
  prompt:,
  model: @defaults[:completion_model_name],
  max_tokens_to_sample: @defaults[:max_tokens_to_sample],
  stop_sequences: nil,
  temperature: @defaults[:temperature],
  top_p: nil,
  top_k: nil,
  metadata: nil,
  stream: nil
)
  raise ArgumentError.new("model argument is required") if model.empty?
  raise ArgumentError.new("max_tokens_to_sample argument is required") if max_tokens_to_sample.nil?

  parameters = {
    model: model,
    prompt: prompt,
    max_tokens_to_sample: max_tokens_to_sample,
    temperature: temperature
  }
  parameters[:stop_sequences] = stop_sequences if stop_sequences
  parameters[:top_p] = top_p if top_p
  parameters[:top_k] = top_k if top_k
  parameters[:metadata] = metadata if metadata
  parameters[:stream] = stream if stream

  # TODO: Implement token length validator for Anthropic
  # parameters[:max_tokens_to_sample] = validate_max_tokens(prompt, parameters[:completion_model_name])

  response = client.complete(parameters: parameters)
  Langchain::LLM::AnthropicResponse.new(response)
end
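
A similar sketch for the legacy text-completion path. The Human:/Assistant: prompt convention is an assumption about Anthropic's completions API (the wrapper does not add it for you), and the response accessor is likewise assumed:

response = llm.complete(
  prompt: "\n\nHuman: Say hello in French.\n\nAssistant:",  # assumed prompt convention for the legacy API
  max_tokens_to_sample: 64,
  temperature: 0.0
)
response.completion  # assumed accessor on Langchain::LLM::AnthropicResponse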